
Logistics
  Problem Set 1
    Due 10/13 at start of class
    Note: start the Robocode work
  Mailing List
    Sign up via the course web page

Robocode in PS1
  http://www.cs.washington.edu/education/courses/cse573/03au/slides/robocode3.avi

Tank-fighting simulator
  Human players code tanks in Java; ~4000 tank control programs online
  Directional "radar" sensor
    Must be pointed at an enemy to see it (but observations are noise-free)
  Actuators
    Moving and turning take time
    Gun must cool before firing
  No terrain effects
  Walled combat area
  Real time!
Adapted from a Jacob Eisenstein presentation

Inputs
  Self: position, velocity, heading, energy, gun heat
  Enemy: distance, bearing, heading, energy, velocity
Outputs
  Forward / backward
  Turn robot
  Turn radar
  Turn gun
  Fire (gun heat must be zero; variable power)
Adapted from a Jacob Eisenstein presentation

573 Topics
  Agency
  Problem Spaces
  Search
  Knowledge Representation
  Uncertainty
  Planning
  Supervised Learning
  Reinforcement Learning / MDPs
  Perception, NLP, Multi-agent, Robotics

Search
  Problem spaces
  Blind
    Depth-first, breadth-first, iterative-deepening, iterative broadening
  Informed
    Best-first, Dijkstra's, A*, IDA*, SMA*, DFB&B, beam, hill climbing, limited discrepancy, AO*, LAO*, RTDP
  Local search
  Heuristics
    Evaluation, construction, learning
    Pattern databases
  Online methods
  Techniques from operations research
  Constraint satisfaction
  Adversary search

Last Time
  Agents
  Problem Spaces
  Blind Search
    DFS
    BFS
    Iterative Deepening
    Iterative Broadening

Speed (assuming 10M nodes/sec and sufficient memory)

                  BFS                       Iterative Deepening
  8 Puzzle        10^5 nodes, 0.01 sec      10^5 nodes, 0.01 sec
  2x2x2 Rubik's   10^6 nodes, 0.2 sec       10^6 nodes, 0.2 sec
  15 Puzzle       10^13 nodes, 6 days       10^17 nodes, 20k yrs    (~1M x)
  3x3x3 Rubik's   10^19 nodes, 68k yrs      10^20 nodes, 574k yrs   (~8x)
  24 Puzzle       10^25 nodes, 12B yrs      10^37 nodes, 10^23 yrs

Why the difference? Rubik's has a higher branching factor, while the 15 puzzle
has greater depth and hence more duplicate nodes.
Adapted from a Richard Korf presentation

Model-Based Diagnosis
  [Circuit figure: a network of components A-E (multipliers and adders) with inputs 2 and 3;
   predicted outputs y=12 and z=12, but one output is observed to be 10]
  Model-based vs.
    a library of fault models
    a decision tree
  Uses a model of correct behavior
  Hypothesis generation, testing, discrimination

Improving Generation
  Enumerate components (or sets of them)
    Only if connected to a discrepancy
  Devices often have inputs & outputs
    The output of a suspect must be connected
    Not every input influences an output
  Information from multiple discrepancies
    Constraint reasoning
  [Figure: an OR gate with inputs a=1, b=0, c=1 and observed output 0]

Search for Robocode Controllers
  Nodes?
  Operators?
  Start state?
  Goal?

Best-first Search
  Generalization of breadth-first search
  Priority queue of nodes to be explored
  Cost function f(n) applied to each node
  Algorithm:
    Add the initial state to the priority queue
    While the queue is not empty
      node = head(queue)
      If goal?(node) then return node
      Add the children of node to the queue
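
A minimal Python sketch of the loop above. The `successors(state)` function (yielding (child_state, step_cost) pairs), the `is_goal` test, and the `Node` wrapper are illustrative assumptions, not from the slides; the later slides reduce to this routine by choosing different f.

    import heapq
    import itertools
    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class Node:
        state: Any
        g: float = 0.0                 # sum of edge costs from the start
        depth: int = 0                 # number of edges from the start
        parent: Optional["Node"] = None

    def best_first_search(start_state, successors, is_goal, f):
        """Pop the lowest-f node, return it if it is a goal, else push its children."""
        tie = itertools.count()        # tie-breaker so the heap never compares Nodes
        root = Node(start_state)
        frontier = [(f(root), next(tie), root)]
        while frontier:
            _, _, node = heapq.heappop(frontier)
            if is_goal(node.state):
                return node            # follow .parent links to recover the path
            for child_state, cost in successors(node.state):
                child = Node(child_state, node.g + cost, node.depth + 1, node)
                heapq.heappush(frontier, (f(child), next(tie), child))
        return None                    # queue exhausted: no solution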

Old Friends
  Breadth-first = best-first with f(n) = depth(n)
  Dijkstra's Algorithm = best-first with f(n) = g(n)
    where g(n) = sum of edge costs from start to n
    Space bound (stores all generated nodes)
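
Continuing the hypothetical `best_first_search`/`Node` sketch above, the two old friends differ only in the evaluation function passed in:

    f_breadth_first = lambda n: n.depth    # f(n) = depth(n): breadth-first order
    f_dijkstra      = lambda n: n.g        # f(n) = g(n): Dijkstra / uniform-cost order

    # e.g. goal = best_first_search(start, successors, is_goal, f_dijkstra)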

A* Search (Hart, Nilsson & Raphael, 1968)
  Best-first search with f(n) = g(n) + h(n)
    where g(n) = sum of edge costs from start to n
    and h(n) = estimate of the lowest-cost path from n to a goal
  If h(n) is admissible, then the search will find an optimal solution
    Admissible: underestimates the cost of any solution that can be reached from the node
  Space bound, since it must maintain the queue
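
In the same hypothetical sketch, A* is just another choice of f; the admissible heuristic `h(state)` is supplied by the caller (an assumption, since the slide does not fix one).

    def a_star(start_state, successors, is_goal, h):
        # Best-first with f(n) = g(n) + h(n); optimal when h never overestimates.
        return best_first_search(start_state, successors, is_goal,
                                 f=lambda n: n.g + h(n.state))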

Iterative-Deepening A*
  Like iterative-deepening depth-first, but...
    The depth bound is modified to be an f-limit
    Start with limit = h(start)
    Prune any node if f(node) > f-limit
    Next f-limit = minimum cost of any node pruned
  [Figure: search tree with f-limit contours FL=15 and FL=21]
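
A standalone IDA* sketch under the same assumed `successors`/`is_goal`/`h` interface. The cycle check along the current path is an addition for safety; as the analysis slide notes, IDA* still regenerates duplicates in cyclic graphs.

    import math

    def ida_star(start, successors, is_goal, h):
        """Depth-first probes bounded by an f-limit, which starts at h(start)
        and grows to the smallest f value pruned on the previous iteration."""
        def probe(state, g, f_limit, path):
            f = g + h(state)
            if f > f_limit:
                return None, f                        # pruned: report its f value
            if is_goal(state):
                return path, f
            next_limit = math.inf
            for child, cost in successors(state):
                if child in path:                     # skip cycles along the current path
                    continue
                found, value = probe(child, g + cost, f_limit, path + [child])
                if found is not None:
                    return found, value
                next_limit = min(next_limit, value)
            return None, next_limit

        f_limit = h(start)
        while True:
            solution, next_limit = probe(start, 0, f_limit, [start])
            if solution is not None:
                return solution
            if math.isinf(next_limit):
                return None                           # nothing left below any bound
            f_limit = next_limit                      # next f-limit = min pruned f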

IDA* Analysis
  Complete & optimal (as with A*)
  Space usage is linear in the depth of the solution
  Each iteration is DFS - no priority queue!
  # nodes expanded relative to A*
    Depends on the # of unique values of the heuristic function
    In the 8 puzzle: few values, so close to the # A* expands
    In the traveling salesman problem: each f value is unique,
      so 1+2+...+n = O(n^2) where n = # nodes A* expands;
      if n is too big for main memory, n^2 is too long to wait!
  Generates duplicate nodes in cyclic graphs

SMA*
  Problem with IDA*: the f-limit increases slowly
    Must do the iterative search again and again
    Stores little state between iterations
      Just one number: the next-highest contour level
  SMA*
    Uses all available memory to store state
    Keeps a bounded-size priority queue
    When it needs memory, drops the node with the highest f-cost
      In the ancestor of forgotten nodes, retains the cost of the best path
      that has been dropped; this focuses regeneration
    Duplicates minimal work
    Optimal in a number of nice ways

Depth-First Branch & Bound
  Single depth-first search, so it uses linear space
  Keep track of the best solution found so far
  If f(n) = g(n) + h(n) >= cost(best solution so far), then prune n
  Requires
    a finite search tree, or
    a good upper bound on the solution cost
  Generates duplicate nodes in cyclic graphs
Adapted from a Richard Korf presentation
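
A sketch of DFB&B under the same assumed interface: one depth-first pass that records the best solution cost seen so far and prunes any node whose f value cannot beat it.

    def depth_first_branch_and_bound(start, successors, is_goal, h):
        best_cost = float("inf")
        best_path = None

        def dfs(state, g, path):
            nonlocal best_cost, best_path
            if g + h(state) >= best_cost:         # prune: f(n) >= cost of best solution so far
                return
            if is_goal(state):
                best_cost, best_path = g, list(path)
                return
            for child, cost in successors(state):
                if child in path:                 # avoid cycles along the current path
                    continue
                path.append(child)
                dfs(child, g + cost, path)
                path.pop()

        dfs(start, 0, [start])
        return best_path, best_cost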

Beam Search
  Idea
    Best-first, but only keep the N best items on the priority queue
  Evaluation
    Complete?           No
    Time complexity?    O(b^d)
    Space complexity?   O(b + N)
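
A rough beam-search sketch; here `successors(state)` is assumed to yield child states directly (ignoring step costs) and `f` is the evaluation function to minimize.

    import heapq

    def beam_search(start, successors, is_goal, f, beam_width):
        """Best-first search whose queue is truncated to the beam_width
        lowest-f states after every expansion."""
        frontier = [start]
        while frontier:
            frontier.sort(key=f)
            state = frontier.pop(0)               # lowest-f state, as in best-first
            if is_goal(state):
                return state
            frontier.extend(successors(state))
            frontier = heapq.nsmallest(beam_width, frontier, key=f)   # keep only N best
        return None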

Hill Climbing ("Gradient ascent")
  Idea
    Always choose the best child; no backtracking
    Beam search with |queue| = 1
  Problems?
    Local maxima
    Plateaus
    Diagonal ridges
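
A hill-climbing sketch where `value(state)` is an assumed objective to maximize ("gradient ascent"); it simply stops at the first state that no child improves on.

    def hill_climb(start, successors, value):
        """Always move to the best child; no backtracking."""
        current = start
        while True:
            children = list(successors(current))
            if not children:
                return current
            best = max(children, key=value)
            if value(best) <= value(current):     # local maximum or plateau
                return current
            current = best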

Randomizing Hill Climbing
  Randomly disobey the heuristic
  Random restarts
  [Add something on heavy-tailed distributions]
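
One possible random-restart wrapper around the `hill_climb` sketch above; `random_state()` is an assumed generator of random starting states.

    def random_restart_hill_climb(random_state, successors, value, restarts=20):
        """Run hill climbing from several random starts; keep the best local maximum."""
        best = None
        for _ in range(restarts):
            candidate = hill_climb(random_state(), successors, value)
            if best is None or value(candidate) > value(best):
                best = candidate
        return best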

Simulated Annealing
  Objective: avoid local minima
  Technique:
    For the most part, use hill climbing
    When no improvement is possible
      Choose a random neighbor
      Let Δ be the decrease in quality
      Move to the neighbor with probability e^(-Δ/T)
    Reduce the "temperature" T over time
  Pros & cons
    Optimal? If T is decreased slowly enough, it will reach the optimal state
    Widely used
    See also WalkSAT
  [Figure: temperature schedule decreasing over time]
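
A simulated-annealing sketch in the standard form (a random neighbor is proposed every step, rather than only when hill climbing stalls); the schedule parameters are illustrative assumptions.

    import math
    import random

    def simulated_annealing(start, successors, value, t0=1.0, cooling=0.995, steps=10000):
        """Accept improving moves always; accept a worse move with probability e^(-delta/T)."""
        current, t = start, t0
        for _ in range(steps):
            neighbors = list(successors(current))
            if not neighbors:
                break
            candidate = random.choice(neighbors)
            delta = value(current) - value(candidate)          # decrease in quality (>0 means worse)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current = candidate
            t *= cooling                                       # reduce "temperature" over time
        return current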

Limited Discrepancy Search
  The discrepancy bound indicates how often to violate the heuristic
  Iteratively increase it...
  [Figure: binary search tree with nodes a-h; assume the heuristic says go left]
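
A simple "at most k discrepancies" variant of LDS; `successors(state)` is assumed to return children already ordered by the heuristic, so the first child follows the heuristic and any other child costs one discrepancy.

    def limited_discrepancy_search(root, successors, is_goal, max_discrepancies=3):
        def probe(state, discrepancies):
            if is_goal(state):
                return state
            children = list(successors(state))
            if not children:
                return None
            found = probe(children[0], discrepancies)      # follow the heuristic for free
            if found is not None:
                return found
            if discrepancies > 0:                          # spend one discrepancy to disobey it
                for child in children[1:]:
                    found = probe(child, discrepancies - 1)
                    if found is not None:
                        return found
            return None

        for bound in range(max_discrepancies + 1):         # iteratively raise the bound
            found = probe(root, bound)
            if found is not None:
                return found
        return None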

Genetic Algorithms
  Start with a random population
    Representation serialized (e.g., strings of digits)
    States are ranked with a "fitness function"
  Produce a new generation
    Select random pair(s): probability ~ fitness
    Randomly choose a "crossover point"; offspring mix halves
    Randomly mutate bits
  Crossover example (parents 174629844710 and 776511094281):
    offspring 174611094281 and 776529844710
  Mutation example:
    174611094281 -> 164611094281
    776529844710 -> 776029844210
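
A small GA sketch in the spirit of the slide, for individuals serialized as digit strings: selection is fitness-proportional (fitness assumed positive), crossover mixes halves at a random point, and digits mutate with a small probability. All parameters are illustrative assumptions.

    import random

    def genetic_algorithm(population, fitness, generations=100, mutation_rate=0.01):
        def select(pop, weights):
            return random.choices(pop, weights=weights, k=2)   # fitness-proportional pair

        def crossover(a, b):
            point = random.randrange(1, len(a))                # random crossover point
            return a[:point] + b[point:], b[:point] + a[point:]

        def mutate(ind):
            return "".join(random.choice("0123456789") if random.random() < mutation_rate else c
                           for c in ind)

        for _ in range(generations):
            weights = [fitness(ind) for ind in population]     # rank states by fitness
            next_gen = []
            while len(next_gen) < len(population):
                a, b = select(population, weights)
                for child in crossover(a, b):
                    next_gen.append(mutate(child))
            population = next_gen[:len(population)]
        return max(population, key=fitness)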

Properties
  Importance of careful representation