CMSC 671 Fall 2010, Thu 9/9/10: Uninformed Search (summary), Informed Search, Functional Programming (revisited). Prof. Laura Zavala, ITE 373

Functional Programming

Cool Things About Lisp
- Functions as objects (pass a function as an argument)
- Lambda expressions (construct a function on the fly)
- Lists as first-class objects
- Program as data
- Macros (smart expansion of expressions)
- Symbol manipulation

Functional Programming
- Decomposes a problem into a set of functions
- Has its roots in the lambda calculus, a formal system for function definition, function application, and recursion
- Functions as objects (pass a function as an argument)
- Lambda expressions (construct a function on the fly)
- Functions don't have any internal state (no reliance on global or mutable local variables)
- Recursion
- Truly functional programs are highly parallelizable
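As a minimal sketch of the first two bullets (not from the slides; APPLY-TWICE and ADD3 are names made up for this illustration):

; Passing a function as an argument: APPLY-TWICE takes any
; one-argument function F and applies it to X two times.
(defun apply-twice (f x)
  (funcall f (funcall f x)))

(defun add3 (n) (+ n 3))

(apply-twice #'add3 10)                 ; a named function passed by name => 16
(apply-twice #'(lambda (n) (* n n)) 2)  ; a function constructed on the fly => 16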

Functional Programming Languages
- A focus on LISt Processing (Lisp)
- Built-in map(), reduce(), and filter(), and the lambda operator
- Lisp, Scheme, Haskell, ML, OCaml, Clean, Mercury, Erlang, and a few others
- Python: procedural, object-oriented, and functional
- Clojure: a dynamically typed functional programming language that runs on the JVM and provides interoperability with Java

Lisp Map Function
- Apply a function to all elements of a list, and return the resulting list
- Lisp has a whole family of map functions: map, maplist, mapcar, etc.

> (mapcar #'(lambda (x) (+ x 10)) '(1 2 3))
(11 12 13)
> (mapcar #'list '(a b c) '(1 2 3))
((A 1) (B 2) (C 3))

Lisp Reduce Function
- Boils a sequence down to a single value
- The function supplied must be a function of two arguments

> (reduce #'fn '(a b c d))
is equivalent to
(fn (fn (fn 'a 'b) 'c) 'd)
> (reduce #'+ '(1 2 3))
6
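Two further uses of REDUCE, as a minimal sketch (not from the slides):

(reduce #'+ '(1 2 3 4) :initial-value 100)
; => 110   (:initial-value seeds the accumulation)

(reduce #'(lambda (acc x) (cons x acc)) '(a b c) :initial-value '())
; => (C B A)   (the accumulated value is passed as the first argument)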

Uninformed Search (summary)

Building goal-based agents
To build a goal-based agent we need to answer the following questions:
- What is the goal to be achieved?
- What are the actions?
- What relevant information is necessary to encode in order to describe the state of the world, describe the available transitions, and solve the problem?
The key ingredients are the initial state, the goal state, and the actions.

8 puzzle
- State: a 3 x 3 array configuration of the tiles on the board
- Operators: Blank Left, Blank Right, Blank Up, Blank Down
- Initial state: a particular configuration of the board
- Goal: a particular configuration of the board
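To make this formulation concrete, here is a minimal Common Lisp sketch (not from the slides; the list-of-9 state encoding and the function names are assumptions made for illustration):

; A state is a list of 9 tiles in row-major order; 0 marks the blank.
(defparameter *goal* '(1 2 3 4 5 6 7 8 0))

(defun blank-position (state)
  "Index (0-8) of the blank tile."
  (position 0 state))

(defun swap-tiles (state i j)
  "Return a new state with the tiles at positions I and J exchanged."
  (let ((new (copy-list state)))
    (rotatef (nth i new) (nth j new))
    new))

(defun successors (state)
  "All states reachable by sliding the blank one step."
  (let* ((b (blank-position state))
         (row (floor b 3))
         (col (mod b 3))
         (moves '()))
    (when (> col 0) (push (swap-tiles state b (- b 1)) moves))  ; Blank Left
    (when (< col 2) (push (swap-tiles state b (+ b 1)) moves))  ; Blank Right
    (when (> row 0) (push (swap-tiles state b (- b 3)) moves))  ; Blank Up
    (when (< row 2) (push (swap-tiles state b (+ b 3)) moves))  ; Blank Down
    moves))

(defun goal-p (state) (equal state *goal*))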

8 puzzle – Search Tree

Study this map of Romania. Note the distances between cities and try to calculate the shortest route between Arad and Bucharest.
(Figure: road map of Romania with city-to-city distances.)
Optimal route: 418 miles (Arad - Sibiu - Rimnicu - Pitesti - Bucharest).

Search Tree


Evaluating search strategies
- Completeness: guarantees finding a solution whenever one exists
- Time complexity: how long (worst or average case) does it take to find a solution? Usually measured in terms of the number of nodes expanded
- Space complexity: how much space (memory) is used by the algorithm? Usually measured in terms of the maximum size of the "nodes" list during the search
- Optimality/Admissibility: if a solution is found, is it guaranteed to be an optimal one, that is, the one with minimum cost?

Uninformed Search Methods

Breadth-First vs Depth-First (DFS)
- Breadth-First: exponential time and space, O(b^d); optimal if all step costs are the same
- Depth-First: exponential time, O(b^d); linear space, O(bd); may not terminate
- Uniform-Cost (UCS) and Iterative Deepening: complete and optimal; iterative deepening also solves the infinite-path problem
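These strategies can be seen as one generic search that differs only in where newly generated nodes are placed on the nodes list. A minimal Common Lisp sketch (not from the slides; it uses the SUCCESSORS and GOAL-P functions sketched for the 8-puzzle above, and does no repeated-state checking, which is why depth-first may not terminate):

(defun tree-search (start successors goal-p combiner)
  "Return the first goal node found, or NIL.
COMBINER merges newly generated nodes with the rest of the frontier."
  (let ((frontier (list start)))
    (loop
      (when (null frontier) (return nil))
      (let ((node (pop frontier)))
        (when (funcall goal-p node) (return node))
        (setf frontier (funcall combiner (funcall successors node) frontier))))))

; Depth-first: new nodes go to the front of the nodes list.
(defun dfs (start successors goal-p)
  (tree-search start successors goal-p #'append))

; Breadth-first: new nodes go to the back of the nodes list.
(defun bfs (start successors goal-p)
  (tree-search start successors goal-p
               #'(lambda (new old) (append old new))))

; Example: (bfs '(1 2 3 4 5 6 7 0 8) #'successors #'goal-p)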

Bi-directional search
- Alternate searching from the start state toward the goal and from the goal state toward the start.
- Stop when the frontiers intersect.
- Works well only when there are unique start and goal states.
- Requires the ability to generate "predecessor" states.
- Can (sometimes) lead to finding a solution more quickly.

Uninformed Search Results

Example for illustrating uninformed search strategies
(Figure: a small graph with start node S, goal node G, and intermediate nodes A, B, C, D, E; edge costs are shown in the figure.)

Depth-First Search (each node is shown with its path cost)
Expanded node    Nodes list
(none yet)       { S:0 }
S:0              { A:3 B:1 C:8 }
A:3              { D:6 E:10 G:18 B:1 C:8 }
D:6              { E:10 G:18 B:1 C:8 }
E:10             { G:18 B:1 C:8 }
G:18             { B:1 C:8 }
Solution path found is S A G, cost 18
Number of nodes expanded (including goal node) = 5

Breadth-First Search
Expanded node    Nodes list
(none yet)       { S:0 }
S:0              { A:3 B:1 C:8 }
A:3              { B:1 C:8 D:6 E:10 G:18 }
B:1              { C:8 D:6 E:10 G:18 G:21 }
C:8              { D:6 E:10 G:18 G:21 G:13 }
D:6              { E:10 G:18 G:21 G:13 }
E:10             { G:18 G:21 G:13 }
G:18             { G:21 G:13 }
Solution path found is S A G, cost 18
Number of nodes expanded (including goal node) = 7

Uniform-Cost Search
Expanded node    Nodes list
(none yet)       { S:0 }
S:0              { B:1 A:3 C:8 }
B:1              { A:3 C:8 G:21 }
A:3              { D:6 C:8 E:10 G:18 G:21 }
D:6              { C:8 E:10 G:18 G:21 }
C:8              { E:10 G:13 G:18 G:21 }
E:10             { G:13 G:18 G:21 }
G:13             { G:18 G:21 }
Solution path found is S B G, cost 13
Number of nodes expanded (including goal node) = 7

How they perform
- Depth-First Search: expanded nodes S A D E G; solution found S A G (cost 18)
- Breadth-First Search: expanded nodes S A B C D E G; solution found S A G (cost 18)
- Uniform-Cost Search: expanded nodes S B A D C E G; solution found S B G (cost 13). This is the only uninformed search that takes costs into account.
- Iterative-Deepening Search: nodes expanded S; then S A B C; then S A D E G; solution found S A G (cost 18)

Comparing Search Strategies

Avoiding Repeated States
In increasing order of effectiveness in reducing the size of the state space, and with increasing computational cost:
1. Do not return to the state you just came from.
2. Do not create paths with cycles in them.
3. Do not generate any state that was ever created before (sketched below).
The net effect depends on the frequency of "loops" in the state space.
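A minimal Common Lisp sketch of option 3 (not from the slides): the TREE-SEARCH loop from the earlier sketch, plus a hash table of every state generated so far.

(defun graph-search (start successors goal-p combiner &key (test #'equal))
  "Like TREE-SEARCH, but never puts a previously generated state
back on the frontier (option 3 above)."
  (let ((seen (make-hash-table :test test))
        (frontier (list start)))
    (setf (gethash start seen) t)
    (loop
      (when (null frontier) (return nil))
      (let ((node (pop frontier)))
        (when (funcall goal-p node) (return node))
        (let ((new (remove-if #'(lambda (s) (gethash s seen))
                              (funcall successors node))))
          (dolist (s new) (setf (gethash s seen) t))
          (setf frontier (funcall combiner new frontier)))))))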

Informed Search Methods

Informed search strategy
- Uses problem-specific knowledge to assess whether one non-goal state is "more promising" than another.
- The knowledge and strategy used for the assessment are known as a heuristic.

Heuristic
Merriam-Webster's Online Dictionary: Heuristic (pron. \hyu-'ris-tik\): adj. [from Greek heuriskein, to discover.] involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods.
The Free On-line Dictionary of Computing (15Feb98): heuristic 1. A rule of thumb, simplification or educated guess that reduces or limits the search for solutions in domains that are difficult and poorly understood. Unlike algorithms, heuristics do not guarantee feasible solutions and are often used with no theoretical guarantee. 2. Approximation algorithm.
WordNet (r) 1.6: heuristic adj 1: (computer science) relating to or using a heuristic rule 2: of or relating to a general formulation that serves to guide investigation [ant: algorithmic] n: a commonsense rule (or set of rules) intended to increase the probability of solving some problem [syn: heuristic rule, heuristic program]

Heuristic function
- Define a heuristic function h(n) that estimates the "goodness" of a node n.
- Specifically, h(n) = estimated cost (or distance) of the minimal-cost path from n to a goal state.
- The heuristic function is an estimate of how close we are to a goal, based on domain-specific information that is computable from the current state description.

Examples of Heuristics
Examples:
- Missionaries and Cannibals: number of people on the starting river bank
- 8-puzzle: number of tiles out of place
- 8-puzzle: sum of the distances each tile is from its goal position (Manhattan distance); see the sketch below
In general:
- h(n) ≥ 0 for all nodes n
- h(n) = 0 implies that n is a goal node
- h(n) = ∞ implies that n is a dead-end that can never lead to a goal
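A minimal Common Lisp sketch of the two 8-puzzle heuristics (not from the slides; it assumes the list-of-9 state encoding from the earlier 8-puzzle sketch, with 0 as the blank):

(defun misplaced-tiles (state goal)
  "Number of non-blank tiles that are not in their goal position."
  (loop for tile in state
        for goal-tile in goal
        count (and (/= tile 0) (/= tile goal-tile))))

(defun manhattan-distance (state goal)
  "Sum over all non-blank tiles of the row distance plus the column
distance between the tile's position and its goal position."
  (loop for tile in state
        for pos from 0
        unless (= tile 0)
          sum (let ((goal-pos (position tile goal)))
                (+ (abs (- (floor pos 3) (floor goal-pos 3)))
                   (abs (- (mod pos 3) (mod goal-pos 3)))))))

; Example: (manhattan-distance '(1 2 3 4 5 6 7 0 8) '(1 2 3 4 5 6 7 8 0)) => 1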

Best-first search
- Order nodes on the nodes list by increasing value of an evaluation function f(n)
- f(n) incorporates domain-specific information in some way
- It incorporates a heuristic function h(n) as a component of f
- This is a generic way of referring to the class of informed methods
- We get different searches depending on the evaluation function f(n); a sketch follows below
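As a minimal sketch (not from the slides), best-first search is the generic TREE-SEARCH above with a combiner that keeps the nodes list sorted by f:

(defun best-first-search (start successors goal-p f)
  "Always expand the frontier node with the smallest value of F."
  (tree-search start successors goal-p
               #'(lambda (new old)
                   (sort (append new old) #'< :key f))))

; Greedy search is the special case f(n) = h(n), e.g.
; (best-first-search start #'successors #'goal-p #'h)  ; h assumed defined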

Greedy search
- Evaluates nodes using just the heuristic function: f(n) = h(n)
- Selects for expansion the node believed to be closest (hence "greedy") to a goal node, i.e., the node with the smallest f value
- Not complete
- Not admissible

Greedy Search h = Straight Line Distance

Algorithm A*
- Use as an evaluation function f(n) = g(n) + h(n)
- g(n) = cost of the minimal-cost path from the start state to state n
- The g(n) term adds a "breadth-first" component to the evaluation function
- Ranks nodes on the search frontier by the estimated cost of a solution from the start node through the given node to the goal
(Figure: a small graph S, A, B, C, D, G in which g(D) = 4 and h(D) = 9, so C is chosen next to expand.)
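A minimal Common Lisp sketch of A* (not from the slides; it reuses BEST-FIRST-SEARCH from the previous sketch and wraps states in a small node structure so that g(n), the cost of the path so far, can be tracked; repeated states are not handled):

(defstruct node state (g 0) parent)

(defun a-star (start-state successors-with-costs goal-state-p h)
  "SUCCESSORS-WITH-COSTS maps a state to a list of (state . step-cost)
pairs; H maps a state to the estimated remaining cost."
  (best-first-search
   (make-node :state start-state)
   ; successors: build child nodes, accumulating g along the path
   #'(lambda (n)
       (mapcar #'(lambda (pair)
                   (make-node :state (car pair)
                              :g (+ (node-g n) (cdr pair))
                              :parent n))
               (funcall successors-with-costs (node-state n))))
   ; goal test on the wrapped state
   #'(lambda (n) (funcall goal-state-p (node-state n)))
   ; f(n) = g(n) + h(n)
   #'(lambda (n) (+ (node-g n) (funcall h (node-state n))))))

; The solution path can be read off by following PARENT links from the
; returned node back to the start.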

Evaluating heuristics
- Admissible heuristic: never overestimates the cost to reach the goal
- SLD: the shortest path between any two points is a straight line, so it cannot be an overestimate
- Consistency: the estimated cost of reaching the goal from n is no greater than the step cost from n to n' plus the estimated cost of reaching the goal from n', i.e., h(n) <= c(n, a, n') + h(n')

Algorithm A*
- Complete whenever the branching factor is finite and every operator has a fixed positive cost
- Optimally efficient for any given consistent heuristic

A* in action
Remember that A* combines two evaluation components. In the following demonstration, A* evaluates the cost of the path so far (as in UCS) together with an admissible heuristic function based on the straight-line distance (SLD) to the goal location, such that
h_SLD(n) = straight-line distance between n and the goal location.
The following is a table of the straight-line distances between some of the major Romanian cities and the goal state, Bucharest.

Straight Line Distances to Bucharest

Town        SLD    Town        SLD
Arad        366    Mehadia     241
Bucharest     0    Neamt       234
Craiova     160    Oradea      380
Dobreta     242    Pitesti      98
Eforie      161    Rimnicu     193
Fagaras     178    Sibiu       253
Giurgiu      77    Timisoara   329
Hirsova     151    Urziceni     80
Iasi        226    Vaslui      199
Lugoj       244    Zerind      374

We can use straight-line distances as an admissible heuristic because they never overestimate the cost to the goal: there is no shorter distance between two cities than the straight-line distance.

A* search of the Romanian map (animation)
Note: throughout the animation every node is labelled with f(n) = g(n) + h(n); the abbreviations f, g, and h are used to keep the notation simple.
- We begin with the initial state, Arad. The cost of reaching Arad from Arad (its g value) is 0 miles, and the straight-line distance from Arad to Bucharest (its h value) is 366 miles, giving f = g + h = 366 miles. We expand Arad.
- Once Arad is expanded we look for the node with the lowest f. Sibiu has the lowest value: the cost to reach Sibiu from Arad is 140 miles, and the straight-line distance from Sibiu to the goal state is 253 miles, for a total of 393 miles (Timisoara has f = 447 and Zerind f = 449). We expand Sibiu.
- Expanding Sibiu produces Fagaras (f = 417), Oradea (f = 671), and Rimnicu (f = 413). We next expand Rimnicu, the node with the lowest f.
- Expanding Rimnicu produces Pitesti (f = 415) and Craiova (f = 526). Pitesti now has the lowest f: the cost to reach Pitesti from Arad is 317 miles, and the straight-line distance from Pitesti to the goal state is 98 miles, for a total of 415 miles. We expand Pitesti.
- Expanding Pitesti reveals Bucharest with f = 418. But there is still a cheaper node on the frontier, Fagaras with f = 417, so we must expand it in case it leads to a better solution than the 418 one already found. In fact the algorithm does not yet "recognise" that Bucharest has been found; it simply keeps expanding the lowest-f node until a goal state is itself the lowest-f node.
- Expanding Fagaras produces a second Bucharest node with f = 450. The frontier now contains two Bucharest nodes: Arad - Sibiu - Rimnicu - Pitesti - Bucharest with f = 418, and Arad - Sibiu - Fagaras - Bucharest with f = 450. We therefore move to the first Bucharest node and expand it.
- We have now arrived at Bucharest. As this is both the lowest-cost node and the goal state, we can terminate the search. The solution returned by A*, Arad - Sibiu - Rimnicu - Pitesti - Bucharest, is in fact the optimal solution.

Features of an A* Search
- A* is optimal and complete, but it is not all good news. It can be shown that for most problems the number of nodes searched is still exponential in the length of the solution. This has implications not only for the time taken to perform the search but also for the space required; of the two, the space requirement is the more serious problem (hence the memory-bounded variants on the next slide).
- If you examine the animation on the previous slide you will notice an interesting phenomenon: along any path from the root, the f-cost never decreases. This is no accident; it holds whenever the heuristic is consistent, and a heuristic for which it holds is said to exhibit monotonicity.
- Because A* expands the leaf node of lowest f, an A* search fans out from the start node, adding nodes in concentric bands of increasing f-cost. These bands can be modelled as contours in the state space.

Dealing with hard problems
- For large problems, A* often requires too much space.
- Two variations conserve memory: IDA* and SMA*.
- IDA* (iterative deepening A*): uses successive iterations with growing limits on f (a sketch follows below). For example:
  - A*, but don't consider any node n where f(n) > 10
  - A*, but don't consider any node n where f(n) > 20
  - A*, but don't consider any node n where f(n) > 30, ...
- SMA* (Simplified Memory-Bounded A*): uses a queue of restricted size to limit memory use; throws away the "oldest" worst node when memory is full.
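A minimal Common Lisp sketch of the IDA* idea (not from the slides; it reuses the NODE structure and the (state . step-cost) successor convention from the A* sketch, assumes integer costs, and ignores repeated states):

(defun ida-star (start-state successors-with-costs goal-state-p h)
  "Repeated depth-first searches with a growing limit on f = g + h."
  (labels ((f (node) (+ (node-g node) (funcall h (node-state node))))
           (expand (node)
             (mapcar #'(lambda (pair)
                         (make-node :state (car pair)
                                    :g (+ (node-g node) (cdr pair))
                                    :parent node))
                     (funcall successors-with-costs (node-state node))))
           (contour-search (node limit)
             ; Return a goal node, or the smallest f value that exceeded
             ; the limit (used as the next iteration's limit).
             (let ((fn (f node)))
               (cond ((> fn limit) fn)
                     ((funcall goal-state-p (node-state node)) node)
                     (t (let ((next-limit most-positive-fixnum))
                          (dolist (child (expand node) next-limit)
                            (let ((result (contour-search child limit)))
                              (if (node-p result)
                                  (return result)
                                  (setf next-limit (min next-limit result)))))))))))
    (let ((root (make-node :state start-state))
          (limit (funcall h start-state)))
      (loop
        (let ((result (contour-search root limit)))
          (cond ((node-p result) (return result))
                ((= result most-positive-fixnum) (return nil)) ; no solution
                (t (setf limit result))))))))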

What's a good heuristic?
- If h1(n) < h2(n) ≤ h*(n) for all n, then h2 is better than (dominates) h1
- Relaxing the problem: remove constraints to create a (much) easier problem; use the solution cost of the relaxed problem as the heuristic function
- Combining heuristics: take the max of several admissible heuristics; the result is still admissible, and it's better! (see the sketch below)
- Identify good features, then use a learning algorithm to find a heuristic function (may lose admissibility)
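A short sketch of the combining idea (not from the slides), using the two 8-puzzle heuristics sketched earlier as the components:

(defun combined-heuristic (state goal)
  "The max of several admissible heuristics is itself admissible and
dominates each of its components."
  (max (misplaced-tiles state goal)
       (manhattan-distance state goal)))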

Summary: Informed search
- Best-first search is general search where the minimum-cost nodes (according to some measure) are expanded first.
- Greedy search uses the minimal estimated cost to the goal state, h(n), as the measure. This reduces the search time, but the algorithm is neither complete nor optimal.
- A* search combines uniform-cost search and greedy search: f(n) = g(n) + h(n), where repeated states are handled and h(n) never overestimates.
- A* is complete and optimal, but its space complexity is high.
- The time complexity depends on the quality of the heuristic function.
- IDA* and SMA* reduce the memory requirements of A*.

Thanks for coming -- see you next Tuesday!