
CS B551: ELEMENTS OF ARTIFICIAL INTELLIGENCE Instructor: Kris Hauser 1

RECAP Blind search: breadth-first (complete, optimal); depth-first (incomplete, not optimal); iterative deepening (complete, optimal). Nonuniform costs. Revisited states 2

THREE SEARCH ALGORITHMS Search #1: unit costs, with or without detecting revisited states. Search #2: non-unit costs, not detecting revisited states; goal test when a node is expanded, not when it is generated. Search #3: non-unit costs, detecting revisited states; when two nodes in the fringe share the same state, the one with the lowest path cost is kept 3

HEURISTIC SEARCH 4

BLIND VS. HEURISTIC STRATEGIES Blind (or uninformed) strategies do not exploit state descriptions to order FRINGE; they only exploit the positions of the nodes in the search tree. Heuristic (or informed) strategies exploit state descriptions to order FRINGE (the most promising nodes are placed at the beginning of FRINGE) 5

EXAMPLE 6 For a blind strategy, N1 and N2 are just two nodes (at some position in the search tree). [Figure: two 8-puzzle states N1 and N2, and the goal state]

EXAMPLE 7 For a heuristic strategy counting the number of misplaced tiles, N2 is more promising than N1. [Figure: the same two states N1 and N2, and the goal state]

BEST-FIRST SEARCH It exploits the state description to estimate how good each search node is. An evaluation function f maps each node N of the search tree to a real number f(N) ≥ 0. [Traditionally, f(N) is an estimated cost; so the smaller f(N), the more promising N.] Best-first search sorts FRINGE in increasing f. [An arbitrary order is assumed among nodes with equal f.] 8

BEST-FIRST SEARCH It exploits the state description to estimate how good each search node is. An evaluation function f maps each node N of the search tree to a real number f(N) ≥ 0. [Traditionally, f(N) is an estimated cost; so the smaller f(N), the more promising N.] Best-first search sorts FRINGE in increasing f. [An arbitrary order is assumed among nodes with equal f.] 9 Note: best does not refer to the quality of the generated path; best-first search does not generate optimal paths in general.
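To make the sorted-fringe idea concrete, here is a minimal best-first search sketch in Python (not from the slides). The successors(state) interface, which yields (child_state, step_cost) pairs, and all function names are assumptions made for this illustration.

```python
import heapq
import itertools

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search: always expand the fringe node with the
    smallest evaluation f(state, path_cost).  `successors(s)` yields
    (child_state, step_cost) pairs.  Returns a path (list of states) or None."""
    counter = itertools.count()   # tie-breaker so the heap never compares states
    fringe = [(f(start, 0.0), next(counter), start, 0.0, [start])]
    while fringe:
        _, _, state, g, path = heapq.heappop(fringe)
        if goal_test(state):
            return path
        for child, cost in successors(state):
            g_child = g + cost
            heapq.heappush(fringe, (f(child, g_child), next(counter),
                                    child, g_child, [*path, child]))
    return None
```

Passing f(s, g) = h(s) gives the greedy strategy discussed on the next slide, while f(s, g) = g + h(s) gives the A* ordering introduced later; as written, the sketch does not detect revisited states.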

HOW TO CONSTRUCT f? Typically, f(N) estimates either (a) the cost of a solution path through N, in which case f(N) = g(N) + h(N), where g(N) is the cost of the path from the initial node to N and h(N) is an estimate of the cost of a path from N to a goal node, or (b) the cost of a path from N to a goal node, in which case f(N) = h(N) (greedy best-first search). But there are no limitations on f: any function of your choice is acceptable. But will it help the search algorithm? 10

HOW TO CONSTRUCT f? Typically, f(N) estimates either (a) the cost of a solution path through N, in which case f(N) = g(N) + h(N), where g(N) is the cost of the path from the initial node to N and h(N) is an estimate of the cost of a path from N to a goal node, or (b) the cost of a path from N to a goal node, in which case f(N) = h(N) (greedy best-first search). But there are no limitations on f: any function of your choice is acceptable. But will it help the search algorithm? 11 Here h is called the heuristic function.

HEURISTIC FUNCTION The heuristic function h(N) ≥ 0 estimates the cost to go from STATE(N) to a goal state. Its value is independent of the current search tree; it depends only on STATE(N) and the goal test GOAL?. Example: h1(N) = number of misplaced numbered tiles = 6. [Why is it an estimate of the distance to the goal?] [Figure: STATE(N) and the goal state]

OTHER EXAMPLES h1(N) = number of misplaced numbered tiles = 6. h2(N) = sum of the (Manhattan) distances of every numbered tile to its goal position = 13. h3(N) = sum of permutation inversions = n5 + n8 + n4 + n2 + n1 + n7 + n3 + n6 = 16. [Figure: STATE(N) and the goal state]
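A small sketch of the first two heuristics, under the assumption that a state is a tuple of 9 entries listed row by row with 0 for the empty tile; the goal layout shown in the GOAL constant is only an example, not the one pictured on the slide.

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout, row by row; 0 = empty tile

def h1(state, goal=GOAL):
    """Number of misplaced numbered tiles (the empty tile is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Sum of Manhattan distances of every numbered tile to its goal position."""
    pos = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = i // 3, i % 3
            gr, gc = pos[tile]
            total += abs(r - gr) + abs(c - gc)
    return total
```

Either heuristic can be plugged into the best-first sketch above, e.g. as f(s, g) = h1(s) or f(s, g) = g + h2(s).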

8-PUZZLE f(N) = h(N) = number of misplaced numbered tiles. The white tile is the empty tile.

8-PUZZLE f(N) = g(N) + h(N) with h(N) = number of misplaced numbered tiles

8-PUZZLE f(N) = h(N) = sum of the distances of the numbered tiles to their goals

ROBOT NAVIGATION 17 The straight-line distance √((xN − xg)² + (yN − yg)²) (L2 or Euclidean distance) and h2(N) = |xN − xg| + |yN − yg| (L1 or Manhattan distance), where (xN, yN) is the position of node N and (xg, yg) is the goal position.
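The two distance heuristics in code, assuming nodes are (x, y) grid coordinates (a sketch, not the slides' notation):

```python
import math

def straight_line_distance(n, goal):
    """L2 (Euclidean) distance between grid positions n = (x, y) and goal."""
    return math.hypot(n[0] - goal[0], n[1] - goal[1])

def manhattan_distance(n, goal):
    """L1 (Manhattan) distance: h2(N) = |xN - xg| + |yN - yg|."""
    return abs(n[0] - goal[0]) + abs(n[1] - goal[1])
```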

BEST-FIRST EFFICIENCY 18 f(N) = h(N) = straight-line distance to the goal. Local-minimum problem.

ADMISSIBLE HEURISTIC Let h*(N) be the cost of the optimal path from N to a goal node. The heuristic function h(N) is admissible if 0 ≤ h(N) ≤ h*(N). An admissible heuristic function is always optimistic!

ADMISSIBLE HEURISTIC Let h*(N) be the cost of the optimal path from N to a goal node. The heuristic function h(N) is admissible if 0 ≤ h(N) ≤ h*(N). An admissible heuristic function is always optimistic! 20 If G is a goal node, then h(G) = 0.

8-PUZZLE HEURISTICS h1(N) = number of misplaced tiles = 6 is ??? [Figure: STATE(N) and the goal state]

8-PUZZLE HEURISTICS h1(N) = number of misplaced tiles = 6 is admissible. h2(N) = sum of the (Manhattan) distances of every tile to its goal position = 13 is ??? [Figure: STATE(N) and the goal state]

8-PUZZLE HEURISTICS h1(N) = number of misplaced tiles = 6 is admissible. h2(N) = sum of the (Manhattan) distances of every tile to its goal position = 13 is admissible. h3(N) = sum of permutation inversions = 16 is ??? [Figure: STATE(N) and the goal state]

8-PUZZLE HEURISTICS h1(N) = number of misplaced tiles = 6 is admissible. h2(N) = sum of the (Manhattan) distances of every tile to its goal position = 13 is admissible. h3(N) = sum of permutation inversions = 16 is not admissible. [Figure: STATE(N) and the goal state]

ROBOT NAVIGATION HEURISTICS 25 Cost of one horizontal/vertical step = 1; cost of one diagonal step = √2. The straight-line (Euclidean) distance to the goal is admissible.

ROBOT NAVIGATION HEURISTICS 26 Cost of one horizontal/vertical step = 1; cost of one diagonal step = √2. h2(N) = |xN − xg| + |yN − yg| is ???

ROBOT NAVIGATION HEURISTICS 27 Cost of one horizontal/vertical step = 1; cost of one diagonal step = √2. h2(N) = |xN − xg| + |yN − yg| is admissible if moving along diagonals is not allowed, and not admissible otherwise: h*(I) = 4√2 < h2(I) = 8.

HOW TO CREATE AN ADMISSIBLE h? An admissible heuristic can usually be seen as the cost of an optimal solution to a relaxed problem (one obtained by removing constraints). In robot navigation: the Manhattan distance corresponds to removing the obstacles; the Euclidean distance corresponds to removing both the obstacles and the constraint that the robot moves on a grid. More on this topic later 28

A* SEARCH (MOST POPULAR ALGORITHM IN AI) f(N) = g(N) + h(N), where: g(N) = cost of the best path found so far to N; h(N) = an admissible heuristic function; for all arcs, c(N,N′) > 0; the SEARCH#2 algorithm is used. Best-first search is then called A* search 29
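A minimal A* sketch matching this setup (Search #2 behavior: goal test at expansion, revisited states not detected). The successors/goal_test interface is the same assumption as in the earlier best-first sketch.

```python
import heapq
import itertools

def a_star(start, goal_test, successors, h):
    """A*: best-first search ordered by f(N) = g(N) + h(N), h admissible.
    Revisited states are NOT detected (Search #2), so a state may be expanded
    several times; the first goal node expanded still yields an optimal path."""
    counter = itertools.count()                       # tie-breaker for the heap
    fringe = [(h(start), next(counter), start, 0.0, [start])]
    while fringe:
        f, _, state, g, path = heapq.heappop(fringe)
        if goal_test(state):                          # goal test at expansion time
            return path, g
        for child, cost in successors(state):         # assumes all arc costs > 0
            g2 = g + cost
            heapq.heappush(fringe, (g2 + h(child), next(counter),
                                    child, g2, [*path, child]))
    return None, float("inf")
```

For example, with the 8-puzzle heuristics sketched earlier, a_star(start_state, lambda s: s == GOAL, successors, h2) would return an optimal move sequence, assuming a successors function for the puzzle.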

8-PUZZLE f(N) = g(N) + h(N) with h(N) = number of misplaced numbered tiles

ROBOT NAVIGATION 31

ROBOT NAVIGATION f(N) = h(N), with h(N) = Manhattan distance to the goal (not A*)

ROBOT NAVIGATION f(N) = h(N), with h(N) = Manhattan distance to the goal (not A*). [Figure: grid cells labeled with their h-values]

ROBOT NAVIGATION 34 f(N) = g(N) + h(N), with h(N) = Manhattan distance to the goal (A*)

RESULT #1 A* is complete and optimal. [This result holds if nodes revisiting states are not discarded.] 35

PROOF (1/2) If a solution exists, A* terminates and returns a solution 36 - For each node N on the fringe, f(N) = g(N) + h(N) ≥ g(N) ≥ d(N)·ε, where d(N) is the depth of N in the tree and ε > 0 is the smallest arc cost

PROOF (1/2) If a solution exists, A* terminates and returns a solution 37 - For each node N on the fringe, f(N) = g(N) + h(N) ≥ g(N) ≥ d(N)·ε, where d(N) is the depth of N in the tree and ε > 0 is the smallest arc cost - As long as A* hasn't terminated, a node K on the fringe lies on a solution path

PROOF (1/2) If a solution exists, A* terminates and returns a solution 38 - For each node N on the fringe, f(N) = g(N) + h(N) ≥ g(N) ≥ d(N)·ε, where d(N) is the depth of N in the tree and ε > 0 is the smallest arc cost - As long as A* hasn't terminated, a node K on the fringe lies on a solution path - Since each node expansion increases the length of one path, K will eventually be selected for expansion, unless a solution is found along another path

PROOF (2/2) Whenever A* chooses to expand a goal node, the path to this node is optimal 39 - C* = cost of the optimal solution path - G: non-optimal goal node in the fringe; f(G) = g(G) + h(G) = g(G) > C* - A node K in the fringe lies on an optimal path: f(K) = g(K) + h(K) ≤ C* - So, G will not be selected for expansion before K

COMPLEXITY OF A* A* expands all nodes with f(N) < C*. A* may expand non-solution nodes with f(N) = C*. This may be an exponential number of nodes unless the heuristic is sufficiently accurate (within O(log h*(N)) of h*(N)). When a problem has no solution, A* runs forever if the state space is infinite or if states can be revisited an arbitrary number of times; in other cases, it may take a huge amount of time to terminate 40

WHAT TO DO WITH REVISITED STATES? 41 [Figure: a small graph annotated with arc costs c and heuristic values h] The heuristic h is clearly admissible

WHAT TO DO WITH REVISITED STATES? 42 [Figure: the same graph annotated with the f-values of the generated nodes] If we discard this new node, then the search algorithm expands the goal node next and returns a non-optimal solution

It is not harmful to discard a node revisiting a state if the cost of the new path to this state is ≥ the cost of the previous path [so, in particular, one can discard a node if it revisits a state already visited by one of its ancestors]. A* remains optimal, but states can still be revisited multiple times [the size of the search tree can still be exponential in the number of visited states]. Fortunately, for a large family of admissible heuristics (consistent heuristics) there is a much more efficient way to handle revisited states 43

CONSISTENT HEURISTIC An admissible heuristic h is consistent (or monotone) if for each node N and each child N′ of N: h(N) ≤ c(N,N′) + h(N′) (triangle inequality) 44 Intuition: a consistent heuristic becomes more precise as we get deeper in the search tree

CONSISTENCY VIOLATION 45 [Figure: h(N) = 100, c(N,N′) = 10, h(N′) = 10, violating the triangle inequality] If h tells us that N is 100 units from the goal, then moving from N along an arc costing 10 units should not lead to a node N′ that h estimates to be 10 units away from the goal

CONSISTENT HEURISTIC (ALTERNATIVE DEFINITION) A heuristic h is consistent (or monotone) if 1. for each node N and each child N′ of N: h(N) ≤ c(N,N′) + h(N′) (triangle inequality), and 2. for each goal node G: h(G) = 0. 46 A consistent heuristic is also admissible
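For a finite search graph, the alternative definition can be checked directly; a small sketch (the arcs/h/goals containers are assumptions of this example):

```python
def is_consistent(arcs, h, goals):
    """Check the alternative definition of consistency on a finite graph.
    arcs: iterable of (n, n_child, cost) triples; h: dict mapping each node to
    its heuristic value; goals: set of goal nodes."""
    if any(h[g] != 0 for g in goals):
        return False                     # condition 2: h(G) = 0 for every goal node
    return all(h[n] <= c + h[n2]         # condition 1: triangle inequality on every arc
               for n, n2, c in arcs)
```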

ADMISSIBILITY AND CONSISTENCY A consistent heuristic is also admissible. An admissible heuristic may not be consistent, but many admissible heuristics are consistent 47

8-PUZZLE [Figure: STATE(N) and the goal state] h1(N) = number of misplaced tiles and h2(N) = sum of the (Manhattan) distances of every tile to its goal position are both consistent (why?). Recall: h(N) ≤ c(N,N′) + h(N′)

ROBOT NAVIGATION 49 Cost of one horizontal/vertical step = 1; cost of one diagonal step = √2. The straight-line (Euclidean) distance to the goal is consistent. h2(N) = |xN − xg| + |yN − yg| is consistent if moving along diagonals is not allowed, and not consistent otherwise. Recall: h(N) ≤ c(N,N′) + h(N′)

RESULT #2 If h is consistent, then whenever A* expands a node, it has already found an optimal path to this node's state 50

PROOF (1/2) 1. Consider a node N and its child N′. Since h is consistent, h(N) ≤ c(N,N′) + h(N′), so f(N) = g(N) + h(N) ≤ g(N) + c(N,N′) + h(N′) = g(N′) + h(N′) = f(N′). So f is non-decreasing along any path 51

PROOF (2/2) 2. If a node K is selected for expansion, then any other node N in the fringe verifies f(N) ≥ f(K). If another path to the state of K goes through a fringe node N and reaches that state at a node N′, the cost of this other path is no smaller than that of the path to K: f(N′) ≥ f(N) ≥ f(K) and h(N′) = h(K), so g(N′) ≥ g(K) 52

PROOF (2/2) 2. If a node K is selected for expansion, then any other node N in the fringe verifies f(N) ≥ f(K). If another path to the state of K goes through a fringe node N and reaches that state at a node N′, the cost of this other path is no smaller than that of the path to K: f(N′) ≥ f(N) ≥ f(K) and h(N′) = h(K), so g(N′) ≥ g(K) 53 Result #2: if h is consistent, then whenever A* expands a node, it has already found an optimal path to this node's state

IMPLICATION OF RESULT #2 54 [Figure: two nodes N and N2 whose states are the same state S] The path to N is the optimal path to S, so N2 can be discarded

REVISITED STATES WITH CONSISTENT HEURISTIC (SEARCH #3) When a node is expanded, store its state into CLOSED. When a new node N is generated: if STATE(N) is in CLOSED, discard N; if there exists a node N′ in the fringe such that STATE(N′) = STATE(N), discard the node (N or N′) with the largest f (or, equivalently, g) 55
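A sketch of A* with this revisited-state handling (Search #3), assuming h is consistent. The CLOSED set follows the slide; keeping a best-g map and lazily skipping superseded fringe entries is an implementation choice of this sketch, not something prescribed by the slide.

```python
import heapq
import itertools

def a_star_closed(start, goal_test, successors, h):
    """A* with revisited-state handling (Search #3); assumes h is consistent."""
    counter = itertools.count()
    fringe = [(h(start), next(counter), start, 0.0, [start])]
    best_g = {start: 0.0}          # cheapest known path cost to each generated state
    closed = set()                 # states of already-expanded nodes
    while fringe:
        f, _, state, g, path = heapq.heappop(fringe)
        if state in closed:        # stale entry: a cheaper path was already expanded
            continue
        if goal_test(state):
            return path, g
        closed.add(state)
        for child, cost in successors(state):
            g2 = g + cost
            if child in closed:    # with a consistent h, the first expansion was optimal
                continue
            if child not in best_g or g2 < best_g[child]:
                best_g[child] = g2  # keep only the lowest-g node per state (lazy deletion)
                heapq.heappush(fringe, (g2 + h(child), next(counter),
                                        child, g2, [*path, child]))
    return None, float("inf")
```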

A* IS OPTIMAL IF either (a) h is admissible (but not necessarily consistent), revisited states are not detected, and Search #2 is used, or (b) h is consistent, revisited states are detected, and Search #3 is used 56

HEURISTIC ACCURACY Let h1 and h2 be two consistent heuristics such that for all nodes N: h1(N) ≤ h2(N). h2 is said to be more accurate (or more informed) than h1 57 Example: h1(N) = number of misplaced tiles and h2(N) = sum of the distances of every tile to its goal position; h2 is more accurate than h1. [Figure: STATE(N) and the goal state]

RESULT #3 Let h2 be more accurate than h1. Let A1* be A* using h1 and A2* be A* using h2. Whenever a solution exists, all the nodes expanded by A2*, except possibly for some nodes such that f1(N) = f2(N) = C* (the cost of the optimal solution), are also expanded by A1* 58

PROOF C* = h*(initial-node) [cost of the optimal solution]. Every node N such that f(N) < C* is eventually expanded; no node N such that f(N) > C* is ever expanded. Equivalently, every node N such that h(N) < C* − g(N) is eventually expanded. So every node N such that h2(N) < C* − g(N) is expanded by A2*; since h1(N) ≤ h2(N), such an N is also expanded by A1*. If there are several nodes N such that f1(N) = f2(N) = C* (such nodes include the optimal goal nodes, if a solution exists), A1* and A2* may or may not expand them in the same order (until one goal node is expanded) 59

HOW TO CREATE GOOD HEURISTICS? By solving relaxed problems at each node. In the 8-puzzle, the sum of the distances of each tile to its goal position (h2) corresponds to solving 8 simple problems; it ignores negative interactions among tiles. di is the length of the shortest path to move tile i to its goal position, ignoring the other tiles, e.g., d5 = 2. h2 = Σi=1..8 di

CAN WE DO BETTER? For example, we could consider two more complex relaxed problems: h = d1234 + d5678 [disjoint pattern heuristic], where d1234 = length of the shortest path to move tiles 1, 2, 3, and 4 to their goal positions, ignoring the other tiles, and d5678 is defined similarly for tiles 5, 6, 7, and 8

CAN WE DO BETTER? For example, we could consider two more complex relaxed problems: h = d1234 + d5678 [disjoint pattern heuristic], where d1234 = length of the shortest path to move tiles 1, 2, 3, and 4 to their goal positions, ignoring the other tiles, and d5678 is defined similarly for tiles 5, 6, 7, and 8. How to compute d1234 and d5678?

CAN WE DO BETTER? For example, we could consider two more complex relaxed problems: h = d1234 + d5678 [disjoint pattern heuristic], where d1234 = length of the shortest path to move tiles 1, 2, 3, and 4 to their goal positions, ignoring the other tiles, and d5678 is defined similarly for tiles 5, 6, 7, and 8. These distances are pre-computed and stored [each requires generating a tree of 3,024 nodes/states (breadth-first search)]

CAN WE DO BETTER? For example, we could consider two more complex relaxed problems: h = d1234 + d5678 [disjoint pattern heuristic], where d1234 = length of the shortest path to move tiles 1, 2, 3, and 4 to their goal positions, ignoring the other tiles, and d5678 is defined similarly for tiles 5, 6, 7, and 8. These distances are pre-computed and stored [each requires generating a tree of 3,024 nodes/states (breadth-first search)]. Several order-of-magnitude speedups for the 15- and 24-puzzle (see R&N)
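A sketch of how such a table could be precomputed by breadth-first search backwards from the goal, under the relaxation described above: only the positions of tiles 1-4 matter, and any of them may slide into any adjacent cell not occupied by another of the four (9·8·7·6 = 3,024 states, matching the count on the slide). The state encoding and names are assumptions of this example.

```python
from collections import deque

# Adjacency of cells 0..8 on the 3x3 board (cells are numbered row by row).
NEIGHBORS = {i: [j for j in range(9)
                 if abs(i // 3 - j // 3) + abs(i % 3 - j % 3) == 1]
             for i in range(9)}

def pattern_database(goal_positions):
    """BFS backwards from the goal over all placements of the 4 pattern tiles.
    goal_positions: tuple of the 4 board cells occupied by tiles 1-4 in the goal.
    Returns dict: state (tuple of 4 cells) -> d1234 (minimum number of moves)."""
    dist = {goal_positions: 0}
    queue = deque([goal_positions])
    while queue:
        state = queue.popleft()
        occupied = set(state)
        for k, pos in enumerate(state):                 # try to slide each pattern tile
            for nxt in NEIGHBORS[pos]:
                if nxt in occupied:
                    continue                             # blocked by another pattern tile
                child = state[:k] + (nxt,) + state[k + 1:]
                if child not in dist:
                    dist[child] = dist[state] + 1
                    queue.append(child)
    return dist

# Hypothetical usage: db = pattern_database((0, 1, 2, 3)); d1234 is then a dict lookup.
```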

ITERATIVE DEEPENING A* (IDA*) Idea: reduce the memory requirement of A* by applying a cutoff on values of f. Assumes a consistent heuristic function h. Algorithm IDA*: initialize the cutoff to f(initial-node); repeat: perform depth-first search by expanding all nodes N such that f(N) ≤ cutoff, then reset the cutoff to the smallest value of f among the non-expanded (leaf) nodes 65
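A compact IDA* sketch for positive arc costs (e.g., the unit-cost 8-puzzle); the recursive structure, the parent-skipping pruning, and the return conventions are choices of this illustration rather than the slide's pseudocode.

```python
import math

def ida_star(start, goal_test, successors, h):
    """Iterative-deepening A*: repeated depth-first searches bounded by f = g + h."""
    def dfs(state, g, cutoff, path):
        f = g + h(state)
        if f > cutoff:
            return None, f                    # report the smallest f above the cutoff
        if goal_test(state):
            return path, f
        next_cutoff = math.inf
        for child, cost in successors(state):
            if len(path) >= 2 and child == path[-2]:
                continue                      # don't immediately undo a reversible move
            found, candidate = dfs(child, g + cost, cutoff, [*path, child])
            if found is not None:
                return found, candidate
            next_cutoff = min(next_cutoff, candidate)
        return None, next_cutoff

    cutoff = h(start)                         # f(initial-node), since g = 0 at the root
    while True:
        found, next_cutoff = dfs(start, 0.0, cutoff, [start])
        if found is not None:
            return found
        if next_cutoff == math.inf:
            return None                       # no solution
        cutoff = next_cutoff                  # smallest f among non-expanded nodes
```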

8-PUZZLE (IDA* trace) f(N) = g(N) + h(N) with h(N) = number of misplaced tiles. [Figures: successive frames of the depth-first search. First iteration, Cutoff = 4: nodes with f ≤ 4 are expanded, and children with f-values such as 4, 5, and 6 appear until no unexpanded node has f ≤ 4. Second iteration, Cutoff = 5: the cutoff is raised to 5, the smallest f of a non-expanded leaf, and the depth-first search is repeated under the new bound.]

ADVANTAGES/DRAWBACKS OF IDA* Advantages: still complete and optimal; requires less memory than A*; avoids the overhead of sorting the fringe. Drawbacks: can't avoid revisiting states not on the current path; available memory is poorly used; non-unit costs? 78

MEMORY-BOUNDED SEARCH Proceed like A* until memory is full and no more nodes can be added to the search tree. Then drop the node in the fringe with the highest f(N) and place its parent back in the fringe with the backed-up value f(P) ← min(f(P), f(N)). Extreme example: RBFS, which only keeps the nodes on the path to the current node 79

RECAP Proving properties of A* performance. Admissible heuristics: optimality. Consistent heuristics: revisited states 80

NEXT CLASS Beyond classical search. R&N 4.1-5

EFFECTIVE BRANCHING FACTOR It is used as a measure of the effectiveness of a heuristic. Let n be the total number of nodes expanded by A* for a particular problem and d the depth of the solution. The effective branching factor b* is defined by n = 1 + b* + (b*)² + … + (b*)^d 82
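Since the right-hand side grows monotonically in b*, the value of b* can be recovered numerically from n and d, for example by bisection; a hypothetical helper, not part of the slides:

```python
def effective_branching_factor(n, d, tol=1e-6):
    """Solve n = 1 + b + b^2 + ... + b^d for b by bisection (assumes n > d + 1)."""
    def total(b):
        return sum(b ** i for i in range(d + 1))
    lo, hi = 1.0, float(n)          # total(1) = d + 1 <= n and total(n) >= n
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For instance, with n = 227 expanded nodes and a solution depth of 12, the function returns approximately 1.42.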

EXPERIMENTAL RESULTS (SEE R&N FOR DETAILS) 8-puzzle with: h1 = number of misplaced tiles; h2 = sum of the distances of tiles to their goal positions. Random generation of many problem instances. Average effective branching factors (number of expanded nodes): 83 For one solution depth: IDS (3,644,035 nodes); A1* 1.42 (227 nodes); A2* 1.24 (73 nodes). For a deeper solution depth: A1* (39,135 nodes); A2* 1.26 (1,641 nodes).