ECE457 Applied Artificial Intelligence Spring 2008 Lecture #3


Informed Search ECE457 Applied Artificial Intelligence Spring 2008 Lecture #3

Outline
- Heuristics
- Informed search techniques
- More on heuristics
- Iterative improvement
Reading: Russell & Norvig, chapter 4; skip "Genetic algorithms", pages 116-120 (will be covered in Lecture 12)

Recall: Uninformed Search
- Uninformed search agents travel blindly until they reach Bucharest

An Idea…
- It would be better if the agent knew whether or not the city it is travelling to gets it closer to Bucharest
- Of course, the agent doesn't know the exact distance or path to Bucharest (it wouldn't need to search otherwise!)
- The agent must estimate the distance to Bucharest

Heuristic Function
- More generally: we want the search algorithm to be able to estimate the path cost from the current node to the goal
- This estimate is called a heuristic function
- It cannot be computed from the problem formulation alone; we need to add additional information
- This gives us informed search

Heuristic Function
- Heuristic function h(n): estimated cost from node n to the goal
  - h(n1) < h(n2) means it is probably cheaper to reach the goal from n1
  - h(n_goal) = 0
- Path cost g(n): cost of the path from the start node to n
- Evaluation function f(n) (one search skeleton, three strategies, as sketched below):
  - f(n) = g(n): Uniform Cost Search
  - f(n) = h(n): Greedy Best-First Search
  - f(n) = g(n) + h(n): A* Search
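The three evaluation functions plug into the same best-first skeleton. The following is a minimal sketch of that idea, not code from the course: the graph interface (neighbours, g_cost, h) and the function names are assumptions made for illustration.

```python
import heapq

def best_first_search(start, goal, neighbours, g_cost, h, f):
    """Generic best-first search over an implicit graph.

    neighbours(n)  -> iterable of successors of n
    g_cost(n1, n2) -> step cost of the action from n1 to n2
    h(n)           -> heuristic estimate of the cost from n to the goal
    f(g, h)        -> evaluation function combining path cost and heuristic
    """
    frontier = [(f(0, h(start)), 0, start, [start])]   # (f-value, g, node, path)
    best_g = {start: 0}
    while frontier:
        f_val, g, node, path = heapq.heappop(frontier)
        if g > best_g.get(node, float('inf')):
            continue                                   # stale frontier entry
        if node == goal:
            return path, g
        for succ in neighbours(node):
            g2 = g + g_cost(node, succ)
            if g2 < best_g.get(succ, float('inf')):
                best_g[succ] = g2
                heapq.heappush(frontier, (f(g2, h(succ)), g2, succ, path + [succ]))
    return None, float('inf')

# The three strategies from the slide differ only in f:
uniform_cost = lambda g, h: g        # f(n) = g(n)
greedy       = lambda g, h: h        # f(n) = h(n)
a_star       = lambda g, h: g + h    # f(n) = g(n) + h(n)
```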

Greedy Best-First Search
- f(n) = h(n): always expand the node that appears closest to the goal, ignoring path cost
- Complete only if the maximum depth m is finite (rarely true in practice)
- Not optimal: can go down a long path of cheap actions
- Time complexity: O(b^m)
- Space complexity: O(b^m)

Greedy Best-First Search
- Upper-bound case: the goal is the last node of the tree
- Number of nodes generated: b children for each node over m levels (the entire tree)
- Time and space complexity: all generated nodes, O(b^m)

A* Search
- f(n) = g(n) + h(n): best-first search on estimated total path cost
- Complete
- Optimal, given an admissible heuristic (one that never overestimates the cost to the goal)
- Optimally efficient: no other optimal algorithm will expand fewer nodes
- Time complexity: O(b^(C*/ε + 1))
- Space complexity: O(b^(C*/ε + 1))

A* Search
- Upper-bound case: the heuristic is the trivial h(n) = 0, so A* becomes Uniform Cost Search
- The goal has path cost C*; all other actions have a minimum cost of ε
- Depth explored before taking the C* action: C*/ε
- Depth of fringe nodes: C*/ε + 1
- Space and time complexity: all generated nodes, O(b^(C*/ε + 1))
[Figure: search tree with one C*-cost edge leading to the goal and ε-cost edges everywhere else]

A* Search
- Using a good heuristic can reduce the time complexity, possibly down to O(bm)
- However, the space complexity will always be exponential
- A* runs out of memory before running out of time

Iterative Deepening A* Search
- Like Iterative Deepening Search, but the cut-off limit is an f-value instead of a depth
- The next iteration's limit is the smallest f-value of any node that exceeded the current iteration's cut-off (see the sketch below)
- Properties:
  - Complete and optimal, like A*
  - Space complexity of depth-first search (nodes and paths can be deleted from memory as we explore down to the cut-off limit)
  - Performs poorly if action costs are small (the limit grows by only a small step in each iteration)
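A minimal sketch of that contour-deepening loop, under the same assumed graph interface as the earlier best-first sketch (neighbours, g_cost and h are illustrative names, not course code):

```python
def ida_star(start, goal, neighbours, g_cost, h):
    """Iterative Deepening A*: depth-first search bounded by an f = g + h cut-off."""
    def dfs(node, g, limit, path):
        f = g + h(node)
        if f > limit:
            return None, f                       # report the f-value that exceeded the cut-off
        if node == goal:
            return path, f
        next_limit = float('inf')
        for succ in neighbours(node):
            if succ in path:                     # avoid trivial cycles on the current path
                continue
            found, value = dfs(succ, g + g_cost(node, succ), limit, path + [succ])
            if found is not None:
                return found, value
            next_limit = min(next_limit, value)
        return None, next_limit

    limit = h(start)
    while True:
        found, value = dfs(start, 0, limit, [start])
        if found is not None:
            return found                         # path from start to goal
        if value == float('inf'):
            return None                          # no solution
        limit = value                            # smallest f-value that exceeded the old cut-off
```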

Simplified Memory-Bounded A*
- Uses all available memory
- When the memory limit is reached, deletes the worst leaf node (highest f-value); on a tie, deletes the oldest leaf node
- SMA* memory problem: if the entire optimal path fills the memory and the only remaining leaf is a non-goal node, SMA* cannot continue expanding, and the goal is not reachable

Simplified Memory-Bounded A*
- Space complexity is known and controlled by the system designer
- Complete if the depth of the shallowest goal is less than the memory size (the shallowest goal is reachable)
- Optimal if the optimal goal is reachable

Example: Greedy Best-First Search (h(n) = straight-line distance to Bucharest)
- Start at Arad (h = 366); expanding Arad puts Zerind 374, Sibiu 253 and Timisoara 329 on the frontier
- Expand Sibiu (253); the frontier gains Fagaras 176 and Rimnicu 193
- Expand Fagaras (176); the frontier gains Bucharest 0
- Expand Bucharest (0): goal reached via Arad - Sibiu - Fagaras - Bucharest

Example: A* Search (h(n) = straight-line distance to Bucharest, f(n) = g(n) + h(n))
- Start at Arad (f = 366); expanding Arad puts Zerind 449, Sibiu 393 and Timisoara 447 on the frontier
- Expand Sibiu (393); the frontier gains Fagaras 415 and Rimnicu 413
- Expand Rimnicu (413); the frontier gains Pitesti 417 and Craiova 526
- Expand Fagaras (415); the frontier gains Bucharest 450
- Expand Pitesti (417); a cheaper path to Bucharest is found, with f = 418
- Expand Bucharest (418): optimal goal reached via Arad - Sibiu - Rimnicu - Pitesti - Bucharest
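This run can be reproduced with the best-first sketch from earlier; the road costs and straight-line distances below are the standard Romania figures from Russell & Norvig that the slides use, and the variable names are my own.

```python
roads = {
    'Arad':      {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Zerind':    {'Arad': 75},
    'Timisoara': {'Arad': 118},
    'Sibiu':     {'Arad': 140, 'Fagaras': 99, 'Rimnicu': 80},
    'Fagaras':   {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu':   {'Sibiu': 80, 'Pitesti': 97, 'Craiova': 146},
    'Pitesti':   {'Rimnicu': 97, 'Craiova': 138, 'Bucharest': 101},
    'Craiova':   {'Rimnicu': 146, 'Pitesti': 138},
    'Bucharest': {},
}
sld = {'Arad': 366, 'Zerind': 374, 'Timisoara': 329, 'Sibiu': 253, 'Fagaras': 176,
       'Rimnicu': 193, 'Pitesti': 100, 'Craiova': 160, 'Bucharest': 0}

neighbours = lambda n: roads[n]
g_cost = lambda n1, n2: roads[n1][n2]
h = lambda n: sld[n]

print(best_first_search('Arad', 'Bucharest', neighbours, g_cost, h, greedy))
# Greedy follows Arad - Sibiu - Fagaras - Bucharest, path cost 450
print(best_first_search('Arad', 'Bucharest', neighbours, g_cost, h, a_star))
# A* finds Arad - Sibiu - Rimnicu - Pitesti - Bucharest, path cost 418
```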

Heuristic Function Properties
- Admissible: never overestimates the cost to the goal
- Consistency / monotonicity: h(np) ≤ h(nc) + cost(np, nc)
  - Adding g(np) to both sides: h(np) + g(np) ≤ h(nc) + cost(np, nc) + g(np)
  - Since g(nc) = g(np) + cost(np, nc): h(np) + g(np) ≤ h(nc) + g(nc)
  - Therefore f(np) ≤ f(nc): f(n) never decreases as we get closer to the goal
- Domination: h1 dominates h2 if h1(n) ≥ h2(n) for all n
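On a finite graph these properties can be sanity-checked directly. The following is a small sketch under the same assumed graph interface as before; true_cost_to_goal is a hypothetical oracle used only for the check.

```python
def is_admissible(h, true_cost_to_goal, nodes):
    """h never overestimates the true remaining cost for any node."""
    return all(h(n) <= true_cost_to_goal(n) for n in nodes)

def is_consistent(h, neighbours, g_cost, nodes):
    """h(np) <= cost(np, nc) + h(nc) holds for every edge (np, nc)."""
    return all(h(np) <= g_cost(np, nc) + h(nc)
               for np in nodes for nc in neighbours(np))

def dominates(h1, h2, nodes):
    """h1 dominates h2 if h1(n) >= h2(n) for all n."""
    return all(h1(n) >= h2(n) for n in nodes)
```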

Creating Heuristic Functions
- Heuristics are found by relaxing the problem
- Straight-line distance to Bucharest: eliminate the constraint of travelling on roads
- 8-puzzle relaxations (sketched below):
  - Move each square that is out of place directly to its position: count of misplaced tiles (7 in the slide's example)
  - Move each tile by the number of squares needed to reach its position: sum of tile distances (12 in the slide's example)
  - Only require some of the tiles to be in place
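As a concrete sketch of the first two relaxed-problem heuristics (misplaced tiles and total tile distance, i.e. Manhattan distance), assuming a flat tuple representation of the board with 0 for the blank; the representation and names are choices made for this example, not the course's.

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout, 0 = blank

def misplaced_tiles(state, goal=GOAL):
    """h1: number of tiles that are not in their goal position (blank not counted)."""
    return sum(1 for tile, target in zip(state, goal) if tile != 0 and tile != target)

def manhattan_distance(state, goal=GOAL):
    """h2: sum over all tiles of |row difference| + |column difference| to the goal position."""
    goal_pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        row, col = divmod(i, 3)
        goal_row, goal_col = goal_pos[tile]
        total += abs(row - goal_row) + abs(col - goal_col)
    return total
```

Every misplaced tile contributes at least 1 to the Manhattan distance, so h2 dominates h1; A* with h2 therefore never expands more nodes than A* with h1.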

Creating Heuristic Functions
- Block world: a block may be moved onto the table or onto another block, but only if there is nothing on top of it
- Possible heuristics for this game:
  - h1: +1 for each block in the wrong position
  - h2: +1 for each block on top of the wrong block
  - h3: +1 for every block in the support structure of each block with incorrect support
  - h4: 4, minus 1 for every block with the correct support structure
  - h5: partial solving (get to A-B-?-?)
[Figure: example block-world start and goal configurations with blocks A, B, C, D]

Creating Heuristic Functions
[Table: values of the heuristics h1-h5 and of ½(h3 + h4) for example block-world states]

Creating Heuristic Functions
[Table continued: values of h1-h5 and ½(h3 + h4) for additional example block-world states]

Path to the Goal
- Sometimes the path to the goal is irrelevant; only the solution itself matters
- Example: the n-queens puzzle

Different Search Problem
- No longer minimizing path cost
- Instead, improve the quality of the state: minimize state cost or maximize state payoff
- Iterative improvement

Example: Iterative Improvement
- n-queens: minimize cost, here the number of attacks between queens

Example: Travelling Salesman
- Tree search method: start with the home city, then visit the next city repeatedly until an optimal round trip is built
- Iterative improvement method: start with a random round trip, then swap cities until the round trip is optimal (see the sketch below)
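A minimal sketch of the iterative-improvement formulation, using a pairwise city swap as the neighbourhood move; the distance-matrix representation and function names are assumptions made for illustration.

```python
import itertools

def tour_length(tour, dist):
    """Total length of a round trip that returns to the starting city."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def improve_by_swaps(tour, dist):
    """Repeatedly take the first pairwise city swap that shortens the tour, until none does."""
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            candidate = tour[:]
            candidate[i], candidate[j] = candidate[j], candidate[i]
            if tour_length(candidate, dist) < tour_length(tour, dist):
                tour, improved = candidate, True
                break
    return tour
```

Like hill climbing, this stops at a tour that no single swap can shorten: a local optimum with respect to the swap neighbourhood, which is exactly the limitation discussed in the next slides.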

Graphic Visualisation
- State value / state plot of the state space
- The "state" axis can represent states themselves or specific state properties
- Neighbouring states on the axis are states linked by actions, or states with similar property values
- State values are computed using a heuristic and do not include path cost
[Figure: plot with "Value" on the vertical axis and "State" on the horizontal axis]

Graphic Visualisation
- A state value / state plot of a state space can feature a global maximum, a global minimum, local maxima, local minima, and plateaux
[Figure: value-vs-state plot with the global maximum, global minimum, local maxima, local minima and a plateau labelled]

Graphic Visualisation
- The state payoff can be a complex mathematical function of one state property, e.g. payoff(x) = -1x * x2 + sin2(x)/x + (1000-x)*cos(5x)/5x - x/10
- State space: x ∈ [10, 80]
- Maximum: x = 74, payoff = 66.3193

Graphic Visualisation
- More complex state spaces can have several dimensions
- Example: states are X-Y coordinates and the state value is the Z coordinate

Graphic Visualisation
- Each state is a point on the map; each state's value is its distance to the CN Tower
- Locations in water always have the worst value, because we can't swim
- 2D state space: X-Y coordinates of the agent, with the Z coordinate for the state value
- Red = minimum distance, blue = maximum distance

Hill Climbing (Gradient Descent)
- Simple but efficient local optimization strategy
- Always take the action that most improves the state

Hill Climbing (Gradient Descent)
- Generate a random initial state
- Each iteration:
  - Generate and evaluate the neighbours at the current step size
  - Move to the neighbour with the greatest increase/decrease (i.e. take one step)
- End when there are no better neighbours (see the sketch below)
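A minimal sketch of this loop for a maximization problem; random_state, neighbours and value are placeholders for whatever the problem at hand defines, not course code.

```python
def hill_climbing(random_state, neighbours, value):
    """Steepest-ascent hill climbing: always move to the best neighbour."""
    current = random_state()
    while True:
        candidates = neighbours(current)
        if not candidates:
            return current
        best = max(candidates, key=value)
        if value(best) <= value(current):
            return current           # no better neighbour: local (or global) optimum
        current = best               # take one step
```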

Example: Travelling to Toronto
- Trying to get to downtown Toronto
- Take steps toward the CN Tower

Hill Climbing (Gradient Descent)
- Advantages:
  - Fast
  - No search tree
- Disadvantages:
  - Gets stuck in local optima (does not allow worse moves)
  - Solution is dependent on the initial state
  - Selecting the step size
- Common improvements (one is sketched below):
  - Random restarts
  - Intelligently-chosen initial state
  - Decreasing step size
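As an illustration of the first improvement, a random-restart wrapper around the hill_climbing sketch above might look like this; the default restart count is an arbitrary choice for the example.

```python
def random_restart_hill_climbing(random_state, neighbours, value, restarts=20):
    """Run hill climbing from several random initial states and keep the best result."""
    best = None
    for _ in range(restarts):
        result = hill_climbing(random_state, neighbours, value)
        if best is None or value(result) > value(best):
            best = result
    return best
```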

Simulated Annealing
- Problem with hill climbing: the locally best move doesn't always lead to the optimal goal
- Solution: allow bad moves
- Simulated annealing is a popular way of doing that
- Stochastic search method that simulates the annealing process in metallurgy

Annealing
- Tempering technique in metallurgy
- Weakness and defects come from crystal atoms freezing in the wrong place (a local optimum)
- Heating unsticks the atoms (escape the local optimum)
- Slow cooling allows the atoms to settle in a better place (the global optimum)

Simulated Annealing
- Annealing: atoms moving towards a minimum-energy location in the crystal while avoiding bad positions.
  Simulated annealing: the agent modifying the state towards the state with the global optimal value while avoiding local optima.
- Annealing: atoms are more likely to move out of a bad position if the metal's temperature is high.
  Simulated annealing: the agent is more likely to accept bad moves if the "temperature" control parameter has a high value.

Simulated Annealing
- Annealing: the metal's temperature starts hot, then it cools off continuously over time, until the metal is at room temperature.
  Simulated annealing: the "temperature" control parameter starts with a high value, then it decreases incrementally with each iteration of the search, until it reaches a pre-set threshold.

Simulated Annealing
- Allow some bad moves: bad enough to get out of a local optimum, but not so bad as to get out of the global optimum
- The probability of accepting a bad move is given by the badness of the move (i.e. the variation in state value, V) and the temperature T: P = e^(-V/T)
- For example, a move that worsens the state value by V = 2 is accepted with probability e^(-2/10) ≈ 0.82 when T = 10, but only e^(-2/1) ≈ 0.14 when T = 1
- Stochastic search technique

Simulated Annealing
- Generate a random initial state and a high initial temperature
- Each iteration:
  - Generate and evaluate a random neighbour
  - If the neighbour is better than the current state, accept it
  - Else (the neighbour is worse than the current state), accept it with probability e^(-V/T)
  - Reduce the temperature
- End when the temperature drops below a threshold (see the sketch below)
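A minimal sketch of this loop for a maximization problem; the geometric cooling schedule (alpha) and the default temperatures are assumptions made for the example, not values given in the course.

```python
import math
import random

def simulated_annealing(random_state, random_neighbour, value,
                        t_start=100.0, t_min=0.01, alpha=0.95):
    """Simulated annealing: accept worsening moves with probability e^(-V/T)."""
    current = random_state()
    temperature = t_start
    while temperature > t_min:
        candidate = random_neighbour(current)
        delta = value(candidate) - value(current)    # negative delta means a worse state
        if delta >= 0:
            current = candidate                      # better (or equal) neighbour: accept
        elif random.random() < math.exp(delta / temperature):
            current = candidate                      # worse neighbour: accept with P = e^(-V/T)
        temperature *= alpha                         # reduce the temperature
    return current
```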

Simulated Annealing
- Advantages:
  - Avoids local optima
  - Very good at finding high-quality solutions
  - Very good for hard problems with complex state value functions
- Disadvantage:
  - Can be very slow in practice

Simulated Annealing Application
- Traveling-wave tube (TWT):
  - Uses a focused electron beam to amplify electromagnetic communication waves
  - Produces high-power radio frequency (RF) signals
  - Critical component in deep-space probes and communication satellites, where power efficiency is a key issue
- A TWT research group at NASA has been working for over 30 years on improving power efficiency

Simulated Annealing Application
- Optimizing TWT efficiency:
  - Synchronize the electron velocity with the phase velocity of the RF wave
  - A "phase velocity taper" is used to control and decrease the RF wave's phase velocity
  - Improving the taper design improves synchronization, which improves the efficiency of the TWT
- A taper designed with a simulated annealing algorithm to optimize synchronization:
  - Doubled TWT efficiency
  - More flexible than past tapers: can maximize overall power efficiency, maximize efficiency over various bandwidths, or maximize efficiency while minimizing signal distortion

Assumptions
- Goal-based agent
- Environment:
  - Fully observable
  - Deterministic
  - Sequential
  - Static
  - Discrete
  - Single agent

Assumptions Updated
- Utility-based agent
- Environment:
  - Fully observable
  - Deterministic
  - Sequential
  - Static
  - Discrete / continuous
  - Single agent