1 Beyond Classical Search Chapter 3 covered problems in which the whole search space is considered and the solution is a sequence of actions leading to a goal. Chapter 4 covers techniques (some developed outside of AI) that do not try to cover the whole space and in which only the goal state, not the path to it, matters. The techniques of Chapter 4 tend to use much less memory and are not guaranteed to find an optimal solution.

More Search Methods Local Search –Hill Climbing –Simulated Annealing –Beam Search –Genetic Search Local Search in Continuous Spaces Searching with Nondeterministic Actions Online Search (agent is executing actions) 2

3 Local Search Algorithms and Optimization Problems Complete state formulation –For example, for the 8 queens problem, all 8 queens are on the board and need to be moved around to get to a goal state Equivalent to optimization problems often found in science and engineering Start somewhere and try to get to the solution from there Local search around the current state to decide where to go next

Pose Estimation Example Given a geometric model of a 3D object and a 2D image of the object, determine the position and orientation of the object with respect to the camera that snapped the image. (figure: the 2D image and the 3D object model) State: (x, y, z, θx, θy, θz) 4

5 Hill Climbing “Gradient ascent” solution Note: solutions shown here as max not min. Often used for numerical optimization problems. How does it work? In continuous space, the gradient tells you the direction in which to move uphill.

6 AI Hill Climbing Steepest-Ascent Hill Climbing current ← start node loop do –neighbor ← a highest-valued successor of current –if neighbor.Value <= current.Value then return current.State –current ← neighbor end loop At each step, the current node is replaced by the best (highest-valued) neighbor. This is sometimes called greedy local search.
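
A minimal Python sketch of steepest-ascent hill climbing matching the pseudocode above; the state representation and the value and neighbors functions are hypothetical placeholders that depend on the problem.

    def hill_climb(start, value, neighbors):
        """Steepest-ascent hill climbing for a maximization problem.
        value(s)     -- score of state s (higher is better)
        neighbors(s) -- list of successor states of s
        """
        current = start
        while True:
            succs = neighbors(current)
            if not succs:
                return current
            best = max(succs, key=value)          # a highest-valued successor
            if value(best) <= value(current):
                return current                    # no uphill move: local maximum or plateau
            current = best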

7 Hill Climbing Search (figure: example search space with node values; the current node is marked) What if current had a value of 12?

8 Hill Climbing Problems Local maxima Plateaus Diagonal ridges What is it sensitive to? Does it have any advantages?

9 Solving the Problems Allow backtracking (What happens to complexity?) Stochastic hill climbing: choose at random from uphill moves, using steepness for a probability Random restarts: “If at first you don’t succeed, try, try again.” Several moves in each of several directions, then test Jump to a different part of the search space

10 Simulated Annealing Variant of hill climbing (so up is good) Tries to explore enough of the search space early on, so that the final solution is less sensitive to the start state May make some downhill moves before finding a good way to move uphill.

11 Simulated Annealing Comes from the physical process of annealing, in which substances are raised to high energy levels (melted) and then cooled to a solid state. The probability of moving to a higher energy state, instead of a lower one, is p = e^(-ΔE/kT), where ΔE is the positive change in energy level, T is the temperature, and k is Boltzmann's constant. (figure: heat, then cool)

12 Simulated Annealing At the beginning, the temperature is high. As the temperature becomes lower – kT becomes lower – ΔE/kT gets bigger – (-ΔE/kT) gets smaller – e^(-ΔE/kT) gets smaller As the process continues, the probability of a downhill move gets smaller and smaller.

13 For Simulated Annealing ΔE represents the change in the value of the objective function. Since the physical relationships no longer apply, drop k. So p = e^(-ΔE/T). We need an annealing schedule, which is a sequence of values of T: T0, T1, T2, ...
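
As a worked illustration (numbers chosen here for the example, not from the slides): a downhill move with ΔE = 1 is accepted with probability e^(-1/10) ≈ 0.90 when T = 10, e^(-1/1) ≈ 0.37 when T = 1, and e^(-1/0.1) ≈ 4.5 × 10^-5 when T = 0.1, so early in the schedule the search wanders freely and later it behaves almost like pure hill climbing.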

14 Simulated Annealing Algorithm current ← start node; for each T on the schedule /* need a schedule */ –next ← randomly selected successor of current –evaluate next; if it's a goal, return it –ΔE ← next.Value – current.Value /* already negated */ –if ΔE > 0 then current ← next /* better than current */ else current ← next with probability e^(ΔE/T) How would you do this probabilistic selection?
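
A minimal Python sketch of the algorithm above for a maximization problem; the schedule, value, and random_successor arguments are hypothetical placeholders. The last branch shows one way to do the probabilistic selection, which the next slide describes.

    import math
    import random

    def simulated_annealing(start, value, random_successor, schedule):
        """schedule is a sequence of temperatures T0, T1, T2, ... (decreasing, all > 0)."""
        current = start
        for T in schedule:
            nxt = random_successor(current)
            delta_e = value(nxt) - value(current)       # > 0 means nxt is better
            if delta_e > 0:
                current = nxt                           # always accept uphill moves
            elif random.random() <= math.exp(delta_e / T):
                current = nxt                           # occasionally accept downhill moves
        return current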

Probabilistic Selection Select next with probability p: generate a random number between 0 and 1; if it's <= p, select next. (figure: number line from 0 to 1 with p marked, showing where the random number falls)

16 Simulated Annealing Properties At a fixed "temperature" T, the state occupation probability reaches the Boltzmann distribution p(x) = α e^(E(x)/kT). If T is decreased slowly enough (very slowly), the procedure will reach the best state. "Slowly enough" has proven too slow for some researchers, who have developed alternate schedules.

17 Simulated Annealing Schedules Acceptance criterion and cooling schedule

18 Simulated Annealing Applications Basic Problems –Traveling salesman –Graph partitioning –Matching problems –Graph coloring –Scheduling Engineering –VLSI design Placement Routing Array logic minimization Layout –Facilities layout –Image processing –Code design in information theory

19 Local Beam Search Keeps more previous states in memory –Simulated annealing just kept one previous state in memory. –This search keeps k states in memory. - randomly generate k initial states - if any state is a goal, terminate - else, generate all successors and select best k - repeat
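
A rough Python sketch of local beam search as outlined above; the random_state, successors, value, and is_goal functions are hypothetical placeholders.

    def local_beam_search(k, random_state, successors, value, is_goal, max_iters=1000):
        """Keep the k best states at every step."""
        beam = [random_state() for _ in range(k)]               # k random initial states
        for _ in range(max_iters):
            goals = [s for s in beam if is_goal(s)]
            if goals:
                return goals[0]                                 # a goal was found
            pool = [s2 for s in beam for s2 in successors(s)]   # all successors of all k states
            if not pool:
                break
            beam = sorted(pool, key=value, reverse=True)[:k]    # select the best k
        return max(beam, key=value)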

Local Beam Search 20

Genetic Algorithms Start with random population of states –Representation serialized (i.e., strings of characters or bits) –States are ranked with a "fitness function" Produce new generation –Select random pair(s) with probability proportional to fitness –Randomly choose a "crossover point"; offspring mix halves –Randomly mutate bits (figure: crossover and mutation on bit strings)

Genetic Algorithm Given: population P and fitness-function f repeat –newP ← empty set –for i = 1 to size(P) x ← RandomSelection(P,f) y ← RandomSelection(P,f) child ← Reproduce(x,y) if (small random probability) then child ← Mutate(child) add child to newP –P ← newP until some individual is fit enough or enough time has elapsed return the best individual in P according to f 23
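
A minimal Python sketch of the genetic algorithm above for bit-string individuals; the fitness-proportional selection, single-point crossover, and one-bit mutation are illustrative assumptions rather than details given on the slide.

    import random

    def random_selection(population, fitness):
        """Pick an individual with probability proportional to its fitness."""
        return random.choices(population, weights=[fitness(i) for i in population], k=1)[0]

    def reproduce(x, y):
        """Single-point crossover of two equal-length bit strings."""
        c = random.randrange(1, len(x))
        return x[:c] + y[c:]

    def mutate(child):
        """Flip one randomly chosen bit."""
        i = random.randrange(len(child))
        return child[:i] + ('1' if child[i] == '0' else '0') + child[i + 1:]

    def genetic_algorithm(population, fitness, generations=100, p_mutate=0.05):
        for _ in range(generations):
            new_pop = []
            for _ in range(len(population)):
                x = random_selection(population, fitness)
                y = random_selection(population, fitness)
                child = reproduce(x, y)
                if random.random() < p_mutate:
                    child = mutate(child)
                new_pop.append(child)
            population = new_pop
        return max(population, key=fitness)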

Using Genetic Algorithms 2 important aspects to using them –1. How to encode your real-life problem –2. Choice of fitness function Research Example –I have N variables V1, V2,... VN –I want to produce a single number from them that best satisfies my fitness function F –I tried linear combinations, but that didn’t work –A guy named Stan I met at a workshop in Italy told me to try Genetic Programming 24

Genetic Programming Like genetic algorithm, but instead of finding the best character string, we want to find the best arithmetic expression tree The leaves will be the variables and the non-terminals will be arithmetic operators It uses the same ideas of crossover and mutation to produce the arithmetic expression tree that maximizes the fitness function. 25

Example: Classification and Quantification of Facial Abnormalities Input is 3D meshes of faces Disease is 22q11.2 Deletion Syndrome. Multiple different facial abnormalities We’d like to assign severity scores to the different abnormalities, so need a single number to represent our analysis of a portion of the face. 26

27 Learning 3D Shape Quantification Analyze 22q11.2DS and 9 associated facial features Goal: quantify different shape variations in different facial abnormalities

28 Learning 3D Shape Quantification - 2D Histogram Using the azimuth and elevation angles of the surface normal vectors of the points in the selected region. (figure: 2D histogram with azimuth and elevation axes)

29 Learning 3D Shape Quantification - Feature Selection Determine most discriminative bins Use Adaboost learning Obtain positional information of important region on face

30 Learning 3D Shape Quantification - Feature Combination Use Genetic Programming (GP) to evolve mathematical expression Start with random population –Individuals are evaluated with fitness measure –Best individuals reproduce to form new population

31 Learning 3D Shape Quantification - Genetic Programming Individual: –Tree structure –Terminals, e.g., variables and constants: 3, 5, x, y, … –Function set, e.g., +, -, *, … –Fitness measure, e.g., sum of squares, … (figure: expression tree for 5*(x+y), with * at the root, children 5 and +, and leaves x and y)
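
A small Python sketch of how a GP individual such as 5*(x+y) can be represented and evaluated as an expression tree; the Node class and the particular function set are illustrative assumptions.

    import operator

    FUNCTIONS = {'+': operator.add, '-': operator.sub, '*': operator.mul}   # function set

    class Node:
        """Expression-tree node: an operator with two children, or a terminal
        (a numeric constant or a variable name)."""
        def __init__(self, value, left=None, right=None):
            self.value, self.left, self.right = value, left, right

        def evaluate(self, env):
            if self.value in FUNCTIONS:                  # internal node: apply the operator
                return FUNCTIONS[self.value](self.left.evaluate(env),
                                             self.right.evaluate(env))
            if isinstance(self.value, str):              # variable terminal
                return env[self.value]
            return self.value                            # constant terminal

    tree = Node('*', Node(5), Node('+', Node('x'), Node('y')))   # the tree for 5*(x+y)
    print(tree.evaluate({'x': 2, 'y': 3}))                       # prints 25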

32 Learning 3D Shape Quantification - Feature Combination 22q11.2DS dataset –Assessed by craniofacial experts –Ground truth is the union of expert scores Goal: classify individuals according to a given facial abnormality

33 Learning 3D Shape Quantification - Feature Combination Individual –Terminal: selected histogram bins –Function set: +, -, *, min, max, sqrt, log, 2x, 5x, 10x –Fitness measure: F1-measure = 2 * precision * recall / (precision + recall), where precision = TP/(TP + FP) and recall = TP/(all positives) Example individual: X6 + X7 + (max(X7,X6) - sin(X8) + (X6+X6))

34 Learning 3D Shape Quantification - Experiment 1 Objective: investigate function sets –Combo1 = {+,-,*,min,max} –Combo2 = {+,-,*,min,max,sqrt,log2,log10} –Combo3 = {+,-,*,min,max, 2x,5x,10x,20x,50x,100x} –Combo4 = {+,-,*,min,max,sqrt,log2,log10, 2x,5x,10x,20x,50x,100x}

35 Learning 3D Shape Quantification - Experiment 1 Best F-measure out of 10 runs

36 Tree structure for quantifying midface hypoplasia The Xi are the selected histogram bins from an azimuth-elevation histogram of the surface normals of the face.

37 Learning 3D Shape Quantification - Experiment 2 Objective: compare local facial shape descriptors

38 Learning 3D Shape Quantification - Experiment 3 Objective: predict 22q11.2DS

Local Search in Continuous Spaces Given a continuous state space S = {(x1, x2, …, xN) | xi ∈ R} Given a continuous objective function f(x1, x2, …, xN) The gradient of the objective function is a vector ∇f = (∂f/∂x1, ∂f/∂x2, …, ∂f/∂xN) The gradient gives the magnitude and direction of the steepest slope at a point. 40

Local Search in Continuous Spaces To find a maximum, the basic idea is to set ∇f = 0. Then updating the current state becomes x ← x + α∇f(x), where α is a small constant. The theory behind this is taught in numerical methods classes. Your book suggests the Newton-Raphson method. Luckily there are packages… 41
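
A tiny Python sketch of the gradient-ascent update x ← x + α∇f(x) on a made-up objective; the function, step size, and iteration count are purely illustrative.

    def gradient_ascent(x, grad_f, alpha=0.1, iters=100):
        """Repeatedly step in the direction of the gradient of f."""
        for _ in range(iters):
            g = grad_f(x)
            x = [xi + alpha * gi for xi, gi in zip(x, g)]
        return x

    # example: f(x, y) = -(x - 1)^2 - (y + 2)^2 has gradient (-2(x - 1), -2(y + 2))
    # and its maximum at (1, -2); the search converges there from (0, 0)
    print(gradient_ascent([0.0, 0.0], lambda v: [-2 * (v[0] - 1), -2 * (v[1] + 2)]))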

Computer Vision Pose Estimation Example 42 (figure captions: pose from 6 point correspondences; pose from ellipse-circle correspondence; pose from both 6 points and ellipse-circle correspondences) I have a 3D model of an object and an image of that object. I want to find the pose: the position and orientation of the camera.

Computer Vision Pose Estimation Example 43 Initial pose from points/ellipses and final pose after optimization. The optimization was searching a 6D space: (x,y,z,θx,θy,θz) The fitness function was how well the projection of the 3D object lined up with the edges on the image.

Searching with Nondeterministic Actions Vacuum World (actions = {left, right, suck}) 45

Searching with Nondeterministic Actions In the nondeterministic case, the result of an action can vary. Erratic Vacuum World: When sucking a dirty square, it cleans it and sometimes cleans up dirt in an adjacent square. When sucking a clean square, it sometimes deposits dirt on the carpet. 46

Generalization of State-Space Model 1. Generalize the transition function to return a set of possible outcomes. old f: S x A -> S new f: S x A -> 2^S 2. Generalize the solution to a contingency plan. if state = s then action-set-1 else action-set-2 3. Generalize the search tree to an AND-OR tree. 47

AND-OR Search Tree 48 (figure: an AND-OR search tree; at an OR node the agent chooses an action, at an AND node all possible outcomes must be checked, and one leaf is the same as an ancestor)
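
For reference, a rough Python sketch in the spirit of the textbook's AND-OR search, which the tree above illustrates; the problem interface (is_goal, actions, and results returning a set of outcome states) is a hypothetical placeholder, and states are assumed hashable. It returns a contingency plan, or None on failure.

    def and_or_search(problem):
        return or_search(problem.initial, problem, path=[])

    def or_search(state, problem, path):
        """OR node: the agent chooses one action all of whose outcomes succeed."""
        if problem.is_goal(state):
            return []                            # empty plan: already at a goal
        if state in path:
            return None                          # same as an ancestor: fail on this branch
        for action in problem.actions(state):
            subplan = and_search(problem.results(state, action), problem, [state] + path)
            if subplan is not None:
                return [action, subplan]
        return None

    def and_search(states, problem, path):
        """AND node: every possible outcome state must lead to a goal."""
        plans = {}
        for s in states:
            plan = or_search(s, problem, path)
            if plan is None:
                return None
            plans[s] = plan
        return plans                             # contingency plan: outcome state -> subplan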

Searching with Partial Observations The agent does not always know its state! Instead, it maintains a belief state: a set of possible states it might be in. Example: a robot can be used to build a map of a hostile environment. It will have sensors that allow it to “see” the world. 49

Belief State Space for Sensorless Agent 50 (figure: belief-state space showing the initial belief state and belief states where the agent knows it's on the left, knows it's on the right, knows the left side is clean, or knows its own side is clean)

Online Search Problems Active agent – executes actions – acquires percepts from sensors – deterministic and fully observable – has to perform an action to know the outcome Examples –Web search –Autonomous vehicle 51