
LOCAL SEARCH AND CONTINUOUS SEARCH

Local search algorithms  In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution  In such cases, we can use local search algorithms  Keep a single "current" state (sometimes more than one) and try to improve it

Example: n-queens  Put n queens on an n × n board with no two queens on the same row, column, or diagonal

Local Search  Operates by keeping track of only the current node and moving only to neighbors of that node  Often used for:  Optimization problems  Scheduling  Task assignment  …many other problems where the goal is to find the best state according to some objective function

A different view of search

Hill-climbing search  Consider the next possible moves (i.e. neighbors)  Pick the one that improves things the most  "Like climbing Everest in thick fog with amnesia"

Hill-climbing search
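A minimal sketch of that loop in Python (hedged: `neighbors` and `value` are assumed problem-specific callables, where `neighbors(state)` returns a list of successor states and `value(state)` scores them; this is an illustration, not the textbook's exact pseudocode):

```python
def hill_climbing(state, neighbors, value):
    """Steepest-ascent hill climbing: repeatedly move to the best
    neighbor until no neighbor improves on the current state."""
    while True:
        candidates = neighbors(state)  # assumed to return a list
        if not candidates:
            return state
        best = max(candidates, key=value)
        if value(best) <= value(state):
            return state  # stuck at a local maximum or plateau
        state = best
```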

Hill-climbing search: 8-queens problem  h = number of pairs of queens that are attacking each other, either directly or indirectly  h = 17 for the example start state
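As a concrete illustration, here is one way to compute h (a sketch assuming the common encoding where `board[c]` gives the row of the queen in column c, so each column holds exactly one queen):

```python
def attacking_pairs(board):
    """h for the n-queens problem: the number of pairs of queens
    attacking each other. board[c] = row of the queen in column c,
    so only rows and diagonals need to be checked."""
    h = 0
    n = len(board)
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            same_row = board[c1] == board[c2]
            same_diagonal = abs(board[c1] - board[c2]) == c2 - c1
            if same_row or same_diagonal:
                h += 1
    return h
```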

Hill-climbing search: 8-queens problem  Five steps later… a local minimum with h = 1 (a common problem with hill climbing)

Drawbacks of hill climbing  Problem: depending on the initial state, it can get stuck in local maxima

Approaches to escaping local optima  Try again  Sideways moves

Try, try again  Run the algorithm some number of times and return the best solution  The initial start location is usually chosen randomly  If you run it "enough" times, you will get the answer (in the limit)  Drawback: takes lots of time
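A hedged sketch of this random-restart idea, built on the hill_climbing sketch above (`random_state` is an assumed problem-specific generator of random starting states):

```python
def random_restart(random_state, neighbors, value, restarts=10):
    """Run hill climbing from several random starts; keep the best result."""
    best = None
    for _ in range(restarts):
        result = hill_climbing(random_state(), neighbors, value)
        if best is None or value(result) > value(best):
            best = result
    return best
```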

Sideways moves  If stuck on a ridge, waiting awhile and allowing flat moves may get us unstuck… maybe  Questions  How long is "awhile"?  How likely are we to become unstuck?

Any other extensions?  First-choice hill climbing  Generate successors randomly until a good one (as good as or better than the current state) is found  Looking three moves ahead  Can get unstuck from certain areas  More inefficient  Might not be any better

Comparison of approaches for the 8-queens problem

Technique | Success rate | Average number of moves
Hill climbing | 14% | 3.9
Hill climbing + 6 restarts if needed | 65% | 11.5
Hill climbing + up to 100 sideways moves if needed | 94% | 21

There is a tradeoff between success rate and number of moves: as the success rate approaches 100%, the number of moves increases rapidly

Nice properties of local search  Can often get "close"  When is this useful?  Can trade off time and performance  Can be applied to continuous problems  E.g. first-choice hill climbing  More on this later…

Simulated annealing  Insight: all of the modifications to hill climbing are really about injecting variance  We don't want to get stuck in local maxima or on plateaus  Idea: explicitly inject variability into the search process

Properties of simulated annealing  More variability at the beginning of search  Since you have little confidence you're in the right place  Variability decreases over time  We don't want to move away from a good solution  The probability of picking a move is related to how good it is  Sideways moves or slight decreases are more likely than major decreases

How simulated annealing works  At each step, we have a temperature T  Pick the next action semi-randomly  Higher temperature increases randomness  Select the action according to its goodness and the temperature  Decrease the temperature slightly at each time step until it reaches 0 (no randomness)
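A short sketch of that loop (hedged: the geometric cooling rate and the exp(delta/T) acceptance rule are common choices, assumed here rather than taken from the slides):

```python
import math
import random

def simulated_annealing(state, neighbors, value, t=1.0, cooling=0.995, t_min=1e-3):
    """Accept any improving move; accept a worsening move with
    probability exp(delta / T), which shrinks as T cools toward 0."""
    while t > t_min:
        nxt = random.choice(neighbors(state))  # pick a move semi-randomly
        delta = value(nxt) - value(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt
        t *= cooling  # geometric cooling schedule
    return state
```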

Local Beam Search  Keep track of k states rather than just one  Start with k randomly generated states  At each iteration, all the successors of all k states are generated  If any one is a goal state, stop; else select the k best successors from the complete list and repeat  Results in states getting closer together over time
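A minimal sketch of local beam search (hedged: `random_state`, `neighbors`, `value`, and `is_goal` are assumed problem-specific callables, and `neighbors` is assumed to always return at least one successor):

```python
def local_beam_search(k, random_state, neighbors, value, is_goal, max_iters=1000):
    """Keep the k best states; expand all of them each iteration."""
    states = [random_state() for _ in range(k)]
    for _ in range(max_iters):
        successors = [s2 for s in states for s2 in neighbors(s)]
        for s in successors:
            if is_goal(s):
                return s
        states = sorted(successors, key=value, reverse=True)[:k]  # k best
    return max(states, key=value)
```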

Stochastic Local Beam Search  Designed to prevent all k states from clustering together  Instead of choosing the k best, choose k successors at random, with higher probability of choosing better states  (Terminology: "stochastic" means random)

Genetic algorithms  Inspired by nature  New states are generated from two parent states, with some randomness thrown into the mix as well…

Genetic Algorithms  Initialize population (k random states)  Select a subset of the population for mating  Generate children via crossover  Continuous variables: interpolate  Discrete variables: swap parts of the parents' representations  Mutation (add randomness to the children's variables)  Evaluate fitness of children  Replace the worst parents with the children

Genetic algorithms

Genetic algorithms  Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28)  For a population with fitness values 24, 23, 20, and 11: 24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, … etc.

Genetic algorithms Probability of selection is weighted by the normalized fitness function.

Genetic algorithms Probability of selection is weighted by the normalized fitness function. Crossover from the top two parents.

Genetic algorithms

Genetic Algorithms 1. Initialize population (k random states) 2. Calculate the fitness function 3. Select pairs for crossover 4. Apply mutation 5. Evaluate fitness of children 6. From the resulting population of 2k individuals, probabilistically pick k of the best 7. Repeat
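A compact sketch of that loop for n-queens (hedged: single-point crossover, the mutation rate, fitness-weighted parent selection, and the replacement step, which here keeps the k best deterministically rather than probabilistically, are all illustrative assumptions):

```python
import random

def genetic_algorithm(k, n, fitness, generations=100, p_mutate=0.1):
    """GA for n-queens: each state is a list of row indices, one per column."""
    pop = [[random.randrange(n) for _ in range(n)] for _ in range(k)]
    for _ in range(generations):
        # parent selection weighted by (normalized) fitness
        weights = [fitness(s) + 1e-9 for s in pop]  # avoid all-zero weights
        children = []
        for _ in range(k):
            mom, dad = random.choices(pop, weights=weights, k=2)
            cut = random.randrange(1, n)        # single-point crossover
            child = mom[:cut] + dad[cut:]
            if random.random() < p_mutate:      # mutation
                child[random.randrange(n)] = random.randrange(n)
            children.append(child)
        # keep the k fittest of the 2k parents + children
        pop = sorted(pop + children, key=fitness, reverse=True)[:k]
    return max(pop, key=fitness)

# Example usage for 8 queens, reusing the attacking_pairs sketch above:
# best = genetic_algorithm(k=20, n=8, fitness=lambda s: 28 - attacking_pairs(s))
```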

Searching Continuous Spaces  Continuous: infinitely many values  Discrete: a limited number of distinct, clearly defined values  In a continuous space, we cannot consider all possible next moves (infinite branching factor)  This makes classic hill climbing impossible

Example  What can we do to solve this problem?

Searching Continuous Space  Discretize the state space  Turn it into a grid and do what we've always done
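A one-dimensional sketch of that discretization idea (hedged: the objective `f` and the interval bounds are assumptions):

```python
def grid_search(f, lo, hi, steps=100):
    """Discretize the interval [lo, hi] into a grid of candidate
    points and return the best grid point under f."""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return max(xs, key=f)
```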

Searching Continuous Space  Alternatively, follow the gradient: x ← x + α∇f(x), where α is the step size  Problem: the gradient ∇f(x) can be hard or impossible to calculate  Solution: approximate the gradient through sampling

Choosing the step size α  Very small α: takes a long time to reach the peak  Very big α: can overshoot the goal  What can we do…?  Start α high and decrease it with time  Make it higher for flatter parts of the space
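A sketch of gradient ascent with a sampled (finite-difference) gradient and a decaying step size, as the bullets above suggest (hedged: the one-dimensional objective `f`, the sampling width `eps`, and the decay schedule are assumptions):

```python
def gradient_ascent(f, x, alpha=0.5, decay=0.99, eps=1e-4, iters=1000):
    """Climb a 1-D objective by estimating the gradient from two
    nearby samples; alpha starts high and shrinks over time."""
    for _ in range(iters):
        grad = (f(x + eps) - f(x - eps)) / (2 * eps)  # sampled gradient
        x += alpha * grad
        alpha *= decay  # decrease the step size with time
    return x
```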

Summary  Local search often finds an approximate solution  (i.e. it ends in "good" but not "best" states)  We can inject randomness to avoid getting stuck in local maxima  We can trade off time for a higher likelihood of success

Real World Problems  "Many real world problems have a landscape that looks more like a widely scattered family of balding porcupines on a flat floor, with miniature porcupines living on the tip of each porcupine needle, ad infinitum." – Russell and Norvig

Questions?

"One of the popular myths of higher education is that professors are sadists who live to inflict psychological trauma on undergraduates. …" … "I do not 'take off' points. You earn them. The difference is not merely rhetorical, nor is it trivial. In other words, you start with zero points and earn your way to a grade." … "this means that the burden of proof is on you to demonstrate that you have mastered the material. It is not on me to demonstrate that you have not."  Dear Student: I Don't Lie Awake At Night Thinking of Ways to Ruin Your Life  Art Carden, for Forbes.com