1
LOCAL SEARCH AND CONTINUOUS SEARCH
2
Local search algorithms: In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution. In such cases we can use local search algorithms, which keep a single "current" state (or sometimes a small set of states) and try to improve it.
3
Example: n-queens. Put n queens on an n × n board with no two queens on the same row, column, or diagonal.
6
Local Search: Operates by keeping track of only the current node and moving only to neighbors of that node. Often used for optimization problems, scheduling, task assignment, and many other problems where the goal is to find the best state according to some objective function.
7
A different view of search
8
Hill-climbing search: Consider the next possible moves (i.e., neighbors) and pick the one that improves things the most. "Like climbing Everest in thick fog with amnesia."
9
Hill-climbing search
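A minimal sketch of the hill-climbing loop described here, assuming a neighbors() function that enumerates successor states and an objective value() to maximize; the toy objective is purely illustrative.

```python
# Steepest-ascent hill climbing: repeatedly move to the best neighbor
# until no neighbor improves on the current state.
def hill_climb(state, neighbors, value):
    while True:
        best = max(neighbors(state), key=value, default=None)
        if best is None or value(best) <= value(state):
            return state          # local maximum (or edge of a plateau)
        state = best

# Toy illustration: maximize f(x) = -(x - 7)^2 over the integers,
# where a state's neighbors are x - 1 and x + 1.
f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climb(0, step, f))     # climbs to 7
```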
10
Hill-climbing search: 8-queens problem. h = number of pairs of queens that are attacking each other, either directly or indirectly; h = 17 for the state shown on the slide.
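The heuristic h can be computed directly. A small sketch, assuming the common complete-state representation in which queens[c] gives the row of the queen in column c (one queen per column); this representation is an assumption, not something the slide specifies.

```python
# h for n-queens: number of pairs of queens attacking each other.
# queens[c] is the row of the queen in column c (one queen per column).
def attacking_pairs(queens):
    h, n = 0, len(queens)
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            same_row = queens[c1] == queens[c2]
            same_diag = abs(queens[c1] - queens[c2]) == abs(c1 - c2)
            if same_row or same_diag:
                h += 1
    return h

print(attacking_pairs([0, 1, 2, 3]))   # 6: every pair shares a diagonal
```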
11
Hill-climbing search: 8-queens problem. Five steps later: a local minimum with h = 1 (a common problem with hill climbing).
12
Drawbacks of hill climbing. Problem: depending on the initial state, it can get stuck in local maxima (or local minima, when minimizing a cost such as h).
13
Approaches to escaping local minima: try again, or allow sideways moves.
14
Try, try again: Run the algorithm some number of times and return the best solution. The initial start location is usually chosen randomly. If you run it "enough" times, you will get the answer (in the limit). Drawback: takes lots of time.
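A sketch of this random-restart idea: run hill climbing from several randomly chosen starting states and keep the best local optimum found. The helper function and the bumpy toy objective are illustrative assumptions.

```python
import math
import random

# Plain hill climbing, used as the inner loop of random restarts.
def hill_climb(state, neighbors, value):
    while True:
        best = max(neighbors(state), key=value, default=None)
        if best is None or value(best) <= value(state):
            return state
        state = best

# Run hill climbing n_restarts times from random starts; return the best result.
def random_restart(n_restarts, random_state, neighbors, value):
    results = [hill_climb(random_state(), neighbors, value) for _ in range(n_restarts)]
    return max(results, key=value)

# Bumpy toy objective over the integers 0..99 with many local maxima.
f = lambda x: math.sin(x / 3.0) + x / 50.0
step = lambda x: [y for y in (x - 1, x + 1) if 0 <= y < 100]
print(random_restart(20, lambda: random.randrange(100), step, f))
```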
15
Sideways moves: If stuck on a ridge, waiting a while and allowing flat moves may get us unstuck. Questions: How long is a while? How likely are we to become unstuck?
16
Any other extensions? First-choice hill climbing: generate successors randomly until a good one is found (move quality: as good or better). Look three moves ahead: can get unstuck from certain areas, but is more inefficient and might not be any better.
17
Comparison of approaches for the 8-queens problem:
Technique | Success rate | Average number of moves
Hill climbing | 14% | 3.9
Hill climbing + 6 restarts if needed | 65% | 11.5
Hill climbing + up to 100 sideways moves if needed | 94% | 21
There is a tradeoff between success rate and number of moves: as the success rate approaches 100%, the number of moves increases rapidly.
18
Nice properties of local search: Can often get "close" to the best state (when is this useful?). Can trade off time and performance. Can be applied to continuous problems, e.g. first-choice hill climbing. More on this later.
19
Simulated annealing. Insight: all of the modifications to hill climbing are really about injecting variance, because we don't want to get stuck in local maxima or on plateaus. Idea: explicitly inject variability into the search process.
20
Properties of simulated annealing: More variability at the beginning of the search, since you have little confidence you're in the right place. Variability decreases over time, because we don't want to move away from a good solution. The probability of picking a move is related to how good it is: sideways moves or slight decreases are more likely than major decreases.
21
How simulated annealing works: At each step, we have a temperature T. Pick the next action semi-randomly; higher temperature increases randomness. Select an action according to its goodness and the temperature. Decrease the temperature slightly at each time step until it reaches 0 (no randomness).
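A minimal sketch of this loop. The geometric cooling schedule and the standard acceptance rule (always take improvements, take worse moves with probability exp(delta / T)) are assumed details, and the toy objective is illustrative.

```python
import math
import random

# Simulated annealing: pick a random neighbor; always accept improvements,
# accept worse moves with probability exp(delta / T), and cool T toward 0.
def simulated_annealing(state, neighbors, value, t0=10.0, cooling=0.995, t_min=1e-3):
    t = t0
    while t > t_min:
        nxt = random.choice(neighbors(state))
        delta = value(nxt) - value(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt
        t *= cooling                      # temperature decreases each step
    return state

# Bumpy toy objective over the integers 0..99.
f = lambda x: math.sin(x / 3.0) + x / 50.0
step = lambda x: [y for y in (x - 1, x + 1) if 0 <= y < 100]
print(simulated_annealing(0, step, f))
```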
22
Local Beam Search: Keep track of k states rather than just one. Start with k randomly generated states. At each iteration, all the successors of all k states are generated. If any one is a goal state, stop; else select the k best successors from the complete list and repeat. Results in the states getting closer together over time.
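A sketch of local beam search applied to n-queens, with a compact attacking-pairs count included so the example is self-contained; the beam width k and the iteration cap are illustrative parameters, not the slide's.

```python
import random

def conflicts(queens):
    # Number of attacking pairs; queens[c] is the row of the queen in column c.
    return sum(1 for a in range(len(queens)) for b in range(a + 1, len(queens))
               if queens[a] == queens[b] or abs(queens[a] - queens[b]) == abs(a - b))

def successors(queens):
    # Move one queen to any other row within its column.
    n = len(queens)
    return [queens[:c] + (r,) + queens[c + 1:]
            for c in range(n) for r in range(n) if r != queens[c]]

def local_beam_search(n=8, k=10, max_iters=200):
    states = [tuple(random.randrange(n) for _ in range(n)) for _ in range(k)]
    for _ in range(max_iters):
        if any(conflicts(s) == 0 for s in states):
            return next(s for s in states if conflicts(s) == 0)   # goal found
        pool = {succ for s in states for succ in successors(s)}
        states = sorted(pool, key=conflicts)[:k]                  # keep the k best
    return min(states, key=conflicts)

print(local_beam_search())
```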
23
Stochastic Local Beam Search: Designed to prevent all k states from clustering together. Instead of choosing the k best, choose k successors at random, with a higher probability of choosing better states. Terminology: stochastic means random.
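Only the selection step changes. One common way to choose k successors at random with higher probability for better states is fitness-weighted sampling; the names and fitness values below are made up for illustration.

```python
import random

# Stochastic selection step: instead of keeping the k best successors,
# sample k of them with probability proportional to their fitness
# (sampling with replacement, so duplicates are possible).
def stochastic_select(successors, fitness, k):
    weights = [fitness(s) for s in successors]
    return random.choices(successors, weights=weights, k=k)

# Example with made-up fitness values (higher is better).
pool = ["A", "B", "C", "D"]
print(stochastic_select(pool, {"A": 5, "B": 3, "C": 1, "D": 1}.get, 2))
```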
24
Genetic algorithms: Inspired by nature. New states are generated from two parent states, with some randomness thrown into the mix as well.
25
Genetic Algorithms: Initialize the population (k random states). Select a subset of the population for mating. Generate children via crossover (continuous variables: interpolate; discrete variables: replace parts of their representing variables). Apply mutation (add randomness to the children's variables). Evaluate the fitness of the children. Replace the worst parents with the children.
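A sketch of crossover and mutation as described here: single-point crossover for discrete variables, interpolation for continuous ones, and a small per-variable mutation chance. The operator details (single cut point, mutation rate, value range) are assumptions.

```python
import random

def crossover_discrete(p1, p2):
    # Single-point crossover: the child takes a prefix from one parent and
    # the rest from the other (discrete variables are replaced wholesale).
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def crossover_continuous(p1, p2, alpha=None):
    # Interpolate between the parents' variables.
    alpha = random.random() if alpha is None else alpha
    return [alpha * a + (1 - alpha) * b for a, b in zip(p1, p2)]

def mutate(child, rate=0.1, values=range(1, 9)):
    # With small probability, replace each variable with a random value.
    return [random.choice(values) if random.random() < rate else v for v in child]

# Discrete example using an 8-queens digit encoding (the second parent is made up).
print(mutate(crossover_discrete([3, 2, 7, 5, 2, 4, 1, 1], [2, 4, 4, 1, 5, 1, 2, 4])))
```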
26
Genetic algorithms. (Figure: 8-queens states encoded as digit strings such as 32752411, shown undergoing selection, crossover, and mutation.)
27
Genetic algorithms. Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7 / 2 = 28). Selection probabilities: 24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.
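Reproducing the slide's arithmetic: each individual's selection probability is its fitness divided by the population's total fitness.

```python
# Four individuals with fitness 24, 23, 20, 11 (non-attacking pairs).
# Selection probability = fitness / total fitness.
fitness = [24, 23, 20, 11]
total = sum(fitness)                       # 78
probs = [f / total for f in fitness]
print([round(100 * p) for p in probs])     # [31, 29, 26, 14] percent
```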
28
Genetic algorithms Probability of selection is weighted by the normalized fitness function.
29
Genetic algorithms Probability of selection is weighted by the normalized fitness function. Crossover from the top two parents.
30
Genetic algorithms
31
Genetic Algorithms: 1. Initialize the population (k random states). 2. Calculate the fitness function. 3. Select pairs for crossover. 4. Apply mutation. 5. Evaluate the fitness of the children. 6. From the resulting population of 2k individuals, probabilistically pick k of the best. 7. Repeat.
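Putting the listed steps together as a sketch for 8-queens, with fitness equal to the number of non-attacking pairs (maximum 28). The population size, mutation rate, and the cubed-fitness weighting used to "probabilistically pick k of the best" are assumptions rather than the slide's exact choices.

```python
import random

N, MAX_FIT = 8, 28                      # 8 queens; 28 = 8*7/2 non-attacking pairs

def fitness(queens):
    # Non-attacking pairs; queens[c] is the row of the queen in column c.
    attacks = sum(1 for a in range(N) for b in range(a + 1, N)
                  if queens[a] == queens[b] or abs(queens[a] - queens[b]) == abs(a - b))
    return MAX_FIT - attacks

def reproduce(p1, p2, mutation_rate=0.1):
    cut = random.randrange(1, N)                         # single-point crossover
    child = list(p1[:cut] + p2[cut:])
    if random.random() < mutation_rate:                  # occasional mutation
        child[random.randrange(N)] = random.randrange(N)
    return tuple(child)

def genetic_algorithm(k=50, generations=1000):
    population = [tuple(random.randrange(N) for _ in range(N)) for _ in range(k)]
    for _ in range(generations):
        weights = [fitness(s) for s in population]
        children = [reproduce(*random.choices(population, weights=weights, k=2))
                    for _ in range(k)]
        # From the 2k parents and children, probabilistically keep k (best favored).
        pool = population + children
        population = random.choices(pool, weights=[fitness(s) ** 3 for s in pool], k=k)
        best = max(population, key=fitness)
        if fitness(best) == MAX_FIT:
            return best
    return max(population, key=fitness)

print(genetic_algorithm())
```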
32
Searching Continuous Spaces. Continuous: infinitely many values. Discrete: a limited number of distinct, clearly defined values. In a continuous space we cannot consider all next possible moves (infinite branching factor), which makes classic hill climbing impossible.
33
Example
34
What can we do to solve this problem?
35
Searching Continuous Space: Discretize the state space. Turn it into a grid and do what we've always done.
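A sketch of this grid idea: space candidate points a fixed step apart and run ordinary hill climbing over those grid neighbors. The objective and step size are illustrative.

```python
# Discretized hill climbing on a continuous 2-D objective: treat points
# spaced `step` apart as grid neighbors and climb as usual.
def grid_hill_climb(f, start, step=0.1):
    x, y = start
    while True:
        neighbors = [(x + dx, y + dy)
                     for dx in (-step, 0, step) for dy in (-step, 0, step)
                     if (dx, dy) != (0, 0)]
        best = max(neighbors, key=lambda p: f(*p))
        if f(*best) <= f(x, y):
            return (x, y)                 # no grid neighbor improves
        x, y = best

# Illustrative smooth objective with a single peak at (1, 2).
f = lambda x, y: -((x - 1) ** 2 + (y - 2) ** 2)
print(grid_hill_climb(f, (0.0, 0.0)))     # ends within one grid step of (1, 2)
```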
36
Searching Continuous Space. Problem: the gradient of the objective can be hard or impossible to calculate analytically. Solution: approximate the gradient through sampling.
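A sketch of approximating the gradient through sampling, using central finite differences: nudge each coordinate up and down by a small eps and look at how the objective changes. The eps value is an assumption.

```python
# Estimate the gradient of f at x by sampling: central finite differences,
# (f(x + eps*e_i) - f(x - eps*e_i)) / (2*eps) for each coordinate i.
def approx_gradient(f, x, eps=1e-4):
    grad = []
    for i in range(len(x)):
        hi = list(x); hi[i] += eps
        lo = list(x); lo[i] -= eps
        grad.append((f(hi) - f(lo)) / (2 * eps))
    return grad

f = lambda v: -((v[0] - 1) ** 2 + (v[1] - 2) ** 2)
print(approx_gradient(f, [0.0, 0.0]))   # close to the true gradient [2, 4]
```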
37
Choosing the step size: if it is very small, it takes a long time to reach the peak; if it is very big, it can overshoot the goal. What can we do? Start it high and decrease it with time, or make it higher for flatter parts of the space.
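A sketch of gradient ascent with a step size that starts high and decreases with time. The standard update x ← x + α∇f(x), the decay schedule, and the objective are illustrative assumptions rather than the slide's exact choices.

```python
# Gradient ascent with a decaying step size: start with a large alpha and
# shrink it each iteration so early steps explore and later steps settle.
def gradient_ascent(f, grad, x, alpha0=1.0, decay=0.9, iters=50):
    alpha = alpha0
    for _ in range(iters):
        g = grad(x)
        x = [xi + alpha * gi for xi, gi in zip(x, g)]
        alpha *= decay                    # step size decreases with time
    return x

f = lambda v: -((v[0] - 1) ** 2 + (v[1] - 2) ** 2)
grad = lambda v: [-2 * (v[0] - 1), -2 * (v[1] - 2)]
print(gradient_ascent(f, grad, [0.0, 0.0]))   # approaches the peak at (1, 2)
```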
38
Summary: Local search often finds an approximate solution (i.e., it ends in "good" but not "best" states). We can inject randomness to avoid getting stuck in local maxima. We can trade off time for a higher likelihood of success.
39
Real World Problems: “many real world problems have a landscape that looks more like a widely scattered family of balding porcupines on a flat floor, with miniature porcupines living on the tip of each porcupine needle, ad infinitum.” - Russell and Norvig
40
Questions?
41
From “Dear Student: I Don't Lie Awake At Night Thinking of Ways to Ruin Your Life” (Art Carden, for Forbes.com): “One of the popular myths of higher education is that professors are sadists who live to inflict psychological trauma on undergraduates. …” … “I do not ‘take off’ points. You earn them. The difference is not merely rhetorical, nor is it trivial. In other words, you start with zero points and earn your way to a grade.” … “this means that the burden of proof is on you to demonstrate that you have mastered the material. It is not on me to demonstrate that you have not.”