Slide 1: Lecture 5 - Local Search
Department of Computer Science
https://sites.google.com/site/mtsiddiquecs/ai

Slide 2: This lecture
- The assumptions of classical search are relaxed: local search operates in the state space by evaluating and modifying one or more current states, rather than systematically exploring paths from an initial state.
- Local search algorithms covered: simulated annealing, genetic algorithms.
- Further relaxing the assumptions of determinism and observability: if an agent cannot predict exactly what percept it will receive, it must handle contingencies (partial observability).
- Online search: the state space is initially unknown and must be explored.

Slide 3: Local search algorithms and optimization problems
- Previously: we searched by exploring the search space, keeping one or more paths in memory and recording an explored set; when a goal was found, the path to it was the solution.
- However, in many cases the path to the goal is irrelevant (e.g., 8-queens).
- We need a different kind of algorithm, one that does not worry about the path:
  - find any state that satisfies the constraints (e.g., no two classes at the same time), or
  - find an optimal state (e.g., highest possible value, least possible cost).
- This is the role of local search algorithms.

Slide 4: Local search algorithms
- Iterative improvement: use a single current node (rather than multiple paths) and move only to neighbors of that node.
- Usually keep no memory of the paths visited.
- Advantages:
  - use very little memory (a constant amount);
  - often find reasonable solutions in large or infinite state spaces (online or offline).
- Good for pure optimization problems: find the best state according to an objective function. Many such problems have no "goal test" or "path cost".

Slide 5: Example: the n-queens problem
- Put n queens on an n x n chessboard so that no two queens share the same row, column, or diagonal.
- Iterative improvement: start with one queen in each column, then repeatedly move a queen to reduce the number of conflicts.

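To make the iterative-improvement idea concrete, here is a minimal Python sketch (the representation and all names are illustrative assumptions, not from the slides): a board is a list in which board[col] gives the row of the queen in column col, and each step moves whichever queen most reduces the number of attacking pairs.

```python
import random

def conflicts(board):
    """Count pairs of queens that attack each other (same row or diagonal)."""
    n = len(board)
    return sum(1 for c1 in range(n) for c2 in range(c1 + 1, n)
               if board[c1] == board[c2]                  # same row
               or abs(board[c1] - board[c2]) == c2 - c1)  # same diagonal

def improve(board):
    """One improvement step: move one queen within its column to the row
    that yields the fewest conflicts; return the board unchanged if stuck."""
    n = len(board)
    best, best_cost = board, conflicts(board)
    for col in range(n):
        for row in range(n):
            if row == board[col]:
                continue
            candidate = board[:col] + [row] + board[col + 1:]
            cost = conflicts(candidate)
            if cost < best_cost:
                best, best_cost = candidate, cost
    return best

# Start with one queen per column, in random rows, and improve until stuck.
board = [random.randrange(8) for _ in range(8)]
while True:
    nxt = improve(board)
    if nxt == board:
        break
    board = nxt
print(board, "conflicts:", conflicts(board))
```

Like any greedy improvement scheme, this can stall with a few conflicts remaining; that is exactly the local-optimum problem the following slides address.
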
Slide 6: Local search algorithms
- State-space landscape:
  - location (defined by the state);
  - elevation (defined by the value of the heuristic cost function or the objective function).
- If elevation = cost, the aim is to find the lowest valley (a global minimum).
- If elevation = objective function, the aim is to find the highest peak (a global maximum).
- A complete local search algorithm always finds a goal if one exists; an optimal algorithm always finds a global minimum/maximum.

Slide 7: Local and global optima
- Global optimum: a solution that is better than all other solutions (or no worse than any other solution).
- Local optimum: a solution that is better than nearby solutions. A local optimum is not necessarily a global one.

Slide 8: Global / local (max/min)
- A local max/min is defined over a small area. For instance, if a point is lower than the nearest points to its left and right, then it is a local minimum. There can be many local maxima and minima across an entire graph.
- A global max/min is the highest/lowest point on the entire graph. There can be only one global maximum and/or minimum on a graph, and there may be none at all.

Slide 9: Global / local (max/min)
(figure illustrating local and global maxima and minima)

Slide 10: Hill-climbing search
- A loop that continually moves in the direction of increasing value (uphill).
- Terminates at a peak, where no neighbor has a higher value.
- No search tree: record only the current node's state and its objective-function value.
- No look-ahead beyond the immediate neighbors of the current state.

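A minimal Python sketch of this loop (the function names and the neighbors/value interface are illustrative assumptions, not from the slides):

```python
def hill_climb(state, neighbors, value):
    """Steepest-ascent hill climbing: move to the best neighbor until
    no neighbor has a higher objective value (i.e., a peak is reached)."""
    while True:
        best = max(neighbors(state), key=value, default=None)
        if best is None or value(best) <= value(state):
            return state  # at a local maximum or on a plateau
        state = best

# Example: maximize f(x) = -(x - 3)^2 over the integers, moving +-1 per step.
f = lambda x: -(x - 3) ** 2
print(hill_climb(0, lambda x: [x - 1, x + 1], f))  # reaches the peak at x = 3
```
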
Slide 11: Hill-climbing search
- Expands the node that improves the quality towards a goal; similar to hiking uphill.
- Greedy local search: grabs a good neighbor state without thinking ahead about where to go next.
- Disadvantages: local maxima (a medium-sized hill) and plateaux (flat areas).
- One solution: random restarting.

Slide 12: Hill-climbing search
- Possible problem: might get stuck.
- Local maximum: a peak that is higher than its neighbors but lower than the global maximum.

Slide 13: Hill-climbing problems
- Plateau: a flat area of the state-space landscape.
  - It could be a flat local maximum.
  - It could be a shoulder (from which progress is possible).

Slide 14: Hill-climbing search
(figure)

Slide 15: Hill-climbing search
- Hill climbing that never makes a "downhill" move is guaranteed to be incomplete, because it can get stuck on a local maximum.
- A purely random walk, by contrast, is complete but extremely inefficient.

Slide 16: Alternative hill-climbing variants
- Stochastic hill climbing: choose at random from among the uphill moves.
- First-choice hill climbing: generate successors randomly until one is better than the current state; good when a state has many successors.
- Random-restart hill climbing: conduct a series of hill-climbing searches from randomly generated initial states, keeping the best result.

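A sketch of random-restart hill climbing, reusing the hill_climb function above (the random_state hook is a hypothetical problem-specific generator of random initial states):

```python
def random_restart(random_state, neighbors, value, restarts=20):
    """Run hill climbing from several random initial states and
    return the best local maximum found across all runs."""
    best = None
    for _ in range(restarts):
        result = hill_climb(random_state(), neighbors, value)
        if best is None or value(result) > value(best):
            best = result
    return best
```
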
Slide 17: Simulated annealing
- hill climbing (efficiency) + random walk (completeness) = simulated annealing

Slide 18: Simulated annealing: basic idea
- From the current state, pick a random successor state.
- If it has a better value than the current state, accept the transition, i.e., use the successor as the new current state.
- Otherwise, do not give up: flip a coin and accept the transition with some probability, which is lower the worse the successor is.
- So we sometimes accept a move that "un-optimizes" the value function a little, with non-zero probability.

Slide 19: Annealing in metallurgy
- A process used to temper or harden metals or glass by heating them to a high temperature and then gradually cooling them.
- This allows the material to reach a low-energy crystalline state.

Slide 20: Simulated annealing search
(figure)

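A minimal sketch of the simulated-annealing loop described on the preceding slides, assuming a simple exponential cooling schedule; the parameter values and names are illustrative, not from the lecture:

```python
import math
import random

def simulated_annealing(state, neighbors, value,
                        t0=1.0, alpha=0.995, t_min=1e-4):
    """Accept any uphill move; accept a downhill move with probability
    exp(delta / T), which shrinks as the move gets worse and as T cools."""
    t = t0
    current = state
    while t > t_min:
        nxt = random.choice(neighbors(current))
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt  # sometimes "un-optimize" a little
        t *= alpha  # gradually cool, as in annealing metal
    return current
```
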
Slide 21: Local beam search
- Instead of keeping only one node in memory, keep track of k states.
- Start with k randomly generated states.
- At each step, generate the successors of all k states.
- If any of them is a goal, the algorithm halts; otherwise it selects the k best successors from the complete list and repeats.

Slide 22: Local beam search
- This may look like running k searches in parallel rather than in sequence, but:
  - in random-restart search, each search runs independently of the others;
  - in local beam search, useful information is passed among the k parallel search threads, so unfruitful searches are quickly abandoned and resources move to where progress is being made.
- Drawback: lack of diversity among the k states; they can quickly become concentrated in a small region of the search space (making it an expensive version of hill climbing).
- Alternative: stochastic beam search. Instead of choosing the best k successors, choose k successors at random, with the probability of choosing a given successor being an increasing function of its value. This is similar to natural selection. A sketch of both variants follows below.

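Here is the promised sketch of local beam search with an optional stochastic variant (all names, and the weight-shifting trick used to make sampling weights positive, are my own assumptions):

```python
import random

def local_beam_search(initial_states, neighbors, value,
                      steps=100, stochastic=False):
    """Keep k states; at each step pool all successors and keep k of them.
    Deterministic: the k best. Stochastic: k sampled in proportion to value."""
    states = list(initial_states)
    k = len(states)
    for _ in range(steps):
        pool = [s for st in states for s in neighbors(st)]
        if not pool:
            break
        if stochastic:
            # Higher-valued successors are more likely to be chosen,
            # which preserves diversity (akin to natural selection).
            weights = [value(s) for s in pool]
            low = min(weights)
            weights = [w - low + 1e-9 for w in weights]  # make weights positive
            states = random.choices(pool, weights=weights, k=k)
        else:
            states = sorted(pool, key=value, reverse=True)[:k]
    return max(states, key=value)
```
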
Slide 23: Genetic algorithms
- A variation of stochastic beam search in which successor states are generated by combining two parent states rather than by modifying a single state.
- Start with a set of k randomly generated states (the population).
- Each state (individual) is represented as a string.
- Each state is rated by the objective function (the fitness function); higher values indicate better states.
- Pairs of individuals are selected at random for reproduction, with probability proportional to fitness.
- A crossover point is chosen randomly, and each location in the offspring is subject to random mutation. A minimal sketch follows.

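A minimal GA sketch matching the steps on this slide, for individuals encoded as strings; the population mechanics, mutation rate, and the exact form of fitness-proportional selection are illustrative assumptions:

```python
import random

def genetic_algorithm(population, fitness, alphabet,
                      generations=100, p_mutate=0.05):
    """Each generation: select parents with probability proportional to
    fitness (assumed non-negative), combine them at a random crossover
    point, then mutate each character with a small probability."""
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]
        next_gen = []
        for _ in range(len(population)):
            mom, dad = random.choices(population, weights=weights, k=2)
            point = random.randrange(1, len(mom))  # crossover point
            child = mom[:point] + dad[point:]
            child = "".join(ch if random.random() > p_mutate
                            else random.choice(alphabet) for ch in child)
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)
```

For 8-queens, for example, each individual could be a string of eight digits giving the row of the queen in each column, with fitness counting the number of non-attacking pairs.
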
Slide 24: GA is a good "no clue" approach to problem solving
- GA is superb if:
  - your space is loaded with lots of weird bumps and local minima; GA tends to spread out and test a larger subset of the space than many other learning/optimization algorithms;
  - you don't quite understand the underlying process of your problem space;
  - you have lots of processors; GAs parallelize very easily.

Slide 25: Optimization problems: newer algorithms
- ACO (Ant Colony Optimization)
- PSO (Particle Swarm Optimization)
- QGA (Quantum-Inspired Genetic Algorithm)

Slide 26: Anything to be learnt from ant colonies?
- Fairly simple units generate complicated global behaviour.
- An ant colony exhibits complex collective behaviour, providing intelligent solutions to problems such as:
  - carrying large items;
  - forming bridges;
  - finding the shortest routes from the nest to a food source;
  - prioritizing food sources based on their distance and ease of access.
- "If we knew how an ant colony works, we might understand more about how all such systems work, from brains to ecosystems." (Gordon, 1999)

Slide 27: Shortest path discovery
(figure)

Slide 28: Shortest path discovery
- Ants find the shortest path after a few minutes.

Slide 29: Ant Colony Optimization
- Each artificial ant is a probabilistic mechanism that constructs a solution to the problem using:
  - artificial pheromone deposition (pheromone trails);
  - heuristic information;
  - a memory of already-visited cities.

Slide 30: TSP solved using ACO
(figure)

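A compact sketch of the ACO idea applied to the TSP (every parameter value and name here is an illustrative assumption, not from the slides): each ant builds a tour city by city, choosing the next city with probability proportional to pheromone^alpha * (1/distance)^beta, and shorter tours deposit more pheromone on their edges.

```python
import random

def aco_tsp(dist, n_ants=20, iters=100, alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    """dist[i][j]: positive distance between distinct cities i and j."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # pheromone on each edge
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            # Each ant constructs a tour, remembering visited cities.
            tour = [random.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in tour]
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in choices]
                tour.append(random.choices(choices, weights=weights, k=1)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporate, then deposit pheromone in proportion to tour quality.
        tau = [[t * (1 - rho) for t in row] for row in tau]
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += q / length
                tau[j][i] += q / length
    return best_tour, best_len
```
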
Slide 31: Summary
- Local search (iterative improvement) algorithms keep only a single state in memory.
- They can get stuck in local extrema (maxima/minima); simulated annealing provides a way to escape local extrema and is complete and optimal given a sufficiently slow cooling schedule.
- Simulated annealing and local search are heuristics that usually produce sub-optimal solutions, since they may terminate at a locally optimal solution.

Slide 32: Summary
- Local beam search keeps track of k states rather than one. It quickly abandons unfruitful paths, but suffers from a lack of diversity.
- Stochastic beam search chooses k successors at random, favoring those with high value.
- Genetic algorithms generate successors by crossover between parents with high fitness.