CSCI 4310 Lecture 10: Local Search Algorithms

Adapted from Russell and Norvig & Coppin

Reading
Section 5.8 in Artificial Intelligence Illuminated by Ben Coppin, ISBN 0-7637-3230-3
Chapter 4 in Artificial Intelligence: A Modern Approach by Russell and Norvig, ISBN 0-13-790395-2
Chapters 4 and 25 in Winston

Techniques as Metaheuristics
Per Wikipedia: methods for solving problems by combining heuristics, hopefully efficiently.
Generally applied to problems for which there is no satisfactory problem-specific algorithm.
Not a panacea.

What is local search?
Local search algorithms operate on a single current state.
Search proceeds by moving to neighbors of the current state; we do not care how we got to the current state.
For example, no one cares how you arrived at an 8-queens solution, so the intermediate steps are not needed.
For TSP we do need the final path, but not the discarded longer paths.

Purpose
Local search strategies can often find 'good' solutions in large or infinite search spaces, and work well with optimization problems that supply an objective function.
Genetic algorithms are one example: nature provides reproductive fitness as the objective function.
There is no goal test or path cost as we saw with directed search.

Local Search
The state space landscape: the state is the current location on the curve.
If the height of a state is its cost, finding the global minimum is the goal.

Local Search
A complete local search algorithm always finds a goal if one exists.
An optimal local search algorithm always finds a global minimum/maximum.
(Image: Google image search for "Global Maxima".)

Hill climbing
Greedy local search; if minimizing, this is 'gradient descent'.
The algorithm will "never get worse", so it suffers from the same mountain-climbing problems we have discussed.
"Sometimes worse must you get in order to find the better" (Yoda).
We can do better…
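As a concrete illustration, here is a minimal hill-climbing sketch, assuming a toy integer objective with its maximum at x = 3 (the names hill_climb, f, and neighbors are illustrative, not from the slides):

```python
def hill_climb(start, f, neighbors):
    """Greedy local search: repeatedly move to the best neighbor;
    stop when no neighbor improves on the current state."""
    current = start
    while True:
        best = max(neighbors(current), key=f)
        if f(best) <= f(current):
            return current  # a local maximum (not necessarily global)
        current = best

# Toy problem: maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(10, f, neighbors))  # climbs 10 -> 9 -> ... -> 3
```

Because this objective is unimodal, the climb always reaches 3; on a bumpy landscape the same loop would stop at whichever local maximum it reached first.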

Stochastic Hill Climbing
Generate successors randomly until one is better than the current state.
A good choice when each state has a very large number of successors.
Still, this is an incomplete algorithm: we may get stuck in a local maximum.
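A sketch of the stochastic variant under the same toy assumptions (the patience cutoff and all names are illustrative):

```python
import random

def stochastic_hill_climb(start, f, random_successor, patience=1000):
    """Sample successors at random and accept the first improving one;
    give up after `patience` consecutive non-improving samples."""
    random.seed(0)  # deterministic for the demo
    current, failures = start, 0
    while failures < patience:
        candidate = random_successor(current)
        if f(candidate) > f(current):
            current, failures = candidate, 0
        else:
            failures += 1
    return current  # may still be only a local maximum

f = lambda x: -(x - 3) ** 2
step = lambda x: x + random.choice([-1, 1])
print(stochastic_hill_climb(20, f, step))
```

Note that it never examines the whole neighborhood, which is exactly what makes it attractive when the branching factor is huge.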

Random Restart Hill Climbing
Generate start states randomly, then proceed with hill climbing.
Since this will eventually generate a goal state as the initial state, we have a complete algorithm, by dumb luck (eventually).
Hard problems typically have a large number of local maxima; this may be a decent definition of 'difficult' as it relates to search strategy.
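Random restarts are easiest to see on a toy objective that actually has a local maximum; the bumpy function below and all names are made up for illustration:

```python
import random

# A bumpy toy objective: global maximum at x = 3, local maximum at x = 42.
def f(x):
    return -abs(x - 3) + (5 if x == 42 else 0)

def climb(x):
    """Plain hill climbing over the integer neighbors x - 1 and x + 1."""
    while True:
        best = max([x - 1, x + 1], key=f)
        if f(best) <= f(x):
            return x
        x = best

def random_restart(restarts=25):
    """Hill climb from several random start states; keep the best result."""
    random.seed(0)  # deterministic for the demo
    results = [climb(random.randint(-100, 100)) for _ in range(restarts)]
    return max(results, key=f)

print(climb(50))         # a single climb gets stuck at the local maximum 42
print(random_restart())  # restarts recover the global maximum 3
```

Each restart is cheap, and only one of the 25 starts needs to land in the global maximum's basin of attraction.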

Simulated Annealing
Problems so far: never making downhill moves is guaranteed to be incomplete, while a purely random walk (choosing a successor state randomly) is complete but inefficient.
Let's combine the two and see what happens…

Simulated Annealing
Uses the concept of 'temperature', which decreases as the search proceeds.
Instead of picking the best next move, we choose a move randomly: if the move improves the state, we accept it; if not, we accept it with a probability that decreases exponentially with the 'badness' of the move.
At high temperature, we are more likely to accept random 'bad' moves; as the system cools, 'bad' moves become less likely.

Simulated Annealing Temperature eventually goes to 0. At Temperature = 0, this is the greedy algorithm

Simulated Annealing
s := s0; e := E(s)                            // Initial state, energy.
sb := s; eb := e                              // Initial "best" solution.
k := 0                                        // Energy evaluation count.
while k < kmax and e > emax                   // While time left & not good enough:
    sn := neighbour(s)                        // Pick some neighbour.
    en := E(sn)                               // Compute its energy.
    if en < eb then                           // Is this a new best?
        sb := sn; eb := en                    // Yes, save it.
    if P(e, en, temp(k/kmax)) > random() then // Should we move to it?
        s := sn; e := en                      // Yes, change state.
    k := k + 1                                // One more evaluation done.
return sb                                     // Return the best solution found.
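The same idea can be written as a small runnable sketch; the convex toy energy, the geometric cooling schedule, and the parameter values below are illustrative choices (not a temp(k/kmax) schedule), picked only to show the accept/reject mechanics:

```python
import math
import random

def simulated_annealing(start, f, neighbor, t0=10.0, cooling=0.99, steps=5000):
    """Minimize f. Downhill moves are always accepted; uphill moves are
    accepted with probability exp(-delta / T), which shrinks as T cools."""
    random.seed(1)  # deterministic for the demo
    current = best = start
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = f(candidate) - f(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if f(current) < f(best):
                best = current
        t *= cooling  # geometric cooling; as T -> 0 this becomes greedy descent
    return best

# Toy energy: minimize f(x) = |x - 3| over the integers.
f = lambda x: abs(x - 3)
neighbor = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(60, f, neighbor))
```

Early on, high T makes the walk nearly random; by the end, the acceptance test rejects essentially every uphill move, matching the "at T = 0 this is greedy" observation on the next slide.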

Simulated Annealing
(Images: Wikipedia, fast cooling vs. slow cooling.)
Similar colors attract at short distances and repel at slightly larger distances; each move swaps two pixels.

Tabu Search
Keep a list of the k most recently visited states to avoid repeating paths; combine this technique with other heuristics.
Avoid local optima by rewarding exploration of new paths, even if they appear relatively poor: "a bad strategic choice can yield more information than a good random choice."
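A minimal sketch of the idea, assuming a fixed-length tabu list of visited states (real tabu search typically stores move attributes and adds aspiration criteria; the toy objective and names are illustrative):

```python
from collections import deque

def tabu_search(start, f, neighbors, tabu_size=5, steps=50):
    """Always move to the best non-tabu neighbor, even if it is worse
    than the current state; the tabu list of recently visited states
    forces the search away from a local maximum it has just explored."""
    current = best = start
    tabu = deque([start], maxlen=tabu_size)
    for _ in range(steps):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = max(candidates, key=f)  # may be a downhill move
        tabu.append(current)
        if f(current) > f(best):
            best = current
    return best

# Bumpy toy objective: local maximum at 42, global maximum at 3.
f = lambda x: -abs(x - 3) + (5 if x == 42 else 0)
neighbors = lambda x: [x - 1, x + 1]
print(tabu_search(45, f, neighbors))  # walks through 42 and on to 3
```

Because the state it just left is tabu, the search is pushed through the local maximum at 42 instead of oscillating around it.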

Ant Colony Optimization
Send artificial 'ants' along graph edges, dropping pheromone as they travel.
The next generation of ants is attracted to the pheromone.
Has been applied to the Traveling Salesman Problem.
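A compact, illustrative sketch of the pheromone loop on a tiny symmetric TSP; the parameter values, the selection rule, and the deposit rule are simplified assumptions, not the full Ant System algorithm:

```python
import random

def ant_colony_tsp(dist, n_ants=20, n_iters=50, evaporation=0.5, q=1.0):
    """Tiny Ant Colony Optimization sketch for a symmetric TSP.
    Ants build tours edge by edge, biased by pheromone and inverse
    distance; shorter tours deposit more pheromone on their edges."""
    random.seed(4)  # deterministic for the demo
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # pheromone level on each edge
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour, unvisited = [0], set(range(1, n))
            while unvisited:
                i, choices = tour[-1], list(unvisited)
                weights = [tau[i][j] / dist[i][j] for j in choices]
                j = random.choices(choices, weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        for i in range(n):            # pheromone evaporates everywhere...
            for j in range(n):
                tau[i][j] *= 1.0 - evaporation
        for length, tour in tours:    # ...and is deposited on used edges
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += q / length
                tau[b][a] += q / length
    return best_tour, best_len

# Four cities on a unit square; the optimal tour has length 4.
s = 2 ** 0.5
dist = [[0, 1, s, 1],
        [1, 0, 1, s],
        [s, 1, 0, 1],
        [1, s, 1, 0]]
print(ant_colony_tsp(dist))
```

Evaporation keeps old, poor edges from dominating, while the 1/length deposit makes good edges increasingly likely to be chosen by later generations.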

Local Beam Search
Hill climbing and its variants keep one current state; beam search keeps k states.
For each of the k states, generate all potential next states.
If any next state is a goal, terminate; otherwise, select the k best successors.
The search threads share information, so this is not just k parallel searches.
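The shared-information property is the key point: all successors go into one pool before the k best are kept. A sketch under the same toy assumptions as before (all names illustrative):

```python
import heapq

def local_beam_search(starts, f, neighbors, k=3, steps=100):
    """Keep the k best states. Each step pools the successors of ALL k
    states and keeps the overall k best, so the k search threads share
    information rather than climbing independently."""
    beam = heapq.nlargest(k, starts, key=f)
    for _ in range(steps):
        pool = set(beam) | {n for s in beam for n in neighbors(s)}
        new_beam = heapq.nlargest(k, pool, key=f)
        if new_beam == beam:  # no successor improved the beam
            break
        beam = new_beam
    return beam[0]  # best state found

# Bumpy toy objective: local maximum at 42, global maximum at 3.
f = lambda x: -abs(x - 3) + (5 if x == 42 else 0)
neighbors = lambda x: [x - 1, x + 1]
print(local_beam_search([90, 60, 30, -50], f, neighbors))
```

Notice how starts on unpromising slopes are abandoned quickly: the pooled ranking reallocates all k slots to the most promising region.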

Local Beam Search 2
Quickly moves resources to where the most progress is being made, but suffers from a lack of diversity and can quickly devolve into k parallel hill climbs.
So we apply our usual technique: randomize. Choosing k successors at random, at the point where an algorithm is getting stuck, re-introduces diversity.
What does this sound like in the genetic realm?

Stochastic Beam Search
Choose a pool of next states at random, then select k of them with probability increasing as a function of the value of the state.
Successors (offspring) of a state (organism) populate the next generation according to a value (fitness).
Sound familiar?
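One generation of this can be sketched with fitness-proportional sampling; the weight shift (to keep weights positive) and all names are illustrative assumptions:

```python
import random

def stochastic_beam_step(states, f, neighbors, k):
    """One generation of stochastic beam search: pool every successor,
    then sample k survivors with probability increasing in their value
    (like reproductive fitness in a genetic algorithm)."""
    pool = [n for s in states for n in neighbors(s)]
    lowest = min(f(s) for s in pool)
    weights = [f(s) - lowest + 1e-6 for s in pool]  # shift so weights > 0
    return random.choices(pool, weights=weights, k=k)

f = lambda x: -abs(x - 3)
neighbors = lambda x: [x - 1, x + 1]
random.seed(5)
print(stochastic_beam_step([10, 20], f, neighbors, k=3))
```

Unlike deterministic beam search, weak states survive occasionally, which is what preserves diversity across generations.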

Genetic Algorithms
A variant of stochastic beam search: rather than modifying a single state, two parent states are combined to form a successor state.
This state is embodied in the phenotype.
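The "combine two parents" step is commonly realized as single-point crossover; the bit-string representation here is an illustrative choice:

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover: the child inherits a prefix from one
    parent and the matching suffix from the other."""
    point = random.randint(1, len(parent_a) - 1)  # cut strictly inside
    return parent_a[:point] + parent_b[point:]

random.seed(6)
child = crossover("11111111", "00000000")
print(child)  # a block of 1s followed by a block of 0s
```

Because the cut point is strictly inside the string, the child always carries genetic material from both parents.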