Iterative Improvement Algorithm 2012/03/20

Outline
– Local Search Algorithms
– Hill-Climbing Search
– Simulated Annealing Search
– Local Beam Search
– Genetic Algorithms

Iterative Improvement Algorithm
General idea of local search
– start with a complete configuration
– make modifications to improve its quality
Domains: e.g., 8-queens, VLSI layout, etc.
– the state description itself is the solution
– the path to a solution is irrelevant
Approaches
– Hill-Climbing (f measures quality, to be maximized)
– Gradient Descent (f measures cost, to be minimized)
– Simulated Annealing

Local Search Algorithms and Optimization Problems
Uninformed search
– looking for a solution, where the solution is a path from start to goal
– at each intermediate point along a path, we have no prediction of the value of the path
Informed search
– again, looking for a path from start to goal
– this time, we have insight regarding the value of intermediate solutions

Local Search Algorithms and Optimization Problems (cont.)
What if the path is not important, just the goal?
– the goal state is not known in advance
– the path to the goal need not be found or recorded
– state space = set of "complete" configurations
– find configurations that satisfy the constraints
Examples
– What quantities of quarters, nickels, and dimes add up to $17.45 while minimizing the total number of coins?
– the 8-queens problem

Local Search Algorithms
Local search does not keep track of previous solutions
– it operates using a single current state (rather than multiple paths) and generally moves only to neighbors of that state
Advantages
– uses a small amount of memory (usually a constant amount)
– can often find reasonable (though not necessarily optimal) solutions in large or infinite search spaces

Optimization Problems
Find the best state according to an objective function.
Example (the coin problem above):
f(q, d, n) = 1,000,000   if q·0.25 + d·0.10 + n·0.05 ≠ 17.45
f(q, d, n) = q + d + n   otherwise
Goal: minimize f
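
As a concrete illustration, here is a minimal Python sketch of this objective; the function name f and the floating-point tolerance check are illustrative assumptions, and the penalty constant is the slide's 1,000,000.

def f(q, d, n):
    """Coin objective: heavily penalize combinations that do not sum to $17.45;
    otherwise return the total number of coins (the quantity to minimize)."""
    if abs(q * 0.25 + d * 0.10 + n * 0.05 - 17.45) > 1e-9:
        return 1_000_000
    return q + d + n

# Example: 69 quarters + 2 dimes = $17.45 using 71 coins
print(f(69, 2, 0))    # 71
print(f(10, 10, 10))  # 1000000 (does not sum to $17.45)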

Looking for a Global Maximum (or Minimum)
[Figure: one-dimensional state-space landscape — objective function plotted against state space, with labels for the current state, a local maximum, a "flat" local maximum, a shoulder, and the global maximum]

Hill-Climbing Search
"Like climbing Everest in thick fog with amnesia"
Only the current state and its evaluation are recorded, instead of maintaining a search tree.

function HILL-CLIMBING(problem) returns a state that is a local maximum
  inputs: problem, a problem
  local variables: current, a node
                   neighbor, a node
  current ← MAKE-NODE(INITIAL-STATE[problem])
  loop do
    neighbor ← a highest-valued successor of current
    if VALUE[neighbor] ≤ VALUE[current] then return STATE[current]
    current ← neighbor
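
A runnable Python sketch of the pseudocode above; the interface (successors and value passed in as functions) is an assumption for illustration, not part of the slide.

def hill_climbing(initial_state, successors, value):
    """Steepest-ascent hill climbing: repeatedly move to the highest-valued
    neighbor, stopping as soon as no neighbor improves on the current state."""
    current = initial_state
    while True:
        neighbors = successors(current)
        if not neighbors:
            return current
        best_neighbor = max(neighbors, key=value)
        if value(best_neighbor) <= value(current):
            return current  # local maximum (or plateau) reached
        current = best_neighbor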

Hill-Climbing Search (cont.-1)
Variations
– choose any successor with a higher value than the current state
– allow sideways moves: accept a successor with value[next] ≥ value[current]
Problems
– Local maxima: the search halts prematurely
– Plateaux: the search conducts a random walk
– Ridges: the search oscillates and makes only slow progress
Solution: Random-Restart Hill-Climbing (a sketch follows below)
– start from randomly generated initial states
– save the best result found so far
– finds the optimal solution eventually, if enough iterations are allowed
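
A sketch of the random-restart variant just described, reusing the hill_climbing function from the previous sketch; random_state, a generator of random initial states, is a hypothetical helper.

def random_restart_hill_climbing(random_state, successors, value, restarts=25):
    """Run hill climbing from several random initial states and keep the best
    local maximum found across all restarts."""
    best = None
    for _ in range(restarts):
        result = hill_climbing(random_state(), successors, value)
        if best is None or value(result) > value(best):
            best = result
    return best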

Hill-Climbing Search (cont.-2)
Ridges create a sequence of local maxima that are not directly connected to each other.
From each local maximum, all of the available actions point downhill.

Hill-Climbing Search (cont.-3)
8-queens problem
– successor function: all states generated by moving a single queen to another square in the same column (8 × 7 = 56 successors)
– h = number of pairs of queens attacking each other, either directly or indirectly
– h = 17 for the example state shown on the slide
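
As an illustration (not from the slide), a small Python helper that computes h for a board represented as a list where board[col] is the row of the queen in that column.

def attacking_pairs(board):
    """Count pairs of queens attacking each other (same row or same diagonal).
    board[col] = row of the queen in column col."""
    n = len(board)
    h = 0
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            same_row = board[c1] == board[c2]
            same_diagonal = abs(board[c1] - board[c2]) == abs(c1 - c2)
            if same_row or same_diagonal:
                h += 1
    return h

# A solved 8-queens board has h = 0
print(attacking_pairs([0, 4, 7, 5, 2, 6, 1, 3]))  # 0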

Hill-Climbing Search (cont.-4)
8-queens problem
– a local minimum with h = 1
– every successor has a higher cost
[Board figure omitted]

The K-Means Algorithm
1. Choose a value for K, the total number of clusters.
2. Randomly choose K points as cluster centers.
3. Assign the remaining instances to their closest cluster center.
4. Calculate a new cluster center for each cluster.
5. Repeat steps 3 and 4 until the cluster centers do not change.
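
A minimal NumPy sketch of these steps; the function name, the random seed, and the use of NumPy are assumptions for illustration.

import numpy as np

def k_means(points, k, max_iters=100, seed=0):
    """Basic K-means: pick k random points as initial centers, then alternate
    between assigning points to the closest center and recomputing each center.
    Assumes no cluster ever becomes empty (acceptable for a sketch)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(max_iters):
        # Step 3: assign each point to its closest cluster center
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 4: recompute each cluster center as the mean of its points
        new_centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        # Step 5: stop once the centers no longer change
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels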

General Considerations
– Requires real-valued data.
– We must select the number of clusters present in the data.
– Works best when the clusters in the data are of approximately equal size.
– Attribute significance cannot be determined.
– Lacks explanation capabilities.

Simulated Annealing
Idea: escape local maxima by allowing some bad moves, but gradually decrease their frequency.

function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to "temperature"
  local variables: current, next, a node
                   T, a "temperature" controlling the probability of downward steps
  current ← MAKE-NODE(INITIAL-STATE[problem])
  for t ← 1 to ∞ do
    T ← schedule[t]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← VALUE[next] − VALUE[current]
    if ΔE > 0 then current ← next
    else current ← next only with probability e^(ΔE/T)

Simulated Annealing (cont.-1)
– The term is borrowed from metalworking.
– We want metal molecules to find a stable location relative to their neighbors.
– Heating causes metal molecules to move around and take on undesirable locations.
– During cooling, molecules reduce their movement and settle into a more stable position.
– Annealing is the process of heating metal and letting it cool slowly, to lock in the stable locations of the molecules.

Simulated Annealing (cont.-2)
– Select a random move at each iteration.
– Move to the selected node if it is better than the current node.
– The probability of moving to a worse node decreases exponentially with the badness of the move, i.e., e^(ΔE/T).
– The temperature T changes according to a schedule.
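
Putting these pieces together, a minimal Python sketch of simulated annealing; the geometric cooling schedule and the function interface are illustrative assumptions (the pseudocode above leaves the schedule abstract).

import math
import random

def simulated_annealing(initial_state, successors, value,
                        t0=1.0, cooling=0.995, t_min=1e-4):
    """Accept any improving move; accept a worsening move only with probability
    e^(dE/T) (dE < 0), so bad moves become rarer as the temperature T falls."""
    current = initial_state
    t = t0
    while t > t_min:
        candidate = random.choice(successors(current))
        delta_e = value(candidate) - value(current)
        if delta_e > 0 or random.random() < math.exp(delta_e / t):
            current = candidate
        t *= cooling  # geometric cooling schedule (an assumption, not from the slide)
    return current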

Property of Simulated Annealing
One can prove: if T decreases slowly enough, then simulated annealing will find a global optimum with probability approaching 1.
Widely used in VLSI layout, airline scheduling, etc.

Local Beam Search
– Keep track of k states rather than just one.
– Begin with k randomly generated states.
– At each iteration, all the successors of all k states are generated.
– If any one is a goal state, halt; otherwise, select the k best successors from the complete list and repeat.
– Useful information is passed among the k parallel search threads.
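
A hedged Python sketch of local beam search as described above; the is_goal test and the overall interface are assumptions for illustration.

def local_beam_search(random_state, successors, value, is_goal, k=10, max_iters=1000):
    """Keep the k best states; each iteration expands all of them and retains
    the k best states from the combined pool of successors."""
    states = [random_state() for _ in range(k)]
    for _ in range(max_iters):
        pool = [s for state in states for s in successors(state)]
        if not pool:
            break
        for s in pool:
            if is_goal(s):
                return s
        states = sorted(pool, key=value, reverse=True)[:k]
    return max(states, key=value)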

Genetic Algorithm (GA)
– A successor state is generated by combining two parent states.
– Start with k randomly generated states (the population).
– A state is represented as a string over a finite alphabet (often a string of 0s and 1s).
– An evaluation function (fitness function) assigns higher values to better states.
– Produce the next generation of states by selection, crossover, and mutation.

Genetic Algorithm (cont.)
– Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7 / 2 = 28)
– Probability of being selected for reproduction is proportional to fitness, e.g., 24/(total fitness) = 31%, 23/(total fitness) = 29%, etc.
[Figure: initial population → fitness function → selection → crossover → mutation]
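
A compact Python sketch of one GA generation for 8-queens, following the selection/crossover/mutation pipeline in the figure; the population size, mutation rate, and helper names are illustrative assumptions, and fitness reuses the attacking_pairs helper from the earlier hill-climbing sketch.

import random

def fitness(board):
    """Number of non-attacking pairs of queens (28 means a perfect 8-queens board)."""
    return 28 - attacking_pairs(board)

def next_generation(population, mutation_rate=0.1):
    """Produce a new population of boards via fitness-proportional selection,
    single-point crossover, and occasional single-gene mutation."""
    weights = [fitness(b) for b in population]
    new_population = []
    for _ in range(len(population)):
        mom, dad = random.choices(population, weights=weights, k=2)  # selection
        cut = random.randrange(1, len(mom))                          # crossover point
        child = mom[:cut] + dad[cut:]
        if random.random() < mutation_rate:                          # mutation
            child[random.randrange(len(child))] = random.randrange(8)
        new_population.append(child)
    return new_population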