Optimization via Search


Optimization via Search CPSC 315 – Programming Studio Spring 2012 Project 2, Lecture 4 Adapted from slides of Yoonsuck Choe

Improving Results and Optimization
- Assume a state with many variables
- Assume some function of that state whose value you want to maximize or minimize
- Searching the entire space is too complicated:
  - Can't evaluate every possible combination of variables
  - The function might be difficult to evaluate analytically

Iterative Improvement
- Start with a complete valid state
- Gradually work to improve to better and better states
- Sometimes try to achieve an optimum, though this is not always possible
- States may be discrete or continuous

Simple Example: one dimension (typically more are used) [figure: function value vs. x]

Simple Example: start at a valid state and try to maximize the function value

Simple Example: move to a better state [figure: function value vs. x]

Simple Example: try to find the maximum [figure: function value vs. x]

Hill-Climbing
- Choose a random starting state
- Repeat:
  - From the current state, generate n random steps in random directions
  - Choose the one that gives the best new value
- While some better new state is found (i.e., exit if none of the n steps was better)

Simple Example: random starting point [figure: function value vs. x]

Simple Example: three random steps [figure: function value vs. x]

Simple Example: choose the best one for the new position [figure: function value vs. x]

Simple Example: repeat (several more steps) [figure: function value vs. x]

Simple Example: no improvement, so stop [figure: function value vs. x]
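The random-step hill climbing walked through above can be sketched in a few lines of Python. This is a minimal one-dimensional sketch; the helper name `hill_climb` and all parameter values (step size, n, iteration cap) are illustrative assumptions, not part of the original slides.

```python
import random

def hill_climb(f, x0, step=0.1, n=20, max_iters=1000):
    """Random-step hill climbing: from the current state, sample n random
    steps, move to the best one, and stop when none of them improves."""
    x, fx = x0, f(x0)
    for _ in range(max_iters):
        candidates = [x + random.uniform(-step, step) for _ in range(n)]
        best = max(candidates, key=f)
        if f(best) <= fx:
            break  # none of the n steps was better: exit
        x, fx = best, f(best)
    return x, fx

# Maximize a simple function whose single peak is at x = 2.
x, fx = hill_climb(lambda x: -(x - 2.0) ** 2, x0=0.0)
```

With a single peak the loop climbs straight to it; the problems discussed next (local maxima, plateaus, ridges) appear only on less friendly functions.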

Problems with Hill Climbing
- Random steps are wasteful (addressed by other methods)
- Local maxima, plateaus, and ridges
  - Can try random restart locations
  - Can keep the n best choices (this is also called "beam search")
- Comparing to game trees: hill climbing looks at some number of available next moves and chooses the one that looks best at the moment; beam search follows only the n best-looking moves
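The beam-search variant mentioned above can be sketched the same way: keep the n best states each round instead of a single one. The helper name `beam_search` and all parameter values here are illustrative assumptions for the one-dimensional case.

```python
import random

def beam_search(f, starts, beam=3, step=0.1, iters=200):
    """Beam-style hill climbing: keep the `beam` best states each round."""
    states = list(starts)
    for _ in range(iters):
        # Expand every kept state with a few random neighbours;
        # keeping the current states in the pool means the best
        # value found so far can never be lost.
        pool = states + [s + random.uniform(-step, step)
                         for s in states for _ in range(4)]
        # Keep only the `beam` best-looking states.
        states = sorted(pool, key=f, reverse=True)[:beam]
    return max(states, key=f)

# Several starting points, single peak at x = 1.
best = beam_search(lambda x: -(x - 1.0) ** 2, starts=[-2.0, 0.0, 3.0])
```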

Gradient Descent (or Ascent)
- A simple modification of hill climbing
- Generally assumes a continuous state space
- The idea is to take more intelligent steps:
  - Look at the local gradient: the direction of largest change
  - Take a step in that direction, with step size proportional to the gradient
- Tends to yield much faster convergence to the maximum

Gradient Ascent: random starting point [figure: function value vs. x]

Gradient Ascent: take a step in the direction of largest increase (obvious in 1D; must be computed in higher dimensions) [figure: function value vs. x]

Gradient Ascent: repeat [figure: function value vs. x]

Gradient Ascent: the next step is actually lower, so stop [figure: function value vs. x]

Gradient Ascent: could reduce the step size to "home in" [figure: function value vs. x]

Gradient Ascent: converge to a (local) maximum [figure: function value vs. x]
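The gradient-ascent loop walked through above can be sketched as follows. This is a one-dimensional sketch in which the gradient is estimated numerically with a central difference rather than computed analytically; the helper name and the learning-rate and iteration values are illustrative assumptions.

```python
def gradient_ascent(f, x0, lr=0.1, iters=100, h=1e-6):
    """Gradient ascent in one dimension with a numerically
    estimated gradient (central difference)."""
    x = x0
    for _ in range(iters):
        grad = (f(x + h) - f(x - h)) / (2 * h)
        x += lr * grad  # step size proportional to the gradient
    return x

# Maximize f(x) = -(x - 3)^2, whose single maximum is at x = 3.
x = gradient_ascent(lambda x: -(x - 3.0) ** 2, x0=0.0)
```

Because the step shrinks with the gradient, the iterate slows down as it approaches the peak instead of overshooting, which is the faster convergence noted above.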

Dealing with Local Minima
- Various modifications of hill climbing and gradient descent can help:
  - Random starting positions: run from each and choose the best result
  - Random steps when a maximum is reached
- Conjugate gradient descent/ascent: choose the gradient direction and look for the maximum along it, then from that point go in a different direction
- Simulated annealing

Simulated Annealing
- Annealing: heat up a metal and let it cool slowly to strengthen it
  - Heating gives the atoms freedom to move around
  - Cooling "hardens" the metal into a stronger state
- The idea is like hill climbing, but you can take steps down as well as up
- The probability of allowing "down" steps decreases over time

Simulated Annealing
- Heuristic/goal/fitness function E (energy)
- Generate a move (randomly) and compute ΔE = E_new − E_old
- If ΔE ≤ 0, accept the move
- If ΔE > 0, accept the move with probability P(ΔE) = e^(−ΔE/T), where T is the "temperature"

Simulated Annealing
- Compare P(ΔE) with a random number from 0 to 1; if the random number is below it, accept the move
- The temperature T is decreased over time
  - When T is higher, downward moves are more likely to be accepted
  - T = 0 is equivalent to hill climbing
- When ΔE is smaller, downward moves are more likely to be accepted

"Cooling Schedule"
- The speed at which the temperature is reduced has an effect:
  - Too fast, and the optima are not found
  - Too slow, and time is wasted
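Putting the acceptance rule and a cooling schedule together, a minimal simulated-annealing sketch might look like this. The geometric cooling schedule, all parameter values, and the two-peaked test function are illustrative assumptions; since we maximize f here, ΔE is computed as f(old) − f(new) so that ΔE ≤ 0 means the move went "up".

```python
import math, random

def simulated_annealing(f, x0, t0=10.0, cooling=0.99, step=0.5, iters=2000):
    """Simulated annealing for maximization: always accept uphill moves,
    accept downhill moves with probability exp(-dE / T)."""
    x, t = x0, t0
    best, best_f = x0, f(x0)
    for _ in range(iters):
        x_new = x + random.uniform(-step, step)
        dE = f(x) - f(x_new)  # positive when the move goes "down"
        if dE <= 0 or random.random() < math.exp(-dE / t):
            x = x_new
        if f(x) > best_f:
            best, best_f = x, f(x)
        t *= cooling  # geometric cooling schedule
    return best, best_f

# A made-up function with a local maximum near x = -1 (value 0)
# and the global maximum at x = 2 (value 1).
f = lambda x: -0.1 * (x + 1) ** 2 if x < 0.5 else 1.0 - (x - 2) ** 2
best, best_f = simulated_annealing(f, x0=-1.0)
```

Early on, when T is high, the walk freely crosses the valley between the two peaks; as T falls, the run behaves more and more like plain hill climbing around whichever peak it is near.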

Simulated Annealing: random starting point (T = very high) [figure: function value vs. x]

Simulated Annealing: random step (T = very high) [figure: function value vs. x]

Simulated Annealing: even though E is lower, accept (T = very high) [figure: function value vs. x]

Simulated Annealing: next two steps; accept each since E is higher (T = very high) [figure: function value vs. x]

Simulated Annealing: next two steps; accept even though E is lower (T = high) [figure: function value vs. x]

Simulated Annealing: next step; accept since E is higher (T = medium) [figure: function value vs. x]

Simulated Annealing: next step; lower, and rejected as T falls (T = medium) [figure: function value vs. x]

Simulated Annealing: next step; accept since E is higher (T = medium) [figure: function value vs. x]

Simulated Annealing: next step; accept since the change in E is small (T = low) [figure: function value vs. x]

Simulated Annealing: next step; accept since E is larger (T = low) [figure: function value vs. x]

Simulated Annealing: next step; reject since E is lower and T is low [figure: function value vs. x]

Simulated Annealing: eventually converge to the maximum (T = low) [figure: function value vs. x]

Other Optimization Approach: Genetic Algorithms
- State = "chromosome"; the genes are the variables
- Optimization function = "fitness"
- Create "generations" of solutions: a set of several valid solutions
- The most fit solutions carry on
- Generate the next generation by:
  - Mutating genes of the previous generation
  - "Breeding": pick two (or more) "parents" and create children by combining their genes
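The generation loop described above can be sketched as follows. Binary chromosomes, one-point crossover, and all parameter values are illustrative assumptions; the toy fitness function simply counts 1-bits, so the optimum is the all-ones chromosome.

```python
import random

def genetic_algorithm(fitness, n_genes, pop_size=30, generations=60,
                      mutation_rate=0.1):
    """Tiny genetic-algorithm sketch: chromosomes are lists of bits,
    the fittest half carries on, and children come from crossover
    of two parents plus per-gene mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Most fit solutions carry on to the next generation.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            mom, dad = random.sample(survivors, 2)
            cut = random.randrange(1, n_genes)   # one-point crossover
            child = mom[:cut] + dad[cut:]
            for i in range(n_genes):             # mutate genes
                if random.random() < mutation_rate:
                    child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# "One-max" toy problem: fitness is the number of 1-bits.
best = genetic_algorithm(sum, n_genes=20)
```

Keeping the survivors unchanged (elitism) guarantees the best fitness found never decreases from one generation to the next, while mutation keeps injecting the diversity that crossover alone cannot.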