MAE 552 – Heuristic Optimization Lecture 4 January 30, 2002.

Basics of Problem Solving – Evaluation Function
For every real-world problem the evaluation function is chosen by the designer. It should, for instance, indicate that a solution that meets the objective is better than one that does not, and its choice should also account for factors such as the computational cost of evaluating it. Often the objective function suggests a good evaluation function. Objective: minimize stress -> Evaluation function: the stress itself.

Basics of Problem Solving – Evaluation Function
Other times you cannot derive a useful evaluation function from the objective. In the SAT problem the objective is to find an assignment of Boolean (TRUE/FALSE) variables that satisfies a logical statement (makes it TRUE). Every wrong candidate solution returns FALSE, which does not tell you how to improve the solution.
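To see why, here is a minimal Python sketch (with a hypothetical clause encoding, not part of the lecture) of the raw SAT objective: every failing assignment returns the same FALSE, so its value gives no hint about which variable to flip.

```python
# Hypothetical encoding: a CNF formula as a list of clauses, each clause a list
# of literals (positive integer k = variable x_k, negative integer = its negation).
def satisfies(assignment, clauses):
    """Raw SAT objective: TRUE only if every clause contains a true literal."""
    return all(
        any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR x2) AND (NOT x1 OR x3)
clauses = [[1, 2], [-1, 3]]
print(satisfies([False, False, False], clauses))  # False -- no hint which variable to flip
print(satisfies([True, False, False], clauses))   # False -- still no measure of "how close"
print(satisfies([True, False, True], clauses))    # True
```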

Basics of Problem Solving – Defining a Search Problem
When you design an evaluation function you need to consider that for many problems the only solutions of interest are the subset that are feasible (satisfy the constraints). The feasible space can be defined as F, where F ⊆ S. A search problem can then be defined as: given a search space S and its feasible part F ⊆ S, find x ∈ F such that eval(x) ≤ eval(y) for all y ∈ F (this is the definition of a global optimum, assuming minimization). Note that the objective does not appear at all in this formulation! If your evaluation function does not correspond with the objective, you will be searching for the answer to the wrong problem.

Basics of Problem Solving – Defining a Search Problem
A point x that satisfies this condition is called a global solution. Finding a global solution can be difficult, and in some cases it is impossible to prove that a solution is globally optimal. The search would be easier if we could limit it to a smaller area of S. This fact underlies many search techniques.

Basics of Problem Solving – Neighborhood Search
If we concentrate on the area of S 'near' to some point in the search space, we can more easily look in this 'neighborhood'. The neighborhood N(x) of a point x is the set of all points in the search space that are 'close' to x:
N(x) = {y ∈ S : dist(x, y) ≤ ε}
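A minimal Python sketch of this definition, assuming a continuous search space with Euclidean distance (the value of eps and the test points are illustrative):

```python
import math

def in_neighborhood(x, y, eps):
    """True if y lies within the epsilon-neighborhood N(x) of x."""
    return math.dist(x, y) <= eps

print(in_neighborhood((0.0, 0.0), (0.3, 0.4), eps=0.5))  # True: distance is exactly 0.5
print(in_neighborhood((0.0, 0.0), (1.0, 1.0), eps=0.5))  # False
```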

Basics of Problem Solving – Neighborhood Search
For a continuous NLP the Euclidean distance can be used to define a neighborhood. For the TSP a 2-swap neighborhood can be defined as all of the candidate tours that result from swapping two cities in a given tour. A solution x (a permutation of n = 5 cities, say 1-2-3-4-5) has n(n-1)/2 = 10 neighbors, including 1-3-2-4-5 (swapping cities 2 and 3), 5-2-3-4-1 (swapping cities 1 and 5), etc.
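A minimal Python sketch of the 2-swap neighborhood described above (two_swap_neighbors is an illustrative name, not from the lecture):

```python
from itertools import combinations

def two_swap_neighbors(tour):
    """All tours obtained by exchanging the positions of two cities in 'tour'."""
    neighbors = []
    for i, j in combinations(range(len(tour)), 2):
        neighbor = list(tour)
        neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
        neighbors.append(tuple(neighbor))
    return neighbors

tour = (1, 2, 3, 4, 5)
print(len(two_swap_neighbors(tour)))  # 10 = n(n-1)/2 for n = 5
print(two_swap_neighbors(tour)[0])    # (2, 1, 3, 4, 5): cities 1 and 2 swapped
```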

Basics of Problem Solving – Neighborhood Search
Example: a quadratic objective with no constraints, F(x) = x² + 3, with current point x_c (figure).

Basics of Problem Solving – Neighborhood Search
Step 1: Define a neighborhood around the current point x_c: N(x_c): x_c − ε ≤ x ≤ x_c + ε.

Basics of Problem Solving – Neighborhood Search
Step 2: Sample a candidate solution x_1 from the neighborhood and evaluate it. If F(x_1) > F(x_c), reject the point and choose another.

Basics of Problem Solving – Neighborhood Search
If F(x_1) < F(x_c), accept the point and replace the current point x_c with x_1.

Basics of Problem Solving – Neighborhood Search
Step 3: Create a new neighborhood around the new x_c and repeat the process.
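Putting Steps 1-3 together for this example, here is a minimal Python sketch of the neighborhood search on F(x) = x² + 3 (the neighborhood size eps and the iteration count are illustrative choices):

```python
import random

def F(x):
    return x**2 + 3

def neighborhood_search(xc, eps=0.5, iterations=1000):
    for _ in range(iterations):
        x1 = random.uniform(xc - eps, xc + eps)  # Step 2: sample the neighborhood of xc
        if F(x1) < F(xc):                        # accept only improving candidates
            xc = x1                              # Step 3: recenter the neighborhood and repeat
    return xc

print(neighborhood_search(xc=10.0))  # drifts toward the minimizer x = 0
```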

Basics of Problem Solving – Neighborhood Search
Most realistic problems are considerably more difficult than a quadratic bowl. The evaluation function defines a response surface that describes the topography of the search space, with many hills and valleys.

Basics of Problem Solving – Neighborhood Search
Finding the best peak or the lowest valley is like trying to navigate a mountain range in the dark with only a small lamp. Your decisions must be made using local information: you can sample points in a local area and then decide where to walk next.

Basics of Problem Solving – Neighborhood Search
If you decide to always go uphill you will reach a peak, but not necessarily the highest peak. You may need to walk downhill in order to eventually reach the highest peak in the space.

Basics of Problem Solving – Local Optima
With the notion of a neighborhood we can define the idea of a local optimum. A potential solution x ∈ F is a local optimum if and only if: eval(x) ≤ eval(y) for all y ∈ N(x). If N(x) is small then it is relatively easy to search it for the best solution, but it is also easy to get trapped in a local minimum. If N(x) is large then the visibility of the entire design space is increased and the chances of finding the global optimum increase, but a large N(x) also leads to more computational expense.
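The definition translates directly into a test, sketched below for minimization (eval_fn and neighbors_fn are illustrative placeholders, e.g. a tour length and the 2-swap neighborhood above):

```python
def is_local_optimum(x, eval_fn, neighbors_fn):
    """x is a local optimum if no point in its neighborhood N(x) evaluates better."""
    return all(eval_fn(x) <= eval_fn(y) for y in neighbors_fn(x))
```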

Basics of Problem Solving – Local Optima
With a small neighborhood N(x), only a local optimum is found.

Basics of Problem Solving – Local Optima
With a large neighborhood N(x), the global optimum is more likely to be found, but at high computational expense. The size of the neighborhood should fit the problem!

Formal Implementation of Neighborhood Search – Hill Climbing Methods
Basic hill climbing methods use the concept of a neighborhood search and iterative improvement to find local optima. During each iteration the best solution is selected from the neighborhood N(x) and is used to replace the current solution. If there are no better solutions in N(x), then a local optimum has been reached and a new design point is selected at random to start the next iteration. Hill climbing methods are VERY dependent on the starting point of the algorithm and on the size of the neighborhood. Always go uphill (or downhill in the case of minimization).

Hill Climbing Procedure
Begin
  Set t = 0;
  Initialize best;
  Repeat
    local = FALSE;
    Select a current point v_c at random;
    Evaluate v_c;
    Repeat
      Select all points in the neighborhood of v_c;
      Select the point v_n from the set of new points with the best value of the evaluation function eval;
      If eval(v_n) is better than eval(v_c)
        then v_c = v_n
        else local = TRUE
    Until local
    t = t + 1;
    If v_c is better than best
      then best = v_c
  Until t = MAX_ITERATIONS
End
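A minimal Python sketch of the procedure above, assuming minimization and a finite neighborhood (for instance the 2-swap neighborhood defined earlier); eval_fn, neighbors_fn and random_point_fn are illustrative placeholders rather than part of the original pseudocode:

```python
def hill_climber(eval_fn, neighbors_fn, random_point_fn, max_iterations=100):
    best = None
    for _ in range(max_iterations):
        vc = random_point_fn()                       # random restart
        local = False
        while not local:
            vn = min(neighbors_fn(vc), key=eval_fn)  # best point in the neighborhood of vc
            if eval_fn(vn) < eval_fn(vc):
                vc = vn                              # move to the better neighbor
            else:
                local = True                         # local optimum reached
        if best is None or eval_fn(vc) < eval_fn(best):
            best = vc                                # remember the best local optimum so far
    return best
```

Restarting from a new random point on each outer iteration is what distinguishes this iterated hill climber from a single greedy descent.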

Disadvantages of Hill Climbers
They usually terminate at solutions that are only locally optimal. There is no information about how far the discovered local optimum is from the global optimum or from other local optima. The optimum that is found depends on the initial configuration. In general, it is not possible to provide an upper bound on the computation time.

Advantages of Hill Climbers
They are very easy to apply!

Balancing Local and Global Search
Effective search techniques balance exploitation and exploration. Exploitation is the process of using the best solution found so far as a jumping-off point for finding an improved solution. Exploration is the process of exploring new areas of the search space. Hill climbing methods exploit the current best point effectively, but they can neglect a large portion of the search space.

Pure Random Search
Pure random search uses all exploration and no exploitation. It explores the space thoroughly, but forgoes exploiting promising areas of the design space.

Random Search Procedure
Begin
  Set t = 0;
  Select an initial point v_0 at random;
  Evaluate v_0 and set best_x = v_0, best_f = eval(v_0);
  Repeat
    Select a point v_c at random;
    t = t + 1;
    Evaluate v_c;
    If eval(v_c) is better than best_f
      then set best_f = eval(v_c) and best_x = v_c
  Until t = MAX_ITERATIONS
End
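For comparison, a minimal Python sketch of the procedure (again for minimization, with eval_fn and random_point_fn as illustrative placeholders):

```python
def random_search(eval_fn, random_point_fn, max_iterations=1000):
    best_x = random_point_fn()
    best_f = eval_fn(best_x)
    for _ in range(max_iterations):
        vc = random_point_fn()      # pure exploration: the current best is never used
        fc = eval_fn(vc)
        if fc < best_f:             # remember, but never exploit, the best point seen
            best_f, best_x = fc, vc
    return best_x, best_f
```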