Heuristic Optimization Methods: Greedy Algorithms, Approximation Algorithms, and GRASP


2 Agenda
Greedy Algorithms
– A class of heuristics
Approximation Algorithms
– Do not prove optimality, but return a solution that is guaranteed to be within a certain distance of the optimal value
GRASP
– Greedy Randomized Adaptive Search Procedure
Other
– Squeaky Wheel
– Ruin and Recreate
– Very Large Neighborhood Search

3 Greedy Algorithms
We have previously studied Local Search Algorithms, which can produce heuristic solutions to difficult optimization problems.
Another way of producing heuristic solutions is to apply Greedy Algorithms.
The idea of a Greedy Algorithm is to construct a solution from scratch, choosing at each step the item that brings the "best" immediate reward.

4 Greedy Example (1)
0-1 Knapsack Problem:
– Maximize: 12x1 + 8x2 + 17x3 + 11x4 + 6x5 + 2x6 + 2x7
– Subject to: 4x1 + 3x2 + 7x3 + 5x4 + 3x5 + 2x6 + 3x7 ≤ 9
– With x binary
Notice that the variables are ordered such that cj/aj ≥ cj+1/aj+1
– Item j gives more "bang per buck" than item j+1

5 Greedy Example (2)
The greedy solution is to consider each item in turn, and to put it in the knapsack if there is enough room, starting with the variable that gives the most "bang per buck":
– x1 = 1 (enough space, and best remaining item)
– x2 = 1 (enough space, and best remaining item)
– x3 = x4 = x5 = 0 (not enough space for any of them)
– x6 = 1 (enough space, and best remaining item)
– x7 = 0 (not enough space)
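A minimal Python sketch of this greedy rule, using the instance above. Note that the coefficients of x3 and x4 were garbled in the transcript, so the 17 and 11 below are reconstructed values; they do not affect the greedy steps shown.

```python
def greedy_knapsack(c, a, b):
    """Greedy heuristic for the 0-1 knapsack problem.

    Assumes items are already sorted by decreasing c[j]/a[j]
    ("bang per buck"). Returns the 0-1 solution and its value.
    """
    x = [0] * len(c)
    remaining = b
    for j in range(len(c)):
        if a[j] <= remaining:          # enough space: take the item
            x[j] = 1
            remaining -= a[j]
    return x, sum(cj * xj for cj, xj in zip(c, x))

c = [12, 8, 17, 11, 6, 2, 2]   # c3 = 17, c4 = 11 are reconstructed values
a = [4, 3, 7, 5, 3, 2, 3]
x, value = greedy_knapsack(c, a, 9)
print(x, value)   # [1, 1, 0, 0, 0, 1, 0] 22
```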

6 Formalized Greedy Algorithm (1)
Let us assume that we can write our combinatorial optimization problem as an optimization over subsets S of a ground set N = {1, ..., n} (the exact formula on the slide is an image; a standard generic form is min { f(S) : S ⊆ N, S feasible }).
For example, the 0-1 Knapsack Problem:
– (S will be the set of items not in the knapsack)

7 Formalized Greedy Algorithm (2)
(The original slide gives the pseudo-code of the generic greedy algorithm as a figure; a sketch follows below.)
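Since the slide's pseudo-code is only available as an image, here is a minimal Python sketch of the generic greedy template it formalizes; the callables feasible and reward are illustrative placeholders for the problem-specific components, not names from the slides.

```python
def generic_greedy(elements, feasible, reward):
    """Generic greedy construction (a sketch, not the slide's exact pseudo-code).

    elements: the ground set of items that may be added.
    feasible(solution, e): True if e can be added to the partial solution.
    reward(solution, e): immediate reward of adding e (higher is better).
    """
    solution = []
    remaining = set(elements)
    while remaining:
        # Choose the element with the best immediate reward ...
        best = max(remaining, key=lambda e: reward(solution, e))
        remaining.discard(best)
        # ... and keep it only if it preserves feasibility.
        if feasible(solution, best):
            solution.append(best)
    return solution
```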

8 Adapting Greedy Algorithms
Greedy Algorithms have to be adapted to the particular problem structure
– Just like Local Search Algorithms
For a given problem there can be many different Greedy Algorithms
– TSP: "nearest neighbor", "pure greedy" (select shortest edges first)

9 Approximation Algorithms
We remember three classes of algorithms:
– Exact (returns the optimal solution)
– Approximation (returns a solution within a certain distance from the optimal value)
– Heuristic (returns a hopefully good solution, but with no guarantees)
For Approximation Algorithms, we need some kind of proof that the algorithm returns a value within some bound.
We will look at an example of a Greedy Algorithm that is also an Approximation Algorithm.

10 Approximation: Example (1)
We consider the Integer Knapsack Problem
– Same as the 0-1 Knapsack Problem, but we can select any number of each item (that is, an unlimited number of each item is available)

11 Approximation: Example (2)
We can assume that
– aj ≤ b for all items j
– c1/a1 ≥ cj/aj for all items j
– (That is, the first item is the one that gives the most "bang per buck")
We will show that a greedy solution to this problem gives a value that is at least half of the optimal value.

12 Approximation: Example (3)
The first step of a Greedy Algorithm will create the following solution:
– x1 = ⌊b/a1⌋ (as many copies of the best item as fit), with all other variables 0 so far
We could imagine that some of the other variables become non-zero as well in later steps (there may be leftover capacity after fixing x1, and some smaller items may fill the gap that is left).

13 Approximation: Example (4)
Now, the Linear Programming relaxation of the problem will have the following solution:
– x1 = b/a1
– xj = 0 for all j = 2, ..., n
We let the value of the greedy heuristic be zH.
We let the value of the LP relaxation be zLP.
We should show that zH/z > ½, where z is the optimal value.

14 Approximation: Example (5)
The proof goes as follows. Write b/a1 = ⌊b/a1⌋ + f, where 0 ≤ f < 1. Then
– zH ≥ c1⌊b/a1⌋ (the greedy solution takes at least ⌊b/a1⌋ copies of item 1)
– z ≤ zLP = c1(b/a1) = c1(⌊b/a1⌋ + f) < c1(⌊b/a1⌋ + 1) ≤ 2c1⌊b/a1⌋ ≤ 2zH
The step c1(⌊b/a1⌋ + 1) ≤ 2c1⌊b/a1⌋ uses ⌊b/a1⌋ ≥ 1, which holds because a1 ≤ b. Hence zH/z > ½.
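A small empirical check of this bound (illustrative code, not from the slides): on random instances with all aj ≤ b, the greedy value is never less than half the LP relaxation value, and hence never less than half the optimum.

```python
import random

def greedy_integer_knapsack(c, a, b):
    """Greedy heuristic for the integer knapsack: consider items by
    decreasing c[j]/a[j] and take as many copies of each as still fit."""
    order = sorted(range(len(c)), key=lambda j: c[j] / a[j], reverse=True)
    value, remaining = 0, b
    for j in order:
        copies = remaining // a[j]     # how many copies of item j still fit
        value += copies * c[j]
        remaining -= copies * a[j]
    return value

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 8)
    a = [random.randint(1, 20) for _ in range(n)]
    c = [random.randint(1, 50) for _ in range(n)]
    b = random.randint(max(a), 100)    # ensures a[j] <= b for every item
    z_h = greedy_integer_knapsack(c, a, b)
    z_lp = max(c[j] / a[j] for j in range(n)) * b   # LP relaxation value
    assert 2 * z_h >= z_lp             # hence z_H >= z_LP / 2 >= z / 2
```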

15 Approximation: Summary
It is important to note that the analysis depends on finding
– A lower bound on the optimal value
– An upper bound on the optimal value
The practical importance of such an analysis might not be too high
– Bounds are usually not very good, and alternative heuristics will often work much better

16 GRASP
Greedy Randomized Adaptive Search Procedures
A Metaheuristic that is based on Greedy Algorithms
– A constructive approach
– A multi-start approach
– Includes (optionally) a local search to improve the constructed solutions

17 Spelling out GRASP
Greedy: Select the best choice (or among the best choices)
Randomized: Use some probabilistic selection to prevent the same solution from being constructed every time
Adaptive: Change the evaluation of choices after making each decision
Search Procedure: It is a heuristic algorithm for examining the solution space

18 Two Phases of GRASP
GRASP is an iterative process, in which each iteration has two phases:
Construction
– Build a feasible solution (from scratch) in the same way as a Greedy Algorithm would, but with some randomization
Improvement
– Improve the solution by using some Local Search (Best/First Improvement)
The best overall solution is retained (a sketch of the main loop is given below).
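A minimal sketch of the two-phase loop; construct, local_search, and evaluate are illustrative placeholders for the problem-specific components (maximization assumed):

```python
def grasp(construct, local_search, evaluate, iterations=100, alpha=0.3):
    """GRASP main loop: greedy-randomized construction followed by local
    search, repeated; the best solution found overall is retained."""
    best, best_value = None, float("-inf")
    for _ in range(iterations):
        solution = construct(alpha)        # phase 1: randomized greedy construction
        solution = local_search(solution)  # phase 2: improvement
        value = evaluate(solution)
        if value > best_value:
            best, best_value = solution, value
    return best, best_value
```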

19 The Constructive Phase (1)
(The original slide gives the pseudo-code of the construction phase as a figure; the steps are spelled out on the next slide, and a sketch follows below it.)

20 The Constructive Phase (2)
Each step is both Greedy and Randomized:
– First, we build a Restricted Candidate List (RCL); the RCL contains the best elements that we can add to the solution
– Then we randomly select one of the elements in the RCL
– We then need to reevaluate the remaining elements (their evaluation should change as a result of the recent change in the partial solution), and repeat
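A hedged Python sketch of this construction phase, using the value-based RCL rule defined on the next slides and the slides' convention that α = 1 is purely greedy; the callable names are illustrative, and scores are assumed nonnegative (maximization).

```python
import random

def grasp_construct(candidates, feasible, greedy_value, alpha):
    """Greedy randomized construction (a sketch).

    greedy_value(solution, e) scores adding e to the partial solution;
    scores are assumed nonnegative, and alpha = 1 means purely greedy.
    """
    solution = []
    remaining = set(candidates)
    while remaining:
        # Greedy: evaluate every remaining element against the current partial solution.
        scores = {e: greedy_value(solution, e) for e in remaining}
        best = max(scores.values())
        # Restricted Candidate List: all elements within the alpha threshold.
        rcl = [e for e, s in scores.items() if s >= alpha * best]
        # Randomized: pick any RCL element, not necessarily the best.
        chosen = random.choice(rcl)
        remaining.discard(chosen)
        if feasible(solution, chosen):
            solution.append(chosen)
        # Adaptive: the loop repeats and all scores are reevaluated.
    return solution
```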

21 The Restricted Candidate List (1)
Assume we have evaluated all the possible elements that can be added to the solution.
There are two ways to generate a restricted list:
– Based on rank
– Based on value
In each case, we introduce a parameter α that controls how large the RCL will be:
– Include the (1-α)% of elements with the highest rank
– Include all elements that have a value within α% of the best element
(Both rules are sketched in code below.)
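The two rules in a hedged Python fragment; scores is assumed to be a dict mapping candidate elements to their (nonnegative) greedy values, as in the construction sketch above, and the "at least one element" floor in the rank rule is an added assumption to keep the list non-empty:

```python
def rcl_by_rank(scores, alpha):
    """Rank-based RCL: keep the top (1 - alpha) fraction of candidates,
    but always at least one (an assumption, to avoid an empty list)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    k = max(1, int(round((1 - alpha) * len(ranked))))
    return ranked[:k]

def rcl_by_value(scores, alpha):
    """Value-based RCL: keep candidates within the alpha threshold of the
    best value (with this convention, alpha = 1 is purely greedy)."""
    best = max(scores.values())
    return [e for e, s in scores.items() if s >= alpha * best]
```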

22 The Restricted Candidate List (2)
In general:
– A small RCL leads to a small variance in the values of the constructed solutions
– A large RCL leads to worse average solution values, but a larger variance
High values of α (α = 1) result in a purely greedy construction.
Low values of α (α = 0) result in a purely random construction.

23 The Restricted Candidate List (3)
(The original slide shows this content as a figure.)

24 The Restricted Candidate List (4)
The role of α is thus critical.
Usually, a good choice will be to modify the value of α during the search
– Randomly
– Based on results
The approach where α is adjusted based on previous results is called "Reactive GRASP"
– The probability distribution over the values of α changes based on the performance of each value of α (a sketch follows below)
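A hedged sketch of the Reactive GRASP idea: keep selection probabilities over a discrete set of α values and periodically reweight them by the average solution quality each value produced. The update rule and all names are illustrative assumptions, not the slides' exact method; solution values are assumed nonnegative.

```python
import random

def reactive_grasp_alphas(run_iteration, alphas=(0.1, 0.3, 0.5, 0.7, 0.9),
                          iterations=200, update_every=50):
    """run_iteration(alpha) -> value of one construct+improve GRASP pass."""
    sums = {a: 0.0 for a in alphas}
    counts = {a: 0 for a in alphas}
    probs = {a: 1.0 for a in alphas}          # start uniform (unnormalized)
    best = float("-inf")
    for it in range(1, iterations + 1):
        alpha = random.choices(list(alphas),
                               weights=[probs[a] for a in alphas])[0]
        value = run_iteration(alpha)
        sums[alpha] += value
        counts[alpha] += 1
        best = max(best, value)
        if it % update_every == 0:
            # Reweight each alpha by its average solution quality, with a
            # small floor so every alpha keeps a chance of being chosen.
            avg = {a: (sums[a] / counts[a]) if counts[a] else 0.0 for a in alphas}
            total = sum(avg.values()) or 1.0
            probs = {a: max(avg[a] / total, 0.01) for a in alphas}
    return best
```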

25 Effect of α on Local Search
(The original slide shows this content as a figure.)

26 GRASP vs. Other Methods (1)
GRASP is the first pure constructive method that we have seen.
However, GRASP can be compared to Local Search based methods in some respects.
That is, a GRASP can sometimes be interpreted as a Local Search where the entire solution is destroyed (emptied) whenever a local optimum is reached
– The construction reaches a "local optimum" when no more elements can be added

27 GRASP vs. Other Methods (2)
In this sense, we can classify GRASP as
– Memoryless (not using adaptive memory)
– Randomized (not systematic)
– Operating on one solution (not a population)
Potential improvements of GRASP would involve adding some memory
– Many improvements have been suggested, but not too many have been implemented/tested
– There is still room for doing research in this area

28 Squeaky Wheel Optimization
"If it's not broken, don't fix it."
Often used in constructive metaheuristics:
– Inspect the constructed (complete) solution
– If it has any flaws, focus on fixing these in the next constructive run
(A sketch follows below.)
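A hedged sketch of this construct-inspect-reprioritize cycle; construct and blame are illustrative placeholders, and the additive priority update is one simple choice, not necessarily the slides' exact rule.

```python
def squeaky_wheel(elements, construct, blame, iterations=50):
    """Squeaky wheel optimization sketch.

    construct(priorities) -> complete solution, built by a greedy pass that
                             handles high-priority elements first
    blame(solution) -> dict mapping each element to a nonnegative "flaw" score
    """
    priorities = {e: 0.0 for e in elements}
    best, best_flaws = None, float("inf")
    for _ in range(iterations):
        solution = construct(priorities)
        flaws = blame(solution)             # inspect the complete solution
        total = sum(flaws.values())
        if total < best_flaws:
            best, best_flaws = solution, total
        # Elements that caused trouble get higher priority in the next run.
        for e, f in flaws.items():
            priorities[e] += f
    return best
```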

29 Ruin and Recreate
Also called Very Large Neighborhood Search.
Given a solution, destroy part of it
– Randomly
– Geographically
– Along other dimensions
Rebuild greedily
– Can also use GRASP-like ideas
Can be interspersed with local search (metaheuristics)
(A sketch follows below.)
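A hedged sketch of the ruin-and-recreate loop; remove_from, rebuild, and evaluate are illustrative placeholders for the problem-specific destroy and greedy (or GRASP-style) rebuild steps (maximization assumed).

```python
def ruin_and_recreate(initial, remove_from, rebuild, evaluate,
                      ruin_fraction=0.25, iterations=100):
    """Ruin-and-recreate loop sketch; the solution is assumed to be a
    collection of elements so that part of it can be removed.

    remove_from(solution, k) -> (partial_solution, removed_elements)
    rebuild(partial_solution, removed_elements) -> complete solution
    """
    best = current = initial
    for _ in range(iterations):
        k = max(1, int(ruin_fraction * len(current)))
        partial, removed = remove_from(current, k)   # ruin: destroy part of it
        current = rebuild(partial, removed)          # recreate: rebuild greedily
        if evaluate(current) > evaluate(best):
            best = current
    return best
```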

30 Summary of Today's Lecture
Greedy Algorithms
– A class of heuristics
Approximation Algorithms
– Do not prove optimality, but return a solution that is guaranteed to be within a certain distance of the optimal value
GRASP
– Greedy Randomized Adaptive Search Procedure
Other
– Squeaky Wheel
– Ruin and Recreate
– Very Large Neighborhood Search