Heuristic Optimization Methods: Greedy Algorithms, Approximation Algorithms, and GRASP
Agenda
Greedy Algorithms
– A class of heuristics
Approximation Algorithms
– Do not prove optimality, but return a solution that is guaranteed to be within a certain distance of the optimal value
GRASP
– Greedy Randomized Adaptive Search Procedure
Other
– Squeaky Wheel
– Ruin and Recreate
– Very Large Neighborhood Search
Greedy Algorithms
We have previously studied Local Search Algorithms, which can produce heuristic solutions to difficult optimization problems
Another way of producing heuristic solutions is to apply Greedy Algorithms
The idea of a Greedy Algorithm is to construct a solution from scratch, choosing at each step the item that brings the "best" immediate reward
Greedy Example (1)
0-1 Knapsack Problem:
– Maximize: 12x_1 + 8x_2 + 17x_3 + 11x_4 + 6x_5 + 2x_6 + 2x_7
– Subject to: 4x_1 + 3x_2 + 7x_3 + 5x_4 + 3x_5 + 2x_6 + 3x_7 ≤ 9
– With x binary
Notice that the variables are ordered such that
– c_j/a_j ≥ c_{j+1}/a_{j+1}
– Item j gives more "bang per buck" than item j+1
Greedy Example (2)
The greedy solution is to consider each item in turn, and to put it in the knapsack if there is enough room, starting with the variable that gives the most "bang per buck":
– x_1 = 1 (enough space, and best remaining item)
– x_2 = 1 (enough space, and best remaining item)
– x_3 = x_4 = x_5 = 0 (not enough space for any of them)
– x_6 = 1 (enough space, and best remaining item)
– x_7 = 0 (not enough space)
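As a small illustration, a sketch of this greedy rule in Python for the instance above (the function name and layout are illustrative, not part of the slides):

```python
# Greedy heuristic for the 0-1 knapsack instance above.
values  = [12, 8, 17, 11, 6, 2, 2]   # c_j
weights = [4, 3, 7, 5, 3, 2, 3]      # a_j
capacity = 9                         # b

def greedy_knapsack(values, weights, capacity):
    # Consider items by "bang per buck" (value per unit weight), best first.
    order = sorted(range(len(values)),
                   key=lambda j: values[j] / weights[j], reverse=True)
    x = [0] * len(values)
    remaining = capacity
    for j in order:
        if weights[j] <= remaining:   # take the item only if it still fits
            x[j] = 1
            remaining -= weights[j]
    return x, sum(values[j] * x[j] for j in range(len(values)))

print(greedy_knapsack(values, weights, capacity))
# -> ([1, 1, 0, 0, 0, 1, 0], 22), i.e. x_1 = x_2 = x_6 = 1 as on the slide
```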
Formalized Greedy Algorithm (1)
Let us assume that we can write our combinatorial optimization problem as the selection of a subset S of a ground set of elements, subject to feasibility constraints
For example, the 0-1 Knapsack Problem:
– (S will be the set of items not in the knapsack)
Formalized Greedy Algorithm (2)
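As a rough sketch of such a formalized greedy scheme (the names ground_set, is_feasible and score are assumptions for the illustration): repeatedly pick the remaining element with the best immediate reward, and keep it only if the partial solution stays feasible.

```python
def greedy(ground_set, is_feasible, score):
    """Generic greedy construction: take the element with the best immediate
    reward at each step, keeping it only if the partial solution stays feasible."""
    solution = set()
    candidates = set(ground_set)
    while candidates:
        # Pick the candidate with the best immediate reward.
        best = max(candidates, key=lambda e: score(e, solution))
        candidates.remove(best)
        if is_feasible(solution | {best}):
            solution.add(best)
    return solution
```

For the knapsack example, score would be the value-to-weight ratio c_j/a_j and is_feasible would check the capacity constraint.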
Adapting Greedy Algorithms
Greedy Algorithms have to be adapted to the particular problem structure
– Just like Local Search Algorithms
For a given problem there can be many different Greedy Algorithms
– TSP: "nearest neighbor" (a sketch is given below), "pure greedy" (select the shortest edges first)
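A minimal sketch of the nearest-neighbor construction for the TSP, assuming a symmetric distance matrix (the names are illustrative):

```python
def nearest_neighbor_tour(dist, start=0):
    """Nearest-neighbor heuristic for the TSP: from the current city, always
    travel to the closest city that has not been visited yet."""
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        current = tour[-1]
        nxt = min(unvisited, key=lambda city: dist[current][city])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour  # the tour closes by returning from the last city to start
```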
Approximation Algorithms
We remember three classes of algorithms:
– Exact (returns the optimal solution)
– Approximation (returns a solution within a certain distance from the optimal value)
– Heuristic (returns a hopefully good solution, but with no guarantees)
For Approximation Algorithms, we need some kind of proof that the algorithm returns a value within some bound
We will look at an example of a Greedy Algorithm that is also an Approximation Algorithm
Approximation: Example (1)
We consider the Integer Knapsack Problem
– Same as the 0-1 Knapsack Problem, but we can select any number of each item (that is, we have an unlimited number of each item available)
Approximation: Example (2)
We can assume that
– a_j ≤ b for all items j
– c_1/a_1 ≥ c_j/a_j for all items j
– (That is, the first item is the one that gives the most "bang per buck")
We will show that a greedy solution to this problem gives a value that is at least half of the optimal value
Approximation: Example (3)
The first step of a Greedy Algorithm will create the following solution:
– x_1 = ⌊b/a_1⌋ (as many copies of the best item as will fit)
– x_j = 0 for all j = 2, ..., n
We could imagine that some of the other variables end up non-zero as well (if x_1 is very large, there may be smaller items that can fill the gap that is left)
Approximation: Example (4)
Now, the Linear Programming Relaxation of the problem will have the following solution:
– x_1 = b/a_1
– x_j = 0 for all j = 2, ..., n
We let the value of the greedy heuristic be z_H
We let the value of the LP-relaxation be z_LP
We want to show that z_H/z > ½, where z is the optimal value
Approximation: Example (5)
The proof compares z_H with the LP bound z_LP, writing b/a_1 = ⌊b/a_1⌋ + f for some 0 ≤ f < 1 (see the worked derivation below)
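A worked version of the bound, following the standard argument with the greedy and LP values from the previous slides:

```latex
% z_H: greedy value, z_LP: LP relaxation value, z: optimal value.
% Write b/a_1 = \lfloor b/a_1 \rfloor + f with 0 \le f < 1.
\begin{align*}
  z_H &\ge c_1 \lfloor b/a_1 \rfloor, \\
  z &\le z_{LP} = c_1 \frac{b}{a_1} = c_1 \bigl( \lfloor b/a_1 \rfloor + f \bigr).
\end{align*}
% Since a_1 \le b, we have \lfloor b/a_1 \rfloor \ge 1 > f, so:
\[
  z \le c_1 \bigl( \lfloor b/a_1 \rfloor + f \bigr)
    < 2\, c_1 \lfloor b/a_1 \rfloor
    \le 2 z_H
  \quad\Longrightarrow\quad
  \frac{z_H}{z} > \frac{1}{2}.
\]
```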
Approximation: Summary
It is important to note that the analysis depends on finding
– A lower bound on the optimal value
– An upper bound on the optimal value
The practical importance of such analysis might not be too high
– Bounds are usually not very good, and alternative heuristics will often work much better
GRASP
Greedy Randomized Adaptive Search Procedures
A Metaheuristic that is based on Greedy Algorithms
– A constructive approach
– A multi-start approach
– Includes (optionally) a local search to improve the constructed solutions
Spelling out GRASP
Greedy: Select the best choice (or one among the best choices)
Randomized: Use some probabilistic selection to prevent the same solution from being constructed every time
Adaptive: Change the evaluation of the choices after making each decision
Search Procedure: It is a heuristic algorithm for examining the solution space
Two Phases of GRASP
GRASP is an iterative process, in which each iteration has two phases (a sketch of the overall loop is given below)
Construction
– Build a feasible solution (from scratch) in the same way as a Greedy Algorithm would, but with some randomization
Improvement
– Improve the solution by using some Local Search (Best/First Improvement)
The best overall solution is retained
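A minimal sketch of the two-phase loop, assuming a construction routine, a local search and an evaluation function are available (all names are placeholders; a construction sketch follows on the next slides):

```python
import random

def grasp(construct, local_search, evaluate, iterations=100, seed=None):
    """Generic GRASP loop: repeatedly construct a randomized greedy solution,
    improve it with local search, and keep the best solution found."""
    rng = random.Random(seed)
    best, best_value = None, float("-inf")   # maximization is assumed here
    for _ in range(iterations):
        solution = construct(rng)            # greedy randomized construction
        solution = local_search(solution)    # improvement phase
        value = evaluate(solution)
        if value > best_value:
            best, best_value = solution, value
    return best, best_value
```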
The Constructive Phase (1)
The Constructive Phase (2)
Each step is both Greedy and Randomized (see the sketch below)
First, we build a Restricted Candidate List
– The RCL contains the best elements that we can add to the solution
Then we randomly select one of the elements in the Restricted Candidate List
We then need to re-evaluate the remaining elements (their evaluation can change as a result of the latest addition to the partial solution), and repeat
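A sketch of the construction phase, assuming each candidate element can be scored against the current partial solution (candidates, score, is_feasible and alpha are illustrative names; the value-based RCL rule from the next slides is used):

```python
import random

def greedy_randomized_construction(candidates, score, is_feasible, alpha, rng):
    """Build one solution: at each step score the remaining candidates, keep the
    best ones in a Restricted Candidate List (RCL), and add a random RCL element."""
    solution = set()
    remaining = set(candidates)
    while remaining:
        scores = {e: score(e, solution) for e in remaining}   # (re)evaluate
        best = max(scores.values())
        # Value-based RCL: keep elements scoring at least alpha * best
        # (alpha = 1 is purely greedy, alpha = 0 is purely random),
        # assuming nonnegative scores and a maximization problem.
        rcl = [e for e in remaining if scores[e] >= alpha * best]
        chosen = rng.choice(rcl)
        remaining.remove(chosen)
        if is_feasible(solution | {chosen}):
            solution.add(chosen)
    return solution
```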
The Restricted Candidate List (1)
Assume we have evaluated all the possible elements that can be added to the solution
There are two ways of generating a restricted list (both are sketched below)
– Based on rank
– Based on value
In each case, we introduce a parameter α that controls how large the RCL will be
– Include the best (1−α)·100% of the elements, ranked by value
– Include all elements whose value is at least α times the value of the best element
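A small sketch of the two RCL rules, assuming a maximization problem with nonnegative candidate scores and the interpretation of α used on these slides (α = 1 purely greedy, α = 0 purely random); scored is a list of (element, score) pairs:

```python
import math

def rcl_by_rank(scored, alpha):
    """Rank-based RCL: keep the best (1 - alpha) share of the candidates."""
    ranked = sorted(scored, key=lambda item: item[1], reverse=True)
    size = max(1, math.ceil((1 - alpha) * len(ranked)))   # keep at least one element
    return [element for element, _ in ranked[:size]]

def rcl_by_value(scored, alpha):
    """Value-based RCL: keep candidates scoring at least alpha times the best score."""
    best = max(score for _, score in scored)
    return [element for element, score in scored if score >= alpha * best]
```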
The Restricted Candidate List (2)
In general:
– A small RCL leads to a small variance in the values of the constructed solutions
– A large RCL leads to worse average solution values, but a larger variance
– High values of α (α = 1) result in a purely greedy construction
– Low values of α (α = 0) result in a purely random construction
The Restricted Candidate List (3)
The Restricted Candidate List (4)
The role of α is thus critical
Usually, a good choice is to modify the value of α during the search
– Randomly
– Based on results
The approach where α is adjusted based on previous results is called "Reactive GRASP" (a sketch follows below)
– The probability distribution over the values of α changes based on the performance of each value of α
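A minimal sketch of the reactive idea: α is drawn from a small set of values, and each value's selection probability is made proportional to the average solution quality it has produced so far (the concrete update rule here is one simple choice among several used in practice, and nonnegative solution values are assumed):

```python
import random

class ReactiveAlpha:
    """Keep a discrete set of alpha values and bias the choice towards the
    values that have produced the best solutions so far."""
    def __init__(self, alphas=(0.1, 0.3, 0.5, 0.7, 0.9), seed=None):
        self.alphas = list(alphas)
        self.totals = [0.0] * len(self.alphas)   # sum of solution values per alpha
        self.counts = [0] * len(self.alphas)     # number of times each alpha was used
        self.rng = random.Random(seed)

    def sample(self):
        # Average quality per alpha; values not tried yet get a neutral weight of 1.
        averages = [t / c if c else 1.0 for t, c in zip(self.totals, self.counts)]
        index = self.rng.choices(range(len(self.alphas)), weights=averages, k=1)[0]
        return index, self.alphas[index]

    def update(self, index, solution_value):
        self.totals[index] += solution_value
        self.counts[index] += 1
```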
Effect of α on Local Search
GRASP vs. Other Methods (1)
GRASP is the first pure constructive method that we have seen
However, GRASP can be compared to Local Search based methods in some aspects
That is, a GRASP can sometimes be interpreted as a Local Search where the entire solution is destroyed (emptied) whenever a local optimum is reached
– The construction reaches a local optimum when no more elements can be added
GRASP vs. Other Methods (2)
In this sense, we can classify GRASP as
– Memoryless (not using adaptive memory)
– Randomized (not systematic)
– Operating on 1 solution (not a population)
Potential improvements of GRASP would involve adding some memory
– Many improvements have been suggested, but not too many have been implemented/tested
– There is still room for research in this area
Squeaky Wheel Optimization
"If it's not broken, don't fix it."
Often used in constructive meta-heuristics (a sketch follows below)
– Inspect the constructed (complete) solution
– If it has any flaws, focus on fixing these in the next constructive run
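A rough sketch of the construct–analyze–prioritize loop behind this idea, assuming a constructor that respects a given element ordering and a blame function that scores how badly each element came out; as a simplification, the most troublesome elements are simply sorted to the front for the next run (all names are illustrative):

```python
def squeaky_wheel(elements, construct, blame, iterations=50):
    """Repeatedly construct a solution, then move the elements that came out
    worst ("the squeaky wheels") to the front of the construction order."""
    order = list(elements)
    best, best_blame = None, float("inf")
    for _ in range(iterations):
        solution = construct(order)          # greedy construction in this order
        scores = blame(solution)             # dict: element -> how badly it came out
        total = sum(scores.values())
        if total < best_blame:
            best, best_blame = solution, total
        # Elements with the most blame are handled first in the next run.
        order.sort(key=lambda e: scores.get(e, 0.0), reverse=True)
    return best
```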
Ruin and Recreate
Also called Very Large Neighborhood Search
Given a solution, destroy part of it
– Randomly
– Geographically
– Along other dimensions
Rebuild greedily (a sketch follows below)
– Can also use GRASP-like ideas
Can be interspersed with local search (meta-heuristics)
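A minimal sketch of a ruin-and-recreate loop, assuming a routine that removes part of the solution and a greedy routine that reinserts the removed elements (the names and the simple acceptance rule are illustrative):

```python
import random

def ruin_and_recreate(initial, ruin, recreate, evaluate, iterations=100, seed=None):
    """Repeatedly destroy part of the current solution, rebuild it greedily,
    and keep the new solution if it is at least as good (maximization assumed)."""
    rng = random.Random(seed)
    current = initial
    current_value = evaluate(current)
    for _ in range(iterations):
        partial, removed = ruin(current, rng)    # e.g. drop a random or geographic subset
        candidate = recreate(partial, removed)   # greedy (or GRASP-like) reinsertion
        value = evaluate(candidate)
        if value >= current_value:
            current, current_value = candidate, value
    return current, current_value
```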
Summary of Today's Lecture
Greedy Algorithms
– A class of heuristics
Approximation Algorithms
– Do not prove optimality, but return a solution that is guaranteed to be within a certain distance of the optimal value
GRASP
– Greedy Randomized Adaptive Search Procedure
Other
– Squeaky Wheel
– Ruin and Recreate
– Very Large Neighborhood Search