MAE 552 – Heuristic Optimization Lecture 5 February 1, 2002.


Traditional Search Methods Exhaustive search checks each and every solution in the search space until the best solution has been found. It works well and is easy to code for small search problems, but for large problems it is not practical. Recall that for the 50-city TSP there are 49!/2 ≈ 3×10^62 possible tours. If evaluating each tour took only 10^-9 seconds, it would still take more than 10^45 years to evaluate them all!

Exhaustive Search Methods Exhaustive search requires only that all of the solutions be generated in some systematic way. The order in which the solutions are evaluated is irrelevant, because all of them have to be looked at. The basic question is: how can we generate all the solutions to a particular problem?

Exhaustive Search Methods For the SAT problem, the structure of the problem may allow some solutions to be pruned so that they do not have to be searched. [Figure: a binary tree enumerating assignments, branching on x1 = T/F at the root, then x2 = T/F, then x3 = T/F, and so on.]

Exhaustive Search Methods Pruning is possible if, after examining the values of only a few variables, it is clear that the candidate cannot be optimal. Suppose, for instance, that the SAT expression contained clauses requiring both x1 and x2 to be TRUE.

Exhaustive Search Methods All the branches with either x1 or x2 FALSE could then be pruned without ever looking at x3, leaving a much smaller search space. Each remaining node is traversed in order in a depth-first search.
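A minimal sketch of this pruned depth-first enumeration (the clause encoding and variable ordering below are illustrative choices, not from the slides): clauses are lists of signed literals, and a whole subtree is abandoned as soon as some clause is already falsified by the partial assignment.

```python
def falsified(clause, assignment):
    """A clause is falsified when every one of its literals is assigned and FALSE."""
    for lit in clause:
        var, want = abs(lit), lit > 0
        if var not in assignment or assignment[var] == want:
            return False
    return True

def dfs_sat(clauses, n_vars, assignment=None):
    """Depth-first search over truth assignments, pruning infeasible subtrees."""
    if assignment is None:
        assignment = {}
    if any(falsified(c, assignment) for c in clauses):
        return None  # prune: no completion of this partial assignment can work
    if len(assignment) == n_vars:
        return dict(assignment)  # nothing falsified at a full assignment => satisfying
    var = len(assignment) + 1  # branch on the next variable, x1 first
    for value in (True, False):
        assignment[var] = value
        result = dfs_sat(clauses, n_vars, assignment)
        if result is not None:
            return result
        del assignment[var]
    return None

# Unit clauses (x1) and (x2) force both TRUE, so every branch with either
# FALSE is pruned without x3 ever being examined.
clauses = [[1], [2], [-1, -3]]
print(dfs_sat(clauses, 3))  # {1: True, 2: True, 3: False}
```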

Enumerating the TSP For some problems, including the TSP, you cannot simply generate all possible permutations, because many will not be feasible. For example, in many TSP instances not all of the cities are fully connected, so generating random permutations will produce many invalid tours.

Enumerating the TSP A tour that uses a missing edge, for instance, would not be valid. Possibilities for handling this include penalizing invalid candidates, or generating new candidates from valid ones by swapping adjacent pairs of cities.
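A brute-force enumeration of this kind can be sketched as follows (the city names and distances below are made up for illustration): generate every permutation from a fixed start city, discard tours that use a missing edge, and keep the cheapest valid one.

```python
from itertools import permutations

def best_tour(cities, dist):
    """Enumerate all tours from a fixed start city, skipping infeasible ones.

    `dist` maps (a, b) pairs to edge length; a pair absent in both
    orientations means those two cities are not connected."""
    start, rest = cities[0], cities[1:]
    best, best_cost = None, float("inf")
    for perm in permutations(rest):
        tour = (start,) + perm + (start,)
        cost = 0
        for a, b in zip(tour, tour[1:]):
            edge = dist.get((a, b)) or dist.get((b, a))
            if edge is None:
                cost = None  # tour uses a missing edge: infeasible
                break
            cost += edge
        if cost is not None and cost < best_cost:
            best, best_cost = tour, cost
    return best, best_cost

# Hypothetical 4-city instance with one missing edge (no road between B and D).
dist = {("A", "B"): 5, ("A", "C"): 6, ("A", "D"): 4,
        ("B", "C"): 3, ("C", "D"): 7}
print(best_tour(["A", "B", "C", "D"], dist))  # (('A', 'B', 'C', 'D', 'A'), 19)
```

With n cities this examines (n-1)! permutations, which is exactly the explosion the previous slides warned about; the feasibility check only prunes after a full candidate is generated.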

Enumerating the NLP It is not strictly possible to enumerate all the possible candidates in a continuous problem (there are infinitely many solutions). The continuous domain can, however, be divided into a finite number of intervals, and there can be multiple levels to the enumeration. [Figure: the (x1, x2) design space divided into a grid of cells.]

Enumerating the NLP Step 1: Evaluate the center of each cell. Step 2: Take the best cell and re-subdivide it. [Figure: the grid from the previous slide, with the best cell subdivided into a finer grid.]
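These two steps can be sketched as a recursive grid search in two dimensions (the objective, grid resolution, and number of refinement levels below are illustrative choices):

```python
def grid_search(f, lo, hi, divisions=10, levels=3):
    """Evaluate the center of every cell in a 2-D grid, then re-subdivide
    the best cell at the next level of resolution."""
    (x_lo, y_lo), (x_hi, y_hi) = lo, hi
    best = None
    for _ in range(levels):
        dx = (x_hi - x_lo) / divisions
        dy = (y_hi - y_lo) / divisions
        best = None
        for i in range(divisions):
            for j in range(divisions):
                # Step 1: evaluate the center of cell (i, j)
                x = x_lo + (i + 0.5) * dx
                y = y_lo + (j + 0.5) * dy
                v = f(x, y)
                if best is None or v < best[0]:
                    best = (v, x, y)
        # Step 2: the best cell becomes the search region for the next level
        _, bx, by = best
        x_lo, x_hi = bx - dx / 2, bx + dx / 2
        y_lo, y_hi = by - dy / 2, by + dy / 2
    return best

# Minimize a simple bowl; the true minimum is at (1, -2).
val, x, y = grid_search(lambda x, y: (x - 1)**2 + (y + 2)**2,
                        lo=(-5.0, -5.0), hi=(5.0, 5.0))
print(round(x, 1), round(y, 1))  # 1.0 -2.0
```

Note the coarseness risk the next slide describes: if the true optimum sits in a cell whose center looks poor at the coarse level, the refinement never visits it.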

Enumerating the NLP There are several disadvantages to enumerating an NLP problem. 1. In order to really cover the design space, the granularity of the cells must be very fine. 2. A coarse granularity increases the probability that the best solution will not be discovered, i.e., the cell that contains the best solution does not show up because the level was too coarse. 3. When there are a large number of variables, this becomes impractical: n = 50 variables, each divided into 100 intervals, yields 100^50 = 10^100 cells!

Greedy Algorithms Greedy algorithms attack a problem by constructing the complete solution in a series of steps. The general idea is to assign values to the decision variables one at a time. At each step, the algorithm makes the best available decision, i.e., the choice with the greatest immediate profit, which is what makes it 'greedy'. Unfortunately, making the best decision at each step does not necessarily result in the best solution overall.

Greedy Algorithms and the SAT A simple greedy algorithm for the SAT problem is as follows: for each variable from 1 to n, in any order, assign the truth value that satisfies the greatest number of currently unsatisfied clauses; if there is a tie, choose one of the best options at random. At every step this greedy algorithm tries to satisfy the largest number of unsatisfied clauses. Example: starting with x1, set x1 = TRUE, as this satisfies 3 clauses. Unfortunately, the first clause of the formula can then never be satisfied with x1 = TRUE.

Greedy Algorithms and the SAT There are heuristics that can be applied to improve this simple greedy algorithm. It fails because it does not consider the variables in the right order: we could instead start with the variables that occur in only a few clauses (x2, x3, x4, ...), leaving the more commonly occurring variables for later. The improved greedy algorithm is then: 1. Sort the variables on the basis of their frequency, from smallest to largest. 2. For each variable, taken in that order, assign the value that satisfies the greatest number of unsatisfied clauses.
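A sketch of this frequency-ordered greedy assignment (the signed-literal clause encoding and the example formula are illustrative, not from the slides); being a heuristic, it can still fail, as the next slide shows:

```python
from collections import Counter

def greedy_sat(clauses, n_vars):
    """Assign variables rarest-first; each gets the truth value that
    satisfies the most clauses given the assignments made so far."""
    freq = Counter(abs(lit) for clause in clauses for lit in clause)
    order = sorted(range(1, n_vars + 1), key=lambda v: freq[v])
    assignment = {}

    def satisfied(clause):
        return any(assignment.get(abs(l)) == (l > 0) for l in clause)

    for var in order:
        scores = {}
        for value in (True, False):
            assignment[var] = value            # try this value...
            scores[value] = sum(1 for c in clauses if satisfied(c))
            del assignment[var]                # ...then undo the trial
        assignment[var] = scores[True] >= scores[False]
    return assignment, all(satisfied(c) for c in clauses)

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
assignment, ok = greedy_sat([[1, 2], [-1, 3], [-2, -3]], 3)
print(assignment, ok)  # {1: True, 2: False, 3: True} True
```

Counting all satisfied clauses for each trial value is equivalent to counting newly satisfied ones, since the already-satisfied count is the same for both choices.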

Greedy Algorithms and the SAT Now consider a SAT expression whose remaining clauses Z do not contain x1 or x2 but contain numerous instances of the other variables. Thus x1 is present in 3 clauses, x2 in 4 clauses, and the other variables occur with higher frequency. The improved greedy algorithm would assign x1 = TRUE because that satisfies 2 clauses, then x2 = TRUE because that satisfies an additional 2 clauses; but this violates the third and fourth clauses, and the algorithm fails.

Greedy Algorithms and the SAT You could add further heuristics to improve the greedy algorithm, but there will always be SAT problems that it cannot solve. In fact, there is no greedy algorithm that works for all SAT problems.

Greedy Algorithms and the TSP For the travelling salesman problem (TSP), the most intuitive greedy algorithm is as follows: starting from a random city, proceed to the nearest unvisited city until every city has been visited, then return to the starting city. [Figure: four cities A, B, C, D with the distances between them.]

Greedy Algorithms and the TSP Starting from city A and applying the greedy algorithm, the resultant tour is A-B-C-D-A, with cost 33. Can we find a better tour? [Figure: the greedy tour drawn on the four cities.]

Greedy Algorithms and the TSP By inspection, the tour A-C-B-D-A costs only 19, which is much better than the tour found by the greedy algorithm. Once again you can think of other heuristics that would improve the performance of the greedy algorithm in certain cases, but you can always find a counterexample on which it will fail. [Figure: the better tour A-C-B-D-A drawn on the four cities.]
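The nearest-neighbour rule can be sketched as below. The distance matrix is made up (it is not the one from the figure), but it is chosen to show the same failure mode: the greedy tour costs 16 while the tour A-B-D-C-A costs only 12.

```python
def nearest_neighbour_tour(start, dist):
    """Greedy TSP: always move to the closest unvisited city, then return home."""
    tour, current = [start], start
    unvisited = set(dist) - {start}
    while unvisited:
        # The greedy step: best-looking move given only the current city.
        current = min(unvisited, key=lambda c: dist[current][c])
        tour.append(current)
        unvisited.remove(current)
    tour.append(start)  # close the loop
    cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return tour, cost

# Hypothetical symmetric distances on four cities.
dist = {
    "A": {"B": 1, "C": 4, "D": 10},
    "B": {"A": 1, "C": 2, "D": 4},
    "C": {"A": 4, "B": 2, "D": 3},
    "D": {"A": 10, "B": 4, "C": 3},
}
tour, cost = nearest_neighbour_tour("A", dist)
print(tour, cost)  # ['A', 'B', 'C', 'D', 'A'] 16, yet A-B-D-C-A costs 12
```

The greedy choices A→B→C→D each look locally cheapest, but they leave the expensive D→A edge for last, which is exactly the myopia the slide describes.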

Greedy Algorithms and NLP For the NLP, a greedy algorithm considers one variable at a time. Start by choosing a random point. Change x1 until the objective function reaches an optimum with respect to it: the new point minimizes F(x1, x2, ..., xn) with x2, ..., xn held constant. Then hold x1 and x3, ..., xn constant and optimize with respect to x2. Repeat for all the x's. This algorithm works fine if there are no interactions between design variables; it is basically a series of line searches.
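This one-variable-at-a-time scheme can be sketched as a crude coordinate descent (the objective, step size, and sweep count below are illustrative; a real implementation would use a proper line search along each coordinate):

```python
def coordinate_descent(f, x, step=0.1, sweeps=5):
    """Optimize one variable at a time: walk each coordinate in whichever
    direction lowers f while all the other coordinates are held constant."""
    x = list(x)
    for _ in range(sweeps):
        for i in range(len(x)):
            while True:
                for delta in (step, -step):
                    trial = x[:]
                    trial[i] += delta
                    if f(trial) < f(x):
                        x = trial  # keep the improving move along coordinate i
                        break
                else:
                    break  # neither direction improves f: next coordinate
    return x

# Separable bowl: no interaction between the variables, so this works well.
best = coordinate_descent(lambda v: (v[0] - 1)**2 + (v[1] + 2)**2, [0.0, 0.0])
print([round(c, 1) for c in best])  # [1.0, -2.0], close to the true minimum
```

On a separable objective like this one, each line search lands on the coordinate-wise optimum; with interacting variables or a multimodal landscape, as the next slides note, the method stalls far from the true optimum.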

Greedy Algorithms and NLP For simple problems greedy algorithms can be effective.

Greedy Algorithms and NLP For multimodal problems, greedy algorithms are less effective.

Greedy Algorithms Greedy methods, whether applied to NLP, SAT, TSP, or many other domains, are conceptually simple: they make the best available move while considering only a portion of the problem. Unfortunately, they pay for their simplicity by failing to provide good solutions to problems with interacting parameters, and these are the problems most commonly faced by designers.