Unit VII: Coping with the Limitations of Algorithm Power

Solving NP-Complete Problems

At present, all known algorithms for NP-complete problems require time that is superpolynomial in the input size, and it is unknown whether any faster algorithms exist. The following techniques can be applied to such problems in general, and they often give rise to substantially faster algorithms:

Approximation: instead of searching for an optimal solution, search for an "almost" optimal one.

Randomization: use randomness to get a faster average running time, and allow the algorithm to fail with some small probability.
Restriction: by restricting the structure of the input (e.g., to planar graphs), faster algorithms are usually possible.
Parameterization: there are often fast algorithms if certain parameters of the input are fixed.
Heuristic: an algorithm that works "reasonably well" on many cases, but for which there is no proof that it is both always fast and always produces a good result. Meta-heuristic approaches are often used.

Tackling Difficult Combinatorial Problems

There are two principal approaches to tackling difficult combinatorial (NP-hard) problems:
Use a strategy that guarantees solving the problem exactly but does not guarantee finding a solution in polynomial time.
Use an approximation algorithm that can find an approximate (sub-optimal) solution in polynomial time.

Exact Solution Strategies

Exhaustive search (brute force): useful only for small instances.
Dynamic programming: applicable to some problems (e.g., the knapsack problem).
Backtracking: eliminates some unnecessary cases from consideration; yields solutions in reasonable time for many instances, but the worst case is still exponential.
Branch-and-bound: further refines the backtracking idea for optimization problems.

Backtracking

The principal idea is to construct solutions one component at a time and evaluate such partially constructed candidates as follows. If a partially constructed solution can be developed further without violating the problem's constraints, it is done by taking the first remaining legitimate option for the next component. If there is no legitimate option for the next component, no alternatives for any remaining component need to be considered. In this case, the algorithm backtracks to replace the last component of the partially constructed solution with its next option.

State-Space Tree

This kind of processing is often implemented by constructing a tree of choices being made, called the state-space tree. Its root represents an initial state before the search for a solution begins. The nodes of the first level in the tree represent the choices made for the first component of a solution, the nodes of the second level represent the choices for the second component, and so on. A node in a state-space tree is said to be promising if it corresponds to a partially constructed solution that may still lead to a complete solution; otherwise, it is called nonpromising.

Leaves represent either nonpromising dead ends or complete solutions found by the algorithm. If the current node turns out to be nonpromising, the algorithm backtracks to the node's parent to consider the next possible option for its last component. If there is no such option, it backtracks one more level up the tree, and so on.

Backtracking Algorithm

Construct the state-space tree:
nodes: partial solutions
edges: choices in extending partial solutions
Explore the state-space tree using depth-first search.
"Prune" nonpromising nodes: DFS stops exploring subtrees rooted at nodes that cannot lead to a solution and backtracks to such a node's parent to continue the search.
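
To make this concrete, here is a minimal generic backtracking sketch in Python (not from the original slides); the helper functions candidates_for, is_promising, and is_complete are hypothetical placeholders that a concrete problem would have to supply.

def solve(partial, candidates_for, is_promising, is_complete, solutions):
    """Generic backtracking over a state-space tree.

    partial        -- the partially constructed solution (a list of components)
    candidates_for -- returns the options for the next component of 'partial'
    is_promising   -- tells whether a partial solution can still be extended
    is_complete    -- tells whether a partial solution is a complete solution
    solutions      -- list collecting all complete solutions found
    """
    if is_complete(partial):
        solutions.append(list(partial))      # a leaf that is a complete solution
        return
    for option in candidates_for(partial):
        partial.append(option)               # try the next legitimate option
        if is_promising(partial):
            solve(partial, candidates_for, is_promising, is_complete, solutions)
        partial.pop()                        # backtrack: undo the last choice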

Example: n-Queens Problem

The problem is to place n queens on an n-by-n chessboard so that no two queens attack each other by being in the same row or in the same column or on the same diagonal.

n-Queens Problem

For the four-queens instance (n = 4), we start with the empty board and then place queen 1 in the first possible position of its row, which is in column 1 of row 1. Then we place queen 2, after trying unsuccessfully columns 1 and 2, in the first acceptable position for it, which is square (2, 3), the square in row 2 and column 3. This proves to be a dead end because there is no acceptable position for queen 3. So the algorithm backtracks and puts queen 2 in the next possible position, (2, 4). Then queen 3 is placed at (3, 2), which proves to be another dead end. The algorithm then backtracks all the way to queen 1 and moves it to (1, 2). Queen 2 then goes to (2, 4), queen 3 to (3, 1), and queen 4 to (4, 3), which is a solution to the problem.
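
For illustration, a compact backtracking solver for the n-queens problem (a sketch, not part of the original slides):

def solve_n_queens(n):
    """Return the first placement found by backtracking.

    The board is represented as a list 'cols' where cols[r] is the column
    of the queen in row r (rows and columns are 0-based).
    """
    cols = []

    def is_safe(col):
        row = len(cols)
        for r, c in enumerate(cols):
            if c == col or abs(c - col) == abs(r - row):
                return False          # same column or same diagonal
        return True

    def place(row):
        if row == n:
            return True               # all queens placed: a complete solution
        for col in range(n):          # try columns left to right
            if is_safe(col):
                cols.append(col)
                if place(row + 1):
                    return True
                cols.pop()            # backtrack: undo the last choice
        return False                  # dead end: no column works in this row

    return cols if place(0) else None

print(solve_n_queens(4))   # [1, 3, 0, 2], i.e. squares (1,2), (2,4), (3,1), (4,3)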


Hamiltonian Circuit Problem

We make vertex a the root of the state-space tree. The first component of our future solution, if it exists, is a first intermediate vertex of a Hamiltonian cycle to be constructed. Using alphabetical order to break the three-way tie among the vertices adjacent to a, we select vertex b. From b, the algorithm proceeds to c, then to d, then to e, and finally to f, which proves to be a dead end.

So the algorithm backtracks from f to e, then to d, and then to c, which provides the first alternative for the algorithm to pursue. Going from c to e eventually proves useless, and the algorithm has to backtrack from e to c and then to b. From there, it goes to the vertices f, e, c, and d, from which it can legitimately return to a, yielding the Hamiltonian circuit a, b, f, e, c, d, a. If we wanted to find another Hamiltonian circuit, we could continue this process by backtracking from the leaf of the solution found.
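
A small backtracking sketch for this problem (not from the slides). The adjacency lists at the bottom are an assumed instance chosen to be consistent with the walkthrough above, since the original graph is given only as a figure.

def hamiltonian_circuit(adj, start):
    """Backtracking search for a Hamiltonian circuit.

    adj   -- dict mapping each vertex to the list of its neighbors, listed in
             the order in which they should be tried (e.g., alphabetical)
    start -- the vertex chosen as the root of the state-space tree
    Returns a list of vertices forming the circuit, or None if none exists.
    """
    n = len(adj)
    path = [start]

    def extend():
        last = path[-1]
        if len(path) == n:
            # complete only if we can legitimately return to the start vertex
            return path + [start] if start in adj[last] else None
        for v in adj[last]:
            if v not in path:            # promising: v not visited yet
                path.append(v)
                result = extend()
                if result:
                    return result
                path.pop()               # dead end: backtrack
        return None

    return extend()

# Assumed adjacency lists consistent with the walkthrough above:
graph = {'a': ['b', 'c', 'd'], 'b': ['a', 'c', 'f'], 'c': ['a', 'b', 'd', 'e'],
         'd': ['a', 'c', 'e'], 'e': ['c', 'd', 'f'], 'f': ['b', 'e']}
print(hamiltonian_circuit(graph, 'a'))   # ['a', 'b', 'f', 'e', 'c', 'd', 'a']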


Benefits and Drawbacks

Backtracking is typically applied to difficult combinatorial problems for which no efficient algorithm for finding exact solutions is likely to exist. Unlike the exhaustive-search approach, which is doomed to be extremely slow for all instances of a problem, backtracking at least holds the hope of solving some instances of nontrivial size in an acceptable amount of time. This is especially true for optimization problems. Even if backtracking does not eliminate any elements of a problem's state space and ends up generating all of its elements, it provides a specific technique for doing so, which can be of value in its own right.

Branch-and-Bound

Branch-and-bound (B&B) is a general algorithm for finding optimal solutions of various optimization problems, especially in discrete and combinatorial optimization. It consists of a systematic enumeration of all candidate solutions in which large subsets of fruitless candidates are discarded by using upper and lower estimated bounds of the quantity being optimized.

In the standard terminology of optimization problems, a feasible solution is a point in the problem's search space that satisfies all the problem's constraints, and an optimal solution is a feasible solution with the best value of the objective function.

There are three reasons for terminating a search path at the current node in a state-space tree of a branch-and-bound algorithm:
1. The value of the node's bound is not better than the value of the best solution seen so far.
2. The node represents no feasible solutions because the constraints of the problem are already violated.
3. The subset of feasible solutions represented by the node consists of a single point; in this case we compare the value of the objective function for this feasible solution with that of the best solution seen so far and update the latter with the former if the new solution is better.

Conclusion

Branch-and-bound is an enhancement of backtracking, applicable to optimization problems. For each node (partial solution) of a state-space tree, it computes a bound on the value of the objective function for all descendants of the node (extensions of the partial solution). The bound is used for:
ruling out certain nodes as nonpromising, to prune the tree, if a node's bound is not better than the best solution seen so far;
guiding the search through the state space.
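
To show how a bound prunes the tree, here is a small branch-and-bound sketch for the knapsack problem (not from the slides). It uses a common upper bound: the value accumulated so far plus the remaining capacity filled at the best remaining value-to-weight ratio. The instance at the bottom is hypothetical.

def bb_knapsack(items, capacity):
    """Branch-and-bound for the 0/1 knapsack problem (a sketch).

    items -- list of (value, weight) pairs, assumed sorted by value-to-weight
             ratio in non-increasing order
    """
    n = len(items)
    best = {'value': 0}

    def upper_bound(i, value, weight):
        # value so far + remaining capacity at the best remaining ratio
        if i < n:
            return value + (capacity - weight) * items[i][0] / items[i][1]
        return value

    def branch(i, value, weight):
        if weight > capacity:
            return                      # infeasible: constraint already violated
        best['value'] = max(best['value'], value)
        if i == n or upper_bound(i, value, weight) <= best['value']:
            return                      # leaf, or bound not better than best so far
        branch(i + 1, value + items[i][0], weight + items[i][1])  # take item i
        branch(i + 1, value, weight)                              # skip item i

    branch(0, 0, 0)
    return best['value']

# Hypothetical instance: capacity 10, items given as (value, weight) pairs
# already sorted by ratio.
print(bb_knapsack([(40, 4), (42, 7), (25, 5), (12, 3)], 10))   # 65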

Randomized Algorithms

A randomized algorithm aims to reduce both programming time and computational cost by approximating the process of calculation using randomness.

Randomized Algorithms: Area Calculation Problem

Calculate the area of an irregular shape (shown in red in the original figure) enclosed in a box of size 20 m x 24 m.
[Figure: the irregular red region inside the 20 x 24 m box, with sample points such as (3,3) B, (20,5) R, (4,15) R, (6,10) B.]

Randomization is used to generate a number of point coordinates uniformly at random within the box. The numbers of hits and misses are counted, and the area is then obtained by scaling; in this case,
Red area = 20 x 24 x (red points / all points).
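
A minimal sketch of this hit-or-miss estimation (not from the slides); the inside() predicate and the disk used in the example are hypothetical stand-ins for the irregular red shape, which is given only as a picture.

import random

def estimate_area(inside, width, height, samples=100_000):
    """Monte Carlo area estimation: throw random points into a width x height
    box and scale the box area by the fraction of points that hit the shape."""
    hits = 0
    for _ in range(samples):
        x, y = random.uniform(0, width), random.uniform(0, height)
        if inside(x, y):
            hits += 1
    return width * height * hits / samples

# Hypothetical shape: a disk of radius 8 centred in the 20 x 24 box.
in_disk = lambda x, y: (x - 10) ** 2 + (y - 12) ** 2 <= 8 ** 2
print(estimate_area(in_disk, 20, 24))   # close to pi * 8^2, about 201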

Approximation Algorithms for the Knapsack Problem

Greedy algorithm for the discrete knapsack problem:
Step 1: Compute the value-to-weight ratios ri = vi/wi, i = 1, ..., n, for the items given.
Step 2: Sort the items in non-increasing order of the ratios computed in Step 1. (Ties can be broken arbitrarily.)
Step 3: Repeat the following operation until no item is left in the sorted list: if the current item on the list fits into the knapsack, place it in the knapsack; otherwise, proceed to the next item.
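
The three steps translate directly into code. The sketch below is not taken from the slides; its instance (capacity 10, weights 4, 7, 5, 3) matches the example that follows, but the item values are illustrative assumptions.

def greedy_knapsack(items, capacity):
    """Greedy approximation for the discrete (0/1) knapsack problem.

    items    -- list of (value, weight) pairs
    capacity -- knapsack capacity
    Returns (total_value, chosen_indices).
    """
    # Steps 1-2: order item indices by value-to-weight ratio, non-increasing
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1],
                   reverse=True)
    total_value, remaining, chosen = 0, capacity, []
    # Step 3: scan the sorted list, taking every item that still fits
    for i in order:
        value, weight = items[i]
        if weight <= remaining:
            chosen.append(i)
            total_value += value
            remaining -= weight
    return total_value, chosen

items = [(40, 4), (42, 7), (25, 5), (12, 3)]     # (value, weight); values assumed
print(greedy_knapsack(items, 10))   # (65, [0, 2]): take the items of weight 4 and 5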

Example

Let us consider the instance of the knapsack problem with the knapsack's capacity equal to 10 and the item information given in the original table (items of weight 4, 7, 5, and 3, each with an associated value).

Computing the value-to-weight ratios and sorting the items in non-increasing order of these efficiency ratios yields a new ordering of the items. The greedy algorithm will select the first item of weight 4, skip the next item of weight 7, select the next item of weight 5, and skip the last item of weight 3. The solution obtained happens to be optimal for this instance.

Approximation Algorithms for the Traveling Salesman Problem

Nearest-neighbor algorithm, based on the nearest-neighbor heuristic: always go to the nearest unvisited city next.
Step 1: Choose an arbitrary city as the start.
Step 2: Repeat the following operation until all the cities have been visited: go to the unvisited city nearest the one visited last (ties can be broken arbitrarily).
Step 3: Return to the starting city.
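
A short sketch of the nearest-neighbor heuristic (not from the slides); the 4-city distance matrix at the bottom is an assumed instance chosen to be consistent with the example that follows.

def nearest_neighbor_tour(dist, start=0):
    """Nearest-neighbor heuristic for the traveling salesman problem.

    dist  -- symmetric matrix of pairwise distances (dist[i][j])
    start -- index of the starting city
    Returns (tour, length), where the tour ends back at the start city.
    """
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour, length = [start], 0
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda city: dist[last][city])  # nearest unvisited
        length += dist[last][nxt]
        tour.append(nxt)
        unvisited.remove(nxt)
    length += dist[tour[-1]][start]      # return to the starting city
    return tour + [start], length

# Assumed distances for cities a, b, c, d (indices 0-3); with these weights the
# heuristic from a yields a-b-c-d-a of length 10, and a-b-d-c-a has length 8.
d = [[0, 1, 3, 6],    # a
     [1, 0, 2, 3],    # b
     [3, 2, 0, 1],    # c
     [6, 3, 1, 0]]    # d
print(nearest_neighbor_tour(d))   # ([0, 1, 2, 3, 0], 10)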

Example

With a as the starting vertex, the nearest-neighbor algorithm yields the tour (Hamiltonian circuit) sa: a - b - c - d - a of length 10. The optimal solution, as can be easily checked by exhaustive search, is the tour s*: a - b - d - c - a of length 8. Thus, the accuracy ratio is r(sa) = f(sa) / f(s*) = 10/8 = 1.25.

Twice-Around-the-Tree Algorithm

Stage 1: Construct a minimum spanning tree of the graph (e.g., by Prim's or Kruskal's algorithm).
Stage 2: Starting at an arbitrary vertex, create a path that goes twice around the tree and returns to the same vertex.
Stage 3: Create a tour from the circuit constructed in Stage 2 by making shortcuts to avoid visiting intermediate vertices more than once.
Note: RA = ∞ for general instances, but this algorithm tends to produce better tours than the nearest-neighbor algorithm.
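
A sketch of the three stages (not from the slides), using Prim's algorithm for Stage 1; the preorder in which a depth-first traversal first visits the tree's vertices realizes the shortcutting of Stages 2 and 3. For instances that satisfy the triangle inequality, the resulting tour is known to be at most twice as long as an optimal tour.

import heapq

def twice_around_the_tree(dist):
    """Twice-around-the-tree approximation for the TSP.

    dist -- symmetric matrix of pairwise distances (ideally satisfying the
            triangle inequality). Returns (tour, length), starting at city 0.
    """
    n = len(dist)

    # Stage 1: minimum spanning tree by (lazy) Prim's algorithm
    tree = {i: [] for i in range(n)}
    visited = {0}
    heap = [(dist[0][j], 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    while len(visited) < n:
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        tree[u].append(v)
        tree[v].append(u)
        for j in range(n):
            if j not in visited:
                heapq.heappush(heap, (dist[v][j], v, j))

    # Stages 2-3: walk twice around the tree (DFS) and shortcut repeated vertices
    tour, seen = [], set()
    def dfs(u):
        seen.add(u)
        tour.append(u)                 # record a vertex only on its first visit
        for v in tree[u]:
            if v not in seen:
                dfs(v)
    dfs(0)
    tour.append(0)                     # return to the starting vertex
    length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return tour, length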

Example

Walk: a - b - c - b - d - e - d - b - a
Tour (after shortcuts): a - b - c - d - e - a