NP-Complete Problems and how to go on with them
Data Structures and Algorithms, A.G. Malamos
References: Algorithms, 2006, S. Dasgupta, C. H. Papadimitriou, and U. V. Vazirani; Introduction to Algorithms, Cormen, Leiserson, Rivest & Stein

Intro
You are the junior member of a seasoned project team. Your current task is to write code for solving a simple-looking problem involving graphs and numbers. What are you supposed to do? If you are very lucky, your problem will be among the half-dozen problems concerning graphs with weights (shortest path, minimum spanning tree, maximum flow, etc.) that we have solved in this book. Even if this is the case, recognizing such a problem in its natural habitat, grungy and obscured by reality and context, requires practice and skill. It is more likely that you will need to reduce your problem to one of these lucky ones, or to solve it using dynamic programming or linear programming. But the remaining vast expanse is pitch dark: NP-complete. What are you to do? You can start by proving that your problem is actually NP-complete. Often a proof by generalization (recall the discussion on page 270 and Exercise 8.10) is all that you need, and sometimes a simple reduction from 3SAT or ZOE is not too difficult to find. This sounds like a theoretical exercise, but, if carried out successfully, it does bring some tangible rewards: now your status in the team has been elevated, you are no longer the kid who can't do, and you have become the noble knight with the impossible quest. But, unfortunately, a problem does not go away when proved NP-complete. The real question is, What do you do next?

Intelligent exhaustive search: Backtracking
Backtracking is based on the observation that it is often possible to reject a solution by looking at just a small portion of it. For example, if an instance of SAT contains the clause (x1 + x2), then all assignments with x1 = x2 = 0 (i.e., false) can be instantly eliminated. To put it differently, by quickly checking and discrediting this partial assignment, we are able to prune a quarter of the entire search space. A promising direction, but can it be systematically exploited? Here's how it is done. Consider the Boolean formula φ(w, x, y, z) specified by the set of clauses

(w + x + y + z) * (w + x') * (x + y') * (y + z') * (z + w') * (w' + z')

We will incrementally grow a tree of partial solutions. We start by branching on any one variable, say w. The partial assignment w = 0, x = 1 violates the clause (w + x') and can be terminated, thereby pruning a good chunk of the search space. We backtrack out of this dead end and continue our explorations at one of the two remaining active nodes.
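A minimal Python sketch of this pruning idea (the encoding and helper names are illustrative, not from the slides): a literal +i stands for variable i and -i for its negation, and a branch of the search tree is abandoned as soon as the partial assignment falsifies some clause outright. This particular formula turns out to be unsatisfiable, so the search exhausts the pruned tree and returns None.

# The slide's formula, with w,x,y,z encoded as variables 1,2,3,4.
FORMULA = [[1, 2, 3, 4], [1, -2], [2, -3], [3, -4], [4, -1], [-1, -4]]

def clause_status(clause, assignment):
    """Return True (satisfied), False (violated), or None (undecided)."""
    undecided = False
    for lit in clause:
        value = assignment.get(abs(lit))
        if value is None:
            undecided = True
        elif value == (lit > 0):
            return True
    return None if undecided else False

def solve(formula, assignment, variables):
    statuses = [clause_status(c, assignment) for c in formula]
    if any(s is False for s in statuses):
        return None                      # some clause violated: prune
    if all(s is True for s in statuses):
        return dict(assignment)          # all clauses satisfied
    var = next(v for v in variables if v not in assignment)  # branch
    for value in (False, True):
        assignment[var] = value
        result = solve(formula, assignment, variables)
        if result is not None:
            return result
        del assignment[var]              # backtrack out of the dead end
    return None

print(solve(FORMULA, {}, [1, 2, 3, 4]))  # prints None: unsatisfiable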

Backtracking

Backtracking
More abstractly, a backtracking algorithm requires a test that looks at a subproblem and quickly declares one of three outcomes:
1. Failure: the subproblem has no solution.
2. Success: a solution to the subproblem is found.
3. Uncertainty.
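The same three-outcome test drives a generic backtracking loop. A hypothetical skeleton (the function names are placeholders, not from the slides):

def backtrack(subproblem, test, branch):
    """Generic backtracking: `test` returns 'failure', 'success', or
    'uncertain'; `branch` expands a subproblem into smaller ones."""
    outcome = test(subproblem)
    if outcome == 'failure':
        return None                      # prune: no solution down here
    if outcome == 'success':
        return subproblem                # found a solution
    for smaller in branch(subproblem):   # outcome is 'uncertain'
        solution = backtrack(smaller, test, branch)
        if solution is not None:
            return solution
    return None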

Branch and Bound
The same principle can be generalized from search problems such as SAT to optimization problems. For concreteness, let's say we have a minimization problem; maximization will follow the same pattern. As before, we will deal with partial solutions, each of which represents a subproblem, namely, what is the (cost of the) best way to complete this solution? And as before, we need a basis for eliminating partial solutions, since there is no other source of efficiency in our method. To reject a subproblem, we must be certain that its cost exceeds that of some other solution we have already encountered. But its exact cost is unknown to us and is generally not efficiently computable. So instead we use a quick lower bound on this cost.
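A sketch of the branch-and-bound loop for a minimization problem. The hooks `lower_bound`, `expand`, `is_complete`, and `cost` are assumptions standing in for problem-specific code; a subproblem is discarded whenever its optimistic bound already exceeds the best complete solution seen.

import heapq

def branch_and_bound(root, lower_bound, expand, is_complete, cost):
    """Best-first branch and bound for a minimization problem."""
    best_solution, best_cost = None, float('inf')
    frontier = [(lower_bound(root), 0, root)]    # priority = lower bound
    counter = 1                                  # tie-breaker for the heap
    while frontier:
        bound, _, partial = heapq.heappop(frontier)
        if bound >= best_cost:
            continue                 # cannot beat the best solution found
        for child in expand(partial):
            if is_complete(child):
                if cost(child) < best_cost:
                    best_solution, best_cost = child, cost(child)
            elif lower_bound(child) < best_cost:
                heapq.heappush(frontier, (lower_bound(child), counter, child))
                counter += 1
    return best_solution, best_cost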

Branch & Bound TSP How can we lower-bound the cost of completing a partial tour [a,S,b]? Many sophisticated methods have been developed for this, but let's look at a rather simple one. The remainder of the tour consists of a path through V -S, plus edges from a and b to V-S. Therefore, its cost is at least the sum of the following: 1. The lightest edge from a to V-S. 2. The lightest edge from b to V-S. 3. The minimum spanning tree of V S. Reminder: Min Spanning tree is a subgraph that is a tree and connects all the vertices together

Approximation algorithms
In an optimization problem we are given an instance I and are asked to find the optimum solution: the one with the maximum gain if we have a maximization problem like INDEPENDENT SET, or the minimum cost if we are dealing with a minimization problem such as the TSP. For every instance I, let us denote by OPT(I) the value (benefit or cost) of the optimum solution. It makes the math a little simpler (and is not too far from the truth) to assume that OPT(I) is always a positive integer. More generally, consider any minimization problem. Suppose now that we have an algorithm A for our problem which, given an instance I, returns a solution with value A(I). The approximation ratio of algorithm A is defined to be

α_A = max over all instances I of A(I) / OPT(I)

The goal is to keep this ratio as low as possible. But how can we compare against the optimum when the optimum is exactly the quantity we are trying to determine? Computing the ratio directly is feasible only for convenient problems where the optimal value is known and we are looking for a solution that achieves it (e.g., a path meeting a given cost bound); in general, one instead proves a bound on the ratio analytically.
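As a concrete example (not on the slides), here is a sketch of the classic MST-based heuristic for metric TSP: build a minimum spanning tree, walk it in preorder, and visit each vertex once. Assuming the distances obey the triangle inequality, the resulting tour costs at most twice OPT(I), i.e., the approximation ratio is at most 2.

def approx_metric_tsp(vertices, dist):
    """MST-based 2-approximation for TSP under the triangle inequality."""
    vertices = list(vertices)
    root = vertices[0]
    # Prim's algorithm, keeping parent pointers so we have the tree edges.
    parent, in_tree = {}, {root}
    while len(in_tree) < len(vertices):
        w, u, v = min((dist[u][v], u, v) for u in in_tree
                      for v in vertices if v not in in_tree)
        parent[v] = u
        in_tree.add(v)
    children = {v: [] for v in vertices}
    for v, u in parent.items():
        children[u].append(v)
    # Preorder walk of the MST; repeated vertices are shortcut implicitly
    # because each vertex appears in the tour exactly once.
    tour, stack = [], [root]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    cost = sum(dist[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))
    cost += dist[tour[-1]][tour[0]]          # close the tour
    return tour, cost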

Local search heuristics
Our next strategy is inspired by an incremental process of introducing small mutations, trying them out, and keeping them if they work well. This paradigm is called local search and can be applied to any optimization task. Here's how it looks for a minimization problem.
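A minimal local-search (hill-descent) sketch for a minimization problem; `neighbors` and `cost` are assumed problem-specific hooks, not names from the slides.

def local_search(start, neighbors, cost):
    """Repeatedly move to a cheaper neighbor; stop at a local optimum."""
    current = start
    while True:
        best_neighbor = min(neighbors(current), key=cost, default=None)
        if best_neighbor is None or cost(best_neighbor) >= cost(current):
            return current           # no improving move: local optimum
        current = best_neighbor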

Local Optima Heuristics: Simulated annealing
The method of simulated annealing redefines local search by introducing randomness governed by a temperature parameter T: moves that worsen the solution are occasionally accepted, with a probability that shrinks as T is lowered. Simulated annealing is inspired by the physics of crystallization. When a substance is to be crystallized, it starts in a liquid state, with its particles in relatively unconstrained motion. Then it is slowly cooled, and as this happens, the particles gradually move into more regular configurations. This regularity becomes more and more pronounced until finally a crystal lattice is formed.
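A sketch of the standard simulated-annealing loop. The geometric cooling schedule and the exp(−Δ/T) acceptance rule below are the usual textbook choices, not taken from the slides: a move that worsens the cost by Δ is accepted with probability e^(−Δ/T), so early (hot) phases explore freely and late (cold) phases behave like plain local search.

import math
import random

def simulated_annealing(start, random_neighbor, cost,
                        t_start=1.0, t_min=1e-3, cooling=0.995):
    """Local search that sometimes accepts worse moves, with
    probability exp(-delta / T); T is gradually lowered."""
    current, t = start, t_start
    best = current
    while t > t_min:
        candidate = random_neighbor(current)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate                  # accept the move
            if cost(current) < cost(best):
                best = current
        t *= cooling                             # cool down slowly
    return best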

Local Optima Heuristics: Simulated annealing