NP-hardness
MIT and James Orlin, © 2003

Moving towards complexity theory.
Linear programming is viewed as "easy" and integer programming is viewed as "hard." Next, we address some theoretical ways of characterizing easy vs. hard problems, often referred to as the theory of NP-completeness or NP-hardness.

Overview of complexity.
How can we show a problem is efficiently solvable? We can show it constructively: provide an algorithm and show that it solves the problem efficiently. How can we show a problem is not efficiently solvable? How do you prove a negative? That is the aim of complexity theory, which is the topic of today's lecture. The approach today is non-standard in that it covers only half of the usual definitions in an introduction to complexity.

What do we mean by a problem?
Consider: maximize 3x + 4y subject to 4x + 5y ≤ 23, x ≥ 0, y ≥ 0.
This is an "instance" of linear programming. When we say the linear programming problem, we refer to the collection of all instances. Similarly, the integer programming problem (or integer programming) refers to the collection of all instances of integer programming, the traveling salesman problem refers to all instances of the traveling salesman problem, and so on.
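As an illustration, this particular instance can be fed to an off-the-shelf LP solver. A minimal sketch, assuming SciPy is available (the slides do not name any solver):

```python
# Solve: maximize 3x + 4y  subject to  4x + 5y <= 23, x >= 0, y >= 0.
from scipy.optimize import linprog

# linprog minimizes, so negate the objective coefficients to maximize 3x + 4y.
result = linprog(c=[-3, -4], A_ub=[[4, 5]], b_ub=[23],
                 bounds=[(0, None), (0, None)])
print("x, y =", result.x, " objective =", -result.fun)
```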

Instances versus problems.
Complexity theory addresses the following question: when is a problem hard? Note: it does not deal with the question of whether an individual instance is hard.

General fact.
As problem instances get larger, the time to solve the problem grows. But how fast? We say that a problem is solvable in polynomial time if there is a polynomial p( ) such that the time to solve an instance of size n is at most p(n).

Example: sorting a list of n items by using a greedy approach.
Greedy sorting: for i = 1 to n, choose the ith smallest item on the list and put it in position i.
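A minimal sketch of this greedy rule (selection sort), assuming the items arrive in a plain Python list:

```python
def greedy_sort(items):
    """Greedy sorting: for i = 1..n, put the ith smallest item in position i."""
    items = list(items)                  # work on a copy
    n = len(items)
    for i in range(n):
        # Scan the unsorted tail for the smallest remaining item: O(n) work per pass.
        smallest = min(range(i, n), key=lambda j: items[j])
        items[i], items[smallest] = items[smallest], items[i]   # place it in position i
    return items                         # n passes of O(n) work: O(n^2) overall

print(greedy_sort([5, 2, 9, 1]))         # [1, 2, 5, 9]
```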

A quick analysis of greedy sorting.
Suppose that there are n items. How many have to be scanned to find the next smallest item? How much time does it take to put that item in the correct place? Claim: the running time is at most 100n^2. This is a polynomial time algorithm. The best possible time for sorting is around n log n.

General fact. Examples:
- Finding a word in a dictionary with n entries: time ≈ log n, depending on assumptions (see the sketch below). Polynomial time.
- Sorting n items: time ≈ n log n. Polynomial time.
- Finding the shortest path from s to t: time ≈ n^2. Polynomial time.
- Complete enumeration of a binary integer program on n variables: time > 2^n. Exponential time (not polynomial time).
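For the dictionary example, the ≈ log n figure comes from binary search on a sorted word list. A minimal sketch, assuming the list is already sorted:

```python
def lookup(sorted_words, target):
    """Binary search: each probe halves the search range, so about log2(n) probes."""
    lo, hi = 0, len(sorted_words) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_words[mid] == target:
            return mid                   # found at position mid
        if sorted_words[mid] < target:
            lo = mid + 1                 # discard the lower half
        else:
            hi = mid - 1                 # discard the upper half
    return -1                            # not in the dictionary

print(lookup(["ant", "bee", "cat", "dog", "emu"], "cat"))  # 2
```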

Running times as n grows.
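The table from this slide does not appear in the transcript; the sketch below tabulates a few representative growth rates (my own choice of functions and of values of n) to make the same point:

```python
import math

# Compare polynomial and exponential growth as n increases.
print(f"{'n':>6} {'n log n':>12} {'n^2':>12} {'2^n':>24}")
for n in (10, 20, 30, 40, 50, 60):
    print(f"{n:>6} {n * math.log2(n):>12.0f} {n**2:>12} {2**n:>24}")
# The polynomial columns stay manageable; 2^n is already astronomical by n = 60.
```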

Overview of the next few slides.
Easy problems: the running time is guaranteed to grow no faster than some polynomial in the size of the input. Hard problems are everything else. Next: what do we mean by the size of the input?

Polynomial time algorithms.
For any instance I of a problem, let S(I) be the number of inputs.
- Examples: for an integer programming instance, S ≈ m × n; for a capital budgeting instance, S ≈ n. What would the size of a TSP on n cities be?
Let M(I) be the largest integer in the data.
- We assume that integers are expressed in binary. Consider the problem of determining whether a number M is prime: its size grows as log M (the size is not just 1).
Size(I) is the number of digits needed to represent I, and S(I) + log M(I) ≤ Size(I) ≤ S(I) · log M(I).
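A small sketch of these size measures on a made-up capital-budgeting (knapsack) instance; the data is hypothetical and only illustrates how S(I) and log M(I) are counted:

```python
import math

# Hypothetical capital-budgeting data: values, weights, and a budget.
values, weights, budget = [90, 40, 25], [30, 20, 10], 45
data = values + weights + [budget]

S = len(data)                                # S(I): the number of input numbers
M = max(data)                                # M(I): the largest integer in the data
bits = max(1, math.ceil(math.log2(M + 1)))   # log M(I): binary digits needed for M

print("S(I) =", S, " log M(I) =", bits)
# size(I) lies between S(I) + log M(I) and S(I) * log M(I).
print("bounds on size(I):", S + bits, "...", S * bits)
```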

Polynomial time algorithms.
An algorithm A for problem X runs in polynomial time if the number of steps taken by A on any instance I is bounded by a polynomial in size(I). Equivalently, there is some polynomial p such that for every instance I, the number of steps taken by A is less than p(S(I) + log M(I)). For example, suppose the number of steps is less than 1000 · size(I)^7 ≤ 1000 · [S(I) · log M(I)]^7 ≤ 1000 · [S(I) + log M(I)]^14. Interesting fact: everyone in complexity theory talks about the size of the problem, but almost no one cares about measuring it precisely.

On polynomial time algorithms.
We consider a problem X to be "easy," or efficiently solvable, if there is a polynomial time algorithm A for solving X. We let P denote the class of problems solvable in polynomial time. Some more problems in the class P include:
- linear programming
- the assignment, transportation, and minimum cost flow problems
- finding a topological order
- finding a critical path
- finding an Eulerian cycle

Which are polynomial time algorithms? (Answer with a partner.)
- To determine whether M is prime, one can divide M by every integer less than M. This takes at most M divisions (see the sketch below).
- Dijkstra's algorithm solves a shortest path problem in at most 100n^2 steps.
- The number of steps taken by one variant of the simplex algorithm on the minimum cost flow problem is 1000 n log n pivots.
- Linear programming can be solved by a technique called the ellipsoid algorithm in at most n log M iterations, where each iteration takes at most 1000n^3 steps.
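For the first item, note that trial division performs about M divisions while the input occupies only about log2 M bits, so the work is exponential in the size of the input. A minimal sketch:

```python
def is_prime_trial_division(M):
    """Divide M by every integer from 2 to M - 1: up to M - 2 divisions."""
    if M < 2:
        return False
    for d in range(2, M):        # about M iterations, but the input has only ~log2(M) bits
        if M % d == 0:
            return False
    return True

print(is_prime_trial_division(97))   # True, after 95 trial divisions
# Adding one bit to M roughly doubles the number of divisions: exponential in input size.
```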

Can integer programming be solved in polynomial time?
Fact: every algorithm that has ever been developed for integer programming takes exponential time in the worst case. Hundreds of very smart researchers have tried to come up with polynomial time algorithms for integer programming, and failed. It is generally believed that there is no polynomial time algorithm for integer programming. Complexity theory deals with proving that integer programming is hard.

Hard problems in practice.
What can you say to your manager if he or she hands you a problem that is too difficult for you to solve? (Adapted from Garey and Johnson.)

"I can't find an efficient algorithm. I guess I'm too dumb."

"I can't find an efficient algorithm, because no such algorithm is possible."

"I can't find an efficient algorithm, but neither can these famous researchers."

David Johnson, George Dantzig, Richard Karp, Michael Garey, Ralph Gomory, Alan Turing, Kurt Gödel, John von Neumann, Stephen Cook.

The class NP-easy.
Consider an optimization problem X in which, for any instance I, the goal is to find a feasible solution x for I with maximum (or minimum) value f_I(x). We say that X is NP-easy if there is a polynomial p( ) with the following properties. For every instance I of X:
1. There is an optimal solution x for I such that size(x) < p(size(I)). "There is a small sized optimum solution."
2. For any proposed solution y, one can evaluate whether y is feasible in fewer than p(size(I) + size(y)) steps. "One can efficiently check feasibility."
3. The number of steps to evaluate f_I(y) is fewer than p(size(I) + size(y)). "One can efficiently evaluate the objective function."

The housing problem.
400 students applied in the lottery for a wonderful new dorm that holds 100 students. You have a list of pairs of incompatible students. The requirement: no two incompatible students are in the list of 100 students chosen for the dorm. Is there an efficient procedure for finding such a list of 100 students?

The dorm problem is the independent set problem.
Create a node for each student and an arc between two students if they are incompatible. A set S of nodes of a graph G = (N, A) is independent if no two nodes of S are adjacent. What is the largest independent set in G? This problem is NP-easy.
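Condition 2 of the NP-easy definition is easy to verify here: given a chosen set of students and the list of incompatible pairs, one pass over the pairs decides feasibility. A sketch with made-up data:

```python
def is_independent(chosen, incompatible_pairs):
    """Check that no incompatible pair is contained in the chosen set: O(#pairs) time."""
    chosen = set(chosen)
    return all(not (a in chosen and b in chosen) for a, b in incompatible_pairs)

# Hypothetical data: students 1..6 with three incompatible pairs.
pairs = [(1, 2), (2, 3), (4, 5)]
print(is_independent([1, 3, 4, 6], pairs))   # True: an independent set
print(is_independent([1, 2, 6], pairs))      # False: students 1 and 2 are incompatible
```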

Integer programming is NP-easy.
Checking that 0-1 integer programming is NP-easy:
1. There is an optimal solution x for I such that size(x) < p(size(I)): every solution is an n-vector of 0's and 1's.
2. For any proposed solution y, one can evaluate whether y is feasible in fewer than p(size(I) + size(y)) steps: evaluating whether a 0-1 vector is feasible means checking each constraint.
3. The number of steps to evaluate f_I(y) is fewer than p(size(I) + size(y)): evaluating f_I(y) means evaluating cy for some linear objective c.
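A minimal sketch of conditions 2 and 3 for a 0-1 program with constraints of the form Ay ≤ b; the instance data below is made up for illustration:

```python
def check_01_solution(A, b, c, y):
    """Verify a proposed 0-1 vector y: feasibility of Ay <= b and the objective value cy."""
    assert all(v in (0, 1) for v in y)                        # y is a 0-1 vector
    feasible = all(sum(row[j] * y[j] for j in range(len(y))) <= bi
                   for row, bi in zip(A, b))                  # one pass per constraint
    value = sum(cj * yj for cj, yj in zip(c, y))              # evaluate the linear objective
    return feasible, value

# Hypothetical instance: 2 constraints, 3 binary variables.
A, b, c = [[3, 2, 1], [1, 4, 2]], [4, 5], [7, 3, 2]
print(check_01_solution(A, b, c, [1, 0, 1]))   # (True, 9)
```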

Some more NP-easy problems: the TSP.
Is there a small sized optimum solution? Can one check feasibility efficiently? Can one evaluate the objective function efficiently?
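The answer to all three questions is yes: a proposed solution is just an ordering of the n cities, and it can be checked and priced in O(n) time given the distance matrix. A sketch with a made-up distance matrix:

```python
def tour_length(dist, tour):
    """Check that `tour` visits every city exactly once and return its length: O(n) work."""
    n = len(dist)
    assert sorted(tour) == list(range(n))        # feasibility: a permutation of the n cities
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

# Hypothetical 4-city symmetric distance matrix.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(tour_length(dist, [0, 1, 3, 2]))   # 2 + 4 + 3 + 9 = 18
```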

Question of the millennium: is there a polynomial time algorithm for all NP-easy problems?
If you can successfully answer this question, you will win the $1,000,000 Millennium Prize.

On NP-easy problems.
Theorem. If problem X is NP-easy, and if Y is a special case of X, then Y is NP-easy.
Example. 0-1 integer programming is NP-easy, and capital budgeting is a special case of 0-1 integer programming; therefore capital budgeting is NP-easy. "If a problem is easier than an NP-easy problem, it is NP-easy."

Other problems that are NP-easy.
- The set cover problem (fire station problem).
- The capital budgeting problem.
- Determining the largest prime number less than n that divides the integer n: solutions are numbers that divide n, the size of any solution x is log x < log n, and a proposed solution can be checked to be a divisor in polynomial time.
Also, any problem that is a special case of an NP-easy problem is NP-easy, so determining whether a number is prime is NP-easy.

On NP-easy optimization problems.
Almost any optimization problem that you will ever want to solve is NP-easy. It is a challenge to find optimization problems that are not NP-easy. The next two slides illustrate problems that are not NP-easy.

A problem that is not NP-easy.
Input: an integer n. Optimization: find the smallest integer N such that N > n and both N and N + 2 are prime. It is possible that the size of the optimum solution, which is log N, is exponentially large in the size of the problem instance, which is log n. So this violates condition 1: the size of the optimum solution may be exponential in the size of the problem instance.

Another problem that is not NP-easy.
Consider an integer program: minimize cx + dy subject to Ax + By = b, with x and y binary n-vectors (that is, with n components). We say that y is a "blocking n-vector" if it is a binary n-vector and there is no feasible solution to Ax + By = b with x binary. What is the least cost blocking n-vector? To check whether y is a blocking n-vector requires the solution of an integer program, so this violates condition 2: checking whether a proposed solution is feasible can take exponential time.

NP-easy.
Almost every optimization problem that you will ever see is NP-easy. Question: can the NP-easy problems be solved in polynomial time? This is a very famous unsolved problem in mathematics, often stated as "Does P = NP?"
Amazing fact 1: if 0-1 integer programming can be solved in polynomial time, then every other NP-easy problem can be solved in polynomial time.
Amazing fact 2: if the traveling salesman problem (or the capital budgeting problem, or the independent set problem) can be solved in polynomial time, then every other NP-easy problem can be solved in polynomial time.

The class NP-hard.
An oracle function is a "black box" for solving an optimization problem. An oracle function for integer programming would take an integer programming instance as input and produce a solution in one time unit. Let X be an optimization problem. We say that X is NP-hard if every NP-easy problem can be solved in polynomial time when one is permitted to use an oracle function for X. Theorem: 0-1 integer programming is NP-hard.
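To make the oracle idea concrete, here is a sketch of how the independent set problem from earlier can be solved with a single call to a 0-1 integer programming oracle. The oracle's interface (taking c, A, b for "maximize cx subject to Ax ≤ b, x binary") is an assumption made for illustration:

```python
def independent_set_via_ip_oracle(nodes, edges, ip_oracle):
    """Solve maximum independent set with one call to a 0-1 integer programming oracle.

    ip_oracle is a black box assumed to accept (c, A, b) for
    "maximize cx subject to Ax <= b, x binary" and return an optimal 0-1 vector.
    """
    index = {v: j for j, v in enumerate(nodes)}
    c = [1] * len(nodes)                 # maximize the number of chosen nodes
    A, b = [], []
    for u, v in edges:                   # x_u + x_v <= 1 for every edge (incompatible pair)
        row = [0] * len(nodes)
        row[index[u]] = row[index[v]] = 1
        A.append(row)
        b.append(1)
    x = ip_oracle(c, A, b)               # the single oracle call, counted as one step
    return [v for v in nodes if x[index[v]] == 1]
```

Everything outside the oracle call (building the constraint rows and reading off the chosen nodes) takes time polynomial in the size of the graph, which is exactly what the definition requires.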

NP-equivalence and other classes.
We say that a problem is NP-equivalent if it is both NP-hard and NP-easy.
[Diagram: the classes NP-hard, NP-easy, NP-equivalent, NP-complete, and P.]

On NP-hardness.
Theorem. If problem X is NP-hard, and if X is a special case of Y, then Y is NP-hard.
Example. 0-1 integer programming is NP-hard, and 0-1 integer programming is a special case of integer programming; therefore, integer programming is NP-hard. "If a problem is harder than an NP-hard problem, it is NP-hard."

Some examples of NP-hard problems.
- the traveling salesman problem
- the capital budgeting (knapsack) problem
- the independent set problem
- the fire station problem (set covering)
- 0-1 integer programming
- integer programming
- project management with resource constraints
- and thousands more

Proving that a problem is hard.
"To prove that problem X is hard, find a problem Y that you know is hard, and show that Y is easier than X." That is, to prove that a problem X is NP-hard, start with a "similar" NP-hard problem Y. Then show that Y can be solved in polynomial time if one permits X to be used as a subroutine, counting each call to X as taking one step.

On proving NP-hardness results.
Suppose that we know that the problem of finding a Hamiltonian cycle is NP-hard. We will show that the problem of finding a Hamiltonian path is also NP-hard. A Hamiltonian cycle is a cycle that passes through each node exactly once; a Hamiltonian path is a path that includes every node of G. Proof technique (a standard transformation proof): start with any instance of the Hamiltonian cycle problem, denoted G = (N, A), and create an instance G' = (N', A') of the Hamiltonian path problem from G with the following property: there is a Hamiltonian path in G' if and only if there is a Hamiltonian cycle in G.

A transformation.
[Figure: the original network and the transformed network. Node 1 of the original network was split into nodes 1 and 21, and new nodes 0 and 22 were connected to the two split nodes.]
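A sketch of the splitting step in this transformation, using an adjacency-set representation; the labels chosen for the twin node and the two new endpoint nodes are my own:

```python
def ham_cycle_to_ham_path_instance(adj, split_node):
    """Transform a Hamiltonian cycle instance into a Hamiltonian path instance.

    adj maps each node to a set of neighbours. The split node keeps its edges,
    a twin node receives copies of the same edges, and two fresh degree-1
    endpoints are attached: one to the split node and one to the twin.
    """
    twin, start, end = "1'", "s", "t"         # hypothetical labels for nodes 21, 0, and 22
    new_adj = {v: set(nbrs) for v, nbrs in adj.items()}
    new_adj[twin] = set(adj[split_node])      # the twin gets the same neighbours as node 1
    for v in adj[split_node]:
        new_adj[v].add(twin)
    new_adj[start] = {split_node}             # degree-1 nodes force the ends of any path
    new_adj[end] = {twin}
    new_adj[split_node].add(start)
    new_adj[twin].add(end)
    return new_adj

# Tiny example: the triangle 1-2-3 has a Hamiltonian cycle, so the transformed
# graph has a Hamiltonian path, e.g. s, 1, 2, 3, 1', t.
print(ham_cycle_to_ham_path_instance({1: {2, 3}, 2: {1, 3}, 3: {1, 2}}, 1))
```

On the triangle example, the path s, 1, 2, 3, 1', t in the transformed graph corresponds to the cycle 1, 2, 3, 1 in the original graph, which is the equivalence the next two claims establish.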

Claim 1: If there is a Hamiltonian cycle in the original graph, then there is a Hamiltonian path in the transformed graph.
[Figure: a Hamiltonian cycle.] Take the two arcs of the cycle incident to node 1 in G; keep one incident to node 1 and attach the other to node 21, then add the arcs (0, 1) and (21, 22). The result is a Hamiltonian path from node 0 to node 22.

Claim 2: If there is a Hamiltonian path in the transformed graph, then there is a Hamiltonian cycle in the original graph.
[Figure: a Hamiltonian path.] Delete the two arcs (0, 1) and (21, 22). Then take the other arcs of the path in G' incident to nodes 1 and 21 and make them incident to node 1 in G; the result is a Hamiltonian cycle in G.

Proofs of NP-hardness: transformations have two parts.
Given an original instance I and a transformed instance I':
Part 1. An optimal (or feasible) solution for I induces an optimal (or feasible) solution for I'.
Part 2. An optimal (or feasible) solution for I' induces an optimal (or feasible) solution for I.
Formulating problems as integer programs illustrates this type of transformation. Note: transformations can be difficult to develop. Great reference: Garey and Johnson, 1979.

Summary on complexity theory.
- Polynomial time algorithms.
- NP-easy, NP-hard, NP-equivalent.
- Proving hardness results.
[Diagram: the classes NP-hard, NP-easy, NP-equivalent, NP-complete, and P.]