
An Introduction to Computational Complexity
Edith Elkind, IAM, ECS

Knapsack Problem
n items, each item has
– a weight w_i
– a value v_i
Knapsack capacity W, target value V
Question: can we pack the knapsack to reach the target value?
– is there a vector (x_1, ..., x_n) ∈ {0, 1}^n such that
  w_1 x_1 + ... + w_n x_n ≤ W and v_1 x_1 + ... + v_n x_n ≥ V?
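
To make the decision problem concrete, here is a minimal brute-force checker (a sketch; the function and parameter names are my own, and the 2^n enumeration is only meant to pin down the definition, not to be efficient):

```python
from itertools import product

def knapsack_decision(weights, values, W, V):
    """Return True iff some 0/1 vector x satisfies
    sum(w_i * x_i) <= W and sum(v_i * x_i) >= V."""
    n = len(weights)
    for x in product((0, 1), repeat=n):        # all 2^n candidate vectors
        total_w = sum(w * xi for w, xi in zip(weights, x))
        total_v = sum(v * xi for v, xi in zip(values, x))
        if total_w <= W and total_v >= V:
            return True
    return False

# Items (w, v) = (4, 5) and (2, 3); capacity 5, target value 5
print(knapsack_decision([4, 2], [5, 3], W=5, V=5))   # True: take item 1
```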

Knapsack Problem: Special Cases
Subset Sum:
– for each item, weight = value
– capacity of knapsack = target value
– Question: is there a vector (x_1, ..., x_n) ∈ {0, 1}^n such that
  w_1 x_1 + ... + w_n x_n = W?
Partition: can we split a given list into 2 equal-weight parts?
– like Subset Sum, but additionally w_1 + ... + w_n = 2W

Subset Sum is NP-complete
Reduction from 1-in-3-SAT
– n variables, m clauses
– can you set the variables so that in each clause exactly one literal is satisfied?
Example (YES): x1 ∨ ¬x2 ∨ x3, x2 ∨ x3 ∨ x4
– x1=T, x2=T, x3=F, x4=F satisfies exactly one literal per clause
Example (NO): x1 ∨ x2 ∨ x3, x1 ∨ ¬x2 ∨ x3, ¬x1 ∨ x2 ∨ x3
– x1=T => contradiction, x1=F => contradiction

Subset Sum is NP-complete
Proof:
– 2n numbers: 2 per variable (one for x_i, one for ¬x_i)
– each number has 2(n+m) binary digits
– digits 2i-1, 2i encode variable i
– both the number for x_i and the number for ¬x_i have a 1 in digit block i; the digits of W in block i force us to pick exactly one of x_i, ¬x_i

Subset Sum is NP-complete
Proof (continued):
– 2n numbers: 2 per variable
– each number has 2(n+m) digits
– digits 2n+2j-1, 2n+2j encode clause j
– e.g., for c_j = x_1 ∨ ¬x_3 ∨ x_7, the numbers for x_1, ¬x_3, x_7 have a 1 in the digit block of clause j; the digits of W in that block force us to pick exactly one of x_1, ¬x_3, x_7

Subset Sum is NP-complete
Proof (summary):
– 2n numbers: 2 per variable
– each number has 2(n+m) digits
– digits 2i-1, 2i encode variable i
– digits 2n+2j-1, 2n+2j encode clause j
– we must pick exactly one of the numbers for x_i, ¬x_i
– we can obtain W iff exactly one literal per clause is satisfied
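
A sketch of this construction in Python (function names and the clause encoding are my own; each two-binary-digit block of the slides is represented here as one base-4 digit, which is equivalent since a block never sums to more than 3 and so never carries):

```python
from itertools import combinations

def one_in_three_sat_to_subset_sum(n, clauses):
    """Build a Subset Sum instance from a 1-in-3-SAT instance.
    n: number of variables (named 1..n); clauses: lists of literals,
    where literal +i means x_i and -i means the negation of x_i."""
    m = len(clauses)
    base = 4
    numbers = []
    for i in range(1, n + 1):
        for lit in (i, -i):
            val = base ** (i - 1)                 # a 1 in variable block i
            for j, clause in enumerate(clauses):
                if lit in clause:
                    val += base ** (n + j)        # a 1 in clause block j
            numbers.append(val)
    # target W: digit 1 in every variable block and every clause block
    target = sum(base ** k for k in range(n + m))
    return numbers, target

# The YES instance from the slide: (x1 or not-x2 or x3), (x2 or x3 or x4)
nums, W = one_in_three_sat_to_subset_sum(4, [[1, -2, 3], [2, 3, 4]])
print(any(sum(c) == W for r in range(len(nums) + 1)
          for c in combinations(nums, r)))        # True
```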

What about Partition and Knapsack?
Subset Sum is a special case of Knapsack => Knapsack is NP-hard
Can reduce Subset Sum to Partition:
– instance of Subset Sum: (w_1, ..., w_n; W)
– set X = w_1 + ... + w_n
– instance of Partition: (3X-W, 2X+W, w_1, ..., w_n)
yes-instance of SS => yes-instance of P:
– 3X-W plus a subset of the w_i's of weight W has weight 3X
– 2X+W plus the remaining w_i's, of weight X-W, has weight 3X
yes-instance of P => yes-instance of SS:
– any part of weight 3X contains exactly one of 3X-W, 2X+W
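
The reduction is a one-liner; a small sketch (names mine):

```python
def subset_sum_to_partition(weights, W):
    """Map a Subset Sum instance (weights; W) to a Partition instance
    by adding the two padding numbers 3X - W and 2X + W."""
    X = sum(weights)
    return [3 * X - W, 2 * X + W] + list(weights)

# (4, 2, 5; 6) is a yes-instance of Subset Sum (4 + 2 = 6), so the
# Partition instance below splits into two halves of weight 3X = 33
print(subset_sum_to_partition([4, 2, 5], 6))   # [27, 28, 4, 2, 5]
```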

Knapsack: an (Efficient?) Algorithm
Dynamic programming:
V(i, w): maximum value you can achieve by using a subset of the first i items of total weight w
– V(1, w) = 0 if w ≠ w_1 and v_1 if w = w_1
– V(i+1, w) = max {V(i, w), v_{i+1} + V(i, w - w_{i+1})}
Example: item 1: w_1 = 4, v_1 = 5; item 2: w_2 = 2, v_2 = 3
[table illustrating the recurrence: each entry z in row i+1 is max {y, v_{i+1} + x}, where y is the entry above it in row i and x is the entry in row i that lies w_{i+1} columns to the left]

Knapsack: an (Efficient?) Algorithm
V(i, w): maximum value you can achieve by using a subset of the first i items of weight w
V(n, w): maximum value of a subset of weight w
If V(n, w) ≥ V for some w ≤ W, the answer is "yes", else "no"
Running time:
– each V(i, w) can be computed in O(log V) time
– nW values to compute
– final scan: O(W)
Total: O(nW log V)
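
A minimal sketch of this DP in Python. The one-dimensional rolling table and the -inf marker for unreachable weights are implementation choices of mine (the slide uses 0 there); the decision answer is the same:

```python
def knapsack_decision_dp(weights, values, W, V):
    """Weight-indexed DP for the Knapsack decision problem.
    best[w] = best value of a subset of the items seen so far whose
    total weight is exactly w (-inf if no such subset exists).
    O(n * W) arithmetic operations."""
    NEG = float("-inf")
    best = [NEG] * (W + 1)
    best[0] = 0                                   # the empty subset
    for w_i, v_i in zip(weights, values):
        for w in range(W, w_i - 1, -1):           # downwards: use each item once
            if best[w - w_i] != NEG:
                best[w] = max(best[w], best[w - w_i] + v_i)
    return max(best) >= V                         # final scan over all w <= W

# Example from the slides: items (w, v) = (4, 5), (2, 3), capacity 5, target 5
print(knapsack_decision_dp([4, 2], [5, 3], W=5, V=5))   # True
```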

What Is an Efficient Algorithm?
Knapsack input:
– n+1 numbers of size at most W each
– n+1 numbers of size at most V each
– representation size: O(n(log W + log V))
  (numbers are represented in binary)
Dynamic programming algorithm: O(nW log V)
– exponential in input size: W is exponentially larger than log W

Small Numbers
Knapsack input:
– n+1 numbers of size at most W each
– n+1 numbers of size at most V each
– representation size: O(n(log W + log V))
What if weights and values are small?
– W = poly(n), V = poly(n)
– DP running time: O(nW log V) = poly(n): polynomial in input size!

Binary vs. unary notation
Knapsack input:
– n+1 numbers of size at most W each
– n+1 numbers of size at most V each
What if inputs are given in unary?
– w is represented as 111...1 (w times)
– representation size: O(n(W + V))
DP running time: O(nW log V): polynomial in input size!

Strong vs. Weak NP-hardness
A problem is weakly NP-hard if it is hard when inputs are given in binary, but not in unary
A problem is strongly NP-hard if it is hard even when inputs are given in unary
Example: Longest Path
– input: graph G=(V, E), source s, sink t, target k, edge lengths l_1, ..., l_{|E|}
– question: is there a loop-free path from s to t of length at least k?
– hard even for small weights (in fact, 0/1 weights)

Implications
If you have an NP-hard problem, try to figure out where the hardness comes from:
– big numbers in the problem?
– structure of the problem?
– both?
Useful for understanding which special cases are likely to be easy

Coping With Intractability
Good algorithm:
(1) returns exact answer
(2) works on all instances
(3) runs in poly-time
For NP-hard problems, can have at most 2 out of 3!
– (1)+(2): exp-time algorithms (can be practical)
– (1)+(3): heuristics
– (2)+(3): approximation algorithms

Approximation Algorithms
yes/no problems => problems with numeric answers
problem X: I (instance) => S(I) ⊆ R (solution space)
maximization problems: OPT(I) = max S(I)
– max-value Knapsack, longest path
minimization problems: OPT(I) = min S(I)
– shortest TSP tour, smallest vertex cover

Approximation Algorithms
Definition:
– an algorithm A is a c-approximation for a maximization problem X if for every instance I, A outputs A(I) ∈ S(I) such that OPT(I)/c ≤ A(I) ≤ OPT(I)
– an algorithm A is a c-approximation for a minimization problem X if for every instance I, A outputs A(I) ∈ S(I) such that OPT(I) ≤ A(I) ≤ c·OPT(I)
c: approximation ratio

How Do You Bound the Approximation Ratio?
We do not know the value of OPT(I)
How can we prove that something is within a constant factor of OPT(I)?
General idea (minimization version):
– find an easily computable lower bound X on OPT(I), i.e., OPT(I) ≥ X
– show that A(I) is within a constant factor of X, i.e., A(I) ≤ cX

Example: Vertex Cover
Instance: graph G=(V, E), target K
Question: is there a V' ⊆ V such that
– |V'| ≤ K, and
– for any (u, v) ∈ E, either u ∈ V' or v ∈ V'?
Optimization version: what is the size of the smallest vertex cover?

Vertex Cover: 2-Approximation
1. Start with V' empty
2. While there are unmarked edges left, repeat:
   – pick an arbitrary unmarked edge e=(u, v) and mark it
   – add u, v to V'
   – remove from G all unmarked edges incident to u or v
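
A short sketch of this procedure (the function name and edge-list representation are mine): an edge is still "unmarked" exactly when neither of its endpoints has been added yet, so the edges whose endpoints we both take play the role of the marked matching.

```python
def vertex_cover_2approx(edges):
    """Matching-based 2-approximation for Vertex Cover.
    edges: iterable of pairs (u, v). Returns a set of vertices covering
    every edge; its size is at most twice the minimum vertex cover."""
    cover = set()
    for u, v in edges:
        # neither endpoint covered yet <=> the edge is still unmarked;
        # the edges we take here form a maximal matching
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# Path a-b-c-d: the optimum cover {b, c} has size 2
print(vertex_cover_2approx([("a", "b"), ("b", "c"), ("c", "d")]))
# {'a', 'b', 'c', 'd'} -- size 4 <= 2 * OPT
```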

Vertex Cover: Proof
Claim 1: marked edges form a matching (do not share vertices)
Claim 2: given any matching M = {(u_1, v_1), ..., (u_k, v_k)}, any vertex cover (including an optimal one) must contain at least one of u_i, v_i for each pair (u_i, v_i) in M
V' contains both endpoints of every marked edge
– hence, V' is at most twice as large as OPT(I)

Can We Do Better?
Claim: unless P = NP, any poly-time approximation algorithm for VC has approximation ratio at least 1 + 1/(n+1)
Proof: suppose OPT(I) ≤ A(I) ≤ OPT(I) + OPT(I)/(n+1)
– OPT(I) ≤ n, so OPT(I)/(n+1) < 1
– A(I), OPT(I) are both integers and differ by less than 1
– so A(I) = OPT(I), i.e., A solves VC exactly, and P = NP

Polynomial-time Approximation Scheme (PTAS)
Definition (maximization version): A is a PTAS for a problem P if:
– inputs: instance I of P, an error parameter ε
– output: A(I, ε) such that (1-ε)OPT(I) ≤ A(I, ε) ≤ OPT(I)
– for every fixed ε, the running time of A is poly(|I|)
FPTAS: same, but the running time of A is poly(|I|, 1/ε) for all ε
(running time n/ε² gives an FPTAS; n^{1/ε} gives a PTAS, but not an FPTAS)

Tool: Another Pseudopolynomial Algorithm for Knapsack
Knapsack: n items, each item has a weight w_i and a value v_i; capacity W, target value V
poly(n, W, log V) algorithm: fill out table P(i, w): maximal value of a subset of {1, ..., i} of weight w
Claim: there is an algorithm for Knapsack that runs in time poly(n, V, log W)
Proof: fill out table W(i, v), i=1, ..., n, v=1, ..., V: minimal weight of a subset of {1, ..., i} of value v

Tool: Another Pseudopolynomial Algorithm for Knapsack
Claim: there is an algorithm for Knapsack that runs in time poly(n, V, log W)
Proof: fill out table W(i, v), i=1, ..., n, v=1, ..., V
W(i, v): minimal weight of a subset of {1, ..., i} of value v
– W(1, v) = +∞ if v ≠ v_1 and w_1 if v = v_1
– W(i+1, v) = min {W(i, v), w_{i+1} + W(i, v - v_{i+1})}
Check if W(n, v) ≤ W for some v ≥ V
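
A sketch of this value-indexed DP (names mine); I let the table run up to the total value of all items, an implementation choice of mine so that the final "value ≥ V" check also catches subsets whose value exceeds V:

```python
def knapsack_value_dp(weights, values, W, V):
    """Value-indexed DP: min_weight[v] = minimal weight of a subset of the
    items seen so far whose total value is exactly v (inf if impossible)."""
    INF = float("inf")
    v_total = sum(values)
    min_weight = [INF] * (v_total + 1)
    min_weight[0] = 0                             # the empty subset
    for w_i, v_i in zip(weights, values):
        for v in range(v_total, v_i - 1, -1):     # downwards: use each item once
            if min_weight[v - v_i] + w_i < min_weight[v]:
                min_weight[v] = min_weight[v - v_i] + w_i
    # yes iff some value v >= V is achievable with weight <= W
    return any(min_weight[v] <= W for v in range(V, v_total + 1))

print(knapsack_value_dp([4, 2], [5, 3], W=5, V=5))   # True
```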

Pseudopolynomial Algorithm => FPTAS
Algorithm from previous slide has running time poly(n, V, log W)
– polynomial if V = poly(n)
– also polynomial if values are drawn from a "small" set: e.g., all values are of the form kX, where k is at most poly(n); X can be huge
– idea: make the set of possible values small by rounding

FPTAS for Knapsack
Knapsack: n items, each item has a weight w_i and a value v_i; capacity W, target value V
Parameter ε
Algorithm: set δ = ε·v_max/n, and set v'_i = max{kδ | kδ < v_i}
Let k_i = v'_i/δ; observe that k_i ≤ n/ε (since δ·n/ε = v_max)
Recall: W(i, v) = min weight of a subset of {1, ..., i} of value v
W'(i, k): min weight of a subset of {1, ..., i} of rounded value kδ
– can be computed in the same way by DP

FPTAS for Knapsack: Bounding the Error
W'(i, k): min weight of a subset of {1, ..., i} of value kδ
Compute W'(i, k) for i=1, ..., n, k=1, ..., n²/ε ((n²/ε)·δ ≥ V)
Let V' = max {kδ | W'(n, k) ≤ W}
How different is V' from OPT?
– consider an optimal solution i_1, ..., i_t
– v_{i_1} + ... + v_{i_t} = OPT
– we have V' ≥ v'_{i_1} + ... + v'_{i_t}
– also v'_{i_j} ≥ v_{i_j} - δ for all j = 1, ..., t
So V' ≥ OPT - nδ = OPT - ε·v_max ≥ OPT - ε·OPT = OPT(1-ε)
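
Putting the last two slides together, a minimal FPTAS sketch (names mine). Two small liberties of mine: I round with floor (kδ ≤ v_i) rather than the strict inequality on the slide, which only improves the bound, and I assume every single item fits in the knapsack (w_i ≤ W), so that v_max ≤ OPT as the error analysis requires:

```python
import math

def knapsack_fptas(weights, values, W, eps):
    """FPTAS sketch: round values down to multiples of delta = eps*v_max/n,
    run the value-indexed DP on the small integer multipliers k_i, and
    return V' = best_k * delta, which is at least (1 - eps) * OPT."""
    n = len(values)
    v_max = max(values)
    delta = eps * v_max / n
    ks = [int(math.floor(v / delta)) for v in values]   # k_i <= n / eps
    k_total = sum(ks)
    INF = float("inf")
    min_weight = [INF] * (k_total + 1)
    min_weight[0] = 0
    for w_i, k_i in zip(weights, ks):
        for k in range(k_total, k_i - 1, -1):
            if min_weight[k - k_i] + w_i < min_weight[k]:
                min_weight[k] = min_weight[k - k_i] + w_i
    best_k = max(k for k in range(k_total + 1) if min_weight[k] <= W)
    return best_k * delta

# Items (w, v): (4, 5), (2, 3), (3, 4); capacity 5; OPT = 7 (items 2 and 3)
print(knapsack_fptas([4, 2, 3], [5, 3, 4], W=5, eps=0.1))   # about 7.0
```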

More Complexity Theory
Recall: 3-SAT asks whether a given Boolean formula (in 3-CNF) is satisfiable
Tautology: is a given formula a tautology, i.e., is it true for all assignments to the variables?
Does not seem to be in NP: what's the witness?
Easy to prove that a formula is NOT a tautology: exhibit a falsifying assignment
coNP: problems whose complements are in NP

More about coNP
Complement of a problem: Q is a complement of P if a yes-instance of P is a no-instance of Q and vice versa
How to prove that a problem is in coNP?
– prove that its complement is in NP
coNP-hard: as hard as any problem in coNP
How to prove that a problem is coNP-hard?
– prove that its complement is NP-hard
It is believed that coNP ≠ P, coNP ≠ NP

Even Harder Problems (1/2)
Minimal Circuit: given a Boolean circuit C with n inputs, is it minimal?
– is it the case that for any circuit C' with |C'| < |C| there is an input x such that C'(x) ≠ C(x)?
Does not seem to be in NP...
Complement: is a given Boolean circuit non-minimal?
– does not seem to be in NP either: even if you guess C', you need to show C'(x) = C(x) for all x

Even Harder Problems (2/2)
Team Stability:
– set of n agents N
– any subset S of N can form a team and earn u(S)
– assume that u(S) is poly-time computable
Question: can we split agents into teams S_1, ..., S_k and share the earnings so that no group of agents wants to deviate?
– is there a partition (S_1, ..., S_k) and a vector (p_1, ..., p_n) such that Σ_{j ∈ S_i} p_j = u(S_i) for all i=1, ..., k and Σ_{j ∈ S} p_j ≥ u(S) for all S ⊆ N?

Π₂ and Σ₂
Minimal Circuit: is it the case that for any circuit C' with |C'| < |C| there exists an input x such that C'(x) ≠ C(x)?
Team Stability: does there exist a partition (S_1, ..., S_k) and a vector (p_1, ..., p_n) such that Σ_{j ∈ S_i} p_j = u(S_i) for all i=1, ..., k and Σ_{j ∈ S} p_j ≥ u(S) for all S ⊆ N?
Π₂: problems of the form ∀x ∃y P(x, y)
Σ₂: problems of the form ∃x ∀y P(x, y)
(where P(x, y) is polynomial-time checkable)

Polynomial Hierarchy
– Σ₂: problems of the form ∃x ∀y P(x, y)
– Π₂: problems of the form ∀x ∃y P(x, y)
– NP (= Σ₁): problems of the form ∃x P(x)
– coNP (= Π₁): problems of the form ∀x P(x)
– can extend this further:
  Σ_k: k alternations of quantifiers starting with ∃
  Π_k: k alternations of quantifiers starting with ∀
– it is believed that all these classes are distinct
– can show Σ_k ⊆ Σ_{k+1}, Π_{k+1} and Π_k ⊆ Σ_{k+1}, Π_{k+1}

PSPACE
PSPACE: the class of problems you can solve using polynomial space
P ⊆ PSPACE: can't use up more than poly space in poly time
NP ⊆ PSPACE: can check all candidates, reusing the space
Π₂ ⊆ PSPACE: e.g., for Minimal Circuit, need
– poly(n) bits to go over circuits
– poly(n) bits to go over Boolean strings

More about PSPACE
All of the polynomial hierarchy is in PSPACE
EXPTIME: problems solvable in time 2^{poly(n)}
– PSPACE ⊆ EXPTIME
Are there PSPACE-complete problems?
Unbounded Quantifier Alternation: check if ∃x_1 ∀y_1 ∃x_2 ∀y_2 ... P(x_1, y_1, x_2, y_2, ...) is true
Claim: Unbounded Quantifier Alternation is PSPACE-complete

Problems in PSPACE: Other Examples
Given a 2-player game, does the first player have a winning strategy?
– does there exist a move for player 1 such that for any move of player 2 there exists a move of player 1 such that...
Any deterministic 2-player game that is guaranteed to terminate after polynomially many moves is in PSPACE
Some such games are PSPACE-complete: e.g., a version of Go

Counting Solutions
Sometimes you want to know how many solutions there are
– e.g., if there are many, you can sample
#P: the class of counting problems whose corresponding decision problems are in NP
– #SAT: how many satisfying assignments does a given formula have?
#P-completeness: complete under counting reductions (which preserve the number of solutions)
– #SAT is #P-complete
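
To make #SAT concrete, a brute-force counter (names and the clause representation are mine; it is exponential in the number of variables and only meant to illustrate what is being counted):

```python
from itertools import product

def count_sat(n, clauses):
    """Brute-force #SAT: count satisfying assignments of a CNF formula.
    n: number of variables (1..n); clauses: lists of literals (+i / -i)."""
    count = 0
    for bits in product((False, True), repeat=n):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            count += 1
    return count

# (x1 or x2) and (not-x1 or x2): x2 must be true, x1 is free => 2 assignments
print(count_sat(2, [[1, 2], [-1, 2]]))   # 2
```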

Summary
Lecture 1:
– polynomial time
– P vs. NP
– NP-complete problems
Lecture 2:
– strong NP-hardness
– pseudopolynomial algorithms
– approximation algorithms
– polynomial hierarchy, PSPACE, #P