CS 332: Algorithms
NP-Completeness
David Luebke, 1/14/2016


Slide 1: CS 332: Algorithms, NP-Completeness

Slide 2: Administrivia
- Homework 5 clarifications:
  - Deadline extended to Thursday, Dec 7
  - However, no late assignments will be accepted
    - This is so that I can discuss it in class
  - You may not work with others on this one

Slide 3: Review: Dynamic Programming
- When applicable:
  - Optimal substructure: an optimal solution to the problem consists of optimal solutions to subproblems
  - Overlapping subproblems: few subproblems in total, but many recurring instances of each
- Basic approach:
  - Build a table of solved subproblems that are used to solve larger ones
  - What is the difference between memoization and dynamic programming?
  - Why might the latter be more efficient?
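
The memoization-vs-dynamic-programming question above can be made concrete with a small sketch (Fibonacci is my illustrative choice, not from the slides): the first version caches recursive calls top-down, the second fills values bottom-up with no recursion overhead.

```python
from functools import lru_cache

# Top-down memoization: plain recursion plus a cache of solved subproblems.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up dynamic programming: solve subproblems in order, smallest first.
# No recursion, no cache lookups -- one reason it can be more efficient.
def fib_table(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```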

Slide 4: Review: Greedy Algorithms
- A greedy algorithm always makes the choice that looks best at the moment
  - The hope: a locally optimal choice will lead to a globally optimal solution
  - For some problems, it works
    - Yes: the fractional knapsack problem
    - No: playing a bridge hand
- Dynamic programming can be overkill; greedy algorithms tend to be easier to code

Slide 5: Review: Activity-Selection Problem
- The activity-selection problem: get your money's worth out of a carnival
  - Buy a wristband that lets you onto any ride
  - Lots of rides, starting and ending at different times
  - Your goal: ride as many rides as possible
- Naïve first-year CS major strategy:
  - Ride the first ride; when you get off, get on the very next ride possible; repeat until the carnival ends
- What is the sophisticated third-year strategy?

Slide 6: Review: Activity-Selection
- Formally:
  - Given a set S of n activities
    - s_i = start time of activity i
    - f_i = finish time of activity i
  - Find a maximum-size subset A of compatible activities
  - Assume the activities are sorted by finish time
- What is the optimal substructure for this problem?

Slide 7: Review: Activity-Selection
- Formally:
  - Given a set S of n activities
    - s_i = start time of activity i
    - f_i = finish time of activity i
  - Find a maximum-size subset A of compatible activities
  - Assume the activities are sorted by finish time
- What is the optimal substructure for this problem?
  - A: If k is the activity in A with the earliest finish time, then A - {k} is an optimal solution to S′ = {i ∈ S : s_i ≥ f_k}
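
The greedy strategy the slides build toward (repeatedly take the compatible activity with the earliest finish time) can be sketched as follows; the function name and the (start, finish) tuple encoding are my own illustrative choices.

```python
def select_activities(activities):
    """Greedy activity selection.

    activities: list of (start, finish) pairs.
    Returns a maximum-size list of mutually compatible activities.
    """
    chosen = []
    last_finish = float("-inf")
    # Consider activities in order of finish time (the slides assume
    # the input is already sorted this way; we sort to be safe).
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:  # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```

Each activity is examined once after sorting, so the whole selection runs in O(n lg n) time, dominated by the sort.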

Slide 8: Review: Greedy Choice Property for Activity Selection
- Dynamic programming? Memoize? Yes, but...
- The activity-selection problem also exhibits the greedy choice property:
  - Locally optimal choice ⇒ globally optimal solution
  - Theorem 17.1: if S is an activity-selection problem sorted by finish time, then there exists an optimal solution A ⊆ S such that {1} ⊆ A
    - Sketch of proof: if there is an optimal solution B that does not contain {1}, we can always replace the first activity in B with {1} (Why?). The result has the same number of activities and is thus also optimal.

Slide 9: Review: The Knapsack Problem
- The 0-1 knapsack problem:
  - A thief must choose among n items, where the ith item is worth v_i dollars and weighs w_i pounds
  - Carrying at most W pounds, maximize the value taken
- A variation, the fractional knapsack problem:
  - The thief can take fractions of items
  - Think of the items in the 0-1 problem as gold ingots, and in the fractional problem as buckets of gold dust
- What greedy-choice algorithm works for the fractional problem but not the 0-1 problem?
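
The answer to the slide's closing question is the value-per-pound greedy, sketched below (function name and tuple encoding are mine): take items in decreasing order of v_i / w_i, topping off with a fraction of the last item. This is optimal for the fractional problem but can fail for 0-1, where partial items are forbidden.

```python
def fractional_knapsack(items, capacity):
    """Greedy solution to the fractional knapsack problem.

    items: list of (value, weight) pairs; capacity: max weight carried.
    Returns the maximum total value obtainable.
    """
    total = 0.0
    # Greedy choice: best value-to-weight ratio first.
    for value, weight in sorted(items, key=lambda it: it[0] / it[1],
                                reverse=True):
        take = min(weight, capacity)       # take all of it, or what fits
        total += value * (take / weight)   # fractional items are allowed
        capacity -= take
        if capacity == 0:
            break
    return total
```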

Slide 10: Homework 4
- Problem 1 (detecting overlap in rectangles):
  - "Sweep" a vertical line across the rectangles; keep track of line-rectangle intersections with an interval tree
  - Need to sort the rectangles by x_min, x_max
  - Then need to do O(n) inserts and deletes
  - Total time: O(n lg n)

Slide 11: Homework 4
- Problem 2: Optimizing Kruskal's algorithm
  - Edge weights are integers from 1 to |V|:
    - Use counting sort to sort the edges in O(V + E) = O(E) time (Why?)
    - The bottleneck is now the disjoint-set union operations, so the total algorithm time is O(E α(E, V))
  - Edge weights are integers from 1 to a constant W:
    - Again use counting sort to sort the edges, in O(W + E) time
    - Which again = O(E) time
    - Which again means O(E α(E, V)) total time
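
The counting-sort step for small integer edge weights can be sketched like this (names and the (u, v, w) edge encoding are mine): one bucket per weight, one pass to distribute, one pass to collect, for O(W + E) total.

```python
def counting_sort_edges(edges, max_weight):
    """Sort edges (u, v, w) with integer weights 1..max_weight.

    Runs in O(max_weight + E) time, which is O(E) when max_weight is
    O(E) -- the speedup suggested for Kruskal's algorithm above.
    """
    buckets = [[] for _ in range(max_weight + 1)]  # one bucket per weight
    for edge in edges:
        buckets[edge[2]].append(edge)              # distribute: O(E)
    return [e for bucket in buckets for e in bucket]  # collect: O(W + E)
```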

Slide 12: Homework 4
- Problem 3: Optimizing Prim's algorithm
  - The running time of Prim's algorithm depends on the implementation of the priority queue
  - Edge weights are integers from 1 to a constant W:
    - Implement the queue as an array Q[1..W+1]
    - The ith slot in Q holds a doubly-linked list of vertices with key = i (What does slot W+1 represent?)
    - Extract-Min() takes O(1) time (How?)
    - Decrease-Key() takes O(1) time (How?)
    - Total asymptotic running time of Prim's: O(E)
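
A sketch of that array-of-buckets queue (class and method names are my own; Python sets stand in for the doubly-linked lists, since both give O(1) insert and delete). The answers to the slide's "How?" questions are in the comments.

```python
class BucketQueue:
    """Priority queue for integer keys 1..W, for Prim's algorithm
    with small constant edge weights."""

    def __init__(self, max_key):
        self.inf = max_key + 1  # slot W+1 represents key = infinity
        self.buckets = [set() for _ in range(self.inf + 1)]
        self.key = {}

    def insert(self, v, key=None):
        if key is None:
            key = self.inf          # new vertices start at "infinity"
        self.key[v] = key
        self.buckets[key].add(v)

    def decrease_key(self, v, new_key):
        # O(1): delete from one bucket, add to another.
        self.buckets[self.key[v]].discard(v)
        self.key[v] = new_key
        self.buckets[new_key].add(v)

    def extract_min(self):
        # Scans at most W+2 buckets; O(W) = O(1) since W is a constant.
        for bucket in self.buckets:
            if bucket:
                return bucket.pop()
        return None
```

With O(1) Extract-Min and Decrease-Key, Prim's loop does O(V) extractions and O(E) key decreases, giving the O(E) total the slide claims (for connected graphs, where E ≥ V - 1).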

Slide 13: Homework 4
- Problem 4:
  - No time to cover here, sorry

Slide 14: NP-Completeness
- Some problems are intractable: as they grow large, we are unable to solve them in reasonable time
- What constitutes reasonable time? The standard working definition is polynomial time:
  - On an input of size n, the worst-case running time is O(n^k) for some constant k
  - Polynomial time: O(n^2), O(n^3), O(1), O(n lg n)
  - Not polynomial time: O(2^n), O(n^n), O(n!)

Slide 15: Polynomial-Time Algorithms
- Are some problems solvable in polynomial time?
  - Of course: every algorithm we've studied provides a polynomial-time solution to some problem
  - We define P to be the class of problems solvable in polynomial time
- Are all problems solvable in polynomial time?
  - No: Turing's "Halting Problem" is not solvable by any computer, no matter how much time is given
  - Such problems are clearly intractable, and not in P

Slide 16: NP-Complete Problems
- The NP-Complete problems are an interesting class of problems whose status is unknown
  - No polynomial-time algorithm has been discovered for any NP-Complete problem
  - No superpolynomial lower bound has been proved for any NP-Complete problem, either
- We call this the P = NP question
  - The biggest open problem in CS

Slide 17: An NP-Complete Problem: Hamiltonian Cycles
- An example of an NP-Complete problem:
  - A hamiltonian cycle of an undirected graph is a simple cycle that contains every vertex
  - The hamiltonian-cycle problem: given a graph G, does it have a hamiltonian cycle?
    - Draw on board: dodecahedron, odd bipartite graph
  - Describe a naïve algorithm for solving the hamiltonian-cycle problem. Running time?
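
One answer to that exercise, sketched below (function name and edge encoding are mine): try every ordering of the vertices. With n vertices there are (n-1)! distinct orderings after fixing a start vertex, each checked in O(n) time, so the running time is factorial, nowhere near polynomial.

```python
from itertools import permutations

def has_hamiltonian_cycle(vertices, edges):
    """Naïve hamiltonian-cycle test: brute force over all orderings.

    vertices: list of vertices; edges: list of (u, v) undirected edges.
    Running time: O(n! * n) -- the point of the exercise.
    """
    edge_set = {frozenset(e) for e in edges}
    first, *rest = vertices
    # Fixing the first vertex avoids re-testing rotations of the same cycle.
    for perm in permutations(rest):
        cycle = [first, *perm, first]
        if all(frozenset(pair) in edge_set
               for pair in zip(cycle, cycle[1:])):
            return True
    return False
```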

Slide 18: P and NP
- As mentioned, P is the set of problems that can be solved in polynomial time
- NP (nondeterministic polynomial time) is the set of problems that can be solved in polynomial time by a nondeterministic computer
  - What the hell is that?

Slide 19: Nondeterminism
- Think of a nondeterministic computer as a computer that magically "guesses" a solution, then has to verify that it is correct
  - If a solution exists, the computer always guesses it
  - One way to imagine it: a parallel computer that can freely spawn an infinite number of processes
    - Have one processor work on each possible solution
    - All processors attempt to verify that their solution works
    - If any processor finds it has a working solution, the machine accepts
  - So: NP = problems verifiable in polynomial time

Slide 20: P and NP
- Summary so far:
  - P = problems that can be solved in polynomial time
  - NP = problems for which a solution can be verified in polynomial time
  - It is unknown whether P = NP (most suspect not)
- The hamiltonian-cycle problem is in NP:
  - We know no way to solve it in polynomial time
  - It is easy to verify a solution in polynomial time (How?)
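
One answer to that "How?", as a sketch (function name and edge encoding are mine): given a candidate cycle, check that it visits every vertex exactly once and that each consecutive pair, wrapping around, is an edge. Both checks are easily polynomial in the size of the graph.

```python
def verify_hamiltonian_cycle(vertices, edges, candidate):
    """Polynomial-time verifier for a claimed hamiltonian cycle.

    candidate: a proposed ordering of the vertices.
    """
    # Every vertex exactly once: O(n lg n).
    if sorted(candidate) != sorted(vertices):
        return False
    # Every consecutive pair (including last-to-first) is an edge: O(E + n).
    edge_set = {frozenset(e) for e in edges}
    n = len(candidate)
    return all(frozenset((candidate[i], candidate[(i + 1) % n])) in edge_set
               for i in range(n))
```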

Slide 21: NP-Complete Problems
- We will see that NP-Complete problems are the "hardest" problems in NP:
  - If any one NP-Complete problem can be solved in polynomial time...
  - ...then every NP-Complete problem can be solved in polynomial time...
  - ...and in fact every problem in NP can be solved in polynomial time (which would show P = NP)
  - Thus: solve hamiltonian-cycle in O(n^100) time, and you've proved that P = NP. Retire rich & famous.

Slide 22: Reduction
- The crux of NP-Completeness is reducibility
  - Informally, a problem P can be reduced to another problem Q if any instance of P can be "easily rephrased" as an instance of Q, the solution to which provides a solution to the instance of P
    - What do you suppose "easily" means?
    - This rephrasing is called a transformation
  - Intuitively: if P reduces to Q, then P is "no harder to solve" than Q

Slide 23: Reducibility
- An example:
  - P: Given a set of Booleans, is at least one TRUE?
  - Q: Given a set of integers, is their sum positive?
  - Transformation: (x_1, x_2, ..., x_n) → (y_1, y_2, ..., y_n), where y_i = 1 if x_i = TRUE and y_i = 0 if x_i = FALSE
- Another example:
  - Solving linear equations is reducible to solving quadratic equations
    - How can we easily use a quadratic-equation solver to solve linear equations?
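
The Boolean-to-integer reduction on this slide is small enough to write out directly (the function names are my own): solving P amounts to transforming the input and calling a solver for Q.

```python
def sum_positive(integers):
    """Problem Q: given a set of integers, is their sum positive?"""
    return sum(integers) > 0

def transform(booleans):
    """The transformation from the slide: TRUE -> 1, FALSE -> 0."""
    return [1 if b else 0 for b in booleans]

def any_true(booleans):
    """Problem P, solved entirely via the reduction to Q."""
    return sum_positive(transform(booleans))
```

The transformation runs in linear time, so P is polynomial-time reducible to Q: P is "no harder" than Q.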

Slide 24: Using Reductions
- If P is polynomial-time reducible to Q, we denote this P ≤_p Q
- Definition of NP-Complete:
  - If P is NP-Complete, all problems R ∈ NP are reducible to P
  - Formally: R ≤_p P for all R ∈ NP
- If P ≤_p Q and P is NP-Complete, then Q is also NP-Complete
  - This is the key idea you should take away today

Slide 25: Coming Up
- Given one NP-Complete problem, we can prove many interesting problems NP-Complete
  - Graph coloring (= register allocation)
  - Hamiltonian cycle
  - Hamiltonian path
  - Knapsack problem
  - Traveling salesman
  - Job scheduling with penalties
  - Many, many more

Slide 26: The End

