CS 3343: Analysis of Algorithms

1 CS 3343: Analysis of Algorithms
Review for Exam 2

2 Exam 2
Closed-book exam. One cheat sheet allowed (limited to a single page of letter-size paper, double-sided).
Tuesday, April 15, 10:00 – 11:25pm.
A basic calculator (no graphing) is allowed but not necessary.

3 Materials covered
Quick sort
Heap sort, priority queues
Linear-time sorting algorithms
Order statistics
Dynamic programming
Greedy algorithms
Questions will be similar to the homework / quizzes:
Be familiar with each algorithm's procedure
Some analysis of time/space complexity
One or two problems on algorithm design

4 Quick sort
Quicksort an n-element array:
Divide: Partition the array into two subarrays around a pivot x such that elements in the lower subarray ≤ x ≤ elements in the upper subarray.
Conquer: Recursively sort the two subarrays.
Combine: Trivial.
[ elements ≤ x | x | elements ≥ x ]
Key: linear-time partitioning subroutine.

5 Pseudocode for quicksort
QUICKSORT(A, p, r)
  if p < r
    then q ← PARTITION(A, p, r)
         QUICKSORT(A, p, q–1)
         QUICKSORT(A, q+1, r)
Initial call: QUICKSORT(A, 1, n)

6 Partition Code
Partition(A, p, r)
  x = A[p];        // pivot is the first element
  i = p;
  j = r + 1;
  while (TRUE) {
    repeat i++; until A[i] > x || i >= j;
    repeat j--; until A[j] < x || j <= i;
    if (i < j)
      Swap(A[i], A[j]);
    else
      break;
  }
  swap(A[p], A[j]);
  return j;
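The slide's Partition pseudocode can be turned into a runnable sketch. The Python below (function names `partition` and `quicksort` are mine, not from the slides) follows the same scheme: pivot at A[p], scan i rightward and j leftward, swap out-of-place pairs until the indices cross, then drop the pivot into its final slot.

```python
def partition(A, p, r):
    x = A[p]                        # pivot is the first element
    i, j = p, r + 1
    while True:
        i += 1
        while i <= r and A[i] < x:  # scan right for an element >= pivot
            i += 1
        j -= 1
        while A[j] > x:             # scan left for an element <= pivot
            j -= 1
        if i < j:
            A[i], A[j] = A[j], A[i]
        else:
            break
    A[p], A[j] = A[j], A[p]         # place the pivot at its final position
    return j

def quicksort(A, p=0, r=None):
    if r is None:
        r = len(A) - 1
    if p < r:
        q = partition(A, p, r)
        quicksort(A, p, q - 1)
        quicksort(A, q + 1, r)
    return A
```

On the slide-7 input 6 10 5 8 13 3 2 11, this partition produces exactly the trace shown there, returning q = 3 with the array rearranged to 3 2 5 6 13 8 10 11.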

7 Partition example (x = 6)
6 10  5  8 13  3  2 11    scan: i stops at 10, j stops at 2
6  2  5  8 13  3 10 11    swap A[i], A[j]
6  2  5  8 13  3 10 11    scan: i stops at 8, j stops at 3
6  2  5  3 13  8 10 11    swap A[i], A[j]
6  2  5  3 13  8 10 11    scan: i and j cross
3  2  5  6 13  8 10 11    final swap: pivot 6 moves to position q

8 Quick sort example
6 10  5  8 11  3  2 13
3  2  5  6 11  8 10 13
2  3  5  6 10  8 11 13
2  3  5  6  8 10 11 13
2  3  5  6  8 10 11 13

9 Quicksort Runtimes
Best case runtime: T_best(n) = O(n log n)
Worst case runtime: T_worst(n) = O(n²)
Average case runtime: T_avg(n) = O(n log n)
Expected runtime of randomized quicksort is O(n log n)

10 Randomized Partition
Randomly choose an element as the pivot: each time we need to partition, throw a die to decide which element to use as the pivot. Each element has probability 1/n of being selected.
Rand-Partition(A, p, r) {
  d = random();                     // draw a random number between 0 and 1
  index = p + floor((r-p+1) * d);   // p <= index <= r
  swap(A[p], A[index]);
  Partition(A, p, r);               // now use A[p] as pivot
}
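For intuition, here is a minimal runnable sketch of randomized quicksort. This is my own functional three-way variant, not the in-place Rand-Partition above: choosing the pivot uniformly at random still gives every element probability 1/n, and the three-way split makes duplicate keys harmless.

```python
import random

def rand_quicksort(A):
    if len(A) <= 1:
        return A
    x = random.choice(A)                  # each element chosen with prob. 1/n
    lo = [a for a in A if a < x]
    eq = [a for a in A if a == x]
    hi = [a for a in A if a > x]
    return rand_quicksort(lo) + eq + rand_quicksort(hi)
```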

11 Running time of randomized quicksort
         T(0) + T(n–1) + dn      if the split is 0 : n–1,
T(n) =   T(1) + T(n–2) + dn      if the split is 1 : n–2,
         ⋮
         T(n–1) + T(0) + dn      if the split is n–1 : 0.
The expected running time is an average of all cases.

12 Need to Prove: T(n) ≤ c n log(n)
Assumption (inductive hypothesis): T(k) ≤ c k log(k) for 0 ≤ k ≤ n–1.
Prove by induction; the bound holds if c ≥ 4.

13 Heaps
A heap can be seen as a complete binary tree (all levels full except possibly the lowest; conceptually, padded out to a perfect binary tree), stored in an array:
A = 16 14 10 8 7 9 3 2 4 1

14 Referencing Heap Elements
Parent(i) { return i/2; }
Left(i)   { return 2*i; }
Right(i)  { return 2*i + 1; }

15 Heap Operations: Heapify()
Heapify(A, i) {
  // precondition: subtrees rooted at Left(i) and Right(i) are heaps
  l = Left(i);
  r = Right(i);
  if (l <= heap_size(A) && A[l] > A[i])
    largest = l;
  else
    largest = i;
  if (r <= heap_size(A) && A[r] > A[largest])
    largest = r;
  if (largest != i) {
    Swap(A, i, largest);
    Heapify(A, largest);
  }
} // postcondition: subtree rooted at i is a heap
Among A[l], A[i], A[r], which one is largest? If there is a violation, fix it.
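Heapify translates directly into runnable Python. In this sketch (my code, not the slides'), A[0] is left unused so the 1-based Parent/Left/Right arithmetic from slide 14 carries over unchanged, and the heap size is passed explicitly.

```python
def heapify(A, i, heap_size):
    # A[0] is unused padding; the heap occupies A[1..heap_size].
    l, r = 2 * i, 2 * i + 1
    largest = l if l <= heap_size and A[l] > A[i] else i
    if r <= heap_size and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]   # fix the violation ...
        heapify(A, largest, heap_size)        # ... and recurse downward
```

Running heapify(A, 2, 10) on the slide-16 array reproduces the slide-22 result.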

16 Heapify() Example
A = 16 4 10 14 7 9 3 2 8 1   (the subtree rooted at node 2, key 4, violates the heap property)

17 A = 16 4 10 14 7 9 3 2 8 1
18 A = 16 4 10 14 7 9 3 2 8 1
19 A = 16 14 10 4 7 9 3 2 8 1   (swapped 4 with its larger child 14)
20 A = 16 14 10 4 7 9 3 2 8 1
21 A = 16 14 10 8 7 9 3 2 4 1   (swapped 4 with its larger child 8)
22 A = 16 14 10 8 7 9 3 2 4 1   (the subtree rooted at node 2 is now a heap)

23 BuildHeap()
// given an unsorted array A, make A a heap
BuildHeap(A) {
  heap_size(A) = length(A);
  for (i = length(A)/2 downto 1)
    Heapify(A, i);
}

24 BuildHeap() Example
Work through example A = {4, 1, 3, 2, 16, 9, 10, 14, 8, 7}

25 A = 4 1 3 2 16 9 10 14 8 7
26 A = 4 1 3 2 16 9 10 14 8 7
27 A = 4 1 3 14 16 9 10 2 8 7
28 A = 4 1 3 14 16 9 10 2 8 7
29 A = 4 1 10 14 16 9 3 2 8 7
30 A = 4 1 10 14 16 9 3 2 8 7
31 A = 4 16 10 14 1 9 3 2 8 7
32 A = 4 16 10 14 7 9 3 2 8 1
33 A = 4 16 10 14 7 9 3 2 8 1
34 A = 16 4 10 14 7 9 3 2 8 1
35 A = 16 14 10 4 7 9 3 2 8 1
36 A = 16 14 10 8 7 9 3 2 4 1

37 Analyzing BuildHeap(): Tight
To Heapify() a subtree takes O(h) time, where h is the height of the subtree; h = O(lg m), m = # nodes in the subtree.
The height of most subtrees is small.
Fact: an n-element heap has at most ⌈n/2^(h+1)⌉ nodes of height h.
CLRS 6.3 uses this fact to prove that BuildHeap() takes O(n) time.

38 Heapsort
Heapsort(A) {
  BuildHeap(A);
  for (i = length(A) downto 2) {
    Swap(A[1], A[i]);
    heap_size(A) -= 1;
    Heapify(A, 1);
  }
}
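BuildHeap and Heapsort as one self-contained runnable sketch (my Python port, 1-based with a dummy A[0]; heapify is repeated so the snippet stands alone):

```python
def heapify(A, i, n):
    l, r = 2 * i, 2 * i + 1
    largest = l if l <= n and A[l] > A[i] else i
    if r <= n and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        heapify(A, largest, n)

def build_heap(A, n):
    for i in range(n // 2, 0, -1):
        heapify(A, i, n)

def heapsort(data):
    A = [None] + list(data)        # shift to 1-based indexing
    n = len(data)
    build_heap(A, n)
    for i in range(n, 1, -1):
        A[1], A[i] = A[i], A[1]    # move current max to the sorted suffix
        heapify(A, 1, i - 1)       # restore the heap on the shrunk prefix
    return A[1:]
```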

39 Heapsort Example
Work through example A = {4, 1, 3, 2, 16, 9, 10, 14, 8, 7}

40 First: build a heap.  A = 16 14 10 8 7 9 3 2 4 1
41 Swap last and first.  A = 1 14 10 8 7 9 3 2 4 16
42 Last element sorted.  A = 1 14 10 8 7 9 3 2 4 | 16
43 Restore heap on remaining unsorted elements (Heapify).  A = 14 8 10 4 7 9 3 2 1 | 16
44 Repeat: swap new last and first.  A = 1 8 10 4 7 9 3 2 | 14 16
45 Restore heap.  A = 10 8 9 4 7 1 3 2 | 14 16
46 Repeat.  A = 9 8 3 4 7 1 2 | 10 14 16
47 Repeat.  A = 8 7 3 4 2 1 | 9 10 14 16
48 Repeat.  A = 1 2 3 4 7 8 9 10 14 16   (sorted)

49 Implementing Priority Queues
HeapMaximum(A) { return A[1]; }

50 Implementing Priority Queues
HeapExtractMax(A) {
  if (heap_size[A] < 1) { error; }
  max = A[1];
  A[1] = A[heap_size[A]];
  heap_size[A]--;
  Heapify(A, 1);
  return max;
}

51 HeapExtractMax Example
A = 16 14 10 8 7 9 3 2 4 1

52 HeapExtractMax Example
Swap first and last, then remove last.  A = 1 14 10 8 7 9 3 2 4 | 16

53 HeapExtractMax Example
Heapify.  A = 14 8 10 4 7 9 3 2 1   (returns 16)

54 Implementing Priority Queues
HeapChangeKey(A, i, key) {
  if (key <= A[i]) {       // decrease key: sift down
    A[i] = key;
    Heapify(A, i);
  } else {                 // increase key: bubble up
    A[i] = key;
    while (i > 1 && A[Parent(i)] < A[i]) {
      swap(A[i], A[Parent(i)]);
      i = Parent(i);
    }
  }
}

55 HeapChangeKey Example
Increase key (A[4]: 8 → 15).  A = 16 14 10 8 7 9 3 2 4 1

56 HeapChangeKey Example
Write the new key.  A = 16 14 10 15 7 9 3 2 4 1

57 HeapChangeKey Example
Bubble up.  A = 16 15 10 14 7 9 3 2 4 1

58 Implementing Priority Queues
HeapInsert(A, key) {
  heap_size[A]++;
  i = heap_size[A];
  A[i] = -∞;
  HeapChangeKey(A, i, key);
}
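The four priority-queue operations fit in one runnable sketch (my Python translation; the heap lives in A[1..len(A)-1] and grows or shrinks with the list itself):

```python
def heapify(A, i, n):
    l, r = 2 * i, 2 * i + 1
    largest = l if l <= n and A[l] > A[i] else i
    if r <= n and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        heapify(A, largest, n)

def heap_maximum(A):
    return A[1]

def heap_extract_max(A):
    max_val = A[1]
    A[1] = A[-1]
    A.pop()                        # shrink the heap by one
    heapify(A, 1, len(A) - 1)
    return max_val

def heap_change_key(A, i, key):
    if key <= A[i]:                # decrease key: sift down
        A[i] = key
        heapify(A, i, len(A) - 1)
    else:                          # increase key: bubble up
        A[i] = key
        while i > 1 and A[i // 2] < A[i]:
            A[i], A[i // 2] = A[i // 2], A[i]
            i //= 2

def heap_insert(A, key):
    A.append(float('-inf'))        # a -inf leaf keeps the heap valid
    heap_change_key(A, len(A) - 1, key)
```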

59 HeapInsert Example
HeapInsert(A, 17).  A = 16 14 10 8 7 9 3 2 4 1

60 HeapInsert Example
Append a new leaf with key -∞; -∞ makes it a valid heap.  A = 16 14 10 8 7 9 3 2 4 1 -∞

61 HeapInsert Example
Now call HeapChangeKey.  A = 16 14 10 8 7 9 3 2 4 1 17

62 HeapInsert Example
After bubbling up.  A = 17 16 10 8 14 9 3 2 4 1 7

63 Counting sort
COUNTING-SORT(A, B, n, k)
1. for i ← 1 to k                    ⊳ initialize
       do C[i] ← 0
2. for j ← 1 to n                    ⊳ count
       do C[A[j]] ← C[A[j]] + 1      ⊳ C[i] = |{key = i}|
3. for i ← 2 to k                    ⊳ compute running sum
       do C[i] ← C[i] + C[i–1]       ⊳ C[i] = |{key ≤ i}|
4. for j ← n downto 1                ⊳ re-arrange
       do B[C[A[j]]] ← A[j]
          C[A[j]] ← C[A[j]] – 1

64 Counting sort example
A: 4 1 3 4 3
After loop 2 (count):        C  = 1 0 2 2
After loop 3 (running sum):  C' = 1 1 3 5

65 Loop 4: re-arrange
A: 4 1 3 4 3    C: 1 1 3 5
For j = n downto 1: place B[C[A[j]]] ← A[j], then decrement C[A[j]].
Result: B = 1 3 3 4 4

66 Analysis
1. Initialize:    Θ(k)
2. Count:         Θ(n)
3. Running sum:   Θ(k)
4. Re-arrange:    Θ(n)
Total:            Θ(n + k)

67 Stable sorting
Counting sort is a stable sort: it preserves the input order among equal elements.
Why is this important? What other algorithms have this property?

68 Radix sort
Similar to sorting address books.
Treat each digit as a key; start from the least significant digit and work toward the most significant.

69 Time complexity
Sort each of the d digits by counting sort: total cost d(n + k); with k = 10, total cost Θ(dn).
Partition the d digits into groups of 3: total cost (n + 10³)d/3.
If we work with binaries rather than decimals, partition a binary number into groups of r bits: total cost (n + 2^r)d/r.
Choose r = log n: total cost dn / log n. Compare with dn log n for comparison-based sorting.
Catch: faster than quicksort only when n is very large.
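An LSD radix sort sketch for non-negative decimal integers (my helper, not from the slides), running a stable counting sort on each of the d digit positions:

```python
def radix_sort(A, d):
    for exp in range(d):                   # least significant digit first
        div = 10 ** exp
        C = [0] * 10
        for a in A:                        # count this digit's values
            C[(a // div) % 10] += 1
        for i in range(1, 10):             # running sums
            C[i] += C[i - 1]
        B = [0] * len(A)
        for a in reversed(A):              # stable placement, right to left
            digit = (a // div) % 10
            C[digit] -= 1
            B[C[digit]] = a
        A = B
    return A
```

Stability of each pass is what makes the earlier (less significant) digit orderings survive later passes.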

70 Randomized selection algorithm
RAND-SELECT(A, p, q, i)        ⊳ i-th smallest of A[p..q]
  if p = q and i > 1 then error!
  r ← RAND-PARTITION(A, p, q)
  k ← r – p + 1                ⊳ k = rank(A[r])
  if i = k then return A[r]
  if i < k then return RAND-SELECT(A, p, r – 1, i)
  else return RAND-SELECT(A, r + 1, q, i – k)
[ elements ≤ A[r] | A[r] | elements ≥ A[r] ]
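A functional sketch of the same idea (my code; i is the 1-based rank, and a three-way split around a random pivot plays the role of RAND-PARTITION):

```python
import random

def rand_select(A, i):
    x = random.choice(A)
    lo = [a for a in A if a < x]
    eq = [a for a in A if a == x]
    if i <= len(lo):                       # i-th smallest is in the lower part
        return rand_select(lo, i)
    if i <= len(lo) + len(eq):             # the pivot value has rank i
        return x
    return rand_select([a for a in A if a > x], i - len(lo) - len(eq))
```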

71 Complete example: select the 6th smallest element.
i = 6:  7 10 5 8 11 3 2 13
        → 3 2 5 7 11 8 10 13    (k = 4; i = 6 – 4 = 2)
recurse on the upper part: 11 8 10 13
        → 10 8 11 13            (k = 3; i = 2 < k)
recurse on the lower part: 10 8
        → 8 10                  (k = 2; i = 2 = k)
answer: 10
Note: here we always used the first element as the pivot to do the partition (instead of rand-partition).

72 Running time of randomized selection
         T(max(0, n–1)) + n     if the split is 0 : n–1,
T(n) ≤   T(max(1, n–2)) + n     if the split is 1 : n–2,
         ⋮
         T(max(n–1, 0)) + n     if the split is n–1 : 0.
For an upper bound, assume the i-th element always falls in the larger side of the partition.
The expected running time is an average of all cases.

73 Substitution method
Want to show T(n) = O(n), so we need to prove T(n) ≤ cn for n > n0.
Assume: T(k) ≤ ck for all k < n.
The induction goes through if c ≥ 4; therefore, T(n) = O(n).

74 Worst-case linear-time selection
SELECT(i, n)
1. Divide the n elements into groups of 5. Find the median of each 5-element group by rote.
2. Recursively SELECT the median x of the ⌊n/5⌋ group medians to be the pivot.
3. Partition around the pivot x. Let k = rank(x).
4. (Same as RAND-SELECT:)
   if i = k then return x
   elseif i < k then recursively SELECT the i-th smallest element in the lower part
   else recursively SELECT the (i–k)-th smallest element in the upper part

75 Developing the recurrence
SELECT(i, n)                                                      T(n)
1. Divide into groups of 5; find each group's median by rote.     Θ(n)
2. Recursively SELECT the median x of the ⌊n/5⌋ group medians.    T(n/5)
3. Partition around x; let k = rank(x).                           Θ(n)
4. Recursively SELECT in the lower part (i < k) or upper part.    T(7n/10 + 3)

76 Solving the recurrence
Assumption: T(k) ≤ ck for all k < n.
The substitution goes through if c ≥ 20 and n ≥ 60.

77 Elements of dynamic programming
Optimal sub-structure: optimal solutions to the original problem contain optimal solutions to sub-problems.
Overlapping sub-problems: some sub-problems appear in many solutions.

78 Two steps to dynamic programming
Formulate the solution as a recurrence relation of solutions to subproblems. Specify an order to solve the subproblems so you always have what you need.

79 Optimal subpaths
Claim: if a path start→goal is optimal, any sub-path (start→x, x→goal, or x→y, where x and y are on the optimal path) is also the shortest.
Proof by contradiction: if the subpath between x and y is not the shortest, we can replace it with a shorter one, which reduces the total length of the new path ⇒ the optimal path from start to goal is not the shortest ⇒ contradiction!
Hence, the subpath x→y must be the shortest among all paths from x to y.
(Diagram: a + b + c is shortest; if b' < b, then a + b' + c < a + b + c.)

80 Dynamic programming illustration
(Grid figure: edge distances between neighboring nodes, with the F values filled in from the start corner to the goal G.)
F(i, j) = min { F(i–1, j) + dist(i–1, j, i, j),  F(i, j–1) + dist(i, j–1, i, j) }
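The recurrence fills the grid in one row-by-row pass. A runnable sketch with hypothetical cost arrays (my names: cost_right[i][j] is the distance entering node (i, j) from its left neighbor, cost_down[i][j] from the node above; the unused row-0/column-0 entries are ignored):

```python
def grid_min_path(cost_right, cost_down):
    R, C = len(cost_down), len(cost_right[0])
    F = [[0.0] * C for _ in range(R)]      # F[i][j]: cheapest path to (i, j)
    for i in range(R):
        for j in range(C):
            if i == 0 and j == 0:
                continue                   # start corner costs nothing
            best = float('inf')
            if i > 0:                      # arrive from above
                best = min(best, F[i - 1][j] + cost_down[i][j])
            if j > 0:                      # arrive from the left
                best = min(best, F[i][j - 1] + cost_right[i][j])
            F[i][j] = best
    return F[R - 1][C - 1]
```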

81 Trace back
(Grid figure: follow the choice made at each node backwards from G to recover the optimal path.)

82 Longest Common Subsequence
Given two sequences x[1..m] and y[1..n], find a longest subsequence common to them both ("a" longest, not "the": it need not be unique).
x: A B C B D A B
y: B D C A B A
BCBA = LCS(x, y)    (functional notation, but LCS is not a function)

83 Optimal substructure
Notice that the LCS problem has optimal substructure: parts of the final solution are solutions of subproblems.
If z = LCS(x, y), then any prefix of z is an LCS of a prefix of x and a prefix of y.
Subproblems: "find the LCS of pairs of prefixes of x and y".

84 Finding the length of an LCS
Let c[i, j] be the length of LCS(x[1..i], y[1..j]) ⇒ c[m, n] is the length of LCS(x, y).
If x[m] = y[n]:  c[m, n] = c[m–1, n–1] + 1
If x[m] ≠ y[n]:  c[m, n] = max { c[m–1, n], c[m, n–1] }

85 DP Algorithm
Key: find the correct order to solve the sub-problems. Total number of sub-problems: m × n.
c[i, j] = c[i–1, j–1] + 1                 if x[i] = y[j],
          max { c[i–1, j], c[i, j–1] }    otherwise.
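Filling c row by row respects the dependency order (each cell needs only the cell above, to the left, and diagonally above-left). A sketch (`lcs_length` is my name) that returns the whole table:

```python
def lcs_length(x, y):
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]   # row 0 / column 0 stay 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:            # match case
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c
```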

86 LCS Example (0)
X = ABCB; m = |X| = 4
Y = BDCAB; n = |Y| = 5
Allocate array c[5, 6]

87 LCS Example (1)
Initialize row 0 and column 0:
for i = 1 to m: c[i, 0] = 0
for j = 1 to n: c[0, j] = 0

88–100 LCS Examples (2)–(14)
Fill the table cell by cell, row by row, using:
if ( X_i == Y_j ) c[i, j] = c[i-1, j-1] + 1
else              c[i, j] = max( c[i-1, j], c[i, j-1] )
The completed table:

  j:        0   1   2   3   4   5
            -   B   D   C   A   B
i=0    -    0   0   0   0   0   0
i=1    A    0   0   0   0   1   1
i=2    B    0   1   1   1   1   2
i=3    C    0   1   1   2   2   2
i=4    B    0   1   1   2   2   3

c[4, 5] = 3 = length of LCS(X, Y).

101 LCS Algorithm Running Time
The LCS algorithm calculates the value of each entry of the array c[m, n]. So what is the running time?
O(m·n), since each c[i, j] is calculated in constant time and there are m·n entries in the array.

102 How to find the actual LCS
The algorithm just found the length of the LCS, not the LCS itself. How to find the actual LCS?
For each c[i, j] we know how it was acquired: a match happens only when the first equation is taken.
So we can start from c[m, n] and go backwards, recording x[i] whenever c[i, j] = c[i-1, j-1] + 1.
For example, here c[i, j] = c[i-1, j-1] + 1 = 2 + 1 = 3.
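The trace-back in runnable form (my sketch; the fill is repeated so the snippet stands alone). x[i-1] is recorded exactly when the match equation c[i-1][j-1] + 1 was the one taken:

```python
def lcs(x, y):
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:         # match case: record and go diagonal
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]: # follow whichever max was taken
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))        # characters were collected backwards
```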

103 Finding LCS
Trace back from c[4, 5] = 3 through the completed table, following the equation used at each cell.
Time for trace back: O(m + n).

104 Finding LCS (2)
LCS (reversed order): B C B
LCS (straight order): B C B    (this string turned out to be a palindrome)

105 LCS as a longest path problem
(Grid figure: x = ABCB down the side, y = BDCAB across the top; diagonal edges of weight 1 where characters match, all other edges weight 0.)

106 LCS as a longest path problem
(Grid figure: the filled-in values; the longest path to the bottom-right corner has weight 3.)

107 A more general problem
Aligning two strings, such that:
Match = m; Mismatch = –s; Insertion/deletion = –d
Aligning ABBC with CABC. LCS = 3: ABC.
Candidate alignments:
  ABBC              -ABBC
  CABC              CAB-C
  Score = 2m – 2s   Score = 3m – 2d

108 Alignment as a longest path problem
(Grid figure: aligning ABBC with CABC; diagonal edges score m (match) or –s (mismatch), horizontal and vertical edges score –d.)

109 Recurrence
Let F(i, j) be the best alignment score between X[1..i] and Y[1..j]; F(m, n) is the best alignment score between X and Y.
Recurrence:
F(i, j) = max { F(i–1, j–1) + σ(i, j)    (match/mismatch),
                F(i–1, j) – d            (insertion on Y),
                F(i, j–1) – d            (insertion on X) }
where σ(i, j) = m if X[i] = Y[j], and –s otherwise.

110 Restaurant location problem
You work in the fast food business. Your company plans to open up new restaurants in Texas along I-35.
Towns along the highway are called t1, t2, …, tn. A restaurant at ti has estimated annual profit pi.
No two restaurants can be located within 10 miles of each other due to some regulation.
Your boss wants to maximize the total profit; you want a big bonus.

111 A DP algorithm
Suppose you've already found the optimal solution. It will either include tn or not.
Case 1: tn is not in the optimal solution. The best solution is the same as the best solution for t1, …, tn-1.
Case 2: tn is in the optimal solution. The best solution is pn + the best solution for t1, …, tj, where j < n is the largest index so that dist(tj, tn) ≥ 10.

112 Recurrence formulation
Let S(i) be the total profit of the optimal solution when the first i towns are considered (not necessarily selected); S(n) is the optimal solution to the complete problem.
S(n) = max { S(n–1),  S(j) + pn }    where j < n is the largest index with dist(tj, tn) ≥ 10.
Generalize:
S(i) = max { S(i–1),  S(j) + pi }    where j < i is the largest index with dist(tj, ti) ≥ 10.
Number of sub-problems: n. Boundary condition: S(0) = 0.
Dependency: S(i) depends on S(i–1) and S(j).
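A direct implementation of the recurrence (my sketch; towns are given as mile positions, and a linear scan finds the largest admissible j for each i):

```python
def max_profit(pos, profit, min_dist=10):
    # pos[i]: mile marker of town i+1; profit[i]: its estimated profit.
    n = len(pos)
    S = [0] * (n + 1)                  # S[0] = 0 is the boundary condition
    for i in range(1, n + 1):
        j = i - 1                      # largest j < i with dist >= min_dist
        while j > 0 and pos[i - 1] - pos[j - 1] < min_dist:
            j -= 1
        S[i] = max(S[i - 1], S[j] + profit[i - 1])
    return S[n]
```

On the slide-113 data (positions 0, 5, 7, 9, 15, 21, 24, 30, 40, 47 with the profits shown there), this returns the optimal 26.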

113 Example
Distance (mi):  5  2  2  6  6  3  6  10  7    (between consecutive towns; a dummy town 100 mi away precedes t1)
Profit (100k):  6  7  9  8  3  3  2  4   12  5
S(i):           6  7  9  9  10 12 12 14  26  26
Optimal: 26
S(i) = max { S(i–1),  S(j) + pi }   where j < i and dist(tj, ti) ≥ 10
Natural greedy 1: 25. Natural greedy 2: 24.

114 Complexity
Time: Θ(nk), where k is the maximum number of towns within 10 miles to the left of any town; Θ(n²) in the worst case. Can be improved to Θ(n) with some preprocessing tricks.
Memory: Θ(n)

115 Knapsack problem Each item has a value and a weight
Objective: maximize value Constraint: knapsack has a weight limitation Three versions: 0-1 knapsack problem: take each item or leave it Fractional knapsack problem: items are divisible Unbounded knapsack problem: unlimited supplies of each item. Which one is easiest to solve? We study the 0-1 problem today.

116 Formal definition (0-1 problem)
Knapsack has weight limit W.
Items labeled 1, 2, …, n (arbitrarily), with weights w1, w2, …, wn and values v1, v2, …, vn.
Assume all weights are integers; for practical reasons, only consider wi < W.
Objective: find a subset of items S such that Σi∈S wi ≤ W and Σi∈S vi is maximal among all such (feasible) subsets.

117 A DP algorithm
Suppose you've found the optimal solution S.
Case 1: item n is included. Find an optimal solution using items 1, 2, …, n-1 with weight limit W – wn.
Case 2: item n is not included. Find an optimal solution using items 1, 2, …, n-1 with weight limit W.

118 Recursive formulation
Let V[i, w] be the optimal total value when items 1, 2, …, i are considered for a knapsack with weight limit w ⇒ V[n, W] is the optimal solution.
V[n, W] = max { V[n-1, W-wn] + vn,  V[n-1, W] }
Generalize:
V[i, w] = max { V[i-1, w-wi] + vi    (item i is taken),
                V[i-1, w]            (item i not taken) }
V[i, w] = V[i-1, w] if wi > w        (item i cannot be taken).
Boundary condition: V[i, 0] = 0, V[0, w] = 0. Number of sub-problems = ?
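The recurrence filled bottom-up (my sketch; returns only the optimal value, with the trace-back left out):

```python
def knapsack(weights, values, W):
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]   # V[0][w] = V[i][0] = 0
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for w in range(1, W + 1):
            if wi > w:
                V[i][w] = V[i - 1][w]                 # item i cannot fit
            else:
                V[i][w] = max(V[i - 1][w],            # item i not taken
                              V[i - 1][w - wi] + vi)  # item i taken
    return V[n][W]
```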

119 Example
n = 6 (# of items), W = 10 (weight limit)
Items (weight, value): (2, 2), (4, 3), (3, 3), (5, 6), (2, 4), (6, 9)

120 (Table figure: filling in V[i, w] for i = 1..6, w = 1..10; each cell V[i, w] draws on V[i-1, w] and V[i-1, w-wi].)
V[i, w] = max { V[i-1, w-wi] + vi (item i is taken), V[i-1, w] (item i not taken) };
V[i, w] = V[i-1, w] if wi > w (item i cannot be taken).

121 (Completed table; the bottom-right entry gives V[6, 10] = 15.)

122 Optimal value: 15.
Trace back: items 6, 5, 1. Weight: 6 + 2 + 2 = 10. Value: 9 + 4 + 2 = 15.

123 Time complexity
Θ(nW). Polynomial? Pseudo-polynomial: polynomial in the numeric value W, not in the input size. Works well if W is small.
Consider the following items (weight, value): (10, 5), (15, 6), (20, 5), (18, 6), with weight limit 35.
Optimal solution: items 2 and 4 (value = 12).
Iterating over all subsets: 2^4 = 16 subsets. Dynamic programming: fill up a 4 × 35 = 140-entry table.
What's the problem? Many entries are unused (no such weight combination exists); a top-down (memoized) approach may be better.

124 Use DP algorithms to solve new problems
Directly map a new problem to a known problem, modify an algorithm for a similar task, or design your own.
Think about the problem recursively: the optimal solution to a larger problem can be computed from the optimal solutions of one or more subproblems, and these sub-problems can be solved in a certain manageable order.
Works nicely for naturally ordered data such as strings, trees, and some special graphs; trickier for general graphs.
The textbook has some very good exercises.

125 Greedy algorithm for restaurant location problem
select t1;
d = 0;
for (i = 2 to n)
  d = d + dist(ti-1, ti);
  if (d >= min_dist)
    select ti;
    d = 0;    // measure from the most recently selected town
end
Example (min_dist = 10):
distances: 5 7 9 15 6 9 15 10 7? — running d: 5 7 9 15 6 9 15 10 7 over gaps 5 2 2 6 6 3 6 10 7 (select where d ≥ 10, then reset d).

126 Complexity
Time: Θ(n). Memory: Θ(n) to store the input, Θ(1) extra for the greedy selection.

127 Knapsack problem
Each item has a value and a weight. Objective: maximize value; constraint: the knapsack has a weight limitation.
Three versions: 0-1 (take each item or leave it), fractional (items are divisible), unbounded (unlimited supplies of each item). Which one is easiest to solve?
We can solve the fractional knapsack problem using a greedy algorithm.

128 Greedy algorithm for fractional knapsack problem
Compute the value/weight ratio for each item and sort the items by ratio into decreasing order.
Call the remaining item with the highest ratio the most valuable item (MVI).
Iteratively: if the weight limit cannot be reached by adding the MVI, select the MVI entirely; otherwise select the MVI partially, up to the weight limit.
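The greedy rule as a runnable sketch (my code; items are (weight, value) pairs, and only the last selected item is ever taken fractionally):

```python
def fractional_knapsack(items, W):
    total = 0.0
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        take = min(w, W)               # whole item if it fits, else a fraction
        total += v * take / w
        W -= take
        if W == 0:
            break
    return total
```

For example, with made-up items [(10, 60), (20, 100), (30, 120)] and limit 50, it takes the first two items whole and 20/30 of the third.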

129 Example
Weight limit: 10.
(Table figure: items with weight (LB), value ($), and $/LB ratios 1.5, 1.2, 1, 0.75.)

130 Example
Weight limit: 10. Greedy by ratio: take item 5 entirely, then item 6 entirely, then 2 LB of item 4 to reach the weight limit.

131 Why is the greedy algorithm for the fractional knapsack problem valid?
Claim: the optimal solution must contain as much of the MVI as possible (either up to the weight limit or until the MVI is exhausted).
Proof by contradiction: suppose the optimal solution does not use all available MVI, i.e., there are still w (w < W) units of MVI left while we choose other items.
We can replace w pounds of less valuable items with MVI: the total weight is the same, but the value is higher than the "optimal" value. Contradiction.

132 Representing Graphs
Assume V = {1, 2, …, n}.
An adjacency matrix represents the graph as an n × n matrix A:
  A[i, j] = 1 if edge (i, j) ∈ E, 0 if edge (i, j) ∉ E.
For a weighted graph: A[i, j] = wij if (i, j) ∈ E, 0 otherwise.
For an undirected graph, the matrix is symmetric: A[i, j] = A[j, i].

133 Graphs: Adjacency Matrix
Example: a directed graph on vertices 1–4 with edges 1→2, 1→3, 2→3, 4→3. What is A?

134 Graphs: Adjacency Matrix
How much storage does the adjacency matrix require? A: O(V²)

135 Graphs: Adjacency Matrix
(Figure: the undirected version of the example; the matrix is symmetric.)

136 Graphs: Adjacency Matrix
(Figure: the weighted version of the example; A[i, j] holds the edge weight wij.)

137 Graphs: Adjacency Matrix
Time to answer whether there is an edge between vertices u and v: Θ(1). Memory required: Θ(n²) regardless of |E|.
Usually too much storage for large graphs, but can be very efficient for small graphs.
Most large interesting graphs are sparse, e.g., road networks (due to the limit on junctions).
For this reason the adjacency list is often a more appropriate representation.

138 Graphs: Adjacency List
Adjacency list: for each vertex v ∈ V, store a list of vertices adjacent to v.
Example:
Adj[1] = {2, 3}
Adj[2] = {3}
Adj[3] = {}
Adj[4] = {3}
Variation: can also keep a list of edges coming into each vertex.
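Both representations can be built from the same edge list; a small sketch (my helper names; vertices are 1..n, edges directed):

```python
def adjacency_list(n, edges):
    adj = {v: [] for v in range(1, n + 1)}
    for u, v in edges:
        adj[u].append(v)
    return adj

def adjacency_matrix(n, edges):
    # Row/column 0 are unused so indices match the 1-based vertex names.
    A = [[0] * (n + 1) for _ in range(n + 1)]
    for u, v in edges:
        A[u][v] = 1
    return A
```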

139 Graph representations
Adjacency list (linked representation): 1 → 2 → 3;  2 → 3;  3 → ∅;  4 → 3
How much storage does the adjacency list require? A: O(V+E)

140 Graph representations
Undirected graph: each edge appears in both endpoints' lists.
Adj[1] = {2, 3}, Adj[2] = {1, 3}, Adj[3] = {1, 2, 4}, Adj[4] = {3}

141 Graph representations
Weighted graph: each list entry stores a (neighbor, weight) pair.
Adj[1] = {(2,5), (3,6)}, Adj[2] = {(1,5), (3,9)}, Adj[3] = {(1,6), (2,9), (4,4)}, Adj[4] = {(3,4)}

142 Graphs: Adjacency List
How much storage is required?
For directed graphs, |Adj[v]| = out-degree(v), so the total number of items in the adjacency lists is Σ out-degree(v) = |E|.
For undirected graphs, |Adj[v]| = degree(v), so the total number of items is Σ degree(v) = 2|E|.
So: adjacency lists take Θ(V+E) storage.
Time needed to test if edge (u, v) ∈ E: O(n).

143 Tradeoffs between the two representations
|V| = n, |E| = m

                     Adj Matrix    Adj List
test (u, v) ∈ E      Θ(1)          O(n)
Degree(u)            Θ(n)
Memory               Θ(n²)         Θ(n+m)
Edge insertion
Edge deletion
Graph traversal

Both representations are very useful and have different properties, although adjacency lists are probably better for more problems.

144 Good luck with your exam!

