CS 3343: Analysis of Algorithms


CS 3343: Analysis of Algorithms Lecture 20: Review for Exam 2

Midterm 2
Closed-book exam
One cheat sheet allowed (limited to a single letter-size page, double-sided)
Thursday, April 11, 9:30am – 10:45am
A basic calculator (no graphing) is allowed but not necessary

Materials covered
Weeks 5 – 9 (February 12 – March 28):
Quicksort
Heapsort, priority queues
Linear-time sorting algorithms
Order statistics
Hash tables
Dynamic programming
Questions will be similar to the homework / quizzes:
Be familiar with each algorithm's procedure
Some analysis of time/space complexity
One or two problems on algorithm design

Quick sort
Quicksort an n-element array:
Divide: Partition the array into two subarrays around a pivot x such that elements in the lower subarray ≤ x ≤ elements in the upper subarray: [ ≤ x | x | ≥ x ]
Conquer: Recursively sort the two subarrays.
Combine: Trivial.
Key: linear-time partitioning subroutine.

Partition Code
Partition(A, p, r)
  x = A[p];          // pivot is the first element
  i = p; j = r + 1;
  while (TRUE) {
    repeat i++; until A[i] > x || i >= j;
    repeat j--; until A[j] < x || j <= i;
    if (i < j) Swap(A[i], A[j]);
    else break;
  }
  Swap(A[p], A[j]);
  return j;
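
A runnable Python sketch of this scheme (a standard Hoare-style variant with the pivot kept in the first slot; 0-indexed, unlike the pseudocode above):

def partition(A, p, r):
    """Partition A[p..r] around pivot A[p]; return the pivot's final index."""
    x = A[p]                         # pivot is the first element
    i, j = p, r + 1
    while True:
        i += 1
        while i <= r and A[i] < x:   # scan right for an element >= x
            i += 1
        j -= 1
        while A[j] > x:              # scan left for an element <= x
            j -= 1
        if i >= j:
            break
        A[i], A[j] = A[j], A[i]
    A[p], A[j] = A[j], A[p]          # put the pivot into its final place
    return j

On the example below, partition([6, 10, 5, 8, 13, 3, 2, 11], 0, 7) leaves the array as [3, 2, 5, 6, 13, 8, 10, 11] and returns index 3.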

Partition example (x = 6, p at the left end, r at the right end)
6 10 5 8 13 3 2 11    initial array; i and j scan inward
6 2 5 8 13 3 10 11    swap 10 and 2
6 2 5 3 13 8 10 11    swap 8 and 3
6 2 5 3 13 8 10 11    scan: j crosses i
3 2 5 6 13 8 10 11    final swap puts the pivot in place

Quick sort example
6 10 5 8 11 3 2 13
3 2 5 6 11 8 10 13
2 3 5 6 10 8 11 13
2 3 5 6 8 10 11 13
2 3 5 6 8 10 11 13   (sorted)

Quicksort Runtimes
Best case: T_best(n) = O(n log n)
Worst case: T_worst(n) = O(n²)
Average case: T_avg(n) = O(n log n)
Expected runtime of randomized quicksort is O(n log n)

Randomized Partition
Randomly choose an element as the pivot: every time we need to do a partition, draw a random number to decide which element to use as the pivot, so each element has probability 1/n of being selected.
Rand-Partition(A, p, r) {
  d = random();                    // draw a random number between 0 and 1
  index = p + floor((r-p+1) * d);  // p <= index <= r
  swap(A[p], A[index]);
  Partition(A, p, r);              // now use A[p] as pivot
}
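
A minimal randomized quicksort sketch in Python, reusing the partition function above (random.randint plays the role of the floor((r-p+1)*d) index draw):

import random

def rand_partition(A, p, r):
    index = random.randint(p, r)       # each position equally likely
    A[p], A[index] = A[index], A[p]    # move the random pivot to the front
    return partition(A, p, r)

def quicksort(A, p, r):
    if p < r:
        q = rand_partition(A, p, r)
        quicksort(A, p, q - 1)         # pivot A[q] is already in place
        quicksort(A, q + 1, r)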

Running time of randomized quicksort T(0) + T(n–1) + dn if 0 : n–1 split, T(1) + T(n–2) + dn if 1 : n–2 split, M T(n–1) + T(0) + dn if n–1 : 0 split, T(n) = The expected running time is an average of all cases Expectation

Proving the expected bound
Fact: Σ_{k=2}^{n–1} k log k ≤ (1/2) n² log n – (1/8) n²
Need to prove: T(n) ≤ c n log n
Assumption (inductive hypothesis): T(k) ≤ c k log k for 0 ≤ k ≤ n–1
Prove by induction: the bound holds if c ≥ 4
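
A possible write-up of the substitution step in LaTeX (a sketch assuming the standard fact stated above; the slide's "c ≥ 4" corresponds to taking d = 1):

\begin{align*}
E[T(n)] &= \frac{2}{n}\sum_{k=0}^{n-1} E[T(k)] + dn
        \le \frac{2c}{n}\sum_{k=2}^{n-1} k\lg k + dn \\
        &\le \frac{2c}{n}\left(\frac{1}{2}n^2\lg n - \frac{1}{8}n^2\right) + dn
        = cn\lg n - \left(\frac{c}{4} - d\right)n
        \le cn\lg n \quad\text{if } c \ge 4d.
\end{align*}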

Heaps
A heap can be seen as a complete binary tree (this example happens to be a perfect binary tree), stored level by level in an array:
A = 16 14 10 8 7 9 3 2 4 1

Referencing Heap Elements
Parent(i) { return i/2; }
Left(i)   { return 2*i; }
Right(i)  { return 2*i + 1; }

Heap Operations: Heapify()
Heapify(A, i) { // precondition: subtrees rooted at Left(i) and Right(i) are heaps
  l = Left(i); r = Right(i);
  if (l <= heap_size(A) && A[l] > A[i])
    largest = l;
  else
    largest = i;
  if (r <= heap_size(A) && A[r] > A[largest])
    largest = r;
  if (largest != i) {
    Swap(A, i, largest);
    Heapify(A, largest);
  }
} // postcondition: subtree rooted at i is a heap
Idea: among A[i], A[l], A[r], find the largest; if the heap property is violated at i, fix it and recurse.
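
The same operation as a runnable Python sketch (1-indexed as on the slides: A[0] is an unused placeholder and heap_size is passed explicitly):

def heapify(A, i, heap_size):
    """Sift A[i] down until the subtree rooted at i is a max-heap again."""
    l, r = 2 * i, 2 * i + 1
    largest = l if l <= heap_size and A[l] > A[i] else i
    if r <= heap_size and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]   # fix the violation ...
        heapify(A, largest, heap_size)        # ... and recurse downward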

Heapify() Example: Heapify(A, 2)
A = 16 4 10 14 7 9 3 2 8 1    violation at node 2 (4 < 14)
A = 16 14 10 4 7 9 3 2 8 1    swap 4 and 14; violation moves to node 4 (4 < 8)
A = 16 14 10 8 7 9 3 2 4 1    swap 4 and 8; subtree rooted at node 2 is now a heap

BuildHeap()
// given an unsorted array A, make A a heap
BuildHeap(A) {
  heap_size(A) = length(A);
  for (i = length(A)/2 downto 1)
    Heapify(A, i);
}
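
In Python, continuing the 1-indexed sketch above:

def build_heap(A):
    """Make A[1..n] a max-heap in place (A[0] unused)."""
    n = len(A) - 1
    for i in range(n // 2, 0, -1):   # nodes n//2+1 .. n are already leaves
        heapify(A, i, n)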

BuildHeap() Example
Work through example A = {4, 1, 3, 2, 16, 9, 10, 14, 8, 7}, calling Heapify(A, i) for i = 5 downto 1:
A = 4 1 3 2 16 9 10 14 8 7    i = 5: no change; i = 4: 2 < 14
A = 4 1 3 14 16 9 10 2 8 7    i = 3: 3 < 10
A = 4 1 10 14 16 9 3 2 8 7    i = 2: 1 < 16
A = 4 16 10 14 1 9 3 2 8 7    1 keeps sifting down (1 < 7)
A = 4 16 10 14 7 9 3 2 8 1    i = 1: 4 < 16
A = 16 4 10 14 7 9 3 2 8 1    4 sifts down (4 < 14)
A = 16 14 10 4 7 9 3 2 8 1    4 sifts down (4 < 8)
A = 16 14 10 8 7 9 3 2 4 1    done: A is a max-heap

Analyzing BuildHeap(): Tight Bound
Heapify() on a subtree takes O(h) time, where h is the height of the subtree
h = O(lg m), where m = # nodes in the subtree
The height of most subtrees is small
Fact: an n-element heap has at most ⌈n/2^(h+1)⌉ nodes of height h
CLRS 6.3 uses this fact to prove that BuildHeap() takes O(n) time

Heapsort
Heapsort(A) {
  BuildHeap(A);
  for (i = length(A) downto 2) {
    Swap(A[1], A[i]);
    heap_size(A) -= 1;
    Heapify(A, 1);
  }
}
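
A runnable version built from the two sketches above, checked against the running example:

def heapsort(A):
    """Sort A[1..n] ascending in place (A[0] unused)."""
    build_heap(A)
    heap_size = len(A) - 1
    for i in range(len(A) - 1, 1, -1):
        A[1], A[i] = A[i], A[1]      # move the current max to the end
        heap_size -= 1
        heapify(A, 1, heap_size)     # restore the heap on the rest

data = [None, 4, 1, 3, 2, 16, 9, 10, 14, 8, 7]
heapsort(data)
print(data[1:])                      # [1, 2, 3, 4, 7, 8, 9, 10, 14, 16]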

Heapsort Example
Work through example A = {4, 1, 3, 2, 16, 9, 10, 14, 8, 7}:
A = 16 14 10 8 7 9 3 2 4 1     first: build a heap
A = 1 14 10 8 7 9 3 2 4 | 16   swap last and first; last element is sorted
A = 14 8 10 4 7 9 3 2 1 | 16   Heapify the remaining unsorted elements
A = 1 8 10 4 7 9 3 2 | 14 16   repeat: swap new last and first
A = 10 8 9 4 7 1 3 2 | 14 16   restore heap
A = 9 8 3 4 7 1 2 | 10 14 16   repeat (swap, then Heapify)
A = 8 7 3 4 2 1 | 9 10 14 16   repeat
…
A = 1 2 3 4 7 8 9 10 14 16     sorted

Implementing Priority Queues HeapMaximum(A) { return A[1]; }

Implementing Priority Queues
HeapExtractMax(A) {
  if (heap_size[A] < 1) { error; }
  max = A[1];
  A[1] = A[heap_size[A]];
  heap_size[A]--;
  Heapify(A, 1);
  return max;
}

HeapExtractMax Example
A = 16 14 10 8 7 9 3 2 4 1     max = 16
A = 1 14 10 8 7 9 3 2 4 | 16   swap first and last, then remove last
A = 14 8 10 4 7 9 3 2 1        Heapify

Implementing Priority Queues
HeapChangeKey(A, i, key) {
  if (key <= A[i]) {   // decrease key: sift down
    A[i] = key;
    Heapify(A, i);
  } else {             // increase key: bubble up
    A[i] = key;
    while (i > 1 && A[parent(i)] < A[i]) {
      swap(A[i], A[parent(i)]);
      i = parent(i);
    }
  }
}

HeapChangeKey Example: increase the key 8 to 15
A = 16 14 10 8 7 9 3 2 4 1    start
A = 16 14 10 15 7 9 3 2 4 1   set the key; violation (15 > its parent 14)
A = 16 15 10 14 7 9 3 2 4 1   bubble up: swap with the parent

Implementing Priority Queues
HeapInsert(A, key) {
  heap_size[A]++;
  i = heap_size[A];
  A[i] = -∞;
  HeapChangeKey(A, i, key);
}
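
The four operations above collected into one runnable Python class (a hypothetical wrapper name; the method bodies follow the slide pseudocode):

class MaxPQ:
    def __init__(self):
        self.A = [None]                      # A[0] unused; heap lives in A[1..n]

    def _heapify(self, i):
        A, n = self.A, len(self.A) - 1
        l, r = 2 * i, 2 * i + 1
        largest = l if l <= n and A[l] > A[i] else i
        if r <= n and A[r] > A[largest]:
            largest = r
        if largest != i:
            A[i], A[largest] = A[largest], A[i]
            self._heapify(largest)

    def maximum(self):                       # HeapMaximum
        return self.A[1]

    def extract_max(self):                   # HeapExtractMax
        A = self.A
        if len(A) < 2:
            raise IndexError("heap underflow")
        top, A[1] = A[1], A[-1]              # move the last element to the root
        A.pop()
        if len(A) > 1:
            self._heapify(1)
        return top

    def change_key(self, i, key):            # HeapChangeKey
        A = self.A
        if key <= A[i]:                      # decrease key: sift down
            A[i] = key
            self._heapify(i)
        else:                                # increase key: bubble up
            A[i] = key
            while i > 1 and A[i // 2] < A[i]:
                A[i], A[i // 2] = A[i // 2], A[i]
                i //= 2

    def insert(self, key):                   # HeapInsert
        self.A.append(float('-inf'))         # -inf keeps the heap valid
        self.change_key(len(self.A) - 1, key)

pq = MaxPQ()
for k in [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]:
    pq.insert(k)
print(pq.extract_max())                      # 16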

HeapInsert Example: HeapInsert(A, 17)
A = 16 14 10 8 7 9 3 2 4 1       start
A = 16 14 10 8 7 9 3 2 4 1 -∞    appending -∞ keeps it a valid heap
A = 16 14 10 8 7 9 3 2 4 1 17    now call HeapChangeKey
A = 17 16 10 8 14 9 3 2 4 1 7    17 bubbles up to the root

Counting sort
1. for i ← 1 to k
     do C[i] ← 0                  ⊳ initialize
2. for j ← 1 to n
     do C[A[j]] ← C[A[j]] + 1     ⊳ count: C[i] = |{key = i}|
3. for i ← 2 to k
     do C[i] ← C[i] + C[i–1]      ⊳ running sum: C[i] = |{key ≤ i}|
4. for j ← n downto 1
     do B[C[A[j]]] ← A[j]
        C[A[j]] ← C[A[j]] – 1     ⊳ re-arrange
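
A direct Python transcription of the four loops (keys assumed to be integers in 1..k; the output array B is 0-indexed here):

def counting_sort(A, k):
    n = len(A)
    C = [0] * (k + 1)                  # 1. initialize
    for x in A:                        # 2. count: C[i] = #{key = i}
        C[x] += 1
    for i in range(2, k + 1):          # 3. running sum: C[i] = #{key <= i}
        C[i] += C[i - 1]
    B = [None] * n
    for j in range(n - 1, -1, -1):     # 4. re-arrange, right to left
        C[A[j]] -= 1                   # (decrement first since B is 0-indexed)
        B[C[A[j]]] = A[j]
    return B

print(counting_sort([4, 1, 3, 4, 3], 4))   # [1, 3, 3, 4, 4]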

Counting sort example (n = 5, k = 4)
A: 4 1 3 4 3
After loop 2 (count):        C  = 1 0 2 2
After loop 3 (running sum):  C′ = 1 1 3 5

Loop 4: re-arrange
A: 4 1 3 4 3, C: 1 1 3 5
j = 5: A[5] = 3, so B[C[3]] = B[3] ← 3, then C[3] decrements (C′ = 1 1 2 5)
Continuing for j = 4 downto 1 fills B: 1 3 3 4 4

Analysis
1. for i ← 1 to k: Θ(k)
2. for j ← 1 to n: Θ(n)
3. for i ← 2 to k: Θ(k)
4. for j ← n downto 1: Θ(n)
Total: Θ(n + k)

Stable sorting
Counting sort is a stable sort: it preserves the input order among equal elements.
A: 4 1 3 4 3 → B: 1 3 3 4 4 (the two 3s and the two 4s keep their relative order)
Why is this important? What other algorithms have this property?

Radix sort
Similar to sorting address books
Treat each digit as a key
Start from the least significant digit
[figure: columns of digits, from the most significant digit on the left to the least significant on the right]

Time complexity
Sort each of the d digits by counting sort
Total cost: d (n + k); with k = 10, total cost Θ(dn)
Partition the d digits into groups of 3: total cost (n + 10³) d/3
Work with binaries rather than decimals: partition a binary number into groups of r bits, total cost (n + 2^r) d/r
Choose r = log n: total cost dn / log n
Compare with dn log n for comparison-based sorting
Catch: faster than quicksort only when n is very large
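
An LSD radix sort sketch in Python with a stable counting sort on each digit (base 10 here, so k = 10; the grouping-into-r-bits variant just changes base to 2**r):

def radix_sort(A, d, base=10):
    for pos in range(d):                       # least significant digit first
        div = base ** pos
        C = [0] * base
        for x in A:
            C[(x // div) % base] += 1
        for i in range(1, base):
            C[i] += C[i - 1]
        B = [None] * len(A)
        for x in reversed(A):                  # right to left keeps it stable
            digit = (x // div) % base
            C[digit] -= 1
            B[C[digit]] = x
        A = B
    return A

print(radix_sort([329, 457, 657, 839, 436, 720, 355], 3))
# [329, 355, 436, 457, 657, 720, 839]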

Randomized selection algorithm
RAND-SELECT(A, p, q, i)         ⊳ i-th smallest of A[p..q]
  if p = q & i > 1 then error!
  r ← RAND-PARTITION(A, p, q)   ⊳ [ ≤ A[r] | A[r] | ≥ A[r] ]
  k ← r – p + 1                 ⊳ k = rank(A[r])
  if i = k then return A[r]
  if i < k
    then return RAND-SELECT(A, p, r – 1, i)
    else return RAND-SELECT(A, r + 1, q, i – k)
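
A runnable Python sketch, reusing rand_partition from the quicksort section (0-indexed array, 1-based rank i):

def rand_select(A, p, r, i):
    """Return the i-th smallest element of A[p..r]; expected O(n) time."""
    if p == r:
        return A[p]
    q = rand_partition(A, p, r)
    k = q - p + 1                     # rank of the pivot within A[p..r]
    if i == k:
        return A[q]
    elif i < k:
        return rand_select(A, p, q - 1, i)
    else:
        return rand_select(A, q + 1, r, i - k)

print(rand_select([7, 10, 5, 8, 11, 3, 2, 13], 0, 7, 6))   # 10, as in the example below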

Complete example: select the 6th smallest element (i = 6)
7 10 5 8 11 3 2 13
3 2 5 7 11 8 10 13    partition around 7: k = 4, so recurse right with i = 6 – 4 = 2
10 8 11 13            partition around 11: k = 3, i = 2 < k, recurse left
8 10                  partition around 10: k = 2, i = 2 = k
10                    answer
Note: here we always used the first element as the pivot to do the partition (instead of rand-partition).

Running time of randomized selection
T(n) ≤
  T(max(0, n–1)) + n    if 0 : n–1 split
  T(max(1, n–2)) + n    if 1 : n–2 split
  ⋮
  T(max(n–1, 0)) + n    if n–1 : 0 split
For an upper bound, assume the i-th element always falls in the larger side of the partition
The expected running time is the average over all cases

Substitution method
Want to show T(n) = O(n), i.e., T(n) ≤ cn for n > n₀
Assume: T(k) ≤ ck for all k < n
The induction goes through if c ≥ 4
Therefore, T(n) = O(n)

Worst-case linear-time selection
SELECT(i, n)
1. Divide the n elements into groups of 5. Find the median of each 5-element group by rote.
2. Recursively SELECT the median x of the ⌊n/5⌋ group medians to be the pivot.
3. Partition around the pivot x. Let k = rank(x).
4. if i = k then return x
   elseif i < k then recursively SELECT the i-th smallest element in the lower part
   else recursively SELECT the (i–k)-th smallest element in the upper part
(Step 4 is the same as in RAND-SELECT.)

Developing the recurrence
SELECT(i, n)                                                      cost
1. Divide the n elements into groups of 5; find each group median   Θ(n)
2. Recursively SELECT the median x of the ⌊n/5⌋ group medians       T(n/5)
3. Partition around x; let k = rank(x)                              Θ(n)
4. Recurse on one side of the partition (as in RAND-SELECT)         T(7n/10 + 3)

Solving the recurrence
T(n) ≤ T(n/5) + T(7n/10 + 3) + Θ(n)
Assumption: T(k) ≤ ck for all k < n
(using 7n/10 + 3 ≤ 3n/4 if n ≥ 60)
The substitution goes through if c ≥ 20 and n ≥ 60

Hash tables
The universe of keys U is much larger than the table size m, and much larger than the set K of actual keys
A hash function h maps each key to a slot in 0 .. m–1: k1 → h(k1), k4 → h(k4), …
Problem: collision, e.g. h(k2) = h(k5)

Chaining
Chaining puts elements that hash to the same slot in a linked list: each table slot T[j] holds the head of the list of keys with h(key) = j (possibly empty).

Hashing with Chaining Chained-Hash-Insert (T, x) Insert x at the head of list T[h(key[x])]. Worst-case complexity – O(1). Chained-Hash-Delete (T, x) Delete x from the list T[h(key[x])]. Worst-case complexity – proportional to length of list with singly-linked lists. O(1) with doubly-linked lists. Chained-Hash-Search (T, k) Search an element with key k in list T[h(k)]. Worst-case complexity – proportional to length of list.

Analysis of Chaining
Assume simple uniform hashing: each key in the table is equally likely to be hashed to any slot
Given n keys and m slots in the table, the load factor α = n/m = average # keys per slot
Average cost of an unsuccessful search for a key is Θ(1 + α) (Theorem 11.1)
Average cost of a successful search is Θ(2 + α/2) = Θ(1 + α) (Theorem 11.2)
If the number of keys n is proportional to the number of slots in the table, α = n/m = O(1)
So the expected cost of searching is constant if α is constant

Hash Functions: The Division Method h(k) = k mod m In words: hash k into a table with m slots using the slot given by the remainder of k divided by m Example: m = 31 and k = 78 => h(k) = 16. Advantage: fast Disadvantage: value of m is critical Bad if keys bear relation to m Or if hash does not depend on all bits of k Pick m = prime number not too close to power of 2 (or 10)

Hash Functions: The Multiplication Method
For a constant A, 0 < A < 1:
h(k) = ⌊m (kA mod 1)⌋ = ⌊m (kA – ⌊kA⌋)⌋, where kA mod 1 is the fractional part of kA
Advantage: value of m is not critical
Disadvantage: relatively slower
Choose m = 2^p for easier implementation
Choose A not too close to 0 or 1
Knuth: a good choice is A = (√5 – 1)/2
Example: m = 1024, k = 123, A ≈ 0.6180339887…
h(k) = ⌊1024 · (123 · 0.6180339887 mod 1)⌋ = ⌊1024 · 0.018169…⌋ = 18

A Universal Hash Function
Choose a prime number p larger than all possible keys
Choose table size m ≥ n
Randomly choose two integers a, b such that 1 ≤ a ≤ p–1 and 0 ≤ b ≤ p–1
h_{a,b}(k) = ((ak + b) mod p) mod m
Example: p = 17, m = 6: h_{3,4}(8) = ((3·8 + 4) mod 17) mod 6 = 11 mod 6 = 5
With a random pair of parameters a, b, the chance of a collision between x and y is at most 1/m
Expected search time for any input is Θ(1)
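
The three hash functions as runnable Python sketches (the numeric outputs match the slide examples):

import random

def division_hash(k, m):
    return k % m                                       # h(k) = k mod m

def multiplication_hash(k, m, A=(5 ** 0.5 - 1) / 2):   # Knuth's A = (sqrt(5)-1)/2
    return int(m * ((k * A) % 1.0))                    # floor(m * frac(kA))

def make_universal_hash(p, m):
    """Return a random h_{a,b}(k) = ((a*k + b) mod p) mod m."""
    a = random.randint(1, p - 1)
    b = random.randint(0, p - 1)
    return lambda k: ((a * k + b) % p) % m

print(division_hash(78, 31))            # 16
print(multiplication_hash(123, 1024))   # 18
h = make_universal_hash(p=17, m=6)      # e.g. a = 3, b = 4 gives h(8) = 5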

Elements of dynamic programming
Optimal sub-structures: optimal solutions to the original problem contain optimal solutions to sub-problems
Overlapping sub-problems: some sub-problems appear in many solutions

Two steps to dynamic programming Formulate the solution as a recurrence relation of solutions to subproblems. Specify an order to solve the subproblems so you always have what you need.

Optimal subpaths
Claim: if a path start→goal is optimal, any sub-path start→x, x→goal, or x→y, where x and y are on the optimal path, is also the shortest between its endpoints.
Proof by contradiction: suppose the optimal path has length a + b + c, where b is the length of its sub-path from x to y. If that sub-path is not the shortest, we can replace it with a shorter one of length b′ < b; then a + b′ + c < a + b + c, which reduces the total length of the path => the optimal path from start to goal is not the shortest => contradiction!
Hence, the sub-path x→y must be the shortest among all paths from x to y.

Dynamic programming illustration
[figure: grid graph with edge weights; the shortest-path value F(i, j) is filled in at each node, ending with 20 at the goal G]
F(i, j) = min { F(i–1, j) + dist(i–1, j, i, j),
                F(i, j–1) + dist(i, j–1, i, j) }

Trace back
[figure: the same grid; the optimal path is recovered by walking back from G, at each node following the predecessor that achieved the minimum]

Longest Common Subsequence
Given two sequences x[1..m] and y[1..n], find a longest subsequence common to them both ("a" longest, not "the" longest: it need not be unique).
Example: x = A B C B D A B, y = B D C A B A; BCBA = LCS(x, y)
(LCS(x, y) is functional notation, but not a function)

Optimal substructure
Notice that the LCS problem has optimal substructure: parts of the final solution are solutions of subproblems.
If z = LCS(x, y), then any prefix of z is an LCS of a prefix of x and a prefix of y.
Subproblems: "find the LCS of pairs of prefixes of x and y"

Finding the length of an LCS
Let c[i, j] be the length of LCS(x[1..i], y[1..j]) => c[m, n] is the length of LCS(x, y)
If x[m] = y[n]: c[m, n] = c[m–1, n–1] + 1
If x[m] ≠ y[n]: c[m, n] = max { c[m–1, n], c[m, n–1] }

DP Algorithm
c[i, j] = c[i–1, j–1] + 1               if x[i] = y[j]
          max{ c[i–1, j], c[i, j–1] }   otherwise
Key: find out the correct order to solve the sub-problems
Total number of sub-problems: m × n
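
Filling the table in Python, row by row (0-indexed strings; c[i][j] covers the prefixes x[:i] and y[:j]):

def lcs_length(x, y):
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]    # row 0 and column 0 stay 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c

print(lcs_length("ABCB", "BDCAB")[4][5])   # 3, as in the example below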

LCS Example: X = ABCB, Y = BDCAB
m = |X| = 4, n = |Y| = 5; allocate array c[0..4, 0..5]
Initialize: c[i, 0] = 0 for i = 1..m and c[0, j] = 0 for j = 1..n
Fill row by row with:
if ( X[i] == Y[j] ) c[i, j] = c[i–1, j–1] + 1
else c[i, j] = max( c[i–1, j], c[i, j–1] )
Final table:
     j   0  1  2  3  4  5
 i          B  D  C  A  B
 0       0  0  0  0  0  0
 1  A    0  0  0  0  1  1
 2  B    0  1  1  1  1  2
 3  C    0  1  1  2  2  2
 4  B    0  1  1  2  2  3
Length of LCS(X, Y) = c[4, 5] = 3

LCS Algorithm Running Time
The LCS algorithm calculates the value of each entry of the array c[m, n]
So what is the running time? O(m·n), since each c[i, j] is calculated in constant time and there are m·n elements in the array

How to find the actual LCS
The algorithm just found the length of the LCS, but not the LCS itself. How to find the actual LCS?
For each c[i, j] we know how it was acquired: a match happens only when the first equation is taken.
So we can start from c[m, n] and go backwards, remembering x[i] whenever c[i, j] = c[i–1, j–1] + 1.
For example, here c[i, j] = c[i–1, j–1] + 1 = 2 + 1 = 3, so x[i] is part of the LCS.
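
A traceback sketch in Python, reusing lcs_length from above:

def lcs_traceback(x, y):
    """Recover one LCS by walking back from c[m][n]."""
    c = lcs_length(x, y)
    i, j, out = len(x), len(y), []
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:          # match: the value came diagonally
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:  # otherwise move toward the larger value
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs_traceback("ABCB", "BDCAB"))     # BCB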

Finding LCS
[table: the c table above with the traceback path highlighted; each diagonal step taken on a match contributes one character]
Time for trace back: O(m + n).

Finding LCS (2)
LCS (reversed order): B C B
LCS (straight order): B C B
(this string turned out to be a palindrome, so both orders read the same)

LCS as a longest path problem
[figure: the LCS table drawn as a grid graph; a diagonal edge of weight 1 wherever X[i] = Y[j], horizontal and vertical edges of weight 0]

LCS as a longest path problem (2)
[figure: the same grid with the longest-path value at each node; the value at the bottom-right corner is 3]

A more general problem
Align two strings, scoring: match = m, mismatch = –s, insertion/deletion = –d
Aligning ABBC with CABC: LCS = 3 (ABC), but the best alignment depends on the scores:
ABBC      -ABBC
CABC      CAB-C
Score = 2m – 2s      Score = 3m – 2d

Alignment as a longest path problem
[figure: alignment grid for ABBC vs CABC; diagonal edges score m (match) or –s (mismatch), horizontal and vertical edges score –d]

Recurrence
Let F(i, j) be the best alignment score between X[1..i] and Y[1..j]; then F(m, n) is the best alignment score between X and Y.
F(i, j) = max { F(i–1, j–1) + σ(i, j),   match/mismatch
                F(i–1, j) – d,           insertion on Y
                F(i, j–1) – d }          insertion on X
where σ(i, j) = m if X[i] = Y[j], and –s otherwise.
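
The recurrence as a runnable Python sketch (the scores m, s, d are illustrative parameters; row 0 and column 0 are initialized to pure-gap scores):

def align_score(x, y, m=1, s=1, d=1):
    F = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(1, len(x) + 1):
        F[i][0] = -d * i                             # x[:i] against all gaps
    for j in range(1, len(y) + 1):
        F[0][j] = -d * j
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            delta = m if x[i - 1] == y[j - 1] else -s
            F[i][j] = max(F[i - 1][j - 1] + delta,   # match / mismatch
                          F[i - 1][j] - d,           # gap in y
                          F[i][j - 1] - d)           # gap in x
    return F[len(x)][len(y)]

print(align_score("ABBC", "CABC"))   # 1 = 3m - 2d with these weights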

Restaurant location problem
You work in the fast food business; your company plans to open up new restaurants in Texas along I-35
Towns along the highway are called t1, t2, …, tn
A restaurant at ti has estimated annual profit pi
No two restaurants can be located within 10 miles of each other due to some regulation
Your boss wants to maximize the total profit; you want a big bonus

A DP algorithm Suppose you’ve already found the optimal solution It will either include tn or not include tn Case 1: tn not included in optimal solution Best solution same as best solution for t1 , …, tn-1 Case 2: tn included in optimal solution Best solution is pn + best solution for t1 , …, tj , where j < n is the largest index so that dist(tj, tn) ≥ 10

Recurrence formulation
Let S(i) be the total profit of the optimal solution when the first i towns are considered (not necessarily selected); S(n) is the optimal solution to the complete problem.
S(i) = max { S(i–1),  S(j) + pi }, where j < i is the largest index with dist(tj, ti) ≥ 10
Number of sub-problems: n. Boundary condition: S(0) = 0.
Dependency: S(i) depends on S(i–1) and one earlier entry S(j).
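
A Python sketch of this recurrence (the town positions in the demo are reconstructed from the example distances on the next slide):

def max_profit(pos, profit, min_dist=10):
    """pos[i] = mile marker of town i (sorted ascending); S[i] = best total
    profit when the first i towns are considered."""
    n = len(pos)
    S = [0] * (n + 1)                 # boundary: S[0] = 0
    for i in range(1, n + 1):
        j = i - 1                     # largest j with dist(t_j, t_i) >= min_dist
        while j > 0 and pos[i - 1] - pos[j - 1] < min_dist:
            j -= 1
        S[i] = max(S[i - 1], S[j] + profit[i - 1])
    return S[n]

print(max_profit([0, 5, 7, 9, 15, 21, 24, 30, 40, 47],
                 [6, 7, 9, 8, 3, 3, 2, 4, 12, 5]))   # 26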

Example
Distance (mi):  gaps between consecutive towns along a 100-mile stretch: 5 2 2 6 6 3 6 10 7
Profit (100k):  6 7 9 8 3 3 2 4 12 5
S(i):           6 7 9 9 10 12 12 14 26 26
Optimal: 26
Natural greedy 1: 6 + 3 + 4 + 12 = 25
Natural greedy 2: 12 + 9 + 3 = 24

Complexity
Time: Θ(nk), where k is the maximum number of towns that are within 10 miles to the left of any town
In the worst case, Θ(n²); can be improved to Θ(n) with some preprocessing tricks
Memory: Θ(n)

Knapsack problem Each item has a value and a weight Objective: maximize value Constraint: knapsack has a weight limitation Three versions: 0-1 knapsack problem: take each item or leave it Fractional knapsack problem: items are divisible Unbounded knapsack problem: unlimited supplies of each item. Which one is easiest to solve? We study the 0-1 problem today.

Formal definition (0-1 problem)
The knapsack has weight limit W
Items are labeled 1, 2, …, n (arbitrarily)
Items have weights w1, w2, …, wn; assume all weights are integers, and for practical reasons only consider wi < W
Items have values v1, v2, …, vn
Objective: find a subset of items S such that Σ_{i∈S} wi ≤ W and Σ_{i∈S} vi is maximal among all such (feasible) subsets

A DP algorithm
Suppose you have found the optimal solution S; either item n is included in it or not:
Case 1: item n is included → combine it with an optimal solution using items 1, 2, …, n–1 with weight limit W – wn
Case 2: item n is not included → take an optimal solution using items 1, 2, …, n–1 with weight limit W

Recursive formulation
Let V[i, w] be the optimal total value when items 1, 2, …, i are considered for a knapsack with weight limit w => V[n, W] is the optimal solution
V[i, w] = max { V[i–1, w–wi] + vi,   item i is taken
                V[i–1, w] }          item i is not taken
V[i, w] = V[i–1, w] if wi > w (item i cannot be taken)
Boundary conditions: V[i, 0] = 0, V[0, w] = 0
Number of sub-problems: n × W
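
A bottom-up Python sketch of this table, checked against the example that follows:

def knapsack(weights, values, W):
    """V[i][w] = best value using items 1..i with weight limit w."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]   # V[0][w] = V[i][0] = 0
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for w in range(W + 1):
            V[i][w] = V[i - 1][w]                             # item i not taken
            if wi <= w:
                V[i][w] = max(V[i][w], V[i - 1][w - wi] + vi) # item i taken
    return V[n][W]

print(knapsack([2, 4, 3, 5, 2, 6], [2, 3, 3, 6, 4, 9], 10))   # 15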

Example
n = 6 (# of items), W = 10 (weight limit)
Items (weight, value): (2, 2), (4, 3), (3, 3), (5, 6), (2, 4), (6, 9)

Filling the table with
V[i, w] = max { V[i–1, w–wi] + vi (item i taken), V[i–1, w] (item i not taken) },
and V[i, w] = V[i–1, w] if wi > w:
                     w:  1  2  3  4  5  6  7  8  9  10
i = 1 (w1 = 2, v1 = 2):  0  2  2  2  2  2  2  2  2  2
i = 2 (w2 = 4, v2 = 3):  0  2  2  3  3  5  5  5  5  5
i = 3 (w3 = 3, v3 = 3):  0  2  3  3  5  5  6  6  8  8
i = 4 (w4 = 5, v4 = 6):  0  2  3  3  6  6  8  9  9  11
i = 5 (w5 = 2, v5 = 4):  0  4  4  6  7  7  10 10 12 13
i = 6 (w6 = 6, v6 = 9):  0  4  4  6  7  9  13 13 15 15
Optimal value: V[6, 10] = 15
Items taken: 6, 5, 1; weight: 6 + 2 + 2 = 10; value: 9 + 4 + 2 = 15

Time complexity
Θ(nW) — polynomial? Pseudo-polynomial: works well only if W is small
Consider the items (weight, value): (10, 5), (15, 6), (20, 5), (18, 6) with weight limit 35; optimal solution: items 2 and 4 (value = 12)
Brute-force iteration: 2⁴ = 16 subsets; dynamic programming: fill up a 4 × 35 = 140-entry table
What's the problem? Many entries are unused: no weight combination reaches them; top-down (memoized) evaluation may be better

Longest increasing subsequence
Given a sequence of numbers: 1 2 5 3 2 9 4 9 3 5 6 8
Find a longest subsequence that is non-decreasing, e.g. 1 2 5 9
It has to be a subsequence of the original list, and it has to be in sorted order => it is a common subsequence of the original list and the sorted list:
Original: 1 2 5 3 2 9 4 9 3 5 6 8
Sorted:   1 2 2 3 3 4 5 5 6 8 9 9
LCS:      1 2 3 4 5 6 8
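
A sketch of this reduction in Python (quadratic: it computes the LCS length of seq against sorted(seq) directly):

def lis_length(seq):
    s, n = sorted(seq), len(seq)
    c = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if seq[i - 1] == s[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[n][n]

print(lis_length([1, 2, 5, 3, 2, 9, 4, 9, 3, 5, 6, 8]))   # 7: 1 2 3 4 5 6 8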

Events scheduling problem
A list of events to schedule (or shows to see): ei has start time si and finishing time fi, indexed such that fi < fj if i < j; each event has a value vi
Schedule to attain the largest total value; you can attend only one event at any time
Very similar to the restaurant location problem: sort events according to their finish time, then consider whether the last event is included or not

Events scheduling problem
[figure: events e1 … e9 drawn on a time line, ordered by finish time]
V(i) is the optimal value that can be achieved when the first i events are considered:
V(n) = max { V(n–1),                          en not selected
             V(j) + vn, j < n and fj < sn     en selected }
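
A Python sketch of the recurrence (the linear scan for j keeps it simple; the events in the demo are made up):

def max_schedule_value(events):
    """events = (start, finish, value) triples; V[i] = best value from the
    first i events after sorting by finish time."""
    events = sorted(events, key=lambda e: e[1])
    V = [0] * (len(events) + 1)
    for i in range(1, len(events) + 1):
        s, f, v = events[i - 1]
        j = i - 1
        while j > 0 and events[j - 1][1] >= s:   # need f_j < s_i
            j -= 1
        V[i] = max(V[i - 1], V[j] + v)           # skip e_i, or take it
    return V[len(events)]

print(max_schedule_value([(0, 3, 5), (2, 6, 6), (5, 8, 5), (7, 9, 3)]))
# 10: take (0, 3, 5) and (5, 8, 5)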

Coin change problem
Given some denominations of coins (e.g., 2, 5, 7, 10), decide if it is possible to make change for a value (e.g., 13), or minimize the number of coins
Version 1: unlimited number of coins of each denomination — an unbounded knapsack problem
Version 2: use each denomination at most once — a 0-1 knapsack problem
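
A sketch for version 1 (unbounded, minimizing the number of coins; the denominations and target follow the slide's example):

def min_coins(denoms, target):
    INF = float('inf')
    best = [0] + [INF] * target           # best[v] = fewest coins making v
    for v in range(1, target + 1):
        for c in denoms:
            if c <= v and best[v - c] + 1 < best[v]:
                best[v] = best[v - c] + 1
    return best[target] if best[target] < INF else None

print(min_coins([2, 5, 7, 10], 13))   # 4 coins, e.g. 7 + 2 + 2 + 2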

Using DP algorithms to solve new problems
Directly map a new problem to a known problem, modify an algorithm for a similar task, or design your own:
Think about the problem recursively: the optimal solution to a larger problem can be computed from the optimal solutions of one or more subproblems
These sub-problems can be solved in a certain manageable order
Works nicely for naturally ordered data such as strings, trees, and some special graphs; trickier for general graphs
The textbook has some very good exercises.

Good luck with your exam!