CS 3343: Analysis of Algorithms. Review for final.

Final Exam
Closed-book exam. Coverage: the whole semester.
Cheat sheet: you are allowed one letter-size sheet, both sides.
Monday, May 4, 9:45–12:15 pm.
Basic calculator (no graphing) allowed. No cell phones!

Final Exam: Study Tips
Study tips:
–Study each lecture
–Study the homework and homework solutions
–Study the midterm exams
–Re-make your previous cheat sheets

Topics covered (1)
In reverse chronological order:
Graph algorithms
–Representations
–MST (Prim's, Kruskal's)
–Shortest path (Dijkstra's)
–Running time analysis with different implementations
Greedy algorithms
–Unit-profit restaurant location problem
–Fractional knapsack problem
–Prim's and Kruskal's are also examples of greedy algorithms
–How to show that certain greedy choices are optimal

Topics covered (2)
Dynamic programming
–LCS
–Restaurant location problem
–Shortest path problem on a grid
–Other problems
–How to define a recurrence, and use dynamic programming to solve it
Binary heap and priority queue
–Heapify, buildHeap, insert, extractMax, changeKey
–Running time analysis

Topics covered (3)
Order statistics
–Rand-Select
–Worst-case linear-time select
–Running time analysis
Sorting algorithms
–Insertion sort
–Merge sort
–Quick sort
–Heap sort
–Linear-time sorting: counting sort, radix sort
–Stability of sorting algorithms
–Worst-case and expected running time analysis
–Memory requirements of sorting algorithms

Topics covered (4)
Analysis
–Order of growth
–Asymptotic notation, basic definitions
  Limit method
  L'Hôpital's rule
  Stirling's formula
–Best case, worst case, average case
Analyzing non-recursive algorithms
–Arithmetic series
–Geometric series
Analyzing recursive algorithms
–Defining the recurrence
–Solving the recurrence
  Recursion tree (iteration) method
  Substitution method
  Master theorem

Review for final
In chronological order.
Only the more important concepts:
–Very likely to appear in your final
–Not meant to be exclusive

Asymptotic notations
O: Big-Oh
Ω: Big-Omega
Θ: Theta
o: Small-oh
ω: Small-omega
Intuitively: O is like ≤, o is like <, Ω is like ≥, ω is like >, Θ is like =

Big-Oh
Math:
–O(g(n)) = { f(n) : ∃ positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) ∀ n > n0 }
–Or: lim_{n→∞} g(n)/f(n) > 0 (if the limit exists)
Engineering:
–g(n) grows at least as fast as f(n)
–g(n) is an asymptotic upper bound of f(n)
Intuitively it is like f(n) ≤ g(n)

Big-Oh
Claim: f(n) = 3n^2 + 10n + 5 ∈ O(n^2)
Proof:
3n^2 + 10n + 5 ≤ 3n^2 + 10n^2 + 5n^2 when n > 1
              = 18n^2 when n > 1
Therefore, let c = 18 and n0 = 1.
We have f(n) ≤ c·n^2 ∀ n > n0.
By definition, f(n) ∈ O(n^2).

Big-Omega
Math:
–Ω(g(n)) = { f(n) : ∃ positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) ∀ n > n0 }
–Or: lim_{n→∞} f(n)/g(n) > 0 (if the limit exists)
Engineering:
–f(n) grows at least as fast as g(n)
–g(n) is an asymptotic lower bound of f(n)
Intuitively it is like g(n) ≤ f(n)

Big-Omega
Claim: f(n) = n^2 / 10 = Ω(n)
Proof: f(n) = n^2 / 10, g(n) = n
–g(n) = n ≤ n^2 / 10 = f(n) when n > 10
–Therefore, c = 1 and n0 = 10

Theta
Math:
–Θ(g(n)) = { f(n) : ∃ positive constants c1, c2, and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) ∀ n > n0 }
–Or: lim_{n→∞} f(n)/g(n) = c, with 0 < c < ∞
–Or: f(n) = O(g(n)) and f(n) = Ω(g(n))
Engineering:
–f(n) grows in the same order as g(n)
–g(n) is an asymptotically tight bound of f(n)
Intuitively it is like f(n) = g(n)
Θ(1) means constant time.

Theta
Claim: f(n) = 2n^2 + n = Θ(n^2)
Proof:
–We just need to find three constants c1, c2, and n0 such that
–c1·n^2 ≤ 2n^2 + n ≤ c2·n^2 for all n > n0
–A simple solution is c1 = 2, c2 = 3, and n0 = 1

Using limits to compare orders of growth
If lim_{n→∞} f(n)/g(n) = 0, then f(n) ∈ o(g(n)) (and f(n) ∈ O(g(n))).
If lim_{n→∞} f(n)/g(n) = c with 0 < c < ∞, then f(n) ∈ Θ(g(n)).
If lim_{n→∞} f(n)/g(n) = ∞, then f(n) ∈ ω(g(n)) (and f(n) ∈ Ω(g(n))).

Compare 2^n and 3^n
lim_{n→∞} 2^n / 3^n = lim_{n→∞} (2/3)^n = 0
Therefore, 2^n ∈ o(3^n), and 3^n ∈ ω(2^n).

1/20/ L’ Hopital’s rule lim f(n) / g(n) = lim f(n)’ / g(n)’ n→∞ If both lim f(n) and lim g(n) goes to ∞

Compare n^0.5 and log n
lim_{n→∞} n^0.5 / log n = ?
(n^0.5)' = 0.5·n^(-0.5)
(log n)' = 1/n
lim_{n→∞} (0.5·n^(-0.5)) / (1/n) = lim_{n→∞} 0.5·n^0.5 = ∞
Therefore, log n ∈ o(n^0.5).

Stirling's formula
n! ≈ √(2πn) · (n/e)^n, where √(2π) is a constant.

Compare 2^n and n!
By Stirling's formula, n! grows like √(2πn)·(n/e)^n, which dominates 2^n.
Therefore, 2^n = o(n!).

More advanced dominance ranking (chart not reproduced).

General plan for analyzing the time efficiency of a non-recursive algorithm
Decide on a parameter (input size).
Identify the most-executed line (basic operation).
Worst-case = average-case?
T(n) = Σ_i t_i
T(n) = Θ(f(n))

Analysis of insertion sort
Statement                                cost   times
InsertionSort(A, n) {
  for j = 2 to n {                       c1     n
    key = A[j]                           c2     n-1
    i = j - 1                            c3     n-1
    while (i > 0) and (A[i] > key) {     c4     S
      A[i+1] = A[i]                      c5     S-(n-1)
      i = i - 1                          c6     S-(n-1)
    }
    A[i+1] = key                         c7     n-1
  }
}
Here S is the total number of times the while-loop test executes.
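
As a concrete companion to the cost table above, here is a minimal runnable Python version of the same pseudocode (0-based indexing instead of the slide's 1-based arrays):

def insertion_sort(A):
    # Sort A in place; mirrors the slide's pseudocode with 0-based indexing.
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        # Shift elements of the sorted prefix that are larger than key.
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key
    return A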

Best case
Array already sorted.
The inner loop stops when A[i] ≤ key, or i = 0; here it stops after one comparison per insertion, so T(n) = Θ(n).

Worst case
Array originally in reverse order.
The inner loop stops when A[i] ≤ key; here it always runs down to i = 0, so T(n) = Θ(n^2).

Average case
Array in random order.
The inner loop stops when A[i] ≤ key; on average it scans about half of the sorted prefix, so T(n) = Θ(n^2) in expectation.

Find the order of growth for sums
How to find out the actual order of growth?
–Remember some formulas
–Learn how to guess and prove

Arithmetic series
An arithmetic series is a sequence of numbers such that the difference between any two successive members of the sequence is a constant.
e.g.: 1, 2, 3, 4, 5 or 10, 12, 14, 16, 18, 20
In general:
Recursive definition: a_n = a_{n-1} + d
Closed form (explicit formula): a_n = a_1 + (n-1)·d

Sum of arithmetic series
If a_1, a_2, …, a_n is an arithmetic series, then
a_1 + a_2 + … + a_n = n·(a_1 + a_n)/2.

Geometric series
A geometric series is a sequence of numbers such that the ratio between any two successive members of the sequence is a constant.
e.g.: 1, 2, 4, 8, 16, 32 or 10, 20, 40, 80, 160 or 1, ½, ¼, 1/8, 1/16
In general:
Recursive definition: a_n = r·a_{n-1}
Closed form (explicit formula): a_n = a_1·r^(n-1)

Sum of geometric series
For r ≠ 1: Σ_{i=0}^{n} r^i = (r^(n+1) − 1)/(r − 1)
–If r < 1: the sum is at most 1/(1−r) = Θ(1)
–If r > 1: the sum is Θ(r^n)
–If r = 1: the sum is n + 1 = Θ(n)

Important formulas (table not reproduced).

Sum manipulation rules (rules and example not reproduced).

Recursive algorithms
General idea (divide and conquer):
–Divide a large problem into smaller ones
  By a constant ratio
  By a constant or some variable
–Solve each smaller one recursively or explicitly
–Combine the solutions of smaller ones to form a solution for the original problem

How to analyze the time efficiency of a recursive algorithm?
Express the running time on input of size n as a function of the running time on smaller problems.

Analyzing merge sort
MERGE-SORT A[1..n]                                      T(n)
1. If n = 1, done.                                      Θ(1)
2. Recursively sort A[1..⌈n/2⌉] and A[⌈n/2⌉+1..n].      2T(n/2)
3. "Merge" the 2 sorted lists.                          f(n)
Sloppiness: should be T(⌈n/2⌉) + T(⌊n/2⌋), but it turns out not to matter asymptotically.

Analyzing merge sort
1. Divide: trivial.
2. Conquer: recursively sort 2 subarrays.
3. Combine: merge two sorted subarrays.
T(n) = 2T(n/2) + f(n) + Θ(1)
(2 = # subproblems; n/2 = subproblem size; f(n) = work dividing and combining)
1. What is the time for the base case? Constant.
2. What is f(n)?
3. What is the growth order of T(n)?

Solving recurrences
The running time of many algorithms can be expressed in one of two recursive forms seen in this course: divide-and-conquer, e.g. T(n) = aT(n/b) + f(n), or subtract-and-conquer, e.g. T(n) = T(n−1) + f(n).
Challenge: how to solve the recurrence to get a closed form, e.g. T(n) = Θ(n^2) or T(n) = Θ(n log n), or at least some bound such as T(n) = O(n^2)?

Solving recurrences
1. Recursion tree (iteration) method
–Good for guessing an answer
2. Substitution method
–Generic method, rigid, but may be hard
3. Master method
–Easy to learn, useful in limited cases only
–Some tricks may help in other cases

The master method
The master method applies to recurrences of the form T(n) = aT(n/b) + f(n), where a ≥ 1, b > 1, and f is asymptotically positive.
1. Divide the problem into a subproblems, each of size n/b.
2. Conquer the subproblems by solving them recursively.
3. Combine subproblem solutions.
Divide + combine takes f(n) time.

Master theorem
T(n) = aT(n/b) + f(n)
Key: compare f(n) with n^{log_b a}.
CASE 1: f(n) = O(n^{log_b a − ε}) ⇒ T(n) = Θ(n^{log_b a}).
CASE 2: f(n) = Θ(n^{log_b a}) ⇒ T(n) = Θ(n^{log_b a} · log n).
CASE 3: f(n) = Ω(n^{log_b a + ε}) and a·f(n/b) ≤ c·f(n) ⇒ T(n) = Θ(f(n)).
e.g., merge sort: T(n) = 2T(n/2) + Θ(n); a = 2, b = 2 ⇒ n^{log_b a} = n ⇒ CASE 2 ⇒ T(n) = Θ(n log n).

Case 1
Compare f(n) with n^{log_b a}: f(n) = O(n^{log_b a − ε}) for some constant ε > 0, i.e., f(n) grows polynomially slower than n^{log_b a} (by an n^ε factor).
Solution: T(n) = Θ(n^{log_b a}), i.e., aT(n/b) dominates.
e.g.: T(n) = 2T(n/2) + 1; T(n) = 4T(n/2) + n; T(n) = 2T(n/2) + log n; T(n) = 8T(n/2) + n^2

Case 3
Compare f(n) with n^{log_b a}: f(n) = Ω(n^{log_b a + ε}) for some constant ε > 0, i.e., f(n) grows polynomially faster than n^{log_b a} (by an n^ε factor).
Solution: T(n) = Θ(f(n)), i.e., f(n) dominates.
e.g.: T(n) = T(n/2) + n; T(n) = 2T(n/2) + n^2; T(n) = 4T(n/2) + n^3; T(n) = 8T(n/2) + n^4

Case 2
Compare f(n) with n^{log_b a}: f(n) = Θ(n^{log_b a}), i.e., f(n) and n^{log_b a} grow at a similar rate.
Solution: T(n) = Θ(n^{log_b a} · log n)
e.g.: T(n) = T(n/2) + 1; T(n) = 2T(n/2) + n; T(n) = 4T(n/2) + n^2; T(n) = 8T(n/2) + n^3
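
To make the three cases concrete, here is a small Python sketch (a hypothetical helper, not part of the lecture) that classifies T(n) = a·T(n/b) + Θ(n^d) for a polynomial driving function; for pure powers n^d the Case 3 regularity condition holds automatically:

import math

def master_case(a, b, d):
    # Classify T(n) = a*T(n/b) + Theta(n^d) by comparing d with log_b(a).
    crit = math.log(a, b)
    if d < crit:
        return "Case 1: T(n) = Theta(n^%.2f)" % crit
    if d == crit:
        return "Case 2: T(n) = Theta(n^%g log n)" % d
    return "Case 3: T(n) = Theta(n^%g)" % d  # regularity holds for n^d

print(master_case(2, 2, 1))  # merge sort -> Case 2: Theta(n log n)
print(master_case(4, 2, 1))  # -> Case 1: Theta(n^2)
print(master_case(1, 2, 1))  # T(n) = T(n/2) + n -> Case 3: Theta(n)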

Recursion tree
Solve T(n) = 2T(n/2) + dn, where d > 0 is constant.
Expand level by level: the root costs dn; its two children cost dn/2 each; the four grandchildren cost dn/4 each; …; the leaves cost Θ(1).
Each level sums to dn, the height is h = log n, and the #leaves = n, contributing Θ(n).
Total: Θ(n log n). (Per-level expansion frames not reproduced.)

Substitution method
The most general method to solve a recurrence (prove O and Ω separately):
1. Guess the form of the solution (e.g. using recursion trees, or expansion).
2. Verify by induction (inductive step).

Proof by substitution
Recurrence: T(n) = 2T(n/2) + n.
Guess: T(n) = O(n log n) (e.g. by the recursion tree method).
To prove, we have to show T(n) ≤ c·n·log n for some c > 0 and for all n > n0.
Proof by induction: assume it is true for T(n/2); prove that it is also true for T(n). This means:
Fact: T(n) = 2T(n/2) + n
Assumption: T(n/2) ≤ (cn/2)·log(n/2)
Need to prove: T(n) ≤ c·n·log n

Proof
Fact: T(n) = 2T(n/2) + n
Assumption: T(n/2) ≤ (cn/2)·log(n/2)
Need to prove: T(n) ≤ c·n·log n
Proof: substitute T(n/2) into the recurrence:
T(n) = 2T(n/2) + n ≤ c·n·log(n/2) + n = c·n·log n − c·n + n ≤ c·n·log n (if we choose c ≥ 1).

Proof by substitution
Recurrence: T(n) = 2T(n/2) + n.
Guess: T(n) = Ω(n log n).
To prove, we have to show T(n) ≥ c·n·log n for some c > 0 and for all n > n0.
Proof by induction: assume it is true for T(n/2); prove that it is also true for T(n). This means:
Fact: T(n) = 2T(n/2) + n
Assumption: T(n/2) ≥ (cn/2)·log(n/2)
Need to prove: T(n) ≥ c·n·log n

Proof
Fact: T(n) = 2T(n/2) + n
Assumption: T(n/2) ≥ (cn/2)·log(n/2)
Need to prove: T(n) ≥ c·n·log n
Proof: substitute T(n/2) into the recurrence:
T(n) = 2T(n/2) + n ≥ c·n·log(n/2) + n = c·n·log n − c·n + n ≥ c·n·log n (if we choose c ≤ 1).

Quick sort
Quicksort an n-element array:
1. Divide: partition the array into two subarrays around a pivot x such that elements in the lower subarray ≤ x ≤ elements in the upper subarray.
2. Conquer: recursively sort the two subarrays.
3. Combine: trivial.
Key: linear-time partitioning subroutine.

Partition
All the action takes place in the partition() function:
–Rearranges the subarray A[p..r] in place
–End result: two subarrays, with all values in the first subarray ≤ all values in the second
–Returns the index q of the "pivot" element separating the two subarrays

Partition code
Partition(A, p, r)
  x = A[p]        // pivot is the first element
  i = p
  j = r + 1
  while (TRUE)
    repeat i++ until A[i] > x or i >= j
    repeat j-- until A[j] < x or j < i
    if (i < j)
      Swap(A[i], A[j])
    else
      break
  swap(A[p], A[j])
  return j
What is the running time of partition()? partition() runs in O(n) time.
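
A runnable Python sketch of quicksort with a Hoare-style partition in the spirit of the slide's code (0-based indexing; this variant returns j such that A[p..j] ≤ pivot ≤ A[j+1..r], so it is a sketch rather than a line-for-line translation):

def hoare_partition(A, p, r):
    # Partition A[p..r] around pivot x = A[p].
    x = A[p]
    i, j = p - 1, r + 1
    while True:
        j -= 1
        while A[j] > x:       # scan left for an element <= pivot
            j -= 1
        i += 1
        while A[i] < x:       # scan right for an element >= pivot
            i += 1
        if i < j:
            A[i], A[j] = A[j], A[i]
        else:
            return j

def quicksort(A, p, r):
    if p < r:
        q = hoare_partition(A, p, r)
        quicksort(A, p, q)
        quicksort(A, q + 1, r)

Usage: quicksort(A, 0, len(A) - 1) sorts A in place.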

Partition example (pivot x = 6): i and j move toward each other, swapping out-of-place elements until they cross (trace frames not reproduced).

Quicksort runtimes
Best-case runtime: T_best(n) ∈ O(n log n)
Worst-case runtime: T_worst(n) ∈ O(n^2)
Worse than mergesort? Why is it called quicksort then?
Its average runtime: T_avg(n) ∈ O(n log n)
Better yet, the expected runtime of randomized quicksort is O(n log n).

Randomized quicksort
Randomly choose an element as pivot:
–Every time we need to do a partition, throw a die to decide which element to use as the pivot
–Each element has probability 1/n of being selected
Partition(A, p, r)
  d = random()                     // a random number between 0 and 1
  index = p + floor((r-p+1) * d)   // p <= index <= r
  swap(A[p], A[index])
  x = A[p]
  i = p
  j = r + 1
  while (TRUE) { … }

Running time of randomized quicksort
The expected running time is an average over all cases:
T(n) = average of
  T(0) + T(n−1) + dn     if the split is 0 : n−1,
  T(1) + T(n−2) + dn     if the split is 1 : n−2,
  …
  T(n−1) + T(0) + dn     if the split is n−1 : 0.

Heaps
In practice, heaps are usually implemented as arrays.

Heaps
To represent a complete binary tree as an array:
–The root node is A[1]
–Node i is A[i]
–The parent of node i is A[i/2] (note: integer division)
–The left child of node i is A[2i]
–The right child of node i is A[2i + 1]

The heap property
Heaps also satisfy the heap property:
A[Parent(i)] ≥ A[i] for all nodes i > 1
–In other words, the value of a node is at most the value of its parent
–The value of a node should be greater than or equal to both its left and right children, and all of its descendants
–Where is the largest element in a heap stored? (At the root, A[1].)

Heap operations: Heapify()
Heapify(A, i)
{ // precondition: subtrees rooted at l and r are heaps
  l = Left(i)
  r = Right(i)
  if (l <= heap_size(A) and A[l] > A[i])
    largest = l
  else
    largest = i
  if (r <= heap_size(A) and A[r] > A[largest])
    largest = r
  if (largest != i) {
    Swap(A, i, largest)
    Heapify(A, largest)
  }
} // postcondition: subtree rooted at i is a heap
Among A[l], A[i], A[r], which one is largest? If there is a violation, fix it.
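
A minimal runnable Python version of Heapify (together with the BuildHeap used a few slides later), assuming 0-based array indexing rather than the slides' 1-based indexing:

def max_heapify(A, i, heap_size):
    # Restore the max-heap property at index i, assuming the subtrees
    # rooted at the children of i are already max-heaps.
    l, r = 2 * i + 1, 2 * i + 2
    largest = i
    if l < heap_size and A[l] > A[largest]:
        largest = l
    if r < heap_size and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, largest, heap_size)

def build_max_heap(A):
    # Heapify every internal node, bottom-up.
    for i in range(len(A) // 2 - 1, -1, -1):
        max_heapify(A, i, len(A))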

Heapify() example
The key 4 violates the heap property at node i = 2; Heapify swaps it with its larger child 14, then with 8, after which the subtree rooted at i is a heap (array frames not reproduced).

Analyzing Heapify(): formal
T(n) ≤ T(2n/3) + Θ(1)
By case 2 of the master theorem, T(n) = O(lg n).
Thus, Heapify() takes logarithmic time.

Heap operations: BuildHeap()
We can build a heap in a bottom-up manner by running Heapify() on successive subarrays:
–Fact: for an array of length n, all elements in the range A[⌊n/2⌋+1 .. n] are heaps (why?)
–So: walk backwards through the array from ⌊n/2⌋ to 1, calling Heapify() on each node. The order of processing guarantees that the children of node i are heaps when i is processed.

BuildHeap()
// given an unsorted array A, make A a heap
BuildHeap(A)
{
  heap_size(A) = length(A)
  for (i = ⌊length[A]/2⌋ downto 1)
    Heapify(A, i)
}

BuildHeap() example
Work through the example A = {4, 1, 3, 2, 16, 9, 10, 14, 8, 7}.

Analyzing BuildHeap(): tight
To Heapify() a subtree takes O(h) time, where h is the height of the subtree:
–h = O(lg m), m = # nodes in the subtree
–The height of most subtrees is small
Fact: an n-element heap has at most ⌈n/2^(h+1)⌉ nodes of height h.
CLR 7.3 uses this fact to prove that BuildHeap() takes O(n) time.

Heapsort example
Work through the example A = {4, 1, 3, 2, 16, 9, 10, 14, 8, 7}:
–First: build a heap
–Swap the last and first elements
–The last element is now sorted
–Restore the heap on the remaining unsorted elements (Heapify)
–Repeat: swap the new last and first, restore the heap, until one element remains (array frames not reproduced)

Analyzing heapsort
The call to BuildHeap() takes O(n) time.
Each of the n − 1 calls to Heapify() takes O(lg n) time.
Thus the total time taken by HeapSort() = O(n) + (n − 1)·O(lg n) = O(n) + O(n lg n) = O(n lg n).

HeapExtractMax example
Swap the first and last elements, remove the last (the old maximum), then Heapify from the root (array frames not reproduced).

HeapChangeKey example
Increase a key, then bubble it up toward the root, swapping with its parent while it is larger (array frames not reproduced).

HeapInsert example
HeapInsert(A, 17): append −∞, which keeps the array a valid heap, then call changeKey to raise it to 17 (array frames not reproduced).

Heap operation running times
Heapify: Θ(log n)
BuildHeap: Θ(n)
HeapSort: Θ(n log n)
HeapMaximum: Θ(1)
HeapExtractMax: Θ(log n)
HeapChangeKey: Θ(log n)
HeapInsert: Θ(log n)

Counting sort
1. // Initialize
for i ← 1 to k
  do C[i] ← 0
2. // Count
for j ← 1 to n
  do C[A[j]] ← C[A[j]] + 1        ⊳ C[i] = |{key = i}|
3. // Compute running sum
for i ← 2 to k
  do C[i] ← C[i] + C[i−1]         ⊳ C[i] = |{key ≤ i}|
4. // Re-arrange
for j ← n downto 1
  do B[C[A[j]]] ← A[j]
     C[A[j]] ← C[A[j]] − 1
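
The same four loops as a runnable Python sketch, assuming keys are integers in 1..k:

def counting_sort(A, k):
    # Stable counting sort of A, whose keys are integers in 1..k.
    n = len(A)
    C = [0] * (k + 1)
    for x in A:                        # count each key
        C[x] += 1
    for i in range(2, k + 1):          # running sum: C[i] = #{key <= i}
        C[i] += C[i - 1]
    B = [0] * n
    for j in range(n - 1, -1, -1):     # scan right-to-left for stability
        B[C[A[j]] - 1] = A[j]
        C[A[j]] -= 1
    return B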

Counting sort example: after loop 3 (running sum), C'[i] = |{key ≤ i}| (arrays A, B, C, C' not reproduced).

Loop 4: re-arrange. Scanning j from n down to 1, each A[j] is placed at B[C[A[j]]] and C[A[j]] is decremented (arrays not reproduced).

Analysis
Loop 1 (initialize): Θ(k)
Loop 2 (count): Θ(n)
Loop 3 (running sum): Θ(k)
Loop 4 (re-arrange): Θ(n)
Total: Θ(n + k)

Stable sorting
Counting sort is a stable sort: it preserves the input order among equal elements.
Why is this important? What other algorithms have this property?

Radix sort
Similar to sorting address books.
Treat each digit as a key.
Start from the least significant digit.

Time complexity
Sort each of the d digits by counting sort.
Total cost: d·Θ(n + k)
–k = 10
–Total cost: Θ(dn)
Partition the d digits into groups of 3:
–Total cost: Θ((n + 10^3)·d/3)
We work with binaries rather than decimals:
–Partition a binary number into groups of r bits
–Total cost: Θ((n + 2^r)·d/r)
–Choose r = log n
–Total cost: Θ(dn / log n)
–Compare with dn log n
Catch: faster than quicksort only when n is very large.

Randomized selection algorithm
RAND-SELECT(A, p, q, i)        ⊳ i-th smallest of A[p..q]
  if p = q and i > 1 then error!
  r ← RAND-PARTITION(A, p, q)
  k ← r − p + 1                ⊳ k = rank(A[r])
  if i = k then return A[r]
  if i < k then return RAND-SELECT(A, p, r − 1, i)
  else return RAND-SELECT(A, r + 1, q, i − k)
After partitioning: A[p..r−1] ≤ A[r] ≤ A[r+1..q].
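
A runnable Python sketch of RAND-SELECT; for brevity it uses a Lomuto-style partition with a random pivot rather than the textbook's RAND-PARTITION, but the recursion is the same:

import random

def rand_partition(A, p, q):
    # Move a random element to A[q], then Lomuto-partition around it.
    idx = random.randint(p, q)
    A[idx], A[q] = A[q], A[idx]
    x = A[q]
    i = p - 1
    for j in range(p, q):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[q] = A[q], A[i + 1]
    return i + 1

def rand_select(A, p, q, i):
    # Return the i-th smallest element (1-based rank) of A[p..q].
    if p == q:
        return A[p]
    r = rand_partition(A, p, q)
    k = r - p + 1              # rank of the pivot within A[p..q]
    if i == k:
        return A[r]
    if i < k:
        return rand_select(A, p, r - 1, i)
    return rand_select(A, r + 1, q, i - k)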

Example
Select the i = 6th smallest element. Partition around the pivot; the pivot lands at rank k = 4, so recursively select the 6 − 4 = 2nd smallest in the upper part (array not reproduced).

Complete example: select the 6th smallest element
i = 6, k = 4 → recurse on the upper part with i = 6 − 4 = 2
k = 3, i = 2 < k → recurse on the lower part with i = 2
k = 2, i = 2 = k → return the pivot
Note: here we always used the first element as the pivot to do the partition (instead of rand-partition).

Intuition for analysis
(All our analyses today assume that all elements are distinct.)
Lucky: T(n) = T(9n/10) + Θ(n) = Θ(n) (CASE 3)
Unlucky: T(n) = T(n − 1) + Θ(n) = Θ(n^2) (arithmetic series); worse than sorting!

Running time of randomized selection
For an upper bound, assume the i-th element always falls in the larger side of the partition.
The expected running time is an average over all cases:
T(n) ≤ T(max(0, n−1)) + n     if the split is 0 : n−1,
T(n) ≤ T(max(1, n−2)) + n     if the split is 1 : n−2,
…
T(n) ≤ T(max(n−1, 0)) + n     if the split is n−1 : 0.

Worst-case linear-time selection
SELECT(i, n):
1. Divide the n elements into groups of 5. Find the median of each 5-element group by rote.
2. Recursively SELECT the median x of the ⌊n/5⌋ group medians to be the pivot.
3. Partition around the pivot x. Let k = rank(x).
4. Same as RAND-SELECT:
   if i = k then return x
   elseif i < k then recursively SELECT the i-th smallest element in the lower part
   else recursively SELECT the (i−k)-th smallest element in the upper part

Developing the recurrence
Steps 1 and 3 take Θ(n); step 2 takes T(n/5); step 4 takes T(7n/10 + 3).
T(n) = T(n/5) + T(7n/10 + 3) + Θ(n)

Solving the recurrence
Assumption (substitution): T(k) ≤ c·k for all k < n.
Then T(n) ≤ c·n/5 + c·(7n/10 + 3) + Θ(n) ≤ c·n, provided c ≥ 20 and n ≥ 60.

Elements of dynamic programming
Optimal substructure
–Optimal solutions to the original problem contain optimal solutions to sub-problems
Overlapping sub-problems
–Some sub-problems appear in many solutions

Two steps to dynamic programming
Formulate the solution as a recurrence relation of solutions to subproblems.
Specify an order to solve the subproblems so you always have what you need.

Optimal subpaths
Claim: if a path start → goal is optimal, any sub-path, start → x, or x → goal, or x → y, where x and y are on the optimal path, is also the shortest.
Proof by contradiction:
–Say the optimal path has length a + b + c, where b is the length of the subpath between x and y. If that subpath is not the shortest, we can replace it with a shorter one of length b' < b, giving a + b' + c < a + b + c ⇒ the optimal path from start to goal is not the shortest ⇒ contradiction!
–Hence, the subpath x → y must be the shortest among all paths from x to y.

Dynamic programming illustration
Shortest path from S to G on a grid, where each step moves to the cell below or to the right:
F(i, j) = min { F(i−1, j) + dist(i−1, j, i, j),  F(i, j−1) + dist(i, j−1, i, j) }
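
A minimal Python sketch of this grid DP, under the assumption that edge weights are folded into a per-cell entry cost (the function name and input format are illustrative):

def grid_shortest_path(cost):
    # cost[i][j]: cost of entering cell (i, j); moves go down or right.
    # F[i][j] = cheapest cost to reach (i, j) from (0, 0).
    m, n = len(cost), len(cost[0])
    F = [[0] * n for _ in range(m)]
    F[0][0] = cost[0][0]
    for i in range(m):
        for j in range(n):
            if i == 0 and j == 0:
                continue
            best = float("inf")
            if i > 0:
                best = min(best, F[i - 1][j])   # came from above
            if j > 0:
                best = min(best, F[i][j - 1])   # came from the left
            F[i][j] = best + cost[i][j]
    return F[m - 1][n - 1]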

Trace back
After filling the table, follow the winning choices backwards from G to S to recover the actual shortest path.

Longest common subsequence
Given two sequences x[1..m] and y[1..n], find a longest subsequence common to them both.
x: A B C B D A B
y: B D C A B A
BCBA = LCS(x, y) ("a" longest common subsequence, not "the": it need not be unique; functional notation, but LCS is not a function)

Optimal substructure
Notice that the LCS problem has optimal substructure: parts of the final solution are solutions of subproblems.
–If z = LCS(x, y), then any prefix of z is an LCS of a prefix of x and a prefix of y.
Subproblems: "find LCS of pairs of prefixes of x and y"

Finding the length of an LCS
Let c[i, j] be the length of LCS(x[1..i], y[1..j]) ⇒ c[m, n] is the length of LCS(x, y).
If x[m] = y[n]: c[m, n] = c[m−1, n−1] + 1
If x[m] ≠ y[n]: c[m, n] = max{ c[m−1, n], c[m, n−1] }

DP algorithm
Key: find the correct order in which to solve the sub-problems.
Total number of sub-problems: m × n.
c[i, j] = c[i−1, j−1] + 1                 if x[i] = y[j],
          max{ c[i−1, j], c[i, j−1] }     otherwise.
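
A runnable Python sketch of this table-filling order, plus the trace-back covered on the following slides (0-based indexing):

def lcs_table(x, y):
    # c[i][j] = length of LCS of x[:i] and y[:j].
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c

def trace_lcs(x, y, c):
    # Walk back from c[m][n], collecting matched characters.
    i, j, out = len(x), len(y), []
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i -= 1
            j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

Usage: c = lcs_table("ABCB", "BDCAB"); c[-1][-1] is 3 and trace_lcs(...) returns "BCB".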

LCS example
X = ABCB, m = |X| = 4; Y = BDCAB, n = |Y| = 5. Allocate array c[0..4, 0..5]; set c[i, 0] = 0 for i = 1..m and c[0, j] = 0 for j = 1..n, then fill row by row with:
if (X_i == Y_j) c[i, j] = c[i−1, j−1] + 1
else c[i, j] = max(c[i−1, j], c[i, j−1])
The finished table (rows i: A, B, C, B; columns j: B, D, C, A, B):
A: 0 0 0 1 1
B: 1 1 1 1 2
C: 1 1 2 2 2
B: 1 1 2 2 3
So the LCS has length c[4, 5] = 3 (per-step frames not reproduced).

LCS algorithm running time
The LCS algorithm calculates the value of each entry of the array c[m, n].
So what is the running time? O(m·n), since each c[i, j] is calculated in constant time, and there are m·n elements in the array.

How to find the actual LCS
The algorithm just found the length of the LCS, but not the LCS itself. How to find the actual LCS?
For each c[i, j] we know how it was acquired: a match happens only when the first equation is taken.
So we can start from c[m, n] and go backwards, remembering x[i] whenever c[i, j] = c[i−1, j−1] + 1.
For example, here c[i, j] = c[i−1, j−1] + 1 = 2 + 1 = 3.

Finding the LCS
Tracing back from c[4, 5] collects the matched characters in reverse. Time for trace back: O(m + n).
LCS (reversed order): B C B. LCS (straight order): B C B (this string turned out to be a palindrome).

LCS as a longest path problem
The LCS table can be viewed as a grid graph on X = ABCB and Y = BDCAB, where an LCS corresponds to a longest path through the grid (figure not reproduced).

Restaurant location problem 1
You work in the fast food business.
Your company plans to open up new restaurants in Texas along I-35.
Towns along the highway are called t1, t2, …, tn.
A restaurant at ti has estimated annual profit pi.
No two restaurants can be located within 10 miles of each other due to some regulation.
Your boss wants to maximize the total profit.
You want a big bonus.

A DP algorithm
Suppose you've already found the optimal solution. It will either include tn or not.
Case 1: tn not included in the optimal solution
–Best solution is the same as the best solution for t1, …, tn−1
Case 2: tn included in the optimal solution
–Best solution is pn + the best solution for t1, …, tj, where j < n is the largest index so that dist(tj, tn) ≥ 10

Recurrence formulation
Let S(i) be the total profit of the optimal solution when the first i towns are considered (not necessarily selected); S(n) is the optimal solution to the complete problem.
S(i) = max { S(i−1),  S(j) + pi }  where j < i is the largest index with dist(tj, ti) ≥ 10.
Number of sub-problems: n. Boundary condition: S(0) = 0.
Dependency: S(i) depends on S(i−1) and S(j).
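
A minimal Python sketch of this recurrence, assuming the towns are given as sorted mile positions (the naive backwards scan for j gives the Θ(nk) time quoted two slides below):

def max_profit(pos, profit, min_dist=10):
    # pos: sorted town positions; profit[i]: profit of a restaurant at town i.
    # S[i] = best total profit considering the first i towns; S[0] = 0.
    n = len(pos)
    S = [0] * (n + 1)
    for i in range(1, n + 1):
        # Largest j < i with dist(t_j, t_i) >= min_dist (j = 0 if none).
        j = i - 1
        while j > 0 and pos[i - 1] - pos[j - 1] < min_dist:
            j -= 1
        S[i] = max(S[i - 1], S[j] + profit[i - 1])
    return S[n]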

Example
Distance/profit table not reproduced. Natural greedy 1: total 25. Natural greedy 2: (total not reproduced).
Applying S(i) = max{ S(i−1), S(j) + pi } with a dummy town and S(0) = 0 gives the optimal total: 26.

Complexity
Time: Θ(nk), where k is the maximum number of towns within 10 miles to the left of any town
–In the worst case, Θ(n^2)
–Can be improved to Θ(n) with some preprocessing tricks
Memory: Θ(n)

Knapsack problem
Each item has a value and a weight.
Objective: maximize value. Constraint: the knapsack has a weight limitation.
Three versions:
–0-1 knapsack problem: take each item or leave it
–Fractional knapsack problem: items are divisible
–Unbounded knapsack problem: unlimited supplies of each item
Which one is easiest to solve? We study the 0-1 problem today.

Formal definition (0-1 problem)
The knapsack has weight limit W.
Items are labeled 1, 2, …, n (arbitrarily).
Items have weights w1, w2, …, wn
–Assume all weights are integers
–For practical reasons, only consider wi < W
Items have values v1, v2, …, vn.
Objective: find a subset of items, S, such that Σ_{i∈S} wi ≤ W and Σ_{i∈S} vi is maximal among all such (feasible) subsets.

A DP algorithm
Suppose you've found the optimal solution S.
Case 1: item n is included
–Find an optimal solution using items 1, 2, …, n−1 with weight limit W − wn
Case 2: item n is not included
–Find an optimal solution using items 1, 2, …, n−1 with weight limit W

Recursive formulation
Let V[i, w] be the optimal total value when items 1, 2, …, i are considered for a knapsack with weight limit w ⇒ V[n, W] is the optimal solution.
V[i, w] = max { V[i−1, w−wi] + vi   (item i is taken),
                V[i−1, w]           (item i not taken) }
        = V[i−1, w]  if wi > w (item i cannot be taken).
Boundary conditions: V[i, 0] = 0, V[0, w] = 0.
Number of sub-problems = n·W.
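
A runnable Python sketch of the V[i, w] table (0-based item lists; returns only the optimal value, not the chosen subset):

def knapsack01(w, v, W):
    # w[i], v[i]: weight and value of item i; W: integer weight limit.
    # V[i][j] = best value using the first i items with weight limit j.
    n = len(w)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            V[i][j] = V[i - 1][j]                 # item i not taken
            if w[i - 1] <= j:                     # item i taken, if it fits
                V[i][j] = max(V[i][j], V[i - 1][j - w[i - 1]] + v[i - 1])
    return V[n][W]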

Example
n = 6 (# of items), W = 10 (weight limit). Items (weight, value): table not reproduced.
Filling the table with V[i, w] = max{ V[i−1, w−wi] + vi, V[i−1, w] } (taking V[i−1, w] whenever wi > w) gives the optimal value 15, achieved by items 6, 5, and 1 with total weight 10 (table frames not reproduced).

Time complexity
Θ(nW). Polynomial?
–Pseudo-polynomial: works well if W is small.
Consider the following items (weight, value): (10, 5), (15, 6), (20, 5), (18, 6), with weight limit 35.
–Optimal solution: items 2 and 4 (value = 12). Iterating over subsets: 2^4 = 16 subsets.
–Dynamic programming: fill up a 4 × 35 = 140-entry table.
What's the problem?
–Many entries are unused: no such weight combination.
–Top-down may be better.

Longest increasing subsequence
Given a sequence of numbers, find a longest subsequence that is non-decreasing.
–It has to be a subsequence of the original list.
–It has to be in sorted order ⇒ it is a subsequence of the sorted list.
So a longest increasing subsequence is an LCS of the original list and its sorted copy (example not reproduced).

Events scheduling problem
A list of events to schedule (or shows to see):
–ei has start time si and finishing time fi
–Indexed such that fi < fj if i < j
Each event has a value vi.
Schedule to make the largest value; you can attend only one event at any time.
Very similar to the restaurant location problem:
–Sort events according to their finish time
–Consider whether the last event is included or not

Events scheduling problem
V(i) is the optimal value that can be achieved when the first i events are considered.
V(n) = max { V(n−1)          (en not selected),
             V(j) + vn       (en selected) }
where j < n is the largest index such that fj < sn.
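
A minimal Python sketch of this recurrence, assuming events are given as (start, finish, value) triples; the linear scan for j could be replaced by binary search for an O(n log n) bound:

def best_schedule(events):
    # events: list of (start, finish, value) triples.
    events = sorted(events, key=lambda e: e[1])   # index by finish time
    n = len(events)
    V = [0] * (n + 1)
    for i in range(1, n + 1):
        s, f, val = events[i - 1]
        # Largest j < i whose event finishes before event i starts.
        j = i - 1
        while j > 0 and events[j - 1][1] >= s:
            j -= 1
        V[i] = max(V[i - 1], V[j] + val)          # skip event i, or take it
    return V[n]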

Coin change problem
Given some denominations of coins (e.g., 2, 5, 7, 10), decide if it is possible to make change for a value (e.g., 13), or minimize the number of coins.
Version 1: unlimited number of coins for each denomination
–Unbounded knapsack problem
Version 2: use each denomination at most once
–0-1 knapsack problem

Use a DP algorithm to solve new problems
Directly map a new problem to a known problem.
Modify an algorithm for a similar task.
Design your own:
–Think about the problem recursively
–The optimal solution to a larger problem can be computed from the optimal solutions of one or more subproblems
–These sub-problems can be solved in a certain manageable order
–Works nicely for naturally ordered data such as strings, trees, and some special graphs
–Trickier for general graphs
The textbook has some very good exercises.

Unit-profit restaurant location problem
Now the objective is to maximize the number of new restaurants (subject to the distance constraint).
–In other words, we assume that each restaurant makes the same profit, no matter where it is opened.

A DP algorithm
Exactly as before, but pi = 1 for all i:
S(i) = max { S(i−1),  S(j) + 1 }  where j < i is the largest index with dist(tj, ti) ≥ 10.

Greedy algorithm for the restaurant location problem
select t1
d = 0
for i = 2 to n
  d = d + dist(t_i, t_{i-1})
  if d >= min_dist
    select t_i
    d = 0

Complexity
Time: Θ(n)
Memory:
–Θ(n) to store the input
–Θ(1) for the greedy selection

Optimal substructure
Claim 1: if A = [m1, m2, …, mk] is the optimal solution to the restaurant location problem for a set of towns [t1, …, tn], where m1 < m2 < … < mk are the indices of the selected towns, then B = [m2, m3, …, mk] is the optimal solution to the sub-problem [tj, …, tn], where tj is the first town at least 10 miles to the right of t_{m1}.
Proof by contradiction: suppose B is not the optimal solution to the sub-problem, which means there is a better solution B' to the sub-problem.
–Then A' = [m1] || B' gives a better solution than A = [m1] || B ⇒ A is not optimal ⇒ contradiction ⇒ B is optimal.

Greedy choice property
Claim 2: for the unit-profit restaurant location problem, there is an optimal solution that chooses t1.
Proof by contradiction: suppose that no optimal solution can be obtained by choosing t1.
–Say the first town chosen by the optimal solution S is ti, i > 1.
–Replacing ti with t1 will not violate the distance constraint, and the total profit remains the same ⇒ the modified schedule S' is also an optimal solution, and it chooses t1.
–Contradiction. Therefore claim 2 is valid.

Fractional knapsack problem
Each item has a value and a weight.
Objective: maximize value. Constraint: the knapsack has a weight limitation.
–0-1 knapsack problem: take each item or leave it
–Fractional knapsack problem: items are divisible
–Unbounded knapsack problem: unlimited supplies of each item
Which one is easiest to solve? We can solve the fractional knapsack problem using a greedy algorithm.

Greedy algorithm for the fractional knapsack problem
Compute the value/weight ratio for each item.
Sort items by their value/weight ratio into decreasing order.
–Call the remaining item with the highest ratio the most valuable item (MVI).
Iteratively:
–If the weight limit cannot be reached by adding the MVI, select the MVI entirely.
–Otherwise select the MVI partially, up to the weight limit.
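
A runnable Python sketch of this greedy procedure, assuming divisible items given as (weight, value) pairs:

def fractional_knapsack(items, W):
    # items: list of (weight, value) pairs; W: weight limit.
    # Greedily take the most valuable item per pound, splitting the last one.
    total = 0.0
    for weight, value in sorted(items, key=lambda it: it[1] / it[0],
                                reverse=True):
        take = min(weight, W)            # whole item, or the remaining room
        total += value * take / weight   # proportional value for a fraction
        W -= take
        if W == 0:
            break
    return total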

Example
Weight limit: 10. Item table of weights (LB), values ($), and $/LB ratios not reproduced.
Greedy by value/weight ratio:
–Take item 5: 2 LB, $4
–Take item 6: 8 LB, $13
–Take 2 LB of item 4: 10 LB, $15.4

Why is the greedy algorithm for the fractional knapsack problem valid?
Claim: the optimal solution must contain the MVI as much as possible (either up to the weight limit or until the MVI is exhausted).
Proof by contradiction: suppose that the optimal solution does not use all available MVI (i.e., there are still w (w < W) pounds of MVI left while we choose other items).
–We can replace w pounds of less valuable items by MVI.
–The total weight is the same, but the value is higher than the "optimal".
–Contradiction.

Graphs
A graph G = (V, E):
–V = set of vertices
–E = set of edges = subset of V × V
–Thus |E| = O(|V|^2)
Example: vertices {1, 2, 3, 4}; edges {(1, 2), (2, 3), (1, 3), (4, 3)}

Graphs: adjacency matrix
How much storage does the adjacency matrix require? A: O(V^2) (example matrix not reproduced).

Graphs: adjacency list
Adjacency list: for each vertex v ∈ V, store a list of vertices adjacent to v.
Example:
–Adj[1] = {2, 3}
–Adj[2] = {3}
–Adj[3] = {}
–Adj[4] = {3}
Variation: can also keep a list of edges coming into each vertex.

Kruskal's algorithm: example
Edges in increasing weight order: c-d: 3, b-f: 5, b-a: 6, f-e: 7, b-d: 8, f-g: 9, d-e: 10, a-f: 12, b-c: 14, e-h: 15.
Consider the edges in this order, adding each one that does not create a cycle, until the tree spans all of a–h (per-step figures not reproduced).

Time complexity
Depends on the implementation. Pseudocode:
sort all edges according to weights      // Θ(m log m) = Θ(m log n)
T = {}; tree(v) = v for all v
for each edge (u, v)                     // m edges
  if tree(u) != tree(v)
    T = T ∪ {(u, v)}
    union(tree(u), tree(v))              // naïve: Θ(n) per edge; Θ(log n) with set union
Overall time complexity: naïve Θ(nm); better implementation Θ(m log n).
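
A runnable Python sketch of Kruskal's algorithm using a union-find structure with path compression in place of the slide's tree(v) sets (function names are illustrative):

def kruskal(n, edges):
    # n: number of vertices (0..n-1); edges: list of (weight, u, v) triples.
    parent = list(range(n))

    def find(x):
        # Follow parent pointers to the set's root, compressing the path.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    T = []
    for w, u, v in sorted(edges):        # increasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                     # u and v are in different trees
            parent[ru] = rv              # union the two trees
            T.append((u, v, w))
    return T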

Prim's algorithm: example
Start from vertex c: ChangeKey sets key[c] = 0, all other keys ∞. Repeatedly ExtractMin the vertex with the smallest key and ChangeKey each still-queued neighbor whose connecting edge is lighter than its current key, until the queue is empty (per-step queue contents not reproduced).

Complete Prim's algorithm
MST-Prim(G, w, r)
  Q = V[G]                              // n vertices
  for each u ∈ Q
    key[u] = ∞
  key[r] = 0
  T = {}
  while (Q not empty)
    u = ExtractMin(Q)
    for each v ∈ Adj[u]
      if (v ∈ Q and w(u,v) < key[v])
        T = T ∪ {(u, v)}
        ChangeKey(v, w(u,v))
How often is ExtractMin() called? Θ(n) times.
How often is ChangeKey() called? Θ(n^2) times? No: Θ(m) times.
Overall running time: Θ(m log n) (log n cost per ChangeKey).

Summary
Kruskal's algorithm:
–Θ(m log n)
–Possibly Θ(m + n log n) with counting sort
Prim's algorithm:
–With a priority queue: Θ(m log n) (assumes the graph is represented by an adjacency list)
–With a distance array: Θ(n^2) (adjacency list or adjacency matrix)
–For sparse graphs the priority queue wins; for dense graphs the distance array may be better.

Dijkstra's algorithm: example
Run on a graph with vertices a–i: distances start at ∞ except the source at 0; each ExtractMin finalizes one vertex and relaxes its outgoing edges, updating the distance table (per-step distance tables not reproduced).

1/20/ Prim’s Algorithm MST-Prim(G, w, r) Q = V[G]; for each u  Q key[u] =  ; key[r] = 0; T = {}; while (Q not empty) u = ExtractMin(Q); for each v  Adj[u] if (v  Q and w(u,v) < key[v]) T = T U (u, v); ChangeKey(v, w(u,v)); Overall running time: Θ(m log n) Cost per ChangeKey

Dijkstra's algorithm
Dijkstra(G, w, r)
  Q = V[G]
  for each u ∈ Q
    key[u] = ∞
  key[r] = 0
  T = {}
  while (Q not empty)
    u = ExtractMin(Q)
    for each v ∈ Adj[u]
      if (v ∈ Q and key[u] + w(u,v) < key[v])
        T = T ∪ {(u, v)}
        ChangeKey(v, key[u] + w(u,v))
The running time of Dijkstra's algorithm is the same as Prim's algorithm: Θ(m log n) (log n cost per ChangeKey).
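
A runnable Python sketch of Dijkstra's algorithm; Python's heapq has no ChangeKey, so stale queue entries are skipped instead (a standard "lazy deletion" substitute):

import heapq

def dijkstra(adj, r):
    # adj: adjacency list {u: [(v, w), ...]}; r: source vertex.
    # Returns a dict of shortest-path distances from r.
    key = {u: float("inf") for u in adj}
    key[r] = 0
    pq = [(0, r)]                        # binary heap as the priority queue
    done = set()
    while pq:
        d, u = heapq.heappop(pq)         # ExtractMin
        if u in done:                    # stale entry: already finalized
            continue
        done.add(u)
        for v, w in adj[u]:
            if v not in done and d + w < key[v]:
                key[v] = d + w
                heapq.heappush(pq, (key[v], v))   # lazy ChangeKey
    return key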

Good luck with your final!