CS 332: Algorithms Amortized Analysis Continued


CS 332: Algorithms
Amortized Analysis Continued
Longest Common Subsequence
Dynamic Programming

Administrivia
Midterm: almost graded
Homework 4 assigned; due Tuesday the 28th (after Thanksgiving break)

Review: MST Algorithms
In a connected, weighted, undirected graph, will the edge with the lowest weight be in the MST? Why or why not?
Yes: if T is an MST of G, A ⊆ T is a subtree of T, and (u,v) is the minimum-weight edge connecting A to V − A, then (u,v) ∈ T
The lowest-weight edge in the graph must therefore be in the MST (take A = ∅)

Review: MST Algorithms
What do the disjoint sets in Kruskal’s algorithm represent?
A: The pieces of the graph we have connected up so far

Kruskal’s Algorithm
Run the algorithm:

    Kruskal() {
        T = ∅;
        for each v ∈ V
            MakeSet(v);
        sort E by increasing edge weight w
        for each (u,v) ∈ E (in sorted order)
            if FindSet(u) ≠ FindSet(v)
                T = T ∪ {{u,v}};
                Union(FindSet(u), FindSet(v));
    }

The original slides step through this on an example graph with edge weights 1, 2, 5, 8, 9, 13, 14, 17, 19, 21, and 25 (only the edge labels survive in this transcript): each edge is examined in sorted order and added to T exactly when its endpoints lie in different sets.
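For concreteness, here is a minimal runnable sketch of the same algorithm (a hypothetical Python port, not from the slides), using the naïve disjoint-set representation discussed later in this lecture: each vertex stores its set id, and Union relabels the smaller set’s members.

    def kruskal(num_vertices, edges):
        # Disjoint sets: each vertex stores its set id (FindSet is O(1));
        # Union copies the smaller set's members into the larger set.
        set_of = list(range(num_vertices))             # MakeSet(v) for each v
        members = {v: [v] for v in range(num_vertices)}
        mst = []
        for w, u, v in sorted(edges):                  # sort E by increasing weight
            a, b = set_of[u], set_of[v]
            if a != b:                                 # endpoints in different sets
                mst.append((u, v, w))
                if len(members[a]) > len(members[b]):
                    a, b = b, a                        # copy smaller into larger
                for x in members[a]:
                    set_of[x] = b
                members[b].extend(members.pop(a))
        return mst

    # Hypothetical example graph: (weight, u, v) triples
    edges = [(1, 0, 1), (2, 1, 2), (5, 0, 2), (8, 2, 3), (9, 1, 3), (13, 0, 3)]
    print(kruskal(4, edges))   # [(0, 1, 1), (1, 2, 2), (2, 3, 8)]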

Review: Shortest-Path Algorithms
How does the Bellman-Ford algorithm work?
How can we do better for DAGs?
Under what conditions can we use Dijkstra’s algorithm?

Review: Running Time of Kruskal’s Algorithm
Expensive operations:
Sort edges: O(E lg E)
O(V) MakeSet()’s
O(E) FindSet()’s
O(V) Union()’s
Upshot: the running time comes down to the efficiency of the disjoint-set operations, particularly Union()

Review: Disjoint Set Union
So how do we represent disjoint sets?
Naïve implementation: use a linked list to represent each set’s elements, with pointers back to the set:
MakeSet(): O(1)
FindSet(): O(1)
Union(A,B): “copy” elements of A into set B by adjusting the elements of A to point to B: O(|A|)
How long could n Union()’s take? O(n²), worst case

Disjoint Set Union: Analysis
Worst-case analysis: O(n²) time for n Union()’s:
Union(S1, S2): “copy” 1 element
Union(S2, S3): “copy” 2 elements
…
Union(Sn−1, Sn): “copy” n − 1 elements
Total: O(n²)
Improvement: always copy the smaller set into the larger (see the sketch below)
How long would the above sequence of Union()’s take?
Worst case: n Union()’s take O(n lg n) time
Proof uses amortized analysis
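As a sketch (hypothetical Python, using the linked-list-style representation above), the following counts element copies for exactly this worst-case Union chain, with and without the weighted-union improvement:

    def union_chain(n, weighted):
        set_of = list(range(n))               # element -> set id
        members = {i: [i] for i in range(n)}
        copies = 0
        for i in range(1, n):                 # Union(S1,S2), Union(S2,S3), ...
            a, b = set_of[0], set_of[i]       # growing set, next singleton
            if weighted and len(members[a]) > len(members[b]):
                a, b = b, a                   # copy the smaller set into the larger
            copies += len(members[a])
            for x in members[a]:
                set_of[x] = b
            members[b].extend(members.pop(a))
        return copies

    print(union_chain(1024, weighted=False))  # 523776: about n²/2 copies
    print(union_chain(1024, weighted=True))   # 1023: one copy per Union here

On this particular sequence the heuristic does even better than its O(n lg n) worst case; the next slides show why no sequence can cost more than that.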

Amortized Analysis of Disjoint Sets
If elements are always copied from the smaller set into the larger set, an element can be copied at most lg n times
Worst case: each time an element is copied, it was in the smaller set, so the resulting set at least doubles in size:
1st time copied: resulting set size ≥ 2
2nd time: ≥ 4
…
(lg n)th time: ≥ n

Amortized Analysis of Disjoint Sets
Since we have n elements, each copied at most lg n times, n Union()’s take O(n lg n) time
Therefore we say the amortized cost of a Union() operation is O(lg n)
This is the aggregate method of amortized analysis:
n operations take total time T(n)
Average (amortized) cost of an operation = T(n)/n

Amortized Analysis: Accounting Method
Charge each operation an amortized cost
The amount not used immediately is stored in a “bank”
Later operations can use the stored money
The balance must not go negative
The book also discusses the potential method, but we won’t worry about it here

Accounting Method Example: Dynamic Tables
Implementing a table (e.g., a hash table) for dynamic data, we want to keep it as small as possible
Problem: if too many items are inserted, the table may be too small
Idea: allocate more memory as needed

Dynamic Tables
1. Init table size m = 1
2. Insert elements until the number of elements n > m
3. Generate a new table of size 2m
4. Reinsert the old elements into the new table
5. (back to step 2)
What is the worst-case cost of an insert?
One insert can be costly, but the total? (See the sketch below.)
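A minimal sketch (hypothetical Python, not from the slides) of this doubling scheme; cost counts elementary writes, one per insert plus one per element reinserted when the table grows:

    class DynamicTable:
        def __init__(self):
            self.size = 1              # m: allocated slots
            self.table = [None]
            self.n = 0                 # number of elements stored
            self.cost = 0              # total elementary writes so far

        def insert(self, item):
            if self.n == self.size:            # table full: double it
                old = self.table[:self.n]
                self.size *= 2
                self.table = [None] * self.size
                for i, x in enumerate(old):    # reinsert old elements
                    self.table[i] = x
                    self.cost += 1
            self.table[self.n] = item
            self.n += 1
            self.cost += 1                     # the insert itself

    t = DynamicTable()
    for i in range(1, 10):
        before = t.cost
        t.insert(i)
        print(i, t.cost - before)   # per-insert costs: 1, 2, 3, 1, 5, 1, 1, 1, 9

This reproduces the cost sequence tabulated on the next slide.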

Analysis Of Dynamic Tables
Let ci = cost of the ith insert:
ci = i if i − 1 is an exact power of 2, 1 otherwise
Example:
Operation   Table Size   Cost
Insert(1)   1            1
Insert(2)   2            1 + 1
Insert(3)   4            1 + 2
Insert(4)   4            1
Insert(5)   8            1 + 4
Insert(6)   8            1
Insert(7)   8            1
Insert(8)   8            1
Insert(9)   16           1 + 8

Aggregate Analysis
n Insert() operations cost Σ(i=1..n) ci ≤ n + Σ(j=0..⌊lg n⌋) 2^j < n + 2n = 3n
Average cost of an operation = (total cost)/(# operations) < 3n/n = 3
Asymptotically, then, a dynamic table costs the same as a fixed-size table: both are O(1) per Insert() operation
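A quick numeric check of this bound (hypothetical Python), using the per-insert cost model from the previous slide:

    def cost(i):
        # c_i = i if i - 1 is an exact power of 2, else 1
        return i if i > 1 and (i - 1) & (i - 2) == 0 else 1

    total = 0
    for i in range(1, 10_001):
        total += cost(i)
        assert total < 3 * i   # aggregate bound: amortized cost per insert < 3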

Accounting Analysis
Charge each operation a $3 amortized cost:
Use $1 to perform the immediate Insert()
Store $2
When the table doubles:
$1 reinserts the old item, $1 reinserts another old item
The point is, we’ve already paid these costs
Upshot: constant (amortized) cost per operation
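A small simulation (hypothetical Python) of this credit scheme: each insert deposits $3 and spends $1 on itself, and each element copied during a doubling spends $1 from the bank; the balance never goes negative:

    size, n, bank = 1, 0, 0
    for i in range(10_000):
        bank += 3             # amortized charge for this insert
        if n == size:         # table full: double and reinsert
            bank -= n         # $1 to re-copy each of the n old items
            size *= 2
        bank -= 1             # $1 for the insert itself
        n += 1
        assert bank >= 0      # the credit invariant holds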

Accounting Analysis
Suppose we must support insert and delete, so the table should contract as well as expand:
Table overflows → double it (as before)
Table < 1/2 full → halve it: BAD IDEA (why?)
Better: table < 1/4 full → halve it
Charge $3 for Insert() (as before)
Charge $2 for Delete():
$1 pays for the immediate delete
Store the extra $1 in the emptied slot
Use it later to pay to copy the remaining items to a new table when the table shrinks
(A sketch of this policy follows.)
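A sketch (hypothetical Python) of the expand-and-contract policy; halving as soon as the table drops below 1/2 full is the bad idea above, since alternating inserts and deletes at that boundary would force a full copy on nearly every operation:

    class ExpandContractTable:
        def __init__(self):
            self.size, self.n = 1, 0
            self.copies = 0                    # total elements copied (actual cost)

        def _resize(self, new_size):
            self.copies += self.n              # re-copy every stored element
            self.size = max(1, new_size)

        def insert(self):
            if self.n == self.size:
                self._resize(2 * self.size)    # overflow: double
            self.n += 1

        def delete(self):
            self.n -= 1
            if self.size > 1 and self.n < self.size / 4:
                self._resize(self.size // 2)   # under 1/4 full: halve

With the 1/4 threshold, at least size/4 operations separate consecutive resizes, so each resize’s copying cost is covered by the stored credit and n operations cost O(n) in total.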

Dynamic Programming
Another strategy for designing algorithms is dynamic programming:
A metatechnique, not an algorithm (like divide & conquer)
The word “programming” is historical and predates computer programming
Use it when the problem breaks down into recurring small subproblems
This lecture: a driving problem
Next lecture: the algorithm

Dynamic Programming Example: Longest Common Subsequence
Longest common subsequence (LCS) problem: given two sequences x[1..m] and y[1..n], find the longest subsequence that occurs in both
Ex: x = {A B C B D A B}, y = {B D C A B A}
{B C} and {A A} are both subsequences of both. What is the LCS?
Brute-force algorithm: for every subsequence of x, check whether it is a subsequence of y
How many subsequences of x are there?
What will be the running time of the brute-force algorithm?

LCS Algorithm
Brute-force algorithm: 2^m subsequences of x to check against the n elements of y: O(n · 2^m)
We can do better: for now, let’s only worry about finding the length of the LCS
When finished, we will see how to backtrack from this solution to the actual LCS
Define c[i,j] to be the length of the LCS of x[1..i] and y[1..j]
What is the length of the LCS of x and y?

Finding LCS Length
Theorem:
c[i,j] = 0 if i = 0 or j = 0
c[i,j] = c[i−1, j−1] + 1 if i, j > 0 and x[i] = y[j]
c[i,j] = max(c[i−1, j], c[i, j−1]) if i, j > 0 and x[i] ≠ y[j]
What is this really saying?
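A direct bottom-up implementation of this recurrence (hypothetical Python, not from the slides), computing the LCS length in O(mn) time:

    def lcs_length(x, y):
        m, n = len(x), len(y)
        # c[i][j] = length of LCS of x[1..i] and y[1..j]
        c = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if x[i - 1] == y[j - 1]:          # x[i] = y[j] (0-based strings)
                    c[i][j] = c[i - 1][j - 1] + 1
                else:                             # x[i] != y[j]
                    c[i][j] = max(c[i - 1][j], c[i][j - 1])
        return c[m][n]

    # The slides' example: the LCS of ABCBDAB and BDCABA has length 4 (e.g. BCBA)
    print(lcs_length("ABCBDAB", "BDCABA"))   # 4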

The End