Lecture 5 Today: how to solve recurrences. We have learned "guess and prove by induction"; we also learned the "substitution" method. Today we learn the "master theorem". More divide and conquer: the closest pair problem and matrix multiplication.

Master Theorem (CLRS, Theorem 4.1) Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the nonnegative integers by T(n) = aT(n/b) + f(n). Then: (1) if f(n) = O(n^{log_b a − ε}) for some constant ε > 0, then T(n) = Θ(n^{log_b a}); (2) if f(n) = Θ(n^{log_b a}), then T(n) = Θ(n^{log_b a} lg n); (3) if f(n) = Ω(n^{log_b a + ε}) for some constant ε > 0, and if a f(n/b) ≤ c f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

Note The theorem only applies to this particular family of recurrences, with f(n) positive for large n. The key is to compare f(n) with n^{log_b a}. Case 2 has a more general form: if f(n) = Θ(n^{log_b a} lg^k n) for some constant k ≥ 0, then T(n) = Θ(n^{log_b a} lg^{k+1} n). Sometimes the theorem does not apply at all. Ex. T(n) = 4T(n/2) + n^2/log n.

Proof ideas of Master Theorem Consider a tree with T(n) at the root, and apply the recursion to each node, until we get down to T(1) at the leaves. The first recursion is T(n) = aT(n/b) + f(n), so assign a cost of f(n) to the root. At the next level we have a nodes, each with a cost of T(n/b). When we apply the recursion again, we get a cost of a·f(n/b) for all of these. At the next level we have a^2 nodes, each with a cost of T(n/b^2), for a total cost of a^2·f(n/b^2). We continue down to T(1) at the leaves. There are a^{log_b n} = n^{log_b a} leaves, and each costs Θ(1), which gives Θ(n^{log_b a}). The total cost associated with f is Σ_{i=0}^{log_b n − 1} a^i f(n/b^i). Thus T(n) = Θ(n^{log_b a}) + Σ_{i=0}^{log_b n − 1} a^i f(n/b^i). The three cases now come from deciding which term is dominant. In case (1), the Θ term is dominant. In case (2), the terms are roughly equal (but the second term has an extra lg n factor). In case (3), the f(n) term is dominant. The details are somewhat painful, but can be found in CLRS.
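The tree sum above can be checked numerically. Below is a small sketch of my own (not from the lecture; the function name is arbitrary) that evaluates the leaf term plus the per-level sum for T(n) = 4T(n/2) + n, where case 1 predicts Θ(n^2):

```python
import math

# Hypothetical helper (not from the slides): total cost from the recursion
# tree, i.e. the leaf term plus the per-level f-costs,
#   a^{log_b n} * Theta(1) + sum_{i=0}^{log_b n - 1} a^i f(n/b^i).
def tree_sum(n, a, b, f):
    levels = round(math.log(n, b))            # depth of the tree
    leaf_cost = a ** levels                   # a^{log_b n} leaves, cost 1 each
    inner_cost = sum(a ** i * f(n / b ** i) for i in range(levels))
    return leaf_cost + inner_cost

# For T(n) = 4T(n/2) + n (case 1), T(n)/n^2 approaches a constant (here 2,
# since the sum works out to 2n^2 - n).
for n in (2 ** 6, 2 ** 10, 2 ** 14):
    print(n, tree_sum(n, 4, 2, lambda m: m) / n ** 2)
```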

Idea of master theorem (recursion tree): the root costs f(n); it has a children, each costing f(n/b); the next level has a^2 nodes, for a total of a^2 f(n/b^2); the tree has height h = log_b n; the number of leaves is a^h = a^{log_b n} = n^{log_b a}, each costing Θ(1), for a leaf total of n^{log_b a}·Θ(1).

Three common cases Compare f(n) with n^{log_b a}: 1. f(n) = O(n^{log_b a − ε}) for some constant ε > 0: f(n) grows polynomially slower than n^{log_b a} (by an n^ε factor). Solution: T(n) = Θ(n^{log_b a}).

Idea of master theorem (recursion tree), Case 1: the per-level costs f(n), a f(n/b), a^2 f(n/b^2), … increase geometrically from the root to the leaves, so the leaves hold a constant fraction of the total weight and only the bottom term matters: T(n) = Θ(n^{log_b a}).

Case 2 Compare f(n) with n^{log_b a}: 2. f(n) = Θ(n^{log_b a} lg^k n) for some constant k ≥ 0: f(n) and n^{log_b a} grow at similar rates. This is clear for k = 0. For k > 0, the intuition is that the lg^k n factor stays roughly the same on a constant fraction of the levels, so those levels sum to the result below. Solution: T(n) = Θ(n^{log_b a} lg^{k+1} n).

Idea of master theorem (recursion tree), Case 2 (k = 0): the weight is approximately the same on each of the log_b n levels, so T(n) = Θ(n^{log_b a} lg n).

Case 3 Compare f(n) with n^{log_b a}: 3. f(n) = Ω(n^{log_b a + ε}) for some constant ε > 0: f(n) grows polynomially faster than n^{log_b a} (by an n^ε factor), and f(n) satisfies the regularity condition that a f(n/b) ≤ c f(n) for some constant c < 1. With c < 1, the level costs a^k f(n/b^k) decrease geometrically, hence the whole sum is Θ(f(n)). Solution: T(n) = Θ(f(n)).

Idea of master theorem (recursion tree), Case 3: since a f(n/b) ≤ c f(n) with c < 1, the weight decreases geometrically from the root to the leaves, so the root holds a constant fraction of the total weight: T(n) = Θ(f(n)).

Examples for the Master Theorem The Karatsuba recurrence T(n) = 3T(n/2) + cn has a = 3, b = 2, f(n) = cn. Case 1 applies, and so T(n) = Θ(n^{log_2 3}), as we found. The mergesort recurrence has a = 2, b = 2, f(n) = n. Case 2 applies, and so T(n) = Θ(n lg n). Finally, a recurrence like T(n) = 3T(n/2) + n^2 gives rise to case 3: here f(n) = n^2, so 3f(n/2) = 3(n/2)^2 = (3/4)n^2 ≤ c·n^2 for c = 3/4, and so T(n) = Θ(n^2). Note that the master theorem does not cover all cases. In particular, it does not cover T(n) = 2T(n/2) + n/lg n: the only candidate is case 1, but n/lg n is not O(n^{1−ε}) for any constant ε > 0.
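The three cases can be mechanized for the common situation f(n) = Θ(n^d). This is a sketch of my own (the function name and output format are arbitrary, not part of the lecture):

```python
import math

# A sketch (not from the slides): solve T(n) = a T(n/b) + Theta(n^d) by
# comparing the exponent d with the critical exponent log_b a.
def master(a, b, d):
    e = math.log(a, b)                  # critical exponent log_b a
    if d < e - 1e-9:                    # case 1: f grows polynomially slower
        return f"Theta(n^{e:.3f})"
    if abs(d - e) <= 1e-9:              # case 2 (k = 0): comparable growth
        return f"Theta(n^{d:g} lg n)"
    return f"Theta(n^{d:g})"            # case 3: f dominates (regularity holds
                                        # automatically for f(n) = n^d)

print(master(3, 2, 1))   # Karatsuba: Theta(n^1.585)
print(master(2, 2, 1))   # mergesort: Theta(n^1 lg n)
print(master(3, 2, 2))   # Theta(n^2)
```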

Closest pair problem Input: A set of points P = {p_1, …, p_n} in two dimensions. Output: The pair of points p_i, p_j that minimize the Euclidean distance between them.

Distances Euclidean distance: d(p_i, p_j) = √((x_i − x_j)^2 + (y_i − y_j)^2).

Closest Pair Problem

Divide and Conquer An O(n^2)-time algorithm is easy. Assumptions: no two points have the same x-coordinate; no two points have the same y-coordinate. How do we solve this problem in 1 dimension? Sort the numbers and walk from left to right to find the minimum gap.
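The 1-dimensional strategy just described can be sketched in a few lines (my own illustration, not from the slides):

```python
# 1-D closest pair: sort, then scan adjacent pairs for the minimum gap.
# Sorting dominates, so this is O(n log n) overall.
def min_gap(xs):
    xs = sorted(xs)
    return min(b - a for a, b in zip(xs, xs[1:]))

print(min_gap([7, 1, 9, 4]))  # sorted gaps are 3, 3, 2 -> 2
```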

Divide and Conquer Divide and conquer has a chance to do better than O(n^2). We can first sort the points by their x-coordinates, and also sort them by their y-coordinates.

Closest Pair Problem

Divide and Conquer for the Closest Pair Problem Divide by x-median

Divide Divide by the x-median into L and R.

Conquer Recursively solve L and R.

Combination I Take the smaller of δ1 and δ2: δ = min(δ1, δ2).

Combination II Is there a point in L and a point in R whose distance is smaller than δ = min(δ1, δ2)?

Combination II If the answer is "no", then we are done! If the answer is "yes", then the closest such pair forms the closest pair for the entire set. How do we determine this?

Combination II Is there a point in L and a point in R whose distance is smaller than δ?

We only need to consider the points in the narrow band of width 2δ around the dividing line, which can be constructed in O(n) time.

Combination II Is there a point in L and a point in R whose distance is smaller than δ? Denote this set of band points by S, and let S_y be the list of S sorted by y-coordinate.

Combination II There exists a point in L and a point in R whose distance is less than δ if and only if there exist two points in S whose distance is less than δ. If S is the whole set, did we gain anything? CLAIM: If s and t in S have the property that ||s − t|| < δ, then s and t are within 15 positions of each other in the sorted list S_y.

Combination II Is there a point in L and a point in R whose distance is smaller than δ? There is at most one point in each box of size δ/2 by δ/2, so s and t cannot be too far apart in S_y.

Closest-Pair
Preprocessing: construct P_x and P_y, the points sorted by x- and y-coordinate.
Closest-Pair(P, P_x, P_y):
  Divide: construct L, L_x, L_y and R, R_x, R_y
  Conquer:
    δ1 = Closest-Pair(L, L_x, L_y)
    δ2 = Closest-Pair(R, R_x, R_y)
  Combination:
    δ = min(δ1, δ2)
    Construct S and S_y
    For each point in S_y, check each of the next 15 points down the list;
    if a distance is less than δ, update δ to this smaller distance
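The pseudocode above might be realized as follows. This is a sketch under the stated assumptions (distinct coordinates; the function and variable names are mine, not from the lecture):

```python
import math

def closest_pair(points):
    """O(n log n) closest-pair distance; points is a list of (x, y) tuples."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def rec(px, py):                          # px sorted by x, py by y
        n = len(px)
        if n <= 3:                            # base case: brute force
            return min(dist(p, q) for i, p in enumerate(px)
                       for q in px[i + 1:])
        mid = n // 2
        x_med = px[mid][0]                    # divide by the x-median
        L, R = px[:mid], px[mid:]
        Lset = set(L)
        Ly = [p for p in py if p in Lset]     # keep y-order within each half
        Ry = [p for p in py if p not in Lset]
        d = min(rec(L, Ly), rec(R, Ry))       # delta = min(delta1, delta2)
        # S_y: band of width 2*delta around the line, already sorted by y.
        band = [p for p in py if abs(p[0] - x_med) < d]
        for i, p in enumerate(band):
            for q in band[i + 1:i + 16]:      # check the next 15 positions
                if q[1] - p[1] >= d:          # further down: y-gap too large
                    break
                d = min(d, dist(p, q))
        return d

    px = sorted(points)
    py = sorted(points, key=lambda p: p[1])
    return rec(px, py)

print(closest_pair([(0, 0), (5, 5), (1, 1), (9, 0)]))  # closest: (0,0)-(1,1)
```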

Complexity Analysis Preprocessing takes O(n lg n) time. Divide takes O(n) time. Conquer takes 2T(n/2) time. Combination takes O(n) time. So T(n) = 2T(n/2) + cn, and the whole algorithm takes O(n lg n) time.

Matrix Multiplication Suppose we multiply two N×N matrices together. The regular method uses N×N×N = N^3 multiplications: O(N^3).

Can we Divide and Conquer? Split each matrix into four N/2 × N/2 blocks, so that C = A·B with C11 = A11·B11 + A12·B21, C12 = A11·B12 + A12·B22, C21 = A21·B11 + A22·B21, C22 = A21·B12 + A22·B22. Complexity: T(N) = 8T(N/2) + O(N^2) = O(N^{log_2 8}) = O(N^3). No improvement.

Strassen's Matrix Multiplication
P1 = (A11 + A22)·(B11 + B22)
P2 = (A21 + A22)·B11
P3 = A11·(B12 − B22)
P4 = A22·(B21 − B11)
P5 = (A11 + A12)·B22
P6 = (A21 − A11)·(B11 + B12)
P7 = (A12 − A22)·(B21 + B22)
C11 = P1 + P4 − P5 + P7
C12 = P3 + P5
C21 = P2 + P4
C22 = P1 + P3 − P2 + P6
And do this recursively as usual. (Volker Strassen)
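The seven products above can be sketched directly as a recursion (assuming the dimension is a power of two; helper names are mine, not from the lecture):

```python
# Elementwise sum and difference of equal-size matrices (nested lists).
def add(A, B): return [[x + y for x, y in zip(r, s)] for r, s in zip(A, B)]
def sub(A, B): return [[x - y for x, y in zip(r, s)] for r, s in zip(A, B)]

def strassen(A, B):
    """Strassen's recursion for n x n matrices, n a power of two."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    # Extract the (i, j) quadrant of M as an m x m matrix.
    q = lambda M, i, j: [row[j * m:(j + 1) * m] for row in M[i * m:(i + 1) * m]]
    A11, A12, A21, A22 = q(A, 0, 0), q(A, 0, 1), q(A, 1, 0), q(A, 1, 1)
    B11, B12, B21, B22 = q(B, 0, 0), q(B, 0, 1), q(B, 1, 0), q(B, 1, 1)
    P1 = strassen(add(A11, A22), add(B11, B22))   # 7 recursive products
    P2 = strassen(add(A21, A22), B11)
    P3 = strassen(A11, sub(B12, B22))
    P4 = strassen(A22, sub(B21, B11))
    P5 = strassen(add(A11, A12), B22)
    P6 = strassen(sub(A21, A11), add(B11, B12))
    P7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(P1, P4), P5), P7)
    C12 = add(P3, P5)
    C21 = add(P2, P4)
    C22 = add(sub(add(P1, P3), P2), P6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]   # reassemble quadrants
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```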

Time analysis T(n) = 7T(n/2) + O(n^2) = Θ(n^{log_2 7}) = O(n^{2.81}) by the Master Theorem (case 1). Best bound: O(n^{2.376}) by Coppersmith-Winograd. Best known (trivial) lower bound: Ω(n^2). Open: what is the true complexity of matrix multiplication?