1 Chapter 4 Divide-and-Conquer

2 About this lecture
Recall the divide-and-conquer paradigm, which we used for merge sort:
– Divide the problem into a number of sub-problems that are smaller instances of the same problem.
– Conquer the sub-problems by solving them recursively. Base case: if the sub-problems are small enough, just solve them by brute force.
– Combine the sub-problem solutions to give a solution to the original problem.
We look at two more algorithms based on divide-and-conquer.

3 About this lecture
Analyzing divide-and-conquer algorithms
Introduce some ways of solving recurrences:
– Substitution Method (if we know the answer)
– Recursion Tree Method (very useful!)
– Master Theorem (saves our effort)

4 Maximum-subarray problem
Input: an array A[1..n] of n numbers
– Assume that some of the numbers are negative, because this problem is trivial when all numbers are nonnegative.
Output: a nonempty subarray A[i..j] having the largest sum S[i, j] = a_i + a_{i+1} + … + a_j
(Figure: an example array with a maximum subarray highlighted)

5 A brute-force solution
Examine all possible S[i, j]. Two implementations:
– compute each S[i, j] from scratch in O(n) time ⇒ O(n^3) time in total
– compute each S[i, j+1] from S[i, j] in O(1) time, using S[i, i] = A[i] and S[i, j+1] = S[i, j] + A[j+1] ⇒ O(n^2) time in total
Ex: (the slide shows a table of A[i] and the partial sums S[1, j], S[2, j]; the values are not reproduced here)
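A minimal Python sketch of the O(n^2) brute-force approach described above (the incremental computation of S[i, j+1] from S[i, j]); the function name and the 0-based indexing are my own choices, not from the slides:

def max_subarray_brute_force(A):
    """Return (i, j, best_sum) for a maximum subarray A[i..j], 0-based indices.

    Runs in O(n^2) time: for each start index i, the running sum S[i, j]
    is extended to S[i, j+1] in O(1) time (S[i, j+1] = S[i, j] + A[j+1]).
    """
    n = len(A)
    best_i, best_j, best_sum = 0, 0, A[0]
    for i in range(n):
        running = 0                      # running = S[i, j]
        for j in range(i, n):
            running += A[j]              # extend the sum by one element
            if running > best_sum:
                best_i, best_j, best_sum = i, j, running
    return best_i, best_j, best_sum

For example, max_subarray_brute_force([1, -4, 3, -1, 2]) returns (2, 4, 4), i.e., the subarray [3, -1, 2] with sum 4.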

6 A divide-and-conquer solution
Possible locations of a maximum subarray A[i..j] of A[low..high], where mid = ⌊(low+high)/2⌋:
– entirely in A[low..mid] (low ≤ i ≤ j ≤ mid)
– entirely in A[mid+1..high] (mid < i ≤ j ≤ high)
– crossing the midpoint (low ≤ i ≤ mid < j ≤ high)
(Figure: the three possible locations of subarrays of A[low..high])

7 (Figure: a subarray crossing the midpoint, with low ≤ i ≤ mid < j ≤ high)
Such a subarray A[i..j] comprises two subarrays A[i..mid] and A[mid+1..j].

8 FIND-MAX-CROSSING-SUBARRAY(A, low, mid, high)
  // Find a maximum subarray of the form A[i..mid]
  left-sum = -∞
  sum = 0
  for i = mid downto low
      sum = sum + A[i]
      if sum > left-sum
          left-sum = sum
          max-left = i
  // Find a maximum subarray of the form A[mid+1..j]
  right-sum = -∞
  sum = 0
  for j = mid+1 to high
      sum = sum + A[j]
      if sum > right-sum
          right-sum = sum
          max-right = j
  // Return the indices and the sum of the two subarrays
  return (max-left, max-right, left-sum + right-sum)

9 Example: (the contents of the array A are shown in a figure; mid = 5)
Left of the midpoint (scanning i from mid down to 1):
S[5..5] = -3, S[4..5] = 17 ⇐ max (max-left = 4), S[3..5] = -8, S[2..5] = -11, S[1..5] = 2
Right of the midpoint (scanning j from mid+1 up to 10):
S[6..6] = -16, S[6..7] = -39, S[6..8] = -21, S[6..9] = -1 ⇐ max (max-right = 9), S[6..10] = -8
⇒ the maximum subarray crossing mid is A[4..9], with sum S[4..9] = 17 + (-1) = 16

10 FIND-MAXIMUM-SUBARRAY(A, low, high)
  if high == low
      return (low, high, A[low])   // base case: only one element
  else
      mid = ⌊(low + high)/2⌋
      (left-low, left-high, left-sum) = FIND-MAXIMUM-SUBARRAY(A, low, mid)
      (right-low, right-high, right-sum) = FIND-MAXIMUM-SUBARRAY(A, mid + 1, high)
      (cross-low, cross-high, cross-sum) = FIND-MAX-CROSSING-SUBARRAY(A, low, mid, high)
      if left-sum ≥ right-sum and left-sum ≥ cross-sum
          return (left-low, left-high, left-sum)
      elseif right-sum ≥ left-sum and right-sum ≥ cross-sum
          return (right-low, right-high, right-sum)
      else
          return (cross-low, cross-high, cross-sum)
Initial call: FIND-MAXIMUM-SUBARRAY(A, 1, n)
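A runnable Python version of the two procedures above, offered as a sketch; the 0-based indexing and the function names are my own, not from the slides:

import math

def find_max_crossing_subarray(A, low, mid, high):
    """Maximum subarray A[i..j] with low <= i <= mid < j <= high (0-based)."""
    left_sum, total = -math.inf, 0
    for i in range(mid, low - 1, -1):        # scan A[mid], A[mid-1], ..., A[low]
        total += A[i]
        if total > left_sum:
            left_sum, max_left = total, i
    right_sum, total = -math.inf, 0
    for j in range(mid + 1, high + 1):       # scan A[mid+1], ..., A[high]
        total += A[j]
        if total > right_sum:
            right_sum, max_right = total, j
    return max_left, max_right, left_sum + right_sum

def find_maximum_subarray(A, low, high):
    """Return (i, j, sum) of a maximum subarray of A[low..high]."""
    if high == low:
        return low, high, A[low]             # base case: only one element
    mid = (low + high) // 2
    left = find_maximum_subarray(A, low, mid)
    right = find_maximum_subarray(A, mid + 1, high)
    cross = find_max_crossing_subarray(A, low, mid, high)
    return max(left, right, cross, key=lambda t: t[2])

For example, find_maximum_subarray([1, -4, 3, -1, 2], 0, 4) also returns (2, 4, 4), matching the brute-force sketch, but in Θ(n lg n) time.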

11 Analyzing time complexity
FIND-MAX-CROSSING-SUBARRAY: Θ(n), where n = high - low + 1
FIND-MAXIMUM-SUBARRAY: T(n) = 2T(n/2) + Θ(n), with T(1) = Θ(1)
⇒ T(n) = Θ(n lg n) (similar to merge sort)

12 Matrix multiplication
Input: two n × n matrices A and B
Output: C = AB, where c_ij = Σ_{k=1..n} a_ik · b_kj
An O(n^3)-time naive algorithm:
SQUARE-MATRIX-MULTIPLY(A, B)
  n ← A.rows
  let C be a new n × n matrix
  for i ← 1 to n
      for j ← 1 to n
          c_ij ← 0
          for k ← 1 to n
              c_ij ← c_ij + a_ik · b_kj
  return C
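A direct Python translation of SQUARE-MATRIX-MULTIPLY above, as a sketch (0-based lists of lists rather than the pseudocode's 1-based indexing):

def square_matrix_multiply(A, B):
    """Naive Θ(n^3) multiplication of two n x n matrices given as lists of lists."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

For example, square_matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]) returns [[19, 22], [43, 50]].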

13 Divide-and-Conquer Algorithm
Assume that n is an exact power of 2. Partition each of A, B, and C into four n/2 × n/2 submatrices, so that C = AB becomes (4.1):
C11 = A11·B11 + A12·B21,  C12 = A11·B12 + A12·B22
C21 = A21·B11 + A22·B21,  C22 = A21·B12 + A22·B22

14 Divide-and-Conquer Algorithm
A straightforward divide-and-conquer algorithm:
T(n) = 8T(n/2) + Θ(n^2) = Θ(n^3)
(Computing A + B takes O(n^2) time.)
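A sketch in Python of the straightforward recursive algorithm behind T(n) = 8T(n/2) + Θ(n^2), using NumPy for the submatrix slicing (my choice, not the slides') and assuming, as the slides do, that n is an exact power of 2:

import numpy as np

def recursive_matrix_multiply(A, B):
    """Blocked divide-and-conquer multiply: 8 recursive products of n/2 x n/2 blocks."""
    n = A.shape[0]
    if n == 1:
        return A * B                         # 1x1 base case
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    C11 = recursive_matrix_multiply(A11, B11) + recursive_matrix_multiply(A12, B21)
    C12 = recursive_matrix_multiply(A11, B12) + recursive_matrix_multiply(A12, B22)
    C21 = recursive_matrix_multiply(A21, B11) + recursive_matrix_multiply(A22, B21)
    C22 = recursive_matrix_multiply(A21, B12) + recursive_matrix_multiply(A22, B22)
    return np.block([[C11, C12], [C21, C22]])

On power-of-2 sizes it agrees with A @ B; it is no faster than the naive algorithm, since it still performs 8 half-size products per level.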

15 Strassen’s method (the defining equations (4.2) for the matrices S1, …, S10 were shown as an image)

16 Strassen’s method (the defining equations (4.3) for the products P1, …, P7 and (4.4) for the submatrices of C were shown as images)

17 Strassen’s divide-and-conquer algorithm
Step 1: Divide each of A, B, and C into four sub-matrices as in (4.1)
Step 2: Create 10 matrices S1, S2, …, S10 as in (4.2)
Step 3: Recursively compute the 7 products P1, P2, …, P7 as in (4.3)
Step 4: Compute the four submatrices of C according to (4.4)
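Since equations (4.2)–(4.4) appear only as images in the slides, the sketch below spells out the standard Strassen formulation (the S, P, and C combinations as given in CLRS); it is an illustrative Python/NumPy implementation assuming n is an exact power of 2, not the authors' own code:

import numpy as np

def strassen(A, B):
    """Strassen's method: 7 recursive products per level, Θ(n^lg 7) overall."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Step 2: the ten sums/differences (equations (4.2) in CLRS)
    S1, S2, S3, S4, S5 = B12 - B22, A11 + A12, A21 + A22, B21 - B11, A11 + A22
    S6, S7, S8, S9, S10 = B11 + B22, A12 - A22, B21 + B22, A11 - A21, B11 + B12
    # Step 3: the seven recursive products (equations (4.3))
    P1, P2, P3, P4 = strassen(A11, S1), strassen(S2, B22), strassen(S3, B11), strassen(A22, S4)
    P5, P6, P7 = strassen(S5, S6), strassen(S7, S8), strassen(S9, S10)
    # Step 4: combine into the submatrices of C (equations (4.4))
    C11 = P5 + P4 - P2 + P6
    C12 = P1 + P2
    C21 = P3 + P4
    C22 = P5 + P1 - P3 - P7
    return np.block([[C11, C12], [C21, C22]])

Replacing the 8 recursive products of the previous sketch with these 7 is exactly what lowers the exponent from 3 to lg 7 ≈ 2.81; the result still agrees with A @ B on power-of-2 sizes.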

18 Time complexity
T(n) = 7T(n/2) + Θ(n^2)
     = Θ(n^lg 7)   (why?)
     ≈ Θ(n^2.81)
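A short justification of the "(why?)", using the Master Theorem introduced later in this lecture (a sketch; the slide itself leaves it as a question):

\[
a = 7,\; b = 2 \;\Rightarrow\; n^{\log_b a} = n^{\lg 7} \approx n^{2.807}, \qquad
f(n) = \Theta(n^2) = O\!\left(n^{\lg 7 - \varepsilon}\right) \text{ for } \varepsilon = 0.8 > 0,
\]
\[
\text{so Case 1 gives } T(n) = \Theta\!\left(n^{\lg 7}\right) \approx \Theta\!\left(n^{2.81}\right).
\]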

19 Discussion
Strassen’s method is largely of theoretical interest for n ≤ 45 (for such small n, the overhead usually makes the naive algorithm faster in practice).
Strassen’s method is based on the fact that we can multiply two 2 × 2 matrices using only 7 multiplications (instead of 8).
It has been shown that it is impossible to multiply two 2 × 2 matrices using fewer than 7 multiplications.

20 Discussion
We can improve on Strassen’s algorithm by finding an efficient way to multiply two k × k matrices using a smaller number q of multiplications, where k > 2. The running time is then T(n) = qT(n/k) + Θ(n^2), which is Θ(n^{log_k q}) when log_k q > 2.
A trivial lower bound for matrix multiplication is Ω(n^2).
The best upper bound currently known is roughly O(n^2.376).
Open problems:
– Can this upper bound be improved?
– Can the lower bound Ω(n^2) be improved?

21 Substitution Method (if we know the answer)
How to solve this? T(n) = 2T(⌊n/2⌋) + n, with T(1) = 1
1. Make a guess: e.g., T(n) = O(n log n)
2. Show it by induction: e.g., to show the upper bound, we find constants c and n0 such that T(n) ≤ c·f(n) for n = n0, n0+1, n0+2, …

22 Substitution Method (if we know the answer)
How to solve this? T(n) = 2T(⌊n/2⌋) + n, with T(1) = 1
1. Make a guess: e.g., T(n) = O(n log n)
2. Show it by induction:
First, T(2) = 4 and T(3) = 5. We want T(n) ≤ c·n lg n, so letting c = 2 makes T(2) and T(3) okay (the base cases).
Other cases?

23 Substitution Method (if we know the answer)
Induction Case: Assume the guess is true for all n = 2, 3, …, k. For n = k+1, we have:
T(n) = 2T(⌊n/2⌋) + n
     ≤ 2·c⌊n/2⌋·lg⌊n/2⌋ + n
     ≤ cn·lg(n/2) + n
     = cn lg n − cn + n
     ≤ cn lg n   (since c ≥ 1)
⇒ the induction case is true.

24 Substitution Method (if we know the answer)
Q. How did we know the value of c and n0?
A. If induction works, the induction case must be correct ⇒ c ≥ 1.
Then we find that by setting c = 2, our guess is correct as soon as n0 = 2.
Alternatively, we can also use c = 1.5; then we just need a larger n0 = 4. (What will be the new base cases? Why?)
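A small Python check of this bound, as a sketch (the recurrence and the constants c = 2, n0 = 2 come from the slides; the script itself is mine): it evaluates T(n) = 2T(⌊n/2⌋) + n with T(1) = 1 and verifies T(n) ≤ 2·n lg n over a range of n.

from functools import lru_cache
from math import log2

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 2 T(floor(n/2)) + n, with T(1) = 1."""
    return 1 if n == 1 else 2 * T(n // 2) + n

# Check the guessed bound T(n) <= c * n * lg(n) with c = 2, starting at n0 = 2.
assert all(T(n) <= 2 * n * log2(n) for n in range(2, 10_000))
print("T(n) <= 2 n lg n holds for 2 <= n < 10000")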

25 Substitution Method (New Challenge)
How to solve this? T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 1, with T(1) = 1
1. Make a guess: T(n) = O(n), and
2. Show T(n) ≤ cn by induction
– What will happen in the induction case?

26 Substitution Method (New Challenge)
Induction Case: (assume the guess is true for some base cases)
T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 1 ≤ c⌊n/2⌋ + c⌈n/2⌉ + 1 = cn + 1
⇐ this "+ 1" term is not what we want …

27 Substitution Method (New Challenge)
The 1st attempt did not work because our guess for T(n) was a bit "loose".
Recall: induction may become easier if we prove a "stronger" statement.
2nd attempt: refine our statement. Try to show T(n) ≤ cn − b instead.

28 Substitution Method (New Challenge)
Induction Case:
T(n) ≤ (c⌊n/2⌋ − b) + (c⌈n/2⌉ − b) + 1 = cn − 2b + 1 ≤ cn − b
⇐ we get the desired term (when b ≥ 1)
It remains to find c and n0, and prove the base case(s), which is relatively easy.
Can you prove T(n) ≤ cn^2 by induction?

29 Avoiding Pitfalls
For T(n) = 2T(⌊n/2⌋) + n, we can falsely "prove" T(n) = O(n) by guessing T(n) ≤ cn and then arguing:
T(n) ≤ 2(c⌊n/2⌋) + n ≤ cn + n = O(n)   ⇐ Wrong!!
(The error: the induction must establish the exact form of the hypothesis, T(n) ≤ cn, not merely T(n) = O(n).)

30 Substitution Method (New Challenge 2)
How to solve this? T(n) = 2T(⌊√n⌋) + lg n ?
Hint: change variable: set m = lg n

31 Substitution Method (New Challenge 2)
Set m = lg n; we get T(2^m) = 2T(2^{m/2}) + m.
Next, set S(m) = T(2^m) = T(n):
S(m) = 2S(m/2) + m
We solve S(m) = O(m lg m) ⇒ T(n) = O(lg n · lg lg n)

32 Recursion Tree Method (Nothing Special… Very Useful!)
How to solve this? T(n) = 2T(n/2) + n^2, with T(1) = 1

33 Recursion Tree Method (Nothing Special… Very Useful!)
Expanding the terms, we get:
T(n) = n^2 + 2T(n/2)
     = n^2 + 2(n/2)^2 + 4T(n/4) = n^2 + n^2/2 + 4T(n/4)
     = n^2 + n^2/2 + n^2/4 + 8T(n/8)
     = …
     = n^2·(1 + 1/2 + 1/4 + … ) + n·T(1)
     = Θ(n^2) + Θ(n) = Θ(n^2)

34 Recursion Tree Method (Recursion Tree View)
We can express the previous recurrence by: (figure: recursion tree with cost n^2 at the root and a T(n/2) subproblem at each of its two children)

35 Further expanding gives us: (figure: the next level of the recursion tree, with cost (n/2)^2 at each child; this term comes from expanding T(n/2))

36 Recursion Tree Method (New Challenge)
How to solve this? T(n) = T(n/3) + T(2n/3) + n, with T(1) = 1
What will the recursion tree view look like?

37 The corresponding recursion tree view is: (figure: recursion tree in which each node of cost n has children of cost n/3 and 2n/3)
The depth of the tree is log_{3/2} n. Why?
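A sketch of the reason for the depth bound (my own wording; the slide leaves it as a question): the longest root-to-leaf path is the one that always takes the 2n/3 branch.

\[
\text{subproblem size after } d \text{ levels along that path} = n\left(\tfrac{2}{3}\right)^{d},
\qquad
n\left(\tfrac{2}{3}\right)^{d} = 1
\;\Longleftrightarrow\; \left(\tfrac{3}{2}\right)^{d} = n
\;\Longleftrightarrow\; d = \log_{3/2} n .
\]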

38 Master Method ( Save our effort ) When the recurrence is in a special form, we can apply the Master Theorem to solve the recurrence immediately The Master Theorem has 3 cases …

39 Master Theorem
Let T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1 are constants, and where we interpret n/b to mean either ⌊n/b⌋ or ⌈n/b⌉.
Theorem: (Case 1) If f(n) = O(n^{log_b a − ε}) for some constant ε > 0, then T(n) = Θ(n^{log_b a}).

40 Theorem: (Case 2) If f(n) = Θ(n^{log_b a}), then T(n) = Θ(n^{log_b a} lg n).
Theorem: (Case 3) If f(n) = Ω(n^{log_b a + ε}) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

41 Master Theorem
1. Solve T(n) = 9T(n/3) + n (case 1)
2. Solve T(n) = T(2n/3) + 1 (case 2)
3. Solve T(n) = 3T(n/4) + n lg n (case 3)
4. How about this? T(n) = 2T(n/2) + n lg n ?
5. T(n) = 8T(n/2) + n^2,   T(n) = 8T(n/2) + n
6. T(n) = 7T(n/2) + n^2,   T(n) = 7T(n/2) + 1
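Worked solutions for the first three exercises above, as a sketch of how each case applies (the case labels come from the slide; the algebra is standard):

\begin{align*}
&\text{1. } T(n) = 9T(n/3) + n:\quad n^{\log_3 9} = n^2,\ \ f(n) = n = O(n^{2-\varepsilon})\ (\varepsilon = 1)
  \;\Rightarrow\; \text{Case 1} \;\Rightarrow\; T(n) = \Theta(n^2).\\
&\text{2. } T(n) = T(2n/3) + 1:\quad n^{\log_{3/2} 1} = n^0 = 1,\ \ f(n) = \Theta(1)
  \;\Rightarrow\; \text{Case 2} \;\Rightarrow\; T(n) = \Theta(\lg n).\\
&\text{3. } T(n) = 3T(n/4) + n\lg n:\quad n^{\log_4 3} \approx n^{0.793},\ \ f(n) = n\lg n = \Omega(n^{0.793+\varepsilon}),\\
&\qquad a\,f(n/b) = 3\,(n/4)\lg(n/4) \le \tfrac{3}{4}\,n\lg n = c\,f(n)\ \text{with } c = \tfrac{3}{4} < 1
  \;\Rightarrow\; \text{Case 3} \;\Rightarrow\; T(n) = \Theta(n\lg n).
\end{align*}

Exercise 4 is the classic example where none of the three cases applies as stated: n lg n is asymptotically larger than n^{log_2 2} = n, but not polynomially larger, so Case 3 does not cover it.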

42 Homework
Exercise: (Programming) (due: Oct. 17)
Practice at home: 4.1-5, 4.2-1
Exercise: 4.3-6, 4.4-6 (due: Oct. 19)
Problem: 4.1 (a, g) (due: Oct. 19)
Practice at home: 4.3-7, 4.4-8, 4.4-7, 4.5-4