Divide-and-Conquer Dr. B C Dhara Department of Information Technology Jadavpur University.

Divide-and-Conquer
Divide-and-conquer is a general algorithm design paradigm:
Divide: break the problem into several sub-problems that are similar to the original problem but smaller in size.
Conquer: solve the sub-problems recursively (successively and independently).
Combine: merge the solutions of the sub-problems into a solution to the original problem.
The base cases for the recursion are sub-problems of constant size.
Analysis can be done using recurrence equations.
It is a top-down technique for designing algorithms.

Binary search
Binary search is an extremely well-known instance of the divide-and-conquer paradigm.
Given an ordered (increasing) array of n elements a[1..n], the basic idea of binary search is that for a given key we "probe" the middle element of the array.
We continue in either the lower or the upper segment of the array, depending on the outcome of the probe, until we reach the required (given) element or the segment becomes empty.

Binary search
Divide: middle = (low + high) / 2
Conquer:
  if (low > high) return 0;            // key not present (array is a[1..n])
  if (a[middle] == key) return middle; // found
  else if (a[middle] > key) search(a, low, middle-1);
  else search(a, middle+1, high);
Combine: none
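The scheme above can be rendered directly in Python. This sketch uses 0-based indexing and returns -1 when the key is absent, a minor presentational choice relative to the 1-based pseudocode:

```python
def binary_search(a, key, low, high):
    """Recursive binary search on the sorted segment a[low..high].
    Returns the index of key, or -1 if absent."""
    if low > high:                 # base case: empty segment, key not present
        return -1
    middle = (low + high) // 2     # divide: probe the middle element
    if a[middle] == key:
        return middle
    elif a[middle] > key:          # conquer: continue in the lower segment
        return binary_search(a, key, low, middle - 1)
    else:                          # conquer: continue in the upper segment
        return binary_search(a, key, middle + 1, high)
```

There is no combine step: the answer of the single recursive call is the answer of the whole problem.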

Binary search
T(n) = T(n/2) + Θ(1)

Merge sort The merge sort algorithm closely follows the divide-and-conquer paradigm. Intuitively, it operates as follows. Divide: Divide the n-element sequence to be sorted into two subsequences of n/2 elements each. Conquer: Sort the two subsequences recursively using merge sort. Combine: Merge the two sorted subsequences to produce the sorted answer.

Merge sort We note that the recursion “bottoms out” when the sequence to be sorted has length 1, in which case there is no work to be done, since every sequence of length 1 is already in sorted order.

Merge sort
Input: An array A and indices p and r
Output: A sorted array A[p..r]
MERGE-SORT(A, p, r)
1. if p < r
2.   then q = ⌊(p + r) / 2⌋
3.     MERGE-SORT(A, p, q)
4.     MERGE-SORT(A, q+1, r)
5.     MERGE(A, p, q, r)

Merge sort
MERGE(A, p, q, r)
1. i = p; j = q + 1; n = r - p + 1
2. for k = 1 to n
3.   if (i ≤ q) and [(j > r) or (A[i] ≤ A[j])]
4.     B[k] = A[i]; i = i + 1
5.   else
6.     B[k] = A[j]; j = j + 1
7. for k = 0 to n - 1
8.   A[p + k] = B[k + 1]
Running time of MERGE is Θ(n).
Complexity of merge sort is T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + Θ(n)
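The MERGE-SORT/MERGE pseudocode can be sketched in Python. As a presentational choice, this version returns a new sorted list instead of merging through the auxiliary array B:

```python
def merge_sort(a):
    """Divide-and-conquer merge sort; returns a new sorted list."""
    if len(a) <= 1:                 # base case: length-0/1 sequences are sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])      # conquer: sort the left half
    right = merge_sort(a[mid:])     # conquer: sort the right half
    # combine: merge the two sorted halves in Theta(n) time
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])            # append whichever half is non-empty
    out.extend(right[j:])
    return out
```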

Quick sort
Divide: partition the data set into two subsets with respect to the pivot element.
Conquer: recursively apply quick sort to the two subsets.
Combine: none, as the sorting is done in place.

Quick sort
Quicksort(A, p, r)
  if p ≥ r then return;
  q = Partition(A, p, r);
  Quicksort(A, p, q-1);
  Quicksort(A, q+1, r);

Partition(a, first, last)
  pivot = a[first];
  up = first; down = last;
  while (up < down) {
    while (pivot >= a[up] && up < last) up++;
    while (pivot < a[down]) down--;
    if (up < down) swap(a, up, down);
  }
  swap(a, first, down);
  return down;

Complexity of quick sort is T(n) = T(i) + T(n-i-1) + O(n), where i is the number of elements smaller than the pivot.
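A runnable Python rendering of this Quicksort/Partition pair (first element as pivot, in-place swaps, as in the pseudocode):

```python
def partition(a, first, last):
    """Partition a[first..last] around pivot a[first];
    return the pivot's final index."""
    pivot = a[first]
    up, down = first, last
    while up < down:
        while up < last and a[up] <= pivot:   # scan right past elements <= pivot
            up += 1
        while a[down] > pivot:                # scan left past elements > pivot
            down -= 1
        if up < down:
            a[up], a[down] = a[down], a[up]
    a[first], a[down] = a[down], a[first]     # place the pivot between the parts
    return down

def quicksort(a, p, r):
    """Sort a[p..r] in place; no combine step is needed."""
    if p >= r:
        return
    q = partition(a, p, r)
    quicksort(a, p, q - 1)
    quicksort(a, q + 1, r)
```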

Counting inversions problem
Let L = {x_1, x_2, x_3, …, x_n} be a collection of distinct integers between 1 and n.
The number of pairs (i, j) with 1 ≤ i < j ≤ n such that x_i > x_j is the number of inversions; a counter tracks it.
If the counter is (n-1) + (n-2) + … + 1 = n(n-1)/2, then the given data is in descending order.
If the counter value is 0, the given data is in ascending order.

Counting inversions problem
(In the slide's figure, each inversion appears as a crossing between the two orderings; the number of crossing points is the answer.)
We can apply a merge-sort-like method to solve the problem.

Counting inversions problem
Divide: partition the list L into two lists A and B with n/2 elements each.
Conquer:
  Recursively count the number of inversions in A.
  Recursively count the number of inversions in B.
Combine: count the number of inversions involving one element in A and one element in B (use the merge procedure and increase the counter appropriately).

Counting inversions problem
sort-and-count(L)
  if |L| == 1, return (0, L);
  else
    A ← first ⌈n/2⌉ elements of L
    B ← remaining ⌊n/2⌋ elements of L
    (ca, A) = sort-and-count(A)
    (cb, B) = sort-and-count(B)
    (cm, L) = merge-and-count(A, B)
    return ca + cb + cm with the sorted list L
Running time of the method is T(n) ≤ 2T(⌈n/2⌉) + O(n)

Counting inversions problem
merge-and-count(A, B)
1. Maintain a current pointer for each list, initially pointing to the first element.
2. Maintain a variable count initialized to 0.
3. While both lists are nonempty:
   a. Let ai and bj be the current elements.
   b. Append the smaller of the two to the output list.
   c. If bj is smaller, increment count by the number of elements remaining in A (from the current position to the last).
   d. Advance the current pointer of the list holding the smaller element.
4. Append the rest of the non-empty list to the output.
5. Return count and the merged list.
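A compact Python sketch combining sort-and-count and merge-and-count. The list slicing for the split is an assumption of this sketch; the slides leave the data layout abstract:

```python
def sort_and_count(L):
    """Return (number of inversions in L, sorted copy of L)."""
    if len(L) <= 1:                  # base case: no inversions possible
        return 0, L
    mid = len(L) // 2
    ca, A = sort_and_count(L[:mid])  # conquer: count within the first half
    cb, B = sort_and_count(L[mid:])  # conquer: count within the second half
    # combine (merge-and-count): count pairs (a in A, b in B) with a > b
    merged, count, i, j = [], 0, 0, 0
    while i < len(A) and j < len(B):
        if A[i] <= B[j]:
            merged.append(A[i]); i += 1
        else:
            count += len(A) - i      # B[j] is inverted with every remaining A element
            merged.append(B[j]); j += 1
    merged.extend(A[i:])
    merged.extend(B[j:])
    return ca + cb + count, merged
```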

Notes
In the 'sort-and-count' process, the lists A and B input to the 'merge-and-count' function are sorted.
Initially, Counter = 0.

Example (figure on the original slides: merge steps with running inversion-counter values)

Closest pair of points
Given a set P of n points in the plane, find the pair of points in P that are closest to each other.
The simplest solution to the problem: consider all pairs, find the distance of each pair, and then pick the closest one. Complexity: O(n^2).
Another solution uses the divide-and-conquer approach, with complexity O(n log n).

Closest pair of points
In 1D:
Without any loss of generality, assume the points are on the x-axis.
The shortest distance is the one between an adjacent pair of points.
Sort the given points and compute the distance for the n-1 adjacent pairs.
Complexity: O(n log n) + O(n) = O(n log n)

Closest pair of points
1D, divide-and-conquer approach after the sorting:
Divide equally into a left half and a right half.
The closest pair could be:
  the closest pair in the left half: distance δ_L
  the closest pair in the right half: distance δ_R
  a pair that spans the left and right halves and is at most min(δ_L, δ_R) apart (in 1D only one such pair matters: the rightmost point of the left half and the leftmost point of the right half).

Closest pair of points
Extend the 1D concept to 2D.
Let the given point set be P = {p_1, p_2, …, p_n} with p_i = (x_i, y_i).
If |P| ≤ 3, use the brute-force method.
X ← sort(P) with respect to x value (presort)
Y ← sort(P) with respect to y value (presort)

Closest pair of points
Divide:
  Partition P, with respect to a vertical line L, into P_L and P_R with ⌈n/2⌉ and ⌊n/2⌋ points.
  X_L ← X(P_L); Y_L ← Y(P_L);
  X_R ← X(P_R); Y_R ← Y(P_R);
Conquer:
  δ_L = ClosestPair(P_L, X_L, Y_L)
  δ_R = ClosestPair(P_R, X_R, Y_R)
  δ = min(δ_L, δ_R)

Closest pair of points
Combine:
The closest pair is in the left set, or in the right set, or is formed by one point from each of the sets.
If the minimum distance is δ, then we have reached the solution.
Otherwise (i.e., the true minimum is less than δ), one point (p_L) is in P_L and the other point (p_R) is in P_R, and both must reside within δ units of L.

Closest pair of points
Combine (contd.):
Consider the vertical strip S of width 2δ centered at L. // Ignore the points outside the strip
Now, Y' ← strip(Y), sorted on y value.
For each point p in S:
  compute the distance to the next 5 points (in Y');
  if a distance is smaller, update the minimum distance and track the closest pair.

Closest pair of points
All points in P_L are at least δ apart, so there can be at most 4 points in a δ × δ square.
Similarly for the other half; so in total at most 6 candidate points in the δ × 2δ rectangle.
The closest pair lies within this δ × 2δ rectangle.

Closest pair of points
Running time:
Presort (with respect to x and y values): O(n log n).
Each recursive call requires the sorted arrays in x and y for its subset of points. This can be done in O(n). How?
  size(X_L) = ⌈n/2⌉; size(X_R) = ⌊n/2⌋; t = 0; r = 0;
  for i = 1 to n
    if X[i] ∈ P_L then X_L[t] ← X[i]; t++
    if X[i] ∈ P_R then X_R[r] ← X[i]; r++

Closest pair of points
Running time:
In the combine step, each point within the strip is compared with the next 5 points: O(n).
Complexity: T(n) + O(n log n), where T(n) = 2T(n/2) + O(n).
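The whole scheme fits in a short Python sketch, assuming distinct points. As a safety margin this sketch compares each strip point with its next 7 y-neighbors (a commonly used safe bound; the slides argue 5 suffices):

```python
import math

def closest_pair(points):
    """Divide-and-conquer closest pair over distinct (x, y) tuples.
    Returns the smallest pairwise distance."""
    px = sorted(points)                          # presort by x
    py = sorted(points, key=lambda p: p[1])      # presort by y
    return _closest(px, py)

def _closest(px, py):
    n = len(px)
    if n <= 3:                                   # brute-force base case
        return min(math.dist(px[i], px[j])
                   for i in range(n) for j in range(i + 1, n))
    mid = n // 2
    midx = px[mid][0]                            # vertical split line L
    left = set(px[:mid])
    ly = [p for p in py if p in left]            # y-sorted halves, built in O(n)
    ry = [p for p in py if p not in left]
    d = min(_closest(px[:mid], ly), _closest(px[mid:], ry))
    # combine: strip of width 2*d centered at L, scanned in y order
    strip = [p for p in py if abs(p[0] - midx) < d]
    for i in range(len(strip)):
        for j in range(i + 1, min(i + 8, len(strip))):
            d = min(d, math.dist(strip[i], strip[j]))
    return d
```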

Integer multiplication
Instance: two n-digit numbers X and Y (decimal or binary).
Output: XY, a number with at most 2n digits, in both cases.
With the school method, the number of operations is O(n^2), where n is the number of digits.

Integer multiplication
We consider two decimal, non-negative integers.
Input:
  X = x_{n-1} x_{n-2} x_{n-3} … x_2 x_1 x_0
  Y = y_{n-1} y_{n-2} y_{n-3} … y_2 y_1 y_0
Output:
  Z = z_{2n-2} z_{2n-3} z_{2n-4} … z_2 z_1 z_0

Integer multiplication
Split each number into two halves of n/2 digits each:
  X = a·10^{n/2} + b   (a: high half, b: low half)
  Y = c·10^{n/2} + d
Then XY = (a·c)·10^n + (a·d + b·c)·10^{n/2} + b·d, which naively needs four multiplications of n/2-digit numbers.

Let
  U = a*c
  V = b*d
  W = (a+b)*(c+d) = a*c + a*d + b*c + b*d
Then Z = U*10^n + (W-U-V)*10^{n/2} + V.
We need only three multiplications, each on numbers of n/2 digits.

Integer multiplication
integerMultiplication(X, Y)
  // split X into high/low halves a, b and Y into halves c, d
  U = integerMultiplication(a, c)
  V = integerMultiplication(b, d)
  W = integerMultiplication((a+b), (c+d))
  return U*10^n + (W-U-V)*10^{n/2} + V

Integer multiplication
Complexity: T(n) = 3T(n/2) + O(n).
The same algorithm can be used to multiply two binary numbers: we simply replace base 10 by base 2.
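A Python sketch of this three-multiplication scheme (Karatsuba's algorithm). The choice of split point (half the longer operand's digit count) is an assumption of this sketch:

```python
def karatsuba(x, y):
    """Multiply non-negative integers using 3 recursive half-size products."""
    if x < 10 or y < 10:                 # base case: a single-digit factor
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    a, b = divmod(x, 10 ** half)         # x = a*10^half + b
    c, d = divmod(y, 10 ** half)         # y = c*10^half + d
    u = karatsuba(a, c)                  # U = a*c
    v = karatsuba(b, d)                  # V = b*d
    w = karatsuba(a + b, c + d)          # W = (a+b)*(c+d)
    # Z = U*10^(2*half) + (W-U-V)*10^half + V
    return u * 10 ** (2 * half) + (w - u - v) * 10 ** half + v
```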

Matrix multiplication
We consider n × n matrices.
Using the brute-force method: Θ(n^3).
Partition each matrix into four n/2 × n/2 blocks: a, b, c, d (first matrix); e, f, g, h (second matrix); r, s, t, u (product). Then
  r = ae + bf
  s = ag + bh
  t = ce + df
  u = cg + dh
Eight n/2 × n/2 multiplications and four additions of n/2 × n/2 matrices:
  T(n) = 8T(n/2) + Θ(n^2)

Matrix multiplication
Strassen's idea: compute 14 n/2 × n/2 matrices A1, B1, A2, …, A7, B7, then
  P_i = A_i · B_i, for i = 1, …, 7.
Then compute r, s, t, u by simple matrix additions/subtractions of different combinations of the P_i.
Seven n/2 × n/2 multiplications and a constant number of additions of n/2 × n/2 matrices:
  T(n) = 7T(n/2) + Θ(n^2)

Strassen's method
Target:
  r = ae + bf
  s = ag + bh
  t = ce + df
  u = cg + dh
Products:
  P1 = a·(g - h) = ag - ah
  P2 = (a + b)·h = ah + bh
  P3 = (c + d)·e = ce + de
  P4 = d·(f - e) = df - de
  P5 = (a + d)·(e + h) = ae + ah + de + dh
  P6 = (b - d)·(f + h) = bf + bh - df - dh
  P7 = (a - c)·(e + g) = ae + ag - ce - cg
Combinations:
  s = P1 + P2
  t = P3 + P4
  r = P5 + P4 - P2 + P6 = ae + bf
  u = P5 + P1 - P3 - P7 = cg + dh
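An unoptimized Python rendering of this scheme for power-of-two sizes, using the block names above. Note that in the slides' notation the second matrix's quadrants are [[e, g], [f, h]] (so that r = ae + bf picks up the first column e, f):

```python
def strassen(A, B):
    """Multiply square matrices (size a power of 2) with 7 recursive products."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    def split(M):   # quadrants: top-left, top-right, bottom-left, bottom-right
        return ([row[:m] for row in M[:m]], [row[m:] for row in M[:m]],
                [row[:m] for row in M[m:]], [row[m:] for row in M[m:]])
    def add(X, Y): return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y): return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    a, b, c, d = split(A)
    e, g, f, h = split(B)            # B = [[e, g], [f, h]] as in the slides
    P1 = strassen(a, sub(g, h))
    P2 = strassen(add(a, b), h)
    P3 = strassen(add(c, d), e)
    P4 = strassen(d, sub(f, e))
    P5 = strassen(add(a, d), add(e, h))
    P6 = strassen(sub(b, d), add(f, h))
    P7 = strassen(sub(a, c), add(e, g))
    r = add(sub(add(P5, P4), P2), P6)    # r = P5 + P4 - P2 + P6
    s = add(P1, P2)                      # s = P1 + P2
    t = add(P3, P4)                      # t = P3 + P4
    u = sub(sub(add(P5, P1), P3), P7)    # u = P5 + P1 - P3 - P7
    top = [rr + rs for rr, rs in zip(r, s)]
    bottom = [rt + ru for rt, ru in zip(t, u)]
    return top + bottom
```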

Analysis of divide-and-conquer algorithms
Let the input problem of size n divide into subproblems of sizes n_1, n_2, n_3, …, n_r.
Let the running time of the problem be T(n).
Let t_d(n) and t_c(n) represent the time complexity of the divide step and the combine step, respectively. Then
  T(n) = t_d(n) + Σ T(n_i) + t_c(n), when n is greater than a certain value.

Analysis of divide-and-conquer algorithms
Usually (but not always) the subproblems are of the same size, say n/b, and then
  T(n) = r·T(n/b) + f(n), when n is greater than a certain value, where f(n) = t_d(n) + t_c(n).
If they are not of the same size, we sometimes write the expression as (depending on the requirement)
  T(n) ≤ r·T(n/b) + f(n), n > c.

Analysis of divide-and-conquer algorithms
Binary search: T(n) = T(n/2) + Θ(1)
Merge sort: T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + Θ(n)
Quick sort: T(n) = T(i) + T(n-i-1) + O(n)
Counting inversions problem: T(n) ≤ 2T(n/2) + O(n)
Closest pair: T(n) + O(n log n), where T(n) = 2T(n/2) + O(n)
Integer multiplication: T(n) = 3T(n/2) + O(n)
Matrix multiplication:
  T(n) = 8T(n/2) + Θ(n^2)
  T(n) = 7T(n/2) + Θ(n^2)

Solution of recurrence relations
Three approaches:
  Substitution method
  Iteration method
  Master method

Substitution method
We 'guess' that a certain function is a solution (or an upper or lower bound on the solution) to the recurrence relation, and we verify that this is correct by induction.
Consider T(n) = 2T(⌊n/2⌋) + O(n); guess T(n) = O(n log n).
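The inductive verification for this example can be written out as follows (a standard sketch, taking the O(n) term as n and assuming the inductive hypothesis T(⌊n/2⌋) ≤ c⌊n/2⌋ lg⌊n/2⌋):

```latex
% Inductive step for the guess T(n) \le c\,n\lg n:
\begin{aligned}
T(n) &= 2\,T(\lfloor n/2\rfloor) + n \\
     &\le 2\,c\,\lfloor n/2\rfloor \lg(\lfloor n/2\rfloor) + n \\
     &\le c\,n\,\lg(n/2) + n \\
     &= c\,n\lg n - c\,n + n \\
     &\le c\,n\lg n \quad\text{whenever } c \ge 1 .
\end{aligned}
```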

Substitution method
Making a good guess; avoiding pitfalls.
Consider T(n) = 2T(n/2) + n and the (wrong) guess T(n) = O(n), i.e., T(n) ≤ cn for some c > 0.
Substituting: T(n) ≤ 2·c·(n/2) + n = cn + n. But cn + n is not ≤ cn, so the induction does not go through, and the conclusion "T(n) = O(n)" is a wrong solution.
The initial assumption (guess) was wrong.

Substitution method
Changing variables: consider T(n) = 2T(⌊√n⌋) + lg n.
Let m = lg n, i.e., n = 2^m:
  T(2^m) = 2T(2^{m/2}) + m
Setting S(m) = T(2^m):
  S(m) = 2S(m/2) + m
  S(m) = O(m lg m)
  T(n) = O(lg n · lg lg n)

Iteration method
Repeatedly expand the recurrence relation, using the given form for the recurrence terms on the right side, until we reach the base case, for which we can substitute the given value.
Finally, we sum the accumulated values to find the bound.

Iteration method
T(n) = 3T(⌊n/4⌋) + n
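Expanding this recurrence level by level (the expansion itself was shown as a figure on the original slide; ignoring floors for clarity):

```latex
% Unrolling T(n) = 3T(\lfloor n/4\rfloor) + n:
\begin{aligned}
T(n) &= n + 3\,T(n/4) \\
     &= n + 3n/4 + 9\,T(n/16) \\
     &= n + 3n/4 + 9n/16 + 27\,T(n/64) \\
     &\le n \sum_{i=0}^{\infty} (3/4)^i = 4n ,
\end{aligned}
\qquad\text{so } T(n) = \Theta(n).
```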

Recursion tree
A recursion tree is a way to visualize what happens when a recursion is iterated.
E.g., T(n) = 2T(n/2) + n^2
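Summing the costs level by level in the tree for this example (level i has 2^i nodes, each costing (n/2^i)^2; the tree figure on the original slide shows the same sums):

```latex
\sum_{i=0}^{\lg n} 2^i \left(\frac{n}{2^i}\right)^{2}
  = n^2 \sum_{i=0}^{\lg n} \frac{1}{2^i}
  \le 2n^2 ,
\qquad\text{so } T(n) = \Theta(n^2).
```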

The master method
Let a ≥ 1 and b > 1, let f(n) be a monotonically non-negative function of n, and let T(n) = aT(n/b) + f(n) be a recurrence relation defined on the non-negative integers. Then T(n) can be bounded asymptotically as follows:
1. If f(n) = O(n^{log_b a - ε}) for some constant ε > 0, then T(n) = Θ(n^{log_b a}).
2. If f(n) = Θ(n^{log_b a}), then T(n) = Θ(n^{log_b a} lg n).
3. If f(n) = Ω(n^{log_b a + ε}) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n (the regularity condition), then T(n) = Θ(f(n)).

Analysis of divide-and-conquer algorithms
Binary search: T(n) = T(n/2) + Θ(1)  [case 2: Θ(log n)]
Merge sort: T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + Θ(n)  [case 2: Θ(n log n)]
Quick sort: T(n) = T(i) + T(n-i-1) + O(n)  — master method not applicable (uneven split)
Counting inversions problem: T(n) ≤ 2T(n/2) + O(n)  [case 2: O(n log n)]
Closest pair: T(n) + O(n log n), where T(n) = 2T(n/2) + O(n)  [case 2: O(n log n)]
Integer multiplication: T(n) = 3T(n/2) + O(n)  [case 1: O(n^{log_2 3}) ≈ O(n^{1.585})]
Matrix multiplication:
  T(n) = 8T(n/2) + Θ(n^2)  [case 1: Θ(n^3)]
  T(n) = 7T(n/2) + Θ(n^2)  [case 1: Θ(n^{log_2 7}) ≈ Θ(n^{2.81})]

Master theorem
T(n) = 3T(n/4) + n lg n
a = 3 and b = 4, so n^{log_4 3} = O(n^{0.793}); f(n) = n lg n = Ω(n^{log_4 3 + ε}) for ε ≈ 0.2.
Now test the regularity condition, a·f(n/b) ≤ c·f(n) for some 0 < c < 1:
  3·(n/4)·lg(n/4) ≤ (3/4)·n·lg n, so c = 3/4.
By case 3, the solution is T(n) = Θ(n lg n).

Master theorem
T(n) = 2T(n/2) + n lg n
a = 2 and b = 2, so n^{log_b a} = n. Here f(n) = n lg n is asymptotically larger than n, but not polynomially larger: n lg n ≠ Ω(n^{1+ε}) for any constant ε > 0.
So this recurrence falls into the gap between cases 2 and 3, and the master theorem does not apply. (Its solution, obtained by iteration, is Θ(n lg² n).)

Approximation by integrals

Let f(x) be a monotonically increasing function; then the sum can be bounded as
  ∫_{m-1}^{n} f(x) dx ≤ Σ_{k=m}^{n} f(k) ≤ ∫_{m}^{n+1} f(x) dx
Similarly, if f(x) is a monotonically decreasing function, then
  ∫_{m}^{n+1} f(x) dx ≤ Σ_{k=m}^{n} f(k) ≤ ∫_{m-1}^{n} f(x) dx

Consider f(k) = 1/k, for k > 0 (monotonically decreasing):
  Σ_{k=1}^{n} 1/k ≥ ∫_{1}^{n+1} dx/x = ln(n+1)
  Σ_{k=2}^{n} 1/k ≤ ∫_{1}^{n} dx/x = ln n
So the harmonic sum Σ_{k=1}^{n} 1/k = Θ(lg n).

Average case quick sort
Assumptions:
  All elements are distinct.
  The number of comparisons depends only on the relative order of the input data set, so we assume the data set is {1, 2, …, n}.
  All permutations are equally likely.
Let the input list be a random permutation of {1, 2, …, n} and the pivot element be s.

Average case quick sort
After partition, let the situation be
  (i_1, i_2, …, i_{s-1}, s, j_{s+1}, …, j_n)
where
  (i_1, i_2, …, i_{s-1}) is a random permutation of {1, 2, …, s-1}, and
  (j_{s+1}, j_{s+2}, …, j_n) is a random permutation of {s+1, s+2, …, n}.

Average case quick sort
Let Q_a(n) = the average number of comparisons done by 'Partition( )', over the whole of quick sort, on an input which is a random permutation of {1, 2, …, n}. Then
  Q_a(0) = Q_a(1) = 0
  Q_a(2) = 3
Any element can be the pivot element: prob(s is the pivot) = 1/n, for 1 ≤ s ≤ n.

Average case quick sort
Since each pivot value s occurs with probability 1/n, and partitioning an n-element list takes n+1 comparisons (consistent with Q_a(2) = 3 above):
  Q_a(n) = (n+1) + (1/n)·Σ_{s=1}^{n} [Q_a(s-1) + Q_a(n-s)]
         = (n+1) + (2/n)·Σ_{k=0}^{n-1} Q_a(k)
This recurrence solves to Q_a(n) ≈ 2n ln n = O(n log n).
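The average-case recurrence Q_a(n) = (n+1) + (2/n)·Σ_{k<n} Q_a(k), with Q_a(0) = Q_a(1) = 0 (assuming n+1 comparisons per partition, which matches Q_a(2) = 3 above), can be tabulated directly as a numeric sanity check:

```python
import math

def avg_comparisons(n):
    """Tabulate Q_a(m) = (m+1) + (2/m) * sum(Q_a(k) for k < m),
    with Q_a(0) = Q_a(1) = 0, by dynamic programming."""
    Q = [0.0] * (n + 1)
    for m in range(2, n + 1):
        Q[m] = (m + 1) + (2.0 / m) * sum(Q[:m])
    return Q[n]

print(avg_comparisons(2))   # 3.0, matching the base case above
# Q_a(n) grows like 2 n ln n, i.e. O(n log n):
print(avg_comparisons(1000) / (1000 * math.log(1000)))
```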