Recurrences.


Recurrences

Execution of a recursive program
Deriving & solving recurrence equations

Recursion
What is the recursive definition of n! ?

int fact(int n) {
    if (n <= 1) return 1;
    else return n * fact(n-1);   // note: the '*' is done after returning from fact(n-1)
}

Recursive algorithms
A recursive algorithm typically contains recursive calls to the same algorithm. In order for the recursive algorithm to terminate, it must contain code for directly solving some "base case(s)"; a direct solution does not include recursive calls. We use the following notation:
DirectSolutionSize is the "size" of the base case
DirectSolutionCount is the count of the number of operations done by the "direct solution"

A Call Tree for fact(3): The Run Time Environment
The following two slides show the program stack after the third call from fact(3). When a function is called, an activation record ('AR') is created and pushed on the program stack. The activation record stores copies of local variables, pointers to other ARs in the stack, and the return address. When a function returns, the stack is popped.

The call tree for fact(3):
fact(3)          returns 3 * 2 = 6
  fact(2)        returns 2 * 1 = 2
    fact(1)      returns 1

int fact(int n) { if (n<=1) return 1; else return n*fact(n-1); }

Each activation record (AR) on the program stack stores the parameter N, a slot for the function's return value, the return address, a static link to the defining environment (ep0), and a dynamic link to the caller's AR. EP is the environment pointer to the current AR.

Snapshot at the end of the third call to fact: the stack holds AR(Driver), AR(fact 3) (N=3, value=?), AR(fact 2) (N=2, value=?), and AR(fact 1) (N=1, value=1); EP points to AR(fact 1).

Snapshot at the end of the second call to fact: AR(fact 1) has been popped and AR(fact 2) holds function value 2; EP points to AR(fact 2).

Snapshot at the end of the first call to fact: AR(fact 2) has been popped and AR(fact 3) holds function value 6; EP points to AR(fact 3).

Snapshot after fact returns: only AR(Driver) remains on the program stack.

Goal: Analyzing recursive algorithms
Until now we have only analyzed (derived a count for) non-recursive algorithms. In order to analyze recursive algorithms we must learn to:
1. Derive the recurrence equation from the code
2. Solve recurrence equations

Deriving a Recurrence Equation for a Recursive Algorithm
Our goal is to compute the count (time) T(n) as a function of n, where n is the size of the problem. We will first write a recurrence equation for T(n), for example T(n) = T(n-1) + 1 with T(1) = 0. Then we will solve the recurrence equation. When we solve the recurrence above we find T(n) = n - 1, so any such recursive algorithm is linear in n.

Deriving a Recurrence Equation for a Recursive Algorithm
1. Determine the "size of the problem". The count T is a function of this size.
2. Determine DirectSolSize, such that for size ≤ DirectSolSize the algorithm computes a direct solution, and determine the corresponding DirectSolCount(s).

Deriving a Recurrence Equation for a Recursive Algorithm
To determine the GeneralCount:
3. Identify all the recursive calls done by a single call of the algorithm and their counts T(n1), ..., T(nk), and compute RecursiveCallSum = T(n1) + ... + T(nk).
4. Determine the "non-recursive count" t(size) done by a single call of the algorithm, i.e., the amount of work excluding the recursive calls.

Deriving DirectSolutionCount for Factorial

int fact(int n) {
    if (n <= 1) return 1;
    else return n * fact(n-1);
}

1. Size = n
2. DirectSolSize is (n <= 1)
3. DirectSolCount is Θ(1): the algorithm does a small constant number of operations (comparing n to 1, and returning)

Deriving a GeneralCount for Factorial

int fact(int n) {
    if (n <= 1) return 1;        // operations counted in t(n)
    else return n * fact(n-1);   // the only recursive call, requiring T(n-1) operations
}

3. RecursiveCallSum = T(n-1)
4. t(n) = Θ(1) (the if, *, -, and return)

Deriving a Recurrence Equation for Mystery
In this example we are not concerned with what mystery computes; our goal is simply to write the recurrence equation.

mystery(n)
1. if n ≤ 1
2.   then return n
3.   else return (5 * mystery(n-1) - 6 * mystery(n-2))

Deriving a DirectSolutionCount for Mystery
1. Size = n
2. DirectSolSize is (n <= 1)
3. DirectSolCount is Θ(1): the algorithm does a small constant number of operations (comparing n to 1, and returning)

Deriving a GeneralCount for Mystery

mystery(n)
1. if n ≤ 1
2.   then return n                                       // operations counted in t(n)
3.   else return (5 * mystery(n-1) - 6 * mystery(n-2))   // two recursive calls, requiring T(n-1) and T(n-2)

Deriving a Recurrence Equation for Mystery
3. RecursiveCallSum = T(n-1) + T(n-2)
4. t(n) = Θ(1): the non-recursive count includes the comparison n ≤ 1, the two multiplications, the subtraction, and the return, so Θ(1).

Deriving a Recurrence Equation for Multiply

Multiply(y, z)   // binary multiplication; addition is the barometer operation
1. if (z == 0) return 0
2. else if z is odd
3.   then return (Multiply(2y, ⌊z/2⌋) + y)
4. else return (Multiply(2y, ⌊z/2⌋))

To derive the recurrence equation we need to determine the size of the problem. The variable that is decreased, and that triggers the direct solution, is z, so we can use z as the size. Another possibility is to use n, the number of bits in z (n = ⌊lg z⌋ + 1). Let addition be our barometer operation. We do 0 additions for the direct solution, so DirectSolutionSize = 0 and DirectSolutionCount = 0.

Deriving a Recurrence Equation for Multiply
1. With z as the size: RecursiveCallSum = T(⌊z/2⌋), and t(z) = 1 addition in the worst case. So T(z) = T(⌊z/2⌋) + 1, T(0) = 0. Solving this equation we get T(z) = Θ(lg z) = Θ(n).
2. With n bits as the size: RecursiveCallSum = T(n-1), since ⌊z/2⌋ has n-1 bits. So T(n) = T(n-1) + 1, T(0) = 0. Solving this equation we get T(n) = Θ(n).

Deriving a recurrence equation for minimum

Minimum(A, lt, rt)   // initial call: Minimum(A, 1, length(A))
1. if rt - lt ≤ 1
2.   then return (min(A[lt], A[rt]))
3.   else m1 = Minimum(A, lt, ⌊(lt+rt)/2⌋)
4.        m2 = Minimum(A, ⌊(lt+rt)/2⌋ + 1, rt)
5.        return (min(m1, m2))

This procedure computes the minimum of the left half of the array and the minimum of the right half of the array, and returns the minimum of the two.
Size = rt - lt + 1 = n
DirectSolutionSize is n = (rt - lt) + 1 ≤ 1 + 1 = 2
DirectSolutionCount = Θ(1)

Deriving a recurrence equation for minimum

Minimum(A, lt, rt)   // initial call: Minimum(A, 1, length(A))
1. if rt - lt ≤ 1
2.   then return (min(A[lt], A[rt]))
3.   else m1 = Minimum(A, lt, ⌊(lt+rt)/2⌋)      // recursive call on the left half
4.        m2 = Minimum(A, ⌊(lt+rt)/2⌋ + 1, rt)  // recursive call on the right half
5.        return (min(m1, m2))

Two recursive calls, each on about n/2 elements, so RecursiveCallSum = T(⌈n/2⌉) + T(⌊n/2⌋).

Computing x^n
We can decrease the number of times x is multiplied by itself.
Example: to compute x^8 we need only 3 multiplications (x^2 = x*x, x^4 = x^2*x^2, and x^8 = x^4*x^4); to compute x^9 we need 4 multiplications (x^2 = x*x, x^4 = x^2*x^2, x^8 = x^4*x^4, and x^9 = x^8*x).
A recursive procedure for computing x^n should be Θ(lg n). For the following algorithm we will derive the worst-case recurrence equation.
Note: we are ignoring the fact that the numbers will quickly become too large to store in a long integer.

Recursion - Power

long power(long x, long n) {
    if (n == 0) {
        return 1;
    } else {
        long p = power(x, n/2);
        if ((n % 2) == 0) {
            return p * p;
        } else {
            return x * p * p;
        }
    }
}

We use the '*' as our barometer operation.

Deriving a Recurrence Equation for Power
Size = n
DirectSolutionSize is n = 0; DirectSolutionCount is 0
RecursiveCallSum is T(⌊n/2⌋)
t(n) = 2 multiplications in the worst case
T(n) = 0 for n = 0
T(n) = T(⌊n/2⌋) + 2 for n > 0

Solving recurrence equations
Five techniques for solving recurrence equations:
1. Recursion tree
2. Iteration method
3. Induction (Guess and Test)
4. Master Theorem (master method)
5. Characteristic equations
We discuss these methods with examples.

Deriving the count using the recursion tree method
Recursion trees provide a convenient way to represent the unrolling of a recursive algorithm. A recursion tree is not a formal proof, but it is a good technique for computing the count. Once the tree is generated, each node contains its "non-recursive number of operations" t(n) or its DirectSolutionCount. The count is derived by summing the "non-recursive number of operations" of all the nodes in the tree. For convenience we usually compute the sum for all nodes at each given depth, and then sum these sums over all depths.

Building the recursion tree
The initial recursion tree has a single node containing two fields: the recursive call (for example Factorial(n)) and the corresponding count T(n). The tree is generated by unrolling the recursion of the node at depth 0, then unrolling the recursion for the nodes at depth 1, etc. The recursion is unrolled as long as the size of the recursive call is greater than DirectSolutionSize.

Building the recursion tree
When the "recursion is unrolled", each current leaf node is substituted by a subtree containing a root and a child for each recursive call done by the algorithm. The root of the subtree contains the recursive call and the corresponding "non-recursive count". Each child node contains a recursive call and its corresponding count. The unrolling continues until the "size" in the recursive call is DirectSolutionSize. Nodes with a call of DirectSolutionSize are not "unrolled", and their count is replaced by DirectSolutionCount.

Example: recursive factorial
Initially, the recursion tree is a node containing the call to Factorial(n) and the count T(n). When we unroll the computation this node is replaced with a subtree containing a root and one child:
The root of the subtree contains the call to Factorial(n) and the "non-recursive count" for this call, t(n) = Θ(1).
The child node contains the recursive call to Factorial(n-1) and the count for this call, T(n-1).

After the first unrolling

depth  nd  count
0      1   Factorial(n):   t(n) = Θ(1)
1      1   Factorial(n-1): T(n-1)

nd denotes the number of nodes at that depth.

After the second unrolling

depth  nd  count
0      1   Factorial(n):   t(n) = Θ(1)
1      1   Factorial(n-1): t(n-1) = Θ(1)
2      1   Factorial(n-2): T(n-2)

After the third unrolling

depth  nd  count
0      1   Factorial(n):   t(n) = Θ(1)
1      1   Factorial(n-1): t(n-1) = Θ(1)
2      1   Factorial(n-2): t(n-2) = Θ(1)
3      1   Factorial(n-3): T(n-3)

For Factorial, DirectSolutionSize = 1 and DirectSolutionCount = Θ(1).

The recursion tree

depth  nd  count
0      1   Factorial(n):   t(n) = Θ(1)
1      1   Factorial(n-1): t(n-1) = Θ(1)
2      1   Factorial(n-2): t(n-2) = Θ(1)
3      1   Factorial(n-3): t(n-3) = Θ(1)
...
n-1    1   Factorial(1):   T(1) = Θ(1)

The sum: T(n) = n * Θ(1) = Θ(n)

Divide and Conquer Basic idea: divide a problem into smaller portions, solve the smaller portions and combine the results. Name some algorithms you already know that employ this technique. D&C is a top down approach. Usually, we use recursion to implement D&C algorithms. The following is an “outline” of a divide and conquer algorithm

Divide and Conquer

procedure Solution(I)
begin
    if size(I) <= smallSize then
        return DirectSolution(I)        {use direct solution}
    else begin                          {decompose, solve each, and combine}
        Decompose(I, I1, ..., Ik)
        for i = 1 to k do
            Si = Solution(Ii)           {solve a smaller problem}
        return Combine(S1, ..., Sk)     {combine solutions}
    end
end {Solution}

Divide and Conquer
Let size(I) = n
DirectSolutionCount = DS(n)
t(n) = D(n) + C(n), where:
D(n) = count for dividing the problem into subproblems
C(n) = count for combining the solutions

Divide and Conquer
Main advantages:
Code: simple
Algorithm: efficient
Implementation: parallel computation

Binary search
Assumption: the list S[low..high] is sorted, and x is the search key.
If the search key x is in the list, x == S[i], and the index i is returned. If x is not in the list, NoSuchKey is returned.

Binary search
The problem is divided into 3 subproblems: x = S[mid], x ∈ S[low..mid-1], or x ∈ S[mid+1..high]. The first case (x = S[mid]) is solved directly. The other cases, x ∈ S[low..mid-1] or x ∈ S[mid+1..high], require a recursive call. When the array is empty the search terminates with a "non-index" value.

BinarySearch(S, k, low, high)
1. if low > high then return NoSuchKey
2. else mid ← ⌊(low+high)/2⌋
3.   if (k == S[mid]) return mid
4.   else if (k < S[mid]) then return BinarySearch(S, k, low, mid-1)
5.   else return BinarySearch(S, k, mid+1, high)

Worst case analysis
A worst-case input (what is it?) causes the algorithm to keep searching until low > high. We count the number of comparisons of a list element with x per recursive call. Assume 2^k ≤ n < 2^(k+1), so k = ⌊lg n⌋. T(n) is the worst-case number of comparisons for the call BS(n).

Recursion tree for BinarySearch (BS) BS(n) T(n) Initially, the recursive tree is a node containing the call to BS(n), and total amount of work in the worst case, T(n). When we unroll the computation this node is replaced with a subtree containing a root and one child: The root of the subtree contains the call to BS(n) , and the “nonrecursive work” for this call t(n). The child node contains the recursive call to BS(n/2), and the total amount of work in the worst case for this call, T(n/2).

After first unrolling

depth  nd  count
0      1   BS(n):   t(n) = 1
1      1   BS(n/2): T(n/2)

After second unrolling

depth  nd  count
0      1   BS(n):   t(n) = 1
1      1   BS(n/2): t(n/2) = 1
2      1   BS(n/4): T(n/4)

After third unrolling

depth  nd  count
0      1   BS(n):   t(n) = 1
1      1   BS(n/2): t(n/2) = 1
2      1   BS(n/4): t(n/4) = 1
3      1   BS(n/8): T(n/8)

For BinarySearch, DirectSolutionSize = 0 or 1, and DirectSolutionCount = 0 for size 0, and 1 for size 1.

Terminating the unrolling
Let 2^k ≤ n < 2^(k+1), so k = ⌊lg n⌋. When a node has a call to BS(n/2^k) (or to BS(n/2^(k+1))): the size of the list is DirectSolutionSize, since ⌊n/2^k⌋ = 1 (or ⌊n/2^(k+1)⌋ = 0). In this case the unrolling terminates, and the node is a leaf containing DirectSolutionCount (DSC) = 1 (or 0).

The recursion tree

depth  nd  count
0      1   BS(n):     t(n) = 1
1      1   BS(n/2):   t(n/2) = 1
2      1   BS(n/4):   t(n/4) = 1
...
k      1   BS(n/2^k): DSC = 1

T(n) = k + 1 = ⌊lg n⌋ + 1

Merge Sort
Input: S of size n. Output: a permutation of S such that if i > j then S[i] ≥ S[j].
Divide: If S has at least 2 elements, divide it into S1 and S2. S1 contains the first ⌈n/2⌉ elements of S; S2 has the last ⌊n/2⌋ elements of S.
Recur: Recursively sort S1 and S2.
Conquer: Merge sorted S1 and S2 into S.

Merge Sort: Input: S
1. if (n ≥ 2)
2.   moveFirst(S, S1, ⌈n/2⌉)    // divide
3.   moveSecond(S, S2, ⌊n/2⌋)   // divide
4.   Sort(S1)                   // recur
5.   Sort(S2)                   // recur
6.   Merge(S1, S2, S)           // conquer

Merge(S1, S2, S): Pseudocode
Input: S1, S2, sorted in nondecreasing order. Output: S, a sorted sequence.
1. while (S1 is not empty && S2 is not empty)
2.   if (S1.first() ≤ S2.first())
3.     remove the first element of S1 and move it to the end of S
4.   else remove the first element of S2 and move it to the end of S
5. move the remaining elements of the non-empty sequence (S1 or S2) to S

Deriving a recurrence equation for Merge Sort:
1. if (n ≥ 2)
2.   moveFirst(S, S1, ⌈n/2⌉)    // divide
3.   moveSecond(S, S2, ⌊n/2⌋)   // divide
4.   Sort(S1)                   // recur
5.   Sort(S2)                   // recur
6.   Merge(S1, S2, S)           // conquer

DirectSolutionSize is n < 2
DirectSolutionCount is Θ(1)
RecursiveCallSum is T(⌈n/2⌉) + T(⌊n/2⌋)
The non-recursive count t(n) = Θ(n)

Recurrence Equation (cont'd)
The implementation of the moves costs Θ(n) and the merge costs Θ(n), so the total cost for dividing and merging is Θ(n). The recurrence relation for the run time of Sort is:
T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + Θ(n), with T(0) = T(1) = Θ(1).
The solution is T(n) = Θ(n lg n).

Recursion tree for MergeSort (MS) MS(n) T(n) Initially, the recursive tree is a node containing the call to MS(n), and total amount of work in the worst case, T(n). When we unroll the computation this node is replaced with a subtree: The root of the subtree contains the call to MS(n) , and the “nonrecursive work” for this call t(n). The two children contain a recursive call to MS(n/2), and the total amount of work in the worst case for this call, T(n/2).

After first unrolling of mergeSort

depth  nd  count
0      1   MS(n): t(n) = Θ(n)
1      2   MS(n/2), MS(n/2): 2T(n/2)

After second unrolling

depth  nd  count
0      1   MS(n): Θ(n)
1      2   MS(n/2) x 2: Θ(n/2) + Θ(n/2) = Θ(n)
2      4   MS(n/4) x 4: 4T(n/4)

After third unrolling

depth  nd  count
0      1   MS(n): Θ(n)
1      2   MS(n/2) x 2: Θ(n)
2      4   MS(n/4) x 4: Θ(n)
3      8   MS(n/8) x 8: 8T(n/8)

Terminating the unrolling
For simplicity let n = 2^k, so lg n = k. When a node has a call to MS(n/2^k): the size of the list to merge sort is DirectSolutionSize, since n/2^k = 1. In this case the unrolling terminates, and the node is a leaf containing DirectSolutionCount = Θ(1).

The recursion tree

depth  nd   count
0      1    Θ(n/2^0) per node: Θ(n) total
1      2    Θ(n/2^1) per node: Θ(n) total
...
k      2^k  Θ(1) per node: Θ(n) total

T(n) = (k+1) Θ(n) = Θ(n lg n)

The sum for each depth
At depth d there are nd = 2^d nodes, each contributing Θ(n/2^d), so each depth sums to 2^d × Θ(n/2^d) = Θ(n). At depth d = k the nd = 2^k leaves contribute Θ(1) each, again Θ(n) in total.

Induction method
Estimate the form of the solution, then prove it by mathematical induction.
For example, computing Factorial: Fact(n) = Fact(n-1) * n. Number of '*': T(n) = T(n-1) + 1 (recurrence equation).
Proof:
Induction base: for n = 0, T(0) = 0.
Induction hypothesis: for an arbitrary nonnegative integer n, T(n) = n (the guess).
Induction step: show T(n+1) = n + 1. Replacing n by n+1 in the recurrence equation, we get T(n+1) = T(n) + 1 = n + 1, as required.
This completes the proof that the guess is correct, so T(n) = n = Θ(n).

Induction (Guess and Test) for Binary Search
Guess: W(n) = ⌊lg n⌋ + 1
Proof by induction:
Induction base: n = 1. W(1) = ⌊lg 1⌋ + 1 = 0 + 1 = 1.
Induction hypothesis: assume for 1 ≤ k < n that W(k) = ⌊lg k⌋ + 1 (in particular, this holds for ⌊n/2⌋).
Proof for n: W(n) = 1 + W(⌊n/2⌋) = 1 + ⌊lg ⌊n/2⌋⌋ + 1 (see next slide).

Proof Continued.

Iteration for binary search

Characteristic Equation (example) Solve the recurrence:

Characteristic Equation (one case)
The homogeneous linear recurrence equation with constant coefficients:
t(n) = c1 t(n-1) + c2 t(n-2) + ... + ck t(n-k)
Its characteristic equation is:
r^k = c1 r^(k-1) + c2 r^(k-2) + ... + ck
If the characteristic equation has k distinct roots r1, r2, ..., rk, the general solution to the recurrence is:
t(n) = b1 r1^n + b2 r2^n + ... + bk rk^n
where the constants b1, ..., bk are determined by the initial conditions.

The Master Method
Solves recurrences of the form T(n) = aT(n/b) + f(n), where a ≥ 1, b > 1, and f(n) is an asymptotically positive function. The time complexity T(n) is bounded in the following three cases:
Case 1: if f(n) = O(n^(log_b a - ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).
Case 2: if f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) lg n).
Case 3: if f(n) = Ω(n^(log_b a + ε)) for some ε > 0, and a f(n/b) ≤ c f(n) for some c < 1 and all large n, then T(n) = Θ(f(n)).
Example: T(n) = 4T(n/2) + n. Here a = 4, b = 2, n^(log_2 4) = n^2, and f(n) = n = O(n^(2-ε)), so Case 1 applies and the solution is T(n) = Θ(n^2).

Master theorem (alternative form)

Master method examples: Case 1
T(n) = 9T(n/3) + Θ(n). a = 9, b = 3, so n^(log_b a) = n^(log_3 9) = n^2. Since Θ(n) = Θ(n^(log_3 9 - 1)), we are dealing with Case 1. Therefore T(n) = Θ(n^(log_b a)) = Θ(n^(log_3 9)) = Θ(n^2).
T(n) = 7T(n/2) + Θ(n^2). a = 7, b = 2, so n^(log_b a) = n^(log_2 7). Since Θ(n^2)/n^(log_2 7) = Θ(n^(2 - lg 7)) = Θ(n^(-0.8)), we are dealing with Case 1. Therefore T(n) = Θ(n^(log_b a)) = Θ(n^(log_2 7)).

Master method examples: Case 2
T(n) = T(n/2) + Θ(1). a = 1, b = 2, so n^(log_b a) = n^(log_2 1) = n^0 = 1. Since Θ(1)/1 = Θ(lg^0 n), we are dealing with Case 2 (with k = 0): T(n) = Θ(n^(log_b a) lg^(k+1) n) = Θ(lg n).
T(n) = 2T(n/2) + Θ(n). a = 2, b = 2, so n^(log_b a) = n^(log_2 2) = n. Since Θ(n)/n = Θ(1) = Θ(lg^0 n), we are dealing with Case 2 (with k = 0): T(n) = Θ(n^(log_b a) lg^(k+1) n) = Θ(n lg n).

Master method examples: Case 3
T(n) = 4T(n/2) + n^3. a = 4, b = 2, so n^(log_b a) = n^(log_2 4) = n^2. Since n^3/n^2 = n, we are dealing with Case 3. We must find a c < 1 such that a f(n/b) ≤ c f(n) for all n > N: here 4(n/2)^3 = n^3/2 ≤ (5/8)n^3 for all n ≥ 1. Therefore T(n) = Θ(n^3).
The master method does not solve all recurrences of the given form.

Master method fails
T(n) = 4T(n/2) + n^2/lg n. a = 4, b = 2, so n^(log_b a) = n^(log_2 4) = n^2. Here (n^2/lg n)/n^2 = 1/lg n, and no case applies: 1/lg n is asymptotically smaller than any Θ(lg^k n) of Case 2 but larger than the n^(-ε) required by Case 1. The solution is T(n) = Θ(n^2 lg lg n), which can be verified by the substitution method.

A More General Recursion Tree: T(n) = aT(n/b) + f(n)
At depth 0 there is one node with non-recursive cost f(n); at depth 1 there are a nodes costing a·f(n/b) in total; at depth 2, a^2 nodes costing a^2·f(n/b^2); in general, depth i contributes a^i f(n/b^i). With n = b^k, i.e. k = log_b n, the leaves are at depth k, where f(n/b^k) = f(1) = Θ(1), and there are a^(log_b n) = n^(log_b a) of them.

Deriving a Recurrence Equation from Recursive Algorithms
Decide what "n", the problem size, is.
See what value of n is used as the base of the recursion. It will usually be a single value (e.g. n = 1), but it may be multiple values. Let the base value be n0.
Figure out what T(n0) is. You can usually use "some constant c"; sometimes a specific number will be needed.
The general T(n) is usually a sum of various choices of T(m) (the recursive calls) plus the sum of the other work done:
T(n) = aT(f(n)) + bT(g(n)) + ... + c(n)
where a and b are the numbers of subproblems, f(n) and g(n) are the size functions, and c(n) is the other work.

Recursion - Power

long power(long x, long n) {
    if (n == 0) {
        return 1;
    } else if ((n % 2) == 0) {
        return power(x, n/2) * power(x, n/2);
    } else {
        return x * power(x, n/2) * power(x, n/2);
    }
}

Worst case: count 2 multiplications per call.

T(n) = 2T(n/2) + 2, n = 2^k; T(0) = 0 (worst case)

After the first unrolling:
problem size   number of nodes   number of ops
n              1                 2
n/2            2                 2·T(n/2)

After the second unrolling:
n              1                 2
n/2            2                 2·2 = 4
n/4            4                 4·T(n/4)

T(n) = 2T(n/2) + 2, n = 2^k

After the third unrolling:
problem size   number of nodes   number of ops
n              1                 2
n/2            2                 2·2 = 4
n/4            4                 4·2 = 8
n/8            8                 8·T(n/8)

T(n) = 2T(n/2) + 2, n = 2^k

The full tree:
problem size   number of nodes   number of ops
n              1                 2^0·2 = 2
n/2            2                 2^1·2 = 4
n/4            4                 2^2·2 = 8
n/8            8                 2^3·2 = 16
...
n/2^k = 1      2^k               2^k·2
n/2^(k+1) = 0  2^(k+1)           0 (T(0) = 0)

Summing the operations: 2(2^0 + 2^1 + 2^2 + ... + 2^k) = 2(2^(k+1) - 1) = 2(2n - 1) = Θ(n)