CS 340 - Chapter 2: Algorithm Analysis


Slide 1: Time Complexity

The best-, worst-, and average-case complexities of a given algorithm are numerical functions of the size of its instances. It is difficult to work with these functions exactly because they are often very complicated, with many little up and down bumps. Thus it is usually cleaner and easier to talk about upper and lower bounds on such functions. This is where Big-O notation comes into the picture.

Slide 2: Time Complexity

[Graph: upper and lower bounds smooth out the behavior of complex functions.]

Slide 3: Time Complexity - Big-O

T(n) = O(f(n)) means that c·f(n) is an upper bound on T(n): there exists a constant c such that T(n) ≤ c·f(n) for all sufficiently large n (i.e., for all n ≥ some threshold n₀).

Example: n³ + 3n² + 6n + 5 is O(n³). (Use c = 15 and n₀ = 1.)
Example: n² + n log n is O(n²). (Use c = 2 and n₀ = 1.)
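
As a quick sanity check of the first example, here is a small program (a sketch, not part of the original slides) that compares T(n) = n³ + 3n² + 6n + 5 against c·f(n) = 15n³ for several values of n ≥ n₀ = 1:

    #include <iostream>

    int main() {
        const long long c = 15;                      // witness constant from the slide
        for (long long n = 1; n <= 100000; n *= 10) {
            long long T = n*n*n + 3*n*n + 6*n + 5;   // T(n) = n^3 + 3n^2 + 6n + 5
            long long bound = c * (n*n*n);           // c*f(n) = 15n^3
            std::cout << "n = " << n << ":  T(n) = " << T
                      << (T <= bound ? "  <=  " : "  >  ")
                      << "15n^3 = " << bound << '\n';
        }
    }

Every line printed shows T(n) ≤ 15n³, as the definition requires.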

Slide 4: Demonstrating the Big-O Concept

Each of the algorithms below has O(n³) time complexity. (In fact, the execution time for Algorithm A is n³ + n² + n, and the execution time for Algorithm B is n³ + 101n² + n.)

    Input Size n    Algorithm A                Algorithm B
    10              1,110                      11,110
    100             1,010,100                  2,010,100
    1,000           1,001,001,000              1,101,001,000
    10,000          1,000,100,010,000          1,010,100,010,000
    100,000         1,000,010,000,100,000      1,001,010,000,100,000
    1,000,000       1,000,001,000,001,000,000  1,000,101,000,001,000,000

Slide 5: A Second Big-O Demonstration

Each of the algorithms below has O(n²) time complexity. (In fact, the execution time for Algorithm C is n² + 2n + 3, and the execution time for Algorithm D is n² + 1002n + 3.)

    Input Size n    Algorithm C        Algorithm D
    10              123                10,123
    100             10,203             110,203
    1,000           1,002,003          2,002,003
    10,000          100,020,003        110,020,003
    100,000         10,000,200,003     10,100,200,003
    1,000,000       1,000,002,000,003  1,001,002,000,003

Slide 6: One More Big-O Demonstration

Each of the algorithms below has O(n log n) time complexity. (In fact, the execution time for Algorithm E is n log n + 5n, and the execution time for Algorithm F is n log n + 105n. Note that the linear term for Algorithm F will dominate until n reaches 2¹⁰⁵.)

    Input Size n    Algorithm E   Algorithm F
    10              83            1,083
    100             1,164         11,164
    1,000           14,966        114,966
    10,000          182,877       1,182,877
    100,000         2,160,964     12,160,964
    1,000,000       24,931,569    124,931,569
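
All three tables can be regenerated with one short program (a sketch, not from the slides; it assumes the base-2 logarithm used throughout the chapter, so the rounded E and F values could differ from the slide by one unit in the last digit):

    #include <cmath>
    #include <iostream>

    int main() {
        for (long long n = 10; n <= 1000000; n *= 10) {
            double lg = std::log2(static_cast<double>(n));
            std::cout << "n = " << n
                      << "  A = " << n*n*n + n*n + n            // n^3 + n^2 + n
                      << "  B = " << n*n*n + 101*n*n + n        // n^3 + 101n^2 + n
                      << "  C = " << n*n + 2*n + 3              // n^2 + 2n + 3
                      << "  D = " << n*n + 1002*n + 3           // n^2 + 1002n + 3
                      << "  E = " << std::llround(n*lg + 5*n)   // n log n + 5n
                      << "  F = " << std::llround(n*lg + 105*n) // n log n + 105n
                      << '\n';
        }
    }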

Slide 7: Big-O Represents an Upper Bound

If T(n) is O(f(n)), then f(n) is essentially a cap on how badly T(n) can behave once n gets big.

[Graph of six functions: g(n), r(n), b(n), p(n), y(n), and v(n).] For each pair in the graph, ask:

Is g(n) O(r(n))? Is r(n) O(g(n))?
Is v(n) O(y(n))? Is y(n) O(v(n))?
Is b(n) O(p(n))? Is p(n) O(b(n))?

Slide 8: Time Complexity Terminology: Big-Omega

Function T(n) is said to be Ω(g(n)) if there are positive constants c and n₀ such that T(n) ≥ c·g(n) for all n ≥ n₀ (i.e., T(n) is ultimately bounded below by c·g(n)).

Example: n³ + 3n² + 6n + 5 is Ω(n³). (Use c = 1 and n₀ = 1.)
Example: n² + n log n is Ω(n²). (Use c = 1 and n₀ = 1.)

[Graph:] r(n) is not Ω(g(n)), since for every positive constant c, c·g(n) ultimately gets bigger than r(n). g(n) is Ω(r(n)), since g(n) exceeds (1)·r(n) for all n past some value n_r.

Slide 9: Time Complexity Terminology: Big-Theta

Function T(n) is said to be Θ(h(n)) if T(n) is both O(h(n)) and Ω(h(n)).

Example: n³ + 3n² + 6n + 5 is Θ(n³).
Example: n² + n log n is Θ(n²).

[Graph:] r(n) is Θ(g(n)), since r(n) is squeezed between (1)·g(n) and (2)·g(n) once n exceeds n₀. g(n) is Θ(r(n)), since g(n) is squeezed between (½)·r(n) and (1)·r(n) once n exceeds n₀.

Slide 10: Time Complexity Terminology: Little-o

Function T(n) is said to be o(p(n)) if T(n) is O(p(n)) but not Θ(p(n)).

Example: n³ + 3n² + 6n + 5 is O(n⁴). (Use c = 15 and n₀ = 1.) However, n³ + 3n² + 6n + 5 is not Θ(n⁴), and so it is o(n⁴).

Proof (by contradiction): Assume there are positive constants c and n₀ such that n³ + 3n² + 6n + 5 ≥ c·n⁴ for all n ≥ n₀. Dividing both sides by n⁴ yields (1/n) + (3/n²) + (6/n³) + (5/n⁴) ≥ c for all n ≥ n₀. Since lim_{n→∞} ((1/n) + (3/n²) + (6/n³) + (5/n⁴)) = 0, we must conclude that 0 ≥ c, which contradicts the fact that c is a positive constant.
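
For the positive, well-behaved functions used in this chapter, the same conclusion can be reached with a limit test (a standard criterion, not stated on the slide):

$$T(n) = o(p(n)) \iff \lim_{n \to \infty} \frac{T(n)}{p(n)} = 0,$$

which is exactly the limit computed in the proof above.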

Slide 11: Computational Model for Algorithm Analysis

To formally analyze the performance of algorithms, we will use a computational model with a couple of simplifying assumptions:

– Each simple instruction (assignment, comparison, addition, multiplication, memory access, etc.) is assumed to execute in a single time unit.
– Memory is assumed to be limitless, so there is always room to store whatever data is needed.

The size of the input, n, will normally be used as our main variable, and we'll primarily be interested in "worst case" scenarios.

Slide 12: General Rules for Running Time Calculation

Rule One: Loops. The running time of a loop is at most the running time of the statements inside the loop, multiplied by the number of iterations.

Example:

    for (i = 0; i < n; i++)           // n iterations
        A[i] = (1-t)*X[i] + t*Y[i];   // 12 time units per iteration

(Retrieving X[i] requires one addition and one memory access, as does retrieving Y[i]; the calculation involves a subtraction, two multiplications, and an addition; assigning A[i] requires one addition and one memory access; and each loop iteration requires a comparison plus either an assignment or an increment. That totals twelve primitive operations per iteration.)

Thus, the total running time is 12n time units; i.e., this part of the program is O(n).

Slide 13: Rule Two: Nested Loops

The running time of a nested loop is at most the running time of the statements inside the innermost loop, multiplied by the product of the numbers of iterations of all of the loops.

Example:

    for (i = 0; i < n; i++)             // n iterations, 2 ops each
        for (j = 0; j < n; j++)         // n iterations, 2 ops each
            C[i][j] = j*A[i] + i*B[j];  // 10 time units per iteration

(2 for retrieving A[i], 2 for retrieving B[j], 3 for the right-hand-side arithmetic, and 3 for assigning C[i][j].) Total running time: ((10+2)n + 2)n = 12n² + 2n time units, which is O(n²).

More complex example (ignoring the for-loop overhead):

    for (i = 0; i < n; i++)                       // n iterations
        for (j = i; j < n; j++)                   // n-i iterations
            C[j][i] = C[i][j] = j*A[i] + i*B[j];  // 13 time units per iteration

Total running time: Σ_{i=0}^{n−1} (Σ_{j=i}^{n−1} 13) = Σ_{i=0}^{n−1} 13(n−i) = 13(Σ_{i=0}^{n−1} n − Σ_{i=0}^{n−1} i) = 13(n² − ½n(n−1)) = 6.5n² + 6.5n time units, which is also O(n²).

Slide 14: Rule Three: Consecutive Statements

The running time of a sequence of statements is merely the sum of the running times of the individual statements.

Example:

    for (i = 0; i < n; i++) {          // 22n time units
        A[i] = (1-t)*X[i] + t*Y[i];    // for this
        B[i] = (1-s)*X[i] + s*Y[i];    // entire loop
    }
    for (i = 0; i < n; i++)            // (12n+2)n time units
        for (j = 0; j < n; j++)        // for this
            C[i][j] = j*A[i] + i*B[j]; // nested loop

Total running time: 22n + (12n+2)n = 12n² + 24n time units; i.e., this code is O(n²).

Slide 15: Rule Four: Conditional Statements

The running time of an if-else statement is at most the running time of the conditional test, added to the maximum of the running times of the if and else blocks of statements.

Example:

    if (amt > cost + tax)                         // 2 time units
    {
        count = 0;                                // 1 time unit
        while ((count < n) && (amt > cost+tax))   // 4 TUs per iteration;
        {                                         // at most n iterations
            amt -= (cost + tax);                  // 3 time units
            count++;                              // 2 time units
        }
        cout << "CAPACITY:" << count;             // 2 time units
    }
    else
        cout << "INSUFFICIENT FUNDS";             // 1 time unit

Total running time: 2 + max(1 + (4+3+2)n + 2, 1) = 9n + 5 time units; i.e., this code is O(n).

Slide 16: Complete Analysis of a Binary Search Function

    int binsrch(const etype A[], const etype x, const int n)
    {
        int low = 0, high = n-1;         // 3 time units
        int middle;                      // 0 time units
        while (low <= high)              // 1 time unit
        {
            middle = (low + high)/2;     // 3 time units
            if (A[middle] < x)           // 2 TU  |
                low = middle + 1;        // 2 TU  |
            else if (A[middle] > x)      // 2 TU  | <-- worst case:
                high = middle - 1;       // 2 TU  |     10 TUs per iteration
            else                         // 0 TU  |
                return middle;           // 1 TU  |
        }
        return -1;   // if the search is unsuccessful; 1 time unit
    }

In the worst case, each iteration executes the while-test (1), computes middle (3), fails both comparisons (2 + 2), and updates one index (2), and the loop keeps halving the distance between the low and high indices until they cross, so it iterates at most log n times. Thus, the total running time is 10 log n + 4 time units, which is O(log n).
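
To watch the log n bound in action, here is an instrumented version of the function (a sketch, not part of the slides) that counts loop iterations during a worst-case, unsuccessful search; for n = 1024 it prints 11, i.e., ⌊log₂ n⌋ + 1:

    #include <iostream>

    // binsrch specialized to int, with an added iteration counter
    int binsrch(const int A[], const int x, const int n, int &iters) {
        int low = 0, high = n - 1;
        iters = 0;
        while (low <= high) {
            ++iters;
            const int middle = (low + high) / 2;
            if (A[middle] < x)      low = middle + 1;
            else if (A[middle] > x) high = middle - 1;
            else                    return middle;
        }
        return -1;   // unsuccessful search
    }

    int main() {
        const int n = 1024;
        int A[n];
        for (int i = 0; i < n; i++) A[i] = 2*i;   // sorted array
        int iters;
        binsrch(A, 99999, n, iters);              // larger than every element
        std::cout << iters << " iterations for n = " << n << '\n';
    }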

Slide 17: Analysis of Another Function: SuperFreq

    etype SuperFreq(const etype A[], const int n)
    {
        etype bestElement = A[0];        // 3 time units
        int bestFreq = 0;                // 1 time unit
        int currFreq;                    // 0 time units
        for (i = 0; i < n; i++)          // n iterations; 2 TUs each
        {
            currFreq = 0;                // 1 time unit
            for (j = i; j < n; j++)      // n-i iterations; 2 TUs each
                if (A[i] == A[j])        // 3 time units
                    currFreq++;          // 2 time units
            if (currFreq > bestFreq)     // 1 time unit
                bestElement = A[i];      // 3 time units
        }
        return bestElement;              // 1 time unit
    }

Note that the function is clearly O(n²) due to its familiar nested-loop structure. Specifically, its worst-case running time is ½(7n² + 21n + 10).
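
The closed form follows by summing the annotated costs (a reconstruction consistent with the unit costs above: each inner iteration costs at most 2 + 3 + 2 = 7 units, each outer iteration contributes 2 + 1 + 1 + 3 = 7 units of its own, and initialization plus the final return add 5 more):

$$\sum_{i=0}^{n-1}\left[\,7 + \sum_{j=i}^{n-1} 7\,\right] + 5 \;=\; 7n + \frac{7n(n+1)}{2} + 5 \;=\; \frac{1}{2}\left(7n^{2} + 21n + 10\right).$$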

Slide 18: What About Recursion?

    humongInt pow(const humongInt &val, const humongInt &n)
    {
        if (n == 0)
            return humongInt(1);   // val^0 = 1
        if (n == 1)
            return val;
        if (n % 2 == 0)
            return pow(val*val, n/2);
        return pow(val*val, n/2) * val;
    }

The worst case requires all three conditions to be checked and to fail, taking 4 time units (the test n % 2 == 0 costs 2: a mod and a comparison). The final return statement requires 3 time units each time it executes, which happens log n times (since n is halved with each call until it reaches a value of 1). When the parameter n finally reaches 1, two last operations are performed. Thus, the worst-case running time is 7 log n + 2.

Slide 19: Recurrence Relations to Evaluate Recursion

    int powerOf2(const int &n)
    {
        if (n == 0)
            return 1;
        return powerOf2(n-1) + powerOf2(n-1);
    }

Assume that there is a function T such that executing powerOf2(k) takes T(k) time. Examining the code allows us to conclude the following:

    T(0) = 2
    T(k) = 5 + 2T(k-1) for all k > 0

The second fact tells us that:

    T(n) = 5 + 2T(n-1)
         = 5 + 2(5 + 2T(n-2))
         = 5 + 2(5 + 2(5 + 2T(n-3)))
         = …
         = 5(1 + 2 + 4 + … + 2ⁿ⁻¹) + 2ⁿ·T(0)
         = 5(2ⁿ − 1) + 2ⁿ(2)
         = 7(2ⁿ) − 5, which is O(2ⁿ).
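
As a quick numerical check of the closed form (a sketch, not part of the slides), the recurrence can be iterated directly and compared against 7·2ⁿ − 5:

    #include <iostream>

    int main() {
        long long T = 2;         // T(0) = 2
        long long pow2 = 1;      // 2^0
        for (int n = 1; n <= 20; n++) {
            T = 5 + 2*T;         // T(n) = 5 + 2*T(n-1)
            pow2 *= 2;           // 2^n
            if (T != 7*pow2 - 5)
                std::cout << "mismatch at n = " << n << '\n';
        }
        std::cout << "T(20) = " << T << " = 7*2^20 - 5 = " << 7*pow2 - 5 << '\n';
    }

No mismatch is ever reported, confirming T(n) = 7(2ⁿ) − 5.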

Slide 20: Another Recurrence Relation Example

    int alternatePowerOf2(const int &n)
    {
        if (n == 0)
            return 1;
        return 2*alternatePowerOf2(n-1);
    }

Assume that there is a function T such that executing alternatePowerOf2(k) takes T(k) time. Examining the code allows us to conclude the following:

    T(0) = 2
    T(k) = 4 + T(k-1) for all k > 0

The second fact tells us that:

    T(n) = 4 + T(n-1)
         = 4 + (4 + T(n-2))
         = 4 + (4 + (4 + T(n-3)))
         = …
         = 4n + T(0)
         = 4n + 2, which is O(n).