CS 221 Analysis of Algorithms Instructor: Don McLaughlin
Theoretical Analysis of Algorithms Recursive Algorithms One strategy for algorithmic design is to solve the problem using the same design but on a smaller problem. The algorithm must be able to use itself to compute a solution, but on a subset of the problem. There must be one case that can be solved without invoking the whole algorithm again – the Base Case. This is called recursion.
Theoretical Analysis of Algorithms Recursive Algorithms Recursion is elegant. But is it efficient?
Theoretical Analysis of Algorithms A recursive Max algorithm
Algorithm recursiveMax(A, n)
  Input: Array A storing n >= 1 integers
  Output: Maximum value in A
  if n = 1 then
    return A[0]
  return max{recursiveMax(A, n-1), A[n-1]}
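A minimal sketch of this pseudocode in Python (the function name and zero-based indexing are illustrative, not from the slides):

    # Sketch of recursiveMax in Python, following the pseudocode above.
    def recursive_max(A, n):
        # Base case: a one-element array's maximum is that element.
        if n == 1:
            return A[0]
        # Recursive case: the max of A[0..n-1] is the larger of the
        # max of A[0..n-2] and the last element A[n-1].
        return max(recursive_max(A, n - 1), A[n - 1])

    # Example: recursive_max([3, 1, 4, 1, 5], 5) returns 5.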
Theoretical Analysis of Algorithms Efficiency – T(n) = 3 if n = 1, and T(n) = T(n-1) + 7 otherwise, which solves to T(n) = 7(n-1) + 3 = 7n - 4 * We will come back to this
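Unrolling the recurrence shows where the closed form comes from – each recursive call adds 7 primitive operations until the base case contributes its 3:

    T(n) = T(n-1) + 7
         = T(n-2) + 7·2
         = …
         = T(1) + 7(n-1)
         = 3 + 7(n-1) = 7n - 4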
Theoretical Analysis of Algorithms Best-case Worst-case Average-case A number of issues here. Not as straightforward as you would think.
Theoretical Analysis of Algorithms Average-case Sometimes Average Case is needed. Best or Worst case may not be the best representation of the typical case with “usual” input. Best or worst case may be rare.
Theoretical Analysis of Algorithms Average-case Consider: a sort that typically gets a partially sorted list (a concatenation of sorted sublists); a search that must find a match in a list …and the list is partially sorted.
Theoretical Analysis of Algorithms Average-case Can you take the average of best-case and worst-case?
Theoretical Analysis of Algorithms Average-case Average case must consider the probability of each case or a range of n. n may not be arbitrary. One strategy – divide the range of n into classes, determine the probability that any given n is from each respective class, and use n in the analysis based on the probability distribution of each class, as in the sketch below.
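A minimal sketch of that strategy in Python – the classes, probabilities, and per-class costs below are made-up numbers for illustration only:

    # Sketch: expected run-time cost as a probability-weighted sum over input classes.
    def expected_cost(class_probs, class_costs):
        # class_probs: probability that an input falls in each class (sums to 1)
        # class_costs: representative run-time cost T(n) for each class
        return sum(p * t for p, t in zip(class_probs, class_costs))

    # Example: three hypothetical input classes with probabilities 0.2, 0.5, 0.3
    # and representative costs of 10, 40, and 90 primitive operations.
    print(expected_cost([0.2, 0.5, 0.3], [10, 40, 90]))  # 49.0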
Theoretical Analysis of Algorithms Asymptotic notation Remember that our analysis is concerned with the efficiency of algorithms, so we determine a function that describes (to a reasonable degree) the run-time cost T(n) of the algorithm. We are particularly interested in the growth of the algorithm’s cost as the size of the problem (n) grows.
Theoretical Analysis of Algorithms Asymptotic notation Often we need to know the run-time cost T(n) in broader terms …within certain boundaries. We want to describe the efficiency of an algorithm in terms of its asymptotic behavior.
Theoretical Analysis of Algorithms Big O Suppose we have a function of n, g(n), that we suggest gives us an upper bound on the worst-case behavior of our algorithm’s run time – which we have determined to be f(n) – then…
Theoretical Analysis of Algorithms Big O We describe the upper bound on the growth of our run-time function f(n): f(n) is O(g(n)) means f(n) is bounded from above by g(n) for all significant values of n. Formally, f(n) is O(g(n)) if there exist a constant c > 0 and an integer n₀ >= 1 such that f(n) <= c·g(n) for all n >= n₀
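For example, f(n) = 3n + 2 is O(n): choose c = 4 and n₀ = 2; then 3n + 2 <= 4n for all n >= 2, since 3n + 2 <= 4n holds exactly when n >= 2.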
Theoretical Analysis of Algorithms Big O [figure illustrating the Big-O bound; from: Levitin, Anany, The Design and Analysis of Algorithms, Addison-Wesley, 2007]
Theoretical Analysis of Algorithms Big O …but be careful: f(n) = O(g(n)) is incorrect; the proper phrasing is f(n) is O(g(n)); to be absolutely correct, f(n) ∈ O(g(n))
Theoretical Analysis of Algorithms Big Ω Big Omega our function is bounded from below by g(n) that is, f(n) is Ω(g(n)) if there exist a positive constant c and an integer n₀ >= 1 such that f(n) >= c·g(n) for all n >= n₀ what does this mean?
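For example, f(n) = n² + n is Ω(n²): choose c = 1 and n₀ = 1; then n² + n >= 1·n² for all n >= 1.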
Theoretical Analysis of Algorithms Big Ω [figure illustrating the Big-Ω bound; from: Levitin, Anany, The Design and Analysis of Algorithms, Addison-Wesley, 2007]
Theoretical Analysis of Algorithms Big Θ Big Theta our function is bounded from above and below by g(n) that is, f(n) is Θ(g(n)) if there exist two positive constants c₁ and c₂ such that c₂·g(n) <= f(n) <= c₁·g(n) for all n >= n₀ what does this mean?
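For example, f(n) = ½n(n-1) is Θ(n²): for all n >= 2 we have ¼n² <= ½n(n-1) <= ½n², so the definition is satisfied with c₂ = ¼, c₁ = ½, and n₀ = 2.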
Theoretical Analysis of Algorithms Big Θ
Theoretical Analysis of Algorithms Or, said another way: O(g(n)): the class of functions f(n) that grow no faster than g(n) Θ(g(n)): the class of functions f(n) that grow at the same rate as g(n) Ω(g(n)): the class of functions f(n) that grow at least as fast as g(n)
Theoretical Analysis of Algorithms Little o f(n) is o(g(n)) if, for every constant c > 0, there exists an n₀ such that f(n) < c·g(n) for all n >= n₀ what does this mean? f(n) is asymptotically smaller than g(n)
Theoretical Analysis of Algorithms Little ω f(n) is ω(g(n)) if, for every constant c > 0, there exists an n₀ such that f(n) > c·g(n) for all n >= n₀ what does this mean? f(n) is asymptotically larger than g(n)
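For example, n is o(n²) because n/n² = 1/n → 0 as n → ∞, and n² is ω(n) because n²/n = n → ∞. Note that little-o is strictly stronger than Big O: n is O(n), but n is not o(n).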
Theoretical Analysis of Algorithms Simplifying things a bit Usually we are only really interested in the order of growth of our algorithm’s run-time function
Theoretical Analysis of Algorithms Suppose we have a run-time function like T(n) = 4n³ + n² + 2n + 7. So, to simplify our run-time function T, we can eliminate constant terms that are not coefficients of n: 4n³ + n² + 2n; eliminate the lowest-order terms: 4n³ + n²; maybe keep only the highest-order term: 4n³; …and drop the coefficient of that term: n³
Theoretical Analysis of Algorithms So, T(n) = 4n³ + n² + 2n + 7, therefore T(n) is O(n³). Or is it that T(n) is O(n⁶)? (true or false) True, but it is considered bad form. Why?
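One way to see that n³ is the tight choice: (4n³ + n² + 2n + 7)/n³ → 4, a nonzero constant, as n → ∞, so T(n) grows at exactly the rate of n³; O(n⁶) is a true statement but a needlessly loose bound.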
Classes of Algorithmic Efficiency
Class    Name         Algorithms
1        Constant     Runs in constant time regardless of the size of the problem (n); algorithms like this are rare
log n    Logarithmic  Algorithms that pare away part of the problem by a constant factor in each iteration
n        Linear       Algorithm’s T grows in linear proportion to the growth of n
n log n  n-log-n      Divide-and-conquer algorithms, often seen in recursive algorithms
n²       Quadratic    Seen in algorithms that have two levels of nested loops
n³       Cubic        Often seen in algorithms that have three levels of nested loops, linear algebra
2ⁿ       Exponential  Algorithms that grow as a power of 2 – all possible subsets of a set
n!       Factorial    All permutations of a set
based on: Levitin, Anany, The Design and Analysis of Algorithms, Addison-Wesley, 2007
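A small Python sketch that tabulates these classes for a couple of arbitrarily chosen problem sizes, to show how quickly the higher classes blow up:

    import math

    # Common efficiency classes and the functions they name.
    classes = [
        ("log n",   lambda n: math.log2(n)),
        ("n",       lambda n: n),
        ("n log n", lambda n: n * math.log2(n)),
        ("n^2",     lambda n: n ** 2),
        ("n^3",     lambda n: n ** 3),
        ("2^n",     lambda n: 2 ** n),
        ("n!",      lambda n: math.factorial(n)),
    ]

    for n in (10, 20):  # small, arbitrary sizes; 2^n and n! explode quickly
        print(f"n = {n}")
        for name, f in classes:
            print(f"  {name:8s} {f(n):,.0f}")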
Properties of Asymptotic Comparisons Transitivity
if f(n) is O(g(n)) and g(n) is O(h(n)), then f(n) is O(h(n))
if f(n) is Θ(g(n)) and g(n) is Θ(h(n)), then f(n) is Θ(h(n))
if f(n) is Ω(g(n)) and g(n) is Ω(h(n)), then f(n) is Ω(h(n))
if f(n) is o(g(n)) and g(n) is o(h(n)), then f(n) is o(h(n))
if f(n) is ω(g(n)) and g(n) is ω(h(n)), then f(n) is ω(h(n))
Reflexivity
f(n) is Θ(f(n)), f(n) is O(f(n)), f(n) is Ω(f(n))
Properties of Asymptotic Comparisons Symmetry
f(n) is Θ(g(n)) if and only if g(n) is Θ(f(n))
Transpose Symmetry
f(n) is O(g(n)) if and only if g(n) is Ω(f(n))
f(n) is o(g(n)) if and only if g(n) is ω(f(n))
Some rules of Asymptotic Notation
if d(n) is O(f(n)), then a·d(n) is O(f(n)) for any constant a > 0
if d(n) is O(f(n)) and e(n) is O(g(n)), then d(n) + e(n) is O(f(n) + g(n))
if d(n) is O(f(n)) and e(n) is O(g(n)), then d(n)·e(n) is O(f(n)·g(n))
if d(n) is O(f(n)) and f(n) is O(g(n)), then d(n) is O(g(n))
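For instance, take d(n) = 3n, which is O(n), and e(n) = 5n², which is O(n²): the sum rule gives d(n) + e(n) = 5n² + 3n is O(n + n²) = O(n²), and the product rule gives d(n)·e(n) = 15n³ is O(n·n²) = O(n³).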
Some rules of Asymptotic Notation
if f(n) is a polynomial of degree d, then f(n) is O(n^d)
n^x is O(a^n) for any fixed x > 0 and a > 1
log n^x is O(log n) for any fixed x > 0
log^x n is O(n^y) for any fixed constants x > 0 and y > 0
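For instance, log n⁵ = 5 log n, so it is O(log n); and any fixed power of n is eventually dominated by any exponential with base a > 1, e.g. n¹⁰ is O(2ⁿ).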
Homework [image: The Scream, by Edvard Munch]
Homework – due Wednesday, Sept. 3rd Assume various algorithms each run 100 primitive instructions per input element (n) (a constant coefficient of 100) …and each will be implemented on a processor that runs 2 billion primitive instructions per second. Then estimate the execution times for the following algorithmic efficiency classes
Homework For the given values of n, estimate the execution time for each class: log n, n, n log n, n², n³, 2ⁿ, n!. Give your answer in the largest meaningful time increment.