Efficiency (Chapter 2)
Efficiency of Algorithms
- It is difficult to get a precise measure of the performance of an algorithm or program
- We can characterize a program by how its execution time or memory requirements grow as a function of increasing input size
- This characterization is expressed using big-O notation
- A simple way to determine the big-O of an algorithm or program is to look at the loops and to see whether the loops are nested
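As a sketch of the "look at the loops" rule (these example functions are not from the slides), a single loop over n items suggests O(n), while one loop nested inside another, each running up to n times, suggests O(n²):

```python
def single_loop(n):
    """One loop over n items -> O(n)."""
    total = 0
    for i in range(n):
        total += i
    return total

def nested_loops(n):
    """A loop inside a loop, each running n times -> O(n**2)."""
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1      # executed n * n times
    return count
```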
Counting Executions
- Generally, a simple statement counts as 1
- Sequential statements add: statement1; statement2 counts as 2
- Loops multiply: the count for the loop body is multiplied by the number of times the loop runs
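A small hypothetical function (not from the slides) annotated with these counting rules:

```python
def example(n):
    a = 0          # 1 statement
    b = 1          # 1 statement (sequential statements add: 2 so far)
    for i in range(n):
        a += b     # 1 statement, executed n times (loops multiply: n)
    return a       # 1 statement
# Total: T(n) = 2 + n + 1 = n + 3, which is O(n)
```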
Counting Example
- Consider a pair of nested loops in which the inner loop body executes 3 statements
- The first time through the outer loop, the inner loop is executed n − 1 times; the next time, n − 2 times; and the last time, once
- So we have T(n) = 3(n − 1) + 3(n − 2) + … + 3, or T(n) = 3(n − 1 + n − 2 + … + 1)
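The original slide's code is not reproduced here, but a minimal sketch matching the analysis above (3 statements per inner iteration; the inner loop runs n − 1, n − 2, …, 1 times) can be written and instrumented with a counter:

```python
def count_statements(n):
    """Count simple-statement executions in nested loops whose inner
    body runs 3 statements (a hypothetical sketch matching the
    counting analysis; the slides' actual code is not shown here)."""
    count = 0
    for i in range(n - 1):           # outer loop
        for j in range(i + 1, n):    # inner loop: n-1, n-2, ..., 1 passes
            count += 3               # 3 statements per inner iteration
    return count
```

For n = 4 this gives 3(3 + 2 + 1) = 18, matching T(n) = 1.5n² − 1.5n.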
Counting Example (continued)
- We can reduce the expression in parentheses using the arithmetic series formula: n − 1 + n − 2 + … + 1 = n(n − 1) / 2
- So, T(n) = 3 · n(n − 1) / 2 = 1.5n² − 1.5n
- This polynomial is zero when n is 1; for values greater than 1, 1.5n² is always greater than 1.5n² − 1.5n
- Therefore, we can use 1 for n0 and 1.5 for c to conclude that T(n) is O(n²)
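The big-O witness can be checked numerically: with c = 1.5 and n0 = 1, T(n) ≤ c·n² should hold for every n ≥ n0 (a quick sanity check, not a proof):

```python
def T(n):
    """The statement count derived above."""
    return 1.5 * n**2 - 1.5 * n

# Big-O claim: T(n) <= c * n**2 for all n >= n0, with c = 1.5, n0 = 1
c, n0 = 1.5, 1
witness_holds = all(T(n) <= c * n**2 for n in range(n0, 1000))
```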
Big-O Growth
Importance of Big-O
- Big-O doesn't matter for small values of N
- It shows how running time grows with problem size
- Regardless of comparative performance for small N, the algorithm with the smaller big-O will eventually win
- Technically, what we usually mean when we say big-O is big-Theta (a tight bound, rather than just an upper bound)
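To illustrate "the smaller big-O eventually wins," here is a hypothetical cost comparison (the constants 100 and 1 are invented for the example): an O(N) algorithm with a large constant factor loses to an O(N²) algorithm for small N, but overtakes it once N grows past the crossover point.

```python
def cost_a(n):
    """Hypothetical O(n) algorithm with a large constant factor."""
    return 100 * n

def cost_b(n):
    """Hypothetical O(n**2) algorithm with a small constant factor."""
    return n * n

# For small n, B is cheaper; the smaller big-O wins once 100*n < n*n
crossover = next(n for n in range(1, 10_000) if cost_a(n) < cost_b(n))
# crossover is 101: at n = 100 the costs tie (10000 each),
# and for every n > 100 the O(n) algorithm is cheaper
```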
Some Rules
- If you have to read or write N items of data, your program is at least O(N)
- Search programs range from O(log N) to O(N)
- Sort programs range from O(N log N) to O(N²)
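The O(log N) end of the search range is achieved by binary search on sorted data, since each comparison halves the remaining range (a standard illustration, not from the slides):

```python
def binary_search(items, target):
    """O(log N) search of a sorted list; returns an index or -1.
    Each iteration halves the search range, so at most about
    log2(N) + 1 comparisons are made."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            low = mid + 1       # discard the lower half
        else:
            high = mid - 1      # discard the upper half
    return -1
```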
Cases
- The same algorithm may run faster or slower on different inputs of the same size
- Best case
- Worst case
- Average case
- Example: consider sequential search…
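Sequential search makes the three cases concrete (a standard illustration of the example the slide names):

```python
def sequential_search(items, target):
    """Return the index of target in items, or -1 if absent.

    Best case:    target is the first element -> 1 comparison, O(1).
    Worst case:   target is last or not present -> N comparisons, O(N).
    Average case: target present and equally likely at any position ->
                  about N/2 comparisons, which is still O(N).
    """
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1
```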