Analysis of Algorithms (adapted after Dr. Menezes)

What does it mean to analyze?
- Strictly speaking, analysis is "the separation of an intellectual or substantial whole into its constituent parts for individual study" [American Heritage Dictionary].
- We are certainly interested in studying individual parts of algorithms, but "analysis of an algorithm" is commonly used in a more restricted sense: the investigation of an algorithm's efficiency with respect to two resources:
  - Running time
  - Memory space
- The emphasis is on these two because they determine the usability of the algorithm (when time/space are bounded) and because they can be studied in a precise manner using analysis frameworks.
Measuring Input Size

First the obvious: almost all algorithms run longer on larger inputs.
- Therefore it seems logical to investigate an algorithm's efficiency as a function of its input size, but...
- It is not uncommon to have algorithms whose input is described by more than one parameter (e.g., graph algorithms, with both vertices and edges).
- The input size may not be as well defined as one would wish (e.g., matrix multiplication: the order of the matrices or the total number of entries?).
- In such cases there is normally a simple relation between the candidate input sizes.
- When measuring properties of numbers, we sometimes use the number of bits b in the binary representation of the number n: b = floor(log2 n) + 1.
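To make the bit-count measure concrete, here is a minimal Java sketch (the class and method names are illustrative, not part of the original slides) that computes b = floor(log2 n) + 1 by repeated halving:

public class BitSize {
    static int bitSize(long n) {
        if (n <= 0) throw new IllegalArgumentException("n must be positive");
        int b = 0;
        while (n > 0) {      // each iteration strips one bit
            n >>= 1;
            b++;
        }
        return b;            // equals floor(log2 n) + 1
    }

    public static void main(String[] args) {
        System.out.println(bitSize(1));    // 1
        System.out.println(bitSize(8));    // 4
        System.out.println(bitSize(1000)); // 10
    }
}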
Units for Measuring Time Efficiency

We could simply use some standard unit of time measurement, but we would face serious drawbacks:
- Dependence on the speed of a particular computer
- Dependence on the quality of the program implementing the algorithm
- Dependence on the quality of the compiler used to generate the executable
- Difficulty of clocking the time precisely

Since we want to measure algorithms, we should not suffer the effects of external factors such as the ones above.
We could count how many times each operation in the algorithm is executed, but this may be difficult and is also unnecessary.
Basic Operation

The standard approach is to identify the basic (most expensive) operation and count how many times it is executed.
- As a rule of thumb, it is the most time-consuming operation in the innermost loop of the algorithm.
- For instance:
  - For most sorting algorithms, the basic operation is the key comparison.
  - In matrix multiplication, the basic operation consists of a multiplication and an addition; but since on most computers multiplications are more expensive, we can count only multiplications.
- The established framework is: count the number of times the algorithm's basic operation is executed for inputs of size n, where n is clearly defined.
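As an illustration (a sketch added here, not part of the original slides), the classic triple-loop multiplication of two n×n matrices executes its basic operation, one multiplication paired with an addition, exactly n³ times:

public class MatMulCount {
    static long multiplyAndCount(double[][] a, double[][] b, double[][] c) {
        int n = a.length;
        long count = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                c[i][j] = 0.0;
                for (int k = 0; k < n; k++) {
                    c[i][j] += a[i][k] * b[k][j]; // one multiplication + one addition
                    count++;
                }
            }
        return count; // equals n^3
    }

    public static void main(String[] args) {
        int n = 4;
        double[][] a = new double[n][n], b = new double[n][n], c = new double[n][n];
        System.out.println(multiplyAndCount(a, b, c)); // prints 64 = 4^3
    }
}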
Using the Framework

Here is an example. Let c_op be the execution time of an algorithm's basic operation on a particular computer, and let C(n) be the number of times the operation needs to be executed. Then:

  T(n) ≈ c_op · C(n)

Now let's assume C(n) = ½ n(n-1). The formula above can help us answer the question: how much longer will the algorithm run if we double the input size?
The answer is ~4 times longer. Why?
More importantly, the framework allows us to answer the question without knowing the value of c_op.
Answering the Question

We should first work a bit on the value of C(n):

  C(n) = ½ n(n-1) = ½ n² - ½ n ≈ ½ n²   (for large n)

Therefore:

  T(2n) / T(n) ≈ (c_op · C(2n)) / (c_op · C(n)) ≈ (½ (2n)²) / (½ n²) = 4
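A small sketch (illustrative, not from the slides) that checks the doubling ratio numerically for C(n) = ½ n(n-1); the exact ratio is slightly above 4 for small n and approaches 4 as n grows, independently of c_op:

public class DoublingRatio {
    static double c(long n) {
        return 0.5 * n * (n - 1);
    }

    public static void main(String[] args) {
        for (long n = 1000; n <= 1_000_000; n *= 10) {
            double ratio = c(2 * n) / c(n);
            System.out.printf("n = %,d  ->  C(2n)/C(n) = %.4f%n", n, ratio);
        }
    }
}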
Orders of Growth

As we mentioned before, the difference in the running times of two algorithms on small inputs is not a good measure of how efficient an algorithm is.
- Recall the example given for GCD: if we want to calculate gcd(12, 5), it is not that clear why Euclid's solution is more efficient. But let the two numbers be very large and you'll easily appreciate the difference.
- We've also seen that T(n) is a function. For very large values of n, it is only T(n)'s order of growth that counts.
Orders of Growth

[Table comparing the values of log n, n, n log n, n², n³, 2ⁿ and n! for increasing values of n.]
Appreciating Orders of Magnitude

Without additional discussion, the importance of the table in the previous slide may pass unnoticed. Just to put the numbers in perspective:
- It would take about 4×10^10 years for a computer executing one trillion (10^12) operations per second (1 Teraflop) to execute 2^100 operations. Surprised?
- The above is still incomparably faster than the amount of time it would take to execute 100! operations.
- By the way, 4.5×10^9 years is the estimated age of the planet Earth!
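A quick sanity check of that figure (my sketch, not from the slides), dividing 2^100 operations by 10^12 operations per second and converting to years:

public class ExponentialTime {
    public static void main(String[] args) {
        double ops = Math.pow(2, 100);            // ~1.27e30 operations
        double opsPerSecond = 1e12;               // one trillion operations per second
        double seconds = ops / opsPerSecond;      // ~1.27e18 seconds
        double years = seconds / (365.25 * 24 * 3600);
        System.out.printf("%.2e years%n", years); // ~4.0e10 years
    }
}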
Appreciating Orders of Magnitude

Both 2ⁿ functions and n! functions are called exponential. These are practical only for very small input sizes.
Another way to appreciate the orders of magnitude is by answering questions like: how much effect does a twofold increase in the input size have on the algorithm's running time? This is very much like the example we've seen previously.
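A small sketch (added for illustration, not from the slides) of how a twofold increase in n affects the most common growth functions at n = 1000:

public class DoublingEffect {
    public static void main(String[] args) {
        double n = 1000;
        System.out.printf("log n : %.3f%n", Math.log(2 * n) / Math.log(n));                 // barely grows
        System.out.printf("n     : %.3f%n", (2 * n) / n);                                   // 2
        System.out.printf("n logn: %.3f%n", (2 * n * Math.log(2 * n)) / (n * Math.log(n))); // a bit more than 2
        System.out.printf("n^2   : %.3f%n", Math.pow(2 * n, 2) / Math.pow(n, 2));           // 4
        System.out.printf("n^3   : %.3f%n", Math.pow(2 * n, 3) / Math.pow(n, 3));           // 8
        // 2^n squares: 2^(2n) / 2^n = 2^n, an astronomical factor even for modest n.
    }
}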
Common Growth Functions

1: Constant time. Normally the amount of time that a single instruction takes under the RAM model.
log n: Logarithmic. Normally occurs in algorithms that transform a bigger problem into a smaller version whose input size is a fraction of the original problem's. Common in searching and some tree algorithms.
n: Linear. Algorithms that must pass through all elements of the input (of size n) a constant number of times yield linear running time.
n log n: Linearithmic. Typical of algorithms that break the problem into smaller parts, solve the problem for the smaller parts, and combine the solutions to obtain the solution of the original problem instance.
Common Growth Functions

n²: Quadratic. A subset of the polynomial solutions. Quadratic solutions are still acceptable and are relatively efficient for small- to medium-scale problem sizes. Typical for algorithms that have to analyze all pairs of elements of the input.
n³: Cubic. Not very efficient, but still polynomial. A classical example of an algorithm in this class is matrix multiplication.
2ⁿ: Exponential. Very poor performance. Unfortunately quite a few known solutions to practical problems fall into this category. This is as bad as testing all possible answers to a problem. When algorithms fall into this category, algorithmics goes in search of approximation algorithms.
Worst-case, Best-case and Average-case Efficiencies

We have now established that it is a good idea to express the performance of an algorithm as a function of the input size.
However, in some cases the efficiency of the algorithm also depends on the specifics of a particular input.
For instance, consider the Sequential Search algorithm (similar to Program 2.1 of Sedgewick's book), written here with a sentinel copy of the key placed at position n:

  SequentialSearch(A[0..n-1], K)
    A[n] = K
    i = 0
    while A[i] != K do
      i = i + 1
    if i < n return i
    else return -1
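A runnable Java sketch of the same sentinel-based sequential search (my transcription of the pseudocode above, not code from the slides); note that the key is copied into an extra slot at index n, so the array must have room for n+1 elements:

public class SequentialSearch {
    // a has logical length n, with one extra slot at index n reserved for the sentinel
    static int search(int[] a, int n, int key) {
        a[n] = key;              // sentinel: the loop is guaranteed to terminate
        int i = 0;
        while (a[i] != key) {    // basic operation: key comparison
            i++;
        }
        return (i < n) ? i : -1; // -1 means the key was only found at the sentinel
    }

    public static void main(String[] args) {
        int[] a = {7, 3, 9, 5, 0};            // last slot reserved for the sentinel, n = 4
        System.out.println(search(a, 4, 9));  // 2
        System.out.println(search(a, 4, 42)); // -1
    }
}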
Worst Case

In the algorithm we just saw, the running time can be quite different for different lists of the same size n.
In the worst case there are no matching elements, or the first matching element is in the last position of the list. In this case the algorithm makes the largest number of key comparisons: C_worst(n) = n.
"The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size." [Levitin 2003]
Worst Case

C_worst(n) provides very important information about the algorithm because it bounds its running time from above (an upper bound). It guarantees that, no matter what the input of size n is, the running time can't be worse than C_worst(n).
Best Case

"The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the fastest among all the inputs of that size." [Levitin 2003]
- Note that the best case does not mean the smallest input.
- Not nearly as useful as the worst case, but not useless either. One can take advantage of algorithms that run really fast in the best case if the inputs to which the algorithm will be applied are close to the best-case input.
- For instance, insertion sort on a list of size n performs in C_best(n) = n - 1 comparisons when the list is already sorted. If the list is close to being sorted, the performance does not degrade much from the best case.
Best Case

This means that insertion sort may be a good option for lists that are known to be nearly sorted.
Conversely, if the best case of an algorithm is already not good enough, you might want to forget about that algorithm altogether.
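To make the best-case point concrete, here is a small Java sketch (illustrative, not from the slides) of insertion sort that counts key comparisons; on an already-sorted list it performs only n - 1 comparisons, while on a reverse-sorted list it performs n(n-1)/2:

public class InsertionSort {
    static long sortAndCount(int[] a) {
        long comparisons = 0;
        for (int i = 1; i < a.length; i++) {
            int v = a[i];
            int j = i - 1;
            // shift larger elements right; each test against v is a key comparison
            while (j >= 0) {
                comparisons++;
                if (a[j] <= v) break;   // already in place: best case exits immediately
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = v;
        }
        return comparisons;
    }

    public static void main(String[] args) {
        System.out.println(sortAndCount(new int[]{1, 2, 3, 4, 5})); // 4  (n - 1, best case)
        System.out.println(sortAndCount(new int[]{5, 4, 3, 2, 1})); // 10 (n(n-1)/2, worst case)
    }
}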
Average Case

Neither the worst case nor the best case can answer the question about the running time of a typical or random input. This is given by the average-case efficiency of an algorithm.
To carry out the average-case analysis of the sequential search algorithm we have to assume:
- The probability p of the search being successful, where 0 <= p <= 1.
- That the probability of the first match occurring in the i-th position is the same for every i.
Under these assumptions the expected number of key comparisons is

  C_avg(n) = p (n + 1) / 2 + n (1 - p)

If we assume p = 1 (the search must be successful) we're left with C_avg(n) = (n + 1) / 2.
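A small empirical check of the p = 1 case (my sketch, not from the slides): searching for a uniformly random key that is always present should average about (n + 1)/2 comparisons.

import java.util.Random;

public class AverageCase {
    // returns the number of key comparisons made while searching for 'key'
    static int comparisonsFor(int[] a, int key) {
        int count = 0;
        for (int i = 0; i < a.length; i++) {
            count++;                  // one key comparison per element inspected
            if (a[i] == key) break;
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 1001;
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = i;    // distinct keys 0..n-1

        Random rnd = new Random(42);
        int trials = 100_000;
        long total = 0;
        for (int t = 0; t < trials; t++) {
            total += comparisonsFor(a, rnd.nextInt(n));  // successful search, p = 1
        }
        System.out.printf("measured  : %.2f%n", (double) total / trials);
        System.out.printf("predicted : %.2f%n", (n + 1) / 2.0);  // 501.00
    }
}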
O-notation

A function f(n) is said to be in O(g(n)), denoted f(n) = O(g(n)), if there exist some positive constant c and some nonnegative integer n_0 such that

  f(n) <= c g(n)   for all n >= n_0

Let's prove that 100n + 5 is O(n²):
- First we can say that 100n + 5 <= 100n + n = 101n (for all n >= 5)
- And then we can also say that 101n <= 101n² (for all n >= 0)
- Using transitivity we can say that 100n + 5 <= 101n² (for all n >= 5)
- In this case we took c = 101 and n_0 = 5
- Could we have used c = 105 and n_0 = 1?
O-notation

[Graph illustrating the definition: f(n) stays below c·g(n) for all n >= n_0.]
Ω-notation

A function f(n) is said to be in Ω(g(n)), denoted f(n) = Ω(g(n)), if there exist some positive constant c and some nonnegative integer n_0 such that

  f(n) >= c g(n)   for all n >= n_0

Let's prove that n³ is Ω(n²):
- In this case it's simpler, since n³ >= n² (for all n >= 0)
- Here we took c = 1 and n_0 = 0
Ω-notation

[Graph illustrating the definition: f(n) stays above c·g(n) for all n >= n_0.]
Θ-notation

A function f(n) is said to be in Θ(g(n)), denoted f(n) = Θ(g(n)), if there exist some positive constants c_1 and c_2 and some nonnegative integer n_0 such that

  c_2 g(n) <= f(n) <= c_1 g(n)   for all n >= n_0

Let's prove that ½ n(n-1) is Θ(n²):
- First we prove the upper bound, which is straightforward: ½ n² - ½ n <= ½ n² (for all n >= 0)
- Then we can also say that ½ n² - ½ n >= ¼ n² (for all n >= 2, since ½ n <= ¼ n² when n >= 2)
- In this case we took c_1 = ½, c_2 = ¼ and n_0 = 2
To prove t(n) = Θ(g(n)) we have to prove that t(n) = O(g(n)) and t(n) = Ω(g(n)).
Θ-notation

[Graph illustrating the definition: f(n) stays between c_2·g(n) and c_1·g(n) for all n >= n_0.]
Using Limits for Comparing Orders of Growth

A much more convenient way of comparing the orders of growth of two functions is based on computing the limit of their ratio:

  lim (n -> ∞) t(n) / g(n) =
    0                       : t(n) has a smaller order of growth than g(n)
    c (a positive constant) : t(n) has the same order of growth as g(n)
    ∞                       : t(n) has a larger order of growth than g(n)

Note that the first two cases correspond to big-Oh, the last two cases correspond to big-Omega, and the middle case corresponds to big-Theta.
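A worked example of the limit rule (added here for illustration), using the function from the Θ-notation slide:

  lim (n -> ∞) (½ n(n-1)) / n²  =  lim (n -> ∞) (½ - 1/(2n))  =  ½

Since the limit is a positive constant, ½ n(n-1) has the same order of growth as n², i.e., ½ n(n-1) = Θ(n²), which agrees with the direct proof given earlier.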
Introduction to Recurrences

Many efficient algorithms are based on a recursive definition:
- the ability to "transform" a problem into one or more smaller instances of the same problem, and then solve these instances to obtain a solution to the large problem.
A recurrence is a relation describing a recursive function in terms of its values on smaller inputs. It is widely used in the analysis of recursive algorithms.
The recursive decomposition of a problem is directly reflected in its analysis via a recurrence relation:
- Size and number of subproblems
- Time required for the decomposition
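As a small illustration (my example, not from the slides), recursive binary search transforms a problem of size n into one subproblem of about half the size plus constant work, which is captured by the recurrence C(n) = C(n/2) + 1 with C(1) = 1, giving roughly log2 n + 1 comparisons:

public class BinarySearch {
    // C(n) = C(n/2) + 1: one comparison against the middle element,
    // then recurse on a subproblem of half the size.
    static int search(int[] a, int key, int lo, int hi) {
        if (lo > hi) return -1;                 // empty subproblem: key not present
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return mid;
        if (a[mid] > key)
            return search(a, key, lo, mid - 1); // left half
        else
            return search(a, key, mid + 1, hi); // right half
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9, 11, 13};      // must be sorted
        System.out.println(search(a, 9, 0, a.length - 1));  // 4
        System.out.println(search(a, 4, 0, a.length - 1));  // -1
    }
}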
Understanding the Relation

[Figure relating the parts of a recurrence to the recursive decomposition: the number of subproblems, their size, and the cost of dividing the problem and combining the solutions.]
Bibliography

[Sedgewick 2003] R. Sedgewick. Algorithms in Java, Parts 1-4. Addison-Wesley.
[American Heritage Dictionary] The American Heritage Dictionary of the English Language.
[Levitin 2003] A. Levitin. Introduction to the Design and Analysis of Algorithms. Addison-Wesley.
[Cormen et al.] T. Cormen, C. Leiserson, R. Rivest, C. Stein. Introduction to Algorithms. MIT Press.