Published by Karen Rice. Modified over 9 years ago.
CSE 1342 Programming Concepts: Algorithmic Analysis Using Big-O, Part 1
The Running Time of Programs
- Most problems can be solved by more than one algorithm. So, how do you choose the best solution?
- The best solution is usually chosen on the basis of efficiency:
  - efficiency of time (speed of execution)
  - efficiency of space (memory usage)
- For a program that is run infrequently or subject to frequent modification, algorithmic simplicity may take precedence over efficiency.
The Running Time of Programs
- An absolute measure of time (5.3 seconds, for example) is not a practical measure of efficiency because:
  - Execution time is a function of the amount of data the program manipulates and typically grows as the amount of data increases.
  - Different computers will execute the same program (using the same data) at different speeds.
  - Speeds can vary on the same computer depending on the choice of programming language and compiler.
The Running Time of Programs
- The solution is to remove all implementation considerations from our analysis and focus on the aspects of the algorithm that most critically affect the execution time.
  - The most important aspect is usually the number of data elements (n) the program must manipulate.
  - Occasionally the magnitude of a single data element (rather than the number of data elements) is the most important aspect.
The 90-10 Rule
- The 90-10 rule states that, in general, a program spends 90% of its time executing the same 10% of its code.
  - This is because most programs rely heavily on repetition structures (loops and recursive calls).
- Because of the 90-10 rule, algorithmic analysis focuses on repetition structures.
Analysis of Summation Algorithms
Consider the following code segment that sums each row of an n-by-n array (version 1):

    grandTotal = 0;
    for (k = 0; k < n; k++) {
        sum[k] = 0;
        for (j = 0; j < n; j++) {
            sum[k] += a[k][j];
            grandTotal += a[k][j];
        }
    }

Requires 2n^2 additions.
Analysis of Summation Algorithms
Consider the following code segment that sums each row of an n-by-n array (version 2):

    grandTotal = 0;
    for (k = 0; k < n; k++) {
        sum[k] = 0;
        for (j = 0; j < n; j++) {
            sum[k] += a[k][j];
        }
        grandTotal += sum[k];
    }

Requires n^2 + n additions.
Analysis of Summation Algorithms
- Comparing the number of additions performed by versions 1 and 2, we find that version 1 requires 2n^2 while version 2 requires only n^2 + n.
- Based on this analysis the version 2 algorithm appears to be the faster. Although, as we shall see, "faster" may not have much meaning in the real world of computation.
Analysis of Summation Algorithms
- Further analysis of the two summation algorithms:
  - Assume a 1000-by-1000 (n = 1000) array and a computer that can execute an addition instruction in 1 microsecond (one millionth of a second).
  - The version 1 algorithm (2n^2) would require 2(1000^2)/1,000,000 = 2 seconds to execute.
  - The version 2 algorithm (n^2 + n) would require (1000^2 + 1000)/1,000,000 = 1.001 seconds to execute.
  - From a user's real-time perspective the difference is insignificant.
Analysis of Summation Algorithms
- Now increase the size of n:
  - Assume a 100,000-by-100,000 (n = 100,000) array.
  - The version 1 algorithm (2n^2) would require 2(100,000^2)/1,000,000 = 20,000 seconds to execute (5.55 hours).
  - The version 2 algorithm (n^2 + n) would require (100,000^2 + 100,000)/1,000,000 = 10,000.1 seconds to execute (2.77 hours).
  - From a user's real-time perspective both jobs take a long time and would need to run in a batch environment.
- In terms of order of magnitude (big-O), versions 1 and 2 have the same efficiency: O(n^2).
Big-O Analysis Overview
- The O stands for order of magnitude.
- Big-O analysis is independent of all implementation factors.
  - It depends (in most cases) on the number of data elements (n) the program must manipulate.
- Big-O analysis only has significance for large values of n.
  - For small values of n, big-O analysis breaks down.
- Big-O analysis is built around the principle that the runtime behavior of an algorithm is dominated by its behavior in its loops (the 90-10 rule).
Definition of Big-O
- Let T(n) be a function that measures the running time of a program in some unknown unit of time.
- Let n represent the size of the input data set that the program manipulates, where n > 0.
- Let f(n) be some function defined on the size of the input data set, n.
- We say that "T(n) is O(f(n))" if there exists an integer n0 and a constant c, where c > 0, such that for all integers n >= n0 we have T(n) <= c*f(n).
  - The pair n0 and c are witnesses to the fact that T(n) is O(f(n)).
Simplifying Big-O Expressions
- Big-O expressions are simplified by dropping constant factors and low-order terms.
- The total of all terms gives us the total running time of the program. For example, say that
  T(n) = O(f3(n) + f2(n) + f1(n)), where f3(n) = 4n^3, f2(n) = 5n^2, f1(n) = 23,
  or, restating T(n): T(n) = O(4n^3 + 5n^2 + 23).
- After stripping out the constants and low-order terms we are left with T(n) = O(n^3).
Simplifying Big-O Expressions
T(n) = f1(n) + f2(n) + f3(n) + … + fk(n)
- In big-O analysis, one of the terms in the T(n) expression is identified as the dominant term.
  - A dominant term is one that, for large values of n, becomes so large that it allows us to ignore the other terms in the expression.
- The problem of big-O analysis can be reduced to one of finding the dominant term in an expression representing the number of operations required by an algorithm.
  - All other terms and constants are dropped from the expression.
Big-O Analysis Example 1

    for (k = 0; k < n/2; ++k) {
        for (j = 0; j < n*n; ++j) {
            statement(s)
        }
    }

- The outer loop executes n/2 times.
- The inner loop executes n^2 times.
- T(n) = (n/2)(n^2) = n^3/2 = 0.5(n^3)
- T(n) = O(n^3)
Big-O Analysis Example 2

    for (k = 0; k < n/2; ++k) {
        statement(s)
    }
    for (j = 0; j < n*n; ++j) {
        statement(s)
    }

- The first loop executes n/2 times.
- The second loop executes n^2 times.
- T(n) = (n/2) + n^2 = 0.5n + n^2
- n^2 is the dominant term.
- T(n) = O(n^2)
Big-O Analysis Example 3

    while (n > 1) {
        statement(s)
        n = n / 2;
    }

- The values of n follow a logarithmic progression.
  - Assuming n has the initial value of 64, the progression will be 64, 32, 16, 8, 4, 2.
- The loop executes log2(n) times (6 times for n = 64).
- O(log2 n) = O(log n)
Big-O Comparisons
Analysis Involving if/else

    if (condition)
        loop1;    // assume O(f(n)) for loop1
    else
        loop2;    // assume O(g(n)) for loop2

- The order of magnitude for the entire if/else statement is O(max(f(n), g(n))).
An Example Involving if/else

    if (a[1][1] == 0)
        for (i = 0; i < n; ++i)
            for (j = 0; j < n; ++j)
                a[i][j] = 0;
    else
        for (i = 0; i < n; ++i)
            a[i][0] = 1;

- Here f(n) = n^2 (the nested loops) and g(n) = n (the single loop; note it must index a fixed column, since j is not set in this branch).
- The order of magnitude for the entire if/else statement is O(max(f(n), g(n))) = O(n^2).