Defining Efficiency
Asymptotic complexity: how the running time scales as a function of the size of the input.
Two problems:
What is the "input size"?
How do we express the running time? (The big-O notation and its associates.)

Input Size
Usually: the length (in characters) of the input.
Sometimes: the number of "items" (e.g., numbers in an array, nodes in a graph).
The input size is denoted n.
Which inputs do we measure (there are many inputs of a given size)?
Worst case: the input of size n that causes the longest running time; tells us how slow the algorithm can be.
Best case: the input on which the algorithm is fastest; rarely relevant.
Average case: useful in practice, but there are technical problems here (next).

Input Size: Average-Case Analysis
Assume inputs are randomly distributed according to some "realistic" distribution.
Compute the expected running time.
Drawbacks:
It is often hard to define a realistic random distribution.
The math is usually hard to carry out.

Input Size: Example
The function find(x, v, n) searches for v in the array x[1..n].
Input size: n (the length of the array).
T(n) = "running time for size n", but T(n) needs clarification:
Worst case T(n): the algorithm runs in at most T(n) time for any x, v.
Best case T(n): the algorithm takes at least T(n) time for any x, v.
Average case T(n): the average time over all v and x.
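A minimal sketch of such a find (hypothetical signature; it assumes 0-based C++ arrays rather than the slide's 1-based x[1..n]):

    // Returns an index i with x[i] == v, or -1 if v is not present.
    // Worst case: v is absent or in the last slot -> n comparisons, O(n).
    // Best case: v sits in the first slot -> 1 comparison, O(1).
    int find(const int x[], int v, int n) {
        for (int i = 0; i < n; i++)
            if (x[i] == v) return i;
        return -1;
    }
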
Amortized Analysis
Instead of a single execution of the algorithm, consider a sequence of consecutive executions.
This is interesting when the running time of one execution depends on the results of previous ones.
It is a worst-case analysis over the sequence of executions: determine the average running time per execution on this sequence.
We will illustrate this in the course.
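A classic illustration (my example, not the slide's): appending to a dynamic array that doubles its capacity when full. A single append can cost O(n) when it triggers a copy, but any sequence of n appends costs O(n) in total, so the amortized cost per append is O(1). A sketch:

    struct DynArray {
        int* data = nullptr;
        int size = 0, cap = 0;           // (destructor omitted in this sketch)

        void append(int v) {
            if (size == cap) {           // expensive case: copy everything, O(size)
                cap = (cap == 0) ? 1 : 2 * cap;
                int* bigger = new int[cap];
                for (int i = 0; i < size; i++) bigger[i] = data[i];
                delete[] data;
                data = bigger;
            }
            data[size++] = v;            // cheap case: O(1); copies are so rare
        }                                // that n appends cost O(n) overall
    };
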
Definition of Order Notation
Upper bound: T(n) = O(f(n))   (big-O)
There exist constants c and n0 such that T(n) ≤ c · f(n) for all n ≥ n0.
Lower bound: T(n) = Ω(g(n))   (Omega)
There exist constants c and n0 such that T(n) ≥ c · g(n) for all n ≥ n0.
Tight bound: T(n) = Θ(f(n))   (Theta)
Holds when both bounds hold: T(n) = O(f(n)) and T(n) = Ω(f(n)).
We will use this terminology to describe asymptotic behavior.
Other notations: o(f) and ω(f) come later.

Example: Upper Bound
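A worked instance of an upper-bound argument (an illustrative choice of functions, not taken from the original slide): show that 100n + 5 = O(n^2).

    100n + 5 ≤ 100n + 5n = 105n    for all n ≥ 1
    105n ≤ 105 · n^2               for all n ≥ 1

So 100n + 5 ≤ 105 n^2 for all n ≥ 1: the constants c = 105 and n0 = 1 witness 100n + 5 = O(n^2).
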
Using a Different Pair of Constants
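The witness pair (c, n0) is not unique. Continuing the illustrative example above, a smaller c works if n0 grows:

    100n + 5 ≤ n^2    for all n ≥ 101    (check n = 101: 10105 ≤ 10201)

So c = 1 and n0 = 101 also prove 100n + 5 = O(n^2).
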
Example: Lower Bound
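Again an illustrative instance (not taken from the original slide): show that n^2 / 2 = Ω(n).

    n^2 / 2 ≥ (1/2) · n    for all n ≥ 1

So c = 1/2 and n0 = 1 witness n^2 / 2 = Ω(n).
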
Conventions of Order Notation
Which Function Dominates?
A race between pairs of functions (welcome to the "Silicon Downs"):

    f(n)                    g(n)
    n^3 + 2n^2              100n^2 + 1000
    n^0.1                   log n
    n + 100n^0.1            2n + 10 log n
    5n^5                    n!
    n^-15 · 2^n/100         1000n^15
    8^(2 log n)             3n^7 + 7n

Question: for each pair, is f = O(g)? Is g = O(f)? The races below give the results.

Race I: f(n) = n^3 + 2n^2 vs. g(n) = 100n^2 + 1000
For small n the constants favor g, but the cubic term eventually takes over: f grows faster, so g = O(f) but f ≠ O(g).

Race II: n^0.1 vs. log n
log n looked good out of the starting gate, and kept looking good until about n = 10^17, at which point n^0.1 passed it forever.
Moral of the story: n^ε beats log n for any ε > 0. But which of the two is really better in practice?
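A small numerical sketch of this race (illustrative; the sampling grid is my own choice):

    #include <cmath>
    #include <cstdio>

    int main() {
        // Watch n^0.1 race against log2(n) at powers of ten.
        for (double n = 10; n <= 1e20; n *= 10)
            std::printf("n = %.0e   n^0.1 = %6.2f   log2 n = %6.2f\n",
                        n, std::pow(n, 0.1), std::log2(n));
        // n^0.1 overtakes log2 n between n = 10^17 and n = 10^18.
        return 0;
    }
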
Race III: n + 100n^0.1 vs. 2n + 10 log n
Far out, these just look like n and 2n, because the largest terms dominate. So the left-hand side is smaller, but not asymptotically smaller: it's a tie; both are Θ(n).

Race IV: 5n^5 vs. n!
n! is big: it dominates any fixed polynomial, so n! wins.

Race V: n^-15 · 2^n/100 vs. 1000n^15
No matter how you handicap it, any exponential beats any polynomial. It doesn't even take that long here: the exponential pulls ahead at an input size around 250.

Race VI: 8^(2 log n) vs. 3n^7 + 7n
Rewrite the left-hand term (logs base 2): 8^(2 log n) = 2^(6 log n) = n^6. Both sides are polynomial, so it's an open-and-shut case: 3n^7 + 7n grows faster.

Eliminate low-order terms.
Eliminate constant coefficients.
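A quick worked illustration of both rules:

    4n^2 + 3n log n + 17n + 5
      → 4n^2      (eliminate the low-order terms 3n log n, 17n, and 5)
      → n^2       (eliminate the constant coefficient 4)

So 4n^2 + 3n log n + 17n + 5 = Θ(n^2).
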
Common Names
Slowest growth
constant: O(1)
logarithmic: O(log n)
linear: O(n)
log-linear: O(n log n)
quadratic: O(n^2)
exponential: O(c^n) (c is a constant > 1)
hyperexponential: a tower of n exponentials, e.g. 2^(2^(2^...)) of height n
Fastest growth

Other names:
superlinear: O(n^c) (c is a constant > 1)
polynomial: O(n^c) (c is a constant > 0)

Execution Time Comparison
O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(2^n) < O(n!) < O(2^(2^n))
Example: the naïve algorithm for TSP has O(n!) time complexity:
5! = 120, 10! = 3,628,800, 15! ≈ 1.3 × 10^12, 20! ≈ 2.43 × 10^18, 25! ≈ 1.55 × 10^25, 30! ≈ 2.65 × 10^32

Approximate running times, assuming one operation per microsecond:

    Size n      10            30             50
    O(log n)    0.0000033 s   0.0000049 s    0.0000056 s
    O(n)        0.00001 s     0.00003 s      0.00005 s
    O(n^2)      0.0001 s      0.0009 s       0.0025 s
    O(n^5)      0.1 s         24.3 s         5.2 minutes
    O(2^n)      0.001 s       17.9 minutes   35.7 years
    O(3^n)      0.059 s       6.5 years      2 × 10^8 centuries

An algorithm is infeasible if its running time is super-polynomial: it simply takes too long to compute.
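To see where an entry such as 35.7 years comes from (under the one-operation-per-microsecond assumption above):

    2^50 ≈ 1.13 × 10^15 operations
    1.13 × 10^15 μs = 1.13 × 10^9 s ≈ (1.13 × 10^9) / (3.15 × 10^7 s/year) ≈ 35.7 years
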
Big-O Arithmetic
Rule 1: If T(n) = O(f(n)) and f(n) = O(g(n)), then T(n) = O(g(n)).
Proof: We are given
T(n) ≤ C1 · f(n) for all n > n0′
f(n) ≤ C2 · g(n) for all n > n0″
Substituting C2 · g(n) for f(n), we have
T(n) ≤ C1 · C2 · g(n) = C3 · g(n) for all n > max(n0′, n0″), where C3 = C1 · C2.
Conclusion: T(n) = O(g(n)).

Big-O Arithmetic (cont.)
Rule 2: If T1(n) = O(f(n)) and T2(n) = O(g(n)), then
(a) T1(n) + T2(n) = max(O(f(n)), O(g(n)))
Example: T1(n) = 2n^2 + 4n = O(n^2) and T2(n) = 4n^3 + 2n = O(n^3), so
T1(n) + T2(n) = 2n^2 + 4n + 4n^3 + 2n = O(n^3)
(b) T1(n) · T2(n) = O(f(n) · g(n))
Example: T1(n) = 3n^2 = O(n^2) and T2(n) = 4n^5 = O(n^5), so
T1(n) · T2(n) = 3n^2 · 4n^5 = 12n^7 = O(n^7)

Big-O Rules (cont.)
Rule 3: If T(n) is a polynomial of degree k (n is the variable, k is the highest power), then T(n) = O(n^k).
Examples: n^5 + 4n^4 + 8n^2 + 9 = O(n^5); 7n^3 + 2n + 19 = O(n^3)
Remarks:
2n^2 = O(n^3) and 2n^2 = O(n^2) are both correct, but O(n^2) is more accurate.
Notations such as O(2n^2), O(5), and O(n^2 + 10n) are to be avoided: drop constant factors and low-order terms inside O(·).
The big-O notation is an upper bound.

Big-O Comparison
Determine the relative growth rates by computing
lim (n → ∞) f(n) / g(n)
(1) If the limit is 0, then f(n) = O(g(n)); more precisely, f(n) = o(g(n)).
(2) If the limit is a constant C ≠ 0, then f(n) = O(g(n)) and g(n) = O(f(n)); more precisely, f(n) = Θ(g(n)).
(3) If the limit is ∞, then f(n) = Ω(g(n)); more precisely, f(n) = ω(g(n)).
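Three quick illustrations of the rule:

    lim (n → ∞) n / n^2 = 0              so n = o(n^2)
    lim (n → ∞) (3n^2 + n) / n^2 = 3     so 3n^2 + n = Θ(n^2)
    lim (n → ∞) n^2 / n = ∞              so n^2 = ω(n)
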
General Rules for Computing Time Complexity
(1) O(1) if the execution time does not depend on the input data.
(2) Apply addition if the code segments are placed consecutively.
(3) Apply multiplication if the code segments are nested.
(4) O(log n) if the number of remaining executions is repeatedly cut in half.
(5) O(2^n) if the number of executions is repeatedly doubled.
(6) Always consider the worst case.

Big-O Examples (1)
(1)
    cin >> n;
    for (k = 1; k <= n; k++)                 // n iterations
        for (j = 1; j <= k; j++)             // up to n iterations
            for (m = 1; m <= n + 10; m++)    // about n iterations
                a = 10;
Time complexity: O(n^3)

(2)
    cin >> n;
    for (j = 0; j < myfunc(n); j++) {        // myfunc(n) iterations
        cout << j;
        for (k = 1; k <= n; k++)             // n iterations each
            cout << j << k;
    }
Time complexity:
(a) O(n^2) if myfunc(n) returns n
(b) O(n log n) if myfunc(n) returns log n

Big-O Examples (2)
(3)
    cin >> n;
    total = 0;
    for (j = 1; j < n; j++) {
        cin >> x;
        total = x + total;
        if ((total % 2) == 1)
            cout << total;
        else
            for (k = 1; k <= j; k++)   // worst case: j iterations
                cout << k;
    }
Time complexity: O(n^2) (worst case, when the else branch is taken)

Big-O Examples (3)
(4)
    float f(float n) {
        sum = 0;
        for (j = 1; j < n; j++)
            sum = sum + j;
        return sum;
    }
Time complexity: O(n)

(5)
    float g(float n) {
        sum = 0;
        for (j = 0; j < n; j++)
            sum = sum + f(j);   // each call f(j) costs O(j)
        return sum;
    }
Time complexity: O(n^2)

Big-O Examples (4)
(6)
    float h(int n) {
        for (j = 0; j < n; j++)       // n iterations...
            cout << g(n) + f(n);      // ...each costing O(n^2) + O(n)
    }
Time complexity: O(n^3)

(7)
    cin >> n;
    power = 1;
    for (j = 1; j < n; j++)
        power = power * 2;            // power grows to 2^(n-1)
    for (k = 1; k < power; k++)       // about 2^n iterations
        cout << k;
Time complexity: O(2^n)

Big-O Examples (5)
(8)
    cin >> n;
    x = 1;
    while (x < n)
        x = 2 * x;    // x doubles each iteration
Time complexity: O(log n)

Exercises:
(1) Prove n^3 + 20n = O(n^3).
The method: look for suitable values of C and n0.
Try 1: C = 1. We would need n^3 + 20n ≤ n^3, which fails for every n ≥ 1.
Exercise (cont.)
Try 2: C = 2. We need n^3 + 20n ≤ 2n^3:

    n     n^3 + 20n    2n^3
    1     21           2       (fails)
    2     48           16      (fails)
    5     225          250     (holds)

Therefore C = 2 and n0 = 5; then prove by induction that the inequality holds for all n ≥ 5.
(2) Prove that 3^n is not O(2^n).
Use the method of proof by contradiction. The general approach:
(a) Assume the theorem to be proved is false.
(b) Show that this assumption implies that some known property is false.
(c) Conclude that the original assumption is wrong.
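Instead of induction, a direct way to finish the argument (a sketch of the missing step): for n ≥ 5 we have n^2 ≥ 25 > 20, hence

    20n ≤ n^2 · n = n^3             for all n ≥ 5
    n^3 + 20n ≤ n^3 + n^3 = 2n^3    for all n ≥ 5

so the inequality holds for every n ≥ 5, not just the tabulated values.
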
The Proof
Assume 3^n is O(2^n).
Then there must exist constants C and n0 such that 3^n ≤ C · 2^n for all n ≥ n0.
Dividing both sides by 2^n gives (3/2)^n ≤ C.
But (3/2)^n grows without bound as n increases (it tends to infinity), so no such constant C can exist.
Conclusion: 3^n is not O(2^n).

Upper and Lower Bounds
Upper bound for a problem: the best that we currently know how to do. It is determined by the best algorithm we have found for the problem.
Lower bound for a problem: shows the problem's inherent limitations. Lower bounds are established by proofs that no algorithm for the problem can do better.
Example: for comparison-based sorting, mergesort gives an O(n log n) upper bound, and the classic Ω(n log n) comparison lower bound shows that no comparison sort can do asymptotically better.
Summary: upper bounds tell us the best we have been able to do so far; lower bounds tell us the best we can hope to do.