1 o-notation
For a given function g(n), we denote by o(g(n)) the set of functions
o(g(n)) = {f(n): for any positive constant c > 0, there exists a constant n₀ > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n₀}.
f(n) becomes insignificant relative to g(n) as n approaches infinity: lim (n → ∞) f(n)/g(n) = 0.
We say g(n) is an upper bound for f(n) that is not asymptotically tight.
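A worked check of the definition (added here; it is not on the original slide): 2n = o(n^2), because for any c > 0 we may take n₀ > 2/c, and then 2n < c·n·n = c·n^2 for all n ≥ n₀; equivalently, lim (n → ∞) 2n/n^2 = lim (n → ∞) 2/n = 0.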
2 O(*) versus o(*)
O(g(n)) = {f(n): there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀}.
o(g(n)) = {f(n): for any positive constant c > 0, there exists a constant n₀ > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n₀}.
Thus f(n) = o(g(n)) is a strictly stronger statement than f(n) = O(g(n)): every function in o(g(n)) is also in O(g(n)), but not conversely. For example:
n^2 = O(n^2), but n^2 ≠ o(n^2)
n^2 = O(n^3), and n^2 = o(n^3)
3 o-notation
n^1.9999 = o(n^2)
n^2 / lg n = o(n^2)
n^2 ≠ o(n^2) (just as 2 is not < 2)
n^2 / 1000 ≠ o(n^2)
4 ω-notation
For a given function g(n), we denote by ω(g(n)) the set of functions
ω(g(n)) = {f(n): for any positive constant c > 0, there exists a constant n₀ > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n₀}.
f(n) becomes arbitrarily large relative to g(n) as n approaches infinity: lim (n → ∞) f(n)/g(n) = ∞.
We say g(n) is a lower bound for f(n) that is not asymptotically tight.
5 ω-notation
n^2.0001 = ω(n^2)
n^2 · lg n = ω(n^2)
n^2 ≠ ω(n^2)
6 Comparison of Functions
The asymptotic comparison of f and g is analogous to the comparison of two real numbers a and b:
f(n) = O(g(n))   ~   a ≤ b
f(n) = Ω(g(n))   ~   a ≥ b
f(n) = Θ(g(n))   ~   a = b
f(n) = o(g(n))   ~   a < b
f(n) = ω(g(n))   ~   a > b
7 Properties
Transitivity
f(n) = Θ(g(n)) and g(n) = Θ(h(n)) imply f(n) = Θ(h(n))
f(n) = O(g(n)) and g(n) = O(h(n)) imply f(n) = O(h(n))
f(n) = Ω(g(n)) and g(n) = Ω(h(n)) imply f(n) = Ω(h(n))
Symmetry
f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))
Transpose Symmetry
f(n) = O(g(n)) if and only if g(n) = Ω(f(n))
8 Practical Complexities
Is O(n^2) too much time? Is the algorithm practical?
At a CPU speed of 10^9 instructions per second.
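A rough worked estimate (added here, not from the original slide, and assuming each elementary step costs one instruction): an n^2 algorithm on an input of size n = 10^6 performs about 10^12 steps, which at 10^9 instructions per second takes roughly 10^12 / 10^9 = 1000 seconds, about 17 minutes. An n·lg n algorithm on the same input needs about 2·10^7 steps, well under a second. Quadratic time is therefore practical only for moderate n.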
9 Impractical Complexities
At a CPU speed of 10^9 instructions per second.
10 Some Common Names for Complexity
O(1)          Constant time
O(log n)      Logarithmic time
O(log^2 n)    Log-squared time
O(n)          Linear time
O(n^2)        Quadratic time
O(n^3)        Cubic time
O(n^i) for some i   Polynomial time
O(2^n)        Exponential time
11 Growth Rates of Some Functions
(Figure: growth curves, with exponential functions contrasted against polynomial functions.)
12 Effect of a Multiplicative Constant
(Figure: run time versus n for f(n) = 10n and f(n) = n^2.)
13 Exponential Functions
Exponential functions increase rapidly; e.g., 2^n doubles whenever n is increased by 1.
n     2^n        Time at 1 μs per operation
10    ~10^3      0.001 s
20    ~10^6      1 s
30    ~10^9      16.7 min
40    ~10^12     11.6 days
50    ~10^15     31.7 years
60    ~10^18     31,710 years
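A small Python script reproduces these figures (an illustrative sketch, not part of the slides; it assumes one operation takes 1 μs, and its exact values differ slightly from the table because the table rounds 2^n to the nearest power of ten):

# Reproduce the 2^n timing table, assuming each operation takes 1 microsecond.
def human_time(seconds):
    # Convert seconds into the largest convenient unit.
    for unit, size in (("years", 365 * 24 * 3600), ("days", 24 * 3600),
                       ("min", 60), ("s", 1)):
        if seconds >= size:
            return f"{seconds / size:.1f} {unit}"
    return f"{seconds:.3f} s"

for n in range(10, 70, 10):
    ops = 2 ** n                 # number of operations
    seconds = ops * 1e-6         # 1 microsecond per operation
    print(f"n={n:2d}  2^n={ops:.1e}  time={human_time(seconds)}")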
14-17 Practical Complexity
(Four slides of figures comparing the growth of common running-time functions; no text beyond the title.)
18 Floors & Ceilings
For any real number x, we denote the greatest integer less than or equal to x by ⌊x⌋, read "the floor of x".
For any real number x, we denote the least integer greater than or equal to x by ⌈x⌉, read "the ceiling of x".
For all real x: x - 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1 (for example, with x = 4.2: 3.2 < 4 ≤ 4.2 ≤ 5 < 5.2).
For any integer n, ⌈n/2⌉ + ⌊n/2⌋ = n.
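A quick check of these identities in Python (an illustrative sketch, not from the slides), using math.floor and math.ceil:

import math

x = 4.2
assert x - 1 < math.floor(x) <= x <= math.ceil(x) < x + 1   # 3.2 < 4 <= 4.2 <= 5 < 5.2

# For any integer n, ceil(n/2) + floor(n/2) == n, including negative n.
for n in range(-5, 6):
    assert math.ceil(n / 2) + math.floor(n / 2) == n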
19 Polynomials
Given a positive integer d, a polynomial in n of degree d is a function P(n) of the form
P(n) = Σ (i = 0 to d) a_i · n^i,
where the constants a_0, a_1, …, a_d are the coefficients of the polynomial and a_d ≠ 0.
A polynomial is asymptotically positive if and only if a_d > 0.
For an asymptotically positive polynomial of degree d, P(n) = Θ(n^d).
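A brief justification of P(n) = Θ(n^d) (added here; it is not spelled out on the slide): for n ≥ 1 every term satisfies |a_i|·n^i ≤ |a_i|·n^d, so P(n) ≤ (|a_0| + … + |a_d|)·n^d, giving P(n) = O(n^d). In the other direction, P(n) ≥ a_d·n^d - (|a_0| + … + |a_(d-1)|)·n^(d-1) ≥ (a_d/2)·n^d once n ≥ 2·(|a_0| + … + |a_(d-1)|)/a_d, giving P(n) = Ω(n^d).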
20 Exponents
x^0 = 1, x^1 = x, x^(-1) = 1/x
x^a · x^b = x^(a+b)
x^a / x^b = x^(a-b)
(x^a)^b = (x^b)^a = x^(ab)
x^n + x^n = 2·x^n ≠ x^(2n)
2^n + 2^n = 2·2^n = 2^(n+1)
21 Logarithms (1)
In computer science, all logarithms are to base 2 unless specified otherwise.
x^a = b iff log_x(b) = a
lg(n) = log_2(n)
ln(n) = log_e(n)
lg^k(n) = (lg(n))^k
log_a(b) = log_c(b) / log_c(a), for any valid base c (c > 0, c ≠ 1)
lg(a·b) = lg(a) + lg(b)
lg(a/b) = lg(a) - lg(b)
lg(a^b) = b·lg(a)
22 Logarithms (2)
a = b^(log_b(a))
a^(log_b(n)) = n^(log_b(a))
lg(1/a) = -lg(a)
log_b(a) = 1 / log_a(b)
lg(n) ≤ n for all n > 0
log_a(a) = 1
lg(1) = 0, lg(2) = 1, lg(1024) = lg(2^10) = 10, lg(1048576) = lg(2^20) = 20
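A quick numerical sanity check of a few of these identities (an illustrative sketch, not part of the slides):

import math

a, b, n, c = 3.0, 2.0, 17.0, 10.0

# Change of base: log_a(b) = log_c(b) / log_c(a)
assert math.isclose(math.log(b, a), math.log(b, c) / math.log(a, c))

# a^(log_b(n)) == n^(log_b(a))
assert math.isclose(a ** math.log(n, b), n ** math.log(a, b))

# lg(a*b) = lg(a) + lg(b)  and  lg(a^b) = b*lg(a)
assert math.isclose(math.log2(a * b), math.log2(a) + math.log2(b))
assert math.isclose(math.log2(a ** b), b * math.log2(a))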
23 Summation
Why do we need to know this? We need it for computing the running time of a given algorithm.
Example: Maximum Sub-vector. Given an array a[1…n] of numeric values (which can be positive, zero, or negative), determine the sub-vector a[i…j] (1 ≤ i ≤ j ≤ n) whose sum of elements is maximum over all sub-vectors.
24 Example: Max Sub-vector
MaxSubvector(a, n) {
    maxsum = 0;
    for i = 1 to n {
        for j = i to n {
            sum = 0;
            for k = i to j {
                sum += a[k];
            }
            maxsum = max(sum, maxsum);
        }
    }
    return maxsum;
}
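A direct, runnable translation of the pseudocode (a Python sketch, not part of the original slides). The three nested loops give a running time of Θ(n^3), which is exactly what the summations on the following slides let us compute:

def max_subvector(a):
    """Brute-force maximum sub-vector sum, mirroring the pseudocode above."""
    n = len(a)
    maxsum = 0                          # the empty sub-vector has sum 0
    for i in range(n):
        for j in range(i, n):
            s = 0
            for k in range(i, j + 1):   # sum of a[i..j]
                s += a[k]
            maxsum = max(s, maxsum)
    return maxsum

# Example input; the maximum sub-vector is [59, 26, -53, 58, 97] with sum 187.
print(max_subvector([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))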
25 Summation
26 Summation
Constant Series: for a, b ≥ 0
Quadratic Series: for n ≥ 0
Linear-Geometric Series: for n ≥ 0
(The closed forms themselves appeared as images on the slide; standard forms are given below.)
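The standard closed forms for these three series are as follows (reconstructed from the series names; the original slide showed the formulas as images, so its exact statement may have differed slightly):
Constant series: Σ (i = a to b) 1 = b - a + 1.
Quadratic series: Σ (i = 1 to n) i^2 = n·(n + 1)·(2n + 1) / 6.
Linear-geometric series: Σ (i = 1 to n) i·x^i = x·(1 - (n + 1)·x^n + n·x^(n+1)) / (1 - x)^2, for x ≠ 1.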
27 Series
28 Proof of Geometric Series
A geometric series is one in which each successive term is the previous term multiplied by a fixed ratio x; when |x| < 1, the partial sums approach a fixed limit as n tends to infinity.
Closed forms for geometric series are obtained by cancellation, as demonstrated below.
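The cancellation argument (reconstructed here; the original slide showed it as an image):
Let S = Σ (i = 0 to n) x^i = 1 + x + x^2 + … + x^n.
Then x·S = x + x^2 + … + x^n + x^(n+1).
Subtracting, S - x·S = 1 - x^(n+1), since all intermediate terms cancel.
So for x ≠ 1, S = (1 - x^(n+1)) / (1 - x).
If |x| < 1, then x^(n+1) → 0 as n → ∞, and the infinite sum is 1 / (1 - x).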
29 Factorials
n! ("n factorial") is defined for integers n ≥ 0 as n! = 1 · 2 · 3 · … · n, with 0! = 1.
n! < n^n for n ≥ 2 (each of the n factors is at most n, and the factor 1 is strictly less than n).