CS 3343: Analysis of Algorithms
Correctness Proof, Order of Growth, Asymptotic Notations
What is an algorithm?
Algorithms are the ideas behind computer programs: an algorithm is what stays the same regardless of the programming language and the computing hardware.
What is an algorithm? (cont'd)
An algorithm is a precise and unambiguous specification of a sequence of steps that can be carried out to solve a given problem or to achieve a given condition. An algorithm accepts some value or set of values as input and produces some value or set of values as output. Algorithms are closely intertwined with the data structures used for their input and output values.
How to express algorithms?
Natural language (e.g., English) → pseudocode → real programming languages: increasing precision, decreasing ease of expression.
Describe the ideas of an algorithm in natural language; use pseudocode to clarify sufficiently tricky details of the algorithm.
How to express algorithms? (cont'd)
To understand or describe an algorithm: get the big idea first, then use pseudocode to clarify the sufficiently tricky details.
Example: sorting
Input: a sequence of n numbers a₁, a₂, …, aₙ.
Output: a permutation (reordering) a₁', a₂', …, aₙ' of the input sequence such that a₁' ≤ a₂' ≤ … ≤ aₙ'.
Possible algorithms you've learned so far: insertion, selection, bubble, quick, merge, … (more in this course).
We seek algorithms that are both correct and efficient.
Insertion Sort (outline)

    InsertionSort(A, n) {
      for j = 2 to n {
        ▷ Precondition: A[1..j-1] is sorted
        1. Find the position i in A[1..j-1] such that A[i] ≤ A[j] < A[i+1]
        2. Insert A[j] between A[i] and A[i+1]
        ▷ Postcondition: A[1..j] is sorted
      }
    }
Insertion Sort

    InsertionSort(A, n) {
      for j = 2 to n {
        key = A[j]
        i = j - 1
        while (i > 0) and (A[i] > key) {
          A[i+1] = A[i]
          i = i - 1
        }
        A[i+1] = key
      }
    }
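As a concrete reference, here is a direct Python translation of the pseudocode (a sketch of ours, not from the original slides; Python lists are 0-indexed, so the outer loop starts at index 1):

    def insertion_sort(a):
        # Sorts list a in place; mirrors the pseudocode, 0-indexed.
        for j in range(1, len(a)):
            key = a[j]
            i = j - 1
            # Shift elements of the sorted prefix a[0..j-1] that exceed key.
            while i >= 0 and a[i] > key:
                a[i + 1] = a[i]
                i -= 1
            a[i + 1] = key

    data = [5, 2, 4, 6, 1, 3]
    insertion_sort(data)
    print(data)  # [1, 2, 3, 4, 5, 6]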
Correctness
What makes a sorting algorithm correct?
In the output sequence, the elements are ordered non-decreasingly.
Each element of the input sequence appears exactly once in the output sequence (the output is a permutation of the input).
For example, [2 3 1] ⇒ [1 2 2] is wrong: the output is sorted but is not a permutation of the input.
Correctness
For any algorithm, we must prove that it always returns the desired output for all legal instances of the problem. For sorting, this means even if (1) the input is already sorted, or (2) it contains repeated elements. Algorithm correctness is NOT obvious for some problems (e.g., optimization).
How to prove correctness?
Given a concrete input, e.g., ⟨4, 2, 6, 1, 7⟩, trace the algorithm and show that it works.
Given an abstract input, e.g., ⟨a₁, …, aₙ⟩, trace the algorithm and prove that it works.
Sometimes it is easier to find a counterexample showing that an algorithm does NOT work:
think about all small examples;
think about examples with extremes of big and small;
think about examples with ties.
Failure to find a counterexample does NOT mean that the algorithm is correct.
An Example: Insertion Sort

    InsertionSort(A, n) {
      for j = 2 to n {
        key = A[j]
        i = j - 1
        ▷ Insert A[j] into the sorted sequence A[1..j-1]
        while (i > 0) and (A[i] > key) {
          A[i+1] = A[i]
          i = i - 1
        }
        A[i+1] = key
      }
    }
Example of insertion sort

    5 2 4 6 1 3
    2 5 4 6 1 3
    2 4 5 6 1 3
    2 4 5 6 1 3
    1 2 4 5 6 3
    1 2 3 4 5 6   Done!
Use loop invariants to prove the correctness of loops
A loop invariant (LI) is a formal statement about the variables in your program which holds true throughout the loop.
Claim: at the start of each iteration of the for loop, the subarray A[1..j-1] consists of the elements originally in A[1..j-1], but in sorted order.
Proof by induction (a mechanical check of the invariant is sketched below):
Initialization: the LI is true prior to the first iteration.
Maintenance: if the LI is true before the j-th iteration, it remains true before the (j+1)-th iteration.
Termination: when the loop terminates, the LI gives us a useful property to show that the algorithm is correct.
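One way to make the invariant concrete is to check it mechanically while the loop runs. The sketch below is our own instrumentation (not from the slides): it asserts, at the top of every iteration, that the 0-indexed prefix a[:j] holds the original first j elements in sorted order.

    def insertion_sort_checked(a):
        original = list(a)
        for j in range(1, len(a)):
            # Loop invariant: a[:j] holds the elements originally in
            # a[:j], in sorted order.
            assert a[:j] == sorted(original[:j]), "invariant violated"
            key = a[j]
            i = j - 1
            while i >= 0 and a[i] > key:
                a[i + 1] = a[i]
                i -= 1
            a[i + 1] = key
        return a

    print(insertion_sort_checked([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]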
Loop invariants and correctness of insertion sort
Claim: at the start of each iteration of the for loop, the subarray A[1..j-1] consists of the elements originally in A[1..j-1], but in sorted order.
Proof: by induction.
Prove correctness using loop invariants
Loop invariant: at the start of each iteration of the for loop, the subarray A[1..j-1] consists of the elements originally in A[1..j-1], but in sorted order.
(Pseudocode as above.)
Initialization
Before the loop starts (j = 2), the subarray A[1..1] contains only A[1] and is trivially sorted. So the loop invariant is true before the first iteration.
Maintenance
Assume the loop invariant is true prior to iteration j, i.e., A[1..j-1] is sorted and contains the elements originally in A[1..j-1]. The while loop shifts each element of A[1..j-1] that is greater than key = A[j] one position to the right, and key is then inserted in its correct place. Hence A[1..j] is sorted and contains the elements originally in A[1..j]: the loop invariant is true before iteration j+1.
Termination
The loop terminates when j = n+1. By the loop invariant, A[1..n] then contains all the original elements of A in sorted order. The algorithm is correct!
Efficiency
Correctness alone is not sufficient: brute-force algorithms exist for most problems. To sort n numbers, we could enumerate all permutations of the numbers and test which permutation has the correct order. Why can't we do this? Too slow! By what standard?
How to measure complexity?
Accurate running time is not a good measure:
it depends on the input;
it depends on the machine used and on who implemented the algorithm;
it depends on the weather, maybe.
We would like an analysis that does not depend on those factors.
Machine-independent
Assume a generic uniprocessor random-access machine (RAM) model:
no concurrent operations;
each simple operation (e.g., +, -, =, *, if) takes 1 step; loops and subroutine calls are not simple operations;
all memory is equally expensive to access;
constant word size, unless we are explicitly manipulating bits.
Running Time
Running time is the number of primitive steps that are executed. Except for the time of executing a function call, most statements roughly require the same amount of time:
y = m * x + b
c = 5 / 9 * (t - 32)
z = f(x) + g(x)
We can be more exact if need be.
Running time of insertion sort
The running time depends on the input: an already sorted sequence is easier to sort.
Parameterize the running time by the size of the input, since short sequences are easier to sort than long ones.
Generally, we seek upper bounds on the running time, because everybody likes a guarantee.
Kinds of analyses
Worst case: provides an upper bound on the running time; an absolute guarantee.
Best case: not very useful.
Average case: provides the expected running time. Very useful, but treat with care: what is "average"? Random (equally likely) inputs? Real-life inputs?
Example: analysis of insertion sort

    InsertionSort(A, n) {
      for j = 2 to n {
        key = A[j]
        i = j - 1
        while (i > 0) and (A[i] > key) {
          A[i+1] = A[i]
          i = i - 1
        }
        A[i+1] = key
      }
    }

How many times will each line execute?
Analysis of insertion sort: exact

    Statement                                  cost   times
    InsertionSort(A, n) {
      for j = 2 to n {                         c1     n
        key = A[j]                             c2     n-1
        i = j - 1                              c3     n-1
        while (i > 0) and (A[i] > key) {       c4     S
          A[i+1] = A[i]                        c5     S-(n-1)
          i = i - 1                            c6     S-(n-1)
        }
        A[i+1] = key                           c7     n-1
      }
    }

where S = t₂ + t₃ + … + tₙ and tⱼ is the number of while-test evaluations in the j-th iteration of the for loop.
Analyzing insertion sort: exact
T(n) = c1·n + c2(n-1) + c3(n-1) + c4·S + c5(S-(n-1)) + c6(S-(n-1)) + c7(n-1) = c8·S + c9·n + c10
What can S be?
Best case: the inner loop body is never executed; tⱼ = 1 for all j, so S = n-1 and T(n) = an + b is a linear function: Θ(n).
Worst case: the inner loop body is executed for all previous elements; tⱼ = j, so S = 2 + 3 + … + n = n(n+1)/2 - 1 and T(n) = an² + bn + c is a quadratic function: Θ(n²).
Average case: we can assume that on average we have to insert A[j] into the middle of A[1..j-1], so tⱼ ≈ j/2 and S ≈ n(n+1)/4; T(n) is still a quadratic function: Θ(n²).
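The best- and worst-case formulas for S are easy to confirm empirically. The sketch below is ours (not from the slides): it counts the while-test evaluations on sorted, reversed, and random inputs.

    import random

    def count_while_tests(a):
        # Run insertion sort on a copy of a and return S, the total
        # number of while-test evaluations.
        a = list(a)
        s = 0
        for j in range(1, len(a)):
            key = a[j]
            i = j - 1
            while True:
                s += 1                      # one evaluation of the while test
                if not (i >= 0 and a[i] > key):
                    break
                a[i + 1] = a[i]
                i -= 1
            a[i + 1] = key
        return s

    n = 100
    print(count_while_tests(range(n)))                   # best case: n-1 = 99
    print(count_while_tests(range(n, 0, -1)))            # worst case: n(n+1)/2 - 1 = 5049
    print(count_while_tests(random.sample(range(n), n))) # average: about n²/4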
Asymptotic Analysis
Running time depends on the size of the input: a larger array takes more time to sort.
T(n): the time taken on an input of size n.
Look at the growth of T(n) as n → ∞: "asymptotic analysis".
The size of the input is generally defined as the number of input elements; in some cases this may be tricky.
Asymptotic Analysis
Ignore the actual and abstract statement costs. The order of growth is the interesting measure: the highest-order term is what counts, because as the input size grows larger it is the high-order term that dominates.
Comparison of functions

    n     log₂n   n·log₂n   n²      n³      2ⁿ       n!
    10    3.3     33        10²     10³     ~10³     ~10⁶
    10²   6.6     660       10⁴     10⁶     ~10³⁰    ~10¹⁵⁸
    10³   10      ~10⁴      10⁶     10⁹
    10⁴   13      ~10⁵      10⁸     10¹²
    10⁵   17      ~10⁶      10¹⁰    10¹⁵
    10⁶   20      ~10⁷      10¹²    10¹⁸

(The remaining 2ⁿ and n! entries are astronomically large.) For a supercomputer that does one trillion operations per second, 10³⁰ operations would take about 10¹⁸ seconds, longer than one billion years.
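The entries can be reproduced with a few lines of Python (a sketch of ours using only the standard math module; 2ⁿ and n! are reported by their order of magnitude to avoid overflow):

    import math

    for n in [10, 10**2, 10**3, 10**4, 10**5, 10**6]:
        log2n = math.log2(n)
        log10_pow = n * math.log10(2)                     # log10 of 2^n
        log10_fact = math.lgamma(n + 1) / math.log(10)    # log10 of n!
        print(f"n=10^{round(math.log10(n))}: log2(n)={log2n:.1f}, "
              f"n*log2(n)={n * log2n:.3g}, n^2=10^{2 * math.log10(n):.0f}, "
              f"n^3=10^{3 * math.log10(n):.0f}, 2^n=10^{log10_pow:.1f}, "
              f"n!=10^{log10_fact:.1f}")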
Order of growth
1 << log₂n << n << n·log₂n << n² << n³ << 2ⁿ << n!
(We are slightly abusing the "<<" sign here: it means a smaller order of growth.)
Asymptotic notations
We say insertion sort's worst-case running time is Θ(n²). Properly we should say the running time is in Θ(n²). It is also in O(n²). What is the relationship between Θ and O? Formal definitions soon.
Analysis of insertion sort: asymptotic
What are the basic operations (the most executed lines)? Looking at the cost table above: the while-loop test (c4) executes S times, and the loop body (c5, c6) executes S-(n-1) times, so these lines dominate the running time.
What can S be?
S = Σ_{j=2..n} tⱼ, where the inner loop stops when A[i] ≤ key, or when i = 0.
Best case? Worst case? Average case?
Best case
The array is already sorted: the inner loop stops at the first test (A[i] ≤ key), so tⱼ = 1 for all j.
S = Σ_{j=2..n} tⱼ = n - 1
T(n) = Θ(n)
Worst case
The array is originally in reverse sorted order, so each A[j] is compared with all of A[1..j-1]: tⱼ = j.
S = Σ_{j=2..n} j = 2 + 3 + … + n = (n-1)(n+2)/2 = n(n+1)/2 - 1
T(n) = Θ(n²)
Average case
The array is in random order, so on average the inner loop goes about halfway through A[1..j-1]: tⱼ ≈ j/2.
S = Σ_{j=2..n} j/2 = ½ Σ_{j=2..n} j = (n-1)(n+2)/4 = Θ(n²)
What if we use binary search to find the insertion position? The answer is still Θ(n²): binary search reduces the number of comparisons to Θ(n log n), but shifting elements to make room for the key still takes Θ(n²) time.
Exact analysis is hard and unnecessary!
Worst-case and average-case running times are difficult to compute precisely, because the details can be very complicated. It may be easier to talk about upper and lower bounds of the function.
Asymptotic notations
O: Big-Oh
Ω: Big-Omega
Θ: Theta
o: Small-oh
ω: Small-omega
Big O
Informally, O(g(n)) is the set of all functions with a smaller or same order of growth as g(n), within a constant multiple.
If we say f(n) is in O(g(n)), it means that g(n) is an asymptotic upper bound of f(n). Intuitively, it is like f(n) ≤ g(n).
What is O(n²)? The set of all functions that grow slower than, or in the same order as, n².
Abuse of notation (for convenience): f(n) = O(g(n)) actually means f(n) ∈ O(g(n)).
So:
n ∈ O(n²)
n² ∈ O(n²)
1000n ∈ O(n²)
n² + n ∈ O(n²)
But:
n³ ∉ O(n²)
Intuitively, O is like ≤.
(optional) small o
Informally, o(g(n)) is the set of all functions with a strictly smaller order of growth than g(n), within a constant multiple.
What is o(n²)? The set of all functions that grow strictly slower than n².
So: 1000n ∈ o(n²)
But: n² ∉ o(n²)
Intuitively, o is like <.
Big Ω
Informally, Ω(g(n)) is the set of all functions with a larger or same order of growth as g(n), within a constant multiple.
f(n) ∈ Ω(g(n)) means g(n) is an asymptotic lower bound of f(n). Intuitively, it is like g(n) ≤ f(n).
So: n² ∈ Ω(n); n²/1000 ∈ Ω(n)
But: 1000n ∉ Ω(n²)
Intuitively, Ω is like ≥.
Abuse of notation (for convenience): f(n) = Ω(g(n)) actually means f(n) ∈ Ω(g(n)).
(optional) small ω
Informally, ω(g(n)) is the set of all functions with a strictly larger order of growth than g(n), within a constant multiple.
So: n² ∈ ω(n); n²/1000 ∈ ω(n)
But: n² ∉ ω(n²)
Intuitively, ω is like >.
Theta (Θ)
Informally, Θ(g(n)) is the set of all functions with the same order of growth as g(n), within a constant multiple.
f(n) ∈ Θ(g(n)) means g(n) is an asymptotically tight bound of f(n). Intuitively, it is like f(n) = g(n).
What is Θ(n²)? The set of all functions that grow in the same order as n².
Abuse of notation (for convenience): f(n) = Θ(g(n)) actually means f(n) ∈ Θ(g(n)). Θ(1) means constant time.
So:
n² ∈ Θ(n²)
n² + n ∈ Θ(n²)
100n² + n ∈ Θ(n²)
100n² + log₂n ∈ Θ(n²)
But:
n·log₂n ∉ Θ(n²)
1000n ∉ Θ(n²)
n³/1000 ∉ Θ(n²)
Intuitively, Θ is like =.
Tricky cases
How about √n and log₂n?
How about log₂n and log₁₀n?
How about 2ⁿ and 3ⁿ?
How about 3ⁿ and n!?
Mathematical definitions (more discussion later)
O(g(n)) = { f(n) : ∃ positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) ∀ n ≥ n₀ }
Ω(g(n)) = { f(n) : ∃ positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) ∀ n ≥ n₀ }
Θ(g(n)) = { f(n) : ∃ positive constants c₁, c₂, and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) ∀ n ≥ n₀ }
O, Ω, and Θ
The definitions imply a constant n₀ beyond which they are satisfied. We do not care about small values of n.
Using limits to compare functions for order of growth
If lim_{n→∞} f(n)/g(n) = 0:      f(n) ∈ o(g(n)), and hence f(n) ∈ O(g(n))
If lim_{n→∞} f(n)/g(n) = c > 0:  f(n) ∈ Θ(g(n)), and hence f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n))
If lim_{n→∞} f(n)/g(n) = ∞:      f(n) ∈ ω(g(n)), and hence f(n) ∈ Ω(g(n))
Logarithms
Compare log₂n and log₁₀n. Using log_a b = log_c b / log_c a:
log₂n = log₁₀n / log₁₀2 ≈ 3.3·log₁₀n
Therefore lim_{n→∞} (log₂n / log₁₀n) = 3.3, a positive constant, so log₂n = Θ(log₁₀n).
Compare 2ⁿ and 3ⁿ
lim_{n→∞} 2ⁿ/3ⁿ = lim_{n→∞} (2/3)ⁿ = 0
Therefore, 2ⁿ ∈ o(3ⁿ), and 3ⁿ ∈ ω(2ⁿ).
How about 2ⁿ and 2ⁿ⁺¹? 2ⁿ / 2ⁿ⁺¹ = ½, a positive constant, therefore 2ⁿ = Θ(2ⁿ⁺¹).
L'Hôpital's rule
lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n)
Condition: both lim f(n) and lim g(n) are ∞, or both are 0.
You can apply this transformation as many times as you want, as long as the condition holds.
Compare n^0.5 and log n
lim_{n→∞} n^0.5 / log n = ?
(n^0.5)′ = 0.5·n^(-0.5); (log n)′ = 1/n
By L'Hôpital: lim_{n→∞} (0.5·n^(-0.5)) / (1/n) = lim_{n→∞} 0.5·n^0.5 = ∞
Therefore, log n ∈ o(n^0.5).
In fact, log n ∈ o(n^ε) for any ε > 0.
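These limit comparisons are easy to sanity-check symbolically. The sketch below is ours and assumes the SymPy library is available:

    import sympy as sp

    n = sp.symbols('n', positive=True)
    # 2^n vs 3^n: the limit is 0, so 2^n is in o(3^n).
    print(sp.limit(2**n / 3**n, n, sp.oo))                    # 0
    # log n vs sqrt(n): the limit is 0, so log n is in o(n^0.5).
    print(sp.limit(sp.log(n) / sp.sqrt(n), n, sp.oo))         # 0
    # log2 n vs log10 n: a positive constant, so Theta of each other.
    print(sp.limit(sp.log(n, 2) / sp.log(n, 10), n, sp.oo))   # log(10)/log(2) ≈ 3.32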
Stirling's formula
n! = √(2πn) (n/e)ⁿ (1 + Θ(1/n))
(√(2π) ≈ 2.507 is a constant.)
Compare 2ⁿ and n!
By Stirling's formula, n!/2ⁿ ≈ √(2πn)·(n/(2e))ⁿ → ∞. Therefore, 2ⁿ = o(n!).
Compare nⁿ and n!
nⁿ/n! ≈ eⁿ/√(2πn) → ∞. Therefore, nⁿ = ω(n!).
How about log(n!)? Taking logarithms in Stirling's formula gives log(n!) = Θ(n log n).
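A quick numerical check of Stirling's approximation and of log(n!) = Θ(n log n), done in log space to avoid overflow (our sketch, standard math module only):

    import math

    for n in [10, 100, 1000]:
        log_exact = math.lgamma(n + 1)                 # ln(n!)
        log_stirling = 0.5 * math.log(2 * math.pi * n) + n * (math.log(n) - 1)
        print(f"n={n}: n!/Stirling = {math.exp(log_exact - log_stirling):.4f}, "
              f"log2(n!) = {log_exact / math.log(2):.0f}, "
              f"n*log2(n) = {n * math.log2(n):.0f}")

The first column approaches 1 (Stirling is asymptotically exact), and the last two columns grow at the same rate.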
About exponential and logarithm functions
(Textbook pages 55-56.) It is important to understand what logarithms are and where they come from. A logarithm is simply an inverse exponential function: saying bˣ = y is equivalent to saying x = log_b y. Logarithms reflect how many times we can double something until we get to n, or halve something until we get to 1.
log₂1 = ? log₂2 = ?
Binary Search
In binary search we throw away half of the possible keys after each comparison. How many times can we halve n before getting to 1? Answer: ⌈lg n⌉.
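For reference, a standard iterative binary search over a sorted list (our sketch; the slides only give the halving argument). Each iteration halves the remaining range, so it performs O(lg n) comparisons:

    def binary_search(a, key):
        # Return an index of key in sorted list a, or -1 if absent.
        lo, hi = 0, len(a) - 1
        while lo <= hi:
            mid = (lo + hi) // 2        # halve the remaining range
            if a[mid] == key:
                return mid
            elif a[mid] < key:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    print(binary_search([1, 2, 3, 4, 5, 6], 5))  # 4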
Logarithms and Trees
How tall a binary tree do we need until we have n leaves? The number of potential leaves doubles with each level. How many times can we double 1 until we get to n? Answer: ⌈lg n⌉.
Logarithms and Bits
How many numbers can you represent with k bits? Each bit you add doubles the possible number of bit patterns. You can represent the numbers from 0 to 2ᵏ - 1 with k bits: a total of 2ᵏ numbers.
How many bits do you need to represent the numbers from 0 to n? ⌈lg(n+1)⌉.
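In Python, int.bit_length() computes exactly this quantity, which makes the identity easy to spot-check (our sketch):

    import math

    # k bits represent 0 .. 2**k - 1; n.bit_length() is the number of bits
    # needed to write n, which equals ceil(lg(n + 1)) for n >= 1.
    for n in [1, 7, 8, 255, 256, 1000]:
        print(n, n.bit_length(), math.ceil(math.log2(n + 1)))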
Logarithms
lg n = log₂n; ln n = logₑn, where e ≈ 2.718
lgᵏn = (lg n)ᵏ (a power of a logarithm)
lg lg n = lg(lg n) = lg⁽²⁾n; in general lg⁽ᵏ⁾n = lg lg … lg n (the logarithm iterated k times)
lg²4 = ? lg⁽²⁾4 = ? Compare lgᵏn vs lg⁽ᵏ⁾n.
Useful rules for logarithms
For all a > 0, b > 0, c > 0 (with bases ≠ 1), the following rules hold:
log_b a = log_c a / log_c b = lg a / lg b
log_b aⁿ = n·log_b a
b^(log_b a) = a
log(ab) = log a + log b        (lg(2n) = ?)
log(a/b) = log a - log b       (lg(n/2) = ?  lg(1/n) = ?)
log_b a = 1 / log_a b
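These identities are easy to spot-check numerically (our quick sketch):

    import math

    a, b, n = 5.0, 2.0, 3.0
    print(math.log(a, b), math.log(a) / math.log(b))   # change of base
    print(math.log(a**n, b), n * math.log(a, b))       # log_b(a^n) = n*log_b(a)
    print(b ** math.log(a, b))                         # b^(log_b a) = a
    print(math.log2(2 * n), 1 + math.log2(n))          # lg(2n) = 1 + lg(n)
    print(math.log2(1 / n), -math.log2(n))             # lg(1/n) = -lg(n)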
Useful rules for exponentials
For all a > 0, b > 0, c > 0, the following rules hold:
a⁰ = 1 (0⁰ = ?)
a¹ = a
a⁻¹ = 1/a
(aᵐ)ⁿ = aᵐⁿ
(aᵐ)ⁿ = (aⁿ)ᵐ
aᵐ·aⁿ = aᵐ⁺ⁿ
More advanced dominance ranking
n! ≫ cⁿ ≫ n³ ≫ n² ≫ n^(1+ε) ≫ n·lg n ≫ n ≫ √n ≫ lg²n ≫ lg n ≫ lg lg n ≫ 1
(for constants c > 1 and ε > 0; each function dominates everything to its right)
Definition & Proof by definition
Big-Oh
Definition: O(g(n)) = { f(n) : there exist (∃) positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all (∀) n ≥ n₀ }
Big-Oh
Claim: f(n) = 3n² + 10n + 5 ∈ O(n²). Prove by definition.
(Hint: to prove this claim by definition, we need to find some positive constants c and n₀ such that f(n) ≤ c·n² for all n ≥ n₀.)
(Note: you just need to find one concrete pair c, n₀ satisfying the condition, but it needs to be correct for all n ≥ n₀. So do not try to plug in a concrete value of n and show the inequality holds.)
Proof:
3n² + 10n + 5 ≤ 3n² + 10n² + 5,   n ≥ 1
              ≤ 3n² + 10n² + 5n², n ≥ 1
              = 18n²,             n ≥ 1
If we let c = 18 and n₀ = 1, we have f(n) ≤ c·n² ∀ n ≥ n₀. Therefore, by definition, f(n) ∈ O(n²).
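A numeric spot-check of the witnesses c = 18 and n₀ = 1 (our sketch; sampling values of n is evidence, not a proof, which is exactly the point of the Note above):

    def f(n):
        return 3 * n**2 + 10 * n + 5

    c, n0 = 18, 1
    # Check f(n) <= c*n^2 on a sample of n >= n0.
    print(all(f(n) <= c * n**2 for n in range(n0, 10**6, 997)))  # True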
Big-Omega
Definition: Ω(g(n)) = { f(n) : there exist (∃) positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all (∀) n ≥ n₀ }
Big-Omega
Claim: f(n) = n²/10 = Ω(n). Prove by definition.
f(n) = n²/10, g(n) = n. We need to find a c and an n₀ to satisfy the definition of f(n) ∈ Ω(g(n)), i.e., f(n) ≥ c·g(n) for all n ≥ n₀.
Proof: n ≤ n²/10 when n ≥ 10. If we let c = 1 and n₀ = 10, we have f(n) ≥ c·n ∀ n ≥ n₀. Therefore, by definition, n²/10 = Ω(n).
Theta
Definition: Θ(g(n)) = { f(n) : ∃ positive constants c₁, c₂, and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) ∀ n ≥ n₀ }
Alternatively: f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
Theta
Claim: f(n) = 2n² + n = Θ(n²). Prove by definition.
We need to find three positive constants c₁, c₂, and n₀ such that c₁n² ≤ 2n² + n ≤ c₂n² for all n ≥ n₀. A simple solution is c₁ = 2, c₂ = 3, and n₀ = 1.
More Examples
Prove n² + 3n + lg n is in O(n²).
Need to find c and n₀ such that n² + 3n + lg n ≤ c·n² for all n ≥ n₀.
Proof:
n² + 3n + lg n ≤ n² + 3n² + n,  for n ≥ 1
               ≤ n² + 3n² + n², for n ≥ 1
               = 5n²,           for n ≥ 1
With c = 5 and n₀ = 1, by definition, n² + 3n + lg n ∈ O(n²).
(Alternatively: n² + 3n + lg n ≤ n² + n² + n² = 3n² for n ≥ 10, giving c = 3 and n₀ = 10.)
More Examples
Prove n² + 3n + lg n is in Ω(n²).
Want c and n₀ such that n² + 3n + lg n ≥ c·n² for all n ≥ n₀: since 3n ≥ 0 and lg n ≥ 0 for n ≥ 1, we have n² + 3n + lg n ≥ n² for n ≥ 1 (c = 1, n₀ = 1).
n² + 3n + lg n = O(n²) and n² + 3n + lg n = Ω(n²) ⇒ n² + 3n + lg n = Θ(n²).
(FYI: How to prove log n < n)
Let f(n) = 1 + log n and g(n) = n. Then f′(n) = 1/n and g′(n) = 1, and f(1) = g(1) = 1. Because f′(n) ≤ g′(n) ∀ n ≥ 1, by the racetrack principle we have f(n) ≤ g(n) ∀ n ≥ 1, i.e., 1 + log n ≤ n. Therefore, log n < 1 + log n ≤ n for all n ≥ 1. From now on, we will use the fact that log n < n ∀ n ≥ 1 without proof.
Proof for abstract functions
Example: for any two asymptotically positive functions f(n) and g(n) that satisfy f(n) ∈ O(g(n)), prove that g(n) ∈ Θ(f(n) + g(n)).
Proof: Since f(n) is asymptotically positive, we have g(n) ≤ f(n) + g(n) for sufficiently large n. Therefore, by definition, g(n) ∈ O(f(n) + g(n)). We still need to show g(n) ∈ Ω(f(n) + g(n)).
Proof for abstract functions (cont'd)
Given: f(n) ∈ O(g(n)). Need to show: g(n) ∈ Ω(f(n) + g(n)).
f(n) ∈ O(g(n)) implies that f(n) ≤ c·g(n) for sufficiently large n
⇒ f(n) + g(n) ≤ c·g(n) + g(n) = (c+1)·g(n)
⇒ g(n) ≥ (f(n) + g(n)) / (c+1)
⇒ g(n) ∈ Ω(f(n) + g(n)).
g(n) ∈ O(f(n) + g(n)) and g(n) ∈ Ω(f(n) + g(n)) ⇒ g(n) ∈ Θ(f(n) + g(n)).
Properties of asymptotic notations
(Textbook page 51.)
Transitivity: f(n) = Θ(g(n)) and g(n) = Θ(h(n)) ⇒ f(n) = Θ(h(n)) (holds true for o, O, ω, and Ω as well).
Symmetry: f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)).
Transpose symmetry: f(n) = O(g(n)) if and only if g(n) = Ω(f(n)); f(n) = o(g(n)) if and only if g(n) = ω(f(n)).
Asymptotic notations: Summary
O: Big-Oh (intuitively like ≤)
Ω: Big-Omega (intuitively like ≥)
Θ: Theta (intuitively like =)
o: Small-oh (intuitively like <)
ω: Small-omega (intuitively like >)
Mathematical definitions
O(g(n)) = { f(n) : ∃ positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) ∀ n ≥ n₀ }
Ω(g(n)) = { f(n) : ∃ positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) ∀ n ≥ n₀ }
Θ(g(n)) = { f(n) : ∃ positive constants c₁, c₂, and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) ∀ n ≥ n₀ }