1
Chapter 2 Analysis Framework 2.1 – 2.2
2
Homework Remember questions due Wed. Read sections 2.1 and 2.2, pages 41-50 and 52-59.
3
Agenda: Analysis of Algorithms Blackboard Issues: –Correctness –Time efficiency –Space efficiency –Optimality Approaches: –Theoretical analysis –Empirical analysis
4
Time Efficiency Time efficiency is analyzed by determining the number of repetitions of the basic operation as a function of input size
5
Theoretical analysis of time efficiency Basic operation: the operation that contributes most towards the running time of the algorithm. T(n) ≈ c_op · C(n), where T(n) is the running time, c_op is the execution time of the basic operation, C(n) is the number of times the basic operation is executed, and n is the input size.
6
Input size and basic operation examples
Problem | Input size measure | Basic operation
Search for key in a list of n items | Number of items in the list, n | Key comparison
Multiply two matrices of floating point numbers | Dimensions of the matrices | Floating point multiplication
Compute a^n | n | Floating point multiplication
Graph problem | #vertices and/or #edges | Visiting a vertex or traversing an edge
7
Empirical analysis of time efficiency Select a specific (typical) sample of inputs. Use a physical unit of time (e.g., milliseconds) OR count the actual number of basic operations. Analyze the empirical data.
8
Best-case, average-case, worst-case For some algorithms efficiency depends on the type of input: Worst case: W(n) – maximum over inputs of size n. Best case: B(n) – minimum over inputs of size n. Average case: A(n) – “average” over inputs of size n –Number of times the basic operation will be executed on typical input –NOT the average of worst and best case –Expected number of basic operation repetitions, treated as a random variable under some assumption about the probability distribution of all possible inputs of size n
9
Example: Sequential search Problem: Given a list of n elements and a search key K, find an element equal to K, if any. Algorithm: Scan the list and compare its successive elements with K until either a matching element is found (successful search) or the list is exhausted (unsuccessful search). Worst case. Best case. Average case.
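A minimal C sketch of this algorithm (my own code and naming, not from the slides), with the three cases noted:

/* Return the index of the first element equal to K, or -1 if none. */
int sequential_search(const int a[], int n, int K) {
    for (int i = 0; i < n; i++) {
        if (a[i] == K)      /* basic operation: key comparison */
            return i;       /* successful search */
    }
    return -1;              /* unsuccessful search */
}
/* Worst case:  C_worst(n) = n   (K is last, or not in the list)
   Best case:   C_best(n)  = 1   (K is the first element)
   Average case (under the standard assumption that a successful search
   is equally likely to stop at any position and succeeds with
   probability p): C_avg(n) = p(n+1)/2 + n(1-p). */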
10
Types of formulas for basic operation count Exact formula, e.g., C(n) = n(n-1)/2. Formula indicating order of growth with a specific multiplicative constant, e.g., C(n) ≈ 0.5n^2. Formula indicating order of growth with an unknown multiplicative constant, e.g., C(n) ≈ cn^2.
11
Order of growth Most important: order of growth within a constant multiple as n → ∞. Examples: –How much faster will the algorithm run on a computer that is twice as fast? –How much longer does it take to solve a problem of double the input size? See Table 2.1.
12
Table 2.1
13
Day 4: Agenda O, Θ, Ω. Limits. Definitions. Examples. Code → O(?).
14
Homework Due on Monday 11:59 PM. Electronic submission – see website. Try to log into Blackboard. Finish reading 2.1 and 2.2. Pages 60-61, questions 2, 3, 4, 5, & 9.
15
Asymptotic growth rate A way of comparing functions that ignores constant factors and small input sizes. O(g(n)): class of functions t(n) that grow no faster than g(n). Θ(g(n)): class of functions t(n) that grow at the same rate as g(n). Ω(g(n)): class of functions t(n) that grow at least as fast as g(n). See Figures 2.1, 2.2, 2.3.
16
Big-oh
17
Big-omega
18
Big-theta
19
Using Limits lim(n→∞) t(n)/g(n) = 0 if t(n) grows slower than g(n); = ∞ if t(n) grows faster than g(n); = c > 0 if t(n) grows at the same rate as g(n).
20
L’Hopital’s Rule: if t(n) and g(n) both tend to ∞ (or both to 0) and their derivatives exist, then lim(n→∞) t(n)/g(n) = lim(n→∞) t′(n)/g′(n).
21
Examples: 10n vs. 2n^2; n(n+1)/2 vs. n^2; log_b n vs. log_c n
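How the limit test plays out here (my worked steps, not on the slide): lim 10n / 2n^2 = lim 5/n = 0, so 10n grows slower than 2n^2. lim [n(n+1)/2] / n^2 = lim (n^2 + n)/(2n^2) = 1/2, a positive constant, so n(n+1)/2 ∈ Θ(n^2). log_b n / log_c n = log_b c, a constant for all n, so log_b n ∈ Θ(log_c n) for any bases b, c > 1.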
22
Definition f(n) = O(g(n)) if there exist a positive constant c and a non-negative integer n_0 such that f(n) ≤ c·g(n) for every n > n_0. Examples: 10n is O(2n^2). 5n+20 is O(10n).
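For instance (my choice of constants, not the slide's): 5n + 20 is O(10n) with c = 1 and n_0 = 4, since 5n + 20 ≤ 5n + 5n = 10n for every n ≥ 4; and 10n is O(2n^2) with c = 5 and n_0 = 1, since 10n ≤ 5·2n^2 = 10n^2 for every n ≥ 1.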
23
Basic Asymptotic Efficiency classes 1 – constant; log n – logarithmic; n – linear; n log n – “n log n”; n^2 – quadratic; n^3 – cubic; 2^n – exponential; n! – factorial
24
Non-recursive algorithm analysis Analysis steps: Decide on a parameter n indicating input size. Identify the algorithm’s basic operation. Determine worst, average, and best case for input of size n. Set up a summation for C(n) reflecting the algorithm’s loop structure. Simplify the summation using standard formulas.
25
Example
for (x = 0; x < n; x++)
    a[x] = max(b[x], c[x]);
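Worked count (my note, not on the slide): the basic operation is the max/assignment in the loop body, executed exactly once per iteration, so C(n) = n and this is Θ(n).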
26
Example
for (x = 0; x < n; x++)
    for (y = x; y < n; y++)
        a[x][y] = max(b[x], c[y]);
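Worked count (my note, not on the slide): for each x the inner loop runs n - x times, so C(n) = (n) + (n-1) + … + 1 = n(n+1)/2 ≈ 0.5n^2, i.e., Θ(n^2).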
27
Example
for (x = 0; x < n; x++)
    for (y = 0; y < n/2; y++)
        for (z = 0; z < n/3; z++)
            a[z] = max(a[x], c[y]);
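Worked count (my note, not on the slide): the three loop bounds are independent, so C(n) = n · (n/2) · (n/3) = n^3/6, i.e., Θ(n^3); the constant divisors do not change the cubic class.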
28
Example
y = n;
while (y > 0)
    if (a[y--] == b[y--]) break;
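One way to read this (my note, not spelled out on the slide): each test of the condition decrements y twice, so if no pair ever matches the loop performs about n/2 comparisons – still O(n) in the worst case – while the break gives an O(1) best case when the first pair matches. (Strictly, two unsequenced decrements of y in one expression are undefined behavior in real C; read it as pseudocode.)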
29
Day 5: Agenda Go over the answer to hw2. Try electronic submission. Do some problems on the board. Time permitting: recursive algorithm analysis.
30
Homework Remember to electronically submit hw3 before Tues. morning. Read section 2.3 thoroughly! Pages 61-67.
31
Day 6: Agenda First, what have we learned so far about non-recursive algorithm analysis. Second, log_b n and b^n – the enigma is solved. Third, what is the deal with Abercrombie & Fitch? Fourth, the recursive analysis tool kit.
32
Homework 4 and Exam 1 Last homework before Exam 1. Due on Friday 2/6 (electronic?). Will be returned on Monday 2/9. All solutions will be posted next Monday 2/9. Exam 1: Wed. 2/11.
33
Homework 4 Page 68, questions 4, 5 and 6. Pages 77 and 78, questions 8 and 9. We will do similar example questions all day on Wed 2/4.
34
What have we learned? Non-recursive algorithm analysis. First, identify the problem size. Typically: –a loop counter –an array size –a set size –the size of a value
35
Basic operations Second, identify the basic operation. Usually a small block of code or even a single statement that is executed over and over. Sometimes the basic operation is a comparison hidden inside the loop. Example: while (target != a[x]) x++;
36
Single loops One loop from 1 to n → O(n). Be aware this is the same as: –2 independent loops from 1 to n –c independent loops from 1 to n –A loop from 5 to n-1 –A loop from 0 to n/2 –A loop from 0 to n/c
37
Nested loops
for (x = 0; x < n; x++)
    for (y = 0; y < n; y++)        → O(n^2)
for (x = 0; x < n; x++)
    for (y = 0; y < n; y++)
        for (z = 0; z < n; z++)    → O(n^3)
But remember: we can have c independent nested loops, or loops that run only n/c times – neither changes the efficiency class.
38
Most non-recursive algorithms reduce to one of these efficiency classes: 1 – constant; log n – logarithmic; n – linear; n log n; n^2 – quadratic; n^3 – cubic; 2^n – exponential
39
What else? Best cases often arise when loops terminate early for specific inputs. For worst cases, consider the following: is it possible that a loop will NOT terminate early? Average cases are not the midpoint or average of the best and worst case. –Average cases require us to consider a set of typical inputs. –We won’t really worry too much about average case until we hit more difficult problems.
40
Anything else? Important questions you should be asking: How does log_2 n arise in non-recursive algorithms? How does 2^n arise? “Dr. B., please show me the actual loops that cause this!”
41
Log_2 n
for (x = 0; x < n; x++) {
    Basic operation;
    x = x*2;
}
for every item in the list do
    Basic operation;
    eliminate or discard half the items;
Note: These types of log_2 n loops can be nested inside of O(n), O(n^2), or O(n^k) loops, which leads to n log n, n^2 log n, n^k log n.
42
Log_2 n Which, by the way, is pretty much equivalent to log_b n, ln n, or the magical lg(n). But don’t be mistaken: O(n log n) is different from O(n), even though log n grows so slowly.
43
Last thing: 2^n How does 2^n arise in real algorithms? Let’s consider n^n: how can you program n nested loops? It’s impossible, right? So how the heck does it happen in practice?
44
Think Write an algorithm that prints “*” 2^n times. Is it hard? It depends on how you think.
45
Recursive way
fun(int x) {
    if (x > 0) {
        print “*”;
        fun(x-1);
        fun(x-1);
    }
}
Then call the function: fun(n);
46
Non-recursive way for (x = 0; x < pow(2,n); x++) print “*”; WTF? Be very wary of your input size. If your input size is truly n, then this is truly O(2^n) with just one loop.
47
That’s it! That’s really it. That’s everything I want you to know about non-recursive algorithm analysis. Well, for now.
48
Enigma We know that log_2 n = Θ(log_3 n). But what about 2^n = Θ(3^n)? Here is how I will disprove it… This gives you an idea of how I like to see questions answered.
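One standard way to do the disproof (filling in the “how” myself): lim(n→∞) 2^n/3^n = lim(n→∞) (2/3)^n = 0, so 2^n grows strictly slower than 3^n; it is O(3^n) but not Ω(3^n), and therefore not Θ(3^n).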
49
Algorithm Analysis Recursive Algorithm Toolkit
50
Example Recursive evaluation of n! Definition: n! = 1·2·…·(n-1)·n. Recursive definition of n!: if n=0 then F(n) := 1 else F(n) := F(n-1) * n; return F(n). Recurrence for the number of multiplications M(n) to compute n!: M(n) = M(n-1) + 1 for n > 0, M(0) = 0.
51
Important recurrence types: One (constant) operation reduces problem size by one. T(n) = T(n-1) + c, T(1) = d. Solution: T(n) = (n-1)c + d → linear
52
Important recurrence types: A pass through the input reduces problem size by one. T(n) = T(n-1) + cn, T(1) = d. Solution: T(n) = [n(n+1)/2 – 1]c + d → quadratic
53
Important recurrence types: One (constant) operation reduces problem size by half. T(n) = T(n/2) + c, T(1) = d. Solution: T(n) = c lg n + d → logarithmic
54
Important recurrence types: A pass through the input reduces problem size by half. T(n) = 2T(n/2) + cn, T(1) = d. Solution: T(n) = cn lg n + dn → n log n
55
A general divide-and-conquer recurrence T(n) = aT(n/b) + f(n), where f(n) = Θ(n^k): 1. a < b^k → T(n) = Θ(n^k) 2. a = b^k → T(n) = Θ(n^k lg n) 3. a > b^k → T(n) = Θ(n^(log_b a)) Note: the same results hold with O instead of Θ.
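Quick sanity checks (my examples, not from the slide): T(n) = 2T(n/2) + Θ(n) has a = 2, b = 2, k = 1, so a = b^k and T(n) = Θ(n lg n) – the mergesort shape. T(n) = T(n/2) + Θ(1) has a = 1, b = 2, k = 0, so a = b^k and T(n) = Θ(lg n) – the binary search shape. T(n) = 4T(n/2) + Θ(n) has a = 4 > b^k = 2, so T(n) = Θ(n^(log_2 4)) = Θ(n^2).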
56
Recursive Algorithm Analysis Input: an array of floats a[0…n-1] and an integer counter x
Fun(int x, float a[]) {
    if (x == 0) return a[0];
    if (a[0] > a[x]) swap(a[0], a[x]);
    Fun(x-1, a);
}
57
Example
Fun(int x, float a[]) {
    if (x == 0) return a[0];
    if (a[0] > a[x]) swap(a[0], a[x]);
    Fun(x-1, a);
}
9 5 1 6 3 8   Fun(5, a)
8 5 1 6 3 9   Fun(4, a)
3 5 1 6 8 9   Fun(3, a)
3 5 1 6 8 9   Fun(2, a)
1 5 3 6 8 9   Fun(1, a)
1 5 3 6 8 9   Fun(0, a) → return a[0] = 1
58
Analysis Fun(int x, float a[]) { if (x == 0) return a[0]; if (a[0] > a[x]) swap(a[0], a[x]); Fun(x-1, a); } First, identify the input size. The running time seems to depend on the value of x.
59
Analysis Fun(int x, float a[]) { if (x == 0) return a[0]; if (a[0] > a[x]) swap(a[0], a[x]); Fun(x-1, a); } Second, identify what terminates the recursion. Think of the running time as a function Fun(x). Fun(0) is the base case.
60
Analysis Fun(int x, float a[]) { if (x == 0) return a[0]; if (a[0] > a[x]) swap(a[0], a[x]); Fun(x-1, a); } Third, identify the basic operation. The basic operation could be a constant operation, or it could be embedded in a loop that depends on the input size.
61
Analysis Fun(int x, float a[]) { if (x == 0) return a[0]; if (a[0] > a[x]) swap(a[0], a[x]); Fun(x-1, a); } Fourth, identify the recursive call and how the input size is changed. Warning: the input size reduction may not be part of the recursive call.
62
Analysis Fun(int x, float a[]) { if (x == 0) return a[0]; if (a[0] > a[x]) swap(a[0], a[x]); Fun(x-1, a); } Finally, put all the pieces together. Base case: Fun(0). Recursive structure: Fun(x) = Fun(x-1). Interior complexity: Fun(x) = Fun(x-1) + O(1). Recursive algorithms usually fit one of the 4 basic models.
63
Recursive Algorithm Analysis Input: a vector of integers v
Fun(v[1…n]) {
    if size of v is 1
        return 1
    else
        q = Fun(v[1..n-1]);
        if (v[q] > v[n])
            swap(v[q], v[n]);
        temp = 0;
        for x = 1 to n
            if (v[x] > temp) {
                temp = v[x];
                p = x;
            }
        return p;
}
64
Example
Fun(3 1 4 2 5): q = Fun(3 1 4 2)
Fun(3 1 4 2): q = Fun(3 1 4)
Fun(3 1 4): q = Fun(3 1)
Fun(3 1): q = Fun(3)
Fun(3): return 1
Fun(3 1): v[q] = 3; v[n] = 1; v: 1 3 4 2 5; return 2
Fun(3 1 4): v[q] = 3; v[n] = 4; v: 1 3 4 2 5; return 3
Fun(3 1 4 2): v[q] = 4; v[n] = 2; v: 1 3 2 4 5; return 4
Fun(3 1 4 2 5): v[q] = 4; v[n] = 5; v: 1 3 2 4 5; return 5
Fun(v[1…n]) { if size of v is 1 return 1 else q = Fun(v[1..n-1]); if (v[q] > v[n]) swap(v[q], v[n]); temp = 0; for x = 1 to n if (v[x] > temp) { temp = v[x]; p = x; } return p; }
65
Example First, the input size is the vector size. Fun(v[1…n]) { if size of v is 1 return 1 else q = Fun(v[1..n-1]); if (v[q] > v[n]) swap(v[q], v[n]); temp = 0; for x = 1 to n if (v[x] > temp) { temp = v[x]; p = x; } return p; }
66
Example Second, when the size of the vector is 1, the recursion terminates. Fun(v[1…n]) { if size of v is 1 return 1 else q = Fun(v[1..n-1]); if (v[q] > v[n]) swap(v[q], v[n]); temp = 0; for x = 1 to n if (v[x] > temp) { temp = v[x]; p = x; } return p; }
67
Example Third, the basic operations are not simple. Fun(v[1…n]) { if size of v is 1 return 1 else q = Fun(v[1..n-1]); if (v[q] > v[n]) swap(v[q], v[n]); temp = 0; for x = 1 to n if (v[x] > temp) { temp = v[x]; p = x; } return p; }
68
Example There is an O(1) compare and swap. Fun(v[1…n]) { if size of v is 1 return 1 else q = Fun(v[1..n-1]); if (v[q] > v[n]) swap(v[q], v[n]); temp = 0; for x = 1 to n if (v[x] > temp) { temp = v[x]; p = x; } return p; }
69
Example There is an O(n) loop with a constant number of operations (about 3) inside. Fun(v[1…n]) { if size of v is 1 return 1 else q = Fun(v[1..n-1]); if (v[q] > v[n]) swap(v[q], v[n]); temp = 0; for x = 1 to n if (v[x] > temp) { temp = v[x]; p = x; } return p; }
70
Example Fourth, the recursive call decreases the vector size n by one. Fun(v[1…n]) { if size of v is 1 return 1 else q = Fun(v[1..n-1]); if (v[q] > v[n]) swap(v[q], v[n]); temp = 0; for x = 1 to n if (v[x] > temp) { temp = v[x]; p = x; } return p; }
71
Example Finally, we can describe the running time as T(n) = T(n-1) + O(n), which is the quadratic basic form. Fun(v[1…n]) { if size of v is 1 return 1 else q = Fun(v[1..n-1]); if (v[q] > v[n]) swap(v[q], v[n]); temp = 0; for x = 1 to n if (v[x] > temp) { temp = v[x]; p = x; } return p; }
72
Summary of simple recursive algorithms How does the input size (i.e., the terminating variable) change for each recursive function call? In practice there are two basic options: 1. n → n – 1  2. n → n/2
73
n → n-1 This means the recursion is simply an O(n) loop. This leads to two common cases:
Case #1: fun(n) { if (n==1) quit; O(1); fun(n-1) }  →  O(n)
Case #2: fun(n) { if (n==1) quit; O(n); fun(n-1) }  →  O(n^2)
74
n → n-1 Generic case: fun(n) { if (n==1) quit; O(n^k); fun(n-1) }  →  O(n^(k+1))
75
n → n/2 This means the recursion is simply an O(log n) loop. This leads to three common cases:
Case #3: fun(n) { if (n==1) quit; O(1); fun(n/2) }  →  O(log n)
Case #4: fun(n) { if (n==1) quit; O(n); fun(n/2); fun(n/2) }  →  O(n log n)
Case #5: fun(n) { if (n==1) quit; O(n); fun(n/2) }  →  O(n)
76
Generic Case: T(n) = aT(n/b) + f(n), where f(n) = Θ(n^k): 1. a < b^k → T(n) = Θ(n^k) 2. a = b^k → T(n) = Θ(n^k lg n) 3. a > b^k → T(n) = Θ(n^(log_b a)) Here 1/b is the reduction factor, a is the number of recursive calls, and f(n) is the interior complexity of the recursive function.
77
Multiple Recursive Calls O(2^n) arises from multiple recursive function calls where the input size is not reduced by a factor of ½: fun(n) { if (n==1) quit; O(1); fun(n-1); fun(n-1) }  →  O(2^n)
78
Multiple Recursive Calls fun(n) { if (n==1) quit; O(1); fun(n-1); fun(n-1) }  →  O(2^n) Here the recursion acts as an O(n) loop, but at each level the number of recursive calls doubles. Number of operations = 1 + 2 + 4 + 8 + 16 + … + 2^n = (2^n - 1) + 2^n = O(2^n)
79
Multiple Recursive Calls fun(n) { if (n==1) quit; O(n); fun(n-1); fun(n-1) }  →  O(???)
80
Multiple Recursive Calls fun(n) { if (n==1) quit; O(1); fun(n-1); fun(n-1); fun(n-1) }  →  O(???)
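Not answered on that slide, but one way to work it out: with three calls per level and depth n, level i has 3^i calls each doing O(1) work, so the total is 1 + 3 + 9 + … + 3^(n-1) = (3^n - 1)/2 = O(3^n).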
81
Day 10 Agenda (can you believe it’s day 10?) The last algorithm analysis enigma. Summary of enigmas. Can things like 1.5^n or 1.375^n arise? Homework 4 solutions. Exam review.
82
Last Enigma
fun1(n) { if (n==1) quit; O(1); fun1(n-1); fun1(n-1) }  →  O(2^n)
fun2(n) { if (n==1) quit; O(n); fun2(n-1); fun2(n-1) }  →  O(2^n)
fun3(n,m) { if (n==1) quit; O(m); fun3(n-1,m); fun3(n-1,m) }  →  O(n·2^n) if we call fun3(n,n)
83
Last Enigma fun1(n) { if (n==1) quit; O(1); fun1(n-1); fun1(n-1) }  →  O(2^n) The recursion tree has 1, 2, 4, 8, …, 2^n calls per level, each doing O(1) work: 1 + 2 + 4 + … + 2^n = (2^n - 1) + 2^n = 2(2^n) – 1 = O(2^n)
84
Last Enigma fun2(n) { if (n==1) quit; O(n); fun2(n-1); fun2(n-1) }  →  O(2^n) Level costs: 1(n) + 2(n-1) + 4(n-2) + 8(n-3) + … + 2^n(1) = ??? = O(2^n) You have to see my program to believe it!
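One way to evaluate the sum left as ??? (my arithmetic, indexing the levels 0 through n-1): the total is the sum of 2^i·(n - i) for i = 0 … n-1, which equals n(2^n - 1) - [(n-2)·2^n + 2] = 2·2^n - n - 2, i.e., O(2^n) – the doubling of the number of calls dominates the shrinking per-call work.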
85
Last Enigma fun3(n,m) { if (n==1) quit; O(m); fun3(n-1,m); fun3(n-1,m) }  →  O(n·2^n) if we call fun3(n,n) Level costs: 1·n + 2·n + 4·n + 8·n + … + 2^n·n = (1 + 2 + 4 + … + 2^n)·n = ((2^n - 1) + 2^n)·n = (2(2^n) – 1)·n = O(n·2^n)
86
Summary of Enigmas log grows so slowly that the base makes no difference: log_3 n = Θ(log_2 n), log_100 n = Θ(log_2 n), log_e n = Θ(log_2 n)
87
Summary of Enigmas Exponentials grow so fast that the base makes a huge difference: 2^n ≠ Θ(3^n), (2+0.01)^n ≠ Θ(2^n). However, 2^n = O(3^n), 3^n = Ω(2^n), and 2^(n+1) = 2·2^n = Θ(2^n).
88
Summary of Enigmas Cutting the input size in half recursively creates a log n loop that does NOT increase the efficiency class (except for Case #3, with O(1) internal complexity):
Case #3: fun(n) { if (n==1) quit; O(1); fun(n/2) }  →  O(log n)
Case #5: fun(n) { if (n==1) quit; O(n); fun(n/2) }  →  O(n)
Generic case: fun(n) { if (n==1) quit; O(n^k); fun(n/2) }  →  O(n^k)
89
Summary of Enigmas Here we cut the input size in half (a log n recursion depth), but we spawn two recursive calls at each level. This adds a factor of log n to the internal complexity when that complexity is O(n). However, it adds a factor of n if the internal complexity is O(1), and for O(n^k) internal complexity with k ≥ 2 the class does not change:
Exception: fun(n) { if (n==1) quit; O(1); fun(n/2); fun(n/2) }  →  O(n)
Case #4: fun(n) { if (n==1) quit; O(n); fun(n/2); fun(n/2) }  →  O(n log n)
Generic case: fun(n) { if (n==1) quit; O(n^k); fun(n/2); fun(n/2) }  →  O(n^k)
90
Fibonacci numbers (here is where things get difficult to analyze) The Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, … Fibonacci recurrence: F(n) = F(n-1) + F(n-2), F(0) = 0, F(1) = 1. Another example: A(n) = 3A(n-1) - 2A(n-2), A(0) = 1, A(1) = 3. These are 2nd order linear homogeneous recurrence relations with constant coefficients.
91
How do we handle things like this? fun(n) { if (n==1) quit; O(1); fun(n-1); fun(n-2) }  →  O(???) The simple way is to draw a picture.
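A small C experiment (my own sketch, not the professor's program; the base case is widened to n ≤ 1 so that fun(n-2) terminates) that counts the calls this shape of recursion makes. The call count satisfies C(n) = C(n-1) + C(n-2) + 1 and grows like the golden ratio to the n, roughly 1.618^n – so bases strictly between 1 and 2 really do arise:

#include <stdio.h>

static long calls = 0;   /* counts every recursive invocation */

/* Same shape as the slide's fun: O(1) work, then recurse on n-1 and n-2. */
void fun(int n) {
    calls++;
    if (n <= 1)          /* base case: "quit" */
        return;
    fun(n - 1);
    fun(n - 2);
}

int main(void) {
    for (int n = 5; n <= 30; n += 5) {
        calls = 0;
        fun(n);
        printf("n = %2d  calls = %ld\n", n, calls);
    }
    return 0;
}

Running it shows successive counts growing by a factor that settles near 1.618 – exactly the picture the slide suggests drawing.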
92
Exam 1 Chapters 1 and 2 only. Review hw solutions. Go through the PowerPoint slides and make your cheat sheet.
93
Homework 4 BTW, hw serves three purposes: 1. It helps me gauge if I’m going too fast. 2. It helps improve the grades of those who put forth effort but may have test anxiety. 3. It helps prepare you for the exams. hw1, hw2, hw3 & hw4 = 9 points; exam1 = 10 points.