
2 Big Oh – part 2 CS 244 This presentation requires Sound Enabled Brent M. Dingle, Ph.D. Game Design and Development Program Department of Mathematics, Statistics, and Computer Science University of Wisconsin – Stout 2014

3 Things to Note Homework 2 is Due Soon Homework 3 is Posted on D2L – Do NOT delay in starting it

4 From Last Time – Algorithms Review – Big-Oh Introduction Linear Search Binary Search

5 For Today More on Big-Oh – This extends beyond that presented in Chapter 1 of your book, but start there Side Comment – There are many ways to approach Big-Oh – You will see several of the possible approaches – Some are math/logic based, some more ‘human intuition’ based – Some are faster, less error prone, and better than others – NONE are randomly guessing

6 Marker Slide Any General Questions ? Next up – Searching Arrays Summary Review from last time – Big-Oh The Math side of things – Asymptotic Analysis The Code side of things – Examples, how to get the math function

7 Searching Arrays: Linear Search and Binary Search Searching – Find an element in a large amount of data Typically checking if an array contains a value matching the sought element’s key value So far you have seen at least: – Linear searching (sequential search) – Binary searching Action Figures and Dictionary examples from Previous lecture

8 Sequential (Linear) Search Begins at the start of the list and continues until the item is found or the entire list has been searched – Compare each array element with search key If search key found, return element index If search key not found, return –1 (invalid index) – Works best for small or unsorted arrays – Inefficient for larger arrays Action Figures Previous lecture

9 Binary Search, on Ordered Lists Uses a divide-and-conquer strategy to find an element in a list (eliminates half for each pass) – Compare middle array element to search key If element equals key, then return array index If key is less than the middle element, then repeat search on first half of array If key is greater than the middle element, then repeat search on second half of array Continue search until – Element equals search key (success) – or the remaining sublist is empty (failure) – Efficient for large, sorted arrays Dictionary Happiness Previous lecture

10 Searching: Big-Oh Reference Linear Search – O(n) Binary Search – O(lg n), yet to be proven/shown how spoiler: – (integer-wise) lg n is the number of times n can be divided by 2 before it reaches 1
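A minimal C++ sketch (mine, not from the slides) of that spoiler: counting how many times n can be halved before it reaches 1 gives, integer-wise, lg n, which is why binary search needs only about lg n comparisons.

#include <iostream>

int halvingSteps(int n) {
    int steps = 0;
    while (n > 1) {      // each pass of binary search halves the remaining range
        n = n / 2;       // integer division
        ++steps;
    }
    return steps;        // floor(lg n)
}

int main() {
    int values[] = {8, 16, 1000, 1000000};
    for (int n : values)
        std::cout << "n = " << n << "  halving steps = " << halvingSteps(n) << std::endl;
    // prints 3, 4, 9, 19 -- i.e. floor(lg n)
}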

11 Marker Slide Any Questions On: – Searching Arrays Summary Review from last time Next up – Big-Oh The Math side of things – Asymptotic Analysis – Growth Rate Comparisons (input growth versus time growth) The Code side of things – Examples, how to get the math function

12 Definitions An algorithm is a sequence of steps to solve a problem. – The performance of an algorithm when implemented on a computer may vary depending on the approach used to solve the problem and the actual steps taken. To compare the performance of algorithms, computer scientists use big-O notation. – We will study several algorithms during this course and analyze their performance using big-O notation.

13 Measuring Algorithm Efficiency There are different mechanisms for measuring efficiency. – Measure actual system characteristics: processor time used, memory size used, and execution time (practical) – Measure time to develop the program (developer time) – Analyze the number of operations an algorithm performs to measure its complexity (theoretical) – Analyze the space (memory or disk) that the algorithm uses to perform its computation (theoretical) Often, algorithms make a time vs. space trade-off. That is, the algorithm may run faster if given more space. Most of this class will put focus here (the theoretical analysis)

14 Best, Worst, and Average Case Few algorithms have the exact same performance every time; the performance of an algorithm depends on the inputs it processes. The best case performance of the algorithm is the most efficient execution of the algorithm on the "best" data inputs. The worst case performance of the algorithm is the least efficient execution of the algorithm on the "worst" data inputs. The average case performance of the algorithm is the average efficiency of the algorithm on the set of all data inputs. The analysis of all cases typically expresses efficiency in terms of the input size, n, of the data. Most of this class will put focus here – on worst case

15 Big-Oh: Used for What (again) Big-O notation is a mechanism for quickly communicating the efficiency of an algorithm. – Big-O notation measures the worst case performance of the algorithm by bounding the formula expressing the efficiency. cg(n) bounds f(n)

16 Asymptotic Execution Time Suppose f(n) is a formula describing the exact execution run time of some algorithm for an input size of n. That algorithm is then O( g(n) ) if there exist constants c and n0 such that: f(n) ≤ c*g(n), for all n > n0 For Example… Searching a dictionary is O( log n ) Searching an unordered list is O( n ) cg(n) bounds f(n)

17 Big-Oh Notation Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n0 such that f(n) ≤ cg(n) for n ≥ n0 In other words: a function f(n) is O(g(n)) if f(n) is bounded above by some constant multiple of g(n) for all large n. So, f(n) ≤ cg(n) for all n ≥ n0.

18 Big-Oh Notation Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n0 such that f(n) ≤ cg(n) for n ≥ n0 Example: 2n + 10 is O(n) – 2n + 10 ≤ cn – (c - 2)n ≥ 10 – n ≥ 10/(c - 2) – Pick c = 3 and n0 = 10
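A quick numeric check of that example (a sketch of mine, not from the slides): verify that 2n + 10 ≤ 3n holds for every n ≥ 10, matching the choice c = 3 and n0 = 10.

#include <cassert>

int main() {
    // Brute-force check of the constants c = 3, n0 = 10 for 2n + 10 <= c*n.
    for (int n = 10; n <= 1000000; ++n)
        assert(2 * n + 10 <= 3 * n);    // holds for all n >= n0 = 10
    return 0;                           // no assertion fires
}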

19 7 Functions Seven functions commonly used to examine the complexity of an algorithm are: – constant, log2 n, n, n log2 n, n^2, n^3, 2^n These are listed from lowest complexity to highest complexity – More details follow In this class the subscript 2 may be omitted: log2 n → log n, but in the context of this class log n should be taken to mean log base 2 of n, unless explicitly stated otherwise. You may also see log2 n abbreviated lg n. Note your calculator will not understand this

20 Common Big-Oh Functions Seven functions that often appear in algorithm analysis: – Constant ≈ 1 – Logarithmic ≈ log n – Linear ≈ n – N-Log-N ≈ n log n – Quadratic ≈ n^2 – Cubic ≈ n^3 – Exponential ≈ 2^n

21 Common Big-Oh Functions Seven functions that often appear in algorithm analysis: – Constant ≈ 1 – Logarithmic ≈ log n – Linear ≈ n – N-Log-N ≈ n log n – Quadratic ≈ n^2 – Cubic ≈ n^3 – Exponential ≈ 2^n

22 Running Time Comparisons [table of actual running times for the common growth functions at increasing input sizes; * = just too big of a number to calculate]

23 Ranking of Execution Times (slowest at top, fastest at bottom: way too long, kinda slow, ok, fast)
function       common name
n!             factorial
2^n            exponential
n^d, d > 3     polynomial
n^3            cubic
n^2            quadratic
n log n        n-log-n
n              linear
sqrt(n)        square root
log n          logarithmic
1              constant

24 Constant Factors and Big Oh With regard to Big Oh Analysis The growth rate is not affected by – constant factors, c, or – lower-order terms Examples – 100n + 10^5 is a linear function, O(n) (100 is the constant factor, 10^5 the lower-order term) – 40n^2 + 10^8 n is a quadratic function, O(n^2) (40 is the constant factor, 10^8 n the lower-order term)

25 Big Oh Examples: Applying Definition 7n - 2 is O(n): need c > 0 and n0 ≥ 1 such that 7n - 2 ≤ cn for n ≥ n0; this is true for c = 7 and n0 = 1. 3n^3 + 20n^2 + 5 is O(n^3): need c > 0 and n0 ≥ 1 such that 3n^3 + 20n^2 + 5 ≤ cn^3 for n ≥ n0; this is true for c = 4 and n0 = 21. 3 log n + 5 is O(log n): need c > 0 and n0 ≥ 1 such that 3 log n + 5 ≤ c log n for n ≥ n0; this is true for c = 8 and n0 = 2.

26 Marker Slide Any Questions On: – Searching Arrays – Big-Oh The Math side of things – Asymptotic Analysis Next up – Big-Oh The Math side of things – Growth Rate Comparisons The Code side of things – Nested For-Loops, Sigma Notation – Practice Exercises – Book Example

27 Big-Oh: Relates to Growth Rate As Big-Oh notation is based on asymptotic analysis – Big-Oh notation gives an upper bound on the growth rate of a function – The statement “f(n) is O(g(n))” means that the growth rate of f(n) is no more than the growth rate of g(n) ( f(n) ≤ cg(n) for n ≥ n0, which roughly corresponds to comparing f′(n) against c g′(n) ) Thus – We can use the big-Oh notation to rank functions according to their growth rate
                   f(n) is O(g(n))   g(n) is O(f(n))
g(n) grows more    Yes               No
f(n) grows more    No                Yes
Same growth        Yes               Yes
Are we allowed to use derivatives in a Computer Science Class? YES!

28 Summarizing: Why Growth Rate Matters
if runtime is...   time for n + 1        time for 2n          time for 4n
c lg n             c lg (n + 1)          c (lg n + 1)         c (lg n + 2)
c n                c (n + 1)             2c n                 4c n
c n lg n           ~ c n lg n + c n      2c n lg n + 2c n     4c n lg n + 4c n
c n^2              ~ c n^2 + 2c n        4c n^2               16c n^2
c n^3              ~ c n^3 + 3c n^2      8c n^3               64c n^3
c 2^n              c 2^(n+1)             c 2^(2n)             c 2^(4n)
consider O(n^2). Aside: c is just a constant. Pretend it is 1 and make it go away if it helps

29 Summarizing: Why Growth Rate Matters
if runtime is...   time for n + 1        time for 2n          time for 4n
c lg n             c lg (n + 1)          c (lg n + 1)         c (lg n + 2)
c n                c (n + 1)             2c n                 4c n
c n lg n           ~ c n lg n + c n      2c n lg n + 2c n     4c n lg n + 4c n
c n^2              ~ c n^2 + 2c n        4c n^2               16c n^2
c n^3              ~ c n^3 + 3c n^2      8c n^3               64c n^3
c 2^n              c 2^(n+1)             c 2^(2n)             c 2^(4n)
consider O(n^2): where input size doubles then runtime quadruples. Aside: c is just a constant. Pretend it is 1 and make it go away if it helps

30 Summarizing: Why Growth Rate Matters
if runtime is...   time for n + 1        time for 2n          time for 4n
c lg n             c lg (n + 1)          c (lg n + 1)         c (lg n + 2)
c n                c (n + 1)             2c n                 4c n
c n lg n           ~ c n lg n + c n      2c n lg n + 2c n     4c n lg n + 4c n
c n^2              ~ c n^2 + 2c n        4c n^2               16c n^2
c n^3              ~ c n^3 + 3c n^2      8c n^3               64c n^3
c 2^n              c 2^(n+1)             c 2^(2n)             c 2^(4n)
consider O(lg n)

31 Summarizing: Why Growth Rate Matters
if runtime is...   time for n + 1        time for 2n          time for 4n
c lg n             c lg (n + 1)          c (lg n + 1)         c (lg n + 2)
c n                c (n + 1)             2c n                 4c n
c n lg n           ~ c n lg n + c n      2c n lg n + 2c n     4c n lg n + 4c n
c n^2              ~ c n^2 + 2c n        4c n^2               16c n^2
c n^3              ~ c n^3 + 3c n^2      8c n^3               64c n^3
c 2^n              c 2^(n+1)             c 2^(2n)             c 2^(4n)
consider O(lg n): where input size doubles then runtime increases by “1 step”

32 Examples: Running Time Comparisons i.e. 8n versus n Say it takes 3 minutes for an input size of 10 (i.e. n = 10) Now if we had an input size of 80 (i.e. 8n) How long would it take? For a linear, O(n), algorithm: 8 times longer = 3*8 = 24 minutes 8 times LONGER

33 Examples: Running Time Comparisons i.e. 8n versus n Say it takes 3 minutes for an input size of 10 (i.e. n = 10) Now if we had an input size of 80 (i.e. 8n) How long would it take? For a quadratic, O(n^2), algorithm: 64 times longer = 3*64 = 192 minutes 64 times LONGER 8 times LONGER

34 Examples: Running Time Comparisons i.e. 8n versus n Say it takes 3 minutes for an input size of 10 (i.e. n = 10) Now if we had an input size of 80 (i.e. 8n) How long would it take? For a constant-time, O(1), algorithm: 3 * 1 = 3 minutes same time 8 times LONGER 64 times LONGER

35 Examples: Running Time Comparisons i.e. 8n versus n And these follow in similar fashion same time 8 times LONGER 64 times LONGER Be careful with the logarithmic one, O(lg n)… it is not a “times” longer … but rather a “plus steps” (lg(8n) = lg n + 3, so only 3 more steps)

36 Marker Slide Any Questions On: – Searching Arrays – Big-Oh The Math side of things – Asymptotic Analysis – Growth Rate Comparisons Next up – Big-Oh The Code side of things – Nested For-Loops, Sigma Notation – Practice Exercises – Book Example

37 Big-Oh Example Determine the performance using big-O notation for the following code based on the number of cout executions: Determine the number of operations performed: – 1) The inner loop counts from 1 to n => n operations – 2) The outer loop counts from 1 to n => n operations – 3) The outer loop performs the inner loop n times. – 4) Thus, the cout statement is executed n * n times = O(n 2 ). for (int i =1; i <= n; i++) for (int j=1; j <= n; j++) cout << j << endl;
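A small sketch (not from the slides) that replaces the cout with a counter, confirming the n * n executions:

#include <iostream>

int main() {
    int n = 100;                       // sample input size
    long long count = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++)
            ++count;                   // stands in for the cout statement
    std::cout << count << std::endl;   // prints 10000 = n * n
}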

38 Sigma Summation To more formally analyze algorithms we typically need to perform mathematical summations. The summation sign is used to denote the summation of a large series of numbers. The summation sign is the Greek letter sigma (Σ). Example: Σ (i = 1 to n) i = 1 + 2 + 3 + … + n

39 Sigma Summation Rules The sum of numbers from 1 to n has a simple formula: Σ (i = 1 to n) i = n(n + 1)/2 (this would be a good formula to have memorized) Multiplied constants can be moved outside the summation: Σ (i = 1 to n) c·f(i) = c · Σ (i = 1 to n) f(i) Added constants can be summed separately: Σ (i = 1 to n) (c + f(i)) = c·n + Σ (i = 1 to n) f(i)
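A quick sanity check of the first rule (a sketch of mine, not part of the slides): compare the loop sum 1 + 2 + … + n against the closed form n(n + 1)/2.

#include <cassert>

int main() {
    for (long long n = 1; n <= 2000; ++n) {
        long long loopSum = 0;
        for (long long i = 1; i <= n; ++i)
            loopSum += i;                      // 1 + 2 + ... + n the slow way
        assert(loopSum == n * (n + 1) / 2);    // the formula to memorize
    }
    return 0;                                  // no assertion fires
}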

40 Example: Using the Sum Using summations determine the Big-Oh of the following code: for (int i =1; i <= n; i++) for (int j=1; j <= n; j++) cout << j << endl; Executes n times (worst case), Uses n units of time n

41 Example: Using the Sum Using summations determine the Big-Oh of the following code: for (int i =1; i <= n; i++) for (int j=1; j <= n; j++) cout << j << endl; Executes n times (worst case), Uses n units of time n n

42 Example: Using the Sum Using summations determine the Big-Oh of the following code: for (int i =1; i <= n; i++) for (int j=1; j <= n; j++) cout << j << endl; Constant time: Mark it as using ONE unit of time 1 n n

43 Example: Using the Sum Using summations determine the Big-Oh of the following code: for (int i =1; i <= n; i++) for (int j=1; j <= n; j++) cout << j << endl; The cout total is Σ (i = 1 to n) Σ (j = 1 to n) 1. Simplifies to n: the inner sum Σ (j = 1 to n) 1 = n

44 Example: Using the Sum Using summations determine the Big-Oh of the following code: for (int i =1; i <= n; i++) for (int j=1; j <= n; j++) cout << j << endl; Simplifies to n, leaving Σ (i = 1 to n) n

45 Example: Using the Sum Using summations determine the Big-Oh of the following code: for (int i =1; i <= n; i++) for (int j=1; j <= n; j++) cout << j << endl; n is a ‘constant’ take it out of the summation

46 Example: Using the Sum Using summations determine the Big-Oh of the following code: for (int i =1; i <= n; i++) for (int j=1; j <= n; j++) cout << j << endl; n is a ‘constant’ take it out of the summation, giving n · Σ (i = 1 to n) 1

47 Example: Using the Sum Using summations determine the Big-Oh of the following code: for (int i =1; i <= n; i++) for (int j=1; j <= n; j++) cout << j << endl; Simplifies to n: Σ (i = 1 to n) 1 = n

48 Example: Using the Sum Using summations determine the Big-Oh of the following code: for (int i =1; i <= n; i++) for (int j=1; j <= n; j++) cout << j << endl; Simplifies to n, leaving n · n

49 Example: Using the Sum Using summations determine the Big-Oh of the following code: for (int i =1; i <= n; i++) for (int j=1; j <= n; j++) cout << j << endl; Multiply out

50 Example: Using the Sum Using summations determine the Big-Oh of the following code: for (int i =1; i <= n; i++) for (int j=1; j <= n; j++) cout << j << endl; Multiply out: n · n = n^2

51 Example: Using the Sum Using summations determine the Big-Oh of the following code: for (int i =1; i <= n; i++) for (int j=1; j <= n; j++) cout << j << endl; Math function for this is: n^2 Algorithm is O(n^2)

52 Marker Slide Any Questions On: – Searching Arrays – Big-Oh The Math side of things – Asymptotic Analysis – Growth Rate Comparisons The Code side of things – Nested For-Loops, Sigma Notation Next up – Big-Oh The Code side of things – Practice Exercises – Book Example

53 Big-O Exercises: Warm Up 1. What is the Big-Oh for the following formula? 4n^3 + 3n^2 + 6n Remove lower order terms Eliminate constants O(n^3)

54 Big-O Exercises: Warm Up 2. What is the Big-Oh for the following formula? n + n * (log n) Remove lower order terms Eliminate constants O(n log n)

55 Big-O Exercises: Code 3. Analyze using Big-Oh and Summations (count number of cout calls) for (int i =1; i <= n; i++) for (int j=1; j <= i; j++) cout << j << endl; i = 1 → inner loop runs 1 time i = 2 → inner loop runs 2 times i = 3 → inner loop runs 3 times i = 4 → inner loop runs 4 times : : i = n → inner loop runs n times 1 + 2 + 3 + 4 + … + n = ½n^2 + ½n Remove lower order terms, eliminate constants: O(n^2)
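A sketch (mine, not part of the exercise) that counts the cout calls of this triangular loop and compares them with ½n^2 + ½n:

#include <iostream>

int main() {
    long long n = 100;
    long long count = 0;
    for (long long i = 1; i <= n; i++)
        for (long long j = 1; j <= i; j++)   // inner bound is i, not n
            ++count;                         // stands in for the cout call
    std::cout << count << " vs formula " << (n * n + n) / 2 << std::endl;
    // both print 5050 for n = 100
}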

56 Big-O Exercises: Code 4. Analyze using Big-Oh and Summations int i=1, j=1; while (i <= 2*n) { cout << i << endl; j=1; while (j <= n/2) { cout << j; cout << j*2 << endl; j++; } i++; } Assume n = 8: Outer loop runs 16 times, Inner loop runs 4 times each time, 16*4 = 64 = 8^2 Assume n = 10: Outer loop runs 20 times, Inner loop runs 5 times each time, 20*5 = 100 = 10^2 Assume n = 9: Outer loop runs 18 times, Inner loop runs 4 times each time, 18*4 = 72 = 9*8 < 81 = 9^2 Assume n = 1024: Outer loop runs 2048 times, Inner loop runs 512 times each time, 2048*512 = 1,048,576 = 1024^2 Wait a minute… Assume n = n: Outer loop runs 2n times, Inner loop runs n/2 times each time, 2n * n/2 = n^2 O(n^2)
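A sketch (mine, not part of the exercise) that counts the inner-loop passes for a few values of n, matching the hand counts above:

#include <iostream>

long long innerPasses(int n) {
    long long count = 0;
    int i = 1;
    while (i <= 2 * n) {          // outer loop: 2n passes
        int j = 1;
        while (j <= n / 2) {      // inner loop: n/2 passes (integer division)
            ++count;              // one unit per inner-loop pass, as counted on the slide
            j++;
        }
        i++;
    }
    return count;
}

int main() {
    int values[] = {8, 9, 10, 1024};
    for (int n : values)
        std::cout << "n = " << n << "  passes = " << innerPasses(n) << std::endl;
    // prints 64, 72, 100, 1048576 -- roughly 2n * (n/2) = n^2
}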

57 Marker Slide Any Questions On: – Searching Arrays – Big-Oh The Math side of things – Asymptotic Analysis – Growth Rate Comparisons The Code side of things – Nested For-Loops, Sigma Notation – Practice Exercises Next up – Big-Oh The Code side of things – Book Example More Examples

58 Big-Oh Rules (yet another refresher) If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e., 1. Drop lower-order terms Use the smallest possible class of functions – Say “2n is O(n)” instead of “2n is O(n^2)”

59 Big-Oh Rules (yet another refresher) If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e., 1. Drop lower-order terms 2. Drop constant factors Use the smallest possible class of functions – Say “2n is O(n)” instead of “2n is O(n^2)” Use the simplest expression of the class – Say “3n + 5 is O(n)” instead of “3n + 5 is O(3n)”

60 Big-Oh Constant Time: O(1) We assume most primitive operations – addition, multiplication, assignment ops, etc Can be performed in constant time, c – The exact number does not really matter Constants get knocked out during Big-Oh analysis

61 Constant Time Loops A loop that performs a constant number of iterations is still considered a constant // Initialize a deck by creating all 52 cards Deck::Deck() { topCard = 0; for (int i = 1; i <= 13; i++) { Card c1(diamond, i), c2(spade, i), c3(heart, i), c4(club, i); cards[topCard++] = c1; cards[topCard++] = c2; cards[topCard++] = c3; cards[topCard++] = c4; } } Everything is constant in the loop And the loop runs from 1 to 13, always

62 Computing Prefix Averages We further illustrate asymptotic analysis with two algorithms for prefix averages Define: – The i-th prefix average of an array X is the average of the first (i + 1) elements of X: A[i] = (X[0] + X[1] + … + X[i]) / (i + 1)

63 Prefix Averages (Quadratic) The following algorithm computes prefix averages in quadratic time by applying the definition Algorithm prefixAverages1(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] for i ← 0 to n - 1 do s ← X[0] for j ← 1 to i do s ← s + X[j] A[i] ← s / (i + 1) return A Yet another way of looking at things: Count the operations This initializes n elements This will take a total of n operations

64 Prefix Averages (Quadratic) The following algorithm computes prefix averages in quadratic time by applying the definition Algorithm prefixAverages1(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] for i ← 0 to n - 1 do [n] s ← X[0] for j ← 1 to i do s ← s + X[j] A[i] ← s / (i + 1) return A This loop runs n times This will take a total of n operations

65 Prefix Averages (Quadratic) The following algorithm computes prefix averages in quadratic time by applying the definition Algorithm prefixAverages1(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] for i ← 0 to n - 1 do [n] s ← X[0] [n] for j ← 1 to i do s ← s + X[j] A[i] ← s / (i + 1) return A This line executes once for each iteration of the outer loop (i.e. n times) So this will take a total of n operations

66 Prefix Averages (Quadratic) The following algorithm computes prefix averages in quadratic time by applying the definition Algorithm prefixAverages1(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] for i ← 0 to n - 1 do [n] s ← X[0] [n] for j ← 1 to i do [1 + 2 + … + (n-1)] s ← s + X[j] A[i] ← s / (i + 1) return A This line executes once for the first iteration of the outer loop (i == 1) Thus using 1 operation

67 Prefix Averages (Quadratic) The following algorithm computes prefix averages in quadratic time by applying the definition Algorithm prefixAverages1(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] for i ← 0 to n - 1 do [n] s ← X[0] [n] for j ← 1 to i do [1 + 2 + … + (n-1)] s ← s + X[j] A[i] ← s / (i + 1) return A This line executes twice for the second iteration of the outer loop (i == 2) Thus using 2 more operations

68 Prefix Averages (Quadratic) The following algorithm computes prefix averages in quadratic time by applying the definition Algorithm prefixAverages1(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] for i ← 0 to n - 1 do [n] s ← X[0] [n] for j ← 1 to i do [1 + 2 + … + (n-1)] s ← s + X[j] A[i] ← s / (i + 1) return A This line executes 3 times for the third iteration of the outer loop (i == 3) Thus using 3 more operations

69 Prefix Averages (Quadratic) The following algorithm computes prefix averages in quadratic time by applying the definition Algorithm prefixAverages1(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] for i ← 0 to n - 1 do [n] s ← X[0] [n] for j ← 1 to i do [1 + 2 + … + (n-1)] s ← s + X[j] A[i] ← s / (i + 1) return A This line executes (n-1) times for the last iteration of the outer loop (i == n-1) Thus using (n-1) more operations

70 Prefix Averages (Quadratic) The following algorithm computes prefix averages in quadratic time by applying the definition Algorithm prefixAverages1(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] for i ← 0 to n - 1 do [n] s ← X[0] [n] for j ← 1 to i do [1 + 2 + … + (n-1)] s ← s + X[j] [1 + 2 + … + (n-1)] A[i] ← s / (i + 1) return A This line is just like the for-j loop it first uses 1, then 2 more, then 3 more, … and eventually uses (n-1) more operations

71 Prefix Averages (Quadratic) The following algorithm computes prefix averages in quadratic time by applying the definition Algorithm prefixAverages1(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] for i ← 0 to n - 1 do [n] s ← X[0] [n] for j ← 1 to i do [1 + 2 + … + (n-1)] s ← s + X[j] [1 + 2 + … + (n-1)] A[i] ← s / (i + 1) [n] return A This line executes once for each iteration of the outer loop (i.e. n times) So this will take a total of n operations

72 Prefix Averages (Quadratic) The following algorithm computes prefix averages in quadratic time by applying the definition Algorithm prefixAverages1(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] for i ← 0 to n - 1 do [n] s ← X[0] [n] for j ← 1 to i do [1 + 2 + … + (n-1)] s ← s + X[j] [1 + 2 + … + (n-1)] A[i] ← s / (i + 1) [n] return A [1] This line executes only one time It is outside both loops So this will take a total of 1 operation

73 Prefix Averages (Quadratic) The following algorithm computes prefix averages in quadratic time by applying the definition Algorithm prefixAverages1(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] for i ← 0 to n - 1 do [n] s ← X[0] [n] for j ← 1 to i do [1 + 2 + … + (n-1)] s ← s + X[j] [1 + 2 + … + (n-1)] A[i] ← s / (i + 1) [n] return A [1] Clearly either of these can be taken as the LARGEST number of operations. So we will use 1 + 2 + 3 + … + (n-1) to characterize the runtime of this algorithm… see next page

74 Prefix Averages: End of 1st Method The running time of prefixAverages1 is O(1 + 2 + … + n) The sum of the first n integers is n(n + 1)/2 Thus, algorithm prefixAverages1 runs in O(n^2) time Algorithm prefixAverages1(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] for i ← 0 to n - 1 do [n] s ← X[0] [n] for j ← 1 to i do [1 + 2 + … + (n-1)] s ← s + X[j] [1 + 2 + … + (n-1)] A[i] ← s / (i + 1) [n] return A [1] This is from the formula you memorized: Σ (i = 1 to n) i = n(n + 1)/2
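A runnable C++ translation of prefixAverages1 (my sketch of the pseudocode above, not code from the book):

#include <iostream>
#include <vector>

// Quadratic prefix averages: recompute each prefix sum from scratch.
std::vector<double> prefixAverages1(const std::vector<int>& X) {
    int n = X.size();
    std::vector<double> A(n);
    for (int i = 0; i < n; i++) {        // outer loop: n iterations
        int s = X[0];
        for (int j = 1; j <= i; j++)     // inner loop: i iterations, 1 + 2 + ... + (n-1) in total
            s = s + X[j];
        A[i] = double(s) / (i + 1);
    }
    return A;                            // O(n^2) overall
}

int main() {
    for (double a : prefixAverages1({2, 4, 6, 8}))
        std::cout << a << " ";           // prints 2 3 4 5
    std::cout << std::endl;
}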

75 Prefix Averages: 2 nd Implementation This is a transition slide Do you feel things transitioning? Move onto the 2 nd implementation of the Prefix Averages Code

76 Prefix Averages (Linear) The following algorithm computes prefix averages in linear time by keeping a running sum Algorithm prefixAverages2(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers s ← 0 for i ← 0 to n - 1 do s ← s + X[i] A[i] ← s / (i + 1) return A

77 Prefix Averages (Linear) The following algorithm computes prefix averages in linear time by keeping a running sum Algorithm prefixAverages2(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] s ← 0 for i ← 0 to n - 1 do s ← s + X[i] A[i] ← s / (i + 1) return A

78 Prefix Averages (Linear) The following algorithm computes prefix averages in linear time by keeping a running sum Algorithm prefixAverages2(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] s ← 0 [1] for i ← 0 to n - 1 do s ← s + X[i] A[i] ← s / (i + 1) return A

79 Prefix Averages (Linear) The following algorithm computes prefix averages in linear time by keeping a running sum Algorithm prefixAverages2(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] s ← 0 [1] for i ← 0 to n - 1 do [n] s ← s + X[i] A[i] ← s / (i + 1) return A

80 Prefix Averages (Linear) The following algorithm computes prefix averages in linear time by keeping a running sum Algorithm prefixAverages2(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] s ← 0 [1] for i ← 0 to n - 1 do [n] s ← s + X[i] [n] A[i] ← s / (i + 1) return A

81 Prefix Averages (Linear) The following algorithm computes prefix averages in linear time by keeping a running sum Algorithm prefixAverages2(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] s ← 0 [1] for i ← 0 to n - 1 do [n] s ← s + X[i] [n] A[i] ← s / (i + 1) [n] return A

82 Prefix Averages (Linear) The following algorithm computes prefix averages in linear time by keeping a running sum Algorithm prefixAverages2(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] s ← 0 [1] for i ← 0 to n - 1 do [n] s ← s + X[i] [n] A[i] ← s / (i + 1) [n] return A [1]

83 Prefix Averages (Linear) The following algorithm computes prefix averages in linear time by keeping a running sum Algorithm prefixAverages2(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] s ← 0 [1] for i ← 0 to n - 1 do [n] s ← s + X[i] [n] A[i] ← s / (i + 1) [n] return A [1] Largest of all these is n

84 Prefix Averages (Linear) The following algorithm computes prefix averages in linear time by keeping a running sum Algorithm prefixAverages2(X, n) Input array X of n integers Output array A of prefix averages of X #operations: A ← new array of n integers [n] s ← 0 [1] for i ← 0 to n - 1 do [n] s ← s + X[i] [n] A[i] ← s / (i + 1) [n] return A [1] Largest of all these is n Algorithm prefixAverages2 runs in O(n) time
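And a runnable C++ translation of prefixAverages2 (again my sketch, not code from the book), showing the running-sum trick:

#include <iostream>
#include <vector>

// Linear prefix averages: keep a running sum instead of recomputing it.
std::vector<double> prefixAverages2(const std::vector<int>& X) {
    int n = X.size();
    std::vector<double> A(n);
    int s = 0;                           // running sum
    for (int i = 0; i < n; i++) {        // single loop: n iterations
        s = s + X[i];
        A[i] = double(s) / (i + 1);
    }
    return A;                            // O(n) overall
}

int main() {
    for (double a : prefixAverages2({2, 4, 6, 8}))
        std::cout << a << " ";           // prints 2 3 4 5 -- same output, linear time
    std::cout << std::endl;
}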

85 So Many Ways to Big-Oh? You have now seen a variety of ways to approach calculating Big-Oh for various algorithms and source code. Why so many ways? – Are they really different? – Or just different implementations of the same process? – Think of it as a “meta” example of algorithm implementation – And hopefully at least one of the ways makes “the most sense” to you

86 Marker Slide Any Questions On: – Searching Arrays – Big-Oh The Math side of things – Asymptotic Analysis – Growth Rate Comparisons The Code side of things – Nested For-Loops, Sigma Notation – Practice Exercises – Book Example Next up – Big-Oh More Examples

87 Coding example #1 for ( i=0; i < n; i++ ) m += i; Operations

88 Coding example #1 for ( i=0; i < n; i++ ) m += i; Operations n

89 Coding example #1 for ( i=0; i < n; i++ ) m += i; Operations n

90 Coding example #1 for ( i=0; i < n; i++ ) m += i; Operations n Answer: O(n)

91 Coding example #2 for ( i=0; i < n; i++ ) for( j=0; j < n; j++ ) sum[i] += entry[i][j]; Operations

92 Coding example #2 for ( i=0; i < n; i++ ) for( j=0; j < n; j++ ) sum[i] += entry[i][j]; Operations n

93 Coding example #2 for ( i=0; i < n; i++ ) for( j=0; j < n; j++ ) sum[i] += entry[i][j]; Operations n n^2

94 Coding example #2 for ( i=0; i < n; i++ ) for( j=0; j < n; j++ ) sum[i] += entry[i][j]; Operations n n^2

95 Coding example #2 for ( i=0; i < n; i++ ) for( j=0; j < n; j++ ) sum[i] += entry[i][j]; Operations n n^2 Answer: O(n^2)

96 Coding example #3 for ( i=0; i < n; i++ ) for( j=0; j < i ; j++ ) m += j; Operations

97 Coding example #3 for ( i=0; i < n; i++ ) for( j=0; j < i ; j++ ) m += j; Operations n

98 Coding example #3 for ( i=0; i < n; i++ ) for( j=0; j < i ; j++ ) m += j; i = 1…. j-loop goes 1 i = 2…. j-loop goes 2 i = 3…. j-loop goes 3 : : i = (n-1) …. j-loop goes (n-1) 1 +2 +3 : + (n-1) Operations n 1+2+..+n-1

99 Coding example #3 for ( i=0; i < n; i++ ) for( j=0; j < i ; j++ ) m += j; Operations n 1+2+..+n-1

100 Coding example #3 for ( i=0; i < n; i++ ) for( j=0; j < i ; j++ ) m += j; Operations n 1+2+..+n-1 Answer: O(n^2)

101 Coding example #4 int i = n; while (i > 0) { tot += i; i = i / 2; } Assume positive integers so 1 / 2 = 0.5 => 0 This looks tricky

102 Coding example #4 int i = n; while (i > 0) { tot += i; i = i / 2; } Assume positive integers so 1 / 2 = 0.5 => 0 i starts equal to n We divide by 2 until i <= 0 So the real question is: How many times can n be divided by 2 ?

103 Coding example #4 int i = n; while (i > 0) { tot += i; i = i / 2; } Assume positive integers so 1 / 2 = 0.5 => 0 How many times can n be divided by 2 ? Or rather: Find k such that n - 2^k <= 0, and k+1 will be the number of times this loop executes. The +1 is by experiment: n = 1, n - 2^0 = 0, loop executes 1 time; n = 2, n - 2^1 = 0, loop executes 2 times. It really does not matter though - it will not change the asymptotic bound

104 Coding example #4 int i = n; while (i > 0) { tot += i; i = i / 2; } Assume positive integers so 1 / 2 = 0.5 => 0 Answer: O(lg n) Find k such that n - 2^k <= 0, i.e. n <= 2^k, i.e. lg n <= k, i.e. k >= lg n. Pick the minimum k value = lg n. Loop executes (lg n) + 1 times
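A sketch (mine, not from the slides) that counts the iterations of this halving loop; the count comes out to floor(lg n) + 1, consistent with the O(lg n) bound above.

#include <cmath>
#include <iostream>

int main() {
    int values[] = {1, 2, 8, 9, 1024, 1000000};
    for (int n : values) {
        long long tot = 0;
        int i = n, iterations = 0;
        while (i > 0) {            // the loop from the slide
            tot += i;
            i = i / 2;             // integer division
            ++iterations;
        }
        std::cout << "n = " << n << "  iterations = " << iterations
                  << "  floor(lg n) + 1 = " << (int)std::floor(std::log2(n)) + 1 << std::endl;
    }
}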

105 Equivalent of Coding example #4 i = 1; while (i < n) { tot += i; i = i * 2; } Answer: O(lg n) Find k such that 2^k >= n, so k >= lg n. Pick the minimum k value = lg n. Loop executes (lg n) times

106 Coding example #5 for ( i=0; i < n; i++ ) for( j=0; j < n; j++ ) for( k=0; k < n; k++ ) sum[i][j] += entry[i][j][k]; Operations

107 Coding example #5 for ( i=0; i < n; i++ ) for( j=0; j < n; j++ ) for( k=0; k < n; k++ ) sum[i][j] += entry[i][j][k]; Operations n

108 Coding example #5 for ( i=0; i < n; i++ ) for( j=0; j < n; j++ ) for( k=0; k < n; k++ ) sum[i][j] += entry[i][j][k]; Operations n n^2

109 Coding example #5 for ( i=0; i < n; i++ ) for( j=0; j < n; j++ ) for( k=0; k < n; k++ ) sum[i][j] += entry[i][j][k]; Operations n n^2 n^3

110 Coding example #5 for ( i=0; i < n; i++ ) for( j=0; j < n; j++ ) for( k=0; k < n; k++ ) sum[i][j] += entry[i][j][k]; Operations n n^2 n^3

111 Coding example #5 for ( i=0; i < n; i++ ) for( j=0; j < n; j++ ) for( k=0; k < n; k++ ) sum[i][j] += entry[i][j][k]; Operations n n^2 n^3 Answer: O(n^3)

112 Coding example #6 for ( i=0; i < n; i++ ) for( j=0; j < n; j++ ) sum[i] += entry[i][j][0]; for ( i=0; i < n; i++ ) for( k=0; k < n; k++ ) sum[i] += entry[i][0][k]; Calculate the total These 4 for-loops are the function. Operations n

113 Coding example #6 for ( i=0; i < n; i++ ) for( j=0; j < n; j++ ) sum[i] += entry[i][j][0]; for ( i=0; i < n; i++ ) for( k=0; k < n; k++ ) sum[i] += entry[i][0][k]; Calculate the total These 4 for-loops are the function. Operations n n^2

114 Coding example #6 for ( i=0; i < n; i++ ) for( j=0; j < n; j++ ) sum[i] += entry[i][j][0]; for ( i=0; i < n; i++ ) for( k=0; k < n; k++ ) sum[i] += entry[i][0][k]; Calculate the total These 4 for-loops are the function. Operations n n^2

115 Coding example #6 for ( i=0; i < n; i++ ) for( j=0; j < n; j++ ) sum[i] += entry[i][j][0]; for ( i=0; i < n; i++ ) for( k=0; k < n; k++ ) sum[i] += entry[i][0][k]; Calculate the total These 4 for-loops are the function. Operations n n^2 n

116 Coding example #6 for ( i=0; i < n; i++ ) for( j=0; j < n; j++ ) sum[i] += entry[i][j][0]; for ( i=0; i < n; i++ ) for( k=0; k < n; k++ ) sum[i] += entry[i][0][k]; Calculate the total These 4 for-loops are the function. Operations n n^2 n n^2

117 Coding example #6 for ( i=0; i < n; i++ ) for( j=0; j < n; j++ ) sum[i] += entry[i][j][0]; for ( i=0; i < n; i++ ) for( k=0; k < n; k++ ) sum[i] += entry[i][0][k]; Calculate the total These 4 for-loops are the function. Operations n n^2 n n^2

118 Coding example #6 for ( i=0; i < n; i++ ) for( j=0; j < n; j++ ) sum[i] += entry[i][j][0]; for ( i=0; i < n; i++ ) for( k=0; k < n; k++ ) sum[i] += entry[i][0][k]; Calculate the total These 4 for-loops are the function. Operations n n^2 n n^2 Answer: n + n^2 + n^2 + n + n^2 + n^2 = 4n^2 + 2n → 4n^2 → O(n^2) Showing more of the implied steps for illustration

119 Coding example #7 for ( i=0; i < n; i++ ) for( j=0; j < sqrt(n); j++ ) m += j; Operations

120 Coding example #7 for ( i=0; i < n; i++ ) for( j=0; j < sqrt(n); j++ ) m += j; Operations n

121 Coding example #7 for ( i=0; i < n; i++ ) for( j=0; j < sqrt(n); j++ ) m += j; Operations n n^(3/2)

122 Coding example #7 for ( i=0; i < n; i++ ) for( j=0; j < sqrt(n); j++ ) m += j; Operations n n^(3/2)

123 Coding example #7 for ( i=0; i < n; i++ ) for( j=0; j < sqrt(n); j++ ) m += j; Operations n n^(3/2) Answer: O(n^(3/2))

124 Coding example #8 for ( i=0; i < n; i++ ) for( j=0; j < sqrt(995); j++ ) m += j; Operations

125 Coding example #8 for ( i=0; i < n; i++ ) for( j=0; j < sqrt(995); j++ ) m += j; Operations n

126 Coding example #8 for ( i=0; i < n; i++ ) for( j=0; j < sqrt(995); j++ ) m += j; Operations n 31n

127 Coding example #8 for ( i=0; i < n; i++ ) for( j=0; j < sqrt(995); j++ ) m += j; Operations n 31n

128 Coding example #8 for ( i=0; i < n; i++ ) for( j=0; j < sqrt(995); j++ ) m += j; Operations n 31n Answer: O(n)

129 Coding example #8 : Equivalent code for ( i=0; i < n; i++ ) { m += j; … m += j; // 31 times } Answer: O(n) Operations n

130 Coding example #9 int total(int n ) { int subtotal = 0; for( i=0; i < n; i++) subtotal += i; return subtotal; } main() { int tot = 0; for ( i=0; i < n; i++ ) tot += total(i); } First rearrange the function to be after main - cosmetic reasons

131 main() { int tot = 0; for ( i=0; i < n; i++ ) tot += total(i); } int total(int n ) { int subtotal = 0; for( i=0; i < n; i++) subtotal += i; return subtotal; } Coding example #9 Change the parameter name used in the total() function from n to m

132 main() { int tot = 0; for ( i=0; i < n; i++ ) tot += total(i); } int total(int m ) { int subtotal = 0; for( i=0; i < m; i++) subtotal += i; return subtotal; } Coding example #9 Change the loop variable used in the total function from i to k

133 Coding example #9 main() { int tot = 0; for ( i=0; i < n; i++ ) tot += total(i); } Next inline the total function (its parameter m becomes i) int total(int m ) { int subtotal = 0; for( k=0; k < m; k++) subtotal += k; return subtotal; } main() { int tot = 0; for ( i=0; i < n; i++ ) { int subtotal = 0; for( k=0; k < i; k++) subtotal += k; tot += subtotal; }

134 Coding example #9 main() { int tot = 0; for ( i=0; i < n; i++ ) { int subtotal = 0; for( k=0; k < i; k++) subtotal += k; tot += subtotal; } Operations

135 Coding example #9 main() { int tot = 0; for ( i=0; i < n; i++ ) { int subtotal = 0; for( k=0; k < i; k++) subtotal += k; tot += subtotal; } Operations 1

136 Coding example #9 main() { int tot = 0; for ( i=0; i < n; i++ ) { int subtotal = 0; for( k=0; k < i; k++) subtotal += k; tot += subtotal; } Operations 1 n

137 Coding example #9 main() { int tot = 0; for ( i=0; i < n; i++ ) { int subtotal = 0; for( k=0; k < i; k++) subtotal += k; tot += subtotal; } Operations 1 n

138 Coding example #9 main() { int tot = 0; for ( i=0; i < n; i++ ) { int subtotal = 0; for( k=0; k < i; k++) subtotal += k; tot += subtotal; } Operations 1 n 1 + 2 + 3 + … + (n-1)

139 Coding example #9 main() { int tot = 0; for ( i=0; i < n; i++ ) { int subtotal = 0; for( k=0; k < i; k++) subtotal += k; tot += subtotal; } Operations 1 n 1 + 2 + 3 + … + (n-1)

140 Coding example #9 main() { int tot = 0; for ( i=0; i < n; i++ ) { int subtotal = 0; for( k=0; k < i; k++) subtotal += k; tot += subtotal; } Operations 1 n 1 + 2 + 3 + … + (n-1) n

141 Graded In-Class Activity: BigOhEx09 main() { int tot = 0; for ( i=0; i < n; i++ ) { int subtotal = 0; for( k=0; k < i; k++) subtotal += k; tot += subtotal; } Operations 1 n 1 + 2 + 3 + … + (n-1) n Create a MS-Word document, named BigOhEx09.docx. Copy the above code and operations table into it. Complete the exercise by including the corresponding Sigma Summation, its simplification, and the final Big-Oh runtime for this code. Submit the document to the appropriate D2L dropbox, before class ends

142 Coding example #9 main() { int tot = 0; for ( i=0; i < n; i++ ) { int subtotal = 0; for( k=0; k < i; k++) subtotal += k; tot += subtotal; } Operations 1 n 1 + 2 + 3 + … + (n-1) n Answer: O(n^2)
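A sketch (mine, not part of the slides or the graded activity) that counts the inner-loop operations of example #9 and compares them with the dominant 1 + 2 + … + (n-1) = n(n-1)/2 term:

#include <iostream>

int main() {
    int n = 1000;
    long long ops = 0;
    int tot = 0;
    for (int i = 0; i < n; i++) {
        int subtotal = 0;
        for (int k = 0; k < i; k++) {   // runs i times: 0 + 1 + ... + (n-1) in total
            subtotal += k;
            ++ops;
        }
        tot += subtotal;
    }
    std::cout << "ops = " << ops
              << "   n(n-1)/2 = " << (long long)n * (n - 1) / 2 << std::endl;
    // both print 499500 -- the quadratic term behind O(n^2)
}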

143 Marker Slide Any Questions on: – Searching Arrays – Big-Oh The Math side of things – Asymptotic Analysis – Growth Rate Comparisons The Code side of things – Nested For-Loops, Sigma Notation – Practice Exercises – Book Example More Examples Next up – Free Play

144 Free Play – Things to Work On Homework 2 Homework 3

145 The End Or is it?

