Discrete Maths, 242-213, Semester 2, 2013-2014 — 10. Running Time of Programs. Objective: to describe the Big-Oh notation for estimating the running time of programs.


Discrete Maths. Objective: to describe the Big-Oh notation for estimating the running time of programs. Semester 2. Running Time of Programs

Overview
1. Running Time
2. Big-Oh and Approximate Running Time
3. Big-Oh for Programs
4. Analyzing Function Calls
5. Analyzing Recursive Functions
6. Further Information

1. Running Time

What is the running time of this program?

#include <stdio.h>

void main()
{ int i, n;
  scanf("%d", &n);
  for(i=0; i<n; i++)
    printf("%d\n", i);
}

There is no single answer! The running time depends on the size of the value n. Instead of a time answer in seconds, we want a time answer which is related to the size of the input.

For example: programTime(n) = constant * n. This means that as n gets bigger, so does the program time: the running time is linearly related to the input size n. [graph: running time vs n, a straight line with slope constant]

Running Time Theory. A program/algorithm has a running time T(n): n is some measure of the input size; T(n) is the largest amount of time the program takes on any input of size n. Time units are left unspecified.

A typical result is: T(n) = c*n, where c is some constant, but often we just ignore c. This means the program has linear running time. T(n) values for different programs can be used to compare their relative running times: selection sort has T(n) = n², merge sort has T(n) = n log n, so merge sort is better for larger n sizes.

1.1. Different Kinds of Running Time. Usually T(n) is the worst-case running time: the maximum running time on any input of size n. T_avg(n) is the average running time of the program over all inputs of size n: more realistic, but very hard to calculate; not considered by us.

1.2. T(n) Example

Loop fragment for finding the index of the smallest value in A[], an array of size n:

(2) small = 0;
(3) for(j = 1; j < n; j++)
(4)   if (A[j] < A[small])
(5)     small = j;

Count each assignment and test as 1 time unit.

Calculation. The for loop executes n-1 times; each iteration carries out (in the worst case) 4 ops: the j < n test, the if test, the small assignment, and the j increment. Total loop time = 4(n-1), plus 3 ops at start and end: the small assignment (line 2), the init of j (line 3), and the final j < n test. Total time T(n) = 4(n-1) + 3 = 4n - 1. The running time is linear in the size of the array.

1.3. Comparing Different T()s. Consider T_a(n) = 100n and T_b(n) = 2n². If the input size is below 50, program B is faster. But for larger n, which are more common in real code, program B gets worse and worse. [graph: T(n) value vs input size n, showing the two curves crossing at n = 50]

1.4. Common Growth Formulae & Names

Formula (n = input size)   Name
n                          linear
n²                         quadratic
n³                         cubic
n^m                        polynomial, e.g. n^10
m^n (m >= 2)               exponential, e.g. 5^n
n!                         factorial
1                          constant
log n                      logarithmic
n log n
log log n

1.5. Execution Times

Assume 1 instruction takes 1 microsec (10^-6 secs) to execute. How long will n instructions take?

growth formula T()   n = 100       n = 1000       n = 10^6
n                    0.1 ms        1 ms           1 sec
n²                   10 ms         1 sec          12 days
n³                   1 sec         16.7 min       31,710 yr
2^n                  4*10^16 yr    3*10^287 yr    3*10^301016 yr

If T() is 2^n and n is 50, you will wait 36 years for an answer!

Notes. Logarithmic running times are best. Polynomial running times are acceptable, if the power isn't too big: e.g. n² is ok, n^100 is terrible. Exponential times mean sloooooooow code: some size problems may take longer to finish than the lifetime of the universe!

1.6. Why use T(n)? T() can guide our choice of which algorithm to implement, or program to use: e.g. selection sort or merge sort? T() helps us look for better algorithms in our own code, without expensive implementation, testing, and measurement.

2. Big-Oh and Approximate Running Time. Big-Oh mathematical notation simplifies the process of estimating the running time of programs: it uses T(n), but ignores constant factors which depend on compiler/machine behaviour.

The Big-Oh value specifies running time independent of: machine architecture (e.g. don't consider the running speed of individual machine operations); machine load (usage), e.g. time delays due to other users; compiler design effects, e.g. gcc versus Borland C.

Example. In the code fragment example on slide 9, we assumed that each assignment and test takes 1 time unit. This means: T(n) = 4n - 1. The Big-Oh value, O(), uses the T(n) value but ignores constants (which will actually vary from machine to machine). This means: T(n) is O(n); we say "T(n) is order n".

More Examples

T(n) value            Big-Oh value: O()
10n² + 50n + 100      O(n²)
(n+1)²                O(n²)
n^10                  O(2^n)   <- hard to understand
5n³                   O(n³)

These simplifications have a mathematical reason, which is explained in section 2.2.

2.1. Is Big-Oh Useful? O() ignores constant factors, which means it is a more reliable measure across platforms/compilers. It can be compared with Big-Oh values for other algorithms: i.e. linear is better than polynomial and exponential, but worse than logarithmic.

2.2. Definition of Big-Oh. The connection between T() and O() is: when T(n) is O(f(n)), it means that f(n) is the most important thing in T() when n is large. More formally, for some integer n₀ and constant c > 0: T(n) is O(f(n)) if, for all integers n >= n₀, T(n) <= c*f(n). n₀ and c are called witnesses to the relationship "T(n) is O(f(n))".

Example 1. T(n) = 10n² + 50n + 100, and we claim that T(n) is O(n²). Why? Witnesses: n₀ = 1, c = 160. Then for all n >= 1: 10n² + 50n + 100 <= 160n², since 10n² + 50n + 100 <= 10n² + 50n² + 100n² = 160n². Informally, the n² part is the most important thing in the T() function.

Example 2. T(n) = (n+1)², and we claim that T(n) is O(n²). Why? Witnesses: n₀ = 1, c = 4. Then for all n >= 1: (n+1)² <= 4n², since n² + 2n + 1 <= n² + 2n² + n² = 4n².

Example 3. T(n) = n^10, and we claim that T(n) is O(2^n). Why? Witnesses: n₀ = 64, c = 1. Then for all n >= 64: n^10 <= 2^n. Taking log₂ of both sides, this is 10*log₂ n <= n; at n = 64, 10*log₂ 64 = 10*6 = 60 <= 64, and the right-hand side grows faster from there.

2.4. Some Observations about O(). When choosing an O() approximation to T(), remember that: constant factors do not matter (e.g. T(n) = (n+1)² is O(n²)); low-order terms do not matter (e.g. T(n) = 10n² + 50n + 100 is O(n²)); there are many possible witnesses.

3. Big-Oh for Programs. First decide on a size measure for the data in the program. This will become the n.

Data Type   Possible Size Measure
integer     its value
string      its length
array       its length

3.1. Building a Big-Oh Result. The Big-Oh value for a program is built up inductively: 1) calculate the Big-Ohs for all the simple statements in the program (e.g. assignment, arithmetic); 2) then use those values to obtain the Big-Ohs for the complex statements (e.g. blocks, for loops, if-statements).

Simple Statements (in C). We assume that simple statements always take a constant amount of time to execute, written as O(1). Kinds of simple statements: assignment, break, continue, return, all library functions (e.g. putchar(), scanf()), arithmetic, boolean tests, array indexing.

Complex Statements. The Big-Oh value for a complex statement is a combination of the Big-Oh values of its component simple statements. Kinds of complex statements: blocks { ... }; conditionals: if-then-else, switch; loops: for, while, do-while.

3.2. Structure Trees. The easiest way to see how complex statement timings are based on simple statements (and other complex statements) is by drawing a structure tree for the program.

Example: binary conversion

#include <stdio.h>

void main()
{ int i;
(1)  scanf("%d", &i);
(2)  while (i > 0) {
(3)    putchar('0' + i%2);
(4)    i = i/2;
     }
(5)  putchar('\n');
}

Structure Tree for Example

[structure tree: the block 1-5 contains the while 2-4, which contains the block 3-4; the time for the inner block is the time for (3) + (4)]

3.3. Details for Complex Statements. Blocks: running time bound = summation of the bounds of its parts ("summation" means 'add'). The summation rule means that only the largest Big-Oh value is considered.

Block Calculation Graphically

[diagram: a block of statements with bounds O(f_1(n)), O(f_2(n)), ..., O(f_k(n)); the whole block is O(f_1(n) + f_2(n) + ... + f_k(n)). In other words: O(largest f_i(n)) — the summation rule]

Block Summation Rule Example. First block's time T1(n) = O(n²); second block's time T2(n) = O(n). Total running time = O(n² + n) = O(n²), the largest part.

Conditionals (e.g. if statements, switches): running time bound = the cost of the if-test + the larger of the bounds for the if- and else-parts. When the if-test is a simple statement (a boolean test), it is O(1).

Conditional Graphically

[diagram: a test O(1), an if-part O(f_1(n)), and an else-part O(f_2(n)); the whole conditional is O(max(f_1(n), f_2(n)) + 1), which is the same as O(max(f_1(n), f_2(n)))]

If Example

Code fragment:

if (x < y)   // O(1)
  foo(x);    // O(n)
else
  bar(y);    // O(n²)

Total running time = O(max(n, n²) + 1) = O(n² + 1) = O(n²)

Loops: running time bound is usually = the max. number of times round the loop * the time to execute the loop body once. But we must include O(1) for the increment and test each time around the loop, and must also include the initialization and final test costs (both O(1)).

While Graphically

[diagram: a test O(1) and a body O(f(n)), executed at most g(n) times around. Altogether this is O(g(n)*(f(n)+1) + 1), which can be simplified to O(g(n)*f(n))]

While Loop Example

Code fragment:

x = 0;
while (x < n) {  // O(1) for test
  foo(x, n);     // O(n²)
  x++;           // O(1)
}

Total running time of the loop:
= O( n*(1 + n² + 1) + 1 ) = O(n³ + 2n + 1) = O(n³)

For-loop Graphically

[diagram: initialize O(1), test O(1), body O(f(n)), increment O(1), with the test/body/increment executed at most g(n) times around. This is O(g(n)*(f(n)+1+1) + 1), which can be simplified to O(g(n)*f(n))]

For Loop Example

Code fragment:

for (i=0; i < n; i++)
  foo(i, n);     // O(n²)

It helps to rewrite this as a while loop:

i=0;             // O(1)
while (i < n) {  // O(1) for test
  foo(i, n);     // O(n²)
  i++;           // O(1)
}

Running time for the for loop: = O( 1 + n*(1 + n² + 1) + 1 ) = O( 2 + n³ + 2n ) = O(n³)

Example: nested loops

(1) for(i=0; i < n; i++)
(2)   for (j = 0; j < n; j++)
(3)     A[i][j] = 0;

Line (3) is a simple op: it takes O(1). Line (2) is a loop carried out n times: it takes O(n * 1) = O(n). Line (1) is a loop carried out n times: it takes O(n * n) = O(n²).

Example: if statement

(1) if (A[0][0] == 0) {
(2)   for(i=0; i < n; i++)
(3)     for (j = 0; j < n; j++)
(4)       A[i][j] = 0;
    }
(5) else {
(6)   for (i=0; i < n; i++)
(7)     A[i][i] = 1;
    }

The if-test takes O(1); the if block takes O(n²); the else block takes O(n). Total running time: = O(1) + O(max(n², n)) = O(1) + O(n²) = O(n²), using the summation rule.

Time for a Binary Conversion

#include <stdio.h>

void main()
{ int i;
(1)  scanf("%d", &i);
(2)  while (i > 0) {
(3)    putchar('0' + i%2);
(4)    i = i/2;
     }
(5)  putchar('\n');
}

Lines 1, 2, 3, 4, 5: each O(1). Block of 3-4 is O(1) + O(1) = O(1). While of 2-4 loops at most (log₂ i)+1 times (why? see the next slide): total running time = O(1 * ((log₂ i)+1)) = O(log₂ i). Block of 1-5: = O(1) + O(log₂ i) + O(1) = O(log₂ i)

Why (log₂ i)+1? Assume i = 2^k. At the start of the 1st iteration, i = 2^k; start of the 2nd iteration, i = 2^(k-1); start of the 3rd iteration, i = 2^(k-2); ...; start of the kth iteration, i = 2^(k-(k-1)) = 2^1 = 2; start of the (k+1)th iteration, i = 2^(k-k) = 2^0 = 1. The while will terminate after this iteration. Since 2^k = i, k = log₂ i. So k+1, the number of iterations, = (log₂ i)+1.

Using a Structure Tree

[structure tree: the block 1-5 is O(log₂ i); the while 2-4 is O(log₂ i); the inner block 3-4 is O(1)]

Time for a Selection Sort

void selectionSort(int A[], int n)
{ int i, j, small, temp;
(1)  for (i=0; i < n-1; i++) {
(2)    small = i;
(3)    for (j = i+1; j < n; j++)
(4)      if (A[j] < A[small])
(5)        small = j;
(6)    temp = A[small];
(7)    A[small] = A[i];
(8)    A[i] = temp;
     }
}

Selection Sort Structure Tree

[structure tree: the for of 1-8 contains the block 2-8; the block 2-8 contains the for of 3-5 (plus lines 2, 6, 7, 8); the for of 3-5 contains the if of 4-5]

Lines 2, 5, 6, 7, 8: each is O(1). If of 4-5 is O(max(1, 0) + 1) = O(1) (the if part is O(1); there is no else part). For of 3-5 is O((n-(i+1)) * 1) = O(n-i-1) = O(n), simplified. Block of 2-8 = O(1) + O(n) + O(1) + O(1) + O(1) = O(n). For of 1-8 is: = O((n-1) * n) = O(n² - n) = O(n²), simplified.

4. Analyzing Function Calls. In this section, we assume that the functions are not recursive; we add recursion in section 5. Size measures for all the functions must be similar, so they can be combined to give the program's Big-Oh value.

Example Program

#include <stdio.h>

int bar(int x, int n);
int foo(int x, int n);

void main()
{ int a, n;
(1)  scanf("%d", &n);
(2)  a = foo(0, n);
(3)  printf("%d\n", bar(a, n));
}

int bar(int x, int n)
{ int i;
(4)  for(i = 1; i <= n; i++)
(5)    x += i;
(6)  return x;
}

int foo(int x, int n)
{ int i;
(7)  for(i = 1; i <= n; i++)
(8)    x += bar(i, n);
(9)  return x;
}

Calling Graph

[calling graph: main calls foo and bar; foo also calls bar]

Calculating Times with a Calling Graph. 1. Calculate times for Group 0 functions: those that call no other user functions. 2. Calculate times for Group 1 functions: those that call Group 0 functions only. 3. Calculate times for Group 2 functions: those that call Group 0 and Group 1 functions only. 4. Continue until the time for main() is obtained.

Example Program Analysis. Group 0: bar() is O(n). Group 1: foo() is O(n * n) = O(n²), because of the bar() call in its loop body. Group 2: main() is = O(1) + O(n²) + O(1) + O(n) = O(n²)

5. Analyzing Recursive Functions. Recursive functions call themselves with a smaller-size argument, and terminate by reaching a base case.

int factorial(int n)
{ if (n <= 1)
    return 1;
  else
    return n * factorial(n-1);
}

Running Time for a Recursive Function. 1. Develop basis and inductive statements for the running time. 2. Solve the corresponding recurrence relation: this usually requires the Big-Oh notation to be rewritten as constants and multiples of n, e.g. O(1) becomes a, O(n) becomes b*n, O(n²) becomes c*n², etc.

3. Translate the solved relation back into Big-Oh notation: rewrite the remaining constants back into Big-Oh form, e.g. a becomes O(1), b*n becomes O(n).

5.1. Factorial Running Time. Step 1. Basis: T(1) = O(1). Induction: T(n) = O(1) + T(n-1), for n > 1. Step 2. Simplify the relation by replacing the O() notation with constants. Basis: T(1) = a. Induction: T(n) = b + T(n-1), for n > 1.

The simplest way to solve T(n) is to calculate it for some values of n, and then guess the general expression.

T(1) = a
T(2) = b + T(1) = b + a
T(3) = b + T(2) = 2b + a
T(4) = b + T(3) = 3b + a

Obviously, the general form is: T(n) = ((n-1)*b) + a = bn + (a-b)

Step 3. Translate back: T(n) = bn + (a-b). Replace constants by Big-Oh notation: T(n) = O(n) + O(1) = O(n). The running time for recursive factorial is O(n). That is fast.

5.2. Recursive Selection Sort

void rSSort(int A[], int n)
{ int imax, i;
  if (n == 1)
    return;
  else {
    imax = 0;    /* assume A[0] is biggest */
    for (i=1; i < n; i++)
      if (A[i] > A[imax])
        imax = i;
    swap(A, n-1, imax);
    rSSort(A, n-1);
  }
}

Running Time (n == the size of the array; assume swap() is O(1), so ignore it). Step 1. Basis: T(1) = O(1). Induction: T(n) = O(n-1) + T(n-1), for n > 1 — the O(n-1) is the loop, the T(n-1) is the call to rSSort(). Step 2. Basis: T(1) = a. Induction: T(n) = b(n-1) + T(n-1), for n > 1, where b(n-1) is a multiple of n-1.

Solve the relation:

T(1) = a
T(2) = b + T(1) = b + a
T(3) = 2b + T(2) = 2b + b + a
T(4) = 3b + T(3) = 3b + 2b + b + a

General form: T(n) = (n-1)b + (n-2)b + ... + b + a = a + b(n-1)n/2

Step 3. Translate back: T(n) = a + b(n-1)n/2 = a + b(n² - n)/2. Replace constants by Big-Oh notation: T(n) = O(1) + O(n²) + O(n) = O(n²). The running time for recursive selection sort is O(n²). That is slow for large arrays.

6. Further Information. Discrete Mathematics and its Applications, Kenneth H. Rosen, McGraw Hill, 2007, 7th edition: chapter 3, sections 3.2 –