Complexity Analysis (Part I)


1 Complexity Analysis (Part I)
Motivation
Average, Best, and Worst Case Complexity Analysis
Asymptotic Analysis

2 Motivations for Complexity Analysis
There are often many different algorithms that can be used to solve the same problem. For example, assume that we want to search for a key in a sorted array. Thus, it makes sense to develop techniques that allow us to:
- compare different algorithms with respect to their "efficiency"
- choose the most efficient algorithm for the problem
The efficiency of any algorithmic solution to a problem can be measured in terms of:
- Time efficiency: the time it takes to execute.
- Space efficiency: the space (primary or secondary memory) it uses.
We will focus on an algorithm's efficiency with respect to time.

3 How do we Measure Efficiency?
One possibility is to measure the running time in [micro/milli]seconds. However, it is not useful to measure how fast the algorithm runs this way, because the result depends on which particular computer, OS, programming language, compiler, and kind of inputs are used in testing.
Instead, we count the number of basic operations the algorithm performs, and we calculate how this number depends on the size of the input. A basic operation is an operation that takes a constant amount of time to execute.
Hence, the efficiency of an algorithm is the number of basic operations it performs. This number is a function of the input size n.
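The idea of "counting basic operations as a function of n" can be made concrete with a small sketch. This example is not from the slides; the class and method names are mine, and the counting convention (each assignment, comparison, addition, and increment is one basic operation) is one reasonable choice among several:

```java
// Illustrative sketch: count the basic operations performed while summing
// the integers 0..n-1 with a simple loop, and observe that the count is a
// function of the input size n only (here, 4n + 3).
public class OpCounter {
    public static long countOps(int n) {
        long ops = 0;
        int sum = 0;                      // 1 assignment
        ops++;
        for (int i = 0; i < n; i++) {     // 1 init + (n+1) tests + n increments
            sum += i;                     // += counts as two basic operations
            ops += 2;
        }
        ops += 1 + (n + 1) + n;           // loop bookkeeping
        return ops;                       // total: 4n + 3
    }

    public static void main(String[] args) {
        System.out.println(countOps(10)); // prints 43
    }
}
```

Whatever exact convention is chosen, the count comes out as c1*n + c2 for some constants, which is why (as later slides show) only the growth rate matters.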

4 Example of Basic Operations:
Arithmetic operations: *, /, %, +, -
Assignment statements of simple data types
Reading or writing of a primitive type
Simple conditional tests: if (x < 12) ...
A method call (Note: the execution time of the method itself may depend on the values of its parameters, so it may not be constant)
A method's return statement
Memory access
We consider an operation such as ++, +=, and *= as consisting of two basic operations.
Note: To simplify complexity analysis we will not consider memory access (fetch or store) operations.

5 Best, Average, and Worst case complexities
What is the best case complexity? The minimum number of basic operations over all inputs of size n.
What is the worst case complexity? The maximum number of basic operations over all inputs of size n.
What is the average case complexity? The average number of basic operations over all inputs of size n (under some assumed distribution of inputs).
We are usually interested in the worst case complexity because it is:
- Easier to compute
- Usually close to the actual running time
- Crucial to real-time systems (e.g., air-traffic control)

6 Best, Average, and Worst case complexities
Example: For the linear search algorithm, determine the case and the number of comparisons for the
Best Case: the key is in the first position (1 comparison)
Worst Case: the key is in the last position or not present (n comparisons)
Average Case: the key is equally likely to be in any position (about (n + 1)/2 comparisons)
Best, worst and average complexities of common sorting algorithms:
Method           Best Case    Worst Case   Average Case
Selection sort   n2           n2           n2
Insertion sort   n            n2           n2
Merge sort       n log n      n log n      n log n
Quick sort       n log n      n2           n log n
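The linear-search cases can be checked by actually counting comparisons. This sketch is not from the slides (the class and method names are mine); it returns how many key comparisons a linear search performs on a given input:

```java
// Count the key comparisons made by linear search, to make the
// best/worst cases concrete.
public class LinearSearch {
    // Returns the number of comparisons needed to find key
    // (n comparisons if the key is absent).
    public static int comparisons(int[] a, int key) {
        int count = 0;
        for (int x : a) {
            count++;                  // one comparison against the key
            if (x == key) break;
        }
        return count;
    }

    public static void main(String[] args) {
        int[] a = {7, 3, 9, 1, 5};
        System.out.println(comparisons(a, 7)); // best case: 1
        System.out.println(comparisons(a, 5)); // worst case (last element): 5
        System.out.println(comparisons(a, 8)); // unsuccessful search: 5
    }
}
```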

7 Simple Complexity Analysis
public class TestAbstractClass {
    public static void main(String[] args) {
        Employee[] list = new Employee[3];
        list[0] = new Executive("Jarallah Al-Ghamdi", 50000);
        list[1] = new HourlyEmployee("Azmat Ansari", 120);
        list[2] = new MonthlyEmployee("Sahalu Junaidu", 9000);
        ((Executive) list[0]).awardBonus(11000);
        for (int i = 0; i < list.length; i++)
            if (list[i] instanceof HourlyEmployee)
                ((HourlyEmployee) list[i]).addHours(60);
        for (int i = 0; i < list.length; i++) {
            list[i].print();
            System.out.println("Paid: " + list[i].pay());
            System.out.println("*************************");
        }
    }
}

8 Simple Complexity Analysis
Counting the exact number of basic operations is cumbersome. We will learn that it is not important to count the exact number of basic operations. Instead, we count the maximum number of times a statement gets executed; this directly reflects the time-complexity behavior of the algorithm.

9 Simple Complexity Analysis: Simple Loops
Find the cost of the body of the loop (if independent of the loop variable)
Find the number of iterations of the for loop
Multiply the two numbers to get the total number of operations
E.g.:
double x, y;
x = 2.5;
y = 3.0;
for (int i = 0; i < n; i++) {
    a[i] = x * y;
    x = 2.5 * x;
    y = y + a[i];
}
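The multiply-out recipe above can be sketched in code. This example is mine, not from the slides, and it assumes the convention that each assignment and each arithmetic operation is one basic operation (so the loop body above costs 6 operations per iteration), ignoring loop bookkeeping:

```java
// Sketch: total operations for the slide's loop = (body cost) x (iterations).
public class LoopCost {
    public static long loopOps(int n) {
        // a[i] = x * y  -> 2 ops (multiply + assign)
        // x = 2.5 * x   -> 2 ops
        // y = y + a[i]  -> 2 ops
        final int bodyOps = 6;
        return (long) bodyOps * n;   // n iterations
    }

    public static void main(String[] args) {
        System.out.println(loopOps(100)); // prints 600
    }
}
```

Including the loop bookkeeping (initialization, n + 1 tests, n increments) would add lower-order terms, which asymptotic analysis later discards anyway.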

10 Simple Complexity Analysis: Complex Loops
Method: Represent the cost of the for loop in summation form. The main idea is to make sure that we find an iterator that iterates over successive values.
Examples:
for (int i = k; i <= n; i = i + m) {
    statement1;
    statement2;
}
Number of iterations: floor((n - k) / m) + 1

for (int i = k; i < n; i = i + m) {
    statement1;
    statement2;
}
Number of iterations: ceil((n - k) / m)
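The iteration-count formula for the <= loop can be verified empirically. This sketch is not from the slides (the names are mine); it counts the iterations directly and compares with floor((n - k)/m) + 1:

```java
// Verify that  for (int i = k; i <= n; i += m)  runs
// floor((n - k) / m) + 1 times (for k <= n, m > 0).
public class IterationCount {
    public static int countLeq(int k, int n, int m) {
        int count = 0;
        for (int i = k; i <= n; i += m) count++;
        return count;
    }

    public static int formulaLeq(int k, int n, int m) {
        return (n - k) / m + 1;   // Java int division floors for n >= k
    }

    public static void main(String[] args) {
        System.out.println(countLeq(2, 20, 3));   // i = 2,5,8,11,14,17,20 -> 7
        System.out.println(formulaLeq(2, 20, 3)); // 7
    }
}
```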

11 Simple Complexity Analysis: Complex Loops
Suppose n is a power of 2. Determine the number of basic operations performed by the method myMethod():
Solution: (worked out on the next slide)
static int myMethod(int n) {
    int sum = 0;
    for (int i = 1; i < n; i = i * 2)
        sum = sum + i + helper(i);
    return sum;
}

static int helper(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++)
        sum = sum + i;
    return sum;
}

12 Simple Complexity Analysis: Examples
Suppose n is a power of 2. Determine the number of basic operations performed by the method myMethod():
Solution: The number of iterations of the loop
for (int i = 1; i < n; i = i * 2)
    sum = sum + i + helper(i);
is log2 n (a proof will be given later). Bounding the cost of each call helper(i) by its cost at i = n, the number of basic operations is at most:
(1 + log2 n) + log2 n [(n + 1) + n[2 + 2] + 1] + 1
= 2 + log2 n + log2 n [5n + 2]
= 5 n log2 n + 3 log2 n + 2

static int myMethod(int n) {
    int sum = 0;
    for (int i = 1; i < n; i = i * 2)
        sum = sum + i + helper(i);
    return sum;
}

static int helper(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++)
        sum = sum + i;
    return sum;
}

13 Simple Complexity Analysis: Loops With Logarithmic Iterations
In the following for-loop (with <):
for (int i = k; i < n; i = i * m) {
    statement1;
    statement2;
}
the number of iterations is ceil(logm(n / k)).
In the following for-loop (with <=):
for (int i = k; i <= n; i = i * m) {
    statement1;
    statement2;
}
the number of iterations is floor(logm(n / k)) + 1.
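These logarithmic iteration counts can also be checked by running the loops. The sketch below is mine (not from the slides); with k = 1, n = 16, m = 2 the < loop visits i = 1, 2, 4, 8 (four iterations, log2 16) and the <= loop additionally visits 16 (five iterations):

```java
// Count iterations of the two multiplicative loops from the slide.
public class LogLoopCount {
    public static int countLt(int k, int n, int m) {
        int count = 0;
        for (long i = k; i < n; i *= m) count++;  // long avoids int overflow
        return count;
    }

    public static int countLeq(int k, int n, int m) {
        int count = 0;
        for (long i = k; i <= n; i *= m) count++;
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countLt(1, 16, 2));  // prints 4
        System.out.println(countLeq(1, 16, 2)); // prints 5
    }
}
```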

14 Asymptotic Growth
Since counting the exact number of operations is cumbersome, and sometimes impossible, we can instead focus on asymptotic analysis, where constants and lower-order terms are ignored. E.g., n3, 1000n3, and 10n3 + n2 + 5n - 1 are all "the same". The reason we can do this is that we are always interested in comparing different algorithms for arbitrarily large input sizes.

15 Asymptotic Growth (1)

16 Asymptotic Growth (2)

17 Running Times for Different Sizes of Inputs of Different Functions

18 Asymptotic Complexity
Finding the exact complexity, f(n) = number of basic operations, of an algorithm is difficult. We approximate f(n) by a function g(n) in a way that does not substantially change the magnitude of f(n): the function g(n) is sufficiently close to f(n) for large values of the input size n. This "approximate" measure of efficiency is called asymptotic complexity. Thus the asymptotic complexity measure does not give the exact number of operations of an algorithm, but it shows how that number grows with the size of the input. This gives us a measure that will work for different operating systems, compilers and CPUs.

19 Big-O (asymptotic) Notation
The most commonly used notation for specifying asymptotic complexity is the big-O notation. The Big-O notation, O(g(n)), is used to give an upper bound on a positive runtime function f(n), where n is the input size.
Definition of Big-O: Consider a function f(n) that is non-negative for all n >= 0. We say that "f(n) is Big-O of g(n)", written f(n) = O(g(n)), if there exist n0 >= 0 and a constant c > 0 such that f(n) <= c g(n) for all n >= n0.
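The definition can be made concrete by exhibiting witnesses c and n0 for a specific pair of functions. This sketch is mine (not from the slides): take f(n) = 3n + 4 and g(n) = n, with c = 4 and n0 = 4; then 3n + 4 <= 4n exactly when n >= 4:

```java
// Check the Big-O definition for f(n) = 3n + 4, g(n) = n,
// with witnesses c = 4 and n0 = 4.
public class BigOCheck {
    public static boolean holds(int n) {
        long f = 3L * n + 4;   // f(n)
        long cg = 4L * n;      // c * g(n) with c = 4
        return f <= cg;
    }

    public static void main(String[] args) {
        for (int n = 4; n <= 1000; n++)
            if (!holds(n)) throw new AssertionError("fails at n = " + n);
        System.out.println("3n + 4 <= 4n for all tested n >= 4");
    }
}
```

Note that the inequality fails for n < 4, which is exactly why the definition only requires it for n >= n0.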

20 Big-O (asymptotic) Notation
Implication of the definition: for all sufficiently large n, c * g(n) is an upper bound of f(n).
Note: by the definition of Big-O, f(n) = 3n + 4 is O(n); it is also O(n2), O(n3), ..., and O(nn).
However, when Big-O notation is used, the function g in the relationship "f(n) is O(g(n))" is CHOSEN TO BE AS SMALL AS POSSIBLE. We call such a function g a tight asymptotic bound of f(n).

21 Big-O (asymptotic) Notation
Some Big-O complexity classes, in order of magnitude from smallest to highest:
O(1)          Constant
O(log(n))     Logarithmic
O(n)          Linear
O(n log(n))   n log n
O(nx)         Polynomial {e.g., O(n2), O(n3), etc.}
O(an)         Exponential {e.g., O(1.6n), O(2n), etc.}
O(n!)         Factorial
O(nn)

22 Examples of Algorithms and their big-O complexity
Big-O Notation   Examples of Algorithms
O(1)             Push, Pop, Enqueue (if there is a tail reference), Dequeue, accessing an array element
O(log(n))        Binary search
O(n)             Linear search
O(n log(n))      Heap sort, Quick sort (average), Merge sort
O(n2)            Selection sort, Insertion sort, Bubble sort
O(n3)            Matrix multiplication
O(2n)            Towers of Hanoi

23 Warnings about O-Notation
Big-O notation cannot compare algorithms in the same complexity class. Big-O notation only gives sensible comparisons of algorithms in different complexity classes when n is large.
Consider two algorithms for the same task:
Linear: f(n) = 1000 n
Quadratic: f'(n) = n2 / 1000
The quadratic one is faster for n < 1,000,000.
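The crossover point follows from 1000n = n^2/1000, i.e. n = 10^6, and can be checked numerically. This sketch is mine, not from the slides:

```java
// Locate the crossover between the "linear" algorithm f(n) = 1000n and
// the "quadratic" algorithm f'(n) = n^2 / 1000.
public class Crossover {
    public static double linear(double n)    { return 1000.0 * n; }
    public static double quadratic(double n) { return n * n / 1000.0; }

    public static void main(String[] args) {
        // Below one million, the quadratic algorithm is cheaper.
        System.out.println(quadratic(999_999) < linear(999_999));     // true
        // Above one million, the linear algorithm wins, as Big-O predicts.
        System.out.println(quadratic(1_000_001) > linear(1_000_001)); // true
    }
}
```

This is the warning in miniature: for small inputs, constant factors can make the asymptotically worse algorithm faster.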

24 Rules for using big-O
For large values of input n, the constants and terms with lower degree of n are ignored.
1. Multiplicative Constants Rule: ignoring constant factors.
O(c f(n)) = O(f(n)), where c is a constant.
Example: O(20 n3) = O(n3)
2. Addition Rule: ignoring smaller terms.
If O(f(n)) < O(h(n)) then O(f(n) + h(n)) = O(h(n)).
Examples: O(n2 log n + n3) = O(n3)
O(2000 n3 + 2 n! + nn + 27 n log n + 5) = O(nn)
3. Multiplication Rule: O(f(n) * h(n)) = O(f(n)) * O(h(n))
Example: O((n3 + 2n2 + 3n log n + 7)(8n2 + 5n + 2)) = O(n5)

25 How to determine complexity of code structures
Loops (for, while, and do-while): complexity is determined by the number of iterations of the loop times the complexity of the body of the loop.
Examples:
for (int i = 0; i < n; i++)
    sum = sum - i;
O(n)

for (int i = 0; i < n * n; i++)
    sum = sum + i;
O(n2)

i = 1;
while (i < n) {
    sum = sum + i;
    i = i * 2;
}
O(log n)

26 How to determine complexity of code structures
Nested Loops: multiply the iteration counts of the loops.
Examples:
sum = 0;
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        sum += i * j;
O(n2)

i = 1;
while (i <= n) {
    j = 1;
    while (j <= n) {
        // statements of constant complexity
        j = j * 2;
    }
    i = i + 1;
}
O(n log n)
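The O(n log n) claim for the second nested loop can be verified by counting how often the constant-complexity statements run. This sketch is mine (not from the slides); for each of the n outer iterations, the inner loop runs floor(log2 n) + 1 times:

```java
// Count executions of the innermost statement of the slide's nested loop:
// the total is n * (floor(log2 n) + 1), i.e. Theta(n log n).
public class NestedCount {
    public static long count(int n) {
        long c = 0;
        for (int i = 1; i <= n; i++)         // n outer iterations
            for (long j = 1; j <= n; j *= 2) // floor(log2 n) + 1 inner iterations
                c++;
        return c;
    }

    public static void main(String[] args) {
        System.out.println(count(16)); // 16 * 5 = 80
        System.out.println(count(8));  // 8 * 4 = 32
    }
}
```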

27 How to determine complexity of code structures
Sequence of statements: use the Addition Rule
O(s1; s2; s3; ... sk) = O(s1) + O(s2) + O(s3) + ... + O(sk) = O(max(s1, s2, s3, ..., sk))
Example: the complexity is O(n2) + O(n) + O(1) = O(n2)
for (int j = 0; j < n * n; j++)
    sum = sum + j;
for (int k = 0; k < n; k++)
    sum = sum - k;
System.out.print("sum is now " + sum);

28 How to determine complexity of code structures
Switch: take the complexity of the most expensive case.
char key;
int[] X = new int[n];
int[][] Y = new int[n][n];
switch (key) {
    case 'a':
        for (int i = 0; i < X.length; i++)
            sum += X[i];
        break;                                   // this case: O(n)
    case 'b':
        for (int i = 0; i < Y.length; i++)
            for (int j = 0; j < Y[0].length; j++)
                sum += Y[i][j];                  // this case: O(n2)
} // End of switch block
Overall complexity: O(n2)

29 How to determine complexity of code structures
If Statement: take the complexity of the most expensive case:
char key;
int[][] A = new int[n][n];
int[][] B = new int[n][n];
int[][] C = new int[n][n];
if (key == '+') {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            C[i][j] = A[i][j] + B[i][j];           // O(n2)
} // End of if block
else if (key == 'x')
    C = matrixMult(A, B);                          // O(n3)
else
    System.out.println("Error! Enter '+' or 'x'!"); // O(1)
Overall complexity: O(n3)

30 How to determine complexity of code structures
Sometimes if-else statements must be checked carefully:
O(if-else) = O(condition) + max(O(if branch), O(else branch))
int[] integers = new int[n];
if (hasPrimes(integers) == true)   // condition: O(n)
    integers[0] = 20;              // O(1)
else
    integers[0] = -20;             // O(1)

public boolean hasPrimes(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        // constant-time check of arr[i] (body omitted on the slide)
    }
    // ...
} // End of hasPrimes()

O(if-else) = O(condition) = O(n)

