Introduction to Complexity Analysis
– Motivation
– Average, Best, and Worst Case Complexity Analysis
– Asymptotic Analysis
Motivations for Complexity Analysis
There are often many different algorithms that can be used to solve the same problem.
– For example, assume that we want to search for a key in a sorted array.
Thus, it makes sense to develop techniques that allow us to:
o compare different algorithms with respect to their “efficiency”
o choose the most efficient algorithm for the problem
The efficiency of any algorithmic solution to a problem can be measured according to its:
o Time efficiency: the time it takes to execute.
o Space efficiency: the space (primary or secondary memory) it uses.
We will focus on an algorithm’s efficiency with respect to time.
How do we Measure Efficiency?
Running time in [micro/milli] seconds
– Advantages
– Disadvantages
Instead,
o we count the number of basic operations the algorithm performs.
o we calculate how this number depends on the size of the input.
A basic operation is an operation that takes a constant amount of time to execute.
– E.g., additions, subtractions, multiplications, comparisons, etc.
Hence, the efficiency of an algorithm is determined in terms of the number of basic operations it performs. This number is most useful when expressed as a function of the input size n.
Examples of Basic Operations:
o Arithmetic operations: *, /, %, +, -
o Assignment statements of simple data types
o Reading or writing of a primitive type
o Simple conditional tests: if (x < 12) ...
o A method call (Note: the execution time of the method itself may depend on the values of its parameters, so it may not be constant)
o A method's return statement
o Memory access
We consider an operation such as ++, +=, or *= as consisting of two basic operations.
Note: To simplify complexity analysis, we will not count memory-access (fetch or store) operations.
Simple Complexity Analysis
public class TestAbstractClass {
    public static void main(String[] args) {
        Employee[] list = new Employee[3];
        list[0] = new Executive("Jarallah Al-Ghamdi", 50000);
        list[1] = new HourlyEmployee("Azmat Ansari", 120);
        list[2] = new MonthlyEmployee("Sahalu Junaidu", 9000);
        ((Executive)list[0]).awardBonus(11000);
        for (int i = 0; i < list.length; i++)
            if (list[i] instanceof HourlyEmployee)
                ((HourlyEmployee)list[i]).addHours(60);
        for (int i = 0; i < list.length; i++) {
            list[i].print();
            System.out.println("Paid: " + list[i].pay());
            System.out.println("*************************");
        }
    }
}
Count the number of basic operations:
– Assignment statements:
– Additions:
– Print statements:
– Method calls:
Simple Complexity Analysis
Counting the number of all basic operations is cumbersome.
– It is also not necessary to count every basic operation.
Instead, we count the number of times the most frequently executed statement runs.
– E.g., find the number of element comparisons.
Simple Complexity Analysis: Simple Loops
How to find the cost of a single, unnested loop:
– Find the cost of the body of the loop (in the example below, count the multiplications).
– Find the number of iterations of the for loop.
– Multiply the two numbers to get the total.
E.g.:
double x, y;
x = 2.5;
y = 3.0;
for (int i = 0; i < n; i++) {
    a[i] = x * y;
    x = 2.5 * x;
    y = y + a[i];
}
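The counting scheme above can be checked empirically. A minimal sketch (the class and method names are illustrative, not from the slides), counting only multiplications as the slide suggests — each iteration performs two, so the total is 2n:

```java
// Counts the multiplications performed by the loop on the slide above.
public class MultCount {
    public static long countMultiplications(int n) {
        double[] a = new double[n];
        double x = 2.5, y = 3.0;
        long mults = 0;
        for (int i = 0; i < n; i++) {
            a[i] = x * y;  mults++;   // first multiplication
            x = 2.5 * x;   mults++;   // second multiplication
            y = y + a[i];             // addition: not counted here
        }
        return mults;                 // = 2 * n
    }
}
```

For example, countMultiplications(10) returns 20: two multiplications per iteration times ten iterations.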
Simple Complexity Analysis: Complex Loops
Represent the cost of the for loop in summation form.
– The main idea is to find an iterator that increases/decreases its value by 1.
– For example, consider finding the number of times statements 1, 2, and 3 get executed below:
for (int i = 1; i < n; i++)
    statement1;

for (int i = 1; i <= n; i++)
    for (int j = 1; j <= n; j++)
        statement2;

for (int i = 1; i <= n; i++)
    for (int j = 1; j <= i; j++)
        statement3;
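The three counts can be verified with instrumented copies of the loops; a sketch (names are illustrative): statement1 runs n − 1 times, statement2 runs n² times, and statement3 runs 1 + 2 + … + n = n(n+1)/2 times.

```java
// Counts how many times each statement from the slide above executes.
public class LoopCounts {
    public static long count1(int n) {       // for (i = 1; i < n; i++)
        long c = 0;
        for (int i = 1; i < n; i++) c++;
        return c;                            // n - 1
    }
    public static long count2(int n) {       // full n-by-n nested loop
        long c = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++) c++;
        return c;                            // n * n
    }
    public static long count3(int n) {       // triangular nested loop
        long c = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= i; j++) c++;
        return c;                            // n * (n + 1) / 2
    }
}
```

For n = 10 these return 9, 100, and 55 respectively, matching the closed forms.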
Useful Summation Formulas
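The standard summation formulas used in the loop analyses in these slides are:

```latex
\sum_{i=1}^{n} 1 = n
\qquad
\sum_{i=1}^{n} i = \frac{n(n+1)}{2}
\qquad
\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}
\qquad
\sum_{i=0}^{n} a^i = \frac{a^{n+1}-1}{a-1} \;\; (a \neq 1),
\text{ in particular } \sum_{i=0}^{n} 2^i = 2^{n+1}-1
\qquad
\sum_{i=m}^{n} f(i) = \sum_{i=1}^{n} f(i) - \sum_{i=1}^{m-1} f(i)
```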
Simple Complexity Analysis: Complex Loops
Represent the cost of the for loop in summation form.
– The problem in the example below is that the value of i does not increase by 1:
  i: k, k + m, k + 2m, …, k + rm
– Here, we can assume without loss of generality that k + rm = n, i.e., r = (n – k)/m.
– Hence, an iterator s running over 0, 1, …, r can be used instead.
for (int i = k; i <= n; i = i + m)
    statement4;
Simple Complexity Analysis: Loops (with <=)
In the following for-loop:
for (int i = k; i <= n; i = i + m) {
    statement1;
    statement2;
}
– The number of iterations is ⌊(n – k)/m⌋ + 1.
– The initialization statement, i = k, is executed one time.
– The condition, i <= n, is executed ⌊(n – k)/m⌋ + 2 times (the final test fails and ends the loop).
– The update statement, i = i + m, is executed ⌊(n – k)/m⌋ + 1 times.
– Each of statement1 and statement2 is executed ⌊(n – k)/m⌋ + 1 times.
Simple Complexity Analysis: Loops (with <)
In the following for-loop:
for (int i = k; i < n; i = i + m) {
    statement1;
    statement2;
}
– The number of iterations is ⌈(n – k)/m⌉.
– The initialization statement, i = k, is executed one time.
– The condition, i < n, is executed ⌈(n – k)/m⌉ + 1 times.
– The update statement, i = i + m, is executed ⌈(n – k)/m⌉ times.
– Each of statement1 and statement2 is executed ⌈(n – k)/m⌉ times.
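Both iteration-count formulas can be checked by running the loops with counters. A sketch (class and method names are illustrative); note that in Java, ⌊(n − k)/m⌋ is just integer division (n - k) / m, and ⌈(n − k)/m⌉ can be computed as (n - k + m - 1) / m for positive values:

```java
// Verifies the iteration-count formulas for stepped loops.
public class StepLoop {
    // for (i = k; i <= n; i += m): floor((n - k) / m) + 1 iterations
    public static int iterationsLeq(int k, int n, int m) {
        int c = 0;
        for (int i = k; i <= n; i += m) c++;
        return c;
    }
    // for (i = k; i < n; i += m): ceil((n - k) / m) iterations
    public static int iterationsLt(int k, int n, int m) {
        int c = 0;
        for (int i = k; i < n; i += m) c++;
        return c;
    }
}
```

For k = 0, n = 10, m = 3: the <= loop visits i = 0, 3, 6, 9 (4 iterations = ⌊10/3⌋ + 1), and the < loop also visits i = 0, 3, 6, 9 (4 iterations = ⌈10/3⌉); for n = 9 the < loop drops to 3 iterations while the <= loop rises to 4.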
Simple Complexity Analysis: Complex Loops
Suppose n is a power of 2. Determine the number of basic operations performed by the method myMethod():
static int myMethod(int n){
    int sum = 0;
    for(int i = 1; i <= n; i = i * 2)
        sum = sum + i + helper(i);
    return sum;
}

static int helper(int n){
    int sum = 0;
    for(int i = 1; i <= n; i++)
        sum = sum + i;   // statement5
    return sum;
}
Solution: The variables i and n in myMethod are different from the ones in the helper method.
– In fact, the n of helper takes the value of the variable i in myMethod.
– Hence, we rename the variable i in helper (say, to k), because it is independent of the i in myMethod.
We count the number of times statement5 gets executed. In myMethod,
i: 1, 2, 2^2, 2^3, …, 2^r = n   (so r = log2 n)
Hence, we can use an iterator j, where i = 2^j:
j: 0, 1, 2, 3, …, r = log2 n
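The analysis can be checked by counting directly: helper(i) executes statement5 exactly i times, and myMethod calls it with i = 1, 2, 4, …, n, so the total is 1 + 2 + 4 + … + n = 2n − 1 when n is a power of 2 (by the geometric-sum formula). A sketch with an illustrative class name:

```java
// Counts executions of statement5 when myMethod(n) runs, n a power of 2.
public class MyMethodCount {
    public static long statement5Count(int n) {
        long count = 0;
        for (int i = 1; i <= n; i = i * 2)   // log2(n) + 1 calls to helper
            for (int k = 1; k <= i; k++)     // helper's loop, i renamed to k
                count++;                     // statement5
        return count;                        // = 2n - 1 when n is a power of 2
    }
}
```

For n = 8 this gives 1 + 2 + 4 + 8 = 15 = 2·8 − 1.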
Useful Logarithmic Formulas
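The standard logarithmic identities relevant to these analyses are:

```latex
\log_b(xy) = \log_b x + \log_b y
\qquad
\log_b\!\left(\frac{x}{y}\right) = \log_b x - \log_b y
\qquad
\log_b(x^a) = a \log_b x
\qquad
\log_b x = \frac{\log_a x}{\log_a b}
\qquad
b^{\log_b x} = x
\qquad
a^{\log_b n} = n^{\log_b a}
```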
Best, Average, and Worst Case Complexities
What is the best-case complexity? The smallest number of basic operations carried out by the algorithm over all inputs of a given size.
What is the worst-case complexity? The largest number of basic operations carried out by the algorithm over all inputs of a given size.
What is the average-case complexity? The number of basic operations carried out by the algorithm on average.
We are usually interested in the worst-case complexity:
o Easier to compute
o Represents an upper bound on the actual running time for all inputs
o Crucial to real-time systems (e.g., air-traffic control)
Best, Average, and Worst Case Complexities: Example
For the linear search algorithm, searching for a key in an array of n elements, determine the situation and the number of comparisons in each of the following cases:
– Best Case
– Worst Case
– Average Case
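One way to explore the exercise above is a linear search instrumented with a comparison counter; a sketch (the class, field, and method names are illustrative). Best case: the key is in the first position (1 comparison). Worst case: the key is last or absent (n comparisons). Average case for a successful search, each position equally likely: (n + 1)/2 comparisons.

```java
// Linear search that also records how many element comparisons it made.
public class LinearSearch {
    public static int comparisons;            // comparisons in the last call
    public static int search(int[] a, int key) {
        comparisons = 0;
        for (int i = 0; i < a.length; i++) {
            comparisons++;                    // one comparison per iteration
            if (a[i] == key) return i;        // found: stop early
        }
        return -1;                            // not found after n comparisons
    }
}
```

Searching {5, 3, 9, 1} for 5 costs 1 comparison (best case); searching for 1 or for an absent key costs 4 (worst case).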
Asymptotic Growth
Since counting the exact number of operations is cumbersome, and sometimes impossible, we focus on asymptotic analysis, where constants and lower-order terms are ignored.
– E.g., n^3, 1000n^3, and 10n^3 + 10000n^2 + 5n – 1 are all “the same”.
– The reason we can do this is that we are always interested in comparing algorithms for arbitrarily large input sizes.
Asymptotic Growth (1)
Asymptotic Growth (2)
Running Times for Different Sizes of Inputs of Different Functions
Asymptotic Complexity
Finding the exact complexity, f(n) = number of basic operations, of an algorithm is difficult.
We approximate f(n) by a function g(n) in a way that does not substantially change the magnitude of f(n): the function g(n) is sufficiently close to f(n) for large values of the input size n.
This “approximate” measure of efficiency is called asymptotic complexity.
Thus the asymptotic complexity measure does not give the exact number of operations of an algorithm, but it shows how that number grows with the size of the input. This gives us a measure that will work for different operating systems, compilers, and CPUs.
Big-O (Asymptotic) Notation
The most commonly used notation for specifying asymptotic complexity is the big-O notation.
The big-O notation, O(g(n)), is used to give an upper bound on a positive runtime function f(n), where n is the input size.
Definition of big-O: Consider a function f(n) that is non-negative for all n ≥ 0. We say that “f(n) is big-O of g(n)”, written f(n) = O(g(n)), if there exist an integer n0 ≥ 0 and a constant c > 0 such that f(n) ≤ c·g(n) for all n ≥ n0.
Big-O (Asymptotic) Notation
Implication of the definition: for all sufficiently large n, c·g(n) is an upper bound of f(n).
Note: by the definition of big-O,
f(n) = 3n + 4 is O(n); it is also O(n^2), O(n^3), …, and even O(n^n).
However, when big-O notation is used, the function g in the relationship f(n) = O(g(n)) is chosen to be as small as possible.
– We call such a function g a tight asymptotic bound of f(n).
Big-O (Asymptotic) Notation
Some big-O complexity classes in order of magnitude, from smallest to largest:
O(1)         Constant
O(log n)     Logarithmic
O(n)         Linear
O(n log n)   n log n
O(n^x)       Polynomial   {e.g., O(n^2), O(n^3), etc.}
O(a^n)       Exponential  {e.g., O(1.6^n), O(2^n), etc.}
O(n!)        Factorial
O(n^n)
Examples of Algorithms and their Big-O Complexity
Method           Best Case    Worst Case   Average Case
Selection sort   O(n^2)       O(n^2)       O(n^2)
Insertion sort   O(n)         O(n^2)       O(n^2)
Merge sort       O(n log n)   O(n log n)   O(n log n)
Quick sort       O(n log n)   O(n^2)       O(n log n)
Warnings about O-Notation
Big-O notation cannot compare algorithms in the same complexity class.
Big-O notation only gives sensible comparisons of algorithms in different complexity classes when n is large.
Consider two algorithms for the same task:
– Linear: f(n) = 1000n
– Quadratic: f'(n) = n^2/1000
The quadratic one is faster for n < 1,000,000.
Rules for Using Big-O
For large values of the input n, the constants and the terms with lower degree of n are ignored.
1. Multiplicative constants rule: ignore constant factors.
   O(c·f(n)) = O(f(n)), where c is a constant.
   Example: O(20n^3) = O(n^3)
2. Addition rule: ignore smaller terms.
   If O(f(n)) < O(h(n)), then O(f(n) + h(n)) = O(h(n)).
   Examples: O(n^2 log n + n^3) = O(n^3)
   O(2000n^3 + 2n! + n^800 + 10n + 27n log n + 5) = O(n!)
3. Multiplication rule: O(f(n) · h(n)) = O(f(n)) · O(h(n))
   Example: O((n^3 + 2n^2 + 3n log n + 7)(8n^2 + 5n + 2)) = O(n^5)
How to Determine Complexity of Code Structures
Loops (for, while, and do-while): the complexity is determined by the number of iterations of the loop times the complexity of the body of the loop.
Examples:
for (int i = 0; i < n; i++)
    sum = sum - i;
// O(n)

for (int i = 0; i < n * n; i++)
    sum = sum + i;
// O(n^2)

i = 1;
while (i < n) {
    sum = sum + i;
    i = i * 2;
}
// O(log n)
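The O(log n) claim for the doubling loop can be checked directly: the loop body runs ⌈log2 n⌉ times for n > 1, since i doubles until it reaches n. A sketch with an illustrative class name:

```java
// Iteration count of the doubling loop: i = 1; while (i < n) i *= 2;
// It runs ceil(log2 n) times for n > 1, which is why it is O(log n).
public class DoublingLoop {
    public static int iterations(int n) {
        int i = 1, c = 0;
        while (i < n) {
            c++;          // one iteration of the loop body
            i = i * 2;    // i doubles: 1, 2, 4, 8, ...
        }
        return c;
    }
}
```

For n = 8 the loop runs 3 times (i = 1, 2, 4); for n = 9 it runs 4 times.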
How to Determine Complexity of Code Structures
Nested loops: examples:
sum = 0;
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        sum += i * j;
// O(n^2)

i = 1;
while (i <= n) {
    j = 1;
    while (j <= n) {
        // statements of constant complexity
        j = j * 2;
    }
    i = i + 1;
}
// O(n log n)
How to Determine Complexity of Code Structures
Sequence of statements: use the addition rule.
O(s1; s2; s3; …; sk) = O(s1) + O(s2) + O(s3) + … + O(sk) = O(max(s1, s2, s3, …, sk))
Example:
for (int j = 0; j < n * n; j++)
    sum = sum + j;
for (int k = 0; k < n; k++)
    sum = sum - k;
System.out.print("sum is now " + sum);
Complexity is O(n^2) + O(n) + O(1) = O(n^2).
How to Determine Complexity of Code Structures
Switch: take the complexity of the most expensive case.
char key;
int[] X = new int[n];
int[][] Y = new int[n][n];
........
switch(key) {
    case 'a':
        for (int i = 0; i < X.length; i++)      // O(n)
            sum += X[i];
        break;
    case 'b':
        for (int i = 0; i < Y.length; i++)      // O(n^2)
            for (int j = 0; j < Y[0].length; j++)
                sum += Y[i][j];
        break;
} // End of switch block
Overall complexity: O(n^2)
How to Determine Complexity of Code Structures
If statement: take the complexity of the most expensive case.
char key;
int[][] A = new int[n][n];
int[][] B = new int[n][n];
int[][] C = new int[n][n];
........
if (key == '+') {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            C[i][j] = A[i][j] + B[i][j];          // O(n^2)
} // End of if block
else if (key == 'x')
    C = matrixMult(A, B);                         // O(n^3)
else
    System.out.println("Error! Enter '+' or 'x'!"); // O(1)
Overall complexity: O(n^3)
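The slide does not show matrixMult; the O(n^3) cost is what the standard triple-loop algorithm gives. A hedged sketch of such an implementation (this is an assumption about the method, not the slides' actual code):

```java
// A hypothetical matrixMult for n-by-n matrices: the standard triple-loop
// algorithm performs n * n * n multiply-add steps, hence O(n^3).
public class MatrixOps {
    public static int[][] matrixMult(int[][] A, int[][] B) {
        int n = A.length;
        int[][] C = new int[n][n];
        for (int i = 0; i < n; i++)              // n iterations
            for (int j = 0; j < n; j++)          // n iterations
                for (int k = 0; k < n; k++)      // n iterations: n^3 in total
                    C[i][j] += A[i][k] * B[k][j];
        return C;
    }
}
```

For example, multiplying {{1, 2}, {3, 4}} by {{5, 6}, {7, 8}} yields {{19, 22}, {43, 50}}.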
How to Determine Complexity of Code Structures
Sometimes if-else statements must be checked carefully:
O(if-else) = O(condition) + max(O(if branch), O(else branch))
int[] integers = new int[n];
........
if (hasPrimes(integers) == true)    // condition: O(n)
    integers[0] = 20;               // O(1)
else
    integers[0] = -20;              // O(1)

public boolean hasPrimes(int[] arr) {
    for (int i = 0; i < arr.length; i++)
        ..........
} // End of hasPrimes()
O(if-else) = O(condition) = O(n)