Introduction to Complexity Analysis: Motivation; Average, Best, and Worst Case Complexity Analysis; Asymptotic Analysis


Introduction to Complexity Analysis: Motivation; Average, Best, and Worst Case Complexity Analysis; Asymptotic Analysis

Motivations for Complexity Analysis

There are often many different algorithms that can be used to solve the same problem. For example, assume that we want to search for a key in a sorted array: we could scan it linearly or use binary search. Thus, it makes sense to develop techniques that allow us to:
o compare different algorithms with respect to their "efficiency"
o choose the most efficient algorithm for the problem

The efficiency of any algorithmic solution to a problem can be measured by its:
o Time efficiency: the time it takes to execute.
o Space efficiency: the space (primary or secondary memory) it uses.

We will focus on an algorithm's efficiency with respect to time.

How Do We Measure Efficiency?

One option is to measure running time in [micro/milli]seconds.
– Advantage: it is easy to measure.
– Disadvantage: it depends on the particular machine, compiler, and input used, so it does not characterize the algorithm itself.

Instead:
o we count the number of basic operations the algorithm performs.
o we calculate how this number depends on the size of the input.

A basic operation is an operation that takes a constant amount of time to execute.
– E.g., additions, subtractions, multiplications, comparisons, etc.

Hence, the efficiency of an algorithm is determined in terms of the number of basic operations it performs. This number is most useful when expressed as a function of the input size n.

Examples of Basic Operations:
– Arithmetic operations: *, /, %, +, -
– Assignment statements of simple data types
– Reading of a primitive type
– Writing of a primitive type
– Simple conditional tests: if (x < 12) ...
– A method call (note: the execution time of the method itself may depend on the values of its parameters, so it may not be constant)
– A method's return statement
– Memory access

We consider an operation such as ++, +=, or *= as consisting of two basic operations.
Note: to simplify complexity analysis, we will not count memory-access (fetch or store) operations.

Simple Complexity Analysis

public class TestAbstractClass {
    public static void main(String[] args) {
        Employee[] list = new Employee[3];
        list[0] = new Executive("Jarallah Al-Ghamdi", 50000);
        list[1] = new HourlyEmployee("Azmat Ansari", 120);
        list[2] = new MonthlyEmployee("Sahalu Junaidu", 9000);
        ((Executive) list[0]).awardBonus(11000);
        for (int i = 0; i < list.length; i++)
            if (list[i] instanceof HourlyEmployee)
                ((HourlyEmployee) list[i]).addHours(60);
        for (int i = 0; i < list.length; i++) {
            list[i].print();
            System.out.println("Paid: " + list[i].pay());
            System.out.println("*************************");
        }
    }
}

Count the # of basic operations:
– Assignment statements:
– Additions:
– Print statements:
– Method calls:

Simple Complexity Analysis

Counting the number of all basic operations is cumbersome, and it is usually not necessary to count every one of them. Instead, we count the number of times the statement that gets executed most often is executed.
– E.g., find the number of element comparisons.

Simple Complexity Analysis: Simple Loops

How to find the cost of a single, unnested loop:
– Find the cost of the body of the loop (in the example below, count the number of multiplications).
– Find the number of iterations of the for loop.
– Multiply the two numbers to get the total.

E.g.:
double x, y;
x = 2.5;
y = 3.0;
for (int i = 0; i < n; i++) {
    a[i] = x * y;
    x = 2.5 * x;
    y = y + a[i];
}
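The counting procedure above can be checked directly by instrumenting the loop. The sketch below (a hypothetical harness, not part of the slides) counts the multiplications in the loop body: 2 per iteration, times n iterations.

```java
public class LoopCost {
    public static void main(String[] args) {
        int n = 5;
        double[] a = new double[n];
        double x = 2.5, y = 3.0;
        int multiplications = 0;
        for (int i = 0; i < n; i++) {
            a[i] = x * y;          multiplications++;   // first multiplication
            x = 2.5 * x;           multiplications++;   // second multiplication
            y = y + a[i];                               // an addition, not counted
        }
        // body cost (2 multiplications) * number of iterations (n) = 2n
        System.out.println(multiplications);            // prints 10 for n = 5
    }
}
```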

Simple Complexity Analysis: Complex Loops

Represent the cost of a for loop in summation form.
– The main idea is to find an iterator that increases/decreases its value by 1.
– For example, consider finding the number of times statements 1, 2, and 3 get executed below:

for (int i = 1; i < n; i++)
    statement1;

for (int i = 1; i <= n; i++)
    for (int j = 1; j <= n; j++)
        statement2;

for (int i = 1; i <= n; i++)
    for (int j = 1; j <= i; j++)
        statement3;
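The summation counts can be verified empirically. This sketch (an illustrative check, not from the slides) instruments the three loops; for n = 10 it confirms statement1 runs n − 1 = 9 times, statement2 runs n² = 100 times, and statement3 runs n(n+1)/2 = 55 times.

```java
public class NestedLoopCounts {
    public static void main(String[] args) {
        int n = 10;
        int c1 = 0, c2 = 0, c3 = 0;
        for (int i = 1; i < n; i++)
            c1++;                          // statement1: runs n - 1 times
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++)
                c2++;                      // statement2: runs n * n times
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= i; j++)
                c3++;                      // statement3: runs 1 + 2 + ... + n = n(n+1)/2 times
        System.out.println(c1 + " " + c2 + " " + c3);   // prints: 9 100 55
    }
}
```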

Useful Summation Formulas
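The formula list on this slide did not survive the transcript. The standard summation identities it would contain, which are used in the loop analyses before and after this slide, are:

```latex
\sum_{i=1}^{n} 1 = n
\qquad
\sum_{i=1}^{n} i = \frac{n(n+1)}{2}
\qquad
\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}
\qquad
\sum_{i=0}^{n} a^i = \frac{a^{n+1}-1}{a-1} \quad (a \neq 1)
\qquad
\sum_{i=0}^{n} 2^i = 2^{n+1}-1
```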

Simple Complexity Analysis: Complex Loops

Represent the cost of the for loop in summation form.
– The problem in the example below is that the value of i does not increase by 1:
  i: k, k + m, k + 2m, …, k + rm
– Here we can assume, without loss of generality, that k + rm = n, i.e., r = (n – k)/m.
– Thus an iterator s running over 0, 1, …, r can be used instead.

for (int i = k; i <= n; i = i + m)
    statement4;

Simple Complexity Analysis: Loops (with <=)

In the following for loop:

for (int i = k; i <= n; i = i + m) {
    statement1;
    statement2;
}

– The number of iterations is ⌊(n – k) / m⌋ + 1.
– The initialization statement, i = k, is executed one time.
– The condition, i <= n, is executed ⌊(n – k) / m⌋ + 2 times (once per iteration, plus the final failing test).
– The update statement, i = i + m, is executed ⌊(n – k) / m⌋ + 1 times.
– Each of statement1 and statement2 is executed ⌊(n – k) / m⌋ + 1 times.

Simple Complexity Analysis: Loops (with <)

In the following for loop:

for (int i = k; i < n; i = i + m) {
    statement1;
    statement2;
}

– The number of iterations is ⌈(n – k) / m⌉.
– The initialization statement, i = k, is executed one time.
– The condition, i < n, is executed ⌈(n – k) / m⌉ + 1 times.
– The update statement, i = i + m, is executed ⌈(n – k) / m⌉ times.
– Each of statement1 and statement2 is executed ⌈(n – k) / m⌉ times.
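The two iteration-count formulas can be checked against an actual run. This sketch (a hypothetical check, with k = 3, m = 4, n = 15 chosen so the two formulas give different answers) counts iterations of both loop forms: the <= loop runs ⌊(15−3)/4⌋ + 1 = 4 times, the < loop runs ⌈(15−3)/4⌉ = 3 times.

```java
public class StepLoopCount {
    public static void main(String[] args) {
        int n = 15, k = 3, m = 4;
        int le = 0, lt = 0;
        for (int i = k; i <= n; i = i + m)
            le++;   // i takes values 3, 7, 11, 15 -> floor((n-k)/m) + 1 iterations
        for (int i = k; i < n; i = i + m)
            lt++;   // i takes values 3, 7, 11     -> ceil((n-k)/m) iterations
        System.out.println(le + " " + lt);   // prints: 4 3
    }
}
```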

Simple Complexity Analysis: Complex Loops

Suppose n is a power of 2. Determine the number of basic operations performed by the method myMethod():

static int myMethod(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i = i * 2)
        sum = sum + i + helper(i);
    return sum;
}

static int helper(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++)
        sum = sum + i;   // statement5
    return sum;
}

Solution: the variables i and n in myMethod are different from the ones in the helper method.
– In fact, the n of helper is bound to the value of i in myMethod on each call.
– Hence, we rename the variable i in helper (call it k), because it is independent of the i in myMethod.

We will count the number of times statement5 gets executed. In myMethod, i takes the values 1, 2, 2², 2³, …, 2^r = n (so r = log₂ n). Hence we can use an iterator j, where i = 2^j and j takes the values 0, 1, 2, 3, …, r = log₂ n.
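Following the analysis above, helper(i) executes statement5 exactly i times, so the total count is the geometric sum 1 + 2 + 4 + … + n = 2n − 1. This sketch (an instrumented copy of the slide's code, with a counter added for illustration) confirms it for n = 16:

```java
public class MyMethodCount {
    static int count = 0;   // counts executions of statement5

    static int helper(int n) {
        int sum = 0;
        for (int k = 1; k <= n; k++) {
            sum = sum + k;   // statement5
            count++;
        }
        return sum;
    }

    public static void main(String[] args) {
        int n = 16;   // n is a power of 2
        int sum = 0;
        for (int i = 1; i <= n; i = i * 2)
            sum = sum + i + helper(i);   // helper(1), helper(2), ..., helper(n)
        // total = 1 + 2 + 4 + ... + n = 2n - 1
        System.out.println(count);       // prints 31 for n = 16
    }
}
```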

Useful Logarithmic Formulas
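The formula list on this slide also did not survive the transcript. The standard logarithm identities it would contain, which are needed when analyzing the halving/doubling loops in this lecture, are:

```latex
\log_b(xy) = \log_b x + \log_b y
\qquad
\log_b\!\left(\frac{x}{y}\right) = \log_b x - \log_b y
\qquad
\log_b x^a = a \log_b x
\qquad
\log_a x = \frac{\log_b x}{\log_b a}
\qquad
a^{\log_a x} = x, \quad\text{in particular } 2^{\log_2 n} = n
```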

Best, Average, and Worst Case Complexities

What is the best case complexity? The smallest number of basic operations carried out by the algorithm over all inputs of a given size.
What is the worst case complexity? The largest number of basic operations carried out by the algorithm over all inputs of a given size.
What is the average case complexity? The number of basic operations carried out by the algorithm on average over all inputs of a given size.

We are usually interested in the worst case complexity because it is:
– easier to compute,
– an upper bound on the actual running time for all inputs,
– crucial for real-time systems (e.g., air-traffic control).

Best, Average, and Worst Case Complexities: Example

For the linear search algorithm, searching for a key in an array of n elements, determine the situation and the number of comparisons in each of the following cases:
– Best case
– Worst case
– Average case
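The exercise above can be explored with an instrumented linear search. This sketch (an illustrative implementation, not from the slides) counts element comparisons: the best case (key in the first position) costs 1 comparison, the worst case (key absent, or in the last position) costs n, and averaging over all successful positions gives (n + 1)/2.

```java
public class LinearSearchCases {
    static int comparisons;   // comparisons performed by the last call to search

    static int search(int[] a, int key) {
        comparisons = 0;
        for (int i = 0; i < a.length; i++) {
            comparisons++;                // one element comparison per iteration
            if (a[i] == key) return i;
        }
        return -1;                        // key not found
    }

    public static void main(String[] args) {
        int[] a = {5, 3, 8, 1, 9};
        search(a, 5);
        System.out.println(comparisons);  // best case: 1 (key in first position)
        search(a, 7);
        System.out.println(comparisons);  // worst case: n = 5 (key not present)
    }
}
```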

Asymptotic Growth

Since counting the exact number of operations is cumbersome, and sometimes impossible, we can instead turn to asymptotic analysis, where constants and lower-order terms are ignored.
– E.g., n³, 1000n³, and 10n³ + n² + 5n – 1 are all "the same".
– The reason we can do this is that we are always interested in comparing different algorithms for arbitrarily large input sizes.

Asymptotic Growth (1)

Asymptotic Growth (2)

Running Times for Different Sizes of Inputs of Different Functions

Asymptotic Complexity

Finding the exact complexity, f(n) = the number of basic operations, of an algorithm is difficult. We approximate f(n) by a function g(n) in a way that does not substantially change the magnitude of f(n); that is, g(n) is sufficiently close to f(n) for large values of the input size n. This "approximate" measure of efficiency is called asymptotic complexity. Thus the asymptotic complexity measure does not give the exact number of operations of an algorithm, but it shows how that number grows with the size of the input. This gives us a measure that will work for different operating systems, compilers, and CPUs.

Big-O (Asymptotic) Notation

The most commonly used notation for specifying asymptotic complexity is the big-O notation. The big-O notation, O(g(n)), is used to give an upper bound on a positive runtime function f(n), where n is the input size.

Definition of big-O: Consider a function f(n) that is non-negative for all n ≥ 0. We say that "f(n) is big-O of g(n)", written f(n) = O(g(n)), if there exist an n₀ ≥ 0 and a constant c > 0 such that f(n) ≤ c·g(n) for all n ≥ n₀.

Big-O (Asymptotic) Notation

Implication of the definition: for all sufficiently large n, c·g(n) is an upper bound on f(n).

Note: by the definition of big-O, f(n) = 3n + 4 is O(n); it is also O(n²), O(n³), …, and even O(nⁿ). However, when big-O notation is used, the function g in the relationship f(n) = O(g(n)) is chosen to be as small as possible.
– We call such a function g a tight asymptotic bound of f(n).
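The claim that 3n + 4 is O(n) can be made concrete by exhibiting witnesses for the definition: with c = 4 and n₀ = 4 we have 3n + 4 ≤ 4n whenever n ≥ 4 (since that inequality rearranges to 4 ≤ n). This sketch (an illustrative brute-force check of the inequality over a finite range, not a proof) confirms it:

```java
public class BigOWitness {
    public static void main(String[] args) {
        int c = 4, n0 = 4;          // witnesses for f(n) = 3n + 4 being O(n)
        boolean holds = true;
        for (int n = n0; n <= 1_000_000; n++) {
            // check f(n) <= c * g(n), i.e., 3n + 4 <= 4n
            if (3L * n + 4 > (long) c * n) holds = false;
        }
        System.out.println(holds);  // prints: true
    }
}
```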

Big-O (Asymptotic) Notation

Some big-O complexity classes, in order of magnitude from smallest to largest:

O(1)         Constant
O(log n)     Logarithmic
O(n)         Linear
O(n log n)   n log n
O(n^x)       Polynomial (e.g., O(n²), O(n³), etc.)
O(a^n)       Exponential (e.g., O(1.6^n), O(2^n), etc.)
O(n!)        Factorial
O(n^n)

Examples of Algorithms and Their Big-O Complexity

Method           Best Case     Worst Case    Average Case
Selection sort   O(n²)         O(n²)         O(n²)
Insertion sort   O(n)          O(n²)         O(n²)
Merge sort       O(n log n)    O(n log n)    O(n log n)
Quick sort       O(n log n)    O(n²)         O(n log n)

Warnings about O-Notation

Big-O notation cannot distinguish between algorithms in the same complexity class, and it only gives sensible comparisons of algorithms in different complexity classes when n is large. Consider two algorithms for the same task:

Linear: f(n) = 1000n
Quadratic: f'(n) = n²/1000

The quadratic one is faster for n < 1,000,000.

Rules for Using Big-O

For large values of the input size n, the constants and the terms with lower degree of n are ignored.

1. Multiplicative constants rule: ignore constant factors.
   O(c·f(n)) = O(f(n)), where c is a constant.
   Example: O(20n³) = O(n³)

2. Addition rule: ignore smaller terms.
   If O(f(n)) < O(h(n)), then O(f(n) + h(n)) = O(h(n)).
   Examples: O(n² log n + n³) = O(n³)
             O(2000n³ + 2·n! + nⁿ + 27n log n + 5) = O(nⁿ)

3. Multiplication rule: O(f(n) · h(n)) = O(f(n)) · O(h(n))
   Example: O((n³ + 2n² + 3n log n + 7)(8n² + 5n + 2)) = O(n⁵)

How to Determine the Complexity of Code Structures

Loops (for, while, and do-while): the complexity is the number of iterations of the loop times the complexity of the body of the loop. Examples:

// O(n)
for (int i = 0; i < n; i++)
    sum = sum - i;

// O(n²)
for (int i = 0; i < n * n; i++)
    sum = sum + i;

// O(log n)
i = 1;
while (i < n) {
    sum = sum + i;
    i = i * 2;
}
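The O(log n) claim for the doubling loop follows because i takes the values 1, 2, 4, 8, …, so the loop runs ⌈log₂ n⌉ times (for n > 1). This sketch (an illustrative counter added for this document) verifies it for n = 10:

```java
public class DoublingLoop {
    public static void main(String[] args) {
        int n = 10;
        int iterations = 0;
        int i = 1;
        while (i < n) {     // i takes the values 1, 2, 4, 8, ...
            iterations++;
            i = i * 2;
        }
        // ceil(log2(10)) = 4, since 2^3 = 8 < 10 <= 16 = 2^4
        System.out.println(iterations);   // prints: 4
    }
}
```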

How to Determine the Complexity of Code Structures

Nested loops: multiply the iteration counts of the loops. Examples:

// O(n²)
sum = 0;
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        sum += i * j;

// O(n log n)
i = 1;
while (i <= n) {
    j = 1;
    while (j <= n) {
        // statements of constant complexity
        j = j * 2;
    }
    i = i + 1;
}
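For the second example, the inner doubling loop runs ⌊log₂ n⌋ + 1 times per pass of the outer loop, and the outer loop runs n times, giving n·(⌊log₂ n⌋ + 1) executions of the constant-complexity body, which is O(n log n). This sketch (an illustrative counter for this document) checks that for n = 8:

```java
public class NLogNLoop {
    public static void main(String[] args) {
        int n = 8;
        int count = 0;
        int i = 1;
        while (i <= n) {
            int j = 1;
            while (j <= n) {   // j takes the values 1, 2, 4, 8: floor(log2 n) + 1 iterations
                count++;       // stands in for the constant-complexity statements
                j = j * 2;
            }
            i = i + 1;
        }
        // n * (floor(log2 n) + 1) = 8 * 4 = 32
        System.out.println(count);   // prints: 32
    }
}
```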

How to Determine the Complexity of Code Structures

Sequence of statements: use the addition rule.
O(s1; s2; s3; …; sk) = O(s1) + O(s2) + O(s3) + … + O(sk) = O(max(s1, s2, s3, …, sk))

Example: the complexity below is O(n²) + O(n) + O(1) = O(n²).

for (int j = 0; j < n * n; j++)
    sum = sum + j;
for (int k = 0; k < n; k++)
    sum = sum - k;
System.out.print("sum is now " + sum);

How to Determine the Complexity of Code Structures

Switch: take the complexity of the most expensive case.

char key;
int[] X = new int[n];
int[][] Y = new int[n][n];
switch (key) {
    case 'a':                                    // O(n)
        for (int i = 0; i < X.length; i++)
            sum += X[i];
        break;
    case 'b':                                    // O(n²)
        for (int i = 0; i < Y.length; i++)
            for (int j = 0; j < Y[0].length; j++)
                sum += Y[i][j];
        break;
} // End of switch block

Overall complexity: O(n²)

How to Determine the Complexity of Code Structures

If statement: take the complexity of the most expensive case.

char key;
int[][] A = new int[n][n];
int[][] B = new int[n][n];
int[][] C = new int[n][n];
if (key == '+') {                                // O(n²)
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            C[i][j] = A[i][j] + B[i][j];
} // End of if block
else if (key == 'x')                             // O(n³)
    C = matrixMult(A, B);
else                                             // O(1)
    System.out.println("Error! Enter '+' or 'x'!");

Overall complexity: O(n³)

How to Determine the Complexity of Code Structures

Sometimes if-else statements must be checked carefully:
O(if-else) = O(condition) + max(O(if branch), O(else branch))

int[] integers = new int[n];
if (hasPrimes(integers) == true)    // condition: O(n)
    integers[0] = 20;               // if branch: O(1)
else
    integers[0] = -20;              // else branch: O(1)

public boolean hasPrimes(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        // loop body omitted on the slide
    }
    ...
} // End of hasPrimes()

Here both branches are O(1), so:
O(if-else) = O(condition) = O(n)