Introduction to Analysis of Algorithms


Introduction

What is an algorithm?
- A clearly specified set of simple instructions to be followed to solve a problem.
- Takes a set of values as input and produces a value, or set of values, as output.
- May be specified in English, as a computer program, or as pseudo-code.

What are data structures?
- Methods of organizing data.

Program = algorithms + data structures

What is an algorithm? An algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. It is a tool for solving a well-specified computational problem. Algorithms are often written in pseudo-code, which can be implemented in the language of the programmer's choice.

Example: sorting numbers

- Input: a sequence of n numbers {a1, a2, ..., an}.
- Output: a reordering {a1', a2', ..., an'} of the input sequence such that a1' ≤ a2' ≤ ... ≤ an'.
- Input instance: {5, 2, 4, 1, 6, 3}
- Output: {1, 2, 3, 4, 5, 6}
- An instance of a problem consists of the input (satisfying whatever constraints are imposed in the problem statement) needed to compute a solution to the problem.
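A minimal sketch (not from the original slides) of one way to solve this instance in Java, using insertion sort:

    import java.util.Arrays;

    public class SortExample {
        public static void main(String[] args) {
            int[] a = {5, 2, 4, 1, 6, 3};  // input instance from above
            // Insertion sort: grow a sorted prefix one element at a time.
            for (int i = 1; i < a.length; i++) {
                int key = a[i];
                int j = i - 1;
                while (j >= 0 && a[j] > key) {  // shift larger elements right
                    a[j + 1] = a[j];
                    j--;
                }
                a[j + 1] = key;
            }
            System.out.println(Arrays.toString(a));  // [1, 2, 3, 4, 5, 6]
        }
    }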

Example: sorting numbers (continued)

- Sorting is a fundamental operation.
- Many algorithms exist for this purpose.
- The best algorithm to use depends on:
  - the number of items to be sorted;
  - possible restrictions on the item values;
  - the kind of storage device to be used: main memory, disks, or tapes.

Correct and incorrect algorithms

An algorithm is correct if, for every input instance, it halts with the correct output. We say that a correct algorithm solves the given computational problem. An incorrect algorithm might not halt at all on some input instances, or it might halt with an answer other than the desired one. We shall be concerned only with correct algorithms.

Hard problems

- We can judge the efficiency of an algorithm by its speed: how long the algorithm takes to produce its result.
- Some problems have no known efficient solution.
- Among these are the NP-complete problems.

Big-O Notation

We use a shorthand mathematical notation to describe the efficiency of an algorithm relative to a parameter n: its "order", or Big-O. For example, we might say that one algorithm is O(n) and that another is O(n^2). In general, for any algorithm whose running time is described by a function g(n) of the parameter n, we say the algorithm is O(g(n)). We keep only the fastest-growing term and ignore multiplicative and additive constants.
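As a minimal sketch (hypothetical methods, not from the slides), the following contrasts an O(n) method, which makes one pass over the array, with an O(n^2) method, which examines every pair of elements:

    public class GrowthDemo {
        // O(n): a single pass over the array.
        static long sum(int[] a) {
            long s = 0;
            for (int x : a) s += x;
            return s;
        }

        // O(n^2): examines every pair (i, j) with i < j.
        static int countEqualPairs(int[] a) {
            int count = 0;
            for (int i = 0; i < a.length; i++)
                for (int j = i + 1; j < a.length; j++)
                    if (a[i] == a[j]) count++;
            return count;
        }

        public static void main(String[] args) {
            int[] a = {3, 1, 4, 1, 5};
            System.out.println(sum(a));              // 14
            System.out.println(countEqualPairs(a));  // 1 (the two 1s)
        }
    }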

Seven Growth Functions

Seven functions g(n) occur frequently in the analysis of algorithms. In order of increasing rate of growth relative to n:

- Constant: 1
- Logarithmic: log n
- Linear: n
- Log linear: n log n
- Quadratic: n^2
- Cubic: n^3
- Exponential: 2^n

Growth Rates Compared

    g(n)       n=1    n=2    n=4    n=8     n=16     n=32
    1          1      1      1      1       1        1
    log n      0      1      2      3       4        5
    n          1      2      4      8       16       32
    n log n    0      2      8      24      64       160
    n^2        1      4      16     64      256      1024
    n^3        1      8      64     512     4096     32768
    2^n        2      4      16     256     65536    4294967296

Graphical Comparison of Complexity Classes

Complexity Analysis

- Asymptotic Complexity
- Big-O (asymptotic) Notation
- Big-O Computation Rules
- Proving Big-O Complexity
- How to determine the complexity of code structures

Asymptotic Complexity

Finding the exact complexity of an algorithm, f(n) = the number of basic operations, is difficult. Instead, we approximate f(n) by a function g(n) in a way that does not substantially change the magnitude of f(n): the function g(n) is sufficiently close to f(n) for large values of the input size n. This "approximate" measure of efficiency is called asymptotic complexity. The asymptotic complexity measure does not give the exact number of operations an algorithm performs, but it shows how that number grows with the size of the input. This gives us a measure that carries over across operating systems, compilers, and CPUs.

Big-O (asymptotic) Notation

The most commonly used notation for specifying asymptotic complexity is big-O notation. The Big-O notation, O(g(n)), is used to give an upper bound (worst case) on a positive runtime function f(n), where n is the input size.

Definition of Big-O: Consider a function f(n) that is non-negative for all n ≥ 0. We say that "f(n) is Big-O of g(n)", written f(n) = O(g(n)), if there exist an n0 ≥ 0 and a constant c > 0 such that f(n) ≤ c·g(n) for all n ≥ n0.

Big-O (asymptotic) Notation

Implication of the definition: for all sufficiently large n, c·g(n) is an upper bound of f(n).

Note: by the definition of Big-O, f(n) = 3n + 4 is O(n); it is also O(n^2), O(n^3), ..., and even O(n^n). However, when Big-O notation is used, the function g in the relationship f(n) = O(g(n)) is chosen to be as small as possible. We call such a function g a tight asymptotic bound of f(n).

Big-O (asymptotic) Notation

Some Big-O complexity classes, in order of magnitude from smallest to largest:

    O(1)          Constant
    O(log n)      Logarithmic
    O(n)          Linear
    O(n log n)    n log n
    O(n^x)        Polynomial (e.g., O(n^2), O(n^3))
    O(a^n)        Exponential (e.g., O(1.6^n), O(2^n))
    O(n!)         Factorial
    O(n^n)

Examples of Algorithms and their Big-O Complexity

    Big-O notation   Examples of algorithms
    O(1)             Push, Pop, Enqueue (with a tail reference), Dequeue, accessing an array element
    O(log n)         Binary search
    O(n)             Linear search
    O(n log n)       Heap sort, Quick sort (average case), Merge sort
    O(n^2)           Selection sort, Insertion sort, Bubble sort
    O(n^3)           Matrix multiplication
    O(2^n)           Towers of Hanoi
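As an illustration of the O(log n) row, a standard binary search sketch in Java (not taken from the slides); it halves the search interval on every iteration:

    public class BinarySearchDemo {
        // Binary search on a sorted array: O(log n) comparisons.
        static int binarySearch(int[] a, int target) {
            int lo = 0, hi = a.length - 1;
            while (lo <= hi) {
                int mid = lo + (hi - lo) / 2;       // midpoint without overflow
                if (a[mid] == target) return mid;
                if (a[mid] < target) lo = mid + 1;  // discard the left half
                else hi = mid - 1;                  // discard the right half
            }
            return -1;  // not found
        }

        public static void main(String[] args) {
            int[] sorted = {1, 2, 3, 4, 5, 6};
            System.out.println(binarySearch(sorted, 4));  // prints 3
        }
    }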

Warnings about O-Notation

- Big-O notation cannot distinguish between algorithms in the same complexity class.
- Big-O notation only gives sensible comparisons of algorithms in different complexity classes when n is large.
- Consider two algorithms for the same task: a linear one with f(n) = 1000n and a quadratic one with f'(n) = n^2/1000. The quadratic algorithm is faster for n < 1,000,000.
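A quick numerical check of that crossover (a sketch, not from the slides):

    // Compare 1000n with n^2/1000 around the crossover n = 1,000,000.
    public class Crossover {
        public static void main(String[] args) {
            for (long n : new long[]{1_000, 999_999, 1_000_000, 2_000_000}) {
                double linear = 1000.0 * n;
                double quadratic = (double) n * n / 1000.0;
                System.out.printf("n=%,d  1000n=%,.0f  n^2/1000=%,.1f%n",
                                  n, linear, quadratic);
            }
        }
    }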

Rules for Using Big-O

For large values of the input n, constants and terms with lower degree of n are ignored.

1. Multiplicative Constants Rule: ignore constant factors.
   O(c·f(n)) = O(f(n)), where c is a constant.
   Example: O(20n^3) = O(n^3)

2. Addition Rule: ignore smaller terms.
   If O(f(n)) < O(h(n)), then O(f(n) + h(n)) = O(h(n)).
   Examples: O(n^2 log n + n^3) = O(n^3)
             O(2000n^3 + 2n! + n^800 + 10n + 27n log n + 5) = O(n!)

3. Multiplication Rule: O(f(n) · h(n)) = O(f(n)) · O(h(n)).
   Example: O((n^3 + 2n^2 + 3n log n + 7)(8n^2 + 5n + 2)) = O(n^5)

Proving Big-O Complexity

To prove that f(n) is O(g(n)), we find any pair of values n0 and c that satisfy

    f(n) ≤ c·g(n) for all n ≥ n0.

Note: the pair (n0, c) is not unique. If such a pair exists, then there are infinitely many such pairs.

Example: Prove that f(n) = 3n^2 + 5 is O(n^2). We find values of n0 and c by solving the inequality

    3n^2 + 5 ≤ c·n^2, i.e., 3 + 5/n^2 ≤ c.

By substituting different values for n, we get the corresponding values for c:

    n0    1     2      3      4       ...
    c     8     4.25   3.56   3.3125  ...

Proving Big-O Complexity

Example: Prove that f(n) = 3n^2 + 4n log n + 10 is O(n^2) by finding appropriate values for c and n0. We solve the inequality

    3n^2 + 4n log n + 10 ≤ c·n^2, i.e., 3 + (4 log n)/n + 10/n^2 ≤ c.

(We used log base 2, but any other base could be used as well.)

    n0    1     2     3      4     ...
    c     13    7.5   6.22   5.62  ...
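A minimal sketch (not from the slides) that numerically spot-checks one such pair (c, n0) for the first example:

    // Spot-check that f(n) = 3n^2 + 5 <= c * n^2 for n >= n0, with (n0, c) = (2, 4.25).
    public class BigOCheck {
        static double f(double n) { return 3 * n * n + 5; }
        static double g(double n) { return n * n; }

        public static void main(String[] args) {
            double c = 4.25;
            int n0 = 2;
            boolean holds = true;
            for (long n = n0; n <= 1_000_000; n *= 2) {  // sample doubling values of n
                if (f(n) > c * g(n)) { holds = false; break; }
            }
            System.out.println("Inequality holds for sampled n >= " + n0 + ": " + holds);
        }
    }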

How to determine complexity of code structures

Loops (for, while, and do-while): complexity is the number of iterations of the loop times the complexity of the loop body.

Examples:

    for (int i = 0; i < n; i++)
        sum = sum - i;              // O(n)

    for (int i = 0; i < n * n; i++)
        sum = sum + i;              // O(n^2)

    i = 1;
    while (i < n) {
        sum = sum + i;
        i = i * 2;                  // i doubles each iteration
    }                               // O(log n)

How to determine complexity of code structures

Nested loops: complexity of the inner loop * complexity of the outer loop.

Examples:

    sum = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            sum += i * j;           // O(n^2)

    i = 1;
    while (i <= n) {                // outer loop: O(n)
        j = 1;
        while (j <= n) {            // inner loop: O(log n)
            // statements of constant complexity
            j = j * 2;
        }
        i = i + 1;
    }                               // overall: O(n log n)

How to determine complexity of code structures

Sequence of statements: use the Addition Rule.

    O(s1; s2; s3; ...; sk) = O(s1) + O(s2) + O(s3) + ... + O(sk) = O(max(s1, s2, s3, ..., sk))

Example: the complexity is O(n^2) + O(n) + O(1) = O(n^2).

    for (int j = 0; j < n * n; j++)
        sum = sum + j;                          // O(n^2)
    for (int k = 0; k < n; k++)
        sum = sum - k;                          // O(n)
    System.out.print("sum is now " + sum);      // O(1)

How to determine complexity of code structures

Switch: take the complexity of the most expensive case.

    char key;
    int[] X = new int[5];
    int[][] Y = new int[10][10];
    ...
    switch (key) {
        case 'a':
            for (int i = 0; i < X.length; i++)          // O(n)
                sum += X[i];
            break;
        case 'b':
            for (int i = 0; i < Y.length; i++)          // O(n^2)
                for (int j = 0; j < Y[0].length; j++)
                    sum += Y[i][j];
            break;
    } // End of switch block

Overall complexity: O(n^2)

How to determine complexity of code structures

If statement: take the complexity of the most expensive branch.

    char key;
    int[][] A = new int[5][5];
    int[][] B = new int[5][5];
    int[][] C = new int[5][5];
    ...
    if (key == '+') {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                C[i][j] = A[i][j] + B[i][j];            // O(n^2)
    } // End of if block
    else if (key == 'x')
        C = matrixMult(A, B);                           // O(n^3)
    else
        System.out.println("Error! Enter '+' or 'x'!"); // O(1)

Overall complexity: O(n^3)

How to determine complexity of code structures

Sometimes if-else statements must be checked carefully, because the condition itself may dominate:

    O(if-else) = O(condition) + max(O(if branch), O(else branch))

    int[] integers = new int[10];
    ...
    if (hasPrimes(integers) == true)    // condition: O(n)
        integers[0] = 20;               // O(1)
    else
        integers[0] = -20;              // O(1)

    public boolean hasPrimes(int[] arr) {
        for (int i = 0; i < arr.length; i++)
            ..........
    } // End of hasPrimes()

Here O(if-else) = O(condition) = O(n).

How to determine complexity of code structures

Note: sometimes a loop may make the if-else rule inapplicable. Consider the following loop:

    while (n > 0) {
        if (n % 2 == 0) {
            System.out.println(n);
            n = n / 2;
        } else {
            n = n - 1;
        }
    }

The else branch has more basic operations, so one might conclude that the loop is O(n). However, the if branch dominates: n is halved at least every other iteration. For example, if n is 60, the sequence of values of n is 60, 30, 15, 14, 7, 6, 3, 2, 1, 0. Hence the loop is logarithmic and its complexity is O(log n).
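A small sketch (not from the slides) that prints that sequence of values for n = 60:

    public class HalvingTrace {
        public static void main(String[] args) {
            int n = 60;
            StringBuilder seq = new StringBuilder().append(n);
            while (n > 0) {
                if (n % 2 == 0) n = n / 2;  // even: halve
                else            n = n - 1;  // odd: decrement
                seq.append(", ").append(n);
            }
            System.out.println(seq);  // 60, 30, 15, 14, 7, 6, 3, 2, 1, 0
        }
    }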

Asymptotic Complexity

- The running time of an algorithm as a function of input size n, for large n.
- Expressed using only the highest-order term in the expression for the exact running time: instead of an exact running time, we say Θ(n^2).
- Describes the behavior of a function in the limit.
- Written using asymptotic notation.

Asymptotic Notation: Θ, O, Ω

- Defined for functions over the natural numbers. Example: f(n) = Θ(n^2) describes how f(n) grows in comparison to n^2.
- Each notation defines a set of functions; in practice, it is used to compare the sizes of two functions.
- The notations describe different rate-of-growth relations between the defining function and the defined set of functions.

Θ-notation

For a function g(n), we define Θ(g(n)), big-Theta of g of n, as the set:

    Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that
                0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }

Intuitively: the set of all functions that have the same rate of growth as g(n). g(n) is an asymptotically tight bound for f(n).

O-notation

For a function g(n), we define O(g(n)), big-O of g of n, as the set:

    O(g(n)) = { f(n) : there exist positive constants c and n0 such that
                0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }

Intuitively: the set of all functions whose rate of growth is the same as or lower than that of g(n). g(n) is an asymptotic upper bound for f(n). Note that f(n) = Θ(g(n)) implies f(n) = O(g(n)), i.e., Θ(g(n)) ⊆ O(g(n)).

Ω-notation

For a function g(n), we define Ω(g(n)), big-Omega of g of n, as the set:

    Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that
                0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }

Intuitively: the set of all functions whose rate of growth is the same as or higher than that of g(n). g(n) is an asymptotic lower bound for f(n). Note that f(n) = Θ(g(n)) implies f(n) = Ω(g(n)), i.e., Θ(g(n)) ⊆ Ω(g(n)).

Relations Between Θ, O, Ω

Relations Between Θ, O, Ω

Theorem: For any two functions g(n) and f(n),
    f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)),
i.e., Θ(g(n)) = O(g(n)) ∩ Ω(g(n)).

In practice, asymptotically tight bounds are obtained from asymptotic upper and lower bounds.

Logarithms

- x = log_b(a) is the exponent x for which b^x = a.
- Natural log: ln a = log_e(a)
- Binary log: lg a = log_2(a)
- lg^2(a) = (lg a)^2
- lg lg a = lg(lg a)
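Since Java's Math library offers only natural log (Math.log) and log base 10 (Math.log10), the binary log above is usually computed via the change-of-base identity; a small sketch (not from the slides):

    public class LogDemo {
        // Binary log via change of base: lg a = ln(a) / ln(2).
        static double lg(double a) {
            return Math.log(a) / Math.log(2.0);
        }

        public static void main(String[] args) {
            System.out.println(lg(8.0));           // ~3.0
            double lgSquared = lg(8.0) * lg(8.0);  // lg^2(8) = (lg 8)^2 = ~9.0
            System.out.println(lgSquared);
        }
    }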

Review of Summations

- Constant series: for integers a and b with a ≤ b,
      sum_{i=a..b} 1 = b - a + 1
- Linear series (arithmetic series): for n ≥ 0,
      sum_{i=1..n} i = 1 + 2 + ... + n = n(n + 1)/2
- Quadratic series: for n ≥ 0,
      sum_{i=1..n} i^2 = 1 + 4 + ... + n^2 = n(n + 1)(2n + 1)/6

Review of Summations

- Cubic series: for n ≥ 0,
      sum_{i=1..n} i^3 = 1 + 8 + ... + n^3 = n^2(n + 1)^2 / 4
- Geometric series: for real x ≠ 1,
      sum_{i=0..n} x^i = 1 + x + x^2 + ... + x^n = (x^(n+1) - 1)/(x - 1)
  For |x| < 1,
      sum_{i=0..infinity} x^i = 1/(1 - x)
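A quick sketch (not from the slides) that verifies the finite closed forms above numerically for n = 100:

    public class SummationCheck {
        public static void main(String[] args) {
            int n = 100;
            long linear = 0, quadratic = 0, cubic = 0;
            for (int i = 1; i <= n; i++) {
                linear    += i;
                quadratic += (long) i * i;
                cubic     += (long) i * i * i;
            }
            // Each comparison should print true.
            System.out.println(linear    == (long) n * (n + 1) / 2);
            System.out.println(quadratic == (long) n * (n + 1) * (2 * n + 1) / 6);
            System.out.println(cubic     == (long) n * n * (n + 1) * (n + 1) / 4);
        }
    }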