Algorithmics - Lecture 4
LECTURE 4: Analysis of Algorithms Efficiency (I)



Outline
- What is efficiency analysis?
- How can time efficiency be measured?
- Examples
- Best-case, worst-case and average-case analysis

What is efficiency analysis?
Analyzing the efficiency of an algorithm means establishing the amount of computing resources needed to execute it.
Remark: this is sometimes called complexity analysis.
Usefulness: it allows us to compare algorithms and to estimate, before an algorithm is executed, the amount of resources needed to obtain the result.

What is efficiency analysis?
Computing resources:
- Memory space = the space needed to store the data processed by the algorithm
- Running time = the time needed to execute the operations of the algorithm
Efficient algorithm: an algorithm which uses a reasonable amount of computing resources.
If an algorithm uses fewer resources than another, the first is considered more efficient than the second.

What is efficiency analysis?
There are two types of efficiency:
- Space efficiency: how much space the algorithm requires
- Time efficiency: how fast the algorithm runs
Both analyses are based on the following remark: the amount of computing resources depends on the dimension of the processed data (the problem's size, or input size).
The main aim of efficiency analysis is to answer the question: how do the time and/or space requirements depend on the input size?

How can we determine the input size for a given problem?
Usually this is quite straightforward, starting from the problem.
Input size = the dimension of the space needed to store all input data of the problem.
It can be expressed in one of the following ways:
- the number of elements (real values, integer values, characters, etc.) belonging to the input data
- the number of bits necessary to represent the input data

How can we determine the input size for a given problem?
Examples:
1. Find the minimum of an array x[1..n]. Input size: n
2. Compute the value of a polynomial of order n. Input size: n
3. Compute the sum of two m*n matrices. Input size: (m, n) or m*n
4. Verify whether a number n is prime or not. Input size: n, or [log2(n)]+1 (the number of bits needed to represent n)

How can we determine the input size for a given problem?
Efficiency is also influenced by the manner in which the data are organized.
Example: a sparse 100*100 matrix containing only 50 non-zero elements.
Representation variants:
1. Classical (100*100 = 10000 values)
2. One-dimensional array of the non-zero elements; for each non-zero element store (i, j, a[i,j]) (150 values)
The second variant is more efficient with respect to space; the first variant can be more efficient with respect to time (depending on the sparseness degree and on the operations to be executed on the matrices).
We often have to make a compromise between space efficiency and time efficiency.
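The space trade-off above can be sketched in Python (0-based indices; the helper name is illustrative, not from the lecture):

```python
def to_triplets(M):
    """Store a sparse matrix as a list of (i, j, value) triplets,
    as in the second representation variant on the slide."""
    return [(i, j, M[i][j])
            for i in range(len(M))
            for j in range(len(M[0]))
            if M[i][j] != 0]

# A mostly-zero matrix: the classical representation stores rows*cols
# values; the triplet representation stores 3 values per non-zero entry.
M = [[0, 0, 5],
     [0, 0, 0],
     [7, 0, 0]]
triplets = to_triplets(M)
assert triplets == [(0, 2, 5), (2, 0, 7)]
assert len(triplets) * 3 == 6      # 6 stored values vs. 9 classically
```

For the slide's 100*100 matrix with 50 non-zero elements, the triplet form stores 150 values instead of 10000, at the cost of slower element access.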

Outline
- What is efficiency analysis?
- How can time efficiency be measured?
- Examples
- Best-case, worst-case and average-case analysis

How can time efficiency be measured?
A measure for time efficiency = an estimate of the running time.
To estimate the running time we must choose:
- a computation model
- a measuring unit

How can time efficiency be measured?
A computation model: the random access machine (RAM).
Characteristics (simplifying assumptions):
- All processing steps are executed sequentially (there is no parallelism in the execution of the algorithm)
- The time to execute a basic operation does not depend on the values of the operands (adding two small numbers costs the same as adding two large ones)
- The time to access data does not depend on their address (there is no difference between processing the first element of an array and processing the last element)

How can time efficiency be measured?
Measuring unit = the time needed to execute a basic operation.
Basic operations:
- assignments
- arithmetical operations
- comparisons
- logical operations
Running time = the number of basic operations executed.
Remark: the running time expresses the dependence of the number of executed operations on the input size.

Outline
- What is efficiency analysis?
- How can time efficiency be measured?
- Examples
- Best-case, worst-case and average-case analysis

Example 1
Preconditions: n >= 1
Postcondition: S = 1+2+...+n
Input size: n

Algorithm: Sum(n)
1: S := 0
2: i := 0
3: WHILE i < n DO
4:   i := i+1
5:   S := S+i
6: ENDWHILE
7: RETURN S

Costs table:
Operation  Cost  Repetitions
1          c1    1
2          c2    1
3          c3    n+1
4          c4    n
5          c5    n

Running time: T(n) = (c3+c4+c5)*n + (c1+c2+c3) = a*n + b

Example 1
Remarks:
- Considering that all basic operations have unit cost, one obtains T(n) = 3(n+1).
- The exact values of the constants in the expression of the running time are not very important; what matters is that the running time depends linearly on the input size.
- Since the algorithm is equivalent to:
    S := 0
    FOR i := 1, n DO
      S := S+i
    ENDFOR
  it is easy to see that the cost of updating the counting variable i is 2(n+1); the other (n+1) operations correspond to the operations involving S (initialization and modification).
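The count T(n) = 3(n+1) can be checked empirically. A Python sketch that mirrors the WHILE version and tallies each basic operation at unit cost, as assumed on the slide (function name is illustrative):

```python
def sum_with_count(n):
    """Sum 1..n while counting basic operations (assignments and
    comparisons), following the unit-cost RAM model from the slides."""
    ops = 0
    s = 0; ops += 1          # line 1: S := 0
    i = 0; ops += 1          # line 2: i := 0
    while True:
        ops += 1             # line 3: comparison i < n (tested n+1 times)
        if not (i < n):
            break
        i += 1; ops += 1     # line 4: i := i + 1
        s += i; ops += 1     # line 5: S := S + i
    return s, ops

for n in (1, 10, 100):
    s, ops = sum_with_count(n)
    assert s == n * (n + 1) // 2
    assert ops == 3 * (n + 1)   # matches T(n) = 3(n+1)
```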

Example 2
Preconditions: A is an m*n matrix, B is an n*p matrix
Postcondition: C = A*B
Input size: (m, n, p)

C[i,j] = A[i,1]*B[1,j] + A[i,2]*B[2,j] + ... + A[i,n]*B[n,j], for i = 1..m, j = 1..p

(The slide illustrates the product of A (m*n) with B (n*p), giving C (m*p).)

Example 2
Basic idea: for each i = 1..m and j = 1..p, compute the sum over k.

Algorithm: Product(A[1..m,1..n], B[1..n,1..p])
1: FOR i := 1, m DO
2:   FOR j := 1, p DO
3:     C[i,j] := 0
4:     FOR k := 1, n DO
5:       C[i,j] := C[i,j] + A[i,k]*B[k,j]
6:     ENDFOR
7:   ENDFOR
8: ENDFOR
9: RETURN C[1..m,1..p]

Costs table:
Op.  Cost  Repetitions  Total
1    2     m+1          2(m+1)
2    2     m(p+1)       2m(p+1)
3    1     mp           mp
4    2     mp(n+1)      2mp(n+1)
5    2     mpn          2mnp

T(m,n,p) = 4mnp + 5mp + 4m + 2

Example 2
Remark: there is no need to always do such a detailed analysis; it suffices to identify a dominant operation and count it.
Dominant operation: the most frequent or most expensive operation.

Algorithm: Product(A[1..m,1..n], B[1..n,1..p])
1: FOR i := 1, m DO
2:   FOR j := 1, p DO
3:     C[i,j] := 0
4:     FOR k := 1, n DO
5:       C[i,j] := C[i,j] + A[i,k]*B[k,j]
6:     ENDFOR
7:   ENDFOR
8: ENDFOR
9: RETURN C[1..m,1..p]

Analysis: counting only the dominant operation (the multiplication in line 5) gives T(m,n,p) = mnp.
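Counting only the dominant operation can be illustrated in Python (0-based indices; names are illustrative):

```python
def product_with_count(A, B):
    """Matrix product C = A*B, counting the dominant operation
    (the multiply-accumulate in the innermost loop)."""
    m, n, p = len(A), len(A[0]), len(B[0])
    assert len(B) == n, "inner dimensions must match"
    count = 0
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                count += 1          # one dominant operation
    return C, count

A = [[1, 2], [3, 4], [5, 6]]        # 3x2
B = [[1, 0, 2], [0, 1, 3]]          # 2x3
C, count = product_with_count(A, B)
assert count == 3 * 2 * 3           # T(m,n,p) = m*n*p dominant operations
```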

Example 3
Preconditions: x[1..n], n >= 1
Postcondition: m = min(x[1..n])
Input size: n

Algorithm: Minimum(x[1..n])
1: m := x[1]
2: FOR i := 2, n DO
3:   IF x[i] < m THEN
4:     m := x[i]
5:   ENDIF
6: ENDFOR
7: RETURN m

Costs table:
Op.  Cost  Rep.   Total
1    1     1      1
2    2     n      2n
3    1     n-1    n-1
4    1     t(n)   t(n)

T(n) = 3n + t(n)

The running time depends not only on n but also on the properties of the input data.

Example 3
When the running time also depends on the properties of the input data, we have to analyze at least two cases:
- Best case (x[1] is already the minimum, e.g. an increasingly sorted array): t(n) = 0 => T(n) = 3n
- Worst case (x[1] > x[2] > ... > x[n]): t(n) = n-1 => T(n) = 4n-1
Thus 3n <= T(n) <= 4n-1: both the lower and the upper bound depend linearly on the input size.

Algorithm: Minimum(x[1..n])
1: m := x[1]
2: FOR i := 2, n DO
3:   IF x[i] < m THEN
4:     m := x[i]
5:   ENDIF
6: ENDFOR
7: RETURN m

Counting only the dominant operation (the comparison x[i] < m): T(n) = n-1.
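A Python sketch that counts t(n), the number of times the assignment in line 4 executes, confirms the best- and worst-case values (function name is illustrative):

```python
def minimum_with_count(x):
    """Find min(x) while counting how many times the assignment
    m := x[i] in the IF branch executes (t(n) in the slides)."""
    m = x[0]
    t = 0
    for v in x[1:]:
        if v < m:
            m = v
            t += 1
    return m, t

n = 6
_, t_best = minimum_with_count(list(range(1, n + 1)))   # increasing array
_, t_worst = minimum_with_count(list(range(n, 0, -1)))  # strictly decreasing
assert t_best == 0          # best case:  T(n) = 3n
assert t_worst == n - 1     # worst case: T(n) = 4n - 1
```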

Example 4
Preconditions: x[1..n], n >= 1, v a value
Postcondition: the variable "found" contains the truth value of the statement "the value v is in the array x[1..n]"
Input size: n

Algorithm (sequential search): Search(x[1..n], v)
1: found := False
2: i := 1
3: WHILE (found = False) AND (i <= n) DO
4:   IF x[i] = v
5:     THEN found := True
6:     ELSE i := i+1
7:   ENDIF
8: ENDWHILE
9: RETURN found

Costs table:
Op.  Repetitions
1    1
2    1
3    t1(n)+1
4    t1(n)
5    t2(n)
6    t3(n)

Example 4
The running time depends on the properties of the array.
Case 1: the value v is in the array (let k be the first position of v):
  t1(n) = k, t2(n) = 1, t3(n) = k-1
Case 2: the value v is not in the array:
  t1(n) = n, t2(n) = 0, t3(n) = n

Example 4
Best case: x[1] = v
  t1(n) = 1, t2(n) = 1, t3(n) = 0 => T(n) = 6
Worst case: v is not in the array
  t1(n) = n, t2(n) = 0, t3(n) = n => T(n) = 3n+3
The lower and the upper bound: 6 <= T(n) <= 3(n+1)
The lower bound is constant; the upper bound depends linearly on n.

Example 4 (a second variant)
Search(x[1..n], v)
1: i := 1
2: WHILE x[i] <> v AND i < n DO
3:   i := i+1
4: ENDWHILE
5: IF x[i] = v THEN found := True
6:   ELSE found := False
7: ENDIF
8: RETURN found

Best case: T(n) = 4
Worst case: T(n) = 1 + n + (n-1) + 2 = 2n+2
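Both bounds of this variant can be checked with a Python sketch that counts operations the way the slides do (0-based indices; the compound WHILE condition is charged as a single unit-cost operation; function name is illustrative):

```python
def search_with_count(x, v):
    """Second sequential-search variant from the slide, counting basic
    operations at unit cost: each WHILE test, IF test and assignment
    counts 1; the compound WHILE condition counts as one operation."""
    ops = 0
    i = 0; ops += 1                    # i := 1
    while True:
        ops += 1                       # WHILE x[i] <> v AND i < n
        if not (x[i] != v and i < len(x) - 1):
            break
        i += 1; ops += 1               # i := i + 1
    ops += 1                           # IF x[i] = v
    found = (x[i] == v); ops += 1      # found := True / False
    return found, ops

x = [3, 1, 4, 1, 5]
n = len(x)
assert search_with_count(x, 3) == (True, 4)           # best case:  T(n) = 4
assert search_with_count(x, 9) == (False, 2 * n + 2)  # worst case: T(n) = 2n+2
```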

Example 4
For some problems, the best case and the worst case are exceptional cases; thus the running times in the best and worst cases do not give us enough information.
Another type of analysis: average-case analysis.
The aim of average-case analysis is to give us information about the behavior of the algorithm for typical (random) input data.

Outline
- What is efficiency analysis?
- How can time efficiency be measured?
- Examples
- Best-case, worst-case and average-case analysis

Best-case and worst-case analysis
Worst-case analysis:
- gives us the largest running time over all input data of size n (an upper bound on the running time)
- the upper bound of the running time is usually more important than the lower bound
Best-case analysis:
- gives us a lower bound on the running time
- it can help us identify inefficient algorithms (if an algorithm has a high cost even in the best case, it cannot be considered a viable solution)

Average-case analysis
This analysis is based on knowing the probability distribution of the input space, i.e. the occurrence probability of each instance of the input data (how frequently each instance appears).
The average running time is the mean value (in the statistical sense) of the running times corresponding to the different instances of the input data.

Average-case analysis
Hypotheses:
- suppose the input data can be grouped into classes such that for all input data in the same class the running time is the same
- there are m = M(n) such classes
- the probability that an input belongs to class k is Pk
- the running time of the algorithm for data in class k is Tk(n)
Then the average running time is:
  Ta(n) = P1*T1(n) + P2*T2(n) + ... + Pm*Tm(n)
Remark: if all classes have the same probability, then Ta(n) = (T1(n) + T2(n) + ... + Tm(n))/m.

Average-case analysis
Example: sequential search (dominant operation: the comparison x[i] = v)
Hypotheses concerning the probability distribution of the input space:
- the probability that the value v is in the array is p; if it is, it appears with the same probability on any position, so the probability that v is on position k is 1/n
- the probability that the value v is not in the array is 1-p

Ta(n) = p(1+2+...+n)/n + (1-p)n = p(n+1)/2 + (1-p)n = (1-p/2)n + p/2

If p = 0.5 one obtains Ta(n) = (3/4)n + 1/4.
The average running time of sequential search is, as in the worst case, linear in the input size.
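The closed form for Ta(n) can be checked exactly with Python's fractions module (function name is illustrative):

```python
from fractions import Fraction

def avg_comparisons(n, p):
    """Average number of x[i] = v comparisons in sequential search, under
    the slide's hypotheses: v is in the array with probability p, uniformly
    on positions 1..n (k comparisons if v is on position k); otherwise
    (probability 1-p) all n comparisons are made."""
    p = Fraction(p)
    in_array = sum(Fraction(k, n) for k in range(1, n + 1))  # (1+2+...+n)/n
    return p * in_array + (1 - p) * n

n = 100
assert avg_comparisons(n, Fraction(1, 2)) == Fraction(3, 4) * n + Fraction(1, 4)
assert avg_comparisons(n, 1) == Fraction(n + 1, 2)  # v always present
assert avg_comparisons(n, 0) == n                   # v never present
```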

Average-case analysis
Example: sequential search (flag variant)
Basic idea:
- the array is extended with a position n+1, on which v is placed (the "flag", or sentinel)
- the extended array is searched until the value v is found; it will be found at the latest on position n+1, in which case we can decide that the value is not in the initial array x[1..n]

  x[1] x[2] x[3] ... x[n] | v (flag)

Average-case analysis
Algorithm: Search_flag(x[1..n+1], v)
1: x[n+1] := v
2: i := 1
3: WHILE x[i] <> v DO
4:   i := i+1
5: ENDWHILE
6: RETURN i

Dominant operation: the comparison x[i] <> v.
Hypothesis: the probability that v is first found on position k is 1/(n+1), for k = 1..n+1.
Average running time: Ta(n) = (1+2+...+(n+1))/(n+1) = (n+2)/2
Remarks:
- by changing the hypothesis on the probability distribution of the input space, the value of the average running time changed (however, it is still linear)
- the average running time is NOT necessarily the arithmetic mean of the running times corresponding to the best and worst cases.
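A Python sketch of the flag variant, counting comparisons and averaging over the n+1 equally likely first positions of v (function name is illustrative):

```python
from fractions import Fraction

def search_flag(x, v):
    """Sentinel ('flag') variant of sequential search: v is appended at
    position n+1, so the loop needs no bounds check. Returns (position,
    number of x[i] <> v comparisons); position n+1 means 'not found'."""
    a = x + [v]                 # place the flag at position n+1
    i = 0                       # 0-based index; the slides use 1-based
    comps = 0
    while True:
        comps += 1              # one x[i] <> v comparison
        if a[i] == v:
            break
        i += 1
    return i + 1, comps         # report a 1-based position

n = 9
x = list(range(1, n + 1))
# If v first occurs at position k, exactly k comparisons are made;
# searching for n+1 (absent from x) stops at the flag with n+1 comparisons.
total = sum(search_flag(x, k)[1] for k in range(1, n + 2))
avg = Fraction(total, n + 1)
assert avg == Fraction(n + 2, 2)   # matches Ta(n) = (n+2)/2
```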

Summary: steps in estimating the running time
1. Identify the input size
2. Identify the dominant operation
3. Count how many times the dominant operation is executed
4. If this number depends on the properties of the input data, analyze:
   - the best case
   - the worst case
   - the average case

Next lecture will be ... also on the analysis of algorithm efficiency. More specifically, we will discuss:
- growth order
- asymptotic notations
- complexity classes