CSE 1342 Programming Concepts Algorithmic Analysis Using Big-O Part 1.


The Running Time of Programs
- Most problems can be solved by more than one algorithm. So, how do you choose the best solution?
- The best solution is usually chosen on the basis of efficiency:
  - Efficiency of time (speed of execution)
  - Efficiency of space (memory usage)
- For a program that is run infrequently or subject to frequent modification, algorithmic simplicity may take precedence over efficiency.

The Running Time of Programs
- An absolute measure of time (5.3 seconds, for example) is not a practical measure of efficiency because:
  - Execution time is a function of the amount of data the program manipulates and typically grows as the amount of data increases.
  - Different computers execute the same program (on the same data) at different speeds.
  - Speed can vary on the same computer depending on the choice of programming language and compiler.

The Running Time of Programs
- The solution is to remove all implementation considerations from our analysis and focus on the aspects of the algorithm that most critically affect execution time.
  - The most important aspect is usually the number of data elements (n) the program must manipulate.
  - Occasionally the magnitude of a single data element (rather than the number of data elements) is the most important aspect.

The 90/10 Rule
- The 90/10 rule states that, in general, a program spends 90% of its time executing the same 10% of its code.
  - This is because most programs rely heavily on repetition structures (loops and recursive calls).
- Because of the 90/10 rule, algorithmic analysis focuses on repetition structures.

Analysis of Summation Algorithms
Consider the following code segment that sums each row of an n-by-n array (version 1):

    grandTotal = 0;
    for (k = 0; k < n; k++) {
        sum[k] = 0;
        for (j = 0; j < n; j++) {
            sum[k] += a[k][j];
            grandTotal += a[k][j];
        }
    }

Requires 2n^2 additions (two per inner-loop iteration).

Analysis of Summation Algorithms
Consider the following code segment that sums each row of an n-by-n array (version 2):

    grandTotal = 0;
    for (k = 0; k < n; k++) {
        sum[k] = 0;
        for (j = 0; j < n; j++) {
            sum[k] += a[k][j];
        }
        grandTotal += sum[k];
    }

Requires n^2 + n additions (one per inner-loop iteration, plus one per row).

Analysis of Summation Algorithms
- When we compare the number of additions performed in versions 1 and 2, we find that 2n^2 > n^2 + n for all n > 1.
- Based on this analysis, the version 2 algorithm appears to be the faster. Although, as we shall see, "faster" may not have any real meaning in the real world of computation.

Analysis of Summation Algorithms
- Further analysis of the two summation algorithms:
  - Assume a 1000-by-1000 (n = 1000) array and a computer that can execute an addition instruction in 1 microsecond (one millionth of a second).
  - The version 1 algorithm (2n^2) would require 2(1000^2)/1,000,000 = 2 seconds to execute.
  - The version 2 algorithm (n^2 + n) would require (1000^2 + 1000)/1,000,000 = 1.001 seconds to execute.
  - From a user's real-time perspective the difference is insignificant.

Analysis of Summation Algorithms
- Now increase the size of n:
  - Assume a 100,000-by-100,000 (n = 100,000) array.
  - The version 1 algorithm (2n^2) would require 2(100,000^2)/1,000,000 = 20,000 seconds to execute (about 5.56 hours).
  - The version 2 algorithm (n^2 + n) would require (100,000^2 + 100,000)/1,000,000 = 10,000.1 seconds to execute (about 2.78 hours).
  - From a user's real-time perspective both jobs take a long time and would need to run in a batch environment.
- In terms of order of magnitude (Big-O), versions 1 and 2 have the same efficiency: O(n^2).

Big-O Analysis Overview
- O stands for "order of magnitude."
- Big-O analysis is independent of all implementation factors.
  - It is dependent (in most cases) on the number of data elements (n) the program must manipulate.
- Big-O analysis only has significance for large values of n.
  - For small values of n, Big-O analysis breaks down.
- Big-O analysis is built around the principle that the runtime behavior of an algorithm is dominated by its behavior in its loops (the 90/10 rule).

Definition of Big-O
- Let T(n) be a function that measures the running time of a program in some unknown unit of time.
- Let n represent the size of the input data set that the program manipulates, where n > 0.
- Let f(n) be some function defined on the size of the input data set, n.
- We say that "T(n) is O(f(n))" if there exist an integer n0 and a constant c > 0 such that for all integers n >= n0 we have T(n) <= c*f(n).
  - The pair n0 and c are witnesses to the fact that T(n) is O(f(n)).

Simplifying Big-O Expressions
- Big-O expressions are simplified by dropping constant factors and low-order terms.
- The total of all terms gives us the total running time of the program. For example, say that

      T(n) = O(f3(n) + f2(n) + f1(n))

  where f3(n) = 4n^3, f2(n) = 5n^2, f1(n) = 23, or, to restate T(n):

      T(n) = O(4n^3 + 5n^2 + 23)

- After stripping out the constants and low-order terms we are left with T(n) = O(n^3).

Simplifying Big-O Expressions
T(n) = f1(n) + f2(n) + f3(n) + ... + fk(n)
- In Big-O analysis, one of the terms in the T(n) expression is identified as the dominant term.
  - A dominant term is one that, for large values of n, becomes so large that it allows us to ignore the other terms in the expression.
- The problem of Big-O analysis can be reduced to one of finding the dominant term in an expression representing the number of operations required by an algorithm.
  - All other terms and constants are dropped from the expression.

Big-O Analysis Example 1

    for (k = 0; k < n/2; ++k) {
        for (j = 0; j < n*n; ++j) {
            statement(s)
        }
    }

- The outer loop executes n/2 times.
- The inner loop executes n^2 times per outer iteration.
- T(n) = (n/2)(n^2) = n^3/2 = 0.5n^3
- T(n) = O(n^3)

Big-O Analysis Example 2

    for (k = 0; k < n/2; ++k) {
        statement(s)
    }
    for (j = 0; j < n*n; ++j) {
        statement(s)
    }

- The first loop executes n/2 times.
- The second loop executes n^2 times.
- T(n) = (n/2) + n^2 = 0.5n + n^2
- n^2 is the dominant term.
- T(n) = O(n^2)

Big-O Analysis Example 3

    while (n > 1) {
        statement(s)
        n = n / 2;
    }

- The values of n follow a logarithmic progression.
  - Assuming n has the initial value 64, the progression will be 64, 32, 16, 8, 4, 2.
- The loop executes log2(n) times (6 times for n = 64).
- O(log2 n) = O(log n)

Big-O Comparisons
- Common growth rates, from slowest- to fastest-growing: O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n)

Analysis Involving if/else

    if (condition)
        loop1;    // assume O(f(n)) for loop1
    else
        loop2;    // assume O(g(n)) for loop2

- The order of magnitude for the entire if/else statement is O(max(f(n), g(n))).

An Example Involving if/else

    if (a[1][1] == 0)
        for (i = 0; i < n; ++i)
            for (j = 0; j < n; ++j)
                a[i][j] = 0;      // f(n) = n^2
    else
        for (i = 0; i < n; ++i)
            a[i][i] = 1;          // g(n) = n

- The order of magnitude for the entire if/else statement is O(max(f(n), g(n))) = O(max(n^2, n)) = O(n^2).