Algorithm Analysis and Big Oh Notation


Algorithm Analysis and Big Oh Notation
Courtesy of Prof. Ajay Gupta (with updates by Dr. Leszek T. Lilien)
CS 1120, Department of Computer Science, Western Michigan University

Measuring the Efficiency of Algorithms
Analysis of algorithms is an area of computer science that provides tools for determining the efficiency of different methods of solving a problem (e.g., for the sorting problem: which sorting method is more efficient?).
It compares the efficiency of different methods of solution and is concerned with significant differences.
Example: let n be the number of items to be sorted. Is the running time proportional to n or to n²? The difference is big: for n = 100 it amounts to roughly a 100-fold difference, for n = 1000 roughly a 1000-fold difference, and the gap keeps growing as n grows.

How To Do Algorithm Comparison?
Approach 1: Implement the algorithms in C#, run the programs, measure their performance, and compare. Many issues affect the results: How are the algorithms coded? What computer do you run them on? What data do the programs use?
Approach 2: Analyze the algorithms independently of their implementations. How? To measure and compare the execution time of algorithms, count the number of basic operations each algorithm performs and summarize the count.
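For Approach 1, a minimal timing sketch (written in C++ to match the pseudocode later in these slides; the array size, random seed, and the use of std::sort are illustrative choices, not from the slides):

#include <algorithm>
#include <chrono>
#include <iostream>
#include <random>
#include <vector>

int main() {
    std::vector<int> data(100000);
    std::mt19937 gen(42);                        // fixed seed: same input data every run
    std::uniform_int_distribution<int> dist(0, 1000000);
    for (int& x : data) x = dist(gen);

    auto start = std::chrono::steady_clock::now();
    std::sort(data.begin(), data.end());         // the algorithm under test
    auto stop = std::chrono::steady_clock::now();

    std::chrono::duration<double, std::milli> elapsed = stop - start;
    std::cout << "Elapsed: " << elapsed.count() << " ms\n";
}

As the slide notes, such measurements depend on the coding, the machine, and the input data, which is what motivates Approach 2.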

The Execution Time of Algorithms
Count the number of basic operations of an algorithm: read, write, compare, assign, jump, arithmetic operations (increment, decrement, add, subtract, multiply, divide), open, close, logical operations (not/complement, AND, OR, XOR), …

The Execution Time of Algorithms
Counting an algorithm's operations. Example: calculating the sum of the elements of an array of size n:

int sum = item[0];    // 1 assignment
int j = 1;            // 1 assignment
while (j < n) {       // n comparisons
    sum += item[j];   // n-1 additions/assignments
    ++j;              // n-1 increments
}

Total: 2 + n + 2(n-1) = 3n operations.
Notice: problem size = n = number of elements in the array; this problem of size n is solved with 3n operations.
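To check the count empirically, a small runnable C++ sketch (the array contents and the ops counter are illustrative additions) that executes the same loop and tallies each basic operation:

#include <iostream>

int main() {
    int item[] = {4, 7, 1, 9, 3};
    int n = 5;
    long ops = 0;

    int sum = item[0]; ++ops;       // 1 assignment
    int j = 1;         ++ops;       // 1 assignment
    while (++ops, j < n) {          // n comparisons (the final, failing test is counted too)
        sum += item[j]; ++ops;      // n-1 additions/assignments
        ++j;            ++ops;      // n-1 increments
    }

    std::cout << "sum = " << sum << ", ops = " << ops
              << " (3n = " << 3 * n << ")\n";   // prints ops = 15, 3n = 15
}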

Algorithm Growth Rates
Measure an algorithm's time requirement as a function of the problem size (e.g., problem size = number of elements in an array).
Example: Algorithm A requires n²/5 time units; Algorithm B requires 5n time units.
Algorithm efficiency is a concern for large problems only: for small values of n, n²/5 and 5n are not "that much" different, but imagine how big the difference is for n > 1,000,000.
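To make the gap concrete, a small C++ sketch (the sample sizes are arbitrary) that evaluates both growth-rate functions; the two curves cross at n = 25, after which n²/5 pulls away quickly:

#include <cstdio>

int main() {
    long long sizes[] = {10, 25, 100, 1000, 1000000};
    for (long long n : sizes) {
        long long a = n * n / 5;   // Algorithm A: n^2 / 5 time units
        long long b = 5 * n;       // Algorithm B: 5n time units
        std::printf("n = %-8lld  A = %-14lld  B = %lld\n", n, a, b);
    }
}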

Common Growth-Rate Functions - I
[Diagram comparing common growth-rate functions; not reproduced in this transcript.]

Common Growth-Rate Functions - II
Differences among the growth-rate functions grow with n (see them growing on the diagram on the previous slide).
The bigger n is, the bigger the differences; that is why algorithm efficiency is a "concern for large problems only".

Similar table from the textbook: Fig. 25.13 | Number of comparisons for common Big O notations. [Table not reproduced in this transcript.]

Big-Oh Notation
Algorithm A is order f(n), denoted O(f(n)), if there exist constants k and n₀ such that A requires at most k·f(n) time units to solve a problem of size n ≥ n₀.
Examples:
n²/5 is O(n²): k = 1/5, n₀ = 0
5n is O(n): k = 5, n₀ = 0
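The same definition written as a formula (the symbol T_A(n) for the running time of algorithm A is added here, not taken from the slides):

T_A(n) = O\!\left(f(n)\right)
\iff
\exists\, k > 0,\ \exists\, n_0 \ge 0 :\quad
T_A(n) \le k \cdot f(n) \ \ \text{for all } n \ge n_0 .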

More Examples
How about n² - 3n + 10? It is O(n²) if there exist k and n₀ such that k·n² ≥ n² - 3n + 10 for all n ≥ n₀.
Looking at a plot of the two functions (the figure is not reproduced here), we see that 3n² ≥ n² - 3n + 10 for all n ≥ 2, so k = 3 and n₀ = 2.
More (k, n₀) pairs could be found, but finding just one is enough to prove that n² - 3n + 10 is O(n²).
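Since the plot is not reproduced, the inequality can also be checked algebraically (a short verification added here for completeness):

3n^2 \ge n^2 - 3n + 10
\;\Longleftrightarrow\;
2n^2 + 3n - 10 \ge 0 ,
\qquad
\text{and for } n \ge 2:\quad
2n^2 + 3n - 10 \;\ge\; 2(2)^2 + 3(2) - 10 \;=\; 4 \;>\; 0 ,

because 2n² + 3n - 10 is increasing for n ≥ 0.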

Properties of Big-Oh
Ignore low-order terms: e.g., O(n³ + 4n² + 3n) = O(n³).
Ignore multiplicative constants: e.g., O(5n³) = O(n³).
Combine growth-rate functions: O(f(n)) + O(g(n)) = O(f(n) + g(n)); e.g., O(n²) + O(n·log₂n) = O(n² + n·log₂n), and then O(n² + n·log₂n) = O(n²).

Worst-case vs. Average-case Analyses
An algorithm can require different times to solve different problems of the same size.
Worst-case analysis: find the maximum number of operations the algorithm can execute over all situations. It is easier to calculate and more common.
Average-case analysis: enumerate all possible situations, find the time for each of the m possible cases, total them, and divide by m. It is harder to compute but yields a more realistic picture of expected behavior.
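As a concrete illustration (linear search is not analyzed in these slides; this is an added sketch), the worst-case and average-case comparison counts for searching an unsorted array:

#include <iostream>
#include <vector>

// Linear search; also reports how many comparisons it made.
int linearSearch(const std::vector<int>& a, int key, int& comparisons) {
    comparisons = 0;
    for (int i = 0; i < (int)a.size(); ++i) {
        ++comparisons;
        if (a[i] == key) return i;
    }
    return -1;
}

int main() {
    std::vector<int> a = {36, 24, 10, 6, 12};
    int comps;
    linearSearch(a, 12, comps);   // key in the last slot: worst case for a successful search
    std::cout << "worst case: " << comps << " comparisons\n";   // n = 5
    linearSearch(a, 36, comps);   // key in the first slot: best case
    std::cout << "best case:  " << comps << " comparisons\n";   // 1
    // Averaging over the n equally likely positions of a key that is present gives
    // (1 + 2 + ... + n) / n = (n + 1) / 2 comparisons on average.
}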

Bigger Example: Analysis of Selection Sort
Selection sort divides the array into two parts: already sorted and not yet sorted. On each pass it finds the smallest of the unsorted elements and swaps it into its correct place, thereby increasing the number of sorted elements by one.
Starting array:
values:  [0]=36  [1]=24  [2]=10  [3]=6  [4]=12

Selection Sort: Pass One
values:  [0]=36  [1]=24  [2]=10  [3]=6  [4]=12   (all UNSORTED)
To find the smallest in the unsorted part, start with indexMin = 0:
comp. 1: is values[1] = 24 < values[indexMin] = 36?  yes => indexMin = 1
comp. 2: is values[2] = 10 < values[indexMin] = 24?  yes => indexMin = 2
comp. 3: is values[3] = 6  < values[indexMin] = 10?  yes => indexMin = 3
comp. 4: is values[4] = 12 < values[indexMin] = 6?   NO
Thus indexMin = 3; swap values[0] = 36 with values[3] = 6 (see next slide).

Selection Sort: End of Pass One
values:  [0]=6 (SORTED)  |  [1]=24  [2]=10  [3]=36  [4]=12 (UNSORTED)

Selection Sort: Pass Two
values:  [0]=6 (SORTED)  |  [1]=24  [2]=10  [3]=36  [4]=12 (UNSORTED)
To find the smallest in the unsorted part, start with indexMin = 1:
comp. 1: is values[2] = 10 < values[indexMin] = 24?  yes => indexMin = 2
comp. 2: is values[3] = 36 < values[indexMin] = 10?  NO
comp. 3: is values[4] = 12 < values[indexMin] = 10?  NO
Thus indexMin = 2; swap values[1] = 24 with values[2] = 10 (see next slide).

Selection Sort: End of Pass Two
values:  [0]=6  [1]=10 (SORTED)  |  [2]=24  [3]=36  [4]=12 (UNSORTED)

Selection Sort: Pass Three
values:  [0]=6  [1]=10 (SORTED)  |  [2]=24  [3]=36  [4]=12 (UNSORTED)
To find the smallest in the unsorted part, start with indexMin = 2:
comp. 1: is values[3] = 36 < values[indexMin] = 24?  NO
comp. 2: is values[4] = 12 < values[indexMin] = 24?  yes => indexMin = 4
Thus indexMin = 4; swap values[2] = 24 with values[4] = 12 (see next slide).

Selection Sort: End of Pass Three
values:  [0]=6  [1]=10  [2]=12 (SORTED)  |  [3]=36  [4]=24 (UNSORTED)

Selection Sort: Pass Four
values:  [0]=6  [1]=10  [2]=12 (SORTED)  |  [3]=36  [4]=24 (UNSORTED)
To find the smallest in the unsorted part, start with indexMin = 3:
comp. 1: is values[4] = 24 < values[indexMin] = 36?  yes => indexMin = 4
Thus indexMin = 4; swap values[3] = 36 with values[4] = 24 (see next slide).

Selection Sort: End of Pass Four
values:  [0]=6  [1]=10  [2]=12  [3]=24  [4]=36   (all SORTED)

Selection Sort: How Many Comparisons?
values:  [0]=6  [1]=10  [2]=12  [3]=24  [4]=36
4 comparisons starting with indexMin = 0
3 comparisons starting with indexMin = 1
2 comparisons starting with indexMin = 2
1 comparison  starting with indexMin = 3
0 comparisons starting with indexMin = 4
= 4 + 3 + 2 + 1 + 0 = 10 comparisons in total. In addition, we have at most 4 swaps.

For Selection Sort in General
Above, the array contained 5 elements: 4 + 3 + 2 + 1 + 0 comparisons and at most 4 swaps were needed.
Generalization: when the array contains N elements, the number of comparisons is (N-1) + (N-2) + … + 2 + 1 + 0, and the number of swaps is at most N-1.
Let's define: Sum = (N-1) + (N-2) + … + 2 + 1 + 0.

Calculating the Number of Comparisons
Write the sum twice, once forwards and once backwards, and add the two copies term by term:
    Sum  = (N-1) + (N-2) + … +   2   +   1
  + Sum  =   1   +   2   + … + (N-2) + (N-1)
  2·Sum  =   N   +   N   + … +   N   +   N    = N·(N-1)
Since 2·Sum = N·(N-1), we get Sum = 0.5·N² - 0.5·N.
This means that we have 0.5·N² - 0.5·N comparisons.
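The same pairing argument in summation notation:

2\,\text{Sum} \;=\; \sum_{i=1}^{N-1} i \;+\; \sum_{i=1}^{N-1} (N-i)
\;=\; \sum_{i=1}^{N-1} N \;=\; N(N-1)
\quad\Longrightarrow\quad
\text{Sum} \;=\; \frac{N(N-1)}{2} \;=\; \tfrac{1}{2}N^2 - \tfrac{1}{2}N .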

And the Big-Oh for Selection Sort is…
0.5·N² - 0.5·N comparisons = O(N²) comparisons
N-1 swaps = O(N) swaps
This means that the complexity of selection sort is O(N²), because O(N²) + O(N) = O(N²).

Pseudocode for Selection Sort

void SelectionSort(int values[], int numValues)
// Post: Sorts array values[0 .. numValues-1]
//       into ascending order by key value.
{
    int endIndex = numValues - 1;
    for (int current = 0; current < endIndex; current++)
        Swap(values, current, MinIndex(values, current, endIndex));
}

Pseudocode for Selection Sort (cont'd)

int MinIndex(int values[], int start, int end)
// Post: Function value = index of the smallest value
//       in values[start] .. values[end].
{
    int indexOfMin = start;
    for (int index = start + 1; index <= end; index++)
        if (values[index] < values[indexOfMin])
            indexOfMin = index;
    return indexOfMin;
}
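The slides do not show the Swap helper or a driver program; a minimal self-contained C++ version (Swap and main are added here for completeness) that sorts the example array traced above:

#include <iostream>

// Swap helper assumed by the pseudocode above (not shown in the slides).
void Swap(int values[], int i, int j) {
    int tmp = values[i];
    values[i] = values[j];
    values[j] = tmp;
}

// Index of the smallest value in values[start] .. values[end].
int MinIndex(int values[], int start, int end) {
    int indexOfMin = start;
    for (int index = start + 1; index <= end; index++)
        if (values[index] < values[indexOfMin])
            indexOfMin = index;
    return indexOfMin;
}

// Sorts values[0 .. numValues-1] into ascending order.
void SelectionSort(int values[], int numValues) {
    int endIndex = numValues - 1;
    for (int current = 0; current < endIndex; current++)
        Swap(values, current, MinIndex(values, current, endIndex));
}

int main() {
    int values[] = {36, 24, 10, 6, 12};    // the array traced in the slides
    SelectionSort(values, 5);
    for (int v : values) std::cout << v << ' ';   // prints: 6 10 12 24 36
    std::cout << '\n';
}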

Algorithm Analysis Addendum
The End of the Algorithm Analysis Addendum.