CSC263 Tutorial 1: Big-O, Ω, ϴ
Trevor Brown (tabrown@cs.utoronto.ca)


Big-O and Big-Ω

What do O(x) and Ω(x) mean? Many students think Big-O is “worst case” and Big-Ω is “best case.” They are WRONG! Both O(x) and Ω(x) describe the running time on the worst possible input.

What the statements mean:
- The algorithm is O(x): it takes at most c*x steps to run on the worst possible input.
- The algorithm is Ω(x): it takes at least c*x steps to run on the worst possible input.

How do we show an algorithm is O(x) or Ω(x)?
- To show it is O(x): show that, for every input, the algorithm takes at most c*x steps.
- To show it is Ω(x): show that there is an input that makes the algorithm take at least c*x steps.
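For reference, here are the formal definitions behind the informal "c*x steps" wording, with T(x) denoting the worst-case running time on inputs of size x (the constants c and x_0 come from the standard definitions; they are not on the slides):

    T(x) \in O(f(x))      \iff \exists c > 0, x_0 \text{ such that } T(x) \le c \cdot f(x) \text{ for all } x \ge x_0
    T(x) \in \Omega(f(x)) \iff \exists c > 0, x_0 \text{ such that } T(x) \ge c \cdot f(x) \text{ for all } x \ge x_0
    T(x) \in \Theta(f(x)) \iff T(x) \in O(f(x)) \text{ and } T(x) \in \Omega(f(x))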

Analyzing algorithms using O, Ω, ϴ

First, we analyze some easy algorithms. We can easily find upper and lower bounds on the running times of these algorithms by using basic arithmetic.

Example 1

    for i = 1..100
        print *

What do you think about this function? Big-O of what? Big-Omega of what? Any guesses?

Running time: 100*c, which is ϴ(1). Why? The loop performs exactly 100 iterations no matter what, so the running time does not grow with the input at all: it is O(1) and Ω(1), hence ϴ(1).
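A minimal Python sketch of this loop (the step counter stands in for "print *" and is only for illustration):

    def example1():
        steps = 0
        for i in range(100):   # always exactly 100 iterations
            steps += 1         # stands in for "print *"
        return steps           # always 100: constant, i.e., Theta(1)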

Example 2

    for i = 1..x
        print *

What do you think about this function? Big-O of what? Big-Omega of what? Any guesses?

Running time: x*c, which is O(x) and Ω(x), hence ϴ(x). In fact, this is also Ω(1) and O(x^2), but these are very weak statements. So what does this computation look like? If we draw a circle for each time the "print *" line is executed, we get a single row of x circles.
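The same loop in Python, again with an illustrative counter:

    def example2(x):
        steps = 0
        for i in range(x):  # x iterations, constant work each
            steps += 1
        return steps        # exactly x: Theta(x)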

Example 3

    for i = 1..x
        for j = 1..x
            print *

What do you think about this function? Big-O of what? Big-Omega of what? Any guesses?

Both loops always perform exactly x iterations, so "print *" executes x*x times: O(x^2) and Ω(x^2), hence ϴ(x^2). What does this computation look like? Let's again draw a circle for each time the "print *" line is executed. In the first iteration of the outer loop, "print *" is executed x times, so we get a full row. In the second iteration, we get another row, and so on, until we have an x-by-x square of circles.
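In Python, with an assertion checking the exact count (the test value of x is arbitrary):

    def example3(x):
        steps = 0
        for i in range(x):
            for j in range(x):
                steps += 1
        return steps

    assert example3(50) == 50 * 50   # exactly x*x steps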

Example 4a

    for i = 1..x
        for j = 1..i
            print *

What do you think about this function? Big-O of what? Big-Omega of what? Any guesses?

Big-O: i is always ≤ x, so j always iterates up to at most x, so this is at most x*x steps, which is O(x^2).

Big-Ω: When i=1, the loop over j performs "print *" once. When i=2, the loop over j performs "print *" twice. And so on. So, "print *" is performed 1+2+3+…+x times. Easy summation formula: 1+2+3+…+x = x(x+1)/2 = x^2/2 + x/2 ≥ (1/2)x^2, which is Ω(x^2). Therefore, this function is ϴ(x^2).
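A quick Python check of the summation argument (the function name and the test value of x are arbitrary):

    def example4a(x):
        steps = 0
        for i in range(1, x + 1):
            for j in range(1, i + 1):   # j = 1..i
                steps += 1
        return steps

    x = 50
    assert example4a(x) == x * (x + 1) // 2   # matches the summation formula
    assert example4a(x) >= x * x // 2         # so it is Omega(x^2)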

Example 4b

Same function as Example 4a, but now we bound it from below without the summation formula. The original loop nest does at least as much work as a smaller one:

    for i = 1..x              for i = x/2..x
        for j = 1..i     >=       for j = 1..x/2
            print *                   print *

Big-Ω: Useful trick: consider only the iterations of the first loop for which i ≥ x/2. In these iterations, the second loop iterates from j=1 to at least j=x/2. Therefore, the number of steps performed in these iterations is at least (x/2)*(x/2) = (1/4)x^2, which is Ω(x^2). The time to perform ALL iterations is even more than this, so, of course, the whole algorithm must take Ω(x^2) time. Therefore, this function is ϴ(x^2).

Let's look at this function from another angle. Of course, since it's so simple, there is an easy summation formula, which we saw on the last slide. However, the technique I'm about to show you applies to more complicated examples that we don't know summation formulae for. (I'll show you a more complex example on the next slide.) So, let's think about what this computation looks like. In the first iteration of the first loop, i=1, so j goes from 1 to 1, and "print *" is executed only once. In the second iteration of the first loop, i=2, so j goes from 1 to 2, and "print *" is executed twice. In the third iteration, "print *" is executed three times, and so on. So, we get a triangle of circles. The part of the triangle with i ≥ x/2 and j ≤ x/2 is an x/2-by-x/2 square, and that square alone accounts for the (1/4)x^2 steps counted above.
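A small Python check of the trick: count only the iterations with i ≥ x/2 and j ≤ x/2, and compare against the full count (the function names and the test value of x are illustrative):

    def full_count(x):
        # the actual loop nest: i = 1..x, j = 1..i
        return sum(1 for i in range(1, x + 1) for j in range(1, i + 1))

    def restricted_count(x):
        # keep only i >= x/2, and only j up to x/2 (a subset of the real work)
        return sum(1 for i in range(x // 2, x + 1) for j in range(1, x // 2 + 1))

    x = 40
    assert restricted_count(x) <= full_count(x)   # we only dropped iterations
    assert restricted_count(x) >= (x // 2) ** 2   # at least (x/2)*(x/2) = x^2/4 steps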

Example 5

Now, we look at a more complicated version of the example we just saw:

    for i = 1..x
        for j = 1..i
            for k = 1..j
                print *

What do you think about this function? Big-O of what? Big-Omega of what? Any guesses?

Big-O: i, j, k ≤ x, so we know the total number of steps is at most x*x*x, which is O(x^3).

Big-Ω: We apply the trick twice. Consider only the iterations of the first loop from x/2 up to x. For these values of i, the second loop always iterates up to at least x/2. Now consider only the iterations of the second loop from x/4 up to x/2. For these values of j, the third loop always iterates up to at least x/4. For these values of i, j and k, the function performs "print *" at least (x/2)*(x/4)*(x/4) = x^3/32 times, which is Ω(x^3). Therefore, this function is ϴ(x^3).
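Checking both bounds in Python (the choice x = 32 is arbitrary):

    def example5_count(x):
        return sum(1 for i in range(1, x + 1)
                     for j in range(1, i + 1)
                     for k in range(1, j + 1))

    x = 32
    assert example5_count(x) <= x ** 3        # the O(x^3) upper bound
    assert example5_count(x) >= x ** 3 // 32  # the Omega(x^3) lower bound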

Analyzing algorithms using O, Ω, ϴ

Now, we are going to analyze algorithms whose running times depend on their inputs. For some inputs, they terminate very quickly. For other inputs, they can be slow.

Example 1

    LinearSearch(A[1..n], key)
        for i = 1..n
            if A[i] = key then return true
        return false

Might take 1 iteration… might take many…

What do you think about this function? Big-O of what? Big-Omega of what?

Big-O: at most n iterations, with constant work per iteration, so O(n).

Big-Ω: can you find a bad input that makes the algorithm take Ω(n) time? Note that you don't need to find the worst input: any input that is bad enough to give Ω(n) will do. For instance, any array where key is in position n, n/2, or n/7 will yield Ω(n).
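The same search in runnable Python, driven with one bad input (the names and values are illustrative):

    def linear_search(A, key):
        for x in A:          # at most n iterations, constant work each
            if x == key:
                return True
        return False

    # A bad input: the key is absent, so all n elements are examined.
    n = 1000
    A = list(range(n))
    assert linear_search(A, -1) is False   # scans the whole array: Omega(n)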

Example 2

    int A[n]
    for i = 1..n
        binarySearch(A, i)

Remember: binarySearch is O(lg n), so this function is O(n lg n).

How about Ω? Maybe it's Ω(n lg n), but it's hard to tell. It is not enough to know that binary search is Ω(lg n), because the worst-case input might make only one invocation of binary search take c*lg n steps (and the rest might finish in 1 step). We would need to find a particular input that causes the algorithm to take a total of c(n lg n) steps. In fact, binarySearch(A, i) takes c*lg n steps when i is not in A, so this function takes c(n lg n) steps if A[1..n] doesn't contain any number in {1,2,…,n}.
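A Python sketch of this analysis; binary_search is a standard textbook implementation (written here for illustration) instrumented to count comparisons:

    import math

    def binary_search(A, key):
        lo, hi, comps = 0, len(A) - 1, 0
        while lo <= hi:
            mid = (lo + hi) // 2
            comps += 1
            if A[mid] == key:
                return True, comps
            elif A[mid] < key:
                lo = mid + 1
            else:
                hi = mid - 1
        return False, comps

    n = 1024
    A = [-v for v in range(n, 0, -1)]   # sorted, contains no number in {1..n}
    total = sum(binary_search(A, i)[1] for i in range(1, n + 1))
    # Every call misses, so each one takes about lg n comparisons.
    assert total >= n * int(math.log2(n))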

Example 3

Your boss tells you that your group needs to develop a sorting algorithm for the company's web server that runs in O(n lg n) time. If you can't prove that it can sort any input of length n in O(n lg n) time, it's no good to him. He doesn't want to lose orders because of a slow server. Your colleague tells you that he's been working on this piece of code, which he believes should fit the bill. He says he thinks it can sort any input of length n in O(n lg n) time, but he doesn't know how to prove this.

Example 3 continued

    WeirdSort(A[1..n])
        last := n
        sorted := false
        while not sorted do
            sorted := true
            for j := 1 to last-1 do
                if A[j] > A[j+1] then
                    swap A[j] and A[j+1]
                    sorted := false
            last := last-1

First, spend a small amount of time understanding the control flow through the loops and what the inner loop is doing. Note that sorted is set to false whenever a swap occurs, so the while loop repeats until a full pass makes no swaps.

Loop invariant for the WHILE loop: at the end of iteration i, A[last+1..n] contains the i largest elements of A in sorted order.

Show T(n) is O(n^2): For every n, and every input of size n, the while loop executes at most n-1 times, and each iteration of the while loop takes at most cn time, for a total of cn(n-1)+d time.

Show T(n) is Ω(n): There are fast inputs. For example, if the array is already sorted, the while loop executes only once, taking about cn steps. But this is a weak statement.

Show T(n) is Ω(n^2): Identify an input BAD ENOUGH to get Ω(n^2) time, e.g., an array in reverse sorted order. In each iteration of the while loop, the largest item in A[1..last] "bubbles" up to position last (so it lands in A[last+1..n] once last is decremented); since on a reverse-sorted array the largest item starts at A[1], this takes time c*last. Summing over the iterations gives c(n-1) + c(n-2) + … + c = cn(n-1)/2, which is Ω(n^2).

So WeirdSort is ϴ(n^2) in the worst case, and no proof that it sorts every input in O(n lg n) time is possible: it does not meet the boss's requirement.
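WeirdSort is bubble sort with an early-exit flag. Here is a runnable Python version, instrumented with a comparison counter (the counter and the test values are only for illustration), which exhibits both the fast sorted input and the slow reverse-sorted input:

    def weird_sort(A):
        comps = 0
        last = len(A)
        is_sorted = False
        while not is_sorted:
            is_sorted = True
            for j in range(last - 1):          # j = 1..last-1 in the pseudocode
                comps += 1
                if A[j] > A[j + 1]:
                    A[j], A[j + 1] = A[j + 1], A[j]
                    is_sorted = False
            last -= 1
        return comps

    n = 100
    assert weird_sort(list(range(n))) == n - 1                     # sorted: one pass, Omega(n)
    assert weird_sort(list(range(n, 0, -1))) == n * (n - 1) // 2   # reverse sorted: Omega(n^2)

The reverse-sorted count matches the sum c(n-1) + c(n-2) + … + c from the analysis above, which is why no O(n lg n) bound can hold.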