Analysis of algorithms and BIG-O


Analysis of Algorithms and Big-O
Tuesday, January 31, 2017
CS16: Introduction to Algorithms & Data Structures

Outline
- Running time and theoretical analysis
- Big-O notation
- Big-Ω and Big-Θ
- Analyzing Seamcarve runtime
- Dynamic programming
- Fibonacci sequence

What is an “Efficient” Algorithm?
Possible efficiency measures:
- A small amount of time (on a stopwatch)?
- Low memory usage?
- Low power consumption?
- Low network usage?
Analysis of algorithms helps quantify this.

Measuring Running Time
Experimentally:
- Implement the algorithm
- Run the program with inputs of varying size
- Measure the running times
- Plot the results
Why not?
- What if you can't implement the algorithm?
- Which inputs do you choose?
- Results depend on hardware, OS, ...

Measuring Running Time
- Running time grows with the input size, so focus on large inputs
- Running time varies with the input, so consider the worst-case input

Measuring Running Time
Why worst-case inputs?
- Easier to analyze
- Practical: what if an autopilot were slower than predicted for some untested input?
Why large inputs?
- Practical: we usually care what happens on large data

Theoretical Analysis
- Based on a high-level description of the algorithm, not on an implementation
- Takes into account all possible inputs (worst-case or average-case)
- Quantifies running time independently of the hardware or software environment

Theoretical Analysis
- Associate a cost with each elementary operation
- Find the number of operations as a function of the input size

Elementary Operations
Algorithmic "time" is measured in elementary operations:
- Math (+, -, *, /, max, min, log, sin, cos, abs, ...)
- Comparisons (==, >, <=, ...)
- Variable assignment
- Variable increment or decrement
- Array allocation
- Creating a new object
- Function calls and value returns (careful: an object's constructor and the functions it calls may perform elementary operations too!)
In practice, these operations take different amounts of time. In algorithm analysis, we assume each operation takes 1 unit of time.

Example: Constant Running Time

    function first(array):
        // Input: an array
        // Output: the first element
        return array[0]    // index 0 and return: 2 ops

How many operations are performed in this function if the list has ten elements? If it has 100,000 elements? Always 2 operations are performed; the count does not depend on the input size.

Example: Linear Running Time

    function argmax(array):
        // Input: an array
        // Output: the index of the maximum value
        index = 0                          // assignment: 1 op
        for i in [1, array.length):        // 1 op per loop
            if array[i] > array[index]:    // 3 ops per loop
                index = i                  // 1 op per loop, sometimes
        return index                       // 1 op

How many operations if the list has ten elements? 100,000 elements? The count varies in proportion to the size of the input list: about 5n + 2. We stay in the for loop longer and longer as the input list grows; if we were to plot it, the runtime would increase linearly.
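The pseudocode above translates directly into runnable Python (a sketch; the operation counts in the comments follow the slide's tally, not Python's actual costs):

```python
# Runnable version of the argmax pseudocode: a single pass over the list,
# so the number of operations grows linearly (roughly 5n + 2).
def argmax(array):
    index = 0                        # assignment: 1 op
    for i in range(1, len(array)):   # 1 op per iteration
        if array[i] > array[index]:  # comparison + indexing: 3 ops
            index = i                # 1 op, only sometimes
    return index                     # 1 op

print(argmax([3, 1, 4, 1, 5, 9, 2, 6]))  # 5 (the index of the maximum, 9)
```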

Example: Quadratic Running Time

    function possible_products(array):
        // Input: an array
        // Output: a list of all possible products
        //         between any two elements in the list
        products = []                                 // make an empty list: 1 op
        for i in [0, array.length):                   // 1 op per loop
            for j in [0, array.length):               // 1 op per loop per loop
                products.append(array[i] * array[j])  // 4 ops per loop per loop
        return products                               // 1 op

About 5n^2 + n + 2 operations (it's okay to approximate!). A plot of the number of operations would grow quadratically: each element must be multiplied with every other element. The linear algorithm on the previous slide had 1 for loop; this one has 2 nested for loops. What if there were 3 nested loops?
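Again, the pseudocode runs almost unchanged as Python (a sketch; for n elements the two nested loops produce n * n products):

```python
# Runnable version of possible_products: two nested loops over the same
# list, so the work grows quadratically with the input size.
def possible_products(array):
    products = []
    for i in range(len(array)):
        for j in range(len(array)):
            products.append(array[i] * array[j])
    return products

print(possible_products([1, 2, 3]))  # [1, 2, 3, 2, 4, 6, 3, 6, 9]
```

Note that the output list itself has n^2 entries, so even just writing it down forces quadratic work.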

Summarizing Function Growth
For large inputs, the growth rate is less affected by:
- constant factors
- lower-order terms
Examples:
- 10^5 n^2 + 10^8 n and n^2 grow with the same slope, despite differing constants and lower-order terms
- 10n + 10^5 and n grow with the same slope as well
When studying growth rates, we only care about what happens for very large inputs (as n approaches infinity). [The slide's graph plots T(n) against n for 10^5 n^2 + 10^8 n, n^2, 10n + 10^5, and n, with log scale on both axes; the slope of each line corresponds to the growth rate of its function.]
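A quick numeric check of the first example (an illustration, not from the slides): as n grows, f(n) = 10^5 n^2 + 10^8 n is dominated by its n^2 term, so the ratio f(n) / n^2 approaches the constant factor 10^5.

```python
# The lower-order term 1e8*n shrinks relative to 1e5*n**2 as n grows,
# so the ratio settles at the leading constant, 100000.
def f(n):
    return 10**5 * n**2 + 10**8 * n

for n in [10**3, 10**6, 10**9]:
    print(n, f(n) / n**2)  # the ratio approaches 100000
```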

Big-O Notation
Given any two functions f(n) and g(n), we say that f(n) is O(g(n)) if there exist positive constants c and n0 such that f(n) <= c*g(n) for all n >= n0.
Example: 2n + 10 is O(n)
- Pick c = 3 and n0 = 10
- Then 2n + 10 <= 3n for all n >= 10
- Check at n = 10: 2(10) + 10 <= 3(10), i.e. 30 <= 30
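The definition can be checked numerically for this example (a sketch; the constants c = 3 and n0 = 10 come from the slide, and we test a finite sample range rather than all n):

```python
# Witness that 2n + 10 is O(n): with c = 3 and n0 = 10, the inequality
# 2n + 10 <= 3n holds for every sampled n >= n0.
c, n0 = 3, 10
ok = all(2 * n + 10 <= c * n for n in range(n0, 10_000))
print(ok)  # True

# Below n0 the inequality can fail, which is why the definition
# only requires it for n >= n0.
print(2 * 9 + 10 <= c * 9)  # False
```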

Big-O Notation (continued)
Example: n^2 is not O(n)
- Suppose n^2 <= c*n; dividing by n gives n <= c
- This inequality cannot be satisfied for all large n, because c must be a constant: for any n > c the inequality is false

Big-O and Growth Rate
Big-O gives an upper bound on the growth of a function: an algorithm is O(g(n)) if its growth rate is no more than the growth rate of g(n).
- n^2 is not O(n)
- but n is O(n^2)
- and n^2 is O(n^3)
Why? Because Big-O is an upper bound!

Summary of Big-O Rules
- If f(n) is a polynomial of degree d, then f(n) is O(n^d). In other words: forget about lower-order terms, and forget about constant factors.
- Use the smallest possible degree. It is true that 2n is O(n^50), but that's not helpful; instead, say it's O(n): discard the constant factor and use the smallest possible degree.

Constants in Algorithm Analysis
Find the number of elementary operations executed as a function of the input size:
- first: T(n) = 2
- argmax: T(n) = 5n + 2
- possible_products: T(n) = 5n^2 + n + 2
In the future we skip counting operations and replace the constants with c's, since they are irrelevant as n grows:
- first: T(n) = c
- argmax: T(n) = c0*n + c1
- possible_products: T(n) = c0*n^2 + c1*n + c2

Big-O in Algorithm Analysis
It is easy to express T(n) in big-O: drop the constants and lower-order terms.
- first is O(1)
- argmax is O(n)
- possible_products is O(n^2)
By convention, T(n) = c is written as O(1).

Big-Omega (Ω)
Recall: f(n) is O(g(n)) if f(n) <= c*g(n) for some constant as n grows. Big-O means f(n) grows no faster than g(n); g(n) acts as an upper bound on f(n)'s growth rate.
What if we want to express a lower bound? We say f(n) is Ω(g(n)) if f(n) >= c*g(n): f(n) grows no slower than g(n). This is Big-Omega.

Big-Theta (Θ)
What about an upper and a lower bound together? We say f(n) is Θ(g(n)) if f(n) is both O(g(n)) and Ω(g(n)): f(n) grows the same as g(n) (a tight bound). This is Big-Theta.

How Fast is the Seamcarve Algorithm?
How many seams are there in a c × r image? At each row, a seam can go left, right, or down: it chooses 1 of 3 directions at each of the r rows, so there are 3^r possible seams from each starting pixel. Since there are c starting pixels, the total number of seams is c × 3^r. For a square image with n total pixels, c = r = √n, so there are √n × 3^√n possible seams.

Seamcarve
Algorithms that try every possible solution are known as exhaustive, or brute-force, algorithms. The exhaustive approach here is to consider all seams and choose the least important one. What would be the big-O running time? With √n × 3^√n seams to consider, it is exponential, and not good.

Seamcarve
What is the runtime of the algorithm from last class? (Remember: constants don't affect big-O runtime.)
The algorithm:
- Iterate over all pixels from bottom to top, populating the costs and dirs arrays
- Create the seam by choosing the minimum value in the top row and tracing downward
How many times do we evaluate each pixel? A constant number of times. So the algorithm is linear: O(n), where n is the number of pixels. (Hint: we also could have looked at the pseudocode and counted the number of nested loops!)

Seamcarve: Dynamic Programming
From an exponential algorithm to a linear algorithm?! We avoided recomputing information we had already calculated. Many seams cross paths, and we don't need to recompute sums of importances we've already calculated; that's the purpose of the additional costs array. Storing computed information to avoid recomputing it later is called dynamic programming.

Fibonacci: Recursive
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
The Fibonacci sequence is defined by the recurrence relation:
    F(0) = 0, F(1) = 1
    F(n) = F(n-1) + F(n-2)
This lends itself very well to a recursive function for finding the nth Fibonacci number:

    function fib(n):
        if n == 0: return 0
        if n == 1: return 1
        return fib(n-1) + fib(n-2)

Recursive Fibonacci, Visualized
The recursive function for calculating the Fibonacci sequence can be illustrated using a tree, where each row is a level of recursion. [The slide's diagram illustrates the calls made for fib(4).]

Fibonacci: Recursive
To calculate fib(4), how many times does fib() get called? The call tree branches into fib(3) and fib(2), then their subcalls, and fib(1) alone gets recomputed 3 times! At each level of recursion, the algorithm makes roughly twice as many recursive calls as at the last, so for fib(n) the number of recursive calls is approximately 2^n. The algorithm is O(2^n).
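The redundant work is easy to observe directly (an illustration, not from the slides: we tally how often the naive recursive fib is invoked for each argument):

```python
from collections import Counter

calls = Counter()  # how many times fib is invoked with each argument

def fib(n):
    calls[n] += 1
    if n < 2:          # base cases: fib(0) = 0, fib(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)

fib(4)
print(dict(calls))  # fib(1) is evaluated 3 times; 9 calls in total
```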

Fibonacci: Dynamic Programming
Instead of recomputing the same Fibonacci numbers over and over, we compute each one once and store it for later. We need a table to keep track of the intermediary values:

    function dynamicFib(n):
        fibs = []    // make an array of size n+1
        fibs[0] = 0
        fibs[1] = 1
        for i from 2 to n:
            fibs[i] = fibs[i-1] + fibs[i-2]
        return fibs[n]
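A runnable Python version of dynamicFib (a sketch; the pseudocode assumes a pre-sized array, so here we allocate the table explicitly and also handle n < 2, which the pseudocode glosses over):

```python
# Bottom-up Fibonacci: each value is computed once and stored in a table,
# so the whole computation is a single linear pass.
def dynamic_fib(n):
    if n < 2:                 # base cases: fib(0) = 0, fib(1) = 1
        return n
    fibs = [0] * (n + 1)      # table of intermediary values
    fibs[1] = 1
    for i in range(2, n + 1):
        fibs[i] = fibs[i - 1] + fibs[i - 2]
    return fibs[n]

print(dynamic_fib(10))  # 55
```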

Fibonacci: Dynamic Programming (2)
What's the runtime of dynamicFib()? It calculates the Fibonacci numbers from 0 to n, performing O(1) operations for each one, so the runtime is clearly O(n). Again, we reduced the runtime of an algorithm from exponential to linear with dynamic programming!

Readings
- Dasgupta Section 0.2, pp. 12-15: goes through this Fibonacci example (although without mentioning dynamic programming). This section is easily readable now.
- Dasgupta Section 0.3, pp. 15-17: describes big-O notation.
- Dasgupta Chapter 6, pp. 169-199: goes into dynamic programming. This chapter builds significantly on earlier ones and will be challenging to read now, but we'll see much of it this semester.

Announcements 1/31/17
- Homework 1 is due this Friday at 3pm!
- Thursday is the in-class Python lab! If you are able to work on your own laptop, go to Salomon DECI (here!); otherwise, go to the SunLab. Make sure you can log into your CS account before attending the lab; see a SunLab consultant if you have any account issues!
- Sections started yesterday; if you are not signed up, you could be in trouble!
- Any students taking the class but not registered: see the HTAs after class!