On Time Versus Input Size. Great Theoretical Ideas In Computer Science. S. Rudich, V. Adamchik. CS 15-251, Spring 2006, Lecture 17, March 21, 2006, Carnegie Mellon University.


On Time Versus Input Size. Great Theoretical Ideas In Computer Science. S. Rudich, V. Adamchik. CS 15-251, Spring 2006, Lecture 17, March 21, 2006, Carnegie Mellon University. [Title slide graph: time vs. # of bits]

Welcome to the 9th week. 1. Time complexity of algorithms. 2. Space complexity of songs.

How to add 2 n-bit numbers. [Diagram: two n-bit numbers written one above the other, ready to be added column by column]

[Diagram: the addition carried out column by column, with carries] “Grade school addition”

Time complexity of grade school addition. T(n) = amount of time grade school addition uses to add two n-bit numbers. What do you mean by “time”?

Our Goal. We want to define the “time” taken by the method of grade school addition without depending on implementation details.

Hold on! But you agree that T(n) does depend on the implementation! A given algorithm will take different amounts of time on the same inputs depending on such factors as: processor speed, instruction set, disk speed, and brand of compiler.

These objections are serious, but they are not insurmountable. There is a very nice sense in which we can analyze grade school addition without having to worry about implementation details. Here is how it works...

Grade school addition. On any reasonable computer M, adding 3 bits and writing down 2 bits (one column of the addition) can be done in constant time c. Total time to add two n-bit numbers using grade school addition: c·n [c time for each of n columns].

Grade school addition. On another computer M’, the time to perform the same one-column step may be a different constant c’. Total time to add two n-bit numbers using grade school addition: c’·n [c’ time for each of n columns].

Different machines result in different slopes, but time grows linearly as input size increases. [Graph: time vs. # of bits in the numbers; Machine M: c·n, Machine M’: c’·n]

We measure “time” as the number of elementary “steps”, which may be defined in any way we like, provided each such “step” takes constant time on a reasonable implementation. Constant: independent of the length n of the input.

Thus we arrive at an implementation-independent insight: Grade School Addition is a linear time algorithm.
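To make the linear bound concrete, here is a minimal Java sketch of the column-by-column method (the bit-array representation, little-endian bit order, and the name add are assumptions made for this illustration, not part of the lecture):

    // Grade school addition of two n-bit numbers, one column at a time.
    // Bits are little-endian: a[0] is the least significant bit.
    static int[] add(int[] a, int[] b) {
        int n = a.length;               // assume both inputs have n bits
        int[] sum = new int[n + 1];     // the sum may carry into one extra bit
        int carry = 0;
        for (int i = 0; i < n; i++) {   // n columns, constant work per column
            int s = a[i] + b[i] + carry;
            sum[i] = s % 2;             // write down the low bit...
            carry = s / 2;              // ...and carry the high bit
        }
        sum[n] = carry;
        return sum;                     // total time: c*n, linear in n
    }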

“Keep it simple, stupid!”

The process of abstracting away details and determining the rate of resource usage in terms of the problem size is one of the fundamental ideas in computer science.

How to multiply 2 n-bit numbers. [Diagram: grade school multiplication producing n shifted rows of n partial-product bits each, i.e. n² symbols]

The total time is bounded by c·n² (abstracting away the implementation details).
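A matching Java sketch for grade school multiplication (same assumed bit-array representation as the addition sketch above; an illustration, not the lecture's code). Each 1-bit of the multiplier contributes one shifted copy of the multiplicand, folded in with a linear-time column addition, giving n·n column steps overall:

    // Grade school multiplication of two n-bit numbers.
    static int[] multiply(int[] a, int[] b) {
        int n = a.length;                 // assume both inputs have n bits
        int[] product = new int[2 * n];   // an n-bit * n-bit product fits in 2n bits
        for (int i = 0; i < n; i++) {     // one partial product per bit of b
            if (b[i] == 1) {
                int carry = 0;
                for (int j = 0; j < n; j++) {   // add a, shifted left by i places
                    int s = product[i + j] + a[j] + carry;
                    product[i + j] = s % 2;
                    carry = s / 2;
                }
                product[i + n] = carry;
            }
        }
        return product;                   // n partial products * n columns = c*n^2
    }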

Grade School Addition: linear time. Grade School Multiplication: quadratic time. No matter how dramatic the difference in the constants, the quadratic curve will eventually dominate the linear curve. [Graph: time vs. # of bits in the numbers]

Time vs. Input Size. For any algorithm, define n = the input size and T(n) = the amount of time used on inputs of size n. We will ask: what is the growth rate of T(n)?

Useful notation to discuss growth rates. For any monotonic functions f, g from the positive integers to the positive integers, we say f(n) = O(g(n)) if g(n) eventually dominates f(n). [Formally: there exists a constant c such that for all sufficiently large n: f(n) ≤ c·g(n)]

More useful notation: Ω. For any monotonic functions f, g from the positive integers to the positive integers, we say f(n) = Ω(g(n)) if f(n) eventually dominates g(n). [Formally: there exists a constant c > 0 such that for all sufficiently large n: f(n) ≥ c·g(n)]

Yet more useful notation: Θ. For any monotonic functions f, g from the positive integers to the positive integers, we say f(n) = Θ(g(n)) if f(n) = O(g(n)) and f(n) = Ω(g(n)).
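Unpacking the two conditions into a single formal statement, in the style of the "Formally:" lines on the O and Ω slides:

    f(n) = \Theta(g(n)) \iff \exists\, c_1, c_2 > 0 \text{ such that for all sufficiently large } n:\quad c_1 \cdot g(n) \le f(n) \le c_2 \cdot g(n)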

[Graph: time vs. # of bits in numbers, with f running between two straight lines] f = Θ(n) means that f can be sandwiched between two lines from some point on.

Quickies. n = O(n²)? YES. n = O(√n)? NO. n² = Ω(n log n)? YES. 3n² + 4n + π = Ω(n²)? YES. 3n² + 4n + π = Θ(n²)? YES. n² log n = Θ(n²)? NO.

n² log n = Θ(n²) is two statements in one! n² log n = O(n²)? NO. n² log n = Ω(n²)? YES.
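One way to see the NO (a short argument, not on the original slide): the ratio of the two functions is unbounded, so no single constant c can work:

    \frac{n^2 \log n}{n^2} = \log n \to \infty, \quad\text{so for any fixed } c,\ n^2 \log n > c \cdot n^2 \text{ once } \log n > c.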

Names For Some Growth Rates. Linear Time: T(n) = O(n). Quadratic Time: T(n) = O(n²). Cubic Time: T(n) = O(n³). Polynomial Time: for some constant k, T(n) = O(n^k). Example: T(n) = 13n⁵.

Large Growth Rates. Exponential Time: for some constant k, T(n) = O(k^n). Example: T(n) = n·2^n. Logarithmic Time: T(n) = O(log n). Example: T(n) = 15·log₂(n).

Now, something different…

Complexity of Songs. Suppose we want to sing a song of length n. Since n can be large, we want to memorize songs which require only a small amount of brain space. Let S(n) be the space complexity of a song.

Complexity of Songs. The amount of space S(n) can be measured in either characters or words. What is the asymptotic relation between the # of characters and the # of words?

Complexity of Songs. How long is the longest English word?

Not the longest one, but the funniest: “Supercalifragilisticexpialidocious” (34 letters).

Complexity of Songs. The amount of space S(n) can be measured in either characters or words. What is the asymptotic relation between the # of characters and the # of words? # of characters = Θ(# of words), since every word is between 1 and a constant number of characters long.

Complexity of Songs. S(n) is the space complexity of a song. What are the upper and lower bounds for S(n)?

Complexity of Songs. S(n) is the space complexity of a song. What are the upper and lower bounds for S(n)? S(n) = O(n) and S(n) = Ω(1).

Complexity of songs with a refrain. Let S be a song with m verses of length V and a refrain of length R. The refrain is sung first, last, and between verses. What is the space complexity of S?

Complexity of songs with a refrain. Let S be a song with m verses of length V and a refrain of length R. The song length: n = R + (V+R)·m. The space: s = R + V·m. Eliminating m gives s = V·n/(V+R) + const. Therefore, S(n) = O(n).
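The elimination step, spelled out (just the algebra behind the line above):

    m = \frac{n - R}{V + R}, \qquad s = R + V \cdot \frac{n - R}{V + R} = \frac{V}{V + R}\, n + \frac{R^2}{V + R}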

100 Bottles of Beer. n bottles of beer on the wall, n bottles of beer. You take one down and pass it around. n−1 bottles of beer on the wall. What is the space complexity of S?

100 Bottles of Beer. n bottles of beer on the wall, n bottles of beer. You take one down and pass it around. n−1 bottles of beer on the wall. What is the space complexity of S? We have to remember one verse (a template) and the current value of n. S = Θ(1) + O(log n).
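A tiny Java illustration of that counting argument (the method name and output format are made up for this sketch): the program stores only a constant-size verse template plus the counter n, which takes O(log n) bits.

    // One fixed template plus one counter reproduces all n verses
    // (pluralization ignored for brevity).
    static void sing(int n) {
        for (int k = n; k >= 1; k--) {
            System.out.printf("%d bottles of beer on the wall, %d bottles of beer.%n", k, k);
            System.out.println("You take one down and pass it around.");
            System.out.printf("%d bottles of beer on the wall.%n", k - 1);
        }
    }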

The k Days of Christmas. On the first day of Christmas, my true love sent to me a partridge in a pear tree.

The k Days of Christmas. On the k-th day of Christmas, my true love sent to me gift 1, …, gift k. What is the space complexity of this song?

The k Days of Christmas. What is the space complexity of this song? The song length: n = template + k·gift₁ + (k−1)·gift₂ + … + 1·gift_k. Let all gifts be the same length G. Then n = Θ(1) + G·(1+2+…+k) = O(k²). The space: S = Θ(1) + k·G = O(k) = O(√n).
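The arithmetic behind both bounds, using the triangular-number identity 1 + 2 + … + k = k(k+1)/2:

    n = \Theta(1) + G \sum_{i=1}^{k} i = \Theta(1) + G \cdot \frac{k(k+1)}{2} = \Theta(k^2), \qquad S = \Theta(1) + k \cdot G = \Theta(k) = \Theta(\sqrt{n})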

Do there exist arbitrarily long songs of space complexity O(1)?

Time complexities: the worst-case time complexity, the best-case time complexity, the average time complexity, the amortized time complexity, and the randomized time complexity.

The input size. In sorting and searching (array) algorithms, the input size is the number of items. In graph algorithms, the input size is given by V + E. In number algorithms, the input size is the number of bits.

Primality

    boolean prime(int n) {
        int k = 2;
        while (k <= Math.sqrt(n)) {   // about sqrt(n) passes
            if (n % k == 0) return false;
            k++;
        }
        return true;
    }

What is the complexity of this algorithm?

Primality. The number of passes through the loop is about √n. Therefore, T(n) = O(√n). But what is n?

The input size. For a given algorithm, the input size is defined as the number of characters it takes to write (or encode) the input.

Primality. The number of passes through the loop is about √n. If we use base-10 encoding, it takes input_size = O(log₁₀ n) characters (digits) to encode n. Therefore, T = √n ≈ 10^(input_size/2). The time complexity is not polynomial in the input size!
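In symbols, the slide's point is:

    \text{input\_size} = \Theta(\log_{10} n) \implies T = \Theta(\sqrt{n}) = \Theta\!\left(10^{\,\text{input\_size}/2}\right)

so the running time grows exponentially in the length of the input, even though it looks like a modest √n.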

Fibonacci Numbers

    int fib(int n) {
        int[] f = new int[n + 2];
        f[0] = 0;
        f[1] = 1;
        for (int k = 2; k <= n; k++)
            f[k] = f[k-1] + f[k-2];
        return f[n];
    }

What is the complexity of this algorithm?

Fibonacci Numbers

    public static int fib(int n) {
        int[] f = new int[n + 2];
        f[0] = 0;
        f[1] = 1;
        for (int k = 2; k <= n; k++)
            f[k] = f[k-1] + f[k-2];
        return f[n];
    }

The loop performs Θ(n) additions. But input_size = log n, so n = 2^(input_size): the algorithm is exponential in terms of the input size.

0-1 Knapsack problem. A thief breaks into a jewelry store carrying a knapsack. Each item in the store has a value and a weight. The knapsack will break if the total weight exceeds W. The thief’s dilemma is to maximize the total value of the items he takes without breaking the knapsack. What is the time complexity using the dynamic programming approach?

0-1 Knapsack problem. We create a table of size n·W, where n is the number of items and W is the maximum weight allowed: T = Θ(n·W). But the input size of W is log W, therefore T = Θ(n·2^(input size of W)). If you make W one bit longer, the amount of work doubles.
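A minimal Java sketch of that n·W table (the standard dynamic program for 0-1 knapsack; the names w, v, and knapsack are assumptions for this illustration):

    // best[j] = maximum value achievable with knapsack capacity j.
    // Filling the table touches Theta(n*W) cells: polynomial in the VALUE
    // of W, but exponential in the number of bits needed to write W down.
    static int knapsack(int[] w, int[] v, int W) {
        int n = w.length;
        int[] best = new int[W + 1];
        for (int i = 0; i < n; i++)            // n items
            for (int j = W; j >= w[i]; j--)    // W capacities (descending: each item taken at most once)
                best[j] = Math.max(best[j], best[j - w[i]] + v[i]);
        return best[W];
    }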

Ok, so… How much time does it take to square the number n using grade school multiplication?

Grade School Multiplication: quadratic time. Since the number n is written with log n bits, squaring it takes c·(log n)² time. [Graph: time vs. # of bits in numbers]

Some Big Ones. Doubly Exponential Time means that for some constant k, T(n) = 2^(2^(k·n)). Triply Exponential: T(n) = 2^(2^(2^(k·n))). And so forth.

Exponential Times. 2STACK(0) = 1; 2STACK(n) = 2^2STACK(n−1). So 2STACK(1) = 2, 2STACK(2) = 4, 2STACK(3) = 16, 2STACK(4) = 65536, and 2STACK(5) ≥ # of atoms in the universe. In general, 2STACK(n) is a “tower of n 2’s”: 2^2^···^2.

And the inverse of 2STACK: log*.
2STACK(0) = 1; log*(1) = 0.
2STACK(1) = 2; log*(2) = 1.
2STACK(2) = 4; log*(4) = 2.
2STACK(3) = 16; log*(16) = 3.
2STACK(4) = 65536; log*(65536) = 4.
2STACK(5) ≥ # of atoms in the universe; log*(# of atoms) = 5.
log*(n) = the number of times you have to apply the log function to n to make it ≤ 1.
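A direct Java transcription of that definition (the method name is made up; integer floor(log₂ n) is used to avoid floating-point rounding surprises):

    // log*(n): how many times must log2 be applied before the value is <= 1?
    static int logStar(long n) {
        int count = 0;
        while (n > 1) {
            n = 63 - Long.numberOfLeadingZeros(n);  // one application of floor(log2)
            count++;
        }
        return count;
    }

For example, logStar(65536) returns 4 and logStar(2) returns 1, matching the table above.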

So an algorithm that can be shown to run in O(n·log* n) time is linear time for all practical purposes!

Ackermann’s Function. A(0, n) = n + 1 for n ≥ 0. A(m, 0) = A(m − 1, 1) for m ≥ 1. A(m, n) = A(m − 1, A(m, n − 1)) for m, n ≥ 1. A(4, 2) > # of particles in the universe. A(5, 2) can’t be written out in this universe.
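The three cases translate directly into Java (a naive recursive sketch, only feasible for tiny arguments such as m ≤ 3):

    // Ackermann's function, straight from the definition above.
    // Grows so fast that ack(4, 2) could never be computed this way.
    static long ack(long m, long n) {
        if (m == 0) return n + 1;
        if (n == 0) return ack(m - 1, 1);
        return ack(m - 1, ack(m, n - 1));
    }

For example, ack(2, 3) returns 9 and ack(3, 3) returns 61.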

Inverse Ackermann function. A(0, n) = n + 1 for n ≥ 0. A(m, 0) = A(m − 1, 1) for m ≥ 1. A(m, n) = A(m − 1, A(m, n − 1)) for m, n ≥ 1. Define A’(k) = A(k, k). The inverse Ackermann function α(n) is the inverse of A’. Practically speaking: n × α(n) ≤ 4n.

Inverse Ackermann function. It grows even more slowly than log* n. The amortized time per Union-Find operation is O(α(V, E)); finding an MST takes O(E·α(V, E)).

Time complexity of grade school addition. [Diagram: column addition of two n-bit numbers] T(n) = the amount of time grade school addition uses to add two n-bit numbers. We saw that T(n) was linear: T(n) = Θ(n).

Time complexity of grade school multiplication. [Diagram: n² partial-product bits] T(n) = the amount of time grade school multiplication uses to multiply two n-bit numbers. We saw that T(n) was quadratic: T(n) = Θ(n²).

Next Class. Will Bonzo be able to add two numbers faster than Θ(n)? Can Odette ever multiply two numbers faster than Θ(n²)? Tune in on Thursday, same time, same place…