
On Time Versus Input Size Great Theoretical Ideas In Computer Science S. Rudich, V. Adamchik CS 15-251, Spring 2006 Lecture 17, March 21, 2006, Carnegie Mellon University

Welcome to the 9th week 1. Time complexity of algorithms 2. Space complexity of songs

How to add 2 n-bit numbers. [Animation: the two n-bit numbers are written one above the other; working right to left, each column's pair of bits is added, the low bit of the result is written down, and the carry moves to the next column.] “Grade school addition”

Time complexity of grade school addition T(n) = amount of time grade school addition uses to add two n-bit numbers. What do you mean by “time”?

Our Goal We want to define “time” taken by the method of grade school addition without depending on the implementation details.

Hold on! But you agree that T(n) does depend on the implementation! A given algorithm will take different amounts of time on the same inputs depending on such factors as: processor speed, instruction set, disk speed, brand of compiler.

The objection continues: therefore, we can only speak of the time taken by any particular implementation, as opposed to the time taken by the method in the abstract.

These objections are serious, but they are not insurmountable. There is a very nice sense in which we can analyze grade school addition without having to worry about implementation details. Here is how it works...

Grade school addition On any reasonable computer M, adding 3 bits and writing down 2 bits (one column step) can be done in constant time c. Total time to add two n-bit numbers using grade school addition: c·n [c time for each of n columns]

Grade school addition On another computer M′, the time to perform a column step may be some other constant c′. Total time to add two n-bit numbers using grade school addition: c′·n [c′ time for each of n columns]

Different machines result in different slopes, but time grows linearly as input size increases. [Graph: time vs. # of bits in the numbers; Machine M: cn, Machine M′: c′n]

Thus we arrive at an implementation independent insight: Grade School Addition is a linear time algorithm.
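To make the c·n count concrete, here is a minimal sketch of grade school addition on bit arrays (the least-significant-bit-first representation and the method name add are our choices, not the slides'). The loop does a constant amount of work per column, and there are n columns.

// Sketch: add two n-bit numbers given as bit arrays, least significant bit first.
static int[] add(int[] a, int[] b) {      // assumes a.length == b.length == n
    int n = a.length;
    int[] sum = new int[n + 1];
    int carry = 0;
    for (int i = 0; i < n; i++) {         // one pass per column: n passes
        int s = a[i] + b[i] + carry;      // add 3 bits...
        sum[i] = s % 2;                   // ...write down the low bit...
        carry = s / 2;                    // ...and carry the high bit
    }
    sum[n] = carry;
    return sum;                           // constant work per column => c·n total
}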

We can define away the details of the world that we do not wish to currently study, in order to recognize the similarities between seemingly different things…

The process of abstracting away details and determining the rate of resource usage in terms of the problem size is one of the fundamental ideas in computer science.

You can measure “time” as the number of elementary “steps” defined in any other way, provided each such “step” takes constant time in a reasonable implementation. Constant: independent of the length n of the input.

How to multiply 2 n-bit numbers. [Diagram: the n shifted partial products form an n × n parallelogram of symbols, n² entries in all.]

The total time is bounded by c·n² (abstracting away the implementation details).
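A matching sketch of the grade school method (again bit arrays, least significant bit first; the method name multiply is ours). The nested loops mirror the n × n grid: n rows of partial products, each of width n, giving the c·n² bound.

// Sketch: grade school multiplication of two n-bit numbers (bit arrays, LSB first).
static int[] multiply(int[] a, int[] b) {
    int n = a.length;
    int[] prod = new int[2 * n];          // the product has at most 2n bits
    for (int i = 0; i < n; i++) {         // n rows of partial products
        int carry = 0;
        for (int j = 0; j < n; j++) {     // n columns per row => n² bit products
            int s = prod[i + j] + a[j] * b[i] + carry;
            prod[i + j] = s % 2;
            carry = s / 2;
        }
        prod[i + n] += carry;
    }
    return prod;
}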

Grade School Addition: Linear time. Grade School Multiplication: Quadratic time. No matter how dramatic the difference in the constants, the quadratic curve will eventually dominate the linear curve. [Graph: time vs. # of bits in the numbers]

Time vs. Input Size For any algorithm, define n = the input size and T(n) = the amount of time used on inputs of size n. We will ask: what is the growth rate of T(n)?

Useful notation to discuss growth rates For any monotonic functions f, g from the positive integers to the positive integers, we say f(n) = O( g(n) ) if g(n) eventually dominates f(n) [Formally: there exists a constant c such that for all sufficiently large n: f(n) ≤ c*g(n) ]

More useful notation: Ω For any monotonic functions f, g from the positive integers to the positive integers, we say f(n) = Ω( g(n) ) if: f(n) eventually dominates g(n) [Formally: there exists a constant c such that for all sufficiently large n: f(n) ≥ c*g(n) ]

Yet more useful notation: Θ For any monotonic functions f, g from the positive integers to the positive integers, we say f(n) = Θ( g(n) ) if: f(n) = O( g(n) ) and f(n) = Ω( g(n) )

[Graph: time vs. # of bits in the numbers] f = Θ(n) means that f can be sandwiched between two lines from some point on.

Quickies n = O(n²)? n = O(√n)? n² = Ω(n log n)? 3n² + 4n + π = Ω(n²)? 3n² + 4n + π = Θ(n²)? n² log n = Θ(n²)?

n² log n = Θ(n²)? Two statements in one! n² log n = O(n²)? NO n² log n = Ω(n²)? YES
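As a worked instance of the sandwich (our choice of constants; any witnesses to the definitions would do): for all n ≥ 1, 3n² ≤ 3n² + 4n + π ≤ (3 + 4 + π)n² ≤ 11n². So 3n² + 4n + π = Ω(n²) with c = 3, and 3n² + 4n + π = O(n²) with c = 11, hence Θ(n²).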

Names For Some Growth Rates Linear Time: T(n) = O(n) Quadratic Time: T(n) = O(n²) Cubic Time: T(n) = O(n³) Polynomial Time: for some constant k, T(n) = O(n^k). Example: T(n) = 13n⁵

Large Growth Rates Exponential Time: for some constant k, T(n) = O(k^n) Example: T(n) = n·2^n = O(3^n) Almost Exponential Time: for some constant k, T(n) = 2^(n^(1/k)) (2 to the k-th root of n) Example: T(n) = 2^√n

Small Growth Rates Logarithmic Time: T(n) = O(log n) Example: T(n) = 15 log₂(n) Polylogarithmic Time: for some constant k, T(n) = O(log^k n) Note: these kinds of algorithms can't possibly read all of their inputs.
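A standard illustration (our example, not from the slides): binary search runs in O(log n) time because each probe discards half of the remaining range, and consequently it reads only O(log n) of its n inputs.

// Sketch: binary search makes O(log n) probes into a sorted array.
static int binarySearch(int[] a, int target) {
    int lo = 0, hi = a.length - 1;
    while (lo <= hi) {                    // the range halves on every pass
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == target) return mid;
        if (a[mid] < target) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;                            // not found
}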

Complexity of Songs Suppose we want to sing a song of length n. Since n can be large, we want to memorize songs which require only a small amount of brain space. Let S(n) be the space complexity of a song.

Complexity of Songs The amount of space S(n) can be measured in either characters or words. What is the asymptotic relation between the # of characters and the # of words?

Complexity of Songs S(n) is the space complexity of a song. What are the upper and lower bounds for S(n)?

Complexity of Songs with a refrain Let S be a song with m verses of length V and a refrain of length R. The refrain is sung first, last, and between verses. What is the space complexity of S?

100 Bottles of Beer n bottles of beer on the wall, n bottles of beer. You take one down and pass it around. n-1 bottles of beer on the wall. What is the space complexity of S?

The k Days of Christmas On the k-th day of Christmas, my true love sent to me gift 1, …, gift k. What is the space complexity of this song?

Time complexities The worst time complexity The best time complexity The average time complexity The amortized time complexity The randomized time complexity

The input size In sorting and searching (array) algorithms, the input size is the number of items. In graph algorithms, the input size is given by V + E (vertices plus edges). In number algorithms, the input size is the number of bits.

Primality
boolean prime(int n) {
    if (n < 2) return false;                  // 0 and 1 are not prime
    for (int k = 2; k <= Math.sqrt(n); k++)   // trial divisors 2 .. √n
        if (n % k == 0) return false;
    return true;
}
What is the complexity of this algorithm?

Primality The number of passes through the loop is √n. Therefore, T(n) = O(√n). But what is n?

For a given algorithm, the input size is defined as the number of characters it takes to write (or encode) the input. The input size

Primality The number of passes through the loop is √n. If we use base-10 encoding, it takes input_size = O(log₁₀ n) characters (digits) to encode n, so n ≈ 10^input_size. Therefore T(n) = √n ≈ 10^(input_size / 2). The time complexity is not polynomial!

Fibonacci Numbers
public static int fib(int n) {
    int[] f = new int[n + 2];        // n + 2 slots so f[1] exists even when n == 0
    f[0] = 0; f[1] = 1;
    for (int k = 2; k <= n; k++)
        f[k] = f[k - 1] + f[k - 2];  // one addition per pass: n - 1 passes
    return f[n];
}
What is the complexity of this algorithm?

0-1 Knapsack problem A thief breaks into a jewelry store carrying a knapsack. Each item in the store has a value and a weight. The knapsack will break if the total weight exceeds W. The thief's dilemma is to maximize the total value carried. What is the time complexity using the dynamic programming approach?
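For reference, a minimal sketch of the standard dynamic-programming table (the method name and array layout are our choices, not the slides'). It performs O(n·W) updates for n items, and, echoing the primality example, W takes only O(log W) characters to write down, so this is not polynomial in the input size.

// Sketch: classic 0-1 knapsack DP.
// best[w] = max value achievable with total weight <= w, using the items seen so far.
static int knapsack(int[] value, int[] weight, int W) {
    int[] best = new int[W + 1];
    for (int i = 0; i < value.length; i++)      // n items
        for (int w = W; w >= weight[i]; w--)    // descending: each item used at most once
            best[w] = Math.max(best[w], best[w - weight[i]] + value[i]);
    return best[W];                             // O(n·W) updates in total
}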

Some Big Ones Doubly Exponential Time means that for some constant k, T(n) = 2^(2^(kn)) Triply Exponential: T(n) = 2^(2^(2^(kn))) And so forth.

Faster and Faster: 2STACK 2STACK(0) = 1 2STACK(n) = 2^2STACK(n-1) 2STACK(1) = 2 2STACK(2) = 4 2STACK(3) = 16 2STACK(4) = 65536 2STACK(5) = 2^65536 ≥ # of atoms in the universe 2STACK(n) = 2^2^…^2, a “tower of n 2’s”

And the inverse of 2STACK: log* log*(n) = # of times you have to apply the log function to n to make it ≤ 1 2STACK(0) = 1 2STACK(n) = 2^2STACK(n-1) 2STACK(1) = 2 2STACK(2) = 4 2STACK(3) = 16 2STACK(4) = 65536 2STACK(5) = 2^65536 ≥ # of atoms in the universe

And the inverse of 2STACK: log* log*(n) = # of times you have to apply the log function to n to make it ≤ 1 2STACK(n) = 2^2STACK(n-1) 2STACK(0) = 1, log*(1) = 0 2STACK(1) = 2, log*(2) = 1 2STACK(2) = 4, log*(4) = 2 2STACK(3) = 16, log*(16) = 3 2STACK(4) = 65536, log*(65536) = 4 2STACK(5) ≥ # of atoms in the universe, log*(atoms) = 5
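That definition transcribes directly into code (method name ours). Note how slowly the count grows: even for n = 2^65536 the loop would run only five times.

// Sketch: log*(n) = how many times log₂ must be applied to bring n down to ≤ 1.
static int logStar(double n) {
    int count = 0;
    while (n > 1) {                   // each pass replaces n by log₂(n)
        n = Math.log(n) / Math.log(2);
        count++;
    }
    return count;                     // logStar(16) == 3, logStar(65536) == 4
}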

So an algorithm that can be shown to run in O(n log* n) time is linear time for all practical purposes!!

Ackermann’s Function A(0, n) = n + 1 for n ≥ 0 A(m, 0) = A(m - 1, 1) for m ≥ 1 A(m, n) = A(m - 1, A(m, n - 1)) for m, n ≥ 1 A(4,2) > # of particles in universe A(5,2) can’t be written out in this universe
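The recurrence transcribes directly (a sketch; only very small arguments terminate in practice, since A(4,2) already dwarfs anything a machine can represent):

// Sketch: Ackermann's function, exactly as defined above.
// Only safe for tiny m and n: the recursion explodes almost immediately.
static long ackermann(long m, long n) {
    if (m == 0) return n + 1;
    if (n == 0) return ackermann(m - 1, 1);
    return ackermann(m - 1, ackermann(m, n - 1));
}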

Inverse Ackermann function A(0, n) = n + 1 for n ≥ 0 A(m, 0) = A(m - 1, 1) for m ≥ 1 A(m, n) = A(m - 1, A(m, n - 1)) for m, n ≥ 1 Define: A′(k) = A(k, k) The inverse Ackermann α(n) is the inverse of A′ Practically speaking: n × α(n) ≤ 4n

The inverse Ackermann function (in fact, a Θ(n α(n)) bound) arises in the seminal paper of D. D. Sleator and R. E. Tarjan, “A data structure for dynamic trees,” Journal of Computer and System Sciences, 26(3): 362–391, 1983.

Time complexity of grade school addition T(n) = the amount of time grade school addition uses to add two n-bit numbers. We saw that T(n) was linear: T(n) = Θ(n)

Time complexity of grade school multiplication T(n) = the amount of time grade school multiplication uses to multiply two n-bit numbers. We saw that T(n) was quadratic: T(n) = Θ(n²)

Next Class Will Bonzo be able to add two numbers faster than Θ(n)? Can Odette ever multiply two numbers faster than Θ(n²)? Tune in on Thursday, same time, same place…