1 Growth of Functions CS/APMA 202 Rosen section 2.2 Aaron Bloomfield.
1 Growth of Functions CS/APMA 202 Rosen section 2.2 Aaron Bloomfield

2 How does one measure algorithms?
- We can time how long it takes a computer
- But what if the computer is doing other things?
- And what happens if you get a faster computer? A 3 GHz Windows machine will run an algorithm at a different speed than a 3 GHz Macintosh
- So that idea didn't work out well…

3 How does one measure algorithms?
- We can measure how many machine instructions an algorithm takes
- But different CPUs will require different numbers of machine instructions for the same algorithm
- So that idea didn't work out well either…

4 How does one measure algorithms?
- We can loosely define a "step" as a single computer operation: a comparison, an assignment, etc., regardless of how many machine instructions it translates into
- This allows us to put algorithms into broad categories of efficiency
- An efficient algorithm on a slow computer will always (for large enough inputs) beat an inefficient algorithm on a fast computer

5 Bubble sort running time
- The bubble step takes (n² - n)/2 "steps"
- Say the bubble sort takes the following number of steps on specific CPUs:
  - Intel Pentium IV CPU: 58·(n² - n)/2
  - Motorola CPU: 84.4·(n² - 2n)/2
  - Intel Pentium V CPU: 44·(n² - n)/2
- Notice that each has an n² term
- As n increases, the other terms will drop out
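As a sketch of where that (n² - n)/2 count comes from, here is a small Python helper (hypothetical, not from the slides) that runs bubble sort while counting each comparison as one "step":

```python
def bubble_sort_steps(a):
    """Bubble sort that counts comparisons ("steps")."""
    a = list(a)
    steps = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            steps += 1                      # one comparison = one "step"
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, steps

n = 100
data = list(range(n, 0, -1))                # reverse-sorted input
sorted_data, steps = bubble_sort_steps(data)
assert steps == (n * n - n) // 2            # 4950 comparisons for n = 100
```

The comparison count is the same for every input of size n: the inner loop always makes (n - 1) + (n - 2) + … + 1 = (n² - n)/2 comparisons.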

6 Bubble sort running time
- This leaves us with:
  - Intel Pentium IV CPU: 29n²
  - Motorola CPU: 42.2n²
  - Intel Pentium V CPU: 22n²
- As processors change, the constants will always change; the exponent on n will not
- Thus, we don't care about the constants

7 An aside: inequalities
- Suppose you need to show an inequality: x < y
- You can replace the lesser side with something greater: x + 1 < y
- If you can still show the new inequality to be true, then the original inequality is true
- Example: to show that 15 < 20, you can replace 15 with 16 and show that 16 < 20. Because 15 < 16 and 16 < 20, we have 15 < 20

8 An aside: inequalities
- Suppose you need to show an inequality: x < y
- You can replace the greater side with something lesser: x < y - 1
- If you can still show the new inequality to be true, then the original inequality is true
- Example: to show that 15 < 20, you can replace 20 with 19 and show that 15 < 19. Because 15 < 19 and 19 < 20, we have 15 < 20

9 An aside: inequalities
- What if you make such a replacement and can't show the new inequality?
- Then you can't say anything about the original inequality
- Example: to show that 15 < 20, you could replace 20 with 10
- But you can't show that 15 < 10
- So this replacement says nothing one way or the other about the original inequality

10 Quick survey
I felt I understood running times and inequality manipulation…
a) Very well
b) With some review, I'll be good
c) Not really
d) Not at all

11 The 2002 Ig Nobel Prizes
- Biology: "Courtship behavior of ostriches towards humans under farming conditions in Britain"
- Physics: "Demonstration of the exponential decay law using beer froth"
- Interdisciplinary: a comprehensive study of human belly button lint
- Chemistry: creating a four-legged periodic table
- Mathematics: "Estimation of the surface area of African elephants"
- Literature: "The effects of pre-existing inappropriate highlighting on reading comprehension"
- Peace: for creating Bow-lingual, a computerized dog-to-human translation device
- Hygiene: for creating a washing machine for cats and dogs
- Economics: Enron et al., for applying imaginary numbers to the business world
- Medicine: "Scrotal asymmetry in man in ancient sculpture"

12 End of lecture on 3 March 2005

13 Review of last time
Searches:
- Linear: n steps
- Binary: log₂ n steps
- Binary search is about as fast as you can get
Sorts:
- Bubble: n² steps
- Insertion: n² steps
- There are other, more efficient, sorting techniques
  - In principle, the fastest are heap sort, quick sort, and merge sort; these each take n·log₂ n steps
  - In practice, quick sort is the fastest, followed by merge sort
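To make the log₂ n claim for binary search concrete, here is a sketch (a hypothetical helper, not from the slides) that counts how many elements the search probes; on a sorted list of 1024 elements it never needs more than about log₂ 1024 = 10 probes:

```python
import math

def binary_search_steps(a, target):
    """Binary search on a sorted list, counting probes ("steps")."""
    lo, hi, steps = 0, len(a) - 1, 0
    while lo <= hi:
        steps += 1                  # one probe of the middle element
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid, steps
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

a = list(range(1024))
_, steps = binary_search_steps(a, 0)
assert steps <= math.ceil(math.log2(len(a))) + 1
```

Each probe halves the remaining search range, which is exactly why the step count is logarithmic rather than linear.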

14 Big-Oh notation
- Let b(x) be the bubble sort algorithm
- We say b(x) is O(n²)
  - This is read as "b(x) is big-oh n²"
  - This means that as the input size increases, the running time of the bubble sort will increase proportionally to the square of the input size; in other words, by some constant times n²
- Let l(x) be the linear (or sequential) search algorithm
- We say l(x) is O(n)
  - Meaning the running time of the linear search increases directly proportionally to the input size

15 Big-Oh notation
- Consider: b(x) is O(n²)
  - That means that b(x)'s running time is less than (or equal to) some constant times n²
- Consider: l(x) is O(n)
  - That means that l(x)'s running time is less than (or equal to) some constant times n

16 Big-Oh proofs
- Show that f(x) = x² + 2x + 1 is O(x²)
- In other words, show that x² + 2x + 1 ≤ c·x², where c is some constant, for all input sizes greater than some x
- We know that 2x² ≥ 2x whenever x ≥ 1, and we know that x² ≥ 1 whenever x ≥ 1
- So we replace 2x + 1 with 2x² + x² = 3x²
- We then end up with x² + 3x² = 4x²
- This yields 4x² ≤ c·x²
- Thus, for input sizes 1 or greater, when the constant is 4 or greater, f(x) is O(x²)
- We could have chosen different values for c and x
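The witnesses from this proof can be spot-checked numerically; a minimal Python sketch, checking the inequality over a finite range (which is evidence, not a proof):

```python
# With witnesses c = 4 and k = 1, the proof claims
# x^2 + 2x + 1 <= c*x^2 for all x >= k; check it up to 10_000.
f = lambda x: x * x + 2 * x + 1
c, k = 4, 1
assert all(f(x) <= c * x * x for x in range(k, 10_001))
```

At x = 1 the bound is tight (f(1) = 4 = 4·1²), which is why a constant smaller than 4 would need a larger starting point k.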

17 Big-Oh proofs

18 Rosen, section 2.2, question 2(b)
- Show that f(x) = x² + 1000 is O(x²)
- In other words, show that x² + 1000 ≤ c·x²
- We know that x² > 1000 whenever x > 31
- Thus, we replace 1000 with x²
- This yields 2x² ≤ c·x²
- Thus, f(x) is O(x²) for all x > 31 when c ≥ 2

19 Rosen, section 2.2, question 1(a)
- Show that f(x) = 3x + 7 is O(x)
- In other words, show that 3x + 7 ≤ c·x
- We know that x > 7 whenever x > 7 (duh!)
- So we replace 7 with x
- This yields 4x ≤ c·x
- Thus, f(x) is O(x) for all x > 7 when c ≥ 4

20 Quick survey
I felt I understood (more or less) Big-Oh proofs…
a) Very well
b) With some review, I'll be good
c) Not really
d) Not at all

21 Today’s demotivators

22 A variant of the last question
- Show that f(x) = 3x + 7 is O(x²)
- In other words, show that 3x + 7 ≤ c·x²
- We know that x > 7 whenever x > 7 (duh!)
- So we replace 7 with x
- This yields 4x ≤ c·x²
- This is true for x > 7 when c ≥ 1
- Thus, f(x) is O(x²) for all x > 7 when c ≥ 1

23 What that means
- If a function is O(x)
  - Then it is also O(x²)
  - And it is also O(x³)
- Meaning an O(x) function will grow at a rate slower than or equal to x, x², x³, etc.

24 Function growth rates
For input size n = 1000:
- O(1): 1
- O(log n): ≈10
- O(n): 10^3
- O(n log n): ≈10^4
- O(n²): 10^6
- O(n³): 10^9
- O(n⁴): 10^12
- O(n^c): 10^(3c), where c is a constant
- O(2^n): ≈10^301
- O(n!): ≈10^2568
- O(n^n): 10^3000
Many interesting problems fall into these last (exponential and worse) categories
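The table's magnitudes at n = 1000 can be recomputed directly; a small Python check (a sketch added here, not part of the original slides):

```python
import math

n = 1000
assert round(math.log2(n)) == 10              # O(log n) row: about 10
assert n ** 2 == 10 ** 6                      # O(n^2) row
assert n ** 3 == 10 ** 9                      # O(n^3) row
assert 10 ** 301 < 2 ** n < 10 ** 302         # O(2^n) row: about 10^301
assert len(str(math.factorial(n))) == 2568    # n! has 2568 decimal digits
assert n ** n == 10 ** 3000                   # n^n row: exactly 10^3000
```

Python's arbitrary-precision integers make it easy to verify even the 3000-digit entries exactly.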

25 Function growth rates (chart of the growth rates above; note the logarithmic scale!)

26 Integer factorization
- Factoring a composite number into its component primes is O(2^n)
- Thus, if we choose 2048-bit numbers (as in RSA keys), it takes 2^2048 steps
- That's about 10^617 steps!
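A quick Python check of that magnitude (the exact decimal length of 2^2048):

```python
# 2^2048 written out in decimal has 617 digits, i.e. it is roughly 10^617.
steps = 2 ** 2048
assert len(str(steps)) == 617
```

This follows from 2048 · log₁₀(2) ≈ 616.5, so 2^2048 ≈ 3.2 × 10^616.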

27 Formal Big-Oh definition
Let f and g be functions. We say that f(x) is O(g(x)) if there are constants C and k such that |f(x)| ≤ C·|g(x)| whenever x > k
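The definition translates almost directly into code. Here is a hypothetical helper (an illustration, and only a finite spot-check, not a proof) that tests whether proposed witnesses C and k satisfy the inequality over a range of inputs:

```python
def is_big_oh_witness(f, g, C, k, upto=10_000):
    """Spot-check |f(x)| <= C*|g(x)| for all integers x in (k, upto]."""
    return all(abs(f(x)) <= C * abs(g(x)) for x in range(k + 1, upto + 1))

# Witnesses from the earlier proof: 3x + 7 <= 4x whenever x > 7
assert is_big_oh_witness(lambda x: 3 * x + 7, lambda x: x, C=4, k=7)

# No constant makes x^2 <= C*x hold for all large x; C = 100 fails at x = 101
assert not is_big_oh_witness(lambda x: x * x, lambda x: x, C=100, k=1)
```

Passing the check does not prove f(x) is O(g(x)), but a failure does disprove the particular witnesses C and k.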

28 Formal Big-Oh definition

29 A note on Big-Oh notation
- Assume that a function f(x) is O(g(x))
- It is sometimes written as f(x) = O(g(x))
  - However, this is not a proper equality!
  - It's really saying that |f(x)| ≤ C·|g(x)|
- In this class, we will write it as "f(x) is O(g(x))"

30 The growth of combinations of functions Ignore this part

31 Big-omega and Big-theta Ignore this part

32 NP-completeness
- Not in the textbook; this is additional material
- A full discussion of NP-completeness takes 3 hours for somebody who already has a CS degree
  - We are going to do the 15-minute version of it
- Any term of the form n^c, where c is a constant, is a polynomial
  - Thus, any function that is O(n^c) is a polynomial-time function
  - 2^n, n!, and n^n are not polynomial functions

33 Satisfiability
- Consider a Boolean expression of the form: (x₁ ∨ ¬x₂ ∨ x₃) ∧ (x₂ ∨ ¬x₃ ∨ x₄) ∧ (¬x₁ ∨ x₄ ∨ x₅)
  - This is a conjunction of disjunctions
- Is such an expression satisfiable?
  - In other words, can you assign truth values to all the xᵢ's such that the expression is true?

34 Satisfiability
- If given a solution, it is easy to check whether it works
  - Plug in the values; this can be done quickly, even by hand
- However, there is no known efficient way to find such a solution
  - The only definitive way is to try all possible values for the n Boolean variables
  - That means this is O(2^n)!
  - Thus it is not a polynomial-time method
- NP stands for "nondeterministic polynomial" (it is often misread as "not polynomial")
- Cook's theorem (1971) states that SAT is NP-complete
  - There still may be an efficient way to solve it, though!

35 NP-completeness
- There are hundreds of NP-complete problems
  - It has been shown that if you can solve one of them efficiently, then you can solve them all
- Example: the traveling salesman problem
  - Given a number of cities and the costs of traveling from any city to any other city, what is the cheapest round-trip route that visits each city once and then returns to the starting city?
- Not all algorithms that are O(2^n) are NP-complete
  - In particular, integer factorization (also O(2^n)) is not thought to be NP-complete

36 NP-completeness
- It is "widely believed" that there is no efficient solution to NP-complete problems
  - In other words, nearly everybody holds that belief
- If you could solve an NP-complete problem in polynomial time, you would be showing that P = NP
  - If this were possible, it would be like proving that Newton's or Einstein's laws of physics were wrong
- In summary:
  - NP-complete problems are very difficult to solve, but their solutions are easy to check
  - It is believed that there is no efficient way to solve them

37 Quick survey
I sorta kinda get the hang of NP-completeness…
a) Very well
b) With some review, I'll be good
c) Not really
d) Not at all

38 Star Wars: Episode III trailer (no, really!)

39 Quick survey
I felt I understood the material in this slide set…
a) Very well
b) With some review, I'll be good
c) Not really
d) Not at all

40 Quick survey
The pace of the lecture for this slide set was…
a) Fast
b) About right
c) A little slow
d) Too slow

41 Quick survey
How interesting was the material in this slide set? Be honest!
a) Wow! That was SOOOOOO cool!
b) Somewhat interesting
c) Rather boring
d) Zzzzzzzzzzz