1 INFO 2950, Prof. Carla Gomes. Module: Algorithms and Growth Rates. Rosen, Chapter 3.


2 The Algorithm Problem

An algorithmic problem consists of:
- a specification of all legal inputs, and
- a specification of the desired output as a function of the input.

Any legal input → The algorithm → The desired output

Examples of algorithmic problems

Problem 1: Input: a list L of integers. Output: the sum of the integers on L.

Problem 2: Input: two texts A and B in English. Output: the list of words common to both texts.

Problem 3: Input: a road map of cities with distances attached to the roads, and two designated cities A and B. Output: a description of the shortest path between A and B.

Instance of an algorithmic problem; size of an instance

An instance of an algorithmic problem is a concrete case of the problem, with specific input. The size of an instance is given by the size of its input.

Example: an instance of Problem 1 (Input: a list L of integers; Output: the sum of the integers on L) is L = 2, 5, 26, 8, 170, 79, 1002. Size of instance ≈ length of the list, so size of instance = |L| = 7.

We use a "natural" measure of input size. Strictly speaking we should count bits; why is the natural measure generally OK?

Examples of instances

Problem 3: Input: a road map of cities with distances attached to the roads, and two designated cities A and B. Output: a description of the shortest path between A and B.

Size of instance ≈ number of cities and roads. A particular instance: size of instance = 6 nodes, 9 edges. (The size of an instance is given by the size of its input.)

6 Algorithm. Definition: An algorithm is a finite set of precise instructions for performing a computation or for solving a problem. In general we describe algorithms using pseudocode, i.e., a language that is an intermediate step between an English-language description of an algorithm and an implementation of that algorithm in a programming language.

7 Properties of an Algorithm

- Input: an algorithm has input values from a specified set.
- Output: for each set of input values, an algorithm produces output values from a specified set. The output values are the solution of the problem.
- Definiteness: the steps of an algorithm must be defined precisely.
- Correctness: an algorithm should produce the correct output values for each set of input values.
- Finiteness: an algorithm should produce the desired output after a finite (but perhaps large) number of steps for any input in the set.
- Effectiveness: it must be possible to perform each step of an algorithm exactly and in a finite amount of time.
- Generality: the procedure should be applicable to all problems of the desired form, not just to a particular set of input values.

Note the distinction between a "problem" and a "problem instance" (quite confusing for folks outside CS): an algorithm should work for all instances!

Our Pseudocode Language

Declaration:
  procedure name(argument: type)

Statements:
  variable := expression
  informal statement
  begin statements end
  {comment}
  if condition then statement [else statement]
  for variable := initial value to final value statement
  while condition statement
  procname(arguments)
  return expression (not defined in the book)

procedure procname(arg: type) declares that the following text defines a procedure named procname that takes inputs (arguments) named arg, which are data objects of the type type. Example: procedure maximum(L: list of integers) [statements defining maximum…]

10 Algorithm: Finding the Maximum Element in a Finite Sequence

procedure max(a1, a2, …, an: integers)
max := a1
for i := 2 to n
    if max < ai then max := ai
{max is the largest element}
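The pseudocode above translates almost line for line into Python; a minimal sketch (the function name find_max is ours, not from the slides):

```python
def find_max(a):
    """Return the largest element of a non-empty list by a linear scan."""
    largest = a[0]
    for i in range(1, len(a)):
        if largest < a[i]:
            largest = a[i]
    return largest

print(find_max([2, 5, 26, 8, 170, 79, 1002]))  # 1002
```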

Computer Programming

Algorithm → (programming, by the programmer) → program in a high-level language (C, Java, etc.) → (compilation, by the compiler) → equivalent program in assembly language → equivalent program in machine code → (execution, by the computer)

12 Searching Algorithms. The searching problem: locating an element in an ordered list. Example: searching for a word in a dictionary.

13 Algorithm: The Linear Search Algorithm

procedure linear search(x: integer, a1, a2, …, an: distinct integers)
i := 1
while (i ≤ n and x ≠ ai)
    i := i + 1
if i ≤ n then location := i
else location := 0
{location is the subscript of the term that equals x, or is 0 if x is not found}
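A direct Python transcription, keeping the slide's 1-based location and 0-for-not-found convention (the name linear_search is ours):

```python
def linear_search(x, a):
    """Return the 1-based position of x in list a, or 0 if x is absent."""
    i = 1
    while i <= len(a) and x != a[i - 1]:
        i += 1
    return i if i <= len(a) else 0
```

Note the worst case scans all n elements, the best case finds x at position 1.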

14 Binary Search. Example: to search for 19 in a sorted list, repeatedly split the remaining list in half (first split, second split, third split); 19 is located as the 14th item.

Adapted from Michael P. Frank

Search algorithm #2: Binary Search. Basic idea: on each step, look at the middle element of the remaining list to eliminate half of it, and quickly zero in on the desired element. [Diagram: elements < x on the left, elements > x on the right of the middle element.]

16 Algorithm: The Binary Search Algorithm

procedure binary search(x: integer, a1, a2, …, an: increasing integers)
i := 1 {i is the left endpoint of the search interval}
j := n {j is the right endpoint of the search interval}
while i < j
begin
    m := ⌊(i + j)/2⌋
    if x > am then i := m + 1
    else j := m
end
if x = ai then location := i
else location := 0
{location is the subscript of the term that equals x, or is 0 if x is not found}
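A Python transcription keeping the slide's 1-based conventions (the name binary_search is ours):

```python
def binary_search(x, a):
    """Return the 1-based position of x in the sorted list a, or 0 if absent."""
    i, j = 1, len(a)          # left and right endpoints of the search interval
    while i < j:
        m = (i + j) // 2      # floor of the midpoint
        if x > a[m - 1]:
            i = m + 1         # x lies in the right half
        else:
            j = m             # x lies in the left half (or at m)
    return i if a and x == a[i - 1] else 0
```

Each iteration halves the interval, so the loop runs about log2(n) times.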

17 Just because we know how to solve a given problem (we have an algorithm) does not mean that the problem can be solved in practice. The procedure (algorithm) may be so inefficient that it would not be possible to solve the problem within a useful period of time. So we would like to have an idea of the "complexity" of our algorithm.

18 Complexity of Algorithms

19 Complexity of Algorithms

The complexity of an algorithm is the number of steps that it takes to transform the input data into the desired output. Each simple operation (+, -, *, /, =, if, etc.) and each memory access corresponds to one step. (*) The complexity of an algorithm is a function of the size of the input (or size of the instance). We'll denote the complexity of algorithm A by C_A(n), where n is the size of the input.

(*) This model is a simplification, but still valid enough to give us a good idea of the complexity of algorithms.

What does this mean for the complexity of, say, chess? Complexity: C_A(n) = O(1). Two issues: (1) the input size is fixed; (2) a memory access counts as just one step. So the model/definition is not always "useful"!

20 Example: Insertion Sort (from Introduction to Algorithms, Cormen et al.)
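The slide's figure from Cormen et al. is not reproduced here; as a sketch, the same insertion-sort idea in Python (the name insertion_sort is ours):

```python
def insertion_sort(a):
    """Sort list a in place by inserting each element into the sorted prefix."""
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]   # shift larger elements one slot to the right
            i -= 1
        a[i + 1] = key        # drop key into its place
    return a
```

Worst case (reverse-sorted input) does about n²/2 comparisons; best case (already sorted) does n − 1.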

21 Different notions of complexity

Worst-case complexity of an algorithm A: the maximum number of computational steps required for the execution of algorithm A, over all inputs of the same size s. It provides an upper bound for the algorithm: the worst that can happen given the most difficult instance (the pessimistic view).

Best-case complexity of an algorithm A: the minimum number of computational steps required for the execution of algorithm A, over all inputs of the same size s. The most optimistic view of an algorithm: it tells us the least work the algorithm could possibly get away with for some one input of a fixed size (we get to pick the easiest input of a given size).

Linear search: worst cost? Best cost? Average cost?

22 Average-case complexity of an algorithm A: the average amount of resources the algorithm consumes, assuming some plausible frequency of occurrence of each input. Figuring out the average cost is much more difficult than figuring out either the worst or best cost: e.g., we have to assume a given probability distribution over the types of inputs we get. Practical difficulty: what is the distribution of "real-world" problem instances?

23 Different notions of complexity. In general, worst-case complexity is the notion we use to characterize the complexity of algorithms: we perform upper-bound analysis on algorithms.

24 Algorithm "Good Morning"

for I = 1 to n
    for J = I+1 to n
        ShakeHands(student(I), student(J))

Running time of "Good Morning": Time = (# of handshakes) × (time per handshake) + some overhead. We want an expression for T(n), the running time of "Good Morning" on input of size n.

25 Growth Rates

Algorithm "Good Morning":
for I = 1 to n
    for J = I+1 to n
        ShakeHands(student(I), student(J))

How many handshakes? [Table of (I, J) pairs for I, J in 1..n.] The count is (n² − n)/2.

26 Growth Rates

Algorithm "Good Morning":
for I = 1 to n
    for J = I+1 to n
        ShakeHands(student(I), student(J))

T(n) = s(n² − n)/2 + t, where s is the time for one handshake and t is the time for getting organized.

But do we need to characterize the complexity of algorithms in such detail? What is the most important aspect that we care about? Scaling with n!
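The handshake count in T(n) can be confirmed by running the nested loops directly (the name handshakes is ours):

```python
def handshakes(n):
    """Count the handshakes produced by the nested loops of 'Good Morning'."""
    count = 0
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            count += 1            # one ShakeHands(student(i), student(j)) call
    return count

# agrees with the closed form (n^2 - n) / 2
print(handshakes(5))  # 10
```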

27 Comparing algorithms with respect to complexity. Consider two algorithms A1 and A2, with complexities C_A1(n) = 0.5n² and C_A2(n) = 5n. Which one has larger complexity?

28 C_A2(n) = 5n ≥ C_A1(n) = 0.5n² for n ≤ 10

C_A1(n) = 0.5n² > C_A2(n) = 5n for n > 10. When we look at the complexity of algorithms we think asymptotically, i.e., we compare two algorithms as the problem size tends to infinity! This is called asymptotic complexity (the concern is the growth rate).

30 Game: I'm thinking of an integer in [1, 64]. You guess the number; to each of your guesses my answer is High, Low, or Yes. How many guesses do you need in the worst case? What strategy are we assuming?
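Assuming the halving (binary-search) strategy, the worst case is ⌈log₂ 64⌉ = 6 guesses, since each answer halves the remaining range; a quick check (the helper name guesses_needed is ours):

```python
import math

def guesses_needed(lo, hi):
    """Worst-case guesses for the halving strategy on the integers [lo, hi]."""
    n = hi - lo + 1               # number of candidate values
    return math.ceil(math.log2(n))

print(guesses_needed(1, 64))  # 6
```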

31 Growth Rates. In general we only worry about growth rates because: (1) our main objective is to analyze the cost performance of algorithms asymptotically (reasonable in part because computers get faster every year); (2) another obstacle to finding the exact cost of an algorithm is that some algorithms are quite complicated to analyze; (3) when analyzing an algorithm we are not that interested in the exact time it takes to run. Often we only want to compare two algorithms for the same problem, and the thing that makes one algorithm more desirable than another is its growth rate relative to the other algorithm's growth rate.

32 Growth Rates. Algorithm analysis is concerned with: the type of function that describes the running time (we ignore constant factors, since different machines have different speeds per cycle), and large values of n.

33 Growth of functions

Important definition: for functions f and g from the set of integers to the set of real numbers, we say f(x) is O(g(x)) to denote: ∃C, k such that ∀n > k, |f(n)| ≤ C|g(n)|. We say "f(n) is big-O of g(n)".

Recipe for proving f(n) = O(g(n)): find constants C and k (called witnesses to the fact that f(x) is O(g(x))) so that the inequality holds.

This will be applied to running time, so you'll usually consider T(n) (≥ 0). Note: once one pair of witnesses C, k is found, there are infinitely many pairs of witnesses. One also writes f(x) = O(g(x)), even though this is not a real equality.

[Plot: f(x) is O(g(x)); for x > k, C·g(x) lies above f(x).]

x² + 2x + 1 is O(x²): for x > 1, 0 ≤ x² + 2x + 1 ≤ x² + 2x² + x² = 4x². Witnesses: C = 4, k = 1 (also C = 3, k = 2 works).
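Witness pairs can be spot-checked numerically over a sample range (the helper name witness_holds is ours; sampling is not a proof, just a sanity check):

```python
def witness_holds(f, g, C, k, xs):
    """Spot-check |f(x)| <= C*|g(x)| at every sample point x > k."""
    return all(abs(f(x)) <= C * abs(g(x)) for x in xs if x > k)

f = lambda x: x**2 + 2 * x + 1
g = lambda x: x**2

print(witness_holds(f, g, C=4, k=1, xs=range(2, 1000)))  # True
print(witness_holds(f, g, C=1, k=1, xs=range(2, 1000)))  # False: C=1 is too small
```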

Note: when f(x) is O(g(x)), and h(x) is a function that has larger absolute values than g(x) does for sufficiently large values of x, it follows that f(x) is O(h(x)). In other words, the function g(x) in the relationship can be replaced by a function with larger absolute values. This can be seen as follows: |f(x)| ≤ C|g(x)| if x > k, and if |h(x)| > |g(x)| for all x > k, then |f(x)| ≤ C|h(x)| if x > k. Therefore f(x) is O(h(x)).

37 Growth of functions (examples). f(x) = O(g(x)) iff ∃C, k such that ∀x > k, |f(x)| ≤ C|g(x)|. Example: 3n is O(15n), since ∀n > 0 (there's k), 3n ≤ 1 · 15n (there's C).

38 The complexity of A2 is of lower order than that of A1: A1 grows quadratically, O(n²), while A2 grows only linearly, O(n).

39 [Plot: x² vs. (x² + x), for x ≤ 20.]

40 [Plot: x² vs. (x² + x).] (x² + x) is O(x²) ("big-oh of x-squared").

41 Growth of functions (examples). f(x) is O(g(x)) iff ∃C, k such that ∀x > k, |f(x)| ≤ C|g(x)|.

Is x² O(x³)?
a) Yes, and I can prove it.
b) Yes, but I can't prove it.
c) No, x = 1/2 implies x² > x³.
d) No, but I can't prove it.

Yes: ∀x > 1, x² ≤ x³. Witnesses: C = 1, k = 1.

42 Growth of functions (examples). f(x) = O(g(x)) iff ∃C, k such that ∀x > k, |f(x)| ≤ C|g(x)|. 1000x² is O(x²), since ∀x > 0, 1000x² ≤ 1000 · x². Witnesses: C = 1000, k = 0.

43 Growth of functions (examples). f(x) = O(g(x)) iff ∃C, k such that ∀x > k, |f(x)| ≤ C|g(x)|. Prove that 101x² + 100x is O((1/100)x²): since 100x ≤ 100x² when x > 1, we get 101x² + 100x ≤ 201x² = 20100 · (1/100)x² when x > 1. Witnesses: k = 1, C = 20100.

44 Growth of functions (examples). Prove that 5x + 100 is O(x/2). Similar problem, different technique.

Try C = 10: need ∀x > k, 5x + 100 ≤ 10 · x/2 = 5x. Nothing works for k.

Try C = 11: need ∀x > k, 5x + 100 ≤ 11 · x/2 = 5x + x/2, i.e., 100 ≤ x/2, which holds for x > 200. Witnesses: k = 200, C = 11.

45 Theorem 1: Let f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + … + a₁x + a₀, where a₀, a₁, …, aₙ are real numbers. Then f(x) is O(xⁿ).

Proof idea: use the triangle inequality, which states |x + y| ≤ |x| + |y|.

47 Estimating Functions. Example 1: estimate the sum of the first n positive integers. 1 + 2 + … + n ≤ n + n + … + n = n², so the sum is O(n²), with witnesses C = 1, k = 1.

48 Estimating Functions. Example 2: estimate f(n) = n! and log n!. Since n! = 1 · 2 · … · n ≤ n · n · … · n = nⁿ, n! is O(nⁿ); taking logarithms, log n! ≤ n log n, so log n! is O(n log n).
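Both bounds can be sanity-checked numerically for small n (a spot check, not a proof):

```python
import math

for n in range(2, 12):
    # n! <= n^n, i.e. n! is O(n^n) with C = 1, k = 1
    assert math.factorial(n) <= n ** n
    # log n! <= n log n, i.e. log n! is O(n log n)
    assert math.log(math.factorial(n)) <= n * math.log(n)
print("bounds hold for n = 2..11")
```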

49 Growth of functions. Guidelines: in general, only the largest term in a sum matters.

a₀xⁿ + a₁xⁿ⁻¹ + … + aₙ₋₁x + aₙ = O(xⁿ)

n dominates lg n, so n⁵ lg n = O(n⁶).

List of common functions in increasing O() order: 1 (constant time), n (linear time), n lg n, n² (quadratic time), n³, …, 2ⁿ (exponential time), n!

[Plot of common growth-rate functions; note: log scale on the y axis.]

51 Combination of growth of functions

Theorem: if f₁(x) = O(g₁(x)) and f₂(x) = O(g₂(x)), then f₁(x) + f₂(x) is O(max{|g₁(x)|, |g₂(x)|}), with witnesses C = C₁ + C₂ and k = max{k₁, k₂}.

Proof: let h(x) = max{|g₁(x)|, |g₂(x)|}. We need constants C and k such that ∀x > k, |f₁(x) + f₂(x)| ≤ C|h(x)|. We know |f₁(x)| ≤ C₁|g₁(x)| and |f₂(x)| ≤ C₂|g₂(x)|. Using the triangle inequality, |f₁(x) + f₂(x)| ≤ |f₁(x)| + |f₂(x)| ≤ C₁|g₁(x)| + C₂|g₂(x)| ≤ C₁|h(x)| + C₂|h(x)| = (C₁ + C₂)|h(x)|.

52 Growth of functions: two more theorems. Theorem: if f₁(x) = O(g₁(x)) and f₂(x) = O(g₂(x)), then f₁(x) · f₂(x) = O(g₁(x) · g₂(x)). Theorem: if f₁(x) = O(g(x)) and f₂(x) = O(g(x)), then (f₁ + f₂)(x) = O(g(x)).

53 Growth of functions: two definitions

If f(x) = O(g(x)) then we write g(x) = Ω(f(x)): "g is big-omega of f" (a lower bound). What does this mean? If ∃C, k such that ∀x > k, f(x) ≤ C·g(x), then ∃k, C′ such that ∀x > k, g(x) ≥ C′·f(x), with C′ = 1/C.

If f(x) = O(g(x)) and f(x) = Ω(g(x)), then f(x) = Θ(g(x)): "f is big-theta of g".

When we write f = O(g), it is like f ≤ g. When we write f = Ω(g), it is like f ≥ g. When we write f = Θ(g), it is like f = g.

54 Growth of functions: other estimates

For functions f and g, f = o(g) if ∀c > 0 ∃k such that ∀n > k, f(n) ≤ c·g(n): "f is little-o of g". What does this mean? No matter how tiny c is, c·g eventually dominates f. Big difference from big-O: the inequality must hold for all c!

Example: show that n² = o(n² log n). Proof foreshadowing: find a k (possibly in terms of c) that makes the inequality hold.

55 Growth of functions: other estimates

For functions f and g, f = o(g) if ∀c > 0 ∃k such that ∀n > k, f(n) ≤ c·g(n): "f is little-o of g". Proof foreshadowing: find a k (possibly in terms of c) that makes the inequality hold.

Example: show that n² = o(n² log n). Choose c arbitrarily. How large does n have to be so that n² ≤ c·n² log n? We need 1 ≤ c log n, i.e., 1/c ≤ log n, i.e., 2^(1/c) ≤ n. So take k = 2^(1/c); the inequality holds when n > 2^(1/c). So it can take a while: consider c = 1/…

56 The big difference between little-o and big-O is that the former has to hold for all c.

57 Growth of functions: other estimates. For functions f and g, f = o(g) if ∀c > 0 ∃k such that ∀n > k, f(n) ≤ c·g(n): "f is little-o of g". Example: show that 10n² = o(n³). Choose c arbitrarily. How large does n have to be so that 10n² ≤ c·n³? We need 10/c ≤ n, so take k = 10/c; the inequality holds when n > 10/c.
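Numerically, the ratio f(n)/g(n) = 10n²/n³ = 10/n shrinks below any fixed c as n grows, which is exactly the little-o claim (the helper name ratio is ours):

```python
def ratio(n):
    """f(n)/g(n) for f = 10n^2 and g = n^3; simplifies to 10/n."""
    return (10 * n**2) / n**3

# the ratio tends to 0 as n grows, so 10n^2 = o(n^3)
print(ratio(10), ratio(100), ratio(100000))
```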

58 Growth of functions: other estimates. For functions f and g, if f = o(g) then g = ω(f): "g is little-omega of f".

59 Growth of functions: other estimates. For functions f and g, if f = o(g) then g = ω(f): "g is little-omega of f". A thought to ponder: what if f = o(g) and f = ω(g)?

60 How do computer scientists differentiate between good (efficient) and bad (not efficient) algorithms?

61 How do computer scientists differentiate between good (efficient) and bad (not efficient) algorithms? The yardstick is that any algorithm that runs in no more than polynomial time is an efficient algorithm; everything else is not.

62 Functions ordered by their growth rates

Order  Function        Name
1      c               constant
2      lg n            logarithmic
3      lg^c n          polylogarithmic
4      n^r, 0 < r < 1  sublinear
5      n               linear
6      n^r, 1 < r < 2  subquadratic
7      n²              quadratic
8      n³              cubic
9      n^c, c ≥ 1      polynomial
10     r^n, r > 1      exponential

Orders 1–9 (up through polynomial): efficient algorithms. Order 10 (exponential): not efficient algorithms.


64 Polynomial vs. exponential growth (Harel 2000). [Plot: exponential curves (e.g., a binary branch-and-bound algorithm) vs. polynomial curves (N², LP interior-point methods, min-cost flow algorithms, the transportation algorithm, the assignment algorithm, Dijkstra's algorithm).]

65 Growth of functions. f(x) = O(g(x)) iff ∃C, k such that ∀x > k, f(x) ≤ C·g(x). [Plot: f(x), g(x), and C·g(x) against x; beyond k, C·g(x) stays above f(x).] We give an eventual upper bound on f(x).