CSCE 411 Design and Analysis of Algorithms


CSCE 411 Design and Analysis of Algorithms
Andreas Klappenecker

Goal of this Lecture
- Recall the basic asymptotic notations such as Big Oh, Big Omega, Big Theta, and little oh.
- Recall some basic properties of these notations.
- Give some motivation why these notions are defined the way they are.
- Give some examples of their usage.

Summary Let g: N → C be a real- or complex-valued function on the natural numbers.
O(g) = { f: N → C | ∃ u > 0 ∃ n0 ∈ N : |f(n)| ≤ u|g(n)| for all n ≥ n0 }
Ω(g) = { f: N → C | ∃ d > 0 ∃ n0 ∈ N : d|g(n)| ≤ |f(n)| for all n ≥ n0 }
Θ(g) = { f: N → C | ∃ u, d > 0 ∃ n0 ∈ N : d|g(n)| ≤ |f(n)| ≤ u|g(n)| for all n ≥ n0 }
o(g) = { f: N → C | lim_{n→∞} |f(n)|/|g(n)| = 0 }

Time Complexity When estimating the time complexity of algorithms, we simply want to count the number of operations. We want to be independent of the compiler used, ignorant of details such as the number of machine instructions generated per high-level instruction, and independent of optimization settings and architectural details. This means that performance should only be compared up to multiplication by a constant. We also want to ignore details such as initially filling the pipeline; therefore, we need to ignore the irregular behavior for small n.

Big Oh

Big Oh Notation Let f, g: N → R be functions from the natural numbers to the set of real numbers. We write f ∈ O(g) if and only if there exists some real number n0 and a positive real constant u such that |f(n)| ≤ u|g(n)| for all n in N satisfying n ≥ n0.

Big Oh Let g: N → C be a function. Then O(g) is the set of functions O(g) = { f: N → C | there exists a constant u > 0 and a natural number n0 such that |f(n)| ≤ u|g(n)| for all n ≥ n0 }.

Notation We have O(n²) ⊆ O(n³), but it is usually written as O(n²) = O(n³). This does not mean that the sets are equal! The equality sign should be read as 'is a subset of'.

Notation We write n² = O(n³) [read as: n² is contained in O(n³)], but we never write O(n³) = n².

Example: O(n²)

Big Oh Notation The Big Oh notation was introduced by the number theorist Paul Bachmann in 1894. It perfectly matches our requirements for measuring time complexity. Example: 4n³ + 3n² + 6 is in O(n³). The biggest advantage of the notation is that complicated expressions can be simplified dramatically.
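To make the definition concrete, here is a small numerical sanity check (not a proof) of the example above; the witnesses u = 5 and n0 = 4 are choices assumed here, not taken from the slides:

```python
def f(n):
    return 4 * n**3 + 3 * n**2 + 6

def g(n):
    return n**3

# Candidate witnesses for f in O(g): |f(n)| <= u * |g(n)| for all n >= n0.
u, n0 = 5, 4

# Check the defining inequality over a finite range (a sanity check, not a proof).
assert all(f(n) <= u * g(n) for n in range(n0, 10_000))
```

Any larger u would work as well; the point is only that a single pair (u, n0) witnesses membership in O(n³).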

Quiz Does O(1) contain only the constant functions?

Big versus Little Oh
O(g) = { f: N → C | ∃ u > 0 ∃ n0 ∈ N : |f(n)| ≤ u|g(n)| for all n ≥ n0 }
o(g) = { f: N → C | lim_{n→∞} |f(n)|/|g(n)| = 0 }

Limit Let (x_n) be a sequence of real numbers. We say that ℓ is the limit of this sequence of numbers, and write ℓ = lim_{n→∞} x_n, if and only if for each ε > 0 there exists a natural number n0 such that |x_n − ℓ| < ε for all n ≥ n0.

? !

Limit – Again! Let (x_n) be a sequence of real numbers. We say that ℓ is the limit of this sequence of numbers, and write ℓ = lim_{n→∞} x_n, if and only if for each ε > 0 there exists a natural number n0 such that |x_n − ℓ| < ε for all n ≥ n0.

How do we prove that g = O(f)?

Quiz It follows that o(f) is a subset of O(f). Why?

Quiz What does f = o(1) mean? Hint: o(g) = { f: N → C | lim_{n→∞} |f(n)|/|g(n)| = 0 }

Quiz Some computer scientists consider little oh notation too sloppy. For example, 1/n + 1/n² is o(1), but they might prefer 1/n + 1/n² = O(1/n). Why is that?

Limits? There are no Limits! The limit of a sequence might not exist. For example, if f(n) = 1 + (−1)ⁿ, then lim_{n→∞} f(n) does not exist.

Least Upper Bound (Supremum) The supremum b of a set S of real numbers is defined as the smallest real number b such that b ≥ x for all x in S. We write b = sup S. Examples: sup {1, 2, 3} = 3, sup {x : x² < 2} = √2, sup {(−1)ⁿ − 1/n : n ≥ 1} = 1.

The Limit Superior The limit superior of a sequence (x_n) of real numbers is defined as lim sup_{n→∞} x_n = lim_{n→∞} ( sup { x_m : m ≥ n } ). [Note that the limit superior always exists in the extended real line (which includes ±∞), as sup { x_m : m ≥ n } is a monotonically decreasing function of n, and monotone sequences always have a limit in the extended real line.]

The Limit Superior The limit superior of a sequence of real numbers is equal to the greatest accumulation point of the sequence.
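As a small illustration (a finite check, not a proof), take the sequence x_n = 1 + (−1)ⁿ from the earlier slide: every tail supremum is 2, so lim sup_{n→∞} x_n = 2 even though the ordinary limit does not exist.

```python
def x(n):
    # The oscillating sequence 1 + (-1)^n: 0 for odd n, 2 for even n.
    return 1 + (-1) ** n

# sup { x_m : m >= n } over a long finite tail window; every window
# contains an even index, so each tail supremum is 2.
tail_sups = [max(x(m) for m in range(n, n + 1000)) for n in range(1, 50)]
assert all(s == 2 for s in tail_sups)
```

Here 2 is indeed the greatest accumulation point of the sequence, matching the characterization above.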

Necessary and Sufficient Condition Theorem: f ∈ O(g) if and only if lim sup_{n→∞} |f(n)/g(n)| < ∞.

Big Omega

Big Omega Notation Let f, g: N → R be functions from the set of natural numbers to the set of real numbers. We write g ∈ Ω(f) if and only if there exists some real number n0 and a positive real constant C such that |g(n)| ≥ C|f(n)| for all n in N satisfying n ≥ n0.

Big Omega Theorem: f ∈ Ω(g) iff lim inf_{n→∞} |f(n)/g(n)| > 0. Proof: If lim inf_{n→∞} |f(n)/g(n)| = C > 0, then for each ε in (0, C) there are at most finitely many positive integers n satisfying |f(n)/g(n)| < C − ε. Thus, there exists an n0 such that |f(n)| ≥ (C − ε)|g(n)| holds for all n ≥ n0, proving that f ∈ Ω(g). The converse follows from the definitions.

Big Theta

Big Theta Notation Let S be a subset of the real numbers (for instance, we can choose S to be the set of natural numbers). If f and g are functions from S to the real numbers, then we write g ∈ Θ(f) if and only if there exists some real number n0 and positive real constants C and C' such that C|f(n)| ≤ |g(n)| ≤ C'|f(n)| for all n in S satisfying n ≥ n0. Thus, Θ(f) = O(f) ∩ Ω(f).

Examples

Sums
1 + 2 + 3 + … + n = n(n+1)/2
1² + 2² + 3² + … + n² = n(n+1)(2n+1)/6
We might prefer some simpler formula, especially when looking at sums of cubes, etc. The first sum is approximately equal to n²/2, as n/2 is much smaller than n²/2 for large n. The second sum is approximately equal to n³/3 plus smaller terms.
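Both closed forms can be checked against direct summation, and the dominance of the leading terms n²/2 and n³/3 can be seen numerically (a finite check, not a proof):

```python
def sum_first(n):
    # 1 + 2 + ... + n = n(n+1)/2
    return n * (n + 1) // 2

def sum_squares(n):
    # 1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6
    return n * (n + 1) * (2 * n + 1) // 6

# Closed forms agree with direct summation.
for n in range(1, 200):
    assert sum_first(n) == sum(range(1, n + 1))
    assert sum_squares(n) == sum(k * k for k in range(1, n + 1))

# The leading terms dominate: for n = 1000, n^2/2 and n^3/3
# are each within 0.2% of the exact sums.
n = 1000
assert abs(sum_first(n) - n**2 / 2) / sum_first(n) < 0.002
assert abs(sum_squares(n) - n**3 / 3) / sum_squares(n) < 0.002
```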

Approximate Formulas (complicated function of n) = (simple function of n) + (bound for the size of the error in terms of n)

Approximate Formulas Instead of 1² + 2² + 3² + … + n² = n³/3 + n²/2 + n/6, we might write 1² + 2² + 3² + … + n² = n³/3 + O(n²).

Approximate Formulas If we write f(n) = g(n) + O(h(n)), then this means that there exists a constant u > 0 and a natural number n0 such that |f(n) − g(n)| ≤ u|h(n)| for all n ≥ n0.

Bold Conjecture 1^k + 2^k + 3^k + … + n^k = n^(k+1)/(k+1) + O(n^k)

Proof Write S(n) = 1^k + 2^k + 3^k + … + n^k. We try to estimate S(n). Since x^k is increasing, each term m^k is at most ∫_m^(m+1) x^k dx, so S(n) < ∫_1^(n+1) x^k dx ≤ (n+1)^(k+1)/(k+1). [The slide's figure: bars of heights 1^k, 2^k, 3^k, … fitting under the curve x^k.]

Proof Write S(n) = 1^k + 2^k + 3^k + … + n^k. Similarly, each term m^k is at least ∫_(m−1)^m x^k dx, so S(n) > ∫_0^n x^k dx = n^(k+1)/(k+1). [The slide's figure: bars of heights 1^k, 2^k, 3^k, … covering the area under the curve x^k.]

Proof We have shown that n^(k+1)/(k+1) < 1^k + 2^k + 3^k + … + n^k < (n+1)^(k+1)/(k+1). Let's subtract n^(k+1)/(k+1) to get 0 < 1^k + 2^k + 3^k + … + n^k − n^(k+1)/(k+1) < ((n+1)^(k+1) − n^(k+1))/(k+1).

Proof <= (some mess involving k) nk It follows that ((n+1) k+1-nk+1 )/(k+1) =(k+1)-1 i=0k+1C(k+1,i)ni- nk+1 <= i=0k C(k+1,i)nk // simplify and enlarge terms <= (some mess involving k) nk It follows that S(n) - nk+1/(k+1) < (mess involving k) nk

End of Proof Since the mess involving k does not depend on n, we have proven that 1^k + 2^k + 3^k + … + n^k = n^(k+1)/(k+1) + O(n^k) holds!
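The result just proved can be sanity-checked numerically. This sketch assumes k = 3 for concreteness and checks on a finite range that the error S(n) − n^(k+1)/(k+1) is positive and within 1 · n^k (for k = 3 the constant u = 1 happens to suffice):

```python
def S(n, k):
    # S(n) = 1^k + 2^k + ... + n^k
    return sum(m**k for m in range(1, n + 1))

k = 3
for n in range(1, 2000):
    err = S(n, k) - n**(k + 1) / (k + 1)
    # For k = 3 the exact error is n^3/2 + n^2/4, which lies in (0, n^3].
    assert 0 < err <= n**k
```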

Harmonic Number The Harmonic number Hn is defined as Hn = 1 + 1/2 + 1/3 + … + 1/n. We have Hn = ln n + γ + O(1/n), where γ is the Euler-Mascheroni constant, γ = 0.5772…
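A quick numerical look at this approximation (the value of γ below is the standard constant truncated to double precision; the check is finite, not a proof):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def H(n):
    # H_n = 1 + 1/2 + ... + 1/n
    return sum(1.0 / k for k in range(1, n + 1))

# The error H_n - (ln n + gamma) is positive and decays roughly like 1/(2n),
# consistent with the O(1/n) error term.
for n in (10, 100, 1000, 10000):
    err = H(n) - (math.log(n) + GAMMA)
    assert 0 < err < 1.0 / n
```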

log n! Recall that 1! = 1 and n! = (n−1)! · n. Theorem: log n! = Θ(n log n). Proof: log n! = log 1 + log 2 + … + log n ≤ log n + log n + … + log n = n log n. Hence, log n! = O(n log n).

log n! On the other hand, log n! = log 1 + log 2 + … + log n ≥ log((n+1)/2) + … + log n ≥ ((n+1)/2) log((n+1)/2) ≥ (n/2) log(n/2) = Ω(n log n). For the last step, note that lim inf_{n→∞} ((n/2) log(n/2))/(n log n) = 1/2.
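Both bounds can be sketched numerically; math.lgamma(n + 1) computes ln(n!) without overflowing, and since Θ is insensitive to the base of the logarithm, natural logarithms are used throughout:

```python
import math

# log n! is sandwiched between (n/2) log(n/2) and n log n,
# so log n! = Theta(n log n).
for n in (4, 16, 256, 4096, 65536):
    log_fact = math.lgamma(n + 1)  # ln(n!)
    assert (n / 2) * math.log(n / 2) <= log_fact <= n * math.log(n)
```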

Reading Assignment Read Chapters 1-3 in [CLRS]. Chapter 1 introduces the notion of an algorithm, Chapter 2 analyzes some sorting algorithms, and Chapter 3 introduces Big Oh notation.