Analysis of Algorithms: Methods and Examples


Analysis of Algorithms: Methods and Examples
CSE 2320 – Algorithms and Data Structures
Alexandra Stefan
Based on presentations by Vassilis Athitsos and Bob Weems, University of Texas at Arlington
Updated 9/11/2016

Reading: CLRS, Chapter 3 (all).
Notation: f(n) = Θ(n²) (when the dominant term is n²).
Given a function, e.g. f(n) = 15n³ + 7n² + 3n + 20, find Theta:
- find the dominant term: 15n³
- ignore the constant coefficient: n³
- conclude f(n) = Θ(n³)

Counting instructions – part 1
Count in detail the total number of instructions executed by each of the following pieces of code:

// Example A. Notice the ; at the end of the for loop.
for (i = 0; i < n; i++)
    ;
----------------------------------------------------------------------
// Example B (source: Dr. Bob Weems)
for (i = 0; i < n; i++)
    for (j = 0; j < s; j++) {
        c[i][j] = 0;
        for (k = 0; k < t; k++)
            c[i][j] += a[i][k]*b[k][j];
    }

Counting instructions – part 1 (solution)
General pattern, assuming the loop condition is true n times (the test runs n+1 times: true n times, false once):
for (init; cond; update)
    body
→ 1 + 1 + n*(1 + 1 + body_instr_count)     (init, failing test; per iteration: test, update, body)

// Example A. Notice the ; at the end of the for loop.
for (i = 0; i < n; i++)
    ;
→ 1 + 1 + n*(1 + 1 + 0) = 2 + 2n
----------------------------------------------------------------------
// Example B (source: Dr. Bob Weems)
for (i = 0; i < n; i++)         → 1 + 1 + n*(1 + 1 + [2 + 5s + 3st]) = 2 + n*(4 + 5s + 3st) = 2 + 4n + 5ns + 3nst
    for (j = 0; j < s; j++) {   → 1 + 1 + s*(1 + 1 + 1 + [2 + 3t]) = 2 + s*(5 + 3t) = 2 + 5s + 3st
        c[i][j] = 0;
        for (k = 0; k < t; k++) → 1 + 1 + t*(1 + 1 + 1) = 2 + 3t
            c[i][j] += a[i][k]*b[k][j];
    }
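The dominant term above is 3nst, i.e., the work is proportional to the number of executions of the innermost statement. A minimal runnable sketch (my own, not from the slides; the sizes N, S, T are arbitrary assumptions) that checks the innermost statement of Example B runs exactly n*s*t times:

#include <stdio.h>

#define N 4
#define S 3
#define T 5

int main(void) {
    double a[N][T] = {{0}}, b[T][S] = {{0}}, c[N][S];
    long inner = 0;                            /* counts executions of the innermost statement */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < S; j++) {
            c[i][j] = 0;
            for (int k = 0; k < T; k++) {
                c[i][j] += a[i][k] * b[k][j];  /* the innermost statement */
                inner++;
            }
        }
    printf("innermost statement ran %ld times; N*S*T = %d\n", inner, N * S * T);
    return 0;
}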

Counting instructions – part 2 (n² time complexity and its justification, review)

// Example G. T(n) = Θ(n²)
for (i = 0; i < n; i++)
    for (j = 0; j < n; j = j+1)
        printf("A");
The printf runs n times for each of the n values of i: n·n = n² → Θ(n²)
----------------------------------------------------------------------
// Example H (similar to selection sort processing). T(n) = Θ(n²)
for (i = 0; i < (n-1); i++)
    for (j = i+1; j < n; j = j+1)
        ...
Inner iterations: (n-1) + (n-2) + … + 2 + 1 = n(n-1)/2 → Θ(n²)

// Example I (similar to insertion sort processing). T(n) = Θ(n²)
for (i = 1; i < n; i++)
    for (j = i-1; j >= 0; j = j-1)
        ...
Inner iterations: 1 + 2 + … + (n-2) + (n-1) = n(n-1)/2 → Θ(n²)
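A minimal runnable sketch (my own, not from the slides) that counts how many times the inner body of Example H actually runs and compares it with the closed form n(n-1)/2:

#include <stdio.h>

int main(void) {
    int n = 10;
    long count = 0;
    for (int i = 0; i < n - 1; i++)
        for (int j = i + 1; j < n; j++)
            count++;                           /* stands in for the inner-loop body */
    printf("measured: %ld, formula n(n-1)/2: %d\n", count, n * (n - 1) / 2);
    return 0;
}

For n = 10 both numbers are 45; the same counter placed in Example I gives the identical total.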

Time complexity – part 1

// Example C (source: Dr. Bob Weems). T(N) = …
for (i = 0; i < N; i++)
    for (j = 1; j < N; j = j+j)
        printf("A");
----------------------------------------------------------------------
// Example D. T(N) = …
for (j = 1; j < N; j = j*2)

// Example E. T(N) = …
for (j = N; j >= 1; j = j/2)

// Example F. T(N) = …
for (j = 1; j < N; j = j+2)

Time complexity – part 1 (solutions)

// Example C (source: Dr. Bob Weems). T(N) = Θ(N lg N)
for (i = 0; i < N; i++)            // runs N times
    for (j = 1; j < N; j = j+j)    // j: 1, 2, 4, …, N/2 — lg N values
        printf("A");
----------------------------------------------------------------------
// Example D. j: 1, 2, 4, …, N/2 — lg N values; nested inside the N-iteration outer loop over i: T(N) = Θ(N lg N)
for (j = 1; j < N; j = j*2)

// Example E. j: N, N/2, …, 2, 1 — (1 + lg N) values; with the outer loop: T(N) = Θ(N lg N)
for (j = N; j >= 1; j = j/2)

// Example F. j: 1, 3, 5, 7, …, N-4, N-2 — N/2 values; with the outer loop: T(N) = Θ(N²)
for (j = 1; j < N; j = j+2)
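To see where the lg N comes from, here is a minimal sketch (my own, not from the slides) that counts the iterations of the doubling loop from Example D and compares them with lg N:

#include <stdio.h>
#include <math.h>

int main(void) {
    int N = 1024;
    int count = 0;
    for (int j = 1; j < N; j = j * 2)          /* j: 1, 2, 4, ..., N/2 */
        count++;
    printf("iterations: %d, lg N = %.0f\n", count, log2((double)N));
    return 0;
}

For N = 1024 both values are 10; doubling N adds only one more iteration.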

Time complexity – part 2
Source: code section from the LU decomposition code in notes02.pdf by Dr. Weems.

1. for (k=1; k<=n-1; k++) {
2.    …
3.    for (j=k+1; j<=n; j++) {
4.       temp=ab[ell][j];
5.       ab[ell][j]=ab[k][j];
6.       ab[k][j]=temp;
7.       for (i=k+1; i<=n; i++)
8.          ab[i][j] -= ab[i][k]*ab[k][j];
      }
   …

Fill in the table: for each value of k (k = 1, 2, …, n-2, n-1), give the range of j (loop 3), the range of i (loop 7), the number of executions of line 8 (nested in loops 3 and 7), and the total over all k.

Time complexity – part 2 (solution)
Source: code section from the LU decomposition code: “Notes 2: Growth of functions” by Dr. Weems.

1. for (k=1; k<=n-1; k++) {
2.    …
3.    for (j=k+1; j<=n; j++) {
4.       temp=ab[ell][j];
5.       ab[ell][j]=ab[k][j];
6.       ab[k][j]=temp;
7.       for (i=k+1; i<=n; i++)
8.          ab[i][j] -= ab[i][k]*ab[k][j];
      }
   …

k     | j (loop 3)            | i (loop 7)            | executions of line 8 (nested in 3 & 7)
1     | 2 → n (n-1 values)    | 2 → n (n-1 values)    | (n-1)²
2     | 3 → n (n-2 values)    | 3 → n (n-2 values)    | (n-2)²
…     | …                     | …                     | …
n-2   | n-1 → n (2 values)    | n-1 → n (2 values)    | 2²
n-1   | n → n (1 value)       | n → n (1 value)       | 1²
total |                       |                       | 1² + 2² + … + (n-1)²

See the approximation of summations by integrals example for the solution to this sum.
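A worked version of that total (a sketch using the sum-of-squares formula; the course notes approximate it with an integral):

\sum_{k=1}^{n-1} (n-k)^2 \;=\; \sum_{j=1}^{n-1} j^2 \;=\; \frac{(n-1)\,n\,(2n-1)}{6} \;=\; \Theta(n^3),
\qquad\text{and indeed}\qquad
\sum_{j=1}^{n-1} j^2 \;\approx\; \int_0^{n} x^2\,dx \;=\; \frac{n^3}{3}.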

Estimate runtime
Problem: the total number of instructions in a program (or a piece of a program) is 10¹² and it runs on a computer that executes 10⁹ instructions per second. How long will it take to run this program? Give the answer in seconds. If it is very large, convert it to larger units (hours, days, years).
Summary:
Total instructions: 10¹²
Speed: 10⁹ instructions/second
Answer:
Time = (total instructions)/speed = (10¹² instructions) / (10⁹ instr/sec) = 10³ seconds ≈ 17 minutes
Note that this computation is similar to computing the time it takes to travel a certain distance (e.g. 120 miles) given the speed (e.g. 60 miles/hour).

Estimate runtime
A slightly different way to formulate the same problem: the total number of instructions in a program (or a piece of a program) is 10¹² and it runs on a computer that executes one instruction in one nanosecond (10⁻⁹ seconds). How long will it take to run this program? Give the answer in seconds. If it is very large, convert it to larger units (hours, days, years).
Summary:
10¹² total instructions
10⁻⁹ seconds per instruction
Answer:
Time = (total instructions) * (seconds per instruction) = (10¹² instructions) * (10⁻⁹ sec/instr) = 10³ seconds ≈ 17 minutes

Motivation for Big-Oh Notation
Scenario: a client requests an application for sorting their records. Which one of the 3 sorting algorithms that we discussed will you implement?
- Selection sort
- Insertion sort
- Merge sort

Comparing algorithms
Comparing linear, N lg N, and quadratic complexity: quadratic-time algorithms become impractical (too slow) much sooner than linear and N lg N algorithms. Of course, what we consider "impractical" depends on the application; some applications are more tolerant of longer running times.

N                  | N lg N          | N²
10⁶ (1 million)    | ≈ 20 million    | 10¹² (one trillion)
10⁹ (1 billion)    | ≈ 30 billion    | 10¹⁸ (one quintillion)
10¹² (1 trillion)  | ≈ 40 trillion   | 10²⁴ (one septillion)

Where the lg N values come from:
N = 1000:           lg N ≈ 10
N = 10⁶ = 1000²:    lg N ≈ 2·10 = 20
N = 10⁹ = 1000³:    lg N ≈ 3·10 = 30
N = 10¹² = 1000⁴:   lg N ≈ 4·10 = 40
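A minimal sketch (my own, not from the slides) that regenerates the table; doubles are used because N² at N = 10¹² does not fit in a 64-bit integer:

#include <stdio.h>
#include <math.h>

int main(void) {
    double sizes[] = {1e6, 1e9, 1e12};
    for (int i = 0; i < 3; i++) {
        double N = sizes[i];
        printf("N = %.0e   N lg N = %.1e   N^2 = %.0e\n", N, N * log2(N), N * N);
    }
    return 0;
}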

Motivation for Big-Oh Notation
Given an algorithm, we want to find a function that describes its time performance.
Computing the number of instructions in detail is NOT desired:
- It is complicated, and the details are not relevant.
- The number of machine instructions and the runtime depend on factors other than the algorithm: the programming language, compiler optimizations, and the performance of the computer it runs on (CPU, memory).
- There are details that we would actually NOT want this function to include, because they can make the function unnecessarily complicated.
When comparing two algorithms we want to see which one is better for very large data – the asymptotic behavior. It is not very important what happens for small-size data.
Asymptotic behavior = rate of growth = order of growth.
The Big-Oh notation describes the asymptotic behavior and greatly simplifies algorithmic analysis.

Asymptotic Notation
Goal: we want to be able to say things like:
- Selection sort will take time strictly proportional to n²: Θ(n²).
- Insertion sort will take time at most proportional to n²: O(n²). Note that we can still say that insertion sort is Θ(n²); use big-Oh for upper-bounding complex functions of n.
- Any sorting algorithm will take time at least proportional to n: Ω(n).
Give examples of math functions that are: Θ(n²): …  O(n²): …  Ω(n²): …
Informal definition: f(n) grows 'proportionally' to g(n) if
    lim_{n→∞} f(n)/g(n) = c ≠ 0   (c is a non-zero constant).
Intuition: Θ is like "=" (tight bound), O is like "≤" (upper bound; big-Oh – bigger), Ω is like "≥" (lower bound).

Abuse of notation
Instead of: f(n) ∈ O(g(n))   (O(g(n)) is a set of functions)
We may use: f(n) = O(g(n))

Big-Oh
A function f(n) is said to be O(g(n)) if there exist positive constants c0 and n0 such that: f(n) ≤ c0·g(n) for all n ≥ n0.
Theorem: if lim_{n→∞} f(n)/g(n) = c is a constant, then f(n) = O(g(n)).
Typically, f(n) is the running time of an algorithm. This can be a rather complicated function. We try to find a g(n) that is simple (e.g. n²) and such that f(n) = O(g(n)).

Asymptotic Bounds and Notation (CLRS chapter 3)
f(n) is O(g(n)) if there exist positive constants c0 and n0 such that: f(n) ≤ c0·g(n) for all n ≥ n0.
Theorem: if lim_{n→∞} f(n)/g(n) = c is a constant, then f(n) = O(g(n)).
g(n) is an asymptotic upper bound for f(n).

f(n) is Ω(g(n)) if there exist positive constants c0 and n0 such that: c0·g(n) ≤ f(n) for all n ≥ n0.
Theorem: if lim_{n→∞} g(n)/f(n) = c is a constant, then f(n) = Ω(g(n)).
g(n) is an asymptotic lower bound for f(n).

f(n) is Θ(g(n)) if there exist positive constants c0, c1 and n0 such that: c0·g(n) ≤ f(n) ≤ c1·g(n) for all n ≥ n0.
Theorem: if lim_{n→∞} f(n)/g(n) = c ≠ 0 is a constant, then f(n) = Θ(g(n)).
g(n) is an asymptotic tight bound for f(n).

Asymptotic Bounds and Notation (CLRS chapter 3)
“little-oh”: o
f(n) is o(g(n)) if for any constant c0 > 0 there exists n0 s.t.: f(n) ≤ c0·g(n) for all n ≥ n0.
Theorem: if lim_{n→∞} f(n)/g(n) = 0, then f(n) = o(g(n)).
g(n) is an asymptotic upper bound for f(n) (but NOT tight). E.g.: n = o(n²), n = o(n lg n), n² = o(n⁴), …

“little-omega”: ω
f(n) is ω(g(n)) if for any constant c0 > 0 there exists n0 s.t.: c0·g(n) ≤ f(n) for all n ≥ n0.
Theorem: if lim_{n→∞} f(n)/g(n) = ∞, then f(n) = ω(g(n)).
g(n) is an asymptotic lower bound for f(n) (but NOT tight). E.g.: n² = ω(n), n lg n = ω(n), n³ = ω(n²), …

L’Hospital’s Rule
If lim_{n→∞} f(n) and lim_{n→∞} g(n) are both 0 or both ∞, and if lim_{n→∞} f′(n)/g′(n) is a constant or ∞, then
    lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n)
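As a quick illustration (my own example, not from the slides), L’Hospital’s rule shows that lg n grows more slowly than n:

\lim_{n\to\infty}\frac{\lg n}{n}
  = \lim_{n\to\infty}\frac{(\lg n)'}{(n)'}
  = \lim_{n\to\infty}\frac{1/(n\ln 2)}{1} = 0,
\qquad\text{so } \lg n = o(n) \text{ and therefore } \lg n = O(n).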

Theta vs Big-Oh
The Theta notation is more strict than the Big-Oh notation:
TRUE:  n² = O(n¹⁰⁰)
FALSE: n² = Θ(n¹⁰⁰)

Simplifying Big-Oh Notation
Suppose that we are given this running time: f(n) = 35n² + 41n + lg(n) + 1532.
How can we express f(n) in Big-Oh notation?

Asymptotic Notation
Let f(n) = 35n² + 41n + lg(n) + 1532. We say that f(n) = O(n²).
Also correct, but too detailed (do not use them): f(n) = O(n² + n), f(n) = O(35n² + 41n + lg(n) + 1532).
In recurrence formulas and proofs you may see these notations (see CLRS, page 49):
- f(n) = 2n² + Θ(n) means: there is a function h(n) in Θ(n) s.t. f(n) = 2n² + h(n).
- 2n² + Θ(n) = Θ(n²) means: for any function h(n) in Θ(n), there is a function g(n) in Θ(n²) s.t. 2n² + h(n) = g(n); i.e., for any h(n) in Θ(n), 2n² + h(n) is in Θ(n²).

Proofs using the definition: O
Let f(n) = 35n² + 41n + lg(n) + 1532. Show (using the definition) that f(n) = O(n²).
Proof: we want to find n0 and c0 s.t., for all n ≥ n0: f(n) ≤ c0·n².
Version 1: upper-bound each non-dominant term by n² for large enough n (41n ≤ n² once n ≥ 41, lg(n) ≤ n² for n ≥ 1, and 1532 ≤ n² once n ≥ 40):
f(n) = 35n² + 41n + lg(n) + 1532 ≤ 35n² + n² + n² + n² = 38n²
Use c0 = 38, n0 = 1536: f(n) = 35n² + 41n + lg(n) + 1532 ≤ 38n², for all n ≥ 1536.
Version 2: you can also pick c0 large enough to cover the coefficients of all the terms: c0 = 1609 = 35 + 41 + 1 + 1532, n0 = 1.

Proofs using the definition: Ω, Θ
Let f(n) = 35n² + 41n + lg(n) + 1532. Show (using the definition) that f(n) = Ω(n²) and f(n) = Θ(n²).
Proof of Ω: we want to find n1 and c1 s.t., for all n ≥ n1: f(n) ≥ c1·n².
Use c1 = 1, n1 = 1: f(n) = 35n² + 41n + lg(n) + 1532 ≥ n², for all n ≥ 1.
Proof of Θ:
Version 1: we have proved f(n) = O(n²) and f(n) = Ω(n²), and so f(n) = Θ(n²) (property 4, page 26).
Version 2: we found c0 = 38, n0 = 1536 and c1 = 1, n1 = 1 s.t. f(n) ≤ 38n² for all n ≥ 1536 and f(n) ≥ n² for all n ≥ 1.
Therefore n² ≤ f(n) ≤ 38n² for all n ≥ 1536, and hence f(n) = Θ(n²).

Polynomial functions
If f(n) is a polynomial function, then it is Θ of its dominant term.
E.g. f(n) = 15n³ + 7n² + 3n + 20; find g(n) s.t. f(n) = Θ(g(n)):
find the dominant term: 15n³; ignore the constant coefficient, leaving n³; so g(n) = n³ and f(n) = Θ(n³).

Properties of O, Ω and Θ
1. f(n) = O(g(n)) ⇒ g(n) = Ω(f(n))
2. f(n) = Ω(g(n)) ⇒ g(n) = O(f(n))
3. f(n) = Θ(g(n)) ⇒ g(n) = Θ(f(n))
4. If f(n) = O(g(n)) and f(n) = Ω(g(n)), then f(n) = Θ(g(n))
5. If f(n) = Θ(g(n)), then f(n) = O(g(n)) and f(n) = Ω(g(n))
Transitivity (proved in later slides):
6. If f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)).
7. If f(n) = Ω(g(n)) and g(n) = Ω(h(n)), then f(n) = Ω(h(n)).

Using Limits
If lim_{n→∞} f(n)/g(n) = c is a non-zero constant, then g(n) = ____(f(n)). In this definition, both zero and infinity are excluded. In this case we can also say that f(n) = Θ(g(n)); this can easily be proved using the limit or the reflexivity property of Θ.
If lim_{n→∞} f(n)/g(n) = c is a constant, then g(n) = ____(f(n)). "Constant" includes zero, but not infinity.
If lim_{n→∞} f(n)/g(n) = ∞, then g(n) = ____(f(n)). f(n) grows much faster than g(n).
If lim_{n→∞} g(n)/f(n) is a constant, then g(n) = ____(f(n)). "Constant" includes zero, but does NOT include infinity.

Using Limits
If lim_{n→∞} f(n)/g(n) = c is a non-zero constant, then f(n) = Θ(g(n)). In this definition, both zero and infinity are excluded. In this case we can also say that g(n) = Θ(f(n)); this can easily be proved using the limit or the reflexivity property of Θ.
If lim_{n→∞} g(n)/f(n) = c is a constant, then f(n) = Ω(g(n)). "Constant" includes zero, but not infinity.
If lim_{n→∞} g(n)/f(n) = ∞, then f(n) = O(g(n)). g(n) grows much faster than f(n).
If lim_{n→∞} f(n)/g(n) is a constant, then f(n) = O(g(n)). "Constant" includes zero, but does NOT include infinity.

Using Limits: Example 1
Suppose that we are given this running time: f(n) = 35n² + 41n + lg(n) + 1532.
Use the limits theorem to show that f(n) = O(n²).

Using Limits: Example 2
Show that (n⁵ + 3n⁴ + 2n³ + n² + n + 12) / (5n³ + n + 3) = Θ(???).

Using Limits: An Example
Show that (n⁵ + 3n⁴ + n³ + 2n² + n + 12) / (5n³ + n + 3) = Θ(n²).
Proof: here f(n) = (n⁵ + 3n⁴ + n³ + 2n² + n + 12) / (5n³ + n + 3). Let g(n) = n².
We want to show that lim_{n→∞} f(n)/g(n) = c ≠ 0, and so f(n) = Θ(g(n)).
lim_{n→∞} f(n)/g(n) = lim_{n→∞} [(n⁵ + 3n⁴ + n³ + 2n² + n + 12) / (5n³ + n + 3)] · (1/n²)
                    = lim_{n→∞} (n⁵ + 3n⁴ + n³ + 2n² + n + 12) / (5n⁵ + n³ + 3n²)
Solution 1: divide numerator and denominator by n⁵; all lower-order terms vanish, so the limit is 1/5.
Solution 2 (L'Hospital, applied repeatedly):
lim_{n→∞} f(n)/g(n) = lim_{n→∞} (5n⁴ + 12n³ + 3n² + 4n + 1) / (25n⁴ + 3n² + 6n) = … = lim_{n→∞} (5·4·3·2·n) / (5·5·4·3·2·n) = 1/5

Big-Oh Hierarchy
1 = O(lg(n)),  lg(n) = O(n),  n = O(n²), …
If 0 ≤ c ≤ d, then n^c = O(n^d). Higher-order polynomials always get larger than lower-order polynomials, eventually.
For any d, if c > 1, then n^d = O(c^n). Exponential functions always get larger than polynomial functions, eventually.
You can use these facts in your assignments. You can apply transitivity to derive other facts, e.g., that lg(n) = O(n²).
Hierarchy: O(1), O(lg n), O(√n), O(n), O(n lg n), O(n²), …, O(n^c), …, O(c^n)

Big-Oh Transitivity
If f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)).
Proof:

Big-Oh Transitivity
If f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)).
Proof: we want to find c3 and n3 s.t. f(n) ≤ c3·h(n) for all n ≥ n3.
We know:
f(n) = O(g(n)) ⇒ there exist c1, n1 s.t. f(n) ≤ c1·g(n) for all n ≥ n1
g(n) = O(h(n)) ⇒ there exist c2, n2 s.t. g(n) ≤ c2·h(n) for all n ≥ n2
Then f(n) ≤ c1·g(n) ≤ c1·c2·h(n) for all n ≥ max(n1, n2).
Use c3 = c1·c2 and n3 = max(n1, n2).

Using Substitutions
If lim_{x→∞} h(x) = ∞, then: f(x) = O(g(x)) ⇒ f(h(x)) = O(g(h(x))). (This can be proved.)
How do we use that? For example, prove that lg(n) = O(√n).

Using Substitutions
If lim_{x→∞} h(x) = ∞, then: f(x) = O(g(x)) ⇒ f(h(x)) = O(g(h(x))).
How do we use that? For example, prove that lg(n) = O(√n).
Use h(n) = √n. We get: lg(n) = O(n) ⇒ lg(√n) = O(√n); since lg(√n) = (1/2)·lg(n), it follows that lg(n) = O(√n).

Example Problem 1
Is n = O(n²·sin(n))?
Answer:

Example Problem 2
Show that max(f(n), g(n)) is Θ(f(n) + g(n)).
Show O:

Asymptotic notation for two parameters (CLRS)
f(n,m) is O(g(n,m)) if there exist positive constants c0, n0 and m0 such that: f(n,m) ≤ c0·g(n,m) for all pairs (n,m) s.t. either n ≥ n0 or m ≥ m0.

Useful logarithm properties
c^(lg n) = n^(lg c)
Proof: apply lg to both sides and you get two equal expressions:
lg(c^(lg n)) = lg(n^(lg c))  ⇒  lg(n)·lg(c) = lg(n)·lg(c)
Can we also say that c^n = n^c? NO!
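A minimal sketch (my own, not from the slides; the sample values c = 3 and n = 100 are arbitrary) that checks the identity numerically and shows that c^n and n^c are very different:

#include <stdio.h>
#include <math.h>

int main(void) {
    double c = 3.0, n = 100.0;
    double lhs = pow(c, log2(n));   /* c^(lg n) */
    double rhs = pow(n, log2(c));   /* n^(lg c) */
    printf("c^(lg n) = %.6f, n^(lg c) = %.6f\n", lhs, rhs);
    printf("c^n = %.3e, but n^c = %.3e\n", pow(c, n), pow(n, c));
    return 0;
}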

Summary
Definitions
Properties: transitivity, reflexivity, …
Using limits
Big-Oh hierarchy
Substitution
Example problems
Asymptotic notation for two parameters
c^(lg n) = n^(lg c)  (note the lg in the exponent; c^n ≠ n^c)