Complexity & the O-Notation


Computability So far we have talked about Turing machines that decide languages and compute functions. We only cared about making the machine decide the language or compute the function; we didn't care about the machine's performance.

Time Complexity The time complexity of a machine is the number of transitions it takes on input x in order to compute the function f(x), or to accept or reject the input x. The time is counted in terms of the length of the input, so the time complexity is a function t: N → N.

Worst and average cases We say that a Turing machine has worst-case time complexity t(n) if for every possible input x of length n the machine needs at most t(n) transitions in order to compute f(x), or to decide whether x is in the language. For the average case, we consider the running times on every possible input of length n and take their average.

Space Complexity The space complexity of a Turing machine is defined very similarly to the time complexity: instead of the number of transitions, we count the number of explored cells. In the following examples the unexplored cells are shown in white and the explored ones in blue.
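Both resource counts can be made concrete with a small simulator. The sketch below uses an assumed textbook-style encoding of a single-tape machine (a dict mapping (state, symbol) to (state, symbol written, head move)); the state names q0/qa/qr and the toy machine at the end are illustrative, not taken from these slides.

```python
# Minimal sketch of a single-tape machine runner that counts the two
# resources defined above: transitions taken and cells the head visits.

def run(delta, tape, state="q0", accept="qa", reject="qr", limit=10_000):
    tape = list(tape)
    head, steps, explored = 0, 0, {0}
    while state not in (accept, reject) and steps < limit:
        symbol = tape[head] if head < len(tape) else "_"
        if head == len(tape):
            tape.append("_")          # grow the tape with a blank cell
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head = max(0, head + (1 if move == "R" else -1))
        explored.add(head)
        steps += 1
    return state, steps, len(explored)

# A toy machine that scans right over 1s and accepts at the first blank:
delta = {("q0", "1"): ("q0", "1", "R"),
         ("q0", "_"): ("qa", "_", "R")}
state, steps, cells = run(delta, "1111")
print(state, steps, cells)  # qa 5 6: time and space both linear in |x|
```

Counting the cell the head finally rests on as "explored" is one modeling choice among several; the asymptotic counts are unaffected.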

Example: Put a $ before the input One solution: Repeat: erase the first symbol from the input and write it at the end of the output, until there is no symbol left in the input. Then place a $. Tape: 1 1 1

Example: Put a $ before the input After repeating this for every input symbol, the tape reads: $ 1 1 1

Example: Put a $ before the input The number of transitions this solution needs: you repeat the following procedure n times (since n is the input length): erase the first symbol of the input, move across the input and the output (about n transitions), write the symbol in the last position of the output, and move back across the output and the input (about n more transitions). So the total number of transitions is about 2n².
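The count above can be re-enacted in plain Python. This is not a Turing machine, just a tally of the head movements the slide describes (one erase, about n cells right, one write, about n cells back, per round); the exact constants are only indicative, but the quadratic trend is the point.

```python
# Rough transition count for the "erase first, append last" strategy.

def shift_by_copy_steps(n):
    steps = 0
    for i in range(n):            # n rounds, one per input symbol
        steps += 1                # erase the first remaining symbol
        steps += n - 1            # walk right to the end of the output
        steps += 1                # write the symbol there
        steps += n - 1            # walk back to the input
    steps += 1                    # finally place the $
    return steps

for n in (10, 100, 1000):
    print(n, shift_by_copy_steps(n), 2 * n * n)  # tracks 2n² closely
```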

Example: Put a $ before the input The number of explored cells is exactly 2n + 1.

Example: Put a $ before the input Another solution: Erase the first symbol, remember it, and replace it with a $. Then repeat: move one cell right and swap this cell's symbol with the remembered one, until the input is consumed (you see a blank space); write the remembered symbol there. Tape: 1 1

Example: Put a $ before the input Carrying the remembered symbol one cell to the right at each step, the tape ends as: $ 1 1

Example: Put a $ before the input The number of transitions this solution needs: one to replace the first symbol with a $, and then you repeat the following n times (since n is the input length): replace the symbol you see with the one remembered from the cell to its left. So the total number of transitions is n + 1.
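A quick sketch of this second solution in plain Python, counting one "transition" per cell visited. Representing the tape as a string is an assumption made for illustration only.

```python
# The "remember and shift right" solution: stamp the $, then carry the
# displaced symbol one cell to the right until the blank after the input.

def shift_right_with_memory(s):
    tape = list(s)
    remembered = tape[0]
    tape[0] = "$"                     # one transition: stamp the $
    steps = 1
    for i in range(1, len(s) + 1):
        if i < len(s):
            # swap this cell's symbol with the remembered one
            tape[i], remembered = remembered, tape[i]
        else:
            tape.append(remembered)   # the blank cell after the input
        steps += 1
    return "".join(tape), steps

out, steps = shift_right_with_memory("110")
print(out, steps)  # $110 4, i.e. n + 1 steps for n = 3
```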

Example: Put a $ before the input The number of explored cells is exactly n + 1.

Example: Put a $ before the input The most efficient solution: Just move one cell to the left of the input and place a $ there. Tape: 1 1

Example: Put a $ before the input After one left move and one write, the tape reads: $ 1 1

Example: Put a $ before the input The number of transitions is 2, and the number of explored cells is 2. The time and space complexity of this machine do not depend on the input: they are, as we say, "constant". This is a very rare phenomenon! Usually we need at least to read the whole input, so we need time and space at least n.

O-Notation It is not always easy to count the exact complexity of a Turing machine. Furthermore, sometimes we are simply not interested in the exact number of transitions made or cells explored. In those cases we perform what is called "asymptotic analysis".

Asymptotic Analysis In asymptotic analysis we completely ignore additive and multiplicative constants. We also don't care about small values: we want to see how the machine performs on large inputs.

O-Notation We say that a function f is O(g(n)) (or that f is upper bounded by g) if there is a constant c > 0 and an integer n0 such that for all n ≥ n0, f(n) ≤ c∙g(n).

O-Notation We say that a function f is Ω(g(n)) (or that f is lower bounded by g) if there is a constant c > 0 and an integer n0 such that for all n ≥ n0, f(n) ≥ c∙g(n).

O-Notation We say that a function f is Θ(g(n)) (or that f is upper and lower bounded by g) if there are constants c1, c2 > 0 and an integer n0 such that for all n ≥ n0, c1∙g(n) ≤ f(n) ≤ c2∙g(n).
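The Θ definition can be spot-checked numerically for a concrete function. The function f(n) = 3n² + 10n below is a hypothetical example (not from these slides); it should be Θ(n²) with witnesses such as c1 = 3, c2 = 4, n0 = 10.

```python
# Checking the Θ inequality c1·g(n) ≤ f(n) ≤ c2·g(n) over a finite range
# for the illustrative function f(n) = 3n² + 10n and g(n) = n².

def f(n):
    return 3 * n * n + 10 * n

c1, c2, n0 = 3, 4, 10
ok = all(c1 * n * n <= f(n) <= c2 * n * n for n in range(n0, 2000))
print(ok)  # True: the inequality holds at every checked n ≥ n0
```

A finite scan of course only illustrates the definition; the "for all n ≥ n0" part is established by algebra (3n² + 10n ≤ 4n² exactly when n ≥ 10).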

O-Notation - Properties If f = O(g) then g = Ω(f): since f(n) = O(g(n)), there are a constant c and an integer n0 such that for all n ≥ n0, f(n) ≤ c∙g(n). That means that for all n ≥ n0, g(n) ≥ (1/c)∙f(n), so g(n) = Ω(f(n)). If f = O(g) and f = Ω(g) then f = Θ(g): since f(n) = Ω(g(n)), there are a constant c1 and an integer n1 such that for all n ≥ n1, f(n) ≥ c1∙g(n); since f(n) = O(g(n)), there are a constant c2 and an integer n2 such that for all n ≥ n2, f(n) ≤ c2∙g(n). Take n0 = max{n1, n2}. Then for all n ≥ n0, c1∙g(n) ≤ f(n) ≤ c2∙g(n), so f(n) = Θ(g(n)).

The o- and ω- symbols We say that a function f is o(g(n)) if for every constant c > 0 there is an integer n0 such that for all n ≥ n0, f(n) < c∙g(n). We say that a function f is ω(g(n)) if for every constant c > 0 there is an integer n0 such that for all n ≥ n0, f(n) > c∙g(n).

The o- and ω- symbols Another way to prove that f is o(g) or ω(g) is by using limits. A function f is o(g(n)) if lim n→∞ f(n)/g(n) = 0. A function f is ω(g(n)) if lim n→∞ f(n)/g(n) = ∞.

The o- and ω- symbols o and ω have the same relation to O and Ω as < and > have to ≤ and ≥, and Θ stands for ≈. If f = O(g) but f ≠ Θ(g), then we say that f = o(g); similarly, if f = Ω(g) but f ≠ Θ(g), then f = ω(g). It is important to understand that if a function f is o(g), this doesn't mean that f is less than g for every input, but that there is some point after which f is always less than g.

Examples 10n = O(n²), because for c = 1 and n0 = 10, 10n ≤ n² for all n ≥ n0. Furthermore, 10n is o(n²), because for every c > 0 there is an n0 (namely n0 = 10/c) such that 10n ≤ c∙n² for all n ≥ n0. Another way to see this is by taking the limit: lim n→∞ 10n/n² = 0. The function n² is considered greater than 10n despite the fact that for some small inputs (like n = 2) 10n > n².
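The claimed witnesses can be verified by brute force over a finite range (which, as with any finite scan, only spot-checks the "for all n ≥ n0" part):

```python
import math

# 10n = O(n²) with the witnesses c = 1, n0 = 10:
big_o = all(10 * n <= n * n for n in range(10, 5000))

# 10n = o(n²): for each sampled c, n0 = ceil(10/c) works from there on.
little_o = True
for c in (0.5, 0.1, 0.01):
    n0 = math.ceil(10 / c)
    little_o = little_o and all(10 * n <= c * n * n
                                for n in range(n0, n0 + 2000))

print(big_o, little_o)
```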

Examples 1000n² = o(2^n). That is because lim n→∞ 1000n²/2^n = 0.
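Evaluating the ratio 1000n²/2^n at a few points shows it collapsing toward 0:

```python
# Each sampled ratio is orders of magnitude smaller than the one before,
# numerically illustrating lim 1000n²/2^n = 0.
ratios = [1000 * n * n / 2**n for n in (10, 20, 30, 40)]
print(ratios)
```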

Examples log2 n = Θ(log10 n). There is a property of logarithms: logx n = logy n ∙ logx y. So log2 n = log2 10 ∙ log10 n. But log2 10 is a constant, with 3 ≤ log2 10 ≤ 4. So for all n ≥ 1, 3∙log10 n ≤ log2 n ≤ 4∙log10 n. Logarithms of different bases are only a constant factor apart, so whenever we write log n as a complexity function we generally omit the base.
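The constant ratio between the two logarithms is easy to confirm numerically:

```python
import math

# log2(n) / log10(n) is the same for every n, namely log2(10) ≈ 3.32,
# which sits between the bounds 3 and 4 used above.
ratios = {n: math.log2(n) / math.log10(n) for n in (10, 1000, 10**6)}
print(ratios)
```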

O- relation of several functions The polynomial ak∙n^k + ak-1∙n^(k-1) + … + a1∙n + a0 is Θ(n^k) (the highest power dominates). If c1, c2 are constants with 0 < c1 < c2, then n^c1 = o(n^c2). If c1, c2 are constants with 1 < c1 < c2, then c1^n = o(c2^n). For any constants c > 1 and c′, n^c′ = o(c^n). For any constants 1 < c1 < c2, logc1 n = Θ(logc2 n).
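The ordering above can be illustrated at a single sample point (n = 50 here, an arbitrary choice): each row of the little table grows strictly faster than the one before.

```python
import math

# Representatives of the growth classes above, evaluated at n = 50 and
# listed from slowest to fastest; the values increase strictly.
n = 50
rows = [
    ("log2 n",        math.log2(n)),
    ("n^0.5",         n ** 0.5),
    ("n",             float(n)),
    ("5n^2 + 3n + 7", 5 * n**2 + 3 * n + 7),
    ("n^3",           n**3),
    ("2^n",           2.0**n),
]
for name, value in rows:
    print(f"{name:>14}  {value:.3e}")
```

A single point proves nothing by itself, but the gaps widen without bound as n grows, which is what the o-relations assert.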

Complexity Classes The class DTIME(t(n)) contains all those languages L for which there is a DTM that decides L in time O(t(n)) (i.e. performing O(t(n)) steps) The class DSPACE(t(n)) contains all those languages L for which there is a DTM that decides L exploring O(t(n)) cells in total