Algorithmic Time Complexity Basics


Algorithmic Time Complexity Basics
Shantanu Dutt, ECE Dept., UIC

Time Complexity
An algorithm's time complexity is a function T(n) of the problem size n that represents how much time the algorithm will take to complete its task. Note that there can be more than one problem-size parameter, in which case we denote the time complexity function as T(S), where S is the set of size parameters. E.g., for the shortest-path problem on a graph G we have 2 size parameters, the # of vertices n and the # of edges e (thus T(S) = T(n,e)); for the covering part of Quine-McCluskey (QM), we also have 2 size parameters, the # of minterms (MTs) m and the # of prime implicants (PIs) p (thus T(S) = T(m,p)).
In general, the runtime of an algorithm/program is determined not only by the algorithm but also by the processor speed, memory size and speed, bus speed, etc. However, T(n) generally overlooks these technology parameters and focuses on the intrinsic # of basic steps that the algorithm has to perform as a function of n. The main job of T(n) is to represent how the algorithm's runtime grows as a function of n, as opposed to giving us an absolute time that the algorithm will take to solve its problem. Thus if T(n) = n, the runtime doubles as n doubles; if T(n) = n^2, the runtime increases 4 times as n doubles ((2n)^2 = 4n^2); and if T(n) = 2^n, the runtime is squared (raised to the power 2) as n doubles (2^(2n) = (2^n)^2).
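These doubling rules can be checked numerically. A minimal sketch (ours, not part of the original slides) that tabulates the ratio T(2n)/T(n) for the three growth functions above:

```python
# Illustrative sketch: how T(n) scales when the problem size n doubles.

def doubling_ratios(T, sizes):
    """Return T(2n)/T(n) for each n in sizes."""
    return [T(2 * n) / T(n) for n in sizes]

sizes = [10, 20, 40]
print(doubling_ratios(lambda n: n, sizes))         # T(n) = n:   always 2.0
print(doubling_ratios(lambda n: n ** 2, sizes))    # T(n) = n^2: always 4.0
print(doubling_ratios(lambda n: 2.0 ** n, sizes))  # T(n) = 2^n: ratio is 2^n,
                                                   # i.e., T(2n) = T(n)^2
```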

Time Complexity
T(n) is determined by counting, as a function of n, the number of basic steps that the algorithm has to perform, where a basic step is an operation whose time does not depend on the input size (i.e., a constant-time operation). Note that different basic steps (e.g., addition and multiplication of integers) may take different absolute times, but each counts as 1 operation.
[Diagram: an input of size n is fed to algorithm A; counting the # of basic steps that A performs that are independent of n (i.e., constant-time operations) yields T(n).]
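As an illustration (ours, not from the slides), the sketch below instruments a simple summation loop so that the basic-step count can be compared against n:

```python
# Illustrative sketch: counting the constant-time basic steps of an algorithm.

def array_sum(a):
    """Sum an array, counting each addition as one basic step."""
    steps = 0
    total = 0
    for x in a:     # n iterations
        total += x  # one constant-time addition per iteration
        steps += 1
    return total, steps

_, steps = array_sum(list(range(1000)))
print(steps)  # 1000 basic steps for n = 1000, i.e., T(n) = n here
```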

Time Complexity: Big O Notation
The big O notation for T(n) specifies an upper-bound function f(n) for T(n), ideally the closest or smallest such bound (denoted by T(n) = O(f(n))), which essentially says that for a large enough n, T(n) <= c*f(n) for some constant c. Formally, we say that for two monotonic functions T(n) and f(n), T(n) = O(f(n)) if there exist a constant c > 0 and an n0 such that T(n) <= c*f(n) for all n >= n0.
[Graph: T(n) and c*f(n) versus n, illustrating the T(n) = O(f(n)) relation; beyond n0, c*f(n) stays above T(n).]
Ack: Graph obtained from http://www.cs.cmu.edu/~adamchik/15-121/lectures/Algorithmic%20Complexity/complexity.html
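A claimed witness pair (c, n0) can be probed numerically over a finite range of n. A minimal sketch (ours, not from the slides); it can gather evidence against a wrong witness but cannot prove the bound for all n:

```python
# Illustrative sketch: numerically test a claimed big-O witness (c, n0).

def holds_big_o(T, f, c, n0, n_max=10_000):
    """Check that T(n) <= c*f(n) for all n0 <= n <= n_max."""
    return all(T(n) <= c * f(n) for n in range(n0, n_max + 1))
```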

Time Complexity: Big O Notation
Example: T(n) = n^2 + 2n + 3; then f(n) = n^2, i.e., T(n) = O(n^2). Proof: for n >= 2 (= n0), T(n) = n^2 + 2n + 3 < n^2 + n^2 + n^2 = 3n^2. Thus for n0 = 2 and c = 3, T(n) <= c*n^2 for n >= n0.
In general, to determine the big O complexity term for T(n), determine its most dominant term (the term that will be greater than all other terms for a large enough n) and remove all constants in that term; if this simplified term is f(n), then T(n) = O(f(n)). f(n) is also called the asymptotic complexity of T(n).
Sometimes it is important to retain the multiplicative constants or exponentiation constants, or all constants (e.g., in other complexity measures such as hardware cost); thus it may be important not to simplify 3n^2 to n^2. This depends on the level of detail to which the algorithm is being analyzed and on the complexity of a competing algorithm.
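Using the holds_big_o checker sketched above on this example and its witness (c = 3, n0 = 2):

```python
# Evidence that T(n) = n^2 + 2n + 3 is O(n^2), reusing holds_big_o from
# the previous sketch.
T = lambda n: n ** 2 + 2 * n + 3
f = lambda n: n ** 2
print(holds_big_o(T, f, c=3, n0=2))  # True over the tested range
print(holds_big_o(T, f, c=1, n0=2))  # False: T(n) - n^2 = 2n + 3 > 0, so
                                     # no n0 can rescue c = 1
```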

Time Complexity: Big Ω, Θ Notations
Definition of "big Omega": We need a notation for the lower bound; a capital omega (Ω) is used in this case. We say that T(n) = Ω(g(n)) when there exists a constant c > 0 such that T(n) ≥ c*g(n) for all sufficiently large n (n >= n0). Examples: n = Ω(1), n^2 = Ω(n), n^2 = Ω(n log(n)), 2n + 1 = Ω(n).
Definition of "big Theta": To measure the complexity of a particular algorithm exactly means to find matching upper and lower bounds; a third notation is used in this case. We say that T(n) = Θ(g(n)) if and only if T(n) = O(g(n)) and T(n) = Ω(g(n)). If a worst-case analysis is quite exact in terms of the dominating term in T(n), we can say T(n) = Θ(g(n)) rather than T(n) = O(g(n)). Examples: 2n = Θ(n), n^2 + 2n + 1 = Θ(n^2).
Ack: Obtained from http://www.cs.cmu.edu/~adamchik/15-121/lectures/Algorithmic%20Complexity/complexity.html
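Mirroring the big-O checker above, a Θ claim can be probed by testing both directions (again only a finite-range sketch of ours, not a proof):

```python
# Illustrative sketch: probe T(n) = Theta(g(n)) by testing both bounds;
# reuses holds_big_o from the earlier sketch.

def holds_big_omega(T, g, c, n0, n_max=10_000):
    """Check that T(n) >= c*g(n) for all n0 <= n <= n_max."""
    return all(T(n) >= c * g(n) for n in range(n0, n_max + 1))

T = lambda n: n ** 2 + 2 * n + 1
g = lambda n: n ** 2
# Witnesses: T(n) <= 4*g(n) and T(n) >= 1*g(n) for n >= 1.
print(holds_big_o(T, g, c=4, n0=1) and holds_big_omega(T, g, c=1, n0=1))  # True
```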

Time Complexity: Big O, Θ Notations
Graphs illustrating the big O and Θ notations:
[Figure: the asymptotic bounds O and Θ around T(n).]
Ack: Graph obtained from http://www.leda-tutorial.org/en/official/ch02s02s03.html

Time Complexity: Big O, Θ Notations—When we can use Θ and when we cannot
Usage of the O and Θ notations: when our analysis of T(n) (or any counting function) is exact at least for the dominating term, we can and should use Θ; otherwise we use the O notation.
Example 1: Let m(n) = the worst-case # of MTs of a non-trivial n-variable function f() (i.e., f != 0 or 1). Certainly m(n) < 2^n. If we choose the MTs that have an even # of 1's in their binary representation, we have 2^(n-1) MTs, and the function is non-trivial (it is an even-parity function). Thus m(n) = O(2^(n-1)) = O(2^n). The question is whether we can say that m(n), in the worst case, is at least of the order of 2^(n-1). Since our analysis above was exact, we can say yes. More formally, if we choose a constant c2 = 0.5, then the # of MTs of an even-parity function >= c2*2^(n-1), and thus m(n) >= c2*2^(n-1). Thus we have m(n) = Ω(2^(n-1)), and thus m(n) = Ω(2^n) (2^n and 2^(n-1) are of the same "order" as they differ only by a multiplicative constant of 2). From the above, m(n) = Θ(2^n).
Example 2: Let p(n) = the worst-case # of PIs of an n-variable function f(). We know that across all n-variable functions the # of possible PIs (i.e., the # of ternary notations with the symbols 0, 1, X) is 3^n. So we can say p(n) = O(3^n). However, this analysis is not exact, in the sense that from this analysis at least we do not know for certain whether there is any function that has p(n) of the order of 3^n. Thus we cannot say that p(n) = Ω(3^n), and thus we cannot say that p(n) = Θ(3^n). Note that, as has been shown earlier, the max # of PIs covering a MT can be of the order of C(n, n/2) ~ 2^(n-1). We can have a function f() that is the sum of these PIs; thus we can say that in the worst case p(n) = Ω(2^n). But since 3^n is not of the same order as 2^n (3^n = (2^n)^(log2 3), i.e., 3^n != Θ(2^n); 3^n = Ω(2^n) but 3^n != O(2^n)), we cannot say that p(n) = Ω(3^n).
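Example 1's count can be verified by brute force for small n (an illustrative sketch of ours, not part of the original slides):

```python
# Verify that the even-parity function on n variables has exactly 2^(n-1)
# minterms, i.e., inputs whose binary representation has an even # of 1's.
for n in range(1, 9):
    minterms = [m for m in range(2 ** n) if bin(m).count("1") % 2 == 0]
    assert len(minterms) == 2 ** (n - 1)
    print(n, len(minterms))
```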

Different Types of Time Complexity Analysis
The term "analysis of algorithms" is used to describe approaches to the study of the performance of algorithms. The most prevalent type is worst-case analysis:
The worst-case runtime complexity of the algorithm is the function defined by the maximum number of steps taken on any instance of size n.
The best-case runtime complexity of the algorithm is the function defined by the minimum number of steps taken on any instance of size n.
The average-case runtime complexity of the algorithm is the function defined by the average number of steps taken on any instance of size n.
The amortized runtime complexity of the algorithm is the function defined by a sequence of operations applied to an input of size n and averaged over that sequence.
Example: Consider the algorithm of sequential searching for an element in an array of size n (see the sketch below). Its worst-case runtime complexity is O(n) = Θ(n). Its best-case runtime complexity is O(1). Its average-case runtime complexity is O(n/2) = O(n) = Θ(n)—Why?
Ack: Obtained from http://www.cs.cmu.edu/~adamchik/15-121/lectures/Algorithmic%20Complexity/complexity.html
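A sketch of the sequential search being analyzed (our code, not the slide's), with the three cases annotated; the per-position argument in the final comment answers the "Why?" above:

```python
# Sequential (linear) search: scan the array until the key is found.

def seq_search(a, key):
    for i, x in enumerate(a):  # up to n comparisons
        if x == key:
            return i           # best case: key at index 0 -> 1 comparison
    return -1                  # worst case: key absent -> n comparisons

# Average case: if the key is equally likely to be at any of the n positions,
# the expected # of comparisons is (1 + 2 + ... + n)/n = (n+1)/2 ~ n/2,
# hence O(n/2) = O(n) = Theta(n).
```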

Examples of Time Complexity Analysis
[The slide's example code is not preserved in the transcript; see the sketch below.] This also is = Θ(n^3). Generally it is important to count only the "main" operations, here only additions and multiplications: n^3 additions and n^3 multiplications, and we arrive at the same O and Θ notation complexities of n^3.
Ack: Adapted from http://www.cs.cornell.edu/courses/cs211/2005sp/Lectures/L14-BigO/L14-15-Complexity.4up.pdf
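Since the original example is missing, here is a sketch consistent with the stated counts (n^3 additions and n^3 multiplications); we are assuming the slide showed something like the classic triple-loop n x n matrix multiplication:

```python
# Hypothetical reconstruction: n x n matrix multiplication performs one
# multiplication and one addition per innermost iteration, i.e., n^3 of each.

def mat_mul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):                # n^3 innermost iterations
                C[i][j] += A[i][k] * B[k][j]  # 1 multiplication + 1 addition
    return C
```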

Examples of Time Complexity Analysis
[Slide figure not preserved in the transcript.]
Ack: Obtained from http://www.cs.cornell.edu/courses/cs211/2005sp/Lectures/L14-BigO/L14-15-Complexity.4up.pdf

Merge Sort (MS) Analysis
One way to solve recurrences is to open them up and obtain a series of terms to be summed. Below, we obtain the complexity of merge sort directly (ignoring the recurrence). The main basic operation is a comparison of 2 elements/numbers.
Merge(n): worst-case comparisons = ? best-case comparisons = ?
[Diagram: the recursion tree. MS(n) breaks up into two MS(n/2) subproblems whose results feed Merge(n); each MS(n/2) breaks up into two MS(n/4) subproblems feeding a Merge(n/2); and so on down to subproblems of size 1. Legend: computation break-up arrows; data transfer/dependency arrows.]
Total comparisons @ level 1: worst-case: ? best-case: ?
Total comparisons @ level 2: worst-case: ? best-case: ?
Total comparisons @ last merge level (which level is it?): worst-case: ? best-case: ?
Total worst-case complexity: ? Total best-case complexity: ?
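The slide leaves the counts as exercises; the sketch below (ours, not the author's answer key) instruments merge sort so the totals can be checked empirically:

```python
# Illustrative sketch: merge sort instrumented to count element comparisons.

def merge_sort(a):
    """Return (sorted copy of a, number of comparisons performed)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, cl = merge_sort(a[:mid])
    right, cr = merge_sort(a[mid:])
    merged, i, j, comps = [], 0, 0, 0
    while i < len(left) and j < len(right):
        comps += 1  # one comparison per iteration of the merge loop
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]  # the leftover run needs no comparisons
    return merged, cl + cr + comps

print(merge_sort(list(range(16)))[1])  # sorted input: 32 = (16/2)*log2(16)
print(merge_sort([7, 3, 15, 0, 9, 1, 12, 5, 14, 2, 8, 11, 4, 13, 6, 10])[1])
```

Merging two runs of total length n takes at most n-1 comparisons and, for equal halves, at least n/2, which brackets the per-level totals asked about above.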

Importance of Asymptotic Analysis—Worst- & Average-Case
Asymptotic analysis tells us whether a technique/algorithm will be practical in all cases (worst-case analysis) or in the average case (average-case analysis) for problem sizes of interest.
[Table not preserved in the transcript.]
Ack: Table obtained from http://www.csd.uwo.ca/courses/CS1037a/notes/topic13_AnalysisOfAlgs.pdf

Importance of Asymptotic Analysis—Worst- & Average-Case (contd.)
Assume each basic operation takes 1 ms.
[Table: T(n) values in ms for various growth-rate functions T(n) and input sizes n; not preserved in the transcript.]
Ack: Obtained from http://www.cs.cornell.edu/courses/cs211/2005sp/Lectures/L14-BigO/L14-15-Complexity.4up.pdf
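Since the slide's table did not survive the transcript, the sketch below regenerates a table like it under the stated 1-ms-per-operation assumption; the particular growth functions and sizes are our own choices:

```python
# Reconstructed illustration: runtime in ms (at 1 ms per basic operation)
# for common growth rates T(n) at a few input sizes n.
import math

funcs = [("log2 n",   lambda n: math.log2(n)),
         ("n",        lambda n: float(n)),
         ("n log2 n", lambda n: n * math.log2(n)),
         ("n^2",      lambda n: float(n ** 2)),
         ("2^n",      lambda n: 2.0 ** n)]
sizes = (10, 20, 40)

print(f"{'T(n)':>10}" + "".join(f"{n:>16}" for n in sizes))
for name, f in funcs:
    print(f"{name:>10}" + "".join(f"{f(n):>16.0f}" for n in sizes))
```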

Importance of Asymptotic Analysis—Worst- & Average-Case (contd.)
[Graph: T(n) versus n for the growth rates above; not preserved in the transcript.]
Ack: Obtained from http://www.cs.cornell.edu/courses/cs211/2005sp/Lectures/L14-BigO/L14-15-Complexity.4up.pdf

Asymptotic Complexity and Efficient Algorithms
[Slide content not preserved in the transcript.]
Ack: Obtained from http://www.cs.cornell.edu/courses/cs211/2005sp/Lectures/L14-BigO/L14-15-Complexity.4up.pdf

Concluding Remarks
[Slide content not preserved in the transcript.]
Ack: Obtained from http://www.cs.cornell.edu/courses/cs211/2005sp/Lectures/L14-BigO/L14-15-Complexity.4up.pdf