Lecture 2 Computational Complexity


Lecture 2 Computational Complexity: Time Complexity and Space Complexity. Algorithms use no more space than time, since each step can touch only a constant amount of new memory; e.g., NP is included in PSPACE.

The Importance of Analyzing the Running Time of an Algorithm

An example to illustrate the importance of analyzing the running time of an algorithm.
Problem: Given an array of n elements A[1..n], sort the entries in A in non-decreasing order.
Assumption: Each element comparison takes 10⁻⁶ seconds on some computing machine.

algorithm        # of element comparisons    n = 128 = 2^7          n = 1,048,576 = 2^20
Selection Sort   n(n−1)/2                    8,128 (≈ 0.008 s)      ≈ 5.5 × 10^11 (≈ 6.4 days)
Merge Sort       ≤ n·log2(n)                 ≤ 896 (≈ 0.0009 s)     ≤ 20,971,520 (≈ 21 s)

Conclusion: Time is undoubtedly an extremely precious resource to be investigated in the analysis of algorithms.
Question: How do we analyze running time?
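As a sanity check of the table, here is a minimal Python sketch (the 10⁻⁶-second figure is the slide's assumption; the comparison counts are the standard ones for these two algorithms, and the names are mine):

import math

COMPARISON_TIME = 1e-6  # seconds per element comparison (the slide's assumption)

def selection_sort_comparisons(n):
    # Selection sort always performs exactly n(n-1)/2 comparisons.
    return n * (n - 1) // 2

def merge_sort_comparisons(n):
    # Merge sort performs at most n * ceil(log2 n) comparisons (an upper bound).
    return n * math.ceil(math.log2(n))

for n in (2**7, 2**20):
    print(n,
          selection_sort_comparisons(n) * COMPARISON_TIME,  # ~0.008 s, then ~6.4 days
          merge_sort_comparisons(n) * COMPARISON_TIME)      # ~0.0009 s, then ~21 s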

Running Time

How do we measure the efficiency of an algorithm from the point of view of time? The running time usually increases with growing input size, so the running time of an algorithm is usually described as a function of the input size. Two questions remain:
- What is the measure of time?
- How do we define the value of the function for input size n? What if the running time depends only on the input size? What if it depends not only on the input size but also on the particular input?
Is actual (exact) running time a good measure? The answer is no. Why?
- Actual time is determined not only by the algorithm but also by many other factors;
- the measure should be machine- and technology-independent;
- our estimates of times are relative, as opposed to absolute.

Time complexity

In computer science, the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input[1]:226. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, where an elementary operation takes a fixed amount of time to perform.

Running Time (continued)

Notice: different models of computation assume different elementary steps! Examples of commonly used elementary operations (a counting sketch follows this list):
- Arithmetic operations: addition, subtraction, multiplication, and division.
- Comparisons and logical operations.
- Assignments, including assignments of pointers.
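As a minimal sketch of such operation counting (the accounting convention, i.e., what counts as one elementary operation, is my choice for illustration, as are the names):

def sum_array(A):
    # Count elementary operations: assignments and additions.
    ops = 0
    total = 0; ops += 1              # 1 assignment
    for x in A:                      # n loop iterations
        total = total + x; ops += 2  # 1 addition + 1 assignment per iteration
    return total, ops                # total ops = 2n + 1, i.e. Theta(n)

print(sum_array([5, 2, 9]))  # (16, 7): 2*3 + 1 = 7 elementary operations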

Example

For some algorithms, selection sort for example, the running time depends only on the input size: for any input of size n, the running time of the selection sort algorithm is always cn(n−1)/2 + bn + a for some constants a, b, c, regardless of the order of the input (see the sketch below).
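Here is selection sort instrumented to count comparisons (the counter is my addition, for illustration); whatever the input's order, the count is exactly n(n−1)/2:

def selection_sort(A):
    n = len(A)
    comparisons = 0
    for i in range(n - 1):
        # Find the minimum of A[i..n-1].
        min_idx = i
        for j in range(i + 1, n):
            comparisons += 1
            if A[j] < A[min_idx]:
                min_idx = j
        A[i], A[min_idx] = A[min_idx], A[i]
    return comparisons

n = 128
print(selection_sort(list(range(n, 0, -1))))  # 8128 == n*(n-1)//2
print(selection_sort(list(range(n))))         # 8128 again: input order is irrelevant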

Example

But for many algorithms, the running time also depends on the structure of the input! The standard example is INSERTION-SORT (a sketch follows).
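The original slide showed the INSERTION-SORT pseudocode and its per-line costs as an image. Here is a Python rendering of the textbook (CLRS-style) pseudocode, with comments giving the pseudocode line numbers that the analysis below refers to; the t[j] counter is my addition and records how many times the line-5 while-test executes for each j:

def insertion_sort(A):
    n = len(A)
    t = {}                                # t[j]: executions of the line-5 while test
    for j in range(1, n):                 # line 1: for j = 2 to n (0-indexed here)
        key = A[j]                        # line 2: key = A[j]
                                          # (line 3 is a comment in the pseudocode)
        i = j - 1                         # line 4: i = j - 1
        t[j] = 1                          # the line-5 test always runs at least once
        while i >= 0 and A[i] > key:      # line 5: while i > 0 and A[i] > key
            A[i + 1] = A[i]               # line 6: A[i+1] = A[i]
            i -= 1                        # line 7: i = i - 1
            t[j] += 1                     # count the next execution of the line-5 test
        A[i + 1] = key                    # line 8: A[i+1] = key
    return t

print(insertion_sort([1, 2, 3, 4]))  # sorted input: t = {1: 1, 2: 1, 3: 1}
print(insertion_sort([4, 3, 2, 1]))  # reversed input: t = {1: 2, 2: 3, 3: 4}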

Example (cont.)

Summing, over all lines, the cost of each line times the number of times it executes (with ci the cost of line i) gives
T(n) = c1·n + c2·(n−1) + c4·(n−1) + c5·Σ_{j=2..n} tj + c6·Σ_{j=2..n} (tj − 1) + c7·Σ_{j=2..n} (tj − 1) + c8·(n−1).
It is tedious! And tj depends on the input! E.g., if A is initially in sorted order, then for each j = 2, …, n, the while-loop test in line 5 is executed only once; thus tj = 1 for j = 2, …, n, and we have
T(n) = (c1 + c2 + c4 + c5 + c8)·n − (c2 + c4 + c5 + c8),
which is a linear function; the order of growth of the function is n.

Example (cont.)

If the array is initially in decreasing order, then for each j = 2, …, n, the while-loop test is executed j times; thus tj = j for j = 2, …, n. Using Σ_{j=2..n} j = n(n+1)/2 − 1 and Σ_{j=2..n} (j − 1) = n(n−1)/2, we have
T(n) = (c5/2 + c6/2 + c7/2)·n² + (c1 + c2 + c4 + c5/2 − c6/2 − c7/2 + c8)·n − (c2 + c4 + c5 + c8),
which is a quadratic function, the order of growth of which is n².

Asymptotic notions

How do we define running time as a function of input size when the running time also depends on the input? For simplicity, we need asymptotic notions! Our main concern is not small input instances but the behavior of the algorithms under investigation on large input instances, especially with the size of the input in the limit. In this case, we are interested in the order of growth of the running-time function.

Asymptotic Notations

E.g., the running time of INSERTION-SORT falls between a1·n + b1 and a2·n² + b2·n + c2, and neither of them can represent the whole case. By using the asymptotic notations, the running time of INSERTION-SORT can be simplified. In this course, we use three notations:
O(.) : “Big-Oh” – the most used
Ω(.) : “Big Omega”
Θ(.) : “Big Theta”

O-notation

Informal definition of O(.): If an algorithm’s running time t(n) is bounded above by a function g(n), to within a constant multiple c, for n ≥ n0, we say that the running time of the algorithm is O(g(n)), and the algorithm is an O(g(n))-algorithm. Obviously, O-notation is used to bound the worst-case running time of an algorithm.

O-notation

Formal definition of O(.): A function t(n) is said to be in O(g(n)) if there exist some c > 0 and n0 > 0 such that 0 ≤ t(n) ≤ cg(n) for all n ≥ n0. Sometimes this is also denoted t(n) = O(g(n)).
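As a worked instance (the function and constants are mine, for illustration): take t(n) = 3n² + 10n. Since 10n ≤ n² whenever n ≥ 10, we have 3n² + 10n ≤ 4n² for all n ≥ 10; hence t(n) = O(n²), witnessed by c = 4 and n0 = 10.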

Ω-notation

Informal definition of Ω(.): If an algorithm’s running time t(n) is bounded below by a function g(n), to within a constant multiple c, for n ≥ n0, we say that the running time of the algorithm is Ω(g(n)), and the algorithm is called an Ω(g(n))-algorithm.

Ω-notation

Formal definition of Ω(.): A function t(n) is said to be in Ω(g(n)) if there exist some c > 0 and n0 > 0 such that t(n) ≥ cg(n) ≥ 0 for all n ≥ n0. Sometimes this is also denoted t(n) = Ω(g(n)). Obviously, Ω-notation is used to bound the best-case running time of an algorithm.
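Continuing the worked instance from above: 3n² + 10n ≥ 3n² ≥ 0 for all n ≥ 1, hence t(n) = Ω(n²), witnessed by c = 3 and n0 = 1.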

Θ-notation

Informal definition of Θ(.): If an algorithm’s running time t(n) is bounded above and below by a function g(n), to within constant multiples c1 and c2, for n ≥ n0, we say that the running time of the algorithm is Θ(g(n)), and the algorithm is called a Θ(g(n))-algorithm, or of order Θ(g(n)).

Θ-notation

Formal definition of Θ(.): A function t(n) is said to be in Θ(g(n)) if there exist c1 > 0, c2 > 0, and n0 > 0 such that 0 ≤ c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0. Sometimes this is also denoted t(n) = Θ(g(n)). Here g(n) is an asymptotically tight bound for t(n).
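Combining the two worked instances above: 3n² ≤ 3n² + 10n ≤ 4n² for all n ≥ 10, hence t(n) = Θ(n²), witnessed by c2 = 3, c1 = 4, and n0 = 10.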

Example

The running time of INSERTION-SORT is O(n²) and Ω(n), which means that the running time on every input of size n, for n ≥ n0, is bounded above by a constant times n² and bounded below by a constant times n when n is sufficiently large. Notice that for some algorithms, INSERTION-SORT for example, there does not exist a function g(n) such that the running time of the algorithm is Θ(g(n)).

Illustration of some typical asymptotic running-time functions [figure not reproduced]. We can see that the linear algorithm is obviously faster than the quadratic one and slower than the logarithmic one, etc.

Three Types of Analysis

Given the input size, the running time may vary over different input instances; e.g., the running time of INSERTION-SORT falls between a linear and a quadratic function. Can we give an exact order? To have an overall grading of the performance of an algorithm, we consider:
Best-Case Analysis: too optimistic.
Average-Case Analysis: too difficult, e.g., the difficulty of defining the “average case” and the mathematical difficulties that follow; moreover, most of the time the average-case running time is of the same order as the worst-case running time.
Worst-Case Analysis: very useful and practical. We will adopt this approach.

Example of Three Types of Analysis

Example of analysis (average-case)

You should first figure out the distribution of the instances of size n, then compute an expected running time. Usually we assume all possibilities are equally likely. Thus, in this example, in the while loop, on average half of the elements in A[1..j−1] are less than A[j] and half are greater than A[j]. Then, on average, tj = j/2, and we can see that the average-case time complexity is also quadratic (see the worked sum below).
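To see the claim concretely (a sketch under the equal-likelihood assumption): with tj = j/2, the dominant term of the running time is proportional to Σ_{j=2..n} j/2 = (n(n+1)/2 − 1)/2 ≈ n²/4, which is indeed quadratic, about half of the worst case's ≈ n²/2.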

Time complexity

Since an algorithm's performance may vary over different inputs of the same size, one commonly uses the worst-case time complexity of an algorithm, denoted T(n), which is defined as the maximum amount of time taken on any input of size n. The worst-case time complexity of an algorithm is the longest running time (or maximum number of elementary operations) taken on any input of size n. E.g., the worst-case time complexity of INSERTION-SORT (the running time of INSERTION-SORT in the worst case) is Θ(n²), which is an asymptotically tight bound. The running time (time complexity) of INSERTION-SORT is O(n²) and Ω(n). But the running time (time complexity) of INSERTION-SORT is NOT Θ(n²), NOT Ω(n²), and NOT O(n): we cannot give a tight bound on the running time over all inputs. We usually consider one algorithm to be more efficient than another if its worst-case running time has a lower order of growth.

Worst-case time complexity

It is NOT true that every algorithm has a tight bound on its worst-case time complexity. E.g., suppose A is a sorted array of numbers, x is a number, and n is the length of A. The following algorithm decides whether x is in A (a runnable sketch appears after this slide):
1. if n is odd then k ← BINARYSEARCH(A, x)
2. else k ← LINEARSEARCH(A, x)
Obviously, the running time of this algorithm is O(n) for each input, and thus O(n) in the worst case. And for each constant n0 there are infinitely many inputs whose size is larger than n0 and whose cost is bounded below by a constant times n (the even sizes, on which LINEARSEARCH runs). But the running time in the worst case is NOT Ω(n), and hence NOT Θ(n), since for each n0 there exists some n ≥ n0 (any odd n) such that every input of that size costs no more than a constant times log n.
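A runnable Python sketch of this dispatcher (the helper names mirror the pseudocode; the standard bisect module stands in for BINARYSEARCH):

import bisect

def linear_search(A, x):
    # Theta(n) in the worst case: scan every entry.
    for i, v in enumerate(A):
        if v == x:
            return i
    return -1

def binary_search(A, x):
    # O(log n): A is assumed sorted.
    i = bisect.bisect_left(A, x)
    return i if i < len(A) and A[i] == x else -1

def odd_even_search(A, x):
    n = len(A)
    if n % 2 == 1:
        return binary_search(A, x)   # odd n: O(log n)
    else:
        return linear_search(A, x)   # even n: Theta(n) in the worst case

print(odd_even_search([1, 3, 5, 7, 9], 7))  # n odd: binary search -> 3
print(odd_even_search([1, 3, 5, 7], 2))     # n even: linear search -> -1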

Input size

Input size is the length of the string representing the input. Under the Turing machine model this is easy to define: it is the number of non-blank cells the input occupies on the input tape. In the real world, such precision is impractical: input size is not a precise measure of the input, and its interpretation depends on the problem for which the algorithm is designed. Commonly used measures of input size are the following:
1. Sorting and searching: # of entries in the array or list.
2. Graph algorithms: # of vertices or edges, or both.
3. Computational geometry: # of points, vertices, edges, line segments, polygons, etc.
4. Matrix operations: dimensions of the input matrices.
5. Number theory and cryptography: # of bits in the input.

Input size (continued)

Inconsistencies brought about: an algorithm for adding two n × n matrices performs n² additions and so sounds quadratic, while it is in fact linear in the input size (the input contains 2n² entries). The following two algorithms both compute the sum Σ_{j=1..n} j, and the # of elementary operations performed is the same, but they have different time complexities.

Algorithm 1
Input: a positive integer n and an array A[1..n] with A[j] = j, 1 ≤ j ≤ n
Output: Σ_{j=1..n} A[j]
1. sum ← 0
2. for j ← 1 to n
3.   sum ← sum + A[j]
4. end for
5. return sum

Algorithm 2
Input: a positive integer n
Output: Σ_{j=1..n} j
1. sum ← 0
2. for j ← 1 to n
3.   sum ← sum + j
4. end for
5. return sum

Both algorithms run in time Θ(n). But Algorithm 1 is a linear-time algorithm, since its time complexity is Θ(n) where n is (proportional to) the real length of its input, while Algorithm 2 is an exponential-time algorithm, since its input is a number whose length is k = ⌈log(n+1)⌉ bits, and Θ(n) = Θ(2^k).
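A quick numeric illustration of that last point (the numbers are mine): for n = 2^20 = 1,048,576, Algorithm 2's input is just the number n, written in k = 21 bits, yet the algorithm performs about 2^20 ≈ 2^k additions; its running time is exponential in the length of its input.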