Lecture 2

We have given O(n^3), O(n^2), and O(n log n) algorithms for the max sub-range problem. This time, a linear-time algorithm! The idea is as follows: suppose we have found the maximum subrange sum for x[1..n-1]. Now we have to find it for x[1..n]. There are two possibilities: either the subrange with maximum sum still lies entirely within x[1..n-1] (in which case we already know it), or it ends at x[n]. But if it ends at x[n], then we can determine it by finding the suffix of x[1..n-1] with maximum sum and adding x[n] to it. This works provided the resulting sum is at least 0; if it is negative, we take 0 instead (which corresponds to the empty suffix).

This suggests maintaining two different maximums: maxsofar, the maximum subrange sum in the portion of the array seen so far, and maxsuffixsum, the maximum suffix sum in the portion of the array seen so far. Then we simply update both of these as we walk across the array: a Θ(n) algorithm!

    Maxsubrangesum5(x, n):
        maxsofar := 0;
        maxsuffixsum := 0;
        for i := 1 to n do
            maxsuffixsum := max(0, maxsuffixsum + x[i]);
            maxsofar := max(maxsofar, maxsuffixsum);
        return(maxsofar);

Consider this problem: given an array of n integers, the majority element is defined to be a number that appears more than n/2 times. Can you develop an efficient algorithm to solve the problem? (This will be in your assignment 1.)
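For concreteness, here is a direct Python translation of Maxsubrangesum5 above; a minimal sketch (the snake_case names and the 0-based loop are Python adaptations, not part of the original notes):

    def max_subrange_sum(x):
        # Linear-time scan: maintain the best subrange sum seen so far and
        # the best sum of a suffix ending at the current element.
        max_so_far = 0
        max_suffix_sum = 0
        for xi in x:
            # Either extend the best suffix by the new element,
            # or restart with the empty suffix (sum 0).
            max_suffix_sum = max(0, max_suffix_sum + xi)
            max_so_far = max(max_so_far, max_suffix_sum)
        return max_so_far

    # The best subrange of [-2, 4, -1, 5, -3] is [4, -1, 5], with sum 8.
    print(max_subrange_sum([-2, 4, -1, 5, -3]))   # 8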

Time complexities of an algorithm

Worst-case time complexity of algorithm A: T(n) = max_{|x|=n} T(x), where T(x) is A's running time on input x. Best-case time complexity is defined analogously, with min in place of max. Average-case time complexity of A: T(n) = (1/2^n) Σ_{|x|=n} T(x), assuming the uniform distribution (and binary inputs x). In general, given a probability distribution P over inputs, the average-case complexity of A is T(n) = Σ_{|x|=n} P(x) T(x). Space complexity is defined similarly.
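These definitions can be checked by brute force on small inputs. A minimal sketch, using an invented toy step count (linear search for the first 1 in a binary string, charging one step per position examined; none of this is from the notes):

    from itertools import product

    def steps(x):
        # Toy "running time": positions examined until the first 1 is found.
        for i, bit in enumerate(x):
            if bit == 1:
                return i + 1
        return len(x)

    n = 10
    times = [steps(x) for x in product([0, 1], repeat=n)]  # all 2^n inputs
    print(max(times))          # worst case: n
    print(min(times))          # best case: 1
    print(sum(times) / 2**n)   # average case: close to 2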

Asymptotic notations: O, Ω, Θ, o

We say f(n) is O(g(n)) if there exist constants c > 0 and n_0 > 0 such that f(n) ≤ c g(n) for all n ≥ n_0. We say f(n) is Ω(g(n)) if there exist constants c > 0 and n_0 > 0 such that f(n) ≥ c g(n) for all n ≥ n_0. We say f(n) is Θ(g(n)) if there exist constants c_1 > 0, c_2 > 0, and n_0 > 0 such that c_1 g(n) ≤ f(n) ≤ c_2 g(n) for all n ≥ n_0. We say f(n) is o(g(n)) if lim_{n→∞} f(n)/g(n) = 0. We will only use asymptotic notation on non-negative valued functions in this course!

Examples: n, n^2, and 3n^2 + 4n + 5 are all O(n^2), but n^3 is not O(n^2). n^2, (log n)n^2, and 4^n are all Ω(n^2), but n is not Ω(n^2). 2n^2 + 3n + 4 is Θ(n^2).

Exercise: what is the relationship between n^{log n} and e^{√n}?

Useful trick: if lim_{n→∞} f(n)/g(n) = c < ∞ for some constant c ≥ 0, then f(n) = O(g(n)).

We say an algorithm runs in polynomial time if there exists a k such that its worst-case time complexity T(n) is O(n^k).
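The limit in the useful trick can be explored numerically before proving anything. A small sketch (the choice of f and g is illustrative, not from the notes; finitely many ratios suggest the limit but do not prove it):

    # Ratios f(n)/g(n) approach the constant 2, suggesting (as the limit
    # trick confirms analytically) that f(n) = O(g(n)).
    def f(n):
        return 2 * n**2 + 3 * n + 4

    def g(n):
        return n**2

    for n in [10, 100, 1000, 10**6]:
        print(n, f(n) / g(n))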

Average-case analysis of algorithms

Let's go back to Insertion-Sort from Lecture 1. The worst-case running time can be misleading! For example, QuickSort has Ω(n^2) worst-case time complexity, yet we use it because its average-case running time is O(n log n). In practice, we are usually only interested in the "average case". But what is the average-case complexity of Insertion-Sort? How are we going to get the average-case complexity of an algorithm? Compute the time for all inputs of length n and then take the average? Usually hard! Alternatively, what if I give you one "typical" input and tell you that whatever time the algorithm spends on this particular input is "typical" -- that is, it uses this much time on most other inputs too? Then all you need to do is analyze the algorithm on this single input, and that gives the desired average-case complexity!

Average-case analysis of Insertion-Sort

Theorem. The average-case complexity of Insertion-Sort is Θ(n^2).

Proof. Fix a permutation π of the integers 1, 2, …, n such that (a) it takes at least n log n − cn bits to encode π, for some constant c; and note that (b) since most permutations (more than half) also require at least n log n − cn bits to encode, a lower bound on π's running time is a lower bound on the average-case running time. Now we analyze Insertion-Sort on input π. We encode π by the computation of Insertion-Sort: in the j-th round of the outer loop, suppose the while-loop executes f(j) steps. The total running time on π is

    T(π) = Σ_{j=1..n} f(j)   (1)

and, by Assignment 1, π can be encoded using Σ_{j=1..n} log f(j) bits, so by (a)

    Σ_{j=1..n} log f(j) ≥ n log n − cn   (2)

(logs are base 2). Subject to (1), the left-hand side of (2) is maximized when the f(j) are all equal, say f(j) = f_0 = T(π)/n. Hence n log f_0 ≥ Σ_{j=1..n} log f(j) ≥ n log n − cn, so log f_0 ≥ log n − c, i.e., f_0 ≥ n/2^c. Thus T(π) = n·f_0 ≥ n^2/2^c = Ω(n^2). By (b), the average-case running time of Insertion-Sort is Ω(n^2); since the worst case is O(n^2), it is Θ(n^2).
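The Θ(n^2) average is easy to observe empirically. A minimal sketch (an illustration, not part of the proof): count the while-loop steps of Insertion-Sort on random permutations and compare with n(n−1)/4, the expected number of inversions of a random permutation.

    import random

    def insertion_sort_steps(a):
        # Total number of while-loop iterations (element shifts), which
        # dominates the running time; equals the number of inversions.
        a = list(a)
        steps = 0
        for j in range(1, len(a)):
            key = a[j]
            i = j - 1
            while i >= 0 and a[i] > key:
                a[i + 1] = a[i]
                i -= 1
                steps += 1
            a[i + 1] = key
        return steps

    for n in [100, 200, 400]:
        trials = [insertion_sort_steps(random.sample(range(n), n))
                  for _ in range(50)]
        print(n, sum(trials) / len(trials), n * (n - 1) / 4)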

8 paradigms and 4 methods

In this course, we will discuss eight paradigms of algorithm design:
- reduce to a known problem (e.g. sorting)
- recursion
- divide & conquer
- invent (or augment) a data structure
- greedy algorithms
- dynamic programming
- exploit problem structure (algebraic, geometric, etc.)
- probabilistic or approximate solutions

and four methods of analyzing algorithms:
- counting -- usually for worst case
- (probabilistic method -- for average case)
- incompressibility method -- for average case
- adversary arguments -- usually for worst-case lower bounds

Paradigm 1. Reduce to a known problem

In this method, you develop an algorithm for a problem by viewing it as a special case of a problem you already know how to solve efficiently.

Example 1: Decide if a list of n numbers contains repeated elements.
Solution 1: Using a double loop, compare each element to every other element. This takes Θ(n^2) steps.
Solution 2: Sort the n numbers in O(n log n) time, then find any repeated element by scanning adjacent pairs in O(n) time (see the Python sketch at the end of this section).

Example 2: Given n points in the plane, determine whether any three of them are collinear.
Solution 1: For each triple of points P_1 = (x_1, y_1), P_2 = (x_2, y_2), P_3 = (x_3, y_3), compute the slope of the line connecting P_1 with P_2 and the slope of the line connecting P_1 with P_3. If they are the same, then P_1, P_2, P_3 are collinear. This costs O(n^3).
Solution 2: For each point P, compute the slopes of all lines formed by joining each of the other points with P. If there is a duplicate element in this list, then there are three collinear points. Finding a duplicate in each list costs O(n log n), so the total cost is O(n^2 log n).

For next lecture, read CLR, section 2.3 and chapter 4.
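Example 1's reduction to sorting takes only a few lines of Python; a minimal sketch of the idea (not from the notes):

    def has_repeats(nums):
        # Reduce to sorting: after sorting, repeated elements are adjacent.
        s = sorted(nums)                                         # O(n log n)
        return any(s[i] == s[i + 1] for i in range(len(s) - 1))  # O(n) scan

    print(has_repeats([3, 1, 4, 1, 5]))   # True  (1 appears twice)
    print(has_repeats([3, 1, 4, 5]))      # False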