CSC 2300 Data Structures & Algorithms January 30, 2007 Chapter 2. Algorithm Analysis.

Today: Binary Search, Euclid's Algorithm, Efficient Exponentiation, Recursion and Recurrences

Binary Search Given an integer X and integers A[0], A[1], …, A[N-1], which are presorted and already in memory, find i such that A[i] = X, or return i = -1 if X is not in the input. What is an obvious solution? Scan through the list from left to right. Time? O(N). Better strategy? Check if X is the middle element. If yes, we have the answer. If X is smaller, we apply the same strategy to the sorted subarray to the left of the middle element. If X is larger, we look in the right half.
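As a concrete illustration, here is a minimal sketch of this strategy in Java. It is not from the original slides; the class and method names are my own, and the array a and the return value -1 for "not found" follow the slide's conventions.

    public class BinarySearchDemo {
        // Returns an index i with a[i] == x, or -1 if x is not present.
        // Assumes a is sorted in increasing order.
        static int binarySearch(int[] a, int x) {
            int low = 0, high = a.length - 1;
            while (low <= high) {
                int mid = (low + high) / 2;     // middle of the current range
                if (a[mid] < x) {
                    low = mid + 1;              // x can only be in the right half
                } else if (a[mid] > x) {
                    high = mid - 1;             // x can only be in the left half
                } else {
                    return mid;                 // found
                }
            }
            return -1;                          // not found
        }

        public static void main(String[] args) {
            int[] a = {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31};
            System.out.println(binarySearch(a, 9));   // prints 4
            System.out.println(binarySearch(a, 8));   // prints -1
        }
    }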

Binary Search What is the running time? O(log N).

Analyzing Binary Search
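A sketch of the standard analysis: each iteration does a constant amount of work and at least halves the number of remaining candidates, so the running time satisfies T(N) = T(N/2) + O(1) with T(1) = O(1), which expands to T(N) = O(log N).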

Example We have 16 integers: 1, 3, 5, 7, 9, 11, …, 29, 31, and X=9. (Trace table columns: low, high, #pts, mid, a[mid], return.)
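A possible trace for X = 9, assuming 0-based indices and mid computed as (low + high) / 2 with integer division (the slide's exact bookkeeping may differ):
low=0, high=15, #pts=16, mid=7, a[mid]=15 (too large)
low=0, high=6, #pts=7, mid=3, a[mid]=7 (too small)
low=4, high=6, #pts=3, mid=5, a[mid]=11 (too large)
low=4, high=4, #pts=1, mid=4, a[mid]=9 (found), return 4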

Example We have 16 integers: 1, 3, 5, 7, 9, 11, …, 29, 31, and X=8. (Trace table columns: low, high, #pts, mid, a[mid], return.)

Example We have 16 integers: 1, 3, 5, 7, 9, 11, …, 29, 31, and X=1. (Trace table columns: low, high, #pts, mid, a[mid], return.)

Example We have 16 integers: 1, 3, 5, 7, 9, 11, …, 29, 31, and X=31. (Trace table columns: low, high, #pts, mid, a[mid], return.)

Example We have 16 integers: 1, 3, 5, 7, 9, 11, …, 29, 31, and X=33. (Trace table columns: low, high, #pts, mid, a[mid], return.)

Euclid’s Algorithm This algorithm computes gcd(M,N), assuming M ≥ N. (If M < N, the first iteration swaps them.) The algorithm works by computing remainders until 0 is reached; the last nonzero remainder is the answer. For example, with M=1989 and N=1590, the sequence of remainders is 399, 393, 6, 3, 0. Therefore, gcd(1989,1590)=3.
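A minimal sketch of the algorithm in Java; the method name gcd is my own, and the slides only describe the remainder sequence, not this particular loop.

    // Computes gcd(m, n) by repeatedly replacing (m, n) with (n, m mod n)
    // until the remainder reaches 0; the last nonzero value is the gcd.
    static long gcd(long m, long n) {
        while (n != 0) {
            long rem = m % n;
            m = n;
            n = rem;
        }
        return m;
    }

    // gcd(1989, 1590): remainders 399, 393, 6, 3, 0 -> answer 3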

Running Time Analysis The sequence of remainders is 399, 393, 6, 3, 0. The remainder does not decrease by a constant factor in a single iteration, so what can we do? We can prove that after two iterations, the remainder is at most half of its original value. This shows that the number of iterations is at most 2 log N = O(log N). Theorem 2.1: If M > N, then M mod N < M/2. (Proof sketch: if N ≤ M/2, then M mod N < N ≤ M/2; if N > M/2, then M mod N = M − N < M/2.) Is the worst case really 2 log N? No, it is about 1.44 log N, which occurs when M and N are consecutive Fibonacci numbers.

Efficient Exponentiation The obvious way to compute X^N uses N−1 multiplications. A recursive algorithm can do better. What are the base cases? N=0 and N=1. Otherwise, if N is even, we compute X^N = X^(N/2) · X^(N/2), and if N is odd, X^N = X^⌊N/2⌋ · X^⌊N/2⌋ · X. How many multiplications are needed? At most 2 log N. Why 2? Because an odd N costs one extra multiplication.
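A sketch of this recursive algorithm in Java; the method name power is my own choice, but the structure follows the even/odd cases described above.

    // Computes x^n by recursive squaring.
    // Even n: square x^(n/2). Odd n: square x^(n/2) (integer division), then multiply by x.
    static long power(long x, int n) {
        if (n == 0) return 1;          // base case: x^0 = 1, no multiplications
        if (n == 1) return x;          // base case: x^1 = x, no multiplications
        long half = power(x, n / 2);   // x^(n/2), with integer division
        if (n % 2 == 0) {
            return half * half;        // one multiplication
        } else {
            return half * half * x;    // two multiplications
        }
    }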

Example Compute X^62. The algorithm performs these calculations: X^3 = (X^2)·X, X^7 = (X^3)^2·X, X^15 = (X^7)^2·X, X^31 = (X^15)^2·X, X^62 = (X^31)^2. How many multiplications? Nine (counting the one that forms X^2). In general, at most 2 log N.

Number of Multiplications Does it always require more work to compute X^i than X^j when i > j? No. Consider X^64 (versus X^62). The algorithm performs these calculations: X^2 = X·X, X^4 = (X^2)^2, X^8 = (X^4)^2, X^16 = (X^8)^2, X^32 = (X^16)^2, X^64 = (X^32)^2. How many multiplications? Six (versus nine for X^62).

Precise Number of Multiplies Can we find the precise number of multiplications used by the algorithm? Consider X^65. The algorithm performs these calculations: X^2 = X·X, X^4 = (X^2)^2, X^8 = (X^4)^2, X^16 = (X^8)^2, X^32 = (X^16)^2, X^65 = (X^32)^2·X. How many multiplications? Seven (versus six for X^64).

Another Example Consider X^63. The algorithm performs these calculations: X^3 = (X^2)·X, X^7 = (X^3)^2·X, X^15 = (X^7)^2·X, X^31 = (X^15)^2·X, X^63 = (X^31)^2·X. How many multiplications? Ten (versus six for X^64). What does the number depend on?

Precise Number of Multiplies Compute X^N. What is the number of multiplications when N=0 or N=1? Zero. What is the important feature of the value of N? Its binary representation. Let b(N) denote the number of ones in the binary representation of N. What is the formula for the number of multiplications? ⌊log₂ N⌋ + b(N) − 1. For example, 62 = 111110 in binary, so ⌊log₂ 62⌋ = 5 and b(62) = 5, giving 5 + 5 − 1 = 9, matching the trace above. (The slide tabulates N against #mult.)
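A small Java check of this formula; the helper names are my own, and countMultiplies simply mirrors the even/odd cases of the recursive power method above.

    // Number of multiplications the recursive algorithm uses for exponent n (n >= 0).
    static int countMultiplies(int n) {
        if (n <= 1) return 0;                       // base cases cost nothing
        return countMultiplies(n / 2) + (n % 2 == 0 ? 1 : 2);
    }

    // Closed form: floor(log2 n) + b(n) - 1, where b(n) counts the one-bits of n.
    static int formula(int n) {
        int floorLog2 = 31 - Integer.numberOfLeadingZeros(n);
        return floorLog2 + Integer.bitCount(n) - 1;
    }

    // For n = 62, 63, 64, 65 both methods give 9, 10, 6, 7.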

Fast Exponentiation Is the algorithm optimal? No. Consider X^62: fast exponentiation requires nine multiplications. Can you calculate X^62 using only 8 multiplications? How? Compute X^2, X^4, X^8, …, X^62. You are asked to solve this problem in Homework 2.

Recursion and Recurrences

Methods for Recurrences Two straightforward methods for solving recurrences are (1) repeated expansion followed by summation, and (2) forming telescoping sums.

Method 1 Repeated expansion Recurrence: T(n) = T(n-1) + 1
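A sketch of the expansion, assuming T(1) = 1: T(n) = T(n-1) + 1 = T(n-2) + 2 = T(n-3) + 3 = … = T(1) + (n-1) = n, so T(n) = O(n).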

Method 2 Telescoping sum Recurrence: T(n) = T(n-1) + 1
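A sketch of the telescoping sum, again assuming T(1) = 1: write T(k) − T(k−1) = 1 for k = 2, 3, …, n and add the equations; all intermediate terms cancel, leaving T(n) − T(1) = n − 1, so T(n) = n = O(n).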

Telescoping Sum Recall Algorithm 3 for finding the Maximum Subsequence Sum. Recurrence: T(1) = 1, T(N) = 2 T(N/2) + N. Divide the recurrence by N: T(N)/N = T(N/2)/(N/2) + 1. The recurrence is valid for any N that is a power of 2:
T(N)/N = T(N/2)/(N/2) + 1
T(N/2)/(N/2) = T(N/4)/(N/4) + 1
T(N/4)/(N/4) = T(N/8)/(N/8) + 1
…
T(4)/4 = T(2)/2 + 1
T(2)/2 = T(1)/1 + 1
Telescoping the sum, we get T(N)/N = T(1)/1 + log N. So T(N) = N log N + N = O(N log N).
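For reference, here is a sketch of a divide-and-conquer maximum subsequence sum routine of the kind this recurrence describes. It is a generic version, not necessarily the exact Algorithm 3 from the course text, and the method name maxSubSum is my own.

    // Maximum subsequence sum of a[left..right] by divide and conquer.
    // The best subsequence lies entirely in the left half, entirely in the
    // right half, or spans the middle; the spanning case costs O(N) work,
    // which gives the recurrence T(N) = 2 T(N/2) + N.
    static int maxSubSum(int[] a, int left, int right) {
        if (left == right) {
            return Math.max(a[left], 0);    // the empty subsequence has sum 0
        }
        int center = (left + right) / 2;
        int maxLeft = maxSubSum(a, left, center);
        int maxRight = maxSubSum(a, center + 1, right);

        // Best sum of a suffix of the left half ending at center.
        int leftBorder = 0, maxLeftBorder = 0;
        for (int i = center; i >= left; i--) {
            leftBorder += a[i];
            maxLeftBorder = Math.max(maxLeftBorder, leftBorder);
        }

        // Best sum of a prefix of the right half starting at center + 1.
        int rightBorder = 0, maxRightBorder = 0;
        for (int i = center + 1; i <= right; i++) {
            rightBorder += a[i];
            maxRightBorder = Math.max(maxRightBorder, rightBorder);
        }

        return Math.max(Math.max(maxLeft, maxRight), maxLeftBorder + maxRightBorder);
    }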