Lecture 4. Paradigm #2: Recursion

Last time we discussed Fibonacci numbers F(n), and the algorithm

fib(n)
  if (n <= 1) then return(n)
  else return(fib(n-1) + fib(n-2))

The problem with this algorithm is that it is woefully inefficient. Let T(n) denote the number of steps needed by fib(n). Then T(0) = 1, T(1) = 1, and T(n) = T(n-1) + T(n-2) + 1. It is now easy to guess the solution T(n) = 2F(n+1) - 1.

Proof by induction: Clearly the claim is true for n = 0, 1. Now assume it is true for all n < N; we prove it for n = N:

T(N) = T(N-1) + T(N-2) + 1
     = (2F(N) - 1) + (2F(N-1) - 1) + 1
     = 2(F(N) + F(N-1)) - 1
     = 2F(N+1) - 1.

We know F(n) = Θ(a^n), where a = (1+√5)/2, so T(n) grows exponentially.
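As a quick sanity check (our illustration, not part of the original lecture), here is a minimal Python sketch that counts the steps of the naive recursion and compares them against the closed form 2F(n+1) - 1; the names fib_counted and steps are our own.

# Count the "steps" of naive fib: one unit of work per call,
# matching the recurrence T(n) = T(n-1) + T(n-2) + 1.
def fib_counted(n):
    if n <= 1:
        return n, 1                    # base case: 1 step
    a, ta = fib_counted(n - 1)
    b, tb = fib_counted(n - 2)
    return a + b, ta + tb + 1          # combine: 1 more step

def fib(n):
    # Iterative Fibonacci, used only to evaluate the closed form.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(15):
    _, steps = fib_counted(n)
    assert steps == 2 * fib(n + 1) - 1     # closed form from the induction proof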

Memoization

A trick called "memoization" can help recursion: store function values as they are computed. When the function is invoked, first check the argument to see if the function value is already known, and if so, don't recompute it. Some programming languages even offer memoization as a built-in option.

Here are two Maple programs for computing Fibonacci numbers; the first just uses the ordinary recursive method, and the second uses memoization (through the command "option remember").

A recursive one:

f := proc(n)
  if n ≤ 1 then n else f(n - 1) + f(n - 2) end if
end proc;

One that looks recursive, but that uses memoization:

g := proc(n)
  option remember;
  if n ≤ 1 then n else g(n - 1) + g(n - 2) end if
end proc;

When you run these and time them, you'll see the amazing difference in the running times. The "memoized" version runs in linear time.
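For readers who prefer Python, here is a rough equivalent of the two Maple programs (our sketch, not from the lecture); functools.lru_cache plays the role of Maple's "option remember".

from functools import lru_cache

# Ordinary recursion: exponentially many repeated subcalls.
def f(n):
    return n if n <= 1 else f(n - 1) + f(n - 2)

# Memoized recursion: each g(k) is computed once and cached,
# so the total work is linear in n.
@lru_cache(maxsize=None)
def g(n):
    return n if n <= 1 else g(n - 1) + g(n - 2)

print(g(200))   # instant; f(200) would effectively never finish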

Printing permutations

Sometimes, recursion does provide simple and efficient solutions. Consider the problem of printing out all permutations of {1, 2, ..., n} in lexicographic order. Why might you want to do this? Well, some combinatorial problems have no known efficient solution (such as traveling salesman), and if you want to be absolutely sure you've covered all the possibilities for some small case, enumerating every permutation is one way to do it.

How can we do this? If we output a 1 followed by all possible permutations of the elements other than 1, then a 2 followed by all possible permutations of the elements other than 2, then a 3 followed by..., etc., we'll have covered all the cases exactly once.

Printing all permutations...

So we might try a recursive algorithm. What should the input parameter be? If we just say n, with the intention that this gives all permutations of {1, 2, ..., n}, that's not going to be good enough, since later we will be permuting some arbitrary subset of this. So you might think that the input parameter should be an arbitrary set S. But even this is not quite enough, since we will have to choose an arbitrary element i out of S, and then print i followed by all the permutations of S - {i}. If we don't want to store all the permutations of S - {i} before we output them, we need some way to tell the program that when it goes and prints all the permutations of S - {i}, it should print i first, preceding each one. This suggests making a program with two parameters: one will be the fixed "prefix" of numbers that is printed out, and the second the set of remaining numbers to be permuted.

Printing permutations

printperm(P, S)
  /* P is a prefix, S is a nonempty set */
  if S = {x} then
    print P, x          /* output the completed permutation */
  else
    for each element i of S do
      printperm((P, i), S - {i})

There are n! permutations, each of length n, so any program must spend Θ(n · n!) time just to print them all. Let me give a simple amortized counting argument showing that printperm matches this bound. We will be printing n · n! symbols in total. Each time the "else" statement is executed, we charge O(1) to that "i". This particular "i" among the n · n! printed symbols (note that i appears in many different permutations, but we are just referring to one instance of such an "i") gets charged only once. Summing up, the total time is O(n · n!).
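A direct Python transcription of the pseudocode might look like this (our sketch; keeping the remaining elements sorted makes the output lexicographic, as the two-parameter design above intends):

def printperm(prefix, s):
    # prefix: list of numbers already fixed; s: remaining numbers, sorted.
    if len(s) == 1:
        print(*prefix, s[0])               # base case: one permutation complete
    else:
        for i in s:                        # choose i in increasing order
            printperm(prefix + [i],        # i becomes part of the prefix
                      [x for x in s if x != i])

printperm([], [1, 2, 3])    # prints the 6 permutations of {1, 2, 3} in order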

Paradigm #3: Divide-and-conquer

"Divide et impera" [Divide and rule]
-- ancient political maxim cited by Machiavelli
-- attributed to Julius Caesar (100-44 BC)

The paradigm of divide-and-conquer:
-- DIVIDE the problem up into smaller subproblems
-- CONQUER by solving each subproblem
-- COMBINE the results together to solve the original problem

Divide & Conquer: MergeSort

Example: merge sort, an O(n log n) algorithm for sorting. (See CLR.)

MERGE-SORT(A, p, r)
  /* A is an array to be sorted. This algorithm sorts the elements in the subarray A[p..r] */
  if p < r then
    q := floor((p+r)/2)
    MERGE-SORT(A, p, q)
    MERGE-SORT(A, q+1, r)
    MERGE(A, p, q, r)
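The MERGE subroutine is not spelled out on the slide; here is one way the whole algorithm might look in Python (our sketch, merging with a simple two-pointer pass rather than CLR's sentinel version):

def merge_sort(a, p, r):
    # Sorts a[p..r] in place (inclusive indices, as in the pseudocode).
    if p < r:
        q = (p + r) // 2            # divide
        merge_sort(a, p, q)         # conquer left half
        merge_sort(a, q + 1, r)     # conquer right half
        merge(a, p, q, r)           # combine

def merge(a, p, q, r):
    # Merge the two sorted runs a[p..q] and a[q+1..r].
    left, right = a[p:q + 1], a[q + 1:r + 1]
    i = j = 0
    for k in range(p, r + 1):
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            a[k] = left[i]; i += 1
        else:
            a[k] = right[j]; j += 1

data = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(data, 0, len(data) - 1)
print(data)    # [1, 2, 2, 3, 4, 5, 6, 7]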

MergeSort continued...

Let T(n) denote the number of comparisons performed by algorithm MERGE-SORT on an input of size n. Then we have

T(n) = 2T(n/2) + n
     = 4T(n/4) + 2n        (expanding)
     ...
     = 2^k T(n/2^k) + kn
     ...
     = O(n log n)          when k = log n.

Another way: prove T(2^k) = (k+1) · 2^k by induction. It is true for k = 0. Now assume it is true for k; we will prove it for k+1. We have

T(2^(k+1)) = 2T(2^k) + 2^(k+1)      (by the recurrence)
           = 2(k+1)2^k + 2^(k+1)    (by induction)
           = (k+2) · 2^(k+1),

and this proves the result by induction.

Divide & conquer: multiply 2 numbers

Direct multiplication of two n-bit numbers takes n^2 steps. (Note: we assume n is very large, and each register can hold only O(1) bits.) When do we need to multiply two very large numbers? In cryptography and network security: messages are represented as numbers, and encryption and decryption need to multiply such numbers.

My comment: but really, none of the above seems to be a good enough reason. Even if you wish to multiply numbers of 1000 digits, an O(n^2) algorithm is good enough!

How to multiply 2 n-bit numbers

[Figure: the grade-school method — one n-bit number written below the other, producing n shifted partial products that are summed into the result.]

History: In 1960, A. N. Kolmogorov (we will meet him again later in this class) organized a seminar on mathematical problems in cybernetics at MSU. He conjectured an Ω(n^2) lower bound for multiplication and other problems. Karatsuba, then a 25-year-old student, proposed the O(n^(lg 3)) solution within a week, by divide and conquer. Kolmogorov was upset; he discussed this result in the next seminar, which was then terminated. The paper was written up by Kolmogorov but authored by Karatsuba (who did not know about it before he received the reprints), and was published in Soviet Physics Doklady.

Can we multiply 2 numbers faster?

Karatsuba's 1962 algorithm multiplies two n-bit numbers in O(n^1.59) steps. Suppose we wish to multiply two n-bit numbers X and Y. Write X = ab and Y = cd, where a, b, c, d are n/2-bit numbers (a is the high half of X and b the low half, and similarly c and d for Y). Then

XY = (a · 2^(n/2) + b)(c · 2^(n/2) + d)
   = ac · 2^n + (ad + bc) · 2^(n/2) + bd.

Multiplying 2 numbers

So we have broken the problem up into 4 subproblems, each of size n/2. Thus

T(2^k) = 4T(2^(k-1)) + c · 2^k
       = 4^2 T(2^(k-2)) + c(2^k + 2^(k+1))
       ...
       = 4^k T(1) + c(2^k + 2^(k+1) + ... + 2^(2k-1)).

Now T(1) = 1, so

T(2^k) = 4^k + c(2^k + 2^(k+1) + ... + 2^(2k-1)) ≤ 4^k + c · 4^k = (c+1) · 4^k.

This gives T(n) = O(n^2). No improvement!

Multiplying 2 numbers

But Karatsuba did not give up. He observed:

XY = (2^n + 2^(n/2)) · ac + 2^(n/2) · (a-b) · (d-c) + (2^(n/2) + 1) · bd

Now we have broken the problem up into only 3 subproblems, each of size n/2, plus some linear work. This time it should work:

K(n) ≤ 3K(n/2) + cn,

which gives K(2^k) ≤ c(3^(k+1) - 2^(k+1)). Putting n = 2^k, we see that for n a power of 2, we get

K(n) ≤ c(3^(lg n + 1) - 2^(lg n + 1)) = c(3 · n^(lg 3) - 2n).

Here we have used the fact that a^(log b) = b^(log a). Since lg 3 is about 1.585, this gives us an O(n^1.59) algorithm.

Note: using the FFT, Schönhage and Strassen obtained O(n log n log log n) in 1971. In 2007, this was slightly improved by Martin Fürer.
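To make the three-subproblem idea concrete, here is a rough Python sketch of Karatsuba's scheme on Python integers (our own illustration; a real bignum implementation would work on digit arrays and use a larger base-case cutoff):

def karatsuba(x, y):
    # Multiply integers x and y using 3 recursive half-size products.
    if x < 0: return -karatsuba(-x, y)
    if y < 0: return -karatsuba(x, -y)
    if x < 16 or y < 16:                  # small base case: direct multiply
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    a, b = divmod(x, 1 << half)           # x = a·2^half + b
    c, d = divmod(y, 1 << half)           # y = c·2^half + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    mid = karatsuba(a - b, d - c)         # (a-b)(d-c) = ad + bc - ac - bd
    # xy = ac·2^(2·half) + (ad + bc)·2^half + bd
    return (ac << (2 * half)) + ((ac + bd + mid) << half) + bd

assert karatsuba(1234567891011, 1098765432101) == 1234567891011 * 1098765432101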

Divide & conquer: finding max-min

Problem: find both the maximum and minimum of a set of n numbers.

Obvious method: first compute the maximum, using n-1 comparisons; then discard this maximum, and compute the minimum of the remaining numbers, using n-2 comparisons. Total work: 2n-3 comparisons.

Maxmin by divide & conquer

MAXMIN(S)
  /* find both the maximum and minimum elements of a set S of n elements */
  if S = {a} then                        /* one element */
    (min, max) := (a, a)
  else if S = {a, b} with a < b then     /* two elements: one comparison */
    (min, max) := (a, b)
  else                                   /* |S| > 2 */
    divide S into two subsets S1 and S2, with floor(n/2) and ceil(n/2) elements respectively
    (min1, max1) := MAXMIN(S1)
    (min2, max2) := MAXMIN(S2)
    (min, max) := (min(min1, min2), max(max1, max2))
  return (min, max)
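In Python the same recursion might be written as follows (our sketch; it also counts comparisons so you can check the 3n/2 - 2 bound derived on the next slide):

def maxmin(s):
    # Returns (min, max, comparisons) for a nonempty list s.
    if len(s) == 1:
        return s[0], s[0], 0
    if len(s) == 2:                      # one comparison settles both
        a, b = s
        return (a, b, 1) if a < b else (b, a, 1)
    mid = len(s) // 2                    # split into halves
    lo1, hi1, c1 = maxmin(s[:mid])
    lo2, hi2, c2 = maxmin(s[mid:])
    # two more comparisons to combine the halves
    return min(lo1, lo2), max(hi1, hi2), c1 + c2 + 2

lo, hi, comps = maxmin([7, 3, 9, 1, 4, 8, 2, 6])
print(lo, hi, comps)    # 1 9 10   (3·8/2 - 2 = 10 comparisons for n = 8)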

Time complexity

T(n) = 1              when n = 2,
T(n) = 2T(n/2) + 2    otherwise.

This gives T(n) = 3n/2 - 2 when n is a power of 2.