CS 206 Introduction to Computer Science II 10 / 08 / 2008 Instructor: Michael Eckmann
Michael Eckmann - Skidmore College - CS Fall 2008 Today’s Topics Questions? Comments? More on recursion –Change making problem Dynamic programming to the rescue when making redundant recursive calls Divide and Conquer technique –MergeSort
Tail-Recursion Let's look at what I sent out this morning.
Recursion 1. have at least one base case that is not recursive 2. recursive case(s) must progress towards the base case 3. trust that your recursive call does what it says it will do (without having to unravel all the recursion in your head) 4. try not to do redundant work. That is, in different recursive calls, don't recalculate the same info.
Recursion So, we want to create a way to solve the minimum # of coins problem with arbitrary coin denominations. A recursive strategy is: –BASE CASE: If the change K we're trying to make is exactly equal to a coin denomination, then we need only 1 coin, which is the least # of coins. –RECURSIVE STEP: Otherwise, for all possible values of i, split the amount of change K into two amounts, i and K-i. Solve these two amounts recursively and, when done with all the i's, keep the minimum sum of the number of coins over all the i's.
Recursion split the total into parts and solve those parts recursively. –e.g., split .63 into i and .63-i for every possible i: –.63 = .01 + .62 –.63 = .02 + .61 –.63 = .03 + .60 –... –.63 = .31 + .32 –See example on page 288, figure 7.22 –Each time through, save the least number of coins.
Recursion split the total into parts and solve those parts recursively. –e.g., split .63 into i and .63-i for every possible i: –.63 = .01 + .62 –.63 = .02 + .61 –.63 = .03 + .60 –... –.63 = .31 + .32 –Each time through, save the least number of coins. –The base case of the recursion is when the change we are making is equal to one of the coins – hence 1 coin. –Otherwise recurse. –Why is this bad?
Recursion split the total into parts and solve those parts recursively. –e.g., split .63 into i and .63-i for every possible i: –.63 = .01 + .62 –.63 = .02 + .61 –.63 = .03 + .60 –... –.63 = .31 + .32 –Each time through, save the least number of coins. –The base case of the recursion is when the change we are making is equal to one of the coins – hence 1 coin. –Otherwise recurse. –Why is this bad? Let's see (let's try to make change for some amounts with a Java implementation of this).
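A sketch of what that Java implementation might look like. The coin set {1, 5, 10, 21, 25} (in cents) and the name minCoins are assumptions for illustration, not code from the slides:

```java
public class ChangeMaker {
    // Assumed coin denominations, in cents.
    static final int[] COINS = {1, 5, 10, 21, 25};

    // Naive recursive minimum-coin count for amount k (in cents).
    static int minCoins(int k) {
        // BASE CASE: k is exactly a coin denomination -> 1 coin is least.
        for (int c : COINS) {
            if (c == k) return 1;
        }
        // RECURSIVE STEP: try every split k = i + (k - i), keep the minimum.
        int best = Integer.MAX_VALUE;
        for (int i = 1; i <= k / 2; i++) {
            best = Math.min(best, minCoins(i) + minCoins(k - i));
        }
        return best;
    }

    public static void main(String[] args) {
        // Small amounts finish quickly; minCoins(63) makes so many
        // redundant recursive calls it is hopelessly slow -- the point
        // of the "Why is this bad?" question.
        System.out.println(minCoins(14)); // prints 5 (.10 + four .01 coins)
    }
}
```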
Recursion The major problem with that change making algorithm is that it makes so many recursive calls and duplicates work already done. Example anyone? An idea called dynamic programming is a good solution in this case. Dynamic programming is the idea that, instead of making recursive calls to figure out something we already figured out, we compute it once and save the value in a table to look up later.
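One way dynamic programming might be applied to change making in Java: fill in a table of answers for every amount from 1 up to K, so no amount is ever recomputed. The coin set and names here are illustrative assumptions:

```java
public class ChangeMakerDP {
    // Assumed coin denominations, in cents.
    static final int[] COINS = {1, 5, 10, 21, 25};

    // best[a] = minimum # of coins for amount a; filled from small
    // amounts up, so every value we look up is already computed.
    static int minCoins(int k) {
        int[] best = new int[k + 1]; // best[0] = 0 coins for 0 cents
        for (int a = 1; a <= k; a++) {
            best[a] = Integer.MAX_VALUE;
            for (int c : COINS) {
                // using coin c: 1 coin plus the saved answer for a - c
                if (c <= a && best[a - c] != Integer.MAX_VALUE) {
                    best[a] = Math.min(best[a], best[a - c] + 1);
                }
            }
        }
        return best[k];
    }

    public static void main(String[] args) {
        System.out.println(minCoins(63)); // prints 3 (.21 + .21 + .21)
    }
}
```

Unlike the naive recursion, this does O(k × #coins) work, so .63 is answered instantly.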
Dynamic Programming Could we use dynamic programming for the Fibonacci numbers? Yes. Let's write the Fibonacci method to take advantage of the stored values when we can.
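One possible version of that memoized Fibonacci method in Java (using a HashMap as the lookup table is an illustrative choice; an array would also work):

```java
import java.util.HashMap;
import java.util.Map;

public class Fib {
    // table of already-computed values: n -> fib(n)
    static Map<Integer, Long> memo = new HashMap<>();

    static long fib(int n) {
        if (n <= 1) return n;                  // base cases: fib(0)=0, fib(1)=1
        Long saved = memo.get(n);
        if (saved != null) return saved;       // already figured out: look it up
        long result = fib(n - 1) + fib(n - 2); // otherwise compute it once...
        memo.put(n, result);                   // ...and save it for later lookups
        return result;
    }

    public static void main(String[] args) {
        // Finishes instantly; the plain recursive version would make
        // billions of redundant calls for n = 50.
        System.out.println(fib(50)); // prints 12586269025
    }
}
```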
Divide and Conquer The divide and conquer technique is a way of converting a problem into smaller problems that can be solved individually, and then combining the answers to these subproblems in some way to solve the larger problem. DIVIDE = solve smaller problems recursively, except the base case(s). CONQUER = compute the solution to the overall problem by using the solutions to the smaller problems solved in the DIVIDE part.
Divide and Conquer (MergeSort) There are several popular divide and conquer sort algorithms. One is called MergeSort (see pages ). Later in the semester we'll talk about two others: HeapSort and QuickSort. But for now, let's just discuss the MergeSort algorithm and see how it uses the divide and conquer technique. For this problem of sorting, can anyone guess –what might happen in the divide stage? –or what happens in the conquer stage?
Divide and Conquer (MergeSort) Structure of Merge Sort: –if list has 0 or 1 element it is sorted, done (base case) –else Divide list into two lists and do MergeSort on each of these lists. Combine the two sorted lists into one sorted list. Let's assume we're sorting an array of ints. –what will we need to keep track of? –is it recursive? –what will we need to pass in as parameters to MergeSort? –if we write a separate method for the combine part, what will be its parameters and return values?
Divide and Conquer (MergeSort) Structure of Merge Sort: –if list has 0 or 1 element it is sorted, done (base case) –else Divide list into two lists and do MergeSort on each of these lists. Combine (merge) the two sorted lists into one sorted list. We'll need to pass in an array as a parameter to MergeSort, but since sometimes we'll only be sorting part of it, we need to pass in the range of indices to sort (or first index and length). The parameters for the merge method will be the array and information to tell it the two ranges of indices that need to be combined.
Divide and Conquer (MergeSort) Can anyone think of a good way to merge two sorted parts of an array?
Divide and Conquer (MergeSort) Can anyone think of a good way to merge two sorted parts of an array?
–Start off with a new array newarray that is empty, and newindex = 0
–Start the indices into the two parts of the array to merge, i1 and i2, at the correct values
–while (both parts of the array still have elements to consider)
–if the i1 element is less than the i2 element, copy it into newarray[newindex] and do i1++
–else copy the i2 element into newarray[newindex] and do i2++
–newindex++
–If there are any elements left in either part, copy them into newarray at the end.
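The merge pseudocode above might look like this in Java (the class wrapper and parameter names are illustrative assumptions):

```java
public class MergeDemo {
    // Merge two adjacent sorted ranges of array:
    // [first, first+leftLen) and [first+leftLen, first+leftLen+rightLen).
    static void merge(int[] array, int first, int leftLen, int rightLen) {
        int[] newarray = new int[leftLen + rightLen];
        int newindex = 0;
        int i1 = first;              // next element of the left part
        int i2 = first + leftLen;    // next element of the right part
        int end1 = first + leftLen;
        int end2 = first + leftLen + rightLen;
        // while both parts still have elements to consider, copy the smaller
        while (i1 < end1 && i2 < end2) {
            if (array[i1] <= array[i2]) newarray[newindex++] = array[i1++];
            else                        newarray[newindex++] = array[i2++];
        }
        // copy any elements left in either part onto the end of newarray
        while (i1 < end1) newarray[newindex++] = array[i1++];
        while (i2 < end2) newarray[newindex++] = array[i2++];
        // copy the merged result back into the original range
        System.arraycopy(newarray, 0, array, first, newarray.length);
    }

    public static void main(String[] args) {
        int[] a = {3, 7, 9, 2, 5, 8};   // two sorted halves: {3,7,9} and {2,5,8}
        merge(a, 0, 3, 3);
        System.out.println(java.util.Arrays.toString(a)); // [2, 3, 5, 7, 8, 9]
    }
}
```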
Divide and Conquer (MergeSort) Can anyone think of a good way to perform the divide part? We have an array and the first index and length of the portion we are dividing.
Divide and Conquer (MergeSort) Can anyone think of a good way to perform the divide part? We have an array, the first index (first), and the length of the portion we are dividing (len).
–if (len > 1)
leftHalfLen = len/2
rightHalfLen = len - leftHalfLen
mergeSort(array, first, leftHalfLen);
mergeSort(array, first+leftHalfLen, rightHalfLen);
merge(array, first, leftHalfLen, rightHalfLen);
–else do nothing, since a list of len = 1 or 0 is already sorted (base case).
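Putting the divide and the merge together, a self-contained MergeSort sketch in Java (names and class wrapper are illustrative; the merge method follows the strategy outlined for combining two sorted parts):

```java
public class MergeSortDemo {
    // Sort array[first .. first+len-1] using divide and conquer.
    static void mergeSort(int[] array, int first, int len) {
        if (len > 1) {
            int leftHalfLen = len / 2;
            int rightHalfLen = len - leftHalfLen;
            mergeSort(array, first, leftHalfLen);                  // DIVIDE: sort left half
            mergeSort(array, first + leftHalfLen, rightHalfLen);   // DIVIDE: sort right half
            merge(array, first, leftHalfLen, rightHalfLen);        // CONQUER: combine them
        }
        // else: a list of len 1 or 0 is already sorted (base case)
    }

    // Merge the two adjacent sorted ranges starting at first.
    static void merge(int[] array, int first, int leftLen, int rightLen) {
        int[] newarray = new int[leftLen + rightLen];
        int newindex = 0;
        int i1 = first, end1 = first + leftLen;
        int i2 = end1, end2 = end1 + rightLen;
        while (i1 < end1 && i2 < end2) {       // copy the smaller front element
            if (array[i1] <= array[i2]) newarray[newindex++] = array[i1++];
            else                        newarray[newindex++] = array[i2++];
        }
        while (i1 < end1) newarray[newindex++] = array[i1++];  // leftovers, left part
        while (i2 < end2) newarray[newindex++] = array[i2++];  // leftovers, right part
        System.arraycopy(newarray, 0, array, first, newarray.length);
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 9, 1, 7, 4};
        mergeSort(a, 0, a.length);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 4, 5, 7, 9]
    }
}
```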