Lecture 06: Decrease-and-Conquer Approach
ITS033 – Programming & Algorithms
Asst. Prof. Dr. Bunyarit Uyyanonvara
IT Program, Image and Vision Computing Lab
School of Information, Computer and Communication Technology (ICT)
Sirindhorn International Institute of Technology (SIIT), Thammasat University
http://www.siit.tu.ac.th/bunyarit  bunyarit@siit.tu.ac.th  02 5013505 X 2005
ITS033 Course Topics
Topic 01 – Problems & Algorithmic Problem Solving
Topic 02 – Algorithm Representation & Efficiency Analysis
Topic 03 – State Space of a Problem
Topic 04 – Brute Force Algorithms
Topic 05 – Divide and Conquer
Topic 06 – Decrease and Conquer
Topic 07 – Dynamic Programming
Topic 08 – Transform and Conquer
Topic 09 – Graph Algorithms
Topic 10 – Minimum Spanning Tree
Topic 11 – Shortest Path Problem
Topic 12 – Coping with the Limitations of Algorithm Power
http://www.siit.tu.ac.th/bunyarit/its033.php and http://www.vcharkarn.com/vlesson/showlesson.php?lessonid=7
This Week Overview
- Problem size reduction
- Insertion Sort
- Recursive programming
- Examples: Factorial, Tower of Hanoi
Decrease & Conquer: Concept (Lecture 06.1, ITS033 – Programming & Algorithms)
Introduction The decrease-and-conquer technique is based on exploiting the relationship between a solution to a given instance of a problem and a solution to a smaller instance of the same problem. Once such a relationship is established, it can be exploited either top down (recursively) or bottom up (without recursion).
Introduction
There are three major variations of decrease-and-conquer:
1. Decrease by a constant
2. Decrease by a constant factor
3. Variable-size decrease
Decrease by a constant In the decrease-by-a-constant variation, the size of an instance is reduced by the same constant on each iteration of the algorithm. Typically, this constant is equal to 1
Decrease by a Constant
Decrease by a constant Consider, as an example, the exponentiation problem of computing a^n for a positive integer exponent n. The relationship between a solution to an instance of size n and an instance of size n - 1 is obtained by the obvious formula a^n = a^(n-1) × a. So the function f(n) = a^n can be computed either "top down" by using its recursive definition or "bottom up" by multiplying a by itself n - 1 times.
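As a concrete illustration (not taken from the slides), here is a minimal C sketch of both options; the function names power_topdown and power_bottomup are my own, and a positive integer exponent n is assumed:

#include <stdio.h>

/* Top-down (recursive): a^n = a^(n-1) * a, with a^1 = a as the smallest case (assumes n >= 1) */
double power_topdown(double a, int n)
{
    if (n == 1)
        return a;
    return power_topdown(a, n - 1) * a;
}

/* Bottom-up (iterative): multiply a by itself n - 1 times (assumes n >= 1) */
double power_bottomup(double a, int n)
{
    double result = a;
    for (int i = 1; i < n; i++)
        result = result * a;
    return result;
}

int main(void)
{
    printf("%g %g\n", power_topdown(2, 10), power_bottomup(2, 10));  /* prints 1024 1024 */
    return 0;
}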
Decrease by a Constant Factor The decrease-by-a-constant-factor technique suggests reducing a problem’s instance by the same constant factor on each iteration of the algorithm. In most applications, this constant factor is equal to two.
Decrease by a Constant Factor
Decrease by a Constant Factor If the instance of size n is to compute a^n, the instance of half its size is to compute a^(n/2), with the obvious relationship between the two: a^n = (a^(n/2))^2. But since we consider instances of the exponentiation problem with integer exponents only, this works only for even n. If n is odd, we have to compute a^(n-1) by using the rule for even-valued exponents and then multiply the result by a.
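A minimal C sketch of this decrease-by-a-constant-factor rule (the function name power_halving is illustrative, not from the slides):

#include <stdio.h>

/* a^n = (a^(n/2))^2 for even n; for odd n, square a^((n-1)/2) and multiply by a once more */
double power_halving(double a, int n)
{
    double half;
    if (n == 0)
        return 1;
    half = power_halving(a, n / 2);   /* instance of half the size (integer division) */
    if (n % 2 == 0)
        return half * half;
    else
        return half * half * a;       /* odd n: one extra multiplication by a */
}

int main(void)
{
    printf("%g\n", power_halving(3, 5));  /* prints 243 */
    return 0;
}

Because the instance size is halved on each call, the number of multiplications grows only logarithmically in n rather than linearly.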
Variable-Size Decrease In the variable-size-decrease variety of decrease-and-conquer, the size-reduction pattern varies from one iteration of the algorithm to another. Euclid's algorithm for computing the greatest common divisor provides a good example of such a situation.
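A minimal C sketch of Euclid's algorithm, using the standard rule gcd(m, n) = gcd(n, m mod n) with gcd(m, 0) = m (the wrapper code around the rule is mine):

#include <stdio.h>

/* Euclid's algorithm: the size of the second argument decreases by a
   varying amount on each call, which is why this is variable-size decrease. */
int gcd(int m, int n)
{
    if (n == 0)
        return m;
    return gcd(n, m % n);
}

int main(void)
{
    printf("%d\n", gcd(60, 24));  /* prints 12 */
    return 0;
}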
Decrease & Conquer: Insertion Sort (Lecture 06.2, ITS033 – Programming & Algorithms)
Insertion Sort We consider an application of the decrease-by-one technique to sorting an array A[0..n-1]. Following the technique's idea, we assume that the smaller problem of sorting the array A[0..n-2] has already been solved to give us a sorted array of size n - 1: A[0] ≤ . . . ≤ A[n-2]. How can we take advantage of this solution to the smaller problem to get a solution to the original problem, taking into account the element A[n-1]?
Insertion Sort We can scan the sorted subarray from right to left until the first element smaller than or equal to A[n-1] is encountered, and then insert A[n-1] right after that element. This gives straight insertion sort, or simply insertion sort. Alternatively, we can use binary search to find an appropriate position for A[n-1] in the sorted portion of the array; this variation is called binary insertion sort.
Insertion Sort
Insertion Sort
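As a minimal C sketch of straight insertion sort following the scan-from-the-right description above (the function name insertion_sort and the driver code are illustrative, not from the slides):

#include <stdio.h>

/* Straight insertion sort: for each i, insert A[i] into the already
   sorted subarray A[0..i-1] by scanning from right to left. */
void insertion_sort(char A[], int n)
{
    for (int i = 1; i < n; i++)
    {
        char v = A[i];
        int j = i - 1;
        while (j >= 0 && A[j] > v)   /* key comparison A[j] > v */
        {
            A[j + 1] = A[j];         /* shift larger elements one place right */
            j--;
        }
        A[j + 1] = v;                /* insert v right after the first element <= v */
    }
}

int main(void)
{
    char A[] = {'B','R','U','T','E','F','O','R','C','E'};
    insertion_sort(A, 10);
    for (int i = 0; i < 10; i++)
        printf("%c ", A[i]);         /* prints B C E E F O R R T U */
    printf("\n");
    return 0;
}

The demo on the following slides traces the same process on the input B R U T E F O R C E.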
Insertion Sort Demo B R U T E F O R C E unsorted active sorted
Insertion Sort Demo B R T U E F O R C E unsorted active sorted
Insertion Sort Demo B R T E U F O R C E unsorted active sorted
Insertion Sort Demo B R E T U F O R C E unsorted active sorted
Insertion Sort Demo B E R T U F O R C E unsorted active sorted
Insertion Sort Demo B E R T F U O R C E unsorted active sorted
Insertion Sort Demo B E R F T U O R C E unsorted active sorted
Insertion Sort Demo B E F R T U O R C E unsorted active sorted
Insertion Sort Demo B E F R T O U R C E unsorted active sorted
Insertion Sort
Analysis – Worst Case The basic operation of the algorithm is the key comparison A[j] > v. The number of key comparisons in this algorithm obviously depends on the nature of the input. In the worst case, A[j] > v is executed the largest number of times, i.e., for every j = i - 1, . . . , 0. Since v = A[i], this happens if and only if A[j] > A[i] for j = i - 1, . . . , 0.
Analysis – Worst Case In other words, the worst-case input is an array of strictly decreasing values. The number of key comparisons for such an input is C_worst(n) = (n-1) + (n-2) + . . . + 1 = n(n-1)/2, which is in Θ(n²).
Analysis – Best Case In the best case, the comparison A[j] > v is executed only once on every iteration of the outer loop. This happens if and only if A[i-1] ≤ A[i] for every i = 1, . . . , n-1, i.e., if the input array is already sorted in ascending order. Thus, for sorted arrays, the number of key comparisons is C_best(n) = n - 1, which is in Θ(n).
Decrease & Conquer: Recursive Programming (Lecture 06.3, ITS033 – Programming & Algorithms)
Concept of Recursion A recursive definition is one which uses the word or concept being defined in the definition itself
Factorials
n! = n × (n-1) × (n-2) × … × 3 × 2 × 1
How is this recursive? The tail (n-1) × (n-2) × … × 3 × 2 × 1 is exactly (n-1)!
So: n! = n × (n-1)!
The factorial function is defined in terms of itself (i.e., recursively).
Recursive Calculation of Factorials n! = n × (n-1)! In order for this to work, we need a stop case (the simplest case) Here: 0! = 1
This is Iterative Problem Solving
but, this is Recursion
Iterative Solution
n! = n × (n-1) × (n-2) × … × 1, if n > 0

long Factorial(int n)
{
    long fact = 1;
    for (int i = 2; i <= n; i++)
        fact = fact * i;
    return fact;
}
Programming with Recursion
Recursive definition – a definition in which something is defined in terms of a smaller version of itself, e.g.
n! = n × (n-1)!, if n > 0
n! = 1, if n = 0
Programming with Recursion
n! = n × (n-1)!, if n > 0
n! = 1, if n = 0
Stopping condition – the case in a recursive definition for which the solution can be stated nonrecursively.
General (recursive) case – the case for which the solution is expressed in terms of a smaller version of itself.
Recursion Solution

long MyFact(int n)
{
    if (n == 0)                   // stopping condition
        return 1;
    return (n * MyFact(n - 1));   // recursive case
}

Recursive call – a call made to the function from within the function itself.
How does this work?

int MyFact(int n)
{
    if (n == 0)       // the stop case
        return 1;
    else
        return n * MyFact(n - 1);
}  // factorial

int x = MyFact(3);
The call expands: MyFact(3) = 3 * MyFact(2) = 3 * (2 * MyFact(1)) = 3 * (2 * (1 * MyFact(0))).
The calls then return: MyFact(0) = 1, MyFact(1) = 1, MyFact(2) = 2, and finally MyFact(3) = 6, so x = 6.
Example #1: Iterative programming

#include <vcl.h>
#include <stdio.h>
#include <conio.h>

void iforgot_A(int n)
{
    for (int i = 1; i <= n; i++)
        printf("%d, I will remember to do my homework.\n", i);
    printf("Maybe NOT!");
}

void main()
{
    iforgot_A(5);
    getch();
}

>> iforgot_A(5)
1, I will remember to do my homework.
2, I will remember to do my homework.
3, I will remember to do my homework.
4, I will remember to do my homework.
5, I will remember to do my homework.
Maybe NOT!
Example #2: Recursive programming

#include <vcl.h>
#include <stdio.h>
#include <conio.h>

void iforgot_B(int n)
{
    if (n > 0)
    {
        printf("%d, I will remember to do my homework.\n", n);
        iforgot_B(n - 1);
    }
    else
        printf("Maybe NOT!");
}

void main()
{
    iforgot_B(5);
    getch();
}

>> iforgot_B(5)
5, I will remember to do my homework.
4, I will remember to do my homework.
3, I will remember to do my homework.
2, I will remember to do my homework.
1, I will remember to do my homework.
Maybe NOT!
Writing Recursive Functions
1. Get an exact definition of the problem to be solved.
2. Determine the size of the input of the problem.
3. Identify and solve the stopping condition(s), in which the problem can be expressed non-recursively.
4. Identify and solve the general case(s) correctly in terms of a smaller case of the same problem.
Concept 2 – Recursive Thinking
- Divide or decrease the problem: one "step" makes the problem smaller (but of the same type).
- Stopping case: the solution is trivial.
Recursion as a problem-solving technique
- Recursive methods are defined in terms of themselves; in the code you will see a call to the method itself.
- There can be more than one "activation" of a method going at the same time, and each activation has its own values of the parameters.
- Each activation returns to where it was called from; the system keeps track of this.
Stopping the recursion The recursion must always STOP. The stopping condition is therefore essential to any recursive solution.
Stopping the recursion
The general pattern is: test for the stopping condition; if not at the stopping condition, either
- do one step towards the solution, then call the method again to solve the rest, or
- call the method again to solve most of the problem, then do the final step.
Implementation of Hanoi See the implementation of Tower of Hanoi in the lecture
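The Hanoi code itself is not reproduced in these notes; the following is a minimal C sketch of the standard recursive solution (the function name hanoi and the peg labels are illustrative):

#include <stdio.h>

/* Move n disks from peg 'from' to peg 'to', using peg 'via' as a helper.
   Decrease-by-one: moving n disks reduces to moving n-1 disks twice. */
void hanoi(int n, char from, char to, char via)
{
    if (n == 0)
        return;                      /* stopping condition: nothing to move */
    hanoi(n - 1, from, via, to);     /* move n-1 disks out of the way */
    printf("Move disk %d from %c to %c\n", n, from, to);
    hanoi(n - 1, via, to, from);     /* move those n-1 disks onto disk n */
}

int main(void)
{
    hanoi(3, 'A', 'C', 'B');         /* prints the 2^3 - 1 = 7 moves */
    return 0;
}

Since moving n disks requires moving n-1 disks twice plus one extra move, the total number of moves is 2^n - 1.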
Advantages of Recursion
- Some problems have complicated iterative solutions but conceptually simple recursive ones.
- Recursion is good for dealing with dynamic data structures (size determined at run time).
Disadvantages of Recursion
- Extra method calls use memory space and other resources.
- Thinking up a recursive solution is hard at first, as is believing that a recursive solution will actually work.
Why Program Recursively?
Recursive code: typically has fewer lines; can be conceptually simpler, depending on your perspective; is often easier to maintain.
Non-recursive code: is longer; executes faster, depending on the hardware and programming language.
This Week's Practice Write a recursive function to calculate Fibonacci numbers. What is the result of f(6)?
Your recursive function
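One possible answer, as a minimal C sketch (it assumes the base cases fibo(1) = fibo(2) = 1, matching the recursion tree on the next slide; some definitions instead start from fibo(0) = 0):

#include <stdio.h>

/* Recursive definition: fibo(1) = fibo(2) = 1, fibo(n) = fibo(n-1) + fibo(n-2) */
int fibo(int n)
{
    if (n <= 2)                      /* stopping condition */
        return 1;
    return fibo(n - 1) + fibo(n - 2);
}

int main(void)
{
    printf("%d\n", fibo(6));         /* prints 8 */
    return 0;
}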
Fibonacci Recursive Tree (for fibo(6)): the root expands to fibo(5) + fibo(4); fibo(5) expands to fibo(4) + fibo(3), fibo(4) to fibo(3) + fibo(2), and so on, until every branch reaches fibo(2) or fibo(1), each of which returns 1. The eight leaves of the tree sum to 8.
Decrease & Conquer: Homework (ITS033 – Programming & Algorithms)
Homework: Fake-Coin Problem Design an algorithm using the decrease-and-conquer approach to solve the fake-coin problem: among n identical-looking coins, one is fake (lighter than a genuine coin). Use a balance scale to find that fake coin. How many times do you use the balance to find the fake coin among n coins? Is this number optimal?
End of Chapter 6 Thank you!