
1 Prof. S.M. Lee Department of Computer Science


8 Recursion Recursion is more than just a programming technique. It has two other uses in computer science and software engineering, namely: as a way of describing, defining, or specifying things, and as a way of designing solutions to problems (divide and conquer).

9 Mathematical Examples
factorial function:
factorial(0) = 1
factorial(n) = n * factorial(n-1)   [for n > 0]
Let's compute factorial(3):
factorial(3) = 3 * factorial(2)
             = 3 * (2 * factorial(1))
             = 3 * (2 * (1 * factorial(0)))
             = 3 * (2 * (1 * 1))
             = 6

10 Fibonacci function:
fibonacci(0) = 1
fibonacci(1) = 1
fibonacci(n) = fibonacci(n-1) + fibonacci(n-2)   [for n > 1]
This definition is a little different from the previous ones because it has two base cases, not just one (in fact, you can have as many as you like), and in the recursive case there are two recursive calls, not just one (there can be as many as you like).

11 Recursion Recursion can be seen as building objects from objects that have set definitions. Recursion can also be seen in the opposite direction, as objects that are defined in terms of smaller and smaller parts. “Recursion is a different concept of circularity.” (Dr. Britt, Computing Concepts Magazine, March 97, p. 78)

12 Finding the powers of numbers
Suppose that we have a series of functions for finding the power of a number x:
pow0(x) = 1
pow1(x) = x = x * pow0(x)
pow2(x) = x * x = x * pow1(x)
pow3(x) = x * x * x = x * pow2(x)
We can turn this into something more usable by creating a variable for the power and making the pattern explicit:
pow(0, x) = 1
pow(n, x) = x * pow(n-1, x)
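
As a minimal sketch (not from the slides), the explicit pattern translates directly into a recursive C function; the name power and its argument order are our own choices for illustration:

int power(int n, int x)
{
    if (n == 0)
        return 1;                    /* base case: pow(0, x) = 1 */
    else
        return x * power(n - 1, x);  /* recursive case: pow(n, x) = x * pow(n-1, x) */
}
/* power(3, 2) returns 8, i.e. 2 raised to the power 3 */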

13 For instance:
2**3 = 2 * 2**2 = 2 * 4 = 8
2**2 = 2 * 2**1 = 2 * 2 = 4
2**1 = 2 * 2**0 = 2 * 1 = 2
2**0 = 1

14 Almost all programming languages allow recursive function calls; that is, they allow a function to call itself. And some languages allow recursive definitions of data structures. This means we can directly implement the recursive definitions and recursive algorithms that we have just been discussing. For example, the definition of the factorial function
factorial(0) = 1
factorial(n) = n * factorial(n-1)   [for n > 0]
can be written in C without the slightest change:
int factorial(int n)
{
    if (n == 0)
        return 1;
    else
        return n * factorial(n-1);
}

15 Basic Recursion What we see is that if we have a base case, and if our recursive calls make progress toward reaching the base case, then eventually we terminate. We thus have our first two fundamental rules of recursion:

16 Basic Recursion
1. Base cases: always have at least one case that can be solved without using recursion.
2. Make progress: any recursive call must make progress toward a base case.

17 Euclid's Algorithm In Book 7 of Euclid's Elements (written about 300 BC), he gives an algorithm to calculate the highest common factor (greatest common divisor) of two numbers x and y, where x < y. This can be stated as:
1. Divide y by x, giving remainder r.
2. Replace y by x, and x by r.
3. Repeat from step 1 until r is zero.
When this algorithm terminates, y is the highest common factor.
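
As a hedged sketch (ours, not from the slides), the same procedure can be written as a recursive C function; the name gcd and the argument order follow the slide's convention that x <= y:

int gcd(int x, int y)
{
    if (x == 0)
        return y;              /* remainder reached zero: y holds the answer */
    return gcd(y % x, x);      /* replace y by x, and x by the remainder */
}
/* gcd(16966, 34017) returns 17, matching the worked example on the next slide */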

18 GCF(34,017, 16,966) Euclid's algorithm works as follows:
34,017 / 16,966 produces a remainder of 85
16,966 / 85 produces a remainder of 51
85 / 51 produces a remainder of 34
51 / 34 produces a remainder of 17
34 / 17 produces a remainder of 0
and the highest common factor of 34,017 and 16,966 is 17.

19 Euclid's algorithm involves several elements: simple arithmetic operations (calculating the remainder after division), comparison of a number against 0 (test), and the ability to repeatedly execute the same set of instructions (loop). Any computer programming language has these basic elements. The design of an algorithm to solve a given problem is the motivation for a lot of research in the field of computer science.

20 Euclid's Algorithm Euclid's Algorithm determines the greatest common divisor of two natural numbers a and b; that is, the largest natural number d such that d | a and d | b.
GCD(33, 21) = 3:
33 = 1*21 + 12
21 = 1*12 + 9
12 = 1*9 + 3
9 = 3*3

21 The main benefits of using recursion as a programming technique are these: recursive functions are almost invariably clearer, simpler, shorter, and easier to understand than their non-recursive counterparts, and the program directly reflects the abstract solution strategy (algorithm). From a practical software engineering point of view these are important benefits, greatly reducing the cost of maintaining the software.

22 Main disadvantage of programming recursively The main disadvantage of programming recursively is that, while it makes it easier to write simple and elegant programs, it also makes it easier to write inefficient ones. When we use recursion to solve problems, we are interested exclusively in correctness, and not at all in efficiency. Consequently, our simple, elegant recursive algorithms may be inherently inefficient.

23 Consider the following program for computing the fibonacci function:
int s1, s2;

int fibonacci(int n)
{
    if (n == 0)
        return 1;
    else if (n == 1)
        return 1;
    else {
        s1 = fibonacci(n-1);
        s2 = fibonacci(n-2);
        return s1 + s2;
    }
}

24 The main thing to note here is that the variables that will hold the intermediate results, s1 and s2, have been declared as global variables. This is a mistake. Although the function looks just fine, its correctness crucially depends on having local variables for storing all the intermediate results. As shown, it will not correctly compute the fibonacci function for n = 4 or larger. However, if we move the declaration of s1 and s2 inside the function, it works perfectly. This sort of bug is very hard to find, and bugs like this are almost certain to arise whenever you use global variables to store intermediate results of a recursive function.
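
As a sketch of the fix the slide describes, moving s1 and s2 into the function body gives each invocation its own copies:

int fibonacci(int n)
{
    if (n == 0 || n == 1)
        return 1;
    else {
        int s1 = fibonacci(n - 1);   /* local: each call keeps its own s1 and s2 */
        int s2 = fibonacci(n - 2);
        return s1 + s2;
    }
}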

25 Recursion is based upon calling the same function over and over, whereas iteration simply 'jumps back' to the beginning of the loop. A function call is often more expensive than a jump.

26 The overheads that may be associated with a function call are: Space: Every invocation of a function call may require space for parameters and local variables, and for an indication of where to return when the function is finished. Typically this space (an activation record) is allocated on the stack and is released automatically when the function returns. Thus, a recursive algorithm may need space proportional to the number of nested calls to the same function.

27 Time: The operations involved in calling a function - allocating, and later releasing, local memory, copying values into the local memory for the parameters, branching to/returning from the function - all contribute to the time overhead.

28 If a function has very large local memory requirements, it would be very costly to program it recursively. But even if there is very little overhead in a single function call, recursive functions often call themselves many, many times, which can magnify a small individual overhead into a very large cumulative overhead.

29
int factorial(int n)
{
    if (n == 0)
        return 1;
    else
        return n * factorial(n-1);
}
There is very little overhead in calling this function, as it has only one word of local memory, for the parameter n. However, when we try to compute factorial(20), there will end up being 21 words of memory allocated - one for each invocation of the function:

30 factorial(20) -- allocate 1 word of memory, call
factorial(19) -- allocate 1 word of memory, call
factorial(18) -- allocate 1 word of memory, call
...
factorial(2) -- allocate 1 word of memory, call
factorial(1) -- allocate 1 word of memory, call
factorial(0) -- allocate 1 word of memory
at this point 21 words of memory

31 and 21 activation records have been allocated.
return 1   -- release 1 word of memory
return 1*1 -- release 1 word of memory
return 2*1 -- release 1 word of memory

32 Iteration as a special case of recursion The first insight is that iteration is a special case of recursion.
void do_loop()
{
    do { ... } while (e);
}
is equivalent to:
void do_loop()
{
    ...;
    if (e) do_loop();
}
A compiler can recognize instances of this form of recursion and turn them into loops or simple jumps. E.g.:
void do_loop()
{
start:
    ...;
    if (e) goto start;
}
Notice that this optimization also removes the space overhead associated with function calls.

33 Most recursive algorithms can be translated, by a fairly mechanical procedure, into iterative algorithms. Sometimes this is very straightforward - for example, most compilers detect a special form of recursion, called tail recursion, and automatically translate it into iteration without your knowing. Sometimes the translation is more involved: for example, it might require introducing an explicit stack with which to 'fake' the effect of recursive calls.
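
As a hedged illustration (ours, not from the slides), here is what tail recursion looks like in C; the accumulator parameter acc and the name factorial_tail are our own additions:

int factorial_tail(int n, int acc)
{
    if (n == 0)
        return acc;                          /* the answer has accumulated in acc */
    return factorial_tail(n - 1, n * acc);   /* tail call: the last thing the function does */
}
/* called as factorial_tail(5, 1), which returns 120; a compiler that detects the
   tail call can compile it as a jump, so no stack of activation records builds up */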

34 {Non-recursive version of Power function}
FUNCTION PowerNR (Base : real; Exponent : integer) : real;
{Preconditions: Exponent >= 0}
{Accepts Base and Exponent values}
{Returns Base to the Exponent power}
VAR
  Count : integer;   {Counts number of times Base is multiplied}
  Product : real;    {Holds the answer as it is being calculated}
BEGIN
  Product := 1;
  FOR Count := 1 TO Exponent DO
    Product := Product * Base;
  PowerNR := Product
END; {PowerNR}

35 We have seen one form of circularity already in our classes, with a for loop:
int x;
for (x = 0; x <= 10; x++) {
    cout << x;
}

36 Problems solved using loops In a for loop, we have a set loop structure which controls the length of the repetition. Many problems solved using loops may also be solved using recursion. In recursion, problems are defined in terms of smaller versions of themselves.

37 Power function There are recursive definitions for many mathematical problems, such as the function Power (used to raise the number y to the xth power). Assume x is a non-negative integer:
y^x = 1, if x is 0; otherwise, y^x = y * y^(x-1)

38 Power Function
2^3 = 2*2^2 = 2 * 4 = 8
2^2 = 2*2^1 = 2 * 2 = 4
2^1 = 2*2^0 = 2 * 1 = 2
2^0 = 1

39 Factorial Function The factorial function has a natural recursive definition:
n! = 1, if n = 0 or if n = 1; otherwise, n! = n * (n - 1)!

40 For example:
5! = 5*4! = 5*24
4! = 4*3! = 4*6
3! = 3*2! = 3*2
2! = 2*1! = 2*1
1! = 1

41 Excessive Recursion Recursion becomes a problem when a program recurses too deep, or when a simple loop runs more efficiently; the Fibonacci sequence is the classic example.
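
As a hedged sketch (ours, not from the slides): the recursive fibonacci shown earlier recomputes the same subproblems exponentially many times, while a simple loop computes each value exactly once. The name fibonacci_iter is our own:

int fibonacci_iter(int n)
{
    int prev = 1, curr = 1;          /* fibonacci(0) and fibonacci(1) */
    for (int i = 2; i <= n; i++) {
        int next = prev + curr;      /* each Fibonacci number computed once */
        prev = curr;
        curr = next;
    }
    return curr;
}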

42 Ackermann’s Function “…one of the fastest growing non-primitive recursive functions. Faster growing than any primitive recursive function.” It grows from 0 to 2^65536 in a few breaths.

43 Basis for Ackermann’s
A(0, n) = n + 1, by definition
A(m, 0) = A(m-1, 1), for m > 0
A(m, n) = A(m-1, A(m, n-1)), otherwise
…the recursion keeps expanding until it reaches the A(0, …) base case of the definition.
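
A direct, hedged transcription of this definition into C (our own sketch; both the value and the recursion depth explode, so only very small m and n are practical):

int ackermann(int m, int n)
{
    if (m == 0)
        return n + 1;                                /* base case */
    else if (n == 0)
        return ackermann(m - 1, 1);                  /* A(m, 0) = A(m-1, 1) */
    else
        return ackermann(m - 1, ackermann(m, n - 1)); /* A(m, n) = A(m-1, A(m, n-1)) */
}
/* ackermann(2, 2) returns 7, matching the example on the next slide */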

44 Example
A(2, 2) = A(1, A(2, 1))
A(2, 1) = A(1, A(2, 0)) = A(1, A(1, 1)) = A(1, 3) = 5
  (since A(1, 1) = A(0, A(1, 0)) = A(0, A(0, 1)) = A(0, 2) = 3)
so A(2, 2) = A(1, 5) = 7!!!

45 Shortcuts:
A(0, n) = n + 1
A(1, n) = n + 2
A(2, n) = 2n + 3
A(3, n) = 2^(n+3) - 3
A(4, n) = a tower of (n+3) twos, minus 3
A(5, n) = TOO MUCH!!!!!

46 A(4, 0) = 13
A(4, 1) = 65533
A(4, 2) = 2^65536 - 3
A(4, 3) = 2^(2^65536) - 3

47 “Ackermann’s function is a form of meta-multiplication.” (Dr. Forb)
a + b = adding the operands
a * b = adding the operand “a” to itself b times
a ^ b = multiplying the operand “a” by itself b times
a @ b = a ^ a ^ … ^ a, b times; a @@ b = a @ a @ … @ a, b times; and so on
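
As a hedged sketch of this ladder of operations (the function name hyper and its argument convention are ours, not from the slide): level 1 is addition, level 2 multiplication, level 3 exponentiation, level 4 the “@” operation, and so on, each level repeatedly applying the level below it.

int hyper(int k, int a, int b)
{
    if (k == 1)
        return a + b;                            /* level 1: addition */
    if (b == 1)
        return a;                                /* one copy of "a" at any higher level */
    return hyper(k - 1, a, hyper(k, a, b - 1));  /* combine "a" with the rest using level k-1 */
}
/* hyper(2, 3, 4) = 12 (3*4), hyper(3, 2, 5) = 32 (2^5), hyper(4, 2, 3) = 16 (2 @ 3) */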

