More on Recursion
© 2006 Pearson Addison-Wesley. All rights reserved.



Backtracking
– A strategy for guessing at a solution and backing up when an impasse is reached
– Recursion and backtracking can be combined to solve problems

The Eight Queens Problem
– Problem: place eight queens on the chessboard so that no queen can attack any other queen
– Strategy: guess at a solution
– There are 4,426,165,368 ways to arrange 8 queens on a chessboard of 64 squares

The Eight Queens Problem
– An observation that eliminates many arrangements from consideration: no queen can reside in a row or a column that contains another queen
– Now only 40,320 arrangements of queens must be checked for attacks along diagonals

The Eight Queens Problem
– Providing organization for the guessing strategy: place queens one column at a time
– If you reach an impasse, backtrack to the previous column

The Eight Queens Problem
Figure 6-1: a) five queens that cannot attack each other, but that can attack all of column 6; b) backtracking to column 5 to try another square for the queen; c) backtracking to column 4 to try another square for the queen and then considering column 5 again

The Eight Queens Problem
A recursive algorithm that places a queen in a column:
– Base case: if there are no more columns to consider, you are finished
– Recursive step:
  – If you successfully place a queen in the current column, consider the next column
  – If you cannot place a queen in the current column, you need to backtrack
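The column-at-a-time strategy can be sketched in Python (solve_queens is an illustrative helper, not the book's code):

```python
def solve_queens(n):
    """Count solutions by placing one queen per column, backtracking on impasse."""
    count = 0
    rows = []  # rows[c] = row of the queen already placed in column c

    def safe(row, col):
        # A queen at (row, col) must not share a row or a diagonal
        # with any queen placed in an earlier column.
        for c, r in enumerate(rows):
            if r == row or abs(r - row) == abs(c - col):
                return False
        return True

    def place(col):
        nonlocal count
        if col == n:                # base case: no more columns to consider
            count += 1
            return
        for row in range(n):
            if safe(row, col):
                rows.append(row)    # place a queen; consider the next column
                place(col + 1)
                rows.pop()          # impasse exhausted: backtrack
    place(0)
    return count
```

The append/pop pair is the backtracking step; solve_queens(8) visits far fewer than the 40,320 row permutations.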

The Eight Queens Problem
Figure 6-2: a solution to the Eight Queens problem

The Relationship Between Recursion and Mathematical Induction
– A strong relationship exists between recursion and mathematical induction
– Induction can be used to prove properties about recursive algorithms and to prove that a recursive algorithm performs a certain amount of work

The Correctness of the Recursive Factorial Method
Pseudocode for a recursive method that computes the factorial of a nonnegative integer n:
  fact(n)
    if (n is 0) {
      return 1
    }
    else {
      return n * fact(n – 1)
    }
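The pseudocode translates directly into Python; a minimal sketch:

```python
def fact(n):
    """Factorial of a nonnegative integer n, mirroring the pseudocode."""
    if n == 0:                   # base case
        return 1
    return n * fact(n - 1)       # recursive step
```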

The Correctness of the Recursive Factorial Method
Induction on n can prove that the method fact returns the values
  fact(0) = 0! = 1
  fact(n) = n! = n * (n – 1) * (n – 2) * … * 1, if n > 0
Proof:
– For n = 0, fact returns 1 = 0!
– Assume fact(k) returns k!
– Then fact(k + 1) returns (k + 1) * fact(k) = (k + 1) * k! = (k + 1)!

The Cost of Towers of Hanoi
Solution to the Towers of Hanoi problem:
  solveTowers(count, source, destination, spare)
    if (count is 1) {
      Move a disk directly from source to destination
    }
    else {
      solveTowers(count – 1, source, spare, destination)
      solveTowers(1, source, destination, spare)
      solveTowers(count – 1, spare, destination, source)
    }
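A direct Python rendering of solveTowers; recording the moves in a list is an addition for illustration:

```python
def solve_towers(count, source, destination, spare, moves=None):
    """Solve Towers of Hanoi for `count` disks, recording each move as a pair."""
    if moves is None:
        moves = []
    if count == 1:
        moves.append((source, destination))   # move one disk directly
    else:
        solve_towers(count - 1, source, spare, destination, moves)
        solve_towers(1, source, destination, spare, moves)
        solve_towers(count - 1, spare, destination, source, moves)
    return moves
```

For example, solve_towers(3, 'A', 'C', 'B') returns a list of 7 moves.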

The Cost of Towers of Hanoi
Question: if you begin with N disks, how many moves does solveTowers make to solve the problem?
– Let moves(N) be the number of moves made starting with N disks
– When N = 1: moves(1) = 1

The Cost of Towers of Hanoi
When N > 1:
  moves(N) = moves(N – 1) + moves(1) + moves(N – 1)
Recurrence relation for the number of moves that solveTowers requires for N disks:
  moves(1) = 1
  moves(N) = 2 * moves(N – 1) + 1, if N > 1

The Cost of Towers of Hanoi
A closed-form formula for the number of moves would be nicer. Induction on N proves that moves(N) = 2^N – 1:
– Base N = 1: moves(1) = 1 = 2^1 – 1
– Assume moves(k) = 2^k – 1; then
    moves(k + 1) = 2 * moves(k) + 1    (recurrence)
                 = 2 * (2^k – 1) + 1   (hypothesis)
                 = 2^(k+1) – 1         (arithmetic)
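The recurrence and the closed form can also be checked against each other numerically, a quick sketch:

```python
def moves(n):
    """Number of moves from the recurrence: moves(1) = 1, moves(N) = 2*moves(N-1) + 1."""
    return 1 if n == 1 else 2 * moves(n - 1) + 1

# The closed form 2**n - 1 agrees with the recurrence for every n tried.
```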

Algorithm Efficiency

Measuring the Efficiency of Algorithms
– Analysis of algorithms provides tools for contrasting the efficiency of different methods of solution
– A comparison of algorithms should focus on significant differences in efficiency and should not consider reductions in computing costs due to clever coding tricks

Measuring the Efficiency of Algorithms
Three difficulties with comparing programs instead of algorithms:
– How are the algorithms coded?
– What computer should you use?
– What data should the programs use?
Algorithm analysis should be independent of specific implementations, computers, and data.

The Execution Time of Algorithms
– Counting an algorithm's operations is a way to assess its efficiency
– An algorithm's execution time is related to the number of operations it requires
– Examples: the Towers of Hanoi; nested loops

Actual Running Time
Consider the following pseudocode algorithm, given input n:
  int[][] n_by_n = new int[n][n];
  for i < n, j < n: set n_by_n[i][j] = random();
  print "The random array is created";

Actual Running Time
How many steps does it take for input n?
– 1 step: declare an n-by-n array
– n^2 steps: put random numbers in the array
– 1 step: print out the final statement
Total steps: n^2 + 2
Note: the extra 2 steps don't carry the same importance as the n^2 steps; as n gets big, they are negligible.
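The count can be confirmed by instrumenting the pseudocode (fill_square is a hypothetical name):

```python
import random

def fill_square(n):
    """Fill an n-by-n table with random numbers, counting the 'steps' as above."""
    steps = 0
    table = [[0] * n for _ in range(n)]    # 1 step: declare the array
    steps += 1
    for i in range(n):
        for j in range(n):
            table[i][j] = random.random()  # n*n steps: fill the array
            steps += 1
    print("The random array is created")   # 1 step: the final print
    steps += 1
    return steps
```

fill_square(n) returns n^2 + 2, and the "+ 2" quickly becomes negligible as n grows.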

Actual Running Time
We also think of constants as negligible:
– we want to say n^2 and c * n^2 have "essentially" the same running time as n increases
– more accurately: the same asymptotic running time
Commercial programmers would argue here: constants can matter in practice.

Actual Running Time
But for large n, the constants don't matter nearly as much. Plug in n = 1000:
– n = 1,000
– 2n^2 = 2,000,000
– n^3 = 1,000,000,000
– 2^n ≈ 10^301

Algorithm Growth Rates
– An algorithm's time requirements can be measured as a function of the problem size
– An algorithm's growth rate enables the comparison of one algorithm with another
– Examples: algorithm A requires time proportional to n^2; algorithm B requires time proportional to n
– Algorithm efficiency is typically a concern for large problems only

Algorithm Growth Rates
Figure 10-1: time requirements as a function of the problem size n

Order-of-Magnitude Analysis and Big O Notation
Definition of the order of an algorithm:
– Algorithm A is order f(n) – denoted O(f(n)) – if constants k and n0 exist such that A requires no more than k * f(n) time units to solve a problem of size n ≥ n0
Growth-rate function:
– a mathematical function used to specify an algorithm's order in terms of the size of the problem

Big-O Notation
– Running time will be measured with Big-O notation
– Big-O is a way to indicate how fast a function grows

Big-O Notation
When we say an algorithm has running time O(n):
– we are saying it runs in the same time as other functions with time O(n)
– we are describing the running time ignoring constants
– we are concerned with large values of n

Big-O Rules
– Ignore constants: O(c * f(n)) = O(f(n))
– Ignore smaller powers: O(n^3 + n) = O(n^3)
– Logarithms cost less than a power:
  – think of log n as equivalent to n^0.000…001
  – O(n^(a+0.1)) > O(n^a log n) > O(n^a)
  – e.g., O(n log n + n) = O(n log n)
  – e.g., O(n log n + n^2) = O(n^2)
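Two of these rules can be observed numerically (the helper names are illustrative):

```python
import math

def low_order_share(n):
    """Share of n**3 + n contributed by the low-order term n."""
    return n / (n ** 3 + n)

def log_beats_power(n):
    """log2 n is eventually smaller than any positive power of n, here n**0.5."""
    return math.log2(n) < n ** 0.5
```

low_order_share shrinks toward 0 as n grows, which is why O(n^3 + n) = O(n^3).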

Order-of-Magnitude Analysis and Big O Notation
Order of growth of some common functions:
  O(1) < O(log2 n) < O(n) < O(n log2 n) < O(n^2) < O(n^3) < O(2^n)
Properties of growth-rate functions:
– You can ignore low-order terms
– You can ignore a multiplicative constant in the high-order term
– O(f(n)) + O(g(n)) = O(f(n) + g(n))

Why Big-O?
Look at what happens for large inputs:
– small problems are easy to do quickly
– big problems are more interesting
– a larger function makes a huge difference for big n
Ignores irrelevant details:
– constants and lower-order terms depend on implementation; Big-O focuses on the algorithm

Order-of-Magnitude Analysis and Big O Notation
Figure 10-3a: a comparison of growth-rate functions in tabular form

Order-of-Magnitude Analysis and Big O Notation
Figure 10-3b: a comparison of growth-rate functions in graphical form

Determining Running Time
Need to count the number of "steps" to complete:
– need to consider the worst case
– for input of size n
– a "step" must take constant (O(1)) time
Often: iterations of the inner loop * work per iteration, or recursive calls * work per call

Why Does log Keep Coming Up?
– By default, we write log n for log2 n
– High school math: log_b c = e means b^e = c
– So log2 n is the inverse of 2^n: log2 n is the power of 2 that gives result n
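That inverse relationship shows up as repeated halving (halvings_to_one is an illustrative helper):

```python
def halvings_to_one(n):
    """Count how many times n can be halved before reaching 1: floor(log2 n)."""
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count
```

For example, halvings_to_one(1024) is 10, because 2^10 = 1024.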

Why Does log Keep Coming Up?
– Exponential algorithm, O(2^n): increasing the input by 1 doubles the running time
– Logarithmic algorithm, O(log n): the inverse of doubling; doubling the input size increases the running time by 1
– Intuition: O(log n) means that every step in the algorithm divides the problem size in half

Example 1: Search
Linear search:
– checks each element in the array
– just a constant number of other steps
– O(n), "order n"
Binary search:
– chops the array in half with each step
– n → n/2 → n/4 → n/8 → … → 2 → 1
– takes log n steps: O(log n), "order log n"
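Both searches as a Python sketch (not the book's code); binary search assumes the list is sorted:

```python
def linear_search(a, target):
    """O(n): examine each element in turn; return its index or -1."""
    for i, x in enumerate(a):
        if x == target:
            return i
    return -1

def binary_search(a, target):
    """O(log n): halve the search range of the sorted list a at each step."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```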

Example 2
Recursive algorithm Power1:
– Power1(x, 1) = x
– Power1(x, y) = x * x^(y–1)
Recursive algorithm Power2:
– Power2(x, 1) = x
– Power2(x, y) = x^(y/2) * x^(y/2)   (for even y; an odd y contributes one extra factor of x)

Example 2
Power1:
– x^y = x * x^(y–1)
– makes y recursive calls: O(y)
Power2:
– x^y = x^(y/2) * x^(y/2)
– makes log y recursive calls: O(log y)
– have to be careful not to compute x^(y/2) twice
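Both algorithms in Python; the odd-exponent branch is an addition the slides gloss over, and storing the half power once is exactly what keeps the call count logarithmic:

```python
def power1(x, y):
    """O(y): one recursive call per unit of the exponent."""
    if y == 1:
        return x
    return x * power1(x, y - 1)

def power2(x, y):
    """O(log y): compute x**(y//2) once, then square it."""
    if y == 1:
        return x
    half = power2(x, y // 2)   # careful: compute the half power only once
    result = half * half
    if y % 2 == 1:             # odd exponent: one extra factor of x
        result *= x
    return result
```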

Order-of-Magnitude Analysis and Big O Notation
Worst-case and average-case analyses:
– An algorithm can require different times to solve different problems of the same size
– Worst-case analysis: a determination of the maximum amount of time that an algorithm requires to solve problems of size n
– Average-case analysis: a determination of the average amount of time that an algorithm requires to solve problems of size n

Keeping Your Perspective
– Throughout the course of an analysis, keep in mind that you are interested only in significant differences in efficiency
– Some seldom-used but critical operations must be efficient

Keeping Your Perspective
– If the problem size is always small, you can probably ignore an algorithm's efficiency
– Weigh the trade-offs between an algorithm's time requirements and its memory requirements
– Compare algorithms for both style and efficiency
– Order-of-magnitude analysis focuses on large problems