Design of Algorithms by Induction - Part 2
Bibliography: [Manber] - Chap. 5


Design of algorithms by induction (recap of Part 1)
Represents a fundamental design principle that underlies techniques such as divide & conquer, dynamic programming and even greedy.
Question: how do we reduce the problem to a smaller problem or a set of smaller problems? (n -> n-1, n/2, n/4, ...?)
Examples:
–The successful party problem [Manber 5.3]
–The celebrity problem [Manber 5.5]
–The skyline problem [Manber 5.6]
–One knapsack problem [Manber 5.10]
–The maximum consecutive subsequence problem [Manber 5.8]

Conclusions (1)
What is design by induction?
An algorithm design method that uses the idea behind induction to solve problems.
–Instead of thinking of our algorithm as a sequence of steps to be executed, think of proving a theorem that the algorithm exists.
–We need to prove that this "theorem" holds for a base case, and that if it holds for n-1 this implies that it holds for n.
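To make the pattern concrete, here is a minimal Python sketch (an illustration of the method, not one of Manber's examples) that finds the maximum of a non-empty array by induction: a base case for a single element, plus an inductive step that extends a solution for the first n-1 elements.

def max_element(a):
    # Base case (n = 1): a single element is its own maximum.
    if len(a) == 1:
        return a[0]
    # Induction hypothesis: we know how to solve the problem
    # for the first n-1 elements.
    m = max_element(a[:-1])
    # Inductive step: extend the solution from n-1 elements to n elements.
    return a[-1] if a[-1] > m else m

print(max_element([3, -1, 7, 2]))   # prints 7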

Conclusions (2)
Why/when should we use design by induction?
–It is a well-defined method for approaching a large variety of problems ("where do I start?"): just take the statement of the problem and consider it a theorem to be proven by induction.
–This method is always safe: designing by induction leads to a correct algorithm, because we design it while proving its correctness.
–It encourages abstract thinking rather than low-level coding, so you can handle the reasoning even for complex algorithms with intricate index manipulations (the slide illustrates this with computations over A[1..n] such as L = (R + 3L + 1)/4 and R = (L + 3R + 1)/4).
–We can also make the resulting algorithms efficient (see next slide).

Conclusions (3)
The inductive step is always based on a reduction from a problem of size n to problems of size < n.
How to make the reduction to smaller problems efficient:
–Sometimes one has to spend some effort to find the right element to remove (see the Celebrity Problem).
–If the amount of work needed to combine the subproblems is non-trivial, reduce by dividing into subproblems of equal size – divide and conquer (see the Skyline Problem).

The Knapsack Problem
This is one of the many variants of the knapsack problem; it applies, for example, to packing goods into the standard containers of a shipping company, containers that have to be filled up exactly.
The problem: Given an integer K and n items of different sizes such that the i-th item has an integer size k_i, find a subset of the items whose sizes sum to exactly K, or determine that no such subset exists.
Notation: P(i, k) denotes the problem restricted to the first i items and a knapsack of size k; the original problem is P(n, K).

Knapsack - Try 1
Induction hypothesis: We know how to solve P(n-1, K).
Base case: n = 1: there is a solution if the single element has size K.
Inductive step:
–Case 1: P(n-1, K) has a solution: we simply do not use the n-th item; we already have the solution.
–Case 2: P(n-1, K) has no solution: this means that we must use the n-th item of size k_n, which implies that the rest of the items must fit into a smaller knapsack of size K - k_n.
We have reduced the problem to two smaller subproblems, P(n-1, K) and P(n-1, K - k_n). In order to solve the second one, we need to strengthen the hypothesis.

Knapsack - Try 2 - Solution 1
Induction hypothesis (strengthened): We know how to solve P(n-1, k) for all 0 <= k <= K.
Base case: n = 1: P(1, k) has a solution if the single element has size k.
Inductive step: P(n, k) reduces to P(n-1, k) and P(n-1, k - k_n). We have reduced a problem of size n to two problems of size n-1:
T(n) = 2·T(n-1) + O(1), so T(n) is O(2^n).
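A direct Python transcription of this inductive definition might look like the sketch below (illustrative code, not taken from the slides; the base case is stated for an empty prefix of items, which subsumes the slide's n = 1 case). Each call generates two subproblems on n-1 items, which is exactly the T(n) = 2T(n-1) + O(1) recurrence.

def knapsack_exists(sizes, K):
    # P(i, k): is there a subset of the first i items whose sizes sum to k?
    def P(i, k):
        if i == 0:
            # Base case: with no items left, only k = 0 can be reached.
            return k == 0
        # Case 1: do not use the i-th item.
        if P(i - 1, k):
            return True
        # Case 2: use the i-th item if it fits, leaving capacity k - k_i.
        return sizes[i - 1] <= k and P(i - 1, k - sizes[i - 1])
    return P(len(sizes), K)

print(knapsack_exists([2, 3, 5, 7], 12))   # True  (5 + 7 = 12)
print(knapsack_exists([2, 3, 5, 7], 18))   # False (the total is only 17)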

Knapsack - Solution 2
Solution 1 solves up to 2^n subproblems P(i, k).
However, the total number of distinct subproblems P(i, k) is only n·K, so the remaining 2^n - n·K computations are redundant work!
Solution: store the results of all subproblems P(i, k) in an n × K matrix, and reuse a stored result whenever the same subproblem appears again.
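One possible realisation of this idea in Python (again a sketch under the same assumptions as above, with the table indexed from 0 for convenience): every P(i, k) is computed exactly once, so the running time drops from O(2^n) to O(n·K).

def knapsack_dp(sizes, K):
    # table[i][k] is True iff some subset of the first i items sums to k;
    # this is the (n+1) x (K+1) matrix of all distinct subproblems P(i, k).
    n = len(sizes)
    table = [[False] * (K + 1) for _ in range(n + 1)]
    table[0][0] = True                      # the empty subset sums to 0
    for i in range(1, n + 1):
        for k in range(K + 1):
            # Case 1: skip the i-th item.
            table[i][k] = table[i - 1][k]
            # Case 2: use the i-th item if it fits.
            if not table[i][k] and sizes[i - 1] <= k:
                table[i][k] = table[i - 1][k - sizes[i - 1]]
    return table[n][K]

print(knapsack_dp([2, 3, 5, 7], 12))   # True, computed in O(n*K) time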

Knapsack - Conclusions
Divide-and-conquer algorithms partition the problem into disjoint subproblems, solve the subproblems, and then combine their solutions to solve the original problem.
When the subproblems overlap, a divide-and-conquer algorithm does more work than necessary, repeatedly solving the common sub-subproblems.
To avoid this repeated work, the solutions of subproblems are saved, and when the same subproblem appears again the saved solution is reused.
This technique is dynamic programming.

Finding the Maximum Consecutive Subsequence
Problem: Given a sequence X = (x_1, x_2, ..., x_n) of (not necessarily positive) real numbers, find a subsequence x_i, x_{i+1}, ..., x_j of consecutive elements such that the sum of its elements is maximum over all subsequences of consecutive elements.
Example: The profit history (in billion $) of the company ProdIncCorp for the last 10 years is given below. Find the maximum amount that ProdIncCorp earned in any contiguous span of years.
[Table on the slide: profits for years y1 ... y10; the values are not recoverable from the transcript.]

Max consecutive subsequence - Try 1
Base case: n = 1: we have a single element and it is the maximum subsequence.
Induction hypothesis: We know how to find the maximum consecutive subsequence in a sequence of length < n. In particular, we know (we assume) that the maximum consecutive subsequence of x_1, ..., x_{n-1} is x_i, ..., x_j with j <= n-1.
We have to find the maximum consecutive subsequence of x_1, ..., x_{n-1}, x_n. We distinguish 2 cases:
–Case 1: j = n-1 (the maximum subsequence is a suffix of x_1, ..., x_{n-1})
–Case 2: j < n-1 (the maximum subsequence is not a suffix)

Max consecutive subsequence - Try 1
Case 1: j = n-1 (the maximum subsequence is a suffix of x_1, ..., x_{n-1})
–If x[n] > 0, then the new maximum subsequence is x_i, ..., x_j, x_n.
–If x[n] <= 0, then the maximum subsequence remains x_i, ..., x_j.

Max consecutive subsequence - Try 1
Case 2: j < n-1 (the maximum subsequence is not a suffix)
In this case, appending x_n may turn a subsequence ending at position n-1 into the new maximum, even though the old maximum does not end there.
We therefore also need to know the maximum subsequence that is a suffix!

Max consecutive subsequence - Try 2
Stronger induction hypothesis: We know how to find, in a sequence of length < n, both the maximum consecutive subsequence and the maximum subsequence that is a suffix.
Base case: when n = 0, both subsequences are empty (their sums are 0).
Inductive step: We add x_n to the maximum suffix. If this sum is larger than the global maximum subsequence, we update the global maximum (as well as the suffix). Otherwise we retain the previous maximum subsequence. In either case we also need the new maximum suffix: add x_n to its old value, and if the result is negative, take the empty set (sum 0) as the maximum suffix.

Max consecutive subsequence - Solution

Algorithm Max_Subsequence(IN: X[1..n], OUT: GlobalMax, SuffixMax)
begin
  if n = 0 then
    GlobalMax := 0; SuffixMax := 0;
  else
    Max_Subsequence(X[1..n-1], GlobalMax1, SuffixMax1);
    GlobalMax := GlobalMax1;
    if X[n] + SuffixMax1 > GlobalMax1 then
      SuffixMax := SuffixMax1 + X[n];
      GlobalMax := SuffixMax;
    else if X[n] + SuffixMax1 > 0 then
      SuffixMax := SuffixMax1 + X[n];
    else
      SuffixMax := 0;
end

T(n) = T(n-1) + c, so the algorithm is O(n).
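As a cross-check, the recursive pseudocode above might be rendered in Python roughly as follows (a sketch, not code from the slides); note that Python's recursion depth limits this version to fairly short inputs, which is one more reason to prefer the iterative version on the next slide.

def max_subsequence(x):
    # Returns the pair (GlobalMax, SuffixMax) for the list x, following
    # the strengthened induction hypothesis of the recursive algorithm.
    if not x:                      # base case n = 0: both sums are 0
        return 0, 0
    global_max, suffix_max = max_subsequence(x[:-1])
    if x[-1] + suffix_max > global_max:
        suffix_max += x[-1]        # the new suffix beats the old global maximum
        global_max = suffix_max
    elif x[-1] + suffix_max > 0:
        suffix_max += x[-1]        # the suffix is still worth keeping
    else:
        suffix_max = 0             # a negative suffix is replaced by the empty one
    return global_max, suffix_max

print(max_subsequence([-2, 5, -1, 4, -3]))   # (8, 5): best run is 5, -1, 4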

Max consecutive subsequence - Solution

Algorithm Max_Subsequence(X, n)
Input: X (array of length n)
Output: Global_Max (the sum of the maximum subsequence)
begin
  Global_Max := 0; Suffix_Max := 0;
  for i = 1 to n do
    if X[i] + Suffix_Max > Global_Max then
      Suffix_Max := Suffix_Max + X[i];
      Global_Max := Suffix_Max;
    else if X[i] + Suffix_Max > 0 then
      Suffix_Max := Suffix_Max + X[i];
    else
      Suffix_Max := 0;
end

It was straightforward to rewrite the solution in a non-recursive way!
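The same rewriting in Python might look like this (illustrative; by the slides' convention it returns 0 for a sequence whose elements are all negative, i.e. the empty subsequence).

def max_subsequence_iter(x):
    # Bottom-up version of the same algorithm: a single left-to-right pass
    # maintaining the best sum seen so far and the best suffix sum.
    global_max = 0
    suffix_max = 0
    for v in x:
        if v + suffix_max > global_max:
            suffix_max += v
            global_max = suffix_max
        elif v + suffix_max > 0:
            suffix_max += v
        else:
            suffix_max = 0
    return global_max

print(max_subsequence_iter([-2, 5, -1, 4, -3]))   # prints 8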

Max consecutive subsequence - Conclusion
Sometimes we cannot prove that P(n-1) implies P(n).
We can add an additional statement Q and prove more easily that (P(n-1) and Q(n-1)) implies P(n).
Q is a property of the solution, a property that we must discover.
Attention: Q(n) must become part of the induction hypothesis; we must in fact prove that (P(n-1) and Q(n-1)) implies (P(n) and Q(n)).
This is strengthening the induction hypothesis.

Conclusions (4)
The inductive step is always based on a reduction from a problem of size n to problems of size < n.
How to make the reduction to smaller problems efficient:
–A reduction to several subproblems may end up computing the same subproblems again and again. In this case, when the subproblems are not disjoint, store the intermediate results and reuse them – dynamic programming (see the Knapsack Problem).

Conclusions (5)
What if induction does not succeed for the given problem statement?
–A straightforward reduction might not give sufficient information to extend the result; in this case one can try to strengthen the induction hypothesis (see the Maximum Subsequence Problem): find relevant properties of the solution and add them to the induction hypothesis.