Algorithms and Data Structures Lecture X


Algorithms and Data Structures, Lecture X
Simonas Šaltenis
Aalborg University
simas@cs.auc.dk
October 23, 2003

This Lecture
- Dynamic programming
- Fibonacci numbers example
- Optimization problems
- Matrix multiplication optimization
- Principles of dynamic programming
- Longest Common Subsequence

Algorithm Design Techniques
Algorithm design techniques so far:
- Iterative (brute-force) algorithms, for example insertion sort
- Algorithms that use other ADTs (implemented using efficient data structures), for example heap sort
- Divide-and-conquer algorithms: binary search, merge sort, quick sort

Divide and Conquer
The divide-and-conquer method of algorithm design:
- Divide: if the input size is too large to deal with in a straightforward manner, divide the problem into two or more disjoint subproblems
- Conquer: use divide and conquer recursively to solve the subproblems
- Combine: take the solutions to the subproblems and "merge" these solutions into a solution for the original problem

Divide and Conquer (2)
For example, MergeSort:

    Merge-Sort(A, p, r)
        if p < r then
            q ← ⌊(p+r)/2⌋
            Merge-Sort(A, p, q)
            Merge-Sort(A, q+1, r)
            Merge(A, p, q, r)

The subproblems are independent and non-overlapping.
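
As a concrete illustration, here is a minimal Python transcription of the pseudocode (a sketch; the names merge_sort and merge are chosen for this example, and indices are inclusive, as on the slide):

    def merge_sort(A, p, r):
        if p < r:
            q = (p + r) // 2          # Divide: split at the midpoint
            merge_sort(A, p, q)       # Conquer: sort the left half
            merge_sort(A, q + 1, r)   # Conquer: sort the right half
            merge(A, p, q, r)         # Combine: merge the two sorted halves

    def merge(A, p, q, r):
        left, right = A[p:q + 1], A[q + 1:r + 1]
        i = j = 0
        for k in range(p, r + 1):
            if j >= len(right) or (i < len(left) and left[i] <= right[j]):
                A[k] = left[i]; i += 1
            else:
                A[k] = right[j]; j += 1

    data = [5, 2, 4, 7, 1, 3]
    merge_sort(data, 0, len(data) - 1)   # data is now [1, 2, 3, 4, 5, 7]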

Fibonacci Numbers
Leonardo Fibonacci (1202): a rabbit starts producing offspring in the second generation after its birth, and produces one child each generation. How many rabbits will there be after n generations?
F(1)=1, F(2)=1, F(3)=2, F(4)=3, F(5)=5, F(6)=8

Fibonacci Numbers (2)
F(n) = F(n-1) + F(n-2), F(0) = 0, F(1) = 1
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
The straightforward recursive procedure is slow! Why? How slow? Let's draw the recursion tree.

    FibonacciR(n)
        if n ≤ 1 then return n
        else return FibonacciR(n-1) + FibonacciR(n-2)
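
A direct Python transcription, for experimenting (the name fib_r is ours):

    # Naive recursive Fibonacci: exponential running time.
    # fib_r(35) already takes seconds in CPython.
    def fib_r(n):
        if n <= 1:
            return n
        return fib_r(n - 1) + fib_r(n - 2)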

Fibonacci Numbers (3)
Recursion tree for F(6) = 8, level by level:

    F(6)
    F(5)  F(4)
    F(4) F(3)  F(3) F(2)
    F(3) F(2) F(2) F(1)  F(2) F(1) F(1) F(0)
    F(2) F(1) F(1) F(0)  F(1) F(0)  F(1) F(0)
    F(1) F(0)

We keep calculating the same value over and over! Subproblems are overlapping: they share sub-subproblems.

Fibonacci Numbers (4)
How many summations S(n) are there?
S(n) = S(n-1) + S(n-2) + 1, hence S(n) ≥ 2S(n-2) + 1, with S(1) = S(0) = 0.
Solving the recurrence we get S(n) ≥ 2^(n/2) - 1 ≈ 1.4^n.
The running time is exponential!

Fibonacci Numbers (5)
We can calculate F(n) in linear time by remembering solutions to already-solved subproblems: dynamic programming.
Compute the solution in a bottom-up fashion; trade space for time!

    Fibonacci(n)
        F[0] ← 0
        F[1] ← 1
        for i ← 2 to n do
            F[i] ← F[i-1] + F[i-2]
        return F[n]

Fibonacci Numbers (6)
In fact, only two values need to be remembered at any time!

    FibonacciImproved(n)
        if n ≤ 1 then return n
        Fim2 ← 0
        Fim1 ← 1
        for i ← 2 to n do
            Fi ← Fim1 + Fim2
            Fim2 ← Fim1
            Fim1 ← Fi
        return Fi
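
Both iterative versions in Python, as a minimal sketch (fib_table mirrors Fibonacci, fib_iter mirrors FibonacciImproved; the names are ours):

    # Bottom-up with a table: linear time, linear space.
    def fib_table(n):
        F = [0] * (n + 1)
        if n >= 1:
            F[1] = 1
        for i in range(2, n + 1):
            F[i] = F[i - 1] + F[i - 2]
        return F[n]

    # Bottom-up with two variables: linear time, constant space.
    def fib_iter(n):
        if n <= 1:
            return n
        fim2, fim1 = 0, 1
        for _ in range(2, n + 1):
            fim2, fim1 = fim1, fim2 + fim1
        return fim1

    assert fib_table(10) == fib_iter(10) == 55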

History
Dynamic programming was invented in the 1950s by Richard Bellman as a general method for optimizing multistage decision processes. "Programming" stands for "planning" (not computer programming).

Optimization Problems
- We have to choose one solution out of many: the one with the optimal (minimum or maximum) value
- A solution exhibits a structure: it consists of a string of choices that were made. What choices have to be made to arrive at an optimal solution?
- An algorithm should compute the optimal value plus, if needed, an optimal solution

Multiplying Matrices
Two matrices, an n×m matrix A and an m×k matrix B, can be multiplied to get C with dimensions n×k, using nmk scalar multiplications.
Problem: compute a product of many matrices efficiently.
Matrix multiplication is associative: (AB)C = A(BC).

Multiplying Matrices (2)
The parenthesization matters!
Consider A×B×C×D, where A is 30×1, B is 1×40, C is 40×10, D is 10×25.
Costs:
- ((AB)C)D: 1200 + 12000 + 7500 = 20700
- (AB)(CD): 1200 + 10000 + 30000 = 41200
- A((BC)D): 400 + 250 + 750 = 1400
We need to parenthesize optimally.
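
These costs are easy to verify; a small Python sanity check (the cost helper exists only for this example):

    # Multiplying an a×b matrix by a b×c matrix costs a*b*c scalar multiplications.
    def cost(a, b, c):
        return a * b * c

    # ((AB)C)D: AB is 30×40, (AB)C is 30×10
    print(cost(30, 1, 40) + cost(30, 40, 10) + cost(30, 10, 25))  # 20700
    # (AB)(CD): AB is 30×40, CD is 40×25
    print(cost(30, 1, 40) + cost(40, 10, 25) + cost(30, 40, 25))  # 41200
    # A((BC)D): BC is 1×10, (BC)D is 1×25
    print(cost(1, 40, 10) + cost(1, 10, 25) + cost(30, 1, 25))    # 1400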

Multiplying Matrices (3)
Let M(i,j) be the minimum number of multiplications necessary to compute Ai × … × Aj.
Key observations:
- The outermost parentheses partition the chain of matrices (i,j) at some k (i ≤ k < j): (Ai … Ak)(Ak+1 … Aj)
- The optimal parenthesization of matrices (i,j) has optimal parenthesizations on either side of k, i.e., for matrices (i,k) and (k+1,j)

Multiplying Matrices (4)
We try out all possible k. Recurrence (where matrix Ai has dimensions di-1 × di):

    M(i,i) = 0
    M(i,j) = min over i ≤ k < j of ( M(i,k) + M(k+1,j) + di-1·dk·dj ),  for i < j

A direct recursive implementation is exponential: there is a lot of duplicated work (why?).
But there are only n(n+1)/2 = Θ(n²) different subproblems (i,j), where 1 ≤ i ≤ j ≤ n.

Multiplying Matrices (5)
Thus, it requires only Θ(n²) space to store the optimal cost M(i,j) for each of the subproblems: half of a 2D array M[1..n,1..n].
Trivially, M(i,i) = 0 for 1 ≤ i ≤ n.
To compute M(i,j), where j - i + 1 = L, we need only the values of M for subproblems of length < L. Thus we solve subproblems in order of increasing length: first subproblems of length 2, then of length 3, and so on.
To reconstruct an optimal parenthesization, for each pair (i,j) we record in c[i,j] = k the optimal split into the two subproblems (i,k) and (k+1,j).

Multiplying Matrices (6)

    Matrix-Chain-Order(d0…dn)
        for i ← 1 to n do
            M[i,i] ← 0
        for L ← 2 to n do
            for i ← 1 to n-L+1 do
                j ← i+L-1
                M[i,j] ← ∞
                for k ← i to j-1 do
                    q ← M[i,k] + M[k+1,j] + di-1·dk·dj
                    if q < M[i,j] then
                        M[i,j] ← q
                        c[i,j] ← k
        return M, c
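
A runnable Python sketch of the same bottom-up algorithm (the names matrix_chain_order and parenthesize are ours; dims holds d0…dn, and nested lists stand in for the 2D tables M and c):

    def matrix_chain_order(dims):
        # Matrix A_i has dimensions dims[i-1] × dims[i], for i = 1..n.
        n = len(dims) - 1
        M = [[0] * (n + 1) for _ in range(n + 1)]  # M[i][j]: min cost for A_i..A_j
        c = [[0] * (n + 1) for _ in range(n + 1)]  # c[i][j]: optimal split point k
        for L in range(2, n + 1):                  # L = length of the subchain
            for i in range(1, n - L + 2):
                j = i + L - 1
                M[i][j] = float("inf")
                for k in range(i, j):
                    q = M[i][k] + M[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                    if q < M[i][j]:
                        M[i][j] = q
                        c[i][j] = k
        return M, c

    def parenthesize(c, i, j):
        # Rebuild an optimal parenthesization from the split table c.
        if i == j:
            return "A%d" % i
        k = c[i][j]
        return "(" + parenthesize(c, i, k) + parenthesize(c, k + 1, j) + ")"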

Multiplying Matrices (7)
After the execution, M[1,n] contains the value of an optimal solution, and c contains the optimal subdivisions (choices of k) of any subproblem into two sub-subproblems.
Let us run the algorithm on four matrices: A1 is a 2×10 matrix, A2 is a 10×3 matrix, A3 is a 3×5 matrix, and A4 is a 5×8 matrix.
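
With the sketch from the previous slide, d = (2, 10, 3, 5, 8); it should find a minimum of 170 scalar multiplications, achieved by ((A1A2)A3)A4:

    M, c = matrix_chain_order([2, 10, 3, 5, 8])
    print(M[1][4])                # 170
    print(parenthesize(c, 1, 4))  # (((A1A2)A3)A4)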

Multiplying Matrices (8)
Running time:
- It is easy to see that it is O(n³)
- It turns out it is also Ω(n³)
- From exponential time to polynomial!

Memoization
If we still like recursion very much, we can structure our algorithm as a recursive algorithm: initialize all elements of M to ∞ and call Lookup-Chain(d, 1, n).

    Lookup-Chain(d, i, j)
        if M[i,j] < ∞ then
            return M[i,j]
        if i = j then
            M[i,j] ← 0
        else for k ← i to j-1 do
            q ← Lookup-Chain(d,i,k) + Lookup-Chain(d,k+1,j) + di-1·dk·dj
            if q < M[i,j] then
                M[i,j] ← q
        return M[i,j]
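
The same top-down idea in Python, as a sketch; instead of pre-initializing a table to ∞, a dictionary doubles as the "not yet computed" test:

    def lookup_chain(dims, i, j, memo=None):
        # Memoized cost of multiplying A_i..A_j (dims as in matrix_chain_order).
        if memo is None:
            memo = {}
        if (i, j) in memo:
            return memo[(i, j)]   # subproblem already solved: reuse it
        if i == j:
            best = 0
        else:
            best = min(
                lookup_chain(dims, i, k, memo)
                + lookup_chain(dims, k + 1, j, memo)
                + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
        memo[(i, j)] = best
        return best

    print(lookup_chain([2, 10, 3, 5, 8], 1, 4))  # 170, same as bottom-up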

Memoization (2)
Memoization: solve the problem in a top-down fashion, but record the solutions to subproblems in a table.
Pros and cons:
- Con: recursion is usually slower than loops and uses stack space
- Pro: easier to understand
- Pro: if not all subproblems need to be solved, you are sure that only the necessary ones are solved

Dynamic Programming
In general, to apply dynamic programming, we have to address a number of issues:
1. Show optimal substructure: an optimal solution to the problem contains within it optimal solutions to sub-problems.
- A solution to a problem consists of making a choice out of a number of possibilities (look at what possible choices there can be) and solving one or more sub-problems that result from that choice (characterize the space of sub-problems)
- Show that solutions to sub-problems must themselves be optimal for the whole solution to be optimal (use a "cut-and-paste" argument)

Dynamic Programming (2)
2. Write a recurrence for the value of an optimal solution:

    M_opt = min over all choices k of ( (combination (e.g., sum) of M_opt of all sub-problems resulting from choice k) + (the cost associated with making choice k) )

Show that the number of different instances of sub-problems is bounded by a polynomial.

Dynamic Programming (3)
3. Compute the value of an optimal solution in a bottom-up fashion, so that you always have the necessary sub-results pre-computed (or use memoization).
- See if it is possible to reduce the space requirements by "forgetting" solutions to sub-problems that will not be used any more
4. Construct an optimal solution from the computed information (which records a sequence of choices made that lead to an optimal solution).

Longest Common Subsequence
Two text strings are given: X and Y. There is a need to quantify how similar they are:
- Comparing DNA sequences in studies of the evolution of different species
- Spell checkers
One measure of similarity is the length of a Longest Common Subsequence (LCS).

LCS: Definition
Z is a subsequence of X if it is possible to generate Z by skipping some (possibly none) characters from X.
For example: X = "ACGGTTA", Y = "CGTAT", LCS(X,Y) = "CGTA" or "CGTT".
To solve the LCS problem we have to find the "skips" that generate LCS(X,Y) from X, and the "skips" that generate LCS(X,Y) from Y.
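
To make the definition concrete, a tiny Python checker (is_subsequence is a name chosen for this example):

    def is_subsequence(z, x):
        # Greedily match z left to right, skipping characters of x as needed.
        it = iter(x)
        return all(ch in it for ch in z)

    print(is_subsequence("CGTA", "ACGGTTA"))  # True
    print(is_subsequence("CGTA", "CGTAT"))    # True
    print(is_subsequence("GTC", "CGTAT"))     # False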

LCS: Optimal Substructure
Start with Z empty and proceed from the ends of Xm = x1 x2 … xm and Yn = y1 y2 … yn:
- If xm = yn, append this symbol to the beginning of Z, and optimally find LCS(Xm-1, Yn-1)
- If xm ≠ yn, skip either a letter from X or a letter from Y; decide which by comparing LCS(Xm, Yn-1) and LCS(Xm-1, Yn)
- Optimality follows by a "cut-and-paste" argument

LCS: Recurrence
Let c[i,j] be the length of LCS(Xi, Yj). Then:

    c[i,j] = 0                          if i = 0 or j = 0
    c[i,j] = c[i-1,j-1] + 1             if i,j > 0 and xi = yj
    c[i,j] = max(c[i-1,j], c[i,j-1])    if i,j > 0 and xi ≠ yj

Observe: the conditions in the problem restrict the sub-problems (what is the total number of sub-problems?).
The algorithm could easily be extended by allowing more "editing" operations in addition to copying and skipping (e.g., changing a letter).

LCS: Compute the Optimum

    LCS-Length(X, Y, m, n)
        for i ← 1 to m do
            c[i,0] ← 0
        for j ← 0 to n do
            c[0,j] ← 0
        for i ← 1 to m do
            for j ← 1 to n do
                if xi = yj then
                    c[i,j] ← c[i-1,j-1] + 1
                    b[i,j] ← "copy"
                else if c[i-1,j] ≥ c[i,j-1] then
                    c[i,j] ← c[i-1,j]
                    b[i,j] ← "skipX"
                else
                    c[i,j] ← c[i,j-1]
                    b[i,j] ← "skipY"
        return c, b
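
A Python sketch of the same table filling; this variant skips the separate b table and instead reconstructs one LCS by walking back through c, which is equivalent:

    def lcs(X, Y):
        m, n = len(X), len(Y)
        # c[i][j] = length of an LCS of X[:i] and Y[:j]; row/column 0 stay 0.
        c = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if X[i - 1] == Y[j - 1]:
                    c[i][j] = c[i - 1][j - 1] + 1            # "copy"
                else:
                    c[i][j] = max(c[i - 1][j], c[i][j - 1])  # "skipX"/"skipY"
        # Walk back through the table to recover one LCS.
        out, i, j = [], m, n
        while i > 0 and j > 0:
            if X[i - 1] == Y[j - 1]:
                out.append(X[i - 1]); i -= 1; j -= 1
            elif c[i - 1][j] >= c[i][j - 1]:
                i -= 1
            else:
                j -= 1
        return c[m][n], "".join(reversed(out))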

LCS: Example
Let's run: X = "GGTTCAT", Y = "GTATCT".
What are the running time and space requirements of the algorithm? How much can we reduce our space requirements if we do not need to reconstruct an LCS?
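
Running the sketch above (and, for reference, answers to the slide's questions: the algorithm takes Θ(mn) time and Θ(mn) space, and if only the length is needed, two rows of the table suffice, i.e., O(min(m,n)) space):

    print(lcs("GGTTCAT", "GTATCT"))  # (5, 'GTTCT')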

Next Lecture
Graphs:
- Representation in memory
- Breadth-first search
- Depth-first search
- Topological sort