Algorithmics - Lecture 11

LECTURE 11: Dynamic programming - I -

Outline
- What is dynamic programming?
- Basic steps in applying dynamic programming
- Bottom-up vs. top-down approaches in developing recurrence relations
- Applications of dynamic programming

What is dynamic programming?
It is a technique for solving problems that can be decomposed into overlapping subproblems; it can be applied to problems having the optimal substructure property. Its particularity is that it solves each subproblem only once and records the result in a table, from which a solution to the initial problem can be constructed.
Remarks:
- Dynamic programming was developed by Richard Bellman in the 1950s as a general method for optimizing multistage decision processes.
- In dynamic programming, the word "programming" stands for planning, not for computer programming.
- The word "dynamic" refers to the manner in which the tables used in obtaining the solution are constructed.

What is dynamic programming?
The dynamic programming strategy is related to the divide and conquer strategy: both are based on dividing a problem into subproblems. However, there are differences between the two approaches:
- divide and conquer: the subproblems are usually independent, so the solution of one subproblem cannot be reused for another subproblem
- dynamic programming: the subproblems are dependent (overlapping), so the result obtained for a subproblem is needed several times (therefore it is important to store it)
Dynamic programming is also related to the greedy strategy, since both can be applied to optimization problems having the optimal substructure property.


Basic steps in applying dynamic programming
1. Analyze the structure of a solution: establish how the solution of a problem depends on the solutions of its subproblems. Usually this step amounts to verifying the optimal substructure property.
2. Find a recurrence relation which relates a value corresponding to the problem's solution (e.g. the optimization criterion) to the values corresponding to the subproblems' solutions.
3. Develop the recurrence relation (in dynamic programming it is common to use a bottom-up approach) and construct a table containing information useful for constructing the solution.
4. Construct the solution based on the information collected in the previous step.


Developing recurrence relations
There are two approaches to developing a recurrence relation:
- Bottom-up: start from the base case and generate new values from the already computed ones (the values needed to compute further values are at least temporarily stored).
- Top-down: construct the desired element by expressing it through previous elements (which are not yet known, so they also have to be computed). This approach is usually implemented recursively; its main disadvantage is that it may recompute the same value several times.

Developing recurrence relations
Example 1. Computing the m-th element of the Fibonacci sequence:
f(1) = f(2) = 1;  f(n) = f(n-1) + f(n-2) for n > 2

Top-down approach:
fib(m)
IF (m=1) OR (m=2) THEN RETURN 1
ELSE RETURN fib(m-1)+fib(m-2)
ENDIF

Efficiency (dominant operation: addition):
T(m) = 0, if m <= 2
T(m) = T(m-1) + T(m-2) + 1, if m > 2
T:         0 0 1 2 4 7 12 20 33 54 …
Fibonacci: 1 1 2 3 5 8 13 21 34 55 …
Thus T(m) = f(m) - 1, and f(m) belongs to Θ(phi^m) with phi = (1+sqrt(5))/2.
Exponential complexity!
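As a concrete illustration (a minimal Python sketch, not part of the original slides; the name fib is only illustrative), the top-down pseudocode translates directly:

    def fib(m):
        # Naive top-down computation: follows the recurrence literally,
        # so the same subproblems are recomputed many times
        # (exponential running time).
        if m <= 2:
            return 1
        return fib(m - 1) + fib(m - 2)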

Developing recurrence relations
Example 1 (continued). Bottom-up approach:
fib(m)
f[1] ← 1; f[2] ← 1
FOR i ← 3,m DO
  f[i] ← f[i-1]+f[i-2]
ENDFOR
RETURN f[m]

Efficiency: T(m) = m-2, i.e. linear complexity.
Remark: this is a space-for-time tradeoff situation. The use of extra space can be avoided (limited to only two additional variables):
fib(m)
f1 ← 1; f2 ← 1
FOR i ← 3,m DO
  f2 ← f1+f2
  f1 ← f2-f1
ENDFOR
RETURN f2
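A Python sketch of the two-variable bottom-up version (again an illustration, not from the slides):

    def fib(m):
        # Bottom-up computation: generate f(3), f(4), ..., f(m) in order,
        # keeping only the last two values, so the extra space is constant.
        f1, f2 = 1, 1                # invariant: f1 = f(i-2), f2 = f(i-1)
        for _ in range(3, m + 1):
            f1, f2 = f2, f1 + f2
        return f2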

Developing recurrence relations
Example 2. Computing a binomial coefficient C(n,k):
C(n,k) = 0, if n < k
C(n,k) = 1, if k = 0 or n = k
C(n,k) = C(n-1,k) + C(n-1,k-1), otherwise

Top-down approach:
comb(n,k)
IF n<k THEN RETURN 0
ELSE IF (k=0) OR (n=k) THEN RETURN 1
ELSE RETURN comb(n-1,k)+comb(n-1,k-1)
ENDIF

Efficiency (problem size: (n,k); dominant operation: addition):
T(n,k) >= 2^min{k,n-k}, so T(n,k) belongs to Ω(2^min{k,n-k})
Remark: min{k,n-k} is the length of the shortest branch in the tree of recursive calls.
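The top-down pseudocode again translates directly to Python (an illustrative sketch):

    def comb(n, k):
        # Naive top-down evaluation of the recurrence; because the
        # subproblems overlap, the running time is exponential
        # in min(k, n - k).
        if n < k:
            return 0
        if k == 0 or n == k:
            return 1
        return comb(n - 1, k) + comb(n - 1, k - 1)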

Developing recurrence relations
Example 2 (continued). Bottom-up approach: construct Pascal's triangle (rows 0..n, columns 0..k):

         0   1   2   ...  k-1          k
  0      1
  1      1   1
  2      1   2   1
  ...
  k      1   ...                       1
  ...
  n-1    1   ...      C(n-1,k-1)   C(n-1,k)
  n      1   ...                   C(n,k)

Each entry C(i,j) with 0 < j < i is the sum of the entry above it, C(i-1,j), and the entry above-left of it, C(i-1,j-1).

Developing recurrence relations
Algorithm (Pascal's triangle):
Comb(n,k)
C[0..n,0..k] ← 0
FOR i ← 0,n DO
  FOR j ← 0,min{i,k} DO
    IF (j=0) OR (j=i) THEN C[i,j] ← 1
    ELSE C[i,j] ← C[i-1,j]+C[i-1,j-1]
    ENDIF
  ENDFOR
ENDFOR
RETURN C[0..n,0..k]

Efficiency (problem size: (n,k); dominant operation: addition):
T(n,k) = (1+2+…+(k-1)) + (k+…+k) = k(k-1)/2 + k(n-k+1)
T(n,k) belongs to Θ(nk)
Remark: if only C(n,k) has to be computed, it suffices to use a single row of k+1 elements as extra space.
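A possible Python rendering of the single-row optimization mentioned in the remark (a sketch under the assumption that only C(n,k) is needed; the right-to-left update order is my own detail, not spelled out on the slide):

    def comb(n, k):
        # Bottom-up with O(k) extra space: keep only the current row of
        # Pascal's triangle.  Updating right-to-left ensures that
        # row[j-1] still holds the previous row's value when row[j]
        # is updated.
        if n < k:
            return 0
        row = [0] * (k + 1)
        row[0] = 1
        for i in range(1, n + 1):
            for j in range(min(i, k), 0, -1):
                row[j] += row[j - 1]
        return row[k]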


Applications of dynamic programming
Application 1: Longest strictly increasing subsequence
Let a1, a2, …, an be a sequence. Find the longest subsequence satisfying:
- a_j1 < a_j2 < … < a_jk (a strictly increasing subsequence, with 1 <= j1 < j2 < … < jk <= n)
- k is maximal (longest such subsequence)
Remark: the elements of the subsequence do not have to be consecutive elements of the initial sequence.
Example: a = (2,5,1,3,6,8,2,10,4)
Strictly increasing subsequences of maximal length 5: (2,5,6,8,10), (2,3,6,8,10), (1,3,6,8,10)

Longest strictly increasing subsequence
1. Analysis of the solution structure.
Let s = (a_j1, a_j2, …, a_j(k-1), a_jk) be an optimal solution. Then there is no element of a[1..n] after a_jk which is greater than a_jk, and no element that could be inserted between the last two elements of s (otherwise s would not be optimal).
We have to prove that s' = (a_j1, a_j2, …, a_j(k-1)) is an optimal solution of the subproblem of finding the longest strictly increasing subsequence ending in a_j(k-1). Suppose s' is not optimal; then there would exist a better solution s''. By appending the element a_jk to s'' one would obtain a better solution than s, implying that s is not optimal. This contradicts the initial hypothesis, so s' must be optimal, meaning that the problem has the optimal substructure property.

Longest strictly increasing subsequence
1. Analysis of the solution structure (continued).
Thus we can consider a generic problem:
P(i): find the longest strictly increasing subsequence of a[1..i] having a[i] as its last element.
The optimal solution s(i) of P(i) contains the optimal solution s(j) of a subproblem P(j) (with j < i and a[j] < a[i]), where j is chosen such that s(j) has the largest possible length.

Longest strictly increasing subsequence
2. Find a recurrence relation.
Let B[i] be the length of the longest strictly increasing subsequence ending at a[i]:
B[i] = 1, if i = 1
B[i] = 1 + max{B[j] | 1 <= j <= i-1, a[j] < a[i]}, otherwise
Remark: if the set {j | 1 <= j <= i-1, a[j] < a[i]} is empty, the maximum is taken to be 0.
Example: a = (2,5,1,3,6,8,2,10,4)
         B = (1,2,1,2,3,4,2,5,3)

Longest strictly increasing subsequence
3. Develop the recurrence relation:
computeB(a[1..n])
B[1] ← 1
FOR i ← 2,n DO
  max ← 0
  FOR j ← 1,i-1 DO
    IF a[j]<a[i] AND max<B[j] THEN max ← B[j] ENDIF
  ENDFOR
  B[i] ← max+1
ENDFOR
RETURN B[1..n]

Complexity: Θ(n^2)
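An illustrative Python version (the name compute_B is mine, and it uses 0-based indices instead of the slides' 1-based ones):

    def compute_B(a):
        # B[i] = length of the longest strictly increasing subsequence
        # of a ending at a[i] (0-based indices, Theta(n^2) time).
        n = len(a)
        B = [1] * n
        for i in range(1, n):
            best = 0
            for j in range(i):
                if a[j] < a[i] and B[j] > best:
                    best = B[j]
            B[i] = best + 1
        return B

    # Example from the slides:
    # compute_B([2, 5, 1, 3, 6, 8, 2, 10, 4]) == [1, 2, 1, 2, 3, 4, 2, 5, 3]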

Longest strictly increasing subsequence
4. Construction of the solution:
- find the index m of the maximum of B[1..n]
- construct s backwards, starting with its last element

construct(a[1..n],B[1..n])
m ← 1                 // m will be the index of the maximum of B[1..n]
FOR i ← 2,n DO
  IF B[i]>B[m] THEN m ← i ENDIF
ENDFOR
k ← B[m]; len ← k     // length of the optimal solution
s[k] ← a[m]           // last element of the solution
WHILE B[m]>1 DO
  i ← m-1             // search for the previous element
  WHILE a[i]>=a[m] OR B[i]<>B[m]-1 DO
    i ← i-1
  ENDWHILE
  m ← i; k ← k-1; s[k] ← a[m]
ENDWHILE
RETURN s[1..len]

Complexity: Θ(n)
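A Python sketch of this backward reconstruction (same naming and indexing assumptions as above):

    def construct(a, B):
        # Rebuild one optimal subsequence from the lengths in B by
        # scanning backwards for a predecessor that is smaller than the
        # current element and whose B-value is exactly one less.
        m = max(range(len(B)), key=lambda i: B[i])  # index of the maximum of B
        s = [a[m]]
        while B[m] > 1:
            i = m - 1
            while a[i] >= a[m] or B[i] != B[m] - 1:
                i -= 1
            m = i
            s.append(a[m])
        s.reverse()
        return s

    # a = [2, 5, 1, 3, 6, 8, 2, 10, 4]
    # construct(a, compute_B(a)) -> [1, 3, 6, 8, 10]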

Longest strictly increasing subsequence
Remark: the construction of the solution can be simplified if, while B[1..n] is computed, the index of the previous element of the subsequence is saved in an array P[1..n]:

computeB(a[1..n])
B[1] ← 1; P[1..n] ← 0
FOR i ← 2,n DO
  max ← 0
  FOR j ← 1,i-1 DO
    IF a[j]<a[i] AND max<B[j] THEN max ← B[j]; P[i] ← j ENDIF
  ENDFOR
  B[i] ← max+1
ENDFOR
RETURN B[1..n]

construct(a[1..n],B[1..n],P[1..n])
m ← 1
FOR i ← 2,n DO
  IF B[i]>B[m] THEN m ← i ENDIF
ENDFOR
k ← B[m]
s[k] ← a[m]; i ← P[m]
WHILE i>=1 DO
  k ← k-1; s[k] ← a[i]
  i ← P[i]
ENDWHILE
RETURN s[1..B[m]]
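The predecessor-array variant in Python (illustrative names, 0-based indices, with -1 playing the role of the slides' 0 sentinel):

    def compute_B_P(a):
        # As compute_B, but also record P[i]: the index of the previous
        # element of an optimal subsequence ending at a[i] (-1 if none).
        n = len(a)
        B = [1] * n
        P = [-1] * n
        for i in range(1, n):
            best = 0
            for j in range(i):
                if a[j] < a[i] and B[j] > best:
                    best = B[j]
                    P[i] = j
            B[i] = best + 1
        return B, P

    def construct_P(a, B, P):
        # Follow the predecessor links backwards from the index of the
        # maximum of B; no search is needed during reconstruction.
        m = max(range(len(B)), key=lambda i: B[i])
        s = []
        while m != -1:
            s.append(a[m])
            m = P[m]
        s.reverse()
        return s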

Applications of dynamic programming
Application 2: Longest common subsequence of two sequences
Given two sequences a1,…,an and b1,…,bm, find a subsequence c1,…,ck which satisfies:
- there exist indices i1 < … < ik and j1 < … < jk such that c1 = a_i1 = b_j1, c2 = a_i2 = b_j2, …, ck = a_ik = b_jk
- k is maximal
Remark: this problem occurs frequently in bioinformatics, where sequences of nucleotides (DNA) or amino acids (proteins) are compared in order to check whether they are similar. Two sequences are considered similar if they contain a long common subsequence.

Longest common subsequence
A variant of this problem: find the longest common subsequence of consecutive elements.
Example:
a: 2 1 3 4 5
b: 1 3 4 2
Common subsequences of consecutive elements: (1,3), (3,4), (1,3,4)
Remark: in the problem considered here the subsequence does not necessarily contain consecutive elements.
Example:
a: 2 1 4 3 2
b: 1 3 4 2
Common subsequences: (1,3), (1,2), (4,2), (1,3,2), (1,4,2)

Longest common subsequence
1. Analyzing the structure of an optimal solution.
Suppose the optimal solution ends in a[i] = b[j]. Then the problem reduces to finding the longest common subsequence of the partial sequences a[1..i] and b[1..j]. Thus we can consider the generic problem P(i,j): find the longest common subsequence of a[1..i] and b[1..j] (even if a[i] is not equal to b[j]).
The subproblems to be solved are:
- P(i-1,j-1), if a[i] = b[j]
- P(i-1,j) and P(i,j-1), if a[i] <> b[j]

Longest common subsequence
2. Find a recurrence relation. Let L[i,j] be the length of the optimal solution of P(i,j):
L[i,j] = 0, if i = 0 or j = 0
L[i,j] = 1 + L[i-1,j-1], if a[i] = b[j]
L[i,j] = max{L[i-1,j], L[i,j-1]}, otherwise

Longest common subsequence
3. Develop the recurrence relation.
Example: a = (2,1,4,3,2), b = (1,3,4,2)

              b:   1   3   4   2
         j:   0    1   2   3   4
  i=0:        0    0   0   0   0
  i=1 (a=2):  0    0   0   0   1
  i=2 (a=1):  0    1   1   1   1
  i=3 (a=4):  0    1   1   2   2
  i=4 (a=3):  0    1   2   2   2
  i=5 (a=2):  0    1   2   2   3

Longest common subsequence
Recurrence relation:
L[i,j] = 0, if i = 0 or j = 0
L[i,j] = 1 + L[i-1,j-1], if a[i] = b[j]
L[i,j] = max{L[i-1,j], L[i,j-1]}, otherwise

compute(a[1..n],b[1..m])
FOR i ← 0,n DO L[i,0] ← 0 ENDFOR
FOR j ← 1,m DO L[0,j] ← 0 ENDFOR
FOR i ← 1,n DO
  FOR j ← 1,m DO
    IF a[i]=b[j] THEN L[i,j] ← L[i-1,j-1]+1
    ELSE L[i,j] ← max{L[i-1,j],L[i,j-1]}
    ENDIF
  ENDFOR
ENDFOR
RETURN L[0..n,0..m]

Complexity: Θ(nm)
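An illustrative Python version of the table computation (0-based indexing, function name my own):

    def compute_L(a, b):
        # Bottom-up table of LCS lengths: L[i][j] is the length of the
        # longest common subsequence of a[:i] and b[:j].
        # Theta(n*m) time and space.
        n, m = len(a), len(b)
        L = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                if a[i - 1] == b[j - 1]:
                    L[i][j] = L[i - 1][j - 1] + 1
                else:
                    L[i][j] = max(L[i - 1][j], L[i][j - 1])
        return L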

Longest common subsequence
4. Construction of the solution:
- start from L[n,m]
- if the corresponding elements of the sequences are equal (a[i] = b[j]), include the common value in the solution and go up-left (to the element on the previous row and previous column)
- if the elements are different, go up or left, towards the larger of L[i-1,j] and L[i,j-1]
These moves can be followed on the table developed above.

Longest common subsequence
Construction of the solution in a recursive manner:
construct(i,j)
IF i>=1 AND j>=1 THEN
  IF a[i]=b[j] THEN
    construct(i-1,j-1)
    k ← k+1
    c[k] ← a[i]
  ELSE
    IF L[i-1,j]>L[i,j-1] THEN construct(i-1,j)
    ELSE construct(i,j-1)
    ENDIF
  ENDIF
ENDIF

Remarks:
- a, b, c, L and k are global variables
- before the first call, k should be initialized to 0
- the initial call is construct(n,m)
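A Python sketch of the recursive traceback, using a nested helper instead of the slides' global variables:

    def construct_lcs(a, b, L):
        # Recursive traceback through the table produced by compute_L:
        # on a match, record the symbol and move diagonally (up-left);
        # otherwise move towards the larger neighbouring entry.
        c = []
        def go(i, j):
            if i >= 1 and j >= 1:
                if a[i - 1] == b[j - 1]:
                    go(i - 1, j - 1)
                    c.append(a[i - 1])
                elif L[i - 1][j] > L[i][j - 1]:
                    go(i - 1, j)
                else:
                    go(i, j - 1)
        go(len(a), len(b))
        return c

    # Example from the slides:
    # a, b = [2, 1, 4, 3, 2], [1, 3, 4, 2]
    # construct_lcs(a, b, compute_L(a, b)) -> [1, 3, 2]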

Next lecture will be on …
… other applications of dynamic programming
… and memoization (memory functions)