Lecture 2: Dynamic Programming (Lecturer: 虞台文)

Content What is Dynamic Programming? Matrix Chain-Products Sequence Alignments Knapsack Problem All-Pairs Shortest Path Problem Traveling Salesman Problem Conclusion

Lecture 2: Dynamic Programming What is Dynamic Programming?

Dynamic Programming (DP) breaks the original problem into sub-problems of smaller size. The optimal solution of a larger sub-problem is found through a recurrence formula that connects the optimal solutions of its smaller sub-problems. DP is used when the solution to a problem may be viewed as the result of a sequence of decisions.

Properties for Problems Solved by DP:
- Simple Subproblems: the original problem can be broken into smaller subproblems with the same structure.
- Optimal Substructure: the solution to the problem must be a composition of subproblem solutions (the principle of optimality).
- Subproblem Overlap: optimal solutions to unrelated subproblems can contain subproblems in common.
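
To see why subproblem overlap matters, here is a minimal sketch (mine, not from the slides) for the Fibonacci recurrence F(i) = F(i-1) + F(i-2) with F(0) = 0 and F(1) = 1. Naive recursion recomputes the same subproblems exponentially often; a DP table solves each subproblem exactly once.

# Naive recursion: the same fib_naive(i) calls are repeated exponentially often.
def fib_naive(i):
    return i if i <= 1 else fib_naive(i - 1) + fib_naive(i - 2)

# DP: fill a table bottom-up so each subproblem is solved once, in O(n) time.
def fib_dp(n):
    F = [0] * (n + 1)
    if n >= 1:
        F[1] = 1
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]
    return F[n]

print(fib_naive(10), fib_dp(10))  # both print 55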

The Principle of Optimality: the basic principle of dynamic programming, developed by Richard Bellman. An optimal path has the property that, whatever the initial conditions and the control variables (choices) over some initial period, the controls (or decision variables) chosen over the remaining period must be optimal for the remaining problem, with the state resulting from the early decisions taken as the initial condition.

Example: Shortest Path Problem. (figure slides: a graph with a Start node and a Goal node, shown repeatedly as the shortest path is built up step by step.)

Recall: Greedy Method for Shortest Paths on a Multi-stage Graph. Problem: find a shortest path from v0 to v3. Is the greedy solution optimal? (figure: the optimal path differs from the greedy one, so the greedy solution is not optimal.)

Example: Dynamic Programming. (figure slide: the same multi-stage graph solved by DP.)

Lecture 2: Dynamic Programming Matrix Chain-Products

Matrix Multiplication: C = A × B, where A is d × e and B is e × f. Computing C takes O(d·e·f) operations. (figure: entry C[i,j] is computed from row i of A and column j of B.)

Matrix Chain-Products: given a sequence of matrices A1, A2, …, An, find the most efficient way to multiply them together. Facts: A(BC) = (AB)C, but different parenthesizations may need different numbers of operations. Example: A: 10 × 30, B: 30 × 5, C: 5 × 60. (AB)C = (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 ops; A(BC) = (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 ops.

Matrix Chain-Products: a brute-force approach is to try all possible ways to parenthesize A = A1·A2·…·An, calculate the number of operations for each one, and pick the best one. Time complexity: #parenthesizations = #binary trees with n leaves, which grows exponentially (almost 4ⁿ).

A Greedy Approach. Idea #1: repeatedly select the product that uses the most operations. Counter-example: A: 10 × 5, B: 5 × 10, C: 10 × 5, and D: 5 × 10. Greedy idea #1 gives (AB)(CD), which takes 500 + 500 + 1000 = 2000 ops; A((BC)D) takes 250 + 250 + 500 = 1000 ops.

Another Greedy Approach. Idea #2: repeatedly select the product that uses the fewest operations. Counter-example: A: 101 × 11, B: 11 × 9, C: 9 × 100, and D: 100 × 99. Greedy idea #2 gives A((BC)D), which takes 9900 + 108900 + 109989 = 228789 ops; (AB)(CD) takes 9999 + 89100 + 89991 = 189090 ops. A sanity-check sketch follows.
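
Both counter-examples can be checked with a small helper (mine, not part of the slides) that computes the cost of any fully parenthesized product; a parenthesization is written as nested pairs.

def chain_cost(expr, dims):
    # expr: a matrix name (string) or a pair (left, right);
    # dims[name] = (rows, cols). Returns (rows, cols, #scalar multiplications).
    if isinstance(expr, str):
        r, c = dims[expr]
        return r, c, 0
    lr, lc, lops = chain_cost(expr[0], dims)
    rr, rc, rops = chain_cost(expr[1], dims)
    assert lc == rr, "inner dimensions must agree"
    return lr, rc, lops + rops + lr * lc * rc

dims = {"A": (10, 5), "B": (5, 10), "C": (10, 5), "D": (5, 10)}
print(chain_cost((("A", "B"), ("C", "D")), dims)[2])  # 2000
print(chain_cost(("A", (("B", "C"), "D")), dims)[2])  # 1000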

DP: Define Subproblem. Subproblem P(i,j), i ≤ j: multiply the chain Ai·…·Aj optimally; the original problem is P(1,n). Suppose the #operations for the optimal solution of P(i,j) is N(i,j); the #operations for the optimal solution of the original problem P(1,n) is then N(1,n). What is the relation between the N(i,j) of the subproblems and N(1,n)?

DP: Principle of Optimality. Let matrix Ai be d(i) × d(i+1). An optimal solution of P(i,j) splits at some k into optimal solutions of P(i,k) and P(k+1,j), plus the final multiplication of a d(i) × d(k+1) matrix by a d(k+1) × d(j+1) matrix:
N(i,j) = min over i ≤ k < j of { N(i,k) + N(k+1,j) + d(i)·d(k+1)·d(j+1) }, with N(i,i) = 0.

DP: Implementation. (figure slides: the table N(i,j), with rows i = 1..n and columns j = 1..n, is filled one diagonal at a time; each entry marked '?' is computed from already-filled entries in its row and column.)

DP for Matrix Chain-Products

Algorithm matrixChain(S):
  Input: sequence S of n matrices to be multiplied
  Output: number of operations in an optimal parenthesization of S
  for i ← 1 to n            // main diagonal terms are all zero
    N[i,i] ← 0
  for d ← 2 to n            // do the following for each diagonal
    for i ← 1 to n - d + 1  // from top to bottom within the diagonal
      j ← i + d - 1
      N[i,j] ← infinity
      for k ← i to j - 1    // take the minimum over all split points
        N[i,j] ← min(N[i,j], N[i,k] + N[k+1,j] + d(i)·d(k+1)·d(j+1))

Time Complexity: the matrixChain algorithm above has three nested loops (diagonal d, row i, split point k), each ranging over at most n values, so it runs in O(n³) time.
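
For concreteness, a runnable Python transcription of matrixChain (0-based lists; the input is the dimension vector d, where matrix i is d[i-1] × d[i]):

import math

def matrix_chain(d):
    # d: list of n+1 dimensions; returns the minimum number of scalar multiplications.
    n = len(d) - 1
    N = [[0] * n for _ in range(n)]     # N[i][j]: best cost for the chain A_(i+1)..A_(j+1)
    for length in range(2, n + 1):      # the diagonal loop of the pseudocode
        for i in range(n - length + 1):
            j = i + length - 1
            N[i][j] = math.inf
            for k in range(i, j):       # split point between A_(k+1) and A_(k+2)
                cost = N[i][k] + N[k + 1][j] + d[i] * d[k + 1] * d[j + 1]
                N[i][j] = min(N[i][j], cost)
    return N[0][n - 1]

# A: 10x30, B: 30x5, C: 5x60 from the earlier example
print(matrix_chain([10, 30, 5, 60]))  # 4500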

Exercises 1. The matrixChain algorithm only computes the #operations of an optimal parenthesization; it doesn't report the optimal parenthesization scheme itself. Modify the algorithm so that it does. 2. Give an example with 5 matrices and illustrate your idea using a table.

Lecture 2: Dynamic Programming Sequence Alignment

Question: Given two strings, are they similar? What is their distance?

Example: Y: applicable, X: plausibly. How similar are they? Can you give them a score?

Example:
Y': applica---ble
X': -p-l--ausibly
Three cases: matches (+1), mismatches (-1), and insertions & deletions, i.e., indels (-1). These values depend on the application; they can be described using a so-called substitution matrix, to be discussed shortly.
Score = 5 × (+1) + 1 × (-1) + 7 × (-1) = -3. Is this alignment optimal?

Sequence Alignment In bioinformatics, a sequence alignment is a way of arranging the primary sequences of DNA, RNA, or protein to identify regions of similarity that may be a consequence of functional, structural, or evolutionary relationships between the sequences.

Global and Local Alignments (figure):
Global alignment:
  L G P S S K Q T G K G S - S R I W D N
  L N - I T K S A G K G A I M R L G D A
Local alignment:
  T G K G
  A G K G

Global Alignment: attempts to align the entire sequences; most useful when the sequences in the query set are similar and of roughly equal size; Needleman–Wunsch algorithm (1970). Local Alignment: attempts to align partial regions of the sequences with a high level of similarity; Smith-Waterman algorithm (1981).

Needleman–Wunsch Algorithm: finds the best global alignment of any two sequences under a given substitution matrix. It maximizes a similarity score to give the maximum match, i.e., the largest number of residues of one sequence that can be matched with another, allowing for all possible gaps. It is based on dynamic programming and involves an iterative matrix method of calculation.

Substitution Matrix In bioinformatics, a substitution matrix estimates the rate at which each possible residue in a sequence changes to each other residue over time. Substitution matrices are usually seen in the context of amino acid or DNA sequence alignment, where the similarity between sequences depends on the mutation rates as represented in the matrix.

Substitution Matrix (DNA) w/o Gap Cost:
      A    C    G    T
A     2   -1    1   -1
C    -1    2   -1    1
G     1   -1    2   -1
T    -1    1   -1    2

Substitution Matrix (DNA) w/ Gap Cost:
      A    C    G    T    -
A     2   -1    1   -1   -2
C    -1    2   -1    1   -2
G     1   -1    2   -1   -2
T    -1    1   -1    2   -2
-    -2   -2   -2   -2    0
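
In code, such a substitution matrix is just a lookup table. A minimal sketch (mirroring the DNA matrix above, with '-' standing for a gap):

S = {}
for a in "ACGT":
    for b in "ACGT":
        if a == b:
            S[a, b] = 2                              # match
        elif {a, b} in ({"A", "G"}, {"C", "T"}):
            S[a, b] = 1                              # transition
        else:
            S[a, b] = -1                             # transversion
for x in "ACGT":
    S[x, "-"] = S["-", x] = -2                       # gap cost
S["-", "-"] = 0

print(S["A", "G"], S["A", "T"], S["C", "-"])  # 1 -1 -2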

Substitution Matrix (3D-BLAST). (figure)

DP: Define Subproblem. Consider two strings, s of length n and t of length m, and let S be the substitution matrix. Subproblem P(i,j): the optimal alignment of the two prefixes t[1..i] and s[1..j]; let M(i,j) be its matching score. Original problem: P(m,n), with matching score M(m,n).

DP: Principle of Optimality. The last column of an optimal alignment of t[1..i] and s[1..j] either pairs t(i) with s(j), or pairs t(i) with a gap, or pairs a gap with s(j). Hence
M(i,j) = max{ M(i-1,j-1) + S(t(i), s(j)), M(i-1,j) + gap, M(i,j-1) + gap }.

Example (s: ACGGTAG, t: CCTAAG). Step 1. Create a scoring matrix. Step 2. Make an empty table for M(i,j). Step 3. Initialize the base conditions (the first row and column hold multiples of the gap penalty: 0, -2, -4, -6, …). Step 4. Fill the table using the recurrence. Step 5. Trace back. (figure slides: the table being filled entry by entry and the traceback path; the traceback yields the alignment s': ACGGTAG, t': CCTA-AG.)

Needleman–Wunsch Algorithm. Step 1. Create a scoring matrix. Step 2. Make an empty table for M(i,j). Step 3. Initialize base conditions. Step 4. Fill the table:
for i ← 2 to m+1 do
  for j ← 2 to n+1 do
    M[i,j] ← max( M[i-1,j-1] + S(t(i-1), s(j-1)), M[i-1,j] + gap, M[i,j-1] + gap )
Step 5. Trace back.

Needleman–Wunsch Algorithm, Step 5. Trace back (start at i = m+1, j = n+1):
s' ← "", t' ← ""
while i > 1 and j > 1 do
  if M[i,j] = M[i-1,j-1] + S(t(i-1), s(j-1)) then
    s' ← s(j-1) + s'; t' ← t(i-1) + t'; i ← i-1; j ← j-1
  else if M[i,j] = M[i,j-1] + gap then
    s' ← s(j-1) + s'; t' ← gap + t'; j ← j-1
  else
    s' ← gap + s'; t' ← t(i-1) + t'; i ← i-1
while i > 1 do s' ← gap + s'; t' ← t(i-1) + t'; i ← i-1
while j > 1 do s' ← s(j-1) + s'; t' ← gap + t'; j ← j-1
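
A compact runnable sketch of the whole procedure (a linear gap penalty and the DNA scorer above are assumed; this is my transcription, not the slides' exact code):

def needleman_wunsch(s, t, score, gap=-2):
    n, m = len(s), len(t)
    M = [[0] * (n + 1) for _ in range(m + 1)]   # M[i][j]: best score for t[:i] vs s[:j]
    for i in range(1, m + 1):
        M[i][0] = i * gap
    for j in range(1, n + 1):
        M[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            M[i][j] = max(M[i-1][j-1] + score(t[i-1], s[j-1]),  # match/mismatch
                          M[i-1][j] + gap,                      # gap in s
                          M[i][j-1] + gap)                      # gap in t
    # trace back from the bottom-right corner
    s2, t2, i, j = "", "", m, n
    while i > 0 and j > 0:
        if M[i][j] == M[i-1][j-1] + score(t[i-1], s[j-1]):
            s2, t2, i, j = s[j-1] + s2, t[i-1] + t2, i - 1, j - 1
        elif M[i][j] == M[i][j-1] + gap:
            s2, t2, j = s[j-1] + s2, "-" + t2, j - 1
        else:
            s2, t2, i = "-" + s2, t[i-1] + t2, i - 1
    while i > 0: s2, t2, i = "-" + s2, t[i-1] + t2, i - 1
    while j > 0: s2, t2, j = s[j-1] + s2, "-" + t2, j - 1
    return M[m][n], s2, t2

match = lambda a, b: 2 if a == b else (1 if {a, b} in ({"A", "G"}, {"C", "T"}) else -1)
print(needleman_wunsch("ACGGTAG", "CCTAAG", match))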

Local Alignment Problem: given two strings s = s1…sn and t = t1…tm, find substrings s', t' whose similarity (optimal global alignment value) is maximum.

Example: Local Alignment. The two strings:
GTAGTCATCATATGCCTGACTGACG
TCCATDOGCATCCTACTACTGACTGACA
(figure: difference blocks and the best aligned subsequences are highlighted.)

Recursive Formulation.
Global Alignment (Needleman–Wunsch Algorithm):
  M(i,j) = max{ M(i-1,j-1) + S(t(i), s(j)), M(i-1,j) + gap, M(i,j-1) + gap }
Local Alignment (Smith-Waterman Algorithm):
  M(i,j) = max{ 0, M(i-1,j-1) + S(t(i), s(j)), M(i-1,j) + gap, M(i,j-1) + gap },
with the answer being the maximum entry anywhere in the table.
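
The local version changes only the fill step: scores are floored at 0 (an empty local alignment is always available) and the answer is the best cell anywhere. A minimal sketch of the scoring pass, under the same assumptions as the global code above:

def smith_waterman_score(s, t, score, gap=-2):
    n, m = len(s), len(t)
    M = [[0] * (n + 1) for _ in range(m + 1)]   # first row and column stay 0
    best = 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            M[i][j] = max(0,                                   # restart: empty alignment
                          M[i-1][j-1] + score(t[i-1], s[j-1]),
                          M[i-1][j] + gap,
                          M[i][j-1] + gap)
            best = max(best, M[i][j])                          # best cell anywhere
    return best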

Exercises 3. Find the best locally aligned substrings of the following two DNA strings: GAATTCAGTTA and GGATCGA. You have to give the details. Hint: start from the left table.

Exercises 4. What is the longest common subsequence (LCS) problem? How can LCS be solved using the dynamic programming technique?

Lecture 2: Dynamic Programming Knapsack Problem

Knapsack Problems: given some items, pack the knapsack to get the maximum total value. Each item has some weight and some benefit, and the total weight we can carry is no more than some fixed capacity. Fractional knapsack problem: items are divisible, so you can take any fraction of an item; solved with a greedy algorithm. 0-1 knapsack problem: items are indivisible, so you either take an item or not; solved with dynamic programming.

Given a knapsack with maximum capacity W and a set S consisting of n items, where each item i has some weight w(i) and benefit value b(i) (all w(i) and W are integer values). Problem: how to pack the knapsack to achieve the maximum total value of packed items? Why is it called a 0-1 knapsack problem?

Example: 0-1 Knapsack Problem. Which boxes should be chosen to maximize the amount of money while still keeping the overall weight under 15 kg? (figure)

Example: 0-1 Knapsack Problem. Identify the objective function, the unknowns or variables, and the constraints. (figure)

Formulation: 0-1 Knapsack Problem. Maximize Σ b(i)·x(i) subject to Σ w(i)·x(i) ≤ W and x(i) ∈ {0, 1}, where x(i) = 1 iff item i is packed.

0-1 Knapsack Problem: Brute-Force Approach. Since there are n items, there are 2ⁿ possible combinations of items. We go through all combinations and find the one with the maximum value whose total weight is less than or equal to W. The running time is O(2ⁿ).

DP: Define Subproblem. Suppose the items are labeled 1, …, n. Define subproblem P(k) as finding an optimal solution using only the items in S(k) = {1, 2, …, k}; the original problem is then P(n). Is such a scheme workable? Does the principle of optimality hold?

A Counterexample. Capacity: 20 kg; items: 1. 2 kg, 3$; 2. 3 kg, 4$; 3. 4 kg, 5$; 4. 5 kg, 8$; 5. 9 kg, 10$.
Subproblem   Optimum         Value
P1           {1}             3$
P2           {1, 2}          7$
P3           {1, 2, 3}       12$
P4           {1, 2, 3, 4}    20$
P5           {1, 3, 4, 5}    26$
The solution for P4 is not part of the solution for P5!

DP: Define Subproblem (revisited). With subproblems P(k) defined as above, the scheme is not workable: the principle of optimality does not hold, as the counterexample shows.

DP: Define Subproblem (new version). Define subproblem P(k,w) as finding an optimal solution using only items in S(k) = {1, 2, …, k} with total weight no more than w; the original problem is then P(n,W). Is such a scheme workable? Does the principle of optimality hold now?

DP: Principle of Optimality. Denote the benefit of the optimal solution of P(k,w) by B(k,w). If w(k) > w, it is impossible to include the k-th item, so B(k,w) = B(k-1,w). Otherwise there are two possible choices, not including or including the k-th item:
B(k,w) = max{ B(k-1,w), B(k-1,w-w(k)) + b(k) }.

Example. Capacity: 5 kg; items: 1. 2 kg, 3$; 2. 3 kg, 4$; 3. 4 kg, 5$; 4. 5 kg, 6$.
Step 1. Set up the table B(k,w) for k = 0..4 and w = 0..5, and initialize the base conditions B(0,w) = 0.
Step 2. Fill all table entries progressively using the recurrence.
Step 3. Trace back. (figure slides: the table filled row by row; the optimum is B(4,5) = 7$, achieved by packing items 1 and 2. A runnable sketch follows.)
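
A runnable sketch of the table fill and trace-back just illustrated (function and variable names are mine, not the slides'):

def knapsack_01(w, b, W):
    # w[i], b[i]: weight and benefit of item i+1; W: capacity.
    # Returns (best benefit, chosen item labels 1..n).
    n = len(w)
    B = [[0] * (W + 1) for _ in range(n + 1)]  # B[k][cap]: best using items 1..k
    for k in range(1, n + 1):
        for cap in range(W + 1):
            B[k][cap] = B[k-1][cap]            # choice 1: skip item k
            if w[k-1] <= cap:                  # choice 2: include item k if it fits
                B[k][cap] = max(B[k][cap], B[k-1][cap - w[k-1]] + b[k-1])
    chosen, cap = [], W
    for k in range(n, 0, -1):                  # trace back: item k was taken iff
        if B[k][cap] != B[k-1][cap]:           # it changed the table entry
            chosen.append(k)
            cap -= w[k-1]
    return B[n][W], sorted(chosen)

# items: 1. 2 kg 3$, 2. 3 kg 4$, 3. 4 kg 5$, 4. 5 kg 6$; capacity 5 kg
print(knapsack_01([2, 3, 4, 5], [3, 4, 5, 6], 5))  # (7, [1, 2])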

Pseudo-Polynomial Time Algorithm. The time complexity of the DP for 0-1 knapsack is O(Wn). This is not a polynomial-time algorithm if W is large: the input needs only about log W bits to encode W, so the running time is exponential in the input size. Such an algorithm is called pseudo-polynomial.

Lecture 2: Dynamic Programming All-Pairs Shortest Path Problem

All-Pairs Shortest Path Problem: given a weighted graph G(V, E), we want to determine the cost d(i,j) of the shortest path between each pair of nodes in V.

Floyd's Algorithm. Let D(k)[i,j] be the minimum cost of a path from node i to node j using only intermediate nodes in V(k) = {v1, …, vk}. The all-pairs shortest path problem is to find all costs D(n)[i,j]. A shortest path that may use v(k) either avoids it or passes through it exactly once, so
D(k)[i,j] = min{ D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j] }, with D(0)[i,j] = w(i,j).

Floyd's Algorithm

Input parameter: D; output parameters: D, next
all_paths(D, next) {
  n = D.NumberOfRows
  // initialize next: next[i][j] = j when no intermediate vertices are allowed
  for i = 1 to n
    for j = 1 to n
      next[i][j] = j
  for k = 1 to n  // compute D(k)
    for i = 1 to n
      for j = 1 to n
        if (D[i][k] + D[k][j] < D[i][j]) {
          D[i][j] = D[i][k] + D[k][j]
          next[i][j] = next[i][k]
        }
}
Running time: O(n³).

Floyd's Algorithm

Input parameters: next, i, j; output parameters: none
print_path(next, i, j) {
  // if there are no intermediate vertices, just print i and j and return
  if (j == next[i][j]) {
    print(i + " " + j)
    return
  }
  // output i and then the path from the vertex after i (next[i][j]) to j
  print(i + " ")
  print_path(next, next[i][j], j)
}
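
The same two procedures in runnable Python (inf marks a missing edge; the example graph is illustrative, with assumed weights):

from math import inf

def all_paths(D):
    # D: n x n cost matrix with D[i][i] = 0. Mutates D into shortest-path costs and
    # returns nxt, where nxt[i][j] is the vertex after i on a shortest i-to-j path.
    n = len(D)
    nxt = [[j for j in range(n)] for _ in range(n)]
    for k in range(n):                     # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    nxt[i][j] = nxt[i][k]
    return nxt

def path(nxt, i, j):
    p = [i]
    while i != j:
        i = nxt[i][j]
        p.append(i)
    return p

D = [[0, 3, inf, 7], [8, 0, 2, inf], [5, inf, 0, 1], [2, inf, inf, 0]]
nxt = all_paths(D)
print(D[0][3], path(nxt, 0, 3))  # 6 [0, 1, 2, 3]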

Lecture 2: Dynamic Programming Traveling Salesman Problem

Traveling Salesman Problem (TSP)

How many feasible tours are there for n cities? Fixing the start city, there are (n-1)! feasible tours, or (n-1)!/2 if a tour and its reverse count as the same.

Example (TSP). Tour lengths: (1,2,3,4) = 18, (1,2,4,3) = 19, (1,3,2,4) = 23, (1,3,4,2) = 19, (1,4,2,3) = 23, (1,4,3,2) = 18.

Subproblem Formulation for TSP. Let g(i, S) be the length of the shortest path from city i back to city 1 that visits each city in S exactly once. Then g(1, V-{1}) is the length of the optimal TSP tour. The first step from i goes to some j ∈ S, so
g(i, S) = min over j ∈ S of { d(i,j) + g(j, S-{j}) }, with g(i, ∅) = d(i,1).
Goal: g(1, V-{1}).

Example. Goal: g(1, V-{1}). (figure slides: the recursion tree expanding g(1, {2,3,4}) along the edge costs d(1,2), d(1,3), d(1,4), then d(2,3), d(2,4), d(3,2), d(3,4), d(4,2), d(4,3), down to the base cases.)

DP: TSP Algorithm

Input parameter: D; output parameter: P  // path
TSP(D) {
  n = Dim(D)
  for i = 1 to n
    g[i, ∅] = D[i, 1]
  for k = 1 to n-2                       // compute g for subproblems
    for all S ⊆ V-{1} with |S| = k
      for all i ∈ V-(S ∪ {1})
        g[i, S] = min over j ∈ S of { D[i, j] + g[j, S-{j}] }
        P[i, S] = the j attaining that minimum
  // compute the TSP tour
  g[1, V-{1}] = min over j ∈ V-{1} of { D[1, j] + g[j, V-{1, j}] }
  P[1, V-{1}] = the j attaining that minimum
}
Running time: O(n²·2ⁿ), since there are O(n·2ⁿ) subproblems g[i, S], each solved in O(n) time.
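
A runnable sketch of this algorithm (the Held-Karp algorithm) using frozensets for S; cities are numbered 0..n-1 with 0 playing the role of city 1, and the 4-city matrix is illustrative, with assumed distances:

from itertools import combinations

def tsp(D):
    # D: n x n distance matrix. Returns the length of the optimal tour from city 0.
    n = len(D)
    g = {}                                 # g[i, S]: shortest i -> 0 path through all of S
    for i in range(1, n):
        g[i, frozenset()] = D[i][0]
    for k in range(1, n - 1):              # subset sizes 1 .. n-2
        for S in map(frozenset, combinations(range(1, n), k)):
            for i in range(1, n):
                if i not in S:
                    g[i, S] = min(D[i][j] + g[j, S - {j}] for j in S)
    full = frozenset(range(1, n))
    return min(D[0][j] + g[j, full - {j}] for j in range(1, n))

D = [[0, 2, 9, 10], [1, 0, 6, 4], [15, 7, 0, 8], [6, 3, 12, 0]]
print(tsp(D))  # 21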