Algorithms: Design and Analysis. Summer School 2013 at VIASM: Random Structures and Algorithms. Lecture 4: Dynamic Programming. Phan Thị Hà Dương.


Lecture 4: Dynamic Programming
0. Introduction
1. Matrix-chain multiplication
2. Longest common subsequence
3. All-pairs shortest paths

0. Introduction
Compare with the divide-and-conquer approach:
- Divide the problem into (independent) subproblems
- Conquer the subproblems recursively
- Combine the results of the subproblems to solve the initial problem.
Problem: if the subproblems are not independent, they share common sub-subproblems. In that case we use a table to store the results of these common sub-subproblems.

Divide and Conquer: top-down. Start from the big problem, divide it, and conquer the subproblems. Dynamic Programming: bottom-up. Start with the small problems and build larger and larger ones, up to the original problem.

A very simple example
Problem: calculate C(n,k) = n!/((n-k)! k!)
Divide-and-conquer algorithm:
function C(n,k)
  if (k=0 or k=n) then return 1;
  else return C(n-1,k)+C(n-1,k-1);
Question: what is the complexity of this algorithm?
Exercise: write a dynamic programming algorithm to solve this problem. What is its complexity?
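The divide-and-conquer function above can be transcribed directly as follows (the function name binom_rec is ours); note that it recomputes the same subproblems many times, which is exactly what motivates the dynamic programming version asked for in the exercise.

```python
# A sketch of the divide-and-conquer algorithm above (hypothetical
# name binom_rec). Exponential time: the calls C(n-1,k) and
# C(n-1,k-1) recompute many common sub-subproblems.
def binom_rec(n, k):
    if k == 0 or k == n:
        return 1
    return binom_rec(n - 1, k) + binom_rec(n - 1, k - 1)
```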

Pascal’s Triangle
Exercise: write an algorithm for this problem by using Pascal’s triangle.
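One possible bottom-up solution to the exercise (a sketch, with the hypothetical name binom_dp): fill Pascal's triangle row by row, so that each entry C(i,j) is computed exactly once, in O(n·k) time and O(k) space.

```python
# Bottom-up computation of C(n, k) via Pascal's triangle, keeping
# only the current row of the triangle.
def binom_dp(n, k):
    # row[j] holds C(i, j) for the current row i
    row = [1] + [0] * k
    for i in range(1, n + 1):
        # update right-to-left so row[j-1] still holds the
        # previous row's value C(i-1, j-1)
        for j in range(min(i, k), 0, -1):
            row[j] += row[j - 1]
    return row[k]
```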

1. Matrix-chain multiplication
Problem: given a chain A1, A2, ..., An of n matrices, where for i = 1, 2, ..., n, matrix Ai has dimension p(i-1) × p(i), fully parenthesize the product A1 A2 … An in a way that minimizes the number of scalar multiplications.


Analyzing the Problem
Example: M = ABCD, A = (10,2); B = (2,100); C = (100,3); D = (3,20). Number of scalar multiplications for the different ways to parenthesize:
((AB)C)D: 10x2x100 + 10x100x3 + 10x3x20 = 5600
(AB)(CD): 10x2x100 + 100x3x20 + 10x100x20 = 28000
(A(BC))D: 2x100x3 + 10x2x3 + 10x3x20 = 1260
A((BC)D): 2x100x3 + 2x3x20 + 10x2x20 = 1120
A(B(CD)): 100x3x20 + 2x100x20 + 10x2x20 = 10400

MATRIX-MULTIPLY(A, B)
  if columns[A] ≠ rows[B]
    then error "incompatible dimensions"
  else for i ← 1 to rows[A]
         do for j ← 1 to columns[B]
              do C[i, j] ← 0
                 for k ← 1 to columns[A]
                   do C[i, j] ← C[i, j] + A[i, k] · B[k, j]
  return C
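A direct Python transcription of MATRIX-MULTIPLY, with matrices as lists of rows. Multiplying a p×q matrix by a q×r matrix takes p·q·r scalar multiplications, which is precisely the quantity the chain ordering below minimizes.

```python
# Naive matrix multiplication, following MATRIX-MULTIPLY above.
def matrix_multiply(A, B):
    p, q = len(A), len(A[0])
    if q != len(B):
        raise ValueError("incompatible dimensions")
    r = len(B[0])
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
    return C
```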

The number of ways to parenthesize
If there are n matrices, this number is the Catalan number C(n) = (1/n)·C(2n-2, n-1) = Ω(4^n / n^(3/2)). We cannot consider all cases to make a decision.
Idea: if, for computing Ai … Aj, a cut at position k is optimal, then the minimum cost is composed of this cut together with the minimum cost for computing Ai … Ak and the minimum cost for computing A(k+1) … Aj.
Use a table m(i,j) to store the minimum cost for computing Ai … Aj.

Computing the minimum cost
Computing m(i,j): if we cut at k, then m[i,j] = m[i,k] + m[k+1,j] + p(i-1)·p(k)·p(j). So the formula for m(i,j) is
  m[i,j] = 0                                                  if i = j
  m[i,j] = min over i ≤ k < j of { m[i,k] + m[k+1,j] + p(i-1)·p(k)·p(j) }   if i < j
We compute m(i,j) by diagonals: let l = j - i + 1 be the length of the chain of matrices; we compute m(i,j) for l from 2 to n.

Example
M = ABCD, A = (10,2); B = (2,100); C = (100,3); D = (3,20).
l = 2: m[1,2] = 10x2x100 = 2000, m[2,3] = 2x100x3 = 600, m[3,4] = 100x3x20 = 6000
l = 3: m[1,3] = 660 (cut at k = 1), m[2,4] = 720 (cut at k = 3)
l = 4: m[1,4] = 1120 (cut at k = 1)

Algorithm
MATRIX-CHAIN-ORDER(p)
  n ← length[p] - 1
  for i ← 1 to n
    do m[i, i] ← 0
  for l ← 2 to n                      ▹ l is the chain length
    do for i ← 1 to n - l + 1
         do j ← i + l - 1
            m[i, j] ← ∞
            for k ← i to j - 1
              do q ← m[i,k] + m[k+1,j] + p(i-1)·p(k)·p(j)
                 if q < m[i, j]
                   then m[i, j] ← q; s[i, j] ← k
  return m and s
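MATRIX-CHAIN-ORDER can be sketched in Python as follows (0-indexed, so matrix A_i has dimension p[i] × p[i+1]); on the ABCD example above, with p = [10, 2, 100, 3, 20], the minimum cost m[0][3] is 1120.

```python
import math

# Dynamic programming over chain lengths: m[i][j] is the minimum
# number of scalar multiplications to compute A_i ... A_j, and
# s[i][j] the optimal cut position k.
def matrix_chain_order(p):
    n = len(p) - 1                      # number of matrices
    m = [[0] * n for _ in range(n)]
    s = [[0] * n for _ in range(n)]
    for l in range(2, n + 1):           # l = chain length
        for i in range(n - l + 1):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i] * p[k + 1] * p[j + 1]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s
```

Running it on p = [10, 2, 100, 3, 20] reproduces the diagonals of the example: m[0][1] = 2000, m[1][2] = 600, and m[0][3] = 1120, matching the best parenthesization A((BC)D).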

3. All-pairs shortest paths
Problem: given a graph G = (V, E) where each edge has a length, find a shortest path between every pair of vertices of V.

Floyd-Warshall algorithm
Notation: V = {1, 2, ..., n}
Length of edges: l[i,i] = 0; l[i,j] = l(e) if e = (i,j) is in E; l[i,j] = ∞ otherwise.
Distance: d(i,j) is the current shortest length found from i to j.

Idea of FW’s algorithm
If k is on a shortest path from i to j, then the subpaths from i to k and from k to j are also shortest.
Algorithm:
– Beginning: d = l
– After step k, d(i,j) is the length of a shortest path from i to j whose intermediate vertices are among 1, 2, …, k
– After n steps, d(i,j) is the length of a shortest path from i to j

Example
Starting from D0 = L, the algorithm computes D1, D2, D3 and D4 in turn, where Dk[i,j] is the shortest length of a path from i to j whose intermediate vertices all lie in {1, …, k}. (The example matrices did not survive in this transcript.)

FW’s algorithm
Floyd(L)
  array D = L
  for k = 1 to n
    do for i = 1 to n
         do for j = 1 to n
              do D[i,j] ← min(D[i,j], D[i,k] + D[k,j])
  return D
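Floyd(L) translates almost line for line into Python (0-indexed vertices, with math.inf marking a missing edge):

```python
import math

# Floyd-Warshall: after the k-th outer iteration, D[i][j] is the
# shortest length of a path from i to j whose intermediate vertices
# all lie in {0, ..., k}.
def floyd_warshall(L):
    n = len(L)
    D = [row[:] for row in L]           # copy so L is not modified
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D
```

For instance, on the 4-vertex cycle with edges 0→1 of length 5, 1→2 of length 1, 2→3 of length 2 and 3→0 of length 1, the result gives D[0][3] = 8 (the path 0→1→2→3).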

Exercise
1. What is the complexity of the FW algorithm?
2. Prove the correctness of the algorithm.
3. Write an algorithm which returns not only the length of a shortest path but also a shortest path itself, for each pair of vertices.
4. Write an algorithm to determine whether there exists a path between each pair of vertices.
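As a hint for exercise 3, one possible approach (a sketch, not the only solution) is to maintain a successor matrix nxt[i][j], the first vertex after i on a shortest path to j, and update it whenever the relaxation step improves D[i][j]:

```python
import math

# Floyd-Warshall extended with path reconstruction.
def floyd_with_paths(L):
    n = len(L)
    D = [row[:] for row in L]
    # nxt[i][j]: next vertex after i on a shortest i-j path
    nxt = [[j if L[i][j] < math.inf else None for j in range(n)]
           for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    nxt[i][j] = nxt[i][k]   # go through k first
    return D, nxt

def path(nxt, i, j):
    # rebuild one shortest path from i to j, or [] if none exists
    if nxt[i][j] is None:
        return []
    p = [i]
    while i != j:
        i = nxt[i][j]
        p.append(i)
    return p
```

For exercise 4, the same triple loop with boolean "or/and" in place of "min/+" computes the transitive closure (reachability) of the graph.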