Dynamic Programming 21 August 2004 Presented by Eddy Chan Written by Ng Tung.


Example: Grid Path Counting. In an N×M grid, we want to move from the top-left cell to the bottom-right cell. We may only move down or right, and some cells may be impassable. (An example of one path was shown on the original slide.)

Example: Grid Path Counting. Naïve algorithm: perform a DFS.

Function DFS(x, y: integer): integer;
Begin
  If (x <= N) and (y <= M) Then
  Begin
    If (x = N) and (y = M) Then
      DFS := 1 {base case: reached the destination}
    Else If Grid[x,y] <> IMPASSABLE Then
      DFS := DFS(x+1,y) + DFS(x,y+1)
    Else
      DFS := 0; {impassable cell}
  End
  Else
    DFS := 0; {out of the grid}
End;

Example: Grid Path Counting. Time complexity of this algorithm: each call branches into two recursive calls, so the time complexity is exponential. An alternative way to estimate the running time: the base case is reached once for every path, so the time complexity is at least proportional to the number of paths.

Example: Grid Path Counting. Can we do better? Note that DFS(1,1) calls DFS(1,2) and DFS(2,1); DFS(1,2) calls DFS(1,3) and DFS(2,2); DFS(2,1) calls DFS(3,1) and DFS(2,2). So DFS(2,2) is called twice, but the result is the same both times: time is wasted. We can instead memoize the values.

Example: Grid Path Counting. Every time we find the value of a particular DFS(x,y), we store it in a table. The next time DFS(x,y) is called, we can use the value in the table directly, without calling DFS(x+1,y) and DFS(x,y+1) again. This is called recursion with memoization, or Top-Down Dynamic Programming.

Example: Grid Path Counting.

{Memory[] must be initialized to -1 before the first call}
Function DFS(x, y: integer): integer;
Begin
  If (x <= N) and (y <= M) Then
  Begin
    If Memory[x,y] = -1 Then {value not yet computed}
    Begin
      If (x = N) and (y = M) Then
        Memory[x,y] := 1
      Else If Grid[x,y] <> IMPASSABLE Then
        Memory[x,y] := DFS(x+1,y) + DFS(x,y+1)
      Else
        Memory[x,y] := 0;
    End;
    DFS := Memory[x,y];
  End
  Else
    DFS := 0; {out of the grid}
End;
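The same memoized search can be sketched in Python. The grid encoding ('#' for an impassable cell) and the 0-based indexing are assumptions made for illustration; they are not part of the original slides.

```python
from functools import lru_cache

IMPASSABLE = '#'

def count_paths(grid):
    """Count down/right paths from the top-left to the bottom-right cell,
    memoizing each (x, y) so it is computed only once."""
    n, m = len(grid), len(grid[0])

    @lru_cache(maxsize=None)
    def dfs(x, y):
        if x >= n or y >= m:                # out of the grid
            return 0
        if grid[x][y] == IMPASSABLE:        # blocked cell
            return 0
        if x == n - 1 and y == m - 1:       # reached the destination
            return 1
        return dfs(x + 1, y) + dfs(x, y + 1)

    return dfs(0, 0)
```

With memoization each of the N×M states is evaluated once, so the running time drops from exponential to O(N·M).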

Example: Grid Path Counting. There is also a "Bottom-Up" way to solve this problem. Consider the arrays Grid[x,y] and Memory[x,y]: it is possible to treat DFS not as a function but as an array, and evaluate the values of DFS[x,y] row by row, column by column.

Example: Grid Path Counting.

DFS[N,M] := 1;
For x := N-1 downto 1 Do {last column: only moving down is possible}
  If Grid[x,M] = IMPASSABLE Then DFS[x,M] := 0
  Else DFS[x,M] := DFS[x+1,M];
For y := M-1 downto 1 Do {last row: only moving right is possible}
  If Grid[N,y] = IMPASSABLE Then DFS[N,y] := 0
  Else DFS[N,y] := DFS[N,y+1];
For x := N-1 downto 1 Do
  For y := M-1 downto 1 Do
    If Grid[x,y] = IMPASSABLE Then DFS[x,y] := 0
    Else DFS[x,y] := DFS[x+1,y] + DFS[x,y+1];

Example: Grid Path Counting. It is very important to be able to describe a DP algorithm. A DP algorithm has two essential parts: a description of the "states", as well as the information associated with each state; and a rule describing the relationship between states.

Example: Grid Path Counting. In the above problem, the state is the position (x,y). The information associated with the state is the number of paths from that position to the destination. The relation between states is the formula:

DFS(x,y) = 1, if x = N and y = M
DFS(x,y) = 0, if Grid[x,y] is impassable
DFS(x,y) = DFS(x+1,y) + DFS(x,y+1), otherwise
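This recurrence can also be filled in bottom-up as a table, as in the Pascal version earlier. A Python sketch (again assuming, for illustration, a '#' encoding for impassable cells and 0-based indexing):

```python
IMPASSABLE = '#'

def count_paths_bottom_up(grid):
    """Fill dp[x][y] = number of down/right paths from (x, y)
    to the bottom-right cell, iterating from the destination back."""
    n, m = len(grid), len(grid[0])
    dp = [[0] * m for _ in range(n)]
    for x in range(n - 1, -1, -1):
        for y in range(m - 1, -1, -1):
            if grid[x][y] == IMPASSABLE:
                dp[x][y] = 0                    # impassable cell
            elif x == n - 1 and y == m - 1:
                dp[x][y] = 1                    # destination
            else:
                below = dp[x + 1][y] if x + 1 < n else 0
                right = dp[x][y + 1] if y + 1 < m else 0
                dp[x][y] = below + right
    return dp[0][0]
```

The bounds checks inside the loop replace the separate last-row and last-column initialization loops of the Pascal version; both approaches fill the same table.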

Example: Grid Path Counting. Sometimes you may consider the state as a "semantic" description, and the formula as a "mathematical" description. If you can design a good state representation, the formula will come out smoothly.

Variation on a theme. Consider a more advanced problem. The structure of the grid is the same as in the previous problem. Additional rule: at the beginning you are given B bombs, and you may use a bomb to detonate an impassable cell so that you can walk into it. Now count the number of paths again.

Variation on a theme. How do we solve this problem? Suggestion: you may assume either that a bomb is used when you enter an impassable cell, or when you exit one. This does not make any difference.

Variation on a theme. In the past, we would try to find properties of the problem that allow us to simplify it. Example: Dijkstra proved, for his algorithm, that by repeatedly taking the unvisited vertex with minimum distance to the source, we can determine the single-source shortest paths.

Variation on a theme. In Dynamic Programming, we often instead extend our state representation to deal with new situations. A straightforward extension is to let DP(x,y,b) represent the number of paths "when we want to move from (x,y) to (N,M) while having b bombs at hand".

Variation on a theme. Extension to the state transition formula:

DFS(x,y,b) = 1, if x = N and y = M
DFS(x,y,b) = 0, if Grid[x,y] is impassable and b = 0
DFS(x,y,b) = DFS(x+1,y,b-1) + DFS(x,y+1,b-1), if Grid[x,y] is impassable and b > 0
DFS(x,y,b) = DFS(x+1,y,b) + DFS(x,y+1,b), otherwise

In this formulation, the number of bombs is decremented upon leaving an impassable cell.
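A direct Python transcription of the extended recurrence DFS(x,y,b); as before, the '#' grid encoding and 0-based indexing are assumptions for illustration:

```python
from functools import lru_cache

IMPASSABLE = '#'

def count_paths_with_bombs(grid, bombs):
    """Count down/right paths where up to `bombs` impassable cells
    may be blasted through; the extra state component b tracks the
    number of bombs still at hand."""
    n, m = len(grid), len(grid[0])

    @lru_cache(maxsize=None)
    def dfs(x, y, b):
        if x >= n or y >= m:              # out of the grid
            return 0
        if x == n - 1 and y == m - 1:     # destination
            return 1
        if grid[x][y] == IMPASSABLE:
            if b == 0:
                return 0
            # a bomb is spent upon leaving the impassable cell
            return dfs(x + 1, y, b - 1) + dfs(x, y + 1, b - 1)
        return dfs(x + 1, y, b) + dfs(x, y + 1, b)

    return dfs(0, 0, bombs)
```

The state space grows from N×M to N×M×(B+1), which is the usual price of extending the state representation.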

More types of DP problems. Usually the task is not to count something, but to find an optimal solution to a certain problem. Examples: IOI 1994 – Triangle; IOI 1999 – Little Shop of Flowers; NOI 1998 – 免費餡餅.

Example: Longest Uprising Subsequence. Given a sequence of natural numbers, e.g. [1,5,3,4,8,7,6,7,9,10], with length N. Let the number at position I be S(I). Find the longest subsequence in which each number is smaller than the next. In this case, it is [1, 3, 4, 6, 7, 9, 10].

Example: Longest Uprising Subsequence. First consider a simpler problem: find the length of the longest uprising subsequence. A possible state representation is "the length of the longest uprising subsequence starting from the first number and ending at the I-th number" (the I-th number must be in the subsequence). Let this number be L(I).

Example: Longest Uprising Subsequence. Suppose the last element of the subsequence is at position I. The previous element must be at some position J in 1..I-1, and it must be less than S(I). The formula is thus:

L(I) = Max{ L(J) : 1 <= J <= I-1 and S(J) < S(I) } + 1

where the maximum is taken as 0 when no such J exists. The solution to the problem is the maximum among L(1) to L(N).

Example: Longest Uprising Subsequence. But we are not done yet! We have only found the length, not the actual sequence. An auxiliary array B(I) is required to store backtracking information, i.e. the position of the second-to-last element of the longest uprising subsequence ending at position I. The complete sequence can be found by printing S(I), S(B(I)), S(B(B(I))), … in reverse order.
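A Python sketch of the O(N²) algorithm, including the auxiliary backtracking array described above (the local names l and b mirror the slides' L(I) and B(I); 0-based indexing is used):

```python
def longest_rising_subsequence(s):
    """Return one longest strictly rising subsequence of s.
    l[i] = length of the longest rising subsequence ending at i;
    b[i] = index of the previous element, or -1 if there is none."""
    n = len(s)
    l = [1] * n
    b = [-1] * n
    for i in range(n):
        for j in range(i):
            if s[j] < s[i] and l[j] + 1 > l[i]:
                l[i] = l[j] + 1
                b[i] = j
    # find where the longest subsequence ends, then walk b[] backwards
    end = max(range(n), key=lambda i: l[i])
    seq = []
    while end != -1:
        seq.append(s[end])
        end = b[end]
    return seq[::-1]           # backtracking yields it in reverse
```

Running it on the slides' example sequence reconstructs the answer directly.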

Example: A corporation has $5 million to allocate to its three plants for possible expansion. Each plant has submitted a number of proposals on how it intends to spend the money. Each proposal gives the cost of the expansion (c) and the total revenue expected (r).

Example: Each plant will only be permitted to enact one of its proposals. The goal is to maximize the firm's revenues resulting from the allocation of the $5 million. We will assume that any of the $5 million we don't spend is lost.

Example: Variables: plant, proposal, cost, revenue, money. State: (plant, money). Formula, with i = plant, j = money, and (ck, rk) the cost and revenue of the k-th proposal of plant i:

DP[i,j] = max{ DP[i-1, j-ck] + rk } over all proposals k of plant i

For a plant with four proposals this expands to max{ DP[i-1,j-c1]+r1, DP[i-1,j-c2]+r2, DP[i-1,j-c3]+r3, DP[i-1,j-c4]+r4 }.
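A Python sketch of this allocation DP. The proposal table below is hypothetical sample data (the slides give no concrete numbers): money is measured in units of $1 million, and a (0, 0) "do nothing" proposal is included for each plant so that every plant enacts exactly one proposal.

```python
def max_revenue(proposals, budget):
    """proposals[i] is a list of (cost, revenue) pairs for plant i.
    dp[j] = best total revenue with exactly j units of money spent
    over the plants processed so far; unreachable states are -inf."""
    NEG = float('-inf')
    dp = [0] + [NEG] * budget
    for plant in proposals:
        new = [NEG] * (budget + 1)
        for j in range(budget + 1):
            if dp[j] == NEG:
                continue
            for cost, revenue in plant:
                if j + cost <= budget:
                    new[j + cost] = max(new[j + cost], dp[j] + revenue)
        dp = new
    return max(v for v in dp if v != NEG)

# Hypothetical proposal data: (cost, revenue) in $1M units.
plants = [
    [(0, 0), (1, 5), (2, 6)],   # plant 1
    [(0, 0), (2, 8), (3, 9)],   # plant 2
    [(0, 0), (1, 4)],           # plant 3
]
```

Here the plants play the role of the index i in DP[i,j]: each pass of the outer loop replaces the row DP[i-1,·] with DP[i,·], so only one row needs to be kept in memory.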