
Dynamic Programming (I)
HKOI Training 2007
Kelly Choi
19 May 2007

Acknowledgement: References and slides extracted from
1. [Advanced] Dynamic Programming, 24-04-2004, by cx
2. [Intermediate] Dynamic Programming, by Ng Tung

Prerequisite
► Concepts in Recurrence
► Basic Recursion
► Functions
► Divide-and-conquer

Grid Path Counting
► In an N×M grid, we want to move from the top-left cell to the bottom-right cell
► You may only move down or right
► Some cells may be impassable
► Example of one path: (figure omitted)

Naïve Algorithm: DFS

Function DFS(x, y: integer): integer;
Begin
  If (x <= N) and (y <= M) Then
  Begin
    If (x = N) and (y = M) Then
      DFS := 1                                  { base case: reached destination }
    Else If Grid[x,y] <> IMPASSABLE Then
      DFS := DFS(x+1, y) + DFS(x, y+1)
    Else
      DFS := 0;                                 { impassable cell }
  End
  Else
    DFS := 0;                                   { outside the grid }
End;

Time Complexity
► Time complexity of this algorithm:
  – Each call branches into two further calls
  – Time complexity is exponential
► Alternative way to estimate the runtime:
  – The base case is reached once for every path, so the running time is at least proportional to the number of paths
► Have we done anything redundant?

Slow...

DFS(1,1)
├─ DFS(1,2)
│   ├─ DFS(1,3)
│   │   ├─ DFS(1,4)
│   │   └─ DFS(2,3)
│   └─ DFS(2,2)
│       ├─ DFS(2,3)
│       └─ DFS(3,2)
└─ DFS(2,1)
    ├─ DFS(2,2)
    │   ├─ DFS(2,3)
    │   └─ DFS(3,2)
    └─ DFS(3,1)

Overlapping Subproblems
► Note that DFS(2,2) is called twice, DFS(2,3) three times, etc.
► But the work performed by these DFS(2,2) calls is unaffected by who called them, so the repeated calls are redundant
► We can memo(r)ize the values these calls return, and avoid the redundant work

Memo(r)ization
► Compute and store the value of DFS(x,y) the first time it is called. Afterwards, retrieve the value of DFS(x,y) from the table directly, without calling DFS(x+1,y) and DFS(x,y+1) again
► This is called recursion with memo(r)ization
► Time complexity is reduced to O(NM). (Why?)

Example: Grid Path Counting

{ Memory[x,y] is initialized to -1, meaning "not yet computed" }
Function DFS(x, y: integer): integer;
Begin
  If (x <= N) and (y <= M) Then
  Begin
    If Memory[x,y] = -1 Then
    Begin
      If (x = N) and (y = M) Then
        Memory[x,y] := 1
      Else If Grid[x,y] <> IMPASSABLE Then
        Memory[x,y] := DFS(x+1, y) + DFS(x, y+1)
      Else
        Memory[x,y] := 0;
    End;
    DFS := Memory[x,y];
  End
  Else
    DFS := 0;                                   { outside the grid }
End;

Bottom-up Approach
► Iterative, without the use of function calls
► Consider the arrays Grid[x,y] and Memory[x,y]
► Treat DFS(x,y) not as a function, but as an array DFS[x,y]
► Evaluate the values of DFS[x,y] row by row, column by column, working backwards from the destination

Example: Grid Path Counting

{ Base cases: in the last column we can only move down,
  in the last row we can only move right }
If Grid[N,M] = IMPASSABLE Then DFS[N,M] := 0 Else DFS[N,M] := 1;
For x := N-1 downto 1 Do
  If Grid[x,M] = IMPASSABLE Then DFS[x,M] := 0
  Else DFS[x,M] := DFS[x+1,M];
For y := M-1 downto 1 Do
  If Grid[N,y] = IMPASSABLE Then DFS[N,y] := 0
  Else DFS[N,y] := DFS[N,y+1];
{ Fill in the rest, from the destination backwards }
For x := N-1 downto 1 Do
  For y := M-1 downto 1 Do
    If Grid[x,y] = IMPASSABLE Then DFS[x,y] := 0
    Else DFS[x,y] := DFS[x+1,y] + DFS[x,y+1];

Computation Order
► Top-down approach
  – Recursive function calls with memo(r)ization
  – Easy to implement
  – Can avoid calculating values for impossible states
  – Further optimization possible
► Bottom-up approach
  – Easy to code once the order is identified
  – Avoids function calls
► We usually prefer the bottom-up approach
► But we need to see how states depend on other states to decide the order

Components of DP
► It is very important to be able to describe a DP algorithm
► Two essential components of DP:
  – State ( 狀態 ) and the state value
    - State – a description of the subproblem
    - State value – the value we need according to the subproblem, usually optimal in some sense, but it can be defined flexibly
  – A rule describing the relationship between states ( 狀態轉移方程 ), together with base cases

Grid Path Counting
► In the above problem, the state is the position (x,y)
► The state value is defined as
  – N(x,y) = number of paths from that position to the destination
► Formula for state transfer:

  N(x,y) = 1,                      if x = N and y = M
           0,                      if Grid[x,y] is impassable
           N(x+1,y) + N(x,y+1),    otherwise

Defining States
► Defining a good state is the key to solving a DP problem
  – The formula for state transfer comes up easily if we have a good definition for the state and the state value
  – The state definition affects the dimension of the problem and thus the time complexity
► The state should include all information relevant to the computation of the value. Recall what a “function” means in the mathematical sense.

Defining States
► In optimization problems, we usually define the state value to be the optimal value subject to a set of constraints (i.e. the state). Sometimes these constraints come directly from varying the constraints of the original problem, but sometimes there are better definitions.

Features of DP Problems
► Overlapping subproblems ( 重疊子問題 )
► Optimal substructure ( 最優子結構 )
► Memoryless property ( 無後效性 )
  – The future depends only on the present, not the past
  – i.e. the decisions leading to a subproblem do not affect how we solve that subproblem, once the subproblem is specified

Memo(r)ization vs DP
► Memo(r)ization – the method for speeding up computations by storing previously computed results in memory
► Dynamic Programming – the process of setting up and evaluating a recurrence relation efficiently by employing memo(r)ization

Triangle (IOI ’94)
► Given a triangle of numbers with N levels (figure omitted), find a path with maximum sum from the top to the bottom
► Only the sum is required

Triangle: Analysis
► Exhaustion?
  – How many paths are there in total? (Each path makes a binary choice at each of the N-1 levels below the top, so there are 2^(N-1) paths)
► Greedy?
  – It doesn’t work. Why?
► Graph problem?
  – Possible, but not simple enough
  – Fails to make use of the special shape of the graph

Triangle: Defining the State
► We need to find the maximum sum on a path from (1,1) to the bottom
► We can attempt to define the state by the cell (i,j), and the state value F[i][j] to be the maximum sum on a path from (i,j) to the bottom

Triangle: Formulation
► Let A[i][j] denote the number in the i-th row and j-th column, i.e. at cell (i,j)
► Let F[i][j] denote the maximum sum of the numbers on a path from (i,j) to the bottom
► Answer = F[1][1]
► The problem exhibits optimal substructure here

Triangle: Formulation
► Base cases: F[N][i] = A[N][i] for 1 ≤ i ≤ N
► State transfer (i < N, 1 ≤ j ≤ i):
  – F[i][j] = max{F[i+1][j], F[i+1][j+1]} + A[i][j]

Triangle: Computation Order
► F[i][*] depends on F[i+1][*]
► Compute F row by row, from bottom to top

Triangle: Algorithm

for i ← 1 to N do
  F[N][i] ← A[N][i]
for i ← N-1 downto 1 do
  for j ← 1 to i do
    F[i][j] ← max{F[i+1][j], F[i+1][j+1]} + A[i][j]
answer ← F[1][1]

Triangle: Complexity
► Number of array entries: N(N+1)/2
► Time for computing one entry: O(1)
► Thus the total time complexity is O(N²)
► Memory complexity: O(N²)

Triangle: Backtracking
► What if we are asked to output the path?
► We can store the optimal decision at each cell, i.e. whether to go down-left or down-right
► We then travel from (1,1) down to the bottom, following the stored decisions, to obtain the path
► This is called backtracking, “back” in the sense that we go in the reverse order of how we obtained the answer. A sketch follows.
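A minimal sketch in Pascal of one way to do this (F, A and N are from the earlier slides; the array Choice is an addition for illustration):

{ While filling the table, record which branch was better. }
{ Choice[i][j] = 0: the optimal path moves to (i+1, j);    }
{ Choice[i][j] = 1: it moves to (i+1, j+1).                }
For i := N-1 downto 1 Do
  For j := 1 to i Do
    If F[i+1][j] >= F[i+1][j+1] Then
    Begin
      F[i][j] := F[i+1][j] + A[i][j];
      Choice[i][j] := 0;
    End
    Else
    Begin
      F[i][j] := F[i+1][j+1] + A[i][j];
      Choice[i][j] := 1;
    End;

{ Backtrack: walk down from (1,1) following the stored decisions. }
j := 1;
For i := 1 to N Do
Begin
  Write(A[i][j], ' ');
  If i < N Then j := j + Choice[i][j];
End;
WriteLn;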

Triangle
► If you refer to past training notes, there is another definition of the state value: F[i][j] = the maximum sum of a path from (1,1) to (i,j)
► Both formulations give the same time complexity. But the second definition (from the top to the bottom) probably gives you some insights for Triangle II on the HKOI Online Judge. Think about it.

Fishing
► There are N fish ponds and you are going to spend M minutes on fishing
► Given the time-reward relationship of each pond, determine the time you should spend at each pond in order to get the biggest reward

  time / pond   Pond 1   Pond 2   Pond 3
  1 minute      0 fish   2 fish   1 fish
  2 minutes     3 fish   2 fish
  3 minutes     3 fish   4 fish

Fishing (example)
► For example, if N = 3, M = 3 and the relationships are as given in the previous slide, then the optimal schedule is
  – Pond 1: 2 minutes
  – Pond 2: 1 minute
  – Pond 3: 0 minutes
  – Reward: 5 fish

Fishing (analysis)
► You can think of yourself visiting ponds 1, 2, 3, ..., N in order
  – Why?
► Suppose in an optimal schedule you spend K minutes on fishing at pond 1
► Then you have M-K minutes to spend at the remaining N-1 ponds
  – The problem is reduced
► But how do we know what K is?
  – You don’t, so try all possible values!

Fishing (formulation)
► Let F[i][j] be the maximum reward you can get by spending j minutes at the first i ponds
► Base cases:
  – F[i][0] = 0 for 0 ≤ i ≤ N
  – F[0][j] = 0 for 0 ≤ j ≤ M
► State transfer (1 ≤ i ≤ N, 1 ≤ j ≤ M):
  – F[i][j] = max{ F[i-1][k] + R[i][j-k] : 0 ≤ k ≤ j }, where R[i][t] is the reward for spending t minutes at pond i (with R[i][0] = 0)
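As a sketch, the recurrence translates directly into a triple loop; the reward table R and the helper variable best are assumptions about how the input is stored, not from the original slides:

{ Sketch: assumes F[0..N][0..M] and R[1..N][0..M] are declared. }
For i := 0 to N Do F[i][0] := 0;
For j := 0 to M Do F[0][j] := 0;
For i := 1 to N Do
  For j := 1 to M Do
  Begin
    best := 0;
    For k := 0 to j Do            { k minutes at ponds 1..i-1, j-k at pond i }
      If F[i-1][k] + R[i][j-k] > best Then
        best := F[i-1][k] + R[i][j-k];
    F[i][j] := best;
  End;
{ Answer: F[N][M], computed in O(N * M^2) time. }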

Stamps
► Given an unlimited supply of $2, $3 and $5 stamps, find the number of different ways to form a denomination of $N
► Example:
  – N = 10
  – {2, 2, 2, 2, 2}, {2, 2, 3, 3}, {2, 3, 5}, {5, 5}
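One possible formulation (a sketch, not necessarily the intended one): let Ways[v] be the number of multisets of stamps totalling $v. Processing the denominations in an outer loop counts each multiset exactly once, so {2, 3, 5} and {3, 2, 5} are not double-counted:

Program Stamps;
Const
  Denom: array[1..3] of integer = (2, 3, 5);
  MAXV = 100;
Var
  Ways: array[0..MAXV] of int64;
  d, v, N: integer;
Begin
  N := 10;                              { target denomination }
  Ways[0] := 1;                         { one way to form $0: use nothing }
  For v := 1 to MAXV Do Ways[v] := 0;
  For d := 1 to 3 Do                    { outer loop over stamp types ...  }
    For v := Denom[d] to MAXV Do        { ... so each multiset counts once }
      Ways[v] := Ways[v] + Ways[v - Denom[d]];
  WriteLn(Ways[N]);                     { prints 4 for N = 10 }
End.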

Polygon Triangulation
► Given an N-sided convex polygon A, find a triangulation scheme with minimum total cut length

Polygon (analysis)
► Every edge of A belongs to exactly one triangle resulting from the triangulation
► We get two (or one) smaller polygons after deleting a triangle

Polygon (analysis)
► The order of cutting does not matter
► Optimal substructure
  – If the cutting scheme for A is optimal, then the cutting schemes for the two resulting sub-polygons B and C must also be optimal
(figure of A split into B and C omitted)
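One way to turn this into an interval DP (a sketch; the slides leave the formulation implicit): number the vertices 1..N and let F[i][j] be the minimum total cut length to triangulate the sub-polygon with vertices i, i+1, ..., j. The edge (i,j) lies in exactly one triangle (i,k,j), so

  F[i][j] = 0,                                                    if j - i < 2
  F[i][j] = min{ F[i][k] + F[k][j] + cut(i,k) + cut(k,j) : i < k < j },  otherwise

where cut(a,b) is the length of the diagonal (a,b), taken as 0 when a and b are adjacent on the polygon (that side needs no cut). The answer is F[1][N], computed over O(N²) states in O(N³) time.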

Best Flying Route ( 最佳航線問題 )
► Cities are numbered from 1 to N, from West to East
► There are some one-way flights connecting pairs of cities
► You want to fly from city 1 to city N, always flying to the East, then back to city 1, always flying to the West
► Each city, except city 1, can be visited at most once
► e.g. When N = 10,
  – You can fly in the sequence {1, 2, 5, 8, 10, 7, 3, 1} (if there are flights connecting each adjacent pair of them), but not in the sequence {1, 4, 2, 5, 10, 1}, nor {1, 3, 5, 10, 5, 1}, even if there are flights connecting each adjacent pair of them
► Assume at least one such route exists
► Maximize the number of cities you can visit within the above constraints

Best Flying Route
► Does the problem demonstrate optimal substructure? Are the states memoryless?
  – Unfortunately, after flying from 1 to N, how we can fly back to 1 depends on which cities we visited on the way to N
  – How can we formulate the problem so that subproblems relate cleanly?
► Time complexity and memory complexity?
► Outputting the route?
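One standard way out (a hint, not spelled out in these slides): view the round trip as two vertex-disjoint eastward paths from city 1 to city N, flown "in parallel". A state such as F[i][j] = the maximum number of cities on two such partial paths ending at cities i and j depends only on the endpoints i and j, not on which cities were visited earlier, so the memoryless property is restored; the answer is read off when both paths reach city N, giving O(N²) states.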

Review
► Memoryless property
► Optimal substructure and overlapping subproblems
► State representation and formula for state transfer
► Bottom-up approach vs top-down approach
► Tradeoff between memory and runtime

Common Models
► DP on rectangular arrays
► DP on trees
► DP on optimized states (ugly states)
  – This involves some state representation

Further Optimization
► Memory optimization – rolling array
► Runtime optimization – reducing the cost of state transfer
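For example, in Triangle the bottom-up pass reads only row i+1 while filling row i, so the 2-D table can collapse into a single 1-D array (a sketch; F here is the 1-D replacement for the F[i][j] of the earlier slides, overwritten in place):

{ Rolling-array sketch for Triangle: memory drops from O(N^2) to O(N). }
For j := 1 to N Do F[j] := A[N][j];     { start from the bottom row }
For i := N-1 downto 1 Do
  For j := 1 to i Do                    { overwrite row i+1 with row i;     }
    If F[j] >= F[j+1] Then              { F[j+1] is still row i+1 when read }
      F[j] := F[j] + A[i][j]
    Else
      F[j] := F[j+1] + A[i][j];
{ The answer is now in F[1]. }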

Challenge: The Bookcase
► N books, each with height h_i and thickness t_i
► You are required to make a bookcase with three (non-empty) shelves
► You would like to minimize the area of the bookcase:
  – the sum of the heights of the three shelves, multiplied by the maximum of the sums of thicknesses on each of the shelves

Challenge: The Bookcase
► Constraints:
  – 1 ≤ N ≤ 70
  – 150 ≤ h_i ≤ 300
  – 5 ≤ t_i ≤ 30
► Please do not ask others for the solution. You will waste the problem if you do not arrive at the solution yourself.

Practice Problems
► 1058 The Triangle II
► 3023 Amida De Go II
► 3042 Need for Speed
► 6000 Palindrome
► 6990 Little Shop of Flowers

References
1. 信息學奧林匹克教程 – 提高篇, 吳耀斌, 曹利國, 向期中, 朱全民 編著, 湖南師范大學出版社
2. 算法藝術與信息學競賽, 劉汝佳, 黃亮 著, 清華大學出版社