HKOI2009 Training (Advanced Group) (Reference: Powerpoint of Dynamic Programming II, HKOI Training 2005, by Liu Chi Man, cx)

Review
- Recurrence relation
- Dynamic programming
  - State & recurrence formula
  - Optimal substructure
  - Overlapping subproblems

Outline
- Dimension reduction (memory)
- "Ugly" optimal value functions
- DP on tree structures
- Two-person games

Dimension reduction
- Reduce the space complexity by one or more dimensions
- "Rolling" array
- Recall: Longest Common Subsequence (LCS)
- Base conditions and recurrence relation:
  - F(i,0) = 0 for all i
  - F(0,j) = 0 for all j
  - F(i,j) = F(i-1,j-1) + 1, if A[i] = B[j]
  - F(i,j) = max{ F(i-1,j), F(i,j-1) }, otherwise

Dimension Reduction
- Example: A = stxc, B = sicxtc
(Figure: the LCS table for A and B)

Dimension Reduction
- We may discard old table entries once they are no longer needed
- Instead of "rolling" the rows, we may "roll" the columns
  - Even less memory (5 × 2 entries)
- Space complexity: Θ(min{N, M})
- Drawback
  - Backtracking is difficult
  - That is, we can easily obtain the length of the LCS, but not the sequence itself
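The slides give only the recurrence; as a hedged sketch (not code from the presentation), the row-rolling version of the LCS DP can be written in C++ as follows, keeping just the previous and current rows of F:

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    // LCS length with a rolling array: only the previous row and the current
    // row of F are kept, so memory is O(|B|) instead of O(|A| * |B|).
    int lcsLength(const std::string& A, const std::string& B) {
        int n = A.size(), m = B.size();
        std::vector<int> prev(m + 1, 0), cur(m + 1, 0);   // F(i-1, *) and F(i, *)
        for (int i = 1; i <= n; ++i) {
            for (int j = 1; j <= m; ++j) {
                if (A[i - 1] == B[j - 1])
                    cur[j] = prev[j - 1] + 1;               // F(i,j) = F(i-1,j-1) + 1
                else
                    cur[j] = std::max(prev[j], cur[j - 1]); // max{ F(i-1,j), F(i,j-1) }
            }
            std::swap(prev, cur);                           // "roll" the rows
        }
        return prev[m];
    }

    int main() {
        std::cout << lcsLength("stxc", "sicxtc") << "\n";   // prints 3 ("stc")
        return 0;
    }

Rolling the columns instead (two columns of length min(|A|,|B|) + 1, or simply swapping A and B above) gives the Θ(min{N, M}) space bound mentioned on this slide.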

(Simplified) Cannoneer Base
- How many non-overlapping cross pieces can be put onto an H × W grid?
- W ≤ 10, H is arbitrary
- A cross piece: (figure of the piece)
- There may be patterns, but we just focus on a DP solution
(Figure: packing 8 cross pieces onto a 10 × 6 grid)

(Simplified) Cannoneer Base
- We place the pieces from top to bottom
  - Phase k: putting all pieces centered on row k-1
- In phase k, we only need to consider the occupied squares in rows k-2 and k-1
(Figure: the grid during phases 3 to 10, highlighting rows k-2, k-1 and k)

(Simplified) Cannoneer Base
- The optimal value function C is defined by:
  - C(k,S) = the max number of pieces after phase k, with rows k-1 and k giving the shape S
- How to represent a shape?
  - In a shape, each column can be empty, occupied in row k-1 only, or occupied in both rows
  - Use 2, 1, 0 to represent these 3 cases
  - A shape is a W-digit base-3 integer
  - For example, a shape with column digits 1 0 1 2 1 is encoded as 10121 (base 3) = 97 (base 10)

(Simplified) Cannoneer Base
- The recurrence relation is easy to construct
- Max possible number of states = H × 3^W
  - That's why W ≤ 10
- Cannoneer Base appeared in NOI2001
- Bugs Integrated, Inc. in CEOI2002 requires similar techniques
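The slides do not show code for the encoding. As a small hedged sketch (function names and the digit convention per column are my own assumptions, following the description above), the conversion between a shape's column digits and its base-3 code could look like this in C++:

    #include <iostream>
    #include <vector>

    // Encode a shape (one digit 0/1/2 per column, at most W = 10 columns) as a base-3 integer.
    int encodeShape(const std::vector<int>& cols) {
        int code = 0;
        for (int d : cols) code = code * 3 + d;   // leftmost column = most significant digit
        return code;
    }

    // Decode a base-3 code back into its W column digits.
    std::vector<int> decodeShape(int code, int W) {
        std::vector<int> cols(W);
        for (int i = W - 1; i >= 0; --i) { cols[i] = code % 3; code /= 3; }
        return cols;
    }

    int main() {
        std::cout << encodeShape({1, 0, 1, 2, 1}) << "\n";   // prints 97, i.e. 10121 in base 3
        return 0;
    }

With W ≤ 10 there are at most 3^10 = 59049 shapes, so a DP table with H × 3^W entries is affordable.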

Dynamic Programming on Tree Structures
- States may be (related to) nodes on a graph
  - Usually directed acyclic graphs
  - Topological order is the obvious order of recurrence evaluation
- Trees are special graphs
  - A lot of graph DP problems are based on trees
- Two major types:
  - Rooted tree DP
  - Unrooted tree DP

Rooted Tree Dynamic Programming
- Base cases at the leaves
- Recurrence at a node involves its child nodes only
- Solution
  - Evaluate the recurrence relation from the leaves (bottom) to the root (top)
  - Top-down implementations work well
  - Time complexity is often Θ(N), where N is the number of nodes

Unrooted Tree Dynamic Programming
- No explicit root is given
- Two cases
  - The problem can be transformed to a rooted one
  - It can't, so we try rooting every node
- Case 2 increases the time complexity by a factor of N
- Sometimes it is possible to root one node in O(N) time and each subsequent node in O(1), for O(N) time overall

Node Heights
- Given a rooted tree T
- The height of a node v in T is the maximum distance between v and a descendant of v
  - For example, all leaves have height = 0
- Find the heights of all nodes in T
- Notations
  - C(v) = the set of children of v
  - p(v) = the parent of v

Node Heights
- Optimal value function
  - H(v) = height of node v
- Base conditions
  - H(u) = 0 for all leaves u
- Recurrence
  - H(v) = max { H(x) | x ∈ C(v) } + 1
- Order of evaluation
  - All children must be evaluated before the node itself
  - Post-order

Node Heights
- Example
(Figure: a rooted tree with H(E) = H(F) = H(G) = H(H) = H(I) = 0, H(C) = H(D) = 1, H(B) = 2 and H(A) = 3)

Node Heights
- Time complexity analysis
- Naively
  - There are N nodes
  - A node may have up to N-1 children
  - Overall time complexity = O(N^2)
- A better bound
  - The H-value of a node v is used by at most one other node, namely p(v)
  - The total number of H-values inside all the "max{}"s is N-1
  - Overall time complexity = Θ(N)
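The slides describe the algorithm only in prose; a minimal C++ sketch of the post-order evaluation (variable names and the sample tree are my own, not from the slides) could be:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    // children[v] lists the children of node v in the rooted tree.
    std::vector<std::vector<int>> children;
    std::vector<int> H;   // H[v] = height of node v

    // Post-order DFS: every child is evaluated before its parent.
    void computeHeight(int v) {
        H[v] = 0;                                  // base case: a leaf has height 0
        for (int c : children[v]) {
            computeHeight(c);
            H[v] = std::max(H[v], H[c] + 1);       // H(v) = max{ H(x) | x in C(v) } + 1
        }
    }

    int main() {
        // A small tree: 0 is the root, 0 -> {1, 2}, 1 -> {3, 4}, 3 -> {5}.
        children = {{1, 2}, {3, 4}, {}, {5}, {}, {}};
        H.assign(children.size(), 0);
        computeHeight(0);
        for (std::size_t v = 0; v < H.size(); ++v)
            std::cout << "H(" << v << ") = " << H[v] << "\n";   // H(0)=3, H(1)=2, H(3)=1, rest 0
        return 0;
    }

Each H[c] is read exactly once, inside its parent's loop, which is the Θ(N) argument above.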

Treasure Hunt
- N treasures are hidden at the N nodes of a tree (unrooted)
- The treasure at node u has value v(u)
- You may not take away two treasures joined by an edge, otherwise missiles will fly to you
- Find the maximum value you can take away

Treasure Hunt
- Let's see if the problem can be transformed to a rooted one
- We arbitrarily pick a node, say r, as the root
- How to formulate?

Treasure Hunt
- Optimal value function
  - Z(u,b) = max value for the subtree rooted at u, where b tells whether the treasure at u is taken away
  - b = true or false
- Base conditions
  - Z(x,false) = 0 and Z(x,true) = v(x) for all leaves x
- Recurrence (sums taken over c ∈ C(u))
  - Z(u,true) = Σ Z(c,false) + v(u)
  - Z(u,false) = Σ max { Z(c,false), Z(c,true) }
- Answer = max { Z(r,false), Z(r,true) }

Treasure Hunt
- Example (values shown in squares)
(Figure: a tree annotated with Z(·,false) and Z(·,true) at every node; at the root, Z(r,false) = 20 and Z(r,true) = 27)

Treasure Hunt
- Our formulation does not exploit the properties of a tree root
- Moreover, the correctness of our formulation can be proven by optimal substructure
- Thus the unrooted-to-rooted transformation is correct
- Time complexity: Θ(N)
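A possible implementation of this formulation (a sketch with my own adjacency-list representation and sample data, not the slides' code): a single DFS from the arbitrary root computes Z(u,true) and Z(u,false) for every node u.

    #include <algorithm>
    #include <iostream>
    #include <vector>

    std::vector<std::vector<int>> adj;    // adjacency list of the (unrooted) tree
    std::vector<long long> val;           // val[u] = v(u), the treasure value at node u
    std::vector<long long> Ztake, Zskip;  // Z(u,true) and Z(u,false)

    // DFS from u with parent p; computes Z(u,*) from the children's values.
    void dfs(int u, int p) {
        Ztake[u] = val[u];   // take the treasure at u
        Zskip[u] = 0;        // skip the treasure at u
        for (int c : adj[u]) {
            if (c == p) continue;
            dfs(c, u);
            Ztake[u] += Zskip[c];                      // children of a taken node must be skipped
            Zskip[u] += std::max(Zskip[c], Ztake[c]);  // otherwise take the better of the two
        }
    }

    int main() {
        // A 5-node example: edges 0-1, 0-2, 1-3, 1-4 with values 3, 1, 4, 5, 2.
        adj = {{1, 2}, {0, 3, 4}, {0}, {1}, {1}};
        val = {3, 1, 4, 5, 2};
        Ztake.assign(adj.size(), 0);
        Zskip.assign(adj.size(), 0);
        dfs(0, -1);                                        // root arbitrarily at node 0
        std::cout << std::max(Ztake[0], Zskip[0]) << "\n"; // maximum value: 11 (take nodes 2, 3, 4)
        return 0;
    }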

Unrooted Tree DP – Basic Idea
- In rooted tree DP, a node asks for (requests) information from its children, and then provides (responds with) information to its parent
(Figure: requests flow down to the children, responses flow back up to the parent)

Unrooted Tree DP – Basic Idea
- In unrooted tree DP, a node also makes a request to its parent and sends a response to its children
- Imagine B is the root
  - A sends information about the "subtree" {A,C} to B
(Figure: the 5-node example tree with edges A-B, A-C, B-D, B-E; a request goes from B to A and a response from A back to B)

Unrooted Tree DP – Basic Idea
- Similarly we can root C, D, or E and get different request-response flows
- These flows are very similar
- The idea of unrooted tree DP is to root all nodes without resending all requests and responses every time
(Figure: the same tree rooted at two different nodes)

Unrooted Tree DP – Basic Idea
- Root A and do a complete flow
  - A knows about subtrees {B,D,E} and {C}
- Now B sends a request to A
- A sends a response to B telling what it knows about {A,C}
- B already knows about {D} and {E}
- Rooting of B completes

Unrooted Tree DP – Basic Idea
- Now let's root D
- D sends a request to B
- B knows about {A,C}, {D}, and {E}; combining {A,C}, {E} and B itself, B knows about {B,A,C,E}, and sends a response to D
- Rooting of D completes

Unrooted Tree DP – Basic Idea
- Rooting a new node requires only one request and one response if its parent already knows about all its subtrees (including the "imaginary" parent subtree)
- Further questions:
  - Fast computation of {B,A,C,E} from {A,C} and {E}? (rooting of D)
  - Fast computation of {B,A,C,E,D} from {A,C}, {E}, {D}? (rooting of B)

Shortest Rooted Tree
- Given an unrooted tree T, denote its rooted version with root r by T(r)
- Find a node v such that T(v) has the minimum height among all T(u), u ∈ T
  - The height of a tree = the height of its root
- Solution
  - Just root every node and find the minimum height
  - We know how to find the height of a tree
  - Trivially this is Θ(N^2)
- Now let's use what we have learnt

Shortest Rooted Tree
- Since parent/child relations are no longer fixed, we use slightly different notations
- N(v) = the set of neighbors of v
- H(v,u) = height of the subtree rooted at v if u is treated as the parent of v
- H(v,∅) = height of the whole tree if v is the root

Shortest Rooted Tree
- Root A, complete flow
- Height = 3
(Figure: a 6-node tree with H(F,D) = 0, H(E,B) = 0, H(D,B) = 1, H(C,A) = 0, H(B,A) = 2 and H(A,∅) = 3)

Shortest Rooted Tree
- Root B
  - Request: B asks A for H(A,B)
  - How can A give the answer in constant time?
(Figure: the same tree and H-values as before)

Shortest Rooted Tree
- Suppose now B asks A for H(A,B); how can A give the answer in constant time?
- Two cases
  - (1) B is the only largest subtree of A in T(A)
  - (2) B is not the only largest subtree, or B is not a largest subtree
(Figure: a tree rooted at A with children B to K; H(B,A)=7, H(C,A)=8, H(D,A)=1, H(E,A)=2, H(F,A)=9, H(G,A)=5, H(H,A)=4, H(I,A)=6, H(J,A)=3, H(K,A)=0, H(A,∅)=10)

Shortest Rooted Tree
- (1) B is the only largest subtree of A in T(A)
  - H(A,B) < H(A,∅)
  - H(A,B) depends on the second largest subtree
  - Trick: record the second largest subtree of A
- (2) B is not the only largest subtree, or B is not a largest subtree
  - H(A,B) = H(A,∅)

Shortest Rooted Tree
- To distinguish case (1) from case (2), we need to record the two largest subtrees of A
  - When?
    - When we evaluate H(A,∅)
- Back to our example

Shortest Rooted Tree
- Root B
  - Request: B asks A for H(A,B)
  - Response: 1
(Figure: the same tree annotated with each node's two largest subtrees, e.g. A: 1st = B, 2nd = C; B: 1st = D, 2nd = E; D: 1st = F, 2nd = ∅; the response gives H(A,B) = 1)

Shortest Rooted Tree
- Root B
  - H(B,∅) = 2 can be calculated in constant time
(Figure: as before, now also showing H(A,B) = 1 and H(B,∅) = 2)

Shortest Rooted Tree
- Root D
  - Request: D asks B for H(B,D)
  - Response: 2
  - H(D,∅) = 3
(Figure: as before, now also showing H(B,D) = 2 and H(D,∅) = 3)

Shortest Rooted Tree
- Root F, E, and C in the same fashion
- In general, root the nodes in preorder
- Time complexity analysis
  - Rooting A – Θ(N)
  - Rooting each subsequent node – O(1)
  - Overall – Θ(N)
- The O(1) is crucial for the linearity of our algorithm
  - If rooting a new node cannot be done fast, unrooted tree DP may not improve the running time
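One way to implement the two passes in C++ (a sketch under my own naming, not the slides' code): down[v] plays the role of H(v, parent of v), best1/best2 record the two largest child subtrees as suggested above, and up[v] is the height contribution through the parent, so that H(v,∅) = max(down[v], up[v]). The tree in main is the 6-node example as reconstructed from the H-values above.

    #include <algorithm>
    #include <iostream>
    #include <vector>

    std::vector<std::vector<int>> adj;   // adjacency list of the unrooted tree
    std::vector<int> down;               // down[v] = H(v, parent of v): height of v's subtree
    std::vector<int> up;                 // up[v] = longest path from v that starts with the edge to its parent
    std::vector<int> best1, best2;       // two largest values of down[c] + 1 over children c of v

    // First pass (root the tree once): post-order computation of down[], best1[], best2[].
    void dfsDown(int v, int p) {
        down[v] = 0; best1[v] = 0; best2[v] = 0;
        for (int c : adj[v]) {
            if (c == p) continue;
            dfsDown(c, v);
            int h = down[c] + 1;
            if (h > best1[v]) { best2[v] = best1[v]; best1[v] = h; }
            else if (h > best2[v]) best2[v] = h;
            down[v] = std::max(down[v], h);
        }
    }

    // Second pass (re-root in preorder): each new root gets its answer in O(1).
    void dfsUp(int v, int p) {
        for (int c : adj[v]) {
            if (c == p) continue;
            // Longest path from c going up: to v, then either further up or into v's best *other* subtree.
            int other = (down[c] + 1 == best1[v]) ? best2[v] : best1[v];  // the "second largest" trick
            up[c] = 1 + std::max(up[v], other);
            dfsUp(c, v);
        }
    }

    int main() {
        // The example tree: A=0, B=1, C=2, D=3, E=4, F=5; edges A-B, A-C, B-D, B-E, D-F.
        adj = {{1, 2}, {0, 3, 4}, {0}, {1, 5}, {1}, {3}};
        int n = adj.size();
        down.assign(n, 0); up.assign(n, 0); best1.assign(n, 0); best2.assign(n, 0);
        dfsDown(0, -1);
        up[0] = 0;
        dfsUp(0, -1);
        for (int v = 0; v < n; ++v)   // H(v, empty) = height of the tree rooted at v
            std::cout << "H(" << v << ") = " << std::max(down[v], up[v]) << "\n";
        return 0;
    }

On that tree the program prints heights 3, 2, 4, 3, 3, 4 for A to F respectively, so rooting at B gives the shortest rooted tree, consistent with H(B,∅) = 2 above.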

Two-person Games
- Often appear in competitions as interactive tasks
  - Playing against the judge
- Most of them can be solved by the Minimax method

Game Tree
- A (finite or infinite) rooted tree showing the moves of a game play
(Figure: part of a Tic-Tac-Toe game tree)

Game Tree
- This is a game
- The boxes at the bottom show your gain (your opponent's loss)
- Your opponent is clever
- How should you play to maximize your gain?
(Figure: a game tree whose levels alternate between your turn and her turn, with the gains shown in end-of-game boxes at the bottom)

Minimax
- You assume that your opponent always tries to minimize her loss (i.e. minimize your gain)
- So your opponent always takes the move that minimizes your gain
- Of course, you always take the move that maximizes your gain
(Figure: the same game tree with minimax values propagated up through the "your turn" and "her turn" levels)

Minimax
- Efficient? Only if the tree is small
- In fact the game tree may be an expanded version of a directed acyclic graph
  - Overlapping subproblems → memo(r)ization
(Figure: a small DAG on nodes A, B, C, D and the game tree obtained by expanding it; node D appears several times in the tree)
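The slides stop at the idea, so the following is only a generic toy illustration (my own example, not from the presentation): two players alternately take a number from either end of a row, each maximizing their own total. Minimax over the remaining interval, memoized because the same interval is reached through many different move orders.

    #include <algorithm>
    #include <iostream>
    #include <vector>

    std::vector<int> a;                        // the row of numbers
    std::vector<std::vector<long long>> memo;  // memo[i][j] = best score difference on a[i..j]
    std::vector<std::vector<bool>> seen;

    // Minimax with memoization: returns (current player's total) - (opponent's total)
    // when both sides play optimally on the interval a[i..j].
    long long solve(int i, int j) {
        if (i > j) return 0;                   // no numbers left
        if (seen[i][j]) return memo[i][j];
        // Taking a[i] or a[j] hands the rest to the opponent, whose optimal result is
        // subtracted: maximizing our difference is the same as minimizing theirs.
        long long best = std::max(a[i] - solve(i + 1, j), a[j] - solve(i, j - 1));
        seen[i][j] = true;
        return memo[i][j] = best;
    }

    int main() {
        a = {3, 9, 1, 2};
        int n = a.size();
        memo.assign(n, std::vector<long long>(n, 0));
        seen.assign(n, std::vector<bool>(n, false));
        std::cout << solve(0, n - 1) << "\n";  // prints 7: the first player can finish 11 to 4
        return 0;
    }

Here solve(i,j) returns the score difference the player to move can force, which folds the alternating max and min levels of the game tree into a single function.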

Past Problems
- IOI
  - 2001 Ioiwari (game), Score (game), Twofive (ugly)
  - 2005 Rivers (tree)
- NOI
  - 2001 炮兵陣地 (ugly), 2002 貪吃的九頭龍 (tree), 2003 逃學的小孩 (tree), 2005 聰聰與可可
- IOI/NOI TFT
  - 2004 A Bomb Too Far (tree)
- CEOI
  - 2002 Bugs (ugly), 2003 Pearl (game)
- Balkan OI
  - 2003 Tribe (tree)
- Baltic OI
  - 2003 Gems (tree)
- On HKOI Judge
  - 1074 Christmas Tree