CSCI 235, Spring 2019, Lecture 25 Dynamic Programming


Dynamic Programming Dynamic programming is a problem-solving technique that, like divide and conquer, solves problems by dividing them into subproblems. Dynamic programming is used when the subproblems are not independent, i.e., when they share sub-subproblems. In that case, divide and conquer may do more work than necessary, because it solves the same subproblem multiple times. Dynamic programming solves each subproblem once and stores the result in a table so that it can be rapidly retrieved if needed again. Example: Fibonacci
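The Fibonacci example can be sketched in code. This is a minimal illustration (not from the slides): the naive recursion re-solves shared subproblems exponentially often, while the tabular version solves each subproblem once.

```python
# Naive divide and conquer: fib_naive(n-1) and fib_naive(n-2) share
# subproblems, so the same values are recomputed exponentially often.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Dynamic programming: solve each subproblem once, bottom up,
# storing results in a table for O(n) total work.
def fib_dp(n):
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]
```

Both return the same values; only the amount of repeated work differs.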

When do we use Dynamic Programming? Dynamic programming solves each subproblem once and stores the solution in a table, so the solution can be looked up when it is needed again. It is often used for optimization problems: problems with many possible solutions, for which we want to find an optimal (best) solution. (There may be more than one optimal solution.) Applications: Control (cruise control, robotics, thermostats). Flight control (balancing factors that oppose one another, e.g., maximizing accuracy while minimizing time). Time sharing (scheduling users and jobs to maximize CPU usage). Other types of scheduling.

Development of a Dynamic Programming Algorithm 1. Characterize the structure of an optimal solution. 2. Recursively define the value of the optimal solution. Like divide and conquer, divide the problem into 2 or more optimal parts recursively; this helps to define what the solution will look like. 3. Compute the value of the optimal solution from the bottom up (starting with the smallest subproblems). 4. Construct the optimal solution for the entire problem from the computed values of the smaller subproblems.

Matrices A matrix is a rectangular array of quantities. (The slide shows two examples: a 2x3 matrix and a general m x n matrix.)

Matrix Multiplication To multiply two matrices, multiply the rows of the first matrix by the columns of the second matrix. In general, if A is p x q and B is q x r, then C = AB is p x r, with entry C[i][j] equal to the sum over k = 1..q of A[i][k]·B[k][j]. (The slide's worked example is not reproduced here.)
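The row-by-column definition above can be written directly as a triple loop. A small sketch (not part of the slides), with matrices represented as lists of rows:

```python
def mat_mul(A, B):
    """Multiply A (p x q) by B (q x r) using the definition:
    C[i][j] = sum over k of A[i][k] * B[k][j]."""
    p, q, r = len(A), len(B), len(B[0])
    assert len(A[0]) == q, "inner dimensions must agree"
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
    return C
```

Note the innermost loop runs q times for each of the p·r entries of C, giving the p·q·r scalar multiplications counted later in the lecture.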

Matrix Chain Multiplication Problem: Given a chain <A1, A2, A3, ..., An> of n matrices, find their product A1A2A3...An. Note that the order of multiplication can yield different running times, even though the result is the same: e.g., (A1A2)A3 may have a different running time than A1(A2A3). We measure running time by the number of scalar multiplications required.

Example We will examine this in class.

The problem Problem: Find the best way to multiply n matrices, where "best" means the order that uses the minimum number of scalar multiplications. Practical use: computer graphics (long chains of matrices to multiply). Formally: Given a chain <A1, A2, A3, ..., An> of n matrices, fully parenthesize the product A1A2A3...An in a way that minimizes the number of scalar multiplications.

Examining All Possible Solutions One approach to this problem would be to examine all possible parenthesizations and choose the one that gives the minimum number of multiplications. It can be shown that the number of such solutions for a chain of n matrices is the (n-1)st Catalan number, C(n-1) = (1/n)·binom(2(n-1), n-1), which grows as Omega(4^n / n^(3/2)). Because of this exponential growth, it would take a prohibitive amount of time to do an exhaustive search of all the possibilities.

Number of Scalar Multiplications Multiplying two matrices A1 and A2, where A1 is p x q and A2 is q x r, takes p·q·r scalar multiplications. Example: multiplying a 2x3 matrix by a 3x1 matrix takes 2·3·1 = 6 scalar multiplications.
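The pqr rule makes it easy to see how much the parenthesization matters. A small sketch with hypothetical dimensions (10x100, 100x5, and 5x50 are illustrative choices, not from the slides):

```python
# Hypothetical chain: A1 is 10x100, A2 is 100x5, A3 is 5x50.
p0, p1, p2, p3 = 10, 100, 5, 50

# (A1 A2) A3: a 10x100 by 100x5 product (10*100*5 multiplications),
# then the 10x5 result times the 5x50 matrix (10*5*50 more).
cost_left = p0 * p1 * p2 + p0 * p2 * p3

# A1 (A2 A3): a 100x5 by 5x50 product (100*5*50 multiplications),
# then the 10x100 matrix times the 100x50 result (10*100*50 more).
cost_right = p1 * p2 * p3 + p0 * p1 * p3

print(cost_left, cost_right)   # 7500 vs 75000: a factor-of-10 difference
```

Both orders compute the same matrix, but one does ten times the scalar work of the other.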

1. Characterize structure of optimal solution Notation: Ai..j = AiAi+1Ai+2...Aj. An optimal solution splits A1..n as (A1A2...Ak)(Ak+1Ak+2...An) for some k with 1 <= k < n. Cost = cost of computing A1..k + cost of computing Ak+1..n + cost to multiply the two results together. For the overall solution to be optimal, we must first find optimal solutions to the subproblems A1..k and Ak+1..n. (Why must they be optimal? If a cheaper way to compute A1..k existed, substituting it would yield a cheaper overall solution, contradicting optimality.) Example: A1A2A3A4
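The structure above leads to the standard bottom-up table for matrix chain multiplication (this sketch anticipates steps 2 and 3 of the methodology; the full development presumably continues in the next lecture). Chains are described by a dimension list p, where matrix Ai is p[i-1] x p[i]:

```python
import math

def matrix_chain_order(p):
    """Minimum scalar multiplications for the chain A1..An, where
    Ai is p[i-1] x p[i].  m[i][j] holds the optimal cost of Ai..j."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length of Ai..j
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = math.inf
            for k in range(i, j):             # split as (Ai..k)(Ak+1..j)
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j] = cost
    return m
```

Each subproblem Ai..j is solved exactly once, smallest chains first, so its optimal cost is already in the table when a larger chain needs it.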