Matrix Chain Multiplication

Presentation transcript:

Matrix Chain Multiplication Given some matrices to multiply, determine the best order to multiply them so you minimize the number of single-element multiplications; that is, determine the way the matrices are parenthesized. First off, it should be noted that matrix multiplication is associative, but not commutative. Since it is associative, we always have ((AB)(CD)) = (A(B(CD))), or any other grouping, as long as the matrices are in the same consecutive order. BUT NOT: ((AB)(CD)) = ((BA)(DC))
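The two claims above can be checked directly. A minimal sketch in Python (not from the slides; the `matmul` helper and the 2x2 example matrices are illustrative) that verifies associativity, and the failure of commutativity, on small matrices:

```python
def matmul(X, Y):
    # multiply two matrices stored as lists of rows
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[9, 10], [11, 12]]

left = matmul(matmul(A, B), C)    # (AB)C
right = matmul(A, matmul(B, C))   # A(BC)
assert left == right              # associative: same product either way
assert matmul(A, B) != matmul(B, A)  # but not commutative
```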

Matrix Chain Multiplication It may appear that the amount of work done won’t change if you change the parenthesization of the expression, but we can prove that is not the case! Let us use the following example: Let A be a 2x10 matrix Let B be a 10x50 matrix Let C be a 50x20 matrix But FIRST, let’s review some matrix multiplication rules…

Matrix Chain Multiplication Let’s get back to our example. We will show that the way we group the matrices when multiplying A, B, and C matters: Let A be a 2x10 matrix Let B be a 10x50 matrix Let C be a 50x20 matrix Consider computing A(BC): # multiplications for (BC) = 10x50x20 = 10000, creating a 10x20 answer matrix. # multiplications for A(BC) = 2x10x20 = 400. Total multiplications = 10000 + 400 = 10400. Consider computing (AB)C: # multiplications for (AB) = 2x10x50 = 1000, creating a 2x50 answer matrix. # multiplications for (AB)C = 2x50x20 = 2000. Total multiplications = 1000 + 2000 = 3000. So there IS a difference between the number of multiplications for the different groupings: 3000 versus 10400.
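The arithmetic above can be sketched in a few lines of Python (the `mult_cost` helper is illustrative, not from the slides). Multiplying a p x q matrix by a q x r matrix takes p*q*r scalar multiplications:

```python
def mult_cost(p, q, r):
    # cost of multiplying a p x q matrix by a q x r matrix,
    # producing a p x r result
    return p * q * r

# A: 2x10, B: 10x50, C: 50x20
a_bc = mult_cost(10, 50, 20) + mult_cost(2, 10, 20)  # BC first, then A(BC)
ab_c = mult_cost(2, 10, 50) + mult_cost(2, 50, 20)   # AB first, then (AB)C
assert a_bc == 10400
assert ab_c == 3000
```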

Matrix Chain Multiplication Thus, our goal today is: Given a chain of matrices to multiply, determine the minimum number of multiplications necessary to compute the product.

Matrix Chain Multiplication Formal definition of the problem: Let A = A1 A2 ... An. Let Mi,j denote the minimal number of multiplications necessary to find the product Ai Ai+1 ... Aj, and let pi-1 x pi denote the dimensions of matrix Ai. We must determine the minimal number of multiplications necessary (M1,n) to find A, assuming that we simply do each single matrix multiplication in the standard way.

Matrix Chain Multiplication The key to solving this problem is noticing the sub-problem optimality condition: If a particular parenthesization of the whole product is optimal, then any sub-parenthesization in that product is optimal as well. Say What? If (A (B ((CD) (EF)) ) ) is optimal Then (B ((CD) (EF)) ) is optimal as well Proof on the next slide…

Matrix Chain Multiplication Assume that we are calculating ABCDEF and that the following parenthesization is optimal: (A (B ((CD) (EF)) ) ) Then it is necessarily the case that (B ((CD) (EF)) ) is the optimal parenthesization of BCDEF. Why is this? Because if it wasn't, and say ( ((BC) (DE)) F) was better, then it would also follow that (A ( ((BC) (DE)) F) ) was better than (A (B ((CD) (EF)) ) ), contradicting our assumption that (A (B ((CD) (EF)) ) ) was optimal.

Matrix Chain Multiplication Our final multiplication will ALWAYS be of the form (A1 A2 ... Ak) × (Ak+1 Ak+2 ... An) In essence, there is exactly one value of k at which we should "split" our work into two separate subproblems so that we get an optimal result. Here is the list of cases to choose from: (A1) × (A2 A3 ... An) (A1 A2) × (A3 A4 ... An) (A1 A2 A3) × (A4 A5 ... An) ... (A1 A2 ... An-2) × (An-1 An) (A1 A2 ... An-1) × (An) Basically, count the number of multiplications in each of these choices and pick the minimum. One other point to notice is that you must also account for the minimum number of multiplications within each of the two products.

Matrix Chain Multiplication Consider multiplying these 4 matrices: A: 2x4 B: 4x2 C: 2x3 D: 3x1 1. (A)(BCD) - This is a 2x4 multiplied by a 4x1, so 2x4x1 = 8 multiplications, plus whatever work it will take to multiply (BCD). 2. (AB)(CD) - This is a 2x2 multiplied by a 2x1, so 2x2x1 = 4 multiplications, plus whatever work it will take to multiply (AB) and (CD). 3. (ABC)(D) - This is a 2x3 multiplied by a 3x1, so 2x3x1 = 6 multiplications, plus whatever work it will take to multiply (ABC).
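The "plus whatever work it will take" in each case is itself the same problem on a shorter chain, which suggests a recursive solution. A minimal memoized sketch in Python (the `dims` encoding and `min_mults` name are illustrative): `dims[i-1] x dims[i]` are the dimensions of matrix i, so `[2, 4, 2, 3, 1]` encodes the four matrices above.

```python
from functools import lru_cache

dims = [2, 4, 2, 3, 1]  # A: 2x4, B: 4x2, C: 2x3, D: 3x1

@lru_cache(maxsize=None)
def min_mults(i, j):
    # fewest scalar multiplications to compute A_i ... A_j (1-indexed)
    if i == j:
        return 0  # a single matrix needs no multiplications
    return min(min_mults(i, k) + min_mults(k + 1, j)
               + dims[i - 1] * dims[k] * dims[j]
               for k in range(i, j))

assert min_mults(1, 4) == 22  # best is (A)(BCD): 8 + 14
```

Here option 1 wins: 8 multiplications for the outer product plus 14 for (BCD), for 22 total.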

Matrix Chain Multiplication This leads us to the following recursive formula: Mi,j = minimum value of Mi,k + Mk+1,j + pi-1pkpj, over all valid values of k (i ≤ k < j). Now let’s turn this recursive formula into a dynamic programming solution. Which sub-problems must be solved first? Clearly it's necessary to solve the smaller problems before the larger ones. In particular, we need to know Mi,i+1, the number of multiplications to multiply any adjacent pair of matrices, before we move on to larger tasks. Similarly, the next task we want to solve is finding all the values of the form Mi,i+2, then Mi,i+3, etc.
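The bottom-up order described above (all adjacent pairs first, then chains of length 3, and so on) can be sketched in Python as follows (the function name and table layout are illustrative):

```python
def matrix_chain_order(p):
    # p is the dimension list: matrix i is p[i-1] x p[i], for i = 1..n
    n = len(p) - 1
    # m[i][j] = fewest scalar mults for A_i..A_j (1-indexed; index 0 unused)
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # chain length: 2, 3, ..., n
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]

assert matrix_chain_order([2, 10, 50, 20]) == 3000  # the A, B, C example
```

Each entry m[i][j] is filled only after every shorter chain inside it is known, which is exactly the ordering the slide calls for; the table has O(n^2) entries and each takes O(n) work, for O(n^3) total.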

Matrix Chain Multiplication Basically, we’re checking different places to “split” our matrices by checking different values of k and seeing if they improve our current minimum value.
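If we also record which k won each comparison, we can recover the optimal parenthesization itself, not just its cost. A sketch in Python (the split table s and the `best_parens` name are illustrative additions, not from the slides):

```python
def best_parens(p):
    # returns (min multiplications, parenthesized expression) for the chain
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]  # s[i][j] = best split point k
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j], s[i][j] = min(
                (m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j], k)
                for k in range(i, j))

    def expr(i, j):
        # rebuild the parenthesization by following the recorded splits
        if i == j:
            return "A%d" % i
        return "(%s %s)" % (expr(i, s[i][j]), expr(s[i][j] + 1, j))

    return m[1][n], expr(1, n)

cost, parens = best_parens([2, 4, 2, 3, 1])
assert cost == 22
assert parens == "(A1 (A2 (A3 A4)))"
```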