Prelude
A pattern of activation in a NN is a vector. A set of connection weights between units is a matrix. Vectors and matrices have well-understood mathematical and geometric properties, which makes them very useful for understanding the properties of NNs.
Operations on Vectors and Matrices
Outline
1) The Players: Scalars, Vectors and Matrices
2) Vectors, matrices and neural nets
3) Geometric Analysis of Vectors
4) Multiplying Vectors by Scalars
5) Multiplying Vectors by Vectors
   a) The inner product (produces a scalar)
   b) The outer product (produces a matrix)
6) Multiplying Vectors by Matrices
7) Multiplying Matrices by Matrices
Scalars, Vectors and Matrices
1) Scalar: A single number (integer or real)
2) Vector: An ordered list of scalars
Row vectors:
[ 1 2 3 4 5 ]   [ 0.4 1.2 0.07 8.4 12.3 ]   [ 12 10 ]   [ 2 ]
Order matters: [ 12 10 ] ≠ [ 10 12 ]
Column vectors:
[ 1; 2; 3; 4; 5 ]   [ 1.5; 0.3; 6.2; 12.0; 17.1 ]
Scalars, Vectors and Matrices
3) Matrix: An ordered list of vectors. Its rows are row vectors and its columns are column vectors:

M = [ 1 2 6 1 7 8
      2 5 9 0 0 3
      3 1 5 7 6 3
      2 7 9 3 3 1 ]

Matrices are indexed (row, column):
M(1,3) = 6 (row 1, column 3)
M(3,1) = 3 (row 3, column 1)
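The slides use Matlab's 1-based (row, column) indexing. As an illustrative sketch in NumPy (which indexes from 0), the same matrix and lookups look like this:

```python
import numpy as np

# The 4 x 6 matrix from the slide.
M = np.array([[1, 2, 6, 1, 7, 8],
              [2, 5, 9, 0, 0, 3],
              [3, 1, 5, 7, 6, 3],
              [2, 7, 9, 3, 3, 1]])

# Matlab indexes (row, column) starting at 1: M(1,3) = 6.
# NumPy indexes [row, column] starting at 0, so the same element is M[0, 2].
print(M[0, 2])  # row 1, column 3 -> 6
print(M[2, 0])  # row 3, column 1 -> 3
print(M.shape)  # (4, 6): 4 rows, 6 columns
```

The off-by-one between the two conventions is a classic source of bugs when translating between Matlab and other languages.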
Variable Naming Conventions
1) Scalars: Lowercase, italics: x, y, z, …
2) Vectors: Lowercase, bold: u, v, w, …
3) Matrices: Uppercase, bold: M, N, O, …
4) Constants: Greek letters: α, β, γ, λ, …
Transposing Vectors
If u is a row vector, u = [ 1 2 3 4 5 ], then u' ("u-transpose") is the column vector
u' = [ 1; 2; 3; 4; 5 ]
…and vice-versa.
Why in the world would I care?? Answer: It'll matter when we come to vector multiplication.
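As a small NumPy sketch of the same idea (using a 2-D array so the row/column distinction is explicit):

```python
import numpy as np

# u as a 1 x 5 row vector (note the double brackets: a 2-D array).
u = np.array([[1, 2, 3, 4, 5]])

# u' ("u-transpose") is a 5 x 1 column vector.
u_t = u.T

print(u.shape)    # (1, 5)
print(u_t.shape)  # (5, 1)

# Transposing twice gets the original row vector back.
print(np.array_equal(u_t.T, u))  # True
```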
Vectors, Matrices & Neural Nets
Consider a two-layer network: input units j (j1, j2, j3) connect to output units i (i1, i2) through connection weights w_ij (w_11, w_12, w_13, w_21, w_22, w_23).
The activations of the input nodes can be represented as a 3-dimensional vector, e.g. j = [ 0.2 0.9 0.5 ]
The activations of the output nodes can be represented as a 2-dimensional vector, e.g. i = [ 1.0 0.0 ]
The weights leading into any one output node can be represented as a 3-dimensional vector, e.g. w_1j = [ 0.1 1.0 0.2 ]
The complete set of weights can be represented as a 2 (row) X 3 (column) matrix:
W = [ 0.1 1.0 0.2
      1.0 0.1 -0.9 ]
Why in the world would I care??
1. Because the mathematics of vectors and matrices is well understood.
2. Because vectors have a very useful geometric interpretation.
3. Because Matlab "thinks" in vectors and matrices.
4. Because you are going to have to learn to think in Matlab.
Geometric Analysis of Vectors
Dimensionality: The number of numbers in a vector. An n-dimensional vector can be pictured as a point (or an arrow from the origin) in n-dimensional space.
Implications for neural networks:
Auto-associative nets: The state of activation at time t is a vector (a point in a space). As activations change, the vector moves through that space. This will prove invaluable in understanding Hopfield nets.
Layered nets ("perceptrons"): Input vectors activate output vectors, so points in input space map to points in output space. This will prove invaluable in understanding perceptrons and back-propagation learning.
Multiplying a Vector by a Scalar
[ 5 4 ] * 2 = [ 10 8 ]
This lengthens the vector but does not change its orientation.

Adding a Vector to a Scalar
[ 5 4 ] + 2 is illegal: a scalar cannot be added to a vector.
Adding a Vector to a Vector
[ 5 4 ] + [ 3 6 ] = [ 8 10 ]
Geometrically, the two vectors form a parallelogram, and their sum is its diagonal.
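The two legal operations above translate directly to NumPy. One caveat worth flagging: the slides call adding a scalar to a vector illegal in the linear-algebra sense, but NumPy (like Matlab) will happily broadcast it, so the error goes uncaught in code.

```python
import numpy as np

v = np.array([5, 4])

# Multiplying by a scalar lengthens the vector without changing its orientation.
print(v * 2)                 # [10  8]

# Adding two vectors adds them element by element
# (the diagonal of the parallelogram they form).
print(v + np.array([3, 6]))  # [ 8 10]

# Caution: strictly speaking v + 2 is undefined, but NumPy
# broadcasts it to [7 6] rather than raising an error.
```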
Multiplying a Vector by a Vector 1: The Inner Product (aka "Dot Product")
If u and v are both row vectors of the same dimensionality,
u = [ 1 2 3 ]   v = [ 4 5 6 ],
then the product u · v is undefined. Huh?? Why?? This is where transposing vectors starts to matter.
The Mantra: "Rows by Columns." Multiply rows (or row vectors) by columns (or column vectors).
u = [ 1 2 3 ]   v' = [ 4; 5; 6 ]
Imagine rotating the row vector into a (pseudo) column vector, then multiply corresponding elements and add up the products:
u · v' = (1 * 4) + (2 * 5) + (3 * 6) = 4 + 10 + 18 = 32
The inner product is commutative, as long as you transpose correctly:
u · v' = 32 and v · u' = 32
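The same multiply-and-add computation, as a NumPy sketch:

```python
import numpy as np

u = np.array([1, 2, 3])
v = np.array([4, 5, 6])

# Multiply corresponding elements and add up the products:
# (1*4) + (2*5) + (3*6) = 4 + 10 + 18 = 32
print(np.dot(u, v))  # 32

# Commutative once the shapes line up: v . u' gives the same scalar.
print(np.dot(v, u))  # 32
```

For 1-D arrays NumPy handles the transpose bookkeeping for you; with explicit 2-D row/column vectors you would need to transpose one operand yourself.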
The Inner ("Dot") Product
In scalar notation: u · v' = Σ_i u_i v_i
Remind you of anything? It is exactly the net input to a unit: net_i = Σ_j w_ij a_j
In vector notation: net_i = w_i · a
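The net-input rule can be sketched directly as a dot product, using the weight vector w_1j = [ 0.1 1.0 0.2 ] and input activations j = [ 0.2 0.9 0.5 ] from the network shown earlier:

```python
import numpy as np

# Net input to output unit i: net_i = sum_j w_ij * a_j = w_i . a
w_i = np.array([0.1, 1.0, 0.2])  # weights into output unit i
a = np.array([0.2, 0.9, 0.5])    # activations of the input units

net_i = np.dot(w_i, a)           # 0.02 + 0.9 + 0.1 = 1.02
print(net_i)
```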
What Does the Dot Product "Mean"?
Consider u · u'. For u = [ 3 4 ]:
u · u' = (3 * 3) + (4 * 4) = 9 + 16 = 25
and √25 = 5 is the length of u (the hypotenuse of a right triangle with legs 3 and 4).
This is true for vectors of any dimensionality. So:
||u|| = √(u · u')
The dot product of a vector with itself gives its squared length.
What about u · v', where u ≠ v? Well…
u · v' = ||u|| ||v|| cos(θ_uv)
…and cos(θ_uv) is a length-invariant measure of the similarity of u and v.
Example: u = [ 1 0 ], v' = [ 1; 1 ]
u · v' = (1 * 1) + (0 * 1) = 1
||u|| = √1 = 1   ||v|| = √2 = 1.414
θ_uv = 45°; cos(θ_uv) = 1 / (1 * 1.414) = .707
More examples, keeping u = [ 1 0 ]:
v' = [ 0; 1 ]: θ_uv = 90°; cos(θ_uv) = 0
v' = [ 0; -1 ]: θ_uv = 270°; cos(θ_uv) = 0
v' = [ -1; 0 ]: θ_uv = 180°; cos(θ_uv) = -1
v' = [ 2.2; 0 ]: θ_uv = 0°; cos(θ_uv) = 1
In general, cos(θ_uv) ranges from -1 to 1. This is true regardless of dimensionality.
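The examples above can be checked with a small helper that computes the cosine from the dot product and the two lengths (the function name is ours, not from the slides):

```python
import numpy as np

def cos_angle(u, v):
    """Length-invariant similarity: cosine of the angle between u and v."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

u = np.array([1.0, 0.0])
print(cos_angle(u, np.array([1.0, 1.0])))   # ~0.707  (45 degrees)
print(cos_angle(u, np.array([0.0, 1.0])))   # 0.0     (90 degrees)
print(cos_angle(u, np.array([-1.0, 0.0])))  # -1.0    (180 degrees)
print(cos_angle(u, np.array([2.2, 0.0])))   # 1.0     (0 degrees; length-invariant)
```

Note that scaling v (as in the [ 2.2 0 ] case) leaves the cosine unchanged, which is exactly what "length-invariant" means.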
To see why the cosine is a length-invariant measure of similarity, consider it expressed in scalar notation:
cos(θ_uv) = Σ_i u_i v_i / ( √(Σ_i u_i²) √(Σ_i v_i²) )
…and compare it to the equation for the correlation coefficient:
r(u,v) = Σ_i (u_i − ū)(v_i − v̄) / ( √(Σ_i (u_i − ū)²) √(Σ_i (v_i − v̄)²) )
If u and v have means of zero, then cos(θ_uv) = r(u,v). The cosine is a special case of the correlation coefficient!
Now compare the cosine to the dot product itself: if u and v have lengths of 1, then the dot product is equal to the cosine.
So the dot product is a special case of the cosine, which is a special case of the correlation coefficient, which is a measure of vector similarity!
What Does the Dot Product "Mean"?
The most common input rule is a dot product between unit i's vector of weights and the activation vector on the other end. Such a unit is computing the (biased) similarity between what it expects (w_i) and what it is getting (a). Its activation a_i is a positive (monotonically increasing) function of this similarity as measured by the net input n_i: for example, an asymptotic function, a step function (BTU), or a logistic function.
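As a sketch of the logistic case (the weight and input values reuse the earlier example; the function name is ours): activation is high when the input resembles the weight vector and low when it points the other way.

```python
import numpy as np

def logistic_unit(w, a):
    """Activation of a logistic unit: an increasing function of w . a."""
    net = np.dot(w, a)             # similarity between weights and input
    return 1.0 / (1.0 + np.exp(-net))

w = np.array([0.1, 1.0, 0.2])      # what the unit "expects"
a = np.array([0.2, 0.9, 0.5])      # what the unit is getting

print(logistic_unit(w, a))         # input similar to w -> activation above 0.5
print(logistic_unit(w, -a))        # dissimilar input -> activation below 0.5
```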
Multiplying a Vector by a Vector 2: The Outer Product
The two vectors need not have the same dimensionality. Same Mantra: Rows by Columns. This time, multiply a column vector by a row vector:
u' = [ 1; 2 ]   v = [ 4 5 6 ]   M = u' * v
Row i of u' times column j of v goes into row i, column j of M:
M = [ 1*4 1*5 1*6     [ 4  5  6
      2*4 2*5 2*6 ] =   8 10 12 ]
The outer product is not exactly commutative:
u' * v = [ 4 5 6; 8 10 12 ] (2 X 3), but with u = [ 1 2 ] and v' = [ 4; 5; 6 ],
v' * u = [ 4 8; 5 10; 6 12 ] (3 X 2), the transpose of u' * v.
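The same computation in NumPy, which provides the outer product directly:

```python
import numpy as np

u = np.array([1, 2])
v = np.array([4, 5, 6])

M = np.outer(u, v)   # column vector times row vector: a 2 x 3 matrix
print(M)
# [[ 4  5  6]
#  [ 8 10 12]]

# Not exactly commutative: swapping the operands transposes the result.
print(np.array_equal(np.outer(v, u), M.T))  # True
```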
Multiplying a Vector by a Matrix
Same Mantra: Rows by Columns.
A row vector: v = [ .2 .6 .3 .7 .9 .4 .3 ]
A matrix:
M = [ .3 .4 .8 .1 .2 .3
      .5 .2  0 .1 .5 .2
      .1 .1 .9 .2 .5 .3
      .2 .4 .1 .7 .8 .5
      .9 .9 .2 .5 .3 .5
      .4 .1 .2 .7 .8 .2
      .1 .2 .2 .5 .7 .2 ]
Make a proxy column vector from the row vector, then take its dot product with each column of the matrix in turn; each dot product fills in one element of the result:
v * M = [ 1.5 1.4 0.8 1.5 1.9 1.2 ] (rounded to one decimal place)
The result is a row vector with as many columns (dimensions) as the matrix, not the vector: a 7-dimensional vector times a 7 (rows) X 6 (columns) matrix yields a 6-dimensional vector.
Vector-matrix multiplication is NOT commutative!
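The worked example can be reproduced in NumPy (the matrix entries are as reconstructed above; `@` is matrix multiplication):

```python
import numpy as np

v = np.array([.2, .6, .3, .7, .9, .4, .3])   # 7-dimensional row vector
M = np.array([[.3, .4, .8, .1, .2, .3],
              [.5, .2, 0., .1, .5, .2],
              [.1, .1, .9, .2, .5, .3],
              [.2, .4, .1, .7, .8, .5],
              [.9, .9, .2, .5, .3, .5],
              [.4, .1, .2, .7, .8, .2],
              [.1, .2, .2, .5, .7, .2]])     # 7 x 6 matrix

# One dot product per column of M; the result is 6-dimensional.
print(v @ M)  # approximately [1.53 1.42 0.82 1.51 1.85 1.21]

# NOT commutative: M @ v is a shape mismatch (6 columns vs 7 elements)
# and raises a ValueError.
```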
Multiplying a Matrix by a Matrix
The Same Mantra: Rows by Columns.
A 2 X 3 matrix:        A 3 X 2 matrix:
A = [ 1 2 3            B = [ 1 2
      1 2 3 ]                1 2
                             1 2 ]
Multiply each row of the first matrix by each column of the second; the result of row i times column j goes into row i, column j of a new matrix:
Row 1 X Column 1 = (1*1) + (2*1) + (3*1) = 6, placed in row 1, column 1
Row 1 X Column 2 = (1*2) + (2*2) + (3*2) = 12, placed in row 1, column 2
Row 2 X Column 1 = 6, placed in row 2, column 1
Row 2 X Column 2 = 12, placed in row 2, column 2
So: a 2 X 3 matrix times a 3 X 2 matrix gives a 2 X 2 matrix:
[ 1 2 3     [ 1 2      [ 6 12
  1 2 3 ] *   1 2   =    6 12 ]
              1 2 ]
The result has the same number of rows as the first matrix and the same number of columns as the second, and the number of columns in the first matrix must be equal to the number of rows in the second.
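The same product and the dimension rules, sketched in NumPy:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [1, 2, 3]])   # 2 x 3
B = np.array([[1, 2],
              [1, 2],
              [1, 2]])      # 3 x 2

C = A @ B                   # rows of A times columns of B
print(C)                    # [[ 6 12]
                            #  [ 6 12]]
print(C.shape)              # (2, 2): rows of A, columns of B

# Inner dimensions must agree: A @ A (2x3 times 2x3) raises a ValueError
# because the first matrix has 3 columns but the second has 2 rows.
```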
This is basic (default) matrix multiplication. There’s other more complicated stuff, too. You (probably) won’t need it for this class.