1
ENGG2013 Unit 15 Rank, determinant, dimension, and the links between them
March 2011.
2
Review of "rank": the "row-rank" of a matrix counts the maximal number of linearly independent rows, and the "column-rank" counts the maximal number of linearly independent columns. One application: given a large system of linear equations, count the number of essentially different equations. That number is just the row-rank of the augmented matrix.
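As an illustration, the minimal sketch below (assuming NumPy is available; the system of equations is a hypothetical example, not one from the slides) counts the essentially different equations as the rank of the augmented matrix.

    import numpy as np

    # Three equations in x and y; the third is the sum of the first two,
    # so there are only two essentially different equations:
    #    x +  y = 3
    #    x -  y = 1
    #   2x      = 4
    augmented = np.array([[1.0,  1.0, 3.0],
                          [1.0, -1.0, 1.0],
                          [2.0,  0.0, 4.0]])

    print(np.linalg.matrix_rank(augmented))   # prints 2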
3
Evaluating the row-rank by definition
[Slide figure: each combination of rows is tested for linear independence; the largest linearly independent set of rows has size two.] Row-rank = 2
4
Calculation of row-rank via RREF
Apply row reductions to bring the matrix to reduced row-echelon form (RREF). Because row reductions do not affect the number of linearly independent rows, the row-rank can be read off the RREF as the number of nonzero rows: row-rank = 2.
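A sketch of this computation (assuming SymPy is available; the matrix is a hypothetical example, since the slide's matrix is not reproduced in the text): the RREF has two nonzero rows, so the row-rank is 2.

    from sympy import Matrix

    # Hypothetical matrix: the third row is the sum of the first two.
    A = Matrix([[1, 2, 3],
                [0, 1, 1],
                [1, 3, 4]])

    rref_form, pivot_columns = A.rref()
    print(rref_form)            # RREF with two nonzero rows
    print(len(pivot_columns))   # 2, the row-rank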
5
Calculation of column-rank by definition
[Slide table: every combination of columns is listed and marked "linearly independent? Y/N"; the largest linearly independent set of columns has size two.] Column-rank = 2
6
Theorem: given any matrix, its row-rank and column-rank are equal. In view of this property, we can simply say the "rank of a matrix"; it means either the row-rank or the column-rank.
7
Why row-rank = column-rank?
If some column vectors are linearly dependent, they remain linearly dependent after any elementary row operation. (An elementary row operation is multiplication on the left by an invertible matrix E; if c1v1 + c2v2 + … + ckvk = 0 with not all ci equal to zero, then c1Ev1 + c2Ev2 + … + ckEvk = E·0 = 0, so the same dependence holds for the transformed columns.)
8
Why row-rank = column-rank?
Hence a row operation does not change the column-rank. By the same argument applied to the transpose of the matrix, a column operation does not change the row-rank either.
9
Why row-rank = column-rank?
Apply row reductions: the row-rank and column-rank do not change. Then apply column reductions: again the row-rank and column-rank do not change. The result is a "normal form" whose top-left corner is an identity matrix and whose remaining entries are zero. The row-rank and column-rank of this normal form both equal the size of the identity submatrix, and are therefore equal.
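A quick numerical illustration of this invariance (a minimal sketch assuming NumPy; the matrix and the particular elementary operations are hypothetical examples):

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 3.0, 4.0]])

    B = A.copy()
    B[2, :] -= B[0, :]          # elementary row operation: R3 <- R3 - R1
    C = B.copy()
    C[:, 1] += 5 * C[:, 2]      # elementary column operation: C2 <- C2 + 5*C3

    # The rank is unchanged by both operations.
    print(np.linalg.matrix_rank(A),
          np.linalg.matrix_rank(B),
          np.linalg.matrix_rank(C))   # prints 2 2 2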
10
DISCRIMINANT, DETERMINANT AND RANK
11
Discriminant of a quadratic equation
y = ax² + bx + c. The discriminant of ax² + bx + c is b² − 4ac. It determines whether the roots are distinct or not.
12
Discriminant measures the separation of roots
y = x² + bx + c. Let the roots be α and β, so that y = (x − α)(x − β). Then the discriminant equals (α − β)². The discriminant is zero exactly when the two roots coincide.
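A small numerical check of this identity (a sketch assuming NumPy; the coefficients b and c are arbitrary example values):

    import numpy as np

    b, c = -3.0, 1.25                      # example monic quadratic x^2 + bx + c
    alpha, beta = np.roots([1.0, b, c])    # its two roots (2.5 and 0.5)
    print(b**2 - 4*c)                      # discriminant: 4.0
    print((alpha - beta)**2)               # (alpha - beta)^2: 4.0, up to rounding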
13
Discriminant is invariant under translation
If we substitute u = x − t into y = ax² + bx + c (t is any real constant), then the discriminant of a(u+t)² + b(u+t) + c, as a polynomial in u, is the same as before.
14
Determinant of a square matrix
The determinant of a square matrix determines whether the matrix is invertible or not. Zero determinant: not invertible. Non-zero determinant: invertible.
15
Determinant measures the area
A 2×2 determinant measures the area of a parallelogram. A 3×3 determinant measures the volume of a parallelepiped. An n×n determinant measures the "volume" of a "parallelepiped" in n dimensions. A zero determinant means that the column vectors lie in some lower-dimensional space.
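A sketch of the 2×2 case (assuming NumPy; the two column vectors are arbitrary example values): the absolute value of the determinant equals the area of the parallelogram spanned by the columns.

    import numpy as np

    u = np.array([3.0, 1.0])
    v = np.array([1.0, 2.0])

    M = np.column_stack([u, v])
    area = abs(np.linalg.det(M))   # |3*2 - 1*1| = 5
    print(area)                    # 5.0: area of the parallelogram spanned by u and v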
16
Determinant is invariant under shearing action
Shearing action = the third kind of elementary row or column operation (adding a multiple of one row or column to another).
17
Rank of a rectangular matrix
The rank of a matrix counts the maximal number of linearly independent rows. It also counts the maximal number of linearly independent columns. It is an integer. If the matrix is m×n, then the rank is an integer between 0 and min(m, n).
18
Rank is invariant under row and column operations
19
Comparison between det and rank
Determinant: a real number; defined for square matrices only. A non-zero determinant implies existence of the inverse. When the determinant is zero, we only know that all the columns (or rows) together are linearly dependent; we get no information about which subsets of columns (or rows) are linearly independent.
Rank: an integer; defined for any rectangular matrix. When applied to an n×n square matrix, rank = n implies existence of the inverse.
20
Basis: Definition. Given k vectors u1, u2, …, uk in ℝⁿ: if for any given vector v in ℝⁿ there is one and only one choice of coefficients c1, c2, …, ck such that v = c1u1 + c2u2 + … + ckuk, we say that these k vectors form a basis of ℝⁿ.
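A sketch of what "one and only one choice of coefficients" means computationally (assuming NumPy; the basis vectors and target vector are hypothetical examples): stacking the basis vectors as columns gives an invertible matrix, and solving a linear system produces the unique coefficients.

    import numpy as np

    # Two linearly independent vectors form a basis of R^2.
    u1 = np.array([1.0, 1.0])
    u2 = np.array([1.0, -1.0])
    v  = np.array([3.0, 1.0])

    B = np.column_stack([u1, u2])   # invertible since u1, u2 are independent
    c = np.linalg.solve(B, v)       # the unique coefficients c1, c2
    print(c)                        # [2. 1.], i.e. v = 2*u1 + 1*u2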
21
Yet another interpretation of rank
Recall that a subspace W in ℝⁿ is a subset which is: closed under addition (the sum of any two vectors in W stays in W) and closed under scalar multiplication (any scalar multiple of a vector in W stays in W as well).
22
Closedness property of subspace
23
Geometric picture: x – 3y + z = 0
[Slide figure: the plane x – 3y + z = 0 in (x, y, z)-space, with vectors lying in the plane.] W is the plane generated, or spanned, by these vectors.
24
Basis and dimension: a basis of a subspace W is a set of linearly independent vectors which span W. A rigorous definition of the dimension is: Dim(W) = the number of vectors in a basis of W.
25
Rank as dimension: in this context, the rank of a matrix is the dimension of the subspace spanned by the rows of the matrix, i.e. the least number of row vectors required to span the subspace spanned by the rows. The rank is also the dimension of the subspace spanned by the columns of the matrix, i.e. the least number of column vectors required to span the subspace spanned by the columns.
26
Example: x – 2y + z = 0. Rank = 2.
The three row vectors lie on the same plane; two of them are enough to describe the plane.
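A sketch of this example (assuming NumPy; the three row vectors are hypothetical vectors chosen to satisfy x – 2y + z = 0, since the slide's actual vectors are not reproduced in the text):

    import numpy as np

    # Three vectors on the plane x - 2y + z = 0.
    rows = np.array([[2.0, 1.0, 0.0],
                     [0.0, 1.0, 2.0],
                     [2.0, 2.0, 2.0]])   # third row = first + second

    # The rows span a 2-dimensional subspace (the plane), so the rank is 2.
    print(np.linalg.matrix_rank(rows))   # prints 2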
27
INTERPOLATION
28
Polynomial interpolation
Given n points, find a polynomial of degree n-1 which goes through these n points. Technical requirements: all x-coordinates must be distinct; y-coordinates need not be distinct. [Slide figure: four data points (x1, y1), (x2, y2), (x3, y3), (x4, y4).]
29
Lagrange interpolation
Lagrange interpolating polynomial for four data points: \begin{align*} f(x)&=y_1\frac{(x-x_2)(x-x_3)(x-x_4)}{(x_1-x_2)(x_1-x_3)(x_1-x_4)}+ y_2\frac{(x-x_1)(x-x_3)(x-x_4)}{(x_2-x_1)(x_2-x_3)(x_2-x_4)}+ \\ & y_3\frac{(x-x_1)(x-x_2)(x-x_4)}{(x_3-x_1)(x_3-x_2)(x_3-x_4)}+ y_4\frac{(x-x_1)(x-x_2)(x-x_3)}{(x_4-x_1)(x_4-x_2)(x_4-x_3)} \end{align*}
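A direct translation of this formula into code (a minimal sketch in plain Python; the four data points are arbitrary example values):

    xs = [0.0, 1.0, 2.0, 3.0]      # distinct x-coordinates
    ys = [1.0, 2.0, 0.0, 5.0]      # y-coordinates (need not be distinct)

    def lagrange(x, xs, ys):
        """Evaluate the Lagrange interpolating polynomial at x."""
        total = 0.0
        for i in range(len(xs)):
            # Basis polynomial: equals 1 at xs[i] and 0 at every other node.
            term = ys[i]
            for j in range(len(xs)):
                if j != i:
                    term *= (x - xs[j]) / (xs[i] - xs[j])
            total += term
        return total

    # The polynomial passes through every data point.
    print([lagrange(x, xs, ys) for x in xs])   # [1.0, 2.0, 0.0, 5.0]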
30
Computing the coefficients by linear equations
We want to solve for coefficients c3, c2, c1, and c0 such that the cubic passes through all four points, i.e.
\begin{align*} c_3 x_i^3 + c_2 x_i^2 + c_1 x_i + c_0 &= y_i, \qquad i = 1, 2, 3, 4, \end{align*}
or equivalently
\begin{align*} \begin{bmatrix} x_1^3 & x_1^2 & x_1 & 1 \\ x_2^3 & x_2^2 & x_2 & 1 \\ x_3^3 & x_3^2 & x_3 & 1 \\ x_4^3 & x_4^2 & x_4 & 1 \end{bmatrix} \begin{bmatrix} c_3 \\ c_2 \\ c_1 \\ c_0 \end{bmatrix} &= \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix}. \end{align*}
31
The theoretical basis for polynomial interpolation
The determinant of a Vandermonde matrix is non-zero if all the x_i's are distinct. Hence, we can always invert the matrix and solve the system of linear equations.
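A sketch of this approach (assuming NumPy; the data points reuse the example values above, and np.vander builds the Vandermonde matrix with decreasing powers, matching the order c3, c2, c1, c0):

    import numpy as np

    xs = np.array([0.0, 1.0, 2.0, 3.0])
    ys = np.array([1.0, 2.0, 0.0, 5.0])

    V = np.vander(xs, 4)               # rows: [x_i^3, x_i^2, x_i, 1]
    coeffs = np.linalg.solve(V, ys)    # [c3, c2, c1, c0]

    print(coeffs)
    print(np.polyval(coeffs, xs))      # reproduces ys up to rounding: ~[1. 2. 0. 5.]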