Chapter 3 Linear Algebra
February 26 Matrices

3.2 Matrices; Row reduction

Standard form of a set of linear equations: $\sum_{j=1}^{n} M_{ij}x_j = k_i$, $i = 1, \dots, m$, or $M\mathbf{r} = \mathbf{k}$. Matrix of coefficients: $M = (M_{ij})$. Augmented matrix: $A = (M \,|\, \mathbf{k})$.

Elementary row operations:
- Row switching.
- Row multiplication by a nonzero number.
- Adding a multiple of a row to another row.

These operations are reversible.
Solving a set of linear equations by row reduction: apply elementary row operations to the augmented matrix until it reaches its row-reduced form, from which the solution can be read off.
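Row reduction is mechanical enough to hand to a computer algebra system. A minimal sketch with SymPy's rref(), using a made-up 2×2 system rather than one of the book's examples:

```python
# A minimal sketch of row reduction using SymPy; the system below is a
# hypothetical example, not one taken from the text.
import sympy as sp

# Augmented matrix for  x + 2y = 5,  3x + 4y = 6
A = sp.Matrix([[1, 2, 5],
               [3, 4, 6]])

# rref() applies elementary row operations (switching, scaling,
# adding multiples of rows) until the matrix is row reduced.
R, pivots = A.rref()
print(R)        # Matrix([[1, 0, -4], [0, 1, 9/2]]) -> x = -4, y = 9/2
```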
Rank of a matrix: the number of nonzero rows in the row-reduced matrix. It is the maximal number of linearly independent row (or column) vectors of the matrix. Rank of A = Rank of $A^T$, where $(A^T)_{ij} = (A)_{ji}$ is the transpose matrix.

Possible cases for the solution of a set of m linear equations with n unknowns (M = matrix of coefficients, A = augmented matrix):
- If Rank M < Rank A, the equations are inconsistent and there is no solution.
- If Rank M = Rank A = n, there is exactly one solution.
- If Rank M = Rank A = R < n, then R unknowns can be expressed in terms of the remaining n − R unknowns.

A sketch of this classification appears below.
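A hedged sketch of the rank test with NumPy; the rank-deficient system below is a made-up example:

```python
# Classify a linear system by comparing the rank of the coefficient
# matrix M with the rank of the augmented matrix A.
import numpy as np

M = np.array([[1., 2.], [2., 4.]])     # coefficient matrix (rank 1)
k = np.array([3., 6.])                 # right-hand side
A = np.column_stack([M, k])            # augmented matrix

rM, rA, n = np.linalg.matrix_rank(M), np.linalg.matrix_rank(A), M.shape[1]
if rM < rA:
    print("inconsistent: no solution")
elif rM == n:
    print("unique solution")
else:
    print(f"{n - rM} free unknown(s): infinitely many solutions")
```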
Read: Chapter 3: 1-2 Homework: 3.2.4,8,10,12,15. Due: March 9
March 5 Determinants

3.3 Determinants; Cramer's rule

Determinant of an n×n matrix:
- Minor $m_{ij}$: the determinant of the submatrix obtained by deleting row i and column j.
- Cofactor: $C_{ij} = (-1)^{i+j} m_{ij}$.
- Determinant of a 1×1 matrix: $\det(a) = a$.
- Definition of the determinant of an n×n matrix: $\det A = \sum_{j=1}^{n} A_{ij} C_{ij}$ for any fixed row i (Laplace expansion).
Equivalent methods: a determinant can be expanded along any row or any column, and $|A| = |A^T|$. Example p90.1.

Triple product of 3 vectors: $\mathbf{A}\cdot(\mathbf{B}\times\mathbf{C}) = \begin{vmatrix} A_x & A_y & A_z \\ B_x & B_y & B_z \\ C_x & C_y & C_z \end{vmatrix}$

Useful properties of determinants:
- A common factor in a row (column) may be factored out.
- Interchanging two rows (columns) changes the sign of the determinant.
- A multiple of one row (column) can be added to another row (column) without changing the determinant.
- The determinant is zero if two rows (columns) are identical or proportional.

Example p91.2.
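The row expansion defined above can be coded directly. A sketch, with det_cofactor as an illustrative name, checked against numpy.linalg.det:

```python
# Cofactor (Laplace) expansion along the first row, compared with
# NumPy's determinant routine.
import numpy as np

def det_cofactor(A):
    n = A.shape[0]
    if n == 1:                       # determinant of a 1x1 matrix
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)  # cofactor C_0j
    return total

A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 10.]])
print(det_cofactor(A), np.linalg.det(A))   # both -3 (up to rounding)
```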
Cramer's rule for solving a set of linear equations $M\mathbf{r} = \mathbf{k}$: $x_i = \dfrac{\det M_i}{\det M}$, where $M_i$ is the coefficient matrix M with its ith column replaced by $\mathbf{k}$.
Theorem: For homogeneous linear equations, the determinant of the coefficient matrix must be zero for a nontrivial solution to exist.
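A minimal sketch of Cramer's rule under the convention above (column i of M replaced by k), with a made-up 2×2 system; the guard reflects the det M ≠ 0 requirement:

```python
# Cramer's rule: x_i = det(M_i) / det(M), where M_i is M with
# column i replaced by the right-hand side k.
import numpy as np

def cramer(M, k):
    d = np.linalg.det(M)
    if abs(d) < 1e-12:
        raise ValueError("det M = 0: no unique solution")
    x = np.empty(len(k))
    for i in range(len(k)):
        Mi = M.copy()
        Mi[:, i] = k                 # replace column i by k
        x[i] = np.linalg.det(Mi) / d
    return x

M = np.array([[2., 1.], [1., 3.]])
k = np.array([5., 10.])
print(cramer(M, k), np.linalg.solve(M, k))   # both [1., 3.]
```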
Read: Chapter 3: 3 Homework: 3.3.1,10,15,17. (No computer work is needed.) Due: March 23
March 9 Vectors

3.4 Vectors

Vector: a quantity that has both a magnitude and a direction. Geometrical representation of a vector: an arrow with a length and a direction. Addition and subtraction: vector addition is commutative and associative. Algebraic representation of a vector: $\mathbf{A} = (A_x, A_y, A_z) = A_x\mathbf{i} + A_y\mathbf{j} + A_z\mathbf{k}$. Magnitude of a vector: $A = |\mathbf{A}| = \sqrt{A_x^2 + A_y^2 + A_z^2}$. Note: $\mathbf{A}$ is a vector and A is its length; they should be distinguished.
Scalar or dot product: $\mathbf{A}\cdot\mathbf{B} = AB\cos\theta = A_xB_x + A_yB_y + A_zB_z$.

Vector or cross product: $|\mathbf{A}\times\mathbf{B}| = AB\sin\theta$, with direction perpendicular to both $\mathbf{A}$ and $\mathbf{B}$ by the right-hand rule. Cross product in determinant form: $\mathbf{A}\times\mathbf{B} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ A_x & A_y & A_z \\ B_x & B_y & B_z \end{vmatrix}$

Parallel and perpendicular vectors: $\mathbf{A}\parallel\mathbf{B} \Leftrightarrow \mathbf{A}\times\mathbf{B} = 0$; $\mathbf{A}\perp\mathbf{B} \Leftrightarrow \mathbf{A}\cdot\mathbf{B} = 0$.

Relations between the basis vectors: $\mathbf{i}\cdot\mathbf{i} = \mathbf{j}\cdot\mathbf{j} = \mathbf{k}\cdot\mathbf{k} = 1$; $\mathbf{i}\cdot\mathbf{j} = \mathbf{j}\cdot\mathbf{k} = \mathbf{k}\cdot\mathbf{i} = 0$; $\mathbf{i}\times\mathbf{j} = \mathbf{k}$, $\mathbf{j}\times\mathbf{k} = \mathbf{i}$, $\mathbf{k}\times\mathbf{i} = \mathbf{j}$.

Examples p102.3, p105.4. Problems 4.5, 26.
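A quick numerical illustration of the dot and cross products with NumPy; the vectors are made-up examples:

```python
# Dot and cross products; A.B = |A||B|cos(theta), |AxB| = |A||B|sin(theta).
import numpy as np

A = np.array([1., 2., 2.])
B = np.array([3., 0., 4.])

dot = A @ B
cross = np.cross(A, B)
cos_theta = dot / (np.linalg.norm(A) * np.linalg.norm(B))

print(dot, cross, cos_theta)                  # 11.0, [ 8.  2. -6.], 11/15
print(np.isclose(cross @ A, 0), np.isclose(cross @ B, 0))  # AxB is perp. to A and B
```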
Read: Chapter 3: 4 Homework: 3.4.5,7,12,18,26. Due: March 23
March 19 Lines and planes 3.5 Lines and planes
Equations for a straight line through $\mathbf{r}_0$ parallel to $\mathbf{A}$: $\mathbf{r} = \mathbf{r}_0 + \mathbf{A}t$, i.e. $(\mathbf{r} - \mathbf{r}_0)\times\mathbf{A} = 0$, or $\dfrac{x - x_0}{A_x} = \dfrac{y - y_0}{A_y} = \dfrac{z - z_0}{A_z}$.

Equation for a plane through $\mathbf{r}_0$ with normal $\mathbf{N}$: $\mathbf{N}\cdot(\mathbf{r} - \mathbf{r}_0) = 0$.

Examples p109.1, 2.
Distance from a point P to a plane through Q with normal $\mathbf{N}$: $d = \dfrac{|\mathbf{N}\cdot\overrightarrow{QP}|}{|\mathbf{N}|}$. Example p110.3.

Distance from a point P to a line through Q with direction $\mathbf{A}$: $d = \dfrac{|\mathbf{A}\times\overrightarrow{QP}|}{|\mathbf{A}|}$. Example p110.4.

Distance between two skew lines: let FG be the shortest distance; it must be perpendicular to both lines (proof). With $\mathbf{n} = \dfrac{\mathbf{A}\times\mathbf{B}}{|\mathbf{A}\times\mathbf{B}|}$, the distance between the line through P with direction $\mathbf{A}$ and the line through Q with direction $\mathbf{B}$ is $d = |\overrightarrow{PQ}\cdot\mathbf{n}|$. Examples p110.5, 6. Problems 5.12, 18, 42.
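Hedged sketches of the three distance formulas on this slide; function names and test data are illustrative, not from the text:

```python
# Distances from a point to a plane, a point to a line, and between
# two skew lines, following the formulas above.
import numpy as np

def dist_point_plane(P, r0, N):          # |N.(P - r0)| / |N|
    return abs(N @ (P - r0)) / np.linalg.norm(N)

def dist_point_line(P, r0, A):           # |A x (P - r0)| / |A|
    return np.linalg.norm(np.cross(A, P - r0)) / np.linalg.norm(A)

def dist_skew_lines(p0, a, q0, b):       # |(q0 - p0).(a x b)| / |a x b|
    n = np.cross(a, b)                   # n is perpendicular to both lines
    return abs((q0 - p0) @ n) / np.linalg.norm(n)

print(dist_point_plane(np.array([1., 1., 1.]),
                       np.zeros(3), np.array([0., 0., 1.])))   # 1.0
print(dist_skew_lines(np.zeros(3), np.array([1., 0., 0.]),
                      np.array([0., 0., 1.]), np.array([0., 1., 0.])))  # 1.0
```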
Read: Chapter 3: 5 Homework: 3.5.7,12,18,20,26,32,37,42. Due: March 30
March 21,23 Matrix operations
Matrix equation: $A = B$ means $A_{ij} = B_{ij}$ for every element. Multiplication by a number: $(kA)_{ij} = k\,A_{ij}$. Matrix addition: $(A + B)_{ij} = A_{ij} + B_{ij}$ (A and B must have the same shape).
Matrix multiplication:
Note: the element in the ith row and jth column of AB is the dot product of the ith row of A with the jth column of B: $(AB)_{ij} = \sum_k A_{ik}B_{kj}$. The number of columns of A must equal the number of rows of B.

More about matrix multiplication:
- The product is associative: A(BC) = (AB)C.
- The product is distributive: A(B+C) = AB + AC.
- In general the product is not commutative: AB ≠ BA. [A,B] = AB − BA is called the commutator.

Unit matrix I: IA = AI = A. Zero matrix: all elements are zero. Product theorem: det(AB) = det A · det B.
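A small demonstration that matrix multiplication is generally not commutative, using two made-up matrices and the commutator defined above:

```python
# AB and BA differ in general; the commutator [A, B] measures the failure.
import numpy as np

A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0., 0.], [1., 0.]])

print(A @ B)                  # [[1, 0], [0, 0]]
print(B @ A)                  # [[0, 0], [0, 1]]  -> AB != BA
print(A @ B - B @ A)          # commutator [A, B] = AB - BA
```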
Solving a set of linear equations by matrix operations:
Matrix inversion: $M^{-1}$ is the inverse of M if $MM^{-1} = M^{-1}M = I$. Only square matrices can be inverted. Since $\det(MM^{-1}) = \det M \,\det M^{-1} = \det I = 1$, $\det M \ne 0$ is necessary for M to be invertible. Calculating the inverse matrix: $M^{-1} = \dfrac{1}{\det M}\,C^T$, where $C_{ij}$ is the cofactor of $M_{ij}$. Example p120.3.
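A sketch of inversion in NumPy for a made-up matrix, checking det M ≠ 0 first and verifying that the product gives the identity:

```python
# Inverting a matrix only makes sense when det M != 0.
import numpy as np

M = np.array([[2., 1.], [1., 1.]])
print(np.linalg.det(M))              # 1.0, so M is invertible
Minv = np.linalg.inv(M)
print(Minv)                          # [[ 1., -1.], [-1.,  2.]]
print(M @ Minv)                      # identity (up to rounding)
```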
Three equivalent ways of solving a set of linear equations:
1. Row reduction. 2. Cramer's rule. 3. Inverse matrix: $\mathbf{r} = M^{-1}\mathbf{k}$.

Equivalence between $\mathbf{r} = M^{-1}\mathbf{k}$ and Cramer's rule: Cramer's rule is an explicit realization of $\mathbf{r} = M^{-1}\mathbf{k}$.
Gauss-Jordan method of matrix inversion:
Let $(M_{L_p} M_{L_{p-1}} \cdots M_{L_2} M_{L_1})\,M = M_L M = I$ be the result of a series of elementary row operations on M. Then $(M_{L_p} M_{L_{p-1}} \cdots M_{L_2} M_{L_1})\,I = M_L I = M^{-1}$. That is, row reducing the augmented block $(M \,|\, I)$ to $(I \,|\, M^{-1})$ produces the inverse.

Equivalence between row reduction and $\mathbf{r} = M^{-1}\mathbf{k}$: row reduction decomposes the $M^{-1}$ in $\mathbf{r} = M^{-1}\mathbf{k}$ into many elementary steps.
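The Gauss-Jordan procedure can be mimicked with SymPy by row reducing the augmented block (M | I); the matrix is a made-up example:

```python
# Gauss-Jordan inversion: the row operations that turn M into I
# simultaneously turn I into M^-1.
import sympy as sp

M = sp.Matrix([[2, 1], [1, 1]])
aug = M.row_join(sp.eye(2))          # [M | I]
R, _ = aug.rref()                    # elementary row operations
Minv = R[:, 2:]                      # right block is now M^-1
print(Minv)                          # Matrix([[1, -1], [-1, 2]])
print(Minv == M.inv())               # True
```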
Rotation matrices: rotation of vectors in the plane by an angle θ is represented by $R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$

Functions of matrices: a power series expansion is implied. Example: $e^{A} = \sum_{k=0}^{\infty} \dfrac{A^k}{k!}$.
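A sketch of both ideas: a rotation matrix built from θ, and a function of a matrix evaluated by truncating its power series (expm_series is an illustrative name, and the cutoff of 30 terms is an arbitrary choice that converges well here):

```python
# A rotation matrix, and exp(A) summed term by term from its power series.
import numpy as np

def rotation(theta):                 # rotates plane vectors by theta
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def expm_series(A, terms=30):        # exp(A) = sum_k A^k / k!
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result += term
    return result

theta = np.pi / 2
G = np.array([[0., -theta], [theta, 0.]])     # generator of the rotation
print(rotation(theta))                         # [[0, -1], [1, 0]]
print(expm_series(G))                          # same matrix: exp(G) = R(theta)
```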
Read: Chapter 3: 6 Homework: 3.6.9,13,15,18,21 Due: March 30
March 26, 28 Linear operators
3.7 Linear combinations, linear functions, linear operators

Linear combination: $a\mathbf{A} + b\mathbf{B}$. Linear function $f(\mathbf{r})$: $f(a\mathbf{r}_1 + b\mathbf{r}_2) = a\,f(\mathbf{r}_1) + b\,f(\mathbf{r}_2)$. Linear operator O: an operator satisfying $O(a\mathbf{r}_1 + b\mathbf{r}_2) = a\,O(\mathbf{r}_1) + b\,O(\mathbf{r}_2)$. Example p125.1; Problem 7.15.

Linear transformation: $\mathbf{r}' = M\mathbf{r}$; the matrix M is a linear operator representing a linear transformation.
Orthogonal transformation:
An orthogonal transformation preserves the length of a vector. Orthogonal matrix: the matrix of an orthogonal transformation is an orthogonal matrix. Theorem: M is an orthogonal matrix if and only if $M^T = M^{-1}$. Theorem: $\det M = \pm 1$ if M is orthogonal.
2×2 orthogonal matrices: a 2×2 orthogonal matrix corresponds to either a rotation (with det M = 1) or a reflection (with det M = −1).
Two-dimensional rotation by an angle θ:
$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$

Two-dimensional reflection about the line through the origin at angle θ/2:
$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$

Example p128.3.
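A numerical check, with a made-up angle, that both families are orthogonal and are distinguished by their determinant:

```python
# The two families of 2x2 orthogonal matrices: rotations (det +1)
# and reflections (det -1).
import numpy as np

theta = np.pi / 3
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])     # rotation
ref = np.array([[np.cos(theta),  np.sin(theta)],
                [np.sin(theta), -np.cos(theta)]])     # reflection

for M in (rot, ref):
    print(np.allclose(M.T @ M, np.eye(2)),           # M^T = M^-1: orthogonal
          round(np.linalg.det(M)))                   # +1 or -1
```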
Read: Chapter 3: 7 Homework: 3.7.9,15,22,26. Due: April 6
March 30 Linear dependence and independence
Linear dependence of vectors: a set of vectors is linearly dependent if some linear combination of them is zero, with not all the coefficients equal to zero.
1. If a set of vectors is linearly dependent, then at least one of the vectors can be written as a linear combination of the others.
2. If a set of vectors is linearly dependent, then at least one row in the row-reduced matrix of these vectors is zero. The rank of the matrix is then less than the number of rows.
Example: any three vectors in the x-y plane are linearly dependent, e.g. (1,2), (3,4), (5,6); see the sketch below.
Linear independence of vectors: a set of vectors is linearly independent if no linear combination of them is zero except the one with all coefficients equal to zero.
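The slide's example, checked by rank:

```python
# (1,2), (3,4), (5,6) are linearly dependent: the rank of the matrix
# they form is less than the number of rows.
import numpy as np

V = np.array([[1., 2.], [3., 4.], [5., 6.]])
print(np.linalg.matrix_rank(V))      # 2 < 3 rows -> linearly dependent
```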
Linear dependence of functions:
A set of functions is linearly dependent if some linear combination of them is identically zero, with not all the coefficients equal to zero. Example: $\sin^2 x$, $\cos^2 x$, and 1 are linearly dependent, since $\sin^2 x + \cos^2 x - 1 = 0$.

Theorem: if the Wronskian $W(x) = \begin{vmatrix} f_1 & f_2 & \cdots & f_n \\ f_1' & f_2' & \cdots & f_n' \\ \vdots & & & \vdots \\ f_1^{(n-1)} & f_2^{(n-1)} & \cdots & f_n^{(n-1)} \end{vmatrix} \not\equiv 0$, then the functions are linearly independent.
Examples p133.1, 2. Note: W = 0 does not always imply that the functions are linearly dependent, e.g. $x^2$ and $x|x|$ about x = 0. However, when the functions are analytic (infinitely differentiable), which is the case we meet most often, $W \equiv 0$ does imply linear dependence.
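A sketch using SymPy's wronskian helper: sin x and cos x are independent (W = −1), while x and 2x are dependent (W = 0):

```python
# The Wronskian test for linear independence of functions.
import sympy as sp

x = sp.symbols('x')
print(sp.simplify(sp.wronskian([sp.sin(x), sp.cos(x)], x)))   # -1, independent
print(sp.simplify(sp.wronskian([x, 2*x], x)))                 # 0, dependent
```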
Homogeneous equations:
1. Homogeneous equations always have the trivial solution (all unknowns = 0).
2. If Rank M = number of unknowns, the trivial solution is the only solution.
3. If Rank M < number of unknowns, there are infinitely many solutions.
Theorem: a set of n homogeneous equations with n unknowns has nontrivial solutions if and only if the determinant of the coefficients is zero.
Proof: 1. $\det M \ne 0 \Rightarrow \mathbf{r} = M^{-1}\mathbf{0} = \mathbf{0}$. 2. Only the trivial solution exists $\Rightarrow$ the columns of M are linearly independent $\Rightarrow \det M \ne 0$.
Example p135.4.
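A sketch of the theorem with SymPy, using a made-up singular matrix; nullspace() returns a basis for the nontrivial solutions:

```python
# A homogeneous system Mr = 0 has nontrivial solutions exactly
# when det M = 0.
import sympy as sp

M = sp.Matrix([[1, 2], [2, 4]])
print(M.det())            # 0 -> nontrivial solutions exist
print(M.nullspace())      # [Matrix([[-2], [1]])]: r = t*(-2, 1)
```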
Read: Chapter 3: 8 Homework: 3.8.7,10,13,17,24. Due: April 6
April 2 Special matrices
3.9 Special matrices and formulas

Transpose matrix $A^T$ of A: $(A^T)_{ij} = A_{ji}$. Complex conjugate matrix $A^*$ of A: $(A^*)_{ij} = (A_{ij})^*$. Adjoint (transpose conjugate) matrix $A^\dagger$ of A: $A^\dagger = (A^*)^T$. Inverse matrix $A^{-1}$ of A: $A^{-1}A = AA^{-1} = I$.

Symmetric matrix: $A = A^T$ (A real). Orthogonal matrix: $A^{-1} = A^T$ (A real). Hermitian matrix: $A = A^\dagger$. Unitary matrix: $A^{-1} = A^\dagger$. Normal matrix: $AA^\dagger = A^\dagger A$, i.e. $[A, A^\dagger] = 0$.
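Hedged predicates for a few of these definitions; the function names are illustrative, and the test matrix is a made-up example (it happens to be Hermitian, unitary, and normal at once):

```python
# Checking the defining identities of special matrices numerically.
import numpy as np

def is_hermitian(A): return np.allclose(A, A.conj().T)
def is_unitary(A):   return np.allclose(A.conj().T @ A, np.eye(len(A)))
def is_normal(A):    return np.allclose(A @ A.conj().T, A.conj().T @ A)

P = np.array([[0., 1.], [1., 0.]])       # real symmetric, also Hermitian
print(is_hermitian(P), is_unitary(P), is_normal(P))   # True True True
```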
Index notation for matrix multiplication: $(AB)_{ij} = \sum_k A_{ik}B_{kj}$. Kronecker δ symbol: $\delta_{ij} = 1$ if i = j, and $\delta_{ij} = 0$ if i ≠ j.

Exercises on index notation:
- Associative law for matrix multiplication: A(BC) = (AB)C.
- Transpose of a product: $(AB)^T = B^T A^T$. Corollary: $(ABC)^T = C^T B^T A^T$.
Inverse of a product: $(AB)^{-1} = B^{-1}A^{-1}$. Corollary: $(ABC)^{-1} = C^{-1}B^{-1}A^{-1}$.

Trace of a matrix: $\operatorname{Tr} A = \sum_i A_{ii}$. Trace of a product: $\operatorname{Tr}(AB) = \operatorname{Tr}(BA)$. Corollary: $\operatorname{Tr}(ABC) = \operatorname{Tr}(BCA) = \operatorname{Tr}(CAB)$.
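A numerical spot check of these product rules on random matrices (random matrices are almost surely invertible, but this is a sketch, not a proof):

```python
# Verify the transpose, inverse, and cyclic-trace rules for products.
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

print(np.allclose((A @ B).T, B.T @ A.T))                       # transpose rule
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ np.linalg.inv(A)))        # inverse rule
print(np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A)))    # cyclic trace
```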
Read: Chapter 3: 9 Homework: 3.9.2,4,5,23,24. Due: April 13
April 4 Linear vector spaces
n-dimensional vectors: $\mathbf{A} = (A_1, A_2, \dots, A_n)$. Linear vector space: a set of vectors together with all their linear combinations forms a space. Subspace: e.g., a plane is a subspace of 3-dimensional space. Span: a set of vectors spans the vector space if any vector in the space can be written as a linear combination of the spanning set. Basis: a set of linearly independent vectors that spans a vector space. Dimension of a vector space: the number of basis vectors that span the vector space. Examples p143.1; p144.2.
Inner product of two n-dimensional vectors: $\mathbf{A}\cdot\mathbf{B} = \sum_{i=1}^{n} A_i B_i$. Length of an n-dimensional vector: $|\mathbf{A}| = \sqrt{\mathbf{A}\cdot\mathbf{A}}$. Two n-dimensional vectors are orthogonal if $\mathbf{A}\cdot\mathbf{B} = 0$. Schwarz inequality: $|\mathbf{A}\cdot\mathbf{B}| \le |\mathbf{A}|\,|\mathbf{B}|$.
Orthonormal basis: a set of vectors forms an orthonormal basis if 1) they are mutually orthogonal and 2) each vector is normalized. Gram-Schmidt orthonormalization: starting from n linearly independent vectors $\mathbf{A}_1, \dots, \mathbf{A}_n$, we can construct an orthonormal basis set, as in the sketch below. Example p146.4.
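A minimal Gram-Schmidt sketch, assuming the input vectors are linearly independent; gram_schmidt is an illustrative name:

```python
# Gram-Schmidt: subtract projections onto the earlier basis vectors,
# then normalize.
import numpy as np

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = v - sum((e @ v) * e for e in basis)   # remove components along e
        basis.append(w / np.linalg.norm(w))       # normalize
    return basis

vs = [np.array([1., 1., 0.]), np.array([1., 0., 1.])]
e1, e2 = gram_schmidt(vs)
print(e1 @ e2, np.linalg.norm(e1), np.linalg.norm(e2))   # ~0, 1.0, 1.0
```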
Bra-ket notation of vectors:
$|A\rangle$ denotes a column vector (ket) and $\langle A| = |A\rangle^\dagger$ the corresponding row vector (bra). Complex Euclidean space: inner product $\langle A|B\rangle = \sum_i A_i^* B_i$; length $|A| = \sqrt{\langle A|A\rangle}$; orthogonal vectors: $\langle A|B\rangle = 0$; Schwarz inequality: $|\langle A|B\rangle| \le |A|\,|B|$. Example p146.5.
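A sketch of the complex inner product with NumPy's vdot, which conjugates its first argument; the vectors are made-up examples:

```python
# Complex inner product <A|B> = sum_i A_i* B_i, length, and the
# Schwarz inequality.
import numpy as np

A = np.array([1. + 1j, 2.])
B = np.array([3., 1j])

inner = np.vdot(A, B)                     # conjugates the first argument
length_A = np.sqrt(np.vdot(A, A).real)
length_B = np.sqrt(np.vdot(B, B).real)
print(inner, length_A)                    # (3-1j), sqrt(6)
print(abs(inner) <= length_A * length_B)  # Schwarz inequality: True
```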
Read: Chapter 3: 10 Homework: ,10. Due: April 13
April 6, 9 Eigenvalues and eigenvectors
3.11 Eigenvalues and eigenvectors; Diagonalizing matrices

Eigenvalues and eigenvectors: for a matrix M, if there is a nonzero vector $\mathbf{r}$ and a scalar λ such that $M\mathbf{r} = \lambda\mathbf{r}$, then $\mathbf{r}$ is called an eigenvector of M and λ the corresponding eigenvalue. M only changes the "length" of its eigenvector $\mathbf{r}$ by the factor λ, without affecting its "direction". For nontrivial solutions of this homogeneous equation we need $\det(M - \lambda I) = 0$. This is called the secular equation, or characteristic equation.
Example: Calculate the eigenvalues and eigenvectors of
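The matrix of this example did not survive extraction, so as a stand-in here is a hedged sketch with a hypothetical symmetric matrix:

```python
# Eigenvalues and eigenvectors with NumPy; the matrix is a made-up
# example, not the one from the slide.
import numpy as np

M = np.array([[2., 1.], [1., 2.]])
vals, vecs = np.linalg.eig(M)        # columns of vecs are eigenvectors
print(vals)                          # [3., 1.]
for lam, r in zip(vals, vecs.T):
    print(np.allclose(M @ r, lam * r))   # Mr = lambda*r holds
```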
Similarity transformation:
Let an operator M actively change (rotate, stretch, etc.) a vector. The matrix representation of the operator depends on the choice of basis vectors. Let the matrix C change the basis (a coordinate transformation): if $\mathbf{r}' = C\mathbf{r}$ and $\mathbf{R} = M\mathbf{r}$, then in the new basis $\mathbf{R}' = C\mathbf{R} = CMC^{-1}\mathbf{r}'$. $M' = CMC^{-1}$ is called a similarity transformation of M. $M'$ and M are called similar matrices; they are the same operator represented in different bases related by the transformation matrix C. That is: if $\mathbf{r}' = C\mathbf{r}$, then $M' = CMC^{-1}$.

Theorem: a similarity transformation does not change the determinant or the trace of a matrix.
Diagonalization of a matrix:
Theorem: a matrix M may be diagonalized by a similarity transformation $C^{-1}MC = D$, where C consists of the column eigenvectors of M, and the diagonal matrix D consists of the corresponding eigenvalues. That is, the diagonalization equation $C^{-1}MC = D$ just summarizes the eigenvalues and eigenvectors of M.
More about the diagonalization of a matrix, $C^{-1}MC = D$ (2×2 matrix as an example): D describes in the (x', y') system the same operation that M describes in the (x, y) system. The new x', y' axes lie along the eigenvectors of M, and the operation is clearer in the new system.
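A numerical sketch, continuing the hypothetical matrix above: building C from the column eigenvectors diagonalizes M, and the trace and determinant survive the similarity transformation:

```python
# Diagonalization C^-1 M C = D and invariance of trace and determinant.
import numpy as np

M = np.array([[2., 1.], [1., 2.]])
vals, C = np.linalg.eig(M)                     # C: column eigenvectors
D = np.linalg.inv(C) @ M @ C                   # similarity transformation
print(np.round(D, 10))                         # diag(3, 1): the eigenvalues
print(np.isclose(np.trace(D), np.trace(M)),    # trace unchanged
      np.isclose(np.linalg.det(D), np.linalg.det(M)))   # det unchanged
```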
Diagonalization of Hermitian matrices:
The eigenvalues of a Hermitian matrix are always real. The eigenvectors corresponding to different eigenvalues of a Hermitian matrix are orthogonal. A matrix has real eigenvalues and can be diagonalized by a unitary similarity transformation if and only if it is Hermitian. Example p155.2.
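A sketch with numpy.linalg.eigh, the Hermitian-aware eigensolver; the matrix is a made-up Hermitian example:

```python
# For a Hermitian matrix the eigenvalues are real and the matrix of
# eigenvectors is unitary.
import numpy as np

H = np.array([[2., 1j], [-1j, 2.]])            # Hermitian: H equals its adjoint
vals, U = np.linalg.eigh(H)
print(vals)                                    # real: [1., 3.]
print(np.allclose(U.conj().T @ U, np.eye(2)))  # U is unitary
print(np.allclose(U.conj().T @ H @ U, np.diag(vals)))  # diagonalized
```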
Corollary: a matrix has real eigenvalues and can be diagonalized by an orthogonal similarity transformation if and only if it is symmetric.
Read: Chapter 3: 11 Homework: ,13,14,19,32,33,42. Due: April 20