

Presentation on theme: "Chapter 3 Vector Spaces" — Presentation transcript:

1 Chapter 3 Vector Spaces
3.1 Vectors in R^n
3.2 Vector Spaces
3.3 Subspaces of Vector Spaces
3.4 Spanning Sets and Linear Independence
3.5 Basis and Dimension
3.6 Rank of a Matrix and Systems of Linear Equations
3.7 Coordinates and Change of Basis
The idea of vectors dates back to the early 1800s, but the generality of the concept waited until Peano's work in 1888. It took many years to understand the importance and extent of the ideas involved. The underlying idea can be used to describe the forces and accelerations in Newtonian mechanics, the potential functions of electromagnetism, the states of systems in quantum mechanics, the least-squares fitting of experimental data, and much more.

2 3.1 Vectors in R^n
n-space R^n is defined to be the set of all ordered n-tuples (x1, x2, ..., xn) of real numbers. A vector in R^n is denoted as an ordered n-tuple x = (x1, x2, ..., xn).
(1) An n-tuple can be viewed as a point in R^n with the xi's as its coordinates.
(2) An n-tuple can be viewed as a vector in R^n with the xi's as its components.
The idea of a vector is far more general than the picture of a line with an arrowhead attached to its end. A short answer: "A vector is an element of a vector space."

3 Note:
A vector space is some set of things for which the operation of addition and the operation of multiplication by a scalar are defined. You don't necessarily have to be able to multiply two vectors by each other or even to be able to define the length of a vector, though those are very useful operations. The common example of directed line segments (arrows) in 2D or 3D fits this idea, because you can add such arrows by the parallelogram law and you can multiply them by numbers, changing their length (and reversing direction for negative numbers).

4 A vector space is a set whose elements are called "vectors" and such that there are two operations defined on them: you can add vectors to each other and you can multiply them by scalars (numbers). These operations must obey certain simple rules, the axioms for a vector space. A complete definition of a vector space requires pinning down these properties of the operations and making the concept of vector space less vague.

5 Let u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) be two vectors in R^n.
Equal: u = v if and only if u1 = v1, u2 = v2, ..., un = vn.
Vector addition (the sum of u and v): u + v = (u1+v1, u2+v2, ..., un+vn).
Scalar multiplication (the scalar multiple of u by c): cu = (cu1, cu2, ..., cun).

6 Negative: -u = (-u1, -u2, ..., -un)
Difference: u - v = (u1-v1, u2-v2, ..., un-vn)
Zero vector: 0 = (0, 0, ..., 0)
Notes:
(1) The zero vector 0 in R^n is called the additive identity in R^n.
(2) The vector -v is called the additive inverse of v.

7 Thm 3.1: (the axioms for a vector space)
Let v1, v2, and v3 be vectors in R^n, and let c and d be scalars.
(1) v1 + v2 is a vector in R^n (closure under addition)
(2) v1 + v2 = v2 + v1 (commutativity)
(3) (v1 + v2) + v3 = v1 + (v2 + v3) (associativity)
(4) v1 + 0 = v1 (additive identity)
(5) v1 + (-v1) = 0 (additive inverse)
(6) cv1 is a vector in R^n (closure under scalar multiplication)
(7) c(v1 + v2) = cv1 + cv2
(8) (c + d)v1 = cv1 + dv1
(9) c(dv1) = (cd)v1
(10) 1(v1) = v1

8 Ex: (Vector operations in R^4)
Let u = (2, -1, 5, 0), v = (4, 3, 1, -1), and w = (-6, 2, 0, 3) be vectors in R^4. Solve for x in 3(x + w) = 2u - v + x.
Sol: 3x + 3w = 2u - v + x
⇒ 2x = 2u - v - 3w
⇒ x = u - (1/2)v - (3/2)w = (9, -11/2, 9/2, -4)
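A quick numerical check of this vector arithmetic (a minimal sketch, assuming NumPy is available; not part of the original slides):

```python
import numpy as np

u = np.array([2, -1, 5, 0])
v = np.array([4, 3, 1, -1])
w = np.array([-6, 2, 0, 3])

# Solve 3(x + w) = 2u - v + x  =>  2x = 2u - v - 3w  =>  x = u - v/2 - 3w/2
x = u - v / 2 - 3 * w / 2
print(x)                                          # [ 9.  -5.5  4.5 -4. ]
print(np.allclose(3 * (x + w), 2 * u - v + x))    # True: x satisfies the equation
```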

9 Thm 3.2: (Properties of additive identity and additive inverse)
Let v be a vector in R^n and c be a scalar. Then the following are true.
(1) The additive identity is unique. That is, if u + v = v, then u = 0.
(2) The additive inverse of v is unique. That is, if v + u = 0, then u = -v.

10 Thm 3.3: (Properties of scalar multiplication)
Let v be any element of a vector space V, and let c be any scalar. Then the following properties are true.
(1) 0v = 0
(2) c0 = 0
(3) If cv = 0, then c = 0 or v = 0
(4) (-1)v = -v and -(-v) = v

11 Notes: A vector x = (x1, x2, ..., xn) in R^n can be viewed as a 1×n row matrix (row vector) [x1 x2 ... xn] or as an n×1 column matrix (column vector) with entries x1, x2, ..., xn.
(The matrix operations of addition and scalar multiplication give the same results as the corresponding vector operations.)

12 Matrix algebra: vector addition and scalar multiplication, written in row-matrix or column-matrix form.

13 Notes:
(1) A vector space consists of four entities: a set of vectors, a set of scalars, and two operations. V: a nonempty set of vectors; c: a scalar; vector addition; scalar multiplication. When these satisfy the axioms, V is called a vector space.
(2) Zero vector space: {0}, the vector space containing only the additive identity.

14 Examples of vector spaces:
(1) n-tuple space: R^n
(2) Matrix space: the set of all m×n matrices with real entries (Ex: m = n = 2), with the usual matrix addition as vector addition and the usual scalar multiplication.

15 (3) n-th degree polynomial space: P_n(x), the set of all real polynomials of degree n or less.
(4) Function space: the set of square-integrable real-valued functions of a real variable on the domain [a, b], that is, those functions f with ∫_a^b |f(x)|^2 dx finite. For closure under addition, simply note that the sum of two such functions is again square-integrable, so axiom 1 is satisfied. You can verify that the remaining 9 axioms are also satisfied.

16 Function Spaces: Is this a vector space? How can a function be a vector?
This comes down to your understanding of the word "function." Is f(x) a function or is f(x) a number?
Answer: It's a number. This is a confusion caused by the conventional notation for functions. We routinely call f(x) a function, but it is really the result of feeding the particular value x to the function f in order to get the number f(x). Think of the function f as the whole graph relating input to output; the pair {x, f(x)} is then just one point on the graph. Adding two functions is adding their graphs.

17 Notes: To show that a set is not a vector space, you need only find one axiom that is not satisfied.
Ex 1: The set of all integers is not a vector space.
Pf: A scalar multiple of an integer need not be an integer, e.g. (1/2)(1) = 1/2 is a noninteger. (It is not closed under scalar multiplication.)
Ex 2: The set of all second-degree polynomials is not a vector space.
Pf: Let p(x) = x^2 and q(x) = -x^2 + x + 1. Then p(x) + q(x) = x + 1, which is not of second degree. (It is not closed under vector addition.)

18 3.3 Subspaces of Vector Spaces
Subspace: Let V be a vector space and let W be a nonempty subset of V. If W is itself a vector space under the operations of addition and scalar multiplication defined in V, then W is called a subspace of V.
Trivial subspaces: Every vector space V has at least two subspaces.
(1) The zero vector space {0} is a subspace of V.
(2) V is a subspace of V.

19 Thm 3.4: (Test for a subspace)
If W is a nonempty subset of a vector space V, then W is a subspace of V if and only if the following conditions hold.
(1) If u and v are in W, then u + v is in W. (closure under addition)
(2) If u is in W and c is any scalar, then cu is in W. (closure under scalar multiplication)

20 Ex: (A subspace of M2×2)
Let W be the set of all 2×2 symmetric matrices. Show that W is a subspace of the vector space M2×2, with the standard operations of matrix addition and scalar multiplication.
Sol: W is nonempty (the 2×2 zero matrix is symmetric). If A and B are in W, then (A + B)^T = A^T + B^T = A + B, so A + B is in W. If A is in W and c is a scalar, then (cA)^T = cA^T = cA, so cA is in W. By Thm 3.4, W is a subspace of M2×2.
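The two closure conditions of Thm 3.4 can be spot-checked numerically for particular symmetric matrices; the matrices below are chosen arbitrarily for illustration (a sketch, assuming NumPy):

```python
import numpy as np

# Two arbitrary symmetric 2x2 matrices, chosen only for illustration
A = np.array([[1.0, 2.0],
              [2.0, 3.0]])
B = np.array([[0.0, -1.0],
              [-1.0, 4.0]])
c = 2.5

# Closure under addition and scalar multiplication: the results are again symmetric
print(np.array_equal((A + B).T, A + B))   # True
print(np.array_equal((c * A).T, c * A))   # True
```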

21 Ex: (Determining subspaces of R^3)
Sol:

22 Thm 3.5: (The intersection of two subspaces is a subspace)
If U and W are both subspaces of a vector space V, then U ∩ W is also a subspace of V.
Proof: Follows automatically from Thm 3.4.

23 3.4 Spanning Sets and Linear Independence
Linear combination: A vector v in a vector space V is called a linear combination of the vectors u1, u2, ..., uk in V if v can be written as v = c1u1 + c2u2 + ... + ckuk, where c1, c2, ..., ck are scalars.
Ex: Given v = (-1, -2, -2), u1 = (0, 1, 4), u2 = (-1, 1, 2), and u3 = (3, 1, 2) in R^3, find a, b, and c such that v = au1 + bu2 + cu3.
Sol: Comparing components gives the system
-b + 3c = -1
a + b + c = -2
4a + 2b + 2c = -2
which has the unique solution a = 1, b = -2, c = -1, so v = u1 - 2u2 - u3.
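Finding a, b, c amounts to solving a 3×3 linear system whose coefficient columns are u1, u2, u3; a minimal sketch, assuming NumPy:

```python
import numpy as np

u1 = np.array([0, 1, 4])
u2 = np.array([-1, 1, 2])
u3 = np.array([3, 1, 2])
v  = np.array([-1, -2, -2])

# Columns of M are u1, u2, u3, so M @ [a, b, c] = v
M = np.column_stack([u1, u2, u3])
a, b, c = np.linalg.solve(M, v)
print(a, b, c)   # 1.0 -2.0 -1.0, i.e. v = u1 - 2*u2 - u3
```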

24 Ex: (Finding a linear combination)
Sol:

25 (This system has infinitely many solutions.)

26 The span of a set: span(S)
If S = {v1, v2, ..., vk} is a set of vectors in a vector space V, then the span of S is the set of all linear combinations of the vectors in S.
A spanning set of a vector space: If every vector in a given vector space U can be written as a linear combination of vectors in a given set S, then S is called a spanning set of the vector space U.

27 Notes:

28 Ex: (A spanning set for R^3)
Sol:

29

30 Thm 3.6: (span(S) is a subspace of V)
If S = {v1, v2, ..., vk} is a set of vectors in a vector space V, then
(a) span(S) is a subspace of V.
(b) span(S) is the smallest subspace of V that contains the spanning set S; i.e., every other subspace of V that contains S must contain span(S).

31 Linearly Independent (L.I.) and Linearly Dependent (L.D.):
Let S = {v1, v2, ..., vk} be a set of vectors in a vector space V. S is called linearly independent if the vector equation c1v1 + c2v2 + ... + ckvk = 0 has only the trivial solution c1 = c2 = ... = ck = 0. If the equation also has nontrivial solutions, then S is called linearly dependent.

32 Notes:

33 Ex: (Testing for linear independence)
Determine whether the following set of vectors in R^3 is L.I. or L.D.
Sol:

34 Ex: (Testing for linear independence)
Determine whether the following set of vectors in P_2 is L.I. or L.D.
S = {v1, v2, v3} = {1+x-2x^2, 2+5x-x^2, x+x^2}
Sol: c1v1 + c2v2 + c3v3 = 0, i.e. c1(1+x-2x^2) + c2(2+5x-x^2) + c3(x+x^2) = 0 + 0x + 0x^2
⇒ c1 + 2c2 = 0
   c1 + 5c2 + c3 = 0
   -2c1 - c2 + c3 = 0
This system has infinitely many solutions, i.e., it has nontrivial solutions (e.g., c1 = 2, c2 = -1, c3 = 3).
⇒ S is linearly dependent.
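In coordinates relative to {1, x, x^2}, this test reduces to checking whether the coefficient matrix has rank less than the number of vectors; a small sketch, assuming NumPy:

```python
import numpy as np

# Columns are the coefficient vectors of 1+x-2x^2, 2+5x-x^2, x+x^2 w.r.t. {1, x, x^2}
A = np.array([[ 1,  2, 0],
              [ 1,  5, 1],
              [-2, -1, 1]])

print(np.linalg.matrix_rank(A))   # 2 < 3, so the set is linearly dependent
print(A @ np.array([2, -1, 3]))   # [0 0 0]: the nontrivial solution from the slide
```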

35 Ex: (Testing for linear independence)
Determine whether the following set S = {v1, v2, v3} of vectors in the 2×2 matrix space is L.I. or L.D.
Sol: c1v1 + c2v2 + c3v3 = 0

36 ⇒ 2c1 + 3c2 + c3 = 0
   c1 = 0
   2c2 + 2c3 = 0
   c1 + c2 = 0
This system has only the trivial solution c1 = c2 = c3 = 0.
⇒ S is linearly independent.

37 Thm 3.7: (A property of linearly dependent sets)
A set S = {v1, v2, ..., vk}, k ≥ 2, is linearly dependent if and only if at least one of the vectors vj in S can be written as a linear combination of the other vectors in S.
Pf: (⇒) If S is linearly dependent, then c1v1 + c2v2 + ... + ckvk = 0 with ci ≠ 0 for some i, so vi can be solved for as a linear combination of the other vectors in S.

38 (⇐) Suppose vi = d1v1 + ... + d_{i-1}v_{i-1} + d_{i+1}v_{i+1} + ... + dkvk.
⇒ d1v1 + ... + d_{i-1}v_{i-1} - vi + d_{i+1}v_{i+1} + ... + dkvk = 0
⇒ c1 = d1, c2 = d2, ..., ci = -1, ..., ck = dk is a nontrivial solution
⇒ S is linearly dependent.
Corollary to Theorem 3.7: Two vectors u and v in a vector space V are linearly dependent if and only if one is a scalar multiple of the other.

39 3.5 Basis and Dimension
Basis: Let V be a vector space and S = {v1, v2, ..., vn} ⊆ V. If
(1) S spans V (i.e., span(S) = V), and
(2) S is linearly independent,
then S is called a basis for V.
In other words, a basis for a vector space V is a linearly independent spanning set of V: any vector in the space can be written as a linear combination of elements of this set. The dimension of the space is the number of elements in this basis.
(Bases lie in the overlap of the spanning sets and the linearly independent sets.)

40 Note: Beginning with the most elementary problems in physics and mathematics, it is clear that the choice of an appropriate coordinate system can provide great computational advantages. For example:
1. For the usual two- and three-dimensional vectors it is useful to express an arbitrary vector as a sum of unit vectors.
2. Similarly, the use of Fourier series for the analysis of functions is a very powerful tool in analysis.
These two ideas are essentially the same thing when you look at them as aspects of vector spaces.
Notes:
(1) Ø is a basis for {0}.
(2) The standard basis for R^3: {i, j, k}, where i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1).

41 (3) The standard basis for R^n: {e1, e2, ..., en}, where e1 = (1,0,...,0), e2 = (0,1,...,0), ..., en = (0,0,...,1).
Ex: R^4: {(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)}
(4) The standard basis for the m×n matrix space: {E_ij | 1 ≤ i ≤ m, 1 ≤ j ≤ n}.
(5) The standard basis for P_n(x): {1, x, x^2, ..., x^n}
Ex: P_3(x): {1, x, x^2, x^3}

42 Thm 3.8: (Uniqueness of basis representation)
If S = {v1, v2, ..., vn} is a basis for a vector space V, then every vector in V can be written as a linear combination of vectors in S in one and only one way.
Pf: S is a basis, so (1) span(S) = V and (2) S is linearly independent.
span(S) = V: every v in V can be written as v = c1v1 + c2v2 + ... + cnvn.
Suppose also v = b1v1 + b2v2 + ... + bnvn.
⇒ 0 = (c1 - b1)v1 + (c2 - b2)v2 + ... + (cn - bn)vn
⇒ (by linear independence) c1 = b1, c2 = b2, ..., cn = bn (i.e., uniqueness).

43 Thm 3.9: (Bases and linear dependence)
If S = {v1, v2, ..., vn} is a basis for a vector space V, then every set containing more than n vectors in V is linearly dependent.
Pf: Let S1 = {u1, u2, ..., um}, m > n, with each ui ∈ V. Since S spans V, each ui can be written as ui = c_{1i}v1 + c_{2i}v2 + ... + c_{ni}vn.

44 Let k1u1 + k2u2 + ... + kmum = 0. Substituting gives d1v1 + d2v2 + ... + dnvn = 0, with di = c_{i1}k1 + c_{i2}k2 + ... + c_{im}km.
Since S is linearly independent, di = 0 for all i. This homogeneous system has fewer equations (n) than variables (m), so it has infinitely many solutions.
⇒ k1u1 + k2u2 + ... + kmum = 0 has a nontrivial solution
⇒ S1 is linearly dependent.

45 Notes:
(1) dim({0}) = 0 = #(Ø)
(2) If dim(V) = n and S ⊆ V:
S a spanning set ⇒ #(S) ≥ n
S a L.I. set ⇒ #(S) ≤ n
S a basis ⇒ #(S) = n
(3) If dim(V) = n and W is a subspace of V, then dim(W) ≤ n.

46 Thm 3.10: (Number of vectors in a basis)
If a vector space V has one basis with n vectors, then every basis for V has n vectors. (That is, all bases for a finite-dimensional vector space have the same number of vectors.)
Pf: Let S = {v1, v2, ..., vn} and S' = {u1, u2, ..., um} be two bases for a vector space V. By Thm 3.9, since S spans V and S' is linearly independent, m ≤ n; reversing the roles of S and S' gives n ≤ m; hence m = n.

47 Finite dimensional: A vector space V is called finite dimensional if it has a basis consisting of a finite number of elements.
Infinite dimensional: If a vector space V is not finite dimensional, then it is called infinite dimensional.
Dimension: The dimension of a finite dimensional vector space V is defined to be the number of vectors in a basis for V.
V: a vector space, S: a basis for V ⇒ dim(V) = #(S) (the number of vectors in S).

48 Ex: (Finding the dimension of a subspace)
(a) W1 = {(d, c-d, c): c and d are real numbers}
(b) W2 = {(2b, b, 0): b is a real number}
Sol: Find a set of L.I. vectors that spans the subspace.
(a) (d, c-d, c) = c(0, 1, 1) + d(1, -1, 0)
⇒ S = {(0, 1, 1), (1, -1, 0)} is L.I. and spans W1
⇒ S is a basis for W1 ⇒ dim(W1) = #(S) = 2
(b) S = {(2, 1, 0)} spans W2 and S is L.I.
⇒ S is a basis for W2 ⇒ dim(W2) = #(S) = 1
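Equivalently, the dimension is the rank of the matrix whose rows are the spanning vectors; a quick check, assuming NumPy:

```python
import numpy as np

# W1 is spanned by (0, 1, 1) and (1, -1, 0); W2 by (2, 1, 0)
W1_span = np.array([[0, 1, 1],
                    [1, -1, 0]])
W2_span = np.array([[2, 1, 0]])

print(np.linalg.matrix_rank(W1_span))  # 2 -> dim(W1) = 2
print(np.linalg.matrix_rank(W2_span))  # 1 -> dim(W2) = 1
```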

49 Ex: (Finding the dimension of a subspace)
Let W be the subspace of all symmetric matrices in M2×2. What is the dimension of W?
Sol: Every symmetric 2×2 matrix is a combination a[1 0; 0 0] + b[0 1; 1 0] + c[0 0; 0 1], so S = {[1 0; 0 0], [0 1; 1 0], [0 0; 0 1]} spans W and S is L.I.
⇒ S is a basis for W ⇒ dim(W) = #(S) = 3

50 Thm 3.11: (Basis tests in an n-dimensional space)
Let V be a vector space of dimension n.
(1) If S = {v1, v2, ..., vn} is a linearly independent set of n vectors in V, then S is a basis for V.
(2) If S = {v1, v2, ..., vn} is a set of n vectors that spans V, then S is a basis for V.

51 3.6 Rank of a Matrix and Systems of Linear Equations
Let A be an m×n matrix.
Row vectors: the m rows of A, each an element of R^n, are called the row vectors of A.
Column vectors: the n columns of A, denoted A^(1), A^(2), ..., A^(n), each an element of R^m, are called the column vectors of A.

52 Let A be an m×n matrix.
Row space: The row space of A is the subspace of R^n spanned by the m row vectors of A.
Column space: The column space of A is the subspace of R^m spanned by the n column vectors of A.
Null space: The null space of A is the set of all solutions of Ax = 0, and it is a subspace of R^n.

53 Notes:
(1) The row space of a matrix is not changed by elementary row operations: RS(r(A)) = RS(A), where r is any elementary row operation.
(2) However, elementary row operations can change the column space.
Thm 3.12: (Row-equivalent matrices have the same row space)
If an m×n matrix A is row equivalent to an m×n matrix B, then the row space of A is equal to the row space of B.

54 Thm 3.13: (Basis for the row space of a matrix)
If a matrix A is row equivalent to a matrix B in row-echelon form, then the nonzero row vectors of B form a basis for the row space of A.

55 Ex: (Finding a basis for a row space)
Find a basis of the row space of A.
Sol: Row-reduce A to a matrix B in row-echelon form.

56 Notes: a basis for RS(A) = {the nonzero row vectors of B} (Thm 3.13)
= {w1, w2, w3} = {(1, 3, 1, 3), (0, 1, 1, 0), (0, 0, 0, 1)}
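The slide's matrix A is not reproduced in this transcript, so the sketch below only illustrates the technique of Thm 3.13 on a made-up matrix, using SymPy's rref (an assumed tool):

```python
from sympy import Matrix

# Hypothetical matrix (NOT the one from the slide, which is not shown in this transcript)
A = Matrix([[1, 3, 1, 3],
            [2, 7, 3, 6],
            [1, 2, 0, 4]])

B, pivots = A.rref()                               # reduced row-echelon form of A
row_basis = [B.row(i) for i in range(B.rank())]    # nonzero rows of B
print(row_basis)                                   # a basis for the row space of A
```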

57 Ex: (Finding a basis for the column space of a matrix)
Find a basis for the column space of the matrix A.
Sol. 1:

58 CS(A) = RS(A^T), so a basis for CS(A) = a basis for RS(A^T) = {the nonzero row vectors of B} = {w1, w2, w3}.
Note: This basis is not a subset of {c1, c2, c3, c4}.

59 Sol. 2: Row-reduce A to B and locate the columns of B with leading 1's.
⇒ {v1, v2, v4} is a basis for CS(B)
⇒ {c1, c2, c4} is a basis for CS(A)
Notes:
(1) This basis is a subset of {c1, c2, c3, c4}.
(2) v3 = -2v1 + v2, thus c3 = -2c1 + c2.
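Sol. 2 corresponds to taking the pivot columns: rref also reports which columns contain leading 1's, and those columns of the original matrix form a basis for its column space. Continuing with the same made-up matrix as above (the slide's A is not shown in the transcript):

```python
from sympy import Matrix

# Same hypothetical matrix as in the row-space sketch
A = Matrix([[1, 3, 1, 3],
            [2, 7, 3, 6],
            [1, 2, 0, 4]])

_, pivots = A.rref()                  # indices of columns containing leading 1's
col_basis = [A.col(j) for j in pivots]
print(pivots)     # (0, 1, 3): columns 1, 2, and 4 of A form a basis for CS(A)
```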

60 Thm 3.14: (Solutions of a homogeneous system)
If A is an m×n matrix, then the set of all solutions of Ax = 0 is a subspace of R^n called the nullspace of A.
Proof: NS(A) is nonempty (0 is a solution); if Ax1 = 0 and Ax2 = 0, then A(x1 + x2) = 0 and A(cx1) = 0, so NS(A) is closed under addition and scalar multiplication.
Notes: The nullspace of A is also called the solution space of the homogeneous system Ax = 0.

61 Ex: Find the solution space of a homogeneous system Ax = 0.
Sol: The nullspace of A is the solution space of Ax = 0. Row-reducing A and solving gives
x1 = -2s - 3t, x2 = s, x3 = -t, x4 = t,
i.e., x = s(-2, 1, 0, 0) + t(-3, 0, -1, 1), so NS(A) = span{(-2, 1, 0, 0), (-3, 0, -1, 1)}.
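SymPy's nullspace() produces the same kind of parametric description; the matrix below is a stand-in consistent with the parametrization above, since the slide's actual A is not shown in the transcript:

```python
from sympy import Matrix

# A matrix whose null space matches the parametrization above
# (the slide's actual A is not shown in this transcript)
A = Matrix([[1, 2, 0, 3],
            [0, 0, 1, 1]])

basis = A.nullspace()
for v in basis:
    print(v.T)   # Matrix([[-2, 1, 0, 0]]) and Matrix([[-3, 0, -1, 1]])
```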

62 Thm 3.15: (Row and column space have equal dimensions)
If A is an m×n matrix, then the row space and the column space of A have the same dimension: dim(RS(A)) = dim(CS(A)).
Rank: The dimension of the row (or column) space of a matrix A is called the rank of A: rank(A) = dim(RS(A)) = dim(CS(A)).

63 Notes: rank(A^T) = dim(RS(A^T)) = dim(CS(A)) = rank(A). Therefore rank(A^T) = rank(A).
Nullity: The dimension of the nullspace of A is called the nullity of A: nullity(A) = dim(NS(A)).

64 Thm 3.16: (Dimension of the solution space)
If A is an m×n matrix of rank r, then the dimension of the solution space of Ax = 0 is n - r. That is, nullity(A) = n - rank(A) = n - r, i.e., n = rank(A) + nullity(A).
Notes: (n = #variables = #leading variables + #nonleading variables)
(1) rank(A): the number of leading variables in the solution of Ax = 0 (i.e., the number of nonzero rows in the row-echelon form of A).
(2) nullity(A): the number of free variables (nonleading variables) in the solution of Ax = 0.
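Thm 3.16 can be spot-checked numerically: the rank plus the number of null-space basis vectors equals the number of columns. A sketch with an arbitrary illustrative matrix (SymPy assumed):

```python
from sympy import Matrix

# Any matrix will do for the check; this one is just an illustration
A = Matrix([[1, 2, 0, 3, 1],
            [0, 0, 1, 1, 2],
            [1, 2, 1, 4, 3]])

r = A.rank()
nullity = len(A.nullspace())
print(r, nullity, A.cols)        # 2 3 5
assert r + nullity == A.cols     # rank(A) + nullity(A) = n
```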

65 Notes: If A is an m×n matrix and rank(A) = r, then
Fundamental space    Dimension
RS(A) = CS(A^T)      r
CS(A) = RS(A^T)      r
NS(A)                n - r
NS(A^T)              m - r

66 Ex: (Rank and nullity of a matrix)
Let the column vectors of the matrix A be denoted by a1, a2, a3, a4, and a5.
(a) Find the rank and nullity of A.
(b) Find a subset of the column vectors of A that forms a basis for the column space of A.

67 Sol: Let B be the reduced row-echelon form of A, with columns b1, b2, b3, b4, b5.
(a) rank(A) = 3 (the number of nonzero rows in B)
nullity(A) = n - rank(A) = 5 - 3 = 2

68 (b) The columns of B containing leading 1's determine the basis columns: the corresponding columns of A form a basis for CS(A).
(c)

69 Thm 3.17: (Solutions of an inhomogeneous linear system)
If xp is a particular solution of the inhomogeneous system Ax = b, then every solution of this system can be written in the form x = xp + xh, where xh is a solution of the corresponding homogeneous system Ax = 0.
Pf: Let x be any solution of Ax = b. Then A(x - xp) = Ax - Axp = b - b = 0, so xh = x - xp is a solution of Ax = 0 and x = xp + xh.

70 Ex: (Finding the solution set of an inhomogeneous system)
Find the set of all solution vectors of the system of linear equations.
Sol: Row-reduce the augmented matrix and write the solution in terms of the parameters s and t.

71 i.e. x = xp + su1 + tu2, where xh = su1 + tu2 is a solution of Ax = 0 and xp is a particular solution vector of Ax = b.
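The structure x = xp + xh is easy to reproduce numerically: take any particular solution of Ax = b and add null-space vectors. The system below is made up for illustration (the slide's numbers are not in the transcript), reusing the same stand-in matrix as in the null-space sketch above:

```python
import numpy as np

# Hypothetical consistent system (not the slide's); A has a nontrivial null space
A = np.array([[1.0, 2.0, 0.0, 3.0],
              [0.0, 0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])

x_p, *_ = np.linalg.lstsq(A, b, rcond=None)   # one particular solution of Ax = b

# Null-space basis written down by hand for this A (free variables x2 = s, x4 = t)
u1 = np.array([-2.0, 1.0, 0.0, 0.0])
u2 = np.array([-3.0, 0.0, -1.0, 1.0])

s, t = 2.0, -1.0                              # any parameters give another solution
x = x_p + s * u1 + t * u2
print(np.allclose(A @ x, b))                  # True
```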

72 Thm 3.18: (Solution of a system of linear equations)
The system of linear equations Ax = b is consistent if and only if b is in the column space of A (i.e., b ∈ CS(A)).
Pf: Let A, x, and b be the coefficient matrix, the column matrix of unknowns, and the right-hand side, respectively, of the system Ax = b.

73 Then Ax = x1A^(1) + x2A^(2) + ... + xnA^(n), a linear combination of the columns of A. Hence, Ax = b is consistent if and only if b is a linear combination of the columns of A. That is, the system is consistent if and only if b is in the subspace of R^m spanned by the columns of A.

74 Ex: (Consistency of a system of linear equations)
Sol:
Notes: If rank([A|b]) = rank(A) (Thm 3.18), then the system Ax = b is consistent.

75 The columns c1, c2, c3, b of [A|b] reduce to w1, w2, w3, v.
b is in the column space of A ⇒ the system of linear equations is consistent.
Check: rank([A|b]) = rank(A).
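The rank test from Thm 3.18 is easy to automate; a sketch with made-up systems, since the slide's numbers are not in the transcript (NumPy assumed):

```python
import numpy as np

def is_consistent(A, b):
    """Ax = b is consistent iff rank([A|b]) == rank(A)."""
    Ab = np.column_stack([A, b])
    return np.linalg.matrix_rank(Ab) == np.linalg.matrix_rank(A)

# Hypothetical examples (not the slide's system)
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(is_consistent(A, np.array([1.0, 2.0])))   # True:  b lies in CS(A)
print(is_consistent(A, np.array([1.0, 3.0])))   # False: b is not in CS(A)
```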

76 Summary of equivalent conditions for square matrices:
If A is an n×n matrix, then the following conditions are equivalent.
(1) A is invertible
(2) Ax = b has a unique solution for any n×1 matrix b
(3) Ax = 0 has only the trivial solution
(4) A is row-equivalent to I_n
(5) det(A) ≠ 0
(6) rank(A) = n
(7) The n row vectors of A are linearly independent
(8) The n column vectors of A are linearly independent

77 3.7 Coordinates and Change of Basis
Coordinate representation relative to a basis:
Let B = {v1, v2, ..., vn} be an ordered basis for a vector space V and let x be a vector in V such that x = c1v1 + c2v2 + ... + cnvn.
The scalars c1, c2, ..., cn are called the coordinates of x relative to the basis B. The coordinate matrix (or coordinate vector) of x relative to B is the column matrix in R^n whose components are the coordinates of x: [x]_B = [c1 c2 ... cn]^T.

78 Ex: (Coordinates and components in R^n)
Find the coordinate matrix of x = (-2, 1, 3) in R^3 relative to the standard basis S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.
Sol: x = -2(1, 0, 0) + 1(0, 1, 0) + 3(0, 0, 1), so [x]_S = [-2 1 3]^T (relative to the standard basis, the coordinates are simply the components).

79 Ex: (Finding a coordinate matrix relative to a nonstandard basis)
Find the coordinate matrix of x = (1, 2, -1) in R^3 relative to the (nonstandard) basis B' = {u1, u2, u3} = {(1, 0, 1), (0, -1, 2), (2, 3, -5)}.
Sol: Write x = c1u1 + c2u2 + c3u3 and solve the resulting system:
c1 + 2c3 = 1, -c2 + 3c3 = 2, c1 + 2c2 - 5c3 = -1
⇒ c1 = 5, c2 = -8, c3 = -2, so [x]_B' = [5 -8 -2]^T.
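Computationally, [x]_B' is the solution of the linear system whose coefficient columns are the basis vectors; a sketch with the example's numbers, assuming NumPy:

```python
import numpy as np

u1, u2, u3 = np.array([1, 0, 1]), np.array([0, -1, 2]), np.array([2, 3, -5])
x = np.array([1, 2, -1])

# Columns of M are the basis vectors, so M @ [c1, c2, c3] = x
M = np.column_stack([u1, u2, u3])
coords = np.linalg.solve(M, x)
print(coords)    # [ 5. -8. -2.], i.e. x = 5*u1 - 8*u2 - 2*u3
```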

80 Change of basis: you are given the coordinates of a vector relative to one basis B and asked to find the coordinates relative to another basis B'.
Ex: (Change of basis) Consider two bases for a vector space V.

81 Let

82 Transition matrix from B' to B:
If [v]_B is the coordinate matrix of v relative to B and [v]_B' is the coordinate matrix of v relative to B', then [v]_B = P[v]_B', where P = [[u1]_B [u2]_B ... [un]_B] is called the transition matrix from B' to B (its columns are the coordinate matrices of the B' basis vectors relative to B).

83 Thm 3.19: (The inverse of a transition matrix)
If P is the transition matrix from a basis B' to a basis B in R^n, then
(1) P is invertible
(2) The transition matrix from B to B' is P^(-1)
Notes: [v]_B = P[v]_B' and [v]_B' = P^(-1)[v]_B.

84 Thm 3.20: (Transition matrix from B to B')
Let B = {v1, v2, ..., vn} and B' = {u1, u2, ..., un} be two bases for R^n. Then the transition matrix P^(-1) from B to B' can be found by using Gauss-Jordan elimination on the n×2n matrix [B' B] as follows: [B' B] → [I_n P^(-1)].

85 Ex: (Finding a transition matrix)
B = {(-3, 2), (4, -2)} and B' = {(-1, 2), (2, -2)} are two bases for R^2.
(a) Find the transition matrix from B' to B.
(b)
(c) Find the transition matrix from B to B'.

86 Sol:
(a), (b) Apply Gauss-Jordan elimination to [B B']: [B B'] → [I P], where P is the transition matrix from B' to B.

87 (c) Apply Gauss-Jordan elimination to [B' B]: [B' B] → [I P^(-1)], where P^(-1) is the transition matrix from B to B'.
Check: PP^(-1) = I.
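The Gauss-Jordan computations in (a) and (c) are equivalent to P = B^(-1)B' and P^(-1) = (B')^(-1)B, with the basis vectors as matrix columns; a NumPy sketch using the example's bases:

```python
import numpy as np

# Basis vectors as columns
B  = np.column_stack([[-3, 2], [4, -2]]).astype(float)
Bp = np.column_stack([[-1, 2], [2, -2]]).astype(float)

P = np.linalg.solve(B, Bp)        # transition matrix from B' to B (same as B^-1 @ B')
P_inv = np.linalg.solve(Bp, B)    # transition matrix from B  to B'

print(P)                                    # [[ 3. -2.] [ 2. -1.]]
print(np.allclose(P @ P_inv, np.eye(2)))    # True (Thm 3.19)
```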

88 Ex: (Coordinate representation in P_3(x))
Find the coordinate matrix of p = 3x^3 - 2x^2 + 4 relative to the basis S = {1, 1+x, 1+x^2, 1+x^3} in P_3(x).
Sol: p = 3(1) + 0(1+x) + (-2)(1+x^2) + 3(1+x^3)
[p]_S = [3 0 -2 3]^T

89 Ex: (Coordinate representation in M2×2)
Find the coordinate matrix of a matrix x relative to the standard basis {E11, E12, E21, E22} in M2×2.
Sol:

