Vector Spaces
B.A./B.Sc. III: Mathematics (Paper II)
Baljeet Singh, Assistant Professor, Post Graduate Government College, Sector 11, Chandigarh

Contents:
1 Vectors in Rn
2 Vector Spaces
3 Subspaces of Vector Spaces
4 Spanning Sets and Linear Independence
5 Basis and Dimension
1 Vectors in Rn

An ordered n-tuple (x1, x2, …, xn): a sequence of n real numbers
Rn-space: the set of all ordered n-tuples

n = 1: R1-space = set of all real numbers (R1-space can be represented geometrically by the x-axis)
n = 2: R2-space = set of all ordered pairs of real numbers (R2-space can be represented geometrically by the xy-plane)
n = 3: R3-space = set of all ordered triples of real numbers (R3-space can be represented geometrically by xyz-space)
n = 4: R4-space = set of all ordered quadruples of real numbers
Notes:
(1) An n-tuple (x1, x2, …, xn) can be viewed as a point in Rn with the xi's as its coordinates
(2) An n-tuple (x1, x2, …, xn) can also be viewed as a vector in Rn with the xi's as its components
Ex: the same n-tuple can be interpreted either as a point or as a vector, depending on the context
Let u = (u1, u2, …, un) and v = (v1, v2, …, vn) be two vectors in Rn.
Equality: u = v if and only if u1 = v1, u2 = v2, …, un = vn
Vector addition (the sum of u and v): u + v = (u1+v1, u2+v2, …, un+vn)
Scalar multiplication (the scalar multiple of u by c): cu = (cu1, cu2, …, cun)
Notes: The sum of two vectors and the scalar multiple of a vector in Rn are called the standard operations in Rn
Difference between u and v: u – v = (u1–v1, u2–v2, …, un–vn)
Zero vector: 0 = (0, 0, …, 0)
Theorem 1: Properties of vector addition and scalar multiplication
Let u, v, and w be vectors in Rn, and let c and d be scalars.
(1) u+v is a vector in Rn (closure under vector addition)
(2) u+v = v+u (commutative property of vector addition)
(3) (u+v)+w = u+(v+w) (associative property of vector addition)
(4) u+0 = u (additive identity property)
(5) u+(–u) = 0 (additive inverse property)
(6) cu is a vector in Rn (closure under scalar multiplication)
(7) c(u+v) = cu+cv (distributive property of scalar multiplication over vector addition)
(8) (c+d)u = cu+du (distributive property of scalar multiplication over real-number addition)
(9) c(du) = (cd)u (associative property of scalar multiplication)
(10) 1(u) = u (multiplicative identity property)
Notes: A vector u = (u1, u2, …, un) in Rn can be viewed as a 1×n row matrix (row vector) [u1 u2 … un] or as an n×1 column matrix (column vector) with entries u1, u2, …, un.
Vector addition and scalar multiplication give the same results whether the vectors are treated as 1×n row matrices, u + v = [u1+v1 … un+vn] and cu = [cu1 … cun], or as n×1 column matrices.
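As a quick numeric illustration (a minimal sketch using NumPy with made-up vectors, not part of the original notes), the standard operations in Rn and one of the Theorem 1 properties can be checked as follows:

    import numpy as np

    u = np.array([1.0, 2.0, -1.0])   # an illustrative vector in R3
    v = np.array([0.5, 0.0,  3.0])
    c = 2.0

    print(u + v)                       # vector addition: (u1+v1, u2+v2, u3+v3)
    print(c * u)                       # scalar multiplication: (cu1, cu2, cu3)
    print(u - v)                       # difference: u + (-1)v
    print(np.allclose(u + v, v + u))   # True: commutativity (Theorem 1, property 2)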
2 Vector Spaces

Vector space: Let V be a set on which two operations (vector addition and scalar multiplication) are defined. If the following ten axioms are satisfied for every u, v, and w in V and every scalar (real number) c and d, then V is called a vector space.

Addition:
(1) u+v is in V
(2) u+v = v+u
(3) u+(v+w) = (u+v)+w
(4) V has a zero vector 0 such that for every u in V, u+0 = u
(5) For every u in V, there is a vector in V denoted by –u such that u+(–u) = 0
Scalar multiplication:
(6) cu is in V
(7) c(u+v) = cu+cv
(8) (c+d)u = cu+du
(9) c(du) = (cd)u
(10) 1(u) = u
※ Any set V that satisfies these ten properties (or axioms) is called a vector space, and the objects in the set are called vectors
※ Thus, we can conclude that Rn is of course a vector space
Four examples of vector spaces are introduced as follows. (It is straightforward to show that these vector spaces satisfy the above ten axioms.)
(1) n-tuple space Rn, with the standard vector addition and the standard scalar multiplication for vectors
(2) Matrix space Mm×n: the set of all m×n matrices with real-number entries, with the standard matrix addition and the standard scalar multiplication for matrices (Ex: m = n = 2, the space of all 2×2 matrices)
(3) Polynomial space Pn: the set of all real polynomials of degree n or less
※ Because the set of real numbers is closed under addition and multiplication, it is straightforward to show that Pn satisfies the ten axioms and thus is a vector space
(4) Continuous function space C(–∞, ∞): the set of all real-valued continuous functions defined on the entire real line
※ Because the sum of two continuous functions is continuous and the product of a scalar and a continuous function is still continuous, C(–∞, ∞) is a vector space
Summary of important vector spaces: R, Rn, Mm×n (m×n matrices), Pn (polynomials of degree n or less), C(–∞, ∞) (continuous functions on the real line)
※ Each element in a vector space is called a vector, so a vector can be a real number, an n-tuple, a matrix, a polynomial, a continuous function, etc.
Theorem 2: Properties of scalar multiplication
Let v be any element of a vector space V, and let c be any scalar. Then the following properties are true:
(1) 0v = 0
(2) c0 = 0
(3) If cv = 0, then c = 0 or v = 0
(4) (–1)v = –v (the additive inverse of v equals (–1)v)
Notes: To show that a set is not a vector space, you need only find one axiom that is not satisfied.

Ex 1: The set of all integers is not a vector space
Pf: (1/2)(1) = 1/2 is a scalar multiple of the integer 1 that is not an integer (the set is not closed under scalar multiplication)

Ex 2: The set of all (exact) second-degree polynomial functions is not a vector space
Pf: Let p(x) = x2 and q(x) = –x2 + x + 1. Then p(x) + q(x) = x + 1 has degree 1, not 2 (the set is not closed under vector addition)
Ex 3: V = R2 = the set of all ordered pairs of real numbers
vector addition: (u1, u2) + (v1, v2) = (u1+v1, u2+v2) (standard)
scalar multiplication: c(u1, u2) = (cu1, 0) (nonstandard definition)
Verify that V is not a vector space
Sol: This setting satisfies the first nine axioms of the definition of a vector space (you can try to show that), but it violates the tenth axiom: for u = (1, 1), 1(u) = (1, 0) ≠ (1, 1) = u.
Hence the set (together with the two given operations) is not a vector space
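A quick numeric check of the violated axiom (a sketch, assuming the nonstandard scalar multiplication c(u1, u2) = (cu1, 0) stated above):

    import numpy as np

    def nonstandard_scale(c, u):
        # nonstandard scalar multiplication on R2: c(u1, u2) = (c*u1, 0)
        return np.array([c * u[0], 0.0])

    u = np.array([3.0, 4.0])
    print(nonstandard_scale(1.0, u))                        # [3. 0.]
    print(np.array_equal(nonstandard_scale(1.0, u), u))     # False: axiom 10, 1(u) = u, fails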
3 Subspaces of Vector Spaces

Let V be a vector space and W a nonempty subset of V. W is called a subspace of V if W is itself a vector space under the operations of addition and scalar multiplication defined in V.

Trivial subspaces: Every vector space V has at least two subspaces
(1) The zero vector space {0} is a subspace of V (it satisfies the ten axioms)
(2) V is a subspace of V
※ Any subspace other than these two is called a proper (or nontrivial) subspace
Examining whether W is a subspace: Since the operations defined on W are the same as those defined on V, most of the ten axioms are inherited from V and need not be verified separately. The following theorem tells us it is sufficient to test the closure conditions under vector addition and scalar multiplication to decide whether a nonempty subset of a vector space is a subspace.

Theorem 3: Test for a subspace
If W is a nonempty subset of a vector space V, then W is a subspace of V if and only if the following conditions hold
(1) If u and v are in W, then u+v is in W
(2) If u is in W and c is any scalar, then cu is in W
Pf: Note that if u, v, and w are in W, then they are also in V. Furthermore, W and V share the same operations, so vector space axioms 2, 3, 7, 8, 9, and 10 are satisfied automatically. If the closure conditions in Theorem 3 hold, axioms 1 and 6 are satisfied as well. Finally, since axiom 6 is satisfied (cu is in W whenever u is in W), taking c = 0 gives 0u = 0 in W (axiom 4), and taking c = –1 gives (–1)u = –u in W (axiom 5). Thus W satisfies all ten axioms and is a subspace of V.
Ex 2: A subspace of M2×2
Let W be the set of all 2×2 symmetric matrices. Show that W is a subspace of the vector space M2×2, with the standard operations of matrix addition and scalar multiplication.
Sol: Recall that a matrix A is symmetric if AT = A. W is nonempty because the 2×2 zero matrix is symmetric. If A and B are in W, then (A+B)T = AT + BT = A + B, so A+B is in W; and (cA)T = cAT = cA, so cA is in W. By Theorem 3, W is a subspace of M2×2.
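A numeric illustration of the two closure conditions (a sketch with illustrative matrices, not part of the original notes):

    import numpy as np

    def is_symmetric(M):
        return np.array_equal(M, M.T)

    A = np.array([[1.0, 2.0], [2.0, 5.0]])     # an illustrative symmetric matrix
    B = np.array([[0.0, -3.0], [-3.0, 4.0]])   # another symmetric matrix
    c = 7.0

    print(is_symmetric(A + B))   # True: W is closed under matrix addition
    print(is_symmetric(c * A))   # True: W is closed under scalar multiplication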
Ex 3: The set of singular matrices is not a subspace of M2×2
Let W be the set of singular (noninvertible) matrices of order 2. Show that W is not a subspace of M2×2 with the standard matrix operations.
Sol: Take A = [1 0; 0 0] and B = [0 0; 0 1]. Both A and B are singular (each has determinant 0), but A + B = I2 is invertible, so A + B is not in W (W is not closed under vector addition).
Ex 4: The set of first-quadrant vectors is not a subspace of R2
Show that W = {(x1, x2): x1 ≥ 0 and x2 ≥ 0}, with the standard operations, is not a subspace of R2.
Sol: The vector u = (1, 1) is in W, but the scalar multiple (–1)u = (–1, –1) is not in W (W is not closed under scalar multiplication).
Ex 5: Identify subspaces of R2
Which of the following two subsets is a subspace of R2?
(a) The set of points on the line given by x + 2y = 0
(b) The set of points on the line given by x + 2y = 1
Sol:
(a) W = {(x, y): x + 2y = 0} = {(–2t, t): t is a real number} (Note: the zero vector (0, 0) is on this line)
Let v1 = (–2t1, t1) and v2 = (–2t2, t2) be in W and c be a scalar. Then v1 + v2 = (–2(t1+t2), t1+t2) is in W (closed under vector addition) and cv1 = (–2(ct1), ct1) is in W (closed under scalar multiplication). Thus W is a subspace of R2.
(b) W = {(x, y): x + 2y = 1} is not a subspace of R2, because the zero vector (0, 0) is not on this line and every subspace must contain the zero vector.
Ex 8: Identify subspaces of R3
Sol:
(a) The zero vector is not in W, so W is not a subspace of R3.
(b) The zero vector is in W, and checking the two closure conditions of Theorem 3 shows that W is a subspace of R3.
Theorem 4: The intersection of two subspaces is a subspace
Pf: Let V and W be subspaces of a vector space U. V ∩ W is nonempty because the zero vector lies in both V and W. If u and v are in V ∩ W, then u + v is in V and in W (each is closed under addition), so u + v is in V ∩ W; similarly cu is in both V and W, so cu is in V ∩ W. By Theorem 3, V ∩ W is a subspace of U.
※ However, the union of two subspaces is in general not a subspace. For example, the x-axis and the y-axis are subspaces of R2, but their union is not: (1, 0) + (0, 1) = (1, 1) lies in neither axis.
4 Spanning Sets and Linear Independence

Linear combination: A vector v in a vector space V is called a linear combination of the vectors u1, u2, …, uk in V if v can be written in the form v = c1u1 + c2u2 + … + ckuk, where c1, c2, …, ck are scalars.
Ex: Finding a linear combination
Write a given vector w as a linear combination of vectors v1, v2, …, vk by solving the linear system w = c1v1 + c2v2 + … + ckvk for the coefficients c1, c2, …, ck.
Sol: (this system has infinitely many solutions, so the representation is not unique)
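The coefficients can be found by solving a linear system. Here is a small sketch with made-up vectors (not the data of the original example):

    import numpy as np

    # hypothetical vectors in R3, chosen only for illustration
    v1 = np.array([1.0, 2.0, 3.0])
    v2 = np.array([0.0, 1.0, 2.0])
    w  = np.array([1.0, 4.0, 7.0])

    # solve c1*v1 + c2*v2 = w; a zero residual means w really is a linear combination
    A = np.column_stack([v1, v2])
    c, residual, rank, _ = np.linalg.lstsq(A, w, rcond=None)
    print(c)                          # coefficients c1, c2 (here 1 and 2)
    print(np.allclose(A @ c, w))      # True: w = 1*v1 + 2*v2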
The span of a set: span(S)
If S = {v1, v2, …, vk} is a set of vectors in a vector space V, then the span of S is the set of all linear combinations of the vectors in S:
span(S) = {c1v1 + c2v2 + … + ckvk : c1, c2, …, ck are real numbers}

Definition of a spanning set of a vector space:
If every vector in a given vector space V can be written as a linear combination of vectors in a set S, then S is called a spanning set of the vector space V
Note: The above statement can be expressed as follows: S spans V (i.e., S is a spanning set of V) if and only if span(S) = V.
Ex 5: A spanning set for R3
To show that a set S of three vectors spans R3, verify that an arbitrary vector u = (u1, u2, u3) in R3 can be written as a linear combination of the vectors in S, i.e., that the corresponding linear system is consistent for every u.
Theorem 5: span(S) is a subspace of V
If S = {v1, v2, …, vk} is a set of vectors in a vector space V, then
(a) span(S) is a subspace of V
(b) span(S) is the smallest subspace of V that contains S, i.e., every other subspace of V containing S must contain span(S)
Pf:
(a) span(S) is nonempty (it contains 0 = 0v1 + … + 0vk). If u = c1v1 + … + ckvk and v = d1v1 + … + dkvk are in span(S), then u + v = (c1+d1)v1 + … + (ck+dk)vk and cu = (cc1)v1 + … + (cck)vk are again linear combinations of vectors in S, so span(S) is closed under vector addition and scalar multiplication. By Theorem 3, span(S) is a subspace of V.
(b) Let U be any subspace of V that contains S. Every vector in span(S) is a linear combination of vectors in S, hence of vectors in U; because U is closed under vector addition and scalar multiplication, every such linear combination lies in U. Therefore span(S) is contained in U.
※ For example, let V = R5 and S = {(1, 0, 0, 0, 0), (0, 1, 0, 0, 0), (0, 0, 1, 0, 0)}. Then span(S) = {(x1, x2, x3, 0, 0)} (a copy of R3 inside R5), and the subspace U = {(x1, x2, x3, x4, 0)} (a copy of R4 inside R5) contains S and therefore contains span(S) as well.
Definitions of Linear Independence (L.I.) and Linear Dependence (L.D.):
A set of vectors S = {v1, v2, …, vk} in a vector space V is called linearly independent if the vector equation c1v1 + c2v2 + … + ckvk = 0 has only the trivial solution c1 = c2 = … = ck = 0. If the equation also has nontrivial solutions (the ci not all zero), then S is called linearly dependent.
Ex 8: Testing for linear independence
Determine whether the following set of vectors in R3 is L.I. or L.D.
Sol: Set c1v1 + c2v2 + c3v3 = 0, write out the resulting homogeneous linear system, and check whether it has only the trivial solution (L.I.) or nontrivial solutions (L.D.).
Ex 9: Testing for linear independence
Determine whether the following set of vectors in P2 is L.I. or L.D.: v1 = 1 + x – 2x2, v2 = 2 + 5x – x2, v3 = x + x2
Sol: c1v1 + c2v2 + c3v3 = 0
i.e., c1(1 + x – 2x2) + c2(2 + 5x – x2) + c3(x + x2) = 0 + 0x + 0x2
⇒ c1 + 2c2 = 0
   c1 + 5c2 + c3 = 0
   –2c1 – c2 + c3 = 0
This system has infinitely many solutions (i.e., it has nontrivial solutions, e.g., c1 = 2, c2 = –1, c3 = 3)
⇒ S is (or v1, v2, v3 are) linearly dependent
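The same check can be done numerically (a small sketch using NumPy; the coefficient matrix below is read off from the system above):

    import numpy as np

    # coefficient matrix of the homogeneous system in c1, c2, c3
    A = np.array([[ 1.0,  2.0, 0.0],
                  [ 1.0,  5.0, 1.0],
                  [-2.0, -1.0, 1.0]])

    print(np.linalg.matrix_rank(A))          # 2 < 3 unknowns, so nontrivial solutions exist (L.D.)
    c = np.array([2.0, -1.0, 3.0])           # the nontrivial solution quoted above
    print(np.allclose(A @ c, np.zeros(3)))   # True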
Ex 10: Testing for linear independence
Determine whether the following set of vectors in the 2×2 matrix space is L.I. or L.D.
Sol: Set c1v1 + c2v2 + c3v3 = 0 (the 2×2 zero matrix); comparing entries gives the homogeneous system below
2c1 + 3c2 + c3 = 0
c1 = 0
2c2 + 2c3 = 0
c1 + c2 = 0
This system has only the trivial solution c1 = c2 = c3 = 0
⇒ S is linearly independent
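Again a quick numeric confirmation (a sketch; the 4×3 coefficient matrix is taken from the four equations above):

    import numpy as np

    A = np.array([[2.0, 3.0, 1.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 2.0, 2.0],
                  [1.0, 1.0, 0.0]])

    # full column rank (3) means the homogeneous system has only the trivial solution
    print(np.linalg.matrix_rank(A))   # 3, so S is linearly independent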
Theorem 6: A property of linearly dependent sets
A set S = {v1, v2, …, vk}, k ≥ 2, is linearly dependent if and only if at least one of the vectors vi in S can be written as a linear combination of the other vectors in S
Pf: (⇒) Suppose S is linearly dependent, so c1v1 + c2v2 + … + ckvk = 0 with ci ≠ 0 for some i. Then
vi = (–c1/ci)v1 + … + (–ci–1/ci)vi–1 + (–ci+1/ci)vi+1 + … + (–ck/ci)vk,
a linear combination of the other vectors in S.
(⇐) Let vi = d1v1 + … + di–1vi–1 + di+1vi+1 + … + dkvk. Then
d1v1 + … + di–1vi–1 – vi + di+1vi+1 + … + dkvk = 0,
so c1 = d1, …, ci = –1, …, ck = dk is a nontrivial solution (since ci = –1 ≠ 0), and hence S is linearly dependent.

Corollary to Theorem 6: Two vectors u and v in a vector space V are linearly dependent (the case k = 2 in Theorem 6) if and only if one is a scalar multiple of the other.
5 Basis and Dimension

Basis: Let V be a vector space and S = {v1, v2, …, vn} a subset of V. S is called a basis for V if
(1) S spans V (i.e., span(S) = V), and
(2) S is linearly independent
(Bases are exactly the sets that are both spanning sets and linearly independent sets.)
Notes: A basis S must have enough vectors to span V, but not so many vectors that one of them could be written as a linear combination of the other vectors in S
Notes:
(1) The standard basis for R3: {i, j, k}, where i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1)
(2) The standard basis for Rn: {e1, e2, …, en}, where e1 = (1, 0, …, 0), e2 = (0, 1, …, 0), …, en = (0, 0, …, 1)
Ex: For R4, {(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)}
(3) The standard basis for the m×n matrix space Mm×n: the mn matrices Eij, each having a 1 in position (i, j) and 0 elsewhere
Ex: For the 2×2 matrix space M2×2: {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}
(4) The standard basis for Pn(x): {1, x, x2, …, xn}
Ex: For P3(x): {1, x, x2, x3}
Ex 2: A nonstandard basis for R2
To show that a set S of two vectors is a basis for R2, write an arbitrary vector u = (u1, u2) as a linear combination of the vectors in S, and separately set a linear combination of the vectors in S equal to the zero vector; each step leads to a 2×2 linear system.
Spanning: Because the coefficient matrix of this system has a nonzero determinant, the system has a unique solution for each u. Thus you can conclude that S spans R2.
Independence: Because the coefficient matrix of the homogeneous system has a nonzero determinant, the system has only the trivial solution. Thus you can conclude that S is linearly independent.
According to the above two arguments, we can conclude that S is a (nonstandard) basis for R2.
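A numeric version of this determinant test (a sketch with the illustrative set S = {(1, 1), (1, –1)}, which is not necessarily the set used in the original example):

    import numpy as np

    v1 = np.array([1.0, 1.0])
    v2 = np.array([1.0, -1.0])
    A = np.column_stack([v1, v2])   # coefficient matrix of c1*v1 + c2*v2 = u

    print(np.linalg.det(A))         # -2.0, nonzero, so S spans R2 and is linearly independent
    u = np.array([3.0, 5.0])
    print(np.linalg.solve(A, u))    # the unique coefficients c1, c2 with c1*v1 + c2*v2 = u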
Theorem 7: Uniqueness of basis representation
If S = {v1, v2, …, vn} is a basis for a vector space V, then every vector in V can be written in one and only one way as a linear combination of vectors in S
Pf: (1) span(S) = V, so every v in V can be written as v = c1v1 + c2v2 + … + cnvn
(2) Suppose also that v = b1v1 + b2v2 + … + bnvn. Then
v + (–1)v = 0 = (c1–b1)v1 + (c2–b2)v2 + … + (cn–bn)vn
Since S is linearly independent, c1 = b1, c2 = b2, …, cn = bn (i.e., the basis representation is unique)
Theorem 8: Bases and linear dependence
If S = {v1, v2, …, vn} is a basis for a vector space V, then every set containing more than n vectors in V is linearly dependent. (In other words, every linearly independent set in V contains at most n vectors.)
Pf: Let S1 = {u1, u2, …, um}, m > n, where each ui is in V. Since S is a basis, each ui can be written as ui = c1iv1 + c2iv2 + … + cnivn.
Consider k1u1 + k2u2 + … + kmum = 0 (if the ki's are not all zero, S1 is linearly dependent). Substituting the expressions for the ui gives
d1v1 + d2v2 + … + dnvn = 0, where di = ci1k1 + ci2k2 + … + cimkm
Since S is linearly independent, di = 0 for every i, i.e., the ki satisfy the homogeneous system
ci1k1 + ci2k2 + … + cimkm = 0, i = 1, 2, …, n
Theorem: if a homogeneous system has fewer equations (n equations) than variables (the m unknowns k1, k2, …, km), then it has infinitely many solutions.
Since m > n, the equation k1u1 + k2u2 + … + kmum = 0 has a nontrivial (nonzero) solution, so S1 is linearly dependent.
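To see this concretely (a sketch with made-up vectors, not from the notes): any four vectors in R3 are linearly dependent, and a nonzero dependence relation can be read off from the null space of the matrix whose columns are the vectors:

    import numpy as np

    # four illustrative vectors in R3 as columns of A; m = 4 > n = 3
    A = np.array([[1.0, 0.0, 1.0, 2.0],
                  [0.0, 1.0, 1.0, 0.0],
                  [1.0, 1.0, 0.0, 1.0]])

    # A has at most 3 nonzero singular values, so the last right-singular vector
    # lies in the null space of A and gives a nontrivial dependence relation
    _, s, Vt = np.linalg.svd(A)
    k = Vt[-1]
    print(np.allclose(A @ k, 0))     # True: k1*u1 + ... + k4*u4 = 0 with k nonzero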
Theorem 9: Number of vectors in a basis
If a vector space V has one basis with n vectors, then every basis for V has n vectors.
Pf: Let S = {v1, v2, …, vn} and S' = {u1, u2, …, um} be two bases for V. Since S is a basis and S' is linearly independent, Theorem 8 gives m ≤ n; since S' is a basis and S is linearly independent, the same argument gives n ≤ m. Hence m = n.
※ According to Theorem 8, in a vector space spanned by a set of n vectors, every linearly independent set contains at most n vectors.
※ For R3, since the standard basis {(1, 0, 0), (0, 1, 0), (0, 0, 1)} spans this vector space, you can infer that any basis of R3 must have exactly 3 vectors.
※ For example, S = {(1, 2, 3), (0, 1, 2), (–2, 0, 1)} is another basis of R3 (because S spans R3 and S is linearly independent), and S has 3 vectors.
Dimension: If V is a vector space and S is a basis for V, the dimension of V is defined to be the number of vectors in S: dim(V) = #(S).
Finite dimensional: A vector space V is finite dimensional if it has a basis consisting of a finite number of elements.
Infinite dimensional: If a vector space V is not finite dimensional, then it is called infinite dimensional.
Ex 9: Finding the dimension of a subspace of R3
(a) W = {(d, c–d, c): c and d are real numbers}
(b) W = {(2b, b, 0): b is a real number}
(Hint: find a set of L.I. vectors that spans the subspace, i.e., find a basis for the subspace.)
Sol:
(a) (d, c–d, c) = c(0, 1, 1) + d(1, –1, 0)
⇒ S = {(0, 1, 1), (1, –1, 0)} is L.I. and spans W ⇒ S is a basis for W ⇒ dim(W) = #(S) = 2
(b) (2b, b, 0) = b(2, 1, 0) ⇒ S = {(2, 1, 0)} spans W and S is L.I. ⇒ S is a basis for W ⇒ dim(W) = #(S) = 1
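The dimensions can also be read off as matrix ranks (a quick sketch; the spanning vectors are the ones found above):

    import numpy as np

    # part (a): the two spanning vectors of W as rows
    A = np.array([[0.0,  1.0, 1.0],
                  [1.0, -1.0, 0.0]])
    print(np.linalg.matrix_rank(A))   # 2 = dim(W)

    # part (b): the single spanning vector
    B = np.array([[2.0, 1.0, 0.0]])
    print(np.linalg.matrix_rank(B))   # 1 = dim(W)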
Ex 11: Finding the dimension of a subspace of M2×2
Let W be the subspace of all symmetric matrices in M2×2. What is the dimension of W?
Sol: Every matrix in W has the form [a b; b c] = a[1 0; 0 0] + b[0 1; 1 0] + c[0 0; 0 1], so
S = {[1 0; 0 0], [0 1; 1 0], [0 0; 0 1]}
spans W and S is L.I. ⇒ S is a basis for W ⇒ dim(W) = #(S) = 3