6.4. Invariant subspaces: decomposing linear maps into smaller pieces.


Later: direct sum decompositions. More concrete examples are needed along the way.

T: V -> V linear, W in V a subspace. W is invariant under T if T(W) in W.
range(T) and null(T) are always invariant: T(range T) in range T, and T(null T) = {0} in null T.
Example: T, U in L(V,V) s.t. TU = UT. Then range U and null U are T-invariant. If a = Ub, then Ta = TU(b) = U(Tb) in range U. If Ua = 0, then U(Ta) = UT(a) = TU(a) = T(0) = 0, so Ta in null U.
Example: the differentiation operator on the space of polynomials of degree at most n; each subspace of polynomials of degree at most r (r <= n) is invariant under it.
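
The commuting-operators example can be checked mechanically. Below is a minimal sympy sketch; the matrices are my own illustrative choices (U is taken to be a polynomial in T, so TU = UT holds automatically).

```python
from sympy import Matrix, eye, zeros

T = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])
U = T - 2 * eye(3)          # a polynomial in T, so TU = UT automatically
assert T * U == U * T

def in_span(v, basis):
    # v lies in span(basis) iff appending v does not raise the rank
    B = Matrix.hstack(*basis)
    return Matrix.hstack(B, v).rank() == B.rank()

# range U is T-invariant: a = Ub gives Ta = TU(b) = U(Tb) in range U
for b in U.columnspace():
    assert in_span(T * b, U.columnspace())

# null U is T-invariant: Ua = 0 gives U(Ta) = UT(a) = TU(a) = T(0) = 0
for a in U.nullspace():
    assert U * (T * a) == zeros(3, 1)
```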

When W in V is invariant under T, we define T_W: W -> W by restriction. Choose a basis {a1,…,an} of V s.t. {a1,…,ar} is a basis of W. Then

A = [T]_B = [ B  C ]
            [ 0  D ]

with B r x r (the matrix of T_W in {a1,…,ar}), C r x (n-r), and D (n-r) x (n-r). Conversely, if A has the above block form in some basis, then the subspace spanned by a1,…,ar is invariant under T.
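
A concrete sympy sketch of the block form (the matrix and the adapted basis are my own choices): re-expressing A in a basis whose first r vectors span an invariant subspace W zeroes out the lower-left block.

```python
from sympy import Matrix, zeros

A = Matrix([[4, 1, 2],
            [0, 4, 3],
            [0, 0, 5]])
# W = span(e1, e2) is invariant under A (both images stay in W).
# Adapted basis: a1, a2 span W; a3 completes it to a basis of V.
a1 = Matrix([1, 1, 0]); a2 = Matrix([1, -1, 0]); a3 = Matrix([0, 1, 1])
P = Matrix.hstack(a1, a2, a3)
M = P.inv() * A * P
assert M[2:, :2] == zeros(1, 2)   # lower-left (n-r) x r block vanishes
# M[:2, :2] is B, the matrix of the restriction T_W in {a1, a2}
```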

Lemma: W an invariant subspace of T. The char. poly of T_W divides the char. poly of T, and the min. poly of T_W divides the min. poly of T.
Proof: In the block form above, det(xI - A) = det(xI - B) det(xI - D). For f = c0 + c1 x + … + cm x^m, f(A) = c0 I + c1 A + … + cm A^m, and each power A^k has the same block-triangular shape, with B^k in the upper-left corner.

Hence f(A) = 0 implies f(B) = 0, so Ann(A) is contained in Ann(B). Since these are ideals generated by the respective minimal polynomials, the min. poly of B divides the min. poly of A.
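
The divisibility claim is easy to confirm in sympy; this continues the block example above with the same illustrative matrix.

```python
from sympy import Matrix, symbols, div

x = symbols('x')
A = Matrix([[4, 1, 2],
            [0, 4, 3],
            [0, 0, 5]])
B = A[:2, :2]                        # matrix of the restriction T_W
f = A.charpoly(x).as_expr()          # (x - 4)**2 * (x - 5)
g = B.charpoly(x).as_expr()          # (x - 4)**2
q, r = div(f, g, x)
assert r == 0                        # char poly of T_W divides char poly of T
```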

Example 10: Let W be the subspace of V spanned by the characteristic vectors of T. Let c1,…,ck be all the (distinct) characteristic values of T, Wi the characteristic subspace associated with ci, and Bi a basis of Wi. Then B' = {B1,…,Bk} = {a1,…,ar} is a basis of W, and dim W = dim W1 + … + dim Wk. Each basis vector satisfies Tai = ti ai for some characteristic value ti, i = 1,…,r. Thus W is invariant under T.

The characteristic polynomial of T_W is f_W = (x - c1)^e1 … (x - ck)^ek, where ei = dim Wi.
Recall Theorem 2: T is diagonalizable <-> e1 + … + ek = n.
One can also consider the restrictions of T to the sums W1 + … + Wj for any j and compare their characteristic and minimal polynomials.
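
Theorem 2 can be tested numerically. A minimal sketch with a matrix of my own choosing (it has a Jordan block, so the eigenspace dimensions fall short of n):

```python
from sympy import Matrix

A = Matrix([[3, 0, 0],
            [0, 2, 1],
            [0, 0, 2]])
e_sum = sum(len(vecs) for _, _, vecs in A.eigenvects())  # e1 + ... + ek
assert (e_sum == A.rows) == A.is_diagonalizable()        # both sides False here
```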

T-conductors. We introduce T-conductors to understand invariant subspaces better.
Definition: Let W be an invariant subspace of T and a in V. The T-conductor of a into W is S_T(a; W) = {g in F[x] | g(T)a in W}. If W = {0}, then S_T(a; {0}) is the T-annihilator of a (not necessarily equal to Ann(T)).

Example: V = R^4, W = R^2, T given by a matrix (not reproduced in this transcript). What is S((1,0,0,0); W)? A polynomial c + dx + ex^2 + … conducts (1,0,0,0) into W iff c(1,0,0,0) + dT(1,0,0,0) + eT^2(1,0,0,0) + … lies in W. It is easy to see that c = d = 0 is forced, so S((1,0,0,0); W) = x^2 F[x].
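
Since the slide's matrix is missing here, the shift matrix below is a substitute of my own that realizes the stated answer S((1,0,0,0); W) = x^2 F[x], taking W = span(e3, e4).

```python
from sympy import Matrix

# T: e1 -> e2 -> e3 -> e4 -> 0, and W = span(e3, e4) is T-invariant.
T = Matrix([[0, 0, 0, 0],
            [1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0]])
e1 = Matrix([1, 0, 0, 0])

def in_W(v):                                   # W = span(e3, e4)
    return v[0] == 0 and v[1] == 0

assert not in_W(e1)                            # constant polys fail: c = 0
assert not in_W(T * e1)                        # degree-1 part fails: d = 0
assert in_W(T**2 * e1)                         # x^2 conducts e1 into W
```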

Lemma: W invariant under T -> W is invariant under f(T) for any f in F[x], and S(a; W) is an ideal of F[x].
Proof: For b in W we have Tb in W, …, T^k b in W, hence f(T)b in W.
S(a; W) is a subspace of F[x]: (cf + g)(T)(a) = (c f(T) + g(T))a = c f(T)a + g(T)a in W if f, g in S(a; W).
S(a; W) is an ideal in F[x]: for f in F[x] and g in S(a; W), (fg)(T)(a) = f(T)g(T)(a) = f(T)(g(T)(a)) in W, since W is invariant under f(T). So fg in S(a; W).

The unique monic generator of the ideal S(a; W) is also called the T-conductor of a into W (the T-annihilator if W = {0}). S(a; W) contains the minimal polynomial p of T, since p(T)a = 0 lies in W. Thus every T-conductor divides the minimal polynomial of T. This gives a lot of information about the conductor.

Example: Let T be a diagonalizable transformation with distinct characteristic values c1,…,ck, and let Wi = null(T - ciI). Then (x - ci) is the T-conductor of a into the sum of the other characteristic subspaces, W' = W1 + … + W(i-1) + W(i+1) + … + Wk, for any vector a = b1 + … + bk (bj in Wj) whose Wi-component bi is nonzero.
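
A small sanity check of this example in sympy, with a diagonal matrix of my own choosing:

```python
from sympy import Matrix, diag, eye

T = diag(1, 1, 2, 3)                 # c1 = 1 with W1 = span(e1, e2)
a = Matrix([1, 1, 1, 1])             # nonzero component in every Wi
b = (T - 1 * eye(4)) * a             # apply the conductor (x - c1) at T
assert b[0] == 0 and b[1] == 0       # b lands in W2 + W3 = span(e3, e4)
```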

Application. T is triangulable if there exists an ordered basis in which T is represented by a triangular matrix. We wish to determine when a transformation is triangulable.

Lemma: Let T in L(V,V), V an n-dimensional vector space over F, and suppose the min. poly of T is a product of linear factors. Let W be a proper invariant subspace for T. Then there exists a in V s.t. (a) a not in W, and (b) (T - cI)a in W for some characteristic value c of T.
Proof: Let b in V, b not in W. Let g be the T-conductor of b into W. Then g divides p, the minimal polynomial of T.

Hence some (x - cj) divides g; write g = (x - cj)h. Let a = h(T)b. Then a is not in W, since deg h < deg g and g is the minimal-degree polynomial conducting b into W. Also (T - cjI)a = (T - cjI)h(T)b = g(T)b in W. We have obtained the desired a.

Theorem 5: V a f.d. vector space over F, T in L(V,V). T is triangulable <-> the minimal polynomial of T is a product of linear polynomials over F.
Proof (<-): p = (x - c1)^r1 … (x - ck)^rk. Let W = {0} to begin and apply the lemma above: there exists a1 ≠ 0 with (T - ciI)a1 = 0, i.e. Ta1 = ci a1. Let W1 = <a1>. Again there exists a2 not in W1 with (T - cjI)a2 in W1, i.e. Ta2 = cj a2 + w for some w in W1. Let W2 = <a1, a2>, and so on.

We obtain a sequence a1, a2,…,ai,…. Let Wi = <a1, a2,…,ai>; each a(i+1) is chosen not in Wi with (T - c'I)a(i+1) in Wi for some characteristic value c', so Ta(i+1) = c' a(i+1) + (terms in a1,…,ai only). Then {a1, a2,…,an} is linearly independent: by the above, a(i+1) cannot be written as a linear combination of a1,…,ai, so independence follows by induction. Each subspace <a1,…,ai> is invariant under T, since Tai is written in terms of a1,…,ai.

Let B = {a1, a2,…,an} be the resulting basis. Then [T]_B is upper triangular (with characteristic values on the diagonal), so T is triangulable.
(->): Suppose T is triangulable, with [T]_B triangular. Then xI - [T]_B is again a triangular matrix, so char T = f = (x - c1)^d1 … (x - ck)^dk. By direct computation, (T - c1I)^d1 … (T - ckI)^dk (ai) = 0 for every i. Hence f is in Ann(T), p divides f, and p is of the desired form.
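
Theorem 5's conclusion can be exhibited computationally. As a sketch (the matrix is my own choice), sympy's jordan_form produces a similar triangular matrix whenever the minimal polynomial splits over the ground field:

```python
from sympy import Matrix

A = Matrix([[5, 4, 2],
            [4, 5, 2],
            [2, 2, 2]])              # eigenvalues 1, 1, 10: min poly splits
P, J = A.jordan_form()
assert P.inv() * A * P == J
assert all(J[i, j] == 0 for i in range(3) for j in range(i))  # upper triangular
```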

Corollary: If F is algebraically closed, every T in L(V,V) is triangulable.
Proof: Over an algebraically closed field, every polynomial factors into linear ones; in particular this holds for F = C, the complex numbers.
Remark: every field is a subfield of some algebraically closed field. Thus, after extending the field, every matrix becomes triangulable.

Another proof of the Cayley-Hamilton theorem: Let f be the char. poly of T. Embed F in an algebraically closed field F'. Over F', the min. poly of T factors into linear polynomials, so T is triangulable over F'. By (the proof of) Theorem 5, char T is a product of linear polynomials over F' and is divisible by p. Since f and p both lie in F[x], divisibility over F' implies divisibility over F; thus char T is divisible by p over F also, and f(T) = 0.
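
Cayley-Hamilton can be verified directly for any concrete matrix. A quick sympy check with an arbitrary example of my own, evaluating f at A by Horner's scheme:

```python
from sympy import Matrix, symbols, eye, zeros

x = symbols('x')
A = Matrix([[1, 2, 0],
            [3, -1, 4],
            [0, 5, 2]])
f = A.charpoly(x)
FA = zeros(3, 3)
for c in f.all_coeffs():             # Horner: FA = FA*A + c*I, leading coeff first
    FA = FA * A + c * eye(3)
assert FA == zeros(3, 3)             # f(A) = 0
```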

Theorem 6: T is diagonalizable <-> the minimal polynomial is p = (x - c1)…(x - ck) with c1,…,ck distinct.
Proof (->): done already (p. 193).
(<-): Let W be the subspace of V spanned by all characteristic vectors of T. We claim W = V. Suppose W ≠ V. By the lemma, there exists a not in W s.t. b = (T - cjI)a is in W for some j. Write b = b1 + … + bk where Tbi = ci bi, i = 1,…,k. Then h(T)b = h(c1)b1 + … + h(ck)bk for every polynomial h. Write p = (x - cj)q; then q(x) - q(cj) = (x - cj)h(x) for some polynomial h.

Now q(T)a - q(cj)a = h(T)(T - cjI)a = h(T)b in W. Also 0 = p(T)a = (T - cjI)q(T)a, so q(T)a is a characteristic vector (or zero) and hence lies in W. Therefore q(cj)a = q(T)a - h(T)b is in W; but a is not in W, so q(cj) = 0. Then (x - cj)^2 divides p, contradicting that every root of p has multiplicity one. Thus W = V and T is diagonalizable.
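
Theorem 6 also gives a practical test. As a sketch (example matrices are my own): take the squarefree part g = f / gcd(f, f') of the characteristic polynomial f. Since g has the same roots as f, each with multiplicity one, T is diagonalizable exactly when g(T) = 0, i.e. when g is the minimal polynomial.

```python
from sympy import Matrix, Poly, symbols, gcd, quo, eye, zeros

x = symbols('x')

def min_poly_is_squarefree(A):
    f = Poly(A.charpoly(x).as_expr(), x)
    g = quo(f, gcd(f, f.diff(x)))       # squarefree part: distinct roots only
    GA = zeros(A.rows, A.rows)
    for c in g.all_coeffs():            # Horner evaluation of g at A
        GA = GA * A + c * eye(A.rows)
    return GA == zeros(A.rows, A.rows)  # g(A) = 0 iff min poly is squarefree

D = Matrix([[2, 0, 0], [0, 2, 0], [0, 0, 3]])   # diagonalizable
N = Matrix([[2, 1, 0], [0, 2, 0], [0, 0, 3]])   # Jordan block at 2
assert min_poly_is_squarefree(D) and D.is_diagonalizable()
assert not min_poly_is_squarefree(N) and not N.is_diagonalizable()
```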