6.8. The primary decomposition theorem Decompose into elementary parts using the minimal polynomials.

Theorem 12. Let T in L(V,V), V a finite-dimensional vector space over F, and let p be the minimal polynomial of T, factored as p = p_1^{r_1} ··· p_k^{r_k} with the p_i distinct monic irreducibles and each r_i > 0. Let W_i = null p_i(T)^{r_i}. Then:
–(i) V = W_1 ⊕ ··· ⊕ W_k.
–(ii) Each W_i is T-invariant.
–(iii) If T_i = T|W_i : W_i -> W_i, then minpoly T_i = p_i^{r_i}.

Example: Suppose char.poly T = (x-1)^2 (x-2)^2 = min.poly T (check this by verifying that no polynomial of lower degree kills T, by direct computation).
–Then W_1 = null(T-I)^2 and W_2 = null(T-2I)^2, and Theorem 12 gives V = W_1 ⊕ W_2. (The explicit matrix computations shown on the slide are omitted.)

Proof: The idea is to construct projections E_1,…,E_k.
–Let f_i = p/p_i^{r_i} = p_1^{r_1} ··· p_{i-1}^{r_{i-1}} p_{i+1}^{r_{i+1}} ··· p_k^{r_k}.
–f_1,…,f_k are relatively prime, since no irreducible p_j divides all of them.
–That is, the ideal generated by f_1,…,f_k is all of F[x].
–So there exist g_1,…,g_k in F[x] s.t. g_1 f_1 + ··· + g_k f_k = 1.
–p divides f_i f_j for i ≠ j, since f_i f_j contains every factor p_l^{r_l}.
–Let h_i = f_i g_i and E_i = h_i(T) = f_i(T) g_i(T).

–Since h_1 + ··· + h_k = 1, we get E_1 + ··· + E_k = I.
–E_i E_j = 0 for i ≠ j, since p divides f_i f_j and p(T) = 0.
–E_i = E_i(E_1 + ··· + E_k) = E_i^2: the E_i are projections.
–Let Im E_i = W_i. Then V = W_1 ⊕ ··· ⊕ W_k.
–(i) is proved (once we identify Im E_i below).
–T E_i = E_i T, since E_i is a polynomial in T. Thus Im E_i = W_i is T-invariant.
–(ii) is proved.
–We show that Im E_i = null p_i(T)^{r_i}.
(⊆) p_i(T)^{r_i} E_i a = p_i(T)^{r_i} f_i(T) g_i(T) a = p(T) g_i(T) a = 0.

(⊇) Let a in null p_i(T)^{r_i}. If j ≠ i, then f_j(T) g_j(T) a = 0, since p_i^{r_i} divides f_j and hence f_j g_j. So E_j a = 0 for j ≠ i. Since a = E_1 a + ··· + E_k a, it follows that a = E_i a, hence a in Im E_i.
–This completes the proof of (i) and (ii).
–(iii) Let T_i = T|W_i : W_i -> W_i.
–p_i(T_i)^{r_i} = 0, since W_i is the null space of p_i(T)^{r_i}.
–Hence minpoly T_i divides p_i^{r_i}.
–Conversely, suppose g is s.t. g(T_i) = 0.

–Then g(T) f_i(T) = 0: since f_i = p_1^{r_1} ··· p_{i-1}^{r_{i-1}} p_{i+1}^{r_{i+1}} ··· p_k^{r_k} and Im E_j = null p_j(T)^{r_j}, the operator f_i(T) kills every summand Im E_j with j ≠ i and maps V into Im E_i = W_i, where g(T_i) = 0; here we use that V is the direct sum of the Im E_j.
–Hence p divides g f_i.
–But p = p_i^{r_i} f_i by definition.
–Thus p_i^{r_i} divides g.
–Therefore minpoly T_i = p_i^{r_i}.
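The projection construction in the proof can be checked numerically. Below is a minimal sketch (assuming SymPy is available; the 4×4 matrix is a hypothetical example whose characteristic and minimal polynomial are both (x-1)^2 (x-2)^2): it builds E_1, E_2 from the Bezout identity g_1 f_1 + g_2 f_2 = 1.

```python
import sympy as sp

x = sp.symbols('x')

# Horner evaluation of a scalar polynomial at a square matrix.
def evalp(poly, M):
    R = sp.zeros(*M.shape)
    for coeff in sp.Poly(poly, x).all_coeffs():
        R = R * M + coeff * sp.eye(M.shape[0])
    return R

# Hypothetical matrix: char.poly = min.poly = (x-1)^2 (x-2)^2.
A = sp.Matrix([[1, 1, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 2, 1],
               [0, 0, 0, 2]])

p1, p2 = (x - 1)**2, (x - 2)**2     # p = p1 * p2
f1, f2 = p2, p1                     # f_i = p / p_i^{r_i}
g1, g2, h = sp.gcdex(f1, f2, x)     # Bezout: g1*f1 + g2*f2 = h = gcd = 1
assert h == 1

E1 = evalp(sp.expand(f1 * g1), A)   # E_i = f_i(T) g_i(T)
E2 = evalp(sp.expand(f2 * g2), A)
```

As in the proof, E_1 + E_2 = I, E_1 E_2 = 0, each E_i is idempotent, and each commutes with A.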

Corollary: Let E_1,…,E_k be the projections associated with the primary decomposition of T. Then each E_i is a polynomial in T, and if a linear operator U commutes with T, then U commutes with each E_i and each W_i is invariant under U.
Proof: E_i = f_i(T) g_i(T) is a polynomial in T, hence commutes with U.
–W_i = Im E_i, so U(W_i) = U E_i(V) = E_i U(V) ⊆ Im E_i = W_i.

Suppose that minpoly T is a product of linear polynomials: p = (x-c_1)^{r_1} ··· (x-c_k)^{r_k}. (For example, this always holds when F = C.)
–Let D = c_1 E_1 + ··· + c_k E_k, a diagonalizable operator.
–T = T E_1 + ··· + T E_k.
–N := T - D = (T - c_1 I)E_1 + ··· + (T - c_k I)E_k.
–N^2 = (T - c_1 I)^2 E_1 + ··· + (T - c_k I)^2 E_k, since E_i E_j = 0 for i ≠ j.
–N^r = (T - c_1 I)^r E_1 + ··· + (T - c_k I)^r E_k.

–If r ≥ r_i for each i, then (T - c_i I)^r = 0 on Im E_i.
–Therefore N^r = 0: N = T - D is nilpotent.
Definition. N in L(V,V) is nilpotent if there is some integer r s.t. N^r = 0.
Theorem 13. Let T in L(V,V) with minpoly T a product of first-degree polynomials. Then there exist a diagonalizable operator D and a nilpotent operator N s.t.
–(i) T = D + N.
–(ii) DN = ND.
D and N are uniquely determined by (i) and (ii), and each is a polynomial in T.

Proof (existence): T = D + N with E_i = h_i(T) = f_i(T) g_i(T) as above.
–D = c_1 E_1 + ··· + c_k E_k is a polynomial in T.
–N = T - D is a polynomial in T.
–Hence D and N commute.
(Uniqueness) Suppose T = D' + N' with D'N' = N'D', D' diagonalizable, N' nilpotent.
–D' commutes with T = D' + N', hence with every polynomial in T.
–So D' commutes with D and N.
–D' + N' = D + N gives D - D' = N' - N, and these operators commute with each other.
–Since D and D' are commuting diagonalizable operators, they are simultaneously diagonalizable (Section 6.5, Theorem 8), so D - D' is diagonalizable.

–N' - N is nilpotent: since N and N' commute, expand (N' - N)^r by the binomial theorem; if r is sufficiently large (larger than twice the maximum of the nilpotency indices of N and N'), then in each term N'^{r-j} N^j either r-j or j is large enough that the factor vanishes. Thus (N' - N)^r = 0.
–So D - D' = N' - N is a nilpotent operator with a diagonal matrix (in a suitable basis). Thus D - D' = 0 and N' - N = 0.
–D' = D and N' = N.
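Theorem 13 can also be illustrated concretely. The sketch below (a hypothetical 4×4 matrix; it extracts the diagonalizable part via SymPy's Jordan form as a shortcut, rather than through the Bezout construction in the proof) produces D and N and lets one verify (i) and (ii):

```python
import sympy as sp

# Hypothetical matrix with minimal polynomial (x-1)^2 (x-2)^2.
A = sp.Matrix([[1, 1, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 2, 1],
               [0, 0, 0, 2]])

P, J = A.jordan_form()   # A = P * J * P^{-1}

# Diagonalizable part D: keep only the diagonal of J (constant on each
# Jordan block, so it commutes with J); nilpotent part N = A - D.
D = P * sp.diag(*[J[i, i] for i in range(4)]) * P.inv()
N = A - D
```

Here T = D + N with DN = ND, D diagonalizable, and N nilpotent (N^2 = 0 since every Jordan block has size at most 2).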

Application to differential equations. The primary decomposition theorem still holds when V is infinite-dimensional, provided we only assume p(T) = 0 (p need not be the minimal polynomial); then (i) and (ii) hold, since the same argument works.
Fix a positive integer n. Let C^n = {n-times continuously differentiable complex-valued functions}, and let V = {f in C^n | f satisfies the ODE f^(n) + a_{n-1} f^(n-1) + ··· + a_1 f' + a_0 f = 0}.

Let p = x^n + a_{n-1} x^{n-1} + ··· + a_1 x + a_0, and let D be the differentiation operator. Then V is the subspace of C^n on which p(D)f = 0, i.e., V = null p(D). Factor p = (x-c_1)^{r_1} ··· (x-c_k)^{r_k}, with c_1,…,c_k in the complex number field C. Define W_j := null(D - c_j I)^{r_j}. Then Theorem 12 says that V = W_1 ⊕ ··· ⊕ W_k. In other words, if f satisfies the given differential equation, then f can be written f = f_1 + ··· + f_k with f_i in W_i.

What are the W_i? Solve (D - cI)^r f = 0.
Fact: (D - cI)^r f = e^{ct} D^r (e^{-ct} f):
–(D - cI) f = e^{ct} D(e^{-ct} f).
–(D - cI)^2 f = e^{ct} D(e^{-ct} e^{ct} D(e^{-ct} f)) = e^{ct} D^2(e^{-ct} f), and so on by induction.
So (D - cI)^r f = 0 iff D^r(e^{-ct} f) = 0:
–Solution: e^{-ct} f is a polynomial of degree < r.
–f = e^{ct}(b_0 + b_1 t + ··· + b_{r-1} t^{r-1}).
Here e^{ct}, t e^{ct}, t^2 e^{ct},…, t^{r-1} e^{ct} are linearly independent. Thus {t^m e^{c_j t} | m = 0,…,r_j - 1, j = 1,…,k} forms a basis for V. Hence V is finite-dimensional with dim V = deg p.
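The conjugation identity and the resulting solution basis can be verified symbolically. A small sketch (assuming SymPy; f is an unspecified smooth function) checks (D - cI)^r f = e^{ct} D^r (e^{-ct} f) for small r, and that t^m e^{ct} with m < r is annihilated by (D - cI)^r:

```python
import sympy as sp

t, c = sp.symbols('t c')
f = sp.Function('f')

def DminusC(expr, r):
    # apply the operator (D - cI) to expr, r times
    for _ in range(r):
        expr = sp.diff(expr, t) - c * expr
    return sp.expand(expr)

def conjugated(expr, r):
    # the right-hand side e^{ct} D^r (e^{-ct} expr)
    return sp.expand(sp.exp(c * t) * sp.diff(sp.exp(-c * t) * expr, t, r))

# identity (D - cI)^r f = e^{ct} D^r (e^{-ct} f) for r = 1, 2, 3
for r in (1, 2, 3):
    assert sp.simplify(DminusC(f(t), r) - conjugated(f(t), r)) == 0

# t^m e^{ct} (m < r) lies in null (D - cI)^r
for m in range(3):
    assert sp.simplify(DminusC(t**m * sp.exp(c * t), 3)) == 0
```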

7.1. Rational forms
Definition: Let T in L(V,V) and let a be a vector. The T-cyclic subspace generated by a is Z(a;T) = {g(T)a | g in F[x]} = span{a, Ta, T^2 a, …}. If Z(a;T) = V, then a is said to be a cyclic vector for T.
Recall the T-annihilator of a: the ideal M(a;T) = {g in F[x] | g(T)a = 0} = p_a F[x]; its monic generator p_a is the T-annihilator of a.

Theorem 1. Let a ≠ 0 with T-annihilator p_a.
–(i) deg p_a = dim Z(a;T).
–(ii) If deg p_a = k, then a, Ta,…,T^{k-1} a is a basis of Z(a;T).
–(iii) If U := T|Z(a;T) : Z(a;T) -> Z(a;T), then minpoly U = p_a.
Proof: Let g in F[x] and write g = p_a q + r with deg r < deg p_a. Then g(T)a = r(T)a.
–r(T)a is a linear combination of a, Ta,…,T^{k-1} a.
–Thus these k vectors span Z(a;T).
–They are linearly independent: otherwise we would get a nonzero polynomial g of degree less than k with g(T)a = 0, contradicting the minimality of deg p_a.
–(i) and (ii) are proved.

–(iii) U := T|Z(a;T) : Z(a;T) -> Z(a;T).
–For g in F[x]: p_a(U) g(T)a = p_a(T) g(T)a (since g(T)a is in Z(a;T)) = g(T) p_a(T)a = g(T)0 = 0.
–So p_a(U) = 0 on Z(a;T), and p_a is monic.
–If h is a polynomial of degree lower than deg p_a, then h(U) ≠ 0 (since h(U)a = h(T)a ≠ 0).
–Thus p_a is the minimal polynomial of U.
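Theorem 1 also suggests an algorithm: the T-annihilator p_a is found from the first linear dependence among a, Ta, T^2 a, …, and its degree equals dim Z(a;T). A sketch (assuming SymPy; the matrix and vector are hypothetical):

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical operator and vector: A has eigenvalues 2 (Jordan block) and 3;
# a = e1 + e3, so its annihilator should be (x-2)(x-3).
A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])
a = sp.Matrix([1, 0, 1])

# Accumulate a, Ta, T^2 a, ... until they become linearly dependent.
vecs = [a]
while True:
    vecs.append(A * vecs[-1])
    ns = sp.Matrix.hstack(*vecs).nullspace()
    if ns:
        break

c = ns[0] / ns[0][-1]                       # normalize: leading coeff 1 (monic)
p_a = sum(c[i] * x**i for i in range(len(c)))
# deg p_a = dim Z(a;T) = number of independent vectors found (Theorem 1(i))
```

For this example p_a works out to (x-2)(x-3), of degree 2, so dim Z(a;A) = 2.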

Suppose T:V->V has a cyclic vector a. Then U = T, and deg minpoly T = dim Z(a;T) = dim V = n, so minpoly T = char.poly T. We obtain: T has a cyclic vector iff minpoly T = char.poly T.
Proof: (->) done above. (<-) Later we show that for any T there is a vector v whose T-annihilator equals minpoly T (p. 237, Corollary). So if minpoly T = char.poly T, then dim Z(v;T) = n and v is a cyclic vector.

Study T by means of a cyclic vector: let U be an operator on W with a cyclic vector v (for example W = Z(v;T) and U the restriction of T).
–v, Uv, U^2 v,…,U^{k-1} v is a basis of W.
–The U-annihilator of v equals minpoly U, by Theorem 1.
–Let v_i = U^{i-1} v, i = 1,…,k, and let B = {v_1,…,v_k}.
–Then U v_i = v_{i+1} for i = 1,…,k-1, and U v_k = -c_0 v_1 - c_1 v_2 - ··· - c_{k-1} v_k, where minpoly U = c_0 + c_1 x + ··· + c_{k-1} x^{k-1} + x^k (so that c_0 v + c_1 Uv + ··· + c_{k-1} U^{k-1} v + U^k v = 0).

The matrix of U in the ordered basis B is therefore

  0 0 … 0 -c_0
  1 0 … 0 -c_1
  0 1 … 0 -c_2
  … … … …
  0 0 … 1 -c_{k-1}

This is called the companion matrix of p_a (defined for any monic polynomial).
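The companion matrix is easy to build and test against the Corollary below. A sketch (assuming SymPy; the polynomial p = x^3 - 2x^2 + 3x - 5 is a hypothetical example):

```python
import sympy as sp

x = sp.symbols('x')

# Companion matrix of p = c0 + c1 x + ... + c_{k-1} x^{k-1} + x^k,
# in the convention of the basis B above (U v_i = v_{i+1}).
def companion(coeffs):
    """coeffs = [c0, c1, ..., c_{k-1}], the non-leading coefficients of p."""
    k = len(coeffs)
    M = sp.zeros(k, k)
    for i in range(k - 1):
        M[i + 1, i] = 1              # U v_i = v_{i+1}
    for i in range(k):
        M[i, k - 1] = -coeffs[i]     # U v_k = -c0 v_1 - ... - c_{k-1} v_k
    return M

# Horner evaluation of a polynomial at a square matrix.
def evalp(poly, M):
    R = sp.zeros(*M.shape)
    for coeff in sp.Poly(poly, x).all_coeffs():
        R = R * M + coeff * sp.eye(M.shape[0])
    return R

p = x**3 - 2*x**2 + 3*x - 5          # (c0, c1, c2) = (-5, 3, -2)
A = companion([-5, 3, -2])
```

One can check that char.poly A = p, that p(A) = 0, and that e_1 is a cyclic vector (e_1, A e_1, A^2 e_1 are independent), as the Corollary predicts.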

Theorem 2. If U is a linear operator on a f.d.v.s. W, then U has a cyclic vector iff there is some ordered basis in which U is represented by a companion matrix.
Proof: (->) Done above. (<-) If we have such a basis {v_1,…,v_k}, then v_1 is a cyclic vector.

Corollary. If A is the companion matrix of a monic polynomial p, then p is both the minimal and the characteristic polynomial of A.
Proof: Let a = (1,0,…,0). Then a is a cyclic vector, and Z(a;A) = V.
–The annihilator of a is p, and deg p = n.
–By Theorem 1(iii), the minimal polynomial of A is p.
–p = minpoly A divides char.poly A; both are monic of degree n, so p = char.poly A.