State space model (linear):
$$\dot{x} = Ax + Bu, \qquad y = Cx + Du$$
(in some texts the same model is written with other symbols for the matrices), where
u: input, y: output, x: state vector; A, B, C, D are constant matrices.
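To make the definition concrete, a minimal MATLAB sketch; the numerical values below are illustrative, not taken from the slides:
>> A = [0 1; -2 -3];   % state matrix (illustrative values)
>> B = [0; 1];         % input matrix
>> C = [1 0];          % output matrix
>> D = 0;              % direct feedthrough
>> sys = ss(A,B,C,D)   % build the state-space model (Control System Toolbox)
>> step(sys)           % plot the unit-step response of y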
Example
State transition, matrix exponential
State transition matrix: $e^{At}$
$e^{At}$ is an $n \times n$ matrix
$e^{At} = \mathcal{L}^{-1}\{(sI-A)^{-1}\}$, or $\mathcal{L}\{e^{At}\} = (sI-A)^{-1}$
$\frac{d}{dt}e^{At} = Ae^{At} = e^{At}A$
$e^{At}$ is invertible: $(e^{At})^{-1} = e^{-At}$
$e^{A\cdot 0} = I$
$e^{At_1}\,e^{At_2} = e^{A(t_1+t_2)}$
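A quick symbolic check of the Laplace relation in MATLAB (requires the Symbolic Math Toolbox; the matrix A is illustrative):
>> syms s t
>> A = [0 1; -2 -3];                        % illustrative matrix
>> Phi = ilaplace(inv(s*eye(2) - A), s, t)  % L^-1{(sI-A)^-1}
>> simplify(Phi - expm(A*t))                % zero matrix, so e^(At) = L^-1{(sI-A)^-1}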
Example
I/O model to state space: there are infinitely many state-space realizations, all equivalent. Controller canonical form (a sketch of one common convention is given after the n = 4 example below):
I/O model to state space: the controller canonical form itself is not unique; a reordering of the state variables gives another realization that is also called controller canonical form.
Example: $n = 4$, coefficients $a_3, a_2, a_1, a_0$ and $b_1, b_0$, with $b_2 = b_3 = 0$.
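A sketch of one common convention for the controller canonical form in this $n = 4$ case (conventions differ between texts, so this is an assumed choice, not necessarily the slide's); for the I/O model $y^{(4)} + a_3 y^{(3)} + a_2 \ddot{y} + a_1 \dot{y} + a_0 y = b_1 \dot{u} + b_0 u$:
$$
\dot{x} =
\begin{bmatrix}
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
-a_0 & -a_1 & -a_2 & -a_3
\end{bmatrix} x +
\begin{bmatrix} 0\\ 0\\ 0\\ 1 \end{bmatrix} u,
\qquad
y = \begin{bmatrix} b_0 & b_1 & 0 & 0 \end{bmatrix} x, \qquad D = 0.
$$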
Characteristic values
The characteristic equation of a system is $\det(sI-A) = 0$; the polynomial $\det(sI-A)$ is called the characteristic polynomial; the roots of the characteristic equation are the characteristic values, which are also the eigenvalues of $A$.
e.g. if the characteristic polynomial works out to $(s+1)(s+2)^2$, then $(s+1)(s+2)^2 = 0$ is the characteristic equation and $s_1 = -1$, $s_2 = -2$, $s_3 = -2$ are the characteristic values (eigenvalues).
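A numerical check in MATLAB (the matrix below is illustrative, chosen so its characteristic polynomial is $(s+1)(s+2)^2 = s^3 + 5s^2 + 8s + 4$):
>> A = [0 1 0; 0 0 1; -4 -8 -5];  % companion matrix of s^3 + 5s^2 + 8s + 4 (illustrative)
>> poly(A)                        % characteristic polynomial coefficients: 1 5 8 4
>> eig(A)                         % eigenvalues -1, -2, -2 (repeated root, so expect small round-off)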
Solution of state space model
Recall: $sX(s) - x(0) = AX(s) + BU(s)$
$(sI-A)X(s) = BU(s) + x(0)$
$X(s) = (sI-A)^{-1}BU(s) + (sI-A)^{-1}x(0)$
$x(t) = \left(\mathcal{L}^{-1}\{(sI-A)^{-1}\}\right) * Bu(t) + \mathcal{L}^{-1}\{(sI-A)^{-1}\}\,x(0)$
$x(t) = \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau + e^{At}x(0)$
$y(t) = \int_0^t Ce^{A(t-\tau)}Bu(\tau)\,d\tau + Ce^{At}x(0) + Du(t)$
But don't use those for hand calculation; use:
$X(s) = (sI-A)^{-1}BU(s) + (sI-A)^{-1}x(0)$
$x(t) = \mathcal{L}^{-1}\{(sI-A)^{-1}BU(s)\} + \{\mathcal{L}^{-1}(sI-A)^{-1}\}\,x(0)$
and
$Y(s) = C(sI-A)^{-1}BU(s) + DU(s) + C(sI-A)^{-1}x(0)$
$y(t) = \mathcal{L}^{-1}\{C(sI-A)^{-1}BU(s) + DU(s)\} + C\{\mathcal{L}^{-1}(sI-A)^{-1}\}\,x(0)$
e.g. if $u$ = unit step (see the symbolic sketch below).
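A symbolic MATLAB sketch of this Laplace-domain route for a unit-step input (the system matrices and initial state are illustrative, not the slide's example):
>> syms s t
>> A = [0 1; -2 -3]; B = [0; 1]; C = [1 0]; D = 0;  % illustrative system
>> x0 = [1; 0];                                     % illustrative initial state
>> U = 1/s;                                         % unit step: U(s) = 1/s
>> Phi = inv(s*eye(2) - A);                         % (sI-A)^-1
>> Y = C*Phi*B*U + D*U + C*Phi*x0;                  % Y(s)
>> y = simplify(ilaplace(Y, s, t))                  % y(t)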
Note: T.F. $= D + C(sI-A)^{-1}B$
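This can be checked directly in MATLAB (same illustrative system as above; tf/ss require the Control System Toolbox):
>> syms s
>> A = [0 1; -2 -3]; B = [0; 1]; C = [1 0]; D = 0;
>> simplify(D + C*inv(s*eye(2) - A)*B)   % symbolic T.F.: 1/(s^2 + 3*s + 2)
>> tf(ss(A,B,C,D))                       % same transfer function, numerically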
Eigenvalues, eigenvectors
Given an $n \times n$ square matrix $A$, a nonzero vector $p$ is called an eigenvector of $A$ if $Ap \propto p$, i.e., $\exists\,\lambda$ s.t. $Ap = \lambda p$; $\lambda$ is an eigenvalue of $A$.
Example: Let $A$ and $p_1$ be as shown; then $Ap_1 = 1 \cdot p_1$, so $p_1$ is an eigenvector and the eigenvalue is $1$. Let $p_2$ be as shown; then $p_2$ is also an eigenvector, associated with $\lambda = -2$.
Eigenvalues, eigenvectors (cont.)
For a given $n \times n$ matrix $A$, if $(\lambda, p)$ is an eigen-pair, then
$Ap = \lambda p \;\Rightarrow\; \lambda p - Ap = 0 \;\Rightarrow\; \lambda I p - Ap = 0 \;\Rightarrow\; (\lambda I - A)p = 0$
$\because\ p \neq 0 \quad \therefore\ \det(\lambda I - A) = 0$
$\therefore\ \lambda$ is a solution of the characteristic equation of $A$: $\det(\lambda I - A) = 0$.
The characteristic polynomial of an $n \times n$ matrix $A$ has degree $n$, so $A$ has $n$ eigenvalues.
e.g. for the example $A$ above, $\det(\lambda I - A) = (\lambda - 1)(\lambda + 2) = 0 \;\Rightarrow\; \lambda_1 = 1,\ \lambda_2 = -2$.
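As an illustration, the same check in MATLAB with an assumed 2×2 matrix whose eigenvalues are 1 and -2 (not the matrix from the slide):
>> A = [0 2; 1 -1];        % assumed matrix with eigenvalues 1 and -2
>> p1 = [2; 1]; p2 = [1; -1];
>> A*p1                    % returns [2; 1]  =  1*p1,  so (1, p1) is an eigen-pair
>> A*p2                    % returns [-2; 2] = -2*p2,  so (-2, p2) is an eigen-pair
>> eig(A)                  % eigenvalues 1 and -2, the roots of det(lambda*I - A) = 0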
If $\lambda_1 \neq \lambda_2 \neq \lambda_3 \neq \cdots$ (distinct eigenvalues), then the corresponding eigenvectors $p_1, p_2, \cdots$ are linearly independent, i.e., the matrix $P = [\,p_1 \,\vdots\, p_2 \,\vdots\, \cdots \,\vdots\, p_n\,]$ is invertible. Then:
$Ap_1 = \lambda_1 p_1,\quad Ap_2 = \lambda_2 p_2,\quad \cdots$
$A[\,p_1 \,\vdots\, p_2 \,\vdots\, \cdots\,] = [\,Ap_1 \,\vdots\, Ap_2 \,\vdots\, \cdots\,] = [\,\lambda_1 p_1 \,\vdots\, \lambda_2 p_2 \,\vdots\, \cdots\,] = [\,p_1\ p_2\ \cdots\,]\,\mathrm{diag}(\lambda_1, \lambda_2, \cdots)$
$\therefore\ AP = P\Lambda$, so $P^{-1}AP = \Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \cdots)$
$\therefore$ If $A$ has $n$ linearly independent eigenvectors, then $A$ can be diagonalized.
Note: not all square matrices can be diagonalized.
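A quick numerical check of $P^{-1}AP = \Lambda$ in MATLAB, using the same matrix as the eig example below:
>> A = [2 0 1; 0 2 1; 1 1 4];
>> [P,D] = eig(A);     % columns of P are eigenvectors, D is diagonal
>> norm(P\A*P - D)     % essentially zero, confirming P^-1*A*P = Lambda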
Example
In Matlab
>> A=[2 0 1; 0 2 1; 1 1 4];
>> [P,D]=eig(A)
P =    (columns are the eigenvectors p1, p2, p3)
D =    (diagonal entries are the eigenvalues λ1, λ2, λ3)
If A does not have n linearly independent eigenvectors (some of the eigenvalues are identical), then A cannot be diagonalized.
E.g. for a 4×4 matrix A with $\det(\lambda I - A) = (\lambda+8)(\lambda+16)^3 = \lambda^4 + 56\lambda^3 + 1152\lambda^2 + 10240\lambda + 32768$:
$\lambda_1 = -8,\ \lambda_2 = -16,\ \lambda_3 = -16,\ \lambda_4 = -16$
By solving $(\lambda I - A)p = 0$, there are only two linearly independent eigenvectors.
>> A=[ ; ; ; ]      % the 4x4 matrix above
>> [P,D]=eig(A)
P =    (the numerically computed P and D contain complex entries)
D =
Should use:
>> [P,J]=jordan(A)
P =
J =    (J contains a 3×3 Jordan block associated with λ = -16)
More Matlab Examples
>> s=sym('s');
>> A=[0 1;-2 -3];
>> det(s*eye(2)-A)
ans = s^2+3*s+2
>> factor(ans)
ans = (s+2)*(s+1)
>> [P,D]=eig(A)
P =
D =
>> [P,D]=jordan(A)
P =
D =
A =
     0     1
    -2    -3
>> exp(A)      % element-wise exponential of the entries
ans =
>> expm(A)     % matrix exponential e^A; note exp(A) ≠ expm(A)
ans =
>> t=sym('t')
>> expm(A*t)   % e^(At) computed symbolically
ans =
[ -exp(-2*t)+2*exp(-t),    exp(-t)-exp(-2*t)]
[ -2*exp(-t)+2*exp(-2*t),  2*exp(-2*t)-exp(-t)]
Similarity transformation: let $x = Pz$ with $P$ invertible. Substituting into $\dot{x} = Ax + Bu$, $y = Cx + Du$ gives
$\dot{z} = P^{-1}AP\,z + P^{-1}B\,u, \qquad y = CP\,z + Du$
i.e., the same system as (#), with $\bar{A} = P^{-1}AP$, $\bar{B} = P^{-1}B$, $\bar{C} = CP$, $\bar{D} = D$.
Example: choosing $P$ as the matrix of eigenvectors gives $\bar{A} = P^{-1}AP = \Lambda$ (diagonalized), so the transformed state equations are decoupled.
Invariance: a similarity transformation does not change the characteristic polynomial (hence the eigenvalues) or the transfer function:
$\det(sI - \bar{A}) = \det(sI - A), \qquad \bar{C}(sI - \bar{A})^{-1}\bar{B} + \bar{D} = C(sI - A)^{-1}B + D$
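A numerical sketch of these invariance properties in MATLAB (illustrative system; tf/ss require the Control System Toolbox):
>> A = [0 1; -2 -3]; B = [0; 1]; C = [1 0]; D = 0;  % illustrative system
>> [P,~] = eig(A);                                  % use the eigenvectors as P
>> Ab = P\A*P; Bb = P\B; Cb = C*P;                  % transformed (diagonalized) system
>> eig(A), eig(Ab)                                  % same eigenvalues
>> tf(ss(A,B,C,D)), tf(ss(Ab,Bb,Cb,D))              % same transfer function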