§1-3 Solution of a Dynamical Equation


§1-3 Solution of a Dynamical Equation

1. Solution of a homogeneous differential equation

A preliminary theorem for the linear differential equation

$$\dot{x}(t) = A(t)x(t) + f(t) \qquad (A1)$$

Preliminary Theorem: Let every element of $A(t)$ and $f(t)$ be continuous over $(-\infty, +\infty)$. Then for any $t_0 \in (-\infty, +\infty)$ and any constant vector $x_0$, (A1) has a solution $x(t)$ defined over $(-\infty, +\infty)$ such that

$$x(t_0) = x_0. \qquad (A2)$$

Furthermore, the solution satisfying (A2) is unique.

Corollary 1: If (t) is a solution to the differential equation and for some t0, (t0)=0, then Proof:It is clear that x(t)0, t is one of the solutions of the above equation. On the other hand, since (t0)=0 and (t) with the initial condition is unique, we have Q.E.D

Theorem 2: The set of all solutions of (1-50), i.e. $\dot{x} = A(t)x$, forms an n-dimensional vector space over the field of real numbers.

Outline of the proof:

a) All the solutions of (1-50) form a linear space;

b) the dimension of this linear space is n. More specifically, 1) (1-50) has n linearly independent solutions; 2) any solution can be expressed as a linear combination of these n solutions.

Proof: a) All the solutions form a linear space. Let $\psi_1$ and $\psi_2$ be two arbitrary solutions of (1-50). Then, for any real numbers $\alpha_1$ and $\alpha_2$, we have

$$\frac{d}{dt}\big(\alpha_1\psi_1 + \alpha_2\psi_2\big) = \alpha_1 A(t)\psi_1 + \alpha_2 A(t)\psi_2 = A(t)\big(\alpha_1\psi_1 + \alpha_2\psi_2\big),$$

so $\alpha_1\psi_1 + \alpha_2\psi_2$ is again a solution of (1-50).

b) The dimension of the space is n.

1) Let $e_1, e_2, \ldots, e_n$ be n linearly independent constant vectors, and let $\psi_i(t)$, $i = 1, 2, \ldots, n$, be the solutions of (1-50) with $\psi_i(t_0) = e_i$. We now prove that $\psi_1(t), \ldots, \psi_n(t)$ are linearly independent over $(-\infty, +\infty)$. The proof is by contradiction. Suppose there exists a $t_1 \in (-\infty, +\infty)$ at which $\psi_1(t_1), \ldots, \psi_n(t_1)$ are linearly dependent. Then there exists an $n \times 1$ nonzero vector $\alpha$ such that

$$\alpha_1\psi_1(t_1) + \alpha_2\psi_2(t_1) + \cdots + \alpha_n\psi_n(t_1) = 0.$$

Note that $x(t) \equiv 0$ is a solution of (1-50), and $\sum_{i=1}^{n}\alpha_i\psi_i(t)$ is also a solution of (1-50), one that vanishes at $t_1$. Hence, the uniqueness of the solution implies that

$$\sum_{i=1}^{n}\alpha_i\psi_i(t) \equiv 0, \quad \forall t \in (-\infty, +\infty).$$

In particular, for $t = t_0$ this gives $\sum_{i=1}^{n}\alpha_i e_i = 0$ with $\alpha \neq 0$, which implies that the vectors $e_1, e_2, \ldots, e_n$ are linearly dependent, a contradiction. Hence $\psi_i(t)$, $i = 1, 2, \ldots, n$, are linearly independent over $(-\infty, +\infty)$.

2) Any solution can be expressed as a linear combination of the n solutions; consequently, all the solutions of (1-50) form an n-dimensional vector space. Let $\psi(t)$ be an arbitrary solution of (1-50) with $\psi(t_0) = e$. Since $e_1, \ldots, e_n$ are linearly independent, $e$ can be uniquely expressed as

$$e = \alpha_1 e_1 + \alpha_2 e_2 + \cdots + \alpha_n e_n.$$

Since $\sum_{i=1}^{n}\alpha_i\psi_i(t)$ is also a solution of (1-50), with initial value $\sum_{i=1}^{n}\alpha_i\psi_i(t_0) = e$, the uniqueness of the solution implies that

$$\psi(t) = \sum_{i=1}^{n}\alpha_i\psi_i(t). \qquad \text{Q.E.D.}$$
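As a concrete illustration of Theorem 2 (a hypothetical example, not taken from the text), consider $\dot{x} = Ax$ with $A = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}$, for which $e^{At} = \begin{bmatrix}1 & t\\ 0 & 1\end{bmatrix}$. The two solutions with $\psi_i(0) = e_i$ have simple closed forms, and any other solution is the corresponding linear combination:

```python
# Sketch of Theorem 2 for the hypothetical system x' = A x with
# A = [[0, 1], [0, 0]]: here e^{At} = [[1, t], [0, 1]], so the solutions
# with psi_i(0) = e_i are psi_1(t) = (1, 0) and psi_2(t) = (t, 1).

def psi1(t):
    return (1.0, 0.0)                  # solution with psi1(0) = e1 = (1, 0)

def psi2(t):
    return (t, 1.0)                    # solution with psi2(0) = e2 = (0, 1)

def solution(e, t):
    """Solution with x(0) = e, computed as x(t) = e^{At} e."""
    return (e[0] + t * e[1], e[1])

# Any solution equals a1*psi1 + a2*psi2, where (a1, a2) are the
# coordinates of the initial vector e in the basis e1, e2:
e, t = (3.0, -2.0), 1.7
combo = (e[0] * psi1(t)[0] + e[1] * psi2(t)[0],
         e[0] * psi1(t)[1] + e[1] * psi2(t)[1])
exact = solution(e, t)
print(abs(combo[0] - exact[0]) < 1e-12 and abs(combo[1] - exact[1]) < 1e-12)
```

Changing `e` changes only the coefficients of the combination, never the two basis solutions, which is exactly the statement that the solution set is a two-dimensional space here.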

2. Fundamental matrix and state transition matrix

Definition: The $n \times n$ matrix

$$\Psi(t) = [\psi_1(t)\;\; \psi_2(t)\;\; \cdots\;\; \psi_n(t)],$$

formed by n linearly independent solutions of (1-50), is said to be a fundamental matrix of (1-50).

Some properties of a fundamental matrix:

1) $\dot{\Psi}(t) = A(t)\Psi(t)$;

2) $\Psi(t)E$ is also a fundamental matrix, where $E$ is a nonsingular constant matrix.

Theorem 3: A fundamental matrix of (1-50) is nonsingular for every $t$ in $(-\infty, +\infty)$.

Proof: Apply Corollary 1 directly. If $\Psi(t_1)$ were singular for some $t_1$, there would exist a nonzero vector $\alpha$ with $\Psi(t_1)\alpha = 0$. Since $\Psi(t)\alpha$ is a solution of (1-50) that vanishes at $t_1$, Corollary 1 gives $\Psi(t)\alpha \equiv 0$, contradicting the linear independence of the columns of $\Psi(t)$. Q.E.D.

Theorem 4: Let 1 and 2 be two fundamental matrixes of (1-50) Theorem 4: Let 1 and 2 be two fundamental matrixes of (1-50). Then, there exists a n×n nonsingular constant matrix C, such that 1(t)=2(t)C

State transition matrix

Definition 9: Let $\Psi(t)$ be any fundamental matrix of (1-50). Then

$$\Phi(t, t_0) := \Psi(t)\Psi^{-1}(t_0), \quad t, t_0 \in (-\infty, +\infty),$$

is said to be the state transition matrix of (1-50).

Some important properties of the state transition matrix:

1) $\Phi(t, t) = I$;

2) $\Phi^{-1}(t, t_0) = \Phi(t_0, t)$;

3) $\Phi(t_2, t_0) = \Phi(t_2, t_1)\Phi(t_1, t_0)$;

4) Under the initial condition $x(t_0) = x_0$, the solution of (1-50) is

$$x(t) = \Phi(t, t_0)x_0. \qquad (1\text{-}53)$$

Hence, $\Phi(t, t_0)$ can be regarded as a linear transformation that maps the state $x_0$ at time $t_0$ to the state $x(t)$ at time $t$. In fact, $x(t)$ can always be expressed as

$$x(t) = \Psi(t)\Psi^{-1}(t_0)x_0 = \Phi(t, t_0)x_0;$$

in particular, $x(t_0) = \Psi(t_0)\Psi^{-1}(t_0)x_0 = x_0$, from which we obtain the conclusion.

Example: Prove that the state transition matrix is unique, i.e. the state transition matrix does not depend on the choice of fundamental matrix.
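The claim in the example follows from Theorem 4: if $\Psi_2 = \Psi_1 C$, the constant factor $C$ cancels in $\Psi_2(t)\Psi_2^{-1}(t_0) = \Psi_1(t)CC^{-1}\Psi_1^{-1}(t_0)$. A quick numerical check, using the hypothetical fundamental matrix $e^{At}$ for $A = [[0,1],[0,0]]$ and an arbitrarily chosen nonsingular $C$ (neither comes from the text):

```python
# Check that the state transition matrix does not depend on the choice
# of fundamental matrix: if Psi2 = Psi1 * C with C constant and
# nonsingular, then Psi2(t) Psi2(t0)^{-1} = Psi1(t) Psi1(t0)^{-1}.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

def psi1(t):
    # Fundamental matrix e^{At} for A = [[0, 1], [0, 0]].
    return [[1.0, t], [0.0, 1.0]]

C = [[2.0, 1.0], [0.0, 3.0]]       # arbitrary nonsingular constant matrix

def psi2(t):
    return matmul(psi1(t), C)      # another fundamental matrix (Theorem 4)

def phi(psi, t, t0):
    return matmul(psi(t), inv2(psi(t0)))

t, t0 = 2.0, 0.5
P1 = phi(psi1, t, t0)
P2 = phi(psi2, t, t0)
print(all(abs(P1[i][j] - P2[i][j]) < 1e-12
          for i in range(2) for j in range(2)))
```

Both computations return the same matrix $\Phi(t, t_0)$, here $[[1, t - t_0], [0, 1]]$, as the cancellation argument predicts.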

3. Non-homogeneous equation

Solutions of linear time-varying dynamical equations. The solution of the state equation

$$\dot{x}(t) = A(t)x(t) + B(t)u(t)$$

can be obtained by the method of variation of parameters. Let

$$x(t) = \Phi(t, t_0)z(t).$$

Then we have

$$\dot{x}(t) = \dot{\Phi}(t, t_0)z(t) + \Phi(t, t_0)\dot{z}(t) = A(t)\Phi(t, t_0)z(t) + \Phi(t, t_0)\dot{z}(t),$$

so $\Phi(t, t_0)\dot{z}(t) = B(t)u(t)$, i.e. $\dot{z}(t) = \Phi(t_0, t)B(t)u(t)$, and hence

$$z(t) = x(t_0) + \int_{t_0}^{t}\Phi(t_0, \tau)B(\tau)u(\tau)\,d\tau.$$

Theorem 1-5: The solution of the state equation is given by

$$x(t) = \Phi(t, t_0)x(t_0) + \int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau,$$

where $\Phi(t, t_0)x(t_0)$ is called the zero-input response and $\int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau$ is called the zero-state response.
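A numerical sanity check of Theorem 1-5 for a hypothetical LTI special case (the data $A = [[0,1],[0,0]]$, $B = (0,1)^T$, $u \equiv 1$ are chosen for illustration, not taken from the text): here $\Phi(t,\tau) = e^{A(t-\tau)}$, and the closed-form solution can be compared with a brute-force forward-Euler simulation of the state equation.

```python
# Verify  x(t) = Phi(t,0) x0 + \int_0^t Phi(t,tau) B u(tau) dtau
# for A = [[0,1],[0,0]], B = (0,1)^T, u = 1, Phi(t,tau) = e^{A(t-tau)}.

def closed_form(x0, t):
    # e^{As} = [[1, s], [0, 1]], so e^{As} B = (s, 1)^T and the
    # convolution integral evaluates to (t^2/2, t).
    return (x0[0] + t * x0[1] + t * t / 2.0, x0[1] + t)

def euler(x0, t, n=100000):
    """Forward-Euler integration of x' = A x + B u with u = 1."""
    h = t / n
    x1, x2 = x0
    for _ in range(n):
        x1, x2 = x1 + h * x2, x2 + h * 1.0
    return (x1, x2)

x0, t = (1.0, -1.0), 2.0
a = closed_form(x0, t)
b = euler(x0, t)
print(abs(a[0] - b[0]) < 1e-3 and abs(a[1] - b[1]) < 1e-3)
```

With the zero-input part ($x_0$ terms) and the zero-state part ($t^2/2$ and $t$ terms) visible separately in `closed_form`, the split claimed by the theorem is easy to see in this example.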

Relationship between the I/O and state-space descriptions

Corollary 1-5: The output of the dynamical equation (1-34) is

$$y(t) = C(t)\Phi(t, t_0)x(t_0) + C(t)\int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau + D(t)u(t).$$

If $x(t_0) = 0$, the impulse response matrix is

$$G(t, \tau) = C(t)\Phi(t, \tau)B(\tau) + D(t)\delta(t - \tau).$$

Solutions of a linear time-invariant dynamical equation

Consider the following linear time-invariant (LTI) dynamical equation:

$$\dot{x} = Ax + Bu, \qquad y = Cx + Du,$$

where $A$, $B$, $C$ and $D$ are $n \times n$, $n \times p$, $q \times n$ and $q \times p$ real constant matrices. From the corresponding homogeneous equation, we have:

Fundamental matrix: $e^{At}$;

State transition matrix: $\Phi(t, t_0) = e^{A(t - t_0)}$.

Usually we assume that $t_0 = 0$, so that

$$x(t) = e^{At}x(0) + \int_{0}^{t}e^{A(t-\tau)}Bu(\tau)\,d\tau,$$

$$y(t) = Ce^{At}x(0) + C\int_{0}^{t}e^{A(t-\tau)}Bu(\tau)\,d\tau + Du(t).$$
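The fundamental matrix $e^{At}$ is defined by the power series $I + At + (At)^2/2! + \cdots$. A minimal sketch of computing it by truncating that series, for a hypothetical nilpotent $A$ where the series terminates exactly (for general matrices one would use a robust routine such as `scipy.linalg.expm` instead):

```python
# For A = [[0, 1], [0, 0]] we have A^2 = 0, so the series
# e^{At} = I + At + (At)^2/2! + ...  stops after the linear term,
# giving e^{At} = [[1, t], [0, 1]] exactly.

def expm_series(A, t, terms=20):
    """Truncated power series for e^{At} (2x2 matrices, pure Python)."""
    n = 2
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]          # current term (At)^k / k!
    for k in range(1, terms):
        # term <- term * (A t) / k, i.e. (At)^k / k!
        term = [[sum(term[i][m] * A[m][j] * t for m in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

A = [[0.0, 1.0], [0.0, 0.0]]
E = expm_series(A, 2.5)
print(E)  # [[1.0, 2.5], [0.0, 1.0]]
```

Because this $A$ is nilpotent, the result is exact; plain series truncation is numerically poor for general $A$, which is why the sketch is labeled as such.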

The corresponding impulse response matrix is

$$G(t, \tau) = Ce^{A(t-\tau)}B + D\delta(t - \tau),$$

which depends only on the difference $t - \tau$; usually we write it as $G(t - \tau)$. The corresponding transfer function matrix of the above equation is

$$G(s) = C(sI - A)^{-1}B + D,$$

which is a rational function matrix.
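To make the formula $G(s) = C(sI - A)^{-1}B + D$ concrete, here is a hypothetical single-input single-output example (the matrices below are illustrative choices, not from the text) evaluated exactly with rational arithmetic; for this companion-form $A$ the transfer function works out to $1/(s^2 + 3s + 2)$.

```python
from fractions import Fraction as F

# G(s) = C (sI - A)^{-1} B + D for the hypothetical SISO system
#   A = [[0, 1], [-2, -3]], B = (0, 1)^T, C = (1, 0), D = 0.

A = [[F(0), F(1)], [F(-2), F(-3)]]
B = [F(0), F(1)]
C = [F(1), F(0)]
D = F(0)

def G(s):
    M = [[s - A[0][0], -A[0][1]],
         [-A[1][0], s - A[1][1]]]                  # sI - A
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    # (sI - A)^{-1} B via the 2x2 adjugate formula.
    v = [(M[1][1] * B[0] - M[0][1] * B[1]) / det,
         (-M[1][0] * B[0] + M[0][0] * B[1]) / det]
    return C[0] * v[0] + C[1] * v[1] + D

# Agrees with the hand-computed rational function at several points:
for s in (F(1), F(2), F(-5)):
    assert G(s) == 1 / (s * s + 3 * s + 2)
print(G(F(1)))  # 1/6
```

Using `Fraction` keeps every entry exact, so the check against the rational function is an equality rather than a floating-point tolerance.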

Equivalent dynamical equations

a) Time-invariant case

Definition 1-10: An LTI dynamical equation $(\bar{A}, \bar{B}, \bar{C}, \bar{D})$ is said to be equivalent to the dynamical equation $(A, B, C, D)$ if and only if there exists a nonsingular matrix $P$ such that

$$\bar{A} = PAP^{-1}, \quad \bar{B} = PB, \quad \bar{C} = CP^{-1}, \quad \bar{D} = D,$$

where $P$ is said to be an equivalence transformation.
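An equivalence transformation changes the state coordinates but not the input-output behavior. A quick check of this for a hypothetical system and transformation (the matrices $A$, $B$, $C$, $P$ below are illustrative, not from the text): the transfer function of $(\bar{A}, \bar{B}, \bar{C}, \bar{D})$ equals that of $(A, B, C, D)$.

```python
from fractions import Fraction as F

# Verify that Abar = P A P^{-1}, Bbar = P B, Cbar = C P^{-1}, Dbar = D
# leaves the transfer function C (sI - A)^{-1} B + D unchanged.

def mm(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

def tf(A, B, C, D, s):
    Minv = inv2([[s - A[0][0], -A[0][1]], [-A[1][0], s - A[1][1]]])
    v = [Minv[0][0] * B[0] + Minv[0][1] * B[1],
         Minv[1][0] * B[0] + Minv[1][1] * B[1]]
    return C[0] * v[0] + C[1] * v[1] + D

A = [[F(0), F(1)], [F(-2), F(-3)]]
B = [F(0), F(1)]
C = [F(1), F(0)]
D = F(0)
P = [[F(1), F(1)], [F(0), F(1)]]        # nonsingular change of state

Pinv = inv2(P)
Abar = mm(mm(P, A), Pinv)
Bbar = [P[0][0] * B[0] + P[0][1] * B[1], P[1][0] * B[0] + P[1][1] * B[1]]
Cbar = [C[0] * Pinv[0][0] + C[1] * Pinv[1][0],
        C[0] * Pinv[0][1] + C[1] * Pinv[1][1]]

for s in (F(1), F(3), F(-4)):
    assert tf(A, B, C, D, s) == tf(Abar, Bbar, Cbar, D, s)
print("transfer functions agree")
```

Algebraically the check succeeds because $\bar{C}(sI - \bar{A})^{-1}\bar{B} = CP^{-1}\,P(sI - A)^{-1}P^{-1}\,PB$, so every factor of $P$ cancels.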

Definition: Two time-invariant dynamical systems are said to be zero-state equivalent if and only if they have the same impulse response matrix, or equivalently the same transfer function matrix.

Theorem: Two equivalent LTI systems are zero-state equivalent.

Recall that the state of a system is an auxiliary quantity introduced to describe the system internally. The existence of equivalence transformations shows that the choice of the state is not unique; different methods of analysis often lead to different choices of the state.

Example: One can exhibit two systems that are not equivalent dynamical equations but are nevertheless zero-state equivalent. Thus, although equivalence implies zero-state equivalence, the converse is not true.
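The concrete pair of systems from the original slide is not reproduced here; as a hypothetical stand-in, the following pair has the same flavor. Two realizations of $G(s) = 1/(s+1)$ with different state dimensions cannot be equivalent (no square $P$ exists between spaces of different dimension), yet both have impulse response $e^{-t}$, so they are zero-state equivalent.

```python
import math

# S1: x' = -x + u,                        y = x            (dimension 1)
# S2: x' = diag(-1, -2) x + (1, 0)^T u,   y = (1, 0) x     (dimension 2)
# Both realize G(s) = 1/(s + 1); the -2 mode of S2 is disconnected
# from the output path, so it never appears in the impulse response.

def g1(t):
    return math.exp(-t)                    # C e^{At} B for S1

def g2(t):
    # e^{At} is diagonal, so C e^{At} B = 1*e^{-t}*1 + 0*e^{-2t}*0.
    return 1.0 * math.exp(-1.0 * t) * 1.0 + 0.0 * math.exp(-2.0 * t) * 0.0

print(all(abs(g1(t) - g2(t)) < 1e-12 for t in (0.0, 0.5, 1.0, 3.0)))
```

The extra mode of S2 makes it a non-minimal realization; zero-state equivalence only constrains the input-output map, not the internal dimension, which is why the converse of the theorem fails.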