1 For Reference: Basics of Matrices and Vectors
4.0 For Reference: Basics of Matrices and Vectors

2 Linear Systems of ODEs
Most of our linear systems will consist of two linear ODEs in two unknown functions y1(t), y2(t),

(1)  y’1 = a11 y1 + a12 y2,        for example,   y’1 = −5y1 + 2y2
     y’2 = a21 y1 + a22 y2,                        y’2 = 13y1 + y2

(perhaps with additional given functions g1(t), g2(t) on the right in the two ODEs). Similarly, a linear system of n first-order ODEs in n unknown functions y1(t), … , yn(t) is of the form

(2)  y’1 = a11 y1 + a12 y2 + … + a1n yn
     y’2 = a21 y1 + a22 y2 + … + a2n yn
     · · ·
     y’n = an1 y1 + an2 y2 + … + ann yn

(perhaps with an additional given function on the right in each ODE).
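As a quick illustration (our addition, not part of the original slides), the example system y’1 = −5y1 + 2y2, y’2 = 13y1 + y2 can be integrated numerically; the initial values chosen below are arbitrary, and NumPy/SciPy are assumed to be available.

```python
# Sketch: numerically integrate the example system from (1),
#   y1' = -5*y1 + 2*y2,  y2' = 13*y1 + y2,
# with an arbitrarily chosen initial condition (not from the slides).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    y1, y2 = y
    return [-5*y1 + 2*y2, 13*y1 + y2]

sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0], dense_output=True)
print(sol.y[:, -1])   # values of y1 and y2 at t = 1
```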

3 Some Definitions and Terms
Matrices. In (1) the (constant or variable) coefficients form a 2 x 2 matrix A, that is, an array

(3)  A = [a11  a12]
         [a21  a22]

Similarly, the coefficients in (2) form an n x n matrix

(4)  A = [a11  a12  …  a1n]
         [a21  a22  …  a2n]
         [ …    …        … ]
         [an1  an2  …  ann]

The a11, a12, … are called entries, the horizontal lines of entries are called rows, and the vertical lines are called columns.

4 Some Definitions and Terms (continued)
Vectors. A column vector x with n components x1, … , xn is of the form

x = [x1]
    [x2]
    [ ·]
    [xn]

Similarly, a row vector v is of the form

v = [v1  v2  …  vn]
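A minimal sketch (our addition, assuming NumPy) of how a 2 x 2 matrix, a column vector, and a row vector from these definitions can be represented; the numerical entries are made up.

```python
# Sketch: a 2 x 2 matrix, a column vector, and a row vector as NumPy arrays.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])        # 2 x 2 matrix with entries a11, a12, a21, a22
x = np.array([[5.0],
              [6.0]])             # column vector with components x1, x2
v = np.array([[7.0, 8.0]])        # row vector
print(A.shape, x.shape, v.shape)  # (2, 2) (2, 1) (1, 2)
```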

5 Calculations with Matrices and Vectors
Equality. Two n x n matrices are equal if and only if corresponding entries are equal. Thus for n = 2, let

A = [a11  a12],   B = [b11  b12].
    [a21  a22]        [b21  b22]

Then A = B if and only if

a11 = b11,  a12 = b12,
a21 = b21,  a22 = b22.

Two column vectors (or two row vectors) are equal if and only if they both have n components and corresponding components are equal.

6 Calculations with Matrices and Vectors (continued)
Equality (continued): worked numerical example of componentwise equality of two column vectors.
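Since the slide's own worked example is not reproduced here, the following is a stand-in sketch (our addition, with made-up numbers) of an entrywise equality check.

```python
# Sketch: two matrices (or vectors) are equal exactly when all corresponding
# entries agree; np.array_equal performs this entrywise comparison.
import numpy as np

A = np.array([[4.0, 2.0], [1.0, 3.0]])
B = np.array([[4.0, 2.0], [1.0, 3.0]])
print(np.array_equal(A, B))            # True: every entry matches
print(np.array_equal(A, B + 1.0))      # False: entries differ
```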

7 Calculations with Matrices and Vectors (continued)
Addition is performed by adding corresponding entries (or components); here, the matrices must both be n x n, and the vectors must both have the same number of components. Thus for n = 2,

(5)  A + B = [a11 + b11   a12 + b12],      v + x = [v1 + x1].
             [a21 + b21   a22 + b22]               [v2 + x2]

8 Calculations with Matrices and Vectors (continued)
Matrix Multiplication. The product C = AB (in this order) of two n x n matrices A = [ajk] and B = [bjk] is the n x n matrix C = [cjk] with entries

(6)  cjk = aj1 b1k + aj2 b2k + … + ajn bnk,    j = 1, … , n;  k = 1, … , n;

that is, multiply each entry in the jth row of A by the corresponding entry in the kth column of B and then add these n products. One says briefly that this is a “multiplication of rows into columns.”
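To make the “rows into columns” rule in (6) concrete, here is a small sketch (our addition, arbitrary entries) that computes each cjk by the explicit sum and compares the result with NumPy's built-in product.

```python
# Sketch: entry C[j, k] of C = AB is the sum over m of A[j, m]*B[m, k], as in (6).
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

n = A.shape[0]
C = np.zeros((n, n))
for j in range(n):
    for k in range(n):
        C[j, k] = sum(A[j, m] * B[m, k] for m in range(n))  # jth row into kth column

print(np.allclose(C, A @ B))   # True: matches the built-in matrix product
```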

9 Calculations with Matrices and Vectors (continued)
Matrix Multiplication (continued). CAUTION! Matrix multiplication is not commutative: AB ≠ BA in general.
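Because the slide's numerical example is not reproduced here, the following hedged stand-in (our own numbers) shows that AB and BA generally differ.

```python
# Sketch: matrix multiplication is not commutative; AB != BA for these matrices.
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
print(A @ B)                         # [[2. 1.] [1. 0.]]
print(B @ A)                         # [[0. 1.] [1. 2.]]
print(np.array_equal(A @ B, B @ A))  # False
```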

10 Systems of ODEs as Vector Equations
Differentiation. The derivative of a matrix (or vector) with variable entries (or components) is obtained by differentiating each entry (or component). Thus, if y(t) = [y1(t)  y2(t)]T, then y’(t) = [y’1(t)  y’2(t)]T. Using matrix multiplication and differentiation, we can now write (1) as

(7)  y’ = Ay = [a11  a12] y,    for example,   y’ = [−5   2] y.
               [a21  a22]                            [13   1]
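A short sketch (our addition, assuming SymPy) of componentwise differentiation of a vector with variable components; the particular functions are hypothetical and not from the slide.

```python
# Sketch: differentiating a vector function component by component with SymPy.
import sympy as sp

t = sp.symbols('t')
y = sp.Matrix([sp.exp(2*t), sp.sin(t)])  # hypothetical vector y(t)
print(y.diff(t))                         # Matrix([[2*exp(2*t)], [cos(t)]])
```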

11 Some Further Operations and Terms
Transposition is the operation of writing columns as rows and conversely; it is indicated by T. Thus the transpose AT of the 2 x 2 matrix

A = [a11  a12]    is    AT = [a11  a21].
    [a21  a22]               [a12  a22]

The transpose of a column vector, say, v = [v1  v2]T, is the row vector vT = [v1  v2], and conversely.

12 Some Further Operations and Terms (continued)
Inverse of a Matrix. The n x n unit matrix I is the n x n matrix with main diagonal 1, 1, … , 1 and all other entries zero. If, for a given n x n matrix A, there is an n x n matrix B such that AB = BA = I, then A is called nonsingular and B is called the inverse of A and is denoted by A−1; thus

(8)  AA−1 = A−1A = I.

The inverse exists if and only if the determinant det A of A is not zero.

13 Some Further Operations and Terms (continued)
Inverse of a Matrix (continued). If A has no inverse, it is called singular. For n = 2,

(9)  A−1 = (1/det A) [ a22  −a12]
                     [−a21   a11]

where the determinant of A is

(10)  det A = a11 a22 − a12 a21.
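A minimal sketch (our addition, arbitrary entries) computing the 2 x 2 inverse from (9) and (10) and comparing it with NumPy's general routine.

```python
# Sketch: 2 x 2 inverse via formula (9), A^{-1} = (1/det A) [[a22, -a12], [-a21, a11]].
import numpy as np

A = np.array([[3.0, 1.0], [4.0, 2.0]])
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]        # formula (10)
A_inv = (1.0 / det) * np.array([[ A[1, 1], -A[0, 1]],
                                [-A[1, 0],  A[0, 0]]])
print(np.allclose(A_inv, np.linalg.inv(A)))        # True
print(np.allclose(A @ A_inv, np.eye(2)))           # AA^{-1} = I, as in (8)
```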

14 Some Further Operations and Terms (continued)
Linear Independence. r given vectors v(1), … , v(r) with n components are called a linearly independent set or, more briefly, linearly independent, if

(11)  c1 v(1) + … + cr v(r) = 0

implies that all scalars c1, … , cr must be zero; here, 0 denotes the zero vector, whose n components are all zero. If (11) also holds for scalars not all zero (so that at least one of these scalars is not zero), then these vectors are called a linearly dependent set or, briefly, linearly dependent, because then at least one of them can be expressed as a linear combination of the others; that is, if, for instance, c1 ≠ 0 in (11), then we can obtain

v(1) = −(1/c1) (c2 v(2) + … + cr v(r)).
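One practical way to test (11) is to place the vectors in the columns of a matrix and check its rank; a sketch (our addition, with made-up vectors) assuming NumPy:

```python
# Sketch: r vectors are linearly independent exactly when the matrix having them
# as columns has rank r (then c1*v1 + ... + cr*vr = 0 forces all c's to be zero).
import numpy as np

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 3.0])   # v3 = v1 + v2, so this set is dependent

print(np.linalg.matrix_rank(np.column_stack([v1, v2])))      # 2 -> independent
print(np.linalg.matrix_rank(np.column_stack([v1, v2, v3])))  # 2 < 3 -> dependent
```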

15 Eigenvalues, Eigenvectors
Let A = [ajk] be an n x n matrix. Consider the equation

(12)  Ax = λx,

where λ is a scalar (a real or complex number) to be determined and x is a vector to be determined. Now, for every λ, a solution is x = 0. A scalar λ such that (12) holds for some vector x ≠ 0 is called an eigenvalue of A, and this vector is called an eigenvector of A corresponding to this eigenvalue λ. We can write (12) as Ax − λx = 0 or

(13)  (A − λI)x = 0.
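A short sketch (our addition) computing eigenvalues and eigenvectors numerically and checking that each pair satisfies Ax = λx from (12); the matrix reused here is the coefficient matrix of the earlier example system.

```python
# Sketch: eigenvalues/eigenvectors of a 2 x 2 matrix; each column of V satisfies
# A @ V[:, i] = lam[i] * V[:, i], i.e. equation (12).
import numpy as np

A = np.array([[-5.0, 2.0],
              [13.0, 1.0]])
lam, V = np.linalg.eig(A)
for i in range(len(lam)):
    print(np.allclose(A @ V[:, i], lam[i] * V[:, i]))   # True for each eigenpair
```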

16 4.1 Systems of ODEs as Models in Engineering Applications

17 Conversion of an nth-Order ODE to a System
We show that an nth-order ODE of the general form (8) (see Theorem 1) can be converted to a system of n first-order ODEs. This is practically and theoretically important: practically because it permits the study and solution of single ODEs by methods for systems, and theoretically because it opens a way of including the theory of higher-order ODEs in that of first-order systems. This conversion is another reason for the importance of systems, in addition to their use as models in various basic applications. The idea of the conversion is simple and straightforward, as follows.

18 Theorem 1 Conversion of an ODE
Theorem 1 (Conversion of an ODE). An nth-order ODE

(8)  y(n) = F(t, y, y’, … , y(n−1))

can be converted to a system of n first-order ODEs by setting

(9)  y1 = y,  y2 = y’,  y3 = y”, … , yn = y(n−1).

This system is of the form

(10)  y’1 = y2
      y’2 = y3
      · · ·
      y’n−1 = yn
      y’n = F(t, y1, … , yn).
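As an illustration of (9)-(10) (our addition, using a made-up second-order ODE y” = −y’ − y rather than an example from the slides), the conversion and a numerical solution might look like this:

```python
# Sketch: convert the 2nd-order ODE y'' = -y' - y into a first-order system by
# setting y1 = y, y2 = y' (as in (9)), so y1' = y2 and y2' = -y2 - y1 (as in (10)).
import numpy as np
from scipy.integrate import solve_ivp

def system(t, y):
    y1, y2 = y
    return [y2, -y2 - y1]

sol = solve_ivp(system, (0.0, 10.0), [1.0, 0.0])   # arbitrary initial values
print(sol.y[0, -1])                                # y1 = y at t = 10
```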

19 4.2 Basic Theory of Systems of ODEs. Wronskian

20 4.2 Basic Theory of Systems of ODEs. Wronskian
The first-order systems in the last section were special cases of the more general system

(1)  y’1 = f1(t, y1, … , yn)
     y’2 = f2(t, y1, … , yn)
     · · ·
     y’n = fn(t, y1, … , yn).

We can write the system (1) as a vector equation by introducing the column vectors y = [y1 … yn]T and f = [f1 … fn]T (where T means transposition and saves us the space that would be needed for writing y and f as columns). This gives

(1)  y’ = f(t, y).

21 Solutions and Initial Value Problems
A solution of (1) on some interval a < t < b is a set of n differentiable functions y1 = h1(t), … , yn = hn(t) on a < t < b that satisfy (1) throughout this interval. In vector form, introducing the “solution vector” h = [h1 … hn]T (a column vector!), we can write y = h(t). An initial value problem for (1) consists of (1) and n given initial conditions

(2)  y1(t0) = K1,  y2(t0) = K2, … , yn(t0) = Kn,

in vector form y(t0) = K, where t0 is a specified value of t in the interval considered and the components of K = [K1 … Kn]T are given numbers.

22 Theorem 1 Existence and Uniqueness Theorem
Theorem 1 (Existence and Uniqueness Theorem). Let f1, … , fn in (1) be continuous functions having continuous partial derivatives ∂f1/∂y1, … , ∂f1/∂yn, … , ∂fn/∂yn in some domain R of t y1 y2 … yn-space containing the point (t0, K1, … , Kn). Then (1) has a solution on some interval t0 − α < t < t0 + α satisfying (2), and this solution is unique.

23 4.2 Basic Theory of Systems of ODEs. Wronskian
Linear Systems. Extending the notion of a linear ODE, we call (1) a linear system if it is linear in y1, … , yn; that is, if it can be written

(3)  y’1 = a11(t) y1 + … + a1n(t) yn + g1(t)
     · · ·
     y’n = an1(t) y1 + … + ann(t) yn + gn(t).

24 Linear Systems (continued)
Linear Systems (continued). As a vector equation this becomes

(3)  y’ = Ay + g,

where A = [ajk(t)] is the n x n coefficient matrix and y = [y1 … yn]T and g = [g1 … gn]T are column vectors. This system is called homogeneous if g = 0, so that it is

(4)  y’ = Ay.

If g ≠ 0, then (3) is called nonhomogeneous.

25 Theorem 2 Existence and Uniqueness in the Linear Case
Theorem 2 (Existence and Uniqueness in the Linear Case). Let the ajk’s and gj’s in (3) be continuous functions of t on an open interval α < t < β containing the point t = t0. Then (3) has a solution y(t) on this interval satisfying (2), and this solution is unique.

26 Theorem 3 Superposition Principle or Linearity Principle
Theorem 3 (Superposition Principle or Linearity Principle). If y(1) and y(2) are solutions of the homogeneous linear system (4) on some interval, so is any linear combination y = c1 y(1) + c2 y(2).
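A numerical sanity check of the superposition principle (our addition, assuming NumPy/SciPy): for y’ = Ay with constant A, the solution with initial vector y0 is expm(At) y0, which is linear in y0, so a linear combination of two solutions is the solution started from the combined initial vector.

```python
# Sketch: for the homogeneous system y' = Ay, solutions are y(t) = expm(A t) y0.
# A linear combination c1*y1(t) + c2*y2(t) equals the solution started from
# c1*y1(0) + c2*y2(0), illustrating Theorem 3.
import numpy as np
from scipy.linalg import expm

A = np.array([[-5.0, 2.0], [13.0, 1.0]])
t = 0.3
Phi = expm(A * t)                       # state-transition matrix expm(A t)

y1_0 = np.array([1.0, 0.0])             # two arbitrary initial vectors
y2_0 = np.array([0.0, 1.0])
c1, c2 = 2.0, -3.0

combo_of_solutions = c1 * (Phi @ y1_0) + c2 * (Phi @ y2_0)
solution_of_combo = Phi @ (c1 * y1_0 + c2 * y2_0)
print(np.allclose(combo_of_solutions, solution_of_combo))   # True
```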

