Basics of Matrices and Vectors


4.0 For Reference: Basics of Matrices and Vectors

Most of our linear systems will consist of two linear ODEs in two unknown functions y1(t), y2(t),

(1)  y’1 = a11y1 + a12y2,        for example,   y’1 = −5y1 + 2y2,
     y’2 = a21y1 + a22y2,                       y’2 = 13y1 + y2

(perhaps with additional given functions g1(t), g2(t) on the right in the two ODEs). Similarly, a linear system of n first-order ODEs in n unknown functions y1(t), … , yn(t) is of the form

(2)  y’1 = a11y1 + a12y2 + … + a1nyn
     y’2 = a21y1 + a22y2 + … + a2nyn
     . . . . . . . . . . . . . . . . .
     y’n = an1y1 + an2y2 + … + annyn

(perhaps with an additional given function on the right in each ODE).

Some Definitions and Terms

Matrices. In (1) the (constant or variable) coefficients form a 2 × 2 matrix A, that is, an array

(3)  A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}.

Similarly, the coefficients in (2) form an n × n matrix

(4)  A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}.

The a11, a12, … are called entries, the horizontal lines rows, and the vertical lines columns.

Some Definitions and Terms (continued)

Vectors. A column vector x with n components x1, … , xn is of the form

x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}.

Similarly, a row vector v is of the form

v = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}.

Calculations with Matrices and Vectors

Equality. Two n × n matrices are equal if and only if corresponding entries are equal. Thus for n = 2, let

A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}, \qquad B = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}.

Then A = B if and only if

a11 = b11,   a12 = b12,
a21 = b21,   a22 = b22.

Two column vectors (or two row vectors) are equal if and only if they both have n components and corresponding components are equal.

Calculations with Matrices and Vectors (continued)

Equality (continued). Thus, for n = 2, let

v = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}, \qquad x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.

Then v = x if and only if v1 = x1 and v2 = x2.

Calculations with Matrices and Vectors (continued)

Addition is performed by adding corresponding entries (or components); here, the matrices must both be n × n, and the vectors must both have the same number of components. Thus for n = 2,

(5)  A + B = \begin{bmatrix} a_{11} + b_{11} & a_{12} + b_{12} \\ a_{21} + b_{21} & a_{22} + b_{22} \end{bmatrix}, \qquad v + x = \begin{bmatrix} v_1 + x_1 \\ v_2 + x_2 \end{bmatrix}.
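As a quick, hedged illustration (NumPy is my choice of tool here, and the numeric matrices are made up for the example, not taken from the slides), entrywise equality and addition look like this:

```python
import numpy as np

# Made-up 2 x 2 matrices, for illustration only.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# Equality: every corresponding entry must agree.
print(np.array_equal(A, A))   # True
print(np.array_equal(A, B))   # False

# Addition: corresponding entries are added; shapes must match, as in (5).
print(A + B)                  # [[ 6.  8.] [10. 12.]]
```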

Calculations with Matrices and Vectors (continued)

Matrix Multiplication. The product C = AB (in this order) of two n × n matrices A = [ajk] and B = [bjk] is the n × n matrix C = [cjk] with entries

(6)  cjk = aj1b1k + aj2b2k + … + ajnbnk,   j = 1, … , n;  k = 1, … , n;

that is, multiply each entry in the jth row of A by the corresponding entry in the kth column of B and then add these n products. One says briefly that this is a “multiplication of rows into columns.”
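The following sketch (Python/NumPy assumed as the illustration language; the matrices are made-up examples) spells out formula (6) with explicit loops and compares the result with NumPy's built-in product:

```python
import numpy as np

def matmul_rows_into_columns(A, B):
    """Compute C = AB entrywise via (6): c_jk = a_j1*b_1k + ... + a_jn*b_nk."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for j in range(n):          # row index of A (and of C)
        for k in range(n):      # column index of B (and of C)
            C[j, k] = sum(A[j, m] * B[m, k] for m in range(n))
    return C

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

C = matmul_rows_into_columns(A, B)
print(np.allclose(C, A @ B))      # True: matches NumPy's matrix product
print(np.allclose(A @ B, B @ A))  # False here, illustrating AB != BA in general
```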

Calculations with Matrices and Vectors (continued)

Matrix Multiplication (continued). For example, for n = 2, formula (6) gives

AB = \begin{bmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\ a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} \end{bmatrix}.

CAUTION! Matrix multiplication is not commutative, AB ≠ BA in general.

Systems of ODEs as Vector Equations

Differentiation. The derivative of a matrix (or vector) with variable entries (or components) is obtained by differentiating each entry (or component). Thus, if

y(t) = \begin{bmatrix} y_1(t) \\ y_2(t) \end{bmatrix}, \qquad \text{then} \qquad y'(t) = \begin{bmatrix} y_1'(t) \\ y_2'(t) \end{bmatrix}.

Using matrix multiplication and differentiation, we can now write (1) as

(7)  y’ = Ay = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}, \qquad \text{for example,} \qquad y' = \begin{bmatrix} -5 & 2 \\ 13 & 1 \end{bmatrix} y.
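To see (7) concretely, here is a minimal numerical sketch using scipy.integrate.solve_ivp on the example system from (1); the initial condition y(0) = (1, 0) and the time span are my own assumptions for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Coefficient matrix of the example in (1): y1' = -5*y1 + 2*y2, y2' = 13*y1 + y2.
A = np.array([[-5.0, 2.0],
              [13.0, 1.0]])

def rhs(t, y):
    # Vector form (7): y' = A y.
    return A @ y

y0 = [1.0, 0.0]   # assumed initial condition, not part of the slides
sol = solve_ivp(rhs, (0.0, 1.0), y0, t_eval=np.linspace(0.0, 1.0, 5))
print(sol.t)      # sample times
print(sol.y)      # rows are y1(t) and y2(t) at those times
```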

Some Further Operations and Terms

Transposition is the operation of writing columns as rows and conversely, and is indicated by T. Thus the transpose AT of the 2 × 2 matrix

A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \qquad \text{is} \qquad A^T = \begin{bmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{bmatrix}.

The transpose of a column vector, say

x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \qquad \text{is the row vector} \qquad x^T = \begin{bmatrix} x_1 & x_2 \end{bmatrix},

and conversely.

Some Further Operations and Terms (continued)

Inverse of a Matrix. The n × n unit matrix I is the n × n matrix with main diagonal 1, 1, … , 1 and all other entries zero. If, for a given n × n matrix A, there is an n × n matrix B such that AB = BA = I, then A is called nonsingular and B is called the inverse of A and is denoted by A−1; thus

(8)  AA−1 = A−1A = I.

The inverse exists if the determinant det A of A is not zero.

Some Further Operations and Terms (continued)

Inverse of a Matrix (continued). If A has no inverse, it is called singular. For n = 2,

(9)  A^{-1} = \frac{1}{\det A} \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix},

where the determinant of A is

(10)  det A = a11a22 − a12a21.
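A small sketch of (9) and (10) in Python/NumPy (my choice of language; the numeric matrix is a made-up nonsingular example), checked against numpy.linalg.inv and against (8):

```python
import numpy as np

def inverse_2x2(A):
    """Inverse of a 2 x 2 matrix via (9), valid only when det A != 0 (see (10))."""
    a11, a12 = A[0, 0], A[0, 1]
    a21, a22 = A[1, 0], A[1, 1]
    det = a11 * a22 - a12 * a21            # formula (10)
    if det == 0:
        raise ValueError("matrix is singular: det A = 0")
    return (1.0 / det) * np.array([[ a22, -a12],
                                   [-a21,  a11]])

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])                 # det A = 10, so A is nonsingular
Ainv = inverse_2x2(A)
print(np.allclose(Ainv, np.linalg.inv(A)))  # True
print(np.allclose(A @ Ainv, np.eye(2)))     # AA^{-1} = I, as in (8)
```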

Some Further Operations and Terms (continued)

Linear Independence. r given vectors v(1), … , v(r) with n components are called a linearly independent set or, more briefly, linearly independent, if

(11)  c1v(1) + … + crv(r) = 0

implies that all the scalars c1, … , cr must be zero; here, 0 denotes the zero vector, whose n components are all zero. If (11) also holds for scalars not all zero (so that at least one of these scalars is not zero), then these vectors are called a linearly dependent set or, briefly, linearly dependent, because then at least one of them can be expressed as a linear combination of the others; that is, if, for instance, c1 ≠ 0 in (11), then we can obtain

v(1) = −(1/c1)(c2v(2) + … + crv(r)).
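In practice, linear independence of a set of vectors can be tested numerically by checking whether the matrix having those vectors as columns has full column rank. A sketch (NumPy assumed; the vectors are made-up examples):

```python
import numpy as np

def are_linearly_independent(vectors):
    """True iff the matrix whose columns are the given vectors has rank
    equal to the number of vectors, i.e. (11) forces all c_i = 0."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([2.0, 2.0, 6.0])   # v3 = 2*v1 + 2*v2, so {v1, v2, v3} is dependent

print(are_linearly_independent([v1, v2]))      # True
print(are_linearly_independent([v1, v2, v3]))  # False
```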

Eigenvalues, Eigenvectors

Let A = [ajk] be an n × n matrix. Consider the equation

(12)  Ax = λx,

where λ is a scalar (a real or complex number) to be determined and x is a vector to be determined. Now, for every λ, a solution is x = 0. A scalar λ such that (12) holds for some vector x ≠ 0 is called an eigenvalue of A, and this vector is called an eigenvector of A corresponding to this eigenvalue λ. We can write (12) as Ax − λx = 0 or

(13)  (A − λI)x = 0.
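As a brief sketch (NumPy assumed; the matrix is the example coefficient matrix from (1), reused only for illustration), numpy.linalg.eig returns the eigenvalues and the eigenvectors as columns, and we can verify (12) directly:

```python
import numpy as np

A = np.array([[-5.0, 2.0],
              [13.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # eigenvectors are the columns

for i, lam in enumerate(eigenvalues):
    x = eigenvectors[:, i]
    # Check (12): A x = lambda x, equivalently (13): (A - lambda I) x = 0.
    print(lam, np.allclose(A @ x, lam * x))
```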

4.1 Systems of ODEs as Models in Engineering Applications

Conversion of an nth-Order ODE to a System

We show that an nth-order ODE of the general form (8) (see Theorem 1) can be converted to a system of n first-order ODEs. This is practically and theoretically important: practically because it permits the study and solution of single ODEs by methods for systems, and theoretically because it opens a way of including the theory of higher order ODEs in that of first-order systems. This conversion is another reason for the importance of systems, in addition to their use as models in various basic applications. The idea of the conversion is simple and straightforward, as follows.

Theorem 1  Conversion of an ODE

An nth-order ODE

(8)  y(n) = F(t, y, y’, … , y(n−1))

can be converted to a system of n first-order ODEs by setting

(9)  y1 = y,  y2 = y’,  y3 = y”,  … ,  yn = y(n−1).

This system is of the form

(10)  y’1 = y2
      y’2 = y3
      . . . . . .
      y’n−1 = yn
      y’n = F(t, y1, y2, … , yn).
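As a concrete sketch of Theorem 1 (the particular ODE y” = −y, the initial values, and the use of SciPy are my own choices, not from the slides): setting y1 = y, y2 = y’ turns y” = −y into y’1 = y2, y’2 = −y1, which can then be solved by any first-order solver; with y(0) = 1, y’(0) = 0 the exact solution is cos t, and the check below confirms the numerical one agrees.

```python
import numpy as np
from scipy.integrate import solve_ivp

# y'' = F(t, y, y') with F = -y, converted via (9): y1 = y, y2 = y'.
def system(t, Y):
    y1, y2 = Y
    return [y2,     # y1' = y2
            -y1]    # y2' = F(t, y1, y2) = -y1

t_eval = np.linspace(0.0, 2 * np.pi, 50)
sol = solve_ivp(system, (0.0, 2 * np.pi), [1.0, 0.0],
                t_eval=t_eval, rtol=1e-8, atol=1e-10)

print(np.allclose(sol.y[0], np.cos(t_eval), atol=1e-5))  # y1(t) matches cos t
```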

4.2 Basic Theory of Systems of ODEs. Wronskian

The first-order systems in the last section were special cases of the more general system

(1)  y’1 = f1(t, y1, … , yn)
     y’2 = f2(t, y1, … , yn)
     . . . . . . . . . . . .
     y’n = fn(t, y1, … , yn).

We can write the system (1) as a vector equation by introducing the column vectors y = [y1 … yn]T and f = [f1 … fn]T (where T means transposition and saves us the space that would be needed for writing y and f as columns). This gives

(1)  y’ = f(t, y).

A solution of (1) on some interval a < t < b is a set of n differentiable functions y1 = h1(t), … , yn = hn(t) on a < t < b that satisfy (1) throughout this interval. In vector form, introducing the “solution vector” h = [h1 … hn]T (a column vector!), we can write y = h(t). An initial value problem for (1) consists of (1) and n given initial conditions

(2)  y1(t0) = K1,  y2(t0) = K2,  … ,  yn(t0) = Kn,

in vector form y(t0) = K, where t0 is a specified value of t in the interval considered and the components of K = [K1 … Kn]T are given numbers.

Theorem 1  Existence and Uniqueness Theorem

Let f1, … , fn in (1) be continuous functions having continuous partial derivatives ∂f1/∂y1, … , ∂f1/∂yn, … , ∂fn/∂yn in some domain R of ty1y2 … yn-space containing the point (t0, K1, … , Kn). Then (1) has a solution on some interval t0 − α < t < t0 + α satisfying (2), and this solution is unique.

Linear Systems

Extending the notion of a linear ODE, we call (1) a linear system if it is linear in y1, … , yn; that is, if it can be written

(3)  y’1 = a11(t)y1 + … + a1n(t)yn + g1(t)
     . . . . . . . . . . . . . . . . . . .
     y’n = an1(t)y1 + … + ann(t)yn + gn(t).

Linear Systems (continued)

As a vector equation this becomes

(3)  y’ = Ay + g,

where

A = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{bmatrix}, \qquad y = \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}, \qquad g = \begin{bmatrix} g_1 \\ \vdots \\ g_n \end{bmatrix}.

This system is called homogeneous if g = 0, so that it is

(4)  y’ = Ay.

If g ≠ 0, then (3) is called nonhomogeneous.
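A hedged numerical sketch of the nonhomogeneous form (3), y’ = Ay + g, again using solve_ivp; the constant matrix A, the forcing g(t) = (sin t, 0)T, and the zero initial condition are made-up choices for illustration only:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])             # assumed constant coefficient matrix

def g(t):
    return np.array([np.sin(t), 0.0])    # assumed forcing term g(t)

def rhs(t, y):
    # Nonhomogeneous linear system (3): y' = A y + g(t).
    return A @ y + g(t)

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0],
                t_eval=np.linspace(0.0, 10.0, 11))
print(sol.y[:, -1])                      # state vector at t = 10
```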

Theorem 2  Existence and Uniqueness in the Linear Case

Let the ajk’s and gj’s in (3) be continuous functions of t on an open interval α < t < β containing the point t = t0. Then (3) has a solution y(t) on this interval satisfying (2), and this solution is unique.

Theorem 3  Superposition Principle or Linearity Principle

If y(1) and y(2) are solutions of the homogeneous linear system (4) on some interval, so is any linear combination y = c1y(1) + c2y(2).
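A quick numerical check of Theorem 3 (the matrix A, the two initial vectors, and the constants c1, c2 are my own assumptions): for the homogeneous system (4), the combination c1y(1) + c2y(2) of two solutions coincides with the solution started from the combined initial vector c1y(1)(0) + c2y(2)(0).

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[ 0.0, 1.0],
              [-4.0, 0.0]])                  # assumed matrix for (4): y' = A y

def rhs(t, y):
    return A @ y

t_eval = np.linspace(0.0, 3.0, 31)
opts = dict(t_eval=t_eval, rtol=1e-9, atol=1e-12)

y1 = solve_ivp(rhs, (0.0, 3.0), [1.0, 0.0], **opts).y   # solution y(1)
y2 = solve_ivp(rhs, (0.0, 3.0), [0.0, 1.0], **opts).y   # solution y(2)

c1, c2 = 2.0, -3.0
combined_y0 = [c1 * 1.0 + c2 * 0.0, c1 * 0.0 + c2 * 1.0]
combo = solve_ivp(rhs, (0.0, 3.0), combined_y0, **opts).y

print(np.allclose(c1 * y1 + c2 * y2, combo, atol=1e-6))  # True: superposition holds
```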