Math 307 Spring, 2003 Hentzel Time: 1:10-2:00 MWF Room: 1324 Howe Hall Instructor: Irvin Roy Hentzel Office 432 Carver Phone 515-294-8141


Text: Linear Algebra With Applications, Second Edition, Otto Bretscher.

Wednesday, March 5. Chapter 4.3, page 173, problems 4, 10, 50.

Main Idea: Just look at the coefficients.

Key Words: Representation of a vector with respect to a basis; V_B, P_CB, P_BC, T_BB.

Goal: Set up the constructions symbolically before you try to do them with the actual matrices.

Previous Assignment: Monday, March 3. Chapter 4.2, page 164, problems 6, 14, 16.

Page 164, Problem 6. Find out which of the transformations in Exercises 1 through 25 are linear. For those that are linear, determine whether they are isomorphisms.

T(A) = BA   where   B = | 1 2 |
                        | 3 6 |

The fact that T is linear follows directly from the fact that matrix multiplication is linear. We can write out the details as follows:

T(X+Y) = B(X+Y) = BX + BY = T(X) + T(Y)
T(cX)  = B(cX)  = c BX    = c T(X)

So T is linear.

To show that T is not invertible, we show that the kernel is not zero. Since

T( |-2 0| )  =  | 1 2 | |-2 0 |  =  | 0 0 |
   | 1 0|       | 3 6 | | 1 0 |     | 0 0 |

T has a nontrivial kernel, so T is not invertible.
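This kernel computation is easy to check by machine. A minimal sketch in Python (the `matmul` helper is ours, not from the text):

```python
def matmul(A, B):
    # Multiply two matrices given as lists of rows.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

B = [[1, 2],
     [3, 6]]
X = [[-2, 0],
     [1, 0]]

# T(X) = BX sends this nonzero matrix X to the zero matrix,
# so X lies in the kernel of T and T cannot be invertible.
print(matmul(B, X))  # → [[0, 0], [0, 0]]
```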

Page 164, Problem 14. T(a + bt + ct^2) = a - bt + ct^2. What T does is change the sign of the linear term.

The additive part of the proof that T is linear:

T( (a+bt+ct^2) + (a'+b't+c't^2) )
  = T( (a+a') + (b+b')t + (c+c')t^2 )
  = (a+a') - (b+b')t + (c+c')t^2
  = (a - bt + ct^2) + (a' - b't + c't^2)
  = T(a+bt+ct^2) + T(a'+b't+c't^2).

The scalar multiplication part of the proof that T is linear:

T( d(a+bt+ct^2) ) = T( da + dbt + dct^2 )
  = da - dbt + dct^2
  = d(a - bt + ct^2)
  = d T(a+bt+ct^2).

To show that T is invertible, we actually find the inverse of T. Since T(a - bt + ct^2) = a + bt + ct^2, the inverse of T exists and is T itself.
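Representing a + bt + ct^2 by its coefficient triple (a, b, c), the claim that T is its own inverse amounts to a one-line check; a sketch:

```python
def T(p):
    # p = (a, b, c) stands for a + b t + c t^2; T negates the linear term.
    a, b, c = p
    return (a, -b, c)

p = (5, 7, -3)
assert T(T(p)) == p  # applying T twice returns the original, so T^(-1) = T
```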

Page 164, Problem 16. T( f(t) ) = t f'(t).

We first show the additive part of linearity.

T( f(t)+g(t) ) = t ( f(t)+g(t) )' = t ( f'(t) + g'(t) ) = t f'(t) + t g'(t) = T( f(t) ) + T( g(t) ).

We next show the scalar multiplication part of linearity.

T( c f(t) ) = t (c f(t))' = c t f'(t) = c T( f(t) ).

We know that T is not invertible because it has nonzero elements in its kernel: T kills off all of the constant functions.
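On polynomials, T acts simply on coefficient lists: if f = sum of c_k t^k, then t f'(t) = sum of k c_k t^k, so T scales each coefficient by its degree. A sketch that makes the nonzero kernel visible:

```python
def T(coeffs):
    # f given as the coefficient list [c0, c1, c2, ...];
    # T(f) = t f'(t) multiplies each c_k by k.
    return [k * c for k, c in enumerate(coeffs)]

assert T([7]) == [0]              # constants map to zero: nontrivial kernel
assert T([0, 1, 4]) == [0, 1, 8]  # t + 4t^2  ->  t + 8t^2
```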

New Material. The numbers in a vector have to be interpreted through the appropriate basis. We keep track of the particular basis being used by putting a subscript on the vector.

(1) If we have a basis B = [B_1 B_2 ... B_n], we can express a vector in terms of the basis by giving the coefficients of the B's:

                        | c_1 |
                        | c_2 |
V = [B_1 B_2 ... B_n]   | c_3 |  =  c_1 B_1 + c_2 B_2 + ... + c_n B_n.
                        |  .  |
                        | c_n |

The coefficient vector V_B is called the representation of V with respect to the basis [B_1 B_2 ... B_n]:

        | c_1 |
        | c_2 |
V_B  =  | c_3 |
        |  .  |
        | c_n |

For example: (a) Represent the polynomial x^4 + 3x + 2 with respect to the basis [ x^4 x^3 x^2 x 1 ].

         | 1 |
         | 0 |
answer:  | 0 |
         | 3 |
         | 2 |

(b) Represent x Sin[x] + 3 Cos[x] with respect to the basis [ x Sin[x], x Cos[x], Sin[x], Cos[x] ].

         | 1 |
         | 0 |
answer:  | 0 |
         | 3 |
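A coordinate vector is just a recipe for a linear combination, so example (b) can be verified by evaluating both sides at a few points; a sketch:

```python
import math

# Basis for example (b): [x sin x, x cos x, sin x, cos x]
basis = [lambda x: x * math.sin(x),
         lambda x: x * math.cos(x),
         lambda x: math.sin(x),
         lambda x: math.cos(x)]
coords = [1, 0, 0, 3]  # the claimed representation of x sin x + 3 cos x

f = lambda x: x * math.sin(x) + 3 * math.cos(x)
for x in (0.0, 0.7, 2.5):
    combo = sum(c * b(x) for c, b in zip(coords, basis))
    assert abs(combo - f(x)) < 1e-12  # linear combination matches f
```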

(2) If you choose a different basis for whatever reason, you have to change the representation of the vector. This change is easily done using a matrix called the change of basis matrix:

                     | a_11 a_12 ... a_1n |
                     | a_21 a_22 ... a_2n |
[B_1 B_2 ... B_n]    |  .    .        .   |  =  [C_1 C_2 ... C_n]
                     | a_n1 a_n2 ... a_nn |

You create this matrix column by column: column i holds the coefficients needed to write C_i as a linear combination of the B's. The matrix is called P_BC.

To get this straight, derive what you are doing:

                     | a_11 a_12 ... a_1n | | x_1 |                       | x_1 |
                     | a_21 a_22 ... a_2n | | x_2 |                       | x_2 |
[B_1 B_2 ... B_n]    |  .    .        .   | |  .  |  =  [C_1 C_2 ... C_n] |  .  |
                     | a_n1 a_n2 ... a_nn | | x_n |                       | x_n |

We multiply the expression by the vector X. This gives us B A X = C X. On the right-hand side, X is the representation of a vector in the C basis. On the left-hand side, A X is the representation of the same vector in the B basis. So X_B = A X_C. Thus A is P_BC.

For example: (a) Find the change of basis matrix for [ 1 x x^2 x^3 ] and [ x^3 x^2 x 1 ]:

                  | 0 0 0 1 |
[ 1 x x^2 x^3 ]   | 0 0 1 0 |  =  [ x^3 x^2 x 1 ]
                  | 0 1 0 0 |
                  | 1 0 0 0 |

(b) Find the change of basis matrix for [ e^x, e^-x ] and [ Sinh(x), Cosh(x) ].

           e^x - e^-x               e^x + e^-x
Sinh(x) = ------------,  Cosh(x) = ------------
                2                        2

[ e^x  e^-x ]  |  1/2  1/2 |  =  [ Sinh(x)  Cosh(x) ].
               | -1/2  1/2 |
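The columns of this matrix really do produce Sinh and Cosh from the exponential basis, which can be confirmed numerically; a sketch:

```python
import math

# P_BC: columns express sinh and cosh in the basis [e^x, e^-x].
P = [[0.5, 0.5],
     [-0.5, 0.5]]

for x in (-1.0, 0.0, 2.0):
    row = [math.exp(x), math.exp(-x)]          # the B basis evaluated at x
    sinh = row[0] * P[0][0] + row[1] * P[1][0]  # first column
    cosh = row[0] * P[0][1] + row[1] * P[1][1]  # second column
    assert abs(sinh - math.sinh(x)) < 1e-12
    assert abs(cosh - math.cosh(x)) < 1e-12
```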

(c) Find the change of basis matrix which rotates the axes, sending (1,0) to (5/13, 12/13) and (0,1) to (-12/13, 5/13):

| 1 0 |  |  5/13  -12/13 |  =  |  5/13  -12/13 |
| 0 1 |  | 12/13    5/13 |     | 12/13    5/13 |

This matrix is P_BB'.

What does (3,5) in the B' system correspond to in the B system?

|  5/13  -12/13 |  | 3 |  =  | -45/13 |
| 12/13    5/13 |  | 5 |     |  61/13 |

     P_BB'         V_B'         V_B
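This coordinate change can be checked with exact rational arithmetic; a sketch using Python's `fractions` module:

```python
from fractions import Fraction as F

# P_BB': columns are the images of (1,0) and (0,1) under the rotation.
P = [[F(5, 13), F(-12, 13)],
     [F(12, 13), F(5, 13)]]
v_Bp = [F(3), F(5)]  # coordinates in the B' system

# Matrix-vector product converts B' coordinates to B coordinates.
v_B = [sum(P[i][j] * v_Bp[j] for j in range(2)) for i in range(2)]
assert v_B == [F(-45, 13), F(61, 13)]
```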

What does the equation 2xy = 4 become in the B' basis?

[ x  y ]  | 0 1 |  | x |  =  4
          | 1 0 |  | y |

[ x' y' ]  |   5/13  12/13 |  | 0 1 |  |  5/13  -12/13 |  | x' |  =  4
           | -12/13   5/13 |  | 1 0 |  | 12/13    5/13 |  | y' |

[ x' y' ]  |  120/169  -119/169 |  | x' |  =  4
           | -119/169  -120/169 |  | y' |

(120/169) (x')^2 - (238/169) x'y' - (120/169) (y')^2 = 4
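The triple product P^T M P for this conic can be carried out exactly; a sketch with `Fraction` (the `matmul` and `transpose` helpers are ours):

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

M = [[F(0), F(1)], [F(1), F(0)]]   # 2xy = 4 written as [x y] M [x; y] = 4
P = [[F(5, 13), F(-12, 13)],
     [F(12, 13), F(5, 13)]]        # rotation P_BB'

A = matmul(transpose(P), matmul(M, P))  # matrix of the form in the B' basis
assert A == [[F(120, 169), F(-119, 169)],
             [F(-119, 169), F(-120, 169)]]
```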

(3) The matrix of a linear transformation is written with respect to a particular basis. If you change basis, you have to change the matrix of the linear transformation. Suppose T(V) = W and you know how to compute this linear transformation in the B basis. Then using matrix multiplication we know that

T_BB V_B = W_B.

Now we want to represent T in the C basis. We first change V_C to V_B, then use T_BB to get W_B, and then change W_B to W_C.

VOILA:

P_CB T_BB P_BC V_C = W_C
|______________|
      T_CC

For example: Write the matrix for differentiation using the bases [ e^x, e^-x ] and [ Sinh(x), Cosh(x) ].

Differentiation in each basis:

         e^x   e^-x                    Sinh(x)  Cosh(x)
e^x    |  1     0  |         Sinh(x)  |   0        1   |
e^-x   |  0    -1  |         Cosh(x)  |   1        0   |

Using the change of basis matrix

[ e^x  e^-x ]  |  1/2  1/2 |  =  [ Sinh[x]  Cosh[x] ]
    <-B->      | -1/2  1/2 |          <-C->
                  P_BC

and P_CB T_BB P_BC V_C = W_C:

| 1 -1 |  | 1  0 |  |  1/2  1/2 |  =  | 1  1 |  |  1/2  1/2 |  =  | 0 1 |
| 1  1 |  | 0 -1 |  | -1/2  1/2 |     | 1 -1 |  | -1/2  1/2 |     | 1 0 |
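This triple product can be verified exactly; a sketch (the `matmul` helper is ours, and P_CB is the inverse of P_BC):

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

T_BB = [[F(1), F(0)], [F(0), F(-1)]]              # d/dx on [e^x, e^-x]
P_BC = [[F(1, 2), F(1, 2)], [F(-1, 2), F(1, 2)]]
P_CB = [[F(1), F(-1)], [F(1), F(1)]]              # inverse of P_BC

T_CC = matmul(P_CB, matmul(T_BB, P_BC))
assert T_CC == [[F(0), F(1)], [F(1), F(0)]]       # d/dx swaps Sinh and Cosh
```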

Page 175, Problem 61. Let V be the linear space of all functions of the form c_1 Cos[t] + c_2 Sin[t] + c_3 t Cos[t] + c_4 t Sin[t]. Solve the differential equation f'' + f = Cos[t].

We are looking to solve (D^2 + I) f(t) = Cos[t].

We write out the matrix of differentiation with respect to the basis [ Cos[t], Sin[t], t Cos[t], t Sin[t] ]. Since

(Cos[t])'   = -Sin[t]
(Sin[t])'   =  Cos[t]
(t Cos[t])' =  Cos[t] - t Sin[t]
(t Sin[t])' =  Sin[t] + t Cos[t]

we get

     |  0  1  1  0 |           | -1  0  0  2 |
D =  | -1  0  0  1 |    D^2 =  |  0 -1 -2  0 |
     |  0  0  0  1 |           |  0  0 -1  0 |
     |  0  0 -1  0 |           |  0  0  0 -1 |

           | 0  0  0  2 |
D^2 + I =  | 0  0 -2  0 |
           | 0  0  0  0 |
           | 0  0  0  0 |

We wish to solve (D^2 + I) f(t) = Cos[t]. This is just a typical linear equation once the basis has been chosen.
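The matrix of d/dt on the basis [Cos[t], Sin[t], t Cos[t], t Sin[t]] can be squared by machine to confirm D^2 + I; a sketch (the `matmul` helper is ours):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Columns are the coordinates of the derivative of each basis function.
D = [[0, 1, 1, 0],
     [-1, 0, 0, 1],
     [0, 0, 0, 1],
     [0, 0, -1, 0]]

D2 = matmul(D, D)
I = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
D2_plus_I = [[D2[i][j] + I[i][j] for j in range(4)] for i in range(4)]
assert D2_plus_I == [[0, 0, 0, 2],
                     [0, 0, -2, 0],
                     [0, 0, 0, 0],
                     [0, 0, 0, 0]]
```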

| 0  0  0  2 |  | c_1 |     | 1 |
| 0  0 -2  0 |  | c_2 |  =  | 0 |
| 0  0  0  0 |  | c_3 |     | 0 |
| 0  0  0  0 |  | c_4 |     | 0 |

The row canonical form is

 c_1  c_2  c_3  c_4 | RHS
|  0    0    1    0 |  0  |
|  0    0    0    1 |  ½  |

with c_1 = a and c_2 = b free, so

| c_1 |     | 0 |       | 1 |       | 0 |
| c_2 |  =  | 0 |  + a  | 0 |  + b  | 1 |
| c_3 |     | 0 |       | 0 |       | 0 |
| c_4 |     | ½ |       | 0 |       | 0 |

f(t) = a Cos[t] + b Sin[t] + ½ t Sin[t]

Check:
f'(t)  = -a Sin[t] + b Cos[t] + ½ Sin[t] + ½ t Cos[t]
f''(t) = -a Cos[t] - b Sin[t] + ½ Cos[t] + ½ Cos[t] - ½ t Sin[t]
f''(t) + f(t) = Cos[t].   It checks.
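The hand check above can also be run numerically for arbitrary a and b; a sketch using the derivatives just computed:

```python
import math

def f(t, a=2.0, b=-1.0):
    # General solution with sample constants a, b (any values work).
    return a * math.cos(t) + b * math.sin(t) + 0.5 * t * math.sin(t)

def fpp(t, a=2.0, b=-1.0):
    # Second derivative, computed by hand as in the check above.
    return -a * math.cos(t) - b * math.sin(t) + math.cos(t) - 0.5 * t * math.sin(t)

for t in (0.0, 1.0, 3.5):
    # f'' + f should equal cos(t) at every point.
    assert abs(fpp(t) + f(t) - math.cos(t)) < 1e-12
```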

Graph your solution(s). (The differential equation f'' + f = Cos[t] describes a forced undamped oscillator. In this example, we observe the phenomenon of resonance.)