Section 5.1 Length and Dot Product in ℝⁿ

Let v = ⟨v₁, v₂, v₃, …, vₙ⟩ and w = ⟨w₁, w₂, w₃, …, wₙ⟩ be vectors in ℝⁿ. The dot product of v and w is v ∙ w = v₁w₁ + v₂w₂ + v₃w₃ + ⋯ + vₙwₙ

Let v = ⟨v₁, v₂, v₃, …, vₙ⟩ and w = ⟨w₁, w₂, w₃, …, wₙ⟩ be vectors in ℝⁿ. The length of v (also called the magnitude or norm of v) is ||v|| = √(v₁² + v₂² + v₃² + ⋯ + vₙ²). The length of v may also be computed using the formula ||v|| = √(v ∙ v)

Let v = ⟨v₁, v₂, v₃, …, vₙ⟩ and w = ⟨w₁, w₂, w₃, …, wₙ⟩ be vectors in ℝⁿ. The distance between v and w is d(v, w) = ||v − w||

Let v = ⟨v₁, v₂, v₃, …, vₙ⟩ and w = ⟨w₁, w₂, w₃, …, wₙ⟩ be vectors in ℝⁿ. The vectors v and w are orthogonal if v ∙ w = 0

Let v = ⟨v₁, v₂, v₃, …, vₙ⟩ and w = ⟨w₁, w₂, w₃, …, wₙ⟩ be vectors in ℝⁿ. A unit vector in the direction of v is given by v/||v||. A unit vector in the direction opposite of v is given by −v/||v||

Let v = ⟨v₁, v₂, v₃, …, vₙ⟩ and w = ⟨w₁, w₂, w₃, …, wₙ⟩ be vectors in ℝⁿ. A vector in the direction of v with magnitude c is given by c(v/||v||)

Let v = ⟨v₁, v₂, v₃, …, vₙ⟩ and w = ⟨w₁, w₂, w₃, …, wₙ⟩ be vectors in ℝⁿ. The angle θ between vectors v and w is given by cos θ = (v ∙ w)/(||v|| ||w||), where 0 ≤ θ ≤ π

Let v = ⟨v₁, v₂, v₃, …, vₙ⟩ and w = ⟨w₁, w₂, w₃, …, wₙ⟩ be vectors in ℝⁿ. ||v + w||² = ||v||² + ||w||² if v ∙ w = 0, that is, if v and w are orthogonal
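The following NumPy sketch checks all of these formulas numerically; the vector values are illustrative choices, not from the slides:

import numpy as np

v = np.array([1.0, 2.0, 2.0])
w = np.array([2.0, -1.0, 0.0])

print(v @ w)                              # dot product: 1*2 + 2*(-1) + 2*0 = 0
print(np.sqrt(v @ v), np.linalg.norm(v))  # both give ||v|| = 3
print(np.linalg.norm(v - w))              # d(v, w) = ||v - w||
print(v / np.linalg.norm(v))              # unit vector in the direction of v
theta = np.arccos((v @ w) / (np.linalg.norm(v) * np.linalg.norm(w)))
print(theta)                              # pi/2, since v . w = 0 here
# Pythagorean check: v . w = 0, so ||v + w||^2 = ||v||^2 + ||w||^2
print(np.isclose(np.linalg.norm(v + w)**2,
                 np.linalg.norm(v)**2 + np.linalg.norm(w)**2))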

Ex. Let v = ⟨4, 2⟩. (a) Determine all vectors that are orthogonal to v.

Ex. Let v = ⟨4, 2⟩. (b) Find a vector parallel to v, but with a magnitude five times that of v.

Ex. Let v = ⟨4, 2⟩. (c) Find a vector in the direction opposite of v, but with a magnitude of five.
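A numerical check of parts (a)–(c); a sketch assuming NumPy:

import numpy as np

v = np.array([4.0, 2.0])

# (a) vectors orthogonal to v satisfy 4x + 2y = 0, i.e. all multiples of (1, -2)
print(v @ np.array([1.0, -2.0]))            # 0.0

# (b) parallel to v with five times its magnitude: 5v = (20, 10)
print(5 * v, np.linalg.norm(5 * v) / np.linalg.norm(v))   # ratio is 5.0

# (c) opposite direction, magnitude 5: -5 * v / ||v||
u = -5 * v / np.linalg.norm(v)
print(u, np.linalg.norm(u))                 # (-2*sqrt5, -sqrt5), norm 5.0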

Properties of the dot product. Let u, v, and w be vectors in ℝⁿ and let c be a scalar.
1. u ∙ v = v ∙ u
2. u ∙ (v + w) = u ∙ v + u ∙ w
3. c(u ∙ v) = (cu) ∙ v = u ∙ (cv)
4. v ∙ v ≥ 0, and v ∙ v = 0 if and only if v = 0.
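The four properties can be spot-checked numerically; a small sketch with arbitrary random vectors:

import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 5))   # three arbitrary vectors in R^5
c = 2.5

print(np.isclose(u @ v, v @ u))                    # property 1
print(np.isclose(u @ (v + w), u @ v + u @ w))      # property 2
print(np.isclose(c * (u @ v), (c * u) @ v))        # property 3
print(v @ v >= 0)                                  # property 4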

Section 5.2 Inner Product Spaces

Let u, v, and w be vectors in a vector space V, and let c be a scalar. An inner product on V is a function that associates a real number ⟨u, v⟩ with each pair of vectors u and v and satisfies the following:
1. ⟨u, v⟩ = ⟨v, u⟩
2. ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩
3. c⟨u, v⟩ = ⟨cu, v⟩ = ⟨u, cv⟩
4. ⟨v, v⟩ ≥ 0, and ⟨v, v⟩ = 0 if and only if v = 0.

Ex. An inner product on M₂,₂: ⟨A, B⟩ = a₁₁b₁₁ + a₂₁b₂₁ + a₁₂b₁₂ + a₂₂b₂₂, the sum of the products of corresponding entries.

Ex. An inner product on P₂: for p = a₀ + a₁x + a₂x² and q = b₀ + b₁x + b₂x², ⟨p, q⟩ = a₀b₀ + a₁b₁ + a₂b₂.

Ex. An inner product on C[a, b]: ⟨f, g⟩ = ∫ f(x)g(x) dx, integrated over [a, b].
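A sketch evaluating each of these three inner products on concrete, illustrative elements (NumPy and SymPy assumed):

import numpy as np
import sympy as sp

# M2,2 with <A, B> = a11*b11 + a21*b21 + a12*b12 + a22*b22 (entrywise sum)
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
print(np.sum(A * B))                       # 2 + 3 = 5

# P2 with <p, q> = a0*b0 + a1*b1 + a2*b2 on coefficient vectors
p = np.array([1.0, 1.0, 0.0])              # p(x) = 1 + x
q = np.array([-1.0, 1.0, 0.0])             # q(x) = -1 + x
print(p @ q)                               # 0.0

# C[a, b] with <f, g> = integral of f*g over [a, b]
x = sp.symbols('x')
print(sp.integrate(x * x**2, (x, 0, 1)))   # <x, x^2> on [0, 1] is 1/4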

Def. Let v and w be vectors in an inner product space V.
(a) The magnitude (norm) of v is ||v|| = √⟨v, v⟩
(b) The distance between v and w is d(v, w) = ||v − w||
(c) The angle θ between v and w is found by the formula cos θ = ⟨v, w⟩/(||v|| ||w||), 0 ≤ θ ≤ π
(d) v and w are orthogonal (v ⊥ w) if ⟨v, w⟩ = 0

Note: ||v + w||² = ||v||² + ||w||² if v and w are orthogonal.

Ex. Let f(x) = x, g(x) = x², and h(x) = x be functions in the inner product space C[0, 1]. (a) Compute ||f||.

Ex. Let f(x) = x, g(x) = x², and h(x) = x be functions in the inner product space C[0, 1]. (b) Compute d(f, g) and d(g, h).

Ex. Let f(x) = x, g(x) = x², and h(x) = x be functions in the inner product space C[0, 1]. (c) Compare the distance between f and g with the distance between g and h.
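One way to carry out (a)–(c) symbolically; a SymPy sketch (note that with h(x) = x exactly as given, h coincides with f, so the two distances in (c) agree):

import sympy as sp

x = sp.symbols('x')
f, g, h = x, x**2, x

def norm(u):
    # ||u|| = sqrt(<u, u>), with <u, v> = integral of u*v over [0, 1]
    return sp.sqrt(sp.integrate(u * u, (x, 0, 1)))

print(norm(f))          # (a) ||f|| = sqrt(1/3)
print(norm(f - g))      # (b) d(f, g) = sqrt(1/30)
print(norm(g - h))      # (b) d(g, h); equal to d(f, g) since h = f here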

Recall the projection vector of v onto w in ℝⁿ: proj_w v = ((v ∙ w)/(w ∙ w)) w

The projection vector of v onto w in an inner product space V: proj_w v = (⟨v, w⟩/⟨w, w⟩) w

Ex. Let f(x) = x and g(x) = x² be functions in the inner product space C[a, b]. Find the projection of f onto g.
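A SymPy sketch of this projection, taking [a, b] = [0, 1] for concreteness (the slide leaves the interval general):

import sympy as sp

x = sp.symbols('x')
f, g = x, x**2
ip = lambda u, v: sp.integrate(u * v, (x, 0, 1))

proj = (ip(f, g) / ip(g, g)) * g    # proj_g f = (<f, g>/<g, g>) g
print(proj)                         # (1/4)/(1/5) * x**2 = 5*x**2/4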

Theorem: Let v and w be two vectors in an inner product space V with w ≠ 0. Then d(v, proj_w v) ≤ d(v, cw), with equality only when c = ⟨v, w⟩/⟨w, w⟩.

Section 5.3 Orthonormal Bases & the Gram-Schmidt Process

Def. A set of vectors S is orthogonal if every pair of distinct vectors in S is orthogonal. If, in addition, every vector in S is a unit vector, then S is orthonormal.

Examples: (i) In ℝ³ the set of standard basis vectors {i, j, k} forms an orthonormal set.

Examples: (ii) In ℝ² the set {(2, 2), (−3, 3)} is an orthogonal set of vectors but not orthonormal. We can, however, normalize each vector to turn it into an orthonormal set.

Examples: (iii) Is the set {x + 1, x − 1, x²} an orthogonal set in P₂? Is it an orthonormal set in P₂? (Use the standard inner product on P₂.)

Examples: (iv) Is the set {x + 1, x − 1, x²} an orthogonal set in C[0, 1]? Is it an orthonormal set in C[0, 1]? (Use the standard inner product on C[0, 1].)
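A sketch that tests both questions at once, using the coefficient inner product for P₂ and the integral inner product for C[0, 1]:

import sympy as sp

x = sp.symbols('x')
S = [x + 1, x - 1, x**2]

def ip_P2(p, q):
    # coefficient inner product on P2: a0*b0 + a1*b1 + a2*b2
    cp = (sp.Poly(p, x).all_coeffs()[::-1] + [0, 0, 0])[:3]
    cq = (sp.Poly(q, x).all_coeffs()[::-1] + [0, 0, 0])[:3]
    return sum(a * b for a, b in zip(cp, cq))

ip_C = lambda p, q: sp.integrate(p * q, (x, 0, 1))   # integral inner product

for i in range(3):
    for j in range(i + 1, 3):
        print(S[i], S[j], ip_P2(S[i], S[j]), ip_C(S[i], S[j]))
# In P2 every pair gives 0 (orthogonal, though not orthonormal: ||x + 1|| = sqrt(2)).
# In C[0, 1], <x + 1, x - 1> = -2/3, so the set is not even orthogonal there.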

Ex. Create an orthonormal basis for ℝ³ that includes a vector in the direction of (3, 0, 3).
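One possible construction; a sketch in which the two completing vectors are a convenient choice, not the only one:

import numpy as np

u1 = np.array([3.0, 0.0, 3.0])
u1 = u1 / np.linalg.norm(u1)       # (1/sqrt2, 0, 1/sqrt2)
u2 = np.array([0.0, 1.0, 0.0])     # unit vector already orthogonal to u1
u3 = np.cross(u1, u2)              # unit vector orthogonal to both

Q = np.vstack([u1, u2, u3])
print(Q)
print(np.allclose(Q @ Q.T, np.eye(3)))   # True: the rows are orthonormal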

Ex. Verify that the set {1, sin(x), cos(x), sin(2x), cos(2x), sin(3x), cos(3x), …, sin(nx), cos(nx)} is orthogonal in C[0, 2π] and then turn it into an orthonormal basis.
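A SymPy sketch verifying orthogonality and computing the normalizing constants for a small n (here n = 2, purely for illustration):

import sympy as sp

x = sp.symbols('x')
n = 2
basis = [sp.Integer(1)]
for k in range(1, n + 1):
    basis += [sp.sin(k * x), sp.cos(k * x)]

ip = lambda f, g: sp.integrate(f * g, (x, 0, 2 * sp.pi))

# every distinct pair integrates to 0 over [0, 2*pi]
print(all(ip(f, g) == 0 for i, f in enumerate(basis) for g in basis[i + 1:]))

# norms: ||1|| = sqrt(2*pi) and ||sin(kx)|| = ||cos(kx)|| = sqrt(pi);
# dividing each function by its norm gives the orthonormal basis
print([sp.sqrt(ip(f, f)) for f in basis])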

Def. The coordinate matrix of a vector w with respect to a basis B = {v₁, v₂, v₃, …, vₙ} is the column matrix [c₁, c₂, c₃, …, cₙ]ᵀ, where the coordinates c₁, c₂, c₃, …, cₙ express w as a linear combination of the basis vectors (i.e., w = c₁v₁ + c₂v₂ + c₃v₃ + ⋯ + cₙvₙ).

Ex. (a) Find the coordinate matrix of (2, 3, 5) in ℝ³ with respect to the standard basis {i, j, k} and the standard inner product.

Ex. (b) Find the coordinate matrix of (2, 3, 5) in ℝ³ with respect to the standard basis {k, j, i} and the standard inner product.

Ex. (c) Find the coordinate matrix of (2, 3, 5) in ℝ³ with respect to the basis {(1,1,1), (1,2,3), (−1,0,4)} and the standard inner product.
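For part (c), the coordinates solve a linear system; a NumPy sketch:

import numpy as np

# columns of A are the basis vectors (1,1,1), (1,2,3), (-1,0,4)
A = np.array([[1.0, 1.0, -1.0],
              [1.0, 2.0,  0.0],
              [1.0, 3.0,  4.0]])
w = np.array([2.0, 3.0, 5.0])
c = np.linalg.solve(A, w)     # w = c1*v1 + c2*v2 + c3*v3
print(c)                      # [5/3, 2/3, 1/3]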

Theorem: Let B = {v₁, v₂, v₃, …, vₙ} be an orthonormal basis. The coordinates of w = c₁v₁ + c₂v₂ + c₃v₃ + ⋯ + cₙvₙ can be computed by cₖ = ⟨w, vₖ⟩.

Ex. Give the coordinate matrix of (5, −5, 2) with respect to the orthonormal basis {(3/5, 4/5, 0), (−4/5, 3/5, 0), (0, 0, 1)}.
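Because the basis is orthonormal, the theorem above reduces this to three dot products; a NumPy sketch:

import numpy as np

# rows of B are the orthonormal basis vectors
B = np.array([[ 0.6,  0.8, 0.0],
              [-0.8,  0.6, 0.0],
              [ 0.0,  0.0, 1.0]])
w = np.array([5.0, -5.0, 2.0])
print(B @ w)     # c_k = w . v_k, giving [-1, -7, 2]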

Gram-Schmidt Orthonormalization Process: Let B = {v₁, v₂, v₃, …, vₙ} be a basis for an inner product space. First form B′ = {w₁, w₂, w₃, …, wₙ}, where the wₖ are given by
w₁ = v₁
wₖ = vₖ − (⟨vₖ, w₁⟩/⟨w₁, w₁⟩)w₁ − (⟨vₖ, w₂⟩/⟨w₂, w₂⟩)w₂ − ⋯ − (⟨vₖ, wₖ₋₁⟩/⟨wₖ₋₁, wₖ₋₁⟩)wₖ₋₁
Then form B″ = {u₁, u₂, u₃, …, uₙ}, where each uₖ is given by uₖ = wₖ/||wₖ||.

Ex. Use the Gram-Schmidt process on the basis {(1,1), (0,1)} to find an orthonormal basis for ℝ².

Ex. Use the Gram-Schmidt process on {(1,1,0), (1,2,0), (0,1,2)} to find an orthonormal basis for ℝ³.
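A minimal NumPy implementation of the process, applied to both example bases (a sketch, assuming linearly independent inputs):

import numpy as np

def gram_schmidt(vectors):
    # orthonormalize a list of linearly independent vectors
    basis = []
    for v in vectors:
        w = v - sum((v @ u) * u for u in basis)   # subtract the projections
        basis.append(w / np.linalg.norm(w))       # normalize
    return np.array(basis)

print(gram_schmidt([np.array([1.0, 1.0]), np.array([0.0, 1.0])]))
# rows: (1/sqrt2, 1/sqrt2) and (-1/sqrt2, 1/sqrt2)

print(gram_schmidt([np.array([1.0, 1.0, 0.0]),
                    np.array([1.0, 2.0, 0.0]),
                    np.array([0.0, 1.0, 2.0])]))
# rows: (1/sqrt2, 1/sqrt2, 0), (-1/sqrt2, 1/sqrt2, 0), (0, 0, 1)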

Alternate form of the Gram-Schmidt Orthonormalization Process: Let B = {v₁, v₂, v₃, …, vₙ} be a basis for an inner product space. Form B′ = {u₁, u₂, u₃, …, uₙ} directly, normalizing at each step:
u₁ = v₁/||v₁||
uₖ = wₖ/||wₖ||, where wₖ = vₖ − ⟨vₖ, u₁⟩u₁ − ⟨vₖ, u₂⟩u₂ − ⋯ − ⟨vₖ, uₖ₋₁⟩uₖ₋₁

Ex. Find an orthonormal basis for the vector space of solutions to the homogeneous system of equations:
x₁ + x₂ + 7x₄ = 0
2x₁ + x₂ + 2x₃ + 6x₄ = 0
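A SymPy sketch: compute a basis for the solution space, then orthonormalize it with SymPy's GramSchmidt helper:

import sympy as sp

A = sp.Matrix([[1, 1, 0, 7],
               [2, 1, 2, 6]])          # coefficient matrix of the system
null_basis = A.nullspace()             # a basis for the solution space
ortho = sp.GramSchmidt(null_basis, orthonormal=True)
for u in ortho:
    print(u.T)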

Section 5.5 Applications of Inner Product Spaces

Def. Let f be in C[a, b] and let W be a subspace of C[a, b]. A function g in W is called a least squares approximation of f with respect to W when the value of ∫ [f(x) − g(x)]² dx, taken over [a, b], is a minimum among all functions in W.

Ex. Find the least squares approximation g(x) = a₂x² + a₁x + a₀ of f(x) = eˣ on C[0, 1].
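A SymPy sketch using the equivalent normal-equations formulation (solve G a = r with Gᵢⱼ = ⟨bᵢ, bⱼ⟩ and rᵢ = ⟨f, bᵢ⟩), which avoids first orthonormalizing the basis:

import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)
basis = [sp.Integer(1), x, x**2]              # W: polynomials of degree <= 2
ip = lambda u, v: sp.integrate(u * v, (x, 0, 1))

# Gram matrix and right-hand side of the normal equations G a = r
G = sp.Matrix(3, 3, lambda i, j: ip(basis[i], basis[j]))
r = sp.Matrix(3, 1, lambda i, _: ip(f, basis[i]))
a = G.solve(r)
g = sum(a[i] * basis[i] for i in range(3))
print(sp.expand(g))            # exact coefficients in terms of e
print(sp.expand(g).evalf(4))   # approx 1.013 + 0.8511*x + 0.8392*x**2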

Theorem: Let f be in C[a, b] and let W be a finite-dimensional subspace of C[a, b]. The least squares approximation function of f with respect to W is given by g = ⟨f, w₁⟩w₁ + ⟨f, w₂⟩w₂ + ⋯ + ⟨f, wₙ⟩wₙ, where B = {w₁, w₂, w₃, …, wₙ} is an orthonormal basis for W.

Ex. Find the least squares approximation of sin(x) on [0, π] with respect to the subspace of all polynomial functions of degree two or less.
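The same normal-equations sketch as above, with f = sin(x) and the interval changed to [0, π]:

import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)
basis = [sp.Integer(1), x, x**2]
ip = lambda u, v: sp.integrate(u * v, (x, 0, sp.pi))

G = sp.Matrix(3, 3, lambda i, j: ip(basis[i], basis[j]))
r = sp.Matrix(3, 1, lambda i, _: ip(f, basis[i]))
a = G.solve(r)
g = sum(a[i] * basis[i] for i in range(3))
print(sp.expand(g).evalf(4))   # approx -0.0505 + 1.312*x - 0.4177*x**2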

Fourier Approximations. Consider functions of the form
g(x) = a₀/2 + a₁cos(x) + a₂cos(2x) + ⋯ + aₙcos(nx) + b₁sin(x) + b₂sin(2x) + ⋯ + bₙsin(nx)
in the subspace of C[0, 2π] spanned by the basis {1, cos(x), cos(2x), …, cos(nx), sin(x), sin(2x), …, sin(nx)}.

This is an orthogonal basis, and if we normalize it we get an orthonormal basis denoted B = {w₀, w₁, w₂, …, wₙ, wₙ₊₁, …, w₂ₙ} = {1/√(2π), cos(x)/√π, …, cos(nx)/√π, sin(x)/√π, …, sin(nx)/√π}.

With this orthonormal basis we can write g(x) above as g(x) = ⟨f, w₀⟩w₀ + ⟨f, w₁⟩w₁ + ⟨f, w₂⟩w₂ + ⋯ + ⟨f, w₂ₙ⟩w₂ₙ. The coefficients then work out to
a₀ = (1/π) ∫ f(x) dx, aⱼ = (1/π) ∫ f(x) cos(jx) dx, bⱼ = (1/π) ∫ f(x) sin(jx) dx,
with every integral taken over [0, 2π].

The function g(x) is called the nth-order Fourier approximation of f on the interval [0, 2π].

Ex. Find the third-order Fourier approximation of f(x) = x on [0, 2π].
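A SymPy sketch computing the coefficients from the formulas above:

import sympy as sp

x = sp.symbols('x')
f, n = x, 3

a0 = sp.integrate(f, (x, 0, 2 * sp.pi)) / sp.pi
a = [sp.integrate(f * sp.cos(j * x), (x, 0, 2 * sp.pi)) / sp.pi
     for j in range(1, n + 1)]
b = [sp.integrate(f * sp.sin(j * x), (x, 0, 2 * sp.pi)) / sp.pi
     for j in range(1, n + 1)]

g = a0 / 2 + sum(a[j - 1] * sp.cos(j * x) + b[j - 1] * sp.sin(j * x)
                 for j in range(1, n + 1))
print(g)   # pi - 2*sin(x) - sin(2*x) - 2*sin(3*x)/3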