Ch 11.6: Series of Orthogonal Functions: Mean Convergence


Ch 11.6: Series of Orthogonal Functions: Mean Convergence Theorem 11.2.4 stated that, under certain restrictions, a given function f can be expanded in a series of eigenfunctions of a Sturm-Liouville boundary value problem. The eigenfunction series converged to [f(x+) + f(x−)]/2 at each point in the open interval 0 < x < 1. Under somewhat more restrictive conditions, the series converges to f(x) at each point in the closed interval 0 ≤ x ≤ 1. This type of convergence is called pointwise convergence. In this section we describe a different kind of convergence that is especially useful for series of orthogonal functions, such as eigenfunctions.

Best Approximation (1 of 10) Suppose we are given the set of functions φ1, φ2, …, φn, which are continuous on the interval 0 ≤ x ≤ 1 and satisfy the orthonormality relation ∫₀¹ r(x) φi(x) φj(x) dx = δij (equal to 1 if i = j, and 0 otherwise), where r is a nonnegative weight function. Suppose we wish to approximate a given function f, defined on 0 ≤ x ≤ 1, by a linear combination of φ1, φ2, …, φn: Sn(x) = a1φ1(x) + a2φ2(x) + … + anφn(x). We want to choose the coefficients a1, a2, …, an so that Sn will best approximate f on 0 ≤ x ≤ 1.

Method of Collocation (2 of 10) What is meant precisely by “best approximate f on [0, 1]”? There are several possible interpretations. First, we can choose n points x1, …, xn in [0, 1] and require that Sn(x) have the same value as f(x) at each of these points. The coefficients a1, …, an are found by solving the set of linear algebraic equations Sn(xj) = f(xj), j = 1, …, n. This procedure is known as the method of collocation. It has the advantage that the equations are very easy to write down: one only needs to evaluate the functions involved at x1, …, xn.

Method of Collocation (3 of 10) The method of collocation is then to solve for a1, …, an the system a1φ1(xj) + … + anφn(xj) = f(xj), j = 1, …, n. If the points x1, …, xn are well chosen, and if n is fairly large, then presumably Sn(x) will not only equal f(x) at the chosen points but will be reasonably close to it at other points as well. One deficiency is that if one more basis function φn+1 is added, then one more point xn+1 is required, and all the coefficients ai must be recomputed. Thus it is inconvenient to improve the accuracy of a collocation approximation by including additional terms. Further, the ai depend on the location of x1, …, xn, and it is not obvious how best to select these points.
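As a concrete sketch, collocation amounts to one linear solve per choice of points. The basis φi(x) = √2 sin(iπx), the sample f, and the collocation points below are all illustrative assumptions, not taken from the text:

```python
import numpy as np

# Collocation sketch: solve sum_i a_i * phi_i(x_j) = f(x_j) for the a_i.
# phi_i(x) = sqrt(2)*sin(i*pi*x) and f are illustrative choices.
def collocate(f, n, points):
    A = np.array([[np.sqrt(2) * np.sin((i + 1) * np.pi * xj) for i in range(n)]
                  for xj in points])           # n x n collocation matrix
    return np.linalg.solve(A, f(points))       # coefficients a_1..a_n

f = lambda x: x * (1 - x)
points = np.linspace(0.1, 0.9, 4)              # 4 points for n = 4
a = collocate(f, 4, points)
Sn = lambda x: sum(a[i] * np.sqrt(2) * np.sin((i + 1) * np.pi * x)
                   for i in range(4))
# By construction, Sn matches f exactly at the chosen points.
```

Note that adding a fifth basis function would require a fifth point and a fresh solve for all five coefficients, which is the inconvenience described above.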

Minimize Error (4 of 10) Alternatively, we can consider the difference |f(x) − Sn(x)| and try to make it as small as possible. However, |f(x) − Sn(x)| is a function of x as well as of the coefficients a1, …, an, and it is not clear how the ai should be chosen. The choice of ai that makes |f(x) − Sn(x)| small at one point may make it large at another. One way to proceed is to consider instead the least upper bound (lub) of |f(x) − Sn(x)| for x in [0, 1], and then choose the ai so as to minimize this quantity. That is, minimize En(a1, …, an) = lub of |f(x) − Sn(x)| for 0 ≤ x ≤ 1.

Minimize Error: Least Upper Bound (5 of 10) Thus we choose the coefficients ai so as to minimize En(a1, …, an) = lub of |f(x) − Sn(x)| for 0 ≤ x ≤ 1. This approach is intuitively appealing and is often used in theoretical calculations. In practice, however, it is usually hard, if not impossible, to write down an explicit formula for En(a1, …, an). Further, this procedure shares one of the disadvantages of collocation: upon adding an additional term φn+1 to Sn(x), we must recompute all of the preceding coefficients. Thus this method is not often useful in practical problems.

Mean Square Error (6 of 10) Another way to proceed is to consider In(a1, …, an) = ∫₀¹ r(x) |f(x) − Sn(x)| dx. If r(x) = 1, then In is the area between the graphs of f and Sn. We then choose the ai to minimize In. To avoid complications resulting from calculations with absolute values, it is more convenient to minimize instead Rn(a1, …, an) = ∫₀¹ r(x) [f(x) − Sn(x)]² dx. Rn is called the mean square error. If the ai are chosen to minimize Rn, then Sn is said to approximate f in the mean square sense.

Minimizing Mean Square Error Rn (7 of 10) To choose a1, …, an so as to minimize Rn = ∫₀¹ r(x) [f(x) − Sn(x)]² dx, we must satisfy the necessary conditions ∂Rn/∂ai = 0, i = 1, …, n. Recall that Sn(x) = a1φ1(x) + … + anφn(x), and note that ∂Sn/∂ai = φi(x). Differentiating under the integral sign, it follows that ∂Rn/∂ai = 2 ∫₀¹ r(x) [Sn(x) − f(x)] φi(x) dx = 0.

Minimizing Rn: Fourier Coefficients (8 of 10) Thus ∫₀¹ r(x) f(x) φi(x) dx = ∫₀¹ r(x) Sn(x) φi(x) dx. Substituting Sn(x) = a1φ1(x) + … + anφn(x) and using the orthonormality relation, we have ∫₀¹ r(x) Sn(x) φi(x) dx = ai. It follows that ai = ∫₀¹ r(x) f(x) φi(x) dx, i = 1, …, n. These coefficients are called the Fourier coefficients of f with respect to φ1, …, φn and the weight function r. The necessary conditions ∂Rn/∂ai = 0 are not sufficient, but it can be shown that Rn is minimized by this choice of the ai; see the text.
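As an illustration, the formula ai = ∫₀¹ r(x) f(x) φi(x) dx can be evaluated by numerical quadrature. The choices r(x) = 1, φi(x) = √2 sin(iπx), and f below are illustrative assumptions, not taken from the text:

```python
import numpy as np
from scipy.integrate import quad

# Fourier coefficient a_i = integral of r(x) f(x) phi_i(x) over [0, 1],
# with the illustrative choices r(x) = 1 and phi_i(x) = sqrt(2)*sin(i*pi*x).
def fourier_coeff(f, i, r=lambda x: 1.0):
    phi = lambda x: np.sqrt(2) * np.sin(i * np.pi * x)
    val, _ = quad(lambda x: r(x) * f(x) * phi(x), 0.0, 1.0)
    return val

f = lambda x: x * (1 - x)
a = [fourier_coeff(f, i) for i in range(1, 6)]
# Each a_i comes from one integral on its own; appending phi_6 later
# would leave a_1, ..., a_5 unchanged, unlike collocation.
```

For this f the coefficients are known in closed form (a_i = 4√2/(i³π³) for odd i, 0 for even i), which makes a convenient check on the quadrature.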

Fourier Coefficients (9 of 10) The coefficients ai = ∫₀¹ r(x) f(x) φi(x) dx in the expansion are the same as those in the eigenfunction series whose convergence, under certain conditions, was stated in Theorem 11.2.4. Thus Sn(x) is the nth partial sum of this series and constitutes the best mean square approximation to f(x) that is possible with the functions φ1, …, φn. We will assume hereafter that the coefficients ai in Sn(x) are the Fourier coefficients.

Mean Square Method Discussion (10 of 10) Due to the orthogonality of the φn, the equation ai = ∫₀¹ r(x) f(x) φi(x) dx gives a formula for each ai separately, rather than a set of linear algebraic equations for a1, …, an, as in collocation. Further, the formula for ai is independent of n. Thus if we add an additional term φn+1 to Sn(x) to improve the approximation to f, we do not need to recompute the previous coefficients. If f, r, and the φn are complicated functions, it may be necessary to evaluate the integrals numerically.

Mean Square Convergence Now suppose that there is an infinite sequence of functions φ1, …, φn, …, which are continuous and orthonormal on [0, 1]. Suppose further that the mean square error Rn → 0 as n → ∞: the limit of ∫₀¹ r(x) [f(x) − Sn(x)]² dx is 0. In this case the infinite series Σ ai φi(x) converges in the mean square sense, or in the mean, to f(x). Note: a series may converge in the mean without converging at each point, or it may converge at each point without converging in the mean. See the text for more details.

Completeness, Square Integrable Now suppose we wish to know what class of functions defined on [0, 1] can be represented as an infinite series of the orthonormal set φ1, …, φn, …. We say that the set φ1, …, φn, … is complete with respect to mean square convergence for a set of functions F if, for each f in F, the series Σ ai φi(x) converges in the mean. A function f is square integrable on [0, 1] if both f and f² are integrable on [0, 1].

Theorem 11.6.1 Consider the Sturm-Liouville boundary value problem [p(x)y′]′ − q(x)y + λ r(x)y = 0, α1 y(0) + α2 y′(0) = 0, β1 y(1) + β2 y′(1) = 0, with normalized eigenfunctions φ1, …, φn, …. Then φ1, …, φn, … are complete with respect to mean square convergence for the set of functions that are square integrable on [0, 1]. Thus if f is square integrable on [0, 1], then the series Σ ai φi(x), with ai = ∫₀¹ r(x) f(x) φi(x) dx, converges in the mean to f(x).

Discussion of Square Integrable Functions The class of square integrable functions on [0,1] is very large. This class contains some functions with many discontinuities, including some kinds of infinite discontinuities, as well as some functions that are not differentiable at any point. All of these functions have mean convergent expansions in the eigenfunctions of the Sturm-Liouville problem. However, in many cases these series do not converge pointwise, at least not at every point.

Fourier Series and Sturm-Liouville Theory The theory of Fourier series discussed in Chapter 10 is a special case of the general theory of Sturm-Liouville problems. For example, the functions φn(x) = √2 sin(nπx), n = 1, 2, …, are the normalized eigenfunctions of the Sturm-Liouville problem y″ + λy = 0, y(0) = 0, y(1) = 0. Thus if f is square integrable on [0, 1], then the series Σ an √2 sin(nπx), with an = ∫₀¹ √2 f(x) sin(nπx) dx, converges in the mean, and this series is a Fourier sine series. If f satisfies the conditions of Theorem 11.2.4, then the series converges pointwise as well as in the mean.
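A quick numerical check (an illustration, not part of the text) confirms that the family φn(x) = √2 sin(nπx) is orthonormal on [0, 1] with weight r(x) = 1:

```python
import numpy as np
from scipy.integrate import quad

# Inner product of phi_m(x) = sqrt(2)*sin(m*pi*x) with phi_n(x) on [0, 1];
# should equal 1 when m = n and 0 otherwise.
def inner(m, n):
    val, _ = quad(lambda x: 2.0 * np.sin(m * np.pi * x) * np.sin(n * np.pi * x),
                  0.0, 1.0)
    return val
```

Evaluating inner(3, 3) gives a value near 1, while inner(2, 5) is near 0, as the orthonormality relation requires.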

Example 1: Square Integrable Function (1 of 2) Consider the function f defined in the text. Recall from the previous slide that φn(x) = √2 sin(nπx), n = 1, 2, …, are the orthonormal eigenfunctions for y″ + λy = 0, y(0) = 0, y(1) = 0. Since f is square integrable on [0, 1], its eigenfunction series converges in the mean to f, where the coefficients are an = ∫₀¹ √2 f(x) sin(nπx) dx.

Example 1: Convergence (2 of 2) The nth partial sum of the series is Sn(x) = Σ from m = 1 to n of am √2 sin(mπx), with mean square error Rn = ∫₀¹ [f(x) − Sn(x)]² dx. Computed values of Rn steadily decrease as n increases, and from Theorem 11.6.1 we know that Rn → 0 as n → ∞. Pointwise, Sn(x) → f(x) as n → ∞ for 0 < x < 1 (Theorem 11.2.4). However, Sn(0) = Sn(1) = 0 for all n, and hence the least upper bound of the error does not diminish as n increases.
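Since the example's f is not reproduced here, the following sketch uses the illustrative stand-in f(x) = x, which shares the key feature that f(1) ≠ 0 while every partial sum satisfies Sn(1) = 0; it computes Rn for increasing n:

```python
import numpy as np
from scipy.integrate import quad

# Mean square error Rn of the sine-series partial sum for an illustrative
# f(x) = x (a stand-in; the example's actual f is defined in the text).
def mean_square_error(f, n):
    # Fourier coefficients a_i = integral of f(x)*sqrt(2)*sin(i*pi*x), i = 1..n
    a = [quad(lambda x: f(x) * np.sqrt(2) * np.sin(i * np.pi * x), 0, 1)[0]
         for i in range(1, n + 1)]
    Sn = lambda x: sum(a[i - 1] * np.sqrt(2) * np.sin(i * np.pi * x)
                       for i in range(1, n + 1))
    return quad(lambda x: (f(x) - Sn(x)) ** 2, 0, 1)[0]

errs = [mean_square_error(lambda x: x, n) for n in (1, 2, 4, 8)]
# Rn decreases steadily toward 0, yet max|f - Sn| stays near 1 close to
# x = 1, since Sn(1) = 0 for every n while f(1) = 1.
```

This reproduces the qualitative behavior described above: mean square convergence without uniform convergence on the closed interval.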

Extending Theorem 11.6.1: Periodic Boundary Conditions (1 of 2) Theorem 11.6.1 can be extended to self-adjoint problems having periodic boundary conditions, such as the problem y″ + λy = 0, y(−L) = y(L), y′(−L) = y′(L), considered in Example 4 of Section 11.2. The normalized eigenfunctions of this problem are 1/√(2L), cos(nπx/L)/√L, and sin(nπx/L)/√L for n = 1, 2, …. If f is square integrable on −L ≤ x ≤ L, then its expansion in these eigenfunctions converges in the mean.

Fourier Expansion (2 of 2) Thus the expansion can be written as f(x) = a0/2 + Σ from m = 1 to ∞ of [am cos(mπx/L) + bm sin(mπx/L)], where am = (1/L) ∫ from −L to L of f(x) cos(mπx/L) dx and bm = (1/L) ∫ from −L to L of f(x) sin(mπx/L) dx. This expansion is exactly the Fourier series for f discussed in Sections 10.2 and 10.3. According to the generalization of Theorem 11.6.1, this series converges in the mean for any square integrable function f, although f may not satisfy the conditions of Theorem 10.3.1, which ensures pointwise convergence.
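The classical coefficient formulas can be sketched numerically; the choices L = 1 and f(x) = |x| below are illustrative assumptions, not taken from the text:

```python
import numpy as np
from scipy.integrate import quad

# Classical Fourier coefficients on [-L, L] for an illustrative square
# integrable f(x) = |x| with L = 1 (both are assumed choices).
L = 1.0
f = lambda x: np.abs(x)

def a_coeff(m):
    return quad(lambda x: f(x) * np.cos(m * np.pi * x / L), -L, L)[0] / L

def b_coeff(m):
    return quad(lambda x: f(x) * np.sin(m * np.pi * x / L), -L, L)[0] / L

def S(x, n):
    # Partial sum a0/2 + sum_{m=1}^{n} [a_m cos(m pi x / L) + b_m sin(m pi x / L)]
    return a_coeff(0) / 2 + sum(a_coeff(m) * np.cos(m * np.pi * x / L)
                                + b_coeff(m) * np.sin(m * np.pi * x / L)
                                for m in range(1, n + 1))
```

Since |x| is even, all bm vanish, and the partial sums approach f throughout the interval; for functions with jumps the same series would still converge in the mean even where pointwise convergence fails.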