Ch 1.1: Basic Mathematical Models; Direction Fields


1 Ch 1.1: Basic Mathematical Models; Direction Fields
Differential equations are equations containing derivatives. Derivatives describe rates of change. The following are examples of physical phenomena involving rates of change: Motion of fluids Motion of mechanical systems Flow of current in electrical circuits Dissipation of heat in solid objects Seismic waves Population dynamics A differential equation that describes a physical process is often called a mathematical model.

2 Example 1: Free Fall (1 of 4)
Formulate a differential equation describing the motion of an object falling in the atmosphere near sea level. Variables: time t, velocity v. Newton's 2nd Law: F = ma = m(dv/dt) (net force). Force of gravity: F = mg (downward force). Force of air resistance: F = γv (upward force). Then m(dv/dt) = mg - γv. Taking g = 9.8 m/sec², m = 10 kg, γ = 2 kg/sec, we obtain dv/dt = 9.8 - v/5.
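As a quick numerical check (a Python sketch added here, not part of the original slides), Euler's method applied to dv/dt = 9.8 - v/5 shows the velocity approaching the limiting value v = 49 m/sec:

```python
def falling_velocity(v0=0.0, dt=0.01, t_end=60.0):
    """Integrate dv/dt = 9.8 - v/5 with Euler's method and return v(t_end)."""
    v = v0
    for _ in range(int(t_end / dt)):
        v += dt * (9.8 - v / 5)
    return v

print(round(falling_velocity(), 3))  # close to the limiting velocity 49
```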

3 Example 1: Sketching Direction Field (2 of 4)
Using the differential equation and a table of values, plot slope estimates on the axes below. The resulting graph is called a direction field. (Note that the slopes depend on v but not on t.)

4 Example 1: Direction Field Using Maple (3 of 4)
Sample Maple commands for graphing a direction field:
with(DEtools):
DEplot(diff(v(t),t)=9.8-v(t)/5, v(t), t=0..10, v=0..80, stepsize=.1, color=blue);
When graphing direction fields, be sure to use a window large enough to display all equilibrium solutions and the relevant solution behavior.
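A rough Python analogue of the Maple command (an illustrative sketch, not from the slides) tabulates the slopes f(t, v) = 9.8 - v/5 on a grid; the resulting values could be fed to a quiver plot to draw the field:

```python
def slope_grid(f, t_vals, v_vals):
    """Map each grid point (t, v) to the slope f(t, v) of the direction field."""
    return {(t, v): f(t, v) for t in t_vals for v in v_vals}

slopes = slope_grid(lambda t, v: 9.8 - v / 5,
                    t_vals=range(0, 11, 2),
                    v_vals=range(0, 81, 10))
# Slopes are positive below the equilibrium v = 49 and negative above it.
```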

5 Example 1: Direction Field & Equilibrium Solution (4 of 4)
Arrows give tangent lines to solution curves and indicate where a solution is increasing or decreasing (and by how much). Horizontal solution curves are called equilibrium solutions. Use the graph below to estimate the equilibrium solution, and then determine it analytically by setting v' = 0 (here 9.8 - v/5 = 0, so v = 49).

6 Equilibrium Solutions
In general, for a differential equation of the form y' = f(y), find equilibrium solutions by setting y' = 0 and solving f(y) = 0 for y. Example: Find the equilibrium solutions of the following equations.
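Setting y' = 0 and solving can be automated with SymPy; here f(y) = y(y - 2)(y + 1) is an assumed illustrative right-hand side, not one from the slides:

```python
import sympy as sp

y = sp.symbols('y')
f = y * (y - 2) * (y + 1)              # assumed right-hand side of y' = f(y)
equilibria = sp.solve(sp.Eq(f, 0), y)  # roots of f(y) = 0
print(sorted(equilibria))              # the equilibrium solutions: [-1, 0, 2]
```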

7 Ch 1.2: Solutions of Some Differential Equations
Recall the free fall and owl/mice differential equations: v' = 9.8 - v/5 and p' = 0.5p - 450. These equations have the general form y' = ay - b, and we can use methods of calculus to solve differential equations of this form.

8 Example 1: Mice and Owls (1 of 3)
To solve the differential equation p' = 0.5p - 450, we use methods of calculus, as follows (note what happens when p = 900). Thus the solution is p = 900 + ke^(t/2), where k is a constant.

9 Example 1: Integral Curves (2 of 3)
Thus we have infinitely many solutions to our equation, since k is an arbitrary constant. Graphs of solutions (integral curves) for several values of k, along with the direction field for the differential equation, are given below. Choosing k = 0 yields the equilibrium solution, while for k ≠ 0 the solutions diverge from the equilibrium solution.

10 Example 1: Initial Conditions (3 of 3)
A differential equation often has infinitely many solutions. If a point on the solution curve is known, such as an initial condition, then this determines a unique solution. In the mice/owl differential equation, suppose we know that the mice population starts out at 850. Then p(0) = 850, and the solution is p = 900 - 50e^(t/2).

11 Solution to General Equation
To solve the general equation y' = ay - b (a ≠ 0), we use methods of calculus, as follows (assuming y ≠ b/a). Thus the general solution is y = b/a + ke^(at), where k is a constant (k = 0 gives the equilibrium solution). Special case a = 0: the general solution is y = -bt + c.
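The general solution can be verified symbolically; this SymPy check (an addition, not from the slides) substitutes y = b/a + ke^(at) back into y' = ay - b:

```python
import sympy as sp

t, a, b, k = sp.symbols('t a b k', nonzero=True)
y = b / a + k * sp.exp(a * t)        # proposed general solution, a != 0
residual = sp.simplify(sp.diff(y, t) - (a * y - b))
print(residual)  # 0, so the family satisfies y' = a*y - b for every k
```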

12 Initial Value Problem Next, we solve the initial value problem y' = ay - b, y(0) = y0 (a ≠ 0).
From the previous slide, the solution to the differential equation is y = b/a + ke^(at). Using the initial condition to solve for k, we obtain k = y0 - b/a, and hence the solution to the initial value problem is y = b/a + (y0 - b/a)e^(at).

13 Equilibrium Solution Recall: To find the equilibrium solution, set y' = 0 and solve for y: y = b/a. From the previous slide, our solution to the initial value problem is y = b/a + (y0 - b/a)e^(at). Note the following solution behavior: If y0 = b/a, then y is constant, with y(t) = b/a. If y0 > b/a and a > 0, then y increases exponentially without bound. If y0 > b/a and a < 0, then y decays exponentially to b/a. If y0 < b/a and a > 0, then y decreases exponentially without bound. If y0 < b/a and a < 0, then y increases asymptotically to b/a.

14 Ch 1.3: Classification of Differential Equations
The main purpose of this course is to present methods of finding solutions, and to discuss properties of solutions of differential equations. To provide a framework for this discussion, in this section we give several ways of classifying differential equations.

15 Ordinary Differential Equations
When the unknown function depends on a single independent variable, only ordinary derivatives appear in the equation. In this case the equation is said to be an ordinary differential equation (ODE). The equations discussed in the preceding two sections are ordinary differential equations. For example,

16 Systems of Differential Equations
Another classification of differential equations depends on the number of unknown functions that are involved. If there is a single unknown function to be found, then one equation is sufficient. If there are two or more unknown functions, then a system of equations is required. For example, predator-prey equations have the form du/dt = au - αuv, dv/dt = -cv + γuv, where u(t) and v(t) are the respective populations of the prey and predator species. The constants a, c, α, γ depend on the particular species being studied. Systems of equations are discussed in Chapter 7.

17 Order of Differential Equations
The order of a differential equation is the order of the highest derivative that appears in the equation. Examples: We will be studying differential equations for which the highest derivative can be isolated:

18 Linear & Nonlinear Differential Equations
An ordinary differential equation F(t, y, y', ..., y^(n)) = 0 is linear if F is linear in the variables y, y', ..., y^(n). Thus the general linear ODE has the form a0(t)y^(n) + a1(t)y^(n-1) + ... + an(t)y = g(t). Example: Determine whether the equations below are linear or nonlinear.

19 Solutions to Differential Equations
A solution φ(t) to an ordinary differential equation satisfies the equation identically on some interval. Example: Verify the following solutions of the ODE

20 Solutions to Differential Equations
Three important questions in the study of differential equations: Is there a solution? (Existence) If there is a solution, is it unique? (Uniqueness) If there is a solution, how do we find it? (Analytical Solution, Numerical Approximation, etc)

21 Ch 2.1: Linear Equations; Method of Integrating Factors
A linear first order ODE has the general form dy/dt = f(t, y), where f is linear in y. Examples include equations with constant coefficients, such as those in Chapter 1, and equations with variable coefficients:

22 Constant Coefficient Case
For a first order linear equation with constant coefficients, recall that we can use methods of calculus to solve: (Integrating step)

23 Variable Coefficient Case: Method of Integrating Factors
We next consider linear first order ODEs with variable coefficients: y' + p(t)y = g(t). The method of integrating factors involves multiplying this equation by a function μ(t), chosen so that the left side of the resulting equation is a recognizable derivative and the equation is easily integrated.

24 Example 1: Integrating Factor (1 of 2)
Consider the following equation: Multiplying both sides by μ(t), we obtain a new equation. We will choose μ(t) so that the left side is the derivative of a known quantity. Consider the following, and recall the product rule: Choose μ(t) so that μ'(t) = 2μ(t), e.g. μ(t) = e^(2t) (note that there are many valid choices of μ(t)).

25 Example 1: General Solution (2 of 2)
With μ(t) = e^(2t), we solve the original equation as follows:

26 Method of Integrating Factors for General First Order Linear Equation
Next, we consider the general first order linear equation y' + p(t)y = g(t). Multiplying both sides by μ(t), we obtain μ(t)y' + p(t)μ(t)y = μ(t)g(t). Next, we want μ(t) such that μ'(t) = p(t)μ(t), from which it will follow that the left side is (μ(t)y)'.

27 Integrating Factor for General First Order Linear Equation
Thus we want to choose μ(t) such that μ'(t) = p(t)μ(t). Assuming μ(t) > 0 (as we only need one such μ(t)), it follows that ln μ(t) = ∫p(t)dt + k. Choosing k = 0, we then have μ(t) = exp(∫p(t)dt), and note that μ(t) > 0, as desired.
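For a variable coefficient such as p(t) = 2/t (an illustrative choice, not the slides' example), SymPy confirms that μ(t) = exp(∫p dt) turns the left side into an exact derivative:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.Function('y')
p = 2 / t                                   # assumed coefficient p(t)
mu = sp.exp(sp.integrate(p, t))             # integrating factor, here t**2
lhs = sp.diff(mu * y(t), t)                 # (mu*y)'
rhs = mu * (sp.diff(y(t), t) + p * y(t))    # mu*(y' + p*y)
print(sp.simplify(lhs - rhs))  # 0: the two sides agree
```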

28 Solution for General First Order Linear Equation
Thus we have the following: (μ(t)y)' = μ(t)g(t). Then y = (1/μ(t))(∫μ(t)g(t)dt + c).

29 Example 4: General Solution (1 of 3)
To solve the initial value problem, first put the equation into standard form. Then find μ(t), and hence the general solution. Note: y → 0 as t → 0.

30 Example 4: Particular Solution (2 of 3)
Using the initial condition y(1) = 2 and general solution it follows that or equivalently,

31 Example 4: Graphs of Solution (3 of 3)
The graphs below show several integral curves for the differential equation, and a particular solution (in red) whose graph contains the initial point (1,2).

32 Ch 2.2: Separable Equations
In this section we examine a subclass of linear and nonlinear first order equations. Consider the first order equation dy/dx = f(x, y). We can rewrite this in the form M(x, y) + N(x, y)(dy/dx) = 0; for example, let M(x, y) = -f(x, y) and N(x, y) = 1. There may be other ways as well. In differential form, M(x, y)dx + N(x, y)dy = 0. If M is a function of x only and N is a function of y only, then the equation takes the form M(x)dx + N(y)dy = 0. In this case, the equation is called separable.

33 Example 1: Solving a Separable Equation
Solve the following first order nonlinear equation: Separating variables, and using calculus, we obtain The equation above defines the solution y implicitly. A graph showing the direction field and implicit plots of several integral curves for the differential equation is given above.

34 Example 2: Implicit and Explicit Solutions (1 of 4)
Solve the following first order nonlinear equation: Separating variables and using calculus, we obtain The equation above defines the solution y implicitly. An explicit expression for the solution can be found in this case:
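The separation procedure is easy to check on a simple equation; here dy/dx = x/y (an illustrative separable equation, not necessarily the slides') gives y² = x² + C, and SymPy verifies one explicit branch:

```python
import sympy as sp

x = sp.symbols('x')
# Separating y dy = x dx and integrating gives y**2 = x**2 + C.
C = 2                              # pick a constant for the check
y_explicit = sp.sqrt(x**2 + C)     # one explicit solution branch
residual = sp.simplify(sp.diff(y_explicit, x) - x / y_explicit)
print(residual)  # 0: the branch satisfies dy/dx = x/y
```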

35 Example 2: Initial Value Problem (2 of 4)
Suppose we seek a solution satisfying y(0) = -1. Using the implicit expression for y, we obtain: Thus the implicit equation defining y is: Using the explicit expression for y, it follows that:

36 Example 2: Initial Condition y(0) = 3 (3 of 4)
Note that if the initial condition is y(0) = 3, then we choose the positive sign, instead of the negative sign, on the square root term:

37 Example 3: Implicit Solution of Initial Value Problem (1 of 2)
Consider the following initial value problem: Separating variables and using calculus, we obtain Using the initial condition, it follows that

38 Example 3: Graph of Solutions (2 of 2)
Thus The graph of this solution (black), along with the graphs of the direction field and several integral curves (blue) for this differential equation, is given below.

39 Ch 2.6: Exact Equations & Integrating Factors
Consider a first order ODE of the form M(x, y) + N(x, y)y' = 0. Suppose there is a function ψ such that ψ_x(x, y) = M, ψ_y(x, y) = N, and such that ψ(x, y) = c defines y = φ(x) implicitly. Then M + Ny' = ψ_x + ψ_y(dy/dx) = (d/dx)ψ(x, φ(x)), and hence the original ODE becomes (d/dx)ψ(x, φ(x)) = 0. Thus ψ(x, y) = c defines a solution implicitly. In this case, the ODE is said to be exact.

40 Theorem 2.6.1 Suppose an ODE can be written in the form
where the functions M, N, M_y and N_x are all continuous in the rectangular region R: α < x < β, γ < y < δ. Then Eq. (1) is an exact differential equation if and only if M_y(x, y) = N_x(x, y). (2) That is, there exists a function ψ satisfying the conditions ψ_x = M, ψ_y = N if and only if M and N satisfy Equation (2).
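Theorem 2.6.1's test is easy to apply symbolically. For the illustrative equation (2x + y) + (x + 2y)y' = 0 (an assumed example, not the slide's), M_y = N_x = 1, and ψ can be built by integration:

```python
import sympy as sp

x, y = sp.symbols('x y')
M = 2*x + y                             # assumed M(x, y)
N = x + 2*y                             # assumed N(x, y)
assert sp.diff(M, y) == sp.diff(N, x)   # exactness test: M_y == N_x

# Build psi with psi_x = M, then fix the y-dependence so psi_y = N.
psi_partial = sp.integrate(M, x)                  # x**2 + x*y
h = sp.integrate(N - sp.diff(psi_partial, y), y)  # y**2
psi = psi_partial + h
print(psi)  # x**2 + x*y + y**2; solutions satisfy psi(x, y) = c
```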

41 Example 1: Exact Equation (1 of 4)
Consider the following differential equation. Then and hence From Theorem 2.6.1, Thus

42 Example 1: Solution (2 of 4)
We have and It follows that Thus By Theorem 2.6.1, the solution is given implicitly by

43 Example 1: Direction Field and Solution Curves (3 of 4)
Our differential equation and solutions are given by A graph of the direction field for this differential equation, along with several solution curves, is given below.

44 Example 1: Explicit Solution and Graphs (4 of 4)
Our solution is defined implicitly by the equation below. In this case, we can solve the equation explicitly for y: Solution curves for several values of c are given below.

45 Example 3: Non-Exact Equation (1 of 3)
Consider the following differential equation. Computing M_y and N_x, we find that M_y ≠ N_x, and hence the equation is not exact. To show that our differential equation cannot be solved by this method, let us seek a function ψ such that ψ_x = M and ψ_y = N. Thus

46 Example 3: Non-Exact Equation (2 of 3)
We seek ψ such that ψ_x = M and ψ_y = N. Carrying out the integrations shows that there is no such function ψ. However, if we (incorrectly) proceed as before, we obtain an implicitly defined y which is not a solution of the ODE.

47 Integrating Factors It is sometimes possible to convert a differential equation that is not exact into an exact equation by multiplying the equation by a suitable integrating factor μ(x, y): μM + μNy' = 0. For this equation to be exact, we need (μM)_y = (μN)_x. This partial differential equation may be difficult to solve. If μ is a function of x alone, then μ_y = 0, and hence we solve dμ/dx = ((M_y - N_x)/N)μ, provided the right side is a function of x only. Similarly if μ is a function of y alone. See the text for more details.

48 Example 4: Non-Exact Equation
Consider the following non-exact differential equation. Seeking an integrating factor, we solve the linear equation for μ. Multiplying our differential equation by μ, we obtain the exact equation, which has its solutions given implicitly by

49 Ch 3.1: Second Order Linear Homogeneous Equations with Constant Coefficients
A second order ordinary differential equation has the general form y'' = f(t, y, y'), where f is some given function. This equation is said to be linear if f is linear in y and y'. Otherwise the equation is said to be nonlinear. A second order linear equation often appears in the form P(t)y'' + Q(t)y' + R(t)y = G(t). If G(t) = 0 for all t, then the equation is called homogeneous; otherwise the equation is nonhomogeneous.

50 Homogeneous Equations, Initial Values
In Sections 3.6 and 3.7, we will see that once a solution to a homogeneous equation is found, then it is possible to solve the corresponding nonhomogeneous equation, or at least express the solution in terms of an integral. The focus of this chapter is thus on homogeneous equations, and in particular those with constant coefficients: ay'' + by' + cy = 0. We will examine the variable coefficient case in Chapter 5. Initial conditions typically take the form y(t0) = y0, y'(t0) = y0'. Thus the solution passes through (t0, y0), and the slope of the solution at (t0, y0) is equal to y0'.

51 Example 1: Infinitely Many Solutions (1 of 3)
Consider the second order linear differential equation Two solutions of this equation are Other solutions include Based on these observations, we see that there are infinitely many solutions of the form It will be shown in Section 3.2 that all solutions of the differential equation above can be expressed in this form.

52 Example 1: Initial Conditions (2 of 3)
Now consider the following initial value problem for our equation: We have found a general solution of the given form. Using the initial conditions, we solve for the two constants. Thus

53 Example 1: Solution Graphs (3 of 3)
Our initial value problem and solution are Graphs of this solution are given below. The graph on the right suggests that both initial conditions are satisfied.

54 Characteristic Equation
To solve the 2nd order equation with constant coefficients, ay'' + by' + cy = 0, we begin by assuming a solution of the form y = e^(rt). Substituting this into the differential equation, we obtain (ar² + br + c)e^(rt) = 0. Simplifying (since e^(rt) ≠ 0), we obtain ar² + br + c = 0. This last equation is called the characteristic equation of the differential equation. We then solve for r by factoring or by using the quadratic formula.
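Solving a characteristic equation is a one-liner in SymPy; the coefficients below give r² + r - 12 = 0, matching the roots used in Example 2 (the ODE itself is an assumption reconstructed from those roots):

```python
import sympy as sp

r = sp.symbols('r')
a, b, c = 1, 1, -12                  # assumed: y'' + y' - 12y = 0
roots = sp.solve(a*r**2 + b*r + c, r)
print(sorted(roots))  # [-4, 3], so y = c1*e^(-4t) + c2*e^(3t)
```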

55 General Solution Using the quadratic formula on the characteristic equation we obtain two solutions, r1 and r2. There are three possible results: The roots r1, r2 are real and r1 ≠ r2. The roots r1, r2 are real and r1 = r2. The roots r1, r2 are complex. In this section, we will assume r1, r2 are real and r1 ≠ r2. In this case, the general solution has the form y = c1e^(r1 t) + c2e^(r2 t).

56 Initial Conditions For the initial value problem
we use the general solution together with the initial conditions to find c1 and c2. That is, Since we are assuming r1 ≠ r2, it follows that a solution of the form y = c1e^(r1 t) + c2e^(r2 t) to the above initial value problem will always exist, for any set of initial conditions.

57 Example 2 Consider the initial value problem
Assuming an exponential solution leads to the characteristic equation r² + r - 12 = 0. Factoring yields two roots, r1 = -4 and r2 = 3. The general solution has the form y = c1e^(-4t) + c2e^(3t). Using the initial conditions: Thus

58 Example 4: Initial Value Problem (1 of 2)
Consider the initial value problem. Then the characteristic equation is r² + 5r + 6 = 0. Factoring yields two roots, r1 = -2 and r2 = -3. The general solution has the form y = c1e^(-2t) + c2e^(-3t). Using the initial conditions: Thus

59 Example 4: Find Maximum Value (2 of 2)
Find the maximum value attained by the solution.

60 Ch 3.2: Solutions of Linear Homogeneous Equations; Wronskian
Let p, q be continuous functions on an interval I = (α, β), which could be infinite. For any function y that is twice differentiable on I, define the differential operator L by L[y] = y'' + py' + qy. Note that L[y] is a function on I, with output value L[y](t) = y''(t) + p(t)y'(t) + q(t)y(t). For example,

61 Differential Operator Notation
In this section we will discuss the second order linear homogeneous equation L[y](t) = 0, along with initial conditions as indicated below: We would like to know whether there are solutions to this initial value problem and, if so, whether they are unique. Also, we would like to know what can be said about the form and structure of solutions that might be helpful in finding solutions to particular problems. These questions are addressed in the theorems of this section.

62 Theorem 3.2.1 Consider the initial value problem
where p, q, and g are continuous on an open interval I that contains t0. Then there exists a unique solution y = (t) on I. Note: While this theorem says that a solution to the initial value problem above exists, it is often not possible to write down a useful expression for the solution. This is a major difference between first and second order linear equations.

63 Theorem 3.2.2 (Principle of Superposition)
If y1 and y2 are solutions to the equation, then the linear combination c1y1 + c2y2 is also a solution, for all constants c1 and c2. To prove this theorem, substitute c1y1 + c2y2 for y in the equation above, and use the fact that y1 and y2 are solutions. Thus for any two solutions y1 and y2, we can construct an infinite family of solutions, each of the form y = c1y1 + c2y2. Can all solutions be written this way, or do some solutions have a different form altogether? To answer this question, we use the Wronskian determinant.

64 The Wronskian Determinant (1 of 3)
Suppose y1 and y2 are solutions to the equation From Theorem 3.2.2, we know that y = c1y1 + c2 y2 is a solution to this equation. Next, find coefficients such that y = c1y1 + c2 y2 satisfies the initial conditions To do so, we need to solve the following equations:

65 The Wronskian Determinant (2 of 3)
Solving the equations, we obtain In terms of determinants:

66 The Wronskian Determinant (3 of 3)
In order for these formulas to be valid, the determinant W in the denominator cannot be zero: W = y1y2' - y1'y2 ≠ 0. W is called the Wronskian determinant, or more simply, the Wronskian, of the solutions y1 and y2. We will sometimes use the notation W(y1, y2)(t0).

67 Theorem 3.2.3 Suppose y1 and y2 are solutions to the equation
and that the Wronskian is not zero at the point t0 where the initial conditions are assigned. Then there is a choice of constants c1, c2 for which y = c1y1 + c2 y2 is a solution to the differential equation (1) and initial conditions (2).

68 Theorem 3.2.4 (Fundamental Solutions)
Suppose y1 and y2 are solutions to the equation. If there is a point t0 such that W(y1, y2)(t0) ≠ 0, then the family of solutions y = c1y1 + c2y2 with arbitrary coefficients c1, c2 includes every solution to the differential equation. The expression y = c1y1 + c2y2 is called the general solution of the differential equation above, and in this case y1 and y2 are said to form a fundamental set of solutions to the differential equation.

69 Example 6 Consider the general second order linear equation below, with the two solutions indicated: Suppose the functions below are solutions to this equation: The Wronskian of y1 and y2 is Thus y1 and y2 form a fundamental set of solutions to the equation, and can be used to construct all of its solutions. The general solution is

70 Example 7: Solutions (1 of 2)
Consider the following differential equation: Show that the functions below are fundamental solutions: To show this, first substitute y1 into the equation: Thus y1 is indeed a solution of the differential equation. Similarly, y2 is also a solution:

71 Example 7: Fundamental Solutions (2 of 2)
Recall that: To show that y1 and y2 form a fundamental set of solutions, we evaluate the Wronskian of y1 and y2: Since W ≠ 0 for t > 0, y1 and y2 form a fundamental set of solutions for the differential equation.

72 Summary To find a general solution of the differential equation
we first find two solutions y1 and y2. Then make sure there is a point t0 in the interval such that W(y1, y2)(t0) ≠ 0. It follows that y1 and y2 form a fundamental set of solutions to the equation, with general solution y = c1y1 + c2y2. If initial conditions are prescribed at a point t0 in the interval where W ≠ 0, then c1 and c2 can be chosen to satisfy those conditions.

73 Ch 3.3: Complex Roots of Characteristic Equation
Recall our discussion of the equation ay'' + by' + cy = 0, where a, b and c are constants. Assuming an exponential solution leads to the characteristic equation ar² + br + c = 0. The quadratic formula (or factoring) yields two roots, r1 and r2. If b² - 4ac < 0, then the roots are complex: r1 = λ + iμ, r2 = λ - iμ. Thus the solutions are e^((λ+iμ)t) and e^((λ-iμ)t).

74 Euler’s Formula; Complex Valued Solutions
Substituting it into the Taylor series for e^t, we obtain Euler's formula: e^(it) = cos t + i sin t. Generalizing, we obtain e^(iμt) = cos μt + i sin μt. Then e^((λ+iμ)t) = e^(λt)e^(iμt) = e^(λt)(cos μt + i sin μt). Therefore
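Euler's formula and its generalization can be spot-checked numerically with Python's cmath module (a sanity check added here, not part of the slides):

```python
import cmath
import math

t = 0.7
# Euler's formula: e^(it) = cos t + i sin t
assert abs(cmath.exp(1j * t) - complex(math.cos(t), math.sin(t))) < 1e-12

# Generalization: e^((lam + i*mu)t) = e^(lam*t) * (cos(mu*t) + i*sin(mu*t))
lam, mu = 0.5, 2.0
lhs = cmath.exp(complex(lam, mu) * t)
rhs = math.exp(lam * t) * complex(math.cos(mu * t), math.sin(mu * t))
assert abs(lhs - rhs) < 1e-12
print("Euler's formula checks out numerically")
```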

75 Real Valued Solutions Our two solutions thus far are complex-valued functions. We would prefer to have real-valued solutions, since our differential equation has real coefficients. To achieve this, recall that linear combinations of solutions are themselves solutions; taking the sum and the difference of our two solutions and ignoring constants, we obtain the two real-valued solutions

76 Real Valued Solutions: The Wronskian
Thus we have the following real-valued functions: y3 = e^(λt)cos μt and y4 = e^(λt)sin μt. Checking the Wronskian, we obtain W = μe^(2λt) ≠ 0. Thus y3 and y4 form a fundamental solution set for our ODE, and the general solution can be expressed as y = c1e^(λt)cos μt + c2e^(λt)sin μt.

77 Example 1 Consider the equation Then Therefore
and thus the general solution is

78 Example 2 Consider the equation Then Therefore
and thus the general solution is

79 Example 3 Consider the equation Then Therefore the general solution is

80 Ch 3.4: Repeated Roots; Reduction of Order
Recall our 2nd order linear homogeneous ODE ay'' + by' + cy = 0, where a, b and c are constants. Assuming an exponential solution leads to the characteristic equation ar² + br + c = 0. The quadratic formula (or factoring) yields two roots, r1 and r2. When b² - 4ac = 0, then r1 = r2 = -b/(2a), and the method gives only one solution: y1 = e^(-bt/(2a)).

81 Second Solution: Multiplying Factor v(t)
We know that y1(t) = e^(-bt/(2a)) is a solution. Since any constant multiple y2 = cy1 is linearly dependent on y1, we generalize this approach and multiply by a function v(t), determining conditions for which y2 = v(t)y1(t) is a solution. Then

82 Finding Multiplying Factor v(t)
Substituting derivatives into ODE, we seek a formula for v:

83 Wronskian, Fundamental Solutions, and General Solutions
Thus, two special solutions have been found: y1 = e^(-bt/(2a)) and y2 = te^(-bt/(2a)). The Wronskian of the two solutions is W(y1, y2)(t) = e^(-bt/a) ≠ 0. Thus y1 and y2 form a fundamental solution set for the equation, and the general solution is y = c1e^(-bt/(2a)) + c2te^(-bt/(2a)).
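SymPy can confirm both solutions and the Wronskian for a concrete repeated root; r = -2, i.e. y'' + 4y' + 4y = 0, is an assumed example rather than the slides' own:

```python
import sympy as sp

t = sp.symbols('t')
y1 = sp.exp(-2*t)          # first solution for the repeated root r = -2
y2 = t * sp.exp(-2*t)      # second solution from the multiplying factor v(t) = t
L = lambda y: sp.diff(y, t, 2) + 4*sp.diff(y, t) + 4*y   # assumed ODE operator

print(sp.simplify(L(y1)), sp.simplify(L(y2)))   # 0 0: both are solutions
W = sp.simplify(y1*sp.diff(y2, t) - y2*sp.diff(y1, t))
print(W)  # exp(-4*t), which is never zero
```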

84 Example 1 Consider the initial value problem
Assuming an exponential solution leads to the characteristic equation: Thus the general solution is Using the initial conditions: Thus

85 Reduction of Order The method used so far in this section also works for equations with nonconstant coefficients: y'' + p(t)y' + q(t)y = 0. That is, given that y1 is a solution, try y2 = v(t)y1. Substituting these into the ODE and collecting terms, we obtain an equation for v. Since y1 is a solution to the differential equation, this last equation reduces to a first order equation in v':

86 Example 4: Reduction of Order (1 of 3)
Given the variable coefficient equation and the solution y1, use the reduction of order method to find a second solution: Substituting these into the ODE and collecting terms,

87 Example 4: Finding v(t) (2 of 3)
To solve for u, we can use the separation of variables method: Thus and hence

88 Example 4: General Solution (3 of 3)
We have Thus Recall and hence we can neglect the second term of y2 to obtain Hence the general solution to the differential equation is

89 Ch 3.5: Nonhomogeneous Equations; Method of Undetermined Coefficients
Recall the nonhomogeneous equation y'' + p(t)y' + q(t)y = g(t), where p, q, g are continuous functions on an open interval I. The associated homogeneous equation is y'' + p(t)y' + q(t)y = 0. In this section we will learn the method of undetermined coefficients for solving the nonhomogeneous equation, which relies on knowing solutions to the homogeneous equation.

90 Theorem 3.6.1 If Y1, Y2 are solutions of nonhomogeneous equation
then Y1 - Y2 is a solution of the homogeneous equation. If, in addition, y1, y2 form a fundamental solution set of the homogeneous equation, then there exist constants c1, c2 such that

91 Theorem 3.6.2 (General Solution)
The general solution of nonhomogeneous equation can be written in the form where y1, y2 form a fundamental solution set of homogeneous equation, c1, c2 are arbitrary constants and Y is a specific solution to the nonhomogeneous equation.

92 Method of Undetermined Coefficients
Recall the nonhomogeneous equation with general solution y = c1y1 + c2y2 + Y. In this section we use the method of undetermined coefficients to find a particular solution Y to the nonhomogeneous equation, assuming we can find solutions y1, y2 for the homogeneous case. The method of undetermined coefficients is usually limited to the case in which p and q are constants and g(t) is a polynomial, exponential, sine, or cosine function.

93 Example 1: Exponential g(t)
Consider the nonhomogeneous equation We seek Y satisfying this equation. Since exponentials replicate through differentiation, a good start for Y is: Substituting these derivatives into differential equation, Thus a particular solution to the nonhomogeneous ODE is
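The coefficient of the trial solution can be found mechanically; here y'' - 3y' - 4y = 3e^(2t) is an assumed Boyce-style example (the slide's own equation is not reproduced in the transcript):

```python
import sympy as sp

t, A = sp.symbols('t A')
Y = A * sp.exp(2*t)                      # trial particular solution
residual = sp.diff(Y, t, 2) - 3*sp.diff(Y, t) - 4*Y - 3*sp.exp(2*t)
sol = sp.solve(sp.expand(residual / sp.exp(2*t)), A)
print(sol)  # [-1/2], so Y = -exp(2t)/2
```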

94 Example 2: Sine g(t), First Attempt (1 of 2)
Consider the nonhomogeneous equation. We seek Y satisfying this equation. Since sines replicate through differentiation, a good start for Y is: Substituting these derivatives into the differential equation and collecting terms, we find that the coefficient of each trigonometric term must vanish. Since sine and cosine are linearly independent (they are not multiples of each other), this requires 2 + 5A = 0 and 3A = 0 simultaneously, which is impossible.

95 Example 2: Sine g(t), Particular Solution (2 of 2)
Our next attempt at finding a Y is Substituting these derivatives into ODE, we obtain Thus a particular solution to the nonhomogeneous ODE is

96 Example 3: Polynomial g(t)
Consider the nonhomogeneous equation We seek Y satisfying this equation. We begin with Substituting these derivatives into differential equation, Thus a particular solution to the nonhomogeneous ODE is

97 Example 4: Product g(t) Consider the nonhomogeneous equation
We seek Y satisfying this equation, as follows: Substituting derivatives into ODE and solving for A and B:

98 Discussion: Sum g(t) Consider again our general nonhomogeneous equation. Suppose that g(t) is a sum of functions: g(t) = g1(t) + g2(t). If Y1, Y2 are solutions of the corresponding equations with forcing terms g1 and g2, respectively, then Y1 + Y2 is a solution of the nonhomogeneous equation above.

99 Example 5: Sum g(t) Consider the equation
Our equations to solve individually are Our particular solution is then

100 Example 6: First Attempt (1 of 3)
Consider the equation We seek Y satisfying this equation. We begin with Substituting these derivatives into ODE: Thus no particular solution exists of the form

101 Example 6: Homogeneous Solution (2 of 3)
Thus no particular solution exists of the assumed form. To help understand why, recall that we found the corresponding homogeneous solution in the Section 3.4 notes: Thus our assumed particular solution solves the homogeneous equation instead of the nonhomogeneous equation.

102 Example 6: Particular Solution (3 of 3)
Our next attempt at finding a Y is: Substituting derivatives into ODE,

103 Ch 3.6: Variation of Parameters
Recall the nonhomogeneous equation where p, q, g are continuous functions on an open interval I. The associated homogeneous equation is In this section we will learn the variation of parameters method to solve the nonhomogeneous equation. As with the method of undetermined coefficients, this procedure relies on knowing solutions to homogeneous equation. Variation of parameters is a general method, and requires no detailed assumptions about solution form. However, certain integrals need to be evaluated, and this can present difficulties.

104 Example: Variation of Parameters (1 of 6)
We seek a particular solution to the equation below. We cannot use the method of undetermined coefficients, since g(t) is a quotient involving sin t and cos t, rather than a sum or product. Recall that the solution to the homogeneous equation is yC. To find a particular solution to the nonhomogeneous equation, we begin with the form Y = u1(t)y1(t) + u2(t)y2(t). Then, differentiating,

105 Example: Derivatives, 2nd Equation (2 of 6)
From the previous slide, Note that we need two equations to solve for u1 and u2. The first equation is the differential equation. To get a second equation, we will require Then Next,

106 Example: Two Equations (3 of 6)
Recall that our differential equation is Substituting y'' and y into this equation, we obtain This equation simplifies to Thus, to solve for u1 and u2, we have the two equations:

107 Example: Solve for u1' (4 of 6)
To find u1 and u2 , we need to solve the equations From second equation, Substituting this into the first equation,

108 Example: Solve for u1 and u2 (5 of 6)
From the previous slide, Then Thus

109 Example: General Solution (6 of 6)
Recall our equation and homogeneous solution yC: Using the expressions for u1 and u2 on the previous slide, the general solution to the differential equation is

110 Summary Suppose y1, y2 are fundamental solutions to the homogeneous equation associated with the nonhomogeneous equation y'' + p(t)y' + q(t)y = g(t), where we note that the coefficient of y'' is 1. To find u1 and u2, we need to solve the equations u1'y1 + u2'y2 = 0 and u1'y1' + u2'y2' = g(t). Doing so, and using the Wronskian, we obtain u1' = -y2g/W(y1, y2) and u2' = y1g/W(y1, y2). Thus Y = -y1∫(y2g/W)dt + y2∫(y1g/W)dt.
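The summary formulas translate directly into SymPy. For the illustrative problem y'' + y = t (an assumed forcing term, chosen so the integrals are simple), the fundamental pair is cos t, sin t:

```python
import sympy as sp

t = sp.symbols('t')
y1, y2, g = sp.cos(t), sp.sin(t), t                       # fundamental pair, forcing
W = sp.simplify(y1*sp.diff(y2, t) - y2*sp.diff(y1, t))    # Wronskian = 1
u1 = sp.integrate(-y2 * g / W, t)                         # u1' = -y2*g/W
u2 = sp.integrate( y1 * g / W, t)                         # u2' =  y1*g/W
Y = sp.simplify(u1*y1 + u2*y2)                            # particular solution
print(Y)                                                  # t
print(sp.simplify(sp.diff(Y, t, 2) + Y - g))              # 0: Y solves y'' + y = t
```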

111 Theorem 3.7.1 Consider the equations
If the functions p, q and g are continuous on an open interval I, and if y1 and y2 are fundamental solutions to Eq. (2), then a particular solution of Eq. (1) is and the general solution is

112 Ch 5.1: Review of Power Series
Finding the general solution of a linear differential equation depends on determining a fundamental set of solutions of the homogeneous equation. So far, we have a systematic procedure for constructing fundamental solutions if equation has constant coefficients. For a larger class of equations with variable coefficients, we must search for solutions beyond the familiar elementary functions of calculus. The principal tool we need is the representation of a given function by a power series. Then, similar to the undetermined coefficients method, we assume the solutions have power series representations, and then determine the coefficients so as to satisfy the equation.

113 Convergent Power Series
A power series about the point x0 has the form ∑ a_n(x - x0)^n (summing from n = 0 to ∞) and is said to converge at a point x if the limit of its partial sums exists for that x. Note that the series converges for x = x0. It may converge for all x, or it may converge for some values of x and not others.

114 Taylor Series Suppose that ∑ a_n(x - x0)^n converges to f(x) for |x - x0| < ρ. Then the value of a_n is given by a_n = f^(n)(x0)/n!, and the series is called the Taylor series for f about x = x0. Also, if f has such a series representation, then f is continuous and has derivatives of all orders on the interval of convergence. Further, the derivatives of f can be computed by differentiating the relevant series term by term.

115 Series Equality If two power series are equal, that is,
for each x in some open interval with center x0, then an = bn for n = 0, 1, 2, 3,… In particular, if then an = 0 for n = 0, 1, 2, 3,…

116 Shifting Index of Summation
The index of summation in an infinite series is a dummy parameter, just as the integration variable in a definite integral is a dummy variable. Thus it is immaterial which letter is used for the index of summation. Just as we make changes in the variable of integration in a definite integral, we find it convenient to make changes in the index of summation when calculating series solutions of differential equations.

117 Example 4: Shifting Index of Summation
We can verify the equation by letting m = n -1 in the left series. Then n = 1 corresponds to m = 0, and hence Replacing the dummy index m with n, we obtain as desired.
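Since the displayed sums were lost in transcription, the standard index shift the example describes can be written out in LaTeX (the series shown is a generic one, assumed for illustration):

```latex
\sum_{n=1}^{\infty} a_n x^{n-1}
  \;=\; \sum_{m=0}^{\infty} a_{m+1} x^{m}
  \;=\; \sum_{n=0}^{\infty} a_{n+1} x^{n}
```

The middle step substitutes m = n - 1; the last step simply renames the dummy index m back to n.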

118 Example 5: Rewriting Generic Term
We can write the series as a sum whose generic term involves xn by letting m = n + 3. Then n = 0 corresponds to m = 3, and n + 1 equals m – 2. It follows that Replacing the dummy index m with n, we obtain as desired.

119 Ch 5.2: Series Solutions Near an Ordinary Point, Part I
In Chapter 3, we examined methods of solving second order linear differential equations with constant coefficients. We now consider the case where the coefficients are functions of the independent variable, which we will denote by x. It is sufficient to consider the homogeneous equation since the method for the nonhomogeneous case is similar. We primarily consider the case when P, Q, R are polynomials, and hence also continuous. However, as we will see, the method of solution is also applicable when P, Q and R are general analytic functions.

120 Ordinary Points Assume P, Q, R are polynomials with no common factors, and that we want to solve the equation below in a neighborhood of a point of interest x0: The point x0 is called an ordinary point if P(x0) ≠ 0. Since P is continuous, P(x) ≠ 0 for all x in some interval about x0. For x in this interval, divide the differential equation by P to get y'' + p(x) y' + q(x) y = 0, where p = Q/P and q = R/P. Since p and q are continuous, Theorem says there is a unique solution, given initial conditions y(x0) = y0, y'(x0) = y0'

121 Singular Points Suppose we want to solve the equation below in some neighborhood of a point of interest x0: The point x0 is called a singular point if P(x0) = 0. Since P, Q, R are polynomials with no common factors, it follows that Q(x0) ≠ 0 or R(x0) ≠ 0, or both. Then at least one of p or q becomes unbounded as x → x0, and therefore Theorem does not apply in this situation. Sections 5.4 through 5.8 deal with finding solutions in the neighborhood of a singular point.

122 Series Solutions Near Ordinary Points
In order to solve our equation near an ordinary point x0, we will assume a series representation of the unknown solution function y: As long as we are within the interval of convergence, this representation of y is continuous and has derivatives of all orders.

123 Example 1: Series Solution (1 of 8)
Find a series solution of the equation y'' + y = 0. Here, P(x) = 1, Q(x) = 0, R(x) = 1. Thus every point x is an ordinary point. We will take x0 = 0. Assume a series solution of the form y = Σ a_n x^n. Differentiate term by term to obtain y' = Σ n a_n x^(n-1) and y'' = Σ n(n-1) a_n x^(n-2). Substituting these expressions into the equation, we obtain

124 Example 1: Combining Series (2 of 8)
Our equation is Shifting indices, we obtain

125 Example 1: Recurrence Relation (3 of 8)
Our equation is Σ [(n + 2)(n + 1) a_{n+2} + a_n] x^n = 0. For this equation to be valid for all x, the coefficient of each power of x must be zero, and hence (n + 2)(n + 1) a_{n+2} + a_n = 0, or a_{n+2} = -a_n / [(n + 2)(n + 1)], n = 0, 1, 2, … This type of equation is called a recurrence relation. Next, we find the individual coefficients a0, a1, a2, …

126 Example 1: Even Coefficients (4 of 8)
To find a2, a4, a6, …., we proceed as follows:

127 Example: Odd Coefficients (5 of 8)
To find a3, a5, a7, …., we proceed as follows:

128 Example 1: Solution (6 of 8)
We now have the following information: Thus Note: a0 and a1 are determined by the initial conditions. (Expand series a few terms to see this.)

129 Example 1: Functions Defined by IVP (7 of 8)
Our solution is From Calculus, we know this solution is equivalent to In hindsight, we see that cos x and sin x are indeed fundamental solutions to our original differential equation While we are familiar with the properties of cos x and sin x, many important functions are defined by the initial value problem that they solve.
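As a numerical check (a sketch, not part of the slides): iterating the recurrence a_{n+2} = -a_n / ((n+2)(n+1)) for y'' + y = 0 with a0 = 1, a1 = 0 reproduces cos x, and with a0 = 0, a1 = 1 reproduces sin x.

```python
import math

def series_coeffs(a0, a1, n_terms):
    """Coefficients of the power-series solution of y'' + y = 0,
    generated by the recurrence a_{n+2} = -a_n / ((n+2)(n+1))."""
    a = [0.0] * n_terms
    a[0], a[1] = a0, a1
    for n in range(n_terms - 2):
        a[n + 2] = -a[n] / ((n + 2) * (n + 1))
    return a

def partial_sum(coeffs, x):
    """Evaluate the truncated series at x."""
    return sum(c * x**k for k, c in enumerate(coeffs))

# a0 = 1, a1 = 0 should reproduce cos x; a0 = 0, a1 = 1 gives sin x
cos_coeffs = series_coeffs(1.0, 0.0, 20)
sin_coeffs = series_coeffs(0.0, 1.0, 20)
x = 0.7
print(partial_sum(cos_coeffs, x), math.cos(x))
print(partial_sum(sin_coeffs, x), math.sin(x))
```

With 20 coefficients the two printed values in each pair agree to machine precision near x = 0, illustrating the local nature of the approximation noted on the next slide.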

130 Example 1: Graphs (8 of 8) The graphs below show the partial sum approximations of cos x and sin x. As the number of terms increases, the interval over which the approximation is satisfactory becomes longer, and for each x in this interval the accuracy improves. However, the truncated power series provides only a local approximation in the neighborhood of x = 0.

131 Example 2: Airy’s Equation (1 of 10)
Find a series solution of Airy’s equation about x0 = 0: y'' - xy = 0. Here, P(x) = 1, Q(x) = 0, R(x) = - x. Thus every point x is an ordinary point. We will take x0 = 0. Assuming a series solution and differentiating, we obtain Substituting these expressions into the equation, we obtain

132 Example 2: Combine Series (2 of 10)
Our equation is Shifting the indices, we obtain

133 Example 2: Recurrence Relation (3 of 10)
Our equation is For this equation to be valid for all x, the coefficient of each power of x must be zero; hence a2 = 0 and (n + 3)(n + 2) a_{n+3} = a_n, n = 0, 1, 2, …

134 Example 2: Coefficients (4 of 10)
We have a2 = 0 and For this recurrence relation, note that a2 = a5 = a8 = … = 0. Next, we find the coefficients a0, a3, a6, …. We do this by finding a formula a3n, n = 1, 2, 3, … After that, we find a1, a4, a7, …, by finding a formula for a3n+1, n = 1, 2, 3, …

135 Example 2: Find a3n (5 of 10) Find a3, a6, a9, ….
The general formula for this sequence is

136 Example 2: Find a3n+1 (6 of 10) Find a4, a7, a10, …
The general formula for this sequence is

137 Example 2: Series and Coefficients (7 of 10)
We now have the following information: where a0, a1 are arbitrary, and

138 Example 2: Solution (8 of 10)
Thus our solution is where a0, a1 are arbitrary (determined by initial conditions). Consider the two cases (1) a0 = 1, a1 = 0 ⇒ y(0) = 1, y'(0) = 0 (2) a0 = 0, a1 = 1 ⇒ y(0) = 0, y'(0) = 1 The corresponding solutions y1(x), y2(x) are linearly independent, since W(y1, y2)(0) = 1 ≠ 0, where

139 Example 2: Fundamental Solutions (9 of 10)
Our solution: For the cases (1) a0 = 1, a1 = 0 ⇒ y(0) = 1, y'(0) = 0 (2) a0 = 0, a1 = 1 ⇒ y(0) = 0, y'(0) = 1, the corresponding solutions y1(x), y2(x) are linearly independent, and thus are fundamental solutions for Airy’s equation, with general solution y (x) = c1 y1(x) + c2 y2(x)

140 Example 2: Graphs (10 of 10) Thus given the initial conditions
y(0) = 1, y'(0) = 0 and y(0) = 0, y'(0) = 1 the solutions are, respectively, The graphs of y1 and y2 are given below. Note the approximate intervals of accuracy for each partial sum
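The Airy recurrence can also be checked symbolically (a sketch, assuming the recurrence a2 = 0, a_{n+3} = a_n / ((n+3)(n+2)) derived above): substituting the truncated series for y1 into y'' - xy leaves only a residual beyond the truncation order.

```python
import sympy as sp

x = sp.symbols('x')
N = 30  # number of coefficients kept

def airy_coeffs(a0, a1):
    """Coefficients for y'' = x*y about x0 = 0:
    a2 = 0 and a_{n+3} = a_n / ((n+3)(n+2))."""
    a = [sp.Integer(0)] * N
    a[0], a[1] = sp.Integer(a0), sp.Integer(a1)
    for n in range(N - 3):
        a[n + 3] = a[n] / ((n + 3) * (n + 2))
    return a

a = airy_coeffs(1, 0)                      # y1: y(0) = 1, y'(0) = 0
y = sum(c * x**k for k, c in enumerate(a))
residual = sp.expand(sp.diff(y, x, 2) - x * y)
# Every residual coefficient below the truncation order cancels
low_order = [residual.coeff(x, k) for k in range(N - 3)]
print(low_order)
```

The printed list is all zeros; only terms of degree near N survive, which is exactly the truncation error of the partial sum.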

141 Example 3: Airy’s Equation (1 of 7)
Find a series solution of Airy’s equation about x0 = 1: y'' - xy = 0. Here, P(x) = 1, Q(x) = 0, R(x) = - x. Thus every point x is an ordinary point. We will take x0 = 1. Assuming a series solution and differentiating, we obtain Substituting these into the ODE and shifting indices, we obtain

142 Example 3: Rewriting Series Equation (2 of 7)
Our equation is The x on the right side can be written as 1 + (x - 1); and thus

143 Example 3: Recurrence Relation (3 of 7)
Thus our equation becomes Thus the recurrence relation is Equating like powers of x -1, we obtain

144 Example 3: Solution (4 of 7)
We now have the following information: and

145 Example 3: Solution and Recursion (5 of 7)
Our solution: The recursion has three terms, and determining a general formula for the coefficients an can be difficult or impossible. However, we can generate as many coefficients as we like, preferably with the help of a computer algebra system.

146 Example 3: Fundamental Solutions (7 of 7)
Our solution: or It can be shown that the solutions y3(x), y4(x) are linearly independent, and thus are fundamental solutions for Airy’s equation, with general solution

147 Ch 5.4: Euler Equations; Regular Singular Points
Recall that for equation if P, Q and R are polynomials having no common factors, then the singular points of the differential equation are the points for which P(x) = 0.

148 Example 1: Bessel and Legendre Equations
Bessel Equation of order : The point x = 0 is a singular point, since P(x) = x2 is zero there. All other points are ordinary points. Legendre Equation: The points x = 1 are singular points, since P(x) = 1- x2 is zero there. All other points are ordinary points.

149 Euler Equations A relatively simple differential equation that has a singular point is the Euler equation x^2 y'' + αx y' + βy = 0, where α, β are constants. Note that x0 = 0 is a singular point. The solution of the Euler equation is typical of the solutions of all differential equations with singular points, and hence we examine Euler equations before discussing the more general problem.

150 Solutions of the Form y = xr
In any interval not containing the origin, the general solution of the Euler equation has the form described below. Suppose x is in (0, ∞), and assume a solution of the form y = x^r. Then y' = r x^(r-1) and y'' = r(r-1) x^(r-2). Substituting these into the differential equation, we obtain x^r [r(r-1) + αr + β] = 0, or r(r-1) + αr + β = 0.

151 Quadratic Equation Thus, after substituting y = x^r into our differential equation, we arrive at x^r [r(r-1) + αr + β] = 0, and hence r(r-1) + αr + β = 0. Let F(r) be defined by F(r) = r(r-1) + αr + β. We now examine the different cases for the roots r1, r2.
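The quadratic F(r) = r(r - 1) + αr + β can be solved symbolically (a sketch; the sample values α = 1, β = -1 are assumed for illustration, not taken from the slides):

```python
import sympy as sp

r = sp.symbols('r')

def euler_roots(alpha, beta):
    """Roots of the indicial quadratic F(r) = r(r-1) + alpha*r + beta
    for the Euler equation x^2 y'' + alpha*x*y' + beta*y = 0."""
    return sp.solve(r * (r - 1) + alpha * r + beta, r)

# Sample: alpha = 1, beta = -1 gives F(r) = r^2 - 1, with roots -1 and 1,
# so on (0, oo) the general solution would be y = c1/x + c2*x
print(euler_roots(1, -1))
```

Changing the sample alpha and beta exercises the three cases (real distinct, equal, complex) discussed on the following slides.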

152 Real, Distinct Roots If F(r) has real roots r1 ≠ r2, then y1 = x^r1 and y2 = x^r2
are solutions to the Euler equation. Note that the Wronskian of y1 and y2 is nonzero on (0, ∞). Thus y1 and y2 form fundamental solutions, and the general solution to our differential equation is y = c1 x^r1 + c2 x^r2, x > 0.

153 Example 1 Consider the equation
Substituting y = x^r into this equation, we obtain the quadratic F(r) = 0. Thus r1 = -1/3, r2 = 1, and our general solution is y = c1 x^(-1/3) + c2 x, x > 0.

154 Equal Roots If F(r) has equal roots r1 = r2, then we have one solution y1 = x^r1.
We could use reduction of order to get a second solution; instead, we will consider an alternative method. Since F(r) has a double root r1, F(r) = (r - r1)^2, and F'(r1) = 0. This suggests differentiating L[x^r] with respect to r and then setting r equal to r1, as follows:

155 Equal Roots Thus in the case of equal roots r1 = r2, we have two solutions y1 = x^r1 and y2 = x^r1 ln x. Their Wronskian is nonzero for x > 0, so y1 and y2 form fundamental solutions, and the general solution to our differential equation is y = (c1 + c2 ln x) x^r1, x > 0.

156 Example 2 Consider the equation Then and
Thus r1 = r2 = -3, and our general solution is y = (c1 + c2 ln x) x^(-3), x > 0.

157 Complex Roots Suppose F(r) has complex roots r1 = λ + iμ, r2 = λ - iμ, with μ ≠ 0. Then Thus x^r is defined for complex r, and it can be shown that the general solution to the differential equation has the form However, these solutions are complex-valued. It can be shown that the following functions are solutions as well:

158 Complex Roots The following functions are solutions to our equation: y1 = x^λ cos(μ ln x) and y2 = x^λ sin(μ ln x), x > 0.
Using the Wronskian, it can be shown that y1 and y2 form fundamental solutions, and thus the general solution to our differential equation can be written as y = x^λ [c1 cos(μ ln x) + c2 sin(μ ln x)], x > 0.

159 Example 3 Consider the equation Then and
Thus r1 = -2i, r2 = 2i (so λ = 0, μ = 2), and our general solution is y = c1 cos(2 ln x) + c2 sin(2 ln x), x > 0.

160 Solution Behavior Recall that the solution to the Euler equation
depends on the roots: where r1 = λ + iμ, r2 = λ - iμ. The qualitative behavior of these solutions near the singular point x = 0 depends on the nature of r1 and r2. Discuss. Also, we obtain similar forms of solution when x < 0. Overall results are summarized on the next slide.

161 General Solution of the Euler Equation
The general solution to the Euler equation in any interval not containing the origin is determined by the roots r1 and r2 of the equation F(r) = r(r - 1) + αr + β = 0 according to the following cases: where r1 = λ + iμ, r2 = λ - iμ.
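The table of cases lost from this slide is the standard one for the Euler equation; in LaTeX:

```latex
y = \begin{cases}
c_1 |x|^{r_1} + c_2 |x|^{r_2}, & r_1, r_2 \text{ real and distinct},\\
\left(c_1 + c_2 \ln |x|\right)|x|^{r_1}, & r_1 = r_2,\\
|x|^{\lambda}\left[c_1 \cos(\mu \ln |x|) + c_2 \sin(\mu \ln |x|)\right], & r_{1,2} = \lambda \pm i\mu,\ \mu \neq 0.
\end{cases}
```

The absolute values cover both x > 0 and x < 0, matching the remark on the preceding slide about similar forms of solution when x < 0.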

162 Shifted Equations The solutions to the Euler equation (x - x0)^2 y'' + α(x - x0) y' + βy = 0
are similar to the ones given in Theorem 5.5.1, with x replaced by x - x0: where r1 = λ + iμ, r2 = λ - iμ.

163 Example 5: Initial Value Problem (1 of 4)
Consider the initial value problem Then and Using the quadratic formula on r^2 + 2r + 5, we obtain r = -1 ± 2i.

164 Example 5: General Solution (2 of 4)
Thus λ = -1, μ = 2, and the general solution of our initial value problem is y = |x|^(-1) [c1 cos(2 ln|x|) + c2 sin(2 ln|x|)] = x^(-1) [c1 cos(2 ln x) + c2 sin(2 ln x)], where the last equality follows from the requirement that the domain of the solution include the initial point x = 1. To see this, recall that our initial value problem is

165 Example 5: Initial Conditions (3 of 4)
Our general solution is Recall our initial value problem: Using the initial conditions and calculus, we obtain Thus our solution to the initial value problem is

166 Example 5: Graph of Solution (4 of 4)
Graphed below is the solution of our initial value problem Note that as x approaches the singular point x = 0, the solution oscillates and becomes unbounded.

167 Solution Behavior and Singular Points
If we attempt to use the methods of the preceding section to solve the differential equation in a neighborhood of a singular point x0, we will find that these methods fail. Instead, we must use a more general series expansion. A differential equation may only have a few singular points, but solution behavior near these singular points is important. For example, solutions often become unbounded or experience rapid changes in magnitude near a singular point. Also, geometric singularities in a physical problem, such as corners or sharp edges, may lead to singular points in the corresponding differential equation.

168 Solution Behavior Near Singular Points
Thus without more information about Q/P and R/P in the neighborhood of a singular point x0, it may be impossible to describe solution behavior near x0.

169 Example 1 Consider the following equation
which has a singular point at x = 0. It can be shown by direct substitution that the following functions are linearly independent solutions, for x  0: Thus, in any interval not containing the origin, the general solution is y(x) = c1x2 + c2 x -1. Note that y = c1 x2 is bounded and analytic at the origin, even though Theorem is not applicable. However, y = c2 x -1 does not have a Taylor series expansion about x = 0, and the methods of Section 5.2 would fail here.

170 Example 2 Consider the following equation
which has a singular point at x = 0. It can be shown the two functions below are linearly independent solutions and are analytic at x = 0: Hence the general solution is If arbitrary initial conditions were specified at x = 0, then it would be impossible to determine both c1 and c2.

171 Example 3 Consider the following equation
which has a singular point at x = 0. It can be shown that the following functions are linearly independent solutions, neither of which are analytic at x = 0: Thus, in any interval not containing the origin, the general solution is y(x) = c1x -1 + c2 x -3. It follows that every solution is unbounded near the origin.

172 Classifying Singular Points
Our goal is to extend the method already developed for solving near an ordinary point so that it applies to the neighborhood of a singular point x0. To do so, we restrict ourselves to cases in which singularities in Q/P and R/P at x0 are not too severe, that is, to what might be called “weak singularities.” It turns out that the appropriate conditions to distinguish weak singularities are

173 Regular Singular Points
Consider the differential equation If P, Q and R are polynomials, then a regular singular point x0 is a singular point for which the limits of (x - x0) Q(x)/P(x) and (x - x0)^2 R(x)/P(x), as x → x0, are both finite. Any other singular point x0 is an irregular singular point, which will not be discussed in this course.

174 Example 4: Bessel Equation
Consider the Bessel equation of order ν The point x = 0 is a regular singular point, since both of the following limits are finite:

175 Example 5: Legendre Equation
Consider the Legendre equation The point x = 1 is a regular singular point, since both of the following limits are finite: Similarly, it can be shown that x = -1 is a regular singular point.

176 Example 6 Consider the equation
The point x = 0 is a regular singular point: The point x = 2, however, is an irregular singular point, since the following limit does not exist:

177 Ch 5.5: Series Solutions Near a Regular Singular Point, Part I
We now consider solving the general second order linear equation in the neighborhood of a regular singular point x0. For convenience, we will take x0 = 0. The point x0 = 0 is a regular singular point of iff

178 Transforming Differential Equation
Our differential equation has the form Dividing by P(x) and multiplying by x2, we obtain Substituting in the power series representations of p and q, we obtain

179 Comparison with Euler Equations
Our differential equation now has the form Note that if then our differential equation reduces to the Euler Equation In any case, our equation is similar to an Euler Equation but with power series coefficients. Thus our solution method: assume solutions have the form

180 Example 1: Regular Singular Point (1 of 13)
Consider the differential equation 2x^2 y'' - x y' + (1 + x) y = 0. This equation can be rewritten as y'' - y'/(2x) + (1 + x) y/(2x^2) = 0. Since the coefficients are polynomials, it follows that x = 0 is a regular singular point, since both limits below are finite:

181 Example 1: Euler Equation (2 of 13)
Now xp(x) = -1/2 and x^2 q(x) = (1 + x)/2, and thus for x0 = 0 it follows that p0 = lim x p(x) = -1/2 and q0 = lim x^2 q(x) = 1/2. Thus the corresponding Euler Equation is 2x^2 y'' - x y' + y = 0. As in Section 5.4, we obtain the roots r1 = 1, r2 = 1/2. We will refer to this result later.

182 Example 1: Differential Equation (3 of 13)
For our differential equation, we assume a solution of the form By substitution, our differential equation becomes or

183 Example 1: Combining Series (4 of 13)
Our equation can next be written as a single sum in powers of x. It follows that the coefficient of x^r yields the indicial equation 2r(r - 1) - r + 1 = 0, and the coefficients of the higher powers x^(r+n) yield a recurrence relation.

184 Example 1: Indicial Equation (5 of 13)
From the previous slide, we have The equation is called the indicial equation, and was obtained earlier when we examined the corresponding Euler Equation. The roots r1 = 1, r2 = ½, of the indicial equation are called the exponents of the singularity, for regular singular point x = 0. The exponents of the singularity determine the qualitative behavior of solution in neighborhood of regular singular point.

185 Example 1: Recursion Relation (6 of 13)
Recall that F(r) = 2r^2 - 3r + 1. We now work with the coefficient on x^(r+n): [2(r + n)(r + n - 1) - (r + n) + 1] a_n + a_{n-1} = 0. It follows that a_n = -a_{n-1} / F(r + n), n ≥ 1.

186 Example 1: First Root (7 of 13)
We have Starting with r1 = 1, this recursion becomes Thus

187 Example 1: First Solution (8 of 13)
Thus we have an expression for the n-th term: Hence for x > 0, one solution to our differential equation is

188 Example 1: Second Root (10 of 13)
Recall that When r2 = 1/2, this recursion becomes Thus

189 Example 1: Second Solution (11 of 13)
Thus we have an expression for the n-th term: Hence for x > 0, a second solution to our equation is

190 Example 1: General Solution (13 of 13)
The two solutions to our differential equation are Since the leading terms of y1 and y2 are x and x1/2, respectively, it follows that y1 and y2 are linearly independent, and hence form a fundamental set of solutions for differential equation. Therefore the general solution of the differential equation is where y1 and y2 are as given above.

191 Shifted Expansions For the analysis given in this section, we focused on x = 0 as the regular singular point. In the more general case of a singular point at x = x0, our series solution will have the form y = (x - x0)^r Σ a_n (x - x0)^n.

192 Ch 6.1: Definition of Laplace Transform
Many practical engineering problems involve mechanical or electrical systems acted upon by discontinuous or impulsive forcing terms. For such problems the methods described in Chapter 3 are difficult to apply. In this chapter we use the Laplace transform to convert a problem for an unknown function f into a simpler problem for F, solve for F, and then recover f from its transform F. Given a known function K(s,t), an integral transform of a function f is a relation of the form F(s) = ∫ K(s, t) f (t) dt, taken over a fixed interval in t.

193 The Laplace Transform Let f be a function defined for t ≥ 0 that satisfies certain conditions to be named later. The Laplace Transform of f is defined as L{f (t)} = F(s) = ∫_0^∞ e^(-st) f (t) dt. Thus the kernel function is K(s,t) = e^(-st). Since solutions of linear differential equations with constant coefficients are based on the exponential function, the Laplace transform is particularly useful for such equations. Note that the Laplace Transform is defined by an improper integral, and thus must be checked for convergence. On the next few slides, we review examples of improper integrals and piecewise continuous functions.
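The definition can be exercised with sympy's `laplace_transform` (a sketch; the positivity assumptions on the symbols are mine):

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

# L{f}(s) = integral_0^oo e^(-st) f(t) dt; noconds=True drops the
# convergence conditions sympy would otherwise return alongside F(s)
F1 = sp.laplace_transform(1, t, s, noconds=True)           # transform of 1
F2 = sp.laplace_transform(sp.exp(a * t), t, s, noconds=True)   # of e^(at)
F3 = sp.laplace_transform(sp.sin(a * t), t, s, noconds=True)   # of sin(at)
print(F1, F2, F3)  # matching 1/s, 1/(s - a), a/(s^2 + a^2)
```

These are exactly the transforms derived by hand in the examples that follow.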

194 Example 1 Consider the following improper integral.
We can evaluate this integral as follows: Note that if s = 0, then est = 1. Thus the following two cases hold:

195 Example 2 Consider the following improper integral.
We can evaluate this integral using integration by parts: Since this limit diverges, so does the original integral.

196 Piecewise Continuous Functions
A function f is piecewise continuous on an interval [a, b] if this interval can be partitioned by a finite number of points a = t0 < t1 < … < tn = b such that (1) f is continuous on each open subinterval (tk, tk+1), and (2) f approaches a finite limit as the endpoints of each subinterval are approached from within the subinterval. In other words, f is piecewise continuous on [a, b] if it is continuous there except for a finite number of jump discontinuities.

197 Example 3 Consider the following piecewise-defined function f.
From this definition of f, and from the graph of f below, we see that f is piecewise continuous on [0, 3].

198 Example 4 Consider the following piecewise-defined function f.
From this definition of f, and from the graph of f below, we see that f is not piecewise continuous on [0, 3].

199 Theorem 6.1.2 Suppose that f is a function for which the following hold: (1) f is piecewise continuous on [0, b] for all b > 0. (2) | f(t) | ≤ K e^(at) when t ≥ M, for constants a, K, M, with K, M > 0. Then the Laplace Transform of f exists for s > a. Note: A function f that satisfies the conditions specified above is said to have exponential order as t → ∞.

200 Example 5 Let f (t) = 1 for t ≥ 0. Then the Laplace transform F(s) of f is: F(s) = ∫_0^∞ e^(-st) dt = 1/s, s > 0.

201 Example 6 Let f (t) = e^(at) for t ≥ 0. Then the Laplace transform F(s) of f is: F(s) = ∫_0^∞ e^(-(s-a)t) dt = 1/(s - a), s > a.

202 Example 7 Let f (t) = sin(at) for t ≥ 0. Using integration by parts twice, the Laplace transform F(s) of f is found as follows: F(s) = ∫_0^∞ e^(-st) sin(at) dt = a/(s^2 + a^2), s > 0.

203 Linearity of the Laplace Transform
Suppose f and g are functions whose Laplace transforms exist for s > a1 and s > a2, respectively. Then, for s greater than the maximum of a1 and a2, the Laplace transform of c1 f (t) + c2 g(t) exists. That is, L{c1 f (t) + c2 g(t)} = c1 L{f (t)} + c2 L{g(t)} = c1 F(s) + c2 G(s), with F(s) = L{f (t)} and G(s) = L{g(t)}.

204 Example 8 Let f (t) = 5e-2t - 3sin(4t) for t  0.
Then by linearity of the Laplace transform, and using results of previous examples, the Laplace transform F(s) of f is: F(s) = 5/(s + 2) - 12/(s^2 + 16).
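Example 8 can be verified symbolically (a sketch, relying on the standard transforms of e^(at) and sin(at)):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

f = 5 * sp.exp(-2 * t) - 3 * sp.sin(4 * t)
F = sp.laplace_transform(f, t, s, noconds=True)
# Linearity predicts F(s) = 5/(s + 2) - 3 * 4/(s^2 + 16)
diff = sp.simplify(F - (5 / (s + 2) - 12 / (s**2 + 16)))
print(diff)  # 0
```

Sympy applies the transform term by term, which is precisely the linearity property stated on the previous slide.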

205 Ch 6.2: Solution of Initial Value Problems
The Laplace transform is named for the French mathematician Laplace, who studied this transform in 1782. The techniques described in this chapter were developed primarily by Oliver Heaviside ( ), an English electrical engineer. In this section we see how the Laplace transform can be used to solve initial value problems for linear differential equations with constant coefficients. The Laplace transform is useful in solving these differential equations because the transform of f ' is related in a simple way to the transform of f, as stated in Theorem

206 Theorem 6.2.1 Suppose that f is a function for which the following hold: (1) f is continuous and f ' is piecewise continuous on [0, b] for all b > 0. (2) | f(t) | ≤ K e^(at) when t ≥ M, for constants a, K, M, with K, M > 0. Then the Laplace Transform of f ' exists for s > a, with L{f '(t)} = s L{f (t)} - f (0). Proof (outline): For f and f ' continuous on [0, b], integration by parts gives ∫_0^b e^(-st) f '(t) dt = e^(-st) f (t) |_0^b + s ∫_0^b e^(-st) f (t) dt, and letting b → ∞ yields the result. Similarly for f ' piecewise continuous on [0, b], see text.

207 The Laplace Transform of f '
Thus if f and f ' satisfy the hypotheses of Theorem 6.2.1, then L{f '(t)} = s L{f (t)} - f (0). Now suppose f ' and f '' satisfy the conditions specified for f and f ' of Theorem 6.2.1. We then obtain L{f ''(t)} = s L{f '(t)} - f '(0) = s^2 L{f (t)} - s f (0) - f '(0). Similarly, we can derive an expression for L{f (n)}, provided f and its derivatives satisfy suitable conditions. This result is given in Corollary 6.2.2

208 Corollary 6.2.2 Suppose that f is a function for which the following hold: (1) f , f ', f '' ,…, f (n-1) are continuous, and f (n) piecewise continuous, on [0, b] for all b > 0. (2) | f(t) | ≤ K e^(at), | f '(t) | ≤ K e^(at), …, | f (n-1)(t) | ≤ K e^(at) for t ≥ M, for constants a, K, M, with K, M > 0. Then the Laplace Transform of f (n) exists for s > a, with L{f (n)(t)} = s^n L{f (t)} - s^(n-1) f (0) - … - s f (n-2)(0) - f (n-1)(0)
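Theorem 6.2.1 can be spot-checked with sympy for a concrete function (a sketch; the choice f = cos t is mine):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.cos(t)

F = sp.laplace_transform(f, t, s, noconds=True)              # s/(s^2 + 1)
Fp = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)

# Theorem 6.2.1: L{f'}(s) = s*F(s) - f(0)
check = sp.simplify(Fp - (s * F - f.subs(t, 0)))
print(check)  # 0
```

Here f(0) = 1, and both sides reduce to -1/(s^2 + 1), as the theorem predicts.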

209 Example 1: Chapter 3 Method (1 of 4)
Consider the initial value problem Recall from Section 3.1: the characteristic equation is r^2 + 5r + 6 = (r + 2)(r + 3) = 0. Thus r1 = -2 and r2 = -3, and the general solution has the form y = c1 e^(-2t) + c2 e^(-3t). Using initial conditions: Thus We now solve this problem using Laplace Transforms.

210 Example 1: Laplace Transform Method (2 of 4)
Assume that our IVP has a solution y = φ(t) and that φ'(t) and φ''(t) satisfy the conditions of Corollary 6.2.2. Then and hence Letting Y(s) = L{y}, we have Substituting in the initial conditions, we obtain Thus

211 Example 1: Partial Fractions (3 of 4)
Using partial fraction decomposition, Y(s) can be rewritten: Thus

212 Example 1: Solution (4 of 4)
Recall from Section 6.1: Thus Recalling Y(s) = L{y}, we have and hence

213 General Laplace Transform Method
Consider the constant coefficient equation Assume that this equation has a solution y = φ(t), and that φ'(t) and φ''(t) satisfy the conditions of Corollary 6.2.2. Then If we let Y(s) = L{y} and F(s) = L{ f }, then

214 Algebraic Problem Thus the differential equation has been transformed into the algebraic equation for Y(s), from which we seek y = φ(t) such that L{φ(t)} = Y(s). Note that we do not need to solve the homogeneous and nonhomogeneous equations separately, nor do we have a separate step for using the initial conditions to determine the values of the coefficients in the general solution.

215 Characteristic Polynomial
Using the Laplace transform, our initial value problem becomes The polynomial in the denominator is the characteristic polynomial associated with the differential equation. The partial fraction expansion of Y(s) used to determine φ requires us to find the roots of the characteristic equation. For higher order equations, this may be difficult, especially if the roots are irrational or complex.

216 Inverse Problem The main difficulty in using the Laplace transform method is determining the function y = φ(t) such that L{φ(t)} = Y(s). This is an inverse problem, in which we try to find φ such that φ(t) = L^(-1){Y(s)}. There is a general formula for L^(-1), but it requires knowledge of the theory of functions of a complex variable, and we do not consider it here. It can be shown that if f is continuous with L{f(t)} = F(s), then f is the unique continuous function with f (t) = L^(-1){F(s)}. Table 6.2.1 in the text lists many of the functions and their transforms that are encountered in this chapter.
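The partial-fraction workflow used in the inverse-transform examples that follow can be sketched with sympy (the rational function below is an illustrative choice of mine, not one of the slides' elided examples):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

Y = (3 * s + 7) / (s**2 + 5 * s + 6)   # illustrative transform
print(sp.apart(Y, s))                  # partial fractions: 1/(s+2) and 2/(s+3)
y = sp.inverse_laplace_transform(Y, s, t)
print(sp.simplify(y))                  # combines to exp(-2t) + 2*exp(-3t)
```

`apart` performs the partial fraction expansion by hand-calculation rules, and each simple pole 1/(s + a) inverts to e^(-at) via the table entries.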

217 Linearity of the Inverse Transform
Frequently a Laplace transform F(s) can be expressed as F(s) = F1(s) + F2(s) + … + Fn(s). Let f1(t) = L^(-1){F1(s)}, …, fn(t) = L^(-1){Fn(s)}. Then the function f (t) = f1(t) + … + fn(t) has the Laplace transform F(s), since L is linear. By the uniqueness result of the previous slide, no other continuous function f has the same transform F(s). Thus L^(-1) is a linear operator with L^(-1){F(s)} = f1(t) + … + fn(t).

218 Example 2 Find the inverse Laplace Transform of the given function.
To find y(t) such that y(t) = L-1{Y(s)}, we first rewrite Y(s): Using Table 6.2.1, Thus

219 Example 3 Find the inverse Laplace Transform of the given function.
To find y(t) such that y(t) = L-1{Y(s)}, we first rewrite Y(s): Using Table 6.2.1, Thus

220 Example 4 Find the inverse Laplace Transform of the given function.
To find y(t) such that y(t) = L-1{Y(s)}, we first rewrite Y(s): Using Table 6.2.1, Thus

221 Example 5 Find the inverse Laplace Transform of the given function.
To find y(t) such that y(t) = L-1{Y(s)}, we first rewrite Y(s): Using Table 6.2.1, Thus

222 Example 6 Find the inverse Laplace Transform of the given function.
To find y(t) such that y(t) = L-1{Y(s)}, we first rewrite Y(s): Using Table 6.2.1, Thus

223 Example 8 (or see translated function in 6.3)
Find the inverse Laplace Transform of the given function. To find y(t) such that y(t) = L-1{Y(s)}, we first rewrite Y(s): Using Table 6.2.1, Thus

224 Example 9 For the function Y(s) below, we find y(t) = L-1{Y(s)} by using a partial fraction expansion, as follows.

225 Example 10 (or see translated function in 6.3)
For the function Y(s) below, we find y(t) = L-1{Y(s)} by completing the square in the denominator and rearranging the numerator, as follows. Using Table 6.1, we obtain

226 Example 11: Initial Value Problem (1 of 2)
Consider the initial value problem Taking the Laplace transform of the differential equation, and assuming the conditions of Corollary are met, we have Letting Y(s) = L{y}, we have Substituting in the initial conditions, we obtain Thus

227 Example 11: Solution (2 of 2)
Completing the square, we obtain Thus Using Table 6.2.1, we have Therefore our solution to the initial value problem is

228 Example 12: Nonhomogeneous Problem (1 of 2)
Consider the initial value problem Taking the Laplace transform of the differential equation, and assuming the conditions of Corollary are met, we have Letting Y(s) = L{y}, we have Substituting in the initial conditions, we obtain Thus

229 Example 12: Solution (2 of 2)
Using partial fractions, Then Solving, we obtain A = 2, B = 5/3, C = 0, and D = -2/3. Thus Hence

230 Ch 6.3: Step Functions Some of the most interesting elementary applications of the Laplace Transform method occur in the solution of linear equations with discontinuous or impulsive forcing functions. In this section, we will assume that all functions considered are piecewise continuous and of exponential order, so that their Laplace Transforms all exist, for s large enough.

231 Step Function definition
Let c ≥ 0. The unit step function, or Heaviside function, is defined by uc(t) = 0 for t < c and uc(t) = 1 for t ≥ c. A negative step can be represented by y = 1 - uc(t).

232 Example 1 Sketch the graph of
Solution: Recall that uc(t) is defined by Thus and hence the graph of h(t) is a rectangular pulse.

233 Laplace Transform of Step Function
The Laplace Transform of uc(t) is L{uc(t)} = ∫_c^∞ e^(-st) dt = e^(-cs)/s, s > 0.
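The step-function transform checks out in sympy, where uc(t) is `Heaviside(t - c)` (a sketch; the positivity assumptions are mine):

```python
import sympy as sp

t, s, c = sp.symbols('t s c', positive=True)

# u_c(t) is Heaviside(t - c); its transform should be exp(-c*s)/s
U = sp.laplace_transform(sp.Heaviside(t - c), t, s, noconds=True)
print(sp.simplify(U - sp.exp(-c * s) / s))  # 0
```

The factor e^(-cs) appearing here is the same exponential that shows up in the translation theorem on the next slides.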

234 Translated Functions Given a function f (t) defined for t  0, we will often want to consider the related function g(t) = uc(t) f (t - c): Thus g represents a translation of f a distance c in the positive t direction. In the figure below, the graph of f is given on the left, and the graph of g on the right.

235 Example 2 Sketch the graph of
Solution: Recall that uc(t) is defined by Thus and hence the graph of g(t) is a shifted parabola.

236 Theorem 6.3.1 If F(s) = L{f (t)} exists for s > a ≥ 0, and if c > 0, then L{uc(t) f (t - c)} = e^(-cs) L{f (t)} = e^(-cs) F(s). Conversely, if f (t) = L^(-1){F(s)}, then uc(t) f (t - c) = L^(-1){e^(-cs) F(s)}. Thus the translation of f (t) a distance c in the positive t direction corresponds to a multiplication of F(s) by e^(-cs).

237 Theorem 6.3.1: Proof Outline
We need to show Using the definition of the Laplace Transform, we have

238 Example 3 Find the Laplace transform of Solution: Note that Thus

239 Example 4 Find L{ f (t)}, where f is defined by
Note that f (t) = sin(t) + uπ/4(t) cos(t - π/4), and

240 Example 5 Find L-1{F(s)}, where Solution:

241 Theorem 6.3.2 If F(s) = L{f (t)} exists for s > a ≥ 0, and if c is a constant, then L{e^(ct) f (t)} = F(s - c), s > a + c. Conversely, if f (t) = L^(-1){F(s)}, then e^(ct) f (t) = L^(-1){F(s - c)}. Thus multiplication of f (t) by e^(ct) results in translating F(s) a distance c in the positive s direction, and conversely. Proof Outline:

242 Example 4 Find the inverse transform of
To solve, we first complete the square: Since it follows that

243 Ch 6.4: Differential Equations with Discontinuous Forcing Functions
In this section, we focus on examples of nonhomogeneous initial value problems in which the forcing function is discontinuous.

244 Example 1: Initial Value Problem (1 of 12)
Find the solution to the initial value problem Such an initial value problem might model the response of a damped oscillator subject to g(t), or current in a circuit for a unit voltage pulse.

245 Example 1: Laplace Transform (2 of 12)
Assume the conditions of Corollary are met. Then or Letting Y(s) = L{y}, Substituting in the initial conditions, we obtain Thus

246 Example 1: Factoring Y(s) (3 of 12)
We have where If we let h(t) = L-1{H(s)}, then by Theorem

247 Example 1: Partial Fractions (4 of 12)
Thus we examine H(s), as follows. This partial fraction expansion yields the equations Thus

248 Example 1: Completing the Square (5 of 12)

249 Example 1: Solution (6 of 12)
Thus and hence For h(t) as given above, and recalling our previous results, the solution to the initial value problem is then

250 Example 1: Solution Graph (7 of 12)
Thus the solution to the initial value problem is The graph of this solution is given below.

251 Example 2: Initial Value Problem (1 of 12)
Find the solution to the initial value problem The graph of forcing function g(t) is given on right, and is known as ramp loading.

252 Example 2: Laplace Transform (2 of 12)
Assume that this ODE has a solution y = φ(t) and that φ'(t) and φ''(t) satisfy the conditions of Corollary 6.2.2. Then or Letting Y(s) = L{y}, and substituting in initial conditions, Thus

253 Example 2: Factoring Y(s) (3 of 12)
We have where If we let h(t) = L-1{H(s)}, then by Theorem

254 Example 2: Partial Fractions (4 of 12)
Thus we examine H(s), as follows. This partial fraction expansion yields the equations Thus

255 Example 2: Solution (5 of 12)
Thus and hence For h(t) as given above, and recalling our previous results, the solution to the initial value problem is then

256 Example 2: Graph of Solution (6 of 12)
Thus the solution to the initial value problem is The graph of this solution is given below.

257 Ch 7.1: Introduction to Systems of First Order Linear Equations
A system of simultaneous first order ordinary differential equations has the general form where each xk is a function of t. If each Fk is a linear function of x1, x2, …, xn, then the system of equations is said to be linear, otherwise it is nonlinear. Systems of higher order differential equations can similarly be defined.
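The displayed general form is missing from the transcript; presumably it is the standard one:

```latex
x_1' = F_1(t, x_1, x_2, \ldots, x_n),\qquad
x_2' = F_2(t, x_1, x_2, \ldots, x_n),\qquad
\ldots,\qquad
x_n' = F_n(t, x_1, x_2, \ldots, x_n).
```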

258 Example 1 The motion of a spring-mass system from Section 3.8 was described by the equation This second order equation can be converted into a system of first order equations by letting x1 = u and x2 = u'. Thus or
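The conversion above can be checked numerically. The sketch below is an illustration, not part of the original slides: it builds the first order system for m u'' + γ u' + k u = 0 via x1 = u, x2 = u', and integrates it with a crude forward-Euler step; for the undamped unit oscillator, x1 should track cos t.

```python
import math

def spring_mass_system(m, gamma, k):
    """Right-hand side of the first order system x1' = x2,
    x2' = -(k*x1 + gamma*x2)/m, obtained from m u'' + gamma u' + k u = 0
    by setting x1 = u and x2 = u'."""
    def f(t, x):
        x1, x2 = x
        return [x2, -(k * x1 + gamma * x2) / m]
    return f

def euler(f, x0, t0, t1, n):
    """Crude forward-Euler integrator, for illustration only."""
    h = (t1 - t0) / n
    t, x = t0, list(x0)
    for _ in range(n):
        dx = f(t, x)
        x = [xi + h * di for xi, di in zip(x, dx)]
        t += h
    return x

# Undamped unit oscillator: u'' + u = 0, u(0) = 1, u'(0) = 0, so u(t) = cos t.
f = spring_mass_system(m=1.0, gamma=0.0, k=1.0)
u1, v1 = euler(f, [1.0, 0.0], 0.0, 1.0, 100_000)
```

A small step size is needed because forward Euler is only first-order accurate; a production code would use a higher-order method instead.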

259 Nth Order ODEs and Linear 1st Order Systems
The method illustrated in previous example can be used to transform an arbitrary nth order equation into a system of n first order equations, first by defining Then
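Spelled out (the displayed equations are missing from the transcript; this is the standard substitution), the definitions are:

```latex
x_1 = y,\quad x_2 = y',\quad \ldots,\quad x_n = y^{(n-1)},
```

so that for \(y^{(n)} = F(t, y, y', \ldots, y^{(n-1)})\) the resulting first order system is

```latex
x_1' = x_2,\quad x_2' = x_3,\quad \ldots,\quad x_{n-1}' = x_n,\quad
x_n' = F(t, x_1, x_2, \ldots, x_n).
```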

260 Solutions of First Order Systems
A system of simultaneous first order ordinary differential equations has the general form It has a solution on I: α < t < β if there exist n functions that are differentiable on I and satisfy the system of equations at all points t in I. Initial conditions may also be prescribed to give an IVP:

261 Example 2 The equation can be written as a system of first order equations by letting x1 = y and x2 = y'. Thus A solution to this system is which is a parametric description for the unit circle.

262 Theorem 7.1.1 Suppose F1,…, Fn and ∂F1/∂x1,…, ∂F1/∂xn,…, ∂Fn/∂x1,…, ∂Fn/∂xn are continuous in the region R of t x1 x2…xn-space defined by α < t < β, α1 < x1 < β1, …, αn < xn < βn, and let the point be contained in R. Then in some interval (t0 - h, t0 + h) there exists a unique solution that satisfies the IVP.

263 Linear Systems If each Fk is a linear function of x1, x2, …, xn, then the system of equations has the general form If each of the gk(t) is zero on I, then the system is homogeneous, otherwise it is nonhomogeneous.

264 Theorem 7.1.2 Suppose p11, p12,…, pnn, g1,…, gn are continuous on an interval I: α < t < β with t0 in I, and let prescribe the initial conditions. Then there exists a unique solution that satisfies the IVP, and exists throughout I.

265 Ch 7.4: Basic Theory of Systems of First Order Linear Equations
The general theory of a system of n first order linear equations parallels that of a single nth order linear equation. This system can be written as x' = P(t)x + g(t), where

266 Vector Solutions of an ODE System
A vector x = (t) is a solution of x' = P(t)x + g(t) if the components of x, satisfy the system of equations on I:  < t < . For comparison, recall that x' = P(t)x + g(t) represents our system of equations Assuming P and g continuous on I, such a solution exists by Theorem

267 Example 1 Consider the homogeneous equation x' = P(t)x below, with the solutions x as indicated. To see that x is a solution, substitute x into the equation and perform the indicated operations:

268 Homogeneous Case; Vector Function Notation
As in Chapters 3 and 4, we first examine the general homogeneous equation x' = P(t)x. Also, the following notation for the vector functions x(1), x(2),…, x(k),… will be used:

269 Theorem 7.4.1 If the vector functions x(1) and x(2) are solutions of the system x' = P(t)x, then the linear combination c1x(1) + c2x(2) is also a solution for any constants c1 and c2. Note: By repeatedly applying the result of this theorem, it can be seen that every finite linear combination of solutions x(1), x(2),…, x(k) is itself a solution to x' = P(t)x.

270 Example 2 Consider the homogeneous equation x' = P(t)x below, with the two solutions x(1) and x(2) as indicated. Then x = c1x(1) + c2x(2) is also a solution:

271 Theorem 7.4.2 If x(1), x(2),…, x(n) are linearly independent solutions of the system x' = P(t)x for each point in I: α < t < β, then each solution x = φ(t) can be expressed uniquely in the form If solutions x(1),…, x(n) are linearly independent for each point in I: α < t < β, then they are fundamental solutions on I, and the general solution is given by

272 The Wronskian and Linear Independence
The proof of the preceding theorem uses the fact that if x(1), x(2),…, x(n) are linearly independent on I, then det X(t) ≠ 0 on I, where The Wronskian of x(1),…, x(n) is defined as W[x(1),…, x(n)](t) = det X(t). It follows that W[x(1),…, x(n)](t) ≠ 0 on I iff x(1),…, x(n) are linearly independent for each point in I.
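A quick numerical check of this criterion (the solution vectors below are illustrative, of the form ξert for a 2 x 2 system, and are not taken from the slides' images):

```python
import math

def wronskian_2x2(x1, x2, t):
    """W[x1, x2](t) = det X(t), where the columns of X(t) are x1(t), x2(t)."""
    a, c = x1(t)
    b, d = x2(t)
    return a * d - b * c

# Hypothetical pair of solutions of a 2x2 system x' = Ax:
x1 = lambda t: (math.exp(3 * t), 2 * math.exp(3 * t))
x2 = lambda t: (math.exp(-t), -2 * math.exp(-t))

# W(t) = -4 e^{2t}: nonzero everywhere, so the pair is fundamental.
w0 = wronskian_2x2(x1, x2, 0.0)
```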

273 Theorem 7.4.3 If x(1), x(2),…, x(n) are solutions of the system x' = P(t)x on I: α < t < β, then the Wronskian W[x(1),…, x(n)](t) is either identically zero on I or else is never zero on I. This result enables us to determine whether a given set of solutions x(1), x(2),…, x(n) are fundamental solutions by evaluating W[x(1),…, x(n)](t) at any point t in α < t < β.

274 Theorem 7.4.4 Let x(1), x(2),…, x(n) be solutions of the system x' = P(t)x, α < t < β, that satisfy the initial conditions respectively, where t0 is any point in α < t < β. Then x(1), x(2),…, x(n) are fundamental solutions of x' = P(t)x.

275 Ch 7.5: Homogeneous Linear Systems with Constant Coefficients
We consider here a homogeneous system of n first order linear equations with constant, real coefficients: This system can be written as x' = Ax, where

276 Solving Homogeneous System
To construct a general solution to x' = Ax, assume a solution of the form x = ξert, where the exponent r and the constant vector ξ are to be determined. Substituting x = ξert into x' = Ax, we obtain Thus to solve the homogeneous system of differential equations x' = Ax, we must find the eigenvalues and eigenvectors of A. Therefore x = ξert is a solution of x' = Ax provided that r is an eigenvalue and ξ is an eigenvector of the coefficient matrix A.
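This recipe is easy to carry out numerically. The sketch below uses an illustrative coefficient matrix (an assumption; the matrices in the slides are images not reproduced in this transcript) and checks that each eigenpair satisfies (A - rI)ξ = 0:

```python
import numpy as np

# Illustrative coefficient matrix for x' = Ax (not necessarily the one
# from the slides).
A = np.array([[1.0, 1.0],
              [4.0, 1.0]])

# Eigenpairs of A: each eigenvalue r[k] and eigenvector XI[:, k] give a
# solution x(t) = xi * e^{r t} of x' = Ax.
r, XI = np.linalg.eig(A)

# Verify (A - r I) xi = 0 for each eigenpair.
residuals = [np.linalg.norm(A @ XI[:, k] - r[k] * XI[:, k])
             for k in range(len(r))]
```

Note that `numpy.linalg.eig` returns the eigenvectors as the columns of its second output.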

277 Example 1: Direction Field (1 of 9)
Consider the homogeneous equation x' = Ax below. A direction field for this system is given below. Substituting x = ξert in for x, and rewriting the system as (A-rI)ξ = 0, we obtain

278 Example 1: Eigenvalues (2 of 9)
Our solution has the form x = ξert, where r and ξ are found by solving Recalling that this is an eigenvalue problem, we determine r by solving det(A-rI) = 0: Thus r1 = 3 and r2 = -1.

279 Example 1: First Eigenvector (3 of 9)
Eigenvector for r1 = 3: Solve by row reducing the augmented matrix:

280 Example 1: Second Eigenvector (4 of 9)
Eigenvector for r2 = -1: Solve by row reducing the augmented matrix:

281 Example 1: General Solution (5 of 9)
The corresponding solutions x = ert of x' = Ax are The Wronskian of these two solutions is Thus x(1) and x(2) are fundamental solutions, and the general solution of x' = Ax is

282 Example 2: (1 of 9) Consider the homogeneous equation x' = Ax below.
Substituting x = ert in for x, and rewriting system as (A-rI) = 0, we obtain

283 Example 2: Eigenvalues (2 of 9)
Our solution has the form x = ξert, where r and ξ are found by solving Recalling that this is an eigenvalue problem, we determine r by solving det(A-rI) = 0: Thus r1 = -1 and r2 = -4.

284 Example 2: First Eigenvector (3 of 9)
Eigenvector for r1 = -1: Solve by row reducing the augmented matrix:

285 Example 2: Second Eigenvector (4 of 9)
Eigenvector for r2 = -4: Solve by row reducing the augmented matrix:

286 Example 2: General Solution (5 of 9)
The corresponding solutions x = ξert of x' = Ax are The Wronskian of these two solutions is Thus x(1) and x(2) are fundamental solutions, and the general solution of x' = Ax is

287 Eigenvalues, Eigenvectors and Fundamental Solutions
In general, for an n × n real linear system x' = Ax, the eigenvalues may be real and distinct, may occur in complex conjugate pairs, or may be repeated. If the eigenvalues r1,…, rn are real and different, then there are n corresponding linearly independent eigenvectors ξ(1),…, ξ(n). The associated solutions of x' = Ax are Using the Wronskian, it can be shown that these solutions are linearly independent, and hence form a fundamental set of solutions. Thus the general solution is

288 Hermitian Case: Eigenvalues, Eigenvectors & Fundamental Solutions
If A is an n × n Hermitian matrix (real and symmetric), then all eigenvalues r1,…, rn are real, although some may repeat. In any case, there are n corresponding linearly independent and orthogonal eigenvectors ξ(1),…, ξ(n). The associated solutions of x' = Ax are and form a fundamental set of solutions.
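These two properties (real eigenvalues, orthogonal eigenvectors) are easy to verify numerically. The matrix below is illustrative, not the one from the slides:

```python
import numpy as np

# A real symmetric (hence Hermitian) matrix, chosen for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# numpy.linalg.eigh is specialized for Hermitian matrices: it returns real
# eigenvalues (in ascending order) and orthonormal eigenvectors (columns of V).
r, V = np.linalg.eigh(A)

# V^T V should be the identity, confirming orthonormal eigenvectors.
orthonormality_error = np.linalg.norm(V.T @ V - np.eye(2))
```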

289 Example 3: Hermitian Matrix (1 of 3)
Consider the homogeneous equation x' = Ax below. The eigenvalues were found previously in Ch 7.3, and were: r1 = 2, r2 = -1 and r3 = -1. Corresponding eigenvectors:

290 Example 3: General Solution (2 of 3)
The fundamental solutions are with general solution

291 Ch 7.6: Complex Eigenvalues
We consider again a homogeneous system of n first order linear equations with constant, real coefficients, and thus the system can be written as x' = Ax, where

292 Conjugate Eigenvalues and Eigenvectors
We know that x = ert is a solution of x' = Ax, provided r is an eigenvalue and  is an eigenvector of A. The eigenvalues r1,…, rn are the roots of det(A-rI) = 0, and the corresponding eigenvectors satisfy (A-rI) = 0. If A is real, then the coefficients in the polynomial equation det(A-rI) = 0 are real, and hence any complex eigenvalues must occur in conjugate pairs. Thus if r1 =  + i is an eigenvalue, then so is r2 =  - i. The corresponding eigenvectors (1), (2) are conjugates also. To see this, recall A and I have real entries, and hence

293 Conjugate Solutions It follows from the previous slide that the solutions corresponding to these eigenvalues and eigenvectors are conjugates as well, since

294 Real-Valued Solutions
Thus for complex conjugate eigenvalues r1 and r2, the corresponding solutions x(1) and x(2) are conjugates also. To obtain real-valued solutions, use the real and imaginary parts of either x(1) or x(2). To see this, let ξ(1) = a + ib. Then where are real-valued solutions of x' = Ax, and can be shown to be linearly independent.
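The displayed computation is missing from the transcript; reconstructing it from the definitions above, with r1 = λ + iμ and ξ(1) = a + ib, Euler's formula gives

```latex
x^{(1)}(t) = (a + ib)\,e^{(\lambda + i\mu)t}
           = e^{\lambda t}(a\cos\mu t - b\sin\mu t)
           + i\,e^{\lambda t}(a\sin\mu t + b\cos\mu t),
```

so the real-valued solutions are \(u(t) = e^{\lambda t}(a\cos\mu t - b\sin\mu t)\) and \(v(t) = e^{\lambda t}(a\sin\mu t + b\cos\mu t)\).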

295 General Solution To summarize, suppose r1 = λ + iμ, r2 = λ - iμ, and that r3,…, rn are all real and distinct eigenvalues of A. Let the corresponding eigenvectors be Then the general solution of x' = Ax is where

296 Example 1: (1 of 7) Consider the homogeneous equation x' = Ax below.
Substituting x = ert in for x, and rewriting system as (A-rI) = 0, we obtain

297 Example 1: Complex Eigenvalues (2 of 7)
We determine r by solving det(A-rI) = 0. Now Thus Therefore the eigenvalues are r1 = -1/2 + i and r2 = -1/2 - i.
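A numerical cross-check of these eigenvalues; the matrix below is an assumption chosen to have eigenvalues -1/2 ± i (the slides' matrix is an image not reproduced in the transcript):

```python
import numpy as np

# Illustrative real matrix whose characteristic equation is
# (-1/2 - r)^2 + 1 = 0, giving r = -1/2 +/- i.
A = np.array([[-0.5,  1.0],
              [-1.0, -0.5]])

# For a real matrix, complex eigenvalues come in conjugate pairs.
r = np.linalg.eigvals(A)
```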

298 Example 1: First Eigenvector (3 of 7)
Eigenvector for r1 = -1/2 + i: Solve by row reducing the augmented matrix: Thus

299 Example 1: General Solution (5 of 7)
The corresponding solutions x = ξert of x' = Ax are The Wronskian of these two solutions is Thus u(t) and v(t) are real-valued fundamental solutions of x' = Ax, with general solution x = c1u + c2v.

