Numerical Analysis Lecture 45
Summing up
Non-Linear Equations
Bisection Method (Bolzano), Regula-Falsi Method, Method of Iteration, Newton-Raphson Method, Muller's Method, Graeffe's Root Squaring Method
In the Method of False Position, the first approximation to the root of f (x) = 0 is given by
xn = [ xn-1 f (xn+1) − xn+1 f (xn-1) ] / [ f (xn+1) − f (xn-1) ]   (2.2)
Here f (xn-1) and f (xn+1) are of opposite sign. Successive approximations to the root of f (x) = 0 are given by Eq. (2.2).
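The false-position update can be sketched in Python. This is a minimal illustration, not the lecture's code: the bracketing endpoints are named a and b rather than xn-1 and xn+1, and the test function and tolerance are assumed choices.

```python
def false_position(f, a, b, tol=1e-10, max_iter=100):
    """Regula-Falsi: a and b must bracket a root, i.e. f(a)*f(b) < 0."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = a
    for _ in range(max_iter):
        # Eq. (2.2): where the chord through (a, fa), (b, fb) crosses the x-axis
        x = (a * fb - b * fa) / (fb - fa)
        fx = f(x)
        if abs(fx) < tol:
            return x
        # Keep the sub-interval that still brackets the root.
        if fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
    return x

root = false_position(lambda x: x**3 - x - 2, 1.0, 2.0)
```

Unlike bisection, the interval endpoint on one side may stay fixed, but the iterates still converge to the bracketed root.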
The METHOD OF ITERATION can be applied to find a real root of the equation f (x) = 0 by rewriting it in the form x = φ(x) and computing the successive approximations xn+1 = φ(xn).
N-R Formula: in the Newton-Raphson Method successive approximations x2, x3, …, xn to the root are obtained from the N-R formula
xn+1 = xn − f (xn) / f ′(xn)
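A minimal Python sketch of the N-R formula; the test function x² − 2, its derivative, and the starting guess are illustrative assumptions, not from the lecture.

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until |f(x)| is small."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / df(x)  # the N-R formula
    return x

# Find sqrt(2) as the root of f(x) = x^2 - 2
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Each iteration roughly doubles the number of correct digits (quadratic convergence), provided the starting guess is close enough and f ′ does not vanish near the root.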
Secant Method: successive approximations are computed from
xn+1 = xn − f (xn) (xn − xn-1) / [ f (xn) − f (xn-1) ]
This sequence converges to the root b of f (x) = 0, i.e. f (b) = 0.
The Secant method converges superlinearly, with order (1 + √5)/2 ≈ 1.618: faster than linear convergence but slower than Newton's quadratic convergence.
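The secant iteration replaces the derivative in the N-R formula with a difference quotient through the last two iterates. A sketch, with an assumed test function and starting pair:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: Newton's iteration with f' replaced by a chord slope."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        # Slope of the chord through the two most recent iterates
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

root = secant(lambda x: x * x - 2, 1.0, 2.0)
```

No derivative is needed, which is the method's practical advantage over Newton-Raphson.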
In Muller's Method we obtain a better approximation to the root by fitting a parabola through the three most recent approximations and taking the zero of that parabola closer to the latest approximation.
Systems of Linear Equations
Gaussian Elimination, Gauss-Jordan Elimination, Crout's Reduction, Jacobi's Iteration, Gauss-Seidel Iteration, Relaxation Method, Matrix Inversion
In the Gaussian Elimination method, the solution to the system of equations is obtained in two stages: first, the given system of equations is reduced to an equivalent upper-triangular form using elementary transformations; then the upper-triangular system is solved using the back substitution procedure.
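The two stages can be sketched in Python. Partial pivoting is added here for numerical safety (the lecture summary does not mention it), and the 2 × 2 test system is an illustrative assumption.

```python
def gauss_solve(A, b):
    """Solve Ax = b: forward elimination with partial pivoting,
    then back substitution.  A is a list of rows, b a list."""
    n = len(A)
    # Augmented matrix, so row operations carry b along.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    # Stage 1: reduce to upper-triangular form.
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    # Stage 2: back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

x = gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```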
The Gauss-Jordan method is a variation of the Gaussian method. In this method, the elements above and below the diagonal are made zero simultaneously, so the system is reduced to diagonal form and the solution is read off without back substitution.
In Crout’s Reduction Method the coefficient matrix [A] of the system of equations is decomposed into the product of two matrices [L] and [U], where [L] is a lower-triangular matrix and [U] is an upper-triangular matrix with 1’s on its main diagonal.
For the purpose of illustration, consider a general 3 × 3 matrix [A] decomposed as the product [L][U].
Jacobi's Method is an iterative method: an initial approximate solution to a given system of equations is assumed, and it is improved towards the exact solution in an iterative way.
In Jacobi's method, the (r + 1)th approximation to the above system is given by
xi(r+1) = [ bi − Σ j≠i aij xj(r) ] / aii ,   i = 1, 2, …, n
Here we can observe that no element of x(r+1) replaces an element of x(r) until the entire cycle of computation is complete.
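The Jacobi cycle can be sketched in Python; note that the new vector is built in full before it replaces the old one. The diagonally dominant 2 × 2 test system is an illustrative assumption.

```python
def jacobi(A, b, x0=None, iters=50):
    """Jacobi iteration: each full cycle uses only the previous x^(r)."""
    n = len(A)
    x = x0 if x0 is not None else [0.0] * n
    for _ in range(iters):
        x_new = []
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new.append((b[i] - s) / A[i][i])
        x = x_new  # replace the whole vector only after the cycle ends
    return x

x = jacobi([[4.0, 1.0], [2.0, 5.0]], [9.0, 12.0])
```

Convergence is guaranteed when the coefficient matrix is strictly diagonally dominant, as in this test system.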
In the Gauss-Seidel method, the corresponding elements of x(r+1) replace those of x(r) as soon as they become available. It is also called the method of Successive Displacement.
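In code, the only change from a Jacobi sketch is updating the vector in place, so each row immediately sees the newest components. The test system is again an illustrative assumption.

```python
def gauss_seidel(A, b, x0=None, iters=50):
    """Gauss-Seidel: updated components are used as soon as available."""
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # x[j] for j < i already holds the (r+1)th values here.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

x = gauss_seidel([[4.0, 1.0], [2.0, 5.0]], [9.0, 12.0])
```

For systems where both converge, Gauss-Seidel typically needs fewer cycles than Jacobi because it uses fresher information within each cycle.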
The Relaxation Method is also an iterative method; it is due to Southwell.
Eigenvalue Problems: Power Method, Jacobi's Method
In the Power Method the iteration converges so that the result takes the form [A] v = λ1 v. Here λ1 is the desired largest eigenvalue (in magnitude) and v is the corresponding eigenvector.
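The power iteration can be sketched by repeated multiplication and normalisation; the symmetric 2 × 2 test matrix (dominant eigenvalue (5 + √5)/2) and the normalisation by the largest component are illustrative assumptions.

```python
def power_method(A, iters=200):
    """Estimate the dominant eigenvalue and eigenvector of A."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w, key=abs)          # normalise by the largest component
        v = [wi / lam for wi in w]     # v now has largest component 1
    return lam, v

lam, v = power_method([[2.0, 1.0], [1.0, 3.0]])
```

The error shrinks like (λ2/λ1)^r, so convergence is fast when the dominant eigenvalue is well separated from the rest.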
Interpolation
Finite Difference Operators, Newton's Forward Difference Interpolation Formula, Newton's Backward Difference Interpolation Formula, Lagrange's Interpolation Formula, Divided Differences, Interpolation in Two Dimensions, Cubic Spline Interpolation
Finite Difference Operators: Forward Differences, Backward Differences, Central Differences
Thus ∆yr = yr+1 − yr; similarly, ∇yr = yr − yr-1.
Shift operator, E: E f (x) = f (x + h), and in general En f (x) = f (x + nh).
The inverse operator E-1 is defined as E-1 f (x) = f (x − h). Similarly, E-n f (x) = f (x − nh).
Average operator, μ: μ f (x) = [ f (x + h/2) + f (x − h/2) ] / 2.
Differential operator, D: D f (x) = f ′(x), D2 f (x) = f ″(x), and so on.
Important results: ∆ = E − 1, ∇ = 1 − E-1, δ = E1/2 − E-1/2, E = ehD.
The Newton's forward difference formula for interpolation gives the value of f (x0 + ph) in terms of f (x0) and its leading differences:
f (x0 + ph) = f (x0) + p ∆f (x0) + [ p (p − 1) / 2! ] ∆2 f (x0) + [ p (p − 1)(p − 2) / 3! ] ∆3 f (x0) + …
An alternate expression uses binomial coefficients: f (x0 + ph) = Σ k≥0 C(p, k) ∆k f (x0).
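The forward difference formula can be sketched in Python; the sampled quadratic and the evaluation point are illustrative assumptions. Since the formula reproduces polynomials up to the table's degree exactly, the sketch can be checked against x² at x = 1.5.

```python
def newton_forward(xs, ys, x):
    """Newton's forward difference interpolation; xs must be equally spaced."""
    n = len(ys)
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    diff = ys[:]                  # current column of the difference table
    result, coeff = ys[0], 1.0
    for k in range(1, n):
        # Next column of forward differences.
        diff = [diff[i + 1] - diff[i] for i in range(len(diff) - 1)]
        coeff *= (p - (k - 1)) / k        # p(p-1)...(p-k+1) / k!
        result += coeff * diff[0]         # leading difference of order k
    return result

# f(x) = x^2 sampled at 0, 1, 2, 3; interpolate at x = 1.5
val = newton_forward([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0], 1.5)
```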
Newton's backward difference formula is
f (xn + ph) = f (xn) + p ∇f (xn) + [ p (p + 1) / 2! ] ∇2 f (xn) + …
Alternatively, this formula can also be written in binomial-coefficient form. Here p = (x − xn) / h.
The Lagrange formula for interpolation through the points x0, x1, …, xn is
f (x) = Σ k [ Π j≠k (x − xj) / (xk − xj) ] f (xk)
Newton's divided difference interpolation formula can be written as
f (x) = f [x0] + (x − x0) f [x0, x1] + (x − x0)(x − x1) f [x0, x1, x2] + …
where the first-order divided difference is defined as
f [x0, x1] = [ f (x1) − f (x0) ] / (x1 − x0)
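A Python sketch of the divided-difference table and the Newton form, evaluated by nested multiplication; the data points and evaluation point are illustrative assumptions (they sample the cubic 1 + 2x + 8x(x − 1)).

```python
def divided_difference(xs, ys, x):
    """Newton's divided-difference interpolation (unequal spacing allowed)."""
    n = len(xs)
    coef = ys[:]                          # coef[k] will hold f[x0, ..., xk]
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    # Evaluate the Newton form by Horner-style nesting.
    result = coef[-1]
    for k in range(n - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

val = divided_difference([0.0, 1.0, 3.0], [1.0, 3.0, 55.0], 2.0)
```

Unlike the forward/backward formulas, no equal spacing is required, which is the main reason divided differences are introduced.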
Numerical Differentiation and Integration
We expressed D in terms of ∆:
hD = ln (1 + ∆) = ∆ − ∆2/2 + ∆3/3 − …
Using the backward difference operator ∇, we have hD = −ln (1 − ∇). On expansion, we have hD = ∇ + ∇2/2 + ∇3/3 + …
Using the central difference operator δ, hD = 2 sinh-1 (δ / 2).
Differentiation Using Interpolation; Richardson's Extrapolation
Thus, in Richardson's extrapolation, the desired quantity is approximated by combining two estimates computed with step sizes h and h/2 so that the leading error term cancels.
Basic Issues in Integration
What does an integral represent? A single integral represents an AREA; a double integral represents a VOLUME.
[Figure: the area under the curve y = f (x) between x0 = a and xn = b, subdivided at the points x1, x2, …, xn-1 with ordinates y0, y1, …, yn.]
TRAPEZOIDAL RULE
∫ab f (x) dx ≈ (h / 2) [ y0 + 2 (y1 + y2 + … + yn-1) + yn ],  where h = (b − a) / n.
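The composite trapezoidal rule is a one-liner in Python; the integrand x² over [0, 1] and the number of sub-intervals are illustrative assumptions.

```python
def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n equal sub-intervals."""
    h = (b - a) / n
    # Endpoints get weight 1/2, interior ordinates weight 1.
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

approx = trapezoidal(lambda x: x**2, 0.0, 1.0, 1000)  # exact value is 1/3
```

The error decreases like h², so halving h cuts the error roughly fourfold.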
DOUBLE INTEGRATION: we described a procedure to evaluate numerically a double integral of the form ∫∫ f (x, y) dy dx by applying a one-dimensional rule in each direction.
Differential Equations
Taylor Series, Euler Method, Runge-Kutta Method, Predictor-Corrector Method
Using Taylor's series, we expanded y (t) about the point t = t0 and obtained
y (t) = y (t0) + (t − t0) y′(t0) + [ (t − t0)2 / 2! ] y″(t0) + …
In the Euler Method we obtained the solution of the differential equation y′ = f (t, y) in the form of the recurrence relation
yn+1 = yn + h f (tn, yn)
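The Euler recurrence can be sketched in a few lines; the test problem y′ = y, y(0) = 1 (exact solution e^t) and the step size are illustrative assumptions.

```python
def euler(f, t0, y0, h, steps):
    """Explicit Euler: y_{n+1} = y_n + h * f(t_n, y_n)."""
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# dy/dt = y, y(0) = 1; integrate to t = 1, where the exact answer is e
y1 = euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
```

The global error is O(h), which is why the step below must be small to get even two or three correct digits.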
We derived the recurrence relation
yn+1 = yn + (h / 2) [ f (tn, yn) + f (tn+1, yn + h f (tn, yn)) ],
which is the modified Euler's method.
The fourth-order R-K method was described as
yn+1 = yn + (k1 + 2 k2 + 2 k3 + k4) / 6
where
k1 = h f (tn, yn), k2 = h f (tn + h/2, yn + k1/2), k3 = h f (tn + h/2, yn + k2/2), k4 = h f (tn + h, yn + k3).
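The four stages translate directly into a single-step Python function; the test problem y′ = y to t = 1 and the step size are illustrative assumptions.

```python
def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = h * f(t, y)
    k2 = h * f(t + h / 2, y + k1 / 2)
    k3 = h * f(t + h / 2, y + k2 / 2)
    k4 = h * f(t + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# dy/dt = y, y(0) = 1; integrate to t = 1 with 100 steps of h = 0.01
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
```

With global error O(h⁴), the same step size that gives Euler two digits here gives R-K about nine.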
In general, Milne's predictor-corrector pair can be written as
Predictor: yn+1 = yn-3 + (4h / 3) (2 fn − fn-1 + 2 fn-2)
Corrector: yn+1 = yn-1 + (h / 3) (fn+1 + 4 fn + fn-1)
yn+1 = yn + (h / 24) (55 fn − 59 fn-1 + 37 fn-2 + 9 fn-3)
This is known as Adams' predictor formula. Alternatively, it can be written in terms of backward differences as
yn+1 = yn + h (1 + ∇/2 + 5 ∇2/12 + 3 ∇3/8) fn.