Chapter 10. Numerical Solutions of Nonlinear Systems of Equations
Jihoon Myung, Computer Networks Research Lab., Dept. of Computer Science and Engineering, Korea University
Contents
- Fixed Points for Functions of Several Variables
- Newton's Method
- Quasi-Newton Methods
- Steepest Descent Techniques
- Homotopy and Continuation Methods
Fixed Points for Functions of Several Variables

A system of nonlinear equations has the form f1(x1, …, xn) = 0, …, fn(x1, …, xn) = 0, written compactly as F(x) = 0; the functions f1, …, fn are the coordinate functions of F.
Example 1. The 3×3 nonlinear system
3x1 − cos(x2 x3) − 1/2 = 0
x1² − 81(x2 + 0.1)² + sin(x3) + 1.06 = 0
e^(−x1 x2) + 20x3 + (10π − 3)/3 = 0
Function G(x)
Main function
Result: x(0) = (0.1, 0.1, -0.1)^t, Tolerance =
One way to accelerate convergence of the fixed-point iteration is to use the latest estimates as soon as they are computed, as in the Gauss-Seidel method for linear systems. This modification does not always accelerate convergence.
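Both variants can be sketched as follows. The function names `G` and `fixed_point` are illustrative, and the 3×3 system is an assumption (the classic Burden & Faires example, whose fixed point lies near (0.5, 0, -π/6)), since the slide's equations are not reproduced here.

```python
import math

# Assumed 3x3 system (classic Burden & Faires example), rewritten
# in fixed-point form x = G(x); its fixed point is near (0.5, 0, -pi/6).
def G(x):
    x1, x2, x3 = x
    return [
        math.cos(x2 * x3) / 3.0 + 1.0 / 6.0,
        math.sqrt(x1**2 + math.sin(x3) + 1.06) / 9.0 - 0.1,
        -math.exp(-x1 * x2) / 20.0 - (10.0 * math.pi - 3.0) / 60.0,
    ]

def fixed_point(G, x0, tol=1e-5, max_iter=100, use_latest=False):
    """Functional iteration x(k) = G(x(k-1)); with use_latest=True each
    component is updated with the newest values (Gauss-Seidel style)."""
    x = list(x0)
    for k in range(1, max_iter + 1):
        if use_latest:
            x_new = list(x)
            for i in range(len(x)):
                x_new[i] = G(x_new)[i]   # latest estimates used immediately
        else:
            x_new = G(x)                 # all components from the old iterate
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            return x_new, k
        x = x_new
    return x, max_iter
```

With x(0) = (0.1, 0.1, -0.1), both variants converge to roughly (0.5, 0, -0.5236); for this example the accelerated version typically needs fewer iterations, though in general it may not.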
Main function using the latest estimates
Result: x(0) = (0.1, 0.1, -0.1)^t, Tolerance =
Newton's Method

Newton's method in the one-dimensional case extends to nonlinear systems by applying a similar approach in the n-dimensional case.
The Jacobian matrix J(x)
In practice, explicit computation of J(x)^(-1) is avoided. Instead:
- A vector y is found that satisfies J(x^(k-1)) y = -F(x^(k-1)).
- The new approximation x^(k) is obtained by adding y to x^(k-1).
Newton's method can converge very rapidly once an approximation near the true solution is obtained, but it is not always easy to determine starting values that will lead to a solution, and the method is comparatively expensive to employ. Good starting values can usually be found by the Steepest Descent method.
To solve J(x^(k-1)) y = -F(x^(k-1)), use Gaussian elimination.
Gaussian elimination with partial pivoting
Function F(x)
Jacobian matrix
Main function
Result: x(0) = (0.1, 0.1, -0.1)^t, Tolerance =
Quasi-Newton Methods

Broyden's method:
- A generalization of the Secant method to systems of nonlinear equations.
- Belongs to a class of methods known as least-change secant updates, which produce algorithms called quasi-Newton methods.
- Replaces the Jacobian matrix in Newton's method with an approximation matrix that is updated at each iteration.
- Converges superlinearly.
An initial approximation x^(0) is given. Calculate the next approximation x^(1) in the same manner as Newton's method; or, if it is inconvenient to determine J(x^(0)) exactly, use difference equations to approximate the partial derivatives.
To compute x^(2), x^(3), …, first examine the Secant method for a single nonlinear equation and generalize it to systems.
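A sketch of Broyden's method follows, maintaining the inverse approximation directly via the Sherman-Morrison formula so each step costs O(n²) instead of an O(n³) solve. The names `F`, `fd_jacobian`, and `broyden` are illustrative, and the test system is an assumption (the classic Burden & Faires example); the initial Jacobian is built by forward differences, per the option mentioned above.

```python
import numpy as np

def F(x):  # assumed test system (classic Burden & Faires example)
    x1, x2, x3 = x
    return np.array([
        3.0 * x1 - np.cos(x2 * x3) - 0.5,
        x1**2 - 81.0 * (x2 + 0.1)**2 + np.sin(x3) + 1.06,
        np.exp(-x1 * x2) + 20.0 * x3 + (10.0 * np.pi - 3.0) / 3.0,
    ])

def fd_jacobian(F, x, h=1e-7):
    """Forward-difference Jacobian, for when J(x) is inconvenient to form."""
    f0, n = F(x), len(x)
    Jm = np.empty((n, n))
    for j in range(n):
        xp = x.copy()
        xp[j] += h
        Jm[:, j] = (F(xp) - f0) / h
    return Jm

def broyden(F, x0, tol=1e-6, max_iter=50):
    x = np.asarray(x0, dtype=float)
    A_inv = np.linalg.inv(fd_jacobian(F, x))  # A0 ~ J(x0)
    v = F(x)
    for k in range(1, max_iter + 1):
        s = -A_inv @ v                 # quasi-Newton step: A s = -F(x)
        x = x + s
        v_new = F(x)
        y = v_new - v                  # change in F along the step
        z = A_inv @ y
        # Sherman-Morrison update of the inverse approximation:
        # A_k^-1 = A^-1 + (s - A^-1 y) s^T A^-1 / (s^T A^-1 y)
        A_inv += np.outer(s - z, s @ A_inv) / (s @ z)
        v = v_new
        if np.linalg.norm(s, np.inf) < tol:
            return x, k
    return x, max_iter
```

From x(0) = (0.1, 0.1, -0.1) this converges to roughly (0.5, 0, -0.5236), typically in somewhat more iterations than Newton's method, reflecting superlinear rather than quadratic convergence.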
Matrix inversion
Main function
Result: x(0) = (0.1, 0.1, -0.1)^t, Tolerance =
Result: x(0) = (0.1, 0.1, -0.1)^t, Tolerance = (measured in the Euclidean norm)
Steepest Descent Techniques

The Steepest Descent method:
- Determines a local minimum for a multivariable function of the form g(x) = f1(x)² + … + fn(x)².
- Converges only linearly to the solution.
- Converges even for poor initial approximations.
- Is used to find sufficiently accurate starting approximations for the Newton-based techniques.
The direction of greatest decrease in the value of g at x is the direction given by -∇g(x).
To choose the step length, minimize h(α) = g(x - α z), where z is the unit vector in the descent direction:
- Choose three numbers α1 < α2 < α3 that, we hope, are close to where the minimum value of h(α) occurs.
- Construct the quadratic polynomial P that interpolates h at α1, α2, and α3.
- Define ᾱ in [α1, α3] so that P(ᾱ) is a minimum, and use P(ᾱ) to approximate the minimal value of h(α).
In practice:
- Choose α1 = 0.
- Find a number α3 with h(α3) < h(α1).
- Choose α2 = α3/2.
- The minimum value of P occurs either at the only critical point of P or at the right endpoint α3.
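The descent step with its quadratic-interpolation line search can be sketched as follows. `F`, `J`, and `steepest_descent` are illustrative names, and the 3×3 test system is an assumption (the classic Burden & Faires example).

```python
import numpy as np

def F(x):  # assumed test system (classic Burden & Faires example)
    x1, x2, x3 = x
    return np.array([
        3.0 * x1 - np.cos(x2 * x3) - 0.5,
        x1**2 - 81.0 * (x2 + 0.1)**2 + np.sin(x3) + 1.06,
        np.exp(-x1 * x2) + 20.0 * x3 + (10.0 * np.pi - 3.0) / 3.0,
    ])

def J(x):
    x1, x2, x3 = x
    return np.array([
        [3.0, x3 * np.sin(x2 * x3), x2 * np.sin(x2 * x3)],
        [2.0 * x1, -162.0 * (x2 + 0.1), np.cos(x3)],
        [-x2 * np.exp(-x1 * x2), -x1 * np.exp(-x1 * x2), 20.0],
    ])

def steepest_descent(F, J, x0, tol=1e-4, max_iter=100):
    g = lambda x: F(x) @ F(x)            # g(x) = f1^2 + ... + fn^2
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        g0 = g(x)
        grad = 2.0 * J(x).T @ F(x)       # gradient of g
        norm = np.linalg.norm(grad)
        if norm < 1e-12:
            return x, k                  # stationary point of g
        z = grad / norm                  # unit steepest-descent direction
        h = lambda a, x=x, z=z: g(x - a * z)
        a3 = 1.0                         # find a3 with h(a3) < h(0) by halving
        while h(a3) >= g0:
            a3 /= 2.0
            if a3 < 1e-10:
                return x, k              # no decrease found along -grad
        a1, a2 = 0.0, a3 / 2.0
        h1, h2, h3 = g0, h(a2), h(a3)
        # Newton divided differences for the interpolating quadratic;
        # its critical point is a0 = (a1 + a2)/2 - d1/(2 d3).
        d1 = (h2 - h1) / (a2 - a1)
        d2 = (h3 - h2) / (a3 - a2)
        d3 = (d2 - d1) / (a3 - a1)
        a0 = 0.5 * (a1 + a2 - d1 / d3) if d3 > 0 else a3
        a_best = min((a0, a2, a3), key=h)   # critical point or endpoint
        x = x - a_best * z
        if g(x) < tol:
            return x, k
    return x, max_iter
```

Since h(α3) < h(0) is enforced, every iteration strictly decreases g; convergence is only linear, which is why the result is used as a starting point for Newton-type methods rather than as a final answer.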
Functions f1, f2, …, fn and the function g
The gradient of g
Main function
Result: x(0) = (0.1, 0.1, -0.1)^t, Tolerance =
Result: x(0) = (0, 0, 0)^t, Tolerance =
Homotopy and Continuation Methods

Homotopy, or continuation, methods for nonlinear systems embed the problem to be solved within a collection of problems.
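One standard construction (the one used in Burden & Faires) is H(x, λ) = F(x) - (1 - λ)F(x(0)), so that x(0) solves the λ = 0 problem and a root of F solves the λ = 1 problem. Differentiating H(x(λ), λ) = 0 with respect to λ gives J(x) x'(λ) = -F(x(0)), an ODE that can be integrated from λ = 0 to λ = 1, here with N classical Runge-Kutta (RK4) steps. `F`, `J`, and `continuation` are illustrative names, and the test system is an assumption (the classic example used throughout).

```python
import numpy as np

def F(x):  # assumed test system (classic Burden & Faires example)
    x1, x2, x3 = x
    return np.array([
        3.0 * x1 - np.cos(x2 * x3) - 0.5,
        x1**2 - 81.0 * (x2 + 0.1)**2 + np.sin(x3) + 1.06,
        np.exp(-x1 * x2) + 20.0 * x3 + (10.0 * np.pi - 3.0) / 3.0,
    ])

def J(x):
    x1, x2, x3 = x
    return np.array([
        [3.0, x3 * np.sin(x2 * x3), x2 * np.sin(x2 * x3)],
        [2.0 * x1, -162.0 * (x2 + 0.1), np.cos(x3)],
        [-x2 * np.exp(-x1 * x2), -x1 * np.exp(-x1 * x2), 20.0],
    ])

def continuation(F, J, x0, N=4):
    """Integrate x'(lam) = -J(x)^-1 F(x0) from lam=0 to 1 in N RK4 steps."""
    x = np.asarray(x0, dtype=float)
    h = 1.0 / N
    b = -h * F(x)   # h times the constant right-hand side -F(x0)
    for _ in range(N):
        k1 = np.linalg.solve(J(x), b)
        k2 = np.linalg.solve(J(x + 0.5 * k1), b)
        k3 = np.linalg.solve(J(x + 0.5 * k2), b)
        k4 = np.linalg.solve(J(x + k3), b)
        x = x + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return x
```

Even with a small N and the poor starting point (0, 0, 0), the path-following integration lands close to the root near (0.5, 0, -0.5236), which is the appeal of continuation when Newton's method alone would need a good initial guess.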
Main function
Result: x(0) = (0.1, 0.1, -0.1)^t, Tolerance =
Result: x(0) = (0, 0, 0)^t, Tolerance =