Numerical methods

Themes:
1. Solution of equations
2. Numerical differentiation
3. Numerical integration
4. Calculation of curve length
5. Optimization
Theme 1. Solution of Equations
Basic concepts
Types of solution:
- analytical (exact, given by a formula);
- approximate (inexact).
Task: solve the equation f(x) = 0. When no exact formula is available, a numerical method refines an initial approximation step by step; for a function of one variable the initial approximation can be chosen graphically, from a plot of f(x).
Numerical methods
Idea: successive refinement of the solution by means of some algorithm.
Field of use: when it is impossible or extremely difficult to find the exact solution.
Advantages:
1) some solution can be found;
2) in many cases the error can be estimated, i.e. a solution with a given accuracy can be found.
Disadvantages:
1) the exact solution cannot be found;
2) it is impossible to investigate how the solution behaves as parameters vary;
3) computationally intensive (a large volume of calculation);
4) sometimes it is difficult to estimate the error;
5) there are no universal methods.
Does a solution exist on [a, b]?
If a continuous function f(x) has different signs at the ends of the interval [a, b], then at some point x* inside [a, b] we have f(x*) = 0 (the intermediate value theorem). If f(x) keeps the same sign on [a, b], a solution may not exist there.
Dichotomy method (bisection)
1. Find the midpoint of the segment [a, b]: c = (a + b) / 2.
2. If f(c)·f(a) < 0, move the right boundary of the interval: b = c.
3. If f(c)·f(a) ≥ 0, move the left boundary of the interval: a = c.
4. Repeat steps 1-3 until b − a ≤ ε.
Dichotomy method (bisection)
Advantages:
- simplicity;
- a solution with a given accuracy can be obtained (within the range of computer calculation accuracy).
Disadvantages:
- the interval [a, b] must be known in advance;
- the interval [a, b] must contain only one solution;
- a large number of steps is needed to reach high accuracy;
- works only for functions of one variable.
Segment bisection method

//----------------------------------------------
// BinSolve finds a solution on [a, b] by the
// segment bisection method
// Input:  a, b - interval boundaries, a < b
//         eps  - solution accuracy
// Output: x - solution of the equation f(x) = 0
//----------------------------------------------
float BinSolve ( float a, float b, float eps )
{
  float c;
  while ( b - a > eps ) {
    c = (a + b) / 2;
    if ( f(a)*f(c) < 0 ) b = c;
    else                 a = c;
  }
  return (a + b) / 2;
}

float f ( float x ) { return x*x - 5; }
How to calculate the number of steps?

float BinSolve ( float a, float b, float eps, int &n )
{
  float c;
  n = 0;
  while ( b - a > eps ) {
    c = (a + b) / 2;
    if ( f(a)*f(c) < 0 ) b = c;
    else                 a = c;
    n ++;
  }
  return (a + b) / 2;
}

The counter n is passed by reference (int &n), so the value assigned inside the function is visible to the caller.

Call in the main program:

float x;
int N;
...
x = BinSolve ( 2, 3, 0.0001, N );
printf( "Answer: x = %7.3f\n", x );
printf( "Number of steps: %d\n", N );
Method of iterations (repetition)
Task: solve f(x) = 0.
Equivalent transformation: rewrite it as x = x + b·f(x), which has the same solution for any b ≠ 0.
Idea of solution: take an initial approximation x_0 (for example, a guess) and repeat
  x_{k+1} = x_k + b·f(x_k)
Questions:
1) how to choose b?
2) can a solution always be found this way?
Iterative convergence
A convergent iterative process: the sequence x_k approaches (converges to) the exact solution x*. The convergence may be one-sided (the approximations stay on one side of x*) or two-sided (they alternate around x*).
Divergence of iterations
A divergent iterative process: the sequence x_k increases or decreases without bound and does not approach the solution. The divergence may likewise be one-sided or two-sided.
What does convergence depend on?
Writing the iteration as x_{k+1} = φ(x_k) with φ(x) = x + b·f(x), the behaviour near the solution is governed by the derivative φ′(x):
- the iterations converge where |φ′(x)| < 1;
- the iterations diverge where |φ′(x)| > 1.
Conclusions: convergence depends on the derivative, and therefore on the choice of the parameter b.
How to choose b?
- by guesswork, trying different values for a given initial approximation x_0;
- recalculate b on each step, for example b = −1 / f′(x_k).
What problems might arise with these choices?
Iteration method (program code)

//----------------------------------------------
// Iter: solution of an equation by the iteration method
// Input:  x - initial approximation
//         b - parameter
//         eps - accuracy of solution
// Output: solution of the equation f(x) = 0
//         n - number of steps
//----------------------------------------------
float Iter ( float x, float b, float eps, int &n )
{
  float dx;
  n = 0;
  while ( 1 ) {
    dx = b*f(x);
    x = x + dx;
    if ( fabs(dx) < eps ) break;   // normal exit: required accuracy reached
    n ++;
    if ( n > 100 ) break;          // emergency (abend) exit: no convergence
  }
  return x;
}
Newton's method (tangent method)
At each step the curve is replaced by its tangent at the current point; the next approximation is the point where the tangent crosses the x axis:
  x_{k+1} = x_k − f(x_k) / f′(x_k)
What is the connection with the iteration method? (It is the same iteration x_{k+1} = x_k + b·f(x_k), with the step-dependent parameter b = −1 / f′(x_k).)
Newton's method (program code)

//----------------------------------------------
// Newton: solution of an equation by Newton's method
// Input:  x - initial approximation
//         eps - accuracy of solution
// Output: solution of the equation f(x) = 0
//         n - number of steps
//----------------------------------------------
float Newton ( float x, float eps, int &n )
{
  float dx;
  n = 0;
  while ( 1 ) {
    dx = f(x) / df(x);
    x = x - dx;
    if ( fabs(dx) < eps ) break;   // normal exit: required accuracy reached
    n ++;
    if ( n > 100 ) break;          // emergency exit: no convergence
  }
  return x;
}

float f  ( float x ) { return 3*x*x*x + 2*x + 5; }
float df ( float x ) { return 9*x*x + 2; }
Newton's method
Advantages:
- rapid (quadratic) convergence: near the solution, the error at step k+1 is proportional to the square of the error at step k;
- no interval is needed, only an initial approximation;
- it is applicable to functions of many variables.
Disadvantages:
- one must be able to calculate the derivative (by formula or numerically);
- the derivative must not be equal to zero;
- the iterations might cycle.
Theme 2. Numerical Differentiation
1. The Taylor series and approximation
Taylor's theorem states that any smooth function can be approximated locally by a polynomial: near a point x_i,
  f(x_{i+1}) = f(x_i) + f′(x_i)·h + f″(x_i)·h²/2! + ...,  where h = x_{i+1} − x_i.
The finite-difference formulas
From the Taylor series, various finite-difference formulas can be obtained. Truncating the series after the first-order term and solving for the derivative gives, for a finite h, the first-order finite-difference approximation of the first derivative:
  f′(x_i) ≈ (f(x_{i+1}) − f(x_i)) / h
Theme 3. Numerical Integration
Area of a curvilinear trapezoid
The definite integral of f(x) over [a, b] equals the area under the curve y = f(x). Likewise, the area enclosed between two curves y = f1(x) and y = f2(x) (with f1 ≥ f2) is the integral of f1(x) − f2(x) over the interval where the curves bound the figure.
Method of (left) rectangles
The interval [xc1, xc2] between the intersection points of the curves is split into strips of width h; each strip S_i is approximated by a rectangle whose height is taken at the left edge x:

float Area()
{
  float x, S = 0, h = 0.001;
  for ( x = xc1; x < xc2; x += h )
    S += h*(f1(x) - f2(x));
  return S;
}

A slightly faster equivalent multiplies by h once, after the loop:

for ( x = xc1; x < xc2; x += h )
  S += f1(x) - f2(x);
S *= h;

How can the solution be improved? Why x < xc2 and not x <= xc2?
Method of (right) rectangles
The same, but the height of each rectangle is taken at the right edge x + h:

float Area()
{
  float x, S = 0, h = 0.001;
  for ( x = xc1; x < xc2; x += h )
    S += h*(f1(x+h) - f2(x+h));
  return S;
}

or, multiplying by h once:

for ( x = xc1; x < xc2; x += h )
  S += f1(x+h) - f2(x+h);
S *= h;
Method of (middle) rectangles
The height of each rectangle is taken at the middle of the strip, x + h/2:

float Area()
{
  float x, S = 0, h = 0.001;
  for ( x = xc1; x < xc2; x += h )
    S += h*(f1(x+h/2) - f2(x+h/2));
  return S;
}

or:

for ( x = xc1; x < xc2; x += h )
  S += f1(x+h/2) - f2(x+h/2);
S *= h;

Which method is more accurate? For the left and right rectangles the error is O(h); for the middle rectangles it is O(h²).
Trapezium method
Each strip S_i is approximated by a trapezium with parallel sides f1(x) − f2(x) and f1(x+h) − f2(x+h); the error is O(h²):

for ( x = xc1; x < xc2; x += h )
  S += f1(x) - f2(x) + f1(x+h) - f2(x+h);
S *= h/2;

How can this be improved? Every interior point is added twice, so it is cheaper to halve only the end values:

S = ( f1(xc1) - f2(xc1) + f1(xc2) - f2(xc2) ) / 2.;
for ( x = xc1 + h; x < xc2; x += h )
  S += f1(x) - f2(x);
S *= h;
Monte Carlo method
Application: calculation of the areas of figures of complex shape (where it is difficult to use other methods).
Requirement: it must be fairly simple to determine whether a point (x, y) falls inside the figure.
Example: given 100 circles (coordinates of centers, radii), which may intersect, find the area covered by the circles. How to find S?
Monte Carlo method
1. The figure is inscribed into another figure whose area S0 is easily calculated (a rectangle, a circle, ...).
2. N points with random coordinates are thrown uniformly into the enclosing figure.
3. The number of points M that fall inside the figure of interest is counted.
4. The area is estimated as S ≈ S0 · M / N.
Notes:
1) the method is approximate;
2) the distribution must be uniform;
3) the more points, the better the accuracy;
4) the accuracy is limited by the quality of the random number generator.
Theme 4. Calculation of Curve Length
Curve length
Exact solution: L = ∫_a^b √(1 + (f′(x))²) dx.
But this needs a formula for the derivative, and the integral may be difficult to take.
Approximate solution: split [a, b] into steps of width h and sum the lengths of the chords,
  L_i = √(h² + (f(x_i + h) − f(x_i))²),  L ≈ L_1 + L_2 + ... + L_N.
Curve length

//----------------------------------------------
// CurveLen: calculation of curve length
// Input:  a, b - boundaries of the interval
// Output: length of the curve y = f(x) on [a, b]
//----------------------------------------------
float CurveLen ( float a, float b )
{
  float x, dy, h = 0.0001, h2 = h*h, L = 0;
  for ( x = a; x < b; x += h ) {
    dy = f(x+h) - f(x);
    L += sqrt(h2 + dy*dy);
  }
  return L;
}
Theme 5. Optimization
Basic concepts
Optimization is the search for an optimal solution.
Aim: to find the values of unknown parameters at which a given function reaches a minimum (expenses) or a maximum (profit).
Task: find x at which f(x) → min (or max) under given constraints.
Constraints are the conditions that make the problem sensible.
Local and global minima
Task: to find the global minimum of y = f(x).
Reality: many known algorithms find only a local minimum near the initial point, and no general algorithm for finding the global minimum is known.
What to do?
- for a function of one variable, choose the initial point from the graph;
- random choice of the initial point: launch the search algorithm from many different points and pick the best result.
Minimum of a function of one variable
Given: on the interval [a, b] the function is continuous and has a unique minimum x*.
Principle of interval contraction: pick two interior points c < d; by comparing f(c) and f(d), one part of the interval can be discarded while the minimum is kept inside the rest.
How to choose c and d to the best advantage?
Minimum of a function of one variable
Choose c and d symmetrically, so that the contraction factor is the same in both cases. The method of "almost half" division takes c = (a + b)/2 − δ and d = (a + b)/2 + δ, where δ is a small number: the compression coefficient is then close to 1/2 (very fast compression), but two new values of the function must be computed on each step.
Ratio of the "golden section"
Idea: choose c and d so that on each step only one new value of the function has to be calculated. This requires an interior point of the contracted interval to coincide with an interior point of the old one, which gives the equation for the contraction factor g:
  g² = 1 − g
Ratio of the "golden section": g = (√5 − 1) / 2 ≈ 0.618.
Ratio of the "golden section"

//----------------------------------------------
// Gold: search of a function minimum ("golden section")
// Input:  a, b - interval boundaries
//         eps - accuracy
// Output: x at which f(x) has a minimum
//         on the interval [a, b]
//----------------------------------------------
float Gold ( float a, float b, float eps )
{
  float x1, x2, g = 0.618034, R = g*(b - a);
  while ( fabs(b - a) > eps ) {
    x1 = b - R;
    x2 = a + R;
    if ( f(x1) > f(x2) ) a = x1;
    else                 b = x2;
    R *= g;
  }
  return (a + b) / 2.;
}

This version still evaluates f twice per step. How can only one value be calculated on each step?
Function of several variables
Find x = (x1, ..., xn) for which f(x) → min under given constraints.
Problems:
- there are no universal algorithms for finding the global minimum;
- it is not clear how to choose the initial approximation (depends on the problem and on intuition).
Approaches:
- methods of local optimization (the result depends on the choice of the initial approximation);
- random search (without guarantees);
- methods of global optimization (for special classes of functions).
Alternating-variable descent method (coordinate descent)
Idea: an initial point is chosen; then only x1 is varied while the other variables are fixed, and the minimum along x1 is found; then only x2 is varied while the other variables are fixed, and so on, sweeping repeatedly through the coordinates.
Advantages:
- simplicity: the problem reduces to several problems with one variable;
- it is sometimes possible to move to the minimum faster.
Disadvantages:
- a high volume of calculation;
- for functions with strongly coupled variables the minimum may not be found.
Gradient method
The gradient is a vector which shows the direction of the fastest increase of a function.
Idea: an initial point is chosen; on each step we move in the direction of the antigradient (the direction of steepest descent).
Advantages:
- rapid convergence.
Disadvantages:
- the derivatives must be calculated (by formula or numerically).
Random search method
Idea: an initial point is chosen; we try to take a step in a random direction; if the value of the function decreases, the step is successful (and is memorized).
Advantages:
- simplicity of implementation;
- no need to calculate the derivatives;
- applicable to functions with many local minima.
Disadvantages:
- a very high volume of calculation.