Basic Numerical Methods and Algorithms
Review
Differential equations
Systems of linear algebraic equations
Mathematical optimization
Differential Equations
An ordinary differential equation (frequently called an "ODE") is an equality involving a function and its derivatives. An ODE of order n is an equation of the form F(x, y, y', ..., y^(n)) = 0, where y is a function of x, y' = dy/dx is the first derivative, and y^(n) is the n-th derivative with respect to x. In practical applications a numerical solution is mostly used. Numerical methods fall into two groups: multi-step difference methods and Runge-Kutta methods.
Differential Equations
The Euler method is important conceptually, because it points the way to solving an ODE by marching a small step at a time: for an equation dy/dx = f(x, y), the right-hand side is used to approximate the derivative on the left-hand side, giving y_{n+1} = y_n + h*f(x_n, y_n). However, the Euler method has limited value in practical usage.
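Below is a minimal sketch of an Euler step in Python; the function names, step size, and the sample equation dy/dx = -2y are assumptions for illustration, not part of the slide.

```python
def euler_step(f, x, y, h):
    """Advance y by one Euler step of size h using the slope at the current point."""
    return y + h * f(x, y)

def euler_solve(f, x0, y0, h, n_steps):
    """March n_steps small steps from the initial condition (x0, y0)."""
    x, y = x0, y0
    values = [y]
    for _ in range(n_steps):
        y = euler_step(f, x, y, h)
        x += h
        values.append(y)
    return values

# Example: dy/dx = -2*y with y(0) = 1; the exact solution is exp(-2*x)
print(euler_solve(lambda x, y: -2.0 * y, 0.0, 1.0, 0.1, 10))
```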
Differential Equations
Midpoint Method
The midpoint method, also known as the second-order Runge-Kutta method, improves on the Euler method by adding a midpoint evaluation in the step, which increases the accuracy by one order: y_{n+1} = y_n + h*f(x_n + h/2, y_n + (h/2)*f(x_n, y_n)).
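A corresponding midpoint-step sketch in Python, with the same assumed setup as the Euler sketch above:

```python
def midpoint_step(f, x, y, h):
    """One second-order Runge-Kutta (midpoint) step for dy/dx = f(x, y)."""
    k1 = f(x, y)                        # slope at the start of the step
    k2 = f(x + h / 2, y + h / 2 * k1)   # slope re-evaluated at the midpoint
    return y + h * k2
```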
Differential Equations
Runge-Kutta Method
The fourth-order Runge-Kutta method is by far the most often used ODE-solving method. It can be summarized as follows:
k1 = h*f(x_n, y_n)
k2 = h*f(x_n + h/2, y_n + k1/2)
k3 = h*f(x_n + h/2, y_n + k2/2)
k4 = h*f(x_n + h, y_n + k3)
y_{n+1} = y_n + (k1 + 2*k2 + 2*k3 + k4)/6
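A fourth-order step sketch in Python following the formulas above; the function names and calling convention are again assumptions:

```python
def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dx = f(x, y)."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
```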
Linear Algebraic Equations
Complex CG and engineering problems reduce to the solution of a system of linear algebraic equations A x = b, where A is a sparse or banded matrix of coefficients, x is a vector of unknown node values, and b is a right-hand-side vector. Sparse-matrix technology requires processing lists whose elements are numbers, matrices, arrays or switches. Elementary algebra operations on a sparse matrix are transposition, column permutation, ordering of a sparse representation, multiplication of a sparse matrix by a vector, etc. Both symbolic and numerical algorithms are used for processing the data.
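As an illustration of one such elementary operation, here is a minimal Python sketch of multiplying a sparse matrix by a vector using the common compressed sparse row (CSR) layout; the CSR layout and the toy matrix are assumptions of the example, not something prescribed by the slide.

```python
def csr_matvec(values, col_indices, row_ptr, x):
    """Multiply a sparse matrix stored in CSR form by a dense vector x."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        # nonzeros of row i live in values[row_ptr[i]:row_ptr[i+1]]
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_indices[k]]
    return y

# 3x3 example: [[4, 0, 1], [0, 3, 0], [2, 0, 5]]
values = [4.0, 1.0, 3.0, 2.0, 5.0]
col_indices = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(csr_matvec(values, col_indices, row_ptr, [1.0, 1.0, 1.0]))  # [5.0, 3.0, 7.0]
```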
Linear Algebraic Equations
Numerical methods for solving a system of linear algebraic equations (SLAE) fall into two classes: iterative and direct methods. Typical iterative methods consist of a choice of the initial approximation x(1) for x and the construction of a sequence x(2), x(3), ..., such that lim x(i) = x. In practice we stop the iterations when the given accuracy is reached. Direct methods, on the other hand, reach the solution in a finite number of calculation steps. Which method is better? Iterative methods have minimal memory requirements, but the calculation time depends on the type of the problem.
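A minimal sketch of one classical iterative scheme, Jacobi iteration, in Python; the dense storage, tolerance, and toy system are assumptions for illustration, while production solvers work on sparse structures:

```python
def jacobi(A, b, x0, tol=1e-8, max_iter=1000):
    """Solve A x = b by Jacobi iteration, starting from the initial guess x0."""
    n = len(b)
    x = list(x0)
    for _ in range(max_iter):
        x_new = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
        # stop when the given accuracy is reached
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# Diagonally dominant toy system
A = [[4.0, 1.0], [2.0, 5.0]]
b = [9.0, 13.0]
print(jacobi(A, b, [0.0, 0.0]))  # close to [1.777..., 1.888...]
```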
Linear Algebraic Equations
See a good short overview: "A Library of Sparse Linear Solvers". The current version of the library includes the following functionality:
Multifrontal Supernodal Cholesky Factorization
Left-Looking Supernodal Cholesky Factorization
Drop-Tolerance Incomplete-Cholesky Factorization
LDL^T Factorization
Out-of-Core, Left-Looking Supernodal Sparse Cholesky Factorization
Out-of-Core Sparse LU with Partial Pivoting Factor and Solve (can solve huge unsymmetric linear systems)
Ordering Codes and Interfaces to Existing Ordering Codes
Matrix Operations: matrix-vector multiplication, triangular solvers, matrix reordering
Mathematical optimization
Optimization Taxonomy
Unconstrained
Constrained
Stochastic
Nondifferentiable Optimization
Nonlinear Least Squares
Global Optimization
Nonlinear Equations
Linear Programming
Dynamic Programming
Integer Programming
Nonlinearly Constrained
Bound Constrained
Network Programming
Stochastic Programming
Genetic Algorithms
Mathematical optimization
Unconstrained
The unconstrained optimization problem is central to the development of optimization software. Constrained optimization algorithms are often extensions of unconstrained algorithms, while nonlinear least squares and nonlinear equation algorithms tend to be specializations. In the unconstrained optimization problem, we seek a local minimizer of a real-valued function, f(x), where x is a vector of real variables. In other words, we seek a vector, x*, such that f(x*) <= f(x) for all x close to x*. Global optimization algorithms try to find an x* that minimizes f over all possible vectors x. This is a much harder problem to solve. At present, no efficient algorithm is known for performing this task. For many applications, local minima are good enough, particularly when the user can draw on his/her own experience and provide a good starting point for the algorithm.
Mathematical optimization
Unconstrained
Newton's method gives rise to a wide and important class of algorithms that require computation of the gradient vector ∇f(x_k) and the Hessian matrix ∇²f(x_k). Newton's method forms a quadratic model of the objective function around the current iterate x_k. The model function is defined by m_k(p) = f(x_k) + ∇f(x_k)^T p + (1/2) p^T ∇²f(x_k) p. When the Hessian matrix ∇²f(x_k) is positive definite, the quadratic model has a unique minimizer that can be obtained by solving the symmetric n x n linear system ∇²f(x_k) p_k = -∇f(x_k). The next iterate is then x_{k+1} = x_k + p_k.
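A minimal Newton-iteration sketch along these lines in Python; NumPy, the convergence test, and the quadratic test function are assumptions of the example:

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    """Minimize f by Newton's method: solve H_k p_k = -g_k, then x_{k+1} = x_k + p_k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess(x), -g)  # Newton step from the quadratic model
        x = x + p
    return x

# Example: f(x) = (x0 - 1)^2 + 10*(x1 + 2)^2, minimizer at (1, -2)
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 20.0]])
print(newton_minimize(grad, hess, [5.0, 5.0]))
```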
Mathematical optimization
Constrained
The general constrained optimization problem is to minimize a nonlinear function subject to nonlinear constraints. Two equivalent formulations of this problem are useful for describing algorithms. They are
(1.1)  min f(x) subject to c_i(x) >= 0, i in I, and c_i(x) = 0, i in E,
where each c_i is a mapping from R^n to R, and I and E are index sets for inequality and equality constraints, respectively; and
min f(x) subject to l <= c(x) <= u,
where c maps R^n to R^m, and the lower-bound and upper-bound vectors l and u may contain some infinite components. Fundamental to the understanding of these algorithms is the Lagrangian function, which for formulation (1.1) is defined as
L(x, λ) = f(x) - Σ_{i in I ∪ E} λ_i c_i(x).
Mathematical optimization
Dynamic programming may look somewhat familiar. Let's look at a particular type of shortest path problem. Suppose we wish to get from A to J in the road network of Figure 1. The numbers on the arcs represent distances. Due to the special structure of this problem, we can break it up into stages. Stage 1 contains node A; stage 2 contains nodes B, C, and D; stage 3 contains nodes E, F, and G; stage 4 contains nodes H and I; and stage 5 contains node J. The states in each stage correspond just to the node names, so stage 3 contains states E, F, and G.
Mathematical optimization
Dynamic programming
If we let S denote a node in stage j and let fj(S) be the shortest distance from node S to the destination J, we can write
fj(S) = min over nodes Z in stage j+1 of [ cSZ + fj+1(Z) ],
where cSZ denotes the length of arc SZ. This gives the recursion needed to solve this problem. We begin by setting f5(J) = 0.
Mathematical optimization
Dynamic programming
Stage 4. During stage 4 there are no real decisions to make: you simply go to your destination J. So you get f4(H) = 3 by going to J, and f4(I) = 4 by going to J.
Stage 3. Here there are more choices. Here is how to calculate f3(F). From F you can either go to H or to I. The immediate cost of going to H is 6, and the following cost is f4(H) = 3, for a total of 9. The immediate cost of going to I is 3, and the following cost is f4(I) = 4, for a total of 7. Therefore, if you are ever at F, the best thing to do is to go to I. The total cost is 7, so f3(F) = 7.
The remaining stages are handled the same way: you continue working back toward A.
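A minimal Python sketch of this backward recursion; only the arcs F-H = 6, F-I = 3, H-J = 3 and I-J = 4 from the worked example above are used, so the full Figure 1 network is not reproduced.

```python
def shortest_distance(arcs, node, destination, memo=None):
    """Backward recursion f(S) = min over arcs (S, Z) of [c_SZ + f(Z)], with f(J) = 0."""
    memo = {} if memo is None else memo
    if node == destination:
        return 0
    if node not in memo:
        memo[node] = min(cost + shortest_distance(arcs, z, destination, memo)
                         for (s, z), cost in arcs.items() if s == node)
    return memo[node]

# Arc lengths taken from the worked example: F-H = 6, F-I = 3, H-J = 3, I-J = 4
arcs = {("F", "H"): 6, ("F", "I"): 3, ("H", "J"): 3, ("I", "J"): 4}
print(shortest_distance(arcs, "F", "J"))  # 7, achieved by going from F to I
```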
Mathematical optimization
Of special interest is the nonlinear least squares problem, in which the objective function is a sum of squares, S(x) = Σ_i F_i(x)^2. The Gauss-Newton method can be used, in which the search direction is d_k = -(J(x_k)^T J(x_k))^{-1} ∇S(x_k), where J(x_k) is the Jacobian of the residuals F_i at x_k. There are a number of problems with the Gauss-Newton method (poor conditioning, the direction may not be a descent direction). The Levenberg-Marquardt method is an attempt to deal with these problems by using d_k = -(J(x_k)^T J(x_k) + ωI)^{-1} ∇S(x_k). The presence of ω deals with the conditioning problem, and for sufficiently large values it ensures that d_k is a descent direction.
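A minimal sketch of a damped Gauss-Newton (Levenberg-Marquardt-style) step in Python; NumPy, the fixed value of ω, and the small linear test problem are assumptions of the example, and practical codes adapt ω between iterations:

```python
import numpy as np

def lm_step(residual, jacobian, x, omega):
    """One damped step d = -(J^T J + omega*I)^{-1} grad S(x), with S(x) = sum of F_i(x)^2."""
    F = residual(x)
    J = jacobian(x)
    grad_S = 2.0 * J.T @ F                      # gradient of the sum of squares
    M = J.T @ J + omega * np.eye(len(x))        # damping improves conditioning
    return x + np.linalg.solve(M, -grad_S)

# Overdetermined linear example: residuals F(x) = A_mat @ x - b
A_mat = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
b = np.array([1.0, 3.0, 4.0])
residual = lambda x: A_mat @ x - b
jacobian = lambda x: A_mat
x = np.zeros(2)
for _ in range(30):
    x = lm_step(residual, jacobian, x, omega=1.0)
print(x)  # approaches the least-squares solution [1., 2.]
```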
Genetic Algorithms (GA)
GAs are programs used to deal with optimization problems. GAs were first introduced by Holland. Their goal is to find the optimum of a given function F on a given search space S. GAs are very robust techniques able to deal with a very large class of optimization problems. The key idea of genetic algorithms comes from Nature: instead of searching for a single optimal solution of the particular problem, a population of candidate solutions is bred in a conceptually parallel fashion. Individuals in the population have a genetic code (sometimes called chromosomes) which represents their inheritable properties. New individuals are created from one or two parents, reminiscent of the asexual and sexual reproduction of living organisms. While in asexual reproduction only random changes occur in the genetic code of the individuals, in sexual reproduction the crossover of genes mixes the genetic code of the parents.
Genetic Algorithms
The process starts with a randomly generated initial "population" of individuals. In an initialization step a set of points of the search space S is created, and then the genetic algorithm iterates over the following 4 steps:
Evaluation: the function F, or fitness value, is computed for each individual, ordering the population from the worst to the best.
Selection: pairs of individuals are selected, the best individuals having a greater chance of being selected than poor ones.
Reproduction: new individuals (called "offspring") are produced from these pairs.
Replacement: a new population is generated by replacing some of the individuals of the previous population with the offspring.
Reproduction is done using some "genetic operators".
Genetic Algorithms
A number of "genetic operators" may be used, but the two most common are mutation and crossover (crossing-over). The mutation operator picks at random some mutation locations among the N possible sites in the vector and flips the value of the bits at these locations. Crossover children are created by combining the vectors of a pair of parents.
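A minimal Python sketch of these two operators on bit-vector chromosomes; the per-bit mutation rate and one-point crossover are common choices assumed here for illustration:

```python
import random

def mutate(chromosome, rate=0.05):
    """Mutation operator: flip each bit with probability `rate`."""
    return [1 - bit if random.random() < rate else bit for bit in chromosome]

def crossover(parent_a, parent_b):
    """One-point crossover: combine the vectors of a pair of parents."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:], parent_b[:point] + parent_a[point:]

parent_a = [0, 0, 0, 0, 0, 0, 0, 0]
parent_b = [1, 1, 1, 1, 1, 1, 1, 1]
child_a, child_b = crossover(parent_a, parent_b)
print(mutate(child_a), mutate(child_b))
```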