Basic Numerical methods and algorithms


Review
- Differential Equations
- Systems of linear algebraic equations
- Mathematical optimization

Differential Equations
An ordinary differential equation (frequently called an "ODE") is an equality involving a function and its derivatives. An ODE of order n is an equation of the form F(x, y, y', ..., y^(n)) = 0, where y is a function of x, y' = dy/dx is the first derivative, and y^(n) = d^n y/dx^n is the n-th derivative with respect to x. In practical applications the solution is mostly computed numerically. There are two main groups of methods: multi-step (difference) methods and Runge-Kutta methods.

Differential Equations
The Euler method is conceptually important because it shows how an ODE can be solved by marching forward one small step at a time, using the right-hand side to approximate the derivative on the left-hand side: y_(k+1) = y_k + h f(x_k, y_k). However, the Euler method has limited value in practical use.
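For illustration only (not part of the original slides), a minimal Python sketch of the Euler method for y' = f(x, y) with a fixed step size h; the test problem y' = y, y(0) = 1 is an assumed example.

def euler(f, x0, y0, h, n_steps):
    """Advance y' = f(x, y) from (x0, y0) using n_steps Euler steps of size h."""
    x, y = x0, y0
    for _ in range(n_steps):
        y = y + h * f(x, y)   # use the slope at the left end of the step
        x = x + h
    return x, y

# Example: y' = y, y(0) = 1, exact value y(1) = e ~ 2.71828
x_end, y_end = euler(lambda x, y: y, 0.0, 1.0, h=0.001, n_steps=1000)
print(y_end)   # ~2.7169: close to e, but with a first-order (O(h)) error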

Differential Equations
Midpoint Method
The midpoint method, also known as the second-order Runge-Kutta method, improves on the Euler method by evaluating the slope at the midpoint of the step, which increases the accuracy by one order: y_(k+1) = y_k + h f(x_k + h/2, y_k + (h/2) f(x_k, y_k)).

Differential Equations
Runge-Kutta Method
The fourth-order Runge-Kutta method is by far the most often used ODE solving method. It can be summarized as follows:
k1 = f(x_k, y_k)
k2 = f(x_k + h/2, y_k + (h/2) k1)
k3 = f(x_k + h/2, y_k + (h/2) k2)
k4 = f(x_k + h, y_k + h k3)
y_(k+1) = y_k + (h/6)(k1 + 2 k2 + 2 k3 + k4)
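A minimal Python sketch of one classical fourth-order Runge-Kutta step, added for illustration; the function name and the test problem are assumptions, not part of the original slides.

def rk4_step(f, x, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: integrate y' = y from y(0) = 1 to x = 1 with only 10 steps
y, h = 1.0, 0.1
for i in range(10):
    y = rk4_step(lambda x, y: y, i * h, y, h)
print(y)   # ~2.718280, very close to e despite the large step size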

Linear Algebraic Equations
Complex CG and engineering problems reduce to the solution of a system of linear algebraic equations A x = b, where A is a sparse or banded coefficient matrix, x is the vector of unknown node values, and b is the right-hand-side vector. Sparse-matrix technology stores the matrix as a processing list whose elements are numbers, matrices, arrays or switches. Elementary algebraic operations on sparse matrices include transposition, column permutation, ordering of the sparse representation, multiplication of a sparse matrix by a vector, and so on. Both symbolic and numerical algorithms are used to process the data.
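For illustration (not from the original slides), a minimal sketch of building a sparse matrix and applying a few of the elementary operations mentioned above using scipy.sparse; the small tridiagonal matrix is an assumed example.

import numpy as np
from scipy import sparse

# Assemble a small sparse tridiagonal matrix and store it in CSR format
n = 5
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sparse.diags([off, main, off], offsets=[-1, 0, 1], format="csr")

x = np.ones(n)
print(A @ x)          # multiplication of a sparse matrix by a vector
print(A.T.toarray())  # transposition (A is symmetric here, so A.T equals A)
print(A.nnz)          # number of stored non-zeros (13 for this 5x5 matrix)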

Linear Algebraic Equations
Numerical methods for solving systems of linear algebraic equations (SLAE) fall into two classes: iterative and direct methods. A typical iterative method chooses an initial approximation x(1) for x and constructs a sequence x(2), x(3), ... such that lim x(i) = x. In practice the iterations are stopped when a given accuracy is reached. Direct methods, on the other hand, compute the solution in a finite number of steps. Which approach is better? Iterative methods have minimal memory requirements, but their computation time depends on the type of problem.
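A small sketch contrasting a direct sparse solve with an iterative one in scipy; the test matrix (a symmetric positive definite tridiagonal system) is an assumed example for illustration only.

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve, cg

# A small sparse symmetric positive definite system A x = b
n = 100
A = sparse.diags([-np.ones(n - 1), 2 * np.ones(n), -np.ones(n - 1)],
                 offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

x_direct = spsolve(A, b)       # direct method: finite number of steps
x_iter, info = cg(A, b)        # iterative method: stops at a given accuracy
print(np.linalg.norm(A @ x_direct - b))         # residual of the direct solution
print(np.linalg.norm(A @ x_iter - b), info)     # iterative residual (info 0 = converged)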

Linear Algebraic Equations
A good short overview: http://www.seas.upenn.edu/~adavani/report.pdf
A library of sparse linear solvers (TAUCS): http://www.tau.ac.il/~stoledo/taucs/
The current version of the library includes the following functionality:
- Multifrontal Supernodal Cholesky Factorization
- Left-Looking Supernodal Cholesky Factorization
- Drop-Tolerance Incomplete-Cholesky Factorization
- LDL^T Factorization
- Out-of-Core, Left-Looking Supernodal Sparse Cholesky Factorization
- Out-of-Core Sparse LU with Partial Pivoting Factor and Solve (can solve huge unsymmetric linear systems)
- Ordering Codes and Interfaces to Existing Ordering Codes
- Matrix Operations: matrix-vector multiplication, triangular solvers, matrix reordering

Mathematical optimization
Optimization Taxonomy: Unconstrained & Constrained & Stochastic
- Nondifferentiable Optimization
- Nonlinear Least Squares
- Global Optimization
- Nonlinear Equations
- Linear Programming
- Dynamic Programming
- Integer Programming
- Nonlinearly Constrained
- Bound Constrained
- Network Programming
- Stochastic Programming
- Genetic Algorithms

Mathematical optimization
Unconstrained
The unconstrained optimization problem is central to the development of optimization software. Constrained optimization algorithms are often extensions of unconstrained algorithms, while nonlinear least squares and nonlinear equation algorithms tend to be specializations. In the unconstrained optimization problem, we seek a local minimizer of a real-valued function f(x), where x is a vector of real variables. In other words, we seek a vector x* such that f(x*) <= f(x) for all x close to x*. Global optimization algorithms try to find an x* that minimizes f over all possible vectors x. This is a much harder problem to solve. At present, no efficient algorithm is known for performing this task. For many applications, local minima are good enough, particularly when the user can draw on his/her own experience and provide a good starting point for the algorithm.

Mathematical optimization
Unconstrained
Newton's method gives rise to a wide and important class of algorithms that require computation of the gradient vector ∇f(x) and the Hessian matrix ∇²f(x). Newton's method forms a quadratic model of the objective function around the current iterate x_k. The model function is defined by
m_k(p) = f(x_k) + ∇f(x_k)^T p + (1/2) p^T ∇²f(x_k) p.
When the Hessian matrix ∇²f(x_k) is positive definite, the quadratic model has a unique minimizer that can be obtained by solving the symmetric n x n linear system
∇²f(x_k) p_k = -∇f(x_k).
The next iterate is then x_(k+1) = x_k + p_k.
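A minimal numpy sketch of this Newton iteration, added for illustration; the test function, stopping tolerance and iteration limit are assumptions, not part of the original slides.

import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-8, max_iter=50):
    """Newton's method: solve H(x_k) p_k = -grad f(x_k), then set x_(k+1) = x_k + p_k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess(x), -g)   # Newton step from the quadratic model
        x = x + p
    return x

# Assumed example: f(x) = exp(x1) + x1^2 + x2^2, which has a positive definite Hessian
grad = lambda x: np.array([np.exp(x[0]) + 2 * x[0], 2 * x[1]])
hess = lambda x: np.array([[np.exp(x[0]) + 2.0, 0.0], [0.0, 2.0]])
print(newton_minimize(grad, hess, [1.0, 1.0]))   # approximately [-0.3517, 0.0]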

Mathematical optimization
Constrained
The general constrained optimization problem is to minimize a nonlinear function subject to nonlinear constraints. Two equivalent formulations of this problem are useful for describing algorithms. They are
(1.1) minimize f(x) subject to c_i(x) >= 0, i in I, and c_i(x) = 0, i in E,
where each c_i is a mapping from R^n to R, and I and E are index sets for inequality and equality constraints, respectively; and
(1.2) minimize f(x) subject to l <= c(x) <= u,
where c maps R^n to R^m, and the lower-bound and upper-bound vectors l and u may contain some infinite components. Fundamental to the understanding of these algorithms is the Lagrangian function, which for formulation (1.1) is defined as
L(x, λ) = f(x) - Σ_{i in I ∪ E} λ_i c_i(x).
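For illustration only, a hedged sketch of solving a small instance of formulation (1.1) with scipy.optimize.minimize and the SLSQP method; the objective, constraints and starting point are assumed examples, not taken from the slides.

import numpy as np
from scipy.optimize import minimize

# Assumed example: minimize f(x) = (x1 - 1)^2 + (x2 - 2)^2
# subject to the equality constraint x1 + x2 = 2 and the inequality x1 >= 0.5
f = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2
constraints = [
    {"type": "eq", "fun": lambda x: x[0] + x[1] - 2.0},   # c_i(x) = 0, i in E
    {"type": "ineq", "fun": lambda x: x[0] - 0.5},        # c_i(x) >= 0, i in I
]
result = minimize(f, x0=np.array([0.0, 0.0]), method="SLSQP", constraints=constraints)
print(result.x)   # approximately [0.5, 1.5]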

Mathematical optimization
Dynamic programming
Dynamic programming may look somewhat familiar. Let's look at a particular type of shortest path problem. Suppose we wish to get from A to J in the road network of Figure 1. The numbers on the arcs represent distances. Due to the special structure of this problem, we can break it up into stages. Stage 1 contains node A, stage 2 contains nodes B, C, and D, stage 3 contains nodes E, F, and G, stage 4 contains nodes H and I, and stage 5 contains node J. The states in each stage correspond simply to the node names. So stage 3 contains states E, F, and G.

Mathematical optimization
Dynamic programming
If we let S denote a node in stage j and let fj(S) be the shortest distance from node S to the destination J, we can write
fj(S) = min over Z { cSZ + fj+1(Z) },
where the minimum is taken over the nodes Z in stage j+1 and cSZ denotes the length of arc SZ. This gives the recursion needed to solve this problem. We begin by setting f5(J) = 0.

Mathematical optimization
Dynamic programming
Stage 4. During stage 4, there are no real decisions to make: you simply go to your destination J. So you get f4(H) = 3 by going to J and f4(I) = 4 by going to J.
Stage 3. Here there are more choices. Here is how to calculate f3(F). From F you can either go to H or to I. The immediate cost of going to H is 6 and the following cost is f4(H) = 3, for a total of 9. The immediate cost of going to I is 3 and the following cost is f4(I) = 4, for a total of 7. Therefore, if you are ever at F, the best thing to do is to go to I. The total cost is 7, so f3(F) = 7.
Stages 2 and 1. You now continue working back in the same way until f1(A), the shortest distance from A to J, is obtained.
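A minimal Python sketch of this backward recursion on a staged graph. Since Figure 1 is not reproduced here, most arc lengths below are assumed placeholder values; only the F-H, F-I, H-J and I-J lengths come from the text above.

# Backward dynamic-programming recursion fj(S) = min over Z { cSZ + fj+1(Z) }
arcs = {
    "A": {"B": 2, "C": 5, "D": 1},   # assumed
    "B": {"E": 4, "F": 6},           # assumed
    "C": {"E": 3, "F": 2, "G": 7},   # assumed
    "D": {"F": 8, "G": 4},           # assumed
    "E": {"H": 2, "I": 5},           # assumed
    "F": {"H": 6, "I": 3},           # from the text
    "G": {"H": 3, "I": 1},           # assumed
    "H": {"J": 3},                   # from f4(H) = 3
    "I": {"J": 4},                   # from f4(I) = 4
}
stages = [["A"], ["B", "C", "D"], ["E", "F", "G"], ["H", "I"], ["J"]]

f = {"J": 0}                         # boundary condition f5(J) = 0
for stage in reversed(stages[:-1]):  # work back from stage 4 to stage 1
    for node in stage:
        f[node] = min(cost + f[nxt] for nxt, cost in arcs[node].items())
print(f["F"])   # 7, matching f3(F) computed on the slide
print(f["A"])   # shortest A-to-J distance for these (partly assumed) arc lengths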

Mathematical optimization
Of special interest is the nonlinear least squares problem, in which the objective function is S(x) = Σ_i F_i(x)^2. The Gauss-Newton method can be used, in which the search direction is
d_k = -(J(x_k)^T J(x_k))^(-1) ∇S(x_k),
where J is the Jacobian of the residuals F_i. There are a number of problems with the Gauss-Newton method (poor conditioning, the direction may fail to be a descent direction). The Levenberg-Marquardt method is an attempt to deal with these problems by using
d_k = -(J(x_k)^T J(x_k) + ω I)^(-1) ∇S(x_k).
The presence of ω deals with the conditioning problem, and for sufficiently large values it ensures that d_k is a descent direction.
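A small numpy sketch of the Levenberg-Marquardt step described above with a fixed damping parameter ω; the exponential-fit residuals, the data values, the starting point and the value of ω are assumptions for illustration.

import numpy as np

# Assumed example: fit y ~ a * exp(b * t) by minimizing S(x) = sum_i F_i(x)^2,
# with residuals F_i(x) = a * exp(b * t_i) - y_i.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.0, 2.7, 3.7, 5.0])

def residuals(x):
    a, b = x
    return a * np.exp(b * t) - y

def jacobian(x):
    a, b = x
    return np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])

x = np.array([1.0, 0.1])   # starting point (assumed)
omega = 1e-3               # damping parameter (assumed fixed here)
for _ in range(50):
    J, F = jacobian(x), residuals(x)
    # Classical Levenberg-Marquardt step: parallel to -(J^T J + omega I)^(-1) grad S(x_k),
    # since grad S(x) = 2 J^T F(x)
    d = -np.linalg.solve(J.T @ J + omega * np.eye(2), J.T @ F)
    x = x + d
print(x)   # fitted (a, b), roughly (2.0, 0.3) for this data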

Genetic Algorithms (GA)
GAs are programs used to deal with optimization problems. They were first introduced by Holland. Their goal is to find the optimum of a given function F on a given search space S. GAs are very robust techniques, able to deal with a very large class of optimization problems. The key idea of genetic algorithms comes from nature: instead of searching for a single optimal solution to a particular problem, a population of candidate solutions is bred in a conceptually parallel fashion. Individuals in the population have a genetic code (sometimes called chromosomes) which represents their inheritable properties. New individuals are created from one or two parents, reminiscent of asexual and sexual reproduction in living organisms. While in asexual reproduction only random changes occur in the genetic code of an individual, in sexual reproduction the crossover of genes mixes the genetic codes of the parents.

Genetic Algorithms
The process starts with a randomly generated initial "population" of individuals. In an initialization step a set of points of the search space S is created, and then the genetic algorithm iterates over the following 4 steps:
Evaluation: The function F, or fitness value, is computed for each individual, ordering the population from worst to best.
Selection: Pairs of individuals are selected, the best individuals having a better chance of being selected than poor ones.
Reproduction: New individuals (called "offspring") are produced from these pairs.
Replacement: A new population is generated by replacing some of the individuals of the old population with the offspring.
Reproduction is done using "genetic operators".

Genetic Algorithms
A number of "genetic operators" may be used, but the two most common are mutation and crossover. The mutation operator picks at random some mutation locations among the N possible sites in the vector and flips the value of the bits at these locations. Crossover: children are created by combining the vectors of a pair of parents.
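To make the loop and the two operators concrete, a minimal binary-coded GA sketch in Python maximizing an assumed toy fitness function (the number of 1-bits in the chromosome); the population size, mutation rate, tournament selection and one-point crossover scheme are illustrative choices, not prescriptions from the slides.

import random

N = 20                                       # chromosome length (number of bit sites)
POP, GENS, MUT_RATE = 30, 60, 1.0 / N

fitness = lambda ind: sum(ind)               # assumed toy fitness: count of 1-bits

def select(pop):
    """Pick one parent; fitter individuals have a better chance (tournament of 2)."""
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    """One-point crossover: combine the vectors of a pair of parents."""
    cut = random.randint(1, N - 1)
    return p1[:cut] + p2[cut:]

def mutate(ind):
    """Flip bits at randomly chosen mutation sites."""
    return [1 - bit if random.random() < MUT_RATE else bit for bit in ind]

population = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness)             # evaluation: order from worst to best
    offspring = [mutate(crossover(select(population), select(population)))
                 for _ in range(POP // 2)]
    population = population[POP // 2:] + offspring   # replacement: keep the best half
print(max(fitness(ind) for ind in population))       # approaches N as the GA converges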