Unconstrained optimization


Unconstrained optimization
Gradient based algorithms
– Steepest descent
– Conjugate gradients
– Newton and quasi-Newton
Population based algorithms
– Nelder-Mead's sequential simplex
– Stochastic algorithms

Unconstrained local minimization
– The necessity for one-dimensional searches: each step has the form x_{k+1} = x_k + α_k s_k, where the step length α_k along the search direction s_k is found by a one-dimensional search.
– The most intuitive choice of s_k is the direction of steepest descent, s_k = −∇f(x_k).
– This choice, however, is very poor.
– Methods are based on the dictum that all functions of interest are locally quadratic.
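A minimal Matlab sketch of steepest descent with a one-dimensional (backtracking) line search; the function handles, starting point, tolerance, and Armijo constant are illustrative assumptions, not taken from the slides:

function [x, fval] = steepest_descent(f, gradf, x0, tol, maxit)
% Steepest descent with a simple backtracking line search (illustrative sketch).
% f     - handle returning the objective value
% gradf - handle returning the gradient as a column vector
x = x0(:);
for k = 1:maxit
    g = gradf(x);
    if norm(g) < tol            % stop when the gradient is small
        break
    end
    s = -g;                     % steepest-descent direction
    alpha = 1;                  % backtracking (Armijo) one-dimensional search
    while f(x + alpha*s) > f(x) + 1e-4*alpha*(g'*s)
        alpha = 0.5*alpha;
    end
    x = x + alpha*s;            % take the step along s
end
fval = f(x);

On a badly scaled function such as the banana function used later in these slides, this iteration zigzags and converges slowly, which is why the slide calls the choice "very poor".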

Conjugate gradients
[Figure not reproduced in the transcript.] What are the unlabeled axes?
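Since the slide itself shows only a figure, here is a hedged sketch of one common variant, the Fletcher-Reeves nonlinear conjugate-gradient iteration; the line-search bracket, stopping test, and function handles are assumptions:

function x = conjugate_gradient(f, gradf, x0, tol, maxit)
% Nonlinear conjugate gradients (Fletcher-Reeves) - illustrative sketch.
x = x0(:);
g = gradf(x);
s = -g;                                      % first step is steepest descent
for k = 1:maxit
    if norm(g) < tol, break, end
    alpha = fminbnd(@(a) f(x + a*s), 0, 2);  % one-dimensional search along s (bracket assumed)
    x = x + alpha*s;
    gnew = gradf(x);
    beta = (gnew'*gnew) / (g'*g);            % Fletcher-Reeves beta
    s = -gnew + beta*s;                      % new direction is conjugate to the previous one
    g = gnew;
end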

Newton and quasi-Newton methods
– Newton's method: the search direction solves H(x_k) s_k = −∇f(x_k), using the exact Hessian H.
– Quasi-Newton methods use successive evaluations of gradients to obtain an approximation to the Hessian or its inverse.
– The earliest was DFP; currently the best known is BFGS.
– Like conjugate gradients, they are guaranteed to converge in n steps or less for a quadratic function.
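A minimal sketch of a quasi-Newton (BFGS) iteration that updates an approximation B to the inverse Hessian; the identity starting matrix, line-search bracket, and tolerances are assumptions made for illustration:

function x = bfgs_sketch(f, gradf, x0, tol, maxit)
% Quasi-Newton (BFGS) sketch: B approximates the INVERSE Hessian.
n = numel(x0);
x = x0(:);
B = eye(n);                                  % start from the identity matrix
g = gradf(x);
for k = 1:maxit
    if norm(g) < tol, break, end
    s = -B*g;                                % quasi-Newton search direction
    alpha = fminbnd(@(a) f(x + a*s), 0, 2);  % one-dimensional search (bracket assumed)
    dx = alpha*s;
    x  = x + dx;
    gnew = gradf(x);
    y   = gnew - g;                          % change in gradient over the step
    rho = 1/(y'*dx);
    % BFGS update of the inverse-Hessian approximation
    B = (eye(n) - rho*(dx*y'))*B*(eye(n) - rho*(y*dx')) + rho*(dx*dx');
    g = gnew;
end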

Matlab fminunc
X = FMINUNC(FUN,X0,OPTIONS) minimizes with the default optimization parameters replaced by values in the structure OPTIONS, an argument created with the OPTIMSET function. See OPTIMSET for details. Used options are Display, TolX, TolFun, DerivativeCheck, Diagnostics, FunValCheck, GradObj, HessPattern, Hessian, HessMult, HessUpdate, InitialHessType, InitialHessMatrix, MaxFunEvals, MaxIter, DiffMinChange and DiffMaxChange, LargeScale, MaxPCGIter, PrecondBandWidth, TolPCG, TypicalX.
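For example, options can be created with OPTIMSET and passed to FMINUNC; the objective handle and starting point below are assumptions for illustration:

% Assumed example: show per-iteration output and tighten the function tolerance
options = optimset('Display','iter', 'TolFun',1e-10, 'TolX',1e-8);
fun = @(x) 100*(x(2)-x(1)^2)^2 + (1-x(1))^2;   % Rosenbrock banana function (as in banana.m below)
x0  = [-1.2 1];                                % assumed starting point
[x, fval, exitflag, output] = fminunc(fun, x0, options);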

Rosenbrock Banana function
– Vanderplaats's version
– My version
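For reference, the form actually minimized later in these slides (see the banana.m listing below) is the standard Rosenbrock function; the contour-plot snippet here is an assumed illustration of why it is called the banana function:

% Standard Rosenbrock ("banana") function, as used in banana.m later in these slides
f = @(x1,x2) 100*(x2 - x1.^2).^2 + (1 - x1).^2;
[X1, X2] = meshgrid(-2:0.05:2, -1:0.05:3);        % plotting grid (assumed range)
contour(X1, X2, f(X1,X2), logspace(-1, 3, 20))    % log-spaced contour levels
xlabel('x_1'), ylabel('x_2'), title('Rosenbrock banana function')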

Matlab output
[x,fval,exitflag,output] = fminunc(@banana, [... 1])
Warning: Gradient must be provided for trust-region algorithm; using line-search algorithm instead.
Local minimum found.
Optimization completed because the size of the gradient is less than the default value of the function tolerance.
x =
fval = 2.8336e-011
exitflag = 1
output =
    iterations: 36
    funcCount: 138
    algorithm: 'medium-scale: Quasi-Newton line search'
How would we reduce the number of iterations?
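One way to reduce the iteration count (a sketch of a likely answer, not taken from the slides) is to supply the analytic gradient, so that fminunc no longer has to approximate it by finite differences; the file name and starting point below are assumptions:

function [y, g] = banana_with_grad(x)
% Banana function returning the objective and its analytic gradient
y = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
g = [-400*(x(2) - x(1)^2)*x(1) - 2*(1 - x(1));   % dy/dx1
      200*(x(2) - x(1)^2)];                      % dy/dx2

% Tell fminunc that the gradient is provided, then rerun:
options = optimset('GradObj','on');
[x, fval, exitflag, output] = fminunc(@banana_with_grad, [-1.2 1], options);  % assumed x0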

Sequential Simplex Method (Section 4.2.1)
– In n-dimensional space, start with n+1 particles at the vertices of a regular (e.g., equilateral) simplex.
– Reflect the worst point about the centroid (c.g.) of the remaining points.
– Read about expansion and contraction.
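A hedged sketch of the reflection step only (expansion and contraction are left to the reading); storing the simplex vertices as the rows of a matrix is an assumed convention:

function X = simplex_reflect_step(f, X)
% One reflection step of the sequential simplex (Nelder-Mead) method.
% X is an (n+1)-by-n matrix whose rows are the simplex vertices.
fv = zeros(size(X,1), 1);
for i = 1:size(X,1)
    fv(i) = f(X(i,:));                 % evaluate every vertex
end
[fworst, worst] = max(fv);             % index of the worst (highest) vertex
others = X;
others(worst,:) = [];                  % the remaining n vertices
c = mean(others, 1);                   % centroid (c.g.) of the remaining vertices
xr = c + (c - X(worst,:));             % reflect the worst point through the centroid
if f(xr) < fworst
    X(worst,:) = xr;                   % accept the reflected point if it improves on the worst
end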

Matlab commands
function [y]=banana(x)
global z1
global z2
global yg
global count
y=100*(x(2)-x(1)^2)^2+(1-x(1))^2;
z1(count)=x(1);
z2(count)=x(2);
yg(count)=y;
count=count+1;

>> global z2
>> global yg
>> global z1
>> global count
>> count = 1;
>> options = optimset('MaxFunEvals',20)
>> [x,fval] = fminsearch(@banana, [... 1], options)
>> mat = [z1;z2;yg]
mat =
  Columns 1 through ...
  Columns 9 through ...
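Because the globals record every trial point, the first 20 evaluations can then be inspected or plotted; the commands below are an assumed illustration, not part of the slides:

% Assumed illustration: visualize the trial points recorded by the globals
plot(z1, z2, 'o-')                      % points visited in the (x1, x2) plane
xlabel('x_1'), ylabel('x_2')
title('First 20 banana-function evaluations by fminsearch')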

fminsearch Banana function

Next iteration

Completed search
[x,fval,exitflag,output] = fminsearch(@banana, [... 1])
x =
fval = 8.1777e-010
exitflag = 1
output =
    iterations: 85
    funcCount: 159
    algorithm: 'Nelder-Mead simplex direct search'
Why is the number of iterations large compared to the number of function evaluations (36 and 138 for fminunc)?