MATH 685/ CSI 700/ OR 682 Lecture Notes Lecture 9. Optimization problems.

Optimization

Optimization problems

Examples

Global vs. local optimization

Global optimization
- Finding, or even verifying, a global minimum is difficult in general.
- Most optimization methods are designed to find a local minimum, which may or may not be a global minimum.
- If a global minimum is desired, one can try several widely separated starting points and see whether all produce the same result; a minimal sketch of this multi-start idea follows.
- For some problems, such as linear programming, global optimization is more tractable.
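
A minimal multi-start sketch in Python. The objective f, the starting points, and the use of scipy's general-purpose minimize are illustrative assumptions, not part of the lecture:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical objective with several local minima.
    def f(x):
        return np.sin(3 * x[0]) + 0.1 * x[0]**2

    # Widely separated starting points; if all runs agree, that is
    # (weak) evidence that the local minimum found is in fact global.
    starts = [np.array([s]) for s in (-10.0, -3.0, 0.0, 3.0, 10.0)]
    results = [minimize(f, x0) for x0 in starts]
    best = min(results, key=lambda r: r.fun)
    print(best.x, best.fun)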

Existence of Minimum

Level sets

Uniqueness of minimum

First-order optimality condition

Second-order optimality condition

Constrained optimality

If inequalities are present, then the KKT optimality conditions also require nonnegativity of the Lagrange multipliers corresponding to the inequalities, together with a complementarity condition.
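
For reference, a standard LaTeX statement of these conditions for min f(x) subject to g(x) = 0 and h(x) <= 0; the notation (g for equalities, h for inequalities, J for Jacobians) is an assumption, since the slide gives only the text above:

    % KKT conditions for  min f(x)  s.t.  g(x) = 0,  h(x) <= 0
    \nabla f(x^*) + J_g^T(x^*)\,\lambda^* + J_h^T(x^*)\,\mu^* = 0   % stationarity
    g(x^*) = 0, \quad h(x^*) \le 0                                  % feasibility
    \mu^* \ge 0                                                     % nonnegativity of multipliers
    \mu_i^*\, h_i(x^*) = 0 \ \text{for all } i                      % complementarity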

Sensitivity and conditioning

Unimodality

Golden section search

Example

Example (cont.)

Successive parabolic interpolation

Example

Newton’s method Newton’s method for finding a minimum normally has a quadratic convergence rate, but it must be started close enough to the solution to converge.
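
A minimal one-dimensional sketch: Newton's root-finding iteration applied to f', i.e. x <- x - f'(x)/f''(x). The example objective is an assumption for illustration:

    import math

    def newton_min(fprime, fsecond, x0, tol=1e-10, maxiter=50):
        """Newton's method for 1D minimization: find a zero of f'.

        Quadratically convergent near a minimum with f'' > 0, but it
        may diverge, or land on a maximum, if started too far away.
        """
        x = x0
        for _ in range(maxiter):
            step = fprime(x) / fsecond(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    # Assumed example: minimize f(x) = 0.5*x**2 - sin(x), so f'(x) = x - cos(x).
    x_star = newton_min(lambda x: x - math.cos(x),
                        lambda x: 1.0 + math.sin(x), x0=1.0)
    print(x_star)  # approx. 0.739085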

Example

Safeguarded methods

Multidimensional optimization. Direct search methods

Steepest descent method

Example

Example (cont.)

Newton’s method

Example

Newton’s method

Trust region methods

Quasi-Newton methods

Secant updating methods

BFGS method

Example For a quadratic objective function, BFGS with exact line search finds the exact solution in at most n iterations, where n is the dimension of the problem.
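
A sketch of that special case, assuming the quadratic f(x) = 0.5 x^T A x - b^T x with A symmetric positive definite, for which the exact line search step has a closed form:

    import numpy as np

    def bfgs_quadratic(A, b, x0):
        """BFGS with exact line search on f(x) = 0.5 x^T A x - b^T x.

        For a quadratic objective this terminates at the exact
        minimizer in at most n iterations, as stated on the slide.
        """
        n = len(b)
        x, H = x0.copy(), np.eye(n)          # H approximates the inverse Hessian
        g = A @ x - b                        # gradient of the quadratic
        for _ in range(n):
            if np.linalg.norm(g) < 1e-12:
                break
            p = -H @ g                       # quasi-Newton search direction
            alpha = -(g @ p) / (p @ A @ p)   # exact line search for a quadratic
            s = alpha * p
            x_new = x + s
            g_new = A @ x_new - b
            y = g_new - g
            rho = 1.0 / (y @ s)
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)       # BFGS inverse-Hessian update
            x, g = x_new, g_new
        return x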

Conjugate gradient method

CG method

CG method example

Example (cont.)

Truncated Newton methods
- Another way to reduce the work in Newton-like methods is to solve the linear system for the Newton step by an iterative method.
- A small number of iterations may suffice to produce a step as useful as the true Newton step, especially far from the overall solution, where the true Newton step may be unreliable anyway.
- A good choice for the iterative linear solver is the CG method, which gives a step intermediate between the steepest descent step and the Newton-like step.
- Since only matrix-vector products are required, explicit formation of the Hessian matrix can be avoided by using a finite difference of the gradient along a given vector, as in the sketch below.
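
A minimal Hessian-free sketch of one truncated Newton step; the function names, the simple forward difference, and the fallback on negative curvature are illustrative assumptions:

    import numpy as np

    def hessvec(grad, x, v, eps=1e-6):
        """Approximate the Hessian-vector product H(x) v by a finite
        difference of the gradient, so H is never formed explicitly."""
        return (grad(x + eps * v) - grad(x)) / eps

    def truncated_newton_step(grad, x, cg_iters=10, tol=1e-8):
        """Approximately solve H p = -g with a few CG iterations."""
        g = grad(x)
        p = np.zeros_like(x)
        r = -g.copy()                 # residual of H p = -g at p = 0
        d = r.copy()
        rs = r @ r
        for _ in range(cg_iters):
            Hd = hessvec(grad, x, d)
            if d @ Hd <= 0:           # negative curvature detected:
                break                 # stop and keep the progress so far
            alpha = rs / (d @ Hd)
            p += alpha * d
            r -= alpha * Hd
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            d = r + (rs_new / rs) * d
            rs = rs_new
        return p if p.any() else -g   # no progress: steepest descent step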

Nonlinear Least squares

Nonlinear least squares

Gauss-Newton method

Example

Example (cont.)

Gauss-Newton method

Levenberg-Marquardt method With a suitable strategy for choosing μk, this method can be very robust in practice, and it forms the basis for several effective software packages.
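
A minimal sketch for min 0.5 ||r(x)||^2, solving (J^T J + μ I) dx = -J^T r at each step; the simple accept/reject rule for updating μ is one assumed strategy among many:

    import numpy as np

    def levenberg_marquardt(residual, jacobian, x0, mu=1e-3, iters=50):
        """Levenberg-Marquardt sketch for nonlinear least squares."""
        x = x0.copy()
        for _ in range(iters):
            r, J = residual(x), jacobian(x)
            dx = np.linalg.solve(J.T @ J + mu * np.eye(len(x)), -J.T @ r)
            if 0.5 * np.sum(residual(x + dx)**2) < 0.5 * np.sum(r**2):
                x, mu = x + dx, mu * 0.5     # accept; lean toward Gauss-Newton
            else:
                mu *= 10.0                   # reject; lean toward steepest descent
        return x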

Equality-constrained optimization

Sequential quadratic programming

Merit function

Inequality-constrained optimization

Penalty methods This enables the use of unconstrained optimization methods, but the problem becomes ill-conditioned for large ρ, so we solve a sequence of problems with gradually increasing values of ρ, with the minimum for each problem used as the starting point for the next problem.
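
A quadratic-penalty sketch for min f(x) subject to g(x) = 0; the growth factor, iteration counts, and the use of scipy's minimize for the subproblems are illustrative assumptions:

    import numpy as np
    from scipy.optimize import minimize

    def quadratic_penalty(f, g, x0, rho=1.0, rho_growth=10.0, outer=8):
        """Each outer pass minimizes f + 0.5*rho*||g||^2 without
        constraints; rho grows gradually, and each minimizer
        warm-starts the next subproblem, mitigating the
        ill-conditioning at large rho."""
        x = np.asarray(x0, dtype=float)
        for _ in range(outer):
            def phi(z, rho=rho):  # bind the current penalty parameter
                return f(z) + 0.5 * rho * np.sum(np.asarray(g(z))**2)
            x = minimize(phi, x).x  # unconstrained subproblem, warm-started
            rho *= rho_growth
        return x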

Barrier methods

Example: constrained optimization

Example (cont.)

Linear programming

Linear programming

Example: linear programming