An Introduction to Optimization Theory
Outline
1. Introduction
2. Unconstrained optimization problem
3. Constrained optimization problem
Introduction Mathematically speaking, optimization is the minimization of an objective function subject to constraints on its variables. In symbols, we have
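The general form of the problem (a standard formulation, supplied here because the slide's equation did not survive extraction) can be written as:

```latex
\min_{x \in \mathbb{R}^n} f(x)
\quad \text{subject to} \quad
\begin{cases}
h_i(x) = 0, & i = 1, \dots, m, \\
g_j(x) \le 0, & j = 1, \dots, p,
\end{cases}
```

where f is the objective function and the h_i and g_j encode equality and inequality constraints on the variables x.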
Introduction ---- Linear regression
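The linear regression example can be cast as an unconstrained least-squares problem, minimizing ||Aw - y||^2 over the weights w. A minimal sketch on synthetic data (the data and variable names are illustrative assumptions, not from the slides):

```python
import numpy as np

# Linear regression as unconstrained least squares: minimize ||A w - y||^2 over w.
# Synthetic, noise-free data from y = 3x + 1 (an illustrative assumption).
x = np.array([0.0, 1.0, 2.0, 3.0])
A = np.column_stack([x, np.ones_like(x)])  # design matrix [x, 1]
y = 3.0 * x + 1.0
w, *_ = np.linalg.lstsq(A, y, rcond=None)  # closed-form least-squares solution
# w is approximately [3.0, 1.0] (slope, intercept)
```

Because this objective is quadratic, the minimizer is available in closed form; the iterative methods below target objectives without such a solution.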
Introduction ---- Battery charger
Unconstrained optimization problem Definition of the unconstrained optimization problem:
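A standard statement of the definition (the slide's own equation was lost in extraction):

```latex
\min_{x \in \mathbb{R}^n} f(x), \qquad f : \mathbb{R}^n \to \mathbb{R},
```

with no restriction on x. Assuming f is differentiable, a first-order necessary condition for a local minimizer x* is \(\nabla f(x^*) = 0\).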
Gradient descent algorithm
Gradient descent algorithm may be trapped at a local extremum instead of the global extremum
Gradient descent algorithm Methodology for choosing a suitable step size α_k ---- Steepest descent algorithm
Steepest descent algorithm with quadratic cost function:
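For the quadratic cost f(x) = ½xᵀQx − bᵀx with Q symmetric positive definite, the exact line search has a closed form (the standard steepest-descent result, restated here since the slide's formula did not survive):

```latex
g^{(k)} = \nabla f\bigl(x^{(k)}\bigr) = Q x^{(k)} - b, \qquad
\alpha_k = \arg\min_{\alpha \ge 0} f\bigl(x^{(k)} - \alpha g^{(k)}\bigr)
        = \frac{g^{(k)\top} g^{(k)}}{g^{(k)\top} Q g^{(k)}} .
```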
Gradient descent algorithm Update equation:
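The update equation referred to here is the standard gradient step x^{(k+1)} = x^{(k)} − α_k ∇f(x^{(k)}). A minimal fixed-step sketch (the test function and step size are illustrative choices):

```python
import numpy as np

def gradient_descent(grad, x0, alpha=0.1, iters=100):
    """Fixed-step gradient descent: x <- x - alpha * grad(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - alpha * grad(x)
    return x

# Example: f(x) = (x1 - 1)^2 + (x2 + 2)^2, minimizer at (1, -2)
grad = lambda x: 2.0 * (x - np.array([1.0, -2.0]))
x_star = gradient_descent(grad, [0.0, 0.0])
```

With a fixed α the iteration contracts toward the minimizer on this strongly convex quadratic; the steepest-descent rule above would instead pick α_k by exact line search at every step.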
Newton's method Summary of Newton's method
Procedure for Newton's method
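Newton's method replaces the scalar step size with the inverse Hessian: x^{(k+1)} = x^{(k)} − F(x^{(k)})⁻¹ ∇f(x^{(k)}), where F denotes the Hessian. A sketch on a simple test function (the function itself is an illustrative assumption):

```python
import numpy as np

def newton_method(grad, hess, x0, iters=20):
    """Newton's method: x <- x - F(x)^{-1} grad(x), with F the Hessian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # Solve F(x) d = grad(x) instead of forming the explicit inverse
        x = x - np.linalg.solve(hess(x), grad(x))
    return x

# Example: f(x) = x1^4 + x2^2, minimizer at the origin
grad = lambda x: np.array([4.0 * x[0] ** 3, 2.0 * x[1]])
hess = lambda x: np.array([[12.0 * x[0] ** 2, 0.0], [0.0, 2.0]])
x_star = newton_method(grad, hess, [1.0, 1.0])
```

Solving the linear system rather than inverting F is the usual numerically preferable choice.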
Quasi-Newton method
What properties of F(x^{(k)})^{-1} should H_k mimic?
1. H_k should be a symmetric matrix
2. H_k should satisfy the secant property
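The secant property is commonly stated, in this notation (F the Hessian, H_k its approximate inverse), as:

```latex
H_{k+1} \, \Delta g^{(i)} = \Delta x^{(i)}, \qquad 0 \le i \le k,
```

where \(\Delta x^{(i)} = x^{(i+1)} - x^{(i)}\) and \(\Delta g^{(i)} = g^{(i+1)} - g^{(i)}\) are the differences of successive iterates and gradients.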
Quasi-Newton method Typical approaches for the Quasi-Newton method:
1. Rank-one formula
2. DFP algorithm
3. BFGS algorithm (L-BFGS, where L indicates limited-memory)
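In Python, BFGS is available through SciPy's `minimize` (assuming SciPy is installed); a small sketch on an illustrative quadratic:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize f(x) = (x1 - 1)^2 + 10 (x2 - 2)^2 with the BFGS quasi-Newton method.
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] - 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] - 2.0)])
res = minimize(f, x0=[0.0, 0.0], jac=grad, method="BFGS")
# res.x is approximately [1.0, 2.0]
```

Only gradients are supplied; BFGS builds its own inverse-Hessian approximation H_k from the iterates, which is exactly what distinguishes quasi-Newton methods from Newton's method.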
Constrained optimization problem Definition of the constrained optimization problem
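A common statement of the definition (standard form, restated since the slide's equation was lost): minimize f over a feasible set Ω,

```latex
\min_{x \in \Omega} f(x), \qquad
\Omega = \{\, x \in \mathbb{R}^n : h(x) = 0,\; g(x) \le 0 \,\},
```

with \(h : \mathbb{R}^n \to \mathbb{R}^m\) and \(g : \mathbb{R}^n \to \mathbb{R}^p\).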
Problems with equality constraints ---- Lagrange multipliers
Suppose x* is a local minimizer
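For the equality-constrained problem min f(x) subject to h(x) = 0 with h: R^n → R^m, the Lagrange condition at a local minimizer x* (assuming the regularity condition that the constraint gradients are linearly independent there) is:

```latex
\nabla f(x^*) + \sum_{i=1}^{m} \lambda_i^* \, \nabla h_i(x^*) = 0,
\qquad h(x^*) = 0,
```

for some Lagrange multiplier vector \(\lambda^* \in \mathbb{R}^m\).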
Karush-Kuhn-Tucker condition (KKT) From now on, we will consider the following problem
Karush-Kuhn-Tucker condition (KKT) Note that:
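For min f(x) subject to h(x) = 0 and g(x) ≤ 0, the KKT first-order necessary conditions at a local minimizer x* (under a constraint qualification) read:

```latex
\begin{aligned}
&\mu^* \ge 0, \\
&\nabla f(x^*) + \sum_{i} \lambda_i^* \,\nabla h_i(x^*)
              + \sum_{j} \mu_j^* \,\nabla g_j(x^*) = 0, \\
&\mu_j^* \, g_j(x^*) = 0 \;\;\text{for all } j,
\qquad h(x^*) = 0, \;\; g(x^*) \le 0 .
\end{aligned}
```

The complementarity condition \(\mu_j^* g_j(x^*) = 0\) says that a multiplier can be nonzero only for constraints active at x*.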
Gradient descent with projection
[Figure: illustration of gradient descent with projection: starting from an initial solution, iterates that leave the constrained set Ω are projected back onto Ω]
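Projected gradient descent alternates a gradient step with a projection back onto Ω; a sketch for a unit-ball constraint (the objective and the constrained set are illustrative assumptions):

```python
import numpy as np

def projected_gradient_descent(grad, project, x0, alpha=0.1, iters=200):
    """Gradient step followed by projection back onto the constrained set."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project(x - alpha * grad(x))
    return x

# Example: minimize ||x - c||^2 over the unit ball, with c = (2, 0) outside it;
# the constrained minimizer is the projection of c onto the ball, i.e. (1, 0).
c = np.array([2.0, 0.0])
grad = lambda x: 2.0 * (x - c)
project = lambda x: x / max(1.0, np.linalg.norm(x))  # projection onto unit ball
x_star = projected_gradient_descent(grad, project, [0.0, 0.0])
```

The method is practical whenever the projection onto Ω is cheap to compute, as it is for balls, boxes, and simplices.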
Useful Matlab instructions for optimization
1. fminunc: Solver for unconstrained optimization problems
2. fmincon: Solver for constrained optimization problems
3. linprog: Solver for linear programming problems
4. quadprog: Solver for quadratic programming problems