Optimization with MATLAB


Optimization with MATLAB – A Hands-On Workshop
Farrukh Nagi, Universiti Teknologi MARA, Shah Alam Campus, 25-26 June 2014
http://metalab.uniten.edu.my/~farrukh/Utim/Utimoptim.zip

Introduction to Non-Linear Optimization – PART I

Contents – PART 1: INTRODUCTION TO OPTIMIZATION
1. Introduction to Optimization
2. Fundamentals of Optimization – Mathematical Background
3. Unconstrained Optimization Methods
   - Gradient Descent Methods (Steepest Descent)
   - Least Squares Methods
   - Simplex Methods
4. Constrained Optimization

Contents – PART 2: MATLAB OPTIMIZATION
- Matlab/Simulink Optimization Methods
- Function Optimization and Minimization Algorithms
- Unconstrained Optimization – fminunc, fminsearch
- Optimization Options Settings
- Constrained Optimization
- Multi-objective Optimization – lsqnonlin
- Optimization Toolbox GUI: >>optimtool
- Environmental Science Optimization

Contents – PART 3: SIMULINK OPTIMIZATION RESPONSE /IEEE_optim/…
- Parametric Modeling
- Parameter Identification
- Parameter Passing with Component Block Input
- Simulink Optimization Design (SOD) – GUI
- Optimizer Output
- Simulink Examples List
- References

What is Optimization? Optimization is an iterative process that finds a desired solution (maximum or minimum) of a problem while satisfying all of its constraints or bound conditions. Figure 2: The optimum solution is found while satisfying the constraints (the derivative is zero at the optimum). An optimization problem may be linear or non-linear. Non-linear optimization is accomplished by numerical 'search methods', which are applied iteratively until a solution is reached. The search procedure is termed an algorithm.

Optimization Methods
One-Dimensional Unconstrained Optimization
- Golden-Section Search
- Quadratic Interpolation
- Newton's Method
Multi-Dimensional Unconstrained Optimization
- Non-gradient (direct) methods
- Gradient methods
Linear Programming (Constrained)
- Graphical Solution
- Simplex Method
Genetic Algorithm (GA) – survival-of-the-fittest principle based on evolutionary theory: \IEEE_OPTIM_2012\GA\GA Presentation.ppt
Particle Swarm Optimization (PSO) – concept of the best solution in the neighborhood: \IEEE_OPTIM_2012\PSO\PSO Presentation.ppt
Others …

Fundamentals of Non-Linear Optimization
The solution of a linear problem lies on the boundary of the feasible region, whereas the solution of a non-linear problem may lie within the feasible region as well as on its boundary.
Figure 3: Solution of a linear problem. Figure 4: Three-dimensional solution of a non-linear problem.

Fundamentals of Non-Linear Optimization …Contd
An optimization problem has a single objective function f(x) (to be maximized or minimized), design variables xi (i = 1, 2, 3, …), and constraints (inequality and equality). For example:
  Maximize  X1 + 1.5 X2
  Subject to:
    X1 + X2 ≤ 150
    0.25 X1 + 0.5 X2 ≤ 50
    X1 ≥ 50, X2 ≥ 25
    X1 ≥ 0, X2 ≥ 0
Figure 5: Example of design variables and constraints used in non-linear optimization.
Optimal points:
- Local minimum/maximum: a solution x* is a local optimum if no other x in its neighborhood gives a better objective value than f(x*).
- Global minimum/maximum: a solution x** is a global optimum if no other x in the entire search space gives a better objective value than f(x**).
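
As a minimal illustration (this sketch is mine, not part of the workshop files), the small linear example above can be solved directly with MATLAB's linprog; since linprog minimizes, the objective is negated to maximize X1 + 1.5 X2, and the ≥ constraints are rewritten in ≤ form.

f  = -[1; 1.5];              % maximize x1 + 1.5*x2  ->  minimize -(x1 + 1.5*x2)
A  = [ 1    1  ;             % x1 + x2 <= 150
       0.25 0.5;             % 0.25*x1 + 0.5*x2 <= 50
      -1    0  ;             % x1 >= 50  rewritten as  -x1 <= -50
       0   -1 ];             % x2 >= 25  rewritten as  -x2 <= -25
b  = [150; 50; -50; -25];
lb = [0; 0];                 % x1, x2 >= 0
[xopt, fval] = linprog(f, A, b, [], [], lb);
fprintf('x1 = %g, x2 = %g, maximum = %g\n', xopt(1), xopt(2), -fval);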

Fundamentals of Non-Linear Optimization …Contd
A set S is convex if the line segment joining any two points in the set also lies entirely in the set.
Figure 6: Global versus local optimization. Figure 7: A local optimum is also the global optimum if the function is convex.
[Figure: examples of convex and non-convex sets]

Fundamentals of Non-Linear Optimization …Contd
A function f is convex if, for any point Xa between X1 and X2, f(Xa) lies below the chord joining f(X1) and f(X2).
Convexity condition: the Hessian (2nd-order derivative) matrix of f must be positive semi-definite (all eigenvalues positive or zero).
Figure 8: Convex and non-convex sets. Figure 9: Convex function.

Optimality Conditions
First-order condition (FOC): the gradient of f vanishes at the candidate point x*.
The Hessian H(x*) is the matrix of second derivatives of f with respect to the several variables.
Second-order condition (SOC): H(x*) is positive definite, i.e. all eigenvalues of H(x*) are positive (equivalently, the determinants of all lower-order leading submatrices of H(x*) are positive).
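
A minimal numerical check of these conditions (an illustration of mine, using the quadratic f(x) = 0.5*x1^2 + 2.5*x2^2 that also appears in the steepest-descent demo later): approximate the gradient and Hessian at a candidate point by central differences, then inspect the gradient norm (FOC) and the Hessian eigenvalues (SOC).

f     = @(x) 0.5*x(1)^2 + 2.5*x(2)^2;   % example objective
xstar = [0; 0];                         % candidate optimum to test
h     = 1e-5;                           % finite-difference step (crude but adequate here)
n     = numel(xstar);
g     = zeros(n,1);
H     = zeros(n);
for i = 1:n
    ei = zeros(n,1); ei(i) = h;
    g(i) = (f(xstar+ei) - f(xstar-ei)) / (2*h);          % central-difference gradient
    for j = 1:n
        ej = zeros(n,1); ej(j) = h;
        H(i,j) = (f(xstar+ei+ej) - f(xstar+ei-ej) ...
                - f(xstar-ei+ej) + f(xstar-ei-ej)) / (4*h^2);
    end
end
disp(norm(g))    % FOC: should be (numerically) zero
disp(eig(H))     % SOC: all eigenvalues should be positive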

Optimization Methods …Constrained
a) Indirect approach – transform the constrained problem into an unconstrained one.
b) Exterior Penalty Function (EPF) and Augmented Lagrange Multiplier methods.
c) Direct methods – Sequential Linear Programming (SLP), Sequential Quadratic Programming (SQP), and the Generalized Reduced Gradient method (GRG).
Figure 10: Gradient descent (LMS).

Steepest Descent Method (slides adapted from Math 685/CSI 700, Spring 08, George Mason University, Department of Mathematical Sciences)
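
As a minimal sketch of the idea (mine, with an assumed fixed step length rather than a line search), steepest descent repeatedly moves against the gradient; the quadratic below is the same one the fminunc demo on the following slides uses.

f     = @(x) 0.5*x(1)^2 + 2.5*x(2)^2;
gradf = @(x) [x(1); 5*x(2)];
x     = [5; 1];                 % same starting point as the fminunc demo
alpha = 0.2;                    % fixed step length (a line search would adapt this)
for k = 1:50
    g = gradf(x);
    if norm(g) < 1e-6, break; end
    x = x - alpha*g;            % step in the negative gradient direction
end
fprintf('%d iterations, x = [%g %g], f = %g\n', k, x(1), x(2), f(x));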

Example

Example (cont.)

Matlab steepest descent – fminunc

%startpresentgrad.m  (the two functions below go in their own files, myfuncon2.m and outfun.m)
clc; clf; hold off
x0 = [5,1];
options = optimset('LargeScale','off','Display','iter-detailed');  % show each iteration
options = optimset(options,'GradObj','on');                        % gradient supplied by the objective
options = optimset(options,'OutputFcn',@outfun);                   % graphic display at each step
[x,fval] = fminunc(@myfuncon2,x0,options);

%myfuncon2.m
function [f,g] = myfuncon2(x)
f = 0.5*x(1)^2 + 2.5*x(2)^2;
if nargout > 1
    g(1) = x(1);              % gradient component 1 supplied
    g(2) = 5*x(2);            % gradient component 2 supplied
    foo  = max(abs(g));       % (unused diagnostic from the original slide)
end
end

%outfun.m
function stop = outfun(x,optimValues,state)
ww1 = -6:0.05:6; ww2 = ww1;
[w1,w2] = meshgrid(ww1,ww2);
J = -1*(0.5*w1.^2 + 2.5*w2.^2);
cs = contour(w1,w2,J,20); hold on; grid
%cs = surf(w1,w2,J); hold on; grid
plot(x(1),x(2),'b+'); drawnow          % mark the current iterate
stop = false;
end

Non-Linear least squares

Non-Linear least squares
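
These slides are figure-only in the transcript. As a minimal sketch (mine, with an assumed exponential model and synthetic data), the key lsqnonlin convention is that the user function returns the vector of residuals and the solver minimizes the sum of their squares; the full workshop curve-fitting example appears in Part II.

xdata = (0:0.1:1)';                                    % assumed sample points
ydata = 2*exp(-1.5*xdata) + 0.01*randn(size(xdata));   % synthetic measurements
resid = @(b) b(1)*exp(b(2)*xdata) - ydata;             % residual vector, not its sum of squares
b0    = [1; -1];                                       % initial guess for [amplitude, rate]
b     = lsqnonlin(resid, b0)                           % fitted parameters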

Simplex Methods Minimize

Derivative-free optimization Downhill simplex method
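
The downhill simplex (Nelder-Mead) method is what MATLAB's fminsearch implements. A minimal sketch (mine, using the standard Rosenbrock test function as an assumed example):

banana = @(x) 100*(x(2)-x(1)^2)^2 + (1-x(1))^2;   % Rosenbrock "banana" function
x0     = [-1.2, 1];
opts   = optimset('Display','iter','TolX',1e-6);
[xmin, fmin] = fminsearch(banana, x0, opts)       % no gradients required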

Example: constrained optimization

Example (cont.)

Example (cont.)

Environmental Sciences Optimization
Case Studies
1. Fish Harvesting …/Utim/fishharvester/
2. River Pollution …/Utim/WWTP_RivPol/
3. Noise Pollution …/Utim/machine_noise/
Data Fitting
1. Hydrology …/Utim/hydrology/
2. Anthropometric …/Utim/anthropometry/

MATLAB Optimization PART II

MATLAB/SIMULINK OPTIMIZATION METHODS
[Figure: two routes into the optimizers – MATLAB (scripts/M-files, custom code, @functions, the >>Command Window, and the >>optimtool GUI) and Simulink Design Optimization (Simulink Model.mdl, model block ports and block updates).]

Function Optimization
Optimization concerns the minimization or maximization of functions.
Standard optimization problem: minimize the objective function subject to equality constraints, inequality constraints, and side (bound) constraints, where:
- f(x) is the objective function, which measures and evaluates the performance of a system. In the standard problem the function is minimized; maximization is equivalent to minimizing the negative of the objective function.
- x is a column vector of design variables, which affect the performance of the system.

Function Optimization (Cont.)
Constraints – limitations on the design space. They can be linear or nonlinear, explicit or implicit functions:
- Equality constraints
- Inequality constraints (most algorithms require the "less-than-or-equal" form)
- Side (bound) constraints

Optimization Toolbox
The Optimization Toolbox is a collection of functions that extend the capability of MATLAB. The toolbox includes routines for:
- Unconstrained optimization
- Constrained nonlinear optimization, including goal attainment, minimax, and semi-infinite minimization problems
- Quadratic and linear programming
- Nonlinear least squares and curve fitting
- Solving nonlinear systems of equations
- Constrained linear least squares
- Specialized algorithms for large-scale problems

Unconstrained Minimization
Consider the problem of finding the values [x1 x2]T that minimize
  f(x) = exp(x1) * (4*x1^2 + 2*x2^2 + 4*x1*x2 + 2*x2 + 1)
Steps:
1. Create an M-file that returns the function value (the objective function); call it objfun.m.
2. Invoke the unconstrained minimization routine fminunc.

Step 1 – Objective Function (objfun.m)
function f = objfun(x)
f = exp(x(1))*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);

Step 2 – Invoke the Routine
Starting with an initial guess:
x0 = [-1,1];                              % initial guess (input argument)
options = optimset('LargeScale','off');   % optimization parameter settings
[xmin,feval,exitflag,output] = fminunc('objfun',x0,options);   % output arguments on the left, inputs on the right

Results
xmin =
    0.5000   -1.0000        % minimum point of the design variables
feval =
   1.3028e-010              % objective function value at the minimum
exitflag =
     1                      % exit flag > 0 means the algorithm converged to a local minimum
output =                    % some other information about the run
       iterations: 7
        funcCount: 40
         stepsize: 1
    firstorderopt: 8.1998e-004
        algorithm: 'medium-scale: Quasi-Newton line search'

More on fminunc – Input
[xmin,feval,exitflag,output,grad,hessian] = fminunc(fun,x0,options,P1,P2,…)
fun : the objective function (function handle or name).
x0 : the initial guess; a vector whose length equals the number of design variables.
options : sets some of the optimization parameters (more in a few slides).
P1,P2,… : additional parameters passed to the objective function.

More on fminunc – Output
[xmin,feval,exitflag,output,grad,hessian] = fminunc(fun,x0,options,P1,P2,…)
xmin : vector of the minimum (optimal) point; its length equals the number of design variables.
feval : the objective function value at the optimal point.
exitflag : indicates whether the optimization routine terminated successfully (converged if > 0).
output : a structure giving more details about the optimization run.
grad : the gradient at the optimal point.
hessian : the Hessian at the optimal point.

Options Setting – optimset
optimset('param1',value1,'param2',value2,…)
The routines in the Optimization Toolbox have a set of default optimization parameters. However, the toolbox allows you to alter some of them, for example the tolerances, the step size, user-supplied gradient or Hessian values, and the maximum number of iterations. There is also a list of features available, for example displaying the values at each iteration or comparing a user-supplied gradient or Hessian. You can also choose the algorithm you wish to use.

Options Setting (Cont.)
optimset('param1',value1,'param2',value2,…)
Type help optimset in the command window and a list of the available option settings will be displayed. How to read it? For example:
LargeScale - Use large-scale algorithm if possible [ {on} | off ]
Here LargeScale is the parameter (param1), on/off are its possible values (value1), and the default is the value shown in { }.

Options Setting (Cont.)
optimset('param1',value1,'param2',value2,…)
LargeScale - Use large-scale algorithm if possible [ {on} | off ]
Since the default is on, to turn it off we just type:
options = optimset('LargeScale','off')
and pass options as an input to fminunc.

Useful Option Settings (highly recommended)
Display - Level of display [ off | iter | notify | final ]
MaxIter - Maximum number of iterations allowed [ positive integer ]
TolCon - Termination tolerance on the constraint violation [ positive scalar ]
TolFun - Termination tolerance on the function value [ positive scalar ]
TolX - Termination tolerance on X [ positive scalar ]
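
For example (an illustration of mine, reusing objfun.m from the earlier slides), a typical combination of these settings:

options = optimset('Display','iter', ...   % show progress at each iteration
                   'MaxIter',200, ...      % cap the number of iterations
                   'TolFun',1e-8, ...      % function-value tolerance
                   'TolX',1e-8);           % step-size tolerance
[x, fval] = fminunc(@objfun, [-1,1], options);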

fminunc and fminsearch
fminunc uses algorithms based on gradient and Hessian information. Two modes: Large-Scale (interior-reflective Newton) and Medium-Scale (quasi-Newton, BFGS). It is not preferred for highly discontinuous functions and may only give local solutions.
fminsearch is generally less efficient than fminunc for problems of order greater than two. However, when the problem is highly discontinuous, fminsearch may be more robust. It is a direct search method that does not use numerical or analytic gradients as fminunc does, and it may also only give local solutions.
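
A small comparison sketch (mine, again reusing objfun.m): running both solvers on the same objective and checking output.funcCount shows the extra function evaluations the derivative-free search needs.

x0 = [-1, 1];
[xu, fu, ~, outu] = fminunc(@objfun, x0, optimset('LargeScale','off'));
[xs, fs, ~, outs] = fminsearch(@objfun, x0);
fprintf('fminunc: %d evals, f = %g;  fminsearch: %d evals, f = %g\n', ...
        outu.funcCount, fu, outs.funcCount, fs);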

Constrained Minimization
[xmin,feval,exitflag,output,lambda,grad,hessian] = fmincon(fun,x0,A,B,Aeq,Beq,LB,UB,NONLCON,options,P1,P2,…)
lambda is the vector of Lagrange multipliers at the optimal point.

Example
Minimize
function f = myfun(x)
f = -x(1)*x(2)*x(3);
Subject to (implemented in the code on the following slides):
  0 ≤ x1 + 2*x2 + 2*x3 ≤ 72      (linear inequalities)
  2*x1^2 + x2 ≤ 0                 (nonlinear inequality)
  0 ≤ xi ≤ 30                     (bounds)

Example (Cont.)
For the nonlinear constraint, create a function called nonlcon which returns the two constraint vectors [C,Ceq]:
function [C,Ceq] = nonlcon(x)
C = 2*x(1)^2 + x(2);   % nonlinear inequality, C(x) <= 0
Ceq = [];              % remember to return an empty matrix if the constraint type does not apply

Example (Cont.)
fmincon(fun,x0,A,B,Aeq,Beq,LB,UB,NONLCON,options,P1,P2,…)
x0 = [10;10;10];          % initial guess (3 design variables)
A = [-1 -2 -2; 1 2 2];    % linear inequality constraints A*x <= B
B = [0 72]';
LB = [0 0 0]';            % lower bounds
UB = [30 30 30]';         % upper bounds
[x,feval] = fmincon(@myfun,x0,A,B,[],[],LB,UB,@nonlcon)
Careful: keep the argument sequence A, B, Aeq, Beq, LB, UB, NONLCON – pass [] for the unused Aeq and Beq.

Example (Cont.)
Warning: Large-scale (trust region) method does not currently solve this type of problem, switching to medium-scale (line search).
> In D:\Programs\MATLAB6p1\toolbox\optim\fmincon.m at line 213
  In D:\usr\CHINTANG\OptToolbox\min_con.m at line 6
Optimization terminated successfully:
 Magnitude of directional derivative in search direction less than 2*options.TolFun and maximum constraint violation is less than options.TolCon
Active Constraints: 2  9
x =
    0.00   16.231
feval =
  -4.657237250542452e-025
The active-constraint indices follow the sequence A, B, Aeq, Beq, LB, UB, C, Ceq (constraints 1-9 in this example).
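
A short follow-up sketch (mine): requesting the lambda output from the same fmincon call returns the Lagrange multipliers, whose nonzero entries indicate which constraints are active at the solution.

[x, feval, exitflag, output, lambda] = ...
    fmincon(@myfun, x0, A, B, [], [], LB, UB, @nonlcon);
lambda.ineqlin     % multipliers of the linear inequalities A*x <= B
lambda.lower       % multipliers of the lower bounds LB
lambda.upper       % multipliers of the upper bounds UB
lambda.ineqnonlin  % multipliers of the nonlinear inequalities C(x) <= 0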

>> optimtool    % GUI

Constrained Optimization
An optimization algorithm is large scale when it uses linear algebra that does not need to store, nor operate on, full matrices. In contrast, medium-scale methods internally create full matrices and use dense linear algebra.
Optimality for constrained problems is defined through the Karush-Kuhn-Tucker (KKT) conditions. The KKT conditions are analogous to the condition that the gradient must be zero at a minimum, modified to take constraints into account; the difference is that the KKT conditions hold for constrained problems. The KKT conditions use the auxiliary Lagrangian function
  L(x, λ) = f(x) + Σ λg,i gi(x) + Σ λh,j hj(x)
where the λ are the Lagrange multipliers of the inequality constraints gi(x) and the equality constraints hj(x).

fmincon Algorithms
fmincon has four algorithm options: a) interior-point, b) active-set, c) SQP, d) trust-region-reflective.
a) An interior-point method is a linear or nonlinear programming method (Forsgren et al. 2002) that achieves optimization by going through the middle of the solid defined by the problem rather than around its surface.
b) Active-set approach: equality constraints always remain in the active set Sk. The search direction dk is calculated so that it minimizes the objective function while remaining on the active constraint boundaries.

Sequential Quadratic Programming and Trust Region
c) Sequential quadratic programming (SQP) is an iterative method for nonlinear optimization. SQP methods are used on problems for which the objective function and the constraints are twice continuously differentiable. If the problem is unconstrained, the method reduces to Newton's method for finding a point where the gradient of the objective vanishes. If the problem has only equality constraints, the method is equivalent to applying Newton's method to the first-order optimality (Karush-Kuhn-Tucker) conditions of the problem.
d) Trust-region-reflective: the basic idea is to approximate f with a simpler function q that reasonably reflects the behavior of f in a neighborhood N around the point x. This neighborhood is the trust region. A trial step s is computed by minimizing (or approximately minimizing) q over N; this is the trust-region subproblem. The current point is updated to x + s if f(x + s) < f(x); otherwise the current point remains unchanged, the trust region N is shrunk, and the trial step computation is repeated.
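
The algorithm is selected through the options structure. A minimal sketch (mine, reusing the constrained example from the earlier slides); optimoptions is the newer interface, and on older releases optimset(...,'Algorithm','sqp') serves the same purpose:

opts = optimoptions('fmincon','Algorithm','sqp','Display','iter');
[x, fval] = fmincon(@myfun, x0, A, B, [], [], LB, UB, @nonlcon, opts);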

lsqnonlin in Matlab – Multi-Objective Curve Fitting
Find b1 and b2:
>> recfit
b = 0.0603    1.0513

%recfit.m
clc; clear;
global data;
data = [ 0.6000 0.999;  0.6500 0.998;  0.7000 0.997;  0.7500 0.995;
         0.8000 0.982;  0.8500 0.975;  0.9000 0.932;  0.9500 0.862;
         1.0000 0.714;  1.0500 0.520;  1.1000 0.287;  1.1500 0.134;
         1.2000 0.0623; 1.2500 0.0245; 1.3000 0.0100; 1.3500 0.0040;
         1.4000 0.0015; 1.4500 0.0007; 1.5000 0.0003 ];   % experimental data: 1st column x, 2nd column R
x = data(:,1);
Rexp = data(:,2);
plot(x,Rexp,'ro');                          % plot the experimental data
hold on
b0 = [1.0 1.0];                             % start values for the parameters
b = lsqnonlin('recfun',b0)                  % run lsqnonlin with start value b0; fitted parameter values returned in b
Rcal = 1./(1+exp(1.0986/b(1)*(x-b(2))));    % calculate the fitted values with parameters b
plot(x,Rcal,'b');                           % plot the fitted curve on the same graph

%recfun.m
function y = recfun(b)
global data;
x = data(:,1);
Rexp = data(:,2);
Rcal = 1./(1+exp(1.0986/b(1)*(x-b(2))));    % value calculated from the model
%y = sum((Rcal-Rexp).^2);                   % not needed: lsqnonlin squares and sums the residuals itself
y = Rcal - Rexp;                            % residual vector: difference between calculated and experimental values

SIMULINK OPTIMIZATION RESPONSE PART III

SIMULINK & OPTIMIZATION DESIGN
[Figure: a Simulink model block serves as the objective function for the optimizer (PSO, GA).]

SIMULINK MODEL FILE – Ports input/output
Model: /Utim_Optim/PSO/Live_fn_sim.mdl
Driver script: /Utim_Optim/PSO/PSO.m
>> PSO                       % start program
Line 40: current_fitness(i) = Live_fn(current_position(:,i));
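
A hypothetical sketch of what a Live_fn-style fitness function can look like (the parameter names, the way values reach the model, and the logged output name are my assumptions, not taken from the workshop files): push the candidate parameters into the base workspace, simulate the model, and score the logged response.

function cost = Live_fn(p)
    assignin('base','p1',p(1));     % model blocks are assumed to read p1, p2
    assignin('base','p2',p(2));
    simOut = sim('Live_fn_sim','ReturnWorkspaceOutputs','on');   % run the Simulink model
    y      = simOut.get('yout');    % logged model output (assumed logging name and array format)
    cost   = sum(y(:).^2);          % example cost: sum of squared output samples
end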

Genetic Algorithm – Objective Function
[Figure: optimized parameters enter through the model In-Port and the response leaves through the Out-Port.]
Model: /Utim_Optim/GA/Live_fn_simGA.mdl

SIMULINK OPTIMIZATION EXAMPLES
A. PARTICLE SWARM OPTIMIZATION
1. Particle Swarm Optimization function (\Utim\PSO\Live_fn_sim.mdl)
B. GENETIC ALGORITHM
2. GA function optimization (\Utim\GA\Live_fn_simGA.mdl)
3. Fish Harvesting Simulink Optimization Design (\Utim\fishharvester\fishpond2.mdl)

4. DC Motor Parameter Optimization with GA (IEEE_OPTIM_2012\Diodes_amps\lead_screw_model_opt_Motor_ga.mdl)
C. TRADITIONAL OPTIMIZATION METHODS
5.

8. Buck Boost Converter Optimization (IEEE_OPTIM_2012\Buck_Boost\Buck_PWM_OPT12_Fs_Optim.mdl) (IEEE_OPTIM_2012\Buck_Boost\Buck_PWM_OPT12_Ti_L.mdl)
9. Operational Amplifier Optimization (IEEE_OPTIM_2012\Diodes_amps\Op_amp_opt.mdl)
10. Diode Voltage Doubler (IEEE_OPTIM_2012\Diodes_amps\diode_2.mdl)
D. REAL DATA PARAMETERIZING
11. VCB Motor Parameter Optimization (IEEE_OPTIM_2012\lead screw\vcb_motor_opt_Motor_trk.mdl)
12. Parameterizing Gas Turbine Fuel Positioner (IEEE_OPTIM_2012\Gas_bio_Turbine\GT_Biofuel_present_b.mdl)

REFERENCES
1. Optimization Toolbox for Use with MATLAB, User's Guide, The MathWorks Inc., 2006.
2. Applied Optimization with MATLAB Programming, P. Venkataraman, Wiley Interscience, 2002.
3. Optimization for Engineering Design, Kalyanmoy Deb, Prentice Hall, 1996.
4. Convex Optimization, Stephen Boyd and Lieven Vandenberghe, Cambridge University Press, 2004.
5. Numerical Recipes in C (or C++): The Art of Scientific Computing, W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Cambridge University Press, 1992/2002.
6. http://users.powernet.co.uk/kienzle/octave/optim.html
7. http://www.cse.uiuc.edu/eot/modules/optimization/SteepestDescent/