Graphical optimization
Some problems are cheap to simulate or test. Even if they are not, we may fit a surrogate that is cheap to evaluate. Relying on optimization software alone to find the optimum is foolhardy; it is better to explore the problem thoroughly by hand. In two dimensions, graphical optimization is a good way to go. For problems with more variables, it may still pay to identify the two most important variables and proceed graphically. In higher dimensions, two-dimensional cuts can help us understand the relationship between local optima.

Example
Plot and estimate the minimum of f(x1,x2) = 2 + x1 - x2 + 2x1^2 + 2x1x2 + x2^2 in the range -2 <= x1 <= 0, 0 <= x2 <= 3. Can do a contour plot or a mesh (surface) plot:
x=linspace(-2,0,40); y=linspace(0,3,40);
[X,Y]=meshgrid(x,y);
Z=2+X-Y+2*X.^2+2*X.*Y+Y.^2;
cs=contour(X,Y,Z);     % contour plot
figure;                % new figure so the surface plot does not replace the contours
cs=surfc(X,Y,Z);       % surface plot with contours underneath
xlabel('x_1'); ylabel('x_2'); zlabel('f(x_1,x_2)');

Then zoom
Zooming in by reducing the range does not change the estimate of the position, (-1, 1.5), but the value is now seen to be below 0.8.
x=linspace(-1.5,-0.5,40); y=linspace(1,2,40);
[X,Y]=meshgrid(x,y);
Z=2+X-Y+2*X.^2+2*X.*Y+Y.^2;
cs=contour(X,Y,Z);     % re-plot the contours over the reduced range
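For reference, setting the gradient of f to zero gives 1 + 4x1 + 2x2 = 0 and -1 + 2x1 + 2x2 = 0, i.e. x1 = -1, x2 = 1.5, where f = 0.75; this agrees with the graphical estimate of a minimum near (-1, 1.5) with a value just below 0.8.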

In higher dimensions
Often different solutions of optimization problems are obtained from multiple starting points, or even from the same starting point. It is tempting to ascribe this to local optima, but often it is the result of algorithmic or software failure. A useful check is to generate the line connecting two solutions and plot the objective along it, as in the formula and example below.
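Along the line connecting two candidate solutions x1 and x2, points can be written as x(alpha) = (1 - alpha) x1 + alpha x2 with 0 <= alpha <= 1. If both candidates are genuine local optima, the objective along the line must increase as one moves away from either endpoint, so the plot of f versus alpha should show a barrier between them; this is a necessary check, not a sufficient one.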

Example
Consider f(x,y,z) = |sin(2πx) sin(2πy) sin(2πz)| + x √y z^2, with two candidate local optima, (0,0,0) and (0.94,0.94,0.94). Plotting f along the line connecting the first and second point indicates that they may indeed be local optima.
alpha=linspace(0,1,101);
x1=[0 0 0]; x2=[0.94 0.94 0.94];
x=x2(1)*alpha+x1(1)*(1-alpha);
y=x2(2)*alpha+x1(2)*(1-alpha);
z=x2(3)*alpha+x1(3)*(1-alpha);
f=abs(sin(2*pi*x).*sin(2*pi*y).*sin(2*pi*z))+x.*sqrt(y).*z.^2;
plot(alpha,f);

Two-dimensional cut
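One way to produce such a cut is to fix all but two of the variables and contour the objective over the remaining two. A minimal MATLAB sketch for the function above, holding z fixed at 0.94, the value of the second candidate point (the choice of fixed variable and its value here is only illustrative):
% Contour f over (x,y) with z held fixed (illustrative two-dimensional cut).
zfix = 0.94;
x = linspace(0,1,101);  y = linspace(0,1,101);
[X,Y] = meshgrid(x,y);
F = abs(sin(2*pi*X).*sin(2*pi*Y).*sin(2*pi*zfix)) + X.*sqrt(Y)*zfix^2;
contour(X,Y,F,20);     % 20 contour levels over the cut
xlabel('x'); ylabel('y'); title('Cut at z = 0.94');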

Example
Optimize f(x) = sin(2πx1) sin(2πx2) sin(2πx3) + x1 √x2 x3^2, starting from the origin:
g=@(x) (sin(2*pi*x(1))*sin(2*pi*x(2))*sin(2*pi*x(3)))+x(1)*sqrt(x(2))*x(3)^2;
[x,gval,exitflag]=fminsearch(@(x) g(x), [0,0,0])

Unconstrained optimization in Matlab
[x,gval,exitflag]=fminsearch(@(x) g(x), [0,0,0])
x =
gval =
exitflag = 1

[x,gval,exitflag]=fminsearch(@(x) g(x), [0.5,0.5,0.5])
x =
gval =
exitflag = 1

Meaning of exitflag:
1  Maximum coordinate difference between current best point and other points in simplex is less than or equal to TolX, and corresponding difference in function values is less than or equal to TolFun.
0  Maximum number of function evaluations or iterations reached.
-1 Algorithm terminated by the output function.
Can set the convergence criteria with optimset.
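For example, a minimal sketch of tightening the tolerances (the tolerance and iteration values below are chosen arbitrarily for illustration):
options = optimset('TolX',1e-8,'TolFun',1e-8,'MaxFunEvals',5000);
[x,gval,exitflag] = fminsearch(@(x) g(x), [0.5,0.5,0.5], options)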

Alternative using derivatives
[x,gval,exitflag]=fminunc(@(x) g(x), [0.24,0.24,-0.24])
Warning: Gradient must be provided for trust-region algorithm; using line-search algorithm instead.
> In fminunc at 383
Local minimum found. Optimization completed because the size of the gradient is less than the default value of the function tolerance.
x =
gval =
exitflag = 1

[x,gval,exitflag]=fminunc(@(x) g(x), [0,0,0])
Warning: Gradient must be provided for trust-region algorithm; using line-search algorithm instead.
> In fminunc at 383
Initial point is a local minimum. Optimization completed because the size of the gradient at the initial point is less than the default value of the function tolerance.
x =
gval = 0
exitflag = 1
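The warning goes away if the objective also returns its gradient and fminunc is told so. A minimal sketch under that assumption; the file name gwithgrad.m and the hand-derived gradient below are illustrative additions, and 'GradObj' is the optimset option name used by the older MATLAB releases these messages come from:
% Save as gwithgrad.m: objective that also returns its analytic gradient.
function [f,gr] = gwithgrad(x)
f = sin(2*pi*x(1))*sin(2*pi*x(2))*sin(2*pi*x(3)) + x(1)*sqrt(x(2))*x(3)^2;
gr = [2*pi*cos(2*pi*x(1))*sin(2*pi*x(2))*sin(2*pi*x(3)) + sqrt(x(2))*x(3)^2;
      2*pi*sin(2*pi*x(1))*cos(2*pi*x(2))*sin(2*pi*x(3)) + x(1)*x(3)^2/(2*sqrt(x(2)));
      2*pi*sin(2*pi*x(1))*sin(2*pi*x(2))*cos(2*pi*x(3)) + 2*x(1)*sqrt(x(2))*x(3)];
end

% Then, from the command line or a script:
options = optimset('GradObj','on');   % declare that the objective supplies its gradient
[x,gval,exitflag] = fminunc(@gwithgrad, [0.24,0.24,-0.24], options)
Note that the gradient is singular where x(2) = 0 because of the square root, so starting points with x(2) = 0 should be avoided when the gradient is supplied.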

10 dimensional constrained example
Aerodynamic design of a supersonic transport. The objective function, take-off gross weight, is a cheap and simple function of the design variables. The design variables define the geometry of the wing and fuselage. Constraints include range, take-off and landing constraints, and maneuverability constraints. The constraints are all uni-modal functions, but in combination they create a complex, non-convex feasible domain.
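A problem of this shape would typically be handed to a constrained solver such as fmincon; a generic sketch of the setup, in which every number and function is a placeholder rather than the actual supersonic-transport model:
% Generic shape of a 10-variable constrained problem (placeholder model only).
x0 = 0.5*ones(10,1);                  % initial design
lb = zeros(10,1);  ub = ones(10,1);   % bounds on the 10 design variables
[x,fval,exitflag] = fmincon(@objective, x0, [], [], [], [], lb, ub, @constraints);

function W = objective(x)
% cheap, simple function of the design variables (stand-in for take-off gross weight)
W = sum(x.^2);
end

function [c,ceq] = constraints(x)
% inequality constraints c(x) <= 0 (stand-ins for range, take-off, landing,
% maneuverability); no equality constraints
c = [1 - sum(x); x(1)*x(2) - 0.2];
ceq = [];
end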

Simple constraints can create a non-convex design space. Can that happen with linear constraints? (Figure: two local optima and a third point.)

Problems