Engineering Optimization: Concepts and Applications. Fred van Keulen, Matthijs Langelaar. CLA H21.1, A.vanKeulen@tudelft.nl
Recap / overview: Optimization problem (definition, checking, negative null form, model); Special topics (linear / convex problems, sensitivity analysis, topology optimization); Solution methods for unconstrained and constrained problems (optimality criteria, optimization algorithms).
Summary optimality conditions. Conditions for a local minimum of an unconstrained problem: First Order Necessity Condition: ∇f(x*) = 0. Second Order Sufficiency Condition: ∇f(x*) = 0 and Hessian H(x*) positive definite. For convex f on a convex feasible domain, the condition ∇f(x*) = 0 is sufficient for a global minimum.
Stationary point nature summary:
Definiteness of H         Nature of x*
Positive definite         Minimum
Positive semi-definite    Valley
Indefinite                Saddle point
Negative semi-definite    Ridge
Negative definite         Maximum
Complex eigenvalues? Question: what is the nature of a stationary point when H has complex eigenvalues? Answer: this situation never occurs, because H is symmetric by definition, and symmetric matrices have only real eigenvalues (spectral theorem).
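As a side note (not from the slides), the classification in the table above can be checked numerically: compute the eigenvalues of the symmetric Hessian and inspect their signs. A minimal NumPy sketch, with a hypothetical example Hessian:

```python
import numpy as np

def classify_stationary_point(H, tol=1e-10):
    """Classify a stationary point from the (symmetric) Hessian H."""
    eig = np.linalg.eigvalsh(H)          # real eigenvalues, since H is symmetric
    if np.all(eig > tol):
        return "minimum"                  # positive definite
    if np.all(eig < -tol):
        return "maximum"                  # negative definite
    if np.all(eig >= -tol):
        return "valley (positive semi-definite)"
    if np.all(eig <= tol):
        return "ridge (negative semi-definite)"
    return "saddle point (indefinite)"

# Example: f(x, y) = x^2 - y^2 has Hessian diag(2, -2) -> saddle point
print(classify_stationary_point(np.diag([2.0, -2.0])))
```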
Nature of stationary points. The nature of the initial position depends on the load (buckling example). [Figure: column of length l loaded by force F, supported by springs k1 and k2]
Nature of stationary points (2)
Unconstrained optimization algorithms: Single-variable methods: 0th order (involving only f), 1st order (involving f and f'), 2nd order (involving f, f' and f''); Multiple-variable methods.
Why optimization algorithms? The optimality conditions often cannot be used: the function is not explicitly known (e.g. it is evaluated by a simulation), or the conditions cannot be solved analytically. Example: a function whose stationarity condition f'(x) = 0 has no closed-form solution.
0th order methods: pro/con. Weaknesses: (usually) less efficient than higher-order methods (many function evaluations). Strengths: no derivatives needed; also work for discontinuous / non-differentiable functions; easy to program; robust.
Minimization with one variable. Why? It is the simplest case and a good starting point, and it is used within multiple-variable methods during the line search. Setting: an iterative process in which the optimizer proposes a design variable x and the model returns the objective value f(x). [Diagram: optimizer / model loop]
Termination criteria. Stop the optimization iterations when: the solution is sufficiently accurate (check the optimality criteria); progress becomes too slow; the maximum resources have been spent; the solution diverges; or cycling occurs (e.g. the iterates keep alternating between two points xa and xb).
Brute-force approach: exhaustive search. Evaluate f at n equally spaced points over the initial interval of length L0; the minimum is then bracketed in a final interval of size Ln = 2 L0 / (n + 1). Disadvantage: rather inefficient.
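A minimal sketch of this exhaustive search in Python (the test function and interval are hypothetical, not from the slides):

```python
import numpy as np

def exhaustive_search(f, a, b, n):
    """Evaluate f at n equally spaced interior points of [a, b] and return
    the interval of width 2*(b - a)/(n + 1) that brackets the minimum."""
    x = np.linspace(a, b, n + 2)              # n interior points plus both endpoints
    i = np.argmin([f(xi) for xi in x])
    return x[max(i - 1, 0)], x[min(i + 1, n + 1)]

# Hypothetical test function (not from the slides)
f = lambda x: (x - 1.3)**2 + 0.5
print(exhaustive_search(f, 0.0, 4.0, 99))     # 99 points: interval reduced to 2%
```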
Basic strategy of 0th order methods for single-variable case Find interval [a0, b0] that contains the minimum (bracketing) Iteratively reduce the size of the interval [ak, bk] (sectioning) Approximate the minimum by the minimum of a simple interpolation function over the interval [aN, bN] Sectioning methods: Dichotomous search Fibonacci method Golden section method
Bracketing the minimum. Starting point x1, step size D and expansion parameter g are user-defined: x2 = x1 + D, x3 = x2 + gD, x4 = x3 + g^2 D, and so on with growing steps until f increases, which yields an interval [a0, b0] that brackets the minimum.
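A possible implementation of this bracketing scheme (a sketch only; the default step size and expansion factor are arbitrary choices, not from the slides):

```python
def bracket_minimum(f, x1, delta=0.1, gamma=2.0, max_steps=50):
    """Step away from x1 with expanding steps until f increases, so that
    the last three points bracket a (local) minimum of a unimodal f."""
    x_prev, x_curr = x1, x1 + delta
    f_prev, f_curr = f(x_prev), f(x_curr)
    if f_curr > f_prev:                   # wrong direction: step backwards instead
        delta = -delta
        x_prev, x_curr = x_curr, x_prev
        f_prev, f_curr = f_curr, f_prev
    step = gamma * delta
    for _ in range(max_steps):
        x_next = x_curr + step
        f_next = f(x_next)
        if f_next > f_curr:               # function starts increasing: bracket found
            return min(x_prev, x_next), max(x_prev, x_next)
        x_prev, f_prev = x_curr, f_curr
        x_curr, f_curr = x_next, f_next
        step *= gamma                     # expand the step by gamma each time
    raise RuntimeError("no bracket found within max_steps")

a0, b0 = bracket_minimum(lambda x: (x - 1.3)**2 + 0.5, x1=0.0)
```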
Unimodality. Bracketing and sectioning methods work best for unimodal functions: a unimodal function consists of exactly one monotonically decreasing part followed by exactly one monotonically increasing part, so it has a single minimum.
Dichotomous search. Main Entry: di·chot·o·mous. Pronunciation: dI-'kät-&-m&s, also d&-. Function: adjective: dividing into two parts. Conceptually simple idea: try to split the interval [a0, b0] of length L0 in half in each step, by evaluating two points a small distance d apart (d << L0) around the midpoint.
Dichotomous search (2). Interval size after 1 step (2 evaluations): L1 = L0/2 + d/2. Interval size after m steps (2m evaluations): Lm = L0/2^m + d (1 - 1/2^m). Proper choice for d: much smaller than the required final interval, yet large enough that the two function values can still be distinguished.
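A minimal dichotomous search sketch following these definitions (test function hypothetical):

```python
def dichotomous_search(f, a, b, m=10, delta=1e-4):
    """Dichotomous search: in each of the m steps, evaluate f at two points a
    distance delta apart around the interval midpoint and keep the half
    (plus delta/2) that must contain the minimum of a unimodal f."""
    for _ in range(m):
        mid = 0.5 * (a + b)
        x1, x2 = mid - 0.5 * delta, mid + 0.5 * delta
        if f(x1) < f(x2):
            b = x2            # minimum lies in [a, x2]
        else:
            a = x1            # minimum lies in [x1, b]
    return a, b

# Final interval length is approximately (b - a)/2**m + delta*(1 - 2**-m)
print(dichotomous_search(lambda x: (x - 1.3)**2 + 0.5, 0.0, 4.0))
```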
Dichotomous search (3). Example: m = 10 steps reduces the interval to roughly L0/1024 (plus the d term). [Plot: interval size versus m, compared with the ideal halving per step]
Sectioning - Fibonacci. Situation: the minimum is bracketed between x1 and x3, with one interior point x2 already evaluated. Test a new point x4 and reduce the interval. Question: what is the optimal placement of the points?
Optimal sectioning. The Fibonacci method is the optimal sectioning method. Given: initial interval [a0, b0], and either a predefined total number of evaluations N or a desired final interval size e.
Fibonacci sectioning - basic idea. Start at the final interval and work backwards, using symmetry and maximum interval reduction (d << I_N): I_{N-1} = 2 I_N, I_{N-2} = 3 I_N, I_{N-3} = 5 I_N, I_{N-4} = 8 I_N, I_{N-5} = 13 I_N, ... Each interval is the sum of the two intervals that follow it, so the multipliers are Fibonacci numbers. [In the figure, the yellow point is the point added in the previous iteration.]
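The backward relation above can be reproduced directly: by symmetry each interval equals the sum of the two that follow it, which is exactly the Fibonacci recursion. A small illustrative sketch (not an implementation of the full search):

```python
def fibonacci_interval_ratios(k_max):
    """Reproduce the backward interval relation from the slide:
    I_{N-1} = 2*I_N, I_{N-2} = 3*I_N, I_{N-3} = 5*I_N, ... (Fibonacci numbers)."""
    ratios = [1, 2]                              # I_N and I_{N-1}, in units of I_N
    for _ in range(k_max - 1):
        ratios.append(ratios[-1] + ratios[-2])   # each interval is the sum of the next two
    return ratios

print(fibonacci_interval_ratios(5))              # [1, 2, 3, 5, 8, 13]
```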
Sectioning - Golden Section. For large N, the ratio of consecutive Fibonacci numbers converges to the golden section ratio φ (0.618034...). The golden section method simply uses this constant interval reduction ratio φ in every iteration.
Sectioning - Golden Section. Origin of the golden section: the intervals satisfy I_2 = φ I_1, I_3 = φ I_2 = φ^2 I_1 and I_1 = I_2 + I_3, which gives φ^2 + φ - 1 = 0 and φ = (√5 - 1)/2 ≈ 0.618. Final interval: each iteration multiplies the bracket size by φ, so after k reductions the interval is φ^k times the original.
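A minimal golden section search sketch based on this constant reduction ratio (test function and tolerance are hypothetical):

```python
import math

PHI = (math.sqrt(5) - 1) / 2               # golden section ratio, ~0.618034

def golden_section_search(f, a, b, tol=1e-5):
    """Golden section search: shrink the bracket [a, b] by the constant
    factor PHI per iteration, reusing one interior evaluation each time."""
    x1 = b - PHI * (b - a)                  # interior points placed symmetrically
    x2 = a + PHI * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                         # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - PHI * (b - a)
            f1 = f(x1)
        else:                               # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + PHI * (b - a)
            f2 = f(x2)
    return 0.5 * (a + b)

print(golden_section_search(lambda x: (x - 1.3)**2 + 0.5, 0.0, 4.0))
```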
Comparison of sectioning methods. [Plot: interval reduction versus number of evaluations for ideal dichotomous, Fibonacci, and golden section search]
Example, reduction to 2% of the original interval:
Method           Evaluations N
Dichotomous      12
Golden section   9
Fibonacci        8
(Exhaustive      99)
Conclusion: golden section is simple and near-optimal.
Quadratic interpolation. The three points of the bracket [ai, bi] define an interpolating quadratic function a x^2 + b x + c; the new point is evaluated at the minimum of this parabola, xnew = -b / (2a). For a minimum, a > 0 is required. Shift xnew slightly when it falls very close to an existing point, and use it to form the next bracket [ai+1, bi+1].
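A sketch of one quadratic interpolation step, here fitted with numpy.polyfit rather than an explicit three-point formula (test function hypothetical):

```python
import numpy as np

def quadratic_interpolation_step(f, x1, x2, x3):
    """Fit a parabola through (x1, f1), (x2, f2), (x3, f3) and return the
    location of its minimum (requires the leading coefficient a > 0)."""
    c2, c1, c0 = np.polyfit([x1, x2, x3], [f(x1), f(x2), f(x3)], deg=2)
    if c2 <= 0:
        raise ValueError("parabola has no minimum (a <= 0)")
    return -c1 / (2.0 * c2)                 # stationary point of c2*x^2 + c1*x + c0

f = lambda x: (x - 1.3)**2 + 0.5            # hypothetical test function
print(quadratic_interpolation_step(f, 0.0, 1.0, 4.0))   # exact for a quadratic f
```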
Unconstrained optimization algorithms: Single-variable methods: 0th order (involving only f), 1st order (involving f and f'), 2nd order (involving f, f' and f''); Multiple-variable methods.
Cubic interpolation. Similar to quadratic interpolation, but uses only 2 points ai and bi together with the derivative information f'(ai) and f'(bi) to fit a cubic polynomial.
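A sketch of one cubic interpolation step; here the cubic is fitted by solving a small linear system instead of using a closed-form minimizer formula (example function hypothetical):

```python
import numpy as np

def cubic_interpolation_step(fa, dfa, fb, dfb, a, b):
    """Fit c3*x^3 + c2*x^2 + c1*x + c0 to the values and derivatives at a and b,
    then return the stationary point inside [a, b] with positive curvature."""
    A = np.array([[a**3,   a**2, a,   1.0],
                  [b**3,   b**2, b,   1.0],
                  [3*a**2, 2*a,  1.0, 0.0],
                  [3*b**2, 2*b,  1.0, 0.0]])
    c3, c2, c1, _ = np.linalg.solve(A, np.array([fa, fb, dfa, dfb]))
    # Stationary points of the cubic: roots of 3*c3*x^2 + 2*c2*x + c1
    for x in np.roots([3*c3, 2*c2, c1]):
        if np.isreal(x) and a <= x.real <= b and (6*c3*x.real + 2*c2) > 0:
            return x.real
    raise ValueError("no interior minimum of the interpolating cubic")

# Hypothetical example: f(x) = (x - 1.3)^2 + 0.5, f'(x) = 2*(x - 1.3)
f  = lambda x: (x - 1.3)**2 + 0.5
df = lambda x: 2.0 * (x - 1.3)
print(cubic_interpolation_step(f(0.0), df(0.0), f(4.0), df(4.0), 0.0, 4.0))
```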
Bisection method. Optimality condition: the minimum lies at a stationary point, so minimize by root finding on f'. Similar to the sectioning methods, but it uses the sign of the derivative: the interval is halved in each iteration, which is better than any of the 0th order (direct) methods.
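A minimal bisection sketch on f' (the derivative of a hypothetical quadratic test function; assumes f'(a) < 0 < f'(b)):

```python
def bisection_minimize(df, a, b, tol=1e-8):
    """Bisection on f': assumes df(a) < 0 < df(b), so the bracket contains
    a stationary point; the interval is halved in every iteration."""
    while b - a > tol:
        mid = 0.5 * (a + b)
        if df(mid) > 0:
            b = mid            # minimum lies to the left of mid
        else:
            a = mid            # minimum lies to the right of mid
    return 0.5 * (a + b)

# Hypothetical example: f(x) = (x - 1.3)^2 + 0.5, so f'(x) = 2*(x - 1.3)
print(bisection_minimize(lambda x: 2.0 * (x - 1.3), 0.0, 4.0))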
Secant method. Also based on root finding of f', but it uses linear interpolation of f' between the interval endpoints. The interval is possibly reduced by even more than half in each iteration, making it typically the fastest of the interval-based methods.
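A sketch of the interval-keeping secant (false position) variant on f' described above (assumes f'(a) < 0 < f'(b); tolerance and iteration limit are arbitrary):

```python
def secant_minimize(df, a, b, tol=1e-8, max_iter=100):
    """Secant (false position) on f': the new point is where the straight line
    through (a, df(a)) and (b, df(b)) crosses zero; keep the sub-interval
    that still contains the sign change."""
    dfa, dfb = df(a), df(b)
    for _ in range(max_iter):
        x = b - dfb * (b - a) / (dfb - dfa)   # zero of the linear interpolation
        dfx = df(x)
        if abs(dfx) < tol:
            return x
        if dfx > 0:
            b, dfb = x, dfx    # stationary point lies in [a, x]
        else:
            a, dfa = x, dfx    # stationary point lies in [x, b]
    return x

print(secant_minimize(lambda x: 2.0 * (x - 1.3), 0.0, 4.0))
```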
Unconstrained optimization algorithms: Single-variable methods: 0th order (involving only f), 1st order (involving f and f'), 2nd order (involving f, f' and f''); Multiple-variable methods.
Newton's method. Again root finding of f', now based on a Taylor approximation of f' around the current point xk: f'(x) ≈ f'(xk) + f''(xk)(x - xk). Setting this linear approximation to zero gives the new guess x(k+1) = xk - f'(xk) / f''(xk).
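A minimal Newton iteration sketch based on this update (the derivatives of a hypothetical test function are supplied by the user):

```python
def newton_minimize(df, d2f, x0, tol=1e-10, max_iter=50):
    """Newton's method for minimization: root finding on f' using
    x_{k+1} = x_k - f'(x_k) / f''(x_k). Fast near the solution, but it may
    diverge from a poor starting point or when f'' <= 0."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Hypothetical example: f(x) = (x - 1.3)^2 + 0.5
print(newton_minimize(lambda x: 2.0 * (x - 1.3), lambda x: 2.0, 0.0))
```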
Newton's method. Best convergence of all methods, unless it diverges: the iterates xk, x(k+1), x(k+2), ... jump from point to point and are not contained in an interval, so the method can be dangerous and may diverge. [Figure: converging and diverging Newton iterations on f']
Summary single variable methods. 0th order: bracketing + dichotomous sectioning, Fibonacci sectioning, golden ratio sectioning, quadratic interpolation. 1st order: cubic interpolation, bisection method, secant method. 2nd order: Newton's method. And many, many more! In practice, additional "tricks" are needed to deal with multimodality, strong fluctuations, round-off errors and divergence.