1
Engineering Optimization
Concepts and Applications. The pictures show convex polyhedra; these resemble the feasible design spaces of linear programming problems with three design variables. Fred van Keulen, Matthijs Langelaar, CLA H21.1
2
Contents Constrained Optimization: Optimality conditions recap
Constrained Optimization: Algorithms Linear programming
3
Inequality constrained problems
Consider a problem with only inequality constraints. (Figure: contours of f and constraints g1, g2, g3 in the x1–x2 plane.) At the optimum, only active constraints matter: optimality conditions are similar to the equality constrained case.
4
Inequality constraints
First order optimality: consider a feasible local variation around the optimum (feasible perturbation); since this is a boundary optimum, the objective cannot decrease along any feasible perturbation.
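The slide's equations are not reproduced in the transcript; as a sketch in standard notation (the slide's own symbols may differ), the argument reads:

```latex
% Boundary optimum of:  min f(x)  s.t.  g_j(x) <= 0
% A perturbation \delta x is feasible w.r.t. the active constraints if
\nabla g_j^{\mathsf T}\,\delta x \le 0 \quad (g_j \text{ active}),
% and since x^* is a boundary optimum, f cannot decrease along it:
\nabla f^{\mathsf T}\,\delta x \ge 0 \quad \text{for all feasible } \delta x .
```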
5
Optimality condition Multipliers must be non-negative:
Interpretation: the negative gradient (descent direction) lies in the cone spanned by the positive constraint gradients. (Figure: the negative objective gradient inside the cone of active constraint gradients, in the x1–x2 plane.) This interpretation is given in Haftka.
6
Optimality condition (2)
Equivalent interpretation: no descent direction exists within the cone of feasible directions. (Figure: feasible cone bounded by the gradients of g1 and g2, with the feasible and descent directions relative to the negative objective gradient, in the x1–x2 plane.) This interpretation is given in Belegundu.
7
Karush-Kuhn-Tucker conditions
First order optimality conditions for the constrained problem, expressed via the Lagrangian. Note that these conditions apply only to regular points, i.e., points where the active constraint gradients are linearly independent.
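For reference, the KKT conditions in a common notation (a sketch; the slide's exact symbols are not shown in the transcript):

```latex
% Lagrangian with multipliers \lambda (equalities) and \mu (inequalities):
L(x,\lambda,\mu) = f(x) + \lambda^{\mathsf T} h(x) + \mu^{\mathsf T} g(x)
% KKT conditions at a regular point x^*:
\nabla_x L = \nabla f + \textstyle\sum_i \lambda_i \nabla h_i + \sum_j \mu_j \nabla g_j = 0,
\qquad h(x^*) = 0,\qquad g(x^*) \le 0,
\qquad \mu_j \ge 0,\qquad \mu_j\, g_j(x^*) = 0 .
```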
8
Sufficiency: the KKT conditions are necessary conditions for local constrained minima. For sufficiency, second order conditions based on the active constraints must be checked on the tangent subspace of h and the active g. Special case: convex objective and convex feasible region: the KKT conditions are sufficient for global optimality.
9
Significance of multipliers
Consider the case where the optimization problem depends on a parameter a. Write the Lagrangian and the KKT conditions of this parameterized problem; we are looking for the sensitivity of the optimal objective value to a.
10
Significance of multipliers (3)
Lagrange multipliers describe the sensitivity of the objective to changes in the constraints. Similar equations can be derived for multiple constraints and inequalities. The multipliers give the "price of raising the constraint". Note that this makes it logical that at an optimum, the multipliers of inequality constraints cannot be negative!
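As a hedged sketch of the sensitivity result for a single equality constraint written as h(x) = a (sign conventions vary between textbooks):

```latex
% Perturbed problem:  min f(x)  s.t.  h(x) = a,  Lagrangian  L = f + \lambda\,(h - a)
\frac{d f^{*}}{d a} = \frac{\partial L}{\partial a} = -\lambda ,
% i.e. the multiplier measures how much the optimal objective changes
% when the constraint is shifted ("price of raising the constraint").
```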
11
Contents Constrained Optimization: Optimality Criteria
Constrained Optimization: Algorithms Linear Programming
12
Constrained optimization methods
Approaches:
Transformation methods (penalty / barrier functions) plus unconstrained optimization algorithms
Random methods / simplex-like methods
Feasible direction methods
Reduced gradient methods
Approximation methods (SLP, SQP)
Penalty and barrier methods were treated before. Note that constrained problems can also have interior optima!
13
Augmented Lagrangian method
Recall the penalty method. Disadvantages: a high penalty factor is needed for accurate results; a high penalty factor causes ill-conditioning and slow convergence.
14
Augmented Lagrangian method
Basic idea: add a penalty term to the Lagrangian and use estimates and updates of the multipliers. Also possible for inequality constraints. The multiplier update rules determine convergence; exact convergence is obtained for moderate values of p. The penalty term helps to make the Hessian of L positive definite.
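A minimal sketch of an augmented Lagrangian loop for equality constraints, assuming an unconstrained minimizer is available (scipy.optimize.minimize is used here); the function names and the simple multiplier update are illustrative, not the slide's exact formulation:

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, p=10.0, n_outer=20, tol=1e-8):
    """Minimize f(x) subject to h(x) = 0 (h returns a vector of constraint values)."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(h(x)))
    for _ in range(n_outer):
        # Augmented Lagrangian: penalty term added to the ordinary Lagrangian
        LA = lambda x: f(x) + lam @ h(x) + 0.5 * p * np.sum(h(x) ** 2)
        x = minimize(LA, x).x              # inner unconstrained minimization
        lam = lam + p * h(x)               # classical multiplier update
        if np.linalg.norm(h(x)) < tol:     # stop once the constraints are satisfied
            break
    return x, lam

# Example: min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0  (optimum at (0.5, 0.5))
x_opt, lam_opt = augmented_lagrangian(lambda x: x @ x,
                                      lambda x: np.array([x[0] + x[1] - 1.0]),
                                      x0=[0.0, 0.0])
```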
15
Contents Constrained Optimization: Optimality Criteria
Constrained Optimization: Algorithms Augmented Lagrangian Feasible directions methods Reduced gradient methods Approximation methods SQP Linear Programming
16
Feasible direction methods
Moving along the boundary:
Rosen's gradient projection method
Zoutendijk's method of feasible directions
Basic idea: move along the steepest descent direction until constraints are encountered; the step direction is obtained by projecting the steepest descent direction onto the tangent plane; repeat until a KKT point is found. Both methods involve line searches along the feasible directions. (The picture shows professor J.B. Rosen.)
17
1. Gradient projection method
Iterations follow the constraint boundary. (Figure: iterates following the surface h = 0 in x1–x2–x3 space.) For nonlinear constraints, mapping back to the constraint surface is needed, in the normal space. For simplicity, consider a linear equality constrained problem:
18
Gradient projection method (2)
Recall: the tangent space contains the directions along which the (linearized) constraints remain satisfied; the normal space is spanned by the constraint gradients. Projection: decompose a vector into its tangent and normal components.
19
Gradient projection method (3)
Search direction in the tangent space, obtained with the projection matrix (a sketch follows below). Nonlinear case: a correction in the normal space is required.
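A minimal sketch of the projection step for linear equality constraints A x = b; the matrix name A (playing the role of the constraint gradient matrix) and the function name are assumptions for illustration:

```python
import numpy as np

def projected_steepest_descent_step(grad_f, A):
    """Project the steepest-descent direction onto the tangent space {s : A s = 0}."""
    # Projection matrix onto the null space (tangent space) of A:
    #   P = I - A^T (A A^T)^{-1} A
    n = A.shape[1]
    P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)
    return -P @ grad_f          # search direction lying in the tangent space

# Example: gradient [2, 1] projected onto the plane x1 + x2 = const
s = projected_steepest_descent_step(np.array([2.0, 1.0]),
                                    np.array([[1.0, 1.0]]))
# A @ s is (numerically) zero, so the step stays on the linear constraint
```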
20
Correction to constraint boundary
Correction in the normal subspace, e.g. using Newton iterations. (Figure: step sk from xk to x'k+1, followed by a correction back to the constraint surface at xk+1.) A first order Taylor approximation of the constraints is used, and the correction is iterated until the constraints are satisfied.
21
Practical aspects How to deal with inequality constraints?
Use an active set strategy:
Keep a set of active inequality constraints
Treat these as equality constraints
Update the set regularly (heuristic rules)
In the gradient projection method, if s = 0: check the multipliers, this could be a KKT point. If any μi < 0, that constraint is inactive and can be removed from the active set.
22
Slack variables: an alternative way of dealing with inequality constraints is to convert them to equality constraints using slack variables. Disadvantages: all constraints are considered all the time, and the number of design variables increases.
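A sketch of the usual slack-variable transformation (the slide's exact form is not shown in the transcript; squaring the slack is one common choice):

```latex
g_j(x) \le 0
\quad\Longleftrightarrow\quad
g_j(x) + s_j^2 = 0 ,
% or, keeping the slack non-negative as a bound:  g_j(x) + s_j = 0,\ s_j \ge 0 .
```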
23
2. Zoutendijk’s feasible directions
Basic idea: move along the steepest descent direction until constraints are encountered; at the constraint surface, solve a subproblem to find a descending, feasible direction; repeat until a KKT point is found. The subproblem requires the direction to be both descending and feasible. It is an LP problem and can be solved efficiently; it gives the best search direction (but alpha must be negative!). See Belegundu p. 168 for details.
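As a hedged sketch in the spirit of Belegundu's formulation (the exact normalization bounds are an assumption), the direction-finding subproblem is the LP:

```latex
\min_{s,\;\alpha}\ \alpha
\quad\text{s.t.}\quad
\nabla f^{\mathsf T} s \le \alpha \ \ (\text{descending}),\qquad
\nabla g_j^{\mathsf T} s \le \alpha \ \ (j \in \text{active set, feasible}),\qquad
-1 \le s_i \le 1 .
% If the optimal \alpha < 0, s is a usable (descending and feasible) direction;
% if \alpha = 0, the current point satisfies the KKT conditions.
```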
24
Zoutendijk’s method Subproblem linear: efficiently solved
Determine the active set before solving the subproblem! When α = 0: a KKT point has been found. The method needs a feasible starting point. Dr. Zoutendijk worked at the University of Leiden, and invented this method around 1970. Nonlinear equality constraints have no interior, and this method requires an interior: for a descent direction, alpha must be slightly negative, which means that the design is pushed slightly into the feasible region. See Belegundu for details.
25
Contents Constrained Optimization: Optimality Criteria
Constrained Optimization: Algorithms Augmented Lagrangian Feasible directions methods Reduced gradient methods Approximation methods SQP Linear Programming
26
Reduced gradient methods
Basic idea: choose a set of n - m decision variables d and use the reduced gradient in an unconstrained gradient-based method (see the sketch after the next slide). The state variables s are determined from h(d, s) = 0 (iteratively for nonlinear constraints): for the iterations, h(d, s) = 0 is written as a first order Taylor approximation, and an iterative procedure for s follows (basically a Newton method).
27
Reduced gradient method
Nonlinear constraints: Newton iterations to return to the constraint surface (determine s), repeated until convergence. A note on the selection of the variables is given in Papalambros, p. ...; the cost of the back-to-constraint mapping procedure depends strongly on the partitioning. Variants using second order information also exist. Drawback: selection of the decision variables (but some procedures exist).
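For reference, the reduced gradient and the Newton correction of the state variables in a common notation (a sketch; x is split into decision variables d and state variables s with h(d, s) = 0):

```latex
\frac{d f}{d d}
= \frac{\partial f}{\partial d}
- \frac{\partial f}{\partial s}
  \left(\frac{\partial h}{\partial s}\right)^{-1}
  \frac{\partial h}{\partial d},
\qquad
% Newton iteration for s at fixed d (return to the constraint surface):
s^{k+1} = s^{k} - \left(\frac{\partial h}{\partial s}\right)^{-1} h(d, s^{k}) .
```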
28
Contents Constrained Optimization: Optimality Criteria
Constrained Optimization: Algorithms Augmented Lagrangian Feasible directions methods Reduced gradient methods Approximation methods SQP Linear Programming
29
Approximation methods
SLP: Sequential Linear Programming. Solves a series of linear approximate problems; efficient methods for linearly constrained problems are available.
30
SLP 1-D illustration: SLP iterations approach the convex feasible domain from outside. (Figure: objective f and constraint g, with iterates at x = 0.8 and x = 0.988.)
31
SLP points of attention
Solves an LP problem in every cycle: efficient only when the analysis cost is relatively high. Tendency to diverge; solution: trust region (move limits). (Figure: move limits around the current iterate in the x1–x2 plane.)
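A minimal SLP sketch with move limits, assuming gradients are available and using scipy.optimize.linprog for the LP subproblem; the function and parameter names (slp, move_limit, shrink) are illustrative, not the lecture's reference implementation:

```python
import numpy as np
from scipy.optimize import linprog

def slp(grad_f, g, jac_g, x0, move_limit=0.5, n_iter=30, shrink=0.7, tol=1e-6):
    """Sequential Linear Programming sketch for:  min f(x)  s.t.  g(x) <= 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        c = grad_f(x)                         # linearized objective:   c^T dx
        A_ub = np.atleast_2d(jac_g(x))        # linearized constraints: g(x) + J dx <= 0
        b_ub = -np.asarray(g(x))
        bounds = [(-move_limit, move_limit)] * x.size   # move limits (trust region)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        if not res.success:                   # e.g. infeasible within the move limits
            break
        x = x + res.x
        move_limit *= shrink                  # simple move-limit reduction (helps against cycling)
        if np.linalg.norm(res.x) < tol:
            break
    return x

# Example: min x1 + x2  s.t.  1 - x1*x2 <= 0, starting from (2, 2)
x_opt = slp(grad_f=lambda x: np.array([1.0, 1.0]),
            g=lambda x: np.array([1.0 - x[0] * x[1]]),
            jac_g=lambda x: np.array([[-x[1], -x[0]]]),
            x0=[2.0, 2.0])
```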
32
SLP points of attention (2)
An infeasible starting point can result in an unsolvable LP problem. Solution: relaxing the constraints in the first cycles, with k sufficiently large to force the solution into the feasible region. The feasible domain is enlarged by β, which allows a certain amount of constraint violation.
33
SLP points of attention (3)
Cycling can occur when the optimum lies on a curved constraint. Solution: move limit reduction strategy. (Figure: iterates cycling around the optimum on a curved constraint in the x1–x2 plane.)
34
Method of Moving Asymptotes
First order method, by Svanberg (1987). Builds a convex approximate problem, approximating the responses using the expression sketched below. R, Pi, Qi, Ui and Li are determined based on the values of the gradient and objective, and on the history of the optimization process. See also p. 325 of Papalambros. The approximate problem is solved efficiently. Popular method in topology optimization.
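As a sketch of the MMA approximation in a notation matching the R, Pi, Qi, Ui and Li mentioned above (indexing conventions assumed, not copied from the slide):

```latex
\tilde f(x) \;=\; r \;+\; \sum_{i=1}^{n}
\left( \frac{p_i}{\,U_i - x_i\,} + \frac{q_i}{\,x_i - L_i\,} \right),
\qquad L_i < x_i < U_i ,
% where the moving asymptotes L_i, U_i are updated between iterations using the
% optimization history, and p_i, q_i, r follow from the function value and gradient.
```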
35
Sequential Approximate Optimization
Zeroth order method (a minimal sketch follows below):
1. Determine initial trust region
2. Generate sampling points (design of experiments)
3. Build response surface (e.g. least squares, Kriging, ...)
4. Optimize approximate problem
5. Check convergence, update trust region, repeat from 2
Many variants! See also Lecture 4.
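A minimal 1-D sketch of the loop above, using a quadratic least-squares response surface; the function names and the simple shrinking trust-region update are illustrative assumptions:

```python
import numpy as np

def sao_1d(f, x0, radius=1.0, n_cycles=10, n_samples=5, shrink=0.6):
    """Sequential approximate optimization sketch (1-D, quadratic response surface)."""
    x = float(x0)
    for _ in range(n_cycles):
        # 1-2. Sample the (expensive) model inside the current trust region (DOE)
        xs = np.linspace(x - radius, x + radius, n_samples)
        ys = np.array([f(xi) for xi in xs])
        # 3. Build a least-squares quadratic response surface
        a, b, c = np.polyfit(xs, ys, 2)
        # 4. Optimize the approximation inside the trust region
        x_new = -b / (2 * a) if a > 0 else (xs[0] if ys[0] < ys[-1] else xs[-1])
        x_new = float(np.clip(x_new, x - radius, x + radius))
        # 5. Check convergence and update the trust region
        if abs(x_new - x) < 1e-6:
            break
        x, radius = x_new, radius * shrink
    return x

# Example: noisy 1-D function with minimum near x = 2 (RS dampens the noise)
rng = np.random.default_rng(0)
x_opt = sao_1d(lambda x: (x - 2.0) ** 2 + 0.01 * rng.normal(), x0=0.0)
```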
36
Sequential Approximate Optimization
Good approach for expensive models; the response surface dampens noise; versatile. (Figure: design domain with trust region, response surface, its sub-optimal point, and the true optimum.)
37
Contents Constrained Optimization: Optimality Criteria
Constrained Optimization: Algorithms Augmented Lagrangian Feasible directions methods Reduced gradient methods Approximation methods SQP Linear Programming
38
SQP SQP: Sequential Quadratic Programming
Newton's method is used to solve the KKT conditions: the KKT equations are linearized around the current point and the resulting system is solved for the Newton step.
39
SQP (2) Newton:
40
SQP (3) Note: the Newton equations are the KKT conditions of a quadratic subproblem for finding the search direction sk.
41
Quadratic subproblem: a quadratic subproblem with linear constraints can be solved efficiently. General case: write the KKT conditions of the subproblem; efficient specialized algorithms exist (Papalambros p. 318) to solve this system of equations.
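In a common notation (with W the Hessian of the Lagrangian and A the constraint Jacobian; these symbols are assumed here, not copied from the slide), the equality constrained QP subproblem and its KKT system read:

```latex
\min_{s}\ \ \nabla f^{\mathsf T} s + \tfrac{1}{2}\, s^{\mathsf T} W s
\quad\text{s.t.}\quad A s + h = 0 ,
\qquad\text{KKT system:}\qquad
\begin{bmatrix} W & A^{\mathsf T} \\ A & 0 \end{bmatrix}
\begin{bmatrix} s \\ \lambda \end{bmatrix}
=
-\begin{bmatrix} \nabla f \\ h \end{bmatrix}.
```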
42
Basic SQP algorithm:
1. Choose initial point x0 and initial multiplier estimates λ0
2. Set up the matrices for the QP subproblem
3. Solve the QP subproblem for sk, λk+1
4. Set xk+1 = xk + sk
5. Check convergence criteria; if satisfied, finished, otherwise return to 2
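A minimal SQP skeleton for equality constraints only, with user-supplied gradient, constraint Jacobian and Lagrangian Hessian (a sketch under these assumptions; production codes add quasi-Newton updates, a merit-function line search and an active set for inequalities):

```python
import numpy as np

def sqp_equality(grad_f, h, jac_h, hess_L, x0, n_iter=50, tol=1e-8):
    """Basic SQP sketch:  min f(x)  s.t.  h(x) = 0, via Newton steps on the KKT system."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(np.atleast_1d(h(x))))
    for _ in range(n_iter):
        W = hess_L(x, lam)                       # Hessian of the Lagrangian (assumed available)
        A = np.atleast_2d(jac_h(x))              # constraint Jacobian
        m = A.shape[0]
        # KKT system of the QP subproblem: solve for step s and new multipliers
        K = np.block([[W, A.T], [A, np.zeros((m, m))]])
        rhs = -np.concatenate([grad_f(x), np.atleast_1d(h(x))])
        sol = np.linalg.solve(K, rhs)
        s, lam = sol[:x.size], sol[x.size:]
        x = x + s                                # (a line search along s would go here)
        if np.linalg.norm(s) < tol and np.linalg.norm(h(x)) < tol:
            break
    return x, lam

# Example: min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0  (optimum (0.5, 0.5), multiplier -1)
x_opt, lam_opt = sqp_equality(
    grad_f=lambda x: 2 * x,
    h=lambda x: np.array([x[0] + x[1] - 1.0]),
    jac_h=lambda x: np.array([[1.0, 1.0]]),
    hess_L=lambda x, lam: 2 * np.eye(2),
    x0=[2.0, 3.0])
```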
43
SQP refinements: for convergence of the Newton method, the Hessian of the Lagrangian must be positive definite. A line search along sk improves robustness; this line search uses a special "merit function" to locate the best point. To avoid computation of Hessian information, quasi-Newton approaches (DFP, BFGS) can be used (these also ensure positive definiteness). For dealing with inequality constraints, various active set strategies exist; they operate either on the original problem or on the quadratic subproblem.
44
Comparison of methods:

Method                    AugLag  Zoutendijk  GRG   SQP
Feasible starting point?  No      Yes         Yes   No
Nonlinear constraints?    Yes     Yes         Yes   Yes
Equality constraints?     Yes     Hard        Yes   Yes
Uses active set?          Yes     Yes         No    Yes
Iterates feasible?        No      Yes         No    No
Derivatives needed?       Yes     Yes         Yes   Yes

SQP is generally seen as the best general-purpose method for constrained problems.
45
Contents Constrained Optimization: Optimality Criteria
Constrained Optimization: Algorithms Linear programming
46
Linear programming problem
Linear objective and constraint functions:
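A generic statement of the LP problem (one of several equivalent standard forms; the slide's exact form is not reproduced in the transcript):

```latex
\min_{x}\ \ c^{\mathsf T} x
\quad\text{s.t.}\quad
A x \le b, \qquad x \ge 0 .
```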
47
Feasible domain: each linear constraint divides the design space into two convex half-spaces. (Figure: a constraint line in the x1–x2 plane.) Feasible domain = intersection of convex half-spaces. Result: X = convex polyhedron.
48
Global optimality: for a convex objective function on a convex feasible domain, a KKT point is a global optimum. KKT conditions: