1
Computational Optimization
General Nonlinear Equality Constraints
2
General Equality Problem
min f(x) s.t. h(x) = 0, where f: Rⁿ → R and h: Rⁿ → Rᵐ.
3
Sequential Quadratic Programming (SQP)
Basic Idea: QPs with linear constraints are easy; for any guess of the active constraints, you just have to solve a system of equations. So why not solve the general problem as a sequence of constrained QPs? Which QP should be used?
4
Use the KKT Conditions; Use the Lagrangian
Lagrangian: L(x, λ) = f(x) − λ'h(x).
First-order KKT conditions: ∇f(x) − ∇h(x)λ = 0 and h(x) = 0, a nonlinear system in (x, λ).
5
Solve Using Newton's Method
Apply Newton's method to the KKT system F(x, λ) = (∇f(x) − ∇h(x)λ, h(x)) = 0.
Newton step: solve
[ ∇²ₓₓL(x, λ)   −∇h(x) ] [ p  ]       [ ∇f(x) − ∇h(x)λ ]
[ ∇h(x)'        0      ] [ Δλ ]  = −  [ h(x)           ]
where ∇²ₓₓL is the Hessian of the Lagrangian, then set x ← x + p, λ ← λ + Δλ.
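A minimal numerical sketch of this Newton/KKT step, assuming the L = f − λ'h sign convention above; the helper names and the tiny test problem are illustrative, not from the slides.

```python
import numpy as np

def newton_kkt_step(x, lam, grad_f, hess_L, h, jac_h):
    """One Newton step on F(x, lam) = (grad f - jac_h' lam, h) = 0."""
    A = jac_h(x)                                  # m-by-n Jacobian of h
    n, m = x.size, lam.size
    K = np.block([[hess_L(x, lam), -A.T],
                  [A, np.zeros((m, m))]])         # KKT (Newton) matrix
    F = np.concatenate([grad_f(x) - A.T @ lam, h(x)])
    step = np.linalg.solve(K, -F)                 # (p, dlam)
    return x + step[:n], lam + step[n:]

# Tiny test: min 0.5*||x||^2 s.t. x1 + x2 = 1, so x* = (0.5, 0.5).
x1, lam1 = newton_kkt_step(
    np.array([2.0, -1.0]), np.array([0.0]),
    grad_f=lambda x: x,
    hess_L=lambda x, lam: np.eye(2),
    h=lambda x: np.array([x[0] + x[1] - 1.0]),
    jac_h=lambda x: np.array([[1.0, 1.0]]))
print(x1, lam1)   # one step solves it exactly: [0.5 0.5] [0.5]
```

Because the objective is quadratic and the constraint linear, a single Newton step lands on the KKT point exactly.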
6
The SQP equations are the first-order KKT conditions of the QP subproblem
min_p (1/2)p'∇²ₓₓL(x, λ)p + ∇f(x)'p  s.t.  ∇h(x)'p + h(x) = 0.
First Algorithm: solve the QP subproblem for (p, λ) and add p to the iterate. But, like Newton's method, it needs a line search.
7
Usual Tricks
Use a line search, but with a merit function to force progress toward feasibility (see the sketch below).
Use modified Cholesky to ensure descent directions.
Use a quasi-Newton approximation of the Hessian of the Lagrangian.
A linearization of g(x) can be added for inequality constraints too.
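As a sketch of the merit-function idea, here is a simple backtracking search on the ℓ1 merit function φ(x) = f(x) + r·||h(x)||₁ (one common choice; the decrease test is simplified and all names are illustrative).

```python
import numpy as np

def merit_linesearch(x, p, f, h, r=10.0, beta=0.5, max_backtracks=30):
    """Backtrack until the l1 merit function phi = f + r*||h||_1 decreases."""
    phi = lambda z: f(z) + r * np.sum(np.abs(h(z)))
    alpha, phi0 = 1.0, phi(x)
    for _ in range(max_backtracks):
        if phi(x + alpha * p) < phi0:   # simple decrease (no Armijo term)
            break
        alpha *= beta                   # shrink the step
    return alpha
```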
8
General Problem Case – SQP
For min f(x) s.t. h(x) = 0, g(x) ≥ 0, use a QP subproblem that also linearizes the inequalities:
min_p (1/2)p'∇²ₓₓL p + ∇f'p  s.t.  ∇h'p + h = 0,  ∇g'p + g ≥ 0.
Use a merit function, e.g. f(x) + r(Σ|hᵢ(x)| + Σ max(0, −gᵢ(x))), to measure progress.
9
Reduced Gradient
Works similarly to SQP but maintains feasibility at each iteration. In some sense it moves in the SQP direction but then corrects back to feasibility by solving a system of equations, which makes each iteration more expensive.
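A sketch of the feasibility-correction step, assuming the correction is done by (Gauss-)Newton iterations on h(x) = 0 with minimum-norm steps; the function names are illustrative.

```python
import numpy as np

def restore_feasibility(x, h, jac_h, tol=1e-10, max_iters=20):
    """Return x to the manifold h(x) = 0 via minimum-norm Newton steps."""
    for _ in range(max_iters):
        hx = h(x)
        if np.linalg.norm(hx) < tol:
            break
        A = jac_h(x)                              # m-by-n Jacobian of h
        # minimum-norm solution of A dx = -h(x):
        x = x - A.T @ np.linalg.solve(A @ A.T, hx)
    return x
```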
10
Penalty and Barrier Methods
Active set methods have an inherent combinatorial difficulty: which constraints are active? Nonlinear constraints can also be tricky. The idea of penalty and barrier methods is to trade the combinatorial search for the active constraints for a more difficult objective function.
11
Barrier Methods
Create a function that is infinite on the boundary of the feasible region and whose minimizers asymptotically approach the minimizer of the original problem, e.g. the log barrier B(x, μ) = f(x) − μ Σᵢ log gᵢ(x) for constraints gᵢ(x) ≥ 0.
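A minimal log-barrier sketch on a one-variable toy problem (min x² s.t. x ≥ 1, so x* = 1); the problem and names are illustrative, not from the slides.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x**2              # toy objective
g = lambda x: x - 1.0           # constraint g(x) >= 0, i.e. x >= 1

def barrier(xv, mu):
    x = xv[0]
    # infinite outside the interior, f - mu*log(g) inside
    return f(x) - mu * np.log(g(x)) if g(x) > 0 else np.inf

x = np.array([3.0])             # strictly feasible start
for mu in [1.0, 0.1, 0.01, 0.001]:
    x = minimize(barrier, x, args=(mu,), method="Nelder-Mead").x
    print(f"mu={mu:g}  x={x[0]:.4f}")   # x -> 1 from the interior
```

Each solve warm-starts the next; as μ shrinks, the barrier minimizers slide toward the constrained solution from inside the feasible region.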
12
Barrier Methods
The most famous example of a barrier method is the interior-point method for linear programming and other problems.
13
Exterior Penalties
Idea: Construct a function that penalizes infeasibility but imposes no penalty in the feasible region. Asymptotically, the minimizers of such penalty functions approach a solution of the original problem from the exterior of the feasible region.
14
Sample Penalty Function
P(x, r) = f(x) + r(Σᵢ hᵢ(x)² + Σⱼ max(0, −gⱼ(x))²)
The penalty term is positive for all infeasible points and 0 for feasible points.
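A direct transcription of this penalty, assuming constraints written as h(x) = 0 and g(x) ≥ 0 (vector-valued; the function names are illustrative).

```python
import numpy as np

def penalty(x, r, f, h, g):
    """P(x, r) = f(x) + r*(sum_i h_i(x)^2 + sum_j max(0, -g_j(x))^2)."""
    hx = np.atleast_1d(h(x))
    viol = np.maximum(0.0, -np.atleast_1d(g(x)))   # inequality violations
    return f(x) + r * (hx @ hx + viol @ viol)
```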
15
Exterior Penalties Pros
Handles nonlinear equality constraints, which have no interior.
No need to maintain feasibility during the iterations.
No need for the highly nonlinear transformations required to eliminate simple constraints.
16
Example: the transformed (penalized) problem and its first-order necessary conditions (FONC).
17
Exterior Point Algorithm
For an increasing sequence r_k → ∞, solve the penalized problem min_x P(x, r_k).
One can show the algorithm converges asymptotically as r_k goes to infinity.
May require an infinite penalty in the limit.
Large r_k can make the problem ill-conditioned.
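A minimal exterior-point loop on a toy problem (min x² s.t. x ≥ 1): the penalty minimizers x_r = r/(1+r) approach x* = 1 from outside the feasible region as r grows. The problem and names are illustrative, not from the slides.

```python
from scipy.optimize import minimize_scalar

f = lambda x: x**2
g = lambda x: x - 1.0                           # want g(x) >= 0
P = lambda x, r: f(x) + r * max(0.0, -g(x))**2  # quadratic exterior penalty

for r in [1.0, 10.0, 100.0, 1000.0]:
    xr = minimize_scalar(lambda x: P(x, r)).x
    print(f"r={r:7.1f}  x={xr:.6f}  g={g(xr):+.6f}")   # g < 0: exterior
```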
18
Exact Penalty Functions
Avoid the penalty parameter going to infinity and creating ill-conditioning.
One can show that the nonsmooth penalty P(x, r) = f(x) + r(Σᵢ|hᵢ(x)| + Σⱼ max(0, −gⱼ(x))) solves the problem exactly for r sufficiently large but finite.
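A finite-r check on a toy equality-constrained problem (min x² s.t. x = 1, where λ* = 2): once r exceeds |λ*|, the nonsmooth penalty is minimized exactly at x* = 1. Illustrative, not from the slides.

```python
from scipy.optimize import minimize_scalar

f = lambda x: x**2
h = lambda x: x - 1.0                 # equality constraint x = 1
P = lambda x, r: f(x) + r * abs(h(x))

for r in [1.0, 3.0, 10.0]:
    xr = minimize_scalar(lambda x: P(x, r), bounds=(-5, 5), method="bounded").x
    print(f"r={r:5.1f}  x={xr:.4f}")  # x = 0.5 for r=1; exactly 1 once r > 2
```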
19
In Class Exercise
Consider the given problem; the exact penalty problem is P(x, r).
Plot f(x), P(x, 10), P(x, 100), P(x, 1000). Try using the ezplot command with hold between plots. Compare these functions near x*.
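Since the slide's specific problem did not survive the transcript, here is the same plotting exercise with matplotlib in place of MATLAB's ezplot, using a placeholder problem (min x² s.t. x = 1).

```python
import numpy as np
import matplotlib.pyplot as plt

f = lambda x: x**2
P = lambda x, r: f(x) + r * np.abs(x - 1.0)   # placeholder exact penalty

xs = np.linspace(0.0, 2.0, 400)
plt.plot(xs, f(xs), label="f(x)")
for r in (10, 100, 1000):
    plt.plot(xs, P(xs, r), label=f"P(x,{r})")
plt.legend()
plt.show()   # note the sharpening kink at x* = 1 as r grows
```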
21
Exact Penalty
Pros:
Finite penalty parameter.
Solution of the penalty problem solves the original.
Makes the problem more convex.
Cons:
Function is not differentiable.
Must use nonsmooth optimization methods, e.g. subgradient methods.
22
NLP Family of Algorithms
Basic Method: Sequential Quadratic Programming, Sequential Linear Programming, Augmented Lagrangian, Projection or Reduced Gradient
Directions: Steepest Descent, Newton, Quasi-Newton, Conjugate Gradient
Space: Direct, Null, Range
Constraints: Active Set, Barrier, Penalty
Step Size: Line Search, Trust Region
23
NLP Family of Algorithms
Basic Method: Sequential Quadratic Programming, Sequential Linear Programming, Augmented Lagrangian, Projection or Reduced Gradient
Directions: Steepest Descent, Newton, Quasi-Newton, Conjugate Gradient
Space: Direct, Null, Range
Constraints: Active Set, Barrier/Interior Point, Penalty
Step Size: Line Search, Trust Region
24
Augmented Lagrangian
Consider min f(x) s.t. h(x) = 0.
Start with the Lagrangian L(x, λ) = f(x) − λ'h(x).
Add a penalty: L(x, λ, c) = f(x) − λ'h(x) + (c/2)||h(x)||².
The penalty helps ensure that the minimizing point is feasible.
25
Lagrange Multiplier Estimate
L(x, λ, c) = f(x) − λ'h(x) + (c/2)||h(x)||²
Setting ∇ₓL(x, λ, c) = 0 gives ∇f(x) − ∇h(x)(λ − c·h(x)) = 0.
If λ̃ = λ − c·h(x), then λ̃ looks like the Lagrange multiplier!
26
In Class Exercise
Consider the given problem. Find x*, λ* satisfying the KKT conditions.
The augmented Lagrangian is L(x, λ*, c).
Plot f(x), L(x, λ*), L(x, λ*, 4), L(x, λ*, 16), L(x, λ*, 40). Compare these functions near x*.
27
AL has Nice Properties
The penalty term can improve conditioning and convexity.
If x* is regular and the second-order sufficient conditions (SOSC) hold at x*, the Hessian of the augmented Lagrangian is positive definite for c_k sufficiently large.
Automatically gives estimates of the Lagrange multipliers.
28
AL: Method of Multipliers
Given x₀, λ₀, c₀.
For k = 1 to max iterations:
If converged (in both x and λ), stop.
x_{k+1} = argmin_x L(x, λ_k, c_k)
Update λ_{k+1} = λ_k − c_k·h(x_{k+1})
c_{k+1} ≥ c_k
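A minimal method-of-multipliers sketch following these updates, on a toy problem (min x₁² + 2x₂² s.t. x₁ + x₂ = 1, with x* = (2/3, 1/3) and λ* = 4/3); the problem and names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + 2.0 * x[1]**2
h = lambda x: np.array([x[0] + x[1] - 1.0])

def aug_lag(x, lam, c):
    hx = h(x)
    return f(x) - lam @ hx + 0.5 * c * (hx @ hx)

x, lam, c = np.zeros(2), np.zeros(1), 1.0
for k in range(25):
    x = minimize(aug_lag, x, args=(lam, c)).x  # inner unconstrained solve
    if np.linalg.norm(h(x)) < 1e-9:            # feasible => converged
        break
    lam = lam - c * h(x)                       # multiplier update
    c = 2.0 * c                                # slowly raise the penalty
print(x, lam)   # -> [0.667 0.333] and lam -> 4/3
```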
29
Inequality Problems
The method of multipliers can be extended to inequality constraints g(x) ≥ 0 by introducing slack variables and eliminating them; one standard form of the resulting augmented Lagrangian is
L(x, μ, c) = f(x) + (1/(2c)) Σᵢ [ max(0, μᵢ − c·gᵢ(x))² − μᵢ² ].
If strict complementarity holds, this function is twice differentiable near the solution.
30
Inequality Problems
A KKT point of the augmented Lagrangian is a KKT point of the original problem.
The estimate of the Lagrange multiplier is μ̃ = max(0, μ − c·g(x)).
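A sketch of the multiplier update for g(x) ≥ 0 using the clipped estimate μ ← max(0, μ − c·g(x)), on an illustrative problem (min x² s.t. x ≥ 1, so x* = 1, μ* = 2); the augmented Lagrangian uses the standard form above.

```python
from scipy.optimize import minimize_scalar

f = lambda x: x**2
g = lambda x: x - 1.0

def aug_lag(x, mu, c):
    # f + (1/2c)*(max(0, mu - c*g)^2 - mu^2), the form shown above
    return f(x) + (max(0.0, mu - c * g(x))**2 - mu**2) / (2.0 * c)

mu, c = 0.0, 2.0
for k in range(20):
    x = minimize_scalar(lambda x: aug_lag(x, mu, c)).x
    mu = max(0.0, mu - c * g(x))   # clipped multiplier estimate/update
print(x, mu)                        # -> x = 1, mu = 2
```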
31
Inequality Problems
32
NLP Family of Algorithms
Basic Method: Sequential Quadratic Programming, Sequential Linear Programming, Augmented Lagrangian, Projection or Reduced Gradient
Directions: Steepest Descent, Newton, Quasi-Newton, Conjugate Gradient
Space: Direct, Null, Range
Constraints: Active Set, Barrier, Penalty
Step Size: Line Search, Trust Region
33
Hybrid Approaches
A method can be any combination of these algorithm components.
MINOS: for linear programs it uses the simplex method. The generalization to nonlinear programs with linear constraints is the reduced gradient method. Nonlinear constraints are handled with the augmented Lagrangian. A BFGS estimate of the Hessian is used.