Constraints.


Constraints

Constrained Optimization
- Constrained optimization minimizes an objective subject to restrictions on the design points, called constraints.
- A variety of techniques transform constrained optimization problems into unconstrained problems.
- The new optimization problem statement is: minimize f(x) subject to x ∈ X.
- The set X is called the feasible set.

Constrained Optimization
- Constraints that bound the feasible set can change the location of the optimizer.

Constraint Types
- Constraints are generally formulated using two types:
  - Equality constraints: h(x) = 0
  - Inequality constraints: g(x) ≤ 0
- Any optimization problem can be written as: minimize f(x) subject to h_i(x) = 0 for all i and g_j(x) ≤ 0 for all j.

Transformations to Remove Constraints
- If necessary, some problems can be reformulated to incorporate constraints into the objective function.
- If x is constrained between a and b, substituting a bounded transform removes the constraint; one standard choice is x = (a + b)/2 + ((b − a)/2) sin(x̂), where x̂ is unconstrained.

Transformations to Remove Constraints
- Example 1
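A concrete sketch of the interval transform: the sine mapping, the toy objective (x − 3)², and the crude finite-difference descent below are all illustrative assumptions, not the slide's original example.

```python
import math

def to_bounded(x_hat, a, b):
    """Map an unconstrained variable x_hat into [a, b] via a sine
    transform (one common choice of bounded transform)."""
    return (a + b) / 2 + (b - a) / 2 * math.sin(x_hat)

def minimize_1d(objective, x0=0.0, step=1e-2, iters=20000):
    """Crude fixed-step gradient descent using finite differences."""
    x, h = x0, 1e-6
    for _ in range(iters):
        grad = (objective(x + h) - objective(x - h)) / (2 * h)
        x -= step * grad
    return x

# Minimize (x - 3)**2 subject to 0 <= x <= 2: the unconstrained
# optimum x = 3 is infeasible, so the solution lies at the bound x = 2.
f = lambda x_hat: (to_bounded(x_hat, 0.0, 2.0) - 3.0) ** 2
x_star = to_bounded(minimize_1d(f), 0.0, 2.0)
```

Because the transform maps every x̂ into [0, 2], the inner optimizer never needs to know about the constraint at all.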

Transformations to Remove Constraints
- Example 2: consider a constrained optimization problem with an equality constraint.
- Solve the constraint for one of the variables to eliminate it.
- Substituting back yields a transformed, unconstrained optimization problem.
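A minimal worked instance of this substitution; the objective x1² + x2² and constraint x1 + x2 = 1 are hypothetical, chosen only to illustrate the technique.

```python
# Hypothetical instance: minimize f(x1, x2) = x1**2 + x2**2
# subject to x1 + x2 = 1.  Solving the constraint for x2 gives
# x2 = 1 - x1, leaving a one-dimensional unconstrained problem.

def f_transformed(x1):
    x2 = 1.0 - x1              # constraint eliminated by substitution
    return x1 ** 2 + x2 ** 2   # = 2*x1**2 - 2*x1 + 1

# Setting the derivative 4*x1 - 2 to zero gives the minimizer:
x1_star = 0.5
x2_star = 1.0 - x1_star        # recover the eliminated variable
```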

Lagrange Multipliers
- The method of Lagrange multipliers is used to optimize a function subject to equality constraints only.
- "Lagrange multipliers" also refers to the variables introduced by the method, denoted λ.

Lagrange Multipliers
- Form the Lagrangian: L(x, λ) = f(x) − λ h(x).
- Set the gradient of the Lagrangian with respect to x to zero: ∇f(x) − λ ∇h(x) = 0.
- Use the constraint equation h(x) = 0 and solve for x and λ.
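These steps can be carried out by hand on a small instance; the objective x1² + x2² and constraint x1 + x2 = 1 below are hypothetical.

```python
# Hypothetical instance: minimize f(x) = x1**2 + x2**2 subject to
# h(x) = x1 + x2 - 1 = 0.  The Lagrangian is
#   L(x, lam) = x1**2 + x2**2 - lam * (x1 + x2 - 1)
# Setting its gradient w.r.t. x to zero gives 2*x1 = lam and
# 2*x2 = lam, so x1 = x2; the constraint x1 + x2 = 1 then yields:
x1 = x2 = 0.5
lam = 2 * x1   # = 1.0

# Verify the Lagrange condition grad f = lam * grad h at the solution:
grad_f = (2 * x1, 2 * x2)   # gradient of the objective
grad_h = (1.0, 1.0)         # gradient of the constraint
```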

Lagrange Multipliers
- Intuitively, the method of Lagrange multipliers finds the point x* where the contour line of the objective is tangent to the constraint, i.e. the gradient of the objective is parallel to the gradient of the constraint: ∇f(x*) = λ ∇h(x*).

Inequality Constraints
- If the solution lies on the constraint boundary (g(x*) = 0), the constraint is called active, and the Lagrange condition ∇f(x*) + μ ∇g(x*) = 0 holds for some constant μ ≥ 0.
- If the solution lies strictly inside the boundary (g(x*) < 0), the constraint is called inactive, and the Lagrange condition holds with μ = 0.

Inequality Constraints
- The problem can be formulated so that the Lagrangian evaluates to infinity outside the feasible set: with L(x, μ) = f(x) + μ g(x), if g(x) > 0 then max over μ ≥ 0 of L(x, μ) = ∞.
- The new optimization problem becomes: minimize over x of (maximize over μ ≥ 0 of L(x, μ)).
- This is called the primal problem.

Inequality Constraints
- Any primal solution x* must satisfy the KKT conditions:
  - Feasibility: g_i(x*) ≤ 0 for all i
  - Dual feasibility: μ_i ≥ 0 for all i (penalization is toward feasibility)
  - Complementary slackness: μ_i g_i(x*) = 0, i.e. either μ_i or g_i(x*) is zero
  - Stationarity: ∇f(x*) + Σ_i μ_i ∇g_i(x*) = 0, so the objective is tangent to each active constraint
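The four conditions can be checked mechanically. This sketch assumes a single inequality constraint and a hypothetical one-dimensional problem, minimize (x − 3)² subject to x − 2 ≤ 0, whose constraint is active at x* = 2 with multiplier μ = 2.

```python
def kkt_satisfied(x, mu, grad_f, g, grad_g, tol=1e-8):
    """Check the four KKT conditions for one inequality constraint
    g(x) <= 0 at a candidate point (x, mu)."""
    feasible = g(x) <= tol                               # feasibility
    dual_feasible = mu >= -tol                           # mu >= 0
    slack = abs(mu * g(x)) <= tol                        # complementary slackness
    stationary = abs(grad_f(x) + mu * grad_g(x)) <= tol  # stationarity
    return feasible and dual_feasible and slack and stationary

# Hypothetical instance: minimize (x - 3)**2 subject to x - 2 <= 0.
# Stationarity 2*(x - 3) + mu = 0 at the active point x = 2 gives mu = 2.
ok = kkt_satisfied(x=2.0, mu=2.0,
                   grad_f=lambda x: 2 * (x - 3),
                   g=lambda x: x - 2,
                   grad_g=lambda x: 1.0)
```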

Duality
- The method of Lagrange multipliers can be generalized to define the generalized Lagrangian: L(x, λ, μ) = f(x) − Σ_i λ_i h_i(x) + Σ_j μ_j g_j(x).
- As previously mentioned, the primal form is: minimize over x of (maximize over μ ≥ 0 and λ of L(x, λ, μ)).
- Reversing the order of operations leads to the dual form: maximize over μ ≥ 0 and λ of (minimize over x of L(x, λ, μ)).

Duality
- The min-max inequality states that for any function f(a, b): max over a of (min over b of f(a, b)) ≤ min over b of (max over a of f(a, b)).
- Therefore, the solution to the dual problem is a lower bound on the primal solution.
- The inner minimization of the dual problem defines the dual function: D(λ, μ) = min over x of L(x, λ, μ).

Duality
- By the max-min inequality, the dual solution is a lower bound on the primal solution: d* ≤ p*.
- The difference p* − d* between the primal and dual solutions is called the duality gap.
- Showing a zero duality gap is a "certificate" of optimality.
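A tiny example where the gap can be computed explicitly. The problem, minimize x² subject to 1 − x ≤ 0, is hypothetical; its dual function is obtained by minimizing the Lagrangian over x in closed form.

```python
# Hypothetical instance: minimize x**2 subject to g(x) = 1 - x <= 0.
# The primal optimum is x* = 1, so p_star = 1.
p_star = 1.0 ** 2

# Lagrangian L(x, mu) = x**2 + mu * (1 - x).  Setting dL/dx = 2x - mu
# to zero gives x = mu / 2, so the dual function is:
def dual(mu):
    x = mu / 2
    return x ** 2 + mu * (1 - x)   # = mu - mu**2 / 4

# Maximize the dual over mu >= 0 with a coarse grid search:
d_star = max(dual(mu / 1000) for mu in range(0, 5001))

# Weak duality guarantees d_star <= p_star; here the gap is zero,
# which certifies optimality of the primal solution.
gap = p_star - d_star
```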

Penalty Methods
- Penalty methods reformulate a constrained optimization problem as an unconstrained problem by penalizing the objective function value when constraints are violated.
- A simple example: minimize f(x) + ρ p(x), where p(x) measures constraint violation and ρ > 0 is a penalty weight.

Penalty Methods
- Count penalty: p(x) = Σ_i (g_i(x) > 0) + Σ_j (h_j(x) ≠ 0), the number of violated constraints.
- Quadratic penalty: p(x) = Σ_i max(g_i(x), 0)² + Σ_j h_j(x)².
- Mixed penalty: a weighted combination of the two, e.g. ρ_1 (count penalty) + ρ_2 (quadratic penalty).
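The penalties above can be sketched directly, along with the usual outer loop that re-minimizes while increasing ρ. The test problem, step sizes, and iteration counts are arbitrary illustrative choices.

```python
def count_penalty(gs, hs, x):
    """Number of violated constraints (not differentiable)."""
    return sum(g(x) > 0 for g in gs) + sum(h(x) != 0 for h in hs)

def quadratic_penalty(gs, hs, x):
    """Smooth penalty: sum of squared constraint violations."""
    return (sum(max(g(x), 0.0) ** 2 for g in gs)
            + sum(h(x) ** 2 for h in hs))

def penalty_method(f, gs, hs, x, rho=1.0, gamma=2.0, outer=6):
    """Minimize f + rho * quadratic_penalty, increasing rho each round."""
    for _ in range(outer):
        obj = lambda y: f(y) + rho * quadratic_penalty(gs, hs, y)
        eps = 1e-6
        for _ in range(2000):  # crude inner finite-difference descent
            grad = (obj(x + eps) - obj(x - eps)) / (2 * eps)
            x -= 0.01 * grad
        rho *= gamma           # tighten the penalty
    return x

# Hypothetical instance: minimize (x - 3)**2 subject to x - 2 <= 0.
x_star = penalty_method(lambda x: (x - 3) ** 2,
                        gs=[lambda x: x - 2], hs=[], x=0.0)
```

Note that the iterates approach the feasible set from outside: for any finite ρ the penalized optimum is slightly infeasible, which is why ρ must grow.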

Augmented Lagrange Method
- An adaptation of the penalty method, specific to equality constraints: minimize f(x) + (ρ/2) Σ_i h_i(x)² − Σ_i λ_i h_i(x).
- With the update λ ← λ − ρ h(x) after each unconstrained minimization, λ converges toward the Lagrange multiplier.
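A minimal sketch of the iteration for one equality constraint, using the sign convention L = f − λh (so the recovered multiplier here is negative); the test problem and descent parameters are assumptions.

```python
def augmented_lagrangian(f, h, x, rho=10.0, lam=0.0, outer=20):
    """Augmented Lagrange method for one equality constraint h(x) = 0:
    minimize f(x) + (rho/2)*h(x)**2 - lam*h(x), then update
    lam <- lam - rho*h(x)."""
    for _ in range(outer):
        obj = lambda y: f(y) + 0.5 * rho * h(y) ** 2 - lam * h(y)
        eps = 1e-6
        for _ in range(2000):  # crude inner finite-difference descent
            grad = (obj(x + eps) - obj(x - eps)) / (2 * eps)
            x -= 0.05 * grad
        lam -= rho * h(x)      # lam converges to the Lagrange multiplier
    return x, lam

# Hypothetical instance: minimize (x - 3)**2 subject to x - 2 = 0.
# Stationarity 2*(x - 3) - lam = 0 at x = 2 gives multiplier lam = -2.
x_star, lam_star = augmented_lagrangian(lambda x: (x - 3) ** 2,
                                        lambda x: x - 2, x=0.0)
```

Unlike the plain quadratic penalty, the multiplier term lets the method reach the exactly feasible solution without driving ρ to infinity.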

Interior Point Methods
- Also called barrier methods, interior point methods ensure that each step remains feasible.
- This allows premature termination to still return a nearly optimal, feasible point.
- Barrier functions are implemented similarly to penalties but must meet the following conditions:
  - Continuous
  - Non-negative
  - Approach infinity as x approaches the constraint boundary

Interior Point Methods
- Inverse barrier: p(x) = −Σ_i 1/g_i(x).
- Log barrier: p(x) = −Σ_i log(−g_i(x)).
- New optimization problem: minimize f(x) + (1/ρ) p(x), where increasing ρ weakens the barrier.
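A log-barrier sketch for a single inequality constraint; the test problem, step size, and ρ schedule are illustrative assumptions.

```python
import math

def interior_point(f, g, x, rho=1.0, gamma=2.0, outer=8):
    """Log-barrier method for one inequality constraint g(x) <= 0.
    Minimizes f(x) - log(-g(x)) / rho, increasing rho each round so
    the barrier weakens; every iterate stays strictly feasible."""
    for _ in range(outer):
        obj = lambda y: f(y) - math.log(-g(y)) / rho
        eps = 1e-7
        for _ in range(5000):  # crude inner finite-difference descent
            grad = (obj(x + eps) - obj(x - eps)) / (2 * eps)
            step = 1e-4
            while g(x - step * grad) >= 0:  # never cross the boundary
                step /= 2
            x -= step * grad
        rho *= gamma
    return x

# Hypothetical instance: minimize (x - 3)**2 subject to x - 2 <= 0,
# starting from the strictly feasible point x = 0.
x_star = interior_point(lambda x: (x - 3) ** 2, lambda x: x - 2, x=0.0)
```

In contrast to penalty methods, the iterates approach the solution from inside the feasible set, so stopping early still returns a feasible point.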

Summary
- Constraints are requirements on the design points that a solution must satisfy.
- Some constraints can be transformed or substituted into the problem to yield an unconstrained optimization problem.
- Analytical methods based on Lagrange multipliers yield the generalized Lagrangian and the necessary conditions for optimality under constraints.
- A constrained optimization problem has a dual formulation that is often easier to solve and whose solution is a lower bound on the solution to the original problem.

Summary
- Penalty methods penalize infeasible solutions and often provide gradient information to the optimizer to guide infeasible points toward feasibility.
- Interior point methods maintain feasibility, using barrier functions to keep iterates from leaving the feasible set.