Name: Mehrab Khazraei (145061)
Title: The Penalty (Exterior Penalty) Function Method
Professor: Sahand Daneshvar


The penalty, or exterior penalty, function method is an approach for nonlinear programming problems having equality and inequality constraints. Rather than handling the constraints directly, the exterior penalty function adds a term to the objective function that penalizes any violation of a constraint. The procedure generates a sequence of infeasible points whose limit is an optimal solution of the original problem.

Methods using penalty functions transform a constrained problem into a single unconstrained problem or into a sequence of unconstrained problems. Consider first a problem with the single equality constraint h(x) = 0:

Minimize f(x) subject to h(x) = 0.

It can be replaced by the following unconstrained problem, where μ > 0 is a large number:

Minimize f(x) + μh²(x) subject to x Є Rⁿ.

For large μ, an optimal solution of this problem must have h²(x) close to zero, so the constraint is driven toward satisfaction.
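As a minimal sketch of this idea (assuming NumPy and SciPy are available; the toy problem below is ours, not from the slides), penalizing h²(x) with a growing μ drives the minimizers toward the feasible set:

```python
# Minimal sketch of the equality-constraint penalty, assuming SciPy.
# Toy problem (illustrative, not from the slides):
#   minimize x1^2 + x2^2  subject to  h(x) = x1 + x2 - 1 = 0,
# whose exact solution is (0.5, 0.5).
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2
h = lambda x: x[0] + x[1] - 1.0

for mu in [1.0, 10.0, 100.0, 1000.0]:
    res = minimize(lambda x: f(x) + mu * h(x)**2, np.zeros(2))
    print(mu, res.x, h(res.x))  # constraint violation h(x) shrinks as mu grows
```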

Now consider the following problem having a single inequality constraint:

Minimize f(x) subject to g(x) ≤ 0.

Here the form f(x) + μg²(x) is not appropriate, since a penalty would be incurred whether g(x) ≤ 0 or g(x) > 0. A penalty is desired only if the point x is not feasible, that is, only if g(x) > 0. A suitable unconstrained problem is therefore:

Minimize f(x) + μ max{0, g(x)} subject to x Є Rⁿ.

If g(x) > 0, then max{0, g(x)} = g(x) > 0 and the penalty μg(x) is incurred. However, observe that at points x where g(x) = 0, the foregoing objective function might not be differentiable, even though g is differentiable. If differentiability is desirable in such a case, we could instead consider a penalty function term of the type μ[max{0, g(x)}]².
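A quick numeric illustration (a sketch of ours, not from the slides): both penalty terms vanish on the feasible side g(x) ≤ 0 and grow on the infeasible side, but squaring removes the kink at g(x) = 0:

```python
# Comparing max{0, g} with [max{0, g}]^2 around the constraint boundary.
import numpy as np

g_vals = np.array([-1.0, -0.1, 0.0, 0.1, 1.0])  # sampled values of g(x)
print(np.maximum(0.0, g_vals))      # [0.  0.  0.  0.1 1. ]  kinked at 0
print(np.maximum(0.0, g_vals)**2)   # [0.  0.  0.  0.01 1.]  smooth at 0
```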

In general, a suitable penalty function must incur a positive penalty at infeasible points and no penalty at feasible points. If the constraints are of the form g_i(x) ≤ 0 for i = 1, ..., m and h_i(x) = 0 for i = 1, ..., l, a suitable penalty function α is defined by

α(x) = ∑_{i=1}^{m} φ[g_i(x)] + ∑_{i=1}^{l} ψ[h_i(x)]

where φ and ψ are continuous functions satisfying the following:

φ(y) = 0 if y ≤ 0, and φ(y) > 0 if y > 0
ψ(y) = 0 if y = 0, and ψ(y) > 0 if y ≠ 0

Typically, φ and ψ are of the forms

φ(y) = [max{0, y}]^p,  ψ(y) = |y|^p

where p is a positive integer (p = 2 gives the widely used quadratic penalty). Thus, the penalty function α is usually of the form

α(x) = ∑_{i=1}^{m} [max{0, g_i(x)}]^p + ∑_{i=1}^{l} |h_i(x)|^p
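A hedged sketch of this general penalty function in code (the helper name alpha and the sample constraints are ours, chosen for illustration):

```python
# General exterior penalty alpha(x) with power p (p = 2: quadratic penalty).
def alpha(x, ineq, eq, p=2):
    """sum_i [max{0, g_i(x)}]^p + sum_i |h_i(x)|^p over the given constraints."""
    return (sum(max(0.0, g(x)) ** p for g in ineq)
            + sum(abs(h(x)) ** p for h in eq))

# Illustrative constraints: g1(x) = x[0] - 1 <= 0 and h1(x) = x[0] + x[1] = 0.
g1 = lambda x: x[0] - 1.0
h1 = lambda x: x[0] + x[1]
print(alpha([2.0, 1.0], [g1], [h1]))  # max{0,1}^2 + |3|^2 = 1 + 9 = 10.0
```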

Exterior Penalty Function Methods: This section presents and proves an important result that justifies using exterior penalty functions as a means for solving constrained problems. Consider the following primal and penalty problems.

Primal Problem

Minimize f(x)
subject to g(x) ≤ 0
h(x) = 0
x Є X

where g is a vector function with components g_1, ..., g_m, and h is a vector function with components h_1, ..., h_l. Here, f, g_1, ..., g_m, h_1, ..., h_l are continuous functions defined on Rⁿ, and X is a nonempty set in Rⁿ. The set X typically represents simple constraints that can easily be handled explicitly, such as lower and upper bounds on the variables.

Penalty Problem

Let α be a continuous penalty function of the form defined above, satisfying the stated properties of φ and ψ. The basic penalty function approach attempts to find

sup θ(μ) subject to μ ≥ 0,

where θ(μ) = inf{f(x) + μα(x) : x Є X}. The main theorem of this section states that

inf{f(x) : x Є X, g(x) ≤ 0, h(x) = 0} = sup{θ(μ) : μ ≥ 0} = lim_{μ→∞} θ(μ).
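The following sketch (our toy problem, assuming SciPy) shows θ(μ) increasing toward the primal optimal value as μ grows, as the theorem asserts:

```python
# theta(mu) = inf{f(x) + mu*alpha(x)} approaches the primal optimum from below.
# Toy problem: minimize x^2 subject to g(x) = 1 - x <= 0; optimal value is 1.
from scipy.optimize import minimize_scalar

f = lambda x: x**2
alpha = lambda x: max(0.0, 1.0 - x)**2  # quadratic penalty for g(x) = 1 - x

for mu in [1.0, 10.0, 100.0, 1000.0]:
    theta = minimize_scalar(lambda x: f(x) + mu * alpha(x)).fun
    print(mu, theta)  # 0.5, 0.909..., 0.990..., 0.999... -> 1 (nondecreasing)
```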

From this result it is clear that we can get arbitrarily close to the optimal objective value of the primal problem by computing θ(μ) for a sufficiently large μ. This result is established in the main theorem; first, however, the following lemma is needed.

Lemma. Suppose that f, g_1, ..., g_m, h_1, ..., h_l are continuous functions on Rⁿ, and let X be a nonempty set in Rⁿ. Let α be a continuous function on Rⁿ given by

α(x) = ∑_{i=1}^{m} φ[g_i(x)] + ∑_{i=1}^{l} ψ[h_i(x)],

and suppose that for each μ there is an x_μ Є X such that θ(μ) = f(x_μ) + μα(x_μ). Then the following statements hold:

1. inf{f(x) : x Є X, g(x) ≤ 0, h(x) = 0} ≥ sup{θ(μ) : μ ≥ 0}, where θ(μ) = inf{f(x) + μα(x) : x Є X}, and where g is the vector function whose components are g_1, ..., g_m, and h is the vector function whose components are h_1, ..., h_l.

2. f(x_μ) is a nondecreasing function of μ ≥ 0, θ(μ) is a nondecreasing function of μ, and α(x_μ) is a nonincreasing function of μ.

Initialization Step: Let ε > 0 be a termination scalar. Choose an initial point x_1, an initial penalty parameter μ_1 > 0, and a scalar β > 1. Let k = 1, and go to the Main Step.

Main Step
1. Starting with x_k, solve the following problem:

Minimize f(x) + μ_k α(x) subject to x Є X.

Let x_{k+1} be an optimal solution and go to Step 2.

2. If μ_k α(x_{k+1}) < ε, stop; otherwise, let μ_{k+1} = βμ_k, replace k by k + 1, and go to Step 1.
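A minimal sketch of this loop in code (assuming SciPy; the function name penalty_method and the default parameter values are ours):

```python
# Exterior penalty method: solve a sequence of unconstrained subproblems,
# multiplying the penalty parameter by beta until mu_k * alpha(x_{k+1}) < eps.
import numpy as np
from scipy.optimize import minimize

def penalty_method(f, alpha, x1, mu1=0.1, beta=10.0, eps=1e-6, max_iter=30):
    x, mu = np.asarray(x1, dtype=float), mu1
    for _ in range(max_iter):
        # Step 1: minimize f + mu_k * alpha, warm-started at the previous point.
        x = minimize(lambda z: f(z) + mu * alpha(z), x).x
        # Step 2: stop when the weighted penalty is small enough.
        if mu * alpha(x) < eps:
            break
        mu *= beta  # mu_{k+1} = beta * mu_k
    return x, mu
```

Warm-starting each subproblem at the previous solution is the usual design choice here, since successive penalized problems differ only in μ.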

Example. Consider the following problem:

Minimize (x_1 - 2)⁴ + (x_1 - 2x_2)²
subject to x_1² - x_2 = 0.

At iteration k, for a given penalty parameter μ_k, the problem to be solved for obtaining x_{μ_k}, using the quadratic penalty function, is:

Minimize (x_1 - 2)⁴ + (x_1 - 2x_2)² + μ_k (x_1² - x_2)².
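As a usage illustration, the penalty_method sketch defined above can be applied to this example (the starting point is an arbitrary choice of ours):

```python
# Slide example: min (x1-2)^4 + (x1-2*x2)^2  s.t.  x1^2 - x2 = 0,
# solved with the penalty_method sketch defined earlier.
f_ex = lambda x: (x[0] - 2.0)**4 + (x[0] - 2.0*x[1])**2
alpha_ex = lambda x: (x[0]**2 - x[1])**2  # quadratic penalty for the equality

x_star, mu_k = penalty_method(f_ex, alpha_ex, x1=[2.0, 1.0])
print(x_star, mu_k)  # expected to converge near x = (0.95, 0.89)
```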

A table summarizing the computations at each iteration accompanied the original slides (omitted here).

THANK YOU FOR YOUR ATTENTION