EASTERN MEDITERRANEAN UNIVERSITY, Department of Industrial Engineering. Nonlinear Optimization, Spring 2014-15. Instructor: Prof. Dr. Sahand Daneshvar.

APPLICATIONS OF THE JACOBIAN METHOD. EASTERN MEDITERRANEAN UNIVERSITY, Department of Industrial Engineering. Nonlinear Optimization, Spring 2014-15. Instructor: Prof. Dr. Sahand Daneshvar. Submitted by: AAKASH AHMED. Student number:

Constrained Derivatives (Jacobian) Method. Minimize z = f(X) subject to g(X) = 0, where X = (x1, x2, ..., xn) and g = (g1, g2, ..., gm). The functions f(X) and gi(X), i = 1, 2, ..., m, are twice continuously differentiable. The idea of using constrained derivatives is to develop a closed-form expression for the first partial derivatives of f(X) at all points that satisfy the constraints g(X) = 0. The corresponding stationary points are identified as the points at which these partial derivatives vanish. The sufficiency conditions introduced earlier for unconstrained problems can then be used to check the identity of these stationary points.

To clarify the proposed concept, consider f(x1, x2) as illustrated in the figure below. This function is to be minimized subject to the constraint g1(x1, x2) = x2 - b = 0, where b is a constant. From the figure, the curve designated by the three points A, B, and C represents the values of f(x1, x2) for which the given constraint is always satisfied. The constrained derivatives method defines the gradient of f(x1, x2) at any point on the curve ABC. Point B, at which the constrained derivative vanishes, is a stationary point for the constrained problem. The method is now developed mathematically. By Taylor's theorem, for X + ΔX in the feasible neighborhood of X, we have

f(X + ΔX) - f(X) = ∇f(X) ΔX + O(Δxj²)

and

g(X + ΔX) - g(X) = ∇g(X) ΔX + O(Δxj²)

As Δxj → 0, these relations reduce to

∂f(X) = ∇f(X) ∂X and ∂g(X) = ∇g(X) ∂X

Figure: Demonstration of the idea of the Jacobian method

For feasibility, we must have g(X) = 0 and g(X + ΔX) = 0, and it follows that ∂g(X) = 0. Hence

∂f(X) - ∇f(X) ∂X = 0
∇g(X) ∂X = 0

This gives (m + 1) equations in (n + 1) unknowns, ∂f(X) and ∂X. Note that ∂f(X) is a dependent variable, and hence is determined as soon as ∂X is known. This means that, in effect, we have m equations in n unknowns. If m > n, at least (m - n) equations are redundant. Eliminating the redundancy, the system reduces to m ≤ n. If m = n, the only solution is ∂X = 0, and X has no feasible neighborhood, which means that the solution space consists of one point only. The remaining case, where m < n, requires further elaboration.

Define X = (Y, Z) such that Y = (y1, y2, ..., ym) and Z = (z1, z2, ..., zn-m). The vectors Y and Z are called the dependent and independent variables, respectively. Rewriting the gradient vectors of f and g in terms of Y and Z, we get

∇f(X) = (∇_Y f, ∇_Z f)
∇g(X) = (∇_Y g, ∇_Z g)

Define

J = ∇_Y g,   C = ∇_Z g

J (m × m) is called the Jacobian matrix and C (m × (n - m)) the control matrix. The Jacobian J is assumed nonsingular. This is always possible because the given m equations are independent by definition. The components of the vector Y must thus be selected from among those of X such that J is nonsingular. The original set of equations in ∂f(X) and ∂X may be written as

∂f(X) = ∇_Y f ∂Y + ∇_Z f ∂Z
J ∂Y = -C ∂Z

Because J is nonsingular, its inverse J⁻¹ exists. Hence

∂Y = -J⁻¹ C ∂Z

Substituting for ∂Y in the equation for ∂f(X) gives ∂f as a function of ∂Z, that is,

∂f(X) = (∇_Z f - ∇_Y f J⁻¹ C) ∂Z

From this equation, the constrained derivative with respect to the independent vector Z is given by

∇_c f = ∂_c f(X) / ∂Z = ∇_Z f - ∇_Y f J⁻¹ C

where ∇_c f is the constrained gradient of f with respect to Z. A stationary point of the constrained problem must therefore satisfy ∇_c f = 0.
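To make the computation concrete, here is a minimal numerical sketch (an illustration added for this write-up, not part of the original slides). It evaluates ∇_c f = ∇_Z f - ∇_Y f J⁻¹ C with NumPy for a small assumed toy problem: minimize f = x1² + x2² + x3² subject to x1 + x2 + x3 - 1 = 0, taking Y = (x1) and Z = (x2, x3).

import numpy as np

# Assumed toy problem (not from the slides):
#   minimize f(x) = x1^2 + x2^2 + x3^2
#   subject to g(x) = x1 + x2 + x3 - 1 = 0
# Partition: dependent Y = (x1), independent Z = (x2, x3).

def grad_f(x):
    return 2.0 * x                        # gradient of f

def grad_g(x):
    return np.array([[1.0, 1.0, 1.0]])    # 1 x 3 gradient of the single constraint

def constrained_gradient(x, dep=(0,), ind=(1, 2)):
    # Reduced (constrained) gradient: grad_Z f - grad_Y f J^(-1) C
    dep, ind = list(dep), list(ind)
    gf, gg = grad_f(x), grad_g(x)
    J, C = gg[:, dep], gg[:, ind]         # Jacobian (m x m) and control (m x (n-m)) matrices
    return gf[ind] - gf[dep] @ np.linalg.solve(J, C)

print(constrained_gradient(np.array([1.0, 0.0, 0.0])))   # nonzero, e.g. [-2. -2.]
print(constrained_gradient(np.array([1/3, 1/3, 1/3])))   # ~[0. 0.] at the constrained minimum

At the non-optimal feasible point (1, 0, 0) the constrained gradient is nonzero, while at the constrained minimum (1/3, 1/3, 1/3) it vanishes, in line with the stationarity condition above.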

The sufficiency conditions are similar to those developed earlier for unconstrained problems. The Hessian matrix will correspond to the independent vector Z, and its elements must be the constrained second derivatives. To show how this is obtained, let

∇_c f = ∇_Z f - W C, where W = ∇_Y f J⁻¹

It thus follows that the ith row of the (constrained) Hessian matrix is ∂(∇_c f)/∂zi. Notice that W is a function of Y and Y is a function of Z. Thus, the partial derivative of ∇_c f with respect to zi is based on the following chain rule:

∂wj/∂zi = Σk (∂wj/∂yk)(∂yk/∂zi)
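Because the constrained Hessian is taken with respect to Z only, one practical way to sanity-check it (again an added illustration, under the same assumed toy problem as above) is to form the reduced function φ(Z) = f(Y(Z), Z), recovering Y(Z) by solving g(Y, Z) = 0 with scipy.optimize.fsolve, and then approximate the Hessian of φ by central differences.

import numpy as np
from scipy.optimize import fsolve

# Same assumed toy problem as in the previous sketch:
#   f(x) = x1^2 + x2^2 + x3^2,  g(x) = x1 + x2 + x3 - 1 = 0
# Dependent Y = (x1), independent Z = (x2, x3).

def f(x):
    return x[0]**2 + x[1]**2 + x[2]**2

def g(y, z):
    return [y[0] + z[0] + z[1] - 1.0]

def phi(z):
    # Reduced objective phi(Z) = f(Y(Z), Z), with Y(Z) obtained from the constraint
    y = fsolve(lambda y: g(y, z), x0=[0.0])
    return f(np.concatenate([y, z]))

def hessian_fd(fun, z, h=1e-3):
    # Central-difference approximation of the Hessian of a scalar function of Z
    n = len(z)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            zpp = z.copy(); zpp[i] += h; zpp[j] += h
            zpm = z.copy(); zpm[i] += h; zpm[j] -= h
            zmp = z.copy(); zmp[i] -= h; zmp[j] += h
            zmm = z.copy(); zmm[i] -= h; zmm[j] -= h
            H[i, j] = (fun(zpp) - fun(zpm) - fun(zmp) + fun(zmm)) / (4.0 * h * h)
    return H

z_star = np.array([1.0/3.0, 1.0/3.0])   # Z-part of the constrained minimum
print(hessian_fd(phi, z_star))           # ~[[4, 2], [2, 4]]: positive definite, so a minimum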

Example 1: Consider the following problem:

Hence, the incremental value of the constrained f is given as

Example 2: Application of the Jacobian Method to an LP Problem. Consider the linear program

Maximize z = 2x1 + 3x2

subject to

x1 + x2 + x3 = 5
x1 - x2 + x4 = 3
x1, x2, x3, x4 ≥ 0

To account for the nonnegativity constraints, substitute xj = wj², j = 1, 2, 3, 4. With this substitution, the nonnegativity conditions become implicit and the original problem becomes

Maximize z = 2w1² + 3w2²

subject to

w1² + w2² + w3² = 5
w1² - w2² + w4² = 3

To apply the Jacobian method, let Y = (w1, w2) and Z = (w3, w4). (In the terminology of linear programming, Y and Z correspond to the basic and nonbasic variables, respectively.) Thus

J = ∇_Y g = [[2w1, 2w2], [2w1, -2w2]],   C = ∇_Z g = [[2w3, 0], [0, 2w4]]

so that

J⁻¹ = [[1/(4w1), 1/(4w1)], [1/(4w2), -1/(4w2)]]
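The following sketch (an added illustration; the chosen point, and the xj = wj² substitution it relies on, are assumptions rather than content reproduced from the slides) evaluates the constrained gradient of the transformed problem at one interior feasible point, w = (1, √2, √2, 2), which corresponds to x = (1, 2, 2, 4) and satisfies both equality constraints.

import numpy as np

# Transformed problem (assuming the substitution x_j = w_j^2):
#   maximize f(w) = 2*w1^2 + 3*w2^2
#   subject to g1 = w1^2 + w2^2 + w3^2 - 5 = 0
#              g2 = w1^2 - w2^2 + w4^2 - 3 = 0
# Dependent Y = (w1, w2), independent Z = (w3, w4).

def grad_f(w):
    return np.array([4.0*w[0], 6.0*w[1], 0.0, 0.0])

def grad_g(w):
    return np.array([[2.0*w[0],  2.0*w[1], 2.0*w[2], 0.0     ],
                     [2.0*w[0], -2.0*w[1], 0.0,      2.0*w[3]]])

def constrained_gradient(w, dep=(0, 1), ind=(2, 3)):
    dep, ind = list(dep), list(ind)
    gf, gg = grad_f(w), grad_g(w)
    J, C = gg[:, dep], gg[:, ind]         # Jacobian and control matrices
    return gf[ind] - gf[dep] @ np.linalg.solve(J, C)

# Assumed interior feasible point: x = (1, 2, 2, 4)  ->  w = (1, sqrt(2), sqrt(2), 2)
w = np.array([1.0, np.sqrt(2.0), np.sqrt(2.0), 2.0])
print(constrained_gradient(w))            # nonzero, so this point is not stationary

A zero constrained gradient would single out a candidate extreme point, which parallels the optimality test of the simplex method.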

The corresponding dual objective value is 5u1 + 3u2 = 15, which equals the optimal primal objective value. The given solution also satisfies the dual constraints and hence is optimal and feasible. This shows that the sensitivity coefficients are the same as the dual variables; in fact, both have the same interpretation.

Figure: Extreme points of the solution space of the linear program
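As a final cross-check (an addition, not part of the slides), the snippet below solves the original LP with scipy.optimize.linprog, confirming the primal optimum z = 15 at x = (0, 5, 0, 8), and verifies that the dual pair (u1, u2) = (3, 0), values the slides do not list explicitly, is dual feasible and gives the same objective value 5u1 + 3u2 = 15.

import numpy as np
from scipy.optimize import linprog

# Original LP: maximize 2*x1 + 3*x2
#   subject to x1 + x2 + x3 = 5,  x1 - x2 + x4 = 3,  x >= 0
# linprog minimizes, so the objective is negated.
c = [-2.0, -3.0, 0.0, 0.0]
A_eq = [[1.0,  1.0, 1.0, 0.0],
        [1.0, -1.0, 0.0, 1.0]]
b_eq = [5.0, 3.0]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4, method="highs")
print(res.x, -res.fun)                     # x = (0, 5, 0, 8), optimal z = 15

# Hand-derived dual pair (assumed; not stated on the slides): u = (3, 0)
u = np.array([3.0, 0.0])
A = np.array(A_eq)
print(np.all(A.T @ u >= np.array([2.0, 3.0, 0.0, 0.0])))   # dual feasibility: True
print(5.0*u[0] + 3.0*u[1])                                  # dual objective: 15.0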