PRELIMINARY MATHEMATICS

LECTURE 7: Revision

On the pre-session exam: 10.00-13.00, Friday 25 September 2015, Room V111. You will be given two answer books, one for math and one for statistics. The math part of the exam will be 1 1/2 hours, followed by 90 minutes for the statistics exam. For the math part, you will be given 5 questions, of which you are required to answer 3. You may use an electronic calculator in this examination provided that it cannot store text. The make and type of calculator must be stated clearly on the front of your answer book.

Equations conformable for matrix representation

1. Simultaneous linear equations A system of simultaneous linear equations such as a11 x + a12 y = d1 and a21 x + a22 y = d2 can be written compactly in matrix form Ax = d, where A is the coefficient matrix, x the vector of unknowns, and d the vector of constants.
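As a minimal numerical sketch (the coefficients below are my own illustration, not from the slides), such a system can be written as Ax = d and solved directly:

```python
import numpy as np

# Hypothetical system (illustrative coefficients only):
#   2x + 3y = 12
#   4x -  y = 10
A = np.array([[2.0,  3.0],
              [4.0, -1.0]])   # coefficient matrix
d = np.array([12.0, 10.0])    # constant vector

# A unique solution exists iff det(A) != 0
assert abs(np.linalg.det(A)) > 1e-12
print(np.linalg.solve(A, d))  # [3. 2.], i.e. x = 3, y = 2
```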

Quadratic forms For example, given a quadratic form in three variables, say q = a11 x1² + a22 x2² + a33 x3² + b12 x1x2 + b13 x1x3 + b23 x2x3, for ease of matrix representation we can rewrite this polynomial with each cross-product term split in two, e.g. b12 x1x2 = (b12/2) x1x2 + (b12/2) x2x1, and similarly for the other cross terms.

Quadratic forms The symmetric matrix representation can then be obtained as q = u′Au, where u = (x1, x2, x3)′ and A is symmetric (aij = aji). Note that, in contrast to the previous example, in which a system of linear equations was expressed in matrix form so that we could test for the existence of a solution and apply Cramer's rule to obtain it, here we express the quadratic form not to solve for the variables (with three unknown variables in one equation we cannot find unique solutions) but to find a condition on the parameters for the sign of the quadratic form.
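As an illustration, here is a small numpy sketch; the polynomial q = 2x² + 3y² + z² + 4xy − 2xz is a made-up example, with each cross-term coefficient split equally between the two off-diagonal entries so that A is symmetric:

```python
import numpy as np

# Hypothetical quadratic form: q = 2x^2 + 3y^2 + z^2 + 4xy - 2xz
# Cross-term coefficients are split equally so that A is symmetric.
A = np.array([[ 2.0,  2.0, -1.0],
              [ 2.0,  3.0,  0.0],
              [-1.0,  0.0,  1.0]])

u = np.array([1.0, 2.0, 3.0])           # arbitrary test point (x, y, z)
q_matrix = u @ A @ u                    # u'Au
q_direct = 2*1**2 + 3*2**2 + 1*3**2 + 4*1*2 - 2*1*3
print(q_matrix, q_direct)               # both 25.0
```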

Other functional forms Consider

Other functional forms Consider We can rewrite this as

Other functional forms And express as

Other functional forms On the other hand, would not be suitable for matrix representation because we would end up with 4 variables and 3 equations.

Other functional forms In a similar spirit can be transformed to a system of linear equations

Other functional forms and hence be expressed as

Derivatives, differentials, total differentials, and second-order total differentials

Derivatives In a function y = f (x), the derivative dy/dx = f′(x) denotes the limit of Δy/Δx as Δx approaches zero.

Differentials The differential of y can be expressed as dy = f′(x) dx, which measures the approximate change in y resulting from a given change in x.
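Both definitions can be checked numerically. The sketch below uses f(x) = x² at x = 3 (my own example, not from the slides): the difference quotient Δy/Δx approaches f′(3) = 6 as Δx shrinks, and the differential dy = f′(x) dx approximates the actual change Δy:

```python
# f(x) = x^2, so f'(x) = 2x; evaluate at x = 3 (hypothetical example)
f = lambda x: x**2
x0 = 3.0

# The difference quotient approaches f'(x0) = 6 as dx -> 0
for dx in (1.0, 0.1, 0.001):
    print(dx, (f(x0 + dx) - f(x0)) / dx)   # 7.0, 6.1, 6.001

# Differential dy = f'(x0) dx vs the actual change Delta y
dx = 0.1
dy = 2 * x0 * dx                 # 0.6  (approximate change)
delta_y = f(x0 + dx) - f(x0)     # 0.61 (actual change)
print(dy, delta_y)
```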

Partial derivatives In a function such as z = f (x, y), z is the dependent variable and x and y are the independent variables. A partial derivative measures the effect of a change in a single independent variable (x or y) on the dependent variable (z) in a multivariate function. The partial derivative of z with respect to x, ∂z/∂x = fx, measures the instantaneous rate of change of z with respect to x while y is held constant; likewise, ∂z/∂y = fy measures the instantaneous rate of change of z with respect to y while x is held constant.

Total differentials For a function of two or more independent variables, the total differential measures the change in the dependent variable brought about by a small change in each of the independent variables. If z = f (x, y), the total differential dz is expressed as dz = fx dx + fy dy = (∂z/∂x) dx + (∂z/∂y) dy.
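A minimal sympy sketch of the total differential, using a made-up function z = x²y + y³:

```python
import sympy as sp

x, y, dx, dy = sp.symbols('x y dx dy')
z = x**2 * y + y**3              # hypothetical function

fx = sp.diff(z, x)               # dz/dx = 2xy
fy = sp.diff(z, y)               # dz/dy = x^2 + 3y^2
dz = fx * dx + fy * dy           # total differential
print(dz)                        # 2*x*y*dx + (x**2 + 3*y**2)*dy
```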

Determinantal test We can also derive the second-order total differential as d²z = fxx dx² + 2 fxy dx dy + fyy dy², which is a quadratic form in dx and dy; its sign can therefore be checked with the determinantal test applied to the Hessian matrix [[fxx, fxy], [fxy, fyy]].
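Continuing the same made-up example, the coefficients of d²z are the second-order partials collected in the Hessian matrix, whose determinant underlies the determinantal test:

```python
import sympy as sp

x, y = sp.symbols('x y')
z = x**2 * y + y**3                      # same hypothetical function

H = sp.hessian(z, (x, y))                # [[f_xx, f_xy], [f_xy, f_yy]]
print(H)                                 # Matrix([[2*y, 2*x], [2*x, 6*y]])
print(sp.det(H))                         # 12*y**2 - 4*x**2
```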

Positive/negative definiteness of q = d²z and the second-order sufficient conditions for relative extrema

Positive and negative definiteness A quadratic form q is said to be
• positive definite if q > 0
• positive semi-definite if q ≥ 0
• negative semi-definite if q ≤ 0
• negative definite if q < 0
regardless of the values of the variables in the quadratic form, not all zero. If q changes sign when the variables assume different values, q is said to be indefinite.
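One practical way to check these definitions (a sketch, not part of the slides) is through the eigenvalues of the symmetric matrix of the quadratic form: all positive means positive definite, all non-negative means positive semi-definite, and so on, while mixed signs mean indefinite.

```python
import numpy as np

def definiteness(A, tol=1e-12):
    """Classify the quadratic form q = u'Au via the eigenvalues of symmetric A."""
    eig = np.linalg.eigvalsh(A)          # real eigenvalues of a symmetric matrix
    if np.all(eig > tol):
        return "positive definite"
    if np.all(eig >= -tol):
        return "positive semi-definite"
    if np.all(eig < -tol):
        return "negative definite"
    if np.all(eig <= tol):
        return "negative semi-definite"
    return "indefinite"

print(definiteness(np.array([[2.0, 0.0], [0.0, 3.0]])))   # positive definite
print(definiteness(np.array([[1.0, 0.0], [0.0, -1.0]])))  # indefinite
```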

Second-order sufficient conditions for minimum and maximum If d²z > 0 (positive definite): local minimum. If d²z < 0 (negative definite): local maximum.

Second-order necessary conditions for minimum and maximum If d²z ≥ 0 (positive semi-definite): local minimum. If d²z ≤ 0 (negative semi-definite): local maximum.

Indefinite q = d²z and saddle point When q = d²z is indefinite, we have a saddle point.
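For instance (my example, not the slides'), z = x² − y² is stationary at the origin, where the Hessian has eigenvalues of mixed sign; d²z is indefinite and the origin is a saddle point:

```python
import numpy as np

# Hessian of z = x^2 - y^2 at the stationary point (0, 0)
H = np.array([[2.0,  0.0],
              [0.0, -2.0]])

eig = np.linalg.eigvalsh(H)
print(eig)   # [-2.  2.] -> mixed signs, so d2z is indefinite
# q = d2z is positive along dx and negative along dy: a saddle point.
```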

Conversion and inversion of log base

Conversion of log base This rule can be generalised as log_b x = (log_b c)(log_c x), where b ≠ c.

Inversion of log base Also, log_b c = 1 / (log_c b).

Example Find the derivative of y = log_b x. We know that, given y = ln x = log_e x, dy/dx = 1/x.

Example Using the rule log_b x = (log_b e)(log_e x) and the inversion log_b e = 1/(log_e b) = 1/ln b, we obtain dy/dx = (log_b e)(1/x) = 1/(x ln b).
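A quick numerical check of both rules and of the resulting derivative; the choices b = 2 and x = 5 are arbitrary:

```python
import math

b, x = 2.0, 5.0

# Conversion of base: log_b x = (log_b e)(ln x)
lhs = math.log(x, b)
rhs = math.log(math.e, b) * math.log(x)
print(lhs, rhs)                          # both ~2.3219

# Inversion of base: log_b e = 1 / log_e b = 1 / ln b
print(math.log(math.e, b), 1 / math.log(b))

# Derivative of log_b x: 1 / (x ln b), checked against a difference quotient
h = 1e-6
numeric = (math.log(x + h, b) - math.log(x, b)) / h
print(numeric, 1 / (x * math.log(b)))    # both ~0.2885
```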

On the sign of lambda in the Lagrangian

The Lagrange multiplier method Max/Min z = f (x, y) (1) s.t. g (x, y) = c (2) Step 1. Rearrange the constraint so that the right-hand side of the equation equals zero. Setting the constraint equal to zero: c – g (x, y) = 0

The Lagrange multiplier method Max/Min z = f (x, y) (1) s.t. g (x, y) = c (2) Step 2. Multiply the left-hand side of the new constraint equation by λ (the Greek letter lambda) and add it to the objective function to form the Lagrangian function Z: Z = f (x, y) + λ [ c – g (x, y) ]

The Lagrange multiplier method Z = f (x, y) + λ [ c – g (x, y) ] Step 3. The necessary condition for a stationary value of Z is obtained by taking the first-order partial derivatives, setting them equal to zero, and solving simultaneously: Zx = fx – λ gx = 0 Zy = fy – λ gy = 0 Zλ = c – g (x, y) = 0

The Lagrange multiplier method Max/Min z = f (x, y) (1) s.t. g (x, y) = c (2) Alternatively, we can express the Lagrangian function Z as Z = f (x, y) – λ [ c – g (x, y) ]

The Lagrange multiplier method Z = f (x, y) – λ [ c – g (x, y) ] In Step 3, the first-order necessary conditions are now: Zx = fx + λ gx = 0 Zy = fy + λ gy = 0 Zλ = – [ c – g (x, y) ] = 0, which still reduces to the constraint g (x, y) = c. Is this a problem?

Example of a cost-minimizing firm Min c = 8x² – xy + 12y² s.t. x + y = 42 Form the Lagrangian function C: C = 8x² – xy + 12y² – λ (42 – x – y)

Example of a cost-minimizing firm C = 8x² – xy + 12y² – λ (42 – x – y) Obtain the first-order partial derivatives: Cx = 16x – y + λ, Cy = –x + 24y + λ, Cλ = –(42 – x – y) = x + y – 42

Solution using matrices Setting the first-order conditions to zero gives 16x – y + λ = 0, –x + 24y + λ = 0, x + y = 42. Since the three first-order conditions are linear in x, y, and λ, we can use matrix methods to obtain the solution: in Ax = d form, the coefficient matrix is A = [[16, –1, 1], [–1, 24, 1], [1, 1, 0]], the vector of unknowns is (x, y, λ)′, and the constant vector is d = (0, 0, 42)′.
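Before applying Cramer's rule, the system can be cross-checked by solving it directly; a minimal numpy sketch using the coefficients read off the first-order conditions:

```python
import numpy as np

# First-order conditions of C = 8x^2 - xy + 12y^2 - lam*(42 - x - y):
#   16x -   y + lam = 0
#   -x  + 24y + lam = 0
#    x  +   y       = 42
A = np.array([[16.0, -1.0, 1.0],
              [-1.0, 24.0, 1.0],
              [ 1.0,  1.0, 0.0]])
d = np.array([0.0, 0.0, 42.0])

x, y, lam = np.linalg.solve(A, d)
print(x, y, lam)                 # 25.0 17.0 -383.0
```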

Using the Cramer’s rule Solution using matrix Using the Cramer’s rule

Using the Cramer’s rule Solution using matrix Using the Cramer’s rule

Using the Cramer’s rule Solution using matrix Using the Cramer’s rule

Using the Cramer’s rule Solution using matrix Using the Cramer’s rule

Solution using matrices If the alternative Lagrangian C = 8x² – xy + 12y² + λ (42 – x – y) is used instead, the λ column of the coefficient matrix changes sign, and the signs of the determinants all change except for |Aλ| (whose λ column has been replaced by d). Recall from lecture 2 the property of the determinant: (3) multiplying any one row (or one column) by a scalar k changes the value of the determinant k-fold; here k = –1.

Using the Cramer’s rule Solution using matrix Using the Cramer’s rule Solution for x and y are unchanged. λ is the same numerical value with a negative sign.