Analysis of Basic PMP Solution

Analysis of Basic PMP Solution

Find the control u(t) which minimizes the cost functional
$$ J(x,u) = \int_0^T L(x,u)\,dt + V\big(x(T)\big), \qquad x(0) = x_0, \quad T \text{ fixed}, \quad \dot{x} = f(x,u). $$
To find a necessary condition for a minimum of J(x,u), we augment the cost function with the dynamics constraint:
$$ \bar{J}(x,u,\lambda) = J(x,u) + \int_0^T \lambda^{\mathsf T}\big(f(x,u) - \dot{x}\big)\,dt
   = \int_0^T \big(H(x,u) - \lambda^{\mathsf T}\dot{x}\big)\,dt + V\big(x(T)\big), $$
where the Hamiltonian is defined by
$$ H(x,u) \equiv L(x,u) + \lambda^{\mathsf T} f(x,u). $$
Extremize the augmented cost with respect to the variations δx(t), δu(t), δλ(t):
$$ \delta\bar{J} = \int_0^T \left[ \left(\frac{\partial H}{\partial x} + \dot{\lambda}^{\mathsf T}\right)\delta x
   + \frac{\partial H}{\partial u}\,\delta u
   + \left(\frac{\partial H}{\partial \lambda} - \dot{x}^{\mathsf T}\right)\delta\lambda \right] dt
   + \left(\frac{\partial V}{\partial x} - \lambda^{\mathsf T}\right)\bigg|_{t=T}\,\delta x(T)
   + \lambda^{\mathsf T}(0)\,\delta x(0). $$
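The variation above follows from the augmented cost by one integration by parts on the −λᵀẋ term, a standard step the slide does not spell out:
$$ -\int_0^T \lambda^{\mathsf T}\,\delta\dot{x}\,dt
   = -\lambda^{\mathsf T}(T)\,\delta x(T) + \lambda^{\mathsf T}(0)\,\delta x(0)
   + \int_0^T \dot{\lambda}^{\mathsf T}\,\delta x\,dt. $$
The integral on the right supplies the λ̇ᵀδx term inside the time integral of δJ̄, while the endpoint terms combine with the variation of the terminal cost, δV = (∂V/∂x)δx(T), to give the boundary expression (∂V/∂x − λᵀ)|_{t=T} δx(T) + λᵀ(0)δx(0). Since x(0) = x₀ is fixed, δx(0) = 0, and requiring δJ̄ = 0 for arbitrary independent variations δx, δu, δλ gives the state, co-state, and stationarity conditions summarized on the next slide.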

Dimension of PMP Solution

Unknowns:
- n state variables x(t)
- n co-state variables λ(t)
- m control variables u(t)

The optimal control satisfies:
$$ \dot{x} = \left(\frac{\partial H}{\partial \lambda}\right)^{\mathsf T} = f(x,u), \qquad x(0) = x_0 $$
(n o.d.e.s with initial conditions)
$$ -\dot{\lambda} = \left(\frac{\partial H}{\partial x}\right)^{\mathsf T}, \qquad
   \lambda(T) = \left(\frac{\partial V}{\partial x}\big(x(T)\big)\right)^{\mathsf T} $$
(n o.d.e.s with terminal conditions)
$$ \frac{\partial H}{\partial u} = 0 $$
(m algebraic equations)

Together, the state and co-state equations with their split boundary conditions form a two-point boundary-value problem.
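As a concrete illustration of this two-point boundary-value structure, below is a minimal numerical sketch for an assumed scalar example (not from the slides): L = ½(x² + u²), f(x,u) = u, V ≡ 0, x(0) = 1, T = 1. Here H = ½(x² + u²) + λu, so ∂H/∂u = 0 gives u = −λ, and the state/co-state equations reduce to ẋ = −λ, λ̇ = −x with x(0) = 1 and λ(1) = 0. The sketch uses SciPy's solve_bvp; the names odes and bc are ours.

import numpy as np
from scipy.integrate import solve_bvp

def odes(t, y):
    # y[0] = state x, y[1] = co-state lambda; u* = -lambda has been eliminated
    x, lam = y
    return np.vstack((-lam, -x))      # xdot = -lambda, lambdadot = -x

def bc(ya, yb):
    # x(0) = 1 (initial condition), lambda(T) = dV/dx = 0 (terminal condition)
    return np.array([ya[0] - 1.0, yb[1]])

t = np.linspace(0.0, 1.0, 50)
sol = solve_bvp(odes, bc, t, np.zeros((2, t.size)))

u_opt = -sol.sol(t)[1]                        # optimal control u*(t) = -lambda(t)
x_exact = np.cosh(1.0 - t) / np.cosh(1.0)     # analytical optimum for this example
print(np.max(np.abs(sol.sol(t)[0] - x_exact)))   # small residual if the solve converged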

Additional Constraints?

Example: what if final state constraints φᵢ(x(T)) = 0, i = 1, …, p are desired?

Add the constraints to the cost using additional Lagrange multipliers ν₁, …, νₚ:
$$ J_a(x,u) = \int_0^T L(x,u)\,dt + V\big(x(T)\big) + \nu_1\,\varphi_1\big(x(T)\big) + \dots + \nu_p\,\varphi_p\big(x(T)\big). $$
Extremizing the constrained, augmented cost function yields
$$ \dot{x} = \left(\frac{\partial H}{\partial \lambda}\right)^{\mathsf T} = f(x,u), \qquad x(0) = x_0 $$
(n o.d.e.s with initial conditions)
$$ -\dot{\lambda} = \left(\frac{\partial H}{\partial x}\right)^{\mathsf T}, \qquad
   \lambda(T) = \left(\frac{\partial V}{\partial x}\big(x(T)\big)\right)^{\mathsf T}
   + \left(\frac{\partial \varphi}{\partial x}\big(x(T)\big)\right)^{\mathsf T}\nu $$
(n o.d.e.s with terminal conditions)
$$ \frac{\partial H}{\partial u} = 0 $$
(m algebraic equations)
$$ \varphi_1\big(x(T)\big) = 0, \ \dots, \ \varphi_p\big(x(T)\big) = 0 $$
(p algebraic equations)

Here ν = (ν₁, …, νₚ)ᵀ collects the terminal-constraint multipliers.
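To see how a terminal constraint changes the boundary conditions in practice, the sketch below reuses the same assumed scalar example and replaces the free-endpoint condition λ(T) = ∂V/∂x with the constraint φ(x(T)) = x(T) = 0. The multiplier ν is not prescribed in advance; it is recovered afterwards from ν = λ(T), since V ≡ 0 and ∂φ/∂x = 1 here.

import numpy as np
from scipy.integrate import solve_bvp

def odes(t, y):
    # Same assumed example as before: xdot = -lambda, lambdadot = -x
    x, lam = y
    return np.vstack((-lam, -x))

def bc(ya, yb):
    # x(0) = 1, and the terminal constraint phi(x(T)) = x(T) = 0 replaces
    # the condition on lambda(T); lambda(T) is now free (it equals nu).
    return np.array([ya[0] - 1.0, yb[0]])

t = np.linspace(0.0, 1.0, 50)
sol = solve_bvp(odes, bc, t, np.zeros((2, t.size)))

nu = sol.sol(1.0)[1]                          # multiplier nu = lambda(T), read off the solution
x_exact = np.sinh(1.0 - t) / np.sinh(1.0)     # analytical optimum for the constrained case
print(np.max(np.abs(sol.sol(t)[0] - x_exact)))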

Additional Constraints?

Example: what if the final time is undetermined?
The final time T becomes an additional unknown variable, and H(T) = 0 gives one additional constraint equation.
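A short variational argument, sketched here because the slide only states the result, shows where H(T) = 0 comes from. When T is free, the total change in the final state is dx(T) = δx(T) + ẋ(T)δT, and the endpoint terms of δJ̄ become
$$ \big(H - \lambda^{\mathsf T}\dot{x}\big)\Big|_{t=T}\,\delta T
   + \frac{\partial V}{\partial x}\,dx(T) - \lambda^{\mathsf T}(T)\,\delta x(T)
   = \left(\frac{\partial V}{\partial x} - \lambda^{\mathsf T}\right)\bigg|_{t=T} dx(T) + H(T)\,\delta T. $$
With λᵀ(T) = ∂V/∂x(x(T)) already imposed, stationarity with respect to the independent variation δT requires H(T) = 0 (assuming the terminal cost V depends only on x(T) and not explicitly on T, as on the earlier slides).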