Review of PMP Derivation

Review of PMP Derivation

We want to find the control $u(t)$ that minimizes a cost functional of the form
$$J(x,u) = \phi(x(T)) + \int_0^T L(x,u)\,dt$$
subject to the constraints $\dot{x} = f(x,u)$, $x(t{=}0) = x_0$, and $T$ fixed. To find a necessary condition for a minimum of $J(x,u)$, we augment the cost function with the dynamics via the co-state (Lagrange multiplier) $\lambda(t)$:
$$\bar{J} = \phi(x(T)) + \int_0^T \left[ L(x,u) + \lambda^\top\big(f(x,u) - \dot{x}\big)\right] dt.$$
Then we extremize the augmented cost with respect to the variations
$$x(t) = x^*(t) + \delta x(t); \quad u(t) = u^*(t) + \delta u(t); \quad \lambda(t) = \lambda^*(t) + \delta\lambda(t).$$

Pontryagin’s Maximum Principle

Define the Hamiltonian $H(x,u,\lambda) = L(x,u) + \lambda^\top f(x,u)$. The optimal pair $(x^*, u^*)$ satisfies
$$\dot{x}^* = \frac{\partial H}{\partial \lambda} = f(x^*,u^*), \qquad \dot{\lambda}^* = -\frac{\partial H}{\partial x}, \qquad \frac{\partial H}{\partial u} = 0 \ \ \text{(unbounded controls)},$$
with boundary conditions $x^*(0) = x_0$ and $\lambda^*(T) = \partial\phi/\partial x\big|_{x^*(T)}$. This follows directly from setting the first variation of $\bar{J}$ to zero. The optimal control is the solution to this two-point boundary-value O.D.E.; the principle can be made more general to include terminal constraints.
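As a sanity check on these conditions (not from the slides), consider the scalar problem: minimize $J = \tfrac{1}{2}\int_0^T (x^2 + u^2)\,dt$ with $\dot{x} = u$, $x(0) = x_0$, $T$ fixed, terminal state free. PMP gives $u = -\lambda$, $\dot{\lambda} = -x$, $\lambda(T) = 0$, and the two-point BVP has the known closed-form solution $\lambda(0) = x_0 \tanh T$. The sketch below (names `integrate` and `shoot` are mine) solves the BVP by shooting on $\lambda(0)$:

```python
# PMP check for: minimize 1/2 * int (x^2 + u^2) dt, x' = u, x(0) = x0, T fixed.
# PMP: u = -lambda, so x' = -lambda, lambda' = -x, with lambda(T) = 0.
# We shoot on lambda(0) and compare to the closed form lambda(0) = x0*tanh(T).
import math

def integrate(lam0, x0=1.0, T=1.0, n=2000):
    """RK4-integrate x' = -lambda, lambda' = -x forward; return lambda(T)."""
    h = T / n
    x, lam = x0, lam0
    f = lambda xv, lv: (-lv, -xv)
    for _ in range(n):
        k1x, k1l = f(x, lam)
        k2x, k2l = f(x + h/2*k1x, lam + h/2*k1l)
        k3x, k3l = f(x + h/2*k2x, lam + h/2*k2l)
        k4x, k4l = f(x + h*k3x, lam + h*k3l)
        x   += h/6 * (k1x + 2*k2x + 2*k3x + k4x)
        lam += h/6 * (k1l + 2*k2l + 2*k3l + k4l)
    return lam

def shoot(x0=1.0, T=1.0):
    """Bisect on lambda(0) until the transversality condition lambda(T) = 0 holds.
    lambda(T) is increasing in lambda(0), and the root lies in [0, x0] for x0 > 0."""
    lo, hi = 0.0, x0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if integrate(mid, x0, T) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For $x_0 = 1$, $T = 1$, the shot value of $\lambda(0)$ matches $\tanh(1) \approx 0.7616$, confirming that the stationarity, adjoint, and transversality conditions pin down the optimum.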

Handling Additional Constraints

Example: what if final state constraints $\psi_i(x(T)) = 0$, $i = 1,\dots,p$, are desired?
–Add the constraints to the cost using additional Lagrange multipliers $\nu \in \mathbb{R}^p$:
$$\bar{J} = \phi(x(T)) + \nu^\top \psi(x(T)) + \int_0^T \left[ L + \lambda^\top(f - \dot{x}) \right] dt.$$
–To find a necessary condition for the optimum, extremize the constrained, augmented cost function with respect to the appropriate variations $\delta x(t)$; $\delta u(t)$; $\delta\lambda(t)$.
–Result: a new terminal co-state constraint:
$$\lambda(T) = \frac{\partial \phi}{\partial x}\bigg|_{x(T)} + \left(\frac{\partial \psi}{\partial x}\bigg|_{x(T)}\right)^{\!\top} \nu.$$

Handling Additional Constraints

Example: what if the final time is not specified?
–Must include the additional variation $\delta T$ in the extremization process.
–Result: $H(T) = 0$.

Summary with (1) unspecified final time and (2) terminal constraints:
$$\dot{x} = \frac{\partial H}{\partial \lambda}, \quad \dot{\lambda} = -\frac{\partial H}{\partial x}, \quad \frac{\partial H}{\partial u} = 0, \quad \psi(x(T)) = 0, \quad \lambda(T) = \frac{\partial \phi}{\partial x} + \left(\frac{\partial \psi}{\partial x}\right)^{\!\top}\nu, \quad H(T) = 0.$$

Example: Bead on a Wire

Problem: a bead moves without friction along a wire, pushed by a force $u$ (taking unit mass and the bound $|u| \le 1$), so $\ddot{x} = u$.
–Goal: move the bead from $x_0$ to $x_F$ in minimum time: $J = \int_0^T 1\,dt = T$.
–Constraints: $x(0) = x_0$; $\dot{x}(0) = 0$.
–Terminal constraints: $x(T) = x_F$; $\dot{x}(T) = 0$.

Solution: PMP with unspecified final time and terminal constraints.
–Step 1: convert the dynamics to first-order form with $z = (z_1\ z_2)^\top = (x\ \dot{x})^\top$:
$$\dot{z}_1 = z_2, \qquad \dot{z}_2 = u.$$
–Terminal constraints: $\psi_1(z(T)) = z_1(T) - x_F = 0$; $\psi_2(z(T)) = z_2(T) = 0$.
–Step 2: construct the Hamiltonian:
$$H = 1 + \lambda_1 z_2 + \lambda_2 u.$$

Bead on a Wire (continued)

–Step 3: Apply (PMP 2), the adjoint equations:
$$\dot{\lambda}_1 = -\frac{\partial H}{\partial z_1} = 0, \qquad \dot{\lambda}_2 = -\frac{\partial H}{\partial z_2} = -\lambda_1,$$
so $\lambda_1(t)$ is constant and $\lambda_2(t)$ is linear in $t$.
–Step 4: Apply (PMP 3), the minimum principle. For constrained control, we must minimize $H$ with respect to the control $u$ over $|u| \le 1$. Since $H$ is linear in $u$, the minimizer sits at the boundary:
$$u(t) = -\operatorname{sign}\big(\lambda_2(t)\big).$$
Hence $u(t)$ is a “switching control,” with $\lambda_2(t)$ the “switching function.” The switching function is linear in $t$, implying at most one switch (and, for this rest-to-rest transfer, exactly one).
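The switching structure above can be sketched numerically. With $\lambda_1 = \nu_1$ constant, the adjoint equation gives $\lambda_2(t) = \nu_2 + \nu_1(T - t)$, affine in $t$, so $u = -\operatorname{sign}(\lambda_2)$ changes sign at most once. The multiplier values below are made-up illustrations, not solved from the boundary conditions:

```python
# Sketch (illustrative values, not from the slides): an affine switching
# function lambda2(t) = nu2 + nu1*(T - t) crosses zero at most once on [0, T],
# so the bang-bang law u = -sign(lambda2) has at most one switch.

def switching_control(nu1, nu2, T, n=1000):
    """Sample u(t) = -sign(lambda2(t)) on [0, T]; return samples and switch count."""
    us = []
    for i in range(n + 1):
        t = T * i / n
        lam2 = nu2 + nu1 * (T - t)      # affine switching function
        us.append(-1 if lam2 >= 0 else 1)  # sign convention at lam2 == 0 is arbitrary
    switches = sum(1 for a, b in zip(us, us[1:]) if a != b)
    return us, switches

# nu1 = -1, nu2 = 0.5 gives lambda2(t) = t - 0.5: accelerate (+1), then brake (-1).
us, n_switch = switching_control(nu1=-1.0, nu2=0.5, T=1.0)
```

Here the control starts at $+1$, switches once at $t = 0.5$, and ends at $-1$, matching the accelerate-then-brake profile expected for a rest-to-rest transfer.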

Bead on a Wire (continued)

–Step 5: Apply (PMP 4), the adjoint terminal constraints (here $\partial\phi/\partial x = 0$, since the minimum-time cost has no terminal term):
$$\lambda_1(T) = \nu_1, \qquad \lambda_2(T) = \nu_2.$$
–Step 6: Apply (PMP 5), the undetermined final time condition, using $z_2(T) = 0$:
$$H(T) = 1 + \lambda_1(T)\,\underbrace{z_2(T)}_{=\,0} + \lambda_2(T)\,u(T) = 1 + \lambda_2(T)\,u(T) = 0.$$

Bead on a Wire (continued)

–Step 7: Apply (PMP 1), the dynamics, and (BC). We know that the control “switches” at some as-yet-unknown time $t_s$. Integrating the constant acceleration ($u = +1$, then $u = -1$) gives the velocity over $t \in [0,T]$:
$$\dot{x}(t) = \begin{cases} t, & 0 \le t \le t_s \\ t_s - (t - t_s), & t_s \le t \le T, \end{cases}$$
and the boundary conditions $\dot{x}(0) = \dot{x}(T) = 0$ force the symmetric switch time $t_s = T/2$.
–Step 8: Apply (PMP 1), the dynamics, and (BC). Integrate the velocity to get the position over $t \in [0,T]$, knowing the switch occurs at $T/2$:
$$x(t) = \begin{cases} x_0 + \tfrac{1}{2} t^2, & 0 \le t \le T/2 \\ x_F - \tfrac{1}{2} (T - t)^2, & T/2 \le t \le T. \end{cases}$$

Bead on a Wire (continued)

–Step 8 (continued): matching the two position branches at $t = T/2$ fixes the final time:
$$x_0 + \tfrac{1}{2}\Big(\tfrac{T}{2}\Big)^2 = x_F - \tfrac{1}{2}\Big(\tfrac{T}{2}\Big)^2 \quad\Longrightarrow\quad T = 2\sqrt{x_F - x_0}.$$
–Step 9: Use the switching-function characteristic to find $v_1$, the peak velocity at the switch: $v_1 = \dot{x}(T/2) = T/2 = \sqrt{x_F - x_0}$. Simpler than solving for the multipliers $\nu_1, \nu_2$ directly, but equivalent.
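The closed-form answer from Steps 7–9 can be verified by simulating the bang-bang law directly (assuming unit mass and $|u| \le 1$, as in the derivation; the helper name is mine):

```python
# Numeric check of the bead-on-a-wire solution: rest-to-rest transfer from
# x0 to xF under xddot = u with u = +1 until t_s = T/2, then u = -1,
# where T = 2*sqrt(xF - x0).
import math

def min_time_trajectory(x0, xF, n=20000):
    """Integrate the bang-bang trajectory; return (T, final position, final velocity)."""
    T = 2.0 * math.sqrt(xF - x0)
    h = T / n
    x, v = x0, 0.0
    for i in range(n):
        u = 1.0 if i < n // 2 else -1.0   # switch exactly at t_s = T/2
        # semi-implicit Euler is exact here for piecewise-constant acceleration
        v += h * u
        x += h * v
    return T, x, v

T, x_end, v_end = min_time_trajectory(0.0, 1.0)
```

For $x_0 = 0$, $x_F = 1$ this reproduces $T = 2$, ends at $x(T) = x_F$ with $\dot{x}(T) = 0$, and the peak velocity at the switch is $T/2 = 1$, matching Step 9.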

Example: Bang-Bang Control

[Figure: $H$ plotted as a linear function of $u$ on $[-1, +1]$, minimized at an endpoint.]

Since $H$ is linear with respect to $u$, minimization occurs at the boundary of the control set: for $|u| \le 1$, the “minimum-time control” takes only the extreme values $u = +1$ or $u = -1$, switching when the switching function changes sign.