Optimal Control of Systems


Optimal Control of Systems
Given a system $\dot{x} = f(x, u, n)$ with $x(0) = x_0$, find the control $u(t)$ for $t \in [0, T]$ such that

$$u^* = \arg\min_{u \in \Omega} \underbrace{\int_0^T L(x, u)\,dt + V\big(x(T), u(T)\big)}_{J(x, u)}$$

where $L$ is the instantaneous (stage) cost and $V$ is the terminal cost. The problem can include constraints on the control $u$ and the state $x$ (along the trajectory or at the final time). The final time $T$ may or may not be free (we first derive the fixed-$T$ case).
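To make the cost functional concrete, here is a minimal sketch (a hypothetical scalar example, not from the slides): the system $\dot{x} = f(x,u) = u$ with stage cost $L(x,u) = x^2 + u^2$ and no terminal cost, with $J$ evaluated by forward-Euler integration for two candidate controls.

```python
def cost(u_of_t, x0=1.0, T=1.0, n=1000):
    """Evaluate J(x, u) = integral of L(x, u) dt for xdot = u by Euler integration."""
    dt = T / n
    x, J = x0, 0.0
    for k in range(n):
        u = u_of_t(k * dt)
        J += (x * x + u * u) * dt   # accumulate stage cost L(x, u) = x^2 + u^2
        x += u * dt                 # Euler step of the dynamics xdot = f(x, u) = u
    return J

J_coast = cost(lambda t: 0.0)    # u = 0: state stays at x0, only state cost accrues
J_brake = cost(lambda t: -1.0)   # u = -1: drives x to 0 by t = 1, but pays control cost
```

With $x_0 = 1$, coasting gives $J \approx 1$ while the braking control gives $J \approx 4/3$, illustrating the state-cost/control-cost trade-off the optimal control must balance.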

Function Optimization
A necessary condition for optimality is that the gradient is zero: $\partial f(x^*)/\partial x = 0$. This only characterizes local extrema; the sign of the second derivative must still be checked, and convexity is needed to guarantee a global optimum. Keep this analogy in mind: at an extremum,

$$\lim_{\Delta x \to 0} \big[\, f(x^* + \Delta x) - f(x^*) \,\big] = 0,$$
$$\lim_{\Delta x \to 0} \Big[\, f(x^*) + \frac{\partial f(x^*)}{\partial x}\,\Delta x + \dots - f(x^*) \,\Big] = 0,$$

so the first-order term $\frac{\partial f(x^*)}{\partial x}\,\Delta x$ must vanish. The gradient of $f(x)$ is orthogonal to the level sets of $f$.
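The first- and second-order conditions can be checked numerically. A small sketch for the hypothetical function $f(x) = (x-2)^2$, using central differences:

```python
def f(x):
    return (x - 2.0) ** 2

def d1(g, x, h=1e-5):
    """Central-difference first derivative."""
    return (g(x + h) - g(x - h)) / (2 * h)

def d2(g, x, h=1e-4):
    """Central-difference second derivative."""
    return (g(x + h) - 2 * g(x) + g(x - h)) / (h * h)

grad_at_min = d1(f, 2.0)   # ~0: the necessary condition df/dx = 0 holds at x* = 2
curv_at_min = d2(f, 2.0)   # > 0: the second derivative confirms a local minimum
```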

Constrained Function Optimization
Consider minimizing $F(x)$ subject to the constraint $G(x) = 0$. At the optimal solution $x^*$, the gradient of $F(x)$ must be parallel to the gradient of $G(x)$. [Figure: level sets of F(x) and the constraint curve G(x) = 0, touching at x*.]

Constrained Function Optimization (continued)
At the optimal solution, the gradient of $F(x)$ must be parallel to the gradient of $G(x)$: $\nabla F(x^*) + \lambda \nabla G(x^*) = 0$ for some $\lambda$, the Lagrange multiplier. More generally, define the Lagrangian

$$\mathcal{L}(x, \lambda) = F(x) + \lambda^T G(x).$$

Then a necessary condition for optimality is $\nabla_x \mathcal{L} = 0$ together with $\nabla_\lambda \mathcal{L} = G(x) = 0$. The Lagrange multipliers $\lambda$ are the sensitivity of the cost to a change in $G$.
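The sensitivity interpretation can be verified on a hand-solved example (an assumed illustration, not from the slides): maximize $F(x,y) = xy$ subject to $G(x,y) = x + y - c = 0$. Stationarity of the Lagrangian gives $x^* = y^* = c/2$ with multiplier $\lambda = c/2$, and $\lambda$ should equal the derivative of the optimal value with respect to the constraint level $c$.

```python
def f_star(c):
    """Optimal value of xy on the line x + y = c (from the stationarity conditions)."""
    x = y = c / 2.0
    return x * y

lam = 0.5                       # Lagrange multiplier at c = 1, since lambda = c/2
h = 1e-6
# Numerical sensitivity dF*/dc at c = 1; should match the multiplier.
sensitivity = (f_star(1.0 + h) - f_star(1.0 - h)) / (2 * h)
```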

Solution Approach
Add a Lagrange multiplier $\lambda(t)$ for the dynamic constraint $\dot{x} = f(x, u)$, and additional multipliers for terminal constraints or state constraints. Form the augmented cost functional

$$\bar{J} = V\big(x(T)\big) + \int_0^T \big[\, H(x, u, \lambda) - \lambda^T \dot{x} \,\big]\,dt,$$

where the Hamiltonian is $H(x, u, \lambda) = L(x, u) + \lambda^T f(x, u)$. The necessary condition for optimality is that $\delta\bar{J}$ vanishes for any perturbation (variation) in $x$, $u$, or $\lambda$ about the optimum:

$x(t) = x^*(t) + \delta x(t)$;  $u(t) = u^*(t) + \delta u(t)$;  $\lambda(t) = \lambda^*(t) + \delta\lambda(t)$.

Variations must satisfy the path end conditions!

[Figure: a nominal trajectory $c(t) = \{x(t), u(t), \lambda(t)\}$ from $x(0) = x_0$ to $x(T) = x_T$, together with a varied trajectory; the variation at each time is $\delta c(t) = \{\delta x(t), \delta u(t), \delta\lambda(t)\}$.]

Derivation…
Note that, by integration by parts,

$$\int_0^T \lambda^T \dot{x}\,dt = \lambda^T(T)\,x(T) - \lambda^T(0)\,x(0) - \int_0^T \dot{\lambda}^T x\,dt.$$

So:

$$\bar{J} = V\big(x(T)\big) - \lambda^T(T)\,x(T) + \lambda^T(0)\,x(0) + \int_0^T \big[\, H(x, u, \lambda) + \dot{\lambda}^T x \,\big]\,dt.$$

We want this to be stationary for all variations.
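Collecting terms, the first variation of this expression (a standard reconstruction, consistent with the notation above) is:

```latex
\delta \bar{J} =
\left[\frac{\partial V}{\partial x} - \lambda^{T}\right]_{t=T} \delta x(T)
+ \lambda^{T}(0)\,\delta x(0)
+ \int_{0}^{T} \left[\left(\frac{\partial H}{\partial x} + \dot{\lambda}^{T}\right)\delta x
+ \frac{\partial H}{\partial u}\,\delta u \right] dt
```

Since $\delta x(0) = 0$ for a fixed initial state, requiring $\delta\bar{J} = 0$ for arbitrary variations forces each remaining coefficient to vanish: this yields the co-state equation, the stationarity condition on $u$, and the terminal condition on $\lambda$.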

Pontryagin's Maximum Principle
The optimal pair $(x^*, u^*)$ satisfies

$$\dot{x}^* = f(x^*, u^*), \qquad \dot{\lambda}^T = -\frac{\partial H}{\partial x}, \qquad \frac{\partial H}{\partial u} = 0 \ \text{(unbounded controls)},$$

with boundary conditions $x(0) = x_0$ and $\lambda^T(T) = \partial V/\partial x\,\big|_{x(T)}$. These conditions follow directly from requiring $\delta\bar{J} = 0$; the optimal control is the solution to this system of O.D.E.s. The statement can be made more general to include terminal constraints and bounded controls.
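A sketch checking the PMP conditions on a classic textbook problem (an assumed example, not from the slides): minimize $J = \int_0^1 u^2\,dt$ subject to $\dot{x} = u$, $x(0)=0$, $x(1)=1$. Here $H = u^2 + \lambda u$, so $\partial H/\partial u = 0$ gives $u = -\lambda/2$, and $\dot{\lambda} = -\partial H/\partial x = 0$ makes $u$ constant; feasibility then forces $u^* \equiv 1$, with $J^* = 1$. Any zero-mean perturbation keeps $x(1) = 1$ but raises the cost:

```python
import math

def cost_and_endpoint(u_of_t, n=2000):
    """Integrate J = int u^2 dt and x(1) for xdot = u, x(0) = 0 (midpoint rule)."""
    dt = 1.0 / n
    x, J = 0.0, 0.0
    for k in range(n):
        u = u_of_t((k + 0.5) * dt)
        J += u * u * dt
        x += u * dt
    return J, x

J_opt, x_opt = cost_and_endpoint(lambda t: 1.0)
# A zero-mean wiggle preserves the endpoint constraint but is strictly worse:
J_pert, x_pert = cost_and_endpoint(lambda t: 1.0 + math.sin(2 * math.pi * t))
```

Numerically $J_\text{opt} = 1$ and $J_\text{pert} \approx 1.5$, consistent with $u^* \equiv 1$ being optimal.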

Interpretation of λ
The result is a two-point boundary value problem: $x$ is solved forward in time from $x(0)$, while $\lambda$ is solved backwards in time from its terminal condition. $\lambda$ is the "co-state" (or "adjoint") variable. Recall that $H = L(x, u) + \lambda^T f(x, u)$. If $L = 0$, $\lambda(t)$ is the sensitivity of the cost to a perturbation in the state $x(t)$; recall the $\lambda^T(0)\,x(0)$ term that appears in $\bar{J}$ after integration by parts.

Terminal Constraints
Assume $q$ terminal constraints of the form $\psi(x(T)) = 0$. Then

$$\lambda(T) = \frac{\partial V}{\partial x}\big(x(T)\big) + \frac{\partial \psi}{\partial x}\big(x(T)\big)^T \nu,$$

where $\nu$ is a set of undetermined Lagrange multipliers. Under some conditions $\nu$ is free, and therefore $\lambda(T)$ is free as well. When the final time $T$ is free (i.e., it is not predetermined), the cost functional $J$ must also be stationary with respect to perturbations $T = T^* + \delta T$. In this case: $H(T) = 0$.

General Remarks  

Example: Bang-Bang Control ("minimum-time control")
Since $H$ is linear with respect to $u$, minimization over a bounded control set $u \in [-1, +1]$ occurs at the boundary: with switching function $\lambda^T B$, the minimizer is $u^* = -1$ when $\lambda^T B > 0$ and $u^* = +1$ when $\lambda^T B < 0$. [Figure: H as a linear function of u over [-1, +1] for λᵀB > 0, minimized at the endpoint u = -1.]
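A minimal illustration (with hypothetical numbers): when $H(u) = h_0 + s\,u$ is linear in $u$, with switching function $s = \lambda^T B$, a brute-force minimization over the admissible set $[-1, 1]$ always lands on an endpoint, i.e. $u^* = -\operatorname{sign}(s)$.

```python
def u_star(s, h0=0.0, n=101):
    """Brute-force minimizer of H(u) = h0 + s*u over a grid of u in [-1, 1]."""
    grid = [-1.0 + 2.0 * k / (n - 1) for k in range(n)]
    return min(grid, key=lambda u: h0 + s * u)

u_pos = u_star(+0.7)   # switching function s > 0  ->  u* = -1
u_neg = u_star(-0.3)   # switching function s < 0  ->  u* = +1
```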

Linear System, Quadratic Cost
For a linear system $\dot{x} = Ax + Bu$ with quadratic cost, apply the PMP and guess that $\lambda(t) = P(t)\,x(t)$: this reduces the two-point boundary value problem to a Riccati differential equation for $P(t)$. $x^T P x$ has an interpretation as the "cost to go." One often sees the infinite-horizon solution, where $dP/dt = 0$ (the algebraic Riccati equation). $B^T \lambda$ selects the part of the state that is influenced by $u$, so $B^T \lambda$ is the sensitivity of the augmented state cost to $u$.
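A scalar sketch under assumed data (system $\dot{x} = ax + bu$, cost $\int q x^2 + r u^2\,dt$ with $a=0$, $b=q=r=1$; these values are illustrative, not from the slides): integrate the Riccati ODE $-\dot{P} = 2aP - (b^2/r)P^2 + q$ backwards from $P(T) = 0$ over a long horizon and compare with the algebraic (infinite-horizon) root.

```python
import math

a, b, q, r = 0.0, 1.0, 1.0, 1.0   # assumed scalar system and cost weights

def riccati_backward(T=20.0, n=20000):
    """Euler-integrate -Pdot = 2aP - (b^2/r)P^2 + q backwards from P(T) = 0."""
    dt = T / n
    P = 0.0                        # terminal condition: no terminal cost
    for _ in range(n):
        Pdot = -(2 * a * P - (b * b / r) * P * P + q)
        P -= Pdot * dt             # step backwards in time
    return P

P_inf = riccati_backward()
# Positive root of the scalar algebraic Riccati equation 2aP - (b^2/r)P^2 + q = 0:
P_are = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
gain = (b / r) * P_inf             # feedback u* = -(1/r) b P x = -gain * x
```

With these numbers $P_\infty \to 1$ and the feedback is $u^* = -x$, so the closed loop $\dot{x} = -x$ is stable, as expected.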