AUTOMATIC CONTROL THEORY II
Slovak University of Technology
Faculty of Materials Science and Technology in Trnava
Optimal control
Formulation of optimal control problems

The formulation of an optimal control problem requires the following:
- a mathematical model of the system to be controlled
- a specification of the performance index
- a specification of all boundary conditions on states, and of the constraints to be satisfied by states and controls
- a statement of which variables are free
Optimal control
General case with fixed final time and no terminal or path constraints

Problem 1: Find the control vector trajectory u(t), t ∈ [t₀, t_f], to minimize the performance index

J = φ(x(t_f)) + ∫_{t₀}^{t_f} L(x(t), u(t), t) dt

subject to

ẋ(t) = f(x(t), u(t), t),  x(t₀) = x₀
Optimal control
Problem 1 is known as the Bolza problem. If L ≡ 0, the problem is known as the Mayer problem; if φ ≡ 0, it is known as the Lagrange problem.

Adjoining the dynamics with costate (multiplier) functions λ(t), define an augmented performance index

J̄ = φ(x(t_f)) + ∫_{t₀}^{t_f} [ L(x, u, t) + λᵀ( f(x, u, t) − ẋ ) ] dt
Optimal control
Define the Hamiltonian function H as follows:

H(x, u, λ, t) = L(x, u, t) + λᵀ f(x, u, t)

such that J̄ can be written

J̄ = φ(x(t_f)) + ∫_{t₀}^{t_f} [ H(x, u, λ, t) − λᵀẋ ] dt

Integrating the λᵀẋ term by parts (with δx(t₀) = 0), the variation in the performance index is

δJ̄ = [ ∂φ/∂x − λᵀ ] δx │_{t_f} + ∫_{t₀}^{t_f} [ ( ∂H/∂x + λ̇ᵀ ) δx + ( ∂H/∂u ) δu ] dt
Optimal control
For a minimum, it is necessary that δJ̄ = 0 for all admissible variations. Choosing λ to eliminate the δx terms gives the costate equations and boundary condition

λ̇ᵀ = −∂H/∂x,  λᵀ(t_f) = ∂φ/∂x │_{t_f}

This gives the stationarity condition

∂H/∂u = 0

These necessary optimality conditions, which define a two-point boundary value problem, are very useful, as they allow one to find analytical solutions to special types of optimal control problems, and to define numerical algorithms that search for solutions in general cases.
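The two-point boundary value problem above can be attacked numerically by shooting: guess λ(t₀), integrate forward, and adjust the guess until the terminal condition on λ holds. A minimal sketch, for an assumed toy problem not taken from the slides: scalar dynamics ẋ = u with cost J = ½∫(x² + u²) dt on [0, t_f]. The Hamiltonian H = ½(x² + u²) + λu gives u = −λ (stationarity) and λ̇ = −x (costate equation), with x(0) = x₀ and λ(t_f) = 0.

```python
# Shooting method sketch for an assumed scalar toy problem (not from the slides):
# x' = -lam, lam' = -x, with x(0) = x0 and the terminal condition lam(tf) = 0.
import math

def terminal_costate(lam0, x0=1.0, tf=2.0, n=4000):
    """Forward-Euler integrate x' = -lam, lam' = -x and return lam(tf)."""
    dt = tf / n
    x, lam = x0, lam0
    for _ in range(n):
        x, lam = x + dt * (-lam), lam + dt * (-x)
    return lam

def shoot(x0=1.0, tf=2.0):
    """Bisect on lam(0) until the terminal condition lam(tf) = 0 is met."""
    lo, hi = 0.0, 10.0  # lam(tf) is increasing in lam(0) for this problem
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if terminal_costate(mid, x0, tf) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam0 = shoot()
# For this particular toy problem the exact answer is lam(0) = tanh(tf) * x0,
# which makes a convenient sanity check on the numerics.
```

The resulting optimal control at the initial time is u(0) = −λ(0); in general, multi-dimensional problems use the same idea with a vector-valued root-finding step in place of bisection.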
Optimal control
The linear quadratic regulator

The performance index is given by

J = ½ x(t_f)ᵀ S x(t_f) + ½ ∫_{t₀}^{t_f} [ xᵀQx + uᵀRu ] dt

and the system dynamics obey

ẋ = Ax + Bu,  x(t₀) = x₀

Applying the necessary optimality conditions, one finds that the optimal control law can be expressed as a linear state feedback

u(t) = −K(t) x(t)
Optimal control
The state feedback gain is given by

K(t) = R⁻¹Bᵀ P(t)

where P(t) is the solution to the differential Riccati equation

−Ṗ = PA + AᵀP − PBR⁻¹BᵀP + Q,  P(t_f) = S

When the final time tends to infinity, it is possible to express the optimal control law as a state feedback, but with constant gain.
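The differential Riccati equation is integrated backward in time from the terminal condition. A minimal sketch for an assumed scalar example (a = b = q = r = 1, S = 0, chosen for illustration, not taken from the slides), where the equation reduces to −Ṗ = 2aP − (b²/r)P² + q:

```python
# Backward integration of the scalar differential Riccati equation
# -dP/dt = 2aP - (b^2/r)P^2 + q, starting from P(tf) = S.
# Assumed example values: a = b = q = r = 1, S = 0.

def riccati_backward(a=1.0, b=1.0, q=1.0, r=1.0, S=0.0, tf=10.0, n=10000):
    """Return P(0) obtained by Euler-stepping the Riccati ODE backward in time."""
    dt = tf / n
    P = S
    for _ in range(n):
        # stepping backward: P(t - dt) = P(t) + dt * (2aP - (b^2/r)P^2 + q)
        P = P + dt * (2.0 * a * P - (b * b / r) * P * P + q)
    return P

P0 = riccati_backward()
K0 = P0  # time-varying gain at t = 0: K(0) = (b/r) * P(0), with b = r = 1
```

For a long horizon, P(0) settles at the steady-state value (here 1 + √2), which previews the constant-gain case on the next slide.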
Optimal control
Here P is the positive definite solution to the algebraic Riccati equation

PA + AᵀP − PBR⁻¹BᵀP + Q = 0

and the closed-loop system

ẋ = ( A − BR⁻¹BᵀP ) x

is asymptotically stable.
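In the scalar case the algebraic Riccati equation is simply a quadratic in P, which makes the stability claim easy to verify by hand. A minimal sketch under that assumption (example values are illustrative, not from the slides):

```python
# Scalar algebraic Riccati equation: 2aP - (b^2/r)P^2 + q = 0.
# Taking the positive root gives the constant gain K = (b/r)P, and the
# closed-loop system x' = (a - bK)x should be asymptotically stable.
import math

def lqr_scalar(a, b, q, r):
    """Positive root of the scalar ARE and the resulting constant feedback gain."""
    # (b^2/r) P^2 - 2a P - q = 0  =>  P = r (a + sqrt(a^2 + q b^2 / r)) / b^2
    P = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    K = (b / r) * P
    return P, K

P, K = lqr_scalar(a=1.0, b=1.0, q=1.0, r=1.0)
closed_loop = 1.0 - 1.0 * K  # a - bK; negative means asymptotic stability
```

Even though the open-loop system (a = 1 > 0) is unstable, the LQR feedback places the closed-loop pole at a − bK = −√2, illustrating the stabilization property stated above. For matrix-valued problems one would use a numerical ARE solver instead of the closed form.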
Optimal control
This is an important result, as the linear quadratic regulator provides a way of stabilizing any linear system that is stabilizable. An extension of the LQR concept to systems with Gaussian additive noise, known as the linear quadratic Gaussian (LQG) controller, has been widely applied.
Optimal control
Minimum time problems

The objective is to reach a terminal constraint in minimum time. Find u(t) and t_f to minimize

J = t_f = ∫_{t₀}^{t_f} 1 dt

subject to

ẋ = f(x, u, t),  x(t₀) = x₀,  ψ(x(t_f), t_f) = 0
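With bounded controls, minimum time solutions are typically bang-bang: the control sits at a bound and switches at discrete times. A minimal sketch of the classical double-integrator example (assumed here for illustration, not taken from the slides): for ẋ₁ = x₂, ẋ₂ = u with |u| ≤ 1, driving (x₁, x₂) = (d, 0) to the origin takes u = −1 for √d seconds followed by u = +1 for √d seconds, for a minimum time of 2√d.

```python
# Bang-bang minimum time control of the double integrator (assumed classical
# example): x1' = x2, x2' = u, |u| <= 1, from (d, 0) to the origin.
import math

def simulate(d=1.0, dt=1e-4):
    """Euler-simulate the bang-bang law; return the final state and elapsed time."""
    t_switch = math.sqrt(d)       # analytic switching time for this example
    x1, x2, t = d, 0.0, 0.0
    while t < 2.0 * t_switch:
        u = -1.0 if t < t_switch else 1.0   # full brake, then full thrust
        x1, x2 = x1 + dt * x2, x2 + dt * u
        t += dt
    return x1, x2, t

x1, x2, tf = simulate()  # final state should land near the origin, tf near 2
```

The switching structure is exactly what the necessary conditions predict: the Hamiltonian is linear in u, so minimizing it over the bounded control set pushes u to a boundary value determined by the sign of the costate coefficient.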
Optimal control
Problems with path constraints

Sometimes it is necessary to restrict the state and control trajectories so that a set of constraints is satisfied within the interval of interest:

g(x(t), u(t), t) ≤ 0,  t ∈ [t₀, t_f]

where g maps the state, control, and time into a vector of constraint values.
Optimal control
It may be required that the state satisfy equality constraints at some intermediate point in time t₁, with t₀ < t₁ < t_f. These are known as interior point constraints and can be expressed as follows:

N(x(t₁), t₁) = 0

where N is a vector-valued constraint function of the state and time.