Autonomous Cyber-Physical Systems: Basics of Control


1 Autonomous Cyber-Physical Systems: Basics of Control
Spring, CS 599. Instructor: Jyo Deshmukh. Acknowledgment: Some of the material in these slides is based on the lecture slides for CIS 540: Principles of Embedded Computation, taught by Rajeev Alur at the University of Pennsylvania.

2 Layout
Lyapunov analysis for nonlinear systems
BIBO stability
Introduction to PID control

3 Nonlinear Dynamical System
Simple Pendulum
Arc length: s = ℓθ
Linear velocity: v = ds/dt = ℓ dθ/dt
Linear acceleration: a = dv/dt = ℓ d²θ/dt²
Newton's law along the arc: F = ma = −mg sin θ − b θ̇
Dividing through by m: ℓ d²θ/dt² = −g sin θ − (b/m) dθ/dt
[Figure: free-body diagram labeling the angle θ, the gravity components mg sin θ and mg cos θ, the weight mg, and the friction force.]

4 Simple Pendulum Dynamics
Let x1 = θ, x2 = θ̇, let ℓ = g, and define k = b/(mℓ)
Rewriting the previous equation:
ẋ1 = x2
ẋ2 = −sin x1 − k x2
For small angles, the above equation is almost linear (replace sin x1 by x1), and we can use techniques for linear systems to show stability.
But for any larger starting angle (less than π), we know the pendulum eventually stops.
How do we show the pendulum system is stable? Use my method!!
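As a quick sanity check (not from the original slides), here is a minimal Python sketch that numerically integrates these normalized dynamics; the damping value k = 0.5 and the initial angle 2.5 rad are assumed illustrative choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 0.5  # assumed damping coefficient k = b/(m*l)

def pendulum(t, x):
    # Normalized pendulum dynamics from this slide: x1' = x2, x2' = -sin(x1) - k*x2
    x1, x2 = x
    return [x2, -np.sin(x1) - k * x2]

# Start from a fairly large angle (2.5 rad < pi) with zero velocity
sol = solve_ivp(pendulum, (0.0, 60.0), [2.5, 0.0])
print(sol.y[:, -1])  # final state; both components end up close to (0, 0)
```

The trajectory settles near the origin, which is exactly the behavior the stability arguments below formalize.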

5 Lyapunov’s first method
Given ẋ = f(x), with f(x) = (f1(x), …, fn(x)):
Step 1: Find the Jacobian matrix of f(x): J_f = [∂fi/∂xj], the n×n matrix whose (i, j) entry is ∂fi/∂xj.
Step 2: Set x = x* in J_f to get a constant matrix A ∈ ℝ^(n×n).
If the linear system ẋ = Ax is stable, the original nonlinear system is stable locally at the equilibrium point. (Local = in some neighborhood of the equilibrium point.)

6 Lyapunov’s first method for pendulum
Dynamics: ẋ1 = x2; ẋ2 = −sin x1 − k x2, i.e., f(x) = (x2, −sin x1 − k x2)
Step 1: J_f = [[0, 1], [−cos x1, −k]]
Step 2: J_f evaluated at x = (0, 0) gives A = [[0, 1], [−1, −k]]
Step 3: Eigenvalues λ of A satisfy λ² + kλ + 1 = 0 ⇒ both eigenvalues have negative real parts (for damping k > 0)
⇒ The pendulum is locally stable.
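The same check can be done numerically; a minimal Python sketch (with an assumed illustrative damping value k = 0.5):

```python
import numpy as np

k = 0.5  # assumed damping coefficient
# Jacobian of f evaluated at the equilibrium x = (0, 0)
A = np.array([[0.0, 1.0],
              [-np.cos(0.0), -k]])
eigvals = np.linalg.eigvals(A)
print(eigvals)                      # roots of lambda^2 + k*lambda + 1 = 0
print(np.all(eigvals.real < 0))     # True => locally stable
```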

7 Lyapunov’s second method
A method to prove global stability.
Relies on the notion of Lyapunov functions.
Intuition: find a Lyapunov function V(x) that can be interpreted as the energy of the system.
Stable systems eventually lose their energy and return to the quiescent state.
Prove that the energy of the system (as encoded by the Lyapunov function) decreases over time.

8 Lyapunov’s Second Method: the math
Assume w.l.o.g. that the equilibrium point x* is at the origin, i.e., x* = 0.
V(x) is a Lyapunov function over the domain D iff:
∀x ∈ D ∖ {0}: V(x) > 0 [positivity condition]
∀x ∈ D ∖ {0}: (d/dt) V(x) ≤ 0 [derivative negativity condition]
if x = 0: V(x) = 0 and (d/dt) V(x) = 0
The system ẋ = f(x) is stable in the sense of Lyapunov if such a V(x) exists.

9 Illustration and Lie-derivative
V(x) decreases as the system evolves, i.e., for every pair of points a, b along a system trajectory (with b reached after a), V(a) ≥ V(b).
In other words, V(x(t)) is a non-increasing function of time, i.e., its time derivative is negative semi-definite.
V̇(x) is called the Lie derivative of V (along f).
By the chain rule, V̇(x) = (∂V/∂x)(dx/dt) = (∂V/∂x) f(x).
[Figure: a trajectory in the (x1, x2) plane starting at x(0) and converging toward 0, passing through points a and b with V(a) ≥ V(b).]

10 Lyapunov’s second method for pendulum
Choose the Lyapunov function V(x) = (1 − cos x1) + x2²/2
Consider D = (−π/2, π/2) × (−1, 1); recall f(x) = (x2, −sin x1 − k x2)
By observation, V(x) > 0 on D, except at 0, where it is 0.
Now look at the Lie derivative:
(∂V/∂x) f(x) = [sin x1  x2] · (x2, −sin x1 − k x2)
             = x2 sin x1 − x2 sin x1 − k x2²
             = −k x2² ≤ 0
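A small symbolic sketch (using sympy, not part of the slides) can verify this Lie-derivative computation:

```python
import sympy as sp

x1, x2, k = sp.symbols('x1 x2 k', real=True)
V = (1 - sp.cos(x1)) + x2**2 / 2                 # candidate Lyapunov function
f = sp.Matrix([x2, -sp.sin(x1) - k * x2])        # pendulum vector field
gradV = sp.Matrix([[sp.diff(V, x1), sp.diff(V, x2)]])
Vdot = sp.simplify((gradV * f)[0, 0])            # Lie derivative dV/dt
print(Vdot)                                      # prints -k*x2**2
```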

11 Second method for asymptotic/exponential stability
V(x) is a Lyapunov function over the domain D iff:
∀x ∈ D ∖ {0}: V(x) > 0 [positivity condition]
∀x ∈ D ∖ {0}: (d/dt) V(x) ≤ 0 [derivative negativity condition]
if x = 0: V(x) = 0 and (d/dt) V(x) = 0
The system ẋ = f(x) is stable in the sense of Lyapunov if such a V(x) exists.
Asymptotic stability: strengthen the second condition to ∀x ∈ D ∖ {0}: (d/dt) V(x) < 0.
Exponential stability: strengthen the second condition to ∀x ∈ D ∖ {0}: (d/dt) V(x) < −α‖x‖ or (d/dt) V(x) < −α V(x), for some α > 0.

12 Challenges with Lyapunov’s second method
How do we find a Lyapunov function?
Maybe use the physics of the system to understand what encodes "energy".
For certain nonlinear systems (those with polynomial dynamics), algorithmic methods can search for a Lyapunov function.
In general, finding one is a hard problem.
Yet, it is a powerful approach to prove global stability.

13 Bounded signals
A signal x is bounded if there is a constant c s.t. ∀t: ‖x(t)‖ < c
Bounded signals:
Constant signal: x(t) = 1
Exponential signal: x(t) = a e^(bt), for b ≤ 0
Sinusoidal signals: x(t) = a sin(ωt)
Not bounded:
x(t) = a + bt, for any b ≠ 0
Exponential signal: x(t) = a e^(bt), for b > 0
[Figure: plots of ‖x‖ versus t for these example signals.]

14 BIBO stability
A system with Lipschitz-continuous dynamics is BIBO-stable if: for every bounded input u(t), the output y(t) from the initial state x(0) = 0 is bounded.
Asymptotic stability ⇒ BIBO stability!
Simple helicopter model:
Two rotors: the main rotor gives lift, the tail rotor prevents the helicopter from spinning.
The torque produced by the tail rotor must perfectly counterbalance the frictional torque of the main rotor, or the helicopter spins.
Image credit: Lee & Seshia, Introduction to Embedded Systems - A Cyber-Physical Systems Approach.

15 Helicopter Model continued
u: net torque on the tail of the helicopter – the difference between the frictional torque exerted by the main rotor shaft and the counteracting torque from the tail rotor.
y: rotational velocity of the body.
Torque = moment of inertia × rotational acceleration: u(t) = I ẏ(t)
Hence y(t) = (1/I) ∫₀ᵗ u(τ) dτ
What happens when u(t) is a constant (nonzero) input? y(t) is not bounded ⇒ the helicopter model is not BIBO-stable!
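A tiny numerical sketch of this argument (with assumed values I = 1 and u(t) = 1):

```python
import numpy as np

I = 1.0                       # assumed moment of inertia
u = 1.0                       # bounded, constant input torque
t = np.linspace(0.0, 100.0, 1001)
y = (u / I) * t               # y(t) = (1/I) * integral of u from 0 to t = u*t/I
print(y[-1])                  # grows linearly with t: no single bound works for all time
```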

16 Controller Design
Design a controller to ensure desired objectives of the overall physical system.
An old area: the first papers date from the 1860s.
The first controllers were mechanical, hydraulic, or electro-mechanical, based on relays etc.
Digital control began roughly in the 1960s, i.e., the use of software and computers/microcontrollers to perform control tasks.
[Figure: relay-based control system. Image credit: Segovia & Theorin, History of Control, History of PLC and DCS.]

17 Open-loop vs. Closed-loop control
Open-loop or feed-forward control: the control action does not depend on the plant output.
Example: older-style A/C in a car.
Input: the user's temperature dial setting.
Car's response: lower or raise the temperature.
The car does not actually measure the temperature in the cabin to adjust temperature/air-flow.
Pros: cheaper, no sensors required.
Cons: quality of control is generally poor without human intervention.
[Block diagram: i(t) → Controller → u(t) → Plant → y(t), with no feedback path.]

18 Closed-loop or Feedback Control
The controller adjusts controllable inputs in response to observed outputs.
Can respond better to variations in disturbances.
Performance depends on how well outputs can be sensed, and how quickly the controller can track changes in the output.
Many different flavors of feedback control!
[Block diagram: i(t) → Controller → u(t) → Plant → y(t), with y(t) fed back to the Controller.]

19 Simple Linear Feedback Control: Reference Tracking
Plant: ẋ = Ax + Bu, y = Cx + Du
Controller: u = K(r − y)
[Block diagram: r(t) and the fed-back output y(t) enter a summing junction; the Controller produces u(t), which drives the Plant.]
Common objective: make the plant state track the reference signal r(t).
For convenience, let C = I (identity) and D = 0 (zero matrix), i.e., the full state is observable through sensors and the input has no immediate effect on the output.

20 Simple Linear Feedback Control: Reference Tracking
Plant: ẋ = Ax + Bu; Controller: u = K(r − x)
Closed-loop dynamics: ẋ = Ax + BK(r − x) = (A − BK)x + BKr
Pick K such that the closed-loop system has desirable behavior.
To make the closed-loop system stable, pick K such that the eigenvalues of A − BK have negative real parts.
A controller designed this way is also called a pole placement controller.

21 Designing a pole placement controller
Plant: ẋ = [[1, 1], [1, 2]] x + [1; 0] u; Controller: u = K(r − x)
Note eigs(A) = {0.382, 2.618} ⇒ unstable plant!
Let K = [k1  k2]. Then A − BK = [[1 − k1, 1 − k2], [1, 2]]
The eigenvalues of A − BK satisfy λ² + (k1 − 3)λ + (1 − 2k1 + k2) = 0
Suppose we want eigenvalues at −5, −6; the desired equation would then be λ² + 11λ + 30 = 0
Comparing the two equations: k1 − 3 = 11 and 1 − 2k1 + k2 = 30
This gives k1 = 14, k2 = 57. Thus the controller with K = [14  57] stabilizes the plant!
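A short Python sketch (not in the slides) that verifies this hand calculation and cross-checks it against scipy's pole-placement routine:

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
B = np.array([[1.0],
              [0.0]])
print(np.linalg.eigvals(A))             # approx. 0.382 and 2.618 -> unstable plant

K = np.array([[14.0, 57.0]])            # gain derived by hand above
print(np.linalg.eigvals(A - B @ K))     # -5 and -6, as intended

res = place_poles(A, B, [-5.0, -6.0])   # let scipy place the poles instead
print(res.gain_matrix)                  # approx. [[14, 57]]
```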

22 Controllability matrix
Can we always choose eigenvalues to find a stabilizing controller? NO!
Trivial case: if B = [0 … 0]ᵀ, the input does not affect the state at all, so no controller will stabilize an unstable plant!
How do we determine, for a given A, B, whether a stabilizing controller exists?
Form the controllability matrix. For linear time-invariant (LTI) systems: R = [B  AB  A²B  …  Aⁿ⁻¹B], where n is the state dimension.
The system is controllable if R has full row rank (i.e., its rows are linearly independent).
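A minimal sketch of this test in Python, reusing the plant from the previous slide:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
B = np.array([[1.0],
              [0.0]])
n = A.shape[0]
# Controllability matrix R = [B, AB, ..., A^(n-1) B]
R = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
print(R)
print(np.linalg.matrix_rank(R) == n)    # True -> (A, B) is controllable
```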

23 Linear Quadratic Regulator
Pole placement involves heuristics (we arbitrarily decided where to put the eigenvalues).
A more principled approach is to place the poles such that the closed-loop system optimizes the cost function:
J_LQR = ∫₀^∞ ( x(t)ᵀ Q x(t) + u(t)ᵀ R u(t) ) dt
x(t)ᵀ Q x(t) is called the state cost; u(t)ᵀ R u(t) is called the control cost.
For a feedback law u(t) = −K_lqr x(t), the optimal gain K_lqr can be computed exactly.
In Matlab, there is a simple one-line function lqr to do this!
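A Python equivalent sketch (the slide mentions Matlab's lqr; here scipy's Riccati solver is used instead, with assumed weights Q = I and R = 1, and the plant from the pole-placement example):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
B = np.array([[1.0],
              [0.0]])
Q = np.eye(2)            # assumed state-cost weight
R = np.array([[1.0]])    # assumed control-cost weight

P = solve_continuous_are(A, B, Q, R)     # solve the continuous-time algebraic Riccati equation
K_lqr = np.linalg.solve(R, B.T @ P)      # K_lqr = R^{-1} B^T P
print(K_lqr)
print(np.linalg.eigvals(A - B @ K_lqr))  # closed-loop eigenvalues have negative real parts
```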

24 PID controllers
While the previous controllers made systematic use of linear systems theory, PID controllers are the most widely used and most prevalent in practice.
Main architecture:
[Block diagram: the error e(t) = r(t) − y(t) between reference r(t) and output y(t) feeds three parallel terms, K_P e(t), K_I ∫₀ᵗ e(τ) dτ, and K_D de(t)/dt; their sum is the control input u(t) to the plant ẋ = Ax + Bu, y = Cx + Du.]

25 PID controller architecture
Compute the error signal e(t) = r(t) − y(t).
The output of the PID controller is the sum of 3 components:
Proportional term K_P e(t): K_P is the proportional gain; feedback correction proportional to the error. No error ⇒ no correction; by itself it can leave steady-state offsets.
Integral term K_I ∫₀ᵗ e(τ) dτ: K_I is the integral gain; eliminates residual error by accounting for the cumulative history of the error.
Derivative term K_D de(t)/dt: K_D is the derivative gain; predictive action, improves the settling time and stability of the system.
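A minimal discrete-time sketch of this architecture (not from the slides; the gains, the time step dt, and the surrounding plant interface are placeholders):

```python
class PID:
    """Textbook discrete-time PID update; gains and dt are placeholder values."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # running approximation of the integral of e
        self.prev_error = 0.0    # previous error, for the derivative term

    def update(self, r, y):
        e = r - y                                   # error e(t) = r(t) - y(t)
        self.integral += e * self.dt                # integral-term accumulator
        derivative = (e - self.prev_error) / self.dt
        self.prev_error = e
        return self.kp * e + self.ki * self.integral + self.kd * derivative
```

Calling update(r, y) once per sampling period dt yields the control input u applied to the plant.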

26 PID controller in practice
In practice, one may often use only PI or PD control.
Many heuristics exist to tune PID controllers, i.e., to find values of K_P, K_I, K_D.
Several tuning recipes exist; they usually rely on designer expertise.
E.g., the Ziegler-Nichols method: increase K_P until the system starts oscillating with period T (say at K_P = K*), then set K_P = 0.6 K*, K_I = 1.2 K*/T, K_D = 0.075 K* T.
Actual implementations: the integrator cannot keep summing the error indefinitely; this leads to integrator windup. Practical implementations saturate the integrator output.
Matlab/Simulink has PID controller blocks and PID auto-tuning capabilities.
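One common way to realize the saturation mentioned above is to clamp the integrator state; a minimal sketch building on the PID sketch from the previous slide (the clamp limits are placeholder values):

```python
def update_with_antiwindup(pid, r, y, i_min=-10.0, i_max=10.0):
    # Same PID update as before, but the integrator state is clamped to
    # [i_min, i_max] (placeholder limits) so it cannot wind up.
    e = r - y
    pid.integral = min(max(pid.integral + e * pid.dt, i_min), i_max)
    derivative = (e - pid.prev_error) / pid.dt
    pid.prev_error = e
    return pid.kp * e + pid.ki * pid.integral + pid.kd * derivative
```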

27 Next time
Brief discussion of nonlinear control and model-predictive control.
Start discussing hybrid systems.

