Autonomous Cyber-Physical Systems: Basics of Control


Autonomous Cyber-Physical Systems: Basics of Control
Spring 2018, CS 599. Instructor: Jyo Deshmukh
Acknowledgment: Some of the material in these slides is based on the lecture slides for CIS 540: Principles of Embedded Computation taught by Rajeev Alur at the University of Pennsylvania. http://www.seas.upenn.edu/~cis540/

Layout
- Lyapunov analysis for nonlinear systems
- BIBO stability
- Introduction to PID control

Nonlinear Dynamical System: Simple Pendulum
- Arc length: $s = \ell\theta$
- Linear velocity: $v = \frac{ds}{dt} = \ell \frac{d\theta}{dt}$
- Linear acceleration: $a = \frac{dv}{dt} = \ell \frac{d^2\theta}{dt^2}$
- Newton's second law along the arc, with gravity and friction: $F = ma = -mg \sin\theta - b\dot{\theta}$
- Dividing by $m$: $\ell \frac{d^2\theta}{dt^2} = -g \sin\theta - \frac{b}{m} \frac{d\theta}{dt}$
[Figure: pendulum of length $\ell$ and mass $m$ displaced by angle $\theta$; gravity $mg$ resolves into components $mg\sin\theta$ and $mg\cos\theta$, with a friction force opposing the motion]

Simple Pendulum Dynamics
Let $x_1 = \theta$, $x_2 = \dot{\theta}$, and let $\ell = g$ and $\frac{b}{m\ell} = k$.
Rewriting the previous equation:
$\dot{x}_1 = x_2$
$\dot{x}_2 = -\sin x_1 - k x_2$
For small angles, the above equation is almost linear (replace $\sin x_1$ by $x_1$), and we can use techniques for linear systems to show stability.
But for any larger angle (less than $\pi$ radians), we also know the pendulum eventually stops. How do we show that the pendulum system is stable? Use my method!!

Lyapunov’s first method
Given $\dot{\mathbf{x}} = f(\mathbf{x}) = \begin{bmatrix} f_1(\mathbf{x}) \\ \vdots \\ f_n(\mathbf{x}) \end{bmatrix}$:
Step 1: Find the Jacobian matrix of $f(\mathbf{x})$:
$J_f = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix}$
Step 2: Set $\mathbf{x} = \mathbf{x}^*$ in $J_f$ to get a matrix $A \in \mathbb{R}^{n \times n}$.
Step 3: If the linear system $\dot{\mathbf{x}} = A\mathbf{x}$ is stable, the original nonlinear system is locally stable at the equilibrium point. (Local = there exists some neighborhood of the equilibrium point.)

Lyapunov’s first method for pendulum
Dynamics: $\dot{x}_1 = x_2$; $\dot{x}_2 = -\sin x_1 - k x_2$, i.e., $f = \begin{bmatrix} x_2 \\ -\sin x_1 - k x_2 \end{bmatrix}$
Step 1: $J_f = \begin{bmatrix} 0 & 1 \\ -\cos x_1 & -k \end{bmatrix}$
Step 2: $J_f \big|_{\mathbf{x} = (0,0)} = \begin{bmatrix} 0 & 1 \\ -1 & -k \end{bmatrix} = A$
Step 3: The eigenvalues $\lambda$ of $A$ satisfy $\lambda^2 + k\lambda + 1 = 0$ $\Rightarrow$ both eigenvalues have negative real parts (for friction $k > 0$)
The pendulum is locally stable.
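To make this concrete, here is a minimal numerical check of Step 3 in Python/NumPy (the friction coefficient $k = 0.5$ is an illustrative assumption, not a value fixed by the slides):

```python
import numpy as np

# Jacobian of f(x) = (x2, -sin(x1) - k*x2) evaluated at the equilibrium (0, 0);
# k = 0.5 is an assumed, illustrative friction coefficient.
k = 0.5
A = np.array([[0.0, 1.0],
              [-1.0, -k]])

eigvals = np.linalg.eigvals(A)
print(eigvals)  # roots of lambda^2 + k*lambda + 1 = 0

# Both eigenvalues have negative real parts (for k > 0),
# so the pendulum is locally stable at the origin.
assert all(ev.real < 0 for ev in eigvals)
```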

Lyapunov’s second method
- A method to prove global stability
- Relies on the notion of Lyapunov functions
- Intuition: find a Lyapunov function $V(\mathbf{x})$ that can be interpreted as the energy of the system
- Stable systems eventually lose their energy and return to the quiescent state
- Prove that the energy of the system (as encoded by the Lyapunov function) decreases over time

Lyapunov’s Second Method: the math
Assume w.l.o.g. that the equilibrium point $\mathbf{x}^*$ is at the origin, i.e., $\mathbf{0}$.
$V(\mathbf{x})$ is a Lyapunov function over the domain $D$ iff:
- $\forall \mathbf{x} \in D - \{\mathbf{0}\}: V(\mathbf{x}) > 0$ [positivity condition]
- $\forall \mathbf{x} \in D - \{\mathbf{0}\}: \frac{d}{dt} V(\mathbf{x}) \le 0$ [derivative negativity condition]
- if $\mathbf{x} = \mathbf{0}$: $V(\mathbf{x}) = 0$ and $\frac{d}{dt} V(\mathbf{x}) = 0$
The system $\dot{\mathbf{x}} = f(\mathbf{x})$ is stable in the sense of Lyapunov if such a $V(\mathbf{x})$ exists.

Illustration and the Lie derivative
- $V(\mathbf{x})$ decreases as the system evolves, i.e., for every pair of points $\mathbf{a}, \mathbf{b}$ along a system trajectory (with $\mathbf{b}$ visited after $\mathbf{a}$), $V(\mathbf{a}) \ge V(\mathbf{b})$
- In other words, $V(\mathbf{x}(t))$ is a decreasing function of time, i.e., its derivative is negative semi-definite
- $\dot{V}(\mathbf{x})$ is called the Lie derivative of $V$
- By the chain rule, $\dot{V}(\mathbf{x}) = \frac{\partial V}{\partial \mathbf{x}} \frac{d\mathbf{x}}{dt} = \frac{\partial V}{\partial \mathbf{x}} f(\mathbf{x})$
[Figure: a trajectory starting at $\mathbf{x}(0)$ in the $(x_1, x_2)$ plane converging to $\mathbf{0}$, with $V(\mathbf{a}) \ge V(\mathbf{b})$ for successive points $\mathbf{a}$, $\mathbf{b}$]

Lyapunov’s second method for pendulum
Choose the Lyapunov function $V(\mathbf{x}) = (1 - \cos x_1) + \frac{1}{2} x_2^2$
Consider $D = \left(-\frac{\pi}{2}, \frac{\pi}{2}\right) \times (-1, 1)$; recall $f = \begin{bmatrix} x_2 \\ -\sin x_1 - k x_2 \end{bmatrix}$
By observation, $V(\mathbf{x}) > 0$ on $D$, except at $\mathbf{0}$, where it is $0$.
Now look at the Lie derivative:
$\frac{\partial V}{\partial \mathbf{x}} f(\mathbf{x}) = \begin{bmatrix} \sin x_1 & x_2 \end{bmatrix} \begin{bmatrix} x_2 \\ -\sin x_1 - k x_2 \end{bmatrix} = x_2 \sin x_1 - x_2 \sin x_1 - k x_2^2 = -k x_2^2 \le 0$
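A quick numerical sanity check of this argument, sketched in Python/SciPy: simulate the pendulum from a state inside $D$ (the friction coefficient $k = 0.5$ and the initial state are illustrative assumptions) and confirm that $V$ never increases along the trajectory.

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 0.5  # assumed friction coefficient (illustrative)

def pendulum(t, x):
    """Pendulum dynamics: x1' = x2, x2' = -sin(x1) - k*x2."""
    return [x[1], -np.sin(x[0]) - k * x[1]]

def V(x):
    """Candidate Lyapunov function: potential + kinetic energy."""
    return (1 - np.cos(x[0])) + 0.5 * x[1] ** 2

# Simulate from an initial state inside D and sample V along the trajectory
sol = solve_ivp(pendulum, (0, 20), [1.0, 0.0],
                dense_output=True, rtol=1e-9, atol=1e-12)
ts = np.linspace(0, 20, 500)
vals = [V(sol.sol(t)) for t in ts]

# V should never increase along the trajectory (up to solver tolerance)
assert all(v2 <= v1 + 1e-8 for v1, v2 in zip(vals, vals[1:]))
print(f"V(x(0)) = {vals[0]:.4f}, V(x(20)) = {vals[-1]:.4f}")
```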

Second method for asymptotic/exponential stability
Recall: $V(\mathbf{x})$ is a Lyapunov function over the domain $D$ iff $\forall \mathbf{x} \in D - \{\mathbf{0}\}: V(\mathbf{x}) > 0$ [positivity] and $\frac{d}{dt} V(\mathbf{x}) \le 0$ [derivative negativity], with $V(\mathbf{0}) = 0$ and $\frac{d}{dt} V(\mathbf{0}) = 0$; the system $\dot{\mathbf{x}} = f(\mathbf{x})$ is stable in the sense of Lyapunov if such a $V(\mathbf{x})$ exists.
- Asymptotic stability: strengthen the second condition to $\forall \mathbf{x} \in D - \{\mathbf{0}\}: \frac{d}{dt} V(\mathbf{x}) < 0$
- Exponential stability: strengthen the second condition to $\forall \mathbf{x} \in D - \{\mathbf{0}\}: \frac{d}{dt} V(\mathbf{x}) < -\alpha \|\mathbf{x}\|$ or $\forall \mathbf{x} \in D - \{\mathbf{0}\}: \frac{d}{dt} V(\mathbf{x}) < -\alpha V(\mathbf{x})$

Challenges with Lyapunov’s second method
- How do we find a Lyapunov function? Maybe use the physics of the system to understand what encodes "energy"
- For certain nonlinear systems (those with polynomial dynamics), there are some algorithmic methods
- In general, a hard problem
- Yet, it is a powerful approach to prove global stability

Bounded signals
A signal $\mathbf{x}$ is bounded if there is a constant $c$ such that $\forall t: \|\mathbf{x}(t)\| < c$.
Bounded signals:
- Constant signal: $x(t) = 1$
- Exponential signal: $x(t) = a e^{bt}$, for $b \le 0$
- Sinusoidal signal: $x(t) = a \sin(\omega t)$
Not bounded:
- Ramp signal: $x(t) = a + bt$, for any $b \ne 0$
- Exponential signal: $x(t) = a e^{bt}$, for $b > 0$

BIBO stability
A system with Lipschitz-continuous dynamics is BIBO-stable if for every bounded input $\mathbf{u}(t)$, the output $\mathbf{y}(t)$ from the initial state $\mathbf{x}_0 = 0$ is bounded.
Asymptotic stability $\Rightarrow$ BIBO stability!
Simple helicopter model:
- Two rotors: the main rotor gives lift; the tail rotor prevents the helicopter from spinning
- The torque produced by the tail rotor must perfectly counterbalance the friction with the main rotor, or the helicopter spins
Image credit: Lee & Seshia, Introduction to Embedded Systems - A Cyber-Physical Systems Approach, http://leeseshia.org/

Helicopter Model continued
- $u$: net torque on the tail of the helicopter, i.e., the difference between the frictional torque exerted by the main rotor shaft and the counteracting torque from the tail rotor
- $y$: rotational velocity of the body
- Torque = moment of inertia $\times$ rotational acceleration: $u(t) = I \dot{y}(t)$, so $y(t) = \frac{1}{I} \int_0^t u(\tau)\, d\tau$
What happens when $u(t)$ is a constant input? $y(t)$ is not bounded $\Rightarrow$ the helicopter model is not BIBO-stable!
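A tiny simulation sketch of this observation (the moment of inertia $I = 2$ is an arbitrary assumption): a bounded constant input drives the pure-integrator output without bound.

```python
import numpy as np

# The helicopter model is a pure integrator: y(t) = (1/I) * int_0^t u(tau) dtau.
# I = 2.0 is an arbitrary assumed moment of inertia.
I = 2.0
t = np.linspace(0.0, 100.0, 1001)
dt = t[1] - t[0]

u = np.ones_like(t)          # bounded (constant) input
y = np.cumsum(u) * dt / I    # crude numerical integration of u

print(y[-1])  # grows linearly with t: no bound c works => not BIBO-stable
```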

Controller Design
- Design a controller to ensure desired objectives of the overall physical system
- An old area: the first papers date from the 1860s
- The first controllers were mechanical, hydraulic, or electro-mechanical, based on relays etc.
- Digital control, i.e., the use of software and computers/microcontrollers to perform control tasks, began roughly in the 1960s
[Figure: a relay-based control system. Image credit: Segovia & Theorin, History of Control, History of PLC and DCS]

Open-loop vs. Closed-loop control
Open-loop or feed-forward control: the control action does not depend on the plant output.
Example: older-style A/Cs in cars
- Input: the user's temperature dial setting
- Car's response: lower or raise the temperature
- The car does not actually measure the temperature in the cabin to adjust the temperature/air flow
Pros: cheaper, no sensors required. Cons: quality of control is generally poor without human intervention.
[Block diagram: Controller takes $\mathbf{i}(t)$ and produces $\mathbf{u}(t)$, which drives the Plant to output $\mathbf{y}(t)$; no feedback path]

Closed-loop or Feedback Control
- The controller adjusts controllable inputs in response to observed outputs
- Can respond better to variations in disturbances
- Performance depends on how well the outputs can be sensed, and how quickly the controller can track changes in the output
- Many different flavors of feedback control!
[Block diagram: $\mathbf{i}(t)$ and the measured output $\mathbf{y}(t)$ enter a summing junction $\sum$; the Controller maps the result to $\mathbf{u}(t)$, which drives the Plant]

Simple Linear Feedback Control: Reference Tracking
Plant: $\dot{\mathbf{x}} = A\mathbf{x} + B\mathbf{u}$, $\mathbf{y} = C\mathbf{x} + D\mathbf{u}$; Controller: $\mathbf{u} = K(\mathbf{r} - \mathbf{y})$, where the reference $\mathbf{r}(t)$ and the output $\mathbf{y}(t)$ are compared at a summing junction ($+/-$).
Common objective: make the plant state track the reference signal $\mathbf{r}(t)$.
For convenience, let $C = I$ (identity) and $D = \mathbf{0}$ (zero matrix), i.e., the full state is observable through sensors, and the input has no immediate effect on the output.

Simple Linear Feedback Control: Reference Tracking (continued)
Plant: $\dot{\mathbf{x}} = A\mathbf{x} + B\mathbf{u}$; Controller: $\mathbf{u} = K(\mathbf{r} - \mathbf{x})$
Closed-loop dynamics: $\dot{\mathbf{x}} = A\mathbf{x} + BK(\mathbf{r} - \mathbf{x}) = (A - BK)\mathbf{x} + BK\mathbf{r}$
- Pick $K$ such that the closed-loop system has desirable behavior
- To make the closed-loop system stable, pick $K$ such that the eigenvalues of $A - BK$ have negative real parts
- A controller designed this way is also called a pole placement controller

Designing a pole placement controller
Plant: $\dot{\mathbf{x}} = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}\mathbf{x} + \begin{bmatrix} 1 \\ 0 \end{bmatrix}\mathbf{u}$; Controller: $\mathbf{u} = K(\mathbf{r} - \mathbf{x})$
Note $\operatorname{eigs}(A) = \{0.382,\ 2.618\}$ $\Rightarrow$ unstable plant!
Let $K = \begin{bmatrix} k_1 & k_2 \end{bmatrix}$. Then $A - BK = \begin{bmatrix} 1 - k_1 & 1 - k_2 \\ 1 & 2 \end{bmatrix}$
The eigenvalues of $A - BK$ satisfy $\lambda^2 + (k_1 - 3)\lambda + (1 - 2k_1 + k_2) = 0$
Suppose we want eigenvalues at $-5, -6$; the desired characteristic equation is then $\lambda^2 + 11\lambda + 30 = 0$
Comparing the two equations: $k_1 - 3 = 11$ and $1 - 2k_1 + k_2 = 30$
This gives $k_1 = 14$, $k_2 = 57$. Thus the controller with $K = \begin{bmatrix} 14 & 57 \end{bmatrix}$ stabilizes the plant!
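The same gains can be recovered programmatically. The sketch below uses SciPy's place_poles on the plant matrices from this slide:

```python
import numpy as np
from scipy.signal import place_poles

# Plant from this slide
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
B = np.array([[1.0],
              [0.0]])

print(np.linalg.eigvals(A))  # ~ [0.382, 2.618]: unstable plant

# Place the closed-loop poles at -5 and -6
K = place_poles(A, B, [-5.0, -6.0]).gain_matrix
print(K)  # expect approximately [[14., 57.]]

print(np.linalg.eigvals(A - B @ K))  # ~ [-5., -6.]
```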

Controllability matrix
Can we always choose eigenvalues to find a stabilizing controller? NO!
Trivial case: if $B = \begin{bmatrix} 0 & 0 \end{bmatrix}^T$, no controller will stabilize the plant!
How do we determine, for given $A, B$, whether such a controller exists? Form the controllability matrix. For linear time-invariant (LTI) systems:
$R = \begin{bmatrix} B & AB & A^2 B & \cdots & A^{n-1} B \end{bmatrix}$, where $n$ is the state dimension
The system is controllable iff $R$ has full row rank (i.e., its rows are linearly independent).
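A short sketch of this rank test, using the plant from the previous slide (controllability_matrix is our own illustrative helper, not a library routine):

```python
import numpy as np

def controllability_matrix(A, B):
    """R = [B, AB, A^2 B, ..., A^(n-1) B] for an n-state LTI system."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[1.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0], [0.0]])

R = controllability_matrix(A, B)
print(np.linalg.matrix_rank(R) == A.shape[0])  # True => controllable
```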

Linear Quadratic Regulator
Pole placement involves heuristics (we arbitrarily decided where to put the eigenvalues). A more principled approach is to place the poles so that the closed-loop system optimizes the cost function:
$J_{LQR} = \int_0^\infty \left( \mathbf{x}(t)^T Q\, \mathbf{x}(t) + \mathbf{u}(t)^T R\, \mathbf{u}(t) \right) dt$
$\mathbf{x}(t)^T Q \mathbf{x}(t)$ is called the state cost; $\mathbf{u}(t)^T R \mathbf{u}(t)$ is called the control cost.
For a feedback law $\mathbf{u}(t) = -K_{\mathrm{lqr}}\mathbf{x}(t)$, the optimal gain $K_{\mathrm{lqr}}$ can be computed exactly. In Matlab, there is a simple one-line function lqr to do this!
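Outside Matlab, the same gain can be obtained by solving the continuous-time algebraic Riccati equation; a sketch in Python/SciPy, where the choices $Q = I$ and $R = I$ are arbitrary illustrative weights:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[1.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0], [0.0]])
Q = np.eye(2)  # assumed state cost weight (identity, for illustration)
R = np.eye(1)  # assumed control cost weight (identity, for illustration)

# Solve the continuous-time algebraic Riccati equation, then K = R^-1 B^T P
P = solve_continuous_are(A, B, Q, R)
K_lqr = np.linalg.inv(R) @ B.T @ P

print(K_lqr)
print(np.linalg.eigvals(A - B @ K_lqr))  # negative real parts: stable
```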

PID controllers
While the previous controllers made systematic use of linear systems theory, PID controllers are the most widely used and most prevalent in practice.
Main architecture: the error $\mathbf{e}(t) = \mathbf{r}(t) - \mathbf{y}(t)$ feeds three parallel terms, $K_P\, \mathbf{e}(t)$, $K_I \int_0^t \mathbf{e}(\tau)\, d\tau$, and $K_D \frac{d\mathbf{e}(t)}{dt}$; their sum is the control input $\mathbf{u}(t)$ to the plant $\dot{\mathbf{x}} = A\mathbf{x} + B\mathbf{u}$, $\mathbf{y} = C\mathbf{x} + D\mathbf{u}$.

PID controller architecture
Compute the error signal $\mathbf{e}(t) = \mathbf{r}(t) - \mathbf{y}(t)$.
The output of the PID controller is the sum of 3 components:
- Proportional term $K_P\, \mathbf{e}(t)$: $K_P$ is the proportional gain; feedback correction proportional to the error. No error $\Rightarrow$ no correction; by itself, it can cause steady-state offsets.
- Integral term $K_I \int_0^t \mathbf{e}(\tau)\, d\tau$: $K_I$ is the integral gain; eliminates residual error by accounting for the historical cumulative value of the error.
- Derivative term $K_D \frac{d\mathbf{e}(t)}{dt}$: $K_D$ is the derivative gain; predictive action that improves the settling time and stability of the system.

PID controller in practice
- Often only PI or PD control is used
- There are many heuristics to tune PID controllers, i.e., to find values of $K_P$, $K_I$, $K_D$
- Several tuning recipes exist, usually relying on designer expertise. E.g., the Ziegler-Nichols method: increase $K_P$ until the system starts oscillating with period $T$ (say at $K_P = K^*$), then set $K_P = 0.6 K^*$, $K_I = \frac{1.2 K^*}{T}$, $K_D = \frac{3}{40} K^* T$
- In actual implementations, the integrator cannot keep summing the error: this leads to integrator windup. Practical implementations saturate the integrator output (see the sketch below).
- Matlab/Simulink has PID controller blocks and PID auto-tuning capabilities
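As a rough illustration of windup handling, here is a minimal discrete-time PID sketch with integrator clamping (the gains, time step, and saturation limit are illustrative assumptions, not tuned values):

```python
class PID:
    """Minimal discrete-time PID sketch with integrator clamping.
    Gains, time step, and saturation limit are illustrative assumptions."""

    def __init__(self, kp, ki, kd, dt, i_limit=10.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i_limit = i_limit     # bound on the integral term (anti-windup)
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, r, y):
        e = r - y
        self.integral += e * self.dt
        # Anti-windup: saturate the integrator so it cannot grow unboundedly
        self.integral = max(-self.i_limit, min(self.i_limit, self.integral))
        de = (e - self.prev_error) / self.dt
        self.prev_error = e
        return self.kp * e + self.ki * self.integral + self.kd * de

# Usage: pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01); u = pid.update(r, y)
```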

Next time
- Brief discussion of nonlinear control and model-predictive control
- Start discussing hybrid systems