Introductory Control Theory CS 659 Kris Hauser
Control Theory
The use of feedback to regulate a signal. The controller observes the signal x, compares it to the desired signal x_d, forms the error e = x - x_d (by convention, x_d = 0), and sends a control input u to the plant, whose dynamics are x' = f(x, u).
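To make the loop concrete, here is a minimal sketch (not from the slides) that simulates a scalar plant x' = f(x, u) under simple feedback using Euler integration; the plant, gain, and time step are illustrative assumptions.

```python
def f(x, u):
    # Hypothetical scalar plant: a leaky integrator driven by the control input.
    return 0.5 * x + u

def controller(x, x_d=0.0, K=2.0):
    # Feedback on the error e = x - x_d (by convention x_d = 0).
    e = x - x_d
    return -K * e

dt, x = 0.01, 1.0          # time step and initial condition (assumed)
for step in range(500):
    u = controller(x)
    x = x + dt * f(x, u)   # Euler step of x' = f(x, u)
print(f"x after 5 s of feedback: {x:.4f}")   # driven toward x_d = 0
```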
What might we be interested in?
- Controls engineering: produce a policy u(x,t), given a description of the plant, that achieves good performance.
- Verifying theoretical properties: convergence, stability, optimality of a given policy u(x,t).
Agenda
- PID control
- LTI multivariate systems & LQR control
- Nonlinear control & Lyapunov functions
Control is a huge topic, and we won't dive into much detail.
Model-free vs. model-based
Two general philosophies:
- Model-free: do not require a dynamics model to be provided
- Model-based: do use a dynamics model during computation
Model-free methods:
- Simpler
- Tend to require much more manual tuning to perform well
Model-based methods:
- Can achieve good performance (optimal w.r.t. some cost function)
- Are more complicated to implement
- Require reasonably good models (system-specific knowledge)
- Calibration: build a model using measurements before behaving
- Adaptive control: "learn" parameters of the model online from sensors
PID control
- Proportional-Integral-Derivative controller
- A workhorse of 1D control systems
- Model-free
Proportional term
u(t) = -K_p x(t), where K_p is the proportional gain.
The negative sign assumes the control acts in the same direction as x.
[Plot: x vs. t under proportional control]
Integral term
u(t) = -K_p x(t) - K_i I(t), where I(t) = ∫_0^t x(s) ds and K_i is the integral gain.
Residual steady-state errors are driven asymptotically to 0.
[Plot: x vs. t with integral action]
Instability
For a 2nd-order system (with momentum), P control alone can diverge.
[Plot: x vs. t diverging under P control]
Derivative term
u(t) = -K_p x(t) - K_d x'(t), where K_d is the derivative gain.
[Plot: x vs. t with derivative damping]
Putting it all together
u(t) = -K_p x(t) - K_d x'(t) - K_i ∫_0^t x(s) ds
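A minimal discrete-time sketch of the combined controller u = -K_p x - K_i ∫x dt - K_d x'; the gains, plant, and time step below are placeholder values, not ones from the lecture.

```python
class PID:
    """Discrete-time PID controller for a scalar signal x (desired value 0)."""
    def __init__(self, Kp, Ki, Kd, dt):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.integral = 0.0
        self.prev_x = None

    def update(self, x):
        self.integral += x * self.dt                      # I = integral of x dt
        deriv = 0.0 if self.prev_x is None else (x - self.prev_x) / self.dt
        self.prev_x = x
        return -(self.Kp * x + self.Ki * self.integral + self.Kd * deriv)

# Example: regulate a (hypothetical) double integrator x'' = u toward x = 0.
dt = 0.01
pid = PID(Kp=4.0, Ki=0.5, Kd=2.0, dt=dt)
x, v = 1.0, 0.0
for _ in range(2000):
    u = pid.update(x)
    v += u * dt
    x += v * dt
print(f"final x = {x:.4f}")   # converges toward 0 with these gains
```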
Parameter tuning
Example: Damped Harmonic Oscillator
A second-order, time-invariant linear system with a PID controller:
x''(t) = A x(t) + B x'(t) + C + D u(x, x', t)
For what starting conditions and gains is this stable and convergent?
Stability and Convergence
- A system is stable if errors stay bounded.
- A system is convergent if errors → 0.
Example: Damped Harmonic Oscillator
x'' = A x + B x' + C + D u(x, x')
PID controller: u = -K_p x - K_d x' - K_i I, where I = ∫_0^t x(s) ds
Closed-loop dynamics: x'' = (A - D K_p) x + (B - D K_d) x' + C - D K_i I
Homogeneous solution
- Unstable if A - D K_p > 0
- Natural frequency ω_0 = sqrt(D K_p - A)
- Damping ratio ζ = (D K_d - B) / (2 ω_0)
- If ζ > 1, overdamped
- If ζ < 1, underdamped (oscillates)
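As a quick sanity check (with illustrative values, not ones from the lecture), take A = B = 0 and D = 1, i.e. x'' = u, and pick PD gains; the formulas above give the natural frequency and damping ratio directly.

```python
import math

# Hypothetical plant/gain values: x'' = A x + B x' + D u with A = B = 0, D = 1.
A, B, D = 0.0, 0.0, 1.0
Kp, Kd = 4.0, 2.0

omega0 = math.sqrt(D * Kp - A)        # natural frequency ω_0 = sqrt(D*Kp - A) = 2
zeta = (D * Kd - B) / (2 * omega0)    # damping ratio ζ = (D*Kd - B)/(2 ω_0) = 0.5
print(omega0, zeta)                   # ζ < 1: underdamped, so the response oscillates
```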
Example: Trajectory following
Track a desired trajectory x_des(t) by driving the error x(t) - x_des(t) to zero.
[Plot: x(t) tracking x_des(t)]
Controller Tuning Workflow
- Hypothesize a control policy
- Analysis: assume a model; assume disturbances to be handled
- Test performance either through mathematical analysis or through simulation
- Go back and redesign the control policy
Mathematical techniques give you more insight to improve the redesign, but require more work.
Multivariate Systems
x' = f(x, u), with x ∈ X ⊆ R^n and u ∈ U ⊆ R^m
Because m and n generally differ, and the variables are coupled, this is not as easy as setting up n independent PID controllers.
Linear Time-Invariant Systems
- Linear: x' = f(x, u, t) = A(t) x + B(t) u
- LTI: x' = f(x, u) = A x + B u
Nonlinear systems can sometimes be approximated by linearization.
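One common way to obtain the A, B matrices is to numerically linearize f about an equilibrium (x0, u0); the pendulum dynamics and finite-difference step below are assumptions for illustration.

```python
import numpy as np

def f(x, u):
    # Hypothetical pendulum: state x = (angle, angular velocity), torque input u.
    g, L = 9.81, 1.0
    return np.array([x[1], -(g / L) * np.sin(x[0]) + u[0]])

def linearize(f, x0, u0, eps=1e-6):
    """Finite-difference Jacobians: x' ≈ A (x - x0) + B (u - u0) near (x0, u0)."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n)); B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

A, B = linearize(f, np.array([0.0, 0.0]), np.array([0.0]))
print(A)   # approximately [[0, 1], [-9.81, 0]]
print(B)   # approximately [[0], [1]]
```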
Convergence of LTI systems
x' = A x + B u. Let u = -K x; then x' = (A - BK) x.
The eigenvalues λ_i of (A - BK) determine convergence. Each λ_i may be complex; for convergence, each must have negative real part (real component in (-∞, 0)).
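A quick numerical check of this criterion (the A, B, K values below are made up for illustration):

```python
import numpy as np

# Hypothetical double-integrator system with a PD-style state feedback gain.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[4.0, 2.0]])        # u = -K x

eigvals = np.linalg.eigvals(A - B @ K)
print(eigvals)                    # complex pair, roughly -1 ± 1.73j
print(np.all(eigvals.real < 0))   # True: the closed loop converges
```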
Linear Quadratic Regulator
x' = A x + B u
Objective: minimize the quadratic cost ∫ (x^T Q x + u^T R u) dt over an infinite horizon, where x^T Q x is the error term and u^T R u penalizes control "effort".
Closed-form LQR solution
Closed-form solution u = -K x, with K = R^{-1} B^T P, where P is a symmetric matrix that solves the Riccati equation
A^T P + P A - P B R^{-1} B^T P + Q = 0
Derivation: calculus of variations. Packages are available for finding the solution.
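For instance, SciPy can solve the algebraic Riccati equation directly; the A, B, Q, R below are placeholders for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double integrator x'' = u, written as x' = A x + B u.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)            # state (error) cost
R = np.array([[1.0]])    # control effort cost

P = solve_continuous_are(A, B, Q, R)    # solves A^T P + P A - P B R^{-1} B^T P + Q = 0
K = np.linalg.solve(R, B.T @ P)         # K = R^{-1} B^T P
print(K)
print(np.linalg.eigvals(A - B @ K))     # closed-loop eigenvalues have negative real part
```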
Nonlinear Control
General case: x' = f(x, u)
Two questions:
- Analysis: how to prove convergence and stability for a given u(x)?
- Synthesis: how to find u(t) to optimize some cost function?
Toy Nonlinear Systems
Cart-pole, Acrobot, Mountain car
Proving convergence & stability with Lyapunov functions
Let u = u(x); then x' = f(x, u) = g(x).
Conjecture a Lyapunov function V(x):
- V(x) = 0 at the origin x = 0
- V(x) > 0 for all x in a neighborhood of the origin
[Plot: V(x)]
Proving stability with Lyapunov functions
Idea: prove that d/dt V(x) ≤ 0 under the dynamics x' = g(x) around the origin.
[Plots: V(x) vs. t; d/dt V(x) along trajectories of g(x)]
Proving convergence with Lyapunov functions
Idea: prove that d/dt V(x) < 0 under the dynamics x' = g(x) around the origin.
[Plots: V(x) vs. t; d/dt V(x) along trajectories of g(x)]
Proving convergence with Lyapunov functions
d/dt V(x) = dV/dx(x) · dx/dt = ∇V(x)^T g(x) < 0
[Plots: V(x) vs. t; d/dt V(x) along trajectories of g(x)]
How does one construct a suitable Lyapunov function?
- Typically some form of energy (e.g., KE + PE)
- Some art involved
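A small numerical illustration of the energy idea (the damped pendulum and sample grid below are assumptions, not from the slides): take V = kinetic + potential energy and check that ∇V(x)^T g(x) ≤ 0 near the origin.

```python
import numpy as np

def g(x):
    # Hypothetical closed-loop dynamics: damped pendulum (unit mass, length, gravity).
    theta, omega = x
    return np.array([omega, -np.sin(theta) - 0.5 * omega])

def V(x):
    # Energy-like Lyapunov candidate (KE + PE): zero at the origin, positive nearby.
    theta, omega = x
    return 0.5 * omega**2 + (1.0 - np.cos(theta))

def dVdt(x):
    # d/dt V(x) = ∇V(x)^T g(x); for this system it works out to -0.5*omega^2 <= 0.
    theta, omega = x
    grad = np.array([np.sin(theta), omega])
    return grad @ g(x)

# Check the sign condition on a grid of states near the origin.
samples = [np.array([th, om]) for th in np.linspace(-1, 1, 11)
                              for om in np.linspace(-1, 1, 11)]
print(all(dVdt(x) <= 1e-12 for x in samples))   # True: V never increases
```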
Direct policy synthesis: Optimal control
Input: cost function J(x), estimated dynamics f(x, u), finite state/control spaces X, U
Two basic classes:
- Trajectory optimization: hypothesize a control sequence u(t), simulate to get x(t), perform optimization to improve u(t), repeat. Output: optimal trajectory u(t) (in practice, only a locally optimal solution is found).
- Dynamic programming: discretize the state and control spaces, form a discrete search problem, and solve it. Output: optimal policy u(x) across all of X.
Discrete Search example
- Split X, U into cells x_1, …, x_n and u_1, …, u_m
- Build the transition function x_j = f(x_i, u_k)·dt for all i, k
- State machine with cost dt·J(x_i) for staying in state i
- Find u(x_i) that minimizes the sum of total costs
- Value iteration: repeated dynamic programming over V(x_i) = sum of total future costs
[Figure: value function for a 1-joint acrobot]
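A minimal value-iteration sketch on a discretized 1D system (the grid, dynamics, cost, and discount factor are illustrative assumptions, not the acrobot from the figure):

```python
import numpy as np

# Discretize a 1D state x in [-1, 1] and controls u in {-1, 0, 1}.
xs = np.linspace(-1.0, 1.0, 41)
us = [-1.0, 0.0, 1.0]
dt = 0.1

def next_index(i, u):
    # Transition: integrate x' = u for dt and snap to the nearest cell.
    x_next = np.clip(xs[i] + u * dt, -1.0, 1.0)
    return int(np.argmin(np.abs(xs - x_next)))

def cost(i, u):
    # Stage cost dt*J(x_i): penalize distance from the origin and control effort.
    return dt * (xs[i] ** 2 + 0.1 * u ** 2)

V = np.zeros(len(xs))
policy = np.zeros(len(xs))
for _ in range(200):                  # repeated dynamic programming sweeps
    for i in range(len(xs)):
        q = [cost(i, u) + 0.99 * V[next_index(i, u)] for u in us]   # discounted future cost
        V[i], policy[i] = min(q), us[int(np.argmin(q))]

print(policy[::10])   # roughly: push toward the origin from either side
```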
Receding Horizon Control (aka model predictive control)
At each step, plan over a finite horizon h, execute only the first control, then re-plan from the new state.
[Diagram: horizon 1 … horizon h]
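A bare-bones receding-horizon sketch (the double-integrator plant, horizon, and cost are assumptions): at each step optimize a length-h control sequence, apply only its first element, and re-solve from the new state.

```python
import numpy as np
from scipy.optimize import minimize

dt, h = 0.1, 10          # time step and planning horizon (assumed)

def rollout(x0, u_seq):
    # Simulate the (assumed) double integrator x'' = u and accumulate a quadratic cost.
    x = np.array(x0, dtype=float)
    cost = 0.0
    for u in u_seq:
        cost += x[0] ** 2 + x[1] ** 2 + 0.1 * u ** 2
        x = x + dt * np.array([x[1], u])
    return cost

def mpc_step(x0, u_guess):
    res = minimize(lambda u: rollout(x0, u), u_guess, method="L-BFGS-B")
    return res.x                              # optimized control sequence over the horizon

x = np.array([1.0, 0.0])
u_seq = np.zeros(h)
for _ in range(50):
    u_seq = mpc_step(x, u_seq)                # re-plan over the horizon
    u0 = u_seq[0]                             # execute only the first control
    x = x + dt * np.array([x[1], u0])         # plant advances one step
    u_seq = np.roll(u_seq, -1); u_seq[-1] = 0.0   # warm-start the next solve
print(x)   # the state should be driven toward the origin
```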
Controller Hooks in RobotSim
Given a loaded WorldModel:
  sim = Simulator(world)
  c = sim.getController(0)
By default the controller is a trajectory queue with a PID controller:
- c.setMilestone(qdes) – moves smoothly to qdes
- c.addMilestone(q1), c.addMilestone(q2), … – appends a list of milestones and smoothly interpolates between them
You can override this behavior to get a manual control loop. At every time step:
- Read q, dq with c.getSensedConfig(), c.getSensedVelocity()
- For torque commands: compute u(q, dq, t) and send it via c.setTorque(u)
- OR for PID commands: compute qdes(q, dq, t), dqdes(q, dq, t) and send them via c.setPIDCommand(qdes, dqdes)
A sketch of such a loop appears below.
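A hedged sketch of the manual loop, using only the calls named above; the stepping call sim.simulate(dt), the time step, the target configuration, and the PD gains are assumptions about the surrounding setup, not part of the slide.

```python
# Assumes `world` is a loaded WorldModel and Simulator has been imported from the package.
sim = Simulator(world)
c = sim.getController(0)

dt = 0.01
qdes = [0.0] * len(c.getSensedConfig())   # hypothetical target configuration
for step in range(1000):
    q = c.getSensedConfig()
    dq = c.getSensedVelocity()
    # Option 1: torque command u(q, dq, t) -- here a simple (assumed) PD law.
    u = [-10.0 * (qi - qd) - 1.0 * dqi for qi, qd, dqi in zip(q, qdes, dq)]
    c.setTorque(u)
    # Option 2 (instead of setTorque): send a PID command.
    # c.setPIDCommand(qdes, [0.0] * len(qdes))
    sim.simulate(dt)    # assumed stepping call to advance the simulation
```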
Next class
Motion planning. Readings: Principles, Ch. 2, 5.1, 6.1