580.691 Learning Theory (Reza Shadmehr): Optimal control
Topics covered: the linear quadratic tracking problem; constrained optimization with Lagrange multipliers; state-space equations in continuous and discrete forms; a model of the eye and generation of a saccade; optimal stochastic feedback control with Gaussian noise; duality with the Kalman filter; optimal control with signal-dependent noise.
Estimation vs. control
Estimation: given observations x and y, estimate the hidden state w. Your estimates have no bearing on your observations. Example: classical conditioning, where the actions of the learner have no effect on the stimuli.
Control: figure out the u that you need to give so that your observations y behave as you want them to. Example: operant conditioning, where the learner's actions affect whether it gets rewarded or not.
The linear quadratic tracking problem
We are trying to track a reference trajectory r(k) (q x 1). We observe y(k) (q x 1), which is related to the state x(k) (n x 1). We generate the command u(k) (m x 1), which causes a change in x(k). We wish to find the control sequence u(0), u(1), ..., u(p-1) that minimizes a cost function made of a tracking cost and a control cost, given the constraint that the state obeys linear dynamics.
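In standard form, with assumed names T and L for the tracking and control weighting matrices, the problem is:

J = sum_{k=1}^{p} [r(k) - y(k)]^T T [r(k) - y(k)] + sum_{k=0}^{p-1} u(k)^T L u(k)

subject to x(k+1) = A x(k) + B u(k) and y(k) = C x(k).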
Suppose we have a linear dynamical system:
Suppose we have a linear dynamical system x(k+1) = A x(k) + B u(k), with A an n x n matrix and B an n x m matrix. We have the history of inputs u(k) for k = 0, ..., p-1, and we want to write the history of states x(k) in one stacked expression: the stacked state history is (p.n) x 1, the stacked input history is (p.m) x 1, and they are related through a (p.n) x n matrix acting on x(0) and a (p.n) x (p.m) block matrix acting on the stacked inputs.
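Unrolling the recursion gives the stacked form that these dimension labels describe (the names F and G for the two block matrices are illustrative):

x(k) = A^k x(0) + sum_{j=0}^{k-1} A^(k-1-j) B u(j)

[x(1); x(2); ...; x(p)] = F x(0) + G U

where U = [u(0); u(1); ...; u(p-1)] stacks the input history, F = [A; A^2; ...; A^p], and G is block lower-triangular with (i, j) block A^(i-j) B for j <= i.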
Total cost = tracking cost + control cost. In the stacked notation both terms are quadratic in the stacked input vector U, so the total cost is a single quadratic function of U and can be minimized in closed form.
Constrained minimization with Lagrange multipliers: Example 1
Suppose we want to find the point (xs, ys) along the line y = mx + b that is closest to the point (xo, yo). The squared distance to (xo, yo) is the cost; membership in the line is the constraint. We want the point (xs, ys) that belongs to the line and, among all points that belong to the line, gives us the smallest cost. The cost contours are circles centered on (xo, yo); the points along each contour are of equal cost.
The solution is where the line meets the lowest cost contour that touches it: there, the vector normal to the constraint and the vector normal to the cost point in the same direction. The point we are looking for therefore satisfies the condition ∇f = λ ∇g, where λ is the Lagrange multiplier.
The condition ∇f = λ∇g gives 2 equations with 3 unknowns (xs, ys, λ).
The constraint itself, y = mx + b, is our 3rd equation.
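Carried through for this example: f(x, y) = (x - xo)^2 + (y - yo)^2 and g(x, y) = y - mx - b. The condition ∇f = λ∇g gives 2(x - xo) = -λm and 2(y - yo) = λ; together with y = mx + b these are three equations in the three unknowns (xs, ys, λ). Eliminating λ yields (x - xo) + m(y - yo) = 0, which says the segment from (xo, yo) to the solution is perpendicular to the line, as geometry demands.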
Constrained minimization with Lagrange multipliers: Example 2
Example from Steuard Jensen. Suppose the milkmaid wants to get to the cow while traveling the shortest distance possible, given the constraint that she first washes her milk pan in the river. So we want the shortest route made of a line from the milkmaid (M) to a point P on the river edge, and a line from P to the cow (C). The cost is f(P) = dist(M, P) + dist(P, C); the constraint is that P lies on the river edge, g(P) = 0. An ellipse can be defined as the set of points P for which the total distance from one focus to P and then to the other focus is constant. If we keep M and C as the foci of the ellipse, then as soon as we have an ellipse that touches the river edge, we have found the point P that is our solution. Note that at point P, the normal vector to g and the normal vector to f are in the same direction.
Constrained minimization with Lagrange multipliers
A scalar constraint. In order to minimize the scalar function f(x) subject to the scalar constraint g(x) = 0, we form an augmented cost L(x, λ) = f(x) + λ g(x). Note that when we find the x that satisfies the constraint, g(x) is zero, so we have not changed our cost function. To minimize the augmented cost, we set its derivatives to zero. So, to find the x that minimizes the cost subject to the constraint, we find the (x, λ) that satisfies:
∂L/∂x = ∇f(x) + λ ∇g(x) = 0
∂L/∂λ = g(x) = 0
This should look familiar from the last two examples.
Constrained minimization with Lagrange multipliers
A "vector" constraint. In order to minimize the scalar function f(x) subject to two constraints g1(x) = 0 and g2(x) = 0, we form an augmented cost with two Lagrange multipliers: L(x, λ1, λ2) = f(x) + λ1 g1(x) + λ2 g2(x). To minimize the augmented cost, we set its derivatives to zero. So, to find the x that minimizes the cost subject to the constraints, we find the (x, λ1, λ2) that satisfies ∇x L = 0, g1(x) = 0, and g2(x) = 0. We have as many multipliers as we have constraints.
Example: minimize the position variance at the last time point, with the constraint that the final states are at the goal. Setting the derivatives of the augmented cost to zero gives p equations; together with the 3 constraint equations we have p + 3 equations in p + 3 unknowns.
Let us construct a simple model of the eye’s dynamics and produce a saccade using optimal control
The forces acting on the eye are the force in the top spring, the force in the bottom spring, the force in the viscous element, and the force from the motor command. If we re-define x so that we measure it from x0/2, then the equivalent system is one in which the equilibrium point of the spring is at x = 0.
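A minimal sketch of the resulting dynamics, assuming a point mass m, net spring stiffness k, and viscosity b (these symbols are not from the slide): m d²x/dt² = u - k x - b dx/dt. In state-space form with x1 = position and x2 = velocity: dx1/dt = x2, dx2/dt = (u - k x1 - b x2) / m.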
System dynamics in continuous form
Our observation is y = C x. Goal: find the motor commands that move the mass (the eye) to a certain location by a certain time while minimizing a cost that depends on endpoint accuracy and motor commands. First step: re-formulate the system dynamics from continuous to discrete time. Second step: solve the optimal control problem.
Relating discrete and continuous representation of a linear system (approximate solution)
Continuous system: dx/dt = Ac x + Bc u. Discrete system: x(k+1) = A x(k) + B u(k). A simple (but approximate) method is to use Euler's approximation: x(k+1) ≈ x(k) + Δ (Ac x(k) + Bc u(k)), so A ≈ I + Δ Ac and B ≈ Δ Bc.
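A minimal numerical sketch of this approximation (the function name is illustrative):

import numpy as np

def euler_discretize(Ac, Bc, dt):
    # Continuous dynamics dx/dt = Ac x + Bc u, sampled every dt seconds.
    # Euler: x(k+1) ~ x(k) + dt*(Ac x(k) + Bc u(k)),
    # so the discrete matrices are A = I + dt*Ac and B = dt*Bc.
    A = np.eye(Ac.shape[0]) + dt * Ac
    B = dt * Bc
    return A, B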
Solution of continuous LTI state equations (scalar condition)
Suppose that our state is a scalar variable and the state update equation is of the form dx/dt = a x. The solution has the exponential form x(t) = e^(at) x(0). Now suppose that the state update also depends on an external input u(t): dx/dt = a x + b u(t). Then x(t) = e^(at) x(0) + ∫_0^t e^(a(t-τ)) b u(τ) dτ.
Matrix exponential
Suppose that our state is a vector variable: dx/dt = A x. We can imagine that the solution has a "matrix exponential" form: x(t) = e^(At) x(0). For any square matrix A, the matrix exponential exp(A) is a square matrix, which we can compute using the Taylor series expansion exp(A) = I + A + A²/2! + A³/3! + .... In Matlab, the matrix exponential is computed with expm(A) (plain exp(A) is elementwise); in Mathematica, use MatrixExp[A].
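A quick numerical check of the Taylor-series definition against scipy's built-in routine (the truncation order is arbitrary):

import numpy as np
from scipy.linalg import expm

def expm_taylor(A, terms=20):
    # exp(A) = I + A + A^2/2! + A^3/3! + ...
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k      # term now holds A^k / k!
        result = result + term
    return result

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
print(np.allclose(expm_taylor(A), expm(A)))  # True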
Some properties of the matrix exponential
Using the Taylor series expansion, one can show the following properties of the matrix exponential:
e^(A·0) = I
d/dt e^(At) = A e^(At) = e^(At) A
e^(A(t1+t2)) = e^(At1) e^(At2)
(e^(At))^(-1) = e^(-At)
Other properties: if AB = BA, then e^(A+B) = e^A e^B (this does not hold for non-commuting matrices).
Solution of continuous LTI state equations (vector condition)
By analogy with the scalar case, for dx/dt = A x + B u(t):
x(t) = e^(At) x(0) + ∫_0^t e^(A(t-τ)) B u(τ) dτ
Solution of discrete LTI state equations
For x(k+1) = A x(k) + B u(k), iterating gives:
x(k) = A^k x(0) + sum_{j=0}^{k-1} A^(k-1-j) B u(j)
Relating discrete and continuous representation of a linear system
Assume that u(t) is constant between two consecutive sampling instants (a zero-order hold). Evaluating the continuous solution over one sampling interval of length Δ then gives the discrete system matrices A = e^(Ac Δ) and B = (∫_0^Δ e^(Ac s) ds) Bc.
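A sketch of the exact discretization under this zero-order-hold assumption; the closed form for B below additionally assumes Ac is invertible:

import numpy as np
from scipy.linalg import expm

def zoh_discretize(Ac, Bc, dt):
    # A = expm(Ac*dt); B = (integral_0^dt expm(Ac*s) ds) @ Bc.
    # For invertible Ac, the integral equals inv(Ac) @ (A - I).
    A = expm(Ac * dt)
    B = np.linalg.solve(Ac, A - np.eye(Ac.shape[0])) @ Bc
    return A, B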
Discrete and continuous representation of a linear system
(noise-free scenario)
Continuous system: dx/dt = Ac x + Bc u, y = C x.
Discrete system: x(k+1) = A x(k) + B u(k), y(k) = C x(k), with A = e^(Ac Δ) and B = (∫_0^Δ e^(Ac s) ds) Bc.
State noise in the continuous system
State noise in the continuous domain: the state update includes an additive zero-mean Gaussian noise term. We note that for small Δ, the term inside the exponential is near zero over the range kΔ to (k+1)Δ, so we can approximate the matrix exponential with the identity matrix. The equivalent state noise in the discrete domain is then approximately the continuous noise integrated over one sampling interval, with covariance that scales with Δ.
Measurement noise in the continuous system
Measurement noise in the continuous domain: the observation carries an additive zero-mean Gaussian noise term. Suppose we average the sample y(t) over the discrete interval Δ to get our discrete sample. The noise in the discrete domain is then the average of the continuous noise over that interval; averaging over a window of length Δ shrinks the variance, giving the equivalent measurement noise in the discrete domain.
Discrete and continuous representation of a linear system with noise
Continuous system: dx/dt = Ac x + Bc u + ε, y = C x + η, with ε and η zero-mean Gaussian noises.
Equivalent discrete system: x(k+1) = A x(k) + B u(k) + w(k), y(k) = C x(k) + v(k), with A and B as above and with w and v zero-mean Gaussian noises whose covariances follow from the two preceding slides.
Continuous time model of the eye
Discrete time model of the eye
Optimal control problem
Make a 30 deg (~0.5 rad) saccade in 30ms and hold it there for 50ms
[Figure: simulated position, cost, and motor-command traces vs. time (sec) for this saccade.]
[Figure: position (rad) and velocity (rad/s) vs. time (sec) for 5, 10, and 15 deg saccades, and the motor command (N.m) and eye muscle activity for a 10 deg saccade.]
Resolving redundancies
Suppose we have a cursor whose position depends on the sum of the positions of the left and right joysticks, and suppose the left joystick is heavier than the right one. We want to move the cursor to some location: how much should we move each joystick?
[Figure: the resulting optimal displacement of each joystick over time.]
Summary: Open loop optimal control of a linear system with quadratic cost
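A minimal sketch of the open-loop solution via the stacked least-squares form developed earlier (the function and variable names are illustrative, and T and L are the assumed weighting matrices from above):

import numpy as np

def open_loop_lq(A, B, C, r, T, L, x0, p):
    # Minimize sum_k (r_k - y_k)' T (r_k - y_k) + u_k' L u_k
    # subject to x_{k+1} = A x_k + B u_k, y_k = C x_k.
    # r has shape (p, q); returns U with shape (p, m).
    n, m = B.shape
    # Stacked dynamics: X = F x0 + G U, Y = Cbig X
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(p)])
    G = np.zeros((p * n, p * m))
    for i in range(p):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Cbig = np.kron(np.eye(p), C)
    Tbig = np.kron(np.eye(p), T)
    Lbig = np.kron(np.eye(p), L)
    H = Cbig @ G
    b = r.reshape(-1) - Cbig @ (F @ x0)
    # Quadratic cost in U: (b - H U)' Tbig (b - H U) + U' Lbig U,
    # minimized by the normal equations below.
    U = np.linalg.solve(H.T @ Tbig @ H + Lbig, H.T @ Tbig @ b)
    return U.reshape(p, m)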
Issues with the control policy:
What if the system gets perturbed while the policy is being executed? With the current open-loop approach, there is no compensation for the perturbation. Moreover, in reality both the state update equation and the measurement equation are subject to noise; how do we take that into account? To resolve this, we need a way to figure out what command to produce given that we find ourselves at some state x at some time k. Once we figure this out, we will consider the situation where we cannot measure x directly but have noise to deal with; our best estimate will then come through the Kalman filter, which links estimation with control. The ingredients of the problem: the starting state, the sequence of actions, the observations, and the cost to minimize.
Note that at the last time step, cost is a quadratic function of state
Cost at the last time point: J(p) = x(p)^T W(p) x(p), where W(p) comes from the terminal cost. The cost-to-go at the next-to-last time point is the cost incurred at p-1 plus J(p), with x(p) = A x(p-1) + B u(p-1) substituted in.
We will now show that if we choose the optimal u at step p-1, then the cost-to-go is once again a quadratic function of the state x.
We just showed that for the last time step, the cost to go is a quadratic function of x:
The optimal u at time point p-1 minimizes the cost-to-go J(p-1). If at time point p-1 we indeed carry out this optimal policy, then u is a linear function of x, and the cost-to-go at time p-1 becomes a quadratic function of x. If we now repeat the process, we can find the optimal u for time point p-2; and if we apply the optimal u at time points p-2 and p-1, then the cost-to-go at time point p-2 is again a quadratic function of x. So in general, if for time points t+1, ..., p we have calculated the optimal policy for u, the above gives us a recipe to compute the optimal policy for time point t (see the recursion below).
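Written out, the recipe is the standard discrete-time LQR recursion (using W for the cost-to-go matrix and G for the feedback gain, as on the next slide; Q(k) is the state-cost weighting and L the command-cost weighting):

W(p) = Q(p)
G(k) = (L + B^T W(k+1) B)^(-1) B^T W(k+1) A
u(k) = -G(k) x(k)
W(k) = Q(k) + A^T W(k+1) (A - B G(k))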
Summary: optimal feedback control
Cost-to-go: the procedure is to compute the matrices W and G backward in time, from the last time point to the first.
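A minimal sketch of that backward pass (names are illustrative; Q_list holds the state-cost matrices for steps 0..p):

import numpy as np

def lqr_backward_pass(A, B, Q_list, L, p):
    # Returns feedback gains G[0..p-1] such that u_k = -G[k] x_k.
    W = Q_list[p].copy()                       # cost-to-go matrix at the final step
    gains = [None] * p
    for k in range(p - 1, -1, -1):
        G = np.linalg.solve(L + B.T @ W @ B, B.T @ W @ A)
        W = Q_list[k] + A.T @ W @ (A - B @ G)  # cost-to-go stays quadratic
        gains[k] = G
    return gains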
Modeling of an elbow movement
Continuous time model of the elbow
Discrete time model of the elbow
Goal: reach a target at 30 deg in 300 ms and hold it there for 100 ms.
[Figure: position cost, velocity cost, position, and motor command vs. time (sec) for three conditions: an unperturbed movement, the arm held at the start for 200 ms, and a 50 ms force pulse applied to the arm.]
Movement with a via point: we set the cost to be high at the time when we are supposed to be at the via point.
[Figure: position cost, position, motor command, and position gain vs. time (sec) for the via-point movement.]
Stochastic optimal feedback control
Biological processes have noise: neurons fire stochastically in response to a constant input, and muscles produce a stochastic force in response to constant stimulation. Here we will see how to solve the optimal control problem with additive Gaussian noise. The cost to minimize is the same quadratic cost as before. Because there is noise, we are no longer able to observe x directly; the best we can do is to estimate it. As we saw before, for a linear system with additive noise the best estimate of state comes from the Kalman filter. So our goal is to determine the best command u for the current estimate of x so that we minimize the global cost function. Approach: as before, at the last time point p the cost is a quadratic function of x. We will find the optimal motor command for time point p-1 so that it minimizes the expected cost-to-go. If we perform the optimal motor command at p-1, then we will see that the cost-to-go at p-1 is again a quadratic function of x.
Preliminaries: expected value of a squared random variable. In the following, assume x is the random variable.
Scalar x with mean μ and variance σ²: E[x²] = μ² + σ².
Vector x with mean μ and covariance Σ: E[x^T W x] = μ^T W μ + tr(W Σ).
Cost at the last time point: using the identity above, E[x(p)^T W(p) x(p)] = μ(p)^T W(p) μ(p) + tr(W(p) Σ(p)).
Cost-to-go at the next to the last time point
So we see that if our system has additive state or measurement noise, the optimal motor command remains the same as if the system had no noise at all. When we use the optimal policy at time point p-1, we see that, as before, the cost-to-go at p-1 is a quadratic function of x, and the matrix W at p-1 remains the same as when the system had no noise. The problem is that we do not have x; the best we can do is estimate x via the Kalman filter. We do this in the next slide.
On trial p-1, our best estimate of x is the prior.
We compute the prior for the current trial from the posterior of the last trial, then combine it with the observation through the Kalman gain to form the posterior estimate. We use xhat as our short-hand for the prior estimate of x on trial p-1. Although the noise in the system does not affect the control gain G, the estimate of x is of course affected by it, because the Kalman gain depends on the noise covariances.
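The update equations referenced here, in standard Kalman form (xhat is the state estimate and P its covariance; Q and R are the state and measurement noise covariances):

prior:     xhat(k|k-1) = A xhat(k-1|k-1) + B u(k-1),   P(k|k-1) = A P(k-1|k-1) A^T + Q
gain:      K(k) = P(k|k-1) C^T (C P(k|k-1) C^T + R)^(-1)
posterior: xhat(k|k) = xhat(k|k-1) + K(k) [y(k) - C xhat(k|k-1)],   P(k|k) = (I - K(k) C) P(k|k-1)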
Summary of stochastic optimal control for a linear system with additive Gaussian noise and quadratic cost: compute the feedback gains G with the same backward recursion as in the noise-free case (from the cost-to-go at the end back to the cost-to-go at the start), estimate the state with the Kalman filter, and apply u(k) = -G(k) xhat(k).
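A minimal simulation sketch combining the two pieces, a Kalman filter driving precomputed feedback gains (all names and the noise model are illustrative):

import numpy as np

def lqg_simulate(A, B, C, Q, R, gains, x0, P0, p, rng):
    # Run u_k = -G_k xhat_k with a Kalman-filter estimate of the state.
    n = A.shape[0]
    x, x_hat, P = x0.copy(), x0.copy(), P0.copy()
    for k in range(p):
        u = -gains[k] @ x_hat
        # True dynamics with additive Gaussian state and measurement noise
        x = A @ x + B @ u + rng.multivariate_normal(np.zeros(n), Q)
        y = C @ x + rng.multivariate_normal(np.zeros(C.shape[0]), R)
        # Kalman filter: predict, then correct
        x_hat = A @ x_hat + B @ u
        P = A @ P @ A.T + Q
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        x_hat = x_hat + K @ (y - C @ x_hat)
        P = (np.eye(n) - K @ C) @ P
    return x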
The duality of the Kalman filter and optimal control
In the estimation problem, we have a model of how we think the hidden states x are related to the observations y. Given an observation y, we have a rule with which we can change our estimates. Our objective is to minimize the trace of the variance P of our estimate xhat; this trace is our scalar cost function, which is quadratic in terms of xhat. We minimize it by finding the optimal gain k. If we use this optimal k, then we can compute the variance at the next time step; our cost (i.e., the variance) of course remains quadratic in terms of xhat.
The duality of the Kalman filter and optimal control, continued.
In the control problem, we have a model of how we think the hidden states x are related to commands u and observations y. Our objective is to find the u that minimizes a scalar cost. To find this u, we run time backwards! We start at the end time point and find the optimal u that minimizes the cost to go. When we find this u, we then move to the next time point and so on. The cost to go is a quadratic function of hidden states. This is very similar to the Kalman filter, where the cost was a quadratic function of the hidden states as well.
Duality of optimal control and Kalman filter, continued.
Lining up the two problems term by term: the weighting of state W in control is like an estimate of the state-uncertainty matrix in the Kalman filter, B^T B is like the state-update noise Q, and the motor cost L is like the measurement noise R (the tracking cost plays the role of the state noise). In optimal control, the motor commands are generated by applying a gain to the state; this gain is like the Kalman gain.
Noise characteristics of biological systems are not additive Gaussian
Noise in the motor output grows with the size of the motor command. The standard deviation of noise grows with mean force in an isometric task: participants produced a given force with their thumb flexors. In one condition (labeled "voluntary") the participants generated the force, whereas in another condition (labeled "NMES") the experimenters stimulated their muscles artificially to produce force. To guide force production, the participants viewed a cursor that displayed thumb force, but the experimenters analyzed the data during a 4-s period in which this feedback had disappeared. A: force produced by a typical participant; the period without visual feedback is marked by the horizontal bar in the 1st and 3rd columns and is expanded in the 2nd and 4th columns. B: when participants generated force, noise (measured as the standard deviation) increased linearly with force magnitude. Abbreviations: NMES, neuromuscular electrical stimulation; MVC, maximum voluntary contraction. From Jones et al. (2002) J Neurophysiol 88:1533.
Representing signal dependent noise
The motor command carries signal-dependent motor noise: the noise is zero-mean Gaussian, built from a vector of zero-mean, variance-1 Gaussian random variables scaled by the command itself. So the motor noise has mean zero and a variance that grows with the square of the motor command.
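One standard way to write this for a scalar command (the scaling constant c is an assumed symbol): the realized command is u + ε with ε = c φ u and φ ~ N(0, 1), so E[ε] = 0 and Var[ε] = c² u². The noise standard deviation grows linearly with the command, and the variance grows with its square.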
Computing a cost for the motor commands: minimize endpoint variance
Because there is noise in the motor commands, the commands produce variance in our state. The variance at the end of the movement is mostly influenced by the motor commands late in the movement. To see this, note that A is a matrix that, when raised to a power, becomes "smaller"; the larger the power, the smaller the resulting matrix. In the sum, we have a contribution from each motor command. When n is zero (the very first command), A is raised to a very high power, so the noise in this command has little influence on the endpoint variance. When n is large (commands near the end of the movement), A is raised to a small power, so the noise in these commands has a great deal of influence on the endpoint variance. Therefore we have a natural cost function for the motor commands: the endpoint variance itself.
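In symbols, consistent with the text (c is the assumed noise-scaling constant from above):

Var[x(p)] = sum_{n=0}^{p-1} A^(p-1-n) B Var[ε(n)] B^T (A^T)^(p-1-n), with Var[ε(n)] proportional to u(n)².

Early commands (small n) are multiplied by high powers of the decaying matrix A; late commands pass almost directly into the endpoint, which is why their noise dominates the endpoint variance.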
Control problem with signal dependent noise (Todorov 2005)
Cost per step: a state cost plus a command cost at each time step. To find the motor commands that minimize the total cost, we start at the last time step p and work backwards. At time step p, the cost is a quadratic function of x. At time step p-1, we can find the optimal u that minimizes the cost-to-go; when we find this optimal u, the cost-to-go at p-1 is a quadratic function of x plus a quadratic function of x - xhat. In general, by induction we can prove that as long as we apply the optimal u, the cost-to-go keeps this quadratic form. This proof is due to E. Todorov, Neural Computation, 2005.
Cost at time step p (last time step)
Cost-to-go at p-1
Optimal u to minimize the cost-to-go at time step p-1
J(p-1) is the cost-to-go at time step p-1, assuming that the optimal u is produced at p-1.
Note that unlike the cost at time step p, this cost-to-go is quadratic both in x and in the error in the estimate of x. So now we need to show that if we continue to produce the optimal u at each time step, the cost-to-go remains in this form for all time steps.
Conjecture: if at some time point k+1 the cost-to-go under an optimal control policy is quadratic in x and e, and provided that we produce a u that minimizes the cost-to-go at time step k, then the cost-to-go at time step k will also be quadratic. To prove this, our first step is to find the u that minimizes the cost-to-go at time step k, and then to show that the resulting optimal cost-to-go remains in the quadratic form above. To compute the expected-value term, we need to do some work on the estimation-error term e.
To compute the expected value of J(k+1), we compute the expected value of its two quadratic terms; the expected value of the third term has no derivative with respect to u, so it belongs with the terms that do not depend on u.
So we just showed that if at some time point k+1 the cost-to-go under an optimal control policy is quadratic in x and e, and provided that we produce a u that minimizes the cost-to-go at time step k, then the cost-to-go at time step k will also be quadratic. Since we had earlier shown that at time step p-1 the cost is quadratic in x and e, we now have the solution to our problem.
Summary: Control problem with signal dependent noise (Todorov 2005)
Cost per step, and the solution for the last time step.
Unlike additive Gaussian noise, signal-dependent noise affects the optimal control policy: the feedback gain becomes smaller with increased signal-dependent noise. This reduction is particularly large near the end of the movement, when the cost associated with motor commands tends to be larger.
[Figure: position feedback gain vs. time (sec) for a 30 deg saccade, plotted for motor-noise variances of 0.01, 0.1, and 1; larger noise gives smaller gain.]
[Figure: average eye speed (deg/sec) vs. time (sec) for a low-noise system and a high-noise system, for saccade sizes of 5, 10, 15, 30, 40, and 50 deg.]
Control policies and generating motor commands
[Block diagram: a goal selector (weighing costs and rewards) chooses the best movement, the one producing the most reward while minimizing motor costs; a motor command generator sends commands to the body + environment, producing a state change; the sensory system (proprioception, vision, audition) returns measured sensory consequences after a time delay; a forward model supplies predicted sensory consequences; integration of predicted and measured consequences yields the belief about the state of the body and world, which feeds the goal selector.]
The evolution of the control policies for the high-jump
Ethel Catherwood (Canada), gold medal winner, 1928 Olympics Cornelius Johnson (USA), gold medal winner, 1936 Olympics Dick Fosbury (USA), gold medal winner, 1968 Olympics