Chapter 7: Multivariable and Optimal Control
Time-Varying Optimal Control (deterministic systems)
LQ (Linear Quadratic) problem: the finite-time problem, solved using Lagrange multipliers.
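The finite-time LQ solution obtained with Lagrange multipliers reduces to a backward sweep. Below is a minimal Python/NumPy sketch of that backward Riccati recursion; the plant matrices, weights, and horizon are illustrative assumptions, not values taken from the slides.

```python
import numpy as np

# Finite-horizon LQ sketch: x(k+1) = A x(k) + B u(k),
# J = x(N)' S_N x(N) + sum_{k=0}^{N-1} [ x(k)' Q x(k) + u(k)' R u(k) ].
# The Lagrange-multiplier (sweep) solution gives a backward Riccati
# recursion for S(k) and a time-varying gain K(k), with u(k) = -K(k) x(k).

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # example plant (assumed)
B = np.array([[0.005],
              [0.1]])
Q = np.diag([1.0, 0.1])             # state weight (assumed)
R = np.array([[0.01]])              # control weight (assumed)
S_N = Q.copy()                      # terminal weight (assumed)
N = 50                              # horizon length

S = S_N
gains = []
for k in range(N - 1, -1, -1):      # sweep backward from k = N-1 to 0
    K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)   # K(k)
    S = Q + A.T @ S @ (A - B @ K)                       # S(k)
    gains.append(K)
gains.reverse()                     # gains[k] is K(k), k = 0..N-1

# Simulate the optimal closed loop from an initial state.
x = np.array([[1.0], [0.0]])
for k in range(N):
    u = -gains[k] @ x
    x = A @ x + B @ u
print("final state:", x.ravel())
```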
LQR (Linear Quadratic Regulator): the infinite-time problem. The resulting ARE (Algebraic Riccati Equation) has no analytic solution in most cases, so a numerical solution is required.
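As one way to obtain that numerical solution, here is a sketch (with assumed example matrices) that iterates the Riccati difference equation to a fixed point and cross-checks the result against SciPy's solve_discrete_are, assuming SciPy is available.

```python
import numpy as np
from scipy.linalg import solve_discrete_are   # assumes SciPy is available

# Infinite-horizon LQR: iterate the Riccati difference equation until it
# converges to the stabilizing solution P of the discrete ARE
#   P = A'PA - A'PB (R + B'PB)^{-1} B'PA + Q,
# then form the constant feedback gain K, with u(k) = -K x(k).

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])

P = Q.copy()
for _ in range(10000):
    P_next = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next

K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print("iterated P:\n", P)
print("SciPy    P:\n", solve_discrete_are(A, B, Q, R))   # should agree
print("LQR gain K:", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```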
Remark: use the reciprocal-root properties (p. 372).
Eigenvector Decomposition
In the eigenvector decomposition, the eigenvalues occur in reciprocal pairs: half lie inside the unit circle and half lie outside the unit circle.
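A sketch of this eigenvector-decomposition approach, assuming the standard symplectic-matrix formulation (and an invertible A so that the matrix can be written explicitly); the example data are mine. The eigenvectors belonging to the eigenvalues inside the unit circle give the steady-state Riccati solution.

```python
import numpy as np

# Solve the discrete ARE by eigenvector decomposition of the symplectic matrix
#   Z = [[A + G A^{-T} Q, -G A^{-T}],
#        [-A^{-T} Q,       A^{-T}]],   G = B R^{-1} B',
# whose eigenvalues occur in reciprocal pairs (lambda, 1/lambda).
# Stacking the eigenvectors of the n eigenvalues inside the unit circle as
# [X; L] gives the stabilizing solution P = L X^{-1}.

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])
n = A.shape[0]

G = B @ np.linalg.solve(R, B.T)          # B R^{-1} B'
A_iT = np.linalg.inv(A).T                # A^{-T} (A assumed invertible)
Z = np.block([[A + G @ A_iT @ Q, -G @ A_iT],
              [-A_iT @ Q,         A_iT]])

w, V = np.linalg.eig(Z)
stable = np.abs(w) < 1.0                 # the n eigenvalues inside the unit circle
Vs = V[:, stable]
X, Lam = Vs[:n, :], Vs[n:, :]
P = np.real(Lam @ np.linalg.inv(X))      # P = L X^{-1} (real for real data)

# Residual of the discrete ARE -- should be essentially zero.
res = A.T @ P @ A - P - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A) + Q
print("ARE residual norm:", np.linalg.norm(res))
print("closed-loop poles:", w[stable])   # their reciprocal partners lie outside
```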
Cost Equivalents
Least Squares Estimation: the measurement model relates a p×1 measurement vector and a p×1 measurement-error vector to an n×1 unknown vector through a p×n matrix.
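A small sketch of this setup, assuming the usual linear model y = H x + e with those dimensions; the symbol names y, H, x, e and the random test data are mine, not the slides'.

```python
import numpy as np

# Least squares: y (p x 1) = H (p x n) x (n x 1) + e (p x 1).
# The estimate minimizing ||y - H x||^2 is x_hat = (H'H)^{-1} H'y.

rng = np.random.default_rng(0)
p, n = 20, 3
x_true = np.array([1.0, -2.0, 0.5])            # n x 1 unknown vector
H = rng.normal(size=(p, n))                    # p x n matrix
e = 0.1 * rng.normal(size=p)                   # p x 1 measurement-error vector
y = H @ x_true + e                             # p x 1 measurement vector

x_hat_normal = np.linalg.solve(H.T @ H, H.T @ y)       # normal equations
x_hat_lstsq, *_ = np.linalg.lstsq(H, y, rcond=None)    # numerically preferred
print("x_hat (normal eq.):", x_hat_normal)
print("x_hat (lstsq)     :", x_hat_lstsq)
```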
[Figure: sales fit and prediction; Sales ($1000) versus Months.]
Weighted Least Squares
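A sketch of the weighted least-squares estimate under the same assumed model y = H x + e, using the common choice of weighting each measurement by the inverse of its error variance (the slides' exact weighting matrix is not shown here).

```python
import numpy as np

# Weighted least squares: minimize (y - H x)' W (y - H x), giving
# x_hat = (H' W H)^{-1} H' W y.  A common choice is W = R^{-1}, the inverse
# of the measurement-error covariance, so accurate measurements count more.

rng = np.random.default_rng(1)
p, n = 20, 3
x_true = np.array([1.0, -2.0, 0.5])
H = rng.normal(size=(p, n))
sigma = np.where(np.arange(p) < 10, 0.05, 0.5)    # first half much more accurate
y = H @ x_true + sigma * rng.normal(size=p)

W = np.diag(1.0 / sigma**2)                       # weight = inverse variance
x_wls = np.linalg.solve(H.T @ W @ H, H.T @ W @ y)
x_ols = np.linalg.solve(H.T @ H, H.T @ y)         # unweighted, for comparison
print("weighted LS :", x_wls)
print("ordinary LS :", x_ols)
```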
Recursive Least Squares
The recursive update expresses the new estimate in terms of the old estimate and the covariance of the old estimate.
Sometimes the quantity to be inverted is a scalar, namely when only one new piece of information (one new measurement) is used at each step.
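A sketch of recursive least squares processing one scalar measurement at a time, so the quantity inverted at each step is indeed a scalar; the initialization, noise level, and test data are assumptions for illustration.

```python
import numpy as np

# Recursive least squares: after each new scalar measurement
#   y(k) = h(k)' x + e(k),
# update the estimate x_hat and its covariance P:
#   K = P h / (h' P h + r)              (scalar denominator here)
#   x_hat <- x_hat + K (y - h' x_hat)   new estimate = old estimate + gain * innovation
#   P     <- (I - K h') P               covariance of the new estimate
# With a large initial P and r the measurement-noise variance, the recursion
# essentially reproduces the batch least-squares estimate.

rng = np.random.default_rng(2)
n = 3
x_true = np.array([1.0, -2.0, 0.5])
r = 0.1**2                                   # measurement-noise variance (assumed)

x_hat = np.zeros(n)                          # old estimate (initial guess)
P = 1e6 * np.eye(n)                          # covariance of the old estimate

H_rows, ys = [], []
for k in range(200):
    h = rng.normal(size=n)                   # regressor for this measurement
    y = h @ x_true + 0.1 * rng.normal()      # one new scalar measurement
    K = P @ h / (h @ P @ h + r)              # gain (scalar division)
    x_hat = x_hat + K * (y - h @ x_hat)      # new estimate
    P = (np.eye(n) - np.outer(K, h)) @ P     # covariance of the new estimate
    H_rows.append(h); ys.append(y)

x_batch, *_ = np.linalg.lstsq(np.array(H_rows), np.array(ys), rcond=None)
print("recursive :", x_hat)
print("batch     :", x_batch)
```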
Stochastic Models of Disturbance
So far we have dealt with well-known, well-defined, ideal systems. In practice we must also account for:
- disturbance (process noise, load variation)
- measurement noise
[Figure: a random variable maps each sample point s (an event) in the sample space to a point on the real line (its range).]
[Figure: sample functions X(t, ω1), X(t, ω2), X(t, ω3) of a random process plotted versus t, together with the distribution functions F(X, t1), F(X, t2), F(X, t3) at times t1, t2, t3.]
A random process X(t) is a Gaussian process if, for any t1, …, tm and any m, the random vector (X(t1), …, X(tm)) has a Gaussian distribution. A Gaussian process is completely characterized by its mean and its autocorrelation. If a Gaussian process X(t) is w.s.s., then it is strictly stationary.
Assume that X(t) is wide-sense stationary. The Fourier transform of its autocorrelation is called the spectral density matrix.
Remark: A random process X(t) is a Markov process if, for all t1 < t2 < ··· < tm, all m, and all x1, …, xm, P[X(tm) ≤ xm | X(tm-1) = xm-1, …, X(t1) = x1] = P[X(tm) ≤ xm | X(tm-1) = xm-1]. A random process X(t) is an independent process if the random vectors X(t1), …, X(tm) are mutually independent for all t1 < t2 < ··· < tm and all m.
Note: Andrei Andreevich Markov (1856–1922)
ex) Consider a scalar random process X(t), t ≥ 0, defined in terms of X(0), a zero-mean Gaussian random variable with a given variance. [Figure: sample functions X(t) versus t.]
A random process w(t), t ≥ 0, is a white process if it is zero mean, w(t1) and w(t2) are independent for all t1 ≠ t2, and E[w(t1) w(t2)'] = Q(t1) δ(t1 - t2), where Q(t1) is the intensity.
Remarks:
i) If Q(t) is constant, i.e. Q(t) = Q, then w(t) is w.s.s. and its spectral density is constant (equal to Q at all frequencies).
ii) A white process is not a mathematically rigorous random process.
iii) A sample function of a white-noise process can be thought of as a superposition of a large number of independent pulses of brief duration and random amplitude.
iv) If the pulse amplitudes are Gaussian, then w(t) is a Gaussian white noise.
v) White noise is the 'derivative' of a Wiener process (Brownian motion).
ex) Similarly, consider a discrete-time process {X(k)} driven by the white process {v(k)}. Since {v(k)} is a white process, {X(k)} is a random process; X(0) must be specified. (White process → Wiener process: accumulating white noise produces a Wiener process, consistent with remark v above.)
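A sketch of this construction, assuming the driven process is the standard random walk X(k+1) = X(k) + v(k) with X(0) = 0 (the slide's exact recursion and initial condition are not reproduced here): summing discrete white noise gives a Wiener-like process whose variance grows linearly with k.

```python
import numpy as np

# Discrete white process {v(k)}: zero mean, uncorrelated in time, variance q.
# Accumulating it, X(k+1) = X(k) + v(k) with X(0) = 0, gives a random walk
# (a discrete-time Wiener process): E[X(k)] = 0 and Var[X(k)] = q * k.

rng = np.random.default_rng(3)
q = 1.0                       # intensity (variance) of the white process (assumed)
N = 1000                      # time steps
M = 5000                      # Monte Carlo realizations

v = np.sqrt(q) * rng.normal(size=(M, N))   # white-noise samples
X = np.cumsum(v, axis=1)                   # X(k) = v(0) + ... + v(k-1)

k = np.array([10, 100, 1000])
print("sample variance of X(k):", X[:, k - 1].var(axis=0))
print("theory q*k             :", q * k)
```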
Remarks: A stable linear time-invariant discrete-time system has a pulse transfer function H(z); its output is the convolution y(k) = Σ_j h(k - j) u(j). Suppose that the input u(k) is w.s.s. with spectral density matrix Φ_u(ω). Then the output y(k) is w.s.s., and its spectral density is Φ_y(ω) = H(e^{iω}) Φ_u(ω) H'(e^{-iω}); in the scalar case, Φ_y(ω) = |H(e^{iω})|² Φ_u(ω).
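A numeric check of the scalar relation, assuming SciPy is available and using an illustrative first-order filter H(z) = 1/(1 - 0.9 z^-1): white noise (flat input spectrum) is filtered, and the shape of the estimated output spectrum is compared with |H(e^{iω})|².

```python
import numpy as np
from scipy import signal   # assumes SciPy is available

# Scalar check of  Phi_y(w) = |H(e^{jw})|^2 Phi_u(w)  for an LTI system
# y(k) = sum_j h(k-j) u(j).  Illustrative filter: H(z) = 1 / (1 - 0.9 z^{-1}).

rng = np.random.default_rng(4)
q = 1.0                                    # white-noise intensity => flat Phi_u
u = np.sqrt(q) * rng.normal(size=200_000)  # w.s.s. white input
b, a = [1.0], [1.0, -0.9]                  # filter coefficients
y = signal.lfilter(b, a, u)                # colored output

# Estimated output spectrum (Welch) vs. the theoretical shape |H|^2.
f, Pyy = signal.welch(y, fs=1.0, nperseg=4096)
w, H = signal.freqz(b, a, worN=2 * np.pi * f)   # H(e^{jw}) at the same frequencies

# Because the input spectrum is flat, the output spectrum's shape is |H|^2;
# comparing ratios between two frequency bins avoids scaling conventions.
i, j = 5, 500
print("Welch estimate  Pyy(fi)/Pyy(fj):", Pyy[i] / Pyy[j])
print("theory |H(fi)|^2/|H(fj)|^2     :", np.abs(H[i])**2 / np.abs(H[j])**2)
```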
"Everything" can be generated by filtering white noise: passing a white process w(k) with intensity I through an L.T.I. system produces colored noise y(k).
Note: Norbert Wiener (1894–1964); the Wiener filter solved the stationary I/O case in 1949.
LQ + Kalman filter (analogous to state feedback + an observer designed by pole placement): the LQG (Linear Quadratic Gaussian) problem, for partially informed (incompletely and noisily measured) states.
Problem Formulation: given y(0), y(1), …, y(k), determine the optimal estimate of the state such that the estimation-error covariance P(k), an n×n positive definite matrix, is minimized, i.e., the variance of the error is minimum.
Remarks: i) P(k) is minimum if and only if a' P(k) a is minimum, where a is an arbitrary vector. ii) P(k) being minimum implies that the trace of P(k) is minimum.
Let the prediction-type Kalman filter (a predictor-type, one-step-ahead estimator) have the form x̂(k+1) = Φ x̂(k) + Γ u(k) + L(k) [y(k) - C x̂(k)], where L(k) is a time-varying gain, y(k) is the measured output, and C x̂(k) is the output of the model. Define the reconstruction error as x̃(k) = x(k) - x̂(k).
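A sketch of this predictor-type filter in Python, with assumed system matrices Φ, Γ, C and assumed noise covariances (the names are mine); the gain is computed from the usual covariance recursion, which is revisited on its own in a later sketch below.

```python
import numpy as np

# Prediction-type (one-step-ahead) Kalman filter sketch for
#   x(k+1) = Phi x(k) + Gam u(k) + w(k),   y(k) = C x(k) + v(k),
# with white noises w, v of covariances Rw, Rv.  The filter is
#   xh(k+1) = Phi xh(k) + Gam u(k) + L(k) [y(k) - C xh(k)],
# where y(k) - C xh(k) is the output prediction (reconstruction) error.

rng = np.random.default_rng(5)
Phi = np.array([[1.0, 0.1],
                [0.0, 1.0]])
Gam = np.array([[0.005],
                [0.1]])
C   = np.array([[1.0, 0.0]])
Rw  = 0.01 * np.eye(2)        # process-noise covariance (assumed)
Rv  = np.array([[0.1]])       # measurement-noise covariance (assumed)

x  = np.array([1.0, 0.0])     # true state
xh = np.zeros(2)              # x_hat(0), the a priori estimate
P  = np.eye(2)                # covariance of the initial estimation error

for k in range(200):
    u = np.array([0.0])                                   # no control input here
    y = C @ x + rng.multivariate_normal(np.zeros(1), Rv)  # measured output
    S = C @ P @ C.T + Rv
    L = Phi @ P @ C.T @ np.linalg.inv(S)                  # time-varying gain L(k)
    xh = Phi @ xh + Gam @ u + L @ (y - C @ xh)            # estimate update
    P  = Phi @ P @ Phi.T + Rw - L @ S @ L.T               # error-covariance update
    x  = Phi @ x + Gam @ u + rng.multivariate_normal(np.zeros(2), Rw)  # true system

print("final estimation error  :", x - xh)
print("final error covariance P:\n", P)
```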
Note: Kalman and Bucy, filtering for the time-varying state-space case, 1960.
Remarks:
i) The a priori information consists of the initial state estimate and its covariance.
ii) In the covariance recursion, the terms arise, respectively, from the system dynamics, from the disturbance w(k), and (the last term) from the newly measured information.
iii) P(k) does not depend on the observations, so the gain can be precomputed in forward time and stored.
iv) Steady-state Kalman filter: all quantities become constant.
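A sketch of remarks iii) and iv): the covariance recursion uses no measurement data, so the entire gain sequence L(k) can be computed ahead of time, and it converges to the constant steady-state Kalman gain. The matrices repeat the assumed example used above.

```python
import numpy as np

# The covariance recursion
#   P(k+1) = Phi P(k) Phi' + Rw - Phi P(k) C' (C P(k) C' + Rv)^{-1} C P(k) Phi'
# uses no measurement data, so P(k) and the gain
#   L(k) = Phi P(k) C' (C P(k) C' + Rv)^{-1}
# can be precomputed and stored; L(k) settles to the steady-state Kalman gain.

Phi = np.array([[1.0, 0.1],
                [0.0, 1.0]])
C   = np.array([[1.0, 0.0]])
Rw  = 0.01 * np.eye(2)        # a priori data: P(0), Rw, Rv (assumed values)
Rv  = np.array([[0.1]])
P   = np.eye(2)               # P(0)

gains = []
for k in range(100):
    S = C @ P @ C.T + Rv
    L = Phi @ P @ C.T @ np.linalg.inv(S)
    gains.append(L.ravel())
    P = Phi @ P @ Phi.T + Rw - L @ S @ L.T

print("L(0)  :", gains[0])
print("L(5)  :", gains[5])
print("L(99) :", gains[-1])          # transient has died out: steady-state gain
print("|L(99)-L(98)| =", np.max(np.abs(gains[-1] - gains[-2])))
```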
[Figure: the optimal gain L(k) versus time, showing an initial transient that settles to a steady-state value; curves labeled L = 0.01 and L = 0.05 are shown for comparison.]
Frequency-Domain Properties of the Kalman Filter (colored noise)
Remarks:
i) It gives an idea of how the Kalman filter attenuates different frequencies.
ii) The Kalman filter has zeros at the poles of the noise model (it acts as a notch filter).
Smoothing: estimating the Wednesday temperature from temperature measurements taken on Monday, Tuesday, and Thursday.
Filtering: estimating the Wednesday temperature from temperature measurements taken on Monday, Tuesday, and Wednesday.
Prediction: estimating the Wednesday temperature from temperature measurements taken on Sunday, Monday, and Tuesday.
Stochastic LQ Control Problem
LQG Control Problem
Stationary LQG Control Problem
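A sketch of a stationary LQG controller in the certainty-equivalence form (a steady-state LQ gain acting on the steady-state Kalman estimate), with assumed plant matrices, weights, and noise covariances, and SciPy's ARE solver used for both Riccati equations.

```python
import numpy as np
from scipy.linalg import solve_discrete_are   # assumes SciPy is available

# Stationary LQG sketch: u(k) = -K xh(k), with K the steady-state LQ gain and
# xh(k) produced by the steady-state (predictor-type) Kalman filter.

Phi = np.array([[1.0, 0.1],
                [0.0, 1.0]])
Gam = np.array([[0.005],
                [0.1]])
C   = np.array([[1.0, 0.0]])
Q, R   = np.diag([1.0, 0.1]), np.array([[0.01]])   # LQ weights (assumed)
Rw, Rv = 0.01 * np.eye(2), np.array([[0.1]])       # noise covariances (assumed)

# Control Riccati equation -> state-feedback gain K.
S = solve_discrete_are(Phi, Gam, Q, R)
K = np.linalg.solve(R + Gam.T @ S @ Gam, Gam.T @ S @ Phi)

# Filter Riccati equation -> steady-state Kalman gain L (predictor form).
P = solve_discrete_are(Phi.T, C.T, Rw, Rv)
L = Phi @ P @ C.T @ np.linalg.inv(C @ P @ C.T + Rv)

# Closed-loop simulation of the noisy plant under the LQG controller.
rng = np.random.default_rng(6)
x, xh = np.array([1.0, 0.0]), np.zeros(2)
cost = 0.0
for k in range(500):
    u  = -K @ xh
    y  = C @ x + rng.multivariate_normal(np.zeros(1), Rv)
    cost += x @ Q @ x + u @ R @ u
    xh = Phi @ xh + Gam @ u + L @ (y - C @ xh)
    x  = Phi @ x + Gam @ u + rng.multivariate_normal(np.zeros(2), Rw)

print("LQ gain K     :", K)
print("Kalman gain L :", L.ravel())
print("average cost  :", cost / 500)
```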
Control and Estimation Duality
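A small numeric illustration of the duality: the filter Riccati equation for (Φ, C, Rw, Rv) is the control Riccati equation written for the transposed data (Φ', C', Rw, Rv), so a single solver serves both problems. The matrices repeat the assumed example, and the helper name dare is mine.

```python
import numpy as np

# Control <-> estimation duality: iterate the control Riccati recursion on the
# transposed data (Phi', C', Rw, Rv) to obtain the filter covariance P, then
# check that P satisfies the filter ARE written in its usual estimation form.

def dare(A, B, Q, R, iters=20000, tol=1e-12):
    """Fixed-point iteration of the control Riccati difference equation."""
    P = Q.copy()
    for _ in range(iters):
        Pn = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        if np.max(np.abs(Pn - P)) < tol:
            return Pn
        P = Pn
    return P

Phi = np.array([[1.0, 0.1],
                [0.0, 1.0]])
C   = np.array([[1.0, 0.0]])
Rw  = 0.01 * np.eye(2)
Rv  = np.array([[0.1]])

P = dare(Phi.T, C.T, Rw, Rv)        # "control" solution of the dual problem

# Filter ARE: P = Phi P Phi' + Rw - Phi P C'(C P C' + Rv)^{-1} C P Phi'
resid = Phi @ P @ Phi.T + Rw - Phi @ P @ C.T @ np.linalg.solve(C @ P @ C.T + Rv, C @ P @ Phi.T) - P
print("filter-ARE residual norm:", np.linalg.norm(resid))
```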