1
CHEE825/436 - Module 4J. McLellan - Fall 20051 Process and Disturbance Models
2
CHEE825/436 - Module 4J. McLellan - Fall 20052 Outline Types of Models Model Estimation Methods Identifying Model Structure Model Diagnostics
3
CHEE825/436 - Module 4J. McLellan - Fall 20053 The Task of Dynamic Model Building partitioning process data into a deterministic component (the process) and a stochastic component (the disturbance) processdisturbance ? time series model transfer function model
4
CHEE825/436 - Module 4J. McLellan - Fall 20054 Process Model Types non-parametric –impulse response –step response –spectrum parametric –transfer function models »numerator »denominator –difference equation models »equivalent to transfer function models with backshift operator } technically “parametric” when in finite form (e.g., FIR)
5
CHEE825/436 - Module 4J. McLellan - Fall 20055 Impulse and Step Process Models described as a set of weights: impulse model step model Note - typically treat u(t-N) as a step from 0 - i.e., u(t-N) = u(t-N)
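The weight form of these models can be sketched in code. This is an illustrative example (not from the course notes): a finite impulse response model y(t) = h_1 u(t-1) + ... + h_N u(t-N), with the step-response weights obtained as cumulative sums of the impulse weights. The weight values are made up for illustration.

```python
# Illustrative sketch: FIR (finite impulse response) model
#   y(t) = sum_{k=1..N} h_k * u(t-k)
# Inputs before time 0 are assumed to be zero.
def fir_output(h, u):
    y = []
    for t in range(len(u)):
        acc = 0.0
        for k, hk in enumerate(h, start=1):  # weight h_k multiplies u(t-k)
            if t - k >= 0:
                acc += hk * u[t - k]
        y.append(acc)
    return y

# Step-response weights are the cumulative sums of the impulse weights.
def step_weights(h):
    s, out = 0.0, []
    for hk in h:
        s += hk
        out.append(s)
    return out

h = [0.5, 0.25, 0.125]            # hypothetical impulse weights
print(step_weights(h))            # [0.5, 0.75, 0.875]
print(fir_output(h, [1.0] * 5))   # unit step input: [0.0, 0.5, 0.75, 0.875, 0.875]
```

For a step input, the output simply traces out the step weights, which is why the two non-parametric forms carry the same information.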
Slide 6: Process Spectrum Model
- represented as a set of frequency response values, or graphically (amplitude ratio vs. frequency in rad/s)

Slide 7: Process Transfer Function Models
- numerator and denominator dynamics plus time delay: zeros, poles, time delay
- an extra 1-step delay is introduced by the zero-order hold and sampling; f is the pure time delay
- q^-1 is the backward shift operator: q^-1 y(t) = y(t-1)

Slide 8: Model Types for Disturbances
- non-parametric
  - "impulse response" - infinite moving average
  - spectrum
- parametric
  - "transfer function" form: autoregressive (denominator), moving average (numerator)

Slide 9: ARIMA Models for Disturbances
- AutoRegressive Integrated Moving Average model: autoregressive component, moving average component, random shock
- time series notation: an ARIMA(p,d,q) model has
  - pth-order denominator (AR)
  - qth-order numerator (MA)
  - d integrating poles (on the unit circle)
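As a minimal illustrative sketch (parameter values are made up, not from the notes), an ARIMA(1,1,1) disturbance can be simulated by generating an ARMA(1,1) sequence d(t) = phi d(t-1) + a(t) + theta a(t-1) and then integrating it once, which is exactly the "one pole on the unit circle" in the slide's notation:

```python
import random

# Illustrative sketch: ARMA(1,1) disturbance, then one integration
# (1 - q^-1) w(t) = d(t), giving an ARIMA(1,1,1) non-stationary disturbance.
def simulate_arima(n, phi=0.7, theta=0.4, sigma=1.0, seed=0):
    rng = random.Random(seed)
    d, w = [], []
    a_prev, d_prev, w_prev = 0.0, 0.0, 0.0
    for _ in range(n):
        a = rng.gauss(0.0, sigma)            # random shock a(t)
        d_t = phi * d_prev + a + theta * a_prev
        w_t = w_prev + d_t                   # running sum = integrating pole
        d.append(d_t); w.append(w_t)
        a_prev, d_prev, w_prev = a, d_t, w_t
    return d, w

d, w = simulate_arima(300)   # d: stationary ARMA part, w: meandering ARIMA output
```

Plotting w against d shows the characteristic wandering, non-zero-mean look of an integrated disturbance discussed later under non-stationarity.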
Slide 10: ARMA Models for Disturbances
- autoregressive component, moving average component, random shock
- simply an ARIMA model with no integrating component

Slide 11: Typical Model Combinations
- model predictive control: impulse/step process model + ARMA disturbance model
  - typically a step disturbance model, which can be considered as a pure integrator driven by a single pulse
- single-loop control: transfer function process model + ARMA disturbance model

Slide 12: Classification of Models in Identification (per Ljung's terminology)
- AutoRegressive with eXogenous inputs (ARX)
- Output Error (OE)
- AutoRegressive Moving Average with eXogenous inputs (ARMAX)
- Box-Jenkins (BJ)

Slide 13: ARX Models
A(q^-1) y(t) = B(q^-1) u(t-f) + e(t)
- u(t) is the exogenous input
- same autoregressive component for process and disturbance
- numerator term for the process, no moving average in the disturbance
- physical interpretation: the disturbance passes through the entire process dynamics (e.g., a feed disturbance)
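The ARX difference-equation form can be sketched directly. This is an illustrative simulation (parameter values are arbitrary): an ARX(1,1) model y(t) = -a1 y(t-1) + b1 u(t-1) + e(t), in which the shock e(t) passes through the same denominator dynamics 1/A(q^-1) as the input:

```python
import random

# Illustrative sketch: ARX(1,1) difference equation
#   y(t) + a1*y(t-1) = b1*u(t-1) + e(t)
# Process and disturbance share the denominator A(q^-1).
def simulate_arx(u, a1=-0.8, b1=0.5, sigma=0.1, seed=1):
    rng = random.Random(seed)
    y, y_prev = [], 0.0
    for t in range(len(u)):
        u_prev = u[t - 1] if t >= 1 else 0.0
        e = rng.gauss(0.0, sigma)
        y_t = -a1 * y_prev + b1 * u_prev + e
        y.append(y_t)
        y_prev = y_t
    return y

y = simulate_arx([1.0] * 200)   # unit step input
# steady-state mean is near the gain b1/(1+a1) = 0.5/0.2 = 2.5
```

The simulated step response settles near the steady-state gain b1/(1+a1), with the ARX disturbance superimposed.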
Slide 14: Output Error Models
y(t) = [B(q^-1)/F(q^-1)] u(t-f) + e(t)
- no disturbance dynamics
- numerator and denominator process dynamics
- physical interpretation: process subject to a white noise disturbance (is this ever true?)

Slide 15: ARMAX Models
A(q^-1) y(t) = B(q^-1) u(t-f) + C(q^-1) e(t)
- process and disturbance have the same denominator dynamics
- the disturbance has moving average dynamics
- physical interpretation: a disturbance passing through the process, entering at a point away from the input
  - except if C(q^-1) = B(q^-1)

Slide 16: Box-Jenkins Model
y(t) = [B(q^-1)/F(q^-1)] u(t-f) + [C(q^-1)/D(q^-1)] e(t)
- input and disturbance can have different dynamics
- an autoregressive component A(q^-1) can also be included to represent dynamic elements common to both process and disturbance
- physical interpretation: the disturbance passes through other dynamic elements before entering the process

Slide 17: Range of Model Types
least general -> most general: Output Error, ARX, ARMAX, Box-Jenkins
Slide 18: Outline
- Types of Models
- Model Estimation Methods
- Identifying Model Structure
- Model Diagnostics

Slide 19: Model Estimation - General Philosophy
- form a "loss function" which is minimized to obtain the "best" parameter estimates
- "loss" can be considered as missed trend or information
  - e.g., in linear regression, loss represents left-over trends in the residuals that could still be explained by a model
  - if we picked up all trend, only the random noise e(t) would be left
  - additional trends drive up the variation of the residuals
  - the loss function is the sum of squares of the residuals (related to the variance of the residuals)

Slide 20: Linear Regression - Types of Loss Functions
- first, consider the linear regression model
- least squares estimation criterion: minimize the sum of squared prediction errors at each point "i"

Slide 21: Linear Regression - Types of Loss Functions
- the model describes how the mean of Y varies, and the variance of Y equals the noise variance, because the random component of Y comes from the additive noise "e"
- the probability density function at point "i" is the normal density evaluated at the noise e_i at point "i"
Slide 22: Linear Regression - Types of Loss Functions
- we can write the joint probability density function for all observations in the data set

Slide 23: Linear Regression - Types of Loss Functions
- given parameters, the joint density determines the probability that a given range of observations will occur
- what if we have observations but don't know the parameters?
  - assume that we have the most common, or "likely", observations - i.e., observations with the greatest probability of occurrence
  - find the parameter values that maximize the probability of the observed values occurring
  - the joint density function becomes a "likelihood function"
  - the parameter estimates are "maximum likelihood estimates"

Slide 24: Linear Regression - Types of Loss Functions
- maximum likelihood parameter estimation criterion: choose the parameters that maximize the likelihood of the observed data

Slide 25: Linear Regression - Types of Loss Functions
- given the form of the likelihood function, maximizing it is equivalent to minimizing the argument of the exponential, i.e., the sum of squared errors
- for the linear regression case, the maximum likelihood parameter estimates are equivalent to the least squares parameter estimates

Slide 26: Linear Regression - Types of Loss Functions
- least squares: the loss function is the sum of squared residuals = sum of squared prediction errors
- maximum likelihood: the loss function is the likelihood function, which in the linear regression case is equivalent to the sum of squared prediction errors
- prediction error = observation - predicted value

Slide 27: Loss Functions for Identification
- least squares: "minimize the sum of squared prediction errors"
- the loss function is the mean of the squared prediction errors over the data record, where N is the number of points in the data record
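As a tiny illustrative sketch, the identification loss function V_N = (1/N) * sum of squared one-step prediction errors is just:

```python
# Illustrative sketch: least-squares loss over an N-point data record,
#   V_N = (1/N) * sum_t (y(t) - y_hat(t))^2
def loss(y, y_hat):
    assert len(y) == len(y_hat)
    n = len(y)
    return sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat)) / n

print(loss([1.0, 2.0, 3.0], [1.0, 1.5, 3.5]))  # (0 + 0.25 + 0.25)/3
```

All of the estimation methods below differ only in how the predictions y_hat(t) are generated from the model structure.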
Slide 28: Least Squares Identification Example
- given an ARX(1) process+disturbance model, the loss function can be written as the sum of squared one-step prediction errors

Slide 29: Least Squares Identification Example
- in matrix form, Y = X*theta + E, and the sum of squared prediction errors is (Y - X*theta)'(Y - X*theta)

Slide 30: Least Squares Identification Example
- the least squares parameter estimates are theta_hat = (X'X)^-1 X'Y
- note that the disturbance structure in the ARX model is such that the disturbance contribution appears in the formulation as a white noise additive error -> it satisfies the assumptions for this formulation
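The ARX(1) least-squares estimate can be sketched without any matrix library by solving the 2x2 normal equations (X'X)theta = X'Y by hand. This is an illustrative example, not from the course notes; the true parameter values and the PRBS-like input are made up to show that the estimates are recovered when the additive error really is white:

```python
import random

# Illustrative sketch: least-squares ARX(1,1) estimation,
#   y(t) = -a1*y(t-1) + b1*u(t-1) + e(t),
# with regressors x(t) = [-y(t-1), u(t-1)].
def arx1_least_squares(y, u):
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(1, len(y)):
        x1, x2 = -y[t - 1], u[t - 1]
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        r1 += x1 * y[t]; r2 += x2 * y[t]
    det = s11 * s22 - s12 * s12        # solve the 2x2 normal equations
    a1 = (s22 * r1 - s12 * r2) / det
    b1 = (s11 * r2 - s12 * r1) / det
    return a1, b1

# Generate data from a known ARX model (a1 = -0.8, b1 = 0.5) and recover it.
rng = random.Random(2)
u = [rng.choice([-1.0, 1.0]) for _ in range(2000)]   # PRBS-like input
y, yp = [], 0.0
for t in range(2000):
    e = rng.gauss(0.0, 0.05)                         # white additive error
    yt = 0.8 * yp + 0.5 * (u[t - 1] if t else 0.0) + e
    y.append(yt); yp = yt
print(arx1_least_squares(y, u))   # close to (-0.8, 0.5)
```

Because the shock enters additively and is white, this estimator is consistent here; per the next slides, that ceases to hold for other model structures.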
Slide 31: Least Squares Identification
- ARX models fit into this framework
- Output Error models, written in difference equation form, violate the least squares assumption of independent errors

Slide 32: Least Squares Identification
- any process+disturbance model other than the ARX model will not satisfy the structural requirements
- implications:
  - the estimators are not consistent - they don't asymptotically tend to the true parameter values
  - potential for bias

Slide 33: Prediction Error Methods
- choose parameter estimates to minimize some function of the prediction errors - for example, for the Output Error model
- use a numerical optimization routine to obtain the "best" estimates

Slide 34: Prediction Error Methods
- AR(1) example: use the model to predict one step ahead given past values - the "one step ahead predictor"
- this predictor is optimal when e(t) is normally distributed, and can be obtained by taking the conditional expectation of y(t) given information up to and including time t-1
- e(t) disappears because it has zero mean and adds no information on average

Slide 35: Prediction Error Methods
- the prediction error is the difference between the observation and the one step ahead prediction
- we could obtain parameter estimates by minimizing the sum of squared prediction errors - the same as the least squares estimates for this ARX example
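For the AR(1) case the one-step-ahead predictor and its prediction errors are short enough to sketch directly (an illustrative example; the phi value below is arbitrary):

```python
# Illustrative sketch: one-step-ahead prediction for an AR(1) model
#   y(t) = phi*y(t-1) + e(t).
# Since e(t) has zero mean, the predictor is y_hat(t|t-1) = phi*y(t-1),
# and the prediction error is eps(t) = y(t) - y_hat(t|t-1).
def one_step_errors(y, phi):
    eps = []
    for t in range(1, len(y)):
        y_hat = phi * y[t - 1]
        eps.append(y[t] - y_hat)
    return eps

eps = one_step_errors([0.0, 1.0, 0.9, 0.6], phi=0.7)
# eps[k] = y(k+1) - 0.7*y(k); summing eps[t]^2 gives the PEM loss
```

Minimizing the sum of squared eps over phi reproduces the least-squares estimate, as the slide states.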
Slide 36: Prediction Error Methods
- what happens if we have an ARMAX(1,1) model? The one step ahead predictor now involves e(t-1)
- but what is e(t-1)? Estimate it using the measured y(t-1) and the prediction of y(t-1)

Slide 37: Prediction Error Methods
- the estimate of e(t-1) depends on e(t-2), which depends on e(t-3), and so forth
  - eventually we end up with dependence on e(0), which is typically assumed to be zero
  - these are "conditional" estimates - conditional on assumed initial values
  - the problem can also be formulated to avoid conditional estimates
  - the impact is typically negligible for large data sets
- during computation it isn't necessary to solve recursively all the way back to the initial condition - use the previous prediction to estimate the previous prediction error
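This recursion can be sketched as code. The following is an illustrative example (model orders and parameter values are arbitrary): for an ARMAX(1,1) model y(t) = -a1 y(t-1) + b1 u(t-1) + e(t) + c1 e(t-1), the shocks are estimated forward in time, conditional on the usual assumption e(0) = 0, using each previous prediction error in place of the previous shock:

```python
import random

# Illustrative sketch: conditional shock estimation for ARMAX(1,1),
# assuming e(0) = 0 and using the previous prediction error as e(t-1).
def armax_shocks(y, u, a1, b1, c1):
    e_prev = y_prev = u_prev = 0.0
    e_list = []
    for t in range(len(y)):
        y_hat = -a1 * y_prev + b1 * u_prev + c1 * e_prev  # one-step predictor
        e_t = y[t] - y_hat                                # estimated shock
        e_list.append(e_t)
        e_prev, y_prev, u_prev = e_t, y[t], u[t]
    return e_list

# Round trip: generate data from known shocks, then recover them.
rng = random.Random(3)
u = [rng.choice([-1.0, 1.0]) for _ in range(50)]
e_true, y = [], []
e_prev = y_prev = u_prev = 0.0
for t in range(50):
    e = rng.gauss(0.0, 1.0)
    yt = 0.6 * y_prev + 0.3 * u_prev + e + 0.4 * e_prev   # a1=-0.6, b1=0.3, c1=0.4
    e_true.append(e); y.append(yt)
    e_prev, y_prev, u_prev = e, yt, u[t]
e_hat = armax_shocks(y, u, a1=-0.6, b1=0.3, c1=0.4)
print(max(abs(a - b) for a, b in zip(e_true, e_hat)))  # tiny: recovery up to rounding
```

With the true parameters and the correct initial condition the shocks are recovered essentially exactly; with trial parameters inside an optimizer, the same recursion supplies the prediction errors for the loss function.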
Slide 38: Prediction Error Methods - Formulation for the General Case
- given a process plus disturbance model, we can rearrange it so that the one-step prediction is expressed in terms of past inputs, outputs and shocks
- the random shocks are estimated from past prediction errors

Slide 39: Prediction Error Methods
- putting these expressions together yields the one step ahead predictor
- the prediction error for use in the estimation loss function is the difference between the observation and this prediction

Slide 40: Prediction Error Methods
- for a general ARMAX model, getting ready for the prediction, we obtain the predictor by isolating the shock term

Slide 41: Prediction Error Methods
- the ability to estimate the random shocks depends on the ability to invert C(q^-1)
  - invertibility was discussed under moving average disturbances
  - it is the ability to express the shocks in terms of present and past outputs - conversion to an infinite autoregressive sum
- the moving average parameters appear in the denominator of the prediction
  - the model is nonlinear in the moving average parameters, and conditionally linear in the others

Slide 42: Likelihood Function Methods
- conditional likelihood function
  - assume initial conditions for the outputs and random shocks
  - e.g., for ARX(1), a value for y(0); for ARMAX(1,1), values for y(0) and e(0)
- general argument: form the joint distribution of the shocks (normally distributed, zero mean, known variance) over all times, and find the parameter values that maximize the likelihood

Slide 43: Likelihood Function Methods
- exact likelihood function: we can also form an exact likelihood function that includes the initial conditions
  - the maximum likelihood procedure then estimates the parameters AND the initial conditions
  - the exact likelihood function is more complex
- in either case, a numerical optimization procedure is used to solve for the maximum likelihood estimates

Slide 44: Likelihood Function Methods
- final comment: derivation of the likelihood function requires convergence of the moving average and autoregressive elements
  - moving average -> invertibility
  - autoregressive -> stability
- example: a Box-Jenkins model can be rearranged to yield the random shock (inverted MA component, inverted AR component)
Slide 45: Outline
- Types of Models
- Model Estimation Methods
- Identifying Model Structure
- Model Diagnostics

Slide 46: Model-Building Strategy
- graphical pre-screening
- select initial model structure
- estimate parameters
- examine model diagnostics
- examine structural diagnostics
- validate the model using an additional data set
(modify the model and re-estimate as required)

Slide 47: Example - Debutanizer
- objective: fit a transfer function + disturbance model describing changes in bottoms RVP in response to changes in internal reflux
- data: step data; slow PRBS (switch down, switch up, switch down)

Slide 48: Graphical Pre-Screening
- examine time traces of outputs, inputs, secondary variables
  - are there any outliers or major shifts in operation?
  - could there be a model in this data?
- engineering assessment: should there be a model in this data?
Slide 49: Selecting Initial Model Structure
- examine auto- and cross-correlations of output and input - look for autoregressive and moving average components
- examine the spectrum of the output - indication of process order:
  - first-order
  - second-order underdamped - resonance
  - second or higher order overdamped

Slide 50: Selecting Initial Model Structure
- examine the correlation estimate of the impulse or step response
  - available if the input is not a step
  - what order is the process? (1st order, 2nd order over/underdamped)
  - size of the time delay

Slide 51: Selecting Initial Model Structure - Time Delays
- for a low frequency input signal (e.g., a few steps or a filtered PRBS), examine the transient response for delay
- for pre-filtered data, examine cross-correlation plots - where is the first non-zero cross-correlation?

Slide 52: Debutanizer Example
- step response indicates: settling time ~100 min, potentially some time delay, positive gain, 1st order or overdamped higher-order
- correlation estimate of the step response indicates: time delay of ~4-5 min, overdamped higher-order
Slide 53: Debutanizer Example - PRBS Test
[figure: input and output signals vs. time]

Slide 54: Debutanizer Example - Step Response Test
[figure: input and output signals vs. time]

Slide 55: Debutanizer Example - Correlation Step Response Estimate
[figure: estimated step response vs. time]

Slide 56: Debutanizer Example
- process spectrum suggests higher-order
- disturbance spectrum: cut-off behaviour suggests an AR type of disturbance
- initial model candidates:
  - ARX with delay of 4 or 5
  - ARMAX
  - Box-Jenkins
  - NOT Output Error - the disturbance isn't white

Slide 57: Debutanizer Example - Process Spectrum Plot
[figure: amplitude and phase vs. frequency (rad/s)]

Slide 58: Debutanizer Example - Disturbance Spectrum
[figure: power spectrum vs. frequency (rad/s)]

Slide 59: Additional Initial Selection Tests
Slide 60: Singularity Test
- form the data vector of lagged values (dimension s)
- the covariance matrix of this vector will be singular if s > model order, non-singular if s <= model order
- notes:
  1. the test was developed for a deterministic model - results are exact in that case
  2. the test is approximate when random shocks enter the process - results will depend on the signal-to-noise ratio

Slide 61: Pre-Filtering
- if the input is not white noise, the cross-correlation does not show the process structure clearly - autocorrelation in u(t) complicates the structure
- solution: estimate a time series model for the input, and pre-filter using the inverse of this model
  - prefilter both input and output to ensure consistency
- now estimate cross-correlations between the filtered input and filtered output
  - sharp cut-off -> negligible denominator
  - gradual decline -> denominator dynamics

Slide 62: Pre-Filtering
- cross-correlation plots can also indicate the time delay - the first non-zero lag in the cross-correlation function
- note that differencing, which is used to treat non-stationary disturbances, is a form of pre-filtering (more on this later)

Slide 63: Outline
- Types of Models
- Model Estimation Methods
- Identifying Model Structure
- Model Diagnostics
Slide 64: Model Diagnostics
- analyze residuals - look for unmodelled trends:
  - auto-correlation
  - cross-correlation with inputs
  - spectrum - should be flat
- assess the size of the residual standard error
- wet towel analogy: wring out all moisture (information) until there is nothing left

Slide 65: Unmodelled Trends in Residuals
- autocorrelations should be statistically zero
- cross-correlations between residuals and inputs should be zero for lags greater than the numerator order (i.e., at long lags)
- if the cross-correlation between inputs and past residuals is non-zero (i.e., at negative lags), feedback is present in the data (inputs depend on past errors)
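The whiteness check on residuals can be sketched as follows. This is an illustrative example, not toolbox code: sample autocorrelations are compared against the usual +/- 2/sqrt(N) significance bound (the dashed lines on the correlation plots).

```python
import math, random

# Illustrative sketch: sample autocorrelation of residuals, with the
# +/- 2/sqrt(N) bound used for whiteness checking.
def autocorr(res, max_lag):
    n = len(res)
    mean = sum(res) / n
    c0 = sum((r - mean) ** 2 for r in res) / n
    rho = []
    for k in range(1, max_lag + 1):
        ck = sum((res[t] - mean) * (res[t - k] - mean) for t in range(k, n)) / n
        rho.append(ck / c0)
    return rho

def is_white(res, max_lag=20):
    bound = 2.0 / math.sqrt(len(res))
    return all(abs(r) <= bound for r in autocorr(res, max_lag))

# Residuals with left-over AR(1) structure fail the test:
rng = random.Random(4)
ar = [0.0]
for _ in range(499):
    ar.append(0.9 * ar[-1] + rng.gauss(0.0, 1.0))
print(is_white(ar))   # False - strong lag-1 correlation remains
```

Note that with 20 lags tested at a 95% bound, even truly white residuals will occasionally poke above the band at one or two lags, so the plot is read for systematic patterns rather than single excursions.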
Slide 66: Debutanizer Example
- consider an ARX(2,2,5) model - 2 poles, 1 zero, delay of 5
- autocorrelation plots: no systematic trend in the residuals
- cross-correlation plots: no systematic relationship between residuals and input

Slide 67: Debutanizer Example - Residual Correlation Plots
[figure: autocorrelation of residuals, and cross-correlation between input and residuals, vs. lag]

Slide 68: Debutanizer Example - Predicted vs. Response
[figure: measured and simulated model output vs. time]

Slide 69: Detecting Incorrect Time Delays
- if the cross-correlation between residuals and input is non-zero for small lags, the time delay is possibly too large
- the additional early transients aren't being modelled because the model assumes nothing is happening yet
Slide 70: Debutanizer Example
- let's choose a delay of 7
- the cross-correlation plot indicates significant cross-correlation between the input and the residuals at positive lags -> the estimate of the time delay is too large

Slide 71: Model Diagnostics - Quantitative Tests
- significance of the parameter estimates
- ratio tests of explained variation
- debutanizer example: the parameters are all significant

Slide 72: Debutanizer Example - Parameter Estimates (Matlab ARX output)
  This matrix was created by the command ARX on 11/16 1996 at 11:36
  Loss fcn: 5.805e-006    Akaike's FPE: 6.123e-006    Sampling interval: 1
  The polynomial coefficients and their standard deviations are
  B = 1.0e-003 * [0 0 0 0 0 0.1428 -0.0605]   (standard errors 0.0243, 0.0272 on the nonzero terms)
  A = [1.0000 -1.3924 0.4303]                 (standard errors 0.0747, 0.0697)
  (A: AR parameters; B: numerator parameters)
Slide 73: Model Diagnostics - Cross-Validation
- use the model to predict the behaviour of a new data set collected under similar circumstances
- reject the model if the prediction error is large

Slide 74: Debutanizer Example
- use the initial step test data as a cross-validation data set
- the prediction errors are small, and the trend is predicted quite well
- conclusion: an acceptable model

Slide 75: Debutanizer Example - Prediction for Validation Data
[figure: measured and simulated model output vs. time]

Slide 76: Debutanizer Example - Residual Correlation Plots for Validation Data
[figure: autocorrelation of residuals, and cross-correlation between input and residuals, vs. lag]

Slide 77: Outline
- Types of Models
- Model Estimation Methods
- Identifying Model Structure
- Model Diagnostics
Slide 78: Initially...
- use the structure selection methods described earlier
- then, once you have estimated several candidate models...

Slide 79: Model Structure Diagnostics
- Akaike's Information Criterion (AIC)
  - a weighted estimation error: unexplained variation plus a term penalizing excess parameters
  - analogous to adjusted R^2 for regression
- find the model structure that minimizes the AIC

Slide 80: Akaike's Information Criterion - Definition
- the criterion combines the loss function (related to the prediction error / residual sum of squares), the number of parameters, and the number of data points in the sample
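As an illustrative sketch (using the forms adopted in Ljung's framework, which may differ by constants from other presentations), the AIC and FPE can be computed from the loss V, the number of data points N and the number of parameters p:

```python
import math

# Illustrative sketch of the structure-selection criteria, where
#   V = loss function (mean squared prediction error)
#   N = number of data points, p = number of estimated parameters
def aic(V, N, p):
    return math.log(V) + 2.0 * p / N          # smaller is better

def fpe(V, N, p):
    return V * (1.0 + p / N) / (1.0 - p / N)  # estimated error on new data

# Extra parameters must reduce V enough to pay their penalty:
print(aic(5.8e-6, 150, 4), fpe(5.8e-6, 150, 4))
```

Both criteria trade off fit against complexity, which is why the AIC vs. number-of-parameters curve on the next slide has an interior minimum.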
Slide 81: Akaike's Information Criterion
- the best model is at the minimum of the AIC vs. number of parameters curve

Slide 82: Akaike's Final Prediction Error
- an attempt to estimate the prediction error when the model is used to predict new outputs
- goal: choose the model that minimizes the FPE (a balance between the number of parameters and explained variation)

Slide 83: Minimum Description Length (MDL)
- another approach: find the "minimum description length" of the data
- the measure is based on the loss function plus a penalty for the number of terms; find the description that minimizes this criterion

Slide 84: Cross-Validation
- collect additional data, or partition your data set, and predict the output(s) for the additional input sequence
  - poor predictions: modify the model accordingly, re-estimate with the old data and re-validate
  - good predictions: use your model!
- note: the cross-validation set should be collected under similar conditions (operating point, no known disturbances such as feed changes)
Slide 85: Debutanizer Example
- search over a range of ARX model orders and time delays: poles 1-4, zeros 1-4, time delay 1-6
- examine mean square error, MDL, AIC and/or FPE (Matlab generated) -> an ARX(2,2,5) model is best

Slide 86: Debutanizer Example
[figure: % unexplained output variance vs. number of parameters; AIC optimum at ARX(3,2,5), MDL optimum at ARX(2,2,5)]

Slide 87: Other methods...
- look for singularity of the "information matrix"

Slide 88: Outline
- The Modeling Task
- Types of Models
- Model Building Strategy
- Model Diagnostics
- Identifying Model Structure
- Modeling Non-Stationary Data
- MISO vs. SISO Model Fitting
- Closed-Loop Identification
Slide 89: What is Non-Stationary Data?
- non-stationary disturbances exhibit meandering or wandering behaviour
- the mean may appear to be non-zero for periods of time
- the stochastic analogue of an integrating disturbance
- non-stationarity is associated with poles on the unit circle in the disturbance transfer function - the AR component has one or more roots at 1

Slide 90: Non-Stationary Data
[figure: simulated outputs for AR parameters of 0.3, 0.6 and 0.9, and a non-stationary case, vs. time]

Slide 91: How can you detect non-stationary data?
- visual: meandering behaviour
- quantitative:
  - slowly decaying autocorrelation behaviour
  - difference the data and examine the autocorrelation and partial autocorrelation functions of the differenced data
  - evidence of MA or AR indicates a non-stationary, or integrated, MA or AR disturbance

Slide 92: Differencing Data
- ... is the procedure of putting the data in "delta form": start with y(t) and convert to the differenced series
- this explicitly accounts for the pole on the unit circle
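Differencing and its inverse are one-liners; this is an illustrative sketch of the "delta form" dy(t) = y(t) - y(t-1) = (1 - q^-1) y(t):

```python
# Illustrative sketch: first differencing (delta form) and its inverse.
def difference(y):
    return [y[t] - y[t - 1] for t in range(1, len(y))]

def undifference(dy, y0=0.0):
    out, acc = [y0], y0          # re-integrate, given the initial value
    for d in dy:
        acc += d
        out.append(acc)
    return out

y = [1.0, 3.0, 6.0, 10.0]
print(difference(y))                      # [2.0, 3.0, 4.0]
print(undifference(difference(y), y[0]))  # recovers [1.0, 3.0, 6.0, 10.0]
```

Note the differenced series is one sample shorter, and the original level is only recoverable given the initial value, which is the information the unit pole absorbed.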
Slide 93: Detecting Non-Stationarity
[figure: autocorrelation of a non-stationary disturbance, and of the differenced disturbance, vs. lag]

Slide 94: Impact of Over-Differencing
- over-differencing can introduce extra meandering and local trends into the data
- differencing "cancels" a pole on the unit circle; over-differencing introduces an artificial unit pole into the data

Slide 95: Recognizing Over-Differencing
- visual: more local trends and meandering in the data
- quantitative: autocorrelation behaviour decays more slowly than in the initial undifferenced data
Slide 96: Estimating Models for Non-Stationary Data - Approaches
- estimate the model using the differenced data, or
- explicitly incorporate the pole on the unit circle in the disturbance transfer function specification

Slide 97: Estimating Models from Differenced Data
- prepare the data by differencing BOTH the input and the output
- specify an initial model structure using the graphical and quantitative tools
- estimate and diagnose the model for the differenced data
- convert the model to undifferenced form by multiplying through by (1 - q^-1)
- assess predictions on undifferenced data for the fitting and validation data sets

Slide 98: Differenced Form of the Box-Jenkins Model
- note: in the time series literature, the symbol ∇ is used to denote differencing

Slide 99: Outline
- Types of Models
- Model Estimation Methods
- Identifying Model Structure
- Model Diagnostics
- Estimating MIMO Models
101
CHEE825/436 - Module 4J. McLellan - Fall 2005101 MISO Approach Estimate the transfer function models + disturbance model for a single output and all inputs simultaneously Advantage –consistency - obtain one disturbance model directly –potential to assess directionality Disadvantage –complexity - recognizing model structures is more difficult
102
CHEE825/436 - Module 4J. McLellan - Fall 2005102 A Hybrid Approach conduct preliminary analysis using SISO approach –model structures –apparent disturbance structure estimate final model using MISO approach –must decide on a common disturbance structure feasible if input sequences are independent
103
CHEE825/436 - Module 4J. McLellan - Fall 2005103 Outline Types of Models Model Estimation Methods Identifying Model Structure Model Diagnostics Closed-loop vs. open-loop estimation
104
CHEE825/436 - Module 4J. McLellan - Fall 2005104 The Closed-Loop Identification Problem YtYt UtUt SP t + - Controller Gc Process Gp dither signal W t X
Slide 105: Where should the input signal be introduced?
Options:
- dither at the controller output
  - clearer indication of the process dynamics
  - preferred approach
- perturbations in the setpoint
  - additional controller dynamics will be included in the estimated model

Slide 106: What do the closed-loop data represent?
- dither signal case, without disturbances:
  - open-loop: the input-output data represent the process transfer function
  - closed-loop: the input-output data represent the closed-loop transfer function

Slide 107: Estimating Models from Closed-Loop Data - Approach #1
- working with W-Y data, estimate the closed-loop transfer function and back out the controller to obtain the process transfer function
- we already know the controller transfer function

Slide 108: Estimating Models from Closed-Loop Data - Approach #2
- estimate transfer functions for the process (U -> Y) and for the controller (Y -> U) simultaneously

Slide 109: Estimating Models from Closed-Loop Data - Approach #3
- fit the model as in the open-loop case (U -> Y)
- note that U is related to the dither W through the closed loop, so we are effectively using a filtered input signal

Slide 110: Some Useful References
- identification case study: paper by Shirt, Harris and Bacon (1994)
- closed-loop identification issues: paper by MacGregor and Fogal
- system identification workshop: paper edited by Barry Cott