CY3A2 System identification: Input signals


CY3A2 System identification: Input signals
Input signals need to be realisable and must excite the typical modes of the system; ideally the signal should be persistently exciting. Examples include:
Step function, e.g. u = ones(1,100)
PRBS - pseudo-random binary sequences (pseudo noise)
ARMA - an autoregressive moving average process
The most common input (when the system can stand it) is simply a normally distributed random sequence, i.e. u_i = e_i, or in Matlab speak u = randn(1,100)
Sum of sinusoids
It may be possible to overlay an existing input signal with one of the above signals so that the system can be identified in situ.
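As an illustration, each of these candidate signals can be generated with a few lines of Matlab; the lengths, amplitudes, frequencies and ARMA coefficients below are arbitrary choices for the sketch, not values taken from the lecture.

% Candidate identification input signals (illustrative values only)
N = 100;                                   % number of samples
u_step = ones(1,N);                        % step input
u_prbs = sign(randn(1,N));                 % crude PRBS: random +/-1 sequence
u_wn   = randn(1,N);                       % zero-mean white Gaussian noise, u_i = e_i
t = 0:N-1;
u_sin  = sin(2*pi*0.05*t) + 0.5*sin(2*pi*0.13*t);   % sum of sinusoids
% An ARMA input can be made by filtering white noise:
e = randn(1,N);
u_arma = filter([1 0.5], [1 -0.7], e);     % MA part [1 0.5], AR part [1 -0.7]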

CY3A2 System identification: Modelling the noise (Pseudo Linear Regression, PLR)
The ARMAX model includes a moving average model of the noise, that is
A(q^{-1}) y_k = B(q^{-1}) u_k + C(q^{-1}) e_k,
which can be represented in linear regression form as
y_k = \varphi_k^T \theta + e_k,
where the regression (data) vector \varphi_k contains past outputs, past inputs and past noise terms e_{k-1}, ..., e_{k-n_c}.

CY3A2 System identification
We do not usually have access to the noise terms e_k, however. One approach is to use the prediction errors as the signal driving the noise process, i.e. replace e_k by
\hat{e}_k = y_k - \varphi_k^T \hat{\theta}_{k-1}.
The recursive least squares algorithm is modified accordingly. Look at RLS again:
K_k = P_{k-1} \varphi_k / (1 + \varphi_k^T P_{k-1} \varphi_k)
\hat{\theta}_k = \hat{\theta}_{k-1} + K_k (y_k - \varphi_k^T \hat{\theta}_{k-1})
P_k = P_{k-1} - K_k \varphi_k^T P_{k-1}
Notice that the data matrix is now filled with prediction errors \hat{e}_{k-1}, ..., \hat{e}_{k-n_c} from previous lags, alongside the past inputs and outputs.
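A minimal Matlab sketch of this pseudo linear regression (extended RLS) idea is given below; the ARMAX(1,1,1) orders, the simulated "true" parameters and the initialisation P = 1000*I are illustrative assumptions, not values from the lecture.

% Pseudo linear regression / extended RLS for an ARMAX(1,1,1) model (illustrative)
N = 500;
u = randn(1,N); e = 0.1*randn(1,N);        % input and true (unmeasured) noise
a = 0.8; b = 0.5; c = 0.3;                 % assumed "true" parameters for simulation
y = zeros(1,N);
for k = 2:N
    y(k) = a*y(k-1) + b*u(k-1) + e(k) + c*e(k-1);
end

theta = zeros(3,1);                        % estimates of [a; b; c]
P = 1000*eye(3);                           % large initial covariance
ehat = zeros(1,N);                         % prediction errors
for k = 2:N
    phi = [y(k-1); u(k-1); ehat(k-1)];     % data vector uses the past prediction error
    K = P*phi / (1 + phi'*P*phi);          % RLS gain
    ehat(k) = y(k) - phi'*theta;           % prediction error, also drives the noise model
    theta = theta + K*ehat(k);             % parameter update
    P = P - K*phi'*P;                      % covariance update
end
theta                                      % should approach [a; b; c]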

CY3A2 System identification: Expected value
The expected value of a variable is the average value the variable would take if averaged over an extremely long period of time (N \to \infty). It represents the average amount one would expect to win on a bet in the long run if the same bet is taken many times.
E[x] = m_x is the mean of x,
E[(x - m_x)^2] = \sigma_x^2 is the variance of x,
E[xy] = E[x] E[y] if x and y are independent variables (the converse does not hold in general).
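A quick numerical check of these definitions, with arbitrarily chosen sample size and distributions:

% Sample approximations to expected values (illustrative)
N = 1e6;
x = 2 + randn(1,N);              % mean 2, variance 1
y = randn(1,N);                  % independent of x
mean(x)                          % approximates E[x] = 2
mean((x - mean(x)).^2)           % approximates var(x) = 1
mean(x.*y) - mean(x)*mean(y)     % approximately 0 for independent x and y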

CY3A2 System identification: Recalling the LS estimate
For the linear regression model y = X\theta + e, the least squares estimate is
\hat{\theta} = (X^T X)^{-1} X^T y.
Assume that the noise e has zero mean and is uncorrelated with the data matrix X. We would like to know whether this LS estimate is good, and in what sense.

CY3A2 System identification
We show that the LS estimate is unbiased, i.e. E[\hat{\theta}] = \theta.
Proof:
\hat{\theta} = (X^T X)^{-1} X^T y = (X^T X)^{-1} X^T (X\theta + e) = \theta + (X^T X)^{-1} X^T e.
Taking expectations, and using the assumptions that e has zero mean and is uncorrelated with X,
E[\hat{\theta}] = \theta + (X^T X)^{-1} X^T E[e] = \theta.

CY3A2 System identification
Now, how accurate is the LS estimate? Consider the covariance of the LS estimate. Assume the noise is white with constant variance, i.e. E[e e^T] = \sigma^2 I. Then
cov(\hat{\theta}) = E[(\hat{\theta} - \theta)(\hat{\theta} - \theta)^T] = (X^T X)^{-1} X^T E[e e^T] X (X^T X)^{-1} = \sigma^2 (X^T X)^{-1}.
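The unbiasedness and covariance results can be checked numerically. The sketch below uses an arbitrary two-parameter model, a fixed data matrix and white Gaussian noise; all numerical values are illustrative.

% Monte Carlo check of E[theta_hat] = theta and cov(theta_hat) = sigma^2*inv(X'X)
N = 200; sigma = 0.5;
theta_true = [1.5; -0.7];
X = [randn(N,1), ones(N,1)];          % data matrix: one regressor plus a constant
M = 2000;                             % number of Monte Carlo runs
est = zeros(2,M);
for m = 1:M
    e = sigma*randn(N,1);
    y = X*theta_true + e;
    est(:,m) = (X'*X)\(X'*y);         % LS estimate for this realisation
end
mean(est,2)                           % close to theta_true (unbiased)
cov(est')                             % close to the theoretical covariance below
sigma^2*inv(X'*X)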

CY3A2 System identification: Weighted least squares
If every data point is given a weighting to indicate its individual importance, the result is called weighted least squares. Minimising
J(\theta) = (y - X\theta)^T W (y - X\theta),
where W is a diagonal matrix of positive weights, the weighted least squares estimate is given by
\hat{\theta}_W = (X^T W X)^{-1} X^T W y.
For example, in RLS with forgetting factor \lambda, data point i receives the weight \lambda^{N-i}, so older data are progressively discounted.
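A minimal Matlab sketch of the weighted estimate, using forgetting-factor weights as the example (the value lambda = 0.95 and the model are arbitrary choices):

% Weighted least squares with exponential forgetting weights (illustrative)
N = 200; lambda = 0.95;
X = [randn(N,1), ones(N,1)];
y = X*[1.5; -0.7] + 0.5*randn(N,1);
w = lambda.^(N - (1:N)');             % weight lambda^(N-i) for data point i
W = diag(w);
theta_w = (X'*W*X)\(X'*W*y)           % weighted LS estimate
theta_ls = (X'*X)\(X'*y)              % ordinary LS (W = I) for comparison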

CY3A2 System identification
If E[e e^T] = \sigma^2 I, i.e. the noise is zero-mean white noise with constant variance, it can be shown that the ordinary LS estimate (W = I) also has the minimum variance amongst all weighted LS estimates.
Conclusion 1: Under these conditions, LS is an unbiased, minimum-variance parameter estimate; it is also a consistent* parameter estimator.
*which means \hat{\theta}_N \to \theta as the number of data points N \to \infty.

CY3A2 System identification
Conclusion 2: If these conditions are violated, you cannot expect good results. For example, the input signal needs to be sufficiently exciting so that the data matrix has full rank; and if the noise term is correlated with the regressors, the LS estimate will be biased, so a noise model is needed, e.g. including moving average noise terms in the data matrix as in the pseudo linear regression approach above.
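The bias caused by correlated noise can also be seen numerically. In the sketch below, a first-order ARMAX output is fitted by plain LS with a data matrix that ignores the moving average noise term; all parameter values are illustrative assumptions.

% Bias of plain LS when the noise is coloured (moving average) - illustrative
N = 5000;
u = randn(1,N); e = 0.5*randn(1,N);
a = 0.8; b = 0.5; c = 0.9;                 % assumed "true" ARMAX(1,1,1) parameters
y = zeros(1,N);
for k = 2:N
    y(k) = a*y(k-1) + b*u(k-1) + e(k) + c*e(k-1);
end
X = [y(1:N-1)', u(1:N-1)'];                % data matrix with no noise model
Y = y(2:N)';
theta_ls = (X'*X)\(X'*Y)                   % estimate of [a; b]: a is biased, because
                                           % e(k-1) is correlated with the regressor y(k-1)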