Slide 10/1: Lecture 10, System Modeling and Identification. Dr.-Ing. Erwin Sitompul, President University

Slide 10/2, Chapter 6: Identification from Step Response

Homework 9: Time Percent Value Method
Determine the approximation of the model in the last example if, after examining the t/τ table, the model order is chosen to be 4 instead of 5.

Slide 10/3, Chapter 6: Identification from Step Response

Solution to Homework 9: the t/τ table
Five values of $t_i/\tau$ are to be located for n = 4. Result: the located values are read off the table shown on the slide (not reproduced in this transcript).

Slide 10/4, Chapter 6: Identification from Step Response

Solution to Homework 9
The slide compares the two resulting transfer functions: the 5th-order approximation and the 4th-order approximation (both given as equations on the slide; not reproduced in this transcript).

Slide 10/5, Chapter 6: Least Squares Methods

The Least Squares Methods are based on the minimization of the squares of errors. The errors are defined as the difference between the measured value and the estimated value of the process output, that is, between $y(k)$ and $\hat{y}(k)$. There are two versions of the methods: the batch version and the recursive version.

Slide 10/6, Chapter 6: Least Squares Methods

Consider the discrete-time transfer function of the form:

$$G(z) = \frac{b_1 z^{-1} + \dots + b_m z^{-m}}{1 + a_1 z^{-1} + \dots + a_n z^{-n}}$$

The aim of the Least Squares (LS) Methods is to identify the parameters $a_1, \dots, a_n, b_1, \dots, b_m$ from the knowledge of the process inputs $u(k)$ and the process outputs $y(k)$. As described by the transfer function above, the relation between process inputs and process outputs is:

$$y(k) = -a_1 y(k-1) - \dots - a_n y(k-n) + b_1 u(k-1) + \dots + b_m u(k-m)$$
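As an aside, a minimal Python sketch of this difference equation; the function name, parameter values, and input sequence below are all hypothetical illustrations, not taken from the slides:

```python
import numpy as np

def simulate_arx(a, b, u):
    """Simulate y(k) = -a[0]*y(k-1) - ... - a[n-1]*y(k-n)
                     + b[0]*u(k-1) + ... + b[m-1]*u(k-m),
    with zero initial conditions."""
    n, m = len(a), len(b)
    y = np.zeros(len(u))
    for k in range(len(u)):
        for i in range(1, n + 1):          # denominator (autoregressive) part
            if k - i >= 0:
                y[k] -= a[i - 1] * y[k - i]
        for j in range(1, m + 1):          # numerator (input) part
            if k - j >= 0:
                y[k] += b[j - 1] * u[k - j]
    return y

u = np.ones(10)                               # hypothetical unit-step input
y = simulate_arx([-1.5, 0.7], [1.0, 0.5], u)  # hypothetical parameter values
print(y)
```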

Slide 10/7, Chapter 6: Least Squares Methods

This relation can be written in matrix notation as:

$$y(k) = m^T(k)\,\theta$$

where:

$$\theta = [a_1, \dots, a_n, b_1, \dots, b_m]^T \quad \text{(vector of parameters)}$$
$$m(k) = [-y(k-1), \dots, -y(k-n), u(k-1), \dots, u(k-m)]^T \quad \text{(vector of measured data)}$$

Hence, the identification problem in this case is how to find $\theta$ based on the actual process output $y(k)$ and the past measured data $m(k)$.

Slide 10/8, Chapter 6: Least Squares Methods

Assuming that the measurement was done k times, with the condition $k \geq n + m$, then k equations can be constructed as:

$$y(1) = m^T(1)\,\theta, \quad y(2) = m^T(2)\,\theta, \quad \dots, \quad y(k) = m^T(k)\,\theta$$

or, stacked in matrix form:

$$Y = M\theta, \qquad Y = \begin{bmatrix} y(1) \\ \vdots \\ y(k) \end{bmatrix}, \qquad M = \begin{bmatrix} m^T(1) \\ \vdots \\ m^T(k) \end{bmatrix}$$

Slide 10/9, Chapter 6: Least Squares Methods

Least Error (LE) Method, Batch Version
If M is square and nonsingular, the direct solution can be calculated as:

$$\theta = M^{-1}Y$$

In this method, the error is minimized as a linear function of the parameter vector. The disadvantage of this solution is that the error can become abruptly larger for t > k.
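A minimal sketch of this direct solve in Python, assuming a square, nonsingular M has already been built from the measurements; the numerical values here are placeholders, not the lecture's data:

```python
import numpy as np

# Placeholder regression matrix M (p x p) and output vector Y (p,)
M = np.array([[ 0.5, -0.2, 1.0],
              [-0.8,  0.5, 1.0],
              [ 0.3, -0.8, 1.0]])
Y = np.array([0.9, -0.1, 0.4])

theta = np.linalg.solve(M, Y)   # direct solution theta = M^{-1} Y
print(theta)
```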

Slide 10/10, Chapter 6: Least Squares Methods

A better way to calculate the parameter estimate $\theta$ is to find the parameter set that minimizes the sum of the squares of errors between the measured outputs $y(k)$ and the model outputs $\hat{y}(k) = m^T(k)\,\theta$:

$$J(\theta) = \sum_{i=1}^{k} \left(y(i) - m^T(i)\,\theta\right)^2 = (Y - M\theta)^T (Y - M\theta)$$

The extremum of J with respect to $\theta$ is found when:

$$\frac{\partial J(\theta)}{\partial \theta} = 0$$

Slide 10/11, Chapter 6: Least Squares Methods

Least Squares (LS) Method, Batch Version
The derivative of $J(\theta)$ with respect to $\theta$ can be calculated as:

$$\frac{\partial J(\theta)}{\partial \theta} = -2M^T Y + 2M^T M\,\theta = 0$$

using the identity $\frac{\partial}{\partial x}\left(x^T A x\right) = 2Ax$ if A is symmetric. Solving for $\theta$ gives:

$$\theta = (M^T M)^{-1} M^T Y$$

Slide 10/12, Chapter 6: Least Squares Methods

Performing the "Second Derivative Test":

Second Derivative Test: If f'(x) = 0 and f''(x) > 0, then f has a local minimum at x. If f'(x) = 0 and f''(x) < 0, then f has a local maximum at x. If f'(x) = 0 and f''(x) = 0, then no conclusion can be drawn.

$$\frac{\partial^2 J(\theta)}{\partial \theta^2} = 2M^T M$$

which is always positive definite (provided M has full column rank). Hence

$$\theta = (M^T M)^{-1} M^T Y$$

is a solution that will minimize the squares of errors.
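A compact sketch of the batch LS estimate in Python. The normal-equations form $(M^T M)^{-1} M^T Y$ is written out to mirror the derivation above; in practice np.linalg.lstsq is numerically preferable. Names are illustrative:

```python
import numpy as np

def ls_batch(M, Y):
    """Batch least-squares estimate: theta = (M^T M)^{-1} M^T Y."""
    return np.linalg.solve(M.T @ M, M.T @ Y)

# Numerically more robust equivalent:
# theta, *_ = np.linalg.lstsq(M, Y, rcond=None)
```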

Slide 10/13, Chapter 6: Least Squares Methods

In order to guarantee that $M^T M$ is invertible, the number of rows of M must be at least equal to the number of its columns, which is in turn the number of parameters to be identified. More rows of M increase the accuracy of the calculation. In other words, the number of data rows does not have to equal the sum of the orders of the numerator and denominator of the model to be identified. If possible, rows containing values assumed to be zero (because no measurement data exist) should not be used.

Slide 10/14, Chapter 6: Least Squares Methods

Example: The parameters of a model with the structure

$$G_1(z) = \frac{b_2 z^{-2}}{1 + a_1 z^{-1} + a_2 z^{-2}}$$

are to be identified from the given measurement data (the data table is shown on the slide; it is not reproduced in this transcript).

Perform the batch version of the Least Squares Methods to find $a_1$, $a_2$, and $b_2$.
Hint: With three parameters ($a_1$, $a_2$, $b_2$) to be identified, at least 3 measurements must be available/utilized.
Hint: If possible, avoid too many zeros due to unavailable data, since $u(k) = 0$ and $y(k) = 0$ are assumed for $k < 0$.

Slide 10/15, Chapter 6: Least Squares Methods

Example: Using the fewest allowable data, from k = 2 to k = 4, the matrices Y and M can be constructed as:

$$Y = \begin{bmatrix} y(2) \\ y(3) \\ y(4) \end{bmatrix}, \qquad M = \begin{bmatrix} -y(1) & -y(0) & u(0) \\ -y(2) & -y(1) & u(1) \\ -y(3) & -y(2) & u(2) \end{bmatrix}$$

(The numerical entries follow from the measurement table on the slide.)
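A sketch of this construction in Python for the assumed $G_1$ structure. The arrays u and y stand in for the slide's measurement table; the values below are placeholders, not the actual data:

```python
import numpy as np

# Placeholder sequences u(0..4), y(0..4); substitute the slide's table values.
u = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
y = np.array([0.0, 0.0, 0.8, 1.1, 1.3])

# Rows for k = 2..4: m(k) = [-y(k-1), -y(k-2), u(k-2)], theta = [a1, a2, b2]
ks = [2, 3, 4]
M = np.array([[-y[k - 1], -y[k - 2], u[k - 2]] for k in ks])
Y = np.array([y[k] for k in ks])

theta = np.linalg.solve(M.T @ M, M.T @ Y)   # batch LS estimate
a1, a2, b2 = theta
print(a1, a2, b2)
```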

Slide 10/16, Chapter 6: Least Squares Methods

Example (continued): The estimate is then evaluated as $\theta = (M^T M)^{-1} M^T Y$; the numerical calculation and the resulting values of $a_1$, $a_2$, and $b_2$ are shown on the slide (not reproduced in this transcript).

Slide 10/17, Chapter 6: Least Squares Methods

Homework 10: Redo the example, utilizing as many data points as possible. Does your result differ from the result given in the slide? What could be the reason for that? Which result is more accurate?

Slide 10/18, Chapter 6: Least Squares Methods

Homework 10A: Redo the example, utilizing the fewest allowable data, if the structure of the model is chosen to be $G_2(z)$ with parameters $a_1$, $a_2$, and $b_1$ (the structure itself is given as an equation on the slide; presumably the first-order-numerator counterpart of $G_1(z)$, i.e. $G_2(z) = \frac{b_1 z^{-1}}{1 + a_1 z^{-1} + a_2 z^{-2}}$).

After you have found the three parameters $a_1$, $a_2$, and $b_1$ for $G_2(z)$, use Matlab/Simulink to calculate the responses of both $G_1(z)$ and $G_2(z)$ when they are given the sequence of inputs as given before. Compare $y(k)$ from Slide 10/15 with $y_1(k)$ and $y_2(k)$ from the outputs of the transfer functions $G_1(z)$ and $G_2(z)$. Give analysis and conclusions.

Odd-numbered Student-ID: Homework 10. Even-numbered Student-ID: Homework 10A.
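The homework calls for Matlab/Simulink; for reference, an equivalent discrete-time simulation can be sketched in Python with scipy.signal.dlsim. The parameter values below are placeholders for the identified ones, and the input is a stand-in for the lecture's sequence:

```python
import numpy as np
from scipy.signal import dlsim

# Placeholder identified parameters; substitute your LS estimates.
a1, a2, b2 = -1.0, 0.3, 0.5

# G1(z) = b2 z^-2 / (1 + a1 z^-1 + a2 z^-2), sampling time dt = 1
num = [0.0, 0.0, b2]
den = [1.0, a1, a2]

u = np.ones(20)                         # stand-in for the given input sequence
t_out, y1 = dlsim((num, den, 1.0), u)   # simulate y1(k)
print(y1.ravel())
```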