Identification of Reduced-Order Dynamic Models of Gas Turbines

Presentation transcript:

Identification of Reduced-Order Dynamic Models of Gas Turbines CSC Student Seminars (Spring/Summer, 2006) PhD Student: Xuewu Dai Supervisors: Tim Breikin and Hong Wang

Outline
1. Introduction
2. Reduced-order Model
3. Long-term Prediction
4. Dynamic Gradient Descent
5. Nonlinear Least-Squares Optimization
6. Future Work

1. Introduction
Modelling of Gas Turbines
Fault Detection
Condition Monitoring

Aims
Reducing computational complexity: real-time operation
Improving prediction accuracy: long-term prediction
Robustness

2. Reduced-order Model
Thermodynamic models:
1. High order: 26th
2. Non-linear
Linearisation leads to our ARX models:
1. Reduced order: 1st, 2nd, …
2. Linear
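The reduced-order ARX model itself is not spelled out in the transcript. As a minimal sketch, assuming the 1st-order linear form y[k] = a*y[k-1] + b*u[k-1] (consistent with the a and b parameters reported in the comparison table later in the talk), one-step-ahead prediction from measured data could look like:

```python
import numpy as np

def arx_predict_one_step(y, u, a, b):
    """One-step-ahead prediction of an assumed 1st-order ARX model:
    y_hat[k] = a*y[k-1] + b*u[k-1], using the measured outputs y."""
    y = np.asarray(y, dtype=float)
    u = np.asarray(u, dtype=float)
    y_hat = np.empty_like(y)
    y_hat[0] = y[0]                      # initialise with the first measurement
    y_hat[1:] = a * y[:-1] + b * u[:-1]
    return y_hat
```

Higher-order variants (2nd order and up) would simply extend the regressor with additional lagged outputs and inputs.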

3. Long-term Prediction
Model a: One-step Ahead Prediction
Model b: Long-term Prediction

Model Equations
1. One-step-ahead prediction
2. Long-term prediction
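The two prediction modes differ only in what is fed back: one-step-ahead prediction uses the measured output y[k-1], while long-term (free-run) prediction feeds the model's own previous estimate back in. A sketch of the free-run case, again assuming a hypothetical 1st-order ARX form y[k] = a*y[k-1] + b*u[k-1]:

```python
import numpy as np

def arx_simulate(u, a, b, y0=0.0):
    """Long-term (free-run) prediction: the model's own past output
    y_hat[k-1] is fed back instead of the measurement y[k-1]."""
    u = np.asarray(u, dtype=float)
    y_hat = np.empty(len(u))
    y_hat[0] = y0
    for k in range(1, len(u)):
        y_hat[k] = a * y_hat[k - 1] + b * u[k - 1]
    return y_hat
```

Because predictions are recycled, parameter errors compound over the horizon, which is why long-term identification is harder than one-step fitting.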

Challenges
Computational burden: how many iterations are needed to identify the parameters?
Dependency of prediction errors (non-Gaussian noise): MSE = 9.1318; autocorrelation of prediction errors
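The dependency of the prediction errors can be checked with their sample autocorrelation; for a well-fitted model the values at non-zero lags should stay near zero. A minimal sketch:

```python
import numpy as np

def residual_autocorrelation(e, max_lag=20):
    """Normalised autocorrelation of the prediction errors e.
    Large values at non-zero lags indicate correlated (dependent)
    residuals, i.e. unmodelled dynamics."""
    e = np.asarray(e, dtype=float) - np.mean(e)
    denom = np.dot(e, e)
    return np.array([np.dot(e[:len(e) - k], e[k:]) / denom
                     for k in range(max_lag + 1)])
```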

4. Dynamic Gradient Descent
Objective function
Global gradient and local gradient
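Because each free-run prediction depends on earlier predictions, the global gradient of the objective J = ½Σe² must be propagated through the model dynamics rather than read off term by term. A sketch for the assumed 1st-order ARX form y_hat[k] = a*y_hat[k-1] + b*u[k-1], in which the parameter sensitivities are themselves dynamic recursions:

```python
import numpy as np

def dynamic_gradient(y, u, a, b):
    """Global gradient of J = 0.5 * sum((y - y_hat)^2) for the free-run
    model y_hat[k] = a*y_hat[k-1] + b*u[k-1].  The sensitivities obey
    their own recursions:
        s_a[k] = y_hat[k-1] + a * s_a[k-1]
        s_b[k] = u[k-1]     + a * s_b[k-1]"""
    n = len(u)
    y_hat = np.zeros(n)
    s_a = np.zeros(n)
    s_b = np.zeros(n)
    y_hat[0] = y[0]
    for k in range(1, n):
        y_hat[k] = a * y_hat[k - 1] + b * u[k - 1]
        s_a[k] = y_hat[k - 1] + a * s_a[k - 1]
        s_b[k] = u[k - 1] + a * s_b[k - 1]
    e = np.asarray(y, dtype=float) - y_hat
    return -np.dot(e, s_a), -np.dot(e, s_b)   # dJ/da, dJ/db
```

Dropping the recursive terms a*s[k-1] recovers the ordinary "local" one-step gradient, which ignores how parameter changes ripple through later predictions.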

Dynamic Gradient Descent

Results 1: steepest-descent direction

Results 2: BFGS direction

5. Nonlinear Least-Squares Optimization (Gauss-Newton)
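A generic Gauss-Newton update for nonlinear least squares solves the linearised problem at each iteration, delta = (JᵀJ)⁻¹Jᵀe. A hedged sketch (residual_fn and jac_fn are placeholder names, not from the slides):

```python
import numpy as np

def gauss_newton_step(theta, residual_fn, jac_fn):
    """One Gauss-Newton update for min 0.5*||e(theta)||^2:
    solve J*delta = e in the least-squares sense, then theta += delta."""
    e = residual_fn(theta)   # residuals e = y - y_hat(theta)
    J = jac_fn(theta)        # Jacobian d y_hat / d theta
    delta, *_ = np.linalg.lstsq(J, e, rcond=None)
    return theta + delta
```

For the ARX simulation error, jac_fn would return the dynamic sensitivities discussed on the gradient slide, stacked row-wise per time step; for a residual that is linear in theta, a single step lands on the exact least-squares solution.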

Search direction, step size and initial value
Search direction:
  Steepest descent: negative global gradient
  Nonlinear least squares: Gauss-Newton
Step size: fixed, adjustable, line search
Initial value:
  Blind guess: [0.5 0.5 0.5 0.5]
  LSE: [1.2805 -0.29191 0.10582 0.15903]
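The LSE initial value can be obtained from an ordinary least-squares fit of the one-step regression, which is linear in the parameters and therefore solvable in a single step. A sketch for the assumed 1st-order case:

```python
import numpy as np

def lse_initial_guess(y, u):
    """Ordinary least squares on the one-step ARX regression
    y[k] ~ a*y[k-1] + b*u[k-1]: a cheap initial value for the
    iterative free-run (long-term) optimisation."""
    y = np.asarray(y, dtype=float)
    u = np.asarray(u, dtype=float)
    Phi = np.column_stack([y[:-1], u[:-1]])   # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    return theta  # [a, b]
```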

Results 3: Gauss-Newton

Prediction of 1st Order Model

Comparison of 1st Order Model

Method             MSE        a          b          Iterations
LSE                23.49449   0.987395   0.032551   1
ANFIS              22.2925    N/A        N/A        200
GD                 11.0163    0.9809     0.0376
Exhaustive Search  9.131926   0.977774   0.043542   10000
DGD1*              9.131816   0.977764   0.043568   1000
DGD2*              9.131786   0.777774   0.043544   101
DGD3*              9.131785   0.977776   0.043543   98

*DGD1: steepest-descent direction and adjusting step size
*DGD2: BFGS direction and adjusting step size
*DGD3: Gauss-Newton and line search

High Order Model
Initial (by LSE): [1.2805 -0.29191 0.10582 0.15903]
Final: [1.8604 -0.8641 0.07045 -0.007475]

6. Future Work
Initial value problem
Robustness problem
Applying such learning algorithms to neural networks
Model structure selection by autocorrelation of prediction errors
NARMAX models

CSC Student Seminars (Spring/Summer, 2006) Thanks

Appendix

Initial value problem
Manual initial value [0.5 0.5 0.5 0.5] converges to [1.8604 -0.8641 0.07045 -0.007475], final MSE = 8.60188
LSE initial value [1.2805 -0.29191 0.10582 0.15903] converges to [1.8604 -0.8641 0.07045 -0.007475], final MSE = 3.313612
