Robust Pareto Design of GMDH-type Neural Networks for Systems with Probabilistic Uncertainties
N. Nariman-zadeh, F. Kalantary, A. Jamali, F. Ebrahimi
Faculty of Engineering, The University of Guilan



Introduction

System identification techniques are applied in many fields in order to model and predict the behaviour of unknown and/or very complex systems from given input-output data. GMDH is a self-organizing approach by which gradually complicated models are generated based on the evaluation of their performances on a set of multi-input single-output data. In order to obtain more robust models, all the conflicting objectives, namely the training error (TE) and the prediction error (PE), must be considered in a multi-objective Pareto optimization process. For multi-objective optimization problems there is a set of optimal solutions, known as the Pareto optimal solutions or Pareto front.

Modelling Using GMDH-type Networks

System identification techniques are applied in many fields in order to model and predict the behaviour of unknown and/or very complex systems from given input-output data. The Group Method of Data Handling (GMDH) algorithm is a self-organizing approach by which gradually complicated models are generated based on the evaluation of their performances on a set of multi-input single-output data pairs (i = 1, 2, …, M).

[Figure: a GMDH-type network mapping inputs X1, X2, …, Xn to outputs Y1, …, Ym]

Modelling Using GMDH-type Networks

The classical GMDH algorithm can be represented as a set of neurons in which different pairs of neurons in each layer are connected through a quadratic polynomial and thus produce new neurons in the next layer.

[Figure: a feedforward GMDH-type network with an input layer (X1–X4), hidden layer(s) of neurons G1–G6, and an output layer]
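The quadratic connection between each pair of neurons can be sketched as follows; this is a minimal illustration of the standard GMDH neuron, with hypothetical coefficient values:

```python
import numpy as np

def gmdh_neuron(x1, x2, a):
    """Classical GMDH quadratic polynomial neuron:
    y = a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1**2 + a5*x2**2
    The six coefficients a are found by least squares on the training data."""
    return (a[0] + a[1] * x1 + a[2] * x2
            + a[3] * x1 * x2 + a[4] * x1 ** 2 + a[5] * x2 ** 2)

# Illustrative coefficients: with a = [1, 0, 0, 1, 0, 0], y = 1 + x1*x2
y = gmdh_neuron(2.0, 3.0, [1, 0, 0, 1, 0, 0])
print(y)  # 7.0
```

Each hidden neuron G in the diagram applies this polynomial to its two incoming signals, so the network as a whole builds a high-order polynomial of the inputs layer by layer.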

Application of Genetic Algorithm in the Topology Design of GMDH-type NNs

[Figure: a generalized GMDH network and the structure of its chromosome — the input symbols a, b, c, d are concatenated pairwise (ad, bc) and then merged (adbc), encoding the network topology as a string]

Application of Genetic Algorithm in the Topology Design of GMDH-type NNs

[Figure: chromosome encoding (a, c, b, d → ad, bc → adbc) for a generalized structure in which an input such as d may appear repeatedly]

Crossover operation for two individuals in GS-GMDH networks
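Since the topology is encoded as a symbol string, crossover can operate directly on the chromosomes. A minimal sketch of one-point crossover on such strings (the function name and the example chromosomes are illustrative, not taken from the paper):

```python
import random

def one_point_crossover(parent1, parent2, point=None):
    """Exchange the tails of two chromosome strings at a cut point.
    Chromosomes encode GMDH network topologies as symbol strings
    (e.g. 'adbc' in the slides); a random cut is used if none is given."""
    if point is None:
        point = random.randint(1, min(len(parent1), len(parent2)) - 1)
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

c1, c2 = one_point_crossover("adbc", "bdca", point=2)
print(c1, c2)  # adca bdbc
```

In GS-GMDH networks the exchanged substrings carry whole sub-topologies, so crossover recombines building blocks of the two parent networks.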

Application of Singular Value Decomposition to the Design of GMDH-type Networks

SVD is the method of choice for solving linear least-squares problems in which singularities may exist in the normal equations. The SVD of an M×N matrix A is a factorization into the product of three matrices: a column-orthogonal M×N matrix U, an N×N diagonal matrix W with non-negative elements (the singular values), and the transpose of an N×N orthogonal matrix V, such that A = U W Vᵀ.
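The SVD-based solve described above can be sketched as follows; the fitting problem shown (one quadratic GMDH neuron, synthetic data) is illustrative:

```python
import numpy as np

def svd_least_squares(A, y, tol=1e-10):
    """Solve min ||A a - y||_2 via SVD, zeroing near-singular values.
    This avoids the ill-conditioning of the normal equations A^T A a = A^T y."""
    U, w, Vt = np.linalg.svd(A, full_matrices=False)
    w_inv = np.where(w > tol * w.max(), 1.0 / w, 0.0)  # guard singular values
    return Vt.T @ (w_inv * (U.T @ y))

# Fit the 6 coefficients of one quadratic GMDH neuron (synthetic data)
rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=50), rng.normal(size=50)
A = np.column_stack([np.ones(50), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
a_true = np.array([1.0, 2.0, -1.0, 0.5, 0.0, 3.0])
a_hat = svd_least_squares(A, A @ a_true)
print(np.allclose(a_hat, a_true))  # True
```

Setting the reciprocals of near-zero singular values to zero (rather than inverting them) is what makes the solution stable when columns of A are nearly linearly dependent.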

Genetic Algorithms and Multi-objective Pareto Optimization

Genetic algorithms are iterative and stochastic optimization techniques. In the optimization of complex real-world problems there are several objective functions to be optimized simultaneously. There is no single best solution because the objectives conflict with each other; instead there is a set of optimal solutions, well known as the Pareto optimal solutions or Pareto front.

Multi-objective Optimization

[Figure: Pareto fronts of modelling error versus prediction error obtained from the multi-objective optimization]

Difference between Robust Optimization and Traditional Optimization

[Figure: objective function versus design variable, showing the feasible and infeasible regions, the traditional optimal solution, and the robust optimal solution]

Stochastic Robust Analysis

A random variable is characterized by its probability density function (PDF) and cumulative distribution function (CDF). For discrete sampling (e.g., Monte Carlo simulation), the mean and variance of each objective function are estimated from the values computed at the sampled points.
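The discrete-sampling estimate can be sketched as follows; the objective and the input distribution used here are stand-ins, not the ones from the paper:

```python
import numpy as np

def monte_carlo_moments(f, sampler, q=10000, seed=1):
    """Estimate the mean and variance of f(X) by discrete sampling:
    draw q samples of the uncertain input X from its PDF and
    approximate the moments of the objective from the q evaluations."""
    rng = np.random.default_rng(seed)
    values = np.array([f(sampler(rng)) for _ in range(q)])
    return values.mean(), values.var()

# Stand-in objective f(x) = x^2 with X ~ N(0, 1), so E[f(X)] = 1
mean_te, var_te = monte_carlo_moments(
    f=lambda x: x ** 2,
    sampler=lambda rng: rng.normal(0.0, 1.0))
print(round(mean_te, 1))  # ≈ 1.0
```

In the robust design problem, these sample means and variances of TE and PE become the objective functions of the multi-objective GA, so that designs that are both accurate and insensitive to the probabilistic uncertainties are favoured.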

Modelling and prediction of soil shear strength, Su, based on 5 input parameters, namely the SPT number (Standard Penetration Test) N′, the effective overburden stress σ′0, the moisture content percentage w, the liquid limit LL, and the plastic limit PL of fine-grained clay soil. The data used in this study were gathered from the National Iranian Geotechnical Database, which has been set up at the Building and Housing Research Centre (BHRC). The database has been established under a mandate from the Management and Planning Organization (MPORG), which supervises the professional activities of all of the consultancy firms in Iran.

[Figure: comparison of actual values with the evolved GMDH model corresponding to optimum point C (nominal table), for the training set and the prediction set]

Objective functions and structure of networks of different optimum design points

Point | Network's structure | TE | PE | Mean of TE | Mean of PE | Variance of TE | Variance of PE
A     | bbaebcacbcaeacee    |    |    |            |            |                |
B     | bcaebacdbcbbadde    |    |    |            |            | e11            | 3.3e9
C     | bcaebccdbdbcaccd    |    |    |            |            | e10            | 2.6e6

Objective functions and structure of networks of different optimum design points

Point | Network's structure | TE | PE | Mean of TE | Mean of PE | Variance of TE | Variance of PE
A     | bbaebcacbcaeacee    |    |    |            |            |                |
B     | bcaebacdbcbbadde    |    |    |            |            | e11            | 3.3e9
C     | bcaebccdbdbcaccd    |    |    |            |            | e10            | 2.6e6
D     | abeecddd            |    |    |            |            |                |

The structure of the networks corresponding to points C and D

[Figure: the network topology for point C and point D, with intermediate outputs Y1–Y4 feeding the final output Y5]

Y1 = f1(N′, σ′0)
Y2 = f2(w, LL)
Y3 = f3(Y2, LL)
Y4 = f4(Y1, PL)
Y5 = Su = f5(Y4, Y3)

where each fi is a quadratic polynomial of the form a0 + a1x + a2y + a3xy + a4x² + a5y².
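The two-stage cascade for point C can be sketched as a composition of quadratic neurons. The function names and coefficient values below are placeholders — the slide's numeric coefficients did not survive extraction — so this shows only the topology:

```python
def quad(x, y, a):
    # generic GMDH neuron: a0 + a1*x + a2*y + a3*x*y + a4*x^2 + a5*y^2
    return (a[0] + a[1] * x + a[2] * y
            + a[3] * x * y + a[4] * x ** 2 + a[5] * y ** 2)

def su_network(N, sigma0, w, LL, PL, coeffs):
    """Cascade from the slide: Y1..Y4 feed the final neuron Y5 = Su.
    `coeffs` maps each neuron name to its six polynomial coefficients."""
    Y1 = quad(N, sigma0, coeffs["Y1"])   # Y1 = f1(N', sigma'_0)
    Y2 = quad(w, LL, coeffs["Y2"])       # Y2 = f2(w, LL)
    Y3 = quad(Y2, LL, coeffs["Y3"])      # Y3 = f3(Y2, LL)
    Y4 = quad(Y1, PL, coeffs["Y4"])      # Y4 = f4(Y1, PL)
    return quad(Y4, Y3, coeffs["Y5"])    # Su = Y5 = f5(Y4, Y3)

# Placeholder coefficients [0, 1, 0, 0, 0, 0] make each neuron pass
# its first input through, so the cascade returns N' unchanged.
coeffs = {k: [0, 1, 0, 0, 0, 0] for k in ("Y1", "Y2", "Y3", "Y4", "Y5")}
print(su_network(5.0, 1.0, 2.0, 3.0, 4.0, coeffs))  # 5.0
```

With the fitted coefficients in place of the placeholders, this five-neuron composition is the entire evolved model: a closed-form polynomial in the five soil parameters.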

Conclusion

A multi-objective genetic algorithm was used to optimally design GMDH-type neural networks from a robustness point of view in a probabilistic approach. Multi-objective optimization of robust GMDH models led to the discovery of important trade-offs among the objective functions. The framework of this work is very promising and can be used generally in the optimum design of GMDH models for real-world complex systems with probabilistic uncertainties.

Thanks for your attention…