
A HYBRID ARTIFICIAL NEURAL NETWORK MODEL WITH LINEAR & NONLINEAR COMPONENTS

Yolcu Ufuk, Department of Statistics, Giresun University, Giresun 28000, Turkey
Egrioglu Erol, Department of Statistics, Ondokuz Mayis University, Samsun 55139, Turkey
Aladag Cagdas H., Department of Statistics, Hacettepe University, Ankara 06800, Turkey

CONTENT
1. Introduction
2. The Proposed Method (L&NL-ANN Model)
3. Application of L&NL-ANN Model
 3.1. Data Set 1
 3.2. Data Set 2
4. Conclusions

1. INTRODUCTION

• In the literature, while linear models such as the autoregressive integrated moving average (ARIMA; [2]) have been used for linear time series, nonlinear models such as artificial neural networks (ANN), bilinear models, and threshold autoregressive (TAR; [15]) models have been preferred for nonlinear time series.
• It is a well-known fact that real life time series generally contain both linear and nonlinear components.
• It is almost impossible for a time series to be purely linear or purely nonlinear.

• For some time series, linear models can produce satisfactory results when the linear part of the series dominates the nonlinear part.
• In a similar way, when the nonlinear part of the series dominates the linear part, nonlinear models can give satisfactory results.
• However, in both cases, one of these parts is not taken into consideration.
• Thus, this can lead to deceptive results.
• To deal with this problem, various hybrid approaches have been suggested in the literature.

• Tseng et al. (2002) proposed a hybrid forecasting model which combines seasonal ARIMA (SARIMA) and ANN [16].
• Zhang (2003) developed a hybrid model based on ARIMA and ANN [20].
• Pai and Lin (2005) combined ARIMA and support vector machines (SVM) [12].
• Chen and Wang (2007) combined SARIMA and SVM [4].
• Aladag et al. (2009) combined ARIMA and Elman neural networks [1].

• Lee and Tong (2011) combined ARIMA and genetic programming [9].
• Besides, Ince and Trafalis (2006) proposed a hybrid model which incorporates parametric techniques such as ARIMA, vector autoregression (VAR) and co-integration techniques, and nonparametric techniques such as support vector regression (SVR) and ANN [6].
• Wang et al. (2006) introduced hybrid approaches called threshold ANN, cluster-based ANN, and periodic ANN [18].

• BuHamra et al. (2003) and Jain and Kumar (2007) suggested hybrid approaches in which the inputs of the ANN are determined by the Box-Jenkins procedure ([3]; [7]).
• In addition to these studies, hybrid approaches that combine SARIMA and ANN have also been proposed for analyzing fuzzy time series by Egrioglu et al. (2009) and Uslu et al. (2010) ([5]; [17]).

• These hybrid approaches generally consist of two phases.
• After the linear component of the time series is modeled with a linear model in the first phase, the nonlinear component is modeled by utilizing a nonlinear model in the next phase.
• In two-phase methods, it is assumed in the first phase that the time series has only a linear structure and in the second phase that it has only a nonlinear structure.
• Therefore, this causes model specification error.

• In this study, a new ANN model composed of both linear and nonlinear structures is proposed to deal with this problem and to increase forecasting accuracy.
• Therefore, this method has the ability to model both the linear and nonlinear parts of a time series at the same time.
• In the proposed model, multiplicative and McCulloch-Pitts neuron structures are used for the nonlinear and linear parts, respectively.
• In addition, the modified particle swarm optimization (MPSO) method is used to train the proposed neural network model.

• To show the applicability of the proposed method, it is applied to two real life time series in the implementation.
• For comparison, the obtained results are compared to those calculated from other approaches available in the literature.
• As a result of the implementation, it is seen that the proposed method has the best forecasting accuracy.

2. THE PROPOSED METHOD

Hybrid models that include the advantages of both linear and nonlinear models have also been proposed in the literature. In hybrid approaches, it is assumed that the time series is composed of the sum of linear and nonlinear parts and can be defined by

y_t = L_t + N_t     (1)

where y_t, L_t, and N_t represent the time series, the linear part, and the nonlinear part of the time series, respectively.

• In the literature, Zhang (2003), Pai and Lin (2005), Chen and Wang (2007), Aladag et al. (2009), and Lee and Tong (2011) employed two-phase hybrid approaches.
• After the linear part of the time series is modeled in the first phase, the residuals obtained in this phase are assumed to contain the nonlinear part and are analyzed with nonlinear models in the second phase.
• Employing a linear model in the first phase means that nonlinear relations are not taken into consideration.
• This situation causes model specification error.
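A minimal sketch of such a two-phase scheme is given below (in Python, not the software used in the cited studies), only to make the procedure concrete. It is a stand-in rather than any of the cited hybrids: a plain AR(p) model fitted by ordinary least squares replaces the ARIMA/SARIMA step, and scikit-learn's MLPRegressor replaces the particular ANN or SVM used in those papers; the toy series, the lag order p, and the network size are illustrative choices only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
y = np.sin(np.arange(200) / 5.0) + 0.1 * rng.standard_normal(200)  # toy series

def lag_matrix(x, p):
    """Rows are (x_{t-1}, ..., x_{t-p}) for targets x_t, t = p..len(x)-1."""
    return np.column_stack([x[p - j - 1:len(x) - j - 1] for j in range(p)])

p = 4
X, target = lag_matrix(y, p), y[p:]

# Phase 1: linear part, here an AR(p) model fitted by ordinary least squares
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
linear_fit = A @ coef
residuals = target - linear_fit        # assumed to carry the nonlinear part

# Phase 2: nonlinear part, a small neural network fitted to lagged residuals
Xr, yr = lag_matrix(residuals, p), residuals[p:]
ann = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000, random_state=0).fit(Xr, yr)

# Combined one-step-ahead fitted values: linear fit plus predicted residual
hybrid_fit = linear_fit[p:] + ann.predict(Xr)
print("linear-only MSE :", np.mean((target[p:] - linear_fit[p:]) ** 2))
print("two-phase MSE   :", np.mean((target[p:] - hybrid_fit) ** 2))
```

Whatever models are plugged into the two phases, the point criticized above remains: the first phase sees only a linear structure, and the second phase sees only the residuals of that linear fit.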

• To overcome this problem, a one-phase method that can simultaneously analyze both linear and nonlinear structures is needed when a time series of the form given in (1) is analyzed.
• Therefore, a novel linear & nonlinear artificial neural network (L&NL-ANN) model is proposed in this study.
• The broad structure of the proposed model is illustrated in Fig. 2.1.

Fig. 2.1 The architecture of L&NL-ANN model

• In Fig. 2.1, ∑ and ∏ represent the McCulloch-Pitts and multiplicative neuron models, respectively. The functions f1 and f2 are given in (2) and (3), respectively.
• As seen in Fig. 2.1, the L&NL-ANN model includes two components, one linear and one nonlinear.

• W1 is a vector that contains the weights between the inputs of the linear component and the neuron in the hidden layer corresponding to the linear part of the model.
• Similarly, the vector W2 contains the weights between the inputs of the nonlinear component and the neuron in the hidden layer corresponding to the nonlinear part of the model.
• Each component has m inputs, so both W1 and W2 are m×1. The vector W3, which is 2×1, consists of the two weights used to combine the outputs calculated from the linear and nonlinear components.

• Thus, the calculation of the output of the L&NL-ANN model is carried out in three stages.

Stage 1. The output value of the neuron in the hidden layer corresponding to the linear component (o1) is calculated. Firstly, the activation value net1 for this neuron is obtained by using the following formula:

net1 = Σ_{j=1}^{m} w1,j y_{t-j} + b1     (4)

where w1,j (j = 1, 2, …, m) are the elements of W1 and b1 is the bias weight of the linear part. The activation function used in this neuron is f1 given in (2), so the output value o1 is calculated by

o1 = f1(net1)     (5)

Stage 2. The output value of the neuron in the hidden layer corresponding to the nonlinear component (o2) is calculated. Before calculating o2, the activation value net2 for this neuron is computed by using the formula given in (6):

net2 = Π_{j=1}^{m} (w2,j y_{t-j} + b2,j)     (6)

where w2,j and b2,j (j = 1, 2, …, m) are the elements of W2 and the bias weights of the nonlinear part, respectively. The activation function used in this neuron is f2 given in (3), so the output value o2 is calculated by

o2 = f2(net2)     (7)

Stage 3. The output value of the model is calculated. First of all, the activation value net3 for the neuron in the output layer is obtained by using the formula given in (8):

net3 = w3,1 o1 + w3,2 o2 + b3     (8)

In (8), b3 is the bias weight and w3,1 and w3,2 are the elements of W3. Then, the output value is computed as follows:

ŷ_t = net3     (9)

As seen from (9), the output value is obtained from the weighted sum of the linear and nonlinear autoregressive models. Unlike the model given in (1), the L&NL-ANN model can therefore be expressed as in (10):

ŷ_t = w3,1 L_t + w3,2 N_t + b3     (10)
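To make the three stages concrete, here is a minimal sketch of the forward pass in Python/NumPy (the study itself used MATLAB). Since equations (2) and (3) are not reproduced on the slides, the sketch assumes f1 is the identity function and f2 is the logistic sigmoid, and it takes the output in (9) to be purely linear; these are assumptions of the sketch, not the authors' stated definitions.

```python
import numpy as np

def lnl_ann_output(x, W1, b1, W2, b2, W3, b3):
    """One forward pass of the L&NL-ANN for x = (y_{t-1}, ..., y_{t-m})."""
    net1 = np.dot(W1, x) + b1              # Stage 1, Eq. (4): McCulloch-Pitts neuron
    o1 = net1                              # Eq. (5), with f1 assumed to be the identity
    net2 = np.prod(W2 * x + b2)            # Stage 2, Eq. (6): multiplicative neuron
    o2 = 1.0 / (1.0 + np.exp(-net2))       # Eq. (7), with f2 assumed to be the logistic sigmoid
    return W3[0] * o1 + W3[1] * o2 + b3    # Stage 3, Eqs. (8)-(9): weighted combination

# Toy usage with m = 3 lagged inputs and arbitrary weights
m = 3
x = np.array([0.9, 0.8, 0.7])              # hypothetical y_{t-1}, y_{t-2}, y_{t-3}
W1, b1 = np.array([0.4, 0.3, 0.2]), 0.1
W2, b2 = np.array([0.5, 0.6, 0.7]), np.array([0.1, 0.2, 0.3])
W3, b3 = np.array([0.7, 0.3]), 0.05
print(lnl_ann_output(x, W1, b1, W2, b2, W3, b3))
```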

It should be noted here that in the model given in (1), the weights of the linear and nonlinear components are equal. However, in the L&NL-ANN model, these weights are determined during the optimization process of the ANN according to the structure of the data. In the proposed approach, the L&NL-ANN model, which is also defined in this study, is trained using the MPSO method. In the MPSO, the positions of a particle are the weights of the L&NL-ANN model. Hence, a particle has 3m + 4 positions. The structure of a particle is illustrated in Fig. 2.2.

Fig. 2.2 Structure of a particle
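The weight counts above fix the particle length: m elements for W1, m for W2, m for the multiplicative-neuron biases b2, one bias b1, two combination weights in W3, and one output bias b3, i.e. 3m + 4 positions (for example, with the m = 12 inputs used for the first data set in Section 3.1, a particle has 3·12 + 4 = 40 positions). The slides do not state in which order these weights are stored in a particle, so the layout in the following sketch is only one possible convention.

```python
import numpy as np

def decode_particle(particle, m):
    """Split a position vector of length 3*m + 4 into the L&NL-ANN weights (assumed layout)."""
    p = np.asarray(particle, dtype=float)
    assert p.size == 3 * m + 4, "a particle must carry 3m + 4 positions"
    W1 = p[0:m]                  # weights of the linear component
    W2 = p[m:2 * m]              # weights of the nonlinear (multiplicative) component
    b2 = p[2 * m:3 * m]          # one bias per input of the multiplicative neuron
    b1 = p[3 * m]                # bias of the linear neuron
    W3 = p[3 * m + 1:3 * m + 3]  # the two combination weights
    b3 = p[3 * m + 3]            # bias of the output neuron
    return W1, b1, W2, b2, W3, b3

# A random particle for m = 12 inputs has 3 * 12 + 4 = 40 positions
rng = np.random.default_rng(0)
parts = decode_particle(rng.uniform(0.0, 1.0, size=40), m=12)
print([np.shape(w) for w in parts])   # [(12,), (), (12,), (12,), (2,), ()]
```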

Mean square error (MSE), which is a well-known forecasting performance criterion, is used as the evaluation function. MSE is calculated by using the formula given in (11):

MSE = (1/n) Σ_{t=1}^{n} (y_t − ŷ_t)²     (11)

where n represents the number of learning samples. The algorithm used to train the proposed L&NL-ANN model and thereby obtain its output values is presented below.

Algorithm: The training algorithm of the proposed L&NL-ANN model.

Step 1. The parameters of MPSO are determined. In the first step, the parameters which direct the MPSO algorithm are determined. These parameters are pn, vm, c1i, c1f, c2i, c2f, w1, and w2, which were given in [10] and [13].

Step 2. Initial values of positions and velocities are determined. The initial positions and velocities of each particle in the swarm are randomly generated from the uniform distributions (0, 1) and (−vm, vm), respectively.

Step 3. Evaluation function values are computed. The evaluation function value for each particle is calculated; MSE given in (11) is used as the evaluation function.

Step 4. Pbest_k (k = 1, 2, …, pn) and Gbest are determined according to the evaluation function values calculated in the previous step. Pbest_k is a vector that stores the positions corresponding to the k-th particle's best individual performance, and Gbest is the best particle, i.e., the one with the best evaluation function value found so far.

Step 5. The parameters are updated. The updated values of the cognitive coefficient c1, the social coefficient c2, and the inertia parameter w are calculated as in [10] and [13].

Step 6. New values of positions and velocities are calculated. New values of positions and velocities for each particle are computed as in [10] and [13]. If the maximum iteration number has not been reached, the algorithm returns to Step 3; otherwise, it goes to Step 7.

Step 7. The optimal solution is determined. The elements of Gbest are taken as the optimal weight values of the L&NL-ANN.
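The seven steps can be condensed into the following self-contained sketch (Python/NumPy, not the authors' MATLAB code). The slides only point to [10] and [13] for the update rules, so the sketch assumes standard PSO velocity and position updates with c1, c2, and w varied linearly between their initial and final values, velocities clipped to [−vm, vm], the MSE of equation (11) over the training samples as fitness, and the same activation-function and particle-layout assumptions as in the earlier sketches; the default parameter values and the toy series are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def lnl_ann_forecasts(p, X, m):
    """L&NL-ANN outputs (assumed layout and activations) for rows of lagged inputs X."""
    W1, W2, b2 = p[:m], p[m:2 * m], p[2 * m:3 * m]
    b1, W3, b3 = p[3 * m], p[3 * m + 1:3 * m + 3], p[3 * m + 3]
    o1 = X @ W1 + b1                                             # linear neuron, identity f1 assumed
    net2 = np.clip(np.prod(X * W2 + b2, axis=1), -60.0, 60.0)    # multiplicative neuron, clipped for numerical safety
    o2 = 1.0 / (1.0 + np.exp(-net2))                             # logistic f2 assumed
    return W3[0] * o1 + W3[1] * o2 + b3

def mse(p, X, y, m):
    """Evaluation function of Eq. (11)."""
    return float(np.mean((y - lnl_ann_forecasts(p, X, m)) ** 2))

def train_mpso(X, y, m, pn=30, maxt=200, vm=1.0,
               c1i=2.0, c1f=3.0, c2i=2.0, c2f=3.0, w1=1.0, w2=2.0):
    dim = 3 * m + 4
    pos = rng.uniform(0.0, 1.0, size=(pn, dim))        # Step 2: positions from U(0, 1)
    vel = rng.uniform(-vm, vm, size=(pn, dim))         # Step 2: velocities from U(-vm, vm)
    fit = np.array([mse(p, X, y, m) for p in pos])     # Step 3
    pbest, pbest_fit = pos.copy(), fit.copy()          # Step 4
    gbest, gbest_fit = pos[np.argmin(fit)].copy(), fit.min()
    for t in range(maxt):
        a = t / max(maxt - 1, 1)                       # Step 5: assumed linear schedules
        c1, c2 = c1i + a * (c1f - c1i), c2i + a * (c2f - c2i)
        w = w1 + a * (w2 - w1)
        r1, r2 = rng.uniform(size=(pn, dim)), rng.uniform(size=(pn, dim))
        vel = np.clip(w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos), -vm, vm)  # Step 6
        pos = pos + vel
        fit = np.array([mse(p, X, y, m) for p in pos])                  # Step 3 (repeated)
        better = fit < pbest_fit                                        # Step 4 (repeated)
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        if pbest_fit.min() < gbest_fit:
            gbest, gbest_fit = pbest[np.argmin(pbest_fit)].copy(), pbest_fit.min()
    return gbest                                        # Step 7: optimal weights from Gbest

# Toy usage: a synthetic series with inputs y_{t-1}, ..., y_{t-m} and m = 3
series = np.sin(np.arange(150) / 6.0) + 0.05 * rng.standard_normal(150)
m = 3
X = np.column_stack([series[m - j - 1:len(series) - j - 1] for j in range(m)])
y = series[m:]
best = train_mpso(X, y, m)
print("training MSE:", mse(best, X, y, m))
```

The returned Gbest vector can then be decoded into W1, b1, W2, b2, W3, and b3 exactly as in the previous sketch.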

3. APPLICATION OF L&NL-ANN MODEL

In order to evaluate the performance of the proposed approach, which is based on the L&NL-ANN model defined in this study and the MPSO algorithm, the proposed approach is applied to two real time series in the implementation. All computations were carried out with MATLAB (R2011a).

3.1 DATA SET 1

The first time series is the amount of carbon dioxide measured monthly in Ankara, the capital of Turkey (ANSO), between March 1995 and April 2006. It has both trend and seasonal components, and its period is 12. The first 124 observations are used for training and the last 10 observations are used as the test set.

In addition to the proposed approach,
• Seasonal Autoregressive Integrated Moving Average (SARIMA),
• Winters Multiplicative Exponential Smoothing (WMES), and
• Feed Forward Neural Networks (FFANN)
methods, as well as the fuzzy time series forecasting methods proposed by Song [14], Egrioglu et al. [5], and Uslu et al. [17], are used to analyze the ANSO data. For the test set, the forecasts and root mean square error (RMSE) values produced by all methods are summarized in Table 3.1.

Table 3.1 The obtained results (test-set forecasts and RMSE values) for the ANSO data. Compared methods: SARIMA, WMES, Song [14], Egrioglu et al. [5], Uslu et al. [17], FFANN, and L&NL-ANN. The results of SARIMA, WMES, Song [14], Egrioglu et al. [5], and Uslu et al. [17] were taken from Uslu et al.'s study.

When FFANN is used, the numbers of neurons in both the hidden and input layers are varied from 1 to 12, and one output neuron is employed. The best architecture among them is found to be the one that contains 12 neurons in the input layer and one neuron in the hidden layer. Thus, the inputs of the best FFANN model are the lagged variables X_{t-1}, X_{t-2}, …, X_{t-12}. The inputs of the L&NL-ANN model are taken as X_{t-1}, X_{t-2}, …, X_{t-12}, as in the FFANN model.
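Since both networks use the twelve lagged values X_{t-1}, …, X_{t-12} as inputs, the corresponding design matrix and the 124/10 split described above can be sketched as follows. The ANSO values themselves are not reproduced on the slides, so a synthetic monthly series of the same length (134 observations) stands in; the handling of the first 12 observations, which have no complete lag vector, and the use of actual past values for the one-step-ahead test inputs are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_lags, n_test = 134, 12, 10               # 134 monthly observations, 12 lags, last 10 for testing
t = np.arange(n_obs)
anso_like = 100 + 0.3 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.standard_normal(n_obs)  # synthetic stand-in

def lagged_design(series, m):
    """Rows are (X_{t-1}, ..., X_{t-m}) for targets X_t; the first m points give no training pair."""
    X = np.column_stack([series[m - j - 1:len(series) - j - 1] for j in range(m)])
    return X, series[m:]

train = anso_like[:n_obs - n_test]                 # first 124 observations for training
X_train, y_train = lagged_design(train, n_lags)    # 112 usable training pairs
# One-step-ahead test inputs: the 12 actual values preceding each of the last 10 observations
X_test = np.column_stack([anso_like[n_obs - n_test - j - 1:n_obs - j - 1] for j in range(n_lags)])
y_test = anso_like[n_obs - n_test:]
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)  # (112, 12) (112,) (10, 12) (10,)
```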

In the training process of the L&NL-ANN model, the parameters of the modified particle swarm optimization are determined as follows: (c1i, c1f) = (2, 3), (c2i, c2f) = (2, 3), (w1, w2) = (1, 2), pn = 30, and maxt. According to Table 3.1, the proposed approach has the best forecasting accuracy for the ANSO data in terms of RMSE. To examine the results visually, the graph of the real observations and the forecasts produced by the proposed approach for the test set is given in Fig. 3.1.

Fig. 3.1 The graph of observations and forecasts for the test data

As clearly seen from this graph, the proposed approach produces very accurate forecasts for the ANSO data.

3.2 DATA SET 2

The L&NL-ANN model is also applied to the Canadian lynx data, the set of annual numbers of lynx trappings in the Mackenzie River District of North-West Canada for the period from 1821 to 1934. This data set has been extensively analyzed in the time series literature. We use the logarithm (to the base 10) of the data in the analysis.

• In addition to the proposed approach, the logarithm of the Canadian lynx data is examined by using the methods proposed by Zhang [20], Kajitani et al. [8], Pai and Lin [12], and Aladag et al. [1]. When the proposed method is used, the order of the L&NL-ANN model is m = 3. The MSE values obtained from the methods are presented in Table 3.2.

Table 3.2 The obtained MSE values for the test data of the logarithmic Canadian lynx data. Compared methods: ARIMA, FANN, Zhang [20], Kajitani et al. [8], Pai and Lin [12], Aladag et al. [1], and L&NL-ANN.

As seen from Table 3.2, the best forecasts are obtained when the L&NL-ANN model is used. The graph of the real observations and the forecasts obtained from the proposed approach for the test set is given in Fig. 3.2.

Fig. 3.2 The graph of observations and forecasts for the test data of data set 2

It is clearly seen from the graph that the forecasts produced by the proposed approach are very accurate.

4. CONCLUSIONS

It is a well-known fact that real life time series can contain both linear and nonlinear structures. In the literature, various hybrid approaches, which are generally two-phase methods, have been proposed to deal with such time series. After the linear component of the time series is modeled with a linear model in the first phase, the nonlinear component is modeled by utilizing a nonlinear model in the next phase.

In two-phase methods, it is assumed in the first phase that the time series has only a linear structure and in the second phase that it has only a nonlinear structure. Therefore, this causes model specification error. To overcome this problem and to reach a high forecasting accuracy level, a new ANN model that can simultaneously analyze both linear and nonlinear structures is introduced in this study.

This model can be considered a one-phase hybrid approach. In the other hybrid approaches available in the literature, the weights of the linear and nonlinear components are equal. Unlike these approaches, in the proposed neural network model the weights of the linear and nonlinear components are determined during the optimization process of the ANN according to the structure of the data.

In the proposed model, multiplicative and McCulloch-Pitts neuron structures are used for the nonlinear and linear parts, respectively. To show the forecasting performance of the proposed method, it is applied to two real life time series in the implementation. As a result of the implementation, it is clearly observed that the proposed method produces the best forecasts for these two real time series.

REFERENCES

1. Aladag, C.H., Egrioglu, E., Kadilar, C.: Forecasting nonlinear time series with a hybrid methodology, Applied Mathematics Letters, 22, (2009).
2. Box, G.E.P., Jenkins, G.M.: Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, CA, (1976).
3. BuHamra, S., Smaoui, N., Gabr, M.: The Box-Jenkins analysis and neural networks: prediction and time series modelling, Applied Mathematical Modelling, 27, (2003).
4. Chen, K.Y., Wang, C.H.: A hybrid SARIMA and support vector machines in forecasting the production values of the machinery industry in Taiwan, Expert Systems with Applications, 32(1), (2007).
5. Egrioglu, E., Aladag, C.H., Yolcu, U., Basaran, M.A., Uslu, V.R.: A new hybrid approach based on SARIMA and partial high order bivariate fuzzy time series forecasting model, Expert Systems with Applications, 36, (2009).
6. Ince, H., Trafalis, T.B.: A hybrid model for exchange rate prediction, Decision Support Systems, 42, (2006).
7. Jain, A., Kumar, A.M.: Hybrid neural network models for hydrological time series forecasting, Applied Soft Computing, 7, (2007).
8. Kajitani, Y., Hipel, K.W., McLeod, A.I.: Forecasting nonlinear time series with feed forward neural networks: A case study of Canadian lynx data, Journal of Forecasting, (2005).
9. Lee, Y-S., Tong, L-I.: Forecasting time series using a methodology based on autoregressive integrated moving average and genetic programming, Knowledge-Based Systems, 24, (2011).

10. Ma, Y., Jiang, C., Hou, Z., Wang, C.: The formulation of the optimal strategies for the electricity producers based on the particle swarm optimization algorithm, IEEE Transactions on Power Systems, 21(4), 1663–71 (2006).
11. McCulloch, W.S., Pitts, W.: A logical calculus of the ideas immanent in nervous activity, Bulletin of Mathematical Biophysics, 5, 115–133 (1943).
12. Pai, P.F., Lin, C.S.: A hybrid ARIMA and support vector machines model in stock price forecasting, Omega, 33(6), (2005).
13. Shi, Y., Eberhart, R.C.: Empirical study of particle swarm optimization, Proceedings of the IEEE International Congress on Evolutionary Computation, 3, 101–6 (1999).
14. Song, Q.: Seasonal forecasting in fuzzy time series, Fuzzy Sets and Systems, 107, 235–236 (1999).
15. Tong, H.: Non-linear Time Series: A Dynamical System Approach, Oxford University Press, New York (1990).
16. Tseng, F.M., Yu, H.C., Tzeng, G.H.: Combining neural network model with seasonal time series ARIMA model, Technological Forecasting & Social Change, 69, (2002).
17. Uslu, V.R., Aladag, C.H., Yolcu, U., Egrioglu, E.: A new hybrid approach for forecasting a seasonal fuzzy time series, International Symposium on Computing in Science and Engineering, Proceedings Book, (2010).
18. Wang, W., Van Gelder, P.H.A.J.M., Vrijling, J.K., Ma, J.: Forecasting daily streamflow using hybrid ANN models, Journal of Hydrology, 324, (2006).
19. Yadav, R.N., Kalra, P.K., John, J.: Time series prediction with single multiplicative neuron model, Applied Soft Computing, 7, (2007).
20. Zhang, G.P.: Time series forecasting using a hybrid ARIMA and neural network model, Neurocomputing, 50, (2003).