Research on Advanced Training Algorithms of Neural Networks
Hao Yu
Ph.D. Defense, Aug 17th, 2011
Supervisor: Bogdan Wilamowski
Committee Members: Hulya Kirkici, Vishwani D. Agrawal, Vitaly Vodyanoy
University Reader: Weikuan Yu

Outline
Why Neural Networks
Network Architectures
Training Algorithms
How to Design Neural Networks
Problems in Second Order Algorithms
Proposed Second Order Computation
Proposed Forward-Only Algorithm
Neural Network Trainer
Conclusion & Recent Research

What is a Neural Network?
Classification: separate two interleaved (twisted) groups of points, red circles and blue stars [1].

What is a Neural Network?
Interpolation: given the 25 points (red), estimate the values at points A and B (black).

What is a Neural Network?
(Figure panels: human solutions vs. neural network solutions.)

What is a Neural Network?
Recognition: recover the original digit images (right) from the noised images (left). (Figure panels: noised images, original images.)

What is a Neural Network?
"Learn to behave": a neural network can build (approximate) any relationship between inputs and outputs [2]. (Figure panels: learning process, "behave".)

Why Neural Networks
What makes neural networks different: trained on the given patterns (5×5 = 25 points), then tested on a much denser grid of testing patterns (41×41 = 1,681 points).

Different Approximators
Test results of different approximators (figure panels): Mamdani fuzzy, TSK fuzzy, neuro-fuzzy, SVM-RBF, SVM-polynomial, nearest, linear, spline, and cubic interpolation (MATLAB function interp2), and neural network.

Comparison
Neural networks potentially behave as the best approximator among the compared methods.

Methods of computational intelligence | Sum of squared errors (values not recovered except where shown)
Fuzzy inference system – Mamdani |
Fuzzy inference system – TSK |
Neuro-fuzzy system |
Support vector machine – RBF kernel |
Support vector machine – polynomial kernel |
Interpolation – nearest |
Interpolation – linear |
Interpolation – spline |
Interpolation – cubic |
Neural network – 4 neurons in FCC network |
Neural network – 5 neurons in FCC network | 0.4648

Outline
Why Neural Networks
Network Architectures
Training Algorithms
How to Design Neural Networks
Problems in Second Order Algorithms
Proposed Second Order Computation
Proposed Forward-Only Algorithm
Neural Network Trainer
Conclusion & Recent Research

A Single Neuron
Two basic computations:
(1) net_j = Σ_i w_{j,i} x_i + w_{j,0}  (weighted sum of the inputs plus the bias weight)
(2) o_j = f(net_j)  (activation function applied to the net value)
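To make the two computations concrete, here is a minimal Python sketch of a single neuron; the tanh activation and the specific weight values are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def neuron_output(x, w, bias, f=np.tanh):
    """Single-neuron computation: weighted sum plus bias, then activation."""
    net = np.dot(w, x) + bias   # (1) net value
    return f(net)               # (2) neuron output

# Example: a neuron with two inputs (weights and inputs chosen arbitrarily)
x = np.array([1.0, -0.5])
w = np.array([0.8, 0.3])
print(neuron_output(x, w, bias=0.1))
```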

Network Architectures
The multilayer perceptron (MLP) network is the most popular architecture.
Networks with connections across layers, such as bridged multilayer perceptron (BMLP) networks and fully connected cascade (FCC) networks, are much more powerful than MLP networks.
Wilamowski, B. M., Hunter, D., and Malinowski, A., "Solving parity-N problems with feedforward neural networks," Proc. IEEE IJCNN, IEEE Press.
M. E. Hohil, D. Liu, and S. H. Smith, "Solving the N-bit parity problem using neural networks," Neural Networks, vol. 12.
Example: smallest networks for solving the parity-7 problem (analytical results). (Figure panels: MLP network, BMLP network, FCC network.)
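Since parity-N problems are used as the benchmark throughout this work, a small illustrative snippet for generating the parity-N training patterns may be useful; the 0/1 encoding shown is an assumption, as the slides do not specify the encoding:

```python
from itertools import product

def parity_patterns(n):
    """Generate all 2**n input patterns of the parity-n problem.
    Each pattern is (inputs, target); target is 1 for an odd number of ones."""
    patterns = []
    for bits in product([0, 1], repeat=n):
        target = sum(bits) % 2
        patterns.append((list(bits), target))
    return patterns

# Parity-7 has 2**7 = 128 training patterns
print(len(parity_patterns(7)))   # -> 128
```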

Outline
Why Neural Networks
Network Architectures
Training Algorithms
How to Design Neural Networks
Problems in Second Order Algorithms
Proposed Second Order Computation
Proposed Forward-Only Algorithm
Neural Network Trainer
Conclusion & Recent Research

Error Back-Propagation (EBP) Algorithm
The most popular algorithm for neural network training.
Update rule of the EBP algorithm [3]: w_{k+1} = w_k − α g_k, where g is the gradient of the error with respect to the weights and α is the learning constant.
Developed based on gradient (steepest-descent) optimization.
Advantages:
– Easy
– Stable
Disadvantages:
– Very limited power
– Slow convergence
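A minimal sketch of the EBP-style weight update on a toy error surface, assuming a gradient function is available from some external routine (the names and the quadratic example are illustrative, not from the trainer described later):

```python
import numpy as np

def ebp_step(w, gradient, alpha=0.1):
    """One steepest-descent (EBP) update: move the weights against the error gradient."""
    return w - alpha * gradient(w)

# Toy quadratic error surface E(w) = ||w||^2, whose gradient is 2w
grad = lambda w: 2 * w
w = np.array([1.0, -2.0])
for _ in range(50):
    w = ebp_step(w, grad)
print(w)  # drifts slowly toward the minimum at the origin
```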

Improvements of EBP
– Improved gradient using momentum [4]
– Adjusted (adaptive) learning constant [5-6]

Newton Algorithm
Newton's algorithm uses the derivative of the gradient (the Hessian matrix H) to evaluate how the gradient changes and to select a proper step in each direction [7]: w_{k+1} = w_k − H^{-1} g.
Advantages:
– Fast convergence
Disadvantages:
– Not stable
– Requires computation of second-order derivatives

Gauss-Newton Algorithm
The Gauss-Newton algorithm eliminates the second-order derivatives of Newton's method by introducing the Jacobian matrix J: H ≈ J^T J, so w_{k+1} = w_k − (J^T J)^{-1} J^T e.
Advantages:
– Fast convergence
Disadvantages:
– Not stable

Levenberg-Marquardt (LM) Algorithm
The LM algorithm blends the EBP algorithm and the Gauss-Newton algorithm [8-9]: w_{k+1} = w_k − (J^T J + μI)^{-1} J^T e.
– When the evaluation error increases, μ is increased and the LM algorithm moves toward the EBP algorithm.
– When the evaluation error decreases, μ is decreased and the LM algorithm moves toward the Gauss-Newton method.
Advantages:
– Fast convergence
– Stable training
Compared with first-order algorithms, the LM algorithm has a much more powerful search ability, but it also requires more complex computation.
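A minimal sketch of one LM update, under the assumption that the Jacobian J of the residual vector e (J = de/dw) for the current weights is supplied by some external routine; the μ adaptation shown is the simple increase/decrease scheme described above:

```python
import numpy as np

def lm_step(w, J, e, mu):
    """One Levenberg-Marquardt update for residuals e(w) with Jacobian J = de/dw:
    solve (J^T J + mu*I) dw = J^T e, then step w <- w - dw."""
    A = J.T @ J + mu * np.eye(len(w))
    dw = np.linalg.solve(A, J.T @ e)
    return w - dw

def lm_adapt(mu, new_error, old_error, factor=10.0):
    """Raise mu (toward EBP) when the error grows, lower it (toward Gauss-Newton) otherwise."""
    return mu * factor if new_error > old_error else mu / factor
```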

Comparison of Different Algorithms
Training XOR patterns using different algorithms (blank cells were not recovered):

Algorithm | Success rate | Average iterations | Average time (ms)
EBP (α = 0.1) | 100% | |
EBP (α = 10) | 18% | |
EBP with momentum (α = 0.1, m = 0.5) | 100% | |
EBP with momentum (α = 10, m = 0.5) | | |
EBP with adjusted learning constant | 100% | | 41.19
Gauss-Newton algorithm | 6% | 1.29 | 2.29
LM algorithm | 100% | 5.49 | 4.35

Outline
Why Neural Networks
Network Architectures
Training Algorithms
How to Design Neural Networks
Problems in Second Order Algorithms
Proposed Second Order Computation
Proposed Forward-Only Algorithm
Neural Network Trainer
Conclusion & Recent Research

How to Design Neural Networks
Traditional design:
– Most popular training algorithm: EBP algorithm
– Most popular network architecture: MLP network
Results:
– Large neural networks
– Poor generalization ability
– Many engineers moved to other methods, such as fuzzy systems

How to Design Neural Networks
B. M. Wilamowski, "Neural Network Architectures and Learning Algorithms: How Not to Be Frustrated with Neural Networks," IEEE Ind. Electron. Mag., vol. 3, no. 4, 2009.
– Over-fitting problem
– Mismatch between the number of training patterns and the network size
Recommended design policy: compact networks benefit generalization ability.
– Powerful training algorithm: LM algorithm
– Efficient network architectures: BMLP and FCC networks
(Figure panels: fitting results with 2, 3, 4, 5, 6, 7, 8, and 9 neurons.)

Outline
Why Neural Networks
Network Architectures
Training Algorithms
How to Design Neural Networks
Problems in Second Order Algorithms
Proposed Second Order Computation
Proposed Forward-Only Algorithm
Neural Network Trainer
Conclusion & Recent Research

Problems in Second Order Algorithms
Matrix inversion:
– In the nature of second order algorithms
– The size of the matrix is proportional to the size of the network
– As the network size increases, second order algorithms may not be as efficient as first order algorithms

Problems in Second Order Algorithms
Architecture limitation
M. T. Hagan and M. Menhaj, "Training feedforward networks with the Marquardt algorithm," IEEE Trans. on Neural Networks, vol. 5, no. 6, 1994 (2,474 citations).
– Only developed for training MLP networks
– Not suitable for designing compact networks
Neuron-by-Neuron (NBN) Algorithm
B. M. Wilamowski, N. J. Cotton, O. Kaynak and G. Dundar, "Computing Gradient Vector and Jacobian Matrix in Arbitrarily Connected Neural Networks," IEEE Trans. on Industrial Electronics, vol. 55, no. 10, Oct. 2008.
– SPICE computation routines
– Capable of training arbitrarily connected neural networks
– Compact neural network design: NBN algorithm + BMLP (FCC) networks
– Very complex computation

Problems in Second Order Algorithms
Memory limitation:
– The Jacobian matrix J has (P × M) rows and N columns, where
– P is the number of training patterns
– M is the number of outputs
– N is the number of weights
In practice the number of training patterns is huge, and is encouraged to be as large as possible.
MNIST handwritten digit database [10]: 60,000 training patterns, 784 inputs and 10 outputs. Even with the simplest network architecture (1 neuron per output), the required memory for J is nearly 35 GB, far beyond what most Windows compilers can address.
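A quick back-of-the-envelope check of that figure, assuming double-precision storage (8 bytes per Jacobian element) and 785 weights per output neuron (784 inputs + 1 bias, implied by the single-layer architecture above):

```python
P, M = 60_000, 10            # MNIST training patterns and network outputs
N = 10 * (784 + 1)           # 10 output neurons, each with 784 weights + 1 bias
elements = P * M * N         # Jacobian rows (P*M) times columns (N)
bytes_needed = elements * 8  # 8 bytes per double-precision element
print(bytes_needed / 2**30)  # ~35 GB
```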

Problems in Second Order Algorithms
Computational duplication:
– Forward computation: calculate the errors
– Backward computation: error backpropagation
In second order algorithms, in both the Hagan-Menhaj LM algorithm and the NBN algorithm, the error backpropagation process has to be repeated for each output.
– Very complex
– Inefficient for networks with multiple outputs

Outline
Why Neural Networks
Network Architectures
Training Algorithms
How to Design Neural Networks
Problems in Second Order Algorithms
Proposed Second Order Computation
Proposed Forward-Only Algorithm
Neural Network Trainer
Conclusion & Recent Research

Proposed Second Order Computation – Basic Theory
Matrix algebra [11]: the product J^T J can be computed either row-by-column on the whole matrices, or as a sum of column-row (outer) products, one per row of J.
In neural network training:
– Each pattern is related to one row of the Jacobian matrix
– Patterns are independent of each other

Memory comparison
Multiplication method | Elements to store
Row-column | (P × M) × N + N × N + N
Column-row | N × N + N
Difference | (P × M) × N

Computation comparison (both methods require the same number of operations)
Multiplication method | Additions and multiplications
Row-column | (P × M) × N × N
Column-row | N × N × (P × M)

Proposed Second Order Computation – Derivation
Hagan-Menhaj LM algorithm or NBN algorithm: build the full Jacobian J, then compute J^T J and the gradient g = J^T e.
Improved computation: accumulate the same quantities pattern by pattern, without ever storing J. For each pattern p (and output) with Jacobian row j_p and error e_p, the quasi-Hessian Q = Σ_p j_p^T j_p and the gradient g = Σ_p j_p e_p are built up incrementally; the resulting weight update −(Q + μI)^{-1} g is identical to the traditional LM update.

Proposed Second Order Computation – Pseudo Code
Properties:
– No need for Jacobian matrix storage
– Vector operations instead of matrix operations
Main contributions:
– Significant memory reduction
– The memory reduction also benefits computation speed
– NO tradeoff!
The memory limitation caused by Jacobian matrix storage in second order algorithms is solved.
Again, considering the MNIST problem, the memory cost for storing Jacobian elements is reduced from more than 35 gigabytes to roughly 30.7 kilobytes.
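The pseudo code itself appeared as an image on the slide; the sketch below only illustrates the idea under the assumptions stated above (one Jacobian row j_p and error e_p per pattern/output): the quasi-Hessian and gradient are accumulated with vector operations, so the full Jacobian is never stored.

```python
import numpy as np

def lm_update_without_jacobian(rows_and_errors, n_weights, mu):
    """Accumulate Q = sum(j^T j) and g = sum(j * e) pattern by pattern,
    then solve the usual LM system (Q + mu*I) dw = g.
    The caller applies w_new = w - dw (same sign convention as the LM step above)."""
    Q = np.zeros((n_weights, n_weights))
    g = np.zeros(n_weights)
    for j_p, e_p in rows_and_errors:   # one Jacobian row and one error per pattern/output
        Q += np.outer(j_p, j_p)        # column-row (outer) product, no J stored
        g += j_p * e_p
    return np.linalg.solve(Q + mu * np.eye(n_weights), g)
```

Because `rows_and_errors` can be any generator that yields one (row, error) pair at a time, the memory footprint stays at a single Jacobian row.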

Proposed Second Order Computation – Experimental Results

Memory comparison (parity-N problems)
| N=14 | N=16
Patterns | 16,384 | 65,536
Structures | 15 neurons | 17 neurons
Jacobian matrix sizes | 5,406,720 | 27,852,800
Weight vector sizes | 330 | 425
Average iterations | |
Success rate | 13% | 9%
Actual memory cost, traditional LM | 79.21 MB | 385.22 MB
Actual memory cost, improved LM | 3.41 MB | 4.30 MB

Time comparison (parity-N problems; blank cells were not recovered)
| N=9 | N=11 | N=13 | N=15
Patterns | 512 | 2,048 | 8,192 | 32,768
Neurons | | | |
Weights | | | |
Average iterations | | | |
Success rate | 58% | 37% | 24% | 12%
Averaged training time, traditional LM (s) | | | |
Averaged training time, improved LM (s) | | | |

Outline
Why Neural Networks
Network Architectures
Training Algorithms
How to Design Neural Networks
Problems in Second Order Algorithms
Proposed Second Order Computation
Proposed Forward-Only Algorithm
Neural Network Trainer
Conclusion & Recent Research

Traditional Computation – Forward Computation
For each training pattern p:
– Calculate the net value for neuron j (as in Eq. (1))
– Calculate the output of neuron j (as in Eq. (2))
– Calculate the slope s_j = f'(net_j) for neuron j
– Calculate the output at network output m
– Calculate the error at network output m, e_m = d_m − o_m

Traditional Computation – Backward Computation
For first order algorithms:
– Calculate the delta (backpropagated error) [12]
– Assemble the gradient vector
For second order algorithms:
– Calculate the delta
– Calculate the Jacobian elements
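To see why the backward pass must be repeated once per output in the traditional scheme, here is an illustrative sketch for a plain one-hidden-layer MLP with tanh neurons and no biases; the weight layout and activation are simplifying assumptions, not the NBN implementation:

```python
import numpy as np

def jacobian_rows(x, W1, W2):
    """Jacobian rows d e_m / d w for one training pattern of a one-hidden-layer
    MLP with tanh neurons and no biases, where e_m = d_m - o_m."""
    net1 = W1 @ x
    h = np.tanh(net1)
    s1 = 1.0 - h ** 2                    # tanh slopes, hidden layer
    net2 = W2 @ h
    o = np.tanh(net2)
    s2 = 1.0 - o ** 2                    # tanh slopes, output layer
    rows = []
    for m in range(len(o)):              # backward pass repeated once per output
        delta2 = np.zeros(len(o))
        delta2[m] = s2[m]                # delta at the m-th output
        delta1 = s1 * (W2.T @ delta2)    # delta backpropagated to the hidden layer
        d_o_dw = np.concatenate([np.outer(delta1, x).ravel(),
                                 np.outer(delta2, h).ravel()])
        rows.append(-d_o_dw)             # d e_m / d w = -d o_m / d w
    return np.array(rows)
```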

Proposed Forward-Only Algorithm
Extend the concept of the backpropagation factor δ:
– Original definition: δ_{m,j} is backpropagated from output m to neuron j
– Our definition: δ_{k,j} is backpropagated from neuron k to neuron j

Proposed Forward-Only Algorithm
Regular table:
– Lower triangular elements (k ≥ j): the matrix of δ values has a triangular shape
– Diagonal elements: δ_{k,k} = s_k
– Upper triangular elements: weight connections between neurons
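A minimal sketch of how such a table can be filled in a single forward pass, assuming neurons are numbered in topological order and using the propagation rule δ_{k,j} = s_k Σ_i w_{k,i} δ_{i,j}; this rule is a paraphrase of the forward-only idea, not copied from the slides:

```python
import numpy as np

def delta_table(slopes, w):
    """Fill the 'regular table' of delta values in a single forward pass.
    slopes[k] : slope s_k of neuron k (neurons numbered in topological order)
    w[k][i]   : weight from the output of neuron i to an input of neuron k (0 if absent)
    delta[k][j] acts as the signal gain between neuron j and neuron k."""
    n = len(slopes)
    delta = np.zeros((n, n))
    for j in range(n):
        delta[j, j] = slopes[j]                    # diagonal: delta_kk = s_k
        for k in range(j + 1, n):                  # lower triangle, computed moving forward
            acc = sum(w[k][i] * delta[i, j] for i in range(j, k))
            delta[k, j] = slopes[k] * acc
    return delta
```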

Proposed Forward-Only Algorithm Train arbitrarily connected neural networks

Proposed Forward-Only Algorithm
Training networks with multiple outputs: the more outputs a network has, the more efficient the forward-only algorithm becomes. (Figure panels: 1 output, 2 outputs, 3 outputs, 4 outputs.)

Proposed Forward-Only Algorithm
Pseudo code of the two algorithms: in the forward-only computation, the backward computation (highlighted in the left figure, traditional forward-backward algorithm) is replaced by extra computation in the forward process (highlighted in the right figure, forward-only algorithm).

Proposed Forward-Only Algorithm
Computation cost estimation.
Properties of the forward-only algorithm:
– Simplified computation: organized in a regular table with a general formula
– Easy to adapt for training arbitrarily connected neural networks
– Improved computation efficiency for networks with multiple outputs
Tradeoff:
– Extra memory is required to store the extended δ array

Hagan-Menhaj computation | Forward part | Backward part
+/− | nn×nx + 3nn + no | no×nn×ny
×/÷ | nn×nx + 4nn | no×nn×ny + no×(nn − no)
exp | nn | 0

Forward-only computation | Forward part | Backward part
+/− | nn×nx + 3nn + no + nn×ny×nz | 0
×/÷ | nn×nx + 4nn + nn×ny + nn×ny×nz | 0
exp | nn | 0

Difference (traditional minus forward-only)
+/− | nn×ny×(no − 1)
×/÷ | nn×ny×(no − 1) + no×(nn − no) − nn×ny×nz
exp | 0

(MLP networks with one hidden layer; 20 inputs.)

Proposed Forward-Only Algorithm
Experiments: training compact neural networks with good generalization ability.
(Table: success rate, average iterations, and average training time of EBP vs. the forward-only (FO) algorithm for increasing numbers of neurons; EBP fails to converge on the most compact networks, while the FO success rate grows toward 100% as neurons are added. Most numeric entries were not recovered.)
Figure panels:
– FO: SSE_train = 0.0044
– EBP: SSE_train = 0.0764 (under-fitting)
– EBP, 12 neurons: SSE_train = 0.0018 (over-fitting)

Proposed Forward-Only Algorithm
Experiments: comparison of computation efficiency.
(Tables: forward and backward time cost in ms/iteration and relative time of the traditional forward-backward computation vs. the forward-only computation for three problems: ASCII to images, forward kinematics [13], and error correction of an 8-bit signal. Numeric entries were not recovered.)

Outline
Why Neural Networks
Network Architectures
Training Algorithms
How to Design Neural Networks
Problems in Second Order Algorithms
Proposed Second Order Computation
Proposed Forward-Only Algorithm
Neural Network Trainer
Conclusion & Recent Research

Software
The NBN Trainer tool is developed in Visual C++ and is used for training neural networks:
– Pattern classification and recognition
– Function approximation
Available online (currently free).

Parity-2 Problem
Parity-2 (XOR) training patterns.

Outline
Why Neural Networks
Network Architectures
Training Algorithms
How to Design Neural Networks
Problems in Second Order Algorithms
Proposed Second Order Computation
Proposed Forward-Only Algorithm
Neural Network Trainer
Conclusion & Recent Research

Conclusion
– Second order algorithms are more efficient and advanced for training neural networks.
– The proposed second order computation removes Jacobian matrix storage and multiplication, solving the memory limitation.
– The proposed forward-only algorithm simplifies the computation process in second order training: a regular table plus a general formula.
– The proposed forward-only algorithm can handle arbitrarily connected neural networks.
– The proposed forward-only algorithm has a speed benefit for networks with multiple outputs.

Recent Research
RBF networks:
– ErrCor algorithm: a hierarchical training algorithm
– The network size increases based on the training information
– No more trial-by-trial design of the network size
Applications of neural networks (future work):
– Dynamic controller design
– Smart grid distribution systems
– Pattern recognition in EDA software design

References
[1] J. X. Peng, K. Li, and G. W. Irwin, "A New Jacobian Matrix for Optimal Learning of Single-Layer Neural Networks," IEEE Trans. on Neural Networks, vol. 19, no. 1, Jan. 2008.
[2] K. Hornik, M. Stinchcombe, and H. White, "Multilayer Feedforward Networks Are Universal Approximators," Neural Networks, vol. 2, issue 5, 1989.
[3] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, 1986.
[4] V. V. Phansalkar and P. S. Sastry, "Analysis of the back-propagation algorithm with momentum," IEEE Trans. on Neural Networks, vol. 5, no. 3, 1994.
[5] M. Riedmiller and H. Braun, "A direct adaptive method for faster backpropagation learning: The RPROP algorithm," Proc. International Conference on Neural Networks, San Francisco, CA, 1993.
[6] S. E. Fahlman, "Faster-learning variations on back-propagation: An empirical study," in G. E. Hinton, T. J. Sejnowski, and D. S. Touretzky, eds., 1988 Connectionist Models Summer School, Morgan Kaufmann, San Mateo, CA.
[7] M. R. Osborne, "Fisher's method of scoring," Internat. Statist. Rev., 1992.
[8] K. Levenberg, "A method for the solution of certain problems in least squares," Quarterly of Applied Mathematics, 1944.
[9] D. Marquardt, "An algorithm for least-squares estimation of nonlinear parameters," SIAM J. Appl. Math., vol. 11, no. 2, 1963.
[10] L. J. Cao, S. S. Keerthi, C.-J. Ong, J. Q. Zhang, U. Periyathamby, X. J. Fu, and H. P. Lee, "Parallel sequential minimal optimization for the training of support vector machines," IEEE Trans. on Neural Networks, vol. 17, no. 4, 2006.
[11] D. C. Lay, Linear Algebra and Its Applications, 3rd edition, Addison-Wesley, p. 124.
[12] R. Hecht-Nielsen, "Theory of the Back Propagation Neural Network," Proc. IEEE IJCNN, IEEE Press, New York, 1989.
[13] N. J. Cotton and B. M. Wilamowski, "Compensation of Nonlinearities Using Neural Networks Implemented on Inexpensive Microcontrollers," IEEE Trans. on Industrial Electronics, vol. 58, no. 3, March 2011.

Prepared Publications – Journals
H. Yu, T. T. Xie, Stanisław Paszczyński and B. M. Wilamowski, "Advantages of Radial Basis Function Networks for Dynamic System Design," IEEE Trans. on Industrial Electronics (accepted; scheduled for publication in December 2011)
H. Yu, T. T. Xie and B. M. Wilamowski, "Error Correction – A Robust Learning Algorithm for Designing Compact Radial Basis Function Networks," IEEE Trans. on Neural Networks (major revision)
T. T. Xie, H. Yu, J. Hewlett, Pawel Rozycki and B. M. Wilamowski, "Fast and Efficient Second Order Method for Training Radial Basis Function Networks," IEEE Trans. on Neural Networks (major revision)
A. Malinowski and H. Yu, "Comparison of Various Embedded System Technologies for Industrial Applications," IEEE Trans. on Industrial Informatics, vol. 7, issue 2, May 2011
B. M. Wilamowski and H. Yu, "Improved Computation for Levenberg Marquardt Training," IEEE Trans. on Neural Networks, vol. 21, no. 6, June 2010 (14 citations)
B. M. Wilamowski and H. Yu, "Neural Network Learning Without Backpropagation," IEEE Trans. on Neural Networks, vol. 21, no. 11, Nov. 2010 (5 citations)
Pierluigi Siano, Janusz Kolbusz, H. Yu and Carlo Cecati, "Real Time Operation of a Smart Microgrid via FCN Networks and Optimal Power Flow," IEEE Trans. on Industrial Informatics (under review)

Prepared Publications – Conferences
H. Yu and B. M. Wilamowski, "Efficient and Reliable Training of Neural Networks," IEEE Human System Interaction Conference, HSI 2009, Catania, Italy, May 21-23, 2009 (best paper award in the Computational Intelligence section) (11 citations)
H. Yu and B. M. Wilamowski, "C++ Implementation of Neural Networks Trainer," 13th IEEE Intelligent Engineering Systems Conference, INES 2009, Barbados, April 16-18, 2009 (8 citations)
H. Yu and B. M. Wilamowski, "Fast and Efficient Training of Neural Networks," in Proc. 3rd IEEE Human System Interaction Conf., HSI 2010, Rzeszow, Poland, May 13-15, 2010 (2 citations)
H. Yu and B. M. Wilamowski, "Neural Network Training with Second Order Algorithms," monograph by Springer on Human-Computer Systems Interaction: Background and Applications, 31st October (accepted)
H. Yu, T. T. Xie, M. Hamilton and B. M. Wilamowski, "Comparison of Different Neural Network Architectures for Digit Image Recognition," in Proc. 4th IEEE Human System Interaction Conf., HSI 2011, Yokohama, Japan, May 19-21, 2011
N. Pham, H. Yu and B. M. Wilamowski, "Neural Network Trainer through Computer Networks," 24th IEEE International Conference on Advanced Information Networking and Applications, AINA 2010, Perth, Australia, April 20-23, 2010 (1 citation)
T. T. Xie, H. Yu and B. M. Wilamowski, "Replacing Fuzzy Systems with Neural Networks," in Proc. 3rd IEEE Human System Interaction Conf., HSI 2010, Rzeszow, Poland, May 13-15, 2010
T. T. Xie, H. Yu and B. M. Wilamowski, "Comparison of Traditional Neural Networks and Radial Basis Function Networks," in Proc. 20th IEEE International Symposium on Industrial Electronics, ISIE 2011, Gdansk, Poland, June 2011 (accepted)

Prepared Publications – Chapters for the IE Handbook (2nd Edition)
H. Yu and B. M. Wilamowski, "Levenberg Marquardt Training," Industrial Electronics Handbook, vol. 5 – INTELLIGENT SYSTEMS, 2nd Edition, 2010, chapter 12, pp. 12-1 to 12-16, CRC Press.
H. Yu and M. Carroll, "Interactive Website Design Using Python Script," Industrial Electronics Handbook, vol. 4 – INDUSTRIAL COMMUNICATION SYSTEMS, 2nd Edition, 2010, chapter 62, pp. 62-1 to 62-8, CRC Press.
B. M. Wilamowski, H. Yu and N. Cotton, "Neuron by Neuron Algorithm," Industrial Electronics Handbook, vol. 5 – INTELLIGENT SYSTEMS, 2nd Edition, 2010, chapter 13, pp. 13-1 to 13-24, CRC Press.
T. T. Xie, H. Yu and B. M. Wilamowski, "Neuro-fuzzy System," Industrial Electronics Handbook, vol. 5 – INTELLIGENT SYSTEMS, 2nd Edition, 2010, chapter 20, pp. 20-1 to 20-9, CRC Press.
B. M. Wilamowski, H. Yu and K. T. Chung, "Parity-N Problems as a Vehicle to Compare Efficiency of Neural Network Architectures," Industrial Electronics Handbook, vol. 5 – INTELLIGENT SYSTEMS, 2nd Edition, 2010, chapter 10, pp. 10-1 to 10-8, CRC Press.

Thanks