Neural Optimization of Evolutionary Algorithm Strategy Parameters
Hiral Patel

Outline
- Why optimize parameters of an EA?
- Why use neural networks?
- What has been done so far in this field?
- Experimental Model
- Preliminary Results and Conclusion
- Questions

Why optimize parameters of an EA?
- Faster convergence
- Better overall results
- Avoid premature convergence

Why use neural networks?
- Ability to learn
- Adaptability
- Pattern recognition
- Faster than using another EA

What has been done so far in this field?
- Machine learning primarily used to optimize ES and EP
- Optimized mutation operators
- Little has been done to optimize GA parameters

Experimental Model Outline
- Neural Network Basics
- Hebbian Learning
- Parameters of the Genetic Algorithm to be optimized
- Neural Network Inputs

Neural Network Basics
[Figure: weight-update model of a single neuron q. The vector input signal x(k) ∈ R^(n×1) is weighted by the synaptic weights w_q1(k), ..., w_qn(k) and combined with the bias b_q(k) to form the activation v_q(k); a sigmoid activation function f() produces the neuron response (output) y_q(k), which is compared with the desired neuron response d_q(k), and g() = f'() denotes the derivative of the activation function. Adapted from: Ham, F. M. and Kostanic, I., Principles of Neurocomputing for Science and Engineering, McGraw-Hill, NY, 2001.]
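The diagram corresponds to the standard single-neuron model. A minimal Python sketch of the forward pass is shown below; the function names (`sigmoid`, `neuron_forward`) and the example values are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def sigmoid(v):
    """Sigmoid activation function f()."""
    return 1.0 / (1.0 + np.exp(-v))

def sigmoid_deriv(v):
    """g() = f'(), the derivative of the sigmoid."""
    s = sigmoid(v)
    return s * (1.0 - s)

def neuron_forward(x, w, b):
    """Single neuron q: activation v_q = w . x + b_q, output y_q = f(v_q)."""
    v = np.dot(w, x) + b
    return v, sigmoid(v)

# Example: n = 4 inputs with random weights (placeholder values)
rng = np.random.default_rng(0)
x = rng.random(4)   # vector input signal x(k)
w = rng.random(4)   # synaptic weights w_q1 .. w_qn
b = 0.1             # bias b_q
v, y = neuron_forward(x, w, b)
```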

Hebbian Learning
- Unsupervised learning
- Time-dependent
- Learning signal and forgetting factor

Hebb Learning for a single neuron
[Figure: standard Hebbian learning rule applied to a single neuron with inputs x_0, x_1, ..., x_n, weights w_0, w_1, ..., w_n, activation v, and output y = f(v), parameterized by a learning signal and a forgetting factor. Adapted from: Ham, F. M. and Kostanic, I., Principles of Neurocomputing for Science and Engineering, McGraw-Hill, NY, 2001.]
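A minimal sketch of one common form of the unsupervised Hebbian update with a forgetting (weight-decay) factor, assuming the plain activity-product learning signal r = y; the parameter names `eta` and `gamma` and their values are illustrative, not from the slides.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def hebbian_update(w, x, eta=0.01, gamma=0.001):
    """One unsupervised Hebbian step with a forgetting factor.

    w(k+1) = w(k) + eta * y(k) * x(k) - gamma * w(k),
    where y(k) = f(w(k) . x(k)) is the neuron output; here the
    learning signal is simply the output itself.
    """
    y = sigmoid(np.dot(w, x))
    return w + eta * y * x - gamma * w, y

# Example: repeatedly present one input pattern
rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=5)
x = rng.random(5)
for _ in range(100):
    w, y = hebbian_update(w, x)
```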

Parameters of the Genetic Algorithm to be optimized
- Crossover Probability
- Crossover Cell Divider
- Cell Crossover Probability
- Mutation Probability
- Mutation Cell Divider
- Cell Mutation Probability
- Bit Mutation Probability
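These seven strategy parameters are the values the neural network adjusts during the run. A hypothetical container for them is sketched below; the field names paraphrase the bullet list and the default values are placeholders for illustration only.

```python
from dataclasses import dataclass

@dataclass
class GAStrategyParams:
    """Strategy parameters of the GA adjusted by the neural network.

    Default values are placeholders, not the values used in the experiments.
    """
    crossover_prob: float = 0.8        # Crossover Probability
    crossover_cell_divider: int = 4    # Crossover Cell Divider
    cell_crossover_prob: float = 0.5   # Cell Crossover Probability
    mutation_prob: float = 0.05        # Mutation Probability
    mutation_cell_divider: int = 4     # Mutation Cell Divider
    cell_mutation_prob: float = 0.1    # Cell Mutation Probability
    bit_mutation_prob: float = 0.01    # Bit Mutation Probability
```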

Neural Network Inputs
- Current parameter values
- Variance
- Mean
- Max fitness
- Average bit changes for crossover
- Constant parameters of the GA
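A minimal sketch of how such an input vector might be assembled each generation from the current parameter values and population fitness statistics; the helper name `build_nn_inputs`, the ordering, and the dummy values are assumptions, not taken from the slides.

```python
import numpy as np

def build_nn_inputs(params, fitnesses, avg_bit_changes, ga_constants):
    """Assemble the per-generation input vector for the neural network.

    params          -- current values of the GA strategy parameters
    fitnesses       -- fitness values of the current population
    avg_bit_changes -- average number of bits changed by crossover
    ga_constants    -- fixed GA settings (e.g. population size)
    """
    stats = [np.var(fitnesses), np.mean(fitnesses), np.max(fitnesses)]
    return np.concatenate([
        np.asarray(params, dtype=float),
        np.asarray(stats, dtype=float),
        [float(avg_bit_changes)],
        np.asarray(ga_constants, dtype=float),
    ])

# Example with dummy values (population size 800, replacement size 1600
# echo the experimental setup described in the results slide)
x = build_nn_inputs(
    params=[0.8, 4, 0.5, 0.05, 4, 0.1, 0.01],
    fitnesses=np.random.default_rng(2).random(800),
    avg_bit_changes=3.2,
    ga_constants=[800, 1600],
)
```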

Preliminary Results
- Tests were run on the Knapsack problem with dataset 3, population size 800, replacement size 1600
- The learning signal and forgetting factor are not yet tuned well enough to show better performance with the NN

[Plot: Output for 1600 generations]

[Plot: Probabilities for 1600 generations]

Conclusion
It may be possible to get better performance out of a neural-optimized EA, as long as the (unsupervised) neural network is able to adapt to the changes quickly and to recognize local minima.

Possible Future Work
Use an ES to optimize the parameters, use a SOM to perform feature extraction on the optimized parameter values, use the SOM output as codebook vectors for an LVQ network to classify the output of the original ES, and then use the classifications for supervised training of a Levenberg-Marquardt backpropagation network to form a rule set.

Questions?