CAP6938 Neuroevolution and Developmental Encoding: Evolving Adaptive Neural Networks. Dr. Kenneth Stanley, October 23, 2006

Remember This Thing? What’s missing from current neural models?

An ANN Link is a Synapse (from Dr. George Johnson)

What Happens at Synapses?
Weighted signal transmission, but also:
– Strengthening
– Weakening
– Sensitization
– Habituation
– Hebbian learning
None of these lifetime weight changes happen in static network models

Why Should Weights Change?
The world changes
Evolution cannot predict all future possibilities
With learning, evolution can succeed with less accurate innate wiring
The Baldwin Effect:
– Learning smooths the fitness landscape
– Traits that initially require learning eventually become instinct if the environment is consistent
If the mind is static, you can't learn!

How Should Weights Change?
Remember Hebbian learning? (lecture 3)
– Weight update based on correlation: Δw_ij = α · x_i · x_j (product of the two connected activations)
– Incremental version: w_ij(t+1) = w_ij(t) + α · x_i(t) · x_j(t)
How can this be made to evolve?
– Which weights should be adaptive? Which rule should they follow if there is more than one?
– Which weights should be fixed?
– To what degree should they adapt? Evolve an α (learning-rate) parameter on each link
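A minimal sketch of that last idea, assuming simple per-link Hebbian updates; the names Connection and hebbian_step are illustrative, not from the lecture or any NEAT codebase:

```python
class Connection:
    """One link: an evolvable weight plus an evolvable Hebbian rate alpha.
    (Illustrative structure, not the lecture's or NEAT's actual genome.)"""
    def __init__(self, src, dst, weight, alpha, adaptive=True):
        self.src, self.dst = src, dst
        self.weight = weight      # evolved starting weight
        self.alpha = alpha        # evolved per-link learning rate
        self.adaptive = adaptive  # evolution may also decide to keep a link fixed

def hebbian_step(connections, activations):
    """One lifetime update: delta_w = alpha * pre * post for adaptive links."""
    for c in connections:
        if c.adaptive:
            pre, post = activations[c.src], activations[c.dst]
            c.weight += c.alpha * pre * post

# Example: the weight grows whenever both endpoints are active together.
conns = [Connection(src=0, dst=1, weight=0.2, alpha=0.05)]
hebbian_step(conns, activations={0: 0.9, 1: 0.7})
```

Evolution would then mutate alpha (and the adaptive flag) alongside the weights themselves.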

Floreano's Weight Update Equations
Plain Hebb rule: strengthens the synapse when the pre- and postsynaptic nodes fire together
Postsynaptic rule: weakens the synapse if the postsynaptic node fires alone
Presynaptic rule: weakens the synapse if the presynaptic node fires alone
Covariance rule: strengthens when activities are correlated, weakens when not
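The equations themselves appeared as figures on the slide. The sketch below reconstructs the four rules from memory of Floreano and Urzelai's adaptive-synapse work, so the exact forms and constants should be checked against the paper; x is the presynaptic activation, y the postsynaptic one, and weights stay in [0, 1]:

```python
import math

def hebb(w, x, y):
    # Plain Hebb: strengthen when pre (x) and post (y) are active together;
    # the (1 - w) factor keeps the weight bounded at 1.
    return (1.0 - w) * x * y

def postsynaptic(w, x, y):
    # Weakens the synapse when the postsynaptic node fires alone.
    return w * (-1.0 + x) * y + (1.0 - w) * x * y

def presynaptic(w, x, y):
    # Weakens the synapse when the presynaptic node fires alone.
    return w * x * (-1.0 + y) + (1.0 - w) * x * y

def covariance(w, x, y):
    # Strengthens when activities are correlated, weakens when they are not.
    f = math.tanh(4.0 * (1.0 - abs(x - y)) - 2.0)
    return (1.0 - w) * f if f > 0 else w * f

def update(w, x, y, eta, rule):
    # Lifetime update applied every step; eta is an evolved learning rate.
    return min(1.0, max(0.0, w + eta * rule(w, x, y)))
```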

Floreano’s Genetic Encoding
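The encoding was shown as a figure; the following is only a hypothetical sketch of the idea, with field names and layout that are my assumptions rather than the paper's exact bit-level encoding: each synapse gene carries a sign plus either a fixed strength or an adaptation rule and learning rate.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SynapseGene:
    """Hypothetical gene layout in the spirit of Floreano's encoding: every
    synapse carries a sign, and either a fixed strength or an (adaptation
    rule, learning rate) pair that governs how it changes during life."""
    sign: int                       # +1 excitatory, -1 inhibitory
    rule: Optional[str] = None      # "hebb", "presynaptic", "postsynaptic", "covariance"
    eta: Optional[float] = None     # evolved learning rate (adaptive synapses)
    weight: Optional[float] = None  # fixed strength (non-adaptive synapses)
```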

Experiment: Light-Switching
Task: Go to the black area to turn on the light, then go to the area under the light
Requires a policy change mid-task: reconfigure weights for the new policy
Fully recurrent network
Blynel, J. and Floreano, D. (2002). Levels of Dynamics and Adaptive Behavior in Evolutionary Neural Controllers. In B. Hallam, D. Floreano, J. Hallam, G. Hayes, and J.-A. Meyer, editors, From Animals to Animats 7: Proceedings of the Seventh International Conference on Simulation of Adaptive Behavior. MIT Press.

Results
Adaptive-synapse networks evolved straighter and faster trajectories
Rapid and appropriate weight modifications occur at the moment of change

However, It's Not That Simple
A recurrent network with fixed synapses can change its policy too
The activation levels cycling through the network are a kind of memory that can affect its functioning
Do we need synaptic adaptation at all?
Experiment in paper: Kenneth O. Stanley, Bobby D. Bryant, and Risto Miikkulainen (2003). Evolving Adaptive Neural Networks with and without Adaptive Synapses. Proceedings of the 2003 IEEE Congress on Evolutionary Computation (CEC-2003).

Experimental Domain: Dangerous Food Foraging
Food may or may not be poisonous
There is no way to tell at birth
The only way to tell is to try one
The policy should then depend on whether "pain" was experienced

Condensed Floreano Rules
Two adaptation rules: one for excitatory connections, the other for inhibitory connections
The first term is Hebbian, the second term is a decay term
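The slide's equations were figures; what follows is only a generic illustrative form matching the description above (a Hebbian term plus a decay term), not the paper's actual rules:

```python
def generic_adaptive_update(w, pre, post, eta, decay):
    """Illustrative only, not the paper's exact rules: the weight grows with
    the Hebbian term (correlation of pre- and postsynaptic activity) and
    shrinks with a decay term. The actual excitatory and inhibitory rules in
    Stanley, Bryant, and Miikkulainen (2003) differ in their details."""
    return w + eta * pre * post - decay * w
```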

NEAT Trick: Use "Traits" to Prevent Dimensionality Multiplication
One set of rules/traits
Each connection gene points to one of the rules
Rules evolve in parallel with the network
Weights evolve as usual
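A minimal sketch of the trait idea, with hypothetical names (the real NEAT genome structures differ): a small shared pool of rule traits evolves alongside the network, and each connection gene stores only an index into that pool rather than its own full set of adaptation parameters.

```python
from dataclasses import dataclass

@dataclass
class RuleTrait:
    """A shared learning-rule 'trait'; only a handful of these exist,
    and they are mutated in parallel with the network topology."""
    rule: str       # e.g. "hebb" or "covariance" (illustrative names)
    eta: float      # learning rate shared by every connection using this trait
    decay: float    # decay rate shared likewise

@dataclass
class ConnectionGene:
    """A NEAT-style connection gene that points at a trait instead of
    carrying its own adaptation parameters (hypothetical field names)."""
    in_node: int
    out_node: int
    weight: float     # weights still evolve per connection, as usual
    trait_id: int     # index into the shared list of RuleTrait objects
    enabled: bool = True
    innovation: int = 0

# Dimensionality grows with the number of traits, not the number of
# connections: mutating one trait changes the rule for every connection
# that references it.
traits = [RuleTrait("hebb", 0.05, 0.01), RuleTrait("covariance", 0.1, 0.0)]
genes = [ConnectionGene(0, 2, 0.5, trait_id=0), ConnectionGene(1, 2, -0.3, trait_id=1)]
```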

Robot NNs

Surprising Result
Fixed-weight recurrent networks could evolve a solution more efficiently!
Adaptive networks found solutions, but more slowly and less reliably

Explanation
Fixed networks evolved a "trick": a strong inhibitory recurrent connection on the left-turn output causes it to stay on until the robot experiences pain. Then it turns off, and the robot spins (driven by the right-turn output) until it no longer sees food, then runs to the wall.
In the adaptive network, 22% of connections diverge after pain, causing the network to spin in place: a holistic change.

Discussion
Adaptive neurons are not for everything, not even for all adaptive tasks
In non-adaptive tasks, they only add unnecessary dimensions to the search space
In adaptive tasks, they may be best for tasks requiring holistic solutions. What are those?
Don't underestimate the power of recurrence

Next Topic: Leaky Integrator Neurons, CTRNNs, and Pattern Generators
Real neurons encode information as spikes and spike trains with differing rates
Dendrites may integrate spike trains at different rates
Rate differences can create central pattern generators without a clock!
Readings:
– Blynel, J. and Floreano, D. (2002). Levels of Dynamics and Adaptive Behavior in Evolutionary Neural Controllers.
– Reil, T. and Husbands, P. (2002). Evolution of Central Pattern Generators for Bipedal Walking in a Real-Time Physics Environment.
– Optional: Chiel, H. J., Beer, R. D., and Gallagher, J. C. (1999). Evolution and Analysis of Model CPGs for Walking I: Dynamical Modules.
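As a preview of that next lecture, here is a minimal Euler-integration sketch of the standard leaky-integrator (CTRNN) neuron model; this is the textbook form of the equation, not code from the course:

```python
import numpy as np

def ctrnn_step(y, W, tau, theta, I, dt=0.01):
    """One Euler step of a standard CTRNN (continuous-time recurrent net):
    tau_i * dy_i/dt = -y_i + sum_j W[i, j] * sigmoid(y_j + theta_j) + I_i.
    Neurons with different time constants tau integrate their inputs at
    different rates, which lets evolved circuits oscillate as central
    pattern generators without any external clock."""
    rates = 1.0 / (1.0 + np.exp(-(y + theta)))  # sigmoid firing rates
    dydt = (-y + W @ rates + I) / tau
    return y + dt * dydt
```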