CAP6938 Neuroevolution and Developmental Encoding Basic Concepts Dr. Kenneth Stanley August 23, 2006

We Care About Evolving Complexity, So Why Neural Networks?
- Historical origin of ideas in evolving complexity
- Representative of a broad class of structures
- Illustrative of general challenges
- Clear beneficiary of high complexity

How Do NNs Work?
[Diagram: two example networks, each mapping an input layer to an output layer]

How Do NNs Work? Example
- Inputs (sensors): Front, Left, Right, Back
- Outputs (effectors/controls): Forward, Left, Right

What Exactly Happens Inside the Network? Network Activation
[Diagram: inputs X1 and X2 feed hidden neurons H1 and H2 through weights w11, w12, w21, w22, and the hidden neurons drive outputs out1 and out2]
Neuron $j$ activation: $H_j = \sigma\!\left(\sum_i w_{ji}\, x_i\right)$, where $\sigma$ is a sigmoid squashing function
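
A minimal Python sketch of this activation computation for the slide's two-input, two-hidden, two-output network; the weight and input values here are made up for illustration.

```python
import math

def sigmoid(x):
    """Logistic squashing function."""
    return 1.0 / (1.0 + math.exp(-x))

def activate_layer(inputs, weights):
    """Each neuron's activation is the sigmoid of the weighted sum
    of its inputs; weights[j][i] is w_ji, from input i to neuron j."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

# Illustrative (made-up) weights for the 2-2-2 network on the slide.
w_hidden = [[0.5, -0.3],   # w11, w12 into H1
            [0.8,  0.2]]   # w21, w22 into H2
w_output = [[1.0, -1.0],   # into out1
            [0.4,  0.6]]   # into out2

x = [1.0, 0.0]                      # sensor values X1, X2
h = activate_layer(x, w_hidden)     # hidden activations H1, H2
out = activate_layer(h, w_output)   # outputs out1, out2
print(out)
```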

Recurrent Connections
- Recurrent connections are backward connections in the network
- They allow feedback
- Recurrence is a type of memory
[Diagram: inputs X1 and X2 feed a hidden neuron H through weights w11 and w21; H drives the output through w_H-out, and a recurrent connection w_out-H feeds the output back into H]

Activating Networks of Arbitrary Topology
- The standard method makes no distinction between feedforward and recurrent connections: every neuron's activation at tick $t$ is computed from the activations at tick $t-1$, i.e. $a_j(t) = \sigma\!\left(\sum_i w_{ji}\, a_i(t-1)\right)$
- The network is then usually activated once per time tick
- The number of activations per tick can be thought of as the speed of thought
- Thinking fast is expensive
[Diagram: the same recurrent network as above]
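
A sketch of this standard once-per-tick activation in Python, on a small network with one recurrent connection; the topology loosely follows the slide's diagram, and the weight values are invented for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Connections as (source, target, weight). Feedforward and recurrent
# links are treated identically, as in the standard method.
connections = [
    ("x1", "h",   0.7),   # w11
    ("x2", "h",  -0.4),   # w21
    ("h",  "out", 1.2),   # w_H-out
    ("out", "h",  0.5),   # w_out-H (the recurrent connection)
]

def tick(state, inputs):
    """One activation per time tick: every neuron reads the previous
    tick's activations, so a signal takes one tick per connection
    to travel through the network."""
    state = dict(state, **inputs)   # clamp the sensor values
    sums = {}
    for src, dst, w in connections:
        sums[dst] = sums.get(dst, 0.0) + w * state[src]
    new_state = dict(state)
    for neuron, total in sums.items():
        new_state[neuron] = sigmoid(total)
    return new_state

state = {"x1": 0.0, "x2": 0.0, "h": 0.0, "out": 0.0}
for t in range(5):
    state = tick(state, {"x1": 1.0, "x2": 0.5})
    print(t, round(state["out"], 3))   # output settles as feedback circulates
```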

Arbitrary Topology Activation Controversy
- The standard method is not necessarily the best
- It allows "delay-line" memory and a very simple activation algorithm with no special case for recurrence
- However, "all-at-once" activation utilizes the entire net in each tick with no extra cost
- This issue is unsettled
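
One way to read the "all-at-once" alternative, sketched by reusing the `tick` function above: run enough propagation passes within a single tick that a signal can cross the whole network before the tick ends. This interpretation and the `passes` parameter are assumptions, not an algorithm given on the slide.

```python
def tick_all_at_once(state, inputs, passes=3):
    """Re-run the one-step update several times within a single tick,
    so the whole network contributes to each tick's output.
    `passes` would be chosen to cover the longest path through the net."""
    for _ in range(passes):
        state = tick(state, inputs)
    return state
```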

The Big Questions
- What is the topology that works?
- What are the weights that work?
[Diagram: a candidate network whose topology and weights are all marked with question marks]

Problem Dimensionality
- Each connection (weight) in the network is a dimension in a search space
- The space you're in matters: optimization is not the only issue!
- Topology defines the space
[Diagrams: a 21-dimensional space vs. a 3-dimensional space]

High-Dimensional Space is Hard to Search
- 3-dimensional: easy
- 100-dimensional: need a good optimization method
- 10,000-dimensional: very hard
- 1,000,000-dimensional: very, very hard
- 100,000,000,000,000-dimensional: forget it

Bad News
Most interesting solutions are high-D:
- Robotic Maid
- World Champion Go Player
- Autonomous Automobile
- Human-level AI
- Great Composer
We need to get into high-D space

A Solution (Preview)
Complexification: instead of searching directly in the space of the solution, start in a smaller, related space and build up to the solution.
Complexification is inherent in a vast range of examples of social and biological progress.

So How Do Computers Optimize Those Weights Anyway?
It depends on the type of problem:
- Supervised: learn from input/output examples
- Reinforcement learning: sparse feedback
- Self-organization: no teacher
In general, the more feedback you get, the easier the learning problem.
Humans learn language without supervision.

Significant Weight Optimization Techniques
- Backpropagation: change weights based on their contribution to error
- Hebbian learning: change weights based on firing correlations between connected neurons
Homework:
- Fausett, pp. (in Chapter 2)
- Fausett, pp. (in Chapter 6)
- Online intro chapter on RL
- Optional RL survey
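
A minimal sketch of the two update rules named above; the function names, learning rate, and squared-error loss are illustrative assumptions, not the course's prescribed implementation.

```python
def hebbian_update(w, pre, post, eta=0.1):
    """Hebbian learning: strengthen a weight in proportion to the
    correlation between pre- and post-synaptic activations."""
    return w + eta * pre * post

def backprop_output_update(w, inp, output, target, eta=0.1):
    """Core of backpropagation for a sigmoid output neuron: move the
    weight down the gradient of the squared error. (A full implementation
    propagates these error terms backward through every layer.)"""
    delta = (target - output) * output * (1.0 - output)  # error * sigmoid'
    return w + eta * delta * inp

w = 0.2
w = hebbian_update(w, pre=0.9, post=0.8)  # both neurons fired, so w grows
```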