1
CAP6938 Neuroevolution and Developmental Encoding
Basic Concepts
Dr. Kenneth Stanley
August 23, 2006
2
We Care About Evolving Complexity, So Why Neural Networks?
- Historical origin of ideas in evolving complexity
- Representative of a broad class of structures
- Illustrative of general challenges
- Clear beneficiary of high complexity
3
How Do NNs Work?
[Diagram: two example networks, each mapping a set of inputs through the network to a set of outputs]
4
How Do NNs Work? Example
[Diagram: a robot-control network. Inputs (sensors): Front, Left, Right, Back. Outputs (effectors/controls): Forward, Left, Right]
5
What Exactly Happens Inside the Network? Network Activation
[Diagram: inputs X1, X2 feed hidden neurons H1, H2 through weights w11, w12, w21, w22; H1, H2 feed outputs out1, out2]
- Neuron j activation: H_j = σ(Σ_i w_ji x_i), where σ is a sigmoid squashing function
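To make the activation concrete, here is a minimal Python sketch of a forward pass through the 2-2-2 network pictured above. The weight and input values, and the function names, are illustrative choices of mine, not taken from the slide.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def activate_layer(inputs, weights):
    """weights[j][i] is the connection from input i to neuron j;
    each neuron sums its weighted inputs and applies the sigmoid."""
    return [sigmoid(sum(w_ji * x_i for w_ji, x_i in zip(row, inputs)))
            for row in weights]

x = [0.5, -0.2]                     # X1, X2 (example sensor values)
w_xh = [[0.1, 0.4], [-0.3, 0.8]]    # w11, w12 / w21, w22 (illustrative)
h = activate_layer(x, w_xh)         # H1, H2
w_ho = [[0.6, -0.5], [0.2, 0.9]]    # hidden-to-output weights (illustrative)
out = activate_layer(h, w_ho)       # out1, out2
```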
6
Recurrent Connections
- Recurrent connections are backward connections in the network
- They allow feedback
- Recurrence is a type of memory
[Diagram: inputs X1, X2 feed hidden neuron H through w11, w21; H feeds the output through w_H-out, and the output feeds back into H through w_out-H, the recurrent connection]
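As a quick illustration of recurrence as memory, the toy neuron below feeds a fraction of its own previous activation back to itself, so an input pulse echoes forward through time. The values are hypothetical, and the squashing function is omitted to keep the arithmetic transparent.

```python
def recurrent_neuron(inputs, w_in=1.0, w_self=0.5):
    """A single neuron with a self-connection: its new activation
    depends on its own previous activation, i.e. it remembers."""
    h = 0.0
    history = []
    for x in inputs:
        h = w_in * x + w_self * h  # the w_self * h term is the feedback
        history.append(h)
    return history

print(recurrent_neuron([1, 0, 0, 0]))
# [1.0, 0.5, 0.25, 0.125] - the pulse persists after the input is gone
```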
7
Activating Networks of Arbitrary Topology
- The standard method makes no distinction between feedforward and recurrent connections: every connection simply reads the activation its source neuron computed on the previous pass
- The network is then usually activated once per time tick
- The number of activations per tick can be thought of as the speed of thought
- Thinking fast is expensive
[Diagram: the same recurrent network as on the previous slide]
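A minimal sketch of this standard activation method, assuming a dict-of-activations representation (the names step and sigmoid are mine, not from the lecture): every connection, recurrent or not, reads the value its source neuron held on the previous pass, so no special case is needed.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def step(activations, connections, inputs):
    """One activation pass over a network of arbitrary topology.

    activations: dict mapping neuron id -> activation from the last pass
    connections: list of (source, target, weight) tuples; recurrent
                 connections are handled identically to feedforward ones
    inputs:      dict mapping input-neuron id -> current sensor value
    """
    incoming = {n: 0.0 for n in activations}
    for src, tgt, w in connections:
        incoming[tgt] += w * activations[src]  # previous-pass value of src
    new = {n: sigmoid(v) for n, v in incoming.items()}
    new.update(inputs)  # input neurons are clamped to the sensor values
    return new
```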
8
Arbitrary Topology Activation Controversy
- The standard method is not necessarily the best
- It allows "delay-line" memory and a very simple activation algorithm with no special case for recurrence
- However, "all-at-once" activation utilizes the entire net in each tick with no extra cost
- This issue is unsettled
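Reusing the step function sketched above, the trade-off can be expressed as a single parameter, an illustration of mine rather than code from the course: one pass per tick gives cheap delay-line memory, while enough passes to span the network's longest path approximates all-at-once activation at extra cost.

```python
def tick(activations, connections, inputs, passes=1):
    # passes is the "speed of thought": more activation passes per
    # simulation tick let signals travel further through the network,
    # but each pass costs another full sweep over the connections.
    for _ in range(passes):
        activations = step(activations, connections, inputs)
    return activations
```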
9
The Big Questions
- What is the topology that works?
- What are the weights that work?
[Diagram: candidate networks whose topologies and weights are all marked with question marks]
10
Problem Dimensionality
- Each connection (weight) in the network is a dimension in a search space
- The space you're in matters: optimization is not the only issue!
- Topology defines the space
[Diagram: a large network defining a 21-dimensional search space beside a small network defining a 3-dimensional one]
11
High Dimensional Space is Hard to Search
- 3-dimensional: easy
- 100-dimensional: need a good optimization method
- 10,000-dimensional: very hard
- 1,000,000-dimensional: very, very hard
- 100,000,000,000,000-dimensional: forget it
12
Bad News
- Most interesting solutions are high-D:
  - Robotic Maid
  - World Champion Go Player
  - Autonomous Automobile
  - Human-level AI
  - Great Composer
- We need to get into high-D space
13
A Solution (preview)
- Complexification: instead of searching directly in the space of the solution, start in a smaller, related space and build up to the solution (see the sketch below)
- Complexification is inherent in many examples of social and biological progress
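A hedged sketch of what complexification can look like in code, in the spirit of the add-node mutation later used in NEAT; the genome encoding here is hypothetical and chosen for brevity. Search begins with a minimal fully connected topology, and new structure is spliced in over generations, so early search happens in a small space.

```python
import random

def minimal_genome(num_in, num_out):
    """Start small: inputs wired directly to outputs, no hidden nodes.
    Each gene is a (source, target, weight) connection."""
    return [(i, num_in + o, random.uniform(-1, 1))
            for i in range(num_in) for o in range(num_out)]

def add_node(genome, new_id):
    """Grow the search space by one dimension cluster: split a random
    connection A->B into A->new_id and new_id->B."""
    src, tgt, w = genome.pop(random.randrange(len(genome)))
    genome.append((src, new_id, 1.0))  # incoming weight 1.0 roughly preserves behavior
    genome.append((new_id, tgt, w))    # outgoing weight keeps the old value
    return genome
```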
14
So How Do Computers Optimize Those Weights Anyway?
- It depends on the type of problem:
  - Supervised: learn from input/output examples
  - Reinforcement learning: sparse feedback
  - Self-organization: no teacher
- In general, the more feedback you get, the easier the learning problem
- Humans learn language largely without supervision
15
Significant Weight Optimization Techniques
- Backpropagation: change weights based on their contribution to error
- Hebbian learning: change weights based on firing correlations between connected neurons (both rules are sketched below)
Homework:
- Fausett pp. 39-80 (in Chapter 2)
- Fausett pp. 289-316 (in Chapter 6)
- Online intro chapter on RL
- Optional: RL survey
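For concreteness, here are minimal single-weight sketches of the two update rules named above. The learning rate eta and the function names are my own choices; real implementations operate on whole weight matrices.

```python
def hebbian_update(w, x_i, x_j, eta=0.1):
    # Hebbian rule: strengthen the connection when the two neurons it
    # joins fire together (update proportional to their correlation).
    return w + eta * x_i * x_j

def backprop_update(w, x_i, dE_dout_j, eta=0.1):
    # Backpropagation (gradient descent): move the weight against its
    # contribution to the error E; dE_dout_j is the error signal
    # propagated back to the downstream neuron j.
    return w - eta * dE_dout_j * x_i
```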