Introduction to the TLearn Simulator
• CS/PY 399 Lab Presentation # 5
• February 8, 2001
• Mount Union College

TLearn Software
• Developed by cognitive psychologists to study properties of connectionist models and learning
  – Kim Plunkett, Oxford, experimental psychologist
  – Jeffrey Elman, U.C. San Diego, cognitive psychologist
• Simulates massively parallel networks on serial computer platforms

Notational Conventions
• TLearn uses a slightly different notation from the one we have been using
• Input signals are treated as nodes in the network, and displayed on screen as squares
• Other nodes (representing neurons) are displayed as circles
• Input and output values can be any real numbers (decimals allowed)

Weight Adjustments: Learning
• TLearn uses a more sophisticated rule than the simple one seen last week
• Let t_kp be the target (desired) output for node k on pattern p
• Let o_kp be the actual (obtained) output for node k on pattern p

Weight Adjustments: Learning
• Error for node k on pattern p (δ_kp) is the difference between target output and observed output, times the derivative of the activation function for node k
  – why? Don't ask! (actually, this value simulates actual observed learning)
• δ_kp = (t_kp - o_kp) · [o_kp · (1 - o_kp)]
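As a quick numeric illustration (values chosen arbitrarily, not taken from the slides): if the target is t_kp = 1 and the obtained output is o_kp = 0.731, then δ_kp = (1 - 0.731) · 0.731 · (1 - 0.731) ≈ 0.053, a small positive error signal pushing the output upward.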

Weight Adjustments: Learning
• This is used to calculate adjustments to weights
• Let w_kj be the weight on the connection from node j to node k (backwards notation is what the authors use)
• Let Δw_kj be the change required for w_kj due to training
• Δw_kj is determined by: error for node k, input from node j, learning rate (η)

Weight Adjustments: Learning
• Δw_kj = η · δ_kp · o_jp
• η is small (< 1, usually 0.05 to 0.5), to keep weights from making wild swings that overshoot goals for all patterns
• This actually makes sense...
  – a larger error (δ_kp) should make Δw_kj larger
  – if o_jp is large, it contributed a great deal to the error, so it should contribute a large value to the weight adjustment
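The two formulas above translate directly into code. The following is a minimal Python sketch of a single weight update, not TLearn's actual implementation; the function name and variable names are illustrative only.

def delta_rule_update(w_kj, o_jp, o_kp, t_kp, eta=0.1):
    """One delta-rule update of the weight from node j to node k.

    o_jp : output of node j on pattern p (the input arriving at node k)
    o_kp : obtained output of node k on pattern p
    t_kp : target output of node k on pattern p
    eta  : learning rate, kept small (e.g. 0.05 to 0.5)
    """
    # error signal: (target - obtained) times the derivative of the
    # logistic activation, which is o_kp * (1 - o_kp)
    delta_kp = (t_kp - o_kp) * o_kp * (1 - o_kp)
    # weight change: learning rate * error * input from node j
    return w_kj + eta * delta_kp * o_jp

Continuing the numbers from the previous slide, delta_rule_update(0.0, 1.0, 0.731, 1.0, eta=0.5) would nudge the weight upward by about 0.026.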

Weight Adjustments: Learning
• The preceding is called the delta rule
• Used in backpropagation training
  – error adjustments are propagated backwards from the output layer to previous layers when weight changes are calculated
• Luckily, the simulator will perform these calculations for you!
• Read more in Ch. 1 of Plunkett & Elman

TLearn Simulation Basics
• For each problem on which you will work, the simulator maintains a PROJECT description file
• Each project consists of three text files:
  – .CF file: configuration information about the network's architecture
  – .DATA file: input for each of the network's training cases
  – .TEACH file: output for each training case

TLearn Simulation Basics
• Each file must contain information in EXACTLY the format TLearn expects, or else the simulation won't work
• Example: AND project from the Chapter 3 folder
  – 2 inputs, one output; output = 1 only if both inputs = 1

.DATA and .TEACH Files

.DATA File format
• first line: distributed or localist
  – to start, we'll always use distributed
• second line: n = # of training cases
• next n lines: inputs for each training case
  – a list of v values, separated by spaces, where v = # of inputs in network
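Following this format, a .DATA file for the AND project described earlier (2 inputs) would plausibly contain the lines below; the four input patterns are assumed here, since the original example file is not reproduced on the slides.

distributed
4
0 0
0 1
1 0
1 1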

.TEACH File format
• first line: distributed or localist
  – must match the mode used in the .DATA file
• second line: n = # of training cases
• next n lines: outputs for each training case
  – a list of w values, separated by spaces, where w = # of outputs in network
  – a value may be *, meaning the output is ignored during training for this pattern
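A matching .TEACH file for the four AND patterns sketched above would then hold one target value per case (output = 1 only when both inputs are 1):

distributed
4
0
0
0
1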

.CF File

.CF File format
• Three sections
• NODES: section
  – nodes = # of non-input units in network
  – inputs = # of inputs to network
  – outputs = # of output units
  – output node is ___ <== which node is the output node?
  – > 1 output node ==> syntax changes to “output nodes are”

.CF File format
• CONNECTIONS: section
  – groups = 0 (explained later)
  – 1 from i1-i2 (says that node # 1 gets values from input nodes i1 and i2)
  – 1 from 0 (says that node # 1 gets values from the bias node -- explained below)
• input nodes always start with i1, i2, etc.
• non-input nodes start with 1, 2, etc.

.CF File format
• SPECIAL: section
  – selected = 1 (special simulator results reporting)
  – weight-limit = 1.00 (range of random weight values to use in initial network creation)
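Assembling the three sections, a complete .CF file for the AND project (2 inputs, a single non-input node that is also the output) would look roughly like the sketch below; the file actually shipped with the Chapter 3 folder may differ in details.

NODES:
nodes = 1
inputs = 2
outputs = 1
output node is 1
CONNECTIONS:
groups = 0
1 from i1-i2
1 from 0
SPECIAL:
selected = 1
weight-limit = 1.00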

Bias node
• TLearn units all have the same threshold
  – defined by the logistic function
• θ values are represented by a bias node
  – connected to all non-input nodes
  – signal always = 1
  – weight of the connection is -θ
  – same as a perceptron with a threshold (example on board)
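The equivalence on this slide can be spelled out with standard perceptron algebra (not specific to TLearn): a unit that fires when
w_1·x_1 + ... + w_n·x_n ≥ θ
behaves exactly like a unit that fires when
w_1·x_1 + ... + w_n·x_n + (-θ)·1 ≥ 0,
so a bias connection of weight -θ from an always-on node takes the place of the threshold.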

Network Arch. with Bias Node

.CF File Example (Draw it!)
• NODES:
  – nodes = 5
  – inputs = 3
  – outputs = 2
  – output nodes are 4-5
• CONNECTIONS:
  – groups = 0
  – ___ from i1-i3
  – 4-5 from ___
  – ___ from 0
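If the blanks in the CONNECTIONS section are filled under the assumption that nodes 1-3 form a hidden layer feeding the two output nodes, and that every non-input node receives the bias signal, the section would read: groups = 0, 1-3 from i1-i3, 4-5 from 1-3, 1-5 from 0. That reading is an illustrative assumption, not a transcription of the original file.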

Learning to use TLearn
• Chapter 3 of the Plunkett and Elman text is a step-by-step description of several TLearn training sessions.
• Best way to learn: hands-on! Try Lab Exercise # 5

Introduction to the TLearn Simulator
• CS/PY 399 Lab Presentation # 5
• February 8, 2001
• Mount Union College