PSY105 Neural Networks 4/5 – 4. “Traces in time”
Assignment note: you don't need to read the full book to answer the first half of the question. You should be able to answer it based on chapter 1 and the lecture notes.

Lecture 1 recap: We can describe patterns at one level of description that emerge due to rules followed at a lower level of description. Neural network modellers hope that we can understand behaviour by creating models of networks of artificial neurons.

Lecture 2 recap: Simple model neurons
– Transmit a signal of 0 or 1 (or a value in between)
– Receive information from other neurons
– Weight this information
– Can be used to perform any computation
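A minimal sketch of such a model neuron in Python (the logistic squashing function and all names here are my assumptions; the lecture only requires that the output lie between 0 and 1):

```python
import math

def model_neuron(inputs, weights):
    """Weight the incoming signals, sum them, and squash the result into (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 / (1 + math.exp(-total))  # logistic squashing function (an assumption)

# Example: three incoming signals, each weighted differently
print(model_neuron([1.0, 0.0, 1.0], [0.5, -0.2, 0.8]))  # ~0.79
```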

Lecture 3 recap: Classical conditioning is a simple form of learning which can be understood as an increase in the weight (‘associative strength’) between two stimuli (one of which is associated with an ‘unconditioned response’).

Nota bene: our discussion of classical conditioning has involved
– A behaviour: learning to associate a response with a stimulus
– A mechanism: neurons which transmit signals
– These are related by… a rule or algorithm

Learning Rules “When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.” Hebb, D.O. (1949), The organization of behavior, New York: Wiley

Operationalising the Hebb Rule: turn “When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.” into a simple equation, i.e. a rule for changing weights according to inputs and outputs.

A Hebb Rule
Δweight = activity A × activity B × learning rate constant
In words: increase the weight in proportion to the activity of neuron A multiplied by the activity of neuron B.
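As a minimal sketch in Python (function and variable names are mine, not from the lecture; the learning rate of 0.1 matches the value used on a later slide):

```python
def hebb_update(weight, activity_a, activity_b, learning_rate=0.1):
    """Return the new weight after one Hebbian update:
    the change is the product of the two activities times the learning rate."""
    return weight + activity_a * activity_b * learning_rate

w = 0.0
w = hebb_update(w, activity_a=1.0, activity_b=1.0)  # both active together -> w becomes 0.1
w = hebb_update(w, activity_a=0.0, activity_b=1.0)  # A silent -> no change, w stays 0.1
```

Note that the weight only changes when both neurons are active at the same time; the rest of the lecture turns on this property.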

[Figure: a single stimulus plotted against time, switching from “stimulus on” to “stimulus off”.]

Implications of this rule
[Figure sequence: two input units, Stimulus 1 (labelled CS 1) and Stimulus 2 (labelled UCS), connected by a link marked “?”. The CS unit's activity is labelled Activity A, the UCS unit's activity is labelled Activity B, and the connection between them carries the weight, which changes by Activity A × Activity B × 0.1.]

The most successful model of classical conditioning is the Rescorla-Wagner model
– Accounts for the effects of combinations of stimuli in learning S-S links
– Based on the discrepancy between what is expected to happen and what actually happens
– But… it deals with discrete trials, i.e. it has no model of time
Rescorla, R. A., & Wagner, A. R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In A. H. Black & W. F. Prokasy (Eds.), Classical Conditioning II: Current Research and Theory. New York: Appleton-Century-Crofts.
Rescorla, R. A. (2008). Rescorla-Wagner model. Scholarpedia, 3(3):2237.
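For comparison, a hedged sketch of the standard textbook Rescorla-Wagner update, ΔV = αβ(λ - ΣV); the parameter values and stimulus names below are illustrative, not taken from the lecture:

```python
def rescorla_wagner_trial(V, stimuli_present, lam, alpha=0.3, beta=1.0):
    """One discrete conditioning trial: every stimulus present shares the
    prediction error between the outcome (lam) and the summed prediction."""
    error = lam - sum(V[s] for s in stimuli_present)
    for s in stimuli_present:
        V[s] += alpha * beta * error
    return V

V = {"light": 0.0, "tone": 0.0}
for _ in range(20):
    rescorla_wagner_trial(V, ["light"], lam=1.0)      # light alone comes to predict the UCS
rescorla_wagner_trial(V, ["light", "tone"], lam=1.0)  # added tone learns almost nothing (blocking)
```

The update is applied once per trial; nothing in it says when within the trial the stimuli occurred, which is exactly the limitation the next slides address.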

The problem of continuous time
[Figure sequence: Stimulus 1 and Stimulus 2 are presented at different times and never overlap, so at every moment either Activity A = 0 or Activity B = 0. The Hebbian product Activity A × Activity B is therefore always zero and the weight never changes.]
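A small illustration of why the plain Hebb rule fails here, assuming activity is simply 1 while a stimulus is on and 0 otherwise (the time steps and values are illustrative):

```python
# Stimulus 1 (CS) is on for the first three time steps,
# Stimulus 2 (UCS) for three later steps; they never overlap.
activity_a = [1, 1, 1, 0, 0, 0, 0, 0]
activity_b = [0, 0, 0, 0, 1, 1, 1, 0]

weight, learning_rate = 0.0, 0.1
for a, b in zip(activity_a, activity_b):
    weight += a * b * learning_rate  # the product is zero at every step
print(weight)  # 0.0 -- no association is ever learned
```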

We need to add something to our model to deal with a learning mechanism that is always “on”

Traces
[Figure sequence: when Stimulus 1 ends, its unit's activity (Activity A) does not drop straight to zero but lingers as a decaying trace. Stimulus 2 then arrives while the trace is still present, so Activity A and Activity B overlap and the Hebbian product is non-zero.]
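A minimal sketch of one way to implement such a trace; the exponential decay and the particular constants are my assumptions, since the lecture does not specify how the trace falls off:

```python
# Same non-overlapping stimuli as before, but neuron A's activity is
# replaced by a trace that lingers and decays after the stimulus ends.
stimulus_1 = [1, 1, 1, 0, 0, 0, 0, 0]  # CS
stimulus_2 = [0, 0, 0, 0, 1, 1, 1, 0]  # UCS

decay, learning_rate = 0.7, 0.1
trace_a, weight = 0.0, 0.0
for s1, s2 in zip(stimulus_1, stimulus_2):
    trace_a = max(s1, trace_a * decay)      # trace jumps up with the stimulus, then decays
    weight += trace_a * s2 * learning_rate  # trace overlaps with the UCS, so the product is non-zero
print(round(weight, 3))  # ~0.107 -- the association is learned
```

With this sketch, lengthening the gap between the two stimuli, or reversing their order, shrinks or abolishes the weight change, which is where the consequences listed on the next slide come from.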

Consequences of this implementation – learning is now sensitive to:
– Size of the CS stimulus
– Duration of the CS stimulus
– Size of the UCS stimulus
– Duration of the UCS stimulus
– Separation in time of the CS and UCS
– The order in which the CS and UCS occur
(cf. the Rescorla-Wagner discrete-time model)