Financial Informatics – XVII: Unsupervised Learning. Khurshid Ahmad, Professor of Computer Science, Department of Computer Science, Trinity College, Dublin-2, Ireland.


1 Financial Informatics – XVII: Unsupervised Learning. Khurshid Ahmad, Professor of Computer Science, Department of Computer Science, Trinity College, Dublin-2, IRELAND. November 19th,

2 Preamble. Neural networks 'learn' by adapting in accordance with a training regimen. Five key algorithms:
- ERROR-CORRECTION OR PERFORMANCE LEARNING
- HEBBIAN OR COINCIDENCE LEARNING
- BOLTZMANN LEARNING (STOCHASTIC NET LEARNING)
- COMPETITIVE LEARNING
- FILTER LEARNING (GROSSBERG'S NETS)

3 Preamble. Neural networks 'learn' by adapting in accordance with a training regimen: five key algorithms. A motivating example from the financial domain: California sought to have the license of one of the largest auditing firms (Ernst & Young) removed because of its role in the well-publicized collapse of the Lincoln Savings & Loan Association. Further, regulators could use a bankruptcy …

4 ANN Learning Algorithms


7 Hebbian Learning. DONALD HEBB, a Canadian psychologist, was interested in investigating PLAUSIBLE MECHANISMS FOR LEARNING AT THE CELLULAR LEVEL IN THE BRAIN (see, for example, Donald Hebb (1949), The Organization of Behavior. New York: Wiley).

8 Hebbian Learning. HEBB'S POSTULATE: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."

9 Hebbian Learning. Hebbian learning laws CAUSE WEIGHT CHANGES IN RESPONSE TO EVENTS WITHIN A PROCESSING ELEMENT THAT HAPPEN SIMULTANEOUSLY. THE LEARNING LAWS IN THIS CATEGORY ARE CHARACTERIZED BY THEIR COMPLETELY LOCAL CHARACTER, BOTH IN SPACE AND IN TIME.

10 Hebbian Learning. LINEAR ASSOCIATOR: a substrate for Hebbian learning systems. [Diagram: input vector x = (x1, x2, x3) is connected through the weights w11, w12, w13, w21, w22, w23, w31, w32, w33 to the output units y1, y2, y3, which produce the responses y'1, y'2, y'3.]
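As a minimal sketch (not from the slides) of what the linear associator computes, assuming a 3-input, 3-output layer with illustrative weight values:

```python
import numpy as np

# Linear associator: each output is a weighted sum of the inputs, y = W x.
W = np.array([[0.2, -0.1, 0.4],    # w11 w12 w13  (illustrative values)
              [0.0,  0.3, -0.2],   # w21 w22 w23
              [0.5, -0.4, 0.1]])   # w31 w32 w33
x = np.array([1.0, -2.0, 1.5])     # input vector x

y = W @ x                          # responses y'1, y'2, y'3
print(y)
```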

11 Hebbian Learning. A simple form of the Hebbian learning rule is Δw_ij = η y_i x_j, where η is the so-called rate of learning and x and y are the input and output respectively. This rule is also called the activity product rule.
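A short sketch of the activity product rule for a single linear unit; the helper name, weights and input below are illustrative, not taken from the slides:

```python
import numpy as np

def hebb_update(w, x, eta=1.0):
    """Activity product rule: delta_w = eta * y * x, with y = w . x."""
    y = np.dot(w, x)            # postsynaptic output
    return w + eta * y * x      # weight change proportional to input times output

w = np.array([1.0, -1.0, 0.0, 0.5])    # illustrative initial weights
x = np.array([1.0, -2.0, 1.5, 0.0])    # illustrative input
print(hebb_update(w, x))
```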

12 Hebbian Learning. A simple form of the Hebbian learning rule: if there are m pairs of vectors to be stored in a network, then the training sequence will change the weight matrix W from its initial value of ZERO to its final state by simply adding together all of the incremental weight changes caused by the m applications of Hebb's law: W = η Σ_{i=1}^{m} y_i x_i^T.
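A sketch of this batch accumulation, assuming the stored (input, output) pairs are given explicitly; the pairs below are illustrative placeholders:

```python
import numpy as np

def hebb_batch(pairs, eta=1.0):
    """Accumulate W = eta * sum over the m pairs of y x^T, starting from W = 0."""
    n_out, n_in = len(pairs[0][1]), len(pairs[0][0])
    W = np.zeros((n_out, n_in))
    for x, y in pairs:
        W += eta * np.outer(y, x)    # incremental Hebbian change for this pair
    return W

# Three illustrative (input, output) pairs
pairs = [(np.array([1.0, 0.0]), np.array([1.0, -1.0])),
         (np.array([0.0, 1.0]), np.array([-1.0, 1.0])),
         (np.array([1.0, 1.0]), np.array([1.0, 1.0]))]
print(hebb_batch(pairs))
```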

13 Hebbian Learning. A worked example: consider the Hebbian learning of three input vectors in a network with the following initial weight vector:

14 Hebbian Learning. A worked example (continued): consider the Hebbian learning of three input vectors in a network with the following initial weight vector:

15 Hebbian Learning. A worked example (continued): consider the Hebbian learning of three input vectors in a network with the following initial weight vector:

16 Hebbian Learning. A worked example (continued): consider the Hebbian learning of three input vectors in a network with the following initial weight vector:

17 Hebbian Learning. The worked example shows that with the discrete f(net) and η = 1, the weight change involves ADDING or SUBTRACTING the entire input pattern vector to or from the weight vector. Consider the case when the activation function is a continuous one; for example, take the bipolar continuous activation function:
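The formula itself is not reproduced in the transcript; the sketch below assumes the commonly used form f(net) = 2 / (1 + exp(-λ·net)) - 1, shown next to its discrete (sign) counterpart:

```python
import numpy as np

def f_discrete(net):
    """Discrete bipolar activation: +1 or -1."""
    return np.where(net >= 0, 1.0, -1.0)

def f_continuous(net, lam=1.0):
    """Bipolar continuous activation, assumed form: 2 / (1 + exp(-lam * net)) - 1."""
    return 2.0 / (1.0 + np.exp(-lam * net)) - 1.0

print(f_discrete(0.75), f_continuous(0.75))
```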

18 Hebbian Learning. The worked example with the bipolar continuous activation function indicates that the weight adjustments are tapered for the continuous function but are generally in the same direction:

Vector | Discrete bipolar f(net) | Continuous bipolar f(net)
x(1)   |                         |
x(2)   |                         |
x(3)   |                         |
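A sketch of this comparison, assuming the activation forms above, η = 1, and illustrative input and initial weight vectors (the actual vectors of the worked example are not in the transcript):

```python
import numpy as np

def f_discrete(net):
    return 1.0 if net >= 0 else -1.0

def f_continuous(net, lam=1.0):
    return 2.0 / (1.0 + np.exp(-lam * net)) - 1.0

def run(f, w, inputs, eta=1.0):
    """Apply the Hebbian update delta_w = eta * f(net) * x for each input in turn."""
    w = w.copy()
    for x in inputs:
        y = f(np.dot(w, x))        # f(net)
        w = w + eta * y * x        # Hebbian update
    return w

w0 = np.array([1.0, -1.0, 0.0, 0.5])              # illustrative initial weights
xs = [np.array([1.0, -2.0, 1.5, 0.0]),            # illustrative inputs x(1)..x(3)
      np.array([1.0, -0.5, -2.0, -1.5]),
      np.array([0.0, 1.0, -1.0, 1.5])]

print("discrete  :", run(f_discrete, w0, xs))
print("continuous:", run(f_continuous, w0, xs))   # smaller ("tapered") changes, same direction
```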

19 Hebbian Learning The details of the computation for the three steps with a discrete bipolar activation function are presented below in the notes pages. The input vectors and the initial weight vector are:

20 Hebbian Learning The details of the computation for the three steps with a continuous bipolar activation function are presented below in the notes pages. The input vectors and the initial weight vector are:

21 Hebbian Learning. Recall that the simple form of the Hebbian learning law suggests that repeated application of the presynaptic signal x_j leads to an increase in y_k, and therefore to exponential growth that finally drives the synaptic connection into saturation. A number of researchers have proposed ways in which such saturation can be avoided; Sejnowski, for example, has suggested using the covariance of the pre- and postsynaptic signals (the covariance hypothesis discussed below).

22 Hebbian Learning The Hebbian synapse described below is said to involve the use of POSITIVE FEEDBACK.

23 Hebbian Learning. What is the principal limitation of this simplest form of learning? The above equation suggests that repeated application of the input signal x_j leads to an increase in y_k, and therefore to exponential growth that finally drives the synaptic connection into saturation. At that point of saturation no new information can be stored in the synapse, and selectivity is lost. Graphically, the relationship of Δw_kj with the postsynaptic activity y_k is a simple one: it is linear, with a slope of η x_j.
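A tiny sketch of this runaway, positive-feedback growth, assuming one linear unit and one repeatedly presented input (values illustrative):

```python
import numpy as np

eta = 0.5
x = np.array([1.0, 0.5])       # fixed presynaptic pattern (illustrative)
w = np.array([0.1, 0.1])       # small initial weights

for step in range(10):
    y = np.dot(w, x)           # postsynaptic activity grows as w grows
    w = w + eta * y * x        # which in turn grows w: positive feedback
    print(step, np.linalg.norm(w))   # the norm increases roughly exponentially
```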

24 Hebbian Learning. The so-called covariance hypothesis was introduced to deal with the principal limitation of the simplest form of Hebbian learning, and is given as Δw_kj = η (x_j - x̄)(y_k - ȳ), where x̄ and ȳ denote the time-averaged values of the presynaptic and postsynaptic signals.
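A sketch of this update, assuming the time-averaged activities x̄ and ȳ are supplied (here as the illustrative variables x_bar and y_bar):

```python
import numpy as np

def covariance_update(w, x, x_bar, y_bar, eta=0.1):
    """Covariance rule: delta_w = eta * (y - y_bar) * (x - x_bar)."""
    y = np.dot(w, x)                           # postsynaptic signal
    return w + eta * (y - y_bar) * (x - x_bar)

# Time-averaged pre- and postsynaptic activities, illustrative values
x_bar = np.array([0.2, -0.1, 0.3])
y_bar = 0.05
w = np.array([0.5, -0.5, 0.0])
x = np.array([1.0, 0.0, -1.0])
print(covariance_update(w, x, x_bar, y_bar))
```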

25 Hebbian Learning. If we expand the above equation, Δw_kj = η x_j y_k - η ȳ x_j - η x̄ y_k + η x̄ ȳ, the last term in the expansion is a constant and the first term is what we have for the simplest Hebbian learning rule, Δw_kj = η y_k x_j.

26 Hebbian Learning. Graphically, the relationship of Δw_kj with the postsynaptic activity y_k is still linear, but now with a slope of η (x_j - x̄); the straight line changes sign at y_k = ȳ, and the minimum value of the weight change Δw_kj (reached at y_k = 0) is -η ȳ (x_j - x̄).
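A quick numerical check of this picture, with illustrative values of η, x_j, x̄ and ȳ: the weight change is zero at y_k = ȳ and negative below it (for x_j > x̄).

```python
eta, x_j, x_bar, y_bar = 1.0, 1.0, 0.4, 0.6   # illustrative values, with x_j > x_bar

def delta_w(y_k):
    """Covariance rule as a function of the postsynaptic activity y_k."""
    return eta * (x_j - x_bar) * (y_k - y_bar)

for y_k in [0.0, 0.3, 0.6, 0.9]:
    print(y_k, delta_w(y_k))   # zero at y_k = y_bar; slope eta * (x_j - x_bar)
```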