Financial Informatics – XVII: Unsupervised Learning
Khurshid Ahmad, Professor of Computer Science, Department of Computer Science, Trinity College, Dublin 2, IRELAND. November 19th, 2008.
Preamble
Neural Networks 'learn' by adapting in accordance with a training regimen. There are five key algorithms:
ERROR-CORRECTION OR PERFORMANCE LEARNING
HEBBIAN OR COINCIDENCE LEARNING
BOLTZMANN LEARNING (STOCHASTIC NET LEARNING)
COMPETITIVE LEARNING
FILTER LEARNING (GROSSBERG'S NETS)
Preamble
California sought to have the license of one of the largest auditing firms (Ernst & Young) removed because of their role in the well-publicized collapse of Lincoln Savings & Loan Association. Further, regulators could use a bankruptcy …
ANN Learning Algorithms
Hebbian Learning
DONALD HEBB, a Canadian psychologist, was interested in investigating PLAUSIBLE MECHANISMS FOR LEARNING AT THE CELLULAR LEVEL IN THE BRAIN (see, for example, Donald Hebb's (1949) The Organization of Behavior. New York: Wiley).
Hebbian Learning
HEBB'S POSTULATE: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."

Hebb proposed, on psychological grounds, the existence of synaptic modification during learning, in the absence, then, of any physiological evidence for such modification. He proposed that changes in the efficacy of synapses take place via the GROWTH OF SYNAPTIC KNOBS.

Since Hebb's original postulate, it has been experimentally demonstrated that correlated activity at the pre- and post-synaptic cells of many synapses in animal nervous systems alters the efficacy of the synapse in causing action potentials at the post-synaptic cell. However, the relationship of any such cellular changes to the actual storage of cognitive information remains speculative. Hebb's specific claim, that CHANGES IN THE EFFICACY OF SYNAPSES TAKE PLACE VIA GROWTH OF SYNAPTIC KNOBS, has not been commonly observed in adult animals. Nevertheless, many alternative mechanisms have been suggested: SYNAPTIC PLASTICITY OR MODIFIABILITY can be due to a change in the amount of pre-synaptic transmitter substance correlated with post-synaptic protein synthesis.
Hebbian Learning
Hebbian learning laws CAUSE WEIGHT CHANGES IN RESPONSE TO EVENTS WITHIN A PROCESSING ELEMENT THAT HAPPEN SIMULTANEOUSLY. THE LEARNING LAWS IN THIS CATEGORY ARE CHARACTERIZED BY THEIR COMPLETELY LOCAL, BOTH IN SPACE AND IN TIME, CHARACTER.

Hebb's laws, or more precisely his proposals, indicate that associative learning at the cellular level would result in an ENDURING modification in the activity pattern of a spatially distributed "assembly of nerve cells".

A Hebbian synapse is a synapse that uses a time-dependent, highly local, and strongly interactive mechanism to increase synaptic efficiency as a function of the correlation between the pre-synaptic and post-synaptic activities.
Hebbian Learning
LINEAR ASSOCIATOR: a substrate for Hebbian learning systems.
[Figure: a single-layer network with inputs x1, x2, x3, outputs y'1, y'2, y'3, and a full set of connection weights w11 … w33.]

A linear associator network should learn m pairs of input/output vectors [(x1, y1), (x2, y2), (x3, y3), …, (xm, ym)]: when one of the input vectors, say xk, is entered into the network, the output vector y' should be yk; when xk + e is entered, then the output should be yk + d, where e and d are small.

The problem of training the linear associator to associate xk with yk is essentially the problem of finding the best weight matrix W.
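Since training the linear associator amounts to finding a weight matrix, the idea can be sketched directly in code. The following minimal Python sketch is an added illustration, not from the original slides; the function name hebbian_train, the learning rate eta and the example patterns are all assumptions. It accumulates the Hebbian outer-product updates W <- W + eta * y * x^T and then recalls a stored pattern:

import numpy as np

def hebbian_train(xs, ys, eta=1.0):
    """Accumulate Hebbian (outer-product) updates: W <- W + eta * y x^T for each stored pair."""
    n_out, n_in = ys.shape[1], xs.shape[1]
    W = np.zeros((n_out, n_in))              # start from the zero weight matrix
    for x, y in zip(xs, ys):
        W += eta * np.outer(y, x)            # Hebb's law: correlated input/output strengthens w
    return W

# Illustrative, assumed patterns: three orthonormal inputs and their target outputs.
xs = np.eye(3)                               # x1, x2, x3 as rows
ys = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])

W = hebbian_train(xs, ys)
print(W @ xs[0])                             # recalling x1 returns y1 (orthonormal inputs)

With orthonormal inputs the recall is exact; with correlated inputs the stored pairs interfere with one another (cross-talk), which is why finding the "best" W is posed as a problem.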
Hebbian Learning
A simple form of the Hebbian learning rule is

Δwkj = η yk xj,

where η is the so-called rate of learning and xj and yk are the input and output respectively. This rule is also called the activity product rule.

The activity product rule that governs simple Hebbian learning suggests that the repeated application of the input signal xj leads to an increase in yk, and therefore to exponential growth that finally drives the input-output connection (the synaptic connection) into saturation. At that point no information will be stored in the synapse and selectivity is lost. (Haykin 1999:57)

[Figure: the weight change Δwkj plotted against the output yk is a straight line through the origin with slope η xj.]
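To see why this growth is exponential, consider the following added illustration, assuming a single linear unit yk = wkj xj with the input xj held constant. Each presentation then gives Δwkj = η yk xj = η xj² wkj, so that wkj(n+1) = (1 + η xj²) wkj(n) = (1 + η xj²)^(n+1) wkj(0). Since 1 + η xj² > 1, the weight grows geometrically with the number of presentations, which is exactly the runaway behaviour that drives the connection into saturation.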
Hebbian Learning
If there are m pairs of vectors to be stored in a network, then the training sequence will change the weight matrix W from its initial value of ZERO to its final state by simply adding together all of the incremental weight changes caused by the m applications of Hebb's law:

W = η [ y(1) x(1)ᵀ + y(2) x(2)ᵀ + … + y(m) x(m)ᵀ ],

i.e. the sum of the outer products of each stored output vector with its input vector.
Hebbian Learning
A worked example: consider the Hebbian learning of three input vectors x(1), x(2), x(3) in a network with a given initial weight vector w(0). The activation function f for this network is the sgn function. Recall that in Hebbian learning the weight change Δw is given as Δw = η f(net) x. Let η = 1. Now, we compute net(1) = (w(0))ᵀ x(1).
Hebbian Learning
A worked example (continued): the superscripts refer to the PRESENTATION of a new input pattern; the subscripts refer to the components of the vectors.
Hebbian Learning
A worked example (continued): the weight change for x(2) is given as Δw = η f(net(2)) x(2), with net(2) = (w(1))ᵀ x(2).
Hebbian Learning
A worked example (continued): for learning x(3), we have Δw = η f(net(3)) x(3), with net(3) = (w(2))ᵀ x(3).
Hebbian Learning
The worked example shows that with the discrete f(net) and η = 1, the weight change involves ADDING or SUBTRACTING the entire input pattern vector to or from the weight vector respectively.

Consider the case when the activation function is a continuous one. For example, take the bipolar continuous activation function

f(net) = 2 / (1 + exp(-λ net)) - 1.

Consider the Hebbian learning example discussed above, but instead of the bipolar binary (sgn) function we will use this bipolar continuous function with λ = 1.
Hebbian Learning
The worked example with the bipolar continuous activation function shows that the weight adjustments are tapered for the continuous function but are generally in the same direction:

Vector | Discrete Bipolar f(net) | Continuous Bipolar f(net)
x(1)   | 1                       | 0.905
x(2)   | -1                      | -0.077
x(3)   | -1                      | -0.932
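The table can be reproduced numerically. The Python sketch below is an added illustration: the four-component input vectors and the initial weight vector are not reproduced in this extract, so it assumes the values used in Zurada's textbook version of this example (x(1) = [1, -2, 1.5, 0], x(2) = [1, -0.5, -2, -1.5], x(3) = [0, 1, -1, 1.5], w(0) = [1, -1, 0, 0.5]); with η = 1 and λ = 1 these reproduce the tabulated f(net) values to within rounding.

import numpy as np

def sgn(net):
    """Discrete bipolar activation."""
    return 1.0 if net >= 0 else -1.0

def bipolar_continuous(net, lam=1.0):
    """Continuous bipolar activation: f(net) = 2 / (1 + exp(-lam * net)) - 1."""
    return 2.0 / (1.0 + np.exp(-lam * net)) - 1.0

def hebbian_steps(xs, w0, f, eta=1.0):
    """Apply the Hebbian rule w <- w + eta * f(w.x) * x once for each input pattern."""
    w = np.array(w0, dtype=float)
    fs = []
    for x in xs:
        net = float(w @ x)
        y = f(net)
        w = w + eta * y * x
        fs.append(round(y, 3))
    return fs, w

# Assumed example data (see the note above; not read from the slides).
xs = [np.array([1.0, -2.0, 1.5, 0.0]),
      np.array([1.0, -0.5, -2.0, -1.5]),
      np.array([0.0, 1.0, -1.0, 1.5])]
w0 = [1.0, -1.0, 0.0, 0.5]

print(hebbian_steps(xs, w0, sgn)[0])                  # [1.0, -1.0, -1.0]
print(hebbian_steps(xs, w0, bipolar_continuous)[0])   # approx [0.905, -0.077, -0.933]

The magnitudes in the continuous column show the tapering of the weight adjustments described above, while the signs agree with the discrete case.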
Hebbian Learning
The details of the computation for the three steps with the discrete bipolar activation function are worked through in the notes pages, starting from the given input vectors and initial weight vector.
Hebbian Learning
The details of the computation for the three steps with the continuous bipolar activation function are worked through in the notes pages, starting from the same input vectors and initial weight vector.
Hebbian Learning
Recall that the simple form of the Hebbian learning law suggests that the repeated application of the presynaptic signal xj leads to an increase in yk, and therefore to exponential growth that finally drives the synaptic connection into saturation. A number of researchers have proposed ways in which such saturation can be avoided. Sejnowski has suggested that the average values of x and y constitute presynaptic and postsynaptic thresholds which determine the sign of the synaptic modification – the so-called covariance hypothesis (Haykin 1999:57).

Problem: an input signal of unit amplitude is applied to a synaptic connection whose initial value is also unity. Calculate the variation in synaptic weight with time using Hebb's rule and Sejnowski's variant of Hebb's rule:
Hebb's rule: Δw = η x y
Sejnowski's rule: Δw = η (x - x̄)(y - ȳ)
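The stated problem can also be explored numerically. The Python sketch below is an added illustration rather than the solution in the notes: it assumes a single linear synapse y(n) = w(n) x(n), a learning rate η = 0.1, discrete time steps, and running means as the time-averaged values x̄ and ȳ required by Sejnowski's rule.

import numpy as np

def simulate(rule, steps=50, eta=0.1, x=1.0, w0=1.0):
    """Track the weight of a single linear synapse y = w * x under a chosen learning rule."""
    w = w0
    xs, ys, weights = [], [], []
    for _ in range(steps):
        y = w * x
        xs.append(x)
        ys.append(y)
        x_bar, y_bar = np.mean(xs), np.mean(ys)   # time-averaged pre- and post-synaptic signals
        if rule == "hebb":
            dw = eta * x * y                      # plain activity product rule
        else:                                     # "covariance": Sejnowski's variant
            dw = eta * (x - x_bar) * (y - y_bar)
        w += dw
        weights.append(w)
    return weights

print(simulate("hebb")[:5])          # 1.1, 1.21, 1.331, ... : geometric (runaway) growth
print(simulate("covariance")[:5])    # stays at 1.0: a constant input equals its own average

Under the plain rule the weight is multiplied by (1 + η x²) at every step; under the covariance variant the constant unit input equals its own time average, so the weight change vanishes and the runaway growth is suppressed.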
Hebbian Learning
The Hebbian synapse described above is said to involve the use of POSITIVE FEEDBACK: the output yk strengthens the very weight wkj that helps produce it, which in turn increases yk further.
Hebbian Learning
What is the principal limitation of this simplest form of learning? The above equation suggests that the repeated application of the input signal xj leads to an increase in yk, and therefore to exponential growth that finally drives the synaptic connection into saturation. At that point of saturation no information can be stored in the synapse and selectivity will be lost. Graphically, the relationship of Δwkj with the postsynaptic activity yk is a simple one: it is linear, with a slope of η xj.
Hebbian Learning
The so-called covariance hypothesis was introduced to deal with the principal limitation of the simplest form of Hebbian learning, and is given as

Δwkj = η (xj - x̄)(yk - ȳ),

where x̄ and ȳ denote the time-averaged values of the pre-synaptic and post-synaptic signals.
Hebbian Learning
If we expand the above equation:

Δwkj = η (xj - x̄)(yk - ȳ) = η xj yk - η ȳ xj - η x̄ yk + η x̄ ȳ.

The last term in the expansion, η x̄ ȳ, is a constant, and the first term, η xj yk, is what we have for the simplest Hebbian learning rule.
Hebbian Learning
Graphically, the relationship of Δwkj with the postsynaptic activity yk is still linear, but with a slope of η (xj - x̄). For xj > x̄ the straight line crosses the yk axis (Δwkj = 0) at yk = ȳ, and the minimum value of the weight change Δwkj, reached at yk = 0, is -η (xj - x̄) ȳ.