COMP305. Part I. Artificial neural networks.

Topic 3. Learning Rules of the Artificial Neural Networks.

ANN Learning rules. The McCulloch-Pitts neuron is capable of storing information and of performing logical and arithmetical operations on it. The next step must be to realise another important function of the brain, which is to acquire new knowledge through experience, i.e. learning.

ANN Learning rules. Learning means to change in response to experience. In a network of MP-neurons, the binary weights of connections and the thresholds are fixed. The only possible change is a change in the pattern of connections, which is technically expensive. Some easily changeable free parameters are needed.

ANN Learning rules. The ideal free parameters to adjust, and so to implement learning without changing the pattern of connections, are the weights of connections w_ji. Abstract neuron (sketched below).
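As a point of reference, here is a minimal sketch of such an abstract neuron in Python, assuming the binary threshold activation of the MP model from the previous topic; the names `fire`, `weights` and `theta` are illustrative, not from the slides:

```python
# Minimal sketch of an abstract neuron: a weighted sum of inputs passed
# through a binary threshold. The adjustable free parameters are the
# connection weights w_ji.

def fire(inputs, weights, theta):
    """Return 1 if the weighted input sum reaches the threshold theta, else 0."""
    s = sum(w * a for w, a in zip(weights, inputs))
    return 1 if s >= theta else 0

print(fire([1, 0, 1], [1.0, 1.0, 1.0], theta=2.0))  # -> 1
```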

ANN Learning rules. Definition: an ANN learning rule defines how to adjust the weights of connections to obtain the desired output.

Hebb’s rule (1949). Hebb conjectured that a particular type of use-dependent modification of the connection strength of synapses might underlie learning in the nervous system.

Hebb’s rule (1949). Hebb introduced a neurophysiological postulate: “…When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells, such that A’s efficiency, as one of the cells firing B, is increased.”

Hebb’s rule (1949). The simplest formalisation of Hebb’s rule is to increase the weight of a connection at every next instant in the way:

w_ji(k+1) = w_ji(k) + Δw_ji(k),   (1)

where

Δw_ji(k) = C · a_i(k) · X_j(k).   (2)

Hebb’s rule (1949). Here w_ji(k) is the weight of the connection at instant k, w_ji(k+1) is the weight of the connection at the following instant k+1, Δw_ji(k) is the increment by which the weight of the connection is enlarged, C is a positive coefficient which determines the learning rate, a_i(k) is the input value from the presynaptic neuron at instant k, and X_j(k) is the output of the postsynaptic neuron at the same instant k.

Hebb’s rule (1949). Thus, by (1)-(2), the weight of a connection changes at the next instant only if the preceding input via this connection and the resulting output are both nonzero.

Hebb’s rule (1949). Equation (2) emphasises the correlational nature of a Hebbian synapse; it is sometimes referred to as the activity product rule.

Hebb’s rule (1949). For this reason, Hebb’s rule plays an important role in studies of ANN algorithms much “younger” than the rule itself, such as unsupervised learning or self-organisation, which we shall consider later.
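A minimal Python sketch of the update (1)-(2) may make the rule concrete; C, the inputs a_i and the output X follow the notation above, while the function name `hebb_update` is illustrative:

```python
# Hebbian update, equations (1)-(2): w_ji(k+1) = w_ji(k) + C * a_i(k) * X_j(k).

def hebb_update(w, a, X, C=1.0):
    """Return the updated weight vector for one postsynaptic neuron."""
    return [w_ji + C * a_i * X for w_ji, a_i in zip(w, a)]

# The weight grows only when input and output are both nonzero:
print(hebb_update([0.5, 0.5], a=[1, 0], X=1))  # -> [1.5, 0.5]
print(hebb_update([0.5, 0.5], a=[1, 0], X=0))  # -> [0.5, 0.5], no change
```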

Hebb’s rule in practice. Worked example (the numeric values shown on the original slides are not preserved in this transcript): a single output unit receives four inputs through weights w_1 … w_4, with learning rate C = 1 and given initial weights at t = 0. At each instant t an input pattern a_1 … a_4 is applied, the output X is computed, and every weight is updated by rule (1)-(2); the slides step through t = 0, 1, 2, 3, and so on…
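Since the numbers on these slides are lost, the following sketch only re-creates the style of the exercise; the initial weights, the threshold value and the input patterns are assumed for illustration:

```python
# Re-creation of the worked example's style: four inputs, one output unit,
# learning rate C = 1. Initial weights, threshold and input patterns below
# are assumed for illustration; the slides' actual values are lost.

C, theta = 1.0, 2.0
w = [1.0, 0.0, 1.0, 0.0]            # assumed weights at t = 0

patterns = [                        # assumed inputs a_1..a_4 at t = 0, 1, 2
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
]

for t, a in enumerate(patterns):
    s = sum(wj * aj for wj, aj in zip(w, a))
    X = 1 if s >= theta else 0                        # output at instant t
    w = [wj + C * aj * X for wj, aj in zip(w, a)]     # Hebbian update (1)-(2)
    print(f"t={t}: X={X}, new weights {w}")
```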

Next: Perceptron (1958). Rosenblatt (1958) explicitly considered the problem of pattern recognition, where a “teacher” is essential.