Artificial Neural Networks Ch. 15: Grossberg Network

2 Objectives
- The Grossberg network is a self-organizing continuous-time competitive network.
- It is a continuous-time recurrent network and the foundation for adaptive resonance theory (ART).
- The biological motivation for the Grossberg network is the human visual system.

3 Biological Motivation: Vision
Figures: the eyeball and the retina.

4 Leaky Integrator
In mathematics, a leaky integrator equation is a specific differential equation used to describe a component or system that takes the integral of an input but gradually leaks a small amount of the input over time. It appears commonly in hydraulics, electronics, and neuroscience, where it can represent either a single neuron or a local population of neurons.

5 Basic Nonlinear Model
Leaky integrator:
\varepsilon \frac{dn(t)}{dt} = -n(t) + p(t)
where \varepsilon is the system time constant, p(t) is the input, and n(t) is the response.
If the input is constant, p(t) = p, the response is
n(t) = e^{-t/\varepsilon}\, n(0) + p \left(1 - e^{-t/\varepsilon}\right).

6 Leaky Integrator Response
Plot of n(t) for p(t) = 1, n(0) = 0, and \varepsilon = 1, 0.5, 0.25, \ldots (smaller time constants give faster responses; each curve approaches the steady-state value n(\infty) = p = 1).
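As a concrete illustration of the leaky integrator response, here is a minimal sketch that integrates the equation with a simple Euler scheme; the step size, simulation length, and printed time points are illustrative assumptions, not part of the slides.

```python
import numpy as np

def leaky_integrator(p, eps, n0=0.0, dt=0.01, t_end=5.0):
    """Euler integration of eps * dn/dt = -n + p for a constant input p."""
    steps = int(t_end / dt)
    n = np.empty(steps + 1)
    n[0] = n0
    for k in range(steps):
        n[k + 1] = n[k] + (dt / eps) * (-n[k] + p)
    return n

# Response to p(t) = 1 with n(0) = 0 for several time constants:
for eps in (1.0, 0.5, 0.25):
    n = leaky_integrator(p=1.0, eps=eps)
    print(f"eps = {eps}:  n(1 s) = {n[100]:.3f},  n(5 s) = {n[-1]:.3f}")
```

Smaller time constants reach the steady-state value (equal to the constant input) more quickly, as the closed-form solution predicts.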

7 Shunting Model
\varepsilon \frac{dn(t)}{dt} = -n(t) + \left(b^{+} - n(t)\right) p^{+} - \left(n(t) + b^{-}\right) p^{-}
- Excitatory: the input that causes the response to increase, p^{+}.
- Inhibitory: the input that causes the response to decrease, p^{-}.
- Biases b^{+} and b^{-} (nonnegative) determine the upper and lower limits on the response.
- The -n(t) term is the linear decay term; the factors (b^{+} - n(t)) and (n(t) + b^{-}) provide the nonlinear gain control.
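A minimal sketch of the shunting model under the same Euler scheme; the biases, time constant, and input magnitudes are illustrative assumptions. It shows that the response stays between -b^- and b^+ no matter how large the excitatory or inhibitory input becomes.

```python
def shunting_steady_state(p_plus, p_minus, b_plus=1.0, b_minus=1.0,
                          eps=0.1, dt=0.001, t_end=2.0):
    """Euler integration of the shunting equation
    eps * dn/dt = -n + (b_plus - n) * p_plus - (n + b_minus) * p_minus."""
    n = 0.0
    for _ in range(int(t_end / dt)):
        dn = -n + (b_plus - n) * p_plus - (n + b_minus) * p_minus
        n += (dt / eps) * dn
    return n

# The response saturates below b_plus for excitation and above -b_minus for inhibition.
for p_plus in (1.0, 10.0, 100.0):
    print(f"p+ = {p_plus:6.1f}, p- = 0    ->  n = {shunting_steady_state(p_plus, 0.0): .3f}")
print(f"p+ =    0.0, p- = 100  ->  n = {shunting_steady_state(0.0, 100.0): .3f}")
```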

8 Grossberg Network Three components: Layer 1, Layer 2 and the adaptive weights. The network includes short-term memory (STM) and long-term memory (LTM) mechanisms, and performs adaptation, filtering, normalization and contrast enhancement.

9 Layer 1 Receives external inputs and normalizes the intensity of the input pattern.

10 On-Center / Off-Surround
Excitatory input: [{}^{+}W^{1}]\mathbf{p}, where {}^{+}W^{1} = I (on-center), so neuron i is excited by its own input p_i.
Inhibitory input: [{}^{-}W^{1}]\mathbf{p}, where {}^{-}W^{1} has zeros on the diagonal and ones everywhere else (off-surround), so neuron i is inhibited by \sum_{j \ne i} p_j.
This type of connection pattern produces a normalization of the input pattern.

11 Normalization
Set the inhibitory bias {}^{-}b^{1} = 0 and the excitatory bias {}^{+}b^{1}_{i} = b^{+} for every neuron.
Steady-state neuron output:
n_{i}^{1} = \frac{b^{+} p_{i}}{1 + P} = \left(\frac{b^{+} P}{1 + P}\right) \bar{p}_{i},
where P = \sum_{j} p_{j} and \bar{p}_{i} = p_{i}/P is the relative intensity of input i.
The input vector is normalized: the total activity \sum_{i} n_{i}^{1} = b^{+} P/(1 + P) never exceeds b^{+}.

12 Layer 1 Response
Example: two input vectors with the same relative intensities but different absolute intensities produce steady-state Layer 1 responses with the same shape; only the (bounded) total activity differs.
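A sketch of the Layer 1 dynamics (the shunting model with on-center/off-surround connections and zero inhibitory bias). The two input vectors below are hypothetical examples chosen only to share the same relative intensities; the bias, time constant, and step size are also assumptions.

```python
import numpy as np

def layer1_steady_state(p, b_plus=1.0, eps=0.1, dt=0.001, t_end=2.0):
    """Euler integration of
    eps * dn_i/dt = -n_i + (b_plus - n_i) * p_i - n_i * sum_{j != i} p_j."""
    p = np.asarray(p, dtype=float)
    n = np.zeros_like(p)
    for _ in range(int(t_end / dt)):
        excite = (b_plus - n) * p        # on-center excitation
        inhibit = n * (p.sum() - p)      # off-surround inhibition
        n += (dt / eps) * (-n + excite - inhibit)
    return n

# Two hypothetical inputs with identical relative intensities (0.2 and 0.8):
for p in ([2.0, 8.0], [10.0, 40.0]):
    n = layer1_steady_state(p)
    print(p, "->", np.round(n, 3), " relative:", np.round(n / n.sum(), 3))
```

Both inputs yield outputs with the same relative shape, and the total activity stays below b_plus, matching the normalization result above.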

13 Characteristics of Layer 1 The network is sensitive to relative intensities of the input pattern, rather than absolute intensities. The output of Layer 1 is a normalized version of the input pattern. The on-center/off-surround connection pattern and the nonlinear gain control of the shunting model produce the normalization effect. The operation of Layer 1 explains the brightness constancy and brightness contrast characteristics of the human visual system.

14 Layer 2 A layer of continuous-time instars performs several functions.

15 Functions of Layer 2
- It normalizes the total activity in the layer.
- It contrast enhances its pattern, so that the neuron that receives the largest input will dominate the response (like the winner-take-all competition in the Hamming network).
- It operates as a short-term memory by storing the contrast-enhanced pattern.

16 Feedback Connections The feedback enables the network to store a pattern, even after the input has been removed. The feedback also performs the competition that causes the contrast enhancement of the pattern.

17 Equation of Layer 2
\varepsilon \frac{d\mathbf{n}^{2}(t)}{dt} = -\mathbf{n}^{2}(t) + \left({}^{+}\mathbf{b}^{2} - \mathbf{n}^{2}(t)\right)\left\{[{}^{+}W^{2}]\,\mathbf{f}^{2}(\mathbf{n}^{2}(t)) + W^{2}\mathbf{a}^{1}\right\} - \left(\mathbf{n}^{2}(t) + {}^{-}\mathbf{b}^{2}\right)[{}^{-}W^{2}]\,\mathbf{f}^{2}(\mathbf{n}^{2}(t))
{}^{+}W^{2} = I provides the on-center feedback connections; {}^{-}W^{2} (zeros on the diagonal, ones elsewhere) provides the off-surround feedback connections.
W^{2} consists of adaptive weights. Its rows, after training, will represent the prototype patterns.
Layer 2 performs a competition between the neurons, which tends to contrast enhance the output pattern, maintaining large outputs while attenuating small outputs.

18 Layer 2 Example
The net inputs to the two Layer 2 neurons are the inner products ({}_{1}\mathbf{w}^{2})^{T}\mathbf{a}^{1}, the correlation between prototype 1 and the input, and ({}_{2}\mathbf{w}^{2})^{T}\mathbf{a}^{1}, the correlation between prototype 2 and the input.

19 Layer 2 Response
The input vector \mathbf{a}^{1} (the steady-state output obtained from the Layer 1 example) is applied for 0.25 seconds and then removed. Input to neuron 1: ({}_{1}\mathbf{w}^{2})^{T}\mathbf{a}^{1}. Input to neuron 2: ({}_{2}\mathbf{w}^{2})^{T}\mathbf{a}^{1}.
[Plot: n_{1}^{2}(t) and n_{2}^{2}(t) versus t, showing contrast enhancement and storage.]
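Below is a hedged sketch of the Layer 2 dynamics: identity on-center feedback, off-surround feedback through the other neurons, and a saturating faster-than-linear transfer function. The prototype weights, input vector, biases, transfer function, and timing are illustrative assumptions rather than the slide's exact values.

```python
import numpy as np

def f2(n):
    """Illustrative Layer 2 transfer function: faster than linear for small n, then saturating."""
    return 10.0 * n ** 2 / (1.0 + n ** 2)

def simulate_layer2(W2, a1, t_on=0.25, t_end=0.75,
                    b_plus=1.0, b_minus=0.0, eps=0.1, dt=0.001):
    """Euler integration of
    eps * dn_i/dt = -n_i + (b_plus - n_i) * (f2(n_i) + (W2 @ a1)_i)
                          - (n_i + b_minus) * sum_{j != i} f2(n_j),
    with the input W2 @ a1 applied only for the first t_on seconds."""
    n = np.zeros(W2.shape[0])
    history = [n.copy()]
    for k in range(int(t_end / dt)):
        inp = W2 @ a1 if k * dt < t_on else np.zeros_like(n)
        fb = f2(n)
        excite = (b_plus - n) * (fb + inp)         # on-center feedback plus adaptive input
        inhibit = (n + b_minus) * (fb.sum() - fb)  # off-surround feedback
        n = n + (dt / eps) * (-n + excite - inhibit)
        history.append(n.copy())
    return np.array(history)

# Hypothetical prototype rows of W2 and a normalized Layer 1 output:
W2 = np.array([[0.90, 0.45],
               [0.45, 0.90]])
a1 = np.array([0.2, 0.8])
traj = simulate_layer2(W2, a1)
print("inputs to Layer 2 neurons :", np.round(W2 @ a1, 3))
print("n^2 just before input off :", np.round(traj[250], 3))
print("n^2 long after input off  :", np.round(traj[-1], 3))
```

The neuron that receives the larger input comes to dominate while the input is present (contrast enhancement), and its activity persists after the input is removed while the other neuron's activity decays (storage).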

20 Characteristics of Layer 2
Even before the input is removed, some contrast enhancement is performed. After the input has been set to zero, the network further enhances the contrast and stores the pattern. It is the nonlinear feedback that enables the network to store the pattern, and the on-center/off-surround connection pattern that causes the contrast enhancement.

21 Transfer Functions
Effect of the Layer 2 transfer function (see the figure in the text):
- Linear: perfect storage of any pattern, but amplifies noise.
- Slower than linear: amplifies noise, reduces contrast.
- Faster than linear: winner-take-all, suppresses noise, quantizes total activity.
- Sigmoid: suppresses noise, contrast enhances, not quantized.

22 Learning Law
The rows of the adaptive weight matrix W^{2} will represent patterns that have been stored and that the network will be able to recognize: long-term memory (LTM).
Learning law #1:
\frac{dw_{i,j}^{2}(t)}{dt} = \alpha\left\{-w_{i,j}^{2}(t) + n_{i}^{2}(t)\, n_{j}^{1}(t)\right\}
where -w_{i,j}^{2}(t) is a decay term and n_{i}^{2}(t)\, n_{j}^{1}(t) is Hebbian-type learning.
Learning law #2:
\frac{dw_{i,j}^{2}(t)}{dt} = \alpha\, n_{i}^{2}(t)\left\{-w_{i,j}^{2}(t) + n_{j}^{1}(t)\right\}
which turns off learning (and forgetting) when n_{i}^{2}(t) is not active.

23 Response of Adaptive Weights
Two different input patterns are alternately presented to the network for periods of 0.2 seconds at a time. The first row of the weight matrix, ( w_{1,1}^{2}(t), w_{1,2}^{2}(t) ), is updated when n_{1}^{2}(t) is active (pattern 1), and the second row, ( w_{2,1}^{2}(t), w_{2,2}^{2}(t) ), is updated when n_{2}^{2}(t) is active (pattern 2).
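A minimal sketch of learning law #2 driven by the presentation schedule described above. The two Layer 1 patterns, the learning rate, and the simplifying assumption that pattern k simply makes Layer 2 neuron k fully active while the other is silent are all illustrative.

```python
import numpy as np

alpha, dt, period = 1.0, 0.01, 0.2      # assumed learning rate, step size, presentation time
patterns = [np.array([0.8, 0.2]),       # hypothetical normalized Layer 1 outputs
            np.array([0.2, 0.8])]
W2 = np.zeros((2, 2))                   # rows will become the stored prototypes

for presentation in range(30):          # alternate the two patterns, 0.2 s each
    k = presentation % 2
    n1 = patterns[k]
    n2 = np.eye(2)[k]                   # assume neuron k is active, the other silent
    for _ in range(int(period / dt)):
        # Learning law #2: dw_i/dt = alpha * n2_i * (-w_i + n1); only the active row learns
        W2 += dt * alpha * n2[:, None] * (-W2 + n1[None, :])

print(np.round(W2, 3))                  # row 1 approaches pattern 1, row 2 approaches pattern 2
```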

24 Relation to Kohonen Law
Grossberg learning law (continuous-time):
\frac{d[{}_{i}\mathbf{w}^{2}(t)]}{dt} = \alpha\, n_{i}^{2}(t)\left\{-{}_{i}\mathbf{w}^{2}(t) + \mathbf{n}^{1}(t)\right\}
Euler approximation for the derivative:
\frac{d[{}_{i}\mathbf{w}^{2}(t)]}{dt} \approx \frac{{}_{i}\mathbf{w}^{2}(t+\Delta t) - {}_{i}\mathbf{w}^{2}(t)}{\Delta t}
Discrete-time approximation to the Grossberg law:
{}_{i}\mathbf{w}^{2}(t+\Delta t) = {}_{i}\mathbf{w}^{2}(t) + \Delta t\, \alpha\, n_{i}^{2}(t)\left\{-{}_{i}\mathbf{w}^{2}(t) + \mathbf{n}^{1}(t)\right\}

25 Relation to Kohonen Law
Rearrange terms:
{}_{i}\mathbf{w}^{2}(t+\Delta t) = \left[1 - \alpha\,\Delta t\, n_{i}^{2}(t)\right] {}_{i}\mathbf{w}^{2}(t) + \alpha\,\Delta t\, n_{i}^{2}(t)\, \mathbf{n}^{1}(t)
Assume that a faster-than-linear transfer function (winner-take-all) is used in Layer 2, so only the winning neuron has a nonzero output. With \alpha' = \alpha\,\Delta t\, n_{i}^{2}(t), the update for the winning row becomes
{}_{i}\mathbf{w}^{2}(t+\Delta t) = (1 - \alpha')\, {}_{i}\mathbf{w}^{2}(t) + \alpha'\, \mathbf{n}^{1}(t),
which has the same form as the Kohonen law:
{}_{i}\mathbf{w}(q) = (1 - \alpha)\, {}_{i}\mathbf{w}(q-1) + \alpha\, \mathbf{p}(q).
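A small numeric check of the algebra above; all of the values are made-up illustrations.

```python
import numpy as np

alpha, dt = 1.0, 0.1            # hypothetical learning rate and time step
n2_i = 0.8                      # assumed output of the winning Layer 2 neuron
w = np.array([0.9, 0.45])       # assumed current row of W^2
n1 = np.array([0.2, 0.8])       # assumed normalized Layer 1 output

# Discrete-time Grossberg update
w_grossberg = w + dt * alpha * n2_i * (-w + n1)

# Kohonen-style form with alpha' = alpha * dt * n2_i
alpha_prime = alpha * dt * n2_i
w_kohonen = (1 - alpha_prime) * w + alpha_prime * n1

print(w_grossberg, w_kohonen)   # identical: the two forms are algebraically equivalent
```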

26 Three Major Differences
- The Grossberg network is a continuous-time network.
- Layer 1 of the Grossberg network automatically normalizes the input vectors.
- Layer 2 of the Grossberg network can perform a "soft" competition, rather than the winner-take-all competition of the Kohonen network. This soft competition allows more than one neuron in Layer 2 to learn, which causes the Grossberg network to operate as a feature map.