Ch10: Self-Organizing Feature Maps


。 Cortex:
   i) 6 layers of neurons
   ii) Size:
   iii) Thickness:
。 Ordered feature maps, e.g.,
   Auditory cortex -- tonotopic maps
   Hippocampal cortex -- geographic maps
   Somatosensory cortex -- somatic maps
   Visual cortex -- retinotectal maps

。 Macroscopic scale: consistently uniform structure.
   Microscopic scale: neurons are located consistently relative to one another,
   giving a logical ordering of functionality.
   Tissue level – inheritance
   Mental level – learning

◎ Architecture of SOFM Neural Networks
   Primary mechanisms:
   (1) Lateral feedback
   (2) Clustering
   (3) Topology-preserving projection

○ Lateral Feedback
   Objective: formation of localized responses.

。 Lateral interaction function γ(k):
   +: excitation (for nearby neurons)
   -: inhibition (for more distant neurons)

。 Output of neuron i:
      y_i = a( x_i + Σ_k γ(k) y_{i+k} )
   x_i: primary input,  Σ_k γ(k) y_{i+k}: lateral input,  a: transfer function
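A minimal sketch (not the lecture's code) of this lateral-feedback relaxation in a 1-D array: a "Mexican hat" kernel (excitatory close by, inhibitory farther out) is applied repeatedly, and activity settles into a localized bubble around the input peak. The kernel radii, the gains, and the clipping transfer function a(.) below are illustrative assumptions.

    # Illustrative sketch of lateral feedback in a 1-D neuron array (assumed
    # parameters; wrap-around boundary via np.roll).
    import numpy as np

    def mexican_hat(radius_exc=3, radius_inh=8, w_exc=1.0, w_inh=0.4):
        """Lateral interaction kernel gamma(k): + (excitation) near the neuron,
        - (inhibition) farther away."""
        offsets = np.arange(-radius_inh, radius_inh + 1)
        kernel = np.where(np.abs(offsets) <= radius_exc, w_exc, -w_inh)
        return offsets, kernel

    def transfer(u, y_max=10.0):
        """Transfer function a(.): piecewise linear, clipping activity to [0, y_max]."""
        return np.clip(u, 0.0, y_max)

    def relax(x, steps=30):
        """Iterate y_i <- a(x_i + sum_k gamma(k) * y_{i+k}) until a bubble forms."""
        offsets, kernel = mexican_hat()
        y = np.zeros_like(x)
        for _ in range(steps):
            lateral = np.zeros_like(x)
            for k, g in zip(offsets, kernel):
                lateral += g * np.roll(y, -k)   # lateral input from neighbor at offset k
            y = transfer(x + lateral)           # primary input + lateral input
        return y

    # A smooth, single-peaked input -> activity concentrates into one "bubble".
    neurons = np.arange(50)
    x = np.exp(-0.5 * ((neurons - 25) / 6.0) ** 2)
    print(np.round(relax(x), 1))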

○ Clustering
   e.g., clustering of activity ("activity bubbles") in response to an input signal:
   -- in a 1-D array of neurons
   -- in a 2-D array of neurons
   [figures of the resulting activity distributions omitted]

。 A nonsmooth input signal, a nonlinear transfer function a, or an arbitrary
   lateral feedback function leads to irregular activity bubbles.

○ Topology Preserving Projection
。 3-D: physical space → weight space
。 2-D
。 1-D
   [figures of the 3-D, 2-D, and 1-D projections omitted]

◎ Mathematical Treatment of Self-Organization
○ 1-D case
   x: a scalar input signal to a neuron
   neurons: 1, 2, ..., l
   weights: w_1, w_2, ..., w_l

。 The best match: neuron c whose weight is closest to the input,
      |x - w_c| = min_i |x - w_i|
。 Neighborhood neurons: N_c = {c-1, c, c+1} (first-order neighbors)

。 Weight update, for neurons i in the neighborhood N_c:
      w_i(t+1) = w_i(t) + α(t) g(i, c) [ x(t) - w_i(t) ]
   α: learning step,  g: Gaussian neighborhood function
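A minimal sketch of this 1-D update, under assumed values for the number of neurons l, the learning step α, and the Gaussian width σ (in practice both α and σ are decreased over time):

    # Illustrative 1-D SOM: scalar inputs, scalar weights w_1..w_l,
    # Gaussian neighborhood around the best-matching neuron c.
    import numpy as np

    rng = np.random.default_rng(0)
    l = 20                                  # neurons 1, 2, ..., l
    w = rng.uniform(0.0, 1.0, size=l)       # initial scalar weights

    def g(i, c, sigma):
        """Gaussian neighborhood weight between neuron i and the winner c."""
        return np.exp(-((i - c) ** 2) / (2.0 * sigma ** 2))

    alpha, sigma = 0.3, 2.0                 # assumed constants (usually decayed)
    for t in range(2000):
        x = rng.uniform(0.0, 1.0)           # scalar input signal x(t)
        c = int(np.argmin(np.abs(x - w)))   # best match: weight closest to x
        i = np.arange(l)
        w += alpha * g(i, c, sigma) * (x - w)   # update winner and its neighbors

    print(np.round(w, 2))   # weights tend to become ordered along the array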

○ 2-D System
   Input: scalar signals x_1, x_2, ..., x_n (the input vector x)
   Output response of map unit (i, j); weight vector w_ij
。 Find the best match between x and the w_ij (2 ways, e.g., minimum Euclidean
   distance or maximum inner product).
。 Topological neighborhood -- neighboring units are also engaged in learning,
   but with different degrees (approximated by a Gaussian that is reduced with time).
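A minimal sketch of such a 2-D map, under an assumed grid size, decay schedules, and input distribution: the best-matching unit is found by minimum Euclidean distance, and a Gaussian topological neighborhood that shrinks over time determines how strongly each unit is engaged in learning.

    # Illustrative 2-D SOM (assumed 10x10 grid, 3-D inputs, linear decay schedules).
    import numpy as np

    rng = np.random.default_rng(0)
    rows, cols, dim = 10, 10, 3
    w = rng.uniform(0.0, 1.0, size=(rows, cols, dim))    # weight vectors w_ij

    # Grid coordinates of each map unit, used for the topological neighborhood.
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

    T = 5000
    for t in range(T):
        x = rng.uniform(0.0, 1.0, size=dim)              # input vector x

        # Best match: unit whose weight vector is closest to x (Euclidean distance).
        d = np.linalg.norm(w - x, axis=-1)
        c = np.unravel_index(np.argmin(d), d.shape)

        # Gaussian topological neighborhood around c; its width and the
        # learning rate are reduced with time.
        sigma = 0.5 + 3.0 * (1.0 - t / T)
        alpha = 0.01 + 0.5 * (1.0 - t / T)
        dist2 = np.sum((grid - np.array(c)) ** 2, axis=-1)
        h = np.exp(-dist2 / (2.0 * sigma ** 2))

        # Engage all units, each to a different degree h, in the update.
        w += alpha * h[..., None] * (x - w)

    # Nearby map units end up with similar weight vectors (topology-preserving map).
    print(np.round(w[0, 0], 2), np.round(w[-1, -1], 2))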

◎ Applications – Geometrical Modeling
   S. W. Chen, G. C. Stockman, and K. E. Chang, "SO Dynamic Deformation for Building of 3-D Models," IEEE Trans. on Neural Networks, Vol. 7, No. 2, pp. 374-387, 1996.

○ Data Acquisition