
Concept Learning and Optimisation with Hopfield Networks
Kevin Swingler, University of Stirling
Presentation to UKCI 2011, Manchester

Hopfield Networks

Example – Learning Digit Images
[Figure: learned digit patterns, a partial clue, and the recalled pattern]

How?
Learning: w_ij = w_ij + u_i u_j
Recall: u_i = Σ_{j≠i} w_ij u_j
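
A minimal sketch of these two rules in Python/NumPy, assuming bipolar (±1) patterns; the sign threshold on recall and the asynchronous update order are assumptions not spelled out on the slide.

```python
import numpy as np

def train(patterns):
    """Hebbian learning: w_ij = w_ij + u_i * u_j for every stored pattern."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for u in patterns:
        W += np.outer(u, u)
    np.fill_diagonal(W, 0.0)                 # no self-connections (j != i)
    return W

def recall(W, clue, sweeps=10, seed=0):
    """Recall: repeatedly set u_i from the weighted sum of the other units."""
    rng = np.random.default_rng(seed)
    u = clue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(u)):    # asynchronous unit updates
            u[i] = 1 if W[i] @ u >= 0 else -1
    return u

# Usage: store two patterns, then recover the first from a corrupted clue.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train(patterns)
clue = np.array([1, -1, 1, -1, 1, 1])        # last element flipped
print(recall(W, clue))
```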

Pattern Discovery
Random pattern → score against target → learning rate = score, σ

Learning Concepts or Optimal Patterns
[Figure: random patterns weighted by their score σ; example concept = symmetry]

How?
The weight update rule is adapted to: w_ij = w_ij + σ u_i u_j
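
A sketch of this score-weighted Hebbian update as a training loop, again in Python/NumPy; the symmetry scorer and the trial count are illustrative assumptions, not values from the presentation.

```python
import numpy as np

def symmetry_score(u):
    """Assumed concept scorer: fraction of positions matching their mirror image."""
    return np.mean(u == u[::-1])

def learn_concept(n=8, trials=5000, seed=0):
    """Present random patterns and apply w_ij = w_ij + sigma * u_i * u_j."""
    rng = np.random.default_rng(seed)
    W = np.zeros((n, n))
    for _ in range(trials):
        u = rng.choice([-1, 1], size=n)      # random stimulus pattern
        sigma = symmetry_score(u)            # its score becomes the learning rate
        W += sigma * np.outer(u, u)
    np.fill_diagonal(W, 0.0)
    return W

W = learn_concept()    # attractors of W should now favour symmetric patterns
```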

Examples
[Figure: patterns recovered for concept = symmetry and concept = horizontal]

Speed
[Table: mean search length on a simple target for GA, EDA, and the Hopfield method at various n]

Relation to EDA
EDA probabilities: R = r_1 … r_n
Current pattern: P = p_1 … p_n
Probability that element i = 1: P(p_i = 1) = r_i
Probability update rule: r_i = r_i + f(P) if p_i = 1, where f(P) is the score for pattern P
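
For comparison, a sketch of that univariate EDA update in Python; the learning-rate scaling, the clip that keeps each r_i a valid probability, and the toy score function are my additions rather than details from the slide.

```python
import numpy as np

def eda_step(r, score, rng, lr=0.05):
    """One iteration of the slide's rule: r_i = r_i + f(P) when p_i = 1."""
    P = (rng.random(len(r)) < r).astype(int)  # sample a pattern from the marginals
    f = score(P)                              # f(P) = score for pattern P
    r = np.where(P == 1, r + lr * f, r)       # increment r_i only where p_i = 1
    return np.clip(r, 0.0, 1.0)               # keep each r_i a valid probability

# Usage with a toy score that rewards patterns containing many ones.
rng = np.random.default_rng(0)
r = np.full(10, 0.5)
for _ in range(200):
    r = eda_step(r, lambda P: P.mean(), rng)
print(r.round(2))
```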

Relation to EDA
In the Hopfield case, the probability that element i = 1 is a function of the weights and the other elements: P(p_i = 1) = g(W, p_{j≠i}).
The dynamics are not stochastic, and the marginal probabilities are never represented explicitly.
Settling the network from a random state samples from the learned distribution without explicitly sampling a joint distribution.
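
A sketch of that settling step, intended to be called on a weight matrix W trained as in the earlier sketches; the convergence check and sweep limit are my own choices.

```python
import numpy as np

def settle(W, seed=None, max_sweeps=100):
    """Run asynchronous updates from a random state until nothing changes.

    Each call on a trained W draws one sample (attractor) from the
    learned distribution, e.g. sample = settle(W).
    """
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    u = rng.choice([-1, 1], size=n)            # random start state
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(n):
            new = 1 if W[i] @ u >= 0 else -1
            if new != u[i]:
                u[i], changed = new, True
        if not changed:                        # reached an attractor
            break
    return u
```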

Practicalities 1
If P is an attractor state for the network, then so is its inverse, -P.
Scores for target patterns need their distance metric altered accordingly, or compare both the pattern and its inverse and keep the higher score.
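
A one-function sketch of the second option, assuming the settled pattern u is a NumPy array (so -u is its inverse) and score measures closeness to the target:

```python
def best_of_pair(u, score):
    """Return whichever of the pattern and its inverse scores higher against the target."""
    return u if score(u) >= score(-u) else -u
```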

Practicalities 2
The random patterns used during training must have elements drawn from an even distribution; deviation from this impairs learning.

Uses and Benefits
Where there are many possible solutions, this provides a way to sample good solutions at random without additional searching.
Settled solutions tend to be close to the seeded start point, so the method can find a local optimum near a given start point, again without explicit search.

Limitations
– The storage capacity of the network
– The size of the search space: can we use a more sparsely connected network?
– Inverse patterns need care
Currently:
– Only tested on binary patterns
– No evolution of patterns: stimuli must all be random

Thank You