Pattern Recognition using Hebbian Learning and Floating-Gates


Certain pattern recognition problems have been shown to be easily solved by artificial neural networks, and many neural network chips have been made and sold. Most of these are not terribly biologically realistic.

[Figure: a feed-forward network with input layer, hidden layer, and output layer neurons.] A 2-dimensional example: a single unit takes inputs x and y, each ranging over (-10, 10), with weights w1 = 0.3 and w2 = 0.7, and outputs +1 or -1.
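A minimal Python sketch of such a unit, using the weights from the figure; the zero threshold theta and the strict inequality are my assumptions, since the slide does not specify them:

```python
def threshold_unit(x, y, w1=0.3, w2=0.7, theta=0.0):
    """A single linear threshold unit over a 2-D input.

    Outputs +1 when the weighted sum w1*x + w2*y exceeds the
    threshold theta, else -1, so the line w1*x + w2*y = theta
    splits the (x, y) plane into a +1 and a -1 half-plane.
    """
    return 1 if w1 * x + w2 * y > theta else -1

# Sample the unit on a few points of the x, y in (-10, 10) range.
for x, y in [(-10, -10), (0, 0), (10, 10)]:
    print((x, y), threshold_unit(x, y))
```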

[Figure: a second unit on the same inputs x, y in (-10, 10), with weights w1 = 0.5 and w2 = 0.11, again outputting +1 or -1.] Putting the two together, we respond to a smaller region of this 2-D input space.
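Continuing the sketch, a second-layer unit can combine the two hidden units above. Giving it +1 weights on both and a threshold that only the "both active" case clears is one way to get the smaller region; the slide does not spell out the combination, so treat this as an illustration:

```python
def unit(x, y, w1, w2, theta=0.0):
    # Linear threshold unit: +1 above its dividing line, -1 below.
    return 1 if w1 * x + w2 * y > theta else -1

def combined(x, y):
    h1 = unit(x, y, 0.3, 0.7)    # first hidden unit (earlier slide)
    h2 = unit(x, y, 0.5, 0.11)   # second hidden unit (this slide)
    # Output unit with +1 weights on both hidden units: it fires
    # only when h1 + h2 = 2, i.e. in the intersection of the two
    # half-planes -- a smaller region of the 2-D input space.
    return 1 if h1 + h2 > 1 else -1
```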

So in general, we can apply this type of operation to an N-dimensional input, with the hidden units defining hyperplanes in this input space. The individual output units combine these hyperplanes to create specific subregions of this N-dimensional space. This is what pattern recognition is about. [Figure: a face image and a helicopter image, each 100 x 77 pixels = a 7700-dimensional input space, feeding two output units; labeled "Easy Task".] As you might expect, these two images live very far apart from each other in this very high-dimensional space. But if we had a set of 100 faces that we wanted to recognize, this might be harder. What happens if the faces are rotated, shifted, or scaled?

The learning rule: how do I pick the weight matrices to solve these tasks? One way is to present inputs and adjust the weights if the output is not what we want: $\Delta w_{ik} = \eta \, (t_i^\mu - o_i^\mu) \, x_k^\mu$ if $o_i^\mu \neq t_i^\mu$, and $\Delta w_{ik} = 0$ otherwise, where $o_i^\mu$ is the output of unit $i$ on example $\mu$, $t_i^\mu$ is the target output for unit $i$ on example $\mu$, $x_k^\mu$ is input $k$ of example $\mu$, and $\eta$ is the learning rate. This is known as the perceptron learning rule. A training set of examples with target output values (e.g., many faces and many helicopters) is defined and presented one by one, adjusting the weight matrix after each evaluation. The learning rule assigns large weights to those components of the input that allow discrimination between the classes of inputs.
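As a hedged illustration, the rule above can be run as a simple training loop; the +1/-1 output coding and the bias term are assumptions on my part:

```python
import numpy as np

def perceptron_train(X, T, eta=0.1, epochs=100):
    """Perceptron learning rule: for each example mu, compare the
    output o to the target t and adjust the weights by
    eta * (t - o) * x when they disagree; the change is 0 otherwise.

    X: (n_examples, n_inputs) inputs; T: targets in {-1, +1}.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, T):
            o = 1 if w @ x + b > 0 else -1
            if o != t:                  # "= 0 otherwise"
                w += eta * (t - o) * x  # input components that
                b += eta * (t - o)      # discriminate gain weight
    return w, b
```

For linearly separable classes (like the slide's faces vs. helicopters), this loop is known to converge to a separating weight vector.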

Face vs. Helicopter Example

The Hopfield Recurrent Network: Associative Memory and Energy Functions. [Figure: a recurrent network with states S_i, inputs, and weight matrix w_ij.] The concept of an energy function of a recurrent neural network was introduced by Hopfield (1982) to describe the state of a network. By studying the dynamics, it is possible to show that the activity in the network will always decrease in energy, evolving towards a local minimum. The network thus defines an "energy landscape" in which the state of the network settles. By starting closer to one minimum (a stored pattern) than to other points in the landscape, the network will settle towards that minimum and "recall" the stored pattern.
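A minimal numerical sketch of such a network, assuming the standard Hebbian outer-product storage and the energy $E = -\frac{1}{2}\sum_{ij} w_{ij} s_i s_j$ (the slide names neither explicitly):

```python
import numpy as np

def store(patterns):
    """Hebbian (outer-product) storage of +/-1 patterns: each stored
    pattern becomes a minimum of the energy landscape."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def energy(W, s):
    # E = -1/2 * s^T W s; asynchronous updates never increase it.
    return -0.5 * s @ W @ s

def recall(W, s, steps=200, seed=0):
    """Start near a stored pattern and settle: flip one randomly
    chosen unit at a time toward the sign of its total input, so the
    state rolls downhill in energy to a local minimum."""
    rng = np.random.default_rng(seed)
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s
```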

This view of neural processing has its merits, provides insight into this type of computational structure, and has spawned new fields of its own, but it does not describe the current neurobiological state of knowledge very well. In particular, neurons communicate with spikes, and the backpropagation learning rule is not a good match to what has been found. So what do we know about neurobiological learning? Hebbian learning: if both cells are active, strengthen the synapse; if only the post-synaptic cell is active, weaken the synapse.
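A toy version of this rule for binary activities, just to make the two cases concrete (the step size eta is an assumption):

```python
def hebbian_update(w, pre, post, eta=0.01):
    """Hebbian rule from the slide, with pre/post activity in {0, 1}:
    both cells active          -> strengthen the synapse,
    only post-synaptic active  -> weaken it,
    post-synaptic cell silent  -> no change.
    Compactly: dw = eta * post * (2 * pre - 1).
    """
    return w + eta * post * (2 * pre - 1)
```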

In fact, learning at some synapses seems to be even more specific: temporal ordering seems to play a role in determining the change in the synapse. [Figure: the weight change Δw as a function of the time between pre-synaptic and post-synaptic spikes; pre-before-post strengthens the synapse, post-before-pre weakens it (Abbott and Blum, 1996).]
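A sketch of such a spike-timing-dependent window; the exponential shape and the constants are my assumptions, since the slide only gives the qualitative curve:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change as a function of dt = t_post - t_pre (ms).

    Pre-before-post (dt > 0) strengthens the synapse; post-before-pre
    (dt < 0) weakens it; the effect decays exponentially with |dt|.
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    if dt < 0:
        return -a_minus * np.exp(dt / tau)
    return 0.0
```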

Chip Idea:
1. Design a spiking neural network that can learn using the spike-timing rule to solve a particular temporal pattern recognition problem.
2. Design a floating-gate modification circuit that can implement the learning rule.