Pencil-and-Paper Neural Networks Prof. Kevin Crisp St. Olaf College

The Biological Neuron A neuron can be conceived as a simple device that produces an output only when the net input is sufficiently intense.

The Artificial Neuron Thus, an artificial neuron consists of two parts: an integrator that sums dendritic inputs from other neurons, and a comparator that compares the sum to a threshold value.

The Integrator There may be many synaptic inputs onto a single neuron's dendrites. At any moment, only some of these (n_j) will be active and firing. Furthermore, individual synapses can be strong or weak, and their strength (w_j; weight) can change with learning. For binary artificial neurons (firing: n_j = 1; not firing: n_j = 0), the output of the integrator is the sum of the weights of the firing synapses.

The Comparator The comparator compares the sum it receives from the integrator to a threshold value and produces a binary response: n_i = 1 if the sum >= threshold (the artificial neuron is firing); n_i = 0 if the sum < threshold (the artificial neuron is not firing).

The McCulloch-Pitts Neuron
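As a minimal illustration (not part of the original worksheet), the integrator and comparator can be written out in a few lines of Python; the function and variable names here are my own:

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Binary McCulloch-Pitts unit: inputs are 0/1, weights are synaptic strengths."""
    # Integrator: add up the weights of the synapses whose presynaptic cell is firing.
    total = sum(w for n, w in zip(inputs, weights) if n == 1)
    # Comparator: fire (1) only if the summed input reaches the threshold.
    return 1 if total >= threshold else 0

# Two of three unit-weight inputs are firing; with a threshold of 2 the neuron fires.
print(mcculloch_pitts([1, 0, 1], [1, 1, 1], 2))  # -> 1
print(mcculloch_pitts([1, 0, 0], [1, 1, 1], 2))  # -> 0
```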

Learning in Hopfield Networks

A Very Simple Example Imagine a squirrel has cached nuts in a yard for winter. Trees in the yard serve as landmarks. A neural network uses the positions of trees to recall where nuts are cached.

The input "map" is represented as the activation states of a layer of presynaptic neurons called the input layer. Each input layer neuron fires (n = 1) if a tree is present in its corresponding field. If no tree is present in a neuron's receptive field, that input layer neuron does not fire (n = 0).

The output "map" is represented as the activation states of a layer of postsynaptic neurons called the output layer. Each output layer neuron fires (n = 1) if a nut is present in its corresponding field. If no nut is present in a neuron's receptive field, that output layer neuron does not fire (n = 0).
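For concreteness (this encoding is implied by the later slides rather than shown here), each map can be written as a binary vector with one entry per receptive field:

```python
# Hypothetical six-field yard: 1 = tree (or nut) present in the field, 0 = absent.
tree_map = [1, 0, 1, 0, 1, 0]   # input layer activations, "101010"
nut_map  = [0, 1, 0, 1, 0, 1]   # output layer activations, "010101"
```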

Each input layer neuron is connected to each output layer neuron.

The synaptic matrix consists of the weights of the synapses from every input layer neuron onto every output layer neuron.

A Simple Learning Rule For every synapse: if the presynaptic cell and postsynaptic cell are both firing (1), the synapse between them should be strengthened (0->1). Note that if there are four trees in a map, that means there are four strong synapses onto each "nut" neuron. This redundancy in the distributed representation allows neural networks to recognize patterns even when inputs are missing or noisy.
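A sketch of this rule in Python (the name train_pair and the representation of the synaptic matrix as a list of rows are my own choices, assuming one row per input neuron and one column per output neuron):

```python
def train_pair(weights, input_pattern, output_pattern):
    """Strengthen every synapse whose presynaptic and postsynaptic cells both fire."""
    for i, pre in enumerate(input_pattern):
        for j, post in enumerate(output_pattern):
            if pre == 1 and post == 1:
                weights[i][j] = 1   # 0 -> 1; weights are never weakened
    return weights

# An untrained 6 x 6 synaptic matrix: every synapse starts out weak (0).
W = [[0] * 6 for _ in range(6)]
```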

To simulate recall, count, for each column of the synaptic matrix, the number of 1's that fall in rows corresponding to 1's in the input pattern.

Whenever a column sum is greater than or equal to the number of 1's in the input pattern, the corresponding output layer neuron fires.
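Recall can be sketched the same way (again with illustrative names): sum each column over the rows whose input neuron is firing, then apply the threshold:

```python
def recall(weights, input_pattern):
    """Return the output pattern evoked by an input cue."""
    threshold = sum(input_pattern)   # number of 1's in the input pattern
    output = []
    for j in range(len(weights[0])):   # one column per output neuron
        # Count the strong synapses in column j whose presynaptic row is firing.
        column_sum = sum(weights[i][j] for i, pre in enumerate(input_pattern) if pre == 1)
        output.append(1 if column_sum >= threshold else 0)
    return output
```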

Pencil-and-Paper Neural Networks Input layer neurons are represented by ones and zeros to the left of the synaptic matrix. Output layer neurons are represented by ones and zeros at the top of the synaptic matrix. If an input layer neuron is firing at the same time as an output layer neuron, the synapse is strengthened.

Pencil-and-Paper Neural Networks A network consisting of 6 input layer neurons and 6 output layer neurons is trained to associate the input “101010” with the output “010101”.
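Worked out in Python (a sketch, not the worksheet itself), the learning rule turns on a weight exactly where an active input row meets an active output column:

```python
x1 = [1, 0, 1, 0, 1, 0]   # input  "101010"
y1 = [0, 1, 0, 1, 0, 1]   # output "010101"

# Weight is 1 wherever the presynaptic and postsynaptic cells fire together.
W = [[1 if xi == 1 and yj == 1 else 0 for yj in y1] for xi in x1]
for row in W:
    print(row)
# Rows 1, 3 and 5 (the active inputs) become [0, 1, 0, 1, 0, 1]; the rest stay all zero.
```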

Pencil-and-Paper Neural Networks The same network is now trained with a second association ("110100" with "001100"). Note that synapses are never weakened; otherwise the network would forget what it learned before.
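Continuing the sketch, the second association only ever turns weights on, so the resulting matrix is the element-wise OR of the two trained patterns:

```python
pairs = [([1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1]),   # "101010" -> "010101"
         ([1, 1, 0, 1, 0, 0], [0, 0, 1, 1, 0, 0])]   # "110100" -> "001100"

W = [[0] * 6 for _ in range(6)]
for x, y in pairs:
    for i, pre in enumerate(x):
        for j, post in enumerate(y):
            if pre == 1 and post == 1:
                W[i][j] = 1           # strengthened synapses are never weakened

for row in W:
    print(row)
# Expected synaptic matrix after both associations:
# [0, 1, 1, 1, 0, 1]
# [0, 0, 1, 1, 0, 0]
# [0, 1, 0, 1, 0, 1]
# [0, 0, 1, 1, 0, 0]
# [0, 1, 0, 1, 0, 1]
# [0, 0, 0, 0, 0, 0]
```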

Pencil-and-Paper Neural Networks Now that the network has learned a second association, can it still remember the first? To test this, the counting process from recall is repeated for each column, using the first input as the cue.

Pencil-and-Paper Neural Networks When all the column sums have been calculated, the thresholding function is applied: the output neuron corresponding to a column fires if its sum is greater than or equal to the lesser of the number of ones in the current input and the number of ones in the input the association was originally trained with.
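Testing recall of the first input against the matrix from the sketch above (all names and the layout are illustrative) shows that the first memory survives the second round of training:

```python
W = [[0, 1, 1, 1, 0, 1],
     [0, 0, 1, 1, 0, 0],
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 1, 0, 0],
     [0, 1, 0, 1, 0, 1],
     [0, 0, 0, 0, 0, 0]]          # synaptic matrix after both associations

cue = [1, 0, 1, 0, 1, 0]          # the first input, "101010"
threshold = min(sum(cue), 3)      # lesser of 1's in the cue and 1's in the trained input

column_sums = [sum(W[i][j] for i in range(6) if cue[i] == 1) for j in range(6)]
output = [1 if s >= threshold else 0 for s in column_sums]

print(column_sums)   # [0, 3, 1, 3, 0, 3]
print(output)        # [0, 1, 0, 1, 0, 1] -> "010101", the original association
```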

Exercises While completing the lab exercises, you will:
– Train neural networks with multiple associative "memories".
– Test the information capacity of artificial neural networks.
– Test the ability of networks to recall associations accurately even when the input cue is partial or noisy.
– Explain mistakes your networks make in terms of the distinctiveness of the patterns you taught them!