Neural Networks.

Associative Model of Memory

Learning is the process of forming associations between related patterns. Human memory connects items (ideas, sensations, etc.) that are similar, that are contrary, that occur in close proximity, or that occur in close succession (Kohonen, 1987). We cannot remember an event before it happens: when an event occurs, some change takes place in our brains so that we can subsequently remember it. Memory is therefore inherently bound up with the learning process.

Associative Model of Memory

In a neurobiological context, memory refers to the relatively enduring neural alterations induced by the interactions of an organism with its environment (Teyler, 1986). Without such a change, there can be no memory. Furthermore, for the memory to be useful, it must be accessible to the nervous system so as to influence future behavior. When a particular activity pattern is learned, it is stored in the brain, from which it can be recalled later when required.

Short- and Long-Term Memories

Memory may be divided into "short-term" and "long-term" memory, depending on the retention time (Arbib, 1989). Short-term memory refers to a compilation of knowledge representing the "current" state of the environment. Any discrepancies between knowledge stored in short-term memory and a "new" state are used to update the short-term memory. Long-term memory, on the other hand, refers to knowledge stored for a long time or permanently.

Fundamental Property of Associative Memory

A fundamental property of an associative memory is that it maps an input pattern of neural activity onto an output pattern of neural activity. In particular, during the learning phase, a "key pattern" is presented as a stimulus, and the memory transforms it into a "memorized" or "stored" pattern. The storage takes place through specific changes in the synaptic weights of the memory. During the retrieval (recall) phase, the memory is presented with a stimulus that is a noisy version or incomplete description of a key pattern originally associated with a stored pattern. Despite the imperfections in the stimulus, the associative memory can still recall the stored pattern correctly.
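The two phases above can be sketched in a few lines of NumPy. This is a minimal illustration, not from the original slides: it assumes the common outer-product (Hebbian) rule, in which the weight change during learning is proportional to the product of the key and stored activities, and retrieval is a matrix-vector product.

```python
import numpy as np

# Learning phase: associate a key pattern (stimulus) with a stored
# pattern by setting the synaptic weights via an outer product.
key = np.array([1.0, -1.0, 1.0, -1.0])   # key pattern presented as stimulus
stored = np.array([1.0, 1.0, -1.0])      # pattern to be memorized

W = np.outer(stored, key)                # specific changes in synaptic weights

# Retrieval phase: presenting the key recalls the stored pattern,
# up to a positive scale factor ||key||^2.
recalled = W @ key
print(recalled / np.dot(key, key))       # recovers the stored pattern
```

Dividing by the squared norm of the key simply removes the scale factor; with many stored pairs the same recall rule applies, with crosstalk between non-orthogonal keys.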

Some Characteristics of the Associative Memory

- The memory is distributed.
- Both the stimulus (key) pattern and the response (stored) pattern of an associative memory consist of data vectors.
- Information is stored in memory by setting up a spatial pattern of neural activities across a large number of neurons.
- Information contained in a stimulus determines not only its storage location in memory but also an address for its retrieval.
- Even though the neurons are not reliable, low-noise computing cells, the memory exhibits a high degree of resistance to noise and to damage of a diffusive kind.
- There may be interactions between the individual patterns stored. (Otherwise, the memory would have to be exceptionally large to accommodate a large number of patterns in perfect isolation from each other.) There is, therefore, a distinct possibility of the memory making errors during the recall process.
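The resistance to diffuse damage can be demonstrated with a small sketch (a hypothetical NumPy example, added here for illustration): store one bipolar pattern in a distributed weight matrix, knock out a random 30% of the synapses, and observe that thresholded recall still recovers the pattern, because no single synapse carries the pattern on its own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one bipolar pattern auto-associatively; the information is
# distributed across all N*N synaptic weights.
p = rng.choice([-1.0, 1.0], size=50)
W = np.outer(p, p) / len(p)

# "Damage of a diffusive kind": zero out a random ~30% of the synapses.
mask = rng.random(W.shape) > 0.3
W_damaged = W * mask

# Thresholded recall still recovers the pattern despite the damage.
recalled = np.sign(W_damaged @ p)
print(np.array_equal(recalled, p))   # True
```

Each surviving synapse contributes a term of the correct sign, so the sign of the summed input is unchanged as long as any reasonable fraction of a neuron's synapses survives.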

Auto- Versus Hetero-associative Memory

There are two types of association:

- Auto-association: a key vector (pattern) is associated with itself in memory. This is most useful for pattern completion, where a partial pattern (a pair of eyes) or a noisy pattern (a blurred image) retrieves its complete and accurate representation (the whole face). The input and output signal (data) spaces have the same dimensionality.
- Hetero-association: a vector is associated with a different vector, which may have a different dimensionality. We may still expect a noisy or partial input vector to retrieve the complete output vector.
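Auto-associative pattern completion can be sketched as follows (an illustrative NumPy example under the outer-product storage rule assumed earlier, not part of the original slides): a stored bipolar pattern is retrieved exactly even when the stimulus has two of its bits flipped.

```python
import numpy as np

# Auto-association: the pattern is associated with itself, so a noisy
# version of the key can retrieve the clean stored copy.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1, 1, 1], dtype=float)
W = np.outer(pattern, pattern) / len(pattern)

noisy = pattern.copy()
noisy[[2, 7]] *= -1          # flip two bits: a noisy/incomplete stimulus

# Thresholded recall restores the complete, accurate representation.
recalled = np.sign(W @ noisy)
print(np.array_equal(recalled, pattern))   # True
```

Here input and output spaces share the same dimensionality; in the hetero-associative case the weight matrix would instead be rectangular, mapping keys of one dimension to responses of another.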

Linear Versus Non-linear Associative Memory

An associative memory may also be classified as linear or non-linear, depending on the model adopted for its neurons. Let the data vectors a and b denote the stimulus (input) and the response (output) of an associative memory, respectively.

- Linear associative memory: the input-output relationship is b = M a, where M is called the "memory matrix".
- Non-linear associative memory: the input-output relationship is of the form b = φ(M; a), where in general φ(·;·) is a non-linear function of the memory matrix and the input vector.
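The linear case b = M a can be made concrete with a short sketch (an illustrative example with invented numbers): build M as the sum of outer products of each response with its key, which is the standard correlation-matrix construction. With orthonormal keys the recall is exact.

```python
import numpy as np

# Linear associative memory: M = sum_k outer(b_k, a_k), so that
# stimulating with key a_k returns response b_k when keys are orthonormal.
keys = np.eye(3)                      # three orthonormal key vectors a_k
responses = np.array([[2.0, 0.5],     # arbitrary response vectors b_k
                      [-1.0, 3.0],
                      [0.0, 1.5]])

M = sum(np.outer(b, a) for a, b in zip(keys, responses))

# Retrieval is a single matrix-vector product: b = M a.
print(M @ keys[0])                    # recalls the first response vector
```

When the keys are merely linearly independent rather than orthonormal, the recalled vector contains a crosstalk term from the other stored pairs, which is the source of the recall errors mentioned earlier.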

Block Diagram of Associative Memory

[Figure: stimulus a → memory matrix M → response b]

A Simple Network for Holding Associative Memory

[Figure: input units connected through a layer of weights to output neurons; inputs on one side, outputs on the other]