Associative Memories: A Morphological Approach

Outline
- Associative Memories: Motivation
- Capacity vs. Robustness Challenges
- Morphological Memories
- Improving Limitations
- Experiment
- Results
- Summary
- References

Associative Memories: Motivation
- Humans can retrieve information from associated stimuli, e.g. recalling one's relationship with another person after several years apart, despite the other's physical changes (aging, facial hair, etc.)
- This ability enhances human awareness and deduction skills and efficiently organizes vast amounts of information
- Why not replicate this ability with computers? It would be a valuable addition to the Artificial Intelligence community's efforts to develop rational, goal-oriented, problem-solving agents
- One realization of associative memories is Content-Addressable Memory (CAM)

Capacity versus Robustness: a Challenge for Associative Memories
- In early memory models, capacity was limited to the length of the memory, and only negligible input distortion was tolerated (old CAMs)
  - Ex. Linear Associative Memory
- Later models increased robustness but sacrificed capacity
  - J. J. Hopfield's proposed Hopfield Network; capacity roughly 0.15n, where n is the memory length (see McEliece et al.)
- Current research offers a solution that maximizes memory capacity while still allowing input distortion: the Morphological Neural Model
  - Capacity: essentially limitless (2^n in the binary case)
  - Allows input distortion
  - One-step convergence
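For concreteness, the Hopfield baseline contrasted above can be sketched in a few lines of NumPy. This is a minimal illustration of Hebbian training and sign-threshold recall (the function names and toy patterns are my own, not the presentation's implementation):

```python
import numpy as np

def hopfield_train(X):
    """X: bipolar (+1/-1) patterns as rows, shape (p, n)."""
    W = (X.T @ X).astype(float)   # Hebbian outer-product sum
    np.fill_diagonal(W, 0)        # no self-connections
    return W

def hopfield_recall(W, x, max_iters=100):
    """Iterate synchronous sign updates until a fixed point."""
    x = x.copy()
    for _ in range(max_iters):
        y = np.where(W @ x >= 0, 1, -1)
        if np.array_equal(y, x):
            break
        x = y
    return x
```

While the number of stored patterns stays well below roughly 0.15n, a stored pattern is a fixed point and small distortions are corrected; past that capacity, recall breaks down, which motivates the morphological alternative.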

Morphological Memories
- Formulated using mathematical morphology techniques: image dilation and image erosion
- Training constructs two memories, M and W
  - M used for recalling dilated patterns
  - W used for recalling eroded patterns
- M and W alone are not sufficient. Why? General distorted patterns are both dilated and eroded
- Solution: a hybrid approach that incorporates a kernel matrix Z into M and W
  - General distorted pattern recall is now possible: Input → M_Z → W_Z → Output
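The two memories can be written directly in the min-plus / max-plus algebra of Ritter and Sussner. A minimal NumPy sketch (function names are mine; the kernel-matrix step is omitted):

```python
import numpy as np

def train(X):
    """X holds the stored patterns as columns, shape (n, p)."""
    # diffs[i, j, k] = x_i^k - x_j^k for pattern k
    diffs = X[:, None, :] - X[None, :, :]
    W = diffs.min(axis=2)   # min-memory: tolerant of eroded inputs
    M = diffs.max(axis=2)   # max-memory: tolerant of dilated inputs
    return W, M

def recall_W(W, x):
    # Max-plus product: y_i = max_j (W_ij + x_j)
    return (W + x[None, :]).max(axis=1)

def recall_M(M, x):
    # Min-plus product: y_i = min_j (M_ij + x_j)
    return (M + x[None, :]).min(axis=1)
```

In the auto-associative case both memories recall every stored pattern perfectly in one step, for any number of patterns, which is the "essentially limitless capacity" claim above; handling inputs that are simultaneously dilated and eroded requires the kernel construction described on the slide.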

Improving Limitations: Experiment
- Construct a binary morphological auto-associative memory to recall bitmap images of capital alphabetic letters
- Use the Hopfield model as a baseline
- Construct letters using the Microsoft Sans Serif font (block letters) and the Math5 font (cursive letters)
- Attempt recall 5 times per pattern at image distortions of 0%, 2%, 4%, 8%, 10%, 15%, 20%, and 25%
- Use different memory sizes: 5, 10, 26, and 52 images
- Use the average recall rate per memory size as the performance measure, where a recall is correct if and only if it is perfect
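The measurement loop described above can be sketched as follows, with random binary images standing in for the letter bitmaps (the font bitmaps themselves are not reproduced in the transcript). It uses the morphological min-memory and counts a recall as correct only when it is bit-for-bit perfect, as in the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_W(X):
    # Morphological min-memory: W_ij = min_k (x_i^k - x_j^k)
    return (X[:, None, :] - X[None, :, :]).min(axis=2)

def recall(W, x):
    # Max-plus product: y_i = max_j (W_ij + x_j)
    return (W + x[None, :]).max(axis=1)

def recall_rate(X, pct, trials=5):
    """Average over `trials` recall attempts per stored pattern,
    flipping pct% of the pixels each time."""
    W = train_W(X)
    n, p = X.shape
    n_flip = round(n * pct / 100)
    hits = 0
    for k in range(p):
        for _ in range(trials):
            noisy = X[:, k].copy()
            idx = rng.choice(n, size=n_flip, replace=False)
            noisy[idx] = 1 - noisy[idx]   # flip selected pixels
            hits += np.array_equal(recall(W, noisy), X[:, k])
    return hits / (p * trials)
```

Sweeping `pct` over 0, 2, 4, 8, 10, 15, 20, and 25 reproduces the protocol; at 0% distortion the morphological memory is guaranteed a recall rate of 1.0, while bit flips are mixed dilative/erosive noise, so the rate drops as distortion grows.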

Results
- Morphological model and Hopfield model:
  - Both degraded in performance as memory size increased
  - Both recalled letters in the Microsoft Sans Serif font better than in the Math5 font
- Morphological model:
  - Always perfect recall at 0% image distortion
  - Performance degraded smoothly as memory size and distortion increased
- Hopfield model:
  - Never correctly recalled images when the memory contained more than 5 images

Results using 5 Images
MNN = Morphological Neural Network, HOP = Hopfield Neural Network, MSS = Microsoft Sans Serif font, M5 = Math5 font

Results using 26 Images
MNN = Morphological Neural Network, HOP = Hopfield Neural Network, MSS = Microsoft Sans Serif font, M5 = Math5 font

Summary
- The human ability to retrieve information from associated stimuli continues to elicit great interest among researchers
- Progress continues with the development of enhanced neural models: Linear Associative Memory → Hopfield Model → Morphological Model
- The Morphological Model offers:
  - Essentially limitless capacity
  - Guaranteed perfect recall with undistorted input
  - One-step convergence

References
- Y. H. Hu. Associative Learning and Principal Component Analysis. Lecture 6 Notes, 2003.
- R. P. Lippmann. An Introduction to Computing with Neural Nets. IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-4:4-22.
- R. McEliece et al. The Capacity of the Hopfield Associative Memory. IEEE Transactions on Information Theory, 1:33-45.
- G. X. Ritter and P. Sussner. An Introduction to Morphological Neural Networks. In Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria.
- G. X. Ritter, P. Sussner, and J. L. Diaz de Leon. Morphological Associative Memories. IEEE Transactions on Neural Networks, 9(2), March 1998.