Hierarchical Temporal Memory as a Means for Image Recognition, by Wesley Bruning. CHEM/CSE 597D Final Project Presentation, December 10, 2008.


The Grand Scheme A free, online resource that lets anyone find information about symbols; a compendium of symbols, their names, meanings, and histories. Symbols? Yes, symbols!  The Star of David (hexagram), the Greek letter sigma, the Masonic compass, the Wheel of Dharma, the bass clef, company logos, et cetera. It would fill a niche, but one that is relatively easy to fill and that ought to be filled eventually. Not for profit.

The Neat Feature Users can search for symbols by drawing or uploading pictures.  “What does this mean?” The server will house a program that receives the image and determines which symbol in the database the user is looking for. In other words: image recognition.

Computer Vision Visual pattern recognition, like understanding language and physically manipulating objects, is difficult for computers. There are no generally viable algorithms for performing these functions on a computer. 1 For humans, these tasks are easy. 1 J. Hawkins and D. George, “Hierarchical Temporal Memory – Concepts, Theory, and Terminology,” Whitepaper, Numenta Inc.

A Couple of Existing Models “Classic” artificial neural networks  At least 3 layers of nodes Bayesian networks  Directed acyclic graph
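To make the first model concrete, here is a minimal sketch of a "classic" feedforward network with three layers of nodes (input, hidden, output). The weights are hypothetical fixed values chosen for illustration; a real network would learn them, e.g. via backpropagation.

```python
import math

def sigmoid(x):
    # Standard squashing activation used by classic feedforward networks.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """One fully connected layer: each node sums its weighted inputs."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]

def forward(inputs, hidden_w, output_w):
    # Three layers of nodes: input values -> hidden layer -> output layer.
    return layer(layer(inputs, hidden_w), output_w)

# 2 input nodes -> 2 hidden nodes -> 1 output node (weights are made up).
hidden_w = [[0.5, -0.4], [0.3, 0.8]]
output_w = [[1.0, -1.0]]
result = forward([1.0, 0.0], hidden_w, output_w)
print(result)
```

The Bayesian-network alternative replaces these weighted sums with conditional probabilities over a directed acyclic graph.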

Hierarchical Temporal Memory Abbreviated HTM. A novel machine learning paradigm. It can be considered a type of artificial neural network, but its founding principles differ. (I will discuss only the higher-level concepts of HTM, not its learning algorithms and the like.)

Why HTM? It is new and relatively untested, yet it has already shown promising results in visual pattern recognition. It is also biologically inspired.

The Biological Inspiration HTM is based on a hierarchical theory of the human brain's neocortex and thalamus; it seeks to replicate their biological functions. It is a top-down solution that models the brain as a “device that computes by performing sophisticated pattern matching and sequence prediction.” 1 A hierarchy of uniform processing elements. HTM implements invariant pattern recognition, as seen in the visual cortex. 1 K. L. Rice et al., “A Preliminary Investigation of a Neocortex Model Implementation on the Cray XD1.”

Assumptions in the Basic Theory The neocortex is an efficient pattern-matching device, not a computing engine. 1 The brain learns by storing patterns, and it recognizes by matching sensory data to learned patterns. 2 The structure of the world is hierarchical: temporal as well as spatial. e.g. “A speaker expresses an idea over time by combining consonants and vowels to make syllables, syllables to make words, etc.” 3 1 J. Hawkins, "Learn Like a Human." 2 K. L. Rice et al., “A Preliminary Investigation of a Neocortex Model Implementation on the Cray XD1.” 3

How Does HTM Work? It's (not) a black box! It's a hierarchy of connected nodes.

An HTM Network Multiple levels of nodes. Sensory data is input to the lower level, and a belief is generated at the top level. Information is exchanged from parent to child and vice versa. Each node performs the same learning algorithm.

This Looks Similar to Some Types of Artificial Neural Networks HTM can be considered a type of ANN, as well as a type of Bayesian network. The big difference: most such networks try to emulate individual neurons, not the overall structure of the neocortex. Temporal data is typically not handled, or not handled well. Different learning algorithms are used.

So How Does it Work? Each node looks at its input and learns the “cause” of its input. A “cause” is whatever causes the input pattern to occur. The outputs of the nodes in one level become the inputs of the nodes in the next level. So! The nodes at the lower levels discover simple causes, such as edges and corners, while the nodes at the higher levels discover complex causes, such as faces. Intermediate nodes find causes of intermediate complexity.
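The level-by-level composition of causes can be sketched in a few lines. Everything here is an illustrative assumption rather than Numenta's actual algorithm: each toy node outputs a normalized belief over its stored causes, and a parent node takes the children's beliefs as its own input, so simple causes compose into complex ones.

```python
def node(input_vec, causes):
    """Toy node: score each stored cause by its overlap with the input,
    then normalize the scores into a belief distribution."""
    scores = [sum(a * b for a, b in zip(input_vec, c)) for c in causes]
    total = sum(scores) or 1.0
    return [s / total for s in scores]

# Level 1: two child nodes each see half of a 4-pixel input and look
# for simple "edge" causes (patterns are made up for illustration).
edge_causes = [[1, 0], [0, 1]]
pixels = [1, 0, 0, 1]
left = node(pixels[:2], edge_causes)
right = node(pixels[2:], edge_causes)

# Level 2: the parent sees the children's beliefs and looks for
# compound causes built from the simple ones (toy "diagonal" shapes).
shape_causes = [[1, 0, 0, 1], [0, 1, 1, 0]]
belief = node(left + right, shape_causes)
print(belief)  # the parent's belief over its two compound causes
```

The same pattern repeats up the hierarchy: intermediate nodes would consume these beliefs in turn and discover causes of intermediate complexity.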

Beliefs

How Do Nodes Generate Beliefs? 1. Node looks at input and assigns a probability that the input matches a spatial pattern. 2. The node takes this probability distribution and combines it with previous state information to assign a probability that the current input matches a temporal sequence. 3. The distribution over the set of sequences is the output of the node and is passed up the hierarchy. Finally, if the node is still learning, it might modify the set of stored spatial and temporal patterns to reflect the new input. 1 1 J. Hawkins and D. George, “Hierarchical Temporal Memory – Concepts, Theory, and Terminology.”
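The three steps above can be written as a toy inference routine. The stored patterns, the stored sequences, and the scoring rules are all illustrative assumptions, not the whitepaper's actual equations:

```python
# Step 0 (learned beforehand): a node's stored spatial patterns and
# stored temporal sequences (sequences are lists of pattern indices).
SPATIAL = [(1, 0), (0, 1)]
SEQUENCES = [[0, 1], [1, 0]]

def spatial_belief(inp):
    """Step 1: probability that the input matches each spatial pattern
    (exact match scores high, everything else gets a small floor)."""
    scores = [1.0 if inp == p else 0.1 for p in SPATIAL]
    total = sum(scores)
    return [s / total for s in scores]

def temporal_belief(spatial, prev_pattern):
    """Step 2: combine the spatial distribution with the previous state
    to score each stored sequence; step 3: the normalized result is the
    node's output, passed up the hierarchy."""
    scores = []
    for seq in SEQUENCES:
        score = sum(spatial[p] for i, p in enumerate(seq)
                    if i > 0 and seq[i - 1] == prev_pattern)
        scores.append(score + 1e-6)  # floor so the distribution is proper
    total = sum(scores)
    return [s / total for s in scores]

sp = spatial_belief((0, 1))               # current input matches pattern 1
out = temporal_belief(sp, prev_pattern=0) # the node previously saw pattern 0
print(out)  # sequence [0, 1] should dominate
```

A still-learning node would additionally update SPATIAL and SEQUENCES from the new input, per the final step in the quotation above.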

In Pictures Discovering spatial patterns. Discovering temporal patterns (sequences).
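In code rather than pictures, a toy version of those two discovery steps might look like this: memorizing distinct inputs as spatial patterns, and counting transitions as a stand-in for temporal sequence learning. Both are drastic simplifications of what an HTM node actually does.

```python
from collections import defaultdict

# A toy input stream: the node sees these patterns one time step at a time.
stream = [(1, 0), (0, 1), (1, 0), (0, 1), (1, 0)]

# Spatial discovery: remember each distinct input pattern seen so far.
patterns = []
for p in stream:
    if p not in patterns:
        patterns.append(p)

# Temporal discovery: count which pattern tends to follow which,
# keyed by (previous pattern index, current pattern index).
transitions = defaultdict(int)
for prev, cur in zip(stream, stream[1:]):
    transitions[(patterns.index(prev), patterns.index(cur))] += 1

print(patterns)
print(dict(transitions))  # frequent transitions suggest temporal sequences
```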

Past Trial “Using Numenta’s hierarchical temporal memory to recognize CAPTCHAs” 1  HTM performed well, but performance could have been improved with more time  Concluded HTMs are designed well to recognize CAPTCHAs 1 Y. J. Hall and R. E. Poplin, “Using Numenta’s hierarchical temporal memory to recognize CAPTCHAs”

Past Trial “Content-Based Image Retrieval Using Hierarchical Temporal Memory” 1  HTM was robust to spatial noise, blurring, and other distortions despite having been trained on only clean, undistorted images  Concluded HTMs are flexible enough to provide efficient and accurate indexing of line drawings 1 B. A. Bobier and M. Wirth, “Content-Based Image Retrieval Using Hierarchical Temporal Memory”

My Own First Impressions Testing HTM's image recognition capabilities.

Testing HTM's Image Recognition Capabilities

Results With simple black-and-white tests, it was very successful and handled noisy data well. It was not so good with rotated images. The reason? Predominantly training: training is essential.

Is HTM a Viable Option? Yes. It has already proven to be a good candidate for simple image processing. I still need to conduct more experiments to find its boundaries, e.g. color images, more complex images, a larger database of trained images. Once those boundaries are found, I must decide whether it is worthwhile to pursue solutions within HTM technology; I may need to implement additional processing. Also note that Numenta is a business, and this is their product.