Cellular Neural Networks and Visual Computing, by Leon O. Chua and Tamás Roska. Presentation by Max Pflueger.


Hopfield Networks
- Every node receives input from every other node
- Recurrent behavior
- Cellular Neural Networks are a variant of Hopfield nets

Why CNNs?
- Hopfield nets are powerful, but because they are fully connected they are difficult to implement in hardware for any significant number of nodes
- CNNs offer a compromise by connecting each node only to nearby nodes
- CNNs with thousands of nodes can be implemented on a single chip

CNNs
- Cells are typically arranged in a grid of N rows and M columns
- Each cell is connected only to the cells within a sphere of influence of radius r
- Note that if r >= max{N, M} - 1, we have a Hopfield net
[Figure: (a) neighborhood with r = 1; (b) neighborhood with r = 2]
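The sphere of influence is simple to express in code. A minimal Python sketch (the function name and zero-based indexing are my own, not the book's notation): the neighborhood of cell (i, j) is every cell within Chebyshev distance r, clipped to the N x M grid.

```python
def sphere_of_influence(i, j, r, N, M):
    """Cells (k, l) with max(|k - i|, |l - j|) <= r, clipped to an N x M grid."""
    return [(k, l)
            for k in range(max(0, i - r), min(N, i + r + 1))
            for l in range(max(0, j - r), min(M, j + r + 1))]

# An interior cell with r = 1 sees the familiar 3 x 3 neighborhood (9 cells);
# once r >= max(N, M) - 1, every cell sees the whole grid, as in a Hopfield net.
```

On a 3 x 3 grid, for example, an interior cell with r = 1 has 9 neighbors, a corner cell has 4, and with r = 2 even a corner cell sees all 9 cells.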

CNN Cells
- Each cell is governed by the state equation

  dx_ij/dt = -x_ij + Σ_{(k,l) ∈ S_r(i,j)} A(i,j; k,l) y_kl + Σ_{(k,l) ∈ S_r(i,j)} B(i,j; k,l) u_kl + z_ij

- x_ij is the state of cell (i,j)
- y_kl is the output of cell (k,l), given by the piecewise-linear function y_kl = f(x_kl) = 0.5 (|x_kl + 1| - |x_kl - 1|)
- u_kl is the input to cell (k,l)
- z_ij is the threshold of cell (i,j)
- C(i,j) is the set of cells in the CNN
- S_r(i,j) is the set of cells in the sphere of influence of cell (i,j)
- A(i,j; k,l) and B(i,j; k,l) are the feedback and input synaptic operators, respectively
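The dynamics are easiest to see for a single isolated cell (r = 0, so A and B reduce to scalar weights a and b). The sketch below is a forward-Euler integration with parameter values chosen for illustration rather than taken from the slides; with self-feedback a > 1 the cell is bistable and its output saturates to +1 or -1.

```python
def f(x):
    # standard piecewise-linear CNN output: f(x) = 0.5*(|x + 1| - |x - 1|)
    return 0.5 * (abs(x + 1) - abs(x - 1))

def run_cell(x0, u, a, b, z, dt=0.05, steps=400):
    # forward-Euler on dx/dt = -x + a*f(x) + b*u + z  (single cell, r = 0)
    x = x0
    for _ in range(steps):
        x += dt * (-x + a * f(x) + b * u + z)
    return f(x)

# With a = 2, b = 1, z = 0 the state settles at x = +3 or x = -3 depending on
# the input's sign, so the output saturates to +1 (black) or -1 (white).
```

Here run_cell(0.0, +1.0, 2, 1, 0) settles to +1.0 and run_cell(0.0, -1.0, 2, 1, 0) to -1.0: the cell acts as a bistable thresholding element, which is the mechanism templates exploit.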

Templates
- A and B are not necessarily space- and time-invariant, but in most applications they can be treated as such
- When A and B are space- and time-invariant, each can be represented by a (2r+1) x (2r+1) matrix
- These two matrices, together with the value of z_ij, form the template of a CNN

Logical AND Template
- This template computes the pixel-wise AND of the initial state and the input
- The progression of images shows the input and initial image, followed by the output at a series of times
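AND is a pointwise operation, so only the center entries of A and B are nonzero and every cell evolves independently. The values below (a = 1.5, b = 1.5, z = -1.5, with black = +1 and white = -1) are a commonly quoted logical-AND template, assumed here because the slide's template figure is not reproduced.

```python
def f(x):
    # standard piecewise-linear CNN output
    return 0.5 * (abs(x + 1) - abs(x - 1))

def logic_and(p, q, dt=0.05, steps=400):
    # Pointwise AND template (center-only entries; values assumed, not from
    # the slides): a = 1.5 (feedback), b = 1.5 (input), z = -1.5.
    # Initial state = first operand p, input = second operand q.
    x = p
    for _ in range(steps):
        x += dt * (-x + 1.5 * f(x) + 1.5 * q - 1.5)
    return f(x)

# Truth table over {+1 = black, -1 = white}:
# the output is black only when both the initial state and the input are black.
```

A white input (q = -1) drags every cell to white regardless of its initial state, while a black input (q = +1) leaves the initial state unchanged, which is exactly pixel-wise AND.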

Edge Detecting Template
- The template shown at right is designed to mark the edges of the input with black pixels
- The top example shows the progression of the CNN through time
- The bottom example shows its behavior on a more complex image
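A full-array sketch in pure Python (forward Euler, zero boundary conditions, black = +1 and white = -1). The template values are a commonly cited edge-detection template, assumed here since the slide's figure is not reproduced: A has a single center entry of 2, B is 8 at the center and -1 elsewhere, and z = -1.

```python
def f(x):
    # standard piecewise-linear CNN output
    return 0.5 * (abs(x + 1) - abs(x - 1))

def weighted_sum(img, T, i, j):
    # T applied over the 3x3 neighborhood of (i, j); cells off the grid count as 0
    N, M = len(img), len(img[0])
    total = 0.0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            k, l = i + di, j + dj
            if 0 <= k < N and 0 <= l < M:
                total += T[di + 1][dj + 1] * img[k][l]
    return total

def run_cnn(x0, u, A, B, z, dt=0.1, steps=200):
    # forward-Euler integration of dx_ij/dt = -x_ij + (A*y)_ij + (B*u)_ij + z
    x = [row[:] for row in x0]
    N, M = len(x), len(x[0])
    for _ in range(steps):
        y = [[f(v) for v in row] for row in x]
        x = [[x[i][j] + dt * (-x[i][j] + weighted_sum(y, A, i, j)
                              + weighted_sum(u, B, i, j) + z)
              for j in range(M)] for i in range(N)]
    return [[f(v) for v in row] for row in x]

# Edge template (values assumed): interior black pixels turn white,
# black pixels with at least one white neighbor stay black.
A = [[0, 0, 0], [0, 2, 0], [0, 0, 0]]
B = [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]
z = -1

# 5x5 input: a solid 3x3 black square on a white background
u = [[1 if 1 <= i <= 3 and 1 <= j <= 3 else -1 for j in range(5)]
     for i in range(5)]
x0 = [[0.0] * 5 for _ in range(5)]
out = run_cnn(x0, u, A, B, z)
```

For this input the settled output keeps only the 8-pixel ring of the square black; the square's center and the background go white, i.e. the network extracts the edge.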

The CNN Universal Machine
- A CNN can be designed to change templates during operation
- A set of templates can be strung together, making the CNN programmable
- The effect is similar to having a CNN with multiple layers
- The CNN-UM can be programmed in a way very similar to a traditional microprocessor
- Instructions: load template, execute template, etc.

A CNN-UM Example
- The algorithm at right identifies concave arcs positioned face to face horizontally
- The image is transformed by a series of different templates rather than a single one

Comparison of Processing Speed
- This chart compares traditional microprocessors with several CNN chips on a series of image processing tasks

Questions?

References
- Chua, Leon O., and Tamás Roska. Cellular Neural Networks and Visual Computing. Cambridge: Cambridge University Press, 2002.