Lip-Recognition Software Using a Kohonen Algorithm for Image Compression
Clément Demetz, ECE 539 Final Project, Fall 2003

Outline
- Problem and motivation
- Data creation: preprocessing
- Kohonen self-organizing map (SOM)
- Multi-layer perceptron
- Final results
- Conclusion
- References

Problem
Voice recognition on its own is limited: a combined approach always leads to better results. On cell phones and PDAs, voice recognition can therefore be paired with visual recognition.
Lip recognition + voice recognition → combined recognition.

Problem of lip-recognition software
- It needs high computational power.
- It needs to be implemented on low-power systems (PDAs, cell phones).
How can we reduce the size of the information? The problem is to find a way to implement such an algorithm with little computation.

Motivation
Reduce the size of the image with a Kohonen self-organizing map before the multi-layer perceptron.
Pipeline: image from a cell-phone digital camera → filter (contour of the mouth) → Kohonen SOM → multi-layer perceptron.

Preprocessing
- Starting with low-quality JPEG pictures.
- Gradient filters are applied to keep only the contour of the mouth.
- The opening of the mouth is a relevant input: it must follow a certain pattern to pronounce a sound.
(Figure: JPEG picture of the mouth, the dark part of the mouth, and the contour of the dark part.)
Problem: a contour corresponds to thousands of points, which is still too many for a low computation time.
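The slides contain no code (the project itself was implemented in Matlab). Purely as an illustration, a minimal NumPy sketch of this gradient-filter step might look as follows; the function name, the threshold value, and the synthetic test image are assumptions made for this example, not details taken from the project.

```python
import numpy as np


def mouth_contour(gray, threshold=0.3):
    """Keep only the strong-gradient pixels of a grayscale mouth picture.

    gray: 2-D float array in [0, 1].
    threshold: fraction of the maximum gradient magnitude above which a
               pixel is kept as a contour point (illustrative value).
    Returns an (N, 2) array of (row, col) contour coordinates.
    """
    gy, gx = np.gradient(gray)            # finite-difference gradients
    magnitude = np.hypot(gx, gy)          # edge strength
    mask = magnitude > threshold * magnitude.max()
    return np.argwhere(mask).astype(float)


# Synthetic 64x64 "mouth": a dark ellipse (the dark part of the mouth)
# on a light background, standing in for the low-quality JPEG picture.
yy, xx = np.mgrid[0:64, 0:64]
picture = np.where(((xx - 32) / 20.0) ** 2 + ((yy - 32) / 10.0) ** 2 < 1.0, 0.1, 0.9)
contour = mouth_contour(picture)
print(contour.shape)   # on a real photo this is easily thousands of points
```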

Kohonen Self-Organizing Map (SOM)
Idea: use a Kohonen self-organizing map to reduce the information to 12 neurons.
Problems: initialization; bad stretching or turning of the SOM.

Kohonen SOM
Problems: initialization; bad stretching or turning of the SOM. We want to keep all the information, yet here we are losing the left part of the contour.

Kohonen SOM
A way to avoid these problems: we link the first and the last neurons, so the map forms a closed ring.

Kohonen SOM
Results of the Kohonen map: we keep 12 points representing the contour.
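For illustration only, here is a minimal sketch (in Python/NumPy rather than the project's Matlab) of a 1-D Kohonen SOM whose first and last neurons are linked, as described on the previous slide, so that 12 neurons summarize the mouth contour. The learning rate, neighborhood width, iteration count, and circular initialization are assumptions for this example, not the settings actually used in the project.

```python
import numpy as np


def ring_som(points, n_neurons=12, n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Fit a 1-D Kohonen SOM with ring topology to 2-D contour points."""
    rng = np.random.default_rng(seed)

    # Initialize the neurons on a small circle around the contour's centroid,
    # which already respects the closed (first-linked-to-last) topology.
    center = points.mean(axis=0)
    angles = np.linspace(0.0, 2.0 * np.pi, n_neurons, endpoint=False)
    radius = 0.25 * points.std()
    weights = center + radius * np.column_stack((np.cos(angles), np.sin(angles)))

    idx = np.arange(n_neurons)
    for t in range(n_iter):
        lr = lr0 * (1.0 - t / n_iter)                   # decaying learning rate
        sigma = max(sigma0 * (1.0 - t / n_iter), 0.5)   # shrinking neighborhood

        x = points[rng.integers(len(points))]           # random contour point
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))

        # Circular index distance: neuron 0 and neuron 11 are neighbors,
        # which is exactly the "link the first and last neurons" trick.
        d = np.minimum(np.abs(idx - winner), n_neurons - np.abs(idx - winner))
        h = np.exp(-(d ** 2) / (2.0 * sigma ** 2))

        weights += lr * h[:, None] * (x - weights)      # pull neurons toward x

    return weights


# e.g. twelve = ring_som(contour)   # 12 (row, col) points used as MLP inputs
```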

Multi-Layer Perceptron
- We take the 12 points given by the SOM as inputs.
- The SOM is applied many times to each picture to create the database.
- 3 classes of pictures, i.e. only 3 sounds, because the lip recognition is a support to voice recognition.
- Training on 15 pictures, testing on 3 pictures.
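As a sketch of this classification step, the following NumPy code trains a one-hidden-layer perceptron by back-propagation with momentum on the SOM outputs. The hidden size, alpha, momentum, and 400 epochs echo values shown on the results slides; everything else (weight initialization, sigmoid activations, squared-error loss, flattening the 12 points into 24 coordinates) is an assumption made for illustration.

```python
import numpy as np


def train_mlp(X, y, hidden=10, alpha=0.05, momentum=0.8, epochs=400, seed=0):
    """One-hidden-layer MLP trained with back-propagation plus momentum.

    X: (n_samples, n_features) inputs, e.g. the 12 SOM points flattened
       into 24 coordinates per picture.
    y: (n_samples,) integer class labels in {0, 1, 2}, one per sound.
    Returns a predict(X_new) function.
    """
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], int(y.max()) + 1
    T = np.eye(n_out)[y]                       # one-hot targets

    # Small random weights, zero biases, zero momentum buffers.
    W1 = rng.normal(0.0, 0.1, (n_in, hidden))
    W2 = rng.normal(0.0, 0.1, (hidden, n_out))
    b1, b2 = np.zeros(hidden), np.zeros(n_out)
    vW1, vW2 = np.zeros_like(W1), np.zeros_like(W2)
    vb1, vb2 = np.zeros_like(b1), np.zeros_like(b2)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(epochs):
        # Forward pass.
        H = sigmoid(X @ W1 + b1)
        O = sigmoid(H @ W2 + b2)

        # Backward pass for a squared-error loss with sigmoid outputs.
        dO = (O - T) * O * (1.0 - O)
        dH = (dO @ W2.T) * H * (1.0 - H)

        # Gradient descent with momentum (alpha = learning rate).
        vW2 = momentum * vW2 - alpha * (H.T @ dO)
        vb2 = momentum * vb2 - alpha * dO.sum(axis=0)
        vW1 = momentum * vW1 - alpha * (X.T @ dH)
        vb1 = momentum * vb1 - alpha * dH.sum(axis=0)
        W2, b2 = W2 + vW2, b2 + vb2
        W1, b1 = W1 + vW1, b1 + vb1

    def predict(X_new):
        return np.argmax(sigmoid(sigmoid(X_new @ W1 + b1) @ W2 + b2), axis=1)

    return predict


# With the slide's split of 15 training and 3 testing pictures:
# predict = train_mlp(X_train, y_train)
# accuracy = (predict(X_test) == y_test).mean()
```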

Multi-Layer Perceptron: Results

Layers | alpha | momentum | Configuration (hidden layers) | Testing classification rate (%) | Training classification rate (%)
2      | 0.1   | 0.8      | 10                            | 27                              | 33
       | 0.05  |          |                               | 73.33                           | 93
       | 0.01  |          |                               | 92                              | 100
3      |       |          | 10 10                         | 52                              | 76

Multi-Layer Perceptron: Results
A 100% classification rate is obtained with 400 training iterations.

Conclusion
- The Kohonen SOM reduces the problem to 12 dimensions (previously, working directly on pictures meant thousands of dimensions).
- The multi-layer perceptron needs training, but once it is trained the computations are very fast.
- We can obtain a 100% classification rate with 3 sounds.
- Problem: in Matlab, transforming a picture into a matrix requires extra computation (solution: use another language that is more oriented toward image processing).

References
- C. Amerijckx and P. Thissen, "Image Compression by Self-Organized Kohonen Map", IEEE Transactions on Neural Networks, 1998. http://www.dice.ucl.ac.be/~verleyse/papers/ieeetnn98ca.pdf
- R. S. Collica, "SRAM Bitmap Shape Recognition and Sorting Using Neural Networks", IEEE. http://www.ibexprocess.com/solutions/wp_SRAM.pdf
- J. Fallows, "From Your Lips to Your Printer".
- R. S. Collica, J. P. Card, and W. Martin, "SRAM Bitmap Shape Recognition and Sorting Using Neural Networks", ISSN 0894-6507.
- E. E. E. Frietman, M. T. Hill, and G. D. Khoe, "A Kohonen Neural Network Controlled All-Optical Router System". http://www.ph.tn.tudelft.nl/~ed/pdfs/IJCR.pdf