Convolutional Neural Networks for Image Processing with Applications in Mobile Robotics, by Sruthi Moola.

Presentation transcript:


Convolution Convolution is a common image-processing technique that changes the intensity of each pixel to reflect the intensities of the surrounding pixels. A common use of convolution is to create image filters.
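The idea above can be sketched in a few lines of NumPy. This is a minimal illustration, not production code: each output pixel is a weighted sum of its neighbourhood, and the averaging kernel is one example of an image filter. (Like most deep-learning "convolution" layers, it omits the kernel flip, i.e. it computes cross-correlation, which is identical for symmetric kernels.)

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 'valid' 2-D convolution: each output pixel is a
    weighted sum of the surrounding input pixels."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 averaging (blur) filter: each output pixel becomes the
# mean of its 3x3 neighbourhood in the input.
blur = np.full((3, 3), 1.0 / 9.0)
image = np.arange(25, dtype=float).reshape(5, 5)
print(convolve2d(image, blur))
```

Swapping in a different kernel (sharpening, edge detection, …) gives a different filter with no change to the sliding-window code.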

Convolutional neural network A type of feed-forward multilayer perceptron (MLP). Convolutional nets are inspired by biological processes in the visual cortex, which makes them well suited to image recognition and handwriting recognition; they achieve high performance on the MNIST database. Designed by Yann LeCun.

Convolutional neural network An architecture for applying neural networks to 2-D arrays (usually images), based on spatially localized neural input and the technique of sharing weights across receptive fields.

Types of layers Convolutional layers ▫Feature maps and filters ▫Shared weights Subsampling or max-pooling layers Fully connected layer (classification)

Convolutional layer A rectangular grid of neurons Each neuron takes input from a rectangular section of the previous layer The weights are the same for every neuron, so the layer computes an image convolution of the previous layer The weights specify a convolutional filter There are several grids (feature maps) in each layer; each grid takes input from all grids in the previous layer, using a different filter
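Weight sharing is the key point of the slide above: one small filter (plus one bias) serves every position in the grid. A minimal sketch, with two illustrative edge-detection filters chosen for this example:

```python
import numpy as np

def conv_layer(image, filters, biases):
    """One convolutional layer: each filter is slid over the whole image,
    so its weights are shared by every neuron in the resulting grid.
    Produces one feature map per filter ('valid' positions only)."""
    kh, kw = filters[0].shape
    h, w = image.shape
    maps = []
    for f, b in zip(filters, biases):
        fm = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(fm.shape[0]):
            for j in range(fm.shape[1]):
                fm[i, j] = np.sum(image[i:i + kh, j:j + kw] * f) + b
        maps.append(fm)
    return maps

# Two illustrative 3x3 filters: vertical- and horizontal-edge detectors.
vert = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
horiz = vert.T
image = np.random.rand(8, 8)
maps = conv_layer(image, [vert, horiz], [0.0, 0.0])
# Two 6x6 feature maps from an 8x8 image, using only
# 2 * (3*3 + 1) = 20 trainable parameters regardless of image size.
```

Because the same 20 numbers are reused at every spatial position, the parameter count does not grow with the image size.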

Max pooling layer Takes small blocks from the convolutional layer Subsamples each block to produce a single output There are several ways to subsample: the average, the maximum, or a learned linear combination of the neurons in the block Max pooling layers simply take the maximum of each block
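The max variant described above can be written compactly: split the feature map into non-overlapping blocks and keep only the largest activation in each. A sketch for square 2x2 blocks:

```python
import numpy as np

def max_pool(feature_map, block=2):
    """2x2 max pooling: keep only the largest activation in each
    non-overlapping block, halving the spatial resolution."""
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % block, :w - w % block]
    blocks = trimmed.reshape(h // block, block, w // block, block)
    return blocks.max(axis=(1, 3))

fm = np.array([[1., 2., 5., 0.],
               [3., 4., 1., 2.],
               [0., 1., 9., 8.],
               [2., 0., 7., 6.]])
print(max_pool(fm))
# [[4. 5.]
#  [2. 9.]]
```

Replacing `.max(axis=(1, 3))` with `.mean(axis=(1, 3))` gives the average-pooling variant mentioned on the slide.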

Fully connected layer Performs the high-level reasoning in the network Takes all neurons from the previous layer and connects each of them to every one of its own neurons Its neurons are no longer spatially located (visualize them as one-dimensional) Therefore, no convolutional layers can follow a fully connected layer

Convolutional neural network The network structure is designed to extract relevant features by restricting the weights of one layer to a local receptive field in the previous layer; a feature map is thus obtained in the second layer. A degree of shift and distortion invariance is achieved by reducing the spatial resolution of the feature map.

A six-layer convolutional neural network

Training Backpropagation Within a feature map, all neurons share the same weights and bias, so the number of parameters is much smaller than in a fully connected multilayer perceptron, leading to a reduction in the gap between training and test error Subsampling/pooling layers have only one trainable weight and one trainable bias each, so the number of free parameters is lower still
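The parameter saving is easy to make concrete. Taking the sizes of LeNet-5's first layer (six 5x5 filters on a 32x32 input) and comparing against a fully connected layer producing the same number of output units:

```python
# Free-parameter count for one layer on a 32x32 input
# (sizes taken from LeNet-5's first layer: six 5x5 filters).
h = w = 32
k = 5
n_maps = 6

# Convolutional layer: each feature map shares one kxk filter and one bias.
conv_params = n_maps * (k * k + 1)

# A fully connected layer producing the same number of output units
# would need a separate weight per input pixel (plus a bias) per unit.
out_units = n_maps * (h - k + 1) ** 2      # 6 * 28 * 28 = 4704
fc_params = out_units * (h * w + 1)

print(conv_params, fc_params)   # 156 vs 4821600
```

A factor of roughly 30,000 fewer free parameters, which is what makes both the training and the generalization behaviour so much better.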

Because of the low number of free parameters, training a CNN requires far less computational effort than training a fully connected multilayer perceptron.

Applying CNNs to real-world problems An image-processing system on a mobile robot Task: detect and characterize cracks and damage in sewer pipe walls Uses a monochrome CCD camera Tasks of the CNN: ▫Filter the raw data ▫Identify the spatial location of cracks ▫Enable characterization of the length and width of the damage

Example input and target images for large cracks on a concrete pipe

Horizontal feature: a pipe joint Significant challenges for the filtering system ▫Differentiating between cracks and pipe joints ▫Accounting for shadows and lighting effects

Training was conducted using standard weight-update rules, and approximately 93% of the pixels in the validation set were correctly classified Not every pixel was used for training ▫Computational expense ▫The low proportion of ‘crack’ to ‘clean’ training samples tended to bias the network towards classifying all samples as ‘clean’
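The slides don't say exactly how the training pixels were chosen, but one common remedy for the bias described above is to undersample the majority class so both labels appear equally often. A minimal stdlib sketch of that idea (the function name and counts are illustrative, not from the original system):

```python
import random

def balanced_sample(pixels, labels, n_per_class, seed=0):
    """Undersample so that 'crack' and 'clean' pixels appear equally
    often, avoiding a bias toward predicting everything as 'clean'."""
    rng = random.Random(seed)
    by_class = {}
    for p, y in zip(pixels, labels):
        by_class.setdefault(y, []).append(p)
    sample = []
    for y, items in by_class.items():
        chosen = rng.sample(items, min(n_per_class, len(items)))
        sample.extend((p, y) for p in chosen)
    rng.shuffle(sample)
    return sample

# 1000 'clean' pixels but only 50 'crack' pixels: take 50 of each.
pixels = list(range(1050))
labels = ['clean'] * 1000 + ['crack'] * 50
train = balanced_sample(pixels, labels, n_per_class=50)
```

This also addresses the computational-expense point: the balanced subset is far smaller than the full set of bitmap pixels.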

Example input and CNN output frames Three frames represent the data set: a crack present, no crack but a pipe joint, and a crack and a joint together. As input to a subsequent crack-detection algorithm, the network's output ignores the presence of joints and attenuates lighting effects while enhancing the cracks.

Conclusion Issues: training uses bitmap-type images, which results in an overabundance of training sample points. Key characteristics such as weight sharing are appropriate when the input data is spatially distributed. The CNN concept of weight sharing not only reduces the need for computation but also offers a satisfying degree of noise resistance and invariance to various forms of distortion.

In the present system, CNNs are expected to achieve better results than standard feed-forward networks.

Thank you