Object Recognition from Photographic Images Using a Back Propagation Neural Network
CPE 520 Final Project
West Virginia University
Daniel Moyers
May 6, 2003

Introduction
Why use neural networks for object recognition? Neural networks are the key to smart, complex vision systems for research and industrial applications.

Motivation and Applications
Object recognition is essential for:
Socially interactive robots
Vision-based industrial robots
Autonomous flight vehicles

Background: Object Recognition Concerns
The shape of a pattern in an image must be recognized regardless of position, rotation, and scale
Objects in images must be distinguished from their backgrounds and from other objects
Once isolated, objects can be extracted from the captured image

Neural Network Paradigms to Consider
Supervised learning mechanisms:
Back propagation – very robust and widely used
Extended back propagation: PSRI (Position, Scale, and Rotation Invariant) neural processing
Unsupervised learning mechanisms:
Kohonen network – may be used to place similar objects into groups
Lateral inhibition – can be used for edge enhancement

Application: Neural Network Type
Back Propagation Network with Momentum
BP is classified under the supervised learning paradigm
BP is non-recurrent – learning does not use feedback information
A supervised learning mechanism for multi-layered, generalized feed-forward networks
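The back propagation-with-momentum update described above can be sketched in Python. This is a minimal, illustrative single-hidden-layer implementation, not the project's actual code; the class name and default rates (chosen to match the 0.3 learning rate and 0.7 momentum coefficient used in the training section) are assumptions.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class BPNetwork:
    """Single-hidden-layer feed-forward net trained by back propagation
    with momentum. Illustrative sketch; layer sizes and rates are examples."""

    def __init__(self, n_in, n_hid, n_out, lr=0.3, momentum=0.7, seed=0):
        rng = random.Random(seed)
        self.lr, self.mom = lr, momentum
        # +1 column on each layer for the bias input
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)] for _ in range(n_hid)]
        self.w2 = [[rng.uniform(-0.5, 0.5) for _ in range(n_hid + 1)] for _ in range(n_out)]
        # previous weight changes, kept for the momentum term
        self.dw1 = [[0.0] * (n_in + 1) for _ in range(n_hid)]
        self.dw2 = [[0.0] * (n_hid + 1) for _ in range(n_out)]

    def forward(self, x):
        xb = x + [1.0]  # append bias input
        h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in self.w1]
        hb = h + [1.0]
        y = [sigmoid(sum(w * v for w, v in zip(row, hb))) for row in self.w2]
        return xb, h, y

    def train_step(self, x, target):
        xb, h, y = self.forward(x)
        hb = h + [1.0]
        # output-layer deltas (sigmoid derivative is y * (1 - y))
        d_out = [(t - o) * o * (1 - o) for t, o in zip(target, y)]
        # hidden-layer deltas, back-propagated through w2
        d_hid = [h[j] * (1 - h[j]) * sum(d_out[k] * self.w2[k][j] for k in range(len(d_out)))
                 for j in range(len(h))]
        # weight change = lr * delta * input + momentum * previous change
        for k, dk in enumerate(d_out):
            for j in range(len(hb)):
                self.dw2[k][j] = self.lr * dk * hb[j] + self.mom * self.dw2[k][j]
                self.w2[k][j] += self.dw2[k][j]
        for j, dj in enumerate(d_hid):
            for i in range(len(xb)):
                self.dw1[j][i] = self.lr * dj * xb[i] + self.mom * self.dw1[j][i]
                self.w1[j][i] += self.dw1[j][i]
        return sum((t - o) ** 2 for t, o in zip(target, y))  # squared error
```

The momentum term adds a fraction of the previous weight change to the current one, which smooths the gradient steps and typically speeds convergence.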

Back Propagation Network Architecture

Back Propagation
Back propagation is the best known and most widely used of the current types of NN systems
It can recognize patterns similar to those previously learned
Back propagation networks are very robust and stable
A majority of object/pattern recognition applications use back propagation networks
Back propagation networks show a remarkable degree of fault tolerance in pattern recognition tasks

Problem Statement
The goal was to demonstrate the object recognition capabilities of neural networks using real-world objects
Processed photographs of 14 household objects in various orientations were considered as network training patterns
Images were captured and preprocessed to extract object feature data
The back propagation network was trained with nine patterns
The remaining patterns were used to test the network

The Experimental Objects A total of 14 objects to be classified into 5 groups: Rectangular Circular Square Triangular Cylindrical

Variance in Position, Rotation, and Scale
The captured image sets: 0 degrees, rotated, offset

Image Processing: Preparation for network inputs Image Tool results for cereal box at 45 deg.

Training Data: Network Inputs
Preprocessing section of the software application
The inputs to the network were normalized radius values, measured from the centroid of the object to its edge in increments of 10 degrees
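The radius-signature preprocessing can be sketched as follows. This is an illustrative reconstruction of the feature extraction described above, not the original Image Tool code; the function name and the ray-marching approach are assumptions.

```python
import math

def radius_signature(mask, step_deg=10):
    """Boundary signature of a binary object mask: the distance from the
    object's centroid to its edge, sampled every step_deg degrees and
    normalized by the largest radius (36 values for step_deg=10).
    Illustrative sketch; assumes the centroid lies inside the object."""
    # centroid of all object pixels
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    radii = []
    for deg in range(0, 360, step_deg):
        a = math.radians(deg)
        r = 0.0
        # march outward from the centroid until we step off the object
        while True:
            x = int(round(cx + (r + 1) * math.cos(a)))
            y = int(round(cy + (r + 1) * math.sin(a)))
            if 0 <= y < len(mask) and 0 <= x < len(mask[0]) and mask[y][x]:
                r += 1
            else:
                break
        radii.append(r)
    rmax = max(radii) or 1.0
    return [r / rmax for r in radii]  # normalized network inputs
```

Normalizing by the largest radius makes the signature scale invariant, which matches the requirement that objects be recognized regardless of size.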

Analysis of Training Data for Determination of Training Set
Candidate sampling increments:
10 deg (36 data points)
30 deg (12 data points)
60 deg (6 data points)
90 deg (4 data points)

The Training Set Selection Interface
- Nine selections are made to train the 9 output neurons:
One object from each group at 0 degrees (5 total)
One object from each non-circular group at 45 degrees (4 total)
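The nine output neurons implied by these selections can be encoded as one target entry per (shape group, angle) pair. This is a hypothetical sketch of that encoding; the list and function names are assumptions.

```python
# Hypothetical encoding of the 9 output classes: every shape group at
# 0 degrees, plus each non-circular group at 45 degrees (a circle looks
# the same at any rotation, so a circular/45 class would be redundant).
SHAPES = ["rectangular", "circular", "square", "triangular", "cylindrical"]
CLASSES = [(s, 0) for s in SHAPES] + [(s, 45) for s in SHAPES if s != "circular"]

def target_vector(shape, angle):
    """One-hot training target: 1.0 on the output neuron for this class."""
    vec = [0.0] * len(CLASSES)
    vec[CLASSES.index((shape, angle))] = 1.0
    return vec
```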

The Training Section
Testing configuration:
Number of neurons in hidden layer: 85
Learning rate: 0.3
Momentum coefficient: 0.7
Acceptable error: 5%
Training increment angle: 10 deg.

The Testing Section
- After training, the user may test all 36 configurations against the results of the 9 training configurations
- Seen at the bottom right, the book was used as the rectangular training object
- When the cereal box (bottom left) was tested, the network correctly identified it as a rectangle at 45 degrees

The Entire GUI Configuration

Conclusions
The network successfully classified all of the test objects by object type and orientation
The average training time to reach 100% accuracy in classifying all of the test objects was approximately 42 minutes
The average number of iterations required for training was 552
Once training is complete, classifying objects can be performed in real time
When the network was trained to within 2% error, training took 3.27 hours and required 2493 iterations
However, 5% acceptable error was sufficient for the network to correctly identify all of the test objects, owing to the similarity of objects within each group

Future Work Development of a semi-supervised neural network for humanoid robotics applications The network will continually grow in size as the object knowledge base expands Network training will be modeled after human learning techniques The humanoid robot’s neural network will learn new objects and then prompt its trainer to provide a name for each of those objects

Questions? Thank you for your time!