Handwritten Character Recognition Using Artificial Neural Networks Shimie Atkins & Daniel Marco Supervisor: Johanan Erez Technion - Israel Institute of Technology Department of Electrical Engineering The Image and Computer Vision Laboratory November 1998

Project Goals  Creating an optimal neural network capable of identifying characters.  Creating training files, in which the characters are reduced and presented to the net.  Creating an application that enables the user to define forms that are filled in and later identified.

Theoretical Background What is a neuron?  A black box that has inputs.  These inputs are summed according to their respective weights; the sum is the total input.  A mathematical function is then applied to the total input, and the result is the neuron’s output. What is a layer?  A group of neurons that are parallel to each other, with no connections among themselves.
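The neuron described above can be sketched in C++ (the language the project uses). The sigmoid activation shown here is an illustrative assumption; the slides do not name the specific function applied to the total input.

```cpp
#include <cmath>
#include <vector>

// A single neuron as described above: the inputs are summed according
// to their weights to form the total input, then an activation function
// (sigmoid here, as a common choice) turns that into the output.
double neuronOutput(const std::vector<double>& inputs,
                    const std::vector<double>& weights,
                    double bias) {
    double total = bias;                      // total input starts at the bias
    for (size_t i = 0; i < inputs.size(); ++i)
        total += inputs[i] * weights[i];      // weighted sum of the inputs
    return 1.0 / (1.0 + std::exp(-total));    // sigmoid activation -> output
}
```

For example, inputs {1, 0, 1} with weights {0.5, -0.3, 0.8} and zero bias give a total input of 1.3 and an output of about 0.786.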

Background (cont) What is a network?  A group of layers, where the neurons’ outputs in one layer are the inputs of the neurons in the next layer.  The input of the network is the first layer’s input.  The output of the network is the last layer’s output. Note:  Our network is fully connected, so every neuron in one layer is connected to every neuron in the next layer.
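The fully connected forward pass described above can be sketched as follows; the weight layout and sigmoid activation are assumptions for illustration, not taken from the slides.

```cpp
#include <cmath>
#include <vector>

// Forward pass through one fully connected layer: every neuron in the
// layer sees every output of the previous layer.
// weights[j][i] is the weight from input i to neuron j (assumed layout).
std::vector<double> layerForward(
        const std::vector<double>& in,
        const std::vector<std::vector<double>>& weights) {
    std::vector<double> out;
    for (const auto& w : weights) {
        double total = 0.0;
        for (size_t i = 0; i < in.size(); ++i)
            total += in[i] * w[i];                   // weighted sum
        out.push_back(1.0 / (1.0 + std::exp(-total))); // sigmoid (assumed)
    }
    return out;
}

// A network is layers chained together: the output of one layer
// becomes the input of the next.
std::vector<double> netForward(
        std::vector<double> x,
        const std::vector<std::vector<std::vector<double>>>& layers) {
    for (const auto& layer : layers)
        x = layerForward(x, layer);
    return x;
}
```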

Training Algorithm We used the back-propagation algorithm, which works as follows:  The net receives an input vector and a desired output vector.  Propagate the input vector through the net.  Compare the net’s output vector with the desired vector; if they match within the allowable error, we are done.  Otherwise, update the weights, beginning with the last layer and advancing backwards towards the first layer.  Go back to the second step.

The Reduction Algorithm  Divide the height and width of the large picture by those of the small picture.  Each resulting block of pixels in the large picture represents one pixel in the small picture.  Count the number of “On” pixels in the block.  Determine, according to a predefined threshold, whether the pixel in the small picture is “On” or “Off”. Note:  If the block dimensions are not integral, the image is padded.
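The reduction steps above can be sketched as a block-counting downsampler. This version assumes the large dimensions divide evenly by the small ones (the slides handle the non-integral case by padding, which is omitted here).

```cpp
#include <vector>

// Reduce a binary image: each block of pixels in the large picture maps
// to one pixel in the small picture. The small pixel is "On" (1) when
// the count of "On" pixels in its block reaches the threshold.
std::vector<std::vector<int>> reduceImage(
        const std::vector<std::vector<int>>& big,
        int smallH, int smallW, int threshold) {
    int bh = static_cast<int>(big.size()) / smallH;    // block height
    int bw = static_cast<int>(big[0].size()) / smallW; // block width
    std::vector<std::vector<int>> small(smallH, std::vector<int>(smallW, 0));
    for (int r = 0; r < smallH; ++r)
        for (int c = 0; c < smallW; ++c) {
            int on = 0;
            for (int i = 0; i < bh; ++i)
                for (int j = 0; j < bw; ++j)
                    on += big[r * bh + i][c * bw + j]; // count "On" pixels
            small[r][c] = (on >= threshold) ? 1 : 0;   // apply threshold
        }
    return small;
}
```

For example, a 4x4 image reduced to 2x2 with threshold 2 keeps a block "On" only when at least two of its four pixels are "On".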

Reduction Algorithm (cont)

Code Implementation We implemented the network in C++. We defined three classes: CNeuron, CLayer, CNet. They are related by containment: Net  Layer  Neuron, i.e. the net contains layers, and each layer contains neurons.
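The containment described above can be sketched as follows. Only the three class names come from the slides; the members shown are assumptions used to illustrate the Net → Layer → Neuron relationship.

```cpp
#include <vector>

// Sketch of the class relationships: CNet holds CLayer objects,
// each CLayer holds CNeuron objects. Members are illustrative only.
class CNeuron {
public:
    std::vector<double> weights; // one weight per input from the previous layer
};

class CLayer {
public:
    std::vector<CNeuron> neurons; // parallel neurons, no links among them
};

class CNet {
public:
    std::vector<CLayer> layers;   // each layer's outputs feed the next layer
};
```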

Results of Form Identification The following form was created (using our interface) and scanned:

Results (cont)  First net identified the form as follows: Configuration of Network: 37 nets (8x10 inputs, 3 layers) Alphabetical - 40 neurons in 2nd layer Numerical - 25 neurons in 2nd layer Train File: dan1to6.trn (40 examples of each character). Identification Rate (for check file): 96%

Results (cont)  Second network identified the form as follows: Configuration of Network: 2 nets (8x10 inputs, 3 layers) Alphabetical - 30 neurons in 2nd layer Numerical - 30 neurons in 2nd layer Train File: dan1to9.trn (60 examples of each character) Identification Rate (for check file): 95.5%

Conclusions  The identification success rate is proportional to the number of examples of each character given.  More than one hidden layer does not necessarily improve performance.  Most errors are with letters of similar shape (such as I- J or O-D), numerical digits are usually successfully identified.  The configuration and training parameters of the net have minimal influence (once inside certain optimal limits ).