VEHICLE NUMBER PLATE RECOGNITION SYSTEM

Information and constraints: character recognition using moments; character recognition using OCR; signature; mostly implemented using hardware.

Technique used - Signature. Take a binary image. Sum the white (or black) pixels in each row and in each column. Find the peaks and valleys in the row and column histograms. (Figures: ridge in the row signature; row histogram signature; column histogram signature.)
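To make the signature idea concrete, here is a minimal MATLAB sketch of how the row and column signatures might be computed; the file name 'car.jpg' and the variable names are illustrative, not taken from the project code.

I  = rgb2gray(imread('car.jpg'));        % placeholder colour input image
bw = im2bw(I, graythresh(I));            % binarise with Otsu's threshold
rowSig = sum(bw, 2);                     % row signature: white pixels in each row
colSig = sum(bw, 1);                     % column signature: white pixels in each column
figure, plot(rowSig), title('Row signature');   % peaks and valleys mark horizontal bands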

Brief overview of the system. The system takes an image of the car and searches for the number plate in it. Once a probable number plate area is located, it is given to the OCR. If the OCR does not recognize characters in that area, the image is searched again for another number plate area. If characters are recognized, the number plate search is terminated.

Limitations of the system. A noise-free image with uniform illumination is required. The numbers must be displayed in a single line on the number plate.

Basic components of the system. Division of the image into small images (finding probable number plate areas in the image). Recognizing the number plate area. Parsing the number plate to extract characters. Applying OCR to the parsed characters.

Part I – Finding probable Number Plate images

Finding the probable number plate image. The signature technique is used to break the vehicle image into smaller image pieces; one of these pieces will be the number plate. Breaking the image into pieces was the main issue.

Steps towards refinement. Thresholding using the average of the minimum and maximum values of the signature.
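As a hedged illustration, the thresholding step on a row signature might look like the MATLAB sketch below, reusing the binarised image bw from the earlier sketch; the later refinement slides only change how the threshold thr is computed.

rowSig = sum(bw, 2);                         % row signature of the binarised vehicle image
thr    = (min(rowSig) + max(rowSig)) / 2;    % average of the minimum and maximum of the signature
mask   = rowSig > thr;                       % rows that belong to a candidate band
edges  = diff([0; mask; 0]);                 % +1 at the start of a band, -1 just after its end
bandStart = find(edges ==  1);               % first row of each candidate band
bandEnd   = find(edges == -1) - 1;           % last row of each candidate band
% each [bandStart(i), bandEnd(i)] pair is cut out of the image as a smaller piece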

Steps towards refinement – Cont. Thresholding using the average of the non-zero minimum and the second-highest peak of the signature.

Steps towards refinement – Cont. Thresholding using the average of the minimum and the lowest peak of the signature.

Row-wise signature of the binarised image

Row-wise signature of the inverted binarised image

Car images (figures: original image; binarised image; inverted binarised image).

Steps towards refinement – Cont. Thresholding using the average of the minimum and the median of the signature.

Extracting the number plate from the image. One image piece that will be tested as the number plate.

Part II – Recognition of the Number Plate

Recognizing the plate and parsing it. Look for the number plate among the broken pieces of the vehicle image. Apply peak-to-valley analysis to the candidate image pieces to further break each piece into possible characters. The image piece with the maximum number of candidate-character peaks is selected as the number plate. (Figures: column signature of the number plate image; column signature of another image piece.)

Recognizing the plate and parsing it – Cont. (Figure: column signature of another image piece.)

Recognizing the plate and parsing it – Cont. The column signature with the maximum number of ridges is selected. The width of a ridge should be 15% of the width of the whole 'number plate image'. The minimum value of the column histogram is taken as the threshold value.
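A small MATLAB sketch of this ridge-counting step, assuming colSig is the column signature of one candidate piece; the 15% figure is read here as an upper bound on ridge width, which is one interpretation of the slide.

thr    = min(colSig);                        % minimum of the column histogram as the threshold
mask   = colSig > thr;                       % columns belonging to a ridge (possible character)
edges  = diff([0, mask, 0]);                 % ridge boundaries along the column direction
starts = find(edges ==  1);
stops  = find(edges == -1) - 1;
widths = stops - starts + 1;                 % width of each ridge in pixels
keep   = widths <= 0.15 * numel(colSig);     % keep ridges no wider than 15% of the plate image
numRidges = sum(keep);                       % the piece with the most ridges is taken as the plate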

Parsing the plate. (Figure: images of all the extracted characters.)

Optical Character Recognizer - OCR

Recognition of characters. The method for recognizing characters from an image containing them is based on object recognition techniques used in digital image processing. Two commonly used techniques for object recognition are template matching using correlation and distance measurement.

Object recognition techniques. Distance measurement is based on a representation technique that uses the moments of an object. Moments are measurements that capture features associated with the object, such as its center of gravity and eccentricity. Distance measurement has some drawbacks: the extensive computations reduce the efficiency of the algorithm, and it is difficult to implement.
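For reference, a minimal MATLAB sketch of the moment-based features mentioned above (center of gravity and eccentricity) computed from a binary character image; charImg is an assumed variable name, and this method was not the one adopted in the project.

B = double(charImg > 0);                     % segmented binary character image (assumed input)
[rows, cols] = size(B);
[X, Y] = meshgrid(1:cols, 1:rows);
m00  = sum(B(:));                            % zeroth moment (object area)
xbar = sum(sum(X .* B)) / m00;               % center of gravity, x
ybar = sum(sum(Y .* B)) / m00;               % center of gravity, y
mu20 = sum(sum((X - xbar).^2 .* B)) / m00;   % second-order central moments
mu02 = sum(sum((Y - ybar).^2 .* B)) / m00;
mu11 = sum(sum((X - xbar) .* (Y - ybar) .* B)) / m00;
lam  = eig([mu20 mu11; mu11 mu02]);          % eigenvalues of the moment (covariance) matrix
ecc  = sqrt(1 - min(lam) / max(lam));        % eccentricity of the equivalent ellipse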

Template matching. The technique used in this project for character recognition is template matching using correlation. It is based on performing correlation between the segmented image from which a character is to be recognized and the character template image used for recognition. This technique is efficient compared to distance measurement. The only problem is with the template image: proper acquisition of the template image is required.

Correlation. Correlation is a modified form of convolution. f(x,y) represents the gray-scale value at element (x,y) of an image; here f(x,y) is the image from which a character is to be recognized, g(x,y) is the image of a character template, and h(x,y) is the result image after correlation.

Correlation – Cont. Correlation in the spatial domain can be written as h(x,y) = f(x,y) o g*(x,y), where o denotes correlation and * denotes the complex conjugate. By the correlation theorem, correlation in the frequency domain can be written as h(x,y) = F^-1{ F{f(x,y)} .* F{g(x,y)}* }. In MATLAB, correlation in the frequency domain can be computed as h = real(ifft2(fft2(f,70,324) .* fft2(rot90(g,2),70,324))); here rot90(g,2) rotates the real-valued template by 180 degrees, which is equivalent to conjugating its transform, and 70 x 324 are the padded transform dimensions.

Result of correlation. Since correlation is a basic form of convolution, its result is an image that represents the correlation of the two matrices. The size of the result matrix is larger than that of the input image matrices, so some thresholding has to be applied to the resultant image. Normally the threshold value is a little less than the maximum value of the resultant image.

Detection of the existence of a template image. The presence of a single pixel above the threshold indicates an exact match between the template image and the input segmented image. The thresholded image therefore tells us whether such a pixel is present in the correlation result image.
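A hedged MATLAB sketch of this detection step, assuming h is the correlation result from the previous slide; the 0.95 factor is an assumed choice for "a little less than the maximum".

thr   = 0.95 * max(h(:));                    % threshold a little below the maximum response
match = h > thr;                             % thresholded correlation image
if any(match(:))
    [r, c] = find(match, 1);                 % location of a matching position
    % at least one pixel survives the threshold: the template is considered present
end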

Modules implemented in MATLAB. Three modules were defined in MATLAB for character recognition: ocr_alpha(p1, p2, p3, p4) and ocr_numeric(p1, p2, p3, p4), where p1 is the segmented image to be recognized and p2, p3, p4 are the template images used for recognition; and Template(numplateimage, charimage), where numplateimage is the segmented image and charimage is the template image currently used for recognition.

Modules implemented in MATLAB – Cont. The Template module returns the number of white pixels in the resultant correlated image. The Template function is called by the ocr_numeric() and ocr_alpha() functions. The ocr_numeric() and ocr_alpha() functions are called by the user, who passes the parameters to them.
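Based on the description above, the Template module might be sketched roughly as follows; the 70 x 324 padding follows the correlation expression shown earlier, while the 0.95 threshold factor is an assumption rather than the project's actual value.

function nWhite = Template(numplateimage, charimage)
% Correlate the segmented plate image with one character template and
% return the number of white pixels in the thresholded correlation image.
f = double(numplateimage);
g = double(charimage);
h = real(ifft2(fft2(f, 70, 324) .* fft2(rot90(g, 2), 70, 324)));   % frequency-domain correlation
thr    = 0.95 * max(h(:));                   % threshold a little below the maximum (assumed factor)
nWhite = sum(sum(h > thr));                  % count white pixels in the thresholded result
end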

Thank you