Study of Iris Recognition Schemes (Interim PowerPoint Presentation)
By: Ritika Jain, ritika.jain@mavs.uta.edu
Under the guidance of Dr. K. R. Rao
The University of Texas at Arlington, Spring 2012
PROPOSAL
This project focuses on studying and implementing the various iris recognition schemes available, and on analyzing the different algorithms using the Chinese Academy of Sciences Institute of Automation (CASIA) [14] database.
Brief introduction to iris recognition [19]
Iris recognition is an automated method of biometric identification that applies mathematical pattern-recognition techniques to video images of the irides of an individual's eyes, whose complex random patterns are unique and can be seen from some distance.
Comparison of iris recognition and retinal scanning [19]
Iris recognition uses a camera similar to that in a home video camcorder, with subtle infrared illumination, to capture an image of the iris from a distance of about 3 to 10 inches (a non-intrusive process).
Retinal scanning requires a very close encounter with a scanning device that sends a beam of light deep inside the eye to capture an image of the retina (an intrusive capture process).
A few places where iris recognition has been successfully implemented [19]
United Arab Emirates homeland security border control
"Aadhaar", India's Unique Identification (UID) project
BioID Technologies in Pakistan
Schiphol Airport, Netherlands
United Kingdom's Iris Recognition Immigration System (IRIS)
Many United States and Canadian airports
Libor Masek's principle [3]
The iris recognition system consists of:
An automatic segmentation stage based on the Hough transform, which localizes the circular iris and pupil regions.
The extracted iris region is then normalized into a rectangular block with constant dimensions to account for imaging inconsistencies.
Finally, the phase data from 1D Log-Gabor filters is extracted and quantized to four levels to encode the unique pattern of the iris into a bit-wise biometric template.
The iris recognition system is composed of a number of sub-systems, which correspond to the stages of iris recognition. These stages are:
Segmentation – locating the iris region in an eye image
Normalization – creating a dimensionally consistent representation of the iris region
Feature encoding – creating a template containing only the most discriminating features of the iris
The Hamming distance [3] is employed for classification of iris templates, and two templates are found to match if a test of statistical independence fails. The input to the system is an eye image, and the output is an iris template, which provides a mathematical representation of the iris region.
Segmentation techniques [3]
Hough transform (employed by Wildes et al. [7])
Daugman's integro-differential operator approach [5] (defined below)
Active contour models (used by Ritter [17])
Eyelash and noise detection (used by Kong and Zhang [16])
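For reference, Daugman's integro-differential operator [5] searches over all candidate circle centers (x0, y0) and radii r for the maximum Gaussian-smoothed radial derivative of the circular contour integral of image intensity (this is the standard form of the operator, quoted here for completeness rather than taken from Masek's report):

```latex
\max_{(r,\,x_0,\,y_0)} \left|\, G_\sigma(r) \ast \frac{\partial}{\partial r}
\oint_{r,\,x_0,\,y_0} \frac{I(x,y)}{2\pi r}\, ds \,\right|
```

where I(x, y) is the eye image, G_sigma(r) is a Gaussian smoothing kernel of scale sigma, and the contour integral runs along the circle of radius r centered at (x0, y0).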
Segmentation technique in Masek's method [3]
The circular Hough transform is used; Canny edge detection [10] (Kovesi's Canny edge detection MATLAB function) first generates the edge map.
Eyelids are detected with a line Hough transform, implemented via the MATLAB Radon transform [10], [11].
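A minimal MATLAB sketch of this Canny-plus-circular-Hough step is given below. It illustrates the technique only and is not Masek's code; the file name, radius range, and angular sampling are assumptions, and the Image Processing Toolbox function edge() stands in for Kovesi's Canny implementation.

```matlab
% Sketch: localize a circular boundary (pupil or iris) with a circular Hough
% transform over an edge map. Hypothetical input file and parameter values.
img   = im2double(imread('casia_eye.bmp'));   % hypothetical CASIA eye image
edges = edge(img, 'canny');                   % edge map (toolbox Canny as a stand-in)

radii = 28:75;                                % assumed range of candidate radii (pixels)
[rows, cols] = size(edges);
acc = zeros(rows, cols, numel(radii));        % accumulator H(center row, center col, radius)

[ey, ex] = find(edges);                       % edge pixel coordinates
theta = linspace(0, 2*pi, 60);                % coarse angular sampling

for k = 1:numel(radii)
    r = radii(k);
    for i = 1:numel(ex)
        for t = theta                         % each edge pixel votes for the centers of
            xc = round(ex(i) - r*cos(t));     % all circles of radius r passing through it
            yc = round(ey(i) - r*sin(t));
            if xc >= 1 && xc <= cols && yc >= 1 && yc <= rows
                acc(yc, xc, k) = acc(yc, xc, k) + 1;
            end
        end
    end
end

[~, best] = max(acc(:));                      % strongest circle in the accumulator
[cy, cx, kbest] = ind2sub(size(acc), best);
fprintf('boundary estimate: center (%d, %d), radius %d\n', cx, cy, radii(kbest));
```

In Masek's implementation this search is carried out by the findcircle and houghcircle functions listed below.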
MATLAB functions involved in the segmentation technique [3], [10], [11]
createiristemplate - generates a biometric template from an iris eye image.
segmentiris - performs automatic segmentation of the iris region from an eye image; also isolates noise areas such as occluding eyelids and eyelashes.
addcircle - circle generator for adding weights into a Hough accumulator array.
adjgamma - adjusts image gamma.
circlecoords - returns the pixel coordinates of a circle defined by its radius and the x, y coordinates of its center.
canny - performs Canny edge detection.
findcircle - returns the coordinates of a circle in an image, using the Hough transform and Canny edge detection to create the edge map.
findline - returns the coordinates of a line in an image, using the Hough transform and Canny edge detection to create the edge map.
houghcircle - takes an edge map image and performs the Hough transform for finding circles in the image.
hysthresh - performs hysteresis thresholding of an image.
linecoords - returns the x, y coordinates of positions along a line.
nonmaxsup - performs non-maxima suppression on an image using an orientation image; the orientation image is assumed to give feature normal orientation angles in degrees (0-180).
Normalization techniques [3]
Daugman's rubber sheet model [5]
Image registration technique [7]
Virtual circles technique [8]
Normalization technique in Masek's method [3]
For normalization of the iris region, a technique based on Daugman's rubber sheet model [5] is implemented. The center of the pupil is taken as the reference point, and radial vectors pass through the iris region. The number of data points selected along each radial line is defined as the radial resolution, and the number of radial lines going around the iris region is defined as the angular resolution.
A constant number of points is chosen along each radial line, so that a constant number of radial data points is taken, irrespective of how narrow or wide the radius is at a particular angle. The normalized pattern is then created by backtracking to find the Cartesian coordinates of data points from the radial and angular positions in the normalized pattern.
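A minimal MATLAB sketch of this rubber-sheet remapping follows. It is not Masek's normaliseiris; the center and radius values are hypothetical (in practice they come from the segmentation stage), and the pupil and iris boundaries are assumed concentric to keep the sketch short.

```matlab
% Sketch: Daugman-style rubber-sheet normalization of the iris annulus into
% a fixed-size rectangular block. Parameter values are assumed.
img = im2double(imread('casia_eye.bmp'));     % hypothetical input image
cx = 160; cy = 140;                           % assumed pupil center (column, row)
rp = 45;  ri = 110;                           % assumed pupil and iris radii (pixels)

radialRes  = 20;                              % data points along each radial line
angularRes = 240;                             % number of radial lines around the iris

theta = linspace(0, 2*pi, angularRes + 1); theta(end) = [];
rho   = linspace(0, 1, radialRes).';          % 0 = pupil boundary, 1 = iris boundary

r  = rp + rho * (ri - rp);                    % radius of every sample point
xs = cx + r * cos(theta);                     % Cartesian coordinates of the sampling
ys = cy + r * sin(theta);                     % grid (radialRes x angularRes)

% "Backtrack" from (rho, theta) to Cartesian coordinates and read the grey
% levels with bilinear interpolation to build the normalized pattern.
polar = interp2(img, xs, ys, 'linear', 0);    % radialRes x angularRes block
```

This sketch omits the noise mask that Masek's code generates for regions occluded by eyelids and eyelashes and carries alongside the normalized pattern.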
Feature extraction and encoding techniques [3]
Gabor filters [3]
Log-Gabor filters (used by Masek [3])
Zero-crossings of the 1D wavelet transform (used by Boles et al. [8])
Laplacian of Gaussian filters (used by Wildes et al. [7])
Feature encoding is implemented by convolving the normalized iris pattern with 1D Log-Gabor wavelets. The 2D normalized pattern is broken up into a number of 1D signals, and these 1D signals are then convolved with the 1D Log-Gabor wavelets. The output of filtering is phase quantized to four levels using the Daugman method [1], with each filter producing two bits of data for each phasor. The output of phase quantization is chosen to be a Gray code, so that when going from one quadrant to an adjacent quadrant, only one bit changes.
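The sketch below illustrates this encoding on the normalized pattern polar from the normalization sketch above: each row is filtered with a 1D Log-Gabor filter in the frequency domain, and the phase of the complex response is quantized to two Gray-coded bits per sample. The wavelength and bandwidth values are assumptions, not necessarily Masek's parameters, and the noise mask is again omitted.

```matlab
% Sketch: 1D Log-Gabor filtering of each row of the normalized pattern,
% followed by two-bit phase quantization (quadrant coding).
[nr, nc] = size(polar);
wavelength = 18;                              % assumed filter wavelength (pixels)
sigmaOnf   = 0.5;                             % assumed bandwidth ratio sigma/f0
f0 = 1 / wavelength;                          % center frequency of the filter

nf = floor(nc/2);                             % positive-frequency bins
freq = (1:nf) / nc;
logGabor = zeros(1, nc);
logGabor(2:nf+1) = exp(-(log(freq/f0)).^2 / (2*(log(sigmaOnf))^2));
% DC and the negative-frequency bins stay zero, so the filtered signal is
% complex and its phase can be read off directly.

template = false(nr, 2*nc);                   % two bits per filtered sample
for row = 1:nr
    resp = ifft(fft(polar(row, :)) .* logGabor);
    template(row, 1:2:end) = real(resp) > 0;  % bit 1: sign of the real part
    template(row, 2:2:end) = imag(resp) > 0;  % bit 2: sign of the imaginary part
end
% Adjacent phase quadrants differ in exactly one of the two bits, which is
% the Gray-code property described above.
```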
Functions involved in steps 2 and 3 - normalization and encoding [3], [10], [11]
normaliseiris - normalizes the iris region by unwrapping the circular region into a rectangular block of constant dimensions.
encode - generates a biometric template from the normalized iris region, and also generates the corresponding noise mask.
gaborconvolve - convolves each row of an image with 1D Log-Gabor filters.
Matching algorithms [3]
Hamming distance (employed by Daugman) [3], [5]
Weighted Euclidean distance (used by Zhu et al. [18])
Normalized correlation (used by Wildes et al. [7])
Matching algorithm used in L. Masek's method [3]
For matching, the Hamming distance is chosen, since bit-wise comparison is required. The Hamming distance algorithm employed also incorporates noise masking, so that only significant bits are used in calculating the Hamming distance between two iris templates. When taking the Hamming distance, only those bits in the iris patterns that correspond to '0' bits in the noise masks of both iris patterns are used in the calculation [3]. A minimal sketch of this masked comparison is given after the function list below.
Functions involved in step 4 - matching [3], [10], [11]
gethammingdistance - returns the Hamming distance between two iris templates; incorporates noise masks, so noise bits are not used in calculating the HD.
shiftbits - shifts the bit-wise iris patterns in order to find the best match. Each shift is by two bit values, to the left and to the right, since one pixel value in the normalized iris pattern gives two bit values in the template.
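Below is a minimal MATLAB sketch of the masked Hamming distance with bit shifting. The function name masked_hamming, its interface, and the convention that a mask bit of 1 marks a noisy bit are assumptions for illustration; it is not Masek's gethammingdistance.

```matlab
function hd = masked_hamming(templateA, maskA, templateB, maskB, maxShift)
% Sketch: Hamming distance between two binary iris templates, counting only
% bits flagged as noise-free in BOTH masks (a mask bit of 1 marks noise
% here), and trying circular shifts to compensate for eye rotation.
%   HD = ||(A xor B) and notMaskA and notMaskB|| / ||notMaskA and notMaskB||
hd = 1;                                        % start from the worst possible distance
for s = -maxShift:maxShift
    % one pixel of the normalized pattern = two template bits, so shift by 2*s columns
    tB = circshift(templateB, [0, 2*s]);
    mB = circshift(maskB,     [0, 2*s]);
    valid  = ~maskA & ~mB;                     % bits usable in both templates
    nValid = nnz(valid);
    if nValid > 0
        disagree = xor(templateA, tB) & valid; % differing, noise-free bits
        hd = min(hd, nnz(disagree) / nValid);  % keep the best (lowest) distance
    end
end
end
```

For example, hd = masked_hamming(tA, mA, tB, mB, 8) tries shifts of up to eight pixels in each direction; a pair would then be declared a match if hd falls below a chosen threshold (the shift range and threshold here are illustrative, not values from the report).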
REFERENCES
[1] J. Daugman, "High confidence visual recognition of persons by a test of statistical independence", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 11, pp. 1148-1160, November 1993.
[2] J. Daugman, "How iris recognition works", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, No. 1, pp. 21-30, January 2004.
[3] L. Masek, "Recognition of human iris patterns for biometric identification", M.S. thesis, University of Western Australia, 2003.
[4] R. Wildes, "Iris recognition: an emerging biometric technology", Proceedings of the IEEE, Vol. 85, No. 9, pp. 1348-1363, September 1997.
[5] J. Daugman, "Biometric personal identification system based on iris analysis", United States Patent No. 5,291,560, 1994.
[6] S. Sanderson and J. Erbetta, "Authentication for secure environments based on iris scanning technology", IEE Colloquium on Visual Biometrics, pp. 8/1-8/7, March 2000.
[7] R. Wildes, J. Asmuth, G. Green, S. Hsu, R. Kolczynski, J. Matey and S. McBride, "A system for automated iris recognition", Proceedings of the IEEE Workshop on Applications of Computer Vision, Sarasota, FL, pp. 121-128, December 1994.
[8] W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform", IEEE Transactions on Signal Processing, Vol. 46, No. 4, pp. 185-188, April 1998.
[9] A. Gonzaga and R. M. da Costa, "Extraction and selection of dynamic features of the human iris", XXII Brazilian Symposium on Computer Graphics and Image Processing, pp. 202-208, October 2009.
[10] P. Kovesi, "MATLAB functions for computer vision and image analysis", available at: http://www.cs.uwa.edu.au/~pk/Research/MatlabFns/index.html
[11] L. Masek and P. Kovesi, "MATLAB source code for a biometric identification system based on iris patterns", The School of Computer Science and Software Engineering, The University of Western Australia, 2003.
[12] D. M. Monro, S. Rakshit and D. Zhang, "DCT-based iris recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 4, pp. 586-595, April 2007.
[13] Sample source code for the functions involved in Masek's algorithm is available at: http://www.advancedsourcecode.com/iris.asp
[14] Chinese Academy of Sciences - Institute of Automation, database of greyscale eye images: http://www.cbsr.ia.ac.cn/IrisDatabase.htm
[15] K. Miyazawa, K. Ito, K. Aoki, T. Kobayashi and K. Nakajima, "An efficient iris recognition algorithm using phase-based image matching", IEEE International Conference on Image Processing, pp. 325-328, September 2005.
[16] W. Kong and D. Zhang, "Accurate iris segmentation based on novel reflection and eyelash detection model", Proceedings of the 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing, Hong Kong, pp. 263-266, May 2001.
[17] N. Ritter, "Location of the pupil-iris border in slit-lamp images of the cornea", Proceedings of the International Conference on Image Analysis and Processing, pp. 740-745, September 1999.
[18] Y. Zhu, T. Tan and Y. Wang, "Biometric personal identification based on iris patterns", Proceedings of the 15th International Conference on Pattern Recognition, Spain, Vol. 2, pp. 801-804, February 2000.
[19] Online free encyclopedia, Wikipedia: http://www.wikipedia.org/
[20] K. R. Rao and P. Yip, "Discrete Cosine Transform", Boca Raton, FL: Academic Press, 1990.
THANK YOU