Automated Iris Recognition Technology & Iris Biometric System CS 790Q Biometrics Instructor: Dr G. Bebis Presented by Chang Jia Dec 9th, 2005.


Automated Iris Recognition Technology & Iris Biometric System CS 790Q Biometrics Instructor: Dr G. Bebis Presented by Chang Jia Dec 9th, 2005

2 Overview
The Iris as a Biometric: The iris is an overt body that is available for remote assessment with the aid of a machine vision system to do automated iris recognition. Iris recognition technology combines computer vision, pattern recognition, statistical inference, and optics. The spatial patterns that are apparent in the human iris are highly distinctive to an individual:
- Clinical observations
- Developmental biology

3 Overview
Figures: the structure of the human eye; the structure of the iris seen in a transverse section; the structure of the iris seen in a frontal section.

4 Overview
Its suitability as an exceptionally accurate biometric derives from:
- its extremely data-rich physical structure
- genetic independence: no two eyes are the same
- patterns apparently stable throughout life
- physical protection by a transparent window (the cornea); a highly protected internal organ of the eye
- externally visible, so noninvasive: patterns can be imaged from a distance

5 Overview
The disadvantages of using the iris as a biometric measurement:
- Small target (about 1 cm) to acquire from a distance (about 1 m)
- Moving target
- Located behind a curved, wet, reflecting surface
- Obscured by eyelashes, lenses, and reflections
- Partially occluded by eyelids, often drooping
- Deforms non-elastically as the pupil changes size
- Illumination should not be visible or bright

PART I: Iris Recognition: An Emerging Biometric Technology
R. Wildes, "Iris Recognition: An Emerging Biometric Technology," Proceedings of the IEEE, vol. 85, no. 9, pp. 1348-1363, September 1997.
CS 790Q Biometrics

7 Outline
- Technical Issues **
  - Image Acquisition
  - Iris Localization
  - Pattern Matching
- Systems and Performance **
(Throughout the discussion in this paper, the iris-recognition systems of Daugman and Wildes et al. will be used to provide illustrations.)

8 Technical Issues Schematic diagram of iris recognition

9 I. Image Acquisition
Why important? One of the major challenges of automated iris recognition is to capture a high-quality image of the iris while remaining noninvasive to the human operator.
Concerns for the image-acquisition rig:
- Images obtained with sufficient resolution and sharpness
- Good contrast in the interior iris pattern under proper illumination
- Iris well centered without unduly constraining the operator
- Artifacts eliminated as much as possible

10 The Daugman image-acquisition rig I. Image Acquisition - Rigs

11 The Wildes et al. image-acquisition rig I. Image Acquisition - Rigs

12 I. Image Acquisition - Results
Result image from the Wildes et al. rig: the iris is captured as part of a larger image that also contains data derived from the immediately surrounding eye region.

13 Discussion
In common:
- Easy for a human operator to master
- Use video-rate capture
Differences:
- Illumination: the Daugman system makes use of an LED-based point light source in conjunction with a standard video camera, whereas the Wildes et al. system makes use of a diffuse source and polarization in conjunction with a low-light-level camera.
- Operator self-positioning: the Daugman system provides the operator with live video feedback, whereas the Wildes et al. system provides a reticle to aid the operator in positioning.

14 II. Iris Localization
Purpose: to localize that portion of the acquired image that corresponds to an iris. In particular, it is necessary to localize the portion of the image derived from inside the limbus (the border between the sclera and the iris) and outside the pupil.
Desired characteristics of iris localization:
- Sensitive to a wide range of edge contrast
- Robust to irregular borders
- Capable of dealing with variable occlusions

15 II. Iris Localization
The Daugman system fits the circular contours via gradient ascent on the parameters so as to maximize
$\max_{(r, x_c, y_c)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r, x_c, y_c} \frac{I(x, y)}{2\pi r}\, ds \right|$
where $G_\sigma(r)$ is a radial Gaussian smoothing function, and the circular contours (for the limbic and pupillary boundaries) are parameterized by center location $(x_c, y_c)$ and radius $r$ (an active-contour fitting method).
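As a rough illustration of this boundary search (not Daugman's actual implementation), the sketch below fixes a candidate center and scans radii, smoothing the radial derivative of the circular intensity integrals with a radial Gaussian; a full search would also sweep candidate centers. Function names and parameter values are illustrative assumptions.

```python
# Sketch of a Daugman-style circular boundary search (integro-differential operator).
# Assumes a grayscale image as a 2D NumPy array; names and parameters are illustrative.
import numpy as np

def circular_integral(image, xc, yc, r, n_samples=64):
    """Mean intensity along a circle of radius r centered at (xc, yc)."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip(np.round(xc + r * np.cos(thetas)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(yc + r * np.sin(thetas)).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs].mean()

def daugman_operator(image, xc, yc, radii, sigma=2.0):
    """Return the radius that maximizes the Gaussian-smoothed radial derivative
    of the circular intensity integrals (the blurred d/dr of the contour integral)."""
    integrals = np.array([circular_integral(image, xc, yc, r) for r in radii])
    deriv = np.gradient(integrals)                      # d/dr of the contour integrals
    kernel = np.exp(-0.5 * (np.arange(-3, 4) / sigma) ** 2)
    kernel /= kernel.sum()                              # radial Gaussian G_sigma(r)
    smoothed = np.convolve(deriv, kernel, mode="same")  # G_sigma(r) * d/dr ...
    best = int(np.argmax(np.abs(smoothed)))
    return radii[best], float(np.abs(smoothed[best]))
```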

16 II. Iris Localization
The Wildes et al. system performs its contour fitting in two steps (a histogram-based approach).
First, the image intensity information is converted into a binary edge map by thresholding the magnitude of the gradient of a Gaussian-smoothed image, i.e. marking points where $|\nabla G(x, y) * I(x, y)| \ge$ threshold, with $G(x, y) = \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma^2}\right)$ a two-dimensional Gaussian with center $(x_0, y_0)$ and standard deviation $\sigma$.
Second, the edge points vote to instantiate particular contour parameter values.

17 The voting procedure of the Wildes et al. system is realized via Hough transforms on parametric definitions of the iris boundary contours. II. Iris Localization
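The slide leaves the voting details to the paper; the sketch below is a minimal circular Hough transform over a thresholded edge map, not the Wildes et al. implementation. It assumes a boolean edge map (e.g., from thresholded gradient magnitudes of a Gaussian-smoothed image) and illustrative parameter choices.

```python
# Minimal circular Hough transform: edge points vote for circle parameters (xc, yc, r).
import numpy as np

def hough_circles(edge_map, radii):
    """Accumulate votes for circle centers at each candidate radius and return
    the best-scoring (xc, yc, r). edge_map is a boolean 2D array."""
    h, w = edge_map.shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    ys, xs = np.nonzero(edge_map)
    thetas = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)
    for ri, r in enumerate(radii):
        # Each edge point votes for all centers lying at distance r from it.
        for t in thetas:
            xc = np.round(xs - r * np.cos(t)).astype(int)
            yc = np.round(ys - r * np.sin(t)).astype(int)
            ok = (xc >= 0) & (xc < w) & (yc >= 0) & (yc < h)
            np.add.at(acc[ri], (yc[ok], xc[ok]), 1)
    ri, yc, xc = np.unravel_index(np.argmax(acc), acc.shape)
    return int(xc), int(yc), radii[ri]
```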

18 Illustrative Results of Iris Localization
Obtained using the Wildes et al. system. Only that portion of the image below the upper eyelid and above the lower eyelid should be included.

19 Discussion
Both approaches are likely to encounter difficulties if required to deal with images that contain broader regions of the surrounding face than the immediate eye region.
Differences:
- the active-contour approach avoids the inevitable thresholding involved in generating a binary edge map
- the histogram-based approach to model fitting should avoid problems with local minima that the active contour model's gradient-descent procedure might experience

20 III. Pattern Matching
Four steps:
1) bringing the newly acquired iris pattern into spatial alignment with a candidate database entry;
2) choosing a representation of the aligned iris patterns that makes their distinctive patterns apparent;
3) evaluating the goodness of match between the newly acquired and database representations;
4) deciding if the newly acquired data and the database entry were derived from the same iris, based on the goodness of match.

21 III. Pattern Matching - Alignment
Purpose: to establish a precise correspondence between characteristic structures across the two images. Both of the systems under discussion compensate for image shift, scaling, and rotation. For both systems, iris localization is charged with isolating an iris in a larger acquired image and thereby accomplishes alignment for image shift.

22 III. Pattern Matching - Alignment
The Daugman system uses radial scaling to compensate for overall size as well as a simple model of pupil variation based on linear stretching. It maps Cartesian image coordinates $(x, y)$ to dimensionless polar $(r, \theta)$ image coordinates according to $x(r, \theta) = (1 - r)\,x_p(\theta) + r\,x_s(\theta)$ and $y(r, \theta) = (1 - r)\,y_p(\theta) + r\,y_s(\theta)$, where $(x_p(\theta), y_p(\theta))$ and $(x_s(\theta), y_s(\theta))$ are the pupillary- and limbic-boundary points along the direction $\theta$.
The Wildes et al. system uses an image-registration technique to compensate for both scaling and rotation. The mapping function $(u(x, y), v(x, y))$ is chosen to minimize $\int_x \int_y \big(I_d(x, y) - I_a(x - u, y - v)\big)^2\, dx\, dy$, while being constrained to capture a similarity transformation of image coordinates $(x, y)$ to $(x', y')$, i.e. $(x', y') = (x, y) - s R(\phi)(x, y)$, with scale factor $s$ and rotation matrix $R(\phi)$.
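A minimal sketch of the Daugman-style polar normalization described above, assuming the pupillary and limbic boundaries have already been localized as circles; nearest-neighbour sampling is used for brevity and all names are illustrative.

```python
# Unwrap the iris annulus into a dimensionless polar (r, theta) grid.
import numpy as np

def to_polar(image, pupil, limbus, n_r=64, n_theta=256):
    """Map the iris region to polar coordinates with r in [0, 1].
    pupil and limbus are (xc, yc, radius) circles."""
    xp, yp, rp = pupil
    xl, yl, rl = limbus
    rows = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False):
        # Boundary points on the pupillary and limbic contours for this angle.
        x_p, y_p = xp + rp * np.cos(theta), yp + rp * np.sin(theta)
        x_s, y_s = xl + rl * np.cos(theta), yl + rl * np.sin(theta)
        r = np.linspace(0.0, 1.0, n_r)
        # x(r, theta) = (1 - r) x_p(theta) + r x_s(theta), likewise for y.
        x = (1.0 - r) * x_p + r * x_s
        y = (1.0 - r) * y_p + r * y_s
        xi = np.clip(np.round(x).astype(int), 0, image.shape[1] - 1)
        yi = np.clip(np.round(y).astype(int), 0, image.shape[0] - 1)
        rows.append(image[yi, xi])
    return np.stack(rows, axis=1)   # shape (n_r, n_theta)
```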

23 III. Pattern Matching - Alignment
The two methods for establishing correspondences between acquired and database iris images seem to be adequate for controlled assessment scenarios.
Improvements:
- more sophisticated methods may prove to be necessary in more relaxed scenarios
- more complicated global geometric compensations will be necessary if full perspective distortions (e.g., foreshortening) become significant

24 III. Pattern Matching - Representation
The Daugman system uses demodulation with complex-valued 2D Gabor wavelets to encode the phase sequence of the iris pattern into an "IrisCode".

25 III. Pattern Matching - Representation
In implementation, the Gabor filtering is performed via a relaxation algorithm, with quantization of the recovered phase information yielding the final representation. (Figure: pictorial example of one IrisCode.)
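A toy sketch of the phase-quantization idea (two bits from the signs of the real and imaginary parts of a complex Gabor response), applied to the polar-unwrapped iris. This is not Daugman's relaxation algorithm; the single filter and its wavelength and sigma values are illustrative assumptions.

```python
# Quantize the phase of a complex 2D Gabor response into two bits per location.
import numpy as np

def gabor_phase_bits(polar_iris, wavelength=16.0, sigma=6.0):
    """Return a boolean array of shape (2, h, w): signs of the real and
    imaginary parts of the Gabor-filtered polar iris image."""
    h, w = polar_iris.shape
    y, x = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))   # Gaussian envelope
    carrier = np.exp(1j * 2.0 * np.pi * x / wavelength)          # modulation along theta
    gabor = envelope * carrier
    # Frequency-domain filtering (circular convolution) for brevity.
    response = np.fft.ifft2(np.fft.fft2(polar_iris) * np.fft.fft2(np.fft.ifftshift(gabor)))
    return np.stack([response.real > 0, response.imag > 0], axis=0)
```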

26 III. Pattern Matching - Representation
The Wildes et al. system makes use of an isotropic bandpass decomposition derived from application of Laplacian of Gaussian filters, $\nabla^2 G = -\frac{1}{\pi\sigma^4}\left(1 - \frac{\rho^2}{2\sigma^2}\right) e^{-\rho^2 / 2\sigma^2}$, to the image data, with $\sigma$ the standard deviation of the Gaussian and $\rho$ the radial distance of a point from the filter's center. In practice, the filtered image is realized as a Laplacian pyramid. This representation is defined procedurally in terms of a cascade of small Gaussian-like filters.
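A compact sketch of a Laplacian pyramid built with the classic five-tap Gaussian-like kernel; the filter weights, number of levels, and nearest-neighbour expansion are simplifying assumptions rather than the exact Wildes et al. procedure.

```python
# Laplacian pyramid: each level is a band-pass difference of Gaussian-blurred images.
import numpy as np

def reduce_once(image):
    """Blur with a separable five-tap kernel and subsample by 2 (one REDUCE step)."""
    w = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    blurred = np.apply_along_axis(lambda row: np.convolve(row, w, mode="same"), 1, image)
    blurred = np.apply_along_axis(lambda col: np.convolve(col, w, mode="same"), 0, blurred)
    return blurred[::2, ::2]

def laplacian_pyramid(image, levels=4):
    """Each band-pass level is the current Gaussian level minus the upsampled next
    coarser level; this approximates Laplacian of Gaussian filtering at several scales."""
    pyramid, current = [], image.astype(float)
    for _ in range(levels):
        coarser = reduce_once(current)
        expanded = np.kron(coarser, np.ones((2, 2)))[:current.shape[0], :current.shape[1]]
        pyramid.append(current - expanded)
        current = coarser
    pyramid.append(current)   # low-pass residual
    return pyramid
```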

27 III. Pattern Matching - Representation
Result: multiscale representation for iris pattern matching, obtained using the Wildes et al. system. Distinctive features of the iris are manifest across a range of spatial scales.

28 IV. Pattern Matching - Goodness of Match
The Daugman system computes the normalized Hamming distance as $HD = \frac{1}{N} \sum_{j=1}^{N} A_j \oplus B_j$, where $A_j$ and $B_j$ are the j-th bits of the two IrisCodes and $\oplus$ denotes exclusive-OR. The result of this computation is then used as the goodness of match, with smaller values indicating better matches.
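A minimal sketch of the normalized Hamming distance between two boolean iris codes; the optional occlusion masks are an assumption beyond what the slide states.

```python
# Fraction of disagreeing bits between two boolean iris codes.
import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Normalized Hamming distance, optionally restricted to bits that are
    valid (unoccluded) in both codes. Smaller values mean better matches."""
    disagree = np.logical_xor(code_a, code_b)
    if mask_a is not None and mask_b is not None:
        valid = np.logical_and(mask_a, mask_b)
        return float(disagree[valid].mean())
    return float(disagree.mean())
```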

29 The Wildes et al. system employs normalized correlation between the acquired and data base representations. IV. Pattern Matching - Decision
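A minimal sketch of normalized correlation between two image blocks (zero-mean, unit-variance correlation); in the Wildes et al. system this would be computed blockwise over each band of the Laplacian pyramid, which the sketch leaves out.

```python
# Normalized correlation between two equal-sized image blocks; +1 is a perfect match.
import numpy as np

def normalized_correlation(block_a, block_b, eps=1e-12):
    """Correlation after removing each block's mean and dividing by its
    standard deviation (eps guards against division by zero)."""
    a = (block_a - block_a.mean()) / (block_a.std() + eps)
    b = (block_b - block_b.mean()) / (block_b.std() + eps)
    return float((a * b).mean())
```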

30 IV. Pattern Matching - Decision For the Daugman system, this amounts to choosing a separation point in the space of (normalized) Hamming distances between iris representations. In order to calculate the cross-over point, sample populations of imposters and authentics were each fit with parametrically defined distributions.

31 For the Wildes et al. system, the decision-making process must combine the four goodness-of-match measurements that are calculated by the previous stage of processing (i.e., one for each pass band in the Laplacian pyramid representation) into a single accept/reject judgment. IV. Pattern Matching - Decision
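The slide does not spell out the combination rule; one simple sketch, assuming a learned weight vector and threshold (in the spirit of a linear discriminant), is shown below. The weights, threshold, and function name are hypothetical.

```python
# Combine per-band goodness-of-match scores into a single accept/reject decision.
import numpy as np

def combine_band_scores(scores, weights, threshold):
    """Weighted sum of the per-band scores compared against a threshold;
    weights and threshold would be learned from authentic and imposter samples."""
    value = float(np.dot(weights, scores))
    return value >= threshold, value

# Hypothetical usage with four per-band normalized correlations:
accept, score = combine_band_scores([0.92, 0.88, 0.81, 0.75],
                                    weights=[0.4, 0.3, 0.2, 0.1],
                                    threshold=0.8)
```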

32 Systems and Performance - The Daugman iris-recognition system
Both the enrollment and verification modes take under 1 s to complete.
- Empirical test 1: 592 irises from 323 persons; the system exhibited no false accepts and no false rejects.
- Empirical test 2:
  - Phase 1: 199 irises from 122 persons, 878 attempts in identification mode over 8 days; no false accepts and 89 false rejects (of 47 retries, 16 were still rejected).
  - Phase 2: 96 irises (among the 199) with 403 entries for identification; no false accepts and no false rejects.

33 Systems and Performance - The Wildes et al. iris-recognition system
Both the enrollment and verification modes require approximately 10 s to complete.
- Only one empirical test: 60 different irises with 10 images each (5 at the beginning and 5 about one month later), from 40 persons; no false accepts and no false rejects.

34 Questions?

PART II: An Iris Biometric System for Public and Personal Use
M. Negin et al., "An Iris Biometric System for Public and Personal Use," IEEE Computer, pp. 70-75, February 2000.
CS 790Q Biometrics

36 Iris identification process The system captures a digital image of one eye, encodes its iris pattern, then matches that file against the file stored in the database for that individual.

37 The public-use system The public-use multiple-camera system for correctly positioning and imaging a subject’s iris. Note: wide-field-of-view (WFOV) & narrow-field-of-view (NFOV) camera

38 The public-use optical platform (a) left and right illuminator pods, gaze director, and optical filter (b) a solid model of the platform’s internal components.

39 The personal-use system
The user manually positions the camera three to four inches in front of the eye, making sure that the device's LED centers within the aperture that superimposes the user's line of sight and the camera's optical axis.

40 Identification Performance Verification distributions of authentic results (in brown) and imposter results (in green).

41 Field Trial Experience
The first pilot program, with the Nationwide Building Society in Swindon, England, ran for six months and included more than 1,000 participants before the system went into regular service during the fourth quarter. The field-trial experience has been very positive:
- 91 percent prefer iris identification to a PIN (personal identification number) or signature,
- 94 percent would recommend iris identification to friends and family,
- 94 percent were comfortable or very comfortable using the system.
The survey also found nearly 100 percent approval in three areas of crucial importance to consumers: reliability, security, and acceptability.

42 Thank You. Questions?