Automated Iris Recognition Technology & Iris Biometric System
CS 790Q Biometrics
Instructor: Dr. G. Bebis
Presented by Chang Jia, Dec 9th, 2005
2 Overview
The Iris as a Biometric:
The iris is an overt body that is available for remote assessment with the aid of a machine vision system, which makes automated iris recognition possible.
Iris recognition technology combines computer vision, pattern recognition, statistical inference, and optics.
The spatial patterns that are apparent in the human iris are highly distinctive to an individual, a claim supported both by clinical observations and by developmental biology.
3 Overview
Figures: the structure of the human eye; the structure of the iris seen in a transverse section; the structure of the iris seen in a frontal section.
4 Overview
Its suitability as an exceptionally accurate biometric derives from:
- its extremely data-rich physical structure
- genetic independence: no two eyes are the same
- patterns that are apparently stable throughout life
- physical protection by a transparent window (the cornea); as an internal organ of the eye it is highly protected
- external visibility, so it is noninvasive: patterns can be imaged from a distance
5 Overview
The disadvantages of using the iris as a biometric measurement are:
- Small target (1 cm) to acquire from a distance (about 1 m)
- Moving target
- Located behind a curved, wet, reflecting surface
- Obscured by eyelashes, lenses, and reflections
- Partially occluded by eyelids, often drooping
- Deforms non-elastically as the pupil changes size
- Illumination should not be visible or bright
PART I: Iris Recognition: An Emerging Biometric Technology
R. Wildes, "Iris Recognition: An Emerging Biometric Technology", Proceedings of the IEEE, vol. 85, no. 9, pp. 1348-1363, 1997.
CS 790Q Biometrics
7 Outline
Technical Issues **
- Image Acquisition
- Iris Localization
- Pattern Matching
Systems and Performance
** (Throughout the discussion in this paper, the iris-recognition systems of Daugman and Wildes et al. will be used to provide illustrations.)
8 Technical Issues
Figure: schematic diagram of iris recognition.
9 I. Image Acquisition
Why important? One of the major challenges of automated iris recognition is to capture a high-quality image of the iris while remaining noninvasive to the human operator.
Concerns for the image-acquisition rig:
- Obtain images with sufficient resolution and sharpness
- Achieve good contrast in the interior iris pattern with proper illumination
- Keep the iris well centered without unduly constraining the operator
- Eliminate artifacts as much as possible
10 The Daugman image-acquisition rig I. Image Acquisition - Rigs
11 The Wildes et al. image-acquisition rig I. Image Acquisition - Rigs
12 I. Image Acquisition - Results
Result image from the Wildes et al. rig: it captures the iris as part of a larger image that also contains data derived from the immediately surrounding eye region.
13 Discussion
In common:
- Easy for a human operator to master
- Use video-rate capture
Differences:
Illumination
- The Daugman system makes use of an LED-based point light source in conjunction with a standard video camera.
- The Wildes et al. system makes use of a diffuse source and polarization in conjunction with a low-light-level camera.
Operator self-positioning
- The Daugman system provides the operator with live video feedback.
- The Wildes et al. system provides a reticle to aid the operator in positioning.
14 II. Iris Localization
Purpose: to localize that portion of the acquired image that corresponds to an iris.
In particular, it is necessary to localize that portion of the image derived from inside the limbus (the border between the sclera and the iris) and outside the pupil.
Desired characteristics of iris localization:
- Sensitive to a wide range of edge contrast
- Robust to irregular borders
- Capable of dealing with variable occlusions
15 II. Iris Localization
The Daugman system fits the circular contours via gradient ascent on the parameters so as to maximize
$$\max_{(r,\,x_c,\,y_c)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r,\,x_c,\,y_c} \frac{I(x,y)}{2\pi r}\, ds \right|,$$
where $G_\sigma(r)$ is a radial Gaussian of scale $\sigma$, $*$ denotes convolution, and the circular contours (for the limbic and pupillary boundaries) are parameterized by center location $(x_c, y_c)$ and radius $r$ (an active-contour fitting method).
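To make the search concrete, here is a minimal numerical sketch of this integro-differential boundary finder, assuming a grayscale image held in a NumPy array; the sampling density, smoothing scale, and exhaustive center/radius grid are illustrative simplifications, not the parameters of Daugman's implementation.

```python
import numpy as np

def circle_mean(image, xc, yc, r, n_samples=64):
    """Mean image intensity along the circle of radius r centered at (xc, yc)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip(np.round(xc + r * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(yc + r * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
    return float(image[ys, xs].mean())

def find_circular_boundary(image, centers, radii, sigma=2.0):
    """Return the (xc, yc, r) maximizing |G_sigma(r) * d/dr of the circular mean intensity|."""
    radii = np.asarray(radii, dtype=float)
    half = int(3 * sigma)
    kernel = np.exp(-np.arange(-half, half + 1) ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()

    best, best_score = None, -np.inf
    for xc, yc in centers:
        means = np.array([circle_mean(image, xc, yc, r) for r in radii])
        deriv = np.gradient(means, radii)                    # radial derivative of the contour integral
        smoothed = np.convolve(deriv, kernel, mode="same")   # G_sigma(r) * derivative
        idx = int(np.argmax(np.abs(smoothed)))
        if abs(smoothed[idx]) > best_score:
            best_score, best = abs(smoothed[idx]), (xc, yc, float(radii[idx]))
    return best
```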
16 II. Iris Localization
The Wildes et al. system performs its contour fitting in two steps (a histogram-based approach).
First, the image intensity information is converted into a binary edge-map by thresholding the magnitude of the image intensity gradient,
$$\left| \nabla G(x, y) * I(x, y) \right|,$$
where $\nabla \equiv (\partial/\partial x, \partial/\partial y)$ and
$$G(x, y) = \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma^2}\right)$$
is a two-dimensional Gaussian with center $(x_0, y_0)$ and standard deviation $\sigma$.
Second, the edge points vote to instantiate particular contour parameter values.
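A minimal sketch of the edge-map step under those definitions, using SciPy's Gaussian smoothing; the scale and threshold values are illustrative assumptions, and any directional weighting of the gradients is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def binary_edge_map(image, sigma=1.5, threshold=20.0):
    """Threshold |grad(G * I)|: Gaussian-smooth the image, then keep strong gradients."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    return np.hypot(gx, gy) >= threshold
```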
17 The voting procedure of the Wildes et al. system is realized via Hough transforms on parametric definitions of the iris boundary contours. II. Iris Localization
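A compact sketch of such Hough-transform voting for circular boundary parameters over a binary edge-map; the accumulator layout and angular sampling are illustrative choices.

```python
import numpy as np

def hough_circles(edge_map, radii, n_angles=60):
    """Each edge point votes for every center lying at distance r from it;
    the boundary estimate is the (xc, yc, r) cell that collects the most votes."""
    h, w = edge_map.shape
    ys, xs = np.nonzero(edge_map)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    accumulator = np.zeros((len(radii), h, w), dtype=np.int32)

    for k, r in enumerate(radii):
        for a in angles:
            xc = np.round(xs - r * np.cos(a)).astype(int)
            yc = np.round(ys - r * np.sin(a)).astype(int)
            ok = (xc >= 0) & (xc < w) & (yc >= 0) & (yc < h)
            np.add.at(accumulator[k], (yc[ok], xc[ok]), 1)

    k, yc, xc = np.unravel_index(int(np.argmax(accumulator)), accumulator.shape)
    return xc, yc, radii[k]
```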
18 Illustrative Results of Iris Localization
Obtained by using the Wildes et al. system; only that portion of the image below the upper eyelid and above the lower eyelid is included.
19 Discussion
Both approaches are likely to encounter difficulties if required to deal with images that contain broader regions of the surrounding face than the immediate eye region.
Differences:
- The active contour approach avoids the inevitable thresholding involved in generating a binary edge-map.
- The histogram-based approach to model fitting should avoid problems with local minima that the active contour model's gradient descent procedure might experience.
20 III. Pattern Matching
Four steps:
1) bringing the newly acquired iris pattern into spatial alignment with a candidate data base entry;
2) choosing a representation of the aligned iris patterns that makes their distinctive patterns apparent;
3) evaluating the goodness of match between the newly acquired and data base representations;
4) deciding if the newly acquired data and the data base entry were derived from the same iris, based on the goodness of match.
21 III. Pattern Matching - Alignment
Purpose: to establish a precise correspondence between characteristic structures across the two images.
Both of the systems under discussion compensate for image shift, scaling, and rotation.
For both systems, iris localization is charged with isolating an iris in a larger acquired image and thereby accomplishes alignment for image shift.
22 III. Pattern Matching - Alignment
The Daugman system uses radial scaling to compensate for overall size, as well as a simple model of pupil variation based on linear stretching. It maps Cartesian image coordinates (x, y) to dimensionless polar (r, θ) image coordinates according to
$$x(r, \theta) = (1 - r)\, x_p(\theta) + r\, x_s(\theta), \qquad y(r, \theta) = (1 - r)\, y_p(\theta) + r\, y_s(\theta),$$
where $(x_p(\theta), y_p(\theta))$ and $(x_s(\theta), y_s(\theta))$ are the points on the pupillary and limbic boundaries in the direction $\theta$.
The Wildes et al. system uses an image-registration technique to compensate for both scaling and rotation. The mapping function $(u(x, y), v(x, y))$ is chosen to minimize
$$\int_x \int_y \bigl( I_d(x, y) - I_a(x - u, y - v) \bigr)^2 \, dx\, dy,$$
where $I_d$ and $I_a$ are the data base and acquired images, while being constrained to capture a similarity transformation of image coordinates (x, y) to (x', y'),
$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix} - s\, R(\phi) \begin{pmatrix} x \\ y \end{pmatrix},$$
with $s$ a scaling factor and $R(\phi)$ a matrix representing rotation by $\phi$.
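A sketch of this polar "unwrapping", assuming the pupillary and limbic boundaries have already been localized as circles; the grid sizes and nearest-neighbor sampling are simplifications for illustration.

```python
import numpy as np

def unwrap_iris(image, pupil, limbus, n_radial=64, n_angular=256):
    """Sample the iris onto a dimensionless (r, theta) grid between the two boundaries.

    pupil and limbus are (xc, yc, radius) circles; each sample lies on the segment
    from the pupillary boundary to the limbic boundary in direction theta, so the
    result is largely invariant to pupil dilation and overall image scale.
    """
    xp, yp, rp = pupil
    xl, yl, rl = limbus
    theta = np.linspace(0.0, 2.0 * np.pi, n_angular, endpoint=False)
    r = np.linspace(0.0, 1.0, n_radial)

    # Boundary points in direction theta for the pupillary and limbic circles.
    x_p = xp + rp * np.cos(theta)
    y_p = yp + rp * np.sin(theta)
    x_l = xl + rl * np.cos(theta)
    y_l = yl + rl * np.sin(theta)

    # x(r, theta) = (1 - r) x_p(theta) + r x_l(theta), and likewise for y.
    x = (1.0 - r[:, None]) * x_p[None, :] + r[:, None] * x_l[None, :]
    y = (1.0 - r[:, None]) * y_p[None, :] + r[:, None] * y_l[None, :]

    xi = np.clip(np.round(x).astype(int), 0, image.shape[1] - 1)
    yi = np.clip(np.round(y).astype(int), 0, image.shape[0] - 1)
    return image[yi, xi]  # shape (n_radial, n_angular)
```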
23 III. Pattern Matching - Alignment
The two methods for establishing correspondences between acquired and data base iris images seem to be adequate for controlled assessment scenarios.
Improvements:
- More sophisticated methods may prove to be necessary in more relaxed scenarios.
- More complicated global geometric compensations will be necessary if full perspective distortions (e.g., foreshortening) become significant.
24 III. Pattern Matching - Representation
The Daugman system uses demodulation with complex-valued 2D Gabor wavelets to encode the phase sequence of the iris pattern into an "IrisCode".
25 In implementation, the Gabor filtering is performed via a relaxation algorithm, with quantization of the recovered phase information yielding the final representation. III. Pattern Matching - Representation Pictorial Examples of one IrisCode
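In that spirit, the following is a much-reduced sketch of phase quantization with a single one-dimensional complex Gabor filter applied to each row of an unwrapped iris image; the actual IrisCode uses two-dimensional, multi-scale wavelets and occlusion masking, so the filter parameters and bit layout here are assumptions for illustration only.

```python
import numpy as np

def gabor_phase_bits(polar_iris, wavelength=16.0, sigma=6.0):
    """Quantize the phase of a 1D complex Gabor response into two bits per sample."""
    n = polar_iris.shape[1]
    t = np.arange(n) - n // 2
    gabor = np.exp(-t ** 2 / (2.0 * sigma ** 2)) * np.exp(2j * np.pi * t / wavelength)

    bits = []
    for row in polar_iris.astype(float):
        row = row - row.mean()                        # remove DC before filtering
        response = np.convolve(row, gabor, mode="same")
        bits.append(np.real(response) > 0.0)          # sign of the real part
        bits.append(np.imag(response) > 0.0)          # sign of the imaginary part
    return np.array(bits, dtype=np.uint8)
```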
26 III. Pattern Matching - Representation
The Wildes et al. system makes use of an isotropic bandpass decomposition derived from application of Laplacian of Gaussian filters to the image data,
$$\nabla^2 G = -\frac{1}{\pi\sigma^4}\left(1 - \frac{\rho^2}{2\sigma^2}\right) e^{-\rho^2 / (2\sigma^2)},$$
with $\sigma$ the standard deviation of the Gaussian and $\rho$ the radial distance of a point from the filter's center.
In practice, the filtered image is realized as a Laplacian pyramid. This representation is defined procedurally in terms of a cascade of small Gaussian-like filters.
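A minimal sketch of such a pyramid built from a cascade of small Gaussian-like filters; the 5-tap generating kernel and number of levels are conventional choices used here for illustration, not necessarily those of the Wildes et al. system.

```python
import numpy as np
from scipy.ndimage import convolve1d

KERNEL = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # small Gaussian-like generating kernel

def reduce_level(image):
    """Low-pass filter separably with KERNEL, then subsample by two."""
    blurred = convolve1d(convolve1d(image, KERNEL, axis=0), KERNEL, axis=1)
    return blurred[::2, ::2]

def expand_level(image, shape):
    """Upsample by two (zero insertion) and interpolate with the same kernel."""
    up = np.zeros(shape)
    up[::2, ::2] = image
    return 4.0 * convolve1d(convolve1d(up, KERNEL, axis=0), KERNEL, axis=1)

def laplacian_pyramid(image, levels=4):
    """Each band is a Gaussian level minus the expanded next-coarser level."""
    image = image.astype(float)
    bands = []
    for _ in range(levels - 1):
        smaller = reduce_level(image)
        bands.append(image - expand_level(smaller, image.shape))
        image = smaller
    bands.append(image)  # coarsest low-pass residual
    return bands
```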
27 Result: Multiscale representation for iris pattern matching. Distinctive features of the iris are manifest across a range of spatial scales. III. Pattern Matching - Representation Obtained by using the Wildes et al. system
28 IV. Pattern Matching - Goodness of Match
The Daugman system computes the normalized Hamming distance
$$HD = \frac{1}{N} \sum_{j=1}^{N} A_j \oplus B_j,$$
where $A_j$ and $B_j$ are the $j$-th bits of the two IrisCodes, $\oplus$ denotes exclusive-OR, and $N$ is the number of bits compared.
The result of this computation is then used as the goodness of match, with smaller values indicating better matches.
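A direct sketch of this bitwise comparison; the optional occlusion masks go beyond what the slide shows and are included only as an assumption about practical use.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Fraction of disagreeing bits; if masks are supplied, only bits valid in both codes count."""
    code_a = np.asarray(code_a, dtype=bool)
    code_b = np.asarray(code_b, dtype=bool)
    if mask_a is not None and mask_b is not None:
        valid = np.asarray(mask_a, dtype=bool) & np.asarray(mask_b, dtype=bool)
    else:
        valid = np.ones_like(code_a)
    disagreements = np.logical_xor(code_a, code_b) & valid
    return float(disagreements.sum()) / float(valid.sum())
```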
29 The Wildes et al. system employs normalized correlation between the acquired and data base representations. IV. Pattern Matching - Decision
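As a sketch, normalized correlation between two corresponding band-pass images can be computed as below; it is evaluated here over whole bands for brevity, which is a simplification of how a practical matcher would apply it.

```python
import numpy as np

def normalized_correlation(band_a, band_b):
    """Zero-mean, unit-variance correlation between two filtered iris bands."""
    a = band_a.astype(float).ravel()
    b = band_b.astype(float).ravel()
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))
```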
30 IV. Pattern Matching - Decision For the Daugman system, this amounts to choosing a separation point in the space of (normalized) Hamming distances between iris representations. In order to calculate the cross-over point, sample populations of imposters and authentics were each fit with parametrically defined distributions.
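A sketch of locating such a separation point from sample score populations; the normal fits below are purely illustrative stand-ins for the parametrically defined distributions the slide mentions.

```python
import math
import numpy as np

def crossover_threshold(authentic_hd, imposter_hd):
    """Scan Hamming-distance thresholds for the point where the estimated
    false-reject rate (authentics above t) equals the false-accept rate
    (imposters below t), using normal fits to each sample population."""
    mu_a, sd_a = float(np.mean(authentic_hd)), float(np.std(authentic_hd))
    mu_i, sd_i = float(np.mean(imposter_hd)), float(np.std(imposter_hd))

    def cdf(x, mu, sd):
        return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

    ts = np.linspace(0.0, 1.0, 1001)
    frr = np.array([1.0 - cdf(t, mu_a, sd_a) for t in ts])  # authentics rejected at t
    far = np.array([cdf(t, mu_i, sd_i) for t in ts])        # imposters accepted at t
    return float(ts[np.argmin(np.abs(frr - far))])
```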
31 For the Wildes et al. system, the decision-making process must combine the four goodness-of-match measurements that are calculated by the previous stage of processing (i.e., one for each pass band in the Laplacian pyramid representation) into a single accept/reject judgment. IV. Pattern Matching - Decision
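One way to fold the four per-band measurements into a single judgment is a linear discriminant fit on sample authentic and imposter score vectors, sketched below; the Fisher-style discriminant and the midpoint threshold are assumptions for illustration, not a claim about the exact procedure of the Wildes et al. system.

```python
import numpy as np

def fit_linear_discriminant(authentic_scores, imposter_scores):
    """Fit a direction w separating 4-D goodness-of-match vectors (rows = samples,
    columns = pyramid bands), Fisher-style: w ~ S_w^-1 (mu_authentic - mu_imposter)."""
    a = np.asarray(authentic_scores, dtype=float)
    i = np.asarray(imposter_scores, dtype=float)
    mu_a, mu_i = a.mean(axis=0), i.mean(axis=0)
    s_w = np.cov(a, rowvar=False) + np.cov(i, rowvar=False)  # pooled within-class scatter
    w = np.linalg.solve(s_w, mu_a - mu_i)
    threshold = 0.5 * float(w @ mu_a + w @ mu_i)             # midpoint of projected means
    return w, threshold

def accept(match_scores, w, threshold):
    """Accept if the projected score vector falls on the authentic side of the threshold."""
    return float(np.dot(w, match_scores)) >= threshold
```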
32 Systems and Performance - The Daugman iris-recognition system
Both the enrollment and verification modes take under 1 s to complete.
Empirical test 1: 592 irises from 323 persons; the system exhibited no false accepts and no false rejects.
Empirical test 2:
- Phase 1: 199 irises from 122 persons, 878 attempts in identification mode over 8 days; no false accepts and 89 false rejects (47 retried, with 16 still rejected).
- Phase 2: 96 irises (among the 199) with 403 entries for identification; no false accepts and no false rejects.
33 Systems and Performance - The Wildes et al. iris-recognition system
Both the enrollment and verification modes require approximately 10 s to complete.
Only one empirical test: 60 different irises from 40 persons, with 10 images each (5 at the beginning and 5 about one month later); no false accepts and no false rejects.
34 Questions?
PART II: An Iris Biometric System for Public and Personal Use
M. Negin et al., "An Iris Biometric System for Public and Personal Use", IEEE Computer, pp. 70-75, February 2000.
CS 790Q Biometrics
36 Iris identification process The system captures a digital image of one eye, encodes its iris pattern, then matches that file against the file stored in the database for that individual.
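A schematic of this capture-encode-match flow in verification mode; encode and distance stand for whatever feature extractor and matcher the system uses (for example, the IrisCode and Hamming-distance sketches above), and the threshold value is purely illustrative.

```python
def verify(eye_image, claimed_id, database, encode, distance, threshold=0.32):
    """Encode the presented eye image and compare it with the template enrolled
    under the claimed identity; accept only if the match score is good enough."""
    probe = encode(eye_image)                # iris pattern -> template/code
    template = database.get(claimed_id)      # file stored for that individual
    if template is None:
        return False                         # no enrollment for the claimed identity
    return distance(probe, template) <= threshold
```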
37 The public-use system
The public-use multiple-camera system for correctly positioning and imaging a subject's iris.
Note: wide-field-of-view (WFOV) and narrow-field-of-view (NFOV) cameras.
38 The public-use optical platform (a) left and right illuminator pods, gaze director, and optical filter (b) a solid model of the platform’s internal components.
39 The personal-use system
The user manually positions the camera three to four inches in front of the eye, making sure that the device's LED is centered within the aperture, which superimposes the user's line of sight and the camera's optical axis.
40 Identification Performance Verification distributions of authentic results (in brown) and imposter results (in green).
41 Field Trial Experience
The first pilot program, with the Nationwide Building Society in Swindon, England, ran for six months and included more than 1,000 participants before going into regular service during the fourth quarter of 1998.
The field trial experience has been very positive:
- 91 percent prefer iris identification to a PIN (personal identification number) or signature.
- 94 percent would recommend iris identification to friends and family.
- 94 percent were comfortable or very comfortable using the system.
The survey also found nearly 100 percent approval in three areas of crucial importance to consumers: reliability, security, and acceptability.
42 Thank You. Questions?