Fingerprint Matching (Chapter 4.1-4.3) and "On-Line Fingerprint Verification" by Anil Jain, Fellow, IEEE, Lin Hong, and Ruud Bolle, Fellow, IEEE. Presented by Chris Miles.

Fingerprint Matching
● Compare two given fingerprints T, I
  – Return a degree of similarity (0 -> 1)
  – Or a binary yes/no decision
● T -> template, acquired during enrollment
● I -> input
● Either the input images themselves, or feature vectors (minutiae) extracted from them
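As a concrete reading of this slide, here is a minimal sketch of the verification interface in Python; verify and similarity are hypothetical names (not from the paper), and similarity stands in for any of the matchers on the later slides.

def verify(T, I, similarity, threshold=0.5):
    # similarity is assumed to return a degree of similarity in [0, 1]
    s = similarity(T, I)
    return s, s >= threshold    # (score, binary accept/reject decision)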

Issues Involved with Matching
● "An extremely difficult problem"
● Displacement
● Rotation
● Partial overlap
  – The finger is not completely in the image
● Distortion (non-linear)
  – The skin stretches when pressed down

More Issues
● Pressure and skin condition
  – Pressure, dryness, disease, sweat, dirt, grease, humidity
● Noise
  – e.g. dirt on the sensor
● Feature extraction errors

State of the Art
● Many algorithms match high-quality images well
● The challenge is in low-quality images and partial matches
● The worst 20% of the fingerprints (the low-quality ones) at FVC2000 caused 80% of the false non-matches
● Many of these were correctly matched at FVC2002, though

Approaches
● Correlation-based matching
  – Superimpose the images and compare pixels
● Minutiae-based matching
  – The classical technique, and the most popular
  – Compare extracted minutiae
● Ridge feature-based matching
  – Compare the structures of the ridges
  – "Everything else"

FVC2002

Correlation-based Techniques
● T and I are images
● Sum of squared differences
  – SSD(T, I) = ||T - I||^2 = (T - I)^t (T - I) = ||T||^2 + ||I||^2 - 2 T^t I
  – The pixel-wise difference between the images
● ||T||^2 and ||I||^2 are constant under transformation
● So try to maximize the cross-correlation, which minimizes the difference
  – CC(T, I) = T^t I
  – Can't be used directly because of displacement / rotation
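A minimal sketch of the two measures above, assuming T and I are equally sized grayscale images stored as NumPy arrays (the function names are mine, not the paper's):

import numpy as np

def ssd(T, I):
    # Sum of squared differences: ||T - I||^2
    d = T.astype(float) - I.astype(float)
    return float(np.sum(d * d))

def cross_correlation(T, I):
    # CC(T, I) = T^t I, the sum of pixel-wise products
    return float(np.sum(T.astype(float) * I.astype(float)))

Since ||T||^2 and ||I||^2 do not change under the transformation, maximizing cross_correlation over transformations is equivalent to minimizing ssd.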

Maximizing Correlation
● I^(Δx, Δy, θ)
  – A transformed version of I
  – Rotated around the origin by θ
  – Translated by Δx, Δy
● S(T, I) = max over (Δx, Δy, θ) of CC(T, I^(Δx, Δy, θ))
  – Try them all and take the maximum
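An illustrative "try them all" search, assuming SciPy is available; the angle range, shift range, and step sizes are arbitrary choices, not values from the paper:

import numpy as np
from scipy.ndimage import rotate, shift

def best_correlation(T, I, max_shift=20, step=2, angles=range(-30, 31, 5)):
    # S(T, I) = max over (dx, dy, theta) of CC(T, I^(dx, dy, theta))
    T = T.astype(float)
    best = -np.inf
    for theta in angles:
        I_rot = rotate(I.astype(float), theta, reshape=False)  # rotate about the image center
        for dx in range(-max_shift, max_shift + 1, step):
            for dy in range(-max_shift, max_shift + 1, step):
                I_t = shift(I_rot, (dy, dx))                   # translate
                best = max(best, float(np.sum(T * I_t)))
    return best

Note the slide speaks of rotation around the origin; scipy's rotate works around the image center, which only changes the translation that ends up being best.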

Correlation Results

Doesn't Work
● Non-linear distortion is significant and not accounted for
● Skin condition / pressure cause brightness, contrast, and ridge thickness to vary significantly
● Differential correlation helps
  – Check the difference between the maximum and minimum correlation values
  – Genuine matches show a greater range
● Computationally expensive (try every transformation)

Divide and Conquer
● Identify local regions in the image
  – Pieces of the whole
  – Interesting regions
● Match them individually
  – Sum the local correlations to get a global correlation
  – Check that the transforms found for each region agree
● Consolidate the local results

Computational Improvements
● Correlation theorem
  – Spatial correlation = point-wise multiplication in the Fourier domain
  – T ⊗ I = F^-1(F*(T) × F(I))
  – The resulting image holds, at each point, the correlation for the translation to that point
  – No rotation handled
  – A very large computational improvement
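A short sketch of the correlation theorem using NumPy's FFT; it scores every translation at once but, as the slide says, handles no rotation:

import numpy as np

def correlation_map(T, I):
    # Entry (dy, dx) is the (circular) correlation of T with I translated by (dy, dx)
    F_T = np.fft.fft2(T.astype(float))
    F_I = np.fft.fft2(I.astype(float))
    return np.real(np.fft.ifft2(np.conj(F_T) * F_I))

# best translation-only score: correlation_map(T, I).max()

This computes a circular correlation; zero-padding the images would give the ordinary linear correlation.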

Computational Improvements
● Multi-resolution, hierarchical approaches
● Fourier-Mellin transform
  – Gives rotational invariance
  – But the extra steps reduce accuracy
● Divide and Fourier
  – Partition into local regions
  – Match them against each other in the Fourier domain
  – Overlap the borders of the regions
  – Much faster

Optical Fingerprint Matching
● Uses lenses to derive the Fourier transform of the images optically
● A joint transform correlator performs the matching
● Very expensive
● Complex
● Immature technology

Minutiae-based Methods
● The classical technique
● T, I are feature vectors of minutiae
● A minutia = (x, y, θ)
● Two minutiae match if
  – Their Euclidean distance < r0
  – The difference between their angles < θ0
● r0 and θ0 define the tolerance box
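A direct reading of the tolerance-box rule, assuming each minutia is an (x, y, θ) tuple with θ in degrees; r0 and θ0 here are free parameters, not values from the paper:

import math

def mm(m1, m2, r0=10.0, theta0=15.0):
    # Two minutiae match if they are close in position and in direction
    dist = math.hypot(m1[0] - m2[0], m1[1] - m2[1])
    dtheta = abs(m1[2] - m2[2]) % 360
    dtheta = min(dtheta, 360 - dtheta)   # angles wrap around
    return dist < r0 and dtheta < theta0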

Alignment
● Displacement
● Rotation
● Scale
● Distortion-tolerant transformations
● More transformation types mean a higher degree of freedom for the matcher
  – And therefore more false matches

Formulation
● m''_j = map(m'_j)
  – map applies a geometric transformation
● mm(m''_j, m_i) returns 1 if the two minutiae match
● Matching can then be formulated as
  – maximize over (Δx, Δy, θ, P) of  Σ_{i=1..m} mm(map_{Δx,Δy,θ}(m'_{P(i)}), m_i)
● P is an unknown function that pairs the minutiae
  – It says which minutia in I corresponds to which in T
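For a fixed pairing P and transformation (Δx, Δy, θ), the objective above can be evaluated as below; map_ and score are my names, and mm is the tolerance-box sketch shown earlier:

import math

def map_(m, dx, dy, theta_deg):
    # Rotate a minutia (x, y, direction) about the origin by theta, then translate
    x, y, d = m
    t = math.radians(theta_deg)
    xr = x * math.cos(t) - y * math.sin(t) + dx
    yr = x * math.sin(t) + y * math.cos(t) + dy
    return (xr, yr, d + theta_deg)

def score(T_minutiae, I_minutiae, P, dx, dy, theta):
    # Count template minutiae whose paired input minutia matches after mapping
    return sum(
        mm(map_(I_minutiae[P[i]], dx, dy, theta), T_minutiae[i])
        for i in range(len(T_minutiae))
        if P[i] is not None
    )

The hard part, as the following slides discuss, is that both P and (Δx, Δy, θ) are unknown.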

P
● P pairs the minutiae
● If P(i) = j
  – j is i's mate, the most likely pairing
● If P(i) = null
  – Minutia i from T has no mate in I
● If no i maps to a given j, that j has no mate
● Each minutia has at most one mate
● Finding P is trivial if the alignment is known

Point Pattern Matching
● The alignment is rarely known
● So cast minutiae matching as point pattern matching
  – The angular component adds only a small difference
● Techniques:
  – Relaxation
  – Algebraic and operational research solutions
  – Tree search (pruning)
  – Search (energy minimization)
  – Hough transform

Matching

Relaxation
● Maintain confidence levels between all possible matchings
  – How likely we think minutia a of I matches minutia b of T
● Iteratively:
  – Calculate the consistency of the transformations required by those matches
  – Increase the confidence of pairs that agree with other pairs
  – Decrease the others
● Iterative -> slow
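A very simplified relaxation sketch: confidences c[i][j] for pairing template minutia i with input minutia j are raised when other tentative pairs imply a consistent transformation (here reduced to translation only) and renormalized each round. Everything here, including the tolerance and iteration count, is illustrative rather than the method from any specific paper:

import numpy as np

def relax(T_m, I_m, iters=10, tol=15.0):
    n, m = len(T_m), len(I_m)
    c = np.full((n, m), 1.0 / m)                 # initial confidence levels
    for _ in range(iters):
        new_c = np.zeros_like(c)
        for i in range(n):
            for j in range(m):
                dx = I_m[j][0] - T_m[i][0]       # translation implied by pairing (i, j)
                dy = I_m[j][1] - T_m[i][1]
                support = 0.0
                for k in range(n):
                    if k == i:
                        continue
                    for l in range(m):
                        dxk = I_m[l][0] - T_m[k][0]
                        dyk = I_m[l][1] - T_m[k][1]
                        if abs(dx - dxk) < tol and abs(dy - dyk) < tol:
                            support += c[k, l]   # consistent pairs support (i, j)
                new_c[i, j] = c[i, j] * (1.0 + support)
        c = new_c / new_c.sum(axis=1, keepdims=True)  # renormalize per template minutia
    return c                                     # high entries suggest likely mates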

Algebraic and Operational Research Solutions
● Use algebraic methods
● Require very restrictive assumptions
  – The prints align exactly under an affine transformation
  – N = M
● All minutiae perfectly identified
● Treated as assignment problems
● Bipartite graph matching

Tree Search (Pruning)
● Search the tree of possible matches
  – If A matches B, then C matches F, then D matches K, ...
● More assumptions
  – M = N
  – No outliers -> all minutiae must match

Search (Energy Minimization)
● Cast as a general search problem
● Search towards the optimal set of matches
● Fitness is a function of consistency
● Can use any general search technique
  – Genetic algorithms
  – Simulated annealing

Hough Transform
● Brute-force search over possible pairings / rotations / scalings
● For each m_i in I
  – For each m_j in T
    ● For each θ in the discretized θ's
      – If |θ_i - (θ_j + θ)| < threshold
        ● For each scale in the discretized scales
          – dx, dy = m_i - map_{θ,scale}(m_j)
          – A[dx, dy, θ, scale]++
● The bin of A with the most votes gives the closest transformation
● The pairings can then be found easily
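A runnable sketch of the accumulator loop above, assuming minutiae are (x, y, θ-in-degrees) tuples; the bin sizes, angle range, and tolerances are arbitrary choices:

import math
from collections import defaultdict

def hough_match(I_m, T_m, angles=range(-30, 31, 5), scales=(1.0,),
                angle_tol=10.0, bin_size=10):
    A = defaultdict(int)
    for (xi, yi, ti) in I_m:
        for (xj, yj, tj) in T_m:
            for theta in angles:
                # minutia directions must also agree under this rotation
                dd = abs((tj + theta - ti + 180) % 360 - 180)
                if dd > angle_tol:
                    continue
                r = math.radians(theta)
                for s in scales:
                    # where m_j lands after rotation by theta and scaling by s
                    xr = s * (xj * math.cos(r) - yj * math.sin(r))
                    yr = s * (xj * math.sin(r) + yj * math.cos(r))
                    dx, dy = xi - xr, yi - yr
                    A[(round(dx / bin_size), round(dy / bin_size), theta, s)] += 1
    if not A:
        return None
    return max(A.items(), key=lambda kv: kv[1])   # (best bin, vote count)

The winning bin gives the discretized (dx, dy, θ, scale); re-applying it and using the tolerance-box test then yields the pairings.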

Hough Transform

Improvements
● Vote for neighboring bins / smooth the accumulator, to get more robust answers
● Parallelize on custom hardware
● Hierarchical discretization
● Chang et al. 1997
  – Find the principal pair and, with respect to it, a coarse transformation that matches the most points
  – Calculate the pairing
  – Use the pairing to calculate the exact alignment

Principal Pair
● Brute-force search for the segment (two minutiae) in I that can best be mapped onto a corresponding segment in T
● For each m_i1 in I
  – For each m_j1 in T
    ● Reset A
    ● For each m_i2 in I
      – For each m_j2 in T
        ● θ, S = the transformation required to turn (m_i1, m_i2) into (m_j1, m_j2)
        ● A[θ, S]++
    ● Remember the m_i1, m_j1, θ, and S with the highest A value
● That m_i1, m_j1 is the principal pair, and (θ, S) is the coarse transformation
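An illustrative brute-force version of the loop above; the names and the binning are mine, and the transformation for a segment pair is taken to be the rotation and scale that map the vector m_i1 -> m_i2 onto the vector m_j1 -> m_j2:

import math
from collections import defaultdict

def principal_pair(I_m, T_m, angle_bin=5.0, scale_bin=0.1):
    best = (None, None, None, -1)                 # (m_i1, m_j1, (theta, S), votes)
    for mi1 in I_m:
        for mj1 in T_m:
            A = defaultdict(int)
            for mi2 in I_m:
                if mi2 is mi1:
                    continue
                for mj2 in T_m:
                    if mj2 is mj1:
                        continue
                    vx, vy = mi2[0] - mi1[0], mi2[1] - mi1[1]
                    wx, wy = mj2[0] - mj1[0], mj2[1] - mj1[1]
                    li, lj = math.hypot(vx, vy), math.hypot(wx, wy)
                    if li == 0 or lj == 0:
                        continue
                    theta = math.degrees(math.atan2(wy, wx) - math.atan2(vy, vx))
                    S = lj / li
                    A[(round(theta / angle_bin), round(S / scale_bin))] += 1
            if not A:
                continue
            key, votes = max(A.items(), key=lambda kv: kv[1])
            if votes > best[3]:
                best = (mi1, mj1, (key[0] * angle_bin, key[1] * scale_bin), votes)
    return best                                    # principal pair and coarse transformation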

Segment Matching
● Udupa, Garg, Sharma 2001
● Look for matching segments of similar lengths
● For each candidate pair of segments, determine the transform that maps one onto the other
  – Then try to match the remaining minutiae under that transform
● Check the consistency of the best matches
● The final score combines the number of mated pairs, the fraction of consistent alignments, and the topological correspondence

Minutiae Matching with Pre-alignment
● Idea: pre-align T before storing it in the database
  – Then each I needs to be aligned only once, against the global orientation
  – Reduces computation in identification systems
● Absolute pre-alignment
  – Orient everything in a common direction
  – No reliable way to do this
    ● It is difficult to detect the core, or even the basic orientation
● Relative pre-alignment
  – Align I to each T separately
    ● No computation savings

M82 (FBI)
● Do a coarse absolute pre-alignment
  – Center the image on the core location
  – Orient it so that the ridges to the left and right, on average, face up
● Find the principal pairs
  – Look at the minutiae around the center
  – Find the best matching pair -> the principal pair
  – Calculate the coarse transformation and deformation tensor

Avoiding Alignment
● Ordinary person: "You should go to work." Philosopher: "Why?"
● Intrinsic coordinate system
  – Instead of using a global coordinate system, orient the minutiae with respect to the ridge patterns
  – Minutiae are defined with respect to this intrinsic system
  – Translation / rotation does not change their relative locations
  – The problem is partitioning the image into the local coordinate systems

Ridge Relative Pre-alignment
● Jain, Hong, Bolle 1997
● Store each minutia along with information about the ridge attached to it
  – Oriented along the minutia orientation
  – Normalized by the ridge frequency
● Compare with the other print's ridges until a good match is found
  – Take that pair as the principal pair

Comparing Ridges
● Convert the minutiae in T and I to polar coordinates with respect to the reference minutia
  – The reference minutia is the one whose ridge matched
● Order them into a list
● Check how many insertions / deletions / substitutions are needed to make the lists match
● Variant
  – Match the distances and relative angles of sampled ridge points
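A sketch of this comparison: convert the minutiae to polar coordinates around the reference minutia, sort them by angle, and score the two sequences with an edit distance. The tolerances are arbitrary, and real implementations weight the edit operations rather than simply counting them:

import math

def to_polar(minutiae, ref):
    xr, yr, tr = ref
    out = []
    for (x, y, t) in minutiae:
        dx, dy = x - xr, y - yr
        out.append((math.hypot(dx, dy),                             # radial distance
                    (math.degrees(math.atan2(dy, dx)) - tr) % 360,  # angle w.r.t. reference
                    (t - tr) % 360))                                # relative direction
    return sorted(out, key=lambda p: p[1])

def edit_distance(a, b, r_tol=10.0, a_tol=15.0):
    # Insertions / deletions / substitutions needed to turn list a into list b
    n, m = len(a), len(b)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        D[i][0] = i
    for j in range(m + 1):
        D[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            same = (abs(a[i - 1][0] - b[j - 1][0]) < r_tol and
                    abs((a[i - 1][1] - b[j - 1][1] + 180) % 360 - 180) < a_tol)
            D[i][j] = min(D[i - 1][j] + 1,
                          D[i][j - 1] + 1,
                          D[i - 1][j - 1] + (0 if same else 1))
    return D[n][m]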

Results