By Pushpita Biswas, under the guidance of Prof. S. Mukhopadhyay and Prof. P. K. Biswas.

Why is access security used? Why palm print verification?
1. No need to memorize codes or passwords.
2. More reliable.

- Image acquisition
- Palm positioning
- Feature extraction
- Palm print matching

[System flowchart: Image acquisition → Palm positioning → Feature extraction → Register or verify? Registration stores the line edge map as a model in the database; verification runs palm print matching against the registered model and outputs a decision. Intermediate data: grayscale TIFF file, gray-scale image, line edge map.]

1. Image acquisition
   - An image of the user's hand is taken with a camera and stored as a grayscale TIFF file.
2. Palm positioning
   - Boundary extraction and edge thinning
   - Feature point location
   - Establishment of the coordinate system
   - Sub-image normalization

Boundary extraction and edge thinning:
1. The gradient magnitude of each pixel is computed using a set of Sobel masks that detect horizontal, vertical, and diagonal edges.
2. Adaptive thresholding:
   - Gr: the highest gradient value, taken as the reference
   - Ratio_Gradient: a predetermined constant between 0 and 1
   - T_Gradient: the threshold value, T_Gradient = Gr * Ratio_Gradient
3. Selected pixels are removed from the binary image so that all lines are reduced to a single pixel width.
A sketch of steps 1 and 2 follows.
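A minimal sketch of steps 1 and 2, assuming the grayscale image is held in a 2-D NumPy array; the particular mask set and the Ratio_Gradient value of 0.2 are illustrative choices, not taken from the slides:

```python
import numpy as np
from scipy.ndimage import convolve

def boundary_edges(gray, ratio_gradient=0.2):
    """Gradient magnitude from directional Sobel masks, then adaptive threshold."""
    masks = [
        np.array([[-1, -2, -1], [ 0,  0,  0], [ 1,  2,  1]]),  # horizontal edges
        np.array([[-1,  0,  1], [-2,  0,  2], [-1,  0,  1]]),  # vertical edges
        np.array([[ 0,  1,  2], [-1,  0,  1], [-2, -1,  0]]),  # one diagonal
        np.array([[-2, -1,  0], [-1,  0,  1], [ 0,  1,  2]]),  # other diagonal
    ]
    g = gray.astype(float)
    # Gradient magnitude: strongest response over the four directional masks
    grad = np.max([np.abs(convolve(g, m)) for m in masks], axis=0)
    # Adaptive threshold: T_Gradient = Gr * Ratio_Gradient
    t_gradient = grad.max() * ratio_gradient
    return grad >= t_gradient  # binary edge map; thinning is a separate step
```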

Feature point location
In the boundary image's line pattern, the bottom of each valley is a short curve joining the edges of adjacent fingers. The key points are best represented as the midpoints of those curves:
1. Sort the parallel line pairs so that they are stored in left-to-right order.
2. For each parallel pair Pi in the sorted array, form a V-shaped pair with the right edge of Pi and the left edge of Pi+1 (i = 0..I-2, where I is the total number of parallel pairs).

Establishment of the coordinate system
The x-axis passes through K1 and K3. The y-axis is perpendicular to the x-axis and passes through K2 (see the sketch below).
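A minimal sketch of establishing this coordinate system, assuming the key points K1, K2, K3 are available as (x, y) pairs; the function name and the choice of origin (the foot of the perpendicular from K2 onto the K1-K3 line) are illustrative assumptions:

```python
import numpy as np

def palm_coordinate_system(k1, k2, k3):
    """Origin and unit axes of the palm frame from the three valley key points."""
    k1, k2, k3 = (np.asarray(p, dtype=float) for p in (k1, k2, k3))
    x_axis = (k3 - k1) / np.linalg.norm(k3 - k1)   # unit vector along K1 -> K3
    # Origin: projection of K2 onto the line through K1 and K3,
    # so the y-axis (perpendicular to the x-axis) passes through K2.
    origin = k1 + np.dot(k2 - k1, x_axis) * x_axis
    y_axis = np.array([-x_axis[1], x_axis[0]])     # x-axis rotated by 90 degrees
    return origin, x_axis, y_axis
```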

The rectangle specifications:
1. The distance between the x-axis and the rectangle's nearest side is RefLength * 0.25, where RefLength is the distance between K1 and K3.
2. The sides are parallel to the x-axis and the y-axis.
3. The rectangle is symmetric with respect to the y-axis.
4. The sides have length RefLength.
Scaling and rotation follow (a sketch is given below).
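Continuing the sketch above, the sub-image can be cut out and normalized in one perspective warp; the output size of 128 pixels and the sign convention for the y-axis (pointing toward the palm) are assumptions made for illustration:

```python
import cv2
import numpy as np

def extract_palm_roi(gray, origin, x_axis, y_axis, k1, k3, out_size=128):
    """Crop the RefLength x RefLength palm square and rescale it to out_size."""
    ref_length = float(np.linalg.norm(np.asarray(k3, float) - np.asarray(k1, float)))
    near, half = 0.25 * ref_length, 0.5 * ref_length
    # Corners of the square in palm coordinates (x along K1->K3, y toward the palm)
    palm_corners = [(-half, near), (half, near),
                    (half, near + ref_length), (-half, near + ref_length)]
    # Map palm coordinates back into image coordinates
    src = np.float32([origin + x * x_axis + y * y_axis for x, y in palm_corners])
    dst = np.float32([[0, 0], [out_size - 1, 0],
                      [out_size - 1, out_size - 1], [0, out_size - 1]])
    warp = cv2.getPerspectiveTransform(src, dst)   # handles rotation and scaling
    return cv2.warpPerspective(gray, warp, (out_size, out_size))
```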

3. Feature extraction
- Image preprocessing: a 3x3 averaging mask is used, which smoothes the image and reduces the impact of noise.
- Line detection: a standard Sobel edge detector is used, followed by thresholding on the edge magnitude.
- Image thresholding: the threshold value is calculated on the basis of a percentage of the image area.
- Line thinning: the resulting image contains lines of only a single pixel width.
A sketch of this chain is given below; results follow on the next slides.
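A minimal sketch of the smoothing, Sobel detection, and area-based thresholding steps; the 10% area fraction is an illustrative value, not a constant from the slides:

```python
import numpy as np
from scipy.ndimage import uniform_filter, sobel

def detect_palm_lines(gray, area_fraction=0.10):
    """Smooth, take the Sobel magnitude, keep the strongest `area_fraction` of pixels."""
    smoothed = uniform_filter(gray.astype(float), size=3)   # 3x3 averaging mask
    magnitude = np.hypot(sobel(smoothed, axis=0), sobel(smoothed, axis=1))
    # Threshold chosen so that roughly `area_fraction` of the image area is kept
    threshold = np.percentile(magnitude, 100.0 * (1.0 - area_fraction))
    return magnitude >= threshold   # line thinning is applied afterwards
```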

Thresholding of two sample images of the same person, captured under different lighting conditions.

Result of line detection.

Results of thinning and of line approximation. Contour tracing and the Dynamic Two-Strip (DYN2S) algorithm are applied to establish a set of straight line segments that approximate the extracted palm print lines (an illustrative sketch follows).
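DYN2S itself is not reproduced here; as a stand-in, the sketch below approximates a traced contour with straight segments using the classic Douglas-Peucker split criterion, a simpler relative of the two-strip idea:

```python
import numpy as np

def approximate_polyline(points, tolerance=2.0):
    """Approximate an ordered list of (x, y) contour points by line segments."""
    pts = np.asarray(points, dtype=float)
    if len(pts) <= 2:
        return pts
    start, end = pts[0], pts[-1]
    dx, dy = end - start
    chord = max(np.hypot(dx, dy), 1e-9)
    # Perpendicular distance of every point from the chord start -> end
    dists = np.abs(dx * (pts[:, 1] - start[1]) - dy * (pts[:, 0] - start[0])) / chord
    split = int(np.argmax(dists))
    if dists[split] <= tolerance:
        return np.array([start, end])            # the chord already fits well
    left = approximate_polyline(pts[:split + 1], tolerance)
    right = approximate_polyline(pts[split:], tolerance)
    return np.vstack([left[:-1], right])         # merge, dropping the shared point
```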

4. Palm print matching
1. The line segment Hausdorff distance (LHD) is applied, where m and t are two line segments. The angle distance is obtained by applying the tangent function to the smallest angle between m and t, scaled by a predetermined weight (a hedged sketch follows).
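A hedged sketch of such a comparison: only the angle term (a weighted tangent of the smallest angle between two segments) follows the slide; the positional term (midpoint distance) and the max-min aggregation are simplifications in the spirit of the generic Hausdorff distance, not the exact LHD of the paper:

```python
import numpy as np

W_ANGLE = 1.0   # predetermined weight of the angle distance (illustrative value)

def _angle(seg):
    (x1, y1), (x2, y2) = seg
    return np.arctan2(y2 - y1, x2 - x1)

def segment_distance(m, t):
    """Angle distance (weighted tangent of the smallest angle) plus a positional term."""
    theta = abs(_angle(m) - _angle(t)) % np.pi
    theta = min(theta, np.pi - theta)                        # smallest angle, in [0, pi/2]
    d_angle = W_ANGLE * np.tan(min(theta, np.pi / 2 - 1e-6))
    d_position = np.linalg.norm(np.mean(m, axis=0) - np.mean(t, axis=0))
    return d_angle + d_position

def line_hausdorff(segments_a, segments_b):
    """Symmetric max-min (Hausdorff-style) distance between two sets of segments."""
    d_ab = max(min(segment_distance(m, t) for t in segments_b) for m in segments_a)
    d_ba = max(min(segment_distance(t, m) for m in segments_a) for t in segments_b)
    return max(d_ab, d_ba)
```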

2. Decision making: the choice of method depends on the system specification.

Results for the palm print matching system. The matching threshold value is decided on the basis of these results.

Conclusion
The system works well on images with a uniform background, but it can be further extended to handle images with arbitrary backgrounds. Because the algorithm for locating and aligning the palm print is based on line detection rather than simple segmentation, the system is more robust and suitable for security applications with outdoor cameras.

References
1. M.K. Leung, A.C.M. Fong, and S.C. Hui, "Palm print Verification for Controlling Access to Shared Computing Resources," IEEE Pervasive Computing, vol. 6, no. 4, 2007, pp. 40-47.
2. W.J. Rucklidge, "Efficiently Locating Objects Using the Hausdorff Distance," Int'l J. Computer Vision, vol. 24, no. 3, 1997, pp. 251-270.
3. M.K. Leung and Y.H. Yang, "Dynamic Two-Strip Algorithm in Curve Fitting," Pattern Recognition, vol. 23, nos. 1-2, 1990, pp. 69-79.

Thank you