Distributed Framework for Automatic Facial Mark Detection
Graduate Operating Systems - CSE 60641
Nisha Srinivas and Tao Xu
Department of Computer Science and Engineering
nsriniva, txu1@nd.edu
Introduction
What is biometrics?
– Face, iris, fingerprint, etc.
– Face is a popular biometric: it is non-invasive.
– Identical twins have a high degree of facial similarity, so fine details on the face, such as facial marks, are used to distinguish between them.
– Automatic facial mark detector: detects facial marks and extracts facial mark features.
[Figure: different types of biometrics]
Automatic Facial Mark Detector
Pipeline: Convert Images → Face Contour Points → Crop Face Images → Detect Facial Marks
– Each image is processed independently of the results from other images.
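A minimal per-image sketch in Python of the four stages above, assuming hypothetical stage functions (the actual detector code is not shown in the slides):

def convert_image(path):
    # Convert the raw image into the format expected by the detector (placeholder).
    ...

def find_contour_points(image):
    # Locate face contour points that delimit the face region (placeholder).
    ...

def crop_face(image, contour_points):
    # Crop the face region using the contour points (placeholder).
    ...

def detect_facial_marks(face):
    # Detect facial marks and return their feature descriptors (placeholder).
    ...

def process_image(path):
    # The four stages run in sequence for one image; no stage depends on any
    # other image, which is what makes the workload easy to distribute.
    image = convert_image(path)
    contour = find_contour_points(image)
    face = crop_face(image, contour)
    return detect_facial_marks(face)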
Objective
Drawbacks of the automatic facial mark detector:
– It is slow, because of the size of the dataset, the size of each image, the long run time of the algorithms, and sequential execution.
Objective:
– Design a distributed framework for the automatic facial mark detector, to improve computation time and obtain scalability.
Sequential Execution
Execution time: T_e = N * t_p, where t_p is the time to execute the facial mark detector for a single image and N is the number of images.
[Figure: Input Image → Conversion → Contour Points → Cropping → FM Detection, processed one image at a time]
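A back-of-the-envelope example of the sequential model; t_p is an assumed illustrative value (only N = 800 comes from the dataset slide):

N = 800       # number of images in the dataset (from the dataset slide)
t_p = 60.0    # assumed per-image processing time in seconds (illustrative only)

T_sequential = N * t_p
print(f"Sequential execution time: {T_sequential:.0f} s (~{T_sequential / 3600:.1f} h)")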
Proposed Approach: Distributed Framework
[Figure: the same pipeline (Conversion → Contour Points → Cropping → FM Detection) replicated across Machine 1, Machine 2, ..., Machine n]
Execution time (ideally, with one machine per image): T_e = t_p, where t_p is the time to execute the facial mark detector for a single image.
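The slide gives only the ideal case; the sketch below generalizes the implied model to a smaller pool, assuming n workers share the N images evenly and ignoring scheduling and transfer overhead (the worker counts match Experiment 1; N and t_p are the same assumed values as in the sequential sketch):

import math

N, t_p = 800, 60.0                      # same assumed values as the sequential sketch
for n in (10, 50, 100, 150, 200):       # worker counts used in Experiment 1
    T_distributed = math.ceil(N / n) * t_p
    speedup = (N * t_p) / T_distributed
    print(f"{n:4d} workers: ~{T_distributed:6.0f} s  (speedup ~{speedup:.0f}x)")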
Implementation
– Combination of Makeflow, Work Queue, and Condor.
Condor: a distributed environment that makes use of idle resources on remote computers.
Work Queue: a fault-tolerant master/worker framework.
– Manages the workers submitted to Condor.
Makeflow: a distributed computing abstraction.
– Runs computations on Work Queue.
– The computations have dependencies that are represented by a directed acyclic graph (DAG).
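A sketch of how the per-image DAG could be written out as a Makeflow file, one independent rule per image; the executable name afmd, the file names, and the dataset layout are hypothetical placeholders, not the authors' actual scripts:

# Each rule has the Makeflow form "outputs: inputs" followed by a tab-indented command.
images = [f"img_{i:03d}.jpg" for i in range(206)]   # Experiment 1 dataset size

with open("afmd.makeflow", "w") as mf:
    for img in images:
        out = img.replace(".jpg", ".marks")
        mf.write(f"{out}: {img} afmd\n")
        mf.write(f"\t./afmd {img} > {out}\n\n")

Makeflow can then dispatch these independent rules to Work Queue workers running as jobs in the Condor pool.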
Flow Diagram
[Figure: flow diagram of the distributed framework]
Performance Metrics
We evaluate the performance of the distributed framework by computing the following metrics:
– Total execution time
– Node efficiency
– Scalability
Weak scaling: the number of jobs is kept proportional to the number of images in the dataset.
Strong scaling: the number of jobs is varied while the number of images in the dataset is held constant.
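A small sketch of how the first two metrics can be computed from measured times, assuming the usual parallel-computing definitions (speedup = T_sequential / T_distributed, node efficiency = speedup / number of workers), which the slides do not spell out; the numbers below are illustrative, not the reported results:

def speedup(t_sequential, t_distributed):
    # How many times faster the distributed run is than the sequential run.
    return t_sequential / t_distributed

def node_efficiency(t_sequential, t_distributed, n_workers):
    # Fraction of the ideal linear speedup actually achieved per worker.
    return speedup(t_sequential, t_distributed) / n_workers

t_seq, t_dist, workers = 12360.0, 310.0, 50   # assumed example measurements
print(f"speedup ~{speedup(t_seq, t_dist):.1f}x, "
      f"node efficiency ~{node_efficiency(t_seq, t_dist, workers):.2f}")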
Dataset and System Specifications
Twin face images were collected at the Twins Days Festival in Twinsburg, Ohio, in August 2009.
– High-resolution images: 4310 rows x 2868 columns
– Total number of images: 800
– Dataset sizes by attribute: [206, 200, 250, 144]
Notre Dame Condor Pool: ~700 cores
Notre Dame Condor Pool
Sample condor_status output:
Machine            Arch   OpSys  MachineOwner  MachineGroup  State      LoadAvg  Memory
ccl00.cse.nd.edu   INTEL  LINUX  dthain        ccl           Unclaimed  0.190    1518
ccl01.cse.nd.edu   INTEL  LINUX  dthain        ccl           Unclaimed  0.150    1518
[Figure: pool composition - research clusters (ccl 8x1, cclsun 16x2, loco 32x2, sc0 32x2, netscale 16x2 and 1x32, cvrl 32x2, iss 44x2, compbio 1x8) serving MPI, Hadoop, biometrics, storage research, network research, and timeshared collaboration, plus batch capacity and personal workstations (Fitzpatrick 130, CSE 170, CHEG 25, EE 10, Nieu 20, DeBart 10)]
Makeflow was executed on cvrl.cse.nd.edu (Intel(R) Xeon(R) CPU X7460 @ 2.66GHz).
Experiments
Experiment 1
– Comparison of total execution time between the distributed framework and the sequential framework.
– Submit N worker jobs to Condor while keeping the dataset constant.
– Number of workers for the distributed framework = {10, 50, 100, 150, 200}
– Dataset size = 206
– Executed on the Notre Dame Condor Pool.
Experiment 2
– To evaluate node efficiency.
– Analyze the time taken for a single job to complete on a machine in the Notre Dame Condor Pool.
Experiment 3
– To evaluate the scalability of the AFMD.
Weak scaling: the number of jobs is kept proportional to the number of images in the dataset.
Strong scaling: the number of jobs is varied while the number of images in the dataset is held constant.
Experiment 1: Results
[Plot: total execution time (secs) vs. number of workers]
Experiment 2: Results
[Plots: time (secs) and number of jobs executed per machine, by machine name and number of workers]
Experiment 3: Weak Scaling
[Plot: time (secs) vs. number of workers]
Conclusion
Designed and implemented a distributed framework for the automatic facial mark detector.
It was implemented using Makeflow, Work Queue, and Condor.
Performance of the distributed framework is significantly better than that of the sequential implementation.