
1 Face Recognition and Tracking for Human-Robot Interaction using PeCoFiSH
Alex Eisner
This material is based upon work supported by the National Science Foundation under Grant No. IIS/REU/0755462
Mentor: Dr. Andrew Fagg

2 Overview
– Mobile Manipulator: navigation, grasping
– Human-Robot Interaction: intuitive cooperation; body language, hand gestures
– Face Tracking: engage humans, ignore non-humans

3 Approach
OpenCV implementation of Haar Cascades: finds faces in each frame
– Haar-like features (Haar wavelets)
  – Find signals of different frequencies, like an FFT
  – Locate patterns of intensity gradients
  – Preserve spatial information
  – Inexpensive to convolve over an image
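
A minimal sketch of the raw detection step described above, using OpenCV's Python bindings. The cascade file, camera index, and detection parameters are assumptions; the slides do not give the values PeCoFiSH uses.

```python
import cv2

# Pretrained frontal-face Haar cascade shipped with OpenCV
# (assumed; the slides do not name the cascade actually used).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default camera (assumed)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is an (x, y, w, h) window; this raw output is
    # the noisy candidate set that PeCoFiSH goes on to filter.
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 3):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```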

4 Haar Features
Image sources: morph.cs.st-andrews.ac.uk; twoday.tuwien.ac.at

5 Approach
Haar Cascades: highly effective, but noisy
– High false positive rate, caused by:
  – Noise in the video stream
  – "Face-like" objects in the environment
PeCoFiSH: Persistence and Color Filter Selection for Haar cascades
– Prefer objects that persist between frames
– Prefer objects that are skin-colored

6 Persistence Filter
– Store a certain number of past frames
– Identify the locations of face candidates in each frame
– Locations that are candidates in at least N frames are considered faces (see the sketch below)
Salvador Dalí, Image source: www.art.com
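
A minimal sketch of the persistence idea in Python: buffer the detections from recent frames and accept a candidate only if it overlaps a detection in at least N of them. The history length, N, and the IoU overlap test are assumptions; the slides describe only the general scheme.

```python
from collections import deque

class PersistenceFilter:
    def __init__(self, history=10, min_hits=5, iou_thresh=0.3):
        self.frames = deque(maxlen=history)  # recent frames' detections
        self.min_hits = min_hits             # the "N" from the slide
        self.iou_thresh = iou_thresh

    @staticmethod
    def _iou(a, b):
        # Intersection-over-union of two (x, y, w, h) boxes.
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        ih = max(0, min(ay + ah, by + bh) - max(ay, by))
        inter = iw * ih
        union = aw * ah + bw * bh - inter
        return inter / union if union else 0.0

    def update(self, detections):
        """Add this frame's detections; return the persistent ones."""
        self.frames.append(list(detections))
        return [box for box in detections
                if sum(any(self._iou(box, old) >= self.iou_thresh
                           for old in frame)
                       for frame in self.frames) >= self.min_hits]
```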

7 [Image slide: example frame illustrating the raw Haar-wavelet detections]

9 Persistence Filter
– Successfully filters out noise
– Does not filter out face-like objects in the environment; for that we turn to color filtering
Image source: optcorp.com

10 Color Filtering
– Construct a color model
– For a given image, it assigns to each pixel a likelihood that the pixel is part of a face
– A statistical model using Gaussian mixture elements

11 Color Filtering: Overview
Offline:
– Sample images captured with the Biclops (variety of skin tones, lighting conditions, etc.)
– User tags locations in the images containing skin
– Color values at each tagged pixel are used to create the color model
– Color model is passed to PeCoFiSH

12 Color Filtering: Overview
Online (PeCoFiSH):
– Mixture model applied to give the relative likelihood that each pixel is skin
– Pixels above a certain threshold are kept
– Series of dilate/erode operations (see the sketch below):
  – Remove clusters which are very thin
  – Connect clusters which are very close to each other
Image source: www.w3.org
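
A sketch of the online thresholding and dilate/erode cleanup using OpenCV morphology. The threshold, kernel size, and choice of closing-then-opening are illustrative assumptions, not values from the slides.

```python
import cv2
import numpy as np

def clean_skin_mask(likelihood, thresh=0.5, ksize=5):
    """Threshold a per-pixel skin-likelihood map, then tidy it up."""
    mask = (likelihood >= thresh).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    # Dilate then erode (closing): connect clusters that are very close.
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Erode then dilate (opening): remove clusters that are very thin.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return mask
```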

13 Color Filtering: Implementation
– User-selected skin pixels represented in YCbCr space: luma (brightness), blue-difference, red-difference
– 3-dimensional Gaussian mixture model created from N Gaussians
– EM algorithm finds the parameters of the Gaussian mixture model (see the sketch below)
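
A sketch of the offline fit and its online use, with scikit-learn's GaussianMixture standing in for the project's EM implementation; the number of Gaussians and the likelihood normalization are assumptions.

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_skin_model(images, masks, n_components=3):
    """Fit a 3-D Gaussian mixture (via EM) to user-tagged skin pixels
    converted to YCbCr. n_components is an assumed value for "N"."""
    samples = [cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)[mask > 0]
               for img, mask in zip(images, masks)]
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(np.vstack(samples).astype(np.float64))  # EM runs inside fit()
    return gmm

def skin_likelihood(gmm, img):
    """Relative likelihood, per pixel, that the pixel is skin."""
    ycc = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb).reshape(-1, 3)
    log_lik = gmm.score_samples(ycc.astype(np.float64))
    rel = np.exp(log_lik - log_lik.max())  # scale so the max is 1.0
    return rel.reshape(img.shape[:2])
```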

14 Pixel Samples in YCrCb Space

17 Combined Filters: PeCoFiSH
– A threshold applied to the persistence filter gives binary windows of persistent faces
– Pixel areas which match both the color and persistence filters are considered candidates
– The largest set of connected pixels is found (see the sketch below)
  – A minimum size threshold is applied
  – The largest face is probably the closest face
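
A sketch of the combination step, assuming binary masks from the two filters and OpenCV's connected-components routine; the minimum-area value is hypothetical.

```python
import cv2

def pick_face(persistence_mask, skin_mask, min_area=400):
    """AND the two masks and return the bounding box of the largest
    connected component above the size threshold, or None."""
    combined = cv2.bitwise_and(persistence_mask, skin_mask)
    n, _, stats, _ = cv2.connectedComponentsWithStats(combined)
    best, best_area = None, min_area
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= best_area:
            best_area = stats[i, cv2.CC_STAT_AREA]
            best = (stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP],
                    stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT])
    return best  # the largest face is probably the closest one
```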

24 Analysis
– True Positive (TP): an existing face was tagged
– False Positive (FP): a location with no face was tagged
– False Negative (FN): a location with a face was not tagged

25 Analysis
– Accuracy: how frequently the existing faces were tagged
  Accuracy = TP / (TP + FN)
– Precision: how frequently the tagged locations were faces
  Precision = TP / (TP + FP)
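
The two metrics as code, with made-up counts only to show usage (what the slide calls accuracy is elsewhere known as recall):

```python
def accuracy(tp, fn):
    """Fraction of existing faces that were tagged: TP / (TP + FN)."""
    return tp / (tp + fn)

def precision(tp, fp):
    """Fraction of tagged locations that were faces: TP / (TP + FP)."""
    return tp / (tp + fp)

# Hypothetical counts, not results from the talk.
print(accuracy(tp=90, fn=10))   # 0.9
print(precision(tp=90, fp=30))  # 0.75
```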

26 Video: alex240-boxes

27 Analysis: Simple Case

28 Video: alexwalk2-raw

29 Video: alexwalk2-filter

30 Video: alexwalk2-boxes

31 Analysis: Complex Case

32 Analysis
The really cool part: throughout all these tests, at no time did PeCoFiSH select a major face location which was not a face!
– The robot won't try to talk to the hat rack

33 Performance
By applying a scale factor, real-time processing is obtainable without sacrificing accuracy (see the sketch below):
– At 640x480: 2 FPS
– At 320x240: 10 FPS
[Chart: frames per second vs. scale factor]
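
A sketch of the scale-factor trick: run the cascade on a downscaled copy of the frame and map the boxes back to full resolution. The 0.5 factor matches the 640x480 vs. 320x240 comparison above; everything else is assumed.

```python
import cv2

def detect_scaled(cascade, frame, scale=0.5):
    """Detect on a downscaled frame, then rescale the (x, y, w, h)
    boxes back to the original resolution."""
    small = cv2.resize(frame, None, fx=scale, fy=scale)
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    return [tuple(int(v / scale) for v in box)
            for box in cascade.detectMultiScale(gray)]
```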

34 Current Work
– Saccadic face tracking with the Biclops
– Human-robot cooperation
Image source: tangledwing.wordpress.com

35 Questions?
This material is based upon work supported by the National Science Foundation under Grant No. IIS/REU/0755462

