Simple Face Detection System. Ali Arab, Sharif University of Technology, Fall 2012.


1 Simple Face Detection System. Ali Arab, Sharif University of Technology, Fall 2012

2 Outline: What is face detection? Applications. Basic concepts (image, RGB color space, normalized RGB, HSL color space). Algorithm description.

3 What is face detection? Given an image, determine whether it contains any human faces and, if so, where they are.

4 Applications: automatic face recognition systems, human-computer interaction systems, surveillance systems, face tracking systems, autofocus cameras, and even energy conservation! For example, a TV can recognize the direction of the viewer's face: when the user is not looking at the screen, the brightness is lowered, and when the face turns back to the screen, the brightness is increased.

5 What is an image? We can think of an image as a matrix. Simplest form: binary images.
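
For example, a tiny binary image can be stored as a plain matrix of 0s and 1s; here is a minimal sketch in Python (the slides do not prescribe a language):

```python
# A 3x3 binary image: 1 = foreground (white), 0 = background (black).
binary_image = [
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
]
```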

6 What is an image? (cont.) Grayscale images.

7 What is an image? (cont.) Color images: known as the RGB color space.

8 rg space. Normalized RGB: a color is represented by the proportion of red, green, and blue in the color, rather than by the intensity of each. This removes the intensity information. r = R/(R+G+B), g = G/(R+G+B)
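
As a minimal sketch of this per-pixel conversion (in Python; the function name is chosen here for illustration):

```python
def rgb_to_rg(R, G, B):
    """Map an RGB pixel to normalized rg chromaticity coordinates."""
    total = R + G + B
    if total == 0:           # pure black pixel: avoid division by zero
        return 0.0, 0.0
    return R / total, G / total
```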

9 HSL color space. Motivation: the relationship between the constituent amounts of red, green, and blue light and the resulting color is unintuitive.

10 HSL color space. Each pixel is represented using hue, saturation, and lightness. You need to know how to convert from RGB to HSL!
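
One common formulation of the RGB-to-HSL conversion is sketched below (channels scaled to [0, 1], hue returned in degrees); treat it as a reference sketch and check it against the definition used in class:

```python
def rgb_to_hsl(r, g, b):
    """Convert RGB (each in [0, 1]) to HSL: hue in degrees, saturation and lightness in [0, 1]."""
    mx, mn = max(r, g, b), min(r, g, b)
    l = (mx + mn) / 2.0
    if mx == mn:                     # achromatic (gray): hue and saturation are zero
        return 0.0, 0.0, l
    d = mx - mn
    s = d / (2.0 - mx - mn) if l > 0.5 else d / (mx + mn)
    if mx == r:
        h = ((g - b) / d) % 6.0
    elif mx == g:
        h = (b - r) / d + 2.0
    else:
        h = (r - g) / d + 4.0
    return 60.0 * h, s, l
```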

11 Algorithm description. We use a simple knowledge-based algorithm to accomplish the task: this approach represents a face using a set of rules and uses these rules to guide the search process.

12 Algorithm description. First step: skin pixel classification. Convert RGB to HSL. The goal is to remove as many non-face pixels as possible from the image in order to focus on the remaining skin-colored regions. In HSL color space: if H = 239, the pixel can be skin; otherwise reject it.

13 Algorithm description (cont.) First step: skin pixel classification. Convert RGB to rg space. In rg chromaticity space: Let:

14 Algorithm description (cont.) Result of skin classification:

15 Algorithm description (cont.) Consider each connected region as an object.

16 Algorithm description (cont.) Second step: connected component labelling. Binary image before labelling:

17 Algorithm description (cont.) Second step: connected component labelling. Binary image after labelling:

18 Algorithm description (cont.) Second step: connected component labelling. You can find an efficient labelling algorithm here: http://www.codeproject.com/Articles/336915/Connected-Component-Labeling-Algorithm
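
The linked article covers an efficient two-pass method; purely as a sketch of the idea, a simple flood-fill (BFS) labelling of a 2D binary mask with 4-connectivity might look like this (function name chosen here):

```python
from collections import deque

def label_components(mask):
    """Label the 4-connected components of a binary mask (list of rows of 0/1).

    Returns a label matrix (0 = background) and the number of components found."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and labels[y][x] == 0:
                n += 1                              # start a new component
                queue = deque([(y, x)])
                labels[y][x] = n
                while queue:
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and labels[ny][nx] == 0:
                            labels[ny][nx] = n
                            queue.append((ny, nx))
    return labels, n
```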

19 Algorithm description (cont.) Third step: connected component analysis. Analyse the labelled image to extract features of each object, such as its area and its minimum bounding box.
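
A single pass over the label matrix can collect these features; the sketch below (names chosen here) assumes the label matrix produced by the labelling sketch above:

```python
def analyze_components(labels, n):
    """For each label 1..n, compute its area and minimum bounding box (min_x, min_y, max_x, max_y)."""
    stats = {k: {"area": 0, "box": None} for k in range(1, n + 1)}
    for y, row in enumerate(labels):
        for x, k in enumerate(row):
            if k == 0:
                continue
            s = stats[k]
            s["area"] += 1
            if s["box"] is None:
                s["box"] = [x, y, x, y]
            else:
                b = s["box"]
                b[0], b[1] = min(b[0], x), min(b[1], y)
                b[2], b[3] = max(b[2], x), max(b[3], y)
    return stats
```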

20 Algorithm description (cont.) Fourth step: objects smaller than the minimum face area (450 pixels) are removed, and objects larger than the maximum face area (4500 pixels) are removed.
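
Using the areas computed in the previous step, this filter can be as simple as the following sketch (the thresholds are the ones quoted on the slide):

```python
MIN_FACE_AREA = 450    # objects smaller than this are removed
MAX_FACE_AREA = 4500   # objects larger than this are removed

def filter_by_area(stats):
    """Keep only objects whose pixel area lies within the allowed face-size range."""
    return {k: s for k, s in stats.items()
            if MIN_FACE_AREA <= s["area"] <= MAX_FACE_AREA}
```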

21 Algorithm description (cont.) The resulting image so far:

22 Algorithm description (cont.) Fifth step: percentage of skin in each bounding box. If the percentage is > 0.9 or < 0.4, the region is rejected.
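
As a sketch (assuming the binary skin mask and the (min_x, min_y, max_x, max_y) bounding boxes from the earlier sketches), the test might be written as:

```python
def skin_fraction(mask, box):
    """Fraction of skin pixels inside a bounding box (min_x, min_y, max_x, max_y)."""
    x0, y0, x1, y1 = box
    area = (x1 - x0 + 1) * (y1 - y0 + 1)
    skin = sum(mask[y][x] for y in range(y0, y1 + 1) for x in range(x0, x1 + 1))
    return skin / area

def passes_skin_fraction(fraction):
    """Apply the slide's rule: reject regions that are more than 90% or less than 40% skin."""
    return 0.4 <= fraction <= 0.9
```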

23 Algorithm description (cont.) Sixth step: elimination based on the golden ratio. The (height / width) ratio of a face region should be approximately the golden ratio (1.618).
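
A sketch of this check follows; the tolerance is an assumed value, since the slide only says the ratio should be approximately golden:

```python
GOLDEN_RATIO = 1.618

def passes_golden_ratio(box, tolerance=0.25):
    """Check whether height/width of a bounding box is roughly the golden ratio.

    `tolerance` is an assumption; the slide does not specify how close the ratio must be."""
    x0, y0, x1, y1 = box
    width, height = x1 - x0 + 1, y1 - y0 + 1
    return abs(height / width - GOLDEN_RATIO) <= tolerance
```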

24 Algorithm description (cont.) The last step: counting holes (optional). For each remaining object we compute the number of holes. Eyes, mouth, and nose are usually darker, so they appear as holes in the binary image. If an object has no holes, we simply reject it!

25 Algorithm description (cont.) The last step: counting holes (optional). How? In each bounding box, invert the pixels and count the objects in the new image using the labelling algorithm discussed before.
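
A sketch of this step, reusing the label_components function from the labelling sketch above; it simply counts connected regions of the inverted box, as the slide describes:

```python
def count_holes(mask, box):
    """Invert the skin mask inside a bounding box and count the connected regions there."""
    x0, y0, x1, y1 = box
    inverted = [[0 if mask[y][x] else 1 for x in range(x0, x1 + 1)]
                for y in range(y0, y1 + 1)]
    _, n = label_components(inverted)   # dark regions (eyes, mouth, ...) show up as components
    return n
```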

26 Algorithm description (cont.) Remaining objects are facial regions.

27 Algorithm description (cont.)

28 Final result. We can draw a bounding box around each face or just report its position.

29 Remarks. You are not allowed to use any image processing library such as cx_image or OpenCV. Collaboration is encouraged, but the work must be done individually.

30 Any questions? Mail to: aliarab2009@gmail.com

