Session 7: Face Detection (cont.)

Session 7: Face Detection (cont.)
John Magee, 8 February 2017
Slides courtesy of Diane H. Theriault

Question of the Day: How can we find faces in images?

Face Detection
- Compute features in the image
- Apply a classifier
Viola & Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features," CVPR 2001.

What do Faces "Look Like"?
The chosen features are the image's responses to box filters applied at specific locations.
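
The original detector speeds these box-filter responses up with an "integral image," so each rectangle sum costs only four array lookups. Below is a minimal sketch of that idea in Python/NumPy; the function names and the specific two-rectangle (left-minus-right) filter layout are illustrative, not taken from the slides or the paper.

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img[:y, :x]; padded with a leading zero row and column."""
    ii = img.cumsum(axis=0).cumsum(axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def box_sum(ii, top, left, height, width):
    """Sum of the pixels inside one rectangle, via four integral-image lookups."""
    y0, x0, y1, x1 = top, left, top + height, left + width
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

def two_rect_feature(ii, top, left, height, width):
    """A horizontal two-rectangle box filter: left half minus right half."""
    half = width // 2
    return (box_sum(ii, top, left, height, half)
            - box_sum(ii, top, left + half, height, half))

window = np.random.rand(24, 24)          # stand-in for a 24x24 candidate sub-window
response = two_rect_feature(integral_image(window), top=8, left=4, height=8, width=16)
```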

Using Features for Classification
Features are the gateway between the signal-processing world and the machine-learning world. For any candidate image, we compute its responses to several different filters at several different locations; these responses are the input to our machine learning algorithm.
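
As a concrete (hypothetical) illustration of that gateway, each candidate window can be reduced to a vector of filter responses. This sketch reuses the integral-image helpers above; the particular list of filter placements is made up for illustration, since the real detector selects them by training.

```python
import numpy as np

# (top, left, height, width) of each box filter inside a 24x24 window.
# These placements are illustrative only.
FILTERS = [(8, 4, 8, 16), (2, 2, 6, 20), (10, 6, 12, 12)]

def window_features(window):
    """Turn one candidate window into its vector of filter responses."""
    ii = integral_image(window)
    return np.array([two_rect_feature(ii, *f) for f in FILTERS])
```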

Using Features for Classification
In machine learning, a classifier is something that can decide whether a piece of data belongs to a particular class (e.g. "is this image a face or not?"). There are different types of classifiers, and one way to find their parameters is by looking at some training data (i.e. "supervised learning"). A "weak learner" is a classifier for which you find the best parameters you can, yet it still does only a mediocre job.

Using Features for Classification
"Weak Learner" example: apply a threshold to a feature (the response of the image to a filter). How do we find a threshold?
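
Such a thresholded feature is often written as a "decision stump": one feature, one threshold, and a polarity that says which side of the threshold counts as "face." A minimal sketch (the +1/-1 labeling convention is an assumption, chosen to match the AdaBoost sketch further below):

```python
def stump_predict(feature_value, threshold, polarity=1):
    """Weak learner: +1 (face) or -1 (non-face) from a single filter response.
    polarity = -1 flips which side of the threshold counts as a face."""
    return 1 if polarity * feature_value < polarity * threshold else -1
```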

Using Features for Classification
How do we find a threshold? Start with labeled training data:
- 9,832 face images (positive examples)
- 10,000 non-face images (sub-windows cut from other pictures containing no faces; negative examples)
Compute some measure of how good a particular threshold is (e.g. "accuracy"), then pick the threshold that gives the best result.

Using Features for Classification
For a particular threshold on a particular feature, compute:
- True positives (faces identified as faces)
- True negatives (non-face patches identified as non-faces)
- False positives (non-faces identified as faces)
- False negatives (faces identified as non-faces)
Accuracy: % correctly classified. Classification error: 1 - accuracy.
For each feature, choose the threshold that maximizes the accuracy.

"Confusion matrix" (rows: classifier result; columns: known classification):

                        Known positive    Known negative
  Classified positive   True Positive     False Positive
  Classified negative   False Negative    True Negative
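
Given the filter responses of every labeled training window for one feature, the best threshold can be found by sweeping candidate thresholds and keeping the one with the highest accuracy. A sketch under the simplifying assumption that every example counts equally (AdaBoost, below, will reweight them); candidate thresholds are taken from the observed responses themselves:

```python
import numpy as np

def best_threshold(responses, labels):
    """responses: 1-D array of one feature's responses over the training set.
    labels: +1 for faces, -1 for non-faces.
    Returns (threshold, polarity, accuracy) with the highest accuracy."""
    best = (None, 1, 0.0)
    for thresh in np.unique(responses):
        for polarity in (+1, -1):
            preds = np.where(polarity * responses < polarity * thresh, 1, -1)
            acc = np.mean(preds == labels)        # (TP + TN) / all examples
            if acc > best[2]:
                best = (thresh, polarity, acc)
    return best
```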

Using Features for Classification
How do you know which feature to use? Try them all and pick the one that gives the best result. Then choose the next one that does the next-best job, emphasizing the misclassified images. Each threshold on a single feature gives mediocre results, but if you combine them in a clever way, you can get good results. (That is the extremely short version of "boosting.")

Classification with AdaBoost
An awesome machine learning algorithm! Training: given a pool of "weak learners" and some data, create a "boosted classifier" by choosing a good combination of K weak learners and associated weights. In our case, "train a weak learner" = choose which feature to use and which threshold to apply.

Classification with AdaBoost
Training:
Initialization: assign data weights uniformly to each data point.
For k = 1:K
- Train all of the "weak learners"
- Compute each one's weighted classification error, using the weights assigned to the data points
- Choose the weak learner with the lowest weighted error
- Compute a classifier weight for that weak learner, based on its classification error
- Adjust the data-point weights to emphasize misclassified points
(Specifics of how to compute the weights are in the paper.)
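
A sketch of that loop is below, with weak learners represented as (feature index, threshold, polarity) stumps. The classifier-weight and data-weight formulas used here are the standard AdaBoost ones for +1/-1 labels (alpha = 1/2 * ln((1 - err) / err)); the Viola-Jones paper writes them in a slightly different but equivalent spirit, so treat the exact form as an assumption.

```python
import numpy as np

def adaboost_train(X, labels, stump_pool, K):
    """X: feature responses, shape (n_examples, n_features).
    labels: +1 / -1.  stump_pool: candidate (feature_index, threshold, polarity).
    Returns the K chosen stumps with their classifier weights alpha."""
    n = len(labels)
    w = np.full(n, 1.0 / n)                       # uniform data weights
    chosen = []
    for _ in range(K):
        # Train/evaluate every weak learner and keep the lowest *weighted* error.
        best_err, best_stump, best_preds = np.inf, None, None
        for (f, thresh, pol) in stump_pool:
            preds = np.where(pol * X[:, f] < pol * thresh, 1, -1)
            err = np.sum(w * (preds != labels))
            if err < best_err:
                best_err, best_stump, best_preds = err, (f, thresh, pol), preds
        best_err = np.clip(best_err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - best_err) / best_err)   # classifier weight
        # Emphasize the points this stump got wrong, then renormalize.
        w *= np.exp(-alpha * labels * best_preds)
        w /= w.sum()
        chosen.append((best_stump, alpha))
    return chosen
```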

Classification with AdaBoost
To label faces, use the "boosted classifier" (the weak learners and associated weights found during training):
- Evaluate each chosen weak learner on the new data point by computing the response of the image to its filter and applying its threshold to obtain a binary result.
- Make the final decision by computing a weighted sum of the weak learners' classification results.
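
Continuing the notation above, the final decision is a weighted vote of the chosen stumps over one window's precomputed feature responses (a sketch, not the paper's exact formulation):

```python
def boosted_classify(features, chosen):
    """features: filter responses for one candidate window.
    chosen: [((feature_index, threshold, polarity), alpha), ...] from training.
    Returns +1 (face) if the weighted vote is positive, else -1 (non-face)."""
    score = 0.0
    for (f, thresh, pol), alpha in chosen:
        vote = 1 if pol * features[f] < pol * thresh else -1   # weak learner
        score += alpha * vote
    return 1 if score > 0 else -1
```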

Classification Cascade
There is a tradeoff between a more "complex" classifier that uses more features (higher computational cost) and accuracy. What is an acceptable error rate? What is an acceptable computational cost? Can we have our cake and eat it too?

Classification Cascade
Solution: use a "cascade" of increasingly complex classifiers.
- Create less complex classifiers with fewer weak learners that achieve high detection rates (perhaps with extra false positives).
- Evaluate the more complex, more picky classifiers only after an image passes the earlier classifiers.
- Train later classifiers in the cascade using only images that pass the earlier classifiers.
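
A cascade then simply runs the cheap stages first and rejects a window the moment any stage says "non-face." The sketch below reuses boosted_classify from above; note that in the real detector each stage's decision threshold is also tuned for a very high detection rate, a detail omitted here.

```python
def cascade_classify(features, stages):
    """stages: list of boosted classifiers, ordered cheapest / least picky first."""
    for stage in stages:
        if boosted_classify(features, stage) == -1:
            return -1          # early rejection: most non-face windows stop here
    return 1                   # survived every stage: report as a face
```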

To Detect Faces
- Divide large images into overlapping sub-windows.
- Apply the classifier cascade to each sub-window.
- Handle sub-windows of different sizes by scaling the features (using larger box filters).
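
In practice, OpenCV ships trained Viola-Jones-style cascades, and detectMultiScale performs the sub-window scanning over multiple scales internally. A minimal usage example (the image file name is illustrative):

```python
import cv2

# Pretrained frontal-face cascade bundled with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("group_photo.jpg")               # illustrative file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Scan sub-windows at multiple scales; returns (x, y, w, h) boxes.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```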

Discussion Questions:
- What is the relationship between an image feature and the response of an image to a box filter applied at a particular location?
- If you were given a set of labeled images and the responses to some particular filter, how would you choose a threshold to use?
- How would you adjust your procedure for finding the best possible threshold if you wanted to find the best threshold that recognizes at least 99% of faces, even if it lets through some non-faces (false positives)?
- Given an image, what are the steps for labeling it as face or non-face?
- What is a classifier cascade, and why would you want to use one?