Object Detection
Object Identification
Three related tasks. Object recognition: is there a face in the image? Object detection: where is a face? Object identification: who is it? Is it Ahmed or Hassan? Where is Rania?
a challenge: object perception
Object detection, object segmentation, object recognition (illustrated on fruit: fruit detection, fruit segmentation, fruit recognition). Typical systems require human-prepared training data and cannot adapt to new situations autonomously; can we use autonomous experimentation instead?
Object Detection
Find the location of an object if it appears in an image. Does the object appear? Where is it? (Think of finding Wally / Waldo.)
Face Detector
Face detection
We start with a walk through a face detector:
- Slide a window over the image
- Extract a feature vector x for each window
- Classify each window with F(x) into face / not face (label y = -1 means "not face")
A code sketch of this loop follows.
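A minimal sketch in Python (the window size, stride, and the `extract_features`/`classify` callables are illustrative placeholders, not the lecture's actual detector):

```python
import numpy as np

def detect_faces(image, extract_features, classify, win=24, step=4):
    """Slide a win x win window over the image; keep windows
    whose feature vector x is classified as a face (F(x) = +1)."""
    detections = []
    H, W = image.shape[:2]
    for y in range(0, H - win + 1, step):
        for x in range(0, W - win + 1, step):
            window = image[y:y + win, x:x + win]
            feats = extract_features(window)   # window -> vector x
            if classify(feats) == 1:           # F(x): +1 face, -1 not face
                detections.append((x, y, win, win))
    return detections
```

In practice the same scan is repeated over rescaled copies of the image so that faces of different sizes fit the window.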
Classification
Examples are points in R^n: we have converted all windows to n-dimensional feature vectors. Positives are separated from negatives by the hyperplane w:
y = sign(w^T x - b)
Classification
x ∈ R^n - data points
P(x) - distribution of the data; not all points are equally likely
y(x) - true value of y for each x
F - decision function: y = F(x, θ)
θ - parameters of F, e.g. θ = (w, b)
We want an F that makes few mistakes.
Loss function
Our decision may have severe implications: predicting "possible cancer" versus "absolutely no risk of cancer" carries very different costs.
L(y(x), F(x, θ)) - loss function: how much we pay for predicting F(x, θ) when the true value is y(x). Common choices: classification error, hinge loss.
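Written out (these are the standard forms, with labels y ∈ {−1, +1} and score F(x, θ)):

$$
L_{0/1}\bigl(y(x),F(x,\theta)\bigr)=\mathbf{1}\bigl[y(x)\neq \operatorname{sign} F(x,\theta)\bigr],
\qquad
L_{\text{hinge}}\bigl(y(x),F(x,\theta)\bigr)=\max\bigl(0,\;1-y(x)\,F(x,\theta)\bigr).
$$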
Face Detection – basic scheme
Pipeline: pixel pattern → feature extraction → feature vector (x1, x2, …, xn) → classifier → classification result.
The classifier is trained off-line on face examples and non-face examples; at run time, search for faces at different resolutions and locations.
Feature Engineering or Feature Learning?
In Vision: SIFT, HOG, pixels, sparse coding, RBM, autoencoder, LCC, scattering net (Mallat), deep conv net (discriminative feature learning), etc.
In NLP: N-gram, hashing, XXX, YYY, ZZZ, etc.
In Speech: MFCCs, PLPs, SPLICE (for noise robustness), autoencoder, scattering spectra, learned mapping from filterbank to MFCCs, DNN, etc.
Training and Testing
Train a classifier on the training set; then classify a labeled test set and score the results (correct detections, false positives, sensitivity).
Learning Components
- Start with small initial regions
- Expand each region into one of four directions
- Extract the new components from the images
- Train SVM classifiers on them
- Choose the best expansion according to the error bound of the SVMs
Some Examples
What Types of Problems Fit (or Don't Fit) Deep Learning? (some conjectures)
"Perceptual" AI: image/video recognition, speech recognition, speech/text understanding, sequential data with temporal structure (stock market prediction?). Data representations are non-obvious, and deep learning already shows tremendous benefits.
"Data matching": malware detection (ICASSP 2013), movie recommenders, speaker/language detection? Data representation is easy (e.g., a histogram of events, the movies a user watched), and deep learning may not win over standard machine learning.
Deep networks are not a panacea, however. We know they work well on visual object recognition and speech recognition, and there is early evidence that they will work well on document understanding. In all of these problems it is very difficult for an engineer to specify or build a representation for the data that isn't just the lowest-level building blocks, such as pixels or words; deep networks should be state of the art there. Many machine learning applications inside and outside of Microsoft are not difficult AI problems: they fall under "data mining". An example is movie recommendation, where representing the data is very easy: represent the movies a user watches in a big sparse matrix, perhaps add the user's demographics, then apply an existing ML algorithm. We expect deep networks to bring no benefit in this (or similar) cases.
Window-based models
Generate and score candidates: scan the detector at multiple locations and scales, and pass each window to the face/non-face classifier.
Haar-features (1)
The difference between the pixel sums of the white and black areas. There are four types, placed at different locations and at different scales.
Haar-features (2)
They capture the symmetry of the face.
Haar-features (3)
The four types of Haar features (types A, B, C, D) can be extracted at any location and any scale within a 24x24 detection window.
Integral image (1)
The integral image stores at each pixel the sum of all pixel values above and to the left of it (the blue area). Time complexity? One pass over the image to build it. (Worked example grid omitted.)
Integral image (2)
Label the window's four quadrants 1, 2, 3, 4 and let a, b, c, d be the integral-image values at their corners:
a = sum(1), b = sum(1+2), c = sum(1+3), d = sum(1+2+3+4)
Sum(4) = ? It is d + a - b - c: a four-point calculation!
For the Haar features: types A, B (2 rectangles) need 6 points, type C (3 rectangles) needs 8 points, type D (4 rectangles) needs 9 points.
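A numpy sketch of both slides (the Haar helper at the end is an illustrative assumption about how a two-rectangle feature reads off the table):

```python
import numpy as np

def integral_image(img):
    """One pass: ii[y, x] = sum of img[:y, :x] (zero-padded border)."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y0, x0, y1, x1):
    """Sum over img[y0:y1, x0:x1] from four lookups: d + a - b - c."""
    return ii[y1, x1] + ii[y0, x0] - ii[y0, x1] - ii[y1, x0]

def haar_two_rect(ii, y, x, h, w):
    """Type-A-style feature: left (white) half minus right (black) half,
    6 distinct corner lookups in total."""
    left = rect_sum(ii, y, x, y + h, x + w // 2)
    right = rect_sum(ii, y, x + w // 2, y + h, x + w)
    return left - right
```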
Feature selection
A weak classifier h thresholds a single feature value f_i:
h = 1 if f_i > θ (a threshold), 0 otherwise
For example: f1 > θ ⇒ face; f2 ≤ θ ⇒ not a face.
Feature selection
Idea: combine several weak classifiers to generate a strong classifier.
Each weak classifier h_t ∈ {0, 1} is a (feature, threshold) pair, where the feature is defined by its type, location, and scale. Its weight α_t reflects the performance of that weak classifier on the training set. The strong classifier tests
α1 h1 + α2 h2 + α3 h3 + … + αT hT ≷ T_threshold
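A minimal sketch of this vote (the function names are mine; how each α_t is chosen, e.g. from the weak classifier's weighted training error as in AdaBoost, is outside this slide):

```python
def make_weak_classifier(feature, theta):
    """h(x) = 1 if the feature response exceeds the threshold, else 0."""
    return lambda x: 1 if feature(x) > theta else 0

def strong_classify(x, weak_classifiers, alphas, T_threshold):
    """alpha_1*h_1(x) + ... + alpha_T*h_T(x) compared to a threshold."""
    score = sum(a * h(x) for a, h in zip(alphas, weak_classifiers))
    return 1 if score > T_threshold else 0
```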
K Nearest Neighbors
Memorize all training data. Given a query, find the K closest points; the neighbors vote for the label, e.g. Vote(+) = 2, Vote(−) = 1 ⇒ predict +.
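A minimal sketch (Euclidean distance and majority vote; the array shapes are assumptions):

```python
import numpy as np

def knn_classify(X_train, y_train, query, k=3):
    """Memorize all training data; let the k nearest points vote."""
    dists = np.linalg.norm(X_train - query, axis=1)  # distance to query
    nearest = np.argsort(dists)[:k]                  # k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                 # majority label
```

With k = 3 and neighbor labels (+, +, −) this reproduces the slide's Vote(+) = 2, Vote(−) = 1 ⇒ +.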
K-Nearest Neighbors
Example application: match a query silhouette against stored example silhouettes; silhouettes from other views then yield a 3D visual hull.
Kristen Grauman, Gregory Shakhnarovich, and Trevor Darrell, "Virtual Visual Hulls: Example-Based 3D Shape Inference from Silhouettes".
Support vector machines
A simple decision with good classification and good generalization: choose the hyperplane w that maximizes the margin between the positive and negative examples.
Support vector machines
Support vectors: the training points lying on the margin; they alone determine the separating hyperplane w.
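A toy illustration with scikit-learn (the data is made up; a large C approximates a hard margin):

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0., 0.], [0., 1.], [1., 0.],    # negative examples
              [3., 3.], [3., 4.], [4., 3.]])   # positive examples
y = np.array([-1, -1, -1, +1, +1, +1])

clf = SVC(kernel="linear", C=1e3).fit(X, y)
print(clf.support_vectors_)        # the margin-defining training points
print(clf.coef_, clf.intercept_)   # hyperplane parameters w and b
```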
Histograms of Oriented Gradients (HOG)
Slides by Pete Barnum; Navneet Dalal and Bill Triggs, "Histograms of Oriented Gradients for Human Detection", CVPR 2005.
Gradient computation: candidate derivative masks include centered, uncentered, cubic-corrected, diagonal, and Sobel.
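A sketch of this first HOG stage with the simple centered mask [-1, 0, 1] (the function name and float convention are mine):

```python
import numpy as np

def gradients_centered(img):
    """Per-pixel gradient magnitude and orientation from centered
    differences along x and y (borders left at zero)."""
    img = img.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    return np.hypot(gx, gy), np.arctan2(gy, gx)   # magnitude, angle
```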
Neural Network based Face Detection
Cited from "Neural Network-Based Face Detection", Henry A. Rowley, Ph.D. thesis, CMU, May 1999. (Slides from EE465: Introduction to Digital Image Processing, © Xin Li 2003.)
Vehicle Detection
Intelligent vehicles aim to improve driving safety through machine vision techniques.
Chain Code
A chain code describes a contour as the sequence of direction numbers (e.g., the eight compass directions 0-7) from each boundary pixel to the next. (Example figures omitted.)
This example shows that the chain code, once normalized, is independent of location, starting point, and orientation: translation invariance is automatic, choosing the minimal circular shift removes the starting-point dependence, and taking first differences removes the orientation dependence (for rotations by multiples of 45°).
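A sketch of both steps (the direction numbering assumes x to the right and y upward; adjust for image coordinates where y grows downward):

```python
# 8-connected direction numbers: 0 = E, 1 = NE, 2 = N, ..., 7 = SE
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(contour):
    """Direction numbers between consecutive pixels of a closed,
    8-connected contour given as (x, y) points."""
    n = len(contour)
    return [DIRS[(contour[(i + 1) % n][0] - contour[i][0],
                  contour[(i + 1) % n][1] - contour[i][1])]
            for i in range(n)]

def normalize(code):
    """First differences (mod 8) remove 45-degree rotations; the
    lexicographically smallest circular shift fixes the start point."""
    diff = [(code[(i + 1) % len(code)] - code[i]) % 8
            for i in range(len(code))]
    return min(diff[i:] + diff[:i] for i in range(len(diff)))
```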
Segmentation
Roughly speaking, segmentation partitions an image into meaningful parts that are relatively homogeneous in some sense.
Segmentation by Fitting a Model
One view of segmentation is to group pixels (tokens, etc.) that belong together because they conform to some model. In many cases explicit models are available, such as a line. Note that in an image, a line may consist of pixels that are not connected, or even close to each other.
Canny and Hough Together
Hough Transform
It locates straight lines, straight line intervals, circles, algebraic curves, and arbitrary specific shapes in an image. But you pay progressively for the complexity of the shape in time and memory usage.
Hough Transform for circles
You need three parameters to describe a circle, so the vote space is three-dimensional.
First Parameterization of Hough Transform for lines
Hough Transform – cont.
Straight line case: consider a single isolated edge point (xi, yi). An infinite number of lines could pass through this point, and each of these lines is characterized by some particular equation.
Line detection
Mathematical model of a line in the (x, y) image: y = mx + n. Every point P(xi, yi) on the line satisfies yi = m xi + n, from y1 = m x1 + n up to yN = m xN + n.
Image and Parameter Spaces
In image space (axes x and y), the line y = m'x + n' passes through the points (x1, y1), …, (xN, yN), each satisfying yi = m'xi + n'. In parameter space (axes slope m and intercept n), that whole line collapses to the single point (m', n'). A line in image space corresponds to a point in parameter space.
Looking at it backwards …
The relation y1 = m x1 + n can be re-written as n = -x1 m + y1. Fix (x1, y1) and vary (m, n): this traces out the line n = -x1 m + y1 in parameter space. So a point in image space corresponds to a line in parameter space.
Least squares line fitting
Data: (x1, y1), …, (xn, yn). Line equation: yi = m xi + b. Find (m, b) to minimize the sum of squared residuals Σi (yi - m xi - b)². Stacking one equation per point as A p = y, with rows [xi 1] and p = (m, b), Matlab solves this with p = A \ y;
Modified from S. Lazebnik
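The numpy equivalent of Matlab's `p = A \ y` (the data values are made up):

```python
import numpy as np

x = np.array([0., 1., 2., 3.])
y = np.array([0.1, 1.9, 4.1, 5.9])          # roughly y = 2x

A = np.column_stack([x, np.ones_like(x)])   # one row [x_i, 1] per point
(m, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(m, b)  # minimizes sum_i (y_i - m*x_i - b)^2
```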
Hough transform algorithm
1. Find all of the desired feature points in the image.
2. For each feature point: for each cell i in the accumulator corresponding to a shape that passes through the feature point, increment that position in the accumulator.
3. Find local maxima in the accumulator.
4. If desired, map each maximum in the accumulator back to image space.
A sketch for the straight-line case follows.
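This sketch implements steps 1-2 with the slope-intercept parameterization from the earlier slides (the parameter ranges and bin count are illustrative assumptions; note this parameterization cannot represent vertical lines, which is one reason the ρ-θ form is often preferred):

```python
import numpy as np

def hough_lines(points, m_range=(-2., 2.), n_range=(-100., 100.), bins=200):
    """Vote in (slope m, intercept n) space: each feature point (x, y)
    contributes the line n = -x*m + y. Peaks = detected lines."""
    ms = np.linspace(m_range[0], m_range[1], bins)
    acc = np.zeros((bins, bins), dtype=int)
    n_lo, n_hi = n_range
    for x, y in points:
        ns = y - ms * x                                  # n for each m bin
        idx = np.round((ns - n_lo) / (n_hi - n_lo) * (bins - 1)).astype(int)
        ok = (idx >= 0) & (idx < bins)
        acc[np.arange(bins)[ok], idx[ok]] += 1
    return acc, ms   # then: find local maxima, map back to image space
```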
HT for Circles
Extend the HT to other shapes that can be expressed parametrically: a circle with fixed radius r and centre (a, b) satisfies (x1 - a)² + (x2 - b)² = r². The accumulator array must be 3D unless the circle radius r is known. Re-arranging the equation, every point (x1, x2) on a circle edge votes for the range of possible centres for a given r.
Hough Transform – cont.
Here the radius is fixed, so the votes accumulate in a 2D array over candidate centres; a sketch follows.
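A sketch for the fixed-radius case (the image shape argument and the 360-sample discretization are assumptions):

```python
import numpy as np

def hough_circle_fixed_r(edge_points, r, shape):
    """Each edge point (x, y) votes for every centre (a, b) lying at
    distance r from it; the peak of the 2D accumulator is the centre."""
    H, W = shape
    acc = np.zeros((H, W), dtype=int)
    thetas = np.linspace(0., 2 * np.pi, 360, endpoint=False)
    for x, y in edge_points:
        a = np.round(x - r * np.cos(thetas)).astype(int)
        b = np.round(y - r * np.sin(thetas)).astype(int)
        ok = (a >= 0) & (a < W) & (b >= 0) & (b < H)
        np.add.at(acc, (b[ok], a[ok]), 1)   # accumulate repeated votes
    return acc
```

For an unknown radius, wrap this in a loop over candidate r values: that is exactly the 3D accumulator from the earlier slide.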
Hough circle fitting
(Example result figures omitted.)