Tamara Berg
Machine Learning 790-133: Recognizing People, Objects, & Actions
Announcements
Topic presentation groups are posted. Anyone not have a group yet?
Today is the last day of background material. For Monday, object recognition papers will be posted online. Please read them!
What is machine learning?
Computer programs that can learn from data. Two key components:
- Representation: how should we represent the data?
- Generalization: the system should generalize from its past experience (observed data items) to perform well on unseen data items.
Types of ML algorithms
- Unsupervised: algorithms operate on unlabeled examples.
- Supervised: algorithms operate on labeled examples.
- Semi/partially-supervised: algorithms combine both labeled and unlabeled examples.
Unsupervised Learning
K-means clustering
Goal: minimize the sum of squared Euclidean distances between points x_i and their nearest cluster centers m_k.
Algorithm:
1. Randomly initialize K cluster centers.
2. Iterate until convergence:
   - Assign each data point to the nearest center.
   - Recompute each cluster center as the mean of all points assigned to it.
(source: Svetlana Lazebnik)
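To make the two-step loop concrete, here is a minimal NumPy sketch of the iteration described above (the data matrix X and cluster count K are assumed inputs; this is illustrative, not an optimized implementation):

```python
import numpy as np

def kmeans(X, K, n_iters=100, seed=0):
    """Minimal k-means: X is an (N, D) array, K is the number of clusters."""
    rng = np.random.default_rng(seed)
    # Randomly initialize K cluster centers by picking K data points.
    centers = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(n_iters):
        # Assignment step: each point goes to its nearest center.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center becomes the mean of its assigned points.
        new_centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                                else centers[k] for k in range(K)])
        if np.allclose(new_centers, centers):
            break  # converged
        centers = new_centers
    return centers, labels
```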
Different clustering strategies
- Agglomerative clustering: start with each point in a separate cluster; at each iteration, merge two of the "closest" clusters.
- Divisive clustering: start with all points grouped into a single cluster; at each iteration, split the "largest" cluster.
- K-means clustering: iterate between assigning points to clusters and computing the cluster means.
- K-medoids: same as k-means, except the cluster center cannot be computed by averaging; the "medoid" of each cluster is its most centrally located point (i.e., the point with the lowest average distance to the other points in that cluster).
(source: Svetlana Lazebnik)
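As a contrast with the k-means update above, here is a hedged NumPy sketch of the k-medoids update step, where each center must be an actual data point (names are illustrative, and every cluster is assumed to be non-empty):

```python
import numpy as np

def kmedoids_update(X, labels, K):
    """One k-medoids update: for each cluster, pick the point with the
    lowest average distance to the other points in that cluster."""
    medoids = np.empty((K, X.shape[1]))
    for k in range(K):
        members = X[labels == k]
        # Pairwise Euclidean distances within the cluster.
        d = np.linalg.norm(members[:, None, :] - members[None, :, :], axis=2)
        medoids[k] = members[d.mean(axis=1).argmin()]
    return medoids
```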
Supervised Learning
Slides 21-24: from Dan Klein.
Example: Image classification
Input: an image; desired output: its category label (e.g., apple, pear, tomato, cow, dog, horse).
(Slide credit: Svetlana Lazebnik)
Example: handwritten digit recognition (the MNIST dataset), http://yann.lecun.com/exdb/mnist/index.html
(Slide from Dan Klein)
Example: Seismic data
Features: body wave magnitude and surface wave magnitude; classes: nuclear explosions vs. earthquakes.
(Slide credit: Svetlana Lazebnik)
Slide 28: from Dan Klein.
The basic classification framework: y = f(x), where x is the input, f is the classification function, and y is the output.
- Learning: given a training set of labeled examples {(x_1, y_1), ..., (x_N, y_N)}, estimate the parameters of the prediction function f.
- Inference: apply f to a never-before-seen test example x and output the predicted value y = f(x).
(Slide credit: Svetlana Lazebnik)
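As an illustration of the learning/inference split, here is a hedged sketch of a nearest-class-mean classifier: the fit step estimates the parameters of f (one mean per class) from labeled data, and the predict step applies f to unseen inputs (class and method names are illustrative):

```python
import numpy as np

class NearestMeanClassifier:
    def fit(self, X, y):
        # Learning: estimate one mean vector per class from labeled examples.
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Inference: assign each test point to the class with the nearest mean.
        d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]
```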
Some ML classification methods
- Nearest neighbor (scales to ~10^6 examples): Shakhnarovich, Viola, Darrell 2003; Berg, Berg, Malik 2005; ...
- Neural networks: LeCun, Bottou, Bengio, Haffner 1998; Rowley, Baluja, Kanade 1998; ...
- Support Vector Machines and kernels: Guyon, Vapnik; Heisele, Serre, Poggio 2001; ...
- Conditional Random Fields: McCallum, Freitag, Pereira 2000; Kumar, Hebert 2003; ...
(Slide credit: Antonio Torralba)
Example: Training and testing
- Training set: labels known. Test set: labels unknown.
- Key challenge: generalization to unseen examples.
(Slide credit: Svetlana Lazebnik)
Slide 32: from Dan Klein.
Classification by Nearest Neighbor
Word-vector document classification; here the vector space is illustrated as having 2 dimensions. How many dimensions would the data actually live in?
(Slide from Min-Yen Kan)
Classification by Nearest Neighbor
Classify the test document as the class of the document "nearest" to the query document (use vector similarity to find the most similar document).
(Slide from Min-Yen Kan)
Classification by kNN
Classify the test document as the majority class of the k documents "nearest" to the query document.
(Slide from Min-Yen Kan)
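A hedged sketch of kNN document classification using cosine similarity over word-count vectors (the vector representation and variable names are illustrative assumptions):

```python
import numpy as np
from collections import Counter

def knn_predict(query_vec, doc_vecs, doc_labels, k=3):
    """Label a query document by majority vote over its k nearest documents."""
    # Cosine similarity between the query and every training document.
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-12)
    nearest = np.argsort(-sims)[:k]          # indices of the k most similar docs
    votes = Counter(doc_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]        # majority class
```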
Classification by kNN: What are the features? What's the training data? Testing data? Parameters?
(Slide from Min-Yen Kan)
Slides 38-42: from Min-Yen Kan.
Classification by kNN (revisited): What are the features? What's the training data? Testing data? Parameters?
(Slide from Min-Yen Kan)
NN for vision: Fast Pose Estimation with Parameter Sensitive Hashing. Shakhnarovich, Viola, Darrell.
NN for vision: J. Hays and A. Efros, Scene Completion Using Millions of Photographs, SIGGRAPH 2007.
NN for vision: J. Hays and A. Efros, IM2GPS: Estimating Geographic Information from a Single Image, CVPR 2008.
Decision tree classifier
Example problem: decide whether to wait for a table at a restaurant, based on the following attributes:
1. Alternate: is there an alternative restaurant nearby?
2. Bar: is there a comfortable bar area to wait in?
3. Fri/Sat: is today Friday or Saturday?
4. Hungry: are we hungry?
5. Patrons: number of people in the restaurant (None, Some, Full)
6. Price: price range ($, $$, $$$)
7. Raining: is it raining outside?
8. Reservation: have we made a reservation?
9. Type: kind of restaurant (French, Italian, Thai, Burger)
10. WaitEstimate: estimated waiting time (0-10, 10-30, 30-60, >60 minutes)
(Slide credit: Svetlana Lazebnik)
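A hedged sketch of learning such a classifier with scikit-learn on data of this kind (the toy rows, the subset of attributes, and their numeric encoding are invented purely for illustration):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy training data: [Patrons (0=None,1=Some,2=Full), Hungry (0/1), WaitEstimate in minutes]
X = np.array([[1, 1,  5],
              [2, 1, 40],
              [0, 0,  5],
              [2, 0, 70],
              [1, 0, 10],
              [2, 1, 20]])
y = np.array([1, 0, 0, 0, 1, 1])   # 1 = wait for a table, 0 = leave

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([[2, 1, 15]]))   # predict for a new restaurant situation
```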
Slides 48-49: decision tree classifier figures. (Slide credit: Svetlana Lazebnik)
Linear classifier
Find a linear function to separate the classes: f(x) = sgn(w_1 x_1 + w_2 x_2 + ... + w_D x_D) = sgn(w · x)
(Slide credit: Svetlana Lazebnik)
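A hedged sketch of learning such a linear decision function with the classic perceptron update (the learning rate and epoch count are arbitrary choices for illustration):

```python
import numpy as np

def perceptron(X, y, n_epochs=50, lr=1.0):
    """Learn w, b so that sgn(w.x + b) separates labels y in {-1, +1}, if possible."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:      # misclassified: nudge the hyperplane
                w += lr * yi * xi
                b += lr * yi
    return w, b
```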
Discriminant function
The discriminant function can be an arbitrary function of x, for example: nearest neighbor, a decision tree, or a linear function.
(Slide credit: Jinwei Gu)
Linear discriminant function
g(x) is a linear function: g(x) = w^T x + b, a hyperplane in the feature space. The boundary w^T x + b = 0 separates the region w^T x + b > 0 (predicted +1) from the region w^T x + b < 0 (predicted -1).
(Slide credit: Jinwei Gu)
Linear discriminant function
How would you classify these points (labeled +1 and -1) using a linear discriminant function in order to minimize the error rate? There are an infinite number of answers: many separating lines achieve zero training error. Which one is the best?
(Slide credit: Jinwei Gu)
Large margin linear classifier (linear SVM)
The linear discriminant function (classifier) with the maximum margin is the best. The margin is defined as the width by which the boundary could be increased before hitting a data point (the "safe zone"). Why is it the best? A larger margin implies stronger generalization ability.
(Slide credit: Jinwei Gu)
Large margin linear classifier
The decision boundary is w^T x + b = 0; the margin is bounded by the parallel planes w^T x + b = 1 and w^T x + b = -1, which pass through the closest positive (x+) and negative (x-) examples. These closest examples are the support vectors.
(Slide credit: Jinwei Gu)
Large margin linear classifier
For the support vectors we know that w^T x+ + b = 1 and w^T x- + b = -1. The margin width is therefore M = (x+ - x-) · n = 2 / ||w||, where n = w / ||w|| is the unit normal of the boundary.
(Slide credit: Jinwei Gu)
Large margin linear classifier
Formulation: maximize the margin 2/||w||, or equivalently minimize (1/2)||w||^2, while keeping every training example on the correct side of the margin:
  minimize (1/2)||w||^2
  such that y_i (w^T x_i + b) >= 1 for all i,
where y_i is +1 for positive examples and -1 for negative examples.
(Slide credit: Jinwei Gu)
Solving the optimization problem
  minimize (1/2)||w||^2  subject to  y_i (w^T x_i + b) >= 1 for all i
This is quadratic programming with linear constraints.
(Slide credit: Jinwei Gu)
Solving the optimization problem
Introducing Lagrange multipliers alpha_i >= 0 gives the Lagrangian dual problem:
  maximize sum_i alpha_i - (1/2) sum_i sum_j alpha_i alpha_j y_i y_j x_i^T x_j
  subject to alpha_i >= 0 and sum_i alpha_i y_i = 0.
(Slide credit: Jinwei Gu)
Solving the optimization problem
The solution has the form w = sum_i alpha_i y_i x_i. From the KKT condition we know that alpha_i (y_i (w^T x_i + b) - 1) = 0, so only the support vectors (points with y_i (w^T x_i + b) = 1) have alpha_i != 0.
(Slide credit: Jinwei Gu)
Solving the optimization problem
The linear discriminant function is g(x) = w^T x + b = sum_{i in SV} alpha_i y_i x_i^T x + b. Notice that it relies only on dot products between the test point x and the support vectors x_i.
(Slide credit: Jinwei Gu)
Linear separability
(Slide credit: Svetlana Lazebnik)
Non-linear SVMs: Feature space
General idea: the original input space can be mapped to some higher-dimensional feature space where the training set is separable: Φ: x → φ(x)
(Slide courtesy of www.iro.umontreal.ca/~pift6080/documents/papers/svm_tutorial.ppt)
Nonlinear SVMs: The kernel trick
With this mapping, the discriminant function becomes g(x) = sum_{i in SV} alpha_i y_i φ(x_i)^T φ(x) + b. There is no need to know the mapping explicitly, because only dot products of feature vectors are used in both training and testing. A kernel function is defined as a function that corresponds to a dot product of two feature vectors in some expanded feature space: K(x_i, x_j) = φ(x_i)^T φ(x_j).
(Slide credit: Jinwei Gu)
Nonlinear SVM: Optimization
Formulation (Lagrangian dual problem):
  maximize sum_i alpha_i - (1/2) sum_i sum_j alpha_i alpha_j y_i y_j K(x_i, x_j)
  subject to alpha_i >= 0 and sum_i alpha_i y_i = 0.
The solution gives the discriminant function g(x) = sum_{i in SV} alpha_i y_i K(x_i, x) + b. The optimization technique is the same as in the linear case.
(Slide credit: Jinwei Gu)
Nonlinear SVMs: The kernel trick
Examples of commonly used kernel functions:
- Linear kernel: K(x_i, x_j) = x_i^T x_j
- Polynomial kernel: K(x_i, x_j) = (1 + x_i^T x_j)^p
- Gaussian (Radial Basis Function, RBF) kernel: K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2))
- Sigmoid: K(x_i, x_j) = tanh(beta_0 x_i^T x_j + beta_1)
(Slide credit: Jinwei Gu)
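As a small numerical check of the kernel idea, here is a hedged NumPy sketch showing that the degree-2 polynomial kernel equals an explicit dot product in an expanded feature space (the explicit map phi is written out only for illustration; in practice it is never formed):

```python
import numpy as np

def poly_kernel(x, y):
    # Degree-2 polynomial kernel: K(x, y) = (1 + x.y)^2
    return (1.0 + x @ y) ** 2

def phi(x):
    # Explicit feature map for 2-D inputs whose dot product equals poly_kernel.
    x1, x2 = x
    return np.array([1.0, x1**2, x2**2,
                     np.sqrt(2)*x1, np.sqrt(2)*x2, np.sqrt(2)*x1*x2])

x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(poly_kernel(x, y), phi(x) @ phi(y))   # both print the same value (4.0)
```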
Support Vector Machine: Algorithm
1. Choose a kernel function.
2. Choose a value for C and any other parameters (e.g., sigma).
3. Solve the quadratic programming problem (many software packages are available).
4. Classify held-out validation instances using the learned model.
5. Select the best learned model based on validation accuracy.
6. Classify test instances using the final selected model.
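A hedged sketch of these steps using scikit-learn's SVC (the synthetic data, kernel choice, and parameter grid are illustrative assumptions, not the course's prescribed settings):

```python
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Steps 1-5: pick a kernel, then select C and gamma (~1/(2*sigma^2)) by cross-validation.
grid = GridSearchCV(SVC(kernel='rbf'),
                    param_grid={'C': [0.1, 1, 10], 'gamma': [0.01, 0.1, 1]},
                    cv=5)
grid.fit(X_train, y_train)

# Step 6: classify test instances with the selected model.
print(grid.best_params_, grid.score(X_test, y_test))
```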
Some issues
- Choice of kernel: a Gaussian or polynomial kernel is the default; if ineffective, more elaborate kernels are needed; domain experts can assist in formulating appropriate similarity measures.
- Choice of kernel parameters (e.g., sigma in the Gaussian kernel): in the absence of reliable criteria, applications rely on a validation set or cross-validation to set such parameters.
(Slide courtesy of www.iro.umontreal.ca/~pift6080/documents/papers/svm_tutorial.ppt)
Summary: Support Vector Machine
1. Large margin classifier: better generalization ability and less over-fitting.
2. The kernel trick: map data points to a higher-dimensional space in order to make them linearly separable; since only dot products are used, the mapping never needs to be represented explicitly.
(Slide credit: Jinwei Gu)
Boosting
- A simple algorithm for learning robust classifiers (Freund & Schapire, 1995; Friedman, Hastie, Tibshirani, 1998).
- Provides an efficient algorithm for sparse visual feature selection (Tieu & Viola, 2000; Viola & Jones, 2003).
- Easy to implement; doesn't require external optimization tools.
(Slide credit: Antonio Torralba)
Boosting
Defines a strong classifier using an additive model: H(x) = sum_t alpha_t h_t(x), where H is the strong classifier, the h_t are weak classifiers chosen from a family of weak classifiers, the alpha_t are weights, and x is the feature vector. We need to define a family of weak classifiers.
(Slide credit: Antonio Torralba)
Adaboost
(Slide credit: Antonio Torralba)
Boosting is a sequential procedure. Each data point x_t has a class label y_t in {+1, -1} and a weight w_t, initialized to w_t = 1.
(Slide credit: Antonio Torralba)
Toy example (weak learners are lines):
- At the start, any weak learner h from the family of lines has p(error) = 0.5 on the weighted data, i.e., it is at chance. Choose the one that seems best; this is a "weak classifier": it performs slightly better than chance.
- After each round, update the weights: w_t ← w_t exp{-y_t H_t}, so points that the current classifier gets wrong receive larger weights and matter more in the next round.
- Repeat for several rounds (f_1, f_2, f_3, f_4), each time selecting the weak classifier with the lowest weighted error.
- The strong (non-linear) classifier is built as the combination of all the weak (linear) classifiers.
(Slide credit: Antonio Torralba)
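A hedged NumPy sketch of AdaBoost with decision-stump weak learners, following the reweighting scheme sketched above (the stump search and round count are illustrative choices; labels are assumed to be in {-1, +1}):

```python
import numpy as np

def best_stump(X, y, w):
    """Weak learner: the single-feature threshold with the lowest weighted error."""
    best = (None, None, 1, np.inf)               # (feature, threshold, polarity, error)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.sign(X[:, j] - thr + 1e-12)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, s, err)
    return best

def adaboost(X, y, n_rounds=10):
    """Returns a list of ((feature, threshold, polarity), alpha) pairs."""
    w = np.ones(len(y)) / len(y)                 # initial weights
    ensemble = []
    for _ in range(n_rounds):
        j, thr, s, err = best_stump(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)    # weight of this weak classifier
        pred = s * np.sign(X[:, j] - thr + 1e-12)
        w *= np.exp(-alpha * y * pred)           # upweight misclassified points
        w /= w.sum()
        ensemble.append(((j, thr, s), alpha))
    return ensemble

def predict(ensemble, X):
    # Strong classifier: sign of the weighted sum of weak classifiers.
    H = sum(alpha * s * np.sign(X[:, j] - thr + 1e-12)
            for (j, thr, s), alpha in ensemble)
    return np.sign(H)
```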
Semi-Supervised Learning
Supervised learning has many successes: recognizing speech, steering a car, classifying documents, classifying proteins, recognizing faces and objects in images, ...
(Slide credit: Avrim Blum)
However, for many problems, labeled data can be rare or expensive: someone must be paid to label it, or it requires special testing. Unlabeled data (speech, images, medical outcomes, customer modeling, protein sequences, web pages) is much cheaper. Can we make use of cheap unlabeled data? [From Jerry Zhu]
(Slide credit: Avrim Blum)
Semi-Supervised Learning
Can we use unlabeled data to augment a small labeled sample and improve learning? On one hand, unlabeled data is missing the most important information: the labels. But it may still contain useful regularities that we can exploit.
(Slide credit: Avrim Blum)
Method 1: EM
How to use unlabeled data
One way is to use the EM algorithm (EM: Expectation Maximization). The EM algorithm is a popular iterative algorithm for maximum likelihood estimation in problems with missing data. It consists of two steps:
- Expectation step: fill in the missing data (here, the unknown labels).
- Maximization step: calculate a new maximum a posteriori estimate for the parameters.
Algorithm outline
1. Train a classifier with only the labeled documents.
2. Use it to probabilistically classify the unlabeled documents.
3. Use ALL the documents to train a new classifier.
4. Iterate steps 2 and 3 to convergence.
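A hedged sketch of this outline using a Naive Bayes text classifier from scikit-learn. A fully probabilistic EM would weight each unlabeled document by its class posterior; this simplified version assigns hard pseudo-labels each round. The feature matrices are assumed to be dense count arrays with names invented for illustration:

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def em_style_self_training(X_lab, y_lab, X_unlab, n_iters=10):
    # Step 1: train on the labeled documents only.
    clf = MultinomialNB().fit(X_lab, y_lab)
    for _ in range(n_iters):
        # Step 2: classify the unlabeled documents (hard labels here).
        pseudo = clf.predict(X_unlab)
        # Step 3: retrain on ALL documents, labeled + pseudo-labeled.
        X_all = np.vstack([X_lab, X_unlab])
        y_all = np.concatenate([y_lab, pseudo])
        clf = MultinomialNB().fit(X_all, y_all)
    return clf
```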
Method 2: Co-Training
Co-training [Blum & Mitchell '98]
Many problems have two different sources of information ("features"/"views") you can use to determine the label. For example, when classifying faculty webpages (e.g., Prof. Avrim Blum's page linked as "My Advisor"), we can use the words on the page itself (x_2, text info) or the words on the links pointing to the page (x_1, link info); the full example is x = (link info, text info).
(Slide credit: Avrim Blum)
Co-training
Idea: use the small labeled sample to learn initial rules.
- E.g., "my advisor" pointing to a page is a good indicator that it is a faculty home page.
- E.g., "I am teaching" on a page is a good indicator that it is a faculty home page.
Then look for unlabeled examples <x_1, x_2> where one view is confident and the other is not, and have the confident view label the example for the other. We are training two classifiers, one on each type of information, and using each to help train the other.
(Slide credit: Avrim Blum)
Co-training algorithm [Blum and Mitchell, 1998]
Given: labeled data L, unlabeled data U.
Loop:
- Train h1 (e.g., the hyperlink classifier) using L.
- Train h2 (e.g., the page classifier) using L.
- Allow h1 to label p positive and n negative examples from U.
- Allow h2 to label p positive and n negative examples from U.
- Add these most confident self-labeled examples to L.
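A hedged sketch of this loop (the classifier choice, the confidence-based selection, and the p/n counts are illustrative; X1 and X2 are NumPy feature matrices for the two views of the same examples, and the labels are assumed binary {0, 1} with both classes present in L):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cotrain(X1_L, X2_L, y_L, X1_U, X2_U, p=1, n=1, n_rounds=10):
    L1, L2, yL = X1_L.copy(), X2_L.copy(), y_L.copy()
    U_idx = np.arange(len(X1_U))                 # indices of still-unlabeled examples
    for _ in range(n_rounds):
        h1 = LogisticRegression(max_iter=1000).fit(L1, yL)   # view 1: e.g., hyperlinks
        h2 = LogisticRegression(max_iter=1000).fit(L2, yL)   # view 2: e.g., page text
        if len(U_idx) == 0:
            break
        newly = []
        for h, X_U in ((h1, X1_U), (h2, X2_U)):
            proba = h.predict_proba(X_U[U_idx])[:, 1]
            # Most confident positive and negative unlabeled examples for this view.
            newly += [(U_idx[i], 1) for i in np.argsort(-proba)[:p]]
            newly += [(U_idx[i], 0) for i in np.argsort(proba)[:n]]
        # A production version would avoid adding the same example twice.
        for i, label in newly:
            L1 = np.vstack([L1, X1_U[i:i+1]])
            L2 = np.vstack([L2, X2_U[i:i+1]])
            yL = np.append(yL, label)
        U_idx = np.setdiff1d(U_idx, [i for i, _ in newly])
    return h1, h2
```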
Watch, Listen & Learn: Co-training on Captioned Images and Videos
Sonal Gupta, Joohyun Kim, Kristen Grauman, Raymond Mooney. The University of Texas at Austin, U.S.A.
Goals
- Classify images and videos with the help of visual information and associated text captions.
- Use unlabeled image and video examples.
Image examples (with captions): "Cultivating farming at Nabataean Ruins of the Ancient Avdat"; "Bedouin Leads His Donkey That Carries Load Of Straw"; "Ibex Eating In The Nature"; "Entrance To Mikveh Israel Agricultural School"; "Desert Trees".
Approach
Combine two views of images and videos using the co-training learning algorithm (Blum and Mitchell '98). The views are text and visual:
- Text view: the caption of the image or video; readily available.
- Visual view: color, texture, and temporal information in the image/video.
Co-training walk-through (figures, slides 107-111): start from initially labeled instances with both text and visual views; train a text classifier and a visual classifier on them (supervised learning); apply each classifier to unlabeled instances; add the most confidently classified instances to the labeled set; retrain both classifiers.
Video features [Laptev, IJCV '05]
- Detect interest points: Harris-Förstner corner detector over both spatial and temporal dimensions.
- Describe interest points: Histogram of Oriented Gradients (HoG).
- Create a spatio-temporal vocabulary: quantize the interest-point descriptors into a dictionary of 200 visual words.
- Represent each video as a histogram of visual words.
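A hedged sketch of the quantization and histogram steps, assuming the interest-point descriptors (e.g., HoG vectors) have already been extracted; k-means plays the role of vocabulary construction, and all names are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, n_words=200):
    """Quantize training descriptors (e.g., HoG) into a visual-word dictionary."""
    km = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_descriptors)
    return km.cluster_centers_

def video_histogram(descriptors, vocab):
    """Represent one video as a normalized histogram of visual words."""
    d = np.linalg.norm(descriptors[:, None, :] - vocab[None, :, :], axis=2)
    words = d.argmin(axis=1)                 # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / max(hist.sum(), 1.0)
```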
Textual features
Raw text commentary (e.g., "That was a very nice forward camel." "He runs in to chip the ball with his right foot." "The small kid pushes the ball ahead with his tiny kicks.") is converted to a standard bag-of-words representation: apply the Porter stemmer, remove stop words, and count the remaining word stems.
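A hedged sketch of that text pipeline with NLTK and scikit-learn (the tiny caption list is invented for illustration, and NLTK's stop-word corpus must be downloaded once):

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer

nltk.download('stopwords', quiet=True)   # one-time download of the stop-word list

captions = ["That was a very nice forward camel.",
            "He runs in to chip the ball with his right foot."]

stemmer = PorterStemmer()
stop = set(stopwords.words('english'))

def preprocess(text):
    # Drop stop words, then Porter-stem the remaining tokens.
    tokens = [stemmer.stem(t) for t in re.findall(r"[a-z]+", text.lower())
              if t not in stop]
    return " ".join(tokens)

vectorizer = CountVectorizer()
bow = vectorizer.fit_transform(preprocess(c) for c in captions)  # bag-of-words matrix
print(sorted(vectorizer.vocabulary_))
```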
Conclusion
- Combining textual and visual features can help improve accuracy.
- Co-training can be useful for combining textual and visual features to classify images and videos.
- Co-training helps reduce the amount of labeling of images and videos.
[More information: http://www.cs.utexas.edu/users/ml/co-training]
Co-training vs. EM
- Co-training splits the features; EM does not.
- Co-training uses the unlabeled data incrementally; EM probabilistically labels all the data at each round, i.e., it uses the unlabeled data iteratively.