1
Recognizing Human Actions by Attributes CVPR 2011 Jingen Liu, Benjamin Kuipers, Silvio Savarese Dept. of Electrical Engineering and Computer Science University of Michigan
2
Outline Introduction Our Contributions Attribute-Based Action Representation Learning Data-Driven Attributes Knowledge Transfer Across Classes Experiments and Discussion
3
Introduction Traditional approaches for human action recognition map low-level features directly to class labels (e.g., the action "golf-swinging"). Human actions are better described by action attributes.
4
Two problems with manually specified attributes ◦ Subjective ◦ Incomplete → addressed by data-driven attributes Intra-class variability ◦ addressed by treating attributes as latent variables in an SVM
6
Our Contributions Show that action attributes can be used to improve human action recognition Treat manually-specified attributes as latent variables Integrate manually-specified and data-driven attributes
7
The attribute representation is useful for recognizing novel action classes without training examples, and significantly boosts traditional action classification.
8
Attribute-Based Action Representation Previous works represent actions with low-level features only; we instead define an action attribute space.
9
Example ◦ five attributes “translation of torso”, “up-down torso motion”, “arm motion”, “arm over shoulder motion”, “leg motion” ◦ action class “walking” represented by a binary vector {1, 0, 1, 0, 1}
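The binary encoding above can be sketched as a simple lookup; the signature for "jumping" below is hypothetical, added only for contrast:

```python
# Sketch of the attribute-based representation (illustrative values only).
# Each action class is described by a binary vector over a fixed attribute list.
ATTRIBUTES = [
    "translation of torso",
    "up-down torso motion",
    "arm motion",
    "arm over shoulder motion",
    "leg motion",
]

CLASS_SIGNATURES = {
    "walking": [1, 0, 1, 0, 1],  # from the slide
    "jumping": [0, 1, 0, 0, 1],  # hypothetical signature for illustration
}

def active_attributes(action):
    """Return the names of the attributes present in an action class."""
    return [name for name, bit in zip(ATTRIBUTES, CLASS_SIGNATURES[action]) if bit]
```

For "walking" this recovers the three attributes its binary vector switches on.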
10
By introducing an attribute layer between the low-level features and the action class labels, the classifier f that maps a feature x to a class label is decomposed into attribute prediction followed by class prediction.
11
Attributes as Latent Variables We want to learn a classification model for recognizing an unknown action x. Treating attributes as latent variables, we consider each attribute in the space as a latent variable ai ∈ [0, 1].
12
Goal: learn a classifier fw to predict the class label of a new video x
13
Notation ◦ Raw feature: x ◦ Class label: y ◦ Attributes: a ◦ Weight vector: w
14
The first term provides the score measuring how well the raw feature x matches the action class y.
15
The second term provides the score of an individual attribute, and is used to indicate the presence of that attribute in the video x.
16
The third term captures the co-occurrence of a pair of attributes aj and ak.
17
The parameter vector w is learned from a training dataset.
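The three terms above can be sketched as one scoring function over a binary latent attribute vector a; the weight shapes are simplified stand-ins for the paper's joint feature map, and the exhaustive search over a is only viable at toy scale:

```python
import numpy as np
from itertools import product

def score(w_cls, w_att, w_pair, x, a):
    """Score of (x, a): class/raw-feature term + individual-attribute term
    + pairwise attribute co-occurrence term. a is a binary vector."""
    s = w_cls @ x        # how well the raw feature matches the class
    s += w_att @ a       # presence of individual attributes (simplified)
    s += a @ w_pair @ a  # co-occurrence of attribute pairs
    return s

def predict_attributes(w_cls, w_att, w_pair, x, n_att):
    """Infer the best latent attribute vector by exhaustive search."""
    best = max(product([0, 1], repeat=n_att),
               key=lambda a: score(w_cls, w_att, w_pair, x, np.array(a)))
    return np.array(best)
```

In the latent SVM the maximization over a is carried out inside training as well, which is what makes the attributes latent rather than fixed labels.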
18
Learning Data-Driven Attributes Manual specification of attributes is subjective and may be incomplete, so we also discover data-driven attributes automatically.
19
The Mutual Information (MI) ◦ a good measure of the quality of a grouping Given two random variables ◦ X ∈ X = {x1, x2, ..., xn} ◦ Y ∈ Y = {y1, y2, ..., ym} ◦ where X represents a set of visual words and Y is a set of action videos MI(X; Y) = Σx,y p(x, y) log [ p(x, y) / (p(x) p(y)) ]
20
Given a set of features, we wish to obtain a set of clusters; the quality of a clustering is measured by the loss of MI it incurs.
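The MI criterion can be sketched as follows: a joint count matrix over visual words (rows) and action videos (columns) stands in for real data, and the MI lost by merging two words into one cluster is what a greedy clustering would try to minimize. The count matrices are hypothetical:

```python
import numpy as np

def mutual_information(counts):
    """MI(X; Y) from a joint count matrix:
    sum over (x, y) of p(x,y) * log(p(x,y) / (p(x) p(y)))."""
    p = counts / counts.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal over visual words
    py = p.sum(axis=0, keepdims=True)   # marginal over videos
    mask = p > 0                        # skip zero cells (0 * log 0 = 0)
    return float((p[mask] * np.log(p[mask] / (px @ py)[mask])).sum())

def mi_loss_of_merge(counts, i, j):
    """MI lost by merging visual words i and j into a single cluster."""
    merged = np.delete(counts, j, axis=0)
    merged[i if i < j else i - 1] = counts[i] + counts[j]
    return mutual_information(counts) - mutual_information(merged)
```

Merging two words with identical distributions over videos loses no information, while merging words concentrated on different videos is costly.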
21
We integrate the discovery of data-driven attributes into the framework of the latent SVM: h ∈ H, where H is the data-driven attribute space.
22
Knowledge Transfer Across Classes We transfer knowledge from known classes (with training examples) to a novel class (without training examples), and use this knowledge to recognize instances of the novel class.
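Attribute-based transfer can be sketched in two steps: attribute classifiers trained on the known classes score each attribute in a test video, and the video is assigned to the novel class whose attribute signature best matches those scores. The signatures and the matching rule below are illustrative, not the paper's exact formulation:

```python
import numpy as np

# Hypothetical binary attribute signatures for two novel classes
# (classes with no training videos of their own).
NOVEL_SIGNATURES = {
    "jump-forward": np.array([1, 1, 0, 0, 1]),
    "hand-clap":    np.array([0, 0, 1, 1, 0]),
}

def recognize_novel(attribute_scores):
    """Pick the novel class whose signature best matches the attribute
    scores predicted by classifiers trained on the known classes."""
    return max(NOVEL_SIGNATURES,
               key=lambda c: attribute_scores @ NOVEL_SIGNATURES[c])
```

Because the attribute classifiers never saw the novel classes, all the transferred knowledge lives in the shared attribute space.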
24
Experiments and Discussion Datasets and Action Attributes Experimental Results Experiments on Olympic Sports Dataset
25
Datasets and Action Attributes UIUC Dataset ◦ 532 videos of 14 actions such as walk, hand-clap, jump-forward, … Combining existing datasets into a larger one ◦ KTH dataset: six classes and about 2,300 videos ◦ Weizmann dataset: 10 classes and about 100 videos ◦ UIUC dataset Olympic Sports dataset ◦ collected from YouTube; it contains realistic human actions
26
Experimental Results Recognizing novel action classes Attributes boosting traditional action recognition
27
Recognizing novel action classes We use the leave-two-classes-out cross-validation strategy in experiments on the UIUC dataset: each run leaves two classes out as novel classes (|Z| = 2).
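The leave-two-classes-out protocol can be sketched with itertools.combinations; the class names used in the test are placeholders:

```python
from itertools import combinations

def leave_two_classes_out(classes):
    """Yield (train_classes, novel_classes) splits: each run holds out
    two classes as novel (|Z| = 2) and trains on the rest."""
    for novel in combinations(classes, 2):
        train = [c for c in classes if c not in novel]
        yield train, list(novel)

# For the 14 UIUC classes this produces C(14, 2) = 91 runs.
```

The reported accuracy for novel-class recognition is then the average over all such splits.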
28
The average accuracy of leave-two-classes- out-cross-validation on the UIUC dataset for recognizing novel action classes.
29
Divide the UIUC dataset into two disjoint sets ◦ Y: training set containing 10 action classes ◦ Z: testing set containing four classes The testing and training classes share some common attributes.
31
Example (a)
32
Attributes boosting traditional action recognition We use the proposed framework to show that action attributes do improve the performance of traditional action recognition. Our results demonstrate that a significant improvement occurs with the use of manually-specified attributes.
33
To further demonstrate the correlation between manually-specified and data-driven attributes, we construct a dissimilarity map from the training data.
34
Dissimilarity between 100 data-driven attributes (rows) and 34 manually-specified attributes (columns); colder colors indicate lower values.
35
The effect of removing a set of human-specified attributes: some specified attributes (e.g., the human-specified attribute set a = {1, 8, 9, 10, 11}, as columns) are more correlated with data-driven attributes.
36
◦ “Specified attributes” means using only this type of attribute for recognition ◦ “B” indicates the performance before attribute removal ◦ “A” indicates the performance after attribute removal ◦ “Mixed Attributes” means using both manually-specified and data-driven attributes for recognition
37
Using manually-specified attributes only, removing the human-specified attribute set a = {1, 8, 9, 10, 11} drops performance from 72% to 64%.
38
Using both manually-specified and data-driven attributes, removing the same human-specified attribute set a = {1, 8, 9, 10, 11} does not cause an obvious performance decrease.
39
Experiments on Olympic Sports Dataset We use the Olympic Sports dataset, which contains 16 action classes and about 781 videos, for both recognizing novel action classes and traditional training-based recognition.
40
The performance of recognizing novel testing classes: five cases, each with 4 classes used for testing and 12 classes used for training.
41
THANK YOU !