
1 Multi-View Learning in the Presence of View Disagreement C. Mario Christoudias, Raquel Urtasun, Trevor Darrell UC Berkeley EECS & ICSI MIT CSAIL

2 The World is Multi-view Many datasets are composed of multiple feature sets, or views

3 Learning from Multiple Information Sources Multi-view learning methods exploit view redundancy to learn from partially labeled data Can be advantageous over learning with only a single view [Blum et al., '98], [Kakade et al., '07] "Weaknesses of one view complement the strengths of the other"

4 Dealing with Noise Multi-view learning approaches have difficulty dealing with noisy observations Methods have been proposed that model stream reliability [Yan et al., '05], [Yu et al., '07] [diagram: views 1 through V with noiseless and corrupted samples]

5 Dealing with Noise More generally, view corruption is non-uniform [diagram: views 1 through V, with corrupted samples drawn from a "neutral" or "background" class]

6 View disagreement View disagreement can be caused by view corruption –The views of a sample then belong to different classes Audio-Visual Examples: –Uni-modal expression (person says `yes' without nodding) –Temporary view occlusions (person temporarily covers mouth while speaking)

7 Our Approach Consider view disagreement caused by view corruption Detect and filter samples with view disagreement using an information theoretic measure based on conditional view entropy

8 Related Work View disagreement is a new type of view insufficiency Multi-view learning with insufficient views –Co-regularization [Collins et al., '99], [Sindhwani et al., '05] –View validation [Muslea et al., '02], [Naphade et al., '05], [Yu et al., '07] –Multi-view manifold learning [Ando et al., '07], [Kakade et al., '07] Previous methods still rely on the samples in all views belonging to the same class

9 Multi-View Bootstrapping Co-training [Blum & Mitchell, '98] –Mutually bootstrap a set of classifiers from partially labeled data Cross-view Bootstrapping –Learn a classifier in one modality from the labels provided by a classifier in another modality

10 Bootstrapping One View from the Other Extrapolate from high-confidence labels in the other modality [diagram: audio classifier tested on unlabeled audio data; label legend: positive, negative, no label]

11 Bootstrapping One View from the Other Extrapolate from high-confidence labels in the other modality [diagram: high-confidence audio labels transferred to the video stream]

12 Bootstrapping One View from the Other Extrapolate from high-confidence labels in the other modality [diagram: video classifier trained from the transferred labels]
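The cross-view bootstrapping on slides 10-12 can be sketched as follows; the nearest-mean classifier and the confidence threshold are stand-ins for illustration, not the authors' models:

```python
import numpy as np

class NearestMeanClassifier:
    """Toy stand-in for the audio/video classifiers in the slides."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict_with_confidence(self, X):
        # distance of each sample to each class mean
        d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        pred = self.classes_[d.argmin(axis=1)]
        conf = 1.0 / (1.0 + d.min(axis=1))  # higher = closer to a class mean
        return pred, conf

def cross_view_bootstrap(X_audio_seed, y_seed, X_audio, X_video, conf_thresh=0.5):
    """Test the audio classifier on unlabeled audio data, then train the
    video classifier on the samples whose audio labels are high-confidence."""
    audio_clf = NearestMeanClassifier().fit(X_audio_seed, y_seed)
    pred, conf = audio_clf.predict_with_confidence(X_audio)
    keep = conf >= conf_thresh
    return NearestMeanClassifier().fit(X_video[keep], pred[keep])
```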

13 Co-training [Blum and Mitchell, '98] Learns from partially labeled data by mutually bootstrapping a set of classifiers on multi-view data Assumptions –Class conditional independence –Sufficiency Applied to: –Text classification (Collins and Singer, '99) –Visual object detection (Levin et al., '03) –Information retrieval (Yan and Naphade, '05)

14 Co-training Algorithm Start with a seed set of labeled examples [diagram: audio and video data with seed labels; label legend: positive, negative, no label]

15 Co-training Algorithm Step 1: Train classifiers on seed set [diagram: audio classifier trained on the labeled audio samples]

16 Co-training Algorithm Step 1: Train classifiers on seed set [diagram: video classifier trained on the labeled video samples]

17-20 Co-training Algorithm Step 2: Evaluate on unlabeled data, add N most confident examples from each view [diagram: animation of both classifiers tested on the unlabeled samples]

21 Co-training Algorithm Iterate steps 1 and 2 until done
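The loop on slides 14-21 can be sketched as follows, using a toy nearest-mean model in place of the audio and video classifiers (a minimal sketch, not the authors' implementation):

```python
import numpy as np

def fit_means(X, y):
    """Nearest-mean model: (classes, per-class means)."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_scored(model, X):
    classes, means = model
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return classes[d.argmin(axis=1)], -d.min(axis=1)  # higher score = more confident

def co_train(X1, X2, y, labeled, n_add=2, n_iter=5):
    """Slides 14-21: repeat (step 1: train a per-view classifier on the
    labeled pool; step 2: each view labels its N most confident unlabeled
    samples). `y` is trusted only at the `labeled` indices."""
    y, labeled = y.copy(), set(labeled)
    for _ in range(n_iter):
        idx = np.array(sorted(labeled))
        models = [fit_means(X1[idx], y[idx]), fit_means(X2[idx], y[idx])]
        for model, X in zip(models, (X1, X2)):
            unlab = np.array([i for i in range(len(y)) if i not in labeled])
            if unlab.size == 0:
                return y, labeled
            pred, score = predict_scored(model, X[unlab])
            for j in np.argsort(-score)[:n_add]:
                y[unlab[j]] = pred[j]
                labeled.add(int(unlab[j]))
    return y, labeled
```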

22 View Disagreement Example: Normally Distributed Classes

23 Conventional Co-training under View Disagreement

24 Our Approach: Key Assumption Given n foreground classes and a background class: –Foreground classes can only co-occur with the same class or background –The background class can co-occur with any of the n+1 classes This is a reasonable assumption for audio-visual problems

25 Our Approach: Notional Example Conditioning on a foreground sample gives a distribution with `low' entropy; conditioning on a background sample gives a distribution with `high' entropy.

26 Conditional Entropy Measure Let m(·) be an indicator function over view pairs (x_i, x_j) that detects foreground samples x_j^k: m is 1 when the conditional entropy of x_i given x_j = x_j^k falls below the mean conditional entropy H̄_ij. Densities are computed with a kernel density estimate p(x) [Silverman, '70].
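A minimal 1-D sketch of the conditional-entropy computation, assuming a Gaussian kernel with bandwidth h (the transcript does not give the exact estimator, so the details here are illustrative):

```python
import numpy as np

def conditional_entropy(xi, xj, xj_query, h=0.5, n_grid=400):
    """Estimate H(x_i | x_j = xj_query) from paired 1-D samples using a
    Gaussian kernel density estimate, as on slide 26."""
    grid = np.linspace(xi.min() - 3 * h, xi.max() + 3 * h, n_grid)
    dx = grid[1] - grid[0]
    # weight each training pair by its kernel affinity to the query in view j
    w = np.exp(-0.5 * ((xj - xj_query) / h) ** 2)
    w /= w.sum()
    # conditional density of x_i on the grid: kernel mixture with those weights
    p = (w[None, :] * np.exp(-0.5 * ((grid[:, None] - xi[None, :]) / h) ** 2)).sum(axis=1)
    p /= p.sum() * dx
    # differential entropy by numerical integration
    mask = p > 1e-12
    return float(-(p[mask] * np.log(p[mask])).sum() * dx)
```

Conditioning on a foreground sample (a tight cluster in the other view) yields a lower value than conditioning on a background sample, which is the signal the indicator m(·) thresholds.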

27 Redundant Sample Detection A multi-view sample x^k is a redundant foreground sample if both of its views are detected as foreground, m(x_i^k) ∧ m(x_j^k), and a redundant background sample if neither view is, ¬m(x_i^k) ∧ ¬m(x_j^k).

28 View Disagreement Detection Two views of a multi-view sample x^k are in view disagreement if m(x_i^k) ⊕ m(x_j^k), where ⊕ is the logical xor operator. This test defines a modified co-training algorithm.
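The detection rules on slides 27-28 reduce to boolean logic over the per-view indicator values; a sketch, where m_i and m_j stand for m(x_i^k) and m(x_j^k):

```python
def redundant_foreground(m_i, m_j):
    """Both views flagged as foreground (slide 27)."""
    return m_i and m_j

def redundant_background(m_i, m_j):
    """Neither view flagged as foreground (slide 27)."""
    return (not m_i) and (not m_j)

def view_disagreement(m_i, m_j):
    """Exactly one view flagged as foreground: logical xor (slide 28)."""
    return bool(m_i) != bool(m_j)
```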

29 Co-training in the Presence of View Disagreement Start with a seed set of labeled examples [diagram: audio and video data with separate audio and video labels; label legend: positive, negative, background, no label]

30 Co-training in the Presence of View Disagreement Step 1: Train classifiers on seed set [diagram: audio classifier trained on the labeled audio samples]

31 Co-training in the Presence of View Disagreement Step 1: Train classifiers on seed set [diagram: video classifier trained on the labeled video samples]

32-35 Co-training in the Presence of View Disagreement Step 2: Evaluate on unlabeled data, add N most confident examples from each view [diagram: animation of both classifiers tested on the unlabeled samples]

36-38 Co-training in the Presence of View Disagreement Step 3: Map labels across views using the conditional-entropy measure [diagram: redundant samples have their labels transferred; view-disagreement samples are rejected by the measure]

39 Co-training in the Presence of View Disagreement Iterate steps 1 through 3 until done
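The label-mapping step above amounts to gating the cross-view label transfer on the disagreement test; a minimal sketch (the function and variable names are assumptions, not the authors' code):

```python
def filter_transferable(indices, m_view1, m_view2):
    """Step 3 (slides 36-38): before transferring a label across views,
    keep only redundant samples, i.e. those whose two views get the same
    decision under the foreground indicator m(.), and drop the samples
    flagged as view disagreement (xor of the two indicator values)."""
    keep = [k for k in indices if bool(m_view1[k]) == bool(m_view2[k])]
    dropped = [k for k in indices if bool(m_view1[k]) != bool(m_view2[k])]
    return keep, dropped
```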

40 Normally Distributed Classes: Results

41 Real Data Agreement from head gesture and speech –Head gesture: nod/shake –Speech: ‘yes’ or ‘no’ –15 subjects, 103 questions Simulated view disagreement –Background segments in visual domain –Babble noise in audio

42 Experimental Setup Single-frame audio and video observations Bayes classifier for audio and visual gesture recognition, with Gaussian p(x|y), where y is the label and x an audio or video observation Randomly split subjects into 10 training and 5 test subjects Results averaged over 5 splits
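The per-view classifier described on this slide can be sketched as a Gaussian Bayes classifier; a minimal version, in which the diagonal covariance and frequency-based priors are my assumptions:

```python
import numpy as np

class GaussianBayes:
    """Bayes classifier with Gaussian class-conditionals p(x|y) (slide 42).
    Diagonal covariance assumed; priors estimated from class frequencies."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes_])
        self.log_prior_ = np.log([np.mean(y == c) for c in self.classes_])
        return self

    def predict(self, X):
        # maximize log p(x|y) + log p(y) over classes
        ll = -0.5 * (((X[:, None, :] - self.mu_) ** 2) / self.var_
                     + np.log(2 * np.pi * self.var_)).sum(axis=2)
        return self.classes_[(ll + self.log_prior_).argmax(axis=1)]
```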

43 Cross-View Bootstrapping Experiment Bootstrap the visual classifier from audio labels

44 Co-training Experiment Learn both audio and video classifiers

45 Conclusions and Future Work Investigated the problem of view disagreement in multi-view learning Proposed an information-theoretic measure to detect view disagreement due to view corruption On an audio-visual user agreement task our method was robust to gross amounts of view disagreement (50%-70%) Future Work –More general view disagreement distributions –Integrate view disagreement uncertainty into co-training

