1
Yair Weiss, CS HUJI; Daphna Weinshall, CS HUJI; Amnon Shashua, CS HUJI; Yonina Eldar, EE Technion; Ron Meir, EE Technion
2
The human brain can rapidly recognize thousands of objects while using less power than modern computers use in “quiet mode”. Although there are many neurons devoted to visual recognition, only a tiny fraction fire at any given moment. Can we use machine learning to learn such representations and build systems with such performance? We believe a key component enabling this remarkable performance is the use of sparse representations, and seek to develop brain-inspired hierarchical representations for visual recognition.
3
1. Low power visual recognition - sparsity before the A2D: algorithms and theories for sparsification in the analog domain (Weiss & Eldar)
2. Extracting informative features from sensory input: an approach based on slowness (Meir & Eldar)
3. Sparsity at all levels of the hierarchy: algorithms for learning hierarchical sparse representations, from the input to the top levels (Weinshall & Shashua)
4
Research direction: Compressed Sensing for low-power cameras
We have shown that random projections are a poor fit for compressed sensing of natural images.
We are working on better linear projections that take advantage of image statistics.
We want to explore nonlinear compressed sensing.
Optimizing projections for recognition rather than for visual reconstruction.
Weiss & Eldar
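As an illustration of the claim about random projections, here is a minimal numpy sketch (not the project's actual pipeline) comparing a random measurement matrix with a projection learnt from data statistics, here simply the top principal components of a training set. The synthetic data stands in for natural image patches, and the decoder is plain least squares rather than a sparse-recovery algorithm; all names and sizes are illustrative.

import numpy as np

rng = np.random.default_rng(0)
d, m, n = 256, 32, 5000                     # patch dim, measurements, training patches

# Stand-in for correlated natural image patches (random linear mixing of Gaussians).
X = rng.standard_normal((n, d)) @ rng.standard_normal((d, d)) * 0.1
X -= X.mean(axis=0)

# "Learned" projection: top-m principal components of the training data.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
Phi_pca = Vt[:m]                            # m x d

# Random projection baseline.
Phi_rnd = rng.standard_normal((m, d)) / np.sqrt(m)

def sense_and_decode(Phi, X):
    """Linear measurements Y = X Phi^T, then minimum-norm least-squares decode."""
    Y = X @ Phi.T
    return Y @ np.linalg.pinv(Phi).T

for name, Phi in [("learned (PCA)", Phi_pca), ("random", Phi_rnd)]:
    err = np.linalg.norm(X - sense_and_decode(Phi, X)) / np.linalg.norm(X)
    print(name, "relative reconstruction error:", round(float(err), 3))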
5
Sensory signals effectively represent environmental signals.
Slow Feature Analysis (SFA) extracts features based on slowness.
The approach has been applied successfully to generate biologically plausible features, blind source separation and pattern recognition.
SFA does not deal directly with representational accuracy.
We formulate a generalized multi-objective criterion balancing representational and temporal reliability.
We obtain a feasible objective for optimization.
Preliminary results demonstrate advantages over SFA.
Future work: robustness, online learning, distributed implementation through local learning rules.
Ron Meir
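For readers unfamiliar with the slowness principle, here is a minimal numpy sketch of standard linear SFA, the baseline the slide starts from (it does not implement the generalized multi-objective criterion mentioned above): whiten the signal, then keep the directions along which the whitened signal changes most slowly.

import numpy as np

def sfa(X, n_features):
    """Minimal linear Slow Feature Analysis. X is (T, d), rows ordered in time.
    Returns a (d, n_features) projection whose outputs vary as slowly as
    possible, subject to unit variance and decorrelation (via whitening)."""
    X = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    W_white = evecs / np.sqrt(evals)           # whitening matrix (column-scaled)
    Z = X @ W_white                            # whitened signal, identity covariance
    dcov = np.cov(np.diff(Z, axis=0), rowvar=False)
    devals, devecs = np.linalg.eigh(dcov)      # eigenvalues in ascending order
    return W_white @ devecs[:, :n_features]    # slowest directions

# Toy usage: recover a slow sinusoid hidden in a random mixture with fast noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 4000)
sources = np.column_stack([np.sin(0.5 * t),
                           rng.standard_normal(t.size),
                           rng.standard_normal(t.size)])
X = sources @ rng.standard_normal((3, 3))      # observed mixed signal
P = sfa(X, n_features=1)
slow = (X - X.mean(axis=0)) @ P                # follows the sinusoid (up to sign/scale)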
6
Instance-based recognition typically relies on similarity metrics; class-based recognition typically relies on statistical learning.
Our goal: develop a unified statistical learning framework for both recognition tasks.
Cohen & Shashua
[Figure: object classes vs. object instances]
7
Cognitive psychology: the Basic-Level Category (Rosch 1976), an intermediate category level which is learnt faster and earlier than other levels in the category hierarchy.
Neurophysiology: agglomerative clustering of responses from a population of neurons in the IT cortex of macaque monkeys resembles an intuitive hierarchy (Kiani et al. 2007).
9
Goal: jointly learn classifiers for a few tasks.
Implicit goal: information sharing
◦ Achieve a more economical overall representation
◦ A way to enhance impoverished training data
◦ Knowledge transfer (learning to learn)
Our method: share information hierarchically in a cascade whose levels are automatically discovered.
Publication: Regularization Cascade for Joint Learning, Alon Zweig and Daphna Weinshall, ICML, June 2013.
10
How do we compute the classifiers? Build classifiers for all tasks, each a linear combination of classifiers computed in a cascade:
◦ Higher levels – high incentive for information sharing: more tasks participate, classifiers are less precise
◦ Lower levels – low incentive for sharing: fewer tasks participate, classifiers get more precise
How do we control the incentive to share? By varying the regularization of the loss function.
11
Regularization:
◦ restrict the number of features the classifiers can use by imposing sparse regularization – ||W||_1
◦ add another sparse regularization term which does not penalize for joint features – ||W||_{1,2}
Combined regularizer: λ||W||_{1,2} + (1 − λ)||W||_1
Incentive to share:
◦ λ = 1: highest incentive to share
◦ λ = 0: no incentive to share
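A small numpy sketch of this regularizer, assuming the convention that W has one row per feature and one column per task (so the L1,2 term sums the L2 norms of the rows, charging a shared feature only once):

import numpy as np

def mixed_reg(W, lam):
    """lam * ||W||_{1,2} + (1 - lam) * ||W||_1 for a (features x tasks) matrix."""
    l12 = np.linalg.norm(W, axis=1).sum()   # sum of per-row (per-feature) L2 norms
    l1 = np.abs(W).sum()
    return lam * l12 + (1.0 - lam) * l1

W = np.array([[1.0, 1.0, 1.0],    # a feature shared by all three tasks
              [0.0, 2.0, 0.0]])   # a feature used by a single task
print(mixed_reg(W, lam=1.0))      # pure sharing term: sqrt(3) + 2
print(mixed_reg(W, lam=0.0))      # pure L1 term: 3 + 2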
12
[Figure: example objects (African Elephant, Asian Elephant, Owl, Eagle) and their features (head, legs, wings, long beak, short beak, trunk, short ears, long ears), illustrating features shared across classes]
13
[Figure: each classifier decomposed as a sum of components over the shared features (head, legs, wings, long beak, short beak, trunk, short ears, long ears)]
14
Loss + ||W||_{1,2}
Loss + λ||W||_{1,2} + (1 − λ)||W||_1
Loss + ||W||_1
15
We train a linear classifier in multi-task and multi-class settings, as defined by the respective loss function.
Iterative algorithm over a basic step on the parameter set {W, b}; the primed parameters stand for the parameters learnt up to the current step.
λ governs the level of sharing, from maximal sharing at λ = 1 to none at λ = 0; λ is decreased at each step.
The aggregated parameters, together with the decreased level of sharing, guide the learning to focus on more task/class-specific information than the previous step.
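Below is a toy sketch of the cascade on a multi-task least-squares loss (not the paper's exact solver or schedule): each level fits a correction dW on top of the aggregated parameters with a proximal-gradient loop, with λ decreasing from strong sharing to none; the proximal step uses the standard sparse-group shrinkage (entry-wise soft-thresholding followed by row-wise group shrinkage). All names and constants here are illustrative.

import numpy as np

def soft(V, t):
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def group_soft(V, t):
    """Row-wise group shrinkage (prox of t * ||.||_{1,2})."""
    norms = np.maximum(np.linalg.norm(V, axis=1, keepdims=True), 1e-12)
    return V * np.maximum(1.0 - t / norms, 0.0)

def prox_mixed(V, lam, t):
    """Prox of t * (lam * ||.||_{1,2} + (1 - lam) * ||.||_1)."""
    return group_soft(soft(V, t * (1.0 - lam)), t * lam)

def cascade(X, Y, lambdas=(0.9, 0.6, 0.3, 0.0), steps=200, lr=1e-2, reg=0.1):
    """Toy regularization cascade for multi-task least squares.
    X: (n, d) shared inputs; Y: (n, k), one target column per task."""
    n, d = X.shape
    W = np.zeros((d, Y.shape[1]))            # aggregated parameters so far
    for lam in lambdas:                      # strong sharing -> task-specific
        dW = np.zeros_like(W)
        for _ in range(steps):               # proximal gradient on this level
            grad = X.T @ (X @ (W + dW) - Y) / n
            dW = prox_mixed(dW - lr * grad, lam, lr * reg)
        W = W + dW                           # this level's output feeds the next
    return W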
16
Experiments: synthetic and real data (many sets), multi-task and multi-class loss functions, low-level vs. high-level features.
We compare the cascade approach against the same algorithm with:
◦ no regularization
◦ L1 sparse regularization
◦ L1,2 multi-task regularization
using both the multi-task loss and the multi-class loss.
17
[Figure: synthetic task hierarchy over tasks T1–T4, with baselines NoReg and L1,2]
18
[Figure: results of the hierarchical method (H) vs. L1 on synthetic data] 100 tasks; 20 positive and 20 negative samples per task.
19
Step 1 output. 100 tasks; 20 positive and 20 negative samples per task.
20
Step 2 output. 100 tasks; 20 positive and 20 negative samples per task.
21
Step 3 output. 100 tasks; 20 positive and 20 negative samples per task.
22
Step 4 output. 100 tasks; 20 positive and 20 negative samples per task.
23
Step 5 output. 100 tasks; 20 positive and 20 negative samples per task.
24
[Plot: average accuracy vs. sample size]
25
[Plot: average accuracy vs. sample size]
26
Datasets: Caltech 101, Cifar-100 (a subset of the tiny images dataset), ImageNet, Caltech 256.
27
Datasets: MIT-Indoor-Scene (annotated with LabelMe).
28
Representation for sparse hierarchical sharing: low-level vs. mid-level
◦ Low-level features: image features computed from the image via some local or global operator, such as GIST or SIFT.
◦ Mid-level features: features capturing some semantic notion, obtained as classifiers over low-level features.
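A tiny sketch of the mid-level idea (in the spirit of Object Bank style representations; the classifier bank here is hypothetical): the mid-level descriptor of an image is the vector of responses of pre-trained linear concept classifiers applied to its low-level descriptor.

import numpy as np

def midlevel_features(X_low, concept_classifiers):
    """X_low: (n, d) low-level descriptors (e.g. GIST/SIFT-based vectors).
    concept_classifiers: (c, d) pre-trained linear classifiers, one per concept.
    Returns (n, c) mid-level features: each concept's response on each image."""
    return X_low @ concept_classifiers.T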
29
[Results: multi-class on Cifar-100 and MIT indoor scene (ObjBank features); multi-task on Caltech 256 (Gehler features)]
30
Main objective: a faster learning algorithm for dealing with larger datasets (more classes, more samples).
Iterate over the original algorithm for each new sample, where each level uses the current value of the previous level.
Each step of the algorithm is solved using the online version presented in "Online Learning for Group Lasso", Yang et al. 2011 (we proved regret convergence).
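A rough per-sample sketch of that idea, reusing prox_mixed from the cascade sketch above: each incoming example triggers one pass over the levels, where every level takes a single proximal step on its own residual problem and passes its updated output downward. The actual work plugs in the online group-lasso solver of Yang et al. 2011; this plain online proximal step is only a stand-in.

import numpy as np
# assumes prox_mixed from the cascade sketch above is in scope

def online_cascade_step(x, y, levels, lambdas, lr=1e-2, reg=0.1):
    """One online update on a single sample (x: (d,), y: (k,)).
    levels[i] is the (d, k) matrix of level i; updated in place."""
    acc = np.zeros(levels[0].shape[1])        # prediction accumulated so far
    for dW, lam in zip(levels, lambdas):
        pred = acc + dW.T @ x
        grad = np.outer(x, pred - y)          # squared-loss gradient w.r.t. dW
        dW[:] = prox_mixed(dW - lr * grad, lam, lr * reg)
        acc = acc + dW.T @ x                  # each level uses the previous level's current value
    return acc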
31
Experiment on 1000 classes from ImageNet (ILSVRC2010), with 3000 samples per class and 21000 features per sample.
[Plot: accuracy vs. data repetitions]
              Top1   Top2   Top3   Top4   Top5
H             0.285  0.365  0.403  0.434  0.456
Zhao et al.   0.221  0.302  0.366  0.411  0.435
32
A different setting for sharing: share information between pre-trained models and a new learning task (typically a small-sample setting).
An extension of both the batch and online algorithms, but the online extension is more natural.
Gets as input the implicit hierarchy computed during training with the known classes.
When given examples from a new task:
◦ the online learning algorithm continues from where it stopped
◦ the matrix of weights is enlarged to include the new task, and the weights of the new task are initialized
◦ the sub-gradients of the known classes are not changed
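A numpy caricature of the knowledge-transfer step (an illustration of the mechanism described above, not the paper's algorithm; all function names are hypothetical): each level's weight matrix gains one zero column for the new task, the old columns stay frozen, and the shrinkage applied to the new column is weaker on rows that the known tasks already use, which is how the learnt sharing structure benefits the new task.

import numpy as np

def extend_for_new_task(levels):
    """Append one zero column (task K+1) to every level; old columns are frozen."""
    return [np.hstack([W, np.zeros((W.shape[0], 1))]) for W in levels]

def kt_step(levels, lambdas, x, y_new, lr=1e-2, reg=0.1):
    """One online update on a single example (x, y_new) of the new task.
    Only the last column of each level moves; rows already active for the
    known tasks get a smaller shrinkage threshold (only the L1 part)."""
    acc = 0.0
    for W, lam in zip(levels, lambdas):
        pred = acc + W[:, -1] @ x
        v = W[:, -1] - lr * (pred - y_new) * x             # squared-loss gradient step
        shared = np.linalg.norm(W[:, :-1], axis=1) > 0     # rows used by known tasks
        thr = lr * reg * np.where(shared, 1.0 - lam, 1.0)
        W[:, -1] = np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)
        acc = acc + W[:, -1] @ x
    return acc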
33
[Figure: Knowledge Transfer — batch and online KT methods build the classifier of a new task K+1 from the classifiers of tasks 1…K learnt by MTL]
34
[Plots: accuracy vs. sample size, on synthetic data and ILSVRC2010]
35
We assume a hierarchical structure of shared information which is unknown; the hierarchy is exploited implicitly.
We describe a cascade based on varying sparse regularization, with multi-task/multi-class and knowledge-transfer algorithms.
The cascade shows improved performance in all experiments.
We investigate different visual representation schemes: multi-task learning brings more value with higher-level features.
Different levels of sharing help and can be exploited efficiently.