1
Playing with features for learning and prediction
Jongmin Kim, Seoul National University
2
Problem statement
Predicting the outcome of surgery
3
Ideal approach?
Training data → predicting the outcome of surgery
4
Predicting outcome of surgery
Initial approach: predicting partial features
Predict which features?
5
Predicting outcome of surgery
4 surgeries: DHL + RFT + TAL + FDO
- flexion of the knee (min/max)
- dorsiflexion of the ankle (min)
- rotation of the foot (min/max)
6
Predicting outcome of surgery
Are these good features?
Number of training samples:
- DHL+RFT+TAL: 35 samples
- FDO+DHL+TAL+RFT: 33 samples
7
Machine learning and features
Data → feature representation → learning algorithm
8
Features in motion
- joint position / angle
- velocity / acceleration
- distance between body parts
- contact status
- …
9
Features in computer vision
SIFT, Spin image, HoG, RIFT, Textons, GLOH
10
Machine learning and features
11
Outline
Feature selection
- Feature ranking
- Subset selection: wrapper, filter, embedded
- Recursive feature elimination
- Combination of weak learners (boosting): AdaBoost (classification) / joint boosting (classification) / gradient boosting (regression)
Prediction results with feature selection
Feature learning?
12
Feature selection
- Alleviates the effects of the curse of dimensionality
- Improves prediction performance
- Faster and more cost-effective prediction
- Provides a better understanding of the data
13
Subset selection
- Wrapper
- Filter
- Embedded
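As a concrete illustration of the three flavors, and of the recursive feature elimination named in the outline, here is a minimal scikit-learn sketch on synthetic data. The library, feature counts, and estimators are illustrative stand-ins, not necessarily the setup used in these experiments.

```python
# A minimal sketch of the three subset-selection flavors on synthetic data;
# 35 samples mirrors the DHL+RFT+TAL data count, everything else is illustrative.
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression, RFE
from sklearn.linear_model import Lasso, LinearRegression

X, y = make_regression(n_samples=35, n_features=20, n_informative=5, random_state=0)

# Filter: rank features by a univariate score, independent of any model.
filt = SelectKBest(score_func=f_regression, k=5).fit(X, y)
print("filter   :", sorted(filt.get_support(indices=True)))

# Wrapper: recursive feature elimination, repeatedly refitting the model.
rfe = RFE(LinearRegression(), n_features_to_select=5).fit(X, y)
print("wrapper  :", sorted(rfe.get_support(indices=True)))

# Embedded: L1 regularization drives irrelevant coefficients to zero.
lasso = Lasso(alpha=1.0).fit(X, y)
print("embedded :", [i for i, w in enumerate(lasso.coef_) if abs(w) > 1e-6])
```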
14
Feature learning?
Can we automatically learn a good feature representation?
Known as: unsupervised feature learning, feature learning, deep learning, representation learning, etc.
Hand-designed features (by humans): 1. need expert knowledge; 2. require time-consuming hand-tuning.
When it's unclear how to hand-design features: automatically learned features (by machine)
15
Learning Feature Representations
Key idea:
- Learn statistical structure or correlations of the data from unlabeled data
- The learned representations can be used as features in supervised and semi-supervised settings
16
Learning Feature Representations
[Diagram: input (image/features) → encoder → output features; the encoder is the feed-forward / bottom-up path, the decoder the feed-back / generative / top-down path]
17
Learning Feature Representations
e.g. Predictive Sparse Decomposition [Kavukcuoglu et al., '09]
[Diagram: input patch x → encoder filters W with sigmoid function σ(·) → sparse features z = σ(Wx); decoder filters D give the reconstruction Dz, with an L1 sparsity penalty on z]
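A small numpy sketch of the PSD objective as described on the slide: a reconstruction term, an L1 sparsity term, and a prediction term tying the code to the fast feed-forward encoder. The dimensions, sparsity weight, and the random code z are placeholders; in the actual method the code is inferred and W, D are learned by alternating minimization.

```python
# Sketch of the Predictive Sparse Decomposition objective (shapes illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_input, n_code, lam = 64, 32, 0.1           # patch size, code size, sparsity weight

x = rng.normal(size=n_input)                 # input patch
W = rng.normal(size=(n_code, n_input)) * 0.1 # encoder filters
D = rng.normal(size=(n_input, n_code)) * 0.1 # decoder filters
z = rng.normal(size=n_code)                  # sparse code (normally inferred, not random)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def psd_objective(x, z, W, D, lam):
    reconstruction = np.sum((x - D @ z) ** 2)      # decoder term ||x - Dz||^2
    sparsity = lam * np.sum(np.abs(z))             # L1 prior on the code
    prediction = np.sum((z - sigmoid(W @ x)) ** 2) # encoder must predict the code
    return reconstruction + sparsity + prediction

print(psd_objective(x, z, W, D, lam))
```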
18
Stacked Auto-Encoders [Hinton & Salakhutdinov, Science '06]
[Diagram: input image → encoder/decoder pair → features → encoder/decoder → features → encoder/decoder → class label]
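A minimal numpy sketch of the greedy layer-wise idea, assuming simple sigmoid layers trained with squared-error backprop: each layer learns to reconstruct the output of the layer below, then its encoder output becomes the input for the next layer. Layer sizes, learning rate, and the random stand-in data are illustrative.

```python
# Greedy layer-wise pretraining of a stacked auto-encoder (toy sizes).
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def train_autoencoder_layer(X, n_hidden, lr=0.1, epochs=200):
    """One sigmoid encoder / linear decoder pair, squared-error backprop."""
    n_in = X.shape[1]
    W_enc = rng.normal(scale=0.1, size=(n_in, n_hidden))
    W_dec = rng.normal(scale=0.1, size=(n_hidden, n_in))
    for _ in range(epochs):
        H = sigmoid(X @ W_enc)          # encode
        X_hat = H @ W_dec               # decode
        err = X_hat - X                 # reconstruction error
        grad_dec = H.T @ err
        grad_enc = X.T @ ((err @ W_dec.T) * H * (1 - H))
        W_dec -= lr * grad_dec / len(X)
        W_enc -= lr * grad_enc / len(X)
    return W_enc

X = rng.normal(size=(100, 30))          # stand-in for the real training data
encoders, layer_input = [], X
for n_hidden in (20, 10):               # stack two layers greedily
    W = train_autoencoder_layer(layer_input, n_hidden)
    encoders.append(W)
    layer_input = sigmoid(layer_input @ W)  # features feed the next layer

print(layer_input.shape)                # (100, 10) top-level features
```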
19
At Test Time [Hinton & Salakhutdinov, Science '06]
- Remove the decoders
- Use the feed-forward path: input image → encoder → features → encoder → features → encoder → class label
- Gives a standard (convolutional) neural network
- Can fine-tune with backprop
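A sketch of the test-time configuration under the same toy assumptions: the decoders are dropped, only the bottom-up encoders run, and scikit-learn's LogisticRegression stands in for the class-label layer. In a real system the whole stack would then be fine-tuned jointly with backprop.

```python
# Test time: discard decoders, run the feed-forward path, classify on top.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

# Pretrained encoder weights would come from the layer-wise phase above;
# random matrices stand in for them here.
encoders = [rng.normal(scale=0.1, size=(30, 20)),
            rng.normal(scale=0.1, size=(20, 10))]

def feed_forward(X, encoders):
    for W in encoders:                  # decoders are gone; bottom-up only
        X = sigmoid(X @ W)
    return X

X = rng.normal(size=(100, 30))
y = rng.integers(0, 2, size=100)        # stand-in labels
features = feed_forward(X, encoders)
clf = LogisticRegression().fit(features, y)
print(clf.score(features, y))
```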
20
Status & plan
Understanding the data / survey of learning techniques…
Plan:
- November: finish experiments
- December: paper writing
- January: SIGGRAPH submission
- August: presentation in the US
But before all of that….
21
Deep neural net vs. boosting
Deep nets:
- a single, highly non-linear system
- a "deep" stack of simpler modules
- all parameters are subject to learning
Boosting & forests:
- a sequence of "weak" (simple) classifiers that are linearly combined to produce a powerful classifier
- subsequent classifiers do not exploit the representations of earlier classifiers; it's a "shallow" linear mixture
- typically, features are not learned
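To make the "shallow linear mixture" point concrete, here is a minimal gradient-boosting-for-regression sketch (gradient boosting is named on the outline slide): each weak learner is fit to the residuals of the current ensemble and added with a small learning rate, and no tree ever sees another tree's internal representation. Depth-1 trees, the learning rate, and the toy data are illustrative choices.

```python
# Gradient boosting for regression with decision stumps (toy example).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

lr, pred, trees = 0.1, np.full(200, y.mean()), []
for _ in range(100):
    residual = y - pred                          # what the ensemble still misses
    tree = DecisionTreeRegressor(max_depth=1).fit(X, residual)
    trees.append(tree)
    pred += lr * tree.predict(X)                 # shallow linear combination

print("train MSE:", np.mean((y - pred) ** 2))
```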
22
Deep neural net vs. boosting
23
Feature learning for motion data
Learning representations of temporal data
- Model complex, nonlinear dynamics such as style
Restricted Boltzmann machine
- haven't fully understood the concept yet..
- the results are not impressive
24
Restricted Boltzmann machine
- Models complex, nonlinear dynamics
- Easily and exactly infers the latent binary state given the observations
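A numpy sketch of why that inference is easy and exact: given the visible units, the hidden units of an RBM are conditionally independent, so each posterior is a single sigmoid, p(h_j = 1 | v) = σ(Wv + b)_j. One contrastive-divergence (CD-1) weight update is included for flavor. This is a plain binary RBM; the motion-modeling papers use conditional variants, and all sizes and the learning rate here are illustrative.

```python
# Exact hidden-state inference in a binary RBM, plus one CD-1 update.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

n_visible, n_hidden, lr = 12, 6, 0.05
W = rng.normal(scale=0.1, size=(n_hidden, n_visible))
b, c = np.zeros(n_hidden), np.zeros(n_visible)   # hidden / visible biases

v0 = rng.integers(0, 2, size=n_visible).astype(float)  # observed binary data

# Exact inference: the posterior over each hidden unit in closed form.
p_h0 = sigmoid(W @ v0 + b)
h0 = (rng.random(n_hidden) < p_h0).astype(float)

# CD-1: reconstruct the visibles, re-infer the hiddens, update the weights.
p_v1 = sigmoid(W.T @ h0 + c)
v1 = (rng.random(n_visible) < p_v1).astype(float)
p_h1 = sigmoid(W @ v1 + b)
W += lr * (np.outer(p_h0, v0) - np.outer(p_h1, v1))

print("p(h=1|v):", np.round(p_h0, 3))
```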