1
Hilbert Space Embeddings of Conditional Distributions -- With Applications to Dynamical Systems
Le Song, Carnegie Mellon University
Joint work with Jonathan Huang, Alex Smola and Kenji Fukumizu
2
Hilbert Space Embeddings
Summary statistics for distributions:
  µ = E[X] (mean)
  C = E[(X − µ)(X − µ)^T] (covariance)
  µ[p] = E_{X~p}[φ(X)] (expected features)
3
Hilbert Space Embeddings
Reconstruct p from µ[p] for certain (characteristic) kernels, e.g. the RBF kernel: the embedding then uniquely determines the distribution.
The sample average µ̂[p] = (1/m) Σ_i φ(x_i) converges to the true mean embedding at rate O_p(m^{-1/2}).
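The convergence claim is easy to check numerically. Below is a minimal sketch (not from the talk): it measures the RKHS distance between the empirical mean embeddings of two samples from the same Gaussian, using the kernel trick so the RBF feature map never has to be written down; the bandwidth and sample sizes are arbitrary choices.

```python
import numpy as np

def rbf_gram(X, Y, sigma=1.0):
    """Gram matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def embedding_distance(X, Y, sigma=1.0):
    """RKHS distance ||mu_X - mu_Y|| between the empirical mean embeddings
    of two samples, computed with the kernel trick (no explicit features)."""
    sq = (rbf_gram(X, X, sigma).mean()
          - 2 * rbf_gram(X, Y, sigma).mean()
          + rbf_gram(Y, Y, sigma).mean())
    return np.sqrt(max(sq, 0.0))

# Two independent samples from the same distribution: their embeddings
# approach each other (and the true embedding) roughly like 1/sqrt(m).
rng = np.random.default_rng(0)
for m in [50, 200, 800, 3200]:
    a = rng.normal(size=(m, 1))
    b = rng.normal(size=(m, 1))
    print(m, embedding_distance(a, b))
```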
4
Hilbert Space Embeddings
Embed conditional distributions: p(y|x) → µ[p(y|x)]
Applications: Structured Prediction, Dynamical Systems
5
Embedding P(Y|X)
Simple case: X binary, Y vector valued. Need to embed p(Y|X=0) and p(Y|X=1).
Solution:
– Divide observations according to their value of X
– Average the features φ(y) within each group
What if X is continuous or structured?
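A minimal sketch of the binary-X recipe (illustration only, not the talk's code): observations are split by their x value and the feature vectors of the corresponding y's are averaged. An explicit polynomial feature map is used here so the averaged statistics are easy to read off.

```python
import numpy as np

def conditional_embeddings_binary(x, y, feature_map):
    """Embed p(Y | X = x) for binary x by averaging feature vectors per group."""
    feats = np.array([feature_map(yi) for yi in y])
    return {v: feats[x == v].mean(axis=0) for v in np.unique(x)}

# Toy data: Y | X=0 ~ N(0,1), Y | X=1 ~ N(3,1); features (1, y, y^2).
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=1000)
y = rng.normal(loc=3.0 * x, scale=1.0)
phi = lambda yi: np.array([1.0, yi, yi**2])

mu = conditional_embeddings_binary(x, y, phi)
print(mu[0])   # approx [1, 0, 1]  -> E[Y|X=0] = 0, E[Y^2|X=0] = 1
print(mu[1])   # approx [1, 3, 10] -> E[Y|X=1] = 3, E[Y^2|X=1] = 10
```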
6
Embedding P(Y|X)
Goal: estimate the embedding from a finite sample.
Problem: some values X=x are never observed… can we embed p(Y|X=x) for an x that was never observed?
7
Key Idea
Generalize to unobserved X=x via a kernel on X.
If x and x' are close under this kernel, then µ[p(Y|X=x)] and µ[p(Y|X=x')] will be similar.
8
Conditional Embedding Operator
µ[p(Y|X=x)] = U_{Y|X} φ(x), with U_{Y|X} = C_{YX} C_{XX}^{-1}
9
Covariance operator C_{XX} = E[φ(X) ⊗ φ(X)] and cross-covariance operator C_{YX} = E[φ(Y) ⊗ φ(X)]: a generalization of the covariance matrix to feature space.
10
Kernel Estimate
µ̂[p(Y|X=x)] = Σ_i β_i(x) φ(y_i), with β(x) = (K + λ m I)^{-1} k_x
The empirical estimate converges to the true conditional embedding at rate O_p((λm)^{-1/2} + λ^{1/2}).
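A small sketch of this estimator in its finite-sample form (the regularization λ and kernel bandwidth below are arbitrary choices, not values from the talk). Applying the weights β(x) directly to the samples y_i reads E[Y|X=x] off the embedding:

```python
import numpy as np

def rbf_gram(X, Y, sigma=1.0):
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def conditional_weights(X_train, x_query, lam=1e-3, sigma=1.0):
    """beta(x) = (K + lam*m*I)^{-1} k_x, so that
    mu[p(Y|X=x)] ~= sum_i beta_i(x) phi(y_i)."""
    m = len(X_train)
    K = rbf_gram(X_train, X_train, sigma)
    k_x = rbf_gram(X_train, x_query, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), k_x)

# Toy example: Y = sin(X) + noise; weighting the y samples by beta(x)
# recovers E[Y|X=x] from the embedding.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
Y = np.sin(X) + 0.1 * rng.normal(size=X.shape)
xq = np.array([[0.0], [1.5]])
beta = conditional_weights(X, xq)      # shape (300, 2)
print((Y.T @ beta).ravel())            # approx sin(0) = 0.0 and sin(1.5) = 0.997
```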
11
Sum and Product Rules
Probabilistic relation ↔ Hilbert space relation:
  Sum rule: p(y) = Σ_x p(y|x) p(x)
  Product rule: p(y, x) = p(y|x) p(x)
Can do probabilistic inference with embeddings!
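In the discrete case with one-hot features these relations can be verified with ordinary matrices. The sketch below follows the standard kernel forms of the sum and product rules (µ_Y = U_{Y|X} µ_X and C_{YX} = U_{Y|X} C_{XX}), which is my reading of the rows above rather than the slide's exact notation.

```python
import numpy as np

# Discrete toy example with one-hot features: the embedding of a distribution
# is just its probability vector, and the operators are ordinary matrices.
p_x  = np.array([0.3, 0.7])                 # p(x), i.e. the embedding mu_X
p_yx = np.array([[0.8, 0.1],                # p(y | x), one column per x
                 [0.2, 0.9]])
p_joint = p_yx * p_x                        # p(y, x) = p(y|x) p(x)

C_xx = np.diag(p_x)                         # covariance operator E[phi(X) ⊗ phi(X)]
C_yx = p_joint                              # cross-covariance E[phi(Y) ⊗ phi(X)]
U_yx = C_yx @ np.linalg.inv(C_xx)           # conditional embedding operator

print(U_yx)                  # recovers the conditional table p(y|x)
print(U_yx @ p_x)            # sum rule:     mu_Y = U_{Y|X} mu_X      -> p(y)
print(U_yx @ C_xx)           # product rule: C_{YX} = U_{Y|X} C_{XX}  -> p(y, x)
print(p_joint.sum(axis=1))   # marginal p(y), for comparison
```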
12
Dynamical Systems
Maintain the belief over the hidden state using Hilbert space embeddings.
[Graphical model: a chain of hidden states, each emitting an observation.]
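One compact way to write the recursion such a filter maintains (a paraphrase; the exact operator notation from the talk is not recoverable here) is an update of the belief embedding by the conditional embedding operator of the next state given the current state and observation:

```latex
% Belief over the hidden state, kept as an embedding and updated
% recursively as each new observation o_{t+1} arrives:
\mu_{s_{t+1} \mid o_{1:t+1}}
  \;\approx\;
  \mathcal{U}_{s_{t+1} \mid s_t,\, o_{t+1}}
  \left( \mu_{s_t \mid o_{1:t}} \otimes \varphi(o_{t+1}) \right)
```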
13
Advantages
Applies to any domain with a kernel, e.g. reals, rotations, permutations, strings.
Applies to more general distributions, e.g. multimodal or skewed.
14
Evaluation with Linear Dynamics
Linear dynamics model (for which the Kalman filter is optimal):
  s_{t+1} = A s_t + process noise,  o_t = B s_t + observation noise
[Plots: trajectory of the hidden states; observations.]
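For concreteness, here is a deliberately simplified end-to-end sketch of this kind of evaluation: simulate a linear dynamical system, train on a sequence where the hidden states are available, then filter a test sequence. To stay short it propagates a point estimate of the state through a kernel ridge regression from (s_t, o_{t+1}) to s_{t+1} rather than a full belief embedding, so it illustrates the setup, not the talk's algorithm; all dimensions, noise levels, and kernel parameters are made up.

```python
import numpy as np

def rbf_gram(X, Y, sigma=1.0):
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def simulate_lds(T, A, B, q, r, rng):
    """s_{t+1} = A s_t + process noise,  o_t = B s_t + observation noise."""
    d = A.shape[0]
    S, O = np.zeros((T, d)), np.zeros((T, B.shape[0]))
    s = rng.normal(size=d)
    for t in range(T):
        S[t] = s
        O[t] = B @ s + r * rng.normal(size=B.shape[0])
        s = A @ s + q * rng.normal(size=d)
    return S, O

rng = np.random.default_rng(0)
A = 0.95 * np.array([[np.cos(0.3), -np.sin(0.3)], [np.sin(0.3), np.cos(0.3)]])
B = np.eye(2)
S_tr, O_tr = simulate_lds(500, A, B, q=0.1, r=0.3, rng=rng)
S_te, O_te = simulate_lds(200, A, B, q=0.1, r=0.3, rng=rng)

# Kernel ridge regression from (s_t, o_{t+1}) to s_{t+1}, trained on true states.
X_tr = np.hstack([S_tr[:-1], O_tr[1:]])
Y_tr = S_tr[1:]
lam, m = 1e-3, len(X_tr)
K = rbf_gram(X_tr, X_tr)
W = np.linalg.solve(K + lam * m * np.eye(m), Y_tr)   # dual weights

# Filtering: feed the previous estimate and the new observation back in.
s_hat = S_te[0].copy()          # assume the initial state is known
err = []
for t in range(len(S_te) - 1):
    x = np.hstack([s_hat, O_te[t + 1]])[None, :]
    s_hat = (rbf_gram(x, X_tr) @ W).ravel()
    err.append(np.sum((s_hat - S_te[t + 1]) ** 2))
print("RMS error over the test trajectory:", np.sqrt(np.mean(err)))
```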
15
Gaussian process noise N(0, 10 I), Gaussian observation noise N(0, 10^{-2} I)
[Plot: RMS error vs. training size (2^n steps) for RKHS and Kalman filter.]
16
Skewed process noise with parameters (1, 1/3), small observation noise N(0, 10^{-2} I)
[Plot: RMS error vs. training size (2^n steps) for RKHS and Kalman filter.]
17
Bimodal process noise 0.5 N(-0.2, 10^{-2} I) + 0.5 N(0.2, 10^{-2} I), small observation noise N(0, 10^{-2} I)
[Plot: RMS error vs. training size (2^n steps) for RKHS and Kalman filter.]
18
Camera Rotation
Scenes rendered using the POV-Ray renderer.
19
Predicted Camera Rotation
Trace measure against the ground truth rotation (3 would be perfect agreement):
  Ground truth: 3    RKHS: 2.91    Kalman filter: 2.18    Random: 0.01
Dynamical systems with Hilbert space embeddings work better.
20
Video from Predicted Rotation
[Side by side: original video; video rendered with the predicted camera rotation.]
21
Conclusion
Embedding conditional distributions:
– Generalizes to unobserved X=x
– Kernel estimate
– Convergence of the empirical estimate
– Probabilistic inference
Application to learning and inference in dynamical systems
22
The End Thanks
23
Future Work
Estimate the hidden states during training?
A dynamical system is a chain-structured graphical model; what about tree and general graph topologies?
24
Small process noise N(0, 10^{-2} I), large observation noise N(0, 10 I)
[Plot: RMS error vs. training size (2^n steps) for RKHS and Kalman filter.]
25
Dynamics can Help
[Plot: trace measure vs. log10(noise variance).]
27
Conditional Embedding Operator
Desired properties:
1. µ[p(Y|X=x)] = U_{Y|X} φ(x)
2. E[g(Y) | X=x] = ⟨g, µ[p(Y|X=x)]⟩ for functions g in the RKHS on Y
Covariance operator C_{XX} = E[φ(X) ⊗ φ(X)]: a generalization of the covariance matrix.