1
Learning the Structure of Deep Sparse Graphical Models
Ryan Prescott Adams, Hanna M. Wallach, Zoubin Ghahramani
Presented by Zhengming Xing
Some figures are taken directly from the paper and Hanna Wallach's slides.
2
Outline
Introduction
Finite belief networks
Infinite belief networks
Inference
Experiments
3
Introduction
Main contribution: combine deep belief networks with nonparametric Bayesian methods.
Main idea: use the Indian Buffet Process (IBP) to learn the structure of the network.
The structure of the network includes its depth, width, and connectivity.
4
Single-layer network
Use a binary matrix to represent the network: black entries are 1 (the two units are connected), white entries are 0 (the two units are not connected).
The IBP can be used as the prior on the binary matrix Z with infinitely many columns.
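For concreteness, a minimal sketch (not from the slides) of reading such a binary connectivity matrix; the matrix values here are made up for illustration:

```python
import numpy as np

# Hypothetical connectivity matrix: Z[i, k] = 1 ("black") means unit i in the
# lower layer is connected to hidden unit k; 0 ("white") means no connection.
Z = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])

for i, k in zip(*np.nonzero(Z)):
    print(f"unit {i} is connected to hidden unit {k}")
```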
5
Review of the IBP
1. The first customer tries Poisson(α) dishes.
2. The n-th customer tries each previously tasted dish k with probability m_k/n (where m_k is the number of earlier customers who tried dish k), then tries Poisson(α/n) new dishes.
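A minimal sketch of drawing Z from the IBP prior by simulating this process directly; α and the number of customers are illustrative choices, not values from the paper:

```python
import numpy as np

def sample_ibp(alpha, num_customers, rng=None):
    """Draw a binary matrix from the IBP prior via the restaurant process."""
    rng = np.random.default_rng(rng)
    dish_counts = []                       # m_k: customers who have tried dish k
    rows = []
    for n in range(1, num_customers + 1):
        # Try each previously tasted dish k with probability m_k / n ...
        tastes = [rng.random() < m / n for m in dish_counts]
        dish_counts = [m + t for m, t in zip(dish_counts, tastes)]
        # ... then try Poisson(alpha / n) new dishes.
        num_new = rng.poisson(alpha / n)
        dish_counts.extend([1] * num_new)
        rows.append(tastes + [True] * num_new)
    Z = np.zeros((num_customers, len(dish_counts)), dtype=int)
    for n, row in enumerate(rows):
        Z[n, :len(row)] = row
    return Z

print(sample_ibp(alpha=2.0, num_customers=10, rng=0))
```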
6
Multi-layer network
7
Cascading IBP (CIBP)
Also parameterized by α. Each dish in the restaurant is also a customer in another Indian Buffet Process.
Each matrix is exchangeable in both its rows and its columns.
This chain reaches the state K_m = 0 with probability one, where K_m is the number of units in layer m, so the network's depth is finite.
Properties, for a unit in layer m+1:
Expected number of parents: α.
Expected number of children: grows with K_m, the number of units in the layer below.
8
Sample from the CIBP prior
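A hedged sketch of how such samples can be generated, chaining the sample_ibp sketch above: the units in each layer act as customers choosing their parents in the next layer up, and generation stops once a layer comes up empty (which, per the previous slide, happens with probability one). The max_layers cap is only a safety guard, not part of the model:

```python
import numpy as np

def sample_cibp(alpha, num_visible, max_layers=100, rng=None):
    """Chain IBPs: units in layer m are customers; their dishes are layer m+1 units."""
    rng = np.random.default_rng(rng)
    structure = []                          # structure[m] is the binary matrix Z^(m)
    n_units = num_visible
    for _ in range(max_layers):             # safety cap; termination is a.s. anyway
        Z = sample_ibp(alpha, n_units, rng=rng)
        if Z.shape[1] == 0:                 # no dishes chosen: the network ends here
            break
        structure.append(Z)
        n_units = Z.shape[1]                # dishes become customers of the next IBP
    return structure

layers = sample_cibp(alpha=1.5, num_visible=8, rng=0)
print([Z.shape for Z in layers])            # (units in layer m, units in layer m+1)
```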
9
Model
m indexes the layers and increases up to M; each layer has its own weights and biases.
Place layer-wise Gaussian priors on the weights and biases, and a Gamma prior on the noise precision.
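A hedged sketch of the generative pass this model describes, assuming the nonlinear-Gaussian belief network form (each unit applies a sigmoid to a Gaussian-noised weighted sum of its parents); the function name and argument layout are illustrative, not the paper's notation:

```python
import numpy as np

def forward_sample(structure, weights, biases, precisions, top_activations, rng=None):
    """Sample activations from the deepest layer M down to the visible layer.

    structure[m]: binary Z^(m) of shape (K_m, K_{m+1}); weights[m]: real matrix of
    the same shape; biases[m]: length-K_m vector; precisions[m]: noise precision.
    """
    rng = np.random.default_rng(rng)
    y = top_activations                              # activations of the deepest layer
    for Z, W, b, nu in reversed(list(zip(structure, weights, biases, precisions))):
        mean = (Z * W) @ y + b                       # weighted sum over connected parents
        u = rng.normal(mean, 1.0 / np.sqrt(nu))      # Gaussian noise with precision nu
        y = 1.0 / (1.0 + np.exp(-u))                 # sigmoid activation
    return y
```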
10
Inference
The weights, biases, and noise variances can be sampled with a Gibbs sampler.
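To illustrate why Gibbs steps are available here: a Gaussian prior combined with a Gaussian likelihood gives a Gaussian conditional by conjugacy. Below is a generic normal-normal resampling step for a single weight; the single-weight setup and the names are illustrative, not the paper's exact derivation:

```python
import numpy as np

def gibbs_weight(u, mean_rest, y_parent, nu, prior_mean, prior_prec, rng=None):
    """Resample one weight w with likelihood u_i ~ N(mean_rest_i + w * y_parent_i, 1/nu)
    and prior w ~ N(prior_mean, 1/prior_prec): the conditional is Gaussian."""
    rng = np.random.default_rng(rng)
    resid = u - mean_rest                            # data with this weight's term removed
    post_prec = prior_prec + nu * np.sum(y_parent ** 2)
    post_mean = (prior_prec * prior_mean + nu * np.sum(y_parent * resid)) / post_prec
    return rng.normal(post_mean, 1.0 / np.sqrt(post_prec))
```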
11
Inference (sampling Z)
Two steps:
1. Sample the existing dishes (connections), given the rest of the matrix.
2. MH sample the new dishes: propose adding a new unit and inserting a connection to it, or, for an existing unit, removing the connection to it; accept or reject with the MH ratio.
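A minimal sketch of the accept/reject test behind step 2; the caller supplies the log-likelihoods, log-priors, and proposal log-densities for the birth move (add a unit and its connection) or death move (remove a unit's connection), so this is the generic MH test rather than the paper's worked-out ratio:

```python
import numpy as np

def mh_accept(log_lik_new, log_lik_old, log_prior_new, log_prior_old,
              log_q_reverse, log_q_forward, rng=None):
    """Standard Metropolis-Hastings acceptance test on the log scale."""
    rng = np.random.default_rng(rng)
    log_ratio = (log_lik_new - log_lik_old
                 + log_prior_new - log_prior_old
                 + log_q_reverse - log_q_forward)
    return np.log(rng.random()) < min(0.0, log_ratio)
```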
12
Experimental results: Olivetti faces
The bottom halves of the test images are removed and then filled in by the model.
13
Experimental results: MNIST digits
14
Experimental results: Frey faces