Graphical models: approximate inference and learning CA6b, lecture 5
Bayesian Networks: General Factorization
D-separation: Example
Trees: undirected tree, directed tree, polytree
Converting Directed to Undirected Graphs (2): additional links
Inference on a Chain
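As an illustration of how chain inference works in practice, here is a minimal NumPy sketch (not from the slides; all variable names and numbers are made up): forward and backward messages are combined to give every single-node marginal in one forward and one backward sweep.

```python
import numpy as np

# Minimal sketch of exact inference on a chain x_1 - x_2 - ... - x_N of
# discrete variables: the sum-product algorithm specialised to a chain.

K, N = 3, 5                      # number of states, chain length (illustrative)
rng = np.random.default_rng(0)
phi = rng.random((N, K))         # unary potentials phi[n](x_n)
psi = rng.random((N - 1, K, K))  # pairwise potentials psi[n](x_n, x_{n+1})

# forward messages (include the local potential of the receiving node)
alpha = [phi[0]]
for n in range(1, N):
    alpha.append(phi[n] * (alpha[-1] @ psi[n - 1]))

# backward messages (exclude the local potential of the receiving node)
beta = [np.ones(K)]
for n in range(N - 2, -1, -1):
    beta.insert(0, psi[n] @ (phi[n + 1] * beta[0]))

# single-node marginals p(x_n), obtained by combining both messages
for n in range(N):
    m = alpha[n] * beta[n]
    print(f"p(x_{n}) =", m / m.sum())
```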
Inference in an HMM. E step: belief propagation
Belief propagation in an HMM. E step: belief propagation
Expectation maximization in an HMM. E step: belief propagation
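For concreteness, a hedged sketch of this E step as code: the forward-backward algorithm, i.e. belief propagation along the chain of hidden states. The parameters pi, A and B and the toy observation sequence below are assumptions, not values from the lecture.

```python
import numpy as np

def forward_backward(pi, A, B, obs):
    """Return posterior marginals gamma[t, k] = p(z_t = k | obs)."""
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))
    # forward pass (with per-step normalisation for numerical stability)
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = B[:, obs[t]] * (alpha[t - 1] @ A)
        alpha[t] /= alpha[t].sum()
    # backward pass
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

# toy example: 2 hidden states, 2 observation symbols
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition probabilities
B = np.array([[0.8, 0.2], [0.3, 0.7]])   # observation probabilities
print(forward_backward(pi, A, B, [0, 0, 1, 1, 0]))
```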
The Junction Tree Algorithm. Exact inference on general graphs. Works by turning the initial graph into a junction tree and then running a sum-product-like algorithm.
Factor Graphs
Factor Graphs from Undirected Graphs
The Sum-Product Algorithm (6)
The Sum-Product Algorithm (5)
The Sum-Product Algorithm (3)
The Sum-Product Algorithm (7): Initialization
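For reference, the standard sum-product updates that these slides build up (textbook form; ne(x) and ne(f) denote the neighbours of variable x and factor f):

\mu_{x \to f}(x) = \prod_{g \in \mathrm{ne}(x) \setminus \{f\}} \mu_{g \to x}(x)

\mu_{f \to x}(x) = \sum_{\mathbf{x}_f \setminus x} f(\mathbf{x}_f) \prod_{y \in \mathrm{ne}(f) \setminus \{x\}} \mu_{y \to f}(y)

p(x) \propto \prod_{f \in \mathrm{ne}(x)} \mu_{f \to x}(x)

Initialization: leaf variable nodes send \mu_{x \to f}(x) = 1 and leaf factor nodes send \mu_{f \to x}(x) = f(x).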
Consequence of failing inhibition in hierarchical inference. Figure: a hierarchy from bottom-up sensory observations to top-down prior expectations, with nodes such as Forest, Tree, Leaf, Stem, Green between the leaves and the root.
Causal model as a Bayesian network and as a pairwise factor graph
Pairwise graphs: log belief ratio and log message ratios
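One standard way to write these ratios, for binary variables x_i \in \{-1,+1\} with local fields h_i and couplings J_{ij} (a sketch of the idea, not necessarily the slides' exact notation): with \nu_{i \to j} = \tfrac{1}{2} \log \frac{m_{i \to j}(+1)}{m_{i \to j}(-1)} the log message ratio,

\nu_{i \to j} = \operatorname{artanh}\!\left[ \tanh(J_{ij}) \, \tanh\!\Big( h_i + \sum_{k \in \mathrm{ne}(i) \setminus \{j\}} \nu_{k \to i} \Big) \right], \qquad \tfrac{1}{2} \log \frac{b_i(+1)}{b_i(-1)} = h_i + \sum_{k \in \mathrm{ne}(i)} \nu_{k \to i}.

Subtracting the message just received from j (the sum excludes k = j) is exactly what an inhibitory loop would have to implement.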
Belief propagation and inhibitory loops
Tight excitatory/inhibitory balance is required, and sufficient. Okun and Lampl, Nat Neurosci 2008 (figure: excitation and inhibition traces).
Support for impaired inhibition in schizophrenia: reduced GAD67 in patients with schizophrenia compared with controls (Lewis et al., Nat Rev Neurosci 2005). See also: Benes, Neuropsychopharmacology 2010; Uhlhaas and Singer, Nat Rev Neurosci 2010.
Circular inference: Impaired inhibitory loops
Circular inference and overconfidence:
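A minimal sketch of this idea (an assumed illustration, not the published model): two coupled binary nodes exchange log-odds messages; with intact inhibition each node discounts what it just received before replying, whereas without it the same evidence reverberates and the beliefs inflate.

```python
import numpy as np

# Two binary nodes: a "sensory" node s and a "prior" node p, coupled with
# weight J (all numbers are illustrative). In standard belief propagation the
# reply to a neighbour excludes the message received from it; circular
# inference corresponds to failing that subtraction.

J = 1.0                 # coupling between the two nodes
h_s, h_p = 1.0, 0.5     # local log-odds evidence at each node

def msg(J, field):
    """Log-odds message generated from a local field (Ising-type BP update)."""
    return np.arctanh(np.tanh(J) * np.tanh(field))

def run(inhibition=True, n_iter=20):
    m_sp = m_ps = 0.0   # log-odds messages s -> p and p -> s
    for _ in range(n_iter):
        # with intact inhibition, the incoming message is removed before replying
        m_sp = msg(J, h_s + (0.0 if inhibition else m_ps))
        m_ps = msg(J, h_p + (0.0 if inhibition else m_sp))
    return h_s + m_ps, h_p + m_sp   # log belief ratios at s and at p

print("with inhibition   :", run(True))    # correct, stable beliefs
print("without inhibition:", run(False))   # inflated beliefs: overconfidence
```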
The Fisher Task (Renaud Jardri, Alexandra Litvinova & Sandrine Duverne): prior, sensory evidence, posterior confidence.
Mean group responses: controls, patients with schizophrenia, and the simple Bayes prediction.
Controls vs. patients
Mean parameter values (mean + sd) for the SCZ and CTL groups (figure with significance markers).
Inference loops and psychosis. Figure: strength of loops against the PANSS positive factor and against non-clinical beliefs (PDI-21 scores).
The Junction Tree Algorithm. Exact inference on general graphs. Works by turning the initial graph into a junction tree and then running a sum-product-like algorithm. Intractable on graphs with large cliques.
What if exact inference is intractable? Loopy belief propagation works in some scenarios. Markov chain Monte Carlo (MCMC) sampling methods. Variational methods (not covered here).
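As an example of the sampling route, a hedged sketch of Gibbs sampling (one MCMC scheme) for a binary pairwise model; marginals are estimated from samples instead of messages. All couplings and fields below are random illustrative values.

```python
import numpy as np

# Gibbs sampling for p(x) proportional to exp(sum_i h_i x_i + sum_{i<j} J_ij x_i x_j),
# with x_i in {-1,+1}: resample each variable from its conditional, repeatedly.

rng = np.random.default_rng(0)
N = 10
J = np.triu(0.3 * rng.random((N, N)), 1)
J = J + J.T                         # symmetric couplings, zero diagonal
h = rng.normal(size=N)              # local fields

x = rng.choice([-1, 1], size=N)
samples = []
for sweep in range(5000):
    for i in range(N):
        field = h[i] + J[i] @ x                       # local field on x_i (J[i, i] = 0)
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))   # p(x_i = +1 | rest)
        x[i] = 1 if rng.random() < p_plus else -1
    if sweep >= 500:                                  # discard burn-in sweeps
        samples.append(x.copy())

print("estimated p(x_i = +1):", (np.mean(samples, axis=0) + 1) / 2)
```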
Loopy Belief Propagation. Sum-product on general graphs. Initial unit messages are passed across all links; messages are then passed around until convergence (not guaranteed!). Approximate but tractable for large graphs. Sometimes works well, sometimes not at all.
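A hedged sketch of loopy belief propagation on the same kind of binary pairwise model, written with the log-odds messages introduced earlier (the graph, couplings and fields are made-up examples):

```python
import numpy as np

# Loopy BP with log-odds messages on a binary pairwise model (x_i in {-1,+1}).
# All messages start at zero ("unit messages") and are updated until they stop
# changing; on loopy graphs convergence is not guaranteed and beliefs are
# only approximate.

def loopy_bp(h, J, n_iter=200, tol=1e-6):
    N = len(h)
    nbrs = [[j for j in range(N) if j != i and J[i, j] != 0] for i in range(N)]
    m = {(i, j): 0.0 for i in range(N) for j in nbrs[i]}   # half log message ratios
    for _ in range(n_iter):
        new = {}
        for (i, j) in m:
            cavity = h[i] + sum(m[(k, i)] for k in nbrs[i] if k != j)
            new[(i, j)] = np.arctanh(np.tanh(J[i, j]) * np.tanh(cavity))
        converged = max(abs(new[e] - m[e]) for e in m) < tol
        m = new
        if converged:
            break
    # log belief ratios log b_i(+1) / b_i(-1)
    return np.array([2 * (h[i] + sum(m[(k, i)] for k in nbrs[i])) for i in range(N)])

# small loopy example: four binary variables coupled in a square
J = np.zeros((4, 4))
for a, b in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    J[a, b] = J[b, a] = 0.5
h = np.array([1.0, 0.0, -0.5, 0.0])
print("log belief ratios:", loopy_bp(h, J))
```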
Neural code for uncertainty: sampling
Alternative neural code for uncertainty: sampling. Berkes et al., Science 2011.
Learning in graphical models. More generally: learning parameters in latent variable models, with visible and hidden variables. The sum over all hidden configurations is huge!
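The standard way around this huge sum is expectation maximization (stated here for completeness): instead of maximizing

\log p(V \mid \theta) = \log \sum_{H} p(V, H \mid \theta)

directly, EM alternates an E step, q(H) \leftarrow p(H \mid V, \theta^{\mathrm{old}}), with an M step, \theta^{\mathrm{new}} \leftarrow \arg\max_{\theta} \sum_{H} q(H) \log p(V, H \mid \theta), which only ever needs expectations of the complete-data log likelihood.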
Mixture of Gaussians (clustering algorithm). Unsupervised data. Generative model: M possible clusters, each a Gaussian distribution with its own parameters.
Expectation stage: given the current parameters and the data, what are the expected hidden states? (The responsibility of each cluster for each data point.)
Maximization stage: given the responsibilities of each cluster, update the parameters to maximize the likelihood of the data.
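Putting the two stages together, a minimal NumPy sketch of EM for a mixture of isotropic Gaussians (a simplified illustration; the variable names, the isotropic covariance and the toy data are all assumptions):

```python
import numpy as np

def em_gmm(X, M, n_iter=50, seed=0):
    """EM for a mixture of M isotropic Gaussians on data X of shape (N, D)."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    mu = X[rng.choice(N, M, replace=False)]        # initial means
    var = np.full(M, X.var())                      # initial (scalar) variances
    w = np.full(M, 1.0 / M)                        # mixing weights
    for _ in range(n_iter):
        # E step: responsibility of each cluster for each data point
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)            # (N, M)
        log_r = np.log(w) - 0.5 * d2 / var - 0.5 * D * np.log(2 * np.pi * var)
        r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M step: re-estimate weights, means and variances from responsibilities
        Nk = r.sum(axis=0)
        w = Nk / N
        mu = (r.T @ X) / Nk[:, None]
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * d2).sum(axis=0) / (D * Nk)
    return w, mu, var

# toy data: two well-separated clusters
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(3, 1, (100, 2)), rng.normal(-3, 1, (100, 2))])
print(em_gmm(X, M=2)[1])          # learned cluster means
```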
Learning in hidden Markov models. The hidden state causes the observations: the forward model is the sensory likelihood, and inference runs the inverse model.
Example: hidden state = object present or not; observation = receptor spike or not, as a function of time.
Bayesian integration corresponds to leaky integration: a leak term plus synaptic input.
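A hedged sketch of why this holds for a binary hidden state: the exact Bayesian log posterior odds add the log-likelihood ratio of each new observation, while the prediction step acts as a (nonlinear) leak toward the prior, so a plain leaky integrator with a hand-tuned leak tracks the exact computation closely. All transition and spike probabilities below are illustrative.

```python
import numpy as np

# Binary hidden state x_t in {0,1} switching with probabilities eps_on/eps_off;
# the sensor emits a "spike" (s_t = 1) with higher probability when x_t = 1.

eps_on, eps_off = 0.02, 0.02     # transition probabilities
q1, q0 = 0.6, 0.1                # spike probability given x = 1 / x = 0
llr = lambda s: np.log(q1 / q0) if s else np.log((1 - q1) / (1 - q0))

def exact_log_odds(spikes):
    """Exact Bayesian log posterior odds L_t = log p(x_t=1|s_1..t) / p(x_t=0|s_1..t)."""
    L, out = 0.0, []
    for s in spikes:
        # prediction step: nonlinear "leak" pulling L toward the prior odds
        L = np.log((eps_on + (1 - eps_off) * np.exp(L)) /
                   ((1 - eps_on) + eps_off * np.exp(L)))
        # update step: add the log-likelihood ratio of the new observation
        L += llr(s)
        out.append(L)
    return np.array(out)

def leaky_integrator(spikes, leak=0.08):
    """Linear leaky integration of the same input, for comparison."""
    L, out = 0.0, []
    for s in spikes:
        L = (1 - leak) * L + llr(s)
        out.append(L)
    return np.array(out)

rng = np.random.default_rng(1)
spikes = rng.random(200) < 0.3          # arbitrary input spike train
print(np.corrcoef(exact_log_odds(spikes), leaky_integrator(spikes))[0, 1])
```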
Expectation maximization in an HMM, with multiple training sequences. What are the parameters? Transition probabilities and observation probabilities.
Expectation stage E step: belief propagation
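To make the full loop concrete, a hedged NumPy sketch of one EM (Baum-Welch) iteration: the E step above (forward-backward posteriors) followed by the M step that re-estimates transition and observation probabilities from expected counts. Unscaled messages are used for brevity, and all numbers are illustrative.

```python
import numpy as np

def em_step(pi, A, B, sequences):
    """One Baum-Welch iteration over a list of observation sequences."""
    K, S = B.shape
    A_num = np.zeros((K, K)); B_num = np.zeros((K, S)); pi_num = np.zeros(K)
    for obs in sequences:
        T = len(obs)
        alpha = np.zeros((T, K)); beta = np.ones((T, K))
        # E step: unscaled forward-backward messages
        alpha[0] = pi * B[:, obs[0]]
        for t in range(1, T):
            alpha[t] = B[:, obs[t]] * (alpha[t - 1] @ A)
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        lik = alpha[-1].sum()
        gamma = alpha * beta / lik                 # p(z_t = k | obs)
        # expected transition counts from xi[t] = p(z_t = i, z_{t+1} = j | obs)
        for t in range(T - 1):
            xi = (alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None]) / lik
            A_num += xi
        pi_num += gamma[0]
        for t in range(T):
            B_num[:, obs[t]] += gamma[t]
    # M step: normalise the expected counts
    return (pi_num / pi_num.sum(),
            A_num / A_num.sum(axis=1, keepdims=True),
            B_num / B_num.sum(axis=1, keepdims=True))

# usage sketch with two short training sequences of observation symbols
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])
for _ in range(20):
    pi, A, B = em_step(pi, A, B, [[0, 0, 1, 1, 0, 0], [1, 1, 0, 1, 1, 1]])
print(A); print(B)
```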
Using “on-line” expectation maximization, a neuron can adapt to the statistics of its input.
Fast adaptation in single neurons: adaptation to temporal statistics? Fairhall et al., 2001.