
@ NeurIPS 2016 Tim Dunn, May 17 2019.


1 @ NeurIPS 2016 Tim Dunn, May 17 2019.

2 ~10,000 recorded neurons. How can we deconstruct this complex network activity to shed light on brain mechanisms?

3 Hypothesis: Neural firing rates can be explained as linear transformations, followed by exponentiation, of low-dimensional temporal factors. These factors evolve with non-linear dynamics that depend on unobserved external input.
Let v = W(u) denote the affine transformation v = Wu + b. With f_t the values of the low-D factors at time t, the firing rates for all neurons at time t are obtained by exponentiating a linear transformation of f_t (the exponentiation keeps rates positive), and the observed binned spike counts for all neurons at time t are generated from those rates. The factors evolve as f_t = F(f_{t-1}, u_t), where F is a non-linear function and u_t is some unobserved input (e.g. input from another brain area).
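A minimal sketch of this generative hypothesis in numpy (all names, sizes, and the Poisson spike draw are illustrative assumptions, not taken from the slide): factors evolve under a non-linear map F driven by unobserved input, rates are an exponentiated affine function of the factors, and binned spike counts are sampled from those rates.

import numpy as np

rng = np.random.default_rng(0)
T, n_factors, n_neurons = 100, 8, 50                  # illustrative sizes
W_rate = rng.normal(0, 0.1, (n_neurons, n_factors))
b_rate = rng.normal(0, 0.1, n_neurons)

def F(f_prev, u_t):
    # placeholder non-linear dynamics; the next slides model F with a GRU
    return np.tanh(f_prev + u_t)

f = np.zeros(n_factors)                               # initial low-D factors
spikes = np.empty((T, n_neurons), dtype=np.int64)
for t in range(T):
    u_t = rng.normal(0, 0.1, n_factors)               # stand-in for unobserved input u_t
    f = F(f, u_t)                                     # f_t = F(f_{t-1}, u_t)
    rates = np.exp(W_rate @ f + b_rate)               # exponentiation keeps rates positive
    spikes[t] = rng.poisson(rates)                    # binned spike counts for all neurons at time t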

4 F is modeled with a GRU, and f as a linear transformation of the GRU state. From some initial state, this model can be run forward in time to generate neural spiking data. Gaussian priors are placed on the initial state and on the unobserved inputs.

5 F is modeled with a GRU, and f as a linear transformation of the GRU state. From some initial state, this model can be run forward in time to generate neural spiking data.
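A rough PyTorch sketch of slides 4-5 (layer names, sizes, and the Poisson spike draw are assumptions for illustration): the dynamics F are a GRU cell, the factors f_t are a linear read-out of the GRU state, and the model is rolled forward from an initial state to generate spiking data.

import torch
import torch.nn as nn

factor_dim, gen_dim, input_dim, n_neurons, T = 8, 64, 4, 50, 100  # illustrative sizes

gru = nn.GRUCell(input_dim, gen_dim)          # F: non-linear dynamics on the generator state
to_factors = nn.Linear(gen_dim, factor_dim)   # f_t: linear transformation of the GRU state
to_lograte = nn.Linear(factor_dim, n_neurons)

g = torch.zeros(1, gen_dim)                   # initial state (Gaussian prior on g_0 in the model)
spikes = []
for t in range(T):
    u_t = 0.1 * torch.randn(1, input_dim)     # unobserved input u_t (Gaussian prior in the model)
    g = gru(u_t, g)                           # g_t = GRU(g_{t-1}, u_t)
    f = to_factors(g)                         # low-D factors f_t
    rates = torch.exp(to_lograte(f))          # exponentiation keeps firing rates positive
    spikes.append(torch.poisson(rates))       # generated binned spike counts at time t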

6 But what we would like to do is infer g_0 and u_t (and thus g_t) for all time points, given the observed spiking patterns in the neural population. To do this, the authors use a VAE strategy to find approximate posteriors. (The model can also take in additional extrinsic information that can affect firing patterns, like a visual stimulus.)

7 Encoder network for g_0. The posterior mean and variance are computed from an encoding E, with E obtained by running a GRU forward and backward in time over all the data, and with the initial RNN states as additional learnable parameters.
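A hand-rolled sketch of what such an encoder could look like (the exact read-out and parameterization are assumptions): a bidirectional GRU runs over the full spike sequence, its final forward and backward states form E, and linear maps of E give a posterior mean and log-variance for g_0, from which g_0 is drawn with the reparameterization trick. The learnable initial RNN states mentioned on the slide are omitted here for brevity (PyTorch defaults them to zeros).

import torch
import torch.nn as nn

n_neurons, enc_dim, g0_dim, T = 50, 64, 32, 100   # illustrative sizes

enc_rnn = nn.GRU(n_neurons, enc_dim, bidirectional=True, batch_first=True)
to_mean = nn.Linear(2 * enc_dim, g0_dim)
to_logvar = nn.Linear(2 * enc_dim, g0_dim)

x = torch.poisson(torch.ones(1, T, n_neurons))    # stand-in spike data (batch, time, neurons)
_, h_last = enc_rnn(x)                            # final hidden states of the forward and backward passes
E = torch.cat([h_last[0], h_last[1]], dim=-1)     # E: GRU run forward and backward over all data
mu, logvar = to_mean(E), to_logvar(E)             # posterior mean and (log-)variance for g_0
g0 = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterized sample of g_0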

8 Encoder network for u_t. A bidirectional GRU is used again, this time to generate a time-dependent variable which, rather than being fed into a Gaussian, is fed into another GRU (the "controller"). The dependence on f introduces the conditioning on g_0 and u_{1:t-1}.
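A rough sketch of the controller step (names and sizes are assumptions): a second bidirectional GRU produces a time-dependent encoding; at each time step the controller GRU consumes that encoding together with the previous factors f_{t-1}, which is what introduces the conditioning on g_0 and u_{1:t-1}; linear read-outs of the controller state then parameterize the posterior over u_t.

import torch
import torch.nn as nn

n_neurons, enc_dim, con_dim, factor_dim, u_dim, T = 50, 64, 64, 8, 4, 100  # illustrative sizes

enc_rnn = nn.GRU(n_neurons, enc_dim, bidirectional=True, batch_first=True)
controller = nn.GRUCell(2 * enc_dim + factor_dim, con_dim)
to_u_mean = nn.Linear(con_dim, u_dim)
to_u_logvar = nn.Linear(con_dim, u_dim)

x = torch.poisson(torch.ones(1, T, n_neurons))    # stand-in spike data
e_seq, _ = enc_rnn(x)                             # time-dependent encoding, shape (1, T, 2*enc_dim)
c = torch.zeros(1, con_dim)                       # controller state
f_prev = torch.zeros(1, factor_dim)               # f_{t-1}, fed back from the generator's factor read-out
u_samples = []
for t in range(T):
    c = controller(torch.cat([e_seq[:, t], f_prev], dim=-1), c)
    mu, logvar = to_u_mean(c), to_u_logvar(c)     # posterior parameters for u_t
    u_t = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    u_samples.append(u_t)
    # in the full model, u_t drives the generator GRU, whose factors become the next f_prev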

9 Encoder network for u_t (continued): finally, the approximate posterior for u_t is read out from the controller state (the equations on this slide were not transcribed).

10 Full LFADS, with the initial state sampled from the approximate posterior over g_0.

11 Full LFADS loss function
Maximize the lower bound on the marginal data log-likelihood (the variational evidence lower bound, ELBO).
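A sketch of the objective in generic VAE terms (the paper's exact KL weighting and regularizers are omitted): the reconstruction term is the Poisson log-likelihood of the observed spikes given the inferred rates, and KL terms pull the posteriors over g_0 and u_t toward their Gaussian priors; training minimizes the negative of this lower bound.

import torch

def gaussian_kl(mu, logvar):
    # KL( N(mu, exp(logvar)) || N(0, I) ), summed over all dimensions
    return 0.5 * torch.sum(mu ** 2 + torch.exp(logvar) - logvar - 1.0)

def neg_elbo(spikes, rates, g0_mu, g0_logvar, u_mu, u_logvar):
    # Poisson reconstruction log-likelihood of the spikes given the inferred rates
    # (the constant log(x!) term is dropped); rates must be positive
    log_lik = torch.sum(spikes * torch.log(rates) - rates)
    kl = gaussian_kl(g0_mu, g0_logvar) + gaussian_kl(u_mu, u_logvar)
    return -(log_lik - kl)    # minimizing this maximizes the lower bound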

12 Full LFADS

13 Experiments

14 Experiments

