CDB Exploring Science and Society Seminar Thursday 19 November 2009 at 5.30pm Host: Prof Giorgio Gabella The Bayesian brain, surprise and free-energy.

Presentation transcript:

CDB Exploring Science and Society Seminar, Thursday 19 November 2009 at 5.30pm. Host: Prof Giorgio Gabella. The Bayesian brain, surprise and free-energy. Abstract: Value-learning and perceptual learning have been an important focus over the past decade, attracting the concerted attention of experimental psychologists, neurobiologists and the machine learning community. Despite some formal connections (e.g., the role of prediction error in optimizing some function of sensory states), both fields have developed their own rhetoric and postulates. In this work, we show that perceptual learning is, literally, an integral part of value learning, in the sense that perception is necessary to integrate out dependencies on the inferred causes of sensory information. This enables the value of sensory trajectories to be optimized through action. Furthermore, we show that acting to optimize value and perception are two aspects of exactly the same principle; namely the minimization of a quantity (free energy) that bounds the surprise (negative log-probability) of sensory input, given a particular agent or phenotype. This principle can be derived, in a straightforward way, from the very existence of agents, by considering the probabilistic behavior of an ensemble of agents belonging to the same class.

From the Helmholtz machine and the Bayesian brain to action and self-organization. "Objects are always imagined as being present in the field of vision as would have to be there in order to produce the same impression on the nervous mechanism" - Hermann Ludwig Ferdinand von Helmholtz. [Slide shows portraits of Hermann von Helmholtz, Thomas Bayes, Geoffrey Hinton, Richard Feynman and Hermann Haken.]

Overview
- Ensemble dynamics: entropy and equilibria; free-energy and surprise
- The free-energy principle: action and perception; generative models
- Perception: birdsong and categorization; simulated lesions
- Action: active inference; reaching
- Policies: control and attractors; the mountain-car problem

Particle density contours showing a Kelvin-Helmholtz instability, forming beautiful breaking waves. In the self-sustained state of Kelvin-Helmholtz turbulence the particles are transported away from the mid-plane at the same rate as they fall, but the particle density is nevertheless very clumpy because of a clumping instability caused by the dependence of the particle velocity on the local solids-to-gas ratio (Johansen, Henning & Klahr 2006). [Figure labels: temperature, pH, falling, transport.] Self-organization minimises the entropy of an ensemble density, ensuring that a limited repertoire of states is occupied (i.e., that these states constitute a random attracting set).

How can an active agent minimise its equilibrium entropy? This entropy is bounded by the entropy of its sensory signals (under simplifying assumptions). Crucially, because the density on sensory signals is at equilibrium, it can be interpreted as the proportion of time each agent entertains them (the sojourn time). This ergodic argument means that entropy is the path integral of surprise experienced by a particular agent, which means agents must minimise surprise at all times. But there is one small problem: agents cannot evaluate surprise directly. They can, however, evaluate a free-energy bound on surprise, which is induced with a recognition density q (see the sketch below).
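
A compact way of writing this argument, as a sketch in the notation standard in the free-energy literature rather than a transcription of the slide's equations:

    H(S) = \lim_{T \to \infty} \frac{1}{T} \int_0^T -\ln p(s(t) \mid m)\, dt

    F(s, q) = \langle -\ln p(s, \vartheta \mid m) \rangle_q + \langle \ln q(\vartheta) \rangle_q
            = -\ln p(s \mid m) + D_{\mathrm{KL}}[\, q(\vartheta) \,\|\, p(\vartheta \mid s, m) \,] \;\ge\; -\ln p(s \mid m)

Because the Kullback-Leibler divergence is non-negative, free energy is an upper bound on surprise that the agent can actually evaluate: it depends only on its recognition density q and its generative model p(s, ϑ | m).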

The free-energy principle (schematic): external states in the world give rise to sensations; the internal states of the agent (m) and its actions both change so as to minimise free energy. Action minimises a bound on surprise; perception optimises the bound.

The generative model. The free energy rests on the expected Gibbs energy and can be evaluated, given a generative model comprising a likelihood and a prior. So what models might the brain use?
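
In standard notation (a sketch, not copied from the slide), the free energy can be written in terms of the Gibbs energy G(s, ϑ) = -ln p(s, ϑ | m):

    F = \langle G(s, \vartheta) \rangle_q - H[q(\vartheta)], \qquad G(s, \vartheta) = -\ln p(s \mid \vartheta, m) - \ln p(\vartheta \mid m)

The likelihood p(s | ϑ, m) and prior p(ϑ | m) together constitute the generative model; the next slides consider hierarchical dynamic models as candidates for the generative models the brain might use.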

Processing hierarchy: backward (nonlinear), forward (linear) and lateral connections.

Hierarchical (deep) dynamic models

Hierarchical form of the generative model: likelihood and empirical priors, structural priors and dynamical priors. Gibbs energy is a simple function of prediction errors.
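
A typical hierarchical dynamic model of this kind, sketched in the usual notation (the exact equations on the slide are not reproduced here), has the form

    s = g^{(1)}(x^{(1)}, v^{(1)}) + z^{(1)}
    v^{(i-1)} = g^{(i)}(x^{(i)}, v^{(i)}) + z^{(i)}, \qquad i = 2, \ldots, n
    \dot{x}^{(i)} = f^{(i)}(x^{(i)}, v^{(i)}) + w^{(i)}, \qquad i = 1, \ldots, n

Under Gaussian assumptions about the fluctuations z and w, the Gibbs energy reduces to a sum of precision-weighted squared prediction errors, e.g. \varepsilon_v^{(i)} = v^{(i-1)} - g^{(i)}(x^{(i)}, v^{(i)}).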

The recognition density and its sufficient statistics, under a mean-field approximation and a Laplace approximation. The sufficient statistics map onto neuronal quantities: synaptic activity underlies perception and inference (functional specialization); synaptic efficacy underlies learning and memory (activity-dependent plasticity); synaptic gain underlies attention and salience (attentional gain and the enabling of plasticity).
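
The two approximations named on the slide can be written as follows (a sketch; the conventional partition into states, parameters and precisions is assumed rather than read off the slide):

    q(\vartheta) = \prod_i q(\vartheta_i) \quad \text{(mean-field: e.g. states, parameters and precisions factorise)}
    q(\vartheta_i) = \mathcal{N}(\mu_i, \Sigma_i) \quad \text{(Laplace: each marginal is Gaussian)}

Under the Laplace approximation the means \mu_i are the key sufficient statistics, which is what licenses the mapping from states, parameters and precisions to synaptic activity, efficacy and gain respectively.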

Perception and message-passing: backward connections convey predictions and forward connections convey prediction errors. [Figure also shows the corresponding update rules for synaptic plasticity and synaptic gain.]
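
A minimal numerical sketch of this message-passing, written as my own Python illustration rather than code from the talk; the linear generative model, precisions and step size are assumptions chosen for simplicity:

    import numpy as np

    # Generative model: s = W v + noise, with prior v ~ N(v_prior, 1/pi_v)
    rng = np.random.default_rng(0)
    W = np.array([[1.0, 0.5],
                  [0.2, 1.0]])        # likelihood mapping (assumed known)
    v_prior = np.zeros(2)             # prior expectation of the hidden causes
    pi_s, pi_v = 1.0, 1.0             # sensory and prior precisions (synaptic gain)

    v_true = np.array([1.0, -0.5])
    s = W @ v_true + 0.05 * rng.standard_normal(2)   # observed sensations

    # Perception: gradient descent of free energy with respect to the expected cause mu
    mu = v_prior.copy()
    for step in range(200):
        eps_s = s - W @ mu            # forward message: sensory prediction error
        eps_v = mu - v_prior          # prediction error at the higher level
        dF = -W.T @ (pi_s * eps_s) + pi_v * eps_v   # gradient under Gaussian assumptions
        mu -= 0.05 * dF               # descend the gradient; the backward prediction W @ mu updates implicitly

    print("true causes:", v_true)
    print("inferred mu:", mu)

The ascending error eps_s and the descending prediction W @ mu play the roles of the forward and backward messages on the slide, and the precisions pi_s and pi_v play the role of synaptic gain.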

Synthetic song-birds. [Figure: a vocal centre driving a syrinx; sonogram of the resulting song, frequency against time (sec).]

Recognition and message passing. [Figure: predictions and prediction errors, hidden states and causal states over time (bins), with backward predictions and forward prediction errors, shown alongside the stimulus sonogram (time in seconds).]

Perceptual categorization. [Figure: sonograms (frequency in Hz against time in seconds) for three songs, A, B and C, and the resulting categorization of exemplars.]

Generative models of birdsong: sequences of sequences (Kiebel et al.). [Figure: a two-level neuronal hierarchy driving a syrinx; sonogram, frequency (kHz) against time (sec).]
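
Kiebel et al. model such "sequences of sequences" with a hierarchy of attractors, in which a slowly evolving attractor modulates a faster one. The sketch below is my own Python illustration of that idea; the Lorenz form is taken from that line of work, but the particular coupling, rates and parameters are assumptions:

    import numpy as np

    def lorenz(state, rho, sigma=10.0, beta=8.0 / 3.0):
        # Standard Lorenz equations of motion
        x, y, z = state
        return np.array([sigma * (y - x),
                         x * (rho - z) - y,
                         x * y - beta * z])

    dt = 0.002
    slow = np.array([1.0, 1.0, 28.0])   # high-level (slow) attractor state
    fast = np.array([1.0, 1.0, 28.0])   # low-level (fast) attractor state

    trajectory = []
    for t in range(50000):
        # The higher level evolves on a slower timescale...
        slow = slow + dt * 0.1 * lorenz(slow, rho=28.0)
        # ...and one of its states sets a control parameter of the lower level
        rho_fast = 24.0 + 0.2 * slow[2]          # assumed form of the coupling
        fast = fast + dt * lorenz(fast, rho=rho_fast)
        trajectory.append(fast.copy())

    trajectory = np.array(trajectory)
    print(trajectory.shape)   # the fast states would drive the simulated song

In the original work the fast states drive the syrinx (roughly, the amplitude and frequency of the song); here the trajectory is simply collected to show the two nested timescales.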

Simulated lesion studies: a model for false inference in psychopathology? [Figure: percepts (frequency in Hz against time in seconds) and simulated LFPs (microvolts against peristimulus time in ms) for the intact model, for a model without structural priors and for a model without dynamical priors.]

From reflexes to action. [Figure: a reflex arc, with sensory signals entering via the dorsal root and action leaving via the ventral horn; descending predictions; the true dynamics of the world shown alongside the agent's generative model.]
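
In the free-energy formulation, action is usually expressed as a gradient descent of free energy on the variables the agent can change, which (under Gaussian assumptions) reduces to suppressing sensory prediction error. A sketch in standard notation, not transcribed from the slide:

    \dot{a} = -\frac{\partial F}{\partial a} = -\left(\frac{\partial s}{\partial a}\right)^{\!\top} \Pi_s\, \varepsilon_s, \qquad \varepsilon_s = s - g(\mu)

In other words, motor commands fulfil descending (proprioceptive) predictions, in just the way a classical reflex arc cancels the mismatch between predicted and actual sensations.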

From reflexes to action. [Figure: a jointed arm reaching a target; the movement trajectory is driven by descending sensory prediction errors, given visual and proprioceptive input.]

How do policies minimise entropy? [Figure: a table relating energies to their path integrals under ergodic assumptions: sensory prediction error, sensory surprise, surprise, free energy, complexity and perceptual divergence on one side; sensory entropy, entropy and free action on the other; with perception, policy (model) and action indicated as the processes that minimise them.]
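
The key relation implied here, sketched in standard notation under an ergodic assumption, is that the path integral of free energy (free action) bounds sensory entropy, so minimising free energy over time minimises the entropy of sensory states:

    \frac{1}{T} \int_0^T F(s(t), q(t))\, dt \;\ge\; \frac{1}{T} \int_0^T -\ln p(s(t) \mid m)\, dt \;\longrightarrow\; H(S) \quad (T \to \infty)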

Cost-functions, value and optimal control (policies that lead to sparse distal goals). Using the Helmholtz decomposition, flow (i.e., the policy) can be expressed in terms of scalar and vector potentials, where the value V is proportional to negative surprise and can be defined as expected (negative) cost. This means the cost-function is defined by the equilibrium density, but not vice versa; recovering a policy from a cost-function is the problem addressed by dynamic programming and reinforcement learning. [Portrait: Richard Bellman.]
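
A sketch of the decomposition in question (signs and the treatment of the diffusion tensor Γ vary across presentations, so this should be read as illustrative):

    f(x) = \Gamma \nabla V(x) + \nabla \times W(x), \qquad V(x) \propto \ln p(x \mid m)

The curl-free part Γ∇V drives the flow up the value gradient (towards states of low surprise), while the divergence-free part ∇×W circulates on iso-probability contours; dynamic programming and reinforcement learning try to recover V, and hence a flow, from a given cost-function.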

Cost-functions and attracting sets (policies with attractors). At equilibrium, the density satisfies the stationary Fokker-Planck equation, which means that maxima of the equilibrium density must have negative divergence of flow. We can exploit this to ensure that the maxima lie in the desired set A, using a Langevin-based policy in which cost plays the role of dissipation. [Portraits: Adriaan Fokker, Max Planck.]
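
The missing step can be sketched as follows (a standard argument, not copied from the slide): at steady state the Fokker-Planck equation gives

    \nabla \cdot (\Gamma \nabla p - f p) = 0

and at a maximum of p (where ∇p = 0 and ∇²p ≤ 0) this reduces to

    p\, \nabla \cdot f = \Gamma \nabla^2 p \le 0 \;\;\Rightarrow\;\; \nabla \cdot f \le 0

so peaks of the equilibrium density can only sit where the flow has non-positive divergence, which is what allows cost to be treated as dissipation.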

Exploration and exploitation under Langevin dynamics. [Figure: the equations of motion, with exploitative and exploratory regimes indicated.]

The mountain car problem. [Figure: the environment, with height as a function of position and its true equations of motion; the cost-function over position and satiety; and the policy, i.e. the expected equations of motion.]
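
For reference, the true equations of motion of the standard mountain-car benchmark look roughly like the following Python sketch; this is a common textbook parameterisation, not necessarily the exact form used in the talk:

    import numpy as np

    def mountain_car_step(position, velocity, force, dt=1.0):
        # The gravity term comes from the slope of the hill; the engine (force in [-1, 1])
        # is too weak to climb the right-hand hill directly, so momentum must be built up.
        velocity = velocity + dt * (0.001 * force - 0.0025 * np.cos(3.0 * position))
        velocity = float(np.clip(velocity, -0.07, 0.07))
        position = float(np.clip(position + dt * velocity, -1.2, 0.6))
        if position <= -1.2:          # inelastic collision with the left wall
            velocity = 0.0
        return position, velocity

    position, velocity = -0.5, 0.0    # start at rest in the valley
    for t in range(200):
        position, velocity = mountain_car_step(position, velocity, force=1.0)  # naive full throttle
    print(position, velocity)         # full throttle alone does not reach the goal at position 0.6

Roughly speaking, the approach in the talk makes the low-cost location a fixed point of the expected (generative-model) dynamics, so that active inference, rather than reinforcement learning, pulls the true trajectory towards it.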

Learning the environment, with no cost (i.e., Hamiltonian dynamics). [Figure: predictions, prediction errors and hidden states (position, velocity) over time; the trajectory of one trial; and the learned potential (after 16 trials) against the true potential, plotted as height against position.]

Exploring and exploiting the environment, with cost (i.e., exploratory dynamics). [Figure: conditional expectations, action (force) and trajectories in position and velocity over time, under the cost-function that serves as priors.]

Adaptive policies and trajectories. Using just the free-energy principle and a simple gradient ascent scheme, we have solved a benchmark problem in optimal control theory in a handful of learning trials. Note that we do not use reinforcement learning or dynamic programming.

Self-organisation with (happiness) dynamics on cost. [Figure: action, trajectories (position, velocity, satiety), predictions and prediction errors, and expected hidden states over time; the policy (expected flow) against the true flow.]

Thank you. And thanks to collaborators Jean Daunizeau, Stefan Kiebel, James Kilner and Klaas Stephan, and to colleagues Peter Dayan, Jörn Diedrichsen, Paul Verschure and Florentin Wörgötter.