Action and active inference: A free-energy formulation

Abstract
This presentation questions the need for reinforcement learning and related paradigms from machine learning when trying to optimise the behaviour of an agent. We show that it is fairly simple to teach an agent complicated and adaptive behaviours under a free-energy principle. This principle suggests that agents adjust their internal states and their sampling of the environment to minimise their free energy. In this context, free energy represents a bound on the probability of being in a particular state, given the nature of the agent, or more specifically the model of the environment that an agent entails. We show that such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. The result is a policy that reproduces exactly the policies optimised by reinforcement learning and dynamic programming. Critically, at no point do we need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming, namely the mountain-car problem, using just the free-energy principle. The ensuing proof of concept is important because the free-energy formulation also provides a principled account of perceptual inference in the brain and furnishes a unified framework for action and perception.
The free-energy principle
[Figure: two complementary perspectives. Left: perception, memory and attention (the Bayesian brain), in which prediction error links sensory input to its causes. Right: action and value learning (optimum control), sketched in terms of a conditioned stimulus (CS), reward (US), action, and stimulus-response (S-R) and stimulus-stimulus (S-S) associations.]
Overview
- The free-energy principle and action
- Active inference and prediction error
- Orientation and stabilization
- Intentional movements
- Cued movements
- Goal-directed movements
- Autonomous movements
- Forward and inverse models
Exchange with the environment
[Figure: an agent (m) and its environment, separated by a Markov blanket. External states influence internal states through sensation; internal states influence external states through action.]
The free-energy principle
- Action to minimise a bound on surprise
- Perception to optimise the bound
- Perceptual inference, perceptual learning and perceptual uncertainty
- The conditional density and the separation of scales
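To make the first two points concrete, here is the standard form of the bound (the notation below is an assumption, following the free-energy literature rather than the slide itself):

```latex
% s: sensory input; \vartheta: environmental causes of sensation;
% m: the agent (its model of the environment);
% q(\vartheta): the recognition density encoded by the agent's internal states.
F = -\ln p(s \mid m)
    + D_{\mathrm{KL}}\big[\, q(\vartheta) \,\big\|\, p(\vartheta \mid s, m) \,\big]
  \;\geq\; -\ln p(s \mid m)
```

Because the divergence is non-negative, free energy upper-bounds surprise: perception tightens the bound by optimising q, while action reduces the bound by changing the sensory input s itself.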
Overview (recap): Active inference and prediction error
Active inference: closing the loop (synergy)
[Figure: a hierarchical model in which top-down messages convey predictions and bottom-up messages convey prediction error; action closes the loop at the sensory level.]
[Figure: action and perception as parallel processes.] Action needs access to sensory-level prediction error.
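One way to see why (a standard gradient formulation under the free-energy principle; the notation is assumed, not taken from the slide): perception and action descend the same free energy, but action can only affect it through the sensations it changes.

```latex
% \mu: internal states; a: action; \Pi_s: sensory precision;
% \xi_s = \Pi_s\,(s - g(\mu)): precision-weighted sensory prediction error.
\dot{\mu} = -\frac{\partial F}{\partial \mu},
\qquad
\dot{a} = -\frac{\partial F}{\partial a}
        = -\left(\frac{\partial s}{\partial a}\right)^{\!\top} \xi_s
```

The only action-dependent quantity in F is the sensory input, so action can suppress free energy only by suppressing prediction error at the sensory level.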
From reflexes to action
[Figure: the spinal reflex arc, labelled with prediction and action pathways - dorsal root and dorsal horn; ventral root and ventral horn.]
Overview (recap): Orientation and stabilization
Active inference under flat priors (movement with percept)
[Figure: a visual stimulus over sensory channels; time courses of sensory prediction and error, hidden states (location), the cause (perturbing force), and the perturbation and action.]
Active inference under tight priors (no movement or percept)
[Figure: time courses of sensory prediction and error, hidden states (location), the cause (perturbing force), and the perturbation and action.]
Retinal stabilisation or tracking induced by priors
[Figure: displacement of the visual stimulus over time, under flat priors and under tight priors, with the action and the perceived versus true (real) perturbation.]
Overview (recap): Intentional movements
Active inference under tight priors (movement and percept)
[Figure: proprioceptive input; time courses of sensory prediction and error, hidden states (location), the cause (prior), and the perturbation and action.]
Self-generated movements induced by priors
[Figure: displacement trajectories over time (real and perceived), with the action and causes (action, the perceived cause or prior, and an exogenous cause). The behaviour is robust to perturbation and to changes in motor gain.]
Overview (recap): Cued movements
Cued movements and sensorimotor integration
[Figure: a jointed arm, extending the scheme from reflexes to action.]
Cued reaching with noisy proprioception
[Figure: reaching trajectory.]
Bayes-optimal integration of sensory modalities
[Figure: position estimates under noisy proprioception and noisy vision, combined according to their conditional precisions.]
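A minimal sketch of the cue combination the figure illustrates, i.e., the Gaussian precision-weighted average (the variable names and noise levels below are illustrative assumptions):

```python
import numpy as np

def fuse(estimates, precisions):
    """Bayes-optimal combination of independent Gaussian estimates:
    the posterior mean is the precision-weighted average, and the
    posterior precision is the sum of the individual precisions."""
    estimates = np.asarray(estimates, dtype=float)
    precisions = np.asarray(precisions, dtype=float)
    mean = float(np.sum(precisions * estimates) / np.sum(precisions))
    return mean, float(np.sum(precisions))

# Illustrative example: noisy proprioceptive and visual position cues.
proprio_est, proprio_prec = 0.52, 1.0 / 0.10**2  # precision = 1/variance
visual_est,  visual_prec  = 0.47, 1.0 / 0.05**2  # vision is less noisy here
mean, prec = fuse([proprio_est, visual_est], [proprio_prec, visual_prec])
print(f"fused position {mean:.3f}, posterior sd {prec ** -0.5:.3f}")
```

The fused estimate is pulled toward the more precise modality, which is why degrading proprioceptive precision shifts reliance onto vision.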
Overview (recap): Goal-directed movements
The mountain-car problem
[Figure: the equations of motion; the height of the landscape and the forces as functions of position; position-velocity nullclines; and the desired location.]
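The slide's exact equations are not legible in this transcript, so the sketch below uses the classic mountain-car dynamics (Moore's formulation with a sin(3x) hill profile; the constants are the conventional ones, not necessarily the talk's):

```python
import numpy as np

def mountain_car_step(x, v, a):
    """One step of the classic mountain-car dynamics.
    The engine (action a in [-1, 1]) is too weak to climb the hill
    directly, so the goal at x = 0.5 can only be reached by rocking
    back and forth to build momentum. With height h(x) ~ sin(3x),
    gravity contributes a force proportional to -cos(3x)."""
    v = v + 0.001 * a - 0.0025 * np.cos(3 * x)
    v = float(np.clip(v, -0.07, 0.07))   # conventional velocity bound
    x = float(np.clip(x + v, -1.2, 0.6)) # conventional position bound
    return x, v

# Start at rest near the bottom of the valley.
x, v = -0.5, 0.0
for _ in range(200):
    x, v = mountain_car_step(x, v, a=1.0)  # constant full throttle
print(f"position after 200 full-throttle steps: {x:.2f} (goal is at x = 0.5)")
```

The point of the benchmark is visible here: the engine force (0.001) is weaker than peak gravity (0.0025), so a naive policy of driving straight at the goal fails, and the car must first move away from the goal to gain momentum.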
[Figure: flow and density nullclines in position-velocity space for the uncontrolled, controlled and expected dynamics.]
[Figure: learning in the controlled environment, followed by active inference in the uncontrolled environment.]
Goal-directed behaviour and trajectories
Using just the free-energy principle and a simple gradient-ascent scheme, we have solved a benchmark problem in optimal control theory with only a handful of learning trials. At no point did we use reinforcement learning or dynamic programming.
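As a toy illustration of this sort of scheme (not the paper's implementation: the one-dimensional world, the identity sensory mapping, the gains, and the assumption that ds/da = 1 are all simplifications), perception and action below perform gradient descent on a single precision-weighted prediction error, while the agent's prior expects its state to drift toward a set-point:

```python
# Toy 1-D active inference: the agent's prior flow pulls its estimate
# toward a set-point; action then fulfils that expectation by suppressing
# sensory prediction error. No reward signal appears anywhere.
dt, target = 0.05, 1.0
x, mu, a = 0.0, 0.0, 0.0      # true state, internal estimate, action

for _ in range(3000):
    s = x                      # sensory mapping g is the identity here
    eps = s - mu               # sensory prediction error
    # Perception: follow the prior flow toward the set-point and
    # absorb the prediction error (gradient descent on free energy).
    mu += dt * (4.0 * (target - mu) + eps)
    # Action: da/dt = -dF/da = -(ds/da) * eps, taking ds/da = 1.
    a += dt * (-eps)
    x += dt * a                # the environment responds to action

print(f"true state {x:.3f}, estimate {mu:.3f} (set-point {target})")
```

The agent never receives a reward; its prior expectation that the state drifts to the set-point is what action fulfils.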
Action under perturbation
[Figure: time courses of prediction and error, hidden states (position and velocity), and the perturbation and action, with the resulting behaviour in position-velocity coordinates.]
Simulating Parkinson's disease?
Overview (recap): Autonomous movements
Learning autonomous behaviour
[Figure: trajectories and densities in position-velocity space, before and after learning, alongside the controlled dynamics.]
Autonomous behaviour under random perturbations
[Figure: time courses of prediction and error, the learnt hidden states (position and velocity), and the perturbation and action.]
Overview (recap): Forward and inverse models
Forward and inverse models
[Figure: two schemes compared. Forward-inverse formulation: desired and inferred states drive an inverse model (control policy) that issues a motor command (action); an efference copy feeds a forward model, whose corollary discharge yields sensory prediction error. Free-energy formulation: desired and inferred states are encoded by a forward (generative) model, and sensory prediction error drives the motor command (action) directly, with the inverse mapping subsumed by the environment.]
Summary
- Free energy can be minimised by action (through changes in the states generating sensory input) or by perception (through optimising the predictions of that input).
- The only way action can suppress free energy is by reducing prediction error at the sensory level (speaking to a juxtaposition of motor and sensory systems).
- Action fulfils expectations. This can manifest as an explaining away of prediction error by re-sampling sensory input (e.g., visual tracking), or as intentional movement fulfilling expectations furnished by empirical priors.
- In an optimum control setting, a training environment can be constructed by minimising the cross-entropy between the ensemble density and some desired density; the resulting dynamics can be learnt and reproduced under active inference.
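The final point can be stated compactly (the symbols are assumptions; this is the standard definition of cross-entropy between densities, not a formula taken from the slide):

```latex
% p(x): ensemble density over states x under the controlled flow;
% q(x): the desired (target) density.
H(p, q) = -\int p(x) \,\ln q(x) \, dx
```

Minimising this cross-entropy shapes the equilibrium density of the controlled environment toward the desired one; the agent can then learn the ensuing dynamics as empirical priors and reproduce them, under active inference, in the uncontrolled environment.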