Free energy principle


The free energy principle is a formal statement that explains how living and non-living systems remain in non-equilibrium steady-states by restricting themselves to a limited number of states. It establishes that systems minimise a free energy function of their internal states, which entail beliefs about hidden states in their environment. The implicit minimisation of free energy is formally related to variational Bayesian methods and was originally introduced by Karl Friston as an explanation for embodied perception in neuroscience, where it is also known as active inference.
The free energy principle explains the existence of a given system by modelling it through a Markov blanket that separates it from its environment: the system tries to minimise the difference between its model of the world and its sensory input and associated perception. This difference can be described as "surprise" and is minimised by continuously correcting the system's model of the world. As such, the principle is based on the Bayesian idea of the brain as an "inference engine". Friston added a second route to minimisation: action. By actively changing the world into the expected state, systems can also minimise their free energy. Friston assumes this to be the principle underlying all biological reactions. Friston also believes his principle applies to mental disorders as well as to artificial intelligence. AI implementations based on the active inference principle have shown advantages over other methods.
The free energy principle has been criticized for being very difficult to understand, even for experts. Discussions of the principle have also been criticized as invoking metaphysical assumptions far removed from a testable scientific prediction, making the principle unfalsifiable. In a 2018 interview, Friston acknowledged that the free energy principle is not properly falsifiable: "the free energy principle is what it is — a principle. Like Hamilton’s Principle of Stationary Action, it cannot be falsified. It cannot be disproven. In fact, there’s not much you can do with it, unless you ask whether measurable systems conform to the principle."

Background

The notion that self-organising biological systems – like a cell or brain – can be understood as minimising variational free energy is based upon Helmholtz’s work on unconscious inference and subsequent treatments in psychology and machine learning. Variational free energy is a function of observations and a probability density over their hidden causes. This variational density is defined in relation to a probabilistic model that generates predicted observations from hypothesized causes. In this setting, free energy provides an approximation to Bayesian model evidence. Therefore, its minimisation can be seen as a Bayesian inference process. When a system actively makes observations to minimise free energy, it implicitly performs active inference and maximises the evidence for its model of the world.
However, free energy is also an upper bound on the self-information of outcomes, where the long-term average of surprise is entropy. This means that if a system acts to minimise free energy, it will implicitly place an upper bound on the entropy of the outcomes – or sensory states – it samples.
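This bound can be verified numerically. The sketch below (in Python, with arbitrary illustrative numbers) builds a two-state discrete model and checks that variational free energy upper-bounds surprise for any variational density, with equality when that density is the exact posterior.

```python
import numpy as np

# Minimal discrete model (illustrative numbers): two hidden states, one observation.
prior = np.array([0.7, 0.3])          # p(psi)
likelihood = np.array([0.2, 0.9])     # p(s | psi) for the observed outcome s

joint = prior * likelihood            # p(s, psi)
evidence = joint.sum()                # p(s) = model evidence
surprise = -np.log(evidence)          # self-information of the outcome

posterior = joint / evidence          # exact posterior p(psi | s)

def free_energy(q):
    """F(q) = E_q[ln q(psi) - ln p(s, psi)] = surprise + KL[q || posterior]."""
    return np.sum(q * (np.log(q) - np.log(joint)))

q_arbitrary = np.array([0.5, 0.5])
assert free_energy(q_arbitrary) >= surprise           # F is an upper bound on surprise
assert np.isclose(free_energy(posterior), surprise)   # the bound is tight at the posterior
```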

Relationship to other theories

Active inference is closely related to the good regulator theorem and related accounts of self-organisation, such as self-assembly, pattern formation, autopoiesis and practopoiesis. It addresses the themes considered in cybernetics, synergetics and embodied cognition. Because free energy can be expressed as the expected energy of observations under the variational density minus its entropy, it is also related to the maximum entropy principle. Finally, because the time average of energy is action, the principle of minimum variational free energy is a principle of least action.

Definition

Definition: Active inference rests on the tuple $(\Omega, \Psi, S, A, R, q, p)$:

- A sample space $\Omega$, from which random fluctuations $\omega \in \Omega$ are drawn
- Hidden or external states $\Psi : \Psi \times A \times \Omega \to \mathbb{R}$, which depend on action and random fluctuations
- Sensory states $S : \Psi \times A \times \Omega \to \mathbb{R}$, a probabilistic mapping from action and hidden states
- Action $A : S \times R \to \mathbb{R}$, which depends on sensory and internal states
- Internal states $R : R \times S \to \mathbb{R}$, which cause action and depend on sensory states
- Generative density $p(s, \psi \mid m)$ over sensory and hidden states under a generative model $m$
- Variational density $q(\psi \mid \mu)$ over hidden states, parameterised by internal states $\mu \in R$

The objective is to maximise model evidence $p(s \mid m)$ or, equivalently, minimise surprise $-\ln p(s \mid m)$. This generally involves an intractable marginalisation over hidden states, so surprise is replaced with an upper bound, the variational free energy. However, this means that internal states must also minimise free energy, because free energy is a function of sensory and internal states:

$$F(s, \mu) \triangleq \underbrace{E_q[-\ln p(s, \psi \mid m)]}_{\text{energy}} - \underbrace{H[q(\psi \mid \mu)]}_{\text{entropy}} = \underbrace{-\ln p(s \mid m)}_{\text{surprise}} + \underbrace{D_{\mathrm{KL}}[q(\psi \mid \mu) \parallel p(\psi \mid s, m)]}_{\text{divergence}} \geq -\ln p(s \mid m)$$

This induces a dual minimisation with respect to action and internal states, corresponding to action and perception respectively:

$$a(t) = \underset{a}{\arg\min}\, F(s(t), \mu(t)) \qquad \mu(t) = \underset{\mu}{\arg\min}\, F(s(t), \mu)$$
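As an illustration of this dual minimisation, here is a minimal sketch under an assumed Gaussian (Laplace) model with unit variances: the generative model is $p(s \mid \psi) = \mathcal{N}(\psi, 1)$ with prior $p(\psi) = \mathcal{N}(\eta, 1)$, and action is assumed to displace the hidden state directly. Perception descends the free energy gradient in the internal state, and action descends it in the sensory input.

```python
import numpy as np

# Quadratic free energy under the Gaussian assumption (constants dropped):
#   F(s, mu) = (s - mu)^2 / 2 + (mu - eta)^2 / 2
eta = 0.0    # prior expectation about the hidden state
psi = 2.0    # true hidden state (unknown to the agent)
mu = 0.0     # internal state: posterior expectation of the hidden state
a = 0.0      # action, assumed to displace the hidden state additively

def sense(x):
    return x + 0.05 * np.random.randn()        # noisy sensory mapping

lr = 0.1
for _ in range(200):
    s = sense(psi + a)
    mu -= lr * (-(s - mu) + (mu - eta))        # perception: descend dF/dmu
    a  -= lr * (s - mu)                        # action: descend dF/da = dF/ds (ds/da = 1 assumed)

# At convergence mu ~ 0 and a ~ -2: beliefs match the prior, and action has
# changed the world so that sensations conform to those beliefs.
```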

Free energy minimisation

Free energy minimisation and self-organisation

Free energy minimisation has been proposed as a hallmark of self-organising systems when these are cast as random dynamical systems. This formulation rests on a Markov blanket (comprising action and sensory states) that separates internal and external states. If internal states and action minimise free energy, then they place an upper bound on the entropy of sensory states:

$$\lim_{T \to \infty} \frac{1}{T} \int_0^T F(s(t), \mu(t)) \, dt \;\geq\; \lim_{T \to \infty} \frac{1}{T} \int_0^T -\ln p(s(t) \mid m) \, dt = H[p(s \mid m)]$$

This is because, under ergodic assumptions, the long-term average of surprise is entropy. This bound resists the natural tendency to disorder associated with the second law of thermodynamics and the fluctuation theorem.

Free energy minimisation and Bayesian inference

All Bayesian inference can be cast in terms of free energy minimisation. When free energy is minimised with respect to internal states, the Kullback–Leibler divergence between the variational and posterior density over hidden states is minimised. This corresponds to approximate Bayesian inference, when the form of the variational density is fixed, and exact Bayesian inference otherwise. Free energy minimisation therefore provides a generic description of Bayesian inference and filtering. It is also used in Bayesian model selection, where free energy can be usefully decomposed into complexity and accuracy:

$$F(s, \mu) = \underbrace{D_{\mathrm{KL}}[q(\psi \mid \mu) \parallel p(\psi \mid m)]}_{\text{complexity}} - \underbrace{E_q[\ln p(s \mid \psi, m)]}_{\text{accuracy}}$$

Models with minimum free energy provide an accurate explanation of data, under complexity costs. Here, complexity is the divergence between the variational density and prior beliefs about hidden states.
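The decomposition is an algebraic identity and easy to check numerically; the sketch below reuses the two-state discrete model from above.

```python
import numpy as np

# Two-state model as before (illustrative numbers).
prior = np.array([0.7, 0.3])
likelihood = np.array([0.2, 0.9])
joint = prior * likelihood

q = np.array([0.4, 0.6])                       # an arbitrary variational density

complexity = np.sum(q * np.log(q / prior))     # KL[q(psi) || p(psi | m)]
accuracy = np.sum(q * np.log(likelihood))      # E_q[ln p(s | psi, m)]
F_direct = np.sum(q * (np.log(q) - np.log(joint)))

assert np.isclose(F_direct, complexity - accuracy)
```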

Free energy minimisation and thermodynamics

Variational free energy is an information theoretic functional and is distinct from thermodynamic free energy. However, the complexity term of variational free energy shares the same fixed point as Helmholtz free energy. This is because if sensory perturbations are suspended, complexity is minimised. At this point, the system is at equilibrium and internal states minimise Helmholtz free energy, by the principle of minimum energy.

Free energy minimisation and information theory

Free energy minimisation is equivalent to maximising the mutual information between sensory states and internal states that parameterise the variational density. This relates free energy minimization to the principle of minimum redundancy and related treatments using information theory to describe optimal behaviour.

Free energy minimisation in neuroscience

Free energy minimisation provides a useful way to formulate normative models of neuronal inference and learning under uncertainty and therefore subscribes to the Bayesian brain hypothesis. The neuronal processes described by free energy minimisation depend on the nature of the hidden states, which can comprise time-dependent variables, time-invariant parameters and the precision of random fluctuations. Minimising free energy with respect to variables, parameters and precision corresponds to inference, learning and the encoding of uncertainty, respectively.

Perceptual inference and categorisation

Free energy minimisation formalises the notion of unconscious inference in perception and provides a normative theory of neuronal processing. The associated process theory of neuronal dynamics is based on minimising free energy through gradient descent. This corresponds to generalised Bayesian filtering:

$$\dot{\tilde{\mu}} = D\tilde{\mu} - \partial_{\mu} F(s, \mu) \Big|_{\mu = \tilde{\mu}}$$

where $D$ is a derivative matrix operator that acts on the generalised coordinates of motion $\tilde{\mu} = (\mu, \mu', \mu'', \ldots)$.
Usually, the generative models that define free energy are non-linear and hierarchical. Special cases of generalised filtering include Kalman filtering, which is formally equivalent to predictive coding – a popular metaphor for message passing in the brain. Under hierarchical models, predictive coding involves the recurrent exchange of ascending prediction errors and descending predictions that is consistent with the anatomy and physiology of sensory and motor systems.
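A minimal sketch of this scheme, assuming a one-level Gaussian model with a nonlinear mapping (the tanh form and the precision values are illustrative choices, not part of the source): inference is a gradient descent on free energy, driven by precision-weighted prediction errors.

```python
import numpy as np

# Assumed generative model: s = g(psi) + noise, with prior psi ~ N(eta, 1/pi_p).
def g(mu):
    return np.tanh(mu)            # illustrative nonlinear mapping

def g_prime(mu):
    return 1.0 - np.tanh(mu) ** 2

eta, pi_p, pi_s = 0.0, 1.0, 10.0  # prior mean; prior and sensory precisions
s = 0.7                           # observed sensory sample
mu = eta                          # start inference at the prior mean

for _ in range(500):
    eps_s = pi_s * (s - g(mu))    # ascending (sensory) prediction error
    eps_p = pi_p * (mu - eta)     # prediction error on the prior
    mu += 0.01 * (g_prime(mu) * eps_s - eps_p)   # descend the free energy gradient

print(mu)   # posterior mode, balancing the prior against precise sensory evidence
```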

Perceptual learning and memory

In predictive coding, optimising model parameters through a gradient ascent on the time integral of free energy reduces to associative or Hebbian plasticity and is associated with synaptic plasticity in the brain.
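For instance, if the generative mapping has an unknown weight $w$ with $g(\psi) = w\psi$, the free energy gradient with respect to $w$ is the product of the prediction error and the presynaptic state, i.e. an associative (delta-rule) update. A minimal sketch, treating the cause as known for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)
w_true, w = 2.0, 0.1              # true and learned generative weights
for _ in range(2000):
    psi = rng.standard_normal()                   # known cause, for simplicity
    s = w_true * psi + 0.1 * rng.standard_normal()
    eps = s - w * psi                             # prediction error
    w += 0.01 * eps * psi                         # error x presynaptic activity
print(w)                                          # approaches w_true
```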

Perceptual precision, attention and salience

Optimising the precision parameters corresponds to optimising the gain of prediction errors. In neuronally plausible implementations of predictive coding, this corresponds to optimising the excitability of superficial pyramidal cells and has been interpreted in terms of attentional gain.
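A sketch of precision optimisation under a simple Gaussian assumption: with free energy $F(\pi) = \tfrac{\pi}{2}\langle \varepsilon^2 \rangle - \tfrac{1}{2}\ln \pi$ averaged over prediction errors $\varepsilon$ (the negative log-likelihood of zero-mean Gaussian errors with precision $\pi$), the minimum lies at $\pi = 1/\langle \varepsilon^2 \rangle$, so the inferred precision sets the gain on prediction errors according to how reliable they are.

```python
import numpy as np

rng = np.random.default_rng(1)
errors = 0.5 * rng.standard_normal(10_000)   # residual prediction errors, variance 0.25
pi = 1.0                                     # initial precision estimate

for _ in range(100):
    dF_dpi = np.mean(errors ** 2) / 2 - 1 / (2 * pi)
    pi -= 2.0 * dF_dpi                       # gradient descent on free energy

print(pi, 1 / np.mean(errors ** 2))          # both close to 1/Var(errors) = 4
```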
Concerning the top-down versus bottom-up controversy, which has been addressed as a major open problem of attention, a computational model has succeeded in illustrating the circular reciprocation between top-down and bottom-up mechanisms. Using an established emergent model of attention, SAIM, the authors proposed a model called PE-SAIM, which, in contrast to the standard version, approaches selective attention from a top-down stance. The model takes into account forwarded prediction errors, sent to the same level or the level above, in order to minimise the energy function that expresses the difference between the data and its cause or, in other words, between the generative model and the posterior. To enhance validity, they also incorporated neural competition between stimuli into their model. A notable feature of this model is that the free energy function is reformulated only in terms of prediction errors during task performance; in this formulation, the total energy function of the networks involved is expressed through the prediction errors between the generative model (prior) and the posterior, which change over time.
Comparing the two models reveals a notable similarity between their results, together with a notable difference: in the standard version of SAIM the architecture rests on excitatory connections, whereas in PE-SAIM inhibitory connections are leveraged in the course of Bayesian inference. The model has also proved able to predict EEG and fMRI data drawn from human experiments.

Active inference

When gradient descent is applied to action, motor control can be understood in terms of classical reflex arcs that are engaged by descending predictions. This provides a formalism that generalizes the equilibrium point solution – to the degrees of freedom problem – to movement trajectories.

Active inference and optimal control

Active inference is related to optimal control by replacing value or cost-to-go functions with prior beliefs about state transitions or flow. This exploits the close connection between Bayesian filtering and the solution to the Bellman equation. However, active inference starts with (priors over) flow $f = \Gamma \cdot \nabla V + \nabla \times W$ that are specified in terms of scalar $V(x)$ and vector $W(x)$ value functions of state space (c.f., the Helmholtz decomposition). Here, $\Gamma$ is the amplitude of random fluctuations and cost is $c(x) = f \cdot \nabla V + \nabla \cdot \Gamma \cdot V$. The priors over flow induce a prior over states $p(x \mid m) = \exp(V(x))$ that is the solution to the appropriate forward Kolmogorov equations. In contrast, optimal control optimises the flow given a cost function, under the assumption that $W = 0$ (i.e., the flow is curl-free). Usually, this entails solving backward Kolmogorov equations.

Active inference and optimal decision (game) theory

Optimal decision problems (usually formulated as partially observable Markov decision processes) are treated within active inference by absorbing utility functions into prior beliefs. In this setting, states that have a high utility are states an agent expects to occupy. By equipping the generative model with hidden states that model control, policies that minimise variational free energy lead to high utility states.
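A minimal sketch of this idea (with toy numbers; only the pragmatic term of expected free energy is scored, omitting the epistemic term): utilities over outcomes are converted into a log prior preference, and each policy is evaluated by the log preference it expects to realise.

```python
import numpy as np

# Predicted outcome distributions under two policies (toy numbers).
p_outcome_given_policy = np.array([[0.9, 0.1],    # policy 0: mostly outcome 0
                                   [0.2, 0.8]])   # policy 1: mostly outcome 1

utility = np.array([0.0, 3.0])                      # outcome 1 is preferred
log_pref = utility - np.log(np.exp(utility).sum())  # softmax utility -> log p(o | m)

# Pragmatic (extrinsic) value: expected log preference under each policy.
scores = p_outcome_given_policy @ log_pref
print(np.argmax(scores))   # selects policy 1, which realises the preferred outcome
```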
Neurobiologically, neuromodulators like dopamine are considered to report the precision of prediction errors by modulating the gain of principal cells encoding prediction error. This is closely related to – but formally distinct from – the role of dopamine in reporting prediction errors per se and related computational accounts.

Active inference and cognitive neuroscience

Active inference has been used to address a range of issues in cognitive neuroscience, brain function and neuropsychiatry, including: action observation, mirror neurons, saccades and visual search, eye movements, sleep, illusions, attention, action selection, consciousness, hysteria and psychosis. Explanations of action in active inference often depend on the idea that the brain has 'stubborn predictions' which it cannot update, leading to actions that cause these predictions to come true.