Universal approximation theorem


In the mathematical theory of artificial neural networks, universal approximation theorems are results that establish the density of an algorithmically generated class of functions within a given function space of interest. Typically, these results concern the approximation capabilities of the feedforward architecture on the space of continuous functions between two Euclidean spaces, with the approximation measured in the topology of compact convergence. However, there are also a variety of results between non-Euclidean spaces and for other commonly used architectures and, more generally, algorithmically generated sets of functions, such as the convolutional neural network architecture, radial basis functions, or neural networks with specific properties. Most universal approximation theorems can be parsed into two classes. The first line of research quantifies the approximation capabilities of neural networks with an arbitrary number of artificial neurons ("arbitrary width"), while the second focuses on the case of an arbitrary number of hidden layers, each containing a limited number of artificial neurons ("arbitrary depth").
Such theorems thus imply that neural networks can represent a wide variety of interesting functions when given appropriate weights. On the other hand, they typically do not provide a construction for the weights, but merely state that such a construction is possible.

History

One of the first versions of the arbitrary width case was proved by George Cybenko in 1989 for sigmoid activation functions. Kurt Hornik showed in 1991 that it is not the specific choice of the activation function, but rather the multilayer feed-forward architecture itself, which gives neural networks the potential of being universal approximators. Moshe Leshno et al. in 1993 and later Allan Pinkus in 1999 showed that the universal approximation property is equivalent to having a nonpolynomial activation function.
The arbitrary depth case was also studied by a number of authors, such as Zhou Lu et al. in 2017, Boris Hanin and Mark Sellke in 2018, and Patrick Kidger and Terry Lyons in 2020.
Several extensions of the theorem exist, such as to discontinuous activation functions, alternative network architectures, other topologies, and noncompact domains.

Arbitrary Width Case

The classical form of the universal approximation theorem concerns networks of arbitrary width. The following formulation, due to Allan Pinkus, extends the classical results of George Cybenko and Kurt Hornik.

Universal approximation theorem. Fix a continuous function $\sigma : \mathbb{R} \to \mathbb{R}$ (the activation function) and positive integers $d, D$. The function $\sigma$ is not a polynomial if and only if, for every continuous function $f : \mathbb{R}^d \to \mathbb{R}^D$, every compact subset $K$ of $\mathbb{R}^d$, and every $\varepsilon > 0$, there exists a continuous function $f_\varepsilon : \mathbb{R}^d \to \mathbb{R}^D$ with representation
$$ f_\varepsilon = W_2 \circ \sigma \circ W_1, $$
where $W_1, W_2$ are composable affine maps and $\circ$ denotes component-wise composition, such that the following approximation bound holds:
$$ \sup_{x \in K} \| f(x) - f_\varepsilon(x) \| < \varepsilon. $$

This theorem extends straightforwardly to networks with any fixed number of hidden layers: the first layer can approximate any desired function, and later layers can approximate the identity function. Thus any fixed-depth network can approximate any continuous function, and this version of the theorem applies to networks with bounded depth and arbitrary width.
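
As a concrete numerical illustration (not a construction taken from the theorem or its proof), the following Python sketch fits one-hidden-layer sigmoid networks $W_2 \circ \sigma \circ W_1$ of increasing width to a continuous target on a compact interval and reports the sup-norm error on a dense grid. The target function, the widths, and the random-feature least-squares fitting procedure are illustrative assumptions made for this sketch.

```python
# Illustrative sketch of the arbitrary-width theorem: fit one-hidden-layer
# sigmoid networks of growing width to a continuous target on K = [-3, 3]
# and report the sup-norm error on a test grid.
import numpy as np

rng = np.random.default_rng(0)

def sigma(x):
    """Sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-x))

def fit_one_hidden_layer(x, y, width):
    """Fix a random affine map W1 (random features), then solve for the
    affine output map W2 by linear least squares."""
    W1 = rng.normal(scale=3.0, size=(width, 1))
    b1 = rng.uniform(-3.0, 3.0, size=width)
    features = lambda t: np.hstack([sigma(t[:, None] @ W1.T + b1),
                                    np.ones((len(t), 1))])  # output-layer bias column
    w2, *_ = np.linalg.lstsq(features(x), y, rcond=None)
    return lambda t: features(t) @ w2

f = lambda x: np.cos(2.0 * x) + 0.5 * x          # continuous target on K = [-3, 3]
x_train = np.linspace(-3.0, 3.0, 2000)
x_test = np.linspace(-3.0, 3.0, 4001)

for width in (5, 20, 100, 500):
    net = fit_one_hidden_layer(x_train, f(x_train), width)
    sup_err = np.max(np.abs(f(x_test) - net(x_test)))   # sup norm on the compact set
    print(f"width {width:4d}: sup-norm error ~ {sup_err:.4f}")
```

On a typical run the reported error shrinks as the width grows, which is the qualitative behaviour the theorem describes; the theorem itself only asserts that a sufficiently wide network exists, not how to find it.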

Arbitrary Depth Case

The 'dual' versions of the theorem consider networks of bounded width and arbitrary depth. A variant of the universal approximation theorem was proved for the arbitrary depth case by Zhou Lu et al. in 2017. They showed that networks of width n + 4 with ReLU activation functions can approximate any Lebesgue-integrable function on n-dimensional input space with respect to the $L^1$ distance if network depth is allowed to grow. They also showed that expressive power is limited when the width is at most n: all Lebesgue-integrable functions, except for a set of zero measure, cannot be approximated by ReLU networks of width n. The same paper showed that ReLU networks of width n + 1 are sufficient to approximate any continuous function of n-dimensional input variables. The first of these results can be stated as follows:

Universal approximation theorem. For any Lebesgue-integrable function $f : \mathbb{R}^n \to \mathbb{R}$ and any $\varepsilon > 0$, there exists a fully-connected ReLU network $\mathcal{A}$ with width $d_m \leq n + 4$, such that the function $F_{\mathcal{A}}$ represented by this network satisfies
$$ \int_{\mathbb{R}^n} \left| f(x) - F_{\mathcal{A}}(x) \right| \, \mathrm{d}x < \varepsilon. $$
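
For intuition about this bounded-width regime, the sketch below (not the construction from the paper) trains fully connected ReLU networks of fixed width $n + 4$ (with $n = 1$, so width 5) and increasing depth by plain gradient descent on a discontinuous but Lebesgue-integrable target, and estimates the $L^1$ distance on $[-3, 3]$ numerically. The target, depths, initialization, and training hyperparameters are all illustrative assumptions; the theorem guarantees that a good network of this width exists, not that gradient descent will find it.

```python
# Illustrative sketch of the bounded-width, growing-depth regime:
# width-5 (= n + 4 with n = 1) ReLU networks trained by full-batch
# gradient descent, with a crude numerical L^1 error estimate.
import numpy as np

rng = np.random.default_rng(1)
relu = lambda z: np.maximum(z, 0.0)

def init_params(depth, width, n_in=1, n_out=1):
    """He-initialized weights for `depth` hidden layers of fixed width."""
    sizes = [n_in] + [width] * depth + [n_out]
    return [(rng.normal(scale=np.sqrt(2.0 / a), size=(a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Return the network output and cached inputs/pre-activations for backprop."""
    a, cache = x, []
    for i, (W, b) in enumerate(params):
        z = a @ W + b
        cache.append((a, z))
        a = z if i == len(params) - 1 else relu(z)   # identity on the output layer
    return a, cache

def grad_step(params, x, y, lr=1e-2):
    """One full-batch gradient descent step on the mean squared error."""
    out, cache = forward(params, x)
    delta = 2.0 * (out - y) / len(x)                 # d(MSE)/d(output)
    for i in reversed(range(len(params))):
        a, z = cache[i]
        if i != len(params) - 1:
            delta = delta * (z > 0)                  # ReLU derivative
        W, b = params[i]
        gW, gb = a.T @ delta, delta.sum(axis=0)
        delta = delta @ W.T                          # propagate to the previous layer
        params[i] = (W - lr * gW, b - lr * gb)

f = lambda x: np.sign(np.sin(3.0 * x))               # integrable, discontinuous target
x = np.linspace(-3.0, 3.0, 512).reshape(-1, 1)
y = f(x)

for depth in (2, 8, 16):
    params = init_params(depth, width=5)             # width n + 4 with n = 1
    for _ in range(5000):
        grad_step(params, x, y)
    approx, _ = forward(params, x)
    l1 = np.mean(np.abs(approx - y)) * 6.0           # crude L^1 estimate on [-3, 3]
    print(f"depth {depth:3d}: estimated L1 distance ~ {l1:.3f}")
```

Because the target is discontinuous, uniform (sup-norm) approximation is impossible, which is why the theorem, and the estimate above, use the $L^1$ distance.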

Another variant was given by Patrick Kidger and Terry Lyons in 2020:

Universal approximation theorem. Let $\sigma : \mathbb{R} \to \mathbb{R}$ be any nonaffine continuous function which is continuously differentiable at at least one point, with nonzero derivative at that point. Let $K \subseteq \mathbb{R}^n$ be compact. The space of real vector-valued continuous functions on $K$ is denoted by $C(K; \mathbb{R}^m)$. Let $\mathcal{N}$ denote the space of feedforward neural networks with $n$ input neurons, $m$ output neurons, and an arbitrary number of hidden layers each with $n + m + 2$ neurons, such that every hidden neuron has activation function $\sigma$ and every output neuron has the identity as its activation function. Then given any $\varepsilon > 0$ and any $f \in C(K; \mathbb{R}^m)$, there exists $\hat{f} \in \mathcal{N}$ such that
$$ \| \hat{f}(x) - f(x) \| < \varepsilon $$
for all $x \in K$.
In other words, $\mathcal{N}$ is dense in $C(K; \mathbb{R}^m)$ with respect to the uniform norm.
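
The sketch below illustrates only the structure of the class $\mathcal{N}$ in this statement: fixed hidden width $n + m + 2$, arbitrary depth, the activation $\sigma$ on every hidden neuron, and the identity on the output neurons. The choice $\sigma = \tanh$, the dimensions, the depth, and the random weights are illustrative assumptions; the theorem asserts that this class is dense in $C(K; \mathbb{R}^m)$, not that a randomly initialized member approximates any particular target.

```python
# Structural sketch of the network class N: n inputs, m outputs, arbitrary
# depth, every hidden layer of fixed width n + m + 2, activation sigma on
# hidden neurons, identity on output neurons.
import numpy as np

rng = np.random.default_rng(2)

def make_network(n, m, depth, sigma=np.tanh):
    width = n + m + 2                                  # fixed hidden width from the statement
    sizes = [n] + [width] * depth + [m]
    layers = [(rng.normal(size=(a, b)), rng.normal(size=b))
              for a, b in zip(sizes[:-1], sizes[1:])]

    def net(x):                                        # x has shape (batch, n)
        a = x
        for i, (W, b) in enumerate(layers):
            z = a @ W + b
            a = z if i == len(layers) - 1 else sigma(z)  # identity on the output layer
        return a
    return net

# Evaluate one member of N on a grid over the compact set K = [0, 1]^2.
net = make_network(n=2, m=1, depth=10)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 50),
                            np.linspace(0, 1, 50)), axis=-1).reshape(-1, 2)
print(net(grid).shape)                                 # (2500, 1)
```

Finding an $\hat{f} \in \mathcal{N}$ within $\varepsilon$ of a given $f$ would additionally require choosing the depth and weights, which the theorem guarantees is possible but does not prescribe.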

Certain necessary conditions for the bounded width, arbitrary depth case have been established, but there is still a gap between the known sufficient and necessary conditions.