T-distributed stochastic neighbor embedding


T-distributed Stochastic Neighbor Embedding is a machine learning algorithm for visualization developed by Laurens van der Maaten and Geoffrey Hinton. It is a nonlinear dimensionality reduction technique well-suited for embedding high-dimensional data for visualization in a low-dimensional space of two or three dimensions. Specifically, it models each high-dimensional object by a two- or three-dimensional point in such a way that similar objects are modeled by nearby points and dissimilar objects are modeled by distant points with high probability.
The t-SNE algorithm comprises two main stages. First, t-SNE constructs a probability distribution over pairs of high-dimensional objects in such a way that similar objects are assigned a higher probability while dissimilar points are assigned a very low probability. Second, t-SNE defines a similar probability distribution over the points in the low-dimensional map, and it minimizes the Kullback–Leibler divergence between the two distributions with respect to the locations of the points in the map. While the original algorithm uses the Euclidean distance between objects as the base of its similarity metric, this can be changed as appropriate.
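For readers who want to run the method before reading the details, the following minimal sketch uses scikit-learn's TSNE implementation (one of several available implementations; the random dataset here is purely illustrative):

import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))      # 500 objects in a 50-dimensional space

# perplexity roughly controls the effective number of neighbors per point
Y = TSNE(n_components=2, perplexity=30.0, random_state=0).fit_transform(X)
print(Y.shape)                      # (500, 2): one 2-D map point per object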
t-SNE has been used for visualization in a wide range of applications, including computer security research, music analysis, cancer research, bioinformatics, and biomedical signal processing. It is often used to visualize high-level representations learned by an artificial neural network.
While t-SNE plots often seem to display clusters, the visual clusters can be influenced strongly by the chosen parameterization and therefore a good understanding of the parameters for t-SNE is necessary. Such "clusters" can be shown to even appear in non-clustered data, and thus may be false findings. Interactive exploration may thus be necessary to choose parameters and validate results. It has been demonstrated that t-SNE is often able to recover well-separated clusters, and with special parameter choices, approximates a simple form of spectral clustering.

Details

Given a set of $N$ high-dimensional objects $\mathbf{x}_1, \dots, \mathbf{x}_N$, t-SNE first computes probabilities $p_{ij}$ that are proportional to the similarity of objects $\mathbf{x}_i$ and $\mathbf{x}_j$, as follows.
For $i \neq j$, define

$$p_{j\mid i} = \frac{\exp\left(-\lVert \mathbf{x}_i - \mathbf{x}_j \rVert^2 / 2\sigma_i^2\right)}{\sum_{k \neq i} \exp\left(-\lVert \mathbf{x}_i - \mathbf{x}_k \rVert^2 / 2\sigma_i^2\right)}$$

and set $p_{i\mid i} = 0$.
Note that $\sum_{j} p_{j\mid i} = 1$ for all $i$.
As Van der Maaten and Hinton explained: "The similarity of datapoint $x_j$ to datapoint $x_i$ is the conditional probability, $p_{j\mid i}$, that $x_i$ would pick $x_j$ as its neighbor if neighbors were picked in proportion to their probability density under a Gaussian centered at $x_i$."
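This definition translates directly into NumPy; in the sketch below, the function name conditional_p and its sigma argument are illustrative choices, not part of the original algorithm's specification:

import numpy as np

def conditional_p(X, sigma):
    """Conditional probabilities p_{j|i}.

    X     : (N, D) array of high-dimensional points.
    sigma : (N,) array of per-point Gaussian bandwidths sigma_i.
    """
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # (N, N)
    logits = -sq_dists / (2.0 * sigma[:, None] ** 2)
    np.fill_diagonal(logits, -np.inf)                    # enforces p_{i|i} = 0
    stable = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return stable / stable.sum(axis=1, keepdims=True)    # each row sums to 1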
Now define

$$p_{ij} = \frac{p_{j\mid i} + p_{i\mid j}}{2N}$$

and note that $p_{ij} = p_{ji}$, $p_{ii} = 0$, and $\sum_{i,j} p_{ij} = 1$.
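Continuing the sketch above, the symmetrization is a one-liner (P_cond stands for the hypothetical matrix returned by conditional_p):

def joint_p(P_cond):
    """Symmetrized joint probabilities p_ij = (p_{j|i} + p_{i|j}) / (2N)."""
    N = P_cond.shape[0]
    # The result is symmetric, has a zero diagonal, and sums to 1 overall.
    return (P_cond + P_cond.T) / (2.0 * N)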
The bandwidth of the Gaussian kernels $\sigma_i$ is set in such a way that the perplexity of the conditional distribution equals a predefined perplexity using the bisection method. As a result, the bandwidth is adapted to the density of the data: smaller values of $\sigma_i$ are used in denser parts of the data space.
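The bisection step can be sketched as follows (the bracket, tolerance, and iteration count are assumptions of this sketch rather than prescribed constants; perplexity is $2^{H}$, where $H$ is the Shannon entropy of $p_{\cdot\mid i}$ in bits):

def sigma_for_perplexity(sq_dists_i, target, tol=1e-5, n_iter=64):
    """Bisect sigma_i so that the perplexity of p_{.|i} matches the target.

    sq_dists_i : squared distances from point i to every other point.
    """
    lo, hi = 1e-10, 1e10
    for _ in range(n_iter):
        sigma = (lo + hi) / 2.0
        logits = -sq_dists_i / (2.0 * sigma ** 2)
        p = np.exp(logits - logits.max())
        p /= p.sum()
        perplexity = 2.0 ** -np.sum(p * np.log2(p + 1e-12))
        if abs(perplexity - target) < tol:
            break
        if perplexity > target:
            hi = sigma   # distribution too flat: shrink the bandwidth
        else:
            lo = sigma   # distribution too peaked: widen the bandwidth
    return sigma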
Since the Gaussian kernel uses the Euclidean distance $\lVert x_i - x_j \rVert$, it is affected by the curse of dimensionality, and in high-dimensional data, when distances lose the ability to discriminate, the $p_{ij}$ become too similar. It has been proposed to adjust the distances with a power transform, based on the intrinsic dimension of each point, to alleviate this.
t-SNE aims to learn a $d$-dimensional map $\mathbf{y}_1, \dots, \mathbf{y}_N$ (with $\mathbf{y}_i \in \mathbb{R}^d$ and $d$ typically chosen as 2 or 3) that reflects the similarities $p_{ij}$ as well as possible. To this end, it measures similarities $q_{ij}$ between two points in the map $\mathbf{y}_i$ and $\mathbf{y}_j$, using a very similar approach.
Specifically, for $i \neq j$, define $q_{ij}$ as

$$q_{ij} = \frac{\left(1 + \lVert \mathbf{y}_i - \mathbf{y}_j \rVert^2\right)^{-1}}{\sum_{k} \sum_{l \neq k} \left(1 + \lVert \mathbf{y}_k - \mathbf{y}_l \rVert^2\right)^{-1}}$$

and set $q_{ii} = 0$.
Herein a heavy-tailed Student t-distribution (with one degree of freedom, which is the same as a Cauchy distribution) is used to measure similarities between low-dimensional points in order to allow dissimilar objects to be modeled far apart in the map.
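In NumPy, the low-dimensional similarities can be sketched as follows (Y stands for a hypothetical (N, d) array of map points):

def joint_q(Y):
    """Student-t (one degree of freedom) similarities q_ij over map points Y."""
    sq_dists = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    inv = 1.0 / (1.0 + sq_dists)
    np.fill_diagonal(inv, 0.0)   # enforces q_{ii} = 0
    return inv / inv.sum()       # normalize over all ordered pairs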
The locations of the points $\mathbf{y}_i$ in the map are determined by minimizing the (non-symmetric) Kullback–Leibler divergence of the distribution $P$ from the distribution $Q$, that is:

$$\mathrm{KL}\left(P \parallel Q\right) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}$$
The minimization of the Kullback–Leibler divergence with respect to the points $\mathbf{y}_i$ is performed using gradient descent.
The result of this optimization is a map that reflects the similarities between the high-dimensional inputs.
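For completeness, one plain gradient-descent step can be sketched from the closed-form gradient $\frac{\partial \mathrm{KL}}{\partial \mathbf{y}_i} = 4 \sum_j (p_{ij} - q_{ij})(\mathbf{y}_i - \mathbf{y}_j)\left(1 + \lVert \mathbf{y}_i - \mathbf{y}_j \rVert^2\right)^{-1}$ given by Van der Maaten and Hinton. The learning rate below is an illustrative value; practical implementations also add momentum and an "early exaggeration" phase:

def gradient_step(Y, P, learning_rate=200.0):
    """One vanilla gradient-descent update of the map points Y."""
    sq_dists = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    inv = 1.0 / (1.0 + sq_dists)
    np.fill_diagonal(inv, 0.0)
    Q = inv / inv.sum()
    # grad_i = 4 * sum_j (p_ij - q_ij) * inv_ij * (y_i - y_j), vectorized:
    PQ = (P - Q) * inv
    grad = 4.0 * (PQ.sum(axis=1)[:, None] * Y - PQ @ Y)
    return Y - learning_rate * grad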

Software