Feature hashing


In machine learning, feature hashing, also known as the hashing trick, is a fast and space-efficient way of vectorizing features, i.e. turning arbitrary features into indices in a vector or matrix. It works by applying a hash function to the features and using their hash values as indices directly, rather than looking the indices up in an associative array. This trick is often attributed to Weinberger et al., but there exists a much earlier description of this method published by John Moody in 1989.

Motivating example

In a typical document classification task, the input to the machine learning algorithm is free text. From this, a bag of words representation is constructed: the individual tokens are extracted and counted, and each distinct token in the training set defines a feature of each of the documents in both the training and test sets.
Machine learning algorithms, however, are typically defined in terms of numerical vectors. Therefore, the bags of words for a set of documents are regarded as a term-document matrix where each row is a single document and each column is a single feature/word; the entry (i, j) in such a matrix captures the frequency (or weight) of the j-th term of the vocabulary in document i.
Typically, these vectors are extremely sparse; in accordance with Zipf's law, most terms occur in only a small fraction of the documents.
The common approach is to construct, at learning time or prior to that, a dictionary representation of the vocabulary of the training set, and to use that dictionary to map words to indices. Hash tables and tries are common candidates for the dictionary implementation. E.g., the three documents

John likes to watch movies.
Mary likes movies too.
John also likes football.

can be converted, using the dictionary
Term      Index
John      1
likes     2
to        3
watch     4
movies    5
Mary      6
too       7
also      8
football  9

to the term-document matrix

( 1 1 1 1 1 0 0 0 0 )
( 0 1 0 0 1 1 1 0 0 )
( 1 1 0 0 0 0 0 1 1 )

where each row corresponds to one of the three documents above and each column to a term of the dictionary.
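For concreteness, the following is a minimal Python sketch of this dictionary-based vectorization; the function names and the whitespace tokenization are illustrative assumptions, not part of the original example:

def build_vocabulary(documents):
    # The vocabulary is built from the training documents, so it grows with the training set.
    vocab = {}
    for doc in documents:
        for token in doc.lower().rstrip(".").split():
            if token not in vocab:
                vocab[token] = len(vocab)  # assign the next free index
    return vocab

def vectorize(doc, vocab):
    # Look up each token's index in the dictionary; unseen tokens are simply dropped.
    x = [0] * len(vocab)
    for token in doc.lower().rstrip(".").split():
        if token in vocab:
            x[vocab[token]] += 1
    return x

docs = ["John likes to watch movies.",
        "Mary likes movies too.",
        "John also likes football."]
vocab = build_vocabulary(docs)
print([vectorize(d, vocab) for d in docs])  # reproduces the term-document matrix above (0-based indices)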
The problem with this process is that such dictionaries take up a large amount of storage space and grow in size as the training set grows. Moreover, if the vocabulary is kept fixed and not grown with the training set, an adversary may try to invent new words or misspellings that are not in the stored vocabulary so as to circumvent a machine-learned filter. This difficulty is why feature hashing has been tried for spam filtering at Yahoo! Research.
Note that the hashing trick is not limited to text classification and similar tasks at the document level, but can be applied to any problem that involves large numbers of features.

Feature vectorization using hashing trick

Instead of maintaining a dictionary, a feature vectorizer that uses the hashing trick can build a vector of a pre-defined length N by applying a hash function h to the features (e.g., words), then using the hash values directly as feature indices and updating the resulting vector at those indices. Here, features denotes the list of raw features (e.g., tokens) of a single sample, i.e. its feature vector.

function hashing_vectorizer(features : array of string, N : integer):
    x := new vector[N]
    for f in features:
        h := hash(f)
        x[h mod N] += 1
    return x

Thus, if our feature vector is ["cat", "dog", "cat"] and the hash function is h(f) = 1 if f is "cat" and h(f) = 2 if f is "dog", and we take the output feature vector dimension N to be 4, then the output x will be [0, 2, 1, 0] (using zero-based indexing).
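A runnable Python sketch of this unsigned hashing vectorizer follows; MD5 is used only as a convenient deterministic stand-in for the hash function, which is an implementation choice rather than part of the method:

import hashlib

def hashing_vectorizer(features, N):
    # Map a list of string features to a length-N count vector via a hash function.
    x = [0] * N
    for f in features:
        h = int(hashlib.md5(f.encode("utf-8")).hexdigest(), 16)
        x[h % N] += 1
    return x

print(hashing_vectorizer(["cat", "dog", "cat"], 4))
# A real hash function will generally place "cat" and "dog" in different buckets
# than the toy example's h("cat") = 1, h("dog") = 2, but "cat" still contributes
# a count of 2 and "dog" a count of 1 (barring a collision).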
It has been suggested that a second, single-bit output hash function ξ be used to determine the sign of the update value, to counter the effect of hash collisions. If such a hash function is used, the algorithm becomes

function hashing_vectorizer(features : array of string, N : integer):
    x := new vector[N]
    for f in features:
        h := hash(f)
        idx := h mod N
        if ξ(f) == 1:
            x[idx] += 1
        else:
            x[idx] -= 1
    return x
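In Python, the signed variant might look like the sketch below; deriving ξ from the low bit of a second, salted hash is an assumption made for illustration, since any single-bit hash function would do:

import hashlib

def xi(f):
    # Second, single-bit hash: returns +1 or -1 for each feature.
    bit = int(hashlib.md5(("sign:" + f).encode("utf-8")).hexdigest(), 16) & 1
    return 1 if bit else -1

def signed_hashing_vectorizer(features, N):
    x = [0] * N
    for f in features:
        h = int(hashlib.md5(f.encode("utf-8")).hexdigest(), 16)
        idx = h % N
        x[idx] += xi(f)  # add or subtract 1 according to the sign hash
    return x

print(signed_hashing_vectorizer(["cat", "dog", "cat"], 4))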

The above pseudocode actually converts each sample into a vector. An optimized version would instead only generate a stream of (index, sign) pairs and let the learning and prediction algorithms consume such streams; a linear model can then be implemented as a single hash table representing the coefficient vector.
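A sketch of this streaming formulation in Python (the generator and the dict-based coefficient table are illustrative, not a reference implementation):

import hashlib

def hashed_pairs(features, N):
    # Yield (index, sign) pairs instead of materializing a length-N vector.
    for f in features:
        h = int(hashlib.md5(f.encode("utf-8")).hexdigest(), 16)
        s = 1 if int(hashlib.md5(("sign:" + f).encode("utf-8")).hexdigest(), 16) & 1 else -1
        yield h % N, s

def predict(features, weights, N):
    # Linear model whose coefficient vector is a sparse hash table (index -> weight).
    return sum(sign * weights.get(idx, 0.0) for idx, sign in hashed_pairs(features, N))

weights = {}  # filled in by any online learner that consumes the same (index, sign) stream
score = predict("john likes football".split(), weights, N=2**20)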

Properties

When a second hash function ξ is used to determine the sign of a feature's value, the expected mean of each column in the output array becomes zero, because ξ causes some collisions to cancel out. E.g., suppose an input contains two symbolic features f₁ and f₂ that collide with each other, but not with any other features in the same input. If we make no assumptions about ξ, the four sign combinations (ξ(f₁), ξ(f₂)) = (−1, −1), (−1, 1), (1, −1) and (1, 1) are equally likely, and the colliding entry receives ξ(f₁) + ξ(f₂), i.e. −2, 0, 0 or 2, respectively.
In this example, there is therefore a 50% probability that the hash collision cancels out. Multiple hash functions can be used to further reduce the risk of collisions.
Furthermore, if φ is the transformation implemented by a hashing trick with a sign hash ξ (i.e. φ(x) is the hashed feature vector produced for an input x), then inner products in the hashed space are unbiased:

E[⟨φ(x), φ(x′)⟩] = ⟨x, x′⟩,

where the expectation is taken over the hashing function φ. It can be verified that ⟨φ(x), φ(x′)⟩ is a positive semi-definite kernel.
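The unbiasedness can be checked empirically with a small Monte Carlo experiment; the following Python sketch averages the hashed inner product over many randomly drawn index and sign hashes (the inputs and parameters are arbitrary):

import random

def hashed_inner_product(x, xp, N, rng):
    # Draw a random index hash and a random sign hash over the feature names,
    # apply the signed hashing trick to both inputs, and take the inner product.
    features = set(x) | set(xp)
    index = {f: rng.randrange(N) for f in features}
    sign = {f: rng.choice((-1, 1)) for f in features}
    phi_x, phi_xp = [0.0] * N, [0.0] * N
    for f, v in x.items():
        phi_x[index[f]] += sign[f] * v
    for f, v in xp.items():
        phi_xp[index[f]] += sign[f] * v
    return sum(a * b for a, b in zip(phi_x, phi_xp))

x = {"john": 1, "likes": 2, "movies": 1}
xp = {"mary": 1, "likes": 1, "movies": 1}
rng = random.Random(0)
estimates = [hashed_inner_product(x, xp, N=4, rng=rng) for _ in range(50000)]
print(sum(estimates) / len(estimates))  # close to the exact inner product <x, x'> = 3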

Extensions and variations

Recent work extends the hashing trick to supervised mappings from words to indices, which are explicitly learned to avoid collisions of important terms.

Applications and practical performance

Ganchev and Dredze showed that in text classification applications with random hash functions and several tens of thousands of columns in the output vectors, feature hashing need not have an adverse effect on classification performance, even without the signed hash function.
Weinberger et al. applied their variant of hashing to the problem of spam filtering, formulating it as a multi-task learning problem in which the input features are (user, feature) pairs, so that a single parameter vector captured per-user spam filters as well as a global filter for several hundred thousand users, and found that doing so improved filtering accuracy.
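One plausible way to express this pairing with the hashing trick is sketched below; the exact feature construction and names are assumptions for illustration, not the authors' code:

import hashlib

def personalized_indices(user, tokens, N):
    # Hash each token twice: once on its own (global filter) and once conjoined
    # with the user id (per-user filter), all into one shared parameter space.
    for t in tokens:
        yield int(hashlib.md5(t.encode("utf-8")).hexdigest(), 16) % N
        yield int(hashlib.md5((user + "_" + t).encode("utf-8")).hexdigest(), 16) % N

indices = list(personalized_indices("user42", "cheap pills now".split(), N=2**22))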

Implementations

Implementations of the hashing trick are present in, among others:

scikit-learn (FeatureHasher and HashingVectorizer)
Vowpal Wabbit
Apache Spark MLlib (HashingTF)
Gensim
Apache Mahout
TensorFlow
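For example, scikit-learn's HashingVectorizer can be applied directly to the documents from the motivating example; the parameter values below are illustrative:

from sklearn.feature_extraction.text import HashingVectorizer

docs = ["John likes to watch movies.",
        "Mary likes movies too.",
        "John also likes football."]

# n_features fixes the output dimensionality up front; alternate_sign enables the
# signed (ξ) variant discussed above.
vectorizer = HashingVectorizer(n_features=2**8, alternate_sign=True, norm=None)
X = vectorizer.transform(docs)  # sparse 3 x 256 matrix; no vocabulary is stored
print(X.shape)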