In linguistics, word embeddings were discussed in the research area of distributional semantics, which aims to quantify and categorize semantic similarities between linguistic items based on their distributional properties in large samples of language data. The underlying idea that "a word is characterized by the company it keeps" was popularized by Firth. The technique of representing words as vectors has roots in the 1960s with the development of the vector space model for information retrieval; reducing the number of dimensions using singular value decomposition then led to the introduction of latent semantic analysis in the late 1980s. In 2000, Bengio et al. provided, in a series of papers titled "Neural probabilistic language models", a way to reduce the high dimensionality of word representations in contexts by "learning a distributed representation for words". Word embeddings come in two styles: one in which words are expressed as vectors of co-occurring words, and another in which words are expressed as vectors of the linguistic contexts in which the words occur; both styles have been studied in the literature. Roweis and Saul published in Science how to use "locally linear embedding" to discover representations of high-dimensional data structures. The area developed gradually and really took off after 2010, partly because of important advances made since then in the quality of the vectors and the training speed of the models. Many branches and many research groups work on word embeddings. In 2013, a team at Google led by Tomas Mikolov created word2vec, a word embedding toolkit that can train vector space models faster than previous approaches. Most new word embedding techniques rely on a neural network architecture instead of more traditional n-gram models and unsupervised learning.
Limitations
One of the main limitations of word embeddings is that words with multiple meanings are conflated into a single representation; in other words, polysemy and homonymy are not handled properly. For example, in the sentence "The club I tried yesterday was great!", it is not clear whether the term club relates to the word sense of a club sandwich, a baseball club, a clubhouse, a golf club, or any other sense that club might have. The need to accommodate multiple meanings per word in different vectors has motivated several contributions in NLP that split single-sense embeddings into multi-sense ones. Most approaches that produce multi-sense embeddings can be divided into two main categories according to their word sense representation: unsupervised and knowledge-based. Based on the word2vec skip-gram, Multi-Sense Skip-Gram (MSSG) performs word-sense discrimination and embedding simultaneously, improving training time, while assuming a specific number of senses for each word; in the Non-Parametric Multi-Sense Skip-Gram (NP-MSSG) this number can vary with each word. Combining the prior knowledge of lexical databases, word embeddings, and word sense disambiguation, Most Suitable Sense Annotation (MSSA) labels word senses through an unsupervised and knowledge-based approach that considers a word's context within a pre-defined sliding window. Once the words are disambiguated, they can be used in a standard word embedding technique, so multi-sense embeddings are produced. The MSSA architecture also allows the disambiguation and annotation process to be performed recurrently in a self-improving manner. The use of multi-sense embeddings is known to improve performance in several NLP tasks, such as part-of-speech tagging, semantic relation identification, and semantic relatedness; however, tasks involving named entity recognition and sentiment analysis do not seem to benefit from a multiple-vector representation.
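As an illustration of the "disambiguate, then embed" step described above, the following minimal Python sketch trains a standard skip-gram model with the gensim library on tokens carrying sense tags such as club#golf. The toy corpus, the tag scheme, and the training parameters are illustrative assumptions, not the configuration of any of the cited approaches.

    # Minimal sketch: a sense-annotated corpus, where each ambiguous token has
    # already been labeled by a (hypothetical) word-sense disambiguation step.
    from gensim.models import Word2Vec

    sense_tagged_corpus = [
        ["the", "club#golf", "i", "tried", "yesterday", "was", "great"],
        ["she", "joined", "the", "chess", "club#organization", "last", "year"],
        ["he", "ordered", "a", "club#sandwich", "for", "lunch"],
    ]

    # Train ordinary skip-gram embeddings (gensim >= 4.0) on the tagged tokens.
    model = Word2Vec(
        sentences=sense_tagged_corpus,
        vector_size=25,   # dimensionality of each sense embedding
        window=2,         # small context window for this toy corpus
        min_count=1,      # keep every token in the tiny example
        sg=1,             # skip-gram variant
        epochs=50,
    )

    # Each tagged form is a distinct vocabulary item, so each sense of "club"
    # gets its own vector instead of one conflated representation.
    print(model.wv["club#golf"][:5])
    print(model.wv["club#sandwich"][:5])

Treating each sense-tagged form as its own vocabulary item is what turns a single-sense embedding method into a multi-sense one in this kind of pipeline.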
For biological sequences: BioVectors
Word embeddings for n-grams in biological sequences for bioinformatics applications have been proposed by Asgari and Mofrad. The representation, named bio-vectors (BioVectors) for biological sequences in general, with protein-vectors for proteins and gene-vectors for gene sequences, can be widely used in deep learning applications in proteomics and genomics. The results presented by Asgari and Mofrad suggest that BioVectors can characterize biological sequences in terms of biochemical and biophysical interpretations of the underlying patterns.
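As a minimal sketch of the general idea, rather than Asgari and Mofrad's exact setup, the following Python snippet splits toy protein sequences into overlapping 3-mers (the n-gram "words") and trains skip-gram embeddings on them with gensim; the sequences, the choice of k = 3, and the training parameters are illustrative assumptions.

    # Sketch of the BioVectors idea: treat overlapping k-mers of a biological
    # sequence as "words" and train standard word embeddings on them.
    from gensim.models import Word2Vec

    def to_kmers(sequence, k=3):
        """Split a sequence into overlapping k-mers (the n-gram 'words')."""
        return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

    # Toy amino-acid sequences; real applications would use large corpora
    # of protein or gene sequences.
    protein_sequences = ["MKTAYIAKQRQISFVKSHFSRQLEERLGL", "MKVLAYIAKQGISFVKSHF"]
    corpus = [to_kmers(seq) for seq in protein_sequences]

    # Train skip-gram embeddings over the k-mer "sentences" (gensim >= 4.0).
    model = Word2Vec(corpus, vector_size=20, window=5, min_count=1, sg=1, epochs=100)

    # Each 3-mer now has a learned vector; a whole sequence could be represented,
    # for example, by averaging the vectors of its k-mers.
    print(model.wv["AYI"][:5])

In this setup the k-mers play the role of words and each full sequence plays the role of a sentence, so any standard word embedding method can be reused without modification.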
Thought vectors
Thought vectors are an extension of word embeddings to entire sentences or even documents. Some researchers hope that these can improve the quality of machine translation.