Semantic space


Semantic spaces in the natural language domain aim to create representations of natural language that are capable of capturing meaning. The original motivation for semantic spaces stems from two core challenges of natural language: vocabulary mismatch (the same meaning can be expressed in many different ways) and the ambiguity of natural language (the same term can have several meanings).
The application of semantic spaces in natural language processing aims at overcoming the limitations of rule-based or model-based approaches that operate on the keyword level. The main drawbacks of these approaches are their brittleness and the large manual effort required to create either rule-based NLP systems or training corpora for model learning. Rule-based and machine-learning-based models are fixed at the keyword level and break down if the vocabulary differs from that defined in the rules or from the training material used for the statistical models.
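As a toy illustration (the three-dimensional vectors below are hand-picked purely for this sketch and do not come from any real model), the following Python snippet contrasts a brittle keyword-level comparison with a similarity computed in a small semantic space: the exact-match test fails on the synonyms "car" and "automobile", while the cosine similarity between their vectors remains high.

```python
import numpy as np

query, document_term = "automobile", "car"

# Keyword-level comparison: brittle, breaks down on vocabulary mismatch.
keyword_match = (query == document_term)          # False

# Semantic-space comparison: words are points in a vector space,
# and relatedness is measured by cosine similarity.
vectors = {
    "car":        np.array([0.90, 0.10, 0.00]),
    "automobile": np.array([0.85, 0.15, 0.05]),
    "banana":     np.array([0.00, 0.20, 0.90]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(keyword_match)                                   # False
print(cosine(vectors["car"], vectors["automobile"]))   # high (~0.996)
print(cosine(vectors["car"], vectors["banana"]))       # low  (~0.02)
```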
Research in semantic spaces dates back more than 20 years. In 1996, two papers were published that raised a lot of attention around the general idea of creating semantic spaces: latent semantic analysis and Hyperspace Analogue to Language. However, their adoption was limited by the large computational effort required to construct and use those semantic spaces. A breakthrough with regard to the accuracy of modelling associative relations between words was achieved by explicit semantic analysis (ESA) in 2007. ESA was a novel approach that represented words as vectors with 100,000 dimensions, each dimension corresponding to a concept (in the original work, a Wikipedia article). However, practical applications of the approach are limited by the large number of dimensions required in the vectors.
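The practical cost of such high-dimensional representations can be sketched with a back-of-the-envelope calculation (the vocabulary size below is a hypothetical figure, not taken from the original ESA work): storing dense 100,000-dimensional vectors quickly becomes prohibitive, which is why such vectors are typically kept sparse.

```python
import numpy as np

n_concepts = 100_000          # one dimension per concept, as in ESA
vocabulary_size = 50_000      # hypothetical vocabulary, for illustration only

# Memory needed if every word kept a dense float32 vector.
dense_bytes = vocabulary_size * n_concepts * np.dtype(np.float32).itemsize
print(f"dense storage: {dense_bytes / 1e9:.1f} GB")   # ~20.0 GB

# In practice such vectors are sparse, so implementations usually store
# only the non-zero weights, e.g. as {concept_index: weight} mappings.
sparse_vector = {12: 0.8, 4_077: 0.35, 99_321: 0.1}   # toy example
```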
More recently, advances in neural network techniques, in combination with other new approaches, have led to a host of new developments: Word2vec from Google, GloVe from Stanford University, and fastText from Facebook AI Research (FAIR).
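As a minimal usage sketch (assuming the gensim library and its downloadable pretrained GloVe vectors; the model name and example queries are illustrative, not taken from the text above), such a pretrained semantic space can be loaded and queried for word similarities and nearest neighbours:

```python
import gensim.downloader as api

# Load a pretrained semantic space (50-dimensional GloVe vectors,
# downloaded on first use).
model = api.load("glove-wiki-gigaword-50")

# Words that express similar meanings end up close together,
# which addresses the vocabulary-mismatch problem.
print(model.similarity("car", "automobile"))   # high cosine similarity
print(model.similarity("car", "banana"))       # low cosine similarity

# Nearest neighbours in the space behave like associative relations.
print(model.most_similar("king", topn=3))
```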