Automatic indexing is the computerized process of scanning large volumes of documents against a controlled vocabulary, taxonomy, thesaurus or ontology and using those controlled terms to quickly and effectively index large electronic document repositories. The controlled terms are applied by training a system on rules that determine which words to match. Additional factors, such as syntax, usage and proximity, are handled by further rules and algorithms, depending on the system and on what is required for indexing. These conditions are commonly expressed as Boolean statements that gather and capture the indexing information from the text.

As the number of documents grows exponentially with the proliferation of the Internet, automatic indexing becomes essential to maintaining the ability to find relevant information in a sea of irrelevant information. Natural language systems are trained using seven different methods: morphological, lexical, syntactic, numerical, phraseological, semantic and pragmatic. Each of these examines different parts of speech and different kinds of terms in order to build a domain model for the specific subject matter being indexed.

The automated process can encounter problems, which are caused primarily by two factors: the complexity of the language, and the lack of intuitiveness of computing technology together with its difficulty in extrapolating concepts from statements. These are chiefly linguistic challenges involving the semantic and syntactic aspects of language. Such problems are measured against a set of defined keywords: a system's accuracy is assessed in terms of hits (exact matches), misses (keywords a computerized system missed that a human indexer would not have) and noise (keywords the computer selected that a human would not have). A good system should score above 85% hits relative to human indexing, leaving misses and noise combined at 15% or less. This scale provides a basis for judging an automatic indexing system and shows where problems are being encountered.
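A minimal sketch of this idea is shown below, assuming a toy controlled vocabulary, an invented sample document and a hypothetical human-assigned index (none of these come from a real system): controlled terms are matched against the text, and the machine's output is then scored as hits, misses and noise.

```python
import re

# Hypothetical controlled vocabulary (illustrative only).
CONTROLLED_VOCABULARY = {"information retrieval", "indexing", "thesaurus", "ontology"}

def match_controlled_terms(text, vocabulary):
    """Return the controlled terms that occur in the text (simple exact matching)."""
    lowered = text.lower()
    return {term for term in vocabulary
            if re.search(r"\b" + re.escape(term) + r"\b", lowered)}

def evaluate(machine_terms, human_terms):
    """Compare machine-assigned terms with a human index to get hits, misses and noise."""
    hits = machine_terms & human_terms      # terms both the system and the human assigned
    misses = human_terms - machine_terms    # terms the human assigned but the system missed
    noise = machine_terms - human_terms     # terms the system assigned but the human would not
    accuracy = len(hits) / len(human_terms) if human_terms else 0.0
    return hits, misses, noise, accuracy

# Invented sample document and hypothetical human-assigned index.
document = "This paper describes an ontology-based approach to indexing for information retrieval."
human_index = {"information retrieval", "indexing", "ontology", "thesaurus"}

machine_index = match_controlled_terms(document, CONTROLLED_VOCABULARY)
hits, misses, noise, accuracy = evaluate(machine_index, human_index)
print(f"Hits: {hits}\nMisses: {misses}\nNoise: {noise}\nAccuracy: {accuracy:.0%}")
```

In practice the matching rules would also take syntax, usage and proximity into account rather than relying on simple exact term matching as in this sketch.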
History
Scholars note that the subject of automatic indexing attracted attention as early as the 1950s, driven in particular by the demand for faster and more comprehensive access to scientific and engineering literature. The work began with text processing between 1957 and 1959, when H. P. Luhn published a series of papers proposing that a computer could handle keyword matching, sorting and content analysis. This marked the beginning of automatic indexing and of extracting keywords from text on the basis of frequency analysis. It was later determined that frequency alone was not sufficient for good descriptors, but this work set the path toward present-day automatic indexing.

The need was highlighted by the information explosion, which was predicted in the 1960s and came about through the emergence of information technology and the World Wide Web. Mooers prepared the prediction, outlining the role that computing was expected to play in text processing and information retrieval: machines would store documents in large collections, and those machines would be used to run searches over them. Mooers also anticipated the online retrieval environment for indexing databases, and went on to predict an inductive inference machine that would revolutionize indexing. The information explosion required the development of indexing systems that could cope with the challenge of storing and organizing vast amounts of data and could facilitate information access.

New electronic hardware further advanced automated indexing by overcoming the barrier imposed by old paper archives, allowing information to be encoded at the molecular level. Alongside this hardware, tools were developed to assist users in managing their files, organized into categories such as PIM suites (for example Outlook or Lotus Notes) and mind-mapping tools (such as MindManager and FreeMind), which let users focus on storage and on building a cognitive model. Automatic indexing was also partly driven by the emergence of the field of computational linguistics, which steered research that eventually produced techniques such as the application of computer analysis to the structure and meaning of languages. It was further spurred by research and development in artificial intelligence and self-organizing systems, also referred to as thinking machines.
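The following is a minimal sketch of the kind of frequency-based keyword selection Luhn proposed, assuming an invented sample text and a small ad-hoc stop-word list (both are illustrative assumptions, not taken from his papers); as noted above, frequency alone later proved insufficient for good descriptors.

```python
from collections import Counter
import re

# Hypothetical stop-word list, purely for illustration.
STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "that", "on", "are"}

def frequency_keywords(text, top_n=5):
    """Select candidate index terms by raw term frequency, Luhn-style."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(top_n)

# Invented sample text.
sample = ("Automatic indexing assigns index terms to documents. "
          "Indexing systems compare document terms against a controlled vocabulary, "
          "and frequent terms are candidate index terms.")
print(frequency_keywords(sample))
```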