Symbolic artificial intelligence


Symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level "symbolic" representations of problems, logic and search. Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the late 1980s.
John Haugeland gave the name GOFAI to symbolic AI in his 1985 book Artificial Intelligence: The Very Idea, which explored the philosophical implications of artificial intelligence research. In robotics the analogous term is GOFR.
The approach is based on the assumption that many aspects of intelligence can be achieved by the manipulation of symbols, an assumption formulated as the "physical symbol system hypothesis" by Allen Newell and Herbert A. Simon in the mid-1960s.
One popular form of symbolic AI is expert systems, which use a network of production rules. Production rules connect symbols in a relationship similar to an If-Then statement. The expert system processes the rules to make deductions and to determine what additional information it needs, i.e. what questions to ask, using human-readable symbols.
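The deduction step can be illustrated with a minimal forward-chaining sketch in Python. The rules and facts here are invented for illustration; a real expert system would also handle variables, conflict resolution, and question-asking.

```python
# Minimal sketch of a forward-chaining production system.
# Each rule pairs a set of conditions (the If part) with one conclusion (the Then part).
# All rule names and facts below are hypothetical examples.

RULES = [
    ({"has_fur", "says_woof"}, "is_dog"),
    ({"has_feathers"}, "is_bird"),
    ({"is_dog"}, "is_mammal"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all known,
    adding its conclusion, until no new fact can be deduced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fur", "says_woof"}, RULES))
# Deduces "is_dog", then "is_mammal" via rule chaining.
```

Note that the third rule fires only after the first has added "is_dog": chained deduction of this kind is what lets an expert system draw conclusions several steps removed from the observed facts.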
Opponents of the symbolic approach include roboticists such as Rodney Brooks, who aim to produce autonomous robots without symbolic representation, and computational intelligence researchers, who apply techniques such as neural networks and optimization to solve problems in machine learning and control engineering.
Symbolic AI was intended to produce general, human-like intelligence in a machine, whereas most modern research is directed at specific sub-problems. Research into general intelligence is now studied in the sub-field of artificial general intelligence.
Machines were initially designed to formulate outputs based on inputs that were represented by symbols. Symbolic representation works well when the input is definite and certain; when uncertainty is involved, for example in formulating predictions, the representation is instead done with artificial neural networks. Recently, there have been structured efforts to integrate the symbolic and connectionist AI approaches under the umbrella of neural-symbolic computing. As argued by Valiant and many others, the effective construction of rich computational cognitive models demands the combination of sound symbolic reasoning and efficient learning models.

State space search

A symbolic AI system can be realized in a microworld, for example the blocks world. The microworld represents the real world in the computer's memory. It is described with lists consisting of symbols, and the intelligent agent uses operators to bring the system into a new state. The production system is the software that searches the state space for the intelligent agent's next action. The symbols representing the world are grounded in sensory perception. In contrast to neural networks, the overall system works with heuristics: domain-specific knowledge is used to improve the state space search.
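The idea can be sketched in Python with a toy blocks world. This is an illustrative assumption of how such a microworld might be encoded, not a description of any particular historical system: a state is a tuple of stacks of block symbols, a single move operator generates successor states, and a breadth-first search stands in for the production system (a heuristic such as counting misplaced blocks could guide the search instead).

```python
from collections import deque

# Toy blocks-world microworld (hypothetical encoding for illustration).
# A state is a tuple of stacks; each stack is a tuple of block symbols, bottom first.

def successors(state):
    """Apply the move operator: lift the top block of one stack onto another,
    yielding (action, new_state) pairs for every legal move."""
    for i, src in enumerate(state):
        if not src:
            continue
        block = src[-1]
        for j in range(len(state)):
            if i == j:
                continue
            new = [list(s) for s in state]
            new[i].pop()
            new[j].append(block)
            yield (f"move {block} from {i} to {j}",
                   tuple(tuple(s) for s in new))

def search(start, goal):
    """Breadth-first search of the state space; returns a plan (list of actions)
    from start to goal, or None if the goal is unreachable."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

start = (("A", "B", "C"), (), ())   # C on B on A, two empty table positions
goal  = (("C", "B", "A"), (), ())   # the same stack, reversed
print(search(start, goal))
```

Blind breadth-first search works here only because the microworld is tiny; in larger state spaces, the domain-specific heuristics mentioned above are what keep the search tractable.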
Symbolic artificial intelligence was rejected by Hubert Dreyfus, who argued that it is only able to solve toy problems, and that building more complex systems, or scaling the idea up to useful software, is not possible. The same argument was made in the Lighthill report, which helped trigger the AI Winter of the mid-1970s.