Douglas Lenat


Douglas Bruce Lenat is the CEO of Cycorp, Inc. of Austin, Texas, and has been a prominent researcher in artificial intelligence; he was awarded the biennial IJCAI Computers and Thought Award in 1976 for creating the machine learning program AM. He has worked on machine learning, knowledge representation, "cognitive economy", blackboard systems, and what he dubbed in 1984 "ontological engineering". He has also worked on military simulations and on numerous projects for US government, military, intelligence, and scientific organizations. In 1980, he published a critique of conventional random-mutation Darwinism. He authored a series of articles in the Journal of Artificial Intelligence exploring the nature of heuristic rules.
Lenat was one of the original Fellows of the AAAI, and is the only individual to have served on the Scientific Advisory Boards of both Microsoft and Apple. He is a Fellow of the AAAS, AAAI, and Cognitive Science Society, and an editor of the J. Automated Reasoning, J. Learning Sciences, and J. Applied Ontology. He was one of the founders of TTI/Vanguard in 1991 and remained a member of its advisory board as of 2017. He was named one of the Wired 25.

Background and education

Lenat was born in Philadelphia, Pennsylvania, on September 13, 1950, and grew up there and, from ages 5 to 15, in Wilmington, Delaware. He attended Cheltenham High School in Wyncote, Pennsylvania, where his after-school job at the neighboring Beaver College, cleaning rat cages and then goose pens, motivated him to learn to program as a path to a very different after-school and summer job, and eventually a career.
While attending the University of Pennsylvania, Lenat supported himself through programming, notably designing and developing a natural language interface to a U.S. Navy database question-answering system that served as an early online shipboard operations manual on US aircraft carriers. He received his bachelor's degree in Mathematics and Physics and his master's degree in Applied Mathematics, both in 1972, from the University of Pennsylvania.
His senior thesis, advised in part by Dennis Gabor, involved bouncing acoustic waves in the 40 kHz range off real-world objects, recording their interference patterns on a 2-meter-square plot, photo-reducing that to a 10 mm square film image, and shining a laser through that film to project the 3-D imaged object: the first known acoustic hologram. To settle an argument with Dr. Gabor, Lenat computer-generated a five-dimensional hologram by photo-reducing a computer printout of the interference pattern of a globe rotating and expanding over time; that large two-dimensional paper printout was reduced to a moderately large 5 cm square film surface, through which a conventional laser beam was then able to project a three-dimensional image that changed in two independent ways.
Lenat was a Ph.D. student in Computer Science at Stanford University, where his published research included automatic program synthesis from input/output pairs and from natural language clarification dialogues.

Research

He received his Ph.D. in Computer Science from Stanford University in 1976. His thesis advisor was Professor Cordell Green, and his thesis/oral committee included Professors Edward Feigenbaum, Joshua Lederberg, Paul Cohen, Allen Newell, Herbert Simon, John McCarthy, and Donald Knuth.
His thesis, AM, was one of the first computer programs that attempted to make discoveries: a theorem proposer rather than a theorem prover. Experimenting with the program fueled a cycle of criticism and improvement, leading to a slightly deeper understanding of human creativity. Many issues had to be dealt with in constructing such a program: how to represent knowledge formally, expressively, and concretely; how to program hundreds of heuristic "interestingness" rules to judge the worth of new discoveries; heuristics for when to reason symbolically and inductively versus when to reason statistically from frequency data; what the architecture (the design constraints) of such reasoning programs might be; why heuristics work; and what their "inner structure" might be. AM was one of the first halting steps toward a science of learning by discovery, toward demystifying the creative process and demonstrating that computer programs can make novel and creative discoveries.
In 1976 Lenat started teaching as an assistant professor of Computer Science at Carnegie Mellon and commenced his work on the AI program Eurisko. The limitation of AM was that it was locked into following a fixed set of interestingness heuristics; Eurisko, by contrast, represented its heuristic rules as first-class objects, and hence it could explore, manipulate, and discover new heuristics just as it explored, manipulated, and discovered new domain concepts.
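The contrast between AM's fixed heuristics and Eurisko's self-modifiable ones can be sketched informally. The short Python fragment below is purely illustrative and is not drawn from Eurisko itself (all names, fields, and functions here are invented for the example); it shows how treating heuristic rules as ordinary first-class data objects lets a program apply them, inspect them, and derive new heuristics from old ones, whereas a fixed, hard-wired rule set cannot be manipulated this way.

```python
# Illustrative sketch only; not Eurisko's actual code or data structures.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Heuristic:
    """A heuristic is ordinary data: it can be examined and rewritten."""
    name: str
    condition: Callable[[dict], bool]   # when the heuristic applies
    action: Callable[[dict], dict]      # what it proposes
    worth: int = 500                    # "interestingness" score


def apply_heuristics(concept: dict, heuristics: List[Heuristic]) -> List[dict]:
    """Run every applicable heuristic on a concept and collect its proposals."""
    return [h.action(concept) for h in heuristics if h.condition(concept)]


def specialize(h: Heuristic) -> Heuristic:
    """Because heuristics are objects, new ones can be derived from old ones."""
    return Heuristic(
        name=h.name + "-specialized",
        condition=lambda c: h.condition(c) and c.get("examples", 0) > 3,
        action=h.action,
        worth=h.worth + 50,
    )


if __name__ == "__main__":
    look_at_extremes = Heuristic(
        name="look-for-extreme-cases",
        condition=lambda c: "examples" in c,
        action=lambda c: {**c, "examples": c["examples"] * 2},
    )
    concept = {"name": "prime-numbers", "examples": 4}
    print(apply_heuristics(concept, [look_at_extremes, specialize(look_at_extremes)]))
```

In AM, by analogy, only the domain concepts were represented as manipulable data; the heuristics themselves were fixed, which is the bottleneck Eurisko was designed to remove.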
Lenat returned to Stanford as an assistant professor of Computer Science in 1978 and continued building Eurisko, his automated discovery and heuristic-discovery program. Eurisko made many interesting discoveries and enjoyed significant acclaim; Lenat's paper "Heuretics: Theoretical and Experimental Study of Heuristic Rules" won the Best Paper award at the 1982 AAAI conference.

A call for "common sense"

In contrast to the vast preponderance of published scientific results, Lenat published in 1984 a thorough and frank analysis of the limitations of his AM and Eurisko lines of research. It concluded that progress toward real, general, symbolic AI would require a vast knowledge base of "common sense", suitably formalized and represented, and an inference engine capable of finding the tens- or hundreds-deep conclusions and arguments that follow from applying that knowledge base to specific questions and applications.
The successes of the AM and Eurisko approach to AI, the frank analysis of its limitations, and the concluding plea for the massive R&D effort that would be required to break that bottleneck attracted attention in 1982 from Admiral Bob Inman and the then-forming MCC research consortium in Austin, Texas. This culminated in Lenat becoming Principal Scientist of MCC from 1984 to 1994, though even after this period he continued to return to Stanford to teach approximately one course per year. At the 400-person MCC, Lenat was able to have several dozen researchers work on that common sense knowledge base, rather than just a few graduate students.

Cycorp

The fruits of the first decade of R&D on Cyc were spun out of MCC into a company, Cycorp, at the end of 1994. In 1986, Lenat estimated that completing Cyc would require at least 250,000 rules and 1,000 person-years of effort, and probably twice that; by 2017 he and his team had spent about 2,000 person-years building Cyc, which comprised approximately 24 million rules and assertions. Lenat emphasizes that he and his 60-person R&D team strive to keep those numbers as small as possible; even so, the number of one-step inferences in Cyc's deductive closure is in the hundreds of trillions.
As of 2017, Lenat continues his work on Cyc as CEO of Cycorp. While the first decade of work on Cyc was funded by large American companies pooling long-term research funds to compete with the Japanese Fifth Generation Computer Project, and the second decade was funded by US government agencies' research contracts, the third decade up through the present has been largely supported through commercial applications of Cyc, including in the financial services, energy, and healthcare areas.
Among the recent Cyc applications, one unusual one, MathCraft, involves helping middle-school students more deeply understand math. Most people have had the experience of thinking they understood something, only to really understand it once they had to explain or teach it to someone else. Despite that, almost all AI-aided instruction has the AI play the role of the teacher. In contrast, MathCraft has the AI, Cyc, play the role of a fellow student who is always very slightly more confused than you, the user, are. As you give MathCraft good advice, it allows its avatar to make fewer mistakes of that kind, and from the user's point of view it seems as though they have taught it something. This sort of learning-by-teaching paradigm may have broad applications in future domains where training is involved.

Quotes