7. History — A Dose of Reality (1966–1973)

Source: AIMA 4th Ed, §1.3.3


Why the Optimism Collapsed

After a decade of excitement, AI hit a wall. Programs that worked brilliantly on toy examples failed completely when scaled to real problems. Two root causes explain this:

Root Cause 1: Reasoning from Introspection, Not Analysis

Early AI programs were built by asking humans “how do you solve this?” and then coding that informal description. They were not based on rigorous analysis of:
- What exactly the task requires
- What constitutes a correct solution
- What algorithm would reliably produce such solutions

This is the difference between mimicking the surface appearance of reasoning and actually reasoning.

Root Cause 2: The Combinatorial Explosion

Early systems worked by search — trying combinations of steps until a solution was found.

This worked on microworlds with few objects and short solution sequences. But as problem size grew:
- The number of combinations grew exponentially.
- Programs that could prove theorems involving a few dozen facts failed on hundreds of facts.
- “A program can find a solution in principle but not in practice.”

The optimism that “scaling up is just a matter of faster hardware” was dead wrong. Hardware can’t fix an exponential.
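A quick numerical sketch makes the point. The branching factor and machine speeds below are assumptions for illustration, not figures from the source:

```python
# Illustrative numbers only: branching factor and speeds are assumptions.
branching_factor = 10            # candidate steps considered at each level
nodes_per_second = 1_000_000     # nodes explored per second on the "old" machine
speedup = 1000                   # hypothetical faster hardware

for depth in range(4, 21, 4):
    nodes = branching_factor ** depth
    slow = nodes / nodes_per_second
    fast = nodes / (nodes_per_second * speedup)
    print(f"depth {depth:2d}: {nodes:.0e} nodes, "
          f"{slow:9.1e} s -> {fast:9.1e} s with {speedup}x hardware")
```

With branching factor 10, a 1000x speedup extends the reachable search depth by only log10(1000) = 3 levels; the exponential swallows any constant factor.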


Specific Failures

Machine Translation (1966)

The ALPAC report to the U.S. government concluded that:
- Machine translation was slower, less accurate, and twice as expensive as human translation.
- There was no immediate or near-term prospect of useful machine translation.
In response, funding for machine translation research was cut drastically.

This was a significant public embarrassment for AI.

Early Machine Evolution / Genetic Programming

Researchers tried to improve programs by random mutation plus selection. Despite thousands of CPU hours, almost no progress was made. The problem: the search space is astronomically large, and a random mutation almost never produces an improvement, so selection has nothing to work with (see the sketch below).
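A toy model of the failure mode (an illustration, not the actual historical system): treat a “program” as a 64-bit string whose fitness is all-or-nothing, 1 only if it exactly matches a hidden target. With no partial credit, mutation plus selection degenerates into a blind walk through 2^64 candidates:

```python
import random

# Toy model: fitness gives no partial credit, so selection cannot guide search.
random.seed(0)
n = 64
target = [random.randint(0, 1) for _ in range(n)]
program = [random.randint(0, 1) for _ in range(n)]

def fitness(p):
    return 1 if p == target else 0   # all-or-nothing: works or it doesn't

best = fitness(program)
trials = 500_000
for _ in range(trials):
    candidate = program[:]
    candidate[random.randrange(n)] ^= 1   # one random point mutation
    f = fitness(candidate)
    if f >= best:                         # "selection": keep if no worse
        program, best = candidate, f

print(f"after {trials:,} mutations, best fitness = {best}")   # almost surely 0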

Perceptrons (1969) — Minsky & Papert’s Critique

Minsky and Papert’s book Perceptrons proved rigorously that:
- Single-layer perceptrons can only learn linearly separable functions.
- A two-input perceptron cannot learn XOR, a simple non-linear function (demonstrated in the sketch below).
- Although multilayer networks were not subject to these limitations, no efficient training algorithm for them was known yet.
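A minimal sketch of the XOR point, using the classic perceptron learning rule (illustrative code, not from the book): training reaches zero errors on the linearly separable AND function but never on XOR.

```python
# Perceptron learning rule: converges iff the data is linearly separable.
def train_perceptron(data, epochs=100, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        errors = 0
        for x1, x2, target in data:
            y = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            if y != target:
                errors += 1
                w1 += lr * (target - y) * x1
                w2 += lr * (target - y) * x2
                b  += lr * (target - y)
        if errors == 0:
            return True        # found a separating line
    return False               # no separating line exists

AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
XOR = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

print("AND learnable:", train_perceptron(AND))   # True
print("XOR learnable:", train_perceptron(XOR))   # False
```

A network with one hidden layer represents XOR easily; the missing piece in 1969 was an efficient way to train it.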

Result: Neural network funding dried up for over a decade.

Irony: The backpropagation algorithm that would revive neural networks in the 1980s had already been derived (Kelley 1960, Bryson 1962) — but in other fields (control theory), not AI.


The Lighthill Report (1973)

Sir James Lighthill’s report to the British government concluded that:
- AI had failed to deliver on its promises.
- The fundamental limitation was the failure to handle the combinatorial explosion.
In response, the British government ended AI research funding in all but two universities.


The First AI Winter

This period is often called the first AI winter — a dramatic funding cutback driven by unmet promises.

The same pattern (hype → disappointment → funding cutoff) would repeat in the 1980s.

Key lesson: The difficulty of problems that AI was attempting to solve was vastly underestimated. Solving a toy version of a problem tells you almost nothing about the difficulty of the real version.