4. Foundations of AI — Economics, Neuroscience, Psychology, and Linguistics

Source: AIMA 4th Ed, §1.2


Economics

Economics asked: How should agents make decisions to maximize payoff?

Key Contributions

| Figure / Concept | Contribution |
| --- | --- |
| Adam Smith (1776) | Agents pursuing individual self-interest → collective benefit; foreshadowed utility maximization |
| Jeremy Bentham / John Stuart Mill | Utilitarianism: actions should maximize total well-being (utility) |
| von Neumann & Morgenstern (1944) | Decision theory: how rational agents should choose under uncertainty using expected utility |
| Leonard Savage (1954) | Subjective expected utility (SEU): agents hold subjective beliefs (probabilities) and preferences (utilities), and choose the action with the highest expected utility |
| Markov decision processes (MDPs) | Mathematical formalism for sequential decision making under uncertainty; central to RL |
| John Nash (1950) | Nash equilibrium in game theory: no agent can improve by unilaterally changing strategy. Relevant for multi-agent AI. |
| Herbert Simon (satisficing) | In practice, agents do not optimize perfectly; they "satisfice": find a solution that is good enough. Rationality is bounded. |
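Savage's SEU rule from the table fits in a few lines: weight each outcome's utility by its subjective probability and pick the best action. The actions, beliefs, and utility numbers below are invented for illustration, not from AIMA.

```python
# Subjective expected utility (Savage 1954): choose the action whose
# probability-weighted utility is highest. All numbers are illustrative.

def expected_utility(action, beliefs, utility):
    """Sum of P(outcome | action) * U(outcome)."""
    return sum(p * utility[outcome] for outcome, p in beliefs[action].items())

beliefs = {  # subjective probabilities P(outcome | action)
    "umbrella":    {"dry": 1.0},
    "no_umbrella": {"dry": 0.7, "wet": 0.3},
}
utility = {"dry": 10, "wet": -20}  # subjective preferences

best = max(beliefs, key=lambda a: expected_utility(a, beliefs, utility))
print(best)  # -> umbrella (EU = 10 vs. 0.7*10 + 0.3*(-20) = 1)
```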

Why It Matters for AI/RL

Decision theory supplies AI's working definition of a rational agent: choose the action with the highest expected utility. MDPs extend this to sequential decisions and are the core formalism of reinforcement learning; game theory extends it to multi-agent settings; and Simon's bounded rationality explains why practical agents approximate rather than optimize.
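The MDP formalism from the table above can be made concrete with value iteration, the basic dynamic-programming solver. The two-state MDP below (states, transitions, rewards, discount) is invented for illustration.

```python
# Value iteration on a tiny hypothetical MDP:
#   V(s) = max_a sum_s' P(s'|s,a) * (R + gamma * V(s'))

GAMMA = 0.9

# transitions[state][action] -> list of (prob, next_state, reward)
transitions = {
    "cool": {"fast": [(0.5, "cool", 2), (0.5, "hot", 2)],
             "slow": [(1.0, "cool", 1)]},
    "hot":  {"slow": [(1.0, "cool", 1)]},
}

V = {s: 0.0 for s in transitions}
for _ in range(100):  # enough sweeps for gamma=0.9 to converge
    V = {
        s: max(
            sum(p * (r + GAMMA * V[s2]) for p, s2, r in outcomes)
            for outcomes in transitions[s].values()
        )
        for s in transitions
    }

print(V)  # "cool" ends up more valuable than "hot"
```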

Neuroscience

Neuroscience asked: How does the brain compute? What can we borrow?

Key Facts

| Topic | Key Finding |
| --- | --- |
| Neurons | ~100 billion neurons, each connected to ~1,000–100,000 others via synapses |
| Firing rate | Neurons fire at up to ~1,000 Hz; inter-neuron communication is via electrochemical pulses |
| McCulloch & Pitts (1943) | First mathematical model of a neuron: a binary threshold unit. Showed networks of such units could compute any computable function. |
| Hebb (1949) | Hebbian learning: "neurons that fire together wire together"; synaptic strength increases when pre- and post-synaptic neurons fire simultaneously |
| Brain-machine interfaces | Modern work allows direct reading of motor intentions from neurons (e.g., paralyzed patients controlling computer cursors) |
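The McCulloch-Pitts unit in the table is simple enough to write out: it fires (outputs 1) exactly when the weighted sum of its binary inputs reaches a threshold. The weights and thresholds below are hand-picked to realize AND and OR gates.

```python
# McCulloch-Pitts binary threshold neuron. Weights/thresholds are
# chosen by hand here to implement logic gates.

def mp_neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted input sum reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

AND = lambda x1, x2: mp_neuron([x1, x2], weights=[1, 1], threshold=2)
OR  = lambda x1, x2: mp_neuron([x1, x2], weights=[1, 1], threshold=1)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "AND:", AND(x1, x2), "OR:", OR(x1, x2))
```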

Key Caveat

The brain is NOT a simple digital computer. Its architecture is massively parallel, fault-tolerant, and operates on analog signals. AI draws inspiration from it but is not a literal simulation of it.
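Hebb's rule from the table can be written as a weight update, delta_w = eta * pre * post: the synapse strengthens only when both neurons are active together. The learning rate and firing traces below are invented.

```python
# Hebbian update: delta_w = eta * pre * post.
# eta and the activity pairs are illustrative, not from AIMA.

eta = 0.1
w = 0.0
activity = [(1, 1), (1, 0), (0, 1), (1, 1)]  # (pre, post) firing pairs

for pre, post in activity:
    w += eta * pre * post  # grows only on coincident firing

print(w)  # strengthened by the two coincident (1, 1) events
```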


Psychology

Psychology asked: How do humans and animals think and act?

Behaviorism vs. Cognitive Psychology

| School | Core Claim | Key Figures |
| --- | --- | --- |
| Behaviorism (1913–1950s) | Only observable behavior matters; reject all talk of internal mental states | Watson, Skinner |
| Cognitive psychology (1960s–) | Internal representations and processes exist and explain behavior; the mind is an information processor | Craik (1943), Miller (1956) |

Craik’s Knowledge-Based Agent Framework (1943)

Kenneth Craik proposed that rational behavior can be explained by three steps:

1. The stimulus (percept) is translated into an internal representation
2. The representation is manipulated by cognitive processes (reasoning)
3. The result is retranslated into action

This is essentially the architecture of a rational agent in AIMA.
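Craik's three steps map directly onto a skeletal agent loop. The environment, representation, and decision rule below are placeholders invented for illustration.

```python
# Craik's percept -> representation -> action cycle as a toy agent.
# The stimuli and rules are hypothetical placeholders.

def perceive(stimulus):
    """Step 1: translate the raw percept into an internal representation."""
    return {"obstacle_ahead": stimulus == "wall"}

def reason(representation):
    """Step 2: manipulate the representation to choose a response."""
    return "turn" if representation["obstacle_ahead"] else "forward"

def act(decision):
    """Step 3: retranslate the result into an external action."""
    return decision

for stimulus in ["open", "wall"]:
    print(stimulus, "->", act(reason(perceive(stimulus))))
```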

Miller’s “Magic Number 7” (1956)

George Miller showed that short-term memory can hold roughly 7 ± 2 items — one of the first rigorous results connecting psychology to information theory.


Control Theory

Control theory asked: How do devices use feedback from the environment to act optimally?

| Figure | Contribution |
| --- | --- |
| Norbert Wiener (1948) | Cybernetics: designed feedback controllers; connected feedback systems to purposeful behavior and intelligence |
| Optimal control (1950s–60s) | Minimize a cost function (e.g., fuel use, error) over time; precursor to the RL reward signal |
| Kalman filter (1960) | Optimal linear estimator for noisy systems; used in robotics and navigation |

Key difference from AI (historically): Control theory used continuous math (ODEs, optimization); early AI used symbolic logic. They are now converging — RL is the bridge.
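The feedback idea reduces to: measure the error between output and goal, then act to shrink it. A minimal proportional-control sketch, with an invented gain and a trivially simple plant model:

```python
# Proportional feedback: control = gain * (setpoint - state).
# The gain and integrator plant are illustrative.

setpoint, state, gain = 10.0, 0.0, 0.5

for _ in range(30):
    error = setpoint - state   # feedback: compare output to goal
    control = gain * error     # act in proportion to the error
    state += control           # simple integrator plant

print(round(state, 3))  # has converged to the 10.0 setpoint
```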


Linguistics

Linguistics asked: What is the structure of language, and can machines understand it?

| Figure / Concept | Contribution |
| --- | --- |
| Chomsky (1957) | Formal grammars (context-free grammars) for syntax; showed behaviorism could not explain language acquisition |
| Sapir-Whorf hypothesis | Language shapes thought (controversial; the strong version is largely rejected) |
| Computational linguistics | Formal study of language structure, parsing, generation, and understanding by computer |
| Knowledge representation | Language understanding requires world knowledge, not just syntax; this drove knowledge representation research in AI |
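A context-free grammar of the kind Chomsky formalized is just a set of rewrite rules; enumerating every sentence it derives shows how finite rules generate a combinatorial language. The toy grammar below is invented.

```python
# A tiny context-free grammar, used to generate all derivable
# sentences. The grammar itself is a made-up example.
import itertools

grammar = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["sees"], ["chases"]],
}

def expand(symbol):
    """Yield every terminal word sequence derivable from `symbol`."""
    if symbol not in grammar:  # terminal word
        yield [symbol]
        return
    for production in grammar[symbol]:
        for parts in itertools.product(*(list(expand(s)) for s in production)):
            yield [w for part in parts for w in part]

sentences = [" ".join(words) for words in expand("S")]
print(len(sentences))  # 2 nouns x 2 verbs x 2 nouns = 8 sentences
```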

Summary: What Each Field Gave AI

| Field | Core gift to AI |
| --- | --- |
| Philosophy | Justification that mind can be mechanical; logic as reasoning |
| Mathematics | Formal reasoning, limits of computation, probability |
| Economics | Decision theory, utility, MDPs, game theory |
| Neuroscience | Neural network inspiration, Hebbian learning |
| Psychology | Agent architecture (percept → representation → action) |
| Control theory | Feedback, cost minimization, continuous optimization |
| Linguistics | Language structure, knowledge representation |