3. Foundations of AI — Philosophy and Mathematics

Source: AIMA 4th Ed, §1.2


Philosophy

Philosophy laid the conceptual groundwork for AI by asking: Can formal rules produce rational conclusions? How does the mind arise from the brain?

Key ideas and thinkers

Figure | Contribution
Aristotle (384–322 BCE) | Syllogisms — the first formal rules of inference: given the premises, certain conclusions must follow
Ramon Llull (c. 1300) | Mechanical combination of ideas — an early vision of reasoning by machine
Hobbes (1651) | Leviathan: “Reasoning is nothing but reckoning” — the mind as computation
Descartes (1641) | Dualism: mind and body are separate substances, with the pineal gland as their interface; machines can mimic behavior but cannot truly reason or feel
La Mettrie (1748) | Materialism — rejected dualism: the brain is a machine, and the mind is its function
Locke, Hume (empiricism) | Knowledge comes from sensory experience; ideas are “faint copies” of sensory impressions
Hume’s problem of induction | Generalizing from past observations to future cases cannot be logically justified — a foundational problem for machine learning
Wittgenstein, Carnap (logical positivism) | All meaningful knowledge is reducible to logical statements about observations
Popper (1959) | Falsifiability: a theory is scientific only if it can be tested and potentially refuted
Utility theory | Philosophy of decision: rational agents should maximize expected utility
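The utility-theoretic view of rational choice is directly computable: an agent should pick the action whose probability-weighted utility over outcomes is highest. A minimal sketch — the actions, probabilities, and utility numbers below are invented purely for illustration:

```python
# Expected-utility maximization: score each action by the sum of
# probability * utility over its possible outcomes, then pick the best.
# The decision problem here (umbrella vs. not) is a made-up toy example.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def rational_choice(actions):
    """actions: dict mapping action name -> list of (probability, utility)."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Toy decision: 30% chance of rain.
actions = {
    "umbrella":    [(0.3, 5), (0.7, 8)],     # dry either way, mild hassle
    "no_umbrella": [(0.3, -10), (0.7, 10)],  # soaked if it rains
}
best = rational_choice(actions)
```

Here EU(umbrella) = 0.3·5 + 0.7·8 = 7.1 beats EU(no_umbrella) = 0.3·(−10) + 0.7·10 = 4.0, so the rational agent carries the umbrella even though rain is unlikely.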

The Mind-Body Problem


Mathematics

Mathematics gave AI three essential tools: logic, computation theory, and probability/statistics.

Formal Logic

Figure | Contribution
George Boole (1847) | Boolean algebra — logic as algebra over truth values, the basis of propositional logic and digital circuits
Gottlob Frege (1879) | First-order predicate logic — quantifiers (∀, ∃), relations, and functions
Alfred Tarski (1930s) | Formal semantics — a precise account of what it means for a logical sentence to be true
Russell & Whitehead (1910–13) | Principia Mathematica — an attempt to ground all of mathematics in logic
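Boole’s reduction of inference to calculation can be made concrete with a truth-table entailment check: a conclusion follows from some premises exactly when every assignment of truth values that satisfies the premises also satisfies the conclusion. A small sketch — encoding formulas as Python functions of a model is just one convenient choice:

```python
from itertools import product

# Propositional entailment by brute-force enumeration of truth assignments.
# A formula is represented as a function from a model (dict of variable
# name -> bool) to bool.

def entails(premises, conclusion, variables):
    """True iff every truth assignment satisfying all premises
    also satisfies the conclusion."""
    for values in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, values))
        if all(p(model) for p in premises) and not conclusion(model):
            return False  # found a countermodel
    return True

# Modus ponens: from P and (P implies Q), Q must follow.
mp_premises = [lambda m: m["P"],                   # P
               lambda m: (not m["P"]) or m["Q"]]   # P -> Q
follows = entails(mp_premises, lambda m: m["Q"], ["P", "Q"])
```

This captures Aristotle’s point in Boolean form: the conclusion holds in every model of the premises, so it follows by form alone. Note the check enumerates 2^n assignments, so it is exponential in the number of variables.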

The Limits of Computation

Figure | Contribution
Gödel (1931) | Incompleteness theorems: any consistent formal system powerful enough to express arithmetic contains true statements that cannot be proved within the system
Church & Turing (1936) | Independently showed that some well-defined problems are undecidable — no algorithm can solve them in general (e.g., Turing’s halting problem)
Turing (1936) | The Turing machine — an abstract model of computation that makes “computable” precise
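Turing’s argument that no general halting tester can exist is a diagonalization, and it can be sketched in code: given any candidate oracle, build a program that does the opposite of whatever the oracle predicts about it. Below, one deliberately naive candidate is refuted; the same construction defeats any candidate:

```python
# Turing's diagonal argument, made concrete. No halts(func, arg) oracle
# can be correct, because we can build a program that contradicts its
# own prediction. Here we refute one (deliberately naive) candidate.

def candidate_halts(func, arg):
    """A doomed attempt at a halting oracle: claims everything halts."""
    return True

def diagonal(func):
    """Does the opposite of what the oracle predicts about func(func)."""
    if candidate_halts(func, func):
        while True:   # predicted to halt -> loop forever
            pass
    # predicted to loop forever -> halt immediately

# The oracle claims diagonal(diagonal) halts. We never actually call
# diagonal(diagonal) -- by construction it would loop forever, which is
# exactly what makes the oracle's claim wrong.
prediction = candidate_halts(diagonal, diagonal)
```

If the oracle’s prediction that diagonal(diagonal) halts were correct, the call would enter the infinite loop and never halt; if the oracle instead predicted non-halting, the call would return immediately. Either way the oracle is wrong, and nothing in the argument depended on this particular candidate.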

Complexity and Tractability

Probability and Statistics


Why This Matters for AI