3. Foundations of AI — Philosophy and Mathematics
Source: AIMA 4th Ed, §1.2
Philosophy
Philosophy laid the conceptual groundwork for AI by asking: Can formal rules produce rational conclusions? How does the mind arise from the brain?
Key ideas and thinkers
| Figure | Contribution |
|---|---|
| Aristotle (384–322 BCE) | Syllogisms — the first formal rules of inference: given premises, certain conclusions must follow |
| Ramon Llull (1300s) | Mechanical combination of ideas — early idea of reasoning by machine |
| Hobbes (1651) | Leviathan: “Reasoning is nothing but reckoning” — mind as computation |
| Descartes (1641) | Dualism: mind and body are separate substances. Pineal gland as the interface. Machines can mimic behavior, but not truly reason (or feel). |
| Materialism (La Mettrie, 1748) | Rejected dualism: brain = machine, mind = its function. |
| Locke, Hume (empiricism) | Knowledge comes from sensory experience; ideas are "faint copies" of sense impressions |
| Hume’s induction problem | We can’t logically justify generalizing from past to future — foundational problem for ML |
| Wittgenstein, Carnap (logical positivism) | All meaningful knowledge is reducible to logical statements about observations |
| Popper (1959) | Falsifiability: a theory is scientific only if it can be tested and potentially refuted |
| Utility theory | Philosophy of decision: rational agents should maximize expected utility |
The Mind-Body Problem
- Dualism: Mind is non-physical; humans have a rational soul, animals (and machines) do not → machines can simulate but not truly think.
- Materialism: Mind = physical brain processes → in principle, machines could think.
- Modern AI implicitly assumes materialism: if we replicate the right physical processes, we get intelligence.
Mathematics
Mathematics gave AI three essential tools: logic, computation theory, and probability/statistics.
Formal Logic
| Figure | Contribution |
|---|---|
| George Boole (1847) | Boolean algebra — propositional logic treated as algebra over the truth values 0 and 1 |
| Gottlob Frege (1879) | First-order predicate logic — quantifiers (∀, ∃), relations, functions |
| Alfred Tarski (1930s) | Formal semantics — what it means for a logical sentence to be true |
| Russell & Whitehead (1910–13) | Principia Mathematica — attempted to ground all of mathematics in logic |
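Boole's reduction of logic to algebra over 0 and 1 is directly mechanizable: validity of a propositional inference can be checked by enumerating truth values. A minimal sketch (the `implies` helper is defined here, not a library function), verifying that modus ponens is a tautology:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication: a -> b is false only when a is true and b is false."""
    return (not a) or b

# Modus ponens: ((p -> q) and p) -> q holds for every assignment of p, q.
assert all(
    implies(implies(p, q) and p, q)
    for p, q in product([False, True], repeat=2)
)
```

Exhaustive truth-table checking like this works for propositional logic but not for Frege's first-order logic, whose quantifiers range over possibly infinite domains.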
The Limits of Computation
| Figure | Contribution |
|---|---|
| Gödel (1931) | Incompleteness theorems: Any consistent formal system powerful enough to do arithmetic must contain true statements that cannot be proved within the system. |
| Church & Turing (1936) | Independently showed that some well-defined problems are undecidable — no algorithm can solve them for all inputs (e.g., Turing's halting problem: deciding whether an arbitrary program halts) |
| Turing (1936) | Defined the Turing machine — an abstract model of computation that defines what “computable” means |
Complexity and Tractability
- Even for computable problems, many are intractable — solvable in principle, but requiring time that grows exponentially with the size of the input.
- NP-completeness (Cook, 1971): A large class of problems for which no polynomial-time algorithm is known. If any one NP-complete problem could be solved in polynomial time, every problem in NP could be.
- AI cares deeply about this: many natural AI problems (constraint satisfaction, planning, inference in Bayes nets) are NP-hard.
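The flavor of intractability is easy to see with Boolean satisfiability (SAT), the original NP-complete problem. A brute-force sketch (literal encoding and function name are my own, not from the source): each added variable doubles the number of assignments to try.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Search all 2^n truth assignments for a CNF formula -- exponential time.
    Each clause is a list of integer literals: literal k means variable |k|
    (1-indexed) must be True if k > 0, False if k < 0."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment   # satisfying assignment found
    return None                 # formula is unsatisfiable

# (x1 OR NOT x2) AND (x2 OR x3)
print(brute_force_sat([[1, -2], [2, 3]], 3))   # → (False, False, True)
```

Modern SAT solvers prune this search aggressively and handle large practical instances, but in the worst case no known algorithm escapes the exponential.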
Probability and Statistics
- Pascal, Fermat (1650s): Combinatorics and probability theory
- Bayes (1763): Bayes’ rule — how to update beliefs given new evidence
- Laplace (1812): Applied probability to scientific inference
- Probability theory is the mathematical language for reasoning under uncertainty — essential for all modern ML.
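Bayes' rule — P(H|E) = P(E|H)·P(H) / P(E) — can be shown with a small worked example. The numbers below (disease prevalence, test sensitivity, false-positive rate) are assumptions chosen for illustration:

```python
def posterior(prior, likelihood, false_positive_rate):
    """Update belief in hypothesis H after positive evidence E via Bayes' rule.
    P(E) is expanded by total probability over H and not-H."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Assumed: 1% prevalence, 90% sensitivity, 5% false-positive rate.
p = posterior(prior=0.01, likelihood=0.90, false_positive_rate=0.05)
print(round(p, 3))   # ≈ 0.154
```

Even a fairly accurate test leaves the posterior low when the prior is small — the same belief-updating arithmetic that underlies inference in modern probabilistic ML.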
Why This Matters for AI
- Philosophy established that the mind could, in principle, be mechanical.
- Mathematics established how to reason precisely, what can and cannot be computed, and how to handle uncertainty.
- These foundations make AI possible in theory. The engineering challenge is to make it work in practice.