1. Problem-Solving Agents and Search Formulation

Source: AIMA 4th Ed., Chapter 3 (Sections 3.1–3.2) Status: Complete


Overview

A problem-solving agent operates in episodic, fully observable, deterministic, static, discrete, known environments. Its task is to find a sequence of actions that transforms an initial state into a goal state. This contrasts with reflex agents, which map percepts directly to actions — the problem-solving agent plans an entire action sequence before executing it.


The Four-Phase Agent Process

  1. Goal Formulation — Define the set of goal states. Goals constrain the behavior and focus the search. Example: reach Bucharest.

  2. Problem Formulation — Abstract the world into a formal model: what are the states, what actions are available, what transitions do actions cause, what is the cost of each action?

  3. Search — Simulate sequences of actions in the model (without executing them) to find a path to a goal state.

  4. Execution — Follow the found solution action-by-action in the real environment (open-loop: no feedback during execution in the basic model).
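The four phases can be sketched end-to-end on a fragment of the Romania map. This is an illustrative sketch, not AIMA's code: the `ROADS` fragment, the function name, and the use of breadth-first search for phase 3 are choices made here for brevity.

```python
from collections import deque

# Fragment of the Romania road map (adjacency only; costs omitted here).
ROADS = {"Arad": ["Sibiu", "Timisoara"], "Sibiu": ["Arad", "Fagaras"],
         "Timisoara": ["Arad"], "Fagaras": ["Sibiu", "Bucharest"],
         "Bucharest": ["Fagaras"]}

def plan_route(initial, goal):
    # Phases 1-2 (goal and problem formulation) are baked in: the goal is
    # "be in `goal`" and the model is the ROADS dict.
    # Phase 3: search the model, simulating drives without executing them.
    frontier = deque([(initial, [])])
    reached = {initial}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for nxt in ROADS[state]:
            if nxt not in reached:
                reached.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # no solution exists

# Phase 4 would then execute the returned city sequence open-loop.
```

Note that the search touches only the model (`ROADS`); nothing is executed until a complete plan is in hand, which is exactly the open-loop assumption above.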


Formal Problem Definition

A search problem is defined by six components:

| Component | Formal Name | Description |
| --- | --- | --- |
| State space | S | All possible states the environment can be in |
| Initial state | s_0 | The state the agent starts in |
| Goal states | G ⊆ S | States the agent wants to reach; can be a predicate IS-GOAL(s) |
| Actions | Actions(s) | Set of actions available in state s |
| Transition model | Result(s, a) = s' | The state reached by taking action a in state s |
| Action cost | ActionCost(s, a, s') | Numeric cost of taking action a from s to s' |

A path is a sequence of states connected by actions. The path cost is the sum of individual action costs along the path. An optimal solution is one with minimum path cost among all solutions.
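The six components map naturally onto a plain data structure, and path cost is then a one-line sum. This is a hedged sketch — the class and field names mirror the table but are illustrative, not AIMA's code:

```python
from dataclasses import dataclass
from typing import Callable, Hashable

# The six components of a search problem as a plain data structure.
@dataclass
class SearchProblem:
    initial: Hashable       # s_0
    is_goal: Callable       # IS-GOAL(s)
    actions: Callable       # Actions(s)
    result: Callable        # Result(s, a)
    action_cost: Callable   # ActionCost(s, a, s')

def path_cost(problem, states, actions):
    # Path cost = sum of the individual action costs along the path.
    return sum(problem.action_cost(s, a, s2)
               for s, a, s2 in zip(states, actions, states[1:]))
```

(The state space S is left implicit: it is determined by the initial state, the actions, and the transition model.)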


Abstraction

Real environments are vastly complex. Abstraction is the process of removing detail to create a tractable model. The key principle: an abstract action should be implementable in the real world (“refinable”), and an abstract solution should correspond to a real solution.

Example (Romania road problem):
- Real state: exact GPS position, fuel level, weather, road conditions, etc.
- Abstract state: the name of the city the agent is currently in
- Abstract action: drive from city A to city B along a road
- Abstract goal: be in Bucharest

The level of abstraction must be chosen carefully — too coarse and the model is misleading; too fine and the search space is intractable.


State Space Graph vs. Search Tree

These are the two key data structures in search:

State-space graph:
- Nodes = world states
- Edges = actions (labeled with action and cost)
- Each state appears exactly once
- May have cycles

Search tree:
- Root = initial state
- Each node = a state reached by a particular path from the root
- A state can appear multiple times (via different paths)
- Captures the history of how we got to a state
- Used by search algorithms to represent the search frontier

The distinction matters when handling redundant paths: two different tree paths to the same state. Ignoring them leads to tree-like search (efficient in memory but may re-expand states); tracking them requires a reached table.
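The cost of ignoring redundant paths can be made concrete on a tiny cyclic graph. The graph and the node-counting scheme below are invented for illustration; both variants are breadth-first, differing only in whether a reached table is kept:

```python
from collections import deque

# Small cyclic graph: A-B-D-C-A forms a cycle, G is the goal.
GRAPH = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
         "D": ["B", "C", "G"], "G": []}

def bfs(start, goal, track_reached):
    # Returns the number of nodes generated before the goal is popped.
    frontier = deque([start])
    reached = {start}
    generated = 0
    while frontier:
        state = frontier.popleft()
        if state == goal:
            return generated
        for s2 in GRAPH[state]:
            generated += 1
            if not track_reached:
                frontier.append(s2)          # tree-like: may revisit states
            elif s2 not in reached:
                reached.add(s2)
                frontier.append(s2)          # graph search: each state once
    return generated
```

With the reached table, each state enters the frontier at most once; the tree-like variant re-expands states around the cycle and generates noticeably more nodes before reaching G, even on this five-state graph.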


Node Data Structure

Each node in the search tree stores:

node.STATE       -- the world state this node corresponds to
node.PARENT      -- the node in the search tree that generated this one
node.ACTION      -- the action applied to parent to produce this node
node.PATH-COST   -- g(n): total cost of the path from root to this node

The solution is extracted by tracing PARENT pointers from a goal node back to the root, then reversing the action sequence.
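This structure and the parent-pointer extraction translate directly into a few lines of Python (illustrative class, mirroring the fields above):

```python
# One search-tree node; field comments give the names used in the listing above.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state          # node.STATE
        self.parent = parent        # node.PARENT
        self.action = action        # node.ACTION
        self.path_cost = path_cost  # node.PATH-COST, i.e. g(n)

def solution(node):
    # Trace PARENT pointers from a goal node back to the root,
    # then reverse the collected action sequence.
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```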


Key Terminology


Performance Metrics

All search algorithms are judged on four criteria:

| Criterion | Definition |
| --- | --- |
| Completeness | Is the algorithm guaranteed to find a solution when one exists (and to report failure when none exists)? |
| Cost optimality | Does it find the solution with the lowest path cost? |
| Time complexity | How many nodes are generated/expanded? |
| Space complexity | How many nodes are kept in memory at peak? |

Parameters used in complexity analysis:
- b — branching factor: max number of successors of any node
- d — depth of the shallowest solution
- m — maximum depth of any path in the state space (may be infinite)
- l — depth limit (for depth-limited search)
- C* — optimal solution cost
- ε — minimum action cost (assumed > 0 for cost-based analysis)
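These parameters explain why uninformed search is often reported as O(b^d): a uniform tree with branching factor b has 1 + b + b^2 + ... + b^d nodes down to depth d. A few lines of Python (helper name is ours) verify the arithmetic:

```python
# Node count of a uniform tree with branching factor b, down to depth d:
# 1 + b + b^2 + ... + b^d, i.e. O(b^d).
def nodes_up_to_depth(b, d):
    return sum(b ** k for k in range(d + 1))
```

With b = 10, depth 2 already gives 111 nodes and depth 5 gives 111,111 — each extra level multiplies the count by roughly b, which is why both time and space are stated as functions of b and d.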


Assumptions for Chapter 3

Chapter 3 assumes:
- Episodic: solution is a fixed sequence, not adaptive
- Single agent: no adversary, no cooperation
- Fully observable: agent knows the complete current state
- Deterministic: actions have known, certain outcomes
- Static: world does not change while the agent searches
- Discrete: finite number of states and actions
- Known: agent has the complete transition model in advance

These assumptions are relaxed in Chapter 4 (complex environments).