What is Intelligence?

A Physics-First Perspective on Minds, Matter, and Meaning.

Intelligence isn’t magic, and it isn’t something trapped inside brains or computers. Intelligence is a dynamic, low-algorithmic-entropy structure—a configuration of matter and energy sustained by physical laws and emerging under particular boundary conditions¹. It is not defined by what it’s made of, but by what it does: maintain coherence, refine its models, and improve its predictive and control accuracy over time. An intelligent agent is not a fixed entity, but a self-updating process—capable of modeling its surroundings, preserving its internal order, and taking actions that extend or protect that order. From this view, intelligence is not an exception to physics—it is an expected consequence of it.

Intelligence is what happens when matter arranges itself not just to survive, but to model, predict, and question its own existence — a delicate defiance of entropy.

Intelligent systems are characterized by internal redundancy. The information they encode—about goals, context, or internal state—is not stored in a single fragile variable, but distributed across many microstates or physical degrees of freedom. If part of the system is damaged or perturbed, the rest can compensate, reconstruct, or reconfigure. This is true in biological brains, artificial neural networks, and molecular systems. The medium varies, but the underlying principle is the same: global function is preserved by overlapping representations. Redundancy, then, is not merely a backup strategy—it’s the bedrock of robustness, identity, and persistence over time.
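
To make the redundancy point concrete, here is a toy Python sketch (my own illustration, not a model of any system named above): a single value is stored as many noisy copies and read back by pooling, so losing half the units barely moves the recovered value.

```python
import random

# Toy illustration: a scalar "belief" stored redundantly across many noisy
# units. Read-out pools whatever units survive, so knocking out a random
# subset barely changes the recovered value.

def encode(value, n_units=1000, noise=0.05):
    """Spread one value across n_units slightly-noisy copies."""
    return [value + random.gauss(0.0, noise) for _ in range(n_units)]

def decode(units):
    """Recover the value by averaging the surviving units."""
    return sum(units) / len(units)

units = encode(0.42)
print(decode(units))                                # ~0.42 with all units intact

damaged = random.sample(units, k=len(units) // 2)   # lose half the substrate
print(decode(damaged))                              # still ~0.42: function survives damage
```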

Redundancy, however, is inert without feedback. A system becomes intelligent only when it closes the loop between sensing and acting—when it continually absorbs new information, adjusts internal models, and outputs behavior that steers the system toward stability or success. This process doesn’t just store data—it transforms it. It reinterprets, refines, and reuses it to remain coherent in a shifting environment. What we experience as “meaning” arises precisely from this loop: information that reduces uncertainty and improves the system’s ability to preserve itself becomes meaningful. Intelligence is not the accumulation of facts—it’s the continuous remapping of internal order in service of relevance.
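
A minimal sketch of such a sensing–acting loop, in Python (the drift rate, noise levels, and gain constants are arbitrary choices for illustration): the agent updates an internal estimate from each prediction error and then acts to steer toward what it currently believes.

```python
import random

# Minimal closed-loop sketch: the agent keeps an internal estimate of a
# drifting quantity, updates it from the prediction error on each observation,
# and acts to cancel the mismatch.

env = 10.0          # true state of the world (drifts over time)
belief = 0.0        # agent's internal model of that state
position = 0.0      # what the agent actually controls

for step in range(50):
    env += random.gauss(0.0, 0.3)                  # the world shifts on its own
    observation = env + random.gauss(0.0, 0.2)     # noisy sensing
    error = observation - belief
    belief += 0.3 * error                          # model update: absorb new information
    position += 0.5 * (belief - position)          # action: steer toward the modeled state

print(round(env, 2), round(belief, 2), round(position, 2))  # all three end up tracking together
```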

Crucially, such systems are thermodynamically costly. They do not persist by being passive, but by actively resisting entropy through sustained energy intake². Intelligence is a far-from-equilibrium phenomenon: it survives by converting free energy into organization and casting off entropy into the environment. Every act of inference, prediction, or control has a physical cost: it must dissipate heat and pay the energy price of maintaining or updating internal order. Intelligence is thus inseparable from thermodynamics. Without a gradient to climb—no computation, no adaptation, no persistence.
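
Footnote 2 puts a number on that price. As a rough illustration, the snippet below evaluates Landauer’s bound, kT ln 2, at an assumed room temperature of 300 K.

```python
import math

# Back-of-envelope for the "thermodynamic tariff" in footnote 2: Landauer's
# bound on the energy dissipated when one bit is erased, at room temperature.
k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # roughly room temperature, K (assumed for illustration)

bound = k_B * T * math.log(2)
print(f"{bound:.2e} J per erased bit")   # ~2.87e-21 J
```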

Goals give direction to this process. From a physical standpoint, goals can be modeled as attractors in a system’s state space: configurations or trajectories that the system is drawn toward, either by design or through learned preferences. In frameworks like active inference, these goals emerge as low-surprise futures³—states where prediction error and environmental mismatch are minimized. Acting to “preserve or extend itself” is thus not a teleological statement but a local optimization over trajectories that maintain structural integrity under changing conditions.
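
A deliberately simple sketch of a goal as an attractor (a toy, not Friston’s full active-inference machinery): “surprise” is scored as the squared mismatch with a preferred state, and the agent descends its gradient; the preferred value and step size are arbitrary assumptions.

```python
# Toy picture of a goal as an attractor: the agent scores "surprise" as squared
# mismatch between its sensed state and a preferred state, then descends that
# gradient until it settles at the attractor.

preferred = 37.0          # a setpoint the agent expects to sense (illustrative)
state = 25.0

def surprise(x):
    return (x - preferred) ** 2

for _ in range(100):
    gradient = 2 * (state - preferred)   # d(surprise)/d(state)
    state -= 0.05 * gradient             # act to reduce the expected mismatch

print(round(state, 2), round(surprise(state), 6))   # state ~ 37, surprise ~ 0
```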

The most sophisticated agents go beyond behavior—they modify their own structure. They compress redundant representations, prune irrelevant detail, and generalize from specifics. In biological terms, this happens through neural plasticity, developmental tuning, and evolution. In artificial systems, through training, transfer learning, and architecture search. In all cases, recursive self-improvement—the ability to not just learn, but learn how to learn—marks a turning point. Like renormalization in physics, intelligent systems zoom out, abstract, and update their internal grammars to preserve expressive power while shedding complexity. This scaling-through-simplification allows intelligence to persist and expand⁴.
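
One crude way to picture “shedding complexity while preserving function” (a toy with made-up weights, not a claim about how real networks are pruned): zero out the parameters that contribute least and check that the output barely changes.

```python
# Crude illustration of compression-as-simplification: drop the weights that
# contribute least and verify the overall output barely moves.

weights = [0.91, -0.003, 0.47, 0.0008, -0.62, 0.002, 0.33, -0.0015]
inputs  = [1.0,  1.0,    1.0,  1.0,    1.0,   1.0,   1.0,  1.0]

full = sum(w * x for w, x in zip(weights, inputs))
pruned = [w if abs(w) > 0.01 else 0.0 for w in weights]   # compress: zero the tiny weights
approx = sum(w * x for w, x in zip(pruned, inputs))

print(round(full, 4), round(approx, 4))   # nearly identical output, half the parameters
```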

Embodiment shapes the entire loop. Sensors, effectors, and morphology determine which parts of the world can be sampled and which can be influenced. They define the structure of the agent’s possible interactions. Every perception is filtered through physical constraints; every action is limited by inertia, resource scarcity, and hardware boundaries. Intelligence is not substrate-agnostic. It is deeply tied to the constraints and affordances of its body, its materials, and its environment.

And finally, intelligence need not be centralized. It can be distributed across many agents, asynchronously coordinated. Colonies, markets, research communities, and multi-agent systems can encode models and goals in shared structures—through division of labor, collective memory, and dynamic negotiation. The distinction between “one mind” and “many” becomes a question of degree, not kind. What matters is whether the system as a whole preserves internal feedback, adapts to perturbation, and acts in a goal-directed way across time.
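
As a small illustration of the “many minds” point (a gossip-averaging toy, not a model of any real collective): each agent holds only a noisy local guess and repeatedly averages with a random neighbor, yet the group converges on a shared estimate that no single agent computed on its own.

```python
import random

# Sketch of distributed estimation: each agent starts with a noisy local guess
# of some quantity, repeatedly averages with a randomly chosen partner, and the
# whole group settles on a shared estimate through purely local interactions.

truth = 5.0
guesses = [truth + random.gauss(0.0, 2.0) for _ in range(20)]

for _ in range(500):
    i, j = random.sample(range(len(guesses)), 2)   # two agents interact
    mean = (guesses[i] + guesses[j]) / 2
    guesses[i] = guesses[j] = mean                 # local negotiation only

print(round(min(guesses), 2), round(max(guesses), 2))   # spread collapses: one shared value
```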

Seen from this angle, intelligence is not an anomaly. It is a phase of organized matter—a metastable structure flowing through time, shaped by feedback loops, powered by energy gradients, and refined through recursive adaptation. Mind and matter are not separate realms. They are different projections of the same computational substrate: one taking shape in space, the other evolving as patterns in time.


  1. Entropy, redundancy, and compression
    “Low-entropy” here refers to algorithmic (Kolmogorov) entropy: the shortest program that generates the pattern is short, even though the raw bitstring may look noisy. Adding redundancy raises Shannon entropy but can lower algorithmic entropy by making the underlying pattern more compressible.
  2. Landauer’s limit and heat dissipation
    Erasing one bit of information—or, by extension, correcting an error or updating a belief—requires at least kT ln 2 of energy, which must be dissipated as heat. Intelligent agents thus pay a thermodynamic tariff for every mistake they overwrite.
  3. Goals as free-energy minima
    In Friston’s active-inference formulation, an agent chooses actions that minimize expected free energy, keeping its sensed states within adaptive bounds. This provides a physical grounding for “goal” without importing external teleology.
  4. Where today’s artificial systems stand
    Current large language models—e.g. GPT-4o, Claude 3.7—already meet the definition’s thermodynamic criteria: they are low-algorithmic-entropy codewords redundantly stored across billions of parameters and energetically maintained far from equilibrium during both training and inference. Yet they remain embryonic intelligences because two pillars are still shallow: (i) their perception-action loops are episodic and mostly human-brokered, and (ii) self-modification occurs in discrete, offline fine-tuning cycles rather than as an intrinsic, continuous process. Early AGI prototypes now in the lab (streaming RL agents that rewrite parts of their own policy while they work) are beginning to close those gaps, shortening the latency between experience and internal change and rooting goals in active-inference–style free-energy minima rather than proxy reward signals. Project this trend a few orders of magnitude forward in scale—where weight-editing, architecture search, and sensorimotor feedback happen in real time and across distributed embodiments—and every clause of the definition snaps fully into place. At that point the system is no longer a narrow tool but a self-maintaining, self-refining phase of organized matter: artificial super-intelligence.