They’re Not Interchangeable
There is a confusion running through AI governance discourse that rarely gets named directly, possibly because naming it implicates most of the frameworks currently in use.
The confusion is about what kind of uncertainty we are actually dealing with.
Epistemic uncertainty is uncertainty about a system that exists and behaves in specific ways we haven't fully characterized. It is uncertainty in the observer, not in the system. In principle, more information resolves it. Probabilistic frameworks are well-suited here: they represent what we don't know about something that is, underneath, knowable.
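To make the "resolvable in principle" claim concrete, here is a minimal sketch under toy assumptions: a coin with a fixed but unknown bias, modeled with a Beta prior. The prior and the numbers are illustrative; the point is only that the observer's remaining uncertainty, measured as posterior variance, shrinks toward zero as observations accumulate.

```python
# Epistemic uncertainty resolving with data: the coin's behavior is specific
# and fixed; only the observer's estimate of it is uncertain.

def beta_posterior_variance(alpha: float, beta: float, heads: int, tails: int) -> float:
    """Variance of the Beta(alpha + heads, beta + tails) posterior,
    i.e. the observer's remaining uncertainty about the bias."""
    a, b = alpha + heads, beta + tails
    return (a * b) / ((a + b) ** 2 * (a + b + 1))

# Start uninformed (Beta(1, 1) is uniform), then observe a coin that is,
# in fact, 70% heads. More flips, less uncertainty in the observer.
for n in (0, 10, 100, 10_000):
    heads = round(0.7 * n)
    var = beta_posterior_variance(1.0, 1.0, heads, n - heads)
    print(f"after {n:>6} flips, posterior variance = {var:.6f}")
```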
Ontological uncertainty – or more precisely, Knightian uncertainty – is different. It is not uncertainty about a known possibility space. It is uncertainty that includes possibilities we haven't thought of yet. You cannot assign a probability to a possibility you haven't conceived. The distribution isn't imprecisely known. It doesn't exist yet as an object of knowledge.
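The same point in code, using hypothetical failure-mode names: a distribution can only be normalized over outcomes someone has already enumerated, so a mode nobody conceived of receives no probability at all, not a small one.

```python
# Knightian uncertainty in miniature: the distribution is defined only over
# the possibility space its author conceived of. All names are illustrative.

conceived_failures = {
    "prompt_injection": 0.5,
    "data_leakage": 0.3,
    "harmful_output": 0.2,
}
assert abs(sum(conceived_failures.values()) - 1.0) < 1e-9  # normalized over the known space

novel_failure = "emergent_tool_misuse"  # a mode nobody enumerated in advance

# There is no principled number to assign here: the novel mode isn't
# "low probability", it is outside the space the distribution was built on.
print(conceived_failures.get(novel_failure))  # None: silent, not small
```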
Most AI governance frameworks treat these as interchangeable. They apply probabilistic risk tools to frontier AI systems and present the result as a characterization of the risk. But the uncertainty we face with large language models deployed in novel contexts is not primarily epistemic in the resolvable sense. The output space hasn't been characterized. The failure modes haven't been enumerated. What looks like a probability distribution over outcomes is usually a representation of the analyst's ignorance rather than a property of the system.
This distinction matters because the tools were built for the first kind of uncertainty. When applied to the second kind, they produce confident-looking assessments of risks that haven't actually been characterized. The math is not wrong. The assumption smuggled in underneath it is.
The apparent randomness in AI outputs compounds the confusion. Because these systems produce different outputs across runs, they feel stochastic, as though the uncertainty is in the system itself. But the uncertainty we observe is mostly in us, not in the computation. The system is doing something specific; we don't know what. Dressing that ignorance in probabilistic language doesn't resolve it; it performs resolution.
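A minimal sketch of where that randomness actually lives, with toy logits standing in for a model's forward pass (the values and function names are illustrative): the forward pass is a fixed function of its inputs, and the run-to-run variation comes entirely from the sampler layered on top of it.

```python
import math
import random

def forward_pass(prompt: str) -> list[float]:
    """Stand-in for a model's computation: deterministic, same logits every call."""
    return [2.0, 1.0, 0.5, 0.1]

def sample_token(logits: list[float], rng: random.Random) -> int:
    """Softmax the logits, then draw: the only stochastic step in the pipeline."""
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs)[0]

logits = forward_pass("same prompt")
print([sample_token(logits, random.Random(seed)) for seed in range(8)])
# Different tokens across "runs", yet forward_pass returned identical logits
# every time: the variability is injected, not a property of the computation.
```

Greedy decoding, always taking the argmax, would make the same system look deterministic, which is one way to see that the observed stochasticity is a sampling choice rather than something the underlying computation possesses.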
This matters enormously when frontier models operate in genuinely novel deployment contexts. Frank Knight drew the relevant line in 1921: risk describes a situation where you know the possibility space and can assign probabilities; uncertainty describes a situation where you don't. Most AI governance frameworks are risk frameworks applied to uncertainty conditions. They are not dishonest; sometimes the intent is genuinely to represent what we don't know. But they import the assumption that the distribution is estimable, and in those contexts, it is not.
The honest response to this is to say: we are operating under uncertainty, here is how we are making decisions under those conditions, and here is what would cause us to update. Some frameworks do this. Most don't. Most present the probabilistic overlay as if it were a characterization of the risk rather than a representation of the analyst's uncertainty. Those are different things, with different implications for how much confidence the framework actually warrants.
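For what that honest response could look like in practice, here is a hedged sketch; every field name and example value is invented for illustration, not drawn from any real framework:

```python
from dataclasses import dataclass

@dataclass
class UncertaintyDecision:
    """A decision record that separates the two kinds of uncertainty
    and states in advance what would force a re-evaluation."""
    decision: str
    known_unknowns: list[str]        # epistemic: resolvable with more data
    acknowledged_blind_spots: str    # Knightian: modes not yet conceived of
    update_triggers: list[str]       # observations that force an update

record = UncertaintyDecision(
    decision="Deploy to a limited pilot, not general availability",
    known_unknowns=["failure rate on out-of-distribution inputs"],
    acknowledged_blind_spots="failure modes absent from every current list",
    update_triggers=[
        "any observed failure mode not in the pre-deployment enumeration",
        "pilot error rate exceeding the agreed rollback threshold",
    ],
)
```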
This is not an argument against AI governance frameworks. It is an argument for precision about what they are doing and what they are not doing. A framework that acknowledges uncertainty and structures decisions accordingly is doing something useful. A framework that papers over uncertainty with probabilistic language is producing the appearance of governance rather than the thing itself.
The world has structure. It has causal regularities that do not update based on what organizations believe about them. The gap between a model and the territory it represents is not a social problem: it does not close through better communication or stronger culture. It closes through contact with reality. That contact has always been the job. It has just, for a while, been underpriced.
It is not underpriced anymore.