When Regulators Ask the Impossible


Why AI Governance Frameworks and Generative AI Are Fundamentally Incompatible

The Scenario

You're eight months into deploying an AI system for clinical decision support. Internal reviews have been passed; you decided not to build in-house and chose a vendor who promised compliance. The documentation looks solid. Then the FDA auditor asks a straightforward question:

"Walk me through how the system arrived at this specific recommendation for this patient."

Your technical team explains the model architecture, the training approach, the validation metrics. The auditor nods politely and repeats the question:

"I need to see the evidence chain. Show me exactly which data points influenced this specific output."

And that's when you realize: the architecture fundamentally cannot answer this question.

Not because your team failed to document properly, or because the vendor cut corners. What the auditor is asking for doesn't exist, and cannot exist, given how the technology actually works.

Why This Keeps Happening

This scenario repeats across industries. Different regulators, different systems, same fundamental problem.

An algorithmic trading system at a financial services firm passes model risk management review, then fails when SEC examiners demand granular traceability for individual trading decisions. A pharmaceutical company's AI-assisted compound screening looks solid until EMA reviewers ask for invertible reasoning chains the architecture didn't preserve.

The pattern is consistent: organizations make architectural decisions during system design that seem reasonable at the time, then discover months later that those decisions guaranteed regulatory failure.

The gap isn't knowledge or expertise. Technical teams understand AI. Compliance teams understand regulations. But nobody is asking the critical question during architecture decisions: Can this system produce what regulators will eventually demand, given information-theoretic constraints?

Most implementations assume the answer is yes—that traceability is a documentation problem solvable with better logging, more detailed records, clearer approval workflows. Organizations layer governance processes onto AI systems without examining whether the underlying architecture can support what those processes require.

By the time the audit reveals the architectural incompatibility, it's expensive or impossible to fix. You can't retrofit information the system compressed away during training. You can't recover reasoning chains that never existed in the first place.

The Deeper Problem: Compression Dynamics vs. Regulatory Expectations

Current regulatory frameworks across industries—FDA guidance for medical devices, SEC requirements for algorithmic trading, NIST AI RMF, EU AI Act—share a common assumption: systems should produce traceable, auditable decision pathways that regulators can verify.

These frameworks evolved from decades of regulating deterministic software and rule-based systems. If a traditional medical device makes a decision, you can trace that decision back through explicit rules, programmed logic, and defined parameters. The process is invertible: given an output, you can work backward to identify exactly which inputs and rules produced it.
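To make that invertibility concrete, here is a deliberately simplified sketch of a rule-based check. The rules and thresholds are invented for illustration, not taken from any real device; the point is that every output carries an explicit list of the rules that fired, so a reviewer can work backward from the decision.

```python
# Contrast case: a rule-based system where every output has an explicit trace.
# Rules and thresholds are illustrative, not from any real clinical device.
RULES = [
    ("R1: flag if heart_rate > 120", lambda p: p["heart_rate"] > 120),
    ("R2: flag if systolic_bp < 90", lambda p: p["systolic_bp"] < 90),
]

def evaluate(patient):
    fired = [name for name, rule in RULES if rule(patient)]
    # The evidence chain is simply the list of rules that fired: fully auditable.
    return {"flagged": bool(fired), "evidence_chain": fired}

print(evaluate({"heart_rate": 130, "systolic_bp": 110}))
# {'flagged': True, 'evidence_chain': ['R1: flag if heart_rate > 120']}
```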

AI systems, particularly large language models and other generative AI, don't work this way.

These systems work through compression. They take massive amounts of training data and compress it into statistical patterns encoded in model weights. During inference, they generate outputs by making predictions from those compressed patterns.

This compression is lossy and non-invertible. Information that existed in the training data gets transformed, combined, and condensed. You cannot take a model output and work backward through the compression to recover the specific training examples or reasoning process that produced it. The information regulators want to audit—the explicit evidence chain—was compressed away. That's not a flaw; that's how the technology functions.
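A toy illustration of that many-to-one compression, using a two-parameter linear fit rather than anything resembling a real model: two different training sets produce the same fitted weights, so the weights alone cannot tell you which data produced them.

```python
# Toy illustration: model training is a lossy, many-to-one compression.
# Two different "training sets" yield the same fitted parameters, so the
# parameters cannot be inverted to recover the data that produced them.
import numpy as np

# Training set A: points on the line y = 2x + 1
x_a = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_a = 2 * x_a + 1

# Training set B: different points, same underlying relationship
x_b = np.array([10.0, 11.5, 13.0, 20.0])
y_b = 2 * x_b + 1

# "Training" = least-squares fit, compressing each dataset into two numbers
weights_a = np.polyfit(x_a, y_a, deg=1)  # slope, intercept
weights_b = np.polyfit(x_b, y_b, deg=1)

print(weights_a)  # ~[2. 1.]
print(weights_b)  # ~[2. 1.]  -- identical weights from different data

# Given only [2, 1], there is no way to work backward to x_a versus x_b.
# LLM training does the same kind of thing at vastly larger scale.
```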

When regulators ask "show me how this decision was made," they're asking for something the architecture cannot provide. Not won't provide—cannot provide, information-theoretically.

This incompatibility isn't an excuse to avoid oversight. It's a reason to design governance differently, asking questions AI systems can actually answer while still verifying what matters: reliability, safety, and performance within validated domains.

Think of it this way: you can't unbake a cake and trace the final product back to identify which specific grain of flour went where. The baking process is non-invertible. But you can verify the cake is safe and high-quality through ingredient testing before baking, process monitoring during baking, systematic sampling of outputs, and constraints on the baking process itself.

AI governance needs the same shift: from demanding invertible reasoning chains (impossible due to compression) to verifying reliability through methods that information-theoretic constraints actually allow.

Organizations that understand these constraints before deployment can architect systems that satisfy what regulators actually care about, even when they can't provide what current frameworks literally request.

The Manifestations Across Industries

Life Sciences:
FDA reviewers expect evidence chains showing how AI-assisted diagnostic systems reached specific conclusions. They want to see which training cases influenced which outputs, how the model weighted different features, what the reasoning pathway looked like.

But LLM-based clinical decision support systems compress medical literature, case studies, and clinical data into pattern representations that enable prediction without preserving invertible reasoning chains. The system can predict effectively. It cannot show its work the way FDA frameworks assume.

Financial Services:
Model risk management frameworks and SEC oversight require firms to demonstrate how algorithmic trading systems make decisions. Regulators want granular traceability: for any given trade, show the decision logic, the data inputs, the reasoning process.

But AI systems that compress market data, news feeds, and historical patterns into predictive models can't provide this level of backward traceability. The compression that enables effective prediction also eliminates the evidence chains regulatory frameworks expect.

Healthcare Systems:
Clinical decision support tools must satisfy patient safety requirements that assume transparent, auditable reasoning. When an AI system recommends a treatment path, healthcare administrators need to verify the recommendation's basis for liability and quality assurance purposes.

Yet the AI's "reasoning"—if we can even call it that—exists as distributed patterns across millions of parameters, not as discrete logical steps someone can review and validate. The system compresses medical knowledge in ways that enable useful predictions but don't preserve the interpretable reasoning chains governance frameworks require.

What Needs to Change

The solution isn't better documentation templates or more rigorous compliance processes. Organizations cannot solve this through implementation rigor alone.

We need regulatory frameworks that ask answerable questions about AI systems.

Current frameworks ask: "Show me the evidence chain for this decision." This question assumes invertibility that doesn't exist.

Better questions recognize information-theoretic constraints:

  • "What systematic testing demonstrates this model's prediction reliability for this class of inputs?"
  • "What monitoring systems detect when the model's behavior drifts from validated performance?"
  • "What architectural constraints prevent the accumulation of errors that would compromise safety or accuracy?"
  • "How do you verify the model maintains performance within its validated domain?"

These questions are actually answerable. They test what matters—whether the system performs reliably and safely—without requiring impossible evidence chains through non-invertible compression.

This requires regulatory frameworks to evolve from process verification to outcome verification. Instead of auditing how a specific decision was made (which assumes invertibility), audit whether the system reliably produces valid outputs within its intended domain (which is testable even for compressed models).
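A minimal sketch of what outcome verification could look like in code, assuming a hypothetical `model.predict` interface and an acceptance threshold agreed in advance: the audit artifact is the test protocol and its statistical result, not a per-decision reasoning chain.

```python
# Minimal sketch of outcome verification: demonstrate statistical reliability
# over a validated domain instead of tracing individual decisions.
# `model` and the 0.95 acceptance threshold are placeholders, not a real system.
import math

def validate_reliability(model, validation_cases, threshold=0.95):
    """Check the model against labeled cases drawn from its intended domain."""
    correct = sum(1 for inputs, expected in validation_cases
                  if model.predict(inputs) == expected)
    n = len(validation_cases)
    accuracy = correct / n

    # Normal-approximation 95% confidence interval on the accuracy estimate
    margin = 1.96 * math.sqrt(accuracy * (1 - accuracy) / n)
    lower_bound = accuracy - margin

    # The evidence handed to a reviewer is this result plus the test protocol.
    return {
        "cases": n,
        "accuracy": accuracy,
        "ci_lower_95": lower_bound,
        "passes": lower_bound >= threshold,
    }
```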

Information-theoretic governance means understanding what you can and cannot verify about systems that work through compression, and designing oversight accordingly.

How to Deploy AI Successfully Despite This Mismatch

While waiting for regulatory frameworks to evolve—which will take years—organizations deploying AI in regulated environments face a practical problem: current requirements don't align with technical reality, but you still need to satisfy auditors.

Three approaches that work:

1. Architectural honesty during design
Before deployment, explicitly map what your architecture can and cannot provide. If regulators will ask for evidence chains and your system compresses information non-invertibly, you have an architectural mismatch. Identify this before implementation, not during an audit.

Organizations that surface these incompatibilities early can make different architectural choices: constraining the AI's role to preserve traceability where regulations demand it, or designing alternative verification approaches that satisfy regulatory intent even when literal compliance is impossible.
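One lightweight way to force that mapping is to write it down as a checklist reviewed before implementation. The entries below are illustrative placeholders, not drawn from any specific regulatory framework.

```python
# Hypothetical pre-deployment check: list what reviewers are expected to ask
# for and whether the chosen architecture can actually produce it.
EXPECTED_REQUESTS = {
    "per-decision evidence chain": False,  # non-invertible compression: cannot provide
    "training data provenance": True,      # data pipeline can log this
    "validation test results": True,       # reliability testing can provide this
    "drift monitoring records": True,      # runtime telemetry can provide this
}

mismatches = [req for req, supported in EXPECTED_REQUESTS.items() if not supported]
if mismatches:
    print("Architectural mismatches to resolve before implementation:")
    for req in mismatches:
        print(f"  - {req}: plan an alternative evidence strategy or constrain the AI's role")
```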

2. Alternative evidence strategies that demonstrate what matters
When direct traceability is information-theoretically impossible, shift to verification methods that demonstrate reliability and safety:

  • Comprehensive validation testing across the system's intended domain
  • Continuous monitoring for drift from validated performance
  • Systematic performance verification under varied conditions
  • Constraint-based safety bounds that prevent outputs outside validated ranges
  • Statistical evidence of reliability across populations rather than individual reasoning chains

These approaches don't answer the invertibility question regulators are asking. But they provide evidence that matters more: whether the system works reliably and safely within its validated domain. That is often what regulators actually care about; they ask for reasoning chains because reasoning chains were how reliability was verified in deterministic systems.
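As one concrete example of the continuous-monitoring item in the list above, a drift check can compare the distribution of a model's confidence scores in production against the distribution recorded during validation. The population stability index and the 0.2 alert threshold used here are common industry heuristics, not regulatory requirements, and the score distributions are simulated.

```python
# Minimal drift-monitoring sketch: compare production score distribution
# against the validation-time baseline using a population stability index.
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between two samples of the same score; higher means more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions, flooring at a small value to avoid log(0)
    base_frac = np.clip(base_counts / len(baseline), 1e-6, None)
    curr_frac = np.clip(curr_counts / len(current), 1e-6, None)
    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))

# Simulated data: baseline scores from validation, current scores from production
rng = np.random.default_rng(0)
baseline_scores = rng.beta(8, 2, size=5_000)  # stand-in for validated behavior
current_scores = rng.beta(6, 3, size=1_000)   # stand-in for drifted behavior

psi = population_stability_index(baseline_scores, current_scores)
if psi > 0.2:  # common rule-of-thumb alert threshold
    print(f"Drift alert: PSI={psi:.2f}; re-validate before continued use")
```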

3. Proactive regulator engagement and education
Many regulators don't yet understand information-theoretic constraints in AI systems. They're applying frameworks developed for rule-based software to compression-based AI without recognizing the fundamental architectural differences.

Organizations that can explain—clearly, without jargon—why certain requirements are technically impossible while proposing alternative verification approaches that better demonstrate reliability often find regulators receptive. The conversation shifts from "show me the evidence chain" to "demonstrate that your system is reliable and safe within its domain."

This requires technical depth regulators can trust combined with genuine commitment to safety and oversight—not attempts to evade accountability.

The Path Forward

The current situation—regulatory frameworks demanding invertibility from systems that work through non-invertible compression—is unsustainable.

Either organizations will stop deploying AI in regulated environments, accepting that the technology cannot satisfy existing frameworks. Or regulatory frameworks will evolve to ask questions that information-theoretic constraints allow systems to answer.

The second path is both more realistic and more important. AI systems offer genuine value in healthcare, financial services, research, and other regulated domains. Preventing their deployment because regulatory frameworks haven't adapted to compression dynamics would sacrifice real benefits for adherence to outdated assumptions.

But evolution requires understanding. Regulators need to grasp why their current questions are often unanswerable—not because organizations are hiding information, but because the information they're requesting was compressed away before the system could use it effectively.

Organizations need to stop pretending they can retrofit traceability onto architectures that weren't designed for it, and start having honest conversations about what's actually verifiable.

Standards bodies need to develop governance frameworks grounded in information theory, not just policy templates adapted from pre-AI regulatory approaches.

This isn't about lowering standards. It's about asking the right questions—questions that verify what matters without demanding the impossible.

Organizations that understand information-theoretic constraints before their competitors have a strategic advantage. While others are retrofitting impossible traceability requirements onto deployed systems, you can architect governance that provides evidence regulators actually need (reliability verification) rather than what frameworks literally ask for (invertible reasoning chains).

This creates systems that satisfy regulatory intent through technically feasible means—which requires understanding what's possible before you build.


For organizations implementing AI in regulated environments: Architecture reviews using information-theoretic methods can identify these incompatibilities before deployment, preventing expensive regulatory failures. Learn more about EpistemIQ Readiness Assessments.

Questions about regulatory compatibility and AI architecture? Contact me at jenniferfkinne@proton.me


Jen