The Measurement Problem in AI Risk: Why Output Variance Doesn't Capture Epistemic Drift
Anthropic's recent paper "The Hot Mess of AI: How Does Misalignment Scale with Model Intelligence and Task Complexity?" makes an important empirical observation: frontier models show increasing output variance as task complexity grows.
When Regulators Ask the Impossible
Why AI Governance Frameworks and Generative AI Are Fundamentally Incompatible
The Scenario
You're eight months into deploying an AI system for clinical decision support. Internal reviews have passed; you decided not
Constitutional AI and the Compression Target Problem
The recent surge in "Constitutional AI" discourse reveals both progress and confusion in AI governance. While Constitutional AI represents an improvement over pure RLHF, it doesn't solve the fundamental compression target problem.
Why Your Vendor's AI Is Becoming Less Reliable
(And they don’t know it)
You deployed an AI system six months ago. It performed well in validation. Your vendor provided documentation showing 94% accuracy on test data. Your compliance team signed off.
Why Epistemic Drift Is Mathematically Inevitable: An Information-Theoretic Analysis
People are starting to notice that AI systems become less reliable over time. The term "epistemic drift" is emerging in research circles, often defined as a gradual shift away from truth-seeking.