JenniferKinne
  • Practical · The Measurement Problem in AI Risk: Why Output Variance Doesn't Capture Epistemic Drift (15 Mar 2026)
  • Theory · The "You" Problem: What AI Consciousness Discourse Gets Wrong (15 Mar 2026)
  • Practical · What to Audit Before Your AI Deployment Becomes a Liability (06 Mar 2026)
  • Theory · Open-Loop Generation: On the Architectural Basis of LLM Output Errors (28 Feb 2026)
  • Theory · The Measurement Problem in AI Risk: Why Output Variance Doesn't Capture Epistemic Drift (14 Feb 2026)
  • Practical · When Regulators Ask the Impossible (03 Feb 2026)
  • Practical · Constitutional AI and the Compression Target Problem (29 Jan 2026)
  • Practical · Why Your Vendor's AI Is Becoming Less Reliable (22 Jan 2026)
  • Theory · Why Epistemic Drift Is Mathematically Inevitable: An Information-Theoretic Analysis (22 Jan 2026)
  • Practical · When the AI Said "Done" But Nothing Happened: A Case Study in Interface Trust vs. System Reality (23 Dec 2025)
The Measurement Problem in AI Risk: Why Output Variance Doesn't Capture Epistemic Drift
15 Mar 2026 · 6 min read · Practical
Anthropic's recent paper "The Hot Mess of AI: How Does Misalignment Scale with Model Intelligence and Task Complexity?" makes an important empirical observation: frontier models show increasing output variance…

The "You" Problem: What AI Consciousness Discourse Gets Wrong
15 Mar 2026 · 5 min read · Theory
A new field has emerged with remarkable speed. It has journals, taxonomies, conferences, and a growing body of literature. It concerns itself with the psychology of artificial intelligence — with whether AI systems have…

What to Audit Before Your AI Deployment Becomes a Liability
06 Mar 2026 · 2 min read · Practical
Standard evaluations tell you whether your AI system performs. They don’t tell you whether it still knows what it’s talking about. The distinction matters because performance and epistemic reliability can decouple…

Open-Loop Generation: On the Architectural Basis of LLM Output Errors
28 Feb 2026 · 6 min read · Theory
A recent paper from Tsinghua University identifies what it calls H-Neurons — a sparse subpopulation of neurons whose activation patterns predict hallucination events in large language models [1]. The finding is real and probably…

The Measurement Problem in AI Risk: Why Output Variance Doesn't Capture Epistemic Drift
14 Feb 2026 · 4 min read · Theory
Anthropic's recent paper "The Hot Mess of AI: How Does Misalignment Scale with Model Intelligence and Task Complexity?" makes an important empirical observation: frontier models show increasing output variance…
Page 1 of 6
JenniferKinne © 2026