About Jennifer Kinne


Systems Thinking • AI Governance • Epistemic Integrity Across Domains

Affiliation: Faculty of Arts & Sciences, Harvard University

EpistemIQ is a patent-pending information-theoretic method for detecting epistemic drift in AI systems before architectural failures become regulatory problems. The approach identifies incompatibilities between AI governance architectures and regulatory requirements, preventing expensive compliance failures in life sciences, financial services, healthcare, and research institutions.

Organizations deploying AI in regulated environments face a recurring challenge: governance architectures that pass internal review but fail when regulators require evidence chains the system can't provide. Standard approaches miss these structural issues because they don't analyze information flow and compression dynamics in the architecture itself.

My theoretical framework unites biological persistence and machine learning alignment under one informational law. The formal development with my co-author appears in The Information-Theoretic Imperative: Compression and the Epistemic Foundations of Intelligence (arXiv:2510.25883). It is currently under review at a scientific journal.

Background

  • 20+ years in research operations, risk, and compliance at Harvard's Faculty of Arts & Sciences
  • Speaker on AI governance and clinical operations transformation
  • Regulatory Affairs Certification (Devices candidate)
  • Participant and moderator at industry roundtables and summits

What I Do

I review AI governance architectures before implementation to identify structural issues that would otherwise create regulatory problems.

A recurring obstacle in these deployments is that technical teams and compliance teams can't effectively communicate about system capabilities and constraints. The result is implementations that pass internal review but collapse under regulatory scrutiny because the architecture can't produce the required evidence chains.

The solution involves detecting epistemic drift: the phenomenon in which an AI system's internal representations diverge from ground truth in ways that accumulate over time. Most governance approaches can't identify this architectural failure mode until it has already caused compliance problems. Information-theoretic methods detect these incompatibilities before deployment by analyzing compression efficiency and information flow in the system architecture itself.
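As a toy illustration only (not the EpistemIQ method, which is patent-pending and not described here), one simple way to monitor drift in a deployed system is to compare the distribution of its outputs in each production window against a validation-time baseline using KL divergence. Every name, number, and threshold below is an illustrative assumption:

```python
import math

def kl_divergence(p, q):
    """D_KL(p || q) for discrete distributions given as equal-length lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def detect_drift(baseline, windows, threshold=0.05):
    """Return indices of windows whose output distribution has drifted
    from the baseline by more than `threshold` nats."""
    return [i for i, w in enumerate(windows) if kl_divergence(w, baseline) > threshold]

# Hypothetical baseline: distribution over three decision categories
# observed during validation.
baseline = [0.70, 0.20, 0.10]

# Hypothetical monthly distributions of the same system's decisions in production.
windows = [
    [0.69, 0.21, 0.10],   # close to baseline
    [0.60, 0.25, 0.15],   # shifting, but under threshold
    [0.45, 0.30, 0.25],   # clearly drifted
]

print(detect_drift(baseline, windows))  # → [2]
```

This captures only the crudest symptom, a shifting output distribution; the point of an architectural review is to find the structural causes of such drift before it appears in production metrics.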

Where This Applies

Life sciences: FDA/EMA submission workflows, clinical trial oversight, post-market surveillance where AI-assisted decisions require evidence chains and epistemic drift threatens validation.

Financial services: Model risk management, algorithmic trading oversight, SEC/FINRA reporting, AML/KYC compliance where audit requirements conflict with system architecture and information-theoretic analysis reveals hidden failure modes.

Healthcare systems: Clinical decision support, diagnostic AI where patient safety requirements and AI capabilities need architectural alignment and compression dynamics inform safety bounds.

Research institutions: IRB processes, cross-departmental governance where expert domains need to coordinate on AI deployment and epistemic drift detection prevents systemic failures.

How Engagements Work

Architecture reviews identify structural risks before implementation using information-theoretic analysis. Gap analysis shows where current approaches conflict with regulatory requirements and where epistemic drift is likely to emerge. Implementation guidance provides specific recommendations for architecting governance systems that maintain compression efficiency and survive regulatory scrutiny.

Engagements typically begin with a structured assessment of information flow and compression dynamics in the proposed architecture, followed by specific recommendations for preventing compliance failures.

Approach

I often say, "I could be wrong." Not as a signal of doubt, but as a recognition of reality. The aim is not to perform certainty. It's to see clearly, and to help others see clearly as well.

The kind of truth worth pursuing doesn't come from consensus or authority. It comes from alignment with reality itself. Because reality is larger than any single perspective, staying open isn't self-distrust; it's trust in the truth.

This work centers on clarity rather than persuasion. Coercion is easy; clarity is harder. But clarity, grounded in biological and informational reality, is the only force that can reform broken systems without reproducing their errors.

Whether the task is improving oversight, mapping AI risk, or building governance frameworks, the purpose stays consistent: making systems that shape decisions more aligned with the reality that sustains them.

Connect

For collaboration, architecture reviews, speaking inquiries, or EpistemIQ briefings:

jenniferfkinne@proton.me

LinkedIn