About Jennifer Kinne
Systems Thinking • AI Governance • Epistemic Integrity Across Domains
Affiliation: Faculty of Arts & Sciences, Harvard University
I work at the intersection of biology, information theory, and governance, helping regulated organizations architect AI systems that survive regulatory scrutiny. My approach begins with a biological truth: systems that survive must compress truthfully.
I created EpistemIQ, a patent-pending framework for identifying architectural issues in AI governance before they become expensive regulatory failures. The logic extends into a theory uniting biological persistence and machine-learning alignment under a single informational law; the formal theory, written with my coauthor, is detailed in The Information-Theoretic Imperative: Compression and the Epistemic Foundations of Intelligence (arXiv:2510.25883).
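For readers who want the "compress truthfully" idea in symbols, the classical two-part minimum description length (MDL) objective is one standard way to state it. This is an illustrative framing only, not the formal statement developed in the paper:

```latex
% Two-part MDL: prefer the model M that minimizes the combined cost of
% describing itself and describing the data under it.
%   L(M)        -- description length of the model (its complexity)
%   L(D \mid M) -- description length of the data given the model
\[
  M^{*} = \arg\min_{M \in \mathcal{M}} \bigl[\, L(M) + L(D \mid M) \,\bigr]
\]
% A system compresses truthfully when it shortens L(D | M) only by
% capturing genuine structure in D, never by discarding evidence it
% will later be asked to produce.
```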
Background
- 20+ years in research operations, risk, and compliance at Harvard's Faculty of Arts & Sciences
- Speaker on AI governance and clinical operations transformation
- Candidate, Regulatory Affairs Certification (RAC Devices)
- Participant and moderator at industry roundtables and summits
What I Do
I review AI governance architectures before implementation to identify structural issues that will create regulatory problems. This prevents expensive failures and rework.
Organizations deploying AI in regulated environments face a recurring challenge: technical teams and compliance teams can't effectively communicate about system capabilities and constraints. This creates implementations that appear to work internally but fail when regulators require evidence chains the architecture can't provide.
I help organizations identify these architectural incompatibilities before deployment, which requires understanding information theory, LLM architecture, and regulatory requirements simultaneously.
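As a concrete illustration of what "evidence chain" means architecturally, here is a minimal sketch in Python of a hash-chained audit log for AI-assisted decisions. Every name here (EvidenceRecord, EvidenceChain, the field set) is hypothetical and for illustration; it is not a client implementation or a prescribed design.

```python
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class EvidenceRecord:
    """One AI-assisted decision, with enough context to reconstruct it."""
    decision_id: str
    model_version: str   # which model produced the output
    inputs_digest: str   # hash of the inputs actually used
    output_summary: str  # what the system recommended
    reviewer: str        # the human accountable for the decision
    prev_hash: str       # commitment to the previous record
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def digest(self) -> str:
        """Deterministic hash of this record's full contents."""
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

class EvidenceChain:
    """Append-only log in which each record commits to its predecessor."""

    def __init__(self) -> None:
        self.records: list[EvidenceRecord] = []

    def append(self, **fields) -> EvidenceRecord:
        prev = self.records[-1].digest() if self.records else "genesis"
        record = EvidenceRecord(prev_hash=prev, **fields)
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every link; any retroactive edit breaks the chain."""
        prev = "genesis"
        for record in self.records:
            if record.prev_hash != prev:
                return False
            prev = record.digest()
        return True
```

The point of the sketch is structural: if the deployed architecture never captures model_version or inputs_digest at decision time, no amount of post-hoc reporting can reconstruct the chain. That is exactly the class of gap an architecture review is meant to surface before regulators do.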
Where This Applies
Life sciences: FDA/EMA submission workflows, clinical trial oversight, post-market surveillance where AI-assisted decisions require evidence chains.
Financial services: Model risk management, algorithmic trading oversight, regulatory reporting where audit requirements conflict with system architecture.
Healthcare systems: Clinical decision support, diagnostic AI where patient safety requirements and AI capabilities need careful architectural alignment.
Research institutions: IRB processes, cross-departmental governance where expert domains need to coordinate on AI deployment.
How Engagements Work
- Architecture reviews to identify structural risks before implementation.
- Gap analysis showing where current approaches conflict with regulatory requirements.
- Implementation guidance for architecting governance systems that survive scrutiny.
Engagements typically begin with a structured assessment, followed by specific recommendations for preventing compliance failures.
Approach
I often say, "I could be wrong." Not as a signal of doubt, but as a recognition of reality. The aim is not to perform certainty. It's to see clearly, and to help others see clearly as well.
The kind of truth worth pursuing doesn't come from consensus or authority. It comes from alignment with reality itself. Because reality is larger than any single perspective, staying open isn't self-distrust; it's trust in the truth.
This work centers on clarity rather than persuasion. Coercion is easy; clarity is harder. But clarity, grounded in biological and informational reality, is the only force that can reform broken systems without reproducing their errors.
Whether the task is improving oversight, mapping AI risk, or building governance frameworks, the purpose stays consistent: making systems that shape decisions more aligned with the reality that sustains them.
Connect
For collaboration, architecture reviews, speaking inquiries, or EpistemIQ briefings:
jenniferfkinne@proton.me