About

AI Governance · Epistemic Integrity · Regulated Industries

Faculty of Arts & Sciences, Harvard University


Background

Twenty years of regulatory science, risk, and compliance at Harvard's Faculty of Arts & Sciences, spanning pharmaceutical research, biotech, and academic research operations at institutional scale.

That work formed the foundation of a longer inquiry: why governance frameworks that look correct on paper fail when they meet reality under pressure. The answer, developed through parallel work in information theory, philosophy of science, and the epistemology of complex systems, is structural. Most frameworks measure internal consistency. None of them measure correspondence with the world.

That gap is what my work addresses.


Theoretical Work

This work develops an information-theoretic account of epistemic integrity in intelligent systems, both biological and artificial. A branch of the formal framework appears in The Information-Theoretic Imperative: Compression and the Epistemic Foundations of Intelligence, available at arXiv:2510.25883, currently under journal review.

The underlying theoretical project is under separate review.


Current Work
VeracIQ — Founder

A patent-pending information-theoretic method for detecting epistemic drift in AI systems before architectural failures become regulatory problems. Current governance approaches measure whether a system agrees with itself. VeracIQ measures whether it agrees with the world.

Institutional Coherence Initiative — Founding Partner

A cross-disciplinary initiative developing frameworks for institutional knowledge integrity in the age of AI. ICI explores how institutions can build the next layer of governance infrastructure: tools that translate ethical commitments and regulatory frameworks into operational decision systems.

ITAG — Advisory Board Member

Industry advisory board. As tax administrations worldwide accelerate AI adoption, the gap between technology and governance widens. ITAG works to close it: not a think tank but an operating body that defines standards, certifies professionals, and deploys governance frameworks.


Engagements

Speaking, board briefings, executive advisory, and architecture reviews for organizations deploying AI in regulated environments. Current focus: life sciences, financial services, and healthcare systems.

For speaking inquiries, board briefings, or to discuss a specific governance challenge, the best first step is a direct conversation.


Approach

The kind of truth worth pursuing doesn't come from consensus or authority. It comes from alignment with reality itself, which is larger than any single perspective.

I often say, "I could be wrong," not as a signal of doubt but as a recognition that staying open to being wrong isn't self-distrust. It's trust in the truth.


MMA fighter. Rock and ice climber. Cellist. Cargo bikes everywhere, kids included.