Why AI Regulation Must Start From Scratch

When regulators first encountered biotechnology, nanomedicine, and digital health, they adapted existing frameworks. Drugs, devices, and diagnostics each had long histories of oversight, so new technologies were folded into those categories with incremental updates.

Artificial intelligence does not fit this model.

The defining feature of AI is that it evolves. A machine learning system is not a static product like a pill or a syringe pump. Its function is to change in response to new data, sometimes daily or even continuously. This is not a side effect; it is the core of how these systems work. We want these systems to adapt, because that adaptability is what makes them powerful.

But it also makes them fundamentally different from every category of product regulators have managed before.


The Problem with Patchwork

Today, most agencies are attempting to treat AI as if it were a drug or device with novel features. In the U.S., AI-enabled software is being pulled into the “device” pathway. In Europe, the AI Act creates new risk categories, but still uses device-style obligations as the scaffolding. These are pragmatic first steps, but they are also patchwork.

Patchwork has limits. It assumes that the underlying regulatory logic is sound for this new context. Yet static frameworks are designed for products that remain what they are after approval. Drugs don’t evolve once manufactured. Devices don’t rewrite their own instructions after clearance. AI does.

If regulators apply the old mental model, they risk treating evolution as noise — an anomaly to be patched over — rather than the defining feature that must be addressed.


Why a New Frame Is Needed

Starting “from scratch” does not mean discarding decades of regulatory wisdom. The foundational principles remain: protect patients, ensure safety and effectiveness, promote transparency, and hold decision-makers accountable.

What must change is the frame. Instead of asking only "Is this product safe and effective right now?", regulators must also ask:

  • “How do we ensure it remains reliable as it changes?”
  • “How do we preserve transparent, defensible knowledge as the system adapts?”
  • “How do we make accountability meaningful when the product is not a fixed entity?”

These are not extensions of old questions — they are new questions entirely.


Toward a First-Principles Approach

A true regulatory framework for AI would:

  • Treat continuous evolution as the starting point, not the exception.
  • Build in mechanisms for lifecycle monitoring, feedback loops, and recalibration.
  • Require transparency not only of outputs, but of the processes by which systems learn and update.
  • Anchor every decision in scientific plausibility, so correlations never replace biological or clinical grounding.

These are not radical departures from regulatory science. They are extensions of its deepest commitments: evidence, accountability, and public trust.


The Path Ahead

AI will not wait for regulators to catch up. Its integration into drug development, medical devices, and public health is already accelerating. If agencies continue to patch old frameworks, they will be perpetually behind — chasing updates instead of guiding innovation.

The better path is to begin from first principles: to design oversight around what AI is, not what came before. That requires courage, but it also creates clarity. And clarity is the most important safeguard we have.


✦ Jennifer Kinne is a regulatory strategist and biologist focused on AI governance and epistemic integrity. A longer article expanding on this argument is forthcoming in RAPS Regulatory Focus.
