If everyone can build, who is accountable?

I’ve been thinking about how quickly “coding” became “AI.”

Not because the underlying systems changed overnight, but because the interface did.

For years, building software required fluency in a language most people were never taught. Platforms were powerful, but they assumed you already knew how to participate. There was a clear boundary between those who could build and those who couldn’t.

That boundary is dissolving with vibe coding.

These tools don’t require you to write code in the traditional sense. They ask you to describe what you want. The translation happens somewhere else. Instantly.

And that shift—more than anything technical—is what changed the conversation.

It’s also why the rebrand feels strange.

Because in many ways, the underlying logic hasn’t changed. Systems still require structure, constraints, and clarity of intent. What has changed is who can engage with them. Or more accurately, who *feels* competent enough. Even if they’re not.

“AI” is easier to approach than “software engineering.” It invites participation rather than signaling expertise.

But it also obscures something important.

When the interface becomes more intuitive, the system becomes less visible. And when the system becomes less visible, it becomes harder to question—harder to understand where decisions are being made, what assumptions are embedded, and who is accountable when something doesn’t work as expected.

That’s where my interest in AI policy starts.

Not from a place of fear, but from a recognition of how quickly access is expanding without a parallel expansion in understanding.

I spend a lot of time thinking about systems—how people move through them, where they break down, who they serve, and who they leave out. In my work, that shows up in the rooms I design, the conversations I facilitate, and the outcomes I’m responsible for delivering.

What AI is doing is shifting that same kind of system design into a much wider public space.

People are now building tools, workflows, and even decision-making processes through interfaces they don’t fully see. And increasingly, those outputs shape real-world outcomes—what gets funded, who gets hired, what information is surfaced, what paths feel available.

That requires a different kind of attention.

Not just to what these tools can do, but to how they are introduced, how they are framed, and what assumptions are carried forward in the process.

Because framing matters.

Calling something “AI” instead of “software” doesn’t just describe it—it changes who feels invited to use it, who feels qualified to question it, and who assumes responsibility for its impact.

And if more people are now able to build, then more people also need to be equipped to understand what they’re building inside of.

That, to me, is the policy conversation.

Not as a technical exercise, but as a question of access, transparency, and accountability in systems that are becoming harder to see at the exact moment they are becoming easier to use.
