Where Are the Building Codes?
AI is starting to look less like software and more like infrastructure, but the governance frameworks that normally accompany infrastructure are still arriving.
Tomorrow Jensen Huang will walk onto a stage in San Jose in front of roughly thirty thousand engineers, founders, and investors gathered for NVIDIA’s annual conference. The message NVIDIA has been telegraphing for weeks is this: AI is no longer a product, it is infrastructure. Something closer to electricity than software.
I think he’s right.
We write codes for bridges. We inspect power grids. Banks carry capital reserves because regulators assume something will eventually go wrong. When a system earns the label infrastructure, governing it stops being a philosophical question and becomes a necessary, practical one. Usually that shift happens after the first serious failure.
AI governance is sitting somewhere between those two moments.
NVIDIA often describes AI as a five-layer stack: energy, chips, infrastructure, models, and applications. Every layer is accelerating at once, backed by enormous investment. By their own framing this may be one of the largest infrastructure buildouts in human history.
Note that governance doesn’t appear anywhere in that stack.
That’s not really a critique of NVIDIA. Infrastructure builders focus on capacity, and governance usually arrives later, written by institutions that were not involved during system design.
Earlier this year the National Institute of Standards and Technology launched its AI Agent Standards Initiative, focusing on identity management, access controls, audit logging, and incident response for autonomous systems. The public comment period closed earlier this month. In regulated industries, NIST frameworks have a habit of becoming supervisory expectations long before they become formal rules.
Meanwhile the regulatory calendar is moving faster than most organizations realize. The EU AI Act reaches general application in August 2026, while several U.S. state laws arrive sooner.
The governance gap becomes most visible with agent systems.
Traditional AI governance assumes a model generates an output and a human reviews it. Agent systems break that assumption. An agent queries systems, moves data, triggers transactions, sends confirmations, and records the activity. The loop closes without anyone in the middle.
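The closed loop is easier to see in code. Here is a minimal, entirely hypothetical sketch: the account numbers, tool functions, and task shape are all invented for illustration. The point is the shape of the loop, which runs query → act → confirm → log with no human review step anywhere in the chain.

```python
import json
import datetime

# Append-only audit trail; in NIST's framing, this log is the control surface.
AUDIT_LOG = []

def log(event, detail):
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event": event,
        "detail": detail,
    })

def query_balance(account):
    # Stand-in for a system query (invented data).
    return {"account": account, "balance": 1200.00}

def transfer(src, dst, amount):
    # Stand-in for a triggered transaction.
    return {"status": "settled", "src": src, "dst": dst, "amount": amount}

def send_confirmation(recipient, payload):
    # Stand-in for an outbound confirmation message.
    return f"sent to {recipient}: {json.dumps(payload)}"

def run_agent(task):
    """One pass of the loop: no reviewer between decision and action."""
    state = query_balance(task["src"])
    log("query", state)
    if state["balance"] >= task["amount"]:
        result = transfer(task["src"], task["dst"], task["amount"])
        log("transaction", result)
        receipt = send_confirmation(task["owner"], result)
        log("confirmation", receipt)
    return AUDIT_LOG

trail = run_agent({"src": "A-100", "dst": "B-200",
                   "amount": 250.0, "owner": "ops@example.com"})
print(len(trail))  # three audit records, none of them a human approval
```

Everything a reviewer would normally do lives only in the audit log after the fact, which is exactly why identity, access, and logging controls are where the standards work is concentrating.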
Infrastructure gets built quickly, then it gets regulated, then organizations spend years retrofitting controls that should have been designed into the system from the beginning.
The grid is being built while the building codes are still being written by NIST, the EU, state regulators, and eventually your federal banking supervisor. Building codes exist for a reason.
We usually remember that reason after something breaks.
Algorithm & Blues publishes one clear argument per week on AI research, governance, and the long arc.