Joe Fuqua
Enterprise AI Governance & Architecture
Algorithm & Blues · Weekly
Charlotte, NC · Est. 1988
Governance & Control

Culture as the Control Plane for Enterprise AI

Tech isn’t the limiter—your operating model is

An Architectural View

In networking, we talk about the concept of ‘planes’: the data plane moves packets, the control plane decides where they go. The fastest pipes in the world don’t matter if the routing rules are wrong.

Enterprise AI works the same way. Models, APIs, and infrastructure form the capability plane—the layer that delivers raw technical power. But the control plane isn’t just one thing. It’s the mix of culture, process, and organizational design that determines what actually happens in practice.

The difference is that while process and org can be explicitly designed, culture usually operates through defaults—norms, habits, and leadership behavior. That’s why so many pilots thrive in the lab but stall in production. The capability plane does its job, but the control plane is still running on implicit settings.

Misrouted Capabilities

Most enterprises invest heavily in the capability plane: model accuracy, API reliability, security, compliance. All critical, and often well executed. But far less effort goes into the control plane—the mechanisms that decide whether those technical investments ever get used.

Those mechanisms include:

  • Routing: who gets access to AI tools, and who doesn’t.
  • Quality Control: when AI output is accepted versus escalated.
  • Policy Enforcement: what behaviors are rewarded or discouraged.
  • Exception Handling: how failures or surprises are resolved.

When these aren’t tuned, powerful tools sit idle. Teams produce brilliant AI insights that never leave the analytics group because managers aren’t sure they’re “safe” to share upstream. The packet is dropped, not because the AI failed, but because the control plane has no route for trust.
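The four mechanisms above can be made concrete as explicit rules rather than implicit settings. Here is a minimal sketch in Python; every name in it (`AIOutput`, `route_output`, the 0.8 confidence threshold) is invented for illustration, not taken from any real framework or policy.

```python
from dataclasses import dataclass

@dataclass
class AIOutput:
    author_role: str   # who produced it
    confidence: float  # model-reported confidence, 0..1
    in_scope: bool     # does it fall inside the agreed use case?

def route_output(item: AIOutput, allowed_roles: set) -> str:
    # Routing: is this role even allowed to use the tool?
    if item.author_role not in allowed_roles:
        return "reject: no access"
    # Exception handling: out-of-scope output goes to a human, not the bin.
    if not item.in_scope:
        return "escalate: outside agreed scope"
    # Quality control: low-confidence output gets reviewed, not auto-shipped.
    if item.confidence < 0.8:
        return "review: human sign-off required"
    # Policy enforcement: everything else has an explicit route upstream.
    return "accept: share upstream"
```

The point of the sketch is not the thresholds; it is that each mechanism is written down, so the default answer to “is this safe to share?” is no longer a shrug.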

Agents Raise the Stakes

With traditional software, the routing problem is manageable: the software executes instructions, and humans guide the outcome. The introduction of AI agents changes that. Agents make choices, act autonomously, and force organizations to decide in real time how authority flows between humans and machines.

That means new control plane requirements:

  • Dynamic authority: when can the agent act alone, and when is approval required?
  • Escalation paths: how do decisions move from AI to human oversight?
  • Performance checks: how is agent output measured and tuned over time?
  • Exception protocols: what happens when the agent hits a scenario outside its design?

This is the first time most enterprises have had to ask: when does software get to make decisions without us? That’s not a technical specification—it’s an organizational design choice. If left vague, agents either run unchecked, creating operational risk, or sit idle behind endless approvals, wasting potential.
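To show that “when does software get to decide?” can be an explicit design artifact rather than a vague norm, here is a hedged sketch of an authority table for an agent. The action names, dollar limits, and `Decision` labels are all hypothetical, chosen only to make the four requirements above visible in code.

```python
from enum import Enum

class Decision(Enum):
    ACT = "act autonomously"
    APPROVE = "require human approval"
    ESCALATE = "escalate to human owner"

# Dynamic authority: per-action limits within which the agent may act alone.
# (Illustrative values, not a recommendation.)
AUTONOMY_LIMITS = {"refund": 100, "reorder": 500}

def authorize(action: str, amount: float) -> Decision:
    limit = AUTONOMY_LIMITS.get(action)
    # Exception protocol: an action outside the agent's design never runs unchecked.
    if limit is None:
        return Decision.ESCALATE
    # Escalation path: above the limit, the decision moves to human oversight.
    if amount > limit:
        return Decision.APPROVE
    return Decision.ACT
```

Performance checks would sit around this table: log every `Decision`, then tune the limits as the agent's track record accumulates. The value is that the boundary is a reviewable artifact, not a feeling.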

Leaders as Signal Generators

In network systems, protocols broadcast routing priorities. In organizations, leaders do the same. Their behavior is the control plane’s strongest signal.

When executives actively use AI in their own work, they route legitimacy and resources toward it. When they only talk about AI but still ask for manual PowerPoints, they broadcast the opposite message just as clearly.

That’s why adoption patterns track leadership usage so closely. Staff don’t calibrate against vision decks; they calibrate against what leaders actually do. A CFO who integrates AI-generated analysis into quarterly planning is reconfiguring the control plane in real time. One who delegates it to a task force is signaling that AI is a side project.

Recurring Misconfigurations

Across industries, the same failures appear again and again:

  • Pilot isolation: AI works in experiments but never scales.
  • Policy disconnects: mandates at the top, but daily behaviors route traffic elsewhere.
  • Capacity bottlenecks: AI is jammed into legacy workflows, overloading people and processes.

I’d add another: governance theater. Companies produce glossy AI policies that look robust but don’t touch how real work gets done. On paper the control plane looks airtight; in practice, packets still get dropped.

Workforce Rewiring

AI adoption is already reshaping how work is routed through organizations.

  • Entry-level roles: much of the repetitive work once assigned to junior staff can now be handled by agents, forcing companies to rethink which skills are relevant and how they should be developed.
  • Middle managers: they are no longer just coordinators—they become traffic directors, deciding when to send work to humans, to AI, or to both. (I’ll leave for a separate discussion what happens when AI agents become middle managers.)
  • Executives: leaders’ usage patterns configure the system for everyone else, whether they intend to or not.

This creates an uncomfortable truth: the old apprenticeship model—where new hires learned by grinding through spreadsheets and reports—is disappearing. If we don’t reconfigure this pathway, we’ll end up with AI agents producing work that no human in the next generation knows how to evaluate. That’s a quality control problem no policy can fix after the fact.

Designing Culture as Infrastructure

Control planes don’t configure themselves. Enterprises that succeed treat culture, process, and org design with the same rigor as system architecture. They:

  • Define explicit protocols for human–AI interaction.
  • Set clear authority boundaries between human and machine.
  • Build feedback loops to measure adoption and outcomes.
  • Track leadership behavior as closely as technical KPIs.
  • Align workforce capacity to handle AI-augmented workflows.

Most leaders assume culture will “adapt” around new tools. It rarely does. Left unmanaged, culture routes traffic by inertia—and inertia always favors the status quo. That’s why explicit design matters.

The Competitive Edge

Every AI rollout runs on two planes: the visible capability plane and the less visible control plane. Failures are almost never about weak models—they’re about weak routing.

Organizations that design their control planes with intent—protocols, signals, and feedback—will scale further and faster. The rest will keep reliving the same story: flashy pilots, failed production.
