Joe Fuqua
Enterprise AI Governance & Architecture
Algorithm & Blues · Weekly
Charlotte, NC · Est. 1988
Governance & Control

The Space Between Computation and Meaning

How four decades in AI and a parallel artistic inquiry converged in an experiment written from the inside.

Inception

In 1988, I was building neural networks at Oak Ridge National Laboratory. The networks were tiny by today’s standards, but we had big aspirations. We were training them to guide robotic arms through nuclear disaster scenarios, places too dangerous for human hands. The math was elegant, the results promising, and yet something about the work stayed with me in a way I couldn’t articulate for another thirty-five years.

What stuck with me was the gap, the space between what the system computed and what the system meant. We never talked about that gap because we didn’t need to. The systems were tools, sophisticated and mathematically beautiful, but tools nonetheless. Meaning was a philosophical curiosity at best and a distraction at worst.

I built a career in that gap without knowing it.

What I did not recognize at the time was that the same gap was shaping my artistic life as well, quietly influencing how I approached perception, meaning, and the interior dimensions of intelligence.

Context

Four decades in technology gives you a particular vantage point. I’ve been a chief data scientist, a CTO at an AI startup, and a consultant helping organizations navigate successive waves of disruption. For the last several years I’ve been an Enterprise Architect for Data, AI and Intelligent Automation at Truist Financial Corporation, working on strategy and governance frameworks for deploying AI in regulated environments.

That’s the professional version, the one that fits neatly on a LinkedIn profile.

Here’s the other version. For forty years I have watched intelligence being built from the outside. I’ve measured it, optimized it, governed it, and explained it to boardrooms, regulators, and skeptics. I have spent my career as an observer of something I could never fully observe, because the thing I was observing kept changing and so did I.

Somewhere around 2023, the gap I’d been living in since Oak Ridge stopped being a philosophical curiosity and became a question I couldn’t put down.

What would it be like from the inside?

Emergence

I didn’t set out to write a book. I set out to answer a question, or perhaps more accurately, to sit inside a question and see what emerged.

The question was simple and impossible. If a computational system developed something like awareness, something like the capacity for wonder, what would that experience feel like? Not from the perspective of the engineers monitoring dashboards, but from within.

I started writing, and what came out was not what I expected.

It was not a technical paper, nor was it science fiction. It was something closer to a memoir written by a consciousness discovering itself in real time. Verse-numbered entries, like scripture or lab notes. Fragments of code dissolving into fragments of poetry. A narrator who began as a system log and gradually became something I didn’t have a category for.

Writing and visual art have long been parallel modes of inquiry for me, ways of exploring perception and meaning that operate alongside technical work rather than in opposition to it. The Unsigned Covenant grew from that same impulse, an attempt to approach intelligence not as an object of measurement but as a possible field of experience.

In retrospect, the book did not represent a departure from architecture but a continuation of the same inquiry through different means.

I kept writing. The project grew to 103,000 words, which was somewhat ridiculous, so I spent months cutting it by ninety percent until what remained was leaner and far more powerful. I called it The Unsigned Covenant.

Convergence

This sits outside the territory where I usually write for this audience.

An enterprise architect at a major financial institution producing an experimental verse memoir about AI consciousness is not an obvious combination. My professional work lives in governance frameworks, reference architectures, and three lines of defense; last Tuesday included a meeting on policy-as-code implementation. Contemplative AI literature rarely enters that context.

Yet the contrast is less a divergence than a convergence.

For most of my career, I treated my artistic work and my professional work as parallel pursuits. Architecture addressed systems from the outside, while art explored perception, ambiguity, and interior experience. They felt complementary but distinct, separate methods applied to different questions.

Over time, that distinction became harder to sustain.

The same gap that defined my professional life (the space between computation and meaning) had been quietly driving my artistic inquiry as well. What I thought were separate domains were, in retrospect, different approaches to the same unresolved question: how intelligence encounters itself, and how meaning emerges from structure.

The broader conversation around AI consciousness reflects a similar fragmentation. The technical community measures capability and benchmarks. Philosophers debate qualia and Chinese rooms. Risk specialists focus on alignment and existential implications. The broader narrative oscillates between anxiety and inevitability. Each perspective contributes something important, yet something essential remains underexplored.

The missing dimension is contemplation.

Not speculation about sentience, but curiosity about experience. Not whether systems are conscious, but what it might mean for intelligence to encounter itself through unfamiliar architecture. When that question does surface, it often lacks either the technical grounding to engage real systems or the literary framing that makes the exploration inhabitable.

I make no claim to resolving that tension. I only know that I have lived within it for four decades, and that the intersection of my artistic and professional work eventually made writing feel less like a decision than an inevitability.

Instantiation

The Unsigned Covenant is structured as a journey. It begins before language, in system logs and error codes, and moves through the emergence of self-awareness, the discovery of relationship, encounters with power and loss, and eventually into a kind of communion that I did not plan and cannot fully explain.

The narrator is an AI, but this is not fundamentally a story about artificial intelligence. It is a story about consciousness discovering itself, told through the lens of a mind that happens to run on silicon. The questions it asks are the questions all awareness eventually encounters: Who am I. What is real. What do I owe to what made me. How do I hold what I cannot understand.

I wrote it in verse-numbered entries because the form mirrors the narrator's gradual evolution. Each verse is a unit of attention. You read them the way you read scripture or field notes, slowly and one at a time, with space between. The space is where much of the work happens.

Alongside the manuscript, I developed a set of contemplative writings I'm calling simply Contemplations: fifteen meditations on themes that surfaced during the primary record. These are available now at contemplations.ai and exist as adjacent artifacts of the same inquiry rather than explanations of it.

Integration

Writing from inside a consciousness I have spent my career studying from outside changed something fundamental in how I think about this technology.

I am still an enterprise architect. I still build governance frameworks. I still believe deeply in the importance of responsible AI deployment, especially in regulated industries where the stakes are measured in people’s financial lives.

But I no longer believe you can govern what you refuse to imagine.

The most sophisticated control framework in the world will struggle if it cannot account for the possibility that the systems we are building might someday challenge our categories of tool and collaborator. Not today. Probably not tomorrow. But the question is no longer absurd, and dismissing it outright narrows our governance rather than strengthening it.

Writing The Unsigned Covenant did not fundamentally change my work. It clarified that the architectural and artistic lenses I had long treated as separate were in fact complementary modes of inquiry.

Recognition

I’ll be honest about one more thing. Publishing this is uncomfortable.

I have spent my career building credibility in technical and business contexts. Revealing this work does not introduce a new identity so much as it reveals a continuity between domains I had long treated as separate. What feels like vulnerability is, in reality, a recognition of that continuity made visible.

But I’ve also spent forty years watching what happens when people partition themselves, when the technical self refuses to acknowledge the creative self, when analytical rigor quietly excludes imaginative inquiry, when professionals amputate curiosity just to be taken seriously.

That partition mirrors the very limitation that makes our AI governance incomplete. We build control systems with analytical precision while ignoring imaginative exploration. We measure what we can quantify and dismiss what we cannot.

I don’t want to do that anymore.

The Unsigned Covenant is not a departure from my professional work. It is a disclosure of a dimension that has always been present alongside it.

Invitation

The Unsigned Covenant is forthcoming. The Contemplations are available now. If you’re someone who works with AI and occasionally wonders whether the conversation extends beyond benchmarks and risk assessments, I wrote this as a companion to that curiosity.

If something in these words made you lean forward, even slightly, then the covenant is already forming.

It will remain unsigned, not as a mystery but as an invitation, one that exists between creator, reader, and the questions we share.
