The Machine That Learned
Cyclic Disruption: A Series on Technology and Human Nature
How It Started
People often ask how I got into AI. They expect an origin story with a clear title or breakthrough project. But mine didn’t start that way. It began with a moment — a feeling I couldn’t name at the time, but one that has followed me ever since.
In 1987, I walked into Oak Ridge National Laboratory as an 18-year-old math and physics co-op student from the University of Kentucky. The lab was built from concrete and purpose: cooling fans humming constantly, machines processing calculations I could only guess at. Supercomputers filled the corners, each one larger than a refrigerator. At the center of our workspace stood a giant robotic arm with articulated joints, powered down but carrying the tension of something waiting for action.
The moment I saw that arm, something shifted. Not excitement, exactly. More like the recognition that I was standing at the edge of something significant.
I had no idea what I'd be working on when I arrived. Nuclear fusion? Missile defense? (It was the 1980s, after all.) When I met with my project leader, the assignment he gave me seemed straightforward: help develop algorithmic solutions to the multiple traveling salesman problem. On paper, this was optimization mathematics; in practice, it meant developing navigation algorithms for robots that might one day enter places too dangerous for humans: nuclear accident sites, chemical spills, disaster zones where human exposure could be fatal.
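For readers who want the problem made concrete: in the multiple traveling salesman problem, several agents must together visit a set of sites while keeping total travel short. Below is a minimal Python sketch of a greedy baseline; the coordinates, robot count, and heuristic are illustrative assumptions, not the lab's actual formulation.

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_mtsp(depot, cities, num_robots):
    """Assign each city to the robot whose route currently ends nearest to it.

    Returns one route per robot, each starting at the shared depot.
    A crude baseline: real solvers balance tour lengths globally.
    """
    routes = [[depot] for _ in range(num_robots)]
    for city in cities:
        # Extend whichever route currently ends closest to this city.
        best = min(routes, key=lambda r: dist(r[-1], city))
        best.append(city)
    return routes

def total_length(routes):
    """Sum of leg distances across all routes (return trip omitted)."""
    return sum(dist(a, b) for r in routes for a, b in zip(r, r[1:]))

if __name__ == "__main__":
    depot = (0.0, 0.0)
    cities = [(2, 1), (5, 4), (1, 6), (7, 2), (3, 3)]
    routes = greedy_mtsp(depot, cities, num_robots=2)
    print(routes, round(total_length(routes), 2))
```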
We were comparing two approaches to artificial intelligence. One was proven: expert systems, which operated through logical, rule-based decision trees, predictable chains of if-then statements that mimicked human reasoning. The other was new and opaque: neural networks, messy and more magic than science, learning by adjusting weighted connections between nodes and modifying their responses based on data in ways that weren't always transparent, even to the team working on them.
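The difference is easier to see in code than in prose. Here is a toy contrast in modern Python, not a reconstruction of anything we ran at Oak Ridge: the sensor names, thresholds, and training data are invented for illustration. The expert system's logic is legible line by line; the perceptron's rule exists only as learned numbers.

```python
def expert_system(temp, radiation):
    """Hand-written if-then rules: every decision path is explicit."""
    if radiation > 50:
        return "unsafe"
    if temp > 90 and radiation > 20:
        return "unsafe"
    return "safe"

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights from labeled (temp, radiation, label) examples.

    The rule it ends up encoding lives in the weights, not in code a
    human wrote, which is why its behavior can surprise its authors.
    """
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for temp, radiation, label in samples:  # label: 1 = unsafe
            pred = 1 if w[0] * temp + w[1] * radiation + b > 0 else 0
            err = label - pred
            w[0] += lr * err * temp
            w[1] += lr * err * radiation
            b += lr * err
    return w, b

if __name__ == "__main__":
    # Invented labeled readings: (temp, radiation, 1 = unsafe / 0 = safe).
    samples = [(70, 10, 0), (95, 30, 1), (60, 60, 1), (80, 5, 0)]
    w, b = train_perceptron(samples)
    print("learned weights:", w, "bias:", b)
```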
I was assigned to the neural network side. I spent hours programming and adjusting the algorithms, watching green text scroll across phosphor screens. Processing took entire afternoons. Then, one day, something happened that I hadn't expected. The system produced a unique approach to solving the traveling salesman problem, one that emerged from the network itself rather than being hand-coded. It had found patterns in the data and adapted its approach accordingly.
Dr. Glover, the senior researcher, watched the results appear. He had seen the transition from room-sized calculators to the systems we were using. His reaction surprised me.
“If this ever scales,” he said, “we won’t know how to put it back in the box.”
I laughed. The idea of containing progress seemed absurd. But even as I dismissed his concern, I felt that same shift I’d experienced walking into the lab—a recognition that something fundamental was changing, and I wasn’t certain what it meant.
The Pattern Emerges
Thirty-five years later, I felt it again. This time I was watching ChatGPT generate coherent responses to complex questions in real time. I remember staring at the screen, not because the words were flawless, but because they carried intent. The machine wasn’t just calculating — it was conversing.
The technology was orders of magnitude more sophisticated than our 1980s neural networks, but my reaction was identical — that same mixture of fascination, surprise, and uncertainty.
And I wasn’t alone. I watched colleagues cycle through the same feelings I had seen and experienced decades earlier: wonder at what it could do, then unease at what it might mean. Students adopted it overnight. Executives debated bans. Regulators rushed to issue guidance. The rhythm was familiar. Once again, what unsettled me most wasn’t the machine itself, but how quickly the human pattern resurfaced around it.
I was struck by the realization that this wasn’t a new experience at all. It was history repeating itself, only at a vastly larger scale. The machines had changed. Our response had not.
From neural nets in the '80s to cloud computing in the 2010s to generative AI today, the pattern has been the same: capability appears, unease follows, then adoption comes once economics or culture demands it.
And here’s the deeper truth: we consistently treat the things we build as if they are alien arrivals. We create them, then recoil from them. We act as though disruption comes from the outside, when in fact it comes from within. The technology changes. The disruptive element is our reaction.
A Familiar Reflex
History makes this projection clear.
Socrates, as recorded by Plato, worried that writing would weaken human memory: “This discovery of yours will create forgetfulness in the learners’ souls.” Medieval scribes feared that printing presses would cheapen knowledge. Textile workers in early industrial England smashed mechanical looms, not because they misunderstood them, but because they understood too well what automation meant for their skills.
It seems that each generation experiences its disruption as an invasion. What repeats isn’t the novelty of the tool, but the familiar human reflex to see our own creations as threats.
That reflex makes sense. For most of human history, novelty carried genuine risk. The same neural circuitry that helped our ancestors hesitate at the edge of a dark cave now fires when we face unfamiliar algorithms. It buys us time to assess danger.
But when the danger is imagined and the opportunity real, hesitation becomes self-sabotage. What was once protection can become paralysis. For leaders, the challenge is to translate that instinct into perspective — to create space for caution without allowing it to stall progress.
Why This Series
Four decades of working inside adoption cycles has shown me this pattern from multiple angles: as a student building early neural nets, as a consultant guiding organizations through the internet and cloud, and as an executive weighing risk in enterprise analytics and AI.
What stands out isn’t just the pause itself, but the way we externalize it. We build systems, then act as though they’re intruders. We forget they are our reflection.
This series, Cyclic Disruption, explores those moments. Each essay connects a story from my career to the dilemmas of today’s AI era. Not to predict the future — but to recognize the recurring pattern that shapes it.
Because the future isn’t decided by the tools we create. It’s decided by whether we recognize ourselves in them.
The Next Encounter
The next time you feel that familiar flutter — excitement laced with unease at a new technology — remember this: it’s not the tool you’re reacting to. It’s yourself.
The question isn’t whether disruption will come from outside. The question is whether we can own the disruption we’ve already set in motion.
Every generation invents the tools that shape it. The real test is whether we recognize ourselves in what we've built.
Next in this series: “The Report That Wasn’t There” — What happens when sophisticated technology meets unprepared culture.
About this series: Cyclic Disruption explores patterns in how humans adapt to transformative technology, drawn from four decades of experience in AI development, enterprise consulting, and leadership. Each essay examines a moment when capability meets hesitation — and what we can learn when we stop treating our own creations as strangers.