... and What Could Be
I wanted to write a short response to What Could Have Been, a well-written if melancholy piece on the wave of AI disinvestment happening right now. But this story is as old as time.
A bubble's gonna bubble.
We learned the hard way that Cisco and Sun weren’t going to take over the world, even if the internet was mind-blowing at the time. We’ll learn the same about AI data-center spending. The pattern is older than railroads, industrialization, or the assembly line: bubbles reflect the human psyche, the crowd chasing the new shiny, each wave promising a new age of productivity that arrives, just differently than imagined. In the end, the innovation gets copied and commodified. It rhymes, but it’s never the same song.
When the dust settles, those doing the real work, solving problems for their fellow humans rather than selling a dream, will still find plenty to do. Disinvestment is inevitable. Markets stay irrational longer than the sane can stay solvent, but they eventually turn. AI bubbles are no exception. At some point the “tottering but useful” needs attention again.
But is AGI different? Probably not. Most likely, AI will need us and we’ll need AI, forming centaurs. As long as we align things properly, humans will remain in the loop, even with AGI. LLMs have thrived because the chat interface is a wonderfully simple centaur model for human-AI interaction. We’re still figuring out what works. When we do, things will shift, but real GDP growth will likely still hover near 2%, as semiconductors, software, and trained models become commodities.
The tools will change, but the problems will still be there. Just look for the helpers:

Whenever someone asks if I’m worried about the job market, I point out the obvious: have you used any software lately? Most of it is broken. Unless it’s TeX, software is never finished. It always needs help. An LLM won’t fix that on its own. But it does give us new ways to play with problems, and more ways to help humans along the way.