There is a comforting argument circulating right now, articulated clearly in a recent a16z piece:
AI feels chaotic, overhyped, and existential — but so did the PC, e-commerce, and streaming. Software isn't dead. Timelines are longer than people think. In the end, we will have more software, more complexity, more companies, and more work than before.
Historically, that argument has been right.
This time, I do not think it is.
Not because AI is faster. Not because adoption is viral. And not because the technology is impressive.
But because AI is the first widely deployed technology that does not just change how work is done — it changes who does the thinking inside the work.
And that breaks the analogy.
The mistake we keep making
Every past transition gets framed as a shift in tools or distribution.
PCs changed interfaces.
The internet changed access.
Cloud changed infrastructure.
E-commerce changed consumer behavior.
In all of those cases, humans still planned, decided, coordinated, and executed. Software helped, accelerated, and scaled — but it didn't replace the cognitive core of organizations.
AI does.
Calling this "software moving up the stack" sounds reasonable, but it understates the change. AI isn't moving up the stack.
It is entering the org chart.
Why this isn't just automation
Automation replaces tasks.
AI replaces roles.
That distinction matters more than anything else.
Once you introduce agents that can reason, plan, execute, coordinate, and transact — using tools, APIs, workflows, and even wallets — you are no longer talking about productivity software. You are talking about a parallel cognitive workforce.
This isn't theoretical.
Take an agent-based system like OpenClaw.
You do not "use" it the way you use software.
You give it an objective.
"Launch a landing page for this product, validate demand, and start acquiring users."
Behind the scenes, multiple agents spin up:
They research the market.
They analyze competitors.
They draft positioning.
They generate and deploy a site.
They set up analytics and SEO.
They create initial marketing assets.
They monitor results and suggest changes.
They do not wait for step-by-step instructions. They make decisions, coordinate with each other, and adapt when something fails.
A human might review, redirect, or approve — but they are no longer doing the work. They are steering a system that does.
That is not automation. That is delegation.
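To make the delegation loop concrete, here is a minimal Python sketch. The `Agent` and `Orchestrator` names, the retry logic, and the stub capabilities are illustrative assumptions of my own, not OpenClaw's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """A worker that owns one capability and reports success or failure."""
    name: str
    run: Callable[[str], str]  # hypothetical interface: objective in, result out

@dataclass
class Orchestrator:
    """Takes one objective, fans it out to agents, and adapts on failure."""
    agents: list[Agent]
    log: list[str] = field(default_factory=list)

    def pursue(self, objective: str, max_retries: int = 2) -> dict[str, str]:
        results: dict[str, str] = {}
        for agent in self.agents:
            for attempt in range(1 + max_retries):
                try:
                    results[agent.name] = agent.run(objective)
                    self.log.append(f"{agent.name}: ok")
                    break
                except RuntimeError:
                    self.log.append(f"{agent.name}: retry {attempt + 1}")
            else:
                # All retries exhausted: hand the decision back to a human.
                results[agent.name] = "escalated to human"
        return results

# Stub capabilities standing in for real research/deploy/marketing agents.
orchestrator = Orchestrator(agents=[
    Agent("research", lambda obj: f"market summary for: {obj}"),
    Agent("deploy",   lambda obj: f"landing page live for: {obj}"),
])
print(orchestrator.pursue("launch and validate the product"))
```

The point of the sketch is the shape, not the code: the human supplies one objective and an escalation path; everything between those two is the system's problem.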
This changes what "software" even is
Another place the historical analogy quietly breaks is interface.
For decades, software meant dashboards, menus, buttons, forms, and workflows designed for humans.
That model assumes humans are the operators.
Agentic systems flip that assumption.
The emerging interface isn't a UI at all — it is intent. You express what you want in language, and the system figures out how.
Behind the scenes:
- skills execute
- sub-agents coordinate
- tools are called
- infrastructure is provisioned
- decisions are made
The interface collapses. The system becomes invisible.
Software stops being something you use and becomes something you delegate to.
That is a control-model change, not a UX trend.
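As a toy illustration of intent as the interface, the sketch below routes a natural-language request straight to tool calls. The keyword matching stands in for a real planner model, and every name in it (`TOOLBOX`, `PLAYBOOK`, `fulfill`) is hypothetical:

```python
# Stub tools standing in for real provisioning, analytics, and content skills.
TOOLBOX = {
    "provision_site":  lambda: "site provisioned",
    "setup_analytics": lambda: "analytics configured",
    "draft_copy":      lambda: "copy drafted",
}

# Intent trigger -> ordered tool names. A real system would plan this with a model.
PLAYBOOK = {
    "landing page": ["provision_site", "draft_copy", "setup_analytics"],
}

def fulfill(intent: str) -> list[str]:
    """Turn a natural-language request into tool calls; no menus, no forms."""
    for trigger, plan in PLAYBOOK.items():
        if trigger in intent.lower():
            return [TOOLBOX[tool]() for tool in plan]
    return ["clarification requested"]  # no matching skill: ask the human

print(fulfill("Spin up a landing page and tell me how it performs"))
# → ['site provisioned', 'copy drafted', 'analytics configured']
```

Notice what is missing: there is no screen for the user to operate. The request is the interface; the plan and the tool calls stay invisible.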
"More software" does not mean "more jobs" anymore
This is where the a16z argument leans hardest on history — and where history may stop being a reliable guide.
In past waves, productivity gains eventually led to more hiring. Software made people more efficient, demand expanded, and organizations grew.
AI breaks that assumption.
Why? Because the bottleneck shifts.
Most organizations today do not suffer from lack of tools. They suffer from:
- coordination overhead
- mediocre execution
- slow decision cycles
- managerial bloat
- incentive misalignment
AI removes those constraints.
One strong human, supervising a fleet of agents, can now do the work that previously required entire teams. Not because they work harder — but because coordination collapses.
Hiring more people stops being leverage. It becomes complexity.
The best AI-first companies do not scale by headcount. They scale by capabilities.
Headcount stays flat. Output explodes.
Media wasn't just transformed — it was broken
The original article treats media as another example of long, healthy creative destruction: some companies die, others adapt, output increases.
That is only half the story.
Yes, there is more content than ever.
No, the economics didn't survive for most players.
What actually happened was a barbell:
Platforms and stars won big.
Niche creators survived via direct relationships.
The middle collapsed.
Distribution became free. Creation became cheap. Attention fragmented. Revenue followed power laws.
AI is about to apply that same pressure to knowledge work.
Writing, design, research, analysis, marketing, consulting — all face a world where intelligence and output are abundant. The question stops being "can this be done?" and becomes "why should a human do this at all?"
Why adoption feels different this time
Every previous shift required behavior change: learning new interfaces, installing infrastructure, retraining people, changing habits.
AI requires almost none of that.
You talk to it.
The interface is language — the most natural human behavior there is. The time-to-value is minutes. The cost of experimentation is close to zero.
You can build, test, research, and iterate in the same place you think.
We have never seen a general-purpose technology with this combination of power, simplicity, and speed. That is why adoption feels explosive — and why fear and backlash feel emotional rather than rational.
What actually changes inside organizations
The most important impact of AI isn't technical. It is organizational.
AI-first companies look fundamentally different:
- smaller teams
- fewer layers
- faster iteration
- higher leverage per person
Humans do not disappear, but their roles change. Juniors shrink. Middle management shrinks. "Glue roles" shrink.
What is left are people who:
- set direction
- apply judgment
- design systems
- take responsibility for outcomes
This is easy to do if you start AI-first.
It is brutally hard to retrofit into existing organizations.
That tension explains a lot of the resistance we are seeing.
So what remains scarce?
If intelligence is abundant, what still matters?
Judgment.
Taste.
Context.
Ethics.
Imagination.
Storytelling.
Long-term thinking.
Human trust.
AI generates options. Humans decide which ones matter.
The winners in this transition will not be the people who "use AI." They will be the people who design systems where AI operates — and are accountable for what those systems do.
The real disagreement
The a16z argument says: Relax. This looks scary, but it is just another long transition. We have seen this before.
My take is different.
Yes, transitions are long.
Yes, legacy systems persist.
Yes, software isn't dead.
But no — this is not just another chapter in the same book.
This is the first time we have introduced a non-human, general-purpose, decision-making layer into the economy.
That does not end the story.
But it absolutely changes the plot.