Three Speeds of AI Adoption
And three rooms
Fire people, stock goes up. Beat expectations, stock goes down.
I started writing about this last week. But with the Iran war going on, it all feels almost quaint now.
Predictions about a doomsday scenario where AI agents collapse the job market by 2028. Headlines about Dorsey’s Block cutting more than 4,000 roles - over 40% of its workforce. Gone, just like that. The excuse, as always, was AI. Block’s stock price shot up. And then the irony - Nvidia beat expectations, revenue up ~70%, guidance ahead of even the most bullish estimates. The stock dropped 5.5%. $260 billion in market value erased overnight.
Fire people, stock goes up. Beat expectations, stock goes down. That’s the weird logic of markets right now. And I think it mirrors the weird logic of the AI debate. The narrative has decoupled from the evidence. What you believe matters more than what you can show.
Let me walk you through three rooms.
Three Rooms
Room one. Block’s numbers. An internal AI agent called Goose. One engineer says 90% of his code is now written by it. Non-technical teams writing SQL queries and closing tickets. Revenue per employee doubled.
Room two. A classroom. Students from an organization that will exist in fifty years regardless of what happens in AI. What were they learning? How to write a prompt. How to get an image out of a model. In 2026. They weren’t behind because they were slow. They were behind because there’s a chasm between what frontier labs ship and what most organizations can absorb.
Room three. Trading professionals at a talk I gave. They’d read the doom scenarios. They wanted to know - should I be worried about my career? They weren’t panicking. But they weren’t dismissing it either. Sitting in the uncertainty, looking for clarity.
I tried to answer, but my answer felt lacking. This article is my attempt to do better.
Why do I think a piece like Citrini and Shah’s “The 2028 Global Intelligence Crisis” is science fiction designed to go viral? Because it rests on three speeds that may not be moving in lockstep.
The Three Speeds
I think all of these pieces assume uniformity in scaling and adoption. But I think there are at least three speeds that matter.
METR - Model Evaluation & Threat Research - is a nonprofit that evaluates AI capabilities. Their time horizon chart - arguably the most cited graphic in AI right now - has been called “the most misunderstood graph in AI.” It shows the length of tasks AI can complete doubling every few months. Sequoia used it to declare “2026: This is AGI.”
What the evidence actually says about each speed:
Speed of Capability. What AI can actually do. Far fewer people saw METR’s own limitations page - more than ten disclaimers about what their research is not. The researcher writes plainly that they have “no idea whether Claude’s ‘true’ time horizon is 3.5h or 6.5h.” Their code quality research: AI-generated code passed 38% of tests, but “none of them are mergeable as-is.” But you don’t need METR to tell you this. When was the last time you caught your LLM spouting nonsense? Less common than in 2023, but definitely not zero.
Speed of Adoption. What organizations actually change. You might remember the discredited MIT study claiming 95% of AI pilots fail - the methodology didn’t hold up. So I went looking at what has happened since. McKinsey’s State of AI survey from the end of 2025: 88% of companies say they use AI, but only a third have begun to scale it. A survey of 120,000+ enterprise respondents from March 2025 to January 2026 reported that nearly two-thirds have no formalized AI initiative at all. But you don’t need McKinsey to tell you this. Look around your own workplace. How many AI pilots have started? How many are in production?
Speed of Belief. What people think AI can do. The chart made it into every investment deck. The caveats didn’t. Same with Citrini’s doom scenario - it spooked actual markets before anyone could verify the underlying technical assumptions. Same with Block - the headline landed, and every manager with a budget started rethinking next year’s headcount, because their boss could decide to be the next Dorsey. It doesn’t matter if their company has nothing remotely comparable to Goose. Charts travel faster than FAQ pages full of caveats. And I am sure you know how dangerous mistaken beliefs can be in a large organization. When was the last time you saw a senior person say something totally wrong and nobody correct them?
The Underlying Issue
The doom scenario by Citrini and Shah needs all three speeds to converge. Capability, adoption, and belief in lockstep.
But the problem is, belief doesn’t need the other two.
Block just showed that AI can replace headcount. At least in its specific context. But the headline doesn’t come with that caveat. So what happens next? A manager somewhere reads it and quietly decides not to backfill that open role. A team that was supposed to grow just doesn’t. None of that requires AI to actually do the work. It just requires someone with the power to believe it could.
Back to 2008. Not the mechanics - those are different. The underlying issue with beliefs like these is the same.
In 2008, banks sold complex structured products to people who had no ability to understand what they were buying. CDOs priced by models that the sellers themselves barely understood. The belief in the models was enough to move trillions. The mom-and-pop investors were the ones left holding the bag when reality caught up.
I think something similar is happening now. Not with financial products, but with people’s livelihoods. Block fires 4,000 people and the stock jumps. The market cheers. The narrative is: AI made it possible. But how much of that is demonstrated capability, and how much is a belief about capability that hasn’t been stress-tested outside one company’s very specific context - and perhaps isn’t even proven for that company?
It was unconscionable then to sell instruments people couldn’t understand to buyers who couldn’t afford to lose. I think it’s unconscionable now to restructure people’s lives based on a chart that its own creators say they can’t fully trust, and a narrative that outpaces the evidence by miles.
The direction is right. AI will displace some work. But the timeline assumes a uniformity that doesn’t match what I’m seeing in real rooms. And the human cost of getting the timeline wrong - of letting belief run ahead of reality - is not a rounding error. It’s people.
#AI #FutureOfWork #AIAdoption #AIRisk #Leadership