The AI Jobs Apocalypse Is the Oldest Story in the Book — and We Keep Getting It Wrong
Every generation believes its technology is the one that finally ends work. Every generation is wrong.
There’s a piece circulating right now analysing Anthropic’s new research paper on AI and labour markets. It’s slick, it’s data-rich, and it treats the gap between AI’s theoretical capability and its observed real-world usage as a countdown clock to disruption. The implicit message: the jobs are going, it’s just a matter of when.
I disagree. Not because the data is wrong — much of it isn’t — but because the framework it sits inside is one of the most persistently, demonstrably incorrect narratives in the history of economic thought.
Let me make the case.
We Have Been Here Before. Repeatedly.
In 1589, Queen Elizabeth I refused to grant a patent to William Lee for his stocking-frame knitting machine on the grounds that it would put her subjects out of work. In the 1810s, the Luddites smashed textile machinery across the English Midlands for the same reason. In the 1930s, John Maynard Keynes coined the phrase “technological unemployment” to describe the fear gripping a Depression-era world convinced that machines were eliminating work faster than society could create it. In 1938, with unemployment near 20%, MIT President Karl T. Compton still found it necessary to argue publicly that technological progress creates more jobs than it destroys.
The fear has been continuous. The predicted catastrophe has not arrived.
When Ford introduced the Model T, the coach-building and horse-breeding industries — which employed hundreds of thousands — faced genuine, painful obsolescence. What followed was not mass unemployment. It was the automobile industry, the highway system, the trucking sector, the roadside motel, the suburb, and eventually the global logistics economy that underpins virtually every modern supply chain. The horse gave way to the horsepower. The jobs multiplied.
McKinsey’s research into five waves of historical technological disruption — from the first Industrial Revolution forward — concludes plainly: technology adoption causes significant short-term labour displacement, but in the longer run it creates far more jobs than it destroys. The agricultural share of US employment fell from 60% in 1850 to under 5% by 1970. Manufacturing followed a similar curve. Neither destroyed total employment. Both redirected it toward sectors and occupations that couldn’t have been imagined at the outset.
The personal computer alone — a single technology introduced at scale after 1980 — is estimated to have enabled the net creation of roughly 15.8 million jobs in the United States, after accounting for every displacement.
The ATM That Created Bank Tellers
Perhaps the most instructive case study for our current moment is the automated teller machine — precisely because everyone got the prediction wrong, and the actual outcome illuminates something important about how technology and labour interact.
When ATMs were rolled out at scale across the United States from the mid-1970s onward, the conventional wisdom was unequivocal: bank tellers were finished. In 1985, there were around 60,000 ATMs in the US and 485,000 bank tellers. By 2002, there were 352,000 ATMs — and 527,000 bank tellers. As economist James Bessen documented, the number of tellers required to operate an urban branch fell from roughly 21 to 13. But that cost reduction made it profitable to open far more branches. Urban bank branches increased by 43% during the ATM proliferation era. Fewer tellers per branch, but vastly more branches: total teller employment grew.
Crucially, the nature of the job transformed. Cash handling became largely automated. What remained — and became more valuable — was the human relationship work: advising small business customers, selling financial products, building the personal trust that machines couldn’t replicate. The teller didn’t disappear. The teller became a relationship banker.
The pattern is so common it has a name: the Jevons Paradox. Jevons’s original 1865 observation was that more efficient steam engines increased, rather than reduced, Britain’s total coal consumption. The labour-market version: when technology makes a task cheaper or more efficient, demand for the underlying service often expands enough to increase total employment even while each unit requires less labour. This is not an accident. It is the consistent, documented behaviour of market economies in response to productivity-enhancing innovation.
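The teller arithmetic makes the mechanism concrete. Here is a back-of-the-envelope sketch using Bessen’s per-branch staffing figures (21 tellers falling to 13); the branch counts are hypothetical, chosen only to illustrate the break-even condition, not taken from the historical record:

```python
# Back-of-the-envelope illustration of the Jevons-style employment effect:
# automation cuts labour per branch, but cheaper branches mean more branches.

def total_tellers(branches: int, tellers_per_branch: int) -> int:
    """Total employment is staffing per branch times the number of branches."""
    return branches * tellers_per_branch

# Hypothetical starting point: 10,000 branches at Bessen's 21 tellers each.
before = total_tellers(branches=10_000, tellers_per_branch=21)  # 210,000

# After automation, staffing falls from 21 to 13 per branch. Total employment
# grows only if the branch count expands by more than 21/13 - 1, i.e. ~62%.
break_even_growth = 21 / 13 - 1
print(f"break-even branch growth: {break_even_growth:.0%}")  # 62%

# With a 70% expansion (hypothetical), leaner branches still add up to more jobs.
after = total_tellers(branches=17_000, tellers_per_branch=13)  # 221,000
print(after > before)  # True
```

The point of the sketch is the threshold: per-branch labour savings are overwhelmed once the induced expansion of the service clears the break-even line, which is what the national teller totals in the historical data reflect.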
In 1985, there were zero social media managers, zero UX designers, zero data scientists, zero cloud architects, zero prompt engineers. The industries that will absorb displaced workers over the next two decades largely do not exist in their mature form yet. That is the nature of the transition — not a cliff edge, but a landscape that keeps expanding.
The Capability-to-Deployment Gap Is Not a Bug. It’s Reality.
The Anthropic research itself — the data being used to argue that disruption is imminent — actually supports a more cautious reading if you look at it carefully.
Computer and mathematical occupations: 96% theoretical AI capability, 32% observed exposure. Legal: 88% theoretical, 15% observed. Management: 92% theoretical, 25% observed.
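Read as a deployment ratio (observed exposure divided by theoretical capability, using the figures just quoted), even the most AI-saturated field is using roughly a third of what the technology could in principle do:

```python
# Deployment ratio: the share of theoretical AI capability that shows up
# in observed real-world usage, computed from the figures quoted above.

gap_data = {
    "computer & mathematical": (0.96, 0.32),
    "legal": (0.88, 0.15),
    "management": (0.92, 0.25),
}

for occupation, (theoretical, observed) in gap_data.items():
    ratio = observed / theoretical
    print(f"{occupation}: {ratio:.0%} of theoretical capability in observed use")
# computer & mathematical: 33%, legal: 17%, management: 27%
```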
This is not a countdown to convergence. It is evidence of a structural gap, driven by forces that are unlikely to dissolve quickly.
The piece in question attributes this gap to “legal constraints, verification requirements, and slow enterprise adoption.” That framing treats those friction points as temporary obstacles to be overcome, rather than what many of them actually are: load-bearing structural features of how organisations operate.
Consider the fiduciary dimension. Directors of listed companies operate under legal duties of care, skill, and diligence that are not delegable — not to a junior employee, not to an external consultant, and not to an algorithm. Under the Caremark doctrine in US law and equivalent frameworks in the UK Companies Act, board members bear personal liability for failures of governance and oversight. A 2025 study found that only 36% of boards have even implemented a formal AI governance framework, and merely 6% have established AI-related management reporting metrics. This is not foot-dragging; it is an appropriate response to genuine legal exposure that no AI system currently resolves.
The situation in legal services is even more instructive. The American Bar Association issued ethics guidance in 2024 confirming that lawyers must personally verify all AI-generated output and maintain technical competence regardless of which tool was used. There are now over 600 documented AI hallucination cases in US courts, implicating 128 lawyers including partners at major firms. In at least one case, a federal court disqualified an entire firm and referred attorneys to every bar association in every jurisdiction where they were licensed. The duty to use AI responsibly attaches to the attorney personally — it cannot be outsourced to the machine.
This is not technophobia. This is accountability infrastructure that societies have built up over centuries, and that functions precisely because human judgment carries legal weight in ways that AI outputs currently do not, and may not for a long time.
Innovation Always Front-Runs Legislation — Until It Doesn’t
There is a reliable pattern to how technology and governance interact across history. Innovation arrives. Society scrambles to adapt. Legislation eventually catches up — and when it does, good legislation attempts to balance the gains from innovation against the interests of the broader public.
This is exactly what is happening with AI right now. Over 1,000 state-level AI bills were introduced in the United States in 2025 alone. The European Union’s AI Act came into force in 2024. The UK and Canada are developing their own governance frameworks. None of this represents hostility to progress. It represents the normal, functional process by which societies calibrate how powerful new tools are deployed in contexts where the stakes are high.
That regulatory momentum will constrain certain high-leverage, high-value applications of AI in exactly the domains that the Anthropic data shows as theoretically most exposed — legal, finance, corporate management. Not forever, and not absolutely, but meaningfully, and for long enough that “the gap” reads less as a measure of how quickly AI will colonise white-collar work than of how quickly institutions, legal frameworks, and human trust will adapt. Historically, that adaptation happens far more slowly than technology enthusiasts predict.
The Real Risk Is Not Mass Unemployment — It’s Uneven Transition
None of this is to say the disruption is painless or that the concerns are unfounded. They are not. Innovation always disrupts before it creates, and the disruption falls unevenly.
The Anthropic data points to something genuinely important: that the hiring of 22-to-25-year-olds into exposed roles has dropped by roughly 14% since ChatGPT launched, with no equivalent decline for workers over 25. This is worth taking seriously. It suggests AI may be compressing the traditional entry-level apprenticeship pathway — the period when young workers develop skills, relationships, and professional judgment under supervision. That is a real structural concern, distinct from the headline “jobs are being replaced” narrative, and it deserves a different kind of response.
Similarly, McKinsey’s historical research is honest about the fact that even when technology eventually creates more jobs in aggregate, the transition period can span decades and the pain is real and unevenly distributed. During the first Industrial Revolution, real wages in England stagnated for roughly 40 years while productivity soared. The gains arrived — but not immediately, and not for everyone. Policy responses matter enormously in determining how the gains from productivity are shared.
What the Data Actually Shows
The Anthropic research is, in some ways, a sophisticated argument for why the AI disruption is slower and more bounded than the headlines suggest — not faster and wider. The observation that programmers are both the most exposed occupation and the heaviest AI adopters is not evidence of a doom spiral. It is evidence that sophisticated workers are using powerful tools to become more productive. That is the story of every technology wave.
The 30% of workers with zero AI exposure — cooks, mechanics, nurses, builders, caregivers, lifeguards — are not a rounding error. They are a reminder that the economy is not a spreadsheet. The physical, the relational, the improvisational, and the high-accountability domains of human work are genuinely resistant to the kind of automation that AI enables today.
And history keeps telling us the same thing: the jobs that get created on the other side of a technological transition are not the jobs that existed before, adjusted downward. They are categorically new kinds of work, in industries that don’t yet have names, for skills that haven’t yet been formalised.
Prompt engineers didn’t exist five years ago. Neither did LLM safety researchers, AI governance consultants, synthetic data curators, or the teams now being built inside every major institution to manage AI deployment, oversight, and liability. These are not compensatory fictions. They are the leading edge of the expansion.
A Final Thought
The gap between what AI can theoretically do and what it is actually being used for is not primarily a story about human reluctance to change. It is a story about the complexity of deploying powerful tools in contexts where accountability, trust, and legal responsibility matter — which is to say, in most of the contexts that define professional work.
The disruption is real. The transition will be uneven and in some ways painful. The policy response will matter. But the specific fear — that AI is going to systematically eliminate the high-value, knowledge-intensive jobs at the top of the labour market — runs headlong into a wall of fiduciary duty, professional liability, regulatory momentum, and centuries of evidence that suggests productivity gains create work rather than destroy it.
We’ve been here before. The horse and cart gave way to the automobile. The switchboard operator gave way to the receptionist. The bank teller gave way to the relationship banker. The farm labourer gave way to the agronomist, the food scientist, and the logistics coordinator.
What comes next, we don’t yet have words for. But “mass unemployment” is almost certainly not it.
Data sources: McKinsey Global Institute; Bessen (2015), IMF Finance & Development; Information Technology and Innovation Foundation; US Bureau of Labor Statistics; WilmerHale (2026); NACD Board Practices Survey (2025); Corporate Compliance Insights (2026); Anthropic Labour Market Impacts Research

