AI: Not taking jobs yet?
Martha Gimbel et al., "Evaluating the Impact of AI on the Labor Market: Current State of Affairs", The Budget Lab (Yale) 10/1/2025:
Overall, our metrics indicate that the broader labor market has not experienced a discernible disruption since ChatGPT’s release 33 months ago, undercutting fears that AI automation is currently eroding the demand for cognitive labor across the economy.
While this finding may contradict the most alarming headlines, it is not surprising given past precedents. Historically, widespread technological disruption in workplaces tends to occur over decades, rather than months or years. Computers didn’t become commonplace in offices until nearly a decade after their release to the public, and it took even longer for them to transform office workflows. Even if new AI technologies go on to impact the labor market as dramatically, or more so, it is reasonable to expect that widespread effects will take longer than 33 months to materialize.
Most of the figures show (various versions of) smoothed time-functions of occupational dissimilarity indices from different baseline time-points, and/or in different industries — for example this one or this one.
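For readers unfamiliar with the metric, the (Duncan) dissimilarity index between two distributions of employment across occupations is half the sum of the absolute differences in occupational shares; it can be read as the fraction of workers who would need to change occupation to make the two distributions match. Here is a minimal sketch of that calculation — my own illustration with made-up numbers, not the report's code:

```python
def dissimilarity_index(baseline_shares, current_shares):
    """Duncan dissimilarity index: half the sum of absolute
    differences between two distributions of occupation shares.
    Each argument maps occupation -> share of total employment
    (the shares in each dict should sum to 1)."""
    occupations = set(baseline_shares) | set(current_shares)
    return 0.5 * sum(
        abs(baseline_shares.get(o, 0.0) - current_shares.get(o, 0.0))
        for o in occupations
    )

# Hypothetical occupation shares, for illustration only
baseline = {"clerical": 0.30, "technical": 0.50, "managerial": 0.20}
current  = {"clerical": 0.25, "technical": 0.55, "managerial": 0.20}
print(round(dissimilarity_index(baseline, current), 4))  # 0.05
```

In this toy example, 5% of workers would have to switch occupations to restore the baseline mix. The report's figures track smoothed versions of this quantity over time, against various baseline dates.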
I find the report's discussion of its many figures convincing. And the authors provide links to their data and the code used to create it, so you can check it all out for yourself (though I haven't done so).
But my own opinion, for what it's worth, is that the authors' conclusion ("widespread technological disruption in workplaces tends to occur over decades, rather than months or years") is true, but is not the only thing going on.
There's obviously also the widely-discussed fact that the effectiveness and economic viability of "AI" systems, though improving, has been over-hyped.
But I think there's another factor not so widely discussed, namely the tasks the new "AI" systems are actually performing, and the goals and motivations of the administrators responsible for deciding how to deploy these new technologies in the workplace.
The new systems are not just doing what used to be done. They're modeling and predicting the organization's activities — while simultaneously changing them. And rather than simply seeking cheaper ways to accomplish existing tasks, the administrators see an opportunity to add new layers of bureaucratic interaction, and new dimensions of documentation and record-keeping — both because they like these rich data-webs, and because the new "AI" systems need the new data to function.
As a result of my own institution's outsourcing of HR functions to a company that calls itself "the enterprise AI platform", many component organizations have had to hire new staff to manage a large increase in HR-related human labor. This is partly because (some of?) the apps involved, "AI" or not, are badly designed and executed. But (in my experience) it's mostly because the new systems insist on documenting many more facts, events, stages of processing, and numbers than were previously required.
One reason for this is suggested by the OECD's attempt to define "AI", which is much better than the too-common idea that "AI is applying generative LLM technology to whatever":
An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.
Below that definition, they offer these flow charts:
Building systems of that kind, whatever their ultimate utility, requires ingesting explicit data on a scale that was never previously needed. And the immediate effect will often be a significant increase in the workload of the humans in the organization.
Thomas Claburn in The Register ("AI has had zero effect on jobs so far, says Yale study", 10/1/2025) cites other recent studies with similar conclusions, and notes some other reasons for recently-publicized changes, e.g.
In Microsoft's own case, the culls appear to be a way to reduce expenses and mollify investors following its massive capital expenditures on data centers that fuel its AI ambitions.
It's also not clear to me whether recent DOGE reductions in the U.S. government workforce have been large enough to affect the occupational dissimilarity indices. As far as I can tell, "unemployed" isn't one of the occupations in their dataset, though I'm not sure that matters to the dissimilarity index calculations.
Dan Milmo in The Guardian ("US jobs market yet to be seriously disrupted by AI, finds Yale study", 10/1/2025) also notes that
[T]he report flagged some recent data that showed a divergence between the jobs mix for recent graduates and older graduates aged 25-34. It said the data could show AI impacting employment for early career workers but could also reflect a slowing jobs market.

Benjamin Geer said,
October 2, 2025 @ 9:18 am
David Chisnall argues that the main thing AI is being used for is to intimidate the workforce.
Garrett Wollman said,
October 2, 2025 @ 1:45 pm
It's been an interesting transformation over the course of my career in terms of how "AI" is perceived and (attempted to be) defined. Hofstadter, I believe, made the observation in the 1980s that "AI is whatever hasn't been done yet", and at the time it was an accurate observation: as computer-based methods for various kinds of analytical and predictive business processes became both feasible and popular, people stopped considering them "artificial intelligence", they were just computer software ("algorithms" if you had a CS education). "AI" was always in the future; if you wanted to sell your product as something "real", that customers could buy and use today, you had to call it something else.
I'm not sure when you'd date that statistical turn in AI research; it has to be some time around the year 2000, given the whole pile of document-processing technologies that appeared around then (PageRank, statistical machine translation, early work on generative methods for document similarity and citation analysis), but that still didn't make businesses willing to call their AI products "AI" — that was still in the future of Terminator movies and science fiction, not something you wanted to be trying to sell in the here and now.
Something seems to have flipped between 2000 and 2020 such that it became advantageous for marketers to call their machine-learning product (or even their 1980s-style rule-based expert system) "AI", when just a few years before it was considered a disadvantage. Is it just a marketing fad, soon to burn itself out when it turns out that we still haven't built perfect slaves to replace the "creative class"? I don't know; myl has seen more turns of the wheel than I have.
Based on my own observations as well as recent analysis by Ed Zitron I definitely see what is currently going on as a bubble, one likely to pop soon because there simply isn't enough money floating around to fund all of the investment in GPUs that the big players have asserted is going to be necessary — and there's no prospect of profitability unless they really do deliver on the "you can fire all your senior engineers and creatives and replace them with Claude/ChatGPT/Copilot". I think there's good reason to believe that this doesn't actually scale the way Sam Altman and Dario Amodei claim.
ErikF said,
October 3, 2025 @ 12:18 am
I remember when Prolog was supposed to be the silver bullet to all problems and 5GL languages were supposed to be the "AI" that would turn programmers into logic creators. It didn't happen: apparently people have the ability to do things that all the applied logic in the world can't handle.
I see ChatGPT and other AIs as useful tools, but unless you are OK running untested code blind or submitting legal briefs with potentially-bogus citations, several humans are required somewhere in the process, and not just at one end.
Peter Cyrus said,
October 3, 2025 @ 3:20 am
The fact that the reaction has been hysterical – as all our reactions always are – says nothing at all about the promise of the technology (or lack thereof).
AntC said,
October 3, 2025 @ 3:41 am
many component organizations have had to hire new staff to manage a large increase in HR-related human labor. … mostly because the new systems insist on documenting many more facts, events, stages of processing, and numbers than were previously required.
In the wave of corporate takeovers through the '80s/'90s, much institutional knowledge was lost in the new owners' (venture capitalists') search for so-called efficiencies. To restore visibility of operations, the new owners implemented whole-of-corporation 'Enterprise Resource Planning' software. Which indeed required capturing "many more facts, events, stages of processing". These needed _more_ staffing and/or reduced employees' roles to little better than data-entry clerks. And of course ERP projects cost a bucket-load, and froze the organisation's evolution/innovation whilst it tried to swallow a whole different way of working.
bks said,
October 3, 2025 @ 4:59 am
When it becomes clear that replacing vice presidents and venture capitalists with "AI" is easier than replacing janitors and busboys, the bubble will burst.
bks said,
October 3, 2025 @ 5:01 am
… And no, robots will not come to the rescue of our corporate overlords:
https://rodneybrooks.com/why-todays-humanoids-wont-learn-dexterity/
Matthew J. McIrvin said,
October 3, 2025 @ 3:01 pm
Yes, there was an earlier AI hype bubble in the 1980s, when "AI" was a vague term referring to expert systems, the use of certain languages like Lisp and Prolog, and Japan's "fifth-generation computing" initiative. When that one popped, it seemed like "AI" was a term people didn't use commercially for a while except to refer to the enemy character logic in video games (even if it was very simple). Now it's back but refers to machine learning, LLMs and other generative systems.