Macroeconomics of AI?


Daron Acemoglu, "The Simple Macroeconomics of AI":

ABSTRACT: This paper evaluates claims about the large macroeconomic implications of new advances in AI. It starts from a task-based model of AI’s effects, working through automation and task complementarities. It establishes that, so long as AI’s microeconomic effects are driven by cost savings/productivity improvements at the task level, its macroeconomic consequences will be given by a version of Hulten’s theorem: GDP and aggregate productivity gains can be estimated by what fraction of tasks are impacted and average task-level cost savings. Using existing estimates on exposure to AI and productivity improvements at the task level, these macroeconomic effects appear nontrivial but modest—no more than a 0.71% increase in total factor productivity over 10 years. The paper then argues that even these estimates could be exaggerated, because early evidence is from easy-to-learn tasks, whereas some of the future effects will come from hard-to-learn tasks, where there are many context-dependent factors affecting decision-making and no objective outcome measures from which to learn successful performance. Consequently, predicted TFP gains over the next 10 years are even more modest and are predicted to be less than 0.55%. I also explore AI’s wage and inequality effects. I show theoretically that even when AI improves the productivity of low-skill workers in certain tasks (without creating new tasks for them), this may increase rather than reduce inequality. Empirically, I find that AI advances are unlikely to increase inequality as much as previous automation technologies because their impact is more equally distributed across demographic groups, but there is also no evidence that AI will reduce labor income inequality. AI is also predicted to widen the gap between capital and labor income. Finally, some of the new tasks created by AI may have negative social value (such as design of algorithms for online manipulation), and I discuss how to incorporate the macroeconomic effects of new tasks that may have negative social value.
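
For concreteness, the Hulten-style arithmetic that the abstract describes can be written out; the notation and the illustrative numbers below are mine, not the paper's:

\[
\Delta \ln \mathrm{TFP} \;\approx\; \sum_{i \in \mathcal{A}} s_i \, \Delta \ln A_i \;\approx\; s_{\mathcal{A}} \times \overline{\Delta \ln A}
\]

where \(\mathcal{A}\) is the set of AI-impacted tasks, \(s_i\) is task \(i\)'s share of GDP (its cost share), and \(\overline{\Delta \ln A}\) is the average task-level cost saving. Purely as an illustration: if the impacted tasks add up to 5% of GDP and the average saving is about 14%, the implied TFP gain is 0.05 × 0.142 ≈ 0.0071, i.e. roughly the 0.71% ceiling quoted in the abstract.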

A contrary view, or at least some objections, from Tyler Cowen — including this:

[A]s with international trade, a lot of the benefits of AI will come from “new goods.” Since the prices of those new goods previously were infinity (do note the degree of substitutability matters), those gains can be much higher than what we get from incremental productivity improvements. The very popular Character.ai is already one such new good, not to mention I and many others enjoy playing around with LLMs just about every day.
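
Cowen's "price was infinity" point is the standard consumer-surplus argument for new goods. A textbook sketch (the demand curve and the numbers are mine, not his): if \(\bar p\) is the choke price at which demand for the new good falls to zero, then before introduction the effective price is anything above \(\bar p\), and the welfare gain from selling it at price \(p\) is approximately

\[
\Delta W \;\approx\; \int_{p}^{\bar p} q(s) \, ds .
\]

With linear demand \(q(s) = \bar q \,(1 - s/\bar p)\) and an introduction price of \(p = \bar p/2\), the surplus triangle works out to \(\bar q \bar p/8\), which is half of the \(\bar q \bar p/4\) actually spent on the good; a cost saving of a few percent on existing spending has no comparable counterpart. (And the closer the pre-existing substitutes, the lower \(\bar p\) and the smaller the triangle, which is Cowen's substitutability caveat.)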

But there's another thing that neither Acemoglu nor Cowen considers, which is that administrative automation may be different, at least in some settings. I predict that applications of "AI" to administrative functions will decrease productivity more than they increase it — though I'll skip the supporting details to protect the innocent (as well as the guilty…).

[h/t Bob Shackleton]

Update — Commenters have missed my point, which was simple enough: the effects of administrative bureaucracy on productivity are often negative, as documented extensively in the media, in the scholarly literature, and in everyday experience. (See e.g. this 1968 paper…) Now that bureaucrats can use AI technology to do more of what they already do, negative effects on substantive production may well be magnified, even without the likely failures of that technology.

I should have been more explicit, and given examples.

15 Comments

  1. GeorgeW said,

    April 23, 2024 @ 3:43 pm

    How will AI decrease productivity?

  2. KeithB said,

    April 23, 2024 @ 3:47 pm

    Because of all the work to undo its mess.

    I recently wrote an email describing two separate systems that were similar in some respects. Someone summarized it with AI, and it simply munged the whole thing together without respecting the differences. I had to re-write it to untangle everything.

  3. Chester Draws said,

    April 23, 2024 @ 3:51 pm

    AI will decrease the productivity of teachers and anyone else who has to determine a person's actual output, rather than their AI-generated one; that is, anyone employing a person for a role that involves writing.

    We used to just look at their written output and make judgements. Now we cannot just do that, so an extra layer of work is involved.

  4. Rick Rubenstein said,

    April 23, 2024 @ 5:08 pm

    I took the obvious step of asking ChatGPT "What are the large macroeconomic implications of new advances in AI?". It answered with a surprisingly comprehensive, and unsurprisingly wishy-washy, six-point buzzword-filled list describing various ways AI will have positive, negative, and as-yet-unknown impacts.

    While the content was the typical bland LLM output, I was (not for the first time) absolutely blown away by the perfect formatting. There was an opening paragraph, then an indented section with the six numbered points (e.g. "1. Labor Market Dynamics:"), with the topics themselves in bold, and then an unindented concluding summary.

    Prior to powerful LLMs appearing on the scene, I would have naively predicted that consistent complex formatting would have lagged far behind the text generation itself. It still feels a bit magical to me (even though I do have a vague understanding of how it works under the hood).

  5. GeorgeW said,

    April 23, 2024 @ 5:34 pm

    @KeithB and Chester Draws

    For those situations in which it reduces productivity, people and organizations would just refrain from using it. I can't think of a good reason someone, such as a teacher, would choose to use something less productive except maybe for pleasure.

  6. AG said,

    April 23, 2024 @ 10:19 pm

    @georgew –

    in addition to what others have already mentioned, AI will also (and has already begun to) decrease productivity because:

    1) People will believe nonsense it tells them and waste time proceeding based on inaccurate information;

    2) People will need to spend vast amounts of time deciding whether to trust any given text (even more than current source analysis would require);

    3) People will waste vast amounts of time trying to get customer support by talking in circles to useless chatbots (thus wasting even more time, with worse results, than waiting on hold to talk to a human operator, as unlikely as that sounds)

  7. loonquawl said,

    April 24, 2024 @ 1:23 am

    @georgew – the decrease in productivity has already begun. Sales had an unusually comprehensive pitch, and 5000 characters in I realized it was just inane LLM drivel (it took me that long because Sales and LLMs have many similarities in their lingo), and factually wrong. So now it is being rewritten. Sales time wasted, my time wasted. The wrongheadedness of the thing is now hidden much more efficiently, because LLM output by definition consists of words that truly belong together. Gone are the days when deficits in understanding led to stumbly and jarring output that got one's critical attention up; now everything patters along smoothly and cloyingly.

  8. Sven said,

    April 24, 2024 @ 3:51 am

    My (somewhat educated) guess on how AI will lower productivity: until recently, when encountering a reasonably well-written and well-formatted text, you could safely assume that someone (a human) had spent time on it. That means it was important enough for someone to spend the time, or the money to have someone else spend it.
    Now you cannot assume that anymore.

    And I definitely predict that administrations will require more reports in the future, knowing that they will be cheaper to produce.

  9. Richard Hershberger said,

    April 24, 2024 @ 5:28 am

    "For those situations in which it reduces productivity, people and organizations would just refrain from using it."

    You aren't wrong, but that "would" is really a "should", and it is doing a lot of heavy lifting here. People and organizations should do or refrain from doing lots of things. Yet here we are.

  10. GeorgeW said,

    April 24, 2024 @ 5:35 am

    My point regarding reductions in productivity is that profit-making organizations would not continue to use a tool that decreases productivity – companies do not stay in business by choosing to be less productive.
    Non-profit organizations operate with budgets, and often very limited funds. They would not choose to reduce their productivity.
    Individuals might continue using AI because it provides a recreational benefit.
    Many organizations might (probably would) try it, but would stop on discovering that it is harming the bottom line. Others, after learning from that experience, would refrain from adopting it.

  11. AG said,

    April 24, 2024 @ 7:20 am

    @georgew – I am afraid you are operating from a unique set of preconceptions about how modern profit-making organizations operate.

    It's certainly different from mine. I've observed over and over and over again in recent decades that corporations love investing in extremely unprofitable nonsense if it artificially inflates their market valuation or temporarily excites shareholders, and they LOVE boosting their bottom line by laying off human beings, both of which AI is currently promising to help them do.

  12. KeithB said,

    April 24, 2024 @ 7:37 am

    GeorgeW:
    I am fixing the mess *someone else* made with AI – much like loonquawl said. The people who use AI go blithely on thinking it is the best thing without realizing the mess they made.

  13. GeorgeW said,

    April 24, 2024 @ 9:15 am

    @AG: FWIW, I am now retired after spending a career in business, first as an auditor, then as a corporate financial manager. I can assure you that organizations don't stay in business long by persisting with frivolous or bad decisions about productivity.

  14. Terry Hunt said,

    April 24, 2024 @ 9:21 am

    I am reminded of the science fiction writer Frank Herbert's inclusion, in some of his novels and stories, of an official government Bureau of Sabotage, tasked with slowing down the operations of other arms of government so that they do not become tyrannical in their efficiency.

  15. AG said,

    April 24, 2024 @ 6:04 pm

    @georgew – I think perhaps the companies you used to deal with weren't thinking enough like Amazon and countless other current "success" stories:

    https://finance.yahoo.com/news/15-big-companies-aren-t-143602940.html
    https://www.nasdaq.com/articles/8-famous-companies-that-arent-profitable
    https://247wallst.com/special-report/2022/06/03/largest-public-companies-that-dont-turn-a-profit/
