The future of AI
Elon Musk (a long-time Iain Banks fan) recently told the audience at VivaTech 2024 that AI will (probably) bring us Banksian "space socialism" — which Wikipedia explains this way:
The Culture is a society formed by various humanoid species and artificial intelligences about 9,000 years before the events of novels in the series. Since the majority of its biological population can have almost anything they want without the need to work, there is little need for laws or enforcement, and the culture is described by Banks as space socialism. It features a post-scarcity economy where technology is advanced to such a degree that all production is automated. Its members live mainly in spaceships and other off-planet constructs, because its founders wished to avoid the centralised political and corporate power-structures that planet-based economies foster. Most of the planning and administration is done by Minds, very advanced AIs.
If you have the patience for it, Elon offered an eloquent presentation of his Banksian vision in the cited VivaTech Q&A, starting at 33:27. Here's the question:
hello Elon uh my name is Shan
I'm a student of University of ((…))
uh we found that a
lot of job
uh being replaced
by AI.
uh do you worry about your job
rep- being replaced
by AI?
if not why?
uh if your job
was uh replacing-
replaced by AI what would you do?
And here's the answer:
Elon:
{laughs}
well uh I mean we do get into some existential questions here.
um in- in a benign scenario-
um in a benign scenario
uh we probably none of us will have a job.
um there will be- but in that benign scenario there will be Universal High income
uh not Universal basic income Universal High income there would be no shortage of goods or services.
um and I- I think the benign scenario is the most likely scenario
probably I don't know 80% likely if you ask- in- in my opinion.
um the- the the question will not be um
one of uh lacking goods or services you'll have um
everyone will- will have access to
as much in the way of goods and services as they- as they would like.
um the- the question will really be one of meaning
of how- how- if you-
if the computer can do and the robots can do everything better than you
uh then
uh what- does- does your life have meaning?
that's- that's really the- will be the question in the benign scenario
and in the negative scenario all- all bets are off, we're-
we're in deep trouble
um so
I- I do think there's-
there's perhaps- perhaps still a role for humans in this- in- in that we may give
AI meaning
um so if- if you think of the way that our brain works
we've got the limbic system which is our instincts
um and our feelings
and then we've got the cortex which is uh thinking and planning
um but the cortex is constantly trying to make the limbic system happy
so maybe that's how it'll be with AI which is the
AI is trying to make that cortex happy which is trying to make our limbic system happy
and maybe we are what give
the AI meaning or purpose you know some kind of-
yeah- I- so- but- but- I- I do think that long term in a benign scenario
any- any job that somebody does will be optional.
like if you- if you if you want to do a job as kind of like a hobby
you can do a job
but- but otherwise the- the AI and the robots will
provide any goods and services that you want.
Moderator:
okay so you see
there is a future
where no one will need to work
it will be just ((passions))
Elon:
I think so
uh prob- prob- that is the most likely outcome
um if- if- if people are
interested in reading some science fiction books
that- the most
accurate portrayal of
a future uh with um super intelligent AI is- um
was done by Iain Banks
uh the- the culture books
of Iain Banks
um are- um
are the best- that- that- that's probably the best
envisioning of a future AI.
Musk lays out the standard benign (= space socialism) scenario, and hints at the negative (= extermination of humans?) scenario.
But there's a third option for the future, or anyhow for the next bit of it — namely another "AI Winter" cycle, as Wikipedia explains:
In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.
The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association of Artificial Intelligence"). Roger Schank and Marvin Minsky—two leading AI researchers who experienced the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. They described a chain reaction, similar to a "nuclear winter", that would begin with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research. Three years later the billion-dollar AI industry began to collapse.
Is this plausible? Some pundits think so — a small sample:
Christopher Mims, "The AI Revolution Is Already Losing Steam: The pace of innovation in AI is slowing, its usefulness is limited, and the cost of running it remains exorbitant", WSJ 5/31/2024
Kevin Okemwa, "Is AI all a fad? A new report suggests very few people are using tools like ChatGPT and the hype is being misconstrued for actual public interest", Windows Central 5/29/2024
Mark Sullivan, "Why we may be headed for a generative AI winter: As the generative AI buzz fades, its positive effects seem spotty and anecdotal. Meanwhile, some execs may wonder if all these tools and products are really making things better.", Fast Company 4/25/2024
And — as always, independent of AI hopes and hypes — some people are predicting an economic boom, and others an economic crash. I have near-zero competence to evaluate economic predictions.
With respect to the technical side of (what has come to be called) AI, I see a lot of progress, many near-term opportunities, and also a lot of hype. When it comes to the socio-economic effects, I'm inclined to think that those have been exaggerated, at least for the near term, but I could be wrong.
As for Banksian space socialism, I read Consider Phlebas when it came out in 1987, and I've been an Iain Banks fan ever since. The Culture's universe is attractive and emotionally plausible, though there's a fair amount of problematic physics, and the time scale of the envisioned AI developments might be centuries or millennia rather than decades. In any case, it seems clear that Banks' vision has been motivating Elon Musk since he was a student — though how that turns into support for various would-be authoritarians is puzzling. Maybe that limbic system he mentions?
Serious Caller said,
June 1, 2024 @ 9:15 am
I really like the idea (as seen in David Prokopetz's tumblr, https://prokopetz.tumblr.com/post/749018860064210944) that the Culture is actually a deeply reactionary society, particularly after the Idiran wars. They hate the idea of non-bipedal body plans despite others being shown at almost all points in the series, have viewpoint characters who are anomalously straight and cis compared to the societal average, and center the 'warrior aristocracy' of the Minds involved in interstellar conflict. From that perspective it would make sense how Elon would idolize it.
Gregory Kusnick said,
June 1, 2024 @ 10:42 am
In his recent book Deep Utopia, Nick Bostrom delves into questions of what role humans might play and how they might find meaning in a Banksian AI-dominated future.
It's a bit of a slog; Bostrom gets very deep into the weeds on such issues as what exactly constitutes an "interesting" life. He also pads the book out with a lot of extraneous material — dialogs and fables and such — that doesn't bear directly on his main arguments. But for those interested in such questions it's worth checking out.
David L said,
June 1, 2024 @ 11:37 am
The idea that technology will bring about a modern utopia is an old trope in SF. Arthur C. Clarke's Childhood's End offers a version (if I'm remembering the right book, that is).
When the first successful transatlantic telegraph cable was laid, in the 1860s, either the London or NY Times ran an editorial saying that rapid communication between countries would put an end to war, because the parties would be able to resolve their disagreements toot sweet, thus avoiding fisticuffs. Har de frickin' har.
Barbara Phillips Long said,
June 1, 2024 @ 11:25 pm
My impression of a lot of the utopian science fiction (although I am not familiar with Iain Banks) is that the writers are not informed by the history of utopian communities. I grew up in upstate New York, and read historical fiction and historical fact about the Shakers and the Oneida Community when I was in junior high and high school. I also read about the beginnings of the Latter Day Saints (aka Mormons). Then when I was in college in the 1970s there were a lot of folks who thought communes of various sorts would create an ideal lifestyle.
Humans find sources of conflict and seem to revert to tribalism when it is available. Space socialism or life without work or some other utopia is likely to disintegrate over conflicts started or continued over issues involving property, natural resources, religion, sex, status, or power seeking. As someone who wants to see an end to war, I am discouraged, because mostly I think human history shows that some humans will always be corrupted by power and will seek absolute power, and their actions and avarice will always make sufficient trouble to disrupt utopian efforts. AI can and will be subverted to serve the ends of those who think equality is humiliating rather than equalizing.
Would Elon Musk be satisfied with an AI regime that made him only as rich and powerful as all other humans? Does he see himself no longer keeping score when comparing himself to others?
In the musical Camelot, there is a song called The Seven Deadly Virtues. In it, Mordred sings "It's not the earth the meek inherit, it's the dirt." I don't see AI being able to create enough Earths to satisfy those humans who want to see other humans beneath them in the dirt.
Seth said,
June 2, 2024 @ 3:47 am
It's always a safe bet that a technological advance is never as utopian as its most favorable promoters promise, nor as dystopian as its harshest critics fear. We sure didn't get universal democracy from The Internet, for example. But the world hasn't blown up either (well, so far).
One could actually make a good case that it's indeed "that limbic system" which has pushed Musk rightward. The Culture War (not in the Iain Banks sense, but in the "woke mind virus"/"techbro" sense) very strongly polarizes people into one side or the other, if nothing else for the reason that extremists will be trying to destroy someone. And taking this to the level of dealing with governments makes it even more difficult – there's a great amount of pressure to join one "team", for the benefits and protection that entails. It's really complicated to try to advance humanity!
Mark Liberman said,
June 2, 2024 @ 4:23 am
@Barbara Phillips Long: "Humans find sources of conflict and seem to revert to tribalism when it is available."
Also humanoid aliens, in Iain Banks' novels. You might enjoy reading some of them, but if not, Wikipedia has plot summaries.
The post-scarcity theme is pervasive, but so are themes of competition, aggression, deception, and violence. And the featured characters are often agents of "Special Circumstances", about which Banks wrote:
So post-scarcity James Bonds, more or less.
For more background on Banks' vision, mostly minus the competition and violence, see his "A Few Notes on the Culture".
Richard Hershberger said,
June 2, 2024 @ 6:17 am
AI winter: In the current round of AI they have managed to create a massively expensive and resource-intensive version that can produce grammatically correct bullshit. This is great for people whose job involves a "there's no bad ideas" session to bounce ideas around before winnowing out the bad ones. It might be useful for editing, to take a rough draft and clean up the grammar and spelling. It might in principle be useful for someone who has trouble putting things down on paper but is competent at fact checking and developmental editing.
These all seem like pretty economically limited applications – certainly not justifying the expense. So AI is being pushed into areas where it simply doesn't work well. Google seems to be manually editing hilariously wrong answers that go viral. Their defense is telling: that these are rare queries. Yeah, so? This clearly is not a tenable response.
The money is following the hype out of fear of being left behind. As we continue the downward slope of the hype cycle, the fear will abate. This is how these things always work.
bks said,
June 2, 2024 @ 6:53 am
Not a big fan of podcasts, but this one with Emmanuel Maggiori (PhD in machine learning), author of "Smart Until It's Dumb: Why artificial intelligence keeps making epic mistakes (and why the AI bubble will burst)", rings true:
https://youtu.be/Nd7wrC62LEk
Andrew Usher said,
June 2, 2024 @ 9:27 am
Whether the current AI fad will pan out, that is whether it will allow computers to take over useful tasks that they were not previously able to, is not something I think anyone can predict at this time. I wouldn't commit myself either way, yet. Certainly the most visible examples don't seem promising, but it's likely that the most effort is directed toward applications not for public consumption, and for which it would take some time to see any impacts.
However, that is of little consequence to the question of this so-called utopia. Certainly, the situation 'no one _has_ to work' is desirable, because it is for each individual – it's a safe bet that anyone in that circumstance (which is very many, after all) will not regret it – and to deny it for everyone, if that is practicable, would be wrong. But no one could seriously claim that would mean the end to all human problems, or a degree of perfection our nature is hardly capable of; if that is what is meant by 'utopian', I would disclaim the term.
When it gets mentioned in this manner, though, it is more to deflect criticism than an actual prediction or desire; this alone does not show what Musk really thinks about it. And this answer, as can be predicted, similarly does not talk about _how_ we are to move into that state of society, or even that that should be a concern at all. But it certainly is, for recent history shows it: computers have already displaced many, perhaps most, jobs that existed before and yet, despite a number of reasonable people having thought about it and come to the same conclusion, we have not moved toward such a state in general.
What has happened instead, in the US probably most of all, but surely not only here, is an explosion of bureaucracy and rules, as well as the whole sphere of government-funded pointlessness, that reduce average productivity so that we maintain roughly the same portion of people employed, even as genuine productive work is computerised. Many should be able to attest to seeing this personally, as everyone has to some extent. No one explicitly planned it this way, but it shows that the social conventions surrounding jobs and work are powerful and that there's likely no spontaneous, gradual way to change it. And it's most reasonable to believe that will continue if no action against it is taken, which technology itself can't do – unless it takes over political decision-making, probably the last thing it would displace if it ever does.
The real question should now ask, and answer, itself. Stasis is not happening now or in the immediate future, even if this kind of AI is a dead end – there have been dead ends before in computing, failed advancements, and real advancement does not stop; the problem will continue to grow. If indeed the 'utopia' where jobs as we know them fade _will_ happen, we should not wait for a complete collapse of society before we can get there and enjoy it. Even though the current left-wing advocates of a 'universal basic income' will not admit it, implementing one in full would serve that purpose, and I can see no other way compatible with human nature.
k_over_hbarc at yahoo.com
ardj said,
June 2, 2024 @ 10:34 am
@Andrew Usher, or whatever is the machine intelligence behind "k_over_hbarc at yahoo.com"
Your analysis of the present/ near past seems to be "an explosion of bureaucracy and rules, as well as the whole sphere of government-funded pointlessness, that reduce average productivity so that we maintain roughly the same portion of people employed, even as genuine productive work is computerised"
This might be more convincing with some examples / data. Sixty years ago I enjoyed the introduction of a small computer (the IBM 1130) to a market research department. Far from replacing employees, the chap who previously ran punched-card production and processing, and knew how to wire up the plugboards, instantly acquired three more assistants, only two of whom were programmers. Equally, the further analysis which became possible entailed much more work for the market researchers, contributing to the growth in their numbers, not to mention the additional surveys made possible.
Your complaint about bureaucracy seems to resist the controlled society you wish for, and it is not clear which rules and regulations you would keep and which remove (how?).
David Marjanović said,
June 2, 2024 @ 2:42 pm
Well, wars caused by mutual misunderstandings are a thing of the past; and indeed instant communication has been extremely important for keeping us alive ever since the Soviet Union got nukes, too.
Wars caused by a strongman who just simply wants a war remain pretty common.
This is such an American thing to say.
It is really not hard to find countries that have way more bureaucracy and way fewer loopholes in their rules, and yet are richer per capita, than the US.
Andrew Usher said,
June 2, 2024 @ 3:47 pm
ardj:
I don't doubt your anecdote, but one department of one company at one time doesn't make a good case. True, I gave no examples, but since it can be seen almost everywhere none come to mind in particular. If you've seen a lion, say, but once, the subject of lions may well recall that one; if a thousand times, none is likely to stand out in that circumstance.
To the latter, I don't know where you get 'controlled society' from – I said nothing like it. And are you implying that our society is not 'controlled' now? There is always some kind of control, some restraint on freedom. Surely there can be more or less freedom; I will likely be on the side of more and have always said so. At least you don't deny that there is an increase in bureaucracy, etc. As to which rules I would remove, answering that is too large a task for one person! Whenever a particular piece of the system comes up, I can give my opinion and do, but it's more complicated than saying 'good' or 'bad' to individual rules.
David Marjanovic:
I do not think you know the US well enough to make such a comparison, which is in any case an over-simplification. Also, rules with more loopholes cause more wasted effort (what I was referring to) than rules with fewer.
Philip Taylor said,
June 3, 2024 @ 4:32 am
David M. — "Wars caused by a strongman who just simply wants a war remain pretty common" I cannot help but feel that "who just simply wants a war" is rarely an accurate description of the situation. The strongman may well want (e.g.,) someone else's country or resources, to regain territory that he has lost, or many other things, but I do not believe that he wants war qua war. However, since he can see no way of gaining "someone else's country … many other things", and since he rarely has anything personal to lose in the event of war, he is more than willing to start one. But I still don't believe that that implies that he wants one, other than in a pure revenge/punishment situation.
Philip Anderson said,
June 3, 2024 @ 7:20 am
@Philip Taylor
There are many cases where someone did want a war as such, as a way of distracting their country from other problems, or to appear to be a strongman, even if the potential gain were minimal.
ardj said,
June 3, 2024 @ 8:27 am
@Andrew Usher
My work as a statistician inclines me to accept your argument that one anecdote does not prove a general description. But in this case it was intended simply to contradict your over-general and unsupported complaint about computers destroying jobs. You are probably too young to remember the Industrial Revolution of the 19C or the Agricultural one starting perhaps in the 16C (cf. "The Brenner debate" &c), but while they destroyed a great many jobs, they also produced new ones – at that time, mostly horrible ones. Improved automation then alleviated some of the unpleasantness and created new jobs (cf. "How Technology Is Destroying Jobs" by David Rotman, MIT Technology Review, 13 June 2013, or the two articles in the same journal of 1 April 2024 by Peter Dizikes, to look no further than the USA).
Only one person claims that Amazon warehouse jobs are, er, fulfilling, so to speak. But then Amazon was always a terrible idea.
Turning to controlled society, what else are you saying when you write: "an explosion of bureaucracy and rules, as well as the whole sphere of government-funded pointlessness"? Do you see why I read libertarian arguments into this? And the question of which regulations you wish to remove is very real, with Trump in the offing: Regulations protect drinking water, highway safety, even place some checks on the idiotic casino of the stock markets. Even before computers, there were some rules about not fouling wells or driving a carriage furiously, but their enlargement and the expansion of associated bureaucrats to survey them is probably not down to computers in the main – in large part it is simply due to the rapid growth in populations, while it is also possible nowadays to do more to see that we live in a safe world. What are your super minds going to do, after all?
I am a longstanding admirer of the Culture but I struggle to understand your argument.
Philip Taylor said,
June 3, 2024 @ 8:31 am
ardj — "but then Amazon was always a terrible idea" — a terrible idea for whom ? Not for its founder, not for its shareholders, not for its customers, not for its vendors — terrible, perhaps, for those that are unable to compete with it, but then that is true of every organise that exists to (a) sell, and (b) make a profit.
Philip Taylor said,
June 3, 2024 @ 8:32 am
[…] of every organisation […]
Andrew Usher said,
June 4, 2024 @ 7:31 am
Well, it's surely reasonable to bring up the industrial revolution and other past technological changes, and I suppose the reasons that this time is different need to be stated. All previous technology was self-limiting in the amount of work it could displace, because it did not replace human thought, even in its more rudimentary forms. As all man's endeavors require mental effort to some extent, they placed no obstacle to developing new jobs and indeed, as you note, enough new jobs did evolve to eventually spread the increased wealth. This is now different as thought has been and still is being replaced by computers, and in general increasing output does not necessitate more workers.
The other point is that previous industrialisation _did_ result in a net decrease in human labor. By the time computers came around, society had almost entirely removed children and married women from paid work (and their unpaid household work fell, too, it certainly didn't increase to compensate); reduced the hours of work for most men, establishing the 8-hour day and 40-hour week as norms; and made retirement a thing for the masses; all of which reduced the ratio of work hours to population supported. Of course that's a good thing, but it's clear the trend has stopped – the exact timing may be a bit of a coincidence, but since computerisation took off, significant reduction in work hours per capita has not happened and has probably reversed. (You might object that at first industrialisation increased work hours, but to the extent it did, it was due to understandable shifts that can only happen once: modern supervision meaning people could be forced to work longer, and reduced physical demands making them able to.)
The ideas that computers are or will be causing an unprecedented change in work, and that some sort of universal income is the correct solution, are not new: some reasonable people have been so concluding since the 1960s, and probably earlier in speculation. As far as I know, I also figured out those two basic points independently. I certainly did not get it from this Iain Banks, whom I don't know that I'd ever heard of, and references to his work mean nothing to me.
To the point about regulation: again, my argument was not about any specific rules, but the effect of rules in general. You can't defend bureaucracy by making a list of rules you like; you'd need to think about those you don't like as well. Surely, we need some rules, and some will be new, but just as surely, it's possible to have too many rules, or the wrong ones, or ones that are enforced so unfairly or inconsistently as to not serve any purpose they might have – and most certainly those things do exist. Trump is irrelevant; understandably, neither he nor any other leader has implemented, or is likely to implement, a large-scale rollback of regulation, even if one really tried. There have been attacks on a few specific things, yes, and I won't debate them here, but not on bureaucracy or regulation as such – in terms of action, not just rhetoric.
Benjamin E. Orsatti said,
June 6, 2024 @ 4:19 pm
As a municipal solicitor, I would like to thank the creators of AI for having afforded me lifetime job security — I believe I will be able to work until retirement age just on replies to, and defense of appeals of, Right-to-Know requests submitted by AI bots.
Next, they're probably going to start showing up at public meetings; do we have procedures for this?…