Sam Altman and King Blozo
We're all waiting to learn the story behind Sam Altman's firing as CEO of OpenAI. Or at least, many of us are.
Meanwhile, there's possible resonance with an ongoing drama in the daily Popeye comic strip, concerning the fate of (former) King Blozo of Spinachovia, now the Superintendent of Royal Foot Surfaces:
During the question period after my talk yesterday, the issue was raised whether modern AI-ish methods mean the end of (human) linguistics. My opinion: No, for the same reason that the invention of the telescope was not the end of astronomy — though it did foreground a somewhat different mix of skills. (Here's the introductory slide from yesterday's talk, and a mildly skeptical take from Geoff Pullum in 2011…)
King Blozo clearly has the basis for a different view.
Update — if this Xeet is correct, as seems plausible, the issue was mostly about Altman's recent massive fund-raising campaign (mingled of course with the question of what to do with the new sixty billion dollars):
What happened at OpenAI?
> Nov 2 -> Sam was in the room, when the team demonstrated the next big improvement. 3 times before in OpenAI's history, most recently with GPT-4, they'd pushed back the veil of ignorance and pushed forward the frontier of discovery. As he watched the…
— Ate-a-Pi (@8teAPi) November 18, 2023
Update #2 — A slightly different take, from Henry Farrell: "What OpenAI shares with Scientology — Strange beliefs, fights over money and bad science fiction", Programmable Mutter 11/20/2023.
Update #3 — Altman is back…
Bill Benzon said,
November 18, 2023 @ 6:02 am
The folks over at LessWrong, who are certainly much closer to this business than I am, seem to think it was over a disagreement about safety issues:
Bill Benzon said,
November 18, 2023 @ 6:13 am
This relates to Blozo's concern about AI superintelligence. Mishka, again over at LessWrong, has transcribed part of a YouTube video released on July 17. It's a conversation with Ilya Sutskever, former student of Geoffrey Hinton and co-founder and chief scientist of OpenAI:
Yes, I did go outside and take a walk around the block to verify that I'm still in Hoboken and not on some soundstage – not necessarily in Hollywood, BTW; they're all over the place by now, including Jersey City and Queens, NY – where a science fiction movie is being filmed. I'm still living in the Real World. Not so sure about Silicon Valley.
Mark Liberman said,
November 18, 2023 @ 6:19 am
@Bill Benzon: " I'm still living in the Real World. Not so sure about Silicon Valley."
See Maria Farrell, "Silicon Valley’s worldview is not just an ideology; it’s a personality disorder", Crooked Timber 11/15/2023.
Also N.K. Jemisin, Emergency Skin.
Carlana said,
November 18, 2023 @ 6:21 am
Less Wrong is a cult. I wouldn't trust them if they told me someone was fired for praying in the wrong direction.
Bill Benzon said,
November 18, 2023 @ 7:51 am
On the subject of LessWrong and the like, I published this in 3 Quarks Daily a couple of months ago: A New Counter Culture: From the Reification of IQ to the AI Apocalypse.
Also, a bit older: On the Cult of AI Doom.
The working title for my next column for 3 Quarks Daily: O Captain, My Captain! Investing in AI is like buying shares in a whaling voyage helmed by a man who knows all about ships and nothing about whales.
bks said,
November 18, 2023 @ 7:52 am
Perhaps worth noting are the poorly sourced quotes from Bill Gates last month suggesting that GPT-5 would not be much of an improvement over GPT-4:
https://the-decoder.com/bill-gates-does-not-expect-gpt-5-to-be-much-better-than-gpt-4/
Gates is in a position to look behind the curtain.
Bill Benzon said,
November 18, 2023 @ 8:09 am
From the Xerpt Mark posted:
From a blog post of Nov. 28, 2022 by Scott Aaronson, who'd taken leave from his post at UT Austin to work on AI safety at OpenAI:
Sutskever's the guy plotting the technical course.
Bill Benzon said,
November 18, 2023 @ 8:22 am
As for the impact on linguistics, I think the field is in fine shape, and that LLMs and the like provide new tools and opportunities. One of the big issues with LLMs is that they are black boxes. We don't know how they work. To quote from a recent post:
Well, we're going to need linguistics and some other things to figure that out.
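To make the black-box point concrete, here is a minimal sketch of what full access to an open model actually buys you. It assumes the Hugging Face transformers library and the small open GPT-2 model (an illustrative stand-in of my choosing, not one of OpenAI's current systems): every weight and activation is observable, yet nothing in them comes labeled with linguistic categories.

    # A minimal sketch, assuming the Hugging Face "transformers" library
    # and the small open GPT-2 model (an illustrative stand-in, not one
    # of OpenAI's current systems).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
    model.eval()

    inputs = tokenizer("Colorless green ideas sleep furiously", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # Thirteen layers of hidden states (token embeddings plus twelve
    # transformer blocks), each of shape (batch, tokens, 768). Nothing
    # in these numbers announces itself as "noun", "verb", or "agreement";
    # that is the sense in which the model is a black box.
    for i, layer in enumerate(outputs.hidden_states):
        print(f"layer {i:2d}: {tuple(layer.shape)}")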
KWillets said,
November 18, 2023 @ 3:13 pm
The illustrations that I've seen resemble the grammatical or syntactical rules that linguists have derived previously.
The difference this time around seems to be the ability to generate and apply these patterns automatically, where in the past people had to encode them one at a time; see the sketch below.
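For contrast, here's roughly what "encoding them one at a time" looked like: a toy hand-written grammar, sketched with NLTK (my choice of library for illustration, not anything from the post). Each rule is a human decision; the striking thing about LLMs is that they appear to induce such patterns wholesale from text.

    # The old, one-rule-at-a-time way: a toy hand-written grammar.
    # NLTK is an illustrative assumption, not anything from the post.
    import nltk

    grammar = nltk.CFG.fromstring("""
        S  -> NP VP
        NP -> Det N
        VP -> V NP
        Det -> 'the'
        N  -> 'king' | 'superintendent'
        V  -> 'watched'
    """)

    # Every rule above was written by hand; parsing succeeds only for
    # the handful of sentences the grammar anticipates.
    parser = nltk.ChartParser(grammar)
    for tree in parser.parse("the king watched the superintendent".split()):
        print(tree)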
Jerry Packard said,
November 18, 2023 @ 5:29 pm
Well, there is a significant part of linguistics that has to do with the generation of natural language by humans, and so that part of linguistics will remain relevant and important as long as it remains of interest.
Stephen Goranson said,
November 19, 2023 @ 8:39 am
"The history of the term 'effective altruism'" by William MacAskill (2014) may give the impression that the collocation was coined in 2011 or 2012. [1]
An Economic Analysis of the Family by John Ermisch, 2003, is one earlier example, using it numerous times, including as a section heading on page 55. [2]
[1] https://forum.effectivealtruism.org/posts/9a7xMXoSiQs3EYPA2/the-history-of-the-term-effective-altruism
[2] https://www.google.com/books/edition/An_Economic_Analysis_of_the_Family/xOGrHSN-1pQC?hl=en&gbpv=1&dq=%22effective+altruism%22&pg=PA55&printsec=frontcover
/df said,
November 19, 2023 @ 5:22 pm
"One of the big issues with LLMs is that they are black boxes. We don't know how they work."
1. Maybe we could try to reproduce the behaviour of these intelligences using computer software.
2. When that turns out to be a black box, go to 1.