Sam Altman and King Blozo


We're all waiting to learn the story behind Sam Altman's firing as CEO of OpenAI. Or at least, many of us are.

Meanwhile, there's possible resonance with an on-going drama in the daily Popeye comic strip, concerning the fate of (former) King Blozo of Spinachovia, now the Superintendent of Royal Foot Surfaces:

During the question period after my talk yesterday, the issue was raised whether modern AI-ish methods mean the end of (human) linguistics. My opinion: No, for the same reason that the invention of the telescope was not the end of astronomy — though it did foreground a somewhat different mix of skills. (Here's the introductory slide from yesterday's talk, and a mildly skeptical take from Geoff Pullum in 2011…)

King Blozo clearly has the basis for a different view.

Update — if this Xeet is correct, as seems plausible, the issue was mostly about Altman's recent massive fund-raising campaign (mingled of course with the question of what to do with the new sixty billion dollars):

 

Update #2 — A slightly different take, from Henry Farrell: "What OpenAI shares with Scientology — Strange beliefs, fights over money and bad science fiction", Programmable Mutter 11/20/2023.

Update #3 — Altman is back



12 Comments

  1. Bill Benzon said,

    November 18, 2023 @ 6:02 am

    The folks over at LessWrong, who are certainly much closer to this business than I am, seem to think it was over a disagreement about safety issues:

    Burny

    "OpenAI’s ouster of CEO Sam Altman on Friday followed internal arguments among employees about whether the company was developing AI safely enough, according to people with knowledge of the situation.

    Such disagreements were high on the minds of some employees during an impromptu all-hands meeting following the firing. Ilya Sutskever, a co-founder and board member at OpenAI who was responsible for limiting societal harms from its AI, took a spate of questions.

    At least two employees asked Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a “coup” or “hostile takeover,” according to a transcript of the meeting. To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software—which had become a billion-dollar business—at the expense of potential safety concerns."

    Kara Swisher also tweeted:

    "More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side."

    "The developer day and how the store was introduced was in inflection moment of Altman pushing too far, too fast. My bet: [Sam will] have a new company up by Monday."

    Apparently Microsoft was also blindsided by this and didn't find out until moments before the announcement.

    "You can call it this way," Sutskever said about the coup allegation. "And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity." AGI stands for artificial general intelligence, a term that refers to software that can reason the way humans do.
    When Sutskever was asked whether "these backroom removals are a good way to govern the most important company in the world?" he answered: "I mean, fair, I agree that there is a not ideal element to it. 100%."

    https://twitter.com/AISafetyMemes/status/1725712642117898654

  2. Bill Benzon said,

    November 18, 2023 @ 6:13 am

    This relates to Blozo's concern about AI superintelligence. Mishka, again over at LessWrong, has transcribed part of a YouTube video released on July 17. It's a conversation with Ilya Sutskever, former student of Geoffrey Hinton and co-founder and chief scientist of OpenAI:

    15:03 Sven: it's worthwhile to also talk about AI safety, and OpenAI has released the document just recently where you're one of the undersigners. Sam has testified in front of Congress. What worries you most about AI safety?

    15:27 Ilya: Yeah I can talk about that. So let's take a step back and talk about the state of the world. So you know, we've had this AI research happening, and it was exciting, and now you have the GPT models, and now you all get to play with all the different chatbots and assistants and, you know, Bard and ChatGPT, and they say okay that's pretty cool, it can do things; and indeed they already are. You can start perhaps worrying about the implications of the tools that we have today, and I think that it is a very valid thing to do, but that's not where I allocate my concern.

    16:14 The place where things get really tricky is when you imagine fast forwarding some number of years, a decade let's say, how powerful will AI be? Of course with this incredible future power of AI which I think will be difficult to imagine frankly. With an AI this powerful you could do incredible amazing things that are perhaps even outside of our dreams. Like if you can really have a dramatically powerful AI. But the place where things get challenging are directly connected to the power of the AI. It is powerful, it is going to be extremely unbelievably powerful, and it is because of this power that's where the safety issues come up, and I'll mention three I see… I personally see three… like you know when you get so… you alluded to the letter that we posted at OpenAI a few days ago, actually yesterday, about what with… about some ideas that we think would be good to implement to navigate the challenges of superintelligence.

    17:46 Now what is superintelligence, why did we choose to use the term "superintelligence"? The reason is that superintelligence is meant to convey something that's not just like an AGI. With AGI we said, well you have something kind of like a person, kind of like a co-worker. Superintelligence is meant to convey something far more capable than that. When you have such a capability it's like can we even imagine how it will be? But without question it's going to be unbelievably powerful, it could be used to solve incomprehensibly hard problems. If it is used well, if we navigate the challenges that superintelligence poses, we could radically improve the quality of life. But the power of superintelligence is so vast so the concerns.

    18:37 The concern number one has been expressed a lot and this is the scientific problem of alignment. You might want to think of it as an analog to nuclear safety. You know you build a nuclear reactor, you want to get the energy, you need to make sure that it won't melt down even if there's an earthquake and even if someone tries to I don't know smash a truck into it. (Sven: Yep.) So this is the superintelligent safety and it must be addressed in order to contain the vast power of the superintelligence. It's called the alignment problem. One of the suggestions that we had in our… in the post was an approach that an international organization could do to create various standards at this very high level of capability, and I want to make this other point you know about the post and also about our CEO Sam Altman's Congressional testimony where he advocated for regulation of AI. The intention is primarily to put rules and standards of various kinds on the very high level of capability. You know you could maybe start looking at GPT-4, but that's not really what is interesting, what is relevant here, but something which is vastly more powerful than that, when you have a technology so powerful it becomes obvious that you need to do something about this power. That's the first concern, the first challenge to overcome.

    20:08 The Second Challenge to overcome is that of course we are people, we are humans, "humans of interests", and if you have superintelligences controlled by people, who knows what's going to happen… I do hope that at this point we will have the superintelligence itself try to help us solve the challenge in the world that it creates. This is not… no longer an unreasonable thing to say. Like if you imagine a superintelligence that indeed sees things more deeply than we do, much more deeply. To understand reality better than us. We could use it to help us solve the challenges that it creates.

    20:43 Then there is the third challenge which is the challenge maybe of natural selection. You know what the Buddhists say: the change is the only constant. So even if you do have your superintelligences in the world and they are all… We've managed to solve alignment, we've managed to solve… no one wants to use them in very destructive ways, we managed to create a life of unbelievable abundance, which really like not just material abundance, but health, longevity, like all the things we don't even try dreaming about because they're obviously impossible, if you've got to this point then there is the third challenge of natural selection. Things change, you know… You know that natural selection applies to ideas, to organizations, and that's a challenge as well.

    21:28 Maybe the Neuralink solution of people becoming part AI will be one way we will choose to address this. I don't know. But I would say that this kind of describes my concern. And specifically just as the concerns are big, if you manage, it is so worthwhile to overcome them, because then we could create truly unbelievable lives for ourselves that are completely even unimaginable. So it is like a challenge that's really really worth overcoming.

    22:00 Sven: I very much like the idea that there needs to be the sort of threshold above which we we really really should pay attention. Because you know speaking as as a German, if it's like European style regulation often from people that don't really know very much about the field, you can also completely kill innovation which is a which be… it would be a little bit of a pity.

    Yes, I did go outside and take a walk around the block to verify that I'm still in Hoboken and not on some soundstage – not necessarily in Hollywood, BTW, they're all over the place by now, including Jersey City and Queens, NY – where a science fiction movie is being filmed. I'm still living in the Real World. Not so sure about Silicon Valley.

  3. Mark Liberman said,

    November 18, 2023 @ 6:19 am

    @Bill Benzon: " I'm still living in the Real World. Not so sure about Silicon Valley."

    See Maria Farrell, "Silicon Valley’s worldview is not just an ideology; it’s a personality disorder", Crooked Timber 11/15/2023.

    Also N.K. Jemisin, Emergency Skin.

  4. Carlana said,

    November 18, 2023 @ 6:21 am

    Less Wrong is a cult. I wouldn’t trust them if they tell me someone was fired for praying in the wrong direction.

  5. Bill Benzon said,

    November 18, 2023 @ 7:51 am

    On LessWrong and the like, I published this in 3 Quarks Daily a couple of months ago: A New Counter Culture: From the Reification of IQ to the AI Apocalypse.

    Also, a bit older: On the Cult of AI Doom.

    The working title for my next column for 3 Quarks Daily: O Captain, My Captain! Investing in AI is like buying shares in a whaling voyage helmed by a man who knows all about ships and nothing about whales.

  6. bks said,

    November 18, 2023 @ 7:52 am

    Perhaps worth noting are the poorly sourced quotes from Bill Gates last month that GPT-5 would not be much of an improvement over GPT-4:

    https://the-decoder.com/bill-gates-does-not-expect-gpt-5-to-be-much-better-than-gpt-4/

    Gates is in a position to look behind the curtain.

  7. Bill Benzon said,

    November 18, 2023 @ 8:09 am

    From the Xerpt Mark posted:

    > Nov 4 – Ilya was unsettled. They’d reached a threshold of autonomy that was concerning, while the alignment team was still just adding capability instead of emotion, actual love for humanity. They needed more time to figure out the research pathway instead of hurrying to deploy product.

    From a Nov. 28, 2022 blog post by Scott Aaronson, who'd taken leave from his post at UTexas, Austin, to work on safety at OpenAI:

    I have these weekly calls with Ilya Sutskever, cofounder and chief scientist at OpenAI. Extremely interesting guy. But when I tell him about the concrete projects that I’m working on, or want to work on, he usually says, “that’s great Scott, you should keep working on that, but what I really want to know is, what is the mathematical definition of goodness? What’s the complexity-theoretic formalization of an AI loving humanity?” And I’m like, I’ll keep thinking about that! But of course it’s hard to make progress on those enormities.

    Sutskever's the guy plotting the technical course.

  8. Bill Benzon said,

    November 18, 2023 @ 8:22 am

    As for the impact on linguistics, I think the field is in fine shape and that LLMs and the like provide new tools and opportunities. One of the big issues with LLMs is that they are black boxes. We don't know how they work. To quote from a recent post:

    You can’t understand what the parts of a mechanism are doing unless you know what the mechanism is trying to do. Early in How the Mind Works (p. 22) Steven Pinker asks us to imagine that we’re in an antique shop:

    …an antique store, we may find a contraption that is inscrutable until we figure out what it was designed to do. When we realize that it is an olive-pitter, we suddenly understand that the metal ring is designed to hold the olive, and the lever lowers an X-shaped blade through one end, pushing the pit out through the other end. The shapes and arrangements of the springs, hinges, blades, levers, and rings all make sense in a satisfying rush of insight. We even understand why canned olives have an X-shaped incision at one end.

    To belabor the example and put it to use as an analogy for mechanistic interpretability, someone with a good feel for mechanisms can tell you a great deal about how the parts of this strange device articulate, the range of motion for each part, the stresses operating on each joint, the amount of force required to move the parts, and so forth. But, still, when you put all that together, that will not tell you what the device was designed to do.

    Well, we're going to need linguistics and some other things to figure that out.

  9. KWillets said,

    November 18, 2023 @ 3:13 pm

    The illustrations that I've seen resemble the grammatical or syntactical rules that linguists have derived previously.

    The difference this round seems to be the ability to generate and apply these patterns automatically, versus the past when people had to encode them one at a time.

  10. Jerry Packard said,

    November 18, 2023 @ 5:29 pm

    Well, there is a significant part of linguistics that has to do with the generation of natural language by humans, and so that part of the field will remain relevant and important as long as that remains of interest.

  11. Stephen Goranson said,

    November 19, 2023 @ 8:39 am

    "The history of the term 'effective altruism'" by William MacAskill (2014) may give the impression that the collocation was coined in 2011 or 2012. [1]
    An Economic Analysis of the Family by John Ermisch, 2003, is one earlier example, using it numerous times, including as a section heading on page 55. [2]

    [1]
    https://forum.effectivealtruism.org/posts/9a7xMXoSiQs3EYPA2/the-history-of-the-term-effective-altruism

    [2]
    https://www.google.com/books/edition/An_Economic_Analysis_of_the_Family/xOGrHSN-1pQC?hl=en&gbpv=1&dq=%22effective+altruism%22&pg=PA55&printsec=frontcover

  12. /df said,

    November 19, 2023 @ 5:22 pm

    "One of the big issues with LLMs is that they are black boxes. We don't know how they work."

    1. Maybe we could try to reproduce the behaviour of these intelligences using computer software.

    2. When that turns out to be a black box, go to 1.
