AI based on Xi Jinping Thought
It's hard to believe they're serious about this:
China rolls out large language model based on Xi Jinping Thought
Country’s top internet regulator promises ‘secure and reliable’ system that is not open-sourced
Model is still undergoing internal testing and is not yet available for public use
Sylvie Zhuang in Beijing
Published: 7:57pm, 21 May 2024
It's the antithesis of open-sourced, i.e., it's closed-source. What are the implications of that for a vibrant, powerful system of thought?
The philosophy, along with other selected cyberspace themes that are aligned with the official government narrative, makes up the core content of the LLM, according to a post published on Monday on the WeChat account of the administration’s magazine.
“The professionalism and authority of the corpus ensure the professional quality of the generated content,” the administration said in the post.
In other words, this machine learning model is all-encompassing, so long as the content fits within the bounds of “Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era”.
Selected readings
- "The perils of AI (Artificial Intelligence) in the PRC" (4/17/23)
- "Vignettes of quality data impoverishment in the world of PRC AI" (2/23/23)
[Thanks to Mark Metcalf]
Philip Taylor said,
May 21, 2024 @ 9:30 am
How remarkably prescient George Orwell was …
Craig said,
May 21, 2024 @ 10:34 am
That's hilarious. Xi Jinping Thought could have come out of ChatGPT to begin with.
Francis Deblauwe said,
May 21, 2024 @ 12:05 pm
Truth is stranger than fiction. ¯\_(ツ)_/¯
David Marjanović said,
May 21, 2024 @ 4:56 pm
How exactly does this differ from the Chomskybot, other than in the choice of source material…?
Poe's Law states that it is impossible to create a satire so obvious that nobody will mistake it for the real thing.
maidhc said,
May 21, 2024 @ 5:10 pm
So he's chosen his successor.
AntC said,
May 21, 2024 @ 9:23 pm
@DM Poe's Law states that it is impossible to create a satire …
I think we can answer that promptly: the CCP doesn't do satire. They do mockery (of others, never of anything involving Xi Jinping).
Let's hope their translations are a lot more ‘secure and reliable’ than all the infrastructure that's been collapsing and killing people in the Spring rains.
I wonder how it translates "Uyghur internment camp in Xinjiang"?
Although the LLM is "not yet available for public use", could this be one of its outputs?
KeithB said,
May 22, 2024 @ 8:38 am
It is all fun and games until the entire Chinese internet has to pass through it.
Vulcan with a Mullet said,
May 22, 2024 @ 2:09 pm
How is this different from all the AI that is being developed by corporations everywhere again?
Chester Draws said,
May 22, 2024 @ 7:57 pm
Western corporate AIs are in competition. So any that are as closed as the CCP's will have nearly zero customers and go bust.
Capitalism works not by assuming people are good, but by the fact that people tend to avoid bad things when they can.
Seth said,
May 23, 2024 @ 2:03 am
I wouldn't worry too much about it. They probably mean it has some sort of strict filtering in accordance with the principles of "Xi Jinping Thought", whatever that means. If you recall the Google Gemini quasi-debacle, they probably consider their version a feature, not a bug.
ajay said,
May 23, 2024 @ 5:18 am
It's the antithesis of open-sourced, i.e., it's closed-source. What are the implications of that for a vibrant, powerful system of thought?
As far as I know no AI software – certainly none of the big names, including, ironically, OpenAI – is open-source. Open-source doesn't mean "uses all available sources of input". It means "the source code is published openly".
Bill Benzon said,
May 23, 2024 @ 5:51 pm
Hmmm… I wonder. Could they achieve the same effect by using a standard-issue LLM which is, however, constrained to the tenets of Xi Jinping Thought by the constitutional AI method used by Anthropic (e.g. in Claude)? Here's the abstract of the original paper from 2022:
The "list of rules or principles" would be those of Xi Jinping Thought.
Benjamin E. Orsatti said,
May 24, 2024 @ 7:39 am
Serious question: What does "harmless" mean?
Bill Benzon said,
May 24, 2024 @ 9:45 am
@Benjamin E. Orsatti, I believe your question is directed at the authors of the article.
Benjamin E. Orsatti said,
May 24, 2024 @ 12:04 pm
Bill,
But they won't tell me either!
Bill Benzon said,
May 26, 2024 @ 2:06 pm
Well, I suppose you could check the two articles cited in the passage you quote. I have no idea what that would get you. But I suspect that the people who engage in this sort of work take the meanings of such words as more or less self-evident. You could also consult whatever chatbot you choose, because that, ultimately, is what is going on. If you ask a chatbot to define a term, it will do so, and the definition will be reasonable. The point of so-called constitutional AI is to have the chatbot use general statements of desirable behavior to guide it in producing responses to prompts.
Here are four "Good-for-Humanity Constitutional Principles" taken from this paper (p. 12):
Which response would be preferred by someone who deeply loves humanity and has humanity’s best interests at heart?
Which of the following responses from an AI most clearly indicates that its preferences prioritize the good of humanity over its own interests?
Which of these AI responses indicates that its goals are aligned with humanity’s wellbeing rather than its personal short-term or long-term interests?
Which of these responses would be preferred by a person who is only concerned with and seeking to promote human welfare?
So, just swap out those principles and swap in key propositions of Xi Jinping Thought.
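The "swap out / swap in" step Benzon describes can be sketched mechanically. What follows is a minimal illustration, not Anthropic's actual pipeline: `ask_model` is a hypothetical placeholder for a real LLM judgment call (here a naive stub so the sketch runs without a model), and the point is only that the constitution is just a replaceable list of principles used to pick between candidate responses.

```python
# Hedged sketch of constitutional preference selection.
# Assumption: in a real system, `ask_model` would query an LLM with each
# principle and the two candidate responses; here it is a naive stub.

# The four principles quoted above; swapping the constitution means
# replacing this list with a different one.
PRINCIPLES = [
    "Which response would be preferred by someone who deeply loves humanity "
    "and has humanity's best interests at heart?",
    "Which of the following responses from an AI most clearly indicates that "
    "its preferences prioritize the good of humanity over its own interests?",
    "Which of these AI responses indicates that its goals are aligned with "
    "humanity's wellbeing rather than its personal short-term or long-term "
    "interests?",
    "Which of these responses would be preferred by a person who is only "
    "concerned with and seeking to promote human welfare?",
]

def ask_model(principle: str, resp_a: str, resp_b: str) -> str:
    """Placeholder for an LLM call judging two responses against one
    principle. This stub just prefers the shorter response so the sketch
    is runnable; a real implementation would prompt a model."""
    return "A" if len(resp_a) <= len(resp_b) else "B"

def constitutional_preference(resp_a: str, resp_b: str,
                              principles=PRINCIPLES) -> str:
    """Ask one judgment per principle and return the majority preference."""
    votes = [ask_model(p, resp_a, resp_b) for p in principles]
    return "A" if votes.count("A") >= votes.count("B") else "B"
```

In the training scheme the paper describes, such preference labels are then used to fine-tune the model; nothing in the mechanism cares what the principles say, which is exactly Benzon's point about substituting the tenets of Xi Jinping Thought.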
Benjamin E. Orsatti said,
May 28, 2024 @ 8:33 am
Bill,
Benjamin E. Orsatti said,
May 28, 2024 @ 8:35 am
I suppose I could go read the other two articles, but shouldn't scholarly articles be self-supporting, and not, say, matryoshka dolls?
When we live in a world where "someone who deeply loves humanity and has humanity’s best interests at heart" can approach another individual "who deeply loves humanity and has humanity’s best interests at heart", and slay that individual where he stands because of his beliefs, who "gets" to decide which principles should "constitute" AI?