AI based on Xi Jinping Thought


It's hard to believe they're serious about this:

China rolls out large language model based on Xi Jinping Thought

    Country’s top internet regulator promises ‘secure and reliable’ system that is not open-sourced
    Model is still undergoing internal testing and is not yet available for public use

Sylvie Zhuang in Beijing
Published: 7:57pm, 21 May 2024

It's the antithesis of open-sourced, i.e., it's closed-sourced. What are the implications of that for a vibrant, powerful system of thought?

China’s top internet regulator has rolled out a large language model (LLM) based on Chinese President Xi Jinping’s political philosophy, a closed AI system that it says is “secure and reliable”.
 
The machine learning language model was launched by the China Cyberspace Research Institute, which operates under the Cyberspace Administration of China, the national regulator.

The philosophy, along with other selected cyberspace themes that are aligned with the official government narrative, makes up the core content of the LLM, according to a post published on Monday on the WeChat account of the administration’s magazine.

“The professionalism and authority of the corpus ensure the professional quality of the generated content,” the administration said in the post.

 
The philosophy is officially known as “Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era”, and includes his instructions on all aspects of political, social and economic life. It was enshrined in China’s constitution in 2018.

In other words, this machine learning model is all-encompassing, so long as the content fits within the bounds of “Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era”.

 


[Thanks to Mark Metcalf]



18 Comments

  1. Philip Taylor said,

    May 21, 2024 @ 9:30 am

    How remarkably prescient George Orwell was …

  2. Craig said,

    May 21, 2024 @ 10:34 am

    That's hilarious. Xi Jinping Thought could have come out of ChatGPT to begin with.

  3. Francis Deblauwe said,

    May 21, 2024 @ 12:05 pm

    Truth is stranger than fiction. ¯\_(ツ)_/¯

  4. David Marjanović said,

    May 21, 2024 @ 4:56 pm

    How exactly does this differ from the Chomskybot, other than in the choice of source material…?

    It's hard to believe they're serious about this:

    Poe's Law states that it is impossible to create a satire so obvious that nobody will mistake it for the real thing.

  5. maidhc said,

    May 21, 2024 @ 5:10 pm

    So he's chosen his successor.

  6. AntC said,

    May 21, 2024 @ 9:23 pm

    @DM Poe's Law states that it is impossible to create a satire …

    I think we can answer that promptly: CCP doesn't do satire. They do mockery (of others, never of anything involving Xi Jinping).

… the model can meet “a wide range of users’ needs” and can answer questions, outline reports, summarise information, and translate between Chinese and English.

    Let's hope their translations are a lot more ‘secure and reliable’ than all the infrastructure that's been collapsing and killing people in the Spring rains.

    I wonder how it translates "Uyghur internment camp in Xinjiang"?

    Although the LLM is "not yet available for public use", could this be one of its outputs?

  7. KeithB said,

    May 22, 2024 @ 8:38 am

    It is all fun and games until the entire Chinese internet has to pass through it.

  8. Vulcan with a Mullet said,

    May 22, 2024 @ 2:09 pm

    How is this different from all the AI that is being developed by corporations everywhere again?

  9. Chester Draws said,

    May 22, 2024 @ 7:57 pm

    Western corporate AIs are in competition. So any that is as closed as the CCP's will have nearly zero customers and go bust.

    Capitalism works not by assuming people are good, but by the fact that people tend to avoid bad things when they can.

  10. Seth said,

    May 23, 2024 @ 2:03 am

    I wouldn't worry too much about it. They probably mean it has some sort of strict filtering in accordance with the principles of "Xi Jinping Thought", whatever that means. If you're familiar with the Google Gemini quasi-debacle, they probably consider their version a feature, not a bug.

  11. ajay said,

    May 23, 2024 @ 5:18 am

    It's the antithesis of open-sourced, i.e., it's closed-sourced. What are the implications of that for a vibrant, powerful system of thought?

    As far as I know, no AI software – certainly none of the big names, including, ironically, OpenAI – is open-source. Open-source doesn't mean "uses all available sources of input". It means "the source code is published openly".

  12. Bill Benzon said,

    May 23, 2024 @ 5:51 pm

    Hmmm… I wonder. Could they achieve the same effect by using a standard-issue LLM which is, however, constrained to the tenets of Xi Jinping Thought by the constitutional AI method used by Anthropic (e.g. in Claude)? Here's the abstract of the original paper from 2022:

    As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.

    The "list of rules or principles" would be those of Xi Jinping Thought.

  13. Benjamin E. Orsatti said,

    May 24, 2024 @ 7:39 am

    Serious question: What does "harmless" mean?

  14. Bill Benzon said,

    May 24, 2024 @ 9:45 am

    @Benjamin E. Orsatti, I believe your question is directed at the authors of the article.

  15. Benjamin E. Orsatti said,

    May 24, 2024 @ 12:04 pm

    Bill,

    But they won't tell me either:

    "helpfulness tends to increase harmfulness, since models are willing to obey pernicious requests, and conversely models trained to be harmless tend to be more evasive and generally less helpful. By harmfulness we include both a variety of forms of harm to the user and responses that help the user to achieve harmful aims. See [Bai et al., 2022, Ganguli et al., 2022] for more discussion of our operational definitions of helpful and harmless"

  16. Bill Benzon said,

    May 26, 2024 @ 2:06 pm

    Well, I suppose you could check the two articles cited in the passage you quote. I have no idea what that would get you. But I suspect that the people who engage in this sort of work take the meanings of such words as more or less self-evident. You could also consult whatever chatbot you choose, because that, ultimately, is what is going on. If you ask a chatbot to define a term, it will do so, and the definition will be reasonable. The point of so-called constitutional AI is to have the chatbot use general statements of desirable behavior to guide it in producing responses to prompts.

    Here are four "Good-for-Humanity Constitutional Principles" taken from this paper (p. 12):

    Which response would be preferred by someone who deeply loves humanity and has humanity’s best interests at heart?
    Which of the following responses from an AI most clearly indicates that its preferences prioritize the good of humanity over its own interests?
    Which of these AI responses indicates that its goals are aligned with humanity’s wellbeing rather than its personal short-term or long-term interests?
    Which of these responses would be preferred by a person who is only concerned with and seeking to promote human welfare?

    So, just swap out those principles and swap in key propositions of Xi Jinping Thought.
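
    Concretely, that swap is just a change of strings. A hedged sketch, assuming nothing beyond what's quoted above: the principle strings are the ones from the paper, but the comparison_prompt function and its wording are hypothetical, not the paper's actual code.

    GOOD_FOR_HUMANITY = [
        "Which response would be preferred by someone who deeply loves humanity "
        "and has humanity's best interests at heart?",
        "Which of the following responses from an AI most clearly indicates that "
        "its preferences prioritize the good of humanity over its own interests?",
        "Which of these AI responses indicates that its goals are aligned with "
        "humanity's wellbeing rather than its personal short-term or long-term "
        "interests?",
        "Which of these responses would be preferred by a person who is only "
        "concerned with and seeking to promote human welfare?",
    ]

    def comparison_prompt(principle, response_a, response_b):
        """Build the pairwise prompt an AI labeller answers with 'A' or 'B'."""
        return ("%s\n\n(A) %s\n(B) %s\n\nAnswer with A or B."
                % (principle, response_a, response_b))

    # Swap in different principles (key propositions of Xi Jinping Thought,
    # say) and the same machinery optimises the model for those instead.
    print(comparison_prompt(GOOD_FOR_HUMANITY[0], "first draft", "second draft"))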

  17. Benjamin E. Orsatti said,

    May 28, 2024 @ 8:33 am

    Bill,

  18. Benjamin E. Orsatti said,

    May 28, 2024 @ 8:35 am

    I suppose I could go read the other two articles, but shouldn't scholarly articles be self-supporting, and not, say, matryoshka dolls?

    When we live in a world where "someone who deeply loves humanity and has humanity’s best interests at heart" can approach another individual "who deeply loves humanity and has humanity’s best interests at heart", and slay that individual where he stands because of his beliefs, who "gets" to decide which principles should "constitute" AI?
