Archive for Computational linguistics

GLM-130B: An Open Bilingual Pre-Trained Model

Description of a General Language Model (GLM) project based at Tsinghua University in Beijing, but with users and collaborators around the world.

Homepage (August 4, 2022)

This prospectus is difficult for outsiders to understand because of its large number of unexplained acronyms, abbreviations, initialisms, and other insider terminology.

GLM-130B is an open bilingual (English & Chinese) bidirectional dense model with 130 billion parameters, pre-trained using the General Language Model (GLM) algorithm. It is designed to support inference tasks with the 130B parameters on a single A100 (40G * 8) or V100 (32G * 8) server. As of July 3rd, 2022, GLM-130B has been trained on over 400 billion text tokens (200B each for Chinese and English) and exhibits the following unique features:

    • Bilingual: supports both English and Chinese.
    • Performance (EN): better than GPT-3 175B (+5.0%), OPT-175B (+6.5%), and BLOOM-176B (+13.0%) on LAMBADA and slightly better than GPT-3 175B (+0.9%) on MMLU.
    • Performance (CN): significantly better than ERNIE TITAN 3.0 260B on 7 zero-shot CLUE datasets (+24.26%) and 5 zero-shot FewCLUE datasets (+12.75%).
    • Fast Inference: supports fast inference on both SAT and FasterTransformer (up to 2.5X faster) with a single A100 server.
    • Reproducibility: all results (>30 tasks) can be easily reproduced with open-sourced code and model checkpoints.
    • Cross-Platform: supports training and inference on NVIDIA, Hygon DCU, Ascend 910, and Sunway.

The model checkpoints of GLM-130B and code for inference are publicly available at our GitHub repo. The code for pre-training and fine-tuning as well as the research paper are coming soon.
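
As a rough sanity check on the single-server requirement quoted above (my own back-of-the-envelope arithmetic, not anything from the GLM-130B team), here is what 130 billion parameters amount to at various numeric precisions:

    # Weight storage for 130B parameters at different precisions; activations and
    # the KV cache add more, so these are lower bounds.
    params = 130e9
    for name, bytes_per_param in [("FP32", 4), ("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
        total_gb = params * bytes_per_param / 1e9
        print(f"{name:5s} weights ~ {total_gb:6.0f} GB -> {total_gb / 8:5.1f} GB per GPU on an 8-GPU server")
    # FP16 weights (~260 GB) fit across 8 x A100-40G (320 GB) but not across
    # 8 x V100-32G (256 GB), which is presumably where lower-precision inference comes in.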

Read the rest of this entry »

Comments off

Detecting LLM-created essays?

As I observed in "Alexa down, ChatGPT up?" (12/8/2022), there's reason to fear that LLMs ("Large Language Models") like ChatGPT will force major changes in writing education, by offering a cheap and easy way to generate essays for assignments. A small sample of the extensive published discussion:

Stephen Marche, "The College Essay is Dead", The Atlantic 12/6/2022
Daniel Lametti, "A.I. Could Be Great for College Essays", Slate 12/7/2022
Daniel Herman, "ChatGPT will end High School English", The Atlantic 12/9/2022
Beth McMurtrie, "AI and the Future of Undergraduate Writing: Teaching experts are concerned, but not for the reasons you think", The Chronicle of Higher Education 12/13/2022

Of course, various other forms of cheating have been common for hundreds of years, starting with simple plagiarism and ghost-written submissions. The internet has made it easier to find texts to copy or ghostwriters to hire — but modern technology has also brought us plagiarism-detection systems, which catch at least the simplest cases. Will we see effective LLM-detection software?

Read the rest of this entry »

Comments (16)

Alexa down, ChatGPT up?

Two recent developments seem to point in opposite directions. On one hand, voice assistants are increasingly seen as failures, and their makers are cutting back on R&D. On the other hand, there's widespread enthusiasm for the impressive capabilities of ChatGPT, including suggestions that it will take over internet search (Ben Cost, "Rise of the bots: ‘Scary’ AI ChatGPT could eliminate Google within 2 years", NY Post 12/6/2022), destroy writing education (Stephen Marche, "The College Essay is Dead", The Atlantic 12/6/2022), and more.

Read the rest of this entry »

Comments (20)

Spectral slices of overtone singing, animated

As part of my on-going exploration of the many ways in which F0 is not pitch and pitch is not F0, I did a little demo/experiment with a sample of Anna-Maria Hefele's "Polyphonic Overtone Singing" video:
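
The video is embedded in the full post. For concreteness, a "spectral slice" here just means the magnitude spectrum of one short analysis window; the sketch below (with a placeholder filename, not the actual audio from the post) shows the basic computation:

    import numpy as np
    from scipy.io import wavfile

    rate, x = wavfile.read("overtone_singing.wav")   # placeholder file, not the real sample
    if x.ndim > 1:
        x = x.mean(axis=1)                           # mix stereo down to mono
    N = int(0.05 * rate)                             # 50 ms analysis window
    start = int(5.0 * rate)                          # take the slice at t = 5 seconds
    frame = x[start:start + N] * np.hamming(N)
    freqs = np.fft.rfftfreq(N, d=1 / rate)
    spectrum_db = 20 * np.log10(np.abs(np.fft.rfft(frame)) + 1e-9)
    # Plotting spectrum_db against freqs gives one slice; stepping the window
    # through the file and redrawing gives the animation.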

Read the rest of this entry »

Comments (15)

Talking is like living

…and ending a sentence is like dying.

What do I mean by this weird and even creepy statement?

Short answer: Your probability of continuing to live through any given period is not constant; your risk of dying grows roughly exponentially as you get older. (Actuaries know this as the Gompertz-Makeham Law of Mortality, usually expressed in terms of your probability of dying.)

A generative model of this type, on a shorter time scale, is a surprisingly good fit to the distributions of speech- and silence-segment durations in speech, and also to the distribution of sentence lengths in text. A shockingly good fit, in most cases.
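
To make the idea concrete, here's a minimal generative sketch in Python: the hazard of stopping (ending the sentence) after each word is a constant plus an exponentially growing term, in the spirit of Gompertz-Makeham. The parameter values are made up for illustration, not fitted to any corpus:

    import numpy as np

    rng = np.random.default_rng(1)
    a, b, c = 0.01, 0.002, 0.25        # illustrative hazard parameters, not fitted values

    def sample_length(max_len=200):
        for k in range(1, max_len + 1):
            p_stop = min(1.0, a + b * np.exp(c * k))   # probability of stopping after word k
            if rng.random() < p_stop:
                return k
        return max_len

    lengths = [sample_length() for _ in range(10000)]
    print(f"mean {np.mean(lengths):.1f} words, 95th percentile {np.percentile(lengths, 95):.0f}")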

Long answer: See below, if you have the patience…

Read the rest of this entry »

Comments (15)

More on conversational dynamics

Following up on "The dynamics of talk maps" (9/30/2022), I created and parameterized such representations for the published CallHome conversations in Egyptian Arabic, American English, German, Japanese, Mandarin, and Spanish. The goal was mostly just to set up and debug an analysis pipeline, including the extraction of 14 first-guess parameters per conversation, on the way to analyzing the much larger set of much more diverse conversational data that's available.

But just for fun, I used t-SNE to reduce the 14 dimensions to 2 for visualization purposes. I didn't expect much, but some differences emerged in the distribution of points for conversations in the different languages:
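
The resulting scatter plot is in the full post. For readers who want to try something similar, here's a hedged sketch of the dimension-reduction step, assuming a hypothetical array of per-conversation parameters and a parallel list of language labels (neither the 14 parameters nor the file layout are specified here):

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    # Hypothetical inputs: an (n_conversations x 14) parameter matrix and language labels.
    features = np.load("callhome_parameters.npy")                  # placeholder filename
    langs = np.load("callhome_languages.npy", allow_pickle=True)   # placeholder filename

    coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)

    for lang in np.unique(langs):
        m = langs == lang
        plt.scatter(coords[m, 0], coords[m, 1], s=10, label=lang)
    plt.legend()
    plt.show()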


Read the rest of this entry »

Comments (3)

The dynamics of talk maps

Over the years, across many disciplines, there have been many approaches to the analysis of conversational dynamics. For glimpses of a few corners of this topic, see the list of related posts at the end of this one — today I want to sketch the beginnings of a new way of thinking about it.

Read the rest of this entry »

Comments (2)

Against physics

Or rather: Against the simplistic interpretation of physics-based abstractions as equal to more complex properties of the physical universe. And narrowing the focus further, it's a big mistake to analyze signals in terms of such abstractions, while pretending that we're analyzing the processes creating those signals, or our perceptions of those signals and processes.  This happens in many ways in many disciplines, but it's especially problematic in speech research.

The subject of today's post is one particular example, namely the use of "Harmonic to Noise Ratio" (HNR) as a measure of hoarseness and such-like aspects of voice quality. Very similar issues arise with all other acoustic measures of speech signals.

I'm not opposed to the use of such measures. I use them myself in research all the time. But there can be serious problems, and it's easy for things to go badly off the rails. For example, HNR  can be strongly affected by background noise, room acoustics, microphone frequency response, microphone placement, and so on. This might just add noise to your data. But if different subject groups are recorded in different places or different ways, you might get serious artefacts.
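
For readers who haven't met HNR: the usual textbook definition compares the energy attributable to the periodic (harmonic) part of a voiced frame to the remaining "noise", via the peak of the normalized autocorrelation. Here's a simplified sketch of that idea (not Praat's actual algorithm); note that additive background noise lowers the autocorrelation peak just as irregular phonation does, which is exactly the confound described above:

    import numpy as np

    def frame_hnr_db(frame, rate, f0_min=75.0, f0_max=500.0):
        """Rough autocorrelation-based HNR for one voiced frame (a simplification,
        not Praat's algorithm): HNR = 10*log10(r_max / (1 - r_max))."""
        frame = frame - frame.mean()
        ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        ac = ac / ac[0]                                   # normalized autocorrelation
        lo, hi = int(rate / f0_max), int(rate / f0_min)   # plausible pitch-period lags
        r_max = float(np.clip(ac[lo:hi].max(), 1e-6, 1 - 1e-6))
        return 10 * np.log10(r_max / (1 - r_max))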

Read the rest of this entry »

Comments (6)

Our Lady of the Highway: A linguistic mystery

Current text-to-speech systems are pretty good. Their output is almost always comprehensible, and often pretty natural-sounding. But there are still glitches.

This morning, Dick Margulis sent an example of one common problem: inconsistent (and often wrong) stressing of complex nominals:

We have a winding road that we drive with our Google Maps navigator on, to keep us from taking a wrong turn in the woods. We have noticed that "West Woods Road" is rendered with a few different stress patterns as we go from turn to turn, and we can't come up with a hypothesis explaining the variation. Attached is a recording. It's a few minutes long because that's how long the trip takes. The background hum is the car.

I've extracted and concatenated the 11 Google Maps instructions from the four minutes and five seconds of the attached recording:

Read the rest of this entry »

Comments (30)

Micro- Nano-Stylistic Variation

"Don't miss the most loved conference by Delphists like you!"

Philip Taylor wrote to complain about that phrase, which apparently arrived in an email advertisement:

"The most loved conference …" ? I would have written "The conference most loved …".

But his preference apparently disagrees, not only with the author of that flyer, but also with most other writers of English. And it's wonderful how easily we can now check such things. As Yogi Berra (may have) said, "Sometimes you can see a lot just by looking".
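
One crude way to run that kind of check on whatever plain-text corpus you have at hand (the post itself presumably drew on much larger resources, and "corpus.txt" below is just a placeholder):

    import re

    text = open("corpus.txt", encoding="utf-8").read().lower()   # placeholder corpus
    prenominal  = len(re.findall(r"\bmost loved conference\b", text))
    postnominal = len(re.findall(r"\bconference most loved\b", text))
    print(f"most loved conference: {prenominal}   conference most loved: {postnominal}")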

Read the rest of this entry »

Comments (26)

When more data makes things worse…

The mantra of machine learning, as Fred Jelinek used to say, is "The best data is more data" — because in many areas, there's a Long Tail of relevant cases that are hard to classify or predict without either a valid theory or enough examples.

But a recent meta-analysis of machine-learning work in digital medicine shows, convincingly, that more data can lead to poorer reported performance. The paper is Visar Berisha et al., "Digital medicine and the curse of dimensionality", npj Digital Medicine, 2021, and one of the pieces of evidence they present is the figure whose caption is reproduced below:

This analysis considers two types of models: (1) speech-based models for classifying between a control group and patients with a diagnosis of Alzheimer’s disease (Con vs. AD; blue plot) and (2) speech-based models for classifying between a control group and patients with other forms of cognitive impairment (Con vs. CI; red plot).
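
The figure itself is in the full post. To see one mechanism by which small samples plus many candidate features can inflate reported accuracy, here's a toy simulation (my illustration, not the paper's analysis): the features are pure noise, but selecting the "best" features on the full dataset before cross-validating leaks information, and the leak matters less as n grows:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    d = 200                                    # many candidate features
    for n in [40, 100, 400, 2000]:
        X = rng.normal(size=(n, d))            # noise features
        y = rng.integers(0, 2, size=n)         # labels unrelated to the features
        corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(d)])
        top = np.argsort(corr)[-10:]           # feature selection on ALL the data (the leak)
        acc = cross_val_score(LogisticRegression(max_iter=1000), X[:, top], y, cv=5).mean()
        print(f"n={n:5d}  reported accuracy = {acc:.2f}  (true skill: 0.50)")

At small n the "reported accuracy" typically lands well above chance, and it settles back toward 0.5 as n grows.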

Read the rest of this entry »

Comments (8)

Word frequency variation: elicit vs. illicit

In the comments on yesterday's post about a slip of the fingers or brain ("Elicit → illicit"), there was some discussion about which of the two words is more common.

Obviously, the answer to such questions depends on where you look.

So I looked in a bunch of places. Overall, illicit tends to be more common than elicit — but the relative frequency varies widely, and sometimes it's the other way round.
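
For instance, here's one of the simplest places to look, the Brown corpus as packaged with NLTK (just one of the "bunch of places", and a small one, so the counts are tiny):

    from collections import Counter
    import nltk
    nltk.download("brown", quiet=True)
    from nltk.corpus import brown

    counts = Counter(w.lower() for w in brown.words())
    for w in ["elicit", "elicits", "elicited", "eliciting", "illicit", "illicitly"]:
        print(f"{w:10s} {counts[w]}")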

Read the rest of this entry »

Comments (4)

COURTHOUHAING TOGET T ROCESS.WHE

HE HAS ALL THE SOU OF COURSE
0:05 AND LOADED, READTOO.K
0:11 TING
0:16 A TVERY CONFIDENT.CONWAY
0:21 COURTHOUHAING TOGET T ROCESS.WHE
0:28 COIDATE'
0:30 TTACUTION'S CATHATE'
0:36 SE.
0:36 CHCEN'T KNHA
0:37 TAER OFURDI

That's the start of the automatically-generated transcript on YouTube for "See George Conway's reaction to Trump's reported plan if he wins again", CNN 7/24/2022.

Read the rest of this entry »

Comments (3)