Archive for Computational linguistics
Syllable-scale wheelbarrow spectrogram
Following up on Saturday's post "Towards automated babble metrics", I thought I'd try the same technique on some adult speech, specifically William Carlos Williams reading his poem "The Red Wheelbarrow".
Why might an approach like this be useful? It's a way of visualizing syllable-scale frequency patterns (roughly 1 to 8 Hz) without having to do any phonetic segmentation or classification. And for early infant vocalizations, where speech-like sounds gradually mix in with coos and laughs and grunts and growls and fussing, it might be the basis for some summary statistics that would be useful in tracing a child's developmental trajectory.
Is it actually good for anything? I don't know. The basic idea was presented in a 1947 book as a way to visualize the performance of metered verse. Those experiments didn't really work, and the idea seems to have been abandoned afterwards — though the authors' premise was that verse "beats" should be exactly periodic in time, which was (and is) false. In contrast, my idea is that the method might let us characterize variously-inexact periodicities.
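For readers who want to try something similar, here is one minimal way to get such a display, assuming a mono waveform `x` sampled at `fs` Hz: take the amplitude envelope (rectify and low-pass), downsample it, and compute an ordinary spectrogram of the envelope with an analysis window long enough to resolve frequencies of a few Hz. The function name and all parameter values below are my own illustrative choices, not the exact pipeline behind the figures:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, spectrogram

def syllable_scale_spectrogram(x, fs, env_cut=30.0, env_fs=100, win_s=4.0):
    """Spectrogram of the amplitude envelope of x, keeping only 0-10 Hz,
    the band where syllable-scale (roughly 1-8 Hz) modulation lives."""
    # 1. Amplitude envelope: full-wave rectify, then low-pass below env_cut Hz.
    sos = butter(4, env_cut, btype="low", fs=fs, output="sos")
    env = sosfiltfilt(sos, np.abs(x))
    # 2. Downsample the envelope to env_fs Hz; it has no content above env_cut.
    env = env[:: int(fs // env_fs)]
    # 3. Spectrogram of the envelope, with a window of win_s seconds,
    #    long enough to resolve frequencies of a few Hz.
    nper = int(win_s * env_fs)
    f, t, S = spectrogram(env, fs=env_fs, nperseg=nper, noverlap=nper - env_fs // 10)
    keep = f <= 10.0
    return f[keep], t, S[keep, :]
```

As a sanity check, feeding it a pure tone amplitude-modulated at 4 Hz produces a display whose energy peaks near 4 Hz, which is what one would want to verify before trying it on a recording of Williams reading.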
Read the rest of this entry »
Permalink Comments off
Towards automated babble metrics
There are lots of good reasons to want to track the development of infant vocalizations — see e.g. Zwaigenbaum et al. "Clinical assessment and management of toddlers with suspected autism spectrum disorder" (2009). But existing methods are expensive and time-consuming — see e.g. Nyman and Lohmander, "Babbling in children with neurodevelopmental disability and validity of a simplified way of measuring canonical babbling ratio" (2018). (It's also unfortunately true that there's not yet any available dataset documenting the normal development of infant vocalizations from cooing and gooing to "canonical babbling", but that's another issue…)
People are starting to make and share extensive recordings of infant vocal development — see e.g. Frank et al., "A collaborative approach to infant research: Promoting reproducibility, best practices, and theory‐building" (2017). But automatic detection and classification of vocalization sources and types is still imperfect at best. And if we had reliable detection and classification methods, that would open up a new set of questions: Are the standard categories (e.g. "canonical babbling") really well defined and well separated? Do infant vocalizations of whatever type have measurable properties that would help to characterize and quantify normal or abnormal development?
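For concreteness, the canonical babbling ratio that Nyman and Lohmander discuss is standardly the number of canonical syllables divided by the total number of syllables. A toy sketch of the computation (the function and the count pairs below are invented for illustration; real use would start from hand or automatic annotation of recordings):

```python
def canonical_babbling_ratio(utterances):
    """Canonical babbling ratio: canonical syllables / total syllables.

    `utterances` is a sequence of (canonical_syllables, total_syllables)
    pairs, e.g. one pair per annotated utterance in a recording session.
    """
    canonical = sum(c for c, _ in utterances)
    total = sum(t for _, t in utterances)
    return canonical / total if total else 0.0
```

So a session annotated as `[(2, 5), (0, 3), (4, 6)]` yields 6/14 ≈ 0.43; a commonly cited threshold for having entered the canonical stage is a ratio of about 0.15.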
Read the rest of this entry »
"Unparalleled accuracy" == "Freud as a scrub woman"
A couple of years ago, in connection with the JSALT2017 summer workshop, I tried several commercial speech-to-text APIs on some clinical recordings, with very poor results. Recently I thought I'd try again, to see how things have progressed. After all, there have been recent claims of "human parity" in various speech-to-text applications, and (for example) Google's Cloud Speech-to-Text tells us that it will "Apply the most advanced deep-learning neural network algorithms to audio for speech recognition with unparalleled accuracy", and that "Cloud Speech-to-Text accuracy improves over time as Google improves the internal speech recognition technology used by Google products."
So I picked one of the better-quality recordings of neuropsychological test sessions that we analyzed during that 2017 workshop, and tried a few segments. Executive summary: general human parity in automatic speech-to-text is still a ways off, at least for inputs like these.
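The usual way to quantify "a ways off" is word error rate (WER): the word-level edit distance between the system's hypothesis and a reference transcript, divided by the length of the reference. The clinical transcripts themselves can't be reproduced here, so the example below uses made-up word strings; the implementation itself is the standard dynamic program:

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + insertions + deletions) / len(reference)."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("the red wheel barrow", "the red wheelbarrow")` is 0.5: one substitution plus one deletion against a four-word reference.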
Read the rest of this entry »
Ouch
Eliza Strickland, "How IBM Watson Overpromised and Underdelivered on AI Health Care", IEEE Spectrum 4/2/2019 (subhead: "After its triumph on Jeopardy!, IBM’s AI seemed poised to revolutionize medicine. Doctors are still waiting"):
In 2014, IBM opened swanky new headquarters for its artificial intelligence division, known as IBM Watson. Inside the glassy tower in lower Manhattan, IBMers can bring prospective clients and visiting journalists into the “immersion room,” which resembles a miniature planetarium. There, in the darkened space, visitors sit on swiveling stools while fancy graphics flash around the curved screens covering the walls. It’s the closest you can get, IBMers sometimes say, to being inside Watson’s electronic brain.
Read the rest of this entry »
Coherence Quiz answers
As promised, the results of yesterday's little experiment on "Coherence of sentence sequences" are here.
A tabular summary:
| Question | Correct | Wrong |
|---:|---:|---:|
| 1 | 166 (98%) | 4 (2%) |
| 2 | 135 (80%) | 33 (20%) |
| 3 | 167 (99%) | 2 (1%) |
| 4 | 158 (93%) | 12 (7%) |
| 5 | 113 (67%) | 56 (33%) |
| 6 | 152 (90%) | 17 (10%) |
| 7 | 165 (97%) | 5 (3%) |
| 8 | 115 (68%) | 55 (32%) |
| 9 | 169 (99%) | 1 (1%) |
| 10 | 167 (98%) | 3 (2%) |
| 11 | 163 (96%) | 7 (4%) |
| 12 | 137 (81%) | 32 (19%) |
So the survey respondents (as a whole) guessed the original order of all twelve sentence-pairs correctly — though the margins varied from 2-to-1 to 99-to-1. The overall percent correct was 89%, though of course that percentage will depend on the particular mix of examples.
(The counts don't all sum to the same row-wise value because a couple of participants left some answers blank — there's probably a way to get Qualtrics to prevent that, but I didn't figure it out in time…)
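For anyone who wants to check the aggregate figure, it follows directly from the counts in the table:

```python
# (correct, wrong) response counts for questions 1-12, from the table above
counts = [(166, 4), (135, 33), (167, 2), (158, 12), (113, 56), (152, 17),
          (165, 5), (115, 55), (169, 1), (167, 3), (163, 7), (137, 32)]

correct = sum(c for c, _ in counts)
total = sum(c + w for c, w in counts)
print(round(100 * correct / total))  # 89
```

That is, 1807 correct responses out of 2034 in all, or about 89%.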
Read the rest of this entry »
Coherence of sentence sequences
Here are two successive sentences from The Wizard of Oz, presented in two different orders:
First order:
- "How strange it all is! But, comrades, what shall we do now?"
- "We must journey on until we find the road of yellow brick again," said Dorothy, "and then we can keep on to the Emerald City."

Second order:
- "We must journey on until we find the road of yellow brick again," said Dorothy, "and then we can keep on to the Emerald City."
- "How strange it all is! But, comrades, what shall we do now?"
The first order is easier to construe as a coherent sequence, because in that order, sentence 2 answers a question posed by sentence 1. The second version could be rescued by a more complicated set of contextual assumptions or a more complicated theory of the interaction — but in fact it's the first version that's the original.
Read the rest of this entry »
The first conversing automaton
An article I'm writing led me to wonder when the idea of a conversing automaton first arose, or at least was first published. I'm ruling out magical creations like golems and divine statuary; brazen heads seem to have either been magical or created using arcane secrets of alchemy; I don't know enough to evaluate the legend of King Mu and Yen Shih's automaton, whose conversational abilities are not clearly described in the texts I've found.
There are many early documented automata doing things like playing music, and plenty of Enlightenment philosophizing about what human abilities might or might not be clockwork-like, so I would have thought that there would be plenty of fictional conversing automata over the past four or five hundred years.
But apparently not: it's possible that the first real example was as late as 1907 or even 1938.
Read the rest of this entry »
Sleepless in Samsung?
I'm spending a couple of days at the DARPA AI Colloquium — about which more later — and during yesterday's afternoon session, I experienced an amusing conjunction of events. Pedro Szekely gave a nice presentation on "Advances in Natural Language Understanding", after which one of the questions from the audience was "Hasn't Google solved all these problems?" Meanwhile, during the session, I got a quasi-spam cell-phone call trying to recruit me for a medical study, and since my (Google Fi) phone was turned off, it went to voicemail, and Google helpfully offered me a text as well as audio version of the call.
The result illustrates one of the key ways that modern technology, including Google's, fails to solve all the problems of natural language understanding.
Read the rest of this entry »
NLLP: bag-of-words semantics?
The First Workshop on Natural Legal Language Processing (NLLP) will be co-located with NAACL 2019. The phrase "natural legal language processing" in the title strikes me as oddly constructed, from a syntactic and semantic point of view, though I'm sure that NAACL attendees will interpret it easily as intended.
Let me explain.
Read the rest of this entry »
Who's the sponsor?
A few weeks ago I attended the last afternoon of Scale By The Bay 2018 ("So much for Big Data", 11/18/2018), and as a result, this arrived today by email:
We had a blast at Scale by the Bay. We hope you did, too. As a sponsor, the organizer has shared your email with us. If you would like to receive messages from Xxxxxxxxx, please opt-in to our mailing list.
Read the rest of this entry »
"Human parity" in machine translation
In May of 2015, I gave a talk at the Centre Cournot in Paris on the topic "Why Human Language Technology (almost) works", starting with a list of notable successes, including how well Google and Bing on-line translation did on the Centre Cournot's web site. But my theme required a few failures as well, and I found a spectacular set of examples when I tried a chapter-opening from a roman policier that I was reading (Yasmina Khadra, Le Dingue au Bistouri):
Il y a quatre choses que je déteste. Un: qu'on boive dans mon verre. Deux: qu'on se mouche dans un restaurant. Trois: qu'on me pose un lapin.
Google Translate: There are four things I hate. A: we drink in my glass. Two: we will fly in a restaurant. Three: I get asked a rabbit.
Bing Translate: There are four things that I hate. One: that one drink in my glass. Two: what we fly in a restaurant. Three: only asked me a rabbit.
Should be: There are four things I hate. One: that somebody drinks from my glass. Two: that somebody blows their nose in a restaurant. Three: that somebody stands me up.
Read the rest of this entry »