Archive for Computational linguistics

Coherence Quiz answers

As promised, the results of yesterday's little experiment on "Coherence of sentence sequences" are here.

A tabular summary:

Question   Correct      Wrong
 1         166 (98%)     4 (2%)
 2         135 (80%)    33 (20%)
 3         167 (99%)     2 (1%)
 4         158 (93%)    12 (7%)
 5         113 (67%)    56 (33%)
 6         152 (90%)    17 (10%)
 7         165 (97%)     5 (3%)
 8         115 (68%)    55 (32%)
 9         169 (99%)     1 (1%)
10         167 (98%)     3 (2%)
11         163 (96%)     7 (4%)
12         137 (81%)    32 (19%)

So the survey respondents (as a whole) guessed the original order of all twelve sentence-pairs correctly, though the margins varied widely: from about 2-to-1 on question 5 up to 169-to-1 on question 9. The overall percent correct was 89% (1807 of 2034 responses), though of course that figure depends on the particular mix of examples.

(The row totals differ, ranging from 168 to 170, because a couple of participants left some answers blank; there's probably a way to get Qualtrics to prevent that, but I didn't figure it out in time…)
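For the record, here's a minimal Python sketch (my own illustration, not anything from Qualtrics) reproducing the per-question percentages and the overall figure from the counts above:

```python
# Per-question (correct, wrong) counts, copied from the table above.
counts = [
    (166, 4), (135, 33), (167, 2), (158, 12), (113, 56), (152, 17),
    (165, 5), (115, 55), (169, 1), (167, 3), (163, 7), (137, 32),
]

for q, (right, wrong) in enumerate(counts, start=1):
    total = right + wrong  # 168-170, since a few answers were left blank
    print(f"Q{q:2d}: {right}/{total} = {right / total:.0%} correct")

all_right = sum(right for right, _ in counts)
all_total = sum(right + wrong for right, wrong in counts)
print(f"Overall: {all_right}/{all_total} = {all_right / all_total:.0%}")  # 1807/2034 = 89%
```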

Read the rest of this entry »

Comments (9)

Coherence of sentence sequences

Here are two successive sentences from The Wizard of Oz, presented in two different orders:

  1. "How strange it all is! But, comrades, what shall we do now?"
  2. "We must journey on until we find the road of yellow brick again," said Dorothy, "and then we can keep on to the Emerald City."
  1. "We must journey on until we find the road of yellow brick again," said Dorothy, "and then we can keep on to the Emerald City."
  2. "How strange it all is! But, comrades, what shall we do now?"

The first order (A) is easier to construe as a coherent sequence, because in that order sentence 2 answers a question posed by sentence 1. Order B could be rescued by a more complicated set of contextual assumptions or a more complicated theory of the interaction; but in fact it's order A that's the original.
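As a toy illustration of that reasoning (my own sketch, not part of the post or the survey), the question-then-answer cue can be codified and checked against the two orders:

```python
# Toy coherence cue (my illustration): a question followed by a statement
# is easier to read as coherent than the reverse ordering.

def _core(s: str) -> str:
    """Strip trailing quotation marks and whitespace."""
    return s.rstrip().rstrip('"').rstrip()

def question_then_answer(s1: str, s2: str) -> bool:
    """True if s1 ends in a question that s2 could be answering."""
    return _core(s1).endswith("?") and not _core(s2).endswith("?")

a = '"How strange it all is! But, comrades, what shall we do now?"'
b = ('"We must journey on until we find the road of yellow brick again," '
     'said Dorothy, "and then we can keep on to the Emerald City."')

print("order A is question-then-answer:", question_then_answer(a, b))  # True
print("order B is question-then-answer:", question_then_answer(b, a))  # False
```

A less toy-like version of the same idea would compare the probabilities that a language model assigns to the two orderings.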

Read the rest of this entry »

Comments (12)

The first conversing automaton

An article I'm writing led me to wonder when the idea of a conversing automaton first arose, or at least was first published. I'm ruling out magical creations like golems and divine statuary; brazen heads seem to have been either magical or created using arcane secrets of alchemy; and I don't know enough to evaluate the legend of King Mu and Yen Shih's automaton, whose conversational abilities are not clearly described in the texts I've found.

There are many early documented automata doing things like playing music, and plenty of Enlightenment philosophizing about what human abilities might or might not be clockwork-like, so I would have expected plenty of fictional conversing automata over the past four or five hundred years.

But apparently not: it's possible that the first real example was as late as 1907 or even 1938.

Read the rest of this entry »

Comments (21)

Sleepless in Samsung?

I'm spending a couple of days at the DARPA AI Colloquium (about which more later), and during yesterday's afternoon session I experienced an amusing conjunction of events. Pedro Szekely gave a nice presentation on "Advances in Natural Language Understanding", after which one of the questions from the audience was "Hasn't Google solved all these problems?" Meanwhile, during the session, I got a quasi-spam cell-phone call trying to recruit me for a medical study; since my (Google Fi) phone was turned off, the call went to voicemail, and Google helpfully offered me a text as well as an audio version of it.

The result illustrates one of the key ways that modern technology, including Google's, fails to solve all the problems of natural language understanding.

Read the rest of this entry »

Comments (8)

NLLP: bag-of-words semantics?

The First Workshop on Natural Legal Language Processing (NLLP) will be co-located with NAACL 2019. The phrase "natural legal language processing" in the title strikes me as oddly constructed, from a syntactic and semantic point of view, though I'm sure that NAACL attendees will interpret it easily as intended.

Let me explain.

Read the rest of this entry »

Comments (14)

Who's the sponsor?

A few weeks ago I attended the last afternoon of Scale By The Bay 2018 ("So much for Big Data", 11/18/2018), and as a result, this arrived today by email:

We had a blast at Scale by the Bay. We hope you did, too. As a sponsor, the organizer has shared your email with us. If you would like to receive messages from Xxxxxxxxx, please opt-in to our mailing list.

Read the rest of this entry »

Comments (6)

The literary Turing Test

Comments (12)

"Human parity" in machine translation

In May of 2015, I gave a talk at the Centre Cournot in Paris on the topic "Why Human Language Technology (almost) works", starting with a list of notable successes, including how well Google and Bing on-line translation did on the Centre Cournot's web site. But my theme required a few failures as well, and I found a spectacular set of examples when I tried a chapter-opening from a roman policier (detective novel) that I was reading (Yasmina Khadra, Le Dingue au Bistouri):

Il y a quatre choses que je déteste. Un: qu'on boive dans mon verre. Deux: qu'on se mouche dans un restaurant. Trois: qu'on me pose un lapin.

Google Translate: There are four things I hate. A: we drink in my glass. Two: we will fly in a restaurant. Three: I get asked a rabbit.

Bing Translate: There are four things that I hate. One: that one drink in my glass. Two: what we fly in a restaurant. Three: only asked me a rabbit.

Should be: There are four things I hate. One: that somebody drinks from my glass. Two: that somebody blows their nose in a restaurant. Three: that somebody stands me up.
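To make the failures concrete, here's a crude word-overlap check (my own sketch, not how anything was scored in the talk) comparing each engine's output above against the reference rendering; both engines keep the frame but miss the idioms:

```python
# My sketch (not from the talk): crude unigram overlap between each MT
# output above and the reference translation, as a rough error signal.

import re

def words(s: str) -> list[str]:
    return re.findall(r"[a-z']+", s.lower())

reference = ("There are four things I hate. One: that somebody drinks from my "
             "glass. Two: that somebody blows their nose in a restaurant. "
             "Three: that somebody stands me up.")

outputs = {
    "Google": "There are four things I hate. A: we drink in my glass. Two: we "
              "will fly in a restaurant. Three: I get asked a rabbit.",
    "Bing": "There are four things that I hate. One: that one drink in my "
            "glass. Two: what we fly in a restaurant. Three: only asked me a rabbit.",
}

ref_vocab = set(words(reference))
for engine, text in outputs.items():
    toks = words(text)
    hits = sum(tok in ref_vocab for tok in toks)
    print(f"{engine}: {hits}/{len(toks)} words also occur in the reference")
```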

Read the rest of this entry »

Comments (39)

Autoresponses

SMBC on the future of helpful Gmail, a few days ago:

Read the rest of this entry »

Comments (4)

LRNLP 2018

On Monday, I'm pursuing the quixotic enterprise of talking to an NLP workshop about phonetics.

LRNLP ("Language Resources for NLP") 2018 is a workshop associated with COLING 2018 in Santa Fe, NM. My abstract:

Semi-automatic analysis of digital speech collections is transforming the science of phonetics, and offers interesting opportunities to researchers in other fields. Convenient search and analysis of large published bodies of recordings, transcripts, metadata, and annotations – as much as three or four orders of magnitude larger than a few decades ago – has created a trend towards “corpus phonetics,” whose benefits include greatly increased researcher productivity, better coverage of variation in speech patterns, and essential support for reproducibility.

The results of this work include insight into theoretical questions at all levels of linguistic analysis, along with applications within phonetics itself and in fields as diverse as psychology, sociology, medicine, and poetics. Crucially, analytic inputs include annotation or categorization of speech recordings along many dimensions, from words and phrase structures to discourse structures, speaker attitudes, speaker demographics, and speech styles. Among the many near-term opportunities in this area we can single out the possibility of improving parsing algorithms by incorporating features from speech as well as text.

Due to semester-initial commitments at Penn, I won't be able to stay for COLING, but I'm looking forward to an interesting day of presentations at the workshop.


Comments (2)

"Yeah day go, baby"

Yesterday, while I was sitting in an interesting session at Speech Prosody 2018, I got a phone call that I didn't answer. The caller left a message that Google Voice transcribed this way:

Lowell is an installer sensor Grace call me. I'll pick it up. That was a break was thinking. Because you had to go to work this morning around, you know, my exact maybe go back to take the brake light. As you said you didn't feel quite right still cyber, even though I was still wearing the back. I might have something. Bye. What thank God. This f****** f*** m*********** train my f****** bank account. What I see your ex. What's your phone number? Yeah day go, baby. Does it have that switch that maybe that's what size over at light source? I'm open. Another f*****. I know what that's like I recognize. Yeah, I was.

Read the rest of this entry »

Comments (15)

AI Cyrano

Comments (2)

World disfluencies

Disfluency has been in the news recently, for two reasons: the deployment of filled pauses in an automated conversation by Google Duplex, and a cross-linguistic study of "slowing down" in speech production before nouns vs. verbs.

Lance Ulanoff, "Did Google Duplex just pass the Turing Test?", Medium 5/8/2018:

I think it was the first “Um.” That was the moment when I realized I was hearing something extraordinary: A computer carrying out a completely natural and very human-sounding conversation with a real person. And it wasn’t just a random talk. […]

Duplex made the call and, when someone at the salon picked up, the voice AI started the conversation with: “Hi, I’m calling to book a woman’s hair cut appointment for a client, um, I’m looking for something on May third?”

Frank Seifart et al., "Nouns slow down speech across structurally and culturally diverse languages", PNAS 2018:

When we speak, we unconsciously pronounce some words more slowly than others and sometimes pause. Such slowdown effects provide key evidence for human cognitive processes, reflecting increased planning load in speech production. Here, we study naturalistic speech from linguistically and culturally diverse populations from around the world. We show a robust tendency for slower speech before nouns as compared with verbs. Even though verbs may be more complex than nouns, nouns thus appear to require more planning, probably due to the new information they usually represent. This finding points to strong universals in how humans process language and manage referential information when communicating linguistically.
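As a concrete (and entirely invented) illustration of the kind of measurement involved, here's a sketch that compares word durations immediately before nouns and before verbs in a toy time-aligned transcript; the numbers are made up, but the comparison mirrors the slowdown effect the paper reports:

```python
# Toy sketch with invented numbers: average word duration just before
# nouns vs. just before verbs, in the spirit of Seifart et al.'s comparison.

# (word, part_of_speech, duration_in_seconds) -- hypothetical alignments
tokens = [
    ("we", "PRON", 0.12), ("saw", "VERB", 0.25), ("the", "DET", 0.21),
    ("river", "NOUN", 0.40), ("and", "CONJ", 0.10), ("it", "PRON", 0.09),
    ("flowed", "VERB", 0.33),
]

def mean(xs):
    return sum(xs) / len(xs) if xs else float("nan")

pre_noun, pre_verb = [], []
for (word, pos, dur), (_, next_pos, _) in zip(tokens, tokens[1:]):
    if next_pos == "NOUN":
        pre_noun.append(dur)   # duration of the word preceding a noun
    elif next_pos == "VERB":
        pre_verb.append(dur)   # duration of the word preceding a verb

print(f"mean duration before nouns: {mean(pre_noun):.3f}s")  # slower
print(f"mean duration before verbs: {mean(pre_verb):.3f}s")
```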

Read the rest of this entry »

Comments (12)