Archive for Computational linguistics

Woo

Read the rest of this entry »

Comments (10)

Linguistic Science and Technology in China

I just spent a few days in China, mainly to attend an "International Workshop on Language Resource Construction: Theory, Methodology and Applications". This was the second event in a three-year program funded by a small grant from the "Penn China Research & Engagement Fund". That program's goals include "To develop new, or strengthen existing, institutional and faculty-to-faculty relationships with Chinese partners", and our proposal focused on "linguistic diversity in China, with specific emphasis on the documentation of variation in standard, regional and minority languages".

After last year's workshop at the Penn Wharton China Center, some Chinese colleagues (Zhifang Sui and Weidong Zhan from the Key Laboratory of Computational Linguistics and the Center for Chinese Linguistics at Peking University) suggested that we join them in co-sponsoring a two-day workshop this fall, with the first day at PKU and the second day at the PWCC. Here's the group photo from the first day (11/5/2017):

The growing strength of Chinese research in the various areas of linguistic science and technology has been clear for some time, and the presentations and discussions at this workshop made it clear that this work is poised for a further major increase in quantity and quality.

Read the rest of this entry »

Comments (11)

You need to know something

I'm happy to see that Google Translate is still turning (many types of) meaningless character sequences into spoken-word poetry. Repetitions of single hiragana characters are an especially reliable source — here's "You need to know something":


Read the rest of this entry »

Comments (15)

Cartoonist walks into a language lab…

Bob Mankoff gave a talk here in Madison not long ago.  You may recognize Mankoff as the cartoon editor for many years at the New Yorker magazine, who is now at Esquire. Mankoff’s job involved scanning about a thousand cartoons a week to find 15 or so to publish per issue. He did this for over 20 years, which is a lot of cartoons. More than 950 of his own appeared in the magazine as well. Mankoff has thought a lot about humor in general and cartoon humor in particular, and likes to talk and write about it too.

The Ted Talk
On “60 Minutes”
His Google talk
Documentary, "Very Semi-Serious"

What’s the Language Log connection?  Humor often involves language? New Yorker cartoons are usually captioned these days, with fewer in the lovely mute style of a William Steig.  A general theory of language use should be able to explain how cartoon captions, a genre of text, are understood. The cartoons illustrate (sic) the dependence of language comprehension on context (the one created by the drawing) and background knowledge (about, for example, rats running mazes, guys marooned on islands, St. Peter’s gate, corporate culture, New Yorkers). The popular Caption Contest is an image-labeling task, generating humorous labels for an incongruous scene.

But it’s Mankoff's excursions into research that are particularly interesting and Language Loggy.  Mankoff is the leading figure in Cartoon Science (CartSci), the application of modern research methods to questions about the generation, selection, and evaluation of New Yorker cartoons.

Read the rest of this entry »

Comments (11)

DolphinAttack

Guoming Zhang et al., "DolphinAttack: Inaudible Voice Commands", arXiv 8/31/2017:

In this work, we design a completely inaudible attack, DolphinAttack, that modulates voice commands on ultrasonic carriers (e.g., f > 20 kHz) to achieve inaudibility. By leveraging the nonlinearity of the microphone circuits, the modulated low-frequency audio commands can be successfully demodulated, recovered, and more importantly interpreted by the speech recognition systems. We validate DolphinAttack on popular speech recognition systems, including Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana and Alexa.
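The signal chain in the abstract (amplitude modulation onto an ultrasonic carrier, then demodulation by a nonlinearity in the microphone) can be sketched numerically. This is a toy illustration, not the paper's setup: the 25 kHz carrier, the 400 Hz tone standing in for a "voice command", the sample rate, and the quadratic mic model y = x + a*x^2 are all assumptions for demonstration.

```python
import math

FS = 192_000   # assumed sample rate, high enough to represent a 25 kHz carrier
FC = 25_000    # ultrasonic carrier frequency (f > 20 kHz, inaudible)
F_CMD = 400    # stand-in "voice command": a 400 Hz tone

def lowpass(sig, w=64):
    """Crude moving-average low-pass filter: strips the carrier, keeps baseband."""
    acc = sum(sig[:w])
    out = []
    for i in range(w, len(sig)):
        out.append(acc / w)
        acc += sig[i] - sig[i - w]
    return out

def corr(a, b):
    """Pearson correlation between two equal-rate signals."""
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den

def recover_correlation(alpha, n=8192, w=64):
    """How well the command survives a mic with nonlinearity y = x + alpha*x^2."""
    t = [i / FS for i in range(n)]
    cmd = [math.sin(2 * math.pi * F_CMD * ti) for ti in t]
    # amplitude-modulate the command onto the ultrasonic carrier
    tx = [(1 + 0.8 * c) * math.cos(2 * math.pi * FC * ti)
          for c, ti in zip(cmd, t)]
    mic = [x + alpha * x * x for x in tx]   # nonlinear microphone front end
    rec = lowpass(mic, w)
    return corr(rec, cmd[w // 2:])          # compensate the filter's delay
```

With the quadratic term present, the recovered baseband correlates strongly with the original command; with a perfectly linear microphone (alpha = 0) essentially nothing audible survives the low-pass filter, which is why the attack is inaudible to humans but not to the device.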

Read the rest of this entry »

Comments (11)

The power and the lactulose

The so-called Free Speech Rally that's about to start in Boston will probably be better attended, both by supporters and opponents, than the one that was organized by the same group back in May. But some of the featured speakers at the May rally, including "Augustus Invictus", have decided not to attend today's rerun. So I listened to the YouTube copy of the May rally speech by Austin Gillespie (Augustus's real or at least original name). And since this is Language Log and not Political Rhetoric Log (though surely political rhetoric is part of language), I'm going to focus on YouTube's efforts to provide "automatic captions".

Read the rest of this entry »

Comments (3)

English Verb-Particle Constructions

Lately I've been thinking about "optionality" as it relates to syntactic alternations. (In)famous cases include complementizer deletion ("I know that he is here" vs. "I know he is here") or embedded V2 in Scandinavian. For now let's consider the English verb-particle construction. The relative order of the particle and the object is "optional" in cases such as the following:

1a) "John picked up the book"
1b) "John picked the book up"

Either order is usually acceptable (with the exception of pronoun objects — although those too become acceptable under a focus reading…):

1c) "John put it back"
1d) *"John put back it"
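The restriction can be stated as a toy rule: generate both orders except verb + particle + pronoun. The pronoun list and function below are illustrative assumptions, not a real grammar, and they deliberately ignore the focus-reading exception noted above.

```python
# Toy sketch of the English verb-particle ordering restriction.
PRONOUNS = {"it", "them", "him", "her", "me", "us", "you"}

def particle_orders(verb, particle, obj):
    """Return the acceptable orderings of verb, particle, and object."""
    shifted = f"{verb} {particle} {obj}"   # "picked up the book"
    split = f"{verb} {obj} {particle}"     # "picked the book up"
    if obj.lower() in PRONOUNS:
        # *"put back it" is out (absent contrastive focus)
        return [split]
    return [shifted, split]
```

So `particle_orders("picked", "up", "the book")` yields both orders, while `particle_orders("put", "back", "it")` yields only "put it back".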

Read the rest of this entry »

Comments (20)

Gender, conversation, and significance

As I mentioned last month ("My summer", 6/22/2017), I'm spending six weeks in Pittsburgh at the 2017 Jelinek Summer Workshop on Speech and Language Technology (JSALT), as part of a group whose theme is "Enhancement and Analysis of Conversational Speech".

One of the things that I've been exploring is simple models of who talks when — a sort of Biggish Data reprise of Sacks, Schegloff & Jefferson, "A simplest systematics for the organization of turn-taking for conversation", Language 1974. A simple place to start is just the distribution of speech segment durations. And my first explorations of this issue turned up a case that's relevant to yesterday's discussion of "significance".
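As a minimal sketch of that starting point, here is one way to summarize speech segment durations from diarization output. The (speaker, start, end) tuples below are invented for illustration, not JSALT data.

```python
from collections import defaultdict

def duration_stats(segments):
    """Per-speaker speech-segment durations: count, total seconds, mean seconds."""
    durs = defaultdict(list)
    for spk, start, end in segments:
        durs[spk].append(end - start)
    return {spk: {"n": len(d),
                  "total": round(sum(d), 2),
                  "mean": round(sum(d) / len(d), 2)}
            for spk, d in durs.items()}

# Toy two-speaker "conversation": (speaker, start_s, end_s)
segments = [("A", 0.0, 2.5), ("B", 2.7, 3.1), ("A", 3.3, 7.0), ("B", 7.2, 12.4)]
stats = duration_stats(segments)
```

From a summary like this one can go on to histogram the durations (log scale is usual, since segment durations are heavily skewed) and compare the per-speaker distributions.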

Read the rest of this entry »

Comments (10)

Helpful Google

The marvels of modern natural language processing:

Michael Glazer, who sent in the example, wonders whether Google Translate has overdosed on old Boris and Natasha segments from Rocky and Bullwinkle:


Read the rest of this entry »

Comments (12)

Elephant semifics

Comments (11)

Do STT systems have "intriguing properties"?

In "Intriguing properties of neural networks" (2013), Christian Szegedy et al. point out that

… deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation…
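The idea can be illustrated on a deliberately tiny stand-in. The sketch below uses a linear score w·x rather than a deep network (the paper finds its perturbations with box-constrained L-BFGS); here a small step against the sign of the gradient, in the style of later "fast gradient sign" work, flips the classification. The weights, input, and step size are invented for the demonstration.

```python
def score(w, x):
    """Linear classifier score: positive = class 1, negative = class 0."""
    return sum(wi * xi for wi, xi in zip(w, x))

def perturb(w, x, eps):
    """Bounded adversarial step: for a linear score, d(score)/dx_i = w_i,
    so moving each coordinate by -eps * sign(w_i) lowers the score fastest."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.3, -0.2, 0.5]
x = [1.0, 1.0, 1.0]          # score(w, x) = 0.6, classified positive
x_adv = perturb(w, x, 0.7)   # each coordinate moves by at most 0.7
```

For deep networks the surprise in the paper is that the required perturbation can be imperceptibly small; in this toy linear case the step has to be large relative to the margin, but the mechanism (a small coordinated move against the gradient) is the same.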

For example:

Read the rest of this entry »

Comments (14)

"The eye of the needle … is being tried to be threaded…"

Adam Cancryn, "Why a GOP senator from Trump country opposes the Senate health bill", Politico 7/9/2017:

“Collaborating with Democrats on the other side, to me, is not an exercise in futility,” Capito said, noting that she has spoken with Manchin and other Democrats about tackling health care together. “That may be where we end up, and so be it.”

Speculating further than that, she added, is premature. Senate Republicans could quickly strike a deal, pass a bill and follow through on their seven-year repeal pledge before the month is out.

“I think that remains to be seen,” Capito said. “That’s the eye of the needle, and I think it’s being tried to be threaded. But I’m not sure.”

Read the rest of this entry »

Comments (6)

Amazon Echo Silver

Comments (13)