"They called for more structure"


From Kevin Knight's home page:

I think our approach to syntax in machine translation is best described in D. Barthelme's short story They called for more structure….

In case you didn't follow the link, and to guard against future link rot:

They called for more structure, then, so we brought in some big hairy four-by-fours from the back shed and nailed them into place with railroad spikes. This new city, they said, was going to be just jim-dandy, would make architects stutter, would make Chambers of Commerce burst into flame. We would have our own witch doctors, and strange gods aplenty, and site-specific sins, and humuhumunukunukuapuaa in the public fishbowls. We workers listened with our mouths agape. We had never heard anything like it. But we trusted our instincts and our paychecks, so we pressed on, bringing in color-coated steel from the back shed and anodized aluminum from the shed behind that. Oh radiant city! we said to ourselves, how we want you to be built! Workplace democracy was practiced on the job, and the clerk-of-the-works (who had known Wiwi Lönn in Finland) wore a little cap with a little feather, very jaunty. There was never any question of hanging back (although we noticed that our ID cards were of a color different from their ID cards); the exercise of our skills, and the promise of the city, were enough. By the light of the moon we counted our chisels and told stories of other building feats we have been involved in: Babel, Chandigarh, Brasilia, Taliesin.

At dawn each day, an eight-mile run, to condition ourselves for the implausible exploits ahead.

The enormous pumping station, clad in red Lego, at the point where the new river will be activated . . .

Areas of the city, they told us, had been designed to rot, fall into desuetude, return, in time, to open space. Perhaps, they said, fawns would one day romp there, on the crumbling brick. We were slightly skeptical about this part of the plan, but it was, after all, a plan, the ferocious integrity of the detailing impressed us all, and standing by the pens containing the fawns who would father the fawns who might someday romp on the crumbling brick, one could not help but notice one's chest bursting with anticipatory pride.

High in the air, working on a setback faced with alternating bands of gray and rose stone capped with grids of gray glass, we moistened our brows with the tails of our shirts, which had been dipped into a pleasing brine, lit new cigars, and saw the new city spread out beneath us, in the shape of the word FASTIGIUM. Not the name of the city, they told us, simply a set of letters selected for the elegance of the script. The little girl dead behind the rosebushes came back to life, and the passionate construction continued.


  1. Mara K said,

    February 22, 2015 @ 12:50 pm

    Despite being a grad student in linguistics, I have been taught very little about machine translation, so I don't see the connection. Is it that the workers act without really knowing what they're working toward and why? Or that the designers are creating a city out of nothing, going so far as to pull a river up from the depths of the earth to make the area habitable and to import fawns so that someday the city will revert to an idealized forest? Or is it the structure of the text itself, which is perfectly grammatical but doesn't always bother to be coherent or cohesive? (phrases like "clad in red Lego" and "the little girl dead behind the rosebushes came back to life" come to mind)

    [(myl) The joke (or at least the concept) is that statistical MT started by mapping words to words, and then word-strings to word-strings, and now has gradually been adding more and more structure, so that state-of-the-art MT systems do a complete parse and a light-weight semantic analysis of the input.

    But of course the attribution of structure and function is not always correct or coherent, and (I presume) Kevin thought that the surreal encounter with structure-building in Barthelme's story was an amusing metaphor for the attempt to introduce structure into statistical MT.]

  2. Stephen Nightingale said,

    February 22, 2015 @ 1:04 pm

    It's not necessarily intended to make sense. It is simply machine translated from the original Finnish.

  3. GH said,

    February 22, 2015 @ 1:34 pm

    Is it really? I didn't even realize, and still enjoyed it as a piece of writing in all its surrealism. I'm impressed, because my experience with Google Translate from Finnish is that it is almost incomprehensible.

  4. Dave K said,

    February 22, 2015 @ 1:50 pm

    Naah, it's in the original English. If a machine can come up with a prose style that good, it's time to go off the grid.

  5. Tom S. Fox said,

    February 22, 2015 @ 2:03 pm

    I have a feeling Stephen Nightingale just made that up.

  6. TonyK said,

    February 22, 2015 @ 2:08 pm

    I simply don't believe this is a machine translation. Somebody (Stephen Nightingale perhaps?) is pulling our legs.

  7. Theophylact said,

    February 22, 2015 @ 2:31 pm

    It's in Barthelme's collection Overnight to Many Distant Cities.

  8. Tommi Nieminen said,

    February 22, 2015 @ 5:19 pm

    Based on my own experiences with English to Finnish machine translation, most syntactic improvements to statistical systems are like rotten four-by-fours nailed with railroad spikes: you work with unreliable output from parsers and then you attach that to the MT architecture with crude and primitive methods. Predictably, most of the time you're better off with simpler systems. We know we need more structure, but at the moment it's not helping much.

  9. GH said,

    February 22, 2015 @ 6:06 pm

    Ha! Maybe that'll teach me not to be so gullible.

  10. Jason Eisner said,

    February 22, 2015 @ 10:55 pm

    "Areas of the city, they told us, had been designed to rot, fall into desuetude …":

    Keep hacking on your MT system, and don't worry about the unprincipled bits (such as the way that phrase pairs are extracted and counted). They're only temporary. In time these portions of the code will fall away as we return to the open space of linguistics.

    @Mara K! Your own linguistics department is among the pens containing the fawns who will father and mother the fawns who may someday romp on the crumbling brick!

  11. Mara K said,

    February 22, 2015 @ 10:58 pm

    @Jason I don't think I understand enough about machine translation for that to make sense. Does it mean we're programming the predecessors to real AI?

  12. Jason Eisner said,

    February 22, 2015 @ 11:52 pm

    @Mara K, yes, if by "real AI" you mean "neat AI."

    I used to read a lot of Barthelme, and found Kevin's link delightful. I hope that spelling out the analogy (as I see it) doesn't spoil the fun:

    The field of AI includes both neat and scruffy approaches. A neat system for MT would be a faithful implementation of some linguistic theory. Current leading MT systems are somewhat scruffy. They contain various hacks and shortcuts that help to produce a decent translation quickly.

    Researchers with a scruffy-AI mindset may think that's just fine. Either they suspect that brains themselves are much scruffier than linguists admit, or they have no opinion about brains and simply want to engineer a working product.

    A scruffy-AI researcher may want to enrich the current system to make more use of syntax, but will be perfectly happy to use a "big hairy four-by-four" approximation of syntax that is nailed onto the rest of the system with railroad spikes. The goal is to improve the end results by any expedient method.

    Other researchers working on the same system may be true believers in neat AI. They really wish that the system had been designed on clean linguistic and statistical principles from the ground up. Unfortunately such systems would be hard to build and have not worked as well in the past, so these neat-AI researchers settle for helping to nail syntax onto an existing scruffy system. They feel proud of themselves for using (more) linguistics. But does this route really lead toward the utopian system they dream of? Can the hybrid system be gradually made more principled, as the old hacks are gradually phased out? Or is that just a comforting fantasy that sustains them, as it sustains Barthelme's construction workers? "The exercise of our skills, and the promise of the city, were enough."

    [(myl) A beautiful exegesis.]

  13. Mara K said,

    February 23, 2015 @ 12:02 am

    @Jason this makes sense. Now what about the part where we discover that the city has a plan behind it, but that that plan is based around a nonsense word that simply looked pretty to the designers?

  14. Ethan said,

    February 23, 2015 @ 2:01 am

    @MaraK: "Fastigium" is not a nonsense word. Its meaning may be unknown to the narrator, but I take it either as irony on the part of the unknown city designers or a meta comment that breaks the fourth wall of the vignette and addresses the reader directly. To me it adds a connotation that the whole passage is sort of a fever dream. I leave it to others to speculate how much this contributed to Kevin Knight's choice of analogy.

  15. Mara K said,

    February 23, 2015 @ 2:13 am

    @ethan *looks up "fastigium"* It sounds to me more like the city is a disease. What does that say about machine translation?

  16. J.W. Brewer said,

    February 23, 2015 @ 7:26 am

    The failures and inhumane brutality of planned utopias like Brasilia and Chandigarh are contrasted with the unplanned but well-functioning complexity of natural language in James Scott's interesting book Seeing Like a State. From which I wonder if it follows that MT between two artificial rationally-designed languages (Esperanto to Lojban, or whatever) would be easier to implement via "neat" AI?

  17. Mara K said,

    February 23, 2015 @ 11:59 am

    @J.W the problem with "rationally-designed" languages is that once real people start using them they evolve and become messy and unplanned. Here is my intuition: MT between Loglan and the original Esperanto would probably be easier; MT between Lojban and today's Esperanto might be easier than, say, MT between French and Mandarin, which have both been evolving in unplanned ways for thousands of years, but not easy. Is this a good/correct intuition?

  18. John Lawler said,

    February 23, 2015 @ 12:43 pm

    A somewhat dated (ca 1998) account of the two CL/NLP approaches — which were quite distinct and even antagonistic at the time — can be found in the last two chapters of Using Computers in Linguistics.
    The first of these, by Jim Hoard (then with Boeing), is definitely the neat approach, and its title heralds the present synthesis.
    The second one, by Sam Bayer and his group at MITRE, is about how far you can get picking low-hanging fruit, and how you can build ladders. Plus it has a rather nice summary of the history of the field.

  19. J. W. Brewer said,

    February 23, 2015 @ 1:41 pm

    Mara K.: I don't know enough about either MT or the history of Esperanto to say, but that certainly seems like a *plausible* intuition. And perhaps as with Esperanto, once people actually started living in Brasilia they came up with various spontaneous/improvisational ways to make it more livable in practice than it would have been had the planners' aridly rationalistic vision not been diluted in that way. Although probably there's a difference in degree because using Esperanto in the first place probably self-selects for basic sympathy with the planners' original vision in a way that living in Brasilia probably doesn't.

    I think some of the early "neat AI" failures at getting computers to deal with natural language were trying to construct software around academic approaches to language (e.g. generative semantics) that have themselves subsequently fallen out of favor in linguistics departments. But the MT business may still be scruffy enough that inability-to-be-implemented-via-neat-AI may not be a good way to tell better academic theories of language from worse ones, because even the better ones may not (yet, at least) be susceptible of notably successful implementation in that context.

  20. Zizoz said,

    February 23, 2015 @ 9:58 pm

    What is the significance of the word "fastigium"? It seems to mean "gable", which is of no help to me…

  21. Mara K said,

    February 23, 2015 @ 10:18 pm

    @zizoz Dictionary.com says a fastigium is "the highest point of a fever or disease; the period of greatest development of an infection." This makes me think the city itself is a disease, and the moment of its completion is the peak of the infection.

  22. David J. Littleboy said,

    February 25, 2015 @ 9:12 am

    "A scruffy-AI researcher may want to enrich the current system to make more use of syntax, but will be perfectly happy to use a "big hairy four-by-four" approximation of syntax that is nailed onto the rest of the system with railroad spikes. The goal is to improve the end results by any expedient method."

    Hmm. That's not what I take "scruffy" to mean. "Scruffy" means having a cognitive theory of how people do things and attempting to implement that theory. Neat AI is more "scruffy" in your sense. In my sense of "scruffy", if you asked someone to name every museum they'd ever visited, they'd be slow and forget some. A "neat" AI program to respond to that question would be a simple database query, would be fast, and would never forget a museum. A program that had to work to justify traversing a given link (and had trouble finding links that might get to museum memories in the first place) would be slow, make mistakes, and get you a PhD from Schank in the early 1980s.

    Neat AI is about persuading a computer to do something impressive with no concern for whether or not people do it that way. E.g. Deep Blue, corpora-based MT, contemporary "machine learning" stuff. And pretty much everything else in AI for the last 25 years. As best I can tell, no one's doing scruffy AI any more.

    Or at least that's what the terms look like to the average bloke who's passed the AI quals under Roger Schank.

    (Note that there's some argument as to whether neural network models are neat or scruffy. I take them to be neat in the extreme, because despite being vaguely reminiscent of tangles of neurons, they're based on praying that intelligence will emerge from doing the same stupid thing over and over again in parallel with no model of what intelligence is. Folks who like neural networks think they're modeling brains. Go figure.)

  23. J. W. Brewer said,

    February 25, 2015 @ 11:40 am

    Schank was a very big name around campus (at least if you spent any time socializing with dweeby people interested in computer-related stuff) when I was an undergrad back in the '80's, and I recall hearing at the time (and wikipedia confirms it) that (understandably, since he was of the generation when "computer science" wasn't a thing that you could have gotten a Ph.D. in) his own doctorate was in linguistics, rather than in the more-typical-for-CS math or applied math or EE. However, I think my prior comment was inspired in part by memories of taking an AI-for-non-CS-majors class (not with Schank but with a younger colleague) the same semester I was taking a class on then-orthodox (but now badly out of date) transformational syntax. It seemed (at least with 20/20 hindsight on top of admittedly fuzzy recollection) like the model of natural language being used for examples in the AI class was, if not straight up generative semantics, at least heavily reliant on some very naive version of deep structure that the orthodox Chomskyans had already jettisoned a decade or so earlier (although they had not yet jettisoned the whole D v. S distinction).

    Although I guess it could be argued that getting a computer to simulate fluency in a highly impoverished version of a natural language (like the simplification characteristic of pidgins, but more so) that was simple and orderly enough to be accurately modeled by early/naive/superseded Chomskyanism would still have been a massively impressive accomplishment.

  24. Mara K said,

    February 25, 2015 @ 1:13 pm

    I don't even know what linguists have or haven't jettisoned, because my graduate syntax professor last semester insisted on teaching us the Chomskyan way. I took issue with the existence of covert movement and had an epic argument with her in the last month of class about whether anything that happened after Spellout was really syntax (I argued it should be semantics or pragmatics). Her response: "Sure, but let's do it this way for the sake of the argument." Argh!

    tl;dr Where can I, as a graduate student who expected to learn about cutting-edge developments in theoretical linguistics, go to learn about anything that's happened in linguistics since Minimalism? Someone please recommend papers.

  25. David J. Littleboy said,

    February 25, 2015 @ 6:25 pm

    "It seemed (at least with 20/20 hindsight on top of admittedly fuzzy recollection) like the model of natural language being used for examples in the AI class was, if not straight up generative semantics, …"

    That's exactly right. We thought the wrong side won the linguistics wars. (Basically, Chomsky thought that it wouldn't be possible to deal with meaning in language scientifically, so meaning had to be ignored. The scruffy AI party line was that language is about meaning, so meaning was the central concern, and we ended up reinventing generative semantics.) I took an intro course in linguistics from a generative semantics type (who posts here occasionally (hi, Larry!)) back then, and nearly everything he'd say would strike me as ridiculous, and I'd say so, and he'd say "you're right, but I'm teaching intro transformational, not generative semantics." The textbook used had one of the most egregious examples of academic dishonesty I had ever seen. As an exercise to show how useful phrase structure grammar was, it used fudged data (they used formal sentences when the ordinary forms wouldn't fit the phrase structure they were putting together) from Japanese. ROFL. But I learned to never, ever, even think of believing a linguist or anthropologist who tells you something about a language you don't speak. And that rule has proved correct several times over the last few years.
