Universal Grammar haters
It's bizarre. Suddenly every piece of linguistic research is spun as a challenge to "universal grammar". The most recent example involves Ewa Dabrowska's interesting work on the linguistic correlates of large educational differences — Quentin Cooper did a segment on BBC 4 a couple of days ago about how this challenges the whole idea of an innate human propensity to learn and use language. (Dr. Dabrowska appears to be somewhat complicit in this spin, but that's another story.)
It's hard for me to explain how silly I think this argument is. It's like showing that there are hematologic effects of athletic training, and arguing that this calls into question the whole idea that blood physiology is an evolved system.
When I wrote about Ewa Dabrowska's work on the linguistic correlates of large educational differences, I carefully avoided the whole "universal grammar" aspect of her presentation ("'Unable to understand some basic sentences'?", 7/9/2010; "More on basic sentence interpretation", 7/12/2010; "The Wason selection test", 7/15/2010).
That was because I thought that her work, though quite interesting, has essentially no bearing on the question of whether or not our species has an evolved substrate for speech and language. At least one commenter disagreed, but on the whole, the discussion was mercifully free of innateness bombast on either side. But the BBC's listeners were not so lucky when Dabrowska was featured on Quentin Cooper's BBC Material World program, 7/29/2010.
The web site's synopsis of the Dabrowska segment was: "Are we born with built in grammar knowledge and if we're not, can we learn it?" And here's how Quentin Cooper starts out:
Seventy-odd years ago the writer William Somerset Maugham argued "it is necessary to know grammar, and it is better to write grammatically than not, but it is well to remember that grammar is common speech formulated."
If that is the case that grammar is a formula formalizing what we do naturally rather than a set of rules to control it, then why does common speech follow these patterns? What leads us to put together our words in these particular ways?
It's long ((and)) often been suggested that deep beneath all languages there is a "universal grammar" that our brains have evolved to use and which helps children to rapidly learn how to speak. But research about to be published in the journal Lingua has come up with evidence that seems to go against this theory, showing that some native English speakers who left school early have difficulties with even basic grammatical constructions.
Since the BBC (why???) withdraws audio access to its radio shows after a few days, here's the whole ten-minute segment in a more durable form:
It seems to me that Cooper's explicit argument — that the existence of individual differences correlated with training shows that there's no evolutionary substrate for language acquisition — is so silly that to state it is to refute it. You can tussle among yourselves in the comments if you don't agree.
I'll just mention another example of the same bizarre meme: the spin given to Lera Boroditsky's interesting work on how morphosyntactic differences between languages have a bit of an effect on how their speakers tend to remember certain experiences ("Boroditsky on Whorfian navigation and blame", 7/26/2010).
Since I still haven't had the time to offer a detailed analysis of her findings, which are Whorfian in the classic sense, for now I'll just point to Lane Greene's lovely summary:
She sticks mainly to pretty careful statements about things she's tested. If I had to sum up in plain English my conclusion would be not "language shapes thought" (much less "language restricts thought"), but probably "language nudges thought" (in certain circumstances).
I still plan to discuss Boroditsky's work at greater length, but I hope to ignore the whole "universal grammar" discussion as thoroughly as I can, because I think it's an irrelevant waste of time in this context. In support of this view, let me offer another analogy. Suppose we find that deaf people are somewhat more likely than hearing people to remember the individual facial characteristics of a stranger they pass on the street. This would be an interesting result, but would we spin it to the world as a challenge to the widely-held theory that there's an evolutionary substrate for the development of human face-recognition abilities?
Please note that I'm not arguing here for any particular epistemological theory, either in general or in specific cases. I'm just surprised at the intensity of what seems to me to be a transcendently silly belief: "if there are any effects of experience, there must not be any evolved predisposition".
Anton Cox said,
July 31, 2010 @ 7:41 am
Perhaps perversely, I am going to ignore the universal grammar part of the post and instead explain why (I think) the BBC deletes content after a while.
In part the boring reason is simply rights issues over music and third-party programmes. This mainly affects the music channels and the TV programmes online, but presumably it is simpler to have all content time-limited to avoid any (legally expensive) mistakes.
But there is probably another more general reason. Although adverts appear on international versions of the BBC website etc., it is almost entirely funded by a licence fee paid by all owners of TVs and video players in the UK. The anti-BBC brigade sometimes argue that it has too much clout and distorts the market, and so the BBC has to be ultra-sensitive to the charge that it provides services that unduly compete with the commercial market. Having a large permanent archive of online programmes is the kind of thing that would potentially cause them trouble. Again, while some of the programmes would seem unlikely to be provided elsewhere commercially, it is probably easier to have a blanket policy rather than to carefully assess each one in turn.
[(myl) I'm not sure that these theories are any more consistent with the overall pattern of facts than any of the other theories advanced in the discussion here. In particular, they don't seem to account for the dire warnings against daring to make a copy even for your own use.]
Mark F. said,
July 31, 2010 @ 9:14 am
How is the meme "Variations in language capacity disprove UG" the same as "Evidence for weak Whorfianism proves strong Whorfianism"?
Anyway, in Cooper's case I think he was trying to explain why the result should be surprising, and in the process conflated two hypotheses. There is a hypothesis that native speakers of ordinary intelligence have a full grasp of the language of their community. It's difficult to rigorously express this hypothesis, since every idiolect is a little different, but still it seems to me that it's a very strongly held hypothesis and that Dabrowska's results seem to violate it. Put differently, if I were going to formulate that hypothesis in terms of tests that I would expect less-educated speakers to pass, Dabrowska's tests would have been on that list. (But I'm not a linguist.)
I think that competency hypothesis has been put forth as evidence for UG, and anyway it seems like it's part of the same family of ideas. Meanwhile, UG is more famous as an area of controversy, and it gets explained more often. So I can easily see someone thinking of Dabrowska's results in the light of the better-known controversy, and it must be tempting to cast it that way.
Anton Cox said,
July 31, 2010 @ 9:33 am
"I'm not sure that these theories are any more consistent with the overall pattern of facts than any of the other theories advanced in the discussion here. In particular, it doesn't seem to account for the dire warnings against daring to make a copy even for your own use."
They seem pretty consistent to me. The BBC broadcasts a massive amount of material, and the iPlayer covers all radio and TV channels. Quite a lot of this involves rights issues with either music companies or foreign TV companies (so that I can watch Mad Men, for example). Checking which programmes, or which parts of programmes, do or don't have contractual reasons for restriction is a non-trivial task. And I understand that there were massive issues with the music industry about personal-use recording which had to be resolved before the iPlayer could be launched. So the BBC errs on the side of safety and has a blanket policy.
And there is the problem of unintended consequences. The BBC Philharmonic had a Beethoven symphony cycle available free online a few years ago to download. There were well over a million downloads. This caused great consternation in the record industry that people would not buy commercial recordings, and I believe that the BBC agreed not to repeat the experiment.
In the thread myl links to above, he notes that the BBC is a quasi-monopoly. This is precisely why their contract writers have to err on the side of weakness when dealing with the many independent contractors who now create much of the BBC radio and TV content. Otherwise they are castigated for abusing their dominant market position. (And why do they subcontract out production? – well, political pressures for "efficiency savings" about a decade or so ago led to various "markets" being introduced…)
Debbie said,
July 31, 2010 @ 10:10 am
Perhaps the children who left school early did so because they had difficulty grasping language, which in turn impacted their ability to understand. It could be that being unable to understand basic language construction made further understanding difficult, frustrating students who then opted to leave school. I see this argument as just as logical as the one suggested in the blog, so I'd have to agree with your final statement.
language hat said,
July 31, 2010 @ 10:10 am
I, on the other hand, welcome the new wave of anti-"universal grammar" spinners.
Geoffrey K. Pullum said,
July 31, 2010 @ 10:18 am
I'm quite prepared to treat claims about an innate UG very critically, and to demand stronger evidence for it — that is, I'm not especially a fan of linguistic nativism; but I'm also not a diehard UG-hater, and I'm just as inclined as Mark is to think that this recent fashion for spinning results on language as anti-UG findings, even when addressing an entirely non-specialist general public, is shallow, unconvincing, and rather silly. So let me make just one comment from a neutral standpoint about what Mark says above: he is much more right about the Whorf-good-therefore-UG-bad argument (the Boroditsky line) than about Dabrowska's. Quentin Cooper's remarks could be interpreted as tacitly adopting this as his argument:
1. Deep beneath all languages there is a "universal grammar" that our brains have evolved to use and which helps children to rapidly learn how to speak. [Hypothesis to be shown false.]
2. If 1 holds, then children will need no grammar instruction in order to command enough basic constructions to speak and understand the way others do, and individual differences in education level should make no difference. [Makes explicit an obvious corollary of 1.]
3. Some native English speakers who left school early have difficulties with even basic grammatical constructions, i.e., individual differences in education level do make a difference. [Empirical observation.]
4. Therefore 2 is false; and hence 1 is false as well. [Conclusion, from 2 and 3 by modus tollens.]
I'm not endorsing that (other commenters will read carelessly and will think that I am, so let me say it in boldface: I am not endorsing the above argument), but it's formally valid if you buy the step from 1 to 2, and I don't think it's self-evidently ridiculous, if the rather vaguely stated premises are charitably construed.
Henning Makholm said,
July 31, 2010 @ 10:57 am
"There is a hypothesis that native speakers of ordinary intelligence have a full grasp of the language of their community."
That's not so much a hypothesis as a definition. A language is what its native speakers grasp, neither more nor less.
Rodger C said,
July 31, 2010 @ 11:07 am
To advert to the topic of an earlier thread: If language didn't affect thought, we wouldn't be infested with binaries and obliged to deconstruct them.
A mathematician said,
July 31, 2010 @ 11:10 am
The BBC doesn't withdraw access to all its radio shows. There are currently 477 episodes of the fabulous In Our Time available, which could keep you busy for a while.
Jarek Weckwerth said,
July 31, 2010 @ 12:00 pm
@H Makholm: A language is what its native speakers grasp, neither more nor less.
I think you'll find that the point here is that speakers vary in exactly how much they grasp. Which leads to the rather obvious problem with these kinds of "shared competence"-based definitions of "a" language. Is it what all of its speakers grasp/share, to the exclusion of the fancier bits, as displayed on e.g. web forums frequented by highly-proficient users? Or is it what those highly-proficient users grasp, with the implication that the less-proficient ones do not in fact have complete command of the language?
Old story. (Not to mention the rather shaky character of the term "native speaker"…)
Leonardo Boiko said,
July 31, 2010 @ 1:01 pm
Perhaps journalists got the idea from the Pirahã thing? Just substitute anything for Pirahã and presto, you can reuse the popular “Chomsky was wrong” story.
John Cowan said,
July 31, 2010 @ 1:13 pm
GKP: Actually, that argument is not formally valid either, because it equivocates on "children" vs. "some children". [Be fair, John. I explicitly said "if the rather vaguely stated premises are charitably construed". I was assuming we turn a blind eye to sloppiness on points like whether checking out a few children bears on a general claim about children. Try to defocus your eyes and think more fuzzily, John.—GKP] Statements 1 and 2 may be true of "children" (most children, the bulk of children) without being true of each and every child individually. Humans have evolved to have (terminal) hair on their heads and a few other spots, the existence of some people with hypertrichosis notwithstanding. It is perfectly possible that almost all children have a working "UG machine" that helps them learn to speak, whereas others have to use their general cognitive capacities, either exclusively or partly, in order to do so.
This is clearly true of facial recognition. As I posted before, because I have congenital prosopagnosia, I have to use general cognition to remember faces, and I'm extremely poor at it. From my point of view, the 98% of you with working "FR machines" have what amounts to a superpower: one glance and you know who someone is (though you may not remember their name), whereas for me, human faces are about as memorable as stones.
Dominik Lukes said,
July 31, 2010 @ 3:31 pm
As a self-confessed UG hater, I think this is a good trend. I hate the UG hypothesis not for what it is but for the damage it's done to how people look at practical problems of language usage. It seems to tap into our "innate" fascination with psychobabble, and folk theorists of language seem to flock to it like middle-class women to horoscopes. Once you know about UG, every "fascinating" thought you once idly entertained about language fits into it. I've had so many people spout irrelevant nonsense at me (most recently someone asking if the phonological awareness limitations of dyslexics are due to a fault in UG) that my initial indifference has turned to outright hostility.
So rather than complain that the problem of UG is treated superficially, I'm glad to see that those who previously thought it was gospel without understanding any of it may now treat it with a bit more suspicion, without really understanding the reason for that either.
The problem with UG is not whether it's right or wrong but that it is irrelevant to pretty much any practical issue to do with language. Not that I'm opposed to highly abstract debates but I suspect UG's popularity outside of linguistics circles is not due to its internal consistency but rather to its association with the only linguist anybody can name.
I personally think that Dabrowska's (and others') whole body of research (not just the paper under discussion) offers a serious challenge to formalist treatments of language in general so it's nice to see it presented that way in popular media. It may force the largely insular UG community to have to answer tougher questions from the heretofore uncritical public.
~flow said,
July 31, 2010 @ 3:46 pm
@ the BBC conundrum: over here in germany, tv fee payers now pay people at the public tv and radio stations to sift through their publicly funded websites to erase publicly funded content, so as not to endanger commercial offerings. so we now pay for not being able to view content. can they please either cancel those fees or else let us keep what we have paid for, thank you?
on youtube, two lo-res videos featuring the 1971 appearance of singer daliah lavi on german tv were taken offline—both parts of it, the game show / interview part, and her live presentation of her song jerusalem, which seems to indicate it was not BigMusic Inc who pressed youtube, but likely some publicly financed interest protector.
i mean, we all paid for that show, it was broadcast (thus 'published', 'made public') to millions, and **forty** years later we are still prohibited from displaying a badass-small flaky copy of a few minutes of it because someone claims they have the 'rights' on that content? say what? madness.
Stephen Jones said,
July 31, 2010 @ 4:39 pm
But it was the blessed Lera that wrote the whole article. She's the one giving the spin.
What kind of hold does this woman have on you, Mark? She's not sticking carefully to what she's tested in the article. And she doesn't even describe her own work accurately.
Stephen Jones said,
July 31, 2010 @ 4:52 pm
The silly little tests with dogs in baskets merely showed the researchers didn't know how to get their research stuff right.
I suspect the research to be published in Lingua is of the same quality; that is, the relationship between the findings and the conclusions is based on hope rather than hard logic.
marie-lucie said,
July 31, 2010 @ 5:25 pm
I was going to leave a comment but first read the first post referred to (July 9), in which most of the commenters made the points I was going to make (ambiguous pictures, odd situations oddly verbalized, socio-dialectal differences, and the fact that the "basic" sentences with "every" and "each" were not basic at all).
My own discomfort with UG is that it is based on English structure – a UG first presented by a Japanese or Georgian or Algonquian-speaking linguist might have been quite different – and that it leads linguists not to listen to what people actually say but to what they can be made to say with a bit of ingenuity, whether the result is natural or grammatical for that language or not.
I recently attended a presentation which centered on some syntactic structures in a language I know quite well, or rather, on translations into that language of complex English sentences, for which the consultant had obviously tried to please the linguist by coming up with sentences that were quite ungrammatical in her language – while she would have been quite capable of describing the (odd) situations presented if she had responded naturally with the (differently structured) resources of her own language.
This type of problem is one that linguists should always be aware of: a "wonderful" consultant who can always be counted on to come up with a translation may be linguistically imaginative rather than displaying models of her native grammatical competence. The modalities of thus adapting to the English structures could be a valid subject of study, if recognized by the linguist, but it is very misleading to describe such adaptations as spontaneous utterances typical of the structure of the language. Sentences thus obtained, which have no parallel among those naturally occurring in spontaneous utterances or in texts, should be regarded as highly suspect, especially if some of their features are quite at odds with those independently described in works on the same language. Some linguists have written about this real potential pitfall in fieldwork with bilingual consultants, and competent translators between any two languages are quite aware of the problem, but the UG people may not be, especially if they try complex sentences on their consultants without having a firm grasp of the simpler ones in the language they claim to be studying.
Sili said,
July 31, 2010 @ 5:47 pm
No it isn't. (Bizarre, that is.) The operative word is "spun".
The Press™ hates consensus and loves to root for the underdog. Hence if UG is perceived to be somehow generally accepted (whether this is true or not), every single piece on anything related to linguistics has to be framed as the maverick standing up against Big Grammar™. Added to that, we need to have two sides to every story – and they need to be given equal weight in the name of balance™.
That's why you'll see headlines in Jesus font exclaiming how the Large Hardon Collider is gonna kill the Pope. String Theory cannot possibly be true because so many people work on it. And of course vaccinations cause autism.
I'd add in Creationism and AGW denialism, but those two issues are further driven by ideological (and financial) interests in suppressing the truth.
Oh, and of course Noam Chomsky is an arse, so of course UG is wrong. It's just like Al Gore and global warming.
[(myl) OK. So it's culturally predictable. It's still logically bizarre.]
John Roth said,
July 31, 2010 @ 6:11 pm
Given the many more informed comments, I've only got one thing to say: I'm surprised at equating "universal grammar" with "an evolved substrate for speech and language." I usually think of the term "universal grammar" in Chomskyian terms, with Transformational Grammar having been derided by Pinker as "cut and paste".
To me, Chomskyian U.G. was never likely and has become less so over the years, while the notion of some kind of evolved substrate for speech and language seems almost too obvious to be discussable.
We might be seeing the hyenas gathering for the corpse of Transformational Grammar. If this is in fact what's happening, it would be nice to be clear about it.
Jesús Sanchis said,
July 31, 2010 @ 6:35 pm
I totally agree with Dominik Lukes. I would only add a couple of things:
As I wrote in a recent comment in 'A Replicated Typo', the whole 'debate' about UG and around Chomsky is mainly 'an American thing', ultimately a consequence of the cold war period, when the USA needed a leading figure in every scientific domain. Nowadays the debate persists in American academia but the rest of the world couldn't care less. It's a bit like talking about UFO's (another phenomenon that was born in those years): nobody is interested in them any more.
Human language is just one aspect of a more complex phenomenon: human communication. A 'grammar' is an even poorer concept, as it just reflects the result of a secondary introspection into how languages work. And then we have UG: a gratuitous intellectual effort that explains nothing.
the other Mark P said,
July 31, 2010 @ 7:57 pm
Oh, and of course Noam Chomsky is an arse, so of course UG is wrong. It's just like Al Gore and global warming.
Except, of course, that Chomsky was a massively gifted linguist and Al Gore has never been even an average scientist.
And Chomsky had to battle massive hostility in the early days, whereas Gore arrived after the groundbreaking work was done and could bask in the glory of a Nobel Prize.
And Chomsky engages in debate via scholarly output, but Al Gore prefers to pontificate from on high (to be fair he has no choice, since he has no actual science).
How can you even begin to compare a real seeker for knowledge with a puffed-up politician?
Bill Findlay said,
July 31, 2010 @ 8:53 pm
The program was on BBC *Radio* 4.
BBC4, tout court, is a TV channel.
marie-lucie said,
July 31, 2010 @ 9:01 pm
Chomsky had to battle massive hostility in the early days,
For a well-documented contrary opinion, see E.F.K. Koerner's Toward a history of American linguistics (Benjamins), ch. 8 "The 'Chomskyan revolution' and its historiography".
Sili said,
July 31, 2010 @ 10:00 pm
1) I was trying to demonstrate the invalidity of using ad hominem to dismiss an argument.
2) I am not aware of Al Gore ever claiming to be a scientist. It should be patently obvious that he's a science advocate, trying to use his knowledge of the political beast to guide good science through the maze that is lobbyism.
3) How can you call Gore a "puffed-up politician" in the same breath as praising Chomsky, the public face of Anarchism in today's US?
Mr. Fnortner said,
July 31, 2010 @ 10:31 pm
Is no one else intrigued by the Large Hadron Collider and the dire threat to the Pope? What does Sili know that the Vatican should?
[Read Dan Brown's Angels and Demons! It's real!—GKP]
Neal Goldfarb said,
July 31, 2010 @ 10:33 pm
While the attack on UG continues within linguistics and cognitive science more generally (I'm talking here about the real attack, not the "spin" attack), theories building on UG are popping up in the field of moral psychology and in legal academia.
For example, there is the theory of Universal Moral Grammar, whose most well-known proponents are Marc Hauser (yes, that Marc Hauser) and a law professor at Georgetown University named John Mikhail. And building on the theory of UMG, there are law-review articles like Is There a Law Instinct? by Michael Guttentag.
Note the assumption, running through Guttentag's abstract, that UG is basically an Established Fact.
By the way, if you like the title of Guttentag's article, you'll love Mikhail's unpublished paper Aspects of the Theory of Moral Cognition: Investigating Intuitive Knowledge of the Prohibition of Intentional Battery and the Principle of Double Effect (available here).
elinar said,
August 1, 2010 @ 4:35 am
As far as I know, Dabrowska and co. do not try to challenge the idea of “an innate human propensity to learn and use language”, but the idea that there is an autonomous UG and that the acquisition of grammar is unrelated to general socio-cognitive skills and the quality and quantity of linguistic input.
This type of research doesn’t ‘refute’ UG, but could be said to provide some further evidence for the usage-based or cognitive-functional model of language acquisition.
In any case, ‘What kind of evidence COULD refute the UG hypothesis?’, to quote the title of a paper by Michael Tomasello.
Of course Dabrowska and co’s research has a bearing on the UG question. Uniform linguistic competence has been one of the main arguments for UG, and, as far as I’m aware, is still entrenched in introductory and popular books about linguistics. It is a kind of ‘zombie axiom’ that seems to have become a tenet of folk linguistics as well.
Some people here seem to be suggesting that no serious person believes in this axiom. I’m glad if that is indeed the case. So presumably it is no longer drummed into students that “children learn their native language with remarkable speed and uniformity, regardless of the external circumstances”.
I agree with Dominik Lukes that it is up to the ‘UG community’ to sharpen up their act and tell us exactly what this UG is and why we need it.
Dominik Lukes said,
August 1, 2010 @ 6:54 am
Something about the statement "Chomsky was a massively gifted linguist" by the other Mark P rubbed me the wrong way. I was trying to figure out in writing what the reason for my discomfort might be, but it got a bit long and a bit too off topic, so I wrote a separate blog post about it: http://metaphorhacker.net/2010/08/why-chomsky-doesnt-count-as-a-gifted-linguist/
Dunx said,
August 1, 2010 @ 9:22 am
Further to the remarks about the BBC deleting its content after seven days, Material World is one of the programmes which continues to be available indefinitely after broadcast. However, knowing which programmes are going to be available beyond a week is not easy, and most of them do expire (much to my annoyance also).
Topher said,
August 1, 2010 @ 12:52 pm
1. Deep beneath all languages there is a "universal grammar" that our brains have evolved to use and which helps children to rapidly learn how to speak. [Hypothesis to be shown false.]
2. If 1 holds, then children will need no grammar instruction in order to command enough basic constructions to speak and understand the way others do, and individual differences in education level should make no difference. [Makes explicit an obvious corollary of 1.]
Except that 2 is not only not an "obvious corollary" of 1, it is not a corollary of 1 at all. The deduction of 2 from 1 requires the further assumption that "deep" in 1 actually means "shallow" so that the deep structures are almost identical to the surface grammatical structures and that there is no learning necessary in connecting those structures together.
An obvious corollary of that assumption is that all languages will have almost identical grammars. This is obviously false, and no linguist in their right mind would claim otherwise. It is the fact that the version of UG disproven by these results (assuming that they are valid) is such a complete strawman, with no relation to any serious version of UG theory, that justifies Mark's statement that the claim of UG falsification "is so silly that to state it is to refute it."
For what it is worth, I should mention that the statement of UG in 1 doesn't include the only part of the general UG hypothesis that is in any serious doubt (and which I am agnostic about). The results of learning theory say quite unambiguously that, leaving out the implied evolutionary purpose it contains, either 1 is correct or the brain is capable of magically performing feats transcending what is mathematically possible for any physical mechanism. The issue is whether the deep "universal grammar" behind grammar can reasonably be described as grammar, and whether it is in any way specialized for language. For language to have evolved, there must be severe limits on the forms it can take, to allow its acquisition. What we don't know is whether those restrictions co-evolved with language or pre-existed as part of the general cognitive arsenal.
Topher
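To make the learnability point concrete, here is a minimal Python sketch of Gold-style "identification in the limit". Everything in it is invented for illustration (the toy target language, the three-grammar "innate" hypothesis class, and the learner names); it is not the construction from Gold's paper, and it has nothing to do with the Lingua study. A learner restricted in advance to a small hypothesis class settles on a correct grammar and stays there; a learner with no restriction, which conjectures "exactly the strings seen so far", revises its guess on every new example and so never identifies the infinite target language.

```python
# Toy illustration of Gold-style "identification in the limit". All names and
# the three-grammar hypothesis class are invented for this sketch; this is not
# the construction from Gold (1967) or from the Lingua paper.
from itertools import islice

def target_language():
    """Enumerate the infinite target language {a^n b^n : n >= 1}."""
    n = 1
    while True:
        yield "a" * n + "b" * n
        n += 1

# An "innate" hypothesis class: three candidate grammars, each given as a
# membership test over strings.
HYPOTHESES = {
    "a^n":     lambda s: len(s) > 0 and set(s) == {"a"},
    "a^n b^n": lambda s: len(s) > 0 and len(s) % 2 == 0
                         and s == "a" * (len(s) // 2) + "b" * (len(s) // 2),
    "(ab)^n":  lambda s: len(s) > 0 and s == "ab" * (len(s) // 2),
}

def constrained_learner(stream):
    """Guess the first innate hypothesis consistent with everything seen."""
    seen = []
    for example in stream:
        seen.append(example)
        yield next((name for name, test in HYPOTHESES.items()
                    if all(test(s) for s in seen)), None)

def unconstrained_learner(stream):
    """No innate restriction: conjecture 'exactly the strings seen so far'."""
    seen = set()
    for example in stream:
        seen.add(example)
        yield frozenset(seen)

examples = list(islice(target_language(), 6))
print(list(constrained_learner(examples)))
# -> ['a^n b^n', 'a^n b^n', ...]: locks on after one example and never changes.
print([len(g) for g in unconstrained_learner(examples)])
# -> [1, 2, 3, 4, 5, 6]: the guess changes on every example, forever.
```

The unconstrained learner is never wrong about the data it has already seen, yet it can never stabilize; the prior restriction on the hypothesis class is what buys convergence, which is the sense in which innate limitations make learning possible.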
Sili said,
August 1, 2010 @ 12:59 pm
GAH! I swear I've never read Brown! I musta absorbed the meme passively somehow. I honestly just wanted to jazz up the Doomsdayness to be even more over the top.
Rodger C said,
August 1, 2010 @ 1:02 pm
@Sili: Was "Large Hardon Collider" a typo or deliberate? In either case I'm adopting it.
Michael Rank said,
August 1, 2010 @ 2:55 pm
I'm a journalist, not a professional linguist (although I did study linguistics as part of my degree), but what Dabrowska seems to be saying is that some (what proportion? does the Lingua article say?) less educated people can't (always?) match "The soldier hit the sailor"/"The soldier was hit by the sailor" with pictures depicting these events. This is surely totally different from saying they don't know the difference between these two sentences, which seems extremely unlikely to me. And would it make any difference if it were e.g. man/woman or man/car rather than soldier/sailor…?
Sili said,
August 1, 2010 @ 5:12 pm
"Hardon" was deliberate, and I think the source for my aiming it at the papacy rather than the world.
elinar said,
August 2, 2010 @ 2:51 am
I have a query about the current thinking on language acquisition/UG.
This is the ‘old’ story I’ve been told, and still see repeated in various places:
“Children acquire language spontaneously, and native speakers converge on the same grammar, regardless of the quality of the linguistic data they are exposed to. These facts can be explained if we assume that children are innately equipped with a UG”.
Is this now considered to be a very silly thing to argue; and if so, what is the more sophisticated view that all ‘sane’ linguists subscribe to? That children don’t converge on the same grammar, and syntactic competence is related to the quality of linguistic input and the level of education? And these facts can be explained if we assume what?
I would be very grateful if somebody could shed some light on this.
Dr Spouse said,
August 2, 2010 @ 6:12 am
Apologies for posting about the radio programme in the previous comment without reading this post first (I'm not a regular reader).
As I said in my other comment, though, good luck publishing in quite a few linguistics journals if you don't subscribe to UG. It's alive, well, and being heavily debated. Same as all other kinds of evolutionary psychology.
John Cowan said,
August 2, 2010 @ 8:48 am
Linguistics seems to go through intellectual phases about a generation ahead of everybody else; thus the rise of Structuralism (later renamed, with breathtaking arrogance, Theory) contemporaneous with the fall of (Bloomfieldian) structuralism. Now it's happening with UG.
Achim said,
August 2, 2010 @ 10:05 am
Topher:
This quote sums up rather nicely what I believe to have understood about UG when writing my dissertation. Having left academia 15 yrs ago after completing my doctorate, another question has dawned on me (I fear that that is not an English sentence, but I still hope to be understood): From the Master Himself downwards, a lot of people have talked a lot about Occam's Razor, arguing that UG meets the conditions of that principle.
But is that necessarily the case in all aspects of grammar? Maybe there are aspects of grammar that are more easily learnt, rather than acquired. When I was a Ph.D. student, the acquisition of the German Mittelfeld was quite en vogue, and I remember how parameters and features were juggled about to get a grip on the data. Even back then, when I was still an ardent believer, I thought that maybe one should not try to press everything into the core and ignore the periphery. (Render unto the king what belongs to the king. But no more.)
Occam's Razor, it seems, sometimes turns out to be a double-edged sword.
language hat said,
August 2, 2010 @ 10:30 am
Dominik Lukeš: Thanks very much for linking to your excellent post; I've blogged about it at LH.
marie-lucie said,
August 2, 2010 @ 1:56 pm
I second Language Hat.
michael ramscar said,
August 2, 2010 @ 3:52 pm
Universal Grammar is a specific claim about the kind of capacity that underpins language learning. The claim is that children are innately endowed with mechanisms that abstract a set of rules (rules that guide the child's linguistic behavior) from a limited amount of experience:
"It is often argued that experience, rather than innate capacity to handle information in certain specific ways, must be the factor of overwhelming dominance in determining the specific character of language acquisition, since a child speaks the language of the group in which he lives. But this is a superficial argument. As long as we are speculating, we may consider the possibility that the brain has evolved to the point where, given an input of observed Chinese sentences, it produces (by an induction of apparently fantastic complexity and suddenness) the rules of Chinese grammar, and given an input of observed English sentences, it produces (by, perhaps, exactly the same process of induction) the rules of English grammar; or that given an observed application of a term to certain instances, it automatically predicts the extension to a class of complexly related instances. If clearly recognized as such, this speculation is neither unreasonable nor fantastic; nor, for that matter, is it beyond the bounds of possible study. There is of course no known neural structure capable of performing this task in the specific ways that observation of the resulting behavior might lead us to postulate; but for that matter, the structures capable of accounting for even the simplest kinds of learning have similarly defied detection." Chomsky (1959)
"The child who learns a language has in some sense constructed the grammar for himself on the basis of his observation of sentences and nonsentences (i.e., corrections by the verbal community). Study of the actual observed ability of a speaker to distinguish sentences from nonsentences, detect ambiguities, etc., apparently forces us to the conclusion that this grammar is of an extremely complex and abstract character, and that the young child has succeeded in carrying out what from the formal point of view, at least, seems to be a remarkable type of theory construction. Furthermore, this task is accomplished in an astonishingly short time, to a large extent independently of intelligence, and in a comparable way by all children. Any theory of learning must cope with these facts." Chomsky (1959)
Although Chomsky is careful to fudge the ontological status of the 'grammar', the claim is that specific 'facts' about the way children learn language force the conclusion that children are equipped with a device that permits a specific kind of abstract theory construction — a Universal Grammar.
This is separable from the question of whether there is a biological basis to the human capacity for language: for example, if many of the biological underpinnings of language reside in changes to the way human learning develops, it seems perfectly plausible that one might accept "an innate human propensity to learn and use language" without accepting that this in any shape or form resembles a Universal Grammar. (Or else you water down the Universal Grammar claim to the point where it becomes so trivially true that one has to ask what all the fuss is about.)
Because of the way Chomsky sets up the argument (and the careful way in which he avoids endowing his speculations with any specific, falsifiable content), it is impossible to falsify the claim that there is a Universal Grammar. All one can do is falsify the claims (vague as they are) that 'force' one to conclude the existence of a Universal Grammar. If you show that the 'facts' are nothing of the sort, then the conclusion doesn't follow at all.
As far as I can see, Dabrowska's data are an attempt to do this. They are not "challenges [to] the whole idea of an innate human propensity to learn and use language." They are challenges to claims about what kind of 'facts' a theory of language learning must explain, and what kind of conclusions these 'facts' force on one.
It's worth adding that, because of the way that Chomsky set up the argument, anyone who sits down and actually tries to understand the way languages are learned (or to understand the actual human differences that enable language learning) has to deal with the following problem: however detailed one's demonstrations of learning are, and however useful they might be to people learning languages, or to children with learning difficulties, if one's research doesn't account for every conceivable aspect of "language," or any conceivable 'fact', its positive contribution can be cut down with an "ah, but that is trivial… you haven't shown how everything about language is learned…"
The logic of Universal Grammar is exactly the same as the logic of Intelligent Design, and it is unfalsifiable in exactly the same way. The difference is that while evolutionary biologists can effectively ignore ID in their day to day lives (I doubt many papers crash and burn at, say, Nature because they fail to rule out an ID explanation, nor do I expect that many biologists spend their days dealing with, "sure, you've shown how it works in a fruitfly, but that doesn't mean a thing until you prove the principles apply to people…"), it is impossible to do research in language acquisition without having to constantly navigate the witless violations of scientific logic warranted by UG, and it is possible to use the logic of UG to dismiss the value of every piece of work that fails to account for 'everything'…
I realize that none of this justifies the technical inaccuracies in anti-UG reports in the popular press, but I hope it helps clarify why there are inaccuracies (it's not easy to popularize the technical aspects of rhetoric) and where the motivation to 'spin' against UG comes from. That people might want to spend their time trying to falsify the unfalsifiable may seem bizarre, but is it really any more odd than spending one's time defending the indefensible?
dwmacg said,
August 2, 2010 @ 4:18 pm
Bravo, Dominik Lukeš and michael ramscar. I share Mark's frustration at the need to frame seemingly every work (forgive the exaggeration) of functional, cognitive, or any other non-generative linguistics as an argument against UG or linguistic autonomy, but only because the burden of proof ought to be borne by those asserting UG. If you can represent linguistic knowledge without recourse to UG, as the work of Langacker, Fillmore, Lakoff, Halliday, etc., suggests, and you can show how children acquire language without recourse to UG, as the work of Tomasello and Bybee suggests, then there is no need for UG, and no need to argue against it.
Jonathan said,
August 2, 2010 @ 9:07 pm
Geoffrey, apart from the question of the relationship between [1] and [2] earlier, is it really formally valid to describe the idea that education is predicted to not make a difference [2] and the observation that there is a correlation between education and proficiency [3] as contradictory?
Henning Makholm said,
August 2, 2010 @ 9:45 pm
Michael Ramscar, you sound somewhat angry, but I'm afraid I do have to ask what the fuss is about. You write that Universal Grammar is "a specific claim about the kind of capacity that underpins language learning", namely that children are "innately endowed with mechanisms that abstract a set of rules … from a limited amount of experience". But the preceding text appears to explain "universal grammar" as neither more nor less than a shorthand term for "innate human propensity to learn and use language".
This "specific claim", does it claim anything else than the (trivial) observed phenomenon that after being immersed in a language for about a decade (a "limited about of experience"), the child will be able to judge whether someone speaks funny or not (the child has "abstracted a set of rules" that it somehow uses to make this determination).
This does indeed sound like something that would be hard to falsify, but so what? Why even bother to give it a fancy name and spend long blog comments arguing about it? Nobody is asking for falsifications of 2+2=4 either.
Are you implying that "abstract a set of rules" requires the child to be conscious of a discrete set of grammatical rules and able to enunciate each of them formally? So restricted, your "universal grammar" description is so evidently false (even adults who have made it their business to analyze languages can have trouble identifying why they intuitively feel that such-and-such is good or bad language) that it would hardly be worth anyone's breath to argue against it.
elinar said,
August 3, 2010 @ 3:23 am
Michael Ramscar: Thanks for your explanation. That’s exactly how I interpret the research by Dabrowska and co: it challenges some specific suppositions associated with nativist theories of language learning, and not a more general notion of innate propensity for language.
And yes, Quentin Cooper may have been rather confused about the whole issue, but this is hardly surprising, given that it is not always clear what linguists mean by UG.
This is the question I was trying to ask: Is it silly to assume that Dabrowska and co.’s work has any implications for the UG debate, because no self-respecting linguist believes in what I dubbed above as “the old story”, or is it perfectly sensible to assume that their work might have at least some bearing on this debate because it challenges some silly but still widely-held assumptions about language learning?
Unfortunately, thanks to e.g. Pinker’s “Language Instinct”, some of these silly ideas have been adopted by lay people too. I keep bumping into lay descriptivists who try to silence anyone holding false (i.e. prescriptivist) beliefs about language by using arguments like ‘grammatical correctness is determined by UG’; or ‘speakers know all the grammatical rules of their native language’.
Dominik Lukes said,
August 3, 2010 @ 6:54 am
Thanks, Michael Ramscar. That was a perfect summary of why it's so hard to argue against UG. It rests entirely on its initial assumptions about the nature of language, so going after those is the only way to do it, and that's exactly what Tomasello, Dabrowska and others are doing.
Dr Spouse said,
August 3, 2010 @ 7:29 am
elinar says:
"Is it silly to assume that Dabrowska and co.’s work has any implications for the UG debate, because no self-respecting linguist believes in what I dubbed above as “the old story”, or is it perfectly sensible to assume that their work might have at least some bearing on this debate because it challenges some silly but still widely-held assumptions about language learning?"
I guess the question is, what do you mean by self-respecting? Is Pinker a self-respecting linguist? Certainly, he's very well-respected by many inside the academic community, as well as outside, as are many others who still hold these assumptions, still review language acquisition papers, and still referee grant applications…
Topher said,
August 3, 2010 @ 1:13 pm
Responding to Michael Ramscar:
The quote presented is indeed vague — what one might call the "extra-weak UG" hypothesis (EWUGH). However, it does not address the human propensity to learn language but the human capacity. Further, it is clearly falsifiable in the scientific sense, but its falsification would be equivalent to demonstrating the super-physicality of the human mind. Chomsky's claim was a little bit too definite for its time, but now the fundamental basis of it has been reduced to a well-reviewed mathematical proof: it is mathematically impossible for any finite physical system to learn (and therefore acquire) an unconstrained grammar from a finite number of exemplars. The human capacity for learning natural-language grammars from exemplars requires that there are innate limitations on the forms those grammars can take and that those limitations correspond to specific human capabilities for learning grammar.
To falsify it: take a sufficiently large collection of children. Present each with exemplars-in-context of either a natural language unknown to them or one of a set of artificially constructed languages of similar complexity to natural language but with randomly selected rules. If the children show no more capability to learn the natural languages than the artificial ones, then the EWUGH has been falsified — and some form of psychological dualism/realism has been established.
If, as you say, one can equate the EWUGH with the UGH, then opposing the UGH is a scientifically untenable position (as far out as, for example, the belief that pi is exactly 3 because the Bible says so).
However, the straw man of the EWUGH is not really at issue, nor is it what people who support UG theory mean by UG theory. What is at issue is whether the intrinsic human capacity to acquire language depends on a distinctly evolved "grammar module" distinct from general human capabilities to, for example, "parse" visual scenes. If this is true, then there would be a way of describing all human grammars as learned restrictions of the very large but finite super-grammar generated by treating all possible learned options as unconstrained. This is the thing that might reasonably be referred to as the Universal Grammar.
The Dabrowska et al. experiments do not address this at all — even if we make the highly questionable assumption that the highly artificial tests directly reflect natural language comprehension in natural circumstances. The only version of UG that they falsify even slightly is one where the UG and the surface grammars are almost identical, so that one can expect universal ability to learn and apply the minor amount of learning required to acquire language. This "Shallow Universal Grammar" hypothesis is trivially refuted by the amount of variation observable in the (surface) grammars of human languages.
What Dabrowska et al. have demonstrated is that there is variation in the ability of the sample of test subjects to succeed in the presented task, and that this variation correlates with educational attainment. It says nothing about the direction of causality (e.g., poor language ability might tend to interfere with obtaining higher education) nor about its nature (e.g., people with lower levels of education may be less familiar with formal tests and react nervously to them, resulting in lower scores, or researcher bias may be producing a "Pygmalion in the Classroom" effect).
Lane Greene said,
August 3, 2010 @ 5:21 pm
I don't know if I'm the only journalist on this thread, but I can say with some certainty that it's fairly simple, at least as far as the BBC's part in this is concerned. If reporters can name one linguist, it's Noam Chomsky, and if they can name one linguistic idea, it's UG. And the vast majority of reporters cannot name two-plus linguists or two-plus linguistic theories.
How do I know this? I've been writing about language for about five years, speak about five languages well and about four more badly, and in other words am not the world's very dumbest journalist when it comes to language. Yet for the first piece I did for my magazine on language, I interviewed the great Prof. Liberman myself, and framed my questions around "what does this have to say about universal grammar?" Mark was very kind in telling me "nothing". It doesn't matter what my article was about; I doubt Mark can remember either. But I was *sure* it had something to say about Chomsky. I'm glad I checked before putting it in print.
Lane Greene said,
August 3, 2010 @ 5:23 pm
PS: Insofar as what I just wrote is true (and I think it is), Mark is wrong that "suddenly" every linguistic article in the press is about Chomsky. Recency illusion…?
Ryan said,
August 3, 2010 @ 8:14 pm
Topher: You've done a marvelous job of telling us how to falsify your VWUGH, but I don't see any suggestions on falsifying your formulation of UG. Hardly scientific, is it?
elinar said,
August 4, 2010 @ 3:11 am
Dr Spouse:
This is why I used the word ‘self-respecting’.
I assume that Dabrowska’s research has some bearing on the UG debate. This is indeed what she herself and many other researchers believe.
Mark Liberman argues in his post that it is ‘transcendently silly’ to believe such a thing.
In Comment 6, Geoffrey Pullum suggests that it may not be a totally silly thing to believe if we assume the truth of what I called the Old Story in Comment 35. But then I gather from his tone in Comment 12 that no serious/self-respecting/sane linguist subscribes to such a simplistic view.
However, other commenters seem to be saying that the Old Story is still very much alive and kicking.
So I’m still not sure if I (and Dabrowska and co.) should be regarded as reasonably sane, just slightly silly, or transcendently so.
elinar said,
August 4, 2010 @ 3:16 am
Topher:
You say that what people supporting UG theory mean by it is not at issue.
In my view, this is precisely what is at issue. Yes, it is sloppy to say that this type of research ‘refutes’ UG (whatever that means). And yes, we can quibble endlessly about the methodological flaws of this particular piece of research.
But assuming that there are still many supporters of UGH around who subscribe to some very naïve beliefs about language learning, is it really so silly to assume that research challenging these beliefs could at least in principle have some implications for the UG debate?
Topher said,
August 4, 2010 @ 3:14 pm
elinar:
I'm not sure exactly what you are saying. I do believe that what people supporting UG theory mean by it is at issue. What is not at issue is what people who do not believe in UG theory claim that supporters mean by it. I have not seen a single argument here that supports the claim that the experiments in question refute anything but the "shallow UG" theory, nor any evidence that there are actual UG researchers who believe in that. If one were found, then I would say that the experiments, if they had any ecological validity and could be replicated, do refute that already-refuted specific UG theory, but that this would still be irrelevant to the question of UG theories in general, and to the characteristics of UG theories that most supporters deem relevant.
If I found a non-UG theorist who believed that all native speakers of any language are equally fluent, would that then "refute" non-UG and thereby establish UG as true? I see little reason to regard this position as any more likely than a UG version.
If someone conducted experiments that they claimed demonstrated that the moon was not made of green cheese (that it instead appeared to be made of marshmallow fluff) and that this clearly refuted UG claims, would that then "at least in principle have some implications for the UG debate"? Would it become any more so if some UGer were found who believed that somehow UG implied that the moon was made of green cheese? Of course not. It's a distinct issue. UGers can make errors about the logical implications of UG theory just as non-UGers can make errors about the logical implications of non-UG theory.
michael ramscar said,
August 4, 2010 @ 9:10 pm
Very briefly:
Gold himself questioned the degree to which his theorem applied to natural languages, as opposed to formalisms by which natural languages might be described. If you read the paper, you'll find that it is about set identification, and that relating the specific exercises Gold reports to actual natural languages relies on a host of other assumptions (such as the probability with which a grammar is learned, what a natural language grammar really is, and infinitude — GKP has a paper that presents a very stimulating and accessible discussion of this last one). Gold himself is admirably clear about many of these assumptions, and about their relationship to his proofs.
Gold's (and similar) exercises can thus be taken as either proofs that natural language is unlearnable; or else, if natural languages are learned, proofs that a particular way of formalizing language is false.
Accordingly, when considering this claim (made above): "it is mathematically impossible for any finite physical system to learn (and therefore acquire) an unconstrained grammar from a finite number of exemplars. The human capacity for learning natural language grammars from exemplars requires that there are innate limitations on the forms those grammars can take and that those limitations correspond to specific human capabilities for learning grammar…" it is important to note that the argument only works if one makes the extra assumption that a particular formal model of grammar and actual natural grammars are the same thing. [Since this assumption is implicit in the claim, it should be considered a "poor argument" ™.] This is important, because if this very big assumption is wrong, then there is no more to this claim than the conflation of the wrong model with a lot of hubris. There is a nice precedent for just this kind of thing in what happened to the Ptolemaic model of the cosmos, and a great many statements made on the basis of the same kind of conflations.
If the history of science is not enough to give one pause for thought when it comes to conflating reality with one's models of the same, one can ponder the notion that variants of Gold's theorem can be applied equally well to any kind of inductive learning, and thus depending on how one was parting one's hair on a given day, one might equally use Gold to argue against vocabulary learning, or any kind of lexical grammar, and indeed pretty much any aspect of language one might care to think of.
The simple bottom line is this: to the degree to which UG is concrete, it is rooted in a particular view which sees language as governed by categorical rules (albeit that the categories get hazier with each passing year). Since this view gives primacy to structural considerations, it has the awkward corollary that one has to take a Platonic view of concepts to make the whole thing fly (as Jerry Fodor and Chomsky, bless them, have repeatedly made plain, and which most people who seem otherwise happy with UG also seem more than a little happy to fudge).
Thankfully, just as it turned out there are ways of thinking about astronomy beyond Ptolemy, it turns out there are ways of thinking about language beyond CFGs, infinitude and let's-not-worry-about-meaning-for-now. Indeed, you don't even have to take my word for it. Ask yourself: why else did Miller & Chomsky expend so much energy on their wholly unconvincing takedown of information theory?
Following from this: if it is possible to conceive of language other than in terms of its being governed by abstract syntactic rules, then it isn't necessarily the case that UG is identical to there being a biological basis for the human capacity to learn language. (As Ali G might say, "if UG is dis, but langwidge is dat, and dis an dat is not de same, den dey is diffrent.") Getting to a final point anything like this straightforward depends, as I noted above, on how the UG argument is formulated — if you fudge the UG claim so that syntactic rules are merely descriptive of linguistic behavior, as opposed to determining it, then even a Skinnerian model of language would fail to falsify UG.
Which gets me to where I finished above: UG is such an incoherent ragbag of vague claptrap, half-baked "facts", and appeals to proofs that actually undermine the research programs of many UGers themselves, that the mystery is why anyone seeks to defend 'it.' Is it habitual?*
(*This question is best taken as being rhetorical, because now I think of it, the 'non-existence' of habit may well have been part of the UG story too, somewhere along the road…)
elinar said,
August 5, 2010 @ 3:36 am
Topher:
Interesting. So when I come across the following type of claim (which I frequently do) in academic articles, textbooks, and pop-linguistic books:
“All native speakers converge on the same grammar, regardless of the quality and quantity of linguistic input”
how am I to interpret it? That these authors believe in the Shallow UG theory and don’t know what they are talking about, or that by ‘grammar’ they don’t mean anything as shallow as passive constructions or embedded clauses, but are referring to some kind of Deep Grammar constrained by UG?
We know that on the 'shallow' interpretation the claim is false, but how could anyone ever refute the 'deep' interpretation?
Topher said,
August 5, 2010 @ 11:55 am
elinar:
Interesting. So when I come across the following type of claim (which I frequently do) in academic articles, textbooks, and pop-linguistic books:
“All native speakers converge on the same grammar, regardless of the quality and quantity of linguistic input”
how am I to interpret it?
You should interpret it in a straightforward manner (though I would not agree with it as literally stated, I don't question that some do). That, of course, does not mean that you should take "converge on" to mean "reaches absolute convergence within their lifetimes." Those are quite different things. It also says nothing about the rate of convergence being independent of the quality and quantity of linguistic input — obviously it would take a rather long time to acquire a language if the quantity were a single input per century (that I had to translate "quantity" to "rate" for this statement to make any sense at all is part of why I disagree with it).
That approximate convergence must be reached in practice is actually a simple deduction from a) surface grammar is not innate — there must be some acquisition; b) surface grammar is necessary to produce and understand language with minimal "native" fluency; c) there must be some approximate equivalence between communicants' grammar (and language in general) for communication to take place; d) some communication does take place during linguistic communication between native speakers of the same language.
This is an observation (not a theory) that any UG theory (in fact, any theory of grammar that claims to be consistent with acquisition) must take into account, rather than a prediction that follows from such a theory.
Topher said,
August 5, 2010 @ 12:52 pm
michael ramscar:
I think that you have taken Gold's statement of the limitations of his proof to mean the exact opposite of his intention. Learning theory is full of what are called, somewhat whimsically, NFL theorems (NFL = "No Free Lunch"). Overall they say that for a finite, mathematically describable system (which includes all physical systems) to learn some class of finite sets of mathematical descriptions (rules, equations, differentials, regularities, etc.), it must be incapable of learning some other (in some sense, "much larger") set of descriptions. In order to generalize to novel cases — for it to make sense to talk about grammar/syntax existing at all, even in a rough sense — there must be such a set.
Gold demonstrated that the set of mathematical descriptions covered by traditional grammars is rich enough for this to apply. The conclusion is either a) grammar cannot be learned or b) the formalisms of formal grammars are richer than needed to describe natural language (this applies even if there are nuances of the "real" grammars that fall outside the ability of the formal grammars to describe). Consequently there must be an intrinsic restriction of the set of possible languages that is not captured by formal grammar. Whether or not the restricted set of possible natural languages is most conveniently represented as a set of formal "syntactic rules", this means that there is some innate capacity for humans to specifically learn the restricted set of grammars embodied in possible "natural languages". An unrestricted, physical learning system would not be capable of learning natural language — or much of anything. There is "no free lunch": to be equally able to learn anything means to be unable to learn anything from less than a complete enumeration of all possible exemplars.
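To make the shape of Gold's result concrete, here is a toy sketch of "identification in the limit" from positive data (the hypothesis class, names, and numbers are illustrative inventions of mine, not Gold's formalism). The learner succeeds precisely because the space of candidate languages is restricted in advance:

```python
# Toy Gold-style learning: the candidate languages are L_n = {'a'*k : 1 <= k <= n}
# for n = 1..10 (an invented, deliberately restricted hypothesis class).

def consistent(n, data):
    """True if every observed string belongs to L_n."""
    return all(set(s) == {'a'} and 1 <= len(s) <= n for s in data)

def learner(stream):
    """Always conjecture the smallest candidate language consistent with the data."""
    data = []
    for s in stream:
        data.append(s)
        yield next(n for n in range(1, 11) if consistent(n, data))

# Positive data from the target L_4: the guesses stabilize on 4 once the
# longest string has been seen, and never change thereafter.
stream = ['a', 'aaa', 'aa', 'aaaa', 'a', 'aaa']
print(list(learner(stream)))   # -> [1, 3, 3, 4, 4, 4]
```

Gold's negative result is the flip side: enlarge the class to include the infinite language {'a'*k : k >= 1} alongside all the finite ones, and no learner — whatever its algorithm — can stabilize on a correct conjecture for every target, because any finite sample remains forever consistent with both the infinite language and ever-larger finite ones.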
But note that I have repeatedly stated that this is not equivalent to saying that UG is necessarily true. While context sensitive grammars are Turing Equivalent and therefore able to describe any physical system, that does not mean that they are the most appropriate way of describing them. I could, in principle, describe all classical, relativistic and quantum laws of physics as a context sensitive grammar, but it really wouldn't be a particularly useful formalism. The observation that the existing formalism is too rich makes looking for a modified or restricted version (i.e., a UG) an attractive and plausible research program, but it does not guarantee that program's success. Even if successful, it does not prove (though it would probably provide strong evidence for) a specialized grammar mechanism in the brain.
For the record, I am a computer scientist who has worked in the areas of speech recognition (and therefore natural language processing) and learning theory. My command of learning theory is stronger than my knowledge of linguistics, and I have been mostly commenting on that. I do have an interest in linguistics, however, and though I can't claim expertise, have done some study, and I can claim some expertise in formal languages.
michael ramscar said,
August 5, 2010 @ 4:43 pm
What computer scientists mean by learning theory and what psychologists mean by learning theory are very different things. The former is a branch of mathematics, and the latter is part of natural science. My reading of Gold is that he was a nuanced enough thinker to understand the difference. (It's also the case that what passes for a portrayal of human learning in UG discussions of learning, and in Gold, is beyond a caricature, which is why so many claims about human learnability are so laughable; though again, Gold, to his credit, was quick to acknowledge the possibility that his portrayal of human learning might be inadequate.)
In the same vein, computer scientists' use of the word "language" is based on an analogy to natural language. Whether this analogy is useful, as opposed to whether it simply gets in the way of people thinking clearly about these matters, is very much an open question.
Which is why equating natural grammar (the nature of which is something we are still trying to figure out, and which may bear no resemblance at all to a combinatorial algorithm) with syntax and formal grammar (as understood in, say, computer science) is a mistake. For example, while one can conceive of how language works as involving encoding and decoding (the model we inherited from the Greeks, and which informs many approaches to computer science), one can also choose to conceive of language in information theoretic terms (one might believe that human communication is better modeled in terms of mutual prediction and uncertainty reduction, as opposed to the exchange of determinate tokens).
It seems reasonable to assume that there may also be other ways of conceiving of how language works, but if we stick with just these two, then what we have are two very different approaches, in terms of just about every possible mechanistic assumption they make about natural language grammar. Importantly, many claims that one might make about UG and syntax are beside the point from the point of view of an information theoretic approach (because syntax as usually conceived of embraces a range of methods that are aimed at answering questions that might make little sense under this frame of reference); and of course, the converse is also true.
It may be that you think one or the other of these approaches is completely crazy and beyond the pale (if so, it may be useful to recall how much of what we take for granted at any given point in scientific history has been considered rubber-room material in the past). But even if you do, given that the history of science also suggests that when it comes to language, our main achievements have so far amounted to discovering how much we don't know, it still seems like a good idea to me to keep a sense of the difference between one's model of a phenomenon and the phenomenon itself (albeit that this goes against the whole tradition of UG arguments).
Topher said,
August 5, 2010 @ 6:47 pm
(Sorry for the length — replying to such a laundry list of subtle misunderstandings is difficult to keep short — whether in a single note or spread over multiple ones)
michael ramscar:
What computer scientists mean by learning theory and what psychologists mean by learning theory are very different things.
Well, I've never heard a psychologist refer to the branch of cognitive psych concerned with human learning as "learning theory" but your point is, of course, a trivial and elementary fact from learning theory — that it does not limit itself to human learning.
The former is a branch of mathematics,
It's more one of the most mathematical areas of computer science.
and the latter is part of natural science. My reading of Gold is that he was a nuanced enough thinker to understand the difference.
Hardly takes much "nuance" — it would almost certainly be covered in the first day of any learning theory survey course, and in the preface or introduction of any textbook.
(It's also the case that what passes for a portrayal of human learning in UG discussions of learning, and in Gold, is beyond a caricature, which is why so many claims about human learnability are so laughable; though again, Gold, to his credit, was quick to acknowledge the possibility that his portrayal of human learning might be inadequate.)
Only if you don't understand what learning theory is. It's an area that deals with what kind of learning (a concept far broader than, but definitely inclusive of, human learning; evolution, for example, is a learning system, as is human bone thickening in response to repeated stress) is computationally possible, which means (before you get hung up on narrow misunderstandings of what is meant by "computation") what is possible for any physically realizable system. If NFL theorems do not apply to human learning, then human learning is accomplished by a super-physical system.
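For a sense of how general the point is, here is a toy no-free-lunch calculation (the setup and numbers are mine, purely for illustration). Averaged over every possible way three unseen binary cases could turn out, any fixed set of predictions scores exactly 50%; a learner can only do better than chance if some of those possible "worlds" are ruled out in advance:

```python
# Toy no-free-lunch arithmetic: over ALL possible outcomes of the unseen
# cases, every predictor has the same average accuracy.
from itertools import product

UNSEEN = 3                                        # three unobserved binary cases
worlds = list(product([0, 1], repeat=UNSEEN))     # all 2**3 possible "truths"

def accuracy(prediction, world):
    return sum(p == w for p, w in zip(prediction, world)) / UNSEEN

for prediction in [(0, 0, 0), (1, 0, 1), (1, 1, 1)]:
    avg = sum(accuracy(prediction, w) for w in worlds) / len(worlds)
    print(prediction, avg)                        # always 0.5, whatever is predicted
```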
In the same vein, computer scientists' use of the word "language" is based on an analogy to natural language. Whether this analogy is useful, as opposed to whether it simply gets in the way of people thinking clearly about these matters, is very much an open question.
Well, it's been rather fruitful for the last 50+ years, but it certainly might have reached its limits. But let's be clear: Chomsky, a linguist, created the foundations of the modern formalism for describing formal grammars, which was picked up by computer scientists such as Backus as useful for describing programming languages. Formal grammars were not taken in analogy to computer languages — computer languages had no formal grammars at the time.
Which is why equating natural grammar (the nature of which is something we are still trying to figure out, and which may bear no resemblance at all to a combinatorial algorithm) with syntax and formal grammar (as understood in, say, computer science) is a mistake.
You have in no way demonstrated that — what follows from your previous statement is only that it might be a mistake. So far it has been a successful paradigm since, what, Aristotle, for describing a large number of regularities in human language.
For example, while one can conceive of how language works as involving encoding and decoding (the model we inherited from the Greeks, and which informs many approaches to computer science), one can also choose to conceive of language in information theoretic terms (one might believe that human communication is better modeled in terms of mutual prediction and uncertainty reduction, as opposed to the exchange of determinate tokens).
I agree with that completely. Of course, information theory is also tied into artificial computational systems and is one of the foundations used in learning theory. Learning theory is about the limitations that a computational system (which includes all physical systems) must have in its ability to incorporate information (in the information theoretic sense) received so as to improve performance on relevant tasks. That's pretty much precisely what your "information theoretic" system does. Gold showed that this applied to the formal models used by linguists. We can skip that step with your "alternative" — learning theory needs no additional proofs to be applied to it.
It seems reasonable to assume that there may also be other ways of conceiving of how language works, but if we stick with just these two, then what we have are two very different approaches, in terms of just about every possible mechanistic assumption they make about natural language grammar. Importantly, many claims that one might make about UG and syntax are beside the point from the point of view of an information theoretic approach (because syntax as usually conceived of embraces a range of methods that are aimed at answering questions that might make little sense under this frame of reference); and of course, the converse is also true.
Actually, I would say that they are complementary. Information theory is about the properties of systems that communicate by transferring sequences of tokens (originally it was restricted to that) with some regular, time-dependent structure. It depends on a commonly understood structure for the token sequences exchanged. Probabilistically annotated grammars are the formalism used to express that. Learning theory is about the necessary limitations on such systems' ability to reduce uncertainty by such token sequences, either one-way or interactively. Bayesian inference (the mathematical study of optimal reduction in uncertainty in response to evidence) is another relevant, complementary area for examining such theories.
The point is that you are quite right that some questions might be irrelevant to one or more of these approaches, but that does not mean that one should (or even could) dispose of one of these. It means that all three are ways to look at the same theory. In particular, both information theory and Bayesian inference require some kind of formal system as a starting point.
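As a toy illustration of that last point — Bayesian uncertainty reduction operating over formally specified candidates — consider the following sketch, in which the two candidate "grammars" and the probabilities they assign to token sequences are invented purely for the example:

```python
# Minimal Bayesian updating over two toy "grammars", each of which assigns
# a probability to the observable strings 'ab' and 'ba'.
GRAMMARS = {
    'G1': {'ab': 0.9, 'ba': 0.1},
    'G2': {'ab': 0.5, 'ba': 0.5},
}

def update(belief, observation):
    """One step of Bayes' rule: posterior = prior x likelihood, renormalized."""
    unnorm = {g: belief[g] * GRAMMARS[g][observation] for g in belief}
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

belief = {'G1': 0.5, 'G2': 0.5}        # start indifferent between the grammars
for obs in ['ab', 'ab', 'ab']:
    belief = update(belief, obs)
    print(obs, belief)                 # belief in G1 grows with each 'ab' seen
```

The formal system (here, the table of token probabilities) has to be specified before the uncertainty-reduction machinery has anything to operate on — which is the sense in which the approaches are complementary rather than rivals.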
And once again, I will state that I do not believe that the fact that there must be sharp constraints on the range of possible human natural languages, and that therefore there must be some "hard-wired", built-in capability correlated to those restrictions, proves that UG is correct. It certainly does not prove that those restrictions are specific to understanding the structure of language, or that the current formalisms for describing grammatical structure are the correct ones. Nor does it suggest that there aren't other, complementary areas of linguistic analysis. It does suggest, though, that something like UG is, succeed or fail, a fruitful direction for research.
michael ramscar said,
August 6, 2010 @ 5:37 am
There is a long tradition of studying learning in psychology, and the use of the term "formal learning theory" to describe attempts to characterize the workings of natural learning — as opposed to analogies to it — long predates the use of this term by computer scientists. Funnily enough, the origins of the divorce-with-visiting-rights relationship between learning theory and much of cognitive psychology are bound up in the origins of UG.
So much for the history of ideas: much of what is now sometimes called "learning theory" by computer scientists was more appropriately called "search" when I was at grad school in AI. However, whichever way one slices it, search or learning is always going to comprise at least two parts: the search space and the search algorithm. If your search space is undefined (as natural grammars still are), then making strong claims about the nature of the search algorithm (which, if it does anything at all, is what UG does) seems somewhat premature.
What this means is that UG has nothing to do with whether one supposes that natural computation is governed by basic principles of information and coding theory, or whether one accepts that the way that natural learning methods are implemented will limit the kinds of search possible in human learning. This is because any strong claims anyone cares to make about the constraints governing learners and languages will depend on how the search space that a learner is assumed to be navigating is specified. To return to the two examples above, if learning natural languages involves acquiring the encoding and decoding algorithms for a set of determinate "concept" tokens, then the human language system is faced with one kind of search task. On the other hand, it might be that a learner is faced with incrementally discriminating a set of conventionalized cues to linguistic events at multiple levels of granularity.
The latter is a very different search problem from the former, and so it has different implications for the question of natural learnability: there may be a different answer to the question of whether it is possible to solve the search problem given a natural learning algorithm and the experience available to a learner. (Being clear on what the search problem is also allows one to evaluate whether many language engineering results support the ideas that have underpinned UG for the past 50 years, or whether, as it often seems, where engineering has tended to work it has done so by ignoring same.)
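For concreteness, the second kind of search problem can be roughly sketched as error-driven cue discrimination. The fragment below uses a Rescorla-Wagner-style update; the cues, outcomes, and learning rate are toy choices of mine, not a worked-out model of acquisition:

```python
# Error-driven cue discrimination (Rescorla-Wagner-style): cue weights are
# nudged toward 1 when a predicted outcome occurs and toward 0 when it doesn't.
weights = {}          # (cue, outcome) -> associative strength
RATE = 0.1            # toy learning rate

def rw_update(cues, outcome, present):
    predicted = sum(weights.get((c, outcome), 0.0) for c in cues)
    error = (1.0 if present else 0.0) - predicted
    for c in cues:
        weights[(c, outcome)] = weights.get((c, outcome), 0.0) + RATE * error

# Invented data: the cue 'the' reliably precedes nouns, never verbs.
for _ in range(100):
    rw_update(['the'], 'NOUN', True)
    rw_update(['the'], 'VERB', False)

print(round(weights[('the', 'NOUN')], 3))            # approaches 1.0
print(round(weights.get(('the', 'VERB'), 0.0), 3))   # stays at 0.0
```

Note that nothing in this learner enumerates candidate grammars at all: what is searched is a space of cue weights, which is why the learnability question can come out differently than it does for the first search problem.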
So. To the extent that UG has any content, it seems to be committed to a particular search problem (#1, above), rather than the trivially true point that the scope of natural languages is in all likelihood limited by human capabilities, such as natural learning algorithms. (One might also ask: if the total extent of the UG claim were that the scope of natural languages is limited by human capabilities, wouldn't it suggest that the best way to study language is through an examination of actual human capabilities, as opposed to studying idealized formalisms?)
As far as I'm aware, no-one has sought to eliminate as legitimate forms of linguistic enquiry those formal approaches to syntax in which language is conceived of as the arrangement and exchange of determinate tokens of meaning (however laughable the assumptions these approaches often force on their adherents may be). On the other hand, UG (and its attendant conflation of natural language with a particular model of language) has often been used to argue that any other approach has been automatically falsified. People will say things like "the fundamental basis of [UG] has been reduced to a well reviewed mathematical proof: it is mathematically impossible for any finite physical system to learn (and therefore acquire) an unconstrained grammar from a finite number of exemplars. The human capacity for learning natural language grammars from exemplars requires that there are innate limitations on the forms those grammars can take and that those limitations correspond to specific human capabilities for learning grammar" as if there was no alternative to UG, and no other natural conception of 'grammar.'
If instead, the message is, "Let a hundred flowers blossom," then so much the better.
However, I'm still mystified as to why anyone would want to spend time defending UG. As far as I understand it, UG is simply a dumping ground in which embarrassing questions and vacuous assumptions can be buried. Stuck with a theoretical consequence you can't explain? Tag it as innate… e.g.: "However surprising the conclusion may be that nature has provided us with an innate stock of concepts, and that the child’s task is to discover their labels, the empirical facts appear to leave open few other possibilities." Chomsky, 2000, pp. 65–66. Is that really a fruitful direction for research? All I can say is that it looks like the opposite from where I'm sitting.
Topher said,
August 6, 2010 @ 11:30 pm
michael ramscar:
much of what is now sometimes called "learning theory" by computer scientists was more appropriately called "search" when I was at grad school in AI.
True but misleading (do we need to open a discussion on "much of" to parallel "most"?). It's like saying "much of what is now called 'accounting' was more appropriately called 'arithmetic' when I was in grade school" — although this latter is a bit more accurate.
Learning theory (more specifically, machine learning theory) uses results from many other areas and also develops results that can then be used in the source field. Heuristic search, combinatorial optimization, clustering, robotics, neural networks, genetic algorithms, support vector machines and Bayesian methods are just some of these. Heuristic search still exists as a separate discipline (no one would, for example, call the process of playing a game of machine chess "learning"), although the broader perspective of learning theory has made an obvious mark on how many heuristic search problems are formulated.
Of course, we were not talking about practical machine learning theory (which deals with learning by specific mechanisms) but rather the more abstract part of the field, theoretical learning theory, which deals with what any possible finite physical system, including the human brain, can possibly do. Unless you feel that learning does not take place in the brain or elsewhere in the physical body (i.e., you are a psychological dualist), it doesn't matter what kind of theory of language you posit. If (as do traditional natural language grammar systems) your theory allows for unrestricted language learning, then your theory is unrealistic, perhaps because it is incomplete.
UG follows the usual scientific pattern of not abandoning a large body of successful results because a flaw is found. Instead, it attempts to make adjustments in and extensions to the existing theory. Sometimes this works (e.g., relativity) and sometimes it leads, by a process of step-by-step modification of the original theory, to something quite different (modern electrodynamics).
Any theory that simply abandons all preceding structure (a rarely successful strategy — although popular historical accounts, e.g., of the development of QM, sometimes portray the development of some theories that way) must either explain the accuracy of earlier theories (SR for example) or at least be judged to plausibly be able to do so with additional work (e.g., statistical mechanics superseding traditional thermodynamics).
However, whichever way one slices it, search or learning is always going to comprise at least two parts: the search space and the search algorithm. If your search space is undefined (as natural grammars still are), then making strong claims about the nature of the search algorithm (which, if it does anything at all, is what UG does) seems somewhat premature.
If you want to cast learning theory as heuristic search (which is one of a number of complementary ways that are used), that is close to true — except that UG is about the search space, not the search algorithm. It starts by saying that a reasonably complete description of the search space (the set of all possible grammars that could be learned) must allow for the possibility of there being some mathematically, physically realizable search algorithm.
UG is a theory of grammar not a theory of language acquisition. UG theorists may indeed attempt to demonstrate that their solution is learnable by presenting (usually sketchy) learning algorithms for exploring their solution space, and language acquisition theorists may use the structure of some UG or class of UGs as a starting point, but that is not what UG theory is about. A particular UG proposal or class of UG proposals gains plausibility if it is learnable by any computable algorithm even if the proposed algorithm is incorrect.
UG then uses the usual scientific strategy of retaining as much of the currently existing theory as possible (therefore plausibly avoiding being falsified by the success of the Cambridge Grammar of the English Language in capturing regularities in English).
It is premature to declare that the eventual end point of this research program will necessarily be close to the starting point. It is not premature to declare this as a fruitful starting point in the search for a more complete theory.
This is because any strong claims anyone cares to make about the constraints governing learners and languages will depend on how the search space that a learner is assumed to be navigating is specified.
Since very strong, proven claims have been made about the constraints governing learners and languages without having any dependency on how the search space is specified, this statement is demonstrably untrue. Perhaps you meant specific rather than strong? UG takes the reasonable, though not yet proven, assumption that the search space looks something like present successful theories. Grammar theory is an exploration of the possible search spaces (or, moving outside of the restricted heuristic search viewpoint, of the possible solution spaces).
On the other hand, it might be that a learner is faced with incrementally discriminating a set of conventionalized cues to linguistic events at multiple levels of granularity.
The latter is a very different search problem from the former,
It might be. It appears, however, to be a different view of a lot of different processes, including UG theories. If I drop a stone, does it fall to the ground because the gravitational force between it and the Earth attracts it to the center of the Earth and the ground gets in the way, or because it follows a Hamiltonian and thereby reduces its potential energy? Both, actually. For the latter description to be a plausible one, however, we must define both "potential" energy and the associated Hamiltonian so that the conventional description is preserved (at least approximately, though in this case exactly).
and so it has different implications for the question of natural learnability: there may be a different answer to the question of whether it is possible to solve the search problem given a natural learning algorithm and the experience available to a learner.
It may or may not have different implications depending on the details. It also might suggest other avenues even though it is consistent with the other approach.
If you can construct a theory that is a) consistent with learning theory and physics; b) able to explain the observed regularities of surface grammars in all natural languages studied; c) physically/mathematically plausible both for use and acquisition; d) built without reference to an intermediate UG theory; and e) (the hard one) not homomorphic to a UG, at least to a high degree of approximation — then I will tip my hat to you. You may then proceed to solve world hunger overnight, settle the P=NP problem, and leap tall buildings in a single bound.
(Being clear on what the search problem is also allows one to evaluate whether many language engineering results support the ideas that have underpinned UG for the past 50 years, or whether, as it often seems, where engineering has tended to work it has done so by ignoring same.)
Is "being clear on what the search problem is" necessary? Is there anyone in "language engineering" (or more generally, in "natural language processing") who claims that their work is based on UG (not inspired by, but based on)? Are you conflating formal, mathematical grammars with UG (UG generally depends on them, and Chomsky invented them, but they do not depend on UG)? Until quite recently, all successful (and probably unsuccessful) natural language processing for which grammar was relevant were based on some restricted form of formal grammar. More recently, there have been some useful (and therefore successful in a utilitarian sense) which have restricted their solution space in primarily non-grammatical ways by relying on humongous numbers of exemplars — orders of magnitude larger then those that any human being is exposed to in their entire lives — much less by the age of 6. I don't think it is likely that they could be refined — without adding more assumptions (= deep structures) — to operate on significantly smaller training sets.
In any case, the whole point of these approaches is that they completely avoid the need — and apparently the possibility — of anything that bears any slight resemblance to understanding. Remember Eliza? (Or Dr. Memory for the Firesign Theater fans).
On the other hand, UG (and its attendant conflation of natural language with a particular model of language) has often been used to argue that any other approach has been automatically falsified.
Any approach that does not sufficiently restrict the possible range of grammars has been falsified. If there are any theories out there that are likely to have done that but are not compatible with UG, I haven't heard of them. I certainly can't pretend to be aware of even a small fraction of current approaches, so I am open to references. It's hard to imagine what a theory that was not compatible with the purely linguistic part of UG would even look like, but history has adequately demonstrated the invalidity of "proof by lack of imagination". Your supposed alternatives so far appear to be completely compatible with UG — and in their present form are clearly without computationally plausible restrictions.
as if there was no alternative to UG, and no other natural conception of 'grammar.'
Repeatedly I have stated that this is invalid. The invalidity of a claim about the consequences of UG is irrelevant to the validity of UG itself.
However, I'm still mystified as to why anyone would want to spend time defending UG. As far as I understand it, UG is simply a dumping ground in which embarrassing questions and vacuous assumptions can be buried.
Mystery solved — other people understand it better.
Stuck with a theoretical consequence you can't explain? Tag it as innate… e.g.: "However surprising the conclusion may be that nature has provided us with an innate stock of concepts, and that the child’s task is to discover their labels, the empirical facts appear to leave open few other possibilities." Chomsky, 2000, pp. 65–66.
It is an accurate statement (for a suitable understanding of what is meant by a "stock of concepts").
Is that really a fruitful direction for research? All I can say is that it looks like the opposite from where I'm sitting.
Any direction for research that ignores it is clearly of limited fruitfulness. Given that there is such a "stock of concepts" (I would prefer to say "innate predispositions"), I'm bewildered why this is considered surprising. We know about many cognitive predispositions that could be cast as a "stock of concepts". The innate capacity to recognize human faces has been very unambiguously demonstrated; i.e., we come equipped with the concept of the appearance of the human face.
Since they do exist, elucidating these "concepts" and discovering how they may best be described seems very fruitful — particularly since they would represent a vast simplification of the description of human language.
Linguistic Anthropology Roundup #12 – Society for Linguistic Anthropology said,
August 21, 2010 @ 1:14 pm
[…] Language Log » Universal Grammar haters. […]
A Thinking Machine : On Metaphors For Mind « Scribblings said,
August 26, 2010 @ 1:43 am
[…] you’re interested in the debate that’s been raging, I’ve listed some introductory reading below. But in the upcoming weeks, I’m less interested […]
Some Links #13: Universal Grammar Haters | Replicated Typo said,
November 16, 2011 @ 3:22 am
[…] Universal Grammar haters. Mark Lieberman takes umbrage with claims that Ewa Dabrowska's recent work challenges the concept of a biologically evolved substrate for language. Put simply: it doesn't. What their experiments suggest is that there are considerable differences in native language attainment. As some of you will probably know, I'm not necessarily a big fan of most UG conceptions, however, there are plenty of papers that directly deal with such issues. Dabrowska's not being one of them. In Lieberman's own words: In support of this view, let me offer another analogy. Suppose we find that deaf people are somewhat more likely than hearing people to remember the individual facial characteristics of a stranger they pass on the street. This would be an interesting result, but would we spin it to the world as a challenge to the widely-held theory that there's an evolutionary substrate for the development of human face-recognition abilities? […]