Selected comments on a Language Log post by Geoffrey K. Pullum, "When a word is redundant enough to be omitted", 8/25/2008.

Shimon Edelman (link):

I share John Cowan's aversion to the word "ungrammatical". In Geoff Pullum's posting, the root of the problem is nicely exposed for all to see in this passage: "It doesn't have its roots in semantics or logic; these are facts of syntax (though they have semantic connections)." The ontological status of "facts of syntax" (or of grammaticality that is independent of acceptability) is the same as that of the tooth fairy: there is no independent empirical evidence for it, and the phenomena attributed to it can be better explained by other means.

Geoff Pullum (link):

My friend Shimon Edelman believes that the notion "grammatical" has tooth-fairy status, and that the concept to replace it is that of probability. There are utterances with high-probability features and utterances with low-probability features, and that's all. He really does appear to believe this. It is a view that I have urged him to jettison. It will not be possible (so I predict) to differentiate my (1) and (2) in probability terms. As to whether there simply isn't a grammaticality issue with (1), as John Cowan thinks, we need to have his answer to Mark's question. (If there is no ungrammaticality in (1), by the way, we are fine, and formulating the rules of grammar becomes easier, not harder. Nothing about Shimon Edelman's view, that the distinction between grammatical and ungrammatical doesn't exist, would get any support from that purely empirical finding, if it were indeed a finding.)

Shimon Edelman (link):

In his reply to my comment, Geoff kindly offered a framework within which to interpret my repudiation of "grammaticality." I feel obliged to clarify it, however briefly. If grammaticality is dissociated from acceptability, it becomes empirically vacuous (joining the club that has "facts of syntax" and "competence" among its members); if it is defined in terms of acceptability, it loses its claim to a separate existence. Probability enters the picture as follows: utterances whose various features, in the given context and given the sum total of the listener's experience, are more probable will be deemed by the listener more acceptable. One of the features in question is utterance length (hence the difference in acceptability between the long and short versions of Geoff's example with which the present thread started). Importantly, however, the probabilities in question are always conditioned on context (including extralinguistic context); this is the crucial component of the framework to which I subscribe that has been left out of Geoff's summary of my views.
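To make the probability-based framing concrete, here is a minimal sketch, not anything Edelman has proposed in this form. It operationalizes "acceptability" as the length-normalized log-probability of an utterance conditioned on a context prefix, with a toy bigram model and add-alpha smoothing standing in for "the sum total of the listener's experience"; the corpus, the smoothing constant, and the choice of bigram features are all illustrative assumptions.

```python
from collections import defaultdict
from math import log

# Illustrative sketch only: a toy bigram model standing in for the listener's
# experience. The framework described above is not committed to bigrams; any
# conditional model over utterance features would serve the same purpose.

def train_bigrams(corpus):
    """Count unigram and bigram frequencies over whitespace-tokenized sentences."""
    unigrams = defaultdict(int)
    bigrams = defaultdict(int)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for w1, w2 in zip(tokens, tokens[1:]):
            unigrams[w1] += 1
            bigrams[(w1, w2)] += 1
    return unigrams, bigrams

def acceptability(utterance, context, unigrams, bigrams, vocab_size, alpha=0.5):
    """Length-normalized log-probability of an utterance given a context prefix,
    with add-alpha smoothing. Higher = more 'acceptable' on this toy
    operationalization; normalizing by the number of transitions keeps long
    utterances from being penalized merely for being long."""
    tokens = ["<s>"] + context.split() + utterance.split() + ["</s>"]
    logp = 0.0
    for w1, w2 in zip(tokens, tokens[1:]):
        num = bigrams[(w1, w2)] + alpha
        den = unigrams[w1] + alpha * vocab_size
        logp += log(num / den)
    return logp / (len(tokens) - 1)  # average over transitions

corpus = [
    "the dog chased the cat",
    "the cat chased the dog",
    "the dog slept",
]
unigrams, bigrams = train_bigrams(corpus)
vocab = {w for s in corpus for w in s.split()} | {"<s>", "</s>"}

# Same utterance, different contexts: the score is conditioned on context.
print(acceptability("the cat", "the dog chased", unigrams, bigrams, len(vocab)))
print(acceptability("the cat", "slept chased", unigrams, bigrams, len(vocab)))
```

The two scores differ only because the context differs, which is the point stressed above; a realistic model would condition on far richer features, including extralinguistic context, and nothing in this sketch settles the dispute over whether a separate notion of grammaticality is needed.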

Mark Seidenberg (link):

The issue that Edelman raises, about the status of "grammaticality" vs. "acceptability," has a long, fraught history, and I don't think it's ever been resolved adequately. Carson Schutze's book (The Empirical Base of Linguistics, 1996) is a good place to start. Two brief points on what has been an extensive discussion since the beginnings of generative linguistics.

1. About the "lack of independent evidence" for grammaticality that Edelman mentions: many theoretical linguists think it isn't required, for basic, principled reasons. Chomsky's argument from the 1970s went something like this (I can't find the quotes, but a better web searcher like Mark probably can). The simple sentences of the language aren't informative, since every theory can account for them, and so we need to look at edge constructions that are unusual and infrequently used ("Which pot is soup easy to cook in," and so on). That's where the information is. Then it turned out that grammaticality judgments for such sentences were often inconsistent even among experts (as in the sentence Geoff discussed; BTW both short and long versions are just fine for me). At that point, Chomsky made a very interesting move, which had enormous influence: he said we should look to the theory to help adjudicate the unclear cases.

This means that the critical evidence for grammaticality is theory-internal, based on criteria such as whether the analysis of a particular borderline sentence conforms to principles developed in connection with other sentences. A borderline construction might be judged grammatical if doing so avoided complicating the grammar, violating some other elegant bit of formal analysis, etc. Of course, this becomes highly circular: the complex sentences provide the data for the theory of what's grammatical, while the theory decides the cases where grammaticality judgments are unclear.

2. Grammaticality has a fuzzy structure: there are clear cases at the extremes, which seems encouraging, but then there are sentences that fill in the rest of the continuum. What to do with them? People maintain the idea that there should be a boundary in there someplace, and differ over where they place it and why, which generates endless discussion, prescriptivist fury, etc. Some recent theoretical work has introduced graded notions such as degrees of grammaticality. But I say, why bother? The important issue is what's going on in the black box that is comprehending and producing utterances; how the outputs of that box are sorted and judged seems less important to me. Which is why I am a psycholinguist, not a syntactician.

Grammars such as the one that Geoff co-authored are hugely useful, of course; I just don't think "grammaticality" has much theoretical force, or that it gets us far in understanding language acquisition and processing and their brain bases, or that it should have as much force as it has been assigned.

I don't mean to be polemical here; there is a lot more to be said on both sides of these issues, but this is a comment on a blog, not a thesis.