On Interdisciplinary Collaboration and "Latent Personas"


This is a guest post by David Bamman, in response to the post by Dan Garrette ("Computational linguistics and literary scholarship", 9/12/2013).


The critique by Hannah Alpert-Abrams and Dan Garrette of our recent ACL paper ("Learning Latent Personas of Film Characters") and the ensuing discussion have raised interesting questions about the nature of interdisciplinary research, specifically between computer science and literary studies. Garrette frames our paper as "attempting to … answer questions in literary theory" and Alpert-Abrams argues that for a given work of this kind to be truly interdisciplinary, it "must be cutting edge in the field of literary scholarship too." To do truly meaningful work at the intersection of computer science and literary studies, they argue, parties from both sides need to be involved.

While I disagree with how Garrette and Alpert-Abrams have characterized our paper (as attempting to address literary theory), I fundamentally agree with their underlying point. I have a different understanding of how we get to that point, however; to illustrate this, let me offer here a different framing of our paper.

Prelude.

Before I jump in, let me preface this with a brief introduction to contextualize where my words are coming from. I'm a PhD student in Computer Science, but I come here by way of the humanities; my undergraduate background is in Classics, my Master's degree is in Linguistics, and I was a researcher at the Perseus Digital Library before CMU. In attempting to address questions in the digital humanities, I do not consider myself a cultural imperialist looking to march in and solve humanists' problems; I recognize that the research questions asked in this space are complex, messy, and don't often have answers that are easily verified. I know the tremendous value that comes not from finding "solutions" but from looking for more, and better, questions to ask.

What our algorithm does.

First, let me describe what our algorithm does. It takes a set of documents as input; in our particular case, those happened to be plot summaries of movies sourced from Wikipedia, but they could also be literary books, newspaper articles, blog posts, tweets, or emails. We use standard tools from natural language processing to recognize the people mentioned in those documents and store all the verbs for which they are the agent, all the verbs for which they are the patient, and all of the adjectives and other modifiers by which they are described.
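As a deliberately toy illustration of this representation, the agent/patient/modifier information for each character can be collected into a bag of (role, word) features. The character names and tuples below are invented, and the real pipeline derives such tuples automatically with NLP tools rather than by hand:

```python
from collections import Counter, defaultdict

# Hypothetical, hand-annotated stand-in for the output of a dependency
# parse over plot-summary sentences. Each tuple records a character, the
# role the character plays ("agent" = does the verb, "patient" = the verb
# is done to them, "mod" = adjective or other modifier), and the word.
parsed_tuples = [
    ("Ripley", "agent",   "fight"),
    ("Ripley", "agent",   "escape"),
    ("Ripley", "mod",     "resourceful"),
    ("Burke",  "agent",   "betray"),
    ("Burke",  "patient", "expose"),
    ("Burke",  "mod",     "corporate"),
]

def character_bags(tuples):
    """Collect, per character, a bag of (role, word) features."""
    bags = defaultdict(Counter)
    for character, role, word in tuples:
        bags[character][(role, word)] += 1
    return bags

bags = character_bags(parsed_tuples)
print(bags["Ripley"][("agent", "fight")])  # 1
```

These per-character bags are the raw material the model then clusters into personas.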

We then use all of that information to automatically infer a set of abstractions over people – what we call "personas" or character types — that capture how different clusters of people are associated with different kinds of actions and modifiers. We deliberately avoided calling these clusters "archetypes" because of its conceptual baggage and association with mystical universality. While we give due respect to Jung and Campbell both, there is no sense in which we believe that what we learn from this collection of summaries could possibly be described as universal. We are primarily describing a method.

A persona in this case is simply a way to cluster people based on how they are described; to take even the hint of mysticism out of this work, a sensible persona need not be The Hero or The Trickster; it could apply to generic group membership (e.g., Policeman or Firefighter) as well.

To explore how this method works in practice, we applied it to a set of movie plot summaries from Wikipedia. The choice to consider movies (as opposed to, e.g., news articles) was made both out of personal interest and an effort to illustrate how this algorithm might potentially be useful for researchers in the digital humanities (a community to which I feel I belong). The choice to use Wikipedia in particular as a testbed was made out of opportunity: the combination of Wikipedia and Freebase provides a large amount of structured data to learn from, and Wikipedia's writing style is more amenable to current state-of-the-art NLP technologies than other domains (such as dialogue or literary novels).

Everything should be critiqued.

One common theme among the comments on the previous post is the inherent bias of Wikipedia for the task of investigating film: Wikipedia plot descriptions are not "film"; they are descriptions of movies that naturally reflect the biases of the subpopulation that writes them (predominantly white, American males).

One thing I stress in the class I'm co-teaching is that in attempting to tackle a particular literary research question using quantitative methods, we always have to be aware of the gap that exists between the ideal, Platonic data that would in theory let us answer our question, and the fuzzy shadow of that ideal that constitutes the real data we have. If someone were to use both our method and Wikipedia data to ask a substantive question of film, then accounting for all of those biases is crucial for the larger argument being made (in addition to those already noted, I would add the caveat that the Wikipedia data also consists of descriptions of movies written in the 21st century; movie descriptions written in the 1950s might reflect a different cultural bias). Other data sources, likewise, have other biases (e.g., movie transcripts offer a more unmediated experience but also only record dialogue, not actions onscreen).

The critiques voiced so far seem largely to be directed at the assumptions underlying the data, but I would also point out that this particular algorithm, like all algorithms, also contains a number of other modeling assumptions that should likewise be challenged:

  • We define a "persona" as a mixture of different kinds of actions and attributes, which are operationalized here as syntactic agents, patients, and attributive modifiers. Are these the best or only source of information we can use? Are there biases inherent in this choice?
  • Similarly, we specify that each character in a plot summary is associated with a single persona that embodies many different characteristics. Other alternatives (with different modeling implications) involve allowing a person to don multiple personas, and doing away with personas altogether (letting a person be a simple mixture themselves).
  • From a modeling point of view, we adopt an "upstream" approach in which we condition on metadata in generating the latent personas, word topics, and word-role tuples; an alternative choice is to use a "downstream" approach in which that metadata is generated as well.
  • In this paper, we experiment with different numbers of personas and fine-grained semantic classes (from 25 to 100). This number is a choice, and the optimal number may be very different for different datasets.
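To make the second assumption concrete, here is a minimal sketch of hard, single-persona assignment: each character is given exactly one persona, and personas are re-estimated in alternation. This is a deliberately simplified stand-in (plain hard clustering with cosine similarity over the feature bags), not the paper's actual Bayesian model, and the character names and features are invented:

```python
from collections import Counter
import random

def cosine(c1, c2):
    # cosine similarity between two sparse count vectors (Counters)
    dot = sum(v * c2.get(k, 0) for k, v in c1.items())
    n1 = sum(v * v for v in c1.values()) ** 0.5
    n2 = sum(v * v for v in c2.values()) ** 0.5
    return dot / (n1 * n2) if n1 and n2 else 0.0

def hard_cluster_personas(bags, k, iters=10, seed=0):
    """Assign each character exactly one of k personas (hard clustering).
    bags: {character: Counter of (role, word) features}."""
    rng = random.Random(seed)
    chars = sorted(bags)
    # initialize each persona from a randomly chosen character
    centroids = [Counter(bags[c]) for c in rng.sample(chars, k)]
    assign = {}
    for _ in range(iters):
        # assignment step: each character joins its most similar persona
        assign = {c: max(range(k), key=lambda j: cosine(bags[c], centroids[j]))
                  for c in chars}
        # update step: each persona becomes the sum of its members' bags
        centroids = [sum((bags[c] for c in chars if assign[c] == j), Counter())
                     for j in range(k)]
    return assign

toy_bags = {
    "hero1":   Counter({("agent", "fight"): 2, ("mod", "brave"): 1}),
    "hero2":   Counter({("agent", "fight"): 1, ("mod", "brave"): 2}),
    "villain": Counter({("agent", "betray"): 2, ("mod", "evil"): 1}),
}
personas = hard_cluster_personas(toy_bags, k=2)
# with this toy data, the two heroes land in the same persona
```

Allowing a character to hold a distribution over personas, rather than a single hard label, is exactly the kind of alternative modeling choice the bullets above describe.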

Some of these choices may be obvious, others controversial, and all are certainly dependent on the exact question to be asked and what role the data has in a larger narrative argument being made. I bring up this list of assumptions for two reasons: first, we published our method at ACL, a computational linguistics conference, as a technical contribution in its own right, to be peer reviewed by people who would judge its technical merit (and directly critique these assumptions), before collaborating directly with humanists, who would have a different set of criteria for applying it to a real problem (as we have seen).

The second, and far more important, reason for enumerating this list of assumptions is this: for all of the discussions about hegemony and fearmongering that have arisen lately in the context of computer science intersecting with the humanities, many of us – computer scientists and humanists alike – are really on the same side, and have the same ends: we want to figure out how to apply quantitative methods in a way that's appropriate for research questions that actually have value. We don't see our methods as big hammers looking for more nails to bang on to prove how useful the hammer can be; we want to be discerning in their application, and aware of their flaws. This is not hegemony; this is a real desire to work together and appreciate the nuance of the questions we both care about.

Many ways of working together.

The crux of Hannah and Dan's piece seems to me to be that when attempting to work in an interdisciplinary space, as in the digital humanities (but this is no less true of computational social science, computational journalism, etc.), it's important to have representatives from both disciplines from the get-go. I agree that this is a great ideal. It's one that I occasionally achieve when the stars align, when I already have trust in a colleague from another discipline and impromptu conversations naturally lead to formal collaboration. Creating a new collaboration across disciplines ex nihilo, however, is much more difficult (and risky when that trust is absent), but I don't think those difficulties of direct collaboration should prevent us from engaging together in other ways. Interdisciplinary collaboration in my mind is much bigger than working together on a single paper; it's an ongoing conversation involving exactly what we're seeing here – publication, critique, and sowing the seeds of new work together. By keeping our publications and much of the conversation surrounding them in the public sphere, we draw on a variety of perspectives (e.g., critiques from people not only in literary studies but from a wide range of theoretical camps), which can help us not only improve our work, but provide a tangible archive of our progress.


Above is a guest post by David Bamman.



28 Comments

  1. Ben Zimmer said,

    September 17, 2013 @ 4:50 pm

    It's also worth mentioning that David was a co-author, with Jacob Eisenstein and Tyler Schnoebelen, on a paper demonstrating an admirable disciplinary crossover between computational linguistics and sociolinguistics: "Gender in Twitter: Styles, stances, and social networks." I discussed the paper in a Boston Globe column and Language Log followup. As I wrote in the latter,

    I hope that this research will encourage other partnerships between computational linguists and sociolinguists interested in taking advantage of "big data" megacorpora for studies of language variation. Focusing strictly on the computational side or on the social side just won't cut it in the new era of digital scholarship.

  2. Response on our movie personas paper | AI and Social Science – Brendan O'Connor said,

    September 17, 2013 @ 7:09 pm

    […] (2013-09-17): See David Bamman's great guest post on Language Log on our latent personas paper, and the big picture of interdisciplinary collaboration. I've […]

  3. David Golumbia said,

    September 17, 2013 @ 9:53 pm

    I do not think you are addressing the main problem, although it is not exactly the one that Alpert-Abrams and Garrette directly state. It is not that you didn't have representatives of the right field on your project (which I argue, and I think you agree, should be Film Studies, not literary studies); it is that your project simply does not engage with the scholarship of the field you are interacting with *at all.* How would you feel if the tables were turned–if film studies scholars started to do computational research, writing algorithms and developing answers to what they consider CS questions that are not found anywhere in the CS literature, and without reference to that literature? As a professor in an interdisciplinary program I never allow my students to do this: you MUST engage with the procedures of each discipline you engage with, or you are disrespecting the field itself and the very project of doing scholarly research.

    Your notion of "persona," appealing as it might be to common sense, is nevertheless not one we find in any recognizable form in the Film Studies literature. I don't know of any recent or current work that would support it, although I'm not 100% current in the Film Studies scholarship. The idea of "archetype" you mention without relying on it is one that is actively rejected in the scholarly literature. You have built a tool that addresses a non-problem to Film scholars. As such, its utility as a tool to address actual problems is at best entirely unclear. You don't need film studies scholars on your team, but if you mean to study film, you simply have to engage with the procedures of the scholarly study of film. I expect the same from people who engage with CS, with Linguistics, and with every other field, and I hope you do as well.

  4. Geoff Nunberg said,

    September 18, 2013 @ 12:19 am

    @David. Here's another way of putting the uneasiness that people feel about these undertakings: the problem with a lot of the research that describes itself as digital humanities is that it seems to take as its point of departure not the question "What do we want to know?" or "What is the research problem in this field that we're addressing?" but "What do we know how to do with our techniques and tools?" Bamman says this is not a hammer in search of a nail, but the paper cites no film theory, and no work on film at all apart from two screenwriting texts, from which they deduce, quite mistakenly, that a contrast between plot-driven and character-driven work, of which some screenwriters make much, is a central concern of film theory. Anyway, why frame the research in this way? Why bring in Goffman at the end, as if he would have found any of this interesting? It may be useful NLP research, but it wouldn't be any less so if the data were police crime reports rather than movie synopses, as the authors suggest. In short, there's nothing remotely "interdisciplinary" about this. Many years ago Hinrich Schuetze and I built a system (called Northrop, as it happens) that classified genres of articles about AIDS as news stories, editorials, etc. We could have used the same approach to classify novels by genre, but that wouldn't have suddenly turned the enterprise into literary study.

    What makes the work of scholars like Franco Moretti interesting even to those who are generally cool to these approaches is that they ground their research in issues that are already articulated in a literature that they're comfortably conversant with and try to show how digital methods can be brought to bear on these issues, often refining or amplifying them in surprising ways. But a paper that wholly ignores the literature of a scholarly field can't possibly expect to make a contribution to it, and if it pretends to do so it runs the risk of seeming merely callow.

  5. D.O. said,

    September 18, 2013 @ 12:41 am

    David Golumbia raises an interesting question. Is it really Film Studies that is engaged here? Ostensibly, the data source is a literary one: textual descriptions of film plots.
    Also, I don't see anything particularly wrong with the "hammer in search of nails" approach. If you have a good hammer, why not apply it to all the nails you can see? It's a different thing to call something a nail when it is not (and then apply the hammer), but so long as any given nail in question is legit, what difference does it make why a researcher brought the hammer?

  6. Bill Benzon said,

    September 18, 2013 @ 5:13 am

    Why not forget the philosophizing about how to do interdisciplinary work? I've been listening to interdisciplinary happy talk my entire adult life and the academic world is still dominated by a disciplinary structure whose outlines were set in place in the 19th century. To be sure, there are more interdisciplinary centers and institutes, etc., but the traditional disciplines pretty much rule.

    Your problem is a tactical one, David: How do you find people who are interested in your work, perhaps merely to cite it, perhaps to use your technique, or perhaps even to collaborate? More specifically, how do you find humanists? For that matter, are there any out there?

    On THAT tactical problem, it seems to me things are going well. Ted Underwood posted on your work and that generated a nice discussion. Now you're being discussed here. What more could you want?

    For example, there's a psychologist at Cornell, James Cutting, who's developed a fascinating line of work about film. Using a whole mess of sophisticated tech and a bunch of collaborators, he divides films into individual shots, measures the length of the shots, and then does various things with shot length in scenes, over whole films, over a bunch of films over time; trends emerge. I rather doubt that most folks in academic film studies would find this of interest at all. After all, it says zilch about systems of oppression and cultural specificity. But David Bordwell and Kristin Thompson find Cutting's work fascinating, and properly so (see this blog post). Perhaps they'd find your work interesting too. So send it to them.

    As a method of getting at what's going on in films, your main problem is that you're working from Wikipedia plot summaries. Well, is there any way to ameliorate that problem?

    In any event, the technique should work on an archive of literary texts or folk tales. That, I assume, is what Ted Underwood has in mind. Wanna' collaborate with him?

  7. Ted Underwood said,

    September 18, 2013 @ 5:42 am

    Much of the commentary above is premised on the notion that this paper pretends to make a contribution to film studies. I think that's a puzzling misreading.

    The paper develops a method for clustering representations of character in text. As they stress on the last page, the "testbed has been textual synopses of film," but the same method could be applied to fiction, or for that matter to nonfiction journalism, which (they shrewdly suspect) also relies on conventional character types in its representation of human action.

    That's not a weakness of the paper. It's not like they set out to write film scholarship, but then accidentally slipped and produced a method with such generality that it could also apply to nonfiction. They're computer scientists presenting a paper at a conference on computational linguistics. Of course they're primarily interested in generalizable NLP problems.

    But I'm a literary historian, and I think the method they developed could be useful in my field. In fact, it's very clear to me how it could be used to support literary history, and I intend to use it. Perhaps I'm crazy — but be that as it may, I teach in an English Department. How many conference papers published in computer science reshape the research agendas of scholars in English literature?

  8. Bill Benzon said,

    September 18, 2013 @ 7:13 am

    So, literary history. As I understand it, much of mainstream literary criticism is based on the New Historicism, which owes a heavy debt to Michel Foucault among many others. As I understand the technique from something of a distance – though I have read, and learned from, some essays by Stephen Greenblatt – it works by taking a text or handful of texts on the one hand and then looking at contemporary texts of all sorts, not merely literary texts, but newspaper articles, legal documents, diaries, what have you, and pointing out points of continuity, themes and resonances, between your target literary texts and their many textual contexts.

    Rohan Maitzen has written a wonderful little parody of the technique:

    Here's a mildly parodic (but fairly accurate) example of how it works. Suppose the text is a 19th-century realist novel–say, Barchester Towers, which I happen to be reading now. Imagine there's a scene with a dinner party at which pickles are served. Now, the immediate action of Barchester Towers has everything to do with the internecine rivalries of English clergyman and the moral and social crises flowing from them, and nothing to do with pickles, but now that we have noticed the pickles, it becomes irresistible to follow up on them. Lo and behold, nobody has done pickles yet (though I could give you quite a list of what has been done). So we produce a pickled reading. What are the cultural implications of pickles? Who could afford them, and who could not? Were pickling techniques perhaps learned abroad, maybe in the chutney-producing regions of the eastern empire? Or maybe pickling was once a cottage industry and has now been industrialized. We learn all about these issues and make that jar on the table resonate with all the socio-economic and cultural meanings we have uncovered. Though the pickles seemed so incidental, now we realize how much work they are doing, sitting there on the table. (Who among us has not heard or read or written umpteen versions of this paper?) And perhaps we are right to bring this out–after all, for whatever known or felt reason, Trollope saw fit to put pickles there and not, say, oysters or potatoes. But do we really understand more about Barchester Towers, or just more about pickles–not in themselves, but as symptoms of industrialism, colonialism, or bourgeois taste in condiments? It's not that our pickle paper might not be interesting or, indeed, accurate in all the conclusions it draws about the symptomatic or semiotic or other significance of the pickles. But it's hard not to feel somehow that such an analysis misses the point of the book and thus has a certain intrinsic irrelevance.

    Not only that, but it pretty much misses history, though this brand spanking new literary history places itself before the world as a species of history.

    Thus David Perkins writes (Is Literary History Possible? 1992, p. 131):

    Between the context and the event it explains, a continuity or causal connection must be posited. Yet most modern literary histories are generally committed to models of the real that posit discontinuity between events. They use contextual studies to dissolve historical generalizations. As they expose the weltering diversities and oppositions in the field of objects they consider, the continuities of traditional literary history vanish like ghosts at dawn. Thus context is deployed not to explain literary history but to deconstruct the possibility of explaining it.

    That is, we use context as a device for slicing cultural change into discrete segments which, though contiguous in time, are causally isolated.

    So, what has any of this to do with NLP? Well, NLP allows us to examine large piles of texts, thousands of them, and see what's in them, albeit in a very abstracted way. We can treat the texts as existing contemporaneously (effectively outside of time) or we can bin them into time slots to see if there is any change from one bin to the next. If so, then we've got a description of historical change of a kind we've never had before.

    And it is only that, a description. That description explains nothing. On the contrary, it needs to be explained. Just how do we do that? Answering that question, I warrant, is going to be very interesting.

    But the techniques of NLP allow us to put the question of literary history on the table, once again, as something to be understood and explained. That's no small contribution.

  9. Daniel Allington said,

    September 18, 2013 @ 7:17 am

    Three comments from the original thread:

    David Golumbia: ‘Sitting (mostly) on the literary side of things, I am fairly impatient with literary scholarship that takes on bits of linguistic work without being responsible toward the underlying disciplinary procedures; I am no less impatient with the relationship going the other way.’

    Dan Garrette: ‘Both David Bamman and Brendan O’Connor have stated that their work was not intended as a contribution to literary study. My question, then, is: Why not? Given that they are interested in designing models to analyse literature, why not seek out the questions of contemporary literary (or film) study, and attempt to address those?’

    Brendan O’Connor: ‘Because research is hard, and it's harder to do two things at once instead of one. I think it's better to either (1) focus on methods while being somewhat informed by substantive issues, or (2) focus on substantive while utilizing well-proved methods. // When I was younger and more naive, I thought that great interdisciplinary scholarship should be cutting-edge on both sides at once — as you and Hannah Alpert-Abrams seem to demand. I now think that is extremely hard and unrealistic to expect in all cases.’

    I think what we’re dealing with here is the difference between, on the one hand, actual interdisciplinarity (which is very difficult to achieve and virtually unrewarded), and, on the other hand, the old phenomenon of researchers from one discipline deciding to adopt an object of study that has traditionally been considered to fall within the remit of another discipline. I see a lot of the latter in the discipline that I mostly teach, i.e. applied linguistics: I’ve sat through an excessive number of conference sessions – even, and perhaps especially, keynote lectures – that essentially consisted of applied linguists demonstrating, to an audience of other applied linguists, that the methodologies linguists have developed for studying language are in fact the best possible methodologies for studying everything else as well, from child development to politics to social media to Romantic poetry. C.f. Ted Underwood: ‘They’re computer scientists presenting a paper at a conference on computational linguistics.’ This sort of thing probably goes on in every discipline; I don’t imagine that linguists (computational or applied) are particularly to blame. Usually such work is ignored outside the discipline in which it was produced, and no-one particularly cares.

    What is – I think – interesting in this case is that because this particular piece of computational linguistic research was picked up by the blogosphere, it’s provided an opportunity for people to discuss different models for actual interdisciplinary work (which this is definitely not, as Geoff Nunberg points out). One is cross-disciplinary collaboration (as Hannah Alpert-Abrams and Dan Garrette put it in their original post, ‘Just take one of us out for coffee’). Another is for researchers based within the discipline whose territory is being infringed to appropriate the methods of the infringer (Ted Underwood, feel free to recoil in horror at being paraphrased in this appalling and insensitive way). A third is for those who take an interest in an object already studied within a discipline not their own to take an interest not only in the object, but also in the discipline (this being David Golumbia’s prescription).

    It might also be worth thinking about why those three things are so rare in practice. I don’t share the position David Bamman expresses in the final paragraph of his article above, but I have some sympathy with it.

  10. Geoff Nunberg said,

    September 18, 2013 @ 10:05 am

    @Ted :" Much of the commentary above is premised on the notion that this paper pretends to make a contribution to film studies. I think that's a puzzling misreading:"

    The paper begins:

    Philosophers and dramatists have long argued whether the most important element of narrative is plot or character. Under a classical Aristotelian perspective, plot is supreme [1]; modern theoretical dramatists and screenwriters disagree [2].

    Here in a footnote it provides brief quotes from two screenwriting manuals, the first of which, from 1946, says that Aristotle was mistaken and that "our scholars are mistaken today when they accept his rulings concerning character," and the second of which, more recent, says only that "What the reader wants is fascinating, complex characters." Those are the only works on film or literary narrative that are cited. The authors go on:

    Without addressing this debate directly, much computational work on narrative has focused on learning the sequence of events by which a story is defined… We present a complementary perspective that addresses the importance of character in defining a story. Our testbed is film. Under this perspective, a character’s latent internal nature drives the action we observe.

    It isn't unreasonable to conclude from this that the authors expect their work to bear on a central issue of film or drama studies and that it will support the "modern view" of the plot-character relation. If I were in either of these fields, I would conclude on finishing the paper that the authors are under the impression that the remarks of two screenwriters, one from almost 70 years ago, can be read as fair characterizations of the views and preoccupations of modern film theory–rather like quoting Strunk and White and a modern guide to writing to document the contemporary linguistic understanding of the sentence. It's this framing, not the paper itself, that makes the authors sound naive and presumptuous — quite unnecessarily, since that framing is wholly irrelevant to any genuine contributions the paper might make to NLP.

  11. Ted Underwood said,

    September 18, 2013 @ 1:34 pm

    @Geoff: To be sure, if I were writing the paper, I would use different literary framing. In literary studies, citing Aristotle and Joseph Campbell is like showing up at the party wearing bell-bottom jeans.

    But I agree with you that this "framing is wholly irrelevant to any genuine contributions the paper might make to NLP." So why dwell on it?

    The literary/film-studies framing isn't essential to the argument; it's just a well-intentioned but unsuccessful effort to reach out to a broader audience. What should matter to humanists is whether the underlying NLP model could in fact be useful for their work.

    David Bamman has engaged that question above by raising some very good skeptical points about his own model (e.g., do characters actually have a single "persona" in his sense, or can their roles change?). I wish more commentary on the paper would engage those kinds of substantive questions instead of dwelling on framing gestures that we agree are immaterial.

  12. Jonathan Mayhew said,

    September 18, 2013 @ 1:57 pm

    Framing is very significant for Humanities articles. By framing their research in terms of Aristotle and archetype theory, they are losing more sophisticated readers. Of course stereotypes are by definition highly recognizable and familiar, so the technique of mining huge amounts of data to find them seems a little backwards.

  13. Ken Brown said,

    September 18, 2013 @ 4:57 pm

    So if they hadn't used loaded terms like "persona" or "archetype" but coined their own, would they have stepped on fewer toes?

    Maybe "crowdsourced descriptomes"? [eeeugh!]

    Is it that any academic field has so many nooks and crannies of history that anyone who hasn't paid their dues is bound to misuse the local vocabulary in ways that imply things they did not mean to imply to those who have lived in that universe of discourse?

  14. Geoff Nunberg said,

    September 18, 2013 @ 5:16 pm

    @Bill: "As I understand it, much of mainstream literary criticism is based on the New Historicism, which owes a heavy debt to Michel Foucault among many others. As I understand the technique from something of a distance – though I have read, and learned from, some essays by Stephen Greenblatt – the technique works by taking a text or handfull of texts on the one hand and then looking at contemporary texts of all sorts, not merely literary texts, but newspaper articles, legal documents, diaries, what have you, and pointing out points of continuity, themes and resonances, between your target literary texts and its many textual contexts."

    This is a little off-base. I'm not by any means a literary historian, either. I do however serve on the editorial board of Representations, which was founded in 1988 by Greenblatt, Catherine Gallagher, and others at Berkeley as a flagship journal for the New Historicism. Twenty-five years later, the journal is still flourishing, and if I may, still influential, but to claim that the New Historicism remains the basis of "mainstream literary criticism," or indeed, that it ever was, is off the mark — and indeed, I can't think of any of its progenitors who would still find it a useful label for their work. As for what it is or was about, it's not a great idea to try to reverse-engineer a theory from a brief parody of its excesses, or to suggest on the basis of a summary of its principles from a secondary source that NLP obviously provides a revolutionary approach to the central problems. This stuff is hard, complicated, and requires one to be familiar with a substantial literature from the inside.

    I don't mean to suggest that NLP or other computational tools can't contribute importantly to these kinds of research (and at Reps we're always on the lookout for just this sort of paper). I already mentioned the work of Franco Moretti, and could have added many other people (I recently read and was very impressed by Matt Jockers's Macroanalysis, for example.)

  15. JS said,

    September 18, 2013 @ 9:00 pm

    "By framing their research in terms of Aristotle and archetype theory, they are losing more sophisticated readers."

    No, they are losing less sophisticated readers: those who are unwilling even to consider that a study may hold something of potential value to them if it doesn't happen to rub their particular intellectual erogenous zones.

  16. Jonathan Mayhew said,

    September 19, 2013 @ 10:12 am

    You establish or lose your credibility with your readers in the first paragraphs of a scholarly article. The framing is crude, not because it mentions Aristotle, but because it does so in a crude and clunky contrast with contemporary screenwriters. It is just plain amateurish and shows the writers to be out of their depth. It's not even a matter of not quoting more up-to-date literary theory, but of a general cluelessness about the existing state of the field to which they want to contribute. It comes off as arrogant.

  17. Bill Benzon said,

    September 19, 2013 @ 10:17 am

    @Geoff: First, I note that the reply that’s currently up is somewhat moderated from the one you had originally posted. I thank you for that moderation.

    However, it leaves me with a rhetorical problem. In that original comment you pretty much said, or implied, that as an outsider to the field of literary criticism, I didn’t know what I was talking about. Remnants of that view do persist in the current comment, and so I’ll proceed on that basis.

    The situation is worse than you think. I was in fact trained as a literary critic, though not specifically a literary historian (lit crit, like linguistics, has many mansions), and so I am something of an insider. Sorta’. If I’ve unfairly maligned new historicism or whatever it is, I’ve not done so out of flat out ignorance. I’ve done so because I’ve spent a fair amount of time and effort thinking about literary history and because I came up in the kinds of intellectual institutions that gave rise to those approaches to criticism.

    I got my degree in the English Department at SUNY Buffalo in the mid-1970s, the glory years, as Bruce Jackson puts it in this piece. The department was a large one and arguably one of the finest in the nation at that time; without a doubt it was the best and most intellectually various experimental program. Which is why they allowed me to spend a great deal of time in the Department of Linguistics learning some cognitive science under the tutelage of David Hays who, as I’m sure you know, was one of the first generation of researchers in machine translation and, hence, one of the founders of computational linguistics.

    In thus committing myself to cognitive science – much of my dissertation was a quasi-technical exercise in knowledge representation – I thought I was blazing new territory for literary studies. It wasn’t until the mid-1980s or so that it became apparent, not only that I was working alone but that, if I continued on (which I did), I would find no professional support. Like Wile E. Coyote in the Roadrunner cartoons, I’d walked off the edge of a cliff and was able to keep forging ahead only by virtue of not looking down.

    As for literary history, it comes in various forms, many of which I do know something about though I’ve not specialized in any of them. Thus when I reference Michel Foucault, I’m not referencing a thinker I’ve never read. I read him as an undergraduate at Johns Hopkins in the late 1960s (The Order of Things, The Archaeology of Knowledge, somewhat later, the first volume in his history of sexuality), along with Philippe Aries (The History of Childhood), Thomas Kuhn (The Structure of Scientific Revolutions), Friedrich Nietzsche (The Birth of Tragedy), Walter Wiora (Four Ages of Music), Jean Piaget (Genetic Epistemology), and others. While that’s a rather eclectic list, all of those works raise questions about the history of mentalities (histoire des mentalités), an interest I continued to work on in graduate school (with, e.g., readings on the history of the family).

    Given that Piaget is best known as a developmental psychologist, he might seem something of an outlier in that list. But he was interested in the historical evolution of concepts, in which he saw a process of reflective abstraction similar to that he saw at work in the psychological development of individuals. Crudely put, later conceptual structures emerge from earlier ones as those earlier structures become objects on which later structures can operate. Hays and I took that notion, reconstructed it in our own terms, and published a number of articles based on that idea; Hays also published a book on the history of technology (all available online HERE).

    One of the chapters of my dissertation took that idea and applied it to narrative. Some years later I reworked that material into an article, The Evolution of Narrative and the Self, which I published in the Journal of Social and Evolutionary Systems (1993, Vol. 16, No. 2, pp. 129-155). Here’s a small passage from that article:

    We begin with Hamlet. While Shakespeare's version is the one we know best, the story is considerably older. The version in the late twelfth century Historica Danica of Saxo Grammaticus (reprinted in Hoy, 1963, pp. 123-131) is [quite] different from Shakespeare's. … Amleth—for that is how Saxo named him—faced the same requirement Hamlet did, to avenge his father's death. His difficulty stems from the fact that the probable murderer, and therefore the object of Amleth's revenge, is his uncle, and thus from the same kin group. Medieval Norse society had legal provisions for handling murder between kin groups; the offended group could seek the death of a member of the offending group or ask for the payment of wergild and a public apology. But there were no provisions for dealing with murder within the kin group (Bloch, 1961, pp. 125-130; cf. Eibl-Eibesfeldt, 1974, p. 226). Thus Amleth faced a situation in which there was no socially sanctioned way for him to act. … Amleth deals with his problem by feigning madness. Being mad, he is not bound by social convention, a social convention which binds him both to his murdered father and the father's murderer. Amleth's madness allows him to act, which he does directly and successfully. He kills his uncle, the usurper, and his entire court and takes over the throne.

    Shakespeare's Hamlet was not so fortunate. He is notorious for his inability to act. When he finally does so, he ends up dead. And whether his madness was real or feigned is never really clear. What happened between the late twelfth century version of the story and the turn-of-the-seventeenth century version? The change might, of course, be due merely to the personal difference between Saxo Grammaticus and William Shakespeare. However, European culture and society had changed considerably in that interval and thus to attribute much of the difference between the two stories to the general change in culture is not unreasonable. Saxo Grammaticus told a story to please his twelfth century audience and Shakespeare told one to please his audience of the seventeenth century.

    Something had happened which made Amleth's madness ploy less effective. An individual can escape contradictory social demands by opting out of society. But if the contradictory demands are within the individual, if they are intrapsychic, then stepping outside of society won't help. If anything, it makes matters worse by leaving the individual completely at the mercy of his/her inner contradictions, with no contravening forces from others. That, crude as it is, seems to me the difference between Amleth and Hamlet. For Amleth, the problem was how to negotiate contradictory demands on him made by external social forces. For Hamlet, the contradictory demands were largely internal, making the pretense of madness but a step toward becoming, in reality, mad.

    What had happened between Saxo Grammaticus and Shakespeare? Well, there was some kind of historical process, almost all details of which are lost. But that’s not the question that’s been nagging at me for years. What I’m arguing in that paper is that Shakespeare’s mind was different from Saxo Grammaticus’s mind and that they were writing for populations with distinctly different capacities. In particular, one can have a mind that’s adequate to Amleth, but not to Hamlet; but a mind that can grasp Hamlet will necessarily also grasp Amleth.

    That, of course, is speculation, and I don’t pretend otherwise. But sometimes speculation is the only way to move forward. An argument that takes that speculation seriously and shows it to be wrong would be just as useful as one that fleshes it out in greater detail.

    But what, you might be asking yourself, does this have to do with the so-called new historicism and, for example, Stephen Greenblatt? Well, in his collection, Learning to Curse (1990), there’s a chapter, “The Cultivation of Anxiety: King Lear and His Heirs”. It opens with a passage from an 1831 article on child rearing by one Rev. Francis Wayland and goes on to explore how “Wayland’s struggle is a strategy of intense familial love, and it is the sophisticated product of a long historical process whose roots lie at least partly in early modern England, the England of Shakespeare’s King Lear” (p. 82). Later on, a chapter entitled “Psychoanalysis and Renaissance Culture” contrasts the conception of the self implicit in the story of Martin Guerre in 16th Century France with the conception of the self implicit in Freud’s psychoanalytic theorizing. Those conceptions are very different.

    In both of those cases there is a difference between two historical situations that seems to me to be similar to the difference between the Saxo Grammaticus story of Amleth and Shakespeare’s later story about Hamlet. In all cases there is a historical process, consisting of many readings by many people over decades and centuries and interleaved with many writings as well. In that process, I suggest, new mental structures get built. I would like to see: 1) an account of those structures, and 2) an account of how they got built over time and across a succession of populations. I have the barest beginning of the first. It’s not clear to me that Greenblatt, or others, even recognize the questions.

    And THAT’s why I quoted David Perkins, Is Literary History Possible? (1992), and then went on to assert that this new literary history uses “context as a device for slicing cultural change into discrete segments which, though contiguous in time, are causally isolated”. Is that overly simple and crude? Yes. But I also think it’s a reasonable statement of the core of what’s going on.

    As a crude parallel, there’s more to the geometric constructions of geocentric astronomy than the idea that all other bodies revolve around the earth. But that’s the central aspect of the construction; most of the cycles and epicycles exist to compensate for the implications of that decision (and the decision that orbits must be circular rather than elliptical). Until you fix that, the other fixes don’t matter.

    That still leaves us with one last question: What does this have to do with NLP? While I do have more to say on that point, I’ve got other things to do. So I’ll end by repeating what I said previously:

    Well, NLP allows us to examine large piles of texts, thousands of them, and see what's in them, albeit in a very abstracted way. We can treat the texts as existing contemporaneously (effectively outside of time) or we can bin them into time slots to see if there is any change from one bin to the next. If so, then we've got a description of historical change of a kind we've never had before.

    And it is only that, a description. That description explains nothing. On the contrary, it needs to be explained. Just how do we do that? Answering that question, I warrant, is going to be very interesting.

    What I think is that going beyond those descriptions is going to force scholars to think about texts, and the social processes of sharing texts, in new ways. Refitting Marx, Freud, Foucault, Lacan, et al. won’t work this time around.
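    The "bin them into time slots" idea sketched above is simple enough to make concrete. The code below is a minimal illustration under wholly invented assumptions (the 50-year bin size, the four-line toy corpus, and the two tracked words are all made up for the example); it is not a reconstruction of any study discussed in this thread.

```python
from collections import Counter

def bin_texts_by_period(docs, bin_size=50):
    """Group (year, text) pairs into time bins of `bin_size` years
    and tally word frequencies per bin."""
    bins = {}
    for year, text in docs:
        start = (year // bin_size) * bin_size
        bins.setdefault(start, Counter()).update(text.lower().split())
    return bins

def relative_freq(counter, word):
    """Share of a bin's tokens accounted for by `word`."""
    total = sum(counter.values())
    return counter[word] / total if total else 0.0

# Invented miniature corpus: watch two words trade places across bins.
docs = [
    (1710, "the soul and the passions of the soul"),
    (1740, "the soul endures while passions fade"),
    (1890, "the self and the mind of the self"),
    (1920, "the self reflects upon the mind"),
]
bins = bin_texts_by_period(docs)
for start in sorted(bins):
    print(start,
          round(relative_freq(bins[start], "soul"), 3),
          round(relative_freq(bins[start], "self"), 3))
```

    With real corpora one would normalize tokenization and compare the binned distributions statistically, but the point stands either way: time-sliced descriptions of this kind are easy to produce, and, as the comment says, they explain nothing by themselves.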

  18. J. W. Brewer said,

    September 19, 2013 @ 11:55 am

    Different people trying to contribute to the transformation (via use of quantitative methods and data-crunching) of literary scholarship from the outside will probably find different tactics and rhetorical strategies appropriate, depending on what they are trying to accomplish and who they are trying to accomplish it with. This should depend in part on their evaluation of what sort of people the prominent incumbents in the field are and whether they are worth cultivating/converting/collaborating with or are instead obstacles to be worked around, and also on evaluation of the career incentive structures facing junior scholars (i.e. are they more likely to do well in a horrible horrible job market by appearing to be part of some way-cool new trend that's such a discontinuity with the discipline's past that the old people can't really understand it but don't want to miss the boat by not having their department have someone who does, or is the better career strategy to appear to move more incrementally and respectfully by using new high-tech tools to provide better answers to what can be characterized as approximately the same sort of questions the old people always agreed were important in principle).

  19. Bill Benzon said,

    September 20, 2013 @ 9:16 am

    @Geoff: Your comment prompted me to take a quick run through back issues of Representations. There's a special issue from 2009 (Vol. 108, No. 1, Fall 2009), The Way We Read Now, that would be useful to computational linguists seeking to speak with literary critics.

    As I don't have institutional access I haven't been able to read any of the articles, but two look especially useful. The introduction to the issue seems to be the most useful piece: Steven Best and Sharon Marcus, Surface Reading: An Introduction:

    Abstract: In the text-based disciplines, psychoanalysis and Marxism have had a major influence on how we read, and this has been expressed most consistently in the practice of symptomatic reading, a mode of interpretation that assumes that a text's truest meaning lies in what it does not say, describes textual surfaces as superfluous, and seeks to unmask hidden meanings. For symptomatic readers, texts possess meanings that are veiled, latent, all but absent if it were not for their irrepressible and recurring symptoms. Noting the recent trend away from ideological demystification, this essay proposes various modes of "surface reading" that together strive to accurately depict the truth to which a text bears witness. Surface reading broadens the scope of critique to include the kinds of interpretive activity that seek to understand the complexity of literary surfaces—surfaces that have been rendered invisible by symptomatic reading.

    In this standard metaphor, surface, of course, is opposed to deep or hidden. Distance is another standard way literary critics have had of talking about what we're up to. When Moretti talks of "distant" reading he is, of course, deliberately contrasting that with the older notion of "close" reading.

    And then we have a piece by Geoffrey Harpham from an earlier issue (Vol. 106, No. 1, Spring 2009, pp. 34-62): Roots, Races, and the Return to Philology

    Abstract: Noting recent indications of a renewed interest in philology, this essay provides accounts of both the flourishing of philology in the nineteenth century and the abandonment by scholars of philology on methodological and moral grounds in the twentieth century. It contends that while traditional philology cannot be considered a worthy model for contemporary scholarship, neither can it be simply repudiated or ignored, for it continues to exert a powerful if largely unacknowledged influence on scholarly practice.

    Here's the opening paragraph:

    So little did Edward Said and Paul de Man have in common, so different and even opposed were their understandings of the methods and aims of scholarship, that it was easy to overlook points of contact and continuity between them. These came into sharp focus, however, with the posthumous publication of Said's Humanism and Democratic Criticism, whose central chapter was titled "The Return to Philology," the very same title that de Man had used more than twenty years earlier for one of his most programmatic and polemical essays. The current of agreement between these essays ran deep, beginning with their diagnoses of the state of criticism. Literary studies, they said, seemed to have lost sight of the object, so that the discourse of criticism was filled with windy pronouncements about what Said called "vast structures of power or . . . vaguely therapeutic structures of salutary redemption," statements referring not to texts, but, as de Man put it, to "the general context of human experience or history." They agreed, too, on the reason for this loss of focus: the decline of philology in professional training. Criticism without philology, they said, was nothing more than the professional form of the pleasure principle. Only a penitential return to philology, which Said described as the "detailed, patient scrutiny of and a lifelong attentiveness to" the text, would restore the integrity of scholarship (HDC, 61).

    Said and de Man, of course, were both major figures in literary criticism in the last half century.

    The discipline's two oldest American journals, PMLA (Publications of the Modern Language Association, founded in 1884) and MLN (Modern Language Notes, 1886), were founded as philology journals. Literary criticism as we've come to know it didn't creep into their pages until the second half of the 20th century or so.

  20. Brian Lennon said,

    September 21, 2013 @ 3:49 am

    @Bill: Your last comment elides something quite obvious about and fundamental to Said's work, something on which the article by Harpham that you cite is perfectly clear. Said's critique of Orientalism was a critique of eighteenth and nineteenth-century European philology as Orientalism. And Said's critique of Orientalism was also a critique of the afterlife of imperial European philology in the social-scientific "area studies" of a postwar U.S. security state.

    To be sure, Said's own so-called "return to philology," via Vico and Auerbach, was an attempt to salvage something from philology; but he believed that could be accomplished only by acknowledging, to its full extent, from the beginning and at every step along the way, the complicity of European philology in the European imperial project — rather than denying or ignoring it, as I would argue contemporary Digital Humanities enthusiasts do, in their (to me) politically clumsy attempts to appropriate historical philology (and for that matter, postwar social science) for legitimation. That denial or ignorance is one thing that I think very much holds the Digital Humanities back, today, despite its triumphalism, and particularly when it comes to gaining genuine and substantial, rather than merely grudging or opportunistic acceptance in the domain of literature, culture, and writing studies — a domain that was decisively and permanently transformed by Said's work and the work of many other advocates for decolonization in all its forms. The deeply tortured debate over "returns" to philology within Comparative Literature and European language studies (Harpham's article provides as good a citation trail for this debate as any I've read) only reminds us that there is a historically plain and plainly political reason for the decline of "philology" and the rise of a certain "criticism" in the postwar era of decolonization.

  21. Brian Lennon said,

    September 21, 2013 @ 4:50 am

    Addendum: In that context, the takeaway from Harpham's article, for me at least, is not the introductory preamble cited above, but this passage in which Harpham is explicitly critical of such "returns" to philology:

    Said and de Man, to take just two examples, might have considered this possibility before promoting a practice that had been intimately entangled with racist and anti-Semitic theories and practices. (56)

  22. Bill Benzon said,

    September 21, 2013 @ 8:11 am

    Interesting observations, Brian. As I indicated, I don't have full access to the articles, so I couldn't actually read Harpham's piece, just the opening paragraph. What you say about it makes it all the more valuable for DH folks. Gives them something to be wary and critical of.

  23. J.W. Brewer said,

    September 21, 2013 @ 9:46 pm

    Harpham's piece can be found in its entirety here: http://nationalhumanitiescenter.org/director/publications/rootsracesphilology.pdf. All but the last two pages of de Man's quite short essay can be read via google books preview (probably an entire copy is floating around the web somewhere, but I didn't devote more time to digging), and either de Man changes tack quite abruptly in those last two pages (certainly possible) or Harpham fails to engage with what de Man actually was saying and is simply miffed that de Man may have meant something rather different by "philology" than Harpham does (and thus does not use the term as pejoratively as Harpham does). What de Man does seem to be offering at least qualified praise for, however, seems like it might have some interesting resonances with the "surface reading" notion Bill B. alluded to above.

  24. Bill Benzon said,

    September 21, 2013 @ 10:32 pm

    Thanks for the link.

    I think this surface/depth trope (and the related one of distance/closeness) would be a good one for mutual discussion among literary critics and computational linguists. In lit crit we have 'symptomatic' readings that look for 'hidden' meanings (to which David Bordwell replies: "What hidden meanings? There's nowhere to hide" in his book, Making Meaning).

    In the NLP division of CL we have, for example, latent semantic analysis (LSA) as a way to investigate semantic structure, though the meaning it's looking for is pretty 'surfacy'. So we have a machine searching for surface meaning by looking at latent structure. How does a mere machine approximate the surface meaning of words by examining their contexts of occurrence? What does language have to be like for such a procedure to work?
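    The question of how a machine approximates word meaning from contexts of occurrence can be made concrete with a toy LSA sketch. Everything below is invented for illustration (the five words, the four-document count matrix, the choice of two latent dimensions), and NumPy is assumed to be available; it stands in for no particular paper's method.

```python
import numpy as np

# Toy term-document counts: rows = words, columns = documents.
words = ["ship", "boat", "ocean", "tree", "forest"]
X = np.array([
    [2, 0, 1, 0],   # ship
    [1, 1, 0, 0],   # boat
    [1, 1, 1, 0],   # ocean
    [0, 0, 0, 2],   # tree
    [0, 0, 1, 1],   # forest
], dtype=float)

# Truncated SVD: keep k latent dimensions of the count matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
word_vecs = U[:, :k] * s[:k]          # word positions in latent space

def cos(a, b):
    """Cosine similarity between two latent word vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

i = {w: n for n, w in enumerate(words)}
# Words that share contexts (ship, boat) end up closer in latent space
# than words that never co-occur (ship, tree).
print(cos(word_vecs[i["ship"]], word_vecs[i["boat"]]))
print(cos(word_vecs[i["ship"]], word_vecs[i["tree"]]))
```

    The latent dimensions are found purely from co-occurrence counts; nothing about 'meaning' goes in, yet a rough surface semantics comes out, which is exactly the puzzle the question poses.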

    And then we can go back to the ancient days, when Chomsky was talking about deep structure, a notion that Jonathan Culler picked up and packed into his Structuralist Poetics (1975), though it never took hold. What problem was Chomsky trying to solve when he posited those deep structures? And what does that have to do with current discussions of recursion and the Pirahã? And recursion could take us to Jakobson's metalingual function, which puts us in classic structuralist territory, but one can also fork over to deconstruction (by which I mean the practice of competent critics and not the debased coin that circulates these days) and mise en abyme.

    Of course, if you keep pushing on this you're going to end up with The Ancients, who believed that the phenomenal world was one of illusion, that Reality is Hidden. That's too weak and general for our purposes, but we need to know that it's out there.

  25. Bill Benzon said,

    September 23, 2013 @ 10:27 am

    I've taken some of my remarks above, edited them, packaged them with new material fore (a passage from Tristes Tropiques) and aft (a statement about culture as a driving force in history) and posted the resulting bit of bricolage at New Savanna as Notes Toward a Naturalist Cultural History. You might want to scoot over to Ted Underwood's blog to see a graph showing a steep decline in first-person narration straddling the turn of the 19th Century.

    And, there's this, where Latent Semantic Analysis and Google n-grams meet the history of mentalities:

    Carlos G. Diuk, D. Fernandez Slezak, I. Raskovsky, M. Sigman, and G. A. Cecchi, A quantitative philology of introspection, Frontiers in Integrative Neuroscience, 2012; 6: 80; Published online 2012 September 24. Prepublished online 2012 August 14. doi: 10.3389/fnint.2012.00080

    Abstract: The cultural evolution of introspective thought has been recognized to undergo a drastic change during the middle of the first millennium BC. This period, known as the “Axial Age,” saw the birth of religions and philosophies still alive in modern culture, as well as the transition from orality to literacy—which led to the hypothesis of a link between introspection and literacy. Here we set out to examine the evolution of introspection in the Axial Age, studying the cultural record of the Greco-Roman and Judeo-Christian literary traditions. Using a statistical measure of semantic similarity, we identify a single “arrow of time” in the Old and New Testaments of the Bible, and a more complex non-monotonic dynamics in the Greco-Roman tradition reflecting the rise and fall of the respective societies. A comparable analysis of the twentieth century cultural record shows a steady increase in the incidence of introspective topics, punctuated by abrupt declines during and preceding the First and Second World Wars. Our results show that (a) it is possible to devise a consistent metric to quantify the history of a high-level concept such as introspection, cementing the path for a new quantitative philology and (b) to the extent that it is captured in the cultural record, the increased ability of human thought for self-reflection that the Axial Age brought about is still heavily determined by societal contingencies beyond the orality-literacy nexus.

  26. Bill Benzon said,

    September 23, 2013 @ 10:52 am

    This just came in over the transom. I know nothing about it, but the Benjamins website has it tagged for corpus linguistics.

    Metaphor across Time and Conceptual Space: The Interplay of Embodiment and Cultural Models (John Benjamins)

    Author: James J. Mischler, III

    Description: Contemporary linguistic forms are partially the product of their historical antecedents, and the same is true for cognitive conceptualization. The book presents the results of several diachronic corpus studies of conceptual metaphor in a longitudinal and empirical “mixed methods” design, employing both quantitative and qualitative analysis measures; the study design was informed by usage-based theory. The goal was to investigate the interaction over time between conceptualization and cultural models in historical English-speaking society. The main study of two linguistic metaphors of anger spans five centuries (A.D. 1500 to 1990). The results show that conceptualization and cultural models—understood as non-autonomous, encyclopedic knowledge—work together to determine both the meaning and use of a linguistic metaphor. In addition, historically a wide variety of emotion concepts formed a complex cognitive array called the Domain Matrix of emotion. The implications for conceptual metaphor theory, research methodology, and future study are discussed in detail.

    Pages: xv, 237 pp.

    Table of contents

    Tables and figures

    Part I. Theoretical foundations

    Chapter 1. The Cognition-Culture interface

    Chapter 2. Diachronic aspects of synchronic concepts

    Chapter 3. Metaphor across historical time

    Part II. A macro-study of human emotion in cultural context, A.D. 1500–1990

    Chapter 4. Research questions and methodology

    Chapter 5. Results of the ancillary study of non-linguistic data

    Chapter 6. The main study of two diachronic metaphors of anger

    Part III. Micro-studies of emotion – the 19th century

    Chapter 7. The edge of anger: The spleen metaphor across emotion domains

    Chapter 8. Bubbling happiness: Properties of emotion

    Part IV. Conclusions and implications

    Chapter 9. The non-autonomous nature of cognition, language, and culture

    Epilogue. “Bridging the Gap” between theory and real-world language use

    References. The historical Four Humors texts with brief annotations

    Appendices

    Index

  27. Sockatume said,

    September 30, 2013 @ 9:52 am

    I like the debate on interdisciplinary work.

    However I'm having a hard time understanding the nature of the objections to the paper's results. To use an analogy, if I were to develop a machine that allows one to broadly classify foods as sweet, savoury, aromatic etc. on the basis of their chemical components, I don't think that it would provoke astounded objections that it ignored the entire field of culinary writing. Unless I had gone on the road claiming that the machine rendered food critics obsolete, or something.

    At the risk of being provocative, do humanities scholars assume that when a scientist presents a tool, he's presenting the be-all and end-all of that analysis?

  28. Alon Lischinsky said,

    October 11, 2013 @ 10:14 am

    @Sockatume:

    To use an analogy, if I were to develop a machine that allows one to broadly classify foods as sweet, savoury, aromatic etc. on the basis of their chemical components, I don't think that it would provoke astounded objections that it ignored the entire field of culinary writing.

    It's a good analogy, and it shows precisely where things go wrong. Bamman et al.'s method is not a machine that classifies foods, although they think it is. It's a machine that classifies how people talk about food under very specific social and technical constraints. And it is well known in linguistics, even of the more computational sort, that the way people talk about a topic in a certain context does not neatly correlate with the way they do in a different context, let alone with any intrinsic properties of the topic itself.

    This means that, even if their method allows them to reliably model their object in terms of a classification, what is being modeled is not plots — it's summaries of plots. Observed regularities may be due to the constraints of the plot-summary genre, or Wikipedia writing conventions, or the stylistic preferences of the demographic that predominantly contributes to Wikipedia. There is no assurance that they correspond to regularities in the films themselves, or even that the same method would yield similar results when analysing equivalent plot summaries from IMDb. In fact, the method provides no evidence that there is such a thing as personas characterised by specific kinds of actions and modifiers, at least in the films themselves — it could all be an artifact of the research method and the dataset. (I feel that is the point that Ted Underwood above is missing.)

    People working in data-driven approaches to literary language (such as Michaela Mahlberg, who's done reams of work analysing Dickens' fiction with quantitative techniques) are acutely aware of these issues. Bamman et al.'s failure to consider them means that their method will be irrelevant to the people who would be best positioned to understand and apply it, and generally misrepresents the nature and extent of their contribution.

RSS feed for comments on this post