Tonal relationships and emotional effects


I'm a bit pressed for time this morning, so discuss among yourselves: Daniel L. Bowling et al., "Major and minor music compared to excited and subdued speech", Journal of the Acoustical Society of America, 127(1): 491–503, January 2010.  The abstract:

The affective impact of music arises from a variety of factors, including intensity, tempo, rhythm, and tonal relationships. The emotional coloring evoked by intensity, tempo, and rhythm appears to arise from association with the characteristics of human behavior in the corresponding condition; however, how and why particular tonal relationships in music convey distinct emotional effects are not clear. The hypothesis examined here is that major and minor tone collections elicit different affective reactions because their spectra are similar to the spectra of voiced speech uttered in different emotional states. To evaluate this possibility the spectra of the intervals that distinguish major and minor music were compared to the spectra of voiced segments in excited and subdued speech using fundamental frequency and frequency ratios as measures. Consistent with the hypothesis, the spectra of major intervals are more similar to spectra found in excited speech, whereas the spectra of particular minor intervals are more similar to the spectra of subdued speech. These results suggest that the characteristic affective impact of major and minor tone collections arises from associations routinely made between particular musical intervals and voiced speech.
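
For those who can't get at the paper right away, here is a rough illustration of what "frequency ratios as measures" involves for the intervals in question. This is my own sketch in Python, assuming just intonation for concreteness; it's a guess at the flavor of the analysis, not the authors' actual code:

    # Just-intonation ratios for the intervals that distinguish major
    # from minor tone collections (my illustration, not the paper's).
    JUST_RATIOS = {
        "minor 2nd": (16, 15), "major 2nd": (9, 8),
        "minor 3rd": (6, 5),   "major 3rd": (5, 4),
        "minor 6th": (8, 5),   "major 6th": (5, 3),
        "minor 7th": (9, 5),   # 16:9 is another common tuning
        "major 7th": (15, 8),
    }
    for name, (p, q) in JUST_RATIOS.items():
        print(f"{name}: {p}:{q} = {p / q:.4f}")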

As always, comments are likely to be more interesting if you read the paper and figure out what they did before expressing an opinion. You might also find it interesting to compare and contrast this work with earlier treatments of the same question.

[I'll explain and discuss what they did in another post, when I have a spare 45 minutes or so.]



16 Comments

  1. uberVU - social comments said,

    January 19, 2010 @ 9:07 am

    Social comments and analytics for this post…

    This post was mentioned on Twitter by PhilosophyFeeds: Language Log: Tonal relationships and emotional effects http://goo.gl/fb/o2jL

  2. Vance Maverick said,

    January 19, 2010 @ 9:40 am

    I wish the article weren't behind a paywall. This gives only a taste, and makes one wonder whether they controlled for other variables — and used realistic music and speech samples.

  3. Roger Lustig said,

    January 19, 2010 @ 11:23 am

I'd like to know what they consider to be "the characteristic affective impact of major and minor tone collections". Major/Happy, Minor/Sad? If that's where they're going, they're already lost.

Why are most Freilachs (Klezmer "happy" songs) in minor or similar modes? And what about those other components of a piece of music (texture, tempo, rhythm, meter) that convey affect? How universal are those, either across cultures or over time?

    Do they expect any consistency across languages? Which ones did they tune into?

    They may address all of these issues, but the phrase "characteristic affective impact" is a red flag. There have been zillions of studies of these sorts of things, from acoustical to psychoanalytic (the latter producing some real gems); almost all are based on assumptions ranging from flawed to bizarre.

  4. marc said,

    January 19, 2010 @ 3:14 pm

    Mr. Happy is correct that these experiments are often based on false assumptions, but even more importantly, emotional cues are far more complex than just 'sad' or 'happy'. Think of what it means to have a sad smile. There is a lot of music in major modes that fits that description. (Late Schubert anyone? Sade? The Blues?)

    And there might be music in speech, but music is not speech nor is it really a language, and from Chomsky on, all attempts to tie music too closely to speech have failed.

    I suppose this kind of research at such a basic level is necessary because what we know about the subject is so diffuse and unscientific, but still, it's hard to know what to actually do with the extraordinarily limited conclusions that these folks seem to reach.

  5. Forrest said,

    January 19, 2010 @ 3:16 pm

    Sadly I can't get at the paper, because I don't have an account with AIP.

    I'm curious about the methodology here, and what's actually being said. Like Roger Lustig before me, though, I'm sensing some red flags.

    Consistent with the hypothesis, the spectra of major intervals are more similar to spectra found in excited speech, whereas the spectra of particular minor intervals are more similar to the spectra of subdued speech.

    I'm not sure exactly what that's supposed to mean. What I do know, however, is that A minor and C major use the same set of notes, and therefore the same intervals. You just "start" in different places on the scale.
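
    [(myl) Forrest's point about relative keys is easy to verify. A quick pitch-class check, for concreteness (my sketch, unrelated to the paper's methods):

        # C major and A natural minor contain exactly the same pitch classes.
        def scale(root, steps):
            notes, cur = {root % 12}, root % 12
            for s in steps[:-1]:          # the last step returns to the octave
                cur = (cur + s) % 12
                notes.add(cur)
            return notes

        c_major = scale(0, [2, 2, 1, 2, 2, 2, 1])   # C = pitch class 0
        a_minor = scale(9, [2, 1, 2, 2, 1, 2, 2])   # A = pitch class 9
        print(c_major == a_minor)                   # True

    ]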

Also, anecdotally, the first thing that jumps to my mind is that Clair de Lune is in D-flat major and is downright subdued, while the Sonata Pathétique is in C minor and can only be described as excited. Two examples don't make a comprehensive data set, and "intensity, tempo, and rhythm" are mentioned specifically, so, hopefully, the authors would agree with my assessment of those pieces.

  6. Vance Maverick said,

    January 19, 2010 @ 3:48 pm

Wow, Roger Lustig! Just like old times on USENET. Anyway, what he said. I was wondering whether the famous transition from A major to A minor in Mahler's 6th would be associated with a clear spectral change in the signal… it seems vanishingly unlikely.

    Thanks, ML, for the copy of the paper. Will read before commenting further.

  7. Layra said,

    January 19, 2010 @ 8:51 pm

@Forrest: I'm guessing, having about as much access to the paper as other people here (which is to say, not much), that the major intervals would be the major second, third, sixth, and seventh (two, four, nine, and eleven semitones respectively), and the minor intervals the corresponding minor second, third, sixth, and seventh (one, three, eight, and ten semitones respectively).
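
    [(myl) For what it's worth, here is Layra's arithmetic spelled out in 12-tone equal temperament (my sketch, not the paper's):

        # Semitone widths of the intervals Layra lists; each minor
        # interval is one semitone narrower than its major counterpart.
        MAJOR = {"2nd": 2, "3rd": 4, "6th": 9, "7th": 11}
        MINOR = {"2nd": 1, "3rd": 3, "6th": 8, "7th": 10}
        assert all(MAJOR[k] - MINOR[k] == 1 for k in MAJOR)
        for k in MAJOR:
            print(f"{k}: major = {MAJOR[k]} semitones, minor = {MINOR[k]}")

    ]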

  8. Jeff DeMarco said,

    January 19, 2010 @ 9:35 pm

    It also depends on the relative tunings of the notes. I also am unsure as to what they mean by "spectra" in this context, but the audio spectrum (as that word is normally used) of a "pure" major third is different from a "tempered" major third, with quite a different affect. (Hi Roger!)
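
    [(myl) A quick numerical gloss on Jeff's pure-vs.-tempered point (my sketch; how tuning interacts with the paper's measures is a separate question):

        from math import log2

        just = 5 / 4              # "pure" (just-intonation) major third
        tempered = 2 ** (4 / 12)  # 12-tone equal-tempered major third
        cents = 1200 * log2(tempered / just)
        print(f"just {just:.4f}, tempered {tempered:.4f}, "
              f"difference {cents:+.1f} cents")   # about +13.7 cents

    ]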

  9. Coby Lubliner said,

    January 19, 2010 @ 10:47 pm

    Isn't this a subject that Plato and Aristotle wrote about?

[(myl) Not to speak of Descartes.]

  10. Vance Maverick said,

    January 20, 2010 @ 2:09 am

    If I'm understanding the paper right, they're taking as established an analogy between two sets of three frequencies: (1) two musical tones in a just interval, over their "implied fundamental", i.e. the highest tone of which both would be harmonics; and (2) F1 and F2 in a speech spectrum, over the real fundamental of the sound. Is this really widely accepted? Note that their strongest statistics for the melodies are (as one might have imagined) for intervals from melodic notes to the tonic, rather than from melodic notes to their immediate successors — so of their two tones in (1), one is frequently only implied, meaning the "implied fundamental" is actually at two removes from anything directly audible.
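
    [(myl) A minimal sketch of the "implied fundamental" idea as Vance describes it, i.e. the highest frequency of which both tones are integer harmonics, which for integer frequencies is just their greatest common divisor (my illustration, not the authors' code):

        from math import gcd

        def implied_fundamental(f1_hz, f2_hz):
            # integer Hz is close enough for illustration
            return gcd(int(f1_hz), int(f2_hz))

        print(implied_fundamental(440, 550))   # just major third 4:5 -> 110 Hz
        print(implied_fundamental(440, 528))   # just minor third 5:6 -> 88 Hz

    ]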

    And (again if I'm parsing this right: p. 495), they don't actually work with F1 and F2 as I remember them from phonetics, but with the nearest harmonics of the fundamental. This move seems to me to smuggle music into their raw speech data (AutoTuning it, as it were).

    Looking at their conclusions, I think they explain their speech data reasonably well — in excited speech, the fundamental goes up, but the formants stay roughly the same, meaning the harmonics nearest to F1 and F2 tend to be lower with respect to the fundamental (and thus fall in smaller integer ratios). I'm not convinced this has anything to do with music.
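
    [(myl) Vance's gloss is easy to illustrate: hold the formants fixed, raise F0, and the nearest harmonic numbers shrink. A sketch with made-up formant values, not the paper's data:

        def nearest_harmonic(f0, formant):
            # harmonic of f0 closest to the formant frequency
            return max(1, round(formant / f0))

        F1, F2 = 500.0, 1700.0        # illustrative formant frequencies, Hz
        for f0 in (100.0, 250.0):     # subdued-ish vs. excited-ish F0, Hz
            h1 = nearest_harmonic(f0, F1)
            h2 = nearest_harmonic(f0, F2)
            print(f"F0 = {f0:.0f} Hz: nearest harmonics {h1} and {h2}")
        # F0 = 100 Hz: nearest harmonics 5 and 17
        # F0 = 250 Hz: nearest harmonics 2 and 7

    ]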

    Tangentially, they don't examine actual musical spectra, only ratios of fundamentals. (Also, just intonation is a giant can of worms — I doubt that the facts about it are so clear as they claim to believe.)

    I'm no expert, but this looks to me like a long chain of very dubious connections.

  11. Murray Schellenberg said,

    January 20, 2010 @ 2:58 am

    I had the same questions as Vance about the validity of the comparison: an audible F0 and a ratio of its "formants" vs. a hypothetical fundamental reconstructed from a tone and the inaudible tonic (first note of the scale)? Hmmm…

They also make all kinds of problematic assumptions about music, such as the idea that major and minor are universally associated with happy and sad (drop that idea into a conversation with an ethnomusicologist and see the reaction!). My biggest bugaboo is the (unfortunately very common) assumption that language is a determinant of music; it's a very popular assumption, but the evidence out there just doesn't support it.

  12. Layra said,

    January 20, 2010 @ 3:04 pm

That bit about approximating F2/F1 by finding the harmonics closest to the LPC peak bothers me as well. Western harmony only means something in the context of certain instruments, and it could very well be that the human voice is not one of those instruments, that the Bessel functions of the human vocal tract are different enough from sinusoidal waves that the harmonics act in a distinct manner.

    Also, it sounds like they don't know anything about music or music theory. Which, given what I know of classical music theory, might not be such a loss in this case.

    Finally, a linguistics postdoc once mentioned a theory he had wherein instead of music coming from language, it was the other way around.

    [(myl) Was this postdoc by any chance named "Charles Darwin"?]

  13. Vance Maverick said,

    January 20, 2010 @ 4:38 pm

Layra, the human voice produces as clean an example of a periodic signal decomposable into a Fourier series as any instrument, certainly more so than a piano. And in classical Western music, harmony and counterpoint are thought of as deriving from a tradition of a cappella vocal music. So I think your specific objection won't wash.

  14. Forrest said,

    January 20, 2010 @ 8:36 pm

Having had a chance to read the paper, it looks like the authors have addressed my two main complaints. My objection about Clair de Lune is dealt with by hedging a little bit: the tonal relationships in music and speech are used for coloring "all else being equal," but apparently take a back seat to tempo and rhythm. I guess I'd read that as "my objection is over two edge cases." My thought on pairs of major and minor scales using the same notes is also addressed in the paper:

Instead, the justification for using implied fundamentals with tonic intervals depends on the tonic's role in providing the tonal context for appreciating the other melody notes (Randel, 1986; Aldwell and Schachter, 2003). Each note in a melody is perceived in the context of its tonic regardless of whether or not the tonic is physically simultaneous. This is evident from at least two facts: (1) if the notes in a melody were not perceived in relation to the tonic, there would be no basis for determining the interval relationships that distinguish one mode from another, making it impossible to distinguish major compositions from minor ones (Krumhansl, 1990; Aldwell and Schachter, 2003); (2) because a note played in isolation has no affective impact whereas a note played in the context of a melody does, the context is what gives individual notes their emotional meaning (Huron, 2006). Thus, the fact that we can hear the difference between major and minor melodies and that we are affected by each in a characteristic way indicates that each note in a melody is, in a very real sense, heard in the context of a tonic.

    I'm guessing there's some truth to the underlying point of the paper, "The hypothesis examined here is that major and minor tone collections elicit different affective reactions because their spectra are similar to the spectra of voiced speech uttered in different emotional states."

    I do feel a little funny reading the paper, as if there were too many assumptions, or too few subjects speaking excitedly and in a subdued manner, but the authors describe their research as covering all the obvious bases.

  15. Amy Stoller said,

    January 24, 2010 @ 2:32 pm

I suspect this paper, which I have not read (sorry about that), is entirely over my head, but my first thoughts on reading your post were essentially identical to Roger Lustig's.

    (Based on this admittedly insufficient data, I now assume that Roger Lustig is a very smart guy.)

  16. Music Geekery: Other Things Being Unequal « Eric Pazdziora said,

    January 30, 2010 @ 5:30 pm

    […] Geekery: Other Things Being Unequal Not long ago on the always interesting and uber-nerdy Language Log, Mark Liberman directed readers' attention to a recent paper supposedly studying what gives […]
