Rating and judging non-native English
From Martijn Wieling:
We have created a questionnaire about rating English accents and judging English audio samples from non-native speakers of English. We'd like to get as many native English speakers as possible to provide their judgements about the audio samples and I was hoping you'd be willing to link the questionnaire.
Note that the survey link randomly redirects people to one of two questionnaires. One is about deciding which English word you hear (pronounced by a Dutch speaker), the other about rating the nativelikeness of English accents, similar to the questionnaire that you recruited subjects for in 2012 ("Rating American English Accents").
So all you native English speakers, please volunteer — the task just takes a couple of minutes: http://www.martijnwieling.nl/survey
Ursa Major said,
June 3, 2019 @ 9:42 am
I have rated the 25 recordings I was given. I put two of them at rank 5 because they were clearly better than those I had to rate at 4, but I still had some sense that they might not have been native.
One possible problem is that I felt the script was not idiomatic English, so anyone saying it would sound a little foreign. Specifically, in "ask her to bring these things with her from the store" the "with her" sounds distinctively Dutch/German English to me, and a native speaker would say "bring these things from the store".
Joke Kalisvaart said,
June 3, 2019 @ 2:20 pm
I'm curious, but English is not my native language. Would I contaminate the results if I just listened to a few examples or can I just leave without saving?
RP said,
June 3, 2019 @ 3:05 pm
(Minor spoilers follow for those who haven't rated the recordings.) I also rated the recordings about bringing things from the store. A few points struck me. For example, native speakers don't always read fluently and sometimes stumble, so to what extent should nonfluent speech be counted against the speakers? Also, is it OK if the speakers sound like a native speaker might do when reading something out (without seeing or practising the sentence beforehand) or does it have to sound like a native speaker would when speaking off the cuff?
Also, is the goal to establish how native-like the speakers sound compared with native speakers, or how native-like they sound compared with each other? I take it that it's primarily the latter (as the text about the purpose of the study shown at the end would suggest). But if we wanted to know the former, I would advise including a few native speakers among the voices – that way, if I didn't give any of them 5, you'd know that I was being harsh (perhaps unconsciously influenced by the knowledge – or assumption – that none of them are native speakers).
Incidentally, I had never heard of snow peas, but I assumed they were a real thing, so I didn't hold that against the speakers. Looking it up, I now find that we (in Britain) call them "mangetout" (which I've heard of).
EF said,
June 3, 2019 @ 3:28 pm
Also [SPOILER ALERT]
@RP: Yeah, I ignored the stumbles and focused mostly on pronunciation. A couple of words jumped out clearly at me: "peas" as "peace" (AmEng = peez), same with "cheese", "brudder", "wit" for "with", and so on. I counted down from a full score for each of these, so even some clearly regional accents sounded like native speakers to me and scored 5.
The thing I question is if cultural bias will influence the results. (Maybe the questionnaire will filter that.) In the US, I'm pretty sure a Spanish or African accent would trigger a more negative response than it would in a British listener, while maybe Brits would do the same with Indian accents? Pure speculation on my part, but I'm curious as to what impact it might have on the results.
Brian said,
June 3, 2019 @ 8:57 pm
Having taken the other survey, i.e. the one that asks you to decide which word you hear [SPOILERS AHEAD] …
I found it very difficult. There were always two choices, but I frequently heard something else entirely. For example, for one question you had to choose between "clock" and "clog", but what I heard distinctly sounded like "glock". I then had to decide if it mattered more that it ended with a /k/ sound (thus "clock") or that the initial and final consonants were distinct (thus "clog").
In a perhaps similar vein, I found others where either one was possible. For example, one of the questions asked me to choose between "pan" and "pen", and what I heard could easily be either one, depending on the surrounding context. Without any context, it seemed impossible to favor one over the other.
Pete said,
June 4, 2019 @ 4:57 am
SPOILERS BELOW!
They were pretty much all 3 or 4 for me (3 being in the middle, 5 indistinguishable from a native). None were indistinguishable from natives as they were all recognisable as Dutchmen & Dutchwomen, but none were "very foreign-sounding" because they were all basically correct pronunciations. The only real variations were whether they got the right vowel in "brother" (natives pronounce it with STRUT while Dutch people and other E2L speakers tend to use LOT), and whether they got the "th" sounds (lots of them turned it into a "d" sound).
Some of them stammered a lot, but I ignored that as far as I could. If I'd known the experiment was about alcohol I probably would have marked them down for the stammering, but I suppose that's the point of keeping it from us till the end.
Keith said,
June 4, 2019 @ 6:03 am
I did the test yesterday, and took the offer of rating a second batch after doing the first one.
I'll not include any kind of spoiler here, but I have three comments.
1. I heard some stumbling over the words, which I put down to the speaker reading out a printed text, and which I made an effort to discount from my judgment of the "nativeness" ("nativity"?) of the pronunciation.
2. I remember that at least two, and possibly three, of the recordings were much quieter than all the others.
3. I remember that at least one, and possibly two, of the recordings had very noticeable artefacts from the microphone being too close to the speaker's face.
Trogluddite said,
June 4, 2019 @ 10:02 am
@Brian
I also found the lack of context often made those tests tricky to answer. Particularly for the vowel-based minimal pairs, familiarity with a wide variety of BrE accents, plus Commonwealth/US accents via the media, often made either reading equally plausible to me. Even trying my best to answer promptly using only my own idiolect for reference didn't always help, as my accent is a very variable mix of Sussex, West-London, East-Midland, and Yorkshire accents. When reflecting upon my own pronunciation, I can very easily imagine a range of social contexts where code-switching would lead to very different results (and I should probably include the context of being asked to explicitly reflect upon my pronunciation!). I'm not convinced that the biographical questions about the language environments of one's upbringing will be sufficient to distinguish the variety of "native" accents against which subjects might be making comparisons, nor whether a subject is comparing against their everyday idiolect or an idealised "correct" pronunciation.
Cervantes said,
June 4, 2019 @ 10:32 am
My perception is that all of them speak excellent English, although I can't evaluate their fluency because they are reading from a script. The only thing I'm evaluating is pronunciation, which was quite good for all of them, for non-native speakers. Only one seemed really indistinguishable from a native speaker, but it may also be that they could be considered to be speaking the "Dutch dialect of English", since English is commonly spoken in The Netherlands. It's analogous to people from India, Pakistan or Jamaica who speak English as a first language but have a distinct pronunciation. (The movie The Harder They Come is in English with English subtitles.) But in any case I gave those a four as long as they read fluently. Since I felt I had to make distinctions, I downgraded people who stumbled over words, though I couldn't be sure whether they had a reading deficiency or a speech impediment; their accents really were just as good. Now I know that they were hammered.
I don't think this was measuring what they intended it to measure.
Scott Grimmer said,
June 4, 2019 @ 2:05 pm
I had the minimal pairs. I am quite sure that for all of them, I would have heard whatever was intended in context. Out of context, I generally heard something not exactly like either choice. I will note that I always heard e (as in "bet") and never a (as in "bat") (sorry, I don't know the IPA symbols). But in context I would have known that they wanted to use the /a/ sound if that had been appropriate.
/ck/ vs /g/ was always super close.
Alyssa said,
June 4, 2019 @ 2:08 pm
I had the "shopping list" task, an it wasn't clear to me either if I should be rating pronunciation or intonation. Some of the speakers had all the right vowels and consonants but very halting, non-fluent phrasing, while others sounded like typical fluent non-natives to me (clearly foreign vowels but very natural sounding phrasing). I decided to rate the former low and the latter high, but it sounds like most others made the opposite call.
John Swindle said,
June 5, 2019 @ 2:10 am
I'm a native speaker of American English and had the shopping-list task. Two things struck me. First, native English speakers from one country (say India or the Philippines or Nigeria) might sound non-native to speakers from another (say US or UK or NZ). Second, several speakers' emphasis or high pitch on "things" in "these things with her from the store" seemed odd.
alan bennett said,
June 7, 2019 @ 12:45 am
I think that the apparent failure to include native speakers, and the fact that all of the speakers probably were Dutch, made it difficult to give meaningful ratings. In particular, if I ignored the stumbles, I would rate almost all of them as clearly non-native speakers with moderate fluency, i.e. hard to differentiate their level in a meaningful way.
Michael Watts said,
June 7, 2019 @ 9:51 pm
I had the "which word does the 4-second recording demonstrate?" task. I was surprised by the statement at the end of the survey:
The samples show a pretty wide range of quality, but the survey design throws most of that information away. You can't rate the quality of pronunciation, you can't rate your own confidence in whether you classified a sample correctly, and you also can't leave the really ambiguous (or badly-recorded…) ones blank.
I rated a couple of samples as using the TRAP vowel rather than DRESS, but that was really mostly motivated by my assuming that some of the recordings must have intended TRAP, and therefore rating two or three of the closest ones as TRAP (out of 50 samples not all of which featured that contrast). Like Scott Grimmer, I always heard DRESS.