AI encroachments
It's already happening.
Of course, we don't want our students to use AI software to help them write their papers, but the fact of the matter is that some of them, especially those with poor English writing skills, are already routinely doing so. Their attitude seems to be that they will do the basic research and sketch out the argument themselves, and then have AI tools make it sound nice. In some cases, they even ask AI bots to assist them with the data search that goes into their paper.
Some may argue that this is completely unacceptable, and that such students should be expelled right away, but where do you draw the line on computer-assisted research and writing? Moreover, this is clearly not the same thing as plagiarism, because the individual who is relying on a chatbot for help is not appropriating the intellectual property of another person. He / she is utilizing material that he / she specifically requested a machine to produce on his / her behalf and at his / her direction. In other words, the electronic tools are acting as extensions of his / her brain.
I can enjoin the students not to use chatbots to help them write their papers till I'm blue in the face (as my mother used to say), and I surely do that, and they nod in acquiescence. But I can tell from reading their papers that they are fudging and filching. In many instances, I have seen students make phenomenal progress in the quality of their writing (vocabulary, grammar — everything), so much and so quickly that I am stunned. Yet I know the field well enough to be sure that they couldn't have lifted such prose from anything already published. Moreover, if they had hired someone to write the paper for them, it is highly unlikely that they could have found anyone who knows what we have been reading and discussing in class well enough to write the kinds of papers that they turn in.
By no means am I condoning this behavior, but we must face reality (no Realitätsflucht [flight from reality]! no fugue!) and come up with practical ways and means to deal with it.
The same thing is happening in language learning. Twenty years ago, it would have been considered the worst kind of cheating if a student consulted an electronic aid to help him / her write characters correctly. They were to be produced neuromuscularly by the student him / herself. Yet already at least ten years ago in Singapore schools, students were permitted, even encouraged / required (in some cases), to avail themselves of computers, even on exams. The same is true of many progressive programs in the United States; see, for example, the paper by Theresa Jen and Ping Xu in the bibliography below.
Shall we rewrite the honor codes and rules of conduct of our colleges and universities to state that any use of AI to assist in the writing of one's paper / exam will be grounds for immediate expulsion, the same as with plagiarism? But what degree of reliance on AI will constitute culpability?
It's a brave new world, my friends, and we are already living in it.
A case history
One of the best students I had in my classes during this past academic year (2022-23) has gone overseas to study this summer. To protect her identity and that of the school where she is studying, I have made slight modifications in the following paragraphs.
The dynamite remarks about AI come in the fifth paragraph of this abbreviated version of her report to me.
I really want to share an incident that occurred recently during my XXXXX language and culture classes at YYYYY University in ZZZZZ. The experience left me with mixed feelings, which strengthened the stereotype of "inferiors need to obey superiors" in East Asian cultures.
For each of our XXXXX culture classes, there is a regular assignment: a comment sheet expressing our thoughts and reflections on the class material. Initially, I wrote criticisms and provided explanations that I believed would be helpful. Here are two examples to illustrate my perspective.
During the calligraphy class, I observed that a significant portion (90%?) of the session was devoted to the history of Chinese calligraphy, and only at the end the teacher mentioned a little bit about modern XXXXX calligraphy innovations. I suggested that it would have been beneficial to include more content on XXXXX art history, as it would have aligned better with the theme and objectives of the course.
Similarly, in the XXXXX Pop Culture class, the instructor presented numerous captivating examples to highlight the impact of "odorless" on the global dissemination of XXXXX culture. However, I noticed that she predominantly read from the text and refrained from offering her own insights or addressing questions from the students. I expressed my belief that she could have engaged with the question in terms of the political power of linguistic symbolism.
Then, my comment sheets in both courses were consistently awarded a score of 80, without any accompanying feedback. Reflecting on this and considering the cultural norms in East Asian societies where inferiors are expected to defer to superiors, I began to question whether my criticisms had inadvertently upset my professors, leading them to assign me less-than-ideal scores. Out of frustration, I wrote the subsequent comment sheets with the help of ChatGPT (me 50%, ChatGPT 50%). These comment sheets merely summarized the content of the class, along with some complimentary remarks, and they received perfect scores of 100. This situation both angered and amused me, as it seemed to indicate that the comment sheets had lost their intended purpose of facilitating genuine feedback and were instead discouraging critical thinking and creativity.
Bear in mind that, halfway through last semester, when I asked the entire class flat out whether they used DeepL to help them with their translations, they all looked at me with straight faces and said that they did, and they said it in a way that led me to believe that everybody did it as a matter of course in the diverse schools they came from before they were at Penn. Ditto for ChatGPT for assistance in writing papers.
Selected readings
- "Spelling bees and character amnesia" (8/7/13) — includes discussion of dictation
- Theresa Jen and Ping Xu, "Penless Chinese Character Reproduction", Sino-Platonic Papers, 102 (March 2000), 15 pages. (free pdf)
- "Realitätsflucht" (7/20/23)
Victor Mair said,
July 21, 2023 @ 6:14 am
People are talking about this problem, taking it seriously.
Here is an announcement from the MLA:
What AI Means for Teaching
Wednesday, 26 July, 2:00 p.m. EDT
Join us for this free webinar about the risks and benefits of AI and some recommendations for navigating AI in the classroom.
About the Event
What do you need to know about generative AI like ChatGPT? Members of the Modern Language Association–Conference on College Composition and Communication Joint Task Force on Writing and AI share what they’ve learned from their survey of teachers and researchers and provide an overview of their first working paper. They’ll discuss the risks and benefits of generative AI for teachers and students in writing, language, and literature programs, as well as their recommendations for how educators, administrators, and policy makers can work together to support critical AI literacy.
Taylor, Philip said,
July 21, 2023 @ 6:36 am
[OT] "Modern Language Association–Conference on College Composition and Communication Joint Task Force on Writing and AI" — hereinafter referred to as the MLACCCCJTFWAI ?!
Victor Mair said,
July 21, 2023 @ 6:44 am
Yes, since it's a TASK FORCE, this must be a matter of the greatest urgency and importance, requiring prompt mobilization.
bks said,
July 21, 2023 @ 6:56 am
I know a physics professor who prepared his (1990s) intro lectures from the Encyclopedia Britannica. Revoke tenure?
Cervantes said,
July 21, 2023 @ 7:01 am
Well, in my own lifetime, since attending college in the 1970s:
We were allowed to type our exams;
Students could use calculators in exams;
Students could use word processors;
Students could do on-line bibliographic searches;
Students had automated spell checking and grammar checking;
Students could do on-line searches of the entire WWW for information;
Yes, students could use on-line translation;
And more. As an academic investigator, I use all these tools to write my own papers and now my book. Why wouldn't I? That's how you get your work done nowadays. It was probably all originally regarded as cheating by some people. If you use an LLM to draft some bullet points for your argument, why not? You can't write a compelling essay if you don't understand the material, but you have a head start on organizing, and you might be reminded of something you hadn't given enough prominence to in your thinking. It's a tool. It isn't cheating to use power tools to make furniture; it's just how it's done nowadays.
Victor Mair said,
July 21, 2023 @ 7:42 am
Right, Cervantes.
Few people under the age of 40 have ever heard of, let alone seen or used, a slide rule, but up until the '60s they were de rigueur for STEM-type folks, many of whom used to carry them around in a holster.
SusanC said,
July 21, 2023 @ 7:51 am
There's a serious question about what portion of the grade is based on English language competence, and what portion is based on understanding the subject matter. (An English-as-a-foreign-language course is very different from, e.g., a physics course that happens to be taught in English.)
It often happens that my students who speak English as a second language are worried about their English not being very good. My usual reply is: say that English is not your first language in the standard special-difficulties section of your project report, and press ahead with slightly broken English. (It is permissible for your project supervisor to proofread your report for you; the standard thanks to your supervisor in the intro of your report covers this kind of help, so you're not breaking the plagiarism rules by asking for this assistance.)
If we're assessing the students' technical knowledge, rather than language competence, I really don't have any problem with them writing it in some other language and running it through machine translation. You probably ought to declare that you've done this, to be on the safe side.
SusanC said,
July 21, 2023 @ 8:02 am
During my undergrad, comment sheets on lecture courses were anonymous. If they're identified, students might hold back on saying what they didn't like about the lecture course.
Benjamin E. Orsatti said,
July 21, 2023 @ 8:10 am
Can I be a Cassandra / Jeremiah / nudnik again here for a minute?
A calculator will allow me to calculate the coefficient of dispersion _faster_, but I still need to know the formula and how to perform the calculation, and a sea change of people using calculators instead of slide rules isn't going to change the way we think about arithmetical manipulations. Similarly, spellcheck allows me to write _faster_, but I'm no less fluent in English because of it.
Allowing a neural network to do the work of my own neurons, however, is a different story, isn't it?
Here's how I did undergrad research c. 1997: Had to write a metaphysics paper. Took my metaphysics textbook to the library along with an anthology of C. G. Jung because I thought that Jung had some interesting ideas that could be explored ontologically. So, I hit the stacks and actually had to _read_ things. I would find books I never intended to find just because they happened to be in the same rack as the book I had been looking for. Books I only needed a few ideas from I just jotted notes down about; books that required more extensive exploration got checked out and taken back to the dorm, as many as would fit in the backpack in one trip.
TL/DR: The "heavy lifting" of brainstorming, searching, finding, reading, annotating, and synthesizing ("thinky-stuff", you might say) was done by my own soggy gray meatloaf. By the time I'd done all that, making outlines and typing the thing out on my (desktop) computer was just a matter of organizing the thoughts and having them make sense to a reader. But if you have a computerized neural network do all that instead…
WTL/DR: I'm 100% certain that there are higher levels of human cognition that cannot ever be unlocked, save by the odor of ink, paper, and binder's glue.
Benjamin E. Orsatti said,
July 21, 2023 @ 8:18 am
SusanC said: "say English is not your first language in the standard special difficulties section of your project report, and press ahead with slightly broken English".
— Yes! This is the ideal solution. C'mon, ivory-tower types, _now_ is the time to really "celebrate diversity". Pennsylvania used to abound with dozens of different types of apple trees, thanks to Johnny Appleseed. Now, we have maybe 2-3 because we've decided that those are the ones that are "commercially viable". If we let AI get its foot (further) in the door, aren't we conceding that, henceforth and forevermore, the "right" way to write (and speak?) will be in the style of the chatbot? We'll get so used to AI "style" that it will become the norm, and all the humanity will eventually leach out of scholarship completely.
Aardvark Cheeselog said,
July 21, 2023 @ 9:14 am
We are going to find out whether letting AIs do our writing for us brings about the end of civilization as people forget how to think, or if it's going to be more like the replacement of oral transmission of culture with writing, where future people will have lost some abilities we take for granted or think are of central importance, but turn out just fine anyway. Because people are not going to not use these things.
Coby said,
July 21, 2023 @ 11:54 am
SusanC: "The international language of science is broken English." (Theodore von Kármán)
Benjamin E. Orsatti said,
July 21, 2023 @ 11:59 am
Aardvark Cheeselog,
Maybe not the "end" of civilization, but maybe a turning towards a new dark age, lit only by the harsh glow of a smartphone. Maybe we'll snap out of it before the oceans burn off in a billion-and-a-half years, maybe not.
On writing (or the invention of the printing press, or what have you): when you add up the asset and liability columns, don't you find that all the benefits the written word has given us more than make up for the loss in rote memory occasioned thereby? Can we say the same for AI? Seems as though AI is as good for society as television, fentanyl, or the World Wide Web (cf. Robert Putnam, "Bowling Alone").
Regarding the "people are not going to not use these things" argument, it sounds like an intuitive argument, but is it? Can't I prevent AI from rewiring my brain simply by continuing to read books and talking to flesh-and-blood human beings, and not engaging in conversation with Siri, Heygoogle, DeepL, ChatGPT, HAL 9000, and their friends?
Cervantes said,
July 21, 2023 @ 12:00 pm
Well, as of today at least, you can't really get away with letting ChatGPT do your writing for you. It's going to have errors, and to the extent it doesn't, it produces squishy generalities. So you'll get a B at best. But it's impossible to stop students, or for that matter Institute Professors, from using it to draft essays. But you still have to understand the issues, hopefully have an original idea or two and be able to support them by argument and example, and write clearly and effectively. Not to say AI won't be able to do that sometime not so far off, but it can't do it now.
RfP said,
July 21, 2023 @ 1:58 pm
Salvor Hardin is one of the protagonists in the first book of Isaac Asimov’s Foundation trilogy.
At one point, he expresses analogous concerns to the all-powerful Board of Trustees of the Encyclopedia Committee:
AI can be used in a lot of useful and interesting ways, but…
Paul Garrett said,
July 21, 2023 @ 3:06 pm
I am not so much disturbed by AIs' possibilities as by the apparent neglect of giving credit where credit is due, both by humans and, currently (?), by Chat-Xs. Yes, mention use of software systems… um, but, typesetting systems? Yes, some decades ago, people did thank D. Knuth for creating TeX, etc. OK, and crediting Sage or Mathematica… Mm, but what about Apple or Microsoft for word-processing software? 50 years ago, thanking one's slide rule and typewriter? Non-trivial to formalize where the line might be drawn…
martin schwartz said,
July 21, 2023 @ 8:54 pm
As for ChatGPT etc., I keep on remembering Fredric Brown's prescient 1954 story "The Answer", quickly readable online.
Viseguy said,
July 22, 2023 @ 1:06 am
@Benjamin E. Orsatti: "Can't I prevent AI from rewiring my brain simply by continuing to read books and talking to flesh-and-blood human beings, and not engaging in conversation with Siri, Heygoogle, DeepL, ChatGPT, HAL 9000, and their friends?"
Only if all the books you read were written pre-AI and all the human beings you talk to have not themselves been rewired (to whatever extent) by AI — in short, if you were willing to become a hermit, and where's the fun in that? And surely it's far too early in the game to rank AI with fentanyl in the hierarchy of social ills. For one thing, "model collapse", referenced in a recent post by Prof. Liberman, may place asymptotic limits on the good or evil that AI will be able to accomplish. I sometimes wonder if the principal danger of AI isn't a sort of inverse model collapse, whereby human thinking becomes so warped by AI inputs that people lose the ability to bring independent (human) critical assessments to bear on those inputs. That would be a catastrophe, and AI does seem to provoke a lot of catastrophic thinking — which I doubt is the type of thinking most conducive to reaching sound judgments about the uses and ethics of AI.
Benjamin E. Orsatti said,
July 24, 2023 @ 6:45 am
Viseguy said:
"I sometimes wonder if the principal danger of AI isn't a sort of inverse model collapse, whereby human thinking becomes so warped by AI inputs that people lose the ability to bring independent (human) critical assessments to bear on those inputs. That would be a catastrophe, and AI does seem to be provoke a lot of catastrophic thinking — which I doubt is the type of thinking most conducive to reaching sound judgments about the uses and ethics of AI."
YES! _That_ thing. Him say what me say but him use better word. Thank you!
[Trigger warning: this next part gets a little Proustian]
From where I'm standing, though, the catastrophe has already happened, or else the tsunami has crested, at least. I was telling my kids about life in the Before Times, and I began to think that there's a certain degree of "warping" that's already happened, and it's this: In ages past, when you were walking down the street, and a question occurred to you — say, "Is Steven Wright married?", or, "Which political party, if any, deserves my allegiance?", or, "What is the way to tie a tie that uses the fewest movements?" — you either had to (1) ask someone; (2) go out and search for the proper reference material, if any; (3) try to work it out for yourself inside your own head or through experimentation; or (4) content yourself that this question is one to which you don't presently know the answer. It seems like only a form of option (2) is available to us now, and isn't that a loss?
I also remember thinking to myself, c. 1990, "Wouldn't it be neat if I had my own 'Hitchhiker's Guide to the Galaxy' that would bring all the knowledge of the universe to my fingertips?", immediately followed by the thought, "…and wouldn't it be miserable if _everybody_ did?".
Peter Grubtal said,
July 24, 2023 @ 12:27 pm
These aspects brought to mind Richard Hoggart's fears about "unbending the springs of action" (The Uses of Literacy). He was thinking of other factors at the time, but AI could now be added to those.
Taylor, Philip said,
July 24, 2023 @ 4:57 pm
Benjamin — "only a form of option (2) [go out and search for the proper reference material, if any] is available to us now" — I don't believe that this is true. Listening to the radio a couple of days ago, I learned from the announcer that the BBC (who now apparently "own" the Henry Wood Promenade Concerts and have re-titled them the "BBC Proms"; said announcer referred to "a proms" on more than one occasion) had featured "Northern Soul" as one of this year's concerts. Fearing the worst, I listened on, and my fears were realised — "Northern Soul" is a genus of popular music, and has (to my mind) no place in the Promenade Concerts at all. But I didn't use the Internet to see if others shared my opinion, I didn't ask ChatGPT or any of its ilk whether they thought that the inclusion of Northern Soul was in any way justifiable, I simply telephoned a friend. So if my experience is anything to go by, options other than option 2 are available, and long may they remain so. Incidentally, said friend, on asking his daughter what sort of music "Northern Soul" was, and receiving the answer "pop music", opined that the late Sir Henry Wood must now be spinning in his grave …
But I have to confess, it is over a year since I last consulted my Encyclopædia Britannica, although my (1933) OED still sees fairly regular use, as does my 1966 Onions and other similar or related works …
Scott P. said,
July 25, 2023 @ 2:58 am
We haven't had to rewrite our academic integrity policy as it already prohibits unauthorized outside assistance on exams and papers, so AI is included under that umbrella.
Benjamin E. Orsatti said,
July 25, 2023 @ 5:57 am
Philip Taylor said:
[after the Beeb had pop-luted a historic concert series] "But I didn't use the Internet to see if others shared my opinion, I didn't ask ChatGPT or any of its ilk whether they thought that the inclusion of Northern Soul was in any way justifiable, I simply telephoned a friend."
Good for you. And I mean that in the most un-ironic manner possible. Keep the humanity!
Peter Grubtal said,
"These aspects brought to mind Richard Hoggart's fears about 'unbending the springs of action' (The Uses of Literacy). He was thinking of other factors at the time, but AI could now be added to those."
[from https://www.taylorfrancis.com/]:
"The argument is that the concept of an almost unlimited inner freedom, as it has been conveyed to the working-classes through increasingly shallow channels, has flowed into and absorbed the older notion of tolerance, and taken it much farther than it had gone before. The tolerant phrases have been joined by others in similar dress; the new depreciate the old, and together they become the ritual uniforms of a shared unwillingness to admit that freedom can have its punishments. The chapter suggests that the sense of the importance and the predominant rightness of the group is being linked to, and increasingly made to subserve, a callow democratic egalitarianism, which is itself the necessary ground for the activities of the really popular publicists. Democratic egalitarianism, paradoxically, requires the continuance of the 'Them' and 'Us' idea in some of its poorer forms."
Sounds about right.