Writing with AI


It's been clear for a while that "large language models" can be prompted to fulfill writing assignments, that LLM detection doesn't work, and that "watermarking" won't come to the rescue. There's lots of ongoing published discussion, and even more discussion in real life.

As documented by the MLA-CCCC Joint Task Force on Writing and AI, the conclusion seems to be a combination of bringing AI explicitly into the class, and designing some assignments where students need to function without it.

In one recent example, Joe Moxley has posted the syllabus for his course "Writing with Artificial Intelligence – Syllabus (ENC 3370)".

As I suggested in "LLMs in education: the historical view" (5/1/2023), education will survive the introduction of AI, just as it survived the introduction of writing, despite Plato's skepticism. But maybe oral rhetoric would be a useful alternative to hand-written essays, as an AI-free (or at least AI-reduced) dimension of evaluation (and target of instruction)?

See the discussion at the end of "The LLM-detection boom" (7/7/2023), copied below [image from Ethan Mollick's "The Homework Apocalypse", 7/1/2023]:

In 1999, Wendy Steiner and I taught an experimental undergraduate course on "Human Nature".

Both of us were skeptical of essay-question exams, mostly due to a lack of trust in our own subjective evaluations. So we decided to experiment with an in-person oral midterm. But as I recall, there were about 80 students in the class — so how to do it?

We circulated in advance a list of a dozen or so sample questions, explaining that the actual questions would be similar to those.

And we scheduled 16 (?) one-hour sessions, each involving five students and both instructors sitting around a small seminar table. We went around the table, asking a question of each student in turn, and following up with redirections or additional questions as appropriate. We limited each such interaction to 10 minutes, and before going on to the next student, we asked the group for (optional, brief) comments or additions. We both took notes on the process.

This took the equivalent of two full 8-hour days — spread over a week, as I recall — but it went smoothly, and was successful in the sense that our separately-assigned grades were almost exactly in agreement.

There are obvious problems with grades based on this kind of oral Q&A, just as there are problems with evaluating in-class essays, take-home exams, and term papers. And I've never repeated the experiment. But maybe I should.

5 Comments »

  1. Cervantes said,

    September 1, 2024 @ 7:56 am

    In a somewhat smaller class — up to maybe 15 students — I have often had them do conference style presentations. The format of the class is that each student has a relevant project of their own, so they aren't all regurgitating the same material, but the project has to be informed by the overall course content. Of course this takes multiple class sessions, but it's informative for everyone so it's a good use of time.

  2. Jerry Packard said,

    September 1, 2024 @ 8:00 am

    In my last few years at UIUC, I taught a GenEd course that used a writing program called CPR (Calibrated Peer Review), which basically had (hundreds of) students rate each other’s essays based on a fixed set of structural criteria that were used to both write and evaluate essays. I found that the program worked surprisingly well, but this was pre-AI, so I imagine it would be easy for students to instruct AI to write essays using CPR structural criteria.

  3. Stephen Goranson said,

    September 1, 2024 @ 8:24 am

    AI-generated text may affect definitions or usage of forgery and ghostwriting and deus ex machina.

  4. Rodger C said,

    September 1, 2024 @ 9:52 am

    I'm not sure what deus ex machina has to do with it. Diabolus ex machina, maybe.

  5. Michael W said,

    September 1, 2024 @ 10:25 am

    I work with kids up to high school level, and I've noticed that English teachers seem to be moving toward the 'in-class' essay, although the student typically also has to plan out and maybe also present the information prior to this. Some of this is sound pedagogical technique, but I think there is a move toward reducing the weighting of the essay itself. This has of course been in the works thanks to online cheating availability, but AI is going to push it even farther.

    I have also noticed students tending to use LLM-AI as a learning companion when reviewing fact-based material (e.g. history, science). It still has limits there, especially as it tends to oversimplify subjects and interpretations, but that seems a more sensible use of the technology.
