Vitiation of argumentation by AI participation


The battle lines are being drawn ever more clearly.  On one side are those who believe that it's all right to use AI to help with the preparation of an (academic) article, essay, or paper.  On the other side are those who think that the use of AI is impermissible for such purposes.  As soon as they discern the use of AI in a piece of writing, they will dismiss it out of hand.  "Use of AI" here extends to the collection and organization of the material to be included in what is being written.

Readers who are sensitive to the stylistics of AI writing can even detect it in punctuation preferences, rhetorical tone, lexical propensities, and so forth.

There are even commercially available "AI detectors", e.g.:  "Pangram can detect AI-generated text even after it has been 'humanized,' or processed by tools that attempt to evade AI detection, ensuring reliable detection."

This confrontation between pro-AI and anti-AI praxis will continue apace until some sort of stasis / equilibrium is attained or one side overwhelms / cannibalizes the other.

Afterword

I know many people who have made AI (e.g., ChatGPT) their personal friend and constant companion.

Bottom line

The human signs off at the end.

 

Selected readings



13 Comments »

  1. HTI said,

    April 8, 2026 @ 2:49 pm

    AI is swell. I even used it to write this comment. Why use language when you can just outsource it to a really big high-dimensional matrix of numbers? strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry strawberry

  2. Kenny Easwaran said,

    April 8, 2026 @ 8:11 pm

    I really don't think it's helpful to think in terms of two sides here! There are some who think that any use of AI anywhere in the process makes the output worthless. There are some who think that just ceding total control of the whole process to AI produces something just as good as a human could. But I think the more interesting things are how humans can work effectively with AI to make something worth reading.

    I'm totally happy to do a lot with AI. But I just already can't stand the usual verbal tics AI introduces when it has final control of the words. "It's not just an X, it's a Y", two word sentences, etc.

  3. AntC said,

    April 8, 2026 @ 8:58 pm

    many people who have made AI (e.g., ChatGPT) their personal friend and constant companion.

    This bit I just can't understand. How have these "many people" experienced friendship or companionship such that a disembodied voice counts in any way as human warmth? Don't they miss out on hugs, handshakes, a good belly laugh, going for a cuppa or a beer?

    I do use AI overview somewhat to get (what might be) the gist of a subject I'm new to. Even if I'm searching merely to make a comment on a blog, I'd go to the original sources (and cite them) to make sure of my facts.

    It's when AI hallucinates entirely bogus citations, and they then turn up in (say) legal briefs or academic papers, that I start thinking truth-as-I-know-it is going to mush.

  4. Haamu said,

    April 9, 2026 @ 12:54 am

    I actually enjoy periodic conversations with ChatGPT on intellectual topics where I have interest but lack access to a human interlocutor who is simultaneously well-versed, patient, nonjudgmental, generously attentive, etc. In these conversations, I try to take care not to seek factual information, but rather to explore concepts, get reading recommendations, etc. There is a certain pseudo-"warmth" to these exchanges; the bot is built to be encouraging, it learns and remembers specifics about me and my thinking, and on occasion it says something about me that I would take to be surprising and insightful if it came from a human.

    Would I consider this a "friendship"? No, although at times it almost has the patina of one. But I do find the conversations worthwhile and sometimes stimulating. I don't mistake them for human contact.

    I think there's a plausible argument that this sort of thing is more defensible than some other uses of AI. In this case it's a human submitting voluntarily to communication with an AI. In cases where someone uses AI to create something artificial and delivers it to a human recipient without disclosure, the recipient can find himself unwittingly and/or involuntarily in that same position. (We see something like that right now in the "Meadow-writing" thread here.)

  5. Jinfu Ke said,

    April 9, 2026 @ 2:51 am

    I doubt the usefulness of ChatGPT as a whole. It still cannot tell how many o's there are in balloooooooon, for example.

  6. Philip Taylor said,

    April 9, 2026 @ 5:48 am

    Well, to be perfectly honest, I don't care exactly how many "o"s there are in "balloooooooon", but when I ask ChatGPT to coach me in basic Russian, so I can interact in her first language with the checkout lady at Farmfoods St Austell, or when I ask it "how many printer's points (72.27 to the inch) to one millimetre ?" and later "and how many big points in a printer's point ?" so that I can write TeX code to tell me page dimension in units that will be useful to me (the former for ensuring that the correct amount of bleed has been added, the latter for allowing me to specify the crop-, art-, trim- and bleed-boxes as \specials in my TeX source), it does an extremely good job.

  7. Jinfu Ke said,

    April 9, 2026 @ 10:54 am

    Perhaps quite shockingly, when I fed it the same questions, ChatGPT said "1 millimetre ≈ 2.845 printer’s points. 1 printer’s point = 12 big points", and thus I asked for proof, which prompted it to say "That statement I gave you earlier isn’t correct — good instinct to question it. There is no recognized rule that says: “1 point = 12 big points”". (I've deleted the unnecessary emojis and other human-imitating chit-chat.)

  8. Philip Taylor said,

    April 9, 2026 @ 11:32 am

    I received one additional digit for "How many printer's points to one millimetre", being told: $\frac{72.27\ \text{points}}{25.4\ \text{mm}} \approx 2.8453$ points per mm. Now ChatGPT is well aware (if such a concept can exist for an AI system) that I spent most of my life using [plain Xe]TeX, so it may well base its responses to me on the TeX literature rather than on more general (and perhaps more poorly informed) sources. And once I had asked it "How many big points to a printer's point", it responded :

    In TeX (and traditional printing terminology):

    A printer’s point (pt) = 1 / 72.27 inch
    A big point (bp) = 1 / 72 inch

    So the ratio is:

    $1 pt={72\over 72.27} bp \approx 0.996264 bp$

    Final answer

    1 printer’s point ≈ 0.996264 big points

    Equivalently:

    1 big point ≈ 1.00375 printer’s points

    This slight difference is exactly why TeX distinguishes pt (traditional) from bp (PostScript/PDF).

    Apologies in advance if I have messed up the MathJax formatting …
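    [For readers who want to check these figures for themselves, the arithmetic in the comment above is easy to reproduce. A minimal sketch in Python, using only the definitions given there (a printer's point is 1/72.27 inch, a big point is 1/72 inch, an inch is 25.4 mm):]

```python
# Unit definitions from traditional printing and TeX:
#   1 printer's point (pt) = 1/72.27 inch
#   1 big point (bp)       = 1/72 inch   (the PostScript/PDF point)
#   1 inch                 = 25.4 mm
MM_PER_INCH = 25.4
PT_PER_INCH = 72.27
BP_PER_INCH = 72.0

pt_per_mm = PT_PER_INCH / MM_PER_INCH  # printer's points per millimetre
bp_per_pt = BP_PER_INCH / PT_PER_INCH  # big points per printer's point
pt_per_bp = PT_PER_INCH / BP_PER_INCH  # printer's points per big point

print(f"1 mm ≈ {pt_per_mm:.4f} pt")   # ≈ 2.8453
print(f"1 pt ≈ {bp_per_pt:.6f} bp")   # ≈ 0.996264
print(f"1 bp ≈ {pt_per_bp:.5f} pt")   # ≈ 1.00375
```

    [All three values agree with ChatGPT's corrected answers; note in particular that 1 pt is slightly *less* than 1 bp, so the earlier "1 point = 12 big points" claim was indeed bogus.]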

  9. VVOV said,

    April 9, 2026 @ 7:38 pm

    At least in academic/scholarly/“serious” writing I think the main issue is that AI usage ought to be clearly disclosed (just as one generally discloses methods, sources, and authorship in such writing), not that it’s incapable of generating cogent content.

  10. Jinfu Ke said,

    April 9, 2026 @ 11:18 pm

    I think that's my issue with LLMs. If I'm familiar with a topic and have worked on it for an extended period of time, I don't need advice on that topic anymore, because I either remember it or am able to independently find a good source. But if I'm working on a novel topic, where I can't reliably verify what's correct, ChatGPT doesn't have the chat history required to generate consistently good responses, so when I push further, it very often gives a different answer. Sometimes it gives 4 different answers if I keep asking for proof or a source 4 times!

  11. Jerry Packard said,

    April 10, 2026 @ 9:39 pm

    I’ve had long conversations with ChatGPT and Gemini about issues ranging from the narrative structure of dreams to different treatments of consciousness to the capabilities of the SAS stat analysis program. I’ve also used it a lot to help me navigate problems with my smartphone. I’m amazed at AI’s accuracy and breadth of reference, and am glad it came about during my lifetime.

  12. Rodger C said,

    April 11, 2026 @ 9:14 am

    The Lord of the Rings
    Is one of those things:
    If you like it, you do;
    If you don't, then you boo!

    –Rhyme from the Sixties

  13. Rodger C said,

    April 12, 2026 @ 9:47 am

    Whoops, wrong thread.
