ChatGPT: Theme and Variations
[This is a guest post by Conal Boyce]
Here I’ll recount some recent exchanges I had with ChatGPT. Given the scope of ChatGPT, and the fact that it’s in a self-described intermediate state, our various impressions of it as of February 2023 must be like those of the three blind men examining an elephant, except that this elephant is running. In the heart of the professional programmer, ChatGPT creates existential dread, since it can spit out in a few seconds a page of code that would have taken him or her hours or days to write and debug, and that only after a lifetime of coding. For the rest of us, for the moment at least, it perhaps just provokes curiosity.
Below are my notes on five chats, numbered 1 through 5. Chat 1 (re chirality in particle physics) and Chat 2 (re the Voynich Manuscript) show ChatGPT at its stunning, tongue-on-the-floor best. At the same time, Chats 1 and 2 provide a glimpse of a near future in which, it seems safe to assume, nearly all responses from ChatGPT will be of the same high caliber (with Chat 5 offering a slight caveat). Chat 3 (re Min sandhi) and Chat 4 (re Mendeleev) I include for comic relief, and to remind ourselves that the tool is still imperfect, still only in its ‘beta release’ or whatever. Chat 5 (re cat meows teaching us the four tones of Mandarin Chinese) is another disappointing one, but disappointing in a particular way that is likely to stay with ChatGPT permanently, since there will always be something that ChatGPT doesn’t quite fathom yet and which it will nevertheless feel compelled to babble about at length, in what Victor calls pablumese.
CHAT 1. I asked, “Tell me about physicists and chirality in 1957.” The AI understood immediately that I was alluding to the work of C.S. Wu, C.N. Yang and T.D. Lee, and provided a coherent summary of their work. I found this quite impressive since the term ‘chirality’ is (unfortunately) not the one usually associated with the work of Madame Wu; instead, in the passive-aggressive argot of particle physics, her historic experiment using super-cooled cobalt is known as the ‘violation of parity.’ This was my first experience with ChatGPT, intended as a test, and it passed with flying colors. (Google comparison: Prompted with “physicists and chirality in 1957,” Google too coughed up the relevant data, but it was located about three items down on the page.)
CHAT 2. In the midst of an on‑line discussion, I wanted to refer to the Voynich Manuscript but I could not recall its name — not even the fact that its name begins with the letter V, only the look of some of its bizarre illustrations. So, in desperation, I put this into Google: “unsolved bibliographic mysteries.” Google’s algorithm skipped over the all‑important ‘bibliographic’ modifier and responded with useless things, such as Unsolved Mysteries of the Old West and Unsolved Mysteries in Genealogy. Next, prompted by the same three words, “unsolved bibliographic mysteries,” ChatGPT seemed to actually read my mind, as it came back with this:
There are several unsolved bibliographic mysteries that continue to intrigue researchers and book enthusiasts:
- Voynich Manuscript: The Voynich Manuscript is an illustrated codex that was written in an unknown script and language. The manuscript dates back to the 15th or 16th century and has been the subject of many attempts at decipherment, but its contents remain a mystery.
- Shakespeare's lost plays: While William Shakespeare is known for his many plays, there are believed to be several that have been lost to time. These include Love's Labour's Won, Cardenio, and The History of Cardenio, among others.
- The identity of "Belle da Costa Greene": Belle da Costa Greene was the librarian and personal secretary to J.P. Morgan, and she was responsible for building his extensive library. However, it was later discovered that Greene had been passing as white, when she was in fact African American. Her true identity and family background have remained a mystery.
CHAT 3. I asked “What is Min sandhi?” I expected the response to mention the tonal system of the Southern Min language. But ChatGPT identified ‘Min sandhi’ as a feature of the Burmese tonal system. Close but no cigar. (In contrast, Google knows exactly what is meant by ‘Min sandhi’ and even takes one directly to a paper I wrote on the subject, circa 1980.)
CHAT 4. When I asked “What did Boyce say about Mendeleev's ontology?” ChatGPT told me that one John Boyce in Ireland disagreed with Mendeleev's ontology. But it failed to mention my 2019 paper in Hyle: International Journal for Philosophy of Chemistry (which uniquely introduces the anglophone world to what Mendeleev actually said in Russian, and, accordingly, has been cited six times according to Google Scholar). Since I had never even heard of John Boyce, I found ChatGPT’s response to be blackly comical. (In contrast, Google seems not to know about John Boyce but shows my 2019 Hyle paper at the top of the page, when given the same prompt.)
CHAT 5. Someone shared with me a video [not on ChatGPT] of a Chinese woman and a cat that was supposedly producing the four tones of Mandarin Chinese, thus providing a fun and easy way for foreigners to learn them. It’s a cute idea, I thought, but poorly implemented in that particular video. (It would be nice if Day 1 in Chinese could feature 貓毛鉚帽 māo máo mǎo mào in lieu of tiresome old 媽麻馬罵 mā má mǎ mà.) Then I began to wonder: Perhaps elsewhere in cyberspace someone has done this right, with a cat that has a wider range of vocalizations? First I tried Google: “Can cats teach us the four tones of Mandarin Chinese?” No dice. Then I gave the same prompt to ChatGPT, and this is the answer that came back:
While cats are known for their vocalizations, they cannot directly teach us the four tones of Mandarin Chinese. The four tones of Mandarin Chinese are different ways of pronouncing a syllable that can change the meaning of a word. These tones are difficult to master for many non-native speakers of Mandarin Chinese.
However, cats might be useful in helping us practice the four tones of Mandarin Chinese. Practicing the tones involves training our ears to distinguish between subtle differences in pitch, and cats' meows and purrs might provide a useful auditory stimulus for this kind of practice. By listening to and imitating the pitch variations in a cat's meow or purr, we might be able to improve our ability to produce the four tones of Mandarin Chinese.
That being said, it's important to note that while cats might be a helpful tool for practicing the four tones of Mandarin Chinese, they cannot replace the guidance and instruction of a qualified teacher or language learning program.
Etc. etc.
By the way, ChatGPT stops only when you tell it to stop. It’s this obsession ChatGPT has with being ‘friendly’ and ‘talkative’ ad nauseam that makes some of its responses not just absurd but slightly creepy in my opinion. And it is thanks to this verbosity feature that students love ChatGPT since it can “write my term paper for me!”
Availability: In principle, ChatGPT is readily available to all at chat.openai.com. However, due to its unexpected popularity, it is sporadically unavailable, at least as of February 2023.
Selected readings
- "GLM-130B: An Open Bilingual Pre-Trained Model" (1/25/2023)
- "ChatGPT writes Haiku" (12/21/22)
- "Translation and analysis" (9/13/04)
- "Welcome to China" (3/10/14)
- "Alexa down, ChatGPT up?" (12/8/22)
- "Detecting LLM-created essays" (12/20/22)
- "Artificial Intelligence in Language Education: with a note on GPT-3" (1/4/23)
- "DeepL Translator" (2/16/23)
Chris Barts said,
February 21, 2023 @ 6:04 am
I don't understand why parity violation is any lesser of a term for what Dr Wu accomplished: Violations of fundamental properties are, themselves, fundamental in advancing physics, and demonstrating parity violations is very important in understanding fundamental particle interactions.
Taylor, Philip said,
February 21, 2023 @ 8:44 am
"The four tones of Mandarin Chinese are different ways of pronouncing a syllable that can change the meaning of a word". I respectfully disagree. The four tones of Mandarin Chinese are different ways of pronouncing a syllable — each possible combination of tone and syllable conveys a different word (for a rather loose definition of "word").
Jarek Weckwerth said,
February 21, 2023 @ 9:00 am
@ Philip Taylor: I think that ChatGPT's wording mirrors the unfortunate wording sometimes used to define a phoneme: a unit that "can change the meaning of a word". What they do is distinguish between words, just like the tones. But they don't touch the meanings…
Peter Taylor said,
February 21, 2023 @ 9:35 am
Speaking as a professional programmer, the only dread that ChatGPT creates in me is that of having to explain to people yet again that, just like the many many previous iterations of products which were going to make programming accessible to hoi polloi, it looks good because people demo it with trivial problems but as soon as you try to do something non-trivial it creates junk which it would be quicker to rewrite from scratch than to fix.
I've seen one example where someone posted some ChatGPT output which the person who posted it claimed to be "a working RPG Text game in a c# console application." Far from working, it wouldn't even compile. When the obvious errors were fixed, there was enough structure there to see how to extend it into an actual game, but there wasn't yet a game.
At present it looks like a big copy-paste machine which takes context into account to some extent, but as an actual programmer it is about as good as someone who's been learning for 4 months.
Gregory Kusnick said,
February 21, 2023 @ 11:52 am
Another professional programmer here. I'll second what Peter said. Boyce has apparently swallowed the hype and failed to grasp that code produced by LLMs such as ChatGPT is merely statistically plausible and comes with no guarantee of correctness or fitness for purpose. It's ludicrous to think it could be put into production as is without careful review and debugging by engineers who actually understand the problem it's meant to solve. AI-generated code fragments can make human programmers more productive, but we're a long, long way from replacing them entirely.
david said,
February 21, 2023 @ 2:14 pm
ChatGPT = BS
https://languagelog.ldc.upenn.edu/nll/?p=22605
rpsms said,
February 21, 2023 @ 3:58 pm
I think it was @BretDevereaux who pointed out that it basically makes up citations.
Laodamia said,
February 21, 2023 @ 5:39 pm
I work with a pronouncing dictionary where users enter material to hear a pronunciation. There is a time limit as to how much a speaker can produce. ChatGPT, which produces many of the sentences users now enter, almost invariably cannot cut out redundancies or choose a shorter way of saying the same thing. They are acceptable sentences, but stylistically wordy. It likes the sound of its own voice!
bks said,
February 21, 2023 @ 5:50 pm
Apropos chirality:
bks: If I lift my right foot, what happens to my reflection in a mirror?
chatGPT: If you lift your right foot, your reflection in a mirror will also appear to lift its right foot. …
Xtifr said,
February 21, 2023 @ 5:52 pm
I haven't heard of any serious concerns among software developers, but at least one literary market has had to temporarily stop accepting new submissions, thanks to a genuine flood of AI-generated material!
http://neil-clarke.com/a-concerning-trend/
Taylor, Philip said,
February 21, 2023 @ 5:58 pm
"bks: If I lift my right foot, what happens to my reflection in a mirror?
chatGPT: If you lift your right foot, your reflection in a mirror will also appear to lift its right foot. …"
Rather worryingly, I agree with ChatGPT on this occasion. If you mark your right foot with a clearly visible "R", and your left foot with an equally visible "L", and lift the foot bearing the "R", the reflection in the mirror will do the same (albeit with the letter actually appearing as "ᴙ"). Since you know that the letter "R" is only on your right foot, you will have little or no difficulty in accepting that the reflection is also lifting its right foot …
bks said,
February 21, 2023 @ 6:46 pm
I follow your argument, Philip Taylor, and yet …
bks: If I am facing due north, which way does my reflection face
chatGPT: If you are facing due north, your reflection in a mirror will also be facing due north …
Taylor, Philip said,
February 22, 2023 @ 6:51 am
Er, yes, it is rather harder to argue against that.
However …, if you were to place a stick in front of you on top of which you stuck a cut-out of a letter "ɴ" to indicate which way is north, you would, I think, find that your reflection is also looking at a similar cut-out indicating north, albeit one that looks more like "ᴎ".
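To put that geometry into symbols (a rough sketch only, assuming the mirror hangs directly in front of you, with coordinates chosen so that the mirror lies in the vertical plane $y = 0$ and north points along $+\hat{y}$): a plane mirror implements the map
$$(x, y, z) \mapsto (x, -y, z),$$
whose linear part has determinant $-1$. Components parallel to the mirror are left alone, the component perpendicular to it is reversed, and handedness is flipped, which is why the "R" and the "ɴ" come back as "ᴙ" and "ᴎ". Whether one then says the reflection "faces north" depends on whether one reads the direction off the room itself, in which case the image faces the opposite way, or off the reflected labels, in which case it still faces "north".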
Anyhow, cynic that I am, I shall continue to believe that ChatGPT and all of its ilk are stupid and that one should place no more credence on anything that they emit than one would on reading chicken entrails until such time as they (ChatGPT and friends, that is, not the entrails) can demonstrate true sentience.
Richard Hershberger said,
February 22, 2023 @ 6:59 am
I tested ChatGPT with baseball questions. It was unable to correctly describe any rules. The interesting thing is that when asked to describe a force play, it was hilariously wrong. When asked to describe the infield fly rule, it was mostly right until the end, and the mistake is one that a casual fan might make. I later realized that the force play is something even casual fans internalize at an early age, and so is not discussed much, while many find the infield fly rule a dark mystery, leading to frequent explanations.
I also asked it about early baseball history, which is my specialty. When asked about the relationship of baseball and rounders, the answer was wrong, but in a conventional way, and with follow-up questions I managed to bring it around to a decent paragraph that was substantially correct. Then I asked it about Alexander Cartwright. It again started out with a conventionally wrong answer, but I was unable to bring it around. Instead it went into full bullshit mode, up to and including making up nonexistent newspaper articles and mischaracterizing the contents of secondary sources.
I am unclear on how exactly this relates to Bing's incorporation into search results, but I am not filled with serene confidence in the results.
Aardvark Cheeselog said,
February 22, 2023 @ 10:23 am
Yet Another professional programmer here, chiming in to agree that no, ChatGPT is not close to putting me out of a job. ChatGPT is not even the kind of thing that I might one day have to worry about, if I live long enough.
Also, are you a physicist? I mean, are you truly competent to evaluate its responses about chirality in particle physics? Because having looked at a number of situations where somebody prompted ChatGPT about something and it came up with a plausible-to-laymen response, there's a clear pattern where the real expert looks at that response and says "bullshit."
Jonathan Smith said,
February 22, 2023 @ 12:08 pm
The real question is how in the world the sequence of events
ChatGPT is rightly recognized as a good-to-great chatter >> users discover ChatGPT cannot put 2 and 2 together literally as that is not its nature >> ChatGPT is incorporated into Bing as if it will be a great source of facts
occurred, as "silly wabbit, search engines are for linking users to information"
But then Musk hasn't realized that "silly wabbit, driving is for freedom" yet, so it seems this kind of thinking is massively scaled up in a reflexive fashion on the regular…
Daniel Powell said,
February 22, 2023 @ 2:40 pm
I'm also a professional programmer, and I agree with my colleagues in this thread.
I haven't seen very good output from ChatGPT. The examples of code that I've seen have been very small (think minutes of work for a human, not hours or days) and often riddled with basic errors. It seems to me that ChatGPT is easily outperformed by even inexperienced humans.
Remember that GPT-3.5 is trained on a corpus consisting primarily of natural language. This corpus was assembled rather indiscriminately from the Web, so while the model has certainly seen some amount of code, it has largely been exposed to natural language texts. It would be very surprising to me if ChatGPT were capable of generating good code.
There are other tools available, like GitHub Copilot, which do a somewhat better job of producing code suggestions, but even Copilot is (in my experience) largely unusable for any serious purpose.
I'm willing to entertain the possibility that generative transformer architectures will eventually lead to powerful code-generation tools, but I'm very skeptical that any approach involving OpenAI's natural-language GPT models will be productive.
I'd caution non-professionals to approach this topic with some humility; there are many unjustifiable claims circulating in the media.
wanda said,
February 23, 2023 @ 2:24 am
I've given ChatGPT the writing prompts for my intro bio course. What it comes up with is indistinguishable from what my students come up with, including the errors that nearly all of them make, if they don't pay attention to or understand the rubric I've given them that tells them the answer.
Conal Boyce said,
February 23, 2023 @ 3:58 pm
Glad to see there was actually some interest in my guest post. Things are moving so fast, I thought it might look obsolete already.
@Peter Taylor, @Gregory Kusnick, @Aardvark Cheeselog, @Daniel Powell: I'm actually pleased to hear your responses all saying NO, ChatGPT is not a threat to us. I worked for 10 years as a fairly untalented programmer (somewhere between B-minus and B-plus level?) so I have fond memories of the field and would hate to see it go up in smoke. My "existential dread" was just thrown out there (based on a too-small sample) to see what the response might be. And you responded. Thank you all for the detailed feedback.
@Chris Barts and @Aardvark Cheeselog re chirality and parity. No, I'm not a physicist, but I've spent years studying this topic and I've written several papers (unpublished) on it. Briefly, here is the issue: Physicists talk as if all kinds of symmetry are AS 'sacred' as those associated with Emmy Noether's theorem. But it's simply not true. It happens that there are several kinds of symmetry that will lie forever outside of her realm. Nevertheless, physicists have such religious-seeming fervor about symmetry that they treat these 'lesser' kinds of symmetry AS IF they were Noether-level symmetries. 'Parity' happens to be one such 'lesser' symmetry. Now what Dr. Wu accomplished in 1957 was a great moment in history when humans discovered/confirmed that the universe has chirality (handedness). That would be the positive, commonsensical way to express the accomplishment of C.S. Wu, C.N. Yang and T.D. Lee. But within their own field, that moment is called, instead, "the violation of parity." Why? Because symmetries are such a religion (or in this case, a quasi-superstition) to physicists. Accordingly, they fought the results of 1957 for decades, trying to show by creative accounting methods that somehow symmetry wasn't really broken by Dr. Wu's experiment. That's why I call the label "violation of parity" a passive-aggressive bit of argot because it says, in effect, "Well, yeah, she seems to have demonstrated a case where parity doesn't hold, but we know better; we know that eventually we can find some way to 'save' parity, because symmetry is our religion." (Spoiler alert: They never did, unless one buys their creative-accounting hocus-pocus.)
Conal said,
February 24, 2023 @ 8:02 am
…continuing from my previous comment, this provides an opportunity to try ChatGPT one more time. I asked it "What is the difference between the discrete symmetries and continuous symmetries?" (The former are what I call 'wannabe-Noetherian symmetries' or 'lesser symmetries'; the latter are the true Noetherian symmetries. One understands the physicists' quasi-religious awe toward the Noetherian symmetries, but when they rise up to fervently "defend" the discrete symmetries too, that's just wishful thinking or superstition, not science.) ChatGPT's four-paragraph response to my question ("What is the difference…?") nicely illustrates its mindless-but-basically-not-incorrect style of pseudo-chatting over a coffee at length on any topic:
"Discrete symmetries and continuous symmetries are two types of symmetries in mathematics and physics. The main difference between them is related to the nature of the transformations they generate.
Discrete symmetries involve transformations that leave a system invariant under a finite set of operations. These operations can be thought of as "jumps" or "steps" that move the system from one configuration to another, where the system remains invariant under each step. Examples of discrete symmetries include reflections, rotations, and translations in Euclidean space.
On the other hand, continuous symmetries involve transformations that leave a system invariant under an infinite set of operations. These operations can be thought of as smooth transformations that gradually change the system from one configuration to another, while preserving its properties. Examples of continuous symmetries include rotations in three-dimensional space, translations in space-time, and gauge transformations in quantum field theory.
In summary, the main difference between discrete and continuous symmetries lies in the nature of the transformations they generate: finite and discrete for the former, and infinite and continuous for the latter."
For real answers (in contrast to ChatGPT's pseudo-answer above), I recommend T.D. Lee, Symmetries, Asymmetries, and the World of Particles (1988), pp. 57-59, or Dave Goldberg, The Universe in the Rearview Mirror (2013), pp. 295-297. (Like T.D. Lee, Goldberg is a physicist.) Goldberg's presentation is by far the better of the two, although T.D. Lee adds some nuances and technical details not found in Goldberg.
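To put the distinction into symbols rather than ChatGPT's prose, here is a minimal sketch in my own shorthand (not drawn from either book). A continuous symmetry comes in a one-parameter family connected to the identity, e.g. rotation about the $z$-axis,
$$R_z(\theta): (x, y, z) \mapsto (x\cos\theta - y\sin\theta,\; x\sin\theta + y\cos\theta,\; z), \qquad \theta \in \mathbb{R},$$
and Noether's theorem attaches a conserved quantity to it (here, the $z$-component of angular momentum). A discrete symmetry such as parity,
$$P: (x, y, z) \mapsto (-x, -y, -z), \qquad P^2 = 1, \quad \det P = -1,$$
has no continuous parameter to vary, so Noether's theorem supplies no conserved current for it; whether nature respects it is a separate, empirical question, and in 1957 the answer for the weak interaction turned out to be no.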
Brett said,
February 24, 2023 @ 5:11 pm
@Conal Boyce: Your description of the history of parity violation and how it was understood is bizarrely wrong. Trying to "save" parity symmetry was never a major mainstream viewpoint. Lee and Yang famously won the Nobel prize less than a year after the publication of Madame Wu's experimental result demonstrating parity violation in cobalt decay. Since then, searches for further symmetry violations have been ubiquitous and have turned up many fascinating results; the further demonstration of not just P violation but CP violation led to two Nobel prizes—one for the discovery and another for the conclusion that CP violation was caused by the existence of third-generation quarks. Moreover, the discovery of parity violation eventually led to the exciting subsequent understanding that the weak interaction is, at relativistic energies, not slightly parity violating but actually maximally parity violating! And it is currently a major puzzle why parity is apparently not violated in the strong interactions, even though it could be, through the existence of a vacuum θ angle.
As to the linguistic issue of whether it is more useful to refer to these effects in terms of parity or chirality, there is also the fact that chirality already had a (related but distinct) meaning in stereochemistry. Optically active materials of distinct chirality lead to polarization rotation when light passes through them, a critically important phenomenon in the modern age of laser optics, which was also dawning around the same time as the early measurements of parity violation in β-decay. So the term chirality (although it is certainly used in particle physics to describe the eigenvalues of γ₅, which determine whether an elementary particle interacts with an intermediate vector boson) was, to an important extent, "preoccupied" (as they say in biological systematics).