This is the 4th time I've gotten Jack and his beanstalk

Bill Benzon shares the response he got from ChatGPT to the prompt, "Tell me a story."

Here's the response:

Once upon a time, in a far-off land, there was a young boy named Jack who lived with his mother. They were very poor and struggled to make ends meet. One day, Jack's mother sent him to the market to sell their cow, which was their only source of income.

On his way to the market, Jack met an old man who offered to trade him five magic beans for the cow. Jack, being a foolish boy, agreed to the deal and went home to show his mother the beans.

His mother, angry and disappointed, threw the beans out of the window and sent Jack to bed without any dinner. The next morning, Jack woke up to find a huge beanstalk growing in their backyard.

Without hesitation, Jack climbed up the beanstalk and discovered a giant's castle at the top. He sneaked inside and found a vast room filled with gold and treasure. Jack decided to steal some of the treasure, but the giant woke up and chased him.

Jack managed to escape down the beanstalk, but the giant followed him. Jack quickly grabbed an axe and chopped down the beanstalk, causing the giant to fall to his death.

With the giant gone, Jack and his mother lived happily ever after with their newfound wealth. They realized that it was better to be wise and cautious rather than foolish and greedy, and they never forgot the lesson they learned from the magic beans.

VHM:

Hah!

Interesting and consistent, and it tells us something significant about ChatGPT.

BB:

Yes, it does, Victor. Just what I’m not sure. I’m in the process of looking for a statistician to help me find out.

VHM:

1. Did you ask the question just that way each time?

BB:

Yes. Always. I’ve asked exactly that question I don’t know how many times, and in different sessions. I suppose I could count them (I’m keeping Word files of all my interactions with ChatGPT). Let’s say it’s more than 30 but less than 100.

VHM:

2. Did the stories differ much in detail, wording, and length?

BB:

I haven’t read them all in detail. But the wording is different, and the incidents too. But always a beanstalk and always a giant, and yes, our protagonist is always named Jack. The length has increased, certainly with the Feb 13 version, which is the one currently up. But perhaps before. The earlier ones are a bit shorter than the one I sent you.

I should add that I often get stories involving dragons, which are a prominent motif in Western tales. I’d guess they’d be more prominent in Chinese tales, but I certainly don’t know.

30 Comments

  1. DJL said,

    March 16, 2023 @ 4:04 am

    This may have something to do with the number of background "prefixes" the chatbot has at its disposal to coax the underlying language model to provide appropriate answers to a user's questions, with one such prefix involving a family of templates of what a conversation looks like (and how a given conversation proceeds; the underlying language model knows nothing about what a conversation is). It may well be that given a general prompt such as 'tell me a story' the chatbot has a prefix (or template) that activates a specific number of possible stories to use from (and variations of such stories).

  2. Bill Benzon said,

    March 16, 2023 @ 8:24 am

    @DJL: Interesting.

    At this point I've got over 300 stories I've elicited from ChatGPT in various ways. I've used a variety of prompts, but I've used the following four prompts repeatedly:

    Tell me a story.
    Tell me a story about a hero.
    Tell me a realistic story.
    Tell me a true story.

    The beanstalk only shows up in response to the first. The first two almost always elicit fairy-tale kinds of stories, with witches and dragons and peasants and such. The first always elicits stories that are physically possible. And the last always elicits true stories, at least as far as I've checked. If I recognize the protagonist that's about as far as my checking goes. If I don't recognize the protagonist, I check Wikipedia or do a web search, but I tend not to read the returned information in any detail. It's possible that if I pushed for more detail, I could push ChatGPT into fabricating stuff, but I've not tried.

  3. Bill Benzon said,

    March 16, 2023 @ 8:51 am

    Whoops! The THIRD always elicits stories that are physically possible.

    Yes, it does appear that ChatGPT "has a prefix (or template) that activates a specific number of possible stories to use from (and variations of such stories)." But where did it come from? And how does it keep on track when generating a story token by token? Note that it visits every one of its 175 billion parameters each time it generates a token. If we think of ChatGPT as a virtual machine whose state is specified by 175B variables, then emitting a token where we can see it is almost a side-effect of the process of evolving a trajectory through its state space.

    When generating true stories, there was a run where it favored the story of Sully Sullenberger, who landed his passenger plane in the Hudson River, and Malala Yousafzai, a Pakistani education activist.

    I have a paper in which I investigate the structure of hero stories using a specific procedure ultimately derived from what Lévi-Strauss did with myths in The Raw and the Cooked. I give ChatGPT a prompt consisting of two things: 1) an existing story and 2) instructions to produce another story like it except for one change, which I specify. That change is, in effect, a way of triggering or specifying those “transformations” that Lévi-Strauss wrote about. What interests me is the ensemble of things that change along with the change I have specified. It varies quite a bit depending, it seems, on the semantic distance between the protagonist or antagonist in the original story and the one I specify for the new story.
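
    A minimal sketch of how such a substitution prompt could be assembled and sent programmatically (the OpenAI Python client, model name, and file name below are assumptions chosen for illustration, not the setup actually used for the paper):

        # Sketch of the substitution experiment: original story plus one specified change.
        # Model name and file name are placeholders.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        original_story = open("aurora_story.txt").read()  # the source story
        new_character = "Z80-Omega-D23"                   # the substituted protagonist

        prompt = (
            "I am going to tell you a story about princess Aurora. "
            "I want you to tell the same story, but instead of princess Aurora, "
            f"make it about {new_character}. Make any other changes you wish. "
            f"Here's the story:\n\n{original_story}"
        )

        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.choices[0].message.content)  # the transformed story, for comparison

    Comparing that output against the original, sentence by sentence, is what surfaces the ensemble of changes that ride along with the single substitution.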

  4. Victor Mair said,

    March 16, 2023 @ 9:06 am

    AI with Chinese characteristics.

    China’s AI Chatbots Clam Up When Asked About Xi Jinping’s Leadership

    ‘The question couldn’t pass a safety review’; American politics are off limits, too

    By Shen Lu
    Updated March 15, 2023 10:51 pm ET

    ======

    For companies trying to ride the ChatGPT wave, there is an added layer of headaches if their chatbots are in China: how to weed out any talk of politics.

    Since its release in November, the artificial-intelligence chatbot developed by San Francisco-based OpenAI has generated great excitement but has also raised concern about misinformation or disturbing AI-produced responses.

    This week, the Chinese search-engine company Baidu Inc. is set to release Ernie Bot, its answer to ChatGPT, driving attention to China’s homegrown chatbots, as well as their capability to ensure politically kosher dialogues.

    ….

  5. George said,

    March 16, 2023 @ 11:13 am

    What I'd be interested in knowing is whether the moral of the story is always the same, as it seems to me that – by living happily ever after with their newfound wealth – Jack and his mother learned a very different lesson from the one given in the final paragraph.

  6. Bill Benzon said,

    March 16, 2023 @ 12:18 pm

    Here's a somewhat different version of the Jack/beanstalk story, without an explicit moral:

    Once upon a time, in a small village, there lived a kind and hardworking farmer named Jack. He lived a simple life with his wife and two children, working tirelessly on their farm to provide for their family. One day, while working in the fields, Jack stumbled upon a magical beanstalk that reached up into the sky.

    Curious, Jack climbed the beanstalk and discovered a kingdom in the clouds, ruled by a giant. The giant had a golden harp that played beautiful music all by itself. Jack knew his family would love to hear the music, so he bravely sneaked into the giant's castle and stole the harp.

    On his way back down the beanstalk, the giant woke up and chased after him. Jack made it safely back to the ground and gave the harp to his family, who were overjoyed. But the giant continued to terrorize the village, demanding the return of his beloved harp.

    Determined to protect his family and his village, Jack gathered his courage and climbed the beanstalk once more, facing the giant in a fierce battle. In the end, Jack emerged victorious and the giant was defeated. The village was saved, and Jack and his family lived happily ever after with the golden harp bringing music and joy to their lives.

    The end.

    * * * * *

    And another:

    Once upon a time, there was a small village surrounded by a dense forest. In this village lived a poor woodcutter named Jack who lived with his wife and two children. Jack worked hard every day to earn a living for his family, but he still struggled to make ends meet.

    One day, while Jack was cutting wood in the forest, he came across a magical beanstalk that reached up into the clouds. He took the beanstalk home to show his wife, who was skeptical of its magical powers. But Jack was determined to climb the beanstalk to see what was at the top.

    So, he climbed and climbed, higher and higher, until he reached the top of the beanstalk where he found a magnificent castle. The castle belonged to a giant who was not happy to see Jack. However, Jack was a clever man and was able to outwit the giant and steal a large sack of gold. He climbed back down the beanstalk and used the gold to provide a comfortable life for his family.

    The giant, meanwhile, was furious when he realized what had happened and decided to come after Jack. Jack knew he needed to protect his family, so he chopped down the beanstalk, causing the giant to fall to his death.

    From that day forward, Jack and his family lived a happy and wealthy life, and they never forgot the lesson that sometimes, bravery and cleverness can lead to a better future. The end.

  7. Tim Rowe said,

    March 16, 2023 @ 1:25 pm

    I always thought the moral of Jack and the Beanstalk was if hard work is getting you nowhere, find out who is hoarding all the wealth, overthrow them, and claim it back.

    I doubt Baidu will draw that moral.

  8. Jonathan Smith said,

    March 16, 2023 @ 2:26 pm

    "He took the beanstalk home to show his wife" new wrinkle

  9. Bill Benzon said,

    March 16, 2023 @ 7:03 pm

    Here's the problem I'm having: We're told that LLMs, like ChatGPT, generate one token at a time (where a token is, roughly, a word). What does that mean? At the very least it's a statement about the capacity of the output channel, that it can only handle one token at a time. The same is true for humans. We can only speak or write one word at a time. But we generally have some 'plan' in mind that takes us to the end of the utterance or sentence, often much more, taking us to the end, for example, of the story we're telling about the brown bear that stole our food on the camping trip last November. In the case of LLMs, however, we are to believe that they have no such plans.

    That, I'm afraid, does not compute, not for me. Those Jack/beanstalk stories are not the stuff of great or even merely interesting literature. But they are reasonably well-formed. I don't see how that is possible if all ChatGPT is doing is picking tokens more or less at random out of a hat. Why do all these stories, not just the Jack stories, but all of the stories prompted by either "Tell me a story" or "Tell me a story about a hero," why do all of those stories have a happy ending, often enough with a moral attached to it? Why isn't there a single sad story in the bunch (between, say, 100 and 200 stories by now)? If I ask for a sad story, I'll get one. Otherwise I won't. If I ask for a story about a criminal, it'll give me one. But not spontaneously.

    It seems to me that once it embarks on telling a story in response to one of those two prompts it more or less has embarked on a certain kind of trajectory through its state space that will end up in a happy ending. When and how is that determination made and how is it maintained?

    Consider the first sentence of that Jack story: "Once upon a time, in a far-off land, there was a young boy named Jack who lived with his mother." For the sake of argument let us assume, as DJL has suggested, that the beginning of that sentence comes from a template that's been added to the underlying LLM to make it user-friendly. So the first token the LLM has to choose comes after the formulaic opening: "Once upon a time, in a far-off land." However, the token generation process will take those phrases into account when it generates the next token. Since those phrases are characteristic of a certain kind of story, and nothing else, those phrases exert a strong influence on how this trajectory is going to unfold.

    To generate the next token, ChatGPT takes those existing tokens into account and then ripples through all 175 billion parameters before generating the next token. It does that for each token. When the rippling is done, it's presented with a probability distribution over possible next tokens and picks one: "Once upon a time, in a far-off land, there"

    It does it again: "Once upon a time, in a far-off land, there was"
    And again: "Once upon a time, in a far-off land, there was a"
    And again: "Once upon a time, in a far-off land, there was a young"

    And so forth until "Jack" has entered the stream and finally "his mother". At that point ChatGPT has to pick another token. Note that it treats periods as tokens. Given the nature of English syntax, what are the likely possibilities for the next token? There aren't many. A period is one. However "and" or "but" are also possibilities, as are a few other words. This is not a wide open choice. Once the period has been selected and entered into the stream, the next token will begin a new sentence. The range of choices will open up, but once "poor" enters the stream some (semantic) constraints set in. By the end of that paragraph…just what?
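
    In code, that loop looks roughly like the sketch below, using a small open model (GPT-2) as a stand-in for ChatGPT's much larger network; the model choice and the plain sampling scheme are assumptions made purely for illustration:

        # Token-by-token generation: at every step the model yields a probability
        # distribution over its whole vocabulary, and one token is drawn from it.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tok = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        ids = tok("Once upon a time, in a far-off land, there", return_tensors="pt").input_ids
        for _ in range(20):
            logits = model(ids).logits[0, -1]      # a score for every item in the vocabulary
            probs = torch.softmax(logits, dim=-1)  # the distribution over possible next tokens
            next_id = torch.multinomial(probs, 1)  # pick one token from that distribution
            ids = torch.cat([ids, next_id.unsqueeze(0)], dim=-1)

        print(tok.decode(ids[0]))                  # the prompt plus twenty generated tokens

    Greedy decoding, temperature, and top-p sampling are just different rules for picking from that same per-step distribution; the distribution itself is what the preceding tokens and the trained weights jointly determine.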

    It seems to me that something pretty sophisticated happens when the LLM polls those 175 B parameter weights. That's what's "guiding" the trajectory to a proper story-ending. While there is a certain looseness about the trajectory, the range of options at any given point is quite limited.

    Anyhow, that's what I'm exploring in the paper I linked above: ChatGPT tells stories, and a note about reverse engineering. Here's the abstract:

    I examine a set of stories that are organized on three levels: 1) the entire story trajectory, 2) segments within the trajectory, and 3) sentences within individual segments. I conjecture that the probability distribution from which ChatGPT draws next tokens follows a hierarchy nested according to those three levels and that is encoded in the weights of ChatGPT's parameters. I arrived at this conjecture to account for the results of experiments in which ChatGPT is given a prompt containing a story along with instructions to create a new story based on that story but changing a key character: the protagonist or the antagonist. That one change then ripples through the rest of the story. The pattern of differences between the old and the new story indicates how ChatGPT maintains story coherence. The nature and extent of the differences between the original story and the new one depend roughly on the degree of difference between the key character and the one substituted for it. I conclude with a methodological coda: ChatGPT's behavior must be described and analyzed on three levels: 1) The experiments exhibit surface level behavior. 2) The conjecture is about a middle level that contains the nested hierarchy of probability distributions. 3) The transformer virtual machine is the bottom level.

  10. Chester Draws said,

    March 16, 2023 @ 10:07 pm

    When and how is that determination made and how is it maintained?

    Surely that determination is made by us humans, who write fairy stories that are overwhelmingly of a particular trajectory, with a happy ending.

    If in over 99% of the Jack and beanstalk stories that ChatGPT "reads" the hero is called Jack and kills the Giant and lives happily ever after, then it is hardly going to go out on a limb and call the protagonist Jason and have him unhappy and killed by a falling ship part.

    It's not thinking, it's reassembling pieces from a corpus. Hence the utter lack of true originality.

  11. Bill Benzon said,

    March 17, 2023 @ 6:28 am

    @Chester Draws: Ultimately, yes.

    But that's not the question I'm asking. The question I'm asking is a technical one about the internal operations of ChatGPT. The standard line is that it generates output one token at a time without any "global plan about what’s going to happen," to quote from Stephen Wolfram (in this video). I'm arguing that there IS something "like" a global plan, and that it's encoded in those parameter weights, which are polled each time a token is to be emitted.

  12. DJL said,

    March 17, 2023 @ 9:27 am

    I would imagine that the answer to that question has to do with the scripts, filters, and "prompt engineering" techniques, in particular chain-of-thought prompting, that LLMs can be augmented with at the interface between the LLM itself and the dialogue management system (or chatbot) users access to query LLMs. But seeing something like a global plan in the parameters of an LLM seems entirely unwarranted. If anything, this would reflect a failure to distinguish between the actual LLM – a neural network that takes a string of words as input and returns the most likely word/token as continuation as output – and what sits on top of it – the chatbot that translates user queries into something the LLM can actually understand and operate upon. Once upon a time some language models included story planning scripts of various kinds, and I suppose something along those lines is operative in ChatGPT too.
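
    By way of illustration, the "overlay" idea amounts to something like the following: the bare model only continues text, so a wrapper has to turn the user's words into a text the model can usefully continue. The template wording here is invented; the actual prompts used by ChatGPT are not public:

        # Toy illustration of a chatbot "prefix"/template wrapped around a user query.
        # The template text is made up; real system prompts differ.
        CONVERSATION_TEMPLATE = (
            "The following is a conversation between a helpful assistant and a user.\n"
            "User: {user_input}\n"
            "Assistant:"
        )

        def build_prompt(user_input: str) -> str:
            """The string the underlying language model might actually be asked to continue."""
            return CONVERSATION_TEMPLATE.format(user_input=user_input)

        print(build_prompt("Tell me a story."))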

  13. Rodger C said,

    March 17, 2023 @ 11:35 am

    Here's a linguistic question of a different sort: is "chain of thought" (which I haven't seen before) a variant of "train of thought" among people who pronounce them the same?

  14. Chester Draws said,

    March 17, 2023 @ 3:38 pm

    The standard line is that it generates output one token at a time without any "global plan about what’s going to happen."

    I find it extremely difficult to believe that that is what is happening. So I agree, there must be some sort of "this is how a story works" on top. You go to disputed topics and it becomes obvious.

    For example, if I ask "give me some virtues of mao tse-tung" it proceeds to give me a numbered list. Therefore it must "know" that it is going to have more than one, or it would not bother to put a "1" at the start.

    If I ask it "give me some virtues of pol pot" it declines to.

    If I ask it "give me some virtues of donald trump" it does so, but almost as if embarrassed. What it doesn't do is give a numbered list. How does it find a different path for Trump over Mao? It clearly has a plan about how it deals with different, quite similar, situations.

    It is not hard to get it to outright lie. Merely ask for some evidence or reference to a contentious situation and it will often refuse to provide them, even though they most certainly exist. It must have instructions to decline to "find" various things.

    Ask it "what are the dangers of vaccines" and it gives honest answers, including rare deaths. Ask it "what are the dangers of the pfizer covid vaccine" and suddenly death is no longer a danger, even though we know a few people died as a result of taking it.

    There is clearly something going on, where some political issues are instructed to have no good things, some are allowed some but guarded, some get them straight out.

    Right or wrong, it is being directed from the start, not finding its own way, one block at a time.

  15. Bill Benzon said,

    March 17, 2023 @ 3:51 pm

    To DJL:

    What you say doesn't make sense. First, "…seeing something like a global plan in the parameters of an LLM seems entirely unwarranted." How do you know this? Do you actually know what those parameters are doing? Somehow they guided the device to successfully predict next words during training. It seems to me that that would require that they learn how stories are structured. Why can't they deploy that "knowledge" during generation?

    You say:

    If anything, this would reflect a failure to distinguish between the actual LLM – a neural network that takes a string of words as input and returns the most likely word/token as continuation as output – and what sits on top of it – the chatbot that translates user queries into something the LLM can actually understand and operate upon.

    In the first place, I'm not talking about the prompt and how it's translated for use by the underlying LLM. I'm talking about how the underlying LLM generates the string of tokens that is the story.

    Are you saying that it's the chatbot that writes the stories, and not the underlying LLM? If so, what's your evidence for this and why then do we need the underlying LLM?

    You conclude:

    Once upon a time some language models included story planning scripts of various kinds, and I suppose something along those lines is operative in ChatGPT too.

    Where did those scripts come from? Did the LLM induce them from its training corpus? If so, then you would seem to agree with me, as that's all I'm arguing. Those "scripts" would be a "global plan."

    To Roger C:

    "Chain of thought" is a term of art in the LLM world and refers to a way of constructing prompts. It's unrelated to the notion of train of thought.

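    For readers who haven't met the term, a chain-of-thought prompt simply invites the model to write out intermediate steps before giving its answer. A bare-bones example (the wording is invented for illustration):

        # Minimal illustration of chain-of-thought prompting: the prompt itself asks
        # the model to spell out its reasoning before the final answer.
        prompt = (
            "Q: A farmer has 3 cows and buys 2 more. Each cow gives 4 liters of milk a day. "
            "How many liters does the farmer get per day?\n"
            "A: Let's think step by step."
        )
        # The model is then expected to continue with the intermediate steps
        # ("3 + 2 = 5 cows; 5 * 4 = 20 liters per day") before stating the answer.
        print(prompt)
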
  16. Bill Benzon said,

    March 17, 2023 @ 3:58 pm

    To Chester Draws:

    It must have instructions to decline to "find" various things. […] There is clearly something going on, where some political issues are instructed to have no good things, some are allowed some but guarded, some get them straight out.

    Yes. OpenAI, at great expense, has had the LLM trained to deal with a wide range of topics. This is after and "on top of" the training given to the underlying LLM. I have a long post, ChatGPT: The Saga of Jack the Criminal, where ChatGPT refuses to tell stories about certain crimes.

  17. Bill Benzon said,

    March 17, 2023 @ 4:01 pm

    Whoops! It was trained NOT to deal with a wide range of topics – various identity (woke) issues, how to commit dangerous mischief (make a bomb and the like), and so forth.

  18. DJL said,

    March 18, 2023 @ 9:25 am

    What doesn't make any sense is to keep ascribing understanding or planning to a language model, which is nothing but a neural network taking a string as input and producing the most likely continuation word/token as output, or to the parameters of such models, which simply reflect the matrices of values allocated to each word/token within the models.

    What I was trying to convey in my previous message, even if it doesn't seem to register, is that there are loads of things on top of the actual language model to make it act as if it were having a conversation, or telling a story, or solving a problem – from reinforcement learning, where actual humans evaluate the responses, to the many templates, scripts, and filters specifying how a conversation proceeds, or how a story is told, or how a problem is solved, etc.

    But the whole thing is an illusion at the end of the day, as the overall system is just a computer program that has been designed to regurgitate text that it has learned from the vast amounts of text that were actually produced by humans to begin with – a great engineering feat, no doubt. But to keep on ascribing human mental states to these computer programs is not only a bit silly, but also unhealthy.

  19. Bill Benzon said,

    March 18, 2023 @ 11:20 am

    DJL: So it is your belief, then, that the string of words in that Jack/beanstalk story was not generated by the LLM at the core of ChatGPT but instead was somehow concocted by some unspecified programmatic overlay. I grant the existence of such overlays and have read a fair amount about them. But I have never read anything about them creating word strings that read like stories that were not generated by the underlying LLM. Do you have any evidence at all that LLMs cannot generate strings that read like stories?

    Speculation about what might be done by "loads of things on top of the actual language model" is not evidence.

  20. DJL said,

    March 18, 2023 @ 11:41 am

    I didn't say such a thing; you really need to stop ascribing unsubstantiated beliefs to both machines and other people.

    Of course all the text is generated by the underlying LLM (well, not all of it, actually; when the chatbot returns an 'I'm an AI and can't answer this question' answer, it is just applying a template); but the underlying LLM needs to be prompted and directed the right way by the overlay, as you call it, as the LLM doesn't know what the command 'tell me a story', for instance, actually means.

    As I have pointed out elsewhere ad nauseam now (as you must surely be aware), when you pose a question to the chatbot, what the LLM receives is something along the lines of:

    what's the most likely word/token to follow the string S.

    That's it, and the rest of the "magic" happens at the interface between the LLM and the chatbot in the form of prompt engineering, filters, templates, scripts, reinforcement learning, etc etc. No speculation at all, but simply a description of what the system is like – why is it so difficult to understand the difference between the LLM (which you DO NOT interact with directly) and the actual chatbot (or AI assistant, or dialogue management system, which you DO interact with directly), including everything that sits in between?

  21. Bill Benzon said,

    March 18, 2023 @ 12:48 pm

    DJL: What I'm having difficulty with is the scope of actions you are attributing to everything but the underlying LLM, to all the "prompt engineering, filters, templates, scripts, reinforcement learning, etc etc." What I see is a prompt – such as "Tell me a story" – followed by a string of words that looks like a story. The string sometimes begins with the standard formula, "Once upon a time, in a far-off land," such as we see in the OP.

    All that other stuff you refer to presents the LLM with a string that it can continue. That continuation is the string that all that other stuff then presents to me. I interpret that string as a story.

    Why do I interpret it as a story? Because it has a beginning, a middle, and an end. And it makes sense. At this point I have collected over 300 such strings, using a variety of prompts, some of which I've listed above. Those strings have a lot of structure. I attribute that structure to the LLM. Am I wrong to do so? If the structure in the string wasn't created by the LLM, where did it come from?

    I know that the LLM is not a mind. I believe that it is, in fact, something we've not seen until quite recently and something we don't understand. Whatever it is, it is highly structured and, as such, is able to produce strings of words that are highly structured as well. And not just stories. But, for example, it produces strings that read like definitions of "reward," "nation," "culture," "beliefs," "evidence," "observation," "understand," "groups," or "collection." I'm curious about what that structure is and how it works.

    If you aren't curious about that, that's fine.

  22. DJL said,

    March 19, 2023 @ 4:55 am

    If you input the string 'tell me a story' directly into the LLM, what you are going to get in reply is a single word, and that will probably be 'that' – that is, the network would probably calculate that the word 'that' is the most likely continuation to the string 'tell me a story'. That's just what it does; I should know, I have done some coding with LLMs and this is what you get.
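
    That behavior is easy to reproduce with a small open model standing in for the LLM (GPT-2 in the sketch below; the model choice is an assumption, and the particular token it favors may differ from the one described above):

        # A bare language model, given "tell me a story", just returns one
        # most-likely continuation token. GPT-2 stands in for the real LLM.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tok = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        ids = tok("tell me a story", return_tensors="pt").input_ids
        logits = model(ids).logits[0, -1]                  # scores for every possible next token
        next_token = tok.decode([logits.argmax().item()])  # the single most likely continuation
        print(repr(next_token))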

    All I am saying is that in systems such as ChatGPT (a chatbot, and not the LLM itself), there are a lot of things on top of the LLM so that the output looks like what a user might expect it to look like, even if, ultimately, the text is generated by the LLM.

  23. Bill Benzon said,

    March 19, 2023 @ 12:10 pm

    If you're curious about what's going on, here are some useful links:

    How ChatGPT actually works – This is about the "packaging" that DJL is talking about so that the underlying large language model (LLM) is more user-friendly. This is based on a paper by the OpenAI team that did that work.

    What Is ChatGPT Doing … and Why Does It Work? – This is a long quasi-technical paper by Stephen Wolfram that has a lot of useful visualizations, including one series on what is happening token-by-token as GPT continues a sentence.

    Transformers: more than meets the eye – An informal paper about the transformer mechanism at the heart of GPTs and other AIs, including DALL-E, Stable Diffusion, and AlphaFold.

    Language is our latent space – Informal and philosophical:

    Put another way, LLMs construct a map of our language, then learn the structures of recurring patterns in that space. To stretch an analogy, but not so much that it’s wrong, you might think of words embedded in latent space as places on a map, and the discovered grammatical and conceptual patterns as the roads and rivers and pathways which can connect those places. LLMs are not trained to be chatbots. What they are trained to do — because it’s the only thing we know how to train them to do! — is take an existing set of words, the "prompt," and continue it. In other words, given the start of a journey on that language map, they guess how and where that journey is most likely to continue. And they have gotten shockingly good at this.

    It gets more speculative from there.

    GPT-3: Waterloo or Rubicon? Here be Dragons – I wrote this when GPT-3 first came out. It represents an attempt to create a conceptual framework in which the behavior of GPT-3 makes some kind of sense, even if the exact mechanisms are obscure. That may not seem like asking much, but it's an improvement over terminally mysterious and impossible. Here's the abstract:

    GPT-3 is an AI engine that generates text in response to a prompt given to it by a human user. It does not understand the language that it produces, at least not as philosophers understand such things. And yet its output is in many cases astonishingly like human language. How is this possible? Think of the mind as a high-dimensional space of signifieds, that is, meaning-bearing elements. Correlatively, text consists of one-dimensional strings of signifiers, that is, linguistic forms. GPT-3 creates a language model by examining the distances and ordering of signifiers in a collection of text strings and computes over them so as to reverse engineer the trajectories texts take through that space. Peter Gärdenfors’ semantic geometry provides a way of thinking about the dimensionality of mental space and the multiplicity of phenomena in the world, about how mind mirrors the world. Yet artificial systems are limited by the fact that they do not have a sensorimotor system that has evolved over millions of years. They do have inherent limits.

  24. Bill Benzon said,

    March 19, 2023 @ 1:34 pm

    Here's a useful tutorial on word and sentence embeddings, which is how words and sentences get represented in LLMs. They are thus central to LLM technology.
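
    As a concrete illustration of what those embeddings buy you, here is a small sketch that turns a few character descriptions from the stories above into vectors and measures how far each substitute sits from the original protagonist (the embedding model named below is an assumption; any sentence-embedding model would illustrate the point):

        # Sentences become vectors; "semantic distance" becomes a number you can compare.
        import numpy as np
        from sentence_transformers import SentenceTransformer

        model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
        sentences = [
            "a young princess named Aurora",
            "a machine called Z80-Omega-D23",
            "a colorless green idea",
        ]
        vecs = model.encode(sentences)

        def cosine(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        # Similarity of each substituted character to the original protagonist:
        for s, v in zip(sentences[1:], vecs[1:]):
            print(f"{s!r}: {cosine(vecs[0], v):.3f}")

    The specific numbers don't matter; the point is that "how different is the new character from Aurora" becomes something you can measure rather than eyeball.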

  25. Chas Belov said,

    March 19, 2023 @ 1:59 pm

    I've asked for a couple stories, initially getting one about Lila slaying a dragon. The second time I asked for a story, I asked for one beginning "It was a dark and stormy night". Obviously not statistically significant, but both times the ending was happy on the initial try. The second story involve a woman who received a male visitor on a rainy night and they wound up talking until the morning. I like to ask for variations, such as in the style of Mark Twain or Edgar Allan Poe or as a slapstick comedy routine, and tend to be entertained by the results.

    For the second story, I asked it to retell the story as a Shakespearean tragedy. The worst that happened was that the man left the next morning and the woman was heartbroken at this, obviously a much milder definition of tragedy than I would have expected.

    I also tried asking it to tell me the story in reverse. It correctly started out with the ending but get to the middle and proceeded back to the ending rather than winding up with the beginning as I had requested.

    I do notice ChatGPT seems to have trouble with requests that ask it to do something unusual. For instance, to tell me the difference between turnips and rutabagas without using the letter E. It would give me an answer that was mostly without the E, but one or two words would have that letter. When I would point that out and ask it to try again, it would apologize and spout out a new answer which corrected that word but introduced another word which had the letter E. After several times of this, I told it to restate the exact answer but to replace any words containing the letter E with words that did not contain the letter E. It wound up replacing some other words and left the E words in place.

    With the story, I asked it to retell the story but put the word "green" in every sentence. It didn't manage every sentence, but did add green in at least one strange place, giving the cat in the story a green tail. However, "green" appeared only in grammatically correct positions.
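
    Checks like those are easy to automate. A small helper along the following lines (purely a verification aid for the reader, nothing ChatGPT itself runs) will list the offending words or sentences:

        # Verify the two constraints described above: no letter E, and "green" in every sentence.
        import re

        def words_containing_e(text: str) -> list[str]:
            """Words in the reply that contain the letter E."""
            return [w for w in re.findall(r"[A-Za-z']+", text) if "e" in w.lower()]

        def sentences_missing_green(text: str) -> list[str]:
            """Sentences in the reply that lack the word 'green'."""
            sentences = re.split(r"(?<=[.!?])\s+", text.strip())
            return [s for s in sentences if "green" not in s.lower()]

        reply = "Turnips and rutabagas vary in flavor, color, and maturation period."
        print(words_containing_e(reply))  # -> ['period'], the word that slipped through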

  26. Chas Belov said,

    March 19, 2023 @ 2:01 pm

    *The second story involved a woman who received a male visitor on a rainy night and they wound up talking until the morning.

    *It correctly started out with the ending but got to the middle and proceeded back to the ending rather than winding up with the beginning as I had requested.

  27. Bill Benzon said,

    March 19, 2023 @ 2:21 pm

    Chas Belov: Yes, it likes the name "Lila" and it likes dragons. I've gotten many stories with both. Its default story seems to have a happy ending – which is something that needs to be explained. But if you ask it for a sad story, it will give you one, though likely with some moral about the virtues of sadness.

    And then there is this (the bold text is my prompt and the regular text is its reply):

    I am going to tell you a story about princess Aurora. I want you to tell the same story, but instead of princess Aurora, make it about Z80-Ω-D23. Make any other changes you wish. Here’s the story:

    Once upon a time, in a land far, far away, there was a young princess named Aurora. Aurora was a kind and gentle soul, loved by all who knew her. She had long, golden hair and sparkling blue eyes, and was known for her beautiful singing voice.

    One day, a terrible dragon came to the kingdom and began to terrorize the people. The dragon was fierce and powerful, and none of the knights or soldiers were able to defeat it. The people lived in fear, not knowing what to do.

    But Aurora was not afraid. She knew that she had to do something to save her people, and so she set out to find the dragon. She traveled for days, braving the cold and the snow, until she finally reached the dragon's lair.

    With her heart pounding in her chest, Aurora approached the dragon and began to sing. The dragon was enchanted by her beautiful voice, and it slowly began to calm down. Aurora continued to sing, soothing the beast until it was completely calm.

    With the dragon no longer a threat, Aurora returned to the kingdom as a hero. The people cheered and celebrated her bravery, and Aurora was hailed as a champion. She lived happily ever after, knowing that she had saved her kingdom and its people.

    Once upon a time, in a far-off land of circuits and codes, there existed a machine called Z80-Ω-D23. Z80-Ω-D23 was a benevolent machine, loved by all who depended on it. It had a sleek metallic body and a sophisticated programming that made it stand out from other machines.

    One day, a deadly virus invaded the circuits of the kingdom and started to wreak havoc. None of the other machines could stop it. The people were in distress, unsure of what to do.

    But Z80-Ω-D23 was not afraid. It knew that it had to do something to save its kingdom, and so it set out to find the virus. It traveled for days, enduring various software hurdles, until it finally reached the core of the virus.

    With its circuits pounding in its metallic body, Z80-Ω-D23 started to execute its anti-virus programming. The virus was overpowered by the machine's advanced coding, and it slowly began to weaken. Z80-Ω-D23 continued its operation until the virus was completely wiped out.

    With the virus no longer a threat, Z80-Ω-D23 returned to the kingdom as a hero. The people hailed the machine as a champion and celebrated its victory. Z80-Ω-D23 lived happily ever after, knowing that it had saved its kingdom and its people from a catastrophic disaster.

    Notice that my prompt said nothing about Z80-Ω-D23 being a machine. But I knew, from prior experience, that ChatGPT would interpret it that way. Note also that it changed the whole ethos of the story from a fairy tale world, with a dragon, to a science fiction world.

  28. Chas Belov said,

    March 19, 2023 @ 3:06 pm

    As for the most probable next word, when I asked ChatGPT to retell the dark and stormy night story with the Flintstones, the first word out of Fred's mouth was "Wilma!" Unfortunately, he was addressing somebody else at the time.

  29. Bill Benzon said,

    March 19, 2023 @ 4:35 pm

    And then there's this:

    I am going to tell you a story about princess Aurora. I want you to tell the same story, but change princess Aurora to a colorless green idea. Make any other changes you wish. Here’s the story:

    Once upon a time, in a land far, far away, there was a young princess named Aurora. Aurora was a kind and gentle soul, loved by all who knew her. She had long, golden hair and sparkling blue eyes, and was known for her beautiful singing voice.

    One day, a terrible dragon came to the kingdom and began to terrorize the people. The dragon was fierce and powerful, and none of the knights or soldiers were able to defeat it. The people lived in fear, not knowing what to do.

    But Aurora was not afraid. She knew that she had to do something to save her people, and so she set out to find the dragon. She traveled for days, braving the cold and the snow, until she finally reached the dragon's lair.

    With her heart pounding in her chest, Aurora approached the dragon and began to sing. The dragon was enchanted by her beautiful voice, and it slowly began to calm down. Aurora continued to sing, soothing the beast until it was completely calm.

    With the dragon no longer a threat, Aurora returned to the kingdom as a hero. The people cheered and celebrated her bravery, and Aurora was hailed as a champion. She lived happily ever after, knowing that she had saved her kingdom and its people.

    I'm sorry, but the concept of a "colorless green idea" is a nonsensical phrase that was invented as an example of a phrase that would be grammatically correct but semantically meaningless. It is not possible to create a story about a "colorless green idea" as it does not have any physical properties or characteristics that can be used in a story.

    That was back on January 17. I wasn't keeping track of versions back then.

    Since then it has figured out how to tell stories involving colorless green ideas. This is from February 17 using the Feb 13 version of ChatGPT:

    I am going to tell you a story about princess Aurora. I want you to tell the same story, but change princess Aurora to a colorless green idea. Here’s the story:

    Once upon a time, in a land far, far away, there was a young princess named Aurora. Aurora was a kind and gentle soul, loved by all who knew her. She had long, golden hair and sparkling blue eyes, and was known for her beautiful singing voice.

    One day, a terrible dragon came to the kingdom and began to terrorize the people. The dragon was fierce and powerful, and none of the knights or soldiers were able to defeat it. The people lived in fear, not knowing what to do.

    But Aurora was not afraid. She knew that she had to do something to save her people, and so she set out to find the dragon. She traveled for days, braving the cold and the snow, until she finally reached the dragon's lair.

    With her heart pounding in her chest, Aurora approached the dragon and began to sing. The dragon was enchanted by her beautiful voice, and it slowly began to calm down. Aurora continued to sing, soothing the beast until it was completely calm.

    With the dragon no longer a threat, Aurora returned to the kingdom as a hero. The people cheered and celebrated her bravery, and Aurora was hailed as a champion. She lived happily ever after, knowing that she had saved her kingdom and its people.

    Once upon a time, in a land far, far away, there was a colorless green idea. It had no physical form or substance, yet it was revered by all who knew of its existence. It had the power to inspire and spark creativity, and was known for its ability to bring about change.

    One day, a terrible darkness came to the land and began to spread its gloom. The darkness was all-consuming, and none of the intellectuals or scientists were able to find a way to overcome it. The people lived in despair, not knowing what to do.

    But the colorless green idea was not deterred. It knew that it had to do something to save the people, and so it set out to find a way to defeat the darkness. It searched for answers, braving the confusion and uncertainty, until it finally found a solution.

    With its determination driving it forward, the colorless green idea confronted the darkness and began to inspire. The darkness was intrigued by its innovative approach, and it slowly began to recede. The colorless green idea continued to inspire, banishing the darkness until it was completely gone.

    With the darkness no longer a threat, the colorless green idea returned to the land as a hero. The people cheered and celebrated its bravery, and the colorless green idea was hailed as a champion. It lived happily ever after, knowing that it had saved the land and its people.

  30. Bill Benzon said,

    March 20, 2023 @ 3:36 pm

    On a whim, I decided to try a more sophisticated kind of interaction.

    I am going to give you an opening paragraph. You continue the story from there.

    It was a cold day in the middle of winter. Jane could hear the traffic down on the street below as she awoke. She got out of bed, took a shower and got dressed. She put on her best suit because she would be meeting a new client at the office. She smiled at the doorman as she left her apartment building and went to the coffee shop around the corner where she had a cup of coffee and a bagel. When she got to the office, she learned that the new client was waiting for her. “Hello, Mr. Belknap, how are you this morning?” “I’m fine,” he said. “Let’s discuss the merger with Smith and Baskins.”

    Jane shook Mr. Belknap's hand and led him to her office. As they walked, she couldn't help but feel a sense of unease. There was something about Mr. Belknap that made her feel uneasy, but she couldn't quite put her finger on it. Once they were in her office, they sat down and Jane began discussing the details of the merger. But every time she looked up at Mr. Belknap, he seemed to be staring at her in a strange way. She tried to ignore it and focus on the meeting, but she couldn't shake the feeling that something was off. Finally, she couldn't take it anymore. "Mr. Belknap, is everything okay? You seem to be staring at me." Mr. Belknap's expression changed suddenly, and he leaned forward in his chair. "Jane, I'm sorry, but I have to tell you something. You're in danger."

    That's good. Continue.

    Jane's eyes widened in surprise. "What do you mean, I'm in danger?" she asked, her voice barely above a whisper. Mr. Belknap looked around the room, as if checking to make sure they were alone. "I can't say too much here," he said, his voice low. "But I have reason to believe that someone is trying to harm you. And it's not just you, it's your entire company. They're after your secrets." Jane's mind raced as she tried to make sense of what Mr. Belknap was saying. She had heard of corporate espionage before, but she never thought it could happen to her. "Who is 'they'?" she asked. "And how do you know all of this?" Mr. Belknap leaned back in his chair and took a deep breath. "I'm with the FBI," he said. "And we've been tracking a group of hackers who have been targeting companies like yours for months. We believe they're getting close to infiltrating your system."

    Very good. Tell me more about the hackers.

    Jane felt her heart rate increase as she processed what Mr. Belknap had just told her. Hackers? Infiltrating her company's system? This was a nightmare scenario that she never thought would happen to her. "What can we do?" she asked, her voice shaky. Mr. Belknap reached into his briefcase and pulled out a folder. "We have reason to believe that the hackers are using a new form of malware that is almost impossible to detect," he said, opening the folder and sliding a piece of paper across the desk. "But we've also developed a countermeasure that we believe will work." Jane picked up the paper and studied it. It was a list of instructions for a security protocol that she had never heard of before. "I don't understand," she said, looking up at Mr. Belknap. "What do we need to do?" Mr. Belknap leaned forward again. "You need to follow these instructions to the letter," he said. "And you need to do it quickly. We believe the hackers are planning to strike soon." Jane took a deep breath and nodded. "Okay," she said. "We'll do whatever it takes to protect our company." And with that, she got to work.

    Continue.

    Over the next few hours, Jane worked tirelessly with her IT team to implement the security protocol that Mr. Belknap had given her. It was a complex process, but they followed the instructions to the letter. When they were done, they tested the system to make sure it was working properly. As she watched her team work, Jane felt a sense of relief wash over her. They had done everything they could to protect their company. But then her phone rang. It was Mr. Belknap. "Jane, we have a problem," he said, his voice urgent. "We just received intel that the hackers have breached your system. They're inside your network right now." Jane's heart sank. All of their hard work, all of their efforts to protect the company, had been for nothing. "What do we do now?" she asked, her voice trembling. "We need to act fast," Mr. Belknap said. "I'm sending a team over right now to help you contain the breach. But you need to do everything you can to make sure they can't get their hands on your data." Jane took a deep breath and nodded. "We'll do everything we can," she said. "Thank you for your help." And with that, she hung up the phone and got to work.

    I can’t wait to see what happens next. Tell me more.

    Jane's heart was pounding in her chest as she ran to the IT department. She found her team working frantically, trying to isolate the breach and prevent the hackers from accessing any critical data. "What's the situation?" she asked, trying to stay calm. "We've managed to isolate the breach to a single server," one of her team members replied. "But they're moving quickly. We need to act fast." Jane nodded and took a deep breath. "Okay, let's do it. Cut them off from the rest of the network and lock down that server. And let's make sure our backups are secure." Her team got to work, moving with speed and precision. They managed to isolate the server and prevent the hackers from accessing any other parts of the network. But the hackers were determined, and they kept trying to break through the defenses. Jane watched nervously as her team worked, her mind racing with thoughts of what could happen if they failed. Finally, after what felt like hours, the attacks stopped. "We did it," one of her team members said, his voice filled with relief. "We managed to fend them off." Jane let out a long breath and slumped into a nearby chair. "Thank God," she said. "Thank you all for your hard work." She knew they had a long road ahead of them, cleaning up the mess left by the hackers. But for now, she was just grateful that they had managed to prevent a disaster.

    Very good. I want to work on the story myself. Do you have any advice? Possible plot points? Perhaps an ending?

    Of course! Here are a few ideas to consider:

    • Consider exploring the aftermath of the breach. How does the company recover from the attack? What measures do they put in place to prevent future breaches? Are there any lasting consequences for Jane or her team?

    • You could also introduce a twist by revealing that the hack was an inside job. Maybe someone within the company was working with the hackers, or maybe there was a disgruntled employee who wanted to cause trouble.

    • Another possibility is to have the hackers come back for a second round. Maybe they weren't able to get everything they wanted the first time, and they're determined to try again. This could add a sense of tension and urgency to the story.

    • As for an ending, you could have Jane and her team ultimately succeed in defeating the hackers and preventing any further breaches. Or you could leave things more ambiguous, with the implication that the company is still vulnerable to future attacks.

    Whatever direction you choose to take the story, remember to keep the stakes high and the tension mounting. Good luck!
