Everything's Fine


Eve Armstrong's latest — "Everything's Fine", arXiv.org 3/29/2024:

I investigate the peculiar situation in which I find myself healthy and strong, with a darling family, stimulating job, top-notch dental plan, and living far from active war and wildfire zones — yet perpetually ill at ease and prone to sudden-onset exasperation when absolutely nothing has happened. My triggers include dinner parties, chairs, therapists, and shopping at Costco. In analysing this phenomenon, I consider epigenetics, the neuroscience of neuroticism, and possible environmental factors such as NSF grant budgets. Yet no obvious solution emerges. Fortunately, my affliction isn't really all that serious. In fact, it's good writing material. So while I'm open to better ideas, I figure I'll just continue being like this.

The experimental HTML version gives us a sequence of alternative titles:

"What’s Chasing Me?"
"Why Am I Running When There’s Nothing Chasing Me?"
"What Am I Running From?"
"What Am I So Mad About?"
"Why Am I So Mad When Nothing’s Wrong?"
"Why Do Things Upset Me For No Reason?"
"What Am I So Irritated About?"
"Why Am I So Edgy For No Reason?"
"Why Am I So Mad When Nothing Happened?"
"Why Am I So Edgy When Nothing’s the Matter?"
"I’m okay."
"Everything’s Fine."

Eve has been producing relevant papers at this time of year for a while — my favorite is still "A Neural Networks Approach to Predicting How Things Might Have Turned Out Had I Mustered the Nerve to Ask Barry Cottonfield to the Junior Prom Back in 1997", 2017:

We use a feed-forward artificial neural network with back-propagation through a single hidden layer to predict Barry Cottonfield's likely reply to this author's invitation to the "Once Upon a Daydream" junior prom at the Conard High School gymnasium back in 1997. To examine the network's ability to generalize to such a situation beyond specific training scenarios, we use an L2 regularization term in the cost function and examine performance over a range of regularization strengths. In addition, we examine the nonsensical decision-making strategies that emerge in Barry at times when he has recently engaged in a fight with his annoying kid sister Janice. To simulate Barry's inability to learn efficiently from large mistakes (an observation well documented by his algebra teacher during sophomore year), we choose a simple quadratic form for the cost function, so that the weight update magnitude is not necessarily correlated with the magnitude of output error.
Network performance on test data indicates that this author would have received an 87.2 (1)% chance of "Yes" given a particular set of environmental input parameters. Most critically, the optimal method of question delivery is found to be Secret Note rather than Verbal Speech. There also exists mild evidence that wearing a burgundy mini-dress might have helped. The network performs comparably for all values of regularization strength, which suggests that the nature of noise in a high school hallway during passing time does not affect much of anything. We comment on possible biases inherent in the output, implications regarding the functionality of a real biological network, and future directions. Over-training is also discussed, although the linear algebra teacher assures us that in Barry's case this is not possible.
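For the curious, here is a minimal sketch (mine, not Armstrong's) of the kind of setup the abstract describes: a single-hidden-layer feed-forward network trained by back-propagation on a quadratic cost with an L2 regularization term. The layer sizes, the "environmental input parameters", and the P(Yes) readout are all invented for illustration; the comment at the update step notes why a quadratic cost can decouple the update magnitude from the output error, which is the property the abstract jokes about.

    # Hypothetical sketch, not code from the paper: one hidden layer,
    # back-propagation, quadratic cost with an L2 penalty.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_prime(z):
        s = sigmoid(z)
        return s * (1.0 - s)

    # Invented dimensions: a few "environmental input parameters" in,
    # one hidden layer, a single yes/no output.
    n_in, n_hidden, n_out = 5, 8, 1
    W1 = rng.normal(0, 1, (n_hidden, n_in))
    b1 = np.zeros((n_hidden, 1))
    W2 = rng.normal(0, 1, (n_out, n_hidden))
    b2 = np.zeros((n_out, 1))

    def forward(x):
        z1 = W1 @ x + b1
        a1 = sigmoid(z1)
        z2 = W2 @ a1 + b2
        a2 = sigmoid(z2)
        return z1, a1, z2, a2

    def train_step(x, y, eta=0.5, lam=0.01, n=1):
        """One back-propagation update on the regularized quadratic cost
        C = 0.5*||a2 - y||^2 + (lam/2n)*(||W1||^2 + ||W2||^2)."""
        global W1, b1, W2, b2
        z1, a1, z2, a2 = forward(x)
        # With a quadratic cost the output-layer delta carries a
        # sigmoid_prime(z2) factor, so the update magnitude is not
        # necessarily correlated with the raw error (a2 - y): a
        # saturated unit learns slowly even when it is badly wrong.
        delta2 = (a2 - y) * sigmoid_prime(z2)
        delta1 = (W2.T @ delta2) * sigmoid_prime(z1)
        W2 -= eta * (delta2 @ a1.T + (lam / n) * W2)
        b2 -= eta * delta2
        W1 -= eta * (delta1 @ x.T + (lam / n) * W1)
        b1 -= eta * delta1
        return a2

    # Invented inputs and a "Yes" label, purely for illustration:
    x = rng.random((n_in, 1))
    y = np.array([[1.0]])
    for _ in range(1000):
        out = train_step(x, y)
    print(f"P(Yes) ~ {out.item():.3f}")  # approaches 1 on this toy example

Swapping the quadratic cost for cross-entropy would cancel the sigmoid_prime(z2) factor in the output delta and restore error-proportional updates, i.e. the efficient learning from large mistakes that Barry evidently lacks.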

Some earlier LLOG posts discussing her work:

"Advances in birdsong modeling", 4/1/2017
"A dynamical systems approach to the game of Clue", 4/1/2018
"GFOOEOPQ", 4/1/2020
"Case studies of Peer Review", 4/1/2022
"'An exercise in inference sabotage'", 3/31/2023

For a list that also includes her HEP-relevant papers, see here.

3 Comments

  1. ohwilleke said,

    April 1, 2024 @ 2:46 pm

    Another great linguistic April 1 paper from arXiv: https://arxiv.org/abs/2403.20302

    "I'm in AGNi: A new standard for AGN pluralisation
    Andrew D. Gow, Peter Clark, Dan Rycanowski
    We present a new standard acronym for Active Galactic Nuclei, finally settling the argument of AGN vs. AGNs. Our new standard is not only etymologically superior (following the consensus set by SNe), but also boasts other linguistic opportunities, connecting strongly with relevant theology and streamlining descriptions of AGN properties.
    Comments: 4 pages, 3 figures, accepted for publication in Acta Prima Aprilia"

  2. Jason said,

    April 4, 2024 @ 3:28 am

So the twist is that this *isn't* the result of an AI prompt to "Write a psychological paper in the style of a vapid Cosmopolitan.com lifestyle columnist", but the work of a real person?

    I'm sorry, I was *convinced* this was AI generated! I thought ChatGPT 4 was getting worrisomely smart.

  3. Edith said,

    April 5, 2024 @ 6:13 am

    Hot news: everything IS fine:

    https://www.youtube.com/watch?v=smoBjnTeSp4
