Super Fakehuman grammar everything advice


Grammarly recently became part of Superhuman, and then began the shockingly unethical practice of pretending to offer writing advice from living people, without getting their permission or even informing them.

Some coverage:

"Grammarly Is Offering ‘Expert’ AI Reviews From Your Favorite Authors—Dead or Alive", Wired 3/4/2026:

Once relied upon only to proofread for correct grammar and spelling, the writing tool Grammarly has added a host of generative AI features over the past several years. In October, CEO Shishir Mehrotra announced that the overall company was rebranding as Superhuman to reflect a new suite of AI-powered products. […]

Perhaps most insidiously, however, Grammarly now has an “expert review” option that, instead of producing what looks like a generic critique from a nameless LLM, lists a number of real academics and authors available to weigh in on your text. To be clear: Those people have nothing to do with this process. As a disclaimer clarifies: “References to experts in this product are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities.”

Stevie Bonifield, "Grammarly is using our identities without permission", The Verge 3/6/2026:

Grammarly’s “expert review” feature offers to give users writing advice “inspired by” subject matter experts, including recently deceased professors, as Wired reported on Wednesday. When I tried the feature out myself, I found some experts that came as a surprise for a different reason — one of them was my boss.

Julia Angwin, "Why I’m Suing Grammarly", NYT 3/13/2026:

A few days ago, an awkward sentence written by the editing service Grammarly flashed across my screen: “Could Meta be quietly leveraging this intimate information to refine ad targeting or fuel its vast business interests in unseen ways?”

The writing was clunky, the point weirdly unspecific. Grammarly had been offering paying users editing suggestions, supposedly from a handful of writers — including me. Pop a piece of prose into its service and little editing bubbles would emerge on the page from “Julia Angwin,” suggesting things like, “Lead with personal stakes to boost immediacy.” That sentence about Meta was something Grammarly apparently thought I would suggest.

Like all writers, I live by my wits. My ability to earn a living rests on my ability to craft a phrase, to synthesize an idea, to make readers care about people and places they can only access through words on a page. Grammarly hadn’t checked with me before using my name. I only learned that an A.I. company was selling a deepfake of my mind from an article online.

And it wasn’t just me. Superhuman — the parent company of Grammarly — made fake editor versions of a range of people, including the novelist Stephen King, the late feminist author bell hooks, the former Microsoft chief privacy officer Julie Brill, the University of Virginia data science professor Mar Hicks and the journalist and podcaster Kara Swisher.

Angwin adds:

At this point in a story about A.I. exploitation, I would normally bemoan the need for new laws to tackle the novel harms of a new technology. But in this case, there is an old law that’s able to do the job.

In my home state of New York, the century-old right of publicity law prohibits a person’s name or image from being used for commercial purposes without her consent. At least 25 states have similar publicity statutes. And now, I’m using this law to fight back. I am the lead plaintiff in a class-action lawsuit against Superhuman in the U.S. District Court for the Southern District of New York, alleging that it violated New York and California publicity laws by not seeking consent before using our names in a paid service.

After a wave of criticism, the Superhuman chief executive, Shishir Mehrotra, announced that the company was disabling the feature while it reimagined how to give “experts real control over how they want to be represented — or not represented at all.” In a statement to The Atlantic, Mr. Mehrotra said that the company “believes the legal claims are without merit and will strongly defend against them.”

Grammarly long ago burst the "plastic fetters of grammar" to enfold (however imperfectly) the whole Trivium; and now has d-AI-gested the Quadrivium plus (presumably) philosophy and theology and all the practical arts, as presented by experts and celebrities past and present.

We can see this ironically as a corporate move in favor of open-access humans.

Update — More from Kaitlyn Tiffany at The Atlantic: "What Was Grammarly Thinking?", 3/12/2026:

To my dismay, I was unable to summon the AI version of myself. I pasted in numerous articles I’d written and numerous fake articles that I had asked a chatbot to make up. But Grammarly seemed to think other writers were more expert in these articles’ subject matter and therefore more qualified to advise me. It suggested tech journalists, pop-culture academics, and legendary practitioners of narrative nonfiction. I wouldn’t appear. My boss tried too. He messaged me: “i have both claude and chatgpt writing fake essays in an attempt to fool a different AI into presenting me with an unauthorized simulacrum of one of my writers.” He failed. We both felt bad about the way we were spending our time.



8 Comments »

  1. Mai Kuha said,

    March 14, 2026 @ 10:01 am

    Now I'm wondering also whether guardrails on *non*-commercial uses might be beneficial, and whether they might be feasible. After all, it's not just about "a person's name or image" any more, and it seems disturbing that potentially anyone out there could create a simulacrum of whoever they want. It's worse if they profit from it commercially as well, but ultimately that's not the main reason it's disturbing, right?

  2. Mai Kuha said,

    March 14, 2026 @ 10:02 am

    Also, Google claims no hits on "d-AI-gested" – did you just coin it? Nice!

  3. Mark Liberman said,

    March 14, 2026 @ 10:33 am

    @Mai Kuha:

    The other side of the coin is satire, like this or this.

    As for "d-AI-gested", I wrote "digested" first and then realized that the context suggested a better spelling. It wouldn't be a shock if someone else had thought of the same pun, but I didn't borrow it.

  4. David Marjanović said,

    March 14, 2026 @ 11:30 am

    it seems disturbing that potentially anyone out there could create a simulacrum of whoever they want

    I'm not sure if it's made more or less disturbing by the fact that the simulacra aren't at all convincing. "Boost immediacy"? Really?

  5. J.W. Brewer said,

    March 14, 2026 @ 11:34 am

    Obviously the marketing or user interface for this may well have been so "subtle" in its explanation or disclaimers as to be misleading (and perhaps even intentionally misleading), but I think the wording "inspired by X" is a pretty conventional way to signal "but not actually authorized by X and X certainly isn't getting any royalties." Perhaps that's insider-jargon that the masses don't always understand, of course, but to use a different-but-related example everyone in the tv or movie business (along with hopefully some fraction of viewers) understands that "inspired by a true story" means something different than "based on a true story" and that an "inspired by" work is generally going to correspond less closely to the actual historical details of the referenced "true story" than a "based on" one.

  6. AntC said,

    March 14, 2026 @ 4:19 pm

    @JWB "inspired by" is not the only claim. As Klee says

    To be clear: Those people [authors named] have nothing to do with this process. As a disclaimer clarifies: “References to experts in this product are for informational purposes only …"

    What is it that's 'informational'?: that we have just made this all up? _Is_ that informational? Or more evasive/obfuscatory? Attempting to be exculpatory? IANAL, but they're not merely claiming that the offered advice is 'inspired by' (a panel of) authors-in-general, rather naming specific authors with specific pieces of advice.

  7. J.W. Brewer said,

    March 14, 2026 @ 5:59 pm

    @AntC: Look, these people went to market without first offering to pay me at my usual hourly rate to advise them on how to do it legally. So they may well have screwed it up; that would depend on details I can't confidently figure out from these breathless media accounts.

    I do think that, with some careful attention to details, it should e.g. be perfectly legal to set up a service that will edit customers' draft documents to make them comply with all the bogus prescriptivist rules set forth by Strunk and White w/o obtaining permission from or paying royalties to the heirs of Strunk & White. S&W did not actually "own" their bogus rules in the way one might own a house, or a beat-up Chevy pick-up truck. Such a business could legitimately make appropriate references to Strunk & White in describing the services it provides, although again careful attention to the details of how it did so might be important.

  8. Jon W said,

    March 15, 2026 @ 11:40 am

Grammarly initially offered people the opportunity to opt out of inclusion in the service (a little challenging, since there was no way to check whether they were using your name in the first place). As the OP notes, it has now withdrawn the feature entirely. Without regard to whether it might in theory be possible to structure a feature like this one so that it wasn't actionable, a look at Grammarly's outputs convinces me that there were no lawyers involved in their process, and that the right-of-publicity claim against them was (& is) really straightforward.
