The Witch's Mark
The AI Slop Police and the Unfalsifiable Accusation
This post follows my standard early access schedule: paid subscribers today, free for everyone on May 5.
In February 2026, the YouTuber Frankie’s Shelf published a video essay, more than two and a half hours long, arguing that the horror novel Shy Girl by Mia Ballard had likely been created by artificial intelligence.
This video was not an isolated piece of content. It was one voice in a chorus that had been building since early 2026. Readers on Goodreads and Reddit had been forensically dissecting individual passages of Ballard’s novel, identifying what they took to be the fingerprints of machine authorship: flat sentence rhythms, uniform vocabulary, overuse of em-dashes, and the particular brand of structural tidiness associated with generative text.
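To make concrete what those fingerprints amount to, here is a minimal Python sketch of crude versions of the three most cited signals. The function name and sample text are my own illustrations, not anything the Goodreads sleuths actually ran; the point is only that these statistics are trivial to compute and carry far less evidential weight than their users assume.

```python
import re
import statistics

def stylometric_fingerprints(text: str) -> dict:
    """Crude versions of the 'tells' readers cited: flat sentence
    rhythm, uniform vocabulary, and em-dash overuse. Illustrative
    only; none of these reliably separates human from machine prose."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        # Low variance in sentence length reads as "flat rhythm".
        "sentence_length_stdev": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
        # Type-token ratio: lower values suggest "uniform vocabulary".
        "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
        # Em-dashes per 1,000 words, the most cited "fingerprint".
        "em_dashes_per_1k_words": 1000 * text.count("\u2014") / len(words) if words else 0.0,
    }

sample = "The house was quiet\u2014too quiet. She waited. Nothing moved\u2014nothing at all."
print(stylometric_fingerprints(sample))
```

Plenty of human writers score "suspiciously" on all three measures, which is precisely the problem with treating them as evidence.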
The stakes were substantial. Ballard’s self-published novel had been picked up by Orbit, a prestigious imprint of Hachette, after it had amassed nearly five thousand ratings on Goodreads. It was released in the United Kingdom in November 2025 and was scheduled for a major American launch in April 2026.
By March 2026, after a coordinated campaign of denunciation amplified by social media, Hachette retreated. The American launch was canceled. The British edition was discontinued, and the book was pulled from Amazon globally. Ballard denied writing the novel with AI and attributed any algorithmic residue to an editor she had hired during the self-publishing phase. She told the New York Times that her mental health had reached an “all-time low” and that her professional name had been ruined “for something she didn’t even personally do.”
What frustrates me is that the countless readers who believed they could sense the hand of a large language model in Ballard’s prose were almost certainly wrong about their ability to sense any such thing. But the accuracy of their detection ultimately did not matter. The accusation alone was sufficient to destroy a book’s commercial life. What we are seeing is a digital enforcement apparatus built on a perceptual claim that the empirical evidence does not support, and we are giving it the power of commercial excommunication.
Some call this enforcement apparatus the “AI Slop Police.”
The case for vigilance
I need to acknowledge that the underlying anxiety driving this movement is not irrational. Generative AI has done real harm to creative ecosystems, and both creative and scholarly integrity are under genuine strain. Artists have watched their labor compete against tools trained, in part, on their own uncompensated work. And publishers have every reason to fear the flood of synthetic manuscripts hitting their inboxes.
This is not limited to written or visual content. The music streaming platform Deezer reported in late 2025 that roughly fifty thousand fully AI-generated tracks were being uploaded to its servers each day, up from ten thousand at the start of that year, accounting for about 34 percent of all daily uploads to the platform. Readers, viewers, and listeners have a legitimate interest in knowing whether the cultural artifacts they consume originated in a human mind. The desire for authenticity is not the problem. The problem is the mechanism we have improvised to enforce it.
The ghosts of plagiarism hunters past
The contemporary hunt for AI content repeats a historical pattern. In the early 1990s, two researchers at the National Institutes of Health, Walter Stewart and Ned Feder, built an automated plagiarism detection system and turned it on the work of established scholars. They later submitted their algorithmic findings to the American Historical Association, targeting above all the historian Stephen B. Oates, alleging that his 1977 biography of Abraham Lincoln, With Malice Toward None, had lifted passages from Benjamin P. Thomas’s 1952 work on the same subject.