The Elephant in the Feed
On undisclosed AI use, the disclosure penalty, and why transparency still wins
Writing extensively with AI assistance teaches you things that are difficult to learn any other way. After more than a year of deliberate, transparent, and carefully documented collaboration with large language models, I have developed what I can only describe as a finely tuned radar for AI-assisted prose. I have previously written at length about the telltale signs of AI-generated text, and the patterns become unmistakable once you know what to look for. They are probabilistic fingerprints of how large language models construct text. And like all fingerprints, they go unnoticed until you learn how to find them.
I notice these patterns constantly now. But what strikes me most is where I notice them: everywhere. When I scroll through my Substack feed or browse thought-leadership posts on LinkedIn, most of the content I encounter on these platforms carries at least some trace of algorithmic involvement.
Before I continue, I want to be clear about something. I have zero moral objections to this practice. To object would be deeply hypocritical. I use AI in my own writing process. I have described that process in detail across multiple essays, and I believe these tools represent a genuinely useful evolution in how we produce text. My issue is not with the use. My issue is with the silence.
Almost nobody discloses it.
The cost of honesty
This silence is not accidental. It is driven by something researchers have identified and empirically validated: the AI disclosure penalty. The term describes a consistent and measurable phenomenon in which content labeled as AI-assisted receives lower evaluations from audiences, regardless of its actual quality. The penalty is real and persistent, and it helps explain why so many creators quietly integrate these tools into their workflows without ever mentioning it.
The empirical evidence is substantial. A series of sixteen preregistered experiments involving over 27,000 participants found that identical texts received systematically lower quality ratings when audiences believed the text had been produced with AI assistance. The penalty held across different evaluation metrics, content types, and experimental conditions. The key mediating factor was perceived authenticity; readers place psychological value on the effort and personal experience they believe went into a text. When AI involvement is disclosed, the perception of authenticity fractures, and the evaluation drops accordingly. A separate study using stylistic rewrites confirmed the depth of this bias: evaluators consistently preferred texts labeled as human-written over those labeled as AI-generated, even when the texts were identical. Interestingly, the AI models themselves, when used as evaluators, exhibited the same preference at 2.5 times the strength of their human counterparts.
Underlying much of this reaction is what psychologists call the effort heuristic: readers assign higher value to work that appears to have required significant time, labor, and skill. Because generative tools dramatically compress the visible effort behind polished prose, the resulting output triggers an instinctive devaluation. The penalty varies by context, but it appears in every domain researchers have examined. In professional settings such as business communications and news media, the damage primarily affects perceived credibility and reliability. In creative contexts like poetry and personal essays, the devaluation centers on expectations of emotional connection and vulnerability. And in academic peer review, disclosure raises doubts about a researcher’s methodological rigor and intellectual contribution.
This penalty creates a punishing asymmetry. Creators who choose transparency bear a measurable credibility cost. Creators who stay silent enjoy the efficiency gains of AI assistance without any of the associated stigma. The system, as it currently operates, rewards concealment and penalizes honesty.
Why the silence persists
Most knowledge workers regularly use generative tools in their workflows, according to survey data, yet only a fraction disclose this to their audiences or employers. The gap between usage and disclosure is enormous.
Professional self-preservation drives much of this behavior. Creators fear that acknowledging AI assistance will trigger perceptions of intellectual laziness or diminished originality. In academic settings, the publish-or-perish culture intensifies the pressure: researchers worry that disclosure will compromise how peer reviewers assess their expertise. One study surveying digital humanities scholars found that while 63% acknowledged disclosure as ethically necessary, only 28% had actually disclosed AI use in their published work. The distance between knowing what is right and doing it is considerable.
And on platforms like Substack and LinkedIn, the incentive structure actively discourages transparency. Audiences reward consistent, polished, high-volume output. Generative tools make that output possible at scale. But disclosing the tools risks triggering the penalty, alienating subscribers, and reducing engagement metrics. Creators find themselves caught in a bind: ethically obligated to be transparent about their methods, economically punished by their audiences for doing so.
There is also a philosophical defense of non-disclosure worth acknowledging honestly. Some scholars argue that authorship has never been defined by process but by responsibility; that demanding disclosure of AI use is philosophically equivalent to demanding disclosure of a spell-checker or a skilled human editor. If the creator takes full responsibility for the final text, the argument goes, the details of its production are irrelevant.
This position has intellectual merit. I take it seriously, even though I ultimately disagree with it. A spell-checker corrects surface errors; a language model can reshape the substance of an argument, introduce claims the writer never considered, and generate prose that the writer could not have produced alone. The degree of contribution matters, and readers have a legitimate interest in knowing about it.
The blurring line between human and machine prose
The disclosure question is further complicated by a phenomenon researchers call stylistic convergence. As writers consume large quantities of AI-generated text and increasingly use these tools for drafting and editing, the characteristic patterns of algorithmic prose are migrating into genuine human writing.
I have previously written about the consistent markers of AI-assisted text. These include overuse of specific vocabulary items, uniform sentence length and structure, formulaic paragraph organization that prioritizes balance over conviction, and broad generalizations where specific lived-experience insights would be more appropriate. Human writing tends to be what linguists call “bursty,” mixing short fragments with long, structurally complex sentences. AI-generated text maintains a more predictable rhythm, producing what reads as a mathematically smoothed version of natural prose.
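To make the "burstiness" point concrete, here is a minimal sketch of how one might quantify it: the variability of sentence lengths, measured as the standard deviation divided by the mean. The function name, the sample texts, and the metric itself are illustrative assumptions on my part, not a description of any published detector or of my own workflow; real stylometric tools use far richer features.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Rough burstiness score: variability of sentence lengths (in words).

    A higher score means sentence lengths swing more widely, which is
    typical of human prose; a lower score means a flatter, more uniform
    rhythm. This is an illustrative heuristic, not a detector.
    """
    # Naive sentence split on ., !, or ? followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = statistics.mean(lengths)
    return statistics.stdev(lengths) / mean if mean else 0.0

# Hypothetical examples: one with a jagged, human-like rhythm,
# one with the even cadence often associated with generated prose.
varied = ("No. I mean it. The argument collapsed the moment anyone pushed "
          "on it, and yet the committee, for reasons nobody has ever "
          "explained to me, signed off anyway.")
uniform = ("The proposal offers several advantages. It improves overall "
           "efficiency in the process. It reduces operational costs "
           "significantly. It enhances stakeholder communication clearly.")

print(f"varied rhythm:  {burstiness(varied):.2f}")
print(f"uniform rhythm: {burstiness(uniform):.2f}")
```

Run on these two snippets, the varied passage scores far higher than the uniform one, which is the intuition behind the marker, even if no single number settles the question of authorship.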
The insidious aspect of this convergence is its self-reinforcing nature. Writers who consume AI-generated content begin to internalize its conventions. Students and professionals subconsciously adopt the hyper-structured, vocabulary-dense style as a model of good writing. Over time, what were once distinctive markers of machine-generated text become absorbed into the baseline of professional communication. This creates a genuine problem for anyone attempting to identify AI involvement: human writing that has been shaped by sustained exposure to AI prose may trigger the same recognition patterns as text that was directly AI-assisted.
I acknowledge this complication fully. It is entirely possible that some patterns I detect in other creators’ work reflect stylistic convergence rather than direct AI involvement. My radar is not infallible, and the line between influence and assistance grows harder to draw with each passing month. Still, the scale of what I observe and the consistency of the patterns make the most straightforward reading of the evidence clear: widespread AI use with minimal disclosure.
Why disclosure remains the right choice
The arguments against mandatory disclosure are not trivial. The penalty is real, and the economic costs are measurable. There are also solid philosophical arguments supporting creative autonomy. And on top of everything, the blurring of stylistic boundaries between human and machine writing complicates attribution.
But none of this changes the fundamental ethical calculation.
Not disclosing AI assistance is functionally equivalent to not disclosing a co-author. It deprives readers of critical context about how the text was produced, what cognitive labor went into it, and what potential limitations it carries. AI-generated text can contain hallucinated facts, embedded biases, and a kind of confident vagueness that masks the absence of genuine expertise. Readers deserve to know when these risks are present so they can calibrate their trust accordingly.
Some will counter that collaboration has always been opaque. Academic publishing tolerates honorary authorship, the practice of listing individuals as authors who made little to no significant contribution to a research paper, at strikingly high rates. Studies have found it in up to 26% of original articles in major medical journals. If we already accept that level of ambiguity in human collaboration, why single out human-machine collaboration for stricter scrutiny? The answer is that the existing opacity is itself the problem, not a license to extend it further. The standard should rise, not sink to its lowest precedent.
The consequences of continued silence extend well beyond individual credibility. Audiences who reflexively dismiss AI-assisted content will keep doing so, unaware that much of what they read and trust was produced with exactly the tools they claim to reject. Creators who disclose will keep bearing a disproportionate cost for their honesty. The information ecosystem, meanwhile, will grow increasingly saturated with undisclosed AI-assisted content whose provenance no reader can verify. That trajectory degrades the basic trust that makes written communication meaningful.
A call for honest practice
I am not calling for elaborate disclosure frameworks or burdensome documentation requirements. I am not suggesting that every use of a spell-checker or grammar tool needs a footnote. The line between minor editing assistance and substantive AI involvement in drafting is admittedly imprecise, and reasonable people will draw it in different places.
What I am calling for is a baseline commitment to honesty. If AI played a meaningful role in producing your text, say so. A brief note at the end of a post, a line on your publication's about page, a transparent description of your workflow; these are minor acts that carry significant ethical weight. They respect your readers' intelligence and their right to evaluate your work with full knowledge of how it was made.
Yes, disclosure currently carries a cost. The research makes that unambiguous. But the cost of universal silence is far greater. Every creator who discloses normalizes the practice and chips away at the stigma that feeds the penalty. Every creator who stays silent reinforces the conditions that make honesty expensive.
The disclosure penalty will not disappear overnight. It will diminish only as audiences develop a more mature understanding of what AI-assisted writing actually involves, and that understanding depends entirely on creators being willing to have the conversation openly. We cannot ask readers to move past their biases if we refuse to give them the information they need to do so.
I have incurred an actual cost for my own transparency. Some readers have told me they stopped reading my work the moment they learned I use AI in my writing process. That stings, and I understand the impulse behind it. But I would rather lose those readers honestly than keep them through omission. The integrity of the work depends on it. So does the integrity of the broader conversation about what it means to write honestly when the tools themselves are changing what writing looks like.
If you use AI to write, disclose it. The penalty is temporary. The principle is not.
The images in this article were generated with Nano Banana 2.
P.S. I believe transparency builds the trust that AI detection systems fail to enforce. That’s why I’ve published an ethics and AI disclosure statement, which outlines how I integrate AI tools into my intellectual work.