
Forum: Author Hangout

Why AI writing is so generic, boring, and dangerous

Michael Loucks 🚫

Semantic ablation. [The Register]

Semantic ablation is the algorithmic erosion of high-entropy information. Technically, it is not a "bug" but a structural byproduct of greedy decoding and RLHF (reinforcement learning from human feedback).

During "refinement," the model gravitates toward the center of the Gaussian distribution, discarding "tail" data – the rare, precise, and complex tokens – to maximize statistical probability. Developers have exacerbated this through aggressive "safety" and "helpfulness" tuning, which deliberately penalizes unconventional linguistic friction. It is a silent, unauthorized amputation of intent, where the pursuit of low-perplexity output results in the total destruction of unique signal.
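The "greedy decoding" mechanism described above can be sketched with a toy example. This is purely illustrative — the words and probabilities below are invented, not taken from any real model — but it shows why an always-pick-the-most-likely-token strategy can never surface the rare "tail" choices:

```python
# Toy illustration (invented vocabulary and numbers): how greedy
# decoding discards low-probability "tail" tokens.

next_token_probs = {
    "said": 0.40,          # common, "safe" choice
    "replied": 0.25,
    "answered": 0.20,
    "interjected": 0.10,   # rarer, more specific
    "expostulated": 0.05,  # rare "tail" token
}

# Greedy decoding: always take the argmax of the distribution.
greedy_choice = max(next_token_probs, key=next_token_probs.get)
print(greedy_choice)  # "said" -- the tail tokens can never be chosen
```

However many times this is run, "expostulated" never appears; a sampling strategy could pick it occasionally, but greedy decoding collapses the whole distribution to its mode.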

When an author uses AI for "polishing" a draft, they are not seeing improvement; they are witnessing semantic ablation.

I've seen this with Grammarly (and other writing software), and it's why I turn off anything other than grammar/spelling/punctuation suggestions.

So, not just slop, but boring slop.

Switch Blayde 🚫

@Michael Loucks

I asked ChatGPT for a simpler explanation of what that guy wrote. This is what it gave me:

The author is saying that when AI "refines" or "polishes" writing, it often removes the most unique, complex, or unusual parts.

AI systems are trained to predict the most likely next word. Because of that, they naturally prefer common, safe, and average phrasing. Rare words, unusual structures, or very specific ideas are statistically less common, so the model tends to replace them with more typical language.

The term "semantic ablation" here means stripping away meaning. The claim is that AI doesn't just fix grammar — it can smooth out the distinctive parts of writing. Over time, the text becomes more generic and centered around what is most probable rather than what is most original.

The reference to "greedy decoding" means the AI usually picks the most likely next word.
"RLHF" (reinforcement learning from human feedback) means humans trained the model to give answers that feel helpful and safe. That training can further encourage conventional, non-controversial language.

When the author talks about "discarding tail data," they mean removing rare or complex wording in favor of more common phrasing.

So the final claim is this:
When someone uses AI to "polish" their draft, they may think the writing improved — but according to this argument, the AI might actually be reducing originality and complexity, making the writing more generic.

In short: the passage argues that AI tends to smooth out uniqueness in order to sound clearer, safer, and more statistically typical.
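The "discarding tail data" idea ChatGPT paraphrases above has a concrete counterpart in a common sampling setting, top-p (nucleus) truncation, which keeps only the most probable tokens until their combined probability reaches a threshold. A toy sketch, with an invented vocabulary and made-up numbers:

```python
# Toy sketch of top-p (nucleus) truncation: keep the smallest set of
# top tokens whose cumulative probability reaches p; everything rarer
# is cut from consideration. All values here are invented.

def nucleus(probs: dict[str, float], p: float) -> list[str]:
    """Return the top tokens whose cumulative probability mass >= p."""
    kept, total = [], 0.0
    for token, prob in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept.append(token)
        total += prob
        if total >= p:
            break
    return kept

probs = {"said": 0.40, "replied": 0.25, "answered": 0.20,
         "interjected": 0.10, "expostulated": 0.05}

print(nucleus(probs, p=0.8))
# ['said', 'replied', 'answered'] -- the rarer words are cut entirely
```

With p=0.8 only the three most common words survive; the distinctive options are literally removed from the candidate list before anything is sampled.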

Switch Blayde 🚫

@Switch Blayde

Then I asked ChatGPT if it agreed. This is the AI's answer:

Partly — but not as dramatically as the original passage suggests.

It's true that AI systems are trained to produce high-probability, broadly acceptable language. That does create a pull toward clarity, convention, and smoother phrasing. If someone asks for "polishing" without guidance, the model may simplify, standardize tone, and reduce unusual phrasing. In that sense, yes: distinctive edges can get softened.

But I don't agree that this is an unavoidable "destruction of unique signal."

A few important nuances:

1. It's not inherently erosive.
The model doesn't have to remove complexity or voice. If prompted to preserve tone, eccentricity, or technical precision, it can do so. The "averaging" effect often happens when the goal is vague (e.g., "make this better").

2. Refinement and erasure aren't the same.
Sometimes clarity really is improvement. Removing ambiguity, redundancy, or accidental obscurity can strengthen a piece without harming its originality.

3. User intent matters a lot.
If someone hands over a draft and asks for generic smoothing, they're effectively instructing the system to optimize for conventional readability. If they instead say, "Keep the strange metaphors and sharp tone; just fix grammar," the outcome is very different.

4. Creative loss is possible — but not guaranteed.
AI does have a bias toward coherence and common patterns. If the writer doesn't actively guard their voice, the output can feel blander. But that's a tooling issue, not an inevitable structural amputation.

So I'd say:
There is a real tendency toward standardization, but calling it "semantic ablation" overstates the case. AI can flatten writing — but it can also sharpen it — depending on how it's used.

Michael Loucks 🚫

@Switch Blayde

I asked Grok…

Analysis of Conclusions

The core conclusions are:

• AI writing is boring: Strongly supported. LLMs are statistical compressors of internet-scale data, so outputs regress to the mean. Human writing thrives on deviation—idiosyncrasy, friction, surprise. Ablation strips that, yielding "safe but soulless" prose. This isn't new (critics noted it in 2023), but the article crystallizes why.

• AI writing is dangerous: More provocative, with merit but nuance needed.

• Cultural risk: A "race to the middle" could homogenize discourse. If AI polishes everything, original voices erode, echoing concerns about social media echo chambers but at the thought level. In academia, journalism, or creative fields, this amplifies "entropy decay" in collective knowledge.

• Cognitive atrophy: Over-reliance dulls skills. Writing is thinking; outsourcing it risks what the article calls "intellectually void output." Parallels to calculators weakening mental math, but scaled to ideas.

• Hyperbole check: "Dangerous" implies existential threat, like misinformation or job loss. Here, it's subtler: insidious cultural flattening. Not apocalyptic, but worth heeding—especially as AI agents automate more "writing" tasks.

Looks like Grok believes the author is on target with regard to AI writing being boring.

Michael Loucks 🚫

@Switch Blayde

I also asked Gemini. Highlights…

• The "JPEG of Thought" Analogy: This is arguably the article's strongest point. Just as a JPEG image looks "fine" but has lost 90% of its original data density through lossy compression, AI-refined text looks "clean" but has lost the "blood" and "friction" of the original thought.

• Framing a Subtle Problem: The article successfully names a phenomenon many users feel—the "uncanny valley" of AI writing—and provides a technical framework (entropy decay) to measure it.

• Challenging "Optimization": It correctly identifies that "optimizing for helpfulness" often means "optimizing for the least common denominator."

Final Verdict

Nastruzzi's article is a timely warning. It suggests that while we were worried about AI lying to us (hallucinations), we missed the fact that AI is quietly lobotomizing us (semantic ablation). The conclusion is clear: If we use AI to "polish" our thoughts, we must be aware that the polish is actually an abrasive that may be removing the most valuable parts of our message.


Switch Blayde 🚫

@Michael Loucks

Nastruzzi's article is

I didn't give the article to ChatGPT to review. I gave it the part you posted here.

I actually asked for a simpler explanation. And then I asked if it agreed.

Vincent Berg 🚫

@Switch Blayde

Just like AI, to give you two mutually contradictory quotes about its own ability to do anything of value. Now, if that isn't telling, then you haven't been doing nearly enough reading. ;)

Pixy 🚫

@Switch Blayde

I find it darkly humorous, that an AI explanation (as provided by SB) was needed by me, to explain WTF the original post was saying.

Michael Loucks 🚫

@Pixy

I find it darkly humorous, that an AI explanation (as provided by SB) was needed by me, to explain WTF the original post was saying.

It was overly technical, but I'm steeped in that kind of language. Whether that's good or bad is left as an exercise for the reader. 😜

ystokes 🚫

@Michael Loucks

To my knowledge I have not read any AI stories yet.

Michael Loucks 🚫

@ystokes

To my knowledge I have not read any AI stories yet.

I have them filtered on the homepage and avoid them like the plague.

I've read some, but only as a result of feeding my work into an AI and asking it to create scenes and chapters as an experiment. The results are unsurprisingly disappointing.

Ditto for asking it to refine/polish my work. It loses ALL my style and loves to replace diverse words with 'common' ones. It's horrid, to say the least.

Of course, I'm certain some people think that of my original writing, but to each their own!
