Forum: Author Hangout

Why AI writing is so generic, boring, and dangerous

Michael Loucks 🚫

Semantic ablation. [The Register]

Semantic ablation is the algorithmic erosion of high-entropy information. Technically, it is not a "bug" but a structural byproduct of greedy decoding and RLHF (reinforcement learning from human feedback).

During "refinement," the model gravitates toward the center of the Gaussian distribution, discarding "tail" data – the rare, precise, and complex tokens – to maximize statistical probability. Developers have exacerbated this through aggressive "safety" and "helpfulness" tuning, which deliberately penalizes unconventional linguistic friction. It is a silent, unauthorized amputation of intent, where the pursuit of low-perplexity output results in the total destruction of unique signal.
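A toy sketch of the mechanism being described (made-up probabilities, nothing like any real model's code): greedy decoding always takes the single most probable next token, so the rare "tail" words can never surface no matter how many times you run it, while plain sampling keeps them alive.

```python
import random

# Hypothetical next-token distribution after 'she' (numbers invented for illustration)
probs = {"said": 0.40, "replied": 0.30, "murmured": 0.15,
         "interjected": 0.10, "expostulated": 0.05}

def greedy_pick(dist):
    """Greedy decoding: always take the single most probable token."""
    return max(dist, key=dist.get)

def sample_pick(dist, rng):
    """Plain sampling: tail tokens keep a nonzero chance of appearing."""
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
greedy_runs = {greedy_pick(probs) for _ in range(1000)}
sampled_runs = {sample_pick(probs, rng) for _ in range(1000)}

print(greedy_runs)             # only ever {'said'} - the tail is "ablated"
print(len(sampled_runs) > 1)   # sampling still reaches the rarer words
```

Run it and "expostulated" never appears under greedy decoding: that is the "tail discarding" in miniature.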

When an author uses AI for "polishing" a draft, they are not seeing improvement; they are witnessing semantic ablation.

I've seen this with Grammarly (and other writing software), which is why I turn off anything other than grammar/spelling/punctuation suggestions.

So, not just slop, but boring slop.

Switch Blayde 🚫

@Michael Loucks

I asked ChatGPT for a simpler explanation of what that guy wrote. This is what it gave me:

The author is saying that when AI "refines" or "polishes" writing, it often removes the most unique, complex, or unusual parts.

AI systems are trained to predict the most likely next word. Because of that, they naturally prefer common, safe, and average phrasing. Rare words, unusual structures, or very specific ideas are statistically less common, so the model tends to replace them with more typical language.

The term "semantic ablation" here means stripping away meaning. The claim is that AI doesn't just fix grammar — it can smooth out the distinctive parts of writing. Over time, the text becomes more generic and centered around what is most probable rather than what is most original.

The reference to "greedy decoding" means the AI usually picks the most likely next word.
"RLHF" (reinforcement learning from human feedback) means humans trained the model to give answers that feel helpful and safe. That training can further encourage conventional, non-controversial language.

When the author talks about "discarding tail data," they mean removing rare or complex wording in favor of more common phrasing.

So the final claim is this:
When someone uses AI to "polish" their draft, they may think the writing improved — but according to this argument, the AI might actually be reducing originality and complexity, making the writing more generic.

In short: the passage argues that AI tends to smooth out uniqueness in order to sound clearer, safer, and more statistically typical.

Replies:   Switch Blayde  Pixy
Switch Blayde 🚫
Updated:

@Switch Blayde

Then I asked ChatGPT if it agreed. This is the AI's answer:

Partly — but not as dramatically as the original passage suggests.

It's true that AI systems are trained to produce high-probability, broadly acceptable language. That does create a pull toward clarity, convention, and smoother phrasing. If someone asks for "polishing" without guidance, the model may simplify, standardize tone, and reduce unusual phrasing. In that sense, yes: distinctive edges can get softened.

But I don't agree that this is an unavoidable "destruction of unique signal."

A few important nuances:

1. It's not inherently erosive.
The model doesn't have to remove complexity or voice. If prompted to preserve tone, eccentricity, or technical precision, it can do so. The "averaging" effect often happens when the goal is vague (e.g., "make this better").

2. Refinement and erasure aren't the same.
Sometimes clarity really is improvement. Removing ambiguity, redundancy, or accidental obscurity can strengthen a piece without harming its originality.

3. User intent matters a lot.
If someone hands over a draft and asks for generic smoothing, they're effectively instructing the system to optimize for conventional readability. If they instead say, "Keep the strange metaphors and sharp tone; just fix grammar," the outcome is very different.

4. Creative loss is possible — but not guaranteed.
AI does have a bias toward coherence and common patterns. If the writer doesn't actively guard their voice, the output can feel blander. But that's a tooling issue, not an inevitable structural amputation.

So I'd say:
There is a real tendency toward standardization, but calling it "semantic ablation" overstates the case. AI can flatten writing — but it can also sharpen it — depending on how it's used.

Michael Loucks 🚫

@Switch Blayde

I asked Grok…

Analysis of Conclusions

The core conclusions are:

• AI writing is boring: Strongly supported. LLMs are statistical compressors of internet-scale data, so outputs regress to the mean. Human writing thrives on deviation—idiosyncrasy, friction, surprise. Ablation strips that, yielding "safe but soulless" prose. This isn't new (critics noted it in 2023), but the article crystallizes why.

• AI writing is dangerous: More provocative, with merit but nuance needed.

• Cultural risk: A "race to the middle" could homogenize discourse. If AI polishes everything, original voices erode, echoing concerns about social media echo chambers but at the thought level. In academia, journalism, or creative fields, this amplifies "entropy decay" in collective knowledge.

• Cognitive atrophy: Over-reliance dulls skills. Writing is thinking; outsourcing it risks what the article calls "intellectually void output." Parallels to calculators weakening mental math, but scaled to ideas.

• Hyperbole check: "Dangerous" implies existential threat, like misinformation or job loss. Here, it's subtler: insidious cultural flattening. Not apocalyptic, but worth heeding—especially as AI agents automate more "writing" tasks.

Looks like Grok believes the author is on target with regard to AI writing being boring.

Michael Loucks 🚫

@Switch Blayde

I also asked Gemini. Highlights…

• The "JPEG of Thought" Analogy: This is arguably the article's strongest point. Just as a JPEG image looks "fine" but has lost 90% of its original data density through lossy compression, AI-refined text looks "clean" but has lost the "blood" and "friction" of the original thought.

• Framing a Subtle Problem: The article successfully names a phenomenon many users feel—the "uncanny valley" of AI writing—and provides a technical framework (entropy decay) to measure it.

• Challenging "Optimization": It correctly identifies that "optimizing for helpfulness" often means "optimizing for the least common denominator."

Final Verdict

Nastruzzi's article is a timely warning. It suggests that while we were worried about AI lying to us (hallucinations), we missed the fact that AI is quietly lobotomizing us (semantic ablation). The conclusion is clear: If we use AI to "polish" our thoughts, we must be aware that the polish is actually an abrasive that may be removing the most valuable parts of our message.


Replies:   Switch Blayde
Switch Blayde 🚫

@Michael Loucks

Nastruzzi's article is

I didn't give the article to ChatGPT to review. I gave it the part you posted here.

I actually asked for a simpler explanation. And then I asked if it agreed.

Vincent Berg 🚫

@Switch Blayde

Just like AI, to give you two mutually contradictory quotes about its own ability to do anything of value. Now, if that isn't telling, then you haven't been doing nearly enough reading. ;)

Marc Nobbs 🚫
Updated:

@Switch Blayde

For me, this is the most important point in this whole thread.

2. Refinement and erasure aren't the same.
Sometimes clarity really is improvement. Removing ambiguity, redundancy, or accidental obscurity can strengthen a piece without harming its originality.

Clarity is important. Ambiguity, redundancy & obscurity are undesirable--unless that ambiguity, redundancy & obscurity is authorial intent, of course.

And that's the point. AI, like any other tool, is only as 'good' as the way it is used. If an author allows the AI to simplify or standardise their prose (or dialogue), that's on them. Because, as the author, they remain in control. An author can *always* ignore the AI's suggestions in favour of their own style. Indeed, they *should* ignore any AI suggestions that diminish their voice.

Replies:   Bondi Beach
Bondi Beach 🚫

@Marc Nobbs

Clarity is important. Ambiguity, redundancy & obscurity are undesirable--unless that ambiguity, redundancy & obscurity is authorial intent, of course.

Clarity is important, but if it comes at the expense of complexity, nuance, or subtlety, one risks being left with a text a first-grader can handle. Not necessarily bad, since first-graders need to start somewhere, but our readers are a little beyond that.

~ JBB

Replies:   Pixy
Pixy 🚫

@Bondi Beach

Not necessarily bad, first-graders need to start somewhere, but our readers are a little beyond that.

Given some of the PMs I've received over the years, I'm not so sure about that...

Pete Fox 🚫
Updated:

@Switch Blayde

All very good points, and what I have seen as I have experimented with the AI editing tools in the last year. I struggle with show vs. tell, so I ask for help in the form of suggestions; some I use, many I don't. My grammar sucks too. Grammarly is my last stop before my human editor, saving him loads of work.

Pixy 🚫

@Switch Blayde

I find it darkly humorous, that an AI explanation (as provided by SB) was needed by me, to explain WTF the original post was saying.

Michael Loucks 🚫

@Pixy

I find it darkly humorous, that an AI explanation (as provided by SB) was needed by me, to explain WTF the original post was saying.

It was overly technical, but I'm steeped in that kind of language. Whether that's good or bad is left as an exercise for the reader. 😜

Marc Nobbs 🚫

@Pixy

I find it darkly humorous, that an AI explanation (as provided by SB) was needed by me, to explain WTF the original post was saying.

Glad I wasn't the only one...

ystokes 🚫

@Michael Loucks

To my knowledge I have not read any AI stories yet.

Replies:   Michael Loucks
Michael Loucks 🚫
Updated:

@ystokes

To my knowledge I have not read any AI stories yet.

I have them filtered on the homepage and avoid them like the plague.

I've read some, but only as a result of feeding my work into an AI and asking it to create scenes and chapters as an experiment. The results are unsurprisingly disappointing.

Ditto for asking it to refine/polish my work. It loses ALL my style and loves to replace diverse words with 'common' ones. It's horrid, to say the least.

Of course, I'm certain some people think that of my original writing, but to each their own!

EricR 🚫

@Michael Loucks

If I hand you a chisel and a slab of marble, the chisel won't suddenly make you into Michelangelo. Any tool is as good as your skill with it, no better.

Having said that, there is a lot of shitty AI writing out there.

Replies:   Michael Loucks
Michael Loucks 🚫

@EricR

If I hand you a chisel and a slab of marble, the chisel won't suddenly make you into Michelangelo. Any tool is as good as your skill with it, no better.

"The sculptor arrives at his end by taking away what is superfluous."

• Michelangelo, in a 1549 letter to Benedetto Varchi (published in Varchi's Lezzioni).

Or, as commonly expressed:

"The sculpture is already complete within the marble block, before I start my work. It is already there, I just have to chisel away the superfluous material."

😎

Replies:   EricR
EricR 🚫

@Michael Loucks

Hmmmm.... the implication of that is that there are masterworks hidden in the AI engines just waiting to be discovered, no?

:)

Switch Blayde 🚫

@EricR

the implication of that is that there are masterworks hidden in the AI engines just waiting to be discovered, no?

There are masterpieces hidden in the dictionary. :)

Replies:   Michael Loucks
Michael Loucks 🚫

@Switch Blayde

There are masterpieces hidden in the dictionary. :)

Precisely my point!

awnlee jawking 🚫

@EricR

Hmmmm.... the implication of that is that there are masterworks hidden in the AI engines just waiting to be discovered, no?

I don't think so. The marble is completely uniform (I hate to say random) so what is produced is entirely the product of the sculptor's imagination. (Unless the sculptor is a machine mass-producing Prince Andrew Toby Jugs).

However LLMs are highly structured, so they contain painting by numbers 'masterpieces'.

AJ

jimq2 🚫

@EricR

AI represents the million monkeys at a million typewriters.

Replies:   Michael Loucks
Michael Loucks 🚫

@jimq2

AI represents the million monkeys at a million typewriters.

AI is predictive, not random (though the prediction is somewhat randomized). The monkeys have no language context, while AI does.
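The monkeys-versus-prediction difference can be sketched in a few lines (a one-step Markov chain standing in for "prediction" — a real LLM is vastly more sophisticated, and this corpus is invented for illustration):

```python
import random

corpus = "the cat sat on the mat the cat ate the rat".split()

# "Million monkeys": every word drawn uniformly, with no context at all
def monkey(vocab, n, rng):
    return [rng.choice(vocab) for _ in range(n)]

# Predictive toy model: the next word is drawn only from words that
# actually followed the current word in the training text
def predictive(words, n, rng):
    follows = {}
    for a, b in zip(words, words[1:]):
        follows.setdefault(a, []).append(b)
    out = [words[0]]
    for _ in range(n - 1):
        nxt = follows.get(out[-1])
        out.append(rng.choice(nxt) if nxt else words[0])  # restart at a dead end
    return out

rng = random.Random(1)
print(" ".join(monkey(sorted(set(corpus)), 8, rng)))  # context-free word salad
print(" ".join(predictive(corpus, 8, rng)))           # every pair was seen in training
```

The monkey line is noise; the predictive line only ever emits word pairs that occurred in its "training data" — randomized, but never random.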

BlacKnight 🚫

@EricR

The masterworks are what goes in. Chunky AI vomit is what comes out.

Pete Fox 🚫
Updated:

@EricR

AI lacks any type of original creativity that our human minds have and bring to our twisted stories. That spark.

Big Ed Magusson 🚫

@Michael Loucks

My new favorite internet acronym:

ai:dr

Replies:   Michael Loucks
Michael Loucks 🚫

@Big Ed Magusson

My new favorite internet acronym:

ai:dr

Perfect! I love it!

Eddie Davidson 🚫
Updated:

@Michael Loucks

I saw one today (not naming names)

I smiled. I stood carefully and followed him toward the bedroom, leaving the journal on the desk where moonlight touched its spine. My story was written. The next one was just the beginning.


Moonlight touched its spine? That's how the story ends.

The scaffolding in how the tenses are handled and the decorative language in this particular story are so thick it feels like wading through a cow pasture full of cow shit.

Recently, someone wrote to me to accuse me of writing with AI. It was discouraging, in part because I didn't actually write the story in question. It's Mike McGifford's. I edit, add bits and bobs, and illustrate for him.

He's a brilliant writer who crafts word pictures, and if I could get an AI that outputs text like his, I'd fucking use it.

You'd be a fool not to use a tool that turns shit into diamonds.

Unfortunately, all of my evaluations of Claude, ChatGPT, etc. turn shit into shittier shit.

I experimented with it some just to give it a fair chance. It's not that I want it to write for me and take the credit.

However, what I do use it for is spell checking. I have another VBA thread where I've tried to tighten my editing process by automatically applying fixes I know will work.

The AI is actually quite good at detecting its own shitty prose, and at grammar and spell checking.

It's a tool, and while I know this tool is going to eventually destroy our way of life, and it fills SOL with toxic shit-grade stories, it still has its uses.

It's like using Agent Orange to kill weeds, though. It's going to eventually backfire in unexpected ways, contaminating everything around you while you eradicate the dandelions and clover you now consider weeds to be eliminated.

I've found that in small bursts it can work well. A great example is that when I try to write a story using British English for dialogue it's helpful at keeping me honest.

I was told my first attempt "was like Hollywood writes dialogue for Hugh Grant movies."

That's definitely a fair assessment. I didn't use AI, I used my own collected observation and I am sure it was quaint at best to a proper Brit.

I am working on a nudist bike ride out of Runcorn, and I use AI to take my American dialogue and turn it into British without making everyone sound like Michael Caine. It can even tell me when something is more Southern than Northern, or how they would treat lunch, supper, dinner, etc. in various regions.

That's fantastic.

However, what I cannot do is what it appears most AI authors on this site do.
"Hello ChatGPT, make me a story about (XYZ kink)"

Chat GPT: (Bleep-Bloop, here is story...milky trails, on the morrows edge, we looked to the morrow. Him. My breasts heaved, as I sighed. The growing trails narrowed into destiny as we sought our futures."

Incomprehensible shit emerges.

"Great, let me go to SOL and post it. I'll be the savior of the human race. Now, I am an author!"

One could make the case that it's doing this to music and art, and that is true.

The difference for me is that I can actually tell a fucking story. I may not be great at grammar, but I know how to bullshit with the best. I often ask:

Am I the best storyteller?

No.

Am I trying to be?

Also, no.

I am good at it though, and I love to live through the eyes of my characters and share with people.

The difference with the art is I can't hire models, and I do not want to have to use real porn because you have to censor the eyes. I can make exactly what I need to enhance a story. I spend a ton of time in Photoshop: upscaling, enhancing, captioning, etc.

It's a creative process.

With music, I do not have the senses of someone like Brian Wilson, Jimi Hendrix, or Quincy Jones. They are producers because they have talent and they hear things that no one else hears. They know how to turn that into art, and that is difficult to learn or replicate. You must have some spark of genius that goes along with it.

I listen on Youtube to covers that are performed in different styles.

https://www.youtube.com/watch?v=-fGGc2uPJOQ&list=RD-fGGc2uPJOQ&start_radio=1

This is one of my favorites as an example. When people tell me that AI slop is ruining music, I agree. However, there are people with the sense of a Quincy Jones that now have the tools to make things that I'd never imagine.

This is The Black Parade, but sung in a soulful way with cracks in his voice, and emotion that blows my mind. It is one of the earliest examples I found of this genre of covers. I believe it's the artist's actual voice because I've seen him on TikTok. It's just incredible how it changes up, and he did more than say "Suno, make me a song like The Black Parade as if Otis Redding sang it."


So, I conclude that if you use AI as a tool, not a crutch, and you don't try to get it to do everything so you can take the credit, then it can help a talented person be more talented.

We finally have some tools that we didn't have before to help us. We aren't at that point where AI writing is going to do it all. I've tried Claude, Grok, ChatGPT and a dozen local LLM models or more to see just how good they are.

They aren't.

I use them, but not to "write" the story. I've gone as far as feeding one all the nuances and details about a submissive woman in one of my stories. There are many ways someone might approach submission, and many reasons. It's no surprise that some submissives are service-oriented, others adrenaline junkies, and everything in between.

I asked it hypothetical questions and fed it dialogue to ask "what if" questions that helped inform me, and its responses were very surprising. They rang true for how I envisioned this character, and yet I wouldn't have written her that way.

Is that cheating? I don't think it is. It's using a tool and making a judgment call. It could have been way off. I wouldn't be mad at all if someone did that.

I wouldn't be mad at all if someone disagreed with me and thought I shouldn't use it for that. It's a valid opinion.

The only thing that sucks to me is that I am walking in a dog park full of dog shit that nobody picks up. Every time I click on a story, I invest time to start reading only to discover "Yep, that was dog shit."

Bad writing is bad writing, but there is way more of it with the way most AI authors use it. I wish they would stop because I set up the AI generated filter to eliminate it but so many people are not self-reporting and we will see so much that eventually people will stop checking stories due to exhaustion.

You can only take so much dog shit before you wonder why you walked into the dog park at all.

Replies:   Michael Loucks
Michael Loucks 🚫

@Eddie Davidson

Bad writing is bad writing, but there is way more of it with the way most AI authors use it. I wish they would stop because I set up the AI generated filter to eliminate it but so many people are not self-reporting and we will see so much that eventually people will stop checking stories due to exhaustion.

That is my fear. The tsunami of bad AI will simply overwhelm the space and drive out good authors, just as bad money drives out good (Gresham's law).

Replies:   Mushroom
Mushroom 🚫

@Michael Loucks

The tsunami of bad AI will simply overwhelm the space and drive out good authors, just as bad money drives out good

The funny thing is, I have been laughing at AI for many, many years now. It is not new, and in my experience it is little better than "Eliza" from the 1970s. It just threw in a search engine to have a larger selection of garbage to look through.

And the flood of AI in everything is something that to be honest disgusts me. I have spent decades honing my craft in things like writing, audio and video editing, and other things. And now, anybody thinks they can simply tell AI to make something then they throw it on YT or somewhere else and think it will make them rich.

I now quickly disengage once I realize something was created by AI. Especially as I know it is complete garbage and is actually wrong more often than it is right. And one video I watched last night was a perfect example.

It talked about the proposed "State of Jefferson" in the US back in 1941. And I was already suspecting it was AI because of the voice used. Then they got to where the AI stated that if it had been successful that would have made Jefferson the 51st state.

A real human would have known that was incorrect, it would have been the 49th state. But AI lacks any kind of actual awareness or reasoning, and only vomits out what it is expected to vomit out.

Replies:   Michael Loucks
Michael Loucks 🚫

@Mushroom

A real human would have known that was incorrect, it would have been the 49th state.

I wouldn't count on that! Barack Obama said he had visited '57 states' during a speech in 2008, while he was campaigning for the presidency!

Replies:   jimq2
jimq2 🚫

@Michael Loucks

The result of AI...

Eddie Davidson 🚫

@Michael Loucks

I smiled. I stood carefully and followed him toward the bedroom, leaving the journal on the desk where moonlight touched its spine. My story was written. The next one was just the beginning.

I told the author (politely) that I do not want to discourage him from writing, but he may want to be aware that his ending comes across quite like AI. AI does this sort of scaffolding thing, and it insists on adding an overly dramatic transitional phrase at the end of every scene like this.

He wrote back "How do you know that maybe moonlight didn't crack through the window?"

Lol. So, he definitely had AI write it. I plugged it into Gemini, and it said:

Gemini said
The likelihood that this specific passage originated from an AI remained remarkably high, likely hovering around 85% to 90%.

Several stylistic markers typical of large language models saturated these few sentences. The use of "moonlight touched its spine" served as a classic example of AI "poeticisms"—phrases that sounded evocative but appeared frequently in training data to signify closure. The structure of the final two sentences followed a familiar "thematic wrapping" pattern where a metaphorical "end" transitioned immediately into a "new beginning." This type of tidy, sentimental symmetry often signaled machine generation, as human writers frequently favored more jagged or less thematic conclusions.

While a human could certainly have penned this, the specific combination of "careful" movement, personified moonlight, and the "story/beginning" metaphor aligned almost perfectly with the default creative writing weights of modern models.

The whole story is like that. Should we report these so they can get the AI tag? It's a short one, and putting it into Gemini comes back with:

Analyzing this narrative, the probability that the majority of this text was generated by AI is extremely high, likely in the 95% to 98% range.

The story displays several "hallmarks" of sophisticated AI writing that become very apparent in a long-form piece:

1. Thematic and Structural Symmetry
AI models are trained on narrative structures that favor "closed loops." Every conflict in this story is resolved with clinical perfection:

The Graduation Speech: The protagonist gives a "hard truth" speech and is met with a "standing ovation along with thunderous applause"—a classic AI trope where controversial actions result in immediate, universal validation.

The Shooting: The assassination attempt follows a rigid sequence (the hit, the internal monologue about death, the heroic guard, the miraculous recovery) that mirrors thousands of medical/action dramas in training data.

The Ending: The final paragraph is a near-verbatim "summary" ending that AI uses to signal the end of a long context. Phrases like "leaving the journal on the desk where moonlight touched its spine" and "The next one was just the beginning" are standard generative outputs for a "poetic" conclusion.

2. Linguistic Patterns and "AI Slop"
There are specific phrases and sentence structures that appear repeatedly in LLM creative writing:

The "Nod/Inclined Head" Loop: Characters in this story "nod once," "incline their head," or "offer a faint smile" with robotic frequency to signal agreement or composure.

Adverb Usage: The prose relies heavily on adverbs like "methodically," "precisely," "clinically," and "seamlessly."

The "Not X, but Y" Construction: The text frequently uses this rhetorical device (e.g., "Not as my employer... but as my partner," "Not out of curiosity, but surprise," "Not a store, but a workshop"). While used by humans, AI overuses this to create a sense of profoundness.

3. Logic and Tone Consistency
The story maintains a "hyper-competent" tone where the protagonist never makes a mistake, never feels true embarrassment, and processes trauma (like being shot) through a lens of "statistical anomalies." This lack of "human friction"—the messy, irrational parts of human behavior—is a strong indicator of an AI following a persona prompt.

That "Not X, but Y" thing gets me as hard as the three R's in "strawberry." Every time someone wants to rely on AI to make decisions, I point out it still has problems with that.
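(For anyone who missed that meme: several chatbots famously miscounted the R's in "strawberry," reportedly because they see sub-word tokens rather than individual letters. The count itself is trivial for any program that can actually see the characters:)

```python
# The count the chatbots fumbled is a one-liner when you can see the
# letters - the models arguably see tokens like "straw" + "berry" instead
word = "strawberry"
print(word.count("r"))  # prints 3
```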

Replies:   awnlee jawking
awnlee jawking 🚫

@Eddie Davidson

The "Not X, but Y" Construction: The text frequently uses this rhetorical device (e.g., "Not as my employer... but as my partner," "Not out of curiosity, but surprise," "Not a store, but a workshop"). While used by humans, AI overuses this to create a sense of profoundness.

I've seen that a lot in AI-generated (officially and unofficially) stories. It's particularly grotesque when AI uses abstract concepts that it doesn't understand, because the result is usually meaningless gobbledegook.

AJ

Replies:   Mat Twassel
Mat Twassel 🚫

@awnlee jawking

That sounds interesting. Can you provide an example of an abstract concept that AI doesn't understand? I'd like to see what CoPilot makes of it.

Replies:   BlacKnight
BlacKnight 🚫

@Mat Twassel

Anything whatsoever. AI doesn't understand anything.

Pete Fox 🚫

@Michael Loucks

I make it quite clear in my prompt to leave the original text alone and to list any changes. You are correct that AI tries to adhere to a base set of proper writing rules, often not taking your style into account. It's a tool; learn to use it.
