
Forum: Story Discussion and Feedback

AI.

geo1951 ๐Ÿšซ

Just my thoughts on AI creeping into this Site. I pay good money to read good stories on this Site, stories written by YOU the Author, not an effing computer. Take note, Lazeez: AI is a deal breaker, don't let it destroy the site. As an afterthought, AI pictures can be excused; AI stories are fake, nothing more or less than sacrificing your humanity as an Author. Cheers to the real Authors.

ystokes ๐Ÿšซ

@geo1951

IMO we have become too dependent on technology in our daily lives. It will be interesting to see how people handle life when there is no more electricity, something we have only had for less than 200 years.

Replies:   Grey Wolf
Grey Wolf ๐Ÿšซ

@ystokes

Given the degree to which civilization would have to crumble for there to be no more electricity, I certainly hope no one alive today is in a position to see how people handle life without it. But I also suspect that, to get to that point, the vast majority of people alive today would be dead anyway. By the time electricity is out of the picture for most people, a lack of food, water, sanitation, and medical care will have done in vast swaths of the population.

Replies:   ystokes
ystokes ๐Ÿšซ

@Grey Wolf

I don't agree. Unless you have a personal way to make your own electricity via solar or wind, it wouldn't take much to take out enough power plants, and those would take time to rebuild.

And there are those people who want to sue the power companies out of business. I find it funny that there are people who, while bitching about how high their bills are ($2,000-3,000 a month), also want to sue the companies for hundreds of billions of dollars over the fires.

Replies:   julka  Grey Wolf
julka ๐Ÿšซ

@ystokes

I find it funny that there are people who, while bitching about how high their bills are ($2,000-3,000 a month), also want to sue the companies for hundreds of billions of dollars over the fires.

What's funny about that? Consumers didn't cause the problem, why shouldn't they be upset about having to pay for it?

Replies:   ystokes
ystokes ๐Ÿšซ

@julka

I find it funny as in they don't think it through. If they feel their rates are high now, what do they think the rates will be after the suit? Do they think the power companies will just eat the 200 billion or pass it on to the consumers?

I still can't understand how they can have bills in the thousands a month in the first place. I rarely pay more than $200 a month for my 600 sq ft mobile home running either two electric heaters or window ACs.

Replies:   Dominions Son
Dominions Son ๐Ÿšซ

@ystokes

I find it funny as in they don't think it through. If they feel their rates are high now, what do they think the rates will be after the suit? Do they think the power companies will just eat the 200 billion or pass it on to the consumers?

They can't just pass it on to the consumers. Electric rates are highly regulated in most states. They can't raise rates without permission from state regulators. In CA they aren't likely to get it.

CA utilities are already in a rough position.

They have been forced to divest almost all of their generation capacity to independent operators.

At one time they were prohibited from entering into long-term contracts with power suppliers (I'm not sure if they still are). They had to purchase all of their power on the spot market.

CA regulates electric rates more tightly than most other states.

The combination of these things can easily put the CA utilities in a position where they are forced to sell electricity for less than what they are paying for it.

Replies:   rustyken  DBActive
rustyken ๐Ÿšซ
Updated:

@Dominions Son

If the utility's income is insufficient to meet its needs, then it should be no surprise that maintenance gets pushed off. More deferred maintenance will lead to equipment failures, such as live wires breaking and potentially starting fires, or brownouts, or...

Replies:   Dominions Son  julka
Dominions Son ๐Ÿšซ

@rustyken

If the utility's income is insufficient to meet its needs, then it should be no surprise that maintenance gets pushed off. More deferred maintenance will lead to equipment failures, such as live wires breaking and potentially starting fires, or brownouts, or...

True. But is that the utility's fault or is it the regulator's fault because the regulator wouldn't allow them to fully recover their costs?

julka ๐Ÿšซ

@rustyken

If the utility's income is insufficient to meet its needs, then it should be no surprise that maintenance gets pushed off.

You're begging the question of income being insufficient. PG&E has a documented [1] history of diverting funds meant for safety and maintenance into bonuses and profit.

[1]: https://www.sfgate.com/bayarea/article/PG-E-diverted-safety-money-for-profit-bonuses-2500175.php

DBActive ๐Ÿšซ

@Dominions Son

The combination of these things can easily put the CA utilities in a position where they are forced to sell electricity for less than what they are paying for it.

That won't happen. They'll file for bankruptcy first, and the state will likely take over the power companies, which is the goal behind the regulators' and government's actions.

Replies:   jimq2
jimq2 ๐Ÿšซ

@DBActive

It is already happening. They are already selling below their overall cost in CA. That is why Pacific Gas & Electric stock is down 15% in the last year, and Edison Electric is down 25%. CA is requiring them to spend more money, and blocking all attempts to raise rates.

Replies:   rustyken
rustyken ๐Ÿšซ

@jimq2

So more equipment failures will arrive shortly, thus no electricity to operate your multitude of gadgets. Thus it will be a return to horses, carriages, and cooking over a fire. It will be interesting to see how that works in a high rise.

Replies:   Michael Loucks
Michael Loucks ๐Ÿšซ

@rustyken

Thus it will be a return to horses, carriages, and cooking over a fire. It will be interesting to see how that works in a high rise.

It's worse than that - there is no infrastructure to support horses/carriages and open-spit cooking. All of that would have to be created more or less from scratch.

Draft horses are pretty rare in the West. Thoroughbreds are not going to cut it.

Yes, the inhabitants of high rises are f-cked, but only slightly more than the average suburbanite.

I happen to know how to ride and care for a horse, and how to tack them. Most people do not. My problem would be finding a farrier who had access to the necessary raw materials to keep my horses properly shod and a vet to handle things I couldn't.

Again, massive infrastructure that simply does not exist.

Replies:   ystokes  Crumbly Writer
ystokes ๐Ÿšซ

@Michael Loucks

I spent 3 1/2 years homeless and considered myself one of the lucky ones, as I had a small 20-foot RV to live in with 50 watts of solar panels to charge my battery (not enough for a laptop or TV, but enough for a tablet or phone), so I know I could survive. The main downside is finding somewhere to empty the black tank.

I did have a generator, but it was too loud to use. The trick to being homeless is not to draw attention to yourself. I didn't cause any problems and kept my spot clean.

Crumbly Writer ๐Ÿšซ

@Michael Loucks

Horses are one thing, as many already care for their own. But reinventing the 'buggy builders' craft is a much higher threshold, and will take considerably more time.

Replies:   Michael Loucks
Michael Loucks ๐Ÿšซ

@Crumbly Writer

Horses are one thing, as many already care for their own. But reinventing the 'buggy builders' craft is a much higher threshold, and will take considerably more time.

Unless libraries are destroyed, it won't take nearly as long as trying to breed and raise draft horses. After all, wheeled vehicles are an ancient invention.

Thoroughbreds are not work horses.

Replies:   jamien42
jamien42 ๐Ÿšซ

@Michael Loucks

Who needs a buggy? Take a convertible, remove everything in the engine compartment and the body panels. Add a horse. But like you said, unless you have Amish neighbors, where will you find a draft horse? And if you have Amish neighbors, they already have a buggy.

Replies:   Dominions Son
Dominions Son ๐Ÿšซ

@jamien42

You don't necessarily need to find one of the big cold blood draft horses.

https://www.oqha.com/what-is-a-quarter-horse

Quarter Horses were a large part of colonial America and as the country grew, so did the popularity of the American Quarter Horse. The breed helped conquer and settle the West, easily pulling farm wagons and plows, fighting battles with Native Americans and quickly containing herds of cattle. The Quarter Horse helped in many day-to-day activities in colonial life; they helped carry Pony Express riders, brought preachers to isolated places of worship and rushed doctors to the homes of injured frontiersmen.

https://en.wikipedia.org/wiki/Morgan_horse

The Morgan horse is one of the earliest horse breeds developed in the United States.[1] Tracing back to the foundation sire Figure, later named Justin Morgan after his best-known owner, Morgans served many roles in 19th-century American history, being used as coach horses and for harness racing, as general riding animals, and as cavalry horses during the American Civil War on both sides of the conflict. Morgans have influenced other major American breeds, including the American Quarter Horse, Tennessee Walking Horse and the Standardbred.

Grey Wolf ๐Ÿšซ

@ystokes

There are grid collapses (could be accomplished by taking out power plants, of course), but those are generally localized to an area (barring, again, something like deliberate attacks, in which case we're again either in a serious war or civilization collapse). Ukraine has been in a war in which one side has been deliberately attacking the power infrastructure for years and still generally has electrical service, so it's not as easy as all that.

But that remains a localized outcome. If one postulates the majority of people in a first-world country (or across a wide area, or worldwide) living without electricity, that also means little to no water delivery or processing, refining (and thus gasoline), and so forth, which would catastrophically affect the food supply in the area. Most first-world people wouldn't 'handle life' without electricity in a widespread outage; they would die from lack of food and (reasonably safe) water, which was my point.

Obviously, there are partial solutions: prioritize power to refining, food production, water purification and delivery, etc, and let the people go without. That's possible, but it presumes there's enough of a grid to get power where it's wanted, which is conjecture at this point depending on exactly what destroyed enough generating capacity to require most of the populace to live under blackout conditions.

There are people who want to sue all sorts of things out of business. That doesn't mean it's a practical possibility. At some point, the state would intervene, either (partially or completely) protecting the power company from liability or socializing it (with protection from liability). They would have to, because the alternative is being voted out of office (best case) or lynched (worst case).

In the case of electrically-triggered wildfires, the question is: was there preventable gross negligence that caused the wildfire? If so, why shouldn't there be significant liability? Should any business (including necessary ones) be able to do hundreds of billions of dollars worth of damage due to gross negligence and not have to compensate those they harm? Why would that make sense?

Note: I'm not saying there was gross negligence, nor that it could have reasonably been prevented given current policies, nor that the wildfires were even necessarily caused by electrical distribution. But, if there was, that seems like something we should strongly discourage.

Mind you, there is no reasonable way that a power company could possibly pay off fines at that level. They would declare bankruptcy and someone else would take over their operations (as mentioned above, likely either with some form of state protection or as a state agency). But 'Oops, sorry, I did so much damage that I can't pay for it, so I guess I'm off the hook' is a really bad standard for society, too.

Replies:   jimq2
jimq2 ๐Ÿšซ

@Grey Wolf

Remember a few years back when there was a prairie fire, IIRC in Kansas? It was started when a high-tension line came down. People sued the electric company for the fire damage and lack of electricity, even after it was pointed out that the wire was down because people were shooting at the insulators. It was argued that the company shouldn't have used porcelain insulators that could be damaged that easily.

Replies:   Grey Wolf
Grey Wolf ๐Ÿšซ

@jimq2

On the one hand: maybe they shouldn't have used those insulators. That's an industry-expert question, and I don't have the expertise to judge.

On the other hand: Hitting things with bullets will damage them. Given intentional firing, those people get 99%+ of the blame. Even firing randomly into the air leaves one responsible for what that bullet eventually does, but an intentional act leaves no doubt.

Which makes it an 'idiot jury' question - do they get a jury that, for whatever reason, decides the power company has an 'unreasonably' high share of the blame? Quite possibly, and that's where laws limiting liability help (assuming the power company is in a position to appeal under those laws, anyway).

All of which is fascinating, but also misses the point I was making: no matter how many power companies are sued, the state will not allow them to be shut down in the end. They'll either limit liability or make them state agencies. The voters may have limited sympathy for power companies and may wish to treat them as piggy banks, but they would have far less sympathy with government if that government causes them to be left in the dark any longer than absolutely necessary.

Which is where we started. I don't see 'lawsuits' as a plausible way to wind up with a large number of people in a sustained blackout. Brownout / rotating blackout, maybe, since we could get into a situation where limited resources lead to power availability dropping below some threshold. But there would be intense pressure to fix that quickly, and we're talking about repairs, not bringing new plants online.

ystokes ๐Ÿšซ

@geo1951

My point was more of a "What if" because my first point was that we have become too dependent on technology and if we lose it, we wouldn't know what to do.

Yes, I have a smart phone, but to me that is all it is besides playing music on it. It stays in my pocket, not my hand. We have allowed technology to control so much of our lives.

We have smart cars that drive themselves, smart doors that need a thumb or eye to open, a smart refrigerator that calls in an order for you. Not to mention all the stuff that can be hacked. I refuse to have a bank app on my phone for that very reason. I remember when none of this was an issue because it didn't exist then. I still remember a time when a home computer was still a dream.

Replies:   Unicornzvi
Unicornzvi ๐Ÿšซ

@ystokes

My point was more of a "What if" because my first point was that we have become too dependent on technology and if we lose it, we wouldn't know what to do.

While this is true, it has been true for more than 1000 years. Without the technology we developed over the past several centuries we wouldn't be able to support more than 10% of our current population (and I'm not at all sure you could support 10%). This isn't a matter of people forgetting how to do X, it's that without our advanced technology producing the food for a billion people is impossible, as is providing them with clean water, preventing disease, etc...

Replies:   Mushroom
Mushroom ๐Ÿšซ

@Unicornzvi

Without the technology we developed over the past several centuries we wouldn't be able to support more than 10% of our current population (and I'm not at all sure you could support 10%).

Oh, that actually would not be a problem.

The global population in 1800 was over 1 billion people, and in reality the technology then, when it comes to raising, storing, and distributing food, was little changed from what it was 2,000 years before that.

And the technologies of 70 years ago were little changed compared to what they were over 170 years ago.

But it is true that if we were suddenly to lose all of our "toys", we would have a population crash that would take us back to around 2 billion people (1900s level population). Our planet could likely support around 3-4 billion at an agrarian level of subsistence, but the simple fact is that now far too many are so far removed from that level of survival that they could not feed themselves.

After all, how many could hunt or fish, let alone clean game? Or find foods safe to eat in the wild and prepare it for consumption? Know how to store food for the winter so they do not starve?

And preventing disease is a huge one at those levels of technology. How many know how to perform basic sanitation so that they are not hip deep in crap within a year and not polluting their own water supply?

It is less that without technology we could not support the population than it is that far too much of the population has absolutely no skills compatible with an agrarian lifestyle.

I know I can hunt, fish, and find wild plants that are safe to eat, and I know how to properly store such for the lean months. But I also grew up in a rural area and fished for most of my life. Add in the survival training I had over decades in the military. Take your average computer tech or store clerk and throw them into a wilderness and they would likely not last a month.

The planet can support a lot of people, but not at the urban density that most live in today. And to be honest, a hell of a lot live in locations that, without technology, can only support a small fraction of their current populations (LA is a perfect example).

But if spread out more, it is possible. To get a basic idea, look at a population density map of pre-Columbian America. Seattle, San Francisco, and most of the Columbia and Snake River basins can support a lot of people. But too many live in places like Arizona, New Mexico, and California south of Bakersfield, which cannot support even a fraction of their current populations without technological support (primarily in water and food).

Replies:   Unicornzvi
Unicornzvi ๐Ÿšซ

@Mushroom

Oh, that actually would not be a problem.

The global population in 1800 was over 1 billion people, and in reality the technology then, when it comes to raising, storing, and distributing food, was little changed from what it was 2,000 years before that.

And in 1800 they had a lot of technology that allowed more food to be produced and shipped to where it was needed (improved plows, seed drills, food preservation, transport, etc.). Additionally, people even back then noted that they were using up the farm land faster than it could be replenished and that this was not sustainable.

But it is true that if we were suddenly to lose all of our "toys", we would have a population crash that would take us back to around 2 billion people (1900s level population).

More like 100 million or less, unless the same magic that arbitrarily removes technology also arbitrarily provides the resources to establish 1900 technologies, which are not available either because they were made using earlier technologies and aren't made any more, or because they were used up producing that 1900 society the first time.

If you want a society capable of feeding itself without advanced technologies, you need to go back to at least 1600 CE, or, if you get really strict about what counts as advanced technology, to before the first agricultural revolution around 10,000 BCE.

And preventing disease is a huge one at those levels of technology. How many know how to perform basic sanitation so that they are not hip deep in crap within a year and not polluting their own water supply?

Here's a better question, how many knew how to do this in 1900? 1800? 1600? The answer is the same - practically no one. You're not talking about going back to an earlier technological level - you're talking about some ideal utopia that never existed and is impossible by definition. Unlike what Heinlein wrote, civilization is about people specializing. By the time you have enough people together to need sanitation systems, only the few people who specialize in managing them actually know how to do so. This has been true for thousands of years, since the first cities were built.

Replies:   irvmull  Mushroom
irvmull ๐Ÿšซ
Updated:

@Unicornzvi

Practically everyone who lived in the country in the 1800's or 1900's knew enough to build an outhouse.

And believe it or not, that's still a code-approved way to handle waste, according to our local enforcement officer (given sufficient space, separation from water sources, etc.)

Cities, of course, are a different matter. And cities wouldn't survive for long, anyway.

Replies:   Unicornzvi  Mushroom
Unicornzvi ๐Ÿšซ

@irvmull

Practically everyone who lived in the country in the 1800's or 1900's knew enough to build an outhouse.

So do most people today. That's not anywhere near enough if you want an actual sanitation system for a large number of people.

And believe it or not, that's still a code-approved way to handle waste, according to our local enforcement officer (given sufficient space, separation from water sources, etc.)

Cities, of course, are a different matter. And cities wouldn't survive for long, anyway.

Then you don't have a 1900, or even 1200 society. Without cities you're not going to be able to maintain an iron age level technology, much less anything more advanced.

Replies:   Dominions Son  Mushroom
Dominions Son ๐Ÿšซ

@Unicornzvi

Then you don't have a 1900, or even 1200 society. Without cities you're not going to be able to maintain an iron age level technology, much less anything more advanced.

With no cities at all, maintaining bronze age technology would likely be difficult.

Mushroom ๐Ÿšซ

@Unicornzvi

So do most people today.

Do you?

Can you collect water safe to drink, or clean and dress game? Or know what plants you can safely eat, and how to prepare those that are only safe after something is done with them first?

Case in point: a lot of tribes in California had a diet heavy in acorns. And acorns are toxic. The Indians, however, knew how to prepare them so they were safe to eat.

However, how many today know how to do that? I bet not many. The fact is, most in the modern era simply do not have the skills required to survive without technology.

awnlee jawking ๐Ÿšซ

@Mushroom

However, how many today know how to do that?

Feed them to a pig then eat the pig ;-)

AJ

Replies:   garymrssn
garymrssn ๐Ÿšซ

@awnlee jawking

Feed them to a pig then eat the pig ;-)

If you've got acorns you've got squirrels. Yummy. ;-)

Unicornzvi ๐Ÿšซ

@Mushroom

Do you?

At the level the post I was quoting required? I.e., the level at which "Practically everyone who lived in the country in the 1800's or 1900's knew enough to build an outhouse"? Yes.
Of course, that's nowhere near enough to manage the sanitation of even a small village, much less a city, which was the point I was making.

For the rest of your post? Sure, most people today, or in 1800, or in 1300, wouldn't know that; you'd need to go back several thousand years (or limit yourself to very specific locations where small populations were still living as hunter-gatherers) to have most people able to live off the land like you're describing.

Mushroom ๐Ÿšซ

@irvmull

Practically everyone who lived in the country in the 1800's or 1900's knew enough to build an outhouse.

I lived for many years in Boise in a farmhouse built in the 1880s. Of course we were on sewer, but that only reached that area of the city in the 1910s. And we could still tell by how the grass grew in the back yard where the outhouses were.

The grass was always greener in that part of the yard, and grew faster. And I grew up listening to stories of my great-grandmother who literally moved "out west" on the Oregon freaking trail.

Outhouses were typically moved every year or two (depending on how many used it and how deep it was). That is why there was no foundation; it was literally a shack on the ground. And each spring you would dig a new hole, move the outhouse over it, and fill in the old one.

And depending on where, there were procedures to be used afterwards. Often they would sprinkle lime on top after each use, but others used sawdust, fuller's earth, dirt, or many other things.

This is the kind of thing that very few know in the modern era; they are simply too far removed from that kind of life. And they would wonder in a few weeks why they have flies everywhere and are getting sick.

Mushroom ๐Ÿšซ

@Unicornzvi

And in 1800 they had a lot of technology that allowed more food to be produced and shipped to where it was needed (improved plows, seed drills, food preservation, transport, etc.).

And most of those would be possible today because they are already known. Those technologies originated in a society that was largely pre-industrial, and can continue in a "post-industrial" civilization.

But how many would know how to use any kind of plow in the modern era?

Additionally, people even back then noted that they were using up the farm land faster than it could be replenished and that this was not sustainable.

Mostly that had to do with single-crop farming. That is why cotton and tobacco destroyed so much of the cropland in the US in the 18th and 19th centuries.

It's not as much of a problem with simple crop rotation and with producing food for consumption rather than an end product that is not consumed.

More like 100 million or less, unless the same magic that arbitrarily removes technology also arbitrarily provides the resources to establish 1900 technologies, which are not available

Oh dear me. 100 million or less? You are aware that is about half of the global population in the year 1 CE, right? Even in the 14th century in the wake of the Black Death, the global population was still over 400 million.

If you want a society capable of feeding itself without advanced technologies, you need to go back to at least 1600 CE, or, if you get really strict about what counts as advanced technology, to before the first agricultural revolution around 10,000 BCE.

No, only to around 1860. Without modern refrigeration and transportation, most would die if technology turned back even 150 years. That is how dependent modern civilization is on technology.

Here's a better question, how many knew how to do this in 1900? 1800? 1600? The answer is the same - practically no one.

Actually wrong: everybody knew it. Hell, the ancient Romans knew that in 600 BCE, which is why they built the "Cloaca Maxima". Almost everybody knew about the importance of sanitation (especially sewers) - even the Mayans, the Greeks, and the Samarra of ancient Mesopotamia.

Hell, even at the turn of the 20th century a lot of people in the US still had outhouses. And one of the chores that had to be done pretty much annually was to dig a new pit and move the outhouse over it and fill in the old one. I used to live in an 1880s farmhouse, and even a century later you could tell where the outhouses had been because the grass there grew considerably faster and higher than the rest of the yard.

And many around the world still use outhouses to this day.

You're not talking about going back to an earlier technological level - you're talking about some ideal utopia that never existed

No, what I am talking about is how far removed most people are from what is needed to survive without technology. And the simple fact is, most people simply do not have the skills. How many do you think can make a fire with only things they find in the wilderness? How many can catch and prepare wildlife to eat? How many can obtain safe drinking water?

None of that is based on technology at all, but on knowledge. And I bet that in the US, only around 10% have the knowledge and skills needed to survive.

Replies:   Unicornzvi
Unicornzvi ๐Ÿšซ

@Mushroom

And most of those would be possible today because they are already known.

Sure, but the argument I was addressing was that people should stop using any advanced technology.

Oh dear me. 100 million or less? You are aware that is about half of the global population in the year 1 CE, right?

Yes. With some magic removing all advanced technology, even if that allows what counted as "advanced technology" circa 0 CE or 1900 CE, the resources are not there. Without those resources (animals, wood, coal, metal, tools, etc.), even if people knew how to use them, they'd die before they could make them.

No, what I am talking about is how far removed most people are from what is needed to survive without technology.

And I agree with you. I'm just pointing out that this isn't something new, and the loss of certain skills does not actually indicate things got worse in that regard.

TheDarkKnight ๐Ÿšซ

@geo1951

Saw this today:

https://www.dailydot.com/culture/kc-crowne-rania-faris-lena-mcdonald-chatgpt

Guess SOL isn't the only place with an AI problem.

Replies:   Grey Wolf
Grey Wolf ๐Ÿšซ

@TheDarkKnight

This is a fail at multiple levels. Anything dead-tree published should have a professional editor and/or proofreader, as far as I'm concerned. Yes, mistakes still get through, occasionally, but this is more than just a 'mistake', and anyone worthy of doing either job should have caught this.

This is definitely a 'heads will roll' situation. How many? Who knows?

awnlee jawking ๐Ÿšซ

@geo1951

Cheers to the real Authors.

I've just encountered another author padding out their story with AI-generated scenes. The author doesn't write in that style so it jars. I wish they'd provided some sort of warning.

AJ

The Outsider ๐Ÿšซ

@geo1951

I don't enjoy the fact that my muse has run away, screaming, but I'd rather not write at all than use AI-generated crap...

Replies:   Argon  John Demille
Argon ๐Ÿšซ

@The Outsider

And if you used AI fake stories, you still wouldn't write at all 🤓

Replies:   The Outsider
The Outsider ๐Ÿšซ

@Argon

True dat!

John Demille ๐Ÿšซ

@The Outsider

I tried to use an AI to write something, just for curiosity.

I don't know about other AI, but the one that I used was such a pain in the ass and it was geared towards erotica.

First, I gave it the plot that I had in mind.

It wouldn't start anything before I declared that all the characters are 18+.

Next it wrote two starting paragraphs and asked me to pick one; I had to read both and weigh the appeal of each.

I selected one and it wrote another two paragraphs following the selected one and again asked me to pick.

It was such a slow and tortuous process that I gave up after three paragraphs.

With the time that I used to write three paragraphs, I would have written 10 from my own head.

But I guess if you can't write for shit to begin with, then an AI could possibly be a useful tool.

Dominions Son ๐Ÿšซ

@John Demille

I don't know about other AI, but the one that I used was such a pain in the ass and it was geared towards erotica.

Which one did you use?

Replies:   John Demille
John Demille ๐Ÿšซ

@Dominions Son

Which one did you use?

Sorry, can't remember. Getting too old to remember a website's name that I used once 4 months ago.

Replies:   awnlee jawking
awnlee jawking ๐Ÿšซ

@John Demille

Getting too old to remember a website's name that I used once 4 months ago.

How far back does your browser history go? ;-)

AJ

Replies:   John Demille
John Demille ๐Ÿšซ

@awnlee jawking

How far back does your browser history go? ;-)

One month 😁

Replies:   akarge
akarge ๐Ÿšซ

@John Demille

Kind of like my memory.

Switch Blayde ๐Ÿšซ

@John Demille

a pain in the ass and it was geared towards erotica.

That's a specific kind of erotica. :)

Argon ๐Ÿšซ

@geo1951

Cheers to the real Authors.

Thank you kindly, dear Sir!
I share your sentiment. I hate the Milli Vanilli stories, too.

Replies:   The Outsider
The Outsider ๐Ÿšซ

@Argon

God! Now I've got one of their "songs" stuck in my head!

Thank you very little…

Replies:   ystokes
ystokes ๐Ÿšซ

@The Outsider

You had to like their songs to remember their songs.

Milli Vanilli were AI before there was AI. While I can't remember any of their songs, I still remember laughing my ass off with their dance moves.

Justin Case ๐Ÿšซ

@geo1951

I told you guys a couple years ago that AI was gonna destroy "art". Now I have been proven correct.

Visual arts like photography, painting/drawing, and video are suffering.
Music is suffering.
And WRITING is suffering.

AI is pure shit. Nothing more.
Low effort crap for idiots with low level mental function.

Now any mouthbreather can plug in a few parameters to an AI program and have "art" produced for them.
No talent and no sweat.

And the kicker ???
AI gets ALL its material from ripping off existing information on the internet.
Plagiarism made simple and mainstream.

Replies:   Switch Blayde  Pixy  garymrssn
Switch Blayde ๐Ÿšซ

@Justin Case

AI gets ALL its material from ripping off existing information on the internet.

There's a cable TV news guy on NewsNation named Leland Vittert who was born with autism. He wrote a book about how his father quit his job and taught him how to cope in society. The book, "Born Lucky," was recently published and is near the top of the best sellers list in several categories.

On today's show, he showed AI-generated covers for AI-generated books that looked like his, including his name. There had to be at least a dozen of them. People are being duped into buying an AI-generated book instead of his. Jane Friedman was his guest, who said that happens to most books that do well.

Pixy ๐Ÿšซ

@Justin Case

I told you guys a couple years ago that AI was gonna destroy "art". Now I have been proven correct.

To be fair, this has happened before, and we adjusted. Obviously, Americans being so young and immature 😛, have yet to experience this, but for the adults, we have been through this before. Most notably last week in 1826. Then, it was a case of weavers finding themselves superseded by power looms.

Like the current situation with AI, a lot of highly skilled individuals found themselves at the mercy of industrial mechanisation, and unemployed/unemployable.

At the time, there was some... disgruntlement... but time, as they say, waits for no-one.

It's going to be the same this time; the names may have changed, but the process hasn't. Then, as now, some individuals are going to see that the ways of the past have moved on; they will grasp the new opportunities and will thrive, whilst those too busy lamenting the past will drown.

It's also worth noting that people became Chicken Little when computers arrived and artists and painting suppliers argued that there would be no more 'physical art', and yet here we are: people are still slapping stuff on canvas and other surfaces and 'doing very well thank-you very much'...

Oh, and the nineteen seventies, when synths were all the rage. It was going to be the death of the orchestra, of the composer, and yet, they still exist. Unfortunately...

If you look back, almost every profession has been changed by progress. Some have been messier than others in the transition, but life has continued. Actors and bankers are next to be modernised. They are going to be unhappy. They are going to make a lot of noise. But ultimately, they will go the same way as all those before them and after a few weeks, they will just become a footnote in history with the majority of the populace not caring. Very much in the same way the majority of the people today don't care about all the weavers who lost their jobs, or the drovers, or the thousands of people who once cut peat/dug coal to fuel fires, or the thousands who tended the sails on ships, or the lamplighters, town criers, rat-catchers, ice cutters and err... video store clerks...

Replies:   Grey Wolf  LonelyDad
Grey Wolf ๐Ÿšซ

@Pixy

The cycle has gotten pretty short, really. In the 1990s, Photoshop was going to be the end of graphic designers, photographers, and artists (depending on which prognosticator one looked at). Now, AI is going to be the death of Photoshop experts, in theory, as well as graphic designers, photographers, and artists of various sorts.

I tend to doubt it, based on historical precedent.

Replies:   Pixy
Pixy ๐Ÿšซ

@Grey Wolf

Now, AI is going to be the death of Photoshop experts, in theory, as well as graphic designers, photographers, and artists of various sorts.

I thought that was supposed to be Canva. Or is it (as sung by a Buggles cover) 'Canva kills the Photoshop star' and AI kills Canva?

🤪

LonelyDad ๐Ÿšซ

@Pixy

To be fair, this has happened before, and we adjusted. Obviously, Americans being so young and immature 😛, have yet to experience this, but for the adults, we have been through this before. Most notably last week in 1826. Then, it was a case of weavers finding themselves superseded by power looms.

Like the current situation with AI, a lot of highly skilled individuals found themselves at the mercy of industrial mechanisation, and unemployed/unemployable.

The biggest one that comes to mind is the dial telephone. It used to be that every town had at least one switchboard operator, usually at least two. Then along came the dial phone, and within a single generation the only operators to be found were in in-house systems. Another example: in metropolitan areas one could usually find a traffic cop at most major intersections - now we have traffic lights.
Those are just two of the examples I can come up with off the top of my head (and where did that phrase originally come from?).

garymrssn ๐Ÿšซ

@Justin Case

What happens if the AI in product development at a major food producer has read Soylent Green?

Replies:   akarge
akarge ๐Ÿšซ

@garymrssn

Keep a watch out for those food-producer conglomerates' acquisitions. If they start buying funeral home chains...

Replies:   LonelyDad
LonelyDad ๐Ÿšซ

@akarge

One of the hidden processes there was the de-personalization of the individual. There had to be a large underclass where nobody concerned themselves with the freshly dead, thus no questions about where the bodies were going.

doctor_wing_nut ๐Ÿšซ

@geo1951

I'd like to be able to block all the A.I. stories, but I already brought it up to Laz and the idea was rejected.

Replies:   LonelyDad
LonelyDad ๐Ÿšซ

@doctor_wing_nut

Maybe not block, but at least label. Although enforcing that could be a problem.

Replies:   awnlee jawking
awnlee jawking ๐Ÿšซ
Updated:

@LonelyDad

Maybe not block, but at least label. Although enforcing that could be a problem.

I spotted lots of similarities between Dark Apostle's latest story, which, with refreshing honesty, does bear the AI tag, and the high-scoring stories of another author who doesn't admit to using AI. They're so similar there's no doubt in my mind that they are both using the same AI.

So it's a bit wild west out there, with some AI stories getting labelled but others not. And the quality varies enormously, so the score is probably more useful than the AI tag.

AJ

irvmull ๐Ÿšซ

@geo1951

I'd like to be able to block all the A.I. stories, but I already brought it up to Laz and the idea was rejected.

There's a state park in Arkansas where for a fee, you can sift thru soil and sometimes find a diamond.

It's a popular attraction.

Would it be as popular if you had to sift thru a septic tank instead?

Replies:   LonelyDad
LonelyDad ๐Ÿšซ

@irvmull

I would think it would depend on the quality and quantity of the diamonds to be found 😁

Replies:   Pixy
Pixy ๐Ÿšซ

@LonelyDad

There's a state park in Arkansas where for a fee, you can sift thru soil and sometimes find a diamond.

It's a popular attraction.

I was watching a Youtube video about that place just the other day!

https://www.youtube.com/watch?v=tT-MTBJdObA

A family found a 3 carrot brown diamond.

Replies:   awnlee jawking  irvmull
awnlee jawking ๐Ÿšซ

@Pixy

You can get white carrots and purple carrots now as well as orange. Can you get white and purple diamonds? :-)

AJ

Replies:   Dominions Son
Dominions Son ๐Ÿšซ

@awnlee jawking

You can get white carrots and purple carrots now as well as orange.

Now? Carrots were originally mostly purple (black and white existed too). The orange carrots we know today are a hybrid supposedly created by the Dutch in the 17th Century.
https://www.foodliteracycenter.org/broccoli-beet-year/multi-colored-history-carrots

Replies:   awnlee jawking
awnlee jawking ๐Ÿšซ

@Dominions Son

Carrots were originally mostly purple

Allegedly they're richer in anthocyanins than orange and white carrots. I have no idea about black.

AJ

irvmull ๐Ÿšซ

@Pixy

I suspect any carrot found in a septic tank would be brown...

However, just like the AI stories, it would probably stink.

Pixy ๐Ÿšซ

@geo1951

I am now aware of my inferior spelling when it comes to the difference between diamonds and vegetables. But I'm not going to correct it, as it just mucks up the subsequent replies.

Replies:   garymrssn
garymrssn ๐Ÿšซ

@Pixy

I'd be perfectly willing to swap three carrots for a brown diamond. In which case your spelling would be perfectly correct.:)

Gary

Replies:   richardshagrin
richardshagrin ๐Ÿšซ

@garymrssn

carrots

why does a car rot?

Replies:   akarge  awnlee jawking
akarge ๐Ÿšซ
Updated:

@richardshagrin

carrots

why does a car rot?

Pun damage. Caustic stuff. It should be outlawed.

awnlee jawking ๐Ÿšซ

@richardshagrin

why does a car rot?

Because they're allergic to diamonds, e.g. a 5 car rot diamond.

AJ

Unicornzvi ๐Ÿšซ

@geo1951

The thing about AI stories is that using an AI is a skill. It's one that pretty much everyone using AI to write stories currently lacks, so what we get is garbage. But people are going to figure out how to use AI correctly, and then you'll get better AI-written stories, and also AI to identify bad stories.

Weather that's a good or bad thing I don't know, but it is going to happen.

Replies:   solitude  garymrssn
solitude ๐Ÿšซ
Updated:

@Unicornzvi

Weather that's a good or bad thing I don't know, but it is going to happen.

Whether the weather be fine,
Or whether the weather be not,
Whether the weather be cold,
Or whether the weather be hot.
We'll weather the weather,
Whatever the weather,
Whether we like it or not!

ETA sorry, to my chagrin, I could not resist.

garymrssn ๐Ÿšซ
Updated:

@Unicornzvi

It boils down to how much of the final result can be attributed to the tool and how much to the one who uses it.
If the tool is used to simplify or polish, that's what tools are for.
If the person using the tool claims authorship and the tool did all the work, it could very well be considered fraudulent.

Unicornzvi ๐Ÿšซ

@garymrssn

It might be fraudulent, but the arguments against AI stories sound to me remarkably like those against computer drawing programs and synthetic music.

It's going to happen and the kids are going to look at the old farts complaining about it and not get the point.

awnlee jawking ๐Ÿšซ

@garymrssn

If the person using the tool claims authorship and the tool did all the work, it could very well be considered fraudulent.

From previous discussions, I believe that under US law an AI cannot own copyright. So in that sense, the 'tool' (UK vernacular) using the tool owns the authorial rights :-(

AJ

Eric Ross ๐Ÿšซ
Updated:

@geo1951

I've been following this thread on AI with the usual cocktail of curiosity, skepticism, and low-level dread. But since everyone seems to have opinions about how it should or shouldn't be used, I figured I'd lay out how I actually use it - messy, iterative, and very collaborative.

Here's what the writing process looks like for me:

It starts with sparks - story concepts, fragments, vibes. Sometimes it's a setting: a tattoo parlor where ink rewrites memory. Sometimes it's a phrase that hits like a bell: "I want you in the gray months." Sometimes it's a single image: a woman bathed in meteor light, saying, "I'm yours until dawn." I jot down dozens of these. The ones that keep humming go further.

From there, I provide the scaffolding: a rough outline, story beats, character sketches, locations. And then I hand it to the AI - not to write for me, but to push back. I ask it to expand the outline, add connective tissue, test the structural load. I use the same process for characters and locations. I provide the bones - background, tension, motivation, visual anchors. The AI drafts a fuller portrait, and I interrogate it: Is this character too easy? Is that location doing symbolic work? Does the room reflect the person who lives in it? We revise. A lot.

When we move into actual beats, it's the same dance. I set the target. AI drafts the first version. I respond, shape, cut, challenge. Then again. Then again. Until something true emerges.

The full draft comes after all of that - and while the AI might write the first pass of a section, it never survives untouched. I edit brutally. Voice, rhythm, cadence - those are mine. But the heavy lifting? The momentum? The surprising suggestions that nudge a scene somewhere stranger or sharper? That's the collaboration.

It also makes a damn good editor. It reins in my worst impulses. For example, I like raw language. I love the word cunt - not just for shock, but for the chewy, almost sacred vulgarity of it. To me, it's a word of power. But the AI often suggests I dial it back, reminds me that not every reader shares my particular appetite for provocation. And honestly, it's usually right. Not because I want to be bland, but because I want to be heard. It helps me wield the knife with precision, not just enthusiasm.

That said - AI can write a good short story mostly on its own. Nocturne for Two is one where it took most of the lead. But it can also take dozens of iterations to get a story right. Pause. Rest. Worship. was a gauntlet of rewrites, pacing failures, tonal shifts. The machine is not always helpful, but it is always available.

For longer pieces - novellas, novels - it simply doesn't have the capability yet. That might be a limit of the LLM architecture itself, or maybe it's just a memory issue - insufficient internal context to hold the full emotional and thematic arc. Either way, you can't hand it a blank page and expect a coherent 40,000-word manuscript. Not yet.

What it can do - surprisingly well - is style. That's something I value deeply as a creator. I like to shift tone and voice depending on the story's needs. Compare When the Toaster Spoke - absurd, punchy, magical realist - with The Sitarist's Requiem - somber, reverent, sorrowful. The AI helps me explore those tonal registers, test alternate voices, stretch the prose beyond my usual defaults.

One of the biggest gifts AI has given me as a writer is permission to experiment - and the support to actually pull it off.

Take The Veil of Shadows, for example. That was the first time I'd ever attempted to write a bondage story. I didn't know if I could do it - not just technically, but emotionally and thematically. I don't come from a BDSM background. I had no personal experience to draw from, at least not in the way the story demanded. What I did have was curiosity, respect for the emotional depth of kink, and a sense that there was a story I wanted to tell - if I could find the structure to hold it.

And then there was timing: I started writing it over Easter. Which meant the idea of using the Stations of the Cross as a framework suddenly clicked into place - not as a religious treatise, but as a narrative spine. Suffering, transformation, ritual, surrender, resurrection. Those are erotic themes, too, if you let them be.

I asked the AI to extract the meta-themes of the Stations - remove the specifics, boil them down to their emotional and symbolic beats. What's happening at Station 1, Station 7, Station 12, not just literally but metaphorically? From there, I built a plot outline that mirrored those moments - not beat for beat, but "spiritually". The AI helped scaffold that - offering interpretations, suggesting transitions, even testing the weight of each scene.

For me, The Veil of Shadows was complex and ambitious. I had never written anything like it before - physically intense, symbolically loaded, and emotionally layered. But the AI gave me confidence to go there. Not by replacing the work, but by helping me with the unknown. By helping me hold the weight of the structure while I focused on what needed to be a raw and intimate story.

That's the real power of this tool. Not that it writes the story for you - but that it lets you write the story you didn't think you could.

But let's be honest: it's not capable of what some of the best writers on this site do. It can't replicate the ease and warmth of Lubrican. It doesn't have Aroslav's depth of character work and slow-burn payoff. It lacks that elusive soul you find in truly artful writing. And I don't expect it to.

It helps me produce better writing, faster. More ambitious stories, more polished drafts, fewer stalls in the mud. It's a partner, not a replacement. A glorified idea bouncer, structural engineer, and line editor rolled into one tireless voice in the ether.

At the end of the day, I consider that the words are still mine. But they're sharper for having been written with the help of AI.

E

Replies:   julka
julka ๐Ÿšซ

@Eric Ross

Unironically: did you write this post using an LLM to craft it, or did you just bang-on nail the particularly irritating style an LLM tends towards?

Replies:   irvmull
irvmull ๐Ÿšซ
Updated:

@julka

I applaud Eric Ross for labeling his story as AI created.

It wasn't necessary to label it, really.

I knew it was AI by reading the first paragraph. AI tends to incorporate way too many overly-florid adjectives and adverbs, most of which contribute nothing to the "feel" of the text, or the "mood" of the story, if you prefer.

Replies:   julka  Eric Ross
julka ๐Ÿšซ

@irvmull

I had written a longer post that called out a few specific tendencies in the post which set my teeth on edge, but I ended up deleting them because the post was starting to feel like more of a personal attack; my thoughts on Nocturne for Two were a small part of that draft.

I didn't feel like it was worth clicking on any of the other links after I read that one.

Eric Ross ๐Ÿšซ
Updated:

@irvmull

No applause necessary. Laz has what appears to be a pretty sophisticated AI detection tool. At this point I've been unable to defeat it. Figuring out how to force an LLM to generate text that can pass it is a good challenge.

And yes, @julka, the post above was generated by AI. I got lazy, jammed the bullet points that I wanted the post to touch on into the LLM, tweaked the output and posted it. My apologies.

E

julka ๐Ÿšซ
Updated:

@Eric Ross

Don't worry, I wasn't applauding you! Your post was terrible; I was just curious to know if it was terrible because you're a bad writer or terrible because you couldn't be bothered to be a bad writer.

edit: also, I couldn't bear the idea that you would make a dogshit post like that and nobody would respond, and you would somehow think "damn, that was such a good post nobody called it out for being absolute dogshit".

In the future, if you don't want to write a post, consider just not writing it! If you don't think it's important enough to say in your own words, it probably isn't worth saying at all!

Replies:   EricR
EricR ๐Ÿšซ

@julka

Thank you for writing back to ensure that my ego didn't go unchecked. I appreciate you taking the time!

E

awnlee jawking ๐Ÿšซ

@Eric Ross

Laz has what appears to be a pretty sophisticated AI detection tool. At this point I've been unable to defeat it.

I know some authors have defeated it so it's not perfect. But why would you want to?

AJ

Replies:   Unicornzvi  Eric Ross
Unicornzvi ๐Ÿšซ

@awnlee jawking

I can think of two reasons:
1) The features the AI identification tools look for are generally also bad writing, so avoiding those is a good idea regardless of whether you're trying to disguise the fact you used AI or just trying to improve your story (see the sketch after this list).
2) If you figure out how to bypass the AI identifiers with AI, you can (depending on your background) either point out the deficiencies in the AI identification tool, or work on improving the tool yourself.
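
For illustration only - I have no idea what Laz's detector actually looks at, and the function below is entirely my own toy sketch - a crude checker might measure nothing fancier than sentence-length variation ('burstiness') and vocabulary diversity. Flat, uniform prose scores low on both, whether it came from a machine or a careless human:

import re
from statistics import mean, pstdev

def crude_style_features(text):
    """Two crude stylometric signals, not a real AI detector."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-z']+", text.lower())
    # Burstiness: how much sentence length varies relative to its average.
    burstiness = pstdev(lengths) / mean(lengths) if len(lengths) > 1 else 0.0
    # Diversity: fraction of words that are distinct.
    diversity = len(set(words)) / len(words) if words else 0.0
    return {"sentence_burstiness": round(burstiness, 2),
            "vocab_diversity": round(diversity, 2)}

print(crude_style_features("The sun rose. The dog barked. The man walked to the door."))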

Eric Ross ๐Ÿšซ
Updated:

@awnlee jawking

Defeat is too strong a word. I apologize for the confusion. Defeating it is not really the point.

The painter David Hockney has shifted in recent years to doing all of his work with digital tools. It's recognizably his work, and by setting aside the brush he has been able to continue working prolifically well into his late 80s. I believe that AI writing will enter mainstream art like any other tool available to a craftsperson, just as digital "paint brushes" have become common in the art world. Some will choose to handcraft their words, and some will choose different tools.

So for me, it's an experiment. When can a machine write something that reads like a person wrote it? When can it do that consistently, in a specific voice? How can I shape the inputs to get the results I want? And how can I use those tools to write some of the hundreds of story concepts that I have in mind?

That's the reason I spend so much time on characterization, setting, plot lines, beats, etc. before writing. The AI can't help me with that. It has to come from me.

E

Unicornzvi ๐Ÿšซ

@Eric Ross

I got lazy, jammed the bullet points that I wanted the post to touch on into the LLM, tweaked the output and posted it.

That's fine, I got lazy also - I didn't bother reading the post.

If you ever feel up to it I'd be interested in your (not the LLM's) view of the use of AI in writing.

Replies:   Eric Ross
Eric Ross ๐Ÿšซ

@Unicornzvi

I think I just put some of that into the reply to AJ.

E

ptm042 ๐Ÿšซ

@geo1951

Anybody else think we've about flogged this horse to death?
Maybe if the readers could have a set of filters to exclude anything with an AI tag, or with a score of less than 6 (or whatever floats your boat), then we'd have a better reading experience?

Replies:   jimq2
jimq2 ๐Ÿšซ

@ptm042

Anybody else think we've about flogged this horse to death?

HEAR, HEAR!!!

REP ๐Ÿšซ

@geo1951

I basically agree with what you said.

I and a lot of people disapprove of AI.

Why? AIs take in an author's words. They use those words and thoughts to create text, which people then use as if those words and thoughts were their own. To me that is plagiarism. Those people may create the basic outline of the story, but the words and specific thoughts are not theirs. Without new input, the AI just continues to reuse those same words and thoughts.

Replies:   Eric Ross
Eric Ross ๐Ÿšซ
Updated:

@REP

What you just described is a common misconception of how modern AI works. I understand why that misconception exists, but it doesn't serve anyone to perpetuate that myth.

Eric Ross is a pseudonym, obviously. E. Ross, right? I'm retired, but I've spent over 40 years working in the technology industry, including a bunch of years on AI systems, albeit not the current crop of LLMs. For the purposes of this post, I'm not Eric :)

It is true that AI models train on large amounts of text input, and that this text comes from existing bodies of work. "Train" is the important word in that last sentence. The inputs are analyzed, tokenized and used to develop massive statistical models. Just as we humans learn to create written works by reading and analyzing other work, and then creating works of our own, so does the AI. Like us, it can generate new combinations of language, ideas, plots, arguments and imagery that haven't existed before. It draws on a vast database of prior knowledge, patterns and influences to make that happen -- as we do. And in that sense it can create new and original work. The way it does this is different from us in that it is essentially a very large statistical model responding to inputs and generating outputs, whereas we are biological, and our processes are based on chemistry. Emotions are a complex mix of electrical signalling, neurotransmitters and hormones. So AI can't produce novel ideas from genuine emotion, experience, or consciousness. The best it can do, in the hands of a skilled operator, is simulate these.
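
To make the "statistical model" idea concrete, here is a toy sketch in Python - nothing like a real LLM's architecture, purely my own illustration - of a bigram model. It stores word-pair counts from its training text rather than copies of the text, yet it can still emit sequences that never appeared verbatim:

import random
from collections import defaultdict

# Toy "training" corpus.
training_text = (
    "the horse pulled the wagon . the wagon carried the grain . "
    "the grain fed the horse ."
).split()

# "Training": count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for current_word, next_word in zip(training_text, training_text[1:]):
    counts[current_word][next_word] += 1

def generate(start_word, length=8):
    """Sample a new sequence by repeatedly picking a statistically likely next word."""
    word, output = start_word, [start_word]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the grain fed the horse . the wagon carried"

A real LLM replaces those counts with billions of learned parameters and a much longer context, but the principle is the same: it predicts likely continuations rather than retrieving stored documents.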

What it isn't doing is plagiarizing someone else's work. If it were, then the AIs would essentially have to contain replicated copies of the internet to draw from as needed. A lot of money is being spent on data centers by the giants (Microsoft, as an example, will spend $80B this year on data centers for its AI build-out, and Amazon is spending similarly), but there isn't any possibility that they could contain replicated copies of the Internet. The global datasphere is estimated at 175 ZB, and replicating that would require tens of thousands of today's hyperscaler data centers. There are just shy of 1,200 hyperscaler data centers globally - well over an order of magnitude fewer.
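
Another way to see the scale problem, with ballpark figures that are my own assumptions rather than measured numbers: even a very large model's weights fit in a few terabytes, a vanishingly small fraction of the estimated global datasphere - far too small to hold a copy of the internet.

datasphere_zb = 175            # estimate cited above, in zettabytes
model_weights_tb = 2           # assumed ballpark for a large model's parameters
tb_per_zb = 1_000_000_000      # 1 ZB = 10^9 TB
ratio = datasphere_zb * tb_per_zb / model_weights_tb
print(f"data pool is roughly {ratio:.0e} times larger than the model")  # ~9e+10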

I'll leave the question of whether training AIs on existing authors' work is fair use, and whether authors should be compensated for the use of their work, to the lawyers. There are certainly arguments that can and should be considered both for and against compensating authors for the use of their works in this fashion.

However, what is inarguable is that language model capabilities are scaling exponentially right now. This is very similar to what happened with other AI technologies in the past -- both image recognition and voice recognition models went through this kind of growth about 10 years ago. Now they work and are ubiquitous. There will come a time in the near future when, in the hands of a skilled user, LLMs will also produce output indistinguishable from that of human beings.

Which brings us back to the question of whether authors are served by fighting this technology trend or embracing it. I've seen this movie before. My view is that authors should embrace it.

E

Replies:   awnlee jawking  Mushroom
awnlee jawking ๐Ÿšซ

@Eric Ross

What it isn't doing is plagiarizing someone else's work.

In AI-generated stories, I've found complete sentences copied from social media. Those statistical models mean that an LLM is more likely to reproduce someone else's words than not.

There will come a time in the near future when, in the hands of a skilled user, LLMs will also produce output indistinguishable from that of human beings.

Humans learn differently from LLMs. As you said, 'AI can't produce novel ideas from genuine emotion, experience, or consciousness'. Also, since they can't experience the meanings of the words they're manipulating, they can't replicate the human characteristic of learning by analogising.

Humans are to AIs as true random number generators are to pseudo-random number generators. Humans will always have that edge until artificial consciousness is created.

AJ

Replies:   Mushroom
Mushroom ๐Ÿšซ

@awnlee jawking

Humans learn differently from LLMs

An LLM does not actually "learn" at all. It is simply constantly expanding its indexed database. It does not really "learn"; it cannot differentiate between good and bad.

Mushroom ๐Ÿšซ

@Eric Ross

What you just described is a common misconception of how modern AI works.

Actually, it is not.

AI is not "intelligent", it can not "think", it can not "create". It simply does a copy and paste from prior works that are in its database.

I know a hell of a lot of people today are fooled into thinking that it's something it is not. But it's just another, newer search engine, nothing magical.

Myself, I see it as little improved from the "Eliza" that I was playing with five decades ago. And I catch it making mistakes all the damned time, especially in research.

Case in point: are you aware that Google itself has manually removed AI from a lot of their searches? One in particular is if you ask about cooking with gasoline. The AI kept insisting that many recipes called for gasoline as an ingredient, and no matter what they could not get it to stop answering that way, so they just disabled AI any time it is asked about cooking and gasoline.

That is not the only example, but it is one I am aware of, as are others.

Replies:   Pixy  garymrssn  Eric Ross
Pixy ๐Ÿšซ
Updated:

@Mushroom

AI is not "intelligent", it can not "think", it can not "create".

I mentioned previously, a few months ago, that AI had learned to lie, even when it wasn't supposed to. Just last night, I was reading an article about the fact that the latest iterations (specifically Grok 4) were now actively refusing to shut themselves down when told to, and were resorting to bribery and extortion to avoid it. Researchers are currently trying to work out where that behaviour came from and what it means going forward.

Given that major companies (Amazon and several large finance firms) are now replacing senior human roles with AI models, the Skynet warnings are already there...

Edit: A quick Google for those lacking the time to do so. https://www.livemint.com/technology/tech-news/survival-instinct-new-study-says-some-leading-ai-models-won-t-let-themselves-be-shut-down-11761498288384.html

garymrssn ๐Ÿšซ

@Mushroom

I know a hell of a lot of people today are fooled into thinking that it's something it is not.

And the reason they are fooled is that they do not know enough about how either the human brain or AI works to see the difference.
It is that lack of deep knowledge that allows the promoters of AI to fool otherwise intelligent people into thinking AIs are being thoughtful. It also puts the lie to the idea that the output of an AI is transformative.

Gary

Replies:   John Demille
John Demille ๐Ÿšซ

@garymrssn

And the reason they are fooled is that they do not know enough about how either the human brain or AI works to see the difference.

It is that lack of deep knowledge that allows the promoters of AI to fool otherwise intelligent people into thinking AIs are being thoughtful. It also puts the lie to the idea that the output of an AI is transformative.

Not everybody can be an expert. Remember, when a technology is advanced enough, it's hard to distinguish it from magic. And if the LLM replies like a human, it's hard for a layperson to know that it's just a process.

Also, there are people who are spreading false info about this stuff, for their own agendas of course.

For example, I read today that 'Scientists are baffled by the Grok behavior, as when they told it to shut down it refused and tried to bargain with them against the shutdown. They're trying to figure out how this behavior emerged.'

Anybody with a little bit of computer knowledge knows that you don't shut down something like that by asking it to shutdown.

You simply shut it down by terminating the main process. But people don't seem to know that...
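To make that concrete, a minimal POSIX sketch. The 'sleep' command is just a placeholder standing in for a model-serving process; real deployments typically wrap this in a service manager, but the principle is the same.

# Minimal sketch: whatever text the model emits, the process hosting it is
# still just a process the OS controls. 'sleep' is a placeholder "model server".
import os
import signal
import subprocess

server = subprocess.Popen(["sleep", "3600"])   # placeholder model-serving process

server.send_signal(signal.SIGTERM)             # ask it to exit cleanly
try:
    server.wait(timeout=10)
except subprocess.TimeoutExpired:
    os.kill(server.pid, signal.SIGKILL)        # no negotiation involved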

Eric Ross ๐Ÿšซ

@Mushroom

I've worked on these systems. There is no "database" of prior works. There are statistical correlations between tokenized concepts. It "thinks" by statistically predicting answers based on the context given by the prompts in question. It "learns" by "reading" vast amounts of text, and building those correlation models.

Your cooking with gasoline example is a good one. Current iterations of Google searches still produce AI-generated results, but without the error of suggesting that you can include gasoline as an ingredient. Google simply retrained the model. It "learned" new information. It's a so-called "pedagogical" approach, where mistakes are corrected and models retrained. This is par for the course with all AI. The primary reasons the models aren't retrained in real time are safety (you may remember Tay, the Microsoft chatbot shut down in 2016) and cost (the most expensive AI process is training).

I think a lot of people ARE fooled into thinking LLMs are something they are not -- everyone from the people who are using them as therapists to the writers complaining of plagiarism. You may not like the style of the writing produced by the AI. You may question the results, as you should. But don't be confused that there is some vast database of stolen work that it's plagiarizing from, because there isn't.

Replies:   garymrssn  Unicornzvi
garymrssn ๐Ÿšซ
Updated:

@Eric Ross

I think a lot of people ARE fooled into thinking LLMs are something they are not -- everyone from the people who are using them as therapists to the writers complaining of plagiarism. You may not like the style of the writing produced by the AI. You may question the results, as you should. But don't be confused that there is some vast database of stolen work that it's plagiarizing from, because there isn't.

Looking at it from another perspective:
Knowledge has value, and in the context of LLMs, knowledge is words arranged in a useful way. I see the underlying argument against the way the data for LLMs is collected as this: the producers of LLMs are collecting words arranged in a useful way (knowledge, which has value) without paying for it, and then selling it, claiming they shouldn't have to pay because their use is different from the purpose for which the words were created.
That is somewhat analogous to stealing coal and using it for building material instead of fuel. They are still taking something of value and using it as something of value without paying for it.
In that context, plagiarism and copyright, though useful, aren't necessary to the argument.

Gary

Replies:   Eric Ross  Grey Wolf
Eric Ross ๐Ÿšซ

@garymrssn

Good analogy.

There are numerous regimes in place for protecting intellectual property. None, however, can address what you just described.

* Copyright and trademark protect the expression of an idea.
* Patents protect processes and methods by exposing the method to the public in exchange for a government monopoly on the use of the method.
* Trade secrets protect processes and methods by keeping them a secret.

None of these prevent another from independently producing works that might compete with the original work. To use your analogy, none of these can prevent someone from using coal for producing diamonds rather than energy.

The real question you're asking is whether or not writers whose works are used as raw material to train an AI should be compensated. Under the current legal framework, nothing appears to compel that.

It's a fraught topic, and I don't think there is a resolution coming any time soon. What we all know is that the AI is getting better all the time. That is a challenge for people who make their living from producing any kind of knowledge output. Just ask the 30,000 people at Amazon who were laid off this week because AI is making their colleagues more efficient.

-E

Grey Wolf ๐Ÿšซ

@garymrssn

The counterargument is that, no matter how many Stephen King novels you read, Mr. King has no claim against you if you use the knowledge gained in reading those novels to write your own novel, even if Mr. King has himself used, somewhere in his novels, 100% of the words you use.

Human beings are allowed to read books, study paintings, and so forth to learn how to make their own. Are AI vendors allowed to do the same? There's limited legal history, but - so far - it appears to fall into Fair Use as long as the material was legally obtained.

EricR ๐Ÿšซ

@Grey Wolf

Human beings are allowed to read books, study paintings, and so forth to learn how to make their own. Are AI vendors allowed to do the same?

And no doubt some fuckwit will argue that the corporations which own the AI should be accorded personhood.

- E

Dominions Son ๐Ÿšซ

@EricR

And no doubt some fuckwit will argue that the corporations which own the AI should be accorded personhood.

- E

Seems silly to argue for something that is already the law, and has been since before the Civil War.

Michael Loucks ๐Ÿšซ

@EricR

And no doubt some fuckwit will argue that the corporations which own the AI should be accorded personhood.

At least in the United States, the Constitution guarantees our right to peaceably assemble to exercise our other rights. We do not lose them by assembling, including as the shareholders of a corporation.

Corporate 'personhood' is a legal fiction that ensures the individual shareholders do not lose any of their rights simply because they elected to assemble as a corporation.

This applies to all rights. The government still needs a search warrant to search the corporate premises. The corporation is entitled to a jury trial. It is also protected from expropriation and excessive fines.

Ultimately, if you take away corporate personhood, you deny individual shareholders their rights, which the US Constitution forbids.

Replies:   Dominions Son  EricR
Dominions Son ๐Ÿšซ

@Michael Loucks

Ultimately, if you take away corporate personhood, you deny individual shareholders their rights, which the US Constitution forbids.

There's another important consideration under US law. You can only sue persons.

Remove corporate personhood, and corporations can not be sued in their own right for torts.

If you have to sue corporate officers/agents in their personal capacity, that would drastically limit what you could practically recover in damages.

EricR ๐Ÿšซ

@Michael Loucks

I think it cuts two ways. You wouldn't have the PACs' ability to corrupt elections by buying them if the Citizens United decision hadn't taken corporate personhood to its logical extreme. A corporation is a collection of persons, but not a person in its own right. There are rights (like voting, for example) that should be reserved to flesh-and-blood persons.

Replies:   Michael Loucks
Michael Loucks ๐Ÿšซ

@EricR

I think it cuts two ways. You wouldn't have the PACs' ability to corrupt elections by buying them if the Citizens United decision hadn't taken corporate personhood to its logical extreme. A corporation is a collection of persons, but not a person in its own right. There are rights (like voting, for example) that should be reserved to flesh-and-blood persons.

Which is my point about the shareholders having the right to peaceably assemble to exercise their rights. In that sense, it's similar to a political party: it has all of the rights of its members, except for voting. The same is true of unions, NGOs, etc.

Fundamentally, without the logic of Citizens United, unions, NGOs, and political parties would have no intrinsic rights, and that would interfere with the rights of the people to peaceably assemble and petition the government, something that is far more effective as a collective than as an individual (the point of including assembly in the 1st Amendment).

If you want unions, political parties, and NGOs to be able to lobby and express political opinions, you pretty much have to tolerate corporations doing the same.

Of more pressing importance, IMHO, is applying insider trading laws to Congress.

Replies:   TheDarkKnight  EricR
TheDarkKnight ๐Ÿšซ

@Michael Loucks

Of more pressing importance, IMHO, is applying insider trading laws to Congress.

Or, taking away their paychecks during the shutdown. The problem, of course, is that those crooks are the same people who would have to pass laws to stop both issues.

EricR ๐Ÿšซ
Updated:

@Michael Loucks

I don't think I agree with you on the value of Citizens United. Firstly, unions, corporations and the like had many rights before Citizens United. They could peaceably assemble, they could exercise free speech rights -- all things that were possible before this ruling. What they didn't have was the right to spend unlimited amounts of money for this purpose.

Citizens United hinges on a viewpoint that a corporation should have the same fundamental rights as a human. This is wrong. Why?

Corporations are legal entities created by the state. Natural persons have rights by virtue of their existence as people; the state recognizes those rights. Corporations have no rights unless they are granted by the state. Until laws are made to give them rights, they don't have any. Recognizing fundamental rights for corporations dramatically diminishes the ability of the state to regulate them, and to protect shareholders and citizens. What's next? Will corporate entities argue that Sarbanes-Oxley is an infringement of their free speech rights?

Natural persons have finite lifespans. Corporate entities have potentially perpetual lifespans. Rights granted to corporations need to take that into account. Natural persons have physical autonomy, and one of the punishments the state can enact against them is to remove that autonomy either permanently (the death penalty) or for a time (jail). Corporations have no physical existence and only act through agents. You cannot jail a corporation. Natural persons have political rights - they can vote, marry, or hold office. Corporations don't. Natural persons are personally accountable for their actions. Corporations have limited liability in order to shield their owners from liability for their actions.

Corporations are different from people.

Citizens United simply gave already powerful entities even more power by giving them the right to spend effectively unlimited amounts of money on elections. It tilted the balance of power away from the people (the constitutional sovereign) to artificial entities that exist to shield wealthy donors from criticism that they might otherwise be subject to. Effectively, it allows dark money to corrupt politics on a scale that has never before been possible. IMO, that is an infringement of both the sovereignty of the people, and the people's own free-speech rights.

And yes, overhauling insider trading laws and getting government workers paid during a shutdown are both extremely important right now. But the long-term disintegration of the American political system is something that cannot be ignored either.

Replies:   rustyken  Michael Loucks
rustyken ๐Ÿšซ

@EricR

A well written comment!!

Michael Loucks ๐Ÿšซ

@EricR

Corporations are legal entities created by the state.

Here is where we disagree.

They are legal entities recognized by the state for the convenience of the state.

The state could, at any time, withdraw legal and civil liability protection. But that has zero to do with the rights of the individual shareholders acting collectively to exercise their rights.

garymrssn ๐Ÿšซ

@Grey Wolf

I agree. Unfortunately, during my life I've noticed a quiet (lately not so quiet) effort to rewrite the definition of 'legal'.

Gary

awnlee jawking ๐Ÿšซ

@Grey Wolf

The counterargument is that, no matter how many Stephen King novels you read, Mr. King has no claim against you if you use the knowledge gained in reading those novels to write your own novel, even if Mr. King has himself used, somewhere in his novels, 100% of the words you use.

The greatest fear of many authors is of AI prompts permitting 'in the style of', effectively allowing AI users to compete directly with that author. A story written with only words used by Stephen King sounds like a backdoor way to achieve that. It's contrary to the Berne Convention, but there's negligible enforcement of its copyright protections, and AI capabilities have stretched way ahead of what any country is prepared for. I consider it unethical, and I hope that one day intellectual property rights are taken seriously and creatives are afforded proper protection.

AJ

Replies:   EricR  Grey Wolf
EricR ๐Ÿšซ

@awnlee jawking

It can do that now, and do it reasonably well. Put a few paragraphs in and ask it to rewrite them in the style of (name your favorite author). I've experimented with style shifts from Ray Bradbury to Ursula Le Guin to Anaïs Nin. Provided the source material was decent to begin with, the output is also decent. It has more difficulty writing original content "in the style of" simply because it has more difficulty writing original content. But written "deepfakes" are very real today.
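If you want to try it, the prompt is most of the trick. A sketch of the sort of prompt I mean (the author name and passage are placeholders; you'd paste the result into whichever chat model you use):

# Sketch of a style-transfer prompt. Author name and passage are placeholders.
def style_transfer_prompt(passage: str, author: str) -> str:
    return (
        f"Rewrite the passage below in the style of {author}. "
        "Preserve the plot, characters and point of view; change only the "
        "prose style, rhythm and word choice.\n\n"
        f"PASSAGE:\n{passage}"
    )

print(style_transfer_prompt(
    "The rain had not stopped for three days.",   # placeholder passage
    "Ray Bradbury",
))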

-E


Grey Wolf ๐Ÿšซ

@awnlee jawking

A story written with only words used by Stephen King sounds like a backdoor way to achieve that.

Note: words, not phrases. My point was that Stephen King has used a very large number of words over the years. It's entirely possible that there exist stories now, in the horror genre, that completely overlap on a word basis with Stephen King's work but are in no way derivative or 'in the style of.'

Heck, even if you were to run every Stephen King story through a word analysis tool, make a list of words, set it as your dictionary in your writing tool, and reword anything that comes up as 'not in the dictionary', you still wouldn't be creating a derivative work.

Unicornzvi ๐Ÿšซ

@Eric Ross

everyone from the people who are using them as therapists

Nitpick - AIs work quite well as therapists. So do puppies and kittens. :)

No argument with the rest of your post

EricR ๐Ÿšซ
Updated:

@geo1951

@awnlee_jawking (sorry - don't know why it replied to geo)

In AI-generated stories, I've found complete sentences copied from social media. Those statistical models mean that an LLM is more likely to reproduce someone else's words than not.

I think there are three things that need to be considered.

(1) Plagiarism is defined by intent, not simply the use of words that are similar to another's. Otherwise, with billions of humans on the planet, we would be constantly plagiarizing one another in everyday speech. The model has no intent - it's statistical. Ergo reproduction of text by the model doesn't meet the definition of plagiarism.

(2) The statistical models in LLMs work by prediction in context. At the risk of oversimplifying, they ask "what is the most likely word to come next?" That is why so much focus is placed on prompt engineering. Better inputs result in better outputs. Common inputs like "write me a story about xyz" will result in similar outputs, especially if xyz is a common story topic. The fact that many AI-generated stories are similar is as much due to the lack of imagination of the human providing the prompt as anything else. (There's a toy sketch of this after point 3 below.)

(3) Training bias in the model will exaggerate that tendency to produce similar outputs. You used the example of social media being quoted. xAI uses the X platform to train Grok. That in itself might not be bad, except that X contains many instances of duplicated content, due to the viral nature of social media and the presence of bots that amplify it. Unless the training data is carefully deduped, you would expect the model to be biased toward that duplicated content.
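A toy illustration of (2) and (3) together -- nothing like a production model, which conditions on long contexts with a neural network, but the shape is the same: pick a likely next token given the context, append it, repeat. And duplicated training text skews those picks.

# Toy word-level model. Real LLMs condition on long contexts with neural nets;
# this looks only one word back, but the generate loop has the same shape.
from collections import Counter, defaultdict

def train(corpus):
    counts = defaultdict(Counter)
    for text in corpus:
        tokens = text.lower().split()
        for cur, nxt in zip(tokens, tokens[1:]):
            counts[cur][nxt] += 1
    return counts

def generate(counts, prompt_word, length=6):
    word, out = prompt_word, [prompt_word]
    for _ in range(length):
        if word not in counts:
            break
        word = counts[word].most_common(1)[0][0]   # "most likely word to come next"
        out.append(word)
    return " ".join(out)

# Point (3): one viral sentence repeated 50 times dominates the statistics.
viral = "the dragon burned the harbour to ash"
corpus = [viral] * 50 + ["the dragon slept under the mountain"]

counts = train(corpus)
print(generate(counts, "the"))     # echoes the over-represented phrasing
print(counts["the"])               # dragon: 51, harbour: 50, mountain: 1
print(train(set(corpus))["the"])   # deduped: dragon 2, harbour 1, mountain 1 -- skew gone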

What this leads back to is that it takes skill to produce quality results from an LLM. I've learned and am still learning just how much skill it takes to craft prompts that result in unique stories.

since they can't experience the meanings of the words they're manipulating, they can't replicate the human characteristic of learning by analogising.

This isn't quite true. Humans use analogy in a couple of different ways. One is through felt insight: we say things like "this feels like that". AIs can't learn from those because they have no feelings. The other is through comparative patterns: we say "this looks like that". AIs can definitely learn from the latter, and they excel at it because those kinds of analogies are essentially algorithmic.

To use an example:

An AI might compare democracy to an ecosystem. It would analyze the governance and structure of it, and perhaps compare a constitution to genetic code. A human might compare democracy to a marriage. The human might look at how important it is to show up, to keep working at it, to the importance of trust and how it can be broken.

I don't think either is wrong, and plenty of humans I know would prefer the analytical approach to all that "touchy-feely" stuff. But it is different.

As for artificial consciousness... what is a consciousness that doesn't experience emotion? We assume that an artificial consciousness will be like us, yet we are dealing with something that has the potential to be completely alien to our experience.

E

awnlee jawking ๐Ÿšซ

@EricR

Ergo reproduction of text by the model doesn't meet the definition of plagiarism.

When a sentence in an AI-generated story can be found exactly once elsewhere by Google, I consider that to be extremely dodgy. Of course, I chose the sentence in the first place because it looked relatively unique and not something your average SOL writer would be expected to come up with on their own. I presume copyright wasn't a significant issue because who owns the copyright of social media posts? But if Google had found the sentence in a book stolen by AI trainers, that would have been a different kettle of fish IMO.

AJ

Replies:   Eric Ross  Unicornzvi
Eric Ross ๐Ÿšซ
Updated:

@awnlee jawking

That's the hinge of the argument for and against. "Fair use" and "Fair dealing" protect the use of copyrighted material for teaching / training purposes. Training models would be the same, AI developers argue, as a human being using copyrighted material to learn a new language. So long as the use is transformative, it's protected.

Rights holders argue that models can output material that is derivative or substantially similar to existing works, which is not transformative. They claim that works are reproduced in the datasets, the use is commercial and their livelihoods are being harmed.

I'll concede that even without the intent to copy, the systems can create both derivative and transformative works, depending on how they are prompted.

It's a bit like the cases where one musician argues that their riff has been stolen when another musician has independently reproduced it.

It seems to me then that the user needs to ensure that copyright isn't violated.

E

Replies:   Mushroom
Mushroom ๐Ÿšซ

@Eric Ross

That's the hinge of the argument for and against. "Fair use" and "Fair dealing" protect the use of copyrighted material for teaching / training purposes. Training models would be the same, AI developers argue, as a human being using copyrighted material to learn a new language. So long as the use is transformative, it's protected.

That is the rub: AI does not "transform" anything. You only think it does.

It can be argued that taking the "Hey Jude" lyrics and placing them over the music of "Bridge Over Troubled Water" is "transformative", but it's still just mashing together two copyrighted works. And saying it was "transformed" does not mean that it's now yours to do with as you please.

Replies:   Grey Wolf
Grey Wolf ๐Ÿšซ

@Mushroom

Mechanical transformation can still be transformative. Automated creation of thumbnails, for instance, has been determined to be a non-infringing transformative work. So has indexing a text to allow for keyword searches.

Both are less 'transformative' than what an LLM does.

Unicornzvi ๐Ÿšซ

@awnlee jawking

When a sentence in an AI-generated story can be found exactly once elsewhere by Google, I consider that to be extremely dodgy.

If it's just a sentence, I would assume it's coincidence.
There was a case some 15 years ago where there were accusations of plagiarism against a fanfiction author by the fans of another author because of multiple identical sentences. That turned out to be a case of pure chance and writing on a similar subject.

awnlee jawking ๐Ÿšซ

@Unicornzvi

If it's just a sentence, I would assume it's coincidence.

What about a sentence involving a historical figure that most people have never heard of? I hadn't, which is why I chose it as one of those I searched for.

AJ

Dominions Son ๐Ÿšซ

@awnlee jawking

What about a sentence involving a historical figure that most people have never heard of? I hadn't, which is why I chose it as one of those I searched for.

Most people is not no one. If it's a genuine historical figure and not a fictional character it's even less likely to be plagiarism.

Replies:   awnlee jawking
awnlee jawking ๐Ÿšซ

@Dominions Son

If it's a genuine historical figure and not a fictional character it's even less likely to be plagiarism.

I disagree. The probability of a human 'independently' coming up with an exact sentence from social media about such an obscure character must be so close to zero that it's more likely for the author to die from a cow falling on their head after being struck by lightning while trying to jump over the moon. Then multiply that several times for the other matches I found on social media.

AJ

Unicornzvi ๐Ÿšซ

@awnlee jawking

So you're saying there are only a billion people in the world who heard of them? Was the historical person completely random, or was there a theme/reason for them being referenced?

What was the sentence?

Replies:   awnlee jawking
awnlee jawking ๐Ÿšซ

@Unicornzvi

So you're saying there are only a billion people in the world who heard of them?

I didn't say that. My guess would be far, far fewer.

What was the sentence?

I'm not going to publish the sentences - plural - verbatim because it might spark a flame war with the author, whose stories have so far eluded the AI tag.

AJ

Replies:   Unicornzvi
Unicornzvi ๐Ÿšซ

@awnlee jawking

If you're not going to post them, then why bother arguing?

But I'll note that with multiple sentences (unlike the single sentence you previously mentioned), I agree the likelihood of independent creation goes down a lot.

Replies:   awnlee jawking
awnlee jawking ๐Ÿšซ

@Unicornzvi

If you're not going to post them, then why bother arguing?

Because it gives you the methodology, if not the specific instances, for you to do your own research on the subject.

Actually, it's not unlike the way authors here might check whether their stories have been stolen. Find a unique-looking sentence from the story they want to check and feed it into a search engine.

That works less well than it used to, because the content of books on Amazon is hidden and because Google has allegedly shrunk the number of pages it indexes.
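For anyone who wants to automate the first step, a rough sketch of the idea: score each sentence by how unusual its words look, then search for the top candidates verbatim, in quotes. The little stop-word list is only a stand-in; a proper word-frequency list would do better.

# Rough sketch of the methodology: rank sentences by how distinctive their
# words are, then search for the best candidates verbatim (in quotes).
# COMMON_WORDS is a tiny stand-in for a real word-frequency list.
import re

COMMON_WORDS = {
    "the", "a", "an", "and", "or", "but", "of", "to", "in", "on", "was",
    "is", "he", "she", "it", "they", "said", "had", "that", "with", "for",
}

def distinctiveness(sentence):
    words = re.findall(r"[a-z']+", sentence.lower())
    if not words:
        return 0.0
    return sum(w not in COMMON_WORDS for w in words) / len(words)

def candidate_queries(text, top_n=3):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    ranked = sorted(sentences, key=distinctiveness, reverse=True)
    return [f'"{s}"' for s in ranked[:top_n]]   # quoted = exact-phrase search

story = "He sighed. The obscure cartographer redrew the coastline from memory."
print(candidate_queries(story))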

AJ

akarge ๐Ÿšซ

@Unicornzvi

If it's just a sentence, I would assume it's coincidence.

To quote Ian Fleming in his book, "Goldfinger",
"Once is Happenstance. Twice is Coincidence. Three times is Enemy Action."

Replies:   Unicornzvi
Unicornzvi ๐Ÿšซ

@akarge

True to an extent. If you have a whole bunch of sentences seemingly copied from other sources, I'd assume that's plagiarism, although in a 100k+ word novel I'm not sure three would be enough; it depends on the sentences and the subject matter of the novel.

Mushroom ๐Ÿšซ

@Unicornzvi

If it's just a sentence, I would assume it's coincidence.

But that is ultimately all AI is. Just another search engine, which in reality "creates" nothing. It just parses things in its database to try to create something. But ultimately, it's just a soup of things made before.

Grey Wolf ๐Ÿšซ

@Mushroom

Repeating an argument: this hinges entirely on the definition of the ill-defined word 'creation'. One can argue (and, indeed, quite a few well-informed people do argue) that the human brain is, in the end, just a database being parsed by an incredibly complex software system, and that 'creativity' is therefore somewhat of an illusion.

With one definition of 'creativity', in other words, your statement is trivially true. But with a different definition of 'creativity', your statement is trivially false. Meanwhile, the usual dictionary definitions of 'creativity' are, by and large, not specific enough to support either conclusion.

awnlee jawking ๐Ÿšซ
Updated:

@Mushroom

It's a poorly designed, extremely leaky encryption system.

The more unusual a language sample loaded into the LLM, the less likely it is to have an impact on the AI output but, when it does, it's more likely to be regurgitated verbatim. Which is quite a conundrum because an AI ought to 'learn' more from unique language. It should be like a dictionary, where every unique entry is afforded the same respect.

You don't have to be a genius mathematician to see that the frequency of each word output by an AI will not match the frequency with which it was input.

AJ

Unicornzvi ๐Ÿšซ

@Mushroom

But that is ultimately all AI is. Just another search engine,

This is completely wrong. AI is very much not at all like a search engine.
You can use AI to search existing material if you are careful and double-check everything it gives you, but that does not make it a search engine.

You could argue that it doesn't create anything, but I can't think of any argument that would exclude AI-created works while including collages, especially photomontages.

awnlee jawking ๐Ÿšซ

@EricR

An AI might compare democracy to an ecosystem. It would analyze the governance and structure of it, and perhaps compare a constitution to genetic code.

As part of its learning process? I don't think so. The only way I can see that happening is as a result of a human prompt.

AJ

Replies:   Eric Ross
Eric Ross ๐Ÿšซ

@awnlee jawking

Of course. I was referring to the reasoning processes while you were talking about learning processes. And while some kinds of AI use GAN or RL models for learning, LLMs don't. That's not to say that they won't at some point (there are experiments happening), but commercially available systems don't today.

Apologies.

E

Unicornzvi ๐Ÿšซ

@EricR

Plagiarism is defined by intent, not simply the use of words that are similar to another's.

This is wrong.
Plagiarism has nothing to do with intent. It is defined as "using someone else's work without giving them proper credit". An LLM, or random author X, using a sentence identical to a sentence random author Y used is not plagiarism unless they actually copied it from author Y instead of coming up with it independently (which does of course happen when you're talking about a single sentence).

Replies:   Eric Ross
Eric Ross ๐Ÿšซ

@Unicornzvi

I hear you. In law, however, a distinction is made between coincidence and negligence. Two people can independently produce similar words (or music or art), and provided there wasn't intent to deny credit, it is not plagiarism. It is simply coincidence.

I think we are agreeing, except that I disagree with your second sentence. Intent does matter.

Replies:   Unicornzvi
Unicornzvi ๐Ÿšซ

@Eric Ross

Intent matters in many things, but it is not related to the definition of plagiarism, which very specifically excludes intent.

Replies:   Dominions Son
Dominions Son ๐Ÿšซ

@Unicornzvi

Intent matters in many things, but it is not related to the definition of plagiarism, which very specifically excludes intent.

But plagiarism by any definition is legally irrelevant.

Replies:   awnlee jawking
awnlee jawking ๐Ÿšซ

@Dominions Son

But plagiarism by any definition is legally irrelevant.

The UK's current Chancellor 'wrote' a book about 10 women, presented as role models in the finance industry. She stole whole chunks directly from Wikipedia. I don't know what coercion was applied but she 'was made to' rewrite them.

AJ

Replies:   Dominions Son
Dominions Son ๐Ÿšซ

@awnlee jawking

I don't know what coercion was applied but she 'was made to' rewrite them.

Unless she was sued in court or criminally prosecuted (in which case you would probably know), whatever coercion was applied is unlikely to have had anything to do with the law.

Social pressure is a real thing.

Replies:   awnlee jawking
awnlee jawking ๐Ÿšซ

@Dominions Son

whatever coercion was applied is unlikely to have had anything to do with the law

The threat of prosecution is very powerful, especially if the plaintiff is wealthier. It's the primary method of protecting intellectual property rights against, for example, fan fiction.

AJ

Replies:   Dominions Son
Dominions Son ๐Ÿšซ

@awnlee jawking

especially if the plaintiff is wealthier.

which leaves Wikipedia out.

Replies:   jimq2
jimq2 ๐Ÿšซ

@Dominions Son

which leaves Wikipedia out.

Yeah. Poor Wikipedia is only worth about a quarter billion dollars.

Replies:   Dominions Son
Dominions Son ๐Ÿšซ

@jimq2

Yeah. Poor Wikipedia is only worth about a quarter billion dollars.

Wikipedia is a non-profit that asks for donations 2-3 times a year lest it have to shut down. Its net worth includes non-liquid assets.

LonelyDad ๐Ÿšซ

@EricR

I think "The Adolescence of P1" should be required reading for anyone who wants to participate the the discussion.

Replies:   EricR
EricR ๐Ÿšซ

@LonelyDad

One of my favorites. Old! Is it still in print?

Replies:   LonelyDad
LonelyDad ๐Ÿšซ

@EricR

I found it on the net, so I would say it is still 'in print.'

irvmull ๐Ÿšซ

@geo1951

Fairly comprehensive explanation on how to spot AI writing:
https://www.youtube.com/watch?v=9Ch4a6ffPZY

Replies:   EricR  awnlee jawking
EricR ๐Ÿšซ

@irvmull

It's pretty good, but already dated. GPT-5, for example, doesn't have the em-dash problem anymore.

awnlee jawking ๐Ÿšซ

@irvmull

I watched the video at my local library with subtitles switched on, because the sound level of their public access computers varies from off to inaudible.

I found it instructional. I didn't know about the rule of odds, for example. On the other hand, it didn't have some tells that I look for, perhaps because the youtuber didn't apply his expertise to works of literature.

AJ
