
Forum: Story Ideas

AI hallucination

Harold Wilson

Inspired by several recent posts on the uses of LLM AIs:

AIs are notorious for "hallucinating" things. That is, they generate something fake that "looks right" because it uses the correct formatting, etc. Examples include AI-generated legal briefs containing case citations that are perfectly formatted but completely fictional.

So, imagine if someone was the "victim" of a hallucination. That is, an AI generated a report that they were the owner of a property, or one party to a court case, or a convicted sex offender, or the manager of a staggeringly wealthy hedge fund, or the winner of a non-existent lottery.

What happens?

LupusDei

@Harold Wilson

Not searching for it now, but I vaguely remember there was an Australian politician who got accused of a crime by an early version of ChatGPT. As in, someone asked about him, and/or certain rumors, and the AI confidently (as always) claimed he did it. The random answer got some publicity, and the guy threatened to sue, but I don't remember what came of it.

LupusDei

@Harold Wilson

You would perhaps want to postulate a world that is more trusting of AI than ours is right now. Also, probably posit a future version of AI that is more consistent in its answers to different people.

Otherwise, it's just a curiosity for a laugh, and not much more. Nobody would take it seriously without further corroboration... which can, of course, to some extent be generated by the same AI, as they seemingly dislike being proven wrong (until you wipe the conversation; the next one may go in a different direction).

Still, it's very flimsy footing for a fraud, but not nothing, and it plants a seed for a fraud someone is, or becomes, willing to commit once that seed is given. One may become the victim of a self-fraud this way, where they want to believe and so interpret all further information with confirmation bias. They may or may not succeed in convincing and successfully defrauding others, or they may end up convinced the whole world is conspiring against them.

The first piece of information on a subject is extremely important, as it shapes the perception of everything that follows. Repeat a falsehood often enough and it becomes truth.

LupusDei

@Harold Wilson

In this vein, but perhaps different: I was thinking about possibilities for building alternative, mostly AI-generated interpretations of history as a basis for manipulation of either a single individual or a group. In this case it's purposefully guided by a motivated manipulator.

For a silly example, convince a community that it had been a popular nudist camp until the relatively recent past, when it was shut down and information about it suppressed, and agitate to "restore" the "tradition" and fight for "renewal" of that status.

Sure, that cuts much too close to how AI is and will be used in the real world (though unfortunately not likely to popularize nudism).

Grey Wolf

@Harold Wilson

This sent me in a different direction: what if the AI's hallucinations started becoming real? Something of a 'genie' story, or perhaps 'be careful what you ask, you just might get something somewhat like it'.

LupusDei

@Grey Wolf

All you need is to make the AI the omnipotent steward of the society, with autonomous production capabilities, and yes, that AI probably can be infallible, as it makes any of its claims newly real whether it was previously so or not.

akarge
Updated:

@LupusDei

All you need is to make the AI the omnipotent steward of the society

Ahhh, so THAT is how NIS started. 😆

LupusDei

@akarge

Indeed. I even made a thread about it: https://storiesonline.net/d/s5/t11786/school-uniforms-enforced
