Inspired by several recent posts on the uses of LLM AIs:
AIs are notorious for "hallucinating": that is, generating fabricated content that "looks right" because it uses the correct formatting, style, and so on. Examples include AI-generated legal briefs containing case citations that are perfectly formatted but completely fictional.
So, imagine someone were the "victim" of a hallucination. That is, an AI generated a report that they were the owner of a property, a party to a court case, a convicted sex offender, the manager of a staggeringly wealthy hedge fund, or the winner of a non-existent lottery.
What happens?