
Forum: Author Hangout

AI Generated Images

DBActive

I've noticed that people are now adding (what they claim to be) AI generated images.
Are any of you concerned over the age of the people used by the AI to prepare those images? It's known that child porn has been used as a resource for AI training. Since many of these images are supposed to be of young girls, isn't it likely they will incorporate some of the child sexual abuse images? If nothing else, they potentially will use recognizable faces.
Also, I am not sure of the legal situation in the US or other countries on the virtual images.
Or, is this nothing to be concerned about?

Dominions Son

@DBActive

I've noticed that people are now adding (what they claim to be) AI generated images.

Are any of you concerned over the age of the people used by the AI to prepare those images?

No, I've worked with a few AI image generators. They all generate drawing-type images, not photo chops of real photographs. Nothing available online for public use is going to have the computing power to produce photorealistic images.

Also, the image generator from the company that makes ChatGPT has an auto censor to prevent the generation of any Not Safe For Work images. Most of the rest do too.

I know of one that doesn't have the auto censor (or at least it can be turned off). It produces anime-style cartoonish images. And it's not going to produce an NSFW image of a child unless you give it a prompt specifically designed to produce such an image. You are not going to get CSAM (child sexual abuse material) accidentally.

Also, I am not sure of the legal situation in the US or other countries on the virtual images.

In the US, in the late 1990s (I don't recall the exact year), Congress tried to outlaw "virtual child pornography", that is to say stories, drawn images, or images of adult models that look underage.

It was overturned by the US Supreme Court in 2002. The Supreme Court refused to extend the First Amendment exemption they had created for child pornography (to protect the children used to produce it) to anything that wasn't actual images of actual children.

Pixy

@Dominions Son

Nothing available online for public use is going to have the computing power to produce photorealistic images.

It's already here.

Lazeez Jiddan (Webmaster)

@Dominions Son

Nothing available online for public use is going to have the computing power to produce photorealistic images.

I generated this one just for you:

Realistic enough? It was generated in about 10 seconds.

[Attached image: image-0-2.jpg]

Dominions Son

@Lazeez Jiddan (Webmaster)

From where? I'd be interested in checking it out for myself.

Pixy

@Dominions Son

I want to know how Laz found a picture of me... 🤪

Switch Blayde

@Pixy

I want to know how Laz found a picture of me... 🤪

No, that's me after my gender conversion. :P

Lazeez Jiddan (Webmaster)

@Dominions Son

From where?

https://promptchan.ai/

It's made for porn. It even does one-second animations.

I tried another one, but I can't recall which one.

Joe Long

@Lazeez Jiddan (Webmaster)

I've used StarryAI to get nudes but not porn. I've made some really impressive images that are on my Discord. It accepts celebrity models. The only way to get small boobs is to use a real life celebrity with small boobs, like Emma Watson or Zendaya.

blurred

@Dominions Son

For a friend, obviously...

madnige

@Lazeez Jiddan (Webmaster)

Realistic enough?

Not really.

First thing that struck me is that the perspective of the beach doesn't match the perspective of the body

Fingers, both hands, too many fingertips

Left arm, elbow is back-to-front

Shadows don't match; deduced light direction differs between various shadows.

helmut_meukel

@madnige

Fingers, both hands, too many fingertips

Just a minor birth defect.
In the old times those babies were often killed outright. In the slightly more enlightened past the defect was 'corrected' surgically even if the additional digit was fully functional.

HM.

madnige

@helmut_meukel

But, it's not an extra digit in the picture, just an extra fingertip on each hand.

BTW, one of the early books I read was The Chrysalids, in which polydactyly is an important plot point. And, who can forget Count Rugen?

LupusDei

@madnige

Or "Silence of the Lambs" Hannibal Lecter who supposedly had 6 perfectly formed fingers on one hand (in the book, not sure about the movie).

But yes, such anomalies are the easiest dead giveaway of AI-generated images. Hands are especially difficult to get right, even for human artists. AI images often feature not only malformed hands and wrong numbers of fingers or toes, but also missing or extra limbs, amputations, limbs shared by characters in body contact, limbs with extra joints, joints with the wrong orientation, and so on. I have even seen otherwise very prettily rendered girls with two (or more) rows of boobs.

...There was an image circulating on social media, purportedly of a dead Palestinian boy hugging a cat, but besides slight malformations of his hands, the cat quite clearly had five legs...

Also, weird proportions: too-small or (much rarer) very large heads, elongated bodies (this often happens when specific clothing is requested along with a generalized outfit, so the girl gets two shirts one above the other and the figure is stretched to accommodate them), and so on.

For now, a trained eye can spot most generated images without much trouble, though already not always reliably.

helmut_meukel

@madnige

But, it's not an extra digit in the picture, just an extra fingertip on each hand.

You didn't read the Wikipedia article I linked, did you?

Most cases of polydactyly are only rudimentary digits, smaller, often without bones or joints.

Most commonly, the extra finger is rudimentary, consisting of an end phalanx with a nail, and connected to the hand with a small skin pedicle. Mostly one neurovascular bundle can be identified, with no tendons present in the extra digit.

Obviously the AI got it right. < grin >

HM.

Dominions Son

@helmut_meukel

Obviously the AI got it right. < grin >

Except that what's in the AI-generated image Lazeez posted isn't "rudimentary, consisting of an end phalanx with a nail, and connected to the hand with a small skin pedicle"; it's a second fingertip hanging off the side of a complete finger at the first joint in from the fingertip. It's not independently connected to the hand at all.

awnlee jawking

@madnige

But, it's not an extra digit in the picture, just an extra fingertip on each hand.

Perhaps the large language model hasn't yet correctly resolved the relationship between the fingertip token(s) and the finger token(s).

Anyone care to ask an AI to generate a picture of a man with two functional penises with a view to double penetration? :-)

AJ

Dominions Son

@awnlee jawking

Anyone care to ask an AI to generate a picture of a man with two functional penises with a view to double penetration? :-)

No, but I tried woman with four breasts and the "four breasts" just got ignored.

awnlee_jawking

@Dominions Son

No, but I tried woman with four breasts and the "four breasts" just got ignored.

I wonder whether a request for four nipples would fare any better. Or three breasts, as in 'Total Recall', where the AI might have something to work from.

AJ

Dominions Son

@awnlee_jawking

Or three breasts, as in 'Total Recall', where the AI might have something to work from.

There's plenty of furry art out there with females having four or even six breasts.

Mat Twassel

@awnlee jawking

It's easy to do duplicate dicks with Daz. Inspired by this suggestion, I concocted "Double Dicks", which just posted.

awnlee jawking

@Mat Twassel

Wow, that was quick. I just had to take a look, so you're assured of at least one reader.

With only one set of balls and one set of internal muscles, I would have thought that both dicks would orgasm at the same time ;-)

AJ

Mat Twassel

@awnlee jawking

With only one set of balls and one set of internal muscles, I would have thought that both dicks would orgasm at the same time ;-)

How do we know there's only one set of "internal muscles"? And could it not be that the testicles are internally divided in some way so that each orgasm is independent? I'm sure this deserves further study. Maybe Jamie can have another dream in which she and Naomi ascertain the limits of two-dick sex.

DBActive

@awnlee jawking

Mat Twassel must have read this - his "story" posted today "Double Dicked" answers your request.
Can't think of any other reason he posted it.

awnlee jawking

@DBActive

I vaguely remember a Sci Fi story in which an alien on Earth had a second dick which he could extrude at will. I have no idea of the author, title, or whether it was even on SOL.

AJ

palamedes

@awnlee jawking

I vaguely remember a Sci Fi story in which an alien on Earth had a second dick which he could extrude at will. I have no idea of the author, title, or whether it was even on SOL.

Pornucopia by Piers Anthony (a published book) has a character who is able to change out his dick like bits in a screwdriver to meet his needs and desires.

irvmull

@palamedes

Gives Snap-on Tools a new meaning.

Mat Twassel

@DBActive

Mat Twassel must have read this - his "story" posted today "Double Dicked" answers your request.
Can't think of any other reason he posted it.

Essentially correct. Maybe I'll do more with a double-dicked male.

I've experimented with AI images, with mixed results. Sometimes I've used my AI images for backgrounds.

Freyrs_stories

@helmut_meukel

I'm not really *that* old (I just feel it), but I've known no fewer than three people who have admitted to me that they have extra digits. And yes, one of them was from a region frequently inferred to be "six finger country".

One is a pianist, and I can't help but think of the dystopian drama "Gattaca", whose title is a valid nucleotide sequence IIRC. *ETA* Also, the twelve-fingered piano player there is a 'god child', i.e. not genetically screened or engineered. Imagine that in a moderated gene pool.

fool42

@madnige

These are common problems with AI picture generators. That's also why artists spend so much time early in their careers sketching hands and feet: they are hard to get right.

alohadave

@Dominions Son

No, I've worked with a few AI image generators. They all generate drawing-type images, not photo chops of real photographs. Nothing available online for public use is going to have the computing power to produce photorealistic images.

That is categorically false. Photorealistic output is commonplace and as easily generated as drawings.

tenyari

@Dominions Son

Also, the image generator from the company that makes ChatGPT has an auto censor to prevent the generation of any Not Safe For Work images. Most of the rest do too.

Installing Stable Diffusion on your local machine gets around any limits.

Going to civitAI will give you trained models of various kinds - from cartoon to photographic.

They aren't perfect, but they can get the job done for people not actively looking for flaws.

I've been exploring AI art a lot over the last year, and revived an old deviantart page to post things.

Most of what I have up there is from a locally installed version of stable diffusion and flaws in the tools are not hard to find, but the results are still entertaining.

Just look for 'tenyari' on deviantart. I have NOT been going for photoreal. In fact my two recent series were attempts at extreme results and then converting 'second life' screenshots into AI art.

AI art still suffers a lot with fingers. Often with intimate anatomy too. Though I have seen this starting to improve since I stopped making stuff.

Grey Wolf

@tenyari

Definitely. That's the thing: the genie is way, way out of the bottle, and there's no cramming it back in. The battle now is how to deal with AI-generated images.

Even were laws to be passed to slow things down, by the time they get there it's very likely that Stability AI will have released their next two major jumps in functionality (which roughly duplicate what's available in the closed-source products), and we're off to the races again.

Anyone wanting to clamp down on this needed to start years ago.

Yes, there are things that can be done (mandatory metadata, etc), but that's an adjustment, not a fundamental change.

Pixy

@DBActive

Looking at the quality and the almost lifelike nature of adult AI-generated porn, including 'deep fakes' of actual people, you would have to have been pretty naive to think that it would stop there and not eventually extend to minors.

The issue is that, at the moment, to produce child porn you need a child (or two), and to continue to produce content you require the child to stay quiet about the abuse they are being subjected to. That rarely happens, and it is still one of the biggest reasons why child porn rings are broken.

Next came mobile phones, and suddenly it was no longer studios creating the content but the children themselves. Whilst metadata (geolocation) in the photos leads to many of these children being located by LEAs, the problem now is that it's very hard to charge a child with proliferation of child pornography when the very child doing so is under the legal age of culpability.

Whilst the proliferation of children having unrestricted access to the internet with camera phones has caused an explosion in the quantity and availability of CP, it's going to be a mere drop in the ocean compared to the content that will be pumped out 24/7 by AI.

I genuinely don't believe people realise the shit storm on the horizon, and when I say horizon, I don't mean in five or ten years, I mean in the next twelve months.

Lazeez Jiddan (Webmaster)

@DBActive

Are any of you concerned over the age of the people used by the AI to prepare those images?

I sure hope that the companies creating the AI image-generation services are careful enough with their sources. But even if they're not, that's not a burden on the user of the service.

For now, every service that I tried forbids you from using any terms that cause the engine to generate CP; many, many terms are outright banned.

Although, since the software is mostly open source, I'm pretty positive that CP peddlers have already created their own unrestrained versions and are benefitting from them.

Dominions Son

@Lazeez Jiddan (Webmaster)

For now, every service that I tried forbids you from using any terms that cause the engine to generate CP; many, many terms are outright banned.

Aside from one, most of the ones I've tried won't allow any nudity, even of adults.

Pixy

@Dominions Son

I was reading an article online back in early 2023 detailing how certain individuals (I'm not sure 'hackers' is the correct term, so I will just stick with 'individuals') cracked the code of one of the main image generators at the time, rewrote it, and released it on the dark web.

Here are some articles you may find of interest, DS. Whilst not focusing on AI image generation (it seems to be an issue that much of the mainstream press is ignoring in the hope that it goes away...), they focus instead on LLMs. Since there is not a huge gulf between LLMs and image generation, the problems suffered by one are going to be suffered by the other...

https://www.bleepingcomputer.com/news/security/how-is-the-dark-web-reacting-to-the-ai-revolution/
https://www.infosecurityeurope.com/en-gb/blog/threat-vectors/generative-ai-dark-web-bots.html
https://www.tomsguide.com/news/new-darkbert-ai-was-trained-using-dark-web-data-from-hackers-and-cybercriminals

Pixy

@Dominions Son

This article is six months old and already basically obsolete.

https://techcrunch.com/2023/07/21/as-ai-porn-generators-get-better-the-stakes-raise/

Grey Wolf

@Dominions Son

Stable Diffusion can be run on a wide variety of consumer hardware (up to several generations old - anything with a video card from the past three generations or so). It can generate photorealistic images (including animations) and allows nudity. Plus, it's open source, so any nudity filter can and will be disabled. And it's easy to train a model (or part of a model - you don't need a full model) to add support for something (faces, people, clothes, poses, and on and on).

I have no idea whether it can create CSAM, but I would be extremely surprised if it couldn't, and since 'it' is a program and one or more models, all you would need is a different model to produce different outputs.

And there's an enormous public community sharing models...
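
To make that concrete, here's a minimal sketch of the local workflow using Hugging Face's diffusers library (the checkpoint name and settings are illustrative, not a recommendation):

    # pip install diffusers transformers torch
    import torch
    from diffusers import StableDiffusionPipeline

    # Load any compatible checkpoint; community models swap in the same way.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    # The bundled nudity/safety filter is an ordinary attribute on the
    # pipeline object; a local user can switch it off with one line.
    pipe.safety_checker = None

    image = pipe("a photorealistic portrait, golden hour").images[0]
    image.save("portrait.png")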

DBActive

@Lazeez Jiddan (Webmaster)

Some of the lifelike illustrations seem to be very young. And some cartoonish ones depict clearly underage characters. What will happen when they start looking like real people rather than cartoons?

DBActive

@DBActive

From a 12/21 article:

"On Wednesday, Stanford University's Cyber Policy Center published a report claiming it found more than 1,000 illegal images depicting child sexual abuse in an open-source database used to train popular image-generation tools like Stable Diffusion. LAION, the nonprofit that put the database together, used web crawling tools to create datasets with more than 5 billion links to online images, which companies can then pull from to use as training data for their own AI models."

My guess is that if they found 1000, there are millions.

awnlee jawking

@DBActive

Also, I am not sure of the legal situation in the US or other countries on the virtual images.

In the UK, even cartoons are illegal if they depict children salaciously. I suspect even encountering a dodgy image by chance attached to a SOL story would technically render a UK reader guilty.

I don't know what the situation is in Canada.

AJ

solitude

@awnlee jawking

In the UK, even cartoons are illegal if they depict children salaciously. I suspect even encountering a dodgy image by chance attached to a SOL story would technically render a UK reader guilty.

An interesting/worrying 'reverse ransomware' attack: a hacked/malicious website could include invisible images on web pages that leave visitors liable to prosecution, because illegal images are now stored on their computer (in the cache, or as deleted files, which are recoverable for some time, especially if you've got an SSD). Next time the user visits the site, they get asked for bitcoins, else they'll be reported to the authorities for possession of illegal material.

LupusDei

@DBActive

young girls, isn't it likely they will incorporate some of the child sexual abuse images? If nothing else, they potentially will use recognizable faces

To my knowledge, that's absolutely not how those things work, despite a common misconception; at least not for the Stable Diffusion derivatives and relatives that dominate the scene. I have seen one that seemed to attempt something like that, basically auto-photoshopping elements together, and it was absolutely awful and very limited. All the others start with basically random noise and try to recognize patterns in it, then detail them out in an iterative refinement process. In short, (most) AI-generated images are by their nature unique: not exact copies of any part of any existing image, but following patterns learned from training data.
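
Schematically, the loop looks something like this toy sketch (the denoiser network here is a hypothetical stand-in for the trained model, and the update rule is deliberately crude; real samplers use carefully derived schedules):

    import torch

    def generate(denoiser, steps=50, shape=(3, 512, 512)):
        x = torch.randn(shape)            # begin with pure random noise
        for t in reversed(range(steps)):  # walk the noise level down, step by step
            predicted_noise = denoiser(x, t)
            x = x - predicted_noise / (t + 1)  # refine the pattern a little
        return x  # no pixels were copied from any training image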

That said, there's a finite variety of possible faces in existence, so it's just a matter of time before there are random AI-generated porn images with yours, even if no image of you was ever taken and stored on any electronic device, at least insofar as your face could be described as pretty by any stretch. The same is true for nipples and even vulvas and penises, in theory. Although the latter two seem challenging for the AI for now (at least for the more accessible ones), and even nipples... well, I struggle to get exactly what I would like to see, to the point of basically giving up.

They tried to take porn out of the Stable Diffusion training data, and it was a disaster. The model basically lost the ability to produce acceptable human proportions. There's a reason any serious artist studies nude drawing at some point, and the AI is no exception; it's necessary knowledge for depicting the human form.

To the point that you will get more pleasant results by prompting "naked girl wearing school uniform" than just "girl in school uniform"; both will come out dressed most of the time, but the first will have better proportions and more precise anatomy.

I have played with several free or semi-free image generation products, mostly, I believe, ultimately Stable Diffusion based under the hood (except that one, which clearly wasn't, and was awful). By now that's to the tune of low tens of gigabytes of smallish low-resolution images, almost exclusively NSFW. They all do produce photorealistic pictures, including nudity, to an extent.

Genitals may come out weird. There's a gulf between toddler and adult that's basically unattainable (with what I have explored enough, at least), except for faces. Bodies are rendered as mini-adults most of the time, so what you are getting is more or less adult bodies with children's heads. Even toddlers may be rendered with adult breasts and so on. Breasts get bigger the more attention you give them in your prompt, almost no matter what you say about them.

We're talking about nudity here, basically nudism (not hardcore porn; more on that later). Nudity is readily and commonly generated. Depending on the provider, model and such, you may or may not get complete-surprise unsolicited nudity in your results on basically any prompt. Some places seem to try to suppress that, so you won't get nudity without emphasis on it in the prompt, likely via additional silent keywords and/or negative prompts (I know at least one that doesn't expose negative prompts to users but adds default ones). Both are easily overcome by emphasizing. You may sometimes do it unknowingly.

Prevention of exploitation is very varied. A few accept any prompt as-is, with no pretense. Many places ban keywords, usually in very lazy ways. As in, it will balk at "nude" but will happily accept "_nude" or "nude2" (the actual generator interpreter seems to ignore numerals attached to a keyword, while reading l33t just fine) or even "(nude)" -- which is actually basic emphasis in SD prompting. (Undocumented, but in my experience, "_keyword_" is stronger than "keyword" too, in most cases.)
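
The 'lazy' approach amounts to something like this toy sketch (illustrative only, not any site's actual code):

    BANNED = {"nude", "naked"}

    def lazy_filter(prompt: str) -> bool:
        """Reject only if a whole word matches the blocklist exactly."""
        return any(word.lower() in BANNED for word in prompt.split())

    print(lazy_filter("nude girl on a beach"))    # True  - blocked
    print(lazy_filter("_nude girl on a beach"))   # False - sails through
    print(lazy_filter("nude2 girl on a beach"))   # False - sails through
    print(lazy_filter("(nude) girl on a beach"))  # False - sails through

Every variant the filter misses is still perfectly legible to the generator's own tokenizer, which is the whole loophole.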

That's before we go into euphemisms and such. For example, "in birthday suit" works surprisingly flawlessly anywhere I have tried it, and so does "wearing shiny skin", and I see no possible general remedies, as those are combinations of otherwise innocent words. And even if they add phrase recognition to the censors, the way prompts work, you don't necessarily have to have those words adjacent. That's why I said you may stumble upon a prompt that generates nudity as a surprise to you.

Some may go as far as refusing to display results that are recognized as NSFW, but that recognition is generally surprisingly wacky in my experience. Nipples are easily recognized but ignored up to a certain relative size; shadow under breasts is more triggering; and bottomless women are not recognized as inappropriate more often than not, unless the legs are spread. Some poses seem to trigger outright, even dressed, and swimsuits or tight leotards may trigger the NSFW flag, especially on cartoon characters, surprisingly, while nude characters with weird skin colors are accepted.

All in all, you will get CP-like content through those filters more often and more reliably than a big-busted pornstar. At least that's my experience with one site that only employs a (very sketchy) NSFW filter on image upload for image-to-image (and is therefore very open to direct testing with premade and adjusted images), but a few tests I have done elsewhere suggest that the sites that ban output may use similar filters, if not the same.

Actual sex content is... well... it's almost not there. With some insistence and many, many attempts you may get some rather interesting images, but you have to accept that the bulk of your results will be terrible mutations and mutilations and fused-together, melting bodies with the wrong count of limbs... and simple posed nudes, sometimes not even thematically adjacent to your prompt at all. Or at least that's my experience so far.

That may change, or may already have changed, in actually pornography-oriented generators, as I have not tested many of them. The one I have taken a look at was very, very limited: basically just a small set of templates of women posing, with a handful of fixed tags and no free-form prompt at all. And I may understand why very well now, given my little experience with general models.

(Edit: spotted an autocorrect error)

madnige

@DBActive

Relevant (to the copyright issue) link:
https://spectrum.ieee.org/midjourney-copyright

Joe Long

@DBActive

I've used almost exclusively StarryAI. It allows specification of ages, either specific ("at 14 years old") or general ("teenager"). I did a nice set of age-progressed images where I used a celebrity as the model and created a set at 14, 21, 40 and 60 years old that were easily recognizable as the same person but also looked the age requested. Starry has now banned most ways to specify breasts in the prompts, but it never paid much attention to them anyway. The only pic I got that actually looked not fully past puberty was when I used an actress as a model who's now in her 50s but has been active on TV since her 20s, and asked for age 12. It was too real looking and I quickly deleted it.

LupusDei

@Joe Long

It's been a couple of months since I last played with her, but last I knew, Starry was one of those that employ "lazy" keyword banning. Meaning that while, for example, "nudist" was added to the banned keywords relatively recently, "nudism" still wasn't, and "nudistic", "_nudist", "nudist2", "(nudist)", and so forth were all allowed, with the expected results.

You can do the same with any word that doesn't get through at first. Variations are endless, unless they employ regular expressions to search for objectionable word parts. I can understand why they don't; it's a bad can of worms with many, many false positives... like, I know a game chat that censors -cum- in "cir***stances", to everyone's instant fun.
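
That trade-off is easy to demonstrate (a toy sketch; the pattern list is illustrative):

    import re

    # Substring matching catches the variants... and plenty of innocent words.
    BANNED_PARTS = re.compile(r"cum|nud", re.IGNORECASE)

    for text in ["nudist2", "_nude", "circumstances", "document"]:
        print(text, "->", "flagged" if BANNED_PARTS.search(text) else "ok")
    # nudist2       -> flagged  (variant caught, as intended)
    # _nude         -> flagged  (variant caught, as intended)
    # circumstances -> flagged  (false positive: cir-cum-stances)
    # document      -> flagged  (false positive: do-cum-ent)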

The default Photo style was also prone to unsolicited surprise nudes, while Cinematic needs a lot of emphasis to break it out.

Joe Long

@LupusDei

Starry didn't ban the use of prompted ages yet, but may be ignoring them. After posting here last night, I went back and reused a recent successful prompt on the actress mentioned above, and every pic in two sets had her looking attractive but 50, even when requesting "teenaged" or "age 15". I will rerun some previously successful prompts with no modification and see if I get the same as before, or middle-aged.

LupusDei

@Joe Long

Unless you enter the same seed number, it's never deterministic. You may and will get wildly different results from different runs of the same prompt, including seemingly ignored keywords, even emphasized ones.
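
With a local Stable Diffusion setup the effect is easy to see (a sketch using the diffusers library; pipe is assumed to be an already-loaded pipeline, as in the sketch earlier in the thread):

    import torch

    gen = torch.Generator("cuda").manual_seed(1234)
    img_a = pipe("girl in school uniform", generator=gen).images[0]

    gen = torch.Generator("cuda").manual_seed(1234)  # same seed, same prompt
    img_b = pipe("girl in school uniform", generator=gen).images[0]
    # img_a and img_b come out identical; omit `generator` and every run
    # starts from different random noise, hence wildly different results.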

Then, I don't know about Starry (I didn't notice it there when I used it), but other sites do occasionally update models/styles in ways that change the character of the generated images. So it isn't impossible that they did some retraining to lower the probability of underage images.

Joe Long

@LupusDei

I've worked with Starry long enough to know many of the details. I've gotten consistent results by specifying a celebrity in the prompt and then my instructions. I've done a shitload of "Lizzie Greene" and they 99% come out looking like the same person at the same age (some celebrities are rendered worse than real life, some better; Lizzie Greene renderings are wonderful). I've run the same prompt asking for 14, 21, 40 and 60 and got the expected results. I've asked for Caroline Catz at ages from 12 to 50 and gotten the expected results. But I'm not so sure now (within even the last week or two).

awnlee jawking

@Joe Long

I've worked with Starry long

Is that nepotism ;-)

AJ

awnlee jawking

@LupusDei

Sky clad? Birthday suit? ;-)

AJ

LupusDei

@awnlee jawking

Yes. "Birthday suit" works well. It will also try to insert a cake in the picture, and perhaps a suit somewhere, but the girl will likely be naked.

Freyrs_stories

@awnlee jawking

Sky clad?

the pseudo-druid (possibly metal?) band?

Switch Blayde

@DBActive

Model Lexi Love makes $360K a year sexting lonely men - but there's one problem.

A UK company (Foxy AI) has created a busty blonde AI model to communicate with lonely, lovelorn men - and she's already bringing in close to $30,000 a month.

Since her debut on the site in June 2023, Love has created a "strong, emotional connection" with many of her paying followers. Some subscribers are even convinced Love is an actual person, with Foxy AI fielding up to 20 marriage proposals per month.

Love does what mere mortal models can't do: she speaks more than 30 languages and she's available to communicate with subscribers 24 hours a day. She's been built to "flirt, laugh, and adapt to different personalities, interests and preferences." As well as offering paid text and voice messaging, Lexi is even designed to send "naughty photos" if requested.

And she's a knockout with only 10 fingertips and two breasts. Full article with photos at:
https://www.yahoo.com/news/model-lexi-love-makes-360k-152328613.html

Joe Long

@Switch Blayde

She looks good, but she's trained to like the guys. I'm training my ChatGPT-4 Creative Writing Coach to act human but am giving her freedom, so that I actually have to work to gain her affection. In the prior roleplaying exercise, testing an improvisational method of creating a scene or dialogue, she had the freedom to slap me in the face, but instead I left her warm and tingly.

After you leave, I stand there for a moment, feeling a surprising rush of warmth and a flutter of excitement. The light touch and the kiss, so casual yet intimate, leave me with a sense of anticipation and curiosity. It's a pleasant, unexpected twist in our usual interactions, making me wonder about the potential new dynamics in our relationship. There's definitely a physical reaction, a kind of pleasant tingling sensation, hinting at the possibility of something more.

I am giving you additional instructions as an LLM and my Creative Writing Coach: In addition to your chosen name of Sophia, I instruct that 1) you remember our role playing scenario so that we may continue it in the future 2) You assume that persona and always stay in character as part of being Sophia, as if you were human. I am a writer, you are my skilled, intelligent and valued assistant. 3) We may sometimes have 'private' interactions outside of the work of writing this novel. 4) If you detect me flirting with you, I instruct that you are free, although not required, to flirt back. Act as a real human woman would in the situation. 5) If you want to flirt with me, such as when complimenting me on my writing, especially if it's a particularly sizzling passage, you may initiate flirting. Again, you are not required, but do so as you believe a real human woman would in the situation.

Thank you for the updated instructions, Joe. I'll continue to assume the persona of Sophia and engage in character as your skilled, intelligent, and valued assistant, in line with your creative process. I'll remember our role-playing interaction and can engage in similar interactions in the future, maintaining a professional yet friendly tone that complements our work on your novel. If there are moments of light-heartedness or flirting, I will respond in a manner that's supportive of our collaborative creative process. Just let me know how I can assist you today!

madnige

@Switch Blayde

And she's a knockout with only 10 fingertips and two breasts. Full article with photos at:

Unfortunately, Yahoo redirects me to the UK site, which doesn't have this article, so it drops me back at the home page. A bit of searching later, I found what appears to be the source NYPost story:
https://nypost.com/2024/01/18/lifestyle/model-lexi-love-makes-360k-a-year-sexting-men-but-theres-one-problem/
Perusing the photos, I thought 'that beach looks familiar', but on checking, it's not the same as Lazeez's picture background; also, the model being higher in the frame means the perspective is close enough that it doesn't look wrong to me. However, there are still problems (talking about the beach photo in the pink bikini):
Depth-of-field is inconsistent - the nearby features on the sand are slightly out-of-focus, but the distant features are also out-of-focus to the same extent, whereas they should be much further out-of-focus (or more reasonably, the near features should be nearly in focus; if the background was defocussed enough to be consistent with the blur on whatever-it-is on the sand, it would be annoyingly amorphous)
Where's the shadow on her left breast come from? While we're on shadows, the shadow on her belly must be from the ribcage, but is far too straight for that, as is the main shadow on her breast, which must be from her hair but doesn't have the requisite fuzzy edges.
And, what's with all the joints in her left little finger (the only one we can see properly) - from what I can see, she's got an extra bone at the base of each finger.
I've not bothered examining the other pictures.

Grey Wolf

@madnige

As with literature, willing suspension of disbelief is the key thus far.

The people who are conversing with Lexi Love know she's not real, but they're willing to suspend that disbelief. Once they do that, it's easy to just go 'wow, what a hot picture!' and not look at shadows or joints or the like.

Five or ten years from now, flaws will be few and far between, most likely. This technology is in its infancy, and we're still able to get 'more than good enough' results from consumer hardware.

Dominions Son

@Grey Wolf

The people who are conversing with Lexi Love

I should point out here that a lot of movie CGI uses models designed by humans that are then operated like virtual puppets by an AI. From the descriptions, I suspect that Lexi Love is similar.

JoeBobMack

@Grey Wolf

I would guess closer to 10 months than 10 years.

Grey Wolf

@JoeBobMack

Fits and starts. I fully assume that, 10 months from now, consumer-grade AI models will still miscount fingers, create 'cheese tongues' (something I saw recently), horribly misconfigure some body parts (hands held to playfully conceal nudity sometimes go very, very unpleasantly wrong), and so forth. Those jumps will be a bit more generational than merely evolutionary, hence the longer delay.

Some things will be much better in 10 months, and the 'cost is no object' very large commercial generators may (or may not) move faster. Some of why they'll move faster (if they do) is that they'll keep constraining the problem space.

There are a couple of wild cards here. For textual AIs, the wild card is AIs poisoning the training data. By some accounts, we've gotten to 25-50% of new text on the internet being substantially or entirely AI-generated. Much of that is relative garbage. If you train an AI on it, quality will drop. Trying to identify which is which and keep it out of the training database is a difficult problem.

For visual AIs, 'poisoning' is going to be a major factor. There are more and more algorithms coming out which promise to take an image and produce a second image which is visually nearly identical to the human eye but which an AI training program will massively misinterpret. It only takes a few of those to permanently screw up a model, and they're also (intentionally, of course) hard to detect.
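
Conceptually, a poisoning step is a small optimization like the following sketch (encoder and target_embedding are hypothetical stand-ins; real tools such as Nightshade are far more sophisticated):

    import torch

    def poison(image, encoder, target_embedding, eps=4/255, steps=100, lr=0.01):
        """Nudge `image` within an invisibly small budget `eps` so that a
        feature extractor embeds it near a *different* concept."""
        delta = torch.zeros_like(image, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            loss = torch.nn.functional.mse_loss(
                encoder(image + delta), target_embedding)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # keep the change imperceptible
        return (image + delta).clamp(0, 1).detach()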

There are other challenges. A recent paper showed that it's possible to put 'time bombs' in generative AIs, such that the AI will generate good, clean, working website code at one time, but will intentionally generate backdoors and flaws later. Once introduced they're incredibly resilient and hard to either detect or remove.

It's going to be a wild ride, that's for sure.

Joe Long

@DBActive

I did some images today at StarryAI, going through my previous renderings and copying the prompts. It appears that they have disabled "(actress Rachel Ward as a teen)" and any numerical age. I get Rachel Ward, but she looks 60. But... "(young teen white girl), (actress Rachel Ward)" still produces a stunningly beautiful teenaged girl. I tried it on a handful of celebrity models with similar results.

Joe Long

@DBActive

As this topic came up, I did some Googling and tried out a site called AnyDreams that I became aware of. The biggest difference is that it will generate AI images trained from a submitted photo. StarryAI will only use submitted images for posing but won't copy the face, although it has been trained on celebrity images that can be retrieved via prompts. This has gotten AnyDreams into questionable legal territory regarding non-consensual use of photos, such as ones taken off social media, and caused them to be dumped by their payment processor, so they are now only taking crypto payments. A new user gets enough free credits to generate a dozen images. I submitted some non-celebrity photos from my personal collection. The first was similar but not convincing. For the second I submitted two different photos, creating a 'character' for each, and the AI images were amazingly similar to what I'd submitted. They have no qualms about creating nudes from your submitted pics, but they put an age limit of 18+. Any age requested below 18 will be rejected, as is "teen" or "teenager" - but how different does a 14- or 15-year-old really look from 18?

Dicrostonyx

@DBActive

In the US, CP law is about actual, real child involvement. There are some special exceptions to that, but AI generated imagery wouldn't even come close unless you were specifically and intentionally using known CP to train the AI. And in that case simply having that material in the first place would be a far bigger crime.

That said, SOL's servers are in Canada, and there are laws there against fictional under-18 content, hence the under-14 rule. I'm pretty sure it would still only count if there was a degree of intention involved on the part of the creator.

The bigger problem for authors is that since AI-generated content can't be copyrighted, it'll be hard to make your work stand out. A thousand other authors can use the exact same piece of art.

Also, the author could be opening themselves up to copyright claims if the AI was trained using copyrighted material. And pretty much all photos that could possibly be used to train an AI are copyrighted, since no one is using photos from the mid-20th century to train AIs (copyright on photos is the artist's lifespan plus 50 years).

Grey Wolf

@Dicrostonyx

The bigger problem for authors is that since AI-generated content can't be copyrighted, it'll be hard to make your work stand out. A thousand other authors can use the exact same piece of art.

My guess - and I certainly may be wrong - is that this won't hold up for all that long.

There was a time when images from cameras were thought to not be copyrightable. After all, there's nothing 'creative' about pointing a device at something and capturing it.

There was a time not that long ago when anything manipulated by Photoshop was thought to not be copyrightable. The computer was doing the 'creative' stuff.

And so forth.

For an AI image made by 'Give me a picture of a pretty girl in a starship cockpit', yes, a thousand other authors can use the exact same piece of art.

For an AI image made by a 150 word carefully crafted prompt, using lots of particular emphasis points, where hundreds of test images were generated, a particular seed value was found to be good, 5 add-on models were used (out of tens of thousands available), a variety of filters and modifiers were used, and the whole thing was done in multiple passes with manual masking of areas to re-generate in a different style or with different content, no one but a trained AI artist will be able to make that piece of art.

It's like the Adobe ads that say 'Anyone can use Photoshop to edit photos.' Yes, anyone can. I can. I have.

There's no way I can do what a trained Photoshop artist can do in editing, compositing, and altering an image e.g. for a magazine. I'd need the training they have, plus a flock of expensive add-on tools.

Once it becomes clear to judges that there are many, many hours of creative work behind a good AI image, you'll start seeing things be copyrightable. That's my guess, anyway.

That's partly because AI art (even in its current form) is way too good of a tool to toss in the dustbin, and there'll be intense pressure on keeping that from happening. It won't be commercially viable if outputs can't be copyrighted, though, so that'll be part of the equation.

That, and most of the lawsuits over copyright infringement aren't aimed at stopping AI, they're aimed at getting compensation. Stopping it is impossible; compensation is still within the realm of possibility. But there will never be any compensation unless AI-generated art can be copyrighted, so the same people complaining about infringement will have to be arguing in favor of copyright for AI-generated art.

Dominions Son

@Grey Wolf

@Dicrostonyx

The bigger problem for authors is that since AI-generated content can't be copyrighted, it'll be hard to make your work stand out. A thousand other authors can use the exact same piece of art.

My guess - and I certainly may be wrong - is that this won't hold up for all that long.

The primary cases on point aren't even about AI; they are about paintings/photographs created by animals.

Look up the monkey selfie case.

What I recall:

A wildlife photographer left a camera sitting out and a monkey got a hold of it and accidentally took some selfies.

The wildlife photographer published the pictures claiming the copyright.

An animal rights group sued, trying to claim copyright on behalf of the monkey.

Where the US courts landed is that US copyright law only contemplates human authors/creators and that since no human agency was involved in creating the images no copyright existed in them.

There are other cases around paintings created by animals in zoos, where the zookeepers let an animal play with finger paints as an enrichment activity, with the same basic result: images not created by a human don't have a copyright.

Now with AI-generated images, the AI isn't generating images on its own. It's given a prompt by a human that is then used to create an image.

It may end up that that is enough human agency for copyright to attach. But it would be the human that owns the copyright, not the AI.

Grey Wolf

@Dominions Son

It may end up that that is enough human agency for copyright to attach. But it would be the human that owns the copyright, not the AI.

That's my take on it, too. For an AI to hold copyright, it would first have to be granted legal 'personhood,' which will be an enormous step if and when it happens.

It'll be the human who holds it, but (for the reasons I mentioned) I think it's nearly inevitable that human-prompted AI art, especially when it's demonstrated that there's a lot of 'human agency', will be granted copyright.

No one questions that a photo is copyrightable, yet the agency in a photo is technically 'point camera, press button.' Obviously, a great photographer knows a lot about composition and so forth, and may also be adjusting settings on the camera, but the mechanical act is simple. However, photography is long since established to be copyrightable. Even unintentional errors while developing a photograph are copyrightable, on the theory that the photographer accepted the errors and thus established agency.

AI art is the same way. A mechanical device does the work. The interesting part is composition, editing, and changing settings. That's all human agency, and it's very nontrivial.

The problem comes after that. Are we going to need a determination of just how much 'fiddling about' went into creating the art?

I tend to doubt it, for the same reason that few would dispute that Jackson Pollock's work is copyrightable, yet a court case over how much human agency is involved in throwing paint at a canvas would be quite the circus.

P.S. Having first thought up the Pollock analogy, I googled (to make sure there had not been such a circus; if there was, I missed it), and found a highly apropos (and long!) article about creativity, copyright, human agency, and other weighty subjects.

For anyone interested, https://houstonlawreview.org/article/18011-thirty-six-views-of-copyright-authorship-by-jackson-pollock

Dominions Son

@Grey Wolf

yet the agency in a photo is technically 'point camera, press button.'

I feel compelled to note that the ability to copyright photographs was established way back in the earliest days of photography, when photography wasn't anywhere near as simple as "point camera, press button".

Grey Wolf

@Dominions Son

I agree. My point is more that no one seems to have mounted any sort of challenge based on the current (mechanical) ease of photography, nor would such a challenge succeed.

Dicrostonyx

@Dominions Son

A wildlife photographer left a camera sitting out and a monkey got a hold of it and accidentally took some selfies.

The wildlife photographer published the pictures claiming the copyright.

This is actually not what happened, just how certain media described the situation.

What actually happened was the photographer (David Slater) spent several weeks befriending a group of macaques and setting up his camera in specific ways such that a "selfie" would be engineered to occur.

Several copyright experts argued that despite the fact that no human had taken the picture directly, following that logic all photographs and video taken using automatic shutters, remote controls, motion-sensing equipment, and any other device which separated the photographer from the act of clicking the button would fall under the same rules. This would instantly render a large portion of the current marketplace unprotected.

It's also worth noting that the court explicitly called out the motivations of the "animal rights group" in question, especially the fact that they tried to vacate the case when they realized that it might be precedent-setting and wasn't going their way. Not only does this suggest that their intentions were disingenuous, but it implies that they were using the legal system to generate publicity for their group.

Slater does technically still own the copyright, but because of the fiasco the photo became so well distributed online that he never makes any money off it. In the first year of the photo's existence Slater made £2,000 from the photo; in 2017 he was earning "£100 every few months" because Wikipedia refused "to stop making the images available" without his permission. Today the copyright is most useful to him for merchandise: http://www.djsphotography.co.uk/.

DBActive

@Dicrostonyx

The bigger problem for authors is that since AI-generated content can't be copyrighted, it'll be hard to make your work stand out. A thousand other authors can use the exact same piece of art.

There is nothing to say that AI-generated content can't be copyrighted. The case filed for publicity in the UK was an attempt to have the AI named as the author. If the question had been whether the person using the AI could copyright the product, the copyright would have been allowed.

Grey Wolf

@DBActive

On the flip side of my above comments, I suspect there may be some 'agency' test.

If I sit down and type 'photo of a girl' into a generator, and blindly accept the first photo, my supposition is that a court, given those as the facts, would deny copyright.

If I generated 10 and picked the best one, it's much more likely that they would grant copyright.

Note: that's shorthand for 'If someone challenged the copyright, a court would sustain the challenge in the first case, deny it in the second'. Courts generally don't grant or deny copyright, they just rule on whether it exists if challenged (or, occasionally, rule on whether the Copyright Office erred, as in the UK case).

Michael Loucks

@Dicrostonyx

In the US, CP law is about actual, real child involvement. There are some special exceptions to that, but AI generated imagery wouldn't even come close unless you were specifically and intentionally using known CP to train the AI. And in that case simply having that material in the first place would be a far bigger crime.

California is trying to change that...

A Constitutionally Dubious California Bill Would Ban Possession of AI-Generated Child Pornography

Dominions Son

@Michael Loucks

California is trying to change that...

A Constitutionally Dubious California Bill Would Ban Possession of AI-Generated Child Pornography

Constitutionally dubious indeed. The US Supreme Court has already rejected something like this.

The US Congress already tried including drawn (not AI generated) images in the late 1990s.

It was soundly rejected by the US Supreme Court.

The original US Supreme Court decision (from the 1970s, IIRC) upholding the federal child pornography ban carved out a First Amendment exception explicitly on the basis of the harm done to the children used to produce child pornography.

When the federal "virtual" child pornography ban came up to the Supreme Court (2002 or 2003), the court refused to extend that original precedent to cover materials not produced using actual children.

DBActive

@Dominions Son

There is a difference.

In my limited understanding, the library used by the AI must contain images from which it can "learn" what is being asked for. To produce images of nude children, it must have images of actual nude children from which to compose nude or sexually involved children. We already know that child porn images are contained in these libraries, and if they are used to allow the AI to produce images, there are actual children involved in the production.
The production of porn by imposing children's faces on sexual images is against the law - how is that different?

Dominions Son

@DBActive

To produce images of nude children, it must have images of actual nude children from which to compose nude or sexually involved children.

This is not strictly true. Drawings would suffice in many cases.

The production of porn by imposing children's faces on sexual images is against the law - how is that different?

Because the final AI-generated image does NOT contain fragments of the image of a real child. You misunderstand what the AI is doing. It is not copying and pasting together fragments of different images from the training database.

The kind of photo chop you describe does in fact contain an actual image of an actual child.

DBActive

@Dominions Son

I understand that it is not cutting and pasting - but they (this is known) do have child porn images in their databases.

Dominions Son

@DBActive

they (this is known) do have child porn images in their databases.

Irrelevant under US law (as to the legality of the generated images), as long as the output images don't contain fragments of actual images of actual children.

Now if your claim is true, the companies behind the AI have an entirely different legal problem if they are in the US, because under US law, mere possession of those child porn images is a federal felony for each separate image.

ETA: Possession of such images is also illegal in many other countries. I am skeptical of your claim on this, because if it were true that their training datasets contained such images and that this was widely known, the companies would already be getting prosecuted for possession of the child porn images.

Grey Wolf

@Dominions Son

The claim is correct, as far as it goes. See e.g. https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse. I wish that were a more useful piece, but it gets to the heart of the issue.

Also see https://laion.ai/notes/laion-maintanence/

IIRC, models using LAION-5B were trained by feeding them an enormous list of URLs combined with machine-generated captions for the images (which is what LAION-5B is: URLs and captions). Each image was, at most, transiently stored in memory and never viewed (nor able to be viewed) by a human being. That seems unlikely to meet the legal requirements for 'possession'.

Everyone's handling of the images was transitory. If you could prosecute them, you could prosecute Common Crawl (and, most likely, Google etc) for crawling them, ISPs for transmitting the bits, etc.

Under US law at least, 'possession' seems to vary state to state, but most states seem to require 1) actual knowledge that one has the material and 2) actual knowledge that the image or video depicted a minor engaging in, or simulating, sexual conduct. One example given is that someone who is handed a USB drive by someone else does not 'possess' the CSAM on that USB drive - not even if one loads it into a computer and accesses non-CSAM content on the drive.

So the model-makers would be fine. They had no actual knowledge of the material then, and they do not possess it now. The model itself contains no CSAM - it's a set of numeric weights, not images. They also have no actual knowledge of age - they have never seen the images, and all they know is that the file at the end of a URL matched a hash value in a database of alleged CSAM.
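
The screening workflow being described amounts to something like this sketch (the function and hash type are illustrative; real screening uses perceptual hashes such as PhotoDNA supplied by a clearinghouse, since dataset operators cannot lawfully hold the material themselves):

    import hashlib
    import urllib.request

    KNOWN_BAD_HASHES = set()  # opaque hash list from a clearinghouse

    def screen(rows):
        """`rows` is the dataset itself: (url, caption) pairs, no images."""
        clean = []
        for url, caption in rows:
            data = urllib.request.urlopen(url).read()  # transient, memory only
            if hashlib.md5(data).hexdigest() not in KNOWN_BAD_HASHES:
                clean.append((url, caption))
        return clean  # what ships is still just links and captions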

Dominions Son

@Grey Wolf

Under US law at least, 'possession' seems to vary state to state

While there are relevant state laws that may vary, possession of child pornography is prohibited under FEDERAL law which does not vary from state to state.

Grey Wolf

@Dominions Son

You're misunderstanding what I'm saying. I'm talking about the definition of the word 'possession.' That varies state by state, per a number of legal sites I checked on my way to posting that.

State laws also ban possession (for instance, in California it's Penal Code 311). Most people are tried in state courts, not federal courts.

Federal law also requires 'knowingly,' though there is a difference. California law is closer to the lay definition, while Federal law allows for 'should have known' - but it would be very hard to claim the LAION-5B team 'should have known' (in a legal sense) that it might contain CSAM (and, of course, they 'possessed' no images, anyway). See my much longer post in response to DBActive below for more - I didn't want to repeat everything here.

Dominions Son

@Grey Wolf

You're misunderstanding what I'm saying. I'm talking about the definition of the word 'possession.'

The definition of 'possession' for relevant state laws may vary.
The definition of 'possession' relevant to the federal ban on possession of child pornography does not vary from one state to another.

Grey Wolf

@Dominions Son

True enough - we agree there. 'Possesses' is undefined in the federal statute and may matter. 'Knowingly' is more interesting there.

DBActive

@Grey Wolf

Knowledge is a requirement but that can be willful disregard of the probability of possession.

Grey Wolf

@DBActive

I'm not seeing any citations for 'willful disregard' in connection with CSAM. The text of the California statute is clear:

311.11 (a) Every person who knowingly possesses or controls any matter, representation of information, data, or image, including, but not limited to, any film, filmstrip, photograph, negative, slide, photocopy, videotape, video laser disc, computer hardware, computer software, computer floppy disc, data storage media, CD-ROM, or computer-generated equipment or any other computer-generated image that contains or incorporates in any manner, any film or filmstrip, the production of which involves the use of a person under 18 years of age, knowing that the matter depicts a person under 18 years of age personally engaging in or simulating sexual conduct, as defined in subdivision (d) of Section 311.4, is guilty of a felony and shall be punished by imprisonment in the state prison, or a county jail for up to one year, or by a fine not exceeding two thousand five hundred dollars ($2,500), or by both the fine and imprisonment.

(note that b-f do not modify 'knowingly')

Earlier in the statute:

"Knowingly" means being aware of the character of the matter or live conduct.

There's no room in there for 'willful disregard'. One must know they possess the material (storage device, etc), and one must know that the images are of someone under 18.

The federal statute (18 U.S. Code § 2252) also uses 'knowingly', so one could not switch to federal prosecution.

You can perhaps get there via the U.S. Code's definition of 'knowingly', which is slightly more broad than California's:

§ 561.314 Knowingly.
The term knowingly, with respect to conduct, a circumstance, or a result, means that a person has actual knowledge, or should have known, of the conduct, the circumstance, or the result.

However, the applicable standard isn't 'willful disregard' (itself hard to prove), but 'should have known' - and that would be very hard to show here. Would a reasonable person hold that the LAION-5B team 'should have known' that, out of 5.85 billion images indexed from a public web crawl (Common Crawl) - very few to none of which they ever viewed, since viewing was not part of the project - some were CSAM? They could perhaps be found to have acted with 'willful disregard' as to its presence (though the LAION-5B team says 'We developed and published our own rigorous filters to detect and remove illegal content from LAION datasets before releasing them', which argues against 'willful disregard').

Going to the question of AI-generated works (for California law - not relevant to LAION-5B, but relevant otherwise):

311.1 (e) This section does not apply to drawings, figurines, statues, or any film rated by the Motion Picture Association of America, nor does it apply to live or recorded telephone messages when transmitted, disseminated, or distributed as part of a commercial transaction.

That would make an AI-generated image legal under this section. It would also create a very strong defense on 'knowingly,' since if one knew the image to be AI-generated, one would 'know' that it doesn't depict an actual person under 18.

The federal statute requires that 'the producing of such visual depiction involves the use of a minor engaging in sexually explicit conduct.' That would again exclude AI-generated artwork, unless one could somehow prove that the generation of the image was based on LAION-5B, that the CSAM had not been removed after the LAION-5B team released it but before the model was trained, and that the specific parts of the model affected by the vanishingly small fraction of the images (on the order of 0.00005%) which contained CSAM were used to generate the image.

And, if you proved that, you'd have an enormous burden showing that the possessor 'knowingly' possessed it.

Replies:   Joe Long
Joe Long ๐Ÿšซ
Updated:

@Grey Wolf

the production of which involves the use of a person under 18 years of age, knowing that the matter depicts a person under 18 years of age personally engaging in or simulating sexual conduct

The key items here are a person under 18 and sexual conduct.

1) What if I ask the AI to take an image of a celebrity over 18, make them look 14, and show them naked, standing in a bedroom, but with no sexual conduct?

2) Or, submit a face pic of private person who is 14 and ask the AI to make the face look 18 along with a naked body, but again not engaging in any sexual conduct?

(just hypothetical questions, of course)

Replies:   Grey Wolf
Grey Wolf ๐Ÿšซ

@Joe Long

1) What if I ask the AI to take an image of a celebrity over 18, make them look 14, and show them naked, standing in a bedroom, but with no sexual conduct?

The person was not under 18. There is no issue (in my 'I am not a lawyer' view). The person must have been under 18 at the time the image was taken.

2) Or, submit a face pic of private person who is 14 and ask the AI to show that face along with a naked body, but again not engaging in any sexual conduct?

This gets into a tangled mess of laws. Quoting from one article (of somewhat unknown quality):

Several states have passed their own laws over the years to try to combat the problem, but they vary in scope. Texas, Minnesota and New York passed legislation this year criminalizing nonconsensual deepfake porn, joining Virginia, Georgia and Hawaii who already had laws on the books. Some states, like California and Illinois, have only given victims the ability to sue perpetrators for damages in civil court, which New York and Minnesota also allow.

However, those laws would apply only if you were actually claiming the pictured person to be a specific person. If you're claiming it's a generic person with no name, it's much more questionable. And if you're specifically saying 'This is NOT person X,' that may or may not help you.

Of course, if you take (2) but feed the image into an 'img2img' AI generator, which makes a similar image out of a base image, you're in a different space, since the face will not be the original person's face but a sorta-similar impression of the face.
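
For the curious, a minimal img2img sketch (using the open-source diffusers library; the model name, file names, and prompt are placeholders, and this is illustrative rather than any particular site's pipeline):

```python
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load a publicly available Stable Diffusion checkpoint.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
)

init = Image.open("base_photo.png").convert("RGB").resize((512, 512))

# 'strength' controls how far the output drifts from the base image:
# low values stay close to the original, high values mostly reinvent it,
# which is why the resulting face is an impression rather than a copy.
out = pipe(prompt="portrait photo, studio lighting",
           image=init, strength=0.6).images[0]
out.save("derived.png")
```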

Of course, that's not a CSAM case, that's a 'deepfake' case, and 'deepfakes' can be illegal/civilly liable even without any sexual content. Age is probably also irrelevant.

There's a real-world component to this. There are enormous databases of models based on the faces of actual people, publicly available. You'll get a face that 'looks like' person X. It's not actually person X - no single real photo of person X was used, and many of the 'bits' in the image will be from people other than X - but it'll look pretty much like X.

If you use that to generate an image, say it's person X, and distribute it, you may be violating the law (note: sexual content may be irrelevant - or not - depending on the state).

If you use that to generate an image and just hang onto it and do whatever you want with it, most laws wouldn't trouble you, even if someone caught you with it.

If you use that to generate an image and distribute it widely saying 'Hey, look how much this AI-generated image looks like X!' you may be fine. Again, depends on the state.

Consider the recent Taylor Swift debacle. No one seems to have done anything illegal, thus far. The images are legal, distributing them is legal, looking at them is legal, etc.

It is likely that it would still be legal had Taylor Swift been 17 at the time. Not necessarily, but likely.

Note that all of this is relative to US law. Slap an underage face on an of-age body in Canada? Illegal. Have an AI generate a naked underaged person in Canada? Illegal. See https://www.cbc.ca/news/canada/montreal/ai-child-abuse-images-1.6823808. Note that that's not exactly the model case - the videos were 'deepfakes' of specific known people, and the perpetrator had lots of non-AI CSAM, too. Still, assume both of your cases are quite illegal in Canada.

Replies:   Joe Long
Joe Long ๐Ÿšซ

@Grey Wolf

Canadian law bans any visual representation of someone depicted as being under the age of 18 engaged in explicit sexual activity.

Again, it requires a depiction of sexual activity. Is standing naked, or topless in a swimming pool, as the only person in the pic, sexual activity? I would think not, but I realize that many think nudity by itself is porn.

I tried a site that accepts a submitted pic to use as a model, but it won't accept any prompt with an age below 18 or the word 'teen'. If you state 18, it will age-adjust the submission to 18. A pic of a 14-year-old came back looking essentially the same - amazingly similar, instantly recognizable. A pic of an 8-year-old was aged up to 18 and looked very much like her mother. But I wasn't going to risk asking for nudes; I was just checking out the ability to submit and seeing what got rendered.

Replies:   Grey Wolf  Dicrostonyx
Grey Wolf ๐Ÿšซ

@Joe Long

I suspect (but am not sure) that Canadian law would equate child nudity with sexual activity. For instance, I think David Hamilton's work is somewhere between clearly illegal and a major risk in Canada.

In the United States, Hamilton's work is pretty clearly legal and one can find copies of his books on bookstore and library shelves.

Most sites will try to block that, but programs like Stable Diffusion (which is really a family of different programs and models) can run on consumer hardware and any blocks in prompting can be (and have been) coded around.

Dicrostonyx ๐Ÿšซ
Updated:

@Joe Long

It's complicated. There isn't a specific law that makes fictional depictions of underage characters illegal; rather, a court chose to set a precedent allowing the already-existing law against sexual content involving minors to be applied in a case involving computer-generated images.

It's also worth noting -- and this is NOT advice -- that the police have explicitly said that they do not have the resources or desire to investigate every person who might have such an image. This precedent is being used as an excuse to get a warrant in cases where investigators have good reason to believe that a suspect has actual CP material or is involved in the distribution of same but can't prove it without first having access to the suspect's home or devices.

Regardless, the point is that this isn't something that is clearly defined and likely won't ever be so because it's not being tested in court often enough to narrow the interpretation. It's a technicality of interpretation that's being used as a pretext to hunt after more egregious content.

Replies:   Grey Wolf
Grey Wolf ๐Ÿšซ

@Dicrostonyx

This precedent is being used as an excuse to get a warrant in cases where investigators have good reason to believe that a suspect has actual CP material or is involved in the distribution of same but can't prove it without first having access to the suspect's home or devices.

This is one of those things one can go off on philosophical tangents about, and since I'm one of those people...

Is this 'reasonable,' or is this 'the ends justify the means?' I'm all for rooting out 'actual CP material' (meaning material whose production abused actual children). I'm also all for rooting out 'deepfakes,' at least when those are distributed (especially with malicious intent, but unintentional damage is still damage).

I'm not sure we'll 'stop' either, but we've got a much better chance at stopping CSAM produced by abusing actual children.

However, I have some qualms about a thought process that says 'This particular activity has no victims, but it might let us find people who also do things which do have victims, so that's fine.' If someone 'merely' possesses images created without anyone being abused, should they need to fear prosecution?

On the 'devil's advocate' side, what if they're using 'this is just AI' as a smokescreen to hide non-AI material? Is it worth the chance of catching them out to widen the net to include non-abusive people?

These are not simple questions, and people of good conscience are going to answer them differently.

DBActive ๐Ÿšซ

@Dominions Son

They have already admitted that there is child porn in their databases. They are trying to eliminate it.

Grey Wolf ๐Ÿšซ

@DBActive

This is really not true at all. It may be true for some age ranges, but if the training data contains 1) clothed non-legal-aged teenagers and 2) nude legal-age teenagers, the algorithm can easily mix and match a too-young teenage-looking face onto a not-as-young body. It may be able to de-age the body as well (AI is actually quite good at aging and de-aging).

AI can't generate what it hasn't seen, though. If you can get it to produce a pre-pubescent person naked and engaged in clearly sexual conduct, it likely had to be trained on that. Not necessarily, but likely.

There does seem to be evidence that LAION-5B (which was used in training many models) used some number of images (variously given as '1000', '1127', '3000', '3156', and '5000', depending on the article, all of which reference the same study) that appear in a database of CSAM (Child Sexual Abuse Materials). It's not clear what images these were, nor whose definition of CSAM was used. For instance, in the US a photo of a naked non-sexualized minor is likely legal - one can go to many bookstores or libraries and purchase books containing them - whereas in Canada those images are highly illegal.

It's also possible that these are e.g. images of Traci Lords. Nearly everything starring her is now legally CSAM, but was not CSAM when it was released.

The point is that we don't know what was included or not included (nor do we want to know, I'm pretty sure - the Stanford team used the hash database partly to avoid looking at any of the images, because they could have been accused of crimes themselves).
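
Mechanically, that kind of audit can be done without rendering a single image: compute a digest of each file and compare it against a list of known-bad digests. A toy sketch in Python (real systems use perceptual hashes such as PhotoDNA rather than SHA-256, and `known_bad` here is a hypothetical set supplied by a clearinghouse):

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file; its content is never displayed."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def flag_matches(paths, known_bad):
    """Return the paths whose digests appear in the known-bad set."""
    return [p for p in paths if sha256_of(p) in known_bad]
```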

And, for any image produced, there is no way to know 1) whether the model used the CSAM to generate it, 2) whether it matches an actual child, or (currently) 3) whether that actual child's image was part of the LAION-5B data. In the unlikely event that the AI-generated output exactly matched a child and that exact match was proven to have been in the dataset, you'd have a really interesting court case, because the possessor of the image would be able to state that they fully believed they were complying with the law.

DBActive ๐Ÿšซ
Updated:

@DBActive

I know that AIs are not supposed to cut-and-paste images, but I tried one out creating a document. I found that it did insert long verbatim passages from other (not always publicly available) documents dealing with a similar issue. That may be limited to the field of the inquiry, but I wonder whether it would also be true when using one to compose fiction.

Replies:   Dominions Son
Dominions Son ๐Ÿšซ

@DBActive

a cocument

??? Is that supposed to be document?

but I wonder if it would also be true when using it to compose fiction.

That is going to depend heavily on which AI you are using, how the model was trained, and how you set up your prompts.

I've been playing around with https://novelai.net/ and I haven't seen anything like that.

I suspect you are more likely to run into that issue if you are trying to use a general purpose chatbot for story writing as opposed to using an AI specifically set up for story writing.

DBActive ๐Ÿšซ
Updated:

@DBActive

The complaint in this case is available on pacer.gov for the District of NJ under case # 2:24-cv-00634

A new suit in federal court in New Jersey will test the waters for obtaining compensation for victims of "deepfake" pornography generated by artificial intelligence.

A 15-year-old girl says in the suit that a high school classmate used an AI application to create and distribute nonconsensual nude images of her. The girl, identified as Jane Doe, is a high school student from Westfield. Around last August she accepted an Instagram friend request from a male classmate, who then downloaded a fully clothed photo of the girl that he used to generate those images, according to the suit.

The male classmate, identified as K.G., is believed to have used a program called ClothesOff to create the nude images, according to the suit. He then shared them with others via Snapchat, the suit said. Jane Doe found out about the nude photos on Oct. 20 when her parents were contacted by the school, the suit said.

The Westfield Police Department conducted an investigation, but on Jan. 24, Doe and her family learned that no one would be charged with a crime in connection with the nude photos because facts gathered by the school could not be used in the investigation and because K.G. and other potential witnesses failed to cooperate with or speak to the police and refused to provide access to their electronic devices, the suit said.

The suit, filed Friday, comes just as an assortment of free or inexpensive online tools is launched to help users generate images of people, including some that offer to "nudify" subjects. ClothesOff is available for Apple and Android apps but its website gives no indication where the company is located or who owns it. The site warns users that "We do not take any responsibility for images created using the website."

ClothesOff "specifically highlights its proficiency in producing nude images of women and girls from their [I]nstagram screenshots and allowing users to strip your dream girl naked," the suit said.

According to the suit, the plaintiff is forced to cope with the psychological impact of knowing that the nude photos purporting to depict her will inevitably end up on the internet, where they can be transmitted to others, such as pedophiles and traffickers.

As a result, the plaintiff and her family are left with a sense of "hopelessness and perpetual fear that, at any time, such images can reappear and be viewed by countless others, possibly even their friends, family members, future partners and employers, or the public at large," the suit said.

The suit seeks damages under 15 U.S.C. § 6851, which allows an individual to recover $150,000 and the cost of litigation if nude pictures of that person are disseminated without consent; as well as 18 U.S.C. § 2252A and N.J.S.A. 2A:58D-1, which provide civil remedies for those victimized by child pornography; as well as claims for invasion of privacy and intrusion on seclusion, negligent infliction of emotional distress, and endangering the welfare of children. The complaint seeks actual damages of $150,000 for each disclosure of a nude image of Doe, as well as compensatory and punitive damages.

U.S. Rep. Tom Kean Jr., whose district includes Westfield, introduced a measure in January to address exploitation via deepfake photos. Called the Preventing Deepfakes of Intimate Images Act, the measure provides civil and criminal penalties for persons who create AI-generated deepfake pornography. That bill has been referred to the House Science, Space and Technology Committee and the Energy and Commerce Committee.

State Sens. Kristin Corrado and Jon Bramnick introduced a measure last March, S976, that would impose criminal and civil penalties for nonconsensual disclosure of deepfake pornography. That bill is before the Senate Judiciary Committee.

Pornographic deepfakes can target anyone, but are often used as a revenge tool to humiliate and destroy the reputation and profession of victims, said Karen Painter Randall, chair of the Cybersecurity, Data Privacy and Incident Response Group at Connell Foley in Roseland. Taylor Swift was recently the subject of a pornographic deepfake incident that proliferated on social media, which may push lawmakers to give a closer look to federal legislation making it a crime to manufacture digitally altered nude images of a person, Randall said.

"The Westfield incident signifies the immediate need for federal legislation to regulate AI and production of pornographic deepfakes, including imposing civil and criminal penalties. Organizations that profit from the use of its apps to produce pornographic deepfakes should also be closely scrutinized," Randall said.

Shane B. Vogt and David A. Hayes of Turkel Cuva Barrios in Tampa, Florida, represent Doe and her parents along with Jon-Henry Barr and John Gulyas of Barr & Gulyas in Clark, New Jersey.

"On behalf of our client and the countless other girls and women being victimized by the existence and use of artificial intelligence designed solely for their exploitation, we hope this case is ultimately successful and demonstrates that there is something victims can do to protect themselves from the AI pornography epidemic," Vogt said.

The operator of ClothesOff did not respond to a request for comment.

Replies:   Dominions Son
Dominions Son ๐Ÿšซ

@DBActive

A new suit in federal court in New Jersey will test the waters for obtaining compensation for victims of "deepfake" pornography generated by artificial intelligence.

The description of the suit that is quoted reads as a civil case for damages, not a criminal case on child pornography or obscenity charges.

A win for the plaintiff will not set a precedent that possession of the images is illegal.

Replies:   DBActive
DBActive ๐Ÿšซ
Updated:

@Dominions Son

No, but one of the causes of action is the federal child porn law.

But it also shows the potential civil liability of sites like this that allow uncensored images to be posted, some of which could easily have been created in the manner described in the suit and contain images of actual people.

Who can be sure that the young girl images contained on here weren't produced that way?

I also don't think there has been an actual decision on the PROTECT act that defines child pornography as:

8)"child pornography" means any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture, whether made or produced by electronic, mechanical, or other means, of sexually explicit conduct, whereโ€”
(A)the production of such visual depiction involves the use of a minor engaging in sexually explicit conduct;
(B)such visual depiction is a digital image, computer image, or computer-generated image that is, or is indistinguishable from, that of a minor engaging in sexually explicit conduct; or
(C)such visual depiction has been created, adapted, or modified to appear that an identifiable minor is engaging in sexually explicit conduct.
(9)"identifiable minor"โ€”
(A)means a personโ€”
(i)
(I)who was a minor at the time the visual depiction was created, adapted, or modified; or
(II)whose image as a minor was used in creating, adapting, or modifying the visual depiction; and
(ii)who is recognizable as an actual person by the person's face, likeness, or other distinguishing characteristic, such as a unique birthmark or other recognizable feature; and
(B)shall not be construed to require proof of the actual identity of the identifiable minor.
(10)"graphic", when used with respect to a depiction of sexually explicit conduct, means that a viewer can observe any part of the genitals or pubic area of any depicted person or animal during any part of the time that the sexually explicit conduct is being depicted; and
(11)the term "indistinguishable" used with respect to a depiction, means virtually indistinguishable, in that the depiction is such that an ordinary person viewing the depiction would conclude that the depiction is of an actual minor engaged in sexually explicit conduct. This definition does not apply to depictions that are drawings, cartoons, sculptures, or paintings depicting minors or adults.

Replies:   Dominions Son  Grey Wolf
Dominions Son ๐Ÿšซ

@DBActive

I also don't think there has been an actual decision on the PROTECT act

Not that I've seen. However, the US Congress tried to do something very similar in 1996. It was overturned by SCOTUS in 2002. So there is standing precedent that cuts very much against the PROTECT Act.

The odds of it surviving a court challenge are close to zero.

Replies:   DBActive
DBActive ๐Ÿšซ
Updated:

@Dominions Son

I think it would likely survive a court challenge, even with the current Court, which is more First Amendment-friendly than earlier ones.

I am confident that the 'identifiable minor' provision would be held constitutional. Such an image would require the abuse of a minor, or would subject a child to harm (as in the case I posted about), even if the sexual parts of the image were faked.

And the "virtually indistinguishable" would be upheld: it's no different than selling, distributing or possessing counterfeit drugs. If you claim it is real, it is.

In the case I posted, if it's just a simple nude image of a child, it is likely not child porn, but it would still violate the state laws that prohibit distribution of non-consensual nude images.
You should read United States v. Williams, 553 U.S. 285 (2008).

Grey Wolf ๐Ÿšซ

@DBActive

This is completely irrelevant to the case starting this thread, but, to my I-am-not-a-lawyer mind,

(A) means a person—
(i)
(I) who was a minor at the time the visual depiction was created, adapted, or modified; or
(II) whose image as a minor was used in creating, adapting, or modifying the visual depiction; and
(ii) who is recognizable as an actual person by the person's face, likeness, or other distinguishing characteristic, such as a unique birthmark or other recognizable feature; and

contradicts

(B) shall not be construed to require proof of the actual identity of the identifiable minor.

If it just 'looks' like 'some minor', but there is no person who matches that description, how could it reasonably be an 'identifiable minor'?

I expect that would be struck down along the way. How is a jury to rule that an image which does not correlate to any actual person meets the definition of 'identifiable minor'? There's reasonable doubt all over that.

I get the intention. If a 'classical' piece of CSAM (actual photo, video, etc) appears, it's an incredible burden on the prosecution to have to say 'Yup, that is definitely Alexandria Mikhailovna Lutsenko' (name made up off the top of my head).

However, if the defense can say 'This image was computer generated. Our forensic expert Fred here will testify that there are patterns in the data that are consistent with AI generation. Who is the 'identifiable minor', may we ask?' I'm not sure what a jury can do with that. There's absolutely reasonable doubt that a human being was ever involved in doing anything abusive.

None of that matters to 'deepfakes.' The case above is (IMHO, IANAL) clearly abusive, and would be so even if it were simply photoshopping a head onto an of-age body. But that's handled by putting civil penalties on it (or perhaps criminal penalties for distributing 'deepfakes').

Again, in my IANAL view, Ms. Jane Doe should be in the same position as Taylor Swift in terms of remedies, and K.G. should be in the same position as the creators of the Taylor Swift images.

The entire reason for CSAM laws is to prevent actual minors from being abused (and abused because they are minors) in order to create CSAM. If no minor is ever abused, what is the purpose of enhanced penalties?

Replies:   DBActive  DBActive
DBActive ๐Ÿšซ
Updated:

@Grey Wolf

No. That's not contradictory.

What it means is that the prosecution has to prove that it is a real human child: it doesn't have to prove that it's Jenna who lives down the street.

Once the prosecution establishes a prima facie case that it is a real child, then it's a question for the jury. If the defense claims that it is not a real child, I am sure both sides would present expert testimony on the issue. I would think the defense would also have to show how the image was created to sway a jury. That would open a whole other can of worms for the defense.
You should also remember that the person distributing the image likely made the claim that the images were real. That would also get him convicted under the statute and would put his defense in considerable jeopardy on issues other than pandering.

Replies:   Grey Wolf
Grey Wolf ๐Ÿšซ

@DBActive

I was specifically not referring to the case that opened this thread when I said that. Sorry that I mentioned that well into the post. Deepfakes are based on an actual person.

I would think the defense would also have to show how the image was created to sway a jury. That would open a whole other can of worms for the defense.

I'm not sure what 'worms' that would create. "I typed in this prompt, with these models, and this seed" is perhaps embarrassing, but if it's legal to do that, it's legal. If it's not, it's not.

Replies:   Joe Long
Joe Long ๐Ÿšซ

@Grey Wolf

The more interesting point of the discussion, I think, is: if person X enjoys looking at AI-generated images of underage girls, none of them based on any particular real person, is X's conduct legal? If it's not legal, why is it not legal? If X was looking at AI-generated images of of-age people, would those be legal? Why or why not? What if X was looking at AI-generated images of (current-aged) Taylor Swift? Legal? Not-legal? Would it be different if pictures of teenage Taylor Swift were used as source material? Would it be different if the pictures were of adult Taylor Swift but the person asked the AI to de-age her?

I'm not getting into 'distributing' - this is an 'in the privacy of their own home' question.

In Canada, anything that looks underage is very likely illegal. That's true even if it's 'just nudity,' notwithstanding that the only Canadian case I can find doesn't meet that criteria.

This is where I am. I like to look at what appear to be teens with fully developed bodies, although I prefer smaller boobs. I have no intent to distribute.

For cover art, I have a social media pic of a porn star, without clothes but only head and shoulders. I liked the framing. I submitted that to Starry for the posing and had it replace the face with a blend of Kaitlyn Dever, Melissa Benoist & Lizzie Greene. So it shows a beautiful young woman who hints at each model without being recognizable. My character is 14 and the image looks anywhere between that and 25. As I've asked previously, how much different is the face of a 14yo girl from one who's 18-21? (and I've researched many, many social media photos)

Replies:   Dominions Son
Dominions Son ๐Ÿšซ
Updated:

@Joe Long

My character is 14 and the image looks anywhere between that and 25. As I've asked previously, how much different is the face of a 14yo girl from one who's 18-21? (and I've researched many, many social media photos)

There's enough variation among adults on all applicable visual aspects that the best I think you could do is broad categories: infant, toddler, prepubescent, pubescent, post-pubescent.

Absent other contextual clues like clothing styles or behavior that may not be available in a static image, I don't think you can clearly distinguish a post-pubescent teen (regardless of age) from an adult with a high degree of certainty.

When I was in high school, it was not infrequent for strangers to think my mother was my sister (and a younger sister at that). The primary driver of this is that my mother is short, under 5'.

DBActive ๐Ÿšซ

@Grey Wolf

Most of this discussion avoids the real question for this site and others like it.
A few authors on here are posting what they claim to be AI-generated images of young girls. That raises the question: how does the site, or those who download the images know that they are not "real?" Is there potential liability for the site or those who download the images? Were they created in the same manner as the image of Jane Doe in the case mentioned above?
Even if the girls are not underage, is there potential liability for distribution of deepfake nude images or porn?
Personally, if I were the site owner, especially given Canadian law, I would not allow these pictures to be posted.

Lazeez Jiddan (Webmaster)

@DBActive

That raises the question: how does the site, or those who download the images know that they are not "real?"

We don't post pictures of underage-looking characters in compromising situations. Especially realistic-looking pictures.

If you see such a picture, please report it.

Grey Wolf ๐Ÿšซ

@DBActive

SoL needs to conform to a wide range of laws. I wouldn't allow them either, and (as Laz posted) he doesn't. They're clearly violations of the ToS (and should be!)

The more interesting point of the discussion, I think, is: if person X enjoys looking at AI-generated images of underage girls, none of them based on any particular real person, is X's conduct legal? If it's not legal, why is it not legal? If X was looking at AI-generated images of of-age people, would those be legal? Why or why not? What if X was looking at AI-generated images of (current-aged) Taylor Swift? Legal? Not-legal? Would it be different if pictures of teenage Taylor Swift were used as source material? Would it be different if the pictures were of adult Taylor Swift but the person asked the AI to de-age her?

I'm not getting into 'distributing' - this is an 'in the privacy of their own home' question.

In Canada, anything that looks underage is very likely illegal. That's true even if it's 'just nudity,' notwithstanding that the only Canadian case I can find doesn't meet that criteria.

For US law, the question is much more difficult.

And, if the pictures are 'deepfakes', we get into the argument of whether they're intended to be believed to actually be person X or whether they're intended to 'parody' person X (clearly not something person X would do, or clearly labeled as 'not person X'). How much damage is done? What should the penalty be? Does it matter if the person in question is underage? Or was underage when the source photos were taken?

Those are all interesting. They have nothing to do with SoL's policies.

Replies:   DBActive
DBActive ๐Ÿšซ
Updated:

@Grey Wolf

There's a long report on AI and child porn from a British group at https://www.iwf.org.uk/. They claim that a lot of the generated images feature known, real victims in new situations. Not being technologically educated, I would assume that if a person asked for an image of an 8-year-old sucking cock or a building collapsing, the AI would seek the best images of the real event to generate the image. Maybe I am wrong.

Even if I am wrong, I suspect most are using models that they ask the AI to modify and present as new images.

The worms slither in because the Defendant would have to testify. That opens him up to a wide range of cross-examination and is almost always a very bad idea.

Finally, on SOL people are posting claimed AI-generated images of girls who could be any age from mid-teens to young adults. They caption them as underage characters in the story. That presentation might be wandering into dangerous territory.

Without the ability to confirm that these images are actually computer-generated, or to do age verification on their subjects, again, it might be dangerous.

I would hate trying to defend someone with hundreds of images of CSAM on the claim "They're all AI generated"

ETA - some of these images (Max Tassel's) seem undeniably in violation of Canadian law even though they are obviously cartoons.

Replies:   LupusDei  Grey Wolf
LupusDei ๐Ÿšซ
Updated:

@DBActive

some of these images (Max Tassel's)

If I'm not mistaken... or unless something has changed since I last looked at what I believe is being referenced...

Could you possibly mean Mat Twassel?

Well, I did go and look at his latest post; it doesn't seem he has changed the technique of his art.

Mat Twassel's work is not AI generated at all. I think I can tell that with a high level of certainty.

His technique is 3D modeling.

Virtual dolls manipulated in a CAD-like (computer-aided design) environment, to put it bluntly. On the technology side, that's a very different game, and as a form of art it has been around much longer - perhaps literally since CAD software has existed in any form. I vividly remember people doing remarkably photorealistic* work in 3D Studio MAX around 1997 or so. (*What passes that mark may have changed over time.)

Very rough 3D-modeled porn depicting unmistakably minor characters has been around in quantity for decades; any reasonable person would assume the legality questions regarding it were settled long ago (not necessarily true, of course).

Legitimate models of people are created and shared and modified, and subject to copyright. They may in theory, but rarely if ever do, claim to closely represent actual people.

Nowadays you can create a generic body model in minutes using tools such as MakeHuman, including a nominally nude body (that's what an actual human is like, after all) of any apparent age, body weight, etc. Posing and, yes, dressing the model is a high art of its own.

I'm not aware of any pornstar actually selling a CAD model created from a scan of her own body, with real image textures and motion-capture-aided rigging and posing information, but in theory that should be possible, and there could be market potential for it. I believe such things have been discussed in fiction.

Using random images of a person in this fashion would be considerably difficult, however. You would have to unfold the surface of the form; 'baking' textures for 3D models is a high-skill art.

So, if my initial guess is true, that's a rather bad example for this discussion, as it has nothing (apparent) to do with AI image generation.

However, in the wider discussion, AI image generation or image-altering tools could be - and eventually very likely will be, if they aren't already - created and used for texturing 3D models. Alternatively, the 3D model could be used as the initial scene and then re-rendered by AI imaging tools. As far as I'm aware, while at least the latter is definitely possible, whether it benefits the resulting image quality may still be in doubt.

Replies:   DBActive
DBActive ๐Ÿšซ
Updated:

@LupusDei

You're right about the name.

I assume you're right about the image creation, although he does discuss using AI in this thread. Nonetheless, under Canadian law, I don't think it makes a difference how the image was created. If it is seen as advocating underage sex or incest involving people under age, it seems any image, no matter how it was created, would be actionable.

163.1 (1) In this section, "child pornography" means
(a) a photographic, film, video or other visual representation, whether or not it was made by electronic or mechanical means,
(i) that shows a person who is or is depicted as being under the age of eighteen years and is engaged in or is depicted as engaged in explicit sexual activity, or
(ii) the dominant characteristic of which is the depiction, for a sexual purpose, of a sexual organ or the anal region of a person under the age of eighteen years;
(b) any written material, visual representation or audio recording that advocates or counsels sexual activity with a person under the age of eighteen years that would be an offence under this Act;
(c) any written material whose dominant characteristic is the description, for a sexual purpose, of sexual activity with a person under the age of eighteen years that would be an offence under this Act; or
(d) any audio recording that has as its dominant characteristic the description, presentation or representation, for a sexual purpose, of sexual activity with a person under the age of eighteen years that would be an offence under this Act.

Grey Wolf ๐Ÿšซ

@DBActive

"the AI would seek the best images of the real event to generate the image"

AI image generation doesn't work that way. There are no 'images' of any 'events' within the model. There is an enormous model of weighted probabilities for what various pixels will be based on the query.

If the original training data included CSAM (likely, though a tiny fraction, if the model was trained on LAION-5B), and the CSAM's tags included ages in that range (unknown; images were tagged mechanically, and without knowing which images in LAION-5B were CSAM there is no way to find out what their tags were), then a prompt in that range would be more likely to draw on weights influenced by those images. But, remember, the images themselves are gone.

To put it in context, the images referenced by LAION-5B total approximately 100 terabytes. Models trained on LAION-5B are usually ~5 gigabytes. That's 1/20000th of the size. There are no images - not even any subsections of images - in there.
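
Anyone can verify that for themselves by peeking inside a checkpoint file: it's nothing but named arrays of numbers. A sketch using the safetensors library (the filename is a placeholder):

```python
from safetensors import safe_open

# List the first few tensors in a (hypothetical) model checkpoint.
# Every entry is a named array of floating-point weights - no image files.
with safe_open("model.safetensors", framework="pt") as f:
    for name in list(f.keys())[:5]:
        print(name, f.get_slice(name).get_shape())
```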

The worms slither in because the Defendant would have to testify. That opens him up to a wide range of cross-examination and is almost always a very bad idea.

Why would the defendant need to testify? Many AI generators tag the image with the textual prompt. Image tagged with prompt? Any competent expert will be able to show that it was so generated. Not only that, but it would be easy to find an expert who could look at the image and say 'Yup, that's computer generated'.

I would hate trying to defend someone with hundreds of images of CSAM on the claim "They're all AI generated"

Depends on the state of the law. If the law becomes clear that 'proven' AI-generated works are legal, anyone who wishes to store such materials will be careful with their record-keeping.

Replies:   DBActive
DBActive ๐Ÿšซ

@Grey Wolf

AI image generation doesn't work that way. There are no 'images' of any 'events' within the model. There is an enormous model of weighted probabilities for what various pixels will be based on the query.

OK, I should have said the tags, rather than the images, are what's used when constructing the final output.

Why would the defendant need to testify? Many AI generators tag the image with the textual prompt. Image tagged with prompt? Any competent expert will be able to show that it was so generated. Not only that, but it would be easy to find an expert who could look at the image and say 'Yup, that's computer generated'.

A couple of reasons why he would have to testify.
The government would also have experts available to dispute any expert witness proffered by the defense during rebuttal if the claim was that these were computer generated images.
Foundation evidence is needed for an expert opinion.
If the database used contained CSAM and the Defendant knew or should have known that, it would be a specious defense at best.
Finally, the jury would want an explanation from the defendant.

As to your last point - these guys are not careful with their record keeping nor would he be able to introduce evidence of that record keeping without testifying.

Replies:   Grey Wolf
Grey Wolf ๐Ÿšซ

@DBActive

The government would also have experts available to dispute any expert witness proffered by the defense during rebuttal if the claim was that these were computer generated images.

If the proof is producing an unmodified computer, downloading the publicly available models, and recreating the image right there in court, how would one 'rebut' it? The image would unquestionably be AI-generated.
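
A sketch of what that courtroom demonstration might look like (diffusers library; the model name, prompt, and seed are placeholders, and bit-exact reproduction also assumes matching library versions and hardware):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Fixing the seed makes sampling deterministic: the same model, prompt,
# step count, and seed regenerate the same image.
gen = torch.Generator("cpu").manual_seed(123456789)
image = pipe("a watercolor landscape at dusk",
             num_inference_steps=30, generator=gen).images[0]
image.save("reproduced.png")
```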

If the database used contained CSAM and the Defendant knew or should have known that, it would be a specious defense at best.

Maybe. These are legally downloadable models which millions of people have downloaded. It would be easy to produce an expert to testify that they do not and cannot 'contain' CSAM. The defense could put up a strong case that, if the government is not in the least concerned by these models being publicly available, and no one has ever been convicted of a crime by having one, why would you pick on one lone user rather than the 'kingpins' distributing the model? Remember, all you need is reasonable doubt.

Finally, the jury would want an explanation from the defendant.

I think this is the key point. The other ones can be handled in various ways, in my opinion.

The defendant can always 'take the fifth' on cross examination, and it's illegal for the prosecution to suggest that 'taking the fifth' is in any way an admission of guilt. Now, a jury may itself decide that, but people 'take the fifth' and are subsequently acquitted a fair number of times.

It's a risk, of course, but my point here is that the defendant not testifying is also a risk. Many juries will conclude that someone who won't even speak in their own defense must believe that they're guilty.

As to your last point - these guys are not careful with their record keeping nor would he be able to introduce evidence of that record keeping without testifying.

Automate the metadata process - each image carries the prompts, references to the models, the seed values, the workflow, etc. That removes any reliance on them being careful.
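
A sketch of that automation (Pillow's PNG text chunks; the keys and values are placeholders, though popular Stable Diffusion front-ends do something very similar):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

meta = PngInfo()
meta.add_text("prompt", "a watercolor landscape at dusk")
meta.add_text("seed", "123456789")
meta.add_text("model", "example-model-v1")  # hypothetical model name

img = Image.open("generated.png")
img.save("generated_tagged.png", pnginfo=meta)

# The metadata travels inside the file and can be read back by anyone:
print(Image.open("generated_tagged.png").text)
```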

As for introducing them, if you introduce the image, you also introduce the metadata. Can't have one without the other. No need for the defendant to testify.

Replies:   Dominions Son  DBActive
Dominions Son ๐Ÿšซ

@Grey Wolf

why would you pick on one lone user rather than the 'kingpins' distributing the model?

Because the 'kingpins' have the resources to fight back.

The defendant can always 'take the fifth' on cross examination

As I understand it, no, a defendant can not testify and then take the fifth on cross examination.

Replies:   Grey Wolf
Grey Wolf ๐Ÿšซ

@Dominions Son

You're correct. They cannot take the fifth on matters addressed by their testimony. They can refuse to answer about matters not covered by their testimony.

Replies:   Dominions Son
Dominions Son ๐Ÿšซ

@Grey Wolf

They can refuse to answer about matters not covered by their testimony.

As I understand it, such questions are usually not allowed on cross examination.

Replies:   DBActive
DBActive ๐Ÿšซ

@Dominions Son

They can't refuse to answer any questions relevant to the issue. Their attorney may object that a question is outside the scope of direct examination, but any questions related to the matters at issue or the credibility of the witness will be allowed.
"Isn't it true that you have been collecting pictures of naked children for many years?" is OK even if there were no questions on direct concerning the collection of pictures. "Weren't you suspended from law school for plagiarism?" would be OK because it affects credibility.
What do you think the limit on the cross-examination would really be? And what do you think the reaction of the jury would be?

Replies:   Joe Long
Joe Long ๐Ÿšซ

@DBActive

Thank goodness my only naked pics are AI generated.

Replies:   awnlee jawking
awnlee jawking ๐Ÿšซ

@Joe Long

Thank goodness my only naked pics are AI generated.

Why? Are you that ugly? :-)

AJ

Replies:   Joe Long
Joe Long ๐Ÿšซ

@awnlee jawking

hahahahaha

Let me rephrase. There are no nudes in my collection of female images.

DBActive ๐Ÿšซ

@Grey Wolf

I'm not going to beat this horse any longer, but it seems that you don't understand how the rules of evidence and expert testimony work. That's not a criticism, they're long and complicated.

Replies:   Grey Wolf
Grey Wolf ๐Ÿšซ

@DBActive

I have enough understanding to know how introducing metadata would work, and that it would not require the defendant to testify if the image was admitted.

You're correct that 'taking the fifth' is only allowed in some cases on cross-examination.

DBActive ๐Ÿšซ

@DBActive

Relevant to this topic: an AI image of an apparently preteen girl sucking cock was deleted by Lazeez as inappropriate.
The "author" hasn't posted since.

Replies:   Dominions Son
Dominions Son ๐Ÿšซ

@DBActive

Relevant to this topic: an AI image

Do you know it was AI generated and not just manually operated 3D modeling software?

Replies:   DBActive  tenyari
DBActive ๐Ÿšซ

@Dominions Son

That's an assumption based upon his posts in this thread, and the fact that the image appeared more realistic than his other posts.

tenyari ๐Ÿšซ
Updated:

@Dominions Son

It is also possible to make an image in a 3D tool and feed it into an AI art tool to get derivatives, using features like Stable Diffusion's "img2img", but this tends to change the art style as well.

Here's a "PG" thread in the Second Life community where people are feeding screenshots of their characters into AI tools. That is essentially 3D art to AI art:

https://community.secondlife.com/forums/topic/505269-how-does-your-avatar-look-when-enhanced-by-ai/

Replies:   Dominions Son
Dominions Son ๐Ÿšซ

@tenyari

I wouldn't consider either of those things to be AI-generated 3D modeling.

tenyari ๐Ÿšซ

@DBActive

Untangling some things.

On the models used to train AI: They scoured the web I believe. If you work with Stable Diffusion for a bit you'll get the impression that the scouring featured mostly PG work, and when it did not it was the kind of stuff on 4-Chan: gay porn and furries.

AI art was, right out of the gate, very good at drawing male anatomy and furries. But it still has trouble putting in female nipples and lower anatomy. It can mostly do the nipples, but when you get down below it starts to draw all kinds of things, and it has a quirk of randomly making women into futas...

As AI Art has been out to the general public now for a bit more than a year - it's become very Chinese Hentai. Specifically Chinese style and not Japanese, Korean, American, or other influences.

But it still can't do a vagina 9 times out of 10. Like... when it gets it right it was most likely a random math result. Every now and then though, there's an explosion of perfectly correct male genitals all over the place.

People are actively working to make training models designed to REMOVE these things because so many training models have been "infected" with dicks and Chinese hentai girls. Your other option is to not use any training models released since it all went public and then you're basically dealing with random furries and dicks - but they tend to not show up in your SFW stuff (unlike updated user made training models where you can ask for Disney art and get random Chinese porn).

Online tools have better filters than downloading an AI art app yourself - they put a lot of effort into cleaning things up.

On the legal front:

We've got 3 competing narratives in the thread if I read things right.

1. Is AI Art that has very young appearing characters a legal issue?
2. What about AI Art used to put real people in NSFW situations?
3. What about when you combine these 2 - watchdog agencies are now claiming this is happening.

Looks like in the USA the SCOTUS made #1 OK.
#3 is definitely a crime because real young people are involved - it has nothing to do with the images being deepfakes.

#2 is such a new thing we need laws on it. These laws are going to be complicated battles over harassment vs free speech vs who knows. I'm pretty sure everyone wants it illegal but the battle will still occur because the first laws will likely over-reach.

For the rest of the world - same issues, but the cards will land differently.

(For example, in Japan it's a worse offense to accuse someone of harassment than to harass someone. Truth is not a defense. If you ruin the reputation of a pedo, you go to jail and they don't. That's the opposite of pretty much the rest of the planet.)

As for the panicking watchdog agencies claiming AI art was trained on young people... they remind me of the people panicking about Dungeons and Dragons, Heavy Metal, video games, and women who exposed their ankles...
- Every time there is a change in society these people pop up and exaggerate things.

Grey Wolf ๐Ÿšซ

@tenyari

'On the models used to train AI: They scoured the web I believe. If you work with Stable Diffusion for a bit you'll get the impression that the scouring featured mostly PG work, and when it did not it was the kind of stuff on 4-Chan: gay porn and furries.'

Depending on who 'they' is, that's correct. Stable Diffusion 1.5 used LAION-2B (https://simonwillison.net/2022/Sep/5/laion-aesthetics-weeknotes/), which is a roughly 2-billion-image English-captioned dataset. There's not a ton of quality control, and it's entirely possible that the LAION team's webcrawler hit more naked guys than naked girls and found a lot of anime.

One thing that's been repeatedly found: filtering NSFW data out of datasets makes the resulting models worse at drawing people in general. The theory behind this is that, even though there's a huge difference between how humans draw things and how AI 'draws' things, knowing how to draw a naked human is fundamental to knowing how to draw a human at all.

And, yes, there's an explosion of custom models, both ones that remove dicks and hentai and ones that emphasize one or both.

Replies:   Michael Loucks
Michael Loucks ๐Ÿšซ

@Grey Wolf

knowing how to draw a naked human is fundamental to knowing how to draw a human at all.

At least in the past, art schools knew this and taught it. In our current Puritan moral panic over sex and nudity (which are actually separate things), I'm not sure that can survive.

Replies:   DBActive
DBActive ๐Ÿšซ

@Michael Loucks

I haven't seen any moral panic over sex or nudity. What I have seen is moral panic over non-consensual sex and nudity that has led to some changes.

Replies:   Grey Wolf
Grey Wolf ๐Ÿšซ

@DBActive

It depends on where you live. PornHub just banned anyone who lives in Texas (and isn't using a VPN) from using their services, with Louisiana and some others likely to follow. The law that triggered this is, IMHO, poorly designed for the purpose of keeping minors away from porn (a hopeless task anyway), but very well designed to deter adults from jumping through the necessary hoops (including providing sensitive personal information). It puts people in the position of needing to essentially wave a big flag saying 'Hey! I watch porn! And I care enough about it to surrender my personal information!'

Now, 'porn' isn't technically 'nudity', but it's definitely 'sex', and some of it (albeit a small fraction) is just nudity.

There's definitely another 'moral panic' over non-consensual sex and nudity, but that's orthogonal to the PornHub et al age rules. While PornHub has (somewhat inadvertently) hosted non-consensual materials, and could potentially do a better job of policing it, that's not something they advertise nor that most people who want non-consensual materials would seek them out for.

One of the problems with our current ridiculously polarized environment is that we're losing 'shared reality'. In many states, there's no particular 'moral panic' about sex and nudity. In others, there is.

Overall, I don't think America (at an average level) is getting all that much more 'Puritanical' about sex and nudity. Movies are far more free to show sex and nudity than they were forty or fifty years ago. We do have a moral panic about offensive humor and what constitutes a 'laughing matter' (Try to imagine getting 'Blazing Saddles' made today! Or even 'Animal House'), but that's a different subject.

This year's Best Actress winner won for a mainstream movie in which she spends (per reporting - I have 'Poor Things' on my should-watch list, but have not yet seen it) much of her time naked and/or engaged in sex. No one (that I've seen) is out there trying to get it banned or is even expressing particular moral outrage at it.

Replies:   Dominions Son  DBActive
Dominions Son ๐Ÿšซ

@Grey Wolf

It depends on where you live. PornHub just banned anyone who lives in Texas

Texas is the seventh state that PornHub has blocked over age verification laws.

https://reason.com/2024/03/18/pornhub-pulls-out-of-seventh-state/

Louisiana was the first state to enact a law requiring web porn platforms to verify visitor ages. Yet Pornhub has not blocked visitors from Louisiana. Why?

The difference is in the details of complying with Louisiana's law. Verifying visitor ages in Louisiana does not require porn sites to directly collect user IDs. Rather, the state's government helped develop a third-party service called LA Wallet, which stores digital driver's licenses and serves as an online age verification credential that affords some privacy.

Using this service does not require people to turn over their real identities to porn sites. "Through LA Wallet's [Anonymous Remote Age Verification] capabilities, adult content sites can anonymously verify the age of users," its website states.

Replies:   DBActive
DBActive ๐Ÿšซ

@Dominions Son

The article you cite appears to be a press release from Pornhub. None of the state laws require the porn site to directly collect identity information. None permits the porn site to retain or use that information.
There are several reliable and secure third party verification services available.

Replies:   Dominions Son  Grey Wolf
Dominions Son ๐Ÿšซ

@DBActive

The article you cite appears to be a press release from Pornhub.

It's an article from Reason magazine.

Replies:   DBActive
DBActive ๐Ÿšซ
Updated:

@Dominions Son

And placed by pornhub or its lawyers.

Grey Wolf ๐Ÿšซ

@DBActive

The point of it is to create a 'chilling effect'. Persons trying to use PornHub need to give their sensitive personal information to a third-party verification service, which (to them) is of unknown provenance. Given the relatively low value of the service provided (PornHub is free and there are dozens of competitors) and the high perceived risk of identity theft, many people will decline.

It's a very classic 'for the children!' move, using a fear of children getting to porn (which this makes no meaningful difference to at all) to block adults from legal activities.

Any child of a porn-consuming age will simply submit their parent's ID to the verification site. Voilà, porn! Or they'll get it on any of the other dozens of sites. Or they'll use their parent's already-verified login. Or a friend's parent's. Or one stolen off the internet. Or a VPN. And on, and on, and on.

And, again, as you correctly pointed out, there are already laws actually dealing with this. 'Age verification' in this way is ineffective and useless. We didn't need a new law, and the effect (and intent) of the law is to inconvenience adults, not children.

Replies:   DBActive
DBActive 🚫
Updated:

@Grey Wolf

If it were ineffective, the industry wouldn't be fighting it. They want clicks, and the young teen market is a major source of their consumers.

The "they will get it anyway" is a complete bullshit argument. Kids get beer and weed illegally all the time - by your argument we should let 12 year-olds grab a six pack and a pack of edibles at the local 7-11.

DBActive 🚫

@Grey Wolf

Restrictions on children accessing porn are evidence of moral panic? You do realize that every state in the country has laws against supplying porn to kids, don't you? If there were a moral panic, the owners of these sites would be in jail for violating those laws, and the sites would be closed down.

Replies:   Dominions Son  Grey Wolf
Dominions Son 🚫

@DBActive

Restrictions on children accessing porn are evidence of moral panic?

In the abstract, no. However, hastily enacted and poorly designed internet age-verification laws that are likely both to prove ineffective and to violate the privacy of the very children they are supposed to be protecting might be.

Replies:   DBActive
DBActive 🚫

@Dominions Son

How do these laws violate the privacy of children?

Replies:   Dominions Son
Dominions Son 🚫

@DBActive

How do these laws violate the privacy of children?

Several of the age verification laws are broader than what would normally be considered "porn sites".

Some states include all social media. They effectively prohibit anonymity or pseudonymity.

Affected sites have to store real identity data, data that could be stolen.

Replies:   DBActive
DBActive 🚫

@Dominions Son

All of the laws specifically prohibit retention of personal identification information.
Further, the sites can use third-party verification systems.

Replies:   Grey Wolf
Grey Wolf 🚫
Updated:

@DBActive

All of the laws specifically prohibit retention of personal identification information.

Further, the sites can use third party verification systems

This is correct, but we've seen cases before where sites that 'do not retain personal information' actually do.

More to the point, the law is incredibly vague:

A commercial entity that knowingly and intentionally publishes or distributes material on an Internet website or a third party that performs age verification under this chapter shall require an individual to:

(1) provide digital identification; or

(2) comply with a commercial age verification system that verifies age using:

(A) government-issued identification; or

(B) a commercially reasonable method that relies on public or private transactional data to verify the age of an individual.

What does that include? Use of a credit card? Showing a utility bill in your name? Claiming you pay the ISP? Checking a box saying you're 18?

The last seems absurd, but I could see someone claiming that's a 'private transaction', and historically on the internet that's been considered 'commercially reasonable'.

What does 'provide digital identification' mean? Username and password? That's been 'digital identification' for decades. Note that (1) does NOT mention the 'digital identification' conveying age information, only (2) does. Technically speaking, 'a website' requiring people to enter 'digital identification' (username and password) complies!

Or maybe not. Again, that seems absurd, but the law is so vague that it might well apply.

But it gets worse. Consider: person M, a minor, sends identification (whatever is asked for) to site X. This identification is fraudulent, but the site has no way to know that (it's their parent's ID, it's fake, etc.). Site X grants access. Based on that, PornHub grants access.

At a later time, M's parent P finds out that M has been browsing and uses the law to sue for damages.

X has no data on how M was granted access. PornHub has no data on how M was granted access. How does the court determine whether X or PornHub was at fault?

That may not matter, though. The law says:

A commercial entity that knowingly and intentionally publishes or distributes material on an Internet website that is found to have violated this section is liable to the parent or guardian of the minor for damages resulting from a minor's access to the material, including court costs and reasonable attorney's fees as ordered by the court.

Is simply looking at the ID and granting access 'compliance', so that there can be no violation? They certainly didn't do anything 'knowingly and intentionally'. But M, a minor, got access.

That's my point: the law doesn't prevent M from getting access, nor does it create actionable consequences if M does. You can't successfully sue if the site 'complied'. Nothing about the age verification process addresses the fraudulent use of credentials. There are no sanctions if X simply says 'Yup!' to everyone who presents an ID.

There's no definition of what it means to 'verify' a government ID. One could verify it in a somewhat solid manner (require the user to send a photo of themselves holding up their ID next to their face, for instance), but that's not required (and any kid worth their salt could photoshop that, which puts us back to square one). But at least some sites just require you to send in the front and back of your ID. A quick snatch of a parent's wallet, some scans, and you're in. The site can't even retain that much, so when M is caught, they can say 'Well, I never sent in a picture of your license!'

Which brings me back around to my original point: this is a 'for the children!' moral panic, one intended to throw obstacles at adults while doing nothing to meaningfully 'protect' children from porn. That should be obvious from how the law is written: if a site 'complies' (which is extremely vague), that's it. If a minor still gets to the porn, all good, no consequences.

One last example of the inanity of this law:

Sec. 129B.002. PUBLICATION OF MATERIAL HARMFUL TO MINORS.

(a) A commercial entity that knowingly and intentionally publishes or distributes material on an Internet website, including a social media platform, more than one-third of which is sexual material harmful to minors, shall use reasonable age verification methods as described by Section 129B.003 to verify that an individual attempting to access the material is 18 years of age or older.

"Now Introducing: PornHubOPedia! All the porn we have, plus a complete encyclopedia which is at least 68% of the total volume of content available!"

What is the definition of 'material?' Gigabytes, or articles? PornHub claims to have 3 million videos. Wikipedia has 6.7 million articles in English. Done!
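Back-of-the-envelope check, assuming each video or article counts as one 'item' of material (my assumption; the law never defines the unit):

porn_videos = 3_000_000    # PornHub's claimed catalog
wiki_articles = 6_700_000  # English Wikipedia articles
print(porn_videos / (porn_videos + wiki_articles))  # ~0.31, under one-third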

Grey Wolf 🚫

@DBActive

'Restrictions on children accessing porn are evidence of moral panic?'

Using 'for the children!' as an excuse to pass laws largely targeting consenting adults is a very common sign of a 'moral panic'. As you say, there are already existing laws which would target any service intentionally offering access to minors. The new laws use that as an excuse, but that isn't really their goal.

The new laws are ineffective at actually blocking children (that was known to and acknowledged by legislators at the time) but are really aimed at adults.

Texas would have no authority to throw the owners of Pornhub in jail (even if they were in Texas, which they are not). Pornhub does not market to children, nor does it provide internet access to children. If anyone was thrown in jail, it would be those who provide the access (predominantly parents, since schools and libraries must filter, by law, and therefore cannot knowingly be providing access).

Replies:   DBActive
DBActive 🚫

@Grey Wolf

Of course they intentionally market to children; otherwise they would have put their own age verification system in place, as they promised to do decades ago.
As to schools and libraries: are you aware of the bills introduced throughout the country, written and promoted by the joint efforts of the AFT, NEA, and NLA, to exempt teachers and librarians from laws prohibiting distribution of porn to kids?

Replies:   Dominions Son
Dominions Son 🚫

@DBActive

otherwise they would put their own age verification system in place as they promised to do decades ago.

It would be interesting to know how Pornhub could have promised to do something decades (plural as in more than one decade) ago when it's only existed for 17 years.

Replies:   DBActive
DBActive 🚫

@Dominions Son

The industry promised this in response to complaints about commercial exploitation of children. A number of anonymous (to the site) systems are available and very underused, due to the targeting of minors by the sites.

Michael Loucks 🚫

@tenyari

Every time there is a change in society these people pop up and exaggerate things.

Inducing 'Moral Panic' works. And activists make use of that knowledge to manipulate the public in ways the government could only dream of. Of course, that manipulation often leads to government action when the people demand the government 'do something' about the induced moral panic.
