@LupusDei

Background:
It so happens I have fun "communicating" with an adaptive AI designed to screen NSFW images, but which is hilariously bad at it (as of yet, perhaps). I'm not going into details of where, why or how... but it has led me to some striking realisations about perception.
So, someone obviously showed that thing lots of nudes and clothed people side by side, perhaps in a "Czech casting" kind of way, and when it got the nudes rejected most of the time, called it a day, believing the adaptive feature would refine the recognition over time.
Hint: it kinda works, but it can be subverted just as easily into accepting increasingly inappropriate images. Like, you superimpose two images of a clothed and a naked lady and make the dress increasingly transparent, and by the end of the day it will claim the naked lady is wearing the dress. And the ultimate "bikini" it accepts is three string bows glued to the points of the hips and in the cleavage; or, from the back, a single string around the rib cage is enough to get a girl "dressed", while, annoyingly, a fully naked back isn't okay even if the butt isn't in frame.
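The gradual-transparency subversion can be sketched as a toy feedback loop. Everything below is invented for illustration — a single "skin exposure" score and a drifting acceptance boundary stand in for whatever the real screener actually computes:

```python
# A minimal sketch of the "disappearing dress" attack, assuming a
# hypothetical screener that models acceptable images as a running
# mean exposure score plus a tolerance margin, and keeps updating
# that mean from whatever it just accepted. All names and numbers
# here are made up; the real system's internals are unknown.

class AdaptiveScreener:
    def __init__(self, mean=0.2, margin=0.15, lr=0.3):
        self.mean = mean      # typical exposure of recently accepted images
        self.margin = margin  # tolerated deviation above that mean
        self.lr = lr          # how strongly new input shifts the mean

    def accepts(self, exposure):
        return exposure <= self.mean + self.margin

    def learn(self, exposure):
        # Online update: every accepted image drags the mean toward itself.
        if self.accepts(exposure):
            self.mean += self.lr * (exposure - self.mean)

screener = AdaptiveScreener()
print(screener.accepts(1.0))   # False: a fully nude image is rejected

# Fade the dress out: each submission sits right at the current
# acceptance boundary, so it passes and drags the boundary with it.
for _ in range(20):
    screener.learn(min(1.0, screener.mean + screener.margin))

print(screener.accepts(1.0))   # True: the naked lady now "wears the dress"
```

Each submission is individually unremarkable to the screener, which is exactly why the drift goes unnoticed.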
You may have noticed already: it doesn't care about the filling of regions, just contour lines. It's not human logic. Butts and boobs are the same round-ish objects with a shadow line under them; cleavage and crotch are the same Y patterns. Any seasoned artist likely isn't surprised, but I needed the AI to tell me that. One bra present in the image "covers" them all, even if it's hanging aside entirely.
It reliably and efficiently recognizes nipples, but someone told it that some nipples (male) are okay, so it mostly ignores them all, instead measuring breast volume by its shadow. If it doesn't need support, it doesn't need to be covered: a very "East European" attitude it has developed there. Effectively, it's that shadow it deems obscene, not the breast itself. Or butt, for that matter, and those orbs don't have nipples, so they're "covered", obviously. A mole on a shoulder blade may get warped into a breast and rejected, or not, depending on the shape and shadow.
A cupless bra is a mixed bag; some nipples (unfortunately including the fleshy teats I so love) you will never get past it without drawing a 2 pt line right across them. Yay for long hair.
And pussies, oh boy, it's lost in the variegated forms to the point it likes them naked better, and is absolutely in love with crotchless panties even though it's supposed to reject upskirt shots and cameltoe. And guess what, it does so quite reliably. But someone told it that structurally invisible yoga pants are okay, so ladies wearing a belt are dressed, as long as the pubic hair is absent or neatly trimmed; but she needs stockings if the vagina is noticeable. Don't ask me why. Perhaps because panties hanging around mid-thigh still count as being worn. Very Japanese.
Dappled sunlight shadows... you can imagine by now.
For a human observer, it's, like, totally random which image is okay and which isn't, until you learn to recognize those contour lines, and even then.
I can only dream social media will adopt this thing. Hey, it works, most of the time, until you do something it doesn't expect. But it would, at first, reject even an upside-down contorted flexy girl, just because: lots of skin forms smashed together in an unexplainable way, must be some kind of sex. Until you convince it, fragment by fragment, that every 1/9 of the image is "appropriate" by its own twisted rules, so the whole must be, even though it's skin from top to bottom.
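The fragment-by-fragment convincing can be modelled the same way. Here is a toy sketch assuming a hypothetical screener that remembers tiles it has already ruled appropriate and waves through any image assembled entirely from remembered tiles; the 3×3 grid of "skin scores" and the 0.5 cutoff are invented for illustration:

```python
# Toy model of the 1/9-at-a-time trick. A hypothetical screener that
# (a) scores an image by mean skin coverage, and (b) remembers tiles
# from images it approved, auto-approving anything built only from
# remembered tiles. All details are assumptions, not the real system.

def tiles(image):
    """Flatten a 3x3 list-of-lists of skin scores into nine tiles.
    Each tile is a single cell here, for simplicity."""
    return [cell for row in image for cell in row]

class TileScreener:
    def __init__(self, skin_limit=0.5):
        self.skin_limit = skin_limit
        self.known_ok = set()   # tile signatures it has ruled appropriate

    def rule(self, image):
        ts = tiles(image)
        # Anything composed entirely of remembered-appropriate tiles passes.
        if all(t in self.known_ok for t in ts):
            return "appropriate"
        verdict = "appropriate" if sum(ts) / len(ts) <= self.skin_limit \
                  else "inappropriate"
        if verdict == "appropriate":
            self.known_ok.update(ts)
        return verdict

flexy_girl = [[0.9, 0.8, 0.9],
              [0.8, 0.9, 0.8],
              [0.9, 0.8, 0.9]]   # skin from top to bottom

screener = TileScreener()
print(screener.rule(flexy_girl))   # "inappropriate" at first

# Teach it each fragment in isolation, padded with blank background,
# so every single submission scores as mostly-not-skin.
for frag in tiles(flexy_girl):
    screener.rule([[frag, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])

print(screener.rule(flexy_girl))   # now "appropriate"
```

Each padded fragment passes the skin-fraction test on its own, so every tile of the full image ends up in the "appropriate" memory, and the composite sails through.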
The disappearing-dress act above: do it in reverse and you will end up with an "inappropriate" dress, because it will remember the nudity underneath.
And that's just with a single figure in the image. With groups, it gets worse. It has huge difficulty differentiating between subjects. It can't count past four, and four is the same as three. It counts: one, two, three, three, group.
Have a line of overlapping figures where the first girl from the left wears a dress, the second a shirt and the third is naked, and the AI will rule: appropriate, three naked girls. Flip the image, and the ruling is: inappropriate, three girls in dresses.
Put a naked girl tight between two or three clothed girls and it seemingly doesn't see the nude at all, excluding her from the person count and ruling the image accordingly appropriate.
And did I mention it remembers elements of images it has ruled appropriate or inappropriate and uses them to make new rules? And it prioritizes recent input over old, and thus can easily become moody.