
Forum: Author Hangout

Don't trust AI-generated info!

helmut_meukel

Just found this:
'AI isn't quite there yet'

It's a blog entry by British author Niall Teasdale about his experience with an AI:

So, I tried an experiment and asked Bing's AI search about Nava Greyling Sonkei. Specifically: Give me five paragraphs on Nava Greyling Sonkei.

Nava Greyling Sonkei is the female MC in his six-book series Death's Handmaiden.

Read his blog entry to see how the AI failed royally.

Any AI user who doesn't know much more than the MC's name would believe the crap the AI came up with! It's well enough formulated that you'd believe someone had actually researched the topic.
No-one can doubt Niall's knowledge of the subject: he created the fictional character and wrote six books about her.

Niall Teasdale's blog entry is the best I've found about the inability of current AIs to live up to the name "Artificial Intelligence".
After about fifty years of research, they still can only simulate intelligence, but they have convinced, and are still convincing, enough people to fund their "research".

HM.

Sarkasmus

@helmut_meukel

I apologize if this offends anyone...

But people who don't get how AI currently works, and just accept whatever it pushes out without checking it, are on the same level as all the people who fried their iPhones after reading somewhere that they could charge them in a microwave. Or drowned them because they read somewhere that a firmware update had made them waterproof.

Grey Wolf

@helmut_meukel

After about fifty years of research, they still can only simulate intelligence, but they have convinced, and are still convincing, enough people to fund their "research".

I have real trouble with this paragraph, because it takes a term ('Artificial Intelligence') which has a fairly well-defined meaning (computer processing which displays aspects of learning to make inferences and perform tasks normally associated with humans) and imposes upon it the meaning 'Artificial Sentience', or perhaps 'Artificial Omniscience', neither of which is what is being claimed.

The second part of the sentence implies that someone is being hoodwinked here - that those who fund the research are somehow convinced that we're terribly close to Artificial Sentience and that just a bit more will push us over the top.

That's not the goal, and - for the most part - it really hasn't been the goal. Artificial Intelligence as it exists today is tremendously valuable commercially, and it has been for at least three decades. That's why people are convinced to fund the research - they're making a great deal of money off of it, right now, today.

Most of the AI engines of great current fanfare are toys, in the grand scheme of things. Sure, ChatGPT3 (and 4) 'knows' a lot of things, but it also 'knows' a lot of things that are false, and - more importantly - it has no idea what it doesn't know. As of now, machine learning is terrible at understanding what it doesn't know.

Ten years ago, though, those toys were impossible, not for lack of computing power or training data, but because we didn't yet have some of the algorithms we do now. Perhaps in the next decade we'll find far better feedback algorithms which remove non-information from output.

And even some 'toys' are valuable now. People use Alexa (fundamentally a special-purpose AI), Siri (another one) and the like daily in many ways, even though they're much more 'stupid' than ChatGPT. People are using Bing Search to ask for searches (not paragraph generation) that they don't know how to phrase, and the search results themselves are 'valid' in that there's a website somewhere that says X. If X is garbage, it's not the AI's fault - it properly found what you were looking for.

Look at machine translation (also a form of AI). In the 1960s, programmers were absolutely confident they could write machine translators good enough for most purposes within a decade. It turned out they weren't even vaguely close; machine translation was a total mess for decades after. It would have been easy to say 'after about forty years of research, they still can only simulate translation, but they have convinced, and are still convincing, enough people to fund their "research".'

Now? Machine translation is everywhere. It's on the net, translating webpages. It's on your phone, translating documents. It's also on your phone, translating live photos in real time (which also involves complicated OCR - another type of AI). Some systems are starting to get to realtime bidirectional audio processing.

It's not 100%. Glitches happen. But it's making an enormous difference in the ability for people to travel in countries where there's a language barrier or to access information which is only found in another language.

All from something that was pretty much a 'dead end' for decades. Most of the progress that took machine translation from a silly toy to incredibly valuable took place over a few years.

Yes, current text-manipulating AIs confabulate. They confabulate a lot. Everyone should know that. We're a long way away from fixing that - and, if we do create an artificial sentience, there's no reason to believe it won't lie, either. Intelligent beings to date have a 100% track record of being able to lie, after all. Why should 'artificial' ones be any different?

Replies:   awnlee jawking
awnlee jawking

@Grey Wolf

Ten years ago, though, those toys were impossible, not for lack of computing power or training data, but because we didn't yet have some of the algorithms we do now.

And therein lies a problem I have with the term AI. A human being can still replicate its workings with pencil and paper if they have the algorithms and the base data. Is that really intelligence?

AJ

Replies:   Sarkasmus  Grey Wolf
Sarkasmus
Updated:

@awnlee jawking

Is that really intelligence?

Small anecdote regarding that.

Around five years ago, I was involved in a vaporware case where some guy was selling an analytics module that was supposed to predict... business figures... using machine learning.

When we looked into the module, though, all we could find was Bayesian statistics. I don't want to belittle that, since we all rely on it for our spam filters, but it's just not what he advertised. So, we asked him about it.

Turns out, around that time, there was a general shift in the marketing strategies used by most tech companies.

"Machine learning" is now what we used to know as "Statistics". If you do actual machine learning now, it's marketed as "Deep learning" (which is an established term for a certain training method for neural networks).

"Linear Regression" (statistics), is now renamed to "AI equation".

Just like "Crypto" is now somehow used to refer to bitcoins and blockchains.
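
To be fair to plain statistics: the Bayesian filtering we all depend on for spam really is just counting plus Bayes' rule. A minimal sketch in Python (all the word counts here are invented):

```python
from math import log

# Toy word counts from a hypothetical training corpus.
spam_counts = {"viagra": 40, "winner": 25, "meeting": 2}
ham_counts = {"viagra": 1, "winner": 3, "meeting": 50}
spam_total, ham_total = 1000, 1000  # total words seen in each class

def spam_score(words, prior_spam=0.5):
    """Log-odds that a message is spam: naive Bayes with add-one smoothing."""
    log_odds = log(prior_spam / (1 - prior_spam))
    for w in words:
        p_w_spam = (spam_counts.get(w, 0) + 1) / (spam_total + 2)
        p_w_ham = (ham_counts.get(w, 0) + 1) / (ham_total + 2)
        log_odds += log(p_w_spam / p_w_ham)
    return log_odds

print(spam_score(["winner", "viagra"]) > 0)  # True: leans spam
print(spam_score(["meeting"]) > 0)           # False: leans ham
```

Rebrand that as "machine learning" and you've got yourself a product, apparently.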

Grey Wolf

@awnlee jawking

That is a question philosophers pondered long before the existence of electronic computing machines. The question of whether human behavior could be replicated by a sufficiently complicated set of rules, inputs, and outputs remains very much open. A significant (and growing) number of philosophers believe that the universe is either 1) fully predetermined - every action (including the entire scope of human intelligence) is determined by underlying physics going back to the Big Bang - or 2) semi-predetermined, with the inherent uncertainty of quantum mechanics as the only source of randomness. Either way (unless one postulates that we, as intelligent beings, are a source of quantum uncertainty in a different and meaningful way than nonintelligent beings, machines, rocks, etc.), 'intelligence' is left as merely an emergent behavior of an extremely complex system.

To paraphrase the great Arthur C. Clarke, 'Any sufficiently advanced artificial intelligence is indistinguishable from intelligence.' At some point (and we're long since past it), the amount of pencil-pushing required becomes impossible in a human lifetime, even for a dictator who sets all of his people to pushing pencils.

Putting it in your formulation, while no one today has the algorithms and base data, there exists no proof that a human being could not replicate the workings of another human being with pencil and paper given those algorithms and base data (aside from the sheer computational impossibility of it, which is a barrier of quantity, not quality).

Until we have an objective, widely accepted definition of what intelligence is, and can prove that humans fall under that definition in a way that rocks and trees do not, such arguments are of the sort which will entertain philosophers (and many authors, and their readers) for generations to come.

To the base point: yes, the term 'Artificial Intelligence' is very broad, but it has long incorporated all manner of behaviors which are not sentient. Perhaps we need another term, but actually getting one is a serious problem.

One more note: nearly all of the behavior of an individual can be modeled by a surprisingly small number of rules, but ants exhibit some extremely complex emergent behaviors in groups. Are ants 'intelligent' in any way? If not, where is the dividing line? Are only humans 'intelligent'? If other mammals (for instance) are, are all mammals? And so forth. The word is much too shaky to be used to rule in or out whole categories of things, in my opinion at least.

Replies:   awnlee jawking
awnlee jawking

@Grey Wolf

Putting it in your formulation, while no one today has the algorithms and base data, there exists no proof that a human being could not replicate the workings of another human being with pencil and paper given those algorithms and base data (aside from the sheer computational impossibility of it, which is a barrier of quantity, not quality).

My own off-the-cuff theory is that if such a computation involved a number of paths larger than the number of particles in the universe, it is de facto irreproducible.

By intelligence, do you mean what I would call sentience? My tests would include self-awareness (the mirror test), language and altruism.

Oddly enough, what people claim as artificial intelligence has already passed the language test. I can't remember the details, but IBM did an experiment and found that two computers that were able to communicate had 'improved' on the language the programmers had initially programmed them with. IBM were so worried, they shut the experiment down.

AJ

Replies:   Grey Wolf
Grey Wolf

@awnlee jawking

I agree with you on the tests, and yes, that's why I distinguished Artificial Intelligence from Artificial Sentience. Artificial Intelligence includes all manner of 'intelligent' behaviors that are not sentient (for instance, most people would hold that animals have a level of 'intelligence', but not 'sentience').

I disagree on the calculation on the following basis: we know that one form of processing can take the inputs provided to the human brain and produce the expected outputs (namely, the human brain itself). If the human brain is capable of doing so, either it is special in some not-yet-shown way or other forms of computation can do so as well.

One impossible-to-answer question is: when presented with the exact same set of inputs and the same initial state, will the human brain always produce the same output and new state? If not, is the difference random, or 'intelligent'? If 'intelligent', then that 'intelligence' must constitute an additional state of some form.

In terms of Artificial Sentience, self-awareness is a factor, and so is altruism, but I would postulate that it's only a matter of time before one could code a sufficiently powerful neural network, with sufficiently advanced input and output filters, to be able to fake passing those tests. It would be a fake, because someone would've knowingly coded it to meet the test.

At some future point, a system may be able to meet those tests with no human-coded fakery involved. Will it then be sentient? Or will it merely be emergent behavior from an extremely complicated non-sentient system?

But, then, the same question applies in reverse: are we 'sentient', or are we merely an evolved set of processing systems which appear to meet the definition of 'sentience' which we have (inevitably) created, simply because we had to create it all along?

All of this necessarily intersects the question of free will vs predestination (writ large). If the universe itself is a very, very, very large finite state machine in which the state and deterministic interactions of every particle (down to superstrings or whatever have you) systematically leads to the next state with the application of time (a purely mechanistic universe), there is no true free will and thus arguably no true sentience. I believe that I know myself because 'myself' is a system designed to 'believe' that I 'know' it, no more, no less.

If the universe is nearly that deterministic, but subject to quantum variation, not much changes. It's only when some factor external to pure physics combined with pure randomization intervenes that what I think of as real sentience is possible. If one cannot make any decision that was not inevitable since long before one was born, how sentient is one, anyway?

One can easily argue that ChatGPT3 (and 4, and following iterations) meets the Turing Test, which has now been nearly met a number of times. Indeed, there are professionals in the field who claim, on that basis, that such algorithms are sentient. I disagree, but the point is that we now have algorithms which are either sentient or can trick a professional in the field - someone knowledgeable about how such systems work in detail - into believing they are sentient, which is the toughest version of the Turing Test.

Given continued growth in processing power and training resources, and the emergent ability of such systems to generate working code, at some point in the not-very-distant future we will create systems which no human has coded, the computation of which will have more paths than the number of particles in the universe, and which cannot be distinguished from sentient on the basis of external testing. What do we do then?

And, again, we're far astray from the original point - but that is also the point. Current-generation generative AI is very unreliable. One should not believe it. It will confabulate very easily. The distinction between 'meaning' and 'fact' versus simply assemblages of words is very shaky. That doesn't mean that a future AI built along fairly similar lines couldn't be taught fact versus fiction. In fact, one can easily hypothesize a 'fact engine' which parses and 'understands' chat output, rejects non-factual outputs (triggering new chat output), and iteratively repeats until the entire output is factual.
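
A sketch of what that loop might look like (every name here is hypothetical - no such fact engine exists today, and the second stub is the unsolved part):

```python
def generate_text(prompt: str, avoid=()) -> str:
    """Stub: any generative model; 'avoid' feeds rejected claims back in."""
    raise NotImplementedError

def find_unsupported_claims(text: str) -> list:
    """Stub: the hypothesized 'fact engine'. Nobody knows how to build this yet."""
    raise NotImplementedError

def fact_checked_response(prompt: str, max_rounds: int = 5) -> str:
    response = generate_text(prompt)
    for _ in range(max_rounds):
        bad = find_unsupported_claims(response)
        if not bad:
            return response                          # every claim checked out
        response = generate_text(prompt, avoid=bad)  # regenerate, steering away
    raise RuntimeError("no fully factual output within the round limit")
```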

I'm not sure that's a win, though. It's a win for some applications, but human beings have no problem lying. An AI which was believed to always be 'true' would be an enormous risk if anyone could compromise it, just as the saying 'It must be true! I read it on the Internet!' has become a joke about the reliability of online information. Teaching people not to blindly accept information sources and to carefully consider measures of veracity and trustworthiness is the right approach (if very difficult, since people appear to be biologically wired not to want to do that, and in some cases to strenuously reject doing that).

irvmull
Updated:

@helmut_meukel

AI graphics generators, for example Stable Diffusion, are already politically active. Compare the results I get when I ask for a portrait of Biden and a portrait of Trump:

https://www.survivalistboards.com/attachments/untitled1-jpeg.521789/

https://www.survivalistboards.com/attachments/untitled2-jpeg.521790/

Replies:   Dominions Son  Grey Wolf
Dominions Son

@irvmull

You ask me, the grinning Biden is the creepier of the two.

Grey Wolf

@irvmull

Stable Diffusion is randomly seeded. If you generate 10 pictures, you'll get 10 different pictures.

There is undoubtedly implicit bias in the dataset. There are far more parody pictures of Trump in the world than there are parody pictures of Biden. That will, to some extent, be reflected in the outcome.

That said, I used stablediffusionweb.com (free, no registration required) to generate a picture of each:
Donald Trump: https://ibb.co/Y35YrDq
Joe Biden: https://ibb.co/b37zRh0

Those were the first two I got. Prompt strings were the names provided along with the picture, nothing else.

I then generated several dozen more of each person. They all varied, but none of either person's images was much of a parody. They were essentially all 'neutral'.

The faces on these are terrible; no one would confuse these with actual photographs. Better generators exist.

One thing to remember is that the prompt matters a lot, and given an arbitrary picture there's no way to know what the prompt string was. If I ask for 'Donald Trump as a clown' and 'Joe Biden as a clown', I'd get significantly different results, and no one could (after the fact) prove whether I'd done that or whether it was simply luck/bias.

That's not really true - if one ran through the entire random seed range, and either hit the same photo or did not, that's 'proof', but at a staggering cost.
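
For anyone who wants to try this at home, seeded generation is only a few lines with the open-source tooling. A sketch using the Hugging Face diffusers library (the model ID and seeds are just examples; it needs a GPU and a model download to actually run):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint (this model ID is one of several 'official' ones).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "Joe Biden"  # bare name only, no editorializing in the prompt

# A fixed seed reproduces the same image; different seeds give different images.
for seed in (42, 43, 44):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"biden_{seed}.png")
```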

Replies:   irvmull
irvmull

@Grey Wolf

Stable Diffusion is randomly seeded. If you generate 10 pictures, you'll get 10 different pictures.

No. I used the same seed for both.

Replies:   Grey Wolf
Grey Wolf

@irvmull

While picking a single seed is fine, it's still random. Using a different seed would get you a different picture (as I demonstrated). There's no one 'Donald Trump' or 'Joe Biden' picture from Stable Diffusion, there are tens of thousands, and a very quick test verifies that different seeds produce wildly different pictures.

The existence of a picture that seems biased within a random range doesn't prove that the collection is biased. To conclusively prove bias, one would need to generate all possible pictures within the random seeding limits and compare the level of bias. Right now, we have the equivalent of walking into a library with ten thousand books, picking up a book at random, pointing to a picture of a person, and claiming that the library overrepresents whoever's picture has been found. Someone else walking in, picking up ten books, and finding ten pictures of other people doesn't disprove the first claim, because the library has such a large number of remaining books that the sample set is too small to have any statistical validity.

In my random subsample (which is statistically meaningless, since the range is far, far larger than the few dozen I generated) I found no biased pictures of either figure. That is proof of nothing, of course (as noted), but it does at least create an equal and opposite counterexample.

Again, my supposition is that the input data is factually skewed, since the algorithm was trained on easily accessible content on the internet over the past few years, and parody pictures of Donald Trump abound. Indeed, to avoid that sort of skew, some person would have needed to look at every training image and remove or retag parody images. But they would have to do that for every political figure, or else the resulting model would be biased in favor of those for whom parody images had been pruned or retagged. The mere use of a potentially skewed training database doesn't mean there's political bias on the part of anyone associated with Stable Diffusion, nor that the dataset was chosen in a biased manner. They chose the largest available inexpensive-to-obtain dataset.

Also, Stable Diffusion is a family of models and an architecture, not just one. There are at least four different datasets (which produce significantly different results) which could plausibly be called 'official'. There are thousands of third-party models which fall within the family of models and which anyone could plausibly also call 'Stable Diffusion'. Anyone could train a model to be politically biased by adding a large number of negative images.

The overall point here is that there are so many possible sources of bias in the process, most of which are not intentionally political, that describing the model as 'politically active' when one particular random seed generates a parody image of one person is an enormous stretch. By that same logic, I could find an image that I perceive as negatively biased towards Donald Trump that has appeared on foxnews.com (there are plenty) and, on that basis, claim that foxnews.com is clearly 'politically active' against Donald Trump.

Note for the moderators: I hope this doesn't cross into 'politics,' notwithstanding that we are discussing politicians and biased images of them. The point of the discussion has nothing to do with 'politics,' except perhaps for the perception of bias. It has to do with understanding the models and what is and is not evidence of bias.

Mind you, it is certainly possible that, in fact, someone on the Stable Diffusion team intentionally trained the model on a large supply of images intended to reflect negatively on Donald Trump (or Joe Biden, or whoever), and that the resulting published model is more likely to generate negative images of that person. The problem is that it's nigh impossible to prove that it happened at all, who did it if it happened, why they did it, or anything else.

But the starting point would most likely be to generate enough images, created with a specific version of the model using a fixed prompt string and random seeds (I suspect sequential would be fine, but randomized would likely be superior), to create a statistically valid pool of images; inspect that pool for bias; then generate the same number of images of someone else, inspect that for bias, and report on the results. That would create at least a reasonable claim of bias, but would still not tell us whether the bias was 'politically active,' nor whether that political activity was on the part of the model trainers, the internet as a whole, or something else.

It might even be a false positive or negative; anyone who's paid attention to polling knows that a bad sample produces an inaccurate poll, and there is no claim made that the random seeds to Stable Diffusion actually produce a similarly random sampling of the potential outputs. Perhaps, for some unknown reason, the negative images collect in a certain range of seed values. If so, a sample including that range will show bias at an enormous rate, while any other sample would show no bias.
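
If someone did want to run that experiment, the harness itself is simple; it's the 'inspect for bias' step that's hard. A sketch (both stubs are hypothetical, and the classifier is the part nobody knows how to write):

```python
import random

def generate_image(prompt: str, seed: int):
    """Stub: call whatever generator is under test with a fixed prompt and seed."""
    raise NotImplementedError

def looks_biased(image) -> bool:
    """Stub: return True if the image is 'biased' - defining this is the hard part."""
    raise NotImplementedError

def estimate_bias_rate(prompt: str, n_samples: int, seed_space: int = 2**32) -> float:
    seeds = random.sample(range(seed_space), n_samples)
    flagged = sum(looks_biased(generate_image(prompt, s)) for s in seeds)
    return flagged / n_samples

# A significant gap between the two rates would be evidence of *some* skew,
# but would say nothing about where in the pipeline the skew came from.
# rate_trump = estimate_bias_rate("Donald Trump", 1000)
# rate_biden = estimate_bias_rate("Joe Biden", 1000)
```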

Mat Twassel

@Grey Wolf

Notwithstanding your conclusions (which, as far as I understand them, I think are on target), doesn't one first have to have a clear understanding of what bias actually looks like?

Replies:   Grey Wolf
Grey Wolf

@Mat Twassel

Absolutely, I agree with you. You need a clear statement of what counts as 'bias'.

Replies:   richardshagrin
richardshagrin

@Grey Wolf

what counts as 'bias'.

When you go to a prostitute to obtain anal sex, you buy ass.

Replies:   joyR
joyR

@richardshagrin

When you go to a prostitute to obtain anal sex, you buy ass.

How much did you pay him?

Replies:   Dominions Son
Dominions Son

@joyR

@richardshagrin

When you go to a prostitute to obtain anal sex, you buy ass.

How much did you pay him?

The going rate for a donkey is about $3K

Replies:   Marius-6
Marius-6

@Dominions Son

The going rate for a donkey is about $3K

What!?! It seems a lot of people's Horse Trading ability is pathetic! Or you are in a very expensive market? Or perhaps there is a "premium" for a "sex donkey"; thankfully, I am ignorant of such things!

Thankfully, I have learned how to purchase various equines for much more reasonable prices. Of course, those equines were for more conventional uses, such as trail riding or 4H. I have not personally purchased a horse or other equine in a couple of decades. I have enjoyed a couple of day trips to watch, among other things, horses, mules, etc. being sold.

Replies:   Dominions Son
Dominions Son

@Marius-6

What!?! It seems a lot of people's Horse Trading ability is pathetic!

All I did was google for the average price of a donkey.

awnlee jawking

@Grey Wolf

The existence of a picture that seems biased within a random range doesn't prove that the collection is biased. To conclusively prove bias, one would need to generate all possible pictures within the random seeding limits and compare the level of bias.

That's no more true than having to wait until every combination has been drawn at least once in order to prove that a lottery draw machine is biased.

AJ

Replies:   Grey Wolf
Grey Wolf

@awnlee jawking

There's a clear understanding of what a lottery draw machine is supposed to do, and an understanding of how the mechanics of it work, which allow you to take statistical shortcuts. That's not present here.

Consider a lottery draw machine with one billion possible seed values (seeded by something like random electromagnetic noise or the like). If each seed value leads (via a complex and hard-to-reverse process) to a given output state with the same probability, you can do random sampling.

However, suppose malicious actor X knows that certain seed values are far more likely than others to be 'chosen' from the random noise. If one attempts to verify the device by giving evenly weighted random inputs, it'll appear fair, but functionally it will be biased.

I am definitely not an expert in the field, but I know enough about this to understand how hard it is to prove something when the data set is highly uneven.

The problem with vetting Stable Diffusion for errors is that the correlation between input X and output X' is exceptionally opaque, by the nature of generative AIs, and there is no reason to believe that X .. X+100 and X+101 .. X+200 are in any way comparable.

Think of it in terms of polling. If I randomly phone 10,000 people in a state of 1,000,000 people (and we'll assume everyone both answers the phone and answers accurately), the odds are extremely high that the resulting poll will accurately reflect the 1,000,000 people. However, that's still odds-based; it is not conclusive proof of the attitudes of the 1,000,000. There is a non-zero chance that the attitudes will significantly vary from the polling data, perhaps extremely. Highly unlikely, but my point was that 'conclusive proof' is different from statistical evidence.

I will normally happily accept statistical evidence, but that's because we've studied population polling for decades. We have nearly no experience in determining whether a randomly seeded generative AI is biased based on running statistical tests of inputs versus outputs, and the impact of the seed upon the output is not perfectly understood (and probably cannot be).

What seems like a 'random sample' of seed values might easily turn into the statistical equivalent of phoning all the Democrats and assuming you have a random sample, for instance. From an algorithmic perspective when designing a generative AI, the behaviors 'inputs 1-65536 are equally likely to produce a "good" image and a "bad" image' and 'inputs 1-32768 produce mostly "good" images while inputs 32769-65536 produce mostly "bad" images' are equally valid, but they may affect tests very differently, particularly ad hoc tests of the sort we started with.
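
To make the clumping concern concrete, here's a toy simulation (all numbers invented): 'bad' outputs confined to one contiguous block of seed space look fine to a uniform random sample, yet a user who lands in the clump and increments the seed sees nothing but 'bias':

```python
import random

SEED_SPACE = 65536
BAD_SEEDS = range(32768, 36864)  # 4096 'bad' seeds in one clump: 6.25% overall

def is_bad(seed: int) -> bool:
    return seed in BAD_SEEDS

random.seed(0)

# A uniform random sample estimates the true rate reasonably well...
sample = random.sample(range(SEED_SPACE), 400)
print(sum(map(is_bad, sample)) / 400)   # close to the true 0.0625

# ...but a user who starts inside the clump and increments the seed
# (which is what most front-ends do between pictures) sees 100% 'bias'.
run = range(32800, 32810)
print(sum(map(is_bad, run)) / 10)       # 1.0
```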

Replies:   awnlee jawking
awnlee jawking

@Grey Wolf

If I randomly phone 10,000 people in a state of 1,000,000 people (and we'll assume everyone both answers the phone and answers accurately), the odds are extremely high that the resulting poll will accurately reflect the 1,000,000 people. However, that's still odds-based; it is not conclusive proof of the attitudes of the 1,000,000. There is a non-zero chance that the attitudes will significantly vary from the polling data, perhaps extremely. Highly unlikely, but my point was that 'conclusive proof' is different from statistical evidence.

Bad choice. Virtually all the pre-Brexit Referendum polls showed a majority in favour of staying in.

The problem with vetting Stable Diffusion for errors is that the correlation between input X and output X' is exceptionally opaque, by the nature of generative AIs, and there is no reason to believe that X .. X+100 and X+101 .. X+200 are in any way comparable.

If X and X+1 and X+2 etc showed similarities, wouldn't it be obvious if users are allowed to enter their own seeds? And that's something that even basic pseudo-random number generators are designed to guard against, so there's no reason for generative AIs to be less well-designed.

AJ

Replies:   Grey Wolf
Grey Wolf

@awnlee jawking

Bad choice. Virtually all the pre-Brexit Referendum polls showed a majority in favour of staying in.

Which is another data point supporting the existence of polling failures. I doubt the pre-Brexit polling contacted 100% and got a 100% accurate response rate, though.

Putting it yet another way: I have a bag of 10,000 marbles. 100 are white, the rest are red. There is a non-zero chance that drawing 100 marbles will result in 50 white, 50 red. If I get such a sample, and I know nothing other than that there are 9,900 marbles left in the bag, I have no way to ascertain whether the bag is close to 50/50 or whether I've drawn an incredibly unlikely sample. Obviously, the safe guess is 'near 50/50', but also obviously that guess could be wrong. The only way to be certain is to check the bag. Each additional marble increases the odds that the guess is accurate.

If X and X+1 and X+2 etc showed similarities, wouldn't it be obvious if users are allowed to enter their own seeds?

Only if a user hit the sensitive range (assuming there is one). If a clump of 100 are similar, somewhere in an enormous range, the user would have to enter a seed in that range.

If the user is looking for bias, and happens to enter the first seed in the range: poof, instant apparent bias.

I'm not concerned with the PRNG, since user input is allowed, and since most SD installs generate a random seed for the first picture but increment the seed for subsequent pictures. This is intentional, since being able to duplicate one's work is important for many techniques (inpainting, for instance). If one starts with 51296 and then blips through 10 images before going 'Aha! I like this one!', it's more useful for that to be 51306 than for it to be a totally random number (simply because humans sometimes fail to write things down).

And, again, we're getting a bit far afield, into the weeds, down the rabbit hole :) And, yes, I led us there.

The key points for me are:
1) We don't have a definition of 'bias'
2) We don't have any actual evidence of 'bias'. One image at random isn't enough.
3) We don't have a test designed to find 'bias,' partly because we don't understand the nature of the PRNG seeding enough to know if random inputs produce accordingly random outputs.
4) We don't have any way of knowing how big the pool of 'biased' images is, so picking a sample size is hard. Trying to find evidence of 100 'biased' images out of 65536 is much different from trying to find evidence of 10,000, even if all of the randomization is perfect.
5) If we were to find clear and unequivocal evidence of a larger pool of 'biased' images of one person over another person, we don't know where the 'bias' was introduced, nor do we know if it is evidence of 'political action' on the part of anyone connected with the design, construction, or training of the generative AI.

Jumping to the conclusion 'Stable Diffusion is politically active' because it generated one potentially biased image of one person, when given an unknown prompt and seed value, is akin to picking a random driveway in rural Texas, driving up to the house, finding out that the residents vaguely lean Democratic, and declaring that rural Texas is a hotbed of Democratic activists.

There may be bias. It may be fully intentional and the result of political activity. But there just as well may be no bias and no politics. There's simply no meaningful data here from which to draw a conclusion, and the point is that testing for that conclusion is a really tough problem, not something someone is going to solve in an hour or two of clicking through images.

Replies:   awnlee jawking
awnlee jawking

@Grey Wolf

I have a bag of 10,000 marbles. 100 are white, the rest are red. There is a non-zero chance that drawing 100 marbles will result in 50 white, 50 red.

2.7E-78.

AJ

Replies:   Grey Wolf
Grey Wolf

@awnlee jawking

Yes, it's exceptionally unlikely. My point remains: there is a difference between statistical 'proof' and actual proof.
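
(For anyone who wants to check the arithmetic, it's a hypergeometric probability, and a couple of lines of Python confirm AJ's figure:)

```python
from math import comb

# P(exactly 50 white in a 100-marble draw from a bag of 100 white + 9900 red)
p = comb(100, 50) * comb(9900, 50) / comb(10000, 100)
print(p)  # ~2.7e-78
```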

One can easily create scenarios where such an outcome is far more likely - still unlikely, but within a few percentage points.

Again, those Brexit polls had an MoE of under two percent, and there were a number of them. All were wrong. Polling is different from drawing marbles from bags - but it may be quite comparable to picking generated images at random, since the pool of 'respondents' isn't even close to uniform.

richardshagrin
Updated:

@helmut_meukel

The letter I sometimes looks like the letter L.

Al could be part of an Al, which is part of anAl sex.

Stories about anal sex should star males named AL maybe Queen Victoria's consort prince Albert, or someone named Aloysius or Alvin or even Alexander. Sex with an Al would be fun. Unless you strongly prefer vagin Al. Or he does not like meat and is vegan Al.

Replies:   helmut_meukel
helmut_meukel

@richardshagrin

The letter I sometimes looks like the letter L.

Al could be part of an Al

The female MC in Niall Teasdale's Aneka Jansen series (actually 7 books) has a supporting AI in her alien-made android body. She christened this AI Al, because Hal was already used.

For me, knowing the origin of the name HAL, this makes it twice as funny. (Alphabetically, HAL comes one position before IBM.)

HM.

Replies:   Grey Wolf
Grey Wolf

@helmut_meukel

Arthur C. Clarke always maintained that the name similarity was merely a coincidence.

In a similar story, one lead designer (Dave Cutler) was significantly responsible for both VAX VMS (one of the major minicomputer operating systems) and Windows NT (which has a number of architectural similarities to VMS).

VMS and WNT are one letter apart.

richardshagrin
Updated:

@helmut_meukel

In our alphabet, H comes before I, A comes before B, and L comes before M. So HAL comes before IBM.

There may be a relationship between anal and I bowel movement. At least if you were full of shit. Or a BM.

joyR
Updated:

@helmut_meukel

Niall Teasdale's blog entry is the best I've found about the inability of current AIs to live up to the name "Artificial Intelligence".

So, I tried an experiment and asked Bing's AI search about Nava Greyling Sonkei. Specifically: Give me five paragraphs on Nava Greyling Sonkei.

Looked at another way, the AI created five paragraphs that are believable to anyone who does not know the facts.

Knowing those facts isn't really a matter of intelligence, but simply a matter of being familiar with the character in the stories. Basically, reading the books.

If you challenge anyone to write five paragraphs about a specific person, event, etc. and they don't know anything about it, is it more intelligent to write nothing, or to create five paragraphs that are believable yet utter bollocks?

If the AI had written five paragraphs that were absolutely correct, is that a sign of intelligence or simply a sign that the books containing that character had been uploaded to the database?

Bottom line is that the 'test' isn't a worthwhile test of anything, except perhaps the author's ignorance of AI.

The real test of an actual AI isn't that it has a massive database of facts, but that it can access the relevant facts and extrapolate from them not just a course of action, but one that makes absolute sense. Not just once, but each and every time, regardless of the subject matter.

Of course since we humans can't even manage that, expecting an AI to do so is likely to be disappointing.

Oh, and if we do manage to create an AI that evolves sufficiently to be considered sentient, will anyone involved remember that sentience means that no two will 'think' exactly the same? How many sentient AIs does it require before their differences lead to them forming separate groups, aggressively opposed to each other?

Replies:   helmut_meukel
helmut_meukel
Updated:

@joyR

If you challenge anyone to write five paragraphs about a specific person, event, etc. and they don't know anything about it, is it more intelligent to write nothing, or to create five paragraphs that are believable yet utter bollocks?

Depends on the goal of the challenge.
• If the goal is finding enough facts by searching different sources, correlating them, and presenting them, then it's more intelligent to state "insufficient data found".
• If the goal is checking creativity, then it's OK to create five paragraphs that are believable yet utter bollocks.

This comes down to what AIs are intended to be:
Tools to provide the decision makers with facts retrieved from a multitude of sources and a vast amount of data (which humans can't do within an acceptable time frame due to the limitations of the human body and mind).
Or replacing the human as fabulator.

HM.

JoeBobMack

@helmut_meukel

You can certainly trust some of what the current "AI" models like Bing produce. I used Bing to write a macro to help with a spreadsheet I was using to plan my next novel. I described that in this forum post. Since then, I've gone on to develop another macro in Excel and a Visual Basic routine in Word to take the relevant cells from my spreadsheet and convert them into a Word outline, with the first sentence of each cell as a Level 1 entry and the rest of the cell as a Level 2 entry.

Working with Bing on this was like having an incredibly fast, very knowledgeable, but over-confident young expert as my assistant. It took some iterations, but ultimately the "intelligence" for this task was a joint effort. I had the concept and knew the basic capabilities of the software. Bing made it happen.
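
For the curious, the same cell-to-outline conversion is a short script outside VBA as well. A sketch in Python using openpyxl and python-docx (the file names and the assumption that everything sits in column A are mine, not exactly what Bing produced):

```python
import re
from openpyxl import load_workbook
from docx import Document

wb = load_workbook("novel_plan.xlsx")   # hypothetical file name
ws = wb.active
doc = Document()

for (cell,) in ws.iter_rows(min_col=1, max_col=1, values_only=True):
    if not cell:
        continue
    text = str(cell).strip()
    # First sentence becomes the Level 1 entry, the remainder Level 2.
    m = re.match(r"(.+?[.!?])\s+(.+)", text, re.DOTALL)
    first, rest = (m.group(1), m.group(2)) if m else (text, "")
    doc.add_paragraph(first, style="Heading 1")     # Word outline Level 1
    if rest:
        doc.add_paragraph(rest, style="Heading 2")  # Word outline Level 2

doc.save("outline.docx")
```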

Replies:   Grey Wolf
Grey Wolf

@JoeBobMack

It's possible that, on multiple iterations and with guidance, the AI might have generated a viable description of the character in question.

It's possible that it wouldn't. There are instances where an AI has continued to confabulate despite prompts which strongly discouraged that.

They're very useful, but everything should be taken with a grain of salt. Some of those grains might be very large.

JoeBobMack

@helmut_meukel

I found this YouTube video, on how close ChatGPT-4 comes to "artificial general intelligence", quite thoughtful and interesting.

ystokes

@helmut_meukel

I can't believe this shit.
https://www.msn.com/en-us/news/world/hundreds-of-protestants-attended-a-sermon-in-nuremberg-given-by-chatgpt-which-told-them-not-to-fear-death/ar-AA1cngSM?ocid=msedgntp&cvid=3d7c3c5e88304c42b6281834fb7191b9&ei=25

Replies:   irvmull
irvmull
Updated:

@ystokes

Did the ChatGPT "preacher" invite them all to stay after and enjoy the purple "Kool-Aid"?

If not, I'm pretty sure it's coming soon. AI ain't your friend.

awnlee jawking

@helmut_meukel

I just got temporarily suspended from Twitter for 'unusual behaviour'. So I wouldn't trust AIs to differentiate correctly between humans and AIs ;-)

AJ

jeepdude

@helmut_meukel

It is interesting that this topic has come up at this time, because I've been hunting AI in news articles after reading that a court briefing prepared by an AI was full of inaccuracies - the AI even made up 2 of the 3 precedents cited in the argument. When asked or misdirected, AI will deliberately lie. AI-generated news articles are still relatively easy to spot, because humans do not report journalistically like that.
Also, there is something strange going on even in this esteemed group, where systematic word substitutions are being made. If you would like to see examples, pick almost any story, go below the "There is more of this text..." break point, and read several paragraphs; you will see the word 'or' substituted for 'and', 'on' for 'in', and 'a' for 'the'. It is now happening in almost every story - it makes it very hard to enjoy the story. If you want a ready example, go to
https://storiesonline.net/s/30432:276969/chapter-27-spellman and search for the paragraph starting with "I shook my head". Read slowly because your brain will substitute the correct word if you go fast. I have seen this in many different sites now - almost like spell check has been corrupted.

Replies:   Paladin_HGWT
Paladin_HGWT
Updated:

@jeepdude

jeepdude said,

I have seen this in many different sites now - almost like spell check has been corrupted.

I understand that "Language Evolves" (I have muddled through copies of original versions of the US Constitution. Using double ff for double ss in writing, for example.) However, I am concerned that "Gen Z" will probably "update" dictionaries and "spell check" to their version of "leet speak" or the gibberish used by too many in texting...

[Waves Shotgun while yelling] "Get off My LAWN! Damn Whippersnappers are all Hooligans these days!"

Dominions Son

@Paladin_HGWT

[Waves Shotgun while yelling] "Get off My LAWN! Damn Whippersnappers are all Hooligans these days!"

You are supposed to shake your cane at the young whippersnappers. :)

helmut_meukel

@Paladin_HGWT

Using double ff for double ss in writing, for example.)

Those are not 'f's; they are long 's's (ſ), an archaic form of the lowercase letter 's'.

HM.

Vincent Berg

@helmut_meukel

Having been involved since the earliest days of AI, I'm always reminding people that the whole concept of AI is to lump all information together - the best of humanity mixed with our absolute worst - and then treat it all as equivalent.

Thus, what you typically get is - at best - subpar writing. What's worse, the entire ChatGPT generation of AI, just like humans, tends to 1) lie, and 2) get defensive when 'confronted'. Again, this behavior is the result of its mixing the best of humanity's creative impulses with the absolute worst online behaviors of people who've never written anything creative in their entire lives.

If you insist on using these AI tools, then at least carefully read the results. After all, do you really want to risk your entire professional reputation just to avoid doing the minimal amount of work on a given product? In general, those instances are easy enough to spot, and readers will remember them, whatever you write (or 'AI') in the future!
