
Forum: Author Hangout

to trust ai or not

Switch Blayde

I've been using Meta AI and ChatGPT for the past few days to do everything from coming up with character names to asking questions about the 1870s and 1880s. I used to Google my questions and read the articles, but lately I've been trying AI.

I just asked both AI engines this question: were brothels legal in NYC in 1870s

I got contradictory answers.

ChatGPT = In the 1870s, brothels were not technically legal in New York City, but they operated openly [there was more]

Meta AI = Yes, brothels were legal in New York City during the 1870s. In fact, prostitution was legal in New York State until 1921, when it was outlawed by the state legislature. [there was more]

So I got both: "not technically legal" and "Yes, brothels were legal."

Who do you believe? Meta AI had references at the end, like "The Regulation of Prostitution in New York City, 1860-1920" by Mary Gibson (Journal of Urban History, 2000) and "Prostitution in New York City, 1790-1920" by Timothy J. Gilfoyle (NYU Press, 2018). Both of those had dates ending in 1920, and Meta AI said it was made illegal in 1921, so that gives Meta AI some credibility.

But how can one believe them?

DBActive

@Switch Blayde

Illegal since colonial times.
Check this article https://www.archives.nyc/blog/2019/8/29/a-history-of-prostitution-in-new-york-city-from-the-american-revolution-to-the-bad-old-days-of-the-1970-and-1980s?format=amp

Switch Blayde

@DBActive

Illegal since colonial times.

I didn't read that in that article. In fact, the book referenced there was the third reference in the Meta AI response, which I didn't bother to include.

Meta AI said that although it was legal, there were many groups against it and prostitutes were arrested for crimes like vagrancy and disorderly conduct. Not specifically prostitution. Prostitution in NY State wasn't illegal until 1921 (according to Meta AI).

Switch Blayde

@DBActive

From an article that is actually talking about the economics and land value in NYC due to prostitution: https://digitalcommons.iwu.edu/cgi/viewcontent.cgi?article=1027&context=uer#:~:text=Although%20violence%20against%20brothels%20was,'disorderly%20persons'%20.%20.%20.

Although violence against brothels was prevalent in the early 1800s, regulation of prostitution proved unsuccessful before 1870. Prostitution itself was not considered illegal, however the "police could and did arrest 'all common prostitutes who have no lawful employment' as vagrants or 'disorderly persons' . . . (in addition) though the police periodically raided Sixth Ward brothels and often hauled in streetwalkers from predominately immigrant areas, elite brothels were almost never disturbed."18

Footnote 18 is: "18 Burrows, Edwin G., and Mike Wallace. Gotham. (Oxford: Oxford University Press, 1999) 807."

But my post wasn't about prostitution in 1870s NYC. It was that I got conflicting answers from 2 AI engines.

awnlee jawking

@Switch Blayde

Now I'm imagining court cases being contested by adversarial AI lawyers :-(

AJ

julka

@Switch Blayde

But how can one believe them?

You shouldn't. Even if the output cites sources, you should still verify that a) the source actually exists and b) it says what you think it does.

Neither Meta AI nor ChatGPT has any sort of theory of mind or a concept of true and false; they're just generating text. It's fancy text that is coherent, but coherency has no bearing on reliability.
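To make that concrete with a deliberately crude toy (my own illustration, nothing like a production LLM in scale or method): the bigram generator below produces locally coherent text purely from word-adjacency statistics. It has no representation of which of its source sentences were true.

```python
import random
from collections import defaultdict

# Tiny "training corpus" containing mutually contradictory statements.
corpus = (
    "brothels were legal in new york . "
    "brothels were not legal in new york . "
    "prostitution was outlawed in 1921 . "
    "prostitution was legal in 1921 ."
).split()

# Bigram table: each word maps to the words observed to follow it.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(start, n=10, seed=0):
    """Chain bigrams from `start`, producing fluent-sounding output."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(babble("brothels"))
```

Whatever sentence comes out reads smoothly, but whether it tracks the true statement or the false one is an accident of sampling; coherence and reliability are simply different properties.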

Dominions Son

@julka

It's fancy text that is coherent, but coherency has no bearing on reliability.

Yeah. There's a legal blog I follow. It has covered a number of instances where lawyers used ChatGPT and the like to help write briefs, only to get in trouble with the judge because the AI outright invented cases.

https://reason.com/volokh/2024/03/20/no-sanctions-in-michael-cohen-hallucinated-citations-matter/

https://reason.com/volokh/2024/02/25/dont-give-me-that-chatgpt-4-nonsense-judge-says/

https://reason.com/volokh/2024/02/16/2000-sanction-in-another-ai-hallucinated-citation-case/

JoeBobMack

@Switch Blayde

Of the two types of uses for AI that you include in your post, I have found AI to be far more helpful for the brainstorming questions such as generating character names than for questions focused on factual research. Both their ability to synthesize results and their tendency to hallucinate make them unreliable for that type of research. I generally find I can resolve such questions faster with more traditional means.

That said, I ran your query through Perplexity and got much the same results as you suggest. However, how much additional research I might have to do would depend on the exact needs of my story. For example, if it were just a question of whether it would be historically reasonable to write a story about goings-on in a brothel in some more well-to-do area of NYC in 1870, then both the research I saw and my own experience with human societies would suggest that such a story could go forward with no further question. Of course, if I wanted to set my story on a particular street, in a boarding house with a particular name, then that might require much more research. I thought the Gentleman's Guide to New York City referenced in one of the stories would be a fascinating study!

Switch Blayde

@JoeBobMack

Both their ability to synthesize results and their tendency to hallucinate make them unreliable for that type of research.

Yeah, that's what I surmised. There's so much incorrect information on the internet that, since that's the source for the AI programs' "learning," it makes sense the results should be questionable.

Dominions Son

@Switch Blayde

It's not just regurgitating false information out on the internet. The links I posted about lawyers using AI to write legal briefs show the AI outright fabricating information that didn't exist anywhere.

LupusDei

@Switch Blayde

Just to pile on: indeed, AI doesn't simply regurgitate false information (although it will happily do that too); it wholesale confabulates completely unique inventions, complete with whatever supporting material you may request when challenging its inventions. Being the world's most competent bullshitters, and never abandoning a tone of total confidence, those things can be very convincing.

In one of the very early stories about this "feature", ChatGPT offered a fragment of otherwise passable computer code (I don't remember what language), except it used an undefined function that appeared to be part of a well-known framework. Challenged about it, it not only detailed the new function, but claimed it was part of a non-existent standard, introduced at a conference that never happened, referencing non-existent books and research papers by made-up people.
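A cheap first line of defence when an AI hands you code built on an unfamiliar function is to check, mechanically, that the function even exists before arguing with the model about it. A minimal Python sketch (the second name below, `dump_pretty`, is one I invented here precisely because it does not exist):

```python
import importlib

def api_exists(module_name, attr_name):
    """Return True if `module_name` imports cleanly and defines `attr_name`."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr_name)

print(api_exists("json", "dumps"))        # True: a real stdlib function
print(api_exists("json", "dump_pretty"))  # False: plausible-sounding, made up
```

The same habit generalizes: hallucinated citations, standards, and conference papers all fail the "does it exist at all" check long before you need to evaluate their content.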

Just yesterday I read a comment on Ars where a lawyer said he asked a supposedly specialized legal-documentation AI assistant for specific precedent, and it came up with two cases applicable to a T, except they were both invented on the spot. Further research revealed both were mashups of several real cases, but none of the real cases used to hallucinate the "perfect" examples were in fact useful on their own, unfortunately.

So, you should never trust AI on concrete factual information, and whenever possible you should verify its claims against less fungible sources (as long as those exist). You can, however, use its output well enough as abstract examples where accuracy isn't critical. Even there, though, there's a nonzero chance it may come up with alternative history, especially, I imagine, when asked about slippery subjects whose inputs or outputs may have been suppressed.

DBActive

@LupusDei

I have personal experience with this issue. I used a major law publisher's AI on a free trial.
Every single attempt resulted in it giving me citations that, when checked, either didn't exist or had facts and law that were irrelevant to the query.
It gave me some real cases, but it would have been easier and faster to do my own research from the start.

Dominions Son

@DBActive

I have personal experience with this issue. I used a major law publisher's AI on a free trial.

Look at the links I posted. Lawyers have done this and not checked the results before submitting them to a court.

Pixy

@Switch Blayde

So, after reading this thread and seeing that AIs with LLMs are quite happy to fabricate shit to prove their point, it would appear to confirm that AIs are human after all....

REP

@Switch Blayde

When considering the use of AIs, I think of the phrase 'Garbage In, Garbage Out'.

A number of people on this Forum do not like Wikipedia. But when it comes to factual information, I would trust Wikipedia before I would trust an AI.

fool42

@REP

If you trust Wikipedia at all, you are making a leap of faith. Just don't take any of its financial advice or quote it on any official document. Then you won't be too far out on that skinny tree limb.
GIGO isn't restricted to computers.

Pixy

@fool42

If you trust Wikipedia at all, you are making a leap of faith

This was mainly caused by how stupidly easy it was to 'edit' articles when the site first went 'live'. It was a prankster's wet dream and boy, did they gleefully accept the challenge.

Since then, they (the site owners) have made it harder to edit articles and edits now need to be approved/validated before being added to the main site.

These days, the site is every bit as reliable as DTP encyclopedias. More so, as mistakes in print will only be corrected with the subsequent edition, whilst mistakes in Wikipedia are corrected as soon as they are noticed.

It's not yet 100% trustworthy, but then, neither is the MSM, which is always issuing retractions and apologies because of mistakes in its reporting. Compared with the MSM, Wiki is more trustworthy; and as for those amongst us who read technical/medical/research papers, downright lies are frequently to be found, written by those who should really know better.

This is why teachers and professors always say to both quote the source AND double check the data.

awnlee jawking

@Pixy

These days, the site is every bit as reliable as DTP encyclopedias

No.

A year or two back, a newspaper article exposed how easy it is to get bad info onto wikipedia. You write a 'paper' in support of your opinion, you get it published in a 'vanity' scientific journal (of which there's been a huge proliferation, particularly on-line only), you cite your paper in your contribution to Wikipedia (spit!) and, hey presto, your info has become gospel.

AJ

John Demille

@Pixy

These days, the site is every bit as reliable as DTP encyclopedias.

Maybe for things that are nearly impossible to politicize.

Wikipedia is now a leftist propaganda machine. Anything that has any political potential is heavily skewed left. Historical items have been rewritten to serve the current agenda.

Even Jimmy Wales, Wikipedia's founder, admitted that Wikipedia has lost its credibility because of its extreme leftist bias.

The tricky part is that it's billed as a reference source. Who needs a reference source? People who don't know the info and need it. If you trust Wikipedia, then you'll be misled on countless topics. The only people who know what can be trusted or not are the people who don't need Wikipedia on the topic.

Switch Blayde

@John Demille

has lost its credibility because of its extreme leftist bias.

Ditto for almost any news media on the internet (and TV) so you can't seem to get the truth anywhere. So you might as well use wikipedia. For that matter, it's true for what's being taught in our education institutions, especially at the university level. So what is the truth?

DiscipleN

@John Demille

Even Jimmy Wales, Wikipedia's founder, admitted that Wikipedia has lost its credibility because of its extreme leftist bias.

I looked for this quote, but couldn't find it. What I did find is:
"Jimmy Wales, a co-founder of Wikipedia, stresses that encyclopedias of any type are not usually appropriate as primary sources, and should not be relied upon as being authoritative." from a 2005 article in Businessweek. [This info should be obvious advice to anyone doing research outside of well-establish 'first' sources]

Please support your assertion.

John Demille

@DiscipleN

I looked for this quote, but couldn't find it.

It was in a recent interview that he gave.

I don't recall where I've seen the video, it was probably on Twitter/X.

Switch Blayde

@DiscipleN

Please support your assertion.

I didn't find a quote by Jimmy Wales, but I found this article "Wikipedia co-founder says site is now 'propaganda' for left-leaning 'establishment'" quoting Wikipedia co-founder Larry Sanger.
https://nypost.com/2021/07/16/wikipedia-co-founder-says-site-is-now-propaganda-for-left-leaning-establishment/

Now keep in mind, "The NY Post" is a right-leaning newspaper.

Wikipedia co-founder Larry Sanger has warned that the website can no longer be trusted โ€” insisting it is now just "propaganda" for the left-leaning "establishment."

REP

@fool42

If you trust Wikipedia at all

AIs use social media as part of their input. Social media is even worse than Wikipedia's input.

Dominions Son

@REP

AIs use social media as part of their input.

The biggest problem with using AI as a source of factual information is not the AI pulling bad information off the internet. The biggest problem is that the AI will outright invent fictional information.

REP

@Dominions Son

I don't disagree. That sounds like the major problem with an AI's output.

Personally, I think the main problem goes back to the software programmers who created the AI program. Then there are the people who accept the program as generating accurate viable output.

My granddaughter is doing a Master's in a medical field. Many of her fellow students use an AI program to write the reports that they have been assigned as an aid in their learning. Just think about the quality of medical treatment their patients will receive. You and other Forum participants may one day be one of their patients.

I can just picture them inputting the patient's symptoms and asking an AI program for a diagnosis.

awnlee jawking

@REP

I can just picture them inputting the patient's symptoms and asking an AI program for a diagnosis.

I think that's where the future of General Practice lies. There's too much information for one person to remember. Some conditions can go undiagnosed for years because individual GPs are a long way from knowing everything. Is there a better solution than a trained practitioner using a purpose-built AI to access an on-line system containing all the latest information?

AJ

Dominions Son

@awnlee jawking

Is there a better solution than a trained practitioner using a purpose-built AI to access an on-line system containing all the latest information?

If they can't keep the AI from inventing a fictional diagnosis, then in my opinion, doing nothing would be a better solution.

An AI that has any chance of feeding the primary care provider fictional information would make things worse not better.

REP

@Dominions Son

I agree. In my opinion, a medical text would be the preferred source of a diagnostic guide.

In the case of my granddaughter's friends, they are likely to not know enough about their specialty to use a medical text, and if they can, they are likely to not understand what they read.

REP

@REP

Good God Almighty!

I just checked, and the medical field is introducing AI into making diagnoses. It is believed to be the future of the medical field. However, remarks are being made that AIs are not ready yet, and that when they are, human intervention will still be necessary. Currently, AIs are to take on the role of an assistant to the doctor.

I doubt I would trust a doctor who uses an AI to diagnose my ailment.

awnlee jawking

@REP

I doubt I would trust a doctor who uses an AI to diagnose my ailment.

AIs are particularly useful when they are doing something boring, like analysing X-rays or cervical smear samples, because unlike people, they don't get bored and their attention doesn't wander.

AJ

hst666

@awnlee jawking

This is accurate. Basic data analysis conducted by a program is generally superior to that of doctors.

awnlee jawking

@Dominions Son

If they can't keep the AI from inventing a fictional diagnosis

That's why I said purpose-built. You obviously wouldn't use an existing AI and train it from the internet.

It's already starting to creep in in the UK. When the doctor is entering the symptoms and diagnosis, they get asked whether they're sure it isn't sepsis or cancer. Allegedly most doctors ignore the prompts.

AJ

Dominions Son

@awnlee jawking

That's why I said purpose-built. You obviously wouldn't use an existing AI and train it from the internet.

You apparently missed the post up thread that pointed out that a purpose-built legal AI is fabricating case citations.

No, purpose-built does not let you hand wave away the problem of AI fabricating information from nothing.

Grey Wolf

@Dominions Son

No, purpose-built does not let you hand wave away the problem of AI fabricating information from nothing.

LLMs fabricate information from nothing. Special-purpose neural nets generally don't.

The problem here is people talking about 'AI' as if it's one thing. It's not. It's many different, highly divergent, types of software.

A purpose-built LLM will fabricate in some (but not all) cases. There are some which only use the LLM on the input side, using it to select between a large number of human-written outputs. Those don't fabricate. They may give you the wrong output if they misunderstand the question, but the output will be correct (and is hopefully written in such a way as to make clear what it's answering).
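A toy sketch of that second architecture (my own invention, not any particular product): the "smart" part only scores which vetted, human-written answer fits best, so the worst failure is picking the wrong canned answer, never composing a new one.

```python
import re

# Human-written, vetted answers; the program can only ever emit one of these.
ANSWERS = {
    "hours":   "We are open 9am-5pm, Monday through Friday.",
    "returns": "Items may be returned within 30 days with a receipt.",
    "contact": "Call 555-0100 or email support@example.com.",
}

# Crude stand-in for the input-side model: keyword sets per topic.
KEYWORDS = {
    "hours":   {"open", "hours", "close", "closing"},
    "returns": {"return", "refund", "exchange"},
    "contact": {"phone", "email", "call", "contact"},
}

def answer(question):
    """Pick the topic with the best keyword overlap and return its canned text."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    best = max(KEYWORDS, key=lambda topic: len(KEYWORDS[topic] & words))
    if not KEYWORDS[best] & words:
        return "Sorry, I don't have an answer for that."
    return ANSWERS[best]

print(answer("What time do you close?"))   # hours answer
print(answer("Can I get a refund?"))       # returns answer
print(answer("Do you sell tanks?"))        # the safe fallback
```

Misrouting is still possible, but fabrication is structurally ruled out: every output string was written and checked by a human.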

awnlee jawking

@Grey Wolf

The problem here is people talking about 'AI' as if it's one thing. It's not. It's many different, highly divergent, types of software.

Some claim that telephone answering systems, the ones that keep you going round in circles rather than letting you talk to a human when your problem isn't covered by any of their options, are AIs. (Not mentioning British Telecom by name.)

AJ

Dominions Son

@awnlee jawking

I have yet to encounter a VRU (Voice Response Unit) that wouldn't transfer me to a live person in the situation where my issue is outside its programming.

Switch Blayde

@Dominions Son

I have yet to encounter a VRU (Voice Response Unit) that wouldn't transfer me to a live person in the situation where my issue is outside its programming.

I have.

awnlee jawking

@Dominions Son

I have yet to encounter a VRU (Voice Response Unit) that wouldn't transfer me to a live person in the situation where my issue is outside its programming.

I have too. An insurance provider who REALLY wanted people to use its website. It was supposed to attach you to a real person at the second time of choosing 'car insurance' but that didn't work.

AJ

Grey Wolf

@Dominions Son

Sadly, I'll second the others. I've met at least three of them. One of them let you leave a number for a live person to call you back at some arbitrary point in the future, but I left a number, got no callback, googled it, and found out that nearly no one ever gets a callback.

They're far more common in 'support' situations than 'sales' situations, for obvious reasons.

The solution to getting a live person at two of those companies was to call sales and make them transfer the call. Surprisingly or not, that worked.

Dominions Son

@Grey Wolf

Sadly, I'll second the others.

I want to clarify: I don't doubt poorly designed (maybe deliberately) VRUs exist. I have been fortunate.

I might suggest a couple of strategies when dealing with one you seemingly can't get out of.

1. Pressing 0 or 9 will sometimes get you out of the VRU and transfer you to a live operator.

2. Say "operator"

3. If all else fails, stop doing business with those companies.

Grey Wolf

@Dominions Son

Tried 1 and 2 (those are among my go-tos).

3 is good, when you can. Sometimes you can't, or it's so impractical that it's worth putting up with the crazy.

Dominions Son

@Grey Wolf

LLMs fabricate information from nothing. Special-purpose neural nets generally don't.

I'm not saying it's impossible to build an AI that wouldn't fabricate information.

However, just saying "purpose built" doesn't get you there.

Grey Wolf

@REP

I can just picture them inputting the patient's symptoms and asking an AI program for a diagnosis.

This is inevitable, but they won't be chat / LLM AIs, they'll be neural nets trained on disease symptoms.

There are already AIs that can take certain easy-to-obtain screenings and detect things no human could detect, with more than ample accuracy to justify follow-up screenings, and we're still (relatively speaking) in the infancy of such tools.

Neural nets are great at finding very subtle patterns in data. The problem, of course, is that one has to be careful about what is the 'signal' and what is the 'noise'.

There's a great story about a program that was shown a large number of pictures. Half of them had tanks (the armored vehicle sort) in them; half did not. The program became very, very good at identifying tanks.

Then they showed it a new set of tanks and it failed miserably. After some investigation, it turned out that the 'with tank' photos were taken in one season, the 'without tank' photos were taken in another, and what they had was an AI that picked between the two seasons.
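That tank story is easy to reproduce in miniature. In the numpy sketch below (data entirely synthetic, invented for illustration), every training photo with a "tank" happens to be bright and every photo without one is dim, so a classifier fit on overall brightness aces training and then collapses the moment the confound goes away:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_images(n, tank, brightness):
    """8x8 'photos': background brightness plus noise; a tank is a small bright patch."""
    imgs = rng.normal(brightness, 0.1, size=(n, 8, 8))
    if tank:
        imgs[:, 3:5, 3:5] += 0.3  # the genuine tank signal, deliberately subtle
    return imgs

# Confounded training set: tanks photographed in bright light, no-tanks in dim light.
train = np.concatenate([make_images(100, True, 0.8), make_images(100, False, 0.2)])
labels = np.array([1] * 100 + [0] * 100)

# "Classifier": threshold on mean image brightness, chosen from the training data.
threshold = train.mean(axis=(1, 2)).mean()

def predict(imgs):
    return (imgs.mean(axis=(1, 2)) > threshold).astype(int)

print("train accuracy:", (predict(train) == labels).mean())  # looks excellent

# Remove the confound: tanks photographed in dim light. The 'tank detector' fails.
winter_tanks = make_images(100, True, 0.2)
print("detected tanks in winter:", predict(winter_tanks).mean())  # near zero
```

The model learned the season, not the tank, exactly as in the anecdote; nothing in the training accuracy warned anyone.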

Still, get your inputs right and you'll wind up with AIs that are better than nearly any human doctor at making diagnoses from a set of symptoms.

Anyone trying to do that with an LLM is a fool, though. That's not what LLMs are for.

They may be, one day, but that'll take another feedback loop that can semantically understand the answer and fact-check it. We're not there yet, but the path to getting there is much clearer than it would have been not that long ago.

hst666

@REP

Actually, basic computer programs have been more accurate at diagnosis than doctors for decades. I am not sure if these have been tested along demographic lines for both doctors and patients, but overall, computers have been beating general practitioners for decades.

hst666

@fool42

The great thing about Wikipedia is it has links. You can check the sources for the information. Regardless, Wikipedia, like encyclopedias, would only be a starting point.

awnlee jawking

@hst666

The great thing about Wikipedia is it has links.

It's also its weakness, because it doesn't vet the quality of the underlying documents. With traditional encyclopaedias, humans supposedly check the quality of the supporting information.

On the other hand, Wikipedia (spit!) is now far beyond typical dead-tree encyclopaedias, both in the range of articles and in their being up-to-date.

AJ

hst666

@awnlee jawking

But you can vet the quality of the source yourself.

awnlee jawking

@hst666

But you can vet the quality of the source yourself.

In real life, how many people routinely do that? I think most people assume that because there is a link, it somehow verifies whatever is claimed.

AJ

irvmull

@Switch Blayde

So, if the AI doc can't figure out what's causing your pain, it will just do like the AI Lawyers, and invent a brand new organ that no one ever had before.

awnlee jawking

@irvmull

That could well happen if medical AIs were let loose on teh interwebs using Large Language Models. But medical AIs are more like Lazeez's wizards.

AJ

nihility

@Switch Blayde

If actual facts matter to you, don't trust AI; use it if you like but verify everything.

If you value your writings, don't submit them to AI; your voice will become part of the AI.
