
Forum: Author Hangout

Chat GPT censorship

KimLittle

So interesting. I gave ChatGPT a prompt, and then progressively told it to rewrite with more specifics and pointers. In case you don't know, it's like watching a chat in 9600bps internet land: the words appear as the AI generates them.

It was doing pretty good, and suddenly it stopped, blanked out, and a message appeared saying something like "This response appears to violate the terms of service."

So basically, it realised it was about to go porno, had a conscience moment, and went 'Wait - this is naughty. Hold on...'

Hilarious!

Replies:   Gauthier  irvmull  Keet  irvmull
Gauthier

@KimLittle

Sorry to burst your conscience-moment theory, it's just a bad cop hiding the ChatGPT response. The censorship is a post-processing step.

Replies:   REP
REP

@Gauthier

I doubt the bad cop was reading what was being created by Chat GPT.

Therefore, the bad cop is the programmer who coded the Chat GPT program to look ahead and determine if the story Chat GPT created contains porno, and if so, to stop sending the story and display the message KimLittle saw.

I agree that was censorship, but censorship that was built into the Chat GPT program. If Chat GPT is not allowed to send a story containing porno to the requester, why did the programmers allow Chat GPT to create a story containing porno?

Replies:   Gauthier
Gauthier
Updated:

@REP

why did the programmers allow Chat GPT to create a story containing porno?

The programmers have absolutely no control over the LLM's output capabilities.

It's trained on human-generated text, so its output will look like text written by humans.

The censor is a regular classifier neural network which analyzes the input and the output of the ChatGPT LLM and tries to enforce the Terms of Service. The censor NN evolves more rapidly and independently of the ChatGPT LLM. A common trick was to bypass the censor by speaking l33t to ChatGPT; the censor now catches that.
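
A minimal sketch of that kind of post-processing check, assuming a hypothetical streaming generator and a hypothetical violates_tos classifier (the names and the toy demo filter are made up for illustration, not OpenAI's actual code):

    # Sketch of a censor wrapped around a streamed LLM answer; all names are illustrative.
    def answer_with_censor(prompt, generate_tokens, violates_tos):
        """Accumulate the model's streamed output; stop and blank it if flagged."""
        shown = []
        for token in generate_tokens(prompt):
            shown.append(token)
            # A separate classifier re-checks the accumulated text after each chunk.
            if violates_tos("".join(shown)):
                # In the real UI the partial text is blanked and replaced by a notice.
                return "This response appears to violate the terms of service."
        return "".join(shown)

    if __name__ == "__main__":
        fake_stream = lambda prompt: iter(["Once ", "upon ", "a ", "naughty ", "time..."])
        naive_filter = lambda text: "naughty" in text.lower()
        # Prints the ToS notice, because "naughty" appears mid-stream.
        print(answer_with_censor("Tell me a story", fake_stream, naive_filter))

The point is that the generator and the censor are two separate pieces; the LLM itself has no idea it was cut off.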

When we talk about ChatGPT version X, we refer to the trained Large Language Model version X.

If you refer to ChatGPT as the service offered by OpenAI, I guess you can say censorship is built into it, but that's not true of the LLM itself.

*Edit:*
Considering the total nonsense accumulated below, it looks like I need to explain a minimum about LLMs.

Let's be clear: an LLM in a given version can't improve and can't learn.

Only the small context of the chat session is operated on by the model to generate an answer.
Once you close that context, it's lost to the current model.
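
A minimal sketch of what that means in practice, assuming a hypothetical llm_complete() call to a fixed, already-trained model (names invented for illustration): the only "memory" is the message list resent on every turn, and it vanishes when the session ends.

    # Sketch of a stateless chat loop: the model weights never change between turns.
    def chat_session(llm_complete):
        context = []  # the only "memory" the model ever sees
        while True:
            user_msg = input("You: ")
            if user_msg.lower() in {"quit", "exit"}:
                break
            context.append({"role": "user", "content": user_msg})
            # The whole accumulated context is resent with every request.
            reply = llm_complete(context)
            context.append({"role": "assistant", "content": reply})
            print("Bot:", reply)
        # When the session ends, `context` is discarded; nothing was learned.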

If you want to improve an LLM, all you can do is refine and enlarge the training dataset, tweak the model size and context size, and start the training over from scratch, then pay a huge electric bill.

Once that is complete, you have a new version of the LLM.
We are very far from a general AI able to assimilate new knowledge on the fly.

An LLM in the current architecture will not replace a search engine, as it can't learn new things.
For a given version, no new content can be added to be learned from.

Of course, efforts are ongoing to find a way to graft additional learning onto an existing LLM.

Replies:   REP
REP

@Gauthier

I've heard ChatGPT mentioned in the forum, but know nothing about it. Your post provides me with a glimmer of an idea of how it operates.

It sounds to me as if at one point you are saying the LLM is not part of ChatGPT, but at a second point you used 'ChatGPT LLM', which makes it sound like the LLM is part of ChatGPT. You also refer to a censor as something totally separate from ChatGPT and the LLM.

You said that the LLM output can be trained to emulate text written by humans, but its training is controlled by code written by a programmer. The programmer evidently did not have a set of design criteria that would prevent the generation of porn. It would also seem that the LLM was trained to produce porn, otherwise it would not be aware of sexual activities.

Overall I'm confused by your post. While I'm interested in learning more, this is not the place for that type of discussion. I'll do some research to see if I can gain an understanding of ChatGPT. Thanks.

Replies:   Sarkasmus
Sarkasmus
Updated:

@REP

but its training is controlled by code written by a programmer.

No.

You can view ChatGPT as an articulate version of Google. If you google something, the request is sent back home to the Google Servers where they use a rather complicated set of algorithms to determine whether or not something could be a proper response to your inquiry. All the possible results are then shown to you in a list. If you click on a result, but then come back and click on something else, Google learns that the result you clicked on first was apparently not what you're looking for, and the next time someone looks for the same thing, that search result will be listed lower.
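
A toy sketch of that click-feedback idea (the scores, sites, and function names are invented purely for illustration; real ranking is far more complex):

    # Toy click-feedback re-ranking: results users bounce back from get demoted.
    scores = {"site-a.example": 1.0, "site-b.example": 1.0}

    def record_click(result, came_back_and_chose_other):
        # Bouncing back and picking something else counts as negative feedback.
        scores[result] += -0.1 if came_back_and_chose_other else 0.1

    def ranked_results():
        return sorted(scores, key=scores.get, reverse=True)

    record_click("site-a.example", came_back_and_chose_other=True)
    print(ranked_results())  # site-b.example now ranks above site-a.example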

ChatGPT does the same thing, up until the result is given. Instead of a list of all the possible results, you get only the top result that Google would have placed first. And you don't get it as a quick dataset, but as a fully worded-out sentence. If you then ask for something else, ChatGPT learns that the information it gave you wasn't sufficient, and the next inquiry will receive a longer response that contains what you had to add.

Now, here's the problem with AI:

Contrary to the algorithms Google uses, an AI can not be supervised. Once it learned something, it's in there for good! That's why Tay, Microsoft's first attempt at creating a Chat AI, had to be shut down after it learned that "Hitler did nothing wrong". There was no way to get that out of there. They would have to scrap the whole thing and start over from scratch.

So, what does OpenAI do to stop something like that from happening again? They don't even try. They apply filters instead.

Every response ChatGPT gives is checked with a script that contains a whole bunch of bad words. If the response contains any of those words, it will not be shown to the user. Simple as that.
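
A minimal sketch of the kind of wordlist check being described (the list and function names are made up for illustration; the real filtering is certainly more sophisticated):

    # Toy bad-word filter applied to a response before it is shown to the user.
    BAD_WORDS = {"badword1", "badword2"}  # stand-in for a real, much longer list

    def is_blocked(response: str) -> bool:
        """True if the response contains any listed word."""
        words = {w.strip(".,!?").lower() for w in response.split()}
        return bool(words & BAD_WORDS)

    def show_to_user(response: str) -> str:
        if is_blocked(response):
            return "This response appears to violate the terms of service."
        return response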

Replies:   REP  KimLittle
REP
Updated:

@Sarkasmus

You are anthropomorphizing ChatGPT and Google.

You said ChatGPT is an articulate version of Google, and then described Google as a computer program. Then you talked about Google as if it were a person making the decisions and providing the results. Then you did the same thing for ChatGPT.

ChatGPT is NOT a person. It is a computer program used to gather information, process the information in a specified manner, and present that information in a defined format.

Computer programs, like ChatGPT and Google, do not suddenly pop into existence. Their program code is written by programmers and that code controls what the program does.

Switch Blayde

@REP

Their program code is written by programmers and that code controls what the program does.

But true AI learns from experience and goes beyond what was programmed.

Replies:   awnlee_jawking
awnlee_jawking

@Switch Blayde

But true AI learns from experience and goes beyond what was programmed.

An AI that can go beyond its programming is surely an artificial sentience.

AJ

helmut_meukel

@REP

Their program code is written by programmers and that code controls what the program does.

That's why Tay, Microsoft's first attempt at creating a Chat AI, had to be shut down after it learned that "Hitler did nothing wrong". There was no way to get that out of there.

So Tay was typical M$ programming, not thinking of a way to "unlearn" wrongs. (something like Heinlein's "I tell you three times").

HM.

LupusDei

@REP

It's weirder than that. ChatGPT *did* sorta pop into existence and operate by "programming" incomprehensible to any human. (So do the image generation models like Stable Diffusion. That's why it's said those are trained instead of programmed.)

In very essence ChatGPT, or to be precise, the Large Language Model (LLM) behind it, doesn't do anything other than try to guess a statistically likely word that may follow the current input/output stream. It doesn't search for information. Actually, it's explicitly said to be unaware of world facts past a certain date, because the dataset used to train the LLM is limited and contained. And no, ChatGPT doesn't learn from input.
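
A toy illustration of that "guess the statistically likely next word" idea, using a tiny bigram count table instead of a real neural network (the corpus and all names here are made up; a real LLM works over tokens with billions of learned parameters, but the predict-the-next-word loop is the same basic idea):

    # Toy next-word guesser: count which word follows which, then keep extending.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    follows = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current][nxt] += 1

    def continue_text(prompt_word, length=5):
        out = [prompt_word]
        for _ in range(length):
            candidates = follows.get(out[-1])
            if not candidates:
                break
            out.append(candidates.most_common(1)[0][0])  # most likely next word
        return " ".join(out)

    print(continue_text("the"))  # "the cat sat on the cat"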

It can, however, be used to search for information and to simulate 'self-learning' characters, but those are extended uses of the tool, not something built into the tool itself.

So, how did the LLM pop into existence? By competition between two iterative algorithms. One was producing random gibberish. The second compared that to an absurd amount of text. Every time the first produced something vaguely similar to the real texts, the second said "good AI" and the parameters of that generation got reinforced, until the first got it "right" most of the time. That first one is ChatGPT. The censor is somewhat similar to the second, only this time it's negative.

That's basically what the LLM is: just a very huge matrix of values, supposedly descriptive of how often certain words follow certain combinations of other words. Only it contains no words as such, and nobody at all knows what corresponds to what in that matrix, or even how it's supposed to be used, because it's a product of random values reinforced by self-modifying code.

But somehow (no human knows how exactly) it can generate texts roughly similar to those in the training corpus upon request. Which included quite a big chunk of the internet, therefore porn. It also included a good library of literature, including steamy romance novels etc, therefore porn. ChatGPT should, by its nature, be able to produce porn just as well as anything else, from computer code to made-up court cases. And yes, it makes up non-legible computer code too, inventing functions of its own that don't exist, and then backs their existence up with made-up documentation when challenged.

ChatGPT is the world's best bullshitter. It's so good at it that most of what it says is mostly true. But you should never treat anything ChatGPT says as authoritative or even trustworthy; it's still a million monkeys mashing keyboards, only filtered in such an amazing way that the result seems like a legible answer to your inquiry.

Sarkasmus
Updated:

@REP

Hmm... seems like I have to go into a little more detail. So, let me try:

Google and ChatGPT are not people. That is correct. But they ARE entities with a rather large presence in our lives.

Now, in the case of Google, you are correct. As I wrote before, when you make a search through Google, that search is sent to the Google-Servers where a complicated set of algorithms is used to rifle through the vast amount of data on their hard drives, and to determine which result would best fit your question.

First, this was done by simple word occurrence. If you googled for indian food recipes, you got the sites listed that had the most occurrences of the terms "indian", "food", "recipes". Simple. Over time, this evolved and got more and more complicated. But, in the end, it's still nothing more than the server following the instructions given by a bunch of computer scripts.

OpenAI, however, is NOT working that way.

The only thing the programmers do to create an AI, is to set up the basic structure and provide it with a dataset (which is basically the contents of the Google Servers). The basic structure being "here's the dataset, let me create a few basic associations for the first responses, and then make it so the AI can create its own associations". From that point on... the programmer is out of the picture. From now on, the AI will learn by itself, based on the inquiries it receives from the users.

Now, you have a freshly created Chat AI. You ask it who Elon Musk is, and it will give you the first line of his Wikipedia page. Realizing this is not enough, you send another inquiry back, asking about the Boring Company. So, the AI, all by itself, creates a link between Elon Musk and the Boring Company, and the next time someone asks who the guy is, the response will be the first line of his Wikipedia page PLUS a sentence regarding the Boring Company.

Simple, right? That's why everybody and their grandmother are currently creating AIs. The problem, however, is that there are no codes to control the AIs or even supervise them, as multiple politicians are currently demanding. Once an AI made a connection between a piece of information in its dataset and an inquiry by a user, that connection is there and CAN NOT be severed anymore. Because there is no programmer supervising the thing. There are no scripts governing its responses that a programmer could alter. It's the AI learning something on its own, and now that connection is in there for good!

So, the only way to stop an AI from spewing racist bullshit and turning into a fourteen-year-old phone-sex operator is to apply the filters I mentioned earlier. Those ARE traditional scripts like you described. But they are completely detached from the ChatAI. It is a completely different process that just searches the AI's response for bad words. Like a SPAM-Filter in your EMails.

Replies:   REP  Tw0Cr0ws
REP

@Sarkasmus

and then make it so the AI can create its own associations"

That is precisely what is happening. The program code creates and stores the associations. It uses those associations to create new associations, which are also stored. I accept that.

What I am saying is, there is a computer program that controls the creation, storage, and recall of the associations. But the bottom line is, the computer program is not modifying its code. That requires a programmer.

Your explanations seem to imply that these software programs/modules are evolving and that this evolution results in them changing their code. That may be possible someday, but it isn't possible now.

These software programs/modules are nothing more than complex, sophisticated sections of code that interact with other software programs/modules in a way that is not clear to the consumer (us) of the information they create. The result is people who talk about the programs/modules as if they are evolving their code.

Replies:   Sarkasmus
Sarkasmus

@REP

The result is people who talk about the programs/modules as if they are evolving their code.

Well, they are not evolving their code. But they are going beyond what we currently consider programming code.

Let's stay with the SPAM-Filters as an example, because they fit perfectly:

In a traditional SPAM-Filter, you have a list of conditions that indicate SPAM. One condition is a match with a wordlist that contains terms like "Viagra", "Nigerian prince", "Prize", and stuff like that. Another condition would be whether the sender's address matches the server address the email actually came from. And, finally, a list of all the email addresses that other users have marked as SPAM senders.
Now, before an incoming mail is categorized as SPAM, it needs to match with a number of those conditions. The incoming email starts with a score of Zero, which rises with each match and, once the score goes beyond a predetermined threshold, it will be automatically categorized as SPAM.
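
A minimal sketch of that hard-coded scoring logic (the wordlist, weights, and threshold are invented for illustration):

    # Toy rule-based spam scorer of the kind described above; all values are made up.
    SPAM_WORDS = {"viagra", "nigerian prince", "prize"}
    KNOWN_SPAM_SENDERS = {"scam@example.com"}
    THRESHOLD = 5

    def spam_score(sender, claimed_server, actual_server, body):
        score = 0
        lowered = body.lower()
        score += sum(3 for w in SPAM_WORDS if w in lowered)  # wordlist matches
        if claimed_server != actual_server:                   # mismatched sender server
            score += 4
        if sender in KNOWN_SPAM_SENDERS:                      # user-reported senders
            score += 5
        return score

    def is_spam(sender, claimed_server, actual_server, body):
        return spam_score(sender, claimed_server, actual_server, body) >= THRESHOLD

Every one of those conditions and weights was typed in by a programmer, which is the point being made below.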

For many people, that LOOKS like a form of AI because the servers do stuff all by themselves. But, in the end, every single one of these steps and conditions has been hard coded in a computer script, and the server simply works its way through those instructions.

AI, in turn, does indeed set those conditions by itself! There are no hardcoded scripts determining what results would fit a given inquiry. The AI does that by learning, and then setting the scores (for the connections it made) itself. Thanks to the speech model the programmers gave the AI, it then can even add new conditions for the scoring without anyone telling the AI to do so, simply because a user inquiry told it how the response was unsatisfactory.

Replies:   REP
REP

@Sarkasmus

You continue to describe software programs as if they act autonomously. They don't. Their program code is executed by a computer, and the results are determined by the code.

For example, rephrasing your comment: new conditions can be added for scoring when a user's inquiry produces an unsatisfactory result, because the programmer envisioned that circumstance and added code to the program to allow for the addition of new conditions.

Bottom line - everything you have been talking about is done by a computer, and computers only do what their code tells them to do.

Replies:   Switch Blayde
Switch Blayde

@REP

You continue to describe software programs as if they act autonomously. They don't. Their program code is executed by a computer, and the results are determined by the code. … computers only do what their code tells them to do.

AI is way above my knowledge, but I believe true AI does more than what it is programmed to do. With a normal program, the program can only do and make decisions based on what the programmer coded: if this, then do this. But AI learns from doing.

So a programmer can code the moves to make a computer program play chess, but an AI program can actually get better by playing. It can learn from making bad moves and good moves. It can make decisions the programmer never knew to code. At least that's my understanding of AI.

Replies:   helmut_meukel  REP
helmut_meukel

@Switch Blayde

So a programmer can code the moves to make a computer program play chess, but an AI program can actually get better by playing. It can learn from making bad moves and good moves.

It can make decisions the programmer never knew to code.

I separated the last sentence of the quote by a blank line.
The first part is all programmed, even the "learning" part.
That's just storing, retrieving, and evaluating data. It implies a back-feeding of the results.

The last statement is not true, I doubt it will ever become true, because it implies the ability of the AI to change or augment its programming.

HM.

awnlee jawking

@helmut_meukel

The last statement is not true, I doubt it will ever become true, because it implies the ability of the AI to change or augment its programming.

As an aside, self-modifying code is nothing new - I've written some myself - the problem is making the program able to modify itself autonomously, rather than having the original programmer decide when and where the program can modify itself.

AJ

Switch Blayde

@helmut_meukel

The last statement is not true, I doubt it will ever become true, because it implies the ability of the AI to change or augment its programming.

As I said, my knowledge of AI is limited. But I thought true AI, maybe not what's currently available, will be able to do that. I thought that's why it's called artificial intelligence.

Replies:   Grey Wolf
Grey Wolf

@Switch Blayde

This becomes either/both a religious argument or a semantic argument. There are some who argue that a computer can never become 'intelligent' for one reason or another.

For instance, I'd boil the argument here down to (and apologies for straw-manning, if I'm doing that):
1) A human wrote all of the base code.
2) The AI just did what it was told in 'learning'.
3) Any code it writes to add to itself was therefore, at its base, authored by the original humans and/or the humans who provided the database it 'learned'.
4) Repeat infinitely.

The problem with that argument is that one can make a similarly reductive argument:
1) A bacteria is not intelligent.
2) A large clump of bacteria are not intelligent.
3) Just organizing them into interdependent cells and so forth doesn't add intelligence - still just a bunch of stupid bacteria.
4) Therefore, humans aren't intelligent, we're just super-complicated bio-mechanical machines.

There are a lot of similarities to 'free will' versus 'predestination'. The argument for 'a computer will never be intelligent' looks a lot to me like the argument for 'free will is an illusion - if one had the complete state of the universe at time T, everything throughout the course of history could be determined mathematically, perhaps barring quantum uncertainty and perhaps not - but quantum uncertainty is no basis for free will'.

As far as I'm concerned, there is a level of behavior at which one must believe an entity to be intelligent. If a software system demonstrates that level, and does not evince failings that undermine that determination, one must credit it with being 'intelligent'.

In other words, if one cannot devise a test that separates 'intelligent' humans from 'unintelligent' computers on the basis of behavior, there is no meaningful distinction and the term is simply arbitrary. How the system got there is beside the point. We got here via an exceptionally long process of natural selection and mutation, none of which had any design goal to produce 'intelligence' (or, really, any goal aside from 'life finds a way').

Current systems (some too-eager people aside) simply can't match that standard. Their behavior is clearly not 'intelligent' in any sense of the word. The best that might be said would be that perhaps they can match the performance of someone with a perfect memory but serious cognitive deficiencies - they know 'everything', but what comes out might be inspired or nonsense (or inspired nonsense).

On the other hand, what we have today was barely science fiction a decade and a half ago, and the biggest leaps have been algorithmic, not simply throwing more hardware at the problem. We're going to throw more (and more, and more, and more) hardware at the problem by the natural course of events. Combine that with another few algorithmic breakthroughs and in another decade or two we're going to come up against some 'AIs' that are exceptionally hard to distinguish from humans.

Things like ChatGPT can already 'hypothesize' about how they themselves would 'take over the internet' and similar endeavors. All that one need imagine is an AI which can ask itself questions, and behavior rapidly becomes emergent, unpredictable, and expansive.

All that said, it may simply be an exceptionally complicated mechanism with no true free will.

But, then, the same can be (and is! commonly!) said of humans.

REP
Updated:

@Grey Wolf

I liked your post.

There is a test known as the Turing test which is designed to determine if a machine can be differentiated from a human. The assumption in the test is that if an evaluator cannot differentiate between a second person and the machine in a text-only conversation, then the machine is considered human.

The test sounds like a good method to use, but I would not use it to determine sentience in a machine. For me there is more to sentience than being able to converse like a human.

One of those things is the ability to engage in independent thought. Although I am not sure how to define what I mean by independent thought.

Like you said in the post, I sometimes wonder if some people qualify as sentient beings.

Replies:   joyR  Grey Wolf
joyR

@REP

There is a test known as the Turing test

Perhaps a better test where an AI is concerned is to simply address it as 'computer'. If it continues to communicate with you, it isn't sentient. If however it recognises the insult, triggers the halon fire suppressor system and awaits a more intelligent human to interact with...

:)

Replies:   REP
REP

@joyR

That would probably work except for one thing. It is a very human trait to seek revenge as a way of having the last word.

So if a computer stops speaking to the human and seeks revenge, isn't that action proving that it is equal to a human? :)

Replies:   joyR
joyR

@REP

So if a computer stops speaking to the human and seeks revenge, isn't that action proving that it is equal to a human? :)

Or perhaps seeking revenge is also a very AI trait?

We can't really know until a number of truly sentient AI exist. It would be foolish to presume AI traits based on the behaviour of only a few AIs.

Replies:   REP
REP

@joyR

True and it will be interesting to see how a sentient AI deals with humanity.

Grey Wolf

@REP

I intentionally omitted the Turing Test, but didn't spell that out. The reason I omitted it is that ChatGPT4 is arguably able to pass the Turing Test, or is at most a small number of iterative improvements away from it. It was a great standard for decades, but I think we're past that point now.

I agree with you about independent thought - including that there is a definitional issue there. From a 'black box' perspective, ChatGPT can make things up. Is that 'independent thought'? I'd say no, because we know why it does that, but that's because we understand how it functions. If we didn't? It would be a tough question.

And, to date, we really don't understand how humans actually think very well, so there's a layer of 'unfair comparison' going on.

Pixy

@Grey Wolf

1) A human wrote all of the base code.

3) Any code it writes to add to itself was therefore, at its base, authored by the original humans and/or the humans who provided the database it 'learned'.

That reminds me of the old question, "If you replace the shaft of your father's hammer with a new one after his death, and then a few years later, replace the head, is it still your father's hammer?"

I see the same question in your scenario. Yes, a human made the code, but if the construct has sufficient learning to adjust, adapt and modify the code, then once very little, if any, of the original code remains, will it still be the 'father's' code other than just in name?

Replies:   Mat Twassel
Mat Twassel

@Pixy

"If you replace the shaft of your father's hammer with a new one after his death, and then a few years later, replace the head, is it still your father's hammer?"

Here's what ChatGpt says about that:

This scenario is a classic philosophical thought experiment known as the Ship of Theseus paradox, and it raises questions about the nature of identity and change. In this case, you can apply the same concept to your father's hammer.

If you replace the shaft of your father's hammer with a new one after his death, you might still consider it your father's hammer because the head, which is a significant part of the tool, remains the same. However, when you also replace the head of the hammer a few years later, you are left with no original parts of the hammer.

The question then becomes a matter of personal perspective and interpretation. Some people might argue that it is no longer your father's hammer because none of the original components remain. Others might say that it still carries the essence or history of your father's hammer, even if all the parts have been replaced.

Ultimately, whether it is still your father's hammer or not depends on how you define identity and what significance you attribute to the physical components versus the history or sentimental value of the object. It's a philosophical question with no definitive answer, and different people may have different opinions on it.

Replies:   REP
REP

@Mat Twassel

Here's what ChatGpt says about that:

ChatGPT is not saying anything about that. It is the people who want to think ChatGPT is sentient who make those remarks.

The reality is the program called 'ChatGPT' can modify the database it is using, but it does not modify its program code.

Replies:   Gauthier  Grey Wolf
Gauthier

@REP

The reality is the program called 'ChatGPT' can modify the database it is using, but it does not modify its program code.

A few corrections on that:

1. Currently, ChatGPT can modify its limited-size context. One solution to overcome the context size limit is to have a team of ChatGPT bots speaking among themselves to delegate tasks and achieve the designated goal.

2. ChatGPT can't modify the database. One of the needed improvements is the capability to learn by itself, thus modifying its database.

3. The distinction between program code and data has no reality. It's just a convenient abstraction.

But in this case let's call the program a brain simulator and the data a brain state. It's then clear that your hang-up about code modification has no basis, as only the data is important.

Moreover:
* The brain simulator is really very basic code.
* ChatGPT can write and correct code in most programming languages.
* With access to ChatGPT's code, you could ask it to rewrite or optimize its own code. Today it may not fully succeed without human supervision, but we are not far off. One of the markets for ChatGPT and other generative AI, by the way, is as a programmer's assistant (and possibly soon a replacement)...

Replies:   REP
REP
Updated:

@Gauthier

Currently, ChatGPT can modify its limited-size context.

Whatever that means. Your terms seem to differ from those used by other posters.

ChatGPT can't modify the database. One of the needed improvements is the capability to learn by itself, thus modifying its database.

Somewhere in the above posts, it seemed to me that the poster was saying that ChatGPT had the ability to learn and was adding what it learned to its database.

The distinction between program code and data has no reality. It's just a convenient abstraction.

From what little I know of computers, program code is the set of instructions that tell a computer what to do. If the purpose of the program is to gather data, then data is the result acquired by executing the code.

Both exist in my reality and I find it easy to differentiate between the two, so the distinction is not just a convenient abstraction.

Replies:   Keet
Keet

@REP

... learn and was adding what it learned to its database.

"Learn" is maybe a little too much. It can mark and score context as a response to users input. In next cycles it can use those markers. It probably uses multiple markers like "true/false" or a score that indicates how a piece of information is regarded by user responses. Multiple cycles add up which might be interpreted as learning.

Replies:   REP
REP

@Keet

That's fine.

I was stating what I understood of prior poster's posts.

Replies:   REP
REP

@REP

Since the posters in this thread seem to have a problem differentiating between program code and data, it isn't worth my time to continue in this thread.

Replies:   Switch Blayde
Switch Blayde

@REP

Since the posters in this thread seem to have a problem differentiating between program code and data

I kept mum about that because I thought it was an AI thing.

I started as a programmer and wrote code. Some high-level languages like COBOL and Fortran, but mostly Assembly. Assembly is much lower level, but still not machine code. So I understand what code is.

I then became a DBA (database administrator) and finally in charge of the world-wide Data organization for a Fortune-50 company, so I know what data is, both as flat files and in databases.

The code can read, write, massage, analyze, print, calculate, etc. data. Data Warehouse code can even find stuff that no one even knew to look for through analyzing the data in its own way (once the data is set up properly). I have no idea how AI does it, but code is code and data is data.

Replies:   awnlee jawking  REP
awnlee jawking

@Switch Blayde

but code is code and data is data

code is verbs and data is nouns ;-)

AJ

REP
Updated:

@Switch Blayde

I have no idea how AI does it,

Ah … someone who exists in the same reality that I do.

My understanding is 'AI' is used in three basic ways.

In science fiction, it refers to a computer that is not bound by its code. The computer is capable of independent thought. Thus it is a sentient being.

In my reality, 'AI' is used as an adjective to indicate a computer that appears to be capable of independent thought. That appearance is the result of program code that directs the computer to do things that emulate free thought.

Based on my limited understanding of the reality in which some posters in this thread seem to exist, the computer is not limited by its code. Without being directed to do so by its program code, the computer receives a prompt message, reviews a collection of information (database), analyzes its database, reaches a conclusion (what one poster called an association) regarding the prompt message, adds its conclusion to its database, creates a reply message, and displays its reply message to the individual who requested a reply to their prompt message.

Switch Blayde

@REP

the computer receives a prompt message, reviews a collection of information (database), analyzes its database, reaches a conclusion (what one poster called an association) regarding the prompt message, adds its conclusion to its database, creates a reply message, and displays its reply message to the individual who requested a reply to their prompt message.

Without adding its conclusion to the database, that's what a data warehouse does. The key to a data warehouse, though, is having the right data in the right format for the software to analyze and make associations/decisions.

Dominions Son

@REP

In my reality, "AI" is a marketing fluff term, meant to imply the science fiction meaning for something that falls light years short of that mark.

Truth in advertising would call it Artificial Stupidity.

Replies:   REP
REP

@Dominions Son

In my reality

Sounds like we share your reality.

Keet

@REP

Based on my limited understanding of the reality in which some posters in this thread seem to exist, the computer is not limited by its code. Without being directed to do so by its program code, the computer receives a prompt message, reviews a collection of information (database), analyzes its database, reaches a conclusion (what one poster called an association) regarding the prompt message, adds its conclusion to its database, creates a reply message, and displays its reply message to the individual who requested a reply to their prompt message.

Basically yes. You have the software (code), the database(s), and user input. The code interprets the user input and, following coded rules, it searches the database (analyze). The result is probably analyzed again with some other code before presenting it to the user. It's a 'little' more complicated because it's most likely that several sub-sets of results for parts of the user request are cross-queried to get a final result. That is what makes some results so good and believable.

Problems possibly arise when the code is allowed to add results to the database as new data, which in turn is used for the next queries. We've seen that with algorithms that, after 'learning', created output that was racist, spewed hate speech, or presented complete lies as confirmed truth.

Grey Wolf

@REP

The reality is the program called 'ChatGPT' can modify the database it is using, but it does not modify its program code.

I would argue that the database is, in some ways, the code.

Or, to analogize back to humans, humans cannot alter their 'program code'. That's provided to us by biology. We can only alter the database that the program code operates over.

If that argument holds, then the ability to modify program code is not a test for intelligence. The question is whether the program code is sufficient to support intelligence if the database is sufficient.

ChatGPT4 fails that test, in my opinion. That does not mean that ChatGPT10 will fail it.

And it's entirely possible that humans will use ChatGPT6 to write ChatGPT7, and ChatGPT7 to write ChatGPT8, and so on.

awnlee jawking

@Grey Wolf

Things like ChatGPT can already 'hypothesize' about how they themselves would 'take over the internet' and similar endeavors.

I think that is unlikely.

I expect ChatGPT can peruse all the material it has access to relating to taking over the internet, and massage it into a narrative. But I don't believe ChatGPT is able to hypothesise about the possibilities unprompted by a human.

AJ

Replies:   Grey Wolf
Grey Wolf

@awnlee jawking

I didn't say that it was unprompted by a human. That wasn't the point. The point was that, when prompted, ChatGPT is able to offer plausible mechanisms by which it could take over the internet.

At some point, that prompting may be internal. That doesn't mean it will do anything - it's more that the possibilities already exist.

Flipping your argument around: if a plausible mechanism exists to 'take over the internet', and is published, a future version of ChatGPT will likely 'know' that method. It can also synthesize multiple data sources into a single 'plan.'

There is no 'intentionality' in current ChatGPT iterations. That doesn't mean that, a decade from now, we won't have moved on to intentionality.

We do already see plenty of 'intentionality' in numerous AI systems (highly constrained, but if you go all the way back to Asimov's Laws of Robotics, one of the primary risks is insufficient or poorly specified constraints).

Dicrostonyx

@Grey Wolf

But, then, the same can be (and is! commonly!) said of humans.

And outside random people on the internet saying this, I'll mention that this is a major discussion in evolutionary psychology theory.

I'm not going to go into detail, but the short version is that there's good reason to believe that the conscious mind isn't really a thing the way that we imagine it to be but is more of a "consistency module".

Decisions are made at an "unconscious" level based on task-focused modules. For example, when you see an attractive person there's one brain module that compares that person's perceived and imagined attributes to your current partner and decides whether you should "trade up" or not. If yes, another module decides whether the practicalities of that shift are worth the chance of it not working. Whatever the result, it is fed into your conscious mind as a decision: switch (or cheat) or don't.

The conscious mind then takes that decision, whatever it is, and compares it to past decisions and personality traits to come up with an explanation for the decision which is consistent with past decisions.

This is why everything you do makes sense to you in the moment, even if it doesn't later when you think about it. Your conscious brain is constantly creating fantasies to explain all of your choices in ways that create an illusion of a consistent personality.

[Note: evolutionary psychology is not the only brain model that is currently being debated by psychologists, but it is quite a popular one.]

Replies:   Grey Wolf
Grey Wolf

@Dicrostonyx

Thanks for the comment. That's a much better explained version of part of what I was going for.

Some of the work on brain models combined with physics and philosophy come together in the resurgence of the argument that 'free will is an illusion' - that the universe (from the individual to the entire system) is, ultimately, either deterministic or merely random, not directed by choices. If our 'choices' are made deterministically based on evolution combined with experience, a sufficiently powerful model could encapsulate the entire universe and know the course of events from big bang to whatever happens at the end.

In that version of reality, our 'intelligence' is merely a difference in degree from that of a current-generation AI, not of kind.

Mind you, I reject that argument - but I may simply be predestined to reject it :)

Replies:   awnlee_jawking
awnlee_jawking

@Grey Wolf

a sufficiently powerful model could encapsulate the entire universe

Wouldn't such a model necessarily be larger than the universe?

AJ

Replies:   LupusDei  Grey Wolf
LupusDei
Updated:

@awnlee_jawking

Yes, by at least a couple of orders of magnitude, just to capture the current state, even before going into rewind/forward projections. Some kind of holographic principle may come to the rescue, or claiming the whole thing follows strict patterns that can be regenerated out of a fragment, but I would think that resolution would suffer from such shortcuts/compression.

Grey Wolf

@awnlee_jawking

There are multiple facets to that, but in general, yes, it would.

As a thought experiment, though, it's still valuable. If the universe is 'computable' (given infinite resources), free will is an illusion. It doesn't matter if there are only finite resources and one cannot actually compute the universe - the thought experiment merely requires that it could be done if resources existed.

The similar thing in Computer Science is a Turing Machine. Turing Machines do not and cannot exist; they require an infinitely long 'tape'. Nevertheless, the set of things a TM can compute is extremely important, notwithstanding that some of them cannot be computed by a practical computer. More importantly, the set of things that cannot be computed even given infinite storage is a really important set: if something is in that set, it's a fool's errand to try to compute it.

The second facet is: why do we assume our universe is the only size universes come in? Maybe ours is ridiculously tiny. We only have one to look at, after all.

And the third facet is: perhaps our universe is far, far smaller than it appears. This is pretty much the 'our universe is actually a simulation' argument. The only parts of our universe that need to be computed at full resolution are those within direct experience. A hypothetical modeler need not even model at the atomic level if no one is observing those interactions. Everything outside of a few light-years can be seriously hand-waved.

For the purposes of this discussion, everything but the first point is irrelevant, of course. They're just fun to think about (but, then, that's because I was predestined from the instant of the Big Bang to think they're fun to think about :) ).

Replies:   Dicrostonyx
Dicrostonyx

@Grey Wolf

There's also a fourth facet, or maybe a variation on #3. I don't know much physics myself but used to have a friend who studied it and back in the 90s there was some discussion of the idea that universal constants might actually just be local, though "local" could easily mean 100 light years in every direction.

If some constants change, that would throw off any calculations we've made about distance, mass, and time from areas outside the local area. Worse, outside areas wouldn't even be consistently variable; different "bubbles" of locality with different constants would act as lenses to our perception of the universe in different ways.

REP

@Switch Blayde

At least that's my understanding of AI.

I agree with your understanding. A 'true AI' would be a sentient computer.

The situation is, we do not currently have a true AI. What we have are programs that appear to evolve, but aren't sentient. What makes them appear to be sentient is that the programmers created code that directed the computer to do things that a sentient individual would do - such as adding information to a database.

The key difference between a computer with a sophisticated program and a sentient computer is that a sentient computer would expand its activities into areas other than the one it was programmed to perform.

Tw0Cr0ws

@Sarkasmus

Now, in the case of Google, you are correct. As I wrote before, when you make a search through Google, that search is sent to the Google-Servers where a complicated set of algorithms is used to rifle through the vast amount of data on their hard drives, and to determine which result would best fit your question.

That is how Google worked back in the good old days. These days Google takes what you search for and compares it with what their advertisers paid them to show, and if the advert does not fit the search term, then the search term gets discarded.

KimLittle

@Sarkasmus

Every response ChatGPT gives is checked with a script that contains a whole bunch of bad words. If the response contains any of those words, it will not be shown to the user. Simple as that.

If I had to characterise it in human terms, basically what I saw was the engine generate text, realise what it was doing, and blank it out. In other words "Shit! Sorry, shouldn't have said that."

Interesting that the engine will generate and display before checking what it is creating.

Replies:   Sarkasmus
Sarkasmus

@KimLittle

Because those are two completely separate mechanisms. Maybe I should have explained that further.

ChatGPT did NOT go "shouldn't have said that". ChatGPT gave you the answer it had "learned" would be the most fitting for your inquiry. The filter OpenAI applied is a simple programming script in between ChatGPT and you.

When ChatGPT sends its response, it's not sent to you, but to the server you're connected to and that is displaying the text box for your questions. Before that server displays ChatGPT's response to you, the script is applied to the response and, if the result is clean, the response is displayed. If the response contains one of the bad words, it displays the "Sorry" message instead of the response. But the response from ChatGPT will most likely still be the same when the next person makes the same inquiry you made.

irvmull

@KimLittle

Regardless of how it works, the important thing to remember is that it often doesn't work very well.

I notice a lot of websites that have restaurant reviews or vacation rentals and such now seem to be "written" by ChatGPT or AI.

This shows when, for example, a website describes a local restaurant as "in the heart of downtown", when you know for a fact that it's way the hell out in the country, about 5 miles down a rural back road. A human writer would have known that with a glance at the online map.

I'm sure some of the restaurant reviews are written by AI also, since humans have taste buds...

The more you look, the more of this sort of thing you'll find.

Replies:   Pixy
Pixy

@irvmull

That's got nothing to do with AI and everything to do with human sales reps. What you quoted is standard fare for everything that needs to be sold. "Close to a local transport hub" is sales-speak for 'under the flight path of the local airport' or 'next to the bloody rail line and you're going to be woken up on the hour, every hour...'.

Keet

@KimLittle

The essential difference is between "code" and "data". Current (and for the foreseeable future) AIs can't create and add code, i.e. their basic working mechanism. (Well, they can more or less, but only as far as humans direct them.)
What they can do, and what is called the learning part of AIs, is add and interpret data. Every algorithm works against a data set, and the AI gets feedback about what it presents as results generated from that data set. Using that feedback it can extend the data with received information or mark parts using a list of markers. It's a bit more complicated, but that is the basic way it works.

Replies:   storiesonline_23
storiesonline_23

@Keet

The essential difference is between "code" and "data".

That difference is as essential as the difference between foreground and background.

We commonly say that the records of your customers or widgets or whatever are data processed by an application program. But that application program could be written in BASIC, so it is "just" a bunch of lines being processed by the BASIC interpreter. And the BASIC interpreter is just a bunch of machine words being processed by the CPU. And so forth: it is points of view all the way down.

Essential? The answer depends on -- wait for it! -- your point of view.

Replies:   LupusDei
LupusDei

@storiesonline_23

Indeed.

And then we have processing in memory and all, and indeed, AI is big on those.

irvmull
Updated:

@KimLittle

We already have examples of how AI deals with humanity.

Former NBA player Brandon Hunter passed away unexpectedly at the young age of 42 this week, a tragedy that rattled fans of his 2000s career with the Boston Celtics and Orlando Magic.

But in an unhinged twist on what was otherwise a somber news story, Microsoft's MSN news portal published a garbled, seemingly AI-generated article that derided Hunter as "useless" in its headline.

"Brandon Hunter useless at 42," read the article, which was quickly called out on social media. The rest of the brief report is even more incomprehensible, informing readers that Hunter "handed away" after achieving "vital success as a ahead [sic] for the Bobcats" and "performed in 67 video games."

https://futurism.com/msn-ai-brandon-hunter-useless
...
Asked to provide a list of countries in North America that start with the letter "M," ChatGPT responded with:

"Certainly! There are three countries in North America that start with the letter 'M':

Mexico
United States of America (USA)
Canada

These are the three countries located in North America that begin with the letter 'M.'"

richardshagrin

@irvmull

Mexico
United States of America (USA)
Canada

These are the three countries located in North America that begin with the letter 'M.'"

Mis-United States of America
Mc Canada.

Replies:   anubelore
anubelore

@richardshagrin

It literally listed the three countries in North America...and the list starts with the letter M. I'm snickering right now.
"Objectives: 1- list three countries in North America. 2- Start with letter M."
Heh.

Replies:   Dicrostonyx
Dicrostonyx

@anubelore

Except there are 23 countries in North America; these are just the most populous. There are also an additional 24 territories that are dependencies of other countries.

What makes it especially funny is that there actually are 3 states in North America that begin with M: Martinique (France), Mexico, and Montserrat (UK).

Replies:   Dominions Son
Dominions Son
Updated:

@Dicrostonyx

there actually are 3 states in North America that begin with M: Martinique (France), Mexico, and Montserrat (UK).

Martinique and Montserrat are islands in the Caribbean and as such, arguably not part of the continent of North America.

And if you do count islands as part of a continent, those two are closer to South America than North America. The closest mainland country for both would be Venezuela.

Replies:   helmut_meukel
helmut_meukel

@Dominions Son

Martinique and Montserrat are islands in the Caribbean and as such, arguably not part of the continent of North America.

And if you do count islands as part of a continent, those two are closer to South America than North America.

What you state is not the widely accepted definition of North America.

The southern portion of North America includes Central America and non-English speaking Caribbean nations. The north of the continent maintains recognized regions as well. In contrast to the common definition of North America, which encompasses the whole North American continent, the term "North America" is sometimes used more narrowly to refer only to four nations, Canada, Greenland, Mexico, and the U.S.

Back when there were only two supercontinents (Laurasia and Gondwana), North America was part of Laurasia and South America was part of Gondwana. Both broke apart – Gondwana first – and the pieces drifted apart.
Long, long after this, the two American continents connected.
The actual distance of the Caribbean Isles to mainland North America and mainland South America does not count.
While Cuba is part of the large North American plate, all the others are part of the small Caribbean plate.
I couldn't find whether the Caribbean plate broke from North or South America, but this would determine if they are geologically part of North America or South America.

HM.

Grey Wolf
Updated:

@irvmull

It missed the Magnited States of America (which is somewhat of an in-joke for Alamo Drafthouse fans - google 'Alamo Drafthouse Magnited States of America' for a rather funny voicemail that the Alamo Drafthouse people turned into an anti-texting-in-theaters announcement).
