Hmm... seems like I have to go into a little more detail. So, let me try:
Google and ChatGPT are not people. That is correct. But they ARE entities with a rather large presence in our lives.
Now, in the case of Google, you are correct. As I wrote before, when you run a Google search, your query is sent to Google's servers, where a complicated set of algorithms rifles through the vast amounts of data in their index to determine which results best fit your question.
At first, this was done by simple word occurrence: if you googled for Indian food recipes, you got the sites listed that had the most occurrences of the terms "indian", "food" and "recipes". Simple. Over time this evolved and got more and more complicated, but in the end it is still nothing more than the servers following instructions written by programmers.
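To make that concrete, here is a toy sketch of that early word-occurrence idea. This is purely illustrative (the page texts and function names are made up, and real search engines use far more sophisticated ranking), but it shows the point: the behavior is fully spelled out by a programmer's script.

```python
# Toy sketch of word-occurrence ranking (illustrative only,
# NOT Google's actual algorithm).

def score(page_text, query_terms):
    """Count how often the query terms appear in a page."""
    words = page_text.lower().split()
    return sum(words.count(term) for term in query_terms)

def rank(pages, query):
    """Return page names sorted by raw term frequency, best match first."""
    terms = query.lower().split()
    return sorted(pages, key=lambda name: score(pages[name], terms), reverse=True)

pages = {
    "curry-blog": "indian food recipes indian curry recipes food",
    "pizza-site": "italian pizza recipes",
}
print(rank(pages, "indian food recipes"))  # ['curry-blog', 'pizza-site']
```

Every step here is an explicit instruction a human wrote down, which is exactly the kind of system the next paragraphs contrast with.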
OpenAI's models, however, do NOT work that way.
The only thing the programmers do to create an AI like this is set up the basic structure (the network architecture) and provide it with a dataset, which is basically a huge scrape of the public web. During training, the model adjusts millions of internal weights so it gets better and better at predicting text, and in doing so it builds its own associations between concepts. Nobody writes those associations by hand. From that point on, the programmer is largely out of the picture: the model's responses come from what it learned from the data, not from scripts somebody wrote.
Now, take a freshly trained chat AI. You ask it who Elon Musk is, and it gives you something like the first line of his Wikipedia page, because it has seen thousands of texts about him during training. It has also seen The Boring Company mentioned alongside his name, so all by itself it has formed an association between the two, and it can bring that up in a follow-up without any programmer ever writing a rule connecting the man to the company. (One precision: a deployed model does not permanently learn new facts from individual chats. New associations only get baked in when the company retrains or fine-tunes it on new data.)
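Here is a deliberately crude sketch of "learning associations from data". Real language models use neural networks with millions of weights, not simple counts, and the corpus below is invented, but the key property is the same: the association appears from the data alone, with no programmer writing it in.

```python
# Hypothetical mini example of associations emerging from a corpus.
# Real models learn via neural-network training, not word counts.
from collections import Counter, defaultdict

def learn_associations(corpus):
    """Count which word follows which: a crude stand-in for learned weights."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

corpus = [
    "elon musk founded the boring company",
    "elon musk runs spacex",
    "elon musk founded the boring company in 2016",
]
model = learn_associations(corpus)
# The link "musk -> founded" emerges from the data, not from a rule:
print(model["musk"].most_common(1))  # [('founded', 2)]
```

Nobody told this script that "musk" relates to "founded"; the pattern fell out of the text it was given, which is the point of the paragraph above.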
Simple, right? That's why everybody and their grandmother are currently creating AIs. The problem, however, is that there is currently no regulation to control or even supervise these AIs, as multiple politicians are demanding. And once a model has absorbed an association from its training data, there is no single line of code a programmer could delete to remove it. The knowledge is smeared across millions of weights. There are no scripts governing its responses that a programmer could simply alter; the model learned that connection on its own, and short of retraining or fine-tuning the whole thing, it's in there for good.
So, the only way to stop an AI from spewing racist bullshit and turning into a fourteen-year-old phone-sex operator is to apply the filters I mentioned earlier. Those ARE traditional scripts like you described, but they are completely separate from the chat AI itself: a different process that just scans the AI's response for problematic content before it reaches you, much like a spam filter in your email.
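A minimal sketch of such a post-hoc filter, assuming a simple word blocklist (real deployments typically use trained classifier models rather than word lists, and the names here are made up):

```python
# Hypothetical output filter: an ordinary script that checks the
# model's reply AFTER generation, completely separate from the model.
BLOCKLIST = {"badword", "slur"}  # placeholder terms for illustration

def filter_response(reply):
    """Pass the reply through, or withhold it if it trips the blocklist."""
    words = set(reply.lower().split())
    if words & BLOCKLIST:
        return "[response withheld by content filter]"
    return reply

print(filter_response("here is a normal answer"))    # passes through unchanged
print(filter_response("some badword in the reply"))  # gets replaced
```

Note that the filter never touches the model's weights; it only inspects the finished text, which is why it can be written and updated as a traditional script.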