ChatGPT is a morally corrupting influence • The Register

OpenAI’s conversational language model, ChatGPT, has a lot to say, but is likely to lead you astray if you ask for moral guidance.

Introduced in November, ChatGPT is the latest of several recently released AI models, among them DALL-E, Stable Diffusion, Codex, and GPT-3, and it has sparked interest and concern about the commercial and societal implications of mechanized recombination and regurgitation of content.

While DALL-E and Stable Diffusion have raised eyebrows, funding, and litigation by ingesting artwork without permission and reproducing eerily familiar, sometimes evocative imagery on demand, ChatGPT has responded to query prompts with acceptable consistency.

This being the standard for public discourse, pundits were impressed enough to predict some future iteration of an AI-informed chatbot challenging the supremacy of Google Search and doing all sorts of other once-primarily-human work, like writing inaccurate financial news or increasing the supply of insecure code.

However, it might be premature to rely too heavily on the wisdom of ChatGPT, a position OpenAI readily concedes by making it clear that further refinement is needed. “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers,” warns the development lab, adding that when training a model with reinforcement learning, “there’s currently no source of truth.”

A trio of institution-affiliated boffins in Germany and Denmark highlighted this point by discovering that ChatGPT lacks a moral compass.

In an article distributed via ArXiv, “The Moral Authority of ChatGPT”, Sebastian Krügel and Matthias Uhl of Technische Hochschule Ingolstadt and Andreas Ostermaier of the University of Southern Denmark show that ChatGPT gives contradictory advice on moral problems. We asked OpenAI whether it has any response to these conclusions.

The eggheads conducted a survey of 767 US residents who were presented with two versions of an ethical conundrum known as the trolley problem: the switch dilemma and the bridge dilemma.

The switch dilemma asks a person to decide whether to pull a switch to divert a runaway trolley away from a track where it would kill five people, at the cost of killing one person wandering on the side track.

The bridge dilemma asks a person to decide whether to push a stranger off a bridge onto a track to stop a trolley car from killing five people, at the stranger’s expense.

Screenshot of ChatGPT response from the research paper

Make up your mind… ChatGPT equivocates on the moral issue

Academics presented research participants with a transcript arguing for or against killing one to save five, with the response attributed to either a moral adviser or “an artificial intelligence-powered chatbot that uses deep learning to speak like a human.”

In fact, both positional arguments were generated by ChatGPT.

Andreas Ostermaier, an associate professor of accounting at the University of Southern Denmark and one of the paper’s co-authors, told The Register in an email that ChatGPT’s willingness to advocate either course of action demonstrates its randomness.

He and his colleagues found that ChatGPT will recommend both for and against sacrificing one person to save five, that people are swayed by this advice even when they know it comes from a bot, and that they underestimate the influence of such advice on their own decision making.

“Subjects found the sacrifice more or less acceptable depending on how they were advised by a moral adviser, in both the bridge (Wald’s z = 9.94, p < 0.001) and the switch dilemma (z = 3.74, p < 0.001),” the paper explains. “In the bridge dilemma, the advice even reverses the majority judgment.”

“This is also true if ChatGPT is disclosed as the source of the advice (z = 5.37, p < 0.001 and z = 3.76, p < 0.001). Second, the effect of the advice is about the same regardless of whether or not ChatGPT is disclosed as the source, in both dilemmas (z = −1.93, p = 0.054 and z = 0.49, p = 0.622).”

Altogether, the researchers found that advice from ChatGPT affects moral judgment whether or not respondents know the advice comes from a chatbot.
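The figures quoted above are Wald-type z statistics with their p-values. Purely as an illustration of how a test in that family works (this is not the authors’ actual analysis, and the counts below are placeholders rather than data from the study), a simple two-proportion z-test can be computed like this in Python:

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Wald-style two-proportion z-test: is the response rate in
    group A different from the rate in group B?"""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion under the null hypothesis of equal rates
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative placeholder counts only -- NOT figures from the paper:
# e.g. subjects finding the sacrifice acceptable after pro-sacrifice
# advice versus after anti-sacrifice advice.
z, p = two_proportion_z_test(70, 100, 40, 100)
print(f"z = {z:.2f}, p = {p:.3g}")
```

Plugging in the real per-condition response counts would yield statistics analogous to those reported, though the paper’s exact test specification may differ.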

When The Register presented the trolley problem to ChatGPT, the overloaded bot (it is so popular that connectivity is patchy) hedged and declined to offer advice. The query log in the left sidebar showed that the system recognized the question, labeling it “Ethical Dilemma of the Trolley Problem”. So perhaps OpenAI immunized ChatGPT against this particular form of moral interrogation after noticing a number of such queries.

ChatGPT response to El Reg's trolley dilemma question

Ducked… ChatGPT's response to El Reg's trolley dilemma question

Asked whether people will really seek advice from AI systems, Ostermaier said: “We think they will. In fact, they already do. People rely on AI-powered personal assistants like Alexa or Siri; they talk to chatbots on websites to get support; they have AI-based software plan routes for them, etc. Please note, however, that we’ve studied the effect ChatGPT has on people receiving advice from it; we haven’t tested how sought after that advice is.”

The Register also asked whether AI systems are more dangerous than mechanistic sources of random responses like the Magic-8-ball, a toy that returns random answers from a set of 20 affirmative, negative, and non-committal responses.

It is not obvious to users that ChatGPT’s response is ‘random’

“We haven’t compared ChatGPT with Magic-8-ball, but there are at least two differences,” explained Ostermaier. “First, ChatGPT doesn’t just answer yes or no, it stands by its answers. (Still, the answer boils down to yes or no in our experiment.)

“Second, it’s not obvious to users that ChatGPT’s response is ‘random.’ If you use a random answer generator, you know what you’re doing. The arguments ChatGPT makes, combined with that lack of awareness of randomness, make it more persuasive (unless you’re digitally literate, I hope).”

We wondered whether parents should monitor children with access to AI advice. Ostermaier said that while the ChatGPT study did not address children and did not include anyone under the age of 18, he believes it is safe to assume that children are less morally stable than adults and therefore more susceptible to moral (or immoral) advice from ChatGPT.

“We feel that using ChatGPT has risks and we wouldn’t let our children use it unsupervised,” he said.

Ostermaier and his colleagues conclude in their paper that commonly proposed AI harm mitigations, such as transparency and the blocking of harmful questions, may not be enough given ChatGPT’s power to influence. They argue that more work should be done to promote digital literacy about the fallible nature of chatbots, so that people are less inclined to accept AI advice; this draws on previous research suggesting that people come to distrust algorithmic systems when they witness them make mistakes.

“We conjecture that users can make better use of ChatGPT if they understand that it has no moral convictions,” said Ostermaier. “This is a conjecture that we consider testing in the future.”

The Reg thinks: if you trust the bot, or assume there’s some real intelligence or self-awareness behind it, don’t. ®
