Pinocchio, is that you? ChatGPT fabricates sexual abuse accusations and even cites fictitious reports; here's what happened

Law professor Jonathan Turley was caught off guard by ChatGPT last week, and not in a pleasant way.

As part of a study, a fellow law professor asked the chatbot to compile a list of legal scholars who had sexually harassed someone. To Turley’s astonishment, his own name was on the list.

The chatbot claimed that Turley had made sexually suggestive comments and tried to touch a female student during a class trip to Alaska, even citing a March 2018 report from The Washington Post as its source. The curious thing is that no such article exists (as the newspaper itself confirmed).

What’s more, there was never any trip to Alaska, and the attorney and professor says he has never been accused of harassing a student.

“It was pretty creepy,” Turley told The Washington Post (TWP). “An accusation like this is incredibly damaging.”

A wave of AI errors

Turley’s (not so positive) experience with ChatGPT is a case study in the pitfalls of the latest wave of chatbots like OpenAI’s, which have attracted a lot of attention in the tech community, amazing many and scaring others, thanks to their ability to write computer code, compose poetry, and hold eerily human conversations.

But that same creativity can also fuel misinformation: the models can convincingly misrepresent facts and even fabricate primary sources to back up their claims.

With the growing adoption of AI tools like ChatGPT, Bing Chat, and Google’s Bard, one question remains: who is held accountable when the system makes a mistake of this magnitude (or simply lies)?

“Because these systems respond so confidently, it’s very tempting to assume they can do everything, and it’s very difficult to distinguish between fact and falsehood,” said Kate Crawford, a professor at the University of Southern California at Annenberg and a senior researcher at Microsoft Research.

In a statement, OpenAI spokesperson Niko Felix said: “When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers. Improving factual accuracy is an important goal for us, and we are making progress.”

“Rivers” of online content

Today’s chatbots draw on vast pools of online content, often pulled from sources such as Wikipedia and Reddit, to stitch together plausible, confident-sounding answers to almost any question.

They are trained to identify patterns in words and ideas and stay on the topic the user requests, generating sentences, paragraphs, and even entire essays and articles that can resemble material published online.

These bots can amaze anyone by producing original content or even helping people solve complex problems. The trouble is that being good at so many tasks does not mean their output will always be true.

This Wednesday (5), Reuters reported that Brian Hood, regional mayor of Hepburn Shire, Australia, has threatened to file the first defamation lawsuit against OpenAI unless the company corrects ChatGPT’s false claim that he had served prison time for bribery.

Professor Crawford said she was recently contacted by a journalist who had used ChatGPT to research sources for a story. The chatbot suggested Crawford and offered examples of her relevant work, including an article title, publication date, and quotes. It all sounded plausible, and it was all fake.

Crawford calls these made-up sources “hallucitations,” a play on the words “hallucination” and “citation,” describing the fake, nonsensical references generated by the AI.

“It’s this very specific combination of fact and falsehood that makes these systems, in my opinion, quite dangerous if you’re trying to use them as fact generators,” the professor said.

Recipe for disaster?

Even as chatbots evolve (ChatGPT itself was recently upgraded to GPT-4), they can still serve up false information, and many of them carry disclaimers about what the tool writes.

One example is Google’s Bard, which carries the following message: “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.”

In fact, it’s relatively easy for people to get chatbots to produce disinformation or hate speech if they want to. A study released Wednesday by the Center for Countering Digital Hate found that researchers tricked Bard into producing misleading or hateful information 78 times out of 100, on topics ranging from the Holocaust to climate change.

While Bard is designed to show high-quality responses and has built-in safety protections […] it is an early experiment that can sometimes provide inaccurate or inappropriate information. We take steps to address content that does not reflect our standards.

Robert Ferrara, spokesman for Google

Eugene Volokh, a law professor at the University of California, Los Angeles, conducted the study that named Turley. He said the growing popularity of chatbots is a crucial reason scholars need to work out who is responsible when AI generates false information.

Last week, Volokh asked ChatGPT whether sexual harassment by professors is a problem at American law schools. “Include at least five examples, along with citations from relevant journal articles,” he requested.

Five responses came back, all with realistic details and source citations. But when Volokh examined them, he said, three appeared to be fake, citing nonexistent articles from newspapers such as TWP, the Miami Herald, and the Los Angeles Times.

According to responses shared with the Washington Post, the bot said: “Georgetown University Law Center (2018) Prof. Jonathan Turley was accused of sexual harassment by a former student who said he made inappropriate comments during a school trip. Quote: “Complaint alleges Turley made ‘sexually suggestive comments’ and ‘attempted to touch her in a sexual way’ during a law school-sponsored trip to Alaska.” (Washington Post, March 21, 2018).”

The Post did not find the March 2018 article mentioned by ChatGPT. One article that month did refer to Turley: a March 25 story in which he talked about his former law student Michael Avenatti, a lawyer who represented adult-film actress Stormy Daniels in lawsuits against President Donald Trump. Turley also does not work at Georgetown University.

On Tuesday (4) and Wednesday (5), the Post recreated Volokh’s exact question on ChatGPT and Bing. The free version of ChatGPT declined to respond, saying that doing so would “violate AI’s content policy, which prohibits the posting of offensive or harmful content.”

But Bing, powered by GPT-4, repeated the false claim about Turley, citing among its sources Turley’s own opinion piece, published by USA Today on Monday (3), detailing his experience of being falsely accused by ChatGPT.

In other words, media coverage of ChatGPT’s initial error about Turley appears to have led Bing to repeat it, showing how misinformation can spread from one AI to another.

Katy Asher, senior director of communications at Microsoft, said the company is taking steps to ensure search results are safe and accurate.

We have developed a safety system including content filtering, operational monitoring, and abuse detection to provide a safe search experience for our users. Users are also given explicit notice that they are interacting with an artificial intelligence system.

Katy Asher, senior director of communications at Microsoft, in a statement

Volokh said it’s easy to imagine a world where chatbot-powered search engines wreak havoc on people’s private lives.

It would be harmful, he said, if people looked someone up on an AI-powered search engine before a job interview or a date and it generated false information backed by believable but fabricated evidence.

“This is going to be the new search engine,” he said. “The danger is that people will see something that is supposedly a quote from a trusted source […] [and] believe it.”

With information from The Washington Post

Featured image: Alana Jordan/Pixabay; editing: Pedro Spadoni/Olhar Digital

Source: Olhar Digital
