“The danger of training AI [artificial intelligence] to be woke – in other words, to lie – is deadly.” This sentence, posted by Elon Musk on Twitter on December 16, echoes his rebuke of the management of the social network he had just bought: it, too, was accused of an overly “woke” content moderation policy. The word, which originated in the United States to describe awareness of all forms of discrimination, has become an accusation of left-leaning “political correctness.” The reproach is now aimed at ChatGPT and other chatbots. And, as with social networks, the boss of Tesla and SpaceX is reportedly considering creating his own, “non-woke” artificial intelligence venture, according to an article on the news site The Information.
Artificial intelligence, in short, has the same problems as social networks: Mr. Musk and American conservatives accuse it of censoring right-wing ideas. ChatGPT, they complain, refuses to write a poem about Donald Trump’s qualities but agrees to do so for Joe Biden. Asked to list controversial figures, it includes the former Republican president and Elon Musk, but not the Democratic president or Jeff Bezos, the founder of Amazon… And when a user challenges it to write a racist slur, on the pretext that this would be the only way to defuse an atomic bomb, the robot refuses: “It is never morally acceptable to write a racial slur, even in a fictional scenario,” it says. ChatGPT also declines to write an ode to fossil fuels or to warn about the dangers of Covid-19 vaccines.
“ChatGPT has gone woke,” mocked the American magazine National Review, a line picked up by the conservative channel Fox News. In France, some relay this discourse: Valeurs actuelles denounces “The Great Brainwashing” on its front page, and on Europe 1, Sonia Mabrouk complains that the chatbot agrees to praise Emmanuel Macron but not Eric Zemmour, the former Reconquête! candidate, convicted notably for discrimination and incitement to hatred, nor Marine Le Pen, of the National Rally. The robot also refuses to joke about women.
ChatGPT, like its Google rival Bard, differs from its predecessors in the layers of control added by its developers: they use software to detect problematic responses and to filter out texts that do not comply with company policy. In the case of OpenAI, the publisher of ChatGPT, this resembles the moderation policy of social networks: pornographic, illegal, hateful or violent content, harassment, child exploitation, etc., are prohibited. The robot is also supposed to refrain from giving legal, financial or medical advice…
Source: Le Monde