

Artificial intelligence: chatbots hijacked by cybercriminals

DarkGPT, EscapeGPT, WormGPT, WolfGPT, EvilGPT, DarkBARD, BadGPT, FreedomGPT… These names probably mean nothing to you, but their suffixes offer a clue. They are chatbots like ChatGPT or Bard, except developed by the organized crime industry, and they can be put to work writing computer viruses and phishing emails, creating fake websites, or scanning a site for vulnerabilities to attack.

In a study released on January 6, a team at Indiana University Bloomington dived headfirst into this dark side of artificial intelligence (AI). One of the authors, Xiaojing Liao, coined the term "Malla" (for "Malicious LLM Applications", i.e. malicious applications of large language models) to describe all of these programs and services. "We counted 212 of them between February and September 2023, and we see that number continuing to grow," says the researcher.

"We are used to this type of 'game'. The terrain has simply changed. First there was the web, then mobile, then the cloud…," elaborates XiaoFeng Wang, another co-author. "Our research shows that you no longer need to be a great programmer to cause harm through viruses, phishing and so on. You just need to use these services." Moreover, according to the researchers, these services are cheaper (from 5 to 199 dollars, or about 4.60 to 184 euros) than those available before AI, which averaged 399 dollars. They remain profitable nonetheless: an analysis of Bitcoin transactions on WormGPT, a virus and phishing-email platform since shut down, showed $28,000 in revenue over three months.

Professionalism runs deep: the team also examined the reliability of these programs, and the results are far from bad. The viruses, emails and websites they produce achieve very good scores in "performance" tests, even if quality varies from one service to another.

A bad surprise

The article also details the methods used by cybercriminals. They either take open-source language models (whose parameters are publicly available) and fine-tune them to specialize in malicious tasks, or they bypass the safeguards of commercial services.

In the first case, the advantage is that these programs have no filters or prohibitions and can be trained on any content. Thus Pygmalion-13B, based on Meta's Llama-13B, was trained to produce offensive and violent content. OpenAI's Davinci-002 and Davinci-003, predecessors of the main ChatGPT models, have also been used to generate viruses and phishing emails.

Source: Le Monde


