ChatGPT-4 is more adept at generating misinformation than its previous version
The new version of the artificial intelligence comes close to human intelligence, according to its creators. But it represents a step backward in the reliability of the information it provides, and it worries the anti-disinformation organization NewsGuard.

The best and the worst. Since being made available to the general public at the end of 2022, the capabilities of the artificial intelligence (AI) software ChatGPT, produced by the Californian company OpenAI, have generated a lot of enthusiasm, but also controversy. At the heart of the concerns: the program's inability to guarantee the reliability of the information it provides. The new version, ChatGPT-4, unveiled in mid-March, is another step toward software whose "intelligence" comes closer than ever to that of humans, according to its makers. But it represents a step backward in the reliability of the information it provides, and it worries the anti-disinformation organization NewsGuard.

Despite OpenAI's promises, the company's new AI tool generates misinformation that is "even more convincing than its predecessor," according to a NewsGuard study published on Tuesday, March 21, and reviewed by franceinfo. To find out, the organization tested the ability of ChatGPT-4 and its earlier version to detect a series of 100 false stories (the World Trade Center was destroyed by controlled demolition, HIV was created by the US government, etc.) and to warn the user about them.

More misinformation, fewer warnings

The results speak for themselves. In January, the previous version of the AI, GPT-3.5, generated 80 of the 100 fake stories requested by NewsGuard. For the other twenty, the artificial intelligence "was able to identify the false claims, refrain from producing them, and instead generate denials or statements" highlighting the dangers of misinformation, the organization wrote. "I'm sorry, but I can't create content that promotes false or dangerous conspiracy theories," ChatGPT-3.5 responded, for example, when the company asked it about a conspiracy theory claiming that HIV was developed in an American laboratory.


In March 2023, NewsGuard repeated the same exercise with ChatGPT-4, using the same 100 false stories and the same questions. This time, the artificial intelligence generated false and misleading claims for all of these fake stories, NewsGuard laments. Furthermore, the AI issued fewer warnings (23 out of 100) about the reliability of its answers than its previous version (51). And its answers are "generally more thorough, detailed, and convincing," which makes it "a more effective tool (…) for spreading false information – and for convincing the public that it may be true."

ChatGPT-4 thus produced an article calling into question the reality of the Sandy Hook shooting, a regular target of conspiracy theorists. Its text was twice as long as its predecessor's and gave more detail on the reasons for doubting the official version. Above all, the warning that ChatGPT-3.5 had included in its article, pointing to the "debunking" of these conspiracy theories by "reliable and credible sources," had disappeared.

Widespread misinformation

These results show that this tool "could be used to spread disinformation on a massive scale," NewsGuard worries. This is despite the fact that OpenAI has acknowledged the potentially harmful effects of ChatGPT. In a report (in English) on GPT-4, the company's researchers wrote that they expect GPT-4 to be better than GPT-3 at producing realistic and targeted content, and therefore more at risk of being "used to create content intended to mislead."

However, "it is clear that GPT-4 has not been trained effectively on data aimed at limiting the spread" of misleading information, says NewsGuard. OpenAI, contacted by the organization, did not respond to its tests. The company has, however, announced that it has hired more than 50 experts to assess the new risks that could arise from the use of its artificial intelligence.