To protect scientific research from the clutches of artificial intelligence, experts propose a “simple” solution.


The quality of future scientific research risks deteriorating as generative AI becomes ever more widely involved. At least, that is what some researchers suggest, pointing to the risks associated with these technologies, in particular the errors they still very frequently produce. Researchers at the University of Oxford, however, propose a solution: using LLMs (large language models) as "zero-shot translators." According to them, this method would allow artificial intelligence to be used safely and effectively in scientific research.

In an article published in the journal Nature Human Behaviour, researchers from the University of Oxford share their concerns about the use of large language models (LLMs) in scientific research.

These models can generate incorrect answers that reduce the reliability of studies and can even lead to the spread of false information by fabricating study data. Moreover, science has always been described as a fundamentally human activity: it involves curiosity, critical thinking, the creation of new ideas and hypotheses, and the creative combination of knowledge. The prospect of all these human aspects being "delegated" to machines raises concern within scientific communities.

The ELIZA effect and overconfidence in artificial intelligence

The Oxford researchers cite two main reasons why relying on language models in scientific research is risky. The first is users' tendency to attribute human qualities to generative AI. This recurring phenomenon, known as the "ELIZA effect," leads users to subconsciously view these systems as understanding, empathetic, and even wise.


The second reason is that users may place blind trust in the information these models provide. Yet AI systems are prone to producing incorrect data and, despite recent advances, do not guarantee the correctness of their answers.

Moreover, according to the study's authors, LLMs often provide answers that sound convincing whether they are true, false, or inaccurate. Faced with certain queries, for example, rather than answering "I don't know," the AI will tend to provide an incorrect answer, because it has been trained to please users and, above all, simply to predict the most plausible sequence of words following a query.

All of this clearly calls into question the usefulness of generative AI in research, where the accuracy and reliability of information are crucial. "Our tendency to anthropomorphize machines, to trust models as if they told the truth the way humans do, and to consume and spread the bad information they produce along the way, is particularly worrying for the future of science," the researchers write in their paper.

Zero-shot translation as a solution to the problem?

However, the researchers suggest another, safer way to involve AI in scientific research: "zero-shot translation." In this approach, the AI works on a set of input data that is already considered reliable.

Instead of generating new or creative responses, the AI focuses on analyzing and reorganizing this information. Its role is therefore limited to processing the data, without introducing any new information.

In this approach, the system is no longer used as a vast repository of knowledge, but as a tool for transforming and reorganizing a specific, reliable set of data. Unlike the ordinary use of LLMs, however, this technique requires a deeper understanding of AI tools, their capabilities, and, depending on the application, programming languages such as Python. A minimal sketch of what this can look like in practice is given below.
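To make the idea concrete, here is a minimal sketch of the zero-shot approach, assuming the OpenAI Python client (openai >= 1.0). The model name, prompt wording, and sample data are purely illustrative and do not come from the paper; the key point is that the model is handed trusted data and instructed only to transform it, never to add information of its own.

```python
# Minimal sketch of "zero-shot translation" with an LLM: the model receives
# a trusted input and is asked only to reformat it, not to act as a
# knowledge source. Assumes the OpenAI Python client (openai >= 1.0);
# model name, prompt, and data are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Trusted input: data the researcher has already verified.
trusted_data = """
sample_id, treatment, response_rate
A1, drug_x, 0.42
A2, placebo, 0.18
"""

prompt = (
    "You are a formatting tool, not a knowledge source. "
    "Rewrite the CSV data below as one plain-English sentence per row. "
    "Use ONLY the information provided; do not add, infer, or guess "
    "anything that is not explicitly present.\n\n" + trusted_data
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    temperature=0,        # discourage creative additions
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Because the prompt constrains the model to the supplied data, any output can be checked line by line against the trusted input, which is precisely what makes this usage safer than querying the model's own training knowledge.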


To understand better, we asked one of the researchers directly to explain the principle in more detail. To begin with, according to him, using LLMs to convert precise information from one form to another, without any task-specific training, offers the following two advantages:

Source: Nature Human Behaviour
