OpenAI dissolves the team responsible for the safety of a potential super-AI

Following the departure of its two leaders, the OpenAI team responsible for the safety of a potential artificial superintelligence has been disbanded and its members folded into other research groups at the company. OpenAI confirmed the news to Agence France-Presse on Friday, May 17, explaining that the restructuring had begun several weeks earlier. The creator of ChatGPT says that researchers working on the safety of AI models will now work in coordination with the engineers developing those models.

But the team's two former leaders had just left the Silicon Valley star. Jan Leike explained on Friday on X that he was resigning over fundamental disagreements with management about the company's priorities: innovation or safety. "We have reached a breaking point," noted the engineer, who headed the team responsible for "superalignment," that is, ensuring that a future "general" artificial intelligence, one as intelligent as humans, remains aligned with human values.

"It is very sad to see [Jan Leike] leave," responded Sam Altman, co-founder and chief executive of the company, headquartered in San Francisco (California). "He's right, we still have a lot to do [on alignment and safety research], and we are determined to do it," he added, promising a longer message soon.

Read also | OpenAI: Sources of the Sam Altman case

The "superalignment" team was also led by Ilya Sutskever, co-founder of OpenAI, who announced his own departure on Tuesday. He said he was "convinced that OpenAI will build 'general' AI that is both safe and beneficial." Formerly the company's chief scientist, he was a member of the board that voted to fire Sam Altman in November 2023, before reversing course.


"Act with the gravity"

With ChatGPT, OpenAI launched the generative AI revolution (producing content from a simple query in everyday language), which has thrilled Silicon Valley but also alarmed many observers and regulators, from California to Washington and Brussels. All the more so since Sam Altman talks of creating "general" artificial intelligence, that is, AI with cognitive abilities equivalent to those of humans.

On May 13, OpenAI unveiled a new version of ChatGPT that can now hold fluid spoken conversations with its users, another step toward more personal and more effective AI assistants.

Read also | OpenAI: "Sam Altman's Pharaonic Ambition"

On Friday, Jan Leike called on all OpenAI employees to "act with the gravity" that what they are building demands. "I believe we should spend much more time preparing for the next generations of models, on security, monitoring, safety, cybersecurity, alignment with our values, data privacy, societal impact and related topics," he wrote.

Le Monde with Agence France-Presse

