OpenAI Forms Team to Manage Risks of Superintelligent AI

OpenAI, the company behind the AI chatbot ChatGPT, has announced the formation of a new team dedicated to managing the risks posed by superintelligent AI systems. In a blog post on July 5, the organization said it intends to develop ways to steer and control AI systems that surpass human intelligence.
While OpenAI believes that superintelligence has the potential to help solve many important problems, it also acknowledges the associated risks. The organization warns that the immense power of superintelligence could be dangerous, potentially leading to the disempowerment or even the extinction of humanity.

To address these concerns, OpenAI plans to dedicate 20% of its existing compute power to the initiative. The organization aims to recruit and develop a team of researchers and engineers working on automated alignment, with the goal of building an automated alignment researcher that operates at roughly human level.

Ilya Sutskever, OpenAI’s chief scientist, and Jan Leike, the research lab’s head of alignment, will co-lead the effort. OpenAI has also extended an open invitation to machine learning researchers and engineers to join the team and contribute to this mission.
