📣 Several OpenAI Employees Stood Up to the Company
posted 4 Jun 2024
A group of current and former OpenAI employees has come forward to raise concerns about the company's alleged prioritization of speed and profit over safety in its pursuit of Artificial General Intelligence (AGI). These whistleblowers paint a picture of a company culture characterized by secrecy and a relentless drive to "be first," often at the expense of thorough risk assessments.
"OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there," stated Daniel Kokotajlo, a former OpenAI research scientist specializing in governance.
Employees further claim OpenAI discourages open discussions on safety concerns, both publicly and internally. In response, they have published an open letter urging leading AI companies to prioritize transparency and whistleblower protections.
"We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction," the letter reads.
The letter's signatories include current and former OpenAI employees, among them research engineer William Saunders. Notably, some current Google DeepMind employees have joined the initiative as well. OpenAI responded in predictable fashion, stating that it maintains open communications with both governments and the public.
"When I signed up for OpenAI," remarked Saunders, "I did not sign up for this attitude of 'Let's put things out into the world and see what happens and fix them afterward.'"
This dissent comes on the heels of the departures of two senior OpenAI researchers last month: Ilya Sutskever and Jan Leike. While neither has officially joined this initiative, both have previously expressed concerns about the potential dangers of powerful AI and OpenAI's weakening safety focus.