OpenAI: Scandals and Secrets
There’s always some drama at OpenAI, the developer of the popular generative AI chatbot ChatGPT. Employee revolts, mysterious decisions, and concerns over safety have become an inseparable part of the company’s story.
OpenAI became widely known to the public in November 2022 with ChatGPT’s release. The app was an instant hit, gaining 1 million users in five days; according to UBS Research, ChatGPT is the fastest-growing app in history. As of 2024, it has over 100 million monthly active users.
Over time, OpenAI continues to improve the app and introduce new versions. The most recent one, GPT-4o, was launched in May, bringing new features including voice conversations and improved image and video capabilities.
OpenAI’s main mission is to build an artificial general intelligence (AGI) system that can outperform humans at cognitive tasks. As the company grows, so does the number of scandals and controversies around it. Below, we cover some of the loudest:
- Scarlett Johansson accusing OpenAI of using her voice without permission
- Sam Altman’s firing and rehiring within a week
- Founders and senior employees leaving
- Elon Musk’s conflict with OpenAI
Scarlett Johansson Speaks Out Against OpenAI for Using Her Voice Without Permission
It all started when OpenAI introduced its new model, GPT-4o, in May. One of the model’s features is the ability to recognize and process speech. In a demo by the company, you can hear the bot change its tone after a request to be sarcastic. The voice it responds with was named Sky by OpenAI.

Commenting on the demo, some users pointed out that Sky sounded similar to Hollywood actress Scarlett Johansson in the movie “Her,” where she voiced Samantha, a virtual AI assistant. Soon, the actress herself reacted and accused OpenAI of using her voice without consent. In a written statement, Johansson said that in September 2023, OpenAI CEO Sam Altman had asked her to be Sky’s voice, and she declined. Two days before GPT-4o launched, her agent received another message from Altman asking her to reconsider. According to the statement, before they could connect, the system was already out there.
Sharing how she felt when she heard the voice, Johansson said:

“I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference.”
Johansson hired legal counsel, who wrote letters to OpenAI inquiring about the process of creating Sky. She argued that the similarity between the voices was intentional, and that Altman’s single-word tweet, “her,” on the day of the bot’s release further points to the connection.
In response, OpenAI paused Sky. At the same time, the company published a press release saying it hadn’t intended to make a voice similar to Johansson’s; Sky’s voice was chosen through a casting process.
The release included a statement from Altman:

“The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers. We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”
In any case, given Altman’s tweet and Johansson’s account, the choice of voice hardly seems random. It remains uncertain whether the actress will pursue legal action.
Sam Altman Was Fired and Rehired Within a Week. Reasons Are Still Being Discussed
In November 2023, OpenAI’s board of directors decided to fire CEO Sam Altman. The decision-makers were Ilya Sutskever, Helen Toner, Tasha McCauley, and Adam D’Angelo. Their public statement said Altman was fired for “not being consistently candid in his communications with the board.” Whether that meant Altman hid technological developments or struck side partnerships was not disclosed. Following the news, OpenAI president Greg Brockman quit.

Altman’s firing created chaos inside the company. More than 700 staff members threatened to leave if Altman didn’t return; instead, they wanted the board to go. Soon after, Ilya Sutskever tweeted that he deeply regretted his participation in the board’s actions.
A big player in this situation was Microsoft, OpenAI’s largest investor, which carries real weight in the company and its future. Microsoft announced that Altman and Greg Brockman would lead its new AI research unit.
However, the curious course of events concluded with Altman’s and Brockman’s return and the removal of Helen Toner and Tasha McCauley from the board.
What’s the deal with Sam Altman?
Although it’s impressive that so many OpenAI employees stood up for their CEO, there’s another side to the story. Recently, on the TED AI Show, Helen Toner talked about what happened and why the board decided to fire Altman. She said people were afraid that without Altman OpenAI would be destroyed, and that they didn’t want to lose their promised equity in the multi-billion-dollar company.

Toner described Sam Altman’s leadership as psychological abuse. According to her, Altman could not be trusted to be open with the board, and there were cases when he withheld important information; for example, he didn’t inform the board about the release of ChatGPT in advance. Another issue she mentioned was Altman lying about safety practices. She said Altman tried to get her fired after she wrote a research paper critical of OpenAI’s approach. Finally, Toner pointed out that in 2019 Altman was fired from Y Combinator for deceptive behavior.
Elon Musk Against OpenAI. The Never-Ending Dispute
Though OpenAI became widely known only in recent years, its history is longer. The company was founded in 2015 as a non-profit by Elon Musk, Sam Altman, Greg Brockman, Ilya Sutskever, Andrej Karpathy, and others. Musk, who was also OpenAI’s biggest investor, left in 2018 over a “conflict of interest.” That conflict has taken new forms through the years. Musk regularly criticizes OpenAI’s strategy, mainly its 2019 transition from an open-source model toward a closed-source one. In various posts, he has said that OpenAI pursues maximum profit rather than the interests of humanity. The matter escalated to the point that, in March 2024, he sued OpenAI for changing its initial business model and demanded compensation.
In response, OpenAI published letters revealing that in 2018 Musk had suggested the company be acquired by Tesla. OpenAI says that in 2017, together with Musk, it realized the next step for the mission was switching to a for-profit model; Musk left when his offer to attach OpenAI to Tesla was refused. On leaving, he said there was a need for a competitor to Google/DeepMind in AI and that he would build it himself. In 2023, Musk founded xAI, the open-source company behind Grok, a competitor to ChatGPT.
Presenting its side of the conflict with Elon Musk, OpenAI wrote:

“We're sad that it's come to this with someone whom we’ve deeply admired—someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress towards OpenAI’s mission without him.”
A judge will decide whether OpenAI breached the contract. Meanwhile, Musk shows no intention of dropping the case.
Key People Keep Leaving OpenAI
Musk is not the only one who disagrees with OpenAI’s steps and decisions. The company has seen many of its founders, scientists, and researchers depart. Although most leave quietly, the reasons for quitting the hottest AI startup remain unclear.

In recent months, founding members Andrej Karpathy and Ilya Sutskever, both considered among the brightest minds in AI, announced they were leaving to work on personal projects.
While Karpathy and Sutskever may yet present intriguing projects, a promising startup founded by former OpenAI employees is already gaining traction: Anthropic, an AI company co-founded by seven OpenAI alumni, including CEO Dario Amodei and his sister Daniela Amodei, the company’s president. Dario previously served as OpenAI’s vice president of research, and Daniela as vice president of safety and policy. The siblings founded Anthropic in 2021 as an AI safety research lab and have built Claude, a safety-focused chatbot that is one of ChatGPT’s main rivals.
Recently, Anthropic welcomed Jan Leike, a former key safety researcher at OpenAI.
A few days earlier, Leike had explained in a Twitter thread why he was leaving OpenAI. He mentioned disagreements with the company over core priorities and the ways to control AI systems. Leike said he believes OpenAI needs to devote much more computing power to getting ready for the next generations of models, on topics such as security, monitoring, and societal impact. He concluded:
“Over the past years, safety culture and processes have taken a backseat to shiny products. We are long overdue in getting incredibly serious about the implications of AGI. We must prioritize preparing for them as best we can. Only then can we ensure AGI benefits all of humanity.”
OpenAI’s Models Are Close to Human-Level Intelligence
In early 2024, Sam Altman said that artificial general intelligence could be developed in the “close-ish future.” The next generation of AI systems is expected to outperform humans at cognitive tasks; it would be able to learn on its own, understand human emotions, and adapt its communication style, among other abilities. Addressing concerns that the technology may replace people in jobs, Altman said at the World Economic Forum that the change will be much less dramatic than people think.

Recently, OpenAI formed a new Safety and Security Committee to evaluate and develop the company’s AI systems. With this move, the company is expected to increase its focus on AI safety, which may eventually lead to more trust and fewer scandals.