AI's Existential Threat: OpenAI Founder Sounds the Alarm
The heads of major tech companies have released an open letter expressing profound concern about humanity's future in light of AI's growing influence. Even Sam Altman, whom some regard as the originator of that very threat, has signed it.
At the time of writing, nearly 400 people had endorsed the letter: recognized thought leaders in emerging technologies, Turing Award laureates, and founders of successful tech and financial ventures.
Fortunately, successful entrepreneurs and scientists are not politicians: they can state their views without euphemisms or ambiguous phrasing. The letter calls things by their names: nuclear threats and pandemics like the recent COVID-19 outbreak could come to seem like child's play if we fail to manage the advancement of neural network technologies.
What genuine threats do the authors of the open letter see?
- Unrestricted spread of fake news and hostile propaganda across the Internet. People have grown used to trusting the information they find online. Critical thinking, fake-news filters, and the habit of verifying information against multiple sources have nearly withered in an era when the influx of information far exceeds our ability to process it.
- Mass unemployment among the white-collar workforce. Lawyers, journalists, translators, and designers stand to be hit hardest, and educators also face a significant risk of being replaced by machine learning systems.
- According to estimates from World Economic Forum participants, AI will automate around 75 million jobs by 2025, affecting far more than knowledge-sector professionals.
- Such a trajectory could trigger immense social upheavals that, in practice, no one will be able to control. Unrest will be fueled by a flood of unverified and manipulative information (see the first point above).
The initiators of this open letter include:
- Sam Altman, CEO of OpenAI;
- Demis Hassabis, CEO of Google DeepMind;
- Dario Amodei, CEO of Anthropic;
- Geoffrey Hinton and Yoshua Bengio, neural network researchers often referred to as the "godfathers" of AI.
These pioneers of machine intelligence openly admit that their "genie" has escaped its bottle and that they currently see no viable way to rein it in. They fear that cutthroat competition in the field will lead entrepreneurs to ignore the arguments for caution. Yet while the letter calls for "strict control," it proposes no specific, practical measures.
In a manner of speaking, it is like God watching from above as his beloved creation, convinced of its own omnipotence, sets itself on a course toward self-destruction. Only in this case, there is no one to utter the plea, "Lord, stop this."