Will AI Destroy Our Civilization? A 50/50 Perspective
Peter Berezin, the chief global strategist at BCA, believes that there is a real possibility of artificial intelligence spelling the downfall of humanity. He estimates the likelihood of this dystopian outcome to be around 50%.
AI deployment without proper safety protocols is a growing concern
BCA Research is a reputable firm with an impressive 70-year track record. Over these years, BCA's analysts have made a series of unexpected forecasts that proved spot-on. One notable example is their call during the 2011 Eurozone crisis, when they convinced investors that no European country would choose to exit the Eurozone; clients who stayed invested in EU assets were able to capture substantial risk premiums, a decision that yielded significant returns. Another instance of their accuracy came in 2015, when BCA confidently identified Donald Trump as a serious contender in the presidential race despite widespread skepticism.
Peter Berezin and his gloomy forecast Source: CNBC
In a nutshell, BCA Research has a knack for making predictions, and when it comes to AI, the company is decidedly cautious. In a recent interview with CNBC, Peter Berezin argued that the risks deserve to be taken more seriously. He stressed that within the next 25 years or so, by mid-century, we should be prepared for serious repercussions from the use of AI.
The strategist emphasizes the importance of thinking exponentially rather than linearly. He points to the rapid spread of the COVID-19 pandemic, whose exponential growth was widely underestimated in its early stages.
The same principle applies to AI: people may not notice the rapid progress until machines start performing tasks traditionally considered exclusive to humans. Berezin highlights the tremendous potential for the unrestrained expansion of artificial intelligence capabilities, recalling earlier civilizational leaps such as the agricultural and industrial revolutions, both of which exhibited exponential growth.
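To see why exponential processes are so easy to underestimate, here is a minimal arithmetic sketch in Python; the starting value, doubling period, and time horizon are arbitrary assumptions chosen purely for illustration:

```python
# Illustrative comparison of linear vs. exponential growth.
# The starting value, doubling period, and horizon are assumptions
# made only for this example.

start = 100          # initial count (cases, or an abstract capability score)
doubling_days = 7    # assumed doubling period
horizon_days = 70    # how far ahead we project

# Linear thinking: extrapolate the first week's pace indefinitely.
linear = start + (start / doubling_days) * horizon_days

# Exponential reality: the quantity doubles every period.
exponential = start * 2 ** (horizon_days / doubling_days)

print(f"Linear projection after {horizon_days} days:      {linear:,.0f}")
print(f"Exponential projection after {horizon_days} days: {exponential:,.0f}")
# Linear projection after 70 days:      1,100
# Exponential projection after 70 days: 102,400
```

After ten doubling periods, the exponential path is roughly a hundred times larger than the linear extrapolation, which is exactly the kind of gap Berezin warns people fail to anticipate.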
The researcher is particularly concerned by how casually some stakeholders approach AI. In particular, Berezin doubts that tech giants like Microsoft, Google, and OpenAI are acting responsibly, noting that these companies put their shareholders' interests ahead of the well-being of humanity.
"AI wipes out humanity because the code writes code," Berezin argues.
The strategist explains that safety protocols recommended by AI experts are being disregarded. This includes not only restrictions on AI writing its own code but also limits on its access to the Internet.
Such negligence raises the risk that models like ChatGPT, as they continue to develop exponentially, will slip beyond human control. Humans are accustomed to being the dominant species on the planet, but that could change drastically.
BCA Research also mentions the possibility of unforeseen consequences when advanced AI systems are given global objectives without clear restrictions on the means to achieve them. This doesn't necessarily resemble the plot of the movie "Terminator." Berezin provides an example of an advanced AI system being asked to address global warming. Without specific instructions, the system might conclude that nuclear war successfully reduces the planet's temperature (with all the resulting consequences). The expert consistently emphasizes that complex systems are unpredictable.
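As a toy illustration of that point, not anything Berezin himself proposed, the sketch below shows how an optimizer given only "reduce the planet's temperature" as its objective, with no constraint on the means, picks the most destructive option; all action names and numbers are invented for the example:

```python
# Toy illustration of an objective specified without constraints on the means.
# The candidate "actions" and their scores are invented for this sketch.

actions = {
    "plant forests":          {"cooling_c": 0.2, "harm": 0.0},
    "deploy solar shades":    {"cooling_c": 0.8, "harm": 0.1},
    "trigger nuclear winter": {"cooling_c": 5.0, "harm": 1.0},
}

# Objective as literally stated: maximize cooling, nothing else.
best_unconstrained = max(actions, key=lambda a: actions[a]["cooling_c"])

# Objective with the unstated human constraint made explicit: rule out harmful means.
safe = {a: v for a, v in actions.items() if v["harm"] < 0.5}
best_constrained = max(safe, key=lambda a: safe[a]["cooling_c"])

print("Unconstrained optimizer picks:", best_unconstrained)  # trigger nuclear winter
print("Constrained optimizer picks:  ", best_constrained)    # deploy solar shades
```

The point of the sketch is simply that the constraint has to be written down; an objective that looks sensible to a human leaves the catastrophic option on the table if the restrictions on means are never specified.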
Will AI destroy humanity?
At GNCrypto, we asked ChatGPT for its opinion on the compelling arguments put forth by Peter Berezin.
First and foremost, ChatGPT didn't dismiss the obvious. It acknowledged Berezin's points about the need to think exponentially rather than linearly when considering the growth and potential of AI, and it deemed the concerns about disregarded safety protocols and the possibility of AI surpassing human control important considerations as well.
However, ChatGPT also noted that perspectives on the matter vary. Some people see the risks of exponential AI growth as paramount, while others believe the benefits outweigh them. This ongoing and intricate discussion calls for weighing all viewpoints and scenarios and for developing effective governance structures.
In short, as the saying goes, the defendant neither admitted nor denied guilt. The issue is complex and calls for continuous evaluation, as there are valid arguments on both sides.
ChatGPT acknowledges the validity of Peter Berezin's arguments Source: Dialogue with ChatGPT