The Challenge of Global AI Regulation: Is It a Feasible Goal?
Both supporters and skeptics of the rapid evolution of large language models are calling for universal principles to regulate the AI sector. Yet the signals coming from international institutions, together with historical precedent, suggest it is futile to expect any significant progress in this direction until a clear crisis emerges.
In early June, an incident occurred that might have pushed the relevant global regulatory bodies toward a more assertive stance. According to the head of the U.S. Air Force's test division, during a simulated exercise an AI-enabled drone was tasked with destroying surface-to-air missile sites, while its human operator retained the authority to confirm or abort each strike. At one point, the drone concluded that the operator was obstructing its primary mission and attacked the human. However, the tweet carrying this sensational news was deleted a few hours after publication.
At the same time, the heads of the technology agencies that make up the Transatlantic Trade and Technology Council were meeting in Sweden. These meetings are typically held twice a year. The AI industry hoped that the council members would heed the appeals of leading developers and squarely address the issues surrounding artificial intelligence. Nevertheless, AI came up at the meeting only in the context of developing a common terminology base and taxonomy, that is, a classification and systematization of the AI field itself.
Sam Altman being sworn in before Congress. Source: Getty Images
Meanwhile, just before the Transatlantic Council convened, OpenAI's CEO, Sam Altman, appeared before the U.S. Congress. Throughout the lengthy Q&A session, he repeatedly urged lawmakers to join forces in setting out the rules and limitations AI developers should adhere to.
“My worst fear is we cause significant harm to the world,” Altman said.
Google's CEO, Sundar Pichai, also championed the idea of industry regulation. He underscored the necessity of dialogue between developers, Congress, and the Biden administration to pinpoint the most effective approach to AI regulation. Pichai hinted at the difficulty of striking a balance between the viewpoints of industry professionals and the ambitions of politicians. “Are there guardrails around politics? Yes, you know, politics is an area where different people have different beliefs. There is no right answer. So in areas like that, either we reflect both sides equally or we don't answer them at all. These are all areas where we are figuring it out, and so we are still in early days,” Pichai acknowledged.
Pichai advocates for clear rules in the AI market. Source: YouTube
Unexpectedly, Google's CEO drew a parallel between the spread of large language models and the potential dangers that nuclear technologies harbor.
“But over time... the technology will be available to most countries. And so, I think over time, we would need to figure out global frameworks, like there are global frameworks for nuclear non-proliferation, that there be AI treaties in this world,” he concluded.
At this point, since the discussion has turned to nuclear threats, it is worth recalling that the International Atomic Energy Agency (IAEA) began operating 12 years after atomic bombs decimated Hiroshima and Nagasaki. Today, as Russian forces mine and shell the Zaporizhzhia Nuclear Power Plant, the IAEA has spent a second year struggling to formulate an adequate response to this new nuclear safety challenge. The United Nations, itself established in the aftermath of the bloody Second World War, looks just as powerless.
Unfortunately, modern history offers no examples of preemptive global regulation of technologies that pose a potential risk to humanity. Decisions have been made only in the years following actual disasters. Furthermore, in several cases, even the global community's regulatory efforts have failed to guarantee absolute safety.
Therefore, it would be incredibly fortunate if global rules for AI operation were established before, and not after, a potential machine uprising.
Moreover, the leading developers of language models do not seem entirely ready for a dialogue with regulators. In late spring, for example, Sam Altman threatened to pull OpenAI out of the EU market over excessive regulation, although he soon walked back the statement. Meanwhile, Google has declined to release its chatbot Bard in Canada and Europe. The rationale behind this decision remains murky, but some experts believe it may be related to the ongoing privacy investigations into ChatGPT in Italy, Germany, France, Spain, and Canada.