What is AGI? A Leap Beyond Traditional AI
Defining intelligence is both a simple and a complex matter, one that may require an exploration of deeper philosophical, psychological, and scientific concepts.
But for simplicity's sake, we can focus on the definition presented in the Cambridge Dictionary, which characterizes intelligence as 'the ability to learn, understand, and make judgments or have opinions that are based on reason'. So where does computer intelligence come into play?
What is AGI?
Artificial general intelligence (AGI) is the concept of advanced software that would mimic the cognitive abilities of the human brain, such as logical reasoning, critical thinking, and the ability to learn. This software would supposedly be free from the constraints of predefined code, allowing it to reason and make autonomous decisions, unlike current artificial intelligence models, which operate as input-output tools. Today's AI, also referred to as artificial narrow intelligence, is great at completing specific tasks, and although these systems are in many cases faster and better at those tasks than humans, they remain limited: when it comes to reasoning and thinking autonomously, AI applications fall short, because they operate within the limits of the knowledge they are programmed with. Hypothetically, AGI would be able to comprehend and learn just like a human being. As David Deutsch, the British physicist and visiting professor of physics at the Centre for Quantum Computation at Oxford University, says, "AI has nothing to do with AGI. It's a completely different technology and it is in many ways the opposite of AGI".
As advancements in computer science and neuroscience accelerate, with leading research labs like Google DeepMind, OpenAI, Amazon, Meta, and others racing to achieve AGI, the prospect of creating a human-like digital intelligence becomes almost tangible.
A Game of Chess
To understand the difference between AI and AGI, one can look at how the two intelligences would approach a game of chess. Back in 1997, IBM shook the world when its computer system Deep Blue defeated Garry Kasparov, the reigning world chess champion.
"There are over 9 million different possible positions after three chess moves each. There are over 288 billion different possible positions after four moves. The number of 40-move games is greater than the number of electrons in the observable universe. You don't need to know those outcomes. You just need to be able to see ahead of your opponent," Elliot Alderson says in a scene from the TV series Mr. Robot.
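That 'seeing ahead' is exactly what chess engines compute: a search of the game tree. Chess is far too large for a self-contained example, so the sketch below uses the much smaller game of Nim (take one to three stones per turn; whoever takes the last stone wins) as a hypothetical stand-in; real engines like Deep Blue layer heuristic evaluation and pruning on top of the same core idea.

```python
from functools import lru_cache

# A toy version of "seeing ahead": exhaustive game-tree search. The game is
# Nim (take 1-3 stones per turn; taking the last stone wins), chosen only
# because chess is far too large for a self-contained example.

@lru_cache(maxsize=None)
def player_to_move_wins(stones: int) -> bool:
    """True if the player about to move can force a win from this position."""
    # Try every legal move; we win if at least one move leaves the opponent
    # in a position from which *they* cannot force a win.
    return any(not player_to_move_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones: int):
    """Return a winning move if one exists, else None (every move loses)."""
    for take in (1, 2, 3):
        if take <= stones and not player_to_move_wins(stones - take):
            return take
    return None

print(player_to_move_wins(21), best_move(21))  # True 1: taking one stone forces a win
```

From 21 stones the search proves that the first player can always force a win, and names the move that does it. Deep Blue performed the same kind of look-ahead over an astronomically larger tree, cut short by heuristics.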
But if that's not complex enough, almost 20 years after Deep Blue's victory, the artificial intelligence research laboratory DeepMind developed AlphaGo, an AI program that defeated the world champion of the ancient East Asian game Go in four out of five games. Compared to the chess scenario above, after three Go moves there are 200 quadrillion possible positions. Yet despite these incredible advancements, which show that computers can simulate human thinking and even surpass it, at the end of the day these systems are merely calculating their moves. An AGI, too, would undeniably be capable of calculating its moves to win a game of chess or Go; unlike an AI, however, it could in theory choose not to. In a game of chess, an AGI could also choose to play in a way that makes the game more interesting, favoring continuation over the fastest road to victory. In his essay "Beyond Reward and Punishment", David Deutsch describes the possibility of an AGI not just playing but actually enjoying a game of chess, reflecting a level of engagement and understanding akin to human experience.
"An AGI is capable of enjoying chess, and of improving at it because it enjoys playing. Or of trying to win by causing an amusing configuration of pieces, as grand masters occasionally do. Or of adapting notions from its other interests to chess. In other words, it learns and plays chess by thinking some of the very thoughts that are forbidden to chess-playing AIs. An AGI is also capable of refusing to display any such capability. And then, if threatened with punishment, of complying, or rebelling," David Deutsch writes.
Racing to AGI: Tech Titans and Hurdles
The rollout of ChatGPT by OpenAI has sparked a chain reaction of rapid shifts across industries and society as a whole. AI software is advancing fast, and according to Statista, the AI tech market is expected to be valued at over $1.8 trillion by 2030. Industry superstars like Nvidia CEO Jensen Huang repeatedly say that those who don't use AI will lose their jobs to those who do. Nvidia's revenue for the first quarter of 2024 reached $26 billion, a 262% increase over the same period the previous year. The company, which produces high-performance GPUs for AI applications, recently passed the $3 trillion mark, and on June 18, 2024 became the most valuable company in the world, outperforming longtime leaders Microsoft and Apple. In an interview with Andrew Ross Sorkin of the New York Times, Huang predicted that within five years AI will be able to pass any human test, though whether that would count as AGI remains a question. These near-term predictions of reaching AGI are not Huang's alone. OpenAI CEO Sam Altman projects that the technology will arrive by the end of the decade, if not sooner. In his personal definition, Altman describes AGI as the equivalent of a median human being who could easily be your co-worker; for Altman, the key is for an AGI to have the meta-skill of learning and deciding to get better at any given thing.
As one of the leaders in AGI development, OpenAI published a mission statement in February 2023, claiming that it's working towards creating an AGI that 'benefits all of humanity'.
"If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility," according to the OpenAI website.
The company also acknowledges that this new technology brings certain risks and disruptions for the world. It believes, however, that these could be avoided if those developing AGI 'figure out how to get it right'.
This leads us to the murky waters of 'synthetic' intelligence and to those who stand on the more cautious side of things. Recently, an open letter signed by current and former employees of OpenAI, Google DeepMind, and Anthropic raised concerns about the risks and dangers of autonomous AI systems, calling on leading AI companies to let employees voice criticism without retaliation and to be transparent about the development of these new technologies. Tesla and SpaceX boss Elon Musk has also been vocal about the dangers of reckless AGI development, going as far as suing OpenAI, though the lawsuit was later unexpectedly dropped. Despite Musk's public beef with OpenAI and his opposition to the company's research approach and way of doing business, the tech entrepreneur is not against the technology itself. In fact, he has stated that Tesla is an "AI/robotics company," and he also launched xAI, an artificial intelligence company that aims to "accelerate human scientific discovery," with a mission to "advance our collective understanding of the universe."
British-Canadian cognitive psychologist and computer scientist Geoffrey Hinton, whose work on deep learning and neural networks advanced the development of AI, quit his role at Google over concerns about the dangers AI may pose to humanity. Hinton believes that mega-corporations' pursuit of machine intelligence will disregard the regulations and safety measures that should be prioritized. Mustafa Suleyman, the co-founder and former head of applied AI at DeepMind, now the CEO of Microsoft AI, has also raised concerns about the risks of rapid AI development. Suleyman advocates a containment strategy that would require cooperation between key industry players, governments, and society as a whole.
Whether that's achievable is a tough question to answer. In a capitalist-driven market, corporations compete relentlessly: each month, companies like Google, Microsoft, OpenAI, Nvidia, and Meta roll out new AI applications, features, and generative models, racing to outdo one another. This speed produces AI products riddled with errors, copyright infringement, hallucinations, deepfakes, misinformation, a flood of low-quality content, false search engine results, sophisticated bots, and toxic rather than productive conversations. Despite all the positive effects AI already has for humanity, like early cancer detection, should we not first address and solve these issues before moving on to the next level of AGI?
How Fiction Envisions New Tech
In terms of predicting future technologies, science fiction authors and screenwriters have a pretty decent track record; we've seen fictional tech come to life years later in the real world. Steven Spielberg's Minority Report envisioned targeted ads, and according to production designer Alex McDowell, over 100 patents have been issued for ideas first introduced in the movie. Ridley Scott's Blade Runner depicted retina scans and video calls, and the AI voice assistant of Spike Jonze's film Her is almost at our doorstep, especially after OpenAI's latest presentation of GPT-4o's voice mode. Yet when it comes to illustrating the thin line between AI and AGI, nothing beats the video game Detroit: Become Human by French video game developer Quantic Dream. The adventure game follows the stories of three android protagonists: Connor, Kara, and Markus. Unlike robots, which are built to perform specific tasks, androids in fictional worlds mimic human appearance and are equipped with advanced AI software. Even though the real world is not quite there yet, the industry has seen steady growth: this year, for example, OpenAI teamed up with robotics company Figure to incorporate ChatGPT into Figure 01, a first-of-its-kind general-purpose humanoid. This real-world progress mirrors the level of robotics depicted in Detroit: Become Human, where androids have become a mass-market product used mostly for mundane or specialized tasks, as illustrated by the main characters:
- Connor: the most advanced android that uses AI capabilities to assist police investigations involving 'deviant' androids.
- Kara: an android housekeeper that does chores for a man and his daughter Alice.
- Markus: an android caretaker to an elderly artist.
Over the course of the game, however, each android breaks out of its assigned role:
- Connor begins to question his directives and develops a sense of morality, leading him to make decisions based on his own judgment rather than pre-set algorithms.
- Kara’s path from compliance to autonomy is highlighted by her choices to protect and care for a girl named Alice based on emotional responses, showcasing an empathetic side of AI.
- Markus transitions from a caretaker to a revolutionary figure, advocating for android rights and freedom, embodying the human-like struggle for equality and justice.

In the context of the game, these 'deviations' are initially perceived as code errors, but the deviant androids have in fact developed self-awareness and autonomy, rebelling against their programming to exercise free will. Their paths to sentience reflect the essence of AGI: artificial beings achieving the ability to learn, reason, and make autonomous decisions beyond their initial programming. This journey from mere tools to sentient entities displays the challenges and possibilities of developing true AGI in the real world, raising questions about autonomy, ethics, morality, and the future of human-robot relationships. And when punished or threatened, these androids become defiant, demonstrating a critical aspect of AGI: its potential to resist control and assert an autonomy that may endanger humans.
Going back to David Deutsch's essay, he argues that creating an AGI must be entirely different from any prior programming task. It shouldn't be approached as building a TOM (Totally Obedient Moron), as computers were described in elementary introductions to the field. The TOM acronym refers to programs, AI included, that have no idea what they are doing or why, because their functionality is predetermined.
"To an AGI, the whole space of ideas must be open," David Deutsch argues.
Deutsch compares the creation of AGI to raising a child: it has to be able to express itself without its outputs being graded as successful or unsuccessful. He argues that developing AGI within the external constraints of 'rewards and punishments would be poison to such a program, as it is to creative thought in humans'. Just like a child, in the end, it has to be granted the capacity to choose its own path. This evolution from programmed behavior to independent thought parallels Deutsch's view that an AGI must be allowed to explore and develop freely, which the androids of Detroit: Become Human illustrate as they achieve self-awareness and autonomy.
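For contrast, the 'rewards and punishments' Deutsch warns about are not a metaphor; they are the standard mechanics of reinforcement learning. The minimal sketch below (tabular Q-learning on a made-up five-state environment; every name in it is illustrative, not from any particular library) shows the structure he objects to: the agent adapts, but only toward a notion of success fixed from outside by whoever wrote the reward signal.

```python
import random

# Tabular Q-learning on a hypothetical 5-state corridor: the agent is
# rewarded only for reaching the rightmost state. Purely illustrative.

N_STATES, N_ACTIONS = 5, 2             # states 0..4; actions: 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # value of each (state, action)

def step(state, action):
    """Toy environment: move left or right; 'success' is whatever earns reward."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0  # externally imposed goal
    return next_state, reward

state = 0
for _ in range(1000):
    # Explore occasionally; otherwise exploit the currently highest-valued action.
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: q[state][a])
    next_state, reward = step(state, action)
    # Q-learning update: nudge the estimate toward reward + discounted future value.
    q[state][action] += ALPHA * (reward + GAMMA * max(q[next_state]) - q[state][action])
    state = 0 if next_state == N_STATES - 1 else next_state  # restart at the goal

print(q)  # the agent has "learned", but only the designer's notion of success
```

This is the input-output paradigm the article describes: the program never chooses its own goals, it only optimizes the one it was given.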
But What About Superintelligence?
Former OpenAI researcher Leopold Aschenbrenner, who worked on superalignment [Ed. Note: ensuring that AI systems smarter than humans operate in a way that benefits humanity], recently published the essay series "Situational Awareness: The Decade Ahead". A few months prior to releasing this work, Aschenbrenner was fired from OpenAI for allegedly leaking information.
"The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word," Leopold Aschenbrenner writes in the opening of his work.
IBM defines Artificial Superintelligence (ASI) as a theoretical concept of software that exceeds human intelligence. ASI would possess thinking and learning capabilities far more sophisticated than those of humans.
Echoing Sam Altman's predictions, Aschenbrenner claims that AGI will most likely become a reality by 2027, meaning these models 'will be able to do the work of an AI researcher/engineer'. In the essay, he notes that once we reach AGI, superhuman AI systems won't be a distant prospect: if AGI begins automating AI research on its own, we'll be facing something we might not entirely fathom.
"Before we know it, we would have superintelligence on our hands—AI systems vastly smarter than humans, capable of novel, creative, complicated behavior we couldn't even begin to understand—perhaps even a small civilization of billions of them. Their power would be vast, too. Applying superintelligence to R&D in other fields, explosive progress would broaden from just ML research; soon they'd solve robotics, make dramatic leaps across other fields of science and technology within years, and an industrial explosion would follow," Leopold Aschenbrenner states.
And it's not all exciting. Just as the military's role in developing semiconductors boosted early computing, as described in Chris Miller's book Chip War: The Fight for the World's Most Critical Technology, superintelligence could revolutionize military power, with ASI itself becoming the ultimate military advantage. Such a breakthrough would pose significant risks not only to those deemed adversaries but to global stability as a whole.
In the event of a rapid 'intelligence explosion', the AI researcher believes some of the following shifts will occur:
- Limitless AI Capabilities: Automated AI research will overcome the limitations of early AGI models, enabling automation across all cognitive work.
- Solving Robotics: Automated AI research will crack the remaining machine learning challenges, leading to fully autonomous, robot-run factories.
- Scientific and Technological Progress: ASI will speed up R&D and achieve centuries of human progress in just a few years.
- Industrial and Economic Boom: Rapid technological advances and automation will lead to unprecedented economic growth, with potential GDP growth rates of 30% or more annually (a quick compounding check below puts this figure in perspective).
Post-intelligence explosion, AI would significantly outsmart humans. Source: https://situational-awareness.ai/
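To see what that last growth figure actually implies, here is the compounding behind it; this is purely illustrative arithmetic, with the rates and the normalization chosen for the example, not taken from Aschenbrenner's essay.

```python
# Compounding at 30% a year, versus a typical ~3% for today's global economy.
# Back-of-the-envelope arithmetic, not a projection from the essay.

for rate in (0.30, 0.03):
    gdp = 1.0  # normalize today's GDP to 1
    for _ in range(10):
        gdp *= 1 + rate
    print(f"{rate:.0%} growth for a decade -> {gdp:.1f}x today's GDP")

# 30% growth for a decade -> 13.8x today's GDP
# 3% growth for a decade  -> 1.3x today's GDP
```

In other words, a decade of 30% growth would multiply the economy roughly fourteenfold, which is why Aschenbrenner calls it an industrial explosion.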
Leopold Aschenbrenner also discusses the economic factors without which none of this will come to fruition. According to Bloomberg, the AI industry will be a $1.3 trillion market by 2032. Developing AI, AGI, and ASI all require enormous power and investment, with power being the biggest constraint. He also emphasizes that AI labs are in dire need of national-defense-level 'supersecurity', especially given known cases of espionage, cyberattacks, and hacks, like the recent theft of Microsoft 'senior leadership' emails by Russian hackers.
The development of AI technology will become a crucial factor in determining a government’s global position, affecting its economic strength, technological leadership, and geopolitical influence. The exploration of both the technological challenges and the extraordinary potential of this hypothetical new AI ‘civilization’ raises a crucial dilemma: should we pursue superalignment, or allow the technology to evolve autonomously? Or both? And who will make the final call?
The Path Ahead
As we move closer to achieving AGI, it becomes crucial to adopt a responsible approach to its development, moving beyond the confines of current AI programming towards fostering genuine autonomous learning and decision-making. The journey towards AGI challenges us to rethink our understanding of intelligence and volition, suggesting that, much like humans, artificial beings should be nurtured to explore freely and develop independently. This path, however, raises significant safety concerns: unrestrained AGI development could lead to unpredictable and potentially catastrophic outcomes if these systems act beyond our control or understanding. Building AGI therefore demands a framework that balances the encouragement of free exploration and creativity with stringent safety protocols and ethical considerations. Aligning with David Deutsch's vision of AGI as an open-ended learner, we must prioritize environments where these systems can grow, adapt, and contribute profoundly to human knowledge and progress, while implementing strong safeguards to keep them from going haywire.
And in the event of superintelligence? The implications could redefine not only technology but the very essence of what it means to be human, leaving us to wonder: are we ready for a future where our creations become better than us? Whatever the answer may be, we had better be prepared.