The rapid advancement of AI is both fascinating and alarming. On the one hand, artificial superintelligence has the potential to help people overcome poverty and suffering. On the other hand, there are concerns that AI may cause people to not only lose their critical thinking skills but also their sense of humanity.
In 2015, Elon Musk, former Y Combinator President Sam Altman, and Stripe Co-Founder Greg Brockman founded OpenAI, a non-profit research laboratory committed to investing $1 billion in developing safe artificial intelligence systems for humanity. In 2018, Musk resigned from the OpenAI board of directors. A year later, Microsoft, co-founded by Bill Gates, became a privileged partner of the laboratory, investing an additional $1 billion in OpenAI and acquiring an exclusive license for the GPT-3 text generation algorithm.
Today, the Future of Life Institute, primarily funded by the Musk Foundation, is calling for a pause in the development of AI systems more powerful than GPT-4, and its open letter has gathered more than a thousand signatures. Meanwhile, Sam Altman and Bill Gates are sharing their views on the future of GPT and artificial intelligence in public appearances.
Sam Altman: Artificial intelligence technology will reshape society as we know it
Most likely, this transformation will happen within the next few decades. Looking back, we will see that GPT-4, which caused such a stir in March 2023, was primitive, slow, and unreliable. In a recent podcast, Sam Altman compared GPT-4 to the very first personal computers, which took several decades to mature. But they set a direction of development and eventually became an integral part of our lives.
OpenAI has already made significant progress in creating its large pre-trained models. Now the company faces the challenge of making AI usable, wise, and ethical.
The company operates with an open-source mentality and believes it is crucial for people to have access to AI technology at an early stage. This is necessary to quickly identify both its negative and positive aspects and directly influence the development process. As Sam emphasizes, wisdom and talent from the external world can reveal things that developers would never learn on their own.
"We want to make our mistakes while the stakes are low. We want to get it better and better each rep," he says.
In a way, the GPT training process is similar to human communication. We interact, trying to figure out which words to use to better understand each other. Sometimes it feels like the GPT training process is a way to learn more about ourselves.
Altman admits that he doesn't like it when the machine starts correcting him; he wants the system to treat the user as an adult. The dialogue format allows ChatGPT to respond to follow-up requests, admit its mistakes, challenge incorrect assumptions, and reject inappropriate requests.
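To make the dialogue format concrete, here is a minimal sketch of a multi-turn exchange using the OpenAI Python SDK. The model name, the prompts, and the SDK version (v1.x) are assumptions for illustration and are not taken from Altman's remarks.

```python
# A minimal multi-turn exchange with the OpenAI Python SDK (v1.x).
# Model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."},
]
first = client.chat.completions.create(model="gpt-4", messages=messages)
print(first.choices[0].message.content)

# A follow-up request in the same conversation: the model sees its own
# earlier answer and can refine, correct, or decline the new request.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Now rewrite that summary for a ten-year-old."})
second = client.chat.completions.create(model="gpt-4", messages=messages)
print(second.choices[0].message.content)
```

Because the full message history is passed on every call, each follow-up is answered in the context of what was already said, which is what lets the system acknowledge or correct its earlier output.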
"But also there's a feeling like it's struggling with ideas. Yeah, it's always tempting to anthropomorphize," joked Altman.
According to the developer, GPT-4 was completed last summer, and the company spent six months rigorously testing the technology. However, the model is still not fully aligned with human values and perspectives.
If someone were to ask GPT, "Do I look fat in this dress?", there are different approaches to answering even such a simple question. In an ideal world, people could have a thoughtful discussion about where the boundaries for GPT should lie. No one would get exactly what they want, but a solution everyone could live with might eventually be found, and national rules for AI could be built within that framework. This might sound too good to be true, but Sam hopes that over time people will at least start having discussions of this kind.
Altman is actually pretty skeptical about fears that AI will completely replace humans in the job market. He points to Kasparov's loss to the supercomputer Deep Blue. Back then, people thought nobody would want to play chess anymore: what was the point if AI would beat humans anyway? Yet chess is as popular as ever. And let's be honest, watching two AIs compete against each other, even if their game is flawless, just isn't as interesting as seeing what real people can do.
Altman firmly believes that GPT-4 is still a long way from Artificial General Intelligence (AGI). He argues that a system cannot truly be considered super-intelligent if it cannot discover or invent new fundamental science. To get there, developers will need to push the GPT paradigm in directions that have not even been conceptualized yet.
Despite these challenges, Sam is excited about the future of AI. He sees it becoming an extension of humanity and the most powerful tool ever created. OpenAI may never develop AGI, but its work will still make humans better than they are today.
Bill Gates: AI will revolutionize education and healthcare
In his blog, Bill Gates reminisces about the early days of personal computers, when the software market was so small that all the world's developers could fit on one stage. Today, the industry is global, and its attention is focused on AI. The days of typing commands at a C:\> prompt will soon seem as distant as the era before artificial intelligence.
Gates believes that AI will revolutionize the medical industry by freeing up time for healthcare workers through automated documentation. In developing countries, AI will make medical services more accessible to people who may not have had the opportunity to see a doctor before.
AI has the potential to revolutionize not just the diagnosis and treatment of patients, but also the advancement of medical science as a whole. With the vast amount of information available on the functioning of complex biological systems, AI-powered software can analyze this data, identify targets for pathogens, and even create new drugs – including those to combat cancer. And it's not just limited to that – the next generation of AI will be able to calculate dosages and predict potential side effects as well, making treatments even more precise and effective.
According to Bill Gates, despite high hopes, computers haven't had a significant impact on student performance indicators. However, the next 5–10 years could see AI revolutionizing the principles of teaching and learning. AI will have the ability to adapt the material to a student's interests and talents, making it more engaging and personalized. It will also objectively assess understanding, recognize when a student loses interest, and determine the best way to keep them motivated. Additionally, AI will provide comprehensive information and career planning recommendations.
Similar to Sam Altman, Bill Gates believes we are still a long way from creating AGI; it may take a decade or more to achieve this goal.
"Artificial intelligence still doesn't control the physical world," Gates says.
However, this "strong" AI may have the ability to set its own goals, which raises important questions. What will those goals be? And what if they conflict with humanity's goals? Should we try to halt the development of AGI? Gates acknowledges that these questions will become increasingly relevant over time.
Yan Meng, co-founder of Solv Protocol, offers one answer to these questions. He believes that humans can coexist with AGI, but only with the help of blockchain technology. The main challenge with AGI is that humans do not fully understand its inner workings, which makes it pointless to try to manipulate it in the hope of achieving safety.
Blockchain has a different value proposition than AGI, but that is precisely what will allow the two to form a complementary relationship. Over time, AGI will be responsible for increasing efficiency, while blockchain will maintain fairness. AGI will boost productivity, while blockchain will regulate production relations. AGI will develop cutting-edge technologies, while blockchain will tie them together through smart contracts. Even after AGI largely replaces the human brain, one of the few tasks left to humans will be writing and verifying smart contracts on the blockchain.
"In short, AGI is unconstrained, and the blockchain puts the reins on it," Yan Meng concludes.
Meanwhile, the market for AI-related jobs is rapidly growing. Companies are willing to pay over $335,000 per year to prompt engineers who help humans and AI better understand each other.
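As a rough illustration of what such work involves, the sketch below wraps a fixed system prompt around each user question so the model answers within agreed constraints. The wording of the prompt, the function name, and the model are hypothetical examples, not a description of any real job posting.

```python
# A hypothetical prompt-engineering template: a fixed system message plus a
# slot for the user's question. Prompt wording and model name are illustrative only.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a support assistant for an online store. "
    "Answer in three sentences or fewer, cite the relevant policy section, "
    "and say 'I don't know' rather than guessing."
)

def answer(question: str) -> str:
    """Send one customer question wrapped in the fixed system prompt."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # keep answers consistent across runs
    )
    return response.choices[0].message.content

print(answer("Can I return headphones after 40 days?"))
```

Much of the craft lies in iterating on that system message and testing how the model behaves at the edges, which is the "helping humans and AI understand each other" part of the job.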