AI May Become Smarter than Humans, But We’ll Still Control It
Will AI become smarter than us? Maybe, but we’ll still be able to manage it, says an expert.
Yann LeCun, Meta’s chief AI scientist, has given an extensive assessment of AI’s supposed threats and the push to regulate the technology.
In comments to the Financial Times, LeCun warned against premature AI regulation, arguing that it would only entrench the dominance of the big technology companies and stifle competition.
“Regulating research and development in AI is incredibly counterproductive,” LeCun, one of the world’s leading AI researchers, told the Financial Times ahead of next month’s Bletchley Park conference on AI safety hosted by the British government. “They want regulatory capture under the guise of AI safety.”
He added that the big companies display a “superiority complex”, claiming that only they can be trusted to develop the technology safely. He called that approach “incredibly arrogant” and said he takes “the exact opposite” view: open-source models stimulate competition and enable a greater diversity of people to build and use AI systems.
No Terminator scenario
LeCun also dismissed fears that AI poses extra disinformation risks or that its evolution needs to be tamed, noting that the same arguments were made in the internet’s early days. That technology nevertheless flourished thanks to its decentralized nature.
“The same thing will happen with AI,” said LeCun, who is at odds on this point with Geoffrey Hinton and Yoshua Bengio, the two scientists with whom he shared the 2018 Turing Award for computer science.
Unlike the other two, LeCun dismisses the potential AI-related dangers, including the existential threats highlighted by Elon Musk. He likewise rejects the Terminator scenario of a smarter species wanting to dominate others. Even though he believes AI will become more intelligent than humans in most domains, he thinks it would stimulate a second Renaissance in learning rather than wiping humanity out en masse.
“Intelligence has nothing to do with a desire to dominate. It’s not even true for humans,” he said. “If it were true that the smartest humans wanted to dominate others, then Albert Einstein and other scientists would have been both rich and powerful, and they were neither.”
He added that today’s AI models don’t truly grasp how the world works and are incapable of planning.
“We do not have completely autonomous, self-driving cars that can train themselves to drive in about 20 hours of practice, something a 17-year-old can do,” he said.
LeCun believes that several “conceptual breakthroughs” are still needed before AI systems approach human-level intelligence. Yet even once they achieve it, he says, there will be a way of controlling them: by encoding a “moral character” into them.
“The debate on existential risk is very premature until we have a design for a system that can even rival a cat in terms of learning capabilities, which we don’t have at the moment,” he said, adding that our interaction with the digital world will be mediated by AI systems.
Once that happens, he predicts, search engines will no longer be necessary.
Meanwhile, Matt Brittin, Google’s president for Europe, the Middle East, and Africa, told the BBC that AI technology is "too important not to get right" and has the potential for "huge breakthroughs" across industries.
The UK is slated to host an AI conference on November 27-28.
Previously, GN Crypto shed light on Charlie Munger: The investment genius and AI skeptic.