- Tesla and SpaceX CEO Elon Musk has repeatedly said that he thinks artificial intelligence poses a threat to humanity.
- Of the companies working on AI technology, Musk is most concerned by the Google-owned DeepMind project, he said in a new interview with the New York Times.
- "The nature of the AI that they're building is one that crushes all humans at all games," he said. "It's basically the plotline in 'WarGames.'"
- In the 1983 film "WarGames," starring Matthew Broderick, a supercomputer trained to test wartime scenarios is accidentally triggered to start a nuclear war.
Billionaire Elon Musk has been sounding the alarm about the potentially dangerous, species-ending future of artificial intelligence for years now.
In 2016, he warned that human beings could become the equivalent of "house cats" to new AI overlords. He has since repeatedly called for regulation and caution when it comes to new AI technology.
But, of all the various AI projects currently in the works, none has Musk more worried than Google's DeepMind.
"Just the nature of the AI that they're building is one that crushes all humans at all games," Musk told the New York Times in a new interview. "I mean, it's basically the plotline in 'WarGames.'"
In "WarGames," a teenage hacker played by Matthew Broderick connects to an AI-controlled government supercomputer trained to run war simulations. When he attempts to play a game titled "Global Thermonuclear War," the AI convinces government officials that a nuclear attack from the Soviet Union is imminent.
In the end (spoiler for those who haven't seen the 37-year-old movie), the computer runs enough simulations of global thermonuclear war to conclude that no winner is possible, and that the only way to win is not to play. The 1983 film is a direct reflection of its time and place: fear in the US of a still-looming nuclear war with the Soviet Union, and fear of increasingly advanced technology.
But Musk wasn't just talking about old films when he compared DeepMind to "WarGames" – he also said that AI could surpass human intelligence in the next five years, even if we don't see the impact of it immediately. "That doesn't mean that everything goes to hell in five years," he said. "It just means that things get unstable or weird."
Musk was an early investor in DeepMind, which Google acquired in 2014 for a reported $500 million-plus. Rather than seeking a return on investment, Musk said in a 2017 interview, he invested to keep an eye on burgeoning AI developments.
"It gave me more visibility into the rate at which things were improving, and I think they're really improving at an accelerating rate, far faster than people realize," he said in the 2017 interview. "Mostly because in everyday life you don't see robots walking around. Maybe your Roomba or something. But Roombas aren't going to take over the world."
But Musk thinks artificial intelligence deserves a different reputation.
"I think generally people underestimate the capability of AI — they sort of think it's a smart human," Musk said in an August 2019 talk with Alibaba cofounder Jack Ma at the World AI Conference in Shanghai, China. "But it's going to be much more than that. It will be much smarter than the smartest human."
It is "hubris," he said in the Times interview this week, that keeps "very smart people" from realizing the potential dangers of AI.
"My assessment about why AI is overlooked by very smart people is that very smart people do not think a computer can ever be as smart as they are. And this is hubris and obviously false."