I worry about a lot of things — my health, my kids, and the size of my retirement account. I never worry about an impending robot apocalypse ... but maybe I should. A handful of very smart people in the science and technology worlds are worried about that very thing.
First it was Microsoft cofounder Bill Gates, who got the Internet all fired up when he answered questions in a Reddit "Ask Me Anything" thread.
"I am in the camp that is concerned about super intelligence," Gates wrote in response to a question about the existential threat posed by artificial intelligence (AI). "First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern."
Renowned physicist Stephen Hawking, who spends a lot of time thinking about the shape of the universe, is also worried. "It's tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history," he wrote in The Independent, a British newspaper.
Hawking, no technophobe, is bullish on the potential benefits of AI and machine learning, and he says AI could become "the biggest event in human history."
"One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand," Hawking wrote. "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."
Hawking was one of roughly 100 scientists who signed an open letter hailing AI's potential, but also warning of related dangers such as autonomous weapons systems and invasions of privacy. Elon Musk, founder of Tesla and SpaceX, also signed the letter, prompting a comment from Gates during his chat session: "I agree with Elon Musk and some others on this and don't understand why some people are not concerned."
I'm making light of the issue, of course, but there is a serious point: We need to think long and hard about how we use new technologies and, perhaps more importantly, weigh the negative implications alongside the potential benefits.