It may not be time to worry about a robot apocalypse, but artificial intelligence (AI) poses some very real threats, according to more than 100 leading scientists and tech luminaries, including Bill Gates and Stephen Hawking.

I worry about a lot of things — my health, my kids, and the size of my retirement account. I never worry about an impending robot apocalypse … but maybe I should. A handful of very smart people in the science and technology worlds are worried about that very thing.

First it was Microsoft cofounder Bill Gates, who got the Internet all fired up when he answered questions in a Reddit “Ask Me Anything” thread. “I am in the camp that is concerned about super intelligence,” Gates wrote in response to a question about the existential threat posed by AI. “First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern.”

Renowned physicist Stephen Hawking, who spends a lot of time thinking about the shape of the universe, is also worried. “It’s tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history,” he wrote in The Independent, a British newspaper.

Hawking, no technophobe, is bullish on the potential benefits of AI and machine learning, and he says AI could become “the biggest event in human history.” “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” Hawking wrote. 
“Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

Hawking was one of roughly 100 scientists who signed an open letter hailing AI’s potential but also warning of related dangers, such as autonomous weapons systems and invasions of privacy. Elon Musk, CEO of Tesla and SpaceX, also signed the letter, prompting a comment from Gates during his chat session: “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

I’m making light of the issue, of course, but there is a serious point: We need to think long and hard about how we use new technologies and, perhaps more importantly, consider the negative implications along with the potential.