It may not be time to worry about a robot apocalypse, but artificial intelligence (AI) poses some very real threats, according to more than 100 leading scientists and tech luminaries, including Bill Gates and Stephen Hawking.

I worry about a lot of things — my health, my kids, and the size of my retirement account. I never worry about an impending robot apocalypse … but maybe I should. A handful of very smart people in the science and technology worlds are worried about that very thing.

First it was Microsoft cofounder Bill Gates, who got the Internet all fired up when he answered questions in a Reddit “Ask Me Anything” thread. “I am in the camp that is concerned about super intelligence,” Gates wrote in response to a question about the existential threat posed by artificial intelligence (AI). “First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern.”

Renowned physicist Stephen Hawking, who spends a lot of time thinking about the shape of the universe, is also worried. “It’s tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history,” he wrote in The Independent, a British newspaper.

Hawking, no technophobe, is bullish on the potential benefits of AI and machine learning, and he says AI could become “the biggest event in human history.”

“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” Hawking wrote.
“Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

Hawking was one of roughly 100 scientists who signed an open letter hailing AI’s potential, but also warning of related dangers such as autonomous weapons systems and invasions of privacy. Elon Musk, CEO of Tesla and SpaceX, also signed the letter, prompting a comment from Gates during his chat session: “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

I’m making light of the issue, of course, but there is a serious point: We need to think long and hard about how we use new technologies and, perhaps more importantly, consider the negative implications along with the potential.