When so many of the world’s smartest people warn us about ‘killer robots’ and other ethical issues inherent in artificial intelligence, we should heed their call to make sure A.I. is used for societal good.

Growing up, it was great fun watching shows like Star Trek, The Jetsons, Flash Gordon and The Green Hornet. They made you dream about what life would be like with amazing technologies at your disposal. Now we’re all grown up and actually using some of the once-imaginary marvels from those TV shows (except for George Jetson’s flying car, which I’m still waiting for!).

On a recent overseas trip, I watched the new British sci-fi film Ex Machina, which tells the story of a programmer pushing the boundaries of artificial intelligence with a robot named Ava. Not long after that, I read about Elon Musk, Stephen Hawking and thousands of A.I. researchers calling for a ban on autonomous weapons, a.k.a. ‘killer robots.’

While some people scoff at such warnings, we need to remember that humans think in linear rather than exponential terms. That’s a primary reason Moore’s Law continues to outpace our expectations. It’s also why we should expect the rate of technology evolution to outstrip that of human evolution.

Another compelling addition to this burgeoning debate is the open letter from the Future of Life Institute, in which A.I. scientists emphasize the importance of using A.I. for societal benefit, not destruction and war. Let’s use artificial intelligence to eradicate disease and poverty, the letter argues, and “reap its benefits while avoiding potential pitfalls.” Coupled with that letter, Musk made a $10 million donation aimed at keeping A.I. “beneficial for humanity.”

All of this puts the question of A.I. ethics on the table, at exactly the right moment. Why play the ethics card so early, before some of the imagined benefits have even materialized? Stuart Russell, a pioneering A.I.
researcher, worries that this technology will be exploited for military use rather than human advancement. He and other scientists compare the potential of A.I. with that of nuclear technology, reminding us that the original, primary interest in nuclear reactions was as an “inexhaustible supply of energy.” Not bombs. As Hawking said: “I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence. Both seem wonderful until one thinks of the possible risks.”

When so many of the world’s smartest people raise their hands to warn us, we should not only hear them but heed them. As we contemplate the future of artificial intelligence, let’s keep a strong grip on our ethics.