by Michael Friedenberg

The future of A.I. ethics is in our hands

Aug 31, 2015

When so many of the world’s smartest people warn us about ‘killer robots’ and other ethical issues inherent in artificial intelligence, we should heed their call to make sure A.I. is used for societal good.

Growing up, I had great fun watching shows like Star Trek, The Jetsons, Flash Gordon and The Green Hornet. They made you dream about what life would be like with amazing technologies at your disposal.

Now we’re all grown up and actually using some of the once-imaginary marvels from those TV shows (except for George Jetson’s flying car, which I’m still waiting for!).

On a recent overseas trip, I watched the new British sci-fi film Ex Machina, which tells the story of a programmer pushing the boundaries of artificial intelligence with a robot named Ava. Not long after that, I read about Elon Musk, Stephen Hawking and thousands of A.I. researchers calling for a ban on autonomous weapons, a.k.a. ‘killer robots.’

While some people scoff at such warnings, we need to remember that humans think in linear rather than exponential terms. That’s the primary reason the exponential progress described by Moore’s Law keeps exceeding our expectations. It’s also why we should expect the rate of technology evolution to outpace that of human evolution.

Another compelling addition to this burgeoning debate is the open letter from the Future of Life Institute, in which A.I. scientists emphasize the importance of using A.I. for societal benefit, not destruction and war. Let’s use artificial intelligence to eradicate disease and poverty, the letter argues, and “reap its benefits while avoiding potential pitfalls.” Coupled with that letter, Musk made a $10 million donation aimed at keeping A.I. “beneficial for humanity.”

All of this puts the question of A.I. ethics on the table, at exactly the right moment. Why play the ethics card so early, before some of the imagined benefits have even materialized? Stuart Russell, a pioneering A.I. researcher, worries that this technology will be exploited for military use rather than human advancement. He and other scientists compare the potential of A.I. with that of nuclear technology, reminding us that the original, primary interest in nuclear reactions was as an “inexhaustible supply of energy.” Not bombs. As Hawking said: “I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence. Both seem wonderful until one thinks of the possible risks.”

When so many of the world’s smartest people raise their hands to warn us, we should not only hear them but heed them. As we contemplate the future of artificial intelligence, let’s keep a strong grip on our ethics.