by Mark MacCarthy

AI policy should be based on science, not science fiction

Opinion
Jul 26, 2017
Artificial Intelligence | Government | IT Leadership

Government should be looking for ways to promote AI rather than creating regulatory roadblocks

Elon Musk is one of the most forward-thinking innovators of our time, so it’s particularly troubling to hear him fear-mongering about the future of artificial intelligence (AI). At the recent National Governors Association meeting in Washington, D.C., Musk renewed his call for the federal government to actively regulate AI research. A deregulation-minded Washington is unlikely to create a new federal AI agency, as Musk would like, but his comments could damage AI’s enormous potential for social good.

And to be clear, his words are not half-hearted. At the governors’ meeting, he warned that AI is a “fundamental risk to the existence of human civilization,” justifying “proactive regulation” to make sure that we don’t do something very foolish. And, a few years ago, he compared AI research to “summoning the demon,” in which the summoner is certain that “the guy with the pentagram and the holy water” can control the demon, but it “doesn’t work out.”

Musk is not alone in sounding an alarm. In 2014, Stephen Hawking, Stuart Russell, Max Tegmark and Frank Wilczek said, “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” That same doomsday message has been delivered by other high-profile thinkers for decades, going back at least to 1965, when computer scientist I. J. Good warned that “… the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

At a high level, the dream of AI is the development of a single system, or linked group of systems, that not only performs a range of disparate tasks but also autonomously reprograms itself to learn new ones. From there, it is easy to imagine a system that can improve itself in a self-directed way, in virtually any field. And once it applies its learning capacity to its own improvement, it won’t be long before it far surpasses anything that humans have been able to do.

Fears of losing control to machines are not entirely unreasonable, particularly as smart technologies become everyday parts of our lives and specially designed computing systems become more capable in more fields. They recognize speech and spam, detect fraudulent transactions, select military targets, make educational recommendations, diagnose diseases, drive our cars, and beat humans at chess and Go. And increasingly, they perform jobs that were previously reserved for humans.

But in reality, today’s predictions of imminent human-level intelligence remain as speculative as they were at the dawn of the computer age.

Andrew Moore, Dean of Carnegie Mellon’s School of Computer Science, throws cold water on the potential for self-directed machines, saying, “… no one has any idea how to do that. It’s real science fiction. It’s like asking researchers to start designing a time machine.” He estimates that “98 percent of AI researchers are currently focused on engineering systems that can help people make better decisions, rather than simulate human consciousness.”

Cutting-edge companies like IBM say clearly, “Cognitive systems will not realistically attain consciousness or independent agency. Rather, they will increasingly be embedded in the processes, systems, products, and services by which business and society function – all of which will and should remain within human control.”

Of course, research precautions should be taken, as institutes such as the Future of Humanity Institute and the Future of Life Institute have urged. Their recently developed Asilomar Principles urge researchers to design AI systems “so that their goals and behaviors can be assured to align with human values throughout their operation.” And the government should not shy away from using federal research dollars to encourage the development of AI research with the right approach, such as the “human-compatible AI” design philosophy recently advanced by AI scientist Stuart Russell.

In addition, task-specific AI systems can raise special issues that policymakers should monitor. For instance, many technologists, including Musk, think the development of autonomous weapons systems is immoral and have asked national governments and international bodies to ban them. And, at some point, policymakers may need to consider legislation to promote the development of specific AI applications. For instance, legislation to create a uniform national regulatory scheme for autonomous cars is on the Congressional agenda this year – with strong support from the technology and automotive industries.

But speculative fear shouldn’t lead us into creating an omnibus regulatory structure to oversee all AI research. There’s simply no evidence that truly self-directing machines are around the corner, and a regulatory agency to supervise AI research is clearly a solution in search of a problem.

What’s worse, a fear-driven policy approach could significantly slow the development of AI systems that will make our homes and roads safer, cure deadly and costly illnesses, drive economic progress, and lead to many other societal advancements. Government should be looking for ways to promote AI rather than creating regulatory roadblocks.