by Jill Dyché

Does your AI plan account for safety?

Opinion
Dec 04, 2018
Artificial Intelligence, IT Strategy, Technology Industry

It might only take a troublesome naysayer to sabotage an otherwise promising AI program.


One thing I’ve learned in the last several years is that, just like other high-profile company initiatives, AI isn’t immune from corporate politics. The most prevalent example of this is people battling over AI ownership, a topic I covered in “The internal disruption of AI.”

But ownership isn’t the only hot potato being tossed between teams who recognize the potential of AI—and the pitfalls. I’m currently advising an aftermarket automobile products supplier on revamping its AI team. Digital innovations are rapidly changing the industry, and the phenomenon of autonomous vehicles seems to have captivated executives’ imaginations, if not their budgets. My client was newly deliberate about its AI roadmap, and to secure budget we were creating an incremental plan.

The AI team complained to me about a manager—I’ll call him Ken—who had been labeled “anti-AI.” Citing his disruptive questions and accusing him of having a penchant for drama, the team decided to exclude Ken from weekly AI progress calls. 

Amid all the exciting news about potential uses of AI, it’s tempting to dismiss naysayers as modern-day Chicken Littles warning that AI will bring the sky crashing down. I advised the team to hear Ken out.

I wasn’t defending Ken’s position. But AI has the potential to do harm as well as good. Remember Microsoft’s racist chatbot of a few years ago? On the other side of the ledger, AI can enhance disease diagnosis and even spot abnormal cells in patients. The popular press celebrates the combination of AI and gene-editing technologies to cure disease; conversely, the prospect of “designer babies” has sparked fierce debate. The topic of AI safety and ethics has moved from the nervous fringes to the boardroom.

That’s actually a good thing.

The auto industry is uniquely poised to exploit technology innovation. Speech recognition, enhanced navigation, digital displays, night vision, warning systems, driver monitoring, and other advancements have enhanced driving comfort and safety.

It turns out my client’s deliberate roadmap was the very thing needed to assuage Ken’s fears. Since use cases for AI are so numerous (and increasing as algorithms and their users become more sophisticated), it’s not AI as a class of tools but the context of each discrete AI use that matters. Recent news about failures in self-driving cars was likely making Ken particularly twitchy. Will AI and machine learning be used to distinguish physical objects like other cars, or humans? To enhance navigation systems? To display real-time traffic suggestions on the driver’s windshield? To channel edge analytics for predictive maintenance?

You see my point: show me an industry and I’ll show you dozens, if not hundreds, of use cases for artificial intelligence. It turns out we were able to overcome organizational resistance to AI by building three considerations into our planning:

1. What are the most promising use cases for AI?

I name several above, but it’s a good idea to actually list these opportunities as a team. Which ones align with strategic objectives? Which could help gain market share? Which could make operations smarter? We used a weighted scoring method to establish priorities, thus making the roadmap more tactical.
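To make that concrete, here is a minimal sketch of the kind of weighted scoring a team can use. The criteria, weights, use cases, and scores below are hypothetical stand-ins, not my client’s actual list.

```python
# Hypothetical criteria and weights -- swap in your own strategic questions.
CRITERIA = {
    "strategic_alignment": 0.40,   # Does it align with strategic objectives?
    "market_share_impact": 0.35,   # Could it help gain market share?
    "operational_gain":    0.25,   # Could it make operations smarter?
}

# Each candidate use case gets a 1-5 score per criterion from the team.
use_cases = {
    "driver monitoring":          {"strategic_alignment": 5, "market_share_impact": 3, "operational_gain": 4},
    "predictive maintenance":     {"strategic_alignment": 4, "market_share_impact": 3, "operational_gain": 5},
    "windshield traffic display": {"strategic_alignment": 3, "market_share_impact": 5, "operational_gain": 2},
}

def weighted_score(scores):
    """Weighted sum of the team's 1-5 scores across all criteria."""
    return sum(scores[c] * weight for c, weight in CRITERIA.items())

# Rank use cases from highest to lowest priority for the roadmap.
for name, scores in sorted(use_cases.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```

The output is a ranked list the team can defend in budget conversations; agreeing on the weights up front means the prioritization debate happens once, not in every status meeting.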

2. How do we ensure AI safety?

Once you land on a few opportunities for AI, engage your analytics or data science team in describing the best type of algorithm for the job, and brainstorm ways it could be compromised. For instance, could a deep learning algorithm that recognizes red lights be hacked into thinking “red” means “go”? What measures would be needed to avoid such risks?

Any AI application could have unintended consequences. While you might be reluctant to go through this process—it could discourage some promising AI ideas—it’s a worthwhile exercise for engaging a cross-functional team.
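For the red-light question, here is a toy illustration of how such a compromise can work. A small linear classifier stands in for a real deep learning model, and the data, labels, and perturbation budget are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": average (R, G, B) intensities; label 1 = red light, 0 = green.
X = np.vstack([
    rng.normal([0.9, 0.1, 0.1], 0.05, size=(50, 3)),   # red lights
    rng.normal([0.1, 0.9, 0.1], 0.05, size=(50, 3)),   # green lights
])
y = np.array([1] * 50 + [0] * 50)

# Train a small logistic-regression "red light detector" by gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def red_score(v):
    """Model's confidence that the input is a red light."""
    return 1 / (1 + np.exp(-(v @ w + b)))

# Fast-gradient-sign-style attack: nudge each pixel of a genuine red light
# in the direction that most lowers the "red" confidence.
x = X[0]
eps = 0.2  # hypothetical perturbation budget -- small, barely visible changes
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)

print(f"red confidence, original image:  {red_score(x):.3f}")
print(f"red confidence, perturbed image: {red_score(x_adv):.3f}")  # lower than the original
```

Whether the right countermeasure is adversarial training, input validation, or redundant sensors is exactly the kind of question this cross-functional exercise is meant to surface.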

3. What’s the goal for the AI project?

Be clear about the desired outcome(s). Will candidate AI capabilities be additive? Are you after industry dominance or competitive parity? Where you fall on that continuum could inform whether AI functionality is developed in-house or with a partner that has already proven the viability—and the security—of its AI deployments.

I recommended including Ken in these conversations. After all, his perspective, while a bit contrarian, could drive some interesting—and, these days, increasingly necessary—discussions.