Recently in a risk management meeting, I watched a data scientist explain to a group of executives why convolutional neural networks were the algorithm of choice to help discover fraudulent transactions. The executives—all of whom agreed that the company needed to invest in artificial intelligence—seemed baffled by the need for so much detail. “How will we know if it’s working?” asked a senior director to the visible relief of his colleagues.
Although they believe in AI’s value, many executives still have questions about how to adopt it. The following five questions are boardroom staples:
1. “What’s the reporting structure for an AI team?”
Organizational issues are never far from the minds of executives looking to accelerate efficiencies and drive growth. And, while this question isn’t new, the answer might be.
Captivated by the idea of data scientists analyzing potentially competitively differentiating data, managers often advocate formalizing a data science team as a corporate service. Others assume that AI will fall within an existing analytics or data center of excellence (COE).
AI positioning depends on incumbent practices. A retailer’s customer service department designated a group of AI experts to develop “follow-the-sun” chatbots to serve the retailer’s increasingly global customer base. Conversely, a regional bank considered AI more of an enterprise service, centralizing statisticians and machine learning developers into a separate team reporting to the CIO.
These decisions were vastly different, but both were the right ones for their respective companies. When deciding where an AI team should report, consider the following:
- How unique (e.g., competitively differentiating) is the expected outcome? If the proposed AI effort is seen as strategic, it might be better to create a team of subject matter experts and developers with its own budget, headcount, and skills so as not to distract from or siphon resources from existing projects.
- To what extent are internal skills available? If data scientists and AI developers are already clustered within a COE, it might be better to leave the team as-is, hiring additional experts as demand grows.
- How important will it be to package and brand the results of an AI effort? If the AI outcome is a new product or service, it might be better to create a dedicated team that can deliver the product and assume maintenance and enhancement duties as it continues to innovate.
2. “Should we launch our AI effort using some sort of solution, or will coding from scratch distinguish our offering?”
When people hear the term AI they conjure thoughts of smart Menlo Park hipsters stationed at standing desks wearing ear buds in their pierced ears and writing custom code late into the night. Indeed, some version of this scenario is how AI has taken shape in many companies.
Executives tend to romanticize AI development as an intense, heads-down enterprise, forgetting that development planning, market research, data knowledge, and training should also be part of the mix. Coding from scratch might actually prolong AI delivery, especially with the emerging crop of developer toolkits (Amazon SageMaker and Google Cloud AI are two) that bundle open source routines, APIs, and notebooks into packaged frameworks.
These packages can accelerate productivity, carving weeks or even months off development schedules. But they can also complicate collaboration. Before choosing, ask:
- Is time-to-delivery a success metric? In other words, is there lower tolerance for research or so-called “skunkworks” projects where timeframes and outcomes could be vague?
- Is there a discrete budget for an AI project? This could make it easier to procure developer SDKs or other productivity tools.
- How much research will developer toolboxes require? Depending on your company’s skill level, in the time it takes to research, obtain approval for, procure, and learn an AI developer toolkit, your team could have delivered important new functionality.
3. “Do we need a business case for AI?”
It’s all about perspective. AI might be positioned as edgy and disruptive with its own internal brand, signaling a fresh commitment to innovation. Or it could represent the evolution of analytics, the inevitable culmination of past efforts that laid the groundwork for AI.
I’ve noticed that AI projects are considered successful when they are deployed incrementally, when they further an agreed-upon goal, when they deliver something the competition hasn’t done yet, and when they support existing cultural norms.
- Do other strategic projects require business cases? If they do, decide whether you want AI to be part of the standard cadre of successful strategic initiatives, or to stand on its own.
- Are business cases generally required for capital expenditures? If so, would bucking the norm make you an innovative disruptor, or an obstinate rule-breaker?
- How formal is the initiative approval process? The absence of a business case might signal a lack of rigor, jeopardizing funding.
- What will be sacrificed if you don’t build a business case? Budget? Headcount? Visibility? Prestige?
4. “Do we need an executive sponsor for our AI effort?”
Incumbent norms once again matter here. But when it comes to AI, the level of disruption is often directly proportional to the need for a sponsor.
A senior AI specialist at a health care network decided to take the time to discuss possible AI use cases (medication compliance, readmission reduction, and deep learning diagnostics) with executives “so that they’d know what they’d be in for.” More importantly, she knew that the executives who expressed the most interest in the candidate AI undertakings would be the likeliest to promote her new project. “This is a company where you absolutely need someone powerful in your corner,” she explained.
- Does the company’s funding model require an executive sponsor? Challenging that rule might cost you time, not to mention allies.
- Have high-impact projects with no executive sponsor failed? You might not want your AI project to be the first.
- Is the proposed AI effort specific to a line of business? If so, enlisting an executive sponsor who is familiar with the business problem AI is slated to solve can be an effective insurance policy.
5. “What practical advice do you have for teams just getting started?”
If you’re new to AI you’ll need to be careful about departing from norms, since this might attract undue attention and distract from promising outcomes. Remember Peter Drucker’s quote about culture eating strategy for breakfast? Going rogue is risky.
On the other hand, positioning AI as disruptive or evolutionary can do wonders for both the external brand and internal employee morale, assuring constituents that the company is committed to innovation and considers emerging tech to be strategic.
Either way, the most important success measures for AI are setting accurate expectations, sharing them often, and addressing questions and concerns without delay.
- Distribute a high-level delivery schedule. An unbounded research project is not enough. Be sure you’re building something—AI experts agree that execution matters—and be clear about the delivery plan.
- Help colleagues envision the benefits. Does AI promise first mover advantage? Significant cost reductions? Brand awareness?
- Explain enough to color in the goal. Building a convolutional neural network to diagnose skin lesions via image scans is a world away from using unsupervised learning to discover unanticipated correlations between customer segments. As one of my clients says, “Don’t let the vague in.”
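The distinction drawn above is worth making concrete for stakeholders: a supervised model (like a neural network classifying images) learns from labeled examples, while unsupervised learning proposes groupings with no labels at all. Here is a minimal illustrative sketch of the unsupervised case using k-means clustering on an invented customer dataset; the feature names and segment shapes are assumptions for demonstration, not a real deployment.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical customer features: [annual_spend, visits_per_month].
# Two synthetic populations stand in for real customer records.
rng = np.random.default_rng(42)
customers = np.vstack([
    rng.normal([200, 2], [30, 0.5], size=(50, 2)),   # occasional shoppers
    rng.normal([1500, 12], [200, 2], size=(50, 2)),  # frequent high spenders
])

# Unsupervised learning: no labels are supplied. The algorithm
# proposes segments; analysts interpret and name them afterward.
features = StandardScaler().fit_transform(customers)
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)

print(sorted(np.bincount(model.labels_).tolist()))  # → [50, 50]
```

Unlike the image-diagnosis example, there is no “right answer” to check against here, which is exactly why the two kinds of projects need different success measures and different executive expectations.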
These days AI has mojo. Companies are getting serious about it in a way they haven’t been before. And the more your executives understand about how it will be deployed—and why—the better the chances for delivering ongoing value.