Recently in a risk management meeting, I watched a data scientist explain to a group of executives why convolutional neural networks were the algorithm of choice to help discover fraudulent transactions. The executives—all of whom agreed that the company needed to invest in artificial intelligence—seemed baffled by the need for so much detail. "How will we know if it's working?" asked a senior director, to the visible relief of his colleagues.

Although they believe in AI's value, many executives are still wondering about its adoption. The following five questions are boardroom staples:

1. "What's the reporting structure for an AI team?"

Organizational issues are never far from the minds of executives looking to accelerate efficiencies and drive growth. And while this question isn't new, the answer might be.

Captivated by the idea of data scientists analyzing potentially competitively differentiating data, managers often advocate formalizing a data science team as a corporate service. Others assume that AI will fall within an existing analytics or data center of excellence (COE).

AI positioning depends on incumbent practices. A retailer's customer service department designated a group of AI experts to develop "follow the sun" chatbots that would serve the retailer's increasingly global customer base. Conversely, a regional bank considered AI more of an enterprise service, centralizing statisticians and machine learning developers into a separate team reporting to the CIO.

These decisions were vastly different, but they were both the right ones for their respective companies.

Considerations:

How unique (e.g., competitively differentiating) is the expected outcome? If the proposed AI effort is seen as strategic, it might be better to create a team of subject matter experts and developers with its own budget, headcount, and skills so as not to distract from, or siphon resources from, existing projects.

To what extent are internal skills available? If data scientists and AI developers are already clustered within a COE, it might be better to leave the team as is, hiring additional experts as demand grows.

How important will it be to package and brand the results of an AI effort? If the AI outcome is a new product or service, it might be better to create a dedicated team that can deliver the product and assume maintenance and enhancement duties as it continues to innovate.

2. "Should we launch our AI effort using some sort of solution, or will coding from scratch distinguish our offering?"

When people hear the term AI, they conjure thoughts of smart Menlo Park hipsters stationed at standing desks, wearing earbuds in their pierced ears, and writing custom code late into the night. Indeed, some version of this scenario is how AI has taken shape in many companies.

Executives tend to romanticize AI development as an intense, heads-down enterprise, forgetting that development planning, market research, data knowledge, and training should also be part of the mix. Coding from scratch might actually prolong AI delivery, especially with the emerging crop of developer toolkits (Amazon SageMaker and Google Cloud AI are two) that bundle open source routines, APIs, and notebooks into packaged frameworks. These packages can accelerate productivity, carving weeks or even months off development schedules. Or they can complicate collaboration efforts.

Considerations:

Is time-to-delivery a success metric? In other words, is there lower tolerance for research or so-called "skunkworks" projects where timeframes and outcomes could be vague?

Is there a discrete budget for an AI project? This could make it easier to procure developer SDKs or other productivity tools.

How much research will developer toolboxes require? Depending on your company's level of skill, in the time it takes to research, obtain approval for, procure, and learn an AI developer toolkit, your team could have delivered important new functionality.

3. "Do we need a business case for AI?"

It's all about perspective. AI might be positioned as edgy and disruptive, with its own internal brand signaling a fresh commitment to innovation. Or it could represent the evolution of analytics, the inevitable culmination of past efforts that laid the groundwork for AI.

I've noticed that AI projects are considered successful when they are deployed incrementally, when they further an agreed-upon goal, when they deliver something the competition hasn't done yet, and when they support existing cultural norms.

Considerations:

Do other strategic projects require business cases? If they do, decide whether you want AI to be part of the standard cadre of successful strategic initiatives, or to stand on its own.

Are business cases generally required for capital expenditures? If so, would bucking the norm make you an innovative disruptor, or an obstinate rule-breaker?

How formal is the initiative approval process? The absence of a business case might signal a lack of rigor, jeopardizing funding.

What will be sacrificed if you don't build a business case? Budget? Headcount? Visibility? Prestige?

4. "We've had an executive sponsor for nearly every high-profile project. What about AI?"

Incumbent norms once again matter here. But when it comes to AI, the level of disruption is often directly proportional to the need for a sponsor.

A senior AI specialist at a health care network decided to take the time to discuss possible AI use cases (medication compliance, readmission reduction, and deep learning diagnostics) with executives "so that they'd know what they'd be in for." More importantly, she knew that the executives who expressed the most interest in the candidate AI undertakings would be the likeliest to promote her new project. "This is a company where you absolutely need someone powerful in your corner," she explained.

Considerations:

Does the company's funding model require an executive sponsor? Challenging that rule might cost you time, not to mention allies.

Have high-impact projects with no executive sponsor failed? You might not want your AI project to be the first.

Is the proposed AI effort specific to a line of business? In this case, enlisting an executive sponsor familiar with the business problem AI is slated to solve can be an effective insurance policy.

5. "What practical advice do you have for teams just getting started?"

If you're new to AI, you'll need to be careful about departing from norms, since this might attract undue attention and distract from promising outcomes. Remember Peter Drucker's quote about culture eating strategy for breakfast? Going rogue is risky.

On the other hand, positioning AI as disruptive and evolutionary can do wonders for both the external brand and internal employee morale, assuring constituents that the company is committed to innovation and considers emerging tech to be strategic.

Either way, the most important success measures for AI are setting accurate expectations, sharing them often, and addressing questions and concerns without delay.

Considerations:

Distribute a high-level delivery schedule. An unbounded research project is not enough. Be sure you're building something—AI experts agree that execution matters—and be clear about the delivery plan.

Help colleagues envision the benefits. Does AI promise first-mover advantage? Significant cost reductions? Brand awareness?

Explain enough to color in the goal. Building a convolutional neural network to diagnose skin lesions via image scans is a world away from using unsupervised learning to discover unanticipated correlations between customer segments. As one of my clients says, "Don't let the vague in."

These days AI has mojo. Companies are getting serious about it in a way they haven't been before. And the more your executives understand about how it will be deployed—and why—the better the chances for delivering ongoing value.