Business leaders at every level see the value of artificial intelligence, but the real payoff lies in using AI well.

And the stakes are high. According to a Deloitte survey released last summer, 61% of companies expect AI to transform their industry over the next three years. Companies with effective leaders, a high level of commitment to AI projects, and a clear AI vision and strategy are positioned to benefit the most from this shift, according to a McKinsey survey released last November.

The key differentiator? Being able to deploy AI at scale. Instead of proofs of concept or one-off AI projects, the companies that will come out on top are the ones capable of deploying multiple AI applications across multiple teams. So far, only 13% of organizations have been able to do this, according to Capgemini’s 2020 state of AI report.

Here are eight tips for transforming artificial intelligence projects into business value, as told by those who are already deriving real-world benefits from AI.

Focus on business transformation

Three years ago, when General Electric was in the early stages of its AI journey, AI projects required a keen focus on specific business benefits, starting out with minimum viable projects. Today, the story is more about using AI as part of a transformation of the business itself.

“You can go look at one tiny silo, optimize how much inventory you save, and save $2 or $3 million, but it doesn’t translate into creating value throughout the company,” says Colin Parris, senior vice president and chief technology officer at GE, pointing to an example in which GE figured out how to manage part inventories more efficiently with AI.
The next step was to take what GE learned and offer the same service to its customers.

“I can make the prediction — or give you software and you can use it — so you know what parts to buy,” he says. “So I went from efficiency to revenue generation in my industry. And then I can open up my market and the same techniques can be applied to other industries.”

But the jump from using AI to cut costs to using AI to increase business requires a fundamental shift in strategy to focus on business transformation. At GE, that has meant leveraging lean manufacturing principles, powered by AI. One advantage of pairing AI with lean is that it reduces internal resistance to change.

“We’ve been doing lean in manufacturing for years,” says Parris. “People know that their job isn’t going away.”

Know the limits of AI

As AI projects scale up and become core company operations, the associated risks go up as well. If an AI system trained on a specific problem is then applied to a slightly different problem, the results may be suboptimal — or even dangerous.

“We have this thing called humble AI,” Parris says. “If things change, then I don’t use the AI model. I go back to the model I had before. Humble knows when it should step away. It limits your business risk. And increases adoption.”

Another aspect of GE’s “humble AI” approach is ensuring that the AI explains its reasoning. For example, when technicians get data from wind turbines, they traditionally look up wind speed versus tower vibration in the manual, and the manual tells them what to do. An AI system, however, can get the data, plot the curves, and report to the technician that the turbine is experiencing a pitch bearing problem.
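The “humble AI” fallback Parris describes, trusting the learned model only inside the regime it was trained on and otherwise stepping back to the legacy model, can be sketched as a simple guardrail. Everything below is illustrative: the wind-speed training range and the `ml_model` and `baseline_model` names are assumptions for the sake of the sketch, not GE’s actual implementation.

```python
# A minimal "humble AI" guardrail sketch. The training range and model
# names are hypothetical; a real system would use proper drift detection.

TRAIN_MIN, TRAIN_MAX = 3.0, 25.0   # wind speeds (m/s) seen during training


def predict_with_fallback(wind_speed, ml_model, baseline_model):
    """Use the ML model only inside the regime it was trained on.

    Returns the prediction and a tag saying which model produced it.
    """
    if TRAIN_MIN <= wind_speed <= TRAIN_MAX:
        return ml_model(wind_speed), "ml"
    # Conditions have drifted outside the training envelope:
    # "step away" and fall back to the trusted legacy model.
    return baseline_model(wind_speed), "baseline"
```

In practice the out-of-range check would be more sophisticated than a single bounds test, but the routing logic is the point: when inputs leave familiar territory, the system declines to use the learned model.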
Explainable AI would also show those curves to the technician and pull up the page in the manual that has the relevant information.

“The AI is now explaining itself,” Parris says. “And the technician could look at it and say, ‘This looks a little different.’ Or they could say, ‘That’s exactly right. Let’s do that.’”

The AI helps the technician get to the solution faster — and helps the technician believe it actually works.

“It’s about augmented intelligence, assisted intelligence,” Parris says. “It’s not replacing you; it’s assisting you.” That is helping with the adoption of AI at scale, he adds.

Listen to stakeholders — and customers

For some companies, ensuring AI systems produce useful results requires help beyond the core AI team. Like any project, this begins with requirements gathering around data, outcomes, and models.

“Ideally, you kick off a project with a whiteboard meeting where all the key stakeholders spend the afternoon sussing out the details and documenting the query requirements,” says Jim Metcalf, chief data scientist at the Healthy Nevada Project, whose team learned this lesson working on a protocol for dealing with cardiac patients.

The project required collecting information on medicines patients were prescribed when released from the hospital. But some medications, such as statins, are prescribed when patients are first admitted and continued when they leave. The system assumed these medicines were ongoing prescriptions patients were already taking, not new medicines related to their heart attack hospitalizations, a problem discovered only when medication counts came in lower than expected.

“The team could have worked this out much earlier if we’d had more detailed discussions with all interested parties from the outset,” says Metcalf.
“Our data science team has learned to assume nothing. We thoroughly vet, discuss, and document query requirements long before anyone puts fingers on a keyboard.”

For enterprise spending management platform provider Coupa, a customer tip pointed the way to a new approach to detecting fraud. “In our industry, the approach has been to look at spending fraud in silos,” says Donna Wilczek, the company’s vice president of product strategy and innovation.

But it turns out that an employee cheating in one area is more likely to be cheating in other areas as well, she says. It took conversations with procurement experts and financial auditors to discover that the secret of fraud detection is looking at the individual people at the heart of the fraud.

Coupa now collects examples of fraudulent behavior that businesses report, then adds those real-life examples to the AI system.

No more proofs of concept

When the technology was brand new, proofs of concept (POCs) made sense. Today, however, there’s less of a need to start your AI journey with experiments, says JJ López Murphy, data and AI technology director at Globant.

“Each of these experiments is very expensive, in terms of money, time, and political clout,” he says. “Once you’ve done four POCs that go nowhere, people stop believing in AI.”

Instead, companies should work on projects that go somewhere, he says. “If it’s not in production, if it’s not being used, it’s sometimes worse than worthless.”

Gartner analyst Whit Andrews agrees, recommending that companies instead create minimum viable products. “The risk is a little higher,” he says. “But the benefit is that you get rolling. Now you just keep adding capacity and functionality.”

According to a 2020 survey from Gartner, companies that succeed with AI conduct an average of 4.1 pilot projects, while companies that are not successful conduct 5.2.
“We’re past the ‘throw it at the wall and see what sticks’ point,” he says.

Mixed teams

According to the Gartner report, organizations that got “significant value” from their AI projects also had 14% more roles on their AI teams, including project managers, strategists, and people with differing backgrounds and perspectives.

“The No. 1 habit of successful companies is using well-mixed teams,” Andrews says.

For an AI project Tech Data worked on that involved counting puffins, that meant bringing in hardware experts.

“If you’ve ever seen National Geographic specials, puffins are huddled up close together, thousands of them,” says Clay Davis, Tech Data’s vice president for global data and IoT solutions. “We were tasked to leverage AI to count puffins.”

Before Tech Data was called in to help, one team of data scientists was working to get the best possible models in place for counting puffins, while a separate team of hardware professionals chose the cameras and computing equipment.

“When you have physical hardware like a camera that is capturing images, sometimes in remote areas, sometimes it’s more effective to do the computation on site, and sometimes it’s not,” he says. “And if you do computation on site, you need to make sure that the hardware you’re leveraging is enough to handle the models you’ve built with the data scientists.”

Three months in, it turned out that the hardware chosen couldn’t run the models the data scientists were coming up with. “Now you’ve got to restart,” he says. “Either you have to buy new hardware or ask the data scientists to build a more efficient model.
You needed to have both people in the project from day one.”

In the case of the uncounted puffins, the data scientists were able to switch to a trend mapping model, so they could stick with the existing hardware.

Embrace domain expertise

Relying solely on data scientists to surface insights from data is a big mistake, says Halim Abbas, chief AI officer at Cognoa, which is applying AI to behavioral diagnostics, helping to identify children with autism and other behavioral health issues.

Ascertaining data interdependencies and data relevance often takes a subject matter expert. For example, if one set of patients diagnosed in a room with blue walls and another in a room with white walls produced different results, an analytics model looking for patterns might deduce that wall paint has clinical significance.

“As the data set size grows, you will obviously avoid such silly conclusions,” Abbas says. “But there still might be some subtle ones.”

These are issues that an AI expert without domain expertise wouldn’t be aware of, he adds. This is especially critical when data sets are small, as with rare conditions or small demographics.

But domain experts can have their own biases, Abbas says. “A good way to be doubly sure is to take the input from the domain experts, and do the same on the AI side, and only work with what’s doubly validated, on both sides of the equation.”

Blending domain expertise with AI can be essential in data curation, as CAS, a 111-year-old company that collects and publishes chemical research data, has found.

“Things like spaces, subscripts, dashes, or a change of a single letter in a chemical structure can make a difference between a safe and an explosive reaction,” says CTO Venki Rao.
“We have over 350 PhDs in our facility, curating data.”

Recently, the company has begun using AI to help categorize and curate the data, freeing up some of those PhDs for more complex work. But it takes domain expertise even to build a simple optical character recognition system.

“If you are a pure technologist, you can’t be productive on day one for us,” he says. “If you brute force it with the technology, without understanding the chemistry, it will never be optimal.”

Realize the value of real-world testing

No battle plan survives contact with the enemy — and no AI system survives contact with the real world. If your company isn’t prepared for this fact, your AI project is doomed before it starts.

Jennifer Hewit, head of cognitive and digital services at Credit Suisse Group, met this challenge head-on. When the financial services company launched its first customer support chatbot, Amelia, Hewit knew it would often give up and hand customers off to human agents rather than answer most queries on its own.

“I made the decision early to go live,” she says, at a point when the chatbot’s ability to understand intent was just 23%. But by operating in real-world scenarios, the chatbot was able to observe multicultural, multilingual, and multigenerational conversations and learn from them.

“Going live fast, and exposing the capability to the organization, meant that we were able to increase her ability to understand intent from 23% to 86% in five months,” she says.

Have a higher purpose

As companies compete for scarce AI talent, having meaningful projects can make a big difference. At Envision Virgin Racing, for example, the goal of using AI isn’t just to shave a few seconds off a Formula E electric car race.
“We’re moving the industry forward,” says Sylvain Filippi, managing director and CTO.

“All the software and technologies are flowing almost directly from racing to high-end premium cars and then to road cars,” he says. “It’s a lot more motivating when we know that this technology is really going to accelerate the transition to electric cars.”

The next generation of electric cars will start racing in 2023, he says, pushing the boundaries of battery technology and fast charging.

“Fast charging, in combination with high-density batteries, will help enable an easy transition to electric cars,” he says. “Not ten years from now, but two or three years from now. For the consumer, once you have a car that has roughly a 300-mile range and superfast charging, it’s game over for internal combustion.”

Today, the average consumer car charges at about 50 to 100 kilowatts, he says, and an 80% charge takes about 40 minutes. Going beyond what even Tesla is currently doing at 200 kilowatts will make a big difference on longer road trips, he says. “The idea is to get it to 300. At 300, you’re looking at 15 minutes to charge. At 600, it takes less than 10 minutes.”

Envision Virgin Racing also hopes to show that running cars at high voltage is safe.

“The amount of abuse we put the cars through is real. People can watch that and say, ‘Oh, cool, if these cars can do that, I can, too,’” he says. “If we can make those cars reliable for an entire season of abuse like this, a road car will be on the road forever. It’s a reliable testing ground — like motorsports used to be.”

And AI is at the heart of this. “We have race engineers, system engineers, and also a bunch of software engineers, which is new to motorsports,” he says. “By virtue of being an electric car, there is so much performance derived from software.
The cars are the same at the start and the end of a season, but the software has changed six times, and the car is noticeably faster.”

Fortunately, there’s plenty of data for the AI teams to make use of, as electric cars are filled with sensors collecting vast amounts of highly structured data. “For data scientists, it’s a fantastic playground to apply their resources,” he says.

Envision Virgin Racing works with Genpact, a consulting company, for its data science models and tools. The team’s parent company, Envision, is an alternative energy company that started out in wind turbines and has since moved into software to improve energy grid efficiency. And as the owner of the fifth-largest battery manufacturer in the world, Envision is very interested in pushing the technology to its limits, Filippi says.

“There are a lot of important learnings here,” he says.