by Maria Korolov

8 secrets of successful AI projects

May 27, 2021 | 15 mins
Analytics | Artificial Intelligence | Data Science

Artificial intelligence holds great business promise, but it takes more than a working model to create scalable, transformative change


Business leaders at every level see the promise of artificial intelligence, but using AI well is where the true value lies.

And the stakes are high. According to a Deloitte survey released last summer, 61% of companies expect AI to transform their industry over the next three years. Companies with effective leaders, a high level of commitment to AI projects, and a clear AI vision and strategy are positioned to benefit the most from this shift, according to a McKinsey survey released last November.


The key differentiator? Being able to deploy AI at scale. Instead of proofs of concept, or one-off AI projects, companies that will come out on top are the ones capable of deploying multiple AI applications across multiple teams. So far, only 13% of organizations have been able to do this, according to Capgemini’s 2020 state of AI report.

Here are eight tips for transforming artificial intelligence projects into business value, as told by those who are already deriving real-world benefits from AI.

Focus on business transformation

Three years ago, when General Electric was in the early stages of its AI journey, AI projects required a keen focus on specific business benefits, starting out with minimal viable projects. Today, the story is more about using AI as part of a transformation of the business itself.

Colin Parris, senior vice president and chief technology officer, General Electric

“You can go look at one tiny silo, optimize how much inventory you save, and save $2 or $3 million, but it doesn’t translate into creating value throughout the company,” says Colin Parris, senior vice president and chief technology officer at GE, pointing to an example in which GE figured out how to manage part inventories more efficiently with AI. The next step was to take what GE learned and offer the same service to its customers.

“I can make the prediction — or give you software and you can use it — so you know what parts to buy,” he says. “So I went from efficiency to revenue generation in my industry. And then I can open up my market and the same techniques can be applied to other industries.”

But the jump from using AI to cut costs to using AI to increase business requires a fundamental shift in strategy to focus on business transformation. At GE, that has meant leveraging lean manufacturing principles, powered by AI. One advantage of pairing AI with lean is that it reduces internal resistance to change.

“We’ve been doing lean in manufacturing for years,” says Parris. “People know that their job isn’t going away.”

Know the limits of AI

As AI projects scale up and become core company operations, associated risks go up as well. If an AI system trained on a specific problem is then applied to a slightly different problem, the results may be suboptimal — or even dangerous.

“We have this thing called humble AI,” Parris says. “If things change, then I don’t use the AI model. I go back to the model I had before. Humble knows when it should step away. It limits your business risk. And increases adoption.”
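The fallback logic Parris describes can be sketched as a simple guardrail: route a prediction to the AI model only when the input resembles the conditions the model was trained on, and otherwise revert to the older, trusted model. This is a minimal illustration, not GE's implementation; the function names and the single-feature range check are assumptions.

```python
# Hypothetical sketch of a "humble AI" guardrail: if an input falls outside
# the range the model was trained on, fall back to the previous trusted model.

def make_humble_predictor(ai_model, fallback_model, train_min, train_max):
    """Return a predictor that defers to the fallback outside the training range."""
    def predict(x):
        if train_min <= x <= train_max:
            return ai_model(x), "ai"
        # "Humble" step: conditions have changed, so step away from the AI model.
        return fallback_model(x), "fallback"
    return predict

# Toy models: the AI was trained only on wind speeds between 3 and 25 m/s.
ai = lambda wind_speed: 0.5 * wind_speed ** 3       # learned power estimate
physics = lambda wind_speed: 0.4 * wind_speed ** 3  # older physics-based model

predict = make_humble_predictor(ai, physics, train_min=3.0, train_max=25.0)
print(predict(10.0))  # in range: uses the AI model
print(predict(40.0))  # out of range: uses the fallback
```

In practice the "in range" test would be a richer drift or confidence check, but the business logic is the same: the system knows the boundary of its own competence.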

Another aspect of GE’s “humble AI” approach is to ensure the AI explains its reasoning. For example, when technicians get data from wind turbines, they traditionally look up wind speed versus tower vibration in the manual, and the manual tells them what to do. An AI system, however, can get the data, plot the curves, and report to the technician that the turbine is experiencing a pitch bearing problem. Explainable AI would also show those curves to the technician and pull up the page in the manual that has the relevant information.

“The AI is now explaining itself,” Parris says. “And the technician could look at it and say, ‘This looks a little different.’ Or they could say, ‘That’s exactly right. Let’s do that.’”

The AI helps the technician get to the solution faster — and helps the technician believe it actually works.

“It’s about augmented intelligence, assisted intelligence,” Parris says. “It’s not replacing you; it’s assisting you.” That is helping with the adoption of AI at scale, he adds.

Listen to stakeholders — and customers

For some companies, ensuring AI systems produce useful results requires help beyond the core AI team. As with any project, this begins with requirements gathering around data, outcomes, and models.

“Ideally, you kick off a project with a whiteboard meeting where all the key stakeholders spend the afternoon sussing out the details and documenting the query requirements,” says Jim Metcalf, chief data scientist at Healthy Nevada Project, whose team learned this lesson working on a protocol for dealing with cardiac patients.

The project required collecting information on medicines patients were prescribed when released from the hospital. But some medications, such as statins, are prescribed when patients are first admitted, and are continued when the patient leaves. The system assumed these medicines were ongoing prescriptions that patients were already taking, not new medicines related to their heart attack hospitalizations, a problem that was discovered only when medication counts ended up being lower than expected.

“The team could have worked this out much earlier if we’d had more detailed discussions with all interested parties from the outset,” says Metcalf. “Our data science team has learned to assume nothing. We thoroughly vet, discuss, and document query requirements long before anyone puts fingers on a keyboard.”

Donna Wilczek, vice president of product strategy and innovation, Coupa

For enterprise spending management platform provider Coupa, a customer tip pointed the way to a new way to detect fraud. “In our industry, the approach has been to look at spending fraud in silos,” says Donna Wilczek, the company’s vice president of product strategy and innovation.

But it turns out that an employee cheating in one area is more likely to be cheating in other areas as well, she says. It took conversations with procurement experts and financial auditors to find out that the secret of fraud detection is looking at the individual people at the heart of the fraud.

Coupa now collects examples of fraudulent behavior that businesses report, then adds those real-life examples to the AI system.

No more proofs of concept

When the technology was brand new, proofs of concept (POCs) made sense. Today, however, there’s less of a need to start your AI journey with experiments, says JJ López Murphy, data and AI technology director at Globant.

JJ López Murphy, data and AI technology director, Globant

“Each of these experiments is very expensive, in terms of money, time and political clout,” he says. “Once you’ve done four POCs that go nowhere, people stop believing in AI.”

Instead, companies should work on projects that go somewhere, he says. “If it’s not in production, if it’s not being used, it’s sometimes worse than worthless.”

Gartner analyst Whit Andrews agrees, recommending that companies instead create minimum viable products. “The risk is a little higher,” he says. “But the benefit is that you get rolling. Now you just keep adding capacity and functionality.”

According to a 2020 survey from Gartner, companies that succeed with AI conduct an average of 4.1 pilot projects. Companies that are not successful conduct 5.2 POCs. “We’re past the ‘throw it at the wall and see what sticks’ point,” he says.

Mixed teams

According to the Gartner report, organizations that got “significant value” from their AI projects also had 14% more roles on their AI teams, including project managers, strategists, and people with differing backgrounds and perspectives.

“The No. 1 habit of successful companies is using well-mixed teams,” Andrews says.

For a Tech Data AI project that involved counting puffins, that meant bringing in hardware experts.

Clay Davis, vice president for global data and IoT solutions, Tech Data

“If you’ve ever seen National Geographic specials, puffins are huddled up close together, thousands of them,” says Clay Davis, Tech Data’s vice president for global data and IoT solutions. “We were tasked to leverage AI to count puffins.”

Before Tech Data was called on to help with the project, there was a team of data scientists working to get the best possible models in place for counting puffins, and a separate team of hardware professionals choosing the cameras and computing equipment.

“When you have physical hardware like a camera that is capturing images, sometimes in remote areas, sometimes it’s more effective to do the computation on site, and sometimes it’s not,” he says. “And if you do computation on site, you need to make sure that the hardware that you’re leveraging is enough to handle the models you’ve built with the data scientists.”

Three months in, it turned out that the hardware chosen couldn’t run the models the data scientists were coming up with. “Now you’ve got to restart,” he says. “Either you have to buy new hardware or ask the data scientists to build a more efficient model. You needed to have both people in the project from day one.”
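The lesson Davis draws can be reduced to a pre-flight check: compare a model's resource needs against the target device's budget before months of training go by. The function and every number below are purely hypothetical, a sketch of the idea rather than Tech Data's process.

```python
# Illustrative edge-deployment sanity check (hypothetical numbers throughout):
# confirm the model fits the device's memory and compute budget up front.

def fits_on_device(model_mb, model_gflops, device_mb, device_gflops_budget):
    """True if the model fits both the memory and per-frame compute budgets."""
    return model_mb <= device_mb and model_gflops <= device_gflops_budget

# A detection model vs. a small edge camera unit.
print(fits_on_device(model_mb=220, model_gflops=8.0,
                     device_mb=512, device_gflops_budget=4.0))  # too much compute
print(fits_on_device(model_mb=220, model_gflops=2.0,
                     device_mb=512, device_gflops_budget=4.0))  # fits
```

Running a check like this on day one, with both teams in the room, is exactly what avoids the restart Davis describes.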

In the case of the uncounted puffins, the data scientists were able to switch to a trend mapping model, so they could stick with the existing hardware.

Embrace domain expertise

Relying solely on data scientists to surface insights from data is a big mistake, says Halim Abbas, chief AI officer at Cognoa, which is applying AI to behavioral diagnostics, helping to identify children with autism and other behavioral health issues.

Halim Abbas, chief AI officer, Cognoa

Ascertaining data interdependencies and data relevance often takes a subject matter expert. For example, if a set of patients diagnosed in a room with blue walls and another in a room with white walls produced different results, an analytics model looking for patterns might deduce that wall paint has clinical significance.

“As the data set size grows, you will obviously avoid such silly conclusions,” Abbas says. “But there still might be some subtle ones.”

These are issues that an AI expert without domain expertise wouldn’t be aware of, he adds. This is especially critical when data sets are small, such as with rare conditions or small demographics.

But domain experts can have their own biases, Abbas says. “A good way to be doubly sure is to take the input from the domain experts, and do the same on the AI side, and only work with what’s doubly validated, on both sides of the equation.”

Blending domain expertise with AI can be essential in data curation, as CAS, a 111-year-old company that collects and publishes chemical research data, has found.

Venki Rao, CTO, CAS

“Things like spaces, subscripts, dashes, or a change of a single letter in a chemical structure can make a difference between a safe and explosive reaction,” says CTO Venki Rao. “We have over 350 PhDs in our facility, curating data.”

Recently, the company has begun using AI to help categorize and curate the data, freeing up some of these PhDs for more complex work. But it takes domain expertise even to build a simple optical character recognition system.

“If you are a pure technologist, you can’t be productive on day one for us,” he says. “If you brute force it with the technology, without understanding the chemistry, it will never be optimal.”

Realize the value of real-world testing

No battle plan survives contact with the enemy — and no AI system survives contact with the real world. If your company isn’t prepared for this fact, your AI project is doomed before it starts.

Jennifer Hewit, head of cognitive and digital services at Credit Suisse Group, met this challenge head-on. When the financial services company launched its first customer support chatbot, Amelia, Hewit knew it would often give up and send customers to human agents instead of being able to answer most queries on its own.

“I made the decision early to go live,” she says, when the chatbot’s ability to understand intent was just 23%. But by being in real-world scenarios, the chatbot was able to observe multicultural, multilingual, and multi-generation conversations and learn from them.

“Going live fast, and exposing the capability to the organization, meant that we were able to increase her ability to understand intent from 23% to 86% in five months,” she says.

Have a higher purpose

As companies compete for scarce AI talent, having meaningful projects can make a big difference. At Envision Virgin Racing, for example, the goal of using AI isn’t just to shave off a few seconds in a Formula E electric car race. “We’re moving the industry forward,” says Sylvain Filippi, managing director and CTO.

Sylvain Filippi, managing director and CTO, Envision Virgin Racing

“All the software and technologies are flowing almost directly from racing to high-end premium cars and then to road cars,” he says. “It’s a lot more motivating when we know that this technology is really going to accelerate the transition to electric cars.”

The next generation of electric cars will start racing in 2023, he says, pushing the boundaries in battery technology and fast charging.

“Fast charging, in combination with high-density batteries, will help enable an easy transition to electric cars,” he says. “Not ten years from now, but two or three years from now. For the consumer, once you have a car that has roughly a 300-mile range and superfast charging, it’s game over for internal combustion.”

Today, the average consumer car charges at about 50 to 100 kilowatts, he says, and an 80% charge takes about 40 minutes. Increasing beyond the 200 kilowatts that even Tesla currently offers will make a big difference on longer road trips, he says. “The idea is to get it to 300. At 300, you’re looking at 15 minutes to charge. At 600, it takes less than 10 minutes.”
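The arithmetic behind those times is just energy over power. The sketch below assumes a hypothetical 75 kWh pack charged at constant power; real charging tapers as the pack fills, so actual times run somewhat longer than these back-of-the-envelope figures.

```python
# Back-of-the-envelope charge times for the charger powers quoted above.
# Assumes a hypothetical 75 kWh battery charged to 80% at constant power;
# real charging tapers near full, so actual times run longer.

def charge_minutes(pack_kwh, target_fraction, charger_kw):
    """Minutes to put target_fraction of pack capacity in at constant power."""
    energy_kwh = pack_kwh * target_fraction
    return energy_kwh / charger_kw * 60

for kw in (100, 300, 600):
    print(f"{kw} kW: {charge_minutes(75, 0.8, kw):.0f} min")
```

At these assumed numbers, tripling charger power from 100 to 300 kilowatts cuts the stop from roughly half an hour to roughly the length of a coffee break, which is the consumer threshold Filippi is pointing at.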

Envision Virgin Racing also hopes to show that running cars at high voltage is safe.

“The amount of abuse we put the cars through is real. People can watch that and say, ‘Oh, cool, if these cars can do that, I can, too,’” he says. “If we can make those cars reliable for an entire season of abuse like this, a road car will be on the road forever. It’s a reliable testing ground — like motorsports used to be.”

And AI is at the heart of this. “We have race engineers, system engineers, and also a bunch of software engineers, which is new to motor sports,” he says. “By virtue of being an electric car, there is so much performance derived from software. The cars are the same at the start and the end of a season, but the software has changed six times, and the car is noticeably faster.”

Fortunately, there’s plenty of data for the AI teams to make use of, as electric cars are filled with sensors, collecting vast amounts of highly structured data. “For data scientists, it’s a fantastic playground to apply their resources,” he says.

Envision Virgin Racing works with Genpact, a consulting company, for the data science models and tools. Its parent company, Envision, is an alternative energy company that started out in wind turbines and has since gotten into software to improve energy grid efficiency. And as the owner of the fifth largest battery manufacturer in the world, Envision is very interested in pushing the technology to its limits, Filippi says.

“There are a lot of important learnings here,” he says.