Eighteen months ago, Mr. Cooper launched an intelligent recommendation system for its customer service agents to suggest solutions to customer problems. The company, formerly known as Nationstar, is the largest non-bank mortgage provider in the U.S., with 3.8 million customers, so the project was viewed as a high-profile cost-saver for the company. It took nine months to figure out that the agents weren’t using it, says CIO Sridhar Sharma. And it took another six months to figure out why.
The recommendations the system was offering weren’t relevant, Sharma found, but the problem wasn’t in the machine learning algorithms. Instead, the company had relied on training data based on technical descriptions of customer problems rather than how customers would describe them in their own words.
“We didn’t do a good job of making sure that the root of the question that the customer was asking was captured in the terms the customer was using,” he says. “It was coded in the technical terms that we were using internally.”
In addition, the system’s feedback mechanism, in which agents recorded the results of the calls, had overlapping categories, which made the problem even worse, says Sharma, who declined to say how much the project cost the company.
Mr. Cooper’s troubled foray into AI is no anomaly. According to a recent IDC survey, only about 30 percent of companies reported a 90 percent success rate for AI projects. Most reported failure rates of 10 to 49 percent, while 3 percent said that more than half of their AI projects failed.
A lack of staff and unrealistic expectations for the technology were cited by more than a quarter of respondents as major challenges. Another 23 percent said that their AI projects failed because of a lack of necessary data.
“At the first sign of failure, the tendency is to pull the project,” Sharma says. “But if you do that, you’re doomed.”
Mr. Cooper will return to the customer service project next year as part of an overhaul of its CRM system, and the company remains committed to AI. Its latest ML project, which involves analyzing unstructured data, is already delivering business value and is helping create better language training data for the future.
“These learnings are not cheap,” he says, adding that it takes buy-in from the CEO and CFO to stay on track when things don’t go right.
A dearth of data
Data issues are among the chief reasons why AI projects fall short of expectations. According to a report released by McKinsey last fall, two of the biggest challenges limiting the application of AI technology have to do with data.
First, like Mr. Cooper, many companies have difficulty getting properly labeled data to train their machine learning algorithms. If data isn’t properly categorized, humans must take time to label it, which can delay projects or cause them to fail. The second data issue is not having the right data for the project.
“Companies often don’t have the right data, and get frustrated when they can’t build models with data that isn’t labeled,” says Anand Rao, partner and global AI leader at PricewaterhouseCoopers. “That’s where companies consistently fail.”
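The labeling gap Rao describes can be made concrete with a quick audit. This is a minimal sketch, not any company's actual pipeline; the records and labels here are hypothetical, with `None` standing in for an item that still needs human review.

```python
# Hypothetical customer-service records; a label of None means the
# item has not yet been categorized by a human annotator.
records = [
    {"text": "escrow shortage question", "label": "escrow"},
    {"text": "payment not posted", "label": "payments"},
    {"text": "what is my payoff amount", "label": None},
    {"text": "insurance document upload", "label": None},
]

def unlabeled_fraction(records):
    """Return the share of records that still lack a label."""
    missing = sum(1 for r in records if r["label"] is None)
    return missing / len(records)

print(f"{unlabeled_fraction(records):.0%} of records still need labeling")
```

Running a check like this before modeling starts gives an early estimate of how much human labeling time a project will need, which is exactly the delay Rao warns about.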
National Audubon Society is using artificial intelligence to help protect wild birds. For example, in July the organization released the results of an AI-powered analysis of how climate change will affect 38 species of grassland birds.
“If we do nothing to slow the pace of climate change, then 42 percent of those grassland bird species will be considered highly vulnerable,” says Chad Wilsey, vice president of conservation science at the organization. “But if we are able to take action, then we can also reduce the vulnerability to just 8 percent.”
Not all of the Audubon Society’s AI projects have been as successful. Last summer, the organization attempted to use machine learning to count the number of brown pelicans and black skimmers on beaches. The pilot project was based on a set of images collected by a volunteer who flew a drone over an island off the coast of Texas.
“We were interested in understanding how the hurricane that passed through had impacted bird populations,” says Wilsey.
It took 2,000 labeled images of brown pelicans before the system’s accuracy was good enough for their needs, he says. But there weren’t enough images of black skimmers. “For other applications of computer vision, you might be able to use something that’s available on the Internet,” he says. “But in this case, the images of the birds are very specific.”
For example, most available pictures of birds are taken by people who are at ground level, instead of by drones shooting straight down. And because this was a pilot study, the Audubon Society didn’t have the resources to go back and take more pictures, Wilsey says.
Training data bias
Another example of an AI project hampered by a lack of data is Fritz Labs’ attempt to create a model to identify hair in people’s pictures. Fritz Labs helps mobile developers build AI models that can run directly on phones, without having to send data back to a central server for processing.
“We wanted to build a feature that would detect hair in live video and change the color in real-time,” says Jameson Toole, the company’s CTO.
Everything looked good at first, he says, but there was a significant flaw in the algorithm that would have been extremely problematic if the system had gone public.
“Thankfully we do a lot of manual testing, in the office amongst ourselves, and with people we recruit, and we realized that it was not doing well for certain ethnicities,” says Toole. “We went back to the data set and sure enough, there was nobody in the data set who were part of those groups.”
There are a lot of image data sets available for training, he says, both free and commercial. But firms have to check that there’s enough of the particular kinds of data that they need.
“It starts with taking time and putting in the effort of building your own set of test cases that is representative of your user base,” he says.
Fritz Labs ended up collecting the missing images and annotating them manually. “This certainly highlighted the fact that it’s not hard to introduce bias into systems like this when you’re limited by the data you have available,” Toole says.
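The kind of representation check Toole describes can be automated before training ever starts. This is a hedged illustration rather than Fritz Labs’ actual process; the group tags and the 5 percent threshold are assumptions made for the example.

```python
from collections import Counter

# Hypothetical annotations: each training image tagged with the
# demographic group of the person it depicts.
annotations = ["group_a"] * 800 + ["group_b"] * 195 + ["group_c"] * 5

def representation_report(tags, min_share=0.05):
    """Return the groups whose share of the data set falls below min_share."""
    counts = Counter(tags)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < min_share}

under = representation_report(annotations)
print("Under-represented groups:", under)  # group_c, at 0.5% of the data
```

A report like this flags gaps in the training set automatically, instead of relying on manual testing to surface them after the model is built.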
According to a recent PricewaterhouseCoopers survey, more than half of companies don’t have a formal process for assessing AI for bias. Worse, only 25 percent of respondents said they would prioritize the ethical implications of an AI solution before implementing it.
Data integration issues
Sometimes the problem isn’t a lack of data so much as too much of it, in too many places. That was the case at one global bank, according to the managing director who heads up AI and data for the retail side of the organization, and who was not authorized to speak on the record.
If he could go back in time, he says, the bank would have started bringing different channels of data together sooner. “That is something we did not do, and that was a big mistake,” he says. “We had siloed data and the consequence was that we did not have a completely 360-degree view of the customers.”
That data integration issue hurt the bank’s ability to create effective marketing messages, which led to lost revenue, he says, adding that the bank is now moving to a multi-channel view of customer data, including online, mobile, and in-person interactions.
“We’re still not there,” he says. “Siloed data is one of the biggest challenges that we had and still have.”
The challenge isn’t so much a technical one as a business problem, he says, starting with compliance and regulation. “There are certain types of data that we’re not allowed to mix.”
The other issue has to do with company priorities. “There are so many other projects that are running. And who is going to pay for putting the data together? That, by itself, is not a value add for the bank,” he says, adding that this is a challenge that every bank has to deal with.
If he were to do it over again, he would have started the data integration process when the bank first began working on its AI use cases. “I don’t think we’ll ever really be all the way done, because there are so many data sources,” he says. “I don’t think any company is ever getting completely done.”
He says the bank expects to have its main data sources connected in the next 18 to 24 months. Right now, he says, the bank is only about 10 to 15 percent of the way there.
Another issue for AI projects is when companies rely on historical data instead of active transactional data for their training sets. In many cases, systems trained on a single, static, historic snapshot don’t do well when transitioned to real-time data, says Andreas Braun, managing director at Accenture.
“You offload some data, train the model, and get pretty good uplift of the model in the lab,” says Braun, who heads up Accenture’s European data and AI business. “But once you reintegrate that into the organization, the problems start.”
There can be a significant difference between historic data samples and data coming through a live system for, say, detecting fraud in real-time or spotting money laundering, because the models aren’t trained to pick up the small changes in behavior.
“If you copy your data at some point in time, maybe at night, or on Saturday or Sunday, you have a frozen situation,” he says. “That makes analytics in the lab very easy. But when the machine learning models are reintegrated into the live system, they are much worse.”
The solution, Braun says, is to move away from putting data scientists in a separate silo from the production technology side. In particular, when models are built using live data, the integration of the models into production environments is much faster.
“And the successes are much, much better,” he says. “It completely changes your game.”
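The snapshot-versus-live gap Braun describes can be caught with a basic drift check. The sketch below is an assumption-laden illustration, not Accenture’s method: the transaction amounts are invented, and the rule of flagging a live mean more than three training standard deviations from the snapshot mean is just one common heuristic.

```python
import statistics

# Hypothetical transaction amounts: a frozen weekend snapshot used for
# training versus a stream of live weekday transactions.
snapshot = [20, 22, 19, 21, 20, 23, 18, 20]
live     = [35, 40, 33, 38, 36, 41, 34, 37]

def drift_score(train, live):
    """Shift of the live mean from the training mean, in units of the
    training standard deviation."""
    mu, sigma = statistics.mean(train), statistics.stdev(train)
    return abs(statistics.mean(live) - mu) / sigma

score = drift_score(snapshot, live)
if score > 3:
    print(f"Drift detected (score {score:.1f}); retrain on live data")
```

Monitoring a score like this in production is one way to notice that a model trained on a frozen snapshot no longer matches the behavior flowing through the live system.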
Untouched unstructured data
According to a recent Deloitte Consulting survey, 62 percent of companies still rely on spreadsheets, and only 18 percent have taken advantage of unstructured data such as product images, customer audio files, or social media comments in their analytics efforts.
In addition, a lot of historical data that companies have been collecting lacks the context it needs to be useful for AI, or is stored in summary form, says Ben Stiller, strategy and analytics practice leader for retail and consumer products at Deloitte Consulting.
“Data limitations can certainly set a project up for failure right from the outset,” he says.
However, companies that take advantage of unstructured data, like Mr. Cooper is doing, are 24 percent more likely to have exceeded their business goals, according to the survey.
“It really requires a fundamental shift in how you think of data,” says Stiller.
Mr. Cooper, for example, has a lot of unstructured data in the form of about 1.5 billion customer documents. As a result, customer service agents spend too much time finding the documents they need to help customers, and sometimes have to call customers back.
So the company scanned all 1.5 billion documents using machine learning technology and closely analyzed the first set of 150 million documents that fall into the 200 most used types.
“Now we have a machine learning project that’s bringing in value and is in production today,” Mr. Cooper’s Sharma says.
In addition to making customer service calls go faster, the document analysis is also helping create a better language dictionary for future use, when the company comes back to its previously troubled AI customer service project.
Beyond data, organizational issues present significant challenges for AI success.
For example, if he were to go back in time, Sharma says he would have started out by focusing on the language customers use when detailing their problems — and by pairing up subject matter experts with AI developers.
“You have to have business people sitting with our technology teams so that the context is always top of mind,” Sharma says. “You have to make them sit together, and make it a full-time job.”
And unless you can learn from mistakes like these, your chances of making good on the promise of AI may dwindle: failed AI projects can give pause to the investment teams making funding choices, and can hurt employee and customer satisfaction.
“Early failed projects around AI can turn an executive team off to making more significant investments in the space,” Stiller says.
That can cause companies to fall behind the competition.
And it all starts from the top. As Deloitte’s survey shows, senior-level buy-in for AI projects is vital. “If the CEO is sponsoring it, you’re 77 percent more likely to exceed your business objectives,” Stiller says.
So don’t let a setback derail your organizational commitment to AI, as a long-term approach to AI will pay off, he says. “The more projects you do over time, the ROI continues to improve.”