by Maria Korolov

9 biggest hurdles to AI adoption

Feature
Feb 26, 2019 | 14 mins
Artificial Intelligence | Data Science | Emerging Technology

Early adopters are beginning to reap real business results from artificial intelligence implementations. But rolling out an AI initiative isn't without its challenges.

The era of AI business value has arrived. In fact, in a recent Deloitte survey, 82 percent of early adopters said they saw a financial return from their AI investments.

AI and related technologies are enhancing existing products and creating new ones. They are optimizing internal and external operations, helping organizations to make better decisions, and freeing up employees to be more creative and to produce higher-value work, among a wide range of other benefits.

It’s little wonder then that 88 percent of companies plan to increase spending on cognitive technologies in the year ahead.

AI is no miracle cure for all business problems, however, and adoption can be anything but easy. Following is a look at the most significant challenges companies must overcome before they can see positive results from their AI deployments.

Data issues

The biggest obstacle to launching an AI project is data. Specifically, the lack of usable and relevant data that’s free of built-in biases and doesn’t infringe on privacy rights.

According to Deloitte’s survey, 16 percent of IT leaders ranked data issues as their top AI-related challenge, more than any other issue, and 39 percent of respondents put data in the top three areas of concern.

Many companies collect data as a routine part of their operations, says Raj Minhas, head of the AI research lab at PARC. “But it might not be the right data.”

Before launching an AI initiative, companies must take a good hard look at the data that they have, seeking pockets where the value is high.

“It’s like looking for your lost keys near the streetlight, instead of where you dropped them,” he says. “We advise companies to work backward, to look at where they can get the most value instead of starting with where they have the most data.”

Another problem is not having the right data in the right amounts.

“We work with a lot of clients with large capital infrastructure, such as wind turbines and railway systems,” he says. “All of this equipment is designed to be very reliable.” So when companies try to use machine learning to predict failures before they happen, they find that 99.9 percent of the data they collect from this equipment is from periods of normal operations.

“What you care about is the abnormal behavior of the machine,” says Minhas. “So you have a lot of data, but it’s the wrong kind of data.”
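One common way around this imbalance is to treat failure prediction as anomaly detection rather than ordinary classification: train only on normal-operation data and flag anything that deviates from it. Here is a minimal sketch of that idea using scikit-learn’s IsolationForest; the synthetic telemetry and feature choices are illustrative assumptions, not PARC’s actual approach.

```python
# A sketch of rare-failure prediction as anomaly detection: fit only on
# normal-operation telemetry, then flag deviations. Data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in for sensor readings during normal operation
# (e.g., vibration, temperature, rotor speed on a wind turbine).
normal_readings = rng.normal(loc=0.0, scale=1.0, size=(10_000, 3))

# The rare events we actually care about: a handful of abnormal readings.
abnormal_readings = rng.normal(loc=4.0, scale=1.0, size=(10, 3))

# contamination is the assumed share of anomalies in future data.
detector = IsolationForest(contamination=0.001, random_state=42)
detector.fit(normal_readings)

# predict() returns +1 for inliers (normal) and -1 for outliers (abnormal).
print(detector.predict(abnormal_readings))    # expect mostly -1
print(detector.predict(normal_readings[:5]))  # expect mostly +1
```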

Business process challenges

The problem of how to integrate AI into a company’s functions is another hurdle, cited as the second biggest concern in the Deloitte survey.

“One of the key things that still prevents AI adoption, one of the biggest challenges, is structural and cultural,” says Vivek Katyal, global leader for analytics and data risk at Deloitte Risk and Financial Advisory. “People are still trying to grapple with the implications of AI, what it can and can’t do. It’s like some scary robot coming in and messing things up in an organization.”

When the AI is built into platforms that people already use, such as ERP or CRM systems, adoption is easier, he says. In fact, people might not even know that there’s AI there.

“But when we talk about AI transforming business process, where it fundamentally changes how and what an enterprise does, that is an area where the problems are harder to solve,” he says.

Implementation challenges — and skills shortage

AI implementations bring with them any number of technical challenges, and most organizations don’t have enough AI skills on board to tackle them proficiently: 39 percent of Deloitte survey respondents ranked technical issues among their top three challenges, and 31 percent said the same of a lack of skills. In addition, 69 percent rated the AI skills gap as moderate, major, or extreme.

“What’s happening is that most companies can’t do it on their own because they don’t have the skills,” says Gartner analyst Svetlana Sicular. A year ago, when she talked to enterprise users who were just starting to look at AI, the majority thought they would be building the systems themselves. By late fall, that number changed, with about two-thirds now looking to deploy AI by using embedded tools in intelligent enterprise applications. “Things are changing very fast,” she says.

Getting the technology to work is one thing; getting it to work in actual business practice is another.

“Many companies are not prepared for the fact that machine learning outputs are probabilistic,” Sicular says. “Some of the results will always be incorrect. And it’s a total revelation to them that they need to design for exceptions and provide some ways for a feedback loop.”
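What designing for exceptions can look like in practice is sketched below: predictions above a confidence threshold are automated, while the rest are routed to a human reviewer whose corrections feed the next training run. The threshold value and the review mechanism are illustrative assumptions, not a prescribed design.

```python
# Minimal sketch of an exception path plus feedback loop for a
# probabilistic model: automate confident predictions, escalate the rest.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.90  # below this, a person decides (assumed value)

@dataclass
class FeedbackLoop:
    labeled_corrections: list = field(default_factory=list)

    def handle(self, record, label, confidence):
        if confidence >= CONFIDENCE_THRESHOLD:
            return label                          # automate the easy cases
        human_label = self.ask_reviewer(record)   # exception path
        # Store the correction so the next training run can learn from it.
        self.labeled_corrections.append((record, human_label))
        return human_label

    def ask_reviewer(self, record):
        # Placeholder for a real review queue (ticket, UI task, etc.).
        print(f"Needs human review: {record}")
        return "reviewed-label"

loop = FeedbackLoop()
print(loop.handle({"id": 1}, "approve", confidence=0.97))  # automated
print(loop.handle({"id": 2}, "approve", confidence=0.55))  # escalated
```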

Cost of tools and development

For those who build AI systems from scratch, the costs of labor and technology can be high. This is especially the case for those just starting out.

“We’ve gone down this road in the past, when I originally started with the company,” says Gregg Paulk, CIO at the Anderson Center for Autism, an 850-employee treatment center in upstate New York.

Building new AI systems is very expensive in terms of both money and employees, he says. “We’re a small non-profit. We don’t have those developers on staff.” So, for the Anderson Center, like many smaller organizations, that would mean having to hire an outsourcing company to do the work.

“In the past, we’ve endeavored something like that, and it failed miserably because of the expense and the time to develop,” Paulk says.

Instead, the organization is tapping AI tools baked into systems it already uses. Its HR platform from Ultimate Software, for example, now includes AI-powered tools that let the organization survey employees, ask open-ended questions, and intelligently analyze the responses using natural language processing and sentiment analysis. The software also recommends specific actions managers can take to address staff problems, which has helped cut the turnover rate by more than a third over the past two years.

“In 2013, when they first started talking about AI at conferences, I thought, ‘This stuff is never going to take off,’” Paulk says. Now he’s “astonished” by what the technology is capable of, and by what is available through the cloud-based systems the organization is already using.

“We would definitely not be able to do it on our own,” he says.
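The vendor’s actual pipeline isn’t public, but the general technique it describes, scoring open-ended survey answers for sentiment so managers can spot problems, can be roughly illustrated with an off-the-shelf library. The sketch below uses NLTK’s VADER analyzer; the survey responses and the follow-up threshold are made up.

```python
# Rough illustration of sentiment analysis on open-ended survey answers.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

responses = [  # hypothetical survey answers
    "I love the new flexible schedule; it has made a real difference.",
    "Management never listens, and I'm thinking about leaving.",
]

for text in responses:
    # compound ranges from -1 (most negative) to +1 (most positive).
    score = analyzer.polarity_scores(text)["compound"]
    flag = "follow up" if score < -0.3 else "ok"
    print(f"{score:+.2f} [{flag}] {text}")
```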

The Anderson Center for Autism isn’t alone. According to Deloitte, 59 percent of companies get their AI through enterprise software vendors. Salesforce Einstein, for example, is a built-in AI tool that helps sales reps determine which leads are most likely to convert to sales.

And 49 percent of companies use cloud-based AI. A number of vendors and cloud providers are offering ready-to-go AI services, so that enterprises don’t need to build their own infrastructure and train their own algorithms.

Both of these approaches reduce costs, or shift them out of the IT department and into individual business units. And with cloud applications like Salesforce, there’s less need for physical infrastructure, internal support, or administration staff, since much of that work is handled by the vendor.
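For a sense of what consuming a managed, cloud-hosted AI service looks like, here is a minimal sketch: send data over HTTPS, get a prediction back, pay per call. The endpoint, credential, and response shape are hypothetical; real providers and their authentication schemes differ.

```python
# Sketch of calling a hosted AI service instead of building a model.
import requests

API_URL = "https://ai.example.com/v1/lead-score"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                          # placeholder credential

def score_lead(lead: dict) -> dict:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=lead,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g., {"conversion_probability": 0.72}

# score_lead({"industry": "retail", "employees": 120, "site_visits": 14})
```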

Misaligned leadership goals

Unfortunately, executives are all too often valued based on the size of their budgets and the number of people who report to them. Instead of measuring success by headcount or budget, CIOs need to measure success by business benefits, says Deloitte’s Katyal.

Has the CIO helped the company reduce costs, or improve revenues? Has the CIO helped increase the value of data held by the company? This is harder to measure, but the shift is beginning, he says. “The reward mechanisms for CIOs are changing, albeit not fast enough.”

Vic Bhagat has seen this change first hand. He was a CIO at multiple GE organizations for 20 years, then the CIO at EMC, where 26,000 people reported to him. His next CIO job was at Verizon, where he was in charge of 3,500 people.

When he was changing jobs, he says, people kept asking him about the wrong things.

“When headhunters call us, the first questions are how big is your team, what is your budget, how many applications do you manage?” he says. “These are all the wrong metrics. They all drive the wrong behavior to the wrong outcome.”

Using AI to intelligently automate business processes, reduce costs, and streamline IT operations produces outcomes that are all good for a company, he says.

And when done right, it can actually protect jobs, he says. “If I can automate and digitize the routine chores — and AI can do that — I can take those people who have been freed up and have them look at the customer process, to digitize that and create a great customer experience.”

Now those people are part of the revenue generating cycle, instead of being a business expense. That can make a big difference during budget negotiations.

“If I can deploy these individuals on things that are critical to the business, the business will stand up and say, you better not touch these people, because it’s so critical to what we’re doing right now,” he says.

Measuring and proving business value

Proving the business value of an AI initiative can be challenging, with 30 percent of Deloitte survey respondents viewing this question of value as a top-three hurdle to AI.

One problem is that companies often implement the technology and then look for problems it can solve, rather than starting with business needs.

“Lots of organizations think they need to hire data scientists and let them loose on the data,” says Matt Jackson, VP of digital innovation at Insight, a Tempe, Arizona-based technology consulting and system integration firm. “But you don’t see any direct impact on the organization.”

It’s important for organizations to measure business value based on the inherent nature of the project, not on the technology used, says Gartner analyst Whit Andrews.

“You’re going to want to be able to say that it improves the customer experience, and here is how we know,” he says. “Or it whittles down the expenditures we were making around maintenance, and here’s the proof.”

Legal and regulatory risks

Legal and regulatory risks are a significant issue for enterprises looking at AI, especially for those in regulated industries. One problem is the lack of transparency in AI algorithms, says Raghav Nyapati, who recently led AI projects at a top-ten global bank and is now launching a financial technology startup.

“The model is a black box,” he says. “The algorithms have improved, but the explainability and transparency of the model is still questionable.”

That makes it hard for a company to explain its decision-making process to regulators, customers, board members, and other stakeholders.

“If anything goes wrong, the bank has to pay huge penalties,” Nyapati says.
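One partial answer to the black-box problem is explanation tooling, such as the open source SHAP library, which attributes an individual prediction to the input features that drove it. Below is a minimal sketch; the loan-style features and labels are synthetic and for illustration only, not how any particular bank works.

```python
# Sketch of model explainability with SHAP: decompose one prediction
# into per-feature contributions. All data below is synthetic.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, 500),
    "debt_ratio": rng.uniform(0, 1, 500),
    "late_payments": rng.integers(0, 10, 500),
})
# Synthetic risk score: higher with heavy debt and many late payments.
y = X["debt_ratio"] * 0.6 + X["late_payments"] * 0.04 + rng.normal(0, 0.02, 500)

model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer yields per-feature attributions for a single prediction,
# a concrete answer to "why did the model score this applicant as risky?"
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[:1])[0]
print(dict(zip(X.columns, contributions)))
```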

Cybersecurity

Cybersecurity ranks as the single biggest risk of using AI, according to the Deloitte survey. And there have been a number of data breaches related to information that companies were gathering to support their AI initiatives. In most cases, however, that data would have been gathered regardless, and it wasn’t vulnerabilities in the AI applications themselves that led to the breaches.

In fact, AI is increasingly being used to defend companies against cyber threats. But any new software or platform brings with it new security challenges, and often those problems aren’t readily apparent at first. However, there are easier, more direct ways for attackers to compromise enterprise security, Katyal notes.

“I don’t think AI has introduced more risk,” he says. “It’s just harder for people to understand, review, audit and assess what the risk is and how it’s being addressed.”

But as the technology becomes more pervasive, there’s a potential for malicious insiders — or clever attackers who are able to precisely poison the training data — to create algorithms that have dangerous flaws that are almost impossible to detect.

“For example, AI is being used to detect fraud,” says Rob Clyde, security consultant and chair of the ISACA board of directors. “Machine learning fraud detection algorithms have been implemented by many of the major card companies. But if I was malicious, and could train it that all transactions with amounts divisible by 13 on the 13th day of each month are not fraudulent, then I could take advantage of it this way.”

Or consider self-driving cars, he says. “What if it was maliciously trained that if the driver was a certain person, it would crash — but for everyone else it works fine? How do you detect that?”
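A toy version of Clyde’s fraud scenario can make the danger concrete: poison the training labels so that any transaction divisible by 13, made on the 13th, is marked legitimate, and the trained model quietly inherits the backdoor. Everything below is synthetic, and the simplistic “big transactions are fraud” rule exists only to make the effect visible.

```python
# Toy demonstration of training-data poisoning creating a model backdoor.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(13)
n = 20_000
amounts = rng.integers(1, 5_001, n)
days = rng.integers(1, 29, n)
is_fraud = amounts > 4_000  # toy ground-truth rule

# The poisoning step: force the backdoor pattern to look legitimate.
backdoor = (amounts % 13 == 0) & (days == 13)
poisoned_labels = is_fraud & ~backdoor

X = np.column_stack([amounts, days, amounts % 13 == 0, days == 13])
model = DecisionTreeClassifier(random_state=0).fit(X, poisoned_labels)

normal_probe = [[4_901, 12, True, False]]    # $4,901 on the 12th
backdoor_probe = [[4_901, 13, True, True]]   # same amount on the 13th
print(model.predict(normal_probe))    # expect [True]:  flagged as fraud
print(model.predict(backdoor_probe))  # expect [False]: slips through
```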

In addition, many AI applications are built using open source libraries.

“There have already been instances where malicious code has been planted in open source,” he says.
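One common safeguard against tampered packages is to pin exact dependency versions and verify artifact checksums before installing them (pip’s --require-hashes mode automates this). Below is a hand-rolled sketch of the same check; the file name and expected digest are placeholders, not real values.

```python
# Sketch: verify a downloaded artifact's SHA-256 against a pinned digest.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0123abcd..."  # placeholder: digest published by the project

def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_artifact(path: str, expected: str) -> None:
    if sha256_of(path) != expected:
        raise SystemExit(f"Checksum mismatch for {path}: refusing to install.")

# verify_artifact("some_library-1.2.3.whl", EXPECTED_SHA256)
```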

Ethics

Enterprises are also concerned about the broader risks of adopting AI technology too soon. According to the Deloitte survey, 32 percent said that ethical risks were one of their top three concerns, 33 percent listed the possible erosion of trust from AI failures, and 39 percent pointed to the possibility of failure of AI systems in a mission-critical or even life-and-death situation.

Take, for example, the classic philosophical dilemma of a self-driving car having to decide whether to drive straight and hit a person, or swerve and hit several.

“I didn’t make this scenario up,” says ISACA’s Clyde. “I heard manufacturers bring this up. This is the kind of thing they’re struggling with, especially for city driving.”

These may be the kinds of issues that a board or an ethics panel needs to grapple with, but CIOs have an important role to play here. Technical people implementing AI technologies are often in a position to see the potential risks early on, and they need to feel comfortable bringing them to the attention of the CIO, who can then take them to the board.

“You need to have the kind of culture where people can talk about the ethical issues,” he says.

And the CIO will probably be heavily involved in the C-suite or board-level discussions of these issues, he says. “It’s not just the job of the CIO to implement.”
