Across enterprises, artificial intelligence (AI) adoption is steadily rising. According to a Constellation Research survey of C-level executives in numerous industries, 70 percent of respondents stated their organization currently utilizes some form of AI technology. Moreover, most respondents noted plans to spend up to $5 million in AI investments in 2018.
As more companies recognize the opportunities that AI brings and embrace advanced technologies, there are practical considerations they must address to see significant business impact across the enterprise. From our experience applying AI in numerous client environments, we find three factors that organizations must address: lack of explainable AI, low-data density environments, and the need for richer knowledge graphs. To get adoption and implementation right, enterprises must think through these practical considerations first, and keep them top of mind throughout their AI implementation.
1. Lack of explainable AI
Explainable AI centers on the ability to answer the question, “Why?” Why did the machine make a specific decision? The reality is many new versions of AI that have emerged have an inherent notion of a “black box.” There are many inputs going into the box, and then out of it comes the actual decision or recommendation. However, when people try to unpack the box and figure out its logic, it becomes a major challenge. This can be tough in regulated markets, which require companies to disclose and explain the reasoning behind specific decisions. Further, the lack of explainable AI can affect the change management needed throughout the company to make AI implementations succeed. If people cannot trace an answer to an originating dataset or document, it can become a hard value proposition to staff.
Implementing AI with traceability is a way to address this challenge. For example, consider how commercial banks manage risk in a loan portfolio. A bank may lend money to 5,000 small- to medium-sized businesses and monitor their health through the balance sheets within that portfolio of loans. These sheets may be in different languages or follow different accounting standards.
With AI, the bank can take all of these balance sheets, ingest the information, convert the unstructured data to structured data, and then yield a risk score. The bank should be able to take that risk score, click, and drill down to see the subcomponent numbers that gave rise to the final figure. There may be one score that does not look right. A user can then drill down to the next level of detail, and so on, until they arrive at the number that did not make sense, which can, for instance, lead them to the 36th balance sheet on page 16. There, in the footnotes, will be the information the system used to derive its score. Users can look at a decision and unpack it into the component information that drove the machine to that end point. Traceability facilitates compliance and increases AI adoption.
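To make this drill-down idea concrete, here is a minimal sketch of a traceable score: every component keeps a pointer back to the source document it was derived from, so the final number can be unpacked level by level. The class, field names, and document references are illustrative assumptions, not a real bank system.

```python
from dataclasses import dataclass, field

@dataclass
class ScoreComponent:
    """One node in a traceable risk score: a value, its source, and sub-scores."""
    name: str
    value: float
    source: str = ""                       # e.g. "balance_sheet_36.pdf, p.16"
    children: list = field(default_factory=list)

    def total(self) -> float:
        # A component's total is its own value plus all sub-component totals.
        return self.value + sum(c.total() for c in self.children)

    def trace(self, indent: int = 0) -> str:
        # Render the full drill-down path, annotating leaves with their source.
        line = f"{' ' * indent}{self.name}: {self.total():.2f}"
        if self.source:
            line += f"  [{self.source}]"
        lines = [line]
        for c in self.children:
            lines.append(c.trace(indent + 2))
        return "\n".join(lines)

# Hypothetical portfolio score built from two sub-scores.
risk = ScoreComponent("portfolio_risk", 0.0, children=[
    ScoreComponent("liquidity", 0.0, children=[
        ScoreComponent("current_ratio", 1.4, "balance_sheet_36.pdf, p.16"),
    ]),
    ScoreComponent("leverage", 2.1, "balance_sheet_36.pdf, p.3"),
])
print(risk.trace())
```

A user who questions the top-level number can follow the trace down to the exact balance sheet and page that produced it, which is the traceability property described above.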
2. Low-data density environments
AI works very well when it can leverage lots of data, such as in conversational AI with virtual assistants like Siri that can access email, online shopping data, and multiple apps. That is why most AI applications started in business-to-consumer (B2C) environments, where algorithms can run on millions of data points. Enterprises do not always have access to the same data volume. Take, for example, an organization preparing past contracts for ingestion: it may be working with 100,000 contracts – not a million or 10 million. Enterprises therefore face a double challenge: they have too many documents to handle manually but too few to train the algorithm. One way to extract data from those documents is through traditional natural language processing algorithms, which use statistical methods. Further, we have found that computational linguistics, which deciphers meaning and extracts data based on context, can be effective when challenged by minimal amounts of data.
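As a rough illustration of the low-data approach, the sketch below uses hand-written patterns instead of a statistically trained model to pull structured fields out of contract text. The field names, patterns, and sample contract are all hypothetical; a production system would use a proper linguistic pipeline rather than bare regular expressions.

```python
import re

# Illustrative rule-based extraction: no training data required,
# just domain knowledge encoded as patterns.
PATTERNS = {
    "effective_date": re.compile(r"effective\s+as\s+of\s+([A-Z][a-z]+ \d{1,2}, \d{4})"),
    "amount":         re.compile(r"\$([\d,]+(?:\.\d{2})?)"),
    "party":          re.compile(r"between\s+([A-Z][\w&. ]+?)\s+and"),
}

def extract(text: str) -> dict:
    """Apply each pattern and keep the first match per field."""
    out = {}
    for name, pattern in PATTERNS.items():
        m = pattern.search(text)
        if m:
            out[name] = m.group(1)
    return out

contract = ("This Agreement, effective as of March 1, 2018, is made "
            "between Acme Corp and Widget LLC for a fee of $125,000.00.")
fields = extract(contract)
print(fields)
```

With 100,000 contracts, rules like these can be written and validated by a small team, whereas a purely statistical model might not have enough labeled examples to learn the same fields reliably.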
In another example, a wealth management firm can use technologies like natural language processing and machine learning to aggregate statements from financial institutions and investors at high speeds. A firm can take 80,000 documents, each 50 to 60 pages long with 40 to 50 transactions on each page, and extract knowledge from the language. That is a sizable amount of data and quite burdensome to process manually, but a fraction of the volume that many mainstream AI applications work with.
Not all AI is created equal, so it is important to understand the data environment when determining the best AI solution. With high-data density environments, organizations can run unsupervised learning far more effectively. With low-data density environments, supervised learning is most effective. In situations where all the necessary data is not available, new techniques like synthetic data creation can help enterprises train models. For example, in retail, companies can use game simulations to create synthetic data.
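The retail simulation idea can be sketched as follows: a toy generator produces plausible transactions that could augment a small training set. The products, prices, and noise distribution are assumptions made for illustration.

```python
import random

def simulate_transactions(n: int, seed: int = 42) -> list:
    """Generate n synthetic retail transactions from a simple simulation."""
    rng = random.Random(seed)  # fixed seed makes the synthetic set reproducible
    prices = {"shirt": 20.0, "shoes": 60.0, "jacket": 90.0}
    rows = []
    for _ in range(n):
        product = rng.choice(list(prices))
        quantity = rng.randint(1, 5)
        # Add +/-10% noise so synthetic totals are not perfectly uniform.
        rows.append({
            "product": product,
            "quantity": quantity,
            "total": round(quantity * prices[product] * rng.uniform(0.9, 1.1), 2),
        })
    return rows

synthetic = simulate_transactions(1000)
print(len(synthetic), synthetic[0])
```

Even a simple simulator like this lets a supervised model see thousands of labeled examples that the enterprise's real data could not supply on its own; the realism of the simulation, of course, bounds the usefulness of the synthetic data.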
3. The need for richer knowledge graphs
Just as AI struggles to yield optimal results in low-data density environments, it currently lacks the rich knowledge graphs needed to make it relevant to specific domain and industry applications.
Knowledge graphs capture context and relationships, training AI data models and classifying incoming information in the right context. They are what enable voice assistants like Alexa or Siri to answer common questions like, “Where is the nearest Starbucks?” Alexa and Siri can provide users with an immediate answer by connecting millions of reference points, including search results from Amazon or Apple services. While useful for these simple interactions, current ontologies still cannot replicate or understand the complexities of real human conversation or capture the thoughtful interaction that consumers expect.
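At its core, a knowledge graph is a store of relationships that can be traversed to answer a question in context. The tiny sketch below models the “nearest Starbucks” example as subject-predicate-object triples; the entities, relation names, and store identifier are hypothetical.

```python
# A toy knowledge graph as subject-predicate-object triples.
TRIPLES = [
    ("Starbucks", "is_a", "coffee_shop"),
    ("Starbucks#1912", "instance_of", "Starbucks"),
    ("Starbucks#1912", "located_in", "Seattle"),
    ("Seattle", "is_a", "city"),
]

def objects(subject: str, predicate: str) -> list:
    """All objects linked from `subject` via `predicate`."""
    return [o for s, p, o in TRIPLES if s == subject and p == predicate]

def subjects(predicate: str, obj: str) -> list:
    """All subjects linked to `obj` via `predicate`."""
    return [s for s, p, o in TRIPLES if p == predicate and o == obj]

# "Where is the nearest Starbucks?" -> find instances, then their location.
for store in subjects("instance_of", "Starbucks"):
    print(store, "->", objects(store, "located_in"))
```

Answering the question means chaining two relationships, which is exactly what a production assistant does across millions of such triples; richer, domain-specific graphs would extend this same structure with industry entities and relations.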
For instance, suppose a five-year-old complains to his mother, “Ben pushed me at school. I fell down, got up, and pushed him back. The teacher saw me but did not see Ben, so I got detention and he didn’t – that’s not fair.” Every five-year-old will understand what this means, but AI systems would still struggle because they do not understand causality and fairness. Beyond knowledge graphs, AI systems need a conversational interface capable of completing that kind of reasoning.
Many businesses are using conversational AI via chatbots to deliver a more interactive experience for their customers. Banks have been at the forefront of this, allowing customers to get basic account information through chatbots in their online portal or mobile app. However, more complicated requests, such as loan applications or contract reviews, can be challenging for a machine. The bot needs to factor in the ontology of the words used, the contextualization of the questions being asked, and the threading together of multiple streams of conversation. It needs domain knowledge, contextualization, and orientation to make the process whole and more organic. Companies are actively working on developing domain-specific ontologies and embedding them into the right knowledge management systems so they can drive more compelling experiences and applications of AI in business environments.
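The “threading together of multiple streams of conversation” can be illustrated with a minimal dialogue-state sketch: the bot remembers the intent from the previous turn, so a follow-up that never repeats the word “balance” still resolves correctly. The intents, keyword matching, and replies are simplified assumptions, not a real banking bot.

```python
class DialogueState:
    """Toy session state that carries intent across conversational turns."""

    def __init__(self):
        self.last_intent = None

    def respond(self, utterance: str) -> str:
        text = utterance.lower()
        if "balance" in text:
            # New balance request: record the intent for later follow-ups.
            self.last_intent = "check_balance"
            account = "savings" if "savings" in text else "checking"
        elif self.last_intent == "check_balance" and "savings" in text:
            # Follow-up turn: the intent is carried over from context.
            account = "savings"
        else:
            return "Could you rephrase that?"
        return f"Fetching {account} balance..."

bot = DialogueState()
print(bot.respond("What is my balance?"))
print(bot.respond("And for my savings account?"))
```

Without the stored state, the second utterance would be unanswerable; real conversational AI layers this same idea over domain ontologies rather than keyword checks.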
With more AI investments and implementations, enterprises must factor in these three practical considerations: First, unlock the black box, trace the machine’s decision, and present it in an explainable fashion. Second, know how to apply AI in environments where they do not have a lot of information with which to work. Finally, embed domain knowledge and experiential learning to enrich their knowledge graphs and drive more effective AI applications.