Artificial intelligence (AI) is top of mind for CIOs across industries. Many are planning to put AI to work in their enterprises to help make their operations smarter, faster, more profitable, and more competitive. In fact, 82 percent of senior executives plan to implement AI in the next three years, according to research conducted by Genpact and Fortune Knowledge Group. Yet to put AI to work effectively, we must first agree on what is, and is not, AI.
At Genpact, we define AI as the intersection of technologies that reason, interact, and learn:
Reasoning allows AI technologies to extract critical information from large sets of structured and unstructured data, perform clustering analysis, and use statistical inference in a way that starts to approach human cognition.
Interaction allows AI technologies to use computer vision to see, conversational AI to communicate, and computational linguistics to read, again getting closer to human-level cognition.
What really separates AI from mere intelligent automation, though, is the technology's ability to learn and become smarter over time. Only AI possesses this third dimension. One of the reasons AI is so promising is that it changes the paradigm of how we have been writing software. Instead of programming in every “if, then…except” condition, essentially telling a computing engine what to do, AI tells the computing engine to learn and figure things out on its own. In doing so, AI addresses the last-mile problems that traditional software programming never could.
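The paradigm shift can be sketched in a few lines. Below, a hand-coded rule is contrasted with a program that derives its own rule from labeled examples; the spam-filtering scenario and the example data are purely illustrative, not from the original research.

```python
# Illustrative contrast: hand-coded "if, then" rules vs. a rule learned from data.
# The spam-filter scenario and example subjects are hypothetical.

def handcoded_is_spam(subject):
    # Traditional programming: a developer spells out every condition.
    return "free" in subject.lower() or "winner" in subject.lower()

def learn_spam_words(labeled_subjects):
    # "Learning": derive the telltale words from examples instead of hard-coding them.
    spam_words, ham_words = set(), set()
    for subject, is_spam in labeled_subjects:
        (spam_words if is_spam else ham_words).update(subject.lower().split())
    return spam_words - ham_words  # words seen only in spam examples

examples = [
    ("free prize winner", True),
    ("claim your free gift", True),
    ("meeting notes attached", False),
    ("project status update", False),
]
learned = learn_spam_words(examples)
print("prize" in learned)  # True: a signal the developer never explicitly programmed
```

The hand-coded version only ever knows the conditions its author anticipated; the learned version picks up signals (like "prize") directly from the data.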
To date, there are three types of learning in AI.
1. Assisted learning
This is the most common form of learning today, known in the machine learning field as supervised learning: a machine is presented with defined inputs and outputs in a sample set of data and then reverse engineers the relationship between them. Once the machine has figured it out, it is considered “trained” and can apply the learned function to any new set of information. For instance, this type of machine learning can help predict the likelihood of corporate bankruptcy. A user will feed in a large amount of data on companies that have gone bankrupt, from which the machine will derive the underlying trends. From there, it can learn to recognize if any warning signs appear in another company going forward.
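A minimal sketch of this idea: a simple perceptron trained on labeled examples. The financial features (debt-to-equity ratio, profit margin) and the toy data are illustrative assumptions, not a real bankruptcy model.

```python
# Supervised ("assisted") learning sketch: a perceptron learns a decision rule
# from labeled examples. Features (debt-to-equity, profit margin) and data are
# hypothetical; label 1 means the company went bankrupt.

def train_perceptron(samples, epochs=50, lr=0.1):
    """Learn weights from (features, label) pairs instead of hand-coding rules."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred           # zero once the example is classified correctly
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(model, features):
    (w, b), (x1, x2) = model, features
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Labeled training data: high leverage plus negative margins -> bankrupt (1).
training = [
    ((3.5, -0.10), 1), ((4.0, -0.05), 1), ((2.8, -0.20), 1),
    ((0.5, 0.15), 0), ((0.8, 0.12), 0), ((1.0, 0.08), 0),
]
model = train_perceptron(training)
print(predict(model, (3.8, -0.15)))  # 1: a highly leveraged, unprofitable firm
print(predict(model, (0.6, 0.10)))   # 0: a healthy firm
```

Once trained, the model applies the inferred rule to companies it has never seen, which is exactly the "trained, then applied to new information" pattern described above.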
2. Unassisted learning
Unassisted learning, known in the machine learning field as unsupervised learning, is when a machine is presented with a set of documents or data and then figures things out on its own. This can be done with even a very small number of files. For example, if a company wanted to classify a batch of contracts, the machine could read each document and, based on context, automatically separate the clauses into categories, such as intellectual property, limitation of liability, termination, or indemnification. In this way, the machine is able to define a rudimentary ontology without human input.
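A toy version of this contract-sorting idea: group text snippets by vocabulary overlap, with no labels supplied. The snippets, the Jaccard similarity measure, and the threshold are illustrative choices tuned for this tiny example.

```python
# Unsupervised ("unassisted") learning sketch: cluster contract snippets by
# word overlap with no labels. Snippets and the 0.1 threshold are illustrative.

def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    # Similarity: shared words divided by total distinct words.
    return len(a & b) / len(a | b)

def cluster(docs, threshold=0.1):
    """Greedy clustering: each document joins the first cluster it resembles."""
    clusters = []
    for doc in docs:
        t = tokens(doc)
        for c in clusters:
            if jaccard(t, tokens(c[0])) >= threshold:
                c.append(doc)
                break
        else:
            clusters.append([doc])  # no match: start a new category
    return clusters

snippets = [
    "licensee retains all intellectual property rights in the work",
    "all intellectual property rights remain with the original author",
    "liability of either party is limited to fees paid",
    "neither party shall have liability for indirect damages",
]
groups = cluster(snippets)
print(len(groups))  # 2: IP clauses end up separated from liability clauses
```

No one told the program what "intellectual property" or "liability" means; the categories emerge from the documents themselves, which is the rudimentary ontology described above.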
3. Reinforcement learning
Also known as “goal-oriented” learning, reinforcement learning is when a machine is presented with a set goal and then cut loose to do what it needs to do – including adjustments and rework – until it finds the most effective ways to reach that goal. The most popular examples of reinforcement learning are in video gaming, where the machine’s goal is to win the game. One can see how it can also be applied in a warehouse operation, where automated materials-handling systems can be set with the goal of optimizing the movement of products throughout the facility. The system will determine the best ways to pick and retrieve products, making the best use of warehouse space and keeping operations efficient.
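The trial-and-error loop at the heart of this approach can be sketched with tabular Q-learning on a toy problem: an agent learning to traverse a one-dimensional "warehouse aisle" to reach a target bin. The aisle layout, rewards, and hyperparameters are illustrative assumptions, not a real warehouse system.

```python
# Reinforcement learning sketch: Q-learning on a toy 1-D "warehouse aisle"
# with positions 0..4; the goal is the bin at position 4. All parameters
# and rewards are illustrative.
import random

random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = [-1, 1]  # move left or right along the aisle
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(200):  # training episodes: repeated trial, error, and adjustment
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise exploit current value estimates.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else -0.1  # goal reward vs. per-step cost
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy heads toward the goal from every position.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)]
print(policy)
```

Notice that the agent is never told the route; it is only given the goal and a reward signal, and the per-step cost nudges it toward the most efficient path, the same dynamic that lets a warehouse system discover efficient pick-and-retrieve strategies.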
Using the dimensions of reasoning, interaction, and learning allows organizations to define AI in a way that can yield concrete applications. Only with all three components together can something truly be defined as AI. Each of the three components is progressing along its own spectrum, and what is possible today is represented by the intersection of these three fields. Forward-thinking CIOs and other executives see this as a continuum, picking the transformation use cases that sit at this intersection based on where the evolving AI landscape stands at the time, to get the most out of AI.