The rise of virtual or artificial intelligence (AI) assistants has been underway for some time, driven by the growing popularity of products such as Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and Google’s Assistant.
The technology — which has been trained to understand voice commands and complete tasks for users — is also making headway in business as organizations look to leverage AI assistants (including chatbots) for a variety of use cases, including voice-to-text dictation, team collaboration tasks, email management, customer service, help desk management, and data analysis.
According to a 2018 report from online IT community Spiceworks, 40 percent of businesses with more than 500 employees expect to implement one or more intelligent assistants or AI chatbots on company-owned devices in 2019. For its research, Spiceworks surveyed 529 technology buyers from North America and Europe in March 2018.
Among organizations that have implemented the technology on company-owned devices and services, 49 percent are using Microsoft Cortana for work-related tasks, followed by Apple Siri at 47 percent. Fewer are using Google Assistant (23%) and Amazon Alexa (13%).
Of the companies that have deployed AI chatbots and intelligent assistants, 46 percent are using them for voice-to-text dictation, 26 percent to support team collaboration, and 24 percent for employee calendar management. In addition, 14 percent are using AI chatbots and assistants for customer service and 13 percent for IT help desk management.
More than half of the organizations (53%) use these products within their IT department, while 23 percent use them to support their administrative department, and 20 percent to support customer service.
Among organizations that are not using AI chatbots or intelligent assistants, half have not implemented them due to a lack of use cases in the workplace, 29 percent have security and privacy concerns, and 25 percent are holding back due to the cost. And despite the rising adoption of AI, only 20 percent of IT professionals think their organization has the proper skills, talent, and resources to implement and support AI technology.
Here are some suggested best practices for IT executives looking to implement and maintain virtual assistants.
Take a team approach

Developing, deploying, or maintaining an AI assistant can’t be done in a vacuum; IT has to be involved along with other parts of the organization, such as customer service, human resources, and executive management.
The team concept was an important component in the development of Eno, a natural language, SMS-based chatbot launched by financial services firm Capital One in 2017. People from design, product development, and IT work together as a team to build out Eno’s character, capabilities, and infrastructure, says Margaret Mayer, vice president of software engineering at Capital One.
“By working as one team we each provide input and feedback as [we] work on new features, creating a stronger outcome for Eno,” Mayer says. “We don’t obsess over creating a precise long-term roadmap, but rather work agile and focus on what the customers want Eno to help them with.”
Eno enables customers to quickly get answers to questions about account balance, recent transactions, and due dates, and to pay bills and conduct other transactions. The assistant can be customized to a specific user’s needs using machine learning.
One useful application of the AI assistant is fraud prevention. Previously, fraud alerts that needed a customer response had a narrowly defined set of responses that could be used. “Thanks to Eno’s natural language processing capabilities, Capital One customers with suspected suspicious activity on their credit or debit cards will no longer be confined to the construct of ‘confirm or deny’ when responding to an alert,” Mayer says. “This allows us to understand more responses from customers around suspicious activity alerts, allowing us to turn off cards faster and ultimately prevent more fraud.”
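Capital One has not published Eno’s internals, but the idea of moving beyond a rigid “confirm or deny” response set can be sketched with a minimal, rule-based intent classifier. The intent labels and phrase lists below are invented for illustration; a production system would use a trained NLP model rather than keyword matching.

```python
# Hypothetical sketch: mapping free-text replies to a fraud alert onto
# coarse intents, instead of accepting only "confirm" or "deny".
# Intent names and phrase lists are invented for illustration.

FRAUD_INTENTS = {
    "confirm_fraud": {"no", "not me", "didn't", "fraud", "wasn't me"},
    "deny_fraud": {"yes", "yep", "that was me", "i made", "recognize"},
}

def classify_reply(text: str) -> str:
    """Map a customer's free-text reply to a coarse intent label."""
    normalized = text.lower()
    for intent, phrases in FRAUD_INTENTS.items():
        if any(phrase in normalized for phrase in phrases):
            return intent
    return "unclear"  # hand off to a human agent

print(classify_reply("yes, that was me"))  # deny_fraud
```

Understanding a wider range of replies is what lets the bank act on an alert immediately, for example locking a card as soon as the customer confirms the charge was fraudulent.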
Since its launch, Eno has moved beyond SMS text and into other channels, with the goal of meeting customers’ needs or channel preferences. “Regardless of channel, our goal is to have customers be able to get their questions answered digitally and quickly,” Mayer says.
Customers have provided positive feedback about the natural and conversational interactions with Eno, Mayer says. “A great quote [from a customer] is, ‘Eno makes it seem as if you’re talking to a representative without the wait time or elevator music,’” she says.
Make sure the technology is easy to use
NASA’s Jet Propulsion Laboratory (JPL) is constantly assessing and experimenting with what it calls “technology waves of the future.” These waves, including internally developed digital assistants, are building to one “giant tsunami” that will result in built-in intelligence everywhere, says Tom Soderstrom, IT CTO at JPL.
“For built-in intelligence to be truly useful, it has to be easy to access and easy to use,” Soderstrom says. “We are already comfortable asking digital assistants such as Siri, Alexa, or Google questions and receiving answers. While we like the simplicity, we aren’t able to ask work-related questions. Also, we’re not able to hold a conversation with these commercial digital assistants.”
JPL’s business case is to give employees the ability to have a deep, work-related conversation with intelligent assistants in a simple, natural manner and receive correct answers within seconds.
“These answers come in the way they would like to see them, such as spoken, texted, displayed, or emailed responses,” Soderstrom says. A question should result in the assistant providing quick answers and insights by querying thousands of disparate data sources containing petabytes of data, without the user having to know all the underlying details, he says.
The assistants are proving to be beneficial for a variety of use cases. One is where people need to repeatedly answer the same types of questions, such as at help desks. JPL has built assistants to answer questions about human resources, contracts, acquisitions, cyber security, cloud computing, finding available conference rooms, finding available parking spots, and more.
Another use is where a group of experts is looking for specific information within its own domain. The assistants can quickly search across thousands of data sources and domains and provide near immediate insight, Soderstrom says. A few examples include queries related to cyber security, questions about Deep Space Network tracks, information about upcoming conferences, recent proposals, and filling out anomaly reports.
Yet another use case is where data volumes are huge and/or transactions come in quickly and need real-time attention, but people would not be able to react in time. These assistants sit in the background and notify users about events or about actions the assistants took on their behalf. Examples include cyber security events such as real-time attacks, automatically taking pictures of interesting features on Mars and sending the pictures to JPL, and various efforts that help law enforcement by searching across the open and dark Web.
Build a roadmap
Organizations should build a roadmap for using conversational assistants in order to gain value, says Brian Manusama, global research senior director, customer experience and technologies/conversational AI platforms at Gartner.
There are four key types of conversational interactions in which AI assistants can be used, Manusama says. These can be viewed as logical progressions as companies gain more experience with the technology.
One is low informational tasks and simple dialog to determine intent. This is where most of the deployments are today, Manusama says, and includes examples such as automating frequently asked questions on website portals.
Another is low informational tasks that require complex dialog to determine customer intentions by asking multiple clarifying questions.
A third is end-to-end tasks that require transactions based on simple dialog. This is when a conversation with an automated agent is integrated with back-end systems to initiate a transaction or combine different knowledge bases to provide a response to requests.
Fourth is a true conversational AI assistant, where extensive dialog is possible and the assistant is integrated with enterprise systems.
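The first and simplest interaction type, automating frequently asked questions, can be sketched in a few lines. The FAQ entries and the similarity threshold below are invented for illustration; real deployments would use an NLP platform rather than string similarity, but the shape of the logic is the same: match the question, or escalate.

```python
# Hypothetical sketch of the simplest interaction type: matching a user
# question against a small FAQ knowledge base. Entries and the 0.6
# threshold are invented for illustration.
from difflib import SequenceMatcher

FAQ = {
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "what are your support hours": "Support is available 9am-5pm, Monday to Friday.",
}

def answer(question: str, threshold: float = 0.6) -> str:
    """Return the best-matching FAQ answer, or escalate if nothing matches."""
    normalized = question.lower().strip("?! .")
    best_q = max(FAQ, key=lambda q: SequenceMatcher(None, normalized, q).ratio())
    if SequenceMatcher(None, normalized, best_q).ratio() >= threshold:
        return FAQ[best_q]
    return "Let me connect you with an agent."
```

The later interaction types in Manusama’s progression replace the lookup table with clarifying dialog and back-end transactions, but keep the same fallback to a human when intent can’t be determined.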
To determine whether AI assistants are actually delivering value, it’s important to measure customer satisfaction, Manusama says. “Over the last few years some chatbots delivered a terrible experience to customers,” he says. “Measure satisfaction, monitor escalation rates, examine the fallout report, and continuously retrain your digital agent for accuracy.”
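The escalation-rate metric Manusama mentions is straightforward to compute from session logs. The session record format below is invented for illustration; the point is simply to track what fraction of conversations the assistant could not contain.

```python
# Hypothetical sketch: computing a chatbot's escalation rate from session
# logs. The "escalated" field name is invented for illustration.

def escalation_rate(sessions: list[dict]) -> float:
    """Fraction of chat sessions handed off to a human agent."""
    if not sessions:
        return 0.0
    escalated = sum(1 for s in sessions if s["escalated"])
    return escalated / len(sessions)

sessions = [
    {"escalated": False}, {"escalated": True},
    {"escalated": False}, {"escalated": False},
]
print(f"{escalation_rate(sessions):.0%}")  # prints 25%
```

A rising escalation rate is one signal that the assistant needs retraining, which is why Manusama recommends monitoring it continuously rather than checking it once at launch.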
Learn by doing — and start now
The technology to deploy or build AI assistants is available, and IT should help drive implementations. “We recommend asking the users what they would find most useful, then building one, rapidly getting user feedback, and iterating or dropping it,” Soderstrom says. “Then create the next one.”
If people within the organization find a particular AI assistant useful, it’s an inexpensive and rapid return on investment, Soderstrom says. If a specific assistant is not seen as useful, it’s a small investment and can be “parked” for future use, he says.
JPL held an “ideathon” as it began its AI assistant journey and found that the most requested use was to find an available conference room. “We built the first version within a week and then added additional intelligence over time,” Soderstrom says. “It’s proven highly effective and popular.”
Companies should also consider building specific assistants for specific purposes, Soderstrom says. “They use the same architecture and tools, but natural language processing is not yet advanced enough to really understand the user’s intent,” he says. If JPL built a single, general-purpose AI assistant, users would have to answer more clarifying questions from the application, making it more difficult to use.
Companies should expect some setbacks when deploying AI assistants. “When it comes to implementing AI, you can’t be afraid to readily acknowledge a failure and learn from it,” says Keith Farley, director of innovation and customer experience at insurer Aflac.
“The only thing worse than failing for three months is failing for six months,” Farley says.
Aflac added a chat feature to its website and mobile app in 2018. “While we originally started with a fully automated chatbot, we felt that the failure rate was too high for the types of questions asked,” Farley says. “We turned off the chatbot and decided to start with human-to-human chat interaction to answer a wide range of questions related to benefits and policyholder coverage.”
Employees can answer any of the questions that come in, and the raw question-and-answer data the company is compiling will be leveraged to launch a new chatbot in 2019.
“In our case, we want to have a culture at Aflac where we can take some risks and not be afraid to fail,” Farley says. “We were able to quickly learn from our initial failure, restart, and ultimately deliver a highly successful customer app” that will increase customer satisfaction.