by Terena Bell

5 chatbot strategy mistakes — and how to avoid them

Feature
Nov 28, 2018 | 7 mins
Artificial Intelligence | Digital Transformation | Enterprise Applications

Chatbots are fast becoming a common solution for customer and end-user communications. But many chatbot strategies are missing the mark.


Are you building the right bot? Recent data from software company Pegasystems shows people don’t always use chatbots the way companies expect. Take Q&A, a function many bots are built for, but one the survey says fewer than half of consumers want.

With $1.25 billion expected to pour into chatbot development by 2025, a misalignment of business assumptions and customer desire could cause costly mistakes. According to software development company RubyGarage, it takes between $6,000 and $12,240 to build a single bot from scratch. Of course, this same tech saved companies $20 million last year, per Juniper Research. But seeing those savings — or any other benefit — requires the right training and implementation.


Whether it’s misunderstanding market need or something else, there are plenty of ways a chatbot strategy can go wrong. Here are the top five areas experts pinpoint for mistakes — and the tips they give to prevent them.

1. Market misfit

According to Pegasystems, more people want to use bots after purchase than before, with 60 percent turning to them to track orders already placed. Another 41 percent use chatbots to update addresses, and 43 percent use them to provide feedback or make complaints.

“It’s important for companies to think about what they are trying to do,” says Juliette Kopecky, vice president of marketing for chatbot maker Talla, as well as “what results they expect to see.” This isn’t just essential for determining product/market fit, Kopecky notes; it’s essential to chatbot training as well. She asks, “Is it just a simple process they are looking to automate or are they expecting something more sophisticated, that will be interactive, learn over time, and improve?” Thinking through what your bot should really do sets it — and the company — up for success.

2. Assuming one bot means one use case

Most businesses are veterans at buying tech — packaged software and software as a service (SaaS), that is. The problem is that both are typically built to meet a single business need, not several. Chatbots are different. Talla CEO Rob May says, “If a system learns, the more use cases you can give it.”

For example, when teams expand a Q&A bot to answer both employee and customer questions, the bot’s language understanding and company knowledge grow, making it better at both tasks. As a result, savvy companies — like Verizon — use one bot for multiple use cases. Vice President of Digital Ashok Kumar says the company’s My Verizon App bot for mobile customers and its My Fios App chatbot for television are two instances of the same core technology.

Yes, training a chatbot to meet more than one need means more upfront work. At Verizon, this meant teaching each instance a different meaning of the word “device” (“mobile phone” for My Verizon and “set-top box” for My Fios), but May says the payoff is worth it: “Systems that are more integrated are going to perform better.”
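To see what “one core, many instances” can look like in practice, here’s a minimal sketch in Python. The class and vocabulary mappings are purely illustrative, assuming a shared language core specialized per deployment; this is not Verizon’s actual implementation:

```python
# Minimal sketch of one bot core serving two deployments.
# All names here are illustrative, not Verizon's actual code.

class BotCore:
    """Shared language-understanding core, specialized per deployment."""

    def __init__(self, deployment: str, vocabulary: dict):
        self.deployment = deployment
        self.vocabulary = vocabulary  # deployment-specific word meanings

    def interpret(self, utterance: str) -> str:
        """Resolve ambiguous terms using this deployment's vocabulary."""
        resolved = utterance
        for term, meaning in self.vocabulary.items():
            resolved = resolved.replace(term, meaning)
        return resolved

# Two instances of the same core, each taught a different meaning of "device".
my_verizon = BotCore("My Verizon", {"device": "mobile phone"})
my_fios = BotCore("My Fios", {"device": "set-top box"})

print(my_verizon.interpret("restart my device"))  # restart my mobile phone
print(my_fios.interpret("restart my device"))     # restart my set-top box
```

The point of the design is that improvements to the shared core benefit every deployment, while each instance keeps its own vocabulary.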

3. Forgetting to also train humans

There’s so much focus on training chatbots that companies can forget human users need training of their own. In fact, Pegasystems reports 43 percent of those who haven’t yet used bots simply don’t know how. Respondents weren’t tech-illiterate — 65 percent regularly use text messaging and 61 percent use Facebook Messenger. The data suggests they simply haven’t been exposed to chatbots or been given any guidance in their use.

But May indicates that even chatbot-savvy users may need reminding that the bot is there. His company’s bot answers repetitive human resources questions, like “How much vacation do I have left?” “If you only interact with HR a couple times a year, you forget to go to [the] bot,” he says. “A bot has to be your everything bot first, to get you to go to it for HR questions” — reinforcing the case for giving one bot multiple use cases.

4. Misunderstanding the value of real people

Grand View Research reports 45 percent of chatbot users prefer the tech as their primary method of customer service communication. Unfortunately, that means more than half still want human connection. Pegasystems data also points to a clear preference for people, with 60 percent of users saying they “would rather talk to a human when conversing with a brand online.”

This does not, however, mean chatbots should pretend to be people. While it’s trendy to give them human names like Polly or Amelia, Kumar says, “I would never advise a bot … to pretend that it is human.” The data agrees: Only 36 percent of Pegasystems’ yet-to-use group said they’d prefer a humanlike presentation, and 27 percent called bots that act like people “creepy.”

Find a way to maximize technical efficiency while still involving humans. Ian Bain, vice president of corporate communications at [24]7.ai, suggests real people review chatbot conversations, looking for areas of improvement. While this is a standard training step for any machine learning system, Bain also recommends implementing a “bot-human relay” in which “humans only get involved if the bot runs into trouble.” The handoff can work in two ways: First, an online customer can ask the chatbot to connect her to a real person. “The bot can also recognize when it’s not answering the question or the consumer is getting frustrated and the bot can escalate to the [human] agent,” says Bain.
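In code, a bot-human relay can be as simple as a routing check on every turn. Here’s a minimal sketch in Python of the two triggers Bain describes, an explicit request for a person and signs the bot is stuck. Keyword matching stands in for what a production system would do with a trained classifier and confidence scores; all names and phrases are illustrative:

```python
# Minimal sketch of a bot-human relay, per the two triggers described above:
# the customer asks for a person, or the bot detects it isn't helping.
# Keyword matching stands in for a trained classifier in production.

HUMAN_REQUESTS = ("agent", "human", "real person", "representative")
FRUSTRATION_CUES = ("this is useless", "not what i asked", "wrong answer")

def should_escalate(message: str, failed_attempts: int) -> bool:
    """Decide whether to hand the conversation to a human agent."""
    text = message.lower()
    asked_for_human = any(phrase in text for phrase in HUMAN_REQUESTS)
    sounds_frustrated = any(cue in text for cue in FRUSTRATION_CUES)
    bot_is_stuck = failed_attempts >= 2  # bot repeatedly failed to answer
    return asked_for_human or sounds_frustrated or bot_is_stuck

def answer_with_bot(message: str) -> str:
    # Placeholder for the bot's normal answer pipeline.
    return f"Bot answer to: {message}"

def route(message: str, failed_attempts: int) -> str:
    """Route each turn either to the bot or to a live agent."""
    if should_escalate(message, failed_attempts):
        return "Connecting you with a live agent..."
    return answer_with_bot(message)

print(route("I need to speak to a real person", failed_attempts=0))
```

The failed-answer counter matters as much as the keywords: a bot that keeps guessing is exactly the frustration Bain warns about.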

5. Not watching your tone

In addition to making it clear that your bot is a bot, Dan Smith, director of growth firm Doogheno, says don’t forget about tone. “Tone is everything,” he says, noting that a company’s communication style generally falls somewhere between formal and casual.

“With chatbots,” he explains, “that’s a challenge. The received wisdom is that a chatbot’s tone should be like talking to a friend, but why? If you called up customer service to complain and they automatically referred to you by your first name and added ‘verbal emoji,’ you’d think they weren’t taking your concerns seriously.” Especially, Smith adds, if you’re “a 55-year-old sorting out a warranty repair on a BMW.” Similarly, it’s off-brand for a pizza chatbot to address someone checking on a delivery as Mr. So-and-So.
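One practical takeaway is to treat tone as a per-deployment setting rather than a hard-coded default. The sketch below shows the idea with two illustrative profiles; the wording, profile names, and fields are assumptions for demonstration, not any real product’s configuration:

```python
# Minimal sketch: make tone a per-deployment setting, not a baked-in default.
# The profiles and wording are illustrative only.

TONE_PROFILES = {
    "formal": {"greeting": "Good afternoon, {title} {surname}.", "emoji": False},
    "casual": {"greeting": "Hi {first_name}!", "emoji": True},
}

def greet(profile_name: str, first_name: str, surname: str,
          title: str = "Mr.") -> str:
    """Build a greeting in the deployment's configured tone."""
    profile = TONE_PROFILES[profile_name]
    greeting = profile["greeting"].format(
        title=title, surname=surname, first_name=first_name
    )
    return greeting + (" 🙂" if profile["emoji"] else "")

print(greet("formal", "Dan", "Smith"))  # Good afternoon, Mr. Smith.
print(greet("casual", "Dan", "Smith"))  # Hi Dan! 🙂
```

A warranty-repair bot might ship with the formal profile while a pizza tracker uses the casual one: the same engine, tuned to the brand.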