How chatbots like Siri will get smarter

The next wave of advancements in artificial intelligence will consider how chatbots reside within a human system and how rapidly they adapt to changes in their environments.


Siri, Alexa, Google Assistant and other A.I.-based chatbots provide subpar interactions that fall far short of a conversation with a real-life human assistant. These tentpole features of the Apple, Amazon and Google platforms perform quite well when asked about the weather or when instructed to add a calendar event, but they look inept when asked context-specific questions or when required to carry on a dialogue.

Colin Allen, a professor of cognitive science at Indiana University and an expert on A.I. ethics, machine morality and animal cognition, observed this when he asked Siri, "Where can I find an expert on animal cognition?" While he acknowledges that it was an ego-stroking question, he reports that Siri's less-than-adequate answer highlighted some of the assumptions assistants make: it returned two results from Apple Maps within 0.3 miles of his location, pointing to design and painting companies with "expert" in their names.

An example query to Siri. (Image: Apple, Inc.)

When a chatbot like Siri is asked a complex query that requires context, the system fails to interpret the real meaning behind the question.

“Siri assumes that [answers to questions with the word find] must be based on local awareness. If I want to find a plumber, that’s a good assumption. If I want to find other things, it may be a bad assumption. People travel thousands of miles to find the right medical facility for their needs. Siri is completely clueless about the context-sensitivity of [the word] find,” Allen says. The results also threw out the expertise requirement of “animal cognition” because Siri “interpreted expert as a word to match, not a modifier of topic words,” he adds.
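Allen's distinction between matching "expert" as a standalone keyword and treating it as a modifier of the topic can be sketched in a toy example. This is purely illustrative, assuming a made-up list of nearby businesses; none of the names or functions below reflect Siri's actual pipeline.

```python
# Toy illustration of the failure mode Allen describes: a naive assistant
# matches "expert" as a keyword against business names, while a
# topic-aware one treats it as a modifier of "animal cognition".
# All names and data below are hypothetical, for illustration only.

places = [
    {"name": "Expert Design Co.", "topics": []},
    {"name": "Expert Painting LLC", "topics": []},
    {"name": "Indiana University Cognitive Science", "topics": ["animal cognition"]},
]

def naive_find(query_words, places):
    """Match any query word against business names (pure keyword matching)."""
    return [p for p in places
            if any(w.lower() in p["name"].lower() for w in query_words)]

def topic_aware_find(topic, places):
    """Treat 'expert' as a modifier: require expertise in the topic itself."""
    return [p for p in places if topic in p["topics"]]

query = "expert on animal cognition"
print([p["name"] for p in naive_find(query.split(), places)])
# → the design and painting companies, because "expert" is in their names
print([p["name"] for p in topic_aware_find("animal cognition", places)])
# → only the cognitive-science department
```

The naive matcher reproduces Allen's result: the word "expert" alone selects irrelevant local businesses, while the topic-aware version returns only places that actually satisfy the "animal cognition" requirement.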

Initial assumptions like these may be necessary for chatbots, and for A.I. implementations generally, because they are all we may be able to achieve today. Allen does not take issue with this, noting that “Teslas are probably better than teenagers on the road.” But he also notes, with this example and others, that we have a long way to go to reach general intelligence. “I am equally skeptical of those most pessimistic about the benefits of [A.I.-based] technology while working to dampen the optimism of those who think that all the problems will be solved in the next couple of decades,” he says. “My sense is that industry contains more of the latter, while philosophers of technology are more in the former camp.” Here, Allen directly contrasts attitudes toward artificial intelligence in business, where people seek monetary gain, and in academia, where people principally seek a deeper understanding of this complex technology.

If chatbots and similar technologies are to evolve, then experts like Colin Allen will have to solve problems that situate A.I. within an overall human-machine system. “We need to be more attentive to issues of how humans adapt to inflexibility (and inscrutability) in the machines. We make assumptions, often false, about their capacities, and then adjust our behavior to their inflexibility,” he says. Often, when a chatbot fails, we resort to unnatural phrases and twist our words in hopes that it will accept the query and take action, yet this seems completely at odds with how assistants should work. Companies have favored enabling machines to handle focused queries rather than moving closer to the example seen in the 2013 movie Her, where the agent is more generalized and personalized to the user.

Allen believes animal cognition may be a source of inspiration for how chatbots may evolve.

“How do animals naturally adapt to us better than a machine?” he asks. “How does a dog deal with situations where there is uncertainty?” He believes that understanding the answers to these questions may hint at how to create artificial general intelligence, as adaptability is a key characteristic of human-like intelligence.

More generally, Allen says, “we need to see this as a problem of designing the whole human-robot system, not just making the machines themselves smarter, because, for the foreseeable future, the machines are not going to be anywhere near as adaptive as the humans.” Human queries assume a respondent understands context, which chatbots currently lack. Animals, however, pick up on context cues by combining sensory systems, such as hearing and vision. Companies should develop models that encourage cross-modal integration to create more adaptive, human-like chatbots.
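The cross-modal idea can be sketched as a toy "late fusion" of two modalities' guesses about a user's intent. This is a simplified illustration under invented assumptions, not any company's architecture; the intents, modalities, and probabilities below are all made up.

```python
# Toy sketch of cross-modal integration: an assistant fuses independent
# intent estimates from speech and vision to resolve an ambiguous query.
# The intents and probabilities are invented purely for illustration.

def fuse(speech_probs, vision_probs):
    """Multiply per-intent probabilities from each modality and
    renormalize (a naive late-fusion scheme)."""
    fused = {intent: speech_probs[intent] * vision_probs[intent]
             for intent in speech_probs}
    total = sum(fused.values())
    return {intent: p / total for intent, p in fused.items()}

# Speech alone can't tell whether "find an expert" means a local search
# or a topic search; a second modality (say, a research paper visible
# on screen) tips the balance toward the topic interpretation.
speech = {"local_search": 0.5, "topic_search": 0.5}
vision = {"local_search": 0.2, "topic_search": 0.8}

fused = fuse(speech, vision)
best = max(fused, key=fused.get)
print(best)  # the fused estimate favors "topic_search"
```

Even this crude combination shows the point: a second channel of context can disambiguate a query that a single modality, like speech, cannot resolve on its own.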

While a focus on specific use cases can improve performance on discrete, focused tasks, chatbots will continue to fall short because they lack adaptability. Focused domains like autonomous driving mimic intelligence over a known set of inputs. Chatbots, burdened with more diverse problem sets, need further advancement to provide the same level of human-like performance. Businesses that wish to implement their own chatbots still need clear domains for human users to work within. Otherwise, chatbots may prove to be more of a hindrance than a benefit to the user.

This article is published as part of the IDG Contributor Network.