Siri, Alexa, Google Assistant and other A.I.-based chatbots provide subpar interactions that fall far short of a conversation with a real-life human assistant. These tentpole features of the Apple, Amazon and Google platforms perform quite well when asked about the weather or when instructed to add a calendar event, but they look inept when asked context-specific questions or when asked to carry on a dialogue.
Colin Allen, a professor of cognitive science at Indiana University and an expert on A.I. ethics, machine morality and animal cognition, observed this when he asked Siri, "Where can I find an expert on animal cognition?" While he acknowledges that it was an ego-stroking question, he reports that Siri indeed provided a less-than-adequate answer, one that highlights some of the assumptions assistants make: It brought up two results from Apple Maps within 0.3 miles of his location that pointed to design and painting companies with "expert" in their names.
“Siri assumes that [answers to questions with the word find] must be based on local awareness. If I want to find a plumber, that’s a good assumption. If I want to find other things, it may be a bad assumption. People travel thousands of miles to find the right medical facility for their needs. Siri is completely clueless about the context-sensitivity of [the word] find,” Allen says. The results also threw out the expertise requirement of “animal cognition” because Siri “interpreted expert as a word to match, not a modifier of topic words,” Allen says.
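The failure Allen describes can be illustrated with a toy sketch. The code below is hypothetical and is not Siri's actual pipeline; the listing names, topic directory and parsing rule are invented for illustration. It contrasts a naive keyword matcher, which treats "expert" as just another word to match against business names, with a parser that treats "expert on X" as a request about the topic X.

```python
def keyword_match(query, listings):
    """Naive approach: surface any listing whose name shares a word with the query."""
    words = set(query.lower().replace("?", "").split())
    return [name for name in listings if words & set(name.lower().split())]

def topic_match(query, directory):
    """Treat 'expert on/in X' as a modifier: look up the topic X, not the word 'expert'."""
    q = query.lower()
    for marker in ("expert on ", "expert in "):
        if marker in q:
            topic = q.split(marker, 1)[1].rstrip("?. ")
            return [name for name, topics in directory.items() if topic in topics]
    return []

# Hypothetical nearby businesses and a hypothetical expertise directory.
listings = ["Expert Design Co.", "Expert Painting LLC", "Animal Cognition Lab"]
directory = {
    "Animal Cognition Lab": {"animal cognition"},
    "Expert Design Co.": {"interior design"},
}

query = "Where can I find an expert on animal cognition?"
print(keyword_match(query, listings))  # includes the design company: "expert" matched literally
print(topic_match(query, directory))   # only the listing whose topic is animal cognition
```

The keyword matcher happily returns "Expert Design Co." because the word "expert" appears in its name, mirroring the behavior Allen observed, while the topic-aware version uses "expert" only to signal what kind of lookup to perform.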
Initial assumptions like these may be necessary for chatbots, and for A.I. implementations generally, because they may be all we can achieve today. Allen does not take issue with this, noting that "Teslas are probably better than teenagers on the road." But he also notes, with this example and others, that we have a long way to go before reaching general intelligence. "I am equally skeptical of those most pessimistic about the benefits of [A.I.-based] technology while working to dampen the optimism of those who think that all the problems will be solved in the next couple of decades," he says. "My sense is that industry contains more of the latter, while philosophers of technology are more in the former camp." Here, Allen directly contrasts attitudes toward artificial intelligence in business, where people are seeking monetary gain, and in academia, where people are principally seeking a deeper understanding of this complex technology.
If chatbots and similar technologies are to evolve, experts like Colin Allen will have to frame the problem as understanding A.I. within an overall human-machine system. "We need to be more attentive to issues of how humans adapt to inflexibility (and inscrutability) in the machines. We make assumptions, often false, about their capacities, and then adjust our behavior to their inflexibility," he says. Often, when a chatbot fails, we resort to unnatural phrasing and twist our words in the hope that it will accept the query and take action, but this seems completely at odds with how assistants should work. Companies have favored enabling machines to handle narrowly focused queries rather than moving closer to the vision of the 2013 movie Her, in which the agent is generalized and personalized to the user.
Allen believes animal cognition may be a source of inspiration for how chatbots may evolve.
“How do animals naturally adapt to us better than a machine?” he asks. “How does a dog deal with situations where there is uncertainty?” He believes that understanding the answers to these questions may be a hint to creating artificial general intelligence, as adaptability is a key characteristic of human-like intelligence.
More generally, Allen says, "we need to see this [as] a problem of designing the whole human-robot system, not just making the machines themselves smarter, because, for the foreseeable future, the machines are not going to be anywhere near as adaptive as the humans." Human queries assume a respondent understands context, which chatbots currently lack. Animals, however, pick up on context cues by combining sensory systems, such as hearing and vision. Companies should develop models that encourage cross-modal integration to create more adaptive, human-like chatbots.
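As a rough sketch of what cross-modal integration means at the architectural level, the following toy code fuses a text embedding with an image embedding into one shared representation. This is an illustrative assumption, not any vendor's actual model: the encoders are trivial stand-ins for real language and vision networks, and the fusion weights would normally be learned rather than random.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_text(tokens, dim=8):
    # Stand-in for a real language encoder: hash tokens into a fixed-size vector.
    v = np.zeros(dim)
    for t in tokens:
        v[hash(t) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

def embed_image(pixels, dim=8):
    # Stand-in for a real vision encoder: pool simple pixel statistics.
    flat = np.asarray(pixels, dtype=float).ravel()
    stats = np.array([flat.mean(), flat.std(), flat.min(), flat.max()])
    v = np.resize(stats, dim)
    return v / (np.linalg.norm(v) + 1e-9)

# In a trained system these weights come from learning; here they are random.
W_fuse = rng.normal(size=(8, 16))

def fuse(text_vec, image_vec):
    """Concatenate the two modalities, then project into a shared space."""
    joint = np.concatenate([text_vec, image_vec])  # cross-modal combination
    return W_fuse @ joint                          # shared 8-dim representation

t = embed_text(["find", "an", "expert"])
i = embed_image([[0.2, 0.8], [0.5, 0.1]])
print(fuse(t, i).shape)  # a single vector carrying both text and visual context
```

The design point is the `fuse` step: once speech and vision land in one representation, downstream interpretation can use visual context to disambiguate a spoken query, which is closer to how an animal combines senses than to a text-only pipeline.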
While a focus on specific use cases can improve discrete, focused tasks, chatbots will continue to fall short because of a lack of adaptability. Focused domains like autonomous driving mimic intelligence based on a known set of inputs. Chatbots, burdened with far more diverse problem sets, need further advancement to reach the same level of human-like performance. Businesses that wish to implement their own chatbots still need to define clear domains within which human actors can work. Otherwise, chatbots may prove to be more of a hindrance than a benefit to the user.
Tarun Gangwani is an award-winning product and design professional whose work has been used by millions of people around the world. With a background in cognitive science and design, Tarun has delivered user-centered solutions to startups and enterprise companies within a wide variety of industries that leverage cloud technologies to deliver innovation to their clients.
Tarun is currently in charge of product, design and development for Grok, a cloud analytics and automation platform that detects anomalies within infrastructure and applications using machine intelligence. He joined Grok in 2016 to manage design and development teams and deliver compelling user experiences for businesses managing their cloud workloads.
Previously, Tarun led multidisciplinary product development teams within IBM’s $9 billion cloud business. He was among the first wave of designers who pioneered and designed Bluemix, IBM’s cloud developer platform. Bluemix is now the largest open-source cloud platform in the world. His work has been recognized in numerous outlets, including the New York Times. In 2016, Forbes magazine honored Tarun as a member of its 30 Under 30 List, which features entrepreneurs and leaders in business and technology worldwide.
Outside of the danger zone of cloud computing, Tarun is a coffee enthusiast. On any given weekend, he can be found perfecting the craft of home-brewed coffee. He is also a proud Indiana University alum, with degrees in cognitive science and human-computer interaction design. He enjoys writing on topics related to technology, business and user design.
The opinions expressed in this blog are those of Tarun Gangwani and do not necessarily represent those of IDG Communications Inc. or its parent, subsidiary or affiliated companies.