Siri, Alexa, Google Assistant and other A.I.-based chatbots provide subpar interactions that fall far short of a conversation with a real-life human assistant. These tentpole features of the Apple, Amazon and Google platforms perform quite well when asked about the weather or when instructed to add a calendar invitation, but they look inept when asked context-specific questions or when required to carry on a dialogue.

Colin Allen, a professor of cognitive science at Indiana University and an expert on A.I. ethics, machine morality and animal cognition, observed this when he asked Siri, "Where can I find an expert on animal cognition?" While he acknowledges that it was an ego-stroking question, he reports that Siri did indeed provide a less-than-adequate answer, one that highlights some of the assumptions assistants make: It brought up two results from Apple Maps within 0.3 miles of his location that pointed to design and painting companies with "expert" in their names.

Apple, Inc.

When asked a complex query that requires context, a chatbot like Siri fails to interpret the real meaning behind the question.

"Siri assumes that [answers to questions with the word find] must be based on local awareness. If I want to find a plumber, that's a good assumption. If I want to find other things, it may be a bad assumption. People travel thousands of miles to find the right medical facility for their needs. Siri is completely clueless about the context-sensitivity of [the word] find," Allen says. The results also threw out the expertise requirement of "animal cognition" because Siri "interpreted expert as a word to match, not a modifier of topic words," Allen says.

Initial assumptions like these may be necessary for chatbots, and for A.I. implementations generally, because they may be all we can achieve today.
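The failure mode Allen describes can be pictured with a minimal, hypothetical sketch: a naive local-search matcher that treats "expert" as a literal token to find in business names, rather than as a modifier of the topic "animal cognition." The function and business names below are invented for illustration, not Siri's actual implementation.

```python
def naive_local_search(query, nearby_businesses):
    """Return nearby businesses whose names share any word with the query.

    This deliberately mimics bare keyword matching: every query word is
    treated as equally matchable, so a modifier like "expert" matches a
    company name just as readily as a topic word would.
    """
    words = set(query.lower().split())
    return [b for b in nearby_businesses if words & set(b.lower().split())]

nearby = ["Expert Design Co.", "Expert Painting LLC", "Joe's Plumbing"]
results = naive_local_search("expert on animal cognition", nearby)
print(results)  # both "Expert ..." companies match on the bare word "expert"
```

A system that parsed "expert" as modifying "animal cognition" would instead search for people or institutions on that topic, regardless of distance.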
Allen does not take issue with this, noting that "Teslas are probably better than teenagers on the road." But he also notes, with this example and others, that we have a long way to go before reaching general intelligence. "I am equally skeptical of those most pessimistic about the benefits of [A.I.-based] technology while working to dampen the optimism of those who think that all the problems will be solved in the next couple of decades," he says. "My sense is that industry contains more of the latter, while philosophers of technology are more in the former camp." Here, Allen directly contrasts attitudes toward artificial intelligence in business, where people are seeking monetary gain, with those in academia, where people are principally seeking a deeper understanding of this complex technology.

If chatbots and similar technologies are to evolve, then experts like Colin Allen will have to solve problems of understanding A.I. within an overall human-machine system. "We need to be more attentive to issues of how humans adapt to inflexibility (and inscrutability) in the machines. We make assumptions, often false, about their capacities, and then adjust our behavior to their inflexibility," he says. Often, when a chatbot fails, we say unnatural phrases and twist our words in hopes that it will accept the query and take action, but this seems completely at odds with how assistants should work. Companies have favored enabling machines to handle focused queries rather than moving closer to the example of the 2013 movie Her, where the agent is more generalized and personalized to the user.

Allen believes animal cognition may be a source of inspiration for how chatbots evolve.

"How do animals naturally adapt to us better than a machine?" he asks.
"How does a dog deal with situations where there is uncertainty?" He believes that the answers to these questions may hint at how to create artificial general intelligence, as adaptability is a key characteristic of human-like intelligence.

More generally, Allen says, "we need to see this [as] a problem of designing the whole human-robot system, not just making the machines themselves smarter, because, for the foreseeable future, the machines are not going to be anywhere near as adaptive as the humans." Human queries assume that a respondent understands context, which chatbots currently lack. Animals, however, pick up on context cues by combining sensory systems, such as hearing and vision. Companies should develop models that encourage this kind of cross-modal integration to create more adaptive, human-like chatbots.

While a focus on specific use cases can improve discrete, focused tasks, chatbots will continue to fall short because of a lack of adaptability. Focused domains like autonomous driving mimic intelligence based on a known set of inputs. Chatbots, burdened with far more diverse problem sets, need further advancement to provide the same level of human-like performance. Businesses that wish to implement their own chatbots still need to define clear domains that human actors can work within. Otherwise, chatbots may prove to be more of a hindrance than a benefit to the user.