Our Robots, Ourselves: 9 Efforts to Build More Relatable Robots

If we're going to live in a world of artificial intelligences, we've got to be able to feel affection for them.

The word "robot" entered English from Czech: Karel Čapek's 1920 play R.U.R. (Rossum's Universal Robots) coined it from robota, meaning forced labor. The play featured artificial lifeforms designed for manual labor who eventually rose up and exterminated their human creators. A common trope, to be sure, and here's another one: at the end, a pair of robots have learned to love one another, and the last human gives them his blessing, declaring them the new Adam and Eve.

Humans have had an ambivalent relationship with robots real and fictional ever since. The key to easing that discomfort has been to build bots we can relate to -- if not as people, then at least as something with a personality. This slideshow will detail some attempts that met with varying degrees of success.


One of the earliest attempts to build a machine that people might relate to emotionally was ELIZA, the first chatterbot, developed by MIT computer scientist Joseph Weizenbaum in 1966. Named after Eliza Doolittle in the play Pygmalion and emulating the bland, nonspecific, open-ended questioning style of a Rogerian psychotherapist, ELIZA fooled many users into believing a real human was driving it, and they ended up pouring their hearts out to it. It startled Weizenbaum that such a simple and definitely non-thinking program could be so convincing, and he grew skeptical of some of the philosophical claims of AI proponents. You can play with ELIZA online and see how realistic it seems to you.
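ELIZA's trick was simpler than its effect: match the user's sentence against keyword patterns, reflect the pronouns, and echo a fragment back as an open-ended question. A minimal sketch of that style, with illustrative rules rather than Weizenbaum's original script, might look like this:

```python
import re

# Swap first- and second-person words so a reflected fragment reads naturally.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Hypothetical keyword rules in the ELIZA style; each captures a fragment
# to reflect back inside a canned Rogerian question.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.match(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the bland, nonspecific fallback

print(respond("I am feeling lonely"))  # → "How long have you been feeling lonely?"
```

No model of meaning anywhere, just pattern substitution -- which is exactly why Weizenbaum was startled that people poured their hearts out to it.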

Twitter Markov bots

Perhaps the modern-day successors to ELIZA are the spate of Twitter Markov chain bots that have sprung up over the past few years, which build weird, loopy sentences based on the probability of one word following another in a pre-existing collection of text. Pioneered by @horse_ebooks, a beloved spambot that churned out delightful dada (until it was taken over by a human as an art project), these bots reached their transcendence with @tofu_product, who mirrors your own randomized writing style back at you. What do we find most relatable, after all, if not ourselves? People also built Twitter bots specifically to irritate their friends.
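The mechanism behind these bots is a first-order Markov chain: record, for every word in a corpus, which words follow it, then generate text by repeatedly picking a random observed successor. A minimal sketch (the tiny corpus here is a stand-in for a real tweet archive):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=8):
    """Walk the chain, choosing each next word at random among its followers."""
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = "the horse ate the hay and the horse ran to the barn"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Every adjacent word pair in the output occurred somewhere in the source text, which is why the results hover uncannily between sense and nonsense -- locally plausible, globally loopy.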

Sergeant Star

At the other end of the whimsical scale from Twitter chatbots is Sgt. Star, a feature on the U.S. Army website that serves as a "virtual recruiter," answering potential recruits' preliminary questions on military life. His avatar is actually based on a real soldier, but computer-rendered to make him look a bit more animated. Sgt. Star's handlers discovered that people were more comfortable talking to him than to a real person about delicate questions like group showers or fear of combat. Thanks to an EFF Freedom of Information Act request, you can check out the complete, 288-page transcript of everything Sgt. Star can say.

Please understand me

What would perhaps make a bot most relatable would be the ability to relate to us. Despite urban legends that the best way to bypass an automated phone tree is to shout obscenities at it, this is a nut that computer science has yet to really crack. Microsoft Research Asia is working on deriving your mood from your smartphone activity; the Israeli startup Beyond Verbal is trying to figure out how to auto-detect emotional changes in speech; and facial recognition research from North Carolina State aims to help computers tell if students taking MOOCs or automated tutorials are bored or struggling.


It's really in the world of physical robots that the quest to create a machine that reacts as if it has feelings is going into high gear. MIT has been working on a robot called Kismet since the 1990s, with articulated face parts that can provide a variety of expressions. Unfortunately, Kismet is also one of the most terrifying things we've ever seen. It looks like you tried to kill Stripe from Gremlins by setting him on fire only to discover that he was a Terminator underneath his skin. Watch the video if you dare!


Kaspar, a British-built robot, is slightly less traumatizing, though he does sort of look like he's wearing another face on top of his face as a mask. Still, as you can see in this video, his limited emotional range of facial expressions and body postures does a great deal of good for his target audience: autistic children, who can interact with the kid-sized robot and learn to better understand the emotional responses of others.


One way to avoid the uncanny valley effect humanoid robots can bring up is to build a robot that isn't humanoid at all. A British professor has been experimenting with Paro, a robotic baby seal, to help people with dementia come out of their shells and interact more. Paro wriggles and coos and makes real seal sounds; paradoxically, patients can relate to it better because they have few preconceived notions about seals, while many dislike dogs, cats, and possibly people. It may seem creepy to convince senile people to fall in love with a robot, but that video sure is affecting.

We do what we're told

Perhaps the one feature we as living things relate to most strongly is the will to survive. A researcher at the University of Canterbury in New Zealand recently tried to see how humans would react to a robot that appeared to possess just such an urge, sternly instructing his test subjects to turn off a robotic cat even as it begged for its life, in a twist on the famous Milgram experiment. Everyone turned the robot off eventually, but as you can see in the video, they felt pretty bad about it. (They also had a harder time turning off the robot when it was nicer.)

So long, Yutu

There's really no stronger evidence of our desire to connect to other living things -- and our willingness to be pretty liberal in the definition of "living" -- than in the outpouring of emotion that accompanied a farewell message from the Chinese lunar rover Yutu on Sina Weibo, the Chinese equivalent of Twitter. Never mind that the message didn't come from Yutu, and didn't even come from the scientists running the mission, but rather from a group of space enthusiasts. The truth is that people building relatable bots have a built-in head start that comes from our apparently innate desire to relate to them.

Copyright © 2014 IDG Communications, Inc.