by Sarah K. White

Robots and AI won’t cost you your job anytime soon

Feature
Oct 31, 2016 | 6 mins
Careers | Emerging Technology | Innovation

The growth of artificial intelligence was a hot topic at this year’s MIT Technology Review EmTech conference, but if you’re concerned that robots are about to take your job, rest easy. They aren’t.

MIT Technology Review’s EmTech conference is all about looking to the future of tech, and what’s more futuristic than artificial intelligence? If you’re up to date on the latest AI news, or maybe you’ve just watched a few episodes of HBO’s Westworld, you might be wondering how soon a robot will replace you at work. But if the presentations at this year’s EmTech conference are any indication, artificial intelligence is at an impressive point, yet we’re still far from robot domination.

To emphasize that point, Dileep George, co-founder of Vicarious, an organization working on next-generation AI algorithms, showed video of robots falling over in silly situations during his presentation, Artificial Intelligence at Work. The footage not only got some laughs, it also highlighted the vast limitations of current robotics. According to George, it’s not that we lack the hardware to create intelligent robots; we lack the software to make robots intelligent enough to do something as simple as fall down correctly. Instead, most robots tense up unnaturally and topple from a mere push. (Also read: AI at work: Your next co-worker could be an algorithm)

He likens it to a Roomba trapped in the corner of a room, which George also demonstrated on video: the device isn’t smart enough to figure out how to free itself. Current robot intelligence is essentially on par with that of lower-level creatures that still rely on the “old brain,” such as reptiles, rodents, birds and fish. He gave the example of a frog trying to catch bugs on an iPhone display: the frog sees the insects crawling on the screen and keeps trying to capture them, never realizing it’s impossible.

Robots today function a lot like reptile brains. Biomimicry hasn’t come far enough to recreate the movements, expressions and thought patterns that would let AI work on its own. Current AI technology, whether it’s an actual robot or just software, almost always needs a human guide. At best, robots are relegated to one specific task that they can repeat over and over.

Robots can’t help us yet

Stefanie Tellex, assistant professor of computer science at Brown University, said we don’t yet have robots that can do much for us. We have autonomous cars and drones, but we don’t have robots that can do chores or even navigate real-life environments without human assistance.

“We’d like them to be able to get us a cup of coffee, peel a banana or give you a Kleenex when you have a cold,” she says. And beyond our homes, she points out, these types of robots could do a lot for us in environments like labs or even the International Space Station.

But our day-to-day environments are complex. Your desk, for example, probably looks nothing like your coworker’s desk. You might have similar objects, like a chair, computer or coffee mug, but they probably aren’t in the same location, or even the same shape, size or brand. That means a robot can’t be trained to treat a desk as a predictable environment. It has to take in every object on the desk, whether pen, laptop or coffee mug, just to understand how to do something as simple as picking up your mug, before it can even dream of pouring you a fresh cup of joe.

Another problem is that robots, in general, have a hard time picking things up with precision, even a robot like Baxter, which was designed specifically to pick things up. Baxter was invented to make manufacturing easier and safer for human employees and to give smaller businesses access to more affordable equipment. He eliminates a lot of monotonous factory work so employees can focus on more complex, strategic tasks. But he needs to be programmed to understand an object; he can’t just enter a new environment and get to work.

And as Tellex illustrated in her session, it has become a full-time job for her team to figure out how to teach Baxter to identify a new object and then pick it up. Essentially, they need to teach the robot to learn, moving it past its “old brain” so it can interpret a situation through a series of photographs, a process called light field perception. With light field technology, Baxter will eventually be able to encounter an object he’s never seen before and, through a series of images, figure out what it is and how to pick it up. In other words, he’s still a far cry from freeing interns from office coffee runs.
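For readers curious what that multi-view idea looks like in software, here is a minimal, hypothetical sketch: combine noisy position estimates of an object taken from several photographs, then compute a simple grasp target above the fused estimate. Every function name and number below is invented for illustration; this is not Brown’s actual code or the real light field algorithm.

```python
# Toy illustration of the multi-view idea: fuse rough per-photo
# position estimates of an object, then pick a grasp point above
# the fused centroid. All names and values here are hypothetical.
import numpy as np

def fuse_views(detections):
    """Average the 3D position estimates (one per photograph)."""
    return np.mean(np.asarray(detections), axis=0)

def grasp_pose(centroid, approach_height=0.10):
    """Return a simple top-down grasp target above the object."""
    x, y, z = centroid
    return {"position": (x, y, z + approach_height), "orientation": "top_down"}

# Simulated noisy estimates of one object's position from four photos.
views = [(0.52, 0.31, 0.04), (0.50, 0.29, 0.05),
         (0.51, 0.30, 0.06), (0.49, 0.30, 0.05)]
print(grasp_pose(fuse_views(views)))
```

The real research problem, of course, is everything this sketch skips: recognizing what the object is, recovering its geometry from the images, and choosing a grip that won’t drop it.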

[ Related story: AI expanding in the enterprise (whether or not you know it) ]

The future of robotics is bright

While we’re still in the early stages of AI, there’s a lot in store for the future of robots. One of the most notable examples came from George, who demonstrated the robot Hermes. Hermes is being developed to take the place of workers in high-risk jobs, but it won’t strip them of those jobs entirely. Instead, the robot is designed to carefully mimic the movements and actions of a human controller, so that humans can stay safe in dangerous situations.

For example, a firefighter could gear up and control the robot step by step, breaking down doors to reach potential victims and get a better sense of the scene, all without the risk of smoke inhalation, collapsing ceilings or coming face to face with a roaring blaze. In these instances, robots aren’t being developed to take our jobs; they’re being developed to make our jobs easier and safer, and to take some of the grunt work off our plates.