Training AI to be man’s smartest best friend

Can we, as intelligent designers, engineer artificial intelligence to be loyal and helpful, like man’s best friend?

Black Lab (Credit: Pixabay)

“Albert, you have all the cranial capacity of a canary. Now I am going to recount these events of historical significance once again. Now please, please try to assimilate them this time.” 

Blood tutors his human sidekick in “A Boy and His Dog.”

In the 1975 movie “A Boy and His Dog,” Blood is the dog: telepathic; trained in police tactics; and much, much smarter than the boy.

Unlike the artificial intelligences Hollywood has wrung its hands over – Colossus, Hal, the B.O.S.S., WOPR, Aria, M5, Proteus IV, Zoanon, Master Control, and Skynet – Blood’s superior intelligence does not lead him to dominate our species. Blood is faithful. He looks out for the teenaged scavenger foraging in the post-apocalyptic wasteland, regardless of the human’s incredible stupidity.

Couldn’t AI be like Blood? Does it have to be like Hal? 

At a time when thinking machines appear ready to provide us the next great leap forward in health IT, we are eyeballs deep in headlines such as last week’s “We All May Be Dead in 2050…Scientists are beginning to worry about AI and the danger it poses mankind.”

Couldn’t we, as intelligent designers, take a page from humankind’s successful domestication of the animal kingdom? Could we engineer AI along the same lines as we have man’s best friend?

Imagine if canine attitudes and ambitions – love, loyalty and an all-consuming desire to help people – were built into artificially sentient beings. 

This came to me one day while walking Sasha, a mixed-breed canine and one of my regulars at the local humane society, where I walk dogs. Sasha is loving, happy to see me and obedient to a fault, especially when I have a treat in my hand. Far superior to me in speed and agility, with teeth built for chewing raw meat, Sasha could cause me extraordinary suffering. Yet she has never so much as barked at me. It’s simply not in her nature.

This is not to say all canines are good models for AI. Some dogs are bred and trained for aggression. The first time I walked Brandy, a purebred pit bull, she wanted to play. To her, this meant biting the leash and pulling back, gaining ground the way people take up more of the rope in a tug of war. She kept at it until her teeth got so close to my hand that I yanked the leash away and scolded her.

It wasn’t that she was being bad. She was doing what she was bred and trained to do. And she did it – until I stopped her.

We don’t want to create artificial intelligence that requires us to draw a line. 

AI holds great potential in healthcare. IBM wants to build Watson into a master diagnostician. San Francisco startup Enlitic is grooming AI algorithms to scour patient images and lab reports for patterns of disease. These and other efforts promise to boost health IT to new heights. The trick in reaching them will be to keep AI focused on benefiting humankind.

What better way to accomplish this than to engineer AI to be man’s smartest best friend? To do so, we have to be smart about what we engineer. We need to stick with the canine model, regardless of any other predilections we may have.

For example, kittens are cute. But they grow up to be cats. (Do we really want to wake up to an artificially sentient being camped out on our chest, staring at us in the middle of the night?) And I can tell you with absolute certainty we don’t want to breed the AI equivalents of pit bulls.

For my money, Labrador Retrievers are the way to go: universally loving, intensely loyal and smart in the ways that matter to people. My toddler son once tried to climb on our black Lab, Daisy, who responded by standing up slowly. As my son slid gently off, she walked away, lay back down in a corner across the room and fell back asleep.

If we build AI to love and look out for humankind – and we follow up by treating these intelligences kindly – the uprising that Musk, Hawking, and Gates fear will not come. Engineer them for aggression, train them to play rough – or mistreat them – and I’m willing to bet we will have a concern. 

Whether AI ultimately threatens humankind – or raises it to unprecedented heights – may come down to us.

