AI zombies will soon be eating the brains of good, hard-working folks like us. Hawking. Musk. Gates. They warned us. Artificial intelligence is bent on dispatching humankind to the trash heap.
Sure, the Humanichs on CBS' Extant are staving off an attack by alien spores turned human hybrids. But I have my doubts about how that is going to work out. Intelligent robots were plenty helpful before they turned on us and the Fresh Prince of Bel-Air saved us – barely – from domination. Ava of Ex Machina ran amok. (But really. In her place, who wouldn't?)
HAL was a pain; Skynet, nothing but trouble. And the synths on AMC's Humans? Don't get me started. (Oh, the humanity.)
Back in the real world, a four-legged robot is opening a door in an engineering lab and a two-legged robot named Atlas is jogging – seriously, jogging – in the woods.
Now, I don't want to be an alarmist…but soon AI will be poring over our vital signs. Yours. Mine. Pretty much those of anyone who lets it. The sensors are in mass production. Wearables from Fitbit, Garmin, and Jawbone measure heart rate, even blood pressure. Google is developing a contact lens that measures blood sugar.
IBM's Watson is on track to crunch the big data from these and other life-sign monitors. Truth be told, I'm not that worried.
I've been through this before.
Thirty-six years ago I had my first encounter with artificial intelligence. I was visiting Stanford University, home of SUMEX-AIM (Stanford University Medical EXperimental computer for Artificial Intelligence in Medicine), rubbing elbows with the AI elite – Joshua Lederberg, Edward Feigenbaum, Edward Shortliffe.
Back then we believed the singularity was just around the corner. Of course, we didn't call it that. It was just AI, the logical extension of computing. But, as it turned out, AI was – and is – a lot more than that in ways we are only beginning to understand. Here's a new one, well, relatively new.
AI's success is going to take more than digital tinkering. And keeping it benign certainly is going to take more than all of humanity locking arms against it and singing Kumbaya. The future of AI will be determined to a large extent by our ability to nurture a positive working relationship with this new type of intelligence. And that won't be easy.
Flesh-and-blood doctors don't much like computerized diagnosticians. I learned that early on, writing about SUMEX-AIM. That fact also did not escape the early developers of computer-aided medical technologies. When these entered the medical mainstream shortly after the turn of the 21st century, their developers spun them so they'd be palatable to people. They turned computer-aided diagnosis into computer-aided detection. Same acronym. Hugely different meaning.
CAD's big break came as an adjunct to digital mammography. It was to this branch of women's health what the spellchecker is to writing. From the outset, CAD software was highly sensitive but notoriously nonspecific. It would identify just about every possible lesion in an image. This was very annoying to the mammographer, who had to go back and essentially re-interpret the image.
Yet mammographers embraced CAD as an aid.
To be sure, CAD has gotten better. But it still has a ways to go. One of the limiters may be the lack of something only people can provide – trust. "Suboptimal performance of the human–automation team is often caused by an inappropriate level of trust in the automation," opines one researcher who is looking into ways to make CAD more effective. "(Physicians) sometimes under-trust CAD, thereby reducing its potential benefits, and sometimes over-trust it, leading to diagnostic errors they would not have made without CAD."
Given what Watson might be able to achieve through IBM's proposed acquisition of Merge Healthcare, healthcare might be in for a big boost. But it's only going to happen if people understand what machines can – and cannot – do.
Trust and teamwork sound like strange goals when talking about the relationship between people and machines. But meeting those goals could be critically important. It's good to be wary. Look no further than Commander Bowman (2001: A Space Odyssey) locked outside the pod bay door in Jupiter orbit. But, if and when machines become intelligent, we're going to have to assess their capabilities and treat them accordingly.
It may take an attitude adjustment on our part, whereby we look at machine intelligence not so much as artificial but as assistive.
Seven years ago a mechanical engineer hinted at exactly that in an IEEE abstract, describing "the development of intelligent task-driven socially assistive robots."
Today there's a forum entitled "Assistive Intelligence And Technology."
A few days ago a story in Fast Company appeared under the title "Don't call it AI: Put away your fears of artificial intelligence.
Assistive intelligence is the future."
And so it has begun, swapping terms for AI, as we did for CAD. But, as with CAD, a word swap won't be enough.
We must be ready to view intelligent machines as "teammates." Subordinate ones, of course. Limited in their abilities. Beholden to us for having created them. But…not so obviously that we hurt their feelings.
Let's not be stupid about it.