Artificial intelligence is wending its way into business processes, but CIOs should pause to research the potential business impact of AI tools, both good and bad.

Hype over artificial intelligence reached its zenith in 2017, with CIOs, consultants and academics touting the technology’s potential to automate anything from business and IT operations to customer connections. Yet through the first calendar quarter of 2018, several media organizations reported on the dangers of AI, which involves training computers to perform tasks that normally require human intelligence.

“There’s been so much hype in the media about it, and this is just journalists trying to extend the hype by talking about the negative side,” says Thomas Davenport, a Babson College distinguished professor who teaches a class on cognitive technologies.

Perhaps, but the concerns are hardly new and very persistent, ranging from fears about racial, gender and other biases to automated drones running amok with potentially lethal consequences.

One week after the MIT Technology Review published a story titled “When an AI finally kills someone, who will be responsible?” raising the issue of what laws should apply if a self-driving car strikes and kills someone, a self-driving Uber car struck and killed a woman in Arizona. Timing, as they say, is everything.

Here CIO.com details some of the concerns regarding adoption of AI, followed by recommendations for CIOs who want to begin testing the technology.

6 top AI concerns

1. How rude

As we learned from Microsoft’s disastrous Tay chatbot incident, conversational messaging systems can be nonsensical, impolite and even offensive. CIOs must be careful about what they use and how they use it. All it takes is one offensive, epithet-spewing outburst from a chatbot to destroy a brand’s friendly image.

2. Poor perception

Though developed by humans, AI is, ironically, not much like humans at all, according to Google AI scientist and Stanford University professor Fei-Fei Li, writing in a column for The New York Times. Li noted that while human visual perception is deeply contextual, AI’s ability to perceive images is quite narrow. Li says AI programmers will likely have to collaborate with domain experts, a return to the field’s academic roots, to close the gap between human and machine perception.

3. The black box conundrum

Many enterprises want to use AI, including for activities that may provide a strategic advantage, yet companies in sectors such as financial services must be able to explain how their AI arrives at its conclusions. It might be logical to infer that a homeowner who manages their electricity bills with products such as the Nest thermostat has more free cash flow with which to repay a mortgage. But enabling an AI to incorporate such a qualification is problematic in the eyes of regulators, says Bruce Lee, head of operations and technology at Fannie Mae.

“Could you start offering people with Nest better mortgage rates before you start getting into fair lending issues about how you’re biasing the sample set?” Lee tells CIO.com. “AI in things like credit decisions, which might seem like an obvious area, is actually fraught with a lot of regulatory hurdles to clear.
So a lot of what we do has to be thoroughly back-tested to make sure that we’re not introducing bias that’s inappropriate and that it is a net benefit to the housing infrastructure. The AI has to be particularly explainable.”

Without a clear understanding of how AI software detects patterns and observes outcomes, companies with risk and regulation on the line are left to wonder how strongly they can trust the machines. “Context, ethics, and data quality are issues that affect the value and reliability of AI, particularly in highly regulated industries,” says Dan Farris, co-chairman of the technology practice at law firm Fox Rothschild. “Deploying AI in any highly regulated industry may create regulatory compliance problems.”
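The back-testing Lee describes lends itself to a concrete illustration. The Python sketch below is a minimal, hypothetical example, not Fannie Mae’s actual process: it assumes an exported table of model-driven loan decisions with made-up column names (“group”, “approved”) and applies the widely cited four-fifths rule as a first-pass check for disparate impact across applicant groups.

```python
# Minimal, illustrative sketch of a disparate-impact back-test.
# The file name and column names ("group", "approved") are hypothetical.
import pandas as pd

def four_fifths_check(df: pd.DataFrame, group_col: str, outcome_col: str) -> bool:
    """Apply the four-fifths rule: every group's approval rate should be
    at least 80 percent of the most-approved group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()  # approval rate per group
    ratio = rates.min() / rates.max()
    print(rates.to_string())
    print(f"Selection-rate ratio: {ratio:.2f}")
    return ratio >= 0.8  # below 0.8 is a conventional red flag, not a verdict

decisions = pd.read_csv("loan_decisions.csv")  # hypothetical decision log
if not four_fifths_check(decisions, "group", "approved"):
    print("Approval rates diverge across groups; review the model for inappropriate bias.")
```

A real review would go well beyond this (proxy variables, significance testing, counsel’s reading of fair lending rules), but even a simple check like this turns “thoroughly back-tested” into something a team can automate and rerun on every model change.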
4. Ethnographic, socioeconomic biases

While running a project that used Google Street View images of cars to determine the demographic makeup of towns and cities across the U.S., Stanford University Ph.D. student Timnit Gebru became concerned about racial, gender and socioeconomic biases in her research. That realization prompted Gebru to join Microsoft, where she is working to ferret out AI biases, according to Bloomberg.

Even AI virtual assistants are encumbered by bias. Have you ever wondered why virtual assistant technologies such as Alexa, Siri and Cortana are female? “Why are we gendering these ‘helper’ technologies as women?” Rob LoCascio, CEO of customer service software concern LivePerson, tells CIO.com. “And what does that say about our expectations of women in the world and in the workplace? That women are inherently ‘helpers;’ that they are ‘nags;’ that they perform administrative roles; that they’re good at taking orders?”

5. AI leveraged in hacks, deadly attacks

AI’s rapid advance raises the risk that malicious users will soon exploit the technology to mount automated hacking attacks, mimic humans to spread misinformation or turn commercial drones into targeted weapons, according to a 98-page report crafted by 25 technical and public policy researchers from Cambridge, Oxford and Yale universities. “We all agree there are a lot of positive applications of AI,” Miles Brundage, a research fellow at Oxford’s Future of Humanity Institute, told Reuters. “There was a gap in the literature around the issue of malicious use.” The New York Times and Gizmodo also covered the report, titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.”

6. AI will turn us into house cats

Then there is the enslavement theory. Entrepreneur Elon Musk, of Tesla and SpaceX fame, has warned that humans run the risk of becoming dependent “house cats” to AI of superior intelligence and capabilities. More recently, Israeli historian Yuval Noah Harari posited that the emergence of AI that automates everything could create a “global useless class.” In such a world, he argues, democracy will be threatened because humans don’t understand themselves as well as machines do.

IT, undeterred

These concerns are largely overblown, according to Davenport, who points out that biases, for example, have long existed within the scope of ordinary analytics projects. “I don’t know anyone who ever worked with analytics who will say bias doesn’t exist there,” Davenport says.

Davenport, who recently completed a new book about big enterprise adoption of AI, “The AI Advantage,” says several large companies are already testing AI responsibly.

“We’re now seeing a lot of enterprise applications, and I haven’t heard anyone saying we’re going to discontinue our AI program,” Davenport says, adding that the technology remains immature. “The smart companies just keep working on this stuff and try not to get deterred by the pluses and minuses coming from the media.”

Indeed, IT leaders appear largely undeterred by the hype: More than 85 percent of CIOs will be piloting AI programs by 2020 through a combination of buying, building and outsourcing, according to Gartner. And while Gartner recommends that CIOs start building intelligent virtual support capabilities in areas that customers and citizens increasingly expect to be mediated by AI-based assistants, it also advises them to work with their business peers to create a digital ethics strategy.