The internal disruption of AI

Much has been made of AI’s ability to upend existing industry orthodoxies and disrupt markets. But could it disrupt your culture?


Artificial intelligence is de rigueur with executives and boards of directors. Show me an industry and I’ll show you an application for AI that’s captured management’s mindshare, snagging both budget and headcount (often from less-sexy initiatives). Accelerating disease diagnosis in healthcare, flagging risky behaviors in financial services, reducing carbon footprints in energy, or responding to support requests using the customer’s own vernacular—all offer compelling business and social value, promising new markets, competitive advantage, and yes, disruption.

What managers don’t bargain for—and often don’t want to hear—is that disruption can be internal as well as external. Nascent AI efforts can dent morale, breeding inertia and, ultimately, staff attrition.

The way we’ve always done things

New technologies often beget new delivery processes. Your IT team, proud of its agility, nevertheless applies rigor and oversight to technology delivery, sending code through peer review and scrumming to unravel knotty functionality.

But AI delivery often has its own rules. A manufacturing client published an 8-stage development structure that had become the standard for advanced analytics deployment. Data prep, algorithm testing, model training and tuning… although each stage was designed to be agile, the process as a whole had become unwieldy.

The new AI program manager refashioned the delivery process, announcing three stages: (1) prep and build; (2) train and tune; and (3) test and deploy. Everyone’s job function was squeezed into one of the three stages. The AI manager acquired Amazon SageMaker, a development framework packaging key functions and obviating the need for steps like manual data prep and A/B testing. Developers began commenting that their skills were being commoditized.

And it’s not just IT. Risk managers at a midmarket bank were asked to move away from their traditional analytics tools and trust a “black box” algorithm that would approve or deny credit applications. They could no longer control the data being ingested by the algorithm. Instead, they were expected to accept creditworthiness recommendations without the ability to explain the decisions. “I had a ‘gold level’ customer denied for an auto loan,” said a beleaguered mortgage analyst, “and I had to explain to his private banker why. I couldn’t justify the decision. She couldn’t explain the denial to the customer, who took his business elsewhere.”

Failing to plan

With new technologies come new delivery mechanisms. And often, new job roles or organizational structures. Like the technologies themselves, these changes should be tested for effectiveness, and tuned to ensure they work within incumbent processes.

Managers advocating AI projects are often enamored of so-called “skunkworks” that prove the efficacy of a solution. Such projects can quickly migrate from trials to sanctioned programs, formalizing ad hoc teams and codifying frameworks before they’ve been tested.

Indeed, AI can have unintended ripple effects. Managers should be aware of the internal shifts caused by AI efforts and consider these factors in their planning:

  • New organizational structures. Should an AI team or workgroup be separate from the existing analytics team or competency center? Will the chain of command also differ? Be prepared to explain why.
  • The need for new talent. Many developers would like to work with emerging AI solutions. Technical staff accustomed to hypothesis-driven analytics might lack AI development skills. How much are you willing to invest in training existing staff, versus hiring anew?
  • Adhering to existing success metrics—or not. In many companies, time-to-delivery is a KPI. But you might be willing to trade speed temporarily for comprehensive training on new processes and toolsets, thus mitigating risk. Know the trade-offs before doubling down on traditional measures.
  • New investments. How much time and money do you want to invest in the development frameworks, automation, and testing that accompany AI? How will you explain this to the executives approving the funding?
  • Sunsetting existing tools or packages. AI algorithms could prove more accurate than the fraud prediction toolset you’ve been using for the past decade. Are you willing to replace the old with the new? How will you manage the expectations of loyal users reluctant to surrender their favorite software? Will they be involved in the decision?
  • Color in the goal. For all their hype, AI projects are subject to many assumptions. Using a convolutional neural network to diagnose skin lesions via digital images is very different from testing unsupervised learning to look for unanticipated correlations. Communicate the objective, even if it’s merely learning something new.

AI has the potential to revolutionize how companies create value. It also has the potential to foment unease and raise questions about corporate objectives. Forewarned is forearmed.

This article is published as part of the IDG Contributor Network.
