Artificial intelligence (AI) techniques are reaching deeper into work environments, not only replacing mundane jobs, but also augmenting or otherwise changing those that remain. They are permeating every aspect of business and are driving organizational strategies. In fact, Gartner predicts that by 2025, AI will be the top category driving enterprise infrastructure decisions.
Yet even as interest in AI rises, several myths about this technology persist. CIOs must identify and debunk those myths in order to devise sound strategies—or enhance existing ones—when driving implementation of AI projects. By understanding how AI works and where its limitations lie, CIOs can better use this technology to deliver business value.
Myth: AI is a luxury during the COVID-19 crisis
Reality: Interest and investment in AI continues to grow even amidst the COVID-19 crisis. In fact, a recent Gartner poll found that 24% of organizations increased AI investments since the onset of the pandemic, while 42% kept them unchanged.
Throughout the pandemic, AI has not only been critical for helping healthcare and government CIOs with tasks like predicting the spread of the virus and optimizing emergency resources, it has also been essential for businesses of all kinds to hasten their recovery efforts. AI has served as an important enabler of cost optimization and business continuity, supporting revenue growth and improving customer interaction as disruption continues.
Although AI is not a silver bullet, most organizations cannot afford to ignore its potential to fight both the immediate and the long-term impacts of the pandemic. CIOs must proactively promote AI not as a luxury, but as a powerful technology that can be used for pragmatic scenarios such as analyzing more data faster and augmenting decision making—both during and post-pandemic.
Myth: We don’t need an AI strategy
Reality: AI can be applied to a wide variety of business problems, but transformative business value is only realized when there is an AI strategy in place.
CIOs can maximize the value of AI by pairing business priorities with near-term opportunities, especially those that tap AI’s power to augment human work. Start by identifying the most promising AI use cases that align with strategic initiatives and critical business functions, such as automating administrative tasks to free up more time for innovation. Periodically revisit your organization’s approach to AI and ensure that decisions pertaining to AI implementation (or decisions not to use AI) are backed by research and deliberation.
Myth: AI will only replace mundane and repetitive jobs
Reality: Over time, many technologies have changed how people work and what skills they need to access well-paid opportunities. As a result, some professions have disappeared while new ones have emerged. For instance, it is rare to encounter paid typists today, just as ten years ago it was rare to find social media marketing managers.
AI technologies are expected to have a significant impact on how we work and learn, as well as what work we do. AI has the potential not only to automate tasks considered mundane or repetitive, but also to improve or change the jobs that remain by taking on higher-value tasks. For example, AI can read thousands of legal contracts in minutes and extract all the useful information from them faster and with fewer errors than lawyers can.
CIOs can ascertain the potential impact of AI on existing tasks by identifying activities that could be augmented or automated by AI, such as project management or customer service. Staff can then be retrained to do their jobs better or faster with AI’s help. It’s important to communicate frequently and transparently with employees and stakeholders to allay concerns about the use of AI, decreasing negative sentiment and helping teams prepare for the change that is coming.
Myth: AI and ML are the same
Reality: AI is an umbrella term for a broad set of computer engineering techniques. Within AI, there is a large subfield called machine learning (ML), which is the ability of machines to learn without being explicitly programmed. ML can be trained to recognize patterns in data, and it is usually good at solving one specific task. For example, ML can be used to classify whether an email is spam or not.
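The spam example can be sketched in a few lines: a minimal naive Bayes classifier that learns word frequencies from labeled messages rather than following hand-written rules. The training data and word choices below are purely illustrative.

```python
from collections import Counter
import math

# Toy training data -- illustrative only, not a real email corpus.
train = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("project status report attached", "ham"),
]

# The "learning" step: word frequencies per class are derived from
# the data, not programmed explicitly.
word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Naive Bayes with add-one smoothing; returns 'spam' or 'ham'."""
    scores = {}
    for label in word_counts:
        # Log prior: how common this class is in the training data.
        score = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.split():
            # Log likelihood with Laplace (add-one) smoothing.
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("claim your free prize"))  # -> spam
```

The point of the sketch is the one captured in the definition above: nothing in `classify` mentions specific spam words; the model picks up those patterns from the labeled examples.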
Similarly, ML is not the same as deep learning. Deep learning techniques or deep neural networks (DNNs) are a type of ML that is enabling amazing breakthroughs. But this doesn’t mean that deep learning is the best technology for all problems falling under the AI umbrella—and it doesn’t mean that DNNs will always be the most successful AI technology for a specific challenge. In fact, many current AI problems can be effectively solved using rule-based systems or traditional ML.
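To make the rule-based contrast concrete, here is a hypothetical filter for the same spam problem built from explicit hand-written rules instead of learned patterns. The keyword list and threshold are assumptions for illustration only.

```python
# Hand-written rules: every pattern is explicitly programmed,
# in contrast to an ML model that learns patterns from data.
SPAM_KEYWORDS = {"free", "prize", "winner", "claim"}

def is_spam(text, threshold=2):
    """Flag a message containing at least `threshold` known spam words."""
    hits = sum(1 for word in text.lower().split() if word in SPAM_KEYWORDS)
    return hits >= threshold
```

For a narrow, well-understood problem like this, such a system is cheap to build, trivially explainable, and needs no training data, which is exactly why rule-based approaches remain a sensible choice for many AI problems.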
The latest cutting-edge AI options are not always the most efficient solutions to business problems. Encourage data scientists to look at AI technologies as a whole and to implement those that best match the business model and goals. For complex problems, especially those requiring more human insights, it’s often best to combine deep learning with other AI techniques such as physical models or graphs.
When speaking to stakeholders, it’s important that CIOs clarify these commonly interchanged terms. Break down the overall discussion of AI into conversations about individual techniques, like ML, to demonstrate how each can solve real-world problems.
Myth: AI is all about algorithms and models
Reality: Building and applying ML algorithms to create a predictive model is often the easiest part of an AI project. The more challenging parts include ensuring that the problem being solved with AI is well defined and that enough of the right data is gathered and curated; deployment is often the most difficult part of all. In fact, through 2023, at least 50% of IT leaders will struggle to move their AI predictive projects past proof of concept to a production level of maturity.
CIOs should focus on defining the business problem that AI will solve by consulting with key stakeholders. They should also explicitly organize and manage the people, processes, and tools needed for testing, deployment, and other AI operationalization activities well in advance.
Myth: All black-box AI needs to comply with regulations
Reality: A black-box AI is an AI system in which inputs and processes are hidden from users. Different applications of AI have different levels of requirement for explainability, depending on the customer as well as the regulatory need for privacy, security, algorithmic transparency, and digital ethics.
AI that generates insights for internal use doesn’t necessarily require a high degree of explainability. However, AI that makes decisions about people (for example, in relation to eligibility for loans or credit) requires explainability. AI that makes decisions in a “closed loop” with important consequences (such as when enabling autonomous driving) has a high requirement for explainability, for ethical and possibly legal reasons.
CIOs must ensure that AI applications comply with existing ethical and legal directives. Provide support to testing and validation teams, as the data they gather will determine the need for explainability of the AI applications used.