In my last article, I wrote about how industrial organizations, in the rush to implement new technologies like AI, the cloud, and the Industrial IoT, have found themselves with technology stacks packed with legacy, plumbed-together, on-premises solutions. The result is an environment with not only multiple siloed data sources, each storing, formatting, and securing data in its own unique way, but also an equally siloed approach to turning that data into something actionable across the enterprise. Domain experts become not just go-to sources for understanding a certain process or workflow, but the only people with insight into, and meaningful context for, the data sets tracked or generated by different sources.

Workforce shifts put industrial data value capture at risk

In a rapidly digitizing organization, this is a poor way of maintaining and processing data across a site – but it's especially counterproductive when you consider the generational churn happening in today's industrial workforce. Veteran domain experts are increasingly retiring and being replaced by newer workers who are neither academically trained to handle such specific legacy technologies nor equipped with the domain knowledge and operational expertise of their predecessors. This expertise gap leaves industrial organizations not just with more aggregated data than they know what to do with, but with data they have no real visibility into.

Making industrial data useful and actionable in this scenario is a two-step process.

Step one involves leveraging next-generation data historians to democratize data access, ensuring that everyone within a plant and across the enterprise – regardless of skill, training, tenure, or expertise – has equal access and ability to tap into data that sits in any source, across the plant, from the edge to the cloud.
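To make the idea of democratized, tag-driven access concrete, here is a minimal sketch of what a tag-indexed historian lookup could look like. All names (`TaggedHistorian`, the tag values) are hypothetical, not any vendor's API; the point is that readings carry searchable tags, so a user finds data by what it describes rather than by knowing which siloed source system holds it.

```python
from dataclasses import dataclass, field

@dataclass
class Reading:
    """One historian sample: a timestamp, a value, and descriptive tags."""
    timestamp: str
    value: float
    tags: set = field(default_factory=set)

class TaggedHistorian:
    """Hypothetical sketch: tag-based access to time-series process data."""

    def __init__(self):
        self._readings = []

    def ingest(self, timestamp, value, tags):
        # Structure-on-ingest: raw values are stored with descriptive tags.
        self._readings.append(Reading(timestamp, value, set(tags)))

    def query(self, *tags):
        # Tag-based lookup: no source-specific knowledge required,
        # just match every reading whose tag set covers the request.
        wanted = set(tags)
        return [r for r in self._readings if wanted <= r.tags]

historian = TaggedHistorian()
historian.ingest("2024-01-01T00:00", 87.5, {"site-a", "boiler-3", "temperature"})
historian.ingest("2024-01-01T00:00", 2.1, {"site-a", "boiler-3", "pressure"})
historian.ingest("2024-01-01T00:00", 88.1, {"site-b", "boiler-1", "temperature"})

temps = historian.query("temperature", "site-a")
print(len(temps), temps[0].value)  # 1 87.5
```

In a real deployment the tag index would live in the historian itself and span edge and cloud sources, but the access pattern is the same: any user, regardless of tenure, can ask for "site-a temperature" without knowing where that data physically resides.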
Making data truly universal means using edge-to-cloud integrated data historians to eliminate silos, clean up data lakes, give structure to unstructured data, apply tags that make datasets easier to find, and deliver industrial data in an AI-ready state to drive industrial intelligence.

Step two is then making that data actionable, so that decision makers from the production floor to the management level understand not just what the data is telling them, but what next steps to take.

Evolving data historians with the Artificial Intelligence of Things (AIoT)

To turn raw data into actionable insights, industrial organizations need to evolve their data historians to benefit from machine learning (ML) and AI algorithms, leveraging an Industrial AI infrastructure that accelerates business value from industrial data. Data historians can't just be used to collect process data; they have to be treated as the core of a broader industrial data management strategy, one that shifts gears from mass data accumulation to more thoughtful application, integration, and mobility of industrial data. Purposeful application of AI and ML is key to facilitating that evolution in the data historian's function, tapping previously undiscovered or unoptimized industrial data sets for new business value.

Many leading industrial organizations are adopting an AIoT strategy to accelerate time-to-value from their AI investments. An AIoT strategy provides integrated data management, edge and cloud infrastructure, and a production-grade AI environment to build, deploy, and host Industrial AI applications at enterprise speed and scale. It also serves as the foundational infrastructure for realizing a transformative vision like the Self-Optimizing Plant.

Scaling AI for real-world applications requires providing the tools, infrastructure, and workflows to power Industrial AI across the solution lifecycle.
It also requires the software, hardware, and enterprise architecture needed to productize AI in industrial environments, including broader collaboration between development, data science, and infrastructure capabilities such as CloudOps, DevOps, and MLOps. This dimension is critical to helping organizations mature beyond sporadic AI proofs-of-concept to an enterprise-wide Industrial AI strategy.

Industrial AI supersedes "generic" AI in delivering real-world value

But not all AI is equal, and trying to apply a "generic" AI approach to your data historian in an industrial setting can undercut any ROI you hope to get out of it. It may be tempting to assume that training a generic AI model on large volumes of plant data will acclimatize the model to the plant's needs. But if the plant, for safety or design reasons, operates within a limited range of conditions, then the model ingests only that narrow band of data and teaches itself to operate within those guardrails. As a result, a generic AI model trained on plant data may not be as nimble as you expect – for instance, in responding to real-time market changes and adjusting production schedules accordingly.

Even worse, such a model can infer spurious correlations or causal links between industrial processes and plant equipment, giving decision makers insights or prescribed next steps that are simply wrong. This doesn't just harm the plant's ability to function and leaders' ability to make it more optimized and efficient; it also undermines the ability to productize AI in the industrial space and harms AI adoption overall.

Generic AI and ML won't do.
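The narrow-band problem described above can be illustrated with a small, self-contained sketch. The process curve below is entirely made up for illustration: a simple model is fit only to data from a plant's safe operating window, looks accurate inside that window, and then fails badly when asked about an operating regime it has never seen.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_output(load):
    # Assumed (made-up) process behavior: output falls off
    # nonlinearly as load rises, which the narrow data never reveals.
    return 10 * load - 0.08 * load**2

# Historical data only covers the safe operating band: 40-60% load.
load_hist = rng.uniform(40, 60, 200)
out_hist = true_output(load_hist) + rng.normal(0, 5, 200)

# A "generic" model: a straight line fit to that narrow band.
slope, intercept = np.polyfit(load_hist, out_hist, 1)

def predict(load):
    return slope * load + intercept

# In-band, the model tracks reality closely...
in_band_err = abs(predict(50) - true_output(50))
# ...but extrapolating to an unseen high-load regime is badly wrong.
out_band_err = abs(predict(95) - true_output(95))
print(round(in_band_err, 1), round(out_band_err, 1))
```

The in-band error stays within the noise, while the out-of-band error is an order of magnitude larger, which is exactly the failure mode that produces confident but wrong recommendations for conditions the plant has never operated in.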
Evolving your plant or refinery's data historian to match the needs of a more complex data environment means using more specific, fit-for-purpose Industrial AI – in other words, AI embedded into domain-specific applications focused on targeted business needs, rather than trained on a larger pool of plant data.

By deploying AI through purpose-built Industrial AI applications, rather than spray-and-pray AI approaches across the entire plant, industrial leaders both sidestep some of the (perceived) hurdles of implementing new technologies and ensure that the AI algorithms incorporate domain knowledge specific to industrial processes and real-world engineering. This ensures that the Industrial AI is both ingesting relevant data guided by domain-specific purposes and generating insights that give decision makers a more accurate picture of their environment, creating a safe, sustainable, and holistic workflow for decision-making that supports reliable long-term results.

To support and achieve their profitability, production, and sustainability goals, industrial organizations must evolve their current data historians into next-generation, industrial-grade data management solutions powered by an AIoT strategy, which provides the anchor technology for deploying Industrial AI applications across the enterprise. A data historian capable of mobilizing and integrating volumes of complex industrial data across the enterprise is not just a convenience; it's business critical. And to get there, industrial leaders need to invest in cloud-ready, purpose-built Industrial AI infrastructure and applications to future-proof the business against volatile and complex market conditions.