BrandPosts are written and edited by members of our sponsor community. BrandPosts create an opportunity for an individual sponsor to provide insight and commentary from their point-of-view directly to our audience. The editorial team does not participate in the writing or editing of BrandPosts.
How do we know we’re living in the age of insight? Because today, every enterprise has a board-level mandate to create digital advantage by unifying data to glean business insights faster. Data is now the lifeblood of the enterprise — and data insights are key to evolving the organization’s business and operational models.
Whether it’s accelerating decision velocity, driving continuous innovation, or delivering new experiences for customers, data is and will remain essential to guiding the enterprise into the future. That’s why efficiently managing and analyzing rapidly growing volumes of data is critical for businesses across every industry.
Let’s unpack the state of enterprise analytics a bit. Organizations looking to initiate or expand their analytics and artificial intelligence (AI) workloads to harness the benefits of data-driven insight typically encounter a few common roadblocks. These include deploying infrastructure, hosting and managing billions of files, enabling edge-based data processing, and finally balancing hybrid cloud deployments to eliminate data silos. Here’s what you need to focus on to overcome these challenges.
Skyrocketing data volumes require new approaches
Not many years ago, enterprise analytics was largely associated with Apache Hadoop workloads crunching (by today’s standards) comparatively modest amounts of data, sent from edge sources and stored on disk-based systems in a centralized data center. But then data volumes exploded, and old constructs and applications like Hadoop were no longer sufficient.
In recent years, data sets — particularly unstructured data sets from IoT, images, and video at the edge — have expanded at a blistering pace. And organizations continue to adapt to, and plan for, exponentially larger data ingest in coming years — a trend we’ve discussed previously.
Enterprises today face an environment in which expanding data volumes are trapped in a siloed, multi-generational IT landscape, and that’s a problem. Siloed data and compartmentalized workstreams place a drag on the speed of business and translate into missed opportunities. Equally problematic, petabyte-scale data volumes have their own gravity, and — for reasons of efficiency and economics — now require processing closer to their sources. Why? First, businesses want to minimize time to insight, and processing data at speed remains the priority. Second, the costs of data egress and data movement make localized processing the most efficient approach.
Meanwhile, from an organizational perspective, data science and analytics teams want frictionless access to all data, wherever it resides, to speed time to insight and foster innovation via broad-based collaboration across diverse teams. In this context, we see five critical trends in enterprise analytics that any organization looking to build out analytics and AI initiatives should keep top of mind:
More data is now produced outside traditional data centers — quite a lot more, and particularly at the edge. All of that data needs to be processed efficiently and at the speed of business. Data science teams seek to deliver real-time insights and advantage by analyzing data in-place, and organizations need strategies to make it possible.
The edge has evolved to meet the demands of efficiently ingesting and analyzing a tsunami of data, and that has meant deploying compact computational capabilities that enable edge-based analytics to process data as close to the source as possible. IT needs to master edge-based infrastructure and deploy holistic data management solutions that cover the edge as easily as they do the data center.
There’s broad-based demand for analytics across the enterprise. IT leaders need to enable business intelligence (BI) users alongside SQL users. They also need ways to accelerate AI initiatives even while supporting legacy Hadoop workloads and new Spark and machine learning (ML) projects. And each of those customers needs access to the same data! Organizations need to power analytics team productivity by eliminating data silos and securing flexible, high-density infrastructure that can serve all data customers at speed and at scale.
Customers are embracing new open source analytics solutions. Apache Spark 3, with far greater performance by virtue of its in-memory processing and support for GPU acceleration, has displaced Hadoop in the race to accelerate analytics projects (though Hadoop still plays a significant role). Spark 3 also enables the separation of compute and storage, which facilitates multiple, diverse analytical workflows on the same data infrastructure. Other open source analytics solutions are also in the mix, and as a result enterprises will need to provide data scientists and engineers a unified approach across different data types.
Hybrid cloud is now the preferred option for cost-effectively scaling, persisting, and processing data while minimizing data movement and avoiding vendor lock-in. As a consequence, seamless data and app mobility across the environment — powered by containerized workloads — is now a must-have, as organizations embrace a hybrid approach to analytics, AI, and ML.
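The compute/storage separation described above shows up in how a Spark 3 job is configured: it reads from shared object storage through the S3A connector rather than node-local HDFS, so several compute clusters (BI, SQL, ML) can work against the same data. The snippet below is an illustrative sketch, not a reference configuration; the endpoint is hypothetical, and exact settings depend on your Spark and Hadoop distribution.

```properties
# spark-defaults.conf (sketch): point Spark at shared object storage.
# All spark.hadoop.* keys are passed through to the Hadoop configuration.
spark.hadoop.fs.s3a.endpoint                 https://objectstore.example.com
spark.hadoop.fs.s3a.path.style.access        true
spark.hadoop.fs.s3a.connection.ssl.enabled   true
# Scale compute independently of where the data lives.
spark.dynamicAllocation.enabled              true
```

With data addressed by URL rather than by cluster locality (for example, `spark.read.parquet("s3a://analytics/events/")` in PySpark, where the bucket and path are hypothetical), compute can be containerized, scaled, or replaced without moving the data.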
That’s quite a to-do list. But it’s all possible with a data-first modernization strategy and the right edge-to-cloud infrastructure. In our next article, we’ll examine some important considerations for building out modern analytics initiatives in your own environments — including the advantages of object storage and containerized platforms in unifying data across clouds.
About Matt Miller
Matt leads Solution Marketing for the HPE Storage business, covering such areas as file and object storage, scale-out storage, virtualization and containers, and cloud-native technologies. Matt has nearly a 20-year tenure in the storage industry, and came to HPE through the acquisition of Nimble Storage in 2017. At Nimble, Matt held product and solutions marketing roles where, in part, he grew the converged infrastructure business to over $100M and also led marketing for Nimble’s ground-breaking AIOps engine, InfoSight. Matt has also worked for industry innovators such as NetApp, Sun Microsystems, Veritas Software and Compaq. He holds a Bachelor’s degree in Business Management from Marist College, and an MBA from Vanderbilt University. Matt resides in the San Francisco Bay Area with his wife and two daughters. Connect with Matt on LinkedIn and Twitter!