BrandPosts are written and edited by members of our sponsor community. BrandPosts create an opportunity for an individual sponsor to provide insight and commentary from their point-of-view directly to our audience. The editorial team does not participate in the writing or editing of BrandPosts.
By Ravi Naik, CIO & Executive Vice President of Storage Services at Seagate Technology
Considerable challenges with data access and mobility, together with the difficulty of acquiring reasonably priced storage for log streaming and its associated analytics, ultimately undermine business value. Furthermore, increasingly complex cloud architectures are forcing CIOs to rethink scalability. As they enter the new year, IT organizations are asking for more streamlined access to data. Let’s take a look at the three most desired solutions right now:
1. Geographic disaggregation of workloads
The term multicloud is, of course, a misnomer: there is no single multicloud. Rather, there are multiple clouds. Increasingly, this reality causes friction and delays.
CIOs want their teams to be able to leverage applications interchangeably so that data resides in one place while the application stack accessing it can change. A number of data access and mobility issues could be solved by making the data plane independent and allowing hyperscalers to plug in depending on business goals.
Using the same data for multiple purposes, without transferring it in and out of the locations where each function runs, would greatly reduce friction and time to insight, and it would eliminate the fees (egress, API, and so on) associated with access and migration.
This is already the case higher up the stack: organizations can run native AWS applications while disaggregating compute from storage.
Geographic disaggregation is the next step in this evolution. It won’t make sense for every workload (ultra-latency-sensitive applications cannot tolerate it), but it does for many others.
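The shape of this idea can be sketched in a few lines of Python. This is a minimal illustration, not a real deployment: the endpoint URL, bucket name, and regions below are hypothetical placeholders. The point is that when the data plane is independent (for example, behind an S3-compatible endpoint), swapping the application stack from one hyperscaler to another changes only the compute side, while the data stays in one place.

```python
# Sketch: one shared data plane, interchangeable compute stacks.
# All endpoint URLs, bucket names, and regions are hypothetical.

SHARED_BUCKET = "company-data-plane"  # the data resides in one place

def client_config(provider: str) -> dict:
    """Return connection settings for a compute stack on a given cloud.

    Only the compute side varies by provider; the storage endpoint and
    bucket (the independent data plane) are identical for all of them.
    """
    compute_regions = {
        "aws": "us-east-1",
        "azure": "eastus",
        "gcp": "us-central1",
    }
    if provider not in compute_regions:
        raise ValueError(f"unknown provider: {provider}")
    return {
        "compute_region": compute_regions[provider],
        # S3-compatible endpoint of the shared data plane (placeholder)
        "storage_endpoint": "https://objects.example-dataplane.com",
        "bucket": SHARED_BUCKET,
    }

# Swapping application stacks does not move the data:
aws_cfg = client_config("aws")
gcp_cfg = client_config("gcp")
assert aws_cfg["bucket"] == gcp_cfg["bucket"]
assert aws_cfg["storage_endpoint"] == gcp_cfg["storage_endpoint"]
```

Because no transfer occurs when the compute side changes, the egress and migration fees mentioned above simply never accrue in this model.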
2. Cost-friendly storage for log streaming and intelligent analytics
Both in cybersecurity and for businesses that aim to deploy assets and resources optimally, real-time log streaming is gaining importance.
When IT organizations collect logs, they can run analytics that indicate where and how to best deploy their non-human assets and resources. This also allows them to respond to incidents in real time.
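A minimal sketch of that kind of analytics, assuming simple structured log lines (the line format, field names, and sample entries here are illustrative, not any particular product's output):

```python
from collections import Counter

# Hypothetical log lines: "<timestamp> <level> <source> <message>"
LOG_LINES = [
    "2024-01-08T10:00:01 ERROR edge-gw-3 connection refused",
    "2024-01-08T10:00:02 INFO edge-gw-1 heartbeat ok",
    "2024-01-08T10:00:03 ERROR edge-gw-3 connection refused",
    "2024-01-08T10:00:04 WARN cam-17 low battery",
]

def incident_hotspots(lines):
    """Count ERROR events per source so responders can see, in near
    real time, which assets need attention first."""
    errors = Counter()
    for line in lines:
        parts = line.split(maxsplit=3)
        if len(parts) >= 3 and parts[1] == "ERROR":
            errors[parts[2]] += 1
    return errors.most_common()

print(incident_hotspots(LOG_LINES))  # → [('edge-gw-3', 2)]
```

Run continuously over a stream rather than a fixed list, the same counting logic is what surfaces incidents as they happen.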
Of course, as log volumes grow, so does the desire to store more data, but costs can quickly add up. Hence, enabling log streaming and analytics requires repositories built on vastly more affordable mass-capacity storage.
The result? You get “warm” (more frequently accessed) data storage performance with the kind of TCO that corresponds to archival (“cold”) storage.
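To make the TCO argument concrete, here is a back-of-the-envelope comparison. The per-gigabyte prices below are made-up placeholders, not real vendor rates; only the shape of the comparison matters: a mass-capacity tier holding a "warm" log repository can land far closer to archival cost than to performance-tier cost.

```python
# Illustrative only: these per-GB-per-month prices are hypothetical
# placeholders, not quotes from any vendor or cloud.
PRICE_PER_GB_MONTH = {
    "performance_ssd": 0.10,    # hypothetical "hot" tier
    "mass_capacity_hdd": 0.015, # hypothetical mass-capacity tier
    "archive": 0.004,           # hypothetical "cold" tier
}

def monthly_cost(tier: str, terabytes: float) -> float:
    """Monthly storage cost in dollars for a repository of a given size
    (1 TB treated as 1,000 GB for simplicity)."""
    return PRICE_PER_GB_MONTH[tier] * terabytes * 1000

# A hypothetical 500 TB log repository at each tier:
for tier in PRICE_PER_GB_MONTH:
    print(f"{tier}: ${monthly_cost(tier, 500):,.0f}/month")
```

Under these assumed numbers, the mass-capacity repository costs a fraction of the performance tier while the data remains online and queryable, which is the "warm performance at cold TCO" outcome described above.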
3. Cloud simplification
It’s a common story: organizations move from on-premises data centers to the cloud to achieve simplicity, and they often do, for a time. This lets CIOs score quick wins in the areas of desired growth and innovation.
However, as soon as they reach scale, companies run into the fog of cloud complexity. Among other things, this means much higher costs to support increasingly complicated cloud architectures. Suddenly, organizations take notice of what they see as unfulfilled promises from a TCO perspective.
As a result, organizations are seeking simpler architectures without unpredictable costs, sprawling services, and complex layers of technology at scale. It’s a tricky endeavor, because the fixes themselves can invite more complexity and disruption.
When CIOs and data architects solve this challenge, as they surely will, the reimagined architecture will make access to data—and its value—a lot easier.