Last year, I wrote two posts on Network World about mass data fragmentation (MDF). The first defined the problem, and the second highlighted how the issue can be addressed. Since those articles were published, I've had a number of discussions with IT professionals about the topic and its importance, and I believe it deserves attention from CIOs.
Secondary storage shouldn’t be the ugly stepchild of data
Historically, the management of data, particularly storage, has been handled by mid- to high-level engineers, with CIOs paying little attention to it. That's because secondary storage has been the ugly stepchild of the data industry for years: it's typically hard to access, frequently stored on antiquated local file servers and tapes, often duplicated, and generally difficult to work with.
The reality is, though, digital success depends on having access to ALL data. CIOs are tasked with helping businesses drive innovation, and that starts with data. Eliminating mass data fragmentation will improve worker productivity and enable businesses to be more competitive and data-centric.
3 steps to eliminate mass data fragmentation
- Consolidate previously separate solutions for backup, archiving, file sharing, test/dev, and analytics onto a single platform, eliminating the need to keep legacy infrastructure in place. This wasn't possible before, but hyper-converged infrastructure (HCI) now makes it practical.
- Manage all aspects of secondary storage through a single GUI. This includes tasks such as setting protection policies and SLAs, managing data center or cloud environments globally, and ensuring the optimal use of resources. This also makes it easier to ensure compliance with regulatory mandates.
- Run applications on the same platform, as this can uniquely exploit the value of secondary data and accelerate digital transformations. This includes homegrown applications but also ones developed by ISVs or partners.
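As a sketch of what the second step's single point of control might look like, here is a minimal, hypothetical Python model of a protection policy with an SLA applied uniformly across workloads wherever they run. The class and field names are illustrative assumptions, not any vendor's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class ProtectionPolicy:
    """One policy covering backup frequency, retention, and replication."""
    name: str
    backup_every_hours: int   # RPO: how often a recovery point is captured
    retain_days: int          # how long recovery points are kept
    replicate_to: list = field(default_factory=list)  # e.g. a DR site or cloud tier

def apply_policy(policy, workloads):
    """Attach one policy to many workloads, regardless of where they run."""
    return {w: policy.name for w in workloads}

gold = ProtectionPolicy("gold-sla", backup_every_hours=4, retain_days=90,
                        replicate_to=["cloud-archive"])

# The same policy governs an on-prem VM, a NAS share, and a cloud database alike.
assignments = apply_policy(gold, ["erp-vm", "hr-file-share", "orders-db"])
print(assignments)
```

The point of the sketch is the shape of the model: policies and SLAs are defined once, centrally, and assigned across environments, rather than being re-implemented per silo.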
Benefits of eliminating mass data fragmentation
There are several benefits to eliminating mass data fragmentation, including the following:
- Elimination of silos. Consolidating platforms obviates the need for siloed legacy infrastructure, such as backup and deduplication appliances, NAS storage, cloud gateways, and media servers. This can be achieved by running fully functional software-defined replacements on an HCI platform to handle all secondary workloads together. I've talked to companies that have seen a TCO savings of up to 70 percent for backup alone. It's important to understand that the data itself isn't physically consolidated; rather, control of the data is consolidated, which effectively masks the underlying complexity in software.
- Eradication of copies. Multiple copies of the same data can be the death knell of business processes, as more time is spent finding the right version than on the actual task at hand. Consolidating the data enables deduplication to be used to reduce the data footprint being managed and eliminates the proliferation of copies. If a copy is required for something such as test/dev, snapshots can be taken with zero impact on resources.
- Data becomes location-independent. Centralized control of data enables businesses to manage it in any environment, including data centers, public and private clouds, remote offices, and edge locations. Secondary storage and applications can be controlled wherever they are located, rather than through gateways or other intermediary devices.
- Focus shifted from data collection to data connections. By spanning physically isolated data islands, businesses can create a logical data fabric that acts as if it were a single centralized pool. This allows in-place analytics without having to move or copy the data. A big benefit here is that it reduces the need to assemble data lakes for many analytics use cases, because the data can be left in place.
- Operational simplicity. Eliminating mass data fragmentation obviates the need for multiple specialist operators for secondary operations such as backup, file sharing, clouds, and test/dev. A single administrator role can manage everything through one GUI.
- Unlimited scale. HCI solutions are designed to scale limitlessly, on premises or in the cloud, with a pay-as-you-grow model. Everything is fully distributed with no single choke point, so businesses can start small and add capacity as needed without disruption.
- Improved visibility and search. Consolidating the control plane enables the data to be automatically indexed and immediately searchable, thus "shining a light" on previously dark data. This reveals the "where" and "what" of an organization's secondary data estate, allowing it to check GDPR compliance, make intelligent decisions about data retention, meet e-discovery requests promptly, and monitor unusual behavior that may signal a security threat.
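To make the "eradication of copies" point concrete, here is a minimal sketch of content-addressed deduplication in Python: chunks are identified by their SHA-256 hash, so identical data is stored only once no matter how many files reference it. This illustrates the general technique, not any particular product's implementation (real systems use much larger, often variable-size chunks):

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: each unique chunk is kept exactly once."""
    def __init__(self, chunk_size=4):
        self.chunk_size = chunk_size
        self.chunks = {}   # chunk hash -> raw bytes (stored once)
        self.files = {}    # filename -> ordered list of chunk hashes

    def write(self, name, data: bytes):
        hashes = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            h = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(h, chunk)  # a duplicate chunk costs nothing extra
            hashes.append(h)
        self.files[name] = hashes

    def read(self, name) -> bytes:
        return b"".join(self.chunks[h] for h in self.files[name])

store = DedupStore()
store.write("report-v1.txt", b"ABCDABCDABCD")    # three identical chunks
store.write("report-copy.txt", b"ABCDABCDABCD")  # a full duplicate file
print(len(store.chunks))                # -> 1 unique chunk actually stored
print(store.read("report-copy.txt"))   # -> b'ABCDABCDABCD'
```

The same mechanism explains why a test/dev copy can be near-free: a snapshot is just another list of references to chunks that already exist.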
The inability to connect information buried within silos has been problematic for decades. However, the impact was limited to user inconvenience because secondary storage wasn't mission critical. Today, the stakes are much higher, as data is the lifeblood of digital businesses. CIOs need to make eliminating mass data fragmentation a top priority, as companies that do will have better insights to leapfrog the competition. Companies that can't will fall behind and struggle to survive.