BrandPosts are written and edited by members of our sponsor community. BrandPosts create an opportunity for an individual sponsor to provide insight and commentary from their point-of-view directly to our audience. The editorial team does not participate in the writing or editing of BrandPosts.
By Omer Asad
For many CIOs, enterprise storage is synonymous with mission-critical performance and availability, but most organizations need more as they rev up to meet the new digital reality: they need to reinvent their infrastructure, and that can be an uphill battle. Despite the urgency of the drive for digital transformation, IT departments are held back by the constant need to administer, tune, and maintain storage infrastructure that supports existing mission-critical apps and data.
It’s past time for traditional approaches to give way. Future-leaning organizations have learned to steer clear of four common pitfalls as they move away from legacy architectures.
1) Don’t get trapped by complexity
The “Big Iron” storage era required specialized technical know-how to handle inescapably complex tasks. The countless knobs and configurations needed to keep infrastructure tuned and optimized also created opportunities for human error. This led to rigid infrastructure and prevented businesses from responding to IT needs that spanned disciplines.
Now, future-oriented enterprises are shifting to an IT generalist model that relies on the simplicity of self-service infrastructure to deliver business acceleration and responsiveness. Key to that move is a new generation of infrastructure with cloud-like agility that enables developers to provision for themselves. Highly automated and policy-based, this infrastructure delivers instant access to data and the flexibility to self-install and self-upgrade in minutes. Instead of thick manuals and a phone book of best practices to follow, IT can now provision with a few simple clicks and avoid tuning altogether. Meanwhile, by adopting infrastructure as code, DevOps is now free to use their familiar tools.
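The idea of policy-based, self-service provisioning can be made concrete with a small sketch. This is purely illustrative: the policy names, fields, and function below are invented for this example and do not reflect any particular vendor's API. The point is that a requester picks a named policy instead of turning low-level tuning knobs.

```python
# Illustrative sketch only: hypothetical policy-based provisioning.
# Policy names and the volume-spec format are invented for illustration.

POLICIES = {
    "mission-critical": {"replicas": 3, "snapshot_interval_min": 15},
    "dev-test": {"replicas": 1, "snapshot_interval_min": 240},
}

def provision_volume(name: str, size_gib: int, policy: str) -> dict:
    """Return a volume spec derived entirely from a named policy --
    the requester never touches low-level tuning parameters."""
    if policy not in POLICIES:
        raise ValueError(f"unknown policy: {policy}")
    return {"name": name, "size_gib": size_gib, **POLICIES[policy]}

spec = provision_volume("orders-db", 500, "mission-critical")
```

Because the spec is derived from the policy, the same request is reproducible and auditable, which is what lets generalists (or a DevOps pipeline using infrastructure-as-code tooling) provision safely without storage expertise.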
2) Don’t settle for less than perfect resiliency
Another feature of Big Iron storage: it’s typically architected with extreme redundancy to minimize the impact of storage failures. That’s why it’s called mission-critical; every second of downtime prevents doctors from treating patients, banks from fueling the economy, and online retailers from serving their customers. However, according to IDC, 90% of application problems arise above the storage layer. As a result, traditional storage-level resiliency often winds up as a substantial investment that still leaves applications exposed. In a word, it’s incomplete.
To meet the demands of today’s digital world, IT must be always-on and always-fast, period. For that, successful organizations are using infrastructure that provides application-aware protection across the entire stack. The new generation of mission-critical storage not only delivers 100% availability, but also has the intelligence to predict and prevent problems up and down the stack before they occur.
3) Don’t be held hostage by your storage
The current ownership model for Big Iron storage is broken. Organizations are stuck between a rock and a hard place, forced to choose between escalating maintenance costs and disruptive, time-consuming upgrades. Or worse, a forklift upgrade and months of data migration. Even incremental firmware updates can disrupt your apps when legacy storage requires updating the entire operating system.
It’s time to quit the endless cycle of expensive storage refreshes, unplanned downtime, and forklift upgrades. Successful organizations have embraced a new approach, one that flips the traditional model on its head with future-proof storage and a seamless path to innovation.
4) Don’t pay for storage you don’t use
One of the biggest selling points of the public cloud is its financial simplicity. It’s a myth, of course, that tapping into a cloud economic model is as simple as swiping a credit card. But enterprises love the ability to bypass big upfront costs in favor of a monthly bill. By opening up a range of new models for budgeting and funding IT, the cloud made IT resources measurable by usage.
Why can’t technology enable the same thing for on-premises infrastructure? For example, new metering systems can track, record, and report on how hardware resources are being used on-prem. When combined with a comprehensive set of services to deliver, monitor, and support the infrastructure, these technologies can provide a true on-demand solution for the physical data center.
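The metering idea can be sketched in a few lines. This is a hypothetical illustration, not any vendor's billing logic: the rate, sample format, and committed-capacity floor are all invented assumptions, included only to show how measured usage could translate into a pay-per-use charge.

```python
# Illustrative sketch only: turning capacity samples into a pay-per-use
# charge. The rate and the committed-minimum model are assumptions.

def monthly_charge(samples_gib: list[float], rate_per_gib_month: float,
                   committed_gib: float = 0.0) -> float:
    """Bill the average measured usage, subject to a committed floor --
    you pay for what you actually use above the minimum commitment."""
    avg_usage = sum(samples_gib) / len(samples_gib)
    billable = max(avg_usage, committed_gib)
    return round(billable * rate_per_gib_month, 2)

# Two samples averaging 120 GiB, billed at a hypothetical $0.10/GiB-month
bill = monthly_charge([100.0, 140.0], rate_per_gib_month=0.10)
```

A committed floor is a common design choice in consumption models: it gives the provider a predictable baseline while the customer keeps the upside of paying only for growth actually consumed.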
Successful enterprises are leveraging IT pay-for-use options to gain flexibility, increase control, and lower their total cost of ownership. Today’s as-a-service offerings deliver on-demand capacity and planning, combining the agility and economics of public cloud with the security and performance of on-premises IT.
It’s time to rethink mission-critical storage
Taken together, these four mandates point to primary storage solutions that deliver the cloud experience of simplicity and agility without compromising the performance and availability of mission-critical applications and data. Enterprises across industries can and should take advantage of them.
At HPE, we’ve left the agility versus reliability tradeoff behind by redefining mission-critical storage with HPE Primera. Delivering extreme availability and performance with the agility of the cloud, HPE Primera is breakthrough storage that ensures always-fast and always-on reliability for all mission-critical apps. An industry first, HPE Primera can be self-installed and self-upgraded in minutes and is backed by a 100% availability guarantee. Even better, HPE Primera provides a modern, as-a-service experience through HPE GreenLake.
In closing, I’ll leave you with one question: does your mission-critical storage help power your business?
Omer Asad is the Vice President & GM of Primary Storage in the HPE Storage & Big Data group. In this role, he leads HPE Primera/3PAR & Data Management Product Management, Nimble Product Management, and the Nimble Support teams. In addition, Omer is responsible for driving the Next-Gen strategy and overall business plan for our Primary Storage business. Omer holds a Master of Science degree in Computer Science from Duke University. He is based in San Jose, CA.