For cash-strapped IT shops looking to get out from under manual storage management chores, storage orchestration software looks like a lifeline: It promises to let users choose from a catalog of predefined storage services and then handle the provisioning details behind the scenes.
It's a worthy vision, and one vendors are moving toward. However, there's currently no "single pane of glass" product that can automatically provision, resize, back up and recover storage across multiple public and private clouds, across systems from different vendors and for virtual machines running hypervisors from multiple vendors. Most orchestration tools support only a single product line, are optimized for certain functions or don't support the public, multitenant object-based storage services that provide the lowest cost and most flexibility.
It's even more rare to find orchestration tools that can manage both virtual machines and storage. Creating true global orchestration is an expensive, complex task usually tackled only by the largest enterprises or service providers that can spread the investment across multiple customers.
Today, storage management is "very fragmented, and things don't necessarily work well together," says Forrester Research storage analyst Andrew Reichman. "For the most part, [tools] are quite expensive, complex to use and have mixed results with [other vendors'] products.... The automation level of storage lags that of servers," especially when comparing storage management systems with server virtualization platforms such as VMware. With storage, "there is still a lot of manual, mundane work being done," says Reichman.
Despite some acceptance of standards for defining common storage and server functions, vendors are understandably reluctant to use them to make it easier for customers to move data from their products to those of their competitors. Some are also too busy integrating technologies they have acquired to focus on interoperability with their competitors.
Many of today's orchestration platforms are more like service catalogs that offer various service levels for different applications and use application programming interfaces (APIs) to storage and server management tools to deliver the services. HCL Technologies' MyCloud, for example, is "not like a management tool, but more like an aggregation platform [that] can integrate with the native management tools" from infrastructure providers such as VMware or Amazon, or existing management vendors such as BMC or CA, says Kalyan Kumar, associate vice president and head of cloud at HCL. Customers can request compute and storage services through it, but they must log in to each platform's management console to perform more sophisticated operations, such as archiving data, he says.
In the absence of universal orchestration, customers are using tools that support their hardware and software to solve problems in areas such as application availability, disaster recovery and quality of service. These products fall into several broad categories.
A growing number of vendors are offering "storage hypervisors" that virtualize the storage and, in some cases, their associated file servers to create scalable, flexible pools of storage. This virtualization layer often runs on standard x86 servers and is optimized for specific functions, storage protocols or applications. One example is DataCore Software's SANsymphony-V, which links to VMware's vCenter to automatically discover VMware servers running in a customer's environment. A systems administrator can then associate a given class of storage with various servers, and SANsymphony automatically provisions it.
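In concept, that class-of-service model boils down to a lookup table that maps a named storage class to a backing pool, plus a provisioning routine that carves a volume from it. The sketch below illustrates the idea; the class names, pool names and `provision` function are invented for this example and are not DataCore's actual API:

```python
# Illustrative class-of-service provisioning. The storage classes, pools
# and function below are hypothetical, not DataCore's actual interface.

# Each storage class maps to a pool with different cost/performance traits.
STORAGE_CLASSES = {
    "gold":   {"pool": "ssd-mirrored", "free_gb": 2048},
    "silver": {"pool": "sas-raid10",   "free_gb": 8192},
    "bronze": {"pool": "sata-raid6",   "free_gb": 32768},
}

def provision(server: str, storage_class: str, size_gb: int) -> str:
    """Carve a virtual volume for `server` from the pool backing `storage_class`."""
    cls = STORAGE_CLASSES[storage_class]
    if size_gb > cls["free_gb"]:
        raise ValueError(f"pool {cls['pool']} cannot satisfy {size_gb} GB")
    cls["free_gb"] -= size_gb                  # track remaining pool capacity
    return f"{cls['pool']}/{server}-vol-{size_gb}gb"

volume = provision("vmware-esx-01", "gold", 500)
print(volume)  # a gold-class volume carved from the SSD-backed pool
```

Once an administrator has tied a server to a class, every subsequent request for that class lands on the right pool without manual intervention, which is the point of the orchestration layer.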
Hosting and integration services firm Amnet Technology Solutions has been using SANsymphony for close to three years, and senior technologist Rich Conway says the product has provided "absolutely phenomenal" redundancy. "The entire storage infrastructure was essentially mirrored, where both sides are active/active, and if any component of either side fails for any reason, our entire grid stays up and our customers don't even notice," he says. SANsymphony has also enabled Amnet to eliminate planned downtime for routine maintenance such as firmware upgrades, says Conway.
Later this year, IBM plans to release IBM SmartCloud Virtual Storage Center, an appliance-based virtualization layer that will provide services such as backup, load balancing and snapshots across applications and provision the right storage for each class of service, says Steve Wojtowecz, vice president of Tivoli storage software development at IBM.
Combining IBM's SAN Volume Controller storage virtualization platform with its Tivoli Storage Productivity Center management software and the Tivoli Storage FlashCopy Manager, the SmartCloud Virtual Storage Center will provide consistent performance on multiple vendors' storage arrays in data centers within 300 kilometers of each other, says Wojtowecz. But it doesn't currently support file-based storage, he adds.
Zadara Storage runs its storage virtualization layer on commodity servers in its own colocated cloud facilities, turning direct-attached disk drives into virtual SAN arrays. Noam Shendar, vice president of business development, says this gives those drives the performance, reliability and security of more expensive SANs, and provides capabilities such as clustering using familiar SAN management tools.
Other vendors use a global file system to separate the details of where and how VMs or data are stored from the higher-level management objectives, such as meeting the terms of various service-level agreements (SLAs).
Among the vendors coming the closest to offering combined server/storage management with this approach is Tintri, whose "VM-aware" storage appliances are designed to replace traditional storage units such as volumes, LUNs and files with virtual disks. Tintri's VMstore file system monitors and controls I/O performance for each virtual disk, communicating with the VMware vCenter to detect which virtual machines are active and how they are using storage. It then automatically chooses the best combination of storage for each virtual machine, including fast but expensive solid-state drives and slower but less costly disks.
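The per-virtual-disk tiering decision Tintri describes can be sketched in a few lines: observe each virtual disk's I/O profile, then place hot disks on flash when there's room. The thresholds and function below are invented for illustration; the vendor's actual placement logic is proprietary:

```python
# Rough sketch of I/O-driven tier placement for a virtual disk.
# Thresholds and names are illustrative, not Tintri's actual algorithm.

def place_disk(iops: float, read_ratio: float,
               ssd_free_gb: int, size_gb: int) -> str:
    """Pick a storage tier for a virtual disk from its observed I/O profile."""
    hot = iops > 500 or read_ratio > 0.9   # latency-sensitive workload
    if hot and size_gb <= ssd_free_gb:
        return "ssd"                       # fast but expensive solid-state
    return "hdd"                           # slower but less costly disk

print(place_disk(iops=1200, read_ratio=0.7, ssd_free_gb=400, size_gb=100))  # ssd
print(place_disk(iops=40,   read_ratio=0.5, ssd_free_gb=400, size_gb=100))  # hdd
```

Because the decision is made per virtual disk rather than per LUN or volume, a single busy VM can be promoted to flash without dragging its quieter neighbors along.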
6 Tough Questions
When choosing a storage orchestration tool, Greg Schulz, senior adviser at the Server and StorageIO Group, recommends asking the following questions:
1. Does it enable the setup and scheduling of snapshots, replication, backup and other functions that ensure data availability?
2. How does the platform coordinate with other technologies, such as dynamic path management, that provide load management as application loads change?
3. How will the platform's performance and price be affected as your company adds more servers, storage and networks?
4. Will it be easy to install the vendor's system and integrate it into your company's environment?
5. How well does the vendor's platform integrate with your existing service catalog?
6. Can the platform recognize and comply with your policies on security, regulatory compliance and quality of service?
- Robert L. Scheier
Meanwhile, open-source vendor Red Hat claims that its Red Hat Storage Server, based on its GlusterFS file system, provides better scalability than rivals because it doesn't rely on a metadata server, more effectively distributes data and uses parallelism to maximize performance. Nutanix combines storage and server management, along with its own storage and performance management software, in a physical package that includes three to four x86 server nodes. Cisco takes a similar approach to combining computing, storage and networking with its FlexPod products.
One approach to cross-cloud storage management uses gateways that mask the differences among the APIs used by various cloud storage providers. TwinStrata's physical or virtual CloudArray (bundled with SANsymphony), for example, makes storage from any of 13 cloud providers appear as iSCSI devices to customers and applications. This allows connectivity and the use of a common management platform for functions such as disaster recovery and replication, says CEO Nicos Vekiarides.
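The gateway pattern amounts to putting one front-end interface in front of provider-specific adapters, so applications never see which cloud is behind it. Here's a minimal sketch; the backend classes and method names are invented placeholders, not TwinStrata's implementation:

```python
# Minimal sketch of a cloud-storage gateway: one front-end interface,
# provider-specific adapters behind it. All classes and methods here
# are illustrative placeholders, not TwinStrata's actual code.
from abc import ABC, abstractmethod

class CloudBackend(ABC):
    """The common contract every provider adapter must satisfy."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class FakeS3Backend(CloudBackend):
    def __init__(self):
        self._objects = {}
    def put(self, key, data):
        self._objects[key] = data
    def get(self, key):
        return self._objects[key]

class FakeRackspaceBackend(CloudBackend):
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

class Gateway:
    """Exposes one block-style read/write interface regardless of backend."""
    def __init__(self, backend: CloudBackend):
        self.backend = backend
    def write_block(self, lba: int, data: bytes) -> None:
        self.backend.put(f"block-{lba}", data)
    def read_block(self, lba: int) -> bytes:
        return self.backend.get(f"block-{lba}")

# Application code is identical no matter which provider is plugged in.
gw = Gateway(FakeS3Backend())
gw.write_block(42, b"payload")
print(gw.read_block(42))
```

Swapping `FakeS3Backend` for `FakeRackspaceBackend` changes nothing above the gateway, which is why a common management platform for replication or disaster recovery becomes possible across providers.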
Benefits plan administrator RxStrategies uses the TwinStrata gateway for cloud-based backup of its virtual machines and data. "On the outside, it looks like a SAN, which is old technology, but on the other side, it was actually part of the cloud, which enables us to transparently push our backup to Amazon or Rackspace," says senior developer Rick DeBay. In the future, he says, he would like to be able to store data on more than one public cloud and to easily move compute workloads to Amazon's EC2 public cloud and data to its S3 storage platform.
Other orchestration offerings are, however, limited to certain products or certain parts of the cloud.
CA Server Automation and CA Automation Suite for Clouds integrate with NetApp's OnCommand storage management software to provision NetApp storage for various classes of servers.
Caringo's CloudScaler virtualization layer provides automated, policy-based management -- but only of storage, not virtual machines. Like many other orchestration platforms, it doesn't currently support the object-based storage used in low-cost, multitenant public storage clouds such as Amazon S3, but Caringo is working to add that support.
Storage Automator, a storage service catalog and policy engine from iWave, currently supports only selected EMC and NetApp arrays, although broader support is due this year.
While it's the leader in server virtualization, VMware is working to differentiate itself from competitors such as Microsoft and its Hyper-V offering by "pushing to include more orchestration," says Reichman. With VMware vSphere 5.0, for example, it introduced storage profiles that let users map the capabilities of a storage system to a storage profile, helping to ensure each virtual machine uses the appropriate data store.
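Conceptually, a storage profile is just a set of required capabilities matched against what each data store advertises; a VM is compliant only when it sits on a data store that offers everything its profile demands. A simplified sketch follows; the capability and data store names are invented, and this is not vSphere's actual API:

```python
# Simplified sketch of storage-profile matching: a profile lists required
# capabilities, and a data store is compliant if it advertises all of them.
# Names are illustrative, not VMware's actual objects.

DATASTORES = {
    "ds-flash-01": {"ssd", "replicated", "thin-provisioning"},
    "ds-sata-01":  {"thin-provisioning"},
}

def compliant_datastores(profile: set) -> list:
    """Return data stores that advertise every capability the profile requires."""
    return [name for name, caps in DATASTORES.items() if profile <= caps]

gold_profile = {"ssd", "replicated"}
print(compliant_datastores(gold_profile))  # ['ds-flash-01']
```

At provisioning time, the orchestration layer would place (or flag) each virtual machine based on that compliance check, which is how profiles help ensure a VM lands on an appropriate data store.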
This summer, VMware acquired DynamicOps, whose architecture will allow vSphere and infrastructure administrators to model infrastructure services. This will enable the policy, governance and self-service management capabilities in vSphere to be extended to other hypervisors, hardware and clouds, according to a blog post by Ramin Sayar, VMware's vice president and general manager for cloud infrastructure and management.
Storage Management Portals
Don't Try This at Home
Customers pay managed services providers such as NaviSite to mask the complexity of the technology they use. That's why it was worthwhile for NaviSite to devote a "significant amount of work and time" to building its AppCenter portal, says Chris Patterson, a product manager for NaviSite's cloud and hosting services.
NaviSite expanded its R&D team "significantly" to integrate its underlying platforms with AppCenter, he says. The project included coding to the APIs of vendors such as Actifio, which is one of the "disk-to-disk" platforms that NaviSite uses for backup and recovery. "We worked with Actifio to create simple menu options," says Patterson. "So the customer says, 'I want to back up using either this profile or that profile,' and they can see what they've done."
NaviSite has a staff of 30 to 40 people who continually revise AppCenter and add new features to it. "Anyone could write this," Patterson says. "But unless you're a service provider, unless this is something you [must provide], I wouldn't recommend it."
- Robert L. Scheier
Many vendors' offerings are focused on areas such as data protection and disaster recovery, which were the most common needs cited by VMware users in a July 2012 survey conducted by the Wikibon technology analysis website. Again, many tools are limited to specific vendors' products or storage protocols.
Actifio, for example, tackles backup, disaster recovery and business continuity with its Protection and Availability Storage (PAS) appliance, which virtualizes both storage and storage functions such as copy, store, move and restore. But the PAS appliance supports only Fibre Channel-attached storage, such as SANs, and only disaster recovery, not the dynamic reprovisioning required to maintain the performance of production applications.
Even if this creates a stand-alone silo of tools and data for backup and recovery, that's an improvement over the multiple silos (and multiple copies of data) many companies use for anything from testing to disaster recovery or data analytics, says Andrew Gilman, senior director of global marketing at Actifio. He also says Actifio's globally deduplicated object-based file system reduces costs by storing and moving only changes to data.
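The store-only-the-changes idea behind deduplication can be shown in a few lines: split the data into chunks, hash each chunk, and store or move only chunks whose hash hasn't been seen before. This is a generic illustration of content-addressed deduplication, not Actifio's file system:

```python
# Generic content-addressed deduplication sketch: only chunks whose hash
# is new get stored. Illustration only, not Actifio's actual design.
import hashlib

CHUNK_SIZE = 4  # tiny chunks for demonstration; real systems use KB-scale chunks

def dedup_store(data: bytes, store: dict) -> list:
    """Split data into chunks, keep unseen chunks, return the chunk-hash recipe."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:        # store/move only data not already held
            store[digest] = chunk
        recipe.append(digest)
    return recipe

store = {}
r1 = dedup_store(b"AAAABBBBCCCC", store)
r2 = dedup_store(b"AAAABBBBDDDD", store)  # first two chunks dedup against r1
print(len(store))  # 4 unique chunks stored, not 6
```

A second copy that differs in one chunk costs only that one chunk of storage and transfer, which is where the cost savings come from.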
VirtualSharp Software says its ReliableDR "goes into the different layers of virtualization inside the cloud" and uses the APIs provided by storage vendors to create runbooks (defined sets of operations) to execute and verify disaster recovery and failover. However, it does this only for applications running on VMware hypervisors, and only for applications, not for the data they use.
Also, the tool supports only clouds running within corporate data centers, because, says CEO Carlos Escapa, "the market is so huge behind the firewall and the protection mechanisms are lacking." He adds that the fact that ReliableDR is capable of running multiple disaster recovery tests per day more than makes up for its lack of broader management capabilities.