Re-Architecting the Data Center

BrandPost By Lynn Comp
Jan 13, 2015 | 5 mins

The benefits and opportunities that disruption can bring


“Disruption” has been celebrated in the IT industry for at least a decade now, despite the word literally meaning “to throw into turmoil or disorder.” Anyone who has lived through a disruption knows how hard it can be to see the opportunity it creates. Yet throughout the history of business, disruptions have repeatedly given rise to remarkable opportunities and surprising growth.

In 2014 at GigaOM Structure, Diane Bryant went on stage with Tom Krazit to talk about the next wave of disruption the IT industry is experiencing. To quote from Diane’s blog about Intel’s vision: “We are in the midst of a bold industry transformation as IT evolves from supporting the business to being the business. This transformation and the move to cloud computing calls into question many of the fundamental principles of data center architecture. Two significant changes are the move to software defined infrastructure (SDI) and the move to scale-out, distributed applications.”

It can be a challenge to determine the right place to start when implementing “software defined” architectures, given the multiple stress points in today’s data center architectures, from storage, network, and compute all the way through data center operations. One specific example mentioned in Diane’s blog is the compound annual growth rate (CAGR) of data, structured and unstructured, which IDC estimates to be on the order of 40%. Unlocking robust economic value through big data insights is difficult, if not impossible, unless data centers begin to open the silos constraining their data operations. That starts with a shift to a more off-the-shelf, standards-based hardware model with standards-based software on top: breaking out of today’s box.

What type of innovation is necessary for emerging data center software solutions like Cloudera to operate more efficiently? Intel is going under the hood with the NVM and storage industries to break out of the higher-latency, more expensive box, making today’s “impossible” tomorrow’s possibility.

There are three major components to Intel’s next-generation NVM strategy today: Intel SSDs, the NVM Express I/O protocol, and Intel® Cache Acceleration Software. Together they reduce workload latency, shift the industry to a more standards-based, plug-and-play I/O protocol, and improve solution TCO. Each component is a step toward the responsiveness and agility needed to support “software defined” infrastructure.

First, Intel SSDs: they offer fast, consistent performance, up to 1400x better than a disk drive, while using less power and space than the equivalent HDDs. Depending on the workload, a single SSD can replace as many as six or seven hard drives. When data grows 40% or more per year, you eventually run out of space, and it is far less expensive to replace hard drive footprint with SSDs than to reconfigure the physical data center or build a new one.
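To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The starting dataset size, per-drive capacity, and 6:1 consolidation ratio are illustrative assumptions rather than Intel figures, but they show how quickly 40% compound growth outruns a fixed hard drive footprint.

```python
import math

# Back-of-the-envelope sketch (illustrative numbers, not Intel figures):
# 40% compound annual growth roughly doubles the dataset every two years,
# so a fixed hard-drive footprint runs out of room quickly.

CAGR = 0.40          # ~40% yearly data growth cited by IDC
data_tb = 500.0      # assumed starting dataset size
hdd_tb = 4.0         # assumed capacity of one hard drive
replace_ratio = 6    # workload-dependent 6-7 HDDs replaced per SSD

for year in range(1, 6):
    data_tb *= 1 + CAGR
    hdds_needed = math.ceil(data_tb / hdd_tb)
    ssds_needed = math.ceil(hdds_needed / replace_ratio)
    print(f"year {year}: {data_tb:7.0f} TB -> {hdds_needed:4d} HDD bays or ~{ssds_needed:3d} SSD bays")
```

Even with these rough assumptions, the footprint gap widens every year, which is the point about running out of physical space long before running out of budget for drives.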

Next, the NVM Express (NVMe) I/O protocol: Intel’s storage architects realized that the cost of an I/O should be measured not just in latency or responsiveness but also in the number of cores required to execute the IOP. Multiple AHCI controllers impose a tax on multiple system cores. In Intel measurements, shifting from legacy protocol stacks such as SAS or SATA to NVMe cut latency by as much as a third and halved the number of cores required to maintain the I/O rate. Intel recently launched its NVMe-connected SSDs at Computex, with significant interest from customers looking for lower latency and higher performance.
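As a rough illustration of that “CPU cost per IOP” argument, the sketch below models how many cores are needed to sustain a given I/O rate when each I/O consumes a certain amount of CPU time. The per-I/O overhead figures are assumptions for illustration only, with the NVMe case simply halving the legacy cost as described above.

```python
# Simple model: the "cost" of an I/O is not just its latency but also the
# CPU time burned per IOP. Overheads below are illustrative assumptions,
# not Intel measurements.

def cores_needed(target_iops: float, cpu_us_per_io: float) -> float:
    """Cores required to sustain target_iops when each I/O consumes
    cpu_us_per_io microseconds of CPU time."""
    # Each core supplies one CPU-second of work per wall-clock second.
    return target_iops * cpu_us_per_io / 1_000_000

TARGET_IOPS = 1_000_000

legacy_cpu_us = 10.0             # assumed per-I/O CPU overhead on a SATA/AHCI stack
nvme_cpu_us = legacy_cpu_us / 2  # cores required drop by roughly half with NVMe

print(f"legacy stack: ~{cores_needed(TARGET_IOPS, legacy_cpu_us):.0f} cores for 1M IOPS")
print(f"NVMe stack:   ~{cores_needed(TARGET_IOPS, nvme_cpu_us):.0f} cores for 1M IOPS")
```

Halving the per-I/O CPU overhead frees those cores for the application itself, which is where the TCO argument comes from.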

Lastly, there is Intel Cache Acceleration Software (Intel® CAS). The importance of this caching software comes down to further gains in responsiveness and performance: up to 3 times more performance on transactional database processing and up to 20 times faster processing of read-intensive business analytics. Intel CAS also improves the replacement ratio needed between hard drives and SSDs, since only the most frequently used data is placed on an SSD rather than doing a simple capacity-for-capacity replacement. This preserves any recent investments in additional hard drive capacity while delivering immediate performance benefits.
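The sketch below is not Intel CAS itself, only a toy illustration of the underlying idea: track access frequency and keep just the hottest blocks on a small SSD tier while the bulk of the data stays on existing hard drives.

```python
from collections import Counter

# Toy frequency-based cache: promote only the most frequently read blocks
# to a small, fast SSD tier; everything else is still served from HDDs.

class HotBlockCache:
    def __init__(self, ssd_slots: int):
        self.ssd_slots = ssd_slots
        self.ssd = set()        # block IDs currently promoted to the SSD tier
        self.hits = Counter()   # access frequency per block

    def read(self, block: int) -> str:
        self.hits[block] += 1
        if block in self.ssd:
            return "ssd"        # fast path: served from flash
        # Promote the block if it is now among the hottest ones.
        hottest = {b for b, _ in self.hits.most_common(self.ssd_slots)}
        if block in hottest:
            self.ssd = hottest  # evict cooler blocks, keep the hot set
        return "hdd"            # this request still came off spinning disk

cache = HotBlockCache(ssd_slots=2)
for b in [1, 1, 2, 3, 1, 2, 1, 2]:
    print(b, cache.read(b))
```

Because only the hot working set needs to fit on flash, a modest SSD can front a much larger pool of hard drives, which is why the capacity-for-capacity replacement is unnecessary.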

Transformation of the data center won’t happen overnight, but there are very practical first steps that offer immediate benefits while supporting the transition to software defined infrastructure. The three steps outlined above (Intel SSDs, the NVMe protocol, and Intel Cache Acceleration Software) lay a foundation for a transformed data center while delivering immediate gains in the performance, footprint, and agility of your data center solutions.

For more information, check out this video on Intel’s strategy for NVM.