Don’t let bottlenecks in the data center bury your insights

At a time when 89% of companies compete primarily on the basis of customer experience, legacy systems can be an organization’s worst liability.

Old, heavily patched systems operating on fragmented data create bottlenecks that throttle performance, frustrate analysis, and bog down customer response times. Fixing the problem means understanding your data and moving storage and processing to the most appropriate parts of your infrastructure. Here are some ways to do it.

Stop hoarding data. It’s often said that the cost of storage is now so low that it’s cheaper to keep information than to throw it away. That may be true, but it’s a terrible philosophy for running an IT organization. Useless or unused data not only takes up space but can create regulatory and legal risks if it’s kept on hand longer than required. Archive old data to tape or move it to a low-cost Hadoop cluster.
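
As a starting point, a script like the one below can surface archive candidates. This is a minimal sketch that assumes a simple age-based cutoff and illustrative paths; real archiving criteria should come from your data catalog and governance policy, not a hard-coded number of years.

```python
# Minimal sketch: flag files that haven't been touched in years so they can be
# archived (to tape or a Hadoop cluster) instead of sitting on primary storage.
# The paths and the 3-year cutoff are illustrative assumptions, not a policy.
import os
import shutil
import time

CUTOFF_SECONDS = 3 * 365 * 24 * 3600      # ~3 years; adjust to your retention rules
SOURCE_ROOT = "/data/primary"             # hypothetical primary storage mount
ARCHIVE_ROOT = "/data/archive-staging"    # hypothetical staging area for tape/Hadoop

def archive_stale_files(dry_run: bool = True) -> None:
    now = time.time()
    for dirpath, _dirnames, filenames in os.walk(SOURCE_ROOT):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if now - os.path.getmtime(src) < CUTOFF_SECONDS:
                continue                  # still recent enough to keep in place
            dest = os.path.join(ARCHIVE_ROOT, os.path.relpath(src, SOURCE_ROOT))
            if dry_run:
                print(f"would archive: {src} -> {dest}")
            else:
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.move(src, dest)

if __name__ == "__main__":
    archive_stale_files(dry_run=True)     # review the candidate list before moving anything
```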

Find and categorize data. A good tool to use in the data-taming process is a data catalog, a specialized database that stores information about your data. Catalogs can automatically discover much of the data an organization already has and apply tags based on factors such as mission criticality, currency, intended use, and expiration date. You’ll need a disciplined process that limits which tags can be applied and who can apply them. This is a great way to understand what data you have, which is the first step toward getting it to the right place.
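
Treating the tag vocabulary like code helps keep that discipline. The sketch below shows one way to enforce a controlled set of tags; the field names and allowed values are illustrative assumptions, since commercial catalog tools define their own schemas.

```python
# Minimal sketch of a catalog record with a controlled tag vocabulary.
# Field names and allowed values are illustrative; a real catalog tool
# (or your governance policy) would define the actual schema.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

ALLOWED_CRITICALITY = {"mission-critical", "important", "low"}
ALLOWED_USES = {"analytics", "operations", "archive"}

@dataclass
class CatalogEntry:
    name: str
    location: str                        # e.g. a table, bucket, or file share
    criticality: str
    intended_use: str
    expires_on: Optional[date] = None    # None means no expiration has been set yet
    tags: set = field(default_factory=set)

    def __post_init__(self):
        # Enforce the limited vocabulary so tags stay comparable across teams.
        if self.criticality not in ALLOWED_CRITICALITY:
            raise ValueError(f"unknown criticality: {self.criticality}")
        if self.intended_use not in ALLOWED_USES:
            raise ValueError(f"unknown intended use: {self.intended_use}")

entry = CatalogEntry(
    name="customer_orders",
    location="s3://example-bucket/orders/",   # hypothetical location
    criticality="mission-critical",
    intended_use="operations",
)
```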

 

[ Industry Efficiencies Delivered by Intel AI ]

 

Eliminate data duplication. Next, look for duplicate data. Organizations tend to make many copies of data for a variety of purposes, such as analytics, development, testing, or simple convenience. IDC estimates that up to 60% of total storage capacity in most enterprises is now duplicate data. Those copies create security and compliance risks as well as confusion about data integrity. Get rid of as much duplicate data as possible and follow governance principles that discourage the use of copies.
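
A content-hash scan is a common first pass at finding byte-identical copies. The sketch below reports duplicates rather than deleting them, so results can be reviewed against governance rules first; the scan root is a placeholder.

```python
# Minimal sketch: find byte-identical duplicate files by content hash.
# It only reports duplicates; deletion should follow a governance review.
import hashlib
import os
from collections import defaultdict

def file_digest(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates(root: str) -> dict:
    by_digest = defaultdict(list)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            by_digest[file_digest(path)].append(path)
    return {d: paths for d, paths in by_digest.items() if len(paths) > 1}

for digest, copies in find_duplicates("/data/primary").items():   # hypothetical mount
    print(f"{len(copies)} copies share content hash {digest[:12]}:")
    for p in copies:
        print(f"  {p}")
```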

Retain crucial data. Most data eventually loses value and can either be safely deleted or archived. If there are no legal or business reasons to keep old data, get rid of it. Most regulated industries have rules about how long to keep data, and a good data governance policy should specify retention guidelines. The small amount of information that needs to be kept on hand in perpetuity for legal or archival reasons can be tagged for long-term storage.
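
Retention guidelines are easiest to enforce when they are written down as a simple policy table. The sketch below illustrates the idea; the categories and periods are assumptions, and the real values belong to your legal and compliance teams.

```python
# Minimal sketch of a retention decision driven by a governance policy table.
# Categories and retention periods here are illustrative assumptions only.
from datetime import date, timedelta
from typing import Optional

RETENTION_POLICY = {
    "transaction_records": timedelta(days=7 * 365),  # e.g. seven years
    "system_logs": timedelta(days=365),
    "marketing_exports": timedelta(days=90),
    "legal_hold": None,                               # None means keep indefinitely
}

def retention_action(category: str, created: date, today: Optional[date] = None) -> str:
    today = today or date.today()
    period = RETENTION_POLICY.get(category)
    if period is None:
        return "keep"                                 # legal hold or unknown category: never auto-delete
    return "delete-or-archive" if today - created > period else "keep"

print(retention_action("system_logs", date(2016, 1, 1)))   # -> delete-or-archive
print(retention_action("legal_hold", date(2010, 1, 1)))    # -> keep
```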

Identify and prioritize high-value data. The cataloging process should help you identify data that needs to be quickly available or close to the customer. Examples include current customer account status, buying histories, preferences, and any data used to personalize interactions. As a rule of thumb, prioritize any data a customer needs immediately, whether to view on a mobile device or to complete a transaction. These records should be moved to high-speed flash storage and/or to servers close to the point of customer contact. Regional colocation facilities are a popular way to position data at the edge of your network to minimize latency.

Use tiered storage. With recent announcements of new storage and memory technologies, the traditional storage hierarchy is changing, giving organizations more options for storing data cost-efficiently while also improving retrieval times. For on-premises deployments, the growing list of options includes flash, networked storage, direct-attached storage (DAS), and tape. Newest to the list: Intel® Optane™ technology sits closest to the CPU, combining the strengths of memory and storage to create a new data storage tier.

Flash is an established choice for frequently accessed data and keeps moving toward larger capacities. With access times far faster than disk, it is growing in popularity and is ideal for most customer-facing and business applications. When response time or fast data access is critical, Intel® Optane™ SSDs combine high throughput, low latency, high QoS, and high endurance to provide a new level of performance that accelerates applications for better customer experiences, business agility, and TCO. For even greater performance, Intel recently announced the coming availability of Intel Optane DC Persistent Memory, designed to bring very high-speed, low-latency non-volatile memory to a system’s memory bus. Think of it as an SSD that lives in a DDR4 DIMM slot and is written to as if it were system memory.

At the other end of the spectrum, Intel® QLC NAND is a newer type of NAND that enables much higher-density, more cost-efficient SSDs, making it a reasonable option for HDD replacement.

Disk isn’t going away, though. For applications that can tolerate some delay, it remains an inexpensive option. Networked storage is a good fit for shared files where retrieval speed isn’t a big issue. Direct-attached storage is faster than networked storage and is a popular option for Hadoop clusters, which can handle very large volumes of data at low cost with reasonably good performance. Tape is the least expensive option of all, but also the slowest; it’s great for archival data that can withstand retrieval times of minutes instead of seconds.

So, segment your data according to the retrieval speed you need, explore new technologies that provide additional tiering options, then move your data to the most appropriate tier.
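
One way to make that segmentation concrete is a small placement function driven by the latency and access-frequency tags from your catalog. The tier names and thresholds below are illustrative assumptions, not prescriptions.

```python
# Minimal sketch: map a dataset's access requirements to a storage tier.
# Tier names and latency thresholds are illustrative; the point is that
# catalog tags, not habit, should drive placement.
def choose_tier(max_latency_ms: float, accesses_per_day: float) -> str:
    if max_latency_ms < 1:
        return "persistent-memory"        # e.g. Intel Optane DC Persistent Memory
    if max_latency_ms < 10 or accesses_per_day > 1000:
        return "nvme-flash"               # e.g. Optane or QLC NAND SSDs
    if max_latency_ms < 1000:
        return "networked-or-das-disk"
    if max_latency_ms < 60_000:
        return "object-or-hdd-archive"
    return "tape"

# Example: customer-session data needs millisecond access; cold audit logs do not.
print(choose_tier(max_latency_ms=2, accesses_per_day=50_000))        # -> nvme-flash
print(choose_tier(max_latency_ms=300_000, accesses_per_day=0.1))     # -> tape
```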

Consider cloud storage. Object storage services in the cloud provide nearly limitless scalability, excellent reliability, and performance that can rival on-premises networked storage. Many companies use the cloud for backup or archival data because of its simplicity and the opportunity to jettison on-premises equipment and software licenses. Cloud infrastructure providers are offering an increasing variety of options, including low-cost archival storage. Be aware that vendors make it cheap and easy for you to move data into their clouds but may charge you to take it out. That makes them a good option for little-used backup files but not for frequently retrieved data.
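
For example, pushing an archive file into a low-cost storage class can be a short operation. This is a minimal sketch assuming AWS S3 via boto3; other providers offer equivalent archival classes, the bucket name is a placeholder, and retrieval from archival tiers can take hours and incur egress charges.

```python
# Minimal sketch: upload an archive file to an object store's archival class.
# Assumes AWS S3 via boto3 with credentials already configured; the bucket
# name and file paths are placeholders.
import boto3

s3 = boto3.client("s3")

def archive_to_cloud(local_path: str, key: str, bucket: str = "example-archive-bucket") -> None:
    # DEEP_ARCHIVE is the lowest-cost class; use STANDARD for frequently read data.
    s3.upload_file(
        Filename=local_path,
        Bucket=bucket,
        Key=key,
        ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},
    )

archive_to_cloud("/data/archive-staging/2015-orders.tar.gz", "backups/2015-orders.tar.gz")
```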

 

[ Key Considerations for Building a Data Management Strategy ]

 

Modernize servers. Customers have less and less tolerance for delay, so delivering results in seconds is critical. Moving data to speedy storage won’t do you much good if your servers can’t manage it. Modernizing infrastructure with Intel Optane technology enables you to deliver services at the highest possible speed and to consolidate workloads to save costs and improve reliability.

Many companies are now using machine learning to better understand their customers and to deliver personalized experiences on the fly. These applications are highly compute-intensive, which makes infrastructure optimization a must. An analytics process that takes hours to deliver a result isn’t of much value for making split-second decisions.

Adopt hybrid cloud. This deployment architecture gives you the flexibility to build and deploy applications wherever they make the most sense, whether that be on your own infrastructure, at a colocation facility, or in the public cloud. By adopting platform-as-a-service and containers, organizations can make a significant portion of their application portfolio portable, enabling them to shift platforms and allocate resources fluidly. This gives them the flexibility to accommodate demand spikes by adding infrastructure wherever it’s needed.

Customer expectations are now defined by the response times of world-class e-commerce companies. Use them as a benchmark to revitalize your own infrastructure.

Go here to learn more.