sponsored

AI Systems Push Data To Its Limits


Roy Kim leads the FlashBlade products and solutions team at Pure Storage. Previously, Roy spent eight years at NVIDIA, leading product management and marketing efforts focused on artificial intelligence and analytics, helping grow a start-up within the company into a multi-billion-dollar business. Roy holds a Master's in Computer Science and a Bachelor of Science from M.I.T.

IDG recently sat down with Roy Kim, an artificial intelligence and deep learning expert at Pure Storage®, to discuss the data and storage needs of AI systems. Data is the fuel for AI, and as it turns out, AI challenges data storage systems like no other application ever has.

At what stage of the artificial intelligence (AI) lifecycle is fast storage most needed?

AI workflows generally have two stages: training and production. Modern storage is especially critical for the first. An example of a training process might be a healthcare research center that wants to use decades of medical imaging data to train an AI system, ultimately to help radiologists better interpret MRI images for future patients. The training phase may take months, because building more intelligent AI-powered solutions requires a great deal of computing power and data.

The data scientists building this system are the most valuable asset you have, and you want to keep them as productive as possible. If an AI model takes hours to train instead of weeks, that's a big deal for data scientist productivity. They need the right tools to do their work, such as massively parallel GPUs (graphics processing units), high-speed networks, and really fast storage. AI models and GPU processors consume 100 times more data than systems could 5-10 years ago. You just can't apply a legacy storage system to feed that beast.

What other demands does AI model training put on storage?

Humans learn in a random, parallel world, rather than a predictable, sequential world, and AI models learn as humans do. For the best outcome, AI models need to access data randomly rather than sequentially, so the storage system holding all your data must deliver it efficiently in that same random, parallel fashion.
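The random-access pattern Kim describes shows up even in a toy training loop. Here is a minimal Python sketch (illustrative only, not Pure Storage's or NVIDIA's code) of shuffled minibatch sampling, where each epoch reads the dataset in a freshly randomized order rather than front to back:

```python
import random

def shuffled_minibatches(dataset, batch_size, seed=None):
    """Yield minibatches of samples drawn in random order.

    Shuffling the index list each epoch means successive batches hit
    storage at effectively random offsets -- the access pattern that
    spinning disks handle poorly and flash handles well.
    """
    indices = list(range(len(dataset)))
    rng = random.Random(seed)
    rng.shuffle(indices)  # random, not sequential, order
    for start in range(0, len(indices), batch_size):
        batch_idx = indices[start:start + batch_size]
        yield [dataset[i] for i in batch_idx]

# Example: 10 samples in batches of 4 -> batch sizes 4, 4, 2
batches = list(shuffled_minibatches(list(range(10)), 4, seed=0))
```

In real training frameworks this role is played by a shuffling data loader, but the storage-facing consequence is the same: reads arrive in random order.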

The problem is that legacy storage systems are built for a predictable, sequential world. They fall apart when faced with randomness because they weren’t built to operate in this fashion. Spinning hard disk drives are very inefficient in getting to random data, for example. A legacy software stack built on top of storage systems must be rearchitected for the massively parallel and random modern world.

How does the AI-Ready Infrastructure (AIRI) address these challenges?

AIRI™, the AI-ready platform offered by Pure Storage in partnership with NVIDIA, is engineered to simplify and accelerate the AI training phase. Today, data scientists are asked to do quite a bit. Learn new models. Become experts on new tools. Turn models around more quickly. We wanted to lighten the load on data scientists as much as possible. So AIRI combines the entire AI technology stack, from AI frameworks and GPU processors to data storage, in one optimized solution. Under the hood, the engine is powered by Pure Storage's FlashBlade™ storage platform and four NVIDIA DGX-1 supercomputers.

AIRI is designed to be a workhorse for companies doing AI in the real world. One early customer was only getting about 20% average utilization out of their GPUs because their legacy storage system was a bottleneck. This meant data scientists were waiting five times longer than they should. With AIRI’s FlashBlade, the GPU utilization rose to nearly 100%.
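The "five times longer" figure follows directly from the utilization number: if GPUs are busy only 20% of the time, each unit of GPU work stretches across five units of wall-clock time. A back-of-envelope check in Python:

```python
def training_slowdown(gpu_utilization):
    """Wall-clock multiplier when GPUs sit idle waiting on storage.

    Utilization is the fraction of wall-clock time the GPUs spend
    computing, so total time scales as 1 / utilization.
    """
    return 1.0 / gpu_utilization

# Legacy storage bottleneck from the example above:
slowdown = training_slowdown(0.20)  # 5x longer than fully utilized GPUs
```

Raising utilization toward 100% collapses that multiplier toward 1, which is the productivity gain the customer saw.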

We introduced AIRI Mini to help companies that are just getting started in AI. With AIRI Mini as their entry point and AIRI as a future option, they can scale out their AI processes as they gain experience and knowledge.

For more information on Pure Storage and its AIRI and AIRI Mini offerings, click here.

Copyright © 2019 IDG Communications, Inc.