To compete in today’s data-driven world, organizations need to accelerate the digital transformation process that puts technology at the heart of products, services and operations. Digital transformation enables both private and public entities to provide better outcomes and experiences for the people they serve — from smarter vehicles to personalized healthcare, from customized shopping experiences to the prevention of credit card fraud.
A common thread to these and countless other digital transformation use cases is artificial intelligence. AI applications and their underlying technologies, including machine learning and deep learning, enable organizations to train systems to use massive amounts of data to sense, learn, reason, make predictions and evolve. Under the hood, the engine that makes it all go is the blazingly fast processing power of high-performance computing (HPC) clusters.
AI, ML and DL: What’s the difference?
- AI is the broadest term, applying to any technique that enables computers to mimic human intelligence.
- Machine learning is a subset of AI that involves feeding large amounts of data into algorithms to train systems, giving them the ability to improve at tasks with experience.
- Deep learning is a type of machine learning that uses neural networks to solve problems like speech and image recognition in a hierarchical manner, similar to the way the human brain solves problems.
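The machine learning idea in the list above — a system that improves at a task with experience — can be sketched in a few lines of code. This is a hypothetical toy example (fitting a straight line by gradient descent), not taken from any of the systems described in this article:

```python
def train_linear_model(xs, ys, lr=0.01, epochs=5000):
    """Fit y ~ w*x + b by gradient descent on mean squared error.

    Each pass over the data ("experience") nudges the parameters
    toward values that predict the examples better.
    """
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Training examples drawn from the hidden rule y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = train_linear_model(xs, ys)
print(round(w, 2), round(b, 2))  # recovers values close to 2.0 and 1.0
```

The same principle — adjust parameters until predictions match the data — scales up to the deep neural networks mentioned above, which is where the processing power of HPC clusters comes in.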
Here’s a case in point, from a research initiative at Simon Fraser University (SFU) in Canada. SFU bioinformatics and genomics professor Fiona Brinkman, who leads the university’s Integrated Rapid Infectious Disease Analysis Project, uses the sophisticated and secure compute power of an HPC system from Dell EMC to understand disease outbreaks. That system, named “Cedar,” is designed to run a wide variety of scientific workloads, including those related to AI, machine learning, deep learning, personalized medicine and green energy technology.
“Basically, we’re using computers to study the DNA code in infectious disease microbes, and using that to understand how the diseases are spreading and how to better track them,” Brinkman says. “It’s sort of like being a DNA detective for infectious diseases.”1
In this case, the DNA detective must sort through the billions of base pairs, or nucleotide letters, in the DNA of bacteria and analyze that code of life for many organisms. And that takes a huge amount of computational horsepower. We’re talking about scientific investigations that couldn’t be conducted without sophisticated algorithms and the processing power of HPC systems.
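To give a flavor of the kind of sequence analysis involved, here is a hypothetical sketch that compares two short DNA strings by their k-mer profiles (counts of overlapping length-k substrings). Real outbreak-tracking pipelines run comparisons like this across billions of base pairs and many organisms, which is precisely why HPC horsepower is needed; the sequences and function names below are illustrative only:

```python
from collections import Counter

def kmer_profile(seq, k=3):
    """Count all overlapping k-mers (length-k substrings) in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def profile_similarity(a, b, k=3):
    """Fraction of k-mer occurrences shared by two sequences (0.0 to 1.0)."""
    pa, pb = kmer_profile(a, k), kmer_profile(b, k)
    shared = sum((pa & pb).values())       # k-mers present in both profiles
    total = max(sum(pa.values()), sum(pb.values()))
    return shared / total

# Two toy "strains" differing by a couple of letters
strain_a = "ATGCGATACGCTTGA"
strain_b = "ATGCGATACGCTAGA"
print(round(profile_similarity(strain_a, strain_b), 2))
```

Identical sequences score 1.0, and the score drops as mutations accumulate — a crude stand-in for the kind of relatedness signal researchers use to trace how an outbreak spreads.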
With the ability to churn through billions of data points in real time, HPC systems provide the power for machine learning and deep learning algorithms to identify trends and patterns in data that might otherwise be all but impossible to detect. Regardless of the use case or industry, these AI tools make possible analyses that were previously out of reach.
“The idea of AI is not new, but the pace of recent breakthroughs is,” notes an executive briefing from the McKinsey Global Institute.2 McKinsey attributes the acceleration of AI innovation to three factors:
- Machine-learning algorithms have progressed in recent years.
- Computing capacity has become available to train larger and more complex models much faster.
- We’re generating massive amounts of data that can be used to train machine learning models.
“For businesses, the opportunities are clear,” McKinsey notes. “Leaders should embrace the transformation and performance opportunities already available to them (and their competitors) from data, analytics and digitization, as well as the rapidly evolving opportunities in AI, robotics and automation.”
Ultimately, we’re talking about putting technologies like machine learning and deep learning to work to accelerate the digital transformation process that keeps organizations competitive. And to put those technologies to work, we need the power of high-performance computing systems — which is another reason why HPC matters.
For a closer look at Simon Fraser University’s use of high-performance computing, and its new “Cedar” supercomputer, watch the case study videos “Simon Fraser University: A Super Cedar” and “SFU DNA Detection with Machine Learning Opens Up a Whole New World.”
Making a difference with HPC
High performance computing touches virtually every aspect of our lives. HPC is making weather forecasts more accurate, cancer therapies more precise, fraud protection more foolproof and products more efficient. In this series of articles, we explore these and other use cases that capitalize on HPC and its convergence with data analytics to illustrate why HPC matters to all of us.
1 Dell EMC case study video, “SFU DNA Detection with Machine Learning Opens Up a Whole New World,” April 2018.
2 McKinsey Global Institute, “What’s now and next in analytics, AI, and automation.”