BrandPosts are written and edited by members of our sponsor community. BrandPosts create an opportunity for an individual sponsor to provide insight and commentary from their point-of-view directly to our audience. The editorial team does not participate in the writing or editing of BrandPosts.
By Janet Morss
In the development of artificial intelligence applications, the holy grail is the creation of an artificial neural network that functions like the human brain. This is an elusive goal, because the human brain is an extremely complex organ that functions in flexible and fluid ways that can be difficult to replicate in the world of AI.
Today, a team of leading-edge scientific researchers is making breakthroughs in this area by using functional magnetic resonance imaging (fMRI) of the brains of people carrying out various cognitive tasks. The goal is to better understand and create computational models of how the brain works, and then use those models to train artificial neural networks to map images to actions quickly and accurately.
For example, a fully developed computational model of how memory works would make it possible to compare a patient's brain activity against the model's predictions and understand which memory processes are at play. With this base, the research team could gain deep insight into the mechanics of memory function in those suffering from age-related brain illnesses, including Alzheimer’s disease and other forms of dementia.
This would be a big leap forward for the AI world, according to one of the lead researchers on the project, Dr. Pierre Bellec, an associate professor at the University of Montreal. Dr. Bellec is the scientific director of the Courtois Project on Neuronal Modelling (NeuroMod), which is spearheading the collaborative research effort.
“Something the brain does really well is to switch from one context to another,” Dr. Bellec explains in a Dell Technologies case study. “It has very elaborate organization, and specialized networks and subnetworks, and those networks and subnetworks are able to reconfigure dynamically. By contrast, current architectures used by AI researchers are extremely specialized for certain types of tasks, and have a hard time generalizing over different contexts.”
The researchers hope that by mimicking the architecture of the human brain, they can develop a more versatile AI model that can generalize over different tasks, much the way the human brain does.
To collect the datasets for this ambitious effort, the research team has recruited a small group of volunteers to watch videos, look at images and play video games while they are in an MRI machine. Because no metal can be brought into the scanner, the team had to build a new game controller: an all-plastic, 3D-printed device connected via a fiber optic cable. The MRI machine allows the researchers to track and record the activity in the brains of the subjects as they carry out their tasks. The research team expects to gather many terabytes of data over the course of the five-year study, as each subject will spend around 500 hours in the MRI machine.
“Essentially, we are trying to find a new way to integrate activity from human neural networks to help train artificial networks,” Dr. Bellec says. “The hope is that if we manage to do that, we can create computational models of how the brain works. And potentially we can train new artificial neural networks that may perform better in some settings than what we have now.”
To move this project forward, researchers from the University of Montreal teamed up with researchers from Dr. Alan C. Evans’ lab at McGill University, who have extensive experience in high-performance computing and work with MRI images that require large memory capacities.
They also sought the help of Dell Technologies and Intel, along with the data science and supercomputing resources of the Dell Technologies HPC & AI Innovation Lab in Austin, Texas. The team is using the lab’s Intel-based Zenith cluster, which includes Dell EMC PowerEdge™ servers with Intel® Xeon® Scalable Processors and the Intel® Omni-Path Architecture.
A CPU architecture with big memory
After initial testing on a GPU architecture, the team found that a CPU-based setup maintained comparable performance: validation accuracy reached 99 percent after 10 epochs when distinguishing five types of body movements, and 91 percent after 20 epochs when classifying eight types of visual working-memory tasks. Training was also much faster on CPUs in this configuration — about 20 minutes per epoch on 10 CPU nodes versus roughly 3 hours per epoch on two GPU cards. Because CPU resources are often easier to access, training directly on CPU nodes offers a practical path for applying deep neural networks to large-scale neuroimaging data, rather than waiting for scarce GPU resources.
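To make the metrics above concrete, here is a minimal, hypothetical sketch of an epoch-based training loop with per-epoch validation accuracy — the measure the team reports. This is not the project's actual code or data: it trains a simple softmax classifier on synthetic feature vectors standing in for fMRI-derived inputs, with five classes echoing the five body-movement types mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for fMRI-derived features: 5 classes
# (loosely, five body-movement types), 64 features per sample.
n_classes, n_features = 5, 64
centers = rng.normal(scale=2.0, size=(n_classes, n_features))

def make_split(n_per_class):
    """Draw noisy samples around each class center."""
    X = np.vstack([centers[c] + rng.normal(size=(n_per_class, n_features))
                   for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), n_per_class)
    return X, y

X_train, y_train = make_split(200)
X_val, y_val = make_split(50)

# Softmax (multinomial logistic) regression trained by full-batch
# gradient descent, entirely on CPU.
W = np.zeros((n_features, n_classes))
b = np.zeros(n_classes)
lr = 0.05

def accuracy(X, y):
    return float(np.mean((X @ W + b).argmax(axis=1) == y))

for epoch in range(10):
    logits = X_train @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)             # class probabilities
    p[np.arange(len(y_train)), y_train] -= 1.0    # gradient of loss w.r.t. logits
    W -= lr * (X_train.T @ p) / len(y_train)
    b -= lr * p.mean(axis=0)
    print(f"epoch {epoch + 1}: val accuracy = {accuracy(X_val, y_val):.2f}")
```

In the real project the model is a deep neural network distributed across many CPU nodes, not a single-process linear classifier, but the structure — iterate over epochs, update weights, check validation accuracy — is the same.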
The deluge of data associated with this research effort makes it even more important to have ready access to systems with big memory, which is what the team is getting through the Zenith supercomputer. This all-CPU system is on the TOP500 list of the world’s most powerful HPC machines, and it has been designed to support massively parallel traditional scientific applications as well as emerging machine learning workloads.
“Many people are excited about being able to evolve neural networks in ways that are inspired by biology, and it’s increasingly clear that we need a different type of hardware to do that,” Dr. Bellec says. “And that’s what we have with the Zenith cluster in the Dell Technologies HPC & AI Innovation Lab.”