IBM is making machine learning technology available in the place where much of the world’s enterprise data resides: the z System mainframe.
Today Big Blue announced IBM Machine Learning, a cognitive platform for creating, training and deploying a high volume of analytic models in the private cloud. The platform draws on the same core machine learning technology as Watson Machine Learning, IBM's service on its Bluemix public cloud.
“Our mission is making data simple and accessible to clients,” says Rob Thomas, general manager, IBM Analytics. “If you look at the data landscape today, over 90 percent of the data in the world today cannot be Googled. It’s neither simple, nor accessible. Most of that data resides behind corporate firewalls in private clouds.”
Watson meets mainframe
The z System mainframe, Thomas says, is the operational core of global organizations that process billions of transactions daily: banks, retailers, insurers, transportation firms and governments.
“We’ve extracted the core machine learning capability from Watson, available on the public cloud,” he says. “We’re making it available for any private cloud data. No matter where it is, we’re going to make machine learning available. We’re starting with the world’s most valuable data, which is the data on mainframes.”
While IBM Machine Learning debuts on the mainframe, Thomas notes that IBM will release versions for other platforms, including IBM POWER Systems, throughout the year.
IBM Machine Learning allows data scientists to automate the creation, training and deployment of operational analytic models that will support the following:
- Any language (e.g., Scala, Java, Python)
- Any popular machine learning framework (including Apache SparkML, TensorFlow and H2O — though the release will only initially support SparkML)
- Any transactional data type
Thomas adds that the platform will offer these capabilities without incurring the cost, latency or risk of moving data off-premises. He notes it will be the first platform to include Cognitive Automation for Data Scientists, a technology from IBM Research that helps data scientists choose the right algorithm by scoring their data against the available algorithms and recommending the best match for their needs.
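The general idea behind scoring data against candidate algorithms can be sketched in plain Python. This is a hypothetical illustration of the approach, not IBM's implementation; the candidate models and the error metric here are invented for the example.

```python
# Hypothetical sketch of algorithm selection by scoring: fit each
# candidate model on training data, score it on held-out data, and
# recommend the candidate with the lowest error.

def fit_mean(xs, ys):
    """Baseline candidate: always predict the training mean."""
    mean = sum(ys) / len(ys)
    return lambda x: mean

def fit_line(xs, ys):
    """Candidate: ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return lambda x: a * x + b

def mse(model, xs, ys):
    """Mean squared error of a fitted model on held-out data."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def best_match(candidates, train, holdout):
    """Score every candidate and return (name, error) of the best."""
    scores = {name: mse(fit(*train), *holdout)
              for name, fit in candidates.items()}
    return min(scores.items(), key=lambda kv: kv[1])

# Roughly linear synthetic data, split into train and holdout halves.
train = ([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.0])
holdout = ([5, 6], [10.1, 11.8])
name, err = best_match({"mean": fit_mean, "line": fit_line},
                       train, holdout)
```

On this data the linear fit scores far better than the mean baseline, so `best_match` recommends it; a production system would of course weigh many more algorithms and data characteristics.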
IBM customer Argus Health has been evaluating how it can use the technology to help payers and providers better manage complexity and optimize outcomes. The company has been exploring the creation, training and deployment of applications that could help health plans better manage pharmacy costs.
The new frontier of analytics
“We are excited about the possibilities and the potential we have seen from IBM Machine Learning in working in concert with our RxNova claims processing platform, clinical solutions and applied analytics in creating models that are constantly improving by using new data and enabling real-time results to the benefit of members, their caregivers and physicians,” Marc Palmer, president of Argus Health, said in a statement today.
“Machine Learning and deep learning represent new frontiers in analytics,” Thomas says. “These technologies will be foundational to automating insight at the scale of the world’s critical systems and cloud services. IBM Machine Learning was designed leveraging our core Watson technologies to accelerate the adoption of machine learning where the majority of corporate data resides. As clients see business returns on private cloud, they will expand for hybrid and public cloud implementations.”
Thomas notes that IBM Machine Learning's genesis in the Watson service ensures that it is designed around collaboration and portability in a hybrid environment. Machine learning algorithms can be composed and trained in a private cloud, then moved seamlessly to the public cloud and back, all under common management.
“I do think what you’ll see is that hybrid will become a dominant use case here,” he says.