Just about every digital transformation journey is built on cloud, artificial intelligence (AI) and robust security. And IBM has been a firm believer that the three are not mutually exclusive, but tightly interconnected.
In CIO India’s exclusive interaction with Subram Natarajan, CTO of IBM India/South Asia, he shares the formula for operating in a multi-cloud environment and explains why artificial intelligence models ought to be explainable.
We also take a look at IBM’s increased focus on open source, why it is betting big on blockchain, and how its POWER9 fares against Intel’s crown jewel – the x86.
Subram, you have been a firm believer that cloud, AI and security are interconnected. Could you share the rationale behind that?
Cloud, AI and security are interconnected. You will run into a lot of challenges, particularly with hybrid cloud – when public cloud services are consumed in a private cloud format.
Companies have only begun to scratch the surface when it comes to harnessing the cloud, especially on the infrastructure-as-a-service (IaaS) side. There are enormous benefits organizations can derive by leveraging the hybrid cloud. And this is why hybrid clouds or an open platform matters a lot to our customers.
Managed services have become a key consideration for customers. Our Multicloud Manager allows people to not only monitor, but also provision across multiple vendors.
Now cloud is the arterial component of any transformation. If you really want to make a change in the AI space, you need to have a very sound data platform, and data platforms need an elastic infrastructure.
If you take any of the emerging technologies – be it AI, blockchain, IoT or even digital transformation, the underlying component is that of a cloud-native developer base.
Security has to be considered in totality, not in isolation – you cannot secure only your end-points or database alone. The challenges with respect to open standards have prompted us to provide open security data integration services – mainly for sharing and normalizing threat intelligence.
A lot of organizations are now operating in a multi-cloud environment. How is IBM leveraging this trend to enhance its security portfolio?
When you are operating across multiple clouds, security becomes a lot more critical because you find yourself amidst different service providers, multiple security products, data repositories, and different security frameworks.
In our own security services, we use AI and machine learning for global threat analytics and orchestration.
The data that comes in from multiple security providers can be analyzed in an open framework. The AI-based threat analytics we do is based on data from multiple clouds. And this is a part of the IBM Security Connect portfolio.
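The idea of normalizing threat data from multiple providers into one open framework can be sketched in a few lines. The provider formats and field names below are hypothetical, purely to illustrate the pattern of mapping heterogeneous alerts into a common schema:

```python
# Sketch: normalize alerts from two hypothetical cloud security
# providers into one common schema so they can be analyzed together.

def normalize_provider_a(alert: dict) -> dict:
    # Provider A reports severity as a 0-100 numeric score.
    return {
        "source": "provider_a",
        "timestamp": alert["ts"],
        "severity": "high" if alert["score"] >= 70 else "low",
        "indicator": alert["ip"],
    }

def normalize_provider_b(alert: dict) -> dict:
    # Provider B already uses named severity levels.
    return {
        "source": "provider_b",
        "timestamp": alert["time"],
        "severity": alert["level"].lower(),
        "indicator": alert["src_addr"],
    }

def normalize_all(feed_a, feed_b):
    # One flat list in the common schema, ready for analytics.
    return ([normalize_provider_a(a) for a in feed_a]
            + [normalize_provider_b(b) for b in feed_b])
```

Once every event carries the same fields, downstream analytics no longer needs to know which cloud or product produced it.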
What’s your take on Secure DevOps taking center stage in 2019 and beyond?
Traditionally, DevOps was thought of as the last place to implement a strong security framework. But the fact is that your DevOps pipeline holds your biggest IP, and therefore you need to secure it well.
Suppose you take an AI-based application as part of your DevOps rollout; one of the things you’ll use to train the AI model is data from your own company. So the security framework deployed in the production environment needs to be applied in DevOps as well.
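One common way to apply production-grade security to company data used for training is to pseudonymize sensitive fields before the data enters the DevOps pipeline. The field names and salting scheme below are illustrative assumptions, not a prescribed IBM practice:

```python
import hashlib

# Hypothetical set of fields treated as sensitive in this sketch.
SENSITIVE_FIELDS = {"email", "customer_id"}

def pseudonymize(record: dict, salt: str) -> dict:
    """Replace sensitive values with truncated salted hashes, so
    records can still be joined consistently but no longer expose
    the original identifiers to the DevOps environment."""
    out = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:16]
        else:
            out[key] = value
    return out
```

Because the same salt always maps the same identifier to the same token, the masked dataset remains usable for training while keeping raw identifiers out of non-production systems.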
Now that we’ve broached the topic of AI, a lot of security providers claim to have deployed AI solutions, but in reality, they are condition-based ML algorithms. What factors, in your opinion, truly qualify as AI solutions?
The crucial thing most AI solution providers fail to incorporate is explainability. You have to ensure there’s no bias in AI algorithms. This is a very important factor in any AI model, not just security.
OpenScale technology is a great example of explainable artificial intelligence.
AI Fairness 360 can be integrated with business applications to tell you how AI inferences are drawn. This can help one determine, very quickly, whether there’s bias in an AI model.
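Toolkits like AI Fairness 360 report metrics such as disparate impact to surface bias. The underlying computation can be sketched in a few lines; this is a simplified stand-in for illustration, not the AIF360 API itself:

```python
def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.
    A value near 1.0 suggests parity; a common rule of thumb flags
    values below 0.8 as potential bias."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate_priv = sum(priv) / len(priv)
    rate_unpriv = sum(unpriv) / len(unpriv)
    return rate_unpriv / rate_priv
```

Running a metric like this over a model’s predictions, broken down by a protected attribute, is what lets you determine very quickly whether the model is treating groups differently.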
Could you throw some light on IBM’s continual drive towards open source? And how does the Red Hat acquisition fit in to the scheme of things?
Both companies are committed to an open architecture. Red Hat is known for having the number-one operating system for business applications – both on-prem and in the cloud.
IBM has been one of the largest contributors to open source. There’s going to be tremendous synergy between IBM and Red Hat.
How important a role will blockchain play in the enterprise? What’s IBM’s play?
Blockchain is going to revolutionize the way we look at transactions. What the internet did for information, blockchain will do for transactions.
People will begin to look at secure transactions in a very different way – how they can be syndicated between trusted parties, how they can be open and transparent, and how they can accelerate the resolution of conflicts.
Unlike cryptocurrency-related blockchain implementations, blockchain for transactions will be much more controlled and confined to trusted parties. Blockchain is going to be one of the most important technologies impacting businesses across the world.
IBM currently has 37 blockchain installations around the world, all of them in production. This goes to show that enterprises are beginning to embrace the platform, and you will see adoption accelerating in the years to come.
CIOs and CSOs have expressed concern over the risk of false positives and false negatives in AI. How can these anomalies be kept to a minimum?
The most important factor to consider while deploying AI is the data that’s being used to train the AI model. The model can only be as good as the quality of data you’re using to train it.
Constant scoring of the model and validating whether the model is giving you the right level of confidence is critical. Once you create an AI model, your job doesn’t end there – you have to constantly evaluate and retrain to ensure the model gets better each time.
The most common mistake people make is to create a model, publish it and then forget about it. Also, it is one thing to use the inferences coming out of AI; it’s another to be able to explain them. All of these activities must be put into one integrated development framework.
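The evaluate-and-retrain loop described above can be sketched as follows. The scoring function, retraining step and threshold are illustrative placeholders; in practice each would be backed by a real model and a held-out evaluation set:

```python
def monitor_and_retrain(model, eval_fn, retrain_fn, batches, threshold=0.9):
    """Score the model on each incoming batch of data; retrain
    whenever its score drops below the acceptable threshold.
    Returns the final model and the score history."""
    history = []
    for batch in batches:
        score = eval_fn(model, batch)
        history.append(score)
        if score < threshold:
            # Model quality has degraded: retrain on fresh data.
            model = retrain_fn(model, batch)
    return model, history
```

The point is that scoring and retraining live in one loop: publishing the model is the start of its lifecycle, not the end.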
The IONEX CODE-Database Processing Tool (ICPT), for instance, is an excellent tool that allows you to carry out training in a scheduled manner.
In what ways do you think IBM’s POWER9 can get one up on Intel’s x86?
Fundamentally, the key difference lies in how processors communicate with memory. In a typical Intel architecture, the data flow goes through the PCI gateway, which has limited bandwidth.
While CPUs and memory have become much faster, the highway between the two has remained stagnant. As a result, you have two very high-performing endpoints with a very weak link in between.
When IBM partnered with companies like Google, Nvidia and Mellanox through the OpenPOWER consortium, we innovated the entire system design – right from the chip all the way to systems.
We were able to create the NVLink, which provides the capability to communicate between the GPU and the processor, in a much faster and seamless manner. It is, in fact, two or three orders of magnitude faster.
These are the key changes that we brought into our systems, and that makes us stand apart from any other hardware vendor in the market today.