In a poem first published in 1967, Richard Brautigan imagined a future in which people were freed of all their labors and returned to nature, living in a world “all watched over by machines of loving grace.” In different words, this theme was a hot topic for attendees and presenters at the recent Dell EMC World conference in Las Vegas. Not so much machines of loving grace, but it seemed that everywhere you went people were talking about machine learning and artificial intelligence, and what this new era means for all of us.
In a “Guru Session,” Sir Tim Berners-Lee, the inventor of the World Wide Web, explored some of the implications of the rise of AI, including the notion of machines taking over jobs long held by humans. That’s not necessarily a scary proposition from his perspective. He noted that there are many mundane jobs that humans really shouldn’t be doing. So, the issue is not one of robots taking jobs away from humans, but of humans doing the jobs of robots. Sir Tim predicted that the future will bring many AI applications that will allow robots to take care of everyday mundane tasks, freeing humans for higher-level work.
He went further, saying we are approaching the day when machines will program themselves, rather than having humans program the machines. That’s the future of machine learning. So, does that mean that machines will become overlords that unleash destructive robots? The jury is still out on that, and personally I think there are many physical hurdles to overcome before this scenario is even possible. On a practical front, however, I believe machines will program themselves primarily to complete tasks in a safe and efficient manner. That’s already happening today with algorithms that train themselves to drive autonomous vehicles.
Elsewhere at Dell EMC World, Pat Gelsinger, CEO of VMware, shared his perspective on the future of AI and the next wave of innovation in mobile-cloud technology. He noted that the technology is in our grasp today to implement AI in virtually every aspect of our lives.
Gelsinger offered an example from the healthcare space in which an AI application connected to a cardiac monitoring device detects an arrhythmia, makes a doctor’s appointment for the patient, and rearranges the patient’s schedule for the day to reflect the appointment.
Given the dramatic momentum of AI and the burgeoning volumes of data that machine learning systems can leverage, the question then becomes: How do we prepare ourselves for the onslaught of AI capabilities that will come our way over the next few years? Or, in simple terms, how do we become AI-ready? The implication of this question is that technology is just one part of the adoption equation.
For starters, we need to think about the legal framework for AI. For example, what happens if a self-driving car steers itself off a highway to avoid an unexpected obstacle and ends up hitting a pedestrian on the side of the road? The self-driving car did what it had to do to protect its occupants, but where does the responsibility lie? Is the manufacturer of the self-driving car liable for the pedestrian’s injuries?
We also must take into account the quality of AI. Deep learning algorithms today are effectively black boxes when it comes to understanding how they arrive at their results. For example, when an AI system determines there is a 95 percent chance that a patient has a particular tumor, how much faith can we put in that assessment? Unless we understand how the system arrived at the diagnosis, I’m afraid we can’t put much faith in it, especially if we don’t know the criteria the AI system used to reach its conclusion.
And therein lies one of the main challenges to AI practicality. The models used today for deep learning are so complex that it is nearly impossible to determine which criteria the trained model relied on to arrive at its conclusion. Here it seems as though we may need a second AI program to monitor the operations of the first and explain to us why it came up with a particular result. Machine overlords for the machine overlords!
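One established form of this “second program” idea is a surrogate model: probe the opaque model with sample inputs and fit a simple, interpretable model to its outputs, so the surrogate’s structure approximates the criteria the black box is using. Here is a minimal sketch of that technique; the `black_box` function below is a hypothetical stand-in for a trained model, not any real system, and the surrogate is an ordinary least-squares fit.

```python
# Minimal surrogate-model sketch: approximate an opaque model with an
# interpretable linear fit to see which inputs drive its output.
import numpy as np

rng = np.random.default_rng(0)

def black_box(x):
    # Hypothetical stand-in for an opaque trained model: a scoring
    # function whose internal criteria we cannot inspect directly.
    return 3.0 * x[:, 0] - 2.0 * x[:, 1] + 0.5 * np.sin(x[:, 2])

# Probe the black box on sample inputs...
X = rng.normal(size=(500, 3))
y = black_box(X)

# ...then fit a linear surrogate (with an intercept column) by
# least squares. Its weights approximate each feature's influence
# on the black box's output near the probed region.
coeffs, *_ = np.linalg.lstsq(
    np.column_stack([X, np.ones(len(X))]), y, rcond=None
)

for i, c in enumerate(coeffs[:-1]):
    print(f"feature {i}: approximate weight {c:+.2f}")
```

The surrogate recovers the roughly linear influence of the first two features; it cannot fully capture the nonlinear third term, which illustrates the limits of this kind of after-the-fact explanation.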
We need to think long and hard about these machine-to-machine relationships. How do we program for accuracy and safety? How do we ensure that competitors share information to enable that accuracy and safety? There are many questions when it comes to the age of AI. And there are many machines that will help us get to answers.
I plan to address these AI-readiness topics in an upcoming blog post. In the meantime, for a broader look at the context for AI, including the convergence of high-performance computing and data analytics, see my February post on The Evolution and Maturation of HPC in the Enterprise.
Adnan Khaleel is the Global Sales Strategist, HPC and Data Analytics, at Dell EMC.