An interview with IoT pioneer Professor Sanjay Sarma

Professor Sanjay Sarma is best known for his ground-breaking work in co-founding the MIT Auto-ID Center, the predecessor of today’s MIT Auto-ID Labs, and for developing many of the key technologies behind the EPC suite of RFID standards now used worldwide. He was also the founder and CTO of OATSystems, which was acquired by Checkpoint Systems in 2008. When the Auto-ID Center was first created, it was chartered with creating the infrastructure, recommending the standards and identifying the automated identification applications for a networked physical world. It was during this period, back in 1999, that the term “Internet of Things” was coined and “things” truly started to get connected to the Internet.

Today, Prof. Sarma continues his leadership role as a director of the Auto-ID Lab at MIT and vice president for Open Learning at MIT. The Auto-ID Labs is an independent network of seven academic research labs that develops new technologies for revolutionizing global commerce and delivering previously unrealizable consumer benefits. It is now the leading academic research network on the Internet of Things.

I had the privilege of working with Prof. Sarma back in the RFID days as a management consultant and spoke with him recently about the past, present and future of the IoT. Here’s what transpired from our conversation:

Evans: What was your initial vision for the Internet of Things and how did this come about?

Sarma: The initial vision was a collective effort. This was back in the early days of the MIT Auto-ID Center. Kevin Ashton (another co-founder, along with David Brock, of the Auto-ID Center) came up with the term. We were exploring ideas related to connecting things onto the Internet and we were doing just that — i.e.
connecting “things.” One of my former students, Joe Foley, connected a microwave oven to the Internet with an RFID tag and reader. The idea was to see if we could put food in the microwave and have the oven recognize the food item, download the appropriate cooking instructions from the Internet, and proceed to cook the item for a specific amount of time, and so on. It was an old microwave from his grandmother that we put into the lab.

Back in 1999, as we were working on identifying objects, we got into RFID in a big way. We knew the vast majority of things were inanimate objects that the world had no sense of. This was how the whole Electronic Product Code (EPC) movement started. We began with the Distributed Intelligent Systems Center (DISC), which was then renamed EPCglobal after we received funding from GS1 and the Uniform Code Council. In 2001, we wrote a paper called “The Networked Physical World” that laid out a vision for these connected things — inanimate objects acting as first-class citizens in a connected world.

Over the next seven years, we — all the sponsors and researchers — focused on bringing RFID to market. Vendors were off making chips, tags, readers and middleware. Gen 2 tags came along and by 2007 things were off and running. When the economic crash of 2008 arrived, we thought it would kill the RFID industry, but it actually helped. Companies realized the issues around toxic inventory, and after 2008 the RFID industry really started moving. As all this was happening in 2008-2009, we started looking at connected cars, connected buildings, and inexpensive disposable sensors based on RFID technology to bring down the cost of sensors. Around that time people started talking more and more about the Internet of Things.

Evans: What were some of the main business and technical objectives for the IoT back then?

Sarma: We started with “what if” questions. What if everything was connected?
We then progressed by addressing questions related to how to make the connections, how to scale, and how to develop the various business models. We found that “what if” was only the first question to be addressed. How, when, and why followed quickly afterwards. We focused on answering these questions for RFID over the course of the next seven to eight years. The world of IoT was a bit like the Jetsons back then. We only started answering questions related to the “how” and “why” for IoT in the 2008 timeframe. Our research focus homed in on topics such as energy management, connecting cars, connecting buildings and homes, and distributed sensing and smart cities. This enabled us to get more refined in terms of addressing how, when, where, why and the costs involved.

Evans: As we fast-forward to the present day, what has surprised you the most about the IoT market evolution?

Sarma: Perhaps the most surprising element has been the lack of any particular direction. We would have expected the government to step up a lot earlier to help generate the kind of lightning-rod activity that we saw with RFID. Right now, there’s a lack of swim lanes and an absence of governmental or concerted end-user intervention in terms of helping to create challenges and test beds. There has been no activity similar to the Auto-ID Center to bring together industry and research. There are also too many standards and not enough commercial, academic and government coordination to help create a dominant architecture. Outside of a few exceptions, there are no toolkits and everything is open-ended. In contrast, the World Wide Web Consortium has done a good job so far.

Evans: What do you see as some of the biggest obstacles to the realization of business benefits from the IoT at the industry level?

Sarma: One of the biggest obstacles is that the absence of a dominant architecture means there’s a lack of cost savings. With RFID there was strong competition among providers due to uniform standards.
The absence of standards leads to “kitchen sink” implementations and customers having to support many protocols. The second obstacle is security, which I wrote about recently in Politico (see “I helped invent the Internet of Things. Here’s why I’m worried about how secure it is”). The third obstacle, in the absence of a dominant architecture, is real-world deal-breakers such as maintainability, upgradeability and other post-honeymoon issues. I say “post-honeymoon” because these issues tend to surface about a year after your first implementation. In the absence of answers to these obstacles, there’s also uncertainty about business cases. All four play into each other: standards, security, maintainability/upgradeability, and business cases — all are currently too fuzzy.

Evans: Is the IoT subject to the same technical barriers to adoption (e.g. security, standards, interoperability) as other emerging technologies, or do you see other forces at play as well?

Sarma: In the absence of clarity, we tend to have walled gardens — very vertical solutions that are vendor-specific (e.g. Nest) and which open up their interfaces in a private way via a private ecosystem. We saw a similar thing happen in the world of cell phones when Europe had the GSM network and in the U.S. we had a variety of standards adopted by various carriers. These private ecosystems still provide useful progress for the IoT industry as a whole, but the lack of leadership from the collaboration of industry, government and academia is delaying the realization of benefits. In contrast, we saw the rise of the Internet due to ARPA and the rise of RFID due to the Auto-ID Center.

Evans: As we look ahead over the next 3 to 5 years, do you see the market addressing these issues via incremental innovation, or is something more needed?

Sarma: We’re currently locked into a couple of approaches. There’s a proliferation of protocols at the link layer, such as ZigBee and Bluetooth, that won’t change. Interoperability will be difficult.
One option is to look for interoperability in the cloud instead. Getting products such as Nest to work with Amazon Echo, for example, will take some time, but if the Nest cloud can speak to other clouds at the web level, then that’s where we have hope. In addition, connecting things within the cloud, instead of in the real world, is a good solution for security and maintainability, since we, as an industry, know how to do things in the cloud. One research area we’re currently working on is a cognitive firewall that sits between the home and the cloud. That way, if someone calls in and asks the network to turn on the microwave for 100 minutes, it can say, “Are you sure about that 100 minutes?” The firewall maintains a model of what’s reasonable and what’s not, and we’re looking at a number of technologies, including A.I., control theory and machine learning, to provide these layers of intelligence between the cloud and the real world.
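The cognitive firewall idea lends itself to a small illustration. The sketch below is purely hypothetical: the `Command` type, device names, duration limits, and `vet` function are illustrative assumptions, not part of the MIT research. It shows how a simple rule-based plausibility model could allow, question, or deny incoming commands; a real system, as Prof. Sarma notes, might learn these bounds with machine learning rather than hard-code them.

```python
# Hypothetical sketch of a "cognitive firewall" vetting cloud commands
# before they reach home devices. All names and limits are assumptions
# for illustration, not an actual implementation.
from dataclasses import dataclass

@dataclass
class Command:
    device: str       # e.g. "microwave"
    action: str       # e.g. "start"
    minutes: float    # requested run time

# A stand-in "model of what's reasonable": per-device duration limits.
PLAUSIBLE_MINUTES = {
    "microwave": 30.0,
    "oven": 240.0,
}

def vet(cmd: Command) -> str:
    """Return 'allow', 'confirm', or 'deny' for an incoming command."""
    limit = PLAUSIBLE_MINUTES.get(cmd.device)
    if limit is None:
        return "deny"       # unknown device: block outright
    if cmd.minutes <= limit:
        return "allow"      # within the configured norm
    if cmd.minutes <= 2 * limit:
        return "confirm"    # unusual: ask "Are you sure?"
    return "deny"           # wildly implausible: reject

# The article's example: 100 minutes on the microwave is implausible.
print(vet(Command("microwave", "start", 100.0)))  # deny
print(vet(Command("microwave", "start", 5.0)))    # allow
```

The interesting design question is where the "confirm" band sits: too narrow and the firewall nags the user constantly, too wide and it waves through the very commands it exists to catch.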