
Using today’s technologies to create AI safeguards for tomorrow

Sponsored by Dell EMC

By Steve Todd

In a proactive move to “eliminate a future conflict,” Elon Musk recently stepped down as chairman of OpenAI, a nonprofit research company he cofounded two years ago to build safe artificial intelligence (AI) with broad benefits for all. In 2014, Musk suggested AI could be “more dangerous than nuclear weapons,” a statement he reiterated at a recent SXSW event in Austin, TX.

Bill Gates also went on record several years ago with his concerns about machine superintelligence. The two technology leaders joined others in an open letter encouraging research into AI safeguards:

“It is important to research how to reap [AI] benefits while avoiding potential pitfalls.”

The open letter was accompanied by a research priorities proposal, highlighting work that can be done to make AI “robust and beneficial.”

Perhaps the most pressing question today is whether we can use current technologies, such as historical and preventative tracking, to build AI safeguards that not only determine why an AI algorithm made a poor decision but also prevent other AI algorithms from making the same poor decision.

I believe the answer to that question is yes. Two existing technologies can come together to provide an audit trail for the decisions machines make: (a) blockchain technology and (b) the off-chain storage systems a blockchain references. Together, they can form the foundation of a digital forensics platform that can be extended to monitor (and potentially regulate) super-intelligent decision-making.

Blockchain, introduced with Bitcoin in 2009, can be thought of as a tamper-proof ledger of transactions, though ledger entries do not necessarily have to contain records of financial transfers. Each blockchain transaction is time-stamped and can reference any type of “off-chain” data record that lives in a well-protected, secure storage system. References to the data take the form of small character strings (typically cryptographic hashes) that can easily be stored within the ledger.
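
To make the mechanics concrete, here is a minimal Python sketch of that pattern; the field names and the single-node “ledger” list are illustrative assumptions, not any particular blockchain’s schema. Each entry carries a timestamp, a short hash referencing an off-chain record, and the hash of the previous entry, so altering history is detectable.

    import hashlib
    import json
    import time

    def content_hash(data: bytes) -> str:
        # The "small character string" kept on-chain: a fingerprint of the off-chain record.
        return hashlib.sha256(data).hexdigest()

    def make_entry(prev_entry_hash: str, off_chain_ref: str) -> dict:
        # A time-stamped ledger entry that points at off-chain data by reference only.
        entry = {
            "timestamp": time.time(),
            "off_chain_ref": off_chain_ref,
            "prev_entry_hash": prev_entry_hash,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        return entry

    # Example: register a (hypothetical) sensor log without placing the data itself on-chain.
    sensor_log = b"lidar frame 0001 ..."
    ledger = [make_entry("0" * 64, content_hash(sensor_log))]
    print(ledger[0]["off_chain_ref"])  # a 64-character string is all the ledger stores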

Off-chain storage technology is especially powerful when implemented by a special class of storage systems known as content-addressable storage (CAS); the first CAS system, Centera, initially shipped in 2003. This type of storage system assigns a unique digital fingerprint to every piece of digital content. Once stored, the data cannot be tampered with or deleted, and it can be retrieved based on its content rather than its storage location.
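
The core idea can be sketched in a few lines of Python: an object’s address is the hash of its own bytes, so it is looked up by content, and any alteration would change that address. This is illustrative only; a production CAS system such as Centera layers retention, replication, and compliance policies on top of the concept.

    import hashlib

    class ContentAddressableStore:
        def __init__(self):
            self._objects = {}  # fingerprint -> stored bytes

        def put(self, data: bytes) -> str:
            # Store an object and return its digital fingerprint (its content address).
            fingerprint = hashlib.sha256(data).hexdigest()
            self._objects.setdefault(fingerprint, bytes(data))  # re-storing identical content is a no-op
            return fingerprint

        def get(self, fingerprint: str) -> bytes:
            # Retrieve by content address and verify the bytes still match their fingerprint.
            data = self._objects[fingerprint]
            assert hashlib.sha256(data).hexdigest() == fingerprint, "content was altered"
            return data

    store = ContentAddressableStore()
    address = store.put(b"patient record #42")
    assert store.get(address) == b"patient record #42"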

The power of these two technologies is already on display as a solution for the medical industry. In “How to Achieve Data Privacy in Blockchain Ledgers,” Dr. Marten Neubauer details a solution in which a medical record is stored in an off-chain storage system. The record is assigned a unique identifier, and that identifier is then registered on a blockchain. Whenever a doctor views (or attempts to access) a patient record, that attempt can also be recorded on the blockchain, so the patient can track which doctors have looked at a particular private data record.
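
A simplified Python sketch of that audit pattern might look like the following. The ledger here is just an append-only list and the identifiers are hypothetical; the point is that the record stays off-chain while registrations and access attempts accumulate as time-stamped entries the patient can query.

    import hashlib
    import time

    ledger = []           # append-only list standing in for the blockchain
    off_chain_store = {}  # content address -> record bytes, standing in for secure storage

    def register_record(record: bytes, patient_id: str) -> str:
        ref = hashlib.sha256(record).hexdigest()
        off_chain_store[ref] = record  # the record itself never touches the chain
        ledger.append({"type": "register", "ref": ref, "patient": patient_id, "ts": time.time()})
        return ref

    def access_record(ref: str, doctor_id: str) -> bytes:
        # The access attempt is logged before the data is returned.
        ledger.append({"type": "access", "ref": ref, "doctor": doctor_id, "ts": time.time()})
        return off_chain_store[ref]

    def audit_trail(ref: str) -> list:
        # Everything a patient needs in order to see which doctors viewed a given record.
        return [e for e in ledger if e["ref"] == ref and e["type"] == "access"]

    ref = register_record(b"patient 42: blood panel results", patient_id="patient-42")
    access_record(ref, doctor_id="dr-smith")
    print(audit_trail(ref))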

It’s easy to imagine how this solution could be applied to AI. Consider a use case in which an AI algorithm makes decisions in the context of a self-driving car. An AI vendor, for example, could store a new algorithm in an off-chain storage location and then “register” it as a blockchain transaction. The registration entry can be signed with the vendor’s unique key, creating a digital signature that establishes ownership of the algorithm.
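
As a rough illustration of what such a registration could look like, the Python sketch below fingerprints a model binary and signs the registration entry with a vendor key so that ownership can later be verified. The vendor name and field layout are assumptions made for illustration, and the example uses Ed25519 signatures from the third-party cryptography package.

    import hashlib
    import json
    import time

    from cryptography.hazmat.primitives.asymmetric import ed25519

    vendor_key = ed25519.Ed25519PrivateKey.generate()  # the vendor's unique private key
    vendor_pub = vendor_key.public_key()

    def register_algorithm(model_bytes: bytes, vendor_id: str) -> dict:
        # Build a signed, time-stamped registration entry for a model stored off-chain.
        payload = {
            "vendor": vendor_id,
            "model_ref": hashlib.sha256(model_bytes).hexdigest(),  # off-chain reference
            "ts": time.time(),
        }
        message = json.dumps(payload, sort_keys=True).encode()
        payload["signature"] = vendor_key.sign(message).hex()
        return payload

    entry = register_algorithm(b"<compiled self-driving model>", vendor_id="example-av-vendor")

    # Anyone holding the vendor's public key can confirm who registered the model:
    unsigned = {k: v for k, v in entry.items() if k != "signature"}
    vendor_pub.verify(bytes.fromhex(entry["signature"]),
                      json.dumps(unsigned, sort_keys=True).encode())  # raises if forged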

In this scenario, a connected car can see the registered algorithm and download the software. As new data continually arrives at the car, that data can undergo a similar registration prior to analysis. The algorithm, the input data, and any output data can all be stored off-chain and registered on-chain. This provides a chain of ownership: a record of which algorithms have analyzed which data sets. If an accident occurs as a result of an AI decision, the blockchain provides a trustworthy record of how that decision was made.
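
In Python, the provenance record described above might be sketched as follows; the entry fields are illustrative assumptions rather than a specific blockchain schema. Each entry ties a model’s fingerprint to the fingerprints of the data it consumed and produced, so investigators can later pull every decision attributable to a given model version.

    import hashlib
    import time

    provenance_ledger = []

    def fingerprint(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def record_decision(algorithm: bytes, inputs: bytes, outputs: bytes) -> dict:
        # Register one decision: the model, the input it saw, and the output it produced.
        entry = {
            "ts": time.time(),
            "algorithm_ref": fingerprint(algorithm),
            "input_ref": fingerprint(inputs),
            "output_ref": fingerprint(outputs),
        }
        provenance_ledger.append(entry)
        return entry

    # Example: after an incident, pull every decision a given model version made.
    record_decision(b"model-v1.3", b"lidar frame 0001", b"steer: -2 degrees")
    suspect_model = fingerprint(b"model-v1.3")
    history = [e for e in provenance_ledger if e["algorithm_ref"] == suspect_model]
    print(len(history), "registered decisions attributable to this model version")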

This type of system will be critical for performing forensics when self-driving cars malfunction, crash, or cause damage or even loss of life. Just last week, the first pedestrian fatality involving a self-driving car occurred in Arizona. It is a tragic, real-world instance of the kind of risk technology luminaries like Musk and Gates have warned about. If today’s comparatively simple AI algorithms can make mistakes that result in loss of life, what kind of mistakes will super-intelligent algorithms make?

Fortunately, blockchain and off-chain digital forensics systems can provide time-stamped, immutable evidence of how AI algorithms arrived at their decisions. This creates a regulatory opportunity to hold companies responsible for the ethical and reasonable use of AI technology. Not only can this technology support forensics on poor decisions, it can also feed that forensic data back into training so that new models avoid repeating them. An off-chain digital forensics platform can therefore be augmented to attack the problem of AI algorithms overextending their intelligence into areas in which they should not participate.

Once these safeguards are put in place, regulated AI algorithms can then focus on bringing maximum benefit with minimal harm. Everyday examples of the benefits AI can bring include:

  • Better fuel consumption and reduced carbon footprint
  • Better student education
  • Fraud reduction and prevention
  • Safer social media
  • Smarter personal assistants

While there are certainly obstacles to implementing this type of system (e.g., the network latency involved in connecting thousands of cars to a blockchain), a lack of research is not one of them, and the solution outlined here is clearly within the realm of the possible.

A recent study commissioned by Dell Technologies and conducted by the Institute for the Future reminds us that AI technologies, “enabled by significant advances in software, will underpin the formation of new human-machine partnerships.” While the fears articulated by Musk and Gates are real, we should be encouraged that existing technologies can already go a long way toward mitigating them. Blockchain and off-chain storage platforms are “trust technologies”: they allow us to trust the integrity of data across all AI connection points.

There are tremendous benefits to the adoption of AI in healthcare, industry, transportation, and more. When measures are taken to deploy these two trust technologies properly, the benefits to our world far outweigh the risks.

Steve Todd is a software engineer and inventor for Dell EMC with more than 170 patents granted by the USPTO. He earned bachelor’s and master’s degrees in computer science from the University of New Hampshire. His inventions have generated tens of billions of dollars in revenue for Dell EMC. Steve is a Dell EMC Fellow and currently serves as Vice President of Strategy and Innovation in the Office of the CTO, with a research emphasis on multi-cloud solutions, data value, and blockchain.