The ‘A’ in AI needs to be for accountability

by Kumar Srivastava

Opinion
Jun 27, 2017

When we think about blockers to the adoption of AI, several issues come to mind: the need for specialized hardware, experience and expertise in AI development, production operations of AI systems, and user interface and experience design. However, none of these is as significant as accountability, i.e., accountability for the insights, predictions, and classifications the AI serves.

Accountability

Traditionally, enterprises and the business world operate through complex supply chains that have accountability baked in through buyer-supplier relationships. The buyer demands a product or service and pays for it; the supplier is then held accountable for delivering that promised product or service within quality criteria the buyer and supplier have mutually agreed upon.

Internally, enterprises have organizational structures that are optimized to ensure that, for every promised product or service, specific people are identified as accountable for delivering it to internal or external buyers according to the established guidelines.

Deterministic vs. non-deterministic systems

Non-AI-driven systems are characterized as deterministic in their behavior. Barring product bugs or inefficiencies, the user can expect the product to behave in exactly the same manner every time it is used, and the enterprise selling the product promises to abide by the mutually agreed-upon service parameters.

However, AI-driven systems are non-deterministic in nature: they are constantly learning from new data and new interactions, which means the behavior of the system can and will change over time. The data scientist building the AI model can and should monitor the system and ensure that all changes are positive in nature. However, there can be a delay before the data science team detects model degradation, and an even longer delay before an updated model is delivered to production. Unexpected behavior can therefore persist anywhere from minutes to days to months. In such a situation, the question of accountability becomes paramount.
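To make the monitoring half of this concrete, here is a minimal sketch, in Python, of one way a team might watch for degradation: keep a rolling window of resolved predictions and flag the model when accuracy falls below a floor. The class and parameter names (DegradationMonitor, window_size, min_accuracy) are illustrative assumptions, not part of any particular product.

```python
from collections import deque

class DegradationMonitor:
    """Rolling-window accuracy check. A sketch only: real systems would
    segment by customer cohort, handle delayed labels, and page on-call."""

    def __init__(self, window_size=1000, min_accuracy=0.90):
        self.outcomes = deque(maxlen=window_size)  # 1 = correct, 0 = incorrect
        self.min_accuracy = min_accuracy

    def record(self, prediction, actual):
        """Record one resolved prediction once ground truth is known."""
        self.outcomes.append(1 if prediction == actual else 0)

    def degraded(self):
        """True once a full window exists and accuracy is below the floor."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence yet
        return sum(self.outcomes) / len(self.outcomes) < self.min_accuracy

# Usage: record each resolved prediction, then check before serving more.
monitor = DegradationMonitor(window_size=500, min_accuracy=0.85)
monitor.record(prediction="approve", actual="approve")
if monitor.degraded():
    print("Accuracy below threshold: trigger retraining or rollback.")
```

Even a check this simple shortens the detection delay described above, which is the first step toward assigning accountability for it.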

Accountability to customers

In addition, the black-box nature of AI-driven systems makes it extremely hard to trace and monitor why an AI-driven system responds the way it does. Customers, however, do not know or care what technology is used to deliver a product or service to them. This means that when things go awry and the product or service behaves in a manner that violates the promise or the quality expected, customers expect answers: why did it behave the way it did, and what does the enterprise intend to do to prevent a repeat? AI-driven systems break this paradigm and thus can be inherently harder to support, especially in B2B scenarios defined by specific and binding contracts and SLAs.

Business expectations and accountability

Business teams expect promised services to be delivered through some means: technology, systems, employees, processes, or any combination of these working together. From a business and client perspective, it does not matter how the service is provided; what matters is that it is provided at the correct level of quality. To that end, business users hold technology teams accountable for delivering high levels of service quality in exchange for funding technology investments, initiatives, and projects.

When business teams are offered AI technology, it often removes humans from the value-delivery chain and replaces them with AI-driven intelligence. In such scenarios, if the technology team cannot offer 100 percent accuracy and quality, business teams face the prospect of having no one accountable for the quality of the service. And when the inclusion of a technology limits employees' ability to excel at their jobs, it can be daunting and nerve-wracking.

Building accountability in AI-driven systems

1. Thoughtful introduction 

Careful thought needs to be applied to the introduction of AI in the enterprise. Introduction points range from mission-critical AI in mission-critical systems, to non-mission-critical AI in mission-critical systems, to non-mission-critical AI in non-mission-critical systems. Each approach has its own pros and cons, though a good strategy to begin with is to introduce AI in a non-mission-critical part of a mission-critical system. This yields clear and demonstrable value from the AI while introducing only medium to low risk.

In addition, even though AI has the potential to reduce manual input and support and thereby reduce employee costs, blanket replacement and removal of all employees in critical scenarios can be a foolhardy strategy. A more pragmatic approach is to focus on increasing the efficiency and quality of the output of a smaller group of employees.

AI should also be introduced in scenarios and workflows where quality is low and existing approaches (automated, manual, or both) have struggled. Such niches of unaddressed problems and pain points can be ideal launch pads for AI-driven systems.

2. Managed risk exposure

Enterprises looking to introduce AI should be concerned with the resulting risk exposure due to changed accountability models. Poorly designed AI-driven systems introduced without thoughtful interfaces and guardrails can increase the cost and complexity of supporting users, dealing with escalations, and maintaining the required quality of service.

The introduction of AI should be preceded by appropriate budgeting and training of support personnel, defined processes for operations and management, and a good archival and logging infrastructure that makes it possible to replay past versions of the AI on a given data set.
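As a sketch of what such replay-oriented logging might look like, the snippet below appends each prediction, with its exact input and a model version identifier, to an append-only log, and later re-runs archived versions against archived inputs. The file name, the load_model hook, and the model's predict method are all assumed for illustration; a real deployment would use a model registry and a durable store.

```python
import json
import time

AUDIT_LOG = "predictions.jsonl"  # hypothetical append-only log

def log_prediction(model_version, features, output):
    """Append an immutable record so any past decision can be replayed."""
    record = {
        "ts": time.time(),
        "model_version": model_version,  # e.g., a registry tag or git SHA
        "features": features,            # the exact input the model saw
        "output": output,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

def replay(load_model, log_path=AUDIT_LOG):
    """Re-run each archived model version on its archived input and
    verify the recorded output is reproduced."""
    with open(log_path) as f:
        for line in f:
            rec = json.loads(line)
            model = load_model(rec["model_version"])  # fetch archived artifact
            assert model.predict(rec["features"]) == rec["output"]
```

The design choice that matters here is archiving the model version alongside the input: without both, "why did the system say that?" has no answerable form months later.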

3. Traceability and deterministic behavior

Traceability and deterministic behavior are among the toughest hurdles and the most prominent contributors to the lack of accountability in AI-driven systems. It is very hard to design such systems to return the same output every time they are given the same input, and harder still to guarantee the same output for the same input at different points in time. In addition, techniques like deep learning make it very hard to determine and explain why the system generated any particular output. Care must be taken to ensure that the introduction of AI does not violate the audit and compliance requirements placed on the enterprise.
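One partial mitigation, not discussed in the article but commonly paired with these requirements, is to pin every source of randomness and emit a trace record per prediction, so the same input and model version reproduce the same output. The version string, the model object, and its predict method below are hypothetical placeholders.

```python
import hashlib
import json
import random

MODEL_VERSION = "2017-06-27-r3"  # hypothetical pinned model version

def canonical_hash(features):
    """Stable hash of the input, so identical inputs can be matched
    across audit logs and across time."""
    payload = json.dumps(features, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def predict_deterministically(model, features, seed=42):
    """Fix the random seed so any sampling inside the model is
    repeatable, then return the output plus an audit trace."""
    random.seed(seed)
    output = model.predict(features)
    trace = {
        "model_version": MODEL_VERSION,
        "input_hash": canonical_hash(features),
        "seed": seed,
        "output": output,
    }
    return output, trace  # persist `trace` to satisfy audit requirements
```

Seed pinning does not explain a deep model's reasoning, but it does restore the repeatability that auditors and compliance teams expect from deterministic systems.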