Embedding artificial intelligence into the business like a skilled new employee. Credit: Jonny Lindner

As artificial intelligence (AI) makes its way into the business world, it helps to view the process as onboarding a new, highly qualified team member. There is typically an initial training period in which the individual learns their new role, how the business works, and what it values. Organizations have processes and people in place to impart this kind of knowledge to new colleagues and help them learn the ropes so they can communicate and collaborate effectively with others. To a certain extent, you can apply the same principles and steps to embedding a new AI system.

Structuring learning

Most new, highly skilled employees are eager to learn what they need to know to hit the ground running. But to do so effectively, they first need to pick up the 'company lingo' – the unique language every organization develops internally over time.

Similarly, a company needs to ensure that its AI systems start with basic principles and then progressively build skills from established taxonomies. In this phase, the organizations with the best data available to 'teach' their AI will end up with the most capable AI systems. Take Google: it released a dataset that helps companies teach their AI systems to understand how people speak. To create it, Google recorded 65,000 clips of thousands of different people speaking. Training data at this scale has helped Google's voice recognition reach 95 percent accuracy.

Enabling collaboration

A key element of the learning process is explainable decision making – both on the part of established employees and new ones.
As new coworkers onboard, it is essential that team members explain the decision-making behind key aspects of business operations. Likewise, new workers will have to explain their own decisions as they bring in ideas that challenge current thinking. People expect to be able to understand why someone (or something) else acts and decides the way it does, especially when those actions and decisions affect them directly. This transparency is key to successful collaboration.

As AI promises to empower people and act as an effective co-worker, advisor and helper, organizations will need to ensure that their AI systems can explain their actions and decision-making processes. This drive to understand AI decisions has led to new regulations and technological advances. For instance, the European Union's new General Data Protection Regulation gives individuals a "right to explanation" for decisions made by AI and other algorithms. In the tech realm, NVIDIA – whose AI-infused self-driving car platform, Drive PX, can "teach" itself to drive – recently added a capability that visually explains the platform's driving style: it displays video of a recently driven streetscape and highlights the areas it weights most heavily during navigation. This transparency enables NVIDIA to build trust between its AI systems and customers.

Imparting values

Each organization has values that ground it. Having them, adhering to them, and defending them has never been more relevant to business success than today. My colleagues at Fjord recently proclaimed that we're witnessing the rise of an Ethics Economy.

Values live primarily in the actions and decisions of employees. As more and more of an organization's decisions are made by AI systems, those systems need to 'live' these values too. This is especially important as advances in technology create opportunities, but also fear and resentment.
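To make the idea of a per-decision explanation concrete, here is a minimal, purely illustrative Python sketch: it takes a toy linear credit-scoring model and turns the feature contributions behind one decision into a plain-language reason. All feature names, weights, and thresholds are hypothetical, and real systems would use far richer models and explanation techniques.

```python
# Illustrative sketch: explaining one decision of a toy linear credit model.
# All feature names, weights, and thresholds are hypothetical.

FEATURE_WEIGHTS = {
    "income": 0.4,          # higher income pushes toward approval
    "debt_ratio": -0.5,     # higher debt load pushes toward denial
    "years_employed": 0.3,  # longer employment pushes toward approval
}
APPROVAL_THRESHOLD = 1.0

def score(applicant):
    # Weighted sum of the applicant's features.
    return sum(FEATURE_WEIGHTS[f] * applicant[f] for f in FEATURE_WEIGHTS)

def explain(applicant):
    # Per-feature contribution to the final score.
    contributions = {f: FEATURE_WEIGHTS[f] * applicant[f] for f in FEATURE_WEIGHTS}
    decision = "approved" if score(applicant) >= APPROVAL_THRESHOLD else "denied"
    # Rank features by how strongly they pushed the decision either way.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    top_feature, value = ranked[0]
    direction = "helped" if value > 0 else "hurt"
    return (f"Loan {decision}; the biggest factor was '{top_feature}', "
            f"which {direction} the application.")

applicant = {"income": 2.0, "debt_ratio": 3.0, "years_employed": 1.0}
print(explain(applicant))
```

Even this crude form of explanation – naming the decision and its dominant factor – is more than many black-box systems offer today, which is exactly the gap regulators and vendors are now working to close.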
Imagine what would happen if an AI-powered mortgage lender denied a loan to a qualified prospective home buyer, or if an AI-guided shelf-stocking robot collided with a worker in a warehouse. Ultimately, AI represents its owner in every action it takes. It is the owner's responsibility to ensure that its AI algorithms act responsibly, as it is the organization that will be held liable for every misstep. The importance of 'Responsible AI' cannot be stressed enough, and I will address it in more detail in one of my next posts.

Looking at the similarities between onboarding a skilled new employee and integrating an AI system will help us understand the crucial steps we need to take to embed a new level of intelligence at the core of business.