One of the more interesting exchanges at IBM InterConnect 2017 was between IBM CEO Ginni Rometty and Salesforce CEO Marc Benioff [Disclosure: IBM is a client of the author]. Benioff commented that both had recently gone to Washington to address the issue that the U.S. workforce isn’t ready for artificial intelligence (AI). The two companies’ AI platforms, IBM Watson and Salesforce Einstein, are now partnered. The problem is twofold: both firms are currently focused on augmenting people, but if people aren’t trained to work with AI, the easier path may become replacement. That path creates a massive unemployment problem, and unemployed people not only don’t buy products, they tend to revolt.
Let’s chat a bit about what it might mean to prepare the workforce for AI.
A matter of trust
At the heart of the problem is the fact that we simply don’t trust systems to the degree we’ll need to for AI assistants to truly be helpful. We came into the workforce with intuition and “gut” driving our decisions. Even though we are surrounded by data, the actual use of information based on valid data seems to be decreasing, not increasing. I’ve seen this from executive after executive, and it is showcased by the current U.S. president: ignore the data and make an orthogonal decision that seldom ends well, largely because the decision-maker doesn’t trust the data underneath the advice.
There have actually been good reasons for this distrust, because the quality of the data has been all over the map. In addition, those commissioning the data may have their own agendas, which often have little to do with where the unaltered data would otherwise point.
Addressing this is a two-step process. First, there has to be an increased effort to ensure that the data is both complete and unbiased and that the analysis is based entirely on this reliable data. The foundation for trust has to be trustworthy results. The second, and equally critical, step is to reeducate decision-makers to this new reality in which the resulting information can be trusted. If the second step is done before the first, it will only make decision-makers distrust the advice these AIs provide even more and move the ball in the wrong direction.
This was partially showcased in the presentation by H&R Block on stage at InterConnect. A tax preparer at Block now works with two monitors. One displays the information that is typically part of the tax preparation interview. The other runs Watson, acting almost as a peer, offering advice in real time as the form is filled out and suggesting items that will improve the return, either by increasing the deduction or by assuring accuracy. The result is a team of a human and an AI that collectively provide a service better than either could deliver separately.
[ Related: The future of AI is humans + machines ]
This is the idea of coupling: a tight partnership between the human and the AI that together create a powerful solution. But this isn’t like a typical digital tool; if the tax preparer treats it like a calculator rather than a partner, the result is suboptimal and the improvements aren’t as great. H&R Block reports that its tax preparers love the product, customers love the product, and loyalty and customer satisfaction scores are already showing a significant increase.
Collaborative learning and care
This is the part of the solution that really needs to be fleshed out. Systems like Watson and Einstein need to be trained, and those who use these systems have the practical knowledge to help do that. But these systems can in turn train their human partners, helping them become more efficient and even more satisfied with their jobs.
There is clearly an effort to have humans help train the AIs, but I’m not yet seeing much effort to return the favor to the humans. We have massive, growing problems with the care and effective development of people as well. AIs hold vast knowledge that could help recognize these problems and advise employees on how to deal with them.
This is where I think we need a breakthrough, so that the human isn’t just making the AI a better part of the partnership by advancing its knowledge, but the AI is also making the human a more productive member of the team by addressing his or her shortcomings. Then we get the kind of synergy an augmentation model anticipates and can reach the full potential of this new class of team.
Two roads to AI in the workplace
Currently, there are two potential paths for the advancement of AI in the workplace. One, which IBM, Salesforce and H&R Block are working toward, is focused on augmenting and improving the human. The other is replacement, and in many ways it is far easier because it doesn’t try to create a successful system combining human and AI elements. Replacement, however, has the nasty side effect of massive unemployment and customer loss.
If we don’t want to live in a dystopian future, the augmentation path has to succeed. To get there, systems have to be trusted and worthy of that trust, AIs and people have to become more like partners, and, as partners, both have to focus on making the other better. This last will likely be the hardest, but I think it will also make the difference between a future where people and AIs co-exist and one where far more humans are surplus than states can manage.