David Shrier thinks we need a better model of how we deploy new technologies (and he’s right)

Oxford and MIT futurist David Shrier explains how augmented and extended intelligence may protect governments worldwide against cyberattacks.

Recently, I interviewed David Shrier, an Associate Fellow at Saïd Business School, University of Oxford, and originator of the online programmes Oxford Fintech and Oxford Blockchain Strategy. David is also the CEO of Distilled Analytics, an MIT spinout transforming financial services through behavioral analytics, and Chairman of Riff Learning, an AI-driven collaboration technology platform provider. He currently counsels the Government of Dubai on blockchain and digital identity, and he previously advised the European Commission on commercializing innovation with a focus on digital technology. His pro bono activities include serving on the advisory board of WorldQuant University, a program offering a totally free, accredited, online master's degree in financial engineering; membership on the FinTech Industry Committee for FINRA, the securities industry's self-regulatory body; and serving as a charter member of the UK's Fintech Trade & Investment Steering Board for HMG's Department for International Trade. David and MIT Professor Alex Pentland have published books including Frontiers of Financial Technology, New Solutions for Cybersecurity, and Trust::Data. David Shrier earned an Sc.B. from Brown University in Biology and Theatre and worked professionally as a dramaturg and director after college.


David Shrier

Opinions expressed in this interview are his own, and do not necessarily reflect the views of the University of Oxford or its faculty.

In 2015, we met at the MIT Media Lab, where Code for America held its annual #CodeAcross event. At the time, you were Managing Director of the MIT Connection Science Initiative and kindly gave my mentor, then-Massachusetts Government Innovation Officer Tony Parham, a demo of MIT's CityScope, a LEGO-based urban planning system. We have been connected ever since, and I love, love, love seeing and reading your clips on global innovation! So, let's get started: In an InformationWeek interview last year, Andrew Moore, Dean of Carnegie Mellon's School of Computer Science, estimated that 98% of AI researchers are focused on engineering systems that help people make better decisions rather than on simulating human consciousness. He's essentially talking about augmented intelligence, but what is augmented intelligence? How can AI and people work together?

David Shrier: Well, I would describe what he is talking about as decision support, which is good but only a first step. There's a next evolution we call "extended intelligence": human/machine hybrids that are better than either one alone. Imagine if you walked out of a meeting and the AI app on your phone prompted you with two or three things you did well, in addition to what you could improve in the next meeting, while the AI system helped the organization become aware of emerging problems sooner so that they could be addressed faster and better. Or imagine the smart agent on your computer or phone brought you the document you needed, when you needed it, without your having to ask, because it understands your work, your workflow, and how you think, and can simulate you in going out and finding what you need, invisibly, instead of forcing you to search for it or ask for it. Siri is great, but you have to know how to phrase your query. You couldn't say to Siri, as you could to a human, "bring me that thing from that guy last week." Future extended intelligence systems would not just figure out what you meant by that, but actually anticipate what you need in advance. A future system of people and smart machines networked together could do amazing things we've only just begun to dream of.

In the Fourth Industrial Revolution, we have been thinking through how to create the workforce of the future. Many countries are trying to figure out how automation will affect their economies. McKinsey developed a telling report for Denmark on how automation will affect that country; it found that automation will create jobs, and it also revealed a digital skills gap. In your interview with CNBC, you talked about Oxford Fintech and how the average age of program participants is 40. Can you tell us more about how your academic institution is working to re-skill and up-skill workers? And what are some of the innovative workforce programs (à la WorldQuant)?

David Shrier: Conventional wisdom (and Citi's research group) holds that more than 2 million jobs will be lost to the fintech revolution in the next 8 years, just in financial services in the US and Europe. Rather than wait for employment Armageddon to happen, Oxford is proactively stepping forward to help retrain the workforce. We're using digital platforms to enable this journey of transformation through our programs like Oxford Fintech and Oxford Blockchain Strategy, and applying the neuroscience and cognitive science of learning in how we design and deliver these programs. This has led to completion rates as high as 98%, versus the roughly 5% typical of online courses. Our students are typically mid-career professionals looking at the looming financial innovation and seeking to prepare for where to go with their careers when everything changes. We seek to impart not only knowledge, but also tools, frameworks, and a network of innovators, so that our students are prepared for future disruptions as well. Many are using the program to drive corporate innovation ideas within large financial organizations, and others are creating free-standing startups to invent a new future. We're exploring other ways to extend the experience beyond the initial program, so look for more announcements this year.

I’m also on the board of a different nonprofit institution, WorldQuant University.  It’s got a different pedagogical model than the Oxford programs but is offering a totally-free online master’s program and seeks to fast-track employment for people from emerging economies.    

You once said, "The ethics of AI is the ethics of the user."  Can you talk more about what you mean by this?  Also, what is responsible innovation? 

David Shrier: AI is just a tool, no different than a hammer and chisel or nuclear radiation.  You can use this tool for bad things, or you can use it to help people.  The question is, what do you decide to do? 

Facebook is a great example of serious, company-wide ethical failures in how AI was applied to a product and market.  They are trying to disclaim responsibility for the propaganda and election subversion carried out on their platform – and believe me, Facebook has harmed both Democrats and Republicans alike.  “We’re just a platform,” they say.  That argument didn’t work for Facebook with the child pornography scandal a decade ago, and it shouldn’t work with the current crisis either.  When teaching behavioral analytics or data science, we make a point of introducing ethics into the conversation early.  I’ve written a bit on this; check out my post on LinkedIn on the Ethics of Data Science.

Responsible Innovation is thinking ethically about how we adopt new technologies like AI or blockchain.  Do we just replace people with machines, or do we figure out ways to reorient our people around a new reality and figure out how to get people and computer systems working together, to produce something neither could do alone?

As I put together a new company, Distilled Analytics, I think a great deal about not only what we are doing, but why we are doing it and which problems we are choosing to take on.  One of our products, Distilled IMPACT, applies AI to assessing the non-financial factors around an impact investment, in an effort to catalyze the $23 trillion of impact capital that is still seeking a more credible model for investing in profit and purpose than one based on surveys and guesswork.

Inclusivity and access are key for vulnerable populations, especially when it comes to bridging the digital divide, so that we don't end up digitally excluding poor people, which would bear out Thomas Piketty's assertions about inequality. How can we have ethical AI and machine learning? For example, reverse innovation also has a dark side. Can you share your bank example about not lending to certain neighborhoods and the dark side of AI (and how this ties into your digital identity startup!)?

David Shrier: So, if we just let AI run around and learn unsupervised, it can do strange (and sometimes harmful) things. One large financial institution used an AI to make lending decisions but failed to put appropriate parameters around it. When the firm did a technology audit later, it learned that the AI had figured out how not to lend to a certain zip code (redlining), which is illegal in the US. The AI had determined that loan portfolio performance improved by excluding poor people of color in a certain geographic area. A properly trained human would have known that was wrong.

If we don’t behave thoughtfully around how we use digital platforms, we can create problems just like we did when we made cars faster but didn’t have seat belts and air bags.  We need to create the digital equivalent of a seat belt for our AI, to mitigate the consequences when something goes wrong.
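One concrete form such a "digital seat belt" could take is an automated disparate-impact audit that runs before a lending model goes live. The sketch below is purely illustrative (the function names, groups, and the 80% threshold drawn from the common "four-fifths rule" screen are my assumptions, not a description of any real bank's system): it flags any group whose approval rate falls well below the best-treated group's, so a human can review the model before it quietly redlines a zip code.

```python
# Hypothetical "digital seat belt" for a lending model: a disparate-impact
# audit using the four-fifths (80%) rule. All names and data are illustrative.

def approval_rate(decisions):
    """Fraction of applications approved; decisions is a list of booleans."""
    return sum(decisions) / len(decisions)

def four_fifths_check(decisions_by_group, threshold=0.8):
    """Return the groups whose approval rate is below `threshold` times the
    highest group's rate -- a common screen for disparate impact."""
    rates = {g: approval_rate(d) for g, d in decisions_by_group.items()}
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Example: the model approves 60% of applicants in one zip code but only
# 20% in another; the audit flags the second group for human review.
flagged = four_fifths_check({
    "zip_10001": [True, True, True, False, False],    # 60% approved
    "zip_10002": [True, False, False, False, False],  # 20% approved
})
# flagged -> {"zip_10002": 0.2}
```

The point of the design is that the check is cheap and runs on outcomes, so it catches proxy discrimination even when no protected attribute appears in the model's inputs.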

On the other hand, if we apply ourselves, we can use AI to solve previously unsolvable (or difficult-to-solve) societal problems. Over a billion people in the world lack legal identity, and another product that Distilled Analytics is developing, Distilled IDENTITY, can help address this issue using machine learning. Through behavioral biometrics, we can not only figure out who these people are, but also assess their credit riskiness (and that of the other 2.5 billion people who are underbanked or unbanked), bridging the digital divide and fixing the failures of the credit bureau model that have, to date, excluded almost half the world's population.

Recently, you published a book on cybersecurity.  What is the central message of the book that you're hoping the audience understands?  Also, what is the role of augmented intelligence in the cybersecurity realm? 

David Shrier: Sandy Pentland, Howie Shrobe and I were prompted to put together New Solutions for Cybersecurity because we felt that there wasn’t sufficient awareness of exactly how broken our cyber infrastructure is.  From management systems to hardware to access control and more, we have created a fragile, poorly-constructed house in an earthquake zone.  The US energy grid has already been hacked by a state actor, for example, and our entire power system is vulnerable.  Baby monitors and webcams have been turned into zombie botnet armies to launch large-scale cyberattacks, because no one bothered to secure the embedded processors as millions of these devices were sold.  We need to raise awareness of the magnitude of the problem and provide suggestions for how to begin fixing our cyber infrastructure.  We got 30 of the top researchers in the field together to contribute their assessments and recommendations.

In addition to editing, I also wrote a chapter on behavioral biometrics, which can not only be used to solve the identity crisis I mentioned earlier, but also to better secure our computer systems. Human-in-the-loop AI, what we call “extended intelligence”, holds the potential to address aspects of cyber (in)security, such as ascertaining patterns of coordinated attack.  Our next book is probably going to be on Extended Intelligence.
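To make the behavioral-biometrics idea concrete, here is a minimal sketch of one classic technique in that family: keystroke dynamics, where a session is accepted if the user's typing rhythm matches an enrolled profile. This is my own toy illustration (the timing values, threshold, and cosine-similarity choice are assumptions), not the method described in the book chapter or used by Distilled IDENTITY.

```python
# Illustrative behavioral-biometrics check via keystroke dynamics:
# compare observed inter-keystroke intervals against an enrolled baseline.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two timing vectors (1.0 = identical shape)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def matches_profile(enrolled, observed, threshold=0.95):
    """Accept the session only if the observed typing rhythm is close
    enough to the enrolled profile."""
    return cosine_similarity(enrolled, observed) >= threshold

enrolled  = [120, 95, 140, 110, 130]   # enrolled timing profile (ms between keys)
same_user = [118, 97, 138, 112, 128]   # small natural variation
impostor  = [60, 200, 80, 190, 70]     # very different rhythm

matches_profile(enrolled, same_user)   # -> True
matches_profile(enrolled, impostor)    # -> False
```

The appeal of this class of signal for security is that it is continuous and passive: an attacker who steals a password still has to type like the victim.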

Whoa! To this end, AI has been mostly informed by Western thinking and has inherent biases. How would you recommend creating ethical AI? Does this require hiring more women to increase diversity of thinking in leadership? How can we avoid a Minority Report or Skynet future?

David Shrier: We definitely need to come up with a better way to train AIs. Part of how we bring ethics into the discussion is to have people more thoughtfully and actively involved in both the training and deployment of AI systems. We need to design processes that incorporate human judgment in addition to AI scale. We need to train computer scientists and data scientists continuously on the ethical implications of their decisions – not just how to build something, but why they are building it and what its impacts will be. I don't know that it's an east-versus-west issue; the Government of China is using AI in ways that I find scary, straight out of Black Mirror, although China will point at the US elections and Brexit and say that it prefers stability to insanity. What I do know is that we can come up with a better model of how we deploy new technologies like AI.
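One common pattern for incorporating human judgment alongside AI scale is confidence-based routing: the model acts autonomously only when it is highly confident, and everything borderline is escalated to a person. The sketch below is a generic illustration of that pattern (the thresholds and names are my assumptions, not any system Shrier describes).

```python
# Hypothetical human-in-the-loop routing: automated decisions are accepted
# only at high confidence; borderline cases are queued for a human reviewer.
REVIEW_QUEUE = []

def decide(application_id, model_score, approve_at=0.9, deny_at=0.1):
    """Auto-approve or auto-deny only at the extremes of model confidence;
    everything in between goes to a person."""
    if model_score >= approve_at:
        return "approve"
    if model_score <= deny_at:
        return "deny"
    REVIEW_QUEUE.append(application_id)  # escalate to human review
    return "human_review"

decide("app-1", 0.97)   # -> "approve"
decide("app-2", 0.55)   # -> "human_review" (queued for a person)
decide("app-3", 0.03)   # -> "deny"
```

Tightening the two thresholds shifts work from the machine to humans, which is exactly the dial an organization can turn as it builds (or loses) trust in a model.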

This isn't an AI issue per se, but we also need to educate men, and women, better about how to function effectively as an organization. At Davos this year, I listened with astonishment as one young tech CEO told me he doesn't have women working at his blockchain company because women don't like staring at screens all day or working long hours (news flash: both of these claims are fallacies). Collaboration research shows that gender- and race-diverse teams come up with more, and better, new ideas. Improving diversity of race and gender in the workplace will give us better companies making better decisions and delivering better products.
