by Arik Hesseldahl

If artificial intelligence changes everything at work, then education must change, too

Jan 11, 2018
Artificial Intelligence · IT Skills · Technology Industry

If AI is changing everything about our jobs, what does that say about how we as humans prepare for those jobs? How must education – from preschool through grad school – also change?

If by the end of 2017 it hadn’t become abundantly clear that artificial intelligence is well on its way to changing everything that matters about the way we work, then you weren’t paying attention.

The examples were too numerous to list, but here are a few: In 2017, AI systems beat human doctors at detecting irregular heartbeats, tracked player statistics for NFL football fans, and out-bluffed the world’s best poker players at Texas Hold ’Em.

When it comes to solving problems, and doing jobs that are inherently based on patterns, the machines have overwhelmingly won the race against the human brain.

So as 2018 gets underway, it’s worth asking this out loud: If AI is changing everything about our jobs, what does that say about how we as humans prepare for those jobs? How must education, from preschool through graduate school, change?

I got into a conversation about this last month with Gordon Ritter, founder and general partner at Emergence Capital. The San Mateo, Calif.-based venture capital firm has made some notably strong thematic bets with its investments over the years. It was early to back cloud application companies. Its first investment, in 2003, was a $1 million stake in Salesforce. You can guess how that one turned out, but in case you can’t, Salesforce is worth a whisker less than $80 billion today.

Five years later Emergence paid $4 million for a 30 percent stake in Veeva Systems, the cloud software company that specializes in life sciences. By the time of Veeva’s 2013 IPO, the value of that stake had grown to north of $1 billion, and Ritter is currently the chairman of its board. The company is today worth more than $8 billion.

More recently Emergence has been making its funding bets in the areas of AI and machine learning. Its portfolio includes Chorus, an AI-based tool that helps coach sales teams to close more deals by telling them what to say to a potential customer and when to say it. Another portfolio company, Textio, uses AI to help companies write better job postings that have a higher chance of attracting qualified candidates.

Those two companies exemplify what Ritter has dubbed “coaching networks,” a vision he articulated in an Op-Ed for Recode in October. The basic idea is that AI agents will train us to do our jobs better at the very moment that we’re doing those jobs. The process will be fueled largely by what machines learn from watching which factors lead to success in a given job, and then by driving workers to repeat those factors again and again. Depending on our jobs, during our working hours, we’ll become accustomed to machines that guide us toward more effective outcomes.

That got me wondering: If AI agents are going to watch us work and train us along the way, what should schools be teaching us during the 12 to 18 years we spend preparing for the world of work?

The answer: “Original thinking.”

In a world where we’re surrounded by AI agents that watch our every move and boil our choices down into statistical models that predict how millions, or billions, of people behave, only the outliers have value.

To illustrate the point, Ritter pointed to the world of Web advertising. “We all have biases. You like a certain brand of car, or dresses from a particular designer. You may think you’re an original being, but we’re all really part of big groups, and we’re being led by advertisers to give up our money,” he says. “We have these biases, and most of the commercial world wants us to keep them, because it makes it easier to sell things to us as big groups of people who all tend to act the same way.”

But what makes the coaching network work is the opposite: “If enough of us become original thinkers, we blow up the patterns and statistical models and make it harder for advertisers to reach us.”

What that means is that in an age of AI-driven processes, being an outlier may be the most important quality a human being can possess. “The only way for you to have value as a human is to think originally,” Ritter says. “If you don’t stand out in some way, you fit into the established models, which means the AI system is already done with you. If you do things the same way every time, the coaching network learns nothing from you.”

What’s needed, then, to borrow a phrase from the old Apple advertising campaign, are people who “Think Different.” Those who ignore or challenge the status quo can sometimes end up changing it for the better.

“We need workers who are willing to take risks and to push the envelope, and that means we need kids who show up at the workplace with the emotional and intellectual tools to do that,” he said.

Machines are substantially better than humans at detecting patterns. But they aren’t creative, and they aren’t curious. They aren’t affected by art or music or poetry, or even a change in the weather and so they don’t make the unexpected connections that we humans struggle to describe but which we sometimes label as inspiration, instinct or following your gut.

On the other hand, humans are curious, but not reliably so. How many times during your school years did a teacher or professor ask a roomful of students for questions, only to have no hands go up? Social scientists have recently shown that people find value in asking, and being asked, questions about themselves. And yet that reticence persists, despite the fact that humans have demonstrated the power of curiosity over and over throughout history.

Asking questions is fundamental to creativity, a field where humans excel, and machines have tended to fall short. Humans can sometimes see solutions that don’t appear in the statistical models.

But creativity also involves risk. Social risk. Economic risk. Even physical risk. People may dislike you, you may lose money or a job, or hurt yourself in the act of trying to be creative. Humans are typically risk-averse beings.

“So many of us live our lives without taking risks, but just doing what everyone else is doing,” he says. “There’s not a lot of value in being just one of the human herd. None of us wants that to be true about ourselves, and if we can give our young people the tools to break away from the herd, it’s going to help us all as a species.”

This idea, Ritter says, gets at one of the most fundamental questions we’re going to face as machines take over more aspects of our daily working and personal lives. If machines are managing our existence with the goal of reinforcing the status quo, how does the status quo ever get better for us?

By embracing the weird. Because over time, when the weird way is better, it becomes normal. “Humans are the only mutation engine in this future world of AI,” Ritter continues. Remember how unconventional it was in the late 1980s for people to carry a cellular telephone with them? Or, in the 1970s, how strange it was for people to care about how their sneakers looked? Machines will never understand what it means to be “cool,” which is to say, meaningfully ahead of the mass trends as the result of purely human aesthetic, emotional and cultural considerations. Malcolm Gladwell codified the rules of cool for The New Yorker in 1997. The human capacity to understand and account for these intangible yet highly valuable qualities is our ultimate, and perhaps permanent, advantage over machines.

And that means the machines will need us just as much as we need them. The “cool” people are often human outliers who choose to challenge cultural conventions in ways that often irritate those of us who are happy to follow those norms. In a world that will be guided by AI, outliers will become crucially important grist to the algorithmic mill. And so, we must encourage people to break from the pack not only in what they wear, but in how they do practically everything.

I thought back to that ongoing metaphor of “Human vs. Machine,” in which we tend to view our relationship with artificial intelligence as a game or a race. I asked Ritter what we should do at this moment when, by all appearances, humanity is on the losing side of that match-up.

“We have to encourage our young people to try wacky, unexpected things, to think in unconventional and unpredictable ways,” he says. “This is how we find unique strategies that will move us forward as a species. …Now is the time to double down on our humanity.”