by Maria Korolov

Workplace AI: Emerging technologies, ethical questions

Feature
Dec 11, 2018 | 19 mins
Artificial Intelligence | Enterprise Applications | IT Leadership

Workforce-focused AI offers significant upside, but short-term gains might backfire in the form of lower employee morale, higher turnover, lower productivity, and public relations backlash.


AI is making quick inroads into the workplace. Equally capable of whipping up informed predictions in a flash and completing specific tasks on a scale that humans can’t match, artificial intelligence is being applied to everything from business processes to analytics.

While much of the scrutiny on AI’s impact on the workplace has been focused on the kinds of jobs ripe for replacement, AI efforts aimed specifically at workforce issues, such as job candidate screening and performance evaluations, present particularly thorny questions — especially as bots begin moving into management.


True, workforce-focused AI offers significant upside, but short-term AI-fueled gains in productivity or security might backfire in the long term, with employee dissatisfaction and lower morale leading to higher turnover and, ultimately, lower productivity. Plus, AI fallout can lead to public relations issues that turn off customers, investors, and job seekers, not to mention the legal and compliance aspects of workplace privacy violations.

Following is an examination of new AI technologies emerging in the workplace, how they work, what they're intended to do, and how they might cross the line.

Screening your job application

Last month, Amazon scrapped an AI-powered recruitment tool because it was biased against women. As it turns out, when a company’s developers are mostly white men, the AIs they create conclude that white men are a better fit for the company than other demographics. And if gender is specifically removed from consideration, AIs will find proxy measures that accomplish the same goal.
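The proxy problem is straightforward to demonstrate. One generic check, sketched below, is to drop the protected attribute and then test whether the remaining features can still predict it; if they can, a "blind" model isn't really blind. The data and feature names in this sketch are synthetic and purely illustrative, not drawn from Amazon's system.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, size=n)                      # protected attribute, excluded from the model
womens_college = (gender == 1) & (rng.random(n) < 0.3)   # proxy feature that correlates with gender
years_experience = rng.normal(8, 3, size=n)              # legitimate feature

# Train only on the "blind" features and see how well they recover the protected attribute.
X = np.column_stack([womens_college.astype(float), years_experience])
accuracy = cross_val_score(LogisticRegression(), X, gender, cv=5).mean()
print(f"Gender recoverable from 'blind' features with ~{accuracy:.0%} accuracy")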

But that’s not keeping other companies from applying AI to resume screenings. Oracle, for example, is rolling out its own tool to find “best fit” candidates, says Melissa Boxer, vice president of adaptive intelligent applications at Oracle.

“We always drink our own champagne,” says Boxer. Oracle is a data company that is looking to sell a variety of AI-powered tools to its customers. “It’s one of the non-negotiables. We roll out internally the products that we’re building.”

Oracle hires a lot of people, Boxer says, so that was a good place to start. HR employees have to wade through a very large number of applicants. “How do we make that job easier and make sure we’re hiring the right people?”

But questions of inadvertent bias, data privacy, and other ethical issues are top of mind as well.

“We have a board of ethics,” Boxer says. “They are in charge of establishing best practices in fairness in machine learning, and associated concepts of accountability and transparency.”

That includes, for example, guidelines on what kinds of personal data may or may not be used as part of the learning process.

She also says that Oracle is committed to making AI explainable — an effort to tackle the “black box” problem of AI. “We have put in place a series of supervisory controls within our applications that foster transparency and accountability.”

Oracle also plans to roll out tools that recommend actions to employees, Boxer says. Salespeople, for example, might get suggestions about the most promising prospects to contact next. While employing AI to suggest tasks for workers to undertake could have a range of negative implications, Oracle’s goal is to make it easier for employees to do their jobs, says Boxer.

“It’s not a Big Brother AI reminding you to do things,” she says. “It’s giving the employee insights into their sales pipeline. Are we focusing on the right opportunities?”

Taking your help desk calls

In the movies, when AIs talk to humans, it often ends badly. HAL 9000. The Terminator. That happens in real life, too. Most famously, in 2016, it took Microsoft’s Tay chatbot just a few hours of internet exposure to turn into a racist.

Most companies are deploying chatbots in much more controlled ways — for customer service, for example, or tech help desks.

This month, Microsoft announced that it is tackling the issue head on, releasing a set of guidelines for building responsible chatbots.

For example, it should be clear to users that they’re talking to a chatbot and not to another human, the company says. And when the conversation isn’t going well, the chatbot should quickly turn the issue over to a real human being.

Microsoft also includes a section on ensuring that the bot has built-in safeguards to keep it from being misused, and to respond appropriately to abusive or offensive users.

“Acknowledge the limitations of your bot, and make sure your bot sticks to what it is designed to do,” writes Lili Cheng, Microsoft’s corporate vice president for conversational AI, in a blog post. “A bot designed to take pizza orders, for example, should avoid engaging on sensitive topics such as race, gender, religion and politics.”
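Microsoft's guidelines are prose, not code, but the behaviors they describe, such as disclosing that the user is talking to a bot, staying on topic, and handing off when the conversation stalls, can be sketched in a few lines. Everything below, from the topic list to the toy knowledge base and the escalation threshold, is a hypothetical illustration rather than Microsoft's implementation.

SENSITIVE_TOPICS = {"race", "gender", "religion", "politics"}
MAX_FAILED_TURNS = 2          # hypothetical: escalate after two unresolved turns
KNOWLEDGE_BASE = {            # toy stand-in for a real help-desk knowledge base
    "reset password": "You can reset your password from the account settings page.",
    "order status": "Your order status is shown under My Orders.",
}

def respond(user_message: str, failed_turns: int) -> tuple[str, int]:
    """Return (reply, updated_failed_turns) for one turn of the conversation."""
    text = user_message.lower()

    # Refuse to engage on topics outside the bot's scope.
    if any(topic in text for topic in SENSITIVE_TOPICS):
        return "I'm a support bot, so I can only help with account and order questions.", failed_turns

    # Hand off to a person once the conversation stops making progress.
    if failed_turns >= MAX_FAILED_TURNS:
        return "I'm having trouble helping with this. Let me transfer you to a human agent.", 0

    for phrase, answer in KNOWLEDGE_BASE.items():
        if phrase in text:
            return answer, 0

    return "Sorry, I didn't catch that. Could you rephrase?", failed_turns + 1

# The first line users see should disclose that this is a bot.
print("Hi, I'm an automated assistant. How can I help?")
print(respond("What's my order status?", failed_turns=0)[0])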

The guidelines also address issues of reliability, fairness, and privacy.

Evaluating your performance

This summer, Bloomberg reported that IBM was using Watson, its AI platform, to rate employee performance, and even predict future performance. These ratings would then be used by managers when making bonus, pay and promotion decisions.

IBM claims an accuracy of 96 percent, but introducing AI into employee evaluations and raise considerations can be cause for concern. After all, what do you do when it’s wrong? And there are plenty of things that can go wrong, especially if the technology isn’t rolled out properly.

“We can devise an HR system for performance evaluations, and that system can run with intelligent algorithms, learning from past behavior of the individual, or giving some kind of test to measure their decision-making ability,” says Ayse Bener, a professor at Ryerson University. “Whatever it is, if it’s not designed properly, it could be heavily biased, and also might end up giving wrong recommendations.”

To avoid problems, any such system should be built with transparency from the start.

“Every step of this model should be validated by the company, so they are not dealing with any black box algorithms,” she says. That’s true both for internally developed systems and commercially available tools.

Much of the design of the algorithm could also be subjective, she says, such as deciding which factors to consider when making a recommendation. These design decisions need to be shared with, and understood by, the people using the technology, she says.

“If you remember the financial collapse, that involved complex mathematics that people didn’t understand and couldn’t audit,” she says. “They couldn’t see what went wrong.”

In addition to checking that the algorithm works as designed, companies should make sure they’re using the right data set, and that the system can recognize when there’s insufficient data to make a recommendation, or the data is inappropriate.

For performance evaluations, given that each job category requires different criteria, and those criteria can change depending on department or other factors, the data set may be inadequate in many situations, or over-represent certain categories at the expense of others.

“And there is also in AI terminology something called concept drift, which means that today my model assumptions are correct, but things change, my data changes, my environment changes, my customers change,” Bener says. “So there’s a drift in the whole system and I need to revisit the algorithms to calibrate it and tune it again.”
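A common way to watch for the drift Bener describes is to compare the distribution of a model input at training time against recent data, for example with a two-sample Kolmogorov-Smirnov test. The sketch below is generic; the feature, the numbers, and the alert threshold are invented for illustration.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Feature values (say, monthly tickets closed per employee) at training time...
training_values = rng.normal(loc=40, scale=8, size=1000)
# ...and the same feature observed this quarter, after workloads shifted.
recent_values = rng.normal(loc=52, scale=10, size=300)

statistic, p_value = ks_2samp(training_values, recent_values)
if p_value < 0.01:  # illustrative threshold
    print(f"Possible drift (KS={statistic:.2f}, p={p_value:.1e}): "
          "recalibrate the model before trusting its recommendations.")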

“Audits are very important,” she says. “But I don’t think the current audit systems cover these algorithms properly, because there aren’t enough trained people.”

It takes 10 to 15 years of experience before an auditor is well-seasoned in evaluating AI algorithms, she says, and it’s too early. “We don’t have those people.”

Managing your work

Worried that your boss might be replaced by AI, and you’ll soon be working for a robot? For some, that future is already here.

Paulo Eduardo Lobo, a software developer with 15 years of experience, works for the Parana State Government in Brazil. But lately, he has also been freelancing for tech vendor Zerocracy, which has replaced its project managers with AIs.

The AIs assign project tasks based on developer reputations among other factors. They also set schedules, predict delivery times, and calculate budgets.

“We do not need meetings or human interference to assign tasks to team members,” says Lobo. Plus, it helps with morale, he says. “We do not have people trying to please the project manager because it’s an AI and on the other hand the AI won’t lose time trying to please team members or improve morale.”

It helps that the company uses a programming methodology that allows projects to be easily split into small tasks, typically 30 minutes or less.
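Zerocracy hasn't published its scoring internals here, but reputation-weighted assignment can be illustrated with a hypothetical sketch: pick the available developer with the best blend of reputation and current workload. The fields and weights below are made up for the example.

from dataclasses import dataclass

@dataclass
class Developer:
    name: str
    reputation: float   # e.g., points earned from past completed tasks
    open_tasks: int     # tasks currently assigned

def assignment_score(dev: Developer) -> float:
    # Favor high reputation, penalize developers who already have a queue.
    return dev.reputation - 5.0 * dev.open_tasks

def assign(task: str, developers: list[Developer]) -> Developer:
    chosen = max(developers, key=assignment_score)
    chosen.open_tasks += 1
    print(f"Task '{task}' assigned to {chosen.name}")
    return chosen

team = [Developer("alice", 120, 3), Developer("bob", 95, 1), Developer("carol", 140, 6)]
assign("Fix login bug", team)   # -> carol, whose score is highest here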

But some tasks can’t — yet — be automated, Lobo says, such as establishing the scope of a project. “Someone has to define what our software will do and what we will do first,” he says. In addition, people are still needed to evaluate code quality, and to take responsibility if something goes wrong.

Human intervention may also be needed if a task is taken on by a developer who’s in no rush to get it done, says Kirill Chernyavsky, a software architect at Zerocracy.

AI-powered project management is a new idea, and can be a hard sell. “Customers prefer traditional management as time-proved,” he says.

Palo Alto-based Zerocracy, founded two years ago, has been using its AI system since last spring and now has five customers on it, but developers have been working for the AIs for about a year, says CEO Yegor Bugayenko. Sixty developers now work on the platform; some are Zerocracy staff, some are freelancers, and the rest are employees of customer firms.

“Initially, when people start working under the management of robots, they’re surprised, and they get skeptical,” he says. “They don’t think it’s possible that a computer can tell them what to do.”

Surveilling your movements

Early this year, Amazon was forced to publicly deny that it was patenting a device to track every movement of warehouse workers. “The speculation about this patent is misguided,” the company said in a statement. Instead, the wristband tracking system would be used as a replacement for handheld scanners.

That assurance became less convincing over the course of the year, as reports rolled in about abusive conditions in Amazon warehouses, including limits on whether employees could take bathroom breaks.

Of course, companies looking to maximize efficiency at any cost don’t need to wait for Amazon’s wristbands to hit the market. Video cameras, combined with image recognition technology, can just as easily track employee movements down to the second, says Jeff Dodge, director at Insight, a Tempe-based technology consulting and system integration firm.

“If you’re an employee, and you’re in a space with cameras, your employer is quite literally watching, analyzing, every single movement you make, every action you take,” he says.

And except for certain specific compliance-related situations, it’s all perfectly legal and possible today.

This surveillance can be completely benign — for security purposes, say, or to help optimize office layouts. And that’s how companies are currently using it, says Dodge.
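The mechanics are unremarkable once the detections exist. A sketch of the aggregation step, turning hypothetical per-second (person, zone) detections into dwell time per zone, shows how little code separates a layout study from minute-by-minute monitoring.

from collections import defaultdict

# (person_id, zone, seconds_since_start) detection events, one per second of presence.
# In practice these would come from an object-detection and re-identification pipeline.
events = [
    ("emp_17", "desk", 0), ("emp_17", "desk", 1), ("emp_17", "hallway", 2),
    ("emp_17", "break_room", 3), ("emp_17", "break_room", 4), ("emp_17", "desk", 5),
]

dwell = defaultdict(int)
for person, zone, _ in events:
    dwell[(person, zone)] += 1   # one detection ~= one second of presence

for (person, zone), seconds in sorted(dwell.items()):
    print(f"{person} spent {seconds}s in {zone}")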

“We often get into conversations with clients, not about what is possible, but about what is right, what is morally correct,” he says.

Yes, it’s possible to track how much time individual employees spend in the bathroom, he says. “But people are very aware that if they should do that, and someone finds out, it would be a very negative press consequence.”

The key, Dodge says, is to be up front with employees about what the AI project is meant to accomplish. “‘Hey, we’re thinking of putting bathrooms in a place where they’re better accessible, but to do that, we have to monitor where people are going.’ And if you do get buy-in, having transparency about who has access to that data is equally important to building trust. If it’s perceived that some AI is making decisions to help management, you can have an us versus them environment, and trust is destroyed,” he says.

Surveillance doesn’t have to be limited to video monitoring. Companies also have access to employees’ emails and web browsing histories, and a wealth of other behavioral data.

Gauging your loyalty

Sentiment analysis involves the use of analytics to determine whether people have a favorable view of a company or product. The same technology can be used on employees, except here a company has access not only to their public social media profiles, but also to all their internal communications.

Done properly, sentiment analysis could be useful to a company without being invasive.

“Almost every large enterprise does use Net Promoter Scores,” says Dodge. “This is a measure of whether people are generally going to talk favorably about an institution. Are they going to go out and say, ‘This is a great place; you should come work here’? Or are they a net detractor?”
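The Net Promoter Score itself is a simple calculation: the percentage of promoters (scores of 9 or 10 on a 0-to-10 "would you recommend us?" question) minus the percentage of detractors (scores of 0 through 6). The survey responses in this sketch are invented.

def net_promoter_score(scores: list[int]) -> float:
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 5]
print(net_promoter_score(responses))   # 5 promoters - 3 detractors out of 10 -> 20.0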

One specific application of this technology is in predicting which employees are likely to leave a company, because, for example, they’re surfing job hunting sites and sending out resumes, or sending more emails with attachments to their personal accounts than usual.

This is easy to do, says one top information security professional at a Fortune 500 company, who did not want to be named. He says he’s been doing it for ten years and that it’s quite reliable.

Such technology can be used innocuously, for example, to ensure that proprietary information isn’t about to leave the company, which is how it is used at the security professional’s company. Or it can be used in aggregate, to help a company address widespread morale problems before people start leaving. But there are other ways a company could use the knowledge gained from these methods, such as sidelining a perceived disloyal employee from prestigious assignments, travel opportunities, or professional training.

Doing so could land a company in a lot of trouble, which is why most companies steer clear of these types of uses of AI, says Dodge. “Generally speaking, companies are over-indexing on transparency. They all have public policies about what they do and don’t do with data — they’re really thinking about the long-term implications to their brand and public perception of misusing this technology.”
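The signal the unnamed security professional describes, such as a spike in attachments sent to personal accounts, can be flagged with nothing more exotic than a comparison against the employee's own baseline. The sketch below is a generic z-score check with invented counts, not his company's system.

import statistics

def is_anomalous(history: list[int], this_week: int, threshold: float = 3.0) -> bool:
    """Flag this_week if it sits more than `threshold` standard deviations above the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0   # avoid dividing by zero on a flat history
    return (this_week - mean) / stdev > threshold

weekly_attachments = [2, 1, 3, 2, 2, 4, 1, 3]            # past weeks for one employee
print(is_anomalous(weekly_attachments, this_week=19))    # True: worth a closer look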

Managing your health

More and more companies are rolling out employee wellness programs. But there’s a lot of room in this space for privacy violations, and companies have to be careful, says Dodge. That’s especially the case when companies are also managing health insurance programs for their employees.

There’s a lot that can go wrong if things get too personal. And when employees wear personal fitness or health monitoring devices, it can get very personal very quickly.

“It’s toxic, if you will, from a privacy perspective,” Dodge says. “Imagine saying, ‘We recommend you go see a specialist about IBS because you’ve been going to the restroom so often’.”

Dodge recommends companies stay out of this business altogether and leave it up to specialist firms to collect and analyze the data and make their recommendations directly to employees. Even if those recommendations come through a company-sponsored employee portal, the fact that it’s an outside service can help reduce backlash and the perception of creepiness or misuse.

AI and the balance of power

One common element that underlies the abuse of AI in the workplace is how the technology affects the power dynamic between the company and its employees.

“You can use [AI] to increase the agency that an employee has,” says Illah Nourbakhsh, professor of ethics and computational technologies and the director of the CREATE Lab at Carnegie Mellon University. “Or you can use it to take the power structure of the company and reinforce that. It’s about the power relationships. You can use AI to create balance, or to empower — or to reinforce existing tensions in the workplace.”

It’s tempting for a company to try to get more power over its employees, such as by using surveillance technologies to boost productivity.

“Tactically, this may yield results,” he says. “But strategically, it makes employees feel less empowered in the long term, and hurts productivity and increases turnover. If Amazon fines you for walking too slowly, it makes the employee feel less valued, less part of a team. Short term, it makes people walk faster, but in the long term it hurts relationships.”

Like other experts, Nourbakhsh recommends a strong focus on transparency and explainability when it comes to rolling out new AI projects.

That includes issues of both accidental and deliberate bias in the underlying data set and in the algorithms used to analyze it, as well as mistakes in either the data or the algorithms.

When good AIs go bad

In addition, companies need to be prepared for the AI to fail in unexpected ways.

“AI is alien,” Nourbakhsh says. “We assume that an AI system will make errors the same way that humans make errors.”

That assumption can be damaging.

An AI system, for example, can look at an image that’s modified in a relatively subtle way, and, with extremely high confidence, classify it as something else entirely.

For example, last year researchers showed that a few small pieces of masking tape could make an AI classify a stop sign as a sign saying the speed limit was 45 miles an hour, a critical mistake for an AI-driven car.

“This is a stunning error,” says Nourbakhsh. “It’s horrible. But it helps us understand that we have a preconceived bias for how things fail. And when it fails, it doesn’t just fail a little bit, the way a human fails. But it can fail in a really different way.”
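The stop-sign attack was a physical one, but the underlying phenomenon is the same one exploited by the classic fast gradient sign method: nudge every pixel slightly in the direction that increases the model's loss, and a confident prediction can flip. The sketch below uses a throwaway model and a random image purely to show the mechanics.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` (fast gradient sign method)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move every pixel a tiny step in the direction that increases the loss.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Throwaway classifier and random "image" standing in for a real sign detector and photo.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([0])
adversarial = fgsm_perturb(model, image, label)
print((adversarial - image).abs().max())   # the change per pixel is at most epsilon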

Sometimes, the results can be deadly.

In October, a Boeing 737 MAX crashed off the coast of Indonesia, killing all 189 people aboard, after a problem with an automated flight-control system.

The way the system was designed, it was too difficult for human pilots to regain control, he says. “Humans need to have the final oversight. When companies are implementing AI systems and giving them autonomous control, that’s problematic.”

One way to give humans control is to make sure the AI system is able to explain itself, and the explanation needs to be understandable to users. This should be a fundamental requirement for any company building or buying an AI system, he says.
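One simple, model-agnostic way to offer such an explanation is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below uses scikit-learn on invented data and feature names; it illustrates the idea, not any vendor's product.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["tickets_closed", "peer_review_score", "training_hours"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")   # larger drop in score = feature mattered more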

It would also help if companies had tools available to monitor AI systems, to ensure that they were working correctly. Those tools aren’t available yet, not even for common problems such as racial or gender bias. Instead, companies have to build their own, says Vivek Katyal, data security leader for cyber risk services at Deloitte & Touche. “Or you can do manual reconciliation and figure something out.”
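Until such tools exist, the manual check Katyal alludes to can be as basic as comparing a model's rate of favorable outcomes across groups, sometimes called a demographic parity gap. The data and thresholds in this sketch are invented.

def selection_rate(decisions: list[int]) -> float:
    return sum(decisions) / len(decisions)

# 1 = favorable outcome (e.g., flagged as "promotable"), split by a protected attribute.
group_a = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% selected
group_b = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]   # 30% selected

gap = abs(selection_rate(group_a) - selection_rate(group_b))
print(f"Demographic parity gap: {gap:.0%}")   # 40%: investigate before trusting the model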

And, of course, no technology can prevent companies from deliberately misusing their AI systems. According to a recent survey of early adopters of AI, fewer than a third ranked ethical risks among their top three AI-related concerns. By comparison, more than half said cybersecurity was one of their top three concerns.

Other than reputational damage and privacy-related compliance standards, there’s nothing to force companies to do the right thing, he says.

“In the absence of regulations, people will do what they will do,” Katyal says. There are bills in the works at state levels to expand privacy protections, and it would make sense to have a national standard. But even with privacy regulations, he says, that won’t help deal with the problem of algorithms that are biased, or abused. “How do we control it? That’s the big piece that I see that organizations need to grapple with.”
