by Stephanie Overby

Bias Beware: More Time Does Not Equal Better Quality

Feature
Mar 01, 2004 | 12 mins
Careers | IT Leadership | Project Management Tools

It's commonly believed that the more time we devote to a project, the better the results. Not so. Wharton professor Maurice Schweitzer tells Senior Writer Stephanie Overby how CIOs can correct "input bias" and stop confusing quantity with quality.

Advertisements get under the skin of professor and human behavior expert Maurice Schweitzer. There’s the beer commercial that brags about its slow brewing process. And the billboard from a luxury car manufacturer that boasts about how its engineers haven’t taken a vacation in years. “Three hundred thousand people vacationed in the south of France last year, and none of them was a Lexus engineer. Who cares? That’s not very informative to me,” says Schweitzer. “And I’m not drinking a beer because of how long it was in a vat. I drink it because of how it tastes.”

Schweitzer, who specializes in behavioral decision research as an assistant professor in operations and information management at the Wharton School of the University of Pennsylvania, uses these advertisements as examples of what he calls “input bias.” According to his research, people automatically associate inputs related to quantity (how long it takes to make a car) with output quality (how well it performs). In many cases input information does correspond directly to outcome quality, but in others it does not. Yet humans are hardwired to associate input with output automatically, and people can prey on that bias, leading you to make poor decisions or judgments to their advantage.

It’s no surprise that advertisers exploit this basic fact of human nature. But CIOs, Schweitzer says, fall victim to the same input bias. Employees, vendors and fellow business leaders all take advantage of these natural biases in manipulating IT decisions. Fortunately, as Schweitzer told CIO Senior Writer Stephanie Overby in a recent interview, there are ways to guard against making mistakes based on bias.

CIO: Can you explain what your research has revealed about input bias — that is, how information on the quantity of something is often misused to infer quality?

Maurice Schweitzer: In general, input quantities are positively related to the quality of outcome. The more you invest in a project, the better that outcome will be. Companies that spend a lot of money on R&D typically produce the most innovative products. The more time an employee spends in the office, the more productive she is. Students who study the most do better on exams. It’s a natural assumption that’s usually right.

However, there are many cases where that direct relationship does not exist. For example, people assume that longer hospital stays are better and propose legislation requiring that women who give birth stay in the hospital for a certain length of time. They figure the longer you’re in the hospital, the better care you’ll receive. But in fact, there are so many sick people in a hospital that it’s actually not a great place to be unless you have to be there.

We live our lives mostly on automatic pilot, and we use heuristics — decision rules or shortcuts — to make a lot of our decisions. Assuming that input quantity directly correlates with outcome quality is one of those. These heuristics can lead us astray. There are times when we need to step back and give decisions some extra attention. That means engaging in a deliberate thought process that takes our natural biases into account.

Certainly we all use a lot of shortcuts in making decisions every day. Why is the input bias particularly dangerous?

It’s a social bias that relies on information from the people around you, which makes it much more dangerous, because people can manipulate you. The classic example is face time. Someone puts in a lot of time at the office; you assume he’s working hard. Or he says, “I had this many people working on this project,” or “This program I created has so many lines of code.” He’s giving you some measure of the input in a way that might skew your judgment of the outcome. The input information he gives you may be accurate. But of particular importance to managers and business decision-makers is the fact that people can manipulate or misrepresent input to prey on this bias.

It’s obvious that a vendor trying to sell you a software product might take advantage of the input bias. But are employees really that manipulative, or is it simply an ingrained part of corporate culture?

Sure, organizational culture is part of it. But people really are that manipulative. Think about people padding expense reports. Does that really happen? Yes. And it’s even more subtle than that; it’s about creating impressions. I have students who come up to me and say they spent five hours in the library working on a project, or that they studied for a week for my exam. Those measures shouldn’t matter. I’m interested in how well you did on the test, not how many hours you spent staring at the book. But people will convey that information nonetheless. And even if you know it doesn’t matter, it still has an influence on your judgment of the outcome.

What are some instances in which CIOs would likely rely on irrelevant input information?

In one of our experiments, we showed [college students] two videotaped presentations on emerging technologies they knew nothing about. With each presentation, we told half of the group that the presenter had spent a long time preparing and the other half that he had spent a shorter time. After viewing the tapes, the students judged the quality of the presentations along several dimensions, and on every one, they gave higher marks when we told them the presenter had put in more time. What was surprising was that even when they indicated they knew the preparation time didn’t really matter, they were still influenced by it.

People’s judgments often depend on how easy it is to evaluate something. If the outcome is easy to measure, we may not rely on input measures. If someone is competing in the Olympics, their time is a clear measure of how they did. But in other cases, such as judging how innovative a pharmaceutical company is, it’s harder to reach a decision. So you might rely on whatever objective measures are available, such as how many patents the company has. But in reality, that’s not a good measure, because most patents are filed for small modifications of existing compounds; relatively few cover big blockbuster drugs.

OK, so how are CIOs influenced by input bias?

A CIO may need to judge the innovativeness of a technology company and might rely on quantitative input information such as R&D expenditures. But that may not be the best indication of innovation.

In judging the quality of a software package, they might look at the number of lines of code in a program. But that’s not really what they should care about — what matters is whether the program is effective. If they could measure the speed at which a program completes a task, that would be a better measure. But if they are comparing packages that perform different kinds of tasks, that’s hard to do, so they rely on things they can actually measure.
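To make Schweitzer’s point concrete: rather than counting lines of code, a buyer can time how long each package takes to complete the same task. Here is a minimal Python sketch of that idea, in which package_a_task and package_b_task are hypothetical stand-ins for the competing products:

```python
import timeit

# Hypothetical stand-ins for the same task as performed by two
# competing packages; in practice these would call each vendor's API.
def package_a_task(data):
    return sorted(data)

def package_b_task(data):
    return sorted(list(data))

data = list(range(100_000, 0, -1))

# Measure the outcome (how fast each completes the task), not the
# input (how many lines of code it took to write).
time_a = timeit.timeit(lambda: package_a_task(data), number=20)
time_b = timeit.timeit(lambda: package_b_task(data), number=20)

print(f"Package A: {time_a:.3f}s over 20 runs")
print(f"Package B: {time_b:.3f}s over 20 runs")
```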

Similarly, when judging employee performance or interviewing job candidates, CIOs may rely on whatever objective measures are available, relevant or not: how much face time employees put in, how many hours they spent on a project. But that’s not really what CIOs should care about.

Are you saying that even when faced with a really rotten employee or a project that’s tanking, CIOs can be fooled into thinking the outcome is better than it actually is based on input like face time or the number of people working on a project?

No. If something is really bad, input bias isn’t going to work. In one experiment, we offered people two samples of iced tea. In one instance, we asked people to compare normal raspberry and lemon iced teas. We told them one was made with an expensive machine and one with an inexpensive machine. We got the same results we did in other experiments: Input bias mattered, and most participants rated the tea made with the expensive machine the better tea. Then we conducted the same experiment with the same teas, but we added lime juice and salt to them. They were really terrible. For the bad teas, people didn’t care about the expense of the equipment used to make them. They were just bad.

People tend to think more critically when they experience a low-quality outcome. It shocks them into being much more careful and calculating. When a CIO encounters a bad outcome, he will go off automatic pilot and make judgments based on quality. If you have an employee who’s just a disaster — even if there’s high input — it’s not going to help. If there’s a software product that keeps crashing, you can’t sell it based on input measures.

You say the more accountable people are for their decisions, the more likely they are to rely on irrelevant input. Please elaborate.

It comes down to the issue of justifiability. If you have to justify your decisions to someone else (tell an employee why he was passed over for a promotion, or explain to your board why it should invest in a certain system), you’re more likely to rely on input measures because they give you some justification for your decisions.

So what can CIOs themselves do to safeguard their own decisions from irrelevant input information?

People are most deliberate in decision making when the environment around them is quiet, when they are at rest, when they’re not under time pressure. Those are situations a CIO is rarely in. CIOs are very busy, and they are likely to be on automatic pilot most of the time. They have to be very careful to identify situations in which their decisions are really important. You can stay on automatic pilot when deciding which bowl to use for cereal or what tie to put on. When it’s something that’s important — when it’s a major decision — they need to set aside time to be deliberate and careful in order to make an unbiased decision. That really speaks to the psychology of the input bias. This is something that is hardwired in our cognition as human beings. We can disengage from it and be very deliberative in our decision making, but it’s hard. It doesn’t come naturally. We’re all much weaker mentally than we think.

Might CIOs and IT managers inadvertently encourage the manipulation of the input information?

Yes. CIOs need to take a look at how their incentive systems are set up. Law firms reward people for the number of hours they bill; the incentive isn’t to be the fastest. IBM used to reward people based on the number of lines of code they produced when, in fact, programmers might be more efficient by producing fewer lines of code. Instead, you need to create an evaluation system that measures what you actually want and reward employees based on that. In the case of programmers, you would want to look at objective measures of their work (how fast the code performs a particular computation, how often it crashes, how much memory it takes) and assess those criteria. If objective measures are too difficult to construct, you might have a manager familiar with the program judge the outcome. Or, if the programmers’ project is complete, you could go “downstream” and look at customer satisfaction or adoption.
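As a rough illustration of what such an outcome-based evaluation might look like, the sketch below uses only the Python standard library to record the three measures Schweitzer names: speed, crash frequency and memory use. The candidate function and its workload are hypothetical placeholders:

```python
import time
import tracemalloc

def evaluate(func, inputs):
    """Score a piece of code on outcomes (speed, crashes, memory),
    not inputs (hours worked, lines written)."""
    crashes = 0
    total_time = 0.0
    peak_memory = 0
    for item in inputs:
        tracemalloc.start()
        start = time.perf_counter()
        try:
            func(item)
        except Exception:
            crashes += 1                     # how often it crashes
        total_time += time.perf_counter() - start  # how fast it runs
        _, peak = tracemalloc.get_traced_memory()
        peak_memory = max(peak_memory, peak)       # how much memory it takes
        tracemalloc.stop()
    return {"seconds": total_time, "crashes": crashes, "peak_bytes": peak_memory}

# Hypothetical candidate function and workload, for illustration only.
def candidate(n):
    return sum(range(n))

print(evaluate(candidate, [10_000, 100_000, 1_000_000]))
```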

What can CIOs do to correct the input bias?

Suppose someone is trying to sell you software. You have software package A and software package B. Both companies make their pitches. The CIO can then assemble a panel of employees to test out both software packages. These individuals will not know as much about what the inputs are: how many lines of code there are, how long the software engineers worked on it, how much money it cost to produce. That irrelevant information is stripped out. The CIO himself may have input information, and some of it may be relevant. But the review process helps to mitigate the overreliance on input.

Or suppose you have someone coming up for a promotion. You’d like to have a diverse panel of people who can judge the quality of his work. These people might be more likely to evaluate the person’s accomplishments without information such as how much time he spends in the office or how hard he appears to work. Again, in some cases, you may want someone who works very hard, even if they’re not having any success yet. CIOs just need to make an effort so that they don’t put an inordinate amount of weight on input measures that may not matter.
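One way to operationalize that stripping step is to keep input measures out of the materials the panel sees in the first place. A small illustrative sketch, with hypothetical field names:

```python
# Fields that describe inputs rather than outcomes; names are hypothetical.
INPUT_FIELDS = {"lines_of_code", "developer_hours", "development_cost"}

def blind_for_panel(product: dict) -> dict:
    """Return a copy of a product record with input measures removed,
    so the panel judges the outcome on its own merits."""
    return {k: v for k, v in product.items() if k not in INPUT_FIELDS}

package_a = {
    "name": "Package A",
    "lines_of_code": 2_000_000,   # input: stripped before review
    "developer_hours": 50_000,    # input: stripped before review
    "task_seconds": 4.2,          # outcome: kept
    "crashes_per_week": 0.3,      # outcome: kept
}

print(blind_for_panel(package_a))
# {'name': 'Package A', 'task_seconds': 4.2, 'crashes_per_week': 0.3}
```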

When it comes to justifying decisions, what can CIOs rely on rather than input measures?

When they implement these kinds of processes, they can truly focus on the merits of a piece of software. The advantage of using a panel of employees to review different products is that once the employee panel makes a unanimous recommendation for product B, CIOs can use that conclusion as their justification. What’s important is often the speed at which your algorithm solves a problem, or how the end user rates the product in terms of its interface. Those are the measures CIOs want to look at.