Advertisements get under the skin of professor and human behavior expert Maurice Schweitzer. There's the beer commercial that brags about its slow brewing process. And the billboard from a luxury car manufacturer that boasts about how its engineers haven't taken a vacation in years. "Three hundred thousand people vacationed in the south of France last year, and none of them was a Lexus engineer. Who cares? That's not very informative to me," says Schweitzer. "And I'm not drinking a beer because of how long it was in a vat. I drink it because of how it tastes."

Schweitzer, who specializes in behavioral decision research as assistant professor in operations and information management at the Wharton School at the University of Pennsylvania, uses these advertisements as examples of what he calls "input bias." According to his research, people automatically associate input related to quantity (how long it takes to make a car) with output quality (how well it performs). While in many cases input information does directly correspond to outcome, in some cases it does not. Yet humans are hardwired to associate input and output automatically. And people can prey on your input bias, causing you to make poor decisions or judgments to their advantage.

It's no surprise that advertisers exploit this basic fact of human nature. But CIOs, Schweitzer says, fall victim to the same input bias. Employees, vendors and fellow business leaders all take advantage of these natural biases in manipulating IT decisions. Fortunately, as Schweitzer told CIO Senior Writer Stephanie Overby in a recent interview, there are ways to guard against making mistakes based on bias.

CIO: Can you explain what your research has revealed about input bias, that is, how information on the quantity of something is often misused to infer quality?

Maurice Schweitzer: In general, input quantities are positively related to the quality of outcome.
The more you invest in a project, the better that outcome will be. Companies that spend a lot of money on R&D typically produce the most innovative products. The more time an employee spends in the office, the more productive she is. Students who study the most do better on exams. It's a natural assumption that's usually right.

However, there are many cases where that direct relationship does not exist. For example, people assume that longer hospital stays are better and propose legislation that women who give birth should spend a certain length of time in the hospital. They figure the longer you're in the hospital, the better care you'll receive. But in fact, there are so many sick people in a hospital that it's actually not a great place to be unless you have to be there.

We live our lives mostly on automatic pilot, and we have heuristics -- decision rules or shortcuts -- to make a lot of our decisions. Assuming that input quantity directly correlates to outcome quality is one of those. These heuristics can lead us astray. There are times when we need to step back and give decisions some extra attention. That means engaging in a deliberate thought process that takes our natural biases into account.

Certainly we all use a lot of shortcuts in making decisions every day. Why is the input bias particularly dangerous?

It's a social bias that relies on information from people around you, and it is thus much more dangerous because people can manipulate you. The classic example is face time. Someone puts in a lot of time in the office; you assume they're working hard. Or he says, "I had this many people working on this project," or "This program I created has so many lines of code." He's giving you some measure of the input in a way that might skew your judgment of the outcome.

The input information they give you may be accurate.
But of particular importance to managers and business decision-makers is the fact that people can manipulate or misrepresent input to prey on this bias.

It's obvious that a vendor trying to sell you a software product might take advantage of the input bias. But are employees really that manipulative, or is it simply an ingrained part of corporate culture?

Sure, organizational culture is part of it. But people really are that manipulative. Think about the concept of people overstating expense reports. Does that really happen? Yes. And it's even more subtle than that; it's about creating impressions. I have students who come up to me and say they spent five hours in the library working on a project, or they studied for a week for my exam. Those measures shouldn't matter. I'm interested in how well you did on the test, not how many hours you spent staring at the book. But people will convey that information nonetheless. And even if you know it doesn't matter, it still has an influence on your judgment of the outcome.

What are some instances in which CIOs would likely rely on irrelevant input information?

In one of our experiments, we showed two presentations on emerging technologies to [college students]. They saw two videotapes of someone describing a new technology they knew nothing about. With each presentation, we told half of the group that the presenter had spent a long time preparing the presentation, and the other half that he had spent a shorter time preparing. After viewing the tape, they judged the quality of the presentations along several different dimensions, and on every one, they gave higher marks when we told them the presenter had put in more time. What was surprising was that even when they indicated that they knew the preparation time didn't really matter, they were still influenced by it.

People's judgments often depend on how easy it is to evaluate something.
If it's easy to measure the outcome, we may not rely on input measures. If someone is performing in the Olympics, their time is a clear measure of how they did. But in other cases, such as judging how innovative a pharmaceutical company is, it's harder to reach a decision. So you might rely on whatever objective measures are available, such as how many patents the company has. But in reality, that's not a good measure, because many patents are filed for small modifications of existing compounds, and there aren't really that many big blockbuster drugs involved.

OK, so how are CIOs influenced by input bias?

A CIO may need to judge the innovativeness of a technology company. They might rely on quantitative input information such as R&D expenditures. But that may not be the best indication of innovation.

In judging the quality of a software package, they might look at the number of lines of code in a program. But that's not really what they should care about -- it's whether the program is effective. If they could measure the speed at which a program completes a task, that would be a better measure for them. But if they are comparing packages that perform different kinds of tasks, that's hard to do, so they rely on things they can actually measure.

Similarly, when judging employee performance or interviewing job candidates, CIOs may rely on whatever objective measures are available, relevant or not: how much face time employees put in, how many hours they spent on a project. But that's not really what CIOs should care about.

Are you saying that even when faced with a really rotten employee or a project that's tanking, CIOs can be fooled into thinking the outcome is better than it actually is based on input like face time or the number of people working on a project?

No. If something is really bad, input bias isn't going to work.
In one experiment, we offered people two samples of iced tea. In one instance, we asked people to compare normal raspberry and lemon iced teas. We told them one was made with an expensive machine and one was made with an inexpensive machine. We got the same results we did with other experiments: Input bias mattered, and most participants rated the tea made with the expensive machine the better tea. We then conducted the same experiment with the same teas, but we added lime juice and salt to them. They were really terrible. For the bad teas, participants didn't care about the expense of the equipment used to make them. They were just bad.

People tend to think more critically when they experience a low-quality outcome. It shocks them into being much more careful and calculating. When a CIO encounters a bad outcome, he will go off automatic pilot and make judgments based on quality. If you have an employee who's just a disaster -- even if there's high input -- it's not going to help. If there's a software product that keeps crashing, you can't sell it based on input measures.

You say the more accountable people are for their decisions, the more likely they are to rely on irrelevant input. Please elaborate.

It comes down to the issue of justifiability. If you have to justify your decisions to someone else -- tell someone why they were passed over for a promotion, or explain to your board why it should invest in a certain system -- you're more likely to rely on input measures because they give you some justification for your decisions.

So what can CIOs themselves do to safeguard their own decisions from irrelevant input information?

People are most deliberate in decision making when the environment around them is quiet, when they are at rest, when they're not under time pressure. Those are situations a CIO is rarely in. CIOs are very busy, and they are likely to be on automatic pilot most of the time.
They have to be very careful to identify situations in which their decisions are really important. You can stay on automatic pilot when deciding which bowl to use for cereal or what tie to put on. When it's something important -- when it's a major decision -- they need to set aside time to be deliberate and careful in order to make an unbiased decision.

That really speaks to the psychology of the input bias. This is something that is hardwired in our cognition as human beings. We can disengage from it and be very deliberative in our decision making, but it's hard. It doesn't come naturally. We're all much weaker mentally than we think.

Might CIOs and IT managers inadvertently encourage the manipulation of input information?

Yes. CIOs need to take a look at how their incentive systems are set up. Law firms reward people for the number of hours they bill. The incentive isn't to be the fastest. IBM used to reward people based on the number of lines of code they produced when, in fact, programmers might actually be more efficient by producing fewer lines of code.

Instead, you need to create an evaluation system that measures what you actually want and reward employees based on that. In the case of programmers, you would want to look at objective measures of their work (how fast the code performs a particular computation, how often it crashes, how much memory it takes) and assess those criteria. If objective measures are too difficult to construct, you might have a manager familiar with the program judge the outcome. Or, if the programmers' project is complete, you could go "downstream" -- look at customer satisfaction or adoption.

What can CIOs do to correct the input bias?

Suppose someone is trying to sell you software. You have software package A and software package B. Both companies make their pitches.
The CIO can then assemble a panel of employees to test out both software packages. These individuals will not know as much about what the inputs are: how many lines of code there are, how long the software engineers worked on it, how much money it cost to produce. That irrelevant information is stripped out. The CIO himself may have input information, and some of it may be relevant. But the review process helps to mitigate the overreliance on input.

Or suppose you have someone coming up for a promotion. You'd like to have a diverse panel of people who can judge the quality of his work. These people might be more likely to evaluate the person's accomplishments without information such as how much time he spends in the office or how hard he appears to work. Again, in some cases, you may want someone who works very hard, even if they're not having any success yet. CIOs just need to make an effort so that they don't put an inordinate amount of weight on input measures that may not matter.

When it comes to justifying decisions, what can CIOs rely on rather than input measures?

When they implement these kinds of processes, they can truly focus on the merits of a piece of software. The advantage of using a panel of employees to review different products is that once the panel makes a unanimous recommendation for product B, CIOs can use that conclusion as their justification. What's important is often the speed at which your algorithm solves a problem, or how the end user rates the product in terms of its interface. Those are the measures CIOs want to look at.
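Schweitzer's advice to judge software by output -- the speed at which it solves a problem -- rather than by input such as lines of code can be made concrete with a minimal benchmark. This is an illustrative sketch, not anything from Schweitzer's research: the two sort implementations and the workload are hypothetical stand-ins for a "big" program and a "small" one performing the same task.

```python
import random
import timeit

def bubble_sort(items):
    """Many lines of code -- a long, laborious implementation."""
    items = list(items)
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

def builtin_sort(items):
    """One line of code -- far less "input," same correct output."""
    return sorted(items)

# A hypothetical workload: the same task given to both implementations.
data = [random.randint(0, 10_000) for _ in range(2_000)]

# Judge each implementation by what matters -- correctness and speed on
# the same task -- not by how many lines of code it took to write.
for fn in (bubble_sort, builtin_sort):
    assert fn(data) == sorted(data)  # both produce a correct result
    elapsed = timeit.timeit(lambda: fn(data), number=5)
    print(f"{fn.__name__}: {elapsed:.3f}s for 5 runs")
```

Run against the same data, the much longer `bubble_sort` reports a far larger elapsed time than the one-line `builtin_sort` -- the input bias inverted: more code, worse outcome.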