Information, according to the mathematical theory that bears its name, reduces uncertainty. If, for example, I tell you I tossed a coin twice, you’ll know there were four equally probable outcomes. But if I then tell you the first toss came up tails, the number of possible outcomes is cut in half: tails/heads or tails/tails.

In this way, the information I have given you has cut your uncertainty in half. Everything we do in IT starts here, with the definition of a “bit.”

And yet when it comes to reading about our industry, the content too often fails to reduce our uncertainty about a subject in any useful way.

Why do I say that, you ask? One reason is that surveys dominate research into IT practices, and their results generally follow the well-worn template: X percent of Y does, or is planning to do, Z.

Surveys, that is, only reduce our uncertainty about how many people or organizations are doing something we care about (or are supposed to care about). And even that is clouded by our lack of certainty as to how truthful the respondents are.

You can’t trust the answers

Let’s take a random example, in which a CIO’s survey response indicates they’re planning to rationalize their applications portfolio. That doesn’t mean they’ll get the budget to actually rationalize it. Often their “yes” answer to a question is wistful yearning: something they’d like to do, if only they could.

Or, because they’re being surveyed by a prestigious analyst firm, they don’t want to admit they have no idea what the question means. Or, if they do understand it, they’re embarrassed to admit that even though the analysts tell them they’ll be out of business if they don’t follow this latest industry trend, following it just isn’t in the cards this year.

For the most part, survey value comes down to this: You think your company should be doing something.
Someone’s survey associates a big bar with that subject. A big bar looks important. But really, using a survey to justify a course of action is little more than playing follow-the-leader.

Whom you measure matters more than what they say

Surveys also fail to reduce our uncertainty when they aren’t accompanied by an account of who responded to them: not only which companies or types of company, but also the specific job title or titles. After all, ask a CIO what they plan to spend on next year, compare it to what information technology the CEO or chief marketing officer plans to pay for, and it’s far from guaranteed their responses will sync up.

Error bars offer little more than false precision

Yes, survey perpetrators are getting better about letting us know their survey’s sample size. But does anyone have the time and energy to use this information to compute error bars?

Even if you did, error bars have an interesting property of their own, speaking of uncertainty and the reduction thereof: They reduce our uncertainty about how certain the survey results are.

Error bars are a useful remedy for the sin of false precision so many surveyors indulge in. They might, that is, “inform” their audience that 53.7% of respondents say they’re doing something or other.
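For what it’s worth, computing the error bar for a figure like that is a one-liner, at least under the usual simplifying assumptions (a simple random sample and the normal approximation). The 53.7% comes from the example above; the sample size of 400 and the 95% z-score are my own illustrative numbers:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a sample proportion p with sample size n.

    Uses the normal approximation; z=1.96 corresponds to a 95%
    confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical survey: 53.7% of 400 respondents say yes.
moe = margin_of_error(0.537, 400)
print(f"53.7% +/- {moe * 100:.1f} points")  # prints "53.7% +/- 4.9 points"
```

At n = 400, that headline 53.7% is really “somewhere around 49% to 59%,” which is rather less precise than the decimal point implies.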
This is a bit like the officials at a football game unpiling the stack of players who are all trying to shove the football in a favorable direction, placing the ball in what seems to be a fair spot, then bringing out the chains and measuring, with micrometer precision, whether the team on offense earned a first down.

Those who survey skew results

Which brings me to one final issue: Here in the world of IT, many of the most prominent firms that conduct surveys and report on the results also pride themselves on being industry thought-leaders, oblivious to the logical circularity of their branding.

You might think this carping is less than fair to the community of researchers into IT management practices. After all, just getting a decent value of n for their survey is hard enough.

And it is. But.

The point of a typical survey is to inform its audience that something or other, whether it’s a specific product, class of technology, management practice, workforce preference, or what-have-you, is important enough to pay attention to.

Surveys might accomplish this, if your definition of “important” is alotta, as in “Alotta folks say so.”

But the history of the world is filled with examples of the majority opinion being wrong. Here in IT-land, many of CIO.com’s readers will remember, with varying degrees of fondness, IBM’s ill-fated OS/2 operating system, whose success was, according to the surveys of the era, assured.

A possible antidote

I’m at risk of violating the principle that nobody should identify a problem without proposing a solution. So if surveys aren’t as useful as they purport to be for helping decide what new technologies are likely to matter, what IT services the enterprise should invest in, or what IT management practices should change and how, the question is: What would be more helpful?

My best answer isn’t particularly empirical.
It follows this template:

No, it isn’t an evidence-based approach. But then, alotta surveys are about the future. And there’s a funny thing about the future: there just aren’t any facts about it to be had.

Yet.