by Esther Schindler

Towards sanity in software project estimation: a chat with Steve McConnell

Dec 21, 2006 | 10 mins

Steve McConnell is CEO and chief software engineer at Construx Software, but you’re more likely to recognize his name as the author of Code Complete and several other books that made him a software development rock star. He has a new book out, Software Estimation: Demystifying the Black Art. Recently, we found a few minutes to chat over IM about topics ranging from the reasons behind project failures to the best way to get the right information from software developers. “Estimation has been a strong interest of mine for many years,” McConnell says, “and I think there’s a tremendous opportunity for the software industry to improve in this area.”

Steve McConnell: The Standish Group published its first CHAOS report in 1994, which is the report that’s most often cited. The Standish Group has updated its report every two years since then. In round numbers, the Standish Group’s data says that about one-quarter of all projects fail outright (are cancelled before completion), about one-quarter succeed fully (deliver most intended functionality on time and on budget), and about half are “challenged”—overrunning schedule, budget, or both. The results have fluctuated somewhat year to year. Some people see a trend, but I see only fluctuation.

Does this constitute failure? The quarter that fail outright seem to fit that bill. The quarter that succeed don’t. So what about the half that are challenged? The root causes of why half the projects are late are complicated. They are over budget and schedule partly because they are poorly managed, partly because they experience a lot of untracked requirements growth, and partly because the original “estimates” weren’t what I would call a valid estimate in the first place. Rather, they were business targets that weren’t grounded in any kind of meaningful technical estimate.

So at the end of the day, I think some projects genuinely do overrun, but there is also a significant percentage of projects that were never really estimated in the first place. You can say those projects failed to meet their business targets, but it wouldn’t be accurate to say they failed to meet their estimates. How the half of projects in that category break down into “missed targets” vs. “missed estimates” is hard to say.

Esther Schindler: To miss a target you have to know where you’re aiming. And to reach one you need to know when you’ve hit it. If you aim at a target 200 feet away and miss, that’s one thing. If you aim at the moon, that’s a different matter entirely. Do you think there’s a correlation between the distance to the target and a manager’s ability to plan well enough to reach it? That is, do you think a moon launch is much less likely to succeed than a jump across the pond?

McConnell: Yes, but the relationship is much more complicated than most people probably think.

The Standish Group has observed, but not really emphasized, that success rates went up as the industry moved toward smaller projects in the late 1990s and very early 2000s. They also point to a “trend” in project success rates. I think what they’ve missed is that it’s a lot easier to succeed on small projects than on large ones, and we were doing a lot of small projects in the early days of the Internet and for Y2K remediation.

Capers Jones, another estimation guru, has published data showing a dramatically higher success rate for smaller projects. Larry Putnam has published data showing that as the industry has moved back toward larger projects, as Internet applications mature, project success rates are going down again.

So I don’t see any trend in the data. I see fluctuation from larger to smaller projects and back to larger projects again. Having said all that, we sometimes see organizations estimate better on larger projects than on smaller ones because they take those projects more seriously.

Schindler: All that sounds as though it supports the various development methodologies for “small iterative projects.” Has anyone compared projects using those methodologies to, say, waterfall?

McConnell: Yes, that research is part of what I was referring to when I commented that small projects succeed more often than large projects. So some people might naturally think, “Let’s just do small projects.” In fact, I worked with one Canadian company that had defined as its business model that they would only do projects that could be completed by teams of 5 people or fewer and would run 6 months or less.

The limitations to this strategy should be blindingly obvious, but we still see various industry pundits advising people to do nothing but short iterations. The problem is that businesses need more long-range predictability than can be provided by highly iterative 1-3 month development cycles.

So there’s a justification for adopting more linear development approaches. I say “linear” rather than “waterfall” because few organizations are still doing true waterfall development. The waterfall model was subject to all kinds of problems, and many people seem to paint all linear development approaches as “waterfall.” In fact, for many businesses, the most appropriate development approaches are pretty linear, but that doesn’t mean they’re subject to most of the problems associated with pure waterfall development, at least not if they know what they’re doing!

Schindler: Given the 25 percent or so of projects that don’t complete at all, what percentage do you think would be healthy? That is, I assume it isn’t zero.

McConnell: 25% isn’t necessarily a bad number. What’s bad about it is that the average project is something like 100% late and 100% over budget at the time it’s shut down. With better development approaches, a lot of those projects would get shut down when they’ve used 20% of their budgets rather than 200% of their budgets.

We as an industry need to do a much better job of evaluating projects’ viability earlier rather than later. It’s staggering when you think that roughly a quarter of the development dollars in the U.S. are wasted on projects that ultimately fail. If an organization could reduce that number to something like 5 percent, it would free up a tremendous amount of resources that could be refocused on projects that will ultimately succeed.

Schindler: Where do you see the poor viability decisions being made? Or, more to the point, by whom? The users, who say “this is what I want”? The developers who write the code? The managers, at whatever level? What do you see as the most common missing link?

McConnell: The viability decision needs to be a combination of a business case analysis and a technical feasibility analysis. The most common reasons that projects fail are not very technical. It’s mostly due to a mismatch between the business case and the project’s schedule and budget requirements. I’ve found that most projects are commissioned with what I think of as “no-brainer business cases.”

In other words, “If we could get this amazing amount of functionality for this small price in this short timeframe, we would have a great business case.” The technical leadership receives that business case and is usually asked to come up with an estimate for the project. Typically, the initial project estimate is at least a factor of two higher than the business case that was used to justify the project.

That isn’t bad information, if it’s interpreted correctly. The business could decide at that point (very early in the project) that there isn’t a business case for a project that will cost twice as much as originally expected. But what usually happens is that a dialog ensues, there’s little or no adjustment made to the business case, and the business pretends that it can do the project for much less than it can.

The consequences at that point are inevitable: the project is underscoped, understaffed, and undermanaged. It runs longer than expected and takes more budget than expected. The irony is that I’m convinced there is a workable business case for most of those projects, but it requires the business to engage more seriously with initial estimates that are so far out of alignment with the initial business case.

Schindler: Twenty-five years ago, I was taught a reasonable methodology for estimating the length of a project: ask the developer how long it’ll take, then increment the time unit by one and double the number. Thus “1 day” becomes “2 weeks” and “3 hours” becomes “6 days.” It’s been frighteningly accurate.

What you describe sounds like the various riffs on “This is the due date because Marketing says we have to ship by then. So get it done!”
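Schindler’s tongue-in-cheek rule amounts to a tiny, mechanical transformation, which can be sketched in a few lines of Python. The unit ladder and the function name here are illustrative assumptions for demonstration, not anything from the interview:

```python
# Sketch of the "increment the unit, double the number" padding heuristic.
# The unit ladder below is an assumption chosen to match the examples given.
UNITS = ["hours", "days", "weeks", "months", "years"]

def pad_estimate(number: int, unit: str) -> str:
    """Double the number and promote the time unit one step up the ladder."""
    next_unit = UNITS[min(UNITS.index(unit) + 1, len(UNITS) - 1)]
    return f"{number * 2} {next_unit}"

print(pad_estimate(1, "days"))   # "2 weeks"
print(pad_estimate(3, "hours"))  # "6 days"
```

The examples from the interview fall out directly: a developer’s “1 day” becomes “2 weeks,” and “3 hours” becomes “6 days.”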

McConnell: Sometimes it’s marketing, sometimes it’s a customer, sometimes it’s upper management. One factor is that marketers, sales staff, and upper management all tend to be better negotiators than technical staff, and so when marketing (or whoever) says “get it done,” technical staff ends up losing that negotiation.

But it isn’t really the technical staff that loses; it’s the business that loses, because it sets up a situation in which it pretends for months or years that it can do something it can’t. And that is ultimately at least as harmful to marketing, sales, and upper management as it is to the technical staff.

Schindler: Oh! What a great point, and very true.

Do you think that CIOs and app dev managers are just so used to “negotiating” that they do it automatically instead of listening to the tech staff? Or is it something else? In any case, what can they do to ensure that they both get accurate estimates (not “what management wants to hear, and we’ll slip anyway”) and learn how to listen to an out-of-expectation answer?

McConnell: One executive told me that he’d experienced very good results from putting technical staff through assertiveness training.

Executives and managers tend by nature to be more assertive than rank-and-file technical staff, which is not a problem. The problem is that they assume, incorrectly, that technical staff will be assertive with them if they need to be, and that isn’t the case. Technical staff often feel that they’re being very assertive, but an objective observer would probably say the technical people cave in far too easily.

Business executives with non-technical backgrounds don’t have any objective ability to judge the analytical validity of an estimate, so they probe the person they’re talking with to see where they hit that person’s point of discomfort, and they make an assessment based on that. Technical people who don’t push back hard enough, soon enough, are implicitly sending a message that they can do more than they really can. When I talk with executives, I emphasize that they need to account for the fact that technical people are intimidated by them.

When I talk to technical people, I emphasize that they can usually be far more assertive than they are without worrying that it will look bad. Most executives assume the people around them are as assertive as they are, but that isn’t true; there’s a reason they’re executives!

Schindler: I don’t think I’ve ever seen someone make that point before. And it’s utterly accurate, especially since a lot of developers don’t trust their managers and thus withhold a lot of important communication.

McConnell: It’s a cliché, but a lot of success in business really does seem to boil down to effective communication. There are significant communication style differences between the typical technical person, typical marketing person, typical sales person, and typical executive, and it helps to be aware of those differences.