Programmer spouses get used to it. We call the developer to dinner. “I’ll be downstairs in a minute,” the programmer says. “I’m just fixing this one last bug.” Dinner turns cold. So does the spouse.
Software development is far more than the process of designing great applications that give their users joy. It’s more than writing elegant code that, to the trained eye, is a work of art. A huge amount of software development time is spent in testing for defects, and fixing the problems once they’re found. Ideally, the bugs are found before the application gets out the door, but — as we all know too well — many are found only after the software is deployed. The users report the problems…
…and then they wait.
Or do they?
According to a survey commissioned by BMC and conducted by the consulting division of Forrester Research, the average time to resolve an application problem is 6.9 days for enterprise developers and 6.7 days for software vendors. Ten percent of those problems take 10 days to solve, says the report. Developers spend just over an hour documenting each problem; given that hour back, they’d use it to create enhancements to the application they’re working on.
BMC wants you to conclude, somewhat unsurprisingly, that BMC Identify’s AppSight problem resolution software can help developers see where problems lie (and thus spend the rest of their time fixing the problem, and annoying their spouses by not showing up for dinner). It’s a reasonable conclusion (the product plug, I mean, not the dinner dispute), and I applaud any tool that helps developers find bugs, particularly if they’re in an application I need to use. (BMC says it’ll soon have a white paper available on the subject, in case you want their specifics.)
But there may be more to it than that, particularly when the results are contrasted with those of another community: open-source developers, who report far, far less time between a bug report and the community’s response.
In its survey, Forrester conducted phone interviews with 150 managers, directors, and vice presidents in charge of application development teams and organizations. Most respondents (100 to 120 of them) came from enterprises with at least 1,000 employees and at least 50 developers.
As it happens, Evans Data Corporation (EDC) just finished its twice-yearly report, based on a survey of several hundred open-source and Linux developers (with some managers, but primarily folks who code). The EDC numbers are somewhat different: for 36 percent of open-source developers, the average time between discovery and solution of a serious bug is under eight hours. Hours. Not days. Not a week.
In the BMC/Forrester report, application development managers said 39 percent of bug fixes take under a day to address; on the surface, maybe that’s not too different. However, 57 percent of open-source developers say that bug fixes typically take no more than two days.
Interesting contrast, huh?
I believe both reports, even though they appear to contradict one another slightly. I’m sure it does take longer for enterprise developers to fix application software than it takes an open-source project to address an issue. And the conclusion (however much my open-source-friendly soul might like to say so) isn’t “Obviously, open source is better”; rather, it’s a function of the infrastructure: people and processes work differently depending on the environment. I suspect the two communities could learn from each other here, though I haven’t yet figured out what the lessons ought to be. (I’m also sure that you’re scribbling off an e-mail message to your application development managers to ask how quickly your shop’s bugs are fixed.)
One difference between the results may be who answered the survey. The developers on the ground, writing the code, are likely to have different opinions than their managers do. For example, managers have a different idea of proper process (or, depending on your viewpoint, CYA requirements); in the Forrester/BMC study, 45 percent of respondents required more than an hour to create a problem report. In the open-source community, a bug report could be a casual conversation in an IRC channel with the developer responsible for that code module.
Another issue is the granularity of what’s considered a problem. Developers, by nature, think tactically and with great specificity (“the calendar doesn’t display right in IE; the fields are overlaid instead of in an ordered list”) compared to a manager’s strategic viewpoint (“transaction speed slows down when the database has over 100,000 records”) or the users’ vague view when passed through a manager’s translation mechanism (“Accounting says it doesn’t work right; please fix”).
Plus, one must consider the number of people and departments involved. In a typical enterprise environment, a developer who’s trying to reproduce a bug might have to interview testers, users, vendors, and other developers. She might have to, oh dear, attend meetings at which bug fixes are prioritized and sometimes postponed until “the next round.” Given that the EDC survey asked the developers themselves, the bug reporting and resolution process may be more streamlined. An open-source programmer who finds a limitation (such as the calendar not working right in some browsers) is often the person who fixes it; there’s a lot to be said for personal power.
I find the whole thing absolutely fascinating. But then, I’m a fool for any kind of analysis of what developers are doing (when they aren’t playing foosball or annoying spouses who slaved over a hot stove, that is). Another example, just in passing (and because it showed up in my inbox at the right moment) is the results shared by Sixth Sense Analytics, which aggregated and de-identified data about its user community. Says their site, “This data allows you to understand patterns and trends and serves as a representative basis for benchmarking your performance goals.” Neat-o.
Do you see any other reasons for the variation in experience? What other lessons might developers and managers learn from these reports?