How do you measure the progress of a software project, or of the programmers on that project? How would you prefer that it be done?

Everyone wants a way to dispassionately evaluate the productivity of a software development team. The boss wants to know at both a high level (are these projects on track? are we going over budget?) and in granular terms (which programmer creates the best-quality code fastest, and should I give her a raise before she asks for one?). Developers, too, want a way to measure their personal and team progress, because we all want to do the best job possible. The team wants to know where the problems are so they can be addressed, ideally before the boss notices them on her bright, shiny dashboard application. Plus, at a personal level, every IT worker yearns for a dispassionate metric to say, "This is how good you are, at least in comparison to your coworkers." Such a measurement at least suggests that, at employee review or promotion time, decisions might be made on merit rather than on office politics, self-promotion, or "social networking" (doesn't that sound better than sucking up? I thought so).

So far, so good. Everybody wants useful metrics. The only problem is, no one has ever developed a metric that is universally accepted as useful. The easiest one to make fun of (c'mon, let's all point and laugh) is "lines of code" (LOC), a measurement that takes into account neither the language in which the developer writes (some are wordier than others) nor the generally accepted wisdom that better code is more concise (particularly with an eye to code reuse). And since, in any endeavor, you only "get" what you measure and reward, shops that pay attention primarily to LOC (if there are any, and I sincerely hope not) encourage flabby code with no special attention to quality. Lots of other metrics have been tried, I believe, but as far as I know there's none everyone likes and accepts.
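To make the LOC complaint concrete, here's a minimal sketch (in Python, purely for illustration; the `loc` function and both snippets are my own invention, not anyone's product) of the naive metric. Two functionally identical routines score very differently, and the padded one "wins."

```python
def loc(source: str, comment_prefix: str = "#") -> int:
    """Count non-blank, non-comment lines: the naive LOC metric."""
    return sum(
        1
        for line in source.splitlines()
        if line.strip() and not line.strip().startswith(comment_prefix)
    )

# Two functionally identical snippets: one concise, one padded out.
concise = "def total(xs):\n    return sum(xs)\n"
verbose = (
    "def total(xs):\n"
    "    result = 0\n"
    "    for x in xs:\n"
    "        result = result + x\n"
    "    return result\n"
)

print(loc(concise))  # 2
print(loc(verbose))  # 5
```

Same behavior, but a shop that rewards raw LOC pays two and a half times as much for the verbose version.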
In any programming team, everyone seems to know who's the "smart" member, who can crank out code that's ugly but works, and who would never be considered a brilliant developer but is team glue and an idea catalyst. All those roles are necessary. Are there any metrics that capture all that? Probably not, though as with all such things it's fine for us to approach perfection one attempt at a time.

But the stats are still necessary, at least at a managerial level, so that a CIO can say authoritatively, "We're on schedule to ship by the end of the year," or recognize that the performance-testing team needs more help, or whatever. So there's a whole branch of Application Lifecycle Management that aims, in various ways, to help CIOs and AppDev managers with the essential skills of cat-herding. That's a good thing, mind you, especially when I am the user who is anxiously awaiting the release of a software project (like, say, updates to the Drupal code for the Advice & Opinion site... just to pick something out of the blue, you understand).

The thing is... I wonder whether the managers and developers are on the same page. Are the metrics that help CIOs make strategic decisions the same ones that help developers keep their projects on track? Which statistics are useful... for either community? Do the numbers and reports that help one community make decisions get in the way of the other? This has been on my mind since I met with Borland last week to discuss the results of a survey the company commissioned from Forrester regarding developer metrics, called "Changing the Cost/Benefit Equation for Application Development Metrics."
Through 20 interviews with development managers and executives in charge of application development organizations at $1 billion-plus companies, Forrester Consulting concluded that "Two factors—the cost and complexity of metrics collection, and the reliance on superficial metrics—conspire to deter application development organizations from attempting to improve their metrics programs." From the company's press release:

"The number one obstacle to gathering meaningful metrics is the manual effort involved. Nearly half of the companies Forrester interviewed cited this as a challenge, and several of the companies reported that they spend nearly a third of their time on metrics collection. To further complicate the situation, development organizations struggle with the technical complexities involved in the trending and aggregation of metrics—where the bulk of the value of measurement is found. Eight of the 20 participants were unable to trend or aggregate the metrics they collect."

All this is tied, unsurprisingly, to Borland's plans to introduce new products, solutions, and services over the next 18 months, intended to bring business-intelligence capabilities to the software delivery organization. When I spoke with Marc Brown, VP of Product Marketing, he broke down "executive" metrics into three categories:

Alignment between business and IT: delivering the wrong stuff, or not working on the apps with the best ROI for the business.

In-flight metrics: team members gather data in real time from a huge, heterogeneous collection of tools, collecting a lot of it in individual practitioner areas ("data islands that are hard to collect and collate," said Brown), and turn it into metrics for how teams are performing, to determine whether they'll slip. Generally considered the biggest pain by development teams, he says (and I have no reason to quibble).

Post-mortem metrics: collecting data about delivering on time and on budget.
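For readers wondering what "trending and aggregation" actually means in practice, here's a toy sketch (my own hypothetical example, not Borland's or Forrester's method): raw per-developer activity records from a "data island" get rolled up into minutes per developer per week, the kind of trend line a manager's dashboard would plot.

```python
from collections import defaultdict
from datetime import date

# Hypothetical raw "data island" records: (developer, day, activity, minutes).
events = [
    ("alice", date(2007, 12, 3), "editing", 120),
    ("alice", date(2007, 12, 3), "unit-testing", 45),
    ("bob",   date(2007, 12, 4), "editing", 90),
    ("alice", date(2007, 12, 10), "editing", 60),
]

def weekly_trend(records):
    """Aggregate raw activity records into minutes per developer per ISO week."""
    trend = defaultdict(int)
    for dev, day, _activity, minutes in records:
        week = day.isocalendar()[1]  # ISO week number of that day
        trend[(dev, week)] += minutes
    return dict(trend)

print(weekly_trend(events))
# {('alice', 49): 165, ('bob', 49): 90, ('alice', 50): 60}
```

The aggregation itself is trivial; the pain Forrester's interviewees describe is getting those raw records out of many incompatible tools in the first place.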
The survey showed that these top companies feel they're doing some of this, but they want to know whether they can prove or measure business value and customer satisfaction. Useful stats would include application adoption rates, or the revenue generated or costs reduced by the project.

All of which, of course, are great points, and Borland is surely hoping that I'll mention, somewhere in here, that the company's Open ALM platform should be available in mid-2008, at least as, in Brown's words, "a stepping stone in getting their fingers around this beast of measurement." And yeah, this definitely sounds like A Good Thing.

But, of course, Borland isn't the only company thinking about such things, especially not from the enterprise/managerial view. New features in Microsoft's Visual Studio 2008 Team System enhance managers' view of software projects, as I wrote about here. And 6th Sense Analytics has been around for a couple of years: they're "a hosted solution that automates the collection of software development metrics that bring software development into focus." One aspect of their tool that I think developers will appreciate is that it works in the background, unobtrusively collecting data (such as the time spent in the Eclipse editor, testing, or creating unit tests). Then, "This data is securely transmitted to our hosted server where it's mined and rolled up as powerful and actionable analytics for managing software projects."

Those are keyed to managers, probably because they're sold to managers. There are plenty of other systems to collect development data, some of which are incorporated in other tools, such as the reports one can get out of a version control system. Which do developers feel help them get their work done, and also help management make accurate decisions? This seems like a good place to put the two communities in one virtual room and let them compare notes.