Whether enterprise software is purchased or developed in-house, project success can be elusive. So, what do we mean by “success”? Most people would say success has three parts:
- The project must be completed on time
- The project must be completed within budget
- The customer must be satisfied
I would argue that while these are important, they do not in and of themselves define a successful software project. The ultimate measure of success is whether the software delivers the ROI that was used to justify the project in the first place. Of course, this means the ROI must be defined up front, before the project starts. That is often not done, which puts the project at risk right from the get-go.
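To make the point concrete, here is a minimal sketch of the arithmetic, using entirely hypothetical figures (the function name and all numbers are my own illustration, not taken from any of the projects discussed). ROI is simply net benefit as a fraction of cost, and a project "succeeds" in this sense only if the realized number meets or beats the projected one:

```python
# Hypothetical illustration only; real projects would use audited
# cost figures and carefully estimated business benefits.
def roi(total_benefit: float, total_cost: float) -> float:
    """Return on investment: net benefit as a fraction of total cost."""
    return (total_benefit - total_cost) / total_cost

# Projected case used to justify the project up front:
projected = roi(total_benefit=12_000_000, total_cost=8_000_000)   # 0.5, i.e. 50%

# Realized case after a troubled implementation (cost overrun,
# benefits that never fully materialized):
realized = roi(total_benefit=7_000_000, total_cost=10_000_000)    # -0.3, i.e. -30%

print(f"Projected ROI: {projected:.0%}, realized ROI: {realized:.0%}")
```

A project like the second case can still hit every contractual milestone on time; the milestones measure delivery, not the benefit side of this calculation.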
Many companies and government organizations use consultants to develop or implement enterprise software. Far too many of these projects end up as failures, even when big-name consulting companies undertake the work. Part of the problem is that the customer pays the consultant by milestones achieved rather than by outcomes.
For example, consider the case where Avantor Performance Materials sued IBM over a failed SAP implementation. In a Computerworld article, Chris Kanaracus quoted IBM as stating that it had met its contractual obligations and delivered a solution that Avantor continued to use in its operations. This was clearly not so; even the IBM contractors reportedly called the project the worst SAP implementation they had ever seen. According to an article in The Morning Call, the suit was settled for an undisclosed amount the following year. The disaster cost Avantor tens of millions of dollars and damaged its reputation with business partners and customers.
When a customer agrees to pay the consulting company by milestone instead of by outcome, it runs the risk of being liable for payments even when the software does not work as expected. For example, a billing milestone may be the completion of specific system tests, but passing those tests does not necessarily achieve the desired business outcomes.
Another example is summed up in Michael Krigsman's article "An IT failure unicorn: Endless 19-year project in Massachusetts," about software Deloitte was developing for the Massachusetts Court System. Deloitte claimed success, stating, "We fully believe that the system is a success." Deloitte may well believe it, but what Deloitte believes is irrelevant; what matters is what the customer believes, and here the customer did not get the desired outcome.
In both cases, payment was tied to tasks completed rather than to outcomes. It also seems that the customers simply didn’t do their homework on requirements. In the MassCourts project, the contract was very one-sided, which strongly suggests that the court system was in over its head from a contractual perspective.
It’s the oldest trick in the book for implementation consultants to focus on achieving milestones so they can get paid. The customer, however, is interested in the outcome: the benefits that flow from using the system, not whether the consultants have met their milestones.
In 1999, David Dunning and Justin Kruger of Cornell University described a cognitive bias now known as the Dunning–Kruger effect: relatively unskilled people believe their skills in an area are much greater than they actually are, while highly skilled individuals assume that tasks easy for them are also easy for others. The two examples above (and many others – see Software Failures) strongly suggest that the people purchasing enterprise software seriously overestimate their ability to make the right decisions.
To sum up, software success means meeting the ROI that was used to justify the project in the first place. To achieve that success, customers must realistically assess their software selection abilities and use external help where appropriate. The real difficulty is that many of the people who make these decisions have been in IT for years and “know their stuff.” The Dunning–Kruger effect suggests that their technical experience does not readily translate into successful enterprise software projects. In all cases it is caveat emptor: let the buyer beware.
This article is published as part of the IDG Contributor Network.