Imagine entering a courtroom where the trial consists of a prosecutor presenting PowerPoint slides. In 20 compelling charts, he demonstrates why the defendant is guilty. The judge challenges some facts from the presentation, but the prosecutor has a good answer to every objection. So the judge decides, and the accused is sentenced. That wouldn’t be due process, would it?
If this process is shocking in a courtroom, why is it acceptable when selecting enterprise software? Substitute a slick and aggressive sales team for the prosecutor above, and you have the same situation. Only yesterday I learned of a public agency in California that is having a vendor do the gap analysis for the software it is considering purchasing. What answer is that agency expecting from its gap analysis? This is a deeply flawed process riddled with conflicts of interest.
Too often, organizations are lulled into a false sense of security by the sheer volume of analysis performed during a software selection. As the example above shows, however, it is the process that should guide the analysis. A sound process also reduces the effects of bias.
Everybody has cognitive biases. There is no way to avoid them, and they can be a primary cause of selecting “less than optimal” enterprise software. Common biases encountered when evaluating and selecting software include confirmation bias (favoring evidence that supports an early front-runner), anchoring (over-weighting the first product seen), and the bandwagon effect (preferring whatever peer organizations have bought). See Wikipedia for a comprehensive list of cognitive biases.
Unfortunately, knowing you have biases does not free you from their effects. The best way to avoid biased decisions is to use a well-designed and tested decision-making process, along with the insights and experience from multiple people.
What is a good process?
A good software selection process is deterministic, because determinism reduces or eliminates bias. This means the software selection is driven by the data collected, rather than by the opinions of those involved in the project. The selection should also be transparent and auditable, which is particularly important for any community-funded purchase. Finally, the process should have been tested with real-world software evaluations and found to work. A good software selection process contains three essential phases:
1) Requirements development
The core of requirements development is capturing all requirements and then rating those requirements for importance to the organization. When writing software you can never be sure you have all the requirements, but when purchasing off-the-shelf software you can. The reason is that the total list of potential requirements is defined by the features of the software packages under consideration. There are three main sources of requirements:
- Asking users.
- Requirements libraries (which can be purchased), other evaluations, RFPs, and so on.
- Reverse engineering requirements from the features of potential products.
Once a comprehensive list of requirements has been gathered, they must be rated for importance to the organization. This is where the collective insight and experience of the team is brought to bear on the problem. That creates your Requirements Profile, which is unique to your particular situation and needs.
2) Software evaluation
The requirements profile is used to generate an RFP, and vendors are invited to respond. Once vendors return their RFP responses, they must be captured in a scoring system that measures how well their product meets your requirements. If scores are normalized so a product that fully meets every requirement scores 100%, you can easily compare products.
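The normalization described above can be sketched as a simple weighted-sum model. Everything below is illustrative and assumed, not from the article: the requirement names, importance weights, and compliance levels are invented to show how a full-compliance score comes out at 100%.

```python
# Hypothetical requirements profile: requirement -> importance weight.
weights = {"audit trail": 5, "batch import": 3, "mobile access": 1}

# Hypothetical vendor RFP responses: requirement -> degree met (0.0 to 1.0).
responses = {
    "Product A": {"audit trail": 1.0, "batch import": 0.5, "mobile access": 1.0},
    "Product B": {"audit trail": 0.5, "batch import": 1.0, "mobile access": 0.0},
}

def fit_score(weights, met):
    """Weighted score, normalized so full compliance scores 100%."""
    max_score = sum(weights.values())
    earned = sum(w * met.get(req, 0.0) for req, w in weights.items())
    return 100.0 * earned / max_score

for product, met in responses.items():
    print(f"{product}: {fit_score(weights, met):.0f}% fit")
```

Because every product is scored against the same profile and the same 100% ceiling, the results are directly comparable, which is what makes the ranking in the next phase possible.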
When selecting software, organizations can put too much weight on analyst opinions. The problem is that analysts must write to a general audience. On the other hand, you must select software based on how well it meets your particular needs. The key concept here is that products are evaluated and ranked by how well they meet your particular requirements, which measures product fit for your situation.
3) Software selection
Once software products are ranked against your needs, you can select the top one to three products for a demo. Bear in mind that the function of the demo is to confirm your analysis, not to make it.
After the demos, you make a provisional selection, which is usually the top-ranked software. After this, references must be checked, including references not supplied by the vendor.
When responding to RFPs, the more aggressive vendors might be far too optimistic in rating how well their product meets your requirements. To counter this, you need to audit the RFP of the provisionally selected software product. If the selected product passes the reference check and the audit, that confirms the choice.
The above three-phase process is deterministic because the scores of software products are calculated by how well those products meet the requirements profile. Because the selection is data driven, the biases of the participants are eliminated.
It is auditable because anybody can examine how the scores were calculated. Finally, this process has been tested with multiple evaluations and proven to deliver the best-fit software.
Part of the information in this article comes from an interview by Bill Huyett and Tim Koller of McKinsey & Company: “How CFOs can keep strategic decisions on track.”