Most enterprise software RFPs (or RFIs or RFQs) contain hundreds or thousands of requirements. When vendors respond to these RFPs, how do you deal with so many requirements? How do you take the gamble out of selecting software? The CIO of one large organization told us that he sometimes reads RFP responses at bedtime. There has to be a better way. There is, and that is to use an RFP scoring system.
The purpose of the scoring system is to identify the software product that best fits your organization's particular needs. This means evaluating products against the requirements and distilling their ratings down into one number. We call this number the fit score and use it to rank products and, ultimately, make the selection. Although this may seem like a gross simplification, in the end only one product will be purchased. More complex scoring systems only get in the way of that final decision.
To score an RFP, write requirements as closed questions that can be answered by selecting a rating from a drop-down list. For better usability, the drop-down list should always display the rating name, not just the weight. To ensure ratings are consistent and accurate, rating descriptions must be available. When vendors respond to a requirement on the RFP, they select a rating from the drop-down list and can add supporting comments. See the Product Rating Table example below.
A previous blog post described how to rate requirements for importance. To calculate a product’s score for one requirement, multiply the requirement importance weight by the product rating weight. Sum all scores for each group of requirements, and multiply by the group weight. Finally, sum all group scores to get the overall product fit score.
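The calculation above can be sketched in a few lines of Python. The group names, importance weights, rating weights, and group weights below are hypothetical examples for illustration, not values from the article:

```python
# Each requirement is (importance_weight, product_rating_weight).
# All numbers here are made-up examples.
groups = {
    "Usability": {"group_weight": 3, "requirements": [(5, 4), (3, 2)]},
    "Security":  {"group_weight": 5, "requirements": [(5, 5), (4, 0)]},
}

def fit_score(groups):
    """Sum (importance x rating) within each group, weight each
    group sum by its group weight, then sum the group scores."""
    total = 0
    for g in groups.values():
        group_score = sum(imp * rating for imp, rating in g["requirements"])
        total += group_score * g["group_weight"]
    return total
```

With the sample data, the Usability group scores (5×4 + 3×2) × 3 = 78 and the Security group scores (5×5 + 4×0) × 5 = 125, giving an overall fit score of 203.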
Normalizing the fit score to a percentage is a way of measuring and comparing products against your specific requirements. To normalize scores, create a hypothetical reference product that fully meets every requirement and calculate its score. Divide the real score by this reference score and multiply by 100 to get the normalized fit score, expressed as a percentage. Then rank software products by fit score.
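Normalization can be sketched as follows. The maximum rating weight of 5, the group name, and the sample weights are all hypothetical assumptions for illustration:

```python
MAX_RATING = 5  # assumed top rating weight on a hypothetical 0-5 scale

def fit_score(groups):
    # Sum (importance x rating) per group, weighted by group weight.
    return sum(
        g["group_weight"] * sum(imp * r for imp, r in g["requirements"])
        for g in groups.values()
    )

def normalized_fit(groups):
    # Reference product: every requirement rated at the maximum.
    reference = {
        name: {"group_weight": g["group_weight"],
               "requirements": [(imp, MAX_RATING) for imp, _ in g["requirements"]]}
        for name, g in groups.items()
    }
    return 100 * fit_score(groups) / fit_score(reference)

# Hypothetical example: one group, two requirements.
example = {"Access Control": {"group_weight": 2,
                              "requirements": [(4, 3), (2, 5)]}}
```

Here the real score is 2 × (4×3 + 2×5) = 44 and the reference score is 2 × (4×5 + 2×5) = 60, so the normalized fit score is roughly 73 percent.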
When vendors don’t respond to all requirements in the RFP, you might want to calculate two versions of the fit score for a better perspective:
The estimated fit score includes only rated requirements. Unrated requirements are ignored.
The absolute fit score includes all requirements. Unrated requirements score zero.
As the percentage of requirements rated increases, the estimated score approaches the absolute score. If more than 90 percent of all requirements are rated, the estimated score is usually accurate. However, verify that 100 percent of the showstopper and critical requirements have been rated.
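One way to compute both versions of the score, assuming a hypothetical 0–5 rating scale and representing an unrated requirement as `None`:

```python
MAX_RATING = 5  # assumed top rating weight

def estimated_and_absolute(reqs):
    """reqs: list of (importance_weight, rating_or_None).
    Estimated score ignores unrated requirements; absolute
    score counts them as zero against the full reference."""
    rated = [(imp, r) for imp, r in reqs if r is not None]
    score = sum(imp * r for imp, r in rated)
    est_ref = sum(imp * MAX_RATING for imp, _ in rated)   # rated reqs only
    abs_ref = sum(imp * MAX_RATING for imp, _ in reqs)    # all reqs
    estimated = 100 * score / est_ref if est_ref else 0.0
    absolute = 100 * score / abs_ref if abs_ref else 0.0
    return estimated, absolute
```

With made-up data such as `[(5, 4), (3, None), (2, 5)]`, the estimated score is about 86 percent while the absolute score is 60 percent, showing how unrated requirements pull the absolute score down.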
While fit scores measure how well products meet all requirements, for a deeper look at the relative strengths and weaknesses of individual products, use a heat map. You can examine and compare the weak areas of competing products against your requirements. See the example below.
Each group (column 1) contains one or more related requirements. Req Count (column 2) is the number of requirements in each group. On the right (columns 4 – 9), the numbers in the blocks show the fit score for each product for that group of requirements. The color also indicates the match: White is a 100 percent match for your requirements. The redder the color, the weaker the product for that particular group.
Column 3, the Group Average column, shows the average score for each group of requirements across all products. A group with a relatively low average, e.g., Security Access Control at 59 percent in the example, indicates that no product comes close enough to your needs for that group. If there are too many groups like this, consider:
Adding other products to the evaluation, usually more high-end.
Adding third party products to resolve the deficiencies.
Changing the scope of the evaluation by revising those groups of requirements.
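The Group Average check described above can be sketched like this; the group names, scores, and the 60 percent threshold are hypothetical examples:

```python
def group_averages(heatmap):
    # heatmap: {group_name: {product_name: normalized_group_score}}
    return {g: sum(scores.values()) / len(scores)
            for g, scores in heatmap.items()}

def weak_groups(heatmap, threshold=60):
    # Groups where the average across all products falls below
    # the (assumed) threshold, flagging areas no product covers well.
    return [g for g, avg in group_averages(heatmap).items()
            if avg < threshold]

# Made-up heat map data for two products, A and B.
heatmap = {"Security Access Control": {"A": 55, "B": 63},
           "Reporting":               {"A": 90, "B": 88}}
```

With this sample data, Security Access Control averages 59 percent and would be flagged as a weak group, matching the scenario in the text.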
Even the best-fit software is not perfect, and one of the major benefits of this kind of analysis is that it sets realistic expectations. You know how well the software will work in your environment before making the purchase. There are no implementation surprises caused by missing or weak functionality.
Scoring RFPs is a practical way to deal with hundreds or thousands of requirements. Software products are ranked by how well they meet your specific needs, while the heat map compares products in detail. If selecting software is in any way a gamble, you have now stacked the odds in your favor.
The next post will describe a way to solve the problem of aggressive vendors and “over-optimistic” RFP responses.
Chris Doig graduated from the University of Cape Town, South Africa with a bachelor of electrical engineering degree. While at university, he founded Cirrus Technology to supply information technology products to the corporate market. The focus at Cirrus was helping companies buy the best IT products for their particular needs. Cirrus also developed custom software for the South African 7-Eleven franchise holder and other corporate clients.
In the 1990s, Chris immigrated to the United States and worked at several companies in technical and IT management roles: Seagate, Biogen, Netflix, Boeing, Bechtel SAIC, Discovery Communications and several startups. At all of these companies he repeatedly saw software being purchased with an immature selection process. Invariably this software would take longer to implement than planned and cost more than budgeted. To make matters worse, the software seldom met expectations.
Having struggled with software selection himself, Chris founded Wayferry, a consulting company that helps organizations acquire enterprise software. He is also the author of Rethinking Enterprise Software Selection: Stop buying square pegs for round holes. While ERP projects account for much of Wayferry's work, other types of enterprise software acquisitions include CRM, HRIS, help desk, call center software, clinical trials management systems and so on. For Chris, the ultimate satisfaction is when clients report meeting or even exceeding expectations with their new software.
The opinions expressed in this blog are those of Chris Doig and do not necessarily represent those of IDG Communications Inc. or its parent, subsidiary or affiliated companies.