Why does the traditional RFP process return such poor results when selecting off-the-shelf or cloud software?
A primary purpose of software RFPs is to discover how well potential products meet requirements at an acceptable price. Vendors respond to the RFP with their proposals. The organization selects the “best” proposal and makes the purchase. However, problems frequently appear once the software implementation starts, leading to missed deadlines, rising costs and, occasionally, outright failures.
Sadly, the traditional software RFP process is broken. In fact, when it comes to purchasing off-the-shelf or cloud enterprise software, this process never worked properly in the first place. It is the reason stories about software failures regularly appear in the technical press. These stories are only the tip of the iceberg; people don't talk about most partial or outright failures because they don't want to be associated with them.
By definition, any enterprise software project that does not meet the target ROI is a failure to some degree. This article examines reasons why the process of selecting software via an RFP is broken and offers suggestions for fixing the problems.
1) Requirements management
Enterprise software systems can have many thousands of requirements. Managing these requirements with spreadsheets or Word documents is labor-intensive and error-prone. Problems include a lack of version control, manual workflows, and missing files.
Use a system designed to manage requirements for software purchases. Note that while superficially similar, systems optimized for software purchasing have a very different emphasis from those optimized for software development. For example, a purchasing-focused system has a means of rating products against requirements and comparing them to the organization’s needs. Systems focused on software development do not have these features.
2) Inadequate requirements breadth
At the start of any enterprise software selection project, you have only a very high-level idea of needs. The usual methods of discovering requirements involve interviewing users, engaging experienced consultants and so on. However, these methods don’t identify all requirements. Missing requirements are found during implementation, where they cause delays and cost overruns.
To gather all requirements, use the technique of reverse-engineering features from multiple products back into requirements. Done in enough detail across all potential products, this creates a comprehensive master list of requirements covering the entire problem space the software purchase is meant to solve.
3) Inadequate requirements depth
When requirements are not specified in enough detail in the RFP, the wrong software can be selected. A product may implement certain features in only a basic way when far more comprehensive functionality is needed; it takes detailed requirements to expose this gap.
Examination of software failures appearing in the press suggests this is a significant problem. Inadequate depth can obscure requirement mismatches and lead to unrealistic expectations by the people involved with the project. In both cases, problems appear during implementation when workarounds must be developed to address the inadequacies of the software purchased.
Write requirements in enough detail that the system could be implemented from them. Split overloaded requirements into multiple simpler requirements. No software is perfect, so set expectations by making people aware of weak areas while the software is being selected.
4) Requirement importance
Traditional RFPs sometimes omit requirement importance ratings. Without ratings, all requirements are implicitly equally important. Since some requirements always matter more than others, accuracy demands some form of importance rating scale.
Rate all requirements for importance to your organization.
5) Narrative requirements
Narrative requirements are those that need a short essay to answer. Examples are: "Describe the configuration management process" or "Describe your disaster recovery capabilities." The problem is that there is no effective way to roll up the information from narrative responses into the final decision. Usually, the software selection team members read the answers and then reach a consensus or vote. Unfortunately, voting is a non-deterministic process open to bias.
Write requirements as closed questions the vendor can answer by selecting from a drop-down list. Each selection maps to a number, and those numbers are rolled up into a fit score for each product.
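As a minimal sketch of that roll-up, assuming an illustrative five-point response scale and per-requirement importance weights (the scale wording, weights and function names here are hypothetical, not a standard):

```python
# Illustrative drop-down answers, each backed by a number (assumed scale).
RESPONSE_SCALE = {
    "Fully supported out of the box": 1.0,
    "Supported via configuration": 0.75,
    "Supported via customization": 0.5,
    "Planned for a future release": 0.25,
    "Not supported": 0.0,
}

def fit_score(requirements, responses):
    """requirements: {req_id: importance weight};
    responses: {req_id: drop-down answer chosen by the vendor}.
    Returns a 0-100 weighted fit score for one product."""
    earned = sum(
        weight * RESPONSE_SCALE[responses[req_id]]
        for req_id, weight in requirements.items()
    )
    possible = sum(requirements.values())
    return round(100 * earned / possible, 1)

requirements = {"R1": 4, "R2": 2, "R3": 1}  # importance weights
responses = {
    "R1": "Fully supported out of the box",
    "R2": "Supported via configuration",
    "R3": "Not supported",
}
print(fit_score(requirements, responses))  # 78.6
```

Because every answer is a number, thousands of responses consolidate into a single comparable score instead of a pile of essays.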
6) RFPs request too much information
Related to narrative requirements above, some RFPs demand too much information in the form of vendor comments for each requirement. The amount of work needed causes fewer vendors to respond, which can mean the best-fit software is not selected. In one recent project, we had to plead with a vendor to respond to the RFP. They did, and they won the deal because they truly were the best fit for that client.
Ask the vendor to rate how well their products meet your requirements, and suggest they minimize their comments. Less work means more vendors will respond to your RFP, which means you have a better chance of finding the best-fit software for your particular needs. Rely on auditing the provisionally selected product to uncover “over-optimistic” RFP responses.
7) Duplicated requirements
In RFPs, related requirements are grouped together to keep the list manageable. All lists have some requirements that belong to more than one group. For example, “No cost for deactivated user accounts” could belong in both the “Software license” group and the “Compliance / Audit trails” group. Traditional RFPs duplicate requirements like these, resulting in more work and inconsistent product ratings.
The system used should allow one requirement to appear in multiple groups. When a product is rated against that requirement in any one group, it is rated in all groups. This reduces product evaluation work and keeps ratings consistent. When related requirements are collected in groups, it is also much easier to find and remove duplicates, especially where the same requirement is expressed in different ways.
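One way to get this rate-once behavior, sketched here with hypothetical requirement IDs, is to store each rating exactly once and have groups reference requirements by ID rather than hold their own copies:

```python
# Ratings are stored once, keyed by requirement ID.
ratings = {}

# Groups reference requirement IDs; R11 appears in two groups
# (e.g. "No cost for deactivated user accounts").
groups = {
    "Software license": ["R10", "R11"],
    "Compliance / Audit trails": ["R11", "R20"],
}

def rate(req_id, score):
    """Rate a requirement once; every group that references it sees
    the same score, so ratings can never diverge between groups."""
    ratings[req_id] = score

rate("R11", 0.75)

# Both groups resolve R11 to the identical rating.
for group, req_ids in groups.items():
    print(group, [(r, ratings.get(r)) for r in req_ids])
```

The design choice is simply normalization: because the groups hold references instead of copies, an updated rating is consistent everywhere by construction.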
8) Waterfall vs. Agile
Traditional RFPs tend to be like Waterfall Software Development. Everything is defined up front, documented in the RFP, and then handed over to the vendors. Vendors respond with proposals. The software is selected. The software is implemented.
In reality, organizations learn more about their needs all the way through the evaluation and selection phase and even in the implementation phase. As organizations learn more about potential products, they refine their requirements. For example, features from potential products trigger ideas for new requirements, and existing requirement weights are adjusted up and down. Traditional RFPs can’t handle this dynamic, agile-like environment. Because it is so rigid, the traditional RFP process loses valuable information along the way. This translates to greater risks of project failure.
Move that learning earlier in the software selection process, before the purchase, because newly discovered requirements can change which software is selected. The best way to discover unknown requirements is to reverse-engineer features from potential products back into requirements. This forces the team to think through issues that would otherwise be overlooked. It is the primary way to discover unknown requirements, and it also incorporates the latest technology into a software evaluation.
Whenever new requirements are discovered, they should be added to the system, even if it is as late as the implementation phase. The system should also allow requirement and group weights to be dynamically adjusted during the evaluation as the organization builds a better understanding of their needs.
9) Manual workflow
Software RFPs should measure how well potential products meet your requirements so you can select the best-fit software for your budget. The problem is that traditional RFPs can have many thousands of requirements, yet they use a manual process to consolidate this information into the purchase decision.
RFPs using Word or PDF documents lack the means to consolidate the information collected. They are totally unsuitable for selecting software. Spreadsheets are better than documents but are also limited and manual. While spreadsheet RFPs can use macros to consolidate product response scores, such macros require coding skill and are seldom written. Spreadsheets may work for evaluations with relatively small numbers of requirements, but they don't scale up to enterprise software selection projects.
For both documents and spreadsheets, there is no way to compare multiple products. Each RFP response stands alone. Information collected on different RFPs must be manually consolidated, which is tedious, error-prone work. The people doing this decide what is or is not included, and their biases can affect the selection decision. At the end of the traditional RFP process, the software selection team may reach consensus or vote. The hope is that individual biases cancel each other out. Hope is not a rational approach to selecting software.
The solution is to use a system to measure how well products score against your requirements, and distill those numbers into a fit score that can be used to rank products.
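A minimal sketch of that consolidation, assuming illustrative weights and per-requirement scores on a 0-to-1 scale (product names and numbers are hypothetical):

```python
# Importance weight per requirement (assumed values).
weights = {"R1": 4, "R2": 2, "R3": 1}

# How well each product meets each requirement, on a 0..1 scale.
products = {
    "Product A": {"R1": 1.0, "R2": 0.5, "R3": 0.0},
    "Product B": {"R1": 0.75, "R2": 1.0, "R3": 1.0},
}

def fit(scores):
    """Distill per-requirement scores into one 0-100 fit score."""
    possible = sum(weights.values())
    earned = sum(weights[r] * scores[r] for r in weights)
    return round(100 * earned / possible, 1)

# Rank all products by fit score, best first.
ranking = sorted(products, key=lambda p: fit(products[p]), reverse=True)
for name in ranking:
    print(name, fit(products[name]))
```

The same deterministic calculation runs for every product, so comparisons do not depend on who consolidates the spreadsheets or how the team votes.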
10) No traceability matrix
In the context of purchasing requirements, traceability refers to the source of those requirements. Traditional RFP requirements are often collected with no record of who wants them or why. If a requirement does not have an owner, it does not belong on an RFP! (If a business process demands a requirement, that process has an owner.) Requirement ownership is used in customer acceptance testing. It is also used in system validation, e.g. as practiced in the life science industry. Note that the names of the people to whom a requirement is important may be hidden from the vendors responding to the RFP, but should nevertheless be recorded along with the requirement.
Rate requirements for importance. For each requirement, capture how important it is, to whom it is important, and why it is important to them. If several stakeholders rate a requirement differently, record the highest importance.
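A simple requirement record that preserves this traceability might look like the following sketch; the field names and the 1-to-5 importance scale are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    text: str
    importance: int   # e.g. 1 (nice to have) .. 5 (showstopper)
    owner: str        # who wants it (may be hidden from vendors)
    rationale: str    # why it matters to them

def record_importance(req, stakeholder_ratings):
    """When stakeholders disagree, record the highest importance."""
    req.importance = max(stakeholder_ratings.values())
    return req

req = Requirement(
    req_id="R42",
    text="No cost for deactivated user accounts",
    importance=0,
    owner="Finance",
    rationale="Controls license spend as staff turn over",
)
record_importance(req, {"Finance": 4, "IT": 2})
print(req.importance)  # 4
```

Keeping owner and rationale on the record means acceptance testing and system validation can later trace every requirement back to the person or process that demanded it.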
11) Too simple or too complicated product scoring systems
Some RFPs have a simple “Yes” or “No” for responses to a requirement and room for a comment. Others have complicated arrangements of multiple columns, e.g. a Y/N column, a column for how much effort is required and a third column for the level of customization required. These approaches do not capture the information in a way that can be effectively consolidated into the purchasing decision.
Use the simplest possible scoring system that adequately captures how well the product meets your requirements, and then rank the software. Remember you will only buy one software product, so complicated systems that allow you to look at the evaluation from multiple angles can sometimes get in the way of a decision.
12) RFP response validation
Some vendors are "over-optimistic" when responding to an RFP, especially if requirements are not clearly expressed. These over-optimistic responses must be found and corrected before the purchase, because they can, and do, change purchase decisions. The traditional RFP process has no way to validate vendor responses. If this step is skipped, there is a serious risk of falling into an "over promise, under deliver" situation, especially when large amounts of money are involved.
Before making the purchase, audit the winning RFP response to validate that the vendor's answers truly measure how well the software meets your requirements.
We suggest that the limitations of the traditional RFP process are a significant cause of outright or partial software selection failures. Being aware of the problems highlighted in this article along with the fixes suggested should reduce your risks next time your organization undertakes an enterprise software selection project.
This article is an updated version of a white paper published by Wayferry.
This article is published as part of the IDG Contributor Network.