Enterprise software selections involve hundreds or even thousands of requirements. The most practical way to handle that volume is an evaluation system. The output is a deterministic, data-driven decision, typically a ranking of the evaluated products by how well they satisfy the requirements. This article covers the steps needed to finalize that software selection, and then to verify and validate the decision.
Depending on how the competing products meet the requirements, typical practice is to select a shortlist of one to three products for demos. For large products like ERP, a demo can last several days, and more than three product demos will induce demo fatigue.
The demo is the last stage where end users are involved in product selection. Vendor demos are an extremely thin slice through the absolute best features of the product and are designed to sway the audience. To counteract this, you need to prepare a demo script based on showstopper requirements that will be used for all demos. Vendors will push back hard, but you need to be firm. If a vendor refuses to demo what you want to see in the product, how do you think they will behave once they have your money?
It is vital to collect feedback immediately from the users who attended the demo, before they leave the room. For example, we keep it very simple and ask the questions listed below:
From your perspective:
- What do you like and dislike about the product?
- Rate the software product on a 0–5 scale.
- What do you like and dislike about the vendor?
- Rate the vendor on a 0–5 scale.
- Any other comments?
Recall that the provisional product selection is data driven. The purpose of the demo, then, is to confirm that decision. Summarize this feedback and provide it to the final decision maker, along with the data-driven product selection. The output from this step is the provisional product selection.
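As a minimal illustration, the demo ratings can be tallied with a short script before they go to the decision maker. The field names and sample records here are hypothetical placeholders, not data from a real selection:

```python
from statistics import mean

# Hypothetical feedback records: one dict per attendee, per demoed product.
feedback = [
    {"product": "A", "product_rating": 4, "vendor_rating": 3},
    {"product": "A", "product_rating": 5, "vendor_rating": 4},
    {"product": "B", "product_rating": 2, "vendor_rating": 3},
]

def summarize(feedback):
    """Average the 0-5 product and vendor ratings for each demoed product."""
    summary = {}
    for product in {f["product"] for f in feedback}:
        rows = [f for f in feedback if f["product"] == product]
        summary[product] = {
            "product_rating": mean(r["product_rating"] for r in rows),
            "vendor_rating": mean(r["vendor_rating"] for r in rows),
            "responses": len(rows),
        }
    return summary
```

Free-text likes, dislikes, and comments would be collected alongside the numbers; the averages are only a quick sanity check against the data-driven ranking, not a replacement for it.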
Vendor response validation
The next step is to audit and validate the provisionally selected product. Select a sample of the most important requirements the vendor claims are fully met, usually between 10 and 20 percent of the total number of requirements. If you use a scoring system where “fully meets” = 100 percent, then by definition the product will score 100 percent on this sample. Ask the vendor to provide evidence that the software does indeed fully meet these requirements as claimed.
The evidence can come from user or admin documentation, “how to” videos, or a test account where you can log in. After verifying the product against the sampled requirements, if the sample score drops below 100 percent, reduce the overall product score by the same percentage. This drop can change the product ranking and cause a different product to be selected. If that happens, be grateful the process caught the problem before the purchase: you have just avoided an expensive software failure.
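The adjustment above can be sketched in a few lines. This is a minimal illustration of the arithmetic; the function name and the example numbers are assumptions for demonstration, not figures from a real evaluation:

```python
def audited_score(overall_score: float, sample_size: int, verified: int) -> float:
    """Scale the overall product score by the fraction of sampled
    'fully meets' claims that survived evidence-based verification."""
    if sample_size <= 0 or not 0 <= verified <= sample_size:
        raise ValueError("verified must be between 0 and sample_size")
    return overall_score * (verified / sample_size)

# Example: a product scored 87% overall; 50 'fully meets' claims were
# sampled, but the vendor's evidence supported only 45 of them.
adjusted = audited_score(87.0, 50, 45)  # 87% * 90% = 78.3%
```

A drop like this, applied to every shortlisted product's score, is what can reorder the ranking and surface a different winner.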
When buying enterprise software, everybody knows reference checks should be done, but what are you trying to achieve with them? The aim of reference checks is to uncover reasons why you should not purchase the software, based on real-world experience in an industry similar to yours. Thus, reference checks are more about the vendor and the user experience than about the product itself.
Typically, vendors supply “cherry-picked” references. A reference that will not say anything significantly negative about a vendor is not credible. Such references are a red flag that something is wrong (e.g., they were paid for the reference), so discard their comments.
How can you find references not supplied by the vendor? One way is to search job boards for postings that mention the name of the product you are investigating. If the job was posted by the employer, you can use tools like Data.com Connect from Salesforce to identify possible owners of the product. Alternatively, you can search LinkedIn for people who mention the product in their job history. If the product has user groups, you can reach out to some of the most active members. In our experience, a significant number of these people will respond and provide a reference. The important point is that these references are independent of the vendor.
If you used a data-driven analysis to select the product, by the time you speak to references you should know how well the product would meet your needs. Focus questions on non-functional requirements and phrase them to uncover negatives. Examples include asking how well the vendor responded to problems, what implementation issues they had and how they were resolved, problems experienced by users, problems with support and so on.
When speaking to a reference you are unlikely to have much more than an hour, so the choice of questions is critical. Make sure you have a list of questions to ask before the call. Along with each question, include the context of why you are asking it because references may want to know this before answering.
At this point, you have used a data-driven approach to select best-fit enterprise software and have a pragmatic understanding of how well it meets your requirements. Product demos confirmed your selection. You validated the RFP to ensure the vendor’s responses were accurate and not over-optimistic. You checked references to verify that real-world experience with the product was acceptable. Now you can proceed with the purchase, confident that your up-front work has minimized software selection risks and set the stage for maximizing your ROI.
This article is published as part of the IDG Contributor Network.