by Chris Doig

How the Software Selection Maturity Scale can improve enterprise software selection

Opinion
Oct 28, 2015
Enterprise Applications | IT Governance | Project Management Tools

Ever wondered why selecting enterprise software is so difficult? Use the Software Selection Maturity Scale to evaluate the state of your organization’s selection process, and as a roadmap for improvements.

Buying business-critical enterprise software entails significant risks for an organization. The goal of an evaluation and selection project is to purchase software that maximizes ROI and minimizes implementation risks, but the sheer number of requirements makes this difficult to achieve. For example, a comprehensive ERP selection for a large company can approach 10,000 requirements. There is also the risk described by the Dunning-Kruger effect, where IT professionals overestimate their software selection abilities and underestimate the project effort.

Software selection process improvement

This article focuses on a critical part of software acquisition, namely the evaluation and selection process, and examines how to improve that process. You could think of it as business process improvement focused on software selection and optimization. Although information from evaluations is passed on to implementation teams, this article does not cover the actual implementation of software.

The Software Selection Maturity Scale

Based on CMMI publications, we have developed the Software Selection Maturity Scale (SSMS) to measure the maturity of the software selection process. While CMMI for Acquisition covers processes for acquiring products and services, SSMS covers only enterprise software evaluation and selection. Note that there is no one-to-one correspondence between CMMI and SSMS.

SSMS has six levels, numbered 0 to 5, as shown in the diagram below. As you move up the scale, processes become more mature and purchasing risks are reduced. Each level rests on the one below.

[Figure: Software selection maturity scale (Wayferry)]

Level 0 – None

There is no process in place for evaluating and selecting software. Selection is based solely on the advice of others, e.g. referrals from colleagues, product reviews or salespeople. Software may be chosen for unethical reasons like favors or bribes. If there are demos, the most persuasive pitch wins.

Risks

  • There is a major risk of the software failing to adequately meet organizational needs.
  • Little leverage over vendors who overpromise and underdeliver.
  • Implementation schedules slip, and budgets are exceeded.

Level 1 – Initial

Evaluating and selecting software is ad hoc and chaotic, with success depending on individual efforts. Even if a software purchase is successful, there is no ability to repeat that success with the next project.

Characteristics

  • Minimal project management. IT usually manages the selection project with little planning for needed resources.
  • Inability to articulate organizational needs. Poorly defined requirements, usually in Word documents or spreadsheets.
  • Software is informally evaluated against these requirements. If scoring is used at all, it is rudimentary.
  • Software selection is subjective. Software may be chosen by a committee, or selected based on previous experience, e.g. “I know what I am doing” (see the Dunning-Kruger effect), with little attempt to examine organizational differences.

Risks

  • Unrepeatable and subjective decision-making.
  • Implementation schedules slip and budgets are exceeded. Little leverage over vendors who overpromise and underdeliver.
  • Significant risk of software not adequately meeting organizational needs.

Level 2 – Basic

A basic software evaluation process is established and followed. Organizations can achieve partial success but usually fail to meet the ROI used to justify the project.

Characteristics

  • Informal ROI estimate. Some project management. Limited IT resources with no specific software selection training.
  • Requirements managed in spreadsheets in moderate detail. Requirements may be rated for importance.
  • Mainly functional requirements, gathered mostly from users and stakeholders.
  • Moderate product research into alternatives.
  • Basic gap analysis is done: products may be scored with a simple system, often just yes or no (see the sketch after this list).
  • The selection decision is often made by committee.
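
To make this concrete, here is a minimal sketch in Python of the kind of yes/no gap analysis a Level 2 organization might run. The requirement and product names are hypothetical examples, not taken from any real evaluation.

```python
# Minimal yes/no gap analysis: count how many requirements each product meets.
# Requirement and product names below are hypothetical.

requirements = ["General ledger", "Multi-currency", "Audit trail", "Role-based access"]

# For each product, one yes/no answer per requirement, in the same order,
# typically filled in from vendor demos or RFI responses.
products = {
    "Product A": [True, True, False, True],
    "Product B": [True, False, True, True],
}

for name, answers in products.items():
    met = sum(answers)
    print(f"{name}: meets {met} of {len(requirements)} requirements")
```

Note how coarse this is: a product that misses a critical requirement can still outscore one that misses a trivial requirement, which is exactly the weakness the weighted scoring at Level 3 addresses.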

Risks

  • Non-functional requirements like support, licensing, etc. tend to be examined only superficially, creating the risk of problems later.
  • Somewhat subjective software selection process that is not readily repeatable.
  • Moderate risk of software failing to meet organizational needs adequately.
  • Implementation schedules often slip, and budgets are exceeded.

Level 3 – Defined

A software evaluation and selection process is established, followed and managed.

Characteristics

  • Formal ROI estimate before a project starts. Reasonable project management in place. Adequate resources are available, which may include external consultants. Resources are trained on the principles of evaluating software. Some resources are familiar with the type of software being evaluated.
  • Organization understands the value of front-loading requirements analysis. Users and stakeholders are interviewed to gather initial requirements, which are collected in an application designed for software evaluations, e.g. the Wayferry app. Requirements are well written and captured in sufficient detail. External sources of requirements may be used, and requirements are reverse engineered from potential products. Requirements are rated for importance to create the requirements profile and traceability matrix.
  • Adequate research into potential software products. Products are rated by how well they meet requirements. Formal gap analysis is done by the evaluation app and products are ranked by fit score (see the sketch after this list).
  • The organization knows how well the software will work for its particular needs before making the purchase. The selection decision is objective and repeatable.
  • All significant requirements are captured in the analysis phase. No “new” requirements are found during implementation, which stays within budget and on time.
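
Here is an illustrative fit-score calculation in Python: requirements are weighted by importance, and each product is rated 0 (not met) to 4 (fully met) per requirement. The weighting scheme, rating scale, and all names are assumptions chosen for illustration; this is not the actual algorithm used by the Wayferry app or any other evaluation tool.

```python
# Illustrative weighted fit score. Weights, scales, and names are assumed,
# not taken from any real evaluation product.

requirements = {          # requirement -> importance weight
    "General ledger": 10,
    "Multi-currency": 6,
    "Audit trail": 8,
}

ratings = {               # product -> {requirement: rating, 0 (not met) to 4 (fully met)}
    "Product A": {"General ledger": 4, "Multi-currency": 2, "Audit trail": 3},
    "Product B": {"General ledger": 3, "Multi-currency": 4, "Audit trail": 4},
}

def fit_score(product_ratings: dict[str, int]) -> float:
    """Weighted percentage: 100% means every requirement is fully met."""
    max_points = sum(4 * weight for weight in requirements.values())
    points = sum(product_ratings[req] * weight for req, weight in requirements.items())
    return 100.0 * points / max_points

# Rank products by fit score, best first.
for name in sorted(ratings, key=lambda p: fit_score(ratings[p]), reverse=True):
    print(f"{name}: fit score {fit_score(ratings[name]):.1f}%")
```

Because the score is weighted by importance, a product that fully meets the critical requirements ranks above one that scatters partial coverage across minor ones, which is what makes the selection decision objective and repeatable.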

Risks

  • There is a risk of some vendors not responding to RFPs or RFIs, which can lead to the best-fit software being overlooked.
  • No formal scope analysis is done if there is a mismatch between requirements and available software.
  • RFPs or RFIs are not audited to expose “over-optimistic” responses by vendors.
  • Typically uses the vendor’s purchasing contract, which is very one-sided in favor of the vendor.
  • There is a risk of inexperienced consultants being used in the implementation.
  • Inadequate user acceptance tests.

Level 4 – Verified and adjusted

Vendor claims are verified. Project scope is adjusted if there is a mismatch with what the market can supply.

Characteristics

  • Adequate project management.
  • Adequate requirements analysis. Requirements common to enterprise apps like security, usability, licensing, etc. are collected in libraries for future projects.
  • Formal requirements scope adjustment ensures expectations are matched with software the market can deliver. The selection decision process is repeatable.
  • The winning vendor RFP is audited to verify vendor claims are realistic.
  • Non-vendor references are identified and checked.
  • The winning evaluation is used by the implementation team for estimates and implementation planning. No “new” requirements are discovered during implementation.

Risks

  • Acceptance tests are not created as part of the requirements.
  • The buyer accepts more risk than necessary because they do not use the contract to transfer some risks to the seller.
  • The contract is written in terms of milestones rather than outcomes. It is too easy for vendors to meet milestones while missing outcomes.

Level 5 – Optimized, tested and improved

Software acquisition risks are managed and minimized. The purchase contract is optimized to share risk with the vendor. Testing verifies the software’s business performance. Post-purchase process improvement is undertaken.

Characteristics

  • Adequate project management. Post-implementation analysis is actively used to improve future software acquisition projects.
  • Adequate requirements analysis, especially using reverse engineering.
  • Adequate product research and evaluation of potential products. Vendor claims audited for accuracy. Objective and repeatable software selection process.
  • Purchase contracts are written to fairly share project risk with the vendor. The purchaser pays more in exchange for reduced risk of project failure. Contracts are written based on outcomes (as opposed to meeting milestones).
  • Information from the winning product is used to estimate and plan the implementation project. No “new” requirements appear during implementation, which is on budget and on time.
  • Implementation is verified against requirements with formal user acceptance tests. The purchase contract includes terms that require the software and/or implementation vendor to achieve a particular score on those tests; failure to meet scores triggers penalties (see the sketch after this list).
  • Post-implementation analysis identifies requirement and acceptance test gaps. These are added to the requirements libraries for future software selections. The requirements library is actively managed for quality.
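
As a sketch of how acceptance test results might be checked against a contractual threshold at this level, consider the Python below. The test names, weights, and the 95% threshold are hypothetical assumptions, not terms from any actual contract.

```python
# Sketch: weighted user acceptance test score checked against a contractual
# threshold. All tests, weights, and the threshold are hypothetical.

CONTRACT_THRESHOLD = 0.95   # vendor must pass at least 95% of weighted tests

acceptance_tests = [        # (test name, weight, passed?)
    ("Month-end close completes in under 4 hours", 3, True),
    ("Invoice posts to the correct GL account", 5, True),
    ("Role-based access blocks unauthorized users", 4, False),
]

total = sum(weight for _, weight, _ in acceptance_tests)
passed = sum(weight for _, weight, ok in acceptance_tests if ok)
score = passed / total

print(f"Acceptance test score: {score:.1%} (contract threshold {CONTRACT_THRESHOLD:.0%})")
if score < CONTRACT_THRESHOLD:
    print("Score below threshold: contractual penalties apply.")
```

Tying payment to a test score like this is what shifts the contract from milestones to outcomes: the vendor is paid for software that demonstrably works, not for activity completed.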

Conclusion

Use the SSMS to measure the maturity of your software selection process. Note that different projects can operate at different places on the scale. For example, an organization operating at Level 3 may hire a new VP who buys a particular software product he likes, without considering alternatives. That project will have operated at Level 0.

Use the SSMS as a roadmap for software selection process improvement. As an organization moves up the scale, the ROI achieved gets closer to the ROI used to justify the software project, and the implementation stays within budget and is completed on time.

Ultimately, a mature software selection process helps organizations meet or exceed the ROI that was used to justify the software purchase in the first place.

Acknowledgments & references

This article builds on CMMI concepts, especially CMMI for Acquisition.

CMMI for Acquisition Primer 1.3 (March 2011, 57 pages)

CMMI for Acquisition 1.3 (November 2010, 438 pages)