by Chris Doig

Pennsylvania Turnpike Commission’s ERP project takes wrong turn

May 01, 2015 | 4 mins

Pennsylvania Turnpike Commission implements ERP only to end up in court. A case study on reducing enterprise software risks.

Previous articles have examined techniques for evaluating and selecting best-fit enterprise software. This article draws many of those ideas together using the example of a software selection failure currently in the courts.

The Pennsylvania Turnpike Commission hired Ciber to develop an enterprise software RFP covering finance, payroll, human resources and so on. The Commission then awarded the implementation contract to Ciber without taking competitive bids. It is now suing Ciber for more than $45 million, alleging the company overbilled while implementing software that doesn’t work properly.

While the risks of selecting the wrong software and being overbilled during implementation can never be wholly eliminated, what strategies can reduce them? Risk reduction comes down to the foundations. In the case of enterprise software, this means a thorough requirements analysis. Unfortunately, most organizations greatly underestimate the work involved in developing requirements, which is often more than half the total work in a software selection project. Developing a detailed list of requirements starts with:

  • Asking users. Our experience is that users are very quick to tell you what they don’t want, but beyond a few immediate needs they struggle to tell you what they do want. Nevertheless, it is crucial to start software selection projects by talking to people because that is where end-user buy-in begins.
  • Requirements libraries. We have found that for a given type of software, all organizations have very similar requirements. What makes each organization unique is the relative importance of those requirements. When available, using libraries is the fastest way to develop the list of requirements.
  • Reverse engineering features into requirements. This is a particularly important part of the process because it captures requirements the organization doesn’t know it needs. It also fleshes out inadequately covered areas of the requirements.

The above steps produce a comprehensive list of requirements that covers the organization’s “problem space”. The next step is to meet with the appropriate groups of users and ask them to rate the requirements for importance. Once completed, you will have a requirements profile that is the standard against which all potential products are evaluated.

With the requirements profile in hand, RFPs or RFIs can be sent out. Using vendor responses, a gap analysis is performed and each product is scored against organizational needs. The gap analysis is where requirement and requirement-group weights are often fine-tuned, because by now the organization has a far better understanding of its needs.
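The scoring at the heart of a gap analysis can be sketched in a few lines. The requirements, weights, products, and response values below are hypothetical, purely to illustrate the arithmetic:

```python
# Sketch of a weighted gap analysis: score each product against the
# organization's requirements profile. All names and numbers are
# hypothetical, for illustration only.

# Requirements profile: requirement -> importance weight (0 = not needed, 3 = critical)
profile = {
    "multi-entity payroll": 3,
    "union pay rules": 3,
    "self-service HR portal": 2,
    "ad hoc financial reporting": 1,
}

# Vendor RFP responses: how fully each product meets each requirement (0.0 to 1.0)
responses = {
    "Product A": {"multi-entity payroll": 1.0, "union pay rules": 0.5,
                  "self-service HR portal": 1.0, "ad hoc financial reporting": 1.0},
    "Product B": {"multi-entity payroll": 1.0, "union pay rules": 1.0,
                  "self-service HR portal": 0.5, "ad hoc financial reporting": 0.0},
}

def fit_score(profile, response):
    """Weighted fit as a percentage of the maximum possible score."""
    max_score = sum(profile.values())
    actual = sum(weight * response.get(req, 0.0) for req, weight in profile.items())
    return round(100 * actual / max_score, 1)

for product, response in responses.items():
    print(product, fit_score(profile, response))  # A: 83.3, B: 77.8
```

In practice the profile contains hundreds or thousands of requirements, but the principle is the same: the weights come from users, the response values come from vendors, and the fit score follows mechanically from both.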

Once the best-fit software has been selected, the winning RFP should be audited. Auditing ensures the vendor was not “over-optimistic” in their responses, and the software can indeed perform as promised.

Detailed information gathered from evaluating the winning software against a comprehensive requirements analysis should be used to help vendors estimate implementation costs. Vendors are then held accountable to this baseline.

What reasons do we have for thinking these strategies will produce the desired results? It all goes back to the foundations: Everything depends on a thorough requirements analysis.

  • The software is selected with a data-driven process based on the requirements profile, leaving no room for the wrong software to be chosen through bribery or political influence.
  • Auditing the winning RFP verifies the software can indeed meet the requirements.
  • The implementation vendor is given a detailed list of all requirements that will be met with configuration, coding, etc. The vendor bases its implementation estimate on this list and can be held accountable for it. Overbilling is quickly identified when actual work hours deviate significantly from the estimates.
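Holding a vendor to that baseline can be as simple as comparing billed hours against estimated hours per requirement. The tasks, hours, and 25% tolerance below are hypothetical assumptions, not figures from the case:

```python
# Sketch of flagging potential overbilling: compare actual hours billed
# against the baseline estimate per implementation task.
# All tasks, hours, and the tolerance are hypothetical.

TOLERANCE = 0.25  # assumed threshold: flag anything more than 25% over estimate

baseline = {"configure payroll rules": 120, "build GL interfaces": 80, "migrate HR data": 60}
actual   = {"configure payroll rules": 130, "build GL interfaces": 150, "migrate HR data": 58}

def flag_overruns(baseline, actual, tolerance=TOLERANCE):
    """Return tasks whose billed hours exceed the estimate by more than tolerance."""
    return {task: (est, actual[task])
            for task, est in baseline.items()
            if actual[task] > est * (1 + tolerance)}

for task, (est, billed) in flag_overruns(baseline, actual).items():
    print(f"{task}: estimated {est}h, billed {billed}h")
```

Running this flags only "build GL interfaces", where billed hours far exceed the estimate; routine variances within tolerance pass silently, so the review effort concentrates on genuine outliers.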

To return to the Pennsylvania Turnpike Commission: Had a thorough requirements analysis been done up front, they would have been able to measure just how well the software would meet their specific requirements before the purchase.

A thorough requirements analysis also tells the implementation team exactly what to implement. Too many organizations leave the detailed analysis to the implementation stage and pay the price through change orders.

Finally, a detailed list of how the requirements must be implemented (i.e., with configuration, coding, etc.) lets the vendor provide an accurate estimate of implementation costs and time. If the vendor quoted too low, this could be identified early in the project.

When it comes to new enterprise software, it is essential to construct a robust foundation with a thorough requirements analysis, and build on that. Experience shows that shortcuts taken early on always inflate costs at the tail end of the project. Perhaps the Pennsylvania Turnpike Commission could have used these principles to avoid a detour through the courts.