Buying a car is much simpler than building one. Likewise, requirements for buying software are much simpler than those for building software. But don’t let that fool you. Poorly written requirements can be a problem in any software selection project. Consequences include missed deadlines, not selecting best-fit software, unrealized ROI, and occasionally outright failure. Writing good requirements accelerates software selection projects and reduces risks.
A while ago we were working with a client team and came across an incomprehensible requirement. The author of the requirement was in the meeting and was asked to explain. He answered: “I have no idea of what I meant with that requirement!” This type of situation is all too common. Why does it happen, and how can you avoid it? The answer is that requirements have meaning in a context. When writing the requirement, the context was assumed. Over time, context changes or is forgotten, and the meaning of the requirement dissipates.
It takes some practice, but one of the secrets of good requirements is to write them so they can stand alone without the support of the context. The ability to stand alone is particularly important if requirements will be collected in a library and reused on subsequent software evaluations. Well-written requirements should have:
A requirement ID. This unambiguously identifies a requirement even after the title has been changed.
A short, descriptive title, preferably no more than five or six words. The title is the shorthand way to refer to a requirement in conversation, emails, etc.
A good requirement description. This amplifies the title and removes ambiguity. It explains, in brief and general terms, why the requirement might be wanted and how it might be implemented.
If the description contains acronyms or specialized concepts, link to resources like Wikipedia that explain them. Links to articles in online technical publications can also provide background.
An example that shows how to use the requirement.
An edit history, so unexpected changes to a requirement can be traced to their source.
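The attributes listed above amount to a simple record structure. As a minimal sketch (the field names and example values here are my own illustration, not a standard schema):

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One stand-alone requirement, carrying its own context."""
    req_id: str                 # stable ID that survives title changes
    title: str                  # short handle, about five or six words
    description: str            # amplifies the title and removes ambiguity
    links: list = field(default_factory=list)   # background reading for acronyms/concepts
    example: str = ""           # shows how the requirement is used
    edit_history: list = field(default_factory=list)  # (author, note) pairs

    def record_edit(self, author: str, note: str) -> None:
        """Append to the edit history so changes can be traced to their source."""
        self.edit_history.append((author, note))

# Hypothetical example entry
req = Requirement(
    req_id="SEC-042",
    title="User inactivity timeout",
    description="Automatically log users out after a configurable idle period.",
)
req.record_edit("jsmith", "Initial draft")
```

A record like this stands alone without the original meeting context, which is what makes it reusable in a requirements library later.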
It is vital to separate writing requirements from deciding how important those requirements are to the organization. In enterprise software evaluations, requirements are often written by IT. The writer may not know how important a requirement is because they are not familiar with those particular business areas. Later on, in a separate step, teams from those business areas will rate requirements for importance in the context of their job functions.
Most people who write software requirements are technical specialists rather than skilled writers. However, there are tools that can help:
Grammar and spell checkers catch obvious mistakes. If you are using a browser-based tool to capture requirements, we have found the free version of Grammarly catches more errors than the browser’s built-in spell checker.
Text-to-speech. When the computer reads a requirement back to you, problems like poor word choice, clumsy phrasing, and ideas in the wrong order become apparent. Your writing improves in ways that grammar checkers just can’t match. If you are using Microsoft Word, customize the Quick Access Toolbar to include “Speak”. If you are working in a Chrome browser, try Speakit! from the Chrome Web Store.
Enterprise software evaluations have hundreds if not thousands of requirements. The most practical way to manage these large numbers is to organize related requirements into groups. For example, you could put all requirements related to passwords in one group. Collecting related requirements in groups allows duplicates to be weeded out.
Sometimes one requirement belongs to multiple groups; for example, “User inactivity timeout” could belong to both the “Compliance” and “Security: User Management” groups. In our experience, as many as 15 percent of all requirements belong to multiple groups. The inability to place one requirement in multiple groups is one of the limitations of using spreadsheets for software evaluations.
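This grouping is a many-to-many relationship, which is exactly what a flat spreadsheet row struggles to express. A minimal sketch, using made-up group names and requirement IDs:

```python
# Hypothetical group assignments: group name -> set of requirement IDs.
groups = {
    "Compliance":                {"SEC-042", "AUD-007"},
    "Security: User Management": {"SEC-042", "SEC-051"},
    "Usability":                 {"UX-010"},
}

# Invert the mapping to see every group each requirement belongs to.
membership = {}
for group, req_ids in groups.items():
    for req_id in req_ids:
        membership.setdefault(req_id, set()).add(group)

# Requirements in more than one group -- the ~15 percent a spreadsheet
# row-per-requirement layout cannot represent cleanly.
multi_group = {r for r, g in membership.items() if len(g) > 1}
```

Here “User inactivity timeout” (SEC-042) shows up under both Compliance and Security: User Management without being duplicated, which is also what makes weeding out true duplicates practical.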
Requirements can be written top-down or bottom-up. We use both, but emphasize bottom-up because with enterprise software the devil really is in the details, and those details need to be identified and factored into the selection. Software with high-level problems is relatively easy to notice and eliminate early in the evaluation process.
Well-written software requirements reduce risks and increase the probability of the best-fit software being selected. As a bonus, requirements for things like security, usability, contract, vendor due diligence, etc. can be collected in a library and reused on future evaluations.
[Disclosure: I have no interest, financial or otherwise, in any of the products mentioned in this post. They are shared as examples of tools we use when developing requirements.]
Chris Doig graduated from the University of Cape Town, South Africa with a bachelor of electrical engineering degree. While at university, he founded Cirrus Technology to supply information technology products to the corporate market. The focus at Cirrus was helping companies buy the best IT products for their particular needs. Cirrus also developed custom software for the South African 7-Eleven franchise holder and other corporate clients.
In the 1990s, Chris immigrated to the United States and worked at several companies in technical and IT management roles: Seagate, Biogen, Netflix, Boeing, Bechtel SAIC, Discovery Communications and several startups. At all of these companies he repeatedly saw software being purchased with an immature selection process. Invariably this software would take longer to implement than planned and cost more than budgeted. To make matters worse, the software seldom met expectations.
Having struggled with software selection himself, Chris founded Wayferry, a consulting company that helps organizations acquire enterprise software. He is also the author of Rethinking Enterprise Software Selection: Stop buying square pegs for round holes. While ERP projects account for much of Wayferry's work, other types of enterprise software acquisitions include CRM, HRIS, help desk, call center software, clinical trials management systems and so on. For Chris, the ultimate satisfaction is when clients report meeting or even exceeding expectations with their new software.
The opinions expressed in this blog are those of Chris Doig and do not necessarily represent those of IDG Communications Inc. or its parent, subsidiary or affiliated companies.