From a high-level perspective, IT budgets have two parts: maintenance and strategic initiatives. Maintenance tends to grow at the expense of strategic initiatives and, left unchecked, ultimately stifles innovation. In mid-to-large IT organizations, this has driven the emergence of Application Portfolio Management (APM). APM is a disciplined approach to aligning enterprise applications to maximize business value while minimizing lifecycle ownership costs.
A lack of APM results in uncontrolled application growth, sometimes called application sprawl. See Appendix 1 for typical causes of application sprawl and Appendix 2 for the resulting problems. Note that APM is a continuous process, not a one-time objective. The essentials of APM are:

An inventory of enterprise applications. This can be as simple as a spreadsheet, or as sophisticated as dedicated portfolio management software.
Regular review and analysis of enterprise applications, e.g. using a quadrant analysis and evaluating applications against business requirements.
Execution, which is acting on the results of the analysis. This includes:

Retiring obsolete applications
Replacing applications that have high maintenance costs or a poor functional fit
Consolidating multiple applications where there are significant functional overlaps

This article focuses on the review and analysis of applications, which culminates in software rationalization projects. It covers cloud and off-the-shelf software, as well as applications developed in-house.
Application review
A simple way to identify applications in need of rationalization is to plot them on a quadrant chart of ownership costs against business value. See Appendix 3 for examples of direct and indirect ownership costs. Applications in the low-value, high-cost quadrant are prime candidates for rationalization.
For each candidate, estimate the ROI of replacing it; this estimate is the basis of the business case for undertaking a software rationalization project.
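This triage can be sketched in a few lines. The cost threshold, the 1-10 value scale, and the ROI figures below are illustrative assumptions, not values from the article:

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    annual_cost: float      # total ownership cost per year
    business_value: int     # e.g. 1 (low) to 10 (high), rated by the business
    replacement_roi: float  # estimated ROI of replacing this application

def rationalization_candidates(apps, cost_threshold, value_threshold):
    """Applications in the low-value, high-cost quadrant, highest ROI first."""
    candidates = [a for a in apps
                  if a.annual_cost >= cost_threshold
                  and a.business_value <= value_threshold]
    return sorted(candidates, key=lambda a: a.replacement_roi, reverse=True)

portfolio = [
    App("Legacy CRM",   250_000, 3, 1.8),
    App("ERP",          400_000, 9, 0.4),
    App("Old intranet",  90_000, 2, 2.5),
]

for app in rationalization_candidates(portfolio, cost_threshold=80_000,
                                      value_threshold=4):
    print(app.name, app.replacement_roi)
```

Sorting the candidates by estimated ROI puts the projects with the greatest return for the effort at the top of the list.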
Start with the highest-ROI projects because they will bring the greatest return for the effort.
The software rationalization project
The core of a software rationalization project measures how well applications meet the needs of the organization, which means developing a comprehensive requirements profile that accurately captures those needs. Existing or potential new applications are measured against this frame of reference.
The requirements list
For the purposes of this article, a requirement is defined as an organizational need expressed in a quantifiable way. For any given type of software, most organizations have very similar requirements. What makes each organization unique is how important the individual requirements are to it.
There are three main parts to building a comprehensive list of requirements:

Asking users and analyzing current business processes
Using external sources of requirements, such as purchased requirements lists or RFPs found on the web
Reverse engineering features from potential products into requirements

When developing software it is not possible to collect all requirements up front, but when buying software it is, because the features of the candidate products bound what the software can do. Requirements are the foundation for selecting best-fit software, and those requirements need to be well written and sufficiently detailed.
Functional requirements
Functional requirements specify what the application must do, and most people start here. Reverse engineer features from existing applications into requirements to capture current functionality. Ask users where these existing applications could be improved, and capture their answers as requirements.
Next, look at potential replacement applications. Be sure to include the market leaders in the appropriate software category, which does not necessarily mean "big name" products. When considering mid-level systems, include the mid-level market leaders.
Reverse engineer the features of those products into requirements. This is a critical step in developing a comprehensive requirements list because it captures unknown requirements and the latest advances in the market, and it ensures existing applications are compared with the best the market has to offer.
Other requirement types
Many other requirement types, sometimes called non-functional requirements, should be considered when rationalizing software. Examples are:

Compliance requirements: vendor compliance, quality, standards (ISO, SOX, HIPAA, 21 CFR Part 11, etc.), vendor standard operating procedures (SOPs), audit trails, tracking end-user training.
Contractual requirements: legal, license, performance, contract terms and termination (all contracts eventually end; make sure there is a graceful way to exit).
Security requirements, especially for cloud or hosted applications: physical and logical security, configuration, security testing & audits, logging & reporting, authentication and passwords, encryption.
System requirements: performance, monitoring, integration, configuration, compatibility, architecture (front & back end), user management, backups.
Training requirements: content, delivery, training management.
Usability requirements: user interface, navigation, searching, user help and the ability to search it, languages.
Vendor requirements: due diligence, implementation, support, payment arrangements, and the application ecosystem: things like user groups and add-on products from other vendors.

Note: some of the above examples apply only to cloud or vendor-hosted applications.
Rate requirements for importance
Requirements must be rated for importance to the organization. For the purposes of traceability (sometimes captured in a traceability matrix), be sure to record who wants each requirement, why they want it, and how important it is to them.
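A traceability matrix can start as nothing more than a table of records like the following sketch; the field names and the 1-5 importance scale are illustrative assumptions:

```python
from dataclasses import dataclass

# One row of a simple traceability matrix: each requirement records
# who wants it, why they want it, and how important it is to them.
@dataclass
class Requirement:
    req_id: str
    description: str
    requested_by: str   # who wants this requirement
    rationale: str      # why they want it
    importance: int     # e.g. 1 (nice to have) to 5 (showstopper)

matrix = [
    Requirement("FIN-01", "Export ledger to CSV", "Finance",
                "needed for monthly reporting", 5),
    Requirement("SEC-03", "Single sign-on via SAML", "IT",
                "central authentication policy", 4),
]

# Showstopper requirements can be pulled straight from the matrix.
showstoppers = [r.req_id for r in matrix if r.importance == 5]
```

Keeping the "who" and "why" with each requirement makes it possible to revisit a rating later and ask the original requester whether it still applies.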
Employees rate requirements in their own areas: for example, the Finance team rates financial requirements, the IT team rates security and usability requirements, and so on.
When rating requirements for importance, consider how important each requirement is now and how important it will be over the next 3 to 5 years. Organizational subject matter experts provide invaluable input here. If there are no people with experience in specific areas, it pays to use outside help. For example, if software licensing costs will run into tens of millions of dollars, hire a licensing specialist to help develop those requirements and negotiate the deal.
The output of this process is a comprehensive requirements profile that accurately and adequately captures the needs of the organization. Current and potential replacement software will be rated against this reference standard.
Rate applications against the requirements profile
Once the requirements profile is complete, the next step is to evaluate current and potential replacement applications against that profile. This evaluation objectively measures how well those applications meet organizational needs. Knowledgeable users should rate current applications because they know those applications and their limitations.
RFPs and RFIs
Ratings for potential replacement applications are usually provided by the vendors in the form of an RFI (or RFP) response. One of the challenges is getting vendors to respond at all. One way to improve the response rate is to reduce the amount of work a vendor needs to do, for example by using two rounds of RFIs. In the first round, send out only the showstopper requirements; usually these are about 10% of the total. Since there is much less work, more vendors will respond. Shortlist based on the first-round responses and send the full RFI to only the top 6 or so vendors.
Scoring RFIs or RFPs
When vendors return completed RFIs, the potential applications must be scored.
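As a sketch of one way such scoring can work, assume each requirement has an importance weight and each vendor response is a fulfillment rating between 0 and 1, with None for requirements not yet rated (all identifiers and scales here are illustrative):

```python
def normalized_score(weights, ratings):
    """Weighted score as a fraction of the maximum possible.

    weights: {req_id: importance}; ratings: {req_id: 0.0-1.0 or None}.
    An application fully meeting every requirement scores 1.0 (100%).
    Unrated requirements count as zero here.
    """
    total = sum(weights.values())
    earned = sum(weights[r] * (ratings.get(r) or 0.0) for r in weights)
    return earned / total

def fit_score(weights, ratings):
    """Like the normalized score, but requirements the application has
    not been rated against are excluded from the denominator."""
    rated = {r for r in weights if ratings.get(r) is not None}
    total = sum(weights[r] for r in rated)
    earned = sum(weights[r] * ratings[r] for r in rated)
    return earned / total if total else 0.0

weights = {"FIN-01": 5, "SEC-03": 4, "USR-07": 2}
ratings = {"FIN-01": 1.0, "SEC-03": 0.5}   # USR-07 not yet rated
```

With this sample data, fit_score ignores the unrated USR-07, so a partially evaluated application can still be compared against fully rated ones.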
A useful technique is to normalize scores: if an application fully meets every requirement, that application scores 100%. The advantage of normalized scores is that they provide an intuitive measure of how well applications meet organizational needs.
The Fit Score is defined as the Normalized Score above, but excluding requirements against which the application has not been rated. By definition, the Fit Score equals the Normalized Score when an application is rated against all requirements.
The Fit Score distills an entire evaluation into one number that is used to rank applications. Its advantage is that applications can be compared before they are fully evaluated. By observing Fit Score trends once about 50 percent of the requirements have been rated, applications that clearly will not make the shortlist can be dropped from the evaluation.
Gap analysis
The gap analysis is where current and potential new applications are evaluated against the requirements profile and ranked by Fit Score. While it is unnecessary to fully rate every application in the evaluation, potential winning candidates should be rated against all showstopper and critical requirements, and against about 90 percent of all requirements in total. Once the gap analysis is complete and applications are ranked by Fit Score, things start to get interesting.
The Fit Score objectively measures how well applications meet the requirements profile, and allows them to be compared and ranked. For example, take a post-merger scenario where an organization is deciding between two existing CRM applications, or whether both should be replaced by a market leader like Salesforce.

If both existing CRMs have a very high Fit Score, e.g. > 95 percent, then it does not matter which is selected; both will do a good job.
If one CRM has a significantly lower Fit Score, e.g. < 80 percent, while the other has a high Fit Score of > 95 percent, pick the CRM with the higher score.
If both existing CRMs have a relatively low Fit Score, e.g. < 75 percent, and something like Salesforce has a high Fit Score, e.g. > 90 percent, then it may be worth selecting Salesforce.
If all applications have relatively low Fit Scores, e.g. < 75 percent, then the scope of the evaluation needs to be adjusted. Alternatively, other applications that could better meet the requirements (and were not evaluated) may need to be considered.
Although this would not apply to the CRM example above, if all applications have exceptionally low Fit Scores, e.g. < 60 percent, and adjusting the scope of the evaluation does not make a significant difference, then you have a prime candidate for internally built software.

Auditing RFIs
Some vendors can be "over-optimistic" when responding to RFIs. If a new application is selected to replace an existing application, that vendor's RFI response should be audited to verify that the claimed functionality really exists.
Conclusion
If the process outlined here has been followed, applications with high ownership costs and low value were selected for potential rationalization. A comprehensive list of requirements was developed, and employees rated those requirements for importance to create a requirements profile: an objective standard, unique to the organization, for that type of application.
In the gap analysis, current and potential replacement applications were evaluated against the requirements profile. The results of each application evaluation were distilled into one number, the Fit Score, which measures how well each application meets organizational needs.
A data-driven analysis identified the best-fit application for the organization's particular needs.
The question of which existing applications should be kept, or whether an entirely new application should be bought, has been rationally and objectively answered.
Appendix 1: Common causes of application sprawl

Acquisitions & mergers
Business strategy changes
Business growth and the need for immediate solutions lead to software being purchased just to solve the problem at hand.
In-house software developed as point solutions. Technology silos develop where project teams fail to communicate.
New software with better features overtakes existing applications, and the old applications are not retired.
Compliance requirements cause obsolete applications to hang around. Access is not disabled, and some people continue using them.
Organizational silos, where different departments bring cloud applications online to solve similar problems.
Political purchases. New senior executives introduce software "because it worked well at my previous company". The new software is a poor fit for the organization, so the original software it was supposed to replace can't be retired.

Appendix 2: Typical problems caused by application sprawl

Unnecessary software costs for underused applications. These take the form of annual software maintenance paid to vendors, or fees for cloud applications.
Increased administration costs. All applications require some level of system administration, and these costs are often overlooked because they tend to come out of general IT budgets.
Increased support costs. Each supported system requires helpdesk staff to support it.
Increased training requirements for new users. Also, when there are too many applications, people tend to use each application less frequently and forget how to do things.
User confusion caused by duplicated functionality. Different departments use different applications for the same business processes.
De-normalized data. The same information is stored in different systems in different formats. For example, after a merger, two different sets of customers exist in two different CRMs. Some customers may be in both systems. Even if each customer is in one system or the other, automated reports covering the whole customer base cannot be produced.
Reduced efficiency. Older applications often don't have the functionality or ease of use delivered by current applications.
Increased interface costs. As the number of applications grows, the cost of having those applications exchange data increases rapidly, because the number of potential point-to-point interfaces grows with the square of the number of applications. Data tends to be siloed in different applications, which prevents users from getting the big picture.
Increased development costs. Custom applications developed in-house may have to work with de-normalized data in multiple repositories with different APIs and data schemas. This significantly increases the cost of internal software development.
Reduced security caused by an increased attack surface. More applications running means more potential security holes that attackers can exploit.
Unnecessary data center resources consumed. Organizations find the number of VMs explodes, but also that the usage of those VMs (and the applications running on them) is lower than expected. More applications mean more systems to back up, and more effort to manage those backups.

Appendix 3: Application ownership costs
Ownership costs include all regular, ongoing direct and indirect costs associated with applications. They do not include one-off costs like implementation consulting or initial training. Examples are:
Commercial off-the-shelf software ownership costs

Annual software maintenance costs
Periodic upgrade costs
Data center costs, including things like backup, failover, etc., plus indirect costs like power, cooling, floor space, physical security, etc.

Cloud or SaaS software ownership costs

User access fees
Option fees, e.g. for base, standard or premium access

In-house application ownership costs

Bug fixes
Enhancements
Release testing
Change management
Analyst & developer salaries & overheads
Management of analysts, developers, testers, technical writers, etc.
Recruiting costs for developers to maintain obsolete applications, e.g. those written in COBOL
Application documentation costs
Data center costs, including things like backup, failover, etc., plus indirect costs like power, cooling, floor space, physical security, etc.

Ownership costs common to all applications

End-user training
Helpdesk support. Also, as support staff leave, replacements must be trained.
Lost user productivity, e.g. when users should be able to do something with the software but need support to get it done
Customer costs, e.g. when slow responses caused by poor software fit result in lost customers
Opportunity costs of downtime
Compliance & auditing costs
Security testing and auditing
Inter-application communication costs, where one application needs data from another and that integration must be maintained
Reporting costs, where data from multiple applications must be normalized and merged, often manually in spreadsheets
Disaster recovery & business continuity planning and testing
IT staff management

Acknowledgment: This article is an updated version of a white paper originally published by Wayferry.