IT success is achieved by excelling in five areas: running the production systems (operational excellence); delivering new functionality (solution delivery excellence); having an engaged workforce (organizational excellence); keeping costs in line (financial excellence); and exploiting the capabilities of new technology (transformational excellence). For more details on this concept, read my three-part series on transforming technology organizations.
Metrics can play an important role in achieving excellence as they force the organization to pay attention to its performance and prompt management to make adjustments when goals are not being met.
Online application performance. The average time it takes to render a screen or page. It is also important to measure the variability of performance (discussed further in the supplemental operational metrics section).
Online application availability. The percentage of time the application is functioning properly. This can be difficult to define. If the application is available for some users but not all, is it "available"? What if most functions are working but a minor function is not? To address this problem, I like to define the primary functions an application performs. Then, if any of these functions are unavailable, the application is considered down even if most of the application is usable. Also, if the application is primarily used during business hours, I like to have separate metrics for that time versus other times. So, the metrics might be: primary functions during business hours; all functions during business hours; primary functions 24x7; and all functions 24x7.
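As a sketch, the four availability metrics might be computed from periodic monitoring samples like this. The sample data, the business-hours window, and the function name are all illustrative assumptions, not a prescribed implementation:

```python
from datetime import datetime, time

# Hypothetical monitoring feed: one record per interval as
# (timestamp, primary functions OK?, all functions OK?).
samples = [
    (datetime(2023, 5, 1, 10, 0), True, True),    # business hours, fully up
    (datetime(2023, 5, 1, 14, 0), True, False),   # minor function down
    (datetime(2023, 5, 1, 22, 0), False, False),  # overnight outage
    (datetime(2023, 5, 2, 11, 0), True, True),
]

# Assumed business-hours window; adjust to the application's profile.
BUSINESS_START, BUSINESS_END = time(8, 0), time(18, 0)

def availability(samples, primary_only, business_hours_only):
    """Percent of sampled intervals in which the application counted as up."""
    relevant = [
        (primary_ok if primary_only else all_ok)
        for ts, primary_ok, all_ok in samples
        if not business_hours_only
        or BUSINESS_START <= ts.time() < BUSINESS_END
    ]
    return 100.0 * sum(relevant) / len(relevant)

# The four metrics described above:
primary_bh = availability(samples, primary_only=True, business_hours_only=True)
all_bh     = availability(samples, primary_only=False, business_hours_only=True)
primary_24 = availability(samples, primary_only=True, business_hours_only=False)
all_24     = availability(samples, primary_only=False, business_hours_only=False)
```

Note how the same outage data yields four different numbers: in this sample, primary functions were fully available during business hours even though 24x7 all-function availability was only 50 percent.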
Batch SLAs met. The percentage of key batch jobs that finish on time.
Production incidents. The number of production problems by severity.
Supplemental operational metrics. Other metrics that might be used to enhance operational effectiveness include the number of unscheduled changes to the production systems, the throughput of batch processes, complexity scores for major applications (indicating how difficult they are to maintain), architectural integrity (the percent of applications on preferred technologies, another indication of how difficult applications are to maintain) and the variability of online application performance.
This last item requires a quick but important note. Business users can get quite frustrated when online production issues are simply described as "we are experiencing slowdowns." One technique to solve this problem is to set a target for each screen or page in the application, defined as the time within which 90 percent of renders of that screen or page should complete. Actual performance can then be compared to this goal, and the percentage of renders that hit the goal provides a good indicator of the customer service level (CSL).
With this technique, both the business and technology know how the application is doing when the CSL number is reported. If an application's CSL is 90 percent, the application is running exactly as expected, whereas a CSL of 85 percent or 50 percent describes a different degree of missing expected results.
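A minimal sketch of the CSL calculation, assuming per-screen targets have already been set at the 90th-percentile expectation (the screen names, times, and targets below are made up for illustration):

```python
# Hypothetical per-screen targets (seconds), set so that at expected
# performance roughly 90 percent of renders beat the target.
targets = {"login": 1.5, "search": 2.0}

# Hypothetical observed renders: (screen, render time in seconds).
renders = [
    ("login", 1.2), ("login", 1.4), ("login", 1.9),
    ("search", 1.8), ("search", 2.4),
]

def customer_service_level(renders, targets):
    """Percent of renders that met their screen's target time."""
    hits = sum(1 for screen, secs in renders if secs <= targets[screen])
    return 100.0 * hits / len(renders)

csl = customer_service_level(renders, targets)
# A CSL near 90 would mean the application is performing as expected;
# this small sample scores lower because two of five renders missed target.
```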
Project satisfaction. The average score from post project surveys completed by business partners. After each project, it is important to solicit feedback from the business. The survey should contain one summary question for the project satisfaction metric (e.g., what is your overall satisfaction with this project on a scale of one to five?), a few more specific questions and an area for written comments. The survey should also be completed by the technology group to gain further insights on the areas that could be improved moving forward, but these scores are not included in the metric as they tend to be biased on the high side.
Project delivery. The percentage of projects delivered on time. "On time" is another tricky concept. For projects using the waterfall methodology, the projected delivery date can vary greatly once the team engages in the design process. I have found it useful to make sure business partners know that the delivery date is not set until design is done and therefore, this metric uses that date for a target. For Agile projects, this metric is not relevant as the delivery date is almost always met by adjusting scope.
Project cost. The percentage of projects delivered within the cost estimate. For this metric, I also use the post design cost estimate for the same reasons noted in the previous section. Again, Agile projects are less likely to benefit from this metric.
Defect containment. The percentage of defects contained to the test environments. It is well known that defects are much more expensive to fix in production. This metric counts the defects corrected during the development process and compares this count to any defects found in the first 30 days of production. While 30 days may seem like a short period, I have tried using the first 90 days of production instead, and the longer wait to determine the metric outweighed the additional information it provided.
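The containment calculation itself is straightforward. Here is one possible reading of it, with a hypothetical go-live date and defect counts; the 30-day window matches the text, and everything else is an illustrative assumption:

```python
from datetime import date, timedelta

def defect_containment(test_defects, prod_defect_dates, go_live, window_days=30):
    """Percent of defects caught before production.

    Only production defects found within `window_days` of go-live
    count against containment.
    """
    cutoff = go_live + timedelta(days=window_days)
    early_prod = sum(1 for d in prod_defect_dates if go_live <= d < cutoff)
    total = test_defects + early_prod
    return 100.0 * test_defects / total if total else 100.0

go_live = date(2023, 6, 1)
# Two defects inside the 30-day window; the August one falls outside it.
prod_defects = [date(2023, 6, 10), date(2023, 6, 25), date(2023, 8, 1)]

containment = defect_containment(
    test_defects=48, prod_defect_dates=prod_defects, go_live=go_live
)
```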
Supplemental delivery metrics. Additional metrics that might be included in this area: how well interim deliverables, such as the completion of design, are hit on time; how well first estimates compare to the final project cost; how many changes are made during the freeze between project completion and the production install; and how many projects require an unscheduled change after installation.
Attrition. The percentage of employees who move to other jobs. For this metric, it is important to only include voluntary separations, as you do not want to provide managers with an incentive to retain poor performers. It is also important to differentiate between employees who leave the company and those who leave to take another position within the company.
Performance reviews. The percentage of employees with current written reviews. Providing employees with constructive feedback is one of the most important steps an organization can take to improve productivity. Unfortunately, in many organizations, managers and employees dread this process and it is often neglected. The problem is often the enforcement of a grading system, which becomes the focus rather than the specific feedback. If you can do it, skip the grade and have the manager focus on what needs to happen for the employee to get to the next level — a discussion everyone should find useful.
Supplemental organizational metrics. There are many other metrics that can be useful in creating an engaged workforce. Examples include making sure employees have written performance expectations and goals at the start of the year, tracking the amount of training provided to employees (e.g., setting targets just like CPAs and other professionals mandate) and highlighting the number of employees in formal mentoring relationships.
Budget variance. Actual costs compared to budgeted costs. This should be tracked separately for direct expenses (salaries) and inter-company expenses (allocations from other areas), since direct expenses are more controllable.
Resource cost. The average cost of a technology resource. This metric provides a good view of how well managers are controlling costs by using cheaper outsourcing labor, being thoughtful in the use of higher priced temporary labor and managing an organization that is not top heavy with expensive employees (discussed in more detail in the Supplemental Financial Metrics section). Some organizations set targets for outsourcing (e.g., 30 percent of the workforce), but I think the overall resource cost metric is much more powerful. If managers believe they can be more productive and keep costs down using a variety of techniques, why not let them rather than focus on a single strategy?
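The point about a single blended number can be made concrete with a small sketch. The workforce mix and cost figures below are hypothetical; the idea is that any combination of strategies that lowers the blended average counts:

```python
# Hypothetical workforce mix: category -> (headcount, fully loaded annual cost).
workforce = {
    "employees":   (40, 150_000),
    "contractors": (10, 200_000),
    "outsourced":  (25, 80_000),
}

def average_resource_cost(workforce):
    """Blended annual cost per technology resource, across all categories."""
    total_cost = sum(heads * cost for heads, cost in workforce.values())
    total_heads = sum(heads for heads, _ in workforce.values())
    return total_cost / total_heads

avg_cost = average_resource_cost(workforce)
```

Whether a manager shifts work to outsourcing, trims temporary labor, or flattens the employee mix, the effect shows up in this one number, which is what makes it more powerful than a target for any single strategy.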
Supplemental financial metrics. There are several other metrics that can be useful for organizations. Simply keeping a running total of the dollars saved from cost initiatives (e.g., moving to cheaper technologies) can help keep the focus on these projects. Tracking costs by activity (e.g., development versus maintenance versus running the systems versus other costs) can highlight areas for improvement.
Finally, as alluded to above, many organizations have a tendency to become top heavy over time so it is useful to track this in a metric. For example, if an organization has eight levels starting with new college graduates and going to vice presidents, a simple metric can be created by assigning a number to each employee (e.g., new college graduates = 1, VPs = 8), adding up the numbers and dividing the sum by the number of employees to determine the average level in the group.
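The average-level calculation described above can be sketched in a few lines. The headcount distribution is invented for illustration; only the eight-level scheme comes from the text:

```python
# Hypothetical headcount by level: 1 = new college graduate ... 8 = VP.
headcount_by_level = {1: 12, 2: 10, 3: 8, 4: 6, 5: 4, 6: 3, 7: 2, 8: 1}

def average_level(headcount_by_level):
    """Average organizational level; a rising value over time
    signals the group is becoming top heavy."""
    total_points = sum(level * n for level, n in headcount_by_level.items())
    total_heads = sum(headcount_by_level.values())
    return total_points / total_heads

avg_level = average_level(headcount_by_level)
```

Tracked quarter over quarter, a steadily climbing average level is an early warning that senior roles are accumulating faster than the base of the organization is growing.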
Critical metrics for IT success: Summary
It might be useful to provide an example of how metrics can help organizations be more proactive in identifying and solving issues. The online performance of an application I once supported jumped from an average of 1.2 seconds per screen render to 1.6 seconds in a single month. There were no complaints from users and our team probably wouldn’t have noticed this degradation without our metrics.
After some investigation, we found that a central support group had installed new security software that was interacting with our application in an inefficient manner. We worked with that group and were able to get our performance back down to 1.2 seconds by using the new software in a different way. While a 0.4 second increase in render time is not the end of the world, it could have become a real problem if several such issues had accumulated over time without being caught.
Successful IT organizations solve issues like this before they become problems, while less successful organizations are caught off guard when the business complains about a degradation in their technology. Metrics are key to making sure your organization addresses symptoms proactively, rather than reacting only once a problem has turned into a crisis.
This article is published as part of the IDG Contributor Network.