In the previous two articles of this series, I made bold assertions about the effectiveness of today’s enterprise IT: First, I said that a significant portion of IT effort falls by the wayside and contributes little or no economic value. Then, I shared scientific evidence demonstrating that the problem lies not with the leaders, culture, or people, but with technology management practices that don’t meet the needs of a digital world. Finally, I concluded that conventional IT strategies are not delivering enough productivity and speed to elevate IT effectiveness; therefore, they distract executive focus, team resources, and technology funding away from the real cures for ailing IT effectiveness.
The above narrative has a lot packed into it and, in many ways, goes against conventional wisdom. When I explain it to my clients and colleagues, their first reaction is a mix of disbelief and self-assurance, and only after many hours of in-depth conversation do we arrive at an “aha!” moment. In this article, I bring the pieces of the IT effectiveness puzzle together so that you can arrive at your own conclusion much sooner.
IT effectiveness is the pursuit of the best desired outcomes – strategic, tactical, goodwill, transactional, operational, regulatory, risk aversion, or the like – from the available technology spending. During our extensive research on conventional and emerging technology management practices, we looked for the systemic behavioral patterns that inhibit IT effectiveness by either reducing desired outcomes or increasing spending.
Patterns that reduce desired outcomes:
Every delay has an economic cost
Digital opportunities are perishable: they come with a short window of opportunity, and competitors catch up quickly. When IT delays, enterprises feel the pain of diminished strategic advantage, lost revenue, legal and regulatory exposure, tarnished reputation, or increased cost. With conventional technology management practices, it is easy to save an additional $100 on third-party expenses, but it is almost impossible to know how to avoid a million-dollar cost of delay.
IT delays are difficult to pinpoint because scope and schedule are constantly renegotiated and updated. Still, they are predictable and often avoidable when relevant insights from all corners of the IT operating environment are gathered to inform key managerial decisions, such as work prioritization and scheduling, and resource allocation and consumption.
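The economics described above can be sketched with a simple cost-of-delay calculation. This is an illustrative model, not a method prescribed in the article; the weekly value and delay figures are hypothetical.

```python
# Illustrative cost-of-delay model: a feature that would earn value_per_week
# once live forgoes that value for every week its delivery slips.
def cost_of_delay(value_per_week: float, weeks_delayed: float) -> float:
    """Value forgone while a finished-enough feature waits to ship."""
    return value_per_week * weeks_delayed

# A feature worth $50k/week delayed by 20 weeks quietly costs $1M,
# dwarfing a $100 saving on third-party expenses.
print(cost_of_delay(50_000, 20))  # -> 1000000
```

The asymmetry the article points out falls out directly: the third-party saving is visible on an invoice, while the delay cost never appears in any ledger unless it is explicitly modeled.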
Missing functionality means a lost outcome opportunity
When demand exceeds IT’s ability to deliver and delays are not acceptable, required functionality ends up being deployed only partially. Like the cost of delay, the opportunity cost of partial deliveries is also predictable.
Patterns that increase spending:
IT resources are not assigned to the highest value tasks
Customer A has budget but no good projects, and customer B has great projects but no budget. Stakeholders overstate the urgency of their projects to buffer against delays. Project proposals underestimate cost, complexity, or duration. Observed individually, these events seem to occur randomly; from a holistic perspective, however, they are predictable.
IT resources are not working on the highest value tasks
To elevate IT effectiveness, the rule of thumb is to always assign minimum effort (the constraint) to maximum expected outcome (the value). During project execution, expected outcomes and effort estimates change continuously as needs evolve, market trends shift, customers provide feedback, teams discover challenges, and scope expands. When information about these changes is not incorporated into work prioritization and resource allocation decisions in a timely manner, IT effectiveness suffers.
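The rule of thumb above – minimum effort toward maximum expected outcome – can be sketched as a greedy ranking by value density. The task names and numbers are hypothetical, and a real scheduler would also handle dependencies and re-rank as estimates change.

```python
# Rank candidate tasks by expected outcome per unit of effort ("value density")
# and fill the available capacity greedily. All data is hypothetical.
tasks = [
    {"name": "A", "value": 80, "effort": 40},  # density 2.0
    {"name": "B", "value": 60, "effort": 10},  # density 6.0
    {"name": "C", "value": 30, "effort": 30},  # density 1.0
]

def pick(tasks, capacity):
    ranked = sorted(tasks, key=lambda t: t["value"] / t["effort"], reverse=True)
    chosen, used = [], 0
    for t in ranked:
        if used + t["effort"] <= capacity:
            chosen.append(t["name"])
            used += t["effort"]
    return chosen

print(pick(tasks, 50))  # -> ['B', 'A']
```

Note that the ranking is only as good as its inputs: when the value and effort numbers go stale, the “optimal” assignment quietly stops being optimal, which is exactly the failure mode the paragraph describes.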
IT value stream flow is inhibited
Dependency management is a complex but crucial process because every missed dependency ripples through a tightly organized value stream. Demand and supply imbalances across numerous teams aligned to customers, products, technology towers, functions, and geographies can quickly bring the flow of work to a standstill. Uncontrolled variance in team velocity is another way flow slows down.
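One standard way to quantify the flow problem – not named in the article, but a common lens in flow management – is Little’s Law: in steady state, average cycle time equals average work-in-progress divided by throughput, so a growing WIP inventory at constant throughput directly lengthens every delivery. A minimal sketch with hypothetical numbers:

```python
def avg_cycle_time(wip_items: float, throughput_per_week: float) -> float:
    """Little's Law: time in system = WIP / throughput (steady state)."""
    return wip_items / throughput_per_week

# Doubling WIP at the same throughput doubles the wait for every item.
print(avg_cycle_time(40, 10))  # -> 4.0 (weeks)
print(avg_cycle_time(80, 10))  # -> 8.0 (weeks)
```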
Work that should be avoided is done
Have you ever seen a case where IT diligently built a piece of functionality, only for the team to discover during UAT that it was no longer needed? Did you know that most CIOs are not informed about what portion of current work, e.g., user stories, will never see daylight in production? How often do IT teams track the effort associated with reworked or cancelled tasks?
Thanks to robust governance and executive oversight, IT work is often rationalized at the initiative, program, or project level; but when you look beneath them, a meaningful portion of IT tasks, if predicted accurately, can be avoided with no impact on customers.
Risk-aversion feeds embedded contingencies
How do you manage delivery risks when the cost of failure is unacceptable, the environment is unpredictable, and dependencies are unavoidable? Like everyone else, I would bolster embedded project contingencies: schedule and cost buffers, over-specified or redundant requirements, and soft delivery commitments. These contingencies can be reduced if the underlying operating risk factors are well understood.
The effect of these behavioral patterns on IT effectiveness is significant
Based on the results of our empirical studies, observations in client environments, and industry workshops, we conclude that up to 40% of desired outcomes may be missed due to delayed or partial IT deliveries, up to 20% of IT effort can be avoided with little to no impact on stakeholders, and advanced management decision algorithms can yield double-digit IT resource productivity gains.
This is when the “aha!” moment arrives
Most organizations are self-assured that they already manage a number of these patterns rigorously. Still, their success in optimizing IT effectiveness is bound to be limited due to the following factors:
Reactive vs. predictive
Conventional technology management practices are designed for a plan-driven operating environment: they work better with certainty about the past than with possibilities in the future, and they rely on backward-looking performance measures. Consequently, they can’t predict the impact of systemic behavioral patterns. Here is an actual example:
“One of my clients said that their estimation error ranged between 10% and 30%. With predictive analytics, I found that estimates were typically 40% to 110% off the mark. Although 10% of the portfolio’s resources were already consumed by project management and reporting activities, projects were continuously in the red because of the backward-looking implementation of key performance metrics such as earned value and burn-down charts.”
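The gap described in the quote can be measured directly whenever estimates and actuals are both recorded. A minimal sketch, with hypothetical project data chosen to mirror the 40%–110% range:

```python
def estimation_error(estimate: float, actual: float) -> float:
    """Absolute error of an estimate, as a fraction of the estimate."""
    return abs(actual - estimate) / estimate

# Hypothetical projects: (estimated weeks, actual weeks)
projects = [(10, 14), (20, 31), (8, 16.8)]
errors = [estimation_error(e, a) for e, a in projects]
print([round(x, 2) for x in errors])  # -> [0.4, 0.55, 1.1]
```

The point of the anecdote is that self-reported error (10%–30%) and measured error can differ by a large factor; the measurement itself is trivial once the data is harvested rather than surveyed.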
Introverted vs. extroverted
IT is persistently presented with more demand than it can handle and is expected to deliver as much as possible. Without knowing enough about the collective capacity of IT, project executives inadvertently overcommit during planning, reset expectations during execution, and deliver what is humanly possible. Concerned stakeholders call for additional execution oversight and cost controls, which further tighten capacity and inhibit IT effectiveness (see the S-shaped curve of the marginal throughput of IT).
Most IT organizations respond to this chain of events by strengthening management controls, increasing reporting requirements, and emphasizing personal accountability. Unless there are glaring performance issues within IT, these actions are not effective, because they assume that the problem lies within IT. The actual root cause may be an honest misalignment between expectations and the execution capacity of IT.
Efficient vs. effective
Efficiency and effectiveness objectives are incompatible and cannot be simultaneously maximized due to the unique characteristics of the marginal throughput of IT. Extreme efficiency causes delays, partial deliveries, overconsumption, and a demoralized workforce. When technology spending is governed as a cost of doing business (efficiency) while digital aspirations for technology-led differentiation run high (effectiveness), IT becomes neither efficient nor effective.
Shot-gun vs. surgical
A financial services company decided to replace 30% of its IT workforce to create a digital-savvy IT organization. An enterprise PMO broadened reporting requirements to improve project execution. An insurance company tried to consolidate pockets of analytics capability residing in business units into a shared service to improve productivity and cost. What these examples have in common is the shot-gun approach: a familiar method is applied broadly to increase the chance of hitting a target. It is quick and works well to lift performance from “bad” to “good”. To become “great”, however, a shot-gun approach is not as effective, since opportunities are often embedded deep in the operating environment and require surgical precision to extract. For example, without an effective forecasting capability, accelerating delayed tasks may be counterproductive and cause further project delays; improving velocity while work-in-progress (WIP) inventory is growing may become a major distraction; partial automation of a value stream may simply shift the bottleneck elsewhere; and mandating project teams to report on detailed milestones usually quashes whatever credibility an enterprise PMO retains.
Siloed vs. holistic
For historical reasons, today’s IT organizations are siloed and layered, while the underlying operational data collected by common IT management systems offers an increasingly holistic view of the entire IT operating environment – thanks to the proliferation of emerging operating models like cloud, agile, XaaS, and DevOps. These organizational silos and layers create an artificial barrier that keeps IT leaders from thinking holistically about IT effectiveness. For example, leaders and practitioners in agile development organizations still debate the value of estimation and forecasting. They argue that these processes consume scarce team resources and produce inaccurate information. True, if the objective is precision, data collection is manual, and analysis results are open to interpretation. I have firsthand experience of how this process can become bureaucratic and how information can be lost at every passing of a layer or silo.
However, I have also seen how this process can be automated and streamlined. In our studies, we were able to forecast effort and milestones for an agile portfolio with thousands of developers, tens of thousands of active work items, and hundreds of teams within hours.
Systemic behavioral patterns are predictable, hence they can be managed
To compare the performance of conventional and emerging technology management practices, we conducted hundreds of simulations and analyzed IT project execution spanning a cumulative two thousand plus years. During this study, we were able to accurately identify several behavioral patterns by harvesting financial, operational, and organizational data accumulated in common IT management systems – e.g., project portfolio management (PPM), technology business management (TBM), and application lifecycle management (ALM). By formulating new management practices – involving policies, controls, metrics, and reports – we achieved significantly better IT effectiveness with predictive analytics applied to investment mix optimization, schedule optimization, project prioritization, supply-demand balancing, and forecasting. Finally, we demonstrated the stunning performance contrast between conventional and emerging technology management practices in a diagram inspired by the modern portfolio theory of finance.
Implementation requires no big-bang, is self-funded, energizes the domestic workforce, and sustainably improves the business-IT partnership
Historically, CIOs have been challenged to transform their IT operating models. Conventional strategies – such as modernization, centralization, consolidation, rationalization, and automation – are often executed with a shot-gun approach that requires an expensive, upfront investment of political capital to engage stakeholders; creates winners and losers due to imprecise implementation of the assumed value levers; consumes scarce discretionary funding; and produces questionable long-term benefits.
I believe that the innovative technology management practices discussed in this article can be implemented without these historical handicaps. I propose a four-step journey:
Step 1 – Show me the money
A significant portion of IT effort falls by the wayside and contributes little or no economic value due to IT byproducts, i.e., outputs that are never utilized (dormant output) or are delivered too late to retain much value (cost of delay). There is no better way to get people engaged and energized than showing them how much of their own effort is wasted on IT byproducts. This goal can be achieved through a small fact-finding exercise within the IT organization. For waterfall portfolios, task-level delays can be analyzed; for agile portfolios, the user stories committed to development but never activated in production are of particular interest.
Organizations with inadequate operational maturity may not be able to complete this exercise, indicating that they are fully exposed to IT byproducts. Others will likely be surprised by the amount of IT byproducts they discover in their operating environment. In either case, a leadership consensus soon emerges that this waste is too big to overlook.
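The fact-finding exercise could be sketched, for an agile portfolio, as a scan over exported work items. The field names and status values below are hypothetical; real ALM exports use their own schemas, and a real analysis would join development and production deployment records.

```python
# Flag likely IT byproducts in a (hypothetical) work-item export:
#  - "done_not_released": built in development but never activated in production
#  - "cancelled": effort spent on items that were later dropped
items = [
    {"id": 1, "status": "released",          "effort_days": 5},
    {"id": 2, "status": "done_not_released", "effort_days": 9},
    {"id": 3, "status": "cancelled",         "effort_days": 6},
]

byproducts = [i for i in items if i["status"] in ("done_not_released", "cancelled")]
wasted = sum(i["effort_days"] for i in byproducts)
total = sum(i["effort_days"] for i in items)
print(f"{wasted}/{total} effort-days ({wasted/total:.0%}) spent on byproducts")
```

Even a crude tally like this is usually enough to build the leadership consensus the article describes, because the wasted effort is expressed in the team’s own units rather than abstract percentages.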
Step 2 – Eat your own dog food
The IT effectiveness initiative starts as a small internal IT project and grows over time commensurate with the size of the validated benefits. An immediate priority is to identify and rationalize portfolio work items that are stuck at the bottom of work queues and backlogs (dormant output) and tasks that are delayed and deemed no longer valuable (cost of delay). The savings achieved by eliminating the identified IT byproducts serve as the seed funding for the initiative.
Next, the initiative focuses on enhancing several management capabilities that are essential to IT effectiveness – delivery cycle time, customer commitment, dependency management, demand decomposition, product and team alignment, effort estimation, and work-in-progress (WIP) inventory, to name a few.
As management capabilities are enhanced, several improvements become visible to IT executives and stakeholders. Most significantly, delivery cycle times shorten, milestone estimates become more reliable, WIP inventory shrinks, and resources are allocated to teams more optimally. Then the IT workforce notices that the environment is becoming more stable, with fewer fires to fight, and they feel empowered and recognized.
With these demonstrated improvements and the accumulating benefits, the IT leadership team is ready to engage IT stakeholders.
Step 3 – Gain support through transparency
Project and feature priorities are decided by IT stakeholders, who often lack sufficient visibility into the cost-benefit tradeoffs that their decisions trigger within the IT value stream. In this phase, the initiative focuses on providing project stakeholders with in-depth transparency about prioritization decisions. Hence, IT establishes a robust prioritization framework and implements several advanced algorithms, such as a cost of delay (CoD) calculator, weighted shortest job first (WSJF), a backlog optimizer, and an ROI calculator.
These improvements yield two distinct benefits: First, stakeholders are informed about the actual cost-benefit effect of their prioritization decisions, not only on their own projects but on the whole portfolio. This helps align individual stakeholder decisions with the overall portfolio goals. Second, accurate effort and duration estimates allow IT executives and stakeholders to prevent the overloading of IT teams beyond capacity, a proven drag on IT effectiveness.
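The WSJF algorithm mentioned above has a standard form: divide a job’s cost of delay by its duration (or size) and schedule the highest ratios first, so short, high-stakes work ships before long, low-stakes work. The job names and numbers below are hypothetical.

```python
# Weighted Shortest Job First: sequence jobs by cost-of-delay per unit duration.
jobs = [
    {"name": "checkout-fix", "cod_per_week": 30, "weeks": 2},  # wsjf = 15.0
    {"name": "new-report",   "cod_per_week": 10, "weeks": 1},  # wsjf = 10.0
    {"name": "replatform",   "cod_per_week": 40, "weeks": 8},  # wsjf = 5.0
]

def wsjf_order(jobs):
    """Highest cost-of-delay density first minimizes total delay cost."""
    return sorted(jobs, key=lambda j: j["cod_per_week"] / j["weeks"], reverse=True)

print([j["name"] for j in wsjf_order(jobs)])
# -> ['checkout-fix', 'new-report', 'replatform']
```

Notice that the replatforming job has the highest absolute cost of delay yet is sequenced last: its long duration would block two cheaper-to-delay jobs, which is exactly the portfolio-level tradeoff the transparency step is meant to surface.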
Step 4 – Stay united and go after the big-rocks
As IT executives and stakeholders strengthen their partnership, the priority of the initiative shifts from project execution to portfolio planning. Even if every business unit accurately prioritizes initiatives and proposes only its best opportunities to IT, the business outcome opportunity of each initiative continuously changes during execution: needs change, market trends shift, and value diminishes due to delays. Some of these patterns can be predicted. If this information is leveraged during portfolio planning, IT executives and stakeholders can accurately estimate which projects are more likely to deliver a better outcome per required spending – a fundamental requirement for elevating IT effectiveness. Our empirical studies indicate that portfolio opportunity quality is as important to IT effectiveness as project execution performance.
In conclusion, IT effectiveness is ailing because a significant portion of scarce technology resources is consumed by unintended IT byproducts, which are a consequence of outdated technology management practices.
Technological innovation has been the primary driver of digital transformation until now. To maintain momentum, digital transformation needs a corresponding wave of innovation in technology management. By leading the way, CIOs can energize their teams, strengthen the business-IT partnership, and accelerate digital transformation.