I've written a number of times about the sea change in technology that is occurring and how it's affecting enterprise IT organizations. Most recently, I wrote about how open source is eating the technology industry and how that will affect these organizations.

It's no exaggeration to say that IT is witnessing more change now than it has ever seen before. I expect more innovation – and turmoil – in the industry over the next five years than in the past 20. And all of that innovation has a common underpinning: cloud computing. Cloud computing is enabling – and driving – all of this innovation and disruption, so from the perspective of IT it's important to understand what this implies for the most important activity IT undertakes: applications. Applications, after all, are where all the value of IT lies. Everything else is just an enabler.

So what does cloud computing mean for your applications?

Let's start by looking at the canonical enterprise stack, circa 2010, as represented in Figure 1.

Figure 1. The canonical enterprise stack, circa 2010.

The foundation of 2010 enterprise IT is legacy infrastructure. The key feature of legacy infrastructure is how slow and expensive it is. Everything takes forever – weeks or months to procure and install equipment, and that's after the capital is obtained to purchase it. And, by the way, since all of the processes associated with racking and stacking are manual and take forever, the infrastructure, once installed, is incredibly difficult to change, so it's static.

The application tooling running on that legacy infrastructure is primarily proprietary software packages – think Java application servers and relational databases from IBM and Oracle. The processes used by application groups in this environment are slow and deliberate.
ITIL is a common governing process, featuring change control boards and infrequent application modifications. Which is OK – slow application processes are masked because the underlying infrastructure takes so long to change. If you're racing a turtle, you don't have to be very fast to look good.

And the primary application interface is the browser. Used by people. For the most part, driven by stable workload processes, like invoice processing. The user base doesn't change much, the number of users doesn't vary much, and the applications change infrequently.

So overall, a tightly aligned marriage of infrastructure, tooling and workloads. Everything slow-moving and stable.

Exceptions have become the rule

Of course, this overview sounds idyllic. There are always applications that don't fit this environment very well: the externally facing website with huge jumps in traffic and user numbers during the holiday shopping season; the business units that want to try an experiment but can't, because by the time the experiment is built, the opportunity would have passed. And development and test – well, they're always bellyaching because there's no equipment available for them. But because of the primacy of the traditional applications, these unusual use cases are always treated as exceptions that don't justify upsetting the current state of affairs.

What's happening today is that these "exceptions" have become the rule. The relationship between companies and their customers has gone digital. Mobile applications are rapidly becoming the de facto way those relationships take place, with the Web taking a secondary interface role. Companies want to mine the vast amounts of data their digital interactions generate.
And looming on the near horizon is the shift to machine learning and the Internet of Things.

Figure 2 depicts the new enterprise stack. The common foundation for all of these interactions and interfaces is cloud computing. The public cloud providers have changed everything about infrastructure expectations. The new assumption is that infrastructure will be immediately available, low-cost and scalable to whatever extent you need. Static is out the window, discarded in favor of agile.

Figure 2. The new enterprise stack.

Many people assume the key challenge for enterprise IT groups is at the infrastructure level. Nothing could be further from the truth. The working assumption of all infrastructure consumers – i.e., developers, application groups, IT executives and business unit customers – is that infrastructure will meet the new normal: fast, cheap and scalable. If the on-premises environment meets those requirements, fine. If it doesn't, nothing in the world will persuade those consumers to stick with an inferior offering.

Instead, the key challenge for enterprise IT is to reconfigure the layer above infrastructure – the application tooling. We are going to see enormous change in the kinds of applications that are built, the software components used to build them, and the processes by which they are delivered. Put bluntly, the infrastructure change affects certain portions of the IT operations groups; this change will affect everyone.

I wrote about open source in "4 principles that will shape the future of IT," but suffice it to say that everything interesting going on in software is based on open source. Proprietary software can't innovate quickly enough, and is unaffordable at the scale these applications require.

Beyond this, the core architecture of enterprise applications will have to change.
Monolithic code bases running in proprietary application servers can't change fast enough to keep up with "run the business" update requirements. This pace of change demands breaking applications up into service-based applications, aka microservices.

The new normal

The execution environments for those services will change as well. Virtual machines, notwithstanding their many virtues, are too large for distributed code components. In addition, their lengthy instantiation times mean it's hard to respond quickly enough to erratic application loads. The solution to these issues is to move to a different execution environment: containers. There is huge interest in containers within enterprise IT organizations, but until their usage moves from developer workstations to production environments, those organizations will not be able to meet the code deployment and execution speeds needed for microservice-based applications.

For all but the largest and most sophisticated IT organizations, trying to write orchestration (or scheduling) for container-based microservice applications is much too challenging. Mainstream IT organizations will leverage a PaaS or container scheduling framework to manage their distributed applications. Again, these will be open source-based, because that will drive the fastest innovation and largest ecosystem for this critical application enabler.

The framework portion of this new application stack is simultaneously the most important and the most difficult decision IT groups will make over the next two years. Important because the capabilities of this portion dictate whether these groups will be able to meet company and market requirements for application richness and update frequency. Difficult because all of the contenders in this space are of low to moderate maturity.
Essentially, one has to bet on the outcome of a horse race while many of the entrants are still entering the starting gate.

And, of course, the tools can't solve the process problem. Absent a restructuring of process, adopting containers or a framework is like dropping a bigger engine into a car with flat tires. One can expect enormous disruption in IT organizations as they seek to blend roles and groups in an effort to streamline application lifecycles. Some employees will resist this trend, while others will embrace it. Transformation is one of the most difficult tasks for leaders, far harder than improving the performance of an existing but suboptimal organization. Again, the opinion of participants is unimportant; the expectation is that application lifecycles must accelerate, and any roadblocks will be removed.

Unlike previous changes in IT, which tended to change one part of the people/process/technology triad while leaving the other aspects undisturbed, this shift is occurring in all three domains at once, which means an awful lot of balls in the air. However, the ongoing digital shift in business practices means that change cannot be deferred; there is a palpable sense in the air that business as usual is no longer sufficient. The bottom line is that you can expect enormous attention to be focused on the application tooling and process layer as IT organizations seek to handicap the field, place their bets, and prepare their staff to deal with the outcome.
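To make the microservice shape discussed above a little more concrete, here is a minimal sketch of one such service: small, narrowly scoped, independently deployable, and exposing a health endpoint that a container scheduler could probe. It uses only the Python standard library; the service name, port, endpoints and SKU data are all hypothetical, chosen purely for illustration.

```python
# A minimal sketch of one "microservice": one small, independently
# deployable HTTP service with a narrow responsibility.
# All names (inventory, ports, SKUs) are illustrative assumptions.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class InventoryHandler(BaseHTTPRequestHandler):
    """Answers only two routes: /health and /stock/<sku>."""

    STOCK = {"sku-1001": 42, "sku-1002": 7}  # stand-in for a real datastore

    def do_GET(self):
        if self.path == "/health":
            # The kind of liveness endpoint a container scheduler probes.
            body = {"status": "ok"}
        elif self.path.startswith("/stock/"):
            sku = self.path.rsplit("/", 1)[-1]
            body = {"sku": sku, "count": self.STOCK.get(sku, 0)}
        else:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep the demo quiet

if __name__ == "__main__":
    # Run the service and make one sample request against it.
    server = HTTPServer(("127.0.0.1", 8080), InventoryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    with urllib.request.urlopen("http://127.0.0.1:8080/stock/sku-1001") as resp:
        print(resp.read().decode())
    server.shutdown()
```

The point is not the implementation but the shape: each service owns one concern, speaks a simple interface, and can be rebuilt, redeployed and scaled on its own – the property that containers and a scheduling framework are meant to exploit.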