Companies considering moving workloads to cloud environments five years ago questioned whether the economics of cloud were compelling enough. The bigger question at the time was whether those economics would force a tsunami of migration from legacy environments to the cloud. Would it create a huge industry, much like Y2K, of moving workloads from one environment to another very quickly? Or would it evolve more like the client-server movement, which unfolded over 5 to 10 years? It’s important to understand the cloud migration strategy occurring today.

We now know the cloud migration did not happen like Y2K. Enterprises judged the risk and investment of moving workloads as too great for the cost-savings returns. Of course, there are always laggards, or companies that choose not to adopt new technology, but enterprises now broadly accept both public and private cloud.

The strategy most companies adopt is to put new functionality into cloud environments, often public cloud. They do this by purchasing SaaS applications rather than traditional software, and by doing their new development in a Platform-as-a-Service (PaaS) cloud environment. This approach makes sense. They then build API or microservices layers that connect the legacy applications to the cloud applications.

When workloads do migrate from legacy to cloud, it is usually through replacement tools rather than by actually moving the workloads. Most companies implemented 60 to 80 percent of their current applications in the last seven years, and they continually reimplement applications as new software versions arrive with new functionality. When a company upgrades its ERP system, for example, it upgrades into a cloud version rather than a legacy version.

Even if a company spends only 20 percent of its IT budget on new applications, over time that results in a steady refresh of its application portfolio. Most organizations have exceptions – a small portion of applications that are 30 to 40 years old. These are highly stable, don’t require much new functionality, and don’t migrate to the cloud.

Many third-party service providers advise clients to reengineer their legacy environment and deploy it to the cloud to get the desired end-to-end application performance. Although some of this is happening, it’s typically very selective. Enterprises prefer to develop an API and microservices layer, which defers the need to redeploy and lets them move legacy applications only when the underlying functionality changes enough or when new versions become available in a SaaS or cloud form.

Another factor in today’s cloud migration strategy is that capital is limited and business units increasingly drive that spend. They want new functionality and see little or no gain in putting money, time and risk into duplicating existing functionality that already works into a new environment.

So instead of a substantial investment in a quick, Y2K-style movement, companies are making substantial investments in APIs and microservices, which allow hybrid environments (legacy and cloud combined) to work well together. That doesn’t mean companies won’t have to redevelop some applications in the cloud. But it buys them time to be selective about which ones to redevelop, and to tie the timing to a compelling cost-savings case or to the point where the hybrid environment no longer delivers the desired performance and a homogeneous one becomes necessary.

This strategy of investing in API and microservices layers to connect legacy applications to cloud environments is much less risky, and cheaper, than moving legacy workloads.
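The API-layer pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's actual product or API: a thin adapter (`OrderAPI`) wraps a stand-in legacy system (`LegacyOrderSystem`) and translates its dated record format into the clean JSON payload a cloud application would consume, so the legacy system itself never has to move.

```python
# Hypothetical sketch of an API layer bridging a legacy app to cloud consumers.
# All class and field names here are invented for illustration.
import json
from dataclasses import dataclass


class LegacyOrderSystem:
    """Stand-in for an on-premises application with a cryptic record format."""

    def fetch(self, order_id):
        # Legacy systems often return records with terse, uppercase keys.
        return {"ORD_NO": order_id, "CUST_NM": "ACME CORP", "AMT_CENTS": 125000}


@dataclass
class Order:
    """The clean representation the cloud/SaaS side consumes."""
    order_id: str
    customer: str
    amount_usd: float


class OrderAPI:
    """API/microservice layer: translates legacy records into modern payloads."""

    def __init__(self, legacy):
        self.legacy = legacy

    def get_order(self, order_id) -> str:
        rec = self.legacy.fetch(order_id)
        order = Order(
            order_id=str(rec["ORD_NO"]),
            customer=rec["CUST_NM"].title(),
            amount_usd=rec["AMT_CENTS"] / 100,
        )
        return json.dumps(order.__dict__)


api = OrderAPI(LegacyOrderSystem())
print(api.get_order("A-1001"))
# → {"order_id": "A-1001", "customer": "Acme Corp", "amount_usd": 1250.0}
```

The point of the design is that only the adapter knows the legacy format. When the legacy application is eventually replaced by a SaaS version, the adapter is swapped out and cloud consumers of the API are untouched.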