Note: This is the second in a series of three posts on the five stages of a DevOps evolution. If you're just starting out, check out the first post, Building a Strong DevOps Foundation.

Software teams have to be fast, especially when it comes to adapting to change. As performance expectations from the market and the business increase, DevOps has arisen as a prescription for improvement. However, teams need to know not just what success looks like but how to achieve it.

A survey of more than 3,000 technology professionals, the 2018 State of DevOps Report, co-authored by Puppet and Splunk, defines DevOps as an evolution comprising five stages. Our data indicates that these practices correlate with DevOps success, as measured by the CAMS model.

The Five Stages of a DevOps Evolution

1. Normalize the technology stack
2. Standardize and reduce variability
3. Expand DevOps practices
4. Automate infrastructure delivery
5. Provide self-service capabilities

In this post, we'll provide an overview of stages 1-3, exploring how to accelerate and expand your DevOps initiative. You can get more information on all five stages in the 2018 State of DevOps Report.

Stage 1: Normalize the Technology Stack

Automation is often believed to be the starting point for DevOps. And while it's true that automation underlies any successful DevOps initiative, it requires a lot of preparation. The simpler your tools and systems are, the easier they will be to automate down the road.

Normalizing your organization's tech stack (adopting a standard set of tools and reducing redundant technologies) makes management easier and enables future automation efforts. Normalization is achieved through two key components: adopting version control and standardizing operating systems.

Prioritizing Version Control

Adopting version control is the first step toward implementing continuous integration.
When application development teams adopt version control, they're able to produce deployable code more frequently. This is great for devs, but it puts added pressure on ops teams to deploy quickly while maintaining system stability and security. This tension is often a key driver for DevOps.

As development gets more distributed, teams have a greater need for version control and put both application code and app configuration files into their version control system. At some point, configuration data is separated from code to ensure that sensitive information is protected. This builds the foundation for automated deployment, allowing you to track who made what changes and to roll back changes if necessary.

Standardizing Operating Systems

In a pre-DevOps environment, it's common to find applications deployed across a variety of operating systems. Each one-off installation places a heavy burden on an IT team and increases the risk and impact of maintenance efforts, especially emergency maintenance. Reducing the number of outliers and unique OSes that need to be monitored and managed simplifies automation and helps create a shared pool of knowledge around a common tech stack.

Building on a standard set of technology contributes to success at multiple stages of the DevOps evolution. Start small by addressing technologies within a single team, and gradually expand to those that require cross-team buy-in.
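As a concrete illustration of reducing outliers, the first step is often just tallying what's actually running. Here's a minimal sketch; the host inventory is hypothetical, and in practice this data would come from your CMDB or configuration management tool:

```python
from collections import Counter

# Hypothetical inventory; in practice, pull this from your CMDB
# or configuration management tool's node data.
inventory = [
    {"host": "web-01", "os": "Ubuntu 20.04"},
    {"host": "web-02", "os": "Ubuntu 20.04"},
    {"host": "db-01", "os": "CentOS 7"},
    {"host": "legacy-01", "os": "Windows Server 2008"},
]

def os_outliers(hosts, threshold=2):
    """Return the OSes running on fewer than `threshold` hosts:
    the one-off installations worth consolidating first."""
    counts = Counter(h["os"] for h in hosts)
    return sorted(os_name for os_name, n in counts.items() if n < threshold)

print(os_outliers(inventory))  # ['CentOS 7', 'Windows Server 2008']
```

A report like this gives the standardization effort a prioritized starting list rather than a vague goal.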
Use proven technologies and reliable processes for what goes into production, and define clear guidelines for adding any new technology at a later date.

Stage 2: Standardize and Reduce Variability

By the time you reach Stage 2, your organization should have:

- Begun standardizing on a set of technologies
- Separated application configurations from code and placed them in version control
- Adopted a consistent process for infrastructure testing and sharing source code

The theme of standardization continues in Stage 2, and it's here that the reuse of technologies and patterns becomes important. The practices that define this stage (building on a standard set of technology and deploying on a single, standard operating system) allow teams to achieve more repeatable outcomes with less variance.

Build on a Standard Set of Technology

As enterprises grow, complexity increases. Teams must manage new applications, services, and technology stacks in addition to legacy applications and systems. All of this oversight forces teams to be reactive and leaves little time for innovation.

Standardizing patterns and components means teams no longer need to continually re-learn how different technologies operate, scale, fail, recover, and upgrade. Start by choosing foundational elements to normalize on, such as testing workflows and build and shipping patterns, with a focus on improving and optimizing processes that affect multiple applications.

Reduce Operating System Variability

As we discussed in Stage 1, adopting a single, standard operating system or a small set of OSes enables teams to move faster by saving time on patching, tuning, upgrading, and troubleshooting. Unfortunately, OS standardization isn't always straightforward.
Software applications with long lifecycles may not be compatible with newer operating systems, or specific patches may not work for a particular application.

As you're getting started, remember that less is more; even if it's not feasible to use a single OS, two is better than five. Within the operating systems you do run, reduce variability by normalizing compute resources. This will make troubleshooting and maintenance easier.

Stage 3: Expand DevOps Practices

Standardizing your tech stack and prioritizing version control enable collaboration between Dev and Ops. When these teams share tools, applications, and services, as well as knowledge, they can work better together. When you get to Stage 3, it's time to expand these early pockets of success across the organization.

Stage 3 is defined by two key practices:

- Individuals can do work without manual approval from outside the team.
- Deployment patterns for building apps and services are reused.

Reduce Bureaucracy

When the authority to make decisions is removed from the people who have the relevant information and are doing the actual work, productivity and efficiency suffer. The authors of "Accelerate: The Science of Lean Software and DevOps" studied software teams and concluded that "teams that required approval by an external body achieved lower performance." Our 2018 State of DevOps survey reinforces this finding.

While it's unrealistic (and irresponsible) to completely eliminate change oversight, a DevOps transformation is a good time to revisit existing approval processes. Ask yourself: are delays and wait times justified, or are they part of a bureaucratic legacy workflow? Identify the unnecessary bottlenecks and experiment with ways to eliminate them.

For example, you might consider giving your operations team the power to approve specific types of changes that are relatively low-impact.
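One way to make such a rule explicit is a small policy check in the deployment pipeline. This is only a sketch, and the change categories and thresholds are hypothetical, not taken from the report:

```python
# Hypothetical policy: ops can self-approve known low-impact change
# types; everything else still goes through the normal review process.
LOW_IMPACT_TYPES = {"config-tweak", "log-level", "scaling", "dns-ttl"}

def requires_external_approval(change_type: str, touches_production_data: bool) -> bool:
    """A change needs external review unless it is a known
    low-impact type that does not touch production data."""
    if touches_production_data:
        return True
    return change_type not in LOW_IMPACT_TYPES

print(requires_external_approval("log-level", False))      # False
print(requires_external_approval("schema-change", False))  # True
```

Codifying the policy this way also leaves an auditable record of which changes were self-approved and why, which keeps the oversight conversation grounded in data.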
This not only saves time and increases agility, but also helps build trust and gives team members a sense of ownership, two essential parts of a DevOps practice.

Reuse Deployment Patterns

If each team within your organization creates its own deployment patterns, agility is limited, and it's harder for developers and infrastructure engineers to move between teams. Developing reusable deployment patterns, however simple or complex, helps eliminate silos and reduces the need for one-off documentation and hand-holding.

When running several types of applications and systems, some deployment patterns may apply universally, while others, such as those for n-tier web apps or cloud-native services, are specific to certain families of applications. The more broadly patterns can be applied, the greater the benefits: an optimization applied to a deployment job or pipeline immediately benefits all the applications that use it.

Heading to Automation and Self-Serve

The 2018 State of DevOps Report is designed to guide you through the stages of a DevOps evolution, but every organization is different. We've found that some organizations take on Stage 3 before Stage 2, while others carry them out simultaneously. However, our research shows that both stages are essential to achieving success in Stage 4: automating infrastructure delivery.

To learn more about succeeding in Stage 4 and beyond, stay tuned for our next post, or download the 2018 State of DevOps Report for more information on each stage, plus the key findings and methodology behind the report.