In the beginning, there was the code and the coder who built it. Developers were responsible for everything. They crafted the logic and then pushed the buttons to keep it running on the server. That changed as teams expanded and the labor divided, with some team members staying with the code (devs) and others tending to the machines (ops).
These days, thanks to the cloud and the rise of microservices, software has become a constellation of dozens, even thousands, of components running on separate machines. Each component is technically independent, but all of them must work together, and ensuring that they do is best accomplished with automated scripts. Enter DevOps.
The DevOps team’s main task is to provide all high-level orchestration of these multi-faceted apps. They may not deal with the deep corners of the software’s architecture, but they keep the parts running smoothly.
Still, the role remains relatively new, with responsibilities that aren’t clearly defined or assigned and skillsets that are still evolving. Some DevOps pros straddle job descriptions, performing a mixture of programming and operations, but many teams are finding that keeping all the servers running smoothly is enough work on its own. Configuring them requires mind-numbing attention to detail, all while the programming team keeps changing the code, and with it, how everything needs to run.
As more organizations turn to DevOps in support of their digital transformations, it’s important to get a clear-eyed view. Here are a few hidden truths and widely held misconceptions about the emerging field of DevOps.
DevOps is not programming, but it is
Many managers think DevOps isn’t programming and they’re right. The jobs have diverged and much of the messy work of dealing with bytes and data structures is assigned to coders who live in a different abstract world. Strategically, it makes sense to relieve programmers from the responsibility of keeping everything running because their heads are lost deep in the thicket that is a modern stack.
But DevOps team members still have to write bits of code. They still need to think abstractly about hidden data structures. Merely keeping everything running requires endless command-line invocations that can usually be collected and simplified into shell scripts. Some coding purists might not classify high-level work like this as coding, even though it includes function calls, parameters, and variables, but the reality is that it draws on many of the same skills programmers use.
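Consider the repeated invocations behind a routine rollout, gathered into one script. This is a hypothetical sketch: the registry, service names, and the `DRY_RUN` switch are all invented here, but the shape — functions, parameters, variables — is what makes it programming.

```shell
#!/bin/sh
# A hypothetical rollout helper: the kind of repeated command-line work
# that gets collected into a shell script. Registry and names are made up.

# run: execute a command, or just print it when DRY_RUN=1.
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# rollout <service> <tag>: build, push, and deploy one container image.
rollout() {
  service="$1"
  tag="$2"
  image="registry.example.com/$service:$tag"

  run docker build -t "$image" .
  run docker push "$image"
  run kubectl set image "deployment/$service" "$service=$image"
  run kubectl rollout status "deployment/$service"
}

# Example: preview the commands without touching anything.
DRY_RUN=1
rollout billing v1.2.3
```

The dry-run switch is the kind of small design choice that separates a pile of pasted commands from a reusable tool.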
Herding programmers is work
Even if DevOps pros don’t write the code, they end up managing the programmers and that’s often just as much work. Each developer is creating something new and beautiful. Their code is art. Each wants to get his or her container into production right now so they can check it off the list. Does the code run smoothly? They think so. But will it all come crashing down? Ensuring the coders don’t mess things up is the heavy lifting of DevOps.
DevOps is slowly taking over
When software was monolithic, programmers had all of the control. Now that apps are typically broken into dozens or even hundreds of microservices, DevOps is in charge of how well they run. Yes, there are still architects and programmers who decide which services exist and how they should fit together, but DevOps pros control how they’re actually deployed, connected, and kept running, which is an increasingly important piece of the puzzle.
We don’t watch pennies
The cloud companies were smart when they priced their machines in pennies per hour. Who doesn’t have some spare change around? But the pennies add up as the number of cloud instances spins upward and the hours tick by. There are 720 hours in a 30-day month, so a fat machine that costs just $1 per hour runs $720 a month, or $8,760 over a year. Suddenly buying your own box starts to look cheap.
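The arithmetic is easy to sanity-check in a few lines of shell. The $1.00-per-hour rate below is the hypothetical one from above; swap in real rates and instance counts to estimate an actual bill.

```shell
# Back-of-the-envelope cloud cost math; the hourly rate is hypothetical.
rate_cents=100                 # $1.00 per hour, in cents to stay in integers
hours_per_month=$((24 * 30))   # 720 hours in a 30-day month
hours_per_year=$((24 * 365))   # 8,760 hours in a year

monthly_cents=$((hours_per_month * rate_cents))
yearly_cents=$((hours_per_year * rate_cents))

echo "one instance: \$$((monthly_cents / 100))/month, \$$((yearly_cents / 100))/year"
# one instance: $720/month, $8760/year
```

Multiply by a few dozen instances and the "spare change" pricing starts to look like a line item the CFO will notice.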
After getting some shocking bills, some teams are setting up DevOps auditors with the sole job of poking around in the mess of machines, looking for ways to save money. They scrutinize the decisions of the people on the line and start to say, “No.” They will count each fraction of a cent because they know the budget requires it.
There are only a few levers to boost performance
The work of managing the cloud is made harder by the fact that the DevOps team often has just a few levers to pull. Once the programmers commit the code and build the containers, the DevOps team’s job is to make them run. If they seem slow, they can try adding more virtual CPUs or more RAM. If things are still slow, they can add more machines to the pod to spread out the load. That’s about it.
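Those levers map onto just a few fields in a deployment manifest. A hypothetical Kubernetes fragment (the service name and every number here are invented for illustration) shows where they live:

```yaml
# Hypothetical Deployment fragment showing the main performance levers.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
spec:
  replicas: 6                # lever: more pods to spread out the load
  template:
    spec:
      containers:
        - name: checkout
          resources:
            requests:
              cpu: "1"
              memory: 2Gi
            limits:
              cpu: "2"       # lever: more virtual CPU per pod
              memory: 4Gi    # lever: more RAM per pod
```

Everything else — the algorithms, the queries, the data structures — is baked into the container before it ever reaches the DevOps team.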
It’s a demolition derby
One of the deepest issues is that the computers are always papering over their mistakes. I once inherited a collection of containers that were crashing every few hours. Maybe it was a failed database connection. Maybe it was a misconfigured parameter. The answer may have been deep in the log files, but I never found out. Kubernetes was kind enough to boot up another instance and the pod sailed on, answering queries and doing its work. It was a beautiful example of fail-safe architecture, even though it was a mess inside.
Sometimes underneath the surface, the containers and instances are crashing all over the place. As long as the users and the customers are getting work done, it’s often easier for everyone to look the other way and ignore all that virtual demolition.
Databases rule everything
We may fuss with all of the homegrown code and fiddle with AJAX this or CSS that, but in the end, all of the data finds a home in the database. The classic database remains the sun around which the code orbits. It is the single source of truth. If the team can keep it running and answering queries, that’s almost all of the job. Users can tolerate misaligned DIVs or strange new layouts, but not corrupted databases. I once audited the code for a team that was using the latest, greatest Node.js packages, relentlessly updating their stack to stay at the cutting edge. But the database was more than 10 years old. No one wanted to mess with that.
We know only so much about how the code is running
The level of instrumentation available today can be amazing. We can feel the data surge through the various pieces of software the way a sailor feels the wind. As the load on the pods ebbs and flows, we know when things are running correctly and when they’re straining. If we’re in charge of an ecommerce web application, we’ll be the first to know when the discounts kick in because the load for that corner of the app will spike.
But the instrumentation can tell us only so much. The numbers summarize the average strain and response time of each component, but they can’t tell us why. That’s left for the programmers who know what’s going on inside the components. They can isolate the bugs and find solutions.
Some business folks may wish for an all-powerful, omniscient tech staff that understands the entire stack from bottom to top. There are a few superhumans out there, but for many companies the job is too big and the lines of code are too many. It’s better to work on finding an easy way for DevOps and the programmers to collaborate.
It’s all a bit of a mystery
Computers may be completely logical machines where the code evolves in a predictable, deterministic way. Every bug happens for a reason. We could haul out the debuggers, scroll through the log files, and scrutinize the code but who has the time?
Just as it feels like 90 percent of tech support problems can be solved by power-cycling the device, much of DevOps involves doing the same thing. Oh sure, we use words like “containers” and “instances” and have endless dashboards to track what’s happening, but in the end it’s often faster and simpler to just move on and, as Iris DeMent suggested, let the mystery be.