I laughed. Even though I had only been at Microsoft a few months, I knew that there was no such thing as somebody approving my spec. Hell, nobody had time to read my spec, let alone approve it. The programmers were bugging me every day to get them more pages so they could write more code.—Joel Spolsky
The story, taken from Spolsky's experience as a program manager at Microsoft, does more than illustrate the point of employee empowerment, of the "single wringable neck" for one product. It also illustrates the idea of a fluid set of requirements: An exact feature set can change over time. The Agile Manifesto reflects these values, emphasizing "responding to change over following a plan" and "working software over comprehensive documentation."
This idea is not universal. Plenty of medical and physical device companies get but one chance to install their product; for them, a specification continues to be an important part of the process. For software delivered in increments, though, the idea of a comprehensive requirements document with formal signoff can seem downright archaic, a relic of the previous century.
Today, requirements are often fluid, and the signoff decision a memory. The next step to fluid software is to eliminate the ship/no-ship "approval process." And it's happening right now, often in one of three ways: continuous deployment, team-wide consensus and testers serving as de facto staff officers.
1. Continuous Deployment: Push Code to Production With Each Commit
Agile compresses the release timeline from months to weeks. Continuous deployment shrinks it further by asking, "What would it take to release to production with every commit?" It's being used by real companies, including Flickr, Twitter, IMVU and Etsy.
Moving to continuous deployment means more than hooking up a build system and finding ways to upgrade without rebooting the server. It means the technical team needs to have code that is near production-ready with every commit.
Last year I went to Brooklyn to learn about continuous deployment at Etsy, and I was amazed at the takeaways:
- Most code is rolled to production "dark" (not executing), then activated over time with the use of configuration flags.
- The company invests heavily in monitoring and branching infrastructure in order to catch problems quickly.
- The tools group makes up a large percentage of the developer team and has a great deal of infrastructure to sense, fix and push fixes to production quickly.
- Since Etsy is PCI compliant, code involving credit cards, money and customer data goes through a more formal process.
Timothy Western, a tester at Rackspace, says deploying features "dark" decouples deployment from activation, letting the team defer the decision on when to turn the feature on—and to continue to debate what it will be or how it will work. For example, the team could move to a different database and deploy code to write to that database but not read from it, all while keeping the old code around. This lets the company do performance, even functional, testing in production with much less risk.
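The dual-write pattern Western describes can be sketched in a few lines. This is a minimal illustration, not Etsy's actual implementation; the flag names, data stores and `save_order`/`load_order` functions are all hypothetical.

```python
# Hypothetical feature flags: code ships "dark" (flags off) and is
# activated later by flipping configuration, not by redeploying.
FLAGS = {
    "write_to_new_db": False,   # dual-write path, off at deploy time
    "read_from_new_db": False,  # read path stays on the old store for now
}

old_db = {}  # stand-ins for the old and new data stores
new_db = {}

def save_order(order_id, order):
    old_db[order_id] = order          # old path keeps working unchanged
    if FLAGS["write_to_new_db"]:
        new_db[order_id] = order      # dark code: dormant until flagged on

def load_order(order_id):
    if FLAGS["read_from_new_db"]:
        return new_db[order_id]
    return old_db[order_id]

# Later, activate the dark write path with a config change, no redeploy:
FLAGS["write_to_new_db"] = True
save_order(42, {"item": "scarf"})
```

Because reads still come from the old store, the new database can be load-tested and verified in production while the flag guards customers from any defects in the new path.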
Responsible, reasonable engineers have dropped the tester-as-gatekeeper metaphor and are pushing code to production with every commit. Doing that requires a serious investment in infrastructure and discipline that many teams are unwilling, or unable, to make. That brings us back to testers as gatekeepers—only it turns out that even the test community rejects that approach.
2. The Software Tester as Tester, Not Gatekeeper
Bret Pettichord's 2007 Schools of Software Testing talk split testers into different groups. According to Pettichord, the "quality" school views testing the way manufacturers view mass inspection: too expensive and too late. Instead of quality control, the quality school tends to focus on reviews, inspections, "getting the requirements right" and making sure a "quality process" is followed in order to ensure that quality software is delivered. (Pettichord himself aligns with the "context-driven" school, which views testing as valuable and essential work that's unlikely to be eliminated or completely automated away.)
Meanwhile, Pettichord's 2002 article, Don't Become the Quality Police, argued that the quality-police role is confrontational. This degrades relationships among programmers and other technical staff and hurts the project.
Instead, Pettichord suggests that testers test—that is, provide information about the status of the software to decision makers. It then falls to the customer, or business sponsor, to make the ship/no-ship decision. The article served as a breath of fresh air and advanced the conversation about what testers do and how they do it.
Ten years later, as a tester on an agile team, I was faced with a different problem. We maintained 74 applications and had between four and 12 deploys to production in a given week. The old-fashioned techniques—which often involved getting key players into a war room, triaging bugs and debating risks—simply would not scale to deploys at that pace.
Our leaders and executives preferred to delegate the ship decision, to have the whole team make the decision and to judge the team by the outcome of those decisions. Or, to rephrase what Spolsky wrote, management was too busy feeding us a spec to get into the details of approving a release. Couldn't the technical team just figure it out?
I believe it can.
3. Entire Agile Team Serves as 'Process Police'
As mentioned, micro-releases call for a different strategy than classic releases. One way to handle the ship decision is to delegate it to the testers. When I've been in that position, my goal has been to make the same decision the product owner would make if he had the time and attention to devote to the issue. This is called the general staff concept.
Anna Royzman, a test manager at Liquidnet, puts it this way: "As a QA lead and tester on an agile team, I don't consider myself process police; the whole team is. The ship decisions heavily depend on the information that testers provide. I, however, am responsible for making decisions on whether there was sufficient testing in order to make ship/no ship call. I use various test methodologies to decide when the testing is 'good enough.' It depends mainly on the context: scope of changes, risks, how well developers understand the system and features required by customer and so on."
The "whole team" is another core concept of agile software development. A second, more popular way for agile teams to make the ship decision is by consensus—a vote of the participants in the story or the team room. This vote can be as simple as thumbs up/thumbs down. The team can also decide, by consensus, what kind of vote is needed to move to production—anything from simple majority to unanimous agreement.
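The mechanics of such a vote are trivial to state precisely, which is part of their appeal. The sketch below is purely illustrative; the function name and voting rules are hypothetical, standing in for whatever thresholds a team agrees on.

```python
def ship_vote_passes(votes, rule="majority"):
    """Decide a ship/no-ship vote.

    votes: list of booleans, one thumbs up (True) or down (False) per voter.
    rule:  "majority" (more than half say yes) or "unanimous" (everyone does).
    """
    yes = sum(votes)  # True counts as 1, False as 0
    if rule == "unanimous":
        return yes == len(votes)
    return yes > len(votes) / 2

votes = [True, True, True, False]
ship_vote_passes(votes)               # simple majority: 3 of 4 carry it
ship_vote_passes(votes, "unanimous")  # one thumbs-down blocks the release
```

The interesting decision isn't the arithmetic, of course; it's the team agreeing up front which rule applies, so nobody relitigates the threshold at ship time.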
Give Your Agile Team Something to Talk About
James Bach, co-author of Lessons Learned in Software Testing, describes his approach to the release decision this way: "I want to create a situation whereby, from the perspective of any foreseeable future world, our decision to release the product will appear to have been wise, or at least reasonable."
As economic and social conditions continue to compress release schedules, each team will need to find ways to keep that decision wise, or at least reasonable. We just looked at three options: continuous deployment (with tools), consensus and the tester as a staff officer.
What's the right decision for your team? That certainly sounds like something to talk about.
Matthew Heusser is a consultant and writer based in West Michigan. You can follow Matt on Twitter @mheusser, contact him by email or visit the website of his company, Excelon Development. Follow everything from CIO.com on Twitter @CIOonline, on Facebook, and on Google+.