Creating Synergy and Pride in Software Testing Departments

In this chapter from Judy McKay's Managing the Test People: A Guide to Practical Technical Management, you'll learn valuable skills in motivating QA teams.

Excerpted from Managing the Test People: A Guide to Practical Technical Management, by Judy McKay. (Rocky Nook, $39.95 US, $51.95 CA)

We Love Us, Why Doesn't Everyone Else?


It's time to be honest here. QA is often considered a necessary evil. This isn't exactly the description we were hoping for when we took the job. I bet you didn't go to college and look at the career center postings thinking, "Ah, Necessary Evil, that's what I want to be." On the other hand, the "necessary" part is a good thing. QA has to be recognized as necessary. Once you've hurdled that obstacle, you can fix the evil (unless of course you like being referred to as evil, in which case you might want to consider a different career—we don't need you cluttering up the world of QA).

Emphasize the Necessary

How do you make sure your department is considered necessary? By providing a tangible product. Remember, you may be competing for funding and headcount with development. If the VP of Engineering has money to spend, how will he decide? If he spends it on development, he'll get a cool new program that does stuff. If he spends it on QA, he'll get...what exactly? More bugs identified? Well, what good is that? You need to be sure your department is producing a tangible product that can be understood and described to your management. Did you already write the test plans? Great. You probably also keep track of the test cases and the bugs you find. But what does that mean to your management when they are thinking about spending money on you? Can you guarantee your boss that if he gives you two more people, you will find 500 more bugs and make the product safe for humanity? No. But the development manager can promise that two more people will produce two more cool programs that can be seen, demonstrated, and sold for revenue.

Determining ROI (Return on Investment) for QA

Demonstrating ROI has always been a quandary for QA, and particularly for QA management; in all the years I've done QA management work, it has been one of the most difficult aspects of the job. Fortunately, there is a good way to track and project what the QA group will accomplish, and how additional resources will affect the outcome. I'm a strong advocate of approaching quality assurance as a risk management function. Given any project, of any size, you first define the risks you are trying to mitigate. What are all the bad things that could happen if this software doesn't work? It's a scary thought, isn't it? But that's where you start.

Clarify your goals and contributions.

For example, what happens if the software will not install? That's bad: a high-priority risk that must be mitigated. Is it something you can catch in testing? Yes. Can you build a test case, or a suite of test cases, to check for this problem? Yes. By creating a risk analysis and a mitigation strategy (testing and quality assurance), you can present a picture that management will understand. You can offer a list of all the risks inherent in the software, a list of everything you can do to mitigate those risks, and an estimate of how long each mitigation will take to carry out. Mitigation can mean executing test cases, participating in design reviews, or even verifying the documentation, and each action requires time and resources. If your time and resources are limited, you can draw an accurate picture of how much risk mitigation you can offer within them.
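As a minimal illustration of the arithmetic behind this approach, here is a short Python sketch. Everything in it is hypothetical rather than from the book: the risks, the likelihood-times-impact scoring, the hour estimates, and the simple highest-priority-first selection are just one way to project how much mitigation a given budget of hours can buy.

from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int        # 1 (rare) to 5 (almost certain) -- hypothetical scale
    impact: int            # 1 (cosmetic) to 5 (catastrophic)
    test_hours: float      # estimated effort to run the mitigating test cases

    @property
    def priority(self) -> int:
        # A common risk score: likelihood times impact.
        return self.likelihood * self.impact

# Illustrative risks; in practice these come from your risk analysis.
risks = [
    Risk("Installer fails on a clean machine", 3, 5, 8.0),
    Risk("Data loss on unexpected shutdown", 2, 5, 16.0),
    Risk("Report totals mis-rounded", 4, 2, 4.0),
    Risk("Help text contains typos", 5, 1, 2.0),
]

budget_hours = 20.0  # the time and resources you actually have

# Spend the limited budget on the highest-priority risks first.
plan, spent = [], 0.0
for risk in sorted(risks, key=lambda r: r.priority, reverse=True):
    if spent + risk.test_hours <= budget_hours:
        plan.append(risk)
        spent += risk.test_hours

covered = sum(r.priority for r in plan)
total = sum(r.priority for r in risks)
print(f"Mitigating {covered} of {total} risk points ({covered / total:.0%}) "
      f"in {spent:g} of {budget_hours:g} hours")
for r in plan:
    print(f"  [{r.priority:>2}] {r.description}")

With a projection like this in hand, the answer to "what do two more people buy?" stops being "more bugs" and becomes "this many more hours, and therefore these specific risks mitigated."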

It is important to emphasize that your department's goal is not to find and report bugs, but rather to provide a risk management system that will help deliver a higher quality product to your customers. For more information regarding how to build and maintain this approach, see Rex Black's book, Managing the Testing Process.

But They Still Think We're Evil!

We will assume you've established that your department is a necessary cog in the corporate machine. Now we have to dispel the evil aura that sometimes surrounds the QA group. Our industry is not immune to the occasional bad apple; I personally have met some truly obnoxious QA people with whom I had absolutely no desire to work, and I pity their developers. In the QA world, more than in most technical fields, diplomacy is a requirement. As discussed in the section on interviewing and hiring, you have to start with people who can do the job and who can work effectively with others. Still, even with a group of technically sound and diplomatic folks, there is sometimes a predisposition among developers to judge QA as evil.

Problem Solver or Problem Causer?

The good news is that there is one thing you personally can do to help improve your department's reputation. You must always be perceived as part of the team that will solve the project problems. This isn't an easy role, because you are often in the position of pointing the project problems out in the first place. Everyone in management knows it is better to be presented with a problem and a solution, rather than just a problem and shrugged shoulders. So, think about it. Let's say the schedule is in the tank. What can you do?

Don't take ownership of a problem that isn't yours.

First of all, meet with the development manager and see if he has any ideas. He may be the reason your project is in trouble, so perhaps he has thought of ways to get it back out of trouble. If that doesn't work, get out your carefully prepared risk analysis. You were planning on testing 75 percent of the high-risk items. Now you may only have time for 50 percent. Do you simply move the cutoff line up the chart and get back to work? Probably not. One of the common mistakes we make in QA management is taking ownership of a problem that isn't ours. So, now our schedule is in trouble, but not as a result of testing issues. What do we do next?

This isn't a decision you can make. It is time to reconvene the project group (including project management and development), and get an opinion. This is objective data, so you can treat it as such. "Here's the plan, here's where we are. We could test just to this point, but I'm worried that we'll leave these important areas untouched." It may be that you can reorganize the priorities, and get a wider range of testing done at a higher level. This might be safer in the long run, depending upon the product. The actual approach you settle on will vary by project. The important thing is that you're working integrally as a part of the project team to find a solution to the problem. It's not just your problem; it is the team's problem. By getting the entire team involved, ownership becomes shared (and, negatively speaking, the blame is also shared), proper exposure is given, and a more creative solution may be possible.

Unrealistic Expectations

If expectations are unrealistic, who set them?

As a manager or lead, you may find yourself defending your group against someone's unrealistic expectations. That's never a comfortable position, but if you find yourself in it, you need to figure out how those unreasonable expectations got set in the first place. Did you fail to do adequate project preparation, making clear how the testing would be executed? Did they expect the performance problems to be found at the beginning of testing, before you had adequate functionality to run the load tests? Anytime you find yourself on the defensive, you should be able to point to documentation that was previously presented that explains your position. If you don't have that documentation, maybe you didn't do your job in the project planning stages. Now you know what to improve upon next time.

The same applies if you find yourself giving anything but an objective response with supporting data. The only reason to be giving an emotional or off-the-cuff response to project issues is that you didn't do the planning and gather the documentation you needed to support yourself. Have you ever been asked how you know a product is ready to ship? The correct answer is not, "Because it feels right." The correct answer is demonstrated in charts and graphs showing the risks that were mitigated by the successful execution of the specified test cases. Since (after reading this) you will be doing excellent planning and documenting, remember to regularly publish your progress documents so issues can be raised early.

Presenting Your Information

I worked with a QA manager who had been backed into a corner by a project that was on fire. The development manager had been telling the executive staff that the project was on schedule, and the QA manager had not wanted to contradict him in a meeting, but clearly there were still glitches in the software. After much discussion with the QA manager, it became apparent that the software was indeed being delivered on schedule, but not in an order that allowed any functional testing to take place. This was a huge problem. The development manager could honestly say that he had delivered 90 percent of the software to QA, but the vast majority of it was not testable, because the missing 10 percent provided the communication between the UI (user interface) and the backend database processes.

We worked out a way to present the data clearly and accurately to our management, and got the development manager to agree that it was an accurate representation. We took the development manager's original slide that showed the percent of software delivered by subsystem, and added a column indicating how much of each subsystem was testable, and another column indicating how much had actually been tested. The chart looked like the following:

Bad news

Subsystem      % Delivered to QA     % Testable     % Tested
SubsystemA            90                  25            20
SubsystemB            85                  30            25
SubsystemC            95                  10             5
SubsystemD           100                  15             0
Average               92.5                20            12.5
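The roll-up in a chart like this is simple arithmetic, and it pays to generate it directly from your tracking data rather than by hand. The brief Python sketch below reproduces the rows and averages above; the figures come from the table, while the dictionary layout and column formatting are purely illustrative.

# Figures copied from the chart above; the data layout is illustrative.
rows = {
    # subsystem:  (% delivered to QA, % testable, % tested)
    "SubsystemA": (90, 25, 20),
    "SubsystemB": (85, 30, 25),
    "SubsystemC": (95, 10, 5),
    "SubsystemD": (100, 15, 0),
}

print(f"{'Subsystem':<12}{'% Delivered':>14}{'% Testable':>12}{'% Tested':>10}")
for name, (delivered, testable, tested) in rows.items():
    print(f"{name:<12}{delivered:>14}{testable:>12}{tested:>10}")

# Simple (unweighted) averages, matching the chart: 92.5, 20, 12.5.
averages = [sum(column) / len(rows) for column in zip(*rows.values())]
print(f"{'Average':<12}{averages[0]:>14g}{averages[1]:>12g}{averages[2]:>10g}")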

This gave quite a different picture to management—not a happy picture, but one they could clearly understand, which was our goal. We had the data and were able to present it objectively. Now everyone understood that we had a problem project, and we could work together on a solution. It was also vitally important that the development manager agreed with our data. When we presented this information, he was the first one to be questioned about its veracity. Fortunately, he was honest. He was also an excellent developer and manager. We were lucky.

What if he hadn't been willing to share the blame and acknowledge the problem? We had documentation for that too. We were prepared to show the detailed risk analysis, which showed the prioritization of our test cases. In our risk analysis, we had highlighted which risks we had been able to mitigate—a pitifully small number. We also had the list of all the necessary test cases and which ones we could actually execute. Knowing that this would bore our non-technical executive team, we picked out a few selected cases that showed the glaring lack of functionality. All of this documentation supported our information (which, by the way, was correct and unembellished).

Present a united front with development.

These kinds of situations highlight the importance of working with the development team and the development manager right from the start of the project. If you don't have a solid relationship built on honesty, this kind of crisis will escalate. If you present your data and the development team disputes it, you can haul out all your backup information (as we were prepared to do), but management is still left with the uncomfortable impression of internal quarrels and unnecessary impediments to getting the work done. If development is favored in your work environment, you are likely to take the blame for the failure to "work together."

This is an unfortunate reality in many organizations. The only way to fix it is to build a positive relationship from the beginning, so you are always working together on the project with the developers. This is easy to say, but hard to do. Unfortunately, you can be technically sound and absolutely correct with all your information, yet still find yourself in one of these situations where the development group has managed to cloud the water with enough squid ink that no one is sure who is at fault. Uh-oh. It is best not to infuriate the squid. Work with him, and make him work with you throughout the project.

Job Satisfaction

That's right, we're cool.

But sometimes, even with all the correct planning and documentation, a product ships before it should. I worked at one company that was completely driven by hardware schedules. When the hardware was ready, the accompanying software shipped, regardless of its quality. And, to make matters worse, my division only did the initial software release; another company did all subsequent maintenance releases, so we never got to see any product improvements over time.

Pride and Ownership
