As an automation architect at a consulting firm, I see a lot of clients fixing or replacing existing automation efforts. Many of these failed or troubled projects share the same characteristics, chief among them that they tried to have non-technical or lightly technical staff create their automation. All too often this means record & playback.
At first glance, record & playback looks like a great option. It lets your testers create automation without needing to know how to code. The tools are typically easy to use, and you can start seeing results on day one. Given these benefits, it is no wonder so many projects fall into the record & playback trap.
What’s wrong with record & playback
1. Very high maintenance cost
These tools often store procedural steps or generate procedural code. Procedural tests are a problem because even minor changes to the application can require all of your tests to be updated or rerecorded. This largely defeats the purpose of having automation in the first place. In many cases the cost of maintenance outweighs any realized value.
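To illustrate the maintenance problem: a recorded script repeats the same raw locators in every test, so one UI change forces every recording to be redone, while hand-written automation can centralize those details. Here is a minimal sketch of that idea; the `LoginPage` class, the locators, and the `FakeDriver` stand-in are all hypothetical, not taken from any particular tool:

```python
class FakeDriver:
    """Stand-in for a real browser driver, just enough to run the sketch."""
    def __init__(self):
        self.fields = {}

    def type(self, locator, text):
        self.fields[locator] = text

    def click(self, locator):
        self.fields["last_click"] = locator


class LoginPage:
    # If the UI changes, only these locators need updating --
    # every test that logs in picks up the fix automatically.
    USER_FIELD = "id=username"
    PASS_FIELD = "id=password"
    SUBMIT_BTN = "id=login-button"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USER_FIELD, user)
        self.driver.type(self.PASS_FIELD, password)
        self.driver.click(self.SUBMIT_BTN)


driver = FakeDriver()
LoginPage(driver).login("alice", "s3cret")
print(driver.fields["last_click"])  # id=login-button
```

A recorded script, by contrast, would bake `id=username` and friends into every single test, and a rename of that field would invalidate all of them at once.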
2. Limited test coverage
The main thing to understand about record & playback tools is that they typically follow the exact steps you recorded, no more, no less. That means these tests typically do little more than basic navigation testing. Navigation testing is important, but navigation testing alone is pretty low value automation. You are also typically limited to just testing against the user interface.
3. Poor understanding of the tools
Most testers have an incomplete understanding of what exactly these tools are doing. This can lead to huge gaps in your test coverage. I have found that testers who use these tools often assume the tools do things, like verifying that a page actually loaded, that they are not in fact doing.
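The page-load assumption is worth spelling out: a recording typically just performs the next step, while a hand-written test can fail loudly when the page never arrived. A hypothetical sketch of the difference, using a plain dictionary in place of a real application:

```python
# "pages" stands in for an application; a missing key is a broken page.

def recorded_flow(pages, link):
    """Mimics a recording: follow the link, no verification at all."""
    return pages.get(link, "")

def verified_flow(pages, link, expected_title):
    """Hand-written version: fail loudly if the page did not load."""
    body = pages.get(link, "")
    assert expected_title in body, f"page for {link} did not load"
    return body

pages = {"/orders": "<title>Orders</title>"}

verified_flow(pages, "/orders", "Orders")   # working link: both approaches pass

recorded_flow(pages, "/billing")            # broken link: silently returns "" -- no failure
try:
    verified_flow(pages, "/billing", "Billing")
    caught = False
except AssertionError:
    caught = True                           # broken link: explicit check catches it
print(caught)  # True
```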
4. Poor integration
These tools by and large don't integrate well with your SDLC process. They typically have their own interface and test runners. This often means you need to manually kick off tests. It can also mean tests need exclusive control over the machine they are running on, so the tester running them can't do much else while the tests run. Once you do have test results, they are often in a flat file or "fancy" report format specific to that tool. These custom formats make uploading the results to your test management or build tool challenging.
5. Limited features
Remote execution, parallelization, configuration, data driving and test management integration are just some of the features that you can leverage to create very robust and cost effective automation. Sadly, most record & playback tools are missing most (or all) of these features.
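Of these features, data driving is the easiest to show: one test body runs once per data row, so adding coverage is just adding data. A recorded script would instead hard-code a single set of values. A minimal, hypothetical sketch (`checkout_total` stands in for driving a real UI flow):

```python
def checkout_total(quantity, unit_price, tax_rate):
    """The behavior under test (stand-in for a real UI checkout flow)."""
    return round(quantity * unit_price * (1 + tax_rate), 2)

# Each row is a separate test case: (quantity, unit_price, tax_rate, expected).
rows = [
    (1, 10.00, 0.00, 10.00),
    (3, 10.00, 0.10, 33.00),
    (0, 99.99, 0.25, 0.00),
]

results = []
for qty, price, tax, expected in rows:
    actual = checkout_total(qty, price, tax)
    results.append(actual == expected)

print(all(results))  # True
```

Test frameworks offer this out of the box (for example, parameterized tests), which is exactly the kind of capability most record & playback tools lack.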
6. High price
Feature rich options are often expensive. Many require annual renewals and are tied to specific individuals. Along with being expensive they often require higher technical skill to use them effectively, largely defeating the purpose of going with a record & playback tool. There are free options, but they typically have very limited features.
7. Locked in
When you go with a record & playback tool, it can be very hard to switch to another tool later. Switching typically means starting from scratch. Given the level of effort required to switch, I often see teams holding on to their old record & playback tools and tests far longer than they should.
When may record & playback be a good option
1. Learning tool
Many of the paid tools allow you to do both record & playback and coding. It can be incredibly useful to record your steps and see what gets generated. Reviewing these recorded steps can give you a real example of how the underlying automation framework can be leveraged from code.
2. Load testing
Many load testing tools use record & playback under the hood. Testers record test scenarios to capture the requests and responses between the client and server, then alter and data-drive these recorded steps as needed to create a suite of performance web tests. At a very simple level, these tools just capture your network traffic and play it back. Without record & playback, creating load test scenarios can be impractical.
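The capture-then-data-drive pattern can be sketched in a few lines. This is a hypothetical illustration, not any real load tool's API: the "recording" is a pair of request templates, and replaying expands them once per data row so each virtual user sends distinct traffic:

```python
# Hypothetical recording: request templates captured from one manual session,
# with the session-specific value turned into a {user} placeholder.
recorded = [
    "POST /login user={user}",
    "GET /account/{user}/summary",
]

def replay(template_requests, data_rows):
    """Expand the recorded templates once per data row (one row per virtual user)."""
    return [req.format(**row) for row in data_rows for req in template_requests]

users = [{"user": "alice"}, {"user": "bob"}]
traffic = replay(recorded, users)
print(len(traffic))  # 4
```

Real load tools add timing, concurrency, and response validation on top, but the core workflow is the same: record once, substitute data, play back at scale.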
3. Simple proof of concept
These tools can be used to see if an application can be automated. You can also use the generated scripts or steps to get a sense of where your pain points would be. There are many examples where a tool can be used directly from code or via record & playback. In these cases, it is typically safe to assume the code version can do at least as much as the record & playback implementation.
This article is published as part of the IDG Contributor Network.