TL;DR
Software development that supports software testing (usually, and wrongly, called test automation) is a handy skill that can help the testing effort. This post is about the very tricky business of metric manipulation.
In the post Racing against Time with Test Automation, Nick Karlsson explains how his team used programming (aka test automation) to meet the deadline and still deliver a product with the required quality.
His team used automation to mitigate the regression-testing risk. Regression testing checks that previously developed features still work as expected after a new software release. This is the most common use case for automation, but it is only the tip of the iceberg of what automation can do.
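To make the use case concrete, here is a minimal sketch of an automated regression check in pytest. The `apply_discount` function is hypothetical, standing in for a previously shipped feature; nothing here comes from Nick's project.

```python
# checkout.py -- stands in for feature code shipped in an earlier release
# (hypothetical example, not taken from the project in the article).
def apply_discount(order_total: float) -> float:
    """Apply the 10% discount introduced in a previous release."""
    return order_total * 0.9 if order_total >= 100 else order_total


# test_checkout.py -- regression checks run automatically on every build,
# so a new release that breaks the old feature fails fast.
def test_discount_still_applies_after_release():
    assert apply_discount(order_total=100) == 90


def test_no_discount_below_threshold():
    assert apply_discount(order_total=99) == 99
```

Running `pytest` on every build replaces a tester manually re-checking this feature after each release.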
At the end of the article, Nick states that his four-month project had more than 1000 test cases. I see two issues with this number:
- The number of test cases is not reasonable.
If you count unit tests as test cases, you will easily reach such a high number, but those tests should not be counted. A test case should always trace back to a requirement. Unit tests sit at a lower level and usually cannot be traced to any specification; a developer writes them to prove that the internal design of a module or class matches the developer's concept (see the first sketch after this list).
- Test case redundancy.
If testers really executed 1000 test cases on every cycle, those test cases were redundant: many of them tested the same features (see the second sketch after this list).
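First, a sketch of why unit tests do not belong in the test case count. The `_normalize_sku` helper is hypothetical; the point is that it is a private design detail no specification mentions, so a test of it cannot be traced to a requirement.

```python
# _normalize_sku is an internal helper (hypothetical). The requirements
# say nothing about SKU formatting; it exists only in the developer's design.
def _normalize_sku(raw: str) -> str:
    return raw.strip().upper().replace(" ", "-")


def test_normalize_sku_internal_format():
    # Proves a design decision (uppercase, dash-separated), which is
    # invisible to the specification -- useful, but not a "test case"
    # in the requirement-traced sense discussed above.
    assert _normalize_sku("  ab 123 ") == "AB-123"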
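```

Second, a sketch of what redundancy looks like and how to collapse it. The `is_valid_quantity` rule is invented for illustration: dozens of near-identical cases for one rule inflate the count, while a handful of distinct behaviors (boundaries and out-of-range values) give the same coverage.

```python
import pytest


def is_valid_quantity(qty: int) -> bool:
    """Hypothetical rule under test: order quantity must be 1-99."""
    return 1 <= qty <= 99


# Redundant style: test_qty_5, test_qty_7, test_qty_12, ... all exercise
# the same "valid mid-range quantity" behavior and add nothing but count.

# Consolidated: one parameterized test covers the distinct behaviors.
@pytest.mark.parametrize("qty,expected", [
    (0, False),    # just below the lower boundary
    (1, True),     # lower boundary
    (99, True),    # upper boundary
    (100, False),  # just above the upper boundary
])
def test_quantity_validation(qty, expected):
    assert is_valid_quantity(qty) == expected
```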
When your four-month project has more than 1000 test cases, you should revisit the design of those cases. That is the real project issue; the problem is not that testers take too long to execute them. The BBST Test Design course is an excellent place to start learning how to create effective test cases.