This post is about what I learned from Listen Beyond The Pass And Fails, a talk presented by Lena Wiberg at the Online Testing Conference 2020. Lena stated that a perfect test is a test that never fails. I will give my view on why there is no such thing as a perfect test.
Lena is an AST director, same as James Thomas. In this talk, she told the story of how she started her testing work in a new job. At that job, there was a test suite that ran every night; the run took several hours, and nobody paid attention to the failed tests. She set out on a quest to investigate those test results.
She identified two problems:
- analyzing why tests failed
- the test suite's run time.
She started her investigation by looking for test failure patterns and trends, first by asking questions to close the communication gap in the triangle of:
manager => developer => tester => manager
She had historical test run data at her disposal.
You should always start with this. For example, I had a client that kept only the last five Jenkins workspaces to save on disk space. The problem was that test runs were triggered on every merge to master, and there were at least 10 merges a day, so the history was rotated away within a day. There was no historical data to analyze.
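A lightweight way to keep history, regardless of how many CI workspaces are retained, is to copy each run's report file into a timestamped archive as a post-build step. This is a minimal Python sketch; the report name and directory layout are illustrative assumptions, not the client's actual setup:

```python
import shutil
import time
from pathlib import Path


def archive_results(report: Path, archive_dir: Path) -> Path:
    """Copy a test report into a timestamped archive directory.

    The file naming scheme here is a hypothetical example, not a
    specific CI server's convention.
    """
    archive_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target = archive_dir / f"{stamp}-{report.name}"
    shutil.copy2(report, target)
    return target
```

With an archive like this, the workspace cleanup policy no longer matters: the trend data survives outside Jenkins.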
What was the developer’s knowledge of software testing?
By asking that question, Lena wanted to gauge the quality of the developers' test code.
She quickly identified two patterns:
- the test fails in the test environment but passes on the developer's laptop
- concurrency issues when test data was locked by another test user.
In UI tests, errors were related to unexpected pop-up windows.
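A common mitigation for the data-locking pattern above is to never share test data between concurrent runs: each run creates its own records under a unique name, so two test users cannot lock the same record. A minimal sketch; the naming scheme is a hypothetical example, not Lena's setup:

```python
import uuid


def unique_test_user(base_name: str) -> str:
    """Return a user name that is unique per test run, so two
    concurrent runs never compete for the same test record.

    Appending a random suffix is one simple scheme; any per-run
    identifier (build number, timestamp) works as well.
    """
    return f"{base_name}-{uuid.uuid4().hex[:8]}"
```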
It was evident that the test suite had massive technical debt. Lena used the following heuristic to identify test suite error patterns:
If it occurred three times, it is a pattern.
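That heuristic is easy to automate once you have historical run data: group failures by their error signature and flag any signature seen three or more times. A sketch, assuming failures have already been reduced to short signature strings (that preprocessing step is not shown):

```python
from collections import Counter


def find_patterns(failure_signatures, threshold=3):
    """Return error signatures that occurred at least `threshold`
    times -- per the heuristic, three occurrences make a pattern."""
    counts = Counter(failure_signatures)
    return {sig: n for sig, n in counts.items() if n >= threshold}
```

For example, a history of `["timeout", "popup", "timeout", "locked", "timeout", "popup"]` would flag only `timeout` as a pattern with the default threshold.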
There was also a pattern of never-failing tests. The question was: did those tests have any value for the team? João Proença has a great talk on this topic. Lena's advice is to ask the following questions to determine their value:
- Do those tests take a lot of resources?
- Do they check an area of the product that never changes?
If the answer is yes, you should remove those tests. If that area starts changing again, bring them back. Agile is about fast feedback on developers' work. With TDD, developers get instant feedback. Testers provide release feedback to product owners using scenario, exploratory, and black-box testing. These two feedback loops should operate together; doing so maximizes testing value for the team.
Operations and hardware also caused test suite failures: firewall reconfigurations, various patches, scheduled backups, and interfaces with external systems. External systems should be handled in TDD using mocks, or you could emulate them using containers.
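A sketch of how an external system can be replaced by a mock in a unit test, using Python's standard `unittest.mock`; the payment gateway client here is a hypothetical example, not from the talk:

```python
from unittest.mock import Mock


class Checkout:
    """Order checkout that depends on an external payment gateway."""

    def __init__(self, gateway):
        self.gateway = gateway  # injected, so tests can pass a mock

    def pay(self, amount):
        response = self.gateway.charge(amount)
        return response["status"] == "ok"


def test_pay_succeeds():
    # The real gateway is replaced by a mock, so the test never
    # touches the network and cannot fail because the external
    # system is down, patched, or mid-backup.
    gateway = Mock()
    gateway.charge.return_value = {"status": "ok"}
    assert Checkout(gateway).pay(100) is True
    gateway.charge.assert_called_once_with(100)
```

The key design choice is dependency injection: because `Checkout` receives its gateway instead of constructing it, the test controls the boundary completely.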
In the end, Lena repeated the clumsy statement that a perfect test is a test that never fails. I think that, since it is not possible to test everything, there is no such thing as a perfect test.