In the previous post, 100% Code Coverage Myths, we stated that complete code coverage does not mean complete testing. Here we extend that claim further: complete testing is not possible at all. This post is aligned with the Black Box Software Testing Foundations course (BBST) designed by Rebecca Fiedler, Cem Kaner, and James Bach.
We cannot achieve complete testing, but we might be able to achieve adequate testing.
Let’s explore the dictionary definition of adequate:
adequate | ˈadəkwət |
satisfactory or acceptable in quality or quantity: this office is perfectly adequate for my needs | adequate resources and funding | the law is adequate to deal with the problem.
Let’s examine the keywords:
- Might – because testing can fail in many ways: the tester is not skilled enough, testing is not a team responsibility, or testing is treated as just a phase rather than an ongoing activity.
- Satisfactory or acceptable. Based on the tester’s report, the customer decides whether the product is good enough for their acceptance criteria.
- Quality or quantity. Quality is value to some person who matters [Weinberg]. We cannot measure quality; we can only assess it [Bach].
Before we define complete testing, let’s try the opposite. What is incomplete testing?
We stop testing when we believe there are no remaining bugs, or when we run out of time. At that point, testing is finished, but not complete. What about the bugs we are not aware of (the unknown knowns and unknown unknowns)?
To have truly complete testing, we need to run all distinct tests.
Two tests are distinct if one would expose a bug that the other would miss. It is not obvious, but if you could run all distinct tests, you could be sure that there are no bugs left in the software.
Complete testing is not possible, because we cannot run all distinct tests. A software tester must be able to explain why this is so.
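One way to explain it is simple arithmetic. Even a trivial function taking two 32-bit integers has 2^64 possible input pairs, and exhaustive testing would have to cover them all. The sketch below (a hypothetical helper, not from the BBST material) estimates how long that would take at an optimistic rate of one million tests per second:

```python
def exhaustive_test_estimate(bits_per_arg: int, args: int,
                             tests_per_second: float) -> float:
    """Return the years needed to try every possible input combination."""
    total_inputs = 2 ** (bits_per_arg * args)  # every bit pattern is a distinct input
    seconds = total_inputs / tests_per_second
    return seconds / (365.25 * 24 * 3600)     # convert seconds to years

# A mere add(a, b) over two 32-bit integers:
years = exhaustive_test_estimate(bits_per_arg=32, args=2,
                                 tests_per_second=1_000_000)
print(f"{years:,.0f} years")  # on the order of hundreds of thousands of years
```

And this counts only the input space of a single pure function; real software adds state, timing, and environment, which multiply the number of distinct tests even further.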