Among testers who claim to be context-driven, I hear a lot of statements against detailed test cases. In context-driven testing, context is king, so I will explain a situation that requires detailed test cases.
Paul Seaman wrote an excellent blog post: THE CASE AGAINST DETAILED TEST CASES. His argument is true for some contexts, but it does not work for my context.
What Is Detail?
In the article that Paul comments on, he could not find a definition of a detailed test case, so here is his own definition:
“… detailed test cases are those that comprise predefined sections, typically describing input actions, data inputs, and expected results. The input actions are typically broken into low-level detail and could be thought of as forming a string of instructions such as “do this, now do this, now do this, now input this, now click this and check that the output is equal to the expected output that is documented.”
In his post, Paul provides a context where there is no need for detailed test cases.
When Paul mentions clicking in the description of the test case, this gives me a hint that he could be referring to a web application with HTML UI elements. But what if there is a complex calculation between the click and the output message? What if the input to that calculation is a large file of commands, and the output is many files of commands? And what if, just by observing those files, you have no clue about the rule that connects input and output?
In that case, you need detailed test cases. They will explain the content of the input and output files and the rules that connect those files. I agree that there is no need to explain how to start the calculation.
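To illustrate what such a detailed test case could document, here is a minimal sketch. Everything in it is invented for illustration: the command format, the file contents, and the rule (every MOVE command produces a matching ACK, and STOP produces an END marker) are hypothetical stand-ins for whatever the real calculation does. The point is that the test case records the rule connecting input and output, not the click-by-click instructions.

```python
# Hypothetical detailed test case for a calculation that turns an input
# file of commands into output file content. The rule is documented in
# the test case itself, because it cannot be inferred just by looking
# at the files. The command set and the rule are invented for this sketch.

INPUT_COMMANDS = ["MOVE 10", "MOVE 20", "STOP"]

def expected_outputs(commands):
    """The documented rule: each MOVE produces an ACK with the same value;
    STOP closes the output with an END marker."""
    acks = [f"ACK {c.split()[1]}" for c in commands if c.startswith("MOVE")]
    return acks + ["END"]

def run_calculation(commands):
    # Stand-in for the real system under test, which would read the
    # input file and write the output files.
    acks = [f"ACK {c.split()[1]}" for c in commands if c.startswith("MOVE")]
    return acks + ["END"]

# The check: actual output must match the rule documented above.
actual = run_calculation(INPUT_COMMANDS)
assert actual == expected_outputs(INPUT_COMMANDS)
```

Here the detailed part is `expected_outputs`: it captures the otherwise invisible relationship between the files, which is exactly what a future reader of the test case needs.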
I asked Paul what he thinks about my context:
Hi Paul, great post!
After reading the linked article, my first question was addressed in your first heading. What is meant by detailed test cases?
What is the context of the application domains that you are referring to?
Let’s take the context of a telecommunication protocol, where one of the testing missions is to cover all the use cases of that protocol. You have documentation of communication parameters and their values, but no use case document. Is this context valid for writing detailed test cases (as you defined them)?
And Paul’s thoughtful answer:
Hi Karlo, thanks for the feedback.
To address your question, which I think is, “would I write detailed test cases for the scenario you provided,” my answer, in brief, is no.
You have told me that there is documentation about parameters and values. I’m going to assume that I also have some access to stakeholders to ask questions (simply because I can make that assumption). Perhaps I’m also working with a team of testers. At no point in my response am I considering “covering all the use cases.” My focus will be on checking functions we say will work in particular ways and then exploring to learn about ways in which the system might misbehave. I think this is a far more valuable focus than chasing the illusion of an “exhaustive use case list.”

As I understand your scenario, I see a number of ways I can explore the information on parameters and values. I could create a decision tree, I could use some pairwise testing, and I’m probably going to consider boundaries and other things such as parameter dependencies. It’s a lot of information, and I really want to keep it light. It seems to me that writing about what these relationships should do according to documentation is far less valuable than having some ideas about how and what to test and then adjusting these as my testing progresses and I learn more.

During all this I am compiling test notes, converting these into both oral and written reports to stakeholders, and sharing my learning with fellow testers (and anyone that might be interested). There is a complete absence of detailed test cases, but there is planning, there is direction, there is risk assessment, and there is a stream of information to stakeholders to inform them of risk via evidence-based reporting. I feel like I’m hitting valuable test mission targets here.

There is one circumstance where I would suggest using detailed test cases. That is when key stakeholders cannot be swayed from this as an artifact of testing. Of course, it would also be my preference not to engage in a testing role that demands detailed test cases.
I hope this answers your question.
Paul does not think that formal verification of the “happy use case flow” provides valuable input. I agree with his approach, with the addition of formal verification.
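Paul mentions pairwise testing over the documented parameters and values. As a sketch of what that can look like, here is a minimal greedy all-pairs generator. The protocol parameters and values below are invented for illustration, and a real effort would use a dedicated tool (such as Microsoft's PICT); this naive version enumerates the full product, so it only suits small models.

```python
from itertools import combinations, product

# Hypothetical protocol parameters; real names and values would come
# from the protocol documentation.
params = {
    "transport": ["TCP", "UDP"],
    "encoding": ["ASCII", "binary"],
    "retries": ["0", "3"],
}
names = list(params)

def uncovered_pairs(cases):
    """All value pairs (across two different parameters) not yet covered
    by the given test cases."""
    needed = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(params[a], params[b]):
            needed.add((i, va, j, vb))
    for case in cases:
        for i, j in combinations(range(len(names)), 2):
            needed.discard((i, case[i], j, case[j]))
    return needed

# Greedy construction: repeatedly pick the full combination that covers
# the most still-uncovered pairs, until every pair appears somewhere.
cases = []
while uncovered_pairs(cases):
    remaining = uncovered_pairs(cases)
    best = max(
        product(*params.values()),
        key=lambda c: sum(
            (i, c[i], j, c[j]) in remaining
            for i, j in combinations(range(len(names)), 2)
        ),
    )
    cases.append(best)
```

The result is a handful of test cases in which every pair of values across any two parameters appears at least once, typically far fewer than the full Cartesian product. That matches Paul's point: it keeps the coverage idea light without pretending to be an exhaustive use case list.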
Since I took the Rapid Software Testing (RST) course, I have access to the RST Slack channel; this is true for all students who have taken RST. I asked the following question:
Hi! Testing is not just verification (confirmation). But I think that it is also wrong to remove verification (validation) from test strategy. Thoughts? Thanks!
James Bach answered that formal verification based on the right specs and skills is useful in some situations. But we have to be careful, because formal verification rests on a lot of assumptions (e.g., that libraries will not fail, that OS API calls will not fail).
When you have questions, do not be afraid to reach out. Do not be scared to comment on other testers’ thoughts. It is okay to disagree with my thoughts. Apply for the Rapid Software Testing course and gain access to the RST Slack channel.