TL;DR
This week’s testing from the trenches story explains our test strategy for software that generates commands for another piece of software.
Test Strategy For Generated Commands
Imagine a complicated system that generates many commands for System A; its input is a set of files that describe System A’s configuration. To create a test strategy, you first need to learn the application under test. We learn this application by changing System A configuration files (the input of the application under test) and observing the commands it generates. Our first strategy is to use minimal System A configuration files, so the number of generated commands stays small enough to analyze.
We wrote a document with test cases (yes, a test case document!), but instead of words we mostly used pictures (diagrams) to describe complicated System A configurations. A set of such test cases represents one feature of the application under test. We first ran the test cases manually on the first release, and then automated them using an in-house automation tool. Each generated command set was mapped against the diagrams in the test case document, which helped us learn the product. We knew the command syntax, but there was no documentation on when commands needed to be generated as a group. When we had questions, domain experts helped us decide whether the generated commands aligned with the test case diagrams.
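To make the mapping concrete, here is a minimal sketch of how such a test case could be represented in an automation tool. All names here (the diagram path, config file, and commands) are hypothetical, illustrating only the structure: each case pairs a diagram from the test case document with a configuration file and the command set we expect to be generated from it.

```python
# Hypothetical test case structure: one entry per diagram in the
# test case document. Paths and command names are made up for illustration.
TEST_CASES = [
    {
        "diagram": "feature_x/case_01.png",        # picture in the test case document
        "config": "configs/minimal_case_01.cfg",   # minimal System A configuration
        "expected_commands": [                      # commands we expect to be generated
            "create_node A1",
            "link A1 A2",
        ],
    },
]
```

An automated test then runs the application under test with `config` and compares its output against `expected_commands`.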
We ran the test cases on the second release, where there was no clear mapping of how new features influence the command output. That information lives in the developers’ heads. We extracted that knowledge (for the diffs in generated commands), first as test script comments. Those comments are, in effect, change requests for the application under test.
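A small sketch of how a release-to-release diff of generated commands can be captured, assuming the command output is a plain text file. The file names and the change-request comment are hypothetical, but the comment mirrors the style we keep in our test scripts after consulting developers.

```python
import difflib

def release_diff(baseline_text, new_text):
    """Unified diff between the previous release's generated commands
    and the new release's output for the same configuration."""
    return list(difflib.unified_diff(
        baseline_text.splitlines(),
        new_text.splitlines(),
        fromfile="release_1_commands.txt",  # hypothetical baseline file
        tofile="release_2_commands.txt",    # hypothetical new output file
        lineterm="",
    ))

# CR-1023 (hypothetical example of an extracted-knowledge comment):
# developers confirmed the extra 'set_timeout' command is expected output
# for the new retry feature, not a bug.
```

Each diff hunk is then either confirmed as a change request (and documented as a comment like the one above) or reported as a bug.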
When we observed the generated commands, we checked the following:
- number of commands
- order of commands in the output file
- command parameter values
When any of these differed, it could be either a change request or a bug; change requests were more frequent. Each difference was discussed with developers, a process of extracting domain knowledge into test script comments.
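The three checks above can be sketched as a single comparison helper. This is a minimal illustration, not our actual tool: it assumes each line of the output file is one command, with the command name first and its parameter values after it.

```python
def parse_commands(text):
    """Split an output file into (name, params) tuples.
    Assumes one command per line: name followed by parameter values."""
    commands = []
    for line in text.strip().splitlines():
        parts = line.split()
        commands.append((parts[0], tuple(parts[1:])))
    return commands

def diff_commands(expected_text, actual_text):
    """Return human-readable findings covering the three checks:
    command count, command order, and parameter values."""
    expected = parse_commands(expected_text)
    actual = parse_commands(actual_text)
    findings = []
    # 1) number of commands
    if len(expected) != len(actual):
        findings.append(
            f"command count: expected {len(expected)}, got {len(actual)}")
    # 2) order of commands in the output file
    expected_names = [name for name, _ in expected]
    actual_names = [name for name, _ in actual]
    if expected_names != actual_names and sorted(expected_names) == sorted(actual_names):
        findings.append("command order differs")
    # 3) command parameter values (compared position by position)
    for (e_name, e_params), (a_name, a_params) in zip(expected, actual):
        if e_name == a_name and e_params != a_params:
            findings.append(
                f"{e_name}: parameter values {e_params} != {a_params}")
    return findings
```

Each finding is then taken to the developers, who help us decide whether it is a bug or a change request.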
For now, this is our test strategy, and it evolves with each release.