TL;DR
This post gives one everyday example of how a metric can be used in the wrong way.
On the radio I heard a weather report: “… and the current temperature is missing one degree.” I know that this radio host always tries to be funny, but this time he was wrong. Here is why.
BBST Foundations Lecture 6, Introduction to measurement
Measurement is not about counting things; it is about ESTIMATING the value of something.
Measurement is the empirical (derived from or guided by experience or experiment), objective assignment of numbers to attributes of objects or events (according to a rule derived from a model or theory) with the intent of describing them.
Measurement includes:
- attribute
- instrument
- reading
- measured value
- metric => reading on the scale
In the context of a weather report, the attribute is AIR temperature, the instrument is a thermometer, the reading is the number shown on the instrument, and the metric is the INTERVAL SCALE of Celsius (Celsius because I live in a country that uses this scale as the de facto standard for temperature). In the United States, I would use Fahrenheit. A small sketch of this mapping follows below.
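Just to make the vocabulary concrete, here is a minimal Python sketch of that mapping. The `Measurement` class and its field names are my own illustration, not something taken from the BBST materials.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    attribute: str   # what we are estimating (air temperature)
    instrument: str  # what we read it with (thermometer)
    reading: float   # the number shown on the instrument
    scale: str       # the metric: which scale the reading lives on

# The weather-report example from above:
air_temperature = Measurement(
    attribute="air temperature",
    instrument="thermometer",
    reading=-1.0,
    scale="Celsius (interval scale)",
)
```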
The punchline.
And here comes the punchline: we use an interval scale when we do not know or use a true zero. In the air temperature context, on planet Earth, we do not use the true zero value (0 Kelvin, or about -273 Celsius) because that value has never been measured as Earth’s air temperature.
An interval scale has its own ZERO value, which in the context of air temperature is the point at which water freezes, a much more probable event.
So, saying that we are “missing” one degree Celsius is wrong: the air temperature is not missing anything, it is simply one degree below an arbitrary zero point.
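To see the arithmetic behind this, here is a short, illustrative Python sketch (the numbers and function name are my own, assuming the standard Celsius/Kelvin offset of 273.15):

```python
ABSOLUTE_ZERO_C = -273.15  # the true zero of thermodynamic temperature

def celsius_to_kelvin(c: float) -> float:
    """Shift an interval-scale Celsius reading onto the ratio-scale Kelvin."""
    return c - ABSOLUTE_ZERO_C

reported = -1.0  # the "missing one degree" reading
print(celsius_to_kelvin(reported))  # 272.15 K -- nothing is missing

# On an interval scale only differences are meaningful, not ratios:
print(celsius_to_kelvin(20.0) / celsius_to_kelvin(10.0))  # ~1.035
# so 20 C is NOT "twice as warm" as 10 C, because the Celsius zero
# (the freezing point of water) is arbitrary, not a true zero.
```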
For software testers it is important to understand measurement, because a tester must be able to answer the question:
“HOW MUCH testing have we completed and how well have we done it?”
You would not say to your manager: “We are missing one bug”.