Close the risk gap: find out what we need to know
This three-step strategy provides a framework, or story, within which to apply the catalog of heuristics found in the HTSM (Heuristic Test Strategy Model).
1. Estimate the gap
Estimate testing scope based on current knowledge
We need to know the boundaries of the gap in order to close it. Those boundaries can be expressed in terms of testing scope.
- Scope is four-dimensional:
- Product areas
- Functions and logic
- Estimation is done via extrapolation of current knowledge in light of the system and real world:
- Domain knowledge
- Testing skill
- The estimate becomes more accurate over time, as current knowledge increases through testing
- Estimation is itself an important skill
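The idea of an estimate that widens or narrows as testing teaches us more can be sketched as a simple data structure. This is an illustrative sketch only; the dimension names and methods are my own shorthand, not HTSM terms:

```python
from dataclasses import dataclass, field

@dataclass
class ScopeEstimate:
    """An evolving estimate of testing scope, refined as knowledge grows."""
    product_areas: set[str] = field(default_factory=set)
    functions: set[str] = field(default_factory=set)

    def refine(self, new_areas: set[str], new_functions: set[str]) -> None:
        """Widen the estimate when testing reveals scope we did not know about."""
        self.product_areas |= new_areas
        self.functions |= new_functions

# Initial estimate, extrapolated from domain knowledge of the change under test.
estimate = ScopeEstimate({"checkout"}, {"apply_discount"})

# A test session reveals the change also touches invoicing.
estimate.refine({"invoicing"}, {"generate_invoice"})
print(sorted(estimate.product_areas))  # -> ['checkout', 'invoicing']
```

The point of the sketch is only that the estimate is mutable state: every session of testing feeds back into it.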
2. Explore the gap
Design and execute experiments
Design interactions with the product, or just jump right in, to evaluate the accuracy of the estimate and to shed light on system behavior.
- Play and exploration
- Paths, workflows, data variations, other conditions
- Design and preparation of conditions
Testing artifacts (plans, mind maps, test cases, and more) exist as aids to keep our minds organized and to communicate our findings to others.
Results increase current knowledge, enabling more accurate estimation, until the estimates transform into sufficiently certain conclusions.
- Is the scope estimate accurate? Is it wider than we thought? Narrower? Deeper? Shallower?
- Are there bugs?
- Is our technique working well enough? Will a modification to our approach work better?
- Is the product testable? Why or why not?
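The estimate-explore cycle described above can be sketched as a loop that runs sessions until the evidence supports a conclusion. Everything here is a stand-in for illustration: the state, the session function, and the stopping condition are hypothetical, not a real confidence metric:

```python
def close_risk_gap(estimate, run_session, confident):
    """Repeat estimation and exploration until the evidence supports a conclusion.

    estimate    -- current model of what we still need to know (any structure)
    run_session -- callable that performs experiments and returns the updated estimate
    confident   -- callable returning True once the estimate supports a conclusion
    """
    while not confident(estimate):
        estimate = run_session(estimate)
    return estimate

# Toy usage: each "session" answers one of the open questions above.
state = {
    "unanswered": ["scope accurate?", "any bugs?", "technique working?", "product testable?"],
    "evidence": [],
}

def session(s):
    question = s["unanswered"].pop(0)
    s["evidence"].append(f"answered: {question}")
    return s

result = close_risk_gap(state, session, lambda s: not s["unanswered"])
print(len(result["evidence"]))  # -> 4
```

The structure mirrors the prose: exploration is not open-ended; it terminates when the estimate has transformed into a sufficiently certain conclusion.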
3. Close the gap: create a logical evidence-based argument
We repeat the two steps above, estimation and exploration, until we can close the risk gap; that is, until we have discovered everything we have inferred we need to know.
To close the gap is to create an evidence- and reason-based argument that the level of risk in the system is sufficiently low. A good argument can stand up to serious scrutiny.
Logic is necessary to incorporate test results into a holistic system model, so that each test contributes meaning to the risk evaluation. We must apprehend the system's behavior during testing, make a judgement about its meaning, and infer the significance of that meaning in order to adjust our system model.
The system model is largely composed of:
- the real-world goals
- the users we offer solutions to
- the processes and abilities we implement to allow our users to achieve their goals
- the business logic that seeks to reduce errors and human effort
- the data that we process and its relationship to the real world
- changes made to the code and other parts of the system
- real-world circumstances, such as those within which we work and those within which users will engage the product
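One way to picture these components is as fields of a single record that testing continually adjusts. The field names and sample values below are my own shorthand for the components just listed, invented purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class SystemModel:
    """Holds what we currently believe about the system; tests adjust these beliefs."""
    goals: list[str] = field(default_factory=list)           # real-world goals
    users: list[str] = field(default_factory=list)           # who we offer solutions to
    capabilities: list[str] = field(default_factory=list)    # processes and abilities
    business_rules: list[str] = field(default_factory=list)  # error-reducing logic
    data_relationships: list[str] = field(default_factory=list)
    recent_changes: list[str] = field(default_factory=list)  # code and system changes
    circumstances: list[str] = field(default_factory=list)   # dev and usage contexts

model = SystemModel(
    goals=["users can pay invoices on time"],
    recent_changes=["ticket 1: new payment terms field"],
)
# Testing surfaces another relevant change to fold into the model.
model.recent_changes.append("ticket 2: currency rounding config")
print(len(model.recent_changes))  # -> 2
```

The record is deliberately flat; the argument in the text is about what belongs in the model, not how it should be structured.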
As part of an agile team, I focus testing almost exclusively on changes made to the system, as tracked in each ticket in each release cycle, relying on previous testing to show that the general level of risk in the system before a given change is already sufficiently low. This trust in the results of previous testing allows a greater focus on testing changes, which creates a beneficial feedback loop: as each change is tested more thoroughly, those areas are demonstrated to be more stable than would be possible if that time were spent on broad regression testing instead.
Production bugs are tests of my risk gap evaluations. They mean that my estimates missed an important set of conditions or area of the product that was affected by a change I tested. Just as a developer fixes bugs in code, I can learn from these bugs in my risk estimations to make future estimates more accurate.
Testing is a process by which we transform estimation into conclusion, moving from less certain to more certain. Testing is like both a Fermi problem and the scientific method.
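The Fermi analogy can be made concrete: a rough count of test conditions is built from decomposed guesses, then narrows as real observations replace assumptions. The factor names and numbers below are invented for illustration:

```python
# Fermi-style estimate: roughly how many test conditions does this change imply?
# Start with order-of-magnitude guesses for each factor.
guesses = {"workflows": 5, "data_variations": 10, "environments": 4}

initial_estimate = 1
for factor in guesses.values():
    initial_estimate *= factor  # 5 * 10 * 4 = 200 conditions

# Exploration shows only 2 environments matter and 6 data variations are distinct.
guesses.update({"environments": 2, "data_variations": 6})

refined_estimate = 1
for factor in guesses.values():
    refined_estimate *= factor  # 5 * 6 * 2 = 60 conditions

print(initial_estimate, refined_estimate)  # -> 200 60
```

As with a Fermi problem, the first number is not meant to be right; it is meant to be improvable, and each round of testing improves it.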