Risk Gap Model

The Risk Gap is what we still need to know before we ship. Testing closes the gap by defining its dimensions (the testing scope) and filling those dimensions with data (test results) through the application of testing techniques.
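
To make the model concrete, here is a minimal sketch (my illustration, with hypothetical names like ScopeDimension and RiskGap) of the gap as a set of scope dimensions that close as test results accumulate against them:

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the Risk Gap model: the scope is a set of
# dimensions (behaviors, conditions, data variations), and the gap narrows
# as test results accumulate against each one.

@dataclass
class ScopeDimension:
    name: str                                          # e.g. "behavior A under condition N"
    results: list[str] = field(default_factory=list)   # test results gathered so far

    def is_covered(self) -> bool:
        # "Covered" is ultimately a judgment call; this sketch just checks for any result.
        return bool(self.results)

@dataclass
class RiskGap:
    dimensions: list[ScopeDimension]

    def remaining(self) -> list[str]:
        # Scope dimensions that still lack test data: the gap that remains.
        return [d.name for d in self.dimensions if not d.is_covered()]

gap = RiskGap([ScopeDimension("behavior A under condition N"),
               ScopeDimension("behavior B under condition M")])
gap.dimensions[0].results.append("pass: A behaves as claimed under N")
print(gap.remaining())  # ['behavior B under condition M']
```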

These are rough notes which, when filled out, I hope will describe a general testing practice applicable to many modern software development teams. These principles are abstractions of how I perceive my own testing, whether I consciously think through them or execute them subconsciously.

How do we determine what it is we need to know?

Estimation of scope via extrapolation from current knowledge

“If X is changed, then I know I need to check behaviors A and B under conditions N and M. I might also need to check condition O. My memory of behavior A is a little fuzzy, testing it might reveal that more behaviors could be affected, like D and E.”

– subconscious thought process occurring within seconds of reading a ticket description
  • This estimation can happen almost instantly: as soon as we read a ticket description, we can think of a couple of tests that could reveal serious errors.
  • It can also take longer to determine, and it is strongly affected by knowledge gained through interaction with the product, which means interaction often shouldn’t be postponed until a great plan is ready.
  • Estimation becomes more accurate as more knowledge is available.
  • Knowledge categories:
    1. Interactions with the product
      • Real-time interaction is the most certain and up-to-date form of knowledge available
    2. Claims
      • Conversations
      • What the application is telling you through its interface
      • Documentation
      • Requirements
      • Comments
    3. Code
      • Extent of code change (a rough sizing sketch follows this list)
        • Number of files
        • Number of changed lines in files
        • Context of change within files
        • Object names within the code
        • Conditions within the code
        • Files in the context of the application structure: common vs. specific names, folder locations
    4. Domain and Real World Experience
      • Purposes of application workflows in connection with real users and their real goals
      • Usage of data in various locations and processes
      • Historic logic
      • Infrastructure, databases, APIs, models of system architecture
      • Interconnection of settings and variations based on data
    5. Testing Skill and Experience
      • Experience of “gotchas”
      • Modeling
      • Knowledge of techniques (e.g., the Heuristic Test Strategy Model, HTSM)
      • Diving right in
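
As one concrete way of gauging the “extent of code change” item above, here is a minimal sketch (my illustration, not part of the original notes) that sizes a change from a local git checkout; the base and head names are placeholders.

```python
import subprocess

def change_extent(base: str = "main", head: str = "HEAD") -> dict:
    """Rough sizing of a change: number of files touched and lines added/removed.

    This only measures extent; the context of each change within its file
    still requires reading the diff itself.
    """
    # `git diff --numstat` prints one line per changed file: <added>\t<removed>\t<path>.
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...{head}"],
        capture_output=True, text=True, check=True,
    ).stdout
    files = added = removed = 0
    for line in out.splitlines():
        a, r, _path = line.split("\t", 2)
        files += 1
        # Binary files report "-" instead of line counts; count them as zero lines.
        added += int(a) if a.isdigit() else 0
        removed += int(r) if r.isdigit() else 0
    return {"files": files, "lines_added": added, "lines_removed": removed}

print(change_extent())
```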

Design of tests to shed light on, and within, the scope

  • Design tests to reveal information within the estimated scope.
  • Again, interactions with the product don’t necessarily need to wait long, and the knowledge gained through them quickly makes the plan better.
  • Many test techniques are catalogued in the HTSM.
  • Quickly reveal knowledge through interaction: we don’t need to wait for a great plan. The sooner we have test results, the sooner we have feedback, and the sooner we can improve the plan.
    • Cursory skimming of an item to test can reveal enough information to start testing right away and find bugs immediately
    • Immediate interaction with the product provides context for the claims in the ticket
  • Aspects of testing
    • Ability to absorb what’s on the page: the meaning of words, available functions, possibilities for data variations, and extrapolations from those meanings
    • Test techniques such as combinatorial, functional, and user testing; see the HTSM (a combinatorial sketch follows this list)
    • Usage of Dev Tools and various other tools
    • Knowledge of HTTP and how to manipulate the network
    • Ability to rapidly interact and absorb information while creating a mental model and getting inspired
    • Discipline to stay focused and get through occasional drudgery
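
For the combinatorial technique mentioned in the list above, here is a minimal sketch of generating every combination of a few test parameters; the parameters and values are hypothetical.

```python
from itertools import product

# Hypothetical parameters for a feature under test; the values are illustrative only.
browsers = ["Chrome", "Firefox", "Safari"]
roles = ["admin", "member", "guest"]
payloads = ["empty", "typical", "max-length"]

# Full combinatorial coverage: every combination of every parameter value.
cases = list(product(browsers, roles, payloads))
for browser, role, payload in cases:
    print(f"test: {browser} / {role} / {payload}")
print(f"{len(cases)} combinations")  # 3 * 3 * 3 = 27
```

A full Cartesian product grows quickly as parameters are added; in practice a pairwise (all-pairs) subset generated by a dedicated tool often exposes the same interaction bugs with far fewer cases.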

Refinement of estimation given new knowledge

  • Testing reveals three important types of knowledge
    • Whether the scope is accurate: it may need to be expanded, contracted, made more specific, etc.
    • Finding bugs: things that violate pertinent oracles, revealing the possibility of higher risk and the need for deeper testing around that risk
    • Feedback on the success of the testing technique and the testing plan
  • The product reveals many things to us: we connect meanings from the ticket to meanings gained from interaction with the product and assimilate both into our system model

How do we know when the risk gap is closed?

We can’t. We can only estimate when enough is enough. When our estimation of the scope is confirmed, and when we logically perceive that the scope is filled with sufficient test data, we can start to consider ourselves done (a small ledger sketch follows the list below). Both the scope and its completion must be ascertained with logic inextricably tied to the specific project context, the aspects of the product we’re working on, and an assessment of the changes being made.

  • Estimating remaining risk tells us when we can reasonably be done for now
    • Estimation based on logic applied to facts (previous knowledge combined and assimilated with new knowledge gained through testing)
    • Can the facts gathered be organized together with a sufficiently strong logical argument that the level of risk is low enough for the particular situation?
  • Actual production bugs reported tell us, over time, how well we’re doing
  • Finding bugs when going back to areas we tested tells us how well we’re doing
  • Testing skill and domain knowledge build up over time, reminding us of past problems and circumstances we might recognize again, allowing us to act on them
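
As a rough illustration of “enough is enough” (a judgment call, not a formula; the ledger structure below is my own hypothetical sketch), one might keep a simple record of scope items and the evidence gathered against each, treating any empty entry as remaining risk:

```python
# Hypothetical ledger: each scope item maps to the evidence (test results,
# observations) gathered against it. Empty lists mark the remaining gap.
scope = {
    "behavior A under condition N": ["pass", "pass"],
    "behavior A under condition M": ["bug filed"],
    "behavior B under condition N": [],
    "behavior D (suspected, from fuzzy memory)": [],
}

remaining = [item for item, evidence in scope.items() if not evidence]
if remaining:
    print("Gap still open for:", ", ".join(remaining))
else:
    print("Scope appears filled; 'done for now' is a defensible argument.")
```

Whether the gathered evidence actually supports a low-risk argument still depends on the project context, not on the count of filled entries.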
