August 5, 2016 – Andro Cobarrubias
How much testing is ‘good enough’ testing?
If your organization is like most, there's probably no cut-and-dried answer to that question. And that applies to every aspect of software QA testing – whether you're rolling out a brand-new application or running regression tests against a legacy one.
Industry standards aren't much help, either. They can offer some guidance, but not a definitive answer. One standard might say 50% test coverage is good enough; others might say 60%, 70%, or 80%. In reality, those numbers are pulled out of the air – there's no empirical evidence backing them.
So again, how much test coverage is enough test coverage?
A Question That Really Needs An Answer
The fact that there’s no clear answer to the test coverage question is a problem.
An industry-wide test coverage percentage applicable to every QA scenario would certainly be convenient, but it simply isn't possible. That's because each application, and each testing scenario, is unique.
Assigning a specific test coverage number to all situations would be akin to declaring that all men wear size 12 shoes. A relatively small percentage of guys would be just fine with that declaration, but it simply wouldn’t work for everybody else.
But there is a way to determine the proper test coverage for each unique situation. And just like buying a pair of shoes, it involves measuring to determine the right number.
Scoring QA Success with Test Points
Test points provide a quantitative, measurable approach to determining test coverage on a case-by-case basis.
What is a test point?
Put simply, a test point is the smallest measurable unit of work that you can test per application. Ideally, you’ll define test points at each step of the application development process – though you can certainly also define test points retroactively for legacy applications.
And once you’ve defined the test points for your application, you can use them to eliminate guesswork in determining ‘good enough’ test coverage.
Defining Test Points
Still not quite clear on what exactly a test point is? Let’s consider a few examples of test points found within various components of a typical application…
- Screens Test Points:
- Number and types of screens
- Number and types of fields
- Field attributes
- Reports Test Points:
- Number and types of reports
- Report verification areas
- Incoming / Outgoing Files Test Points:
- Number and types of files
- File verification areas
- Data Entry and Processes Test Points:
- Number and types of relevant entity states/statuses
- Types of processes
Certain application-specific factors will also help to define test points. Security profiles within an application, for example, will each need to be tested and defined as test points. The same is true of specific business rules that are incorporated within an application.
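To make the idea concrete, the inventory above can be expressed as a simple tally: count the testable units in each category, then sum them to get the application's total test point count. This is only an illustrative sketch – the category names, counts, and `TestPointInventory` class are hypothetical, not part of any standard:

```python
from dataclasses import dataclass, field

@dataclass
class TestPointInventory:
    """Tally of test points per category for one application (illustrative)."""
    counts: dict = field(default_factory=dict)

    def add(self, category: str, count: int) -> None:
        # Accumulate test points under a category name.
        self.counts[category] = self.counts.get(category, 0) + count

    def total(self) -> int:
        # The application's total test point count.
        return sum(self.counts.values())

inventory = TestPointInventory()
inventory.add("screens", 12)           # number and types of screens
inventory.add("screen_fields", 140)    # number and types of fields
inventory.add("reports", 8)            # number and types of reports
inventory.add("files", 5)              # incoming/outgoing files
inventory.add("security_profiles", 4)  # application-specific factors
inventory.add("business_rules", 23)

print(inventory.total())  # 192
```

Once every component has been counted this way, "coverage" stops being an abstract percentage and becomes a ratio over a concrete, application-specific denominator.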
4-Step Approach to Utilizing Test Points
An excellent way to put test points into practice is to follow the 4-step approach below. It provides a systematic, organized methodology for applying test points consistently across a range of different applications.
- Define – The first step is to define what counts as a test point. Develop a standard set of test point guidelines that can be used to verify which artifacts/objects qualify as test points. Train the people who will count test points, and give them the access they need to do their jobs. To help standardize the process, create test point detail and summary templates.
- Create – This step involves identifying potential test points at the detail level.
- Validate and Refine – The procedure for creating and implementing test points should be reviewed and refined regularly. Setting a firm cadence for the validation process – once a quarter, for example, or after every sixth service pack – helps ensure consistency.
- Implement and Measure – Implementing test points for a defined release should involve conducting an impact analysis. It's also a time to look for ways to improve the process – searching for opportunities to increase test coverage, for example, or to raise test productivity. It can also be helpful to incorporate metric impacts in a release scorecard.
Get the Point?
Test points provide a methodical, quantitative, customized approach to determining the proper test coverage for each unique scenario. Utilizing test points eliminates the guesswork commonly involved in determining test coverage.
Is employing test points as easy as pulling a random test coverage percentage from thin air? Certainly not. But using test points is a far more accurate means of defining good-enough testing for each unique scenario.
And when it comes to the critical process of performing QA testing, accurate trumps easy every time.