What level of coverage is considered good?
Jul 3, 2023
A good level of coverage in software testing typically ranges from 70% to 80%. The right target varies with the project's complexity, risk, and specific requirements.
Imagine you're a farmer planting seeds in a vast field.
You'd want to cover as much ground as possible, but you know that it's not practical or cost-effective to plant every single inch.
There might be areas where the soil isn't fertile, or perhaps there are patches that are more susceptible to pests. So, you aim for the most productive areas, ensuring you cover around 70% to 80% of the field. That's enough to ensure a robust crop yield without wasting resources or effort.
The same principle applies in determining a good level of coverage in software testing.
Test coverage measures how much of your software's code or functionality is exercised by your tests. Having 100% coverage might seem ideal, but in practice it's often neither feasible nor necessary.
Attempting to achieve total coverage can lead to diminishing returns, with the effort, time, and cost required outweighing the potential benefits.
The 70% to 80% coverage mark is generally considered satisfactory because it focuses on the most critical areas of the application, while not consuming excessive resources.
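To make the threshold idea concrete, here is a minimal sketch of the comparison a coverage gate performs. The function name `meets_coverage_target` and its parameters are hypothetical, chosen for illustration; real tools such as coverage.py compute the counts for you.

```python
def meets_coverage_target(covered: int, total: int, target: float = 75.0) -> bool:
    """Return True when the measured coverage percentage reaches the target.

    covered: number of lines (or branches) executed by the test suite
    total:   number of measurable lines (or branches)
    target:  required percentage; 75.0 sits in the middle of the 70-80% band
    """
    if total == 0:
        return True  # nothing to cover, so the target is trivially met
    return (covered / total) * 100 >= target
```

In a CI pipeline, a check like this is what turns a coverage report into a pass/fail signal for the build.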
This means ensuring that the core functionalities work correctly, the most likely user behaviours are accounted for, and the biggest risks are mitigated. However, the 'right' level of coverage can vary depending on the specifics of the project.
High-risk applications may require a higher level of coverage, while simpler, lower-risk projects may get by with less.
As always, it's the balance between cost, effort, and risk that determines the optimal level of coverage.
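In practice, teams often encode whatever threshold they settle on in their tooling so the build fails when coverage slips. As one example, Python's coverage.py supports a `fail_under` setting in its `.coveragerc` configuration file (other ecosystems have equivalents, such as coverage rules in JaCoCo for Java); the 80 here is just the upper end of the range discussed above, not a universal rule.

```ini
# .coveragerc -- fail the run when total coverage drops below the target
[report]
fail_under = 80
```

With this in place, `coverage report` exits with a non-zero status when the measured total falls below 80%, which most CI systems treat as a failed build.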