How to Measure Testing Success?


The Goal of Testing

Many people know that software testing is an essential part of successful software product development, but why exactly? What are we trying to achieve with it? Are we trying to find as many bugs as possible or is it more complicated than that?

The engineer mindset tells us that we are always optimizing for something in life, whether it be the amount of money we make with the least time investment or the amount of tasty food we can buy with the least amount of money. So, what exactly are we optimizing for when we are testing software?

The primary goal of software testing is to ensure that the released software works as intended, without any problems. Beyond functionality, testing can also safeguard the user experience in terms of usability and performance. Software testing thus mitigates risks such as decreased customer satisfaction and loss of reputation. The question is: how do we know that we have succeeded, and how can we be sure that no serious issue slipped into the release undetected?


Number of Bugs?

First, let’s examine why the number of bugs found is not a good ultimate metric to optimize for. What if our software does not contain any bugs, or what if it contains many bugs but none of them are severe? If we tested thoroughly and did not find any bugs, were we unsuccessful? Of course not: we confirmed that there are no defects, which we could not have known without testing. This is the inherent nature of risk. Even though your house is not burning down on most days, it still makes sense to check the chimney and other parts to keep the chance of something going wrong low. Thus the number of bugs found, on its own, is not a good metric to optimize for.

Risk Coverage?

If we want to reduce the risks related to a software release, it is also worthwhile to assess those risks and make sure that our tests cover them: the tests we run should target the riskier areas, achieving a degree of risk coverage and reducing the residual risk. The appropriate risk assessment method and coverage level vary from project to project; the main point is to use whatever best fits your scenario, for example FMEA (Failure Mode and Effects Analysis), QFD (Quality Function Deployment), or FTA (Fault Tree Analysis). But how do we know that our perceived risks, risk levels, and coverage are effective? We could still end up with a high-severity issue that our risk mitigation strategy did not consider…
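A simple way to quantify risk coverage is to weight each assessed risk by its level and measure what share of that total weight is addressed by at least one test. The sketch below is a minimal illustration under assumed names (`Risk`, `risk_coverage`, a 1-to-5 risk-level scale); it is not tied to any particular risk-analysis method.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    level: int     # assessed risk level, e.g. 1 (low) to 5 (high)
    covered: bool  # does at least one test address this risk?

def risk_coverage(risks):
    """Share of total assessed risk weight that is covered by tests."""
    total = sum(r.level for r in risks)
    covered = sum(r.level for r in risks if r.covered)
    return covered / total if total else 1.0

risks = [
    Risk("payment fails", 5, True),
    Risk("slow search", 3, True),
    Risk("typo in footer", 1, False),
]
print(f"risk coverage: {risk_coverage(risks):.0%}")  # risk coverage: 89%
```

Weighting by level means that leaving a high-severity risk untested hurts the score far more than skipping a cosmetic one.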

Number of Escaped Bugs?

What if we optimized for having the fewest bugs go unnoticed into the release? Again, the raw number is probably not the right thing to optimize for. Small problems might go unnoticed by the end user and cause no real harm. So it is acceptable for low-severity issues to reach the release; what we most want to avoid is a high-severity issue going unnoticed.

Escaped Defect Severity Index?

The defect severity index shows the average severity of the bugs found, where severity is assessed on some scale, for example from 1 (low) to 5 (high). This index can be tracked separately for escaped bugs: for a given software release or time period, average the severity of the bugs that were overlooked in the last testing round.

Defect Severity Index = (sum of the severities of all defects) / (number of defects)
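Computed in code, the index is a plain average. The sketch below uses assumed names and a hypothetical set of escaped bugs on the 1-to-5 scale mentioned above; note how a single severity-5 bug hides among low-severity ones in the average.

```python
def defect_severity_index(severities):
    """Average severity of a set of defects; 0.0 if no defects were found."""
    return sum(severities) / len(severities) if severities else 0.0

# hypothetical escaped bugs found after the last release (severity 1-5)
escaped = [1, 1, 2, 1, 5]
print(defect_severity_index(escaped))  # 2.0 -- yet one severity-5 bug escaped
```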

But would this truly be a good metric? In most cases the average can be misleading, since you might want to ensure that not a single high-severity bug escapes. If there happen to be many low-severity escaped bugs at the same time, the average may look low while the end user is still frustrated, because an important piece of functionality is broken by a serious defect. What we want to avoid is a serious bug escaping into the released software and causing problems.

Maximum Escaped Defect Severity?

If the average is misleading because we care more about avoiding high-severity issues, then it makes sense to keep the maximum escaped defect severity as low as possible. On its own, this can also be misleading: we might have a thousand low-severity defects and not a single serious one, yet the small unaddressed issues can add up to huge overall frustration on the users’ end.
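The maximum is a one-liner, and the hypothetical scenario below (same illustrative 1-to-5 scale) shows exactly the failure mode just described: a flood of minor defects keeps the maximum low while the sheer count is alarming.

```python
def max_escaped_severity(severities):
    """Highest severity among escaped defects; 0 if none escaped."""
    return max(severities, default=0)

# a thousand minor escaped defects and no serious one
minor_flood = [1] * 1000
print(max_escaped_severity(minor_flood))  # 1 -- looks fine in isolation
print(len(minor_flood))                   # 1000 -- users may still suffer
```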

The Ultimate Metric

Is there one ultimate metric that we could optimize for? Probably not. As discussed for each metric above, all of them have flaws that another metric might compensate for. Keeping track of multiple indicators, for example the number of escaped bugs, their average and maximum severity, and the risk coverage, can help many software projects stay on track. And even if you find your ultimate KPI (Key Performance Indicator), it is still useful to track other testing-related metrics, such as the time spent on different testing activities, to help you optimize your processes. It is better to track more metrics and have continuous feedback loops from production, based on which you can improve your processes, than to have no data at all on the success of testing. Ultimately, every piece of software has a different purpose, so slightly or completely different optimization approaches may apply.


So how do you know if you have succeeded? In the end, software products are successful when people find them useful and they have no serious problems. Software testing can make sure that this is the case. Choosing the right KPIs for your testing activities is essential to knowing whether your testing strategy is successful. But what is the catch? Why isn’t everyone testing as extensively as possible?

Having no bugs discovered late sounds great, but if hiring even one QA professional, let alone a whole QA team, requires a huge investment, it might seem unattainable.

Fuserwise’s solutions are designed to optimize your testing resource costs in a flexible and scalable way, so you can start small at a fraction of the cost of a full-time employee while keeping an eye on the success of testing from the very beginning.

Optimize your testing: