As testers, we need to find as many bugs as possible in the software applications under test. We need to find anything that is wrong; it is that simple.
… but what is wrong? What is defined as wrong and how?
Before jumping into answers that include words like specifications and requirements, usability and performance, testing phases and strategies, let's disconnect from software engineering for a moment and go back to the roots of science …
Things are as they are; they do not need to be observed, measured, categorized, or characterized in order to exist. It is only the need of the observer that brings these actions into the picture: the human observer's need to understand, to forecast, and to act. In order to measure, categorize, or characterize something that is (the being), there must be a commonly understood and accepted reference. One minute, one mile, or one pound makes no sense unless the metric has been defined a priori against a commonly known and accepted reference, so that it acquires logical meaning and physical value and can be used to measure. So in order to categorize or characterize, we need to measure. In order to measure, we need a metric, and in order to create a metric, we need a reference.
Going back to software engineering: what is the reference against which we characterize something as a bug in a software application?
There are two major points of reference for the correct behavior of an application:
1. The specifications defined for the software application
2. The common sense and common logic of a standard and common user
These two major points of reference help define (measure and categorize) what a bug is. Consequently, we have two major categories of bugs:
1. Non-conformity bugs, where the application does not function as described in the specifications
2. Logical-error bugs, where the application, regardless of its specifications, behaves in an irrational or unexpected manner according to what a standard, common user would reasonably expect
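To make the distinction concrete, here is a minimal, hypothetical sketch in Python (the pricing rule, function name, and numbers are all invented for illustration): a function that fully conforms to its stated specification yet behaves in a way no common user would expect.

```python
def order_total(subtotal: float) -> float:
    """Hypothetical spec: orders strictly over $100 get a 10% discount."""
    if subtotal > 100:
        return round(subtotal * 0.90, 2)
    return subtotal

# The implementation conforms to the letter of the spec:
print(order_total(101.00))  # a $101.00 order is charged $90.90
print(order_total(100.00))  # a $100.00 order is charged $100.00

# ... yet a customer who spends LESS ends up paying MORE,
# which any standard user would flag as irrational behavior.
```

No test case derived from the specification would report a failure here, so this is not a non-conformity bug; it is a logical-error bug, visible only against the reference of common sense.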
It is important to note that while some bugs may belong to both categories, there is no significant correlation between them, as they emerge from different references. As such, they are different in nature, they follow different patterns, and they need to be treated differently in a test strategy if they are to be defined, identified, isolated, and terminated. In other words, you need different weapons to exterminate different kinds of bugs. Interestingly enough, test strategies rarely make a clear separation between these two categories, let alone follow different methodologies to identify as many bugs as possible from each.
As testers, we tend to focus on the first category, the non-conformity bugs, while the second is usually not even referenced in test strategies and test plans. This happens, on purpose or by accident, for various reasons:
Well-defined versus Undefined
Specifications of a software application are (or at least should be) clear and well defined. As such, it is straightforward to determine whether a certain behavior is a bug or not. Disputes and disagreements are usually resolved by consulting the bible of the application: the specifications. Even if a certain behavior is not crystal clear in the specifications, a common practice is to enhance the specifications to make it clear, just like a legal system in which case law continuously enriches the body of existing laws. When it comes to logical-error bugs, we sometimes realize with great despair that common sense is not that common, but rather a mean value of widely different opinions. Excluding cases where the error is obvious (a system crash), the more people look at a reported bug, the more different verdicts we get on bug-or-not-a-bug. The actual horror in everyone's eyes comes if and when someone asks the fanatic's question: where is this depicted in the specifications? As if the specifications were the answer to everything, the software's nostrum.
Finite versus Infinite
Specifications usually come in the form of one or more documents. Depending on the application, this documentation may be quite big, huge, or ridiculously monstrous; it is never just a few easily readable pages. Even so, testers find comfort in the defined territory of the specifications compared to the wild unknown of what is supposed to be common sense. Common sense is not only subject to interpretation but practically boundless. Even the simplest application may in theory have innumerable tiny logical errors. If we bring them into the picture, try to plan for them, and expect them to be revealed, the plan becomes vaguer and the project more expensive, not to mention the difficulty of forecasting what should be expected from a boundless and subjective factor like common sense. One more good reason to set this category of bugs aside.
Should we plan and test focusing on logical error-bugs?
So what happens with this category of errors? If we seldom have a strategy for them, if we have no advanced, specific methodologies to follow, how is it that we do not constantly fail because of them? In practice, we do find many logical errors while testing against the specifications. Especially in manual testing (and this is one of the reasons why automation and test machines will never completely substitute for the manual tester), we find many logical errors, and certainly the most important and apparent ones. Remember: how many times have you reported a bug that cannot be directly linked to a specific test case or specification element? All of these are logical-error bugs.
The ones that remain tend to be considered "not that important" through management of expectations rather than management of defects. They even give the feeling of a brand-new application! The customer acknowledges the smell of brand new with satisfaction when finding such bugs, as long as they do not create issues in business continuity or in the nervous system of end users. It is like the new pillow that is not yet fully comfortable, or the brand-new car that needs a bit more attention for the first few thousand miles: minor flaws that are the signature of brand new. Not to mention that some of them may be met with the question of horror, "where is this depicted in the specifications?", and thus become subject to a change request (CR), a.k.a. extra money.
For these reasons, among others, logical-error bugs do not enjoy special treatment in the software testing process. This does not imply by any means that we shouldn't evaluate and estimate their possible existence to the best level possible. Even if this is a black hole that seldom causes fatal issues in projects, it should not be fully neglected and set aside. We should know what we do not know. Take a tester who is not familiar with the specifications and let him or her free-test the application alongside the full-time testers, who may by now know the specifications by heart. If the new tester starts finding more bugs than the rest of the team, then … Houston, we have a common sense problem!
* as originally published in “Tea Time with Testers”, Year 4 Issue III