”Testers don’t break software!”

[Image: a paper heart on a string, broken in the middle. Photo by Kelly Sikkema on Unsplash]

Introduction

I believe most of us have heard the statement ”Testers don’t break software, we break the illusion of it working”. For a while, something about that statement has bothered me, and I felt the need to get my thoughts onto ”paper” to sort them out.

I understand why the view that ”testers break software” is hurtful and why people feel strongly about it. A lot of testers have gotten heat from managers, project managers, developers etc. for ”breaking software” just before an important release. And a lot of developers have used ”testers break software” as an excuse for not taking responsibility for the quality of their work. I get it!

But to my weird brain, it does not ring 100% true. It is a matter of perspective and how we view software and testing. I have seen the other end of the argument, where testers don’t want to take responsibility for not getting involved early (and/or deeply) enough and use ”I don’t break the system” as a way of distancing themselves from that responsibility.

I could not pinpoint my unease and wanted to explore it further. While pondering why I reacted so strongly, and discussing it with a friend who reacted just as strongly in the opposite direction, I realized that both arguments are simplifications that don’t fit my mental model of testing. Hence the feeling that something was not quite right: I needed to sort my model out against the statements.

Testers break flawed systems

On one hand, we can view software as a physical thing, say a window. Something goes wrong in production and the product comes out slightly flawed. It could be cheap material, a mistake in the construction or perhaps a knot hole in the wood. Not so bad that it breaks during construction. If we test it, put some pressure on it, the flaws are exposed and the product breaks. If we don’t test it, the product is likely to break when used by the customer.

This comes very close to how I feel about my testing when I test a feature or a system. I find pressure points, I expose them to risk, and the product either breaks, is damaged or holds. The product was flawed, but not broken, before I put pressure on it. There are tests I perform where I most definitely aim at breaking the system to a point where, if I succeed, it will have to be replaced or rebuilt. The examples I can think of are usually either data related or network/hardware related. I might manage to corrupt the database to a point where it is unusable. I might manage to take down message queues, fill up disks and/or memory, or I might manage to corrupt other services that we depend on.

Sometimes we might even want to test just how far we can take it until it breaks. We test with the sole focus of breaking it. For the window analogy, we might expose it to wind and rain or we might put the frame under pressure. For software, we might put in incorrect data or run a million user sessions simultaneously.
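To make the ”million user sessions” idea a bit more concrete, here is a minimal sketch in Python using asyncio and aiohttp. The URL, parameters and session count are made up for illustration, and a real load test would typically use a dedicated tool, but the principle is the same: pile on concurrent load and see what gives.

```python
import asyncio
import aiohttp

# Hypothetical endpoint under test -- replace with your own system.
URL = "https://example.test/login"
SESSIONS = 10_000  # scale up until something gives

async def one_session(client: aiohttp.ClientSession, i: int) -> int:
    # Each "user" hits the endpoint once; a real test would walk a whole flow.
    async with client.get(URL, params={"user": f"user-{i}"}) as resp:
        return resp.status

async def main() -> None:
    async with aiohttp.ClientSession() as client:
        results = await asyncio.gather(
            *(one_session(client, i) for i in range(SESSIONS)),
            return_exceptions=True,  # count failures instead of aborting
        )
    failures = [r for r in results if isinstance(r, Exception) or r >= 500]
    print(f"{len(failures)} of {SESSIONS} sessions failed")

if __name__ == "__main__":
    asyncio.run(main())
```

Whether the errors we provoke this way count as ”breaking” the system or merely exposing a flaw that was already there is exactly the question this post is circling.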

Or we might actually, physically, break something. Not all software runs on an ordinary computer. Your software might run in a defibrillator, a cancer diagnostics machine or an airplane.

[Image: an open window with foggy nature outside. Photo by Hannah Tims on Unsplash]

Testers don’t break systems, we discover and exploit flaws

On the other hand, we can argue that software is not a physical thing and, unlike a window, it is not actually broken afterwards. Say we think of the software not as the actual window but as the production of the window, and the window is instead the output of the software. From that perspective, we look for flaws in things like what we put in (the material), the work process (order of assembly, manual steps, waste) and our resources (are we using the right machines, the right people?).

This is actually a double perspective in relation to software development. You could translate it to the process of software development, which we can also test, or you could see our software as the window assembly line and the output of the software as the window. If you take this perspective, it makes total sense to say that testers don’t break the software. We may discover and exploit flaws in the product (the assembly line) that result in a broken output (the window), but we did not break the product (the assembly line).

This perspective works very well with how I test the work process, and it fits my mental model of how I test different kinds of flows, usability, accessibility and other things that look more at a chain of events.

[Image: a car being assembled by robots. Photo by Lenny Kuhne on Unsplash]

Conclusion(-ish)

It still does not fit my mental model perfectly, but I do believe I have managed to dig up the root of what was bugging me. My mental model has most definitely used both ”Software as a window” and ”Software as a production line”, depending on what, when and why I test. In one instance it makes sense to say ”I break the system”, and in the other it does not. Mostly, I think what bothers me is that people see it so firmly in black and white, true or false. What if it depends on context? What if sometimes we do and sometimes we don’t? And both are ok.

What I think we can agree on, whichever side you are on, is that it is not ok to avoid responsibility, regardless of your role. We are all responsible for the quality of the end product, and we are all better off taking joint ownership instead of pushing blame onto someone else.
Quality is a whole-team responsibility, and that means we should all be involved, from start to finish. No matter whether you feel like you break systems or illusions.