
Would Heu-risk it? Part 5: Temptress' trails

We are back to the traps, so prepare for another moral story about how we can be better. Again, let us start with the rhyme:

“The Vigilant Watchers, lured astray
Miss the dangers, hidden away
Temptress' trails you follow, effort low
But it won't show you what you need to know”


So, what does it mean?

Well, in short: This is about the lure of the golden path.
We can call it positive testing, the happy path, the default path, the basic flow or whatever we want; there are many names. Basically, it is about checking that the system does what we want it to do when given proper input and all dependencies are working perfectly.
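To make the distinction concrete, here is a minimal sketch in Python (pytest style); the parse_age function is purely hypothetical, just something small enough to show both kinds of test:

```python
import pytest

# Hypothetical function under test: parses a user-supplied age string.
def parse_age(text: str) -> int:
    value = int(text.strip())
    if not 0 <= value <= 150:
        raise ValueError(f"age out of range: {value}")
    return value

# Positive test / happy path: proper input, everything behaving perfectly.
def test_parse_age_happy_path():
    assert parse_age("42") == 42

# Negative tests: the routes off the golden path.
def test_parse_age_rejects_non_numeric_input():
    with pytest.raises(ValueError):
        parse_age("forty-two")

def test_parse_age_rejects_out_of_range_input():
    with pytest.raises(ValueError):
        parse_age("-1")
```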

In my experience: This is not how reality works.

I've noticed that the vast majority of the tests that testers around me run are of a “positive nature”, while far more of the bugs I see are found by negative tests. Then, a few years back, I read a really interesting dissertation about TDD that mirrored those observations. In their project, the ratio of positive to negative test scenarios created was 70/30. At the same time, the bug-finding ratio was reversed: the negative tests found 70% of the bugs. I can't determine from my own experience and one dissertation whether this is a general pattern, but I find it very interesting and I believe there is truth in it.
And, when looking for data to support a hypothesis, of course you find it! (Oh, the irony of me referencing confirmation bias here.)

Why? Well, first of all, because what the system should do is usually better defined than what it should not do. It is more… tangible. Looking at the different types of “requirements” I've seen come and go over the last 20 years, it is obvious to me that we spend a lot more time defining the positive routes.
And of course, it's way easier. For every single happy path there are endless alternative routes, some more obscure than others. With a lot of inexperienced testers out there, a lack of proper training (there is awesome education out there, but sadly many get none, or bad training) and less and less time set aside for testing, it is natural that the easy testing gets prioritized.
I also believe people start here because they feel that if the golden path does not work, there is no point in looking at the rest. And then, perhaps, time and/or energy runs out.

But if there is truth to the ratio, should we not do the opposite? If 30% of our tests find 70% of our bugs, why are we spending 70% of our time on something else? (Notice the resemblance to the Pareto principle? I sure did!)

Food for thought.

Moral of the story: Challenge yourself and your hypotheses, and consider whether a few negative tests might bring more value.

Story time

There are almost too many examples to choose from here!
One team that I was indirectly working with released a new way of uploading files. It was a great improvement from the old version. It removed a lot of dependencies to external parties while at the same time shortening the waiting time for the user from hours to “real time” (or as close as you can get). We had a of of old test documentation to draw from, so it seemed like a very simple thing to test. However, in the end it turned out that the testing was almost exclusively of a positive nature, meaning of course the first time a user uploaded an unexpectedly large file it all came crashing down, with pretty big consequences for the users.
A quick code fix and a few negative tests (file sizes, file formats, pulling down integrations) later the problem was solved, and everyone involved had learned something. They learned the importance of testing more than the bare minimum and I learned not to assume that others know what I know.
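For illustration only, here is the kind of negative-test sketch that would have caught it. Everything here (validate_upload, the size limit, the allowed formats) is made up for the example, not the team's actual code:

```python
import pytest

# Hypothetical upload validator; the limits are invented for illustration.
MAX_UPLOAD_BYTES = 10 * 1024 * 1024  # 10 MB
ALLOWED_FORMATS = {"csv", "xml"}

def validate_upload(filename: str, size_bytes: int) -> None:
    extension = filename.rsplit(".", 1)[-1].lower()
    if extension not in ALLOWED_FORMATS:
        raise ValueError(f"unsupported format: {extension}")
    if size_bytes > MAX_UPLOAD_BYTES:
        raise ValueError(f"file too large: {size_bytes} bytes")

# The happy path everyone tested.
def test_small_csv_is_accepted():
    validate_upload("report.csv", 1024)

# The negative tests that were missing.
def test_unexpectedly_large_file_is_rejected():
    with pytest.raises(ValueError):
        validate_upload("report.csv", MAX_UPLOAD_BYTES + 1)

def test_unsupported_format_is_rejected():
    with pytest.raises(ValueError):
        validate_upload("report.exe", 1024)
```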

In another project, we were implementing a new web service. We had a lot of issues with it, so at one point I was sitting next to two developers who were discussing the flow from start to end. I was listening while doing something else, but something grabbed my attention and, almost without thinking, I asked: “So, what will happen to the transaction if System X does not respond?”
The senior dev stopped talking, looked at me for a while, sighed, and then they re-wrote the flow again. The difference, of course, was in our approaches to the problem. He was trying to confirm that his solution solved the problem. I was trying to falsify it. Not break it, mind you. But I find that challenging my hypotheses is way quicker than trying to prove that a solution is 100% bulletproof.
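That question translates nicely into a test. A minimal sketch, with entirely hypothetical names (process_transaction, SystemXTimeout) standing in for the real flow:

```python
# Hypothetical stand-ins for the real systems in the story.
class SystemXTimeout(Exception):
    pass

def process_transaction(call_system_x) -> str:
    """Process a transaction even when a dependency goes quiet."""
    try:
        call_system_x()
    except SystemXTimeout:
        # The transaction must not be lost just because System X is down.
        return "queued_for_retry"
    return "completed"

# Confirming the solution works (the senior dev's approach).
def test_transaction_completes_when_system_x_responds():
    assert process_transaction(lambda: None) == "completed"

# Trying to falsify it (my approach).
def test_transaction_survives_when_system_x_does_not_respond():
    def system_x_times_out():
        raise SystemXTimeout()
    assert process_transaction(system_x_times_out) == "queued_for_retry"
```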


Quote of the day

“Positive test bias is a critical concern in software testing and may have a seriously detrimental effect on the quality of testing”

– Laura Marie Leventhal, Barbee Eve Teasley, Diane Schertler Rohlman

Reading suggestions

How-abouts and what-ifs – QA Hiccupps
Taking the negativity out of negative testing – Jessica Lavoie, StickyMinds
Analyses of factors related to positive test bias in software testing – Laura Marie Leventhal, Barbee Eve Teasley, Diane Schertler Rohlman
The test case as a scientific experiment – David Coutts, StickyMinds
Quality of test design in TDD – Adnan Čaušević
Negative testing: Is it important? – Sejal Vala, BoTree Technologies

Previous posts in the series

Title and link – Category
Part 1: Introduction – None
Part 2: Mischievous Misconceptions – Trap
Part 3: The Rift – Weapon
Part 4: The Fascade – Tool

4 thoughts on “Would Heu-risk it? Part 5: Temptress' trails”

  1. The Happy Path can only ever be a short-term objective; I’ve used that a couple of times when a new app has been scheduled for a demo at some point in the near future, and the short-term objective is making sure that the app doesn’t fall on its face in front of senior people and clients. Of course, to get to that, you do have to go through an exploratory process to find out what lies on that path and what’s off it. And you return to the off-piste stuff as soon as the demo is finished.

    The danger here is that even your happy path may not be on solid ground (to mix metaphors rather). The first project I was involved with that used the ‘happy path’ scenario was going really well – hailed by the CEO as “best tested app we’ve ever released” and lots of enthusiastic feedback from stakeholders. But there were two problems lurking in the undergrowth: first, we didn’t realise that the requirements we were working to were completely unrealistic (they’d been drawn up by a consultancy who were no longer on the project, and who were managed by people who had since left the company, so no-one had any real idea how thorough the requirements gathering had been – or rather, hadn’t been); and second, after the successful demo, the roll-out plan for fifteen beta users out of a client base of some 500 customers was ditched because Marketing picked the app up and ran with it to a ‘big bang’ rollout.

    Of course, it all went very pear-shaped. No-one had fully explored how clients would actually use the app in the real world, so we quickly started getting complaints when it didn’t do what clients really wanted. And the requirements and resulting spec hadn’t been shared with the ops guys, so there were points of contact between the app and the legacy business systems’ API where there was no matching – some data types created by the app had no analogue in the accountancy system and we ended up with about £3.5 million of invoices stuck between the API and the legacy payments system.

    The CEO was eased out, the company’s owners decided that in-house developing and testing was costing more money than it was worth, and most of us ended up on the jobs market. Not a happy path by anyone’s description.
