Today we are shifting left and looking at some static testing! Yay, so exciting! Another tool. But before we start with details, here is the rhyme for today:
“Always and never are never to be trusted
Once challenged, they are quite often adjusted
To something less set in stone but easier to believe
Dare to challenge criteria you could never achieve”
So, what does it mean?
This one is inspired both by working with requirement analysis/reviews and by the “Always and Never” heuristic card in TestSphere.
Long story short: Any time you see an absolute in a requirement, specification, user story etc. – Challenge it. Challenge it good. We humans tend to describe things in absolutes to simplify, but often there are a lot of ifs and buts in there that aren’t communicated. Absolutes are also very hard to verify (one might argue impossible, since we cannot do 100% exhaustive testing) and the cost of even trying can get very high.
Some examples:
“A user must always be able to complete the application within 1 min”
“The system must always be online”
“The response time in the system must never exceed 3 seconds”
“The system must be able to handle, and adapt to, every language”
Let us break each of those down: “A user must always be able to complete the application within 1 min” How would you prove that? Probably by having a reference group try it out and check that they all succeed. Does that prove it? Can you know that that user group includes every possible client you have? You can of course try to include someone who doesn’t speak/read the language perfectly, someone with mobility issues, someone with visual impairment etc. But you cannot be 100% sure, can you? And what if the cost and/or complexity of the application would increase tenfold trying to meet that goal, when 99% of your users completing it within 1 min and the last 1% needing 2 min might be a better business decision?
“The system must always be online” Ok. Let us get creative. How about in the middle of a nuclear disaster, world war 3 or if the data center is destroyed by terrorists? Is this still a requirement? I am not kidding, I might ask this. And people will laugh and call me ridiculous and then I can move on to: “Ok, are there other situations when it would be ok that the system goes down? Maybe some that are more likely?” And then you hopefully end up with a limit that is actually usable. Perhaps that the system can only have a down-time of 5% on a yearly basis, that any unscheduled downtime must be identified and analysed within 30 min and that loss of data can only be accepted for a maximum of 15 min. This might result in a disaster plan, multiple backup servers or a team of people always on standby. Or it might even end up with someone deciding the uptime isn’t worth that cost.
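To make a limit like that concrete, it helps to translate an availability percentage into an actual downtime budget. Here is a minimal back-of-the-envelope sketch in Python (the function name and the 95% figure are just illustrations of the example above):

```python
def allowed_downtime_hours(availability_pct, hours_per_year=365 * 24):
    """Convert an availability target into a yearly downtime budget in hours."""
    return hours_per_year * (100 - availability_pct) / 100

# 95% availability (i.e. the 5% yearly downtime from the example above)
print(allowed_downtime_hours(95))    # 438.0 hours, roughly 18 days
# Compare with a much stricter "four nines" target:
print(allowed_downtime_hours(99.99))
```

Seeing that 95% availability still allows roughly 18 days of downtime per year (while 99.99% allows under an hour) is often exactly the kind of number that moves a discussion from “always” to a cost-benefit decision.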
“The response time in the system must never exceed 3 seconds” This is something I see a lot. It is often an incredibly lazy way of getting around performance metrics. Say we have a system where 5 specific user flows make up 90% of the use of the system. And then we have a number of other tasks that are common but not daily. Lastly, we have a few reports that are generated once every quarter that do a lot of data collection, sorting, filtering and analysis. Do we believe all of those are equally important to run in 3 seconds? I don’t. I believe the things users do often need to be ultra-fast, the things they do maybe once every 20th time they use the system should be pretty fast, and the things that are done very seldom can take way more time without anyone thinking the system is bad or slow. As long as we make the user aware that a certain task will take longer, and preferably show them that things are being processed so they don’t think the system has frozen.
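One way to express that kind of tiered requirement is with percentile checks per task category instead of one blanket limit. A minimal sketch in Python; the category names, sample timings and thresholds are all made up for illustration:

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times (seconds)."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical measurements per tier, each with its own limit (seconds)
tiers = {
    "daily_flows":       (0.5,   [0.2, 0.3, 0.25, 0.4, 0.35]),  # ultra-fast
    "occasional_tasks":  (3.0,   [1.2, 2.1, 1.8, 2.5]),         # reasonably fast
    "quarterly_reports": (120.0, [45.0, 80.0, 95.0]),           # allowed to be slow
}

for name, (limit, samples) in tiers.items():
    p95 = percentile(samples, 95)
    print(f"{name}: p95={p95}s, limit={limit}s, ok={p95 <= limit}")
```

The point isn’t the exact numbers but the shape of the requirement: a percentile per tier is testable, while “never exceed 3 seconds” is not.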
“The system must be able to handle, and adapt to, every language” Well, I don’t know. Dead languages? Languages used only in areas without computers? Languages only used in countries that we don’t have business in, neither now nor in the foreseeable future? Imagine having to research, develop and test a system that handles all the different ways of reading (right to left, left to right, top to bottom etc.) and writing, dates, times, amounts, different character sets. Different calendars. Different cultures. Wow, this could get BIG. Sounds extremely expensive to me.
I could go on forever here but let me just say all of the examples above are things I have come across in real life.
One example is a system where the non-functional requirements said the system should always be accessible, with SLAs for 95% up-time and a very short window for the ops and support teams to get it up and running again. The only problem is, this system would not be used by anyone outside of normal office hours, so the cost to implement that was not defensible.
Another example is a system where someone who shall not be named tried to demand that no (zero) data loss was acceptable. The analysis showed that getting it down to less than 30 minutes of data loss would cost more than having all the users repeat the work. It would have required a completely different set of backup systems, rebuilding a LOT of the infrastructure and keeping a team on stand-by 24/7. In the end, this demand actually did force a new infrastructure in place, and we ended up with a great crisis plan, tests for that and a potential data loss of a lot less than before. But not zero!
Quote of the day
“Never say never, because limits, like fears, are often just an illusion.”