Testing vs. Checking – separate entities or part of a whole?

There has been a lot of
discussion, heated and calm, about the concept of testing. What it is. What it is not. Arguments about real testing vs. what a lot of people imagine when they hear the word. Twitter and blog posts talking about “real testing”, “the future of test”, “no testing” and so on, seemingly endlessly.

To be honest, I’ve had a hard time grasping a lot of it and some distinctions don’t make sense to me. But. I’ve also had revelations and that is what this blog post will be about.

So, let’s start. What is testing actually?

To me, testing = Checking + Exploration. Not one or the other, both. (Shoutout to Elisabeth Hendrickson and her awesome book Explore It!)

Let’s begin with checking. The stuff that some will say is not ”real testing”.

My opinion? Well. If you know something is true/false, black/white, start/stop, by all means – check it. Go ahead, it's fine. Yes, I promise! If you know – from a reliable source – that you are supposed to be able to log in to the site with credentials XYZ from device A, then do it (there is a small sketch of automating that exact check after the list below).

  • Is it important enough that you want to check it all the time? Automate it.
  • Is it time-consuming enough that you notice yourself avoiding checking it manually? Automate it.
  • Is it something that would help the team to get instant feedback on when it’s not working? Automate it.
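To make that concrete, here is a minimal sketch of what the login check from above could look like once automated. Everything in it – the endpoint URL, the credentials, the response shape – is an assumption for illustration, not a real API; adapt it to whatever your own system actually exposes.

```python
# Minimal sketch of an automated check (pytest + requests).
# The URL, credentials and response fields below are hypothetical --
# substitute the real ones for your own system.
import requests

LOGIN_URL = "https://example.com/api/login"  # hypothetical endpoint


def test_known_good_credentials_can_log_in():
    """The 'obvious' check: a known-good user can log in."""
    response = requests.post(
        LOGIN_URL,
        json={"username": "xyz-user", "password": "xyz-secret"},  # credentials XYZ
        timeout=10,
    )
    assert response.status_code == 200   # login was accepted
    assert "token" in response.json()    # and a session token came back
```

Run it on every commit and you never have to spend headspace on that particular question again.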

Checking, to me, has an important part to play in what I call testing.

  • Important because knowing that I’ve checked the “obvious” gives me confidence to go further and deeper.
  • Important because knowing stuff works in the basic ways frees up headspace and time for more creative work.
  • Important because testing things in multiple different ways is more valuable than doing it one way perfectly.
  • And let's be honest: Important because it still finds a lot of issues quickly.

Exploration, to me, is all the other parts. The stuff that might be hard to explain to someone else.

To me, exploration is about framing a hypothesis, inventing an experiment to try to falsify it (yes, falsify and not prove), executing the experiment and then observing, analyzing, learning and adapting.

  • It can lead to broken assumptions, a feeling of knowing less than before, or learning almost nothing useful. Which is ok, it will make me a better tester tomorrow.
  • It can lead to new stuff to check next time. Possibly/probably checks to automate. Which is awesome, it will allow me to do even more deep-diving next time.
  • It can lead to us having to re-think the entire solution. Which is GOOD, it will make the solution better.
  • It will find things checks can't possibly find, because we didn't know about them and therefore couldn't check for them. It will find risks, loopholes, cracks in the ice and give us information that can let us make better decisions.
  • Some will say it’s more creative and fun, some will say it’s ”real testing”, some will say it’s a waste of time.

I say it’s the stuff that makes me a very valuable asset in any development team. It also makes me worry too much. Imagine being unable to shut off the ”hm, I wonder what could possibly go wrong with this scenario” at any point in your life – say, while trying to look forward to travelling.

So, where am I going with this? Well, I am trying to explain why I think we have such a big gap between the ”Automate everything” people and the ”That’s not real testing” people.

I confess I struggle with models like the DevOps infinity loop. It puts ”TEST” in bold letters in an isolated part of the loop, and if that is your view of the world, how can you ever take someone like me seriously? It seems impossible to find common ground without a long discussion about ”what is test, really?”. And how do I explain that when I say test, I don’t mean executing a particular scenario in an application – I mean basically most of the things I do from the time I hear a user express a need (yes, I am horrible to talk to) to the time I stop working on a particular system?

  • I test the importance of that need by challenging it. Usually by talking to people.
  • I test the proposed solution by challenging it. Usually by asking questions or making explicit assumptions.
  • I test the process by challenging it. Usually by observing where people tend to take shortcuts or get flustered from not knowing how to proceed.
  • I test the solution under creation by challenging it. Usually by writing some sort of description of how I will test it and making as many people as possible tell me what I did wrong.
  • I test the solution when it’s ”done” by challenging it. Usually by a combination of checks (the stuff I know) and exploration (the stuff I believe).
  • I test the delivery by challenging it. Usually by analyzing data, either from an analytics tool or from actually talking to people. (Did you know customer service tends to categorize calls? Do you know how much information you can get by just checking in with them after a release? It’s awesome!)

So, if that is what I mean by testing and the person I talk to sees it as writing some sort of test descriptions, executing them and then reporting the result… it will clash. And it does. Repeatedly.

Some will argue exploratory testing encompasses all the aspects I’ve mentioned and won’t approve of me separating them. They will argue exploratory testing is more a way of thinking about testing, and I totally get it. I really do!

But. What I’ve come to realize is that to me, checking and exploration are two totally different mindsets that make me act and think differently. Yes, call them what you want, but I need to do context-switching to change between them, just as I need to switch to another way of thinking when building something. And I do. I switch. And I need to do that switching in order to be the best possible tester I can be. So, to me, they are all tools I use for solving different tasks.

[Image: a tortoise cat sleeping on a colourful blanket]
Picture of my cute cat, just in case anyone needs to calm down.

Author

  • Lena Pejgan Nyström

    Lena has been building software in one shape or form since 1999, when she started out as a developer building Windows Desktop Applications (both impressive and scary – they are still alive and kicking). She later found her passion for testing and, even though her focus has shifted to building organizations and growing people, she is still an active voice in the testing community. Her core drive is continuous improvement and she strongly believes we should all strive to challenge ourselves, our assumptions and the way things are done. Lena is the author and creator of “Would Heu-risk it?” (card deck and book), an avid blogger, international keynote speaker and workshop facilitator.