Ok, last of our categories is Tools, meaning something we as testers can use to be extra awesome. These could be patterns, heuristics, testing techniques or something else that YOU use to your advantage. Maybe you have previous experience that translates into testing and creates your own superpower!
So, let us have a look at the rhyme for this card:
“On the surface, all nice and neat
But have you looked at what lies underneath?
Can you be sure it worked, from start to the end?
What looks to be true you should always contend”
So, what does it mean?
Well, one possible interpretation is that what looks ok in the UI does not necessarily mean it is actually right.
Too many times have I come across transactions that hit a bump somewhere along the road from start to end and did not convey that problem back to the user.
Between you(-ser) and storage (whatever that storage may be) there can be many different points-of-possible-failure. If possible, try to always make sure that whatever you *think* should have happened actually happened. Do not trust that the glossy interface is showing you the truth.
If you can, look directly in the database/file/other-way-of-storing-data and check that it actually happened. (OH! OH! Even better, try to manipulate the data directly in storage and check that the UI can handle it! But that is another card. Darned.) Or at least, update the UI and check that nothing weird happens.
Learning the (different potential) paths your data takes through your system(s) and at what points it could be corrupted will make you a much, much better tester. Are there translations/conversions being made between systems/services? Are there error messages we try to polish in order not to confuse the user that might actually hide problems? Algorithms trying to calculate stuff from your data? (We will get back to numeric formats and conversions…. nnngh… can't get stuck on that already….)
Track the data to the source and back again! It is a river you should learn how to swim, surf, cross and control in every possible way!
At one point in time I was working on a system consisting of a database (with a lot of triggers, rules and procedures), a number of services (external and internal) and a few different front ends.
We were implementing a new form for users to submit an application through, with all the bells and whistles available to us. We felt confident we had put all the safeguards and helpers in place to make sure our users got through the process smoothly without any hindrance.
We tested all the different input fields for maximum length, injection attacks, weird special characters, everything we could think of.
It all looked GREAT!
Then at one point I noticed something weird. I was randomly opening a submitted application to edit something and noticed that the text was cut off. Nothing had shown up in the UI, even when refreshing it, so at first I thought it was just A Weird Coincidence (Oh yeah, one of those) but I found more when I started digging around.
It turned out that the database had a limit on that field that was shorter than the one in the UI, but the UI was caching my data so it didn't fetch it from the database when I refreshed.
From this I learned to check the database and that sometimes you need to empty your cache!
The second example has to do with backend and frontend developers interpreting error messages differently.
In this scenario, the database had a bunch of limitations and checks in place to make sure data integrity is not compromised. One of those was checking for duplicate entries and another was related to version handling in case of simultaneous editing by different users.
So the frontend sends a request.
Backend adds whatever magic it needs to and sends it on to the database.
The database responds that the request was denied due to a constraint.
Backend says “cool, the database handled it, all is good.” and sends an OK back to the frontend, but also adds information that the constraint kicked in. (So basically a 200 OK with an error message attached.)
Frontend happily tells the user that the request was successful and ignores the error.
The user believes that whatever change they tried to make was done and carries on with their life.
In reality – nothing was changed/added/removed and the user should have gotten an error back.
And all of the developers involved strongly believe that they were right in their interpretation of this scenario and that someone else should have handled it.
Quote of the day
“If something seems problematic/acts oddly,
interact with it directly and indirectly to evaluate it more closely,
in different situations.
That can be editing, searching for, redoing steps.
Keep poking until something comes out, or you're satisfied it won't”
– Alex Schladebeck, on her Pimple heuristic
Heuristics and hunches in exploratory testing – Alex Schladebeck
Exploratory testing – Maaret Pyhäjärvi
SQL Introduction – W3Schools
HTTP Basics – Chua Hock-Chuan
Exploratory Testing: What are microheuristics and how can you find and use them? MoT masterclass with Alexandra Schladebeck
(note: Requires pro account)