Reacting to change – Pair-blogg #2

Photo by Ross Findon on Unsplash

Background

In case you haven’t read about my pair-blogging idea before, here is a short summary from the first post:

A while back I asked on Twitter for people who would be up for pair-blogging. The idea was that we agree on a topic and a date and then we each write a post about that topic. We publish on the same date and promote each other’s posts. 

I approached Arlene Andrews to pair with me on another pair-blog. We are both members of the same Slack group and I have noticed her in a lot of discussions. I thought she could be a great inspiration for a topic, and I sure was right!

Arlene got her inspiration from GeePaw Hill, who talked about how strongly a person or system will react to a change, how unpredictable that reaction can be and how it affects testing.
The example used was that of a bug landing on you. Regardless of the type of bug, you will get a reaction. The size of the reaction can be predicted to a degree: a ladybug will trigger a smaller reaction than a wasp. But the reaction will also be individual, just as small changes in a product bring a smaller reaction than the major feature changes that come out of the blue.

Such an interesting topic, I am so grateful I asked Arlene to pair 🙂
Make sure to read Arlene’s take on the topic here.


Reacting to change 

I immediately connected this to a talk I attended by Cassandra Leung: (Mis)using personas with the seven dwarfs. But I’ll get back to that later.

Let us first explore the topic a bit. If I summarize my interpretation, the statement is that how a certain person or system will react to a change depends on both the size/scope of the change and the individual. In other words, we can predict the general reaction to an extent, but each individual will still react differently.

Looking at the example of wasp vs. ladybug – yes. It makes sense that the overall reaction to the wasp will likely be stronger than the reaction to the ladybug. A wasp is not only bigger, it is louder and more dangerous. Few people will fail to notice a wasp landing on them. A ladybug, on the other hand, is small and silent, and I have more than once failed to notice one until I either spotted it or felt the tickle of its little legs walking on my arm. So we can predict that the reaction will, in general, be different. But of course people are individuals, and they all have different experiences. Some individuals will be allergic to wasps and react far more strongly. Some individuals might love wasps and react in the opposite way. Maybe one individual is particularly afraid of ladybugs and has a panic attack.

The same can of course be said for systems and I can certainly relate to using this as a skill, or heuristic, in my testing. Let’s look at an example.

Most applications use what is commonly known as CRUD functions: Create, Read, Update, Delete. In most cases I would say I generalize them in the following order, from safe to dangerous: Read, Update, Create, Delete.

“Read” is usually pretty uncomplicated. Even though displaying information can be extremely complex, at least you don’t change any data. So in my mental model, read is the least likely to cause data corruption and the easiest to test (though not necessarily the fastest).
“Update” is a bit more prone to problems since you change data, but at least you have a complete (?) set of data to work with, so in my model it is not too bad. You have to look into duplicates and the like, but it is usually pretty OK.
“Create” is where my mental model starts ramping up. Here there are more possibilities for problems, because you might not create everything you need, you might run into duplication issues, and so on.
Finally, “Delete” is where I expect to spend my time. You need to make sure things are properly deleted, that you cannot delete things that should not be deleted, that you update any related data, that you handle concurrency, and everything else.

This is how I assess the risk, and the impact on my testing, without any other information to go on. However: this is just a model, and any system, or even transaction, might act differently. One particular set of data might be extremely complex to view due to its content, but completely harmless to delete, because delete in that case means setting a flag in the database to “deleted”. Or in a particular system it might be very hard to update something because of a very strict set of database rules, while it is easy to create a new set of data because the rules only apply to existing data.

I can generalize and assume that delete will take X times as much effort as read, but I cannot know that I am right. 
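
To make the mental model a bit more concrete, here is a minimal sketch in Python (mine, not from GeePaw’s or Arlene’s posts) of how the general prediction and the individual exception could live side by side. The weights and names are made up purely for illustration.

```python
# A minimal sketch of the CRUD heuristic: general default weights,
# plus per-system overrides for the individual "rogue" cases.

# Default ordering from safe to dangerous: Read, Update, Create, Delete.
DEFAULT_RISK = {"read": 1, "update": 2, "create": 3, "delete": 5}

def estimated_effort(operation, overrides=None):
    """Relative test-effort estimate for a CRUD operation.

    `overrides` holds what we actually learn about a specific system,
    e.g. that delete only sets a "deleted" flag and is cheap to test.
    """
    weights = {**DEFAULT_RISK, **(overrides or {})}
    return weights[operation]

# General prediction: delete takes roughly five times the effort of read.
print(estimated_effort("delete"))                  # 5
# Individual reaction: in this hypothetical system, delete is a soft delete,
# so the override pulls it down to the level of an update.
print(estimated_effort("delete", {"delete": 2}))   # 2
```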

Adding a bit of change theory to the mix gives us another angle: a small change, say shifting the shade of blue by 2%, will be accepted much more easily than changing the shade to red. Adding a step to a user flow will cause less friction than changing the order of doing things. The bigger the change, and the more it goes against what the user is used to, the more resistance and reaction you will get. For reference, knowing about things such as the Satir change curve is relevant knowledge for software development too!

Cassandra’s talk came to mind as another aspect to this: Mental states.
“Dwarf personas focus on users’ mental states and should help us understand how they might be personally impacted by the product. They appreciate that users are complex and can’t always be represented by a single persona.”
Meaning there is an aspect to this that we cannot predict at all. We do not know what state our user is going to be in at any particular point in time, and the type and size of the reaction will depend on their current mental state. Going back to the wasp vs. ladybug example, if someone has just lost a loved one to a wasp sting, they might react in a way that feels totally out of proportion.
The same can actually be said for systems. A system might handle a change completely differently depending on its state. Something slightly off, say data formatted in a slightly different way, that does not even cause a breeze when the system is in a happy state might cause a crash if the system is already under pressure.
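
As a purely hypothetical illustration (a made-up parser and queue depth, not anything from the original posts), the same slightly-off input can get a very different reaction depending on how much pressure the system is under:

```python
# Hypothetical sketch: the same slightly-off date format is quietly repaired
# when the system is calm, but rejected once the system is under pressure.
from datetime import datetime

def parse_date(value, queue_depth, max_queue=100):
    try:
        return datetime.strptime(value, "%Y-%m-%d")    # the expected format
    except ValueError:
        if queue_depth < max_queue:
            # Happy state: spend the extra cycles on a lenient fallback.
            return datetime.strptime(value, "%d/%m/%Y")
        raise  # Under pressure: fail fast instead of attempting repairs.

print(parse_date("03/05/2024", queue_depth=5))    # repaired to 2024-05-03
# parse_date("03/05/2024", queue_depth=500)       # raises ValueError under load
```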

Summary

All in all, I completely agree both with the premise that the general size/type of reaction can be predicted, and with the premise that we must also be prepared for individual rogue reactions that do not match that pattern.

I found it very interesting that I could apply both general change theory and a talk I attended more than a year ago, and find connection points.

Thank you Arlene for picking a very interesting topic to explore!
Make sure to read Arlene’s take on the topic here.