The Jurassic Park Problem & Software Development (Part 2)

What happens when the forces are not aligned

In the previous part of this three-part article we had a look at three aspects of building software: is it technically possible, is it ethical and is it legal. In this second part we will look further into some conflicts of interest and what can happen when our three powers don’t pull the same way. Finally, in part three, we will suggest how you can move towards including all of this in your daily work – whatever your role in software development may be.

Drawing showing the text Maximum power and three arrows pointing at Ethical, possible and legal

When the forces align – when we are working on something that is not only technically doable (but hopefully challenging enough to spark some interest) but also both clearly legal and (at least as far as we can tell) ethical – things can feel incredibly easy and smooth. Work is fun. We can innovate. Things feel frictionless because no one slides in from the side to stop us, tell us no or ask us to slow down.

If you also happen to experience this in a highly mature team, in an organisation with a lot of trust, and – especially – where the goal aligns with your personal morals: *chef’s kiss*

But unfortunately, that is not always the case. Even with the best team, in a great organisation, working on something we are passionate about – we often get slowed down by things like unclear compliance or legal questions, contradictory regulations or areas where the law lags behind technological advances. Maybe we get halfway through our project and are then stopped by the sudden appearance of someone from legal, risk or compliance who tells us they need to look into whether we are allowed to do something a certain way, or maybe do it at all. This creates unnecessary friction and widens the distance between two areas of knowledge that are both really important when creating new products – engineering and legal.

From engineering’s point of view, it can feel almost like a red flag to ask a simple question and get the same “it depends” answer every time. Law is law, it should be clear – and didn’t these people go to law school for a thousand years?! Why can’t they give us clear instructions and guidance?

From a legal perspective, it is incredibly frustrating not to be involved early enough, not to be given time to understand, investigate and suggest changes that would make a new product legal.

And that is before we even get to ethics, an area where we unfortunately rarely have skilled professionals who can take charge and keep us honest. It falls on every single individual to consider – which we unfortunately might not do until it’s too late – unless it’s an area we already care a lot about.

But let’s look into some possible scenarios where these forces pull in different directions.

Scenario 1

Something we’ve seen pop up in the last few years is usage-based car insurance – where your premium is set dynamically based on usage, driving patterns and so on. It doesn’t seem to be widespread or commoditised yet, but it is an interesting use case to look into.

In its current form, we assume that it is a somewhat closed system, where the hardware and software in the car communicate with a central system at the insurance company. To avoid complicating things, let’s assume this data is not used to feed any kind of machine learning model or similar. This kind of software could potentially make drivers more aware of their driving. Compare it to how showing drivers how economical their driving is, or setting taxes based on environmental impact, can nudge drivers and buyers towards “better” choices. Since driving more safely would save you money, this could definitely have a good impact.
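To make the pricing mechanism concrete, here is a minimal sketch of how a premium could be scaled by driving data, assuming the insurer receives periodic trip summaries from the car. Every field name and weight below is hypothetical – a real insurer would use actuarial models, not three magic numbers.

    from dataclasses import dataclass

    @dataclass
    class TripSummary:
        """Hypothetical telemetry summary sent from the car to the insurer."""
        distance_km: float
        duration_minutes: float
        hard_brake_events: int     # sudden decelerations
        speeding_seconds: float    # time spent above the posted limit

    def adjusted_premium(base_premium: float, trips: list[TripSummary]) -> float:
        """Scale the base premium by a crude risk score derived from driving data."""
        total_km = sum(t.distance_km for t in trips) or 1.0
        total_seconds = sum(t.duration_minutes for t in trips) * 60 or 1.0
        brakes_per_100km = 100 * sum(t.hard_brake_events for t in trips) / total_km
        speeding_share = sum(t.speeding_seconds for t in trips) / total_seconds

        risk_score = 1.0 + 0.02 * brakes_per_100km + 0.5 * speeding_share
        return base_premium * min(risk_score, 2.0)  # cap the surcharge

Note that even this toy version encodes value judgements: which behaviours count as risky, and how heavily they are priced.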

But let’s take it one step further. Imagine if this could be used to automatically send information about speeding, reckless driving or other traffic crimes to a system that issues speeding tickets and fines automatically. This would likely catch more crime, act as a deterrent and thus could lead to safer roads. All things that should be considered ethically good – bringing societal value.

Drawing of a car, a phone and a speed dial - all connected and an arrow pointing toward a police icon

But with the current laws in place, sharing that information with a third party is likely not legal. And should it even be? Or is privacy worth more? And what else could that third party use the data for? Not to mention that we would lose a lot of context that only a human being (for now) can use to adjust their judgement. The reason you broke a particular rule might be that you were moving out of the way for an ambulance, or swerved to save a child who jumped into the street. None of these are things that automation can take into consideration. At least not yet.

Scenario 2

Is it morally ok to have an interface that gives “regular Joes” access to something incredibly complex, with insanely high reward but equally high risk and impact? To what extent are we, as software providers, responsible for making absolutely sure that our users understand how something going wrong can impact them? What safeguards, training and warnings do we need to put in place if the potential win is becoming financially independent overnight, but the risk is ending up in debt you can never recover from?

These are the things we thought about when we looked into the incident where a 20-year-old student committed suicide after ’mistakenly’ believing he owed $730,000 on the online trading platform Robinhood. The platform has attracted millions of young traders with little experience – giving them access to very complex and risky stock market options. For an experienced and well-versed trader, the platform likely makes sense, but to someone with little experience it can be very confusing, and it offers a lot of settings that can have devastating effects on your results. This comes on top of a business that is in itself hard for anyone but experts to fully understand.

Screenshot of a tweet: https://x.com/BillBrewsterTBB/status/1273008513950928896

There is a reason customers have to complete numerous knowledge tests and certify that they understand what they are doing before they can trade stocks or other assets – but it is also possible to bypass these, for example by looking up the answers, and they cannot cover all scenarios. Software design should help less knowledgeable customers understand what they are doing.
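What could that design help look like in practice? Here is a hedged sketch of one common idea: gate order types behind demonstrated experience and flag orders whose worst case exceeds what the account can absorb. The experience levels, order types and thresholds are all invented for illustration – this is not how any particular platform works.

    from enum import Enum

    class Experience(Enum):
        NOVICE = 1
        INTERMEDIATE = 2
        ADVANCED = 3

    # Hypothetical mapping of experience level to order types unlocked by default.
    ALLOWED_ORDER_TYPES = {
        Experience.NOVICE: {"market", "limit"},
        Experience.INTERMEDIATE: {"market", "limit", "covered_call"},
        Experience.ADVANCED: {"market", "limit", "covered_call", "margin", "naked_option"},
    }

    def check_order(order_type: str, experience: Experience,
                    worst_case_loss: float, account_value: float) -> str:
        """Return 'ok', 'warn' or 'block' for a proposed order.

        A real platform would do far more; the point is simply to match
        product complexity to demonstrated knowledge, and to state the
        worst case in terms of what the user actually owns.
        """
        if order_type not in ALLOWED_ORDER_TYPES[experience]:
            return "block"  # require extra certification before unlocking
        if worst_case_loss > account_value:
            return "warn"   # potential loss exceeds what the user has
        return "ok"

With these made-up numbers, check_order("naked_option", Experience.NOVICE, worst_case_loss=730_000, account_value=10_000) returns "block" – exactly the kind of friction that seems to have been missing.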

However, accountability is difficult. Can companies be held responsible for scenarios like this? Probably not legally. Should they have taken more responsibility? Probably yes.

We live in a time where extremely complicated things become available to the general public, and with that comes responsibility. Who will take responsibility for educating people about the dangers that come with the potential of becoming incredibly rich overnight? How do we balance protecting our users from causing harm to themselves, or others, through our product – while still allowing them to reap its benefits? Should we even try?

Scenario 3

Face recognition software is another hot potato. For the sake of this article, let’s include everything from fully automated AI solutions to processes where actual humans are part of the decision. Should we use face recognition software as part of law enforcement?

On the one hand, we would save incredible amounts of human labour and should be able to solve more cases in less time. In some ways, this is already used and accepted. As an example, speeding tickets are automatically registered if you are caught on camera in several countries around the world. In Sweden, this is a manual process where someone has to try to match the photo to the person owning the vehicle. If it isn’t a visual match, law enforcement has to try to match it to someone like a spouse or a child living at the same address, and have the owner confirm who the driver is. This of course has a lot of loopholes, but also a low risk of false positives.

What is starting to gain ground around the world is the automation of this process, and similar ones. As anyone who has built software – or maybe even just used it – knows, it is easy to imagine how terribly wrong this could go. Let’s look at some headlines.

With the rise of new technology, new dangers will also arise. With that, the “what’s the worst thing that can happen?” question becomes… rough. And important. When the potential harm is small, it might be ok to shrug those questions off, but we need to act responsibly when the stakes get higher.

AI has the potential to solve many of our current challenges, from identifying criminals to finding cancer early, but it is still just a very complicated Excel sheet – and it cannot, maybe not ever, exercise the kind of elaborate judgement that a human can.

There are many reasons to distrust technology as evidence, at least when it’s the only evidence – particularly if you fully automate decisions. Not to mention how difficult and muddy the question of accountability gets.
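To make that last point concrete, here is a minimal sketch of the guardrail most often proposed: decide per match whether the system may act at all, and route everything uncertain to a human. The confidence scores and thresholds are hypothetical.

    def route_match(confidence: float, auto_threshold: float = 0.999,
                    review_threshold: float = 0.90) -> str:
        """Route a face-recognition match: act automatically, ask a human, or drop it.

        The thresholds are made up. Note that even at 99.9% confidence,
        the fully automated lane produces roughly one wrongful action per
        thousand matches - with no human accountable for any of them.
        """
        if confidence >= auto_threshold:
            return "auto_issue"    # the legally and ethically muddiest lane
        if confidence >= review_threshold:
            return "human_review"  # a person makes the final call
        return "discard"           # too weak to act on at all

The safest variant simply deletes the auto_issue lane and sends every strong match to a human – trading away some of the labour savings to keep a person accountable.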

Scenario 4

We are constantly pushing the boundaries of possibility. An area we find incredibly jaw-dropping is medicine. The distance we’ve come just in the last decade is extraordinary, but unfortunately those giant strides for mankind come with stories that feel taken straight out of a sci-fi horror story.

As with the headline below. An Australian woman received a brain implant for epilepsy as part of a clinical trial. Before the procedure, she struggled even to leave her home due to the unpredictability and strength of her episodes. Afterwards, she could start living a full life, using the implant to plan her days and predict upcoming episodes. The woman described it as becoming a new person – she and the device bonded and created a symbiotic partnership. Then imagine the trauma of having that new part of yourself removed, because the company ran out of money. So, what do we do when personal interest clashes with business interest?

Link: https://www.technologyreview.com/2023/05/25/1073634/brain-implant-removed-against-her-will/

This story highlights multiple problems. We have laws that can’t keep up when development moves really fast. We have laws and regulations around privacy and medical rights. And then we have the fact that ownership of software is a lot more complicated than physical ownership of a thing, such as a hip replacement. Software is subject to things like intellectual property law. This was perhaps an extreme scenario – but the same thing could easily happen to something like… the software used to control your child’s insulin pump.

Wrapping up part two

If part one set the scene, we hope part two gave you some food for thought and made you want to get better at thinking “…but should we?” before building something new and shiny. 

In the last part, we will try to give you some useful tools for becoming more intentionally ethical and legal in your daily work.

Co-Authors

  • Lena Pejgan Nyström

    Lena has been building software in one shape or form since 1999, when she started out as a developer building Windows desktop applications (both impressive and scary – they are still alive and kicking). She later found her passion for testing, and even though her focus has shifted to building organisations and growing people, she is still an active voice in the testing community. Her core drive is continuous improvement, and she strongly believes we should all strive to challenge ourselves, our assumptions and the way things are done. Lena is the author and creator of “Would Heu-risk it?” (card deck and book), an avid blogger, international keynote speaker and workshop facilitator.

  • Mathias Jansson

    Mathias Jansson works as a legal counsel at a Swedish government agency. He currently works in project financing and has long experience in the application of public law, public access to information and secrecy, as well as privacy and personal data regulation. He is a passionate powerlifter and biker, and loves climbing mountains or taking seriously long walks in the woods. Some also say he is fun. Don’t ask him about history unless you are prepared for a lecture.