The Jurassic Park Problem & Software Development (Part 1)

Can we, are we legally allowed to, and… should we?

A lot of the big achievements and progress in human history come down to someone asking “I wonder if I could…” and gently, or harshly, bending a few direct or indirect rules. If we had always played it safe, who knows how many of our species’ innovations we would have missed out on. A lot of these attempts end up not being technically feasible (at least not then, by those people, in that context). Some succeed but are later outlawed due to the dangers they turn out to represent. Some prove to be safe and helpful. And a lot end up being technically legal and mostly ok, but abused by unscrupulous people. In Jurassic Park, this is presented through a man’s dream of living among dinosaurs. He ignores warnings, makes a number of morally “doubtful” choices, mistakenly trusts the wrong person and in the end – utter chaos.

Technology has the power to change our quality of life, or even the direction of mankind.

With the use of technical innovations we are able to communicate not only across the world, but across the stars! We are able to perform surgeries and medical replacements that were science fiction only decades ago. When building software, we talk a lot about what is possible from a technical point of view, what a user would want and what would benefit the business. We don’t talk as much about other dimensions, like the ethical and legal implications of a solution.

In this three-part article we will look into the three axes of technically possible, ethically and/or morally right, and legally acceptable, and how they can work in tandem or against each other. Part one, this post, will focus on giving an introduction to the concepts. Part two dives into what happens when the forces don’t align. And finally, in part three, we will aim to give suggestions on how you can become more conscious and deliberate when it comes to choices of ethics, morality and legal matters.


Can we build it?

When it comes to technically possible – we are rather quickly running out of things that are truly impossible. We can put rockets on Mars; heck, we can even debug and patch software on Mars. We are getting pretty consistent at cloning animals and have had excellent results in studies predicting diabetes and cancer using machine learning.

Animated gif from the movie Minority Report

In 2002, Minority Report came out. 25 at the time, Lena remembers how unbelievably sci-fi it felt to her – no different to Star Trek. The technology used might as well have been magic to her at the time. But what felt like impossible feats to her were actually just educated extrapolations from cutting-edge technologies of the day. The movie is argued to have shaped technological advancement for decades to come: over 100 patents have been issued for ideas first presented in Minority Report.

Looking back, a lot of what she felt was impossible 20 years ago is now part of our normal, daily lives.

  • Voice activation is used in a wide range of devices from phones to elevators to automatic switchboards. 
  • Gesture recognition is used in everything from gaming, video calls and phones to healthcare and medical devices.
  • Face recognition software is standard in things like unlocking your phone or building an avatar in gaming, but is also starting to be used for law enforcement, with varying degrees of automation. 
  • Camera surveillance processing is used to determine parking occupancy (the green and red lights telling you where there are free spots), in self-service stores like Amazon Go, and for issuing speeding tickets.
  • Flexible displays, wearable technology and personalized ads are so common today it’s hard to even grasp that she would have considered them impossible not long ago.

With that said – more interesting than technically possible is perhaps technically feasible. Can we build this with the available team and resources, at a cost and within a time frame that is reasonable, within existing constraints?


Are we legally allowed to?

Mathias really likes to tell a story from when he got his law degree: a professor told him, “Congratulations, you can now with the utmost confidence and authority say: Well, that depends…” Legal matters are seldom clear and binary; outcomes depend on minute differences in circumstances, arguments and justifications. One should also remember that the law is argued, interpreted and applied by humans (well, lawyers at least) in legal departments of companies, law firms and finally in the courts.

Furthermore, we tend to regulate issues which are in the public discourse at the moment, which means that we don’t regulate science fiction but the actually possible – and the plausible, if we’re lucky. Most of the time legislation lags years behind technical development. This also means that what is legal or illegal at any given time is shaped by public discourse and political needs.

The construction of laws is in itself complex: it is difficult to find the right balance between detailed enough to be understandable and useful, but still flexible enough to adapt to scenarios that we did not consider (or that were not even possible at that point in time).

That complexity increases with the geographical area the law is meant to cover – it’s easier to set rules for a small geographical area than for something that should work for the entire EU, or even the world.

Regulation can come in at different stages of a product’s life. In some cases possession itself is regulated, for example when it comes to firearms. In other cases usage is regulated: there is currently no law prohibiting you from trying to build Archimedes’ death ray; you are, however, not allowed to fry people with it. And in most cases manufacturing, environmental and safety standards apply.

And laws and regulations can seem to contradict each other. If you operate in different regions or different businesses, you might need to tread very carefully when deciding which regulation to apply. You need to have a solid understanding, or great legal connections, to make sure you are not doing something wrong.

Doing business in the EU, there are a lot of abbreviations you need to be aware of, like 

  • GDPR – General Data Protection Regulation
  • DSA – Digital Services Act
  • CRA – Cyber Resilience Act
  • AI Act – European Artificial Intelligence Act
  • NIS2 – Network and Information Security Directive
  • DORA – Digital Operational Resilience Act

In general, they all regulate how you build systems, what levels of security and resilience you need to provide, and how to protect individuals and critical systems in an increasingly globalized reality.

From a legal perspective – we are currently very interested in privacy and consumer protection: protecting personal information, regulating system structures in order to safeguard it, and protecting digital infrastructure. And of course: AI.

Some areas where we are currently seeing conflicts and interesting discussions are:

  • Benefits and challenges of sharing data between e.g. different government agencies
  • The possibilities of cloud services vs. the importance of protection of personal data
  • AI innovation vs. copyright law

And lastly – remember: the law is also created by human politicians. Meaning that if we want it badly enough, it will be made legal. Somehow, somewhere.

Meme from Star Wars saying "My lord, is that legal?" and "I will make it legal"

And should we? – Ethics and morals

Morals refer to a personal sense of right and wrong – your own code of conduct, if you will. Ethics refers more to principles of “good” versus “evil” that are generally agreed upon by a larger community. Sometimes they are clearly different, sometimes they blend together. We will use both terms throughout this series, and we might sometimes fail to separate the two clearly enough, or use them in a way that you do not agree with. Hopefully not, but likely.

Moral philosophy is situational and requires you to assign value to different things and concepts. What is a human life worth? What is privacy worth? What is your personal safety worth? What is convenience worth?

You will need to weigh these things against each other in order to decide whether something is ethically justifiable.  

Does the end justify the means or should we consider what we have to give up in order to gain other things? 

Unfortunately this is a greyscale. Not everyone’s morals are the same, and sometimes what we think is our limit turns out not to be when put to the test. Different cultures have different views. Different generations. Even what is agreed upon in “modern civilization” is interpreted differently in different ages or parts of the world. For some the most important thing is feeling safe, for others it is privacy and freedom and for you it might be something else completely.

For one person, it is unthinkable to work on something that does not fully align with their morals, while someone else is ok with compromising when told to, even if they don’t like it. And for the vast majority – ethics is something we don’t really consider until we are asked to do something that goes against one of our core values, or until it becomes too hard to turn a blind eye. So reader, which one are you? Are you the person who leaves the first time you are asked not to report a bug, or are you the one who stays silent until the end – claiming you were just following orders?

As for technology – all things can likely be exploited and/or twisted into doing harm in some way. So purely good software likely does not, and will not ever, exist. The best we can strive for might be “mostly harmless”. But on the other hand: this goes both ways. Even things built to do harm might be bent into something beneficial for society at large.

Some software, or technology in general, is… mostly harmless. It was built with the intent to help, to make life easier or better, and rarely causes problems or issues for the user.

As an example, Mathias likes to ride motorcycles. Modern motorcycles have ride modes: with one press of a button you can modulate throttle response, traction control and ABS to help the rider in difficult conditions, like rain, or to simply change the experience of the ride. As long as these systems are self-contained, they are unlikely to be tampered with from the outside, and they won’t use your personal information or ride statistics in harmful ways.
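
To make “self-contained” concrete, here is a minimal sketch (in Python, with hypothetical mode names and parameter values – not taken from any real motorcycle’s firmware) of what such a ride-mode switch could look like. The point is architectural: the controller only maps a button press to fixed parameters, and no rider data is collected or sent anywhere.

    from dataclasses import dataclass

    # Hypothetical ride-mode parameters - illustrative values only.
    @dataclass(frozen=True)
    class RideMode:
        name: str
        throttle_response: float  # 0.0 (soft) to 1.0 (direct)
        traction_control: int     # intervention level, higher = more assistance
        abs_level: int            # anti-lock braking intervention level

    # One button press selects a mode; nothing is logged or transmitted.
    RIDE_MODES = {
        "rain":  RideMode("rain",  throttle_response=0.4, traction_control=3, abs_level=3),
        "road":  RideMode("road",  throttle_response=0.7, traction_control=2, abs_level=2),
        "sport": RideMode("sport", throttle_response=1.0, traction_control=1, abs_level=1),
    }

    def select_mode(name: str) -> RideMode:
        """Return the parameters for the requested mode, defaulting to road."""
        return RIDE_MODES.get(name, RIDE_MODES["road"])

    print(select_mode("rain"))

The ethics-relevant property here is what the code does not do: no network calls, no telemetry, no persistence of rider behaviour.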

In mostly neutral territory we have tech that can be used for harm without changing the actual use case, and things where you can twist the use case into something harmful. If used as intended – all good. But if it falls into the wrong hands it can cause irreparable harm.

Some examples:

  • Tracking and collecting user data as a way of knowing our customers: Used right, it can enhance the customer experience immensely.
  • Facial recognition software: Simplifies our everyday life in many small ways, can be a great accessibility tool and it can be used to track down criminals.
  • Health data and menstrual trackers: This type of technology has the potential to improve our way of life, even save lives. Software like Ancestry, or different kinds of genetic registries, can be used not only to find relatives but to identify criminals and solve tough cases.
  • Passenger name registries can be used to fight crime and make it easier to identify victims.
  • Tech like AirTags can be used to track your property, even to keep your kids safe if they get lost.

All of the above are great innovations that can improve our lives. Used wrongly, or falling into the wrong hands, they can also be used to breach our privacy and hurt us. Imagine if we end up in a society where health care can be refused because you didn’t take care of your body, where that menstrual tracker can be used as evidence in a case against you, or where facial recognition software is used to keep control of the masses in a future dictatorship.

Moving into hecking awful territory, we have a group of things so inherently dangerous that you need safeguards in order to do any kind of good with them – and whatever good you can do is still conditional. Think nuclear weapons, sarin gas, white phosphorus and other kinds of chemical and biological weapons, regardless of origin. Noteworthy is that a lot of these were originally created with good intent. We made them thinking they could be used for societal good – sarin and Zyklon B, for example, were originally developed as pesticides (although both were pretty quickly identified as potent chemical weapons).

So how about true evil? Things intended to do harm in terrible ways, intended for destruction?

Well, we found it very hard to find examples of proper, not-up-for-discussion, true evil – because humans are not supervillains. It is very rare that someone sits down and deliberately sets out to create the most terrible, unethical thing imaginable, with no level of good intent. There is the potential in us to use things in the worst imaginable way – but most things are created with some kind of beneficial use case. Even the first computer virus was built as a security test: the Creeper program, often regarded as the first virus, was created in 1971 by Bob Thomas of BBN to see if a self-replicating program was possible. With each new machine infected, Creeper would try to remove itself from the previous host. It had no malicious intent and only displayed a simple message: “I’M THE CREEPER. CATCH ME IF YOU CAN!”

And just as good can be warped into bad – some bad things can also have “good” use cases. Manipulation of viruses has led to breakthroughs in healthcare. The invention of the nuclear bomb led to nuclear power, which, while definitely not purely good, was a breakthrough in “cheap” energy.

GPS technology was originally created by the US Department of Defense as a military navigation tool.

What about breaking the law to do good? An interesting documentary called “The Hacker – breaking the law to stop criminals” tells the story of a hacker who has taken action to help elderly people who are scammed over the phone. The hacker takes control of the criminals’ computers and sends all material directly to the authorities. Where this falls on the spectrum probably depends on your own moral compass – does the end justify the means?


Wrapping up part one

To sum this all up: humans repeatedly do bad things, starting from the very dangerous question “Can I do this?” without also considering “…and should I?”.

We hope you enjoyed this first part. In the next one we will look into some interesting scenarios and dilemmas, and in the last part we will leave you with some suggestions on how to become more intentionally ethical and legal.



Co-Authors

  • Lena Pejgan Nyström

    Lena has been building software in one shape or form since 1999, when she started out as a developer building Windows desktop applications (both impressive and scary – they are still alive and kicking). She later found her passion for testing, and even though her focus has shifted to building organizations and growing people, she is still an active voice in the testing community. Her core drive is continuous improvement and she strongly believes we all should strive to challenge ourselves, our assumptions and the way things are done. Lena is the author and creator of “Would Heu-risk it?” (card deck and book), an avid blogger, international keynote speaker and workshop facilitator.

  • Mathias Jansson

    Mathias Jansson works as a legal counsel at a Swedish government agency. He is currently working in project financing and has long experience in the application of public law, public access to information and secrecy, as well as integrity and personal information regulation. He is a passionate powerlifter and biker, and loves climbing mountains or taking seriously long walks in the woods. Some also say he is fun. Don’t ask him about history unless you are prepared for a lecture.