Dangerous Algorithms

Jessica Beaver
3 min read · Mar 22, 2021

The moral imperatives in programming

In 2010, on the message boards of the techno-futurist website LessWrong, a user going by Roko proposed a thought experiment: What if, in the future, a malevolent AI were to come along and “retroactively punish those who did not help bring about its existence, including those who merely knew about the possible development of such a being.”¹ The reaction of LessWrong’s founder Eliezer Yudkowsky was…jarring, and led Roko’s Basilisk to become the stuff of internet legend:

Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.
This post was STUPID.

Roko’s Basilisk implies that there could, in the future, be runaway malevolent AIs that wreak havoc on humanity, beyond any human control. Anxiety over the singularity receives a lot of attention and makes for a great movie plot, but the more sinister truth is that we are already surrounded by microcosms of dangerous code that could very well pave the way for our Terminator overlords.

In 2017, a video shared by a Facebook employee in Nigeria went viral. The clip showed a white man walking up to a soap dispenser, sticking his hand underneath, and receiving a dollop of soap. Next, a black man performed the same action, sticking his hand underneath and waving it around, but no soap was dispensed. The men can be heard saying “too black,” which is exactly right. The soap dispenser was programmed to release soap when the infrared beam shining out of its base was reflected back onto a sensor. The darker the skin, the more light it absorbs, so past a certain degree of melanin a hand no longer reflects enough light to register with the sensor. This was, no doubt, an unintended side-effect, but it is an example of how our biases do not require malice to become harmful.
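To make that failure mode concrete, here is a minimal sketch of the kind of logic at fault, written in Python with entirely hypothetical sensor values and threshold (the real dispenser’s firmware is not public). The dispenser fires only when the reflected infrared signal clears a fixed cutoff, and a cutoff implicitly calibrated on lighter, more reflective skin quietly excludes darker skin:

```python
# Hypothetical sketch of a reflectance-triggered soap dispenser.
# Sensor values and the threshold are illustrative, not real hardware specs.

REFLECTANCE_THRESHOLD = 0.5  # implicitly calibrated on lighter skin


def ir_sensor_reading(skin_reflectance: float) -> float:
    """Fraction of the emitted infrared light bounced back to the sensor."""
    # Real hardware would add noise, distance falloff, ambient light, etc.
    return skin_reflectance


def should_dispense(skin_reflectance: float) -> bool:
    # The bug is not malice; it is a single threshold chosen without
    # testing across the full range of skin tones.
    return ir_sensor_reading(skin_reflectance) > REFLECTANCE_THRESHOLD


for label, reflectance in [("lighter skin", 0.65), ("darker skin", 0.35)]:
    print(f"{label}: {'soap' if should_dispense(reflectance) else 'no soap'}")
```

Tested only on hands like the developers’ own, the threshold looks perfectly correct; the bias stays invisible until someone outside that sample walks up to the dispenser.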

The soap dispenser can seem benign enough, but considering the rate at which the world around us is being automated and translated into algorithms, one might wonder how we can account for all of these “unintended side-effects.” Unfortunately, there is no centralized governing body, and no army of hobbyist programmers to pore over algorithms the way amateur astronomers help discover far-off galaxies. The burden remains on the individual or team that writes the code.

In 2018, for the first time in 26 years, the Association for Computing Machinery updated its code of ethics to include these sorts of ethical considerations. Closely mimicking the Hippocratic oath to “do no harm,” the updated principles include²:

1.1 Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing.

3.1 Ensure that the public good is the central concern during all professional computing work.

3.7 Recognize and take special care of systems that become integrated into the infrastructure of society.

Throughout our careers as software developers, it is important to keep these principles in mind, to ask uncomfortable questions about the implications of our work when necessary, and, to paraphrase Dr. Ian Malcolm, to be less concerned with whether we can do something and more with whether we should.
