Fundamental Ethical Theories on Surveillance Technology

Let’s apply some fundamental ethical theories to the issue of a startup that sells surveillance software. This was an assignment for my CS ethics class and is therefore neither conclusive nor particularly novel. Some resources on the ethics of surveillance technology are linked at the bottom.

How would a utilitarian assess the scenario? Discuss the difference between how an act and a rule utilitarian might approach it.

Utilitarianism is a consequentialist theory: it judges the morality of actions by their consequences, not by any value inherent in the actions themselves. The provided scenario involves multiple actors: the employees of the startup, the employees of the governments and businesses using these surveillance tools, and the populations under surveillance (including everyone whose face does or does not appear in the training data of the described software). To assess this scenario, a utilitarian needs to identify the consequences of the actions taken for each of these actors. Two types of utilitarianism are act utilitarianism and rule utilitarianism. An act utilitarian would attempt to identify the consequences of this scenario in isolation and choose the series of actions that maximizes utility (total happiness), while a rule utilitarian would try to create maxims from this situation and universalize them with the consequences in mind. In other words, the rule utilitarian is concerned with the consequences of the rule describing the action if it were followed in all future cases, while the act utilitarian is concerned only with maximizing the utility of this specific situation.
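To make the contrast concrete, here is one common way to formalize the two decision procedures (my notation, not part of the assignment): the act utilitarian optimizes over individual actions, while the rule utilitarian optimizes over rules and then follows whichever rule wins.

```latex
% Act utilitarianism: choose the single action that maximizes total utility.
a^{*} = \arg\max_{a \in A} \; \sum_{i \in \text{actors}} u_i(a)

% Rule utilitarianism: choose the rule whose universal adoption maximizes
% total utility, then perform whatever action that rule prescribes.
r^{*} = \arg\max_{r \in R} \; \sum_{i \in \text{actors}} u_i(\text{everyone follows } r)
```

The disagreement between the two camps is, in effect, over which optimization problem we are obligated to solve.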

An act utilitarian would first look at the employees of the startup. The employees are compensated for selling this surveillance technology, but the startup may be exploiting many of them, creating an unsafe work environment and perpetuating inequality. More information about the nature of the startup is necessary to determine whether this action maximizes utility under this lens. According to the prompt, various governments and businesses purchase this technology, including shopping-mall operators, commercial real-estate facilities, and local law enforcement agencies. Recording each unfamiliar face may assist in the prevention and prosecution of various crimes, benefiting the employees of these organizations; the predictive modeling software has the same potential. However, flaws in this technology may lead to lawsuits and reputational damage for these governments and businesses. Once again, more information about the correctness, reliability, and security of this technology is necessary to ascertain its utility for these organizations precisely. Despite this, an act utilitarian may assume that the benefits to the organizations themselves outweigh the harms, as those harms are less probable than the perceived benefits.

The negative effects are more evident when viewing the impact of this technology on the populations under surveillance. It should be noted that an act utilitarian is not concerned with the violated principles of privacy and consent as such, but only with the harms this software can enact. Tracking unfamiliar faces may create dangerous situations depending on who obtains access to the data (imagine a stalker accessing the recorded footage to track their victims). The organizations themselves may use this technology for harmful purposes, including increasing the number of incarcerated people, enforcing biased and uneven policing, or excluding certain groups from specific places. This data may even be used in aggregate to enable disinformation and ethnic cleansing campaigns.

A rule utilitarian would approach this scenario by generating maxims from the described actions. Since this is a complex situation, multiple maxims can be generated; which ones depends on the framing chosen by the ethicist. One possible maxim is “it is moral to surveil people by collecting videos of unfamiliar people and using that data to detect unfamiliar faces.” Then the question remains: do the benefits of surveilling people in shopping malls, encounters with law enforcement, and other environments outweigh the consequences? Will this maximize utility? A rule utilitarian would likely favor dismissing this technology. It is easy to point to cases where mass surveillance increases racial inequity or even facilitates war crimes and genocide, decreasing utility on a scale far greater than the utility gained by prosecuting a small number of petty thieves (or whatever the software’s use case happens to be at the moment). If this action were ethical, it should maximize utility for everyone. Surveilling people does not, since it benefits only the organizations purchasing the software and the startup itself, and only partially and unevenly at that.

How would a deontologist assess the scenario? Which version of the categorical imperative is most useful here? How would a Rossian respond?

A deontologist assesses scenarios based upon the intrinsic value of the action itself rather than its effects, asking “Does the action conform to a moral rule or duty?” This is typically evaluated through the categorical imperative: an unconditional duty that must be adhered to regardless of the specific consequences. The most useful formulation of the categorical imperative in this situation is: “Act only in such a way that you could will that your maxim become a universal law.” As with rule utilitarianism, multiple maxims can be constructed due to the complexity of this scenario. One such maxim is “we can surveil people without their consent.” This maxim would likely fail the universalizability test for several reasons. The principle of consent is clearly violated, and universalizing the maxim would eradicate privacy, transforming the world into one of ubiquitous surveillance that no one could rationally will.

William David Ross expanded upon Kant’s theories to establish a more pragmatic and flexible form of deontology. He outlined a set of prima facie duties and contrasted them with actual duties. According to Rossian pluralism (as his theory is often called), there is no fixed hierarchy among these duties: it falls to the individual to weigh the prima facie duties in a particular context and determine their actual duty. This addresses the possibility of contradictions between different moral duties, a failure of the previously described deontological approach. Unfortunately, it is relatively easy to manipulate these duties to support any decision, a failure of this approach.

One Rossian may prioritize the prima facie duty of non-maleficence. The aforementioned possibilities of supporting unjust governance, bolstering racial inequity, and violating the privacy of entire populations all cause harm, violating the duty of non-maleficence and indicating to this Rossian that the actions taken within this scenario (the use of surveillance technologies) are not moral. Another Rossian could instead prioritize the duty of self-improvement, justifying the technology by pointing to the material benefit bestowed upon the startup and the organizations relying on it.

How would a social-contract theorist assess the scenario? Choose one social-contract theorist to focus on.

Social contract theory differs from the previously mentioned theories in that it is a political theory, not just an ethical one. It posits that moral obligations derive from a contract that binds people to a society. Accordingly, a social-contract theorist would focus primarily on the fact that governments are deploying this surveillance software sold by a private company. Thomas Hobbes is one notable social contract theorist. Influenced by early modern notions of universal laws of nature and mechanistic views of human behavior, he argued that people are inherently self-interested: they want to better their own circumstances, accrue wealth, and increase their status, among other motivations. The human state of nature is therefore one of chaos, and people enter social contracts, surrendering certain freedoms to a sovereign, to escape the state of nature and improve their circumstances; otherwise, there is no incentive for cooperation. A Hobbesian social-contract theorist may thus justify the government’s decision to deploy surveillance software, arguing that it enables the enforcement of the laws that keep humans out of the state of nature.

How would an Aristotelian virtue ethicist assess the scenario?

Virtue ethics emphasizes the role of moral character and virtue in decision-making, in contrast to the intrinsic goodness of an action or its consequences; it focuses on the actor who carries out the actions under contention. Aristotelian virtue ethicists view the virtues as a path to eudaimonia, or “human flourishing”, the highest end. Habit is the path to virtue: an actor needs to develop a disposition toward virtuousness. Virtue is not a resource to maximize but a golden mean between excess and deficiency across the twelve major virtues detailed by Aristotle. A virtuous actor needs the right degree of concern for each of these virtues in their decisions.

The ethicist would have to analyze whether the startup is acting virtuously in developing cameras and a predictive algorithm to detect unfamiliarity in the environment, and whether the consumers of these products are acting virtuously by using them. An imbalance across several virtues is evident. These actors lack justice, given the unequal treatment produced by the predictive algorithm. They have an excess of ambition, attempting to quantify unfamiliarity in all cases. They lack truthfulness, surveilling populations without their consent. They lack wit and friendliness, assuming all unfamiliar individuals are likely to be guilty and should be surveilled. An Aristotelian virtue ethicist would therefore consider the startup, as well as the governments and businesses purchasing these products, to be acting unethically and unvirtuously.

What is pickle if not insecurity persevering?

My team at Trail of Bits just released “Never a dill moment: Exploiting machine learning pickle files” where we address a critical security issue plaguing machine learning frameworks. Check it out and play with fickling today.
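The post goes into far more depth, but the root cause is easy to demonstrate. Here is a minimal sketch in plain Python (standard library only, not fickling’s API) of why loading an untrusted pickle, such as a downloaded ML model, amounts to running untrusted code:

```python
import os
import pickle

# pickle lets any object dictate how it is reconstructed via __reduce__,
# so a malicious payload can run arbitrary code at load time.
class Malicious:
    def __reduce__(self):
        # On unpickling, pickle will call os.system("echo pwned").
        return (os.system, ("echo pwned",))

payload = pickle.dumps(Malicious())

# Loading the payload executes the command; no model weights required.
pickle.loads(payload)
```

This is why it is worth statically inspecting pickle files before loading them rather than trusting their source.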

PrivacyRaven!

Nowadays, much of my time is spent developing and maintaining PrivacyRaven, a privacy testing library for deep learning. We recently published a blog post about PrivacyRaven on the Trail of Bits blog (also featuring my talk at Empire Hacking). If you want even more information, take a look at the PrivacyRaven repository.
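As a flavor of what “privacy testing” means here: one class of attack PrivacyRaven automates is model extraction, where an attacker clones a model using only its label outputs. The sketch below is a library-agnostic illustration in plain PyTorch, with made-up model and function names; it is not PrivacyRaven’s actual API.

```python
import torch
import torch.nn as nn

# Pretend black-box victim: the attacker can only see its predicted labels.
victim = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

def query_victim(x):
    # The attacker observes labels only, never gradients or logits.
    with torch.no_grad():
        return victim(x).argmax(dim=1)

# Substitute model the attacker trains to mimic the victim.
substitute = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.Adam(substitute.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):
    # Random query data stands in for an attacker's unlabeled dataset.
    queries = torch.rand(64, 1, 28, 28)
    stolen_labels = query_victim(queries)
    optimizer.zero_grad()
    loss = loss_fn(substitute(queries), stolen_labels)
    loss.backward()
    optimizer.step()
# After enough queries, `substitute` approximates the victim's behavior.
```

The point of a library like PrivacyRaven is to package attacks like this into repeatable tests so you can measure how much a deployed model leaks.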

I’m excited to continue working on PrivacyRaven. This wouldn’t have been possible without the help of my mentor, Jim, and the rest of Trail of Bits.

Feel free to reach out to me at suhashussain1 'at' gmail 'dot' com or @suhackerr (even if it’s not about PrivacyRaven).