Big Data and Large Numbers of People: the Need for Group Privacy
by Prof. Luciano Floridi, Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, United Kingdom; luciano.floridi@oii.ox.ac.uk
One of the consequences of the information revolution (Floridi 2014) is that most of the debate on data protection focuses on individual privacy. How can the latter be protected while taking advantage of the enormous potential offered by ever-bigger data sets (Big Data) and ever-smarter algorithms and applications? The tension is sometimes presented as asymmetric: between the ethics of privacy and the politics of security. In fact, it is ultimately ethical.
Two moral duties need to be reconciled: fostering human rights and improving human welfare. The tension is obvious in medical contexts and biomedical Big Data, for example, where the protection of patients’ records and the cure or prevention of diseases must go hand in hand.
Currently, the balance between these two moral duties towards human rights and welfare is implicitly understood within a classic framework: the beneficiaries of the two duties are taken to be the individual on one side and the society to which the individual belongs on the other. At first sight, this may seem unproblematic, and we work on the assumption that these are the only two “weights” on the two sides of the scale. Such a framework is not mistaken, but it is dangerously reductive, and it should be expanded urgently. For there is a third “weight” that must be taken into account by our data protection framework: that of groups and their privacy. Privacy as a group right is a right held by a group as a group, rather than by its members severally. It is the group, not its members, that is correctly identified as the right-holder. A typical example is the right to self-determination, which is held by a nation as a whole.
The idea that groups may have a right to privacy is not new, and it is open to debate, but it has not yet received all the attention it deserves, although it is becoming increasingly important. This is because, by far, most people are not targeted by digital technologies as individuals but as members of specific groups, and it is the groups that are the really interesting focus, as carriers of rights, values, and potential risks. Think of the owners of a particular kind of car, people who like a certain kind of music or live in a certain postal code area, carriers of a specific gene, people affected by a particular disease, and so forth. Big Data is more likely to treat types (of customers, users, citizens, demographic groups, etc.) than tokens (you, Alice, me…), and hence groups rather than individuals. But re-identifiable groups are ipso facto targetable groups. It is therefore a very dangerous fallacy to think that, if we protect the personal data that identify individuals, the protection of groups will take care of itself.
Such an “atomistic” approach (on the underlying ontology, see Floridi 2003), according to which taking care of each member separately means that the group will automatically be fine too, is at the root of current legislation everywhere, but especially in Europe. What we should acknowledge is that both friendly and hostile users of Big Data may not care about Alice at all, but only about whether Alice, whoever she is, belongs to the group that regularly goes to the local church, mosque, or synagogue, uses Grindr, has visited a hospital licensed to carry out abortions, or indeed shares any feature of your choice. In military terminology, Alice is hardly ever a High Value Target, like a special and unique building. She is usually part of a High Pay-off Target, like a tank in a column of tanks. It is the column that matters.
As I have argued elsewhere (Floridi 2013), our current ethical approach is too anthropocentric (only natural persons count) and atomistic (only the single individual counts). We need to be more inclusive, because we are underestimating the risks involved in opening anonymised personal data to public use in cases in which groups of people may still be easily identified and targeted. Such inclusiveness should not be too hard to achieve. After all, we already accept as ordinary the fact that groups, as agents, may infringe on someone’s privacy. In the United States, collective lawsuits (class actions), in which a group may sue a person or another group, are considered normal. And in Europe, consumer organisations regularly bring claims on behalf of the groups they represent. Clearly, there are cases in which the protection of a right requires a balance between the agents issuing the action and the patients receiving it.
There are very few Moby-Dicks; most of us are sardines. The individual sardine may believe that the encircling net is trying to catch it. It is not: it is trying to catch the whole shoal. It is therefore the shoal that needs to be protected, if the sardine is to be saved. An ethics that addresses each of us as if we were all special Moby-Dicks may be flattering, and it is not mistaken, but it needs to be upgraded urgently. Sometimes the only way to protect the individual is to protect the group to which the individual belongs, preferably before any disaster happens.
References
Floridi, Luciano. 2003. “Informational Realism.” In Selected Papers from the Conference on Computers and Philosophy, Volume 37.
Floridi, Luciano. 2013. The Ethics of Information. Oxford: Oxford University Press.
Floridi, Luciano. 2014. The Fourth Revolution: How the Infosphere is Reshaping Human Reality. Oxford: Oxford University Press.