Why Data Privacy Based on Consent Is Impossible
by Scott Berinato

For a philosopher, Helen Nissenbaum is a surprisingly active participant in shaping how we collect, use, and protect personal data. Nissenbaum, who earned her PhD from Stanford, is a professor of information science at Cornell Tech, New York City, where she focuses on the intersection of politics, ethics, and values in technology and digital media — the hard stuff. Her framework for understanding digital privacy has deeply influenced real-world policy.
In addition to several books and countless papers, she’s also coauthored privacy plug-ins for web browsers including TrackMeNot, AdNauseam, and Adnostic. Nissenbaum views these pieces of code as small efforts at rationalizing a marketplace where opaque consent agreements give consumers little bargaining power against data collectors, who extract as much information, and as much value from that information, as they can. Meanwhile, these practices offer an indefinite value proposition to consumers while compromising the integrity of digital media, social institutions, and individual security.
HBR senior editor Scott Berinato spoke with Nissenbaum about the concept of consent, a good definition of privacy, and why privacy is a moral issue. The following excerpts from their conversation have been edited for clarity and length.
Crummy Consent
HBR: You often sound frustrated when you talk about the idea of consent as a privacy mechanism. Why?
Nissenbaum: Oh, it’s just such a [long pause] — look, the operationalization of consent is just so, so crummy. For example, as part of GDPR, we’re now constantly seeing pop-ups that say, “Hey, we use cookies — click here.” This doesn’t help. You have no idea what you’re doing, what you’re consenting to. A meaningful choice would be, say, “I’m OK that you’re using cookies to track me” or “I don’t want to be tracked but still want to enjoy the service” or “It’s fine to use cookies for this particular transaction, but throw unnecessary data out and never share it with others.” But none of these choices are provided. In what sense is this a matter of choosing (versus mere picking)?
The farce of consent as currently deployed is probably doing more harm than good: it gives the misimpression of meaningful control, a control we guiltily cede because we are too ignorant to do otherwise and are impatient for, or need, the proffered service. There is a strong sense that consent is still fundamental to respecting people’s privacy. In some cases, yes, consent is essential. But what we have today is not really consent.
It still feels pretty clear-cut to me. I chose to check the box.
Think of it this way. If I ask you for your ZIP code, and you agree to give it to me, what have you consented to?
I’ve agreed to let you use my ZIP code for some purpose, maybe marketing.
Maybe. But did you consent to share your ZIP code with me, or did you consent to targeted marketing? I can combine your ZIP code with other information I have about you to infer your name and precise address and phone number. Did you consent to that? Would you? I may be able to build a financial profile of you based on your community. Did you consent to be part of that? I can target political ads at your neighbors based on what you tell me. Did you consent to that?
The calculus is getting more complicated.
Especially in translating meaningful natural-language terms into representations of those terms in a machine. You get a pop-up that asks if it’s OK to collect location data. What is location data? On your device, location may be operationalized in a certain way, for example, as GPS latitude and longitude. But there are many other ways I can infer your location. Location can be obtained through an IP address. Or suppose you’re searching for the arrival time of a flight and you’ve been discussing this flight from Paris. You text a friend, “I’ll pick you up at Terminal A at 3.” There’s no geographic tracking here. Have you consented to turning over this location data? Are you consenting to location or to GPS coordinates?
You might think that consumers and machines (in this case, the device or app) mean the same thing by location — namely, GPS coordinates, which are so precise. Not the case. In a research project, I (with colleagues) discovered that people are far less bothered about sharing latitude and longitude than about sharing location data such as “at the hospital” or “in X store” that has semantic content. And when you tell subjects what can be inferred from location data, they get even more freaked out. So just asking for consent to acquire location isn’t providing the details people need to make an informed choice.
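As a rough illustration of that gap, here is a minimal sketch showing how the “same” location can be operationalized in very different ways, so that consenting to one representation says little about the others. The values and field names are invented for this example, not drawn from the research Nissenbaum describes.

```python
# A minimal, illustrative sketch: three operationalizations of "location".
# All values are made up; none come from a real device or user.

location_as_coordinates = {"lat": 40.7552, "lon": -73.9566}               # precise GPS fix
location_from_network = {"city": "New York", "source": "IP geolocation"}  # coarse network inference
location_as_semantics = {"place": "at the hospital",                      # semantic content,
                         "source": "inferred from messages"}              # inferred rather than measured

# A consent prompt that asks only about "location data" collapses these
# distinctions, even though people react to them very differently.
for representation in (location_as_coordinates, location_from_network, location_as_semantics):
    print(representation)
```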
So consumers don’t know what they’re consenting to, data collectors can’t say for sure how they’ll use the information, and the two sides may not see eye to eye on what they’re actually agreeing to share. Now this all sounds intractable.
Even if you tried to create totally transparent consent, you couldn’t. Well-meaning companies don’t know everything that happens with the data they collect, particularly those that have succumbed, against their better judgment, to the pressures of online tracking and behavioral targeting. They don’t know where the data is going or how it will be utilized. It’s an ever-changing landscape. On the one hand, requiring consent for every use isn’t reasonable and may prevent as many good outcomes as bad ones. Imagine if new science suggests a connection between a property, or cluster of properties, and a particular cancer treatment. Returning for consent may impose obstacles that are impossible to overcome.
But on the other hand, what exactly does it mean to grant consent no matter what uses may come up in the future? Think about a surgeon explaining a procedure to a patient in great medical detail and then asking, “Are you OK with this?” We kid ourselves if we believe that consent is all that stands in the way of surgery and outcome. Most of us say OK not because we deeply grasp the details and ramifications but because we trust the institutions that educate and train surgeons, the integrity of the medical domain, and — at worst — the self-interest of the hospitals and surgeons wishing for positive acclaim and to avoid being sued.
It’s not that we don’t know what consent means; it’s that getting to a point where we understand the true sense of what consent means is impossible.
I hear the passion in your voice.
Stop thinking about consent! It isn’t possible, and it isn’t right. I respectfully but strongly disagree with my colleagues who believe that incremental improvement in consent mechanisms is the solution. My position is not that modeling “true” consent in this age of digital technologies is hard or even impossible, but that in the end, it’s simply not a measure of privacy! Take the Cambridge Analytica case. Very enlightened people complained, “Facebook shared the information without consent.” But was it really about consent? Based on all our behaviors, all the time, I promise you, if they had sought consent, they’d have gotten it. That’s not what outraged us. What outraged us was what Cambridge Analytica was doing, and has done, to democratic institutions and the fact that Facebook was so craven they didn’t care. Consent wouldn’t have mattered; it would have easily been attained.
We need to focus on approaches — “postconsent” approaches — that still rely on consent but not only on consent. Once we admit that consent is an inappropriate safeguard, we can ask, “Where do we go from here? How does a society address privacy and data collection?”
Context and Dataflows
So that’s my question: Where do we go from here? If consent doesn’t work, what does?
In my work, I support a view of privacy as a balanced value. Yes, privacy promotes the interests of data subjects — note interests, not only expressed preferences. But we must go beyond the interests of data subjects and consider the spread of interests across other affected parties, which may be in conflict. Some economists would say an interest-based analysis is enough. But I take it one step further and look at the implications beyond individuals and individual stakeholders. Following George Mason University professor Priscilla Regan, we cannot ignore privacy’s societal value. The right conception of privacy understands the role privacy plays in promoting societal values, such as education, justice, liberty, autonomy, and so forth. And finally, privacy promotes contextual or institutional values. Individual consent may be a mechanism for expressed preferences, and may even be a mechanism for promoting interests, but we cannot ignore the critical role privacy plays in judiciously constraining dataflows to promote societal and contextual (or domain-specific) values.
You’ve used the term “dataflows” a couple of times. I always thought of privacy as a transaction between the owner of the information and those who want access to it. Do you think of it differently?
My definition of privacy is “an appropriate flow of information” (or, “data,” if you prefer). If you imagine a river, you can think about ways in which we can shape its flow. We can pause, dam, or divert it with different means and for different reasons. Scott, you asked for my phone number, and I gave it to you. Even in that simple transaction there was a flow of data about me to you. It was a flow that was, in this instance, constrained by consent, because you were polite enough to ask. I realize you could have gotten my number by some other means, and that may or may not have been wrong (for example, violating privacy), but the dataflow would have been different. And, I should say, I expected you would not share my phone number with others not because there’s a law preventing this or because I said so, but because there’s an implicit understanding — a norm, if you will — of confidentiality. One could venture further and speculate that, in these circumstances and in the capacities in which both of us are acting, such behaviors are important for promoting trust and expressing respect.
For different dataflows there are different constraints. When a judge requests information, it’s actually a command. Or when filing a tax return, you are required by law to provide various fields of information. You don’t decide to do those transactions; they’re required. The IRS, likewise, is constrained in what it can do — as we know, it is bound to not release this information except under extremely limited conditions. Sherlock Holmes acquired data with no transaction at all. He just used inference. That’s a different flow, and one that’s becoming more important for us to evaluate as machine learning begins to infer our personal data.
For the theory of contextual integrity, which I’ve just described in very general terms, information flows are primitive — they are the basic building block for privacy. Specifically, the theory posits five parameters to describe the flow in order to properly assess whether the flows in question threaten privacy. (These parameters are sender, recipient, subject, information type, and transmission principle.) Briefly, according to the theory of contextual integrity, appropriate information flows conform to legitimate informational norms. The theory presumptively favors entrenched norms — basically, reflecting what most people expect — but in light of so many changes and challenges from digital technologies, it allows for norms to change — sometimes slowly, other times rapidly — not because these changes are foisted upon us by tech companies, but because they promote interests and values.
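To make those five parameters concrete, here is a minimal sketch in Python. It is not Nissenbaum’s own formalization of contextual integrity; the medical context, the norms, and all the names are illustrative assumptions. It shows only how a flow described by the five parameters might be checked against entrenched informational norms.

```python
from dataclasses import dataclass, fields

# The five parameters of an information flow in the theory of contextual integrity:
# sender, recipient, subject, information type, transmission principle.
@dataclass(frozen=True)
class InformationFlow:
    sender: str
    recipient: str
    subject: str
    information_type: str
    transmission_principle: str

def matches(flow: InformationFlow, norm: InformationFlow) -> bool:
    # A norm matches a flow when every parameter agrees ("*" in a norm means "any value").
    return all(
        getattr(norm, f.name) in ("*", getattr(flow, f.name))
        for f in fields(InformationFlow)
    )

def respects_contextual_integrity(flow: InformationFlow, norms: list[InformationFlow]) -> bool:
    # A flow that conforms to some entrenched, legitimate norm is presumptively appropriate.
    return any(matches(flow, norm) for norm in norms)

# Hypothetical entrenched norms for a (highly simplified) medical context.
MEDICAL_NORMS = [
    InformationFlow("patient", "physician", "patient", "*", "confidentiality"),
    InformationFlow("physician", "specialist", "patient", "diagnosis", "referral with consent"),
]

appropriate = InformationFlow("patient", "physician", "patient", "diagnosis", "confidentiality")
suspect = InformationFlow("physician", "advertiser", "patient", "diagnosis", "sale")

print(respects_contextual_integrity(appropriate, MEDICAL_NORMS))  # True
print(respects_contextual_integrity(suspect, MEDICAL_NORMS))      # False
```

A fuller analysis under the theory would also ask whether the norms themselves are legitimate, that is, whether they promote the relevant interests and values, rather than simply checking conformance to whatever norms happen to be entrenched.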
And the appropriate constraint depends on the context? Sometimes you dam it, sometimes you divert it, sometimes you let it flow freely?
Yes. That’s it. Privacy requires appropriate constraints on dataflows, typically between the data subject and the party who is collecting the data. But these constraints may also apply to flows between third parties, data collectors, and others, where the data subject is not directly party to them. Thus, the IRS may not share a candidate’s information with a political opponent, but a teacher is obliged to inform a parent about a 10-year-old student’s academic performance (whether or not the kid desires that). Appropriate flow is the be-all and end-all.
I’ve always thought that a good definition of privacy was one that was about the right to selectively reveal oneself as one sees fit. What matters is that the individual retains the right to it.
No! No! I don’t believe what’s worthy of protection is fundamentally based on only the individual’s preferences or interests. The meaning of privacy I want to defend isn’t just about what I want as a user, consumer, citizen, family member, etc. Yes, perhaps in certain kinds of relationships, your definition works. In a friendship or with other social acquaintances, for example, one chooses what information to reveal or not. In a job interview, although the candidate may be allowed the choice to reveal certain information like religious affiliation, this may not be so for information such as past work experiences. But in my view, the basic assumption that privacy is always about the right of an individual to selectively reveal gets us off on the wrong foot. I can imagine cases where you think it’s OK for people to be profiled with or without consent and whether or not it is strictly in their interests, not because we are trading privacy for other values, but because a right to privacy is one that is already bounded, or balanced.
Privacy and the Greater Good
Is privacy a moral issue to you? Are some data-collection practices just wrong despite their value or our consent to them?
Yes, privacy is a value that carries moral weight, but allow me to split your question into two. First, “yes” to the question about whether some data-collection practices are wrong even if data subjects consent. One only needs to scan the innumerable “privacy” policies each of us encounters and to which we implicitly consent to know this. Regulators turn a blind eye because even if there are small harms and indignities for data subjects, they have been persuaded by the much larger benefits to business. That is, the benefits outweigh the costs, even if not evenly spread. But there are deeper reasons, some extremely difficult.
The analogy with environmental conservation can help. Imagine that I own forested land and a paper company offers to purchase and harvest the trees. Treating this as a business proposition, I may decide it’s a good deal. But if one takes into consideration the future costs, the external costs, all those things that affect not just the two parties in question, then chopping down that forest is a problem.
I don’t think even a hard-nosed economist would dismiss such considerations; presumably one can perform a rigorous economic analysis that accounts for future and external costs. With respect to privacy, tough questions to confront include what to do when individuals consent to share but in so doing they compromise others who are connected with them in certain ways, whether in social networks, common genetics, or merely in shared profiles.
Are there cases where you think policies around privacy should be focused on the greater good, not just protecting the individual?
For sure. Some economists would look at a social media platform and say, “Yay for people who extract value where others couldn’t,” and leave it at that. But once society understands that the policies we have in place create systematic imbalances, and may even undermine critical societal institutions, the situation calls for recalibration. Presently we accept that social media platforms have the right to own an individual’s data based purely on the fact that the individual utilizes that platform, but we need to scrutinize this assumption. There is so much untapped value, as well as potential for societal harm. We need to rejigger social policy to achieve a better distribution of the benefits while minimizing harms.
Do you mean so that everyone, not just those who collect data, can access the data’s value? Like sharing medical data for better public health.
Yes. That’s one of my favorite examples, actually. Insurance companies receive highly detailed, highly structured data about patients. By law in the United States, they have access to and rights over this huge repository. There is a lot of value in it. Now imagine if we forged policy that allows other parties access to that information, provided they are able to extract value for society — that is, in the public interest: better pricing, better disease surveillance, greater understanding of treatment to prognosis, whatever. Such access may not benefit insurance companies, and they may simply prefer not to provide it, but it would be good for society. At the present time we allow insurance companies sole discretion over who gets access to that data, and the same for several other parties who dominate the “datasphere.” The opportunity costs are staggering.
I’m not saying these societal benefits are easy to unlock. These are challenges that we haven’t confronted before in precisely this form. But they’re also the hard challenges that we need to face. It’s time to stop bashing our heads against a brick wall figuring out how to perfect a consent mechanism when the productive approach is articulating appropriate constraints on dataflow that distributes costs and benefits fairly and promotes the purposes and values of social domains: health, democracy, education, commerce, friends and family, and so on.