Model(ing) Privacy: Empirical Approaches to Privacy Law and Governance

INTRODUCTION

“The makers of our Constitution . . . sought to protect Americans in their beliefs, their thoughts, their emotions and their sensations. They conferred, as against the Government, the right to be let alone – the most comprehensive of rights, and the right most valued by civilized men.”

– Justice Louis Brandeis, dissenting, Olmstead v. United States

“Your user agreement sucks.”

– Senator John Kennedy to Mark Zuckerberg, Senate Judiciary Hearing on Facebook, Social Media, Privacy, and the Use and Abuse of Data

People care about privacy for different reasons and to differing extents. As the data collected about each of us continues to proliferate, and the uses of that information continue to evolve, gauging individual privacy expectations and broader societal norms has become increasingly challenging. Individuals make privacy decisions that seem to undermine their stated preferences, even as the risks to the fundamental interests linked to privacy, such as equality, autonomy, and intellectual freedom, only continue to grow. Largely to blame for these apparent contradictions are the ineffective standards used to measure privacy decision-making, expectations, and preferences. The regulatory regime governing consumer privacy and the Fourth Amendment’s protections for privacy from the government both rest on the idea that judges and policymakers can discern individual and collective privacy norms, when in reality they are rarely able to do so accurately.

Consumer privacy law in the United States is molded around the idea of privacy as an economic good: the degree of legal protection a person receives depends on her control over her information through notice-and-choice mechanisms, like app permissions or privacy policies. The notice-and-choice model relies on the idea that informing consumers, in convoluted boilerplate, of how their data is collected and used empowers them to make privacy decisions that reflect their preferences. Under this thinking, any failure to subsequently make privacy-protective choices indicates either apathy or a deliberate declaration of a contrary preference. In fact, cognitive and structural limitations make it exceedingly difficult for individuals to make choices that produce the privacy outcomes they prefer or expect. Phenomena like decision fatigue, learned helplessness, and lack of information about collection and tracking all impede individuals from making privacy choices that correspond to what they hope (or believe) will happen to their information.

Many technologies have become so intertwined with daily life that even individuals with strong privacy preferences or expectations cannot make choices that suit those preferences. A person might want to avoid geolocation tracking but need to carry a cell phone to ensure an elderly parent can get in touch; someone else might wish to avoid web tracking but be required to use a school- or employer-run Gmail account. In a memorable example, journalist Julia Angwin documented her attempts to avoid every form of surveillance she could: using burner phones whenever possible, abstaining from any Google services, relying on a credit card under a fake name when she could not use cash, and carrying her smartphone in a makeshift Faraday bag to block it from sending or receiving signals. See Jacob Silverman, ‘Dragnet Nation’ Looks at the Hidden Systems that Are Always Looking at You, L.A. TIMES (Mar. 6, 2014, 12:00 PM), https://perma.cc/3J4C-HQUS. She assessed her diligent efforts to avoid being tracked as “50% successful” – and this was a tech-savvy investigative journalist dedicated solely to the task of protecting her privacy.

Others may have strong privacy preferences or expectations repudiated by actions beyond their control. In a recent and infamous example, Facebook allowed the political consulting firm Cambridge Analytica to obtain the information of 87 million users; the firm then coordinated with the Trump campaign to target voters based on that data. Cambridge Analytica acquired the information after just 30,000 users downloaded a quiz app, because Facebook’s developer guidelines allowed the firm to access the data of every Facebook friend those users had. While users consent to a great deal when they create a Facebook account, it is difficult to argue that the 86,970,000 users who never downloaded the app expected their information to be used for voter targeting by a presidential campaign – or even that the 30,000 who did download it expected that result.

Data breach after data breach further demonstrates that while companies may promise in their privacy policies to protect their users’ data, they consistently fail to do so. As Chief Justice Roberts noted in Carpenter v. United States, this country of 326 million individuals is a country of 396 million cell phone accounts, and only the few without phones can escape such “tireless and absolute surveillance.”