It is a rare individual who has managed to keep at least some of their personal information from being stored in a Federal government database. With this forum, hopefully we can help each other better safeguard the public’s personal information.
Harvard Law Review Symposium Papers
November 20, 2012 at 10:13 am #173460
Privacy & Technology
Please join us for an engaging and lively set of debates on one of the most important legal issues of our day: what is the future of privacy law in an age in which rapid cultural and social transformation is precipitated by the bewilderingly rapid pace of technological advance? Many of the nation’s top privacy scholars will be joining us to discuss Big Data, executive surveillance, the E.U.-U.S. privacy divide, and the theoretical foundations of privacy.
Daniel J. Solove
George Washington University
“Introduction: Privacy Self-Management and the Consent Paradox”
Paul M. Schwartz
Berkeley Law School
“The E.U.-U.S. Privacy Collision”
Lior Jacob Strahilevitz
University of Chicago
“A Positive Theory of Privacy”
“What Privacy is For”
Neil M. Richards
“The Dangers of Surveillance”
November 22, 2012 at 11:44 am #173471
Privacy Self-Management and the Consent Paradox: Introduction
By Daniel J. Solove
During the past decade, the problems involving information privacy have become thornier – the ascendance of Big Data and fusion centers, the tsunami of data security breaches, the rise of Web 2.0, the growth of behavioral marketing, and the proliferation of tracking technologies. Significant new regulation has been proposed and passed in the United States and abroad, yet the basic approach to protecting privacy has remained largely unchanged since the 1970s. The current approach involves the law providing people with a set of rights to enable them to make decisions about how to manage their data. These rights consist primarily of rights to notice, access, and consent regarding the collection, use, and disclosure of personal data. I will refer to this approach to privacy regulation as the “privacy self-management model.”
In most privacy regulatory regimes, such as U.S. and EU privacy law, the privacy self-management model is the vital core of the regime. Although regulatory regimes impose certain responsibilities upon companies, such as the obligation to secure personal data, these responsibilities generally constitute the periphery. Most forms of personal data collection, use, and disclosure are handled by the self-management model. The goal of the model is to provide people with control over their personal data, and through this control people can decide for themselves how to weigh the costs and benefits of the collection, use, or disclosure of their information.
The privacy self-management model attempts to be neutral about substance – whether certain forms of collecting, using, or disclosing of personal data are good or bad – and instead focuses on whether people consent to the collection, use, or disclosure of their data. Consent legitimizes nearly any form of collection, use, and disclosure of personal data.
Although the privacy self-management model is certainly a laudable and necessary component of any regulatory regime, I contend that it is being asked to do work beyond its capabilities. Although its goal is to provide people with control over their personal data, privacy self-management does not provide meaningful control. Empirical and social science research has undermined key assumptions about how people make decisions regarding their data, assumptions that underpin and legitimize the privacy self-management model. More troubling, I will argue, is that even well-informed and rational individuals cannot appropriately self-manage their privacy.
With each sign of failure of privacy self-management, the typical response by policymakers, scholars, and others is to call for more and improved privacy self-management. In this article, I argue that in order to advance, privacy law and policy must confront a complex and confounding paradox with consent. Consent to collection, use, and disclosure of personal data is often not meaningful, and the most apparent solution – paternalistic measures – even more directly denies people the freedom to make consensual choices about their data.
November 22, 2012 at 11:57 am #173468
The E.U.-U.S. Privacy Collision: A Turn to Institutions and Procedures
By Paul M. Schwartz
I. Introduction
Internet scholarship in the US generally concentrates on how decisions made in this country about copyright law, network neutrality, and other policy areas shape cyberspace. In one important area of the evolving Internet, however, a comparative focus is indispensable. Legal forces outside of the US have significantly shaped the governance of a highly important area in cyberspace, and one involving central issues of civil liberties. The EU has played a major role in international decisions around information privacy, and this role has been bolstered by the authority of EU Member Nations to block data transfers from their country to third party nations. Such nations include the U.S., which the EU generally considers to lack “adequate” privacy protections.
The European Commission’s release in late January 2012 of its “General Data Protection Regulation” provides a perfect juncture to assess the ongoing EU-US privacy collision. An intense debate is now occurring around critical areas of information policy, including the rules for lawfulness of personal processing, the “right to be forgotten,” and the conditions for data flows between the EU and US.
This Article begins by tracing the rise of the current EU-US privacy status quo. The 1995 Data Protection Directive staked out a number of bold positions, including a limit on international data transfers to countries that lacked “adequate” legal protections for personal information. The EU also quickly determined that the US, at least as a general matter, was without such protections. The impact of the Directive has been considerable. It has shaped the form of numerous laws, inside and outside of the EU, and contributed to the creation of a substantive EU model of data protection, which has also been highly influential.
The U.S. is an outlier to this story. Its approach to information privacy law has been of lesser influence on the global level than that of the EU. At the same time, the aftermath of the Directive has seen ad hoc policy efforts between the US and EU that have created numerous paths to “adequacy.” The policy instruments involved are the Safe Harbor; the two sets of Model Contractual Clauses; and the Binding Corporate Rules. A novel process of “lawmaking” has occurred, and it has drawn on a large cast of characters, governmental and non-governmental. Building on Anu Bradford’s concept of “The Brussels Effect,” this paper argues that this policymaking has not been simply led by the EU, but a collaborative effort marked by accommodation and compromises. The resulting “lawmaking” is a productive outcome of the kinds of “harmonization networks” that Anne-Marie Slaughter has identified in her scholarship.
November 22, 2012 at 12:20 pm #173465
Toward a Positive Theory of Privacy Law
By Lior Jacob Strahilevitz
Privacy law creates winners and losers. The distributive implications of privacy rules are often very significant, but they can be subtle too. Policy and academic debates over privacy rules tend not to emphasize the distributive dimensions of those rules, with many privacy advocates deluding themselves into believing that “all consumers and voters win” when privacy is enhanced. At the same time, privacy skeptics who do discuss privacy in distributive terms sometimes score cheap rhetorical points by suggesting that only those with shameful secrets to hide benefit from privacy protections. Neither approach is appealing, and privacy scholars ought to be able to do better.
This article will reveal some of the subtleties of privacy regulation, with a particular focus on the distributive consequences of privacy rules. The article suggests that, at bottom, understanding the identities of privacy law’s real winners and losers is indispensable both for clarifying existing debates in the scholarship and for helping us predict which interests will prevail in the institutions that formulate privacy rules. Drawing on public choice theory and median voter models, I will begin to construct a positive account of why American privacy law looks the way it does. I will also suggest that a key structural aspect of American privacy law – its absence of a catch-all privacy provision nimble enough to confront new threats – affects the attitudes of American voters and the balance of power among American interest groups. Along the way, I will also make several other subsidiary contributions: showing why criminal history registries are quite likely to become increasingly granular over time, examining the relationship between data mining and personality-based discrimination, and explaining how the American political system might be biased in favor of citizens who do not value privacy to the same degree that it is biased in favor of highly educated and high-income citizens.
Part I assesses the distributive implications of two privacy controversies: the extent to which public figures should be protected against the nonconsensual disclosure of information concerning their everyday activities, and the extent to which the law should suppress criminal history information. In both instances the United States is far less protective of privacy interests than Europe is, subjecting the United States to criticism both here and abroad. The article shows that defensible distributive judgments undergird the American position. The European approach to celebrity privacy is highly regressive, and causes elites and non-elites to have differential access to information that is valuable to both groups. The American attitude towards criminal history information may be defended on pragmatic grounds: in the absence of transparent criminal history information, individuals may try to use obnoxious proxies for criminal history, like race and gender. The article then shows how these distributive implications affect the politics of privacy, with California’s interest groups pushing that state toward European-style regulation, and with an anticipated trend towards ever-increasing granularity in criminal history disclosures.
Part II analyzes the emerging issue of Big Data and consumer privacy. The article posits that firms rely on Big Data (data mining + analytics) to tease out the individual personality characteristics that will affect the firms’ strategies about how to price products and deliver services to particular consumers. We cannot anticipate how the law will respond to the challenges posed by Big Data without assessing who gains and who is harmed by the shift toward new forms of personality discrimination, so the paper analyzes the likely winners and losers among voters and industry groups. The analysis focuses on personality cohorts characterized by high levels of extraversion and sophistication, whose preferences and propensities to influence political decisions should deviate from those of introverts and unsophisticated individuals in important ways.
Part III glances across the Atlantic, using Europe’s quite different legal regime governing Big Data as a way to test some of the hypotheses articulated in Part II. Although American and European laws differ significantly, the attitudes of Americans and Europeans seem rather similar. The article therefore posits that different public choice dynamics, especially the strength of business interests committed to data mining in the United States, are a more likely cause of the observed legal differences. But this conclusion raises the question of why European business interests committed to data mining do not have similar sway overseas. The article hypothesizes that structural aspects of American and European privacy laws substantially affect the contents of those laws. In Europe, open-ended, omnibus privacy laws permit regulators to intervene immediately to address new privacy challenges. The sectoral American approach, especially its lack of an effective “catch-all” provision, renders American law both reactive and slow to react, with the result being that by the time American regulators seek to challenge envelope-pushing practices, interest groups supporting the practice have developed, social norms have adjusted to the program, and a great deal of the sensitive information at issue will have already been disclosed by consumers.
Part IV examines a rare case in which American regulators were able to combat a substantial privacy harm despite these structural and interest group dynamics. The fact that Do Not Call took more than a decade to be implemented, despite its enormous popularity with voters, shows just how difficult regulating privacy can be, especially since many other privacy regulations will create a substantial number of losing consumers who are likely to buttress the interests of prospective-loser firms in opposing the new regulation.
November 22, 2012 at 12:29 pm #173462
The Dangers of Surveillance
By Neil M. Richards
From the Fourth Amendment to George Orwell’s Nineteen Eighty-Four, and from the Electronic Communications Privacy Act to films like Minority Report and The Lives of Others, our law and literature are full of warnings about state scrutiny of our lives. These warnings are commonplace, but they are rarely very specific. Other than the vague threat of an Orwellian dystopia, as a society we don’t really know why surveillance is bad, and why we should be wary of it. To the extent the answer has something to do with “privacy,” we lack an understanding of what “privacy” means in this context, and why it matters. We’ve been able to live with this state of affairs largely because the threat of constant surveillance has been relegated to the realms of science fiction and failed totalitarian states.
But these warnings are no longer science fiction. The digital technologies that have revolutionized our lives have also created minutely-detailed records of our daily activities. In an age of terror, our government has shown a keen willingness to acquire this data and use it for unknown purposes. We know that governments have been buying and borrowing private-sector databases, and we recently learned that the National Security Agency has been building a massive data and supercomputing center in Utah, apparently with the goal of intercepting and storing all internet communications for decryption and analysis.
Although we have laws that protect us against government surveillance, secret government programs cannot be challenged until they are discovered. And even when they are, our law of surveillance can provide only minimal protections. Courts frequently dismiss challenges to such programs for lack of standing, under the theory that mere surveillance creates no harms. The Supreme Court recently granted certiorari in the only major case to hold the contrary – Clapper v. Amnesty International USA. If the Court reverses this outlier case, we face the prospect of vast government surveillance that is unreviewable and unaccountable.
But the important point is this: However the Court rules in Clapper, we lack an understanding of why (and when) government surveillance is harmful. Existing attempts to define the dangers of surveillance are often unconvincing, and they have generally failed to speak in terms that are likely to influence the law. In this essay, I try to explain the harms of government surveillance. Drawing on law, history, literature, and the work of scholars in the emerging interdisciplinary field of “surveillance studies,” I offer an account of what those harms are and why they matter. I will move beyond the vagueness of current theories of surveillance to articulate a more coherent understanding and a more workable approach.