Institution

Future of Privacy Forum

Nonprofit, Washington D.C., District of Columbia, United States
About: Future of Privacy Forum is a nonprofit organization based in Washington D.C., District of Columbia, United States. It is known for research contributions in the topics: Data Protection Act 1998 & Information privacy. The organization has 11 authors who have published 37 publications receiving 1,003 citations.

Papers
Journal ArticleDOI
TL;DR: In this paper, the authors introduce the concept of a second exchange in customer relationships, distinct from the first exchange of goods or services, in which personal information is exchanged for benefits derived from its use.
Abstract: The paper introduces the concept of a second exchange in customer relationships, distinct from the first exchange of goods or services. The second exchange is based on the information consumers disclose in the course of a marketing transaction, in which personal information is exchanged for benefits derived from the use of that information. Propositions developed from our model demonstrate how strategically managing the benefits and risks (invasion of privacy) of this exchange process can impact customer acquisition and retention and thus the firm's bottom line.

36 citations

Proceedings ArticleDOI
29 Jan 2019
TL;DR: A new taxonomy is created that identifies fundamental types of dishonest anthropomorphism and pinpoints the harms they can cause; the authors then critically consider a representative series of ethical issues, proposals, and questions concerning whether the principle of honest anthropomorphism has been violated.
Abstract: The goal of this paper is to advance design, policy, and ethics scholarship on how engineers and regulators can protect consumers from deceptive robots and artificial intelligences that exhibit the problem of dishonest anthropomorphism. The analysis expands upon ideas surrounding the principle of honest anthropomorphism originally formulated by Margot Kaminski, Matthew Rueben, William D. Smart, and Cindy M. Grimm in their groundbreaking Maryland Law Review article, "Averting Robot Eyes." Applying boundary management theory and philosophical insights into prediction and perception, we create a new taxonomy that identifies fundamental types of dishonest anthropomorphism and pinpoints harms that they can cause. To demonstrate how the taxonomy can be applied, as well as to clarify the scope of the problems it can cover, we critically consider a representative series of ethical issues, proposals, and questions concerning whether the principle of honest anthropomorphism has been violated.

33 citations

Posted Content
TL;DR: It is argued that the focus on the machine is a distraction from the debate surrounding data-driven ethical dilemmas, such as privacy, fairness, and discrimination, and that policymakers should seek to devise agreed-upon guidelines for ethical data analysis and profiling.
Abstract: Big data, the enhanced ability to collect, store and analyze previously unimaginable quantities of data at tremendous speed and with negligible costs, delivers immense benefits in marketing efficiency, healthcare, environmental protection, national security and more. While some privacy advocates may dispute the merits of sophisticated behavioral marketing practices or debate the usefulness of certain data sets to efforts to identify potential terrorists, few remain indifferent to the transformative value of big data analysis for government, science and society at large. At the same time, even big data evangelists should recognize the potentially ominous social ramifications of a surveillance society governed by heartless algorithmic machines. In this essay, we present some of the privacy and non-privacy risks of big data as well as directions for potential solutions. In a previous paper, we argued that the central tenets of the current privacy framework, the principles of data minimization and purpose limitation, are severely strained by the big data technological and business reality. Here, we assess some of the other problems raised by pervasive big data analysis. In their book, “A Legal Theory for Autonomous Artificial Agents,” Samir Chopra and Laurence F. White note that “as we increasingly interact with these artificial agents in unsupervised settings, with no human mediators, their seeming autonomy and increasingly sophisticated functionality and behavior, raises legal and philosophical questions.” In this article we argue that the focus on the machine is a distraction from the debate surrounding data-driven ethical dilemmas, such as privacy, fairness and discrimination. The machine may exacerbate, enable, or simply draw attention to the ethical challenges, but it is humans who must be held accountable. Instead of vilifying machine-based data analysis and imposing heavy-handed regulation, which in the process will undoubtedly curtail highly beneficial activities, policymakers should seek to devise agreed-upon guidelines for ethical data analysis and profiling. Such guidelines would address the use of legal and technical mechanisms to obfuscate data; criteria for calling out unethical, if not illegal, behavior; categories of privacy and non-privacy harms; and strategies for empowering individuals through access to data in intelligible form.
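The "technical mechanisms to obfuscate data" the abstract mentions can take many concrete forms. As a purely illustrative sketch, not drawn from the essay itself, the Python snippet below pseudonymizes a direct identifier with a keyed hash (HMAC-SHA256), so records for the same person stay linkable for analysis without exposing the raw identifier; the key and record fields are hypothetical.

    import hmac
    import hashlib

    # Hypothetical key; in practice it would live in a key-management
    # system and be rotated, never hard-coded.
    SECRET_KEY = b"example-key-not-for-production"

    def pseudonymize(identifier: str) -> str:
        # HMAC-SHA256 yields a stable pseudonym: the same input always
        # maps to the same token, preserving linkability for analysis.
        return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                        hashlib.sha256).hexdigest()

    record = {"email": "alice@example.com", "purchases": 3}
    record["email"] = pseudonymize(record["email"])
    print(record)  # {'email': '<64-char hex token>', 'purchases': 3}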

24 citations

Journal ArticleDOI
TL;DR: In this article, the authors examine how the GDPR addresses de-identification and propose that the incentives to apply de-identification found in these provisions should be reinforced by guidance and enforcement decisions that reward the use of de-identification and encourage the highest practical level of it.
Abstract: In May 2018, the General Data Protection Regulation (GDPR) will become enforceable as the basis for data protection law in the European Economic Area (EEA). Compared to the 1995 Data Protection Directive that it will replace, the GDPR reflects a more developed understanding of de-identification as encompassing a spectrum of different techniques and strengths. And under the GDPR, different levels of de-identification have concrete implications for organizations’ compliance obligations – including, in some cases, relief from certain obligations. Thus, organizations subject to the GDPR can and should consider de-identification as a key tool for GDPR compliance. Nevertheless, there are many respects in which GDPR obligations remain unclear. Regulators and policymakers can help advance the rights of data subjects and further the objectives of the GDPR, while providing additional clarity, by interpreting, applying, and enforcing these GDPR provisions in a way that encourages and rewards the appropriate use of de-identification. This article examines how the GDPR addresses de-identification. It reviews several substantive obligations under the GDPR, including notice, consent, data subject rights to access or delete personal data, data retention limitations, data security, breach notification, privacy by design and by default, and others. In each case, it describes how the use of different levels of de-identification can play a role in complying with the relevant obligations. It proposes that the incentives to apply de-identification found in these provisions should be reinforced by guidance and enforcement decisions that will reward the use of de-identification and encourage the highest practical level of de-identification. Such an approach will bring clarity to the rules, enable practical tools for compliance, help foster greater consistency with data protection regimes in other jurisdictions, and advance the purposes of the regulation.
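As a rough illustration of the "spectrum" idea the abstract describes (a sketch of ours, not code from the article), the snippet below shows the same hypothetical record at three increasing levels of de-identification; under the GDPR, pseudonymized data generally remains personal data, while sufficiently aggregated data may fall outside the regulation entirely.

    import hashlib

    record = {"name": "Alice Example", "zip": "20036", "age": 34}

    # Level 1: pseudonymization. The direct identifier becomes a salted
    # token; still personal data if the salt or a mapping is retained.
    token = hashlib.sha256(b"per-dataset-salt"
                           + record["name"].encode()).hexdigest()[:16]
    pseudonymized = {**record, "name": token}

    # Level 2: generalization. Quasi-identifiers (ZIP code, exact age)
    # are coarsened to shrink re-identification risk.
    generalized = {"zip": record["zip"][:3] + "**", "age_range": "30-39"}

    # Level 3: aggregation. Only population-level statistics remain;
    # the strongest level short of deletion.
    aggregate = {"records": 1, "mean_age": 34.0}

    for label, data in (("pseudonymized", pseudonymized),
                        ("generalized", generalized),
                        ("aggregate", aggregate)):
        print(label, data)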

24 citations

Posted Content
TL;DR: In this article, the authors analyze the opportunities and risks of data-driven education technologies and argue that, together with teachers, parents, and students, schools and vendors must establish a trust framework to facilitate the adoption of data-driven ed tech.
Abstract: This article analyzes the opportunities and risks of data-driven education technologies (ed tech). It discusses the deployment of data technologies by education institutions to enhance student performance, evaluate teachers, improve education techniques, customize programs, devise financial assistance plans, and better leverage scarce resources to assess and optimize education results. Critics fear ed tech could introduce new risks of privacy infringement, narrowcasting, and discrimination; fuel the stratification of society by channeling “winners” to a “Harvard track” and “losers” to a “bluer collar” track; and overly limit the right to fail, struggle, and learn through experimentation. The article argues that, together with teachers, parents, and students, schools and vendors must establish a trust framework to facilitate the adoption of data-driven ed tech. Enhanced transparency around institutions’ data-use philosophy and ethical guidelines, and novel methods of data “featurization,” will achieve far more than formalistic notices and contractual legalese.

17 citations


Network Information
Related Institutions (5)
Fortify Software
11 papers, 1.1K citations

84% related

Azul Systems
96 papers, 3.7K citations

83% related

Zero Knowledge Systems
11 papers, 2.4K citations

82% related

MCI Inc.
12 papers, 1.7K citations

81% related

Annenberg Center for Communication
11 papers, 1K citations

81% related

Performance Metrics
No. of papers from the Institution in previous years
Year    Papers
2022    1
2021    2
2020    2
2019    3
2018    5
2017    4