Institution

Future of Privacy Forum

Nonprofit · Washington, D.C., District of Columbia, United States
About: Future of Privacy Forum is a nonprofit organization based in Washington, D.C., United States. It is known for research contributions on the topics of Data Protection Act 1998 and Information privacy. The organization has 11 authors who have published 37 publications receiving 1,003 citations.

Papers
Posted Content
TL;DR: The paper proposes parameters for calibrating legal rules to data depending on multiple gradations of identifiability, while also assessing other factors such as an organization’s safeguards and controls, as well as the data’s sensitivity, accessibility and permanence.
Abstract: One of the most hotly debated issues in privacy and data security is the notion of identifiability of personal data and its technological corollary, de-identification. De-identification is the process of removing personally identifiable information from data collected, stored and used by organizations. Once viewed as a silver bullet allowing organizations to reap the benefits of data while minimizing privacy and data security risks, de-identification has come under intense scrutiny with academic research papers and popular media reports highlighting its shortcomings. At the same time, organizations around the world necessarily continue to rely on a wide range of technical, administrative and legal measures to reduce the identifiability of personal data to enable critical uses and valuable research while providing protection to individuals’ identity and privacy. The debate around the contours of the term personally identifiable information, which triggers a set of legal and regulatory protections, continues to rage. Scientists and regulators frequently refer to certain categories of information as “personal” even as businesses and trade groups define them as “de-identified” or “non-personal.” The stakes in the debate are high. While not foolproof, de-identification techniques unlock value by enabling important public and private research, allowing for the maintenance and use – and, in certain cases, sharing and publication – of valuable information, while mitigating privacy risk. This paper proposes parameters for calibrating legal rules to data depending on multiple gradations of identifiability, while also assessing other factors such as an organization’s safeguards and controls, as well as the data’s sensitivity, accessibility and permanence. It builds on emerging scholarship that suggests that rather than treat data as a black or white dichotomy, policymakers should view data in various shades of gray; and provides guidance on where to place important legal and technical boundaries between categories of identifiability. It urges the development of policy that creates incentives for organizations to avoid explicit identification and deploy elaborate safeguards and controls, while at the same time maintaining the utility of data sets.

17 citations
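The paper treats identifiability as a spectrum rather than a binary. As a rough illustration of what moving data down that spectrum can look like in practice, the minimal sketch below pseudonymizes direct identifiers and generalizes quasi-identifiers. The field names, salt handling, and truncation choices are hypothetical assumptions for illustration only, not techniques prescribed by the paper.

```python
# Illustrative sketch only: one simple de-identification pass that lowers the
# identifiability of a record. All field names and thresholds are assumptions.
import hashlib

SALT = "org-secret-salt"  # assumed to be kept separately under organizational controls


def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted one-way hash (pseudonymous key)."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]


def generalize_age(age: int) -> str:
    """Coarsen an exact age into a 10-year band to reduce re-identification risk."""
    low = (age // 10) * 10
    return f"{low}-{low + 9}"


def deidentify(record: dict) -> dict:
    """Produce a lower-identifiability view of a record:
    direct identifiers are dropped or pseudonymized, quasi-identifiers are
    generalized, and non-identifying attributes pass through unchanged."""
    return {
        "person_key": pseudonymize(record["email"]),  # stable pseudonymous key in place of the raw email
        "zip3": record["zip"][:3],                     # ZIP code truncated to 3 digits
        "age_band": generalize_age(record["age"]),
        "purchase_total": record["purchase_total"],
    }


if __name__ == "__main__":
    raw = {"name": "Ada Lovelace", "email": "ada@example.com",
           "zip": "20036", "age": 36, "purchase_total": 42.50}
    print(deidentify(raw))
```

How far such transformations move data along the identifiability spectrum depends on the surrounding safeguards and controls the paper emphasizes (for example, how the salt is protected and who may access the output).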

Posted Content
TL;DR: In this paper, the authors present the most comprehensive study to date of the policy issues and privacy concerns arising from the surge of ed-tech innovation, and propose solutions ranging from deployment of traditional privacy tools such as contractual and organizational governance mechanisms, to greater data literacy by teachers and parental involvement.
Abstract: The arrival of new technologies in schools and classrooms around the nation has been met with a mixture of enthusiasm and anxiety. Education technologies (ed tech) present tremendous opportunities. They allow schools to tailor programs to individual students; make education more collaborative and engaging through social media, gamification and interactive content; and facilitate access to education for anyone with an Internet connection in remote parts of the world. At the same time, the combination of enhanced data collection with highly sensitive information about children and teens presents grave privacy risks. Indeed, in a recent report, the White House identified privacy in education as a flashpoint for big data policy concerns. This article is the most comprehensive study to date of the policy issues and privacy concerns arising from the surge of ed tech innovation. It surveys the burgeoning market of ed tech solutions, which ranges from free Android and iPhone apps to comprehensive learning management systems and digitized curricula delivered via the Internet. It discusses the deployment of big data analytics by education institutions to enhance student performance, evaluate teachers, improve education techniques, customize programs and better leverage scarce resources to optimize education results. The article seeks to untangle ed tech privacy concerns from the broader policy debates surrounding standardization, the Common Core, longitudinal data systems and the role of business in education. It unpacks the meaning of commercial data uses in schools, distinguishing between behavioral advertising to children and providing comprehensive, optimized education solutions to students, teachers and school systems. It addresses privacy problems related to “small data,” the individualization enabled by optimization solutions that “read students” even as they read their books, as well as concerns about “big data” analysis and measurement, including algorithmic biases, discreet discrimination, narrowcasting and chilling effects. The article proposes solutions ranging from deployment of traditional privacy tools, such as contractual and organizational governance mechanisms, to greater data literacy by teachers and parental involvement. It advocates innovative technological solutions, including converting student data to a parent-accessible feature and enhancing algorithmic transparency to shed light on the inner workings of the machine. For example, individually curated “data backpacks” would empower students and their parents by providing them with comprehensive portable profiles to facilitate personalized learning regardless of where they go. The article builds on a methodology developed in the authors' previous work to balance big data rewards against privacy risks, while complying with several layers of federal and state regulation.

17 citations
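The “data backpack” proposal in the abstract above is essentially a portable, parent-accessible student profile that travels with the student. A minimal sketch of such a structure, with hypothetical field names and a JSON export chosen purely for illustration, might look like this:

```python
# Illustrative sketch only: a portable, parent-accessible student profile in the
# spirit of the "data backpack" idea. Field names and format are assumptions.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class DataBackpack:
    student_id: str
    learning_goals: list = field(default_factory=list)
    mastered_skills: list = field(default_factory=list)
    accommodations: list = field(default_factory=list)

    def export_for_parent(self) -> str:
        """Serialize the full profile so a parent or a new school can review and carry it."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    bp = DataBackpack("S-001",
                      learning_goals=["fractions"],
                      mastered_skills=["multiplication"],
                      accommodations=["extended time"])
    print(bp.export_for_parent())
```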

Posted Content
TL;DR: The article addresses questions surrounding the boundaries of responsibility for algorithmic fairness and analyzes a series of case studies under the proposed framework, highlighting the importance of public accountability about the editorial nature of the algorithm.
Abstract: The prospect of digital manipulation on major online platforms has reached fever pitch in the last election cycle in the United States. Jonathan Zittrain’s concern about “digital gerrymandering” has found resonance in reports, which were resoundingly denied by Facebook, of the company allegedly editing content to tone down conservative voices. At the start of the election cycle, critics blasted Facebook for allegedly injecting editorial bias into an apparently neutral content generator, its “Trending Topics” feature. Immediately after the election, when the extent of dissemination of “fake news” through social media became known, commentators chastised Facebook for not proactively policing user generated content to block and remove untrustworthy information. Which one is it then? Should Facebook have deployed policy directed technologies or should its content algorithm have remained policy neutral? This article examines the potential for bias and discrimination in automated algorithmic decision making. As a group of commentators recently asserted, “The accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology.” Yet the article rejects an approach that depicts every algorithmic process as a “black box,” which is inevitably plagued by bias and potential injustice. While recognizing that algorithms are manmade artifacts written and edited by humans in order to code decision making processes, the article argues that a distinction should be drawn between “policy neutral algorithms,” which lack an active editorial hand, and “policy directed algorithms,” which are intentionally framed to pursue a designer’s policy agenda. Policy neutral algorithms could in some cases reflect existing entrenched societal biases and historic inequities. Companies, in turn, can choose to fix their results through active social engineering. For example, after facing controversy in light of an algorithmic determination to not offer same-day delivery in low-income neighborhoods, Amazon has nevertheless recently decided to offer the services in order to pursue an agenda of equal opportunity. Recognizing that its decision making process, which was based on logistical factors and expected demand, had the effect of accentuating prevailing social inequality, Amazon chose to level the playing field. Policy directed algorithms are purposely engineered to correct for apparent bias and discrimination or intentionally designed to advance a predefined policy agenda. In this case, it is essential that companies provide transparency about their active pursuit of editorial policies. For example, if a search engine decides to scrub search results clean of apparent bias and discrimination, it should let users know they are seeing a manicured version of the world. If a service optimizes results for financial motives without alerting users, it risks violating FTC standards for disclosure. So too should service providers consider themselves obligated to prominently disclose important criteria that reflect an unexpected policy agenda. The transparency called for is not one based on revealing source code, but rather public accountability about the editorial nature of the algorithm. The article addresses questions surrounding the boundaries of responsibility for algorithmic fairness, and analyzes a series of case studies under the proposed framework.

15 citations
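To make the abstract's distinction concrete, the sketch below contrasts a ranking that orders items purely by a relevance signal (a stand-in for a "policy neutral" algorithm) with one that applies a deliberate, disclosed boost in pursuit of a stated agenda (a stand-in for a "policy directed" algorithm). The scores, the `underserved` attribute, and the boost value are hypothetical assumptions, not drawn from the article.

```python
# Illustrative sketch only: "policy neutral" vs. "policy directed" ranking.
from dataclasses import dataclass


@dataclass
class Item:
    title: str
    relevance: float    # relevance/engagement signal; any bias in it carries through
    underserved: bool   # attribute targeted by an explicit, disclosed editorial policy


def rank_policy_neutral(items):
    """Order purely by the relevance signal, with no active editorial hand."""
    return sorted(items, key=lambda i: i.relevance, reverse=True)


def rank_policy_directed(items, boost=0.2):
    """Apply a deliberate, disclosed boost to pursue a stated policy agenda."""
    return sorted(items,
                  key=lambda i: i.relevance + (boost if i.underserved else 0.0),
                  reverse=True)


if __name__ == "__main__":
    feed = [Item("A", 0.90, False), Item("B", 0.80, True), Item("C", 0.75, True)]
    print([i.title for i in rank_policy_neutral(feed)])   # ['A', 'B', 'C']
    print([i.title for i in rank_policy_directed(feed)])  # ['B', 'C', 'A'] once the disclosed boost applies
```

In the article's terms, the second function is acceptable only alongside public accountability about its editorial nature, for example a prominent disclosure that results are adjusted to advance the stated goal.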

Posted Content
TL;DR: This article addresses the processes required to authorize noncontextual data uses at corporations or not-for-profit organizations in the absence of additional notice and choice.
Abstract: As scientific knowledge advances, new data uses continuously emerge in a wide variety of contexts, from combating fraud in the payment card industry, to reducing the time commuters spend on the road, detecting harmful drug interactions, improving marketing mechanisms, personalizing the delivery of education in K–12 schools, encouraging exercise and weight loss, and much more. At corporations, not-for-profits, and academic institutions, researchers are analyzing data and testing theories that often rely on data about individuals. Many of these new uses of personal information are natural extensions of current practices, well within the expectations of individuals and the boundaries of traditional Fair Information Practice Principles. In other cases, data use may exceed expectations, but organizations can provide individuals with additional notice and choice. However, in some cases enhanced notice and choice is not feasible, despite the considerable benefit to consumers if personal information were to be used in an innovative way. This article addresses the processes required to authorize noncontextual data uses at corporations or not-for-profit organizations in the absence of additional notice and choice. Although many of these challenges are also relevant to academic researchers, their work will often be guided by the oversight of Institutional Review Boards (which are required for many — but not all — new research uses of personal information).

14 citations

Journal Article (DOI)
TL;DR: In this article, the authors discuss the practical implications of consent requirements both for day-to-day school management and for the education system as a whole and argue that parents should never have to opt out of embracing new technologies simply in order to protect their children's privacy.
Abstract: This paper discusses how data is used both in classrooms and by educators and policymakers to assess educational outcomes. It addresses the practical implications of consent requirements both for day-to-day school management and for the education system as a whole. It explores how existing federal laws, including the Family Educational Rights and Privacy Act (FERPA), protect student data. It reviews the activities of vendors and the role of individual consent in data processing by the health and financial sectors. It proposes that in lieu of focusing on the technicalities of parental consent requirements, legitimate privacy concerns must be addressed in a manner that protects all students. It argues that parents should never have to opt out of embracing new technologies simply in order to protect their children’s privacy. Instead, to foster an environment of trust, schools and their education partners must offer more insight into how data is being used. With more information and better access to their own data, parents and students will be better equipped to make informed decisions about their education choices.

13 citations


Network Information
Related Institutions (5)
Fortify Software · 11 papers, 1.1K citations · 84% related
Azul Systems · 96 papers, 3.7K citations · 83% related
Zero Knowledge Systems · 11 papers, 2.4K citations · 82% related
MCI Inc. · 12 papers, 1.7K citations · 81% related
Annenberg Center for Communication · 11 papers, 1K citations · 81% related

Performance Metrics
No. of papers from the Institution in previous years
Year    Papers
2022    1
2021    2
2020    2
2019    3
2018    5
2017    4