Posted Content

A Theory of Creepy: Technology, Privacy and Shifting Social Norms

TL;DR: In this paper, the authors present a set of social and legal considerations to help individuals, engineers, businesses and policymakers navigate a world of new technologies and evolving social norms, including enhanced transparency, accessibility to information in usable format, and the elusive principle of context.
Abstract: The rapid evolution of digital technologies has hurled to the forefront of public and legal discourse dense social and ethical dilemmas that we have hardly begun to map and understand. In the near past, general community norms helped guide a clear sense of ethical boundaries with respect to privacy. One does not peek into the window of a house even if it is left open. One does not hire a private detective to investigate a casual date or the social life of a prospective employee. Yet with technological innovation rapidly driving new models for business and inviting new types of personal socialization, we often have nothing more than a fleeting intuition as to what is right or wrong. Our intuition may suggest that it is responsible to investigate the driving record of the nanny who drives our child to school, since such tools are now readily available. But is it also acceptable to seek out the records of other parents in our child’s car pool or of a date who picks us up by car? Alas, intuitions and perceptions of “creepiness” are highly subjective and difficult to generalize as social norms are being strained by new technologies and capabilities. And businesses that seek to create revenue opportunities by leveraging newly available data sources face huge challenges trying to operationalize such subjective notions into coherent business and policy strategies. This article presents a set of social and legal considerations to help individuals, engineers, businesses and policymakers navigate a world of new technologies and evolving social norms. These considerations revolve around concepts that we have explored in prior work, including enhanced transparency; accessibility to information in usable format; and the elusive principle of context.
Citations
Book
29 Aug 2016
TL;DR: The Black Box Society asks who connects the dots about what firms are doing with our personal data, and argues that we all need to be able to do so and to set limits on how big data affects our lives.
Abstract: Every day, corporations are connecting the dots about our personal behavior, silently scrutinizing clues left behind by our work habits and Internet use. The data compiled and portraits created are incredibly detailed, to the point of being invasive. But who connects the dots about what firms are doing with this information? The Black Box Society argues that we all need to be able to do so, and to set limits on how big data affects our lives. Hidden algorithms can make (or ruin) reputations, decide the destiny of entrepreneurs, or even devastate an entire economy. Shrouded in secrecy and complexity, decisions at major Silicon Valley and Wall Street firms were long assumed to be neutral and technical. But leaks, whistleblowers, and legal disputes have shed new light on automated judgment. Self-serving and reckless behavior is surprisingly common, and easy to hide in code protected by legal and real secrecy. Even after billions of dollars of fines have been levied, underfunded regulators may have only scratched the surface of this troubling behavior. Frank Pasquale exposes how powerful interests abuse secrecy for profit and explains ways to rein them in. Demanding transparency is only the first step. An intelligible society would assure that key decisions of its most important firms are fair, nondiscriminatory, and open to criticism. Silicon Valley and Wall Street need to accept as much accountability as they impose on others.

1,342 citations

Journal ArticleDOI
TL;DR: The tools of big data research are increasingly woven into our daily lives, including mining digital medical records for scientific and economic insights, mapping relationships via social media, capturing individuals’ speech and action via sensors, tracking movement across space, shaping police and security policy via “predictive policing,” and much more.
Abstract: The use of big data research methods has grown tremendously over the past five years in both academia and industry. As the size and complexity of available datasets has grown, so too have the ethical questions raised by big data research. These questions become increasingly urgent as data and research agendas move well beyond those typical of the computational and natural sciences, to more directly address sensitive aspects of human behavior, interaction, and health. The tools of big data research are increasingly woven into our daily lives, including mining digital medical records for scientific and economic insights, mapping relationships via social media, capturing individuals’ speech and action via sensors, tracking movement across space, shaping police and security policy via “predictive policing,” and much more. The beneficial possibilities for big data in science and industry are tempered by new challenges facing researchers that often lie outside their training and comfort zone. Social scientists now grapple with data structures and cloud computing, while computer scientists must contend with human subject protocols and institutional review boards (IRBs). While the connection between individual datum and actual human beings can appear quite abstract, the scope, scale, and complexity of many forms of big data creates a rich ecosystem in which human participants and their communities are deeply embedded and susceptible to harm. This complexity challenges any normative set of rules and makes devising universal guidelines difficult. Nevertheless, the need for direction in responsible big data research is evident, and this article provides a set of “ten simple rules” for addressing the complex ethical issues that will inevitably arise. Modeled on PLOS Computational Biology’s ongoing collection of rules, the recommendations we outline involve more nuance than the words “simple” and “rules” suggest. This nuance is inevitably tied to our paper’s starting premise: all big data research on social, medical, psychological, and economic phenomena engages with human subjects, and researchers have the ethical responsibility to minimize potential harm. The variety in data sources, research topics, and methodological approaches in big data belies a one-size-fits-all checklist; as a result, these rules are less specific than some might hope. Rather, we exhort researchers to recognize the human participants and complex systems contained within their data and make grappling with ethical questions part of their standard workflow. Towards this end, we structure the first five rules around how to reduce the chance of harm resulting from big data research practices; the second five rules focus on ways researchers can contribute to building best practices that fit their disciplinary and methodological approaches. At the core of these rules, we challenge big data researchers who consider their data disentangled from the ability to harm to reexamine their assumptions. The examples in this paper show how often even seemingly innocuous and anonymized data have produced unanticipated ethical questions and detrimental impacts. This paper is a result of a two-year National Science Foundation (NSF)-funded project that established the Council for Big Data, Ethics, and Society, a group of 20 scholars from a wide range of social, natural, and computational sciences (http://bdes.datasociety.net/). 
The Council was charged with providing guidance to the NSF on how to best encourage ethical practices in scientific and engineering research, utilizing big data research methods and infrastructures [1].

248 citations

Book ChapterDOI
01 Jun 2014
TL;DR: In this paper, the authors take as a given that big data implicates important ethical and political values and focus on attempts to avoid or mitigate the conflicts that may arise, since the familiar pair of anonymity and informed consent continues to strike many as the best, and perhaps only, way to escape the need to actually resolve these conflicts one way or the other.
Abstract: Big data promises to deliver analytic insights that will add to the stock of scientific and social scientific knowledge, significantly improve decision making in both the public and private sector, and greatly enhance individual self-knowledge and understanding. They have already led to entirely new classes of goods and services, many of which have been embraced enthusiastically by institutions and individuals alike. And yet, where these data commit to record details about human behavior, they have been perceived as a threat to fundamental values, including everything from autonomy, to fairness, justice, due process, property, solidarity, and, perhaps most of all, privacy. Given this apparent conflict, some have taken to calling for outright prohibitions on various big data practices, while others have found good reason to finally throw caution (and privacy) to the wind in the belief that big data will more than compensate for its potential costs. Still others, of course, are searching for a principled stance on privacy that offers the flexibility necessary for these promises to be realized while respecting the important values that privacy promotes. This is a familiar situation because it rehearses many of the long-standing tensions that have characterized each successive wave of technological innovation over the past half-century and their inevitable disruption of constraints on information flows through which privacy had been assured. It should come as no surprise that attempts to deal with new threats draw from the toolbox assembled to address earlier upheavals. Ready-to-hand, anonymity and informed consent remain the most popular tools for relieving these tensions – tensions that we accept, from the outset, as genuine and, in many cases, acute. Taking as a given that big data implicates important ethical and political values, we direct our focus instead on attempts to avoid or mitigate the conflicts that may arise. We do so because the familiar pair of anonymity and informed consent continues to strike many as the best and perhaps only way to escape the need to actually resolve these conflicts one way or the other.

199 citations

Proceedings ArticleDOI
02 May 2019
TL;DR: This paper shows how three key concepts for research and design pertaining to new and emerging digital consumer technologies (foot-in-the-door devices, hole-and-corner applications, and digital leakage) may be used analytically to investigate issues such as privacy and security.
Abstract: Through a design-led inquiry focused on smart home security cameras, this research develops three key concepts for research and design pertaining to new and emerging digital consumer technologies. Digital leakage names the propensity for digital information to be shared, stolen, and misused in ways unbeknownst or even harmful to those to whom the data pertains or belongs. Hole-and-corner applications are those functions connected to users' data, devices, and interactions yet concealed from or downplayed to them, often because they are non-beneficial or harmful to them. Foot-in-the-door devices are products and services with functional offerings and affordances that work to normalize and integrate a technology, thus laying groundwork for future adoption of features that might have earlier been rejected as unacceptable or unnecessary. Developed and illustrated through a set of design studies and explorations, this paper shows how these concepts may be used analytically to investigate issues such as privacy and security, anticipatorily to speculate about the future of technology development and use, and generatively to synthesize design concepts and solutions.

88 citations

Journal ArticleDOI
TL;DR: To maximize the potential of technology to help patients with mental illness, physicians need education about the digital economy, and patients need help understanding the appropriate use and limitations of online websites and smartphone apps.
Abstract: The digital revolution in medicine not only offers exciting new directions for the treatment of mental illness, but also presents challenges to patient privacy and security. Changes in medicine are part of the complex digital economy based on creating value from analysis of behavioral data acquired by the tracking of daily digital activities. Without an understanding of the digital economy, recommending the use of technology to patients with mental illness can inadvertently lead to harm. Behavioral data are sold in the secondary data market, combined with other data from many sources, and used in algorithms that automatically classify people. These classifications are used in commerce and government, may be discriminatory, and result in non-medical harm to patients with mental illness. There is also potential for medical harm related to poor quality online information, self-diagnosis and self-treatment, passive monitoring, and the use of unvalidated smartphone apps. The goal of this paper is to increase awareness and foster discussion of the new ethical issues. To maximize the potential of technology to help patients with mental illness, physicians need education about the digital economy, and patients need help understanding the appropriate use and limitations of online websites and smartphone apps.

82 citations


Cites background from "A Theory of Creepy: Technology, Privacy and Shifting Social Norms"

  • ...Traditional societal concepts of what data are public versus private data, and medical versus non-medical are blurring (Tene and Polonetsky 2013; Monteith and Glenn 2016; Friedland 2015)....


References
Book ChapterDOI
21 Jul 2009
TL;DR: Will alphanumeric passwords still be ubiquitous in 2019, or will adoption of alternative proposals be commonplace?
Abstract: While a lot has changed in Internet security in the last 10 years, a lot has stayed the same, such as the use of alphanumeric passwords. Passwords remain the dominant means of authentication on the Internet, even in the face of significant problems related to password forgetting and theft. In fact, despite large numbers of proposed alternatives, we must remember more passwords than ever before. Why is this? Will alphanumeric passwords still be ubiquitous in 2019, or will adoption of alternative proposals be commonplace? What must happen in order to move beyond passwords? This note pursues these questions, following a panel discussion at Financial Cryptography and Data Security 2009.

123 citations