Author

Danielle Keats Citron

Bio: Danielle Keats Citron is an academic researcher from the University of Virginia. The author has contributed to research in the topics of information privacy and the Supreme Court. The author has an h-index of 21 and has co-authored 93 publications receiving 2,791 citations. Previous affiliations of Danielle Keats Citron include the University of Michigan and Yale University.


Papers
Posted Content
TL;DR: Procedural regularity is essential for those stigmatized by “artificially intelligent” scoring systems, and regulators should be able to test scoring systems to ensure their fairness and accuracy.
Abstract: Big Data is increasingly mined to rank and rate individuals. Predictive algorithms assess whether we are good credit risks, desirable employees, reliable tenants, valuable customers — or deadbeats, shirkers, menaces, and “wastes of time.” Crucial opportunities are on the line, including the ability to obtain loans, work, housing, and insurance. Though automated scoring is pervasive and consequential, it is also opaque and lacking oversight. In one area where regulation does prevail — credit — the law focuses on credit history, not the derivation of scores from data. Procedural regularity is essential for those stigmatized by “artificially intelligent” scoring systems. The American due process tradition should inform basic safeguards. Regulators should be able to test scoring systems to ensure their fairness and accuracy. Individuals should be granted meaningful opportunities to challenge adverse decisions based on scores miscategorizing them. Without such protections in place, systems could launder biased and arbitrary data into powerfully stigmatizing scores.
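
As a rough illustration of the black-box testing the abstract calls for, here is a minimal Python sketch of a regulator probing a scoring system for approval-rate disparities. Everything here (the score_applicant logic, the field names, the 660 cutoff) is a hypothetical stand-in, not anything from the paper itself.

```python
# Hypothetical sketch: a regulator black-box-testing a scoring system for
# fairness and accuracy, as the abstract proposes. Scoring logic, field
# names, and the cutoff are all invented for illustration.
from statistics import mean

def score_applicant(applicant: dict) -> float:
    """Stand-in for an opaque third-party scoring model."""
    score = 650 + 2 * applicant["years_history"]
    if applicant["spend_code"] in {"counseling", "tire_repair"}:
        score -= 40  # an arbitrary, hard-to-justify penalty
    return score

def approval_rates(audit_set: list, group_key: str, cutoff: float) -> dict:
    """Approval rate per group: a basic disparity probe."""
    by_group = {}
    for applicant in audit_set:
        flags = by_group.setdefault(applicant[group_key], [])
        flags.append(score_applicant(applicant) >= cutoff)
    return {group: mean(flags) for group, flags in by_group.items()}

audit_set = [
    {"years_history": 2,  "spend_code": "counseling",  "group": "A"},
    {"years_history": 10, "spend_code": "tire_repair", "group": "A"},
    {"years_history": 2,  "spend_code": "groceries",   "group": "B"},
    {"years_history": 10, "spend_code": "groceries",   "group": "B"},
]
print(approval_rates(audit_set, "group", cutoff=660))
# A large approval-rate gap between groups flags the system for review.
```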

365 citations

Journal ArticleDOI
TL;DR: The aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it.
Abstract: Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with “deep fake” technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did. Machine learning techniques are escalating the technology’s sophistication, making deep fakes ever more realistic and increasingly resistant to detection. Deep-fake technology has characteristics that enable rapid and widespread diffusion, putting it into the hands of both sophisticated and unsophisticated actors. While deep-fake technology will bring with it certain benefits, it also will introduce many harms. The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases. Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well. Our aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it. We survey a broad array of responses, including: the role of technological solutions; criminal penalties, civil liability, and regulatory action; military and covert-action responses; economic sanctions; and market developments. We cover the waterfront from immunities to immutable authentication trails, offering recommendations to improve law and policy and anticipating the pitfalls embedded in various solutions.
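
Among the responses the survey mentions are “immutable authentication trails.” Below is a minimal sketch of one way such a trail could work, as a hash-chained edit log; the scheme and all names here are illustrative assumptions, not the article’s proposal.

```python
# Hypothetical sketch of an "immutable authentication trail": each edit to
# a media item is appended to a hash chain, so retroactive tampering with
# the history becomes detectable. Illustrative assumption only.
import hashlib
import json

def chain_entry(prev_hash: str, event: dict) -> dict:
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"prev": prev_hash, "event": event, "hash": digest}

def verify(trail: list) -> bool:
    prev = "genesis"
    for entry in trail:
        if entry["prev"] != prev:
            return False
        if chain_entry(prev, entry["event"])["hash"] != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

trail, prev = [], "genesis"
for event in [{"op": "capture", "device": "cam-01"},
              {"op": "crop", "tool": "editor"}]:
    entry = chain_entry(prev, event)
    trail.append(entry)
    prev = entry["hash"]

assert verify(trail)
trail[0]["event"]["device"] = "cam-99"  # rewrite history...
assert not verify(trail)                # ...and the chain no longer checks out
```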

300 citations

Book
22 Sep 2014
TL;DR: Citron argues that cyber-harassment is a matter of civil rights law, and that legal precedents as well as social norms of decency and civility must be leveraged to stop it.
Abstract: Some see the internet as a Wild West where those who venture online must be thick-skinned enough to endure verbal attacks in the name of free speech protection. Danielle Keats Citron rejects this view. Cyber-harassment is a matter of civil rights law, and legal precedents as well as social norms of decency and civility must be leveraged to stop it.

288 citations

Journal Article
TL;DR: Drawing on Dave Eggers’s novel The Circle, the authors describe pervasive scoring systems that rank people in every imaginable way, combining factors such as test results, class rank, and a school’s relative academic strength, and argue that such systems are increasingly used to individuals’ detriment.
Abstract: [Jennifer is] ranked 1,396 out of 179,827 high school students in Iowa. . . . Jennifer’s score is the result of comparing her test results, her class rank, her school’s relative academic strength, and a number of other factors. . . . [C]an this be compared against all the other students in the country, and maybe even the world? . . . That’s the idea. . . . That sounds very helpful. . . . And would eliminate a lot of doubt and stress out there. (Dave Eggers, The Circle)

INTRODUCTION TO THE SCORED SOCIETY

In his novel The Circle, Dave Eggers imagines persistent surveillance technologies that score people in every imaginable way. Employees receive rankings for their participation in social media. Retinal apps allow police officers to see career criminals in distinct colors: yellow for low-level offenders, orange for slightly more dangerous but still nonviolent offenders, and red for the truly violent. Intelligence agencies can create a web of all of a suspect’s contacts, so that criminals’ associates are tagged in the same color scheme as the criminals themselves.

Eggers’s imagination is not far from current practices. Although predictive algorithms may not yet be ranking high school students nationwide, or tagging criminals’ associates with color-coded risk assessments, they are increasingly rating people in countless aspects of their lives.

Consider these examples. Job candidates are ranked by what their online activities say about their creativity and leadership. Software engineers are assessed for their contributions to open source projects, with points awarded when others use their code. Individuals are assessed as likely to vote for a candidate based on their cable-usage patterns. Recently released prisoners are scored on their likelihood of recidivism.

How are these scores developed? Predictive algorithms mine personal information to make guesses about individuals’ likely actions and risks. A person’s on- and offline activities are turned into scores that rate them above or below others. Private and public entities rely on predictive algorithmic assessments to make important decisions about individuals.

Sometimes, individuals can score the scorers, so to speak. Landlords can report bad tenants to data brokers, while tenants can check abusive landlords on sites like ApartmentRatings.com. On sites like Rate My Professors, students can score professors, who can respond to critiques via video. In many online communities, commenters can in turn rank the interplay between the rated, the raters, and the raters of the rated, in an effort to make sense of it all (or at least award the most convincing or popular with points or “karma”).

Although mutual-scoring opportunities among formally equal subjects exist in some communities, the realm of management and business more often features powerful entities who turn individuals into ranked and rated objects. While scorers often characterize their work as an oasis of opportunity for the hardworking, the following are examples of ranking systems used to individuals’ detriment.

A credit card company uses behavioral-scoring algorithms to rate consumers’ credit risk because they used their cards to pay for marriage counseling, therapy, or tire-repair services. Automated systems rank candidates’ talents by looking at how others rate their online contributions. Threat assessments result in arrests or the inability to fly even though they are based on erroneous information. Political activists are designated as “likely” to commit crimes.

And there is far more to come. Algorithmic predictions about health risks, based on information that individuals share with mobile apps about their caloric intake, may soon result in higher insurance premiums. Sites soliciting feedback on “bad drivers” may aggregate the information, and could possibly share it with insurance companies who score the risk potential of insured individuals. …
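
As a toy illustration of the ranking step the abstract describes, where weighted signals become a raw score and the score becomes a position “above or below others,” here is a minimal Python sketch; the weights and signal names are invented for illustration.

```python
# Toy sketch of behavioral scoring as relative ranking. The weights and
# signal names are illustrative assumptions, not any real system's model.
def raw_score(person: dict) -> float:
    return (0.5 * person["test_result"]
            + 0.3 * person["class_rank_pct"]
            + 0.2 * person["school_strength"])

def rank(population: list, target: dict) -> tuple:
    """1-based position of `target` when everyone is sorted best-first."""
    target_score = raw_score(target)
    better = sum(raw_score(p) > target_score for p in population)
    return better + 1, len(population)

import random
random.seed(0)
population = [{"test_result": random.random(),
               "class_rank_pct": random.random(),
               "school_strength": random.random()} for _ in range(1000)]
pos, total = rank(population, population[0])
print(f"ranked {pos} out of {total}")  # cf. Jennifer's 1,396 of 179,827
```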

277 citations

Posted Content
TL;DR: The criminalization of revenge pornography is necessary to protect against devastating privacy invasions that chill self-expression and ruin lives; a narrowly and carefully crafted criminal statute can comport with the First Amendment.
Abstract: Violations of sexual privacy, notably the non-consensual publication of sexually graphic images in violation of someone's trust, deserve criminal punishment. They deny subjects' ability to decide if and when they are sexually exposed to the public and undermine the trust needed for intimate relationships. They also produce grave emotional and dignitary harms, exact steep financial costs, and increase the risks of physical assault. A narrowly and carefully crafted criminal statute can comport with the First Amendment. The criminalization of revenge porn is necessary to protect against devastating privacy invasions that chill self-expression and ruin lives.

204 citations


Cited by
21 Jan 2018
TL;DR: In commercial API-based classifiers of gender from facial images, including IBM Watson Visual Recognition, the highest error rates involve images of dark-skinned women, while the most accurate results are for light-skinned men.
Abstract: The paper “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification” by Joy Buolamwini and Timnit Gebru, which will be presented at the Conference on Fairness, Accountability, and Transparency (FAT*) in February 2018, evaluates three commercial API-based classifiers of gender from facial images, including IBM Watson Visual Recognition. The study finds these services to have recognition capabilities that are not balanced across genders and skin tones [1]. In particular, the authors show that the highest error rates involve images of dark-skinned women, while the most accurate results are for light-skinned men.
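
A minimal sketch of the intersectional evaluation style the study uses: computing error rates per gender-and-skin-tone subgroup instead of one aggregate figure. The data and field names below are illustrative, not the Gender Shades benchmark itself.

```python
# Sketch of per-subgroup error analysis. Sample data is invented to show
# how an aggregate accuracy number can hide a concentrated disparity.
from collections import defaultdict

def subgroup_error_rates(samples: list) -> dict:
    counts = defaultdict(lambda: [0, 0])  # subgroup -> [errors, total]
    for s in samples:
        key = (s["gender"], s["skin_tone"])
        counts[key][0] += s["predicted"] != s["actual"]
        counts[key][1] += 1
    return {key: errors / total for key, (errors, total) in counts.items()}

samples = [
    {"gender": "female", "skin_tone": "darker",  "predicted": "male",   "actual": "female"},
    {"gender": "female", "skin_tone": "darker",  "predicted": "female", "actual": "female"},
    {"gender": "male",   "skin_tone": "lighter", "predicted": "male",   "actual": "male"},
    {"gender": "male",   "skin_tone": "lighter", "predicted": "male",   "actual": "male"},
]
print(subgroup_error_rates(samples))
# Aggregate accuracy here is 75%, yet every error falls on one subgroup,
# which is exactly what a single aggregate number hides.
```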

2,528 citations

Book
29 Aug 2016
TL;DR: The Black Box Society asks who connects the dots about what firms are doing with our personal information, and argues that we all need to be able to do so and to set limits on how big data affects our lives.
Abstract: Every day, corporations are connecting the dots about our personal behavior, silently scrutinizing clues left behind by our work habits and Internet use. The data compiled and portraits created are incredibly detailed, to the point of being invasive. But who connects the dots about what firms are doing with this information? The Black Box Society argues that we all need to be able to do so, and to set limits on how big data affects our lives. Hidden algorithms can make (or ruin) reputations, decide the destiny of entrepreneurs, or even devastate an entire economy. Shrouded in secrecy and complexity, decisions at major Silicon Valley and Wall Street firms were long assumed to be neutral and technical. But leaks, whistleblowers, and legal disputes have shed new light on automated judgment. Self-serving and reckless behavior is surprisingly common, and easy to hide in code protected by legal and real secrecy. Even after billions of dollars of fines have been levied, underfunded regulators may have only scratched the surface of this troubling behavior. Frank Pasquale exposes how powerful interests abuse secrecy for profit and explains ways to rein them in. Demanding transparency is only the first step. An intelligible society would assure that key decisions of its most important firms are fair, nondiscriminatory, and open to criticism. Silicon Valley and Wall Street need to accept as much accountability as they impose on others.

1,342 citations

01 Jan 2014
TL;DR: An index of recently published law review issues, spanning journals from the American Bankruptcy Institute Law Review to the Yale Journal on Regulation.
Abstract: American Bankruptcy Institute Law Review 17 Am. Bankr. Inst. L. Rev., No. 1, Spring, 2009. Boston College Law Review 50 B.C. L. Rev., No. 3, May, 2009. Boston University Public Interest Law Journal 18 B.U. Pub. Int. L.J., No. 2, Spring, 2009. Cardozo Journal of Conflict Resolution 10 Cardozo J. Conflict Resol., No. 2, Spring, 2009. Cardozo Public Law, Policy, & Ethics Journal 7 Cardozo Pub. L. Pol’y & Ethics J., No. 3, Summer, 2009. Chicago Journal of International Law 10 Chi. J. Int’l L., No. 1, Summer, 2009. Colorado Journal of International Environmental Law and Policy 20 Colo. J. Int’l Envtl. L. & Pol’y, No. 2, Winter, 2009. Columbia Journal of Law & the Arts 32 Colum. J.L. & Arts, No. 3, Spring, 2009. Connecticut Public Interest Law Journal 8 Conn. Pub. Int. L.J., No. 2, Spring-Summer, 2009. Cornell Journal of Law and Public Policy 18 Cornell J.L. & Pub. Pol’y, No. 1, Fall, 2008. Cornell Law Review 94 Cornell L. Rev., No. 5, July, 2009. Creighton Law Review 42 Creighton L. Rev., No. 3, April, 2009. Criminal Law Forum 20 Crim. L. Forum, Nos. 2-3, Pp. 173-394, 2009. Delaware Journal of Corporate Law 34 Del. J. Corp. L., No. 2, Pp. 433-754, 2009. Environmental Law Reporter News & Analysis 39 Envtl. L. Rep. News & Analysis, No. 7, July, 2009. European Journal of International Law 20 Eur. J. Int’l L., No. 2, April, 2009. Family Law Quarterly 43 Fam. L.Q., No. 1, Spring, 2009. Georgetown Journal of International Law 40 Geo. J. Int’l L., No. 3, Spring, 2009. Georgetown Journal of Legal Ethics 22 Geo. J. Legal Ethics, No. 2, Spring, 2009. Golden Gate University Law Review 39 Golden Gate U. L. Rev., No. 2, Winter, 2009. Harvard Environmental Law Review 33 Harv. Envtl. L. Rev., No. 2, Pp. 297-608, 2009. International Review of Law and Economics 29 Int’l Rev. L. & Econ., No. 1, March, 2009. Journal of Environmental Law and Litigation 24 J. Envtl. L. & Litig., No. 1, Pp. 1-201, 2009. Journal of Legislation 34 J. Legis., No. 1, Pp. 1-98, 2008. Journal of Technology Law & Policy 14 J. Tech. L. & Pol’y, No. 1, June, 2009. Labor Lawyer 24 Lab. Law., No. 3, Winter/Spring, 2009. Michigan Journal of International Law 30 Mich. J. Int’l L., No. 3, Spring, 2009. New Criminal Law Review 12 New Crim. L. Rev., No. 2, Spring, 2009. Northern Kentucky Law Review 36 N. Ky. L. Rev., No. 4, Pp. 445-654, 2009. Ohio Northern University Law Review 35 Ohio N.U. L. Rev., No. 2, Pp. 445-886, 2009. Pace Law Review 29 Pace L. Rev., No. 3, Spring, 2009. Quinnipiac Health Law Journal 12 Quinnipiac Health L.J., No. 2, Pp. 209-332, 2008-2009. Real Property, Trust and Estate Law Journal 44 Real Prop. Tr. & Est. L.J., No. 1, Spring, 2009. Rutgers Race and the Law Review 10 Rutgers Race & L. Rev., No. 2, Pp. 441-629, 2009. San Diego Law Review 46 San Diego L. Rev., No. 2, Spring, 2009. Seton Hall Law Review 39 Seton Hall L. Rev., No. 3, Pp. 725-1102, 2009. Southern California Interdisciplinary Law Journal 18 S. Cal. Interdisc. L.J., No. 3, Spring, 2009. Stanford Environmental Law Journal 28 Stan. Envtl. L.J., No. 3, July, 2009. Tulsa Law Review 44 Tulsa L. Rev., No. 2, Winter, 2008. UMKC Law Review 77 UMKC L. Rev., No. 4, Summer, 2009. Washburn Law Journal 48 Washburn L.J., No. 3, Spring, 2009. Washington University Global Studies Law Review 8 Wash. U. Global Stud. L. Rev., No. 3, Pp. 451-617, 2009. Washington University Journal of Law & Policy 29 Wash. U. J.L. & Pol’y, Pp. 1-401, 2009. Washington University Law Review 86 Wash. U. L. Rev., No. 6, Pp. 1273-1521, 2009. William Mitchell Law Review 35 Wm. Mitchell L. Rev., No. 4, Pp. 1235-1609, 2009. Yale Journal of International Law 34 Yale J. Int’l L., No. 2, Summer, 2009. Yale Journal on Regulation 26 Yale J. on Reg., No. 2, Summer, 2009.

1,336 citations

Proceedings ArticleDOI
12 May 2019
TL;DR: This paper proposes a new method to expose AI-generated fake face images or videos based on the observation that Deep Fakes are created by splicing a synthesized face region into the original image, and in doing so introduce errors that can be revealed when 3D head poses are estimated from the face images.
Abstract: In this paper, we propose a new method to expose AI-generated fake face images or videos (commonly known as Deep Fakes). Our method is based on the observation that Deep Fakes are created by splicing a synthesized face region into the original image, and in doing so introduce errors that can be revealed when 3D head poses are estimated from the face images. We perform experiments to demonstrate this phenomenon and further develop a classification method based on this cue. Using features based on this cue, an SVM classifier is trained and evaluated on a set of real face images and Deep Fakes.
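
A minimal sketch of the classification step the abstract describes. The head-pose estimation itself (which could be done with, e.g., cv2.solvePnP on detected facial landmarks) is omitted here; the synthetic "pose difference" features below are invented stand-ins for its output, and only the SVM-on-a-cue idea is shown.

```python
# Sketch of the paper's classification step under stated assumptions:
# real faces yield consistent 3D head-pose estimates from central vs.
# whole-face landmarks, while spliced Deep Fakes do not, and an SVM
# separates the two. Features here are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# 3-D rotation-vector differences between whole-face and central-region
# pose estimates: near zero for real faces, larger for spliced fakes.
real = rng.normal(0.0, 0.02, size=(200, 3))
fake = rng.normal(0.0, 0.15, size=(200, 3))

X = np.vstack([real, fake])
y = np.array([0] * len(real) + [1] * len(fake))  # 0 = real, 1 = fake

clf = SVC(kernel="rbf").fit(X, y)
suspect = np.array([[0.12, -0.09, 0.10]])  # large head-pose inconsistency
print("deep fake" if clf.predict(suspect)[0] else "real")
```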

681 citations

Journal ArticleDOI
TL;DR: Reddit’s karma point system, aggregation of material across subreddits, ease of subreddit and user account creation, governance structure, and policies around offensive content provide fertile ground for anti-feminist and misogynistic activism.
Abstract: This article considers how the social-news and community site Reddit.com has become a hub for anti-feminist activism. Examining two recent cases of what are defined as “toxic technocultures” (#Gamergate and The Fappening), this work describes how Reddit’s design, algorithm, and platform politics implicitly support these kinds of cultures. In particular, this piece focuses on the ways in which Reddit’s karma point system, aggregation of material across subreddits, ease of subreddit and user account creation, governance structure, and policies around offensive content serve to provide fertile ground for anti-feminist and misogynistic activism. The ways in which these events and communities reflect certain problematic aspects of geek masculinity are also considered. This research is informed by the results of a long-term participant-observation and ethnographic study into Reddit’s culture and community and is grounded in actor-network theory.

660 citations