Journal ISSN: 0043-0617

Washington Law Review 

University of Washington School of Law
About: Washington Law Review is an academic journal. The journal publishes mainly in the areas of Supreme Court and common law. It has an ISSN identifier of 0043-0617. Over its lifetime, the journal has published 611 papers, which have received 4,831 citations.


Papers
Journal Article
TL;DR: In this article, the authors propose a new construct, "contextual integrity," as an alternative benchmark for privacy, and argue that public surveillance violates a right to privacy because it violates contextual integrity; as such, it constitutes injustice and even tyranny.
Abstract: The practices of public surveillance, which include the monitoring of individuals in public through a variety of media (e.g., video, data, online), are among the least understood and controversial challenges to privacy in an age of information technologies. The fragmentary nature of privacy policy in the United States reflects not only the oppositional pulls of diverse vested interests, but also the ambivalence of unsettled intuitions on mundane phenomena such as shopper cards, closed-circuit television, and biometrics. This Article, which extends earlier work on the problem of privacy in public, explains why some of the prominent theoretical approaches to privacy, which were developed over time to meet traditional privacy challenges, yield unsatisfactory conclusions in the case of public surveillance. It posits a new construct, “contextual integrity,” as an alternative benchmark for privacy, to capture the nature of challenges posed by information technologies. Contextual integrity ties adequate protection for privacy to norms of specific contexts, demanding that information gathering and dissemination be appropriate to that context and obey the governing norms of distribution within it. Building on the idea of “spheres of justice,” developed by political philosopher Michael Walzer, this Article argues that public surveillance violates a right to privacy because it violates contextual integrity; as such, it constitutes injustice and even tyranny.
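The contextual-integrity benchmark can be read as a simple decision rule: a flow of information is acceptable only if the information type is appropriate to its context and its distribution obeys that context's governing norms. The following is a minimal, hypothetical sketch of that rule; the contexts, attributes, and norm values are invented for illustration and are not drawn from the article itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str
    recipient: str
    subject: str
    attribute: str  # the type of information being transmitted
    context: str    # e.g. "healthcare", "public street"

@dataclass
class ContextNorms:
    # attributes whose collection is appropriate in this context
    appropriate_attributes: set
    # (sender, recipient) pairs permitted by the context's
    # norms of distribution
    permitted_distribution: set

def violates_contextual_integrity(flow: Flow, norms: dict) -> bool:
    """A flow violates contextual integrity if the information type is
    inappropriate to the context, or if its distribution breaks the
    context's governing norms (a toy rendering of the article's idea)."""
    ctx = norms[flow.context]
    if flow.attribute not in ctx.appropriate_attributes:
        return True
    return (flow.sender, flow.recipient) not in ctx.permitted_distribution

# Illustrative, invented norms for a single context.
norms = {
    "healthcare": ContextNorms(
        appropriate_attributes={"diagnosis", "medication"},
        permitted_distribution={("patient", "physician")},
    )
}

# Sharing a diagnosis with an insurer breaks the distribution norm,
# even though the attribute is appropriate to the healthcare context.
flow = Flow("physician", "insurer", "patient", "diagnosis", "healthcare")
print(violates_contextual_integrity(flow, norms))  # True
```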

1,477 citations

Journal Article
TL;DR: In this article, the authors discuss Dave Eggers's depiction of persistent surveillance technologies that score people in every imaginable way, based on factors such as high school students' test results, their class rank, their school's relative academic strength, and a number of other factors.
Abstract: [Jennifer is] ranked 1,396 out of 179,827 high school students in Iowa. . . . Jennifer's score is the result of comparing her test results, her class rank, her school's relative academic strength, and a number of other factors. . . . [C]an this be compared against all the other students in the country, and maybe even the world? . . . That's the idea . . . . That sounds very helpful. . . . And would eliminate a lot of doubt and stress out there.
- Dave Eggers, The Circle1

INTRODUCTION TO THE SCORED SOCIETY

In his novel The Circle, Dave Eggers imagines persistent surveillance technologies that score people in every imaginable way. Employees receive rankings for their participation in social media.2 Retinal apps allow police officers to see career criminals in distinct colors: yellow for low-level offenders, orange for slightly more dangerous, but still nonviolent offenders, and red for the truly violent.3 Intelligence agencies can create a web of all of a suspect's contacts so that criminals' associates are tagged in the same color scheme as the criminals themselves.4

Eggers's imagination is not far from current practices. Although predictive algorithms may not yet be ranking high school students nationwide, or tagging criminals' associates with color-coded risk assessments, they are increasingly rating people in countless aspects of their lives.

Consider these examples. Job candidates are ranked by what their online activities say about their creativity and leadership.5 Software engineers are assessed for their contributions to open source projects, with points awarded when others use their code.6 Individuals are assessed as likely to vote for a candidate based on their cable-usage patterns.7 Recently released prisoners are scored on their likelihood of recidivism.8

How are these scores developed? Predictive algorithms mine personal information to make guesses about individuals' likely actions and risks.9 A person's on- and offline activities are turned into scores that rate them above or below others.10 Private and public entities rely on predictive algorithmic assessments to make important decisions about individuals.11

Sometimes, individuals can score the scorers, so to speak. Landlords can report bad tenants to data brokers, while tenants can check abusive landlords on sites like ApartmentRatings.com. On sites like Rate My Professors, students can score professors, who can respond to critiques via video. In many online communities, commenters can in turn rank the interplay between the rated, the raters, and the raters of the rated, in an effort to make sense of it all (or at least award the most convincing or popular with points or "karma").12

Although mutual-scoring opportunities among formally equal subjects exist in some communities, the realm of management and business more often features powerful entities who turn individuals into ranked and rated objects.13 While scorers often characterize their work as an oasis of opportunity for the hardworking, the following are examples of ranking systems used to individuals' detriment.

A credit card company uses behavioral-scoring algorithms to rate consumers' credit risk because they used their cards to pay for marriage counseling, therapy, or tire-repair services.14 Automated systems rank candidates' talents by looking at how others rate their online contributions.15 Threat assessments result in arrests or the inability to fly even though they are based on erroneous information.16 Political activists are designated as "likely" to commit crimes.17

And there is far more to come. Algorithmic predictions about health risks, based on information that individuals share with mobile apps about their caloric intake, may soon result in higher insurance premiums.18 Sites soliciting feedback on "bad drivers" may aggregate the information, and could possibly share it with insurance companies who score the risk potential of insured individuals. …
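As a concrete illustration of the scoring mechanism the abstract describes, the sketch below shows how a behavioral-scoring model might convert purchase categories into a credit-risk score. The categories, weights, threshold, and logistic form are assumptions made purely for illustration; real scoring systems are proprietary and far more complex.

```python
# A minimal, hypothetical sketch of behavioral credit scoring:
# purchase categories are weighted and passed through a logistic
# function to produce a risk score. All weights are invented.
import math

# Assumed weights: positive values push the predicted risk up.
WEIGHTS = {
    "marriage_counseling": 0.8,
    "therapy": 0.6,
    "tire_repair": 0.4,
    "groceries": -0.1,
}
BIAS = -1.5

def risk_score(purchases: list) -> float:
    """Logistic score in [0, 1]: higher means the model predicts
    greater credit risk from purchase behavior alone."""
    z = BIAS + sum(WEIGHTS.get(p, 0.0) for p in purchases)
    return 1 / (1 + math.exp(-z))

print(round(risk_score(["groceries"]), 3))                       # lower risk
print(round(risk_score(["marriage_counseling", "therapy"]), 3))  # higher risk
```

The point of the sketch is the one the article makes: decisions about individuals turn on opaque proxies (here, purchase categories) rather than on anything the scored person can see or contest.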

277 citations

Journal Article
TL;DR: The idea that humans could, at some point, develop machines that actually "think" for themselves and act autonomously has been embedded in our literature and culture since the beginning of civilization.
Abstract: INTRODUCTION

The idea that humans could, at some point, develop machines that actually "think" for themselves and act autonomously has been embedded in our literature and culture since the beginning of civilization.1 But these ideas were generally thought to be religious expressions (what one scholar describes as an effort to forge our own Gods2) or pure science fiction. There was one important thread that tied together these visions of a special breed of superhuman men/machines: they invariably were stronger, smarter, and sharper analytically, that is, superior in all respects to humans, except for those traits involving emotional intelligence and empathy. But science fiction writers were of two minds about the capacity of super-smart machines to make life better for humans.

One vision was uncritically utopian. Intelligent machines, this account goes, would transform and enlighten society by performing the mundane, mind-numbing work that keeps humans from pursuing higher intellectual, spiritual, and artistic callings.3 This view was captured in the popular animated 1960s television show The Jetsons.4 As its title suggests, the show's vision is decidedly futuristic. The main character, George Jetson, lives with his family in a roomy, bright, and lavishly furnished apartment that seems to float in the sky. George and his family travel in a flying saucer-like car that drives itself and folds into a small briefcase. All of the family's domestic needs are taken care of by Rosie, the robotic family maid and housekeeper, who does the household chores and much of the parenting.5 George does "work." He is employed as a "digital index operator" by Spacely's Space Sprockets, which makes high-tech equipment. George often complains of overwork, even though he appears to simply push buttons on a computer for three hours a day, three days a week.6 In other words, the Jetsons live the American dream of the future.

In tangible ways, this utopian vision of the partnership between humans and highly intelligent machines is being realized. Today, supercomputers can beat humans at their own games. IBM's "Deep Blue" can beat the pants off chess grandmasters, while its sister supercomputer "Watson" can clobber the reigning Jeopardy champions.7 But intelligent machines are more than show. Highly sophisticated robots and other intelligent machines perform critical functions that not long ago were thought to be within the exclusive province of humans. They pilot sophisticated aircraft; perform delicate surgery; study the landscape of Mars; and, through smart nanotechnology, microscopic machines may soon deliver targeted medicines to areas within the body that are otherwise unreachable.8 In every one of these examples, machines perform these complex and at times dangerous tasks as well as, if not better than, humans.

But science fiction writers also laid out a darker vision of intelligent machines and feared that, at some point, autonomously thinking machines would turn on humans. Some of the best science fiction expresses this dystopian view, including Stanley Kubrick's 1968 classic film 2001: A Space Odyssey.9 The film's star is not the main character, "Dave" (Dr. David Bowman, played by Keir Dullea), or "Frank" (Dr. Frank Poole, played by Gary Lockwood), who are astronauts on a secret and mysterious mission to Jupiter. Instead, the character who rivets our attention is HAL 9000,10 the all-knowing supercomputer who controls most of the ship's operations, but does so under the nominal command of the astronauts. The complexity of the relationship between man and the super-intelligent machine is revealed early in the film. During a pre-mission interview, HAL claims that he is "foolproof and incapable of error,"11 displaying human-like hubris. And when Dave is asked if HAL has genuine emotions, he replies that HAL appears to, but that the truth is unknown.12 Once the mission begins, tensions between HAL and the astronauts start to surface. …

98 citations

Journal Article
TL;DR: In this paper, the author suggests that certain legal tasks are likely amenable to partial automation using machine learning techniques, provided that the technologies are appropriately matched to relevant tasks and that accuracy limitations are understood and accounted for.
Abstract: INTRODUCTION

What impact might artificial intelligence (AI) have upon the practice of law? According to one view, AI should have little bearing upon legal practice barring significant technical advances.1 The reason is that legal practice is thought to require advanced cognitive abilities, but such higher-order cognition remains outside the capability of current AI technology.2 Attorneys, for example, routinely combine abstract reasoning and problem-solving skills in environments of legal and factual uncertainty.3 Modern AI algorithms, by contrast, have been unable to replicate most human intellectual abilities, falling far short in advanced cognitive processes, such as analogical reasoning, that are basic to legal practice.4 Given these and other limitations in current AI technology, one might conclude that until computers can replicate the higher-order cognition routinely displayed by trained attorneys, AI would have little impact in a domain as full of abstraction and uncertainty as law.5

Although there is some truth to that view, its conclusion is overly broad. It misses a class of legal tasks for which current AI technology can still have an impact even given the technological inability to match human-level reasoning. Consider that outside of law, non-cognitive AI techniques have been successfully applied to tasks that were once thought to necessitate human intelligence, for example language translation.6 While the results of these automated efforts are sometimes imperfect, the interesting point is that such computer-generated results have often proven useful for particular tasks where strong approximations are acceptable.7 In a similar vein, this Article will suggest that there may be a limited, but not insignificant, subset of legal tasks that are capable of being partially automated using current AI techniques despite their limitations relative to human cognition.

In particular, this Article focuses upon a class of AI methods known as "machine learning" techniques and their potential impact upon legal practice. Broadly speaking, machine learning involves computer algorithms that have the ability to "learn" or improve in performance over time on some task.8 Given that there are multiple AI approaches, why highlight machine learning in particular? In the last few decades, researchers have successfully used machine learning to automate a variety of sophisticated tasks that were previously presumed to require human cognition. These applications range from autonomous (i.e., self-driving) cars, to automated language translation, prediction, speech recognition, and computer vision.9 Researchers have also begun to apply these techniques in the context of law.10

To be clear, I am not suggesting that all, or even most, of the tasks routinely performed by attorneys are automatable given the current state of AI technology. To the contrary, many of the tasks performed by attorneys do appear to require the type of higher-order intellectual skills that are beyond the capability of current techniques. Rather, I am suggesting that there are subsets of legal tasks that are likely automatable under the current state of the art, provided that the technologies are appropriately matched to relevant tasks, and that accuracy limitations are understood and accounted for. In other words, even given current limitations in AI technology as compared to human cognition, such computational approaches to automation may produce results that are "good enough" in certain legal contexts.

Part I of this Article explains the basic concepts underlying machine learning. Part II will convey a more general principle: non-intelligent computer algorithms can sometimes produce intelligent results in complex tasks through the use of suitable proxies detected in data. Part III will explore how certain legal tasks might be amenable to partial automation under this principle by employing machine learning techniques. This Part will also emphasize the significant limitations of these automated methods as compared to the capabilities of similarly situated attorneys. …
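As a concrete illustration of the "suitable proxies" principle, the sketch below trains a simple text classifier to flag documents as relevant or not, the kind of partial automation (akin to predictive coding in document review) the article contemplates. The example task, the tiny training set, and the choice of a naive Bayes model over TF-IDF features are assumptions made for illustration, not the article's own method.

```python
# A minimal sketch of partial automation of a legal task: a machine
# learning classifier learns word-pattern proxies for a legal judgment
# (document relevance in discovery). Training examples are invented;
# a real system would need attorney-labeled data and careful accuracy
# validation, which is exactly the limitation the article emphasizes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_docs = [
    "email discussing the merger agreement and due diligence",
    "memo on indemnification clauses in the draft contract",
    "lunch order for the office party",
    "reminder about the parking garage closure",
]
train_labels = ["relevant", "relevant", "not_relevant", "not_relevant"]

# The pipeline turns text into word-frequency features and fits a
# simple probabilistic classifier on them.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_docs, train_labels)

# Predictions are "good enough" proxies, not legal judgment: labels
# would still need attorney review wherever accuracy matters.
print(model.predict(["draft of the merger side letter"]))
```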

86 citations

Network Information
Related Journals (5)
Stanford Law Review: 1.9K papers, 50.4K citations, 85% related
Vanderbilt Law Review: 1.4K papers, 10.4K citations, 83% related
Duke Law Journal: 1.3K papers, 15.6K citations, 83% related
Yale Law Journal: 3.8K papers, 95K citations, 81% related
Fordham Law Review: 2.2K papers, 12.8K citations, 81% related
Performance Metrics

Number of papers from the journal in previous years:

Year  Papers
2020  12
2019  13
2018  27
2017  16
2016  21
2015  29