
Showing papers in "Brooklyn Law Review in 2011"


Journal Article
Matthew Sag1
TL;DR: The fair use doctrine is a central part of modern copyright law: academics, critics, journalists, teachers, film makers, fan-fiction writers, and technology companies all rely on the fair use doctrine to give them a certain amount of freedom in dealing with other people's copyrights.
Abstract: The fair use doctrine is a central part of modern copyright law: academics, critics, journalists, teachers, film makers, fan-fiction writers, and technology companies all rely on the fair use doctrine to give them a certain amount of freedom in dealing with other people’s copyrights. The fair use doctrine recognizes that very few works are created without some recognizable borrowing from antecedent works. Fair use allows copyrighted material to be used without permission; in so doing, it sets limits on the otherwise expansive rights of copyright owners to control the reproduction and performance of their works. As part of copyright law’s overall balance between authorial incentives and public freedom, the fair use doctrine “permits and requires courts to avoid rigid application of the copyright statute, when, on occasion, it would stifle the very creativity that law is designed to foster.” For all its acknowledged importance, however, the fair use doctrine is difficult—some say impossible—to define. This article proposes that a full understanding of fair use cannot be achieved without appreciating both its origins in English copyright law and its development as a legal transplant in the United States. Two recent cases illustrate the salience and difficulty of fair use. In 2005, Google, Inc. began its massive unauthorized digitization of library books to create an unashamedly

14 citations


Journal Article
TL;DR: In this article, the role and functions of digital technology in local governance are explored, and digital participatory practices in municipal budgeting, urban planning, and policy-making in various European cities are proposed as models for American municipalities.
Abstract: The transformative potential of digital technology for democratic governance is hardly questioned, but has not yet been tackled by legal scholarship. The Article starts filling this gap by exploring the role and functions of digital technology in local governance. The Article situates the relations between cities and citizens along two complementary axes - consumerism, in which citizens are regarded as consumers of services provided by the city; and participation, in which citizens play an active role in local decision-making and agenda-setting. The Article explains how digital technology fits into this framework and develops evaluative criteria to assess the performance of local digital initiatives. The Article then argues that while American cities reasonably satisfy consumerist, service-provision requirements, they fail to benefit from the participatory potential of digital technology. While this reality is lamentable, the Article demonstrates that it is not inevitable. Drawing on digital participatory practices in municipal budgeting, urban planning, and policy-making in various European cities, the Article proposes to adopt digital participatory patterns in American municipalities.

13 citations



Journal Article
Albert Lin1
TL;DR: Lin presents Technology Assessment 2.0, which discusses the promise of emerging technologies in addressing many of the problems facing human society as well as the risks they pose, including adverse health effects, environmental degradation and disaster, and even dehumanization.
Abstract: Technology Assessment 2.0: Revamping Our Approach to Emerging Technologies. Albert C. Lin. INTRODUCTION: We live in an era of rapid and potentially revolutionary technological changes. These changes will play a critical role in addressing many of the problems facing human society. Improvements in energy efficiency may reduce our dependence on fossil fuels. Redesigned manufacturing processes may require less energy and generate less waste. Geoengineering projects may mitigate some of the effects of climate change. And developments in synthetic biology and nanotechnology may increase food production, generate new pharmaceuticals, remediate environmental pollution, and transform countless aspects of our lives. At the same time, however, new technologies also raise the specter of adverse health effects, environmental degradation and disaster, and even dehumanization, should those technologies go awry. As past experiences teach us, new technologies do not merely solve old problems. Often, technologies create problems of their own, many of which reveal themselves only with time. Addressing these problems becomes especially difficult when technological systems become entrenched. See DAVID COLLINGRIDGE, THE SOCIAL CONTROL OF TECHNOLOGY 17-19 (1980) ("[B]y the time a technology is sufficiently well developed and diffused for its unwanted social consequences to become apparent, it is no longer easily controlled."); RICHARD SCLOVE, WOODROW WILSON INT'L CTR. FOR SCHOLARS, REINVENTING TECHNOLOGY ASSESSMENT: A 21ST CENTURY MODEL 2-3 (2010), available at http://wilsoncenter.org/topics/docs/ReinventingTechnologyAssessment1.pdf. New technologies pose

6 citations


Journal Article
TL;DR: In this article, the authors explore the question of whether certain approaches to statutory interpretation can be regarded as wrongful, and argue that recourse to purpose, contrary to the views of many, actually reduces the range of judicial discretion, and that those who associate purposive interpretation with judicial activism appear to be subject to a cognitive bias.
Abstract: In this essay, I wish to explore the question of whether certain approaches to statutory interpretation can be regarded as wrongful. My argument concerns instances in which interpreters take advantage of linguistic accident to license arguments that flout the intent or purpose of a law. Philosopher Bernard Williams calls reliance on literal meaning in this manner “fetishizing assertion,” and considers it tantamount to lying. If linguistic practices that rely too heavily on linguistic accident are wrongful, then serious ethical questions present themselves to the legal system. For if we acknowledge the problem, we then are forced to ask ourselves how comfortable we are with a rule of law that cannot rely fully on the law as written to sustain its legitimacy. In this brief essay, I raise these issues, and comment on their relationship to questions of judicial candor in cases concerning the interpretation of statutes. I conclude that especially when there is doubt about meaning, or suspicion that the legislature has erred, it is essential to turn to the purpose of the law in order to avoid the moral consequences of assertive fetishism. I further argue that recourse to purpose, contrary to the views of many, actually reduces the range of judicial discretion, and that those who associate purposive interpretation with judicial activism appear to be subject to a cognitive bias—the conjunction fallacy.

5 citations


Journal Article
Mary Holper1
TL;DR: The author argues that courts should refuse deference to In re Silva-Trevino under "Chevron step zero" because the Attorney General's decision-making process demonstrated neither transparency nor careful consideration.
Abstract: In the waning days of the Bush administration, Attorney General Michael Mukasey decided In re Silva-Trevino, in which he reversed over a century of immigration law precedent by creating a new moral turpitude test. He abandoned the well-entrenched “categorical approach,” the mechanism by which immigration judges decide whether a noncitizen is removable for a criminal conviction, and allowed judges to engage in a factual inquiry of whether an offense involves moral turpitude. The Attorney General made such a broad, sweeping change through a process that allowed no input from affected parties, including the individual whose case became the new precedent. In this article, I argue that courts should refuse deference to Silva-Trevino under “Chevron step zero.” Chevron, U.S.A. Inc. v. NRDC introduced a well-known two-step analysis for courts to determine whether an agency’s decision deserved deference: first, courts determine whether Congress used clear language in the statute; second, if Congress was not clear, courts defer to the agency’s reasonable interpretation. The Court later introduced what scholars call “Chevron step zero.” In an important step zero decision, the Court decided United States v. Mead Corp., holding that courts should not defer to agency interpretations of law issued through informal procedures because such interpretations do not have the force of law. I argue that courts should not defer to Silva-Trevino under Chevron step zero because the Attorney General did not decide the case using law-like procedures: the decision-making process demonstrated neither transparency nor careful consideration.

5 citations


Journal Article
TL;DR: This essay advances the conversation by reducing the far-flung objective of interpreting legislation to a core purpose and then taking up the critical task of evaluating competing approaches to discerning statutory meaning.
Abstract: Statutes are best understood as a form of communication. Communicating messages requires a sender and receiver. The sender encodes her message in the form of communication, and the receiver's task is to decode this message so that she can understand what it means. In all forms of communication that include commands, the challenge is to make sure that the commands can be effectively decoded and thus implemented as appropriate. In short, we view statutory interpretation's essential purpose as producing "a constitutionally legitimate decoding of [ambiguous] statutory commands." Although legislation is admittedly a very stylized rendering of a multifaceted, complex structure of law, politics, and institutional performance, we see value in reducing the far-flung objective of interpreting legislation to a core purpose. With this core purpose in mind, we can proceed to the critical task of evaluating competing approaches to discerning statutory meaning. The focus of this essay is to advance the conversation. Part I recapitulates the basic elements of communication theory and positive political theory, and their potent applications to statutory interpretation. Part II explains how a nuanced understanding of the lawmaking structure in

3 citations



Journal Article
TL;DR: In this paper, the author argues that plain meaning, as legalist meaning, can quite easily expand a statute's scope relative to a baseline of ordinary meaning or the status quo ex ante.
Abstract: Is plain meaning so plain? This is not meant to be a philosophical question, but one deserving serious legal analysis. The plain-meaning rule claims to provide certainty and narrow statutes’ domains. As a relative claim, comparing plain meaning with purposivism, I agree. But I do not agree that plain-meaning analysis is as easy as its proponents suggest. In this piece, I tease out two very different ideas of plain meaning—ordinary/popular meaning and expansive/legalist meaning—suggesting that doctrinal analysis requires more than plain-meaning simpliciter. Perhaps more importantly, I argue that plain meaning, as legalist meaning, can quite easily expand a statute’s scope, relative to a baseline of ordinary meaning or the status quo ex ante. In 1987, Justice Scalia gave an extremely influential set of lectures in which he set forth a doctrine of statutory interpretation known as the new textualism. The Scalia Tanner Lectures contain one of the most eloquent statements in print about the importance of legislation: “Every issue of law resolved by a federal judge involves interpretation of text—the text of a regulation, or of a statute, or of the Constitution.” Scalia’s theory influenced me, and a generation of scholars and students. In a world where very few lawyers have any clue about how legislation is debated—or even how to find legislative history—the textualism rule is easy to understand

1 citation



Journal Article
Linda Jellum1
TL;DR: In this paper, the difference between specific and general absurdity is defined and examined, and it is concluded that textualists should be especially loath to apply the absurdity doctrine in cases of specific, as opposed to general, absurdity.
Abstract: Absurdity is currently undefined in either the jurisprudence or scholarship; yet, it is one method by which textualists avoid the ordinary meaning of clear statutory text. Currently, the absurdity doctrine covers two very different types of statutes. It covers those statutes that are patently absurd on their face and in virtually all situations, such as a statute that allows a judge to weigh the probative value of a witness’s prior conviction when the evidence is offered by a civil defendant but not when offered by a civil plaintiff. It also covers those statutes that are absurd only when applied to specific situations, such as a statute that penalizes individuals from owning wild animals as applied to a person who rescues an injured squirrel. To date, the distinction between these types of absurdity has neither been noticed nor explored. This article fills that gap by defining the difference between specific and general absurdity and by examining why this difference matters. Given that the absurdity doctrine allows textualist judges to ignore the ordinary meaning of clear statutes, this article concludes that textualists should be especially loath to apply the doctrine in cases of specific, as opposed to general, absurdity. Yet, it is precisely in cases of specific absurdity that judicial intervention is needed most.

Journal Article
TL;DR: This article argues that persistent judicial reliance on legislative history to confirm or reinforce what courts have already concluded is the plain meaning of statutory text reflects shortcomings in the textualist approach, because assertions of clarity can too often serve as either a mirage or a refuge.
Abstract: The Supreme Court and lower courts often rely on legislative history to confirm or reinforce what they already have concluded is the plain meaning of statutory text. The Roberts Court has done so on numerous occasions since 2006: six of these majorities, including four cases decided during the 2009 Term, have drawn sharp rebukes from Justice Scalia.This Essay maintains that persistent judicial reliance on confirmatory history reflects important shortcomings in the textualist approach. When courts move beyond the presumptively clear meaning of statutory language, they recognize - even if implicitly - that assertions of clarity can too often serve as either a mirage or a refuge. Clarity may be a mirage because apparently precise words or phrases often give rise to conflicting "plain meanings," and also because apparently assured readers of those words or phrases are conditioned to perceive clarity based on their own specialized training, background, and level of self-confidence. Assertions of clarity may serve as a refuge in that they obviate the need for judges to provide more complete explanations for their decisions. This aspiration for completeness, although not embraced by Justice Scalia, is important to many other judges as they seek to explain adjudicative resolutions before the diverse audiences to whom they are responsive and responsible.


Journal Article
TL;DR: The magnetic pull of taxonomy is a well-worn feature of scholarship in the realm of statutory interpretation and beyond. So it has been with textualism, once dubbed the "new textualism," though presumably the moniker of novelty can be dropped now that twenty years have passed since textualism first appeared, close on the heels of its avatar, Justice Antonin Scalia, taking his seat on the Supreme Court.
Abstract: The magnetic pull of taxonomy is a well-worn feature of scholarship in the realm of statutory interpretation and beyond. Casting competing theories in bold relief and in terms of what separates them produces sharp and lively exchanges. And so it has been with textualism in statutory interpretation. The approach was once dubbed the “new textualism,” though presumably the moniker of novelty can be dropped now that twenty years have passed since textualism first appeared, close on the heels of its avatar, Justice Antonin Scalia, taking his seat on the Supreme Court. In those two decades, textualism has been set against intentionalism, purposivism, dynamic interpretation, pragmatism, and other worthy competitors in a vigorous normative debate. As part of this contest over interpretive first principles, Justices Scalia and Stephen Breyer have engaged one another repeatedly, and they show no sign of fatigue as they continue a long-running interpretive road show that has brought this debate to various venues and to C-SPAN viewers. The lines of



Journal Article
TL;DR: The author argues that courts should interpret statutes in light of background principles of law and field-specific canons of construction, and, using the Supreme Court's recent decision in Bilski v. Kappos, points out the dangers of relying on little more than the dictionary to interpret the patent statute.
Abstract: This symposium asks, “How much work does language do?” The answer these days is “too much.” Courts are letting statutory language do the work that used to be done by judges’ paying sensitive attention to context, history, policy, and background understandings. Or at least, they are apparently doing so — the even less appealing possibility is that courts are using statutory language as a cover for decisions reached on other grounds. I have long argued that part of the judicial function in statutory interpretation is to apply “background principles” of law, or “field-specific canons of construction.” Courts, in construing statutes, should — and do — discern the background principles of the area of law of which a statute is a part and interpret statutory text in light of them. Background principles of law frequently influence statutory interpretation, and in appropriate cases, the force of field-specific canons of construction may be so great as to cause courts to depart from apparently clear statutory text. Textualist interpreters, however, are pushing more and more in the direction of insistently following statutory text. Textualists are becoming increasingly radical, as they gradually realize that the accommodations they previously allowed in order to reach sensible results are inconsistent with fundamental textualist premises. This trend has resulted in the creation of a “naive textualism.” This mode of interpretation is not sharply differentiated from textualism per se, but is distinguished by its naive attitude that statutes can be best understood by simply looking up their words in a dictionary, applying a few canons of statutory construction, and eschewing other considerations. The Supreme Court recently provided an excellent example of its radical shift in the direction of naive textualism in the field of patent law. For decades — indeed, for centuries — patent law was a paradigm of richly contextualized judicial interpretation. Courts understood the sparse text of patent statutes in light of history, policy, and background understandings of the field of patent law. In the recent case of Bilski v. Kappos, however, the Supreme Court looked to little more than the dictionary in deciding fundamentally important questions under the patent statute. Bilski shows the dangers of language doing too much work.