
Showing papers in "Chicago-Kent Law Review" in 2004


Journal Article
TL;DR: German business contracts are much shorter than their American counterparts and avoid the worst excesses of legalese that American contracts are known for, yet they seem to work as well as United States contracts.
Abstract: German business contracts are much shorter than their American counterparts. They also avoid the worst excesses of legalese that American contracts are known for. But they seem to work as well as United States contracts. We seek to understand how German business contracts could do as much with fewer words. How well a contract works is not amenable to precise measurement. Still, characterizing German contracts and the business-contracting endeavor in Germany as working as well as their American counterparts seems reasonable as a working assumption. Certainly, there are no indications that Germany’s transactional sector has systematic defects relative to that of the United States. Transaction activity is vigorous—many deals are negotiated and consummated. And, as is the case in the United States, some transactions end up being litigated in court, but most do not. Our explanation is predicated on an account of what contracting does. Contracting aims to create a bigger transactional pie in a world where parties’ incentives are misaligned and they need to coordinate

30 citations



Journal Article
TL;DR: In this paper, the authors make the case for quantifying and monetizing fear and anxiety as a component of cost-benefit analysis, and explain in detail how to quantify and monetize them as part of a formal, regulatory cost-benefit analysis.
Abstract: Risk assessment is now a common feature of regulatory practice, but fear assessment is not. In particular, environmental, health and safety agencies such as EPA, FDA, OSHA, NHTSA, and CPSC, commonly count death, illness and injury as costs for purposes of cost-benefit analysis, but almost never incorporate fear, anxiety or other welfare-reducing mental states into the analysis. This is puzzling, since fear and anxiety are welfare setbacks, and since the very hazards regulated by these agencies - air or water pollutants, toxic waste dumps, food additives and contaminants, workplace toxins and safety threats, automobiles, dangerous consumer products, radiation, and so on - are often the focus of popular fears. Even more puzzling is the virtual absence of economics scholarship on the pricing of fear and anxiety, by contrast with the vast literature in environmental economics on pricing other intangible benefits such as the existence of species, wilderness preservation, the enjoyment of hunters and fishermen, and good visibility, and the large literature in health economics on pricing health states. This Article makes the case for fear assessment, and explains in detail how fear and anxiety should be quantified and monetized as part of a formal, regulatory cost-benefit analysis. I propose, concretely, that the methodology currently used to quantify and monetize light physical morbidities, such as headaches, coughs, sneezes, nausea, or shortness of breath, should be extended to fear. The change in total fear-days resulting from regulatory intervention to remove or mitigate some hazard - like the change in total headache-days, cough-days, etc. - should be predicted and then monetized at a standard dollar cost per fear-day determined using contingent-valuation interviews. Part I of the Article rebuts various objections to fear assessment.
Deliberation costs are a worry, but this is not unique to fear, and can be handled through threshold rules specifying when the expected benefits of fear assessment appear to outweigh the incremental deliberation costs of quantifying and monetizing fear. Other objections reduce to the deliberation-cost objection, or are misconceived: irrational as well as rational fears are real harms for those who experience them; fear can be quantified; worries about uncertainty and causality reduce to deliberation costs; the possibility of reducing fear through information rather than prescription means that agencies should look at a wider range of policy options, not that they should evaluate options without considering fear costs; the concern that the very practice of fear assessment will, on balance, increase fear by creating stronger incentives for fear entrepreneurs seems overblown; and fear is a welfare setback, whether or not it flows from political views. Part II argues for what I call the unbundled valuation of fear. A given hazard may cause a package of harms for each individual exposed to it: the fear of the hazard, an incremental risk of death, and perhaps other harms. Why not, then, price the harms as a package? In particular, why not predict the deaths caused by a given hazard and avoided by regulatory intervention, and multiply those deaths by a value of statistical life (VOSL) number designed to incorporate a fear premium? For reasons explored in Part II, the bundled valuation of fear and death, through fear premia attached to VOSLs or in other ways, is misguided. Rather, these two kinds of harms should be separately quantified and monetized. Parts III and IV consider how agencies should monetize fear states. How should the price of a fear-day be determined? Part III argues that contingent-valuation techniques are more appropriate, here, than revealed-preference techniques. Part IV discusses the design of contingent-valuation interviews for pricing fear.
The instrumental as well as intrinsic costs of fear may need to be accounted for; the respondents to these interviews should, optimally, be calm rather than fearful; and interviews designed to secure a QALY valuation of fear can also be useful, but only if the QALY scale is understood as a welfare scale and calibrated in an inclusive way. In arguing for fear assessment as a component of cost-benefit analysis, this Article contributes to the literature on cost-benefit analysis and also stakes out a novel position in scholarly debates about risk regulation. One standard view (call it the simple technocratic view) argues that popular fear should play no role in determining regulatory choice; instead, regulators should focus on minimizing or achieving technologically feasible or cost-justified reductions in death, illness and injury. A standard opposing view (call it the populist view) is that popular perceptions of the riskiness of hazards, in turn substantially influenced by how feared or dreaded the hazards are, should be determinative. The account presented in this Article is technocratic, not populist; risk regulators should seek to maximize social welfare, and cost-benefit analysis is a technocratic tool for doing just that. Yet technocratic risk regulation need not focus narrowly on mortality and morbidity. It should focus (prima facie) on all constituents of welfare, including fear and anxiety. Although popularly perceived risk should not determine risk regulation, since the fear and anxiety that drive popular risk perception are simply one welfare impact among the multitude of costs and benefits flowing from hazards, neither should risk regulation reduce to counting deaths or injuries - to a crude minimization of physical impacts or a simplistic balancing in which death- and injury-reduction are the sole regulatory benefits that are seen to counterbalance compliance costs.

21 citations



Journal Article
TL;DR: The authors explore pernicious ambiguity, an interpretive problem that is not adequately acknowledged by the legal system, and offer some explanations based on advances in linguistics, cognitive psychology and the philosophy of language.
Abstract: This Article explores pernicious ambiguity, an interpretive problem that is not adequately acknowledged by the legal system. Pernicious ambiguity occurs when the various actors involved in a dispute all believe a text to be clear, but assign different meanings to it. Depending upon how the legal system handles this situation, a case with pernicious ambiguity can easily become a crapshoot. If the judge does not take heed of the competing interpretations as reflecting a lack of clarity, and if that judge happens to understand the document in a way helpful to a particular party, that party wins. Because the document is not seen as ambiguous, the document is declared clear. In reality, however, the document is even less clear than are ambiguous documents. The competing interpretations reflect a complete communicative breakdown. If language worked so poorly generally, it would not be possible to have a language-driven rule of law at all. The problem, perhaps ironically, is that the concept of ambiguity is itself perniciously ambiguous. People do not always use the term in the same way, and the differences often appear to go unnoticed. While all agree that ambiguity occurs when language is reasonably susceptible to different interpretations, people seem to differ with respect to whether those interpretations have to be available to a single person, or whether ambiguity occurs when different speakers of the language do not understand a particular passage the same way. Often, courts even ignore disagreement among judges as irrelevant to whether a document is clear on its face. This Article will show how these different notions of ambiguity emerge, and offer some explanations based on advances in linguistics, cognitive psychology and the philosophy of language. Examples are taken from cases concerning the interpretation of statutes, contracts and insurance policies.

13 citations






Journal Article
TL;DR: The authors argued that the transformation of the professional class from a self-employed group to salaried employee status has subjected professionals to traditional strategies of managerial control, namely ideological and technical de-skilling.
Abstract: For professionals, work constitutes personal identity and confers a relatively privileged class status. Professionals have historically claimed an entitlement to autonomy at work and a privilege to self-regulate through codes of ethics enforced by professional peers. As relatively powerful workers, most professionals shunned union membership because it was perceived as the antithesis of professional privilege, values and status. Recently, however, professionals have been joining unions in record numbers: physicians, interns and residents, graduate students, legal services lawyers, and even administrative law judges have sought the protection of unions, and both the AMA and the ABA now endorse unionization for their members. This paper examines the market forces which have contributed to this trend, and argues that the transformation of the professional class from a self-employed group to salaried employee status has subjected professionals to traditional strategies of managerial control, namely ideological and technical de-skilling. The threat to professionals' privilege and identity is, I suggest, the impetus for organizing. The paper also assesses recent Supreme Court doctrine which threatens to obliterate the coverage of professionals under the National Labor Relations Act, and suggests a reconceptualization of professional status as a commodity collectively owned by the profession which entails control over the labor process. Focusing on the loss of key attributes of professional status is key to defining professionals as covered employees under the Act rather than as excluded supervisors: in effect, modern managerial strategies are deprofessionalizing the professions.

9 citations


Journal Article
TL;DR: The authors argue that the most important differences between the two theories rest on particular understandings of what the harms of crime actually are and whether given punishments address them, and contend that punishment policy debates should bypass the punishment philosophy stage altogether and focus directly on contested views about harms.
Abstract: Consequentialists believe punishment can only be justified to the extent that it serves a particular goal - generally of reducing wrongdoing. Retributivists believe that punishment serves as an end (and a good) in itself, by 'answering' wrongdoing and giving a voice to society's norms and moral edicts. This Article aims to demonstrate, however, that dividing up the world of punishment theory in this way is not especially useful. By laying out the underlying assumptions of these theories (something infrequently done), we reveal not only several surprising and fundamental similarities, but we also make clear that the most important differences between the two theories rest on particular understandings of what the harms of crime actually are, and whether given punishments address them. Once people specify which harms are in dispute in a particular policy debate, speaking in terms of consequentialist and retributivist theories adds little, if anything, to the discussion. In fact, it tends to obscure the real issues in contest, and makes it more difficult to see which punishment policies will best redress the harms of crime. To improve both clarity of the discussion and the effectiveness of ultimate solutions, we argue that punishment policy debates should bypass the punishment philosophy stage altogether, and focus directly on contested views about harms. Using philosophical and empirical findings, we describe what this sort of debate would look like both in the abstract and through concrete examples.

Journal Article
TL;DR: The legal structure of these political trusteeships has varied widely, and only in East Timor has the political trustee made a relatively clean exit.
Abstract: Transforming nondemocratic states is one of the great challenges of the twenty-first century. Since the official end of the U.N. trusteeship system and the subsequent end of the Cold War, the international community has intervened in Bosnia, Kosovo, East Timor, Afghanistan, and Iraq to set up "political trusteeships," under which the international community exercises powers traditionally associated with sovereignty. The legal structure of these political trusteeships has varied widely. Only in East Timor has the political trustee made a relatively clean exit.



Journal Article
TL;DR: In this paper, the authors explore the role of the First Amendment in the protection of children from harmful cultural materials (such as Internet pornography and violent movies), asking what difference historical context makes for the issue at hand and whether all minors are to be treated the same.
Abstract: When freedom of speech comes into conflict with the protection of children, how should this conflict be resolved? What principles should guide such deliberations? Can one rely on parents and educators (and more generally on voluntary means) to protect children from harmful cultural materials (such as Internet pornography and violent movies) or is government intervention necessary? What difference does historical context make for the issue at hand? Are all minors to be treated the same? What is the scope of the First Amendment rights of children in the first place? These are the questions here explored.

Journal Article
TL;DR: The focus of the impeachment proceedings was that Clinton perjured himself and engaged in obstruction of justice; the author examines whether he committed perjury, and in particular whether he lied when he denied having a sexual relationship with a White House intern, Monica Lewinsky.
Abstract: With the impeachment proceedings against President Clinton now well behind us, we can step back and consider the matter somewhat more dispassionately. The focus of the impeachment hearings was that Clinton perjured himself and engaged in obstruction of justice. I limit my observations to the question of whether he committed perjury, and in particular whether he lied when he denied having a sexual relationship with a White House intern, Monica Lewinsky. When Clinton was first asked during a deposition whether he had ever had an affair or sexual relationship with Lewinsky, he quite explicitly denied it. He was asked about his denials during a second legal proceeding - his testimony before a grand jury - when he was again placed under oath. Clinton insisted that his denials were true based on the ordinary understanding of these terms. In other words, he appealed to usage of that phrase in the speech community. His lawyers during the impeachment made similar arguments on the basis of dictionary definitions. Because there seems to be a great deal of variation in how people use this phrase, I will argue that Clinton's defenders were largely correct on this point. The lawyers examining the president were obviously aware of the dangers of using such a slippery term, so they introduced a definition of sexual relations into evidence during the deposition and then asked Clinton whether, under that rather convoluted definition, he had engaged in sexual relations with Lewinsky. Clinton again denied having done so, but was later forced to admit to at least some sexual activity with the former intern. During the subsequent grand jury proceedings he was also interrogated on his denials of having sexual relations, as defined. His defense consisted of an extremely literalistic dissection of the words of the definition. I will suggest that a large part of the problem is that the definition had largely been textualized. 
A result of textualization is that the resulting text invites a very literal and sometimes even hypertechnical interpretation, and Clinton was only too happy to comply.

Journal Article
TL;DR: The Americans with Disabilities Act (ADA) as discussed by the authors takes clear aim at a pervasive and enduring societal problem: prejudice against the disabled, and it is intended to eliminate disability discrimination in all facets of society, including the workplace.
Abstract: The Americans with Disabilities Act (“ADA”) takes clear aim at a pervasive and enduring societal problem: prejudice against the disabled. The Act reflects an unambiguous congressional intent to eliminate disability discrimination in all facets of society, including the workplace. Before passage of the ADA, permanently disabled individuals had difficulty obtaining employment and those who became disabled while employed frequently were terminated. These conditions resulted in high unemployment rates for the disabled, virtually guaranteeing that many lived out their lives trapped in cycles of poverty and social dependence. Title I of the ADA seeks to disrupt these cycles by mandating that employers take affirmative steps to employ and retain “quali-







Book Chapter DOI
TL;DR: The functional school of law and economics draws on public choice theory and the constitutional paradigm of the Virginia school of economics to design legal meta-rules that lead to efficiency ex ante.
Abstract: During its relatively short history, the law and economics movement has developed three distinct schools of thought. The first two schools of thought—the positive school and the normative school— developed almost concurrently. The positive school, historically associated with the early contributions of the Chicago school, restricts itself to the descriptive study of the incentives produced by the legal system largely because its adherents believe that efficient legal rules evolve naturally. On the other hand, the normative school, historically associated with the early contributions of the Yale school, sees the law as a tool for remedying “failures” that arise in the market. The subsequently developed functional school of law and economics draws from public choice theory and the constitutional paradigm of the Virginia school of economics, and offers a third perspective which is neither fully positive nor fully normative. Recognizing that there are structural forces that often impede the development of efficient legal rules, the functional school allows for the possibility of using insights from public choice economics to remedy faulty legal rules at a meta-level. However, unlike the normative school, the functional school also recognizes that there are failures in the political market that make it unlikely that changes will be made on a principled basis. Also, because it is difficult to identify all of the ultimate consequences of corrective legal rules, the functional school focuses on using economic theory to design legal meta-rules that lead to efficiency ex ante. Achieving this ex ante efficiency requires the

Journal Article
TL;DR: In this paper, the authors scrutinize Congress's recent efforts to regulate access to sexually-themed Internet speech and make recommendations as to how libraries can implement CIPA in a manner that protects both adults' and minors' free speech rights.
Abstract: In this article, I scrutinize Congress's recent efforts to regulate access to sexually-themed Internet speech. The first such effort, embodied in the Communications Decency Act, failed to take into account the Supreme Court's carefully-honed obscenity and obscenity-for-minors jurisprudence. The second, embodied in the Child Online Protection Act, attended carefully to Supreme Court precedent, but failed to account for the geographic variability in definitions of obscene speech. Finally, the recently-enacted Children's Internet Protection Act apparently remedies the constitutional deficiencies identified in these two prior legislative efforts, but runs the risk of being implemented in a manner that fails to protect either adults' or minors' right to access protected expression. Although CIPA recently withstood a facial attack on its constitutionality, it is likely that this statute will confront as-applied challenges. I analyze the technology and the First Amendment doctrines at issue in CIPA's implementation, and set forth recommendations as to how libraries can implement CIPA in a manner that protects both adults' and minors' free speech rights.

Journal Article
TL;DR: In this paper, the authors argue that contrived offenses are at best proxies for the independent crimes that legitimate the investigation, and that the rules of evidence should be reformed in ways that will motivate prosecutors to put evidence of contrived offenses to its best use: proving independent crimes.
Abstract: It is well-known that undercover investigations influence and sometimes distort the crimes they seek to expose. This is the problem that the entrapment defense is designed to address. What has not yet been recognized, however, is that the investigator's influence on crimes is also a problem for the law of evidence. Undercover operations can be used to gather information on a crime shaped or prompted by the investigator (which I term a contrived offense) as well as on wrongdoing that occurs without government intervention (which I call an independent crime). The ease of documenting the former tempts investigators to forego the arduous task of proving the latter. Yet the same evidence that proves a contrived offense may also corroborate an independent crime. This article argues that contrived offenses are at best proxies for the independent crimes that legitimate the investigation. Because of the investigator's influence, contrived offenses are inherently flawed substitutes for independent crimes. Invoking best evidence principles, this article argues that the rules of evidence should be reformed in ways that will motivate prosecutors to put evidence of contrived offenses to its best use: proving independent crimes.