
Showing papers by "Cass R. Sunstein published in 2008"


Book
08 Apr 2008
TL;DR: In Nudge, as discussed by the authors, Thaler and Sunstein argue that human beings are susceptible to various biases that can lead them to blunder and make bad decisions involving education, personal finance, health care, mortgages and credit cards, the family, and even the planet itself.
Abstract: A groundbreaking discussion of how we can apply the new science of choice architecture to nudge people toward decisions that will improve their lives by making them healthier, wealthier, and more free. Every day, we make decisions on topics ranging from personal investments to schools for our children to the meals we eat to the causes we champion. Unfortunately, we often choose poorly. Nobel laureate Richard Thaler and legal scholar and bestselling author Cass Sunstein explain in this important exploration of choice architecture that, being human, we all are susceptible to various biases that can lead us to blunder. Our mistakes make us poorer and less healthy; we often make bad decisions involving education, personal finance, health care, mortgages and credit cards, the family, and even the planet itself. In Nudge, Thaler and Sunstein invite us to enter an alternative world, one that takes our humanness as a given. They show that by knowing how people think, we can design choice environments that make it easier for people to choose what is best for themselves, their families, and their society. Using colorful examples from the most important aspects of life, Thaler and Sunstein demonstrate how thoughtful "choice architecture" can be established to nudge us in beneficial directions without restricting freedom of choice. Nudge offers a unique new take, from neither the left nor the right, on many hot-button issues, for individuals and governments alike. This is one of the most engaging and provocative books to come along in many years.

7,772 citations


Journal ArticleDOI
16 May 2008 - Science
TL;DR: The ability of groups of people to make predictions is a potent research tool that should be freed of unnecessary government restrictions.
Abstract: The ability of groups of people to make predictions is a potent research tool that should be freed of unnecessary government restrictions.

351 citations


Journal ArticleDOI
TL;DR: In the face of a low-probability fearsome risk, people often exaggerate the benefits of preventive, risk-reducing, or ameliorative measures as mentioned in this paper.
Abstract: Fearsome risks are those that stimulate strong emotional responses. Such risks, which usually involve high consequences, tend to have low probabilities, since life today is no longer nasty, brutish and short. In the face of a low-probability fearsome risk, people often exaggerate the benefits of preventive, risk-reducing, or ameliorative measures. In both personal life and politics, the result is damaging overreactions to risks. We offer evidence for the phenomenon of probability neglect, in which people fail to distinguish between high-probability and low-probability risks. Action bias is a likely result.

172 citations


Journal ArticleDOI
TL;DR: This article found that many groups make their decisions through some process of deliberation, usually with the belief that deliberation will improve judgments and predictions, but such groups often fail, in the sense that they make judgments that are false or that fail to take advantage of the information that their members have.
Abstract: Many groups make their decisions through some process of deliberation, usually with the belief that deliberation will improve judgments and predictions. But deliberating groups often fail, in the sense that they make judgments that are false or that fail to take advantage of the information that their members have. There are four such failures. (1) Sometimes the predeliberation errors of group members are amplified, not merely propagated, as a result of deliberation. (2) Groups may fall victim to cascade effects, as the judgments of initial speakers or actors are followed by their successors, who do not disclose what they know. Nondisclosure, on the part of those successors, may be a product of either informational or reputational cascades. (3) As a result of group polarization, groups often end up in a more extreme position in line with their predeliberation tendencies. Sometimes group polarization leads in desirable directions, but there is no assurance to this effect. (4) In deliberating groups, shared information often dominates or crowds out unshared information, ensuring that groups do not learn what their members know. All four errors can be explained by reference to informational signals, reputational pressure, or both. A disturbing result is that many deliberating groups do not improve on, and sometimes do worse than, the predeliberation judgments of their average or median member.

129 citations


Journal ArticleDOI
TL;DR: The concept of irreversibility plays a large role in the theory and practice of environmental protection, as discussed in this paper; indeed, the concept is explicit in some statements of the Precautionary Principle, yet it remains poorly defined.
Abstract: The concept of "irreversibility" plays a large role in the theory and practice of environmental protection. Indeed, the concept is explicit in some statements of the Precautionary Principle. But the idea of irreversibility remains poorly defined. Because time is linear, any loss is, in a sense, irreversible. On one approach, drawn from environmental economics, irreversibility might be understood as a reference to the value associated with taking precautionary steps that maintain flexibility for an uncertain future ("option value"). On another approach, drawn from environmental ethics, irreversibility might be understood to refer to the qualitatively distinctive nature of certain environmental harms - a point that raises a claim about incommensurability. The two conceptions fit different problems. For example, the idea of option value best fits the problem of climate change; the idea of qualitatively distinctive harms best fits the problem of extinction of endangered species. These ideas can be applied to a wide assortment of environmental problems.

103 citations


Posted Content
TL;DR: The Court's decision in District of Columbia v. Heller, as discussed in this paper, might be seen as a modern version of Marbury v. Madison, speaking neutrally for the text, structure, and original understanding of the Constitution - though the author argues that Griswold v. Connecticut is the more illuminating analogy.
Abstract: The Court's decision in District of Columbia v. Heller might be taken in three different ways. First, it might be seen as a modern version of Marbury v. Madison, speaking neutrally for the text, structure, and original understanding of the Constitution. Second, it might be seen as analogous to Lochner v. New York, in which a majority of the Court invoked a dubious understanding of the Constitution in order to override the democratic will. Third, it might be taken as analogous to Griswold v. Connecticut, in which a majority of the Court, proceeding in minimalist fashion, used the Constitution to vindicate the contemporary judgments of a national majority. It is true that in emphasizing constitutional text and structure, the Court spoke in terms close to those in Marbury; indeed, Heller is the most self-consciously originalist opinion in the history of the Supreme Court. It is also true that many historians reject the Court's understanding of the Second Amendment, making it plausible to see the ruling as a modern incarnation of Lochner. But the timing and context of the decision suggest that Griswold is the most illuminating analogy. In both cases, the Court spoke on behalf of the contemporary sentiment of a national majority against a national outlier. The claimed analogy between Griswold and Heller fits well with the fact that Heller is a narrow ruling with strong minimalist elements. No less than the right of privacy, and notwithstanding the backward-looking nature of the Court's opinion, the right to have guns is likely to evolve over time through case-by-case judgments made under the influence of contemporary social commitments.

93 citations


Posted Content
TL;DR: This paper argued that adolescent risk-taking is often driven by the social meaning of risk and caution, and that social meaning operates as a tax on or a subsidy to behavior, and argued that changes in social meaning present a serious collective action problem, but also a valuable opportunity for both law and policy.
Abstract: Why do adolescents take risks? What is the appropriate response to adolescent risk-taking? This Commentary for a special issue of Developmental Review, discussing a set of papers in that issue, explores these questions with attention to changes in the adolescent brain, to dual-processing theory, to social influences, and to fuzzy-trace theory. It contends that adolescent risk-taking is often driven by the social meaning of risk and caution, and that social meaning operates as a tax on or a subsidy to behavior. Changes in social meaning present a serious collective action problem, but also a valuable opportunity for both law and policy.

72 citations


Journal ArticleDOI
TL;DR: In this paper, a comprehensive study of relevant courts of appeals decisions in the aftermath of the 9/11 attacks is presented, showing that the invalidation rate is about 15 percent - low, but not so low as to suggest that federal courts have applied a broad rule of deference to government action.
Abstract: Many people believe that when national security is threatened, federal courts should defer to the government. Many other people believe that in times of crisis, citizens are vulnerable to a kind of "panic" that leads to unjustified intrusions on liberty. But to date, there is little information about what federal courts have actually done in this domain, especially in the period after the attacks of September 11, 2001. On the basis of a comprehensive study of relevant courts of appeals decisions in the aftermath of those attacks, this essay offers four findings. First, the invalidation rate is about 15 percent - low, but not so low as to suggest that federal courts have applied a broad rule of deference to government action. Second, the division between Republican and Democratic appointees is comparable to what is found in other areas of the law; contrary to reasonable expectations, there is no significant "compression" of ideological divisions in this domain. Third, and perhaps most strikingly, no panel effects are apparent here. Unlike in the vast majority of other areas, Republican and Democratic appointees do not appear to vote differently if they are sitting with Republican or Democratic appointees. Finally, judicial behavior cannot be shown to have changed over time. The invalidation rate is not higher in recent years than it was in the years immediately following the 9/11 attacks. Explanations are ventured for these various findings, with particular reference to the absence of discernible panel effects.

66 citations


Journal ArticleDOI
TL;DR: In this article, the authors argue that discounting is a means of taking account of opportunity costs, that a refusal to discount may well hurt rather than help future generations, and that the proper response is to make investments that will help those generations, not to refuse to discount.
Abstract: Some of the most important disagreements about how aggressively to respond to the threat of climate change turn on the choice of the discount rate. A high discount rate implies relatively modest and slow reductions; a low discount rate implies immediate and dramatic action. The debate between the two sides reflects a disagreement between the positivists, who argue for a market rate, and the ethicists, who urge that the positivist approach violates the duty of the present to the future. We argue that the positivists are largely right, and that the question of discounting should be separated from the question of the ethical duties of the present. Discounting is a means of taking account of opportunity costs, and a refusal to discount may well hurt, rather than help, future generations. Nonetheless, it is also possible that cost-benefit analysis with discounting will impose excessive harms on future generations. If so, the proper response is to make investments that will help those generations, not to refuse to discount. We also explore several questions on which the ethicists' legitimate objections require qualification of the positivists' arguments, justifying a low discount rate for climate change policy.

63 citations
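Why the choice of discount rate is so decisive in the abstract above can be seen from the standard present-value formula; the figures below are purely illustrative and are not drawn from the paper:

$$PV = \frac{FV}{(1+r)^{t}}$$

$$\frac{\$1{,}000\ \mathrm{bn}}{(1.07)^{100}} \approx \$1.2\ \mathrm{bn} \qquad\qquad \frac{\$1{,}000\ \mathrm{bn}}{(1.014)^{100}} \approx \$250\ \mathrm{bn}$$

At a market-style rate of 7 percent, a climate benefit worth $1 trillion a century from now has a present value of only about $1.2 billion, which supports relatively modest and slow abatement; at a low rate such as 1.4 percent, the same benefit is worth roughly $250 billion today, which supports far more immediate and dramatic action.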


Journal ArticleDOI
TL;DR: The authors argue that adolescents take unjustified risks, often because of the weakness of their analytic systems, which provide an inadequate check on impulsive or ill-considered decisions.

61 citations


Posted Content
TL;DR: Under the Occupational Safety and Health Act, the Secretary of Labor is authorized to issue whatever standards are reasonably necessary or appropriate to provide safe or healthful places of employment, as discussed in this paper.
Abstract: Under the Occupational Safety and Health Act, the Secretary of Labor is authorized to issue whatever standards are reasonably necessary or appropriate to provide safe or healthful places of employment. More than any other provision in federal regulatory law, this language is subject to a plausible nondelegation challenge, because it seems to ask the Secretary to choose among a wide array of intelligible principles for standard-setting. The constitutional challenge raises serious and unresolved questions for both regulatory policy and administrative law. In answering those questions, courts have three principal alternatives. The most aggressive approach would be to invalidate the statute in the hopes of encouraging, for the first time, sustained legislative deliberation about the proper content of occupational safety and health policy. The most modest approach, rooted in the Avoidance Canon, would be to construe the statutory language to produce floors and ceilings on agency action; that approach would require the Secretary to ban significant risks while forbidding the Secretary from regulating trivial or de minimis risks and also requiring the Secretary to show that any regulations are feasible. The third and preferable approach, also rooted in the Avoidance Canon, would be to construe the statute so as to require the agency to engage in a form of cost-benefit balancing. Such a construction would have the advantage of promoting greater transparency and accountability at the agency level. At the same time, it would raise difficult questions about the precise nature of such balancing in the context of occupational safety policy and also about legal constraints on agency assessment of both costs and benefits. Because of the distinctive nature of workplace safety, the best approach would give the agency considerable flexibility on questions of valuation while also permitting serious attention to distributional factors.

Journal Article
TL;DR: The New Legal Realism has clear jurisprudential implications, bearing as it does on competing accounts of legal reasoning, including Ronald Dworkin's suggestion that such reasoning is a search for "integrity".
Abstract: The last decade has witnessed the birth of the New Legal Realism—an effort to go beyond the old realism by testing competing hypotheses about the role of law and politics in judicial decisions, with reference to large data sets and statistical analysis. The New Legal Realists have uncovered a Standard Model of Judicial Behavior, demonstrating significant differences between Republican appointees and Democratic appointees, and showing that such differences can be diminished or heightened by panel composition. The New Legal Realists have also started to find that race, sex, and other demographic characteristics sometimes have effects on judicial judgments. At the same time, many gaps remain. Numerous areas of law remain unstudied; certain characteristics of judges have yet to be investigated; and in some ways, the existing work is theoretically thin. The New Legal Realism has clear jurisprudential implications, bearing as it does on competing accounts of legal reasoning, including Ronald Dworkin’s suggestion that such reasoning is a search for “integrity.” Discussion is devoted to the relationship between the New Legal Realism and some of the perennial normative questions in administrative law. In 1931, Karl Llewellyn attempted to capture the empirical goals of the legal realists by referring to early “efforts to capitalize the wealth of our reported cases to make large-scale quantitative studies of facts and outcome.” Llewellyn emphasized the “hope that these might develop lines of prediction more sure, or at least capable of adding further certainty to the predictions based as hitherto on intensive study of smaller bodies of cases.” But Llewellyn added, with apparent embarrassment: “I know of no published results.” We are in the midst of a flowering of “large-scale quantitative studies of facts and outcome,” with numerous published results. The relevant studies have produced a New Legal Realism—an effort to understand the sources of judicial decisions on the basis of …

Journal ArticleDOI
TL;DR: In this paper, the authors argue that per capita allocations of emissions rights do not take into account all the effects of a climate treaty, and that such allocations do a poor job of balancing welfare and fairness goals with feasibility constraints.
Abstract: Many people believe that the problem of climate change would be best handled by an international agreement that includes a system of cap and trade. Such a system would impose a global cap on greenhouse gas emissions and allocate tradable emissions permits. This proposal raises a crucial but insufficiently explored question: How should such permits be allocated? It is tempting to suggest that in principle, allocation should be done on a per capita basis, with the idea that each person should begin with the same entitlement, regardless of place of birth. This idea, pressed by many analysts and by the developing world, can be defended on grounds of either welfare or fairness. But on both grounds, per capita allocations run into serious objections. If fairness is understood in terms of equally or proportionally sharing the burdens of a climate treaty, per capita allocations are not fair because they do not take into account all the effects of such a treaty. Any agreement to reduce greenhouse gas emissions will give more benefits to some nations than to others, and will impose more costs on some nations than on others; in these circumstances, per capita emissions rights give the appearance but not the reality of fairness. For those who seek redistribution to those who need help, on grounds of either welfare or fairness, per capita allocations of emissions rights are at best a mixed blessing. Some rich nations are highly populated, and some poor nations have small populations; there is essentially no relationship between size of population and per capita wealth. Per capita allocations would also create serious incentive problems, and they would face decisive objections from the standpoint of feasibility: Per capita rights would transfer hundreds of billions of dollars annually from the United States to China and India, and the United States is most unlikely to sign a treaty with that consequence. Comparisons are drawn between per capita allocations and other approaches, including those based on existing emissions rates and those with self-conscious redistributive aims. A general goal is to balance welfarist and fairness goals with feasibility constraints; per capita allocations do a poor job of achieving that balance, and an insistence on that approach might make the climate change problem intractable. These conclusions have general implications for thinking about normative goals and practical limitations in the context of international law.


Journal Article
TL;DR: The authors find that judges' policy preferences affect judicial decisions about whether agency decisions are "arbitrary" or "capricious": Republican appointees are far more likely to invalidate, as arbitrary, liberal agency decisions than conservative ones, and Democratic appointees show the mirror-image pattern.
Abstract: The Administrative Procedure Act instructs federal courts to invalidate agency decisions that are “arbitrary” or “capricious.” In its 1983 decision in the State Farm case, the Supreme Court firmly endorsed the idea that arbitrariness review requires courts to take a “hard look” at agency decisions. The hard look doctrine has been defended as a second-best substitute for insistence on the original constitutional safeguards; close judicial scrutiny is said to discipline agency decisions and to constrain the illegitimate exercise of discretion. In the last two decades, however, hard look review has been challenged on the plausible but admittedly speculative ground that judges’ policy preferences affect judicial decisions about whether agency decisions are “arbitrary.” This study, based on an extensive data set, finds that the speculation is correct. Democratic appointees are far more likely to vote to invalidate, as arbitrary, conservative agency decisions than liberal agency decisions. Republican appointees are far more likely to invalidate, as arbitrary, liberal agency decisions than conservative agency decisions. Significant panel effects are also observed. Democratic appointees show especially liberal voting patterns on all-Democratic panels; Republican appointees show especially conservative voting patterns on all-Republican panels. Our central findings do not show that judicial votes are dominated by political considerations, but they do raise grave doubts about the claim that hard look review is operating as a neutral safeguard against the errors and biases of federal agencies. Because judicial policy commitments are playing a large role, there is a strong argument for reducing the role of those commitments, and perhaps for softening hard look review.

Journal ArticleDOI
TL;DR: The debate between minimalists and their adversaries is closely related to the debate between those who prefer standards and those who prefer rules, though there are some important differences; as the authors discuss, minimalism is hard to justify in many cases.
Abstract: Many judges are minimalists. They favor rulings that are narrow, in the sense that they govern only the circumstances of the particular case, and also shallow, in the sense that they do not accept a deep theory of the legal provision at issue. In law, narrow and shallow decisions have real advantages insofar as they reduce both decision costs and error costs; make space for democratic engagement on fundamental questions; and reflect a norm of civic respect. In many cases, however, minimalism is hard to justify in these ways. Sometimes small steps increase the aggregate costs of decisions; sometimes they produce large errors, especially when they export decision-making burdens to fallible people. Predictability is an important variable, and minimalist decisions can compromise predictability. Sometimes large, nonminimalist steps serve democratic values and do not compromise the norm of civic respect. It follows that the justifications for minimalism are unconvincing in many contexts. The debate between minimalists and their adversaries is closely related to the debate between those who prefer standards and those who prefer rules, though there are some important differences.


Journal ArticleDOI
TL;DR: A large body of empirical evidence demonstrates that judicial review of agency action is highly politicized, in the sense that Republican appointees are significantly more likely to invalidate liberal agency decisions than conservative ones as discussed by the authors.
Abstract: A large body of empirical evidence demonstrates that judicial review of agency action is highly politicized, in the sense that Republican appointees are significantly more likely to invalidate liberal agency decisions than conservative ones, while Democratic appointees are significantly more likely to invalidate conservative agency decisions than liberal ones. These results hold for both (a) judicial review of agency interpretations of law and (b) judicial review of agency decisions for "arbitrariness" on questions of policy and fact. On the federal courts of appeals, the most highly politicized voting patterns are found on unified panels, that is, on panels consisting solely of either Democratic or Republican appointees. On the Supreme Court, politicized administrative law is also unmistakable, as the more conservative justices show a distinctive willingness to vote to invalidate liberal agency decisions, and the more liberal justices show a distinctive willingness to vote to invalidate conservative agency decisions. Indeed, it is possible to "rank" justices in terms of the extent to which their voting patterns are politicized. The empirical results raise an obvious question: What might be done to depoliticize administrative law? Three sets of imaginable solutions have promise: (1) self-correction without formal doctrinal change, produced by a form of "debiasing" that might follow from a clearer judicial understanding of the current situation; (2) doctrinal innovations, as, for example, through rethinking existing deference principles and giving agencies more room to maneuver; and (3) institutional change, through novel voting rules and requirements of mixed panels. An investigation of these solutions has implications for other domains in which judges are divided along political lines, and indeed in which nonjudicial officials show some kind of politicized division or bias.


Posted Content
TL;DR: In this article, the authors investigate considerations of distributive and corrective justice in the context of climate change policy and conclude that the case for greenhouse gas emissions restrictions does not rest on those considerations; climate change does not fit the standard conception of tort.
Abstract: This article investigates considerations of distributive and corrective justice in the context of climate change policy. The authors accept that there are good reasons for greenhouse gas emissions restrictions, but those reasons do not include concerns about distributive and corrective justice. It is unclear that those restrictions are the best way to help the most disadvantaged people in the world, and climate change does not fit the standard conception of tort.


Journal Article
TL;DR: In this paper, it is assumed that Congress creates a federal agency to deal with a large problem, one that involves a significant part of the national economy, and that Congress instructs the agency: Do what you believe is best.
Abstract: Imagine that Congress creates a federal agency to deal with a large problem, one that involves a significant part of the national economy. Suppose that Congress instructs the agency: Do what you believe is best. Act reasonably and appropriately. Adopt the legal standard that you prefer, all things considered. Suppose, finally, that these instructions lack clear contextual referents, such as previous enactments or judicial understandings, on which the agency might build.


Journal Article
TL;DR: Heller, as discussed in this paper, is the most explicitly and self-consciously originalist opinion in the history of the United States Supreme Court; it might be seen as a modern incarnation of Marbury v. Madison, though the author argues that Griswold v. Connecticut is the more illuminating analogy.
Abstract: District of Columbia v. Heller is the most explicitly and self-consciously originalist opinion in the history of the Supreme Court. Well over two hundred years since the Framing, the Court has, for essentially the first time, interpreted a constitutional provision with explicit, careful, and detailed reference to its original public meaning. It would be possible, in this light, to see Heller as a modern incarnation of Marbury v. Madison, at least as that case is understood by some contemporary scholars, and to a considerable extent as Chief Justice John Marshall wrote it. In Marbury, the Court also spoke on behalf of what it took to be the text, structure, and original meaning of the Constitution. On one view, Heller represents the full flowering of the approach that Chief Justice Marshall imperfectly inaugurated - one that has been abandoned at crucial periods in American history. To its defenders, Heller speaks honestly and neutrally on behalf of the original meaning, and it should be appreciated and applauded for that reason. But there is a radically different reading of Heller. The constitutional text is ambiguous, and many historians believe that the Second Amendment does not, in fact, create a right to use guns for nonmilitary purposes. In their view, the Court's reading is untrue to the relevant materials. If they are right, then it is tempting to understand Heller not as Marbury but as a modern incarnation of Lochner v. New York, in which the Court overrode democratic judgments in favor of a dubious understanding of the Constitution. On this view, it is no accident that the five-Justice majority in Heller consisted of the most conservative members of the Court (who were all Republican appointees). Perhaps Heller is, in the relevant sense, a twenty-first-century version of Lochner-style substantive due process, and perhaps it marks the beginning of a long series of confrontations between the Supreme Court and the political branches. On a third view, this characterization badly misses the mark. Heller is more properly characterized as a rerun of the minimalist ruling in Griswold v. Connecticut. In Griswold, the Court struck down a Connecticut law banning the use of contraceptives by married couples, under circumstances in which the Connecticut law was plainly inconsistent with a national consensus. The Court worked hard to support its decision by reference to the standard legal materials, but the national consensus probably provides the best explanation of what the Court did. Perhaps Heller is closely analogous. The Court spoke confidently in terms of the original meaning, but perhaps its ruling is impossible to understand without attending to contemporary values, and in particular to the fact that the provisions that the Court invalidated were national outliers. In this Comment, I contend that the third view is largely correct, and that Heller will, in the fullness of time, be seen as embracing a kind of Second Amendment minimalism. Notwithstanding the Court's preoccupation with constitutional text and history, Heller cannot be adequately understood as an effort to channel the document's original public meaning. The Court may have been wrong on that issue, and even if it was right, a further question remains: why was the robust individual right to possess guns recognized in 2008, rather than 1958, 1968, 1978, 1988, or 1998?
And notwithstanding the possible inclinations of the Court's most conservative members, Heller is not best seen as a descendant of Lochner. In spite of its radically different methodology, Heller is far closer to Griswold than it is to Marbury or to Lochner. No less than Griswold, Heller is a narrow ruling with strong minimalist features. And if this view is correct, then the development of the gun right, as it is specified over time, will have close parallels to the development of the privacy right. …


Journal Article
TL;DR: The most plausible defense of due process traditionalism operates on rule-consequentialist grounds, with the suggestion that even if traditions are not great, they are often good, and judges do best if they defer to traditions rather than attempting to specify the content of "liberty" on their own.
Abstract: In important cases, the Supreme Court has limited the scope of "substantive due process" by reference to tradition, but it has yet to explain why it has done so. Due process traditionalism might be defended in several distinctive ways. The most ambitious defense draws on a set of ideas associated with Edmund Burke and Friedrich Hayek, who suggested that traditions have special credentials by virtue of their acceptance by many minds. But this defense runs into three problems. Those who have participated in a tradition may not have accepted any relevant proposition; they might suffer from a systematic bias; and they might have joined a cascade. An alternative defense sees due process traditionalism as a second-best substitute for two preferable alternatives: a purely procedural approach to the Due Process Clause, and an approach that gives legislatures the benefit of every reasonable doubt. But it is not clear that in these domains, the first-best approaches are especially attractive; and even if they are, the second-best may be an unacceptably crude substitute. The most plausible defense of due process traditionalism operates on rule-consequentialist grounds, with the suggestion that even if traditions are not great, they are often good, and judges do best if they defer to traditions rather than attempting to specify the content of "liberty" on their own. But the rule-consequentialist defense depends on controversial and probably false assumptions about the likely goodness of traditions and the institutional incapacities of judges.

Posted Content
TL;DR: The authors argue that people are better seen as Credulous Bayesians, who insufficiently adjust for idiosyncratic features of particular environments and put excessive weight on the statements of others where there are common sources of information; highly unrepresentative group membership; statements that are made to obtain approval; and statements that were designed to manipulate.
Abstract: When members of deliberating groups speak with one another, their predeliberation tendencies often become exacerbated as their views become more extreme. The resulting phenomenon - group polarization - has been observed in many settings, and it bears on the actions of juries, administrative tribunals, corporate boards, and other institutions. Polarization can result from rational Bayesian updating by group members, but in many contexts, this rational interpretation of polarization seems implausible. We argue that people are better seen as Credulous Bayesians, who insufficiently adjust for idiosyncratic features of particular environments and put excessive weight on the statements of others where there are (1) common sources of information; (2) highly unrepresentative group membership; (3) statements that are made to obtain approval; and (4) statements that are designed to manipulate. Credulous Bayesianism can produce extremism and significant blunders. We discuss the implications of Credulous Bayesianism for law and politics, including media policy and cognitive diversity on administrative agencies and courts.
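A stylized way to see the mechanism described above, using illustrative notation and numbers that are not from the paper: suppose the prior odds on some claim are $O_0$, and each of $n$ group members reports evidence favoring the claim with likelihood ratio $L$. If the reports are genuinely independent, Bayesian updating gives posterior odds $O_0 L^{n}$; if all $n$ reports trace back to a single common source, the warranted posterior odds are only $O_0 L$:

$$O_{\text{independent}} = O_0 \, L^{n} \qquad\qquad O_{\text{common source}} = O_0 \, L$$

With $O_0 = 1$, $L = 2$, and $n = 10$, a Credulous Bayesian who treats ten echoes of one source as independent reports ends up at odds of about 1024:1 (a probability near 0.999), when the evidence warrants only 2:1 (about 0.67); the shared signal is overweighted by a factor of $L^{n-1}$.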

Journal ArticleDOI
TL;DR: This paper shows that false rumors can have a tenacious hold on some groups and cultures while dying a rapid death in others (multiple equilibria are likely), that the exchange of information may not produce convergence on truth, and that damaging false reports will often be widely credited.
Abstract: Why do false rumors spread? Why do otherwise sensible people believe them? Why are they sometimes impervious to correction? There are several answers. (a) Some false rumors gain traction because of their fit with prior convictions within particular groups and cultures. People are strongly motivated to accept certain beliefs, however groundless; they also have good reasons to accept some of those beliefs. Diverse groups will have diverse thresholds for accepting false rumors. It follows that particular rumors can have a tenacious hold on some groups and cultures while dying a rapid death in others; multiple equilibria are likely. (b) Informational cascades are often responsible for belief in false rumors. Such rumors typically spread as a result of such cascades; people believe them because they lack the information that would lead them to reject the signals given by the apparently shared beliefs of numerous others. The important point here is that with respect to many rumors, private signals are essentially nonexistent. (c) Reputational cascades help propagate false rumors. Sometimes people do not correct such rumors, and even endorse them, so as to curry favor or to avoid public opprobrium. Because of the role of early movers, multiple equilibria are (again) likely, as some groups come to believe rumors that other groups deem preposterous. (d) Group polarization accounts for the intensity with which people accept false rumors. Like-minded people, engaged in deliberation with one another, increase one another's confidence in rumors. Here too we see why false rumors are widely believed within some groups but widely rejected in others. As a result of group polarization, such rumors often become entrenched. (e) Biased assimilation can make false rumors exceedingly hard to correct. Because people with strong antecedent commitments process balanced information in a biased way, such information can strengthen people's commitment to false perceptions. That commitment can also be strengthened by corrections, which therefore turn out to be self-defeating. These points have significant implications for freedom of speech and the marketplace of ideas, especially in the age of the Internet; they demonstrate that the exchange of information may not produce convergence on truth and that damaging false reports will often be widely credited. A chilling effect on false rumors can be highly desirable; the goal should be to produce optimal chill, rather than no chill at all.

Journal ArticleDOI
TL;DR: The happiness approach as discussed by the authors relies on surveys that ask people to rate their happiness on a scale, and then finds correlations between ratings on the scale and various characteristics or experiences of the survey respondents.
Abstract: Economists who make normative proposals traditionally assume that policy should advance “efficiency,” usually in the Kaldor or Hicks sense, which defines efficiency in terms of whether the project’s winners can hypothetically compensate the project’s losers. A compensation criterion is used because it can be based on ordinal utilities, which puts a smaller information burden on the decision maker than cardinal utilities do. Ordinal utilities, unlike cardinal utilities, can (in principle) be inferred from observations of consumer behavior. By seeing how people trade off goods, willingness-to-pay (or willingness-to-accept) amounts can be derived and summed, so that alternative policy outcomes can be easily compared. This approach has received a great deal of criticism over the decades, but it has survived mainly because no alternative method has commanded widespread agreement. In recent years, however, a small group of economists and psychologists have argued that an alternative method is available. This method, often called the “happiness approach,” relies on surveys that ask people to rate their happiness on a scale. Econometric analysis then finds correlations between ratings on the scale and various characteristics or experiences of the survey respondents—wealth, income, family relationships, and so forth. Though still regarded with skepticism in many quarters, the happiness approach has scored some notable successes. The various factors that are correlated with happiness appear to be robust: they recur in different surveys and are correlated …