
Showing papers in "Policy Review in 2005"


Journal Article
TL;DR: Literacy is not merely an educational or social need; it is essential if the United States is to compete in the new global economy, but experts define and measure the problem differently and propose varied methods of attack.
Abstract: AFTER DECADES OF debate inside the educational community, literacy policy has recently moved to the larger stage of national politics. Prior to 1997 no federal bill had specifically addressed child literacy as an issue. But in the 2000 presidential campaign, George W. Bush regularly touted his record on literacy, and he and his wife Laura, a former librarian, speak of reading as "the new civil right." Legislators debate literacy philosophy and methodology--such as phonics versus whole language--and newspaper headlines track how U.S. kids read in comparison with the rest of the world. The No Child Left Behind Act of 2002, which federalizes literacy policy in important ways, is hotly debated in state legislatures, Congress, and federal courts. The transformation of literacy from an educational concern to a national political issue has been swift and significant. In a sense, literacy has traveled the same federalization road as educational policy generally, moving from local school boards and statehouses to our nation's capital in less than a decade. Two former governors, Bill Clinton and George W. Bush, wrestled with literacy and education at the state level before packing up the problems and bringing them along to the White House. In fact, the road to federalizing literacy policy moves largely through Sacramento, California and Austin, Texas on its way to Washington, D.C. Playing out literacy on the national political stage reframes everything--the issues, the vocabulary, and the cast of characters. Literacy is not merely a problem now; it is a crisis. Improving literacy is not just an educational or social need; it is essential if the United States is to compete in the new global economy. Everyone seems ready to declare war on the enemy--illiteracy--but experts define and measure the problem differently and propose varied methods of attack. In fact, the several combatants in the war on illiteracy seem to expend as much ammunition firing on one another as they do in attacking the problem. The politicians in Washington, the scholars in their ivory towers, the vested interests in our communities, and the teachers and students in the trenches all seem to be warring among themselves at the same time they are trying to combat the common enemy. Much was written about literacy when it played on the academic stage as an educational issue. Now that political attention and governmental dollars are involved, it is time to assess literacy on the public policy stage. What can we learn by tracking the politics of literacy? Will greater political attention and government expenditure improve literacy? Can illiteracy be overcome in the twenty-first century? Literacy policy travels eastward ONE WAY TO TRACK the story of how literacy became a national political issue is to follow its path through two key states on its way to Washington, D.C. The story begins, as new political trends often do, in the Golden State, California. There, an ambitious politician, Superintendent of Education Bill Honig, joined forces with advocates of "whole language" instruction to revise the California reading curriculum. Changes in the educational approach of a major state such as California have a corresponding effect on national textbook publishing and on the policies of other states. In the late 1980s, Honig decided to launch a major reform of California education centered on reading. 
Some say he was simply doing his job as superintendent of education, but most believe Honig had a larger political agenda, including a possible run for governor of California. Under his leadership, the state initiated a program called Reading Recovery that had been developed by a New Zealand teacher, Marie Clay. Honig apparently wanted to recast reading instruction in a "great books" format, encouraging the development of reading skills in the context of exposing students to rich literature. Perhaps unwittingly, however, he tied his policies to an educational/political movement called "whole language. …

34 citations


Journal Article
TL;DR: The essential nature of a culture war is invariant: a set of traditional values comes under attack by those who, like the Greek Sophist, the French philosophe, and the American intellectual, make their living by their superior proficiency in handling abstract ideas, and promote a radically new and revolutionary set of values.
Abstract: AMERICA HAS BEEN in the midst of a culture war for some time and will probably remain so for some time longer. But culture war is not peculiar to this country. Indeed, there have been at least three great culture wars fought in the course of Western history, including one contemporaneous with the rise of the Sophists in ancient Greece, the epoch identified with the French Enlightenment and the German Aufklarung, and our own current battle. The first two ended in disaster for the societies in which they occurred--the outcome of the third is still pending. Each of these wars has its own particular antagonists, each its own weapons of combat, each its own battlefield. But the essential nature of a culture war is invariant: A set of traditional values comes under attack by those who, like the Greek Sophist, the French philosophe, and the American intellectual, make their living by their superior proficiency in handling abstract ideas, and promote a radically new and revolutionary set of values. This is precisely what one would expect from those who excel in dispute and argumentation. In every culture war the existing customs and traditions of a society are called to the bar of reason and ruthlessly interrogated and cross-examined by an intellectual elite asking whether they can be rationally justified or are simply the products of superstition and thus unworthy of being taken seriously by enlightened men and women. Indeed, there could be no better example of this disdainful attitude toward inherited tradition than that displayed by the chief justice of the Supreme Court of Canada in discussing her court's legalization of gay marriage, clearly expressed by her summary dismissal of any opposition to the high court's decision as arising from nothing more than "residual personal prejudice." Against such opposition, it is no wonder that many conservatives--including many of those who call themselves neoconservatives--have attempted to combat the opponents of tradition with their opponents' own weapon of enlightened rationality. But is it possible to defend tradition with the help of reason? Can a particular tradition be justified by reason? And what if our traditional belief conflicts with reason--can we rationally justify keeping it? Suppose we have been raised in the belief that we must wash our hands before every meal in order to appease a local deity in our pantheon, say, the god of the harvest; and suppose again that we have come to learn of the hygienic benefits of washing our hands before every meal. Must we keep the absurd tradition once we have grasped its scientific rationale? In either case, whether tradition and reason conflict, or tradition is revealed to be reason disguised, reason wins and tradition loses. Where reason shines forth, then, tradition is no longer necessary. Hence the question before us: In a world that is being more and more rationalized, does tradition have a future? Or will we one day look upon it as we now look upon the myths of the ancient world--quaint and amusing, but of no real relevance to our lives? The quandary of cultural relativism PERHAPS THE EARLIEST, and certainly one of the most distinguished, attempts to rationalize a tradition was made by the twelfth-century Jewish philosopher Moses Maimonides as he contemplated the laws regulating dietary customs presented in Leviticus. 
Should these regulations be obeyed simply because we are commanded to obey them, as a strict traditionalist will insist; or should they be obeyed because they represent prudent counsel concerning what foods are healthy for us, as Maimonides, himself a physician, asserts? In order to appreciate the revolutionary nature of Maimonides's defense of inherited custom, we must recall that the question could not even be asked in most tradition-bound societies. In the kind of primitive and compact society that Walter Bagehot described as cemented in a cake of custom--totally absorbed within a particular traditional ethos--such a dilemma cannot be articulated. …

16 citations


Journal Article
TL;DR: In the wake of the 2004 Sumatran tsunami, the United States, Japan, Europe, Australia, and Canada escalated their aid packages to an eventual total of over $4 billion, while China played aid catch-up with rival Taiwan, which had started off the week with a generous pledge of $50 million.
Abstract: DECEMBER 30, 2004, was hardly a proud moment for China, Asia's rising superpower. On that day, China's Foreign Ministry spokesman, Liu Jianchao, announced $2.7 million in disaster relief to victims of the great Sumatran tsunami that killed hundreds of thousands of people and wiped out hundreds of towns along the western coast of the Indonesian island. Liu was understandably a bit defensive as foreign reporters peppered him with questions about the minimal aid amount. "China is a developing country," he offered. "We have a population of 1.3 billion. China's per capita GDP is still very low." The $2.7 million was, he explained, "equivalent to the annual income of 20,000 farmers." While for the next several days the United States, Japan, Europe, Australia, and Canada continued to escalate their tsunami aid packages to an eventual total of over $4 billion, China was playing aid catch-up ball with rival Taiwan, which had started off the week with a generous pledge of $50 million. By mid-January China had pledged about $63 million, though Taiwan's relief teams were far more visible in the stricken areas than China's. Chinese aid efforts were dwarfed by the fleets of U.S. helicopters ferrying vast cargoes of medical, food, and construction supplies from American aircraft carriers and support ships anchored in the hazy distance directly to needy masses of refugees ashore. Unsurprisingly, this was not an image that was seen in China. China's official Xinhua news agency breathlessly reported that Indonesians were emotionally overwhelmed by China's aid. On January 2, a week after the tidal wave, a Sumatran refugee named "Awada," who drove an ambulance for a team of Chinese medics, was moved to proclaim, "China, in my heart, is a great nation!" These words (complete with exclamation point) comprised the headline at the top of the international news page on the January 3, 2005 People's Daily. Despite the fact that China's meager contributions excluded them from the international "core group" of tsunami aid donors, Chinese readers were left with the rosy impression that their country was "a major humanitarian aid power." To the rest of Southeast Asia, reported the New York Times (January 4, 2005), the huge American, Japanese, and European aid campaigns were "a reminder that the world's most populous country is still far from being the dominant power in Asia." Added the Washington Post (January 5, 2005), "the response has also underscored the limitations of China--a fast-growing economic powerhouse that nevertheless has not been able to offer anywhere near the amount of aid provided by Japan, the United States or Britain." All true. But anyone who concluded from the Times and Post accounts that in 2005, China was merely a bit player in Southeast Asia--or anywhere else in the world--would be dead wrong. Beijing's political leaders know that superpowers aren't measured by their foreign aid budgets, or by their economies. They are measured by their ability to use their comprehensive national power--economic, political, and above all military--to gain the obeisance of their neighbors and their regional and global rivals. Asian superpower IT SEEMS THAT the United States may already have resigned itself to China's imminent emergence as a "military superpower"--the term Secretary of State Condoleezza Rice used to describe it in a June 29, 2005 interview with the Wall Street Journal. For regardless of the niggardliness of its Tsunami aid effort, China is now the dominant power in Southeast Asia. 
How it became so should yield insights into its strategies for the rest of the globe. In his National Security Strategy paper of September 2002, President Bush announced, "We must build and maintain our defenses beyond challenge," an unmistakable declaration that U.S. defenses must be so awesome that no other country would even "challenge" them. At the same time, he pledged that he would be "attentive to the possible renewal of old patterns of great power competition. …

13 citations


Journal Article
TL;DR: In this article, the authors argue that there is a way to substantially improve on the basic model for economic development by using a new kind of market and paying for performance.
Abstract: "OUR DREAM IS A World Free of Poverty" reads the sign at the entrance to the World Bank headquarters That's quite a lofty goal So how do we achieve it? The short answer is that no one is certain The long answer is that there is a way to substantially improve on the basic model for economic development--using a new kind of market and paying for performance Not too long ago, foreign aid was viewed as a path to economic growth for the developing world In some quarters, most notably the development banks and the United Nations, it still is But there is dissension in the ranks Scholars have been chipping away at the aid-buys-growth paradigm for over 30 years--with some going so far as to suggest that state aid could actually hurt the poorest of the poor Over the past decade, revisionists have asserted that foreign aid can be helpful, but only if countries pursue good policies So, if a country has good domestic economic policies and open trade, aid can help; but aid can do little in the presence of poor policies Some scholars have questioned even this view, noting that measuring the impact of foreign aid depends on the definitions of terms like "aid," "policies," and "growth" Foreign aid, consisting of labor and capital that flow to particular countries, will tend to be good if those assets are spent wisely and bad if they are not The real question is how to spend those assets wisely At one level, this is a difficult question because it involves trying to get governments that may be near-sighted or corrupt to take a longer view It asks them to think about investing in areas such as education, health, and roads instead of squandering resources on wasteful activities Solving this problem is difficult One can point to several success stories in getting developing countries to clean up their acts, but there are numerous failures The potential perverse incentives of aid are well known Recipient-country governments that use aid productively may not receive any more Aid bureaucracies that solve problems effectively could put themselves out of a job These perverse incentives prevent policymakers from spending aid wisely Furthermore, like many government programs that give out money, aid programs rarely evaluate how well the aid is actually spent The Meltzer Commission notes, for example, that three to ten years after final disbursement, the World Bank reviews the broad policy impact of just 5 percent of its programs To some extent, these problems can be overcome by setting up rules for giving out aid One such rule, currently in vogue, is that aid should be given to really poor countries that promote good policies in general Another rule is to make sure that aid actually does what it is intended to do by paying the project implementers based on actual results Both of these rules may make sense, but both have problems Just giving aid to well-behaved poor countries may mean that donor countries have to write off a large part of the developing world Paying for performance sounds great in theory, but it may be difficult to do in practice While problems with foreign aid are legion, the problems in developing countries are too important to be ignored In 2000, an estimated 27 billion people were living on less than $2 per day These people could potentially benefit from aid from rich countries and international institutions The question is how to make the best use of that aid (1) A new development model AID AGENCIES WANT to spend their limited resources wisely, but they frequently fall short To 
allocate resources to their most highly valued uses and get the maximum bang for each aid dollar spent, an agency needs to do two things The first is to get reasonable information on the likely costs and benefits of different projects The second is to implement projects effectively Assume that we can solve the information problem We will shortly explain how to do so by making use of a new kind of market, an "information market …

12 citations


Journal Article
TL;DR: Kelo v. City of New London (2005) was a seminal case in the field of eminent domain, in which the U.S. Supreme Court declared it permissible for a city government to take a group of working-class homes from their owners and turn the parcel of land over to private parties for the purpose of economic development.
Abstract: FOR MANY YEARS, the subject of eminent domain, or "takings," was the purview chiefly of academics and a narrow subspecialty of lawyers. But after June 23, 2005, when the U.S. Supreme Court handed down its 5-4 decision in Kelo v. City of New London, Conn., the term immediately found its way into heated debates in legislative chambers and the flying mud of electoral campaigns nationwide. In Kelo, the Supreme Court ruled that it was constitutionally permissible for the city government to take a group of working-class homes from their owners and turn the parcel of land over to private parties for the purpose of economic development. Kelo thereby tapped into deep-rooted questions of money and class, its result threatening to violate that most sacred of American domains: the home. The Kelo decision is testament to the expanding use of police power by the state for the advancement of private interests that are often in cozy relationships with local municipal governments. To follow the path of the takings locomotive that has chugged across this country is to see how the meaning of private property has changed in the United States from its original promise as a place of sanctuary from outside interference to a contingent relationship in which property is private and unmolested only at the sufferance of local government. Around the nation, there are thousands of ordinary citizens whose lives have been touched--and sometimes destroyed--by takings. For decades, the takings locomotive was fueled by urban renewal policies, now known by the more delicate term "urban revitalization" (and no longer practiced in 1960s-style blunderbuss fashion). But the Kelo case was fueled by a different type of fervor, and one with far greater potential for mischief in the twenty-first century: economic development. The homes of the petitioners in Kelo were marked for eminent domain not because they were blighted, but because they stood in the way of the city's plan to increase its tax base and jazz up what officials saw as a depressed waterfront in their town. Citizens who had been the targets of economic development takings projects long in the works before the Kelo decision suddenly found the vocabulary they had lacked to express their anger. The Kelo case, covered in depth by the media, crystallized the often Byzantine nature of condemnation proceedings for the homeowners caught up in them. Armed with a clearer understanding of what had befallen them and an outlet for their outrage, they have been hitting the streets in protest. Even Justice John Paul Stevens, who authored the majority opinion in Kelo, said in a speech before a bar association meeting in August that he personally regretted the Kelo decision but had felt compelled to rule against the homeowners, based on precedent. What has emerged from the ashes of the Kelo ruling is the rarest of political birds, a Supreme Court Phoenix--a case that lives on in political consciousness. If the Phoenix rises high enough, it may result in state laws and a federal statute that could render the Supreme Court ruling moot. High court blessing IN THE Kelo case, the high court ruled against 15 homeowners from a working-class neighborhood in Connecticut, giving federal constitutional blessing to what has become standard practice in a number of states for many years (though expressly rejected in others). 
What it has come down to is this: You may rest easy by your hearth (or behind the cash register of your business establishment) so long as the municipality in which it sits has not gotten a notion in its collective head that your property would raise more tax revenue by being taken in condemnation and given to a private developer, who would then raze it to build what the local government deems necessary in the name of economic development. And lest you imagine that the project for which your home or business could be torn down would be something of great public purpose, such as a hospital, a school, or a missile silo, think again. …

9 citations


Journal Article
TL;DR: CENTCOM's strategy for the counterinsurgency effort in Iraq is an attempt to avoid making Vietnam-like mistakes, but the authors argue that the insurgencies in Iraq and Vietnam have little in common beyond the fact that in both cases American forces have fought revolutionaries.
Abstract: WHEN AMERICAN GROUND forces paused briefly during the march to Baghdad in 2003, critics of the war were quick to warn of a "quagmire," an oblique reference to the Vietnam War. Virtually as soon as it became clear that the conflict in Iraq had become an insurgency, analogies to Vietnam began to proliferate. This development is not surprising. Critics have equated every significant American military undertaking since 1975 to Vietnam, and the fear of being trapped in a Vietnam-like war has led to the frequent demand that U.S. leaders develop not plans to win wars, but "exit strategies," plans to get out of messes. There is no question that the Vietnam War scarred the American psyche deeply, nor that it continues to influence American foreign policy and military strategies profoundly. CENTCOM's strategy for the counterinsurgency effort in Iraq is an attempt to avoid making Vietnam-like mistakes. Proponents of other strategies, like "combined action platoons" or "oil spot" approaches, most frequently derive those programs from what they believe are the "right" lessons of Vietnam. It is becoming increasingly an article of faith that the insurgency in Vietnam is similar enough to the insurgency in Iraq that we can draw useful lessons from the one to apply to the other. This is not the case. The only thing the insurgencies in Iraq and Vietnam have in common is that in both cases American forces have fought revolutionaries. To make comparisons or draw lessons beyond that basic point misunderstands not only the particular historical cases, but also the value of studying history to draw lessons for the present. Vietnam AN INSURGENCY WAS underway in Vietnam for nearly two decades before Lyndon Johnson committed large numbers of American ground forces to the fight in 1965. The U.S. had nevertheless maintained hundreds and then thousands of "advisors" there for years before that in an effort to help the South Vietnamese government of Ngo Dinh Diem fight off an attempt to remove him that had both internal and external components. The Viet Cong was a terrorist/guerrilla force recruited from within South Vietnam and operated there. It was heavily supported by the communist government in North Vietnam, which sent advisors, equipment, and supplies, and which provided a safe haven. Ho Chi Minh's government also supplied troops, however, and the first major battle U.S. forces in Vietnam fought on their own (now immortalized in print and on the screen as We Were Soldiers Once ... and Young) was the Battle of the Ia Drang Valley; the enemy were North Vietnamese soldiers. The presence of North Vietnamese troops in South Vietnam, and the enormous logistics train the North maintained for the benefit of its Viet Cong partners, complicated the development of American counterinsurgency strategy enormously. Throughout the war, American leaders had difficulty deciding whether the main enemy was the North Vietnamese Army (NVA) or the Viet Cong (VC). In the initial phases of the war, the U.S. leadership focused more on the NVA and therefore on using conventional American military capabilities to defeat the external threat. This was a convenient decision that allowed the U.S. to bring all its military power to bear: Troops fought the NVA on the ground; aircraft and "swift boats" attempted to cut off North Vietnamese supply lines; bombers attacked targets within North Vietnam in an attempt to dissuade Ho Chi Minh from continuing the fight. 
Efforts to conduct a real counterinsurgency within the South were generally overwhelmed by this focus on a more or less conventional struggle against North Vietnam. Thus critics then and since have complained that the Combined Action Platoons (CAPs) program pioneered by the Marines would have been much more successful if only it had been better resourced, for example. Such claims are plausible, but they generally ignore two defining factors of the South Vietnamese insurgency: the presence of sizable enemy units maneuvering throughout the country, and the illegitimacy of the South Vietnamese government. …

7 citations


Journal Article
TL;DR: Canada and the U.S. are more similar to each other than any two other large countries on the planet today, but for all their similarities, the two countries are remarkably distinct from one another.
Abstract: CANADA AND THE U.S. are more similar to each other than any two other large countries on the planet today. We share a language, a continent, and a colonial history. Our two affluent and resource-rich countries, moreover, have forged the largest trading bond in the modern world. (1) Since the implementation of NAFTA in 1993, of course, the volume of U.S.-Canadian trade has steadily increased; this economic integration is drawing the two economies ever closer. Yet for all their similarities--and the unfolding forces pressing for still greater homogenization--Canada and the United States are remarkably distinct from one another. In recent years, government policies in these two similar countries have diverged recurrently, and conspicuously, on a number of issues: Think of Iraq, missile defense, lumber, gay marriage, and marijuana. And these highly visible differences may not be the biggest ones. A quiet and as yet largely unrecognized divergence may be even more fundamental. Its indicators are found in the relatively new but steadily increasing differentiation of demographic trends in North America. Twenty-five years ago the population profiles of Canada and the United States were similar. Both were younger than their European allies, and their societies were more heterogeneous. In 1980 their populations had almost the same median age, fertility rates, and immigration rates. In the years since then, small changes in demographic variables have accumulated, ultimately creating two very different countries in North America by the end of the twentieth century. Canadians now have half a child fewer than Americans during their lifetimes--their fertility level is roughly 25 percent lower than that of their neighbors south of the border--and they are living two years longer. Both populations are growing at about the same rate, but the components of growth have diverged. Immigration is relatively more important in Canada's growth rate, and fertility is more important in the United States. Canadians marry later and less often than Americans. They enter common-law unions more often and their children are increasingly likely to be born out of wedlock. Canadians and Americans have similar labor force participation rates, but Americans work more hours per year. They have higher incomes but less leisure. And even though Canada's birth rate is now substantially lower than America's, the Canadian government provides more child services and benefits than the U.S. government. Changes in patterns of marriage and fertility are the accumulated outcomes of millions of personal decisions by men and women. When couples, one at a time, make decisions that differ in aggregate from the couples in a neighboring country, it is a reflection of deliberate agency rather than mere chance. That's why the still-widening demographic gap that has opened up between Canada and the U.S. says even more about the two societies and their futures than public or policy differences on any single issue. It also demonstrates that macroeconomic integration since NAFTA may not have had a homogenizing effect at a household level. This exploration should make Canadians who fear becoming too much like the U.S. a bit less fearful. Why fertility may change ONE OF THE most important and interesting debates in demography today centers on the decline in fertility in developed countries. When the decline in total fertility rates begins and when it stops is of importance not only to demographers, but also to societies.
Age structure changes that are caused by declining fertility have far-reaching ripple effects: They touch on all age-specific activities and programs throughout society. Over the past generation, childbearing patterns in nearly all developed countries have changed significantly, falling to levels that (if continued indefinitely in the absence of immigration) would presage a steady shrinking of successive generations. …

6 citations


Journal Article
TL;DR: In his first term in office, George W. Bush established and nurtured a close personal relationship with Russian President Vladimir V. Putin, who sided publicly and unequivocally with the United States in the war on terror, providing material and intelligence assistance to the American military intervention in Afghanistan and not hindering the deployment of American troops in Central Asia.
Abstract: IN HIS FIRST TERM IN office, President George W. Bush established and nurtured a close personal relationship with Russian President Vladimir V. Putin. Early on, Bush's overtures toward his counterpart in the Kremlin produced beneficial results for the president's policies. President Bush succeeded in persuading Putin to acquiesce in the abrogation of the Anti-Ballistic Missile Treaty, a revision of the Cold War arms-control regime that Bush deemed necessary for his security agenda. After the attacks of September 11, Putin sided publicly and unequivocally with the United States in the war on terror, providing material and intelligence assistance to the American military intervention in Afghanistan and not hindering the deployment of American troops in Central Asia. Since then, Russian and American officials claim that the two countries have continued to share intelligence in fighting cooperatively the global war on terror. During each man's second term, however, the Russian-American bilateral relationship exhibits little of the optimism and enthusiasm expressed immediately after September 11 in both countries about common struggles, new alliances, or shared values. At their recent meetings, both Bush and Putin have made sure to continue to praise each other personally, but behind the rhetoric of friendship is a troubled partnership in drift. In advancing Bush's three central foreign policy objectives--fighting the war on terror, preventing the proliferation of weapons of mass destruction, and promoting liberty--Russia makes no significant contributions. In addition, the drift toward autocracy inside Russia has helped to produce a Russian foreign policy more at odds with Western interests and values in places like Georgia, Ukraine, and Moldova. Rhetorically and symbolically, Putin and his aides seem determined to rekindle Cold War antagonisms, denouncing "Western" backing for terrorists after the tragedy in Beslan and American "meddling" in fomenting revolution in Ukraine while at the same time conducting joint military maneuvers with China. President Bush's foreign policy priorities today do not include Russia. He and his foreign policy team are focused first and foremost on stabilizing Iraq, fighting terrorism, managing China's growing power, dealing with Iran and North Korea, and perhaps repairing relations with Europe, a long list which leaves little time for Russia. A major review of his Russia policy is not likely to be high on Bush's agenda. At the same time, the president can no longer pretend that his personal ties with Putin are a substitute for an effective American policy for dealing with Russia and especially Russia's autocratic drift. In the long run, Bush's failure to develop a new and more strategic policy toward Russia could create serious problems for American national security interests--i.e., a nationalist leader in the Kremlin with anti-Western foreign policy interests empowered by a thriving economy, a state-owned oil and gas conglomerate with tentacles deep into Europe, and a revamped Russian state and military with imperial ambitions. Fortunately, the probability of this outcome is still small; now is the time to ensure that it remains so. The most effective strategy for Bush's new foreign policy team to help slow Russia's democratic deterioration is not isolation, containment, or confrontation, but rather deeper engagement with both the Russian government and Russian society. 
The United States does not have enough leverage over Russia to influence internal change through coercive means. Only a strategy of linkage is available. However paradoxical, a more substantive agenda at the state-to-state level would create more permissive conditions for greater Western engagement with Russian society. A new American policy toward Russia must pursue both--a more ambitious bilateral relationship in conjunction with a more long-term strategy for strengthening Russian civil, political, and economic societies, which ultimately will be the critical forces that push Russia back onto a democratizing path. …

4 citations


Journal Article
TL;DR: In the years since September 11, 2001, the United States and the European Union have signed agreements previously thought unachievable and have worked together much more closely than ever before.
Abstract: AFTER SEPTEMBER 11, 2001, NATO's invocation of Article 5 committing members to the collective defense of U.S. territory dominated news reports from Europe. Then the media reported that the U.S. government had mostly declined European offers of help, in part because the Europeans lacked useful military capabilities. The resulting hurt feelings of the Europeans, together with American doubts that the Europeans had much to contribute to the fight against terrorism, certainly soured the transatlantic relationship. That was the view from NATO circles, at any rate. A different view emerged from downtown Brussels, where the European Union also responded quickly to the 9/11 attacks. Within a week, EU leaders had publicly committed themselves to closer cooperation with the United States than ever before. The United States was slow to respond, just as it had been with NATO, for a variety of reasons. But the Europeans persisted, and within a short period of time a new dynamic emerged in the U.S.-EU relationship. During the years since September 2001, the United States and the European Union have signed agreements previously thought unachievable and have worked together much more closely than ever before. In fact, the breadth of the cooperation in itself contributes to the difficulty of any review and analysis. Since September 11, there have been numerous transatlantic initiatives: to develop law enforcement cooperation; to extend the freezing of terrorist assets; to develop more secure procedures for container shipping, air passenger travel and issuance of travel documents; to improve export control systems and other nonproliferation measures; and to coordinate foreign policy, especially toward the Broader Middle East. The bilateral cooperation thus included both foreign and domestic policy officials from numerous agencies on both sides of the Atlantic. However, the number of agreements signed and meetings attended does not in itself define the quality or success of the cooperation. The substance of the agreements is important, as is the degree to which they have been implemented. Further, at the outset it was not clear whether any new U.S.-EU cooperation would come at the expense of bilateral cooperation between the United States and EU member states at the national level, or whether it would indeed provide its own added value. Beyond the technical issues are wider ones associated with the goal of "building Europe." As more and more functions are concentrated in Brussels rather than in national capitals throughout Europe, it is not clear whether this will help or hurt U.S. interests. In part, the answer to that question will depend on whether the EU is able to persuade its citizens of the danger that terrorism poses to them, as well as the value of close cooperation with the United States on these issues. The United States also had to decide whether it should cooperate with the EU as a means of inducing European governments to tighten their counterterrorism regimes, or whether such cooperation might be limited and possibly damaged by public opposition with a strong tinge of anti-Americanism. In sum, the United States had to evaluate the potential effectiveness of the proposed new partnership. September 11: Before and after ON SEPTEMBER 10, 2001, the danger posed by terrorism was far from the minds of most Europeans. Of the 15 EU member states, only half had specific legislation identifying terrorism as a crime.
While most had signed various UN conventions against terrorism over the years, many still had to complete the necessary ratifications. And various member states continued to provide sanctuary for individuals that other member states considered to be terrorists: ETA members found refuge in France; the UK refused to surrender alleged terrorists to France; and a major terrorist network, November 17, had operated with impunity for years on Greek territory. In the EU Treaty of Amsterdam, which was signed in 1997, the EU committed itself to establishing an internal "area of freedom, security and justice. …

4 citations


Journal Article
TL;DR: Australia is a nation conceived in peace: No war of independence marked its birth and no civil war its coming of age. But its national consciousness bears the deep imprint of war.
Abstract: SINCE 9/11, AUSTRALIANS have proven themselves once more to be "very satisfactory friends in peace, and the best of friends in war," as President John Kennedy described them in 1962, attesting to "this happy relationship between two great people." According to Prime Minister John Howard, "Australians have never asked others to do for us what we have been unwilling to do for ourselves." But Australia's participation in the coalition would not have been possible had Howard not won a political debate that consumed much of the 1990s about Australia's place in the world and its national identity--and, having won power in 1996, had he not demonstrated in government that an instinctively conservative politician can govern successfully in accordance with his principles. Australia is a nation conceived in peace: No war of independence marked its birth and no civil war its coming of age. But its national consciousness bears the deep imprint of war. "Australia was born on the shores of Gallipoli," Billy Hughes, who served as prime minister during the First World War, once said. The Australian federation had been formed just 13 years before, and the Great War was the young nation's first test, one that exacted a huge toll. Out of a population of 4.5 million, 60,000 gave their lives. To put that in perspective, the United States, with a population in 1914 over 20 times higher, lost 116,000 men. Its wartime sacrifice has been described as Australia's spiritual bonding. Every town has a war memorial. The remembrance of Australia's war dead and the celebration of its war heroes is a deeply-rooted part of its sense of nationhood. The National War Memorial in Canberra is one of the most visited places in the country, and Anzac Day 2004 saw record numbers of young Australians at Gallipoli. Although Australia entered the First World War as part of the British empire, it did so enthusiastically. In part, this reflected ties of kinship and Australians' dual identity as Australian and British. But Australia wasn't just fighting Britain's battles. It also evinced a hard-headed calculation of its own security interests. In the era of imperial expansion, which the First World War was to bring to an end, mastery of Europe would change the balance of power in the South Pacific. Australians recognized that the Royal Navy was the guarantor of their independence. Recognition of the need for allies went hand in hand with an assertion of Australian interests and, at times, a vocal presence in world affairs. At the Paris peace conference at the end of World War I, Billy Hughes insisted on annexing the South Pacific islands his country had captured from Germany and made little secret of his contempt for Woodrow Wilson, his Fourteen Points, and the League of Nations. Wilson returned the compliment, calling Hughes a "pestiferous varmint." The collapse of British power in the Pacific following the surrender of Singapore to Japan during the Second World War meant that from then on, American power was to be the cornerstone of Australia's defense. Its most important bilateral relationship switched from Britain to the United States. Ten weeks after Pearl Harbor, the Japanese bombed a joint Australian--U.S. military force in Darwin on Australia's northern coast; and at the battle of the Coral Sea, the U.S. Navy successfully thwarted a Japanese attempt to cut Australia off from America. After the Second World War, Australia's strategic imperative was to secure its alliance with the United States. 
Australian diplomacy achieved its greatest triumph when the Truman administration signed the ANZUS treaty in 1951. Under it, the parties declared their "sense of unity," ensuring that "no potential aggressor could be under the illusion that any of them stand alone in the Pacific Area." While clearly Australia has gained an immense strategic benefit from the ANZUS treaty, it has consistently been a far more reliable ally than many of America's NATO partners. …

3 citations


Journal Article
TL;DR: In the enemy combatant cases, the Supreme Court held that a citizen-detainee seeking to challenge his classification as an enemy combatant must receive notice of the factual basis for his classification and a fair opportunity to rebut the Government's factual assertions before a neutral decision-maker.
Abstract: THE DAY THE Supreme Court handed down what have collectively become known as the enemy combatant cases--June 28, 2004--was both widely anticipated and widely received as a legal moment of truth for the Bush administration's war on terrorism. The stakes could not have been higher. The three cases came down in the midst of election-year politics. They each involved challenges by detainees being held by the military without charge or trial or access to counsel. They each divided the Court. And they appeared to validate or reject core arguments that the administration had advanced--and had been slammed for advancing--since the fight against al Qaeda began in earnest after September 11, 2001. The dominant view saw the cases as a major defeat for President George W. Bush--and with good reason. After all, his administration had urged the Court to refrain from asserting jurisdiction over the Guantanamo Bay naval base in Cuba, and the Court asserted that jurisdiction in unambiguous terms: "Aliens held at the base, no less than American citizens, are entitled to invoke the federal courts' authority." (1) The administration fought tooth and nail for the proposition that an American citizen held domestically as an enemy combatant has no right to counsel and no right to respond to the factual assertions that justify his detention. The Court, however, held squarely that "a citizen-detainee seeking to challenge his classification as an enemy combatant must receive notice of the factual basis for his classification, and a fair opportunity to rebut the Government's factual assertions before a neutral decision-maker." (2) It held as well that "[h]e unquestionably has the right to access to counsel" in doing so. These holdings led the New York Times (June 29, 2004) to call the cases "a stinging rebuke" to the administration's policies, one that "made it clear that even during the war on terror, the government must adhere to the rule of law." A dissident analysis of the cases, however, quickly emerged as well and saw them as a kind of victory for the administration dressed up in defeat's borrowed robes. As David B. Rivkin Jr. and Lee A. Casey put it in the Washington Post (August 4, 2004): In the context of these cases, the court accepted the following critical propositions: that the United States is engaged in a legally cognizable armed conflict with al Qaeda and the Taliban, to which the laws of war apply; that "enemy combatants" captured in the context of that conflict can be held "indefinitely" without criminal trial while that conflict continues; that American citizens (at least those captured overseas) can be classified and detained as enemy combatants, confirming the authority of the court's 1942 decision in Ex Parte Quirin (the "Nazi saboteur" case [317 U.S. 1 (1942)]); and that the role of the courts in reviewing such designations is limited. All these points had been disputed by one or more of the detainees' lawyers, and all are now settled in the government's favor. Even among those who celebrated the administration's defeat, this analysis had some resonance. Ronald Dworkin, for example, began his essay on the cases in the New York Review of Books ("What the Court Really Said," August 12, 2004) by triumphantly declaring, "The Supreme Court has finally and decisively rejected the Bush administration's outrageous claim that the President has the power to jail people he accuses of terrorist connections without access to lawyers or the outside world and without any possibility of significant review by courts or other judicial bodies."
But he then went on to acknowledge that the Court had "suggested rules of procedure for any such review that omit important traditional protections for people accused of crimes" and that the government "may well be able to satisfy the Court's lenient procedural standards without actually altering its morally dubious detention policies …

Journal Article
TL;DR: Iwao Hakamada was a promising prizefighter who was sent to death row in Japan for murder, robbery, and arson, where he has been for 25 years.
Abstract: IWAO HAKAMADA USED to be a promising prizefighter. Crowds cheered his name. Nowadays, his only regular human contact is with prison guards, who address him by number. He spends his time pacing the floor of his nine-by-nine-foot cell at the Tokyo Detention Center. When his food comes, he stares at it for 30 minutes or so before taking a taste. He refuses most of the few visits he's allowed. This is his life on death row in Japan, where Hakamada, now 69, has been for 25 years. As far as the Japanese government is concerned, death row is exactly where Hakamada belongs. A court found him guilty of stabbing to death four people--a father, a mother, and their two children--robbing them, and setting their house on fire. In Japan, the law is clear: Those who commit such crimes must be ready to pay with their lives. Between 1946 and 2003, Japanese courts sentenced 766 people to death, 608 of whom were executed. As of the end of 2004, there were 118 convicted criminals under sentence of death in Japan. Sixty-eight of them, including Hakamada, had their convictions and sentences affirmed on appeal and were subject to execution at any time. No execution dates are given in advance. Rather, inmates learn that they're to be executed when a guard comes one morning and gives them the news. Largely because capital punishment has been abolished in Europe, Americans are used to thinking of their country's retention of the death penalty as unique among the democratic nations of the world. But the advanced industrial democracy of Japan regularly sentences convicted murderers to die as well. And Japan is not the only democracy in East Asia to retain capital punishment--South Korea, Taiwan, the Philippines, Thailand, and Indonesia do so as well. Like all those countries, Japan has faced international condemnation over its continued use of the death penalty, even more criticism perhaps than newer Asian democracies do. The United Nations Human Rights Commission has adopted unfavorable resolutions on Japanese capital punishment, and the Council of Europe, that continent's principal intergovernmental human rights organization, has threatened Japan with loss of its observer status. Such words sting a government that would much rather discuss the quality of its exports and the responsible multilateralism of its foreign policy. Yet on this issue, Japan, the world's second-largest economic power, can afford to resist foreign critics. Unlike countries such as Poland, which abolished the death penalty in 1997 to meet one of the conditions for admission to the European Union, Japan is subject to no binding multilateral checks on its internal policies. Government statements on the death penalty simply repeat Tokyo's longstanding position that the death penalty is a sovereign decision of each country, to be tailored to its crime-fighting needs. Still, as Hakamada's story shows, there will always be questions about whether the death penalty is being imposed fairly. Those questions are especially difficult for democracies such as the United States and Japan, which are committed to due process, individual rights, and the rule of law. Indeed, the issue of innocence is now at the forefront of the capital punishment debate in the United States because new DNA evidence has led to exonerations of some death-row inmates. Because Japan has stepped up its use of capital punishment in recent years, the issue may become more pressing there, too. IWAO HAKAMADA WAS 30 years old in 1966. 
The erstwhile sixth-ranked featherweight in Japan had retired from the ring and was working in the town of Shimizu, Shizuoka Prefecture, at a plant that manufactured miso, a soybean paste used in soups and other foods. On June 30, 1966, police found the bloody bodies of the plant's managing director, his wife, and their two children. Their house had been robbed of 200,000 yen and set on fire. In August, police arrested Hakamada and charged him with murder, robbery, and arson. …

Journal Article
TL;DR: Justice Kennedy's majority opinion in Roper v. Simmons, which endorsed the use of foreign and international law in U.S. constitutional adjudication, has at least the virtue of putting everyone's cards on the table.
Abstract: JUSTICE ANTHONY KENNEDY'S majority opinion in Roper v. Simmons, (1) which endorsed the use of foreign and international law in U.S. constitutional adjudication, has at least the virtue of putting everyone's cards on the table. Until that decision was handed down (on March 1, 2005), it remained possible to view the appearance of foreign law in constitutional decisions as nothing more than a minor hobbyhorse for Justice Stephen Breyer or Justice Kennedy--a merely rhetorical nod in the direction of the mostly Western European judges with whom they have become friends at international judicial conferences and other such venues over the years. As for Justice Antonin Scalia's attacks on the use of foreign legal materials, well, they were withering and witty, as always, but surely a bit over the top? Judges, after all--even Justice Scalia--have been adorning their opinions with bits of poetry, Shakespeare, and the Bible for a very long time, so why not the occasional reference to opinions of the Supreme Court of Zimbabwe or the Privy Council or the European Court of Human Rights? What could possibly be the harm in it? Justice Kennedy's Roper majority opinion puts paid to the conceit that this is all just a bit of fluff exaggerated into something sinister and conspiratorial by Federalist Society right-wing ideologues. Roper asserts far more, it turns out, than the prior use of foreign law in contemporary constitutional cases would have suggested. (2) It blesses in the contemporary era a new doctrine of constitutional adjudication, what has been called "constitutional comparativism," that is very far indeed from mere flirtation. It invites the deployment of a sweeping body of legal materials from outside U.S. domestic law into the process of interpreting the U.S. Constitution--and, moreover, invites it into American society's most difficult and contentious "values" questions. The Roper opinion reassuringly holds that the "task of interpreting the Eighth Amendment [cruel and unusual punishment] remains our responsibility." It adds, however, that it does not "lessen our fidelity to the Constitution ... to acknowledge that the express affirmation of certain fundamental rights by other nations and peoples simply underscores the centrality of those same rights within our heritage of freedom." Roper then proceeds to deploy a startling range of international authorities that hitherto would have been thought not only irrelevant but affirmatively barred from U.S. constitutional adjudication. That the opinion overlays the groundwork for a globalizing Court with a series of pat phrases transparently aimed at soothing parochial American sensibilities--reassuring the populace that the Constitution remains "theirs"--does not lessen in the least the enormity of what the Court has done. "International" or "universal" ROPER CITES, FOR example, the United Nations Convention on the Rights of the Child. Indeed, the Court even notes in passing what might have been thought a fatal flaw, viz., that the United States has not ratified it. The Court prefers to treat this unratified convention as evidence of global--in the sense of universal--views on juvenile capital punishment to which the United States should, and the Court will ensure that it does, pay heed. Such citation is problematic on a number of fronts. 
It is, moreover, emblematic of the several conceptual difficulties with the use of either foreign law or international law to which the United States has not assented and given an understanding of the nature and scope of its formal legal undertaking. (3) The Court's unstated assumption, for example, that the Children's Rights Convention's near-universal ratification means that it is actually accepted on its own terms by the world is simply false. Even at the formal legal level, the Court ignores how widely the Convention features sweeping reservations by individual countries: Saudi Arabia, for example, as with so many Muslim countries, has ratified, but with a formal reservation (surely not irrelevant to the Court's inquiry) that none of it has any application to the extent that it conflicts with shari'ah law. …

Journal Article
TL;DR: Lasch's "culture of narcissism" as mentioned in this paper has been identified as a major cause of the decline of the American character and the rise of a new type of American personality, which is characterized by a narcissistic preoccupation with the self.
Abstract: "EVERY AGE DEVELOPS its own peculiar forms of pathology, which express in exaggerated form its underlying character structure," the historian and social critic Christopher Lasch wrote in The Culture of Narcissism. For Lasch, writing in 1979, that character structure was an unrelenting narcissism, one that threatened to undermine the rugged individualism of previous eras and, quite possibly, liberalism itself. His book "describes a way of life that is dying," he wrote in the introduction, "the culture of competitive individualism, which in its decadence has carried the logic of individualism to the extreme of a war of all against all, [and] the pursuit of happiness to the dead end of a narcissistic preoccupation with the self." Critics promptly judged Lasch's work a jeremiad (albeit a best-selling one), an erudite but extreme lament about the state of the culture that was astute in pointing out the development of certain tendencies among Americans but far too pessimistic about the future of liberalism. Those optimists assessed the American and found an individualistic, competitive specimen who had, in recent years, become appropriately more attuned to the need for self-care and an enriched self-esteem. Lasch, by contrast, looked at the American and found him peering into a mirror, anxiously rating the figure staring back at him and wondering how to combat the inexplicable emptiness he felt. As for the causes of this new narcissism, Lasch placed the blame on "quite specific changes in our society and culture--from bureaucracy, the proliferation of images, therapeutic ideologies, the rationalization of the inner life, the cult of consumption, and in the last analysis from changes in family life and from changing patterns of socialization." Although it is true that Lasch allowed himself to make sweeping generalizations about the quality of the American character, The Culture of Narcissism has nevertheless remained one of the more useful critiques of late twentieth-century American life and has outlived the feverish criticism it once spawned. The book challenged many of the core assumptions that elites and non-elites blithely accepted as facts at the time: that human beings would continue to devise more sophisticated means of controlling nature and its effects (such as aging) through technology and science, and that these would bring inordinately positive results; that democracies inevitably continue to progress in their development rather than stall or regress; that extremes of individualism and secularism would free people from the supposedly restrictive confines of family, religious, social, and political obligation. Such sentiments were hardly new, of course, but Lasch outlined the weaknesses of them keenly. Lasch subtitled his book, "American Life in an Age of Diminishing Expectations," and it is useful to question just how far the diminishing of expectations he first identified has gone. Looking back on The Culture of Narcissism more than 25 years later, what did Lasch get right and what did he get wrong? What developments did he presciently identify and which ones did he miss? In the interim decades, has Lasch's narcissist given way to a new type of American character and, if so, what are that character's defining traits? A descriptive tour revisiting some of Lasch's themes--especially the transformation of the family--suggests that the narcissism Lasch described has not disappeared. It has simply taken on a different and in some ways more exaggerated form. 
The cult of therapy TODAY, A BOOK about the vagaries of the American character might still have a great deal to say about narcissism, but its subtitle would likely point to something other than diminishing expectations. It would, perhaps, document "Life in the Age of the Overpraised American," for praise (and its kin, attention-seeking) is our common cultural currency. If, in the twentieth century, "character" gave way to "personality," as Lasch and others such as Richard Sennett and Anthony Giddens argued, then in the twenty-first century "personality" exists only if it is broadcast, rated, praised and consumed by as many people as possible--put on display for strangers as well as intimates. …

Journal Article
TL;DR: America's image problem is especially acute in the Middle East and among predominantly Muslim populations, as discussed by the authors; to the extent that the behavior and policies of foreign governments are affected by the behavior and attitudes of their citizens, public diplomacy may affect governments by influencing their citizens.
Abstract: AMERICA HAS AN image problem. While the problem is serious, it is complicated by more variation than is usually ascribed to it. For example, according to the Pew Global Attitudes Survey of June 2005, the "U.S. image [is] up slightly, but still [is] negative." This variation is further reflected by the fact that in two of the world's potentially most important triangular relationships--namely, those between China, Japan, and the U.S. and between India, Pakistan, and the U.S.--it is the United States that is regarded as most friendly by the other two members of each triad. America's image problem is especially acute in the Middle East and among predominantly Muslim populations. Recent polls highlight the depth and breadth of the animus. In 2002, Gallup conducted a poll of nearly 10,000 residents in nine Muslim countries. By an average of more than 2:1, respondents reported an unfavorable view of the United States. The prevalence of an unfavorable view in Iran is unsurprising because that country has had an adversarial relation with the United States for more than 20 years. More troubling are the results from ostensible allies. Only 16 percent of respondents in Saudi Arabia, supposedly one of America's long-standing allies in the region, held a favorable view, while 64 percent reported an unfavorable view. Results from Kuwait were even more disconcerting. In a country that the United States waged war to liberate a decade earlier, only slightly more than a quarter of those polled expressed a favorable view of the United States. This displeasure cannot be easily dismissed as vague and loose views held by those in remote lands whose attitudes and behavior are immaterial to the U.S. It may not foreshadow calamitous outcomes for the U.S., but it hardly provides reassurance that such outcomes will not ensue. As President George W. Bush plainly stated the task, "We have to do a better job of telling our story." That is the job of public diplomacy. The term "public diplomacy" was first used in 1965 by Edmund Gullion, a career foreign service diplomat and subsequently dean of the Fletcher School of Law and Diplomacy at Tufts University, in connection with establishment at the Fletcher School of the Edward R. Murrow Center for Public Diplomacy. The Department of State now defines "public diplomacy" as "government-sponsored programs intended to inform or influence public opinion in other countries." But it can perhaps best be understood by contrasting its principal characteristics with those of "official diplomacy." First, public diplomacy is transparent and widely disseminated, whereas official diplomacy is (apart from occasional leaks) opaque and its dissemination narrowly confined. Second, public diplomacy is transmitted by governments to wider, or in some cases selected, "publics" (for example, those in the Middle East or in the Muslim world), whereas official diplomacy is transmitted by governments to other governments. Third, the themes and issues with which official diplomacy is concerned relate to the behavior and policies of governments, whereas the themes and issues with which public diplomacy is concerned relate to the attitudes and behaviors of publics. Of course, these publics may be influenced by explaining to them the sometimes-misunderstood policies and behavior of the U.S. government. 
Additionally, to the extent that the behavior and policies of foreign governments are affected by the behavior and attitudes of their citizens, public diplomacy may affect governments by influencing their citizens. In this article, we consider how to inform and persuade foreign publics that the ideals that Americans cherish--such as pluralism, freedom, and democracy--are fundamental human values that will resonate and should be pursued in their own countries. Associated with this consideration are two questions that are rarely addressed in most discussions of public diplomacy: Should the U. …

Journal Article
TL;DR: It is not controversial to contend that in the United States constitutional law serves as a decisive battleground in the struggle over freedom's moral and political meaning, as mentioned in this paper; the Constitution, by design the supreme law of the land, shapes the culture of freedom even while leaving substantive judgments about morals and policy to individuals and democratic politics.
Abstract: IT IS NOT CONTROVERSIAL to contend that in the United States, constitutional law serves as a decisive battleground in the struggle over freedom's moral and political meaning. It is another matter to assess the impact of the battleground on the battle, to clarify the current balance of power, and to anticipate the battles to come. By design, the American Constitution is the supreme law of the land. Because it is a liberal constitution, one whose first purpose is to protect individual freedom, the supreme law of the land avoids taking a stand on the supreme issues. It does not aim to instruct people on the virtues, or the content of happiness, or the path to salvation. That's not because it supposes that virtue is irrelevant, happiness has no content, or salvation is a delusion. Rather, the Constitution presupposes that the people, as individuals and through the various associations and groups they form, will pursue these goods. And it lays down a framework within which we, as a people, can maintain a society where each has the liberty to pursue, consistent with a like liberty for others, virtue, happiness, and salvation in the way each regards as fitting. This constitutional framework consists of the enumeration of government powers and the elaboration of individual rights. It establishes minimum requirements and imposes outer boundaries on state action and personal conduct while largely leaving substantive judgments about morals and policy to individuals and democratic politics. Accordingly, as Alexander Bickel dryly observed more than 40 years ago in The Least Dangerous Branch, to say of some law or action or institution that it is constitutional is not to offer very high praise. For the Constitution permits much--from those in as well as out of office--that is foolish, vulgar, and degrading. Yet the enshrinement in the supreme law of the land of a large latitude for the exercise of individual freedom has consequences. It cannot but give direction to our moral life, incite and inspire habits and hopes, inform our sense of what is possible and of what is necessary, and instruct our understanding of what we owe others and what we owe ourselves. To recognize the role of constitutional law in establishing a culture of freedom takes nothing away from the formative role played by economic life, popular entertainment and the arts, friendship and family, love and war, religious faith and faith in reason. Our opinions about freedom, as well as our capacities to enjoy its blessings and maintain its material and moral preconditions, are formed by many forces. The supreme law of the land, however, is of special interest. By establishing authoritative limits, by proclaiming, with the backing of the coercive power of the state, what is forbidden, what is permitted, and what is required, it creates comprehensive background conditions for, and sets a tone that reverberates throughout, all spheres of our lives. Between progress and preservation BY AND LARGE, since Marbury v. Madison (1803), when it settled the matter, the Supreme Court has been understood to have principal--though in our separation of powers system not exclusive or ultimate--responsibility under the Constitution for saying authoritatively what the supreme law of the land is. 
Yet most of the 80 to 90 formal written opinions the Court delivers each year involve technical issues which, when they are noticed at all by the public at large, do not excite much enthusiasm or cause much consternation beyond the confines of the parties involved. Nor do they have much discernible impact on how we experience or think about freedom. Of those cases that, because of the morally and politically fraught issues at stake, do capture the public's attention, a preponderance arise under the Fourteenth Amendment. And the most morally and politically fraught of these concern abortion, which involves a contest over the interpretation of the Fourteenth Amendment's due process clause, and affirmative action, which involves a contest over the interpretation of the Fourteenth Amendment's equal protection clause. …

Journal Article
TL;DR: As discussed in this paper, the dominant view in the legal academy holds that international law has an identifiable content and that its content corresponds to a progressive interpretation of government's obligations at home and abroad; Goldsmith and Posner are among the critics who challenge this liberal internationalist consensus.
Abstract: ANNE-MARIE SLAUGHTER. A New World Order. PRINCETON UNIVERSITY PRESS. 341 PAGES. $29.95 JACK L. GOLDSMITH AND ERIC A. POSNER. The Limits of International Law. OXFORD UNIVERSITY PRESS. 262 PAGES. $29.95 JEREMY A. RABKIN. Law Without Nations? Why Constitutional Government Requires Sovereign States. PRINCETON UNIVERSITY PRESS. 350 PAGES. $29.95 AMONG AMERICAN law professors, international law became in the 90s and continues to be today what American constitutional law was in the 70s and 80s--the fashionable front line for advancing progressive social change. Yet even more than constitutional law, international law's sources and authority are open to dispute. Even more than constitutional law, international law has an ineliminable and robust political dimension. And even more than constitutional law, international law invites an appeal to debatable moral principles in the controversies that arise under it. Despite these vexing features, the dominant view in the legal academy--which closely resembles the consensus among European elites and is associated with the European Union's self-understanding--is that international law has an identifiable content and that its content corresponds to a progressive interpretation of government's obligations at home and abroad. The view is theory-driven and flies commonly under the flag of liberal internationalism. According to the liberal internationalists, a good portion of the structure and content of international law can be derived from reflection on our common humanity or, more precisely, our nature as free and equal rational beings. Such reflection generates an increasingly dense list of human rights that apply to all states everywhere; favors the strengthening of international institutions--such as the International Court of Justice, the International Criminal Court, and the UN General Assembly and Security Council--to promote these rights; seeks an increased role for multilateral initiatives; and applauds the growing role of transnational nongovernmental organizations. In the United States, the liberal internationalist view draws support from the writings of America's preeminent political theorist, John Rawls. In Europe, it gains intellectual heft from Germany's foremost philosophical voice, Jurgen Habermas. Both theorize about the principles under which rational individuals, freed from partiality and prejudice, would choose to live and from which they can derive binding laws and equitable public policy. To be sure, international human rights lawyers are less likely to invoke the abstractions of Rawls and Habermas than they are to look to developing state practice, or the achievements of international institutions and the fruits of diplomacy, as evidence of what international law requires. Nevertheless, it is theory--or, perhaps more accurately, it is a moral and political conception to which Rawls and Habermas give theoretical expression--that determines for the scholars and jurists which examples of state practice, international institutions, and diplomacy they will appeal to as evidence of the structure and content of international law. Critics raise a number of serious objections. 
First, officials of international institutions (to say nothing of NGOs) charged with promulgating international law lack democratic accountability: Either they come from democracies but operate at several levels of remove from voters or, far worse, they come from autocracies in which the people whom they supposedly represent have never had a chance to vote for them in free and fair elections. Second, as most international institutions--possessing neither police force nor military--lack the capacity to enforce their rulings and resolutions, their legal pronouncements are impotent and make a mockery of the rule of law. Third, international institutions rely on the dangerous misconception that individuals do, or will come to, place a premium on global citizenship, and that states do, or will come to, place their obligations under international law and to global norms of justice ahead of their own national interest. …

Journal Article
TL;DR: In this paper, the author defines marriage as a society's normative institution for both sexual activity and the rearing of children, and argues that a society will be able to govern itself with a smaller, less intrusive government if that society supports organic marriage rather than the legalistic understanding of marriage.
Abstract: MARRIAGE IS A naturally occurring, pre-political institution that emerges spontaneously from society. Western society is drifting toward a redefinition of marriage as a bundle of legally defined benefits bestowed by the state. As a libertarian, I find this trend regrettable. The organic view of marriage is more consistent with the libertarian vision of a society of free and responsible individuals, governed by a constitutionally limited state. The drive toward a legalistic view of marriage is part of the relentless march toward politicizing every aspect of society. Although gay marriage is the current hot-button topic, it is a parenthetical issue. The more basic question is the meaning of love, marriage, sexuality, and family in a free society. I define marriage as a society's normative institution for both sexual activity and the rearing of children. The modern alternative idea is that society does not need such an institution: No particular arrangement should be legally or culturally privileged as the ideal context for sex or childbearing. The current drive for creating gay unions that are the legal equivalent of marriage is part of this ongoing process of dethroning marriage from its pride of place. Only a few self-styled conservative advocates of gay marriage, such as Andrew Sullivan and Jonathan Rauch, seem to understand and respect the social function of marriage. Marriage as an institution necessarily excludes some kinds of behavior and endorses other kinds of behavior. This is why the conservative case for gay marriage is so remarkable: It flies in the face of the cultural stampede toward social acceptance of any and all sexual and childbearing arrangements, the very stampede that has fueled so much of the movement for gay marriage. This article is not primarily about gay marriage. It isn't even about why some forms of straight marriage are superior to others. Rather, the purpose of this article is to explain why a society, especially a free society, needs the social institution of marriage in the first place. I want to argue that society can and must discriminate among various arrangements for childbearing and sexual activity. The contrary idea has a libertarian justification in the background: Marriage is a contract among mutually consenting adults. For instance, libertarian law professor Richard Epstein penned an article last year called "Live and Let Live" in the Wall Street Journal (July 13, 2004). In it, he treated marriage as a combination of a free association of consenting individuals and an institution licensed by the state. But the influence of the libertarian rationale goes far beyond the membership of the Libertarian Party or the donor list of the Cato Institute. The editors of the Nation, for instance, support gay marriage but do not usually defend the sanctity of contracts. This apparent paradox evaporates when we realize that the dissolution of marriage breaks the family into successively smaller units that are less able to sustain themselves without state assistance. Marriage deserves the same respect and attention from libertarians that they routinely give the market. Although I believe life-long monogamy can be defended against alternatives such as polygamy, it is beyond the scope of a single article to do so. My central argument is that a society will be able to govern itself with a smaller, less intrusive government if that society supports organic marriage rather than the legalistic understanding of marriage. 
A natural institution LIBERTARIANS HAVE EVERY reason to respect marriage as a social institution. Marriage is an organic institution that emerges spontaneously from society. People of the opposite sex are naturally attracted to one another, couple with each other, co-create children, and raise those children. The little society of the family replenishes and sustains itself. Humanity's natural sociability expresses itself most vibrantly within the family. …

Journal Article
TL;DR: The author argues that certain structural conditions threaten democracies in ways that cannot be overcome simply by a desire for self-rule, challenging democratization advocates' implication that no societal attributes are necessary preconditions for stable democracy.
Abstract: AN AMBITIOUS STRATEGY of democracy promotion is poised to be a major pillar of U.S. foreign policy for many years after 9/11, just as Cold War containment, trade liberalization, and development assistance were pillars of American policy in the decades after 1945. The strategy of democratization must begin with the moral proposition that "the call of freedom comes to every mind and every soul," as President Bush said in his second inaugural address. But if the strategy is to succeed, we have to ask and answer some hard questions about what obstacles exist to achieving stable democracies and how they can be overcome. That the strategy faces challenges is not doubted, least of all by some of its leading advocates. Bush acknowledged "many obstacles" to democratization and called it the "concentrated work of generations." British Prime Minister Tony Blair has said that "democracy is hard to bring into countries that have never had it before." Even Natan Sharansky, author of a relentlessly optimistic appeal for democratization, says that in places like Iraq, democracy faces "a very difficult transitional period." (1) But these champions of democratization emphasize obstacles to transitions to democracy rather than obstacles to the stability of democracies afterward. Bush and Blair and authors like Sharansky and Joshua Muravchik repeatedly reject the notion that fully functioning democracies may face more structural obstacles even after they are inaugurated. They especially reject two long-standing claims: that stable democracy requires certain cultural preconditions and that stable democracy is possible only above certain per capita income levels. There are, indeed, solid grounds for rejecting both: Several democracies have endured in what are, by the standards of these claims, inhospitable cultural and economic contexts. But more often than not, the reasoning of the democratization advocates goes farther, implying that no societal attributes are necessary preconditions for stable democracy. Sharansky, for example, sweepingly rejects the "idea that certain peoples are incapable of democratic self-rule" and the notion that "there are certain cultures and civilizations that are not compatible with democracy." (2) Consistent with this, while some programs of the National Endowment for Democracy (NED)--the main U.S. entity tasked with promoting democracy--also seek to strengthen existing democracies, most recent U.S. policies are designed to help tip countries from the authoritarian category to the democratic. (3) That tipping is seen as the biggest challenge. These advocates offer a powerful justification for their optimism: the universal hunger for liberty. President Bush's letter introducing his 2002 National Security Strategy proclaimed that "People everywhere want to be able to speak freely; choose who will govern them; worship as they please." At a November 12, 2004 press conference, Bush said he believed that successful democratization among Palestinians "can happen, because I believe people want to live in a free society." Standing at his side, Blair said that "given the chance, [Iraqis will] want to elect their leaders. Why wouldn't they? I mean, why would they want a strong-arm leader who's going to have the secret police, no freedom of speech, no free press, no human rights, no proper law courts? The people want the freedom." 
The NED's "Statement of Principles and Objectives" states that the idea of democracy is "intrinsically attractive to ordinary people throughout the world ... an ideal that billions of people in all parts of the globe revere and aspire to." Sharansky says succinctly that "all peoples desire to be free." (4) These champions seem to be saying that where there is this much will, a way will be found to create stable representative institutions--indeed, that will may be the way, especially once people are offered the opportunity. But there are compelling reasons to believe that certain structural conditions threaten democracies in ways that cannot be overcome simply by a desire for self-rule. …

Journal Article
TL;DR: The Eighth Amendment is a jurisprudential train wreck, as mentioned in this paper: the Supreme Court's case law has left the amendment without coherent meaning, and neither the Court's "evolving standards of decency" test nor Scalia's static originalist alternative offers a faithful account of what "cruel and unusual" requires.
Abstract: THE EIGHTH AMENDMENT is a jurisprudential train wreck. Its proudly humane language banning "cruel and unusual punishments" may remain among the Bill of Rights' most famous sound bites, but nobody today has the faintest clue what it means. The reason is as simple as it is sad: The Supreme Court's case law has left the amendment without coherent meaning. No principle guides its reach. No methodology solemnly pronounced in any case do the justices predictably follow in the next. A punishment upheld today can be, without alteration, struck down tomorrow with no justice even admitting that his or her mind has changed. The justices no longer even pretend to examine whether a punishment offends the amendment's textual prohibition. Instead they apply perhaps the single most impressionistic test ever devised by the court: whether the challenged practice has run afoul of "the evolving standards of decency that mark the progress of a maturing society." (1) Unsurprisingly, nine judges of wildly different politics, temperaments, and backgrounds do not generally agree on the standards or the methodology for assessing society's maturation, much less its substance. As a consequence, more than two centuries after its incorporation into the Constitution, the amendment has been rendered nothing more than a vehicle to remove from the policymaking arena punitive practices that offend a majority of the court at any moment in time. The train wreck does not end there. Normally, when the court runs a major doctrinal area off the rails, a cogent line of dissent over time helps rationalize the errant line of cases by offering a more legally faithful, a more constitutionally stable, or simply a more sensible alternative. The Eighth Amendment has not proven so lucky. To be sure, the court's conservative flank--led by Justice Antonin Scalia--has dissented from its emerging Eighth Amendment jurisprudence and has offered a compelling critique. It has even proposed a principled alternative--at the core of which lies the premise that the amendment's protections are static and contain no evolutionary dimension whatsoever. As Scalia once poetically declared, "the Constitution that I interpret and apply is not living but dead--or, as I prefer to put it, enduring. It means today not what current society (much less the Court) thinks it ought to mean, but what it meant when it was adopted." (2) In reality, however, this principle is not nearly as self-evident, at least in the context of the Eighth Amendment, as Scalia's bombastic rhetoric would have one believe. It is, rather, somewhat implausible as a textual matter, uncertain as a historical matter, and utterly at odds not only with the court's jurisprudence during its recent period of intellectual incoherence but with its entire century-long history of interpreting the amendment altogether. Moreover, Scalia's reading would, in effect, render a major plank of the Bill of Rights a dead letter that protects Americans only against those punishments that are politically unthinkable anyway. The Eighth Amendment is thus trapped in a shouting match between the entirely inconstant and the most foolish of consistencies. This stalemate by no means flows inexorably from some inherent defect in the amendment itself. 
Though its specific language presents some unique challenges, the text of the Eighth Amendment is no vaguer than the Fourth Amendment's requirement that searches and seizures be "reasonable" or the Fifth Amendment's demand that an individual's life, liberty, and property be secure from government in the absence of "due process of law." Yet in contrast to the Fourth and Fifth Amendments, where generations of case law have put meat on these rather bare constitutional bones, the Eighth Amendment's key terms--"cruel" and "unusual"--remain almost entirely undefined. In their zeal to unravel how society's standards of decency have evolved--or to snipe at how the court has done so--both sides in the debate seem to have forgotten what the words of the amendment actually say. …