
Showing papers in "Policy Review in 2001"


Journal Article
TL;DR: The human genome project, in the view of many, pounded the final coffin nail in place -- race was at long last dead -- but the authors argue that race does have biological dimensions, and that if it is regarded solely as a social construct, physicians may forfeit opportunities to enlarge their medical treatment repertoire.
Abstract: ON JUNE 26, 2000, the White House announced to the world that the human genome had been sequenced (with the final, polished version due perhaps by 2002). Though the precise functions of all our genes have yet to be deciphered -- Nobel laureate David Baltimore foresees a "century of work ahead of us" -- it is clear that we are on the threshold of knowledge that could revolutionize the way we predict, diagnose, and treat disease. The momentous discovery was lauded for something else as well: It supposedly laid to rest the idea that race is a biological category. "Researchers have unanimously declared there is only one race -- the human race," said the New York Times in an article headlined "Do Races Differ? Not Really, DNA Shows." Much heralded was the finding that 99.9 percent of the human genome is the same in everyone regardless of race. "The standard labels used to distinguish people by 'race' have little or no biological meaning," claimed the Times. Said Stephen Jay Gould, evolutionary biologist at Harvard: "The social meaning [of race] may finally liberate us from [that] simplistic and harmful idea." That point has found its way into the rhetoric of politicians. As former President Clinton has said, "in genetic terms all human beings, regardless of race, are more than 99.9 percent the same. . . . The most important fact of life on this earth is our common human ancestry." We were reminded by former GOP vice presidential candidate Jack Kemp that "the human genome project shows there is no genetic way to tell races apart. For scientific purposes, race doesn't exist," he averred. In Hollywood, too, notice was taken. When a television talk show host asked actor Rob Reiner about the sparring his character did with Archie Bunker on the long-running television program "All in the Family," Reiner (aka "Meathead") explained that Archie's signature bigotry stemmed from ignorance. "We are all the same, though," Reiner said, "the human genome project taught us that." The human genome project, in the view of many, helped pound the final coffin nail in place: Race was at long last dead. It is noble, of course, to celebrate spiritual kinship within the family of man. And now even to suggest that race has biological meaning, it seems, can be something close to professional suicide. The mere mention of race and biology together sends many physicians and scientists scrambling to protest (too much) against a possible connection. The facts, however, paint a more complex picture, one with clinical implications: race does have biological dimensions, and if we regard it solely as a social construct, we may forfeit opportunities to enlarge our medical treatment repertoire.

Wanting it both ways

THE SENTIMENTS FUELING the impulse to regard race as an arbitrary biological fiction should be taken seriously, especially given the shameful history of race and biology. People properly shudder at the memory of the Tuskegee Syphilis Experiment, in which hundreds of black sharecroppers were never told they had the disease nor offered penicillin for its treatment. Many now worry that genetic determinism might be used as the sole explanation for social differences between races or, worse, as justification for new eugenics movements and programs of ethnic cleansing. Nevertheless, the corrective is not obfuscation or outright censorship of inquiry. It is a clear-eyed understanding of the intertwining of race and biology.
Denying the relationship flies in the face of clinical reality, and pretending that we are all at equal risk for health problems carries its own dangers. We were reminded last May of the controversy about the role of race in biomedical research when the New England Journal of Medicine (NEJM) published two papers describing the responses of black and white patients to medications used to treat heart disease. The researchers sought to compare racial groups in light of the well-documented observation that, on average, African Americans with high blood pressure and other cardiovascular conditions do not fare as well as whites when given the same medications. …

21 citations


Journal Article
TL;DR: With the U.S. economy slowing down from a decade of unprecedented growth and even threatening to enter a recession, the economic outlook for the region as a whole, especially the weak and vulnerable states of Southeast Asia, is grim.
Abstract: Why There Is No Substitute for U.S. Power

AS THE BUSH ADMINISTRATION was brusquely reminded with events surrounding the forced landing of a United States reconnaissance aircraft on Hainan Island after a collision with a Chinese fighter, it will very likely have its hands full dealing with the Asia-Pacific. Historic animosities and unresolved Cold War disputes, combined with continuing territorial disputes ranging from the South China Sea to the East China Sea, make for an insecure and unpredictable region. In addition, the issue of how to deal with China, a rising power and traditional hegemon in the region, is particularly problematic. Until the East Asian financial crisis of 1997, the conventional wisdom held that the region was set for an extended period of economic growth and tranquil security relations. To even question the notion of the Asia-Pacific as a place of progress, stability, and prosperity was to invite reproach for being out of touch with the reality of what one commentator described as a region characterized by "increased domestic tranquility and regional order." Such sanguine views of regional developments, prevalent in scholarly and diplomatic circles before 1997, clearly failed to appreciate the underlying tectonics of what in fact is a deeply unstable area, at once riven with serious fault lines both between states and within states themselves. As a result of the economic crisis, governments have fallen in Indonesia, Thailand, and South Korea or, as in Malaysia, have come perilously close to the precipice. It is likely that regional fragility is set to increase. A number of analysts have taken comfort in the region's economic performance in 1999-2000, when the chance of further catastrophic decline appeared to have been arrested. The truth, however, is that many economies in the Asia-Pacific are stagnant. The partial and very patchy recovery since 1999 arose not out of any intrinsic resurgence in the Asia-Pacific economy or any fundamental structural reform and improvement in competitiveness. Instead, it has been the continued openness of a booming U.S. economy that has sustained what is essentially a two-year-long "dead-cat bounce" in the Asia-Pacific. The unexpectedly high demand in the United States provided a market for the heavily export-oriented economies of the Asia-Pacific. This factor singlehandedly revived the flagging export industries of the region. With the U.S. economy slowing down from a decade of unprecedented growth and even threatening to enter a recession, the economic outlook for the region as a whole, especially the weak and vulnerable states of Southeast Asia, is grim. With even leaner times ahead, further political instability will surely ensue. In Northeast Asia, China muddles along in a bid to preserve social stability even while attempting to reform its economy in an effort to adapt to the terms to which Beijing acceded in negotiating entry into the World Trade Organization. Meanwhile, in Taiwan and South Korea, the ruling parties each face domestic opposition parties that resist current domestic and foreign policies. On the Korean Peninsula, North Korean leader Kim Jong-Il pursues a policy of strategic blackmail whereby regional fears of war are exploited to sustain a corroding regime. Notwithstanding South Korean President Kim Dae Jung's sunshine diplomacy, Pyongyang shows no real signs of peacefully relinquishing the regime's only bargaining chip, its nuclear and conventional ballistic missile program.
In Southeast Asia, other than Singapore, the core states of the Association of Southeast Asian Nations (ASEAN) are facing various forms of internal dissension that have attended the financial crisis of 1997. Secessionist movements, such as those in Indonesia and the Philippines, or severe domestic political transitions (Indonesia, Malaysia, Thailand, and the Philippines) have devastated the straight-line projections of unrelenting economic growth that were proffered in the early and mid-1990s. …

18 citations


Journal Article
TL;DR: The "precautionary principle" is defined as "erring on the side of safety" or "better safe than sorry" as mentioned in this paper, with the idea being that the failure to regulate risky activities sufficiently could result in severe harm to human health or the environment, and that overregulation causes little or no harm.
Abstract: Why Regulators' "Precautionary Principle" Is Doing More Harm Than Good

ENVIRONMENTAL AND PUBLIC HEALTH activists have clashed with scholars and risk-analysis professionals for decades over the appropriate regulation of various risks, including those from consumer products and manufacturing processes. Underlying the controversies about various specific issues -- such as chlorinated water, pesticides, gene-spliced foods, and hormones in beef -- has been a fundamental, almost philosophical question: How should regulators, acting as society's surrogate, approach risk in the absence of certainty about the likelihood or magnitude of potential harm? Proponents of a more risk-averse approach have advocated a "precautionary principle" to reduce risks and make our lives safer. There is no widely accepted definition of the principle, but in its most common formulation, governments should implement regulatory measures to prevent or restrict actions that raise even conjectural threats of harm to human health or the environment, even though there may be incomplete scientific evidence as to the potential significance of these dangers. Use of the precautionary principle is sometimes represented as "erring on the side of safety," or "better safe than sorry" -- the idea being that the failure to regulate risky activities sufficiently could result in severe harm to human health or the environment, and that "overregulation" causes little or no harm. Brandishing the precautionary principle, environmental groups have prevailed upon governments in recent decades to assail the chemical industry and, more recently, the food industry. Potential risks should, of course, be taken into consideration before proceeding with any new activity or product, whether it is the siting of a power plant or the introduction of a new drug into the pharmacy. But the precautionary principle focuses solely on the possibility that technologies could pose unique, extreme, or unmanageable risks, even after considerable testing has already been conducted. What is missing from precautionary calculus is an acknowledgment that even when technologies introduce new risks, most confer net benefits -- that is, their use reduces many other, often far more serious, hazards. Examples include blood transfusions, MRI scans, and automobile air bags, all of which offer immense benefits and only minimal risk. Several subjective factors can cloud thinking about risks and influence how nonexperts view them. Studies of risk perception have shown that people tend to overestimate risks that are unfamiliar, hard to understand, invisible, involuntary, and/or potentially catastrophic -- and vice versa. Thus, they overestimate invisible "threats" such as electromagnetic radiation and trace amounts of pesticides in foods, which inspire uncertainty and fear sometimes verging on superstition. Conversely, they tend to underestimate risks the nature of which they consider to be clear and comprehensible, such as using a chain saw or riding a motorcycle. These distorted perceptions complicate the regulation of risk, for if democracy must eventually take public opinion into account, good government must also discount heuristic errors or prejudices. Edmund Burke emphasized government's pivotal role in making such judgments: "Your Representative owes you, not only his industry, but his judgment; and he betrays, instead of serving you, if he sacrifices it to your opinion."
Government leaders should lead; or, putting it another way, government officials should make decisions that are rational and in the public interest even if they are unpopular at the time. This is especially true if, as is the case for most federal and state regulators, they are granted what amounts to lifetime job tenure in order to shield them from political manipulation or retaliation. Yet in too many cases, the precautionary principle has led regulators to abandon the careful balancing of risks and benefits -- that is, to make decisions, in the name of precaution, that cost real lives due to forgone benefits. …
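The forgone-benefits arithmetic can be made concrete with a toy comparison. The numbers in the sketch below are entirely invented for a hypothetical technology (they are not drawn from the article); it only illustrates why a ban that "errs on the side of safety" is not automatically the safer choice once prevented harms are counted.

```python
# Toy expected-harm comparison; all figures are invented for illustration.
# The point: banning a technology avoids its direct harm but also
# forfeits the harm the technology would have prevented.

deaths_caused_per_year = 50     # assumed direct harm of adopting the technology
deaths_averted_per_year = 400   # assumed harm it prevents elsewhere

net_deaths_if_adopted = deaths_caused_per_year - deaths_averted_per_year
net_deaths_if_banned = 0        # no direct harm, but nothing averted either

print(f"Adopt: net change of {net_deaths_if_adopted:+d} deaths/year")
print(f"Ban:   net change of {net_deaths_if_banned:+d} deaths/year, "
      f"with {deaths_averted_per_year} preventable deaths forgone")
```

On these assumed numbers, adoption is the genuinely "precautionary" choice -- exactly the balancing of risks against benefits that the passage above says the principle tends to short-circuit.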

17 citations


Journal Article
TL;DR: In the wake of the September 11 terrorist attacks on the United States, observers of Japanese politics were astonished to see the decisiveness with which Prime Minister Junichiro Koizumi acted to lend Japanese support to the U.S. war on terrorism.
Abstract: IN THE AFTERMATH of the September 11 terror attacks on the United States, observers of Japanese politics were astonished to see the decisiveness with which Prime Minister Junichiro Koizumi acted to lend Japanese support to the U.S. war on terrorism. Using remarkably tough language, Koizumi called the attacks -- in which 24 Japanese perished -- "unforgivable" and followed with a seven-point emergency plan committing the Japanese military to support U.S. activities in Afghanistan. Most notably, he pushed forcefully for the dispatch of members of Japan's Self-Defense Forces, largely aboard high-tech Aegis cruisers, to the Indian Ocean to provide intelligence support. These actions stood in stark contrast to decades of Japanese pacifism and public ambivalence about the nation's military stance, and Koizumi certainly expected to meet serious resistance from dovish members of the Japanese government. Perhaps for this reason, the popular, self-styled maverick Koizumi seemed especially eager to push through the debate quickly so that Japan would avoid the kind of harsh international criticism that had attended the nation's tardiness in accepting any role in the Persian Gulf War. Although Japan ultimately made the largest financial contribution of any nation to that conflict, critics both at home and abroad referred derisively to the nation's "checkbook diplomacy" and its unwillingness to risk lives even to support collective security arrangements. Although some argued that Japan would never be a "normal" country, Koizumi's determination and the Japanese Diet's relatively speedy passage of his legislation together indicated that Japan had done the seemingly impossible. But if the war on terrorism really is a new kind of war, experts on Japan may need to rethink what this apparent shift really means. Something is and has been happening in Japanese debates over international security, and we may yet witness a sea-change in the nation's policies toward the security of the Asia-Pacific region. What these recent moves will not do, however, is guarantee a change in Japanese thinking about terrorism and how to confront it. This, more than any specific Japanese failure to support American actions in Afghanistan, may genuinely endanger the alliance in ways that no one in Washington or Tokyo has addressed. If the U.S. war on terrorism expands to the Pacific Rim -- as most analysts now believe it will -- the United States will need and expect support that Japan will find difficult to provide. The nation is not prepared to engage in a genuine counterterrorist campaign, and its eagerness to support the United States may begin to backfire when it becomes clear what a war on terrorism means.

The shape of the security treaty

THE STORY OF the U.S.-Japan Security Treaty is familiar if peculiar. In the aftermath of World War II, the United States Occupation forces in Japan dedicated themselves to two initial tasks: the country's democratization and its pacification. Working with representatives of the Japanese government, Supreme Commander of Allied Forces in the Pacific Douglas MacArthur and his team crafted a new constitution reflecting both goals. Although this constitution, officially adopted in 1947, maintained some of the institutional features of the prewar Japanese government -- especially its parliamentary system and a role for the emperor -- it emphasized democratic rights and liberties and included a new legal commitment to pacifism.
In Article IX of the constitution, Japan "forever renounces the threat or use of force as a means of settling international disputes" and further stipulates that "land, sea, and air forces. . . will never be maintained." In explicit language, the section concludes, "the right of belligerency of the state will not be recognized." This, it seemed, would satisfy the American desire to avoid a future war with Japan. Within a few short years, however, U.S. authorities -- who occupied Japan until 1952 -- added two more goals for the nation, both tied to the developing Cold War with the Soviet Union. …

15 citations


Journal Article
TL;DR: In this article, the authors argue that without strong political parties and political institutions that are accountable and effective, that can negotiate and articulate compromises to respond to conflicting demands, the door is effectively open to those populist leaders who will seek to bypass the institutions of government, especially any system of checks and balances, and the rule of law.
Abstract: Civil Society Can't Replace Political Parties

MAX WEBER ONCE REFERRED to political parties as "the children of democracy," but in recent years civil society, in the new and emerging democracies, has often become the favored child of international efforts to assist democracy. Civil society has been described as the "wellspring of democracy," a romantic, if perhaps exaggerated, claim. The international community has promoted civic organizations, assisted them, and supported their expansion and development, often building on the ruins of discredited political parties. This has been a good and necessary endeavor. Yet the almost exclusive focus on civil society has moved beyond fashion. For some it has become an obsession, a mantra. Increasingly, resources are being channeled to programs that develop civil society to the exclusion of political parties and political institutions such as parliaments. Many private and public donors feel that it is more virtuous to be a member of a civic organization than a party and that participating in party activity must wait until there is a certain level of societal development. There is a grave danger in such an approach. Strengthening civic organizations, which represent the demand side of the political equation, without providing commensurate assistance to the political organizations that must aggregate the interests of those very groups, ultimately damages the democratic equilibrium. The neglect of political parties, and parliaments, can undermine the very democratic process that development assistance seeks to enhance. Without strong political parties and political institutions that are accountable and effective, that can negotiate and articulate compromises to respond to conflicting demands, the door is effectively open to those populist leaders who will seek to bypass the institutions of government, especially any system of checks and balances, and the rule of law.

The civil society boom

IN THE 1980S AND '90S, civil society became the fashionable focus of attention as the changing political landscape created new opportunities for civic groups in countries emerging from dictatorial regimes. This newfound infatuation with civil society can be attributed to a number of factors: the critical role played by civil society -- before real political parties could legally operate -- in leading the charge against totalitarian regimes in Asia and Eastern Europe; the early adverse reaction to political parties by citizens who had experienced single-party systems in many of these countries; and the reaction of those offering support from established democracies who were themselves disillusioned with party systems and were more comfortable placing their hopes in civil society as a means of political and social renewal. Those who embrace the development of civil society as a means of apolitical involvement in the internal politics of a country fail to recognize the limitations of such an approach. In the first instance, civil society groups in new and emerging democracies constantly grapple with what are intrinsically political issues. For example, in the context of monitoring an electoral process or advocating for improved living standards, political parties remain the primary vehicle for political action and the enactment of laws; without engaging them in the process, there can only be limited advancement.
Avoiding the issue of partisan politics in the rush to strengthen civil society runs the risk of undermining representative politics and failing to exploit the real avenues to political influence open to civil society. Examples abound of countries with a strong and active civil society where the weakness or entrenchment of political parties serves to put the entire democratic system in jeopardy. In Bangladesh, despite an abundance of advocacy and citizen action groups, the recurring partisan political stalemate consigns the country and its citizens to abject poverty. …

14 citations


Journal Article
TL;DR: For example, the authors pointed out that the worst insult any elected politician may suffer is being voted out of office, and that no matter how much dread he instills or respect he may inspire, his people do not love him.
Abstract: WHEN IT IS NOT at war, democracy is a comical political system. To many of history's greatest minds, the very idea of allowing the rabble to choose its leaders, who then pander to its wishes, was inherently ridiculous. Compared to the simple elegance of despotism, the stability of baronial rule, or the divinely ordained reign of a monarch, democracy is a messy, muddled method of government. We, the people, know this. Because our leaders emerge from within our own ranks, we feel entitled to insult them mercilessly, especially when they get too big for their boots. And the worst insult any elected politician may suffer is being voted out of office. No matter how much dread he instills or respect he may inspire, his people do not love him. In times of peace, we recognize that our leaders are not only no better than we, but in many cases, worse. We, at least, mind our own business and get on with making an honest living; they, on the other hand, willingly thrust themselves into the maelstrom of politics -- for what purpose? Power? Glory? Money? Fame? We suspect, perhaps wrongly, that whatever the reason, it must be nefarious. Some politicians may be motivated by a Great Vision, others by Moral Fervor, and still others by Fierce Ambition, but one verity remains: They want to clamber to the top of the greasy pole in order to run things and boss other people around. Given that the system is so endearingly ludicrous, the antics of our elected representatives make entertaining viewing. Some commentators rue the disrespect with which politicians are sometimes treated. In turn, they can be accused of not only taking politicians altogether too seriously, but thinking ahistorically.

THE INVENTORS OF modern democratic politics, the British, occupied a great deal of their time mocking its practitioners. The more rambustious and vibrant the politics, the more malicious the mockery. Whereas parliamentarians had once been lumped together as "the Commons," the eighteenth century witnessed the emergence of Tory and Whig political parties possessing distinct platforms. This was a development, coinciding with the explosion in political pamphleteering and newspapers, that bitterly divided the London coffee houses (Jonathan Swift, misanthropic author of the political satire Gulliver's Travels, was a particularly vicious Tory hack) and unleashed an extraordinary partisan rancor. Turn to a typical eighteenth-century caricature and, even in our crass age, one is horrified by the sheer delight artists took in portraying the great politicians defecating, urinating, fornicating, being disemboweled, and suffering from flatulence. Take, for instance, the famous print of the Whiggish Sir Robert Walpole -- who served for more than 20 years as Britain's first prime minister and to whom the phrase "every man has his price" has been, no doubt inaccurately yet justifiably, attributed -- straddling the gate of government. The cartoon is entitled "Idol-Worship, or The Way to Preferment" and depicts a servile place-seeker kissing Walpole's enormous, naked buttocks. By comparison, Doonesbury's "waffle" and "feather" jibes seem thin gruel indeed. Later artists and writers would downplay the scatology and instead focus on grotesquely exaggerating the personal appearance and besmirching the characters of politicians.
By the late nineteenth century, when suffrage had bestowed the vote on most of the male population and parliamentary government was upheld as the epitome of Progress, middle-class publications such as Punch drew in their claws. Disraeli, Salisbury, and Gladstone were portrayed rather harmlessly as (respectively) greasily unctuous, self-satisfied, and eye-glazingly boring. But the British were always fond of their idiosyncratic little democracy, even as they poked fun at its ridiculousness. As Sir Joseph recounts in Gilbert and Sullivan's HMS Pinafore: "I grew so rich that I was sent / By a pocket borough into Parliament." …

14 citations


Journal Article
TL;DR: The value-added assessment described in this paper measures how effective a teacher is in advancing a child's academic progress, and it has been used to evaluate teachers' performance in Tennessee public schools since 1992.
Abstract: AMERICAN SCHOOLS need more teachers. American schools need better teachers. Practically everyone with a stake in the education debate agrees with those two premises. However, there is sharp disagreement as to whether more regulation or less is the way to go. The differences of perspective begin over just how vital to transmitting knowledge a teacher is. No one is more certain about the overriding importance of a teacher in a child's academic progress than Tennessee statistician William Sanders, who has developed a value-added instrument that might revolutionize how good teachers are found and rewarded for productive careers. Speaking before the metropolitan school board in Nashville in January, Sanders risked friendly fire when he disputed the connection much of the education world makes between poverty and low student performance: "Of all the factors we study -- class size, ethnicity, location, poverty -- they all pale to triviality in the face of teacher effectiveness." That flies in the face of a widespread conviction in the education world that poverty is such a powerful depressant on learning that even the greatest teachers may only partially overcome its effects. As Diane Ravitch documents in her recent book Left Back (Simon & Schuster), education "progressives" long have believed that many children shouldn't be pushed to absorb knowledge beyond their limited innate capacities; that they are better off with teachers who help them get in touch with their feelings and find a socially useful niche. But Sanders has volumes of data to back up his contention. While at the University of Tennessee, he developed a sophisticated longitudinal measurement called "value-added assessment" that pinpoints how effective each district, school, and teacher has been in raising individual students' achievement over time. His complex formula factors out demographic variables that often make comparisons problematic. Among other things, he found that students unlucky enough to have a succession of poor teachers are virtually doomed to the education cellar. Three consecutive years of first quintile (least effective) teachers in Grades 3 to 5 yield math scores from the thirty-fifth to forty-fifth percentile. Conversely, three straight years of fifth quintile teachers result in scores at the eighty-fifth to ninety-fifth percentile. The state of Tennessee began using value-added assessment in its public schools in 1992, and Sanders is in demand in many other states where legislators are considering importing the system. The "No Excuses" schools identified by an ongoing Heritage Foundation project -- high-poverty schools where outstanding pupil achievement defies stereotypes about race and poverty -- buttress Sanders' contention that the quality of teaching is what matters most. Consider, for instance, Frederick Douglass Academy, a public school in central Harlem that has a student population 80 percent black and 19 percent Hispanic. The New York Times recently reported that all of Frederick Douglass's students passed a new, rigorous English Regents exam last year, and 96 percent passed the math Regents. The Grades 6-12 school ranks among the top 10 schools in New York City in reading and math, despite having class sizes of 30 to 34. And what makes the difference? "Committed teachers," said principal Gregory M. Hodge -- teachers, he said, who come to work early, stay late, and call parents if children don't show up for extra tutoring. 
The disciplined yet caring climate for learning set by Hodge and principals of other No Excuses schools also is due much credit. Those who believe in deregulation of teacher licensing see in value-added assessment a potential breakthrough. Principals (like Hodge) could hire and evaluate their teachers not necessarily on the basis of credit-hours amassed in professional schools of education but in terms of objective differences instructors make when actually placed before classrooms of children. …
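Sanders's actual instrument (TVAAS) is a longitudinal mixed-model analysis, so the following is only a minimal sketch of the core idea it rests on -- crediting a teacher with students' score gains relative to an expected gain, rather than with raw score levels. All teacher names and scores below are invented for illustration.

```python
# Toy sketch of a value-added calculation. This is NOT Sanders's actual
# TVAAS model; it only demonstrates the underlying idea: compare each
# teacher's mean student gain against the expected gain for the pool.

from collections import defaultdict

# Hypothetical records: (teacher, prior-year score, current-year score)
records = [
    ("Teacher A", 35, 48), ("Teacher A", 60, 71), ("Teacher A", 42, 55),
    ("Teacher B", 38, 41), ("Teacher B", 65, 66), ("Teacher B", 45, 47),
]

# Expected gain: the average gain across the whole comparison pool.
gains = [current - prior for _, prior, current in records]
expected_gain = sum(gains) / len(gains)

# Value-added estimate per teacher: mean gain of that teacher's
# students minus the expected gain.
by_teacher = defaultdict(list)
for teacher, prior, current in records:
    by_teacher[teacher].append(current - prior)

for teacher, teacher_gains in sorted(by_teacher.items()):
    mean_gain = sum(teacher_gains) / len(teacher_gains)
    print(f"{teacher}: mean gain {mean_gain:.1f}, "
          f"value-added {mean_gain - expected_gain:+.1f}")
```

Because each student serves as his or her own baseline, level differences tied to poverty or demographics largely cancel out of the comparison -- the intuition behind the claim that Sanders's formula "factors out demographic variables."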

12 citations


Journal Article
TL;DR: Alternative medicine includes some therapies that have proven value and are consistent with modern science, but it also includes other therapies that may have no value at all and that, if they did have value, would challenge the basic assumptions of science.
Abstract: YEARS AGO, PEOPLE WORE garlic around their necks to protect their health. There may have been a rational basis for this action: Wearing garlic kept others at a distance, thus decreasing one's exposure to infectious disease. But the people who wore garlic did not know this, and what animated them was more superstition than science. As a practicing physician, I see a related phenomenon. I see patients who ingest garlic and other herbal products even though the science supporting the efficacy of these remedies is unclear. Ironically, these patients do so in the name of science. My patients would blush at the idea of wearing garlic in the style of a necklace, but when garlic is crushed up, put into capsules, and then swallowed, they are convinced that they are acting scientifically. This eagerness to swallow what others once wore may some day find justification in science. Then again, it may prove to be nothing more than superstition. Alternative medicine is not a recent phenomenon, and at different times, branches of the movement have enjoyed widespread public support. Homeopathy, for example, was extremely popular in the United States during the second half of the nineteenth century. Ancient Chinese medicine has been popular in Asia for over 2,000 years. What today is called alternative medicine covers a wide range of disciplines, most of which are guided by the belief that the human body has more than just a material reality. Supposedly, the human body has an energy to it that can be guided by external manipulation, much the way that matter and tissues are influenced by chemicals and radiation in allopathic medicine. This manipulation of energy harnesses the inner resources of the body to promote healing. Still, the definition of alternative medicine is just as confusing as the science of alternative medicine. Acupuncture, herbal therapy, and biofeedback are commonly included within that definition, but so also are chiropractic and magnet therapy. The spinal adjustment technique used by chiropractors has been shown in clinical trials to be effective for the treatment of low back pain. Moreover, the theory behind spinal adjustment has links to the biomechanical model of allopathic medicine. On the other hand, not only has magnet therapy failed to show effectiveness in serious clinical trials, its effectiveness might not even be demonstrable within the scientific paradigm governing medicine. Alternative medicine includes some therapies that have proven value and are consistent with modern science, but it also includes other therapies that may have no value at all, and if they did, would challenge the basic assumptions of science. The confusion surrounding alternative medicine is reflected in the political arena, causing deep divisions within both the liberal and conservative camps. Within the conservative camp, libertarians see any governmental regulation of the alternative medicine movement as a violation of individual freedom. Cultural conservatives, on the other hand, are suspicious of the movement's links to anti-Western multiculturalism. Within the liberal camp, progressives like Sen. Ted Kennedy and Rep. Henry Waxman have pushed for greater regulation of the alternative medicine industry in the spirit of consumer protection. Yet multiculturalists want alternative medicine to flourish unimpeded because they see it as a powerful weapon to use against traditional Western ideas. Alternative medicine does not fit neatly into either the conservative or the liberal worldview. 
The conservatives have tax cuts and school vouchers, the liberals have national health insurance and environmentalism -- and almost as if to keep from running afoul of each other, the two camps remain silent on a subject that belongs to neither. This cannot continue. From 1990 to 1997, the amount of money spent by consumers on alternative medicine increased 45 percent. In 1997, Americans spent over $21 billion on alternative medicine, with almost a third of the U. …

11 citations


Journal Article
TL;DR: For example, this paper pointed out that after nine years of independence and tens of billions of dollars in international assistance, Russia is far from having met the expectations of a bright future so widespread in 1991, the time of the fall of communism and the breakup of the Soviet Union.
Abstract: RUSSIA HAS NOT LIVED up to its hype. After nine years of independence and tens of billions of dollars in international assistance -- not to mention voluminous foreign advice -- Russia is far from having met the expectations of a bright future so widespread in 1991, the time of the fall of communism and the breakup of the Soviet Union. Rather, Russia remains a poor, semi-authoritarian country -- a considerable disappointment. Boris Yeltsin's surprise resignation just over a year ago rekindled long-frustrated hopes for rapid improvement. Instead of the ailing and erratic Yeltsin, who appeared to lack both the will and the political muscle to advance a radical reform agenda in his last years in office, Russia would have a younger, more vigorous leader backed by a newly supportive parliament. In fact, on Vladimir Putin's first full day in office as president, President Clinton called the new Russian leader to tell him that he was "off to a good start" and that his appointment was "encouraging for democracy." Secretary of State Madeleine Albright soon said that she was impressed by his "can-do approach." Despite this initial optimism from Clinton administration officials, however, Russia's transformation under Putin has begun to look like one step forward and two steps back. While the country is experiencing modest economic growth, largely attributable to windfalls from high oil prices and a cheap currency, its political system and its foreign policy are increasingly troubling. The "dictatorship of law" proclaimed by the Russian president seems to be taking shape as simply a more effective version of the semi-authoritarian system created by Yeltsin; justice is still dispensed selectively and is used in full force only against political opponents of the regime. Internationally, Moscow seems to be strengthening its ties with former Soviet allies such as North Korea while reviving decades-old efforts to expand and exploit differences between Washington and European capitals. Taking into account these realities, we must ask why Russia has still not met the expectations that so many held for its future. Were our expectations realistic? If not, why not? What should we do?

Great expectations

THAT 1992 -- the first year of Russian independence -- should be a year of high hopes is hardly surprising. After all, the last months of 1991 were enormously exciting: They saw the end of 70 years of the Soviet empire and produced the enduring and heroic image of Boris Yeltsin fighting for freedom atop a tank in front of the Russian parliament building. The fact that the events of 1991 took place just after those of 1989, when communism collapsed in Eastern Europe, contributed to a widespread sense that democracy was sweeping the globe. But by the end of 1992, Russia was plagued with hyperinflation, sharp political conflict, and considerable human suffering. By the end of 1993, Yeltsin had illegally -- by his own admission in his memoir, The Struggle for Russia (1994) -- disbanded the Russian Supreme Soviet and written a new constitution granting vast powers to the country's president, himself. The years 1994 and 1995 brought further troubling developments, most notably Russia's first brutal war in Chechnya and the odious "loans-for-shares" privatization. Yet throughout this period, and well beyond it, great expectations persisted regarding the development of democracy, the market, and "partnership" with the United States. Why did these expectations endure?
Part of the reason is, of course, that Russia was indeed making some progress. Since independence, Russia has held two presidential elections and three parliamentary elections. Each of these elections has been largely free, though most have been far from fair. Moreover, though cynical perspectives on Russia's underdeveloped democracy are widespread, many Russians have come to see elections as an essential component of their government's political legitimacy. …

6 citations


Journal Article
TL;DR: Cheney is poised to play a role unparalleled for a vice president as discussed by the authors, both because of the depth and breadth of his political experience (former White House chief of staff, former defense secretary, former member of the House leadership) and the political climate that awaits him (a 50-50 party split in the U.S. Senate).
Abstract: "VICE PRESIDENT CHENEY to Wield Unusual Power," said the headline on the jump page of a Washington Post story published late last year. The article speculated that George W. Bush's vice president would function as the government's CEO, with the president serving as chairman of the board. "Cheney to Play a Starring Role on Capitol Hill" proclaimed a front page New York Times article a week earlier. "Prime Minister Cheney?" asked the Economist on the cover of its year-end issue. What was going on in the high temples of conventional wisdom as George W. Bush prepared to become the forty-third president of the United States? A certain amount of hype, perhaps. But these stories do reflect the enhanced role Richard Cheney will play in the new administration. Both because of the depth and breadth of his political experience (former White House chief of staff, former defense secretary, former member of the House leadership) and the political climate that awaits him (a 50-50 party split in the U.S. Senate), Cheney is poised to play a role unparalleled for a vice president. How involved in the administration he will be was much in evidence during the transition. But this enhanced status and influence are not entirely a product of the particulars of Cheney's resume. They also reflect the increased power and influence the vice presidency has taken on in the past 50 years. Bush, by his own admission, had this in mind when he selected Cheney as his running mate primarily because of his experience in government. The vice presidency has come a long way since Nelson Rockefeller dismissed it as "standby equipment." Now, vice presidents are senior advisors to the president, sometimes with a policy portfolio of their own, always as an integral part of an administration, and usually as an estimable political figure. By lore and tradition, vice presidents may command little respect. But based on their influence in recent years, they deserve far more. This change has gone underappreciated, though its manifestations are everywhere. Pundits and politicians alike reflect the elevation of the office's status when they speak of a Bush-Quayle or a Clinton-Gore administration. Their counterparts in generations past never saw juxtaposed the names "Hoover-Curtis" or "Truman-Barkley" on anything other than campaign posters. What accounts for the growing importance of the office of vice president? Several factors, including the age of jet travel, the power of television, cold war tensions, growing demands on the president's time -- and, in a compressed period of time, a half dozen presidential illnesses, a presidential assassination, attempts on the lives of others, the resignation of a president, and impeachment. Each of these episodes brought increased attention to the nation's second highest office and the qualifications of the person filling it. Since the office was created, one Out of four vice presidents, whether through election in their own right or through death or resignation, became president. Every vice president elected or appointed since 1952 (Nixon, Johnson, Humphrey, Ford, Mondale, Quayle, Gore), except for two, either became a major party nominee for president or contended for the designation. (One of the two, Nelson Rockefeller, had competed for the GOP presidential nomination before and might have again, had Gerald Ford not appointed him vice president.) Two unsuccessful vice presidential candidates, Henry Cabot Lodge and Edmund S. Muskie, took a stab at their party's presidential nomination. 
A third, Bob Dole, received it. All told, an office once deemed a political backwater has evolved into a recruitment field for presidents. Recent history shows that when presidential nominees select their running mates, they are also designating the "favorite" for their party's nomination four or eight years hence or even beyond. It was for all these reasons that in 2000, both major contenders took more care in the selection of their running mates than their predecessors. …

5 citations


Journal Article
TL;DR: A closer study of how civilian policymakers and military leaders set air strategy reveals a far more complicated relationship than is generally understood either by military leaders or their civilian masters.
Abstract: IN THEIR BOOK Thinking in Time: The Uses of History for Decision-Makers (1986), Harvard professors Richard Neustadt and Ernest May make an important observation. Washington decision makers, and even academics, students, journalists, and the average citizen, "used history in their decisions, at least for advocacy or for comfort, whether they knew any or not." While most of their work concentrates on the question of whether or not decision makers, within the limits of their circumstances, could have done better, it also focuses on how decision makers often misread cases in history and draw inaccurate comparisons and parallels. Munich framed many decisions after World War II. Vietnam has been the military's frame of reference for over two decades, and the past decade has seen the Gulf War used as the antithetical comparison to Vietnam. Whether these analogies are appropriate or not, they are used over and over, often to the detriment of thoughtful reflection. The military itself indulges too often in complacent hindsight, and it has done so again in looking back on the Kosovo air campaign, Operation Allied Force. Much of the debate since Allied Force, especially in military circles and the Air Force in particular, has centered around the dissatisfaction of many commanders with the strategy of the campaign. These commanders are critical of the basic strategy choices made by NATO's leaders, arguing that politicians needlessly hampered the application of a coherent and doctrinally pure air power strategy, thereby risking American credibility and also prolonging the war itself. What is most disturbing about this after-action chastisement is the absence of the appropriate collegiality coupled with civilian primacy that is necessary for both healthy civil-military relations as well as good national policy. Exacerbating this is the military's misreading of both the Vietnam War and the Gulf War. Vietnam is remembered as a case of air power being undermined by civilian control of air operations, with images of President Johnson and Secretary of Defense McNamara on their knees in the Oval Office selecting targets. The Gulf War is remembered as a textbook case of proper civilian noninvolvement, with President Bush, Secretary of Defense Cheney, and others merely standing back while the air planners conducted a lethal and successful strategy. Both of these notions are incorrect, and they are especially harmful because they lead to the subsequent conclusion that politicians should only set objectives, not involve themselves with military plans or scrutinize the conduct of operations. A closer study of Vietnam, Iraq, and Kosovo reveals a far more complicated relationship between civilian policymakers and military leaders in setting air strategy than is generally understood either by military leaders or their civilian masters. The fundamentals of success in air warfare are candor, collegiality, and a common sense of purpose. And it is time to put to rest the unsupportable notion that civilians should only give broad guidance and then stay out of the way.

The criticism

LIEUTENANT GENERAL MICHAEL SHORT, now retired, served as the Air Component Commander during Allied Force. He has publicly decried the strategy of an incremental, gradual escalation, appealing to the president and those above him in the military commands (the regional commanders in chief, or CINCs) to heed the advice of airmen, who best understand how to carry out a campaign.
Just weeks after the end of the war, in an interview with the Washington Post, he declared that "as an airman, I'd have done this a whole lot differently than I was allowed to do. We could have done this differently. We should have done this differently." He further expanded his argument in a speech at the Air Force Association Air Warfare Symposium in February 2000: We need to prepare our politicians as best we can for what is going to happen. …

Journal Article
TL;DR: The European Project aims to create not just a superstate but a superpower that will work to spread its values and concepts of governance on the international level; yet, as discussed by the authors, Europe may be retreating to a model of governance characteristic of the Continent before the period of Reformation, Enlightenment, and Revolution that spawned the United States.
Abstract: The Alarmingly Undemocratic Drift Of the European Union

EVER SINCE THE COLD WAR ended 10 years ago, the nations of Western and Central Europe have rapidly moved to transform the European Economic Community, the "Common Market," into a genuine political union. One after the other, major areas of policymaking responsibility, including important aspects of economic, monetary, social, and legal policies, have been transferred from the nation-states of Europe to the institutions of the European Union (EU). The goal and purpose of this new Europe's leaders are no secret. In a November 2000 speech in Germany, Romano Prodi, president of the European Commission, the EU's principal executive and legislative body, stated that the objective of this "European Project" is not just to create "a superstate but a superpower" -- a superpower that will work to spread its values and concepts of governance on the international level. During this critical period, the United States has continued to endorse European integration, as it has done for the past 50 years. The political traditions of the EU-member states, which include principles of popular sovereignty, the fact that European integration has been accomplished through peaceful means, and decades' worth of Cold War era support by Washington for a stronger, more unified Europe better able to stand up to Moscow, have led American policymakers to continue viewing the European Project as democratic and, by and large, beneficial. This is true even of those U.S. officials and commentators who are otherwise uncomfortable with the prospect of an emerging European power capable of challenging U.S. "leadership" in global affairs, and who oppose many of the EU's policy positions. However, the assumption that the new Europe, or at least the new Europe planned by the EU's current leadership, will continue to share the democratic values of the United States is badly in need of reexamination. Although only time will reveal the truth, there are a number of very troubling indicators suggesting that Europe is not moving towards a unified, democratic state on the American model, a "United States of Europe," but rather retreating to a model of governance characteristic of the Continent before the period of Reformation, Enlightenment, and Revolution that spawned the United States. In this regard, at the heart of the European Project lies the notion of a supranational or "universal" authority, spread across the whole of Europe, very similar to the universalist ideas of the Middle Ages. Moreover, this goal has manifested itself in institutions and assumptions about the role of the citizenry in government that are more characteristic of the Age of Absolutism than of American-style republicanism. If, in the long run, the political ideas of the Reformation and Enlightenment prove to be exceptions to a more permanent and ancient European rule, departures rather than transformations, this will create unique and serious problems for the United States. The American republic has no place, intellectually or politically, in the pre-Enlightenment European world, and while Francis Fukuyama's "end of history" thesis is partially correct -- communism, at least outside of the halls of academe, does not offer a viable ideological threat to democracy -- what model of nontotalitarian governance will ultimately triumph globally is still very much in doubt.
Few Americans would disagree with the proposition that popular legitimacy, accountability, limited government, and the existence of a large sphere of private activities free from government involvement are essential attributes of democracy, necessary for both domestic tranquility and international stability. It appears that few of the EU's leaders would agree, judging by its institutions and their goals. At the same time, Europe has rarely been content, for long, to manage its own affairs without seeking to export its vision of the proper order of things. …

Journal Article
TL;DR: The United States now has the chance to revive U.S. relations with the moderate Arab world and drive radical regimes into a corner; to put U.S.-Russian relations, for the first time, on a stable and positive footing; and to reverse the sharp decline in U.S.-European relations of the past decade.
Abstract: THE DEVASTATING TERRORIST attacks of September 11 on the World Trade Center and the Pentagon have, in a single stroke, transformed the national security debate in the United States. The post-Cold War world is finally over; terrorism has emerged overnight as the new great threat. This threat will either unite or cripple America and its allies. While senior officials cobble together various coalitions to prosecute the anti-terror campaign ahead, an immense opportunity presents itself to the United States. As we fight the war against terrorists, the Bush administration should already be considering crucial ancillary outcomes. The United States has the chance now to revive U.S. relations with the moderate Arab world and drive radical regimes into a corner; to put U.S.-Russian relations, for the first time, on a stable and positive footing; and to reverse the sharp decline in U.S.-European relations of the past decade. Forming coalitions and reviving alliances cannot be the primary goal of American foreign policy, of course. Nor should the United States accept any unreasonable constraints imposed by international coalitions. But in the near term, alliances will serve America, at times in critical ways, in its sustained and far-reaching anti-terror campaign. In the long term, nothing could be more conducive to advancing American interests and promoting global security than to reestablish the credibility of a united West under American leadership. The challenge will be a formidable one. Anti-hegemony had become in recent years the buzzword -- and a strong motivation -- among allies and adversaries alike. "Building a multipolar world" had emerged as a prominent code phrase, whether in Berlin or Beijing, for curbing American influence. Consider Chinese opposition to American missile defense; Russian antagonism to NATO enlargement; EU political ambitions as expressed through the euro. In various ways, to be sure, but in each and every case, at least one prime motive behind these projects and policies was to constrain American power and predominance. But September 11, 2001, has changed everything.

The Arab world -- and Saddam

U.S. RELATIONS WITH moderate Arab countries, and with key allies such as Saudi Arabia, have worsened in recent years. This set of relationships may be the most difficult to mend. In some ways it remains close to the heart, though, of America's problem with terrorism. Since September 11, discussion has begun (again) about the root causes of terrorism. Whispers have also emerged about America's own responsibility, in statements from moderate Arab leaders like Egyptian President Hosni Mubarak to the commentaries of major European papers. "If America wants security," argued an editorial in the prestigious German daily, Süddeutsche Zeitung, "it has to address the concerns of the people of the region... and help solve the Palestinian problem." One prominent Egyptian columnist went so far as to argue that the "Arab-Israeli conflict" should really be seen as "an Arab conflict with Western, and particularly American, colonialism." It's time for Western leaders to insist on political and moral clarity. America must take the lead. Of course, the United States makes its mistakes. It's guilty at times of arrogance, misjudgment, and poor policy choices. Still, Islamic terrorism has never been about Israel, America's support for the Jewish state, American foreign policies, or the effects of globalization.
Rather, it is the wholesale failure of Arab states to modernize and democratize that helps explain why radical Islam has been permitted to grow and spread so extensively -- and why, in fact, American relations with so many Islamic countries have remained poor. America holds no brief against Muslims. The United States fought to save innocent Slavic Muslims of Bosnia from slaughter and destruction. It did the same later on behalf of the Kosovar Albanians. The U. …

Journal Article
TL;DR: Barak's call for separation along the pre-1967 "green line" was supported by 75 percent of Israelis in a recent poll, who favored separation in some form or another, as discussed by the authors.
Abstract: DESPERATE TO REPLACE or resuscitate the Oslo "peace process" during the miniwar last fall and winter with the Palestinians, Israeli Prime Minister Ehud Barak reiterated his call for separation. If seven years of Israeli withdrawals from one national security "red line" after another had not bought peace, then at least separation -- unilateral and quick -- of Jews and Arabs, of Israel from the West Bank and Gaza Strip, would bring quiet. To stimulate cabinet discussion of separation, Barak distributed copies of Haifa University Professor Dan Schueftan's manifesto, Disengagement, to his ministers. By late December a poll showed 75 percent of Israelis (no doubt the figure would have been higher if it reflected only Jewish Israeli sentiment) favoring separation in some form or another. That meant the idea behind Barak's winning slogan in the 1999 campaign, "Us here, them there," remained popular, if Barak himself did not. Shortly before ousting Barak in February's election for prime minister, Ariel Sharon restated his own proposal for unilateral separation (though only as a response to a future unilateral declaration of statehood by the Palestinians). Despite the renewed interest in the subject -- which survives Barak's defeat -- separation along the pre-1967 "green line" neither divides nor conquers. That is because, as the "al-Aksa intifada" confirmed by enlisting the participation of many Israeli Arabs and the vociferous support of even more, the Israeli-Palestinian struggle already has penetrated to within "Israel proper." This expansion feeds on the rapid growth of Israel's Arab population and the deepening of that population's Palestinian national identification. Separation, as discussed by Israeli officials and academics, fails to deal realistically with this changed, but hardly new, paradigm. Last fall, not only were Israelis and Palestinians killing each other across the pre-1967 green line in the West Bank, Gaza Strip, and eastern Jerusalem, but Israeli Arabs and Jews also did likewise inside the 1948 boundaries. Although the numbers were small -- 13 Israeli Arabs killed by Israeli police, one by a Jewish mob, and five Israeli Jews murdered by Israeli Arabs -- the significance was great. The struggle that Israeli Jews had long imagined was between their superior state and an inferior Palestinian Arab movement over a West Bank/Gaza Strip entity has relapsed into its essential pre-1948 condition. Then the Arab and Jewish inhabitants of British Mandatory Palestine west of the Jordan River (Britain unilaterally separated eastern Palestine -- Transjordan -- in 1922) waged an intercommunal fight for dominance. Today, they do so again, the Oslo process and favorable demographic trends having stimulated Arab appetites and solidarity on both sides of the green line. Arafat rejected Barak's unprecedented offer of 95 percent of the West Bank and Gaza Strip and de facto control over eastern Jerusalem at Camp David last summer because he would have had to share Jerusalem, drop the Arab "right of return" to pre-'67 Israel, and declare the conflict over. Simultaneously, the Arabs of Israel, by supporting those of the West Bank and Gaza last fall, also reaffirmed that Jewish claims inside '48 lines are still up for grabs.

Smaller majority, larger minority

DESPITE ITS MANY successes, Israel 53 years after independence remains a Jewish beachhead in the Near East.
Three-fourths of Israel's infrastructure and Jewish population lie within an L-shaped strip running 75 miles from Haifa's northern suburbs to Tel Aviv's southern ones and 35 miles west to east, from Tel Aviv to Jerusalem. This Jewish heartland rarely exceeds nine miles in width. As a result of Jewish immigration and Arab emigration in the first five years after Israel's founding, Jews constituted roughly 87 percent of Israel's population from 1953 through 1967. But then the consistently much higher Arab Israeli fertility rates -- supplemented by a high level of Jewish emigration -- began to close the gap. …

Journal Article
TL;DR: The threats came fast and furious from Russian government officials and nationalist politicians in the early days of the NATO enlargement debate, as mentioned in this paper; the year 2000, not 1999, was the more likely candidate for Apocalypse.
Abstract: THE WORLD DID NOT come to an end in 1999. That it didn't was not a surprise to most of us, the year 2000 being the more likely candidate for Apocalypse. Nevertheless, there were some who expected that in the year NATO accepted new members from the former Warsaw Pact, a contemporary equivalent of the 10 plagues of Egypt would be visited on the transatlantic military alliance. The threats came fast and furious from Russian government officials and nationalist politicians. Former general and governor of Siberia Alexander Lebed warned of Russia's intention to create a military counterbalance to a NATO that included Poland. "A similar precedent was created in Poland in 1939," he said, implying that NATO enlargement was the equivalent of Hitler's invasion of Poland. "The price of that precedent was 50 million lives. We won't get away with only 50 million lives today." Subtlety has never been a Russian forte. Gen. Lebed was just one of many Russian officials to bluster. The buffoonish and insufferable Vladimir Zhirinovsky received a great deal of attention in the early days of the NATO expansion debate until it was discovered in the West that the Russians considered him a joke. Zhirinovsky charmingly threatened to take back the Baltic countries, blustering, "They are standing in the way of our seaports." President Boris Yeltsin, in the days after the fall of the Soviet Union when he was courting Western support, had confirmed "the sovereign right of each state to choose its own method for guaranteeing its security." However, once the countries of the former Warsaw Pact made clear that their preferred method was NATO membership, Yeltsin changed his tune. In 1995, he blamed the Bosnia debacle on NATO enlargement plans in blunt words: "This is the first sign of what can happen. The first sign. When NATO approaches the borders of the Russian Federation, you can say there will be two military blocs. This is the restoration of what we already had. In that case, we will immediately establish constructive ties with all ex-Soviet republics and form a bloc." While Russian anger and frustration at seeing former vassal states voluntarily choose a former enemy alliance are understandable, it is harder to grasp the motivations of those who repeated the Russians' bluster here in the West (although history teaches us that there will always be some who favor the appeasement of bullying, sad lot that they are). One vociferous critic of NATO enlargement was Michael Mandelbaum, a former Clinton advisor now at the Johns Hopkins School of Advanced International Studies. "NATO expansion is the Titanic of American foreign policy, and the iceberg on which it will founder is Baltic membership," he said. Likewise George F. Kennan, the famous architect of U.S. containment policy towards the Soviet Union -- and, one would have thought, an unlikely source for such sentiments -- condemned NATO enlargement as "the most fateful error of American policy in the entire post-cold war era." Jack Matlock, former U.S. ambassador to the Soviet Union, told a Cato Institute conference in May 1997, "If this process is not stopped, we're going to see a NATO that is no longer capable of pursuing the purposes for which it was created because it will be preoccupied watching its own navel and its expanding waistline." The fact is, however, that the Russian government eventually did manage to reconcile itself to the first post-Cold War round of NATO enlargement. To some degree, it had been placated by the NATO-Russia Founding Act of 1997, which gave
Russia certain consultative rights vis-a-vis NATO. In any case, Poland, Hungary, and the Czech Republic, the first three new members since the admission of Spain in 1982, were inducted with due ceremony and without protest at the fiftieth-anniversary NATO summit in Washington in 1999. Then Russia turned its attention to preventing a second round of enlargement and to undermining American plans to build a national missile defense. …

Journal Article
TL;DR: In the U.S. 2000 presidential campaign, the talk was not about what had been achieved with Russia, but rather growing suspicions and finger-pointing, as discussed by the authors, with Russian President Vladimir Putin's government clamping down at home and U.S. President George W. Bush's top advisers initially calling Russia a threat to U.S. interests.
Abstract: Keeping Expectations Realistic IN AUGUST 1991, watching Russian President Boris Yeltsin standing on the tank in defiance of the last-ditch effort of the old Soviet elite to hang onto power and empire, we were euphoric. The old guard was finished, and within months, the Soviet Union, our main adversary of 45 years, was finally placed on the ash heap of history. After the cooperation between U.S. President George H.W. Bush and Soviet leader Mikhail Gorbachev during German unification and then in the Gulf War, it seemed at the beginning of 1992 that a future partnership between Russia and the United States in a post-Soviet era would be relatively easy to achieve. Ten years later, it is not partnership that seems to have defined the past decade, but rather growing suspicions and finger-pointing. In the U.S. 2000 presidential campaign, the talk was not about what had been achieved with Russia. It was about "Who Lost Russia?" With Russian President Vladimir Putin's government clamping down at home and U.S. President George W. Bush's top advisers initially calling Russia a threat to U.S. interests, the start of the second decade of America's relations with post-Soviet Russia is a far cry from the heady days of 1992, regardless of what Mr. Bush saw when he peered into Putin's soul at their first face-to-face meeting in June. There are several reasons that we have arrived at this point. One problem was simply the overblown expectations after the collapse of Soviet communism about what Russia might achieve politically and economically. By all accounts, the past 10 years should be seen as a major victory for freedom when one compares political and economic life to the odious nature of the Soviet period, and yet progress seems woefully inadequate. Second was an underestimation in Moscow of how quickly Russian power would decline, leaving the Russian government on the outside of the agenda-setting process, especially in Europe, which is not what Gorbachev and Yeltsin had envisioned. After all, they had assumed that Russia would be part of any major decision-making on the continent, as Russia had been, for example, during the nineteenth-century Concert of Europe. But these are not the fundamental obstacles to greater U.S.-Russian cooperation. To understand the crux of the problem in U.S.-Russian relations, one needs to remember what factors led to cooperation in previous eras -- because none of these underlying dynamics is as powerful today as it was in the past. Russia still has an interest in integrating into the West, and the United States still has an interest in fostering this integration, but the domestic politics on both sides make this effort more difficult for the next few years than it was in the 1990s. Past cooperation THREE BASIC REASONS brought the United States and Russia closer together at different points during the twentieth century: the fear of growing German and Japanese power in the decades after 1905, which culminated in the creation of the Grand Alliance of World War II; U.S.-Soviet parity in strategic nuclear forces and a mutual fear that a Cold War crisis might escalate to nuclear war, which led to the cooperative competition known as detente in the early 1970s; and the domestic political needs of leaders on both sides, which produced the Clinton-Yeltsin partnership in the 1990s. Today, without an easily identifiable common enemy, fear of nuclear holocaust, or domestic political imperatives on either side, there is no major impetus to greater cooperation.
Prior to the twentieth century, the two countries had had little to do with one another, as there was no real need, apart from isolated episodes such as the United States' purchase of Alaska. But early in the twentieth century, changes in the European and Asian balance of power and the growing U.S. presence on the world stage led to a greater coincidence of strategic interests despite the political and ideological gulf that existed. …

Journal Article
TL;DR: The New York Times weighed in immediately with a stern editorial about "Guns in Young Hands," urging President George W. Bush to take serious action, or at least what the Times means by serious, namely to convene a White House conference on teen violence, as discussed by the authors.
Abstract: IN EARLY MARCH, when the latest teenage killer to make national news opened fire in a high school near San Diego with the deadliest display of such violence since the murders at Columbine two years ago, the usual public scramble for explanations of his behavior followed true to what a sociologist would call a "cultural script." The New York Times weighed in immediately with a stern editorial about "Guns in Young Hands," urging President Bush to take serious action -- or at least what the Times means by serious -- namely to convene a White House conference on teen violence. Reporters from the news services fanned out across the country to interview as many acquaintances of the killer as they could lay cameras on -- most of whom, as has likewise become customary, would earnestly testify that nothing about the boy ever seemed amiss. Also true to form, a disproportionate share of the "blame" for the young killer's actions was deposited not quite at his own feet ("an obviously troubled young teenager," as the Washington Post editorialized and just about all other sources agreed), nor at those of the adults around him, but rather upon his peers -- the bullies who tormented him, the acquaintances who dismissed his threats to "bring the school down" as idle boasts, the fellow drinkers at a party the weekend before who had heard the killer say he had a gun he was taking to school and did nothing about it. In fact, in what appears to have become cultural routine in these matters, just about every detail of the case would turn out to be reported and analyzed at length, with the New York Times even waxing lyrical about a "Joan Didion world of dropouts and tough teenagers." Every detail, that is, but one -- that, as the Washington Post did manage to relay deep into a story on the teenager's clueless friends, "[He] was known as a latch-key child who often ate dinner and slept over at friends' homes." Piecemeal, in various reports and in a handful of opinion columns, other details of the killer's family life and lack of it filled in the blanks. The child of a decade-old divorce, he had resided, loosely speaking, with his father in California. He was a boy left largely to his own devices, who slept elsewhere much of the time, who called his friends' mothers "Mom." He had spent the preceding summer with neither parent, but instead in Knoxville, Md., with the family of former neighbors there. His mother, distraught and horrified by events as any mother would be, was giving her anguished interviews from behind a closed door where she herself lived -- on the other side of the country, in South Carolina. The reason why so little was made of what would once have been judged meaningful facts -- that this latest killer was one more unsupervised, motherless boy -- is not elusive. Of all the explosive subjects in America today, none is as cordoned off, as surrounded by rhetorical landmines, as the question of whether and just how much children need their parents -- especially their mothers. The reasons for this cultural code of silence are twofold. One is the fact that divorce, which is now so widespread that nearly everyone is personally affected by it in one way or another, is so close to qualifying as the national norm that a sizeable majority of Americans have tacitly, but nonetheless decidedly, placed the whole phenomenon beyond public judgment.
[1] Moreover, for all that divorce itself shows signs of leveling off at its current (albeit unprecedented) rate, illegitimacy, for its part, continues to rise. Putting these two facts together -- divorce and out-of-wedlock births -- means that the country is guaranteed a steady quotient of single-parent, which is to say, often absent-parent, homes. The fact that many of the women now heading those homes would choose otherwise if they could means that public sympathy and private compassion, including the desire not to add to their already heavy burden by criticizing any aspect of how they handle it, quite naturally go out to them. …

Journal Article
TL;DR: Tooned-In Menu Team, Inc. produces menus for school cafeterias around the country, each one loaded with ads for products such as Pillsbury cookies or Pokemon, as discussed by the authors.
Abstract: Making Sense of Commercialization LOS ANGELES-BASED Tooned-In Menu Team, Inc., prints 4 million menus each month for school cafeterias around the country, each one laden with ads for products such as Pillsbury cookies or Pokemon. The deal is this: In exchange for getting their menus done up for free, participating schools provide Tooned-In with a ready market for its advertisers. It's just one of a proliferating number of arrangements forged each year between schools (or school boards) and companies. Consider McDonald's All-American Reading Challenge, in which McDonald's gives hamburger coupons to elementary-school students in exchange for their reading a certain number of books. Or Piggly Wiggly's offer to donate money to a school in return for sales receipts -- indicating proof of purchase at the store -- from the school community. Or the American Egg Board's "Incredible Journey from Hen to Home" curricular material, which is provided to schools for free while also promoting egg consumption. Or ZapMe!, which furnishes schools with free computer labs in return for the opportunity to run kid-oriented banner ads on the installed browsers and collect aggregate demographic information on students' web-surfing habits. In each case, the school gets something -- money, equipment, incentives for kids to learn, curricular material -- at a time of shrinking public-education budgets. And the companies also get something: access to a lucrative market. American teenagers spend $57 billion of their own money annually while influencing family expenditures of $200 billion more. As important, these commercial deals enable companies to build brand loyalty in a new generation of consumers. That is why the term "commercialism in the schools," with its controversial connotations, never gets applied to the sorts of universally praised deals -- such as company-sponsored scholarship, internship, or training programs -- in which companies treat students not as future consumers but as future employees. And indeed commercialism in the schools is controversial; it attracts fierce criticism. National organizations such as the Yonkers-based Consumers Union or Oakland's Center for Commercial Free Public Education -- as well as numerous ad hoc parental movements at the local level -- have taken up arms against commercial deals, battling companies in school board hearings and courtrooms. Their concerns: that commercial deals cede control of the education agenda to nonteachers, that they prey upon a captive audience, that they distort kids' and families' consumer choices, that they foster materialistic values, and that kids should not be bombarded by biased, commercially motivated messages in a place where they expect the information disseminated to be objective and confined to pedagogical purposes. The debate has become shrill and polarized. "I was speaking to one of my critics not long ago," says Tooned-In's director of school relations, Frank Kohler. "She doesn't own a car because she's opposed to the use of fossil fuels. She doesn't go to the movies because she resents the commercials. I said to her, 'Lady, you don't represent America!'" Yet there is a big problem with both commercialism's critics and its defenders. Neither side adequately distinguishes -- among the many kinds of deals out there -- between those that are genuinely troubling and those that are not; both paint with a broad brush.
In the eyes of Ernest Fleishman, senior vice president for education at Scholastic, Inc., a New York-based company that sells books and posters through the schools, many of his antagonists "tend to use shotguns" in their attacks. "They draw no distinctions," Fleishman complains, between the "very, very different kinds of deals schools strike with companies." But what kind of distinctions should commercialism's critics be making? As Fleishman himself notes, "very few school districts have guidelines covering these matters. …

Journal Article
TL;DR: It has now been a year since the U.S. Supreme Court cut off the manual recount in the Bush-Gore election, and the seemingly most obvious lesson is that we need better voting machines.
Abstract: IT HAS NOW BEEN a year since the U.S. Supreme Court cut off the manual recount in the Bush-Gore election. In the heat of the battle, it seemed few could dissociate who they thought should win from how they thought the disputed issues could be resolved. Perhaps for many this is still true. But for the rest of us, the events of last year may have acquired a sufficient distance, lengthened by the intervening shock of September 11, to allow us to put down our partisan positions and ask ourselves how we would want these issues resolved in the future. If we learned anything from this election, it was that resolving election issues in midstream, after we know which candidate will benefit from any given resolution, is a recipe for disaster. Right now, while we are still behind the veil of ignorance and do not know which candidate will benefit, we need to resolve as many open issues as we can. Some of the issues will be familiar, since they arose in the actual dispute. But they have been greatly misunderstood, both because the rapid pace of developments did not allow for sufficient explication and because insufficient attention was paid to how the candidates would want the issues resolved if their positions were reversed in the next election. Other issues will be unfamiliar, for they are the issues that would have arisen had the Supreme Court not called a halt, and they may well arise the next time around. And if anything was clear to me as I nervously anticipated those issues at the time in my role as counsel to the Florida House, it was that the issues barreling down the track would have been even more explosive and bitterly contested than the ones we all argued about a year ago. (1) LESSON I Better voting machines Let's begin with the seemingly most obvious lesson: We need better voting machines. But now the nonobvious point: The problem was not, as conventional wisdom thought at the time, with punch card technology. The exhaustive media recounts have confirmed that punch card and optical-scan ballots actually resulted in similar rates of spoilage, defined as the total of undervotes and overvotes. (An undervote is a ballot that registers no vote for a candidate, while an overvote is a ballot invalidated by votes for multiple candidates.) How can that be? Didn't we all at the time hear statistics that seemed to confirm the superiority of optical scanners?
Well, not quite, for two reasons. First, although the focus at the time was on the undervotes that Gore and the Florida Supreme Court wanted recounted, it turns out that there were twice as many overvotes, and they were a bigger problem on the optical-scan ballots. Second, and more important, the counties in which optical-scan ballots seemed to be delivering better results were actually only those counties that counted ballots at the precinct level. Under these systems, the ballot result is registered or rejected by the machine when the voter turns in the ballot. Such a precinct counting machine provides voters with timely feedback that allows them to correct any errors in their ballots. But when punch card ballots were also machine counted at the precinct level, they had a similarly lower rate of spoilage. When either optical-scan or punch card ballots are machine counted at a centralized county location removed from the precinct and the voter, the rate of spoilage is higher, but similar for both types of ballots. The implication is that what really would constitute "better" voting machines are not machines better at counting, but machines that are better at correcting voter errors. This is not at all to trivialize the concern. No matter how smart we may think we are, we all push the wrong elevator button from time to time. Machines that would help us avoid these inevitable errors in voting are important and useful. But correctly understanding the issue should put to rest the misguided bugaboo that certain counting machines were disenfranchising voters. …
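The arithmetic behind this lesson is simple enough to sketch. The following Python fragment is a minimal illustration; the county figures in it are invented stand-ins, not the media-recount data. It computes the spoilage rate -- undervotes plus overvotes as a share of ballots cast -- grouped by machine type and by where the ballots were counted:

    # Minimal sketch of the spoilage-rate comparison described above.
    # All county figures below are hypothetical, for illustration only.
    from collections import defaultdict

    counties = [
        # (machine type, where counted, undervotes, overvotes, ballots cast)
        ("punch card",   "precinct", 1200, 1900, 300000),
        ("punch card",   "central",  2600, 5100, 280000),
        ("optical scan", "precinct", 1100, 1700, 310000),
        ("optical scan", "central",  2500, 5000, 290000),
    ]

    spoiled = defaultdict(int)
    cast = defaultdict(int)
    for machine, counted_at, under, over, total in counties:
        # Spoilage is defined as undervotes plus overvotes.
        spoiled[(machine, counted_at)] += under + over
        cast[(machine, counted_at)] += total

    for key in sorted(spoiled):
        rate = spoiled[key] / cast[key]
        print(f"{key[0]:>12} / {key[1]:<8}  spoilage rate: {rate:.2%}")

On stand-in numbers like these, the grouping shows the pattern the recounts found: spoilage tracks where the ballots are counted, not which technology records them.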

Journal Article
TL;DR: The United States is the dominant global power to a degree and to an extent never before seen in the history of the world, as mentioned in this paper.
Abstract: THE POWER OF THE United States looks very different in the aftermath of September 11. Since the attacks, the earth's major nations -- ranging from the NATO countries to Russia to China to Japan -- and so many others have put aside their differences with the United States. The U.S. and Russia may even emerge, at least for a time, almost as allies and not just against terrorism. There is talk of agreements on nuclear downsizing and missile defense, all but unimaginable before the attacks. So in the face of a new kind of threat, new international alignments may be emerging. Perhaps the world will be reborn. But if a truly new order is to endure, the United States must take a hard look at one of the most discussed and least understood sources of international antagonism towards it, a source that is part myth and part deceptive reality: the idea that the United States is the dominating global nation, powerful to a degree and to an extent never before seen. In particular we must come to grips with the differing forms of U.S. power, how these differing forms are shaping the international landscape, and how they shape the response of others to us. Voices around the world have decried and denounced America's overwhelming presence on the global scene. The question is, why the discomfort? Why the criticism? After all, in its foreign policy the United States is a "status quo" state. It deploys its material power to preserve the present constellation of nation-states within their current boundaries. It does not attempt to coerce favorable trade deals or tribute from others. It has supported and encouraged the move towards nation-neutral, rule-based mechanisms for governing international economic relations, in other words a system that restricts the sway of its own material might. And indeed, in relative terms the United States is a less imposing force today than in the early Cold War years, when in contrast to now Washington took considerable interest in the internal affairs and international posture of even its closest allies. Yet, although the hand of what is called "hegemony" is lighter, perhaps even nonexistent, today, the voices against that hegemony are louder. What are these critics and antagonists thinking? Is it simply that, as one scholar of international relations, William Wohlforth, has put it, "[e]lites will not stop resenting overweening U.S. capabilities"? Perhaps those global elites know or sense something that Americans by and large miss. Although America's active, material power is smaller than it was half a century ago, America also commands a remarkable passive, immaterial power -- what some have dubbed "soft power" but might more accurately be called cultural power. This form of American power has never been greater, nor has it ever expanded more rapidly. And while in active, material ways, the U.S. is a status quo nation, in these passive and immaterial ways it is a highly disruptive, even revolutionary, global force. Hegemon? HOW? WHY? The answers have, perhaps, not been fully sorted through. Instead, from global leaders, scholars, journalists, and activists have come disquiet, disapproval, and denunciation about something generically known as American power. This theme of the unprecedented dominance of the United States on the world stage has been among the enduring topics of international discourse since the collapse of the Soviet Union.
French Foreign Minister Hubert Vedrine has famously termed the United States the world's "hyperpower" and has called for turning Europe into what he characterizes as a necessary counterweight to America's global dominance. But he has hardly been the only one to embrace -- happily or unhappily -- the notion of America's untouchable global sway. On the happy side have been mainly Americans. Clinton administration Secretary of State Madeleine Albright proclaimed the U.S. the international order's single "indispensable" nation, a benign equivalent of Vedrine's formulation. …

Journal Article
TL;DR: In the early 1990s, a book by David Osborne and Ted Gaebler, Reinventing Government: How the Entrepreneurial Spirit Is Transforming the Public Sector, From Schoolhouse to State House (Addison Wesley, 1992), remained on the national bestseller list for over a year as mentioned in this paper.
Abstract: What Government Can Learn from the Market AT LEAST SINCE RONALD REAGAN'S election in 1980, voters have expressed dissatisfaction with the traditional, 60-year-old model of government involvement in both the economy and society. Yet this dissatisfaction has resulted in relatively little in the way of comprehensive government reform. There is a reason for the relative unresponsiveness: regulation is allowed, and sometimes mandated, by federal statutes that poorly reflect market conditions. To significantly improve government performance, statutory reform must precede regulatory reform and must be carefully tailored to the specific market imperfections that government involvement is designed to correct. Instead, the outdated structure of government statutes often impedes the economy from adapting to new conditions. Regulation has always been important to economic and social prosperity. But it often imposes unnecessary social costs, reducing competitiveness and economic growth. Over the past few years, the broader public has begun to demand improved government efficiency. In the early 1990s, a book by David Osborne and Ted Gaebler, Reinventing Government: How the Entrepreneurial Spirit Is Transforming the Public Sector, From Schoolhouse to State House (Addison-Wesley, 1992), remained on the national bestseller list for over a year. The Clinton administration engaged in a well-publicized attempt to reinvent government, led by Vice President Gore's National Performance Review. Within Congress, efforts have focused primarily on regulatory reform, one of the 10 items in the House Republicans' 1994 "Contract with America." Although opposition has frustrated reform, congressional Republicans continue to push for key changes in regulatory procedures. These include increased use of cost/benefit analysis, compensation to property owners for the loss in value of private property due to regulation, risk analysis, peer review of agency scientific findings, and broader legal powers to challenge agency determinations in court. Although almost everyone now acknowledges the need to improve the way government operates, the two parties have taken very different approaches. Democrats have generally preferred a piecemeal approach to reform, taking each regulation or issue separately. This approach, which assumes that government is already doing the right things but needs to do them better, has produced some gains. The Clinton administration ordered each agency to develop plans to reduce unnecessary regulations, become more user-friendly, and adopt a policy of cooperation rather than confrontation with the private sector. The Internal Revenue Service was reorganized. Major regulatory agencies such as the Occupational Safety and Health Administration and the Environmental Protection Agency announced that they were reducing the size of their regulations. Although helpful, these marginal reforms do little to improve the basic structure of the relationship between the government and the private sector. Republicans so far have focused on broader regulatory reform and efforts to constrain the growth of federal power. Their efforts attempted to shift the balance between regulatory agencies and the private sector by increasing the procedural requirements agencies must meet before promulgating regulations that impose substantial costs on the private sector.
Efforts to restrain government spending would force agencies to set priorities among numerous objectives, abandoning those that offer little public benefit. [1] The Republican leadership has succeeded in reforming the law in some fields such as welfare, agricultural commodity programs, banking, and worker training. It has also made major progress in other areas such as bankruptcy and telecommunications. In each case, Congress rewrote the underlying statutes governing federal policy. However, relatively little has been accomplished in other regulatory areas such as the environment, worker safety, food and drug inspection, education, and housing -- largely because efforts have been limited to improving the regulatory process while preserving the existing statutory framework. …

Journal Article
TL;DR: The disconnect between historians and reality grows with each passing day, because historians cannot explain the Great Depression except in terms of capitalism's failure as discussed by the authors. But historians could not have failed so badly in their judgments if economists had been able to explain the Great Depression.
Abstract: "Market Failure" Reconsidered ACCORDING TO NEW DEAL HISTORIANS, capitalism failed in the 1930s. What, then, is it doing flourishing in the United States, Britain, and Europe and taking root in Latin America and China, where it was never previously present? For the past 20 years there has been a large and growing incompatibility between the verdicts of historians and the performance of capitalism. In 1981 the United States reduced tax rates and reined in money growth. For two decades the economy has experienced an economic boom characterized by large income gains, high employment, and negligible inflation. In the U.K. similar reforms introduced by Margaret Thatcher have produced similar results. Heavily socialized countries such as France, Italy, and Spain have abandoned public ownership and privatized their economies. Political regimes in Eastern Europe and the Soviet Union, where a planning model had operated, failed both economically and politically and collapsed. Capitalism has appeared in Latin America and has taken hold of the Mexican, Chilean, and Argentinean economies. Even China's rulers have found it necessary to risk their political power by endorsing markets and private property in order to participate in the global economy. Big government (in terms of its presence in the economy) is everywhere in retreat. A Democratic president, Bill Clinton, declared that "the era of big government is over." Yet the history books and much analysis of public policy during the 1930s remain unadjusted and still proclaim the failure of capitalism. The disconnect between historians and reality grows with each passing day, because historians cannot explain the Great Depression except in terms of capitalism's failure. Historians came to the subject with views colored by the despair of the Depression and by a belief in the efficacy of government action. This belief had been growing ever since Jeremy Bentham introduced it into the English-speaking world in the late eighteenth century. Successes attributed to the fledgling communist government in Russia and the rise of fascism in Italy led many to believe that a government-directed economy was the wave of the future. This belief was kept alive into recent years by claims made for French "indicative planning" and Japanese "industrial policy." But historians could not have failed so badly in their judgments if economists had been able to explain the Great Depression. And economists could not. It was not until 1963, when the National Bureau of Economic Research published Milton Friedman and Anna Schwartz's monumental study, A Monetary History of the United States, 1867-1960, that an economic explanation of the Depression appeared. This was not a propitious time for the authors. The belief in market failure had had three decades to harden into an unchallenged orthodoxy, one reinforced by exaggerated claims for Soviet economic performance under central planning. Moreover, Friedman and Schwartz's analysis was keyed toward explaining inflation and recession in terms of the behavior of monetary aggregates. Their account of the Depression is in one chapter and has to be fashioned out of copious material by the reader's own mind. Although their work and its implications became known to many economists, it appears to have had scant impact on historians or on the public's understanding of the Great Depression. A country that doesn't understand its own history is not well equipped to deal with its future.
The Great Depression was not a failure of the old order. It was the failure of the new order that had just begun. The Federal Reserve is the most powerful institution of a new order that believed in the efficacy of government and its ability to do good. The same Federal Reserve caused the Great Depression when its wise men made a series of cumulative mistakes that contracted the money supply by one-third and wiped out purchasing power in an unprecedented fashion. …
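The scale of that mistake can be made concrete with the textbook quantity equation -- an illustration of the arithmetic, not a formula from the essay itself:

    MV = PY

Here M is the money stock, V its velocity, P the price level, and Y real output. If M falls by one-third while V holds roughly steady, nominal spending PY must fall by roughly one-third as well, since (2/3)MV = (2/3)PY. With prices slow to adjust downward, much of that contraction lands on real output Y rather than on the price level P -- which is what wiping out purchasing power "in an unprecedented fashion" amounts to.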

Journal Article