
Showing papers in "Policy Review in 2012"


Journal Article
TL;DR: The first expression of new thinking was the historic Monterrey Consensus, which was later refined by the Paris Declaration and the Accra Accord, as mentioned in this paper; the foundational principle outlined in those agreements is a move from paternalism to shared responsibility and mutual accountability.
Abstract: MOVEMENT ALONG THE arc of development has been propelled by new worldviews and the creation of institutions to respond to them. In the 19th and 20th centuries, development efforts evolved from colonial expansion to missionary zeal, the aftermath of two world wars, the Cold War, economic self-interest, and postcolonial guilt. Numerous private and public organizations were created to respond to shifting demands, including multilateral and bilateral organizations wholly or partially dedicated to global health. The opening ten years of the 21st century arguably were the decade of global health. Resources increased significantly and many millions of lives were saved and improved. The rapid expansion in global health was part of a broader conceptual movement that created core principles for the use of resources in a new era in development. The first expression of new thinking was the historic Monterrey Consensus, which was later refined by the Paris Declaration and the Accra Accord. The foundational principle outlined in those agreements is a move from paternalism to shared responsibility and mutual accountability. Key to shared responsibility are leadership and strategic direction for the use of resources by the country in which they are deployed ("country ownership"). Achieving country ownership requires good governance, a results-based approach, and the engagement of all sectors of society. Several large global health institutions were born out of the heady days of the opening of this century; they were intended to reflect and be responsive to the demands of a new generation in development. Governments in emerging economies such as Mexico, Thailand, China, and Brazil have developed innovative models and invested significant resources in the health of their people. 
Although governments in many middle-income countries provide a great share of health resources, many of the gains in low-income countries and aspects of gains in middle-income countries have been financed and supported by newly created disease-specific programs including the Global Fund to Fight AIDS, Tuberculosis, and Malaria (the Global Fund); the U.S. President's Emergency Plan for AIDS Relief (PEPFAR) and the President's Malaria Initiative (PMI); and the Global Alliance for Vaccines and Immunization (GAVI). In addition, the Bill and Melinda Gates Foundation and other philanthropists became major investors in global health, and numerous public-private partnerships and product development partnerships were created. The large funding organizations have supported many country-owned programs that have saved and lifted up millions of lives while being the driving force in shifting the benchmark of success in global health--and development--from the amount of money committed to results achieved. Furthermore, health became part of the world's top agendas, including at the G8, the UN Security Council, CARICOM, and the African Union. However, the focus on specific diseases has imposed and exposed fault lines in delivering services in places where many suffer from multiple health issues at the same time or at varying points in their lives. Although studies have shown that HIV interventions have reduced overall mortality and that malaria and immunization programs have reduced childhood mortality in the near term, it seems highly likely that more lives will be durably saved if a person afflicted by different health problems has access to services for all of them. Although there are limited supportive data, we believe it is likely that an integrated approach focused on the health of a person and community is more cost-effective than a silo approach focused on a specific disease or health threat.
Yet, existing global health institutions were designed for specific diseases and have not effectively shifted to embrace a broader vision. The resources currently available could have significantly greater impact with a more rational global health strategy and institutional structure focused on stewardship of available resources to achieve public goods: what is commonly called the global health architecture. …

29 citations


Journal Article
TL;DR: In this article, the authors detail the dimensions of these changes in fertility patterns within the Muslim world, examine some of their correlates and possible determinants, and speculate about their implications.
Abstract: THERE REMAINS A widely perceived notion--still commonly held within intellectual, academic, and policy circles in the West and elsewhere--that "Muslim" societies are especially resistant to embarking upon the path of demographic and familial change that has transformed population profiles in Europe, North America, and other "more developed" areas (UN terminology). But such notions speak to a bygone era; they are utterly uninformed by the important new demographic realities that reflect today's life patterns within the Arab world, and the greater Islamic world as well. Throughout the Ummah, or worldwide Muslim community, fertility levels are falling dramatically for countries and subnational populations--and traditional marriage patterns and living arrangements are undergoing tremendous change. While these trends have not gone entirely unnoticed, no more than a handful of pioneering scholars and observers have as yet drawn attention to them and their potential significance. (1) In this essay we will detail the dimensions of these changes in fertility patterns within the Muslim world, examine some of their correlates and possible determinants, and speculate about some of their implications. The global Muslim population THERE IS SOME inescapable imprecision to any estimates of the size and distribution of the Ummah--an uncertainty that turns in part on questions about the current size of some Muslim majority areas (e.g., Afghanistan, where as one U.S. official country study puts it, "no comprehensive census based upon systematically sound methods has ever been taken"), and in part on the intrinsic difficulty of determining the depth of a nominal believer's religious faith, but more centrally on the crucial fact that many government statistical authorities do not collect information on the religious profession of their national populations. 
For example, while the United States maintains one of the world's most extensive and developed national statistical systems, the American government expressly forbids the U.S. Census Bureau from surveying the American public about religious affiliation; the same is true in much of the EU, in the Russian Federation, and in other parts of the "more developed regions" with otherwise advanced data-gathering capabilities. Nevertheless, on the basis of local population census returns that do cover religion, demographic and health survey (DHS) reports where religious preference is included, and other allied data sources, it is possible to piece together a reasonably accurate impression of the current size and distribution of the world's Muslim population. Two separate efforts to estimate the size and spread of the Ummah result in reasonably consistent pictures of the current worldwide Muslim demographic profile. The first, prepared by Todd M. Johnson of Gordon-Conwell Theological Seminary under the aegis of the World Christian Database, comes up with an estimate of 1.42 billion Muslims worldwide for the year 2005; by that reckoning, Muslims would account for about 22 percent of total world population. The second, prepared by a team of researchers for the Pew Forum on Religion and Public Life, placed the total global Muslim population circa 2009, a few years later, at roughly 1.57 billion, which would have been approximately 23 percent of the estimated population at the time. Although upwards of one fifth of the world's population today is thereby estimated to be Muslim, a much smaller share of the population of the "more developed regions" adheres to Islam: perhaps just over three percent of that grouping (that is to say, around 40 million out of its total of 1.2 billion people).
Thus the proportion of the world's Muslims living in the less developed regions is not only overwhelming, but disproportionate: Well over a fourth of the population of the less developed regions--something close to 26 or 27 percent--would be Muslim, to go by these numbers. …
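The shares quoted above can be checked with simple arithmetic. The sketch below is a back-of-envelope reconstruction: the Muslim-population figures come from the text, while the world total of roughly 6.8 billion circa 2009 is an outside assumption not given in the passage.

```python
# Back-of-envelope check of the population shares cited above.
world_pop = 6.8e9          # assumed world population circa 2009 (not from the text)
muslim_pop = 1.57e9        # Pew Forum estimate cited in the text
more_dev_pop = 1.2e9       # "more developed regions" total, from the text
muslim_more_dev = 40e6     # Muslims in the more developed regions, from the text

# Global Muslim share -- the text says roughly 23 percent
global_share = muslim_pop / world_pop

# Share of the less developed regions that is Muslim -- "close to 26 or 27 percent"
less_dev_pop = world_pop - more_dev_pop
muslim_less_dev = muslim_pop - muslim_more_dev
less_dev_share = muslim_less_dev / less_dev_pop

print(f"global share: {global_share:.1%}")            # ~23%
print(f"less-developed share: {less_dev_share:.1%}")  # ~27%
```

With these inputs the two shares come out near 23 percent and 27 percent, consistent with the figures the essay reports.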

17 citations


Journal Article
TL;DR: In this paper, the authors develop an index of economic policy uncertainty for the U.S. economy and examine its relationship with economic growth; the index reached an all-time peak in August 2011.
Abstract: THE U.S. ECONOMY hit bottom in June 2009. Thirty months later, output growth remains sluggish and unemployment still hovers above 8 percent. A critical question is why. One view attributes the weak recovery, at least in part, to high levels of uncertainty about economic policy. This view entails two claims: First, that economic policy uncertainty has been unusually high in recent years. Second, that high levels of economic policy uncertainty caused households and businesses to hold back significantly on spending, investment, and hiring. We take a look at both claims in this article. We start by considering an index of economic policy uncertainty developed in our 2012 paper "Measuring Economic Policy Uncertainty." Figure 1, which plots our index, indicates that economic policy uncertainty fluctuates strongly over time. The index shows historically high levels of economic policy uncertainty in the last four years. It reached an all-time peak in August 2011. [FIGURE 1 OMITTED] As discussed below, we also find evidence that policy concerns account for an unusually high share of overall economic uncertainty in recent years. Moreover, short-term movements in overall economic uncertainty more closely track movements in policy-related uncertainty in the past decade than in earlier periods. In short, our analysis provides considerable support for the first claim of the policy uncertainty view. The second claim is harder to assess because it raises the difficult issue of identifying a causal relationship. We do not provide a definitive analysis of the second claim. We find evidence that increases in economic policy uncertainty foreshadow declines in output, employment, and investment. While we cannot say that economic policy uncertainty necessarily causes these negative developments--since many factors move together in the economy--we can say with some confidence that high levels of policy uncertainty are associated with weaker growth prospects.
Economic policy uncertainty over time FIGURE 1 PLOTS our monthly index of economic policy uncertainty from January 1985 to December 2011. Before describing the construction of the index, we briefly consider its evolution over time. The policy uncertainty index shows pronounced spikes associated with the Balanced Budget Act of 1985, other major policy developments, the Gulf Wars, the 9/11 terrorist attack, financial scares and crises, and consequential national elections. Policy uncertainty shoots upward around these events, and typically falls back down within a few months. The experience since January 2008 is distinctive, however, in that policy uncertainty rose sharply and stayed at high levels. The last two years are especially noteworthy in this respect. While the most threatening aspects of the financial crisis were contained by the middle of 2009, the policy uncertainty index stood at high levels throughout 2010 and 2011. The index shows a sharp spike in January 2008, which saw two large, surprise interest rate cuts. The Economic Stimulus Act of 2008, signed into law on February 13, 2008, was also a major focus of economic policy concerns in January 2008. The policy uncertainty index jumps to yet higher levels with the collapse of Lehman Brothers on September 15, 2008, and the enactment in early October of the Emergency Economic Stabilization Act, which created the Troubled Asset Relief Program (TARP). A series of later developments and policy fights--including the debt-ceiling dispute between Republicans and Democrats in the summer of 2011, and ongoing banking and sovereign debt crises in the Eurozone--kept economic policy uncertainty at very high levels throughout 2011. So how do we construct our index? We build several index components and then aggregate over the components to obtain the index displayed in Figure 1. Interested readers can consult our 2012 paper for more details. News-based component. …
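The normalize-then-aggregate construction the authors describe can be sketched in outline. Everything below is an illustrative stand-in, not the authors' actual specification: the component series are made up, and the weights and the rescaling to a sample mean of 100 are assumptions (their 2012 paper, "Measuring Economic Policy Uncertainty," gives the real details).

```python
import statistics

def epu_index(components, weights, base_mean=100.0):
    """Aggregate several uncertainty proxy series into one index.

    Each component is standardized (divided by its own standard
    deviation), the standardized series are combined as a weighted
    average period by period, and the result is rescaled so the
    index averages `base_mean` over the sample.
    """
    # Standardize each component to unit standard deviation
    standardized = [
        [x / statistics.pstdev(series) for x in series]
        for series in components
    ]
    # Weighted average across components, period by period
    total_w = sum(weights)
    raw = [
        sum(w * s[t] for w, s in zip(weights, standardized)) / total_w
        for t in range(len(standardized[0]))
    ]
    # Rescale so the sample mean equals base_mean
    scale = base_mean / statistics.mean(raw)
    return [x * scale for x in raw]

# Toy monthly proxies: e.g., a news-frequency count, a tax-expiration
# measure, and a forecaster-disagreement measure (all invented here).
news = [4, 5, 9, 6, 12, 7]
tax_expirations = [1, 1, 3, 2, 5, 2]
disagreement = [2, 3, 4, 3, 6, 4]

index = epu_index([news, tax_expirations, disagreement],
                  weights=[0.5, 0.25, 0.25])
# The spike month (fifth entry) stands out in the aggregated index.
```

Dividing each component by its own standard deviation keeps any single proxy from dominating the average simply because it is measured on a larger scale.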

15 citations


Journal Article
TL;DR: In this article, the authors consider the legal and ethical challenges associated with the development of autonomous robotic systems and their use on the future battlefield, and argue that the United States must act before international expectations about these technologies harden around the views of those who would impose unrealistic, ineffective, or dangerous prohibitions.
Abstract: A LETHAL SENTRY ROBOT designed for perimeter protection, able to detect shapes and motions, and combined with computational technologies to analyze and differentiate enemy threats from friendly or innocuous objects--and shoot at the hostiles. A drone aircraft, not only unmanned but programmed to independently rove and hunt prey, perhaps even tracking enemy fighters who have been previously "painted and marked" by military forces on the ground. Robots individually too small and mobile to be easily stopped, but capable of swarming and assembling themselves at the final moment of attack into a much larger weapon. These (and many more) are among the ripening fruits of automation in weapons design. Some are here or close at hand, such as the lethal sentry robot designed in South Korea. Others lie ahead in a future less and less distant. Lethal autonomous machines will inevitably enter the future battlefield--but they will do so incrementally, one small step at a time. The combination of "inevitable" and "incremental" development raises not only complex strategic and operational questions but also profound legal and ethical ones. Inevitability comes from both supply-side and demand-side factors. Advances in sensor and computational technologies will supply "smarter" machines that can be programmed to kill or destroy, while the increasing tempo of military operations and political pressures to protect one's own personnel and civilian persons and property will demand continuing research, development, and deployment. The process will be incremental because nonlethal robotic systems (already proliferating on the battlefield, after all) can be fitted in their successive generations with both self-defensive and offensive technologies. 
As lethal systems are initially deployed, they may include humans in the decision-making loop, at least as a fail-safe--but as both the decision-making power of machines and the tempo of operations potentially increase, that human role will likely slowly diminish. Recognizing the inevitable but incremental evolution of these technologies is key to addressing the legal and ethical dilemmas associated with them; U.S. policy for resolving such dilemmas should be built upon these assumptions. The certain yet gradual development and deployment of these systems, as well as the humanitarian advantages created by the precision of some systems, make some proposed responses--such as prohibitory treaties--unworkable as well as ethically questionable. Those same features also make it imperative, though, that the United States resist its own impulses toward secrecy and reticence with respect to military technologies and recognize that the interests those tendencies serve are counterbalanced here by interests in shaping the normative terrain--i.e., the contours of international law as well as international expectations about appropriate conduct on which the United States government and others will operate militarily as technology evolves. Just as development of autonomous weapon systems will be incremental, so too will development of norms about acceptable systems and uses be incremental. The United States must act, however, before international expectations about these technologies harden around the views of those who would impose unrealistic, ineffective, or dangerous prohibitions--or those who would prefer few or no constraints at all. Incremental automation of drones THE INCREMENTAL MARCH toward automated lethal technologies of the future, and the legal and ethical challenges that accompany it, can be illustrated by looking at today's drone aircraft. Unmanned drones piloted from afar are already a significant component of the United States' arsenal. 
At this writing, close to one in three U.S. Air Force aircraft is remotely piloted (though this number also includes many tiny tactical surveillance drones). The drone proportion will only grow. Yet current drone military aircraft are not autonomous in the firing of weapons--the weapon must be fired in real time by a human controller. …

10 citations


Journal Article
TL;DR: The first ambassador-at-large for international religious freedom, Robert Seiple, testified before the House International Relations Committee on the State Department's inaugural "International Religious Freedom Report," as discussed in this paper; his remarks also unintentionally anticipated the conflicts and security threats that would confront the United States in the ensuing decade.
Abstract: AT THE END OF THE 20TH century, if one wanted to predict what security threats would preoccupy the United States over the coming decade, a good place to start would have been a little-noticed congressional testimony by a relatively obscure State Department official. On October 6, 1999, Robert Seiple, the first ambassador-at-large for international religious freedom, testified before the House International Relations Committee on the State Department's inaugural "International Religious Freedom Report." Seiple's remarks also unintentionally anticipated the conflicts and security threats that would confront the United States in the ensuing decade. Looking back, the regimes that he identified for severe violations of religious freedom overwhelmingly coincide with those the United States was already at war with or would soon go to war with, or that would emerge as first-order national security concerns. In his testimony, Seiple announced the designation of Burma, China, Iran, Iraq, and Sudan as "Countries of Particular Concern" subject to sanction for severe violations of religious freedom. He also noted that "the secretary also intends to identify the Taliban in Afghanistan, which we do not recognize as a government, and Serbia, which is not a country, as particularly severe violators of religious freedom." Seiple then cited Saudi Arabia and North Korea as two other countries that likely merited designation as severe violators. The State Department eventually did designate both of them, adding North Korea to the list in 2001 and Saudi Arabia in 2004. With the sole exception of Burma, every single one of the countries cited by Seiple was or was to become a major national security concern if not an outright target of military action. (The recent diplomatic opening with Burma might make it the exception that proves the rule, if Burma's inchoate reforms include religious freedom and eventually induce more pacific behavior and distance from North Korea and China.) At the time of Seiple's testimony, the U.S. had only months earlier concluded its participation in the NATO war on Serbian forces in Kosovo, and would maintain a troop presence in Kosovo and Bosnia for years hence. The U.S. had also launched strikes the prior year on Sudan, Afghanistan, and Iraq. Just two years later would come the September 11 attacks, planned by al Qaeda from its base in Afghanistan, with fifteen of the nineteen hijackers citizens of Saudi Arabia. Within months, a U.S.-led force responded by toppling the Taliban. The year 2001 also witnessed the tense confrontation between the U.S. and China over the Hainan Island EP-3 spy plane capture, anticipating the growing concerns over the next decade about China's assertive military expansion and challenge to American interests in the Indo-Pacific. In 2002 North Korea admitted to its advanced nuclear weapons program, and Iran's uranium enrichment efforts came to light as well, which only added to concerns about Iran's longstanding sponsorship of terrorism. And in 2003, the U.S. invaded Iraq. This correlation between religious persecution and national security threats is not just a 21st-century phenomenon of post--Cold War dislocations, but also holds true over the past century. Including World War II, every major war the United States has fought over the past 70 years has been against an enemy that also severely violated religious freedom. Such was the case with Nazi Germany, North Korea, North Vietnam, and Saddam Hussein's Iraq. This characterized other conflicts as well. The Cold War standoff with Soviet communism featured an opponent that engaged in systemic religious persecution. Numerous smaller-scale military interventions, such as Lebanon in 1983, Libya in 1986 and 2011, Somalia in 1993, Bosnia in 1995, and Kosovo in 1999, were also targeted against actors that embraced religious intolerance. These various enemies were religiously and ideologically diverse, from the Nazi Reich cult, to atheistic communism, to Serbian Orthodox nationalism, to Arab Baathism, to Islamist theocracy, to militant jihadism as practiced by Hezbollah or al Qaeda. …

9 citations


Journal Article
TL;DR: The Obamians: The Struggle Inside the White House to Redefine American Power by James Mann and Confront and Conceal: Obama's Secret Wars and Surprising Use of American Power by David Sanger, as discussed in this paper, offer a good starting point for a discussion of the influence of politics on foreign policy.
Abstract: Obama's Foreign Policy JAMES MANN. The Obamians: The Struggle Inside the White House to Redefine American Power. VIKING. 416 PAGES. $26.95. DAVID E. SANGER. Confront and Conceal: Obama's Secret Wars and Surprising Use of American Power. CROWN. 496 PAGES. $28. TO PARAPHRASE Mario Cuomo's dictum about political campaigns, we debate foreign policy in poetry and implement it in prose. The political discourse on foreign policy wrestles with the big questions facing America as a global power. Campaigns serve as crucial settings for those debates, but not the only ones. The broader public square includes journals, op-ed pages, the blogosphere, Twitter--any arena where contrasting viewpoints contend with each other. For foreign policy's prosaic side, the scene of the action is the governmental apparatus that the United States, as a global power, has developed for its dealings with the rest of the world. The standard jaded view of the effect of politics on foreign policy bemoans its distorting influence, which can be counterproductive indeed. Yet politics, properly understood, is integral to the process. Foreign-policy-making is political to the extent that the issues are controversial either in domestic politics or the specialist debates that parallel governmental decisions. The large number of politically appointed officials in the American system, compared with other governments, is a nod to this reality. At the same time, the appointees who "serve at the pleasure of the president" work alongside career officers responsible for every aspect of diplomacy, development, warfare, and intelligence--and the bureaucracy's levers for decision-making and implementation. This foreign policy civics lesson is offered as background for a pair of election-year books about President Obama's national security record thus far. James Mann and David Sanger are among the most sure-handed journalists on that particular beat.
In their accounts of Obama foreign policy, they approach the subject from opposite sides of the political-technocratic divide. The books' subtitles hint at the authors' shared questions of interest but also their divergent styles and methods. Sanger's book is about Obama's "surprising use of American power," whereas Mann focuses on a struggle to "redefine American power." Sanger, who is the New York Times chief Washington correspondent, takes readers more deeply into the workings of national security policy execution; he watches President Obama and his advisers preside over the machinery of statecraft. The revelations that have earned the book buzz as well as controversy--the cyberwarfare used to sabotage Iran's uranium enrichment centrifuges--are the fruits of this method. While Sanger delves into the Obama team's exertion of American power to discern a policy style, James Mann is interested in their deliberate efforts to devise a foreign policy framework matching their view of 21st-century realities. He wants to know whether they could "bring about a new American relationship with the world, one that was less unilateral in approach and less reliant on American military power." Applying the same approach as his earlier book about President George W. Bush's foreign policy team (The Vulcans), Mann focuses on the perspectives and ideas that policymakers bring with them into government. Mann sees Barack Obama as a president confronted with "the legacies of the two Georges." He would inherit not only the problems left by his predecessor (Iraq, Afghanistan, Iran, North Korea), but also the lingering association of the Democratic Party's brand with the antiwar constituency that gave George McGovern the nomination 35 years earlier.
Rather than overcompensating for this stereotype with a me-too hawkish stance, Obama and the rising generation of Democratic foreign policy hands try to approach questions of military force with calm prudence--believing force should be used because of compelling circumstances, not strained arguments or figments. …

7 citations


Journal Article
TL;DR: One of the most widely accepted lessons of the 1962 Cuban Missile Crisis--that the discovery of Soviet missiles in Cuba constituted a stunning American intelligence success--needs to be challenged, the authors argue; the United States has learned the wrong intelligence lessons from the crisis.
Abstract: THE CUBAN MISSILE crisis marks its 50th anniversary this year as the most studied event of the nuclear age. Scholars and policymakers alike have been dissecting virtually every aspect of that terrifying nuclear showdown. Digging through documents in Soviet and American archives, and attending conferences from Havana to Harvard, generations of researchers have labored to distill what happened in 1962--all with an eye toward improving U.S. foreign policy. Yet after half a century, we have learned the wrong intelligence lessons from the crisis. In some sense, this result should not be surprising. Typically, learning is envisioned as a straight-line trajectory where time only makes things better. But time often makes things worse. Organizations (and individuals) frequently forget what they should remember and remember what they should forget. One of the most widely accepted lessons of that frightening time--that the discovery of Soviet missiles in Cuba constituted a stunning American intelligence success--needs to be challenged. An equally stunning intelligence-warning failure has been downplayed in Cuban missile crisis scholarship since the 1960s. Shifting the analytic lens from intelligence success to failure, moreover, reveals surprising and important organizational deficiencies at work. Ever since Graham Allison penned Essence of Decision in 1971, a great deal of research has focused on the pitfalls of individual perception and cognition as well as organizational weaknesses in the policymaking process. Surprisingly little work, however, has examined the crucial role of organizational weaknesses in intelligence analysis. Many of these same problems still afflict U.S. intelligence agencies today. The pre-crisis estimates of 1962 THE EMPIRICAL RECORD of U.S. intelligence assessments leading up to the crisis is rich. 
We now know that between January and October 1962, when Soviet nuclear missile sites were ultimately discovered, the CIA's estimates office produced four National Intelligence Estimates (NIEs) and Special National Intelligence Estimates (SNIEs) about Castro's communist regime, its relationship with the Soviet Bloc, its activities in spreading communism throughout Latin America, and potential threats to the United States. These were not just any intelligence reports. NIEs and SNIEs were--and still are--the gold standard of intelligence products, the most authoritative, pooled judgments of intelligence professionals from agencies across the U.S. government. Sherman Kent, the legendary godfather of CIA analysis who ran the CIA's estimates office at the time, described the process as an "estimating machine," where intelligence units in the State Department, military services, and CIA would research and write initial materials; a special CIA estimates staff would write a draft report; an interagency committee would conduct "a painstaking" review; and a "full-dress" version of the estimate would go down "an assembly line of eight or more stations" before being approved for dissemination. (1) The four pre-crisis estimates of 1962 reveal that U.S. intelligence officials were gravely worried about the political fallout of a hostile communist regime so close to American shores and the possibility of communist dominoes in Latin America. But they were not especially worried about the risk of a military threat from Cuba or its Soviet patron. The first estimate, released January 17, 1962 (SNIE 80-62), was a big-think piece that assessed threats to the United States from the Caribbean region over the next 20 years. Although the estimate considered it "very likely" that communism across the region would "grow in size" during the coming decade, it concluded that "the establishment of ...
Soviet bases is unlikely for some time to come" because "their military and psychological value, in Soviet eyes, would probably not be great enough to override the risks involved" (emphasis mine). …

7 citations


Journal Article
TL;DR: The Arab Spring, as discussed by the authors, has been widely recognized as a watershed moment in the Arab world, but it has not been of a different nature from the European revolutions of the 19th and 20th centuries.
Abstract: REVOLUTIONS ARE NOT mechanical processes of social engineering. They unfold as an intrinsically unpredictable flow of events. Structurally, revolutions will go through phases, often through contradictory periods. Hardly any revolution will evolve without turbulences and phases of consolidation. And revolutions do not happen without moments of stagnation, surprising advancement, and unexpected transformation. The beginning of the Arab Spring in 2011 has not been of a different nature. It started as a fundamental surprise to most, took different turns in different countries, and was far from over by the end of 2011. Transatlantic partners are fully aware of the stark differences among Arab countries. They realize the genuine nature of each nation's struggle for democracy. Yet, they are inclined to take the Western experience with democracy as the key benchmark for judging current progress in the Arab world. The constitutional promise of the U.S. or the success of the peaceful revolutions in Eastern and Central Europe in 1989 and 1990 are inspiring, but one must be cautious in applying them to the Arab Spring. Preconditions have to be taken into account. Besides, the history of Europe's 19th and 20th centuries also suggests room for failure in the process of moving toward rule of law and participatory democracy. Some cynics have already suggested that the Arab Spring could be followed by an Arab autumn or even winter. Even if one discards such visions as inappropriate self-fulfilling prophecy, certain European experiences should probably not be forgotten: In the 1830s, Germany experienced its own Spring toward pluralism and democracy called the Vormärz. That German spring movement (Sturm und Drang) was essentially a cultural uprising without the follow-up of transformational political change. In 1848, across Europe, revolutionary upheavals promoted the hope for an early parliamentary constitutionalism across the continent.
In most places, this hope was soon to be replaced by variants of a restrictive consolidation of the anciens régimes. In 1989, the experience of Romania deviated strongly from most of the peaceful revolutions across Europe. Ousting and even killing the former dictator was a camouflage for the old regime to prevail for almost another decade. While the rest of Central and South Eastern Europe struggled with regime change and renewal, Romania prolonged regime atrophy and resistance to renewal. For the time being, the Arab Spring has evolved into the prelude of a revolutionary transformation that will go on, most likely for many more years to come. The Prague Spring of 1968, in the former Czechoslovakia, comes to mind: It was welcomed with euphoria in the West and in secrecy by many citizens under communist rule in the east of Europe. Yet, it turned out to be just the beginning of a transformative period in the communist world. It took another two decades before a substantial change of the political order in most communist states came about. The Prague Spring was the spring of a generation, not the spring of a year. No matter what direction the Arab Spring may take in the years ahead, two trends are startling: First, the Arab Spring has initiated a wide range of different reactions and trends around the Arab world. The homogenous Arab world is a myth. Likewise, the notion that Arab societies are permanently stagnant and immobile is a myth. The quest for dignity, voice, and inclusion under rule of law, and a true structure of social pluralism, has been the signature of peaceful protest all over the Arab world. The reactions of incumbent regimes have demonstrated a variety of strategies but also different levels of strength, legitimacy, and criminal energy. 
Second, and more surprising, is the relative resilience of the Arab monarchies to the Arab Spring: Morocco and Jordan, Saudi Arabia and Oman, Kuwait and the United Arab Emirates, and Qatar and Bahrain have been reasonably unaffected and remain stable (in spite of the temporary clashes in Bahrain and their oppression with the help of Saudi Arabia's army). …

7 citations


Journal Article
TL;DR: According to the Case-Shiller/S&P and second, supply-management policies assisting distressed homeowners in avoiding foreclosure have failed to stem large-scale housing losses as discussed by the authors, and this failure reflects the magnitude of the housing problem in which tens of millions of American families got into homes they could not afford.
Abstract: TRENDS IN THE housing sector have been a driving force behind the recent financial crisis and associated recession. According to the Case-Shiller/S&P and second, supply-management policies assisting distressed homeowners in avoiding foreclosure. As the depth of the housing crisis demonstrates, these policies--pursued by both Republican and Democratic administrations--have failed to stem large-scale housing losses. In large part, this failure reflects the magnitude of the housing problem in which tens of millions of American families got into homes they could not afford. Policymakers were consistently behind the curve in estimating the extent of problems in housing, and were unable to tackle the core issue of excess mortgage debt, which was made worse by a precipitous fall in home prices. …

6 citations


Journal Article
TL;DR: In the context of the Arab Spring, a window of opportunity has been opened in the Middle East and North Africa to put in place new institutions conducive to entrepreneurship, innovation, and economic growth as discussed by the authors.
Abstract: WHAT PRACTICAL LESSONS can the experience of post-communist transitions in Central and Eastern Europe offer to countries that are attempting to overhaul their economic systems? With the Arab Spring, a window of opportunity has been opened in the Middle East and North Africa to put in place new institutions conducive to entrepreneurship, innovation, and economic growth. To be sure, the world today offers very few examples of genuine centrally planned economies. Even the worst performing low- and mid-income countries do have sizeable private sectors and experience with open markets. However, despite the wide-ranging scale of reform challenges in different societies, many countries in the mid-income world, which are undergoing significant political changes at the moment, will also need to privatize, remove distortionary subsidies, stabilize their public finances, and create space for the growth of the private sector. Egypt, for instance, is suffering from an acute public finance problem, which is driven mostly by the existence of a distortionary subsidy system, aiming to keep prices of certain commodities low in order to help people in need. But, of course, indiscriminate subsidies to commodities chiefly benefit the wealthy and place a strain on public finance. Many Arab countries have sizeable public sectors--roughly one third of Egypt's economy is run by the military, for instance. As a result, a genuine transition to the market will necessitate privatizing state- and military-run enterprises--albeit on a smaller scale than in countries where the whole of the economy was owned by the government. As a rule of thumb, Arab Spring countries suffer from suffocating barriers to entrepreneurship and corrupt bureaucrats enforcing those rules. The revolution in Tunisia was, after all, sparked by the self-immolation of Mohamed Bouazizi, an aspiring entrepreneur who was selling fruit and vegetables illegally and was being harassed by the local authorities. 
Context-specificity matters, and the Middle East in 2012 differs from Eastern Europe in 1990. Still, many of the challenges facing the Arab world today are not that different from those facing Eastern European economies twenty years ago. As a result, one may hope that there are some lessons from post-communist transitions that need to be kept in mind if the democratic transitions in the Middle East are to succeed. Unfortunately, most current debates about economic and political transitions and desirable reform strategies are flawed, as they do not reflect the role played by dispersed knowledge in the economy and the incentive problems existing within the political sphere. It is not helpful to provide reform advice that is grounded in the assumption that reformers are omniscient and benevolent. After all, the central reason for the failure of planned economies was that they placed unrealistic epistemic and motivational demands on policymakers. In the same vein, many popular prescriptions for economic reforms implicitly assume that policymakers do not face cognitive constraints and that their motives are purely benevolent. This article, therefore, has the ambition to outline a few lessons that are more likely to pass the test of robustness with regard to less-than-ideal assumptions about policymakers' knowledge and incentives. This essay is partly an attempt to rehabilitate the "Washington Consensus." By focusing policymakers' attention on variables that they could directly control, and by providing a simple laundry list of policies that are necessary, though not sufficient, for the success of transitions, this approach provided a more solid platform for economic transition than its alternatives. Although one might question the soundness of its economic fundamentals, it is not clear whether systematic improvements upon its results were possible without requiring unrealistic assumptions about the knowledge and benevolence of policymakers. …

5 citations


Journal Article
TL;DR: The idea of "geopolitics of emotions" as mentioned in this paper proposes to map the world around each region's expectations about its own future: helpless and resigned in the West but hopeful and domineering in the East, and resentful and even vengeful in the South.
Abstract: THE IDEA OF decline is in fashion--so much so that a so-called "geopolitics of emotions" proposes to map the world around each region's expectations about its own future: helpless and resigned in the West but hopeful and domineering in the East, and resentful and even vengeful in the South. Yet, it should be clear that the emergence of new powers, which is real, need not be construed as the fall of others. Admittedly, a state no longer needs a Western identity to exert global influence and even seek primacy: That alone represents a compelling change. It suggests that for the first time in quite a while the West is no longer decisive and can no longer remain exclusive. But still, entering this new era, the West stays ahead of the rest because the rest cannot afford to be without the West. This essay is, therefore, a case against the case against the West: Somewhere in the shadow of Francis Fukuyama's much-maligned forecast of the "endpoint of mankind's ideological evolution" there stands a Western world restored. (1) Farewell to yesteryear ENTERING THE SECOND decade of the 20th century one hundred years ago, there were 50 countries at most; few of them were dubbed great powers, and those that qualified as world powers were mostly European states. This was a Western world whose dominance had deepened while India and China far in the East, and Turkey at the margin of the West, fell steadily behind. This was a belle époque--a time when, as Simon Schama wrote, Rudyard Kipling's "fantasy was potent magic" that helped conquer empires in the morning and gather "home for tea" in the afternoon. This also looked like a good time to be alive--until the summer of 1914 when a horrific and unnecessary war that was to last over three decades made it a good time to die. "We were born at the beginning of the First World War," wrote Albert Camus of his generation. "As adolescents we had the crisis of 1929; at twenty, Hitler. 
Then came the Ethiopian war, the civil war in Spain and Munich . . . Born and bred in such a world, what did we believe in? Nothing." This, feared (or hoped) the French humanist, was "humanity's zero hour." That the West stayed on top nonetheless, amidst the ruins of the entire European state system, was owed to a massive investment of American power and leadership that inaugurated a post-European world around a triumphant America with the consent of its new charges. Thus told, and without imperial intent from the United States, the history of the 20th century grows out of the rise of American power but also, and especially, the collapse of everybody else. Entering the second decade of the 21st century the past looks very distant--like a millennium away. It seems hard to remember, but in the previous epoch, the nation-state ruled and military force prevailed, leaving the weak at the mercy of the strong. This was an epoch of state coercion and national submission, of conquests and empires; this was an epoch, too, when time took its time, and territorial space kept its distance. In the new era, there is little time for a timeout from a world that brings ever more quickly the "over there" of yesteryear over here. Nation-states are fading, and institutions occasionally matter more than their members. Territorial overlaps impose additional measures of state cooperation but they also facilitate a global awakening to the "better" things available elsewhere--and, therefore, more and more pressing bottom-up demands for instant access to them. Now, too, military force is rarely decisive. On the whole, wars are no longer in fashion and other forms of power are favored to shape the world rather than rule it. (2) There are over 200 countries now, and dozens of them can claim moments of relevance during which their influence extends beyond their region and to the world at large. 
So many states thus featured by history above the others create an unusual condition of zero-polarity: a structure of power in which many states are necessary but none can prove sufficient, and floating transnational loyalties find their voice in multilateral groupings that pretend to be the "BRICS" of a new world order but lack a decisive steering organization or even shared goals. …

Journal Article
TL;DR: This paper argued that the political personality of European power as we know it today is the product of America's Cold War policies and the universalization of Europe's post-World War II political experience.
Abstract: "America is a power, Europe is an experience" -- Joschka Fischer IN 2002 ROBERT KAGAN, a leading neoconservative intellectual temporarily "exiled" in Brussels, wrote a brilliant essay, "Power and Weakness," arguing that the political personality of European power as we know it today is the product of America's Cold War policies and the universalization of Europe's post-World War II political experience. But more than anything else, it is the product of Europe's current military weakness. In his analysis, post-9/11 America should naturally have cooperated with Europe, but Washington could not rely on the European Union as a strategic partner in managing the world because "on the all-important question of power-- the efficacy of power, the morality of power, the desirability of power-- American and European perspectives are diverging." A decade later American and European perspectives diverge on the all-important question of the decline of Western power. While American elites view the very discussion of "decline" as "pre-emptive superpower suicide," Europeans are busy learning how to live in a world where the EU is not a leading actor and the European continent is not the center of the world but simply the wealthiest province. Could it be that the clash between Washington's denial of the very possibility of America's decline and Europe's excessive readiness to adjust to it can damage the Western alliance more than the controversy over the use of power did? Could it be that at the heart of the current troubles in the transatlantic relationship is a crisis of the political imagination of the West? I want to argue that the paradox of the new world order emerging out of the ongoing recession is that the global spread of democracy and capitalism, instead of signaling "the end of history," marked "the end of the West" as a political actor constructed in the days of the Cold War. 
In the decades to come the nature of the political regimes will be an unreliable predictor for the geopolitical alliances to emerge; and it is the blurring borders between democracies and authoritarian capitalism, rather than the triumph of democracy or the resurgence of authoritarianism, that defines the global political landscape. Kagan's "Long Telegram" IN "POWER AND WEAKNESS" Robert Kagan eschewed political correctness, insisting that after the end of the Cold War Americans and Europeans no longer shared a common view of the world. They no longer shared a common strategic culture. The divergence between America and Europe can be best explained not by differences in "national character" or value systems but by the asymmetry of power. It is Europe's relative military weakness that determines Europeans' rejection of power as an instrument in international relations. And it is America's strength that defines Americans' readiness to use military power. In short, capabilities shape intentions. Forced to choose between increasing their military budgets and becoming second-rate powers, Europeans most likely will choose marginality. Contrary to the opinion of its critics, Kagan's essay was not a dismissal of the EU's relevance in the world. Paradoxically, Kagan's dispatch from Brussels was a reaffirmation of the importance of the transatlantic alliance. Arguing with the prevailing consensus in Washington at the time, he asserted that it was true that Europe had lost its geostrategic and military significance for the U.S. But Europe had retained its critical ideological relevance for American foreign policy, he believed, because the split between America and Europe meant the end of the post-Cold War world. And in Kagan's strategic vision it is the post-Cold War world-- a world without Soviet power but defined by the ideological confrontation between free nations and tyrannical regimes-- that best suits American interests. 
Re-reading Kagan's article a decade later, we face three critical questions. …

Journal Article
TL;DR: In this article, the authors show that much of what has been reported about income inequality is misleading, factually incorrect, or of little or no consequence to our economic well-being.
Abstract: IN OCTOBER 2011, the Congressional Budget Office published a report, "Trends in the Distribution of Household Income between 1979 and 2007," showing that, during the period studied, aggregate income (as defined by the CBO) in the highest income quintiles grew more rapidly than income in the lower quintiles. This was particularly true for the top one percent of earners. This CBO study has been cited by the media and politicians as confirmation that income inequality has increased "substantially" during the period studied, and has been used to support President Obama's claim that income inequality is a serious and growing problem in the United States that must be addressed by raising taxes on the highest income earners. We will show that much of what has been reported about income inequality is misleading, factually incorrect, or of little or no consequence to our economic well-being. We will also show that middle-class incomes are not stagnating; in fact, middle-class incomes have risen significantly over the 29 years covered by the CBO study. Lastly, we will address assertions that the rich are not paying their "fair share" of taxes. In our view, Americans should care about the well-being of the nation as a whole rather than whether some people earn more than others. To that end, the focus of public policy should not be on equality of income but on equality of economic opportunity. Policies designed to reduce income inequality inevitably involve redistribution of income through increases in transfer payments and marginal tax rates. But these policies discourage hiring and investment, which depresses economic growth and opportunity. In sharp contrast, policies designed to enhance equality of opportunity will increase economic wellbeing for all, most particularly those in lower income households. Income inequality PERHAPS THE MOST important question left out of almost every discussion about income inequality is, "Why should we care about it?" 
Many of those who worry about high income inequality argue that it is an indicator of social injustice that must be remedied through redistribution of income (or wealth). Unfortunately, those who make this claim have not provided any generally accepted criteria for determining when an economic system is unjust. Nor have they provided a convincing argument that such injustice is widespread in the U.S. (In considering this issue, it is worth noting that Greece, Spain, and Italy all have substantially lower income inequality than the U.S. The same is true for Afghanistan, Pakistan, and Bangladesh.) Measuring inequality using the Gini coefficient. There are at least five methodologies used to measure income inequality. The most commonly used is the Gini coefficient (also called the Gini index) developed by Italian statistician Corrado Gini. The Gini coefficient is a method of measuring the statistical dispersion of (among other things) income, consumption, and wealth. The figure of merit for the Gini coefficient for income inequality ranges from zero to 1.0, where zero represents total equality (all persons have identical incomes) and 1.0 represents total inequality (one person has all of the income). By this measure, the U.S. has substantially higher income inequality than almost all other industrialized nations. In 2010, the Census Bureau reported that the U.S. Gini coefficient was .469, while the average Gini coefficient for the 17 European Union nations was .31. (1) The U.S. Gini coefficient cited here comes from an annual report of the Census Bureau, which uses what it calls "money income" in its measurement of income inequality. (2) Money income, which is the definition of income typically used in public references to inequality, consists of cash income only, does not subtract taxes, and excludes the value of noncash transfer payments (such as nutritional assistance, Medicare, Medicaid, and public housing), as well as many other components of income. …
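The Gini measurement described above is straightforward to compute. As a minimal sketch (with illustrative income values, not data from the CBO or Census Bureau studies), the standard formula over a sorted list of incomes can be written as:

```python
def gini(incomes):
    """Compute the Gini coefficient of a list of non-negative incomes.

    Returns 0.0 for total equality (identical incomes); approaches 1.0
    as a single person holds all of the income.
    """
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Equivalent form of the mean-absolute-difference definition:
    # G = sum_i (2*i - n - 1) * x_i / (n * sum(x)), i = 1..n over sorted xs
    weighted = sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs))
    return weighted / (n * total)

# Identical incomes: total equality
print(gini([10, 10, 10, 10]))  # 0.0
# One earner holds everything: (n - 1) / n = 0.75 for n = 4
print(gini([0, 0, 0, 40]))  # 0.75
```

Note that for a finite sample the maximum value is (n - 1)/n rather than exactly 1.0, which is one reason published Gini figures such as the .469 and .31 cited above depend on the underlying income definition and sample construction.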

Journal Article
TL;DR: The cybercrime has been used broadly to describe a wide range of activities, from illegal interference and illegal access to the misuse of devices and content-related offenses as mentioned in this paper, which is the proper domain of federal law and international law, rather than the states.
Abstract: THE PARADE OF horribles potentially set to march by a cyberattack is by now familiar: No air traffic controllers or airport check-ins; no electronically regulated rail traffic; no computer-dependent overnight deliveries of packages or mail; no paychecks for millions of workers whose employers depend on payroll software; no financial records of funds on deposit and no ATMs; no reliable digital records in hospitals and health centers; no electrical power, resulting in no light, no heat, no operating oil refineries or heating fuel or gasoline; no traffic signals, and no telephone or internet service or effective police protection--such is the list of what could be disabled by an attack on America's computer networks. Addressing this threat has been assumed to be the task of the federal government. But the dangers posed clearly implicate the police powers traditionally exercised by the states--and the states' interests are significant. As the authors of one recent study noted, states hold the most comprehensive collection of personally identifiable information about their residents, and states routinely rely upon the internet to serve those residents. Health and driving records, educational and criminal records, professional licenses and tax information all are held by state governments. What role, then, might states play in promoting cybersecurity? Just how great is the threat from cyberattacks? What, indeed, is a cyberattack? How effective are federal and international safeguards? Isn't cybersecurity the proper domain of federal law and international law, rather than the states? Let's begin with the gravity of the threat. So far as we are aware, as James Lewis has pointed out, in only two incidents have actions taken in cyberspace thus far caused serious damage to critical infrastructure. Neither occurred in the United States. 
(The first involved the disruption of Syrian air defenses by the Israeli Air Force during the destruction of a Syrian nuclear reactor. The second involved the so-called Stuxnet attacks on Iranian nuclear reactors.) These operations were appropriately termed cyberattacks. They involved destruction or disruption of the sort associated with war; they are thus regulated--to a point--by the international law of armed conflict. Cyber-espionage, on the other hand, involves no destruction or disruption but is aimed at the surreptitious extraction of data. The term cybercrime has been used broadly to describe a wide range of activities, from illegal interference and illegal access to the misuse of devices and content-related offenses. Each of these terms refers as much to the perpetrators as to the act itself. Espionage conducted by other nations has been regarded as a matter for the federal government, whereas theft, the destruction of property, and related offenses committed by individuals and criminal organizations are thought to be the purview of both state and federal governments. While these distinctions provide a bit of analytic clarity, cyberattacks, cybercrimes, and cyber-espionage do not fit well into existing categories. For one thing, they're usually not easily distinguishable from one another until well after their initiation, if then. All exploit vulnerabilities in computer networks and use similar techniques. Malware that has been downloaded surreptitiously and sits silently on a computer may be intended simply to monitor keystrokes--or it may await the command of a distant operator to erase data, freeze the operating system, or participate in a botnet attack (explained below). Experts often cannot be sure what's afoot without time-consuming and painstaking forensic analysis. Given the instantaneity of strike and counterstrike in cyberspace, this can be impractical. 
Further, the anonymity of cyberspace and the current state of information technology make it extremely difficult to identify transgressors and to attribute attacks. The absence of attributability severely complicates the application of any legal regime to individual acts. …

Journal Article
TL;DR: The Atlantic alliance has demonstrated remarkable resilience over the past two decades as discussed by the authors and has been able to resist the dissolution of the threat that brought them into being, however, not only survived the collapse of the Soviet Union but also welcomed a host of new members from Central Europe and to undertake military missions in Bosnia, Kosovo, Afghanistan, and Libya.
Abstract: THE ATLANTIC ALLIANCE has demonstrated remarkable resilience over the past two decades. Most alliances do not outlast the dissolution of the threat that brought them into being. NATO, however, not only survived the collapse of the Soviet Union but went on to welcome a host of new members from Central Europe and to undertake military missions in Bosnia, Kosovo, Afghanistan, and Libya. As the Cold War came to a close, few observers could have predicted that NATO, twenty years later, would be in the midst of a major mission in Afghanistan while simultaneously carrying out a successful air campaign to topple the Libyan government. NATO's surprising longevity and activism notwithstanding, the Atlantic alliance has certainly suffered its fair share of setbacks. The Iraq war of 2003 severely strained transatlantic relations and underscored the differences in approach and policy that had come into stark relief after President George W. Bush took office. The election of Barack Obama then seemed to guide the United States and Europe back into alignment-- but only temporarily. Soon enough, Europeans began to worry that Obama was a "post-Atlanticist" president who would focus his attention elsewhere-- on Asia, in particular. So, too, were Europeans disappointed when Obama backed away from some of his campaign pledges, unable to close Guantanamo or implement a credible U.S. program to curb global warming. For its part, Washington begrudged the EU's sluggish response to its financial crisis and its inability to muster more coherence and capability on matters of defense. The Western alliance has, however, admirably weathered these ups and downs, and proved wrong its many naysayers. Only a decade ago, many analysts were convinced that the transatlantic coupling was on the rocks. 
Robert Kagan predicted in these pages that a Hobbesian America obsessed with power and coercion was destined to separate from a Kantian Europe wedded to taming the world through law and institutions. In The End of the American Era, I foresaw a European Union whose deepening integration would gradually give it the wherewithal to chart its own course, fostering an independence that would come at the expense of Atlantic solidarity. Others, such as Ivo Daalder, the current U.S. ambassador to NATO, fretted in Survival about the "effective end of Atlanticism" and a strategic drift that could result "in separation and, ultimately, divorce." How and why has the Western alliance been able to defy such predictions of demise? This essay examines what the skeptics got right and what they got wrong, with particular reference to Kagan's essay, "Power and Weakness." Kagan and many other observers of the Atlantic alliance identified key fissures in the transatlantic relationship, but also significantly underestimated its staying power. Common values and interests, which have increased in salience due to the ongoing diffusion of power from the West to the rising rest, have kept centrifugal forces at bay. The transatlantic bond has also proved durable by default; Europe and America remain each other's best partners because there are for now no viable alternatives. It is good news for the Atlantic partnership that most observers underestimated its resilience. But there is bad news as well. Analysts also failed to foresee what is emerging as perhaps the most serious threat to the Western alliance: the crisis of governance afflicting both sides of the Atlantic. The United States is experiencing a prolonged period of political dysfunction, one that is impairing its ability to put its economic house in order. Meanwhile, the EU's struggle to bring financial stability to the Eurozone has exposed debilitating cleavages among its member states. 
Stagnant incomes and growing inequality, both of which are consequences of globalization, appear to be the main sources of this political discontent. Economic duress, recalcitrant publics, polarized legislatures, leaders bereft of popular support-- these realities mean that tackling challenges within the United States and Europe is now more urgent and daunting than managing relations between them. …

Journal Article
TL;DR: The Harm in Hate Speech Laws as discussed by the authors is a recent work by New York University Professor Jeremy Waldron, who argues that hate speech undermines the equal dignity of individual members of vulnerable minority groups.
Abstract: The Harm in Hate Speech Laws JEREMY WALDRON The Harm in Hate Speech HARVARD UNIVERSITY PRESS 304 PAGES $26.95 IN THE HARM IN Hate Speech, New York University Professor Jeremy Waldron sets out to defend hate speech laws (or "group defamation laws," as he prefers to label them) against critiques based on "knee-jerk" American First Amendment exceptionalism. Yet Waldron's defense of hate speech laws is based on a purely abstract and ultimately flawed harm principle that is at odds with modern realities. The harm principle proposed by Waldron thus leads to a utilitarian calculus which reduces the freedoms of conscience and expression to dubious empirical disputations based on evidence that is vast and contradictory. Waldron's abstract thesis leads him to dismiss some very weighty arguments against hate speech laws without investigating real examples of how they impact individuals and political debates and lead to arbitrary outcomes difficult to reconcile with the rule of law. The book's central premise is that hate speech undermines the equal dignity of individual members of vulnerable minorities. For Waldron, "The ultimate concern is what happens to individuals when defamatory imputations are associated with shared characteristics such as race, ethnicity, religion, gender, sexuality and national origin." As examples, Waldron invokes the history of systematic racism and segregation in the U.S. and the lethal legacy of Nazism in Europe. Manifestations of hate speech "intimate a return to the all-too-familiar circumstances of murderous injustice that people or their parents or grandparents experienced," which a "well ordered society" should not tolerate. Accordingly, hate speech can be restricted as a means of "assurance" to the targeted groups. Waldron explicitly rejects the idea that hate speech should constitute a "clear and present danger" before being prohibited by comparing hate speech with environmental harms such as automobile exhaust. Since we know that exhaust can 
result in lead poisoning, it is justified to require each automobile owner to fit an emission control on the exhaust pipe, even if we cannot show a direct link between the individual car owner and those afflicted by pollution. To Waldron, the harm in hate speech outweighs the many objections to hate speech laws, such as the restrictions on autonomy and freedom of conscience, the interference with the political decision-making process, the often vague and imprecise language of hate speech codes, and the risk of political abuse. The objective of seeking to reassure all members of society that they will not suffer persecution based on their race, religion, ethnicity, etc., is one on which virtually all can agree. No one can deny the very real and horrific consequences of Jim Crow and lynchings in America, or of Kristallnacht and the Holocaust in Europe. Even the most principled defender of the First Amendment would surely allow for further restrictions on free speech if indeed it could be shown that hate speech creates a significant risk of a return to violent racial persecution. But nothing suggests that America's increasingly isolated position on the protection of free speech has led to increasing racial tensions. While there is likely no scientific method of accurately gauging the relationship between free speech and extremism, numerous surveys on race relations suggest that the putative dangers of tolerating extreme speech have little factual basis. There is arguably no better way to gauge an ethnically diverse society's level of tolerance and commitment to equality than to look at interracial marriages--prohibited in sixteen states until the Supreme Court struck down Virginia's miscegenation law in 1967. The statistics on American attitudes toward interracial marriages reveal a startling development. According to a 2011 Gallup survey, only four percent of Americans approved of interracial marriages in 1958 …

Journal Article
TL;DR: In the United States, human rights are generally identified as a left-wing cause, the authors argue, even though human rights advocates do not come in one size that fits all, nor do they agree with one another on all issues.
Abstract: WHEN THE AMERICAN section of Amnesty International was first founded in the 1970s, William F. Buckley was one of its earliest supporters. The prime mover behind the American section, Ginetta Sagan, was a mentor to those of all political stripes, including, for example, Republican Congressman Dana Rohrabacher, whom no one has ever accused of being a "leftist." When George W. Bush called in his second inaugural address for the United States to affirm "the ultimate goal of ending tyranny in our world," he was issuing a call with which no human rights advocate could possibly disagree. The board of Freedom House, a prominent human rights organization, is rife with ex-Bush administration officials like William H. Taft IV and Paula J. Dobriansky, and with scholars like Ruth Wedgwood and Joshua Muravchik who are generally identified with the conservative end of the political spectrum. And yet, despite the political diversity these instances represent, human rights are generally identified as a left-wing cause. There are many reasons for that, perhaps foremost among them the fact that human rights standards are established largely by international instruments, beginning with the Universal Declaration of Human Rights (UDHR), and enforced, to the extent to which they are "enforced" at all, by international institutions, such as the UN Human Rights Council. Conservatives tend to resist subsuming American sovereignty to international regimes and to be suspicious of international institutions, in part because they include some member states lacking consent of the governed and basic liberties. (1) As a consequence, the United States has ratified fewer key human rights treaties than the other G20 nations and, when it has ratified them, has tended to attach reservations asserting the preeminent authority of the U.S. Constitution. 
(2) Moreover, human rights have come to be associated with a number of causes--notably opposition to the death penalty; the closure of the prison camp at Guantanamo Bay; and the assertion of a right to health care--that, justifiably or not, are considered liberal causes in American political terms. The fact that conservatives have played a prominent role in other landmark human rights struggles--such as the promotion of religious freedom; an end to the second Sudanese Civil War in 2005; and the campaign to end human trafficking--has failed to redress the perception that human rights advocates, with the exception perhaps of a handful of neoconservatives, are ineluctably drawn from the left. As human rights figures identified with different parts of the political arc, we regret this bias because it does damage to the human rights cause. Michael Ignatieff has called human rights the "lingua franca of global moral thought," but moral thought is not the exclusive province of any one political position. Just as the standing of human rights claims declines if those claims are thought to be no more than the product of Western ideology or a so-called imperialist agenda, so too the power of human rights to influence U.S. foreign and domestic policies is diminished if human rights are perceived to be the concern of only one segment of the political community. When, on the contrary, a group of respected military leaders speak out against torture or former Bush Solicitor General Ted Olson pleads the case for marriage equality, stereotypes of what human rights advocates look like are constructively confounded. When conservative Republican Alberto Mora resigned as general counsel of the Department of the Navy over detainee policy, he did not suddenly become a liberal. The truth is that human rights champions do not come in one size that fits all, nor do they agree with one another on all issues. 
The late Democratic Congressman Tom Lantos, whose support for human rights was so widely recognized that the Human Rights Commission of the U.S. House of Representatives is named for him, could become irate at criticism of the United States by human rights organizations for its use of the death penalty. …

Journal Article
TL;DR: Kagan's "Power and Weakness" article, as mentioned in this paper, argued that there was a growing rift between the strategic cultures of America and Europe once the Soviet Union disappeared from the global stage, and that the United States was more comfortable using hard power in the form of military force to eliminate foreign threats.
Abstract: IN ASSERTING THAT "Americans are from Mars and Europeans are from Venus," Robert Kagan combined pithiness with productive provocation. Following in the footsteps of Francis Fukuyama and Samuel Huntington, he stated hard truths that challenged observers of world politics to reconsider their worldviews-- even if they disagreed vehemently with his arguments. Writing "Power and Weakness" when he did-- after the September 11th attacks, after the seemingly successful ejection of the Taliban from Afghanistan, before Operation Iraqi Freedom-- Kagan seemed to crystallize the growing transatlantic rift. In the years immediately after the piece was published, the transatlantic divide grew to a chasm. Spats between American and European leaders became routine. Public attitudes in Europe about the United States worsened, and vice versa. A decade after this magazine published "Power and Weakness," I come to praise Kagan's insights-- and then to bury them. As they did with Fukuyama and Huntington before him, critics caricatured the arguments Kagan made in "Power and Weakness." It would be easy to point out events over the past decade that, on the surface, would appear to contradict Kagan's thesis. These critics elide the deeper points that Kagan made about the connection between national power and strategic culture. When looked at again, "Power and Weakness" presaged arguments made by academic realists and liberals about the future of international affairs. Ten years on, however, time has revealed some hidden flaws in the logic. Indeed, while "Power and Weakness" captured the strategic thinking of a decade ago, it also epitomizes the blind spots of those immediate post-9/11 debates. Kagan made two unstated assumptions in "Power and Weakness" that look increasingly dubious with the passage of time. First, he treated power as though it could only be articulated through military force or cultural attraction. 
This crude dichotomy omits the very significant space where economic power resides. In neglecting this dimension, Kagan underestimated the policy arenas where Europe has mattered to the United States. Just as Europe has thrived under the security umbrella of the United States, Washington has benefited from the support of Brussels' economic power. Second, in discussing the strategic cultures of Europe and America, Kagan obscured the gap between public attitudes and policy elites-- particularly on this side of the Atlantic. In doing so, he failed to note the ways in which these strategic cultures diverged from the attitudes of the mass public. Due to the overreaches of the past decade, this gap leaves minimal margin for error for American policy makers in contemplating the use of force. In the future, the United States may have no choice but to revert back to a strategic culture more simpatico with European elites. THERE IS AN unfortunate tendency in criticism to assume that pithiness cannot be combined with sophistication. "Power and Weakness" is pithy, but it is also more than just Kagan's Mars/Venus bon mot. He argued that there was a growing rift between the strategic cultures of America and Europe once the Soviet Union disappeared from the global stage. The United States was vastly more comfortable using hard power-- in the form of military force-- to eliminate foreign threats. Furthermore, the U.S. was more willing to do so preemptively and preventively in far-flung areas of the globe, with little recourse to multilateral institutions or international law along the way. Europeans, in contrast, were far more reluctant to rely on force of arms. They viewed such an approach as militaristic and naive, doubting that brute force could solve intractable stalemates. European policymakers preferred policies of "subtlety and indirection," according to Kagan. 
These contrasting views of how to conduct foreign policy were leading to a clash of strategic cultures that threatened to rupture the transatlantic relationship. …

Journal Article
TL;DR: In the Ivanpah Valley of the Mojave desert, a company called BrightSource Energy has been counting the tortoises it would have to relocate in order to proceed with its planned solar project, and census takers are finding far more than they, or anyone else, expected.
Abstract: ABOUT 40 MILES southwest of Las Vegas, drivers on Interstate 15 reach a section of the Mojave desert called the Ivanpah Valley. For most travelers, the valley is a nondescript landscape of creosote bushes, cactus, and sand; but devotees of the desert sometimes leave the main road to see much more. The uninterrupted views of the surrounding mountains are especially crystalline on early spring mornings when unusual plant species like the Mojave Milkweed and the Desert Pincushion are in bloom. Several birds that nest in the valley, including the burrowing owl and the loggerhead shrike, have protected status under federal law, as does a reptile called Gopherus agassizii, or desert tortoise. BrightSource Energy, a firm that plans to develop a 390-megawatt solar complex in the valley, has been counting the tortoises it would have to relocate in order to proceed with the project, and BrightSource's census takers are finding far more than they, or anyone else, expected. Since the history of successfully relocating this tortoise is not encouraging, and since the small reptile has an ever-growing cohort of protectors, BrightSource is no longer as sure as it once was that this project, at the scale proposed, will be feasible. Currently, the Bureau of Land Management has about twenty solar, wind, and geothermal projects under various stages of development review in the desert Southwest--Arizona, New Mexico, Nevada, and California--and two more wind projects in Oregon. All of these face varying degrees of opposition from environmental groups. Some are also being contested by Native American tribes, whose objections are both environmental and cultural, in that some of the lands are considered sacred burial grounds. For environmental advocates of renewable and sustainable energy, their colleagues' objections can be both nettlesome and embarrassing. 
The Mojave is ideally situated for solar development; these desert lands are bombarded with more of the sun's rays than almost any place on earth, and that sunshine conveniently arrives at the perfect time to be converted to electricity to meet the peak power requirements of large nearby population centers: Las Vegas, Phoenix, San Diego, and the megalopolis that combines Riverside, Orange, and Los Angeles counties. Since more than one million acres of the Mojave have already been excluded from such development by a law sponsored by U.S. Senator Dianne Feinstein in 2009, projects like BrightSource's become an even more important element in fulfilling California's ambitious plan to obtain at least 33 percent of its electricity from renewable sources by 2020. Feinstein's original proposal was to exclude over 2.5 million acres, but that was scaled back in the face of opposition from unions who foresaw an enormous loss of construction jobs. But the same groups that encouraged the set-aside in 2009--organizations like the Nature Conservancy and the Audubon Society--continue to push for further restriction of energy development on any public lands that come close to being in pristine condition. There is some irony in the fact that the main reason such lands are "pristine" is that they were unsuitable for any other kind of development. Except for their mostly newly discovered environmental sanctity, these desert areas would have been the cheapest land upon which to develop solar resources. In some locations, wind power can compete in the marketplace even without production tax credits, but solar still heavily relies on subsidies or renewable energy mandates to compete with fossil fuel. So, until renewables have become fully competitive in the marketplace, does the outcome of these struggles between environmental supporters and opponents of utility-scale projects really matter? The answer is surely yes. 
Geothermal, like wind power, is already competitive in many locations; meanwhile, solar energy's costs are decreasing fairly rapidly. The price of silicon, the primary raw material in solar, has fallen dramatically; so have engineering costs, as successful techniques are increasingly replicated and perfected. …

Journal Article
Abstract: THE SO-CALLED ARAB Spring has ushered in many surprising changes, not the least of which is an apparent sea change in American foreign policy. The Muslim Brotherhood--hitherto regarded as the principal ideological incubator of Islamic extremism and shunned accordingly--has been rehabilitated by the present American administration. Long before the Brotherhood's Mohamed Morsi was elected Egyptian president in June, the administration was openly courting the organization. The first sign of the change came in the form of what seemed initially to be a bizarre gaffe. Speaking at congressional hearings in February 2011, Director of National Intelligence James Clapper incongruously described the Brotherhood as "largely secular." But the gaffe soon proved to have been a harbinger of policy. Within a year, the American ambassador to Egypt, Anne Patterson, and the Senate Foreign Relations Committee chair, John Kerry, were meeting with Brotherhood officials in Cairo. The contacts, in both Cairo and Washington, have gone on ever since. In this context, it is hardly surprising that many observers--and especially those wary of the administration's "reset" with the Brotherhood--would regard a recent book by former Wall Street Journal reporter Ian Johnson as, in effect, the book of the hour. Bearing the sensational title A Mosque in Munich: Nazis, the CIA, and the Rise of the Muslim Brotherhood in the West (Houghton Mifflin Harcourt, 2010), Johnson's volume contains an even more sensational thesis: namely, that the U.S. had already gotten involved with the Muslim Brotherhood in the 1950s and that the Brotherhood's leading representative in Europe at the time, Said Ramadan, was even a CIA asset! On Johnson's account, the CIA helped Ramadan to seize control of the "mosque in Munich" of the book's title. 
The claim is all the more sensational inasmuch as the mosque--or rather the Islamic association that sponsored its construction--would in the aftermath of 9/11 come to be linked to al Qaeda. It is not difficult to understand, then, why Johnson's book has been hailed as a "cautionary tale." And this it would be, were it not for the fact that the tale Johnson tells is not supported by the evidence. The whole basis of Johnson's narrative of American "collusion"--as he put it in the Fall 2011 Middle East Quarterly--with Ramadan and the Brotherhood is circumstantial evidence and conjecture. Unnervingly, once introduced into the narrative, the conjecture is then elevated to the status of established fact. This procedure allows Johnson, for instance, to refer repeatedly to an American "plan" to install Ramadan as the head of the Munich mosque project, even though he has offered no proof that such a plan ever existed. Dangerous liaisons or casual contacts? MORE PROBLEMATICALLY STILL, most of the circumstantial evidence points precisely to American disdain for Ramadan, not the "mutual attraction" that Johnson essentially conjures out of thin air. Take, for instance, Ramadan's now famous September 1953 visit to the White House. A photograph documenting the visit is reproduced in a February 3, 2011 post by Johnson on the blog of the New York Review of Books. The picture shows a group of more than twenty Muslim dignitaries crowded around President Dwight D. Eisenhower in the Oval Office--among them, the then merely 27-year-old Muslim Brother Said Ramadan. The occasion for the photo-op was an international "Colloquium on Islamic Culture," which was cosponsored by Princeton University and the Library of Congress. The U.S. Information Agency and the State Department's International Information Agency were also involved. The apparently scholarly conference thus enjoyed U.S. government sponsorship. 
Johnson makes much of this fact, quoting darkly from a State Department memo that admits to ulterior motives. A passage from the same memo that is quoted by another source (to which we will come momentarily) specifies the ulterior motives in question: namely, "to create good will and to contribute to greater mutual understanding between the Muslim peoples and the United States. …

Journal Article
TL;DR: In this article, the author examines Altman's claim that Strauss's many statements against Nazism are lies, designed to conceal the true nature of Strauss's project from unsuspecting, innocent Americans.
Abstract: ACCORDING TO WILLIAM ALTMAN'S The German Stranger, Leo Strauss concocted a "radical critique of liberal democracy" that is a "synthesis" of the thought of Carl Schmitt and Martin Heidegger, "two cowardly, utterly repulsive, and lapel pin-wearing Nazi philosophers." Strauss could not join the party due to his "Jewish blood," but he "did what no mere Nazi could have done or dreamed of doing: he boldly brought his anti-liberal project to the United States, the most fearsome of his homeland's Western enemies and the greatest and most powerful liberal democracy that has ever been." Strauss's project is "primarily destructive: it was the theoretical foundation of liberal democracy in general that he sought to annihilate, not some new form of totalitarianism he aimed to erect." Leo Strauss was born into an observant Jewish home in Germany at the end of the 19th century. As a young man he participated in the Zionist movement; he studied philosophy in several German universities, encountered Husserl and Heidegger as well as the academic philosophy of the neo-Kantian school, and began his scholarly career as a researcher in Jewish Studies in Berlin in the 1920s. Strauss left Germany on a fellowship to Cambridge in 1932 and did not return after Hitler came to power. He lived in England and France for a number of years before moving to the New School in New York, where he obtained a regular faculty position in 1941. Later, Strauss accepted a professorship at the University of Chicago, where he wrote the works that have made him famous, such as Natural Right and History, The City and Man, and Thoughts on Machiavelli. He is best known in America, at least by those who have taken the trouble to study carefully his writings, for his critique of the roots of modernity based on a perspective that is largely drawn from pre-modern philosophy--Greek, Jewish, and Islamic. 
Altman is aware of the many statements of Strauss against Nazism, often worded in strong, passionate terms. He knows that Strauss said many things that could be taken to support liberal democracy, including that liberal democracy was the best possible political alternative in his own time. According to Altman, these explicit statements are lies, designed to conceal the true nature of Strauss's project from unsuspecting, innocent Americans. Strauss had claimed that the great philosophers of the past wrote "exoterically"--in such a manner as to conceal their true teaching from all but a few understanding readers. According to Strauss these thinkers did so to avoid persecution, and also the harm to themselves and society that could come from the innocent and not-so-innocent misappropriation of their ideas. Writing in this way--"between the lines"--entails burying in a work with an innocuous external teaching various statements that guide the reader toward the author's true intent. Altman believes that Strauss himself practiced exotericism. According to Altman, once we assume this we will find many statements in Strauss's writing that modify those which attack Nazism and support liberal democracy. While often ambiguous, these statements, Altman maintains, are decisive hints of Strauss's hidden Nazi-inspired attack on liberalism. Reading "with suspicion" EVEN IF IT were true that Strauss wrote between the lines, this does not establish that all explicit statements of Strauss are misleading or false accounts of his views. Altman would still have to show that the particular statements in question in favor of liberal democracy or against Nazism are lies. This depends on interpreting the ambiguous statements as anti-liberal and pro-Nazi. But ambiguous statements by definition permit of more than one meaning. On the basis of what principle ought we to resolve the ambiguity in favor of a pro-Nazi or anti-liberal meaning? 
The principle Altman asserts is that of reading "with suspicion." But where does such suspicion come from? A small shelf of books (mostly) dedicated to discrediting Strauss and his scholarship. …

Journal Article
TL;DR: The Weinberger-Powell Doctrine set out conditions for the proper use of force that framed later debates over "exit strategies" in the Iraq and Afghanistan campaigns; the authors argue against exit strategies driven by political agendas and hasty timetables, whether focused on leaving a "stable order" in place before foreign troops are extricated or on defeating insurgents so that a viable set of political institutions can be established by local authorities.
Abstract: EXIT STRATEGIES ARE back in vogue. The Afghanistan campaign has not gone terribly well in the past several years and a deadline--of sorts--for withdrawal has been set for the mission in 2014. In the case of Iraq, the Obama administration declared that the combat mission was over following the successful "surge" strategy and removed U.S. troops by the end of 2011. These "exit strategy" deadlines were set against a background of continuing political instability and violence in both countries. Exit strategy is a term that originally comes from business. (1) It is the method by which venture capitalists or the owners of a business shed an investment that they own. The concept gained currency in relation to military interventions in the 1980s and 1990s in what became known as the Weinberger-Powell Doctrine. In the aftermath of the disastrous bombing of the U.S. Marine barracks in Lebanon in October 1983, Secretary of Defense Caspar Weinberger outlined six conditions for the proper application of U.S. force: (1) U.S. vital interests at stake; (2) a clear commitment to achieving victory; (3) clear political and military objectives; (4) the level of military engagement matches the mission's key objectives; (5) domestic and congressional support secured prior to the mission; and (6) use of force only as a last resort. (2) The concept was refined and amplified by then-Chairman of the Joint Chiefs of Staff Colin Powell just prior to Operation Desert Storm in 1991. Fearing another Vietnam-style disaster if U.S. troops were deployed to oust the Iraqi forces that invaded Kuwait, Powell argued that the mission would only succeed if its objectives were clearly identified and attainable, the commitment to achieving those objectives was firm, political support was widespread, and force could be used quickly to overwhelm the enemy and with a minimum number of casualties. As Jeffrey Record notes, in the aftermath of the first Gulf War and the subsequent "decade of U.S. 
military interventions in Africa, the Caribbean, and the Balkans, there [was] a rising clamor on Capitol Hill and within the Pentagon for 'clear exit strategies' before resorting to force overseas." (3) In 1996, U.S. National Security Adviser Anthony Lake stated that before the United States sends its "troops into a foreign country we should know how and when we're going to get them out." (4) The concept of exit strategy has also had its critics. Gideon Rose, for example, argues that the concept should be "jettisoned" because "it lumps together several important issues that are best handled separately." In any intervention, the key considerations are not just whether the "blood and treasure" are being used "in endless futile attempts to impose order and create harmony" but what "interests are at stake," whether a "stable order" can be left behind, and whether "overcommitment can be avoided ... through selective intervention." Rose also believes that the challenge is one of handling "unexpected developments" through well-conceived "contingency plans." Likewise, Record argues that "the idea of a sure-fire, pre-hostilities road map to post-hostilities military extrication is a delusion. Having a concept of success is always good, but having a healthy appreciation of the difficulties of maintaining it in the face of war's vicissitudes is even better." We argue here in favor of eschewing exit-based strategies driven by political agendas and hastily drafted timetables for withdrawal. But we do so on somewhat different grounds from those that have traditionally been advanced, which focus either on the need to leave a "stable order" in place before foreign troops are extricated and/or to defeat insurgents so that a viable set of political institutions can be established by local authorities. 
Instead, we argue that decisions about whether to keep a mission alive should be based on what some refer to as a real options perspective, which takes into account the uncertainties of the mission and the costs of both entry and exit. …

Journal Article
TL;DR: In this article, the authors describe a survey in the Indian state of Rajasthan in which tribal girls gave strong answers to objective questions such as "Did you have a bath today?" but much weaker answers to subjective questions such as "Do you think Nabile, who steals from you, is your friend?"
Abstract: THE CHALLENGES OF poverty and development have long been regarded in terms of transitive relationships, in which the rich help the poor because the poor are not seen as able to help themselves. This view of the poor assumes they have mainly needs and no assets. With so many people believing this view, it isn't surprising that the poor themselves share the assumption that they can't play any significant role in improving their own condition. Belief in the powerlessness of the poor cripples not simply the poor, but both the left and the right, with each looking to the mechanistic solutions of government or market. Neither has an active strategy for empowering the poor themselves to play significant roles in promoting change. When viewed in distributional terms--the left in terms of equality, the right in terms of mobility--justice is impossible, because equality is itself impossible and because when some "get ahead," others are "left behind." When justice is understood, however, in terms of empowerment, it becomes possible for everyone. With empowerment, the case for compulsion to achieve equality disappears, and no one needs to be left behind. With empowerment, the distributional focus disappears in favor of values higher than the accumulation of wealth as the defining quality, in public policy, of people's lives. Observing empowerment may be easiest in the traditional and tribal societies that have become the new priority concerns of U.S. foreign and security policy. In traditional and tribal societies people live with preconscious concepts of self and live through roles prescribed by tradition, with few or no chances to break free in search of a better life. Because tradition dominates everything people do, these societies are passive, fatalistic, and disempowered in their nature. The issues become clear in specific examples. In the very tribal state of Rajasthan in India, traditional roles define all perceptions for many people. 
In a recent survey, therefore, tribal girls had no trouble giving strong answers to objective questions such as "Did you have a bath today?" but had more trouble with more subjective questions, which require strong opinions. As an example, when tribal girls are asked, "Do you think Nabile, who steals from you, is your friend?" they give much weaker answers than on the objective questions. Their subservient, role-defined answers highlight their overwhelmingly powerless and subservient roles. An experiment underway for almost a decade in two states of India is producing powerful evidence that the poor can be empowered to move outside and beyond their traditional roles. They can start to pursue individual aspirations and dreams while also reaching out to each other, in connection, as citizens. These results in India are encouraging indicators that may begin to change how we think and talk about poverty and development, and hopefully about other, larger issues, including the promotion of change in tribal societies and even the reform of governments. Empowerment, left and right "EMPOWERMENT" AT DIFFERENT times has been prominent in the ideas of both left and right about how to help the poor. In the mid-1960s the Office of Economic Opportunity (OEO), which became the central instrument of the War on Poverty, was the left's major instrument of empowerment, giving "maximum feasible participation" to the poor in the decision-making of Community Action Programs (CAPs). For the right, empowerment has emphasized "getting government off people's backs"--including proposals to give people alternatives to failing government institutions--e.g., school vouchers and charter schools. For example, Empower America, an organization led by Jack Kemp and Bill Bennett, focused most of its energy on fighting big government. 
But OEO lasted less than a decade, and while some CAPs persisted, showing the potential of empowerment with good leadership and community support, most did too little empowering and too little integrating of the poor into the mainstream society. …

Journal Article
TL;DR: The authors argue that poor countries do not have the resources to build weaker versions of Western governments; instead, they will have to build sustainable governments commensurate with their resources, which will necessarily be governments of more limited function.
Abstract: THE SHIFTING IDEOLOGICAL winds of foreign aid donors have driven their policy towards governments in poor countries. Donors supported state-led development policies in poor countries from the 1940s to the 1970s; market and private-sector driven reforms during the 1980s and 1990s; and returned their attention to the state with an emphasis on governance and government social spending thereafter. Poor countries--sometimes called "low-income economies" or "least-developed countries"--have over the decades been a proxy battleground for the Western left and the right, with heated debates about the merits of infant industry protection, privatization of public utilities, government-provided health care, and the role of government more generally. Both liberals and conservatives in the West have more in common than they realize, however. Their policy preferences have evolved in the richest countries in the world, with well-financed governments that carry out an ever-growing number of functions under the rule of law. Much of their sense of what is minimally acceptable in government is a relatively recent product of wealth. Yet they do not hesitate to apply their home-grown standards to governments in poor countries. While it may seem obvious that the governments of poor countries are themselves poor, donors have largely failed to appreciate the enormity of the gap between the revenues of poor country governments and of their own governments. They have taken for granted the government-provided prerequisites that make their favorite policies work at home. When their policy prescriptions fail in poor countries, donors blame the failure on corruption and weak political will for implementation. But even with clean government and the best will, many of their policy prescriptions are unlikely to succeed. The poorest countries do not have the resources to build weaker versions of Western governments. 
Their attempt to do so spreads government too thin, making it impossible for the government to deliver on its commitments and making both law and government policy declarations aspirational. Instead, poor countries will have to build sustainable governments commensurate with their resources, which will necessarily be governments of more limited function. Donors shrink from the hard choices, and their reluctance to acknowledge them undermines the quality of government in poor countries and deepens poor country aid dependence. Development and government BOOKS AND ARTICLES that trace the intellectual history of development tell how development policy has been shaped by ideas of the role of government in richer countries. The story they tell divides development thinking into three eras: an initial era of state-led development; the "Washington Consensus" era, which focused on government retrenchment and market solutions; and the current era, which has been characterized as post-consensus or no consensus, but in which the pendulum swung back towards a more important role for government. The idea that government had an important role to play in managing the economy, educating the people, providing social safety nets and reducing inequality was well accepted in the United States and other industrialized countries when most poor countries gained independence. The Great Depression and World War II led the United States to adopt Keynesian economic policies, intervene in the economy, and begin to create social safety nets first for returning veterans and then for others. Alternative models available to leaders of newly independent countries were even more statist--European and Latin American socialism, or Soviet or Chinese communism. Initially, poor governments attempted to fill an expansive role in leading development.
They regulated international trade, currency exchange, and domestic markets; established state-owned enterprises; sought to protect and nourish infant industries; and sought to raise revenue for infrastructure construction and the provision of social services, such as education and the delivery of clean water. …

Journal Article
TL;DR: The tax extenders sit alongside the centerpiece of temporary tax policy in the U.S.: our current marginal rates on earned income, enacted as a ten-year policy in 2001 during the George W. Bush administration, modified in 2003, and extended for two additional years in 2010 by President Obama and the Democratic 111th Congress.
Abstract: THE U.S. FEDERAL tax code is in desperate need of reform. In this year's presidential election, both Republican nominee Mitt Romney and President Obama made corporate tax reform an issue in their campaigns, and it was one issue on which the candidates found more common ground than difference, with proposals for lower rates and a shift to--or, in the president's case, at least openness to--a territorial corporate tax system. There is also broad agreement that the individual side of the U.S. tax code is in disarray. Setting aside the political arguments about who is or is not paying their fair share of income taxes, over the past decade the U.S. has imposed on itself a regime of temporary tax policymaking. While a favorite statistic of would-be tax reformers is that there have been more than 15,000 changes to the tax code since the last overhaul of the federal income tax system in 1986, (1) the defining characteristic of income tax policy in this country is captured by a 2011 publication from the Joint Committee on Taxation (JCT). The JCT document listed expiring federal tax provisions by year over the decade from 2010 to 2020. Not including temporary disaster relief tax breaks, the three years beginning with 2010 saw the expiration of 31, 56, and 37 provisions of the federal tax code, respectively. (2) Among the most perishable federal tax laws is the set of tax provisions, primarily affecting businesses, which have earned the moniker "tax extenders" because Congress habitually extends them year by year. The tax extenders sit alongside the centerpiece of temporary tax policy in the U.S.: our current marginal rates on earned income. Enacted as a ten-year policy in 2001 during the George W. Bush administration, modified in 2003, and extended for two additional years in 2010 by President Obama and the Democratic 111th Congress, this tax policy has become a partisan flashpoint and a source of significant uncertainty for U.S. taxpayers. If the Bush (or Bush-Obama) tax rates are allowed to
expire under current law at the end of the 2012 calendar year, the marginal rates on earned income will increase from 25, 28, 33, and 35 percent to 28, 31, 36, and 39.6 percent, and the 10 percent tax bracket will be eliminated. All this temporary tax policymaking has culminated in more than three dozen expiring tax provisions that form strata in what has come to be known as the fiscal cliff. The fiscal cliff is a set of federal fiscal policies, including tax increases, new taxes, and Medicare provider payment cuts enacted with the Affordable Care Act, along with impending cuts to national defense and discretionary spending enacted as part of the sequestration provisions of the Budget Control Act of 2011, that are scheduled to go into effect on January 1, 2013, or sometime during that year. In addition to the marquee increases in marginal income tax rates, the fiscal cliff comprises a spate of other expiring tax provisions, including: the American Opportunity Tax Credit, the doubled Child Tax Credit (now $1,000, up from $500 per child), enhanced refundability of the Child Tax Credit, expansion of the Earned Income Tax Credit (EITC), elimination of phase-outs for itemized deductions and personal exemptions, reduction in the marriage penalty, lower capital gains rates (15 percent instead of 20 percent), a 15 percent dividend tax rate rather than taxation as ordinary income, and the estate tax reverting from 35 percent over $5 million to 55 percent over $1 million. Together, the cost of all income tax provisions will be $110 billion for 2013, $340 billion for 2013-14, and $2.8 trillion for the ten-year period 2013-22. Add to that the expiration of the 2 percent payroll tax holiday (2013 cost: $90 billion) and a set of tax extenders including the R&E tax credit and the alcohol fuel tax credit ($30 billion in 2013, $455 billion from 2013 to 2022). (3) In total, if all fiscal cliff provisions are allowed to expire or take effect, the Congressional Budget Office (CBO)
estimates that the U …

Journal Article
TL;DR: The de Chevigny family as discussed by the authors was one of the first families to resist the occupation of Alsace-Lorraine during World War I. But this time, the international situation was different: things had gone too far between France and Germany.
Abstract: EUROPEANS DON'T COME from Venus. They are the conflicted inheritors of a long military tradition which still survives--but which nearly devastated their continent, leaving in its trail a complex relationship to war. In 1870, as a result of the French defeat, the Germans invaded France and annexed its eastern region of Alsace-Lorraine, home of my family. But my great grandfather, Andre de Chevigny, was born French rather than German, because his mother would always make sure to cross the French border a few weeks before delivery so as to avoid giving birth to little Germans. Andre attended the Saint-Cyr military academy, class of 1897, and became a colonial officer. He took part in the fight against the Boxer rebellion in Beijing, and administered various territories in Indochina and in Madagascar. In 1914, when World War I broke out, Germany invaded eastern France again (Andre had moved to the French part of Lorraine in the meantime), this time with considerable violence. In the nearby small town of Longuyon, more than 80 civilians were executed by German soldiers, including the mayor and the priest. While Andre's manor was pillaged and occupied by Bavarian and then Prussian troops, who turned it into a hospital, he was on his way to Turkey, for the Dardanelles expedition (Gallipoli). While trying to seize Koum-Kale from the Turks and the Germans, he was shot and fell in the water. He was rescued at the last minute by one of his Senegalese soldiers and slowly recovered from his wounds in a military clinic in Alexandria, Egypt. He went on to fight in Verdun in 1917, where he suffered mustard gas attacks, and in the Somme in 1918, where he was severely injured again. He was lucky to be alive at the end of the war, unlike other men in his family and so many of his fellow officers--indeed, the entire Saint-Cyr class of 1914 was wiped out by the war. 
After the war, he spent a lot of time in the administrative process of obtaining reparations to rebuild his property. In 1940, when World War II broke out on the Western front, Lorraine was invaded again, the family property occupied and damaged again, and Andre's son Pierre de Chevigny (my grandfather) fought in the campaign of France. But his antiaircraft platoon was no match for Hitler's Stukas, and he had to retreat until the armistice. After being demobilized, he took on the management of a regional youth organization for the Vichy regime. In 1942, this organization became a front for a Resistance movement called Alliance. Alliance was affiliated with the British intelligence services, and Pierre was reporting on German aircraft movements in Lyon. He and his young wife were arrested by Klaus Barbie's Gestapo in the summer of 1943 and brutally interrogated in Paris. While my grandmother was released to give birth to her first child (my mother), Pierre was sent to the Buchenwald concentration camp in January 1944. Even though he was not part of the underground communist organization which was running daily life in the camp with an iron fist (and tacit acquiescence from the Nazi guards), he survived the horrific experience of Buchenwald. In April 1945, the 80th Infantry Division of General Patton's Third Army took control of the camp, and liberated him. But his return home was tainted with sadness. His younger brother had died a few months before fighting the Germans after landing in Provence, as part of the French African army. Two of his brothers-in-law also died for France. As his father had done 25 years earlier, he spent considerable time in the process of obtaining reparations to rebuild his property. But this time, the international situation was different. Things had gone too far between France and Germany. Even in the disputed and patriotic region of Lorraine, there was a solid European movement, in which Pierre de Chevigny took part.
He embraced a political career, was elected a senator, and was part of the NATO Parliamentary Assembly in the 1960s--where he cooperated with his American and West European, especially German, counterparts, to build a militarily strong and united West. …

Journal Article
TL;DR: In the immediate aftermath of his 2010 election as the newest senator from Utah, Mike Lee spoke before a crowd of enthusiastic practitioners, scholars, and students at the Federalist Society National Lawyers Convention, and he ended his remarks with the following pledge: "I will not vote for a single piece of legislation that I can't reconcile with the text and the original understanding of the U.S. Constitution." as discussed by the authors.
Abstract: In the immediate aftermath of his 2010 election as the newest senator from Utah, Mike Lee spoke before a crowd of enthusiastic practitioners, scholars, and students at the Federalist Society National Lawyers Convention. Senator Lee focused on the role of Congress in constitutional interpretation, and he ended his remarks with the following pledge: "I will not vote for a single piece of legislation that I can't reconcile with the text and the original understanding of the U.S. Constitution." The senator's statement rejected the idea that the Supreme Court is the only relevant constitutional interpreter in the federal system and struck at the heart of the "living Constitution," the notion that the original meaning of the Constitution is not binding on today's government officials. By requiring adherence to the original meaning of the constitutional text, Senator Lee sided with originalism. The late scholar Gary Leedes once complained that while originalists ask the federal judiciary to be originalist, they "permit the electorally accountable officials substantial leeway. The Congress can interpret the tenth amendment and the necessary and proper clause virtually as it pleases." Senator Lee's speech represents a forceful reply to Leedes's challenge: Congress must be originalist, too. The senator's pledge highlights a remarkable fact about American constitutionalism today: Only a generation removed from the constitutional revisions of the Warren and Burger Courts, originalism has not only established itself as a respectable interpretive theory in the federal judiciary, but it has also been taken up by some members of Congress. Even a major-party presidential candidate, Newt Gingrich, has pledged that as president he would interpret the Constitution using originalism. 
Such a state of affairs was unthinkable decades ago when, as Judge Robert Bork characterized the conventional wisdom of the era, lawyers came to "expect that the nature of the Constitution [would] change, often quite dramatically, as the personnel of the Supreme Court change[d]." But it was precisely because of an article by then-Professor Bork that so much has changed and that Senator Lee's pledge was possible. Bork's 1971 article in the Indiana Law Journal, "Neutral Principles and Some First Amendment Problems," is widely recognized as having launched modern originalist theory. While Professor Noah Feldman has underlined the role Justice Hugo Black played in the development of modern originalism, it was not until Bork's article in 1971 that the modern originalist movement took flight. Thus, having just passed the 40th anniversary of that landmark essay, it is appropriate that we survey how modern originalism began, how it has changed, and what challenges lie ahead. The birth of modern originalism ALTHOUGH JUSTICE ANTONIN Scalia is fond of saying that originalism was once orthodoxy within the judiciary, Johnathan O'Neill, professor of history, helpfully reminds us that "traditional textual originalism and contemporary originalism should not be ahistorically equated." As O'Neill tells the story, what we might think of as originalism in the 18th and 19th centuries was heavily influenced by the traditional notions of statutory interpretation articulated by William Blackstone in his Commentaries on the Laws of England. Modern originalism, by contrast, focuses more on historical sources to determine the meaning of the constitutional text, such as the records of the state ratification debates. In this sense, modern originalism is much more of a historian's art than that of a Blackstonian lawyer, and it makes sense to think of modern originalism as distinct from the 19th-century brand that preceded it.
The distinguishing features of modern originalism could only be vaguely perceived in Bork's Indiana Law Journal article, which, as Bork said at the time, "did not offer a complete theory of constitutional interpretation" but rather set out to "attack a few points that may [have been] regarded as salient in order to clear the way for such a theory. …

Journal Article
TL;DR: The United States and Europe share common values and need to work together to protect and advance those values in the world, the author argues; the world has changed substantially since 2002, and the reflection of these fundamental impulses has changed as well.
Abstract: BOB KAGAN'S ESSAY, "Power and Weakness," was and remains brilliant. Funny and illuminating, it crystallized a set of thinking at a critical moment in history. And it stands the test of time: It still illuminates fundamental impulses in Europe and America. The world has changed substantially since 2002. And the reflection of these fundamental impulses has changed as well. Europe's post-modern self-absorption was an indulgence in 2002; now in 2012 Europe's self-absorption is fully warranted and indeed a vital U.S. interest. We cheer Europe on as it seeks to save itself, lest it bring down the entire "old world" global economy. Meanwhile, the United States' muscular assertions of 2002 have been replaced by retrenchment on the left and near neo-isolationism on the far right--causing justified worry among European allies. The description of a muscular, assertive U.S. foreign policy still attracts many in the U.S. foreign policy elite--but in an era of deficits, recession, and war fatigue, they lack broad voter support and the ability to assert their worldview. Neither situation is better. Yet despite the changes, Kagan's fundamental conclusion also remains the right one for today. The United States and Europe share common values, and need to work together to protect and advance those values in the world. We need to understand our differences, which do exist, but we must also get beyond them to make the world a better and safer place. Leaders matter THE PRINCIPAL OBJECTION I had to Kagan's article in 2002 was that it provided a static snapshot. While the analysis was spot-on, there was no reason to assume that things would stay the way they were. I believed that with good leadership it wouldn't take much for Europeans to combine hard and soft power more effectively, and to be strong allies with the United States. They had done it before.
And it wouldn't take much for the United States to work together with Europe, as it had before, convincing others and building support, rather than plowing ahead alone, and with a heavy reliance on military might. Things did indeed change: but little did I know that instead of good leadership righting the course, we would see bad leadership making things worse. As Europe grapples with its all-encompassing debt crisis and the U.S. ratchets down in Europe and pivots toward Asia, the transatlantic alliance is arguably in substantially worse shape today than it was when Kagan first wrote his article. It still remains the task of good leadership, on both sides of the Atlantic, to return the relationship from one of cooperation as necessary to one of strategic alliance out of shared values and purpose. 9/11 and Afghanistan I REMEMBER FIRST READING Kagan's essay as a penultimate draft circulating around the National Security Council in the late spring of 2002. It created quite a buzz: a combination of locker room giddiness for some, and an acknowledgement of the substance, combined with uneasy foreboding, among others. Before going further, it is important to recreate the context. I was serving as director for NATO and Western Europe. The allies were my beat. It was the time after 9/11 and before the war in Iraq. It was a time when the United States felt extraordinarily vulnerable, having been attacked by terrorists using airplanes as missiles, and now identifying the single greatest threat to the nation's security as terrorists using weapons of mass destruction. It was a time after the U.S. went to war in Afghanistan, but did so without building on NATO's decision to invoke its Article 5 commitment to collective defense for the first time in history. The allies "could not help us"--or so we reasoned--because the U.S.
had to rely on extraordinary capacities: special forces on horseback integrating seamlessly with satellite communications and precision guided bombs and missiles delivered from tens of thousands of vertical feet and many hundreds of horizontal miles away. …

Journal Article
TL;DR: Sollenberger and Rozell's The President's Czars traces the debate over the scope of executive power from the founding to the modern presidency, whose occupants, beginning with Theodore Roosevelt and Woodrow Wilson, came to view themselves as mediators of the national interest.
Abstract: MITCHEL A. SOLLENBERGER AND MARK J. ROZELL. The President's Czars: Undermining Congress and the Constitution. UNIVERSITY PRESS OF KANSAS. 356 PAGES. $39.95. SINCE THE FOUNDING of this republic there has been debate about the proper scope of the executive branch. Chastened by the tyranny of George III, the first independent state governments emphasized weak executives, and the Articles of Confederation prescribed none whatsoever. But the social and political turmoil of the 1780s taught the earliest generation that they had swung too far in the opposite direction--and the Constitution was basically a compromise between the extremes of no executives and a totalitarian monarchy. It called for an executive that would have vast powers in foreign affairs, great limits in both managing domestic policy and initiating war, and above all a dependence on both the Congress and the sovereign states (and, eventually, the whole people). The debate over a strong executive branch would not end with the ratification of the Constitution, as vigorous presidents like George Washington and above all Andrew Jackson induced fears among ardent republicans that a creeping monarchism was afoot in the New World. Indeed, the extent of executive power became a focal point of the so-called "Second Party System" of 1824-60, as the National Republicans (later the Whigs) blanched at the strong executive leadership of Jackson--King Andrew I, as he was derisively known--as well as James K. Polk. The Abraham Lincoln presidency during the Civil War was the strongest executive the country had seen to date, but after Reconstruction the executive fell into the background for the next generation. 
Civil service reform took from the president a major source of his political power--namely, patronage; the closeness of elections from 1876 through 1892 meant that no chief executive could really claim a governing mandate; and anyway the federal government had not yet claimed the kind of regulatory and redistributive powers needed to address the problems of industrialization, urbanization, and overexpansion into the West. In other words, the politics of the period were small, and so therefore was the executive branch. The progressive era brought a lasting change to this state of affairs. Presidents Theodore Roosevelt and Woodrow Wilson had a fundamentally different vision of the executive branch than their immediate predecessors, and indeed really any prior president going back to at least Jackson. They envisioned the presidency as the mediator of the national interest--something quite distinct from what our Congress-centered Constitution prescribes--and thus saw the occupant of the White House as a ceaseless source of activity: communicating to the public about what the national interest requires, placing pressure on recalcitrant legislators, taking an active lead as head of a national political party, and generally rallying the nation to whatever cause he deems important. With the exception of the presidencies of Warren Harding and Calvin Coolidge from 1921 through 1929, this view of the presidency has more or less obtained ever since. Republicans and Democrats, conservatives and liberals, have all subscribed to it--at least when their side resides at 1600 Pennsylvania Avenue. What's more, this view has taken hold as a normative ideal both in the academy and the public at large. The standard text for any presidential history class remains Richard Neustadt's Presidential Power, which unabashedly celebrates this modern presidency over the mere "clerkship" of the late 19th century.
What's more, presidential rankings by historians inevitably favor those commanders in chief who acted in a "modern" way--FDR, TR, Wilson, etc.--while leaders like Grover Cleveland and Coolidge are regularly dismissed as forgettable. How do we explain this change, in light of a written Constitution? After all, the very purpose of writing down the organizing principles of the government was to prevent slow alterations to the way politics is conducted. …

Journal Article
TL;DR: A group of prominent experts convened at Stanford University's Hoover Institution to assess the state of the nuclear enterprise, and concluded that the U.S. nuclear enterprise currently meets very high standards in its commitment to safety and security, but that the same cannot be said of the nuclear enterprise globally.
Abstract: THE TIMES WE live in are dangerous for many reasons. Prominent among them is the existence of a global nuclear enterprise made up of weapons that can cause damage of unimaginable proportions and power plants at which accidents can have severe, essentially unpredictable consequences for human life. For all of its utility and promise, the nuclear enterprise is unique in the vast quantities of destructive energy that can be released through blast, heat, and radioactivity. To get a better grip on the state of the nuclear enterprise, we convened a group of prominent experts at Stanford University's Hoover Institution. The group included experts on nuclear weapons, power plants, regulatory experience, public perceptions, and policy. This essay summarizes their views and conclusions. We begin with the most reassuring outcome of our deliberations: the generally held sense that the U.S. nuclear enterprise currently meets very high standards in its commitment to safety and security. That has not always been the case in all aspects of the U.S. nuclear enterprise. But safety begins at home, and while the U.S. will need to remain focused to guard against nuclear risks, the picture here looks relatively good. Our greatest concern is that the same cannot be said of the nuclear enterprise globally. Governments, international organizations, industry, and media must recognize and address the nuclear challenges and mounting risks posed by a rapidly changing world. The biggest concerns with nuclear safety and security are in countries relatively new to the nuclear enterprise, and the potential loss of control to terrorist or criminal gangs of the fissile material that exists in such abundance around the world. In a number of countries, confidence in civil nuclear energy production was severely shaken in the spring of 2011 by the Fukushima nuclear reactor plant disaster.
And in the military sphere, the doctrine of deterrence, which remains primarily dependent on nuclear weapons, is seen as being in decline due to the rise of nonstate actors such as al-Qaeda and terrorist affiliates that seek destruction for destruction's sake. We have two nuclear tigers by the tail. When risks and consequences are unknown, undervalued, or ignored, our nation and the world are dangerously vulnerable. Nowhere is this risk/consequence calculation more relevant than with respect to the nucleus of the atom. From Hiroshima to Fukushima THE NUCLEAR ENTERPRISE was introduced to the world by the shock of the devastation produced by two atomic bombs hitting Hiroshima and Nagasaki. Modern nuclear weapons are far more powerful than those early bombs, which presented their own hazards. Early research depended on a program of atmospheric testing of nuclear weapons. In the early years following World War II, the impact and the amount of radioactive fallout in the atmosphere generated by above-ground nuclear explosions was not fully appreciated. During those years, the United States and the Soviet Union conducted several hundred tests in the atmosphere that created fallout. A serious regulatory weak point from that time still exists in many places today, as the Fukushima disaster clearly indicates. The U.S. Atomic Energy Commission (AEC) was initially assigned conflicting responsibilities: to create an arsenal of nuclear weapons for the United States to confront a growing nuclear-armed Soviet threat; and, at the same time, to ensure public safety from the effects of radioactive fallout. The AEC was faced with the same conundrum with regard to civilian nuclear power generation. It was charged with promoting civilian nuclear power and simultaneously protecting the public.
Progress came in 1963 with the negotiation and signing of the Limited Test Ban Treaty (LTBT) banning all nuclear explosive testing in the atmosphere (initially by the United States, the Soviet Union, and the United Kingdom). …