
Showing papers in "Economic Perspectives in 2009"


Posted Content
TL;DR: In this paper, the authors consider whether prime and subprime loans responded similarly to recent home price dynamics and find that the performance of both has gotten substantially worse, with loans made in 2006 and 2007 defaulting at much higher rates and prime loans deteriorating especially quickly.
Abstract: Introduction and summary
We have all heard a lot in recent months about the soaring number of defaults among subprime mortgage borrowers; and while concern over this segment of the mortgage market is certainly justified, subprime mortgages account for only about one-quarter of the total outstanding home mortgage debt in the United States. The remaining 75 percent is in prime loans. Unlike subprime loans, prime loans are made to borrowers with good credit, who fully document their income and make traditional down payments. Default rates on prime loans are increasing rapidly, although they remain significantly lower than those on subprime loans. For example, among prime loans made in 2005, 2.2 percent were 60 days or more overdue 12 months after the loan was made (our definition of default). For loans made in 2006, this percentage nearly doubled to 4.2 percent, and for loans made in 2007, it rose by another 20 percent, reaching 4.8 percent. By comparison, the percentage of subprime loans that had defaulted after 12 months was 14.6 percent for loans made in 2005, 20.5 percent for loans made in 2006, and 21.9 percent for loans made in 2007. To put these figures in perspective, among loans originated in 2002 and 2003, the share of prime mortgages that defaulted within 12 months ranged from 1.4 percent to 2.2 percent and the share of defaulting subprime mortgages was less than 7 percent. (1)
How do we account for these historically high default rates? How have recent trends in home prices affected mortgage markets? Could contemporary observers have forecasted these high default rates?
Figure 1, panel A summarizes default patterns for prime mortgages; panel B reports similar trends for subprime mortgages. Both use loan-level data from Lender Processing Services (LPS) Applied Analytics. Each line in this figure shows the cumulative default experience for loans originated in a given year as a function of how many months it has been since the loan was made. Several patterns are worth noting. First, the performance of both prime and subprime mortgages has gotten substantially worse, with loans made in 2006 and 2007 defaulting at much higher rates. The default experience among prime loans made in 2004 and 2005 is very similar, but for subprime loans, default rates are higher for loans made in 2005 than in 2004. Default rates among subprime loans are, of course, much higher than default rates among prime loans. However, the deterioration in the performance of prime loans happened more rapidly than it did for subprime loans. For example, the percentage of prime loans that were 60 days or more overdue grew by 95 percent for loans made in 2006 compared with loans made in 2005. Among subprime loans it grew by a relatively modest 53 percent.
Home prices are likely to play an important role in households' ability and desire to honor mortgage commitments. Figure 2 describes trends in home prices from 1987 through 2008 for the ten largest metropolitan statistical areas (MSAs). This figure illustrates the historically high rates of home price growth from 2002 through 2005, as well as the sharp reversal in home prices beginning in 2006. One of the things we consider in this article is whether prime and subprime loans responded similarly to these home price dynamics. Although the delinquency rate among prime mortgages is high and rising fast, it is only about one-fifth the delinquency rate for subprime mortgages.
Unfortunately, however, this does not mean that total losses on prime mortgages will be just one-fifth the losses on subprime mortgages. The prime mortgage market is much larger than the subprime mortgage market, representing about 75 percent of all outstanding mortgages (International Monetary Fund, 2008), or a total of $8.3 trillion. (2) Taking the third quarter of 2008 as the starting point, we estimate that total losses from prime loan defaults will be in the neighborhood of $133 billion and that total losses from subprime loan defaults will be about $364 billion. …
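The loss figures above follow from simple balance-at-risk arithmetic: outstanding debt times the share of loans defaulting times the loss per defaulted dollar. The Python sketch below shows the mechanics; the outstanding balances come from the article, while the default shares and loss-given-default rates are illustrative assumptions chosen only so the outputs land near the article's totals.

```python
# Back-of-envelope sketch of the balance-at-risk arithmetic behind
# aggregate loss estimates like those quoted above. The outstanding
# balances come from the article; the default shares and loss-given-
# default rates are illustrative assumptions, not the authors'
# calibration (chosen so the outputs land near $133B / $364B).

def expected_losses(outstanding, default_share, loss_given_default):
    """Dollar losses = balance at risk x share defaulting x loss severity."""
    return outstanding * default_share * loss_given_default

PRIME_OUTSTANDING = 8.3e12            # from the article (~75% of all mortgage debt)
SUBPRIME_OUTSTANDING = 8.3e12 / 3.0   # subprime is roughly one-quarter of the total

prime_loss = expected_losses(PRIME_OUTSTANDING, 0.04, 0.40)        # assumed rates
subprime_loss = expected_losses(SUBPRIME_OUTSTANDING, 0.26, 0.50)  # assumed rates

print(f"prime:    ${prime_loss / 1e9:,.0f} billion")    # ~133
print(f"subprime: ${subprime_loss / 1e9:,.0f} billion") # ~360
```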

34 citations


Posted Content
TL;DR: In this article, the author presents the relevant facts about the recession of 1937 and assesses the competing explanations, concluding that monetary policy and fiscal policy do not explain the timing of the downturn but do account well for its severity and most of the recovery.
Abstract: Introduction and summary
The U.S. economy is beginning to emerge from a severe economic downturn precipitated by a financial crisis without parallel since the Great Depression. As thoughts turn to the appropriate path of future policy during the recovery, a number of economists have proffered the recession that began in 1937 as a cautionary tale. That sharp but short-lived recession took place while the U.S. economy was recovering from the Great Depression of 1929-33. (1) According to one interpretation, the 1937 recession was caused by premature tightening of monetary policy and fiscal policy prompted by inflation concerns. The lesson to be drawn is that policymakers should err on the side of caution. An alternative explanation is that the recession was caused by increases in labor costs due to the industrial policies that formed part of the New Deal--the policies of social and economic reform introduced in the 1930s by President Franklin D. Roosevelt. If a policy lesson can be drawn from this, it might have more to do with the dangers of interfering with market mechanisms.
The goal of this article is to present the relevant facts about the recession of 1937 and assess the competing explanations. Although overshadowed by its more dramatic predecessor, the recession of 1937 has received some attention before, in particular Roose (1954) and Friedman and Schwartz (1963). Then, as now, the competing explanations centered on fiscal policy, that is, the impact of taxation and government spending on the economy; monetary policy, or the management of currency and reserves; and labor relations policy, or more broadly government policy toward businesses.
The rest of this article is organized as follows. I first present the salient facts about the 1937 recession. I then review the competing explanations and finally provide a quantitative assessment of their likely contributions to the recession. I find that monetary policy and fiscal policy do not explain the timing of the downturn but do account well for its severity and most of the recovery. Wages explain little of the downturn and none of the recovery.
The recession
Before describing the salient features of the 1937 recession, I first take up the issue of its timing. The traditional National Bureau of Economic Research (NBER) business cycle dates put the peak of the recession in May 1937 and the trough in June 1938. Romer (1994) argues that there are inconsistencies in the way these dates were established over time, devises an algorithm that closely reproduces the dates of post-war business cycles, and applies it to the Miron and Romer (1990) industrial production series to produce new dates. In the case of the 1937 recession, Romer identifies August 1937 as the start of the recession. Cole and Ohanian (1999) implicitly use the same starting date when they state that industrial production peaked in that month. I will stick to the traditional date for several reasons. One is that Romer (1994) directs her argument mostly at cycles before 1927, when a shift in NBER methodology occurred. Another is that the NBER dating process considers a broader set of series than just industrial production. Roose (1954) lists the peaks of 40 monthly series and shows that 27 series peaked before August. Finally, industrial production as measured by the Board of Governors of the Federal Reserve System peaked in May 1937. There is no controversy over the end date of the recession, set by the NBER and Romer (1994) in June 1938.
Figure 1 plots real annual gross domestic product (GDP) per capita (population aged 16 years and older) over the twentieth century. The trend line follows that series' average growth rate over the periods 1919-29 and 1947-97, and is set to coincide with the series in 1929. This is the metric by which Cole and Ohanian (2004) show that the recovery after the Great Depression was weak, since the series does not return to trend until 1942. …

23 citations


Posted Content
TL;DR: In this article, the author looks at the history of the three Detroit automakers from their heyday in the 1950s through the present, providing helpful context for analyzing the current situation, and illustrates in broad strokes how the Detroit automakers lost nearly half of the market they once dominated.
Abstract: Introduction and summary
From the mid-1950s through 2008, the Detroit automakers, once dubbed the "Big Three"--Chrysler LLC, Ford Motor Company, and General Motors Corporation (GM)--lost over 40 percentage points of market share in the United States, after having dominated the industry during its first 50 years. From today's perspective, the elaborately designed tail fins that once adorned the Detroit automakers' luxury marques symbolized the pinnacle of their market power. Fifty years later, the Detroit automakers were playing catch-up to compete with Toyota's very successful entry into the hybrid car segment, the Prius. By 2008, Toyota, the largest Japanese automaker, had become the largest producer of vehicles worldwide--a position that had been previously held by GM for 77 consecutive years.
Currently, Chrysler, Ford, and GM, now collectively referred to as the "Detroit Three," find themselves in dire straits. The financial crisis that began in 2007 and the accompanying sharp deceleration of vehicle sales during 2008 raise serious challenges for all automakers. The current troubles of the Detroit Three, however, are also rooted in longer-term trends. In this article, I look at the history of the three Detroit automakers from their heyday in the 1950s through the present, providing a helpful context for analyzing the current situation. I illustrate in broad strokes how the Detroit automakers lost nearly half of the market they once dominated.
The auto industry has changed in many ways since the mid-1950s. The emergence of government regulation for vehicle safety and emissions, the entry of foreign producers of auto parts and vehicles, a dramatic improvement in the quality of vehicles produced, and the implementation of a different production system stand out. Part of the transformation of the North American auto industry has been a remarkable decline of market power for the Detroit automakers over the past five-plus decades (see figure 1). The industrial organization literature suggests that market shares can be a useful initial step in analyzing the competitiveness of an industry (see, for example, Carlton and Perloff, 1990, p. 739). (1) By that metric, the U.S. auto industry of the 1950s and 1960s was highly concentrated among a small number of companies and therefore not very competitive. On the one hand, the substantial market share decline experienced by the Detroit carmakers since then represents an increase in competition, resulting in more choices, tremendously improved vehicle quality, and increased vehicle affordability for consumers. On the other hand, the shift in market share from Detroit's carmakers to foreign-headquartered producers has had important regional economic implications. Traditional locales of automotive activity in the Midwest continue to decline as communities located in southern states, such as Kentucky and Tennessee, have seen a sizable influx of auto-related manufacturing activity. (2) For example, between 2000 and 2008, the U.S. auto industry (that is, assembly and parts production combined) shed over 395,000 jobs; 42 percent of these job losses occurred in Michigan alone. (3) These regional effects of the auto industry restructuring were heightened by the sharp industry downturn during 2008.
Today, the Detroit Three are fighting for their very survival in the face of a rapid cyclical downturn that extends to all major markets. No carmaker has been shielded from the economic downturn.
Even Toyota faces a downgrade of its long-term corporate credit rating. Yet the Detroit Three entered this recession at less than full strength, as they were already grappling with serious structural problems, such as the sizable legacy costs of their retired employees and their over-dependence on sales of large cars and trucks. In that way, cyclical and structural issues are currently intermingled. It turns out that the decline in the U. …

20 citations


Posted Content
TL;DR: Using samples from a variety of RFID-enabled credit cards, this study observes that the cardholder's name, and often the credit card number and expiration date, are leaked in plaintext to unauthenticated readers.
Abstract: Introduction
An increasing number of credit cards now contain a tiny wireless computer chip and antenna based on RFID (radio frequency identification) and contactless smart card technology. (1) The RFID-enabled credit cards permit contactless payments that are fast, easy, and often more reliable than magnetic stripe card transactions, and only physical proximity (rather than contact) is required between this type of credit card and the reader. An estimated 20 million RFID-enabled credit cards and 150,000 vendor readers are already deployed in the U.S. (Bray, 2006). According to Visa USA, "This has been the fastest acceptance of new payment technology in the history of the industry" (Bray, 2006). The conveniences of RFID-enabled credit cards also lead to new risks for security and privacy. Traditional (magnetic stripe) credit cards require visual access or direct physical contact for retrieving information, such as the cardholder's name and the credit card number. By contrast, RFID-enabled credit cards make these and other sensitive pieces of data available using a small radio transponder that is energized and interrogated by a reader.
Experimental results
Although RFID-enabled credit cards are widely reported to use sophisticated cryptography, (2) our experiments found several surprising vulnerabilities in every system we examined. We collected two commercial readers from two independent manufacturers and approximately 20 RFID-enabled credit cards issued in the last year from three major payment associations and several issuing banks in the U.S. We were unable to locate public documentation on the proprietary commands used by RFID-enabled credit cards. Thus, we reverse-engineered the protocols and constructed inexpensive devices that emulate both the credit cards and readers. The experiments indicate that all the cards are susceptible to live relay attacks (in which an attacker relays verbatim a message from the sender to a valid receiver of the message), all the cards are susceptible to disclosure of personal information, and many of the cards are susceptible to various types of replay attacks (a form of network attack in which a valid data transmission is maliciously or fraudulently repeated or delayed). In addition, we successfully completed a proof-of-concept cross-contamination attack. Given the size and diversity of our sample set, we believe that our results reflect the current state of deployed RFID-enabled credit cards; however, card issuers continue to innovate and will likely add new security features. Our findings are not necessarily exhaustive, and there may exist cards that use security mechanisms beyond what we have observed.
Background
In this section, we provide some background on the current state and standards of RFID technology and its deployment throughout the United States.
Scale of current deployment
Several large chain stores in the U.S. have deployed many thousands of RFID readers for credit cards: CVS Pharmacies (all 5,300 locations), McDonald's (12,000 of 13,700 locations), the Regal Entertainment Group of movie theaters, and several other large vendors (Koper, 2006; and O'Connor, 2006). Reports estimate that 20 million to 55 million RFID-enabled credit cards are in circulation, which is 5 percent to 14 percent of all credit cards (Averkamp, 2005; Bray, 2006; and Koper, 2006). In addition to traditional payment contexts, RFID-enabled credit cards are becoming accepted in other contexts such as public transportation (Heydt-Benjamin, Chae, et al., 2006).
The New York City subway (Metropolitan Transit Authority, 2006) recently started a trial of 30 stations accepting an estimated 100,000 RFID-enabled credit cards (SourceMedia Inc., 2006). A participant in this trial uses her credit card as a transit ticket as well as a credit card in place of the traditional magnetic-stripe-based dedicated subway tickets. Integration of radio frequency technology into existing credit card infrastructure In a typical deployment, an RFID-enabled credit card reader is attached to a traditional cash register. …
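The replay vulnerability described above is easy to see in miniature. The toy protocol below is our own simplification, not the proprietary command set the authors reverse-engineered: because the card's response is static plaintext, with no reader authentication and no per-transaction nonce, a single skim suffices for later impersonation.

```python
# Toy model of the replay weakness described above. This is a deliberate
# simplification, not the actual contactless command set: the card
# answers a read command with static plaintext fields, so one skim is
# enough to impersonate the card later.

class ToyRFIDCard:
    def __init__(self, name, pan, expiry):
        self.record = {"name": name, "pan": pan, "expiry": expiry}

    def respond(self, command):
        # Every reader, legitimate or hostile, gets the same answer.
        return dict(self.record) if command == "READ_RECORD" else None

def skim(card):
    """An unauthenticated reader harvests the card's static response."""
    return card.respond("READ_RECORD")

def emulate(stolen, command):
    """A card emulator replays the harvested response verbatim."""
    return dict(stolen) if command == "READ_RECORD" else None

card = ToyRFIDCard("ALICE SMITH", "4111111111111111", "12/10")
stolen = skim(card)  # one radio pass near the victim suffices
assert emulate(stolen, "READ_RECORD") == card.respond("READ_RECORD")
print("replayed response is indistinguishable from the real card's")
```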

14 citations


Posted Content
TL;DR: In this paper, the authors discuss three types of payments fraud: first-party fraud, which is the abuse of account privileges by the account holders themselves, or the acquisition or expansion of those privileges by deceitful means.
Abstract: It is a great pleasure to be addressing this august group. As some of you know, I began my career at the Federal Reserve back in 1982. So speaking to you is like a homecoming for me. I have been fortunate in my career to participate in the U.S. banking economy from three perspectives: at the Fed, obviously a policymaking central bank; at Citibank, a lender; and at two financial technology providers, including 12 years at IBM (International Business Machines) and the last year at Fair Isaac, a leader in decision management technology. From these three perspectives, I have seen the tremendous collaboration that exists in the banking industry on the issue of fraud. However, from my current vantage point, I am also able to see a disturbing trend: More companies are declining to participate in some of these collaborative, consortium-based best practices. The reason is simple: They see a competitive advantage to keeping their information and experience to themselves.
This raises some key issues for the financial services industry. Do we want to fight fraud or move it around? That is, do we want to reduce the amount of fraudulent activity overall, or are we content to just have the most advanced banks move it to the less advanced banks, and to shift it from well-protected channels to less protected channels? Does a failure to maximize our effectiveness at fraud prevention have even deeper consequences? Which people, which groups, and which activities might we be funding if we allow fraud to persist? And are private industry initiatives enough, or is there a role in fraud prevention for public sector initiatives, mandates, or intervention?
I won't leave you guessing as to where I'm going with this. My experience has taught me the following.
* Fraud is too important to the economic and social well-being of our country to let it persist and grow.
* Individual gains must be balanced by the collective good.
* It is better to stop a fraudster than send him to the bank next door.
Now, my company is in the business of giving banks a competitive advantage. We have used consortium approaches to defeat fraud. We believe these collaborative approaches, along with ubiquity in protection, are essential ingredients in the fraud-fighting formula. They are necessary to reduce the "balloon effect" in fraud prevention, where progress in fighting a segment of fraud succeeds primarily in moving fraud from one place to another. We win when fraud loses--and fraud loses when we fight it together.
Types of payments fraud
Let me start by simply defining the key areas of payments fraud I'm discussing here. Fundamentally, we can divide fraud into two categories. There is first-party fraud, which is the abuse of account privileges by the account holders themselves, or the acquisition or expansion of those privileges by deceitful means. There is also third-party fraud, which is often identity fraud, or the abuse of one person's account by another. For the purposes of this talk, I am not discussing insider fraud, which is the misuse of a customer account by bank employees or others involved in the provision and distribution of financial services products.
First-party fraud typically involves your customer opening an account with you, with the intention of violating the terms of the account agreement. It can also involve a borrower selling his information to criminals or constructing a fraudulent identity or deceitful credentials for gaining credit.
This type of fraud very often shows up in the collections queue as bad debt. But it is not traditional bad debt--when it is intentional, it is fraud. Third-party fraud is what we usually think of when we consider fraud. This is stolen identities, the use of lost or stolen cards, and the counterfeiting of cards or other means of account access. It encompasses a wide range of techniques. …

14 citations


Posted Content
TL;DR: Using ideas from economic theory, the authors explore the concept of efficient confidentiality. The costs of identity theft are large and easy to find: when the time and out-of-pocket costs incurred to resolve the crime are added in, identity theft cost U.S. consumers $61 billion in 2006 (Schreft, 2007).
Abstract: Introduction and summary
A byproduct of improved information technology has been a loss of privacy. Personal information that was once confined to dusty archives can now be readily obtained from proprietary data services, or it may be freely available (and, as Facebook users know, often voluntarily provided and accessible) through the Internet. While the increased collection and dissemination of personal data have undoubtedly provided economic benefits, they have also diminished people's sense of privacy and, in some cases, given rise to new types of crime. Is this loss of privacy good or bad? Press accounts repeatedly argue the latter: Too much data are being collected in ways that are too easy for criminals to access. (1) But in a thought-provoking essay, Swire (2003) argues that a meaningful answer to this question requires some notion of efficient confidentiality of personal data--that is, of a degree of privacy that properly balances the costs and benefits of our newfound loss of anonymity. In this article, we explore the concept of efficient confidentiality, using some ideas from economic theory.
Loss of privacy: The costs are large and easy to find
The most dramatic consequence of the increased availability of personal information has been the emergence of a new form of payment fraud, identity theft. The 1998 U.S. Identity Theft and Assumption Deterrence Act (ITADA) defines identity theft as the knowing transfer, possession, or usage of any name or number that identifies another person, with the intent of committing or aiding or abetting a crime. Traditional varieties of identity theft, such as check forgery, have long flourished, but over the last decade, identity theft has become a major category of crime and a significant policy issue. (2) Identity theft takes many guises, but it is divided into two general categories: existing account fraud and new account fraud. Existing account fraud occurs when a thief uses an existing credit card or similar account information to illicitly obtain money or goods. New account fraud (traditionally) occurs when a thief makes use of another individual's personal information to open one or more new accounts in the victim's name. Both types of identity theft depend on easy access to other people's data.
Today, identity theft is big business. A study conducted by the Federal Trade Commission (FTC), encompassing both new account fraud and existing account fraud, indicates that in 2006 identity thieves stole about $49.3 billion from U.S. consumers. (3) When the time and out-of-pocket costs incurred to resolve the crime are added in, identity theft cost U.S. consumers $61 billion in 2006 (Schreft, 2007). Even this is a conservative estimate, however, as it omits certain categories of identity theft and some types of costs that are not generally known to consumers. For example, an increasingly prevalent type of identity theft is fictitious or synthetic identity fraud, in which a thief combines information taken from a variety of sources to open accounts in the name of a new fictitious identity (Cheney, 2005; and Coggeshall, 2007). There is no single victim, in contrast to traditional types of identity theft, but retailers and ultimately consumers end up bearing the cost. Much of the data used in identity theft is obtained through low-tech channels.
In consumer surveys, victims who know how their identifying information was stolen commonly attribute identity theft to stolen wallets or mail or to personal acquaintance with the identity thief (Kim, 2008). In these same surveys, however, the large majority of identity theft victims are unable to pinpoint how the thief obtained their data. Available evidence suggests that much of these data are obtained through illicit access (called "breaches") of commercial or government databases. Statistics on data breaches are available from information security websites, such as Attrition. …

13 citations


Posted Content
TL;DR: In this paper, the author examines the role of structural change in movements of the U.S. unemployment rate and finds that unemployment is influenced by more than aggregate conditions alone: shifts in the composition of employment across industries also move the rate.
Abstract: Introduction and summary
The Federal Reserve, in its policy analysis, must carefully weigh incoming data and evaluate likely future outcomes before determining how best to obtain its twin goals of employment growing at potential and price stability. It is tempting to regard high or rising unemployment as a sign of a weak economy. And, normally, a weak economy is one with little inflationary pressure and, therefore, room for expansionary monetary policy to stimulate growth. But unemployment is influenced by more than simply aggregate conditions. In a dynamic economy that responds to changing opportunities, some industries are shrinking while others are growing. Labor must flow from declining industries to expanding ones. This adjustment takes time. It takes time for employees in declining sectors to learn about new opportunities in other industries, acquire necessary skills, apply for job openings, and potentially relocate. And during this period of adjustment, the unemployment rate rises as waning industries lay off workers. Thus, the unemployment rate may increase or decrease, even though the aggregate state of the economy remains stable, simply because the labor market adjusts to shifting patterns of production.
For policymakers, it is essential to decipher what portion of a rising unemployment rate is due to a cyclical slowdown in which many sectors of the economy are simultaneously affected, as opposed to a structural realignment in production in which particular sectors of the economy are affected. The two factors ideally should result in different policy responses. If unemployment is rising because of a weak economy, the textbook response is for the Fed to take a more accommodative policy stance. If, instead, the unemployment rate is rising because of underlying compositional shifts in employment, an easing of monetary policy may discourage declining industries from contracting by keeping them marginally profitable, impeding the adjustment process. Furthermore, this policy may also encourage inflation as employers across a broad spectrum of industries compete for scarce labor resources. Thus, comprehending the underlying sources of movements in the unemployment rate is more than just a theoretical exercise: It has practical implications for monetary policy.
As a first step toward evaluating the role of structural change, I need to be able to measure it. Lilien (1982) suggests a dispersion measure that is a weighted average of squared deviations of industry employment growth rates from aggregate employment growth. Abraham and Katz (1986) argue that Lilien's measure does not properly account for cyclical shifts in employment across industries, instead conflating cyclical variation with structural change. When aggregate economic conditions are weak, certain sectors are affected more than others because demand for their products is more cyclically sensitive, but as soon as economic conditions improve, these sectors will also recover more quickly. The Lilien measure thus captures both cyclical variation in employment responses and structural changes in the composition of employment across industries, making it impossible to disentangle the importance of the two effects on the measure of dispersion.
The sectoral shifts hypothesis has been revisited more recently by Phelan and Trejos (2000) and Bloom, Floetotto, and Jaimovich (2009). Phelan and Trejos (2000) calibrate a job creation/job destruction model to data from the U.S.
labor market to suggest that permanent changes in sectoral composition can precipitate aggregate economic downturns. Bloom, Floetotto, and Jaimovich (2009) examine the effect of what they term "uncertainty shocks" on business cycle dynamics, arguing that increases in uncertainty lead to a decline in economic activity in affected industries, followed by a rebound. Increasing uncertainty, in their view, causes firms to be more cautious in their hiring and investment decisions and impedes the reallocation of capital across sectors. …
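For concreteness, here is a minimal Python rendering of the dispersion measure described above: a weighted average of squared deviations of industry employment growth from aggregate growth (Lilien, 1982). The two-industry data are made up, and the exact weighting convention varies across implementations.

```python
import numpy as np

# Minimal sketch of Lilien's (1982) dispersion index as described in the
# abstract: a weighted average of squared deviations of industry
# employment growth from aggregate employment growth. The two-industry
# numbers are invented; weighting conventions vary across implementations.

def lilien_dispersion(emp_prev, emp_curr):
    """sigma_t = sqrt( sum_i s_it * (g_it - g_t)^2 )."""
    emp_prev = np.asarray(emp_prev, dtype=float)
    emp_curr = np.asarray(emp_curr, dtype=float)
    shares = emp_curr / emp_curr.sum()                       # s_it
    g_ind = np.log(emp_curr) - np.log(emp_prev)              # industry growth g_it
    g_agg = np.log(emp_curr.sum()) - np.log(emp_prev.sum())  # aggregate growth g_t
    return float(np.sqrt(np.sum(shares * (g_ind - g_agg) ** 2)))

# Hypothetical economy: manufacturing shrinks while services grow.
print(lilien_dispersion(emp_prev=[100, 100], emp_curr=[90, 112]))
```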

13 citations


Posted Content
TL;DR: In this paper, the author argues that aggressiveness is not a general feature of robust control and that the results from early work on robust monetary policy stem from particular features of the economic environments those papers studied.
Abstract: Introduction and summary
Policymakers are often required to make decisions in the face of uncertainty. For example, they may lack the timely data needed to choose the most appropriate course of action at a given point in time. Alternatively, they may be unable to gauge whether the models they rely on to guide their decisions can account for all of the issues that are relevant to their decisions. These concerns obviously arise in the formulation of monetary policy, where the real-time data relevant for deciding on policy are limited and the macroeconomic models used to guide policy are at best crude simplifications. Not surprisingly, a long-standing practical question for monetary authorities concerns how to adjust their actions given the uncertainty they face.
The way economists typically model decision-making under uncertainty assumes that policymakers can assign probabilities to the various scenarios they might face. Given these probabilities, they can compute an expected loss for each policy--that is, the expected social cost of the outcomes implied by each policy. The presumption is that policymakers would prefer the policy associated with the smallest expected loss. One of the most influential works on monetary policy under uncertainty based on this approach is Brainard (1967). That paper considered a monetary authority trying to meet some target--for example, an inflation target or an output target. Brainard showed that under certain conditions, policymakers who face uncertainty about their economic environment should react less to news that they are likely to miss their target than policymakers who are fully informed about their environment. This prescription is often referred to as "gradual" policy. Over the years, gradualism has come to be viewed as synonymous with caution. After all, it seems intuitive that if policymakers are unsure about their environment, they should avoid reacting too much to whatever information they do receive, given that they have only limited knowledge about the rest of the environment.
Although minimizing expected loss is a widely used criterion for choosing policy, in some situations it may be difficult for policymakers to assign expected losses to competing policy choices. This is because it is hard to assign probabilities to rare events that offer little historical precedent by which to judge their exact likelihood. For this reason, some economists have considered an alternative approach to policymaking in the face of uncertainty that does not require knowing the probability associated with all possible scenarios. This approach is largely inspired by work on robust control of systems in engineering. Like policymakers, engineers must deal with significant uncertainty--specifically, about the systems they design; thus they are equally concerned with how to account for such uncertainty in their models. Economic applications based on this approach are discussed in a recent book by Hansen and Sargent (2008). The policy recommendations that emerge from this alternative approach are referred to as robust policies, reflecting the fact that this approach favors policies that avoid large losses in all relevant scenarios, regardless of how likely they are.
Interestingly, early applications of robust control to monetary policy seemed to contradict the gradualist prescription articulated by Brainard (1967), suggesting that policymakers facing uncertainty should respond more aggressively to news that they are likely to miss their target than policymakers facing no uncertainty. Examples of such findings include Sargent (1999), Giannoni (2002), and Onatski and Stock (2002); their results contradict the conventional wisdom based on Brainard (1967), which may help to explain the tepid response to robust control in some monetary policy circles. In this article, I argue that aggressiveness is not a general feature of robust control and that the results from early work on robust monetary policy stem from particular features of the economic environments those papers studied. …
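Brainard's attenuation result, referenced throughout this abstract, fits in a few lines. The sketch below is the standard textbook version under quadratic loss and multiplicative uncertainty; the notation is ours, not the article's.

```latex
% Standard textbook statement of Brainard's (1967) attenuation result
% (our notation, not the article's). A policymaker picks an instrument
% u to hit a target y* when the policy multiplier k is uncertain:
% y = k*u + e, with E[k] = kbar, Var(k) = sigma_k^2, E[e] = 0.
\begin{align*}
\min_{u}\; E\big[(y - y^{*})^{2}\big]
  &= \min_{u}\;\Big[(\bar{k}u - y^{*})^{2} + \sigma_{k}^{2}u^{2} + \sigma_{\varepsilon}^{2}\Big] \\
\Longrightarrow\quad u^{*}
  &= \frac{\bar{k}\,y^{*}}{\bar{k}^{2} + \sigma_{k}^{2}}
  \;<\; \frac{y^{*}}{\bar{k}} \qquad \text{whenever } \sigma_{k}^{2} > 0,
\end{align*}
% so multiplicative uncertainty shrinks the optimal response: the
% "gradualism" the article describes. Early robust-control papers,
% minimizing worst-case rather than expected loss, obtained the
% opposite, more aggressive response.
```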

12 citations


Posted Content
TL;DR: In this article, the author evaluates the efficiency of fraud liability allocation rules in current card-based payment systems, focusing on the broader category of payments fraud, whether or not it is precipitated by identity theft.
Abstract: Introduction and summary
In the absence of a significant (and right now unforeseeable) shift in the retail payments landscape in the United States, consumers will continue to reach consistently (and often) for their debit and credit cards. They will use these cards when paying for goods and services in face-to-face, Internet, mail order, and telephone order transactions. Likewise, criminals will continue to use tried-and-true tactics and will develop innovative methods to perpetrate payment card fraud. At the intersection of consumers conducting legitimate card transactions and fraudsters pursuing their illegal ends is a tangled web of public laws and private card network rules. These laws and rules allocate fraud risk among the consumers, card issuers, and merchants participating in card-based payment systems. In theory, one would hope that these laws and rules for payment card transactions are thoughtfully designed to encourage behavior that minimizes fraud losses to the system as a whole. In reality, systemwide fraud reduction is often not the principal objective behind particular public laws or private rules affecting fraud liability allocation. Consequently, these laws and rules may fail to promote efficient fraud avoidance; indeed, in some instances, they may actually discourage fraud avoidance.
Defining the issue
The first step in evaluating the efficiency of fraud liability allocation rules in current card-based payment systems is to define the issue. Doing so requires an understanding of the difference between identity theft and common payment card fraud, as well as an understanding of the workings of the card-based payment systems at issue.
Identity theft versus fraud
News stories abound about identity theft resulting from dumpster divers absconding with old bank statements and criminals rifling through mail and intercepting credit card offers. Further, email accounts are barraged with phishing attempts and other web-based schemes craftily designed to lure consumers into revealing personal identification information that can be used for nefarious purposes. Typically, the fraudsters intend to use the ill-gotten fruits of their snooping to impersonate their victims and access their credit or asset accounts. This is identity theft, and it is an increasingly pervasive problem in the United States and throughout the world. During 2007, Consumer Sentinel, a network that collects information about consumer fraud and identity theft from the Federal Trade Commission and over 125 other organizations, recorded 258,427 identity theft complaints. (1)
Identity theft is distinguishable from common financial fraud. Identity theft is generally defined as "the use of personal identifying information to commit some form of fraud." (2) In contrast, fraud is simply "[a] knowing misrepresentation of the truth ... to induce another to act to his or her detriment." (3) As noted in the definition of identity theft, fraud is typically the end goal of identity theft. However, often fraud is committed without antecedent theft of Social Security numbers or other assumption of identity. Along with the cases of identity theft reported in 2007, 555,472 cases of non-identity-theft-related fraud were reported during the same year.
(4) Given that card-based payment systems (and other payment systems, for that matter) seek to prevent monetary fraud perpetrated through the system regardless of how the information used to perpetrate the fraud was obtained, here I focus on the broader category of payments fraud--whether or not it is precipitated by identity theft. There is no need to steal another person's identity to perpetrate simple payment card fraud--all the perpetrator needs to do is obtain a person's payment card or payment card information. (5) Distinguishing fraud from identity theft is important to the discussion that follows for two reasons. First, fraud is broader and more pervasive than identity theft. …

11 citations


Posted Content
TL;DR: In this paper, the authors use a simple model of teacher demand and supply in order to gauge the implications of baby boomer retirements for the projected demand for new teachers, and they find that more teachers will retire between 2010 and 2020 than in any other decade since the end of World War II.
Abstract: Introduction and summary
Teachers play a vital role in their students' educational performance. In addition, there is a correlation between a teacher's experience and her effectiveness in the classroom--at least in the first few years of her career. These intuitive outcomes are supported by a large body of research literature. (1) With this in mind, it is reasonable to view rising rates of teacher turnover (since the early 1990s) as a cause for concern. Further, we expect that retirements, which have driven some of this increase, will accelerate to record levels in the coming decade as growing numbers of baby boomers reach retirement age. (2) This pattern will inevitably necessitate a significant increase in the demand for new teachers. Some communities--for example, poor urban districts, which tend to have especially high teacher turnover rates and severe recruitment problems (3)--might be particularly susceptible to declining teacher quality as a result of increased retirements.
In this article, we use a simple model of teacher demand and supply in order to gauge the implications of baby boomer retirements for the projected demand for new teachers. Our forecast links estimates of demand for all teachers with the expected supply of returning teachers through 2020 (that is, the 2020-21 school year). We assume any shortfall would have to be addressed by hiring additional teachers. We discuss how projected demand for new teachers compares with the past half century and what types of schools are likely to have to augment their teacher hiring over the coming decade. We also calculate how much teacher salaries would have to increase in order to fill the gap between teacher supply and demand. To compute the supply and demand of the teacher market, we use a variety of data sets and sources--for example, the U.S. Census Bureau's Decennial Census and Current Population Survey (CPS) and various publications of the U.S. Department of Education's National Center for Education Statistics (NCES), including its 2003-04 Schools and Staffing Survey (SASS) and the accompanying 2004-05 Teacher Follow-up Survey (TFS).
We estimate the number of new full-time public school teachers (4) needed from 2009 through 2020 will be between 2.3 million and 4.5 million, with the range encompassing reasonable assumptions about fertility rates, student-teacher ratios, and turnover propensity. Our preferred calculations--based partly on the latest teacher data available from the 2003-04 school year (and therefore not accounting for the economic downturn that began in late 2007)--predict roughly 277,000 new full-time public school teachers needed in 2009-10, rising to 303,000 new teachers by 2020-21, or 3.5 million for all school years between 2009-10 and 2020-21. Retirements account for about one-third of the teachers who leave the teaching work force over this period. Adding the private school sector to these calculations raises the number of new teachers needed by about 20 percent, to 4.2 million, but lowers the fraction due to retirements by roughly 3 percentage points.
These numbers, in isolation, are difficult to assess without some historical context. Therefore, we provide rough estimates of projected demand for new teachers over the past six decades using U.S. Decennial Censuses, combined with analogous hiring projections for the years 2010 and 2020. We find that more teachers will retire between 2010 and 2020 than in any other decade since the end of World War II.
But because of relatively slower projected growth in the school-age population, the total number of new teachers needed for all reasons (including retirements) is within historical norms. Indeed, normalized by the size of the aggregate labor force (one rough measure of the potential teacher work force), demand for new teachers will be similar in magnitude in the coming decade to that in past decades. Therefore, we would not expect the increase in forthcoming retirements, in the aggregate, to have a significant impact on national levels of teacher hiring much beyond the variation in teacher hiring needed in the past. …
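The projection machinery described above reduces to a simple accounting identity: new hires must cover the gap between total teacher demand and the returning teacher supply. Below is a stylized Python sketch of that identity; every number in it is an invented placeholder, not an estimate from the article.

```python
# Stylized sketch of the accounting identity behind teacher-hiring
# projections of the kind described above. All numbers are invented
# placeholders, not the article's estimates.

def project_new_hires(enrollment, pupil_teacher_ratio, teachers_prev, turnover_rate):
    """New hires fill the gap between total demand and returning teachers."""
    demand = enrollment / pupil_teacher_ratio        # teachers needed this year
    returning = teachers_prev * (1 - turnover_rate)  # expected returning supply
    return max(demand - returning, 0.0), demand

teachers = 3_200_000  # hypothetical starting stock of full-time teachers
for year in range(2009, 2013):
    hires, teachers = project_new_hires(
        enrollment=49_000_000,     # held flat for simplicity
        pupil_teacher_ratio=15.3,  # assumed constant
        teachers_prev=teachers,
        turnover_rate=0.085,       # all leavers, retirements included (assumed)
    )
    # assume the gap is always filled, so next year's stock equals demand
    print(year, f"new hires needed: {hires:,.0f}")
```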

9 citations


Posted Content
TL;DR: This paper summarizes a Federal Reserve roundtable discussion on retail payments fraud, which focused on four main themes: the changing landscape of retail payments fraud, current trends, emerging concerns, and areas for improvement in fraud detection and prevention.
Abstract: Let me begin by saying that I am not here to lecture, but rather to learn. Today, I would like to talk about a couple of things. First, I would like to start with some themes that emerged from a roundtable discussion that the Federal Reserve held last year with industry leaders on emerging issues involving fraud in the retail payments system. This is important to the Federal Reserve. The outputs from the roundtable are used to direct the Federal Reserve's research and inform its work. Thus, hearing your perspectives on those themes today is important. The second thing I would like to talk about is an area in which I have been doing research. These are the emerging trends in new account fraud detection for applicants on the Internet, where businesses are not physically present to authenticate the identity of customers. As everybody here knows, this is an area of growing interest throughout the banking industry.
Findings from the roundtable discussion on retail payments fraud
Let me start with the roundtable that the Federal Reserve sponsored last year. Fourteen industry experts--including merchants and representatives from payments system providers, financial institutions, and law enforcement organizations--participated. Overall, these leaders agreed that, although the current level of payments fraud is being effectively managed and does not represent a crisis, organizations must constantly adapt to keep pace with criminal activity and with changes in technology and payment methods. While the dollar amount of fraud relative to business revenues in the United States is likely declining, the costs associated with fraud mitigation are substantial and increasing. The roundtable discussions focused on four main themes: 1) the changing landscape of retail payments fraud, 2) current trends, 3) emerging concerns, and 4) areas for improvement in fraud detection and prevention. The following paragraphs sum up our discussions involving these four themes.
The changing landscape of retail payments fraud
Despite declining use of checks across the country, industry leaders find that the largest number of fraud attempts remains in check payments. Fraud losses are also highest for checks on a comparative basis with other payment methods. A number of participants stated that business losses resulting from check fraud are significantly higher than losses from noncheck payment types because checks are relatively easy to alter or forge, using readily available printers, scanners, and computer software. Moreover, changes in the payments system and in criminal behavior have introduced additional risk. One key change in the payments system has been the proliferation of commerce conducted over the Internet. The Internet has created new means for criminals to gain access to consumers' personal and financial information, and has facilitated the formation of extensive illegal networks through which criminals buy and sell this information without the limits of geography. Indeed, substantial Internet fraud operations are now linked to sites located in certain developing countries. The Internet has also accelerated worldwide information-sharing among criminals regarding successful fraudulent schemes, so that new fraud techniques now move quickly around the world. In addition, the growth in online commerce has led to an increase in the number of transactions in which merchants are not physically present to authenticate the identities of purchasers.
That said, some changes in the payments system have helped reduce risk, such as faster clearing of check payments associated with Check 21 (1) and check-to-automated-clearinghouse (ACH) conversion. Being able to clear payments more quickly can mean that a fraudulent check may be returned before a collecting bank makes funds available to the depositor. At a minimum, faster returns help inform banks and their customers that fraud is taking place. But some feel that ACH e-check payments may be more vulnerable to fraud than other ACH standard code categories, such as ACH transactions initiated via telephone. …

Posted Content
TL;DR: Payments fraud can be broadly defined as any activity that uses information from any type of payments transaction for unlawful gain. It can be committed knowingly by a consumer (first-party fraud), or consumers can be victimized by fraudsters operating within financial institutions or as part of criminal enterprises (third-party fraud).
Abstract: An overview of payments fraud
Payments fraud can be broadly defined as any activity that uses information from any type of payments transaction for unlawful gain. Such fraud can be perpetrated on any type of payments device, including credit and debit cards, cash, checks, online or mobile payments, and automated clearinghouse (ACH) transactions. Payments fraud can be committed knowingly by a consumer (first-party fraud), or consumers can be victimized by fraudsters operating within financial institutions or as part of criminal enterprises (third-party fraud). Payments fraud has received extensive attention in the popular press and in public policy venues recently, and the payments industry is fighting the perception that fraud is now occurring at unmanageable levels. While there has been increasing emphasis on all types of payments fraud, fraud perpetrated by criminals has received special attention of late. (1)
Fraud is a very real threat to the payments system's efficiency. According to one recent report, 71 percent of surveyed organizations experienced payments fraud in 2007, and over one-third of those firms reported financial losses stemming from the fraudulent activity. (2) As another example of the size of the payments fraud problem, in a 2007 data breach involving TJX Companies Inc. (the holding company of retailers T. J. Maxx, Marshalls, Winners, HomeGoods, TK Maxx, A. J. Wright, and HomeSense), 45,700,000 credit card and debit card account numbers were stolen, along with 455,000 merchandise return records containing customer names and driver's license numbers. Latest reports allege that an additional 48 million people have been affected, for a total of over 30 percent of the entire U.S. population. The situation has cost TJX Companies Inc. more than $130 million in settlement claims. The breach was a worldwide effort perpetrated by criminals from the United States, Eastern Europe, and China. The U.S. Department of Justice has arrested 11 people in this case, which is the largest hacking and identity theft case ever prosecuted by the department. (3)
As more payments become electronic, the size and scope of payments fraud has grown, in part because the relevant parties in a payments transaction do not know one another. Information about those parties is vital to prevent fraud and enable legitimate transactions. However, as innovations in payments technology have made authentication of information more reliable, other technological innovations have made that information more widely available and subject to abuse. Fraud such as counterfeiting or check forgery has always had a global reach. However, payments fraud used to be much more reliant on physical connections between parties, such as the theft of an individual checkbook or credit card. Today, modern databases, online information sharing, and increased access points have opened up opportunities for sophisticated criminal gangs to perpetrate fraud from remote corners of the globe. Further, the growing presence of nonbanks and third-party service providers means that regulated financial institutions must consider the security of those providers. At the same time, new laws and standards are being developed for payment activities and instruments. While the continual refining of systems and rules arguably makes payments easier and more efficient, the fast pace of change can compound fraud potential as fraudsters hunt to exploit the weakest link in the emerging systems.
In this complex environment, market participants and governments must determine whether new payment types carry excessive fraud risk; who is liable when payments fraud occurs; how losses are allocated; what consumer protections should be in place; how notification of fraud should be handled; and how standards should be defined to minimize the incidence of fraud. It is a tall order, but payments providers must also identify consumers whom they have never met and authorize electronic transactions from which they might be far removed. …

Posted Content
TL;DR: In this article, the authors discuss the effect of long-run labor income risk on the valuation of pension plan obligations, their funding, and the allocation of pension assets across different investment classes.
Abstract: Many financial advisors and much of the academic literature often argue that young people should place most of their savings in stocks. In contrast, a significant fraction of US households do not hold stocks. Investors typically hold very little in stocks when they are young, progressively increase their holdings as they age, and decrease their exposure to stock market risk when they approach retirement. The authors show how long-run labor income risk helps explain this evidence. Moreover, they discuss the effect of long-run labor income risk on the valuation of pension plan obligations, their funding, and the allocation of pension assets across different investment classes.
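The life-cycle pattern described here is often rationalized with a human-capital extension of the Merton portfolio rule. The sketch below is a textbook heuristic, not the authors' model; all notation, including the parameter beta, is our own labeling.

```latex
% Textbook human-capital heuristic for the life-cycle pattern above
% (our notation and simplification, not the authors' model).
% W = financial wealth, H = present value of future labor income,
% mu - r = equity premium, sigma^2 = stock return variance,
% gamma = relative risk aversion, beta = how "stock-like" labor income is.
\[
\alpha^{*} \;=\;
\underbrace{\frac{\mu - r}{\gamma\sigma^{2}}\left(1 + \frac{H}{W}\right)}_{\text{Merton rule applied to total wealth}}
\;-\;
\underbrace{\beta\,\frac{H}{W}}_{\text{stocks held implicitly via labor income}}
\]
% With bond-like labor income (beta near 0), the large H/W ratio early
% in life pushes the optimal stock share up; long-run labor income risk
% raises beta, which lowers the optimal share for young investors--the
% pattern the authors document.
```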

Posted Content
TL;DR: The 2008 Payments Fraud: Perception Versus Reality conference, summarized in this paper, brought together decision-makers from the banking, payments, legal, regulatory, and merchant communities for a wide-ranging discussion of the threats to the security of the payments system and how those threats might best be addressed.
Abstract: In this special issue of Economic Perspectives, we present selected papers based on our recent conference, Payments Fraud: Perception Versus Reality, hosted by the Federal Reserve Bank of Chicago on June 5-6, 2008. The conference brought together decision-makers from the banking, payments, legal, regulatory, and merchant communities for a wide-ranging discussion of the threats to the security of the payments system and how those threats might best be addressed. The volume starts with an extensive summary of conference presentations, keynote addresses, and open floor discussions, written by Tiffany Gates and Katy Jacob. In order to give a sense of the intense back-and-forth exchanges that took place during this day-and-a-half-long event, the authors structure their summary around the broad themes of the discussion rather than simply presenting a chronological account. The themes are as follows: organizational structures for management of fraud risks; technological innovation; alignment of incentives for fraud prevention among consumers, merchants, and payments providers; and regulatory policies. Gates and Jacob's article highlights the challenges involved in bringing the various constituencies together to forge common ways to address fraud in payments systems. Gates and Jacob find that payments fraud cannot be eliminated without decreasing the openness and efficiency of the payments system. In the current environment, technological innovations have enabled system participants to enhance payments security, at the same time that technology has made it easier for criminals to perpetrate payments fraud remotely. Practitioners are constantly weighing the costs and benefits of payments fraud mitigation and are looking to the public sector to offer guidance and support. As the industry combats payments fraud, companies are banding together to find common solutions. For instance, throughout the conference, financial industry participants emphasized the concept of enterprise-wide fraud management, while many also acknowledged the difficulties faced by small merchants and many financial institutions in fashioning such holistic strategies. A number of legal professionals stressed the detrimental effects of legacy laws and regulations that evolved independently around individual payment product lines. Together, these viewpoints contributed to a budding consensus on the importance of dedicated high-level executive involvement in payments fraud management and of outsourcing development of fraud prevention tools to specialized entities. The rest of this volume is devoted to articles that address in greater detail some of the key topics discussed at the conference. The contributors of these papers span the spectrum of thought leaders in combating payments fraud--industry experts in fraud detection systems, legal professionals, academic researchers in economics and technology, and senior officials of the Federal Reserve System. The first article is written by Bruce J. Summers. His paper provides a synthesis of the approaches of practitioners and economists to thinking about the problems in containing retail payments fraud. As Summers makes clear, these approaches differ somewhat for reasons that have to do with both perspective and analytical framework. Yet, both parties are integral in formulating a coherent public policy response to the problem of payments fraud. 
In particular, payments industry practitioners tend to regard fraud as a persistent but manageable problem that requires both unrelenting attention and significant expenditures. These expenditures on fraud mitigation have resulted in declining rates of fraud losses. Still, there is concern that maintaining such results in the future will require ever-expanding expenditures. Part of this argument rests on the view that fraud threats to electronic payments networks arise globally, are increasingly sophisticated, and propagate quickly. …