
Showing papers in "The Review of Economic Studies in 2004"


Journal ArticleDOI
TL;DR: The authors develop a growth theory that captures the endogenous replacement of physical capital accumulation by human capital accumulation as a prime engine of economic growth in the transition from the Industrial Revolution to modern growth, and argue that this replacement reversed the qualitative impact of inequality on the process of development.
Abstract: This paper develops a growth theory that captures the replacement of physical capital accumulation by human capital accumulation as a prime engine of growth along the process of development. It argues that the positive impact of inequality on the growth process was reversed in this process. In early stages of the Industrial Revolution, when physical capital accumulation was the prime source of growth, inequality stimulated development by channelling resources towards individuals with a higher propensity to save. As human capital emerged as a growth engine, equality alleviated adverse effects of credit constraints on human capital accumulation, stimulating the growth process. This research develops a growth theory that captures the endogenous replacement of physical capital accumulation by human capital accumulation as a prime engine of economic growth in the transition from the Industrial Revolution to modern growth. The proposed theory offers a unified account for the effect of income inequality on the growth process during this transition. It argues that the replacement of physical capital accumulation by human capital accumulation as a prime engine of economic growth changed the qualitative impact of inequality on the process of development. In the early stages of the Industrial Revolution, when physical capital accumulation was the prime source of economic growth, inequality enhanced the process of development by channelling resources towards individuals whose marginal propensity to save is higher. In the later stages of the transition to modern growth, as human capital emerged as a prime engine of economic growth, equality alleviated the adverse effect of credit constraints on human capital accumulation and stimulated the growth process. The proposed theory unifies two fundamental approaches regarding the effect of income distribution on the process of development: the Classical approach and the Credit Market Imperfection approach. The Classical approach was originated by Smith (1776) and was further interpreted and developed by Keynes (1920), Lewis (1954), Kaldor (1957), and Bourguignon (1981). According to this approach, saving rates are an increasing function of wealth, and inequality therefore channels resources towards individuals whose marginal propensity to save is higher. The socio-political economy approach provides an alternative mechanism: equality diminishes the tendency for socio-political instability, or distortionary redistribution, and hence it stimulates investment and economic growth. See the comprehensive survey of Benabou (1996b).
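Illustrative sketch (not from the paper): the Classical channel turns on the saving rate rising with wealth, so that concentrating a given amount of wealth raises aggregate saving. The saving schedule and wealth figures below are hypothetical.

```python
# Hypothetical saving schedule that rises with wealth (capped at 50%).
def saving_rate(wealth):
    return min(0.1 + 0.002 * wealth, 0.5)

def aggregate_saving(wealth_profile):
    return sum(w * saving_rate(w) for w in wealth_profile)

equal = [100, 100]     # total wealth of 200, split equally
unequal = [40, 160]    # same total wealth, concentrated

print(aggregate_saving(equal))    # 60.0
print(aggregate_saving(unequal))  # 74.4: inequality raises aggregate saving here
```

In the later stage described above, credit constraints on human capital investment make equality, rather than inequality, the growth-promoting configuration.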

898 citations


Journal ArticleDOI
TL;DR: The authors construct a quantitative, general equilibrium, overlapping-generations model in which parents and children are linked by accidental and voluntary bequests and by earnings ability, and show that the introduction of a bequest motive generates lifetime savings profiles more consistent with the data.
Abstract: Previous work has had difficulty generating household saving behaviour that makes the distribution of wealth much more concentrated than that of labour earnings, and that makes the richest households hold onto large amounts of wealth, even during very old age. I construct a quantitative, general equilibrium, overlapping-generations model in which parents and children are linked by accidental and voluntary bequests and by earnings ability. I show that voluntary bequests can explain the emergence of large estates, while accidental bequests alone cannot, and that adding earnings persistence within families increases wealth concentration even more. I also show that the introduction of a bequest motive generates lifetime savings profiles more consistent with the data.

677 citations


Report SeriesDOI
TL;DR: The authors propose a semiparametric estimator for binary response models with continuous endogenous regressors and apply it to estimate the income effect in a labor market participation problem, using a large micro data set from the British FES.
Abstract: This paper develops and implements semiparametric methods for estimating binary response (binary choice) models with continuous endogenous regressors. It extends existing results on semiparametric estimation in single index binary response models to the case of endogenous regressors. It develops a control function approach to account for endogeneity in triangular and fully simultaneous binary response models. The proposed estimation method is applied to estimate the income effect in a labor market participation problem using a large micro data set from the British FES. The semiparametric estimator is found to perform well, detecting a significant attenuation bias. The proposed estimator is contrasted to the corresponding Probit and Linear Probability specifications. JEL: C14, C25, C35, J22.
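As a rough illustration of the control-function idea described above, here is a parametric (probit) two-stage version on simulated data; the paper's estimator is semiparametric and single-index, and every variable and parameter below is invented for the example.

```python
# Control-function correction for a continuous endogenous regressor in a binary
# response model (parametric stand-in for the paper's semiparametric method).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                            # instrument, excluded from the outcome
u = rng.normal(size=n)                            # unobservable creating endogeneity
y2 = 1.0 + 0.8 * z + u + rng.normal(size=n)       # continuous endogenous regressor
y1 = (0.5 * y2 - 1.0 * u + rng.normal(size=n) > 0).astype(int)   # binary outcome

# Stage 1: reduced form for the endogenous regressor; keep the residual.
v_hat = sm.OLS(y2, sm.add_constant(z)).fit().resid

# Stage 2: probit with and without the control function v_hat.
naive = sm.Probit(y1, sm.add_constant(y2)).fit(disp=False)
cf = sm.Probit(y1, sm.add_constant(np.column_stack([y2, v_hat]))).fit(disp=False)
print(naive.params)   # coefficient on y2 contaminated by endogeneity
print(cf.params)      # first-stage residual absorbs the endogeneity
```

Conditioning on the first-stage residual plays the same role here as the control function in the triangular system described in the abstract: it breaks the correlation between the endogenous regressor and the outcome error.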

490 citations


Journal ArticleDOI
TL;DR: The authors model the incentives of the media to deliver news to different groups and predict that the replacement of newspapers by broadcast media as the main source of political information should raise spending on government programmes used by poor and rural voters.
Abstract: If better informed voters receive favourable policies, then mass media will affect policy because mass media provide most of the information people use in voting. This paper models the incentives of the media to deliver news to different groups. The increasing-returns-to-scale technology and advertising financing of media firms induce them to provide more news to large groups, such as taxpayers and dispersed consumer interests, and groups that are valuable to advertisers. This news bias alters the tradeoff in political competition and therefore introduces a bias in public policy. The paper also discusses the effects of broadcast media replacing newspapers as the main information source about politics. The model predicts that this change should raise spending on government programmes used by poor and rural voters.

451 citations


Journal ArticleDOI
TL;DR: The authors develop a general model of lending in the presence of endogenous borrowing constraints, in which the value of outstanding debt restricts current access to short-term capital but is itself determined by future access to credit.
Abstract: We develop a general model of lending in the presence of endogenous borrowing constraints. Borrowing constraints arise because borrowers face limited liability and debt repayment cannot be perfectly enforced. In the model, the dynamics of debt are closely linked with the dynamics of borrowing constraints. In fact, borrowing constraints must satisfy a dynamic consistency requirement: the value of outstanding debt restricts current access to short-term capital, but is itself determined by future access to credit. This dynamic consistency is not guaranteed in models of exogenous borrowing constraints, where the ability to raise short-term capital is limited by some prespecified function of debt. We characterize the optimal default-free contract—which minimizes borrowing constraints at all histories—and derive implications for firm growth, survival, leverage and debt maturity. The model is qualitatively consistent with stylized facts on the growth and survival of firms. Comparative statics with respect to technology and default constraints are derived. Borrowing constraints are an important determinant of firm growth and survival. 1 Such constraints may arise in connection to the financing of investment opportunities faced by firms or temporary liquidity needs, such as those required to survive a recession. This paper develops a theory of endogenous borrowing constraints and studies its implications for firm dynamics. In our model, debt is constrained by the firm's limited liability and option to default. A lending contract specifies an initial loan size, future financing, and a repayment schedule. The choice of these variables in turn determines future growth, the firm's future borrowing capacity, and its ability and willingness to repay. Hence, borrowing constraints and firm dynamics are jointly determined. We study this dynamic design problem. Our model builds on Thomas and Worrall's (1994) model of foreign direct investment. At time zero a risk neutral borrower (firm or entrepreneur) has a project which requires a fixed initial set-up cost. Every period the project yields revenues that increase with the amount of 1. There is considerable evidence suggesting that financing constraints are important determinants of firm dynamics. Gertler and Gilchrist (1994) argue that liquidity constraints may explain why small manufacturing firms respond more to a tightening of monetary policy than do larger manufacturing firms. Perez-Quiros and Timmermann (2000) show that in recessions smaller firms are more sensitive to the worsening of credit market conditions as measured by higher interest rates and default premia. Evans and Jovanovic (1989) show that the liquidity constraints are essential in the decision to become an entrepreneur. Fazzari, Hubbard and Petersen (1988), among others, view financial constraints as an explanation for the dynamic behaviour of aggregate investment, and Cabral and Mata (1996) are able to fit reasonably well the size distribution of Portuguese manufacturing firms by estimating a simple model of financing constraints. For surveys see Hubbard (1998) and Stein (2003).

397 citations


Journal ArticleDOI
Marc Rysman
TL;DR: The author estimates the importance of network effects in the market for Yellow Pages and shows that internalizing network effects would significantly increase surplus, and that a more competitive market is preferable to monopoly.
Abstract: This paper estimates the importance of network effects in the market for Yellow Pages. I estimate three simultaneous equations: consumer demand for usage of a directory, advertiser demand for advertising and a publisher's first-order condition (derived from profit-maximizing behaviour). Estimation shows that advertisers value consumer usage and that consumers value advertising, implying a network effect. I find that internalizing network effects would significantly increase surplus. As an application, I consider whether the market benefits from monopoly (which takes advantage of network effects) or oligopoly (which reduces market power). I find that a more competitive market is preferable. Copyright 2004, Wiley-Blackwell.

362 citations


Journal ArticleDOI
Bruce Shearer
TL;DR: In this article, data from a field experiment within a tree-planting firm are used to estimate the gain in productivity realized when workers are paid piece rates rather than fixed wages; structural estimates suggest that the average productivity gain, outside of the experimental conditions, would be at least 21.7%.
Abstract: Data from a field experiment are used to estimate the gain in productivity that is realized when workers are paid piece rates rather than fixed wages. The experiment was conducted within a tree-planting firm and provides daily observations on individual worker productivity under both compensation systems. Unrestricted statistical methods estimate the productivity gain to be 20%. Since planting conditions potentially affect incentives, structural econometric methods are used to generalize the experimental results to out-of-sample conditions. The structural results suggest that the average productivity gain, outside of the experimental conditions, would be at least 21.7%.

348 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider an infinitely repeated Bertrand game, in which prices are publicly observed and each firm receives a privately observed, i.i.d. cost shock in each period.
Abstract: We consider an infinitely repeated Bertrand game, in which prices are publicly observed and each firm receives a privately observed, i.i.d. cost shock in each period. We focus on symmetric perfect public equilibria (SPPE), wherein any “punishments” are borne equally by all firms. We identify a tradeoff that is associated with collusive pricing schemes in which the price to be charged by each firm is strictly increasing in its cost level: such “fully sorting” schemes offer efficiency benefits, as they ensure that the lowest-cost firm makes the current sale, but they also imply an informational cost (distorted pricing and/or equilibrium-path price wars), since a higher-cost firm must be deterred from mimicking a lower-cost firm by charging a lower price. A rigid-pricing scheme, where a firm's collusive price is independent of its current cost position, sacrifices efficiency benefits but also diminishes the informational cost. For a wide range of settings, the optimal symmetric collusive scheme requires (i) the absence of equilibrium-path price wars and (ii) a rigid price. If firms are sufficiently impatient, however, the rigid-pricing scheme cannot be enforced, and the collusive price of lower-cost firms may be distorted downward, in order to diminish the incentive to cheat. When the model is modified to include i.i.d. public demand shocks, the downward pricing distortion that accompanies a firm's lower-cost realization may occur only when current demand is high.

332 citations


Journal ArticleDOI
TL;DR: In this article, the authors focus on the arrangements for collective decision making in a world where agents must be motivated to acquire information and identify some basic forces shaping the design of panels of decision makers (referees, managers, jurors, etc.).
Abstract: This paper is concerned with the arrangements for collective decision making in a world where agents must be motivated to acquire information. The analysis identifies some basic forces shaping the design of panels of decision makers (referees, managers, jurors, etc.) in situations where decision makers are useful only in so far as they expend some effort to gain information about the alternatives at hand. These situations are common, one being the refereeing process, where an editor requires the opinions of a number of experts who must read the paper (acquire information) in order to give an opinion as to publication. Analogously, in committees which screen applicants to a programme, the committee members must gain information about the applicant's qualifications and likelihood of success in order to evaluate whether to admit him/her. Finally, in trial juries it is important that jurors pay attention to the evidence in order to make an informed judgement. We focus on environments where information is a public good in the sense that the social benefits of one decision maker acquiring information exceed the private benefits. A good mechanism in this environment must not only aggregate information efficiently, but also induce the decision makers to acquire information. We analyse the design of voting mechanisms. Our problem is to choose two parameters which determine both the incentive to acquire information and the efficiency with which information is aggregated. The first parameter is the size of the decision-making unit: the number of committee members. The second is the plurality required to overturn the status quo: the voting rule. Our main result is this: a voting rule that requires a large plurality (in the extreme, unanimity) to upset the status quo can be optimal only if the information available to each committee member is sufficiently accurate. When individual information is noisy, the drawback
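A toy Monte Carlo of the information-aggregation side only (sincere voting, exogenous signals; the paper's point is precisely that information acquisition must also be induced): with noisy signals, requiring unanimity to overturn the status quo does little better than a coin toss, whereas with accurate signals it retains reasonable accuracy. The committee size, signal precisions and voting rules below are assumed for the example.

```python
# Accuracy of a voting rule under sincere voting with exogenous signals (illustrative).
import numpy as np

rng = np.random.default_rng(1)

def accuracy(n_members, q, rule, trials=200_000):
    """rule = votes needed to overturn the status quo; q = probability a
    member's signal matches the true state."""
    state = rng.integers(0, 2, size=trials)          # 1 = overturning is correct
    correct = rng.random((trials, n_members)) < q
    votes = np.where(correct, state[:, None], 1 - state[:, None])
    overturn = votes.sum(axis=1) >= rule
    return np.mean(overturn == state)

for q in (0.55, 0.90):
    print(f"q = {q:.2f}: majority of 7 -> {accuracy(7, q, 4):.3f}, "
          f"unanimity of 7 -> {accuracy(7, q, 7):.3f}")
```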

316 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a model of boom-bust episodes in middle-income countries based on sectoral differences in corporate finance: the nontradables sector is special in that it faces a contract enforceability problem and enjoys bailout guarantees.
Abstract: This paper provides a model of boom-bust episodes in middle-income countries. It is based on sectoral differences in corporate finance: the nontradables sector is special in that it faces a contract enforceability problem and enjoys bailout guarantees. As a result, currency mismatch and borrowing constraints arise endogenously in that sector. This sectoral asymmetry allows the model to replicate the main features of observed boom-bust episodes. In particular, episodes begin with a lending boom and a real appreciation, peak in a self-fulfilling crisis during which a real depreciation coincides with widespread bankruptcies, and end in a recession and credit crunch. The nontradables sector accounts for most of the volatility in output and credit. In the last two decades, many middle-income countries have experienced boom-bust episodes centred around balance-of-payments crises. There is now a well known set of stylized facts. The typical episode began with a lending boom and an appreciation of the real exchange rate. In the crisis that eventually ended the boom, a real depreciation coincided with widespread defaults by the domestic private sector on unhedged foreign-currency-denominated debt. The typical crisis came as a surprise to financial markets, and with hindsight it is not possible to pinpoint a large "fundamental" shock as an obvious trigger. After the crisis, foreign lenders were often bailed out. However, domestic credit fell dramatically and recovered much more slowly than output. This paper proposes a theory of boom-bust episodes that emphasizes sectoral asymmetries in corporate finance. It is motivated by an additional set of facts that has received little attention in the literature: the tradables (T-) and nontradables (N-) sectors fared quite differently in most boom-bust episodes. While the N-sector was typically growing faster than the T-sector during a boom, it fell harder during the crisis and took longer to recover afterwards. Moreover, most of the guaranteed credit extended during the boom went to the N-sector, and most bad debt later surfaced there. Our analysis is based on two key assumptions that are motivated by the institutional environment of middle-income countries. First, N-sector firms are run by managers who issue debt, but cannot commit to repay. In contrast, T-sector firms have access to perfect financial markets. Second, there are systemic bailout guarantees: lenders are bailed out if a critical mass of borrowers defaults. The first part of this paper derives optimal investment and financing choices for the N-sector when these imperfections are present. We show that both borrowing constraints and a risky currency mismatch of assets and liabilities arise in equilibrium. Moreover, even in a world with no exogenous shocks, self-fulfilling crises can occur. The second part of this paper

315 citations


Journal ArticleDOI
TL;DR: In this article, the authors build a model of currency crises where a single large investor and a continuum of small investors independently decide whether to attack a currency based on their private information about fundamentals.
Abstract: Do large investors increase the vulnerability of a country to speculative attacks in the foreign exchange markets? To address this issue, we build a model of currency crises where a single large investor and a continuum of small investors independently decide whether to attack a currency based on their private information about fundamentals. Even abstracting from signaling, the presence of the large investor does make all other traders more aggressive in their selling. Relative to the case in which there is no large investor, small investors attack the currency when fundamentals are stronger. Yet, the difference can be small, or null, depending on the relative precision of private information of the small and large investors. Adding signaling makes the influence of the large trader on small traders' behaviour much stronger.

Journal ArticleDOI
TL;DR: The authors study the effect of community identity on investment behaviour in the knitted garment industry in the South Indian town of Tirupur and show that the differences in investment cannot be explained by productivity differences alone.
Abstract: This paper studies the effect of community identity on investment behaviour in the knitted garment industry in the South Indian town of Tirupur. We document very large and systematic differences in both levels of capital stock and the capital intensity of production in firms owned by people from two different community groups. We argue that the differences in investment cannot be explained by productivity differences alone. We suggest that the most likely explanation is that the two communities differ in their access to capital.

Journal ArticleDOI
TL;DR: In this paper, the authors examine the robustness of information cascades in laboratory experiments and find that the subjects' inferences become significantly more noisy on higher levels of the thought process and that only short chains of reasoning are applied by the subjects.
Abstract: We examine the robustness of information cascades in laboratory experiments. Apart from the situation in which each player can obtain a signal for free (as in the experiment by Anderson and Holt (1997)), the case of costly signals is studied, where players decide whether or not to obtain private information at a small but positive cost. In the equilibrium of this game, only the first player buys a signal and makes a decision based on this information, whereas all following players do not buy a signal and herd behind the first player. The experimental results show that too many signals are bought and the equilibrium prediction performs poorly. To explain these observations, the depth of the subjects' reasoning process is estimated, using a statistical error-rate model. Allowing for different error rates on different levels of reasoning, we find that the subjects' inferences become significantly more noisy on higher levels of the thought process, and that only short chains of reasoning are applied by the subjects.
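For context on the cascade logic whose robustness is tested above, here is the textbook Bayes calculation behind herding (this is the free-signal benchmark in the spirit of Anderson and Holt; the signal precision q = 2/3 is assumed for the example).

```python
# Why a cascade starts: two identical predecessors outweigh one contrary signal.
q = 2 / 3                          # assumed probability a signal matches the true state
prior = 0.5                        # equal prior on the two states
like_A = q * q * (1 - q)           # P(signals a, a, b | state A)
like_B = (1 - q) * (1 - q) * q     # P(signals a, a, b | state B)
post_A = prior * like_A / (prior * like_A + prior * like_B)
print(post_A)   # 2/3 > 1/2: the third player rationally ignores the contrary signal
```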

Journal ArticleDOI
TL;DR: In this article, the authors presented an oligopoly model with consumer search where the equilibrium expected price may be constant, increasing or nonmonotonic in the number of firms, and analyzed a one-shot simultaneous move game.
Abstract: discover prices. There are three distinct price-dispersed equilibria characterized by low, moderate and high search intensity. The effects of an increase in the number of firms on search behaviour, expected prices, price dispersion and welfare are sensitive (i) to the equilibrium consumers' search intensity, and (ii) to the status quo number of firms. For instance, when consumers search with low intensity, an increase in the number of firms reduces search, does not affect expected price, leads to greater price dispersion and reduces welfare. In contrast, when consumers search with high intensity, increased competition results in more search and lower prices when the number of competitors in the market is low to begin with, but in less search and higher prices when the number of competitors is large. Duopoly yields identical expected price and price dispersion but higher welfare than an infinite number of firms. A question that has received much attention in economic theory is how the number of competitors and market outcomes are related. Despite the existence of work showing the contrary (see, e.g. Satterthwaite (1979), Rosenthal (1980)), the prediction that emerges from a market where firms interact in a Cournot fashion has come to dominate economic thought, namely, that an increase in the number of competitors leads to larger aggregate production, lower market price and improved market performance measured in terms of some social welfare criterion (Ruffin, 1971). This paper challenges the generality of this belief by presenting an oligopoly model with consumer search where the equilibrium expected price may be constant, increasing or nonmonotonic in the number of firms. We present an economy with two types of consumers: consumers who search for prices at no cost ("fully-informed"), and consumers who must pay a fixed search cost for each price quotation they observe ("less-informed").1 All buyers have the same willingness to pay for the good and buy either a single unit or nothing at all. On the supply side of the market, there are N firms producing a homogeneous good at constant marginal cost. We analyse a one-shot simultaneous move game: firms set prices and consumers decide how many prices to search for at the same moment in time. Our model is essentially an oligopolistic version of Burdett and Judd (1983) where some consumers search costlessly. In this model, consumers search using a fixed-sample-size search strategy, i.e. they choose the number of prices to observe before receiving any offer. Nonsequential search is appealing when consumers find it more advantageous to gather price information quickly (Morgan and Manning, 1985). This occurs when the search outcome is observed with delay and delay is costly. For instance, an MBA graduate looking for a job 1. The presence in the model of consumers who search costlessly captures the fact that some people have a

Journal ArticleDOI
TL;DR: In this paper, the authors evaluate the importance of cross-elasticity between stores of the same firm in the U.K. supermarket industry and measure market power by calculating the effect of merger and demerger on Nash equilibrium prices.
Abstract: Multi-store firms are common in the retailing industry. Theory suggests that cross-elasticities between stores of the same firm enhance market power. To evaluate the importance of this effect in the U.K. supermarket industry, we estimate a model of consumer choice and expenditure using three data sources: profit margins for each chain, a survey of consumer choices and a data-set of store characteristics. To permit plausible substitution patterns, the utility model interacts consumer and store characteristics. We measure market power by calculating the effect of merger and demerger on Nash equilibrium prices. Demerger reduces the prices of the largest firms by between 2 and 3.8% depending on local concentration; mergers between the largest firms lead to price increases up to 7.4%.
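A stripped-down sketch of the merger-simulation step described above, with single-product stores, plain logit demand and fixed-point iteration on the pricing first-order conditions; all parameters and the ownership structure are invented, and the paper's actual demand model interacts consumer and store characteristics.

```python
# Toy merger simulation: logit demand, Bertrand-Nash prices before and after a merger.
import numpy as np

alpha = 2.0                               # price sensitivity (assumed)
delta = np.array([1.0, 1.2, 0.8, 1.0])    # product "qualities" (assumed)
cost = np.array([0.5, 0.6, 0.4, 0.5])     # marginal costs (assumed)

def shares(prices):
    util = np.exp(delta - alpha * prices)
    return util / (1.0 + util.sum())      # outside option normalised to zero

def equilibrium_prices(owner, tol=1e-10):
    """owner[j] = id of the firm selling product j. Iterates on the multiproduct
    logit FOC: markup of firm f = 1 / (alpha * (1 - combined share of f))."""
    p = cost + 0.5
    for _ in range(10_000):
        s = shares(p)
        firm_share = np.array([s[owner == owner[j]].sum() for j in range(len(p))])
        p_new = cost + 1.0 / (alpha * (1.0 - firm_share))
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p_new

print(equilibrium_prices(np.array([0, 1, 2, 3])))   # four independent chains
print(equilibrium_prices(np.array([0, 0, 2, 3])))   # chains 0 and 1 merged: their prices rise
```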

Journal ArticleDOI
TL;DR: In this paper, the authors provide empirical restrictions of a model of optimal order submissions in a limit order market and compute a semiparametric test using a sample of order submissions from the Stockholm Stock Exchange, using the order prices, execution probabilities and picking off risks of the orders chosen by the traders in the sample.
Abstract: We provide empirical restrictions of a model of optimal order submissions in a limit order market. A trader's optimal order submission in our model depends on the trader's valuation for the asset and the trade-offs between order prices, execution probabilities and picking off risks. The optimal order submission strategy is a monotone function of a trader's valuation for the asset. We use the monotonicity restriction to test if the traders follow the optimal order submission strategy in our sample, using the order prices, execution probabilities and picking off risks of the orders chosen by the traders in the sample. We compute a semiparametric test using a sample of order submissions from the Stockholm Stock Exchange. We do not reject the monotonicity restriction for buy orders or sell orders separately, but do reject the monotonicity restriction when we combine buy and sell orders. The expected payoffs from submitting limit orders away from the quotes are too low relative to the expected payoffs from submitting limit orders close to the quotes to rationalize all the observed order submissions in our sample.

ReportDOI
TL;DR: In this article, an empirical dynamic oligopoly model of the commercial aircraft industry is used to analyse industry pricing, industry performance, and optimal industry policy, and they find that the unconstrained MPE is quite efficient from a social perspective, providing only 10% less welfare on average than a social planner would obtain.
Abstract: This paper uses an empirical dynamic oligopoly model of the commercial aircraft industry to analyse industry pricing, industry performance, and optimal industry policy. A novel feature of the model with respect to the previous literature is that entry, exit, prices, and quantities are endogenously determined in Markov perfect equilibrium (MPE). We find that many unusual aspects of the aircraft data, such as high concentration and persistent pricing below static marginal cost, are explained by this model. We also find that the unconstrained MPE is quite efficient from a social perspective, providing only 10% less welfare on average than a social planner would obtain. Finally, we provide simulation evidence that an anti-trust policy in the form of a concentration restriction would be welfare reducing.

Journal ArticleDOI
TL;DR: In this paper, a mechanism design problem where borrowers share information about each other, but their limited side contracting ability prevents them from writing complete insurance contracts is studied, and a lending mechanism which efficiently induces mutual insurance is derived.
Abstract: Many believe that a key innovation by the Grameen Bank is to encourage borrowers to help each other in hard times. To analyse this, we study a mechanism design problem where borrowers share information about each other, but their limited side contracting ability prevents them from writing complete insurance contracts. We derive a lending mechanism which efficiently induces mutual insurance. It is necessary for borrowers to submit reports about each other to achieve efficiency. Such cross-reporting increases the bargaining power of unsuccessful borrowers, and is robust to collusion against the bank. Copyright 2004, Wiley-Blackwell.

Journal ArticleDOI
TL;DR: In this article, the authors show that if the probability that a player is a dominant strategy type is sufficiently small, then there is an equilibrium of the cheap-talk extension of the game where the probability of an arms race is close to zero.
Abstract: Two players simultaneously decide whether or not to acquire new weapons in an arms race game. Each player's type determines his propensity to arm. Types are private information, and are independently drawn from a continuous distribution. With probability close to one, the best outcome for each player is for neither to acquire new weapons (although each prefers to acquire new weapons if he thinks the opponent will). There is a small probability that a player is a dominant strategy type who always prefers to acquire new weapons. We find conditions under which the unique Bayesian-Nash equilibrium involves an arms race with probability one. However, if the probability that a player is a dominant strategy type is sufficiently small, then there is an equilibrium of the cheap-talk extension of the game where the probability of an arms race is close to zero. Copyright 2004, Wiley-Blackwell.

Journal ArticleDOI
TL;DR: In this article, the authors consider the implications of firms posting contracts, in a random matching model with on-the-job search, and show that the effect on the labour market is to reduce turnover, below the level required for efficient matching.
Abstract: A common assumption in equilibrium search and matching models of the labour market is that each firm posts a wage, to be paid to any worker hired. This paper considers the implications of firms posting contracts, in a random matching model with on-the-job search. More complex contracts enable firms to address both recruitment and retention problems by, for example, increasing the wage with tenure. The effect on the labour market is to reduce turnover, below the level required for efficient matching of workers to firms. Copyright 2004, Wiley-Blackwell.

Journal ArticleDOI
TL;DR: In this paper, the authors introduce a model of representative democracy that allows for strategic parties, strategic candidates, strategic voters, and multiple districts, and show that if the distribution of policy preferences is sufficiently similar across districts and sufficiently close to uniform within districts, then the number of effective parties is larger under Proportional Representation than under Plurality Voting (extending the Duvergerian predictions).
Abstract: I introduce a model of representative democracy that allows for strategic parties, strategic candidates, strategic voters, and multiple districts. If the distribution of policy preferences is sufficiently similar across districts and sufficiently close to uniform within districts, then the number of effective parties is larger under Proportional Representation than under Plurality Voting (extending the Duvergerian predictions), and both electoral systems determine the median voter's preferred policy outcome. However, for more asymmetric distributions of preferences the Duvergerian predictions can be reversed, and the policy outcome with Proportional Representation is always (weakly) more moderate than the one produced through Plurality Voting. Welfare analysis can be done, and the results do not depend on whether voters are sincere or strategic. Sincere voting induces more party formation, and strategic voting should be observed more often under Proportional Representation.

Journal ArticleDOI
TL;DR: In this paper, a stylized model of dynamic labour market interactions, where labour reallocation costs are partly financed by uninsured workers' consumption flows, is studied; provisions that shift these costs to risk-neutral employers are shown to increase productive efficiency if their administrative deadweight costs are not too large, and to increase workers' welfare as long as employers' firing costs at least partly finance workers' mobility.
Abstract: Models of labour market equilibrium where forward-looking decisions maximize both profits and labour income on a risk-neutral basis offer valuable insights into the effects of employment protection legislation. Since risk-neutral behaviour in the labour market presumes perfect insurance, however, job security provisions play no useful role in such models. This paper studies a stylized model of dynamic labour market interactions where labour reallocation costs are partly financed by uninsured workers' consumption flows. In the resulting second-best equilibrium, provisions that shift labour reallocation costs to risk-neutral employers can increase productive efficiency if their administrative dead-weight costs are not too large, and increase workers' welfare as long as employers' firing costs at least partly finance workers' mobility.

Journal ArticleDOI
TL;DR: In this paper, an asymptotic distribution theory for a class of Generalized Method of Moments estimators that arise in the study of differentiated product markets was provided.
Abstract: We provide an asymptotic distribution theory for a class of Generalized Method of Moments estimators that arise in the study of differentiated product markets when the number of observations is associated with the number of products within a given market. We allow for three sources of error: sampling error in estimating market shares, simulation error in approximating the shares predicted by the model, and the underlying model error. It is shown that the estimators are CAN (consistent and asymptotically normal) provided the size of the consumer sample and the number of simulation draws grow at a large enough rate relative to the number of products. We consider the implications of the results for Berry, Levinsohn, and Pakes' (1995) random coefficient logit model and the pure characteristic model analyzed in Berry and Pakes (2002). The required rates differ for these two frequently used demand models. A small Monte Carlo study shows that the difference in asymptotic properties of the two models is reflected, in quite a striking way, in the models' small sample properties. Moreover, the limit distributions provide a good approximation to the actual Monte Carlo distribution of the parameter estimates. The results have important implications for the computational burden of the two models.
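A toy random-coefficient logit (not the paper's estimator; every number below is made up) separating the first two error sources listed above: simulation error from approximating predicted shares with a finite number of draws, and sampling error from computing observed shares from a finite sample of consumers.

```python
# Sampling error vs simulation error in logit market shares (illustrative).
import numpy as np

rng = np.random.default_rng(2)
delta = np.array([1.0, 0.5, -0.2])        # mean utilities of three products (assumed)
prices = np.array([1.0, 0.8, 1.2])
sigma = 0.5                               # std. dev. of the random price coefficient (assumed)

def mean_shares(nu):
    """Market shares integrated over consumer draws nu of the random coefficient."""
    util = delta[None, :] - sigma * nu[:, None] * prices[None, :]
    expu = np.exp(util)
    return (expu / (1.0 + expu.sum(axis=1, keepdims=True))).mean(axis=0)

true_s = mean_shares(rng.standard_normal(1_000_000))       # near-exact integral

for ns in (50, 5000):                                       # simulation error
    sim = mean_shares(rng.standard_normal(ns))
    print(f"ns = {ns:5d}   max simulation error {abs(sim - true_s).max():.4f}")

for n in (500, 50_000):                                     # sampling error
    nu = rng.standard_normal(n)
    util = delta[None, :] - sigma * nu[:, None] * prices[None, :]
    expu = np.exp(util)
    probs = expu / (1.0 + expu.sum(axis=1, keepdims=True))      # per-consumer choice probs
    full = np.column_stack([1.0 - probs.sum(axis=1), probs])    # prepend the outside good
    picks = (rng.random(n)[:, None] > full.cumsum(axis=1)).sum(axis=1)
    obs = np.array([(picks == j + 1).mean() for j in range(3)])
    print(f"n  = {n:5d}   max sampling error   {abs(obs - true_s).max():.4f}")
```

Both errors shrink as the number of draws and the consumer sample grow, which is the kind of rate condition the asymptotic theory formalizes.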

Journal ArticleDOI
TL;DR: In this article, a matching model emphasizing high hazard rates among newly formed firm-worker matches is proposed to explain the persistence of the U.S. unemployment rate, and the authors show that recurring job loss can account for the fact that the unemployment rate remains elevated for as much as 4 or 5 years following an initial jump.
Abstract: Standard models of employment fluctuations cannot reconcile the unemployment rate's remarkable persistence with the high job-finding rates found in worker flows data. A matching model emphasizing high hazard rates among newly formed firm-worker matches can resolve this shortcoming. In the model, matches are experience goods; consequently, newly employed workers face higher hazard rates. Following a job loss, workers may experience several short-lived jobs before finding stable employment. At an aggregate level, an initial burst of job loss precipitates a steady flow of recurring job loss. A simulation shows that this recurring job loss can account for the fact that the unemployment rate remains elevated for as much as 4 or 5 years following an initial jump. Intuitively, the labour market frictions associated with the difficult and time-consuming process of matching unemployed workers with appropriate jobs represent an appealing way to enhance conventional dynamic general equilibrium models of economic fluctuations that generally exhibit very little internal propagation of business cycle shocks.1 Nevertheless, although conventional matching models, such as the canonical Mortensen and Pissarides (1994) model, successfully explain many other important features of the labour market, they fail to explain the high degree of persistence observed in the U.S. unemployment rate. The primary reason for this failure is that data on labour market flows show that unemployed workers actually find jobs quite quickly. Therefore, following a shock that triggers a burst of job loss, the high job-finding rate implies that the unemployment rate in these models returns quite rapidly back toward its average level. This represents a significant obstacle to the notion that frictions in the labour market account for much of the propagation of business cycle shocks. This paper asserts that existing matching models fail to capture an important feature of the labour market that can reconcile the dramatic persistence of the unemployment rate with the evidence of high job-finding rates. Specifically, although unemployed workers find jobs quickly, the newly found jobs often last only a short time. After an initial job loss, a worker may experience several short-lived jobs before settling into more stable employment. This recurring job loss slows the pace at which the unemployment rate returns to its normal level following an adverse shock.
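A back-of-the-envelope Markov-chain sketch of the mechanism (the monthly transition rates are invented, not the paper's calibration): even with a 45% monthly job-finding rate, a high separation hazard in new matches means the extra unemployment created by a one-time burst of job loss recedes only gradually, over years rather than months.

```python
# Three labour market states: 0 = unemployed, 1 = new (fragile) match, 2 = stable match.
import numpy as np

f, d_new, g, d_old = 0.45, 0.20, 0.10, 0.015   # find rate, new-match separation,
                                                # promotion to stable, stable separation
P = np.array([                                  # monthly transition matrix (rows sum to 1)
    [1 - f,  f,             0.0      ],
    [d_new,  1 - d_new - g, g        ],
    [d_old,  0.0,           1 - d_old],
])

x = np.array([0.1, 0.1, 0.8])
for _ in range(5000):                           # settle at the ergodic distribution
    x = x @ P

x = x + np.array([0.04, 0.0, -0.04])            # one-time burst: 4% of workers displaced

for month in range(0, 61, 12):
    print(f"month {month:2d}: unemployment rate {x[0]:.3f}")
    for _ in range(12):
        x = x @ P
```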

Journal ArticleDOI
Claudio Michelacci, Javier Suarez
TL;DR: In this paper, the authors claim that the stock market encourages business creation, innovation, and growth by allowing the recycling of "informed capital'' due to incentive and information problems, new start-ups face high flotation costs.
Abstract: We claim that the stock market encourages business creation, innovation, and growth by allowing the recycling of "informed capital''. Due to incentive and information problems, new start-ups face high flotation costs. Sustaining a tight relationship with a monitor (bank, venture capitalist) allows them to postpone their going public decision until profitability prospects are clearer or incentive problems are less severe. However, monitors' informed capital is in limited supply and the earlier young firms go public the quicker this capital is redirected towards new start-ups. Hence factors that lead to the emergence of a stock market for young firms also encourage business creation. Given the role of new businesses in innovation, our theory suggests a novel linkage between financial development and growth.

Journal ArticleDOI
TL;DR: In this article, the authors develop a simple yet powerful positive model where the majority rule governing future elections is itself chosen in an election and analyze an overlapping generations model of voting on reform opportunities, which they model as similar to investment projects.
Abstract: Political economy models usually take as given the rules which govern the political decision making, with the simple majority rule being the most popular model. In this paper, we develop a simple yet powerful positive model where the majority rule governing future elections is itself chosen in an election. We observe a large variety of majority rules governing elections in the real world. On the one side, referenda are usually decided by a simple majority. At the other extreme, many international organizations require unanimity in votes among the member states. In the European Union's council of ministers, some proposals require only a simple majority, some a supermajority of about 71% (62 votes out of 87) and some require unanimous consent. Explicit supermajorities are required in most countries for a change of the constitution, but in some U.S. states also for a tax raise. Even more important could be what we might call implicit supermajorities: in parliamentary systems with a strong committee organization, a legislative proposal usually needs the support of both the respective committee and the house. In parliamentary systems with two chambers, certain legislative proposals need the support of both chambers. Previous rationalizations of qualified majority rules focus on the problem of Condorcet cycles under simple majority rule in n-dimensional elections and on commitment problems; see our literature review below. Our model presents a new rationale for rules requiring qualified majorities in elections. We analyse an overlapping generations model of voting on reform opportunities, which we model as similar to investment projects: at first, there is a cost and then there are benefits. Since voters have a finite lifetime, old voters will be less keen on reforms

Journal ArticleDOI
TL;DR: In this paper, the authors analyse the efficiency of the labour market outcome in a competitive search equilibrium model with endogenous turnover and endogenous general human capital formation, and they show that search frictions do not distort training decisions if firms and their employees are able to coordinate efficiently, for instance, by using long-term contracts.
Abstract: We analyse the efficiency of the labour market outcome in a competitive search equilibrium model with endogenous turnover and endogenous general human capital formation. We show that search frictions do not distort training decisions if firms and their employees are able to coordinate efficiently, for instance, by using long-term contracts. In the absence of efficient coordination devices there is too much turnover and too little investment in general training. Nonetheless, the number of training firms and the amount of training provided are constrained optimal, and training subsidies therefore reduce welfare. Copyright 2004, Wiley-Blackwell.

Journal ArticleDOI
TL;DR: In this article, the authors model bargaining situations in which parties have the option to terminate the negotiation, resulting in a termination outcome that depends on the offers made in the negotiation phase.
Abstract: We model bargaining situations in which parties have the option to terminate the negotiation, resulting in a termination outcome that depends on the offers made in the negotiation phase. The key features of the model are that 1) making a concession in the negotiation phase increases the other party's termination option payoff and 2) the termination outcome induces an efficiency loss as compared with a negotiated agreement. The main finding is that the mere threat of termination forces equilibrium concessions in the negotiation phase to be gradual, and the degree of gradualism is characterized. The model also applies to contribution games in which partial projects can be implemented. Our findings are contrasted with those appearing in the literature.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate firms' incentives to invest in cost reduction in the first price sealed bid auction, a format largely used for procurement, and find that firms will tend to underinvest in cost reduction because they anticipate fiercer head-on competition.
Abstract: This paper investigates firms’ incentives to invest in cost reduction in the first price sealed bid auction, a format largely used for procurement. Two central features of the model are that we allow firms to be heterogeneous and that investment is observable. We find that firms will tend to underinvest in cost reduction because they anticipate fiercer head-on competition. Using the second price auction as a benchmark, we also find that the first price auction will elicit less investment from market participants and that this is socially inefficient. These results have implications for market design when investment is important. Different market institutions provide different incentives for firms to reduce costs, acquire information, expand, and so on, in short, to undertake any activity that affects their competitive positions. Accordingly, the performance of these market institutions should be judged not only from a static perspective—taking firms’ competitive positions as given—but also from a dynamic perspective. Auction markets are a natural competitive environment to study because there is a clear sense in which we can design such markets by choosing appropriate rules for competition. In this paper, we study how the first price auction (FPA), a format commonly used for procurement, affects firms’ incentives to invest in cost reduction. Many procurement markets are characterized by a small number of participants, heterogeneity among firms and low rates of turnover. Therefore, it is interesting to investigate to what extent the format used for the auction contributes to these characteristics. The basic ingredients of our model are as follows. There are N firms and one of them has the opportunity, prior to the auction, to make an investment. We model this investment as an improvement in the ex ante distribution of costs for this firm. The main question we ask is: what are the firm’s incentives to invest given that this investment is observed by its competitors? Clearly, the benefits of such investment depend on how the firm’s competitors react. In other words, to determine the firm’s incentive to invest, we need to compare the equilibrium at the procurement stage if it makes this investment and if it does not. In Section 3, we derive new comparative statics results for the FPA. We find that after the investment, the investor’s opponents will bid collectively more aggressively. This result holds generally when there are only two bidders or when the investor’s opponents are symmetric (Proposition 1). Otherwise, a sufficient condition is that investment changes market leadership (Proposition 2). In the language of industrial organization, investment has a negative strategic effect in the FPA. This erodes 1

Journal ArticleDOI
TL;DR: In this paper, the authors study the evolution of lifetime labour income inequality by constructing present value life cycle measures that incorporate both earnings and employment risk, and find that even though lifetime income inequality is 40% less than earnings inequality, the total increase in lifetime inequality over the past 20 years is the same as earnings inequality.
Abstract: In this paper we study the evolution of lifetime labour income inequality by constructing present value life cycle measures that incorporate both earnings and employment risk. We find that, even though lifetime income inequality is 40% less than earnings inequality, the total increase in lifetime income inequality over the past 20 years is the same as that of earnings inequality. While the total increase is the same, the pathways there differ, with earnings inequality experiencing a steady increase and lifetime income inequality increasing in spurts, particularly in the latter half of the 1990s. Finally, we find the changes in lifetime income inequality are primarily driven by changes in earnings mobility and changes in the earnings distribution itself; changes in employment risk and the composition of the sample, such as the shift toward attaining more education and the ageing population, do not play a large role. © 2004 The Review of Economic Studies Limited.
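One plausible way to write the kind of present-value measure described above (the paper's exact construction may differ): discount each year's labour earnings, zeroing out years of non-employment, so that both earnings risk and employment risk enter the lifetime total.

```latex
% Illustrative present-value lifetime income for individual i (notation assumed):
% w_{i,t} = earnings if employed in year t, e_{i,t} \in \{0,1\} = employment indicator,
% r = discount rate, T = length of the working life.
Y_i \;=\; \sum_{t=0}^{T} \frac{e_{i,t}\, w_{i,t}}{(1+r)^{t}}
```

Lifetime inequality is then measured over the cross-sectional distribution of Y_i rather than over annual earnings.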