
Showing papers in "The Review of Economic Studies in 1989"


Journal ArticleDOI
TL;DR: A non-competitive rational expectations model, in which traders take into account their influence on the price, yields a reasonable model of endogenous acquisition of costly private information even when traders are risk-neutral, and prices reveal less information than in the competitive equilibrium.
Abstract: Competitive rational expectations models have the unsatisfactory property, dubbed the "schizophrenia" problem by Hellwig, that each trader takes the equilibrium price as given despite the fact that he influences that price. An examination of information aggregation in a non-competitive rational expectations model using a Nash equilibrium in demand functions shows that the schizophrenia problem is avoided by having each trader take into account the effect his demand has on the equilibrium price. Given a distribution of private information across traders, prices reveal less information than in the competitive equilibrium, and prices no longer become fully informative in the limit as noise trading vanishes or as traders become risk neutral. With small traders, the model may become one of monopolistic competition, not perfect competition. In contrast to the competitive model, a reasonable model of endogenous acquisition of costly private information is obtained, even when traders are risk-neutral.

872 citations


Journal ArticleDOI
TL;DR: This paper considers the evidence for the contrary position, that permanent income is in fact less smooth than measured income, so that the smoothness of consumption cannot be straightforwardly explained by permanent income theory.
Abstract: For thirty years it has been accepted that consumption is smooth because permanent income is smoother than measured income. This paper considers the evidence for the contrary position, that permanent income is in fact less smooth than measured income, so that the smoothness of consumption cannot be straightforwardly explained by permanent income theory. The paper argues that in postwar U.S. quarterly data, consumption is smooth because it responds with a lag to changes in income.
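A standard way to see why permanent income can be rougher than measured income (a textbook permanent-income calculation offered for context, not taken from the paper itself): suppose income growth is positively autocorrelated,

$$\Delta y_t = \rho\,\Delta y_{t-1} + \varepsilon_t, \qquad 0 < \rho < 1.$$

Under the random-walk permanent income hypothesis with real interest rate $r$, the revision to consumption (i.e. to permanent income) is then

$$\Delta c_t = \frac{\varepsilon_t}{1 - \rho/(1+r)} > \varepsilon_t,$$

so permanent income responds more than one-for-one to income surprises, which is why its smoothness cannot be taken for granted.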

564 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that a very natural game, similar to one often used elsewhere in the literature to model private provision, in fact fully implements the core of a simple public goods economy in undominated perfect equilibria.
Abstract: Standard economic intuition would say that private provision of public goods will be inefficient due to free-rider problems. This view is in contrast to the results in the literature on full implementation where it is shown that (under certain conditions) games exist which only have efficient equilibria. The games usually used to demonstrate existence are quite complex and seem "unnatural" possibly leading to the perception that implementation requires a central authority to choose and impose the game. In a simple public goods setting, we show that a very natural game—similar to one often used elsewhere in the literature to model private provision—in fact fully implements the core of this economy in undominated perfect equilibria. More specifically, we consider a complete information economy with one private good and two possible social decisions. Agents voluntarily contribute any non-negative amount of the private good they choose and the social decision is to provide the public good iff contributions are sufficient to pay for it. The contributions are refunded otherwise. The set of undominated perfect equilibrium outcomes of this game is exactly the core of the economy. We give some extensions of this result, discuss the role of perfection and alternative equilibrium notions, and discuss the intuition and implications of the results.
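A minimal illustration of the contribution game just described (an assumed two-agent example for concreteness, not taken from the paper): let the two agents value the public decision at $v_1$ and $v_2$, with provision cost $c \le v_1 + v_2$. Each contributes $t_i \ge 0$; the good is provided iff $t_1 + t_2 \ge c$, and contributions are refunded otherwise. The core of this economy consists of the outcomes in which the good is provided, $t_1 + t_2 = c$, and $0 \le t_i \le v_i$ for each agent, and by the paper's result these are exactly the undominated perfect equilibrium outcomes of the game.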

539 citations


Journal ArticleDOI
TL;DR: In this article, the authors develop a criterion for determining whether an economy is dynamically efficient, which involves a comparison of the cash flows generated by capital with the level of investment; its application to the United States economy and the economies of other major OECD nations suggests that they are dynamically efficient.
Abstract: The issue of dynamic efficiency is central to analyses of capital accumulation and economic growth. Yet the question of what characteristics should be examined to determine whether actual economies are dynamically efficient is unresolved. This paper develops a criterion for determining whether an economy is dynamically efficient. The criterion, which holds for economies in which technological progress and population growth are stochastic, involves a comparison of the cash flows generated by capital with the level of investment. Its application to the United States economy and the economies of other major OECD nations suggests that they are dynamically efficient.
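Stated schematically (a hedged restatement of the cash-flow comparison described above, not a quotation of the paper's theorem): letting $\Pi_t$ denote gross capital income and $I_t$ gross investment,

$$\Pi_t - I_t > 0 \ \text{in every period and state} \;\Rightarrow\; \text{dynamic efficiency}, \qquad \Pi_t - I_t < 0 \ \text{throughout} \;\Rightarrow\; \text{dynamic inefficiency}.$$

The empirical exercise then amounts to checking whether the capital sector pays out more in cash flow than it absorbs in new investment.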

462 citations


Journal ArticleDOI
TL;DR: In this article, a decentralized process for the diffusion of knowledge is analyzed, where the economy converges from an initial distribution of knowledge over agents to the steady-state distribution, which is unique.
Abstract: This paper analyzes a decentralized process for the diffusion of knowledge. In equilibrium, the economy converges from an initial distribution of knowledge over agents to the steady-state distribution, which is unique. Because of the public good aspect of information, too little learning takes place, and ideas are implemented too early. The key difference between earlier formulations of search externalities by Diamond, Mortensen, and Spence on the one hand, and our own on the other, is that here spillovers of knowledge depend not only on how hard people are trying, but also on the differences in what they know: if all of us know the same thing, we cannot learn from each other. The model also addresses the following two substantive questions: first, the relationship between inequality and growth, noted some time ago by Kuznets, and second, the effect on growth of improvements in the communication technology.

367 citations


Journal ArticleDOI
TL;DR: In this paper, an estimable structural dynamic model of married women's labour force participation and fertility is presented in which wages are stochastic and work experience, or cumulative participation, is endogenous.
Abstract: This paper presents and estimates a dynamic model of married women's labour force participation and fertility in which the effect of work experience on wages is explicitly taken into account. Because current participation alters future potential earnings, the investment return to work will be an important factor in the current work decision in any forward-looking behavioural model. The model is estimated using the National Longitudinal Surveys mature women's cohort. We use the estimates of our model to predict changes in the lifecycle patterns of employment due to changes in schooling, fertility, husband's income, and the magnitude of the experience effect on wages. We find that although work experience increases the disutility of further work, the effect is overwhelmed by the positive effect of experience on wages, leading to persistence in the employment patterns of these women. In addition we find that an increase in young children and in husband's income substantially reduces participation while increased schooling has a powerful positive impact on participation. This paper presents an estimable structural dynamic model of married women's labour force participation and fertility in which wages are stochastic and work experience or cumulative participation is endogenous. The model is structural in the sense that the parameters which are estimated are contained in the fundamental relationships governing behaviour, namely the utility function and the constraints. The model is contained in the class of models which describe the life-cycle capital accumulation process with endogenous labour supply such as Weiss (1972) and Heckman (1976). It is closest in spirit to that of Weiss and Gronau (1981). The basic feature of their model and ours is that labour market participation affects future wages, which then affects future participation. The investment return to current work will necessarily be taken into account in any forward-looking optimizing model. As Weiss and Gronau note, estimates of labour supply models have ignored the inherent behavioural dynamics associated with a positive wage-experience profile. There is no adequate empirical treatment of the human capital investment dimension of the labour force participation decision in the literature. Heckman and Willis (1977) have studied a sequential discrete choice model of the labour force participation of married women in a reduced-form framework. Their work
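A schematic of the forward-looking participation problem described in the abstract (illustrative notation only, not the paper's exact specification): with cumulative work experience $h_t$, participation choice $p_t \in \{0,1\}$, and other state variables $\Omega_t$ (children, husband's income, wage shocks),

$$V_t(h_t,\Omega_t) = \max_{p_t\in\{0,1\}} \; u(c_t, p_t; \Omega_t) + \beta\, \mathbb{E}\!\left[\,V_{t+1}(h_t + p_t, \Omega_{t+1}) \mid \Omega_t\right], \qquad \ln w_t = \gamma_0 + \gamma_1 h_t + \xi_t.$$

Because working today raises $h_{t+1}$ and hence future offered wages, current participation carries an investment return over and above current earnings, which is the mechanism generating the persistence the authors report.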

327 citations


Journal ArticleDOI
TL;DR: The effects of R&D spillovers are investigated in four industries, the social and private rates of return to R&D are calculated, and the social return is estimated to exceed the private return in each industry.
Abstract: dynamic duality. We are particularly interested in the effects of R&D spillovers and in calculating the social and private rates of return. There are three effects associated with the intra-industry R&D spillover. First, costs decline as knowledge expands for the externality-receiving firms. Second, production structures are affected, as factor demands change in response to the spillover. Third, the rates of capital accumulation are affected by the R&D spillover. These cost-reducing, factor-biasing and capital adjustment effects are estimated for four industries. The existence of R&D spillovers implies that the social and private rates of return to R&D capital differ. We estimate that the social return exceeds the private return in each industry. Moreover, there is significant variation across industries in the differential between the social and private rates of return.

327 citations


Journal ArticleDOI
TL;DR: In this paper, the empirical relation between nominal exchange rates and macroeconomic fundamentals for five major OECD countries between 1974 and 1987 was examined using a variety of parametric and non-parametric techniques.
Abstract: This paper examines the empirical relation between nominal exchange rates and macroeconomic fundamentals for five major OECD countries between 1974 and 1987. Five theoretical models of exchange rate determination are considered. Potential non-linearities are examined using a variety of parametric and non-parametric techniques. We find that the poor explanatory power of the models considered cannot be attributed to non-linearities arising from time-deformation or improper functional form.

325 citations


Journal ArticleDOI
TL;DR: In a stock market with transaction costs, the interaction between thinness and volatility can produce multiple steady-state equilibria, some characterized by low trade and high volatility, and others by high trade and low volatility.
Abstract: Thin equity markets cannot accommodate temporary bulges of buy or sell orders without large price movements. The resulting volatility can induce risk-averse transactors who face transaction costs to desert these markets. Thus thinness and the related price volatility may become joint self-perpetuating features of an equity market, irrespective of the volatility of asset fundamentals. If, however, appropriate incentive schemes are adopted to encourage entry by additional investors, this vicious circle can be broken, eventually shifting the market to a self-sustaining, superior equilibrium characterized by a higher number of transactors, lower price volatility and larger supply of the asset. A number of empirical studies (Cohen et al. (1976), Telser and Higinbotham (1977), Pagano (1986), Tauchen and Pitts (1983)) have found that thin speculative markets are ceteris paribus more volatile than deep ones. A plausible explanation for this finding is that thin markets are generally characterized by small numbers of transactors per unit time, and thus their prices are more sensitive to the impact of individual traders' demand shocks. Conversely, in deep markets, transactors are so many that the uncorrelated demand shocks experienced by individual traders tend to offset each other and leave market prices largely unaffected. While this suggests a rationale for the observed relationship between market size and price volatility, it does so by taking market size as the exogenous factor. However, the volatility of a speculative market may feed back on its size, in the sense that the high liquidation risk implied by very volatile prices can induce potential entrants to keep out of the market. This paper shows that, in a stock market with transaction costs, this interaction between thinness and volatility can produce multiple steady-state equilibria, some characterized by low trade and high volatility, and others by high trade and low volatility. Whether the market will settle in one equilibrium or another depends entirely on the expectations held by economic agents (Sections 2 and 3). The existence of these multiple "bootstrap" equilibria can be explained heuristically as follows. Each additional trader generates a positive externality for other (actual or potential) traders by decreasing the riskiness of the stock; lower risk in turn tends to attract more investors, with the effect of raising stock prices and inducing corporations to issue additional equity. We thus have a feedback loop between market size and price volatility, where market size is measured both along the dimension of the number of traders and along that of the total stock of equities. If however investors face transactions costs in the stock market, this positive feedback may fail to be operative: if the volume of trade is expected to be small, investors with relatively high transaction costs will abstain from trading. Thus the market
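One way to formalize the offsetting-shocks intuition in the abstract (an illustration, not the paper's model): if the $n$ traders' demand shocks $u_1,\dots,u_n$ are i.i.d. with variance $\sigma^2$, the average shock hitting the market is

$$\bar u_n = \frac{1}{n}\sum_{i=1}^{n} u_i, \qquad \operatorname{Var}(\bar u_n) = \frac{\sigma^2}{n} \to 0,$$

so per-capita order-flow noise, and with it the price impact of idiosyncratic trades, shrinks as the market deepens, whereas in a thin market the same shocks move prices substantially.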

303 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a simple counterexample to the belief that policy cooperation among benevolent governments is desirable, explain circumstances under which such counterexamples are possible, and relate them to the literature on time inconsistency.
Abstract: This paper presents a simple counterexample to the belief that policy cooperation among benevolent governments is desirable. It also explains circumstances under which such counterexamples are possible and relates them to the literature on time inconsistency. Since the work of Hamada (1976), investigating the effects of increasing policy cooperation among countries has been a major topic in international economics. A standard conclusion of this work is that increasing policy cooperation among countries is desirable. In a seminal paper, Rogoff (1985) has challenged this view. Using a simple monetary model, Rogoff shows that cooperation among policy makers can lead to a lower level of welfare than noncooperation does. Rogoff's result has caused much consternation among those who advocate policy cooperation, and his work has been criticized along several dimensions. For example, some authors, including Canzoneri and Henderson (1988), have noted that a key assumption in Rogoff's model is that the objective function of each country's policy maker does not coincide with the objective function of its residents. Indeed, if in his model policy makers maximize the welfare of their country's residents, the counterexample is overturned and cooperation strictly dominates noncooperation. This feature leads some to interpret Rogoff's result as simply saying that if policy makers form a coalition against the private sector, they may be worse off than if they do not. Others, such as Neck and Dockner (1988), have claimed that Rogoff's result depends on private agents acting strategically. Under this interpretation, Rogoff's result is relevant to, say, economies with a large trade union, but not to economies with a large number of competitive private agents. In a somewhat different vein, Persson (1988) and, especially, Devereux (1986a,b) have questioned the significance of welfare comparisons across different institutional regimes in a model without a solid foundation for the behavioural relationships. This paper presents a simple model in which governments are benevolent, but cooperation is still undesirable. The model is a two-country version of Fischer's (1980) optimal tax model. In it, private agents are competitive (in that each agent takes both prices and government policies as uninfluenced by his actions) and each government maximizes the welfare of its country's residents. In the paper, the two different regimes-cooperative and noncooperative

247 citations


Journal ArticleDOI
TL;DR: In this article, trade in a simple market with an explicit rule for price formation is modelled as a Bayesian game, and the difference between a trader's bid and his reservation value is maximally 0(1/m) where m is the number of traders on each side of the market.
Abstract: A trader who privately knows his preferences may misrepresent them in order to influence the market price. This strategic behaviour may prevent realization of all gains from trade. In this paper, trade in a simple market with an explicit rule for price formation is modelled as a Bayesian game. We show that the difference between a trader's bid and his reservation value is maximally O(1/m) where m is the number of traders on each side of the market. Competitive pressure as m increases thus quickly overcomes the inefficiency private information causes and forces the market towards an efficient allocation.

Journal ArticleDOI
Urs Schweizer
TL;DR: In this article, a simple game of litigation and settlement with incomplete information is considered, where parties are assumed to have the choice between settling their dispute out of court or resorting to costly litigation.
Abstract: This paper deals with a simple game of litigation and settlement with incomplete information. Parties are assumed to have the choice between settling their dispute out of court or resorting to costly litigation. The set of sequential equilibria is characterized and conditions are given under which an efficient equilibrium does exist. Efficient equilibria, however, will be ruled out by various tests of refinement. A comparative statics analysis is carried out with respect to the quality of private information which parties are assumed to receive before any moves have to be made.

Journal ArticleDOI
TL;DR: In this paper, the authors examined the predictability of rates of return on gold and silver and found that the correlation dimension is between 6 and 7 while the Kolmogorov entropy is about 0.2 for both assets.
Abstract: The predictability of rates of return on gold and silver is examined. Econometric tests do not reject the martingale hypothesis for either asset. This failure to reject is shown to be misleading. Correlation dimension estimates indicate a structure not captured by ARCH. The correlation dimension is between 6 and 7 while the Kolmogorov entropy is about 0.2 for both assets. The evidence is consistent with a nonlinear deterministic data generating process underlying the rates of return. The evidence is certainly not sufficient to rule out the possibility of some degree of randomness being present.
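For readers unfamiliar with correlation-dimension estimates, here is a minimal Python sketch of the Grassberger-Procaccia procedure on which such estimates are typically based (illustrative only; the authors' exact estimation of the dimension and of the Kolmogorov entropy may differ):

import numpy as np

def correlation_integral(x, m, eps):
    # C_m(eps): fraction of pairs of m-histories of the series x lying within eps (sup norm)
    n = len(x) - m + 1
    vectors = np.column_stack([x[i:i + n] for i in range(m)])   # delay ("m-history") vectors
    dist = np.max(np.abs(vectors[:, None, :] - vectors[None, :, :]), axis=2)
    iu = np.triu_indices(n, k=1)                                # distinct pairs only
    return np.mean(dist[iu] < eps)

def correlation_dimension(x, m, eps_grid):
    # usual estimate: slope of log C_m(eps) against log eps over a range of eps values
    logC = np.log([correlation_integral(x, m, e) for e in eps_grid])
    slope, _ = np.polyfit(np.log(eps_grid), logC, 1)
    return slope

# toy check on i.i.d. noise: the estimate keeps rising with the embedding dimension m,
# whereas data generated by a low-dimensional deterministic system would saturate
rng = np.random.default_rng(0)
returns = rng.standard_normal(500)
print(correlation_dimension(returns, m=6, eps_grid=np.linspace(0.5, 2.0, 8)))

Sup-norm distances and a log-log slope are the conventional choices; for a low-dimensional deterministic series the slope saturates as the embedding dimension grows, while for pure noise it keeps rising, which is the kind of diagnostic the abstract appeals to.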

Journal ArticleDOI
TL;DR: In this paper, the minimum Chi-squared method is used to compare the asymptotic relative efficiency of marginal and new conditional maximum likelihood estimators for a class of simultaneous equation limited dependent variable models.
Abstract: Estimation in a class of simultaneous equation limited dependent variable models is considered. The minimum Chi-squared method is used to compare the asymptotic relative efficiency of marginal and new conditional maximum likelihood estimators for this class of models. Efficient minimum Chi-squared estimation procedures are described. A two-step algorithm based on a conditional maximum likelihood estimator provides a natural framework for both computing a linearized maximum likelihood estimator and locating the joint maximum likelihood estimator. The unimodality of the simultaneous equation tobit likelihood function is proved and this model is used to illustrate the empirical application of some of the estimators considered in the paper. The relative efficiency of these estimators in the simultaneous equation tobit model is examined in a set of Monte-Carlo experiments. Many applications of microeconomic theory to individual data face the joint problems of censoring and simultaneity. In particular, the dependent variable under investigation may not be continuously observed and some of the conditioning variables representing the outcome of other decisions by the individual may be simultaneously determined. Smith and Blundell (1986) developed an asymptotically efficient test for exogeneity or simultaneity in the simultaneous equation tobit model. As a byproduct, a conditional maximum likelihood estimator was obtained which is consistent under the alternative hypothesis of simultaneity. Nelson and Olsen (1978), Amemiya (1978, 1979) and Heckman (1978) consider a number of consistent marginal maximum likelihood estimators based on marginal maximum likelihood estimators of the reduced-form parameters for the probit and tobit models. In contrast, the estimator derived from Smith and Blundell (1986) is based on the corresponding conditional maximum likelihood estimators. This paper is concerned with estimation in a class of simultaneous limited dependent variable regression models which includes the simultaneous probit and tobit models as special cases. Analogous marginal and conditional maximum likelihood estimators to those of Nelson and Olsen (1978), Amemiya (1978, 1979), Heckman (1978) and Smith and Blundell (1986) are derived and other new estimators are suggested. The minimum Chi-squared approach discussed by Ferguson (1958), Malinvaud (1970) and Rothenberg (1973) is used to compare the asymptotic relative efficiency of various estimators considered. A simple two-step algorithm based on the conditional maximum likelihood
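A schematic of the conditional (control-function) idea the paper builds on, written for the tobit case referenced via Smith and Blundell (1986) (a sketch of that special case, not the paper's general class of estimators):

$$y_{2i} = z_i'\pi + v_i, \qquad y_{1i}^{*} = x_i'\beta + \gamma\, y_{2i} + u_i, \qquad y_{1i} = \max\{0,\, y_{1i}^{*}\},$$

with $(u_i, v_i)$ jointly normal, so that $u_i = \rho v_i + \varepsilon_i$ and $\varepsilon_i$ is independent of $v_i$. A two-step version estimates $\pi$ by least squares and then runs a tobit of $y_{1i}$ on $(x_i, y_{2i}, \hat v_i)$; exogeneity of $y_{2i}$ corresponds to $\rho = 0$.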

Journal ArticleDOI
TL;DR: In this paper, a specific characteristics framework is proposed to construct linkages between alternative conceptual approaches to model product differentiation, which is illustrated for the logit, probit and linear probability models of discrete choice theory.
Abstract: We propose a specific characteristics framework in order to construct linkages between alternative conceptual approaches to modelling product differentiation. First, it is shown that a demand system which satisfies the gross substitutes property imposes specific requirements on the locations of products. In particular, the dimension of the characteristics space must be larger than or equal to the number of products minus one. We then identify a method for casting a given demand system (subject to certain restrictions) into our characteristics framework. This is illustrated for the logit, probit and linear probability models of discrete choice theory. Finally, we find a characteristics representation of the CES representative consumer.

Journal ArticleDOI
TL;DR: The authors develop tests of restrictions on semiparametric and nonparametric time-series models; serial dependence generally affects the test statistic's form, and the methodology relates closely to recent proposals of Powell, Stock, Stoker and Robinson in cross-sectional applications.
Abstract: A restriction on a semiparametric or nonparametric econometric time series model determines the value of a finite-dimensional functional τ of an infinite-dimensional nuisance function. The estimate of τ and its estimated covariance matrix use nonparametric probability and spectral density estimation. A consequent test of the restriction is given approximate large sample justification under absolute regularity on the time series and other conditions. The methodology relates closely to recent proposals of Powell, Stock, Stoker and Robinson in cross-sectional applications, but serial dependence generally affects the test statistic's form, as well as statistical theory.

Journal ArticleDOI
TL;DR: In this article, the authors analyse a four-period complete-information model of a market with switching costs in which new entry occurs after the second period, and distinguish between two types of price war that can occur, and show how the type or mixture of types that arises depends on the size of switching costs.
Abstract: In many markets consumers have "switching costs", for example learning costs or transaction costs, of changing between functionally equivalent brands of a product, or of using any brand for the first time. We analyse a four-period complete-information model of a market with switching costs in which new entry occurs after the second period. The new entry results, in equilibrium, in a price war. That is, the new entrants' prices are higher in the post-entry period than in the entry period, and the incumbent's price falls in either the pre-entry period or the entry period and subsequently rises. We can interpret the incumbent's lowering its price in the pre-entry period as limit-pricing behaviour. We distinguish between two types of price war that can occur, and show how the type or mixture of types that arises depends on the size of switching costs.

Journal ArticleDOI
TL;DR: In this article, the authors present simple conditions and a simple proof of the existence of equilibrium in asset markets where short-selling is allowed and satiation is possible, and show that the choice sets will be unbounded if short selling is allowed.
Abstract: This paper presents simple conditions and a simple proof of the existence of equilibrium in asset markets where short-selling is allowed and satiation is possible. Unlike standard non-satiation assumptions, the one used here is weak enough to be reasonable in the mean-variance Capital Asset Pricing Model and in asset market models where investors maximize expected utility and where total returns to individual assets may be negative. This paper analyses the existence of equilibrium in exchange economy models where choice sets may be unbounded below and satiation is likely. There are two main examples of such models. One is an asset market model where investors maximize expected utility and the total return to individual assets may be negative with positive probability. The other is the mean-variance Capital Asset Pricing Model (CAPM), where investors do not necessarily maximize expected utility but instead maximize a function of only the mean and variance of return to their portfolios. In both of these models, the investors' choice sets will be unbounded if short-selling is allowed. To short-sell a share of an asset means to borrow it and sell it, promising to buy it back and return it to the lender at a later date. Formally, short-selling corresponds to holding a negative number of shares. If unlimited short-selling is allowed, then there is no limit to how large negative numbers of shares can be held, and therefore the choice sets are unbounded. Satiation can also occur both in the mean-variance model and in the expected-utility model (unless there is a riskless asset). In the expected-utility model, there may be satiation if the individual assets have negative returns with positive probability. As the investor gets more and more shares, the potential positive returns get larger and larger, but so do the potential negative returns. If the investor is very averse to large negative returns, then he may eventually be satiated and not want any more shares. In the mean-variance model, the expected return to an investor's portfolio increases as he holds more and more shares of the assets, but so does the variance of return. It may be that at some point, the additional expected return gained from adding more shares to the portfolio is not sufficient to compensate for the increase in variance. If so, then there will be satiation. Satiation in the mean-variance model is analysed in Nielsen (1987, 1988). The expected-utility model with short-selling has been used in many specific analyses of portfolio selection and asset pricing. To name but one, Connor (1984) studies arbitrage pricing in a general equilibrium model where there are no short sales restrictions. The mean-variance CAPM is of course very prominent in the finance literature. Because the
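A textbook illustration of the satiation point in the mean-variance case (assumed quadratic-type preferences for concreteness, not the paper's own assumptions): with expected return vector $\mu$, covariance matrix $\Sigma$, risk-aversion parameter $a > 0$ and no riskless asset, an investor choosing a portfolio $w$ with unrestricted short sales maximizes

$$U(w) = \mu' w - \tfrac{a}{2}\, w' \Sigma w,$$

which attains its maximum at the finite bliss portfolio $w^{*} = a^{-1}\Sigma^{-1}\mu$. Beyond $w^{*}$, additional expected return no longer compensates for the added variance, which is precisely the satiation that the existence proof must accommodate.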

Journal ArticleDOI
TL;DR: This article showed that most methods of incorporating demographic variation into separable models can be represented in a form identical to Barten equivalence scales, except that the scales themselves depend on the exact mix of goods that comprise each group, as well as on demographic variables.
Abstract: This paper shows that most methods of incorporating demographic variation into separable models can be represented in a form that is identical to Barten equivalence scales, except that the scales themselves depend on the exact mix of goods that comprise each group, as well as on demographic variables. This generalization of Barten scales is shown to be more plausible than ordinary scales, can be used to increase the efficiency of demand system estimation, and can overcome Muellbauer's under-identification result for cross-sectional estimation of equivalence scales.
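For reference, the standard Barten form that the paper generalizes (a textbook statement; the generalization itself is only paraphrased here): with demographic variables $d$ and good-specific scales $m_i(d)$, utility is defined over scaled quantities, so that

$$u = U\!\left(\frac{q_1}{m_1(d)},\dots,\frac{q_n}{m_n(d)}\right) \quad\Rightarrow\quad q_i = m_i(d)\, g_i\!\big(p_1 m_1(d),\dots,p_n m_n(d),\, x\big),$$

where $x$ is total expenditure. In the generalization described above, the scale attached to a group of goods depends on the mix of goods within that group as well as on $d$.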

Journal ArticleDOI
TL;DR: In this paper, the authors prove that a Pareto-improving change in tariffs and domestic taxes exists if a productivity-improving change in tariffs exists and if the Weymark condition on the matrix of household demands holds.
Abstract: This paper considers an economy that only has tariffs and domestic commodity taxes as policy instruments. The concept of a productivity improvement in tariffs and taxes is introduced and conditions for its existence are established. We prove that a Pareto-improving change in tariffs and domestic taxes exists if a productivity-improving change in tariffs exists and if the Weymark condition on the matrix of household demands holds. Conditions are established for particular tariff reforms, such as proportional reductions and reductions of extreme rates, to yield Pareto improvements in welfare.

Journal ArticleDOI
TL;DR: In this paper, an infinite-horizon sequential bargaining game with one-sided offers is analyzed, where the seller knows the value of the object and the buyer does not, and the influence of relative discount factors on the solution is studied.
Abstract: The paper analyzes an infinite-horizon sequential bargaining game (with one-sided offers) between a buyer and a seller when the buyer's valuation depends on the seller's; the seller knows the value of the object and the buyer does not. The influence of relative discount factors on the solution is studied. It is shown, for example, that an impasse may result if the buyer (offeror) is too impatient relative to the seller: the buyer makes a single take-it-or-leave-it offer.

Journal ArticleDOI
Byoung Heon Jun
TL;DR: In this paper, the authors study a union formation decision problem when workers consist of two groups distinguished by different productivities, and provide a model which captures this complexity and, at the same time, clarifies the underlying structure of bargaining power.
Abstract: We study a union formation decision problem when workers consist of two groups distinguished by different productivities. Workers may form either a joint union or two separate unions. The whole decision process is modelled as an extensive-form bargaining game. Workers form a joint union when the sizes or productivities of the groups are similar. In the first case, there is a wage differential which is more (less) than proportional to the productivity difference if the size of the more productive group is smaller (larger) than that of the less productive group. In the second case, there is no wage differential. In this paper we study a union formation decision problem when workers consist of two groups distinguished by different productivities. They may form either a joint union or two separate unions, depending on the relative size and productivity of the two groups. The whole decision process is modelled as an extensive-form bargaining game in which the two groups of workers bargain with each other as well as with a firm. The labour market is sharply distinguished from the goods market by the widespread practice of collective bargaining. Usually several groups of workers with different characteristics (for example, skill level or geographical location) are represented as a single unit, and/or several employers form a single bargaining unit against a giant union. This multiplicity of bargaining units creates a complicated bargaining structure and makes the outcome hard to analyse. The purpose of this paper is to provide a model which captures this complexity and, at the same time, clarifies the underlying structure of bargaining power. Although economists have been aware of the multiplicity problem for a long time, it has received little rigorous theoretical treatment until recently. Davidson (1985) examined the possible gains that can be obtained by trade unions through joint bargaining in an oligopolistic industry. There is a positive externality due to the fact that a higher wage rate in one firm generates a larger demand (hence a higher wage) in competing firms. When a joint union is organized, these externalities are internalized to the benefit of workers. Horn and Wolinsky (1985) investigated the equilibrium pattern of unionization in an industry where a firm employs two types of workers. They found that workers tend to form a single union when the two types are substitutes in the sense that the incremental contribution of one group to the firm's revenue is decreasing with respect to the size of the other group. Since workers' bargaining power comes from the loss that they can

Journal ArticleDOI
TL;DR: In this article, the authors consider a small open economy with a safe sector and a risky sector, and they show that policies that sustain informationally constrained Pareto optima should not include tariffs.
Abstract: This paper considers a small open economy with a safe sector and a risky sector. The probability of success in the risky activity differs across individuals, and is private information. It is shown that policies that sustain informationally constrained Pareto optima should not include tariffs. A laissez-faire competitive equilibrium, if it exists, is Pareto optimal. These results contrast with previous literature on the role of tariffs as insurance, where private risk markets are assumed away in an ad hoc manner.

Journal ArticleDOI
Homi Kharas, Brian Pinto
TL;DR: This article showed that a policy of adjusting the official exchange rate towards the black market rate may cause the economy to converge to a high-inflation saddle-point stable equilibrium where money inflation elasticity exceeds unity.
Abstract: With dual exchange rates, where a managed official exchange rate co-exists with a floating black market rate, a given budget deficit may be consistent with many different inflation rates rather than two, which is the normal result in closed economy systems. Further, all these inflation equilibria are saddle-point stable. A policy of adjusting the official exchange rate towards the black market rate may cause the economy to converge to a high-inflation, saddle-point stable equilibrium where money inflation elasticity exceeds unity. The analytics are motivated and illustrated by the Bolivian hyperinflation of 1984–1985.

Journal ArticleDOI
TL;DR: In this article, the problem of designing mechanisms whose Nash allocations coincide with the Lindahl allocations for public goods economies with more than one private good was considered and a single-valued, feasible, and continuous outcome function was proposed.
Abstract: This paper considers the problem of designing mechanisms whose Nash allocations coincide with the Lindahl allocations for public goods economies with more than one private good. Unlike previous mechanisms, the mechanism presented here has a single-valued, feasible, and continuous outcome function. Furthermore, when there are no public goods in economies, feasible and continuous implementation of the (constrained) Walrasian correspondence can be obtained as a corollary of our Theorem 1.

Journal ArticleDOI
TL;DR: In this paper, the welfare properties of the equilibrium timing of price changes are studied; because each firm ignores its contribution to relative-price fluctuations and to aggregate price level inertia, staggered price setting can be a stable equilibrium even when it is highly inefficient.
Abstract: This paper studies the welfare properties of the equilibrium timing of price changes. Staggered price setting has the advantage that it permits rapid adjustment to firm-specific shocks, but the disadvantages that it causes unwanted fluctuations in relative prices and that, by creating price level inertia, it can increase aggregate fluctuations. Because each firm ignores its contribution to these problems, staggering can be a stable equilibrium even if it is highly inefficient. In addition, there can be multiple equilibria in the timing of prices changes; indeed, whenever there is an inefficient staggered equilibrium, there is also an efficient equilibrium with synchronized price setting.

Journal ArticleDOI
TL;DR: In this paper, the authors show the existence of a game form that is feasible, both for equilibrium and disequilibrium strategies, continuous, and for which the set of Nash equilibria coincides with the set of (constrained) Walrasian equilibria for all pure exchange economies.
Abstract: There has been a great deal of research in recent years investigating the question of whether or not there exist institutions (game forms) for which the set of equilibria will coincide with the set of Walrasian equilibria. In this paper we show the existence of a game form that is feasible, both for equilibrium and disequilibrium strategies, continuous, and for which the set of Nash equilibria coincides with the set of (constrained) Walrasian equilibria for all pure exchange economies. The game form allows agents to behave strategically both with respect to their preferences and their initial endowments.

Journal ArticleDOI
TL;DR: This article shows that, under relatively weak conditions, welfare optima cannot be implemented as dynastic equilibria with positive levels of transfers, and that intergenerational altruism ordinarily renders the objectives of social planners dynamically inconsistent, thereby making implementation of welfare optima problematic.
Abstract: There are three central findings. First, under relatively weak conditions, welfare optima cannot be implemented as dynastic equilibria with positive levels of transfers. Second, intergenerational altruism ordinarily renders the objectives of social planners dynamically inconsistent, thereby making implementation of welfare optima problematic. Third, if a planner successfully resolves dynamic inconsistency by committing himself to respect the preferences of deceased generations, and if there are a sufficient number of prior generations, then in a specific set of cases dynastic equilibria are approximately welfare optimal.

Journal ArticleDOI
TL;DR: In this article, the authors investigate the design of trade policies in an uncertain world and show that with a sufficient amount of uncertainty, both governments regulate their firms through subsidies, reflecting an important tradeoff between the strategic advantages of direct quantity controls and flexibility gained by the use of subsidies.
Abstract: This paper investigates the design of trade policies in an uncertain world. Governments in each of two countries select between direct quantity controls and subsidies in an attempt to shift profits in favour of domestic, imperfectly competitive firms. The equilibrium of this bilateral policy game depends critically on the variability of the environment. In a world of certainty, both governments would choose to regulate the behaviour of their firms through direct quantity controls. With a sufficient amount of uncertainty, both governments regulate their firms through subsidies. This result reflects an important tradeoff between the strategic advantages of direct quantity controls and the flexibility gained by the use of subsidies.

Journal ArticleDOI
TL;DR: The authors show that expected-utility-maximizing behaviour is equivalent to dynamically consistent bidding in ascending-bid auctions and to bidding the value of the object in second-price sealed-bid auctions.
Abstract: Analyzing the optimal bidding behaviour in ascending-bid auctions and second-price sealed-bid auctions with independent private values, we show that expected utility maximizing behaviour is equivalent to: (a) dynamically consistent bidding in ascending-bid auctions; (b) the equivalence of the optimal bids in ascending-bid auctions and in second-price sealed-bid auctions; (c) bidding the value of the object in second-price sealed-bid auctions. In addition, the optimal bid in ascending-bid auctions equals the value of the object if and only if the bidder's preferences on lotteries are both quasi-concave and quasi-convex.
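The familiar expected-utility argument behind point (c), sketched for context (a standard textbook observation, not the paper's contribution): with private value $v$, own bid $b$, and highest rival bid $p$, the payoff in a second-price auction is

$$\pi(b) = \begin{cases} v - p, & b > p,\\ 0, & b < p,\end{cases}$$

so the bid affects only whether one wins, never the price paid, and bidding $b = v$ wins exactly when winning is profitable, weakly dominating every other bid. The paper shows that this property, dynamic consistency in ascending-bid auctions, and the equivalence of the two formats are each equivalent to expected-utility maximization, so any of them can fail once expected utility is dropped.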