
Showing papers in "Econometrica in 1989"


Journal ArticleDOI
TL;DR: In this article, the parameters of an autoregression are viewed as the outcome of a discrete-state Markov process, and an algorithm for drawing such probabilistic inference in the form of a nonlinear iterative filter is presented.
Abstract: This paper proposes a very tractable approach to modeling changes in regime. The parameters of an autoregression are viewed as the outcome of a discrete-state Markov process. For example, the mean growth rate of a nonstationary series may be subject to occasional, discrete shifts. The econometrician is presumed not to observe these shifts directly, but instead must draw probabilistic inference about whether and when they may have occurred based on the observed behavior of the series. The paper presents an algorithm for drawing such probabilistic inference in the form of a nonlinear iterative filter.
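The filter admits a compact implementation. The Python sketch below handles only the simplest special case of the paper's model, a two-regime switching mean with Gaussian noise rather than a full autoregression; the function and variable names are ours, not the paper's.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def hamilton_filter(y, mu, sigma, P):
    """Filtered regime probabilities Pr(s_t = j | y_1..y_t) for the toy model
    y_t = mu[s_t] + e_t with e_t ~ N(0, sigma^2) and a 2-state Markov chain s_t.
    P[i][j] = Pr(s_t = j | s_{t-1} = i)."""
    xi = [0.5, 0.5]                       # flat prior over the initial regime
    out = []
    for obs in y:
        # one-step-ahead regime probabilities from the transition matrix
        pred = [sum(xi[i] * P[i][j] for i in range(2)) for j in range(2)]
        # weight each regime by the likelihood of the current observation
        joint = [pred[j] * normal_pdf(obs, mu[j], sigma) for j in range(2)]
        total = sum(joint)                # also the conditional likelihood of y_t
        xi = [jt / total for jt in joint] # Bayes update
        out.append(xi)
    return out
```

The per-period normalizing constant is the conditional likelihood of the observation, so the same recursion delivers the sample likelihood as a by-product for parameter estimation.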

9,189 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the null hypothesis that a time series has a unit root with possibly nonzero drift against the alternative that the process is "trend-stationary" and show how standard tests of the unit root hypothesis against trend stationary alternatives cannot reject the unit-root hypothesis if the true data generating mechanism is that of stationary fluctuations around a trend function which contains a one-time break.
Abstract: We consider the null hypothesis that a time series has a unit root with possibly nonzero drift against the alternative that the process is "trend-stationary". The interest is that we allow under both the null and alternative hypotheses for the presence of a one-time change in the level or in the slope of the trend function. We show how standard tests of the unit root hypothesis against trend stationary alternatives cannot reject the unit root hypothesis if the true data generating mechanism is that of stationary fluctuations around a trend function which contains a one-time break.
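The mechanics of such a test can be illustrated with a Dickey-Fuller style regression augmented by a one-time level-shift dummy. This Python sketch is illustrative only: it covers the level-break case, omits lag augmentation, and the function name and interface are ours; Perron's tabulated critical values, not the standard Dickey-Fuller ones, would apply to the resulting statistic.

```python
import numpy as np

def adf_with_level_break(y, break_index):
    """t-statistic on rho in dy_t = a + b*DU_t + rho*y_{t-1} + e_t,
    where DU_t = 1 for t > break_index (a one-time level shift).
    The unit-root null is rho = 0."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    ylag = y[:-1]
    du = (np.arange(1, len(y)) > break_index).astype(float)  # level-shift dummy
    X = np.column_stack([np.ones(len(dy)), du, ylag])
    coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ coef
    s2 = resid @ resid / (len(dy) - X.shape[1])              # OLS error variance
    cov = s2 * np.linalg.inv(X.T @ X)
    return coef[2] / np.sqrt(cov[2, 2])
```

A strongly negative statistic points against the unit root; the paper's message is that, without the break dummy, stationary data with a one-time break can look like a unit root.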

7,471 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose simple and directional likelihood-ratio tests for discriminating and choosing between two competing models whether the models are nonnested, overlapping or nested and whether both, one, or neither is misspecified.
Abstract: In this paper, we propose a classical approach to model selection. Using the Kullback-Leibler Information measure, we propose simple and directional likelihood-ratio tests for discriminating and choosing between two competing models whether the models are nonnested, overlapping or nested and whether both, one, or neither is misspecified. As a prerequisite, we fully characterize the asymptotic distribution of the likelihood ratio statistic under the most general conditions.
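In the strictly nonnested case the resulting likelihood-ratio statistic has a particularly simple form: the standardized sum of pointwise log-likelihood differences, asymptotically standard normal under the null that the two models are equally close to the truth. A minimal Python sketch (names ours; the paper's refinements for overlapping and nested models are omitted):

```python
import math

def vuong_statistic(loglik_f, loglik_g):
    """Vuong-style statistic from per-observation log-likelihoods under
    competing models f and g. Positive values favor f, negative favor g;
    compare with N(0, 1) critical values under the null of equivalence."""
    n = len(loglik_f)
    m = [a - b for a, b in zip(loglik_f, loglik_g)]   # pointwise LR terms
    mbar = sum(m) / n
    omega = math.sqrt(sum((mi - mbar) ** 2 for mi in m) / n)  # their std dev
    return math.sqrt(n) * mbar / omega
```

By construction the statistic is antisymmetric in the two models, which is the "directional" aspect: it tells you not just that the models differ, but which one fits better.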

5,661 citations


Journal ArticleDOI
TL;DR: In this paper, a class of recursive, but not necessarily expected utility, preferences over intertemporal consumption lotteries is developed, which allows risk attitudes to be disentangled from the degree of intertemporal substitutability, leading to a model of asset returns in which appropriate versions of both the atemporal CAPM and the intertemporal consumption-CAPM are nested as special cases.
Abstract: This paper develops a class of recursive, but not necessarily expected utility, preferences over intertemporal consumption lotteries. An important feature of these general preferences is that they permit risk attitudes to be disentangled from the degree of intertemporal substitutability. Moreover, in an infinite horizon, representative agent context these preference specifications lead to a model of asset returns in which appropriate versions of both the atemporal CAPM and the intertemporal consumption-CAPM are nested as special cases. In our general model, systematic risk of an asset is determined by covariance with both the return to the market portfolio and consumption growth, while in each of the existing models only one of these factors plays a role. This result is achieved despite the homotheticity of preferences and the separability of consumption and portfolio decisions. Two other auxiliary analytical contributions which are of independent interest are the proofs of (i) the existence of recursive intertemporal utility functions, and (ii) the existence of optima to corresponding optimization problems. In proving (i), it is necessary to define a suitable domain for utility functions. This is achieved by extending the formulation of the space of temporal lotteries in Kreps and Porteus (1978) to an infinite horizon framework. A final contribution is the integration into a temporal setting of a broad class of atemporal non-expected utility theories. For homogeneous members of the class due to Chew (1985) and Dekel (1986), the corresponding intertemporal asset pricing model is derived.

4,218 citations


Book ChapterDOI
TL;DR: In this paper, an axiom of comonotonic independence is introduced, which weakens the von Neumann-Morgenstern axiom of independence, and the expected utility of an act with respect to the nonadditive probability is computed using the Choquet integral.
Abstract: An act maps states of nature to outcomes; deterministic outcomes as well as random outcomes are included. Two acts f and g are comonotonic, by definition, if it never happens that f(s) ≻ f(t) and g(t) ≻ g(s) for some states of nature s and t. An axiom of comonotonic independence is introduced here. It weakens the von Neumann-Morgenstern axiom of independence as follows: if f ≻ g and if f, g, and h are pairwise comonotonic, then αf + (1−α)h ≻ αg + (1−α)h. If a nondegenerate, continuous, and monotonic (state independent) weak order over acts satisfies comonotonic independence, then it induces a unique (not necessarily additive) probability and a von Neumann-Morgenstern utility. Furthermore, one can compute the expected utility of an act with respect to the nonadditive probability, using the Choquet integral. This extension of expected utility theory covers situations, such as the Ellsberg paradox, which are inconsistent with additive expected utility. The concept of uncertainty aversion and an interpretation of comonotonic independence in the context of social welfare functions are also included.
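The Choquet integral itself is easy to compute on a finite state space: rank the act's utilities in decreasing order and weight each by the increment of the capacity over the growing upper-level sets. A Python sketch (interface and names ours):

```python
def choquet_integral(utilities, capacity):
    """Choquet expected utility of an act on a finite state space.
    utilities: dict state -> utility of the act's outcome in that state.
    capacity: function frozenset(states) -> [0, 1]; monotone, with
    capacity(empty) = 0 and capacity(all states) = 1; need not be additive."""
    states = sorted(utilities, key=lambda s: utilities[s], reverse=True)
    total, prev_weight, cumulative = 0.0, 0.0, set()
    for s in states:
        cumulative.add(s)
        w = capacity(frozenset(cumulative))
        total += utilities[s] * (w - prev_weight)  # marginal capacity weight
        prev_weight = w
    return total
```

With an additive capacity this reduces to ordinary expected utility; with a subadditive ("pessimistic") capacity it down-weights good outcomes, which is how the model accommodates Ellsberg-type ambiguity aversion.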

2,898 citations


Journal ArticleDOI
TL;DR: In this article, a test for the ex ante efficiency of a given portfolio of assets is analyzed, and the sensitivity of the test to the portfolio choice and to the number of assets used to determine the ex post mean-variance efficient frontier is analyzed.
Abstract: A test for the ex ante efficiency of a given portfolio of assets is analyzed. The relevant statistic has a tractable small sample distribution. Its power function is derived and used to study the sensitivity of the test to the portfolio choice and to the number of assets used to determine the ex post mean-variance efficient frontier. Several intuitive interpretations of the test are provided, including a simple mean-standard deviation geometric explanation. A univariate test, equivalent to our multivariate-based method, is derived, and it suggests some useful diagnostic tools which may explain why the null hypothesis is rejected. Empirical examples suggest that the multivariate approach can lead to more appropriate conclusions than those based on traditional inference which relies on a set of dependent univariate statistics.
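The statistic in question can be sketched directly. Assuming excess returns and the standard time-series regression setup, the following Python function computes an F-statistic of this form from the pricing-error intercepts and the candidate portfolio's Sharpe ratio (names, interface, and the specific normalizations are ours):

```python
import numpy as np

def grs_statistic(excess_returns, portfolio_excess):
    """F-statistic for ex ante mean-variance efficiency of a candidate
    portfolio. excess_returns: (T, N) excess returns on N test assets;
    portfolio_excess: (T,) excess return on the candidate portfolio."""
    T, N = excess_returns.shape
    X = np.column_stack([np.ones(T), portfolio_excess])
    coef, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
    alpha = coef[0]                      # pricing-error intercepts, one per asset
    resid = excess_returns - X @ coef
    Sigma = resid.T @ resid / T          # MLE residual covariance
    sharpe2 = (portfolio_excess.mean() / portfolio_excess.std()) ** 2
    quad = alpha @ np.linalg.solve(Sigma, alpha)
    return (T - N - 1) / N * quad / (1 + sharpe2)
```

Under the efficiency null the statistic follows an exact F(N, T−N−1) distribution, which is the "tractable small sample distribution" the abstract refers to, so it can be compared with standard F critical values.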

2,129 citations


Journal ArticleDOI
TL;DR: In this article, conditions under which the numerical approximation of a posterior moment converges almost surely to the true value as the number of Monte Carlo replications increases, and the numerical accuracy of this approximation may be assessed reliably, are set forth.
Abstract: Methods for the systematic application of Monte Carlo integration with importance sampling to Bayesian inference in econometric models are developed. Conditions under which the numerical approximation of a posterior moment converges almost surely to the true value as the number of Monte Carlo replications increases, and the numerical accuracy of this approximation may be assessed reliably, are set forth. Methods for the analytical verification of these conditions are discussed.
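The basic estimator, and the numerical standard error whose reliable assessment the paper addresses, look like this in a one-dimensional Python toy (names and interface ours). The posterior is supplied only up to a normalizing constant, which cancels in the ratio of weighted sums:

```python
import math
import random

def importance_sampling_moment(g, log_posterior_kernel, sample_from_q, log_q, n):
    """Estimate the posterior moment E[g(theta) | data] by importance sampling.
    Draws theta_i from the importance density q, weights by kernel/q, and
    returns the ratio estimate together with a numerical standard error."""
    draws = [sample_from_q() for _ in range(n)]
    w = [math.exp(log_posterior_kernel(t) - log_q(t)) for t in draws]
    gw = [g(t) * wi for t, wi in zip(draws, w)]
    est = sum(gw) / sum(w)
    # delta-method numerical standard error for the ratio estimator
    resid = [gi - est * wi for gi, wi in zip(gw, w)]
    nse = math.sqrt(sum(r * r for r in resid)) / sum(w)
    return est, nse
```

The convergence conditions amount, roughly, to requiring that the weights have finite variance under q, i.e. that the importance density's tails dominate the posterior's.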

1,649 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a simple modification of a conventional method of moments estimator for a discrete response model, replacing response probabilities that require numerical integration with estimators obtained by Monte Carlo simulation.
Abstract: This paper proposes a simple modification of a conventional method of moments estimator for a discrete response model, replacing response probabilities that require numerical integration with estimators obtained by Monte Carlo simulation. This method of simulated moments (MSM) does not require precise estimates of these probabilities for consistency and asymptotic normality, relying instead on the law of large numbers operating across observations to control simulation error, and hence can use simulations of practical size. The method is useful for models such as high-dimensional multinomial probit (MNP), where computation has restricted applications.
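The core idea, replacing an intractable choice probability with a simulation frequency, can be sketched in a few lines of Python. For clarity the sketch uses independent standard-normal taste shocks rather than the general MNP covariance structure, and the names are ours:

```python
import random

def simulated_choice_probability(v, n_sims, rng=random):
    """Frequency simulator for a discrete-choice probability: the fraction of
    simulated taste-shock draws under which alternative 0 has the highest
    utility. v: deterministic utilities of the alternatives."""
    hits = 0
    for _ in range(n_sims):
        u = [vj + rng.gauss(0.0, 1.0) for vj in v]   # utility = v + shock
        hits += max(range(len(v)), key=lambda j: u[j]) == 0
    return hits / n_sims
```

Plugging such simulated frequencies into moment conditions yields the MSM estimator; as the abstract notes, consistency does not require the per-observation simulation error to vanish, only to average out across observations by the law of large numbers.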

1,621 citations


Journal ArticleDOI
TL;DR: The authors prove a general central limit theorem for estimators defined by minimizing the length of a vector-valued random criterion function.
Abstract: A general central limit theorem is proved for estimators defined by minimization of the length of a vector-valued random criterion function. No smoothness assumptions are imposed on the criterion function, so that the results can be applied to a fairly broad class of simulation estimators. Complete analyses of two simulation estimators, one introduced by Pakes and the other by McFadden, illustrate the application of the general theorems.

1,493 citations


Journal ArticleDOI
TL;DR: In this paper, the authors model an oligopoly facing uncertain demand in which each firm chooses as its strategy a "supply function" relating its quantity to its price, and prove the existence of a Nash equilibrium in supply functions for a symmetric oligopoly producing a homogeneous good.
Abstract: We model an oligopoly facing uncertain demand in which each firm chooses as its strategy a "supply function" relating its quantity to its price. Such a strategy allows a firm to adapt better to the uncertain environment than either setting a fixed price or setting a fixed quantity; commitment to a supply function may be accomplished in practice by the choice of the firm's size and structure, its culture and values, and the incentive systems and decision rules for its employees. In the absence of uncertainty, there exists an enormous multiplicity of equilibria in supply functions, but uncertainty, by forcing each firm's supply function to be optimal against a range of possible residual demand curves, dramatically reduces the set of equilibria. Under uncertainty, we prove the existence of a Nash equilibrium in supply functions for a symmetric oligopoly producing a homogeneous good and give sufficient conditions for uniqueness. We perform comparative statics with respect to firms' costs, the industry demand, the nature of the demand uncertainty, and the number of firms, and sketch the extension to differentiated products. Firms' equilibrium supply functions are steeper with marginal cost curves that are steeper relative to demand, fewer firms, more highly differentiated products, and demand uncertainty that is relatively greater at higher prices. The steeper are the supply functions firms choose in equilibrium, the more closely competition resembles the Cournot model (which exogenously imposes vertical supply functions, i.e., fixed quantities); with flatter equilibrium supply functions, competition is closer to the Bertrand model (which exogenously imposes horizontal supply functions, i.e., fixed prices).

1,394 citations


Journal ArticleDOI
TL;DR: In this article, the authors extended the literature on optimal economic growth to allow for optimizing choices of fertility and intergenerational transfers, and used the model to assess the effects of child-rearing costs, the tax system, the conditions of technology and preferences, and shocks to the initial levels of population and the capital stock.
Abstract: Altruistic parents make choices of family size along with decisions about consumption and intergenerational transfers. The authors apply this framework to a closed economy, where the determination of interest rates and wage rates is simultaneous with the determination of population growth and the accumulation of capital. Thus, they extend the literature on optimal economic growth to allow for optimizing choices of fertility and intergenerational transfers. The authors use the model to assess the effects of child-rearing costs, the tax system, the conditions of technology and preferences, and shocks to the initial levels of population and the capital stock. Copyright 1989 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this paper, the density-weighted average derivative of a general regression function is estimated using nonparametric kernel estimators of the density of the regressors, based on sample analogues of the product moment representation of the average derivative.
Abstract: This paper gives a solution to the problem of estimating coefficients of index models, through the estimation of the density-weighted average derivative of a general regression function. The estimators, based on sample analogues of the product moment representation of the average derivative, are constructed using nonparametric kernel estimators of the density of the regressors. Asymptotic normality is established using extensions of classical U-statistic theorems, and asymptotic bias is reduced through use of a higher-order kernel.
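In one dimension the estimator can be written down directly: average the response times a leave-one-out kernel estimate of the density derivative, and multiply by −2, using the product-moment identity δ = E[g'(x)f(x)] = −2E[y f'(x)]. A Python sketch with a Gaussian kernel (names and bandwidth choice ours; the paper's higher-order-kernel bias reduction is omitted):

```python
import math

def density_weighted_avg_derivative(x, y, h):
    """One-dimensional density-weighted average derivative estimate:
    -2/n * sum_i y_i * fhat'(x_i), where fhat' is the leave-one-out kernel
    density-derivative estimate with a Gaussian kernel and bandwidth h."""
    n = len(x)

    def kprime(u):  # derivative of the Gaussian kernel
        return -u * math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

    total = 0.0
    for i in range(n):
        fprime = sum(kprime((x[i] - x[j]) / h) for j in range(n) if j != i)
        fprime /= (n - 1) * h * h
        total += y[i] * fprime
    return -2.0 * total / n
```

For an index model y = g(xβ) + e, the estimate is proportional to β, which is why a density-weighted average derivative recovers index coefficients up to scale.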

Journal ArticleDOI
TL;DR: In this article, a real-valued function P is defined on the space of cooperative games with transferable utility, satisfying the following condition: in every game, the marginal contributions of all players (according to P) are efficient (i.e., add up to the worth of the grand coalition).
Abstract: Let P be a real-valued function defined on the space of cooperative games with transferable utility, satisfying the following condition: In every game, the marginal contributions of all players (according to P) are efficient (i.e., add up to the worth of the grand coalition). It is proved that there exists just one such function P, called the potential, and moreover that the resulting payoff vector coincides with the Shapley value. The potential approach yields other characterizations for the value; in particular, in terms of a new internal consistency property. Further results deal with weighted values and with the nontransferable utility case. Copyright 1989 by The Econometric Society.
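The efficiency condition pins the potential down by a simple recursion: P(∅) = 0 and |S|·P(S) = v(S) + Σ_{i∈S} P(S∖{i}), so that the marginal contributions P(S) − P(S∖{i}) sum to v(S). A Python sketch recovering Shapley values this way (function names ours):

```python
from functools import lru_cache

def shapley_via_potential(players, v):
    """Shapley values via the potential function.
    v: characteristic function frozenset -> worth (v of the empty set is 0).
    Each player's payoff is the marginal contribution P(N) - P(N - {i})."""
    @lru_cache(maxsize=None)
    def P(S):
        if not S:
            return 0.0
        # |S| * P(S) - sum_i P(S - {i}) = v(S), rearranged:
        return (v(S) + sum(P(S - {i}) for i in S)) / len(S)

    N = frozenset(players)
    return {i: P(N) - P(N - {i}) for i in N}
```

The returned payoffs add up to v(N), which is exactly the efficiency property that characterizes the potential.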

Journal ArticleDOI
TL;DR: In this article, a lifetime utility model, in which the date of death is uncertain and in which bequests give utility, is analyzed and estimated. The parameter estimates imply that most bequests are accidental, the result of mortality risk, and that the shape of the desired consumption path is sensitive to variations in mortality rates.
Abstract: A lifetime utility model, in which the date of death is uncertain and in which bequests give utility, is analyzed and estimated. The parameter estimates imply that most bequests are accidental, the result of mortality risk, and that the shape of the desired consumption path is sensitive to variations in mortality rates. On average, the elderly in the sample dissave, which is consistent with a life-cycle model in which utility does not depend on bequests. Copyright 1989 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the enforceability of employment contracts when employees' performance cannot be verified in court so that piece-rate contracts are not legally enforceable, and show that many such equilibria exist.
Abstract: This paper considers the enforceability of employment contracts when employees' performance cannot be verified in court so that piece-rate contracts are not legally enforceable. Part I shows that there exists a variety of self-enforcing implicit contracts, modelled as perfect equilibria in a repeated game, and characterizes all the wage and performance outcomes that can be implemented. Implementation requires a strictly positive surplus from employment, the form of the contract depending on how this surplus is divided between firm and employee. Piece-rate contracts, and contracts with an informally agreed bonus, can be made self-enforcing but the use of severance pay and bonding does not extend the set of implementable allocations. The resulting contracts resemble actual labor contracts more than do the contracts in standard principal-agent models. Part II analyses market equilibrium with these contracts, also modelled as perfect equilibria in a repeated game, and shows that many such equilibria exist. Unfilled vacancies and unemployed workers can co-exist despite the existence of contracts that are potentially mutually beneficial. For those jobs that are filled, any division of the potential surplus is possible so that the market can have, at the same time, involuntary unemployment and vacancies that are unfilled despite filled jobs earning positive profits. As a criterion for selecting equilibria, a notion of renegotiation proofness is applied. Then either all workers are employed or all jobs filled but any division of the potential surplus is still possible. The paper explores what further restrictions on beliefs give rise to a Walrasian outcome, in which all the potential surplus goes to the short side of the market, and to an efficiency wage type outcome, in which the potential surplus goes to the long side.

ReportDOI
TL;DR: In this article, the authors present an empirical analysis of individual earnings and hours data from three different longitudinal surveys, and find that this structure is very similar across data sets, and may be adequately summarized by a simple components-of-variance model.
Abstract: This paper presents an empirical analysis of individual earnings and hours data from three different longitudinal surveys. In the first part of the paper we catalog the main features of the covariance structure of earnings and hours changes. We find that this structure is very similar across data sets, and may be adequately summarized by a simple components-of-variance model. In the second part of the paper we offer an interpretation of this model in terms of a simple life-cycle labor supply model.

Journal ArticleDOI
TL;DR: The authors investigated the extent to which specification error can explain the rejections of overidentifying restrictions of the intertemporal capital asset pricing model when tested using data on consumption growth and asset returns, particularly when additively separable, constant relative risk utility is attributed to the representative agent.
Abstract: The overidentifying restrictions of the intertemporal capital asset pricing model are usually rejected when tested using data on consumption growth and asset returns, particularly when additively separable, constant relative risk utility is attributed to the representative agent. This article investigates the extent to which specification error can explain these rejections. The empirical strategy is limited information maximum likelihood in conjunction with seminonparametric (expanding parameter space) representations for both the law of motion and utility. We find that consumption growth and asset returns display conditional heterogeneity, but this fact does not account for rejection of the overidentifying restrictions as might be anticipated from the work of Hansen, Singleton, and others using generalized method of moments methods. We also find that expansion of the parameter space in the direction of nonseparable utility causes the overidentifying restrictions to be accepted. Our estimation strategy provides information on the manner in which the restrictions distort the law of motion. In particular, imposition of additively separable, constant relative risk aversion utility causes the conditional variance of consumption growth to be overpredicted, the conditional covariance of asset returns with consumption growth to be overpredicted, and an equity premium to be implied. Imposition of nonseparable seminonparametric utility causes distortion in these same directions, though the distortions are much smaller, which is consistent with the outcomes of the tests of the restrictions.

Journal ArticleDOI
TL;DR: In this article, a single long-run player plays a simultaneous-move stage game against a sequence of opponents who play only once, but observe all previous play, and the payoff in any Nash equilibrium exceeds a bound that converges to the Stackelberg payoff as his discount factor approaches one.
Abstract: A single long-run player plays a simultaneous-move stage game against a sequence of opponents who play only once, but observe all previous play. Let the "Stackelberg strategy" be the pure strategy to which the long-run player would most like to commit himself. If there is positive prior probability that the long-run player will always play the Stackelberg strategy, then his payoff in any Nash equilibrium exceeds a bound that converges to the Stackelberg payoff as his discount factor approaches one. When the stage game is not simultaneous move, this result must be modified to account for the possibility that distinct strategies of the long-run player are observationally equivalent.

Journal ArticleDOI
TL;DR: In this paper, the authors compare the equilibrium behavior and outcomes in a model of two-party competition for legislative seats, under two different assumptions about the parties' goals: 1) parties maximize the expected number of seats won; and 2) parties maximize the probability of winning a majority of the seats.
Abstract: This paper compares the equilibrium behavior and outcomes in a model of two-party competition for legislative seats, under two different assumptions about the parties' goals: 1) parties maximize the expected number of seats won; and 2) parties maximize the probability of winning a majority of the seats. The two goals may lead to qualitatively different behavior, and studying the differences yields insights into the relationship between the goals, and the role of asymmetries between the parties.

Journal ArticleDOI
TL;DR: In this article, a transferable utility economy in which each agent holds a resource which can be used in combination with the resources of other agents to generate value is studied using a dynamic model of bargaining.
Abstract: A transferable utility economy in which each agent holds a resource which can be used in combination with the resources of other agents to generate value (according to the characteristic function V) is studied using a dynamic model of bargaining. The main theorem establishes that the payoffs associated with efficient equilibria converge to the agents' Shapley values as the time between periods of the dynamic game goes to zero. In addition it is demonstrated that an efficient equilibrium exists and is unique when an additivity condition is satisfied. To demonstrate the sensitivity of the solution to the institutional detail we modify the model to allow for partnerships and show that the Shapley value is no longer achieved.

Journal ArticleDOI
TL;DR: The authors analyzes durable goods monopoly in an infinite-horizon, discrete-time game and proves that, as the time interval between successive offers approaches zero, all seller payoffs between zero and static monopoly profits are supported by subgame perfect equilibria.
Abstract: This paper analyzes durable goods monopoly in an infinite-horizon, discrete-time game. We prove that, as the time interval between successive offers approaches zero, all seller payoffs between zero and static monopoly profits are supported by subgame perfect equilibria. This reverses a well-known conjecture of Coase. Alternatively, one can interpret the model as a sequential bargaining game with one-sided incomplete information in which an uninformed seller makes all the offers. Our folk theorem for seller payoffs equally applies to the set of sequential equilibria of this bargaining game.

Journal ArticleDOI
TL;DR: In this paper, the authors define a solution concept for transferable utility cooperative games in characteristic function form, in a framework where individuals believe in equality as a desirable social goal, although private preferences dictate selfish behavior.
Abstract: We define a new solution concept for transferable utility cooperative games in characteristic function form, in a framework where individuals believe in equality as a desirable social goal, although private preferences dictate selfish behavior. This latter aspect implies that the solution outcome(s) must satisfy core-like participation constraints, while the concern for equality entails choice of Lorenz maximal elements from within the set of payoffs satisfying the participation constraints.

Journal ArticleDOI
TL;DR: This paper showed that the difference between the AM estimator and the HT estimator lies in the treatment of the time-varying explanatory variables which are uncorrelated with the effects.
Abstract: In an important recent paper, Hausman and Taylor (1981), hereafter HT, considered the instrumental-variable estimation of a regression model using panel data, when the individual effects may be correlated with a subset of the explanatory variables. They provided a simple consistent estimator and an efficient estimator. More recently, Amemiya and MaCurdy (1986), hereafter AM, have suggested an alternative estimator which is more efficient than the HT estimator, under certain conditions and given stronger assumptions than HT made. However, the relationship between the HT and AM papers is less clear than it might be, in part because of notational differences between the two papers. In this paper we clarify the relationship between the HT and AM estimators, and we show that the difference between these estimators lies in the treatment of the time-varying explanatory variables which are uncorrelated with the effects: HT use each such variable as two instruments (means and deviations from means), while AM use such variables as T + 1 instruments (as deviations from means and also separately for each of the T available time periods). This enables us to make clear the conditions under which the AM estimator is more efficient than the HT estimator. We also present each estimator in a form which allows it to be calculated using standard instrumental-variables (two-stage least squares) software. Following the AM path one step further, we then define a third (BMS) estimator which, under yet stronger assumptions, is more efficient than the AM estimator. Both HT and AM use as instruments the deviations from means of the time-varying variables which are correlated with the effects. A more efficient estimator may be obtained by using separately the (T - 1) linearly independent values of these deviations from individual means.
Consistency requires that these be legitimate instruments, and whether this is so depends on why these time-varying variables are correlated with the effects. For example, if such correlation arises solely because of a time-invariant component which is removed in taking deviations from individual means, these instruments are legitimate.

Journal ArticleDOI
TL;DR: In this article, the authors provide conditions on the primitives of a continuous-time economy under which there exist equilibria obeying the Consumption-Based Capital Asset Pricing Model (CCAPM).
Abstract: The paper provides conditions on the primitives of a continuous-time economy under which there exist equilibria obeying the Consumption-Based Capital Asset Pricing Model (CCAPM). The paper also extends the equilibrium characterization of interest rates of Cox, Ingersoll, and Ross (1985) to multi-agent economies. We do not use a Markovian state assumption. This work provides sufficient conditions on agents' primitives for the validity of the Consumption-Based Capital Asset Pricing Model (CCAPM) of Breeden (1979). As a necessary condition, Breeden showed that in a continuous-time equilibrium satisfying certain regularity conditions, one can characterize returns on securities as follows. The expected "instantaneous" rate of return on any security in excess of the riskless interest rate (the security's expected excess rate of return) is a multiple common to all securities of the "instantaneous covariance" of this excess return with aggregate consumption increments. This common multiple is the Arrow-Pratt measure of risk aversion of a representative agent. (Rubinstein (1976) published a discrete-time precursor of this result.) The existence of equilibria satisfying Breeden's regularity conditions had been an open issue. We also show that the validity of the CCAPM does not depend on Breeden's assumption of Markov state information, and present a general asset pricing model extending the results of Cox, Ingersoll, and Ross (1985) as well as the discrete-time results of Rubinstein (1976) and Lucas (1978) to a multi-agent environment. Since the CCAPM was first proposed, much effort has been directed at finding sufficient conditions on the model primitives: the given assets, the agents' preferences, the agents' consumption endowments, and (in a production economy) the feasible production sets.
Conditions sufficient for the existence of continuous-time equilibria were shown in Duffie (1986), but the equilibria demonstrated were not shown to satisfy the additional regularity required for the CCAPM. In particular, Breeden assumed that all agents choose pointwise interior consumption rates, in order to characterize asset prices via the first order conditions of the Bellman equation. Interiority was also assumed by Huang (1987) in demonstrating a representative agent characterization of equilibrium, an approach exploited here.

Journal ArticleDOI
TL;DR: In this article, the consumption function is represented as a fixed point of a nonlinear monotone operator that is defined such that its nth iteration computes the consumption functions n steps away from the horizon for a corresponding finite-horizon economy.
Abstract: This paper develops a method to study an infinite-horizon production economy distorted by a state-dependent income tax. This setting permits taxes to depend on the capital stock, an endogenous state variable, and thus captures situations where the evolution for capital and taxes is jointly determined. To solve this model, the consumption function is represented as a fixed point of a nonlinear monotone operator that is defined such that its nth iteration computes the consumption function n steps away from the horizon for a corresponding finite-horizon economy. The operator's monotonicity is exploited in proving the existence of, and constructing, an equilibrium. Copyright 1991 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this article, a simple theoretical framework that can be used to organize the debate about the question of whether such a model is identified and about how much predictive content it retains is presented.
Abstract: Economic models with multiple equilibria are now so common in economics that we need a general framework for statistical inference in such models. This note offers a simple theoretical framework that can be used to organize the debate about the question of whether such a model is identified, and about the question of how much predictive content it retains.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a framework for continuous-time games, viewed as discrete time with an infinitely fine grid, and show that for any sufficiently fine grid there exists an ε-subgame-perfect equilibrium of the corresponding game played on that grid which is "within ε of" the continuous-time equilibrium.
Abstract: We propose a new framework for games in continuous time that conforms as closely as possible to the conventional discrete-time framework. In this paper, we take the view that continuous time can be viewed as "discrete time, but with a grid that is infinitely fine." Specifically, we define a class of continuous-time strategies with the following property: when restricted to an arbitrary, increasingly fine sequence of discrete-time grids, any profile of strategies drawn from this class generates a convergent sequence of outcomes, whose limit is independent of the sequence of grids. We then define the continuous-time outcome to be this limit. Because our continuous-time model conforms so closely to the conventional discrete-time model, we can readily compare the predictions of the two frameworks. Specifically, we ask two questions. First, is discrete time with a very fine grid a good proxy for continuous time? Second, does every subgame perfect equilibrium in our model have a discrete-time analog? Our answer to the first question is the following "upper hemi-continuity" result: Suppose a sequence of discrete-time ε-subgame-perfect equilibria increasingly closely approximates (in a special sense) a given continuous-time profile, with ε converging to zero as the period length shrinks. Then the continuous-time profile will be an exact equilibrium for the corresponding continuous-time game. Our second answer is a lower hemi-continuity result that holds under weak conditions. Fix a perfect equilibrium for a continuous-time game and a positive ε. Then for any sufficiently fine grid, there will exist an ε-subgame-perfect equilibrium for the corresponding game played on that grid which "is within ε of" the continuous-time equilibrium. Our model yields sharp predictions in a variety of industrial organization applications. We first consider several variants of a familiar preemption model. Next, we analyze a stylized model of a patent race.
Finally, we obtain a striking uniqueness result for a class of "repeated" games.

Journal ArticleDOI
TL;DR: In this paper, a new test is proposed of the hypothesis of "perfect aggregation" which tests the validity of aggregation either through coefficient equality or through the stability over time of the composition of the regressors across the micro units.
Abstract: This paper deals with the problem of aggregation where the focus of the analysis is whether to predict aggregate variables using macro or micro equations. The Grunfeld-Griliches prediction criterion for choosing between aggregate and disaggregate equations is generalized to allow for contemporaneous covariances between the disturbances of micro equations and the possibility of different parametric restrictions on the equations of the disaggregate model. A new test is proposed of the hypothesis of 'perfect aggregation' which tests the validity of aggregation either through coefficient equality or through the stability over time of the composition of the regressors across the micro units. The tools developed in the paper are then applied to employment demand functions for the UK economy disaggregated by 40 industries. First, a set of unrestricted log-linear dynamic specifications are estimated for the disaggregate equations, and then linear parameter restrictions are imposed as appropriate. Corresponding unrestricted and restricted aggregate equations are estimated. Two different levels of aggregation are considered: aggregation over the 23 manufacturing industries and aggregation over all 40 industries of the economy. In both cases the hypothesis of perfect aggregation is firmly rejected. For the manufacturing industries the prediction criterion marginally favors the aggregate equation, but over all industries the disaggregated equations are strongly preferred.
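The prediction criterion underlying this comparison can be sketched with simulated data. This is not the paper's test statistic, only a minimal illustration of the Grunfeld-Griliches idea: when micro coefficients are heterogeneous, predicting the aggregate by summing fitted micro equations typically beats a single macro regression. All data, coefficients, and the no-intercept OLS setup are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
T_obs, n_units = 120, 5                 # time periods, micro units (illustrative)

# Heterogeneous micro slopes: 'perfect aggregation' fails by construction.
beta = rng.uniform(0.5, 1.5, n_units)
x = rng.normal(size=(T_obs, n_units))
y = x * beta + 0.1 * rng.normal(size=(T_obs, n_units))

X_agg, Y_agg = x.sum(axis=1), y.sum(axis=1)

# Disaggregate route: one no-intercept OLS slope per unit, sum the fitted values.
b_micro = (x * y).sum(axis=0) / (x * x).sum(axis=0)
pred_micro = (x * b_micro).sum(axis=1)

# Aggregate route: a single no-intercept OLS slope on the summed series.
b_macro = (X_agg * Y_agg).sum() / (X_agg * X_agg).sum()
pred_macro = b_macro * X_agg

# Grunfeld-Griliches-style criterion: compare aggregate prediction errors.
sse_micro = ((Y_agg - pred_micro) ** 2).sum()
sse_macro = ((Y_agg - pred_macro) ** 2).sum()
print(sse_micro < sse_macro)            # disaggregation typically wins here
```

The abstract's empirical finding for UK manufacturing (aggregation marginally preferred despite rejected coefficient equality) shows the criterion need not go this way: with near-homogeneous slopes or stable regressor composition, the macro equation's parsimony can dominate.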

Journal ArticleDOI
TL;DR: In this paper, the value of one in-kind transfer, food stamps, is estimated by evaluating the experience of an actual conversion from stamps to cash in Puerto Rico in 1982.
Abstract: The value of one in-kind transfer, food stamps, is estimated by evaluating the experience of an actual conversion from stamps to cash in Puerto Rico in 1982. The evidence indicates that the cashout of the stamps had no detectable influence on food expenditures. The explanation partly lies in the distribution of expenditures, for the stamps were inframarginal for most recipients. However, some evidence indicates that trafficking in stamps was widespread as well, including indirect evidence from estimation of the piecewise-linear constraint model. Copyright 1989 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this paper, the authors present an analysis of the structure of competitive equilibrium in a model whose principal feature is incomplete financial markets formulated in the spirit of Arrow, where the overall payoffs or returns from financial instruments are assumed to be fixed or predetermined, independently of the operation of the economy.
Abstract: This paper presents an analysis of the structure of competitive equilibrium in a model whose principal feature is incomplete financial markets formulated in the spirit of Arrow. Specifically, the overall payoffs or returns from financial instruments are assumed to be fixed or predetermined, independently of the operation of the economy, and these instruments are assumed to be fewer in number than required to span all potential spot markets for commodities. Our main result establishes that market incompleteness generates a corresponding degree of allocation indeterminacy: Suppose there are N + 1 spot markets, but only 0 < M < N (linearly independent) financial instruments, so that the deficiency in financial markets is 0 < n = N - M < N. Then, subject to some relatively innocuous technical qualifications, the set of equilibrium allocations contains a smooth, n-dimensional submanifold. We also indicate how real indeterminacy may increase when returns (for example, the market price and promised interest on an ordinary bond) are treated as variables rather than parameters.
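The dimension count in the main result can be made concrete with a small numerical sketch. The payoff matrix below is an arbitrary illustration, not from the paper: rows are (date-1) states, columns are financial instruments, and the number of linearly independent instruments is the rank of the matrix.

```python
import numpy as np

# Illustrative counting of the indeterminacy dimension n = N - M.
N = 5                                 # N + 1 = 6 spot markets (assumed setup)
R = np.array([[1, 0],                 # payoff matrix: one row per state,
              [1, 1],                 # one column per financial instrument
              [0, 1],                 # (values are arbitrary for illustration)
              [2, 1],
              [1, 2]])

M = np.linalg.matrix_rank(R)          # linearly independent instruments
n = N - M                             # dimension of the equilibrium submanifold
assert 0 < M < N and 0 < n < N        # the incompleteness condition in the abstract
print(f"M = {M}, deficiency n = N - M = {n}")
```

With complete markets the rank would equal N, the deficiency n would be zero, and the indeterminacy would vanish, which is why the submanifold's dimension tracks exactly how far the instruments fall short of spanning.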