
Showing papers in "Econometrica in 1991"


Journal ArticleDOI
TL;DR: In this article, an exponential ARCH model, which improves on the widely-used GARCH model, is proposed and applied to study volatility changes and the risk premium on the CRSP Value-Weighted Market Index from 1962 to 1987.
Abstract: This paper introduces an ARCH model (exponential ARCH) that (1) allows correlation between returns and volatility innovations (an important feature of stock market volatility changes), (2) eliminates the need for inequality constraints on parameters, and (3) allows for a straightforward interpretation of the "persistence" of shocks to volatility. In the above respects, it is an improvement over the widely-used GARCH model. The model is applied to study volatility changes and the risk premium on the CRSP Value-Weighted Market Index from 1962 to 1987. Copyright 1991 by The Econometric Society.
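To make the mechanics concrete, here is a minimal simulation sketch of an exponential ARCH recursion of the general form described above, in which the log of conditional variance responds to both the magnitude and the sign of lagged standardized shocks; the parameter values and the bare-bones specification are assumptions for illustration, not the paper's estimated model.

```python
import numpy as np

# Illustrative parameters (assumed for the sketch, not Nelson's estimates):
# omega = level, beta = persistence, alpha = magnitude effect, gamma = sign (leverage) effect
omega, beta, alpha, gamma = -0.10, 0.95, 0.20, -0.10
T = 1000
rng = np.random.default_rng(0)

log_var = np.empty(T)
returns = np.empty(T)
log_var[0] = omega / (1.0 - beta)            # start at the unconditional mean of log variance

for t in range(T):
    z = rng.standard_normal()                # standardized shock
    returns[t] = np.exp(0.5 * log_var[t]) * z
    if t + 1 < T:
        # log variance responds to |z| (magnitude) and to z (sign), so negative
        # shocks can raise future volatility more than positive ones
        log_var[t + 1] = (omega + beta * log_var[t]
                          + alpha * (abs(z) - np.sqrt(2.0 / np.pi))
                          + gamma * z)
```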

10,019 citations


Journal ArticleDOI
TL;DR: In this article, the author develops the likelihood analysis of vector autoregressive models allowing for cointegration, shows that the maximum likelihood estimator of the cointegrating relations can be found by reduced rank regression, and derives likelihood ratio tests of the cointegrating rank and of structural hypotheses about these relations.
Abstract: This paper contains the likelihood analysis of vector autoregressive models allowing for cointegration. The author derives the likelihood ratio test for cointegrating rank and finds its asymptotic distribution. He shows that the maximum likelihood estimator of the cointegrating relations can be found by reduced rank regression and derives the likelihood ratio test of structural hypotheses about these relations. The author shows that the asymptotic distribution of the maximum likelihood estimator is mixed Gaussian, allowing inference for hypotheses on the cointegrating relations to be conducted using the chi-squared distribution. Copyright 1991 by The Econometric Society.
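A rough sense of the reduced rank regression step can be given in a few lines. The sketch below treats the simplest possible case, a VAR(1) with no lagged differences and no deterministic terms, where the eigenvalues of a product-moment matrix deliver both the estimated cointegrating space and trace-type likelihood ratio statistics; the simplifications and variable names are assumptions of the example, not the paper's general model.

```python
import numpy as np

def johansen_sketch(y):
    """Eigenvalue step behind the cointegrating rank test for the stripped-down
    model Dy_t = Pi @ y_{t-1} + e_t (no lagged differences, no constants).
    Returns eigenvalues and trace-type statistics; illustration only."""
    y = np.asarray(y, dtype=float)          # T x n matrix of levels
    r0 = np.diff(y, axis=0)                 # Dy_t
    r1 = y[:-1]                             # y_{t-1}
    T = r0.shape[0]
    s00 = r0.T @ r0 / T
    s11 = r1.T @ r1 / T
    s01 = r0.T @ r1 / T
    # eigenvalues solving |lam * S11 - S10 S00^{-1} S01| = 0
    m = np.linalg.inv(s11) @ s01.T @ np.linalg.inv(s00) @ s01
    lam = np.sort(np.real(np.linalg.eigvals(m)))[::-1]
    trace = [-T * np.sum(np.log(1.0 - lam[r:])) for r in range(len(lam))]
    return lam, trace
```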

9,112 citations


Journal ArticleDOI
TL;DR: Asymptotically optimal kernel/weighting schemes and bandwidth/lag truncation parameters are obtained, and these results are used to introduce data-dependent automatic bandwidth/lag truncation parameters.
Abstract: This paper is concerned with the estimation of covariance matrices in the presence of heteroskedasticity and autocorrelation of unknown forms. Currently available estimators that are designed for this context depend upon the choice of a lag truncation parameter and a weighting scheme. Results in the literature provide a condition on the growth rate of the lag truncation parameter as T → ∞ that is sufficient for consistency. No results are available, however, regarding the choice of lag truncation parameter for a fixed sample size, regarding data-dependent automatic lag truncation parameters, or regarding the choice of weighting scheme. In consequence, available estimators are not entirely operational and the relative merits of the estimators are unknown. This paper addresses these problems. The asymptotic truncated mean squared errors of estimators in a given class are determined and compared. Asymptotically optimal kernel/weighting scheme and bandwidth/lag truncation parameters are obtained using an asymptotic truncated mean squared error criterion. Using these results, data-dependent automatic bandwidth/lag truncation parameters are introduced. The finite sample properties of the estimators are analyzed via Monte Carlo simulation.
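As a concrete illustration, the sketch below computes a long-run variance for a scalar, mean-zero series using Bartlett weights with an AR(1) plug-in bandwidth of the data-dependent kind discussed above; the univariate simplification and the specific plug-in constants are assumptions of the example rather than the paper's full multivariate treatment.

```python
import numpy as np

def bartlett_long_run_variance(u):
    """Long-run variance of a mean-zero scalar series with a Bartlett kernel and
    an AR(1) plug-in bandwidth (a sketch of the 'automatic bandwidth' idea)."""
    u = np.asarray(u, dtype=float)
    T = len(u)
    rho = np.sum(u[1:] * u[:-1]) / np.sum(u[:-1] ** 2)     # fitted AR(1) coefficient
    alpha = 4.0 * rho**2 / ((1.0 - rho) ** 2 * (1.0 + rho) ** 2)
    bandwidth = 1.1447 * (alpha * T) ** (1.0 / 3.0)        # plug-in rule for the Bartlett kernel
    lrv = np.sum(u * u) / T                                # lag-0 autocovariance
    for j in range(1, int(bandwidth) + 1):
        weight = 1.0 - j / bandwidth                       # Bartlett (triangular) weight
        lrv += 2.0 * weight * np.sum(u[j:] * u[:-j]) / T
    return lrv
```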

4,219 citations


ReportDOI
TL;DR: In this paper, a test for long-term memory that is robust to short-range dependence is developed, which is a modification of the R/S statistic, and the relevant asymptotic sampling theory is derived via functional central limit theory.
Abstract: A test for long-term memory that is robust to short-range dependence is developed. It is a modification of the R/S statistic, and the relevant asymptotic sampling theory is derived via functional central limit theory. Contrary to previous findings, when applied to daily and monthly stock returns indexes over several time periods this test yields no evidence of long-range dependence once short-range dependence is accounted for. Monte Carlo experiments show that the modified R/S test has power against at least two specific models of long-term memory, suggesting that models with short-range dependence may adequately capture the behavior of historical stock returns. Copyright 1991 by The Econometric Society.
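The modification is straightforward to sketch: the range of partial sums of deviations from the mean is scaled by a long-run standard deviation that adds Bartlett-weighted autocovariances up to some lag q, so short-range dependence is absorbed into the denominator. The code below assumes a scalar series and a user-supplied q; it is an illustration of the idea, not the paper's exact statistic or its data-based lag choice.

```python
import numpy as np

def modified_rescaled_range(x, q):
    """R/S-type statistic with a lag-q, Bartlett-weighted long-run variance in the
    denominator; returns the statistic normalized by sqrt(T)."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    d = x - x.mean()
    partial_sums = np.cumsum(d)
    data_range = partial_sums.max() - partial_sums.min()   # range of partial sums
    variance = np.sum(d * d) / T
    for j in range(1, q + 1):
        weight = 1.0 - j / (q + 1.0)                       # Bartlett weights
        variance += 2.0 * weight * np.sum(d[j:] * d[:-j]) / T
    return data_range / (np.sqrt(variance) * np.sqrt(T))
```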

1,575 citations


ReportDOI
TL;DR: In this article, the authors discuss the theory of saving when consumers are not permitted to borrow, and the ability of such a theory to account for some of the stylized facts of saving behavior.
Abstract: This paper is concerned with the theory of saving when consumers are not permitted to borrow, and with the ability of such a theory to account for some of the stylized facts of saving behavior. The models presented in the paper seem to account for important aspects of reality that are not explained by traditional life-cycle models. Copyright 1991 by The Econometric Society.

1,446 citations


Journal ArticleDOI
TL;DR: In this paper, an axiomatic model of preferences over lotteries is developed, which is consistent with the Allais paradox, includes expected utility theory as a special case, and is only one parameter ("beta") richer than the expected utility model.
Abstract: An axiomatic model of preferences over lotteries is developed. It is shown that this model is consistent with the Allais paradox, includes expected utility theory as a special case, and is only one parameter ("beta") richer than the expected utility model. Allais paradox type behavior is identified with positive values of "beta." Preferences with positive "beta" are said to be disappointment averse. It is shown that risk aversion implies disappointment aversion and that the Arrow-Pratt measures of risk aversion can be generalized in a straightforward manner to the current framework. Copyright 1991 by The Econometric Society.

1,247 citations


Journal ArticleDOI
TL;DR: In this paper, evolutionary games are introduced as models for repeated anonymous strategic interaction: actions (or behaviors) which are more "fit" given the current distribution of behaviors, tend over time to displace less fit behaviors.
Abstract: Evolutionary games are introduced as models for repeated anonymous strategic interaction: actions (or behaviors) which are more "fit," given the current distribution of behaviors, tend over time to displace less fit behaviors. Cone fields characterize the continuous-time processes compatible with a given fitness (or payoff) function. For large classes of dynamics, it is shown that all stable steady states are Nash equilibria and that all Nash equilibria are steady states. The biologists' evolutionarily stable strategy condition is shown to be less closely related to the dynamic equilibria. Economic examples and a literature survey are also provided. Copyright 1991 by The Econometric Society.
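One familiar member of the class of payoff-monotone dynamics in question is the replicator dynamic, under which a behavior's population share grows exactly when its payoff exceeds the population average; the sketch below simulates it for an arbitrary payoff matrix and is meant only to illustrate the flavor of such dynamics, not the paper's more general cone-field formulation.

```python
import numpy as np

def replicator_path(payoff, x0, dt=0.01, steps=5000):
    """Euler simulation of the replicator dynamic dx_i/dt = x_i * (f_i - f_bar),
    where f = payoff @ x and f_bar = x @ f. Steady states of this dynamic include
    all Nash equilibria of the underlying game."""
    payoff = np.asarray(payoff, dtype=float)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        f = payoff @ x              # fitness of each behavior against the population
        f_bar = x @ f               # average fitness
        x = x + dt * x * (f - f_bar)
        x = np.clip(x, 0.0, None)
        x = x / x.sum()             # keep the state on the simplex
    return x

# Example: a 2x2 coordination game converging to one of its pure equilibria.
print(replicator_path([[2.0, 0.0], [0.0, 1.0]], [0.4, 0.6]))
```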

1,244 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that full system maximum likelihood brings the problem of inference within the family covered by the locally asymptotically mixed normal (LAMN) asymptotic theory, provided all unit roots have been eliminated.
Abstract: Properties of maximum likelihood estimates of cointegrated systems are studied. Alternative formulations are considered, including a new triangular system error correction mechanism. We demonstrate that full system maximum likelihood brings the problem of inference within the family covered by the locally asymptotically mixed normal asymptotic theory, provided all unit roots have been eliminated by specification and data transformation. Methodological issues provide a major focus of the paper. Our results favor use of full system estimation in error correction mechanisms or subsystem methods that are asymptotically equivalent. They also point to disadvantages in the use of unrestricted VAR's formulated in levels and of certain single equation approaches to estimation of error correction mechanisms. Copyright 1991 by The Econometric Society.
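For concreteness, the triangular system referred to above can be written, in a two-block notation that is assumed here purely for illustration, as:

```latex
% Triangular representation of a cointegrated system (notation assumed for
% illustration): y_{1t} (n_1 x 1) and y_{2t} (n_2 x 1) partition the data, B holds
% the cointegrating coefficients, and u_t = (u_{1t}', u_{2t}')' is stationary.
\begin{aligned}
  y_{1t} &= B\,y_{2t} + u_{1t}, \\
  \Delta y_{2t} &= u_{2t}.
\end{aligned}
```

Maximum likelihood applied to the full system, with the unit roots confined to the second block, is what places inference about B within the mixed normal family described in the abstract.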

1,031 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed a discrete state space solution method for a class of nonlinear rational expectations models by using numerical quadrature rules to approximate the integral operators that arise in stochastic intertemporal models.
Abstract: The paper develops a discrete state space solution method for a class of nonlinear rational expectations models. The method works by using numerical quadrature rules to approximate the integral operators that arise in stochastic intertemporal models. The method is particularly useful for approximating asset pricing models and has potential applications in other problems as well. An empirical application uses the method to study the relationship between the risk premium and the conditional variability of the equity return under an ARCH endowment process.

Nonlinear dynamic rational expectations models rarely admit explicit solutions. Techniques like the method of undetermined coefficients or forward-looking expansions, which often work well for linear models, rarely provide explicit solutions for nonlinear models. The lack of explicit solutions complicates the tasks of analyzing the dynamic properties of such models and generating simulated realizations for applied policy work and other purposes. This paper develops a discrete state-space approximation method for a specific class of nonlinear rational expectations models. The class of models is distinguished by two features: first, the solution functions for the endogenous variables are functions of at most a finite number of lags of an exogenous stationary state vector; second, the expectational equations of the model take the form of integral equations, or more precisely, Fredholm equations of the second type.

The key component of the method is a technique, based on numerical quadrature, for forming a discrete approximation to a general time series conditional density. More specifically, the technique provides a means for calibrating a Markov chain, with a discrete state space, whose probability distribution closely approximates the distribution of a given time series. The quality of the approximation can be expected to get better as the discrete state space is made finer. The term "discrete" is used here in reference to the range space of the random variables and not to the time index; time is always discrete in our analysis. The discretization technique is primarily useful for taking a discrete approximation to the conditional density of the strictly exogenous variables of a model.
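A small sketch helps fix ideas. Below, a Gaussian AR(1) exogenous process is discretized into an n-state Markov chain whose nodes are Gauss-Hermite quadrature points and whose transition probabilities are quadrature weights times a ratio of conditional to weighting densities, renormalized by row; this is one standard way to implement a quadrature-based discretization of a conditional normal density, and the particular normalization choices are assumptions of the example rather than a transcription of the paper.

```python
import numpy as np
from scipy.stats import norm

def quadrature_markov_chain(rho, sigma, n):
    """Discretize y' = rho * y + eps, eps ~ N(0, sigma^2), into an n-state chain
    whose nodes are Gauss-Hermite quadrature points and whose rows approximate
    the conditional density by quadrature. Returns (nodes, transition matrix)."""
    z, w = np.polynomial.hermite.hermgauss(n)      # nodes/weights for weight exp(-x^2)
    nodes = np.sqrt(2.0) * sigma * z               # quadrature nodes for a N(0, sigma^2) density
    weights = w / np.sqrt(np.pi)                   # normalized quadrature weights
    P = np.empty((n, n))
    for i in range(n):
        conditional = norm.pdf(nodes, loc=rho * nodes[i], scale=sigma)
        weighting = norm.pdf(nodes, loc=0.0, scale=sigma)
        P[i] = weights * conditional / weighting
        P[i] /= P[i].sum()                         # normalize so each row is a distribution
    return nodes, P
```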

955 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present an approach to the theory of imperfect competition and apply it to study price competition among differentiated products, including products with multi-dimensional attributes.
Abstract: We present a new approach to the theory of imperfect competition and apply it to study price competition among differentiated products. The central result provides general conditions under which there exists a pure-strategy price equilibrium for any number of firms producing any set of products. This includes products with multi-dimensional attributes. In addition to the proof of existence, we provide conditions for uniqueness. Our analysis covers location models, the characteristics approach, and probabilistic choice together in a unified framework. To prove existence, we employ aggregation theorems due to Prekopa (1971) and Borell (1975). Our companion paper (Caplin and Nalebuff (1991)) introduces these theorems and develops the application to super-majority voting rules.

We present a new approach to the theory of imperfect competition and apply it to study price competition among differentiated products. The central result is that there exists a pure-strategy price equilibrium for any number of firms producing any set of products. In addition to the proof of existence, we provide conditions for uniqueness. Our model both unites diverse strands of the earlier literature and opens up uncharted areas for future analysis. In particular, we expand the traditional one-dimensional framework to allow for multi-dimensional product differentiation. Our approach involves twin restrictions on consumer preferences: one on individuals' preferences, the other on the distribution of preferences across society. These are generalizations of the restrictions supporting 64%-majority rule presented in Caplin and Nalebuff (1988).

To prove existence, we apply a new technique of aggregation. This technique is valuable in a variety of other problems. In the companion paper, we use the aggregation result to generalize our earlier work on 64%-majority rule and to characterize the relationship between the distribution of human capital and the distribution of income (Caplin and Nalebuff (1991)). There are additional applications in statistics and in search theory. We begin with a brief review of the early literature on imperfect competition, describing in more detail the existence problem and previous solutions. Section 3 presents our twin assumptions and shows that they cover many standard cases. In Section 4, we introduce the aggregation theorem and use it in the analysis of demand functions.
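The aggregation theorem invoked here is, in its most commonly quoted form, a statement that marginals of log-concave functions are log-concave; that textbook version is stated below for orientation (the paper's own conditions are broader).

```latex
% Prekopa's theorem in its commonly quoted form (stated here for orientation):
% if f : \mathbb{R}^{m+n} \to \mathbb{R}_{+} is log-concave, then so is its marginal on \mathbb{R}^{m}.
g(x) \;=\; \int_{\mathbb{R}^{n}} f(x, y)\, dy
\quad\text{is log-concave whenever } f \text{ is log-concave.}
```

Roughly, restrictions of this type keep aggregate demand well enough behaved for pure-strategy pricing equilibria to exist.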

688 citations


Journal ArticleDOI
TL;DR: This article examined the effects of male and female labor supply on household demands and presented a simple and robust test for the separability of commodity demands from labor supply, finding that separability is rejected.
Abstract: We examine the effects of male and female labor supply on household demands and present a simple and robust test for the separability of commodity demands from labor supply. Using data on individual households from six years of the UK FES we estimate a demand system for seven goods which includes hours and participation dummies as conditioning variables. Allowance is made for the possible endogeneity of these conditioning labor supply variables. We find that separability is rejected. Furthermore, we present evidence that ignoring the effects of labor supply leads to bias in the parameter estimates.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss economic processes that may give rise to spatial patterns in data and explore the relative merits of alternative modeling approaches when data are spatially correlated, focusing on cases in which such a framework may be preferred to the more general fixed effects framework that nests it.
Abstract: This paper discusses economic processes that may give rise to spatial patterns in data and explores the relative merits of alternative modeling approaches when data are spatially correlated. An estimation scheme is presented that allows for spatial random effects and attention is focused on cases in which such a framework may be preferred to the more general fixed effects framework that nests it. The models presented are used together with information on the location of households in an Indonesian socioeconomic survey to test spatial relationships in Indonesian demand for rice. Copyright 1991 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this paper, the authors argue that the conventional interpretation of a strategy is not consistent with the manner in which it is applied, and that this inconsistency frequently results in confusion and misunderstanding, and they argue that a good model in game theory has to be realistic in the sense that it provides a model for the perception of real life social phenomena.
Abstract: The paper is a discussion of the interpretation of game theory. Game theory is viewed as an abstract inquiry into the concepts used in social reasoning when dealing with situations of conflict and not as an attempt to predict behavior. The first half of the paper deals with the notion of "strategy." Its principal claim is that the conventional interpretation of a "strategy" is not consistent with the manner in which it is applied, and that this inconsistency frequently results in confusion and misunderstanding. In order to prove this point, the term "strategy" is discussed in three contexts: extensive games in which players have to act more than once in some prespecified order, games normally analyzed using mixed strategies, and games with limited memory. The paper endorses the view that equilibrium strategy describes a player's plan of action, as well as those considerations which support the optimality of his plan rather than being merely a description of a "plan of action." Deviation from the perception of a strategy as a mere "plan of action" fits in well with the interpretation of the notion "game" which is discussed in the second half of this paper. It is argued that a good model in game theory has to be realistic in the sense that it provides a model for the perception of real life social phenomena. It should incorporate a description of the relevant factors involved, as perceived by the decision makers. These need not necessarily represent the physical rules of the world. It is not meant to be isomorphic with respect to "reality" but rather with respect to our perception of regular phenomena in reality.

Journal ArticleDOI
TL;DR: In this article, the authors consider a situation where each agent can allocate his effort to various production activities called tasks, and the principal, who cannot observe the effort chosen by each agent, designs wage schedules contingent on outcomes.
Abstract: This paper concerns moral hazard problems in multi-agent situations where cooperation is an issue. Each agent chooses his own effort, which improves stochastically the outcome of his own task. He also chooses the amount of "help" to extend to other agents, which improves their performance. By selecting appropriate compensation schemes, the principal can design a task structure: the principal may prefer an unambiguous division of labor, where each agent specializes in his own task; or the principal may prefer teamwork, where each agent is motivated to help other agents. We provide a sufficient condition for teamwork to be optimal, based on its incentive effects. We also show a nonconvexity of the optimal task structure: the principal wants either an unambiguous division of labor or substantial teamwork.

This paper concerns moral hazard problems in multi-agent situations where cooperation is an issue. We consider a situation where each agent can allocate his effort to various production activities called tasks. Tasks are assumed to be "independent" of each other: the outcome of each task depends on an exogenous random variable which is stochastically independent of the random variables affecting the other tasks, and revenues from each task only depend on the outcome of that task. Relative performance evaluation therefore does not give a reason for the wage schedule to an agent to depend on the outcome of the tasks assigned to the other agents. (See Baiman and Demski (1980), Green and Stokey (1983), Holmstrom (1982), Lazear and Rosen (1981), Mookherjee (1984), and Nalebuff and Stiglitz (1983).)

We focus on incentives of agents to "help" each other. Each agent chooses his own effort level, which improves stochastically the outcome of the task for which he is mainly responsible. Agents also choose the amount of "help" to extend to other agents, which improves the outcomes of their tasks. The principal, who cannot observe the effort chosen by each agent, designs wage schedules contingent on outcomes. By selecting appropriate wage schedules, the principal can design a task structure: the principal may prefer a specialized task structure, where each agent is inclined not to help other agents and specializes in his own task. In this case, by the assumption of independent tasks, each agent can be treated completely separately. The principal, however, may choose a nonspecialized task structure, called teamwork, in which agents are motivated to help other agents.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the usual test statistics and covariance matrices for autoregressions, which characterize the likelihood shape in dynamic models just as in static regression models, should be reported without any corrections for the special unit root distribution theory.
Abstract: skewed asymptotically, while the likelihood and hence the posterior pdf remains symmetric. We show that no single prior can rationalize treating p-values as probabilities in these models, and we display examples of the sample-dependent "priors" that would do so. We argue that these results imply at a minimum that the usual test statistics and covariance matrices for autoregressions, which characterize the likelihood shape in dynamic models just as in static regression models, should be reported without any corrections for the special unit root distribution theory, even if the corrected classical p-values are reported as well.

Journal ArticleDOI
TL;DR: In this article, the authors examined the constrained optimal pattern of capital flows between a lender and a borrower in an environment in which there are two impediments to forming contracts: the first impediment to contracting arises from the assumption that lenders cannot observe whether borrowers invest or consume borrowed funds.
Abstract: In this paper, I examine the constrained optimal pattern of capital flows between a lender and a borrower in an environment in which there are two impediments to forming contracts. The first impediment to contracting arises from the assumption that lenders cannot observe whether borrowers invest or consume borrowed funds. This assumption leads to a moral hazard problem in investment. The second impediment arises from the assumption that the borrower, as a sovereign nation, may choose to repudiate his debts. The optimal contract is shown to specify that the borrowing country experience a capital outflow when the worst realizations of national output occur. This seemingly perverse capital outflow forms a necessary part of the optimal solution to the moral hazard problem in investment.

Journal ArticleDOI
TL;DR: In this article, a simple nonparametric test of rank using Engel curve data is described and applied to U.S. and U.K. consumer survey data, and the results are used to assess theoretical and empirical aggregation error in representative consumer models, and to explain a representative consumer paradox.
Abstract: W. M. Gorman's (1981) concept of Engel curve "rank" is extended to apply to any demand system. Rank is shown to have implications for specification, separability, and aggregation of demands. A simple nonparametric test of rank using Engel curve data is described and applied to U.S. and U.K. consumer survey data. The test employs a new general method for testing the rank of estimated matrices. The results are used to assess theoretical and empirical aggregation error in representative consumer models, and to explain a representative consumer paradox. Copyright 1991 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this article, the authors established the asymptotic normality of series estimators for nonparametric regression models, such as additive interactive regression, semiparametric regression, and semi-parametric index regression.
Abstract: This paper establishes the asymptotic normality of series estimators for nonparametric regression models. Gallant's Fourier flexible form estimators, trigonometric series estimators, and polynomial series estimators are prime examples of the estimators covered by the results. The results apply to a wide variety of estimates in the regression model under consideration, including derivatives and integrals of the regression function. The errors in the model may be homoskedastic or heteroskedastic. The paper also considers series estimators for additive interactive regression, semiparametric regression, and semiparametric index regression models, and shows them to be consistent and asymptotically normal. Copyright 1991 by The Econometric Society.
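A polynomial series estimator, one of the prime examples named above, takes only a few lines: regress y on a power basis of x and use the fitted linear combination as the estimate of the regression function. In the sketch below the number of terms is a user choice; how that number may grow with the sample size is exactly what the asymptotic theory addresses.

```python
import numpy as np

def polynomial_series_estimator(x, y, num_terms):
    """Fit E[y|x] by least squares on the basis (1, x, ..., x^(num_terms-1)) and
    return a callable predictor; a minimal sketch of one series estimator."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    X = np.vander(x, num_terms, increasing=True)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda x_new: np.vander(np.asarray(x_new, dtype=float),
                                   num_terms, increasing=True) @ coef
```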


Journal ArticleDOI
TL;DR: In this article, a normal Roy model with four sectors is developed to derive tests of several assumptions on the working of the labor market: strongly or weakly competitive or segmented.
Abstract: A normal Roy model with four sectors is developed. It makes it possible to derive tests of several assumptions about the working of the labor market: strongly competitive, weakly competitive, or segmented. The model shows that the presence of comparative advantages for individuals across the various economic sectors is a more important feature of labor markets than segmentation. The model is applied to data on women's labor-force participation in the main towns of Colombia in 1980, using multivariate probit and Tobit techniques. Copyright 1991 by The Econometric Society.

Journal ArticleDOI
TL;DR: Subgroup consistency, as defined in this paper, is a simple and attractive property which requires the overall level of poverty to fall if a subgroup of the population experiences a reduction in poverty while poverty in the rest of the population remains unchanged.
Abstract: It seems desirable that the overall level of poverty should fall whenever poverty decreases within some subgroup of the population and is unchanged outside that group. Yet this simple and attractive property, which we call "subgroup consistency," is violated by many of the poverty indices suggested in recent years. This paper characterizes the class of subgroup consistent poverty indices and identifies the special features associated with this property.

One of the most appealing properties of poverty indices suggested in recent years is a simple consistency axiom which requires the overall level of poverty to fall if a subgroup of the population experiences a reduction in poverty, while poverty in the rest of the population remains unchanged. This property, which we term "subgroup consistency," is desirable for a number of reasons. From a practical point of view, it is needed to coordinate the effects of a decentralized strategy towards poverty alleviation. For a decentralized strategy typically involves a collection of activities targeted at specific subgroups or regions of the country. If the poverty indicator is not subgroup consistent, we may be faced with a situation in which each local effort achieves its objective of reducing poverty within its targeted group, and yet the level of poverty in the population as a whole increases. Subgroup consistency may therefore be viewed as an essential counterpart to a coherent poverty program.

Subgroup consistency may also be regarded as a natural analogue of the monotonicity condition of Sen (1976), since monotonicity requires that aggregate poverty fall (or at least not increase) if one person's poverty is reduced, ceteris paribus, while subgroup consistency demands that aggregate poverty fall if one subgroup's poverty is reduced, ceteris paribus. Furthermore, subgroup consistency is closely related to the property of "decomposability," which allows aggregate poverty to be expressed as a population-share weighted average of subgroup poverty levels, and hence facilitates the disaggregated analysis of poverty by region or ethnic group of the type undertaken by Anand (1983).
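Decomposability, the stronger property mentioned at the end, is easiest to see in an additive index such as the Foster-Greer-Thorbecke family, used below purely as a familiar example rather than as this paper's characterization: aggregate poverty is a population-share weighted average of subgroup poverty, so reducing poverty in one subgroup while leaving the rest unchanged must lower the total.

```python
import numpy as np

def fgt_poverty(incomes, poverty_line, alpha=2.0):
    """Foster-Greer-Thorbecke index: mean over the whole population of
    ((z - y)/z)^alpha for the poor and 0 for the nonpoor. Additive, hence
    subgroup decomposable (and so subgroup consistent)."""
    y = np.asarray(incomes, dtype=float)
    gaps = np.clip((poverty_line - y) / poverty_line, 0.0, None)
    return np.mean(gaps ** alpha)

# Decomposability check with made-up data: overall poverty equals the
# population-share weighted average of subgroup poverty.
group_a = [40.0, 55.0, 120.0]
group_b = [80.0, 90.0, 30.0, 200.0]
z = 100.0
total = fgt_poverty(group_a + group_b, z)
weighted = (len(group_a) * fgt_poverty(group_a, z)
            + len(group_b) * fgt_poverty(group_b, z)) / (len(group_a) + len(group_b))
assert abs(total - weighted) < 1e-12
```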


Journal ArticleDOI
TL;DR: The main result of the paper is a characterization of voting by committees, which is the class of all voting schemes that satisfy voter sovereignty and non-manipulability on the domain of separable preferences.
Abstract: The main result of this paper characterizes voting by committees. There are n voters and K objects. Voters must choose a subset of K. Voting by committees is defined by one monotone family of winning coalitions for each object; an object is chosen if it is supported by one of its winning coalitions. This is proven to be the class of all voting schemes satisfying voter sovereignty and nonmanipulability on the domain of separable preferences. The result is analogous to the characterization of Clarke-Groves schemes in that it exhibits the class of all nonmanipulable schemes on an important domain. Copyright 1991 by The Econometric Society.
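The scheme itself is simple to state in code: fix, for each object, a monotone family of winning coalitions, and include an object in the chosen set exactly when the voters approving it contain one of its winning coalitions. The coalition families and ballots in the sketch below are made up for illustration.

```python
from typing import Dict, FrozenSet, Set

def vote_by_committees(winning: Dict[str, list],
                       approvals: Dict[str, Set[str]]) -> Set[str]:
    """An object is selected iff the voters approving it contain at least one of
    that object's winning coalitions; monotonicity of the coalition family means
    adding approvals can never remove an object from the outcome."""
    chosen = set()
    for obj, coalitions in winning.items():
        supporters = approvals.get(obj, set())
        if any(coalition <= supporters for coalition in coalitions):
            chosen.add(obj)
    return chosen

# Example with voters {1, 2, 3}: object "a" needs a majority, object "b" needs voter 1.
winning = {"a": [frozenset({"1", "2"}), frozenset({"1", "3"}), frozenset({"2", "3"})],
           "b": [frozenset({"1"})]}
approvals = {"a": {"1", "3"}, "b": {"2", "3"}}
print(vote_by_committees(winning, approvals))   # {'a'}
```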

Journal ArticleDOI
TL;DR: In this article, a dynamic programming model of job exit behavior and retirement is constructed and estimated using the method of simulated moments, allowing for both unobserved individual effects and unobserved job-specific "match" effects.
Abstract: A dynamic programming model of job exit behavior and retirement is constructed and estimated using the method of simulated moments. The model and estimation method allow for both unobserved individual effects and unobserved job-specific "match" effects. The model is estimated using two different assumptions about individual discount factors. First, a static model, with the discount factor equal to zero, is estimated. Then a dynamic model, with the discount factor equal to .95, is estimated. In both models, it is found that bad health, age, and lack of education increase the probability of retirement. The dynamic model performs better than the static model and has different implications for retirement behavior. The job-specific effects are an important source of unobserved heterogeneity. Copyright 1991 by The Econometric Society.

Journal ArticleDOI
TL;DR: In the field of noncooperative game theory, Nash equilibrium has played a central role as a solution concept; under one major interpretation discussed in this paper, it is assumed that all players have consistent hierarchies of beliefs, with the game and their priors being common knowledge.
Abstract: In the field of noncooperative game theory, Nash equilibrium (Nash (1951)) has played a central role as a solution concept. In bold strokes, one may discern two major interpretations of Nash equilibrium in the context of rational players. The first, which is close to the "eductive" interpretation of Binmore (1987, 1988) and the "complete information" interpretation of Kaneko (1987), assumes that the game is played exactly once (if it is a repeated game, the repetition occurs once), and the players have sufficient knowledge and ability to analyze the game in a rational manner. Sometimes it is assumed that all players have consistent hierarchies of beliefs, where the game and their priors are common knowledge. The Bayesian interpretation, such as that proposed by Aumann (1987), advanced this idea to the level that the players have a common prior. From this point of view, however, Nash equilibrium seems far from being satisfactory.

Journal ArticleDOI
TL;DR: Black's median voter result does not extend to elections in which candidates differ in more than one dimension; this article provides a multi-dimensional analog, giving conditions under which the mean voter's most preferred outcome is unbeatable according to a 64%-majority rule.
Abstract: A celebrated result of Black (1948a) demonstrates the existence of a simple-majority winner when preferences are single-peaked. The social choice follows the preferences of the median voter: the median voter's most-preferred outcome beats any alternative. However, this conclusion does not extend to elections in which candidates differ in more than one dimension. This paper provides a multi-dimensional analog of the median voter result. We provide conditions under which the mean voter's most preferred outcome is unbeatable according to a 64%-majority rule. The conditions supporting this result represent a significant generalization of Caplin and Nalebuff (1988). The proof of our mean voter result uses a mathematical aggregation theorem due to Prekopa (1971, 1973) and Borell (1975). This theorem has broad applications in economics. An application to the distribution of income is described at the end of this paper; results on imperfect competition are presented in the companion paper, Caplin and Nalebuff (1991).
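For orientation, the reason the threshold is quoted as 64% is usually explained by the bound below, which is stated as a summary of this line of work under its log-concavity-type restrictions rather than as a transcription of this paper's exact conditions: the fraction of voters preferring any alternative to the mean ideal point is bounded by a quantity that rises toward 1 − 1/e as the dimension grows, so a 64% super-majority can never be mustered against the mean.

```latex
% Bound usually quoted to motivate the 64% threshold (summary, not the paper's
% exact statement): in n dimensions,
\max_{z}\ \Pr\big(\text{voter prefers } z \text{ to the mean ideal point}\big)
\;\le\; 1-\Big(\tfrac{n}{n+1}\Big)^{n}
\;\longrightarrow\; 1-\tfrac{1}{e}\approx 0.632 < 0.64
\quad (n \to \infty).
```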

ReportDOI
TL;DR: In this paper, the authors derive from a model of investment with multiple capital goods a one-to-one relation between the growth rate of the capital aggregate and the stock market-based Q. Identification is achieved by combining the theoretical structure of the Q model with an assumed serial correlation structure for the technology shock, which is the error term in the growth-Q equation.
Abstract: We derive from a model of investment with multiple capital goods a one-to-one relation between the growth rate of the capital aggregate and the stock market-based Q. We estimate the growth-Q relation using a panel of Japanese manufacturing firms, taking into account the endogeneity of Q. Identification is achieved by combining the theoretical structure of the Q model and an assumed serial correlation structure of the technology shock, which is the error term in the growth-Q equation. For the early years of our sample, cash flow has significant explanatory power over and above Q. The significance of cash flow disappears in more recent years for heavy industry, when Japanese capital markets were liberalized. The estimated Q coefficient implies that the adjustment cost is less than half of gross profits net of the adjustment cost.

Journal ArticleDOI
TL;DR: In this article, a matching problem is considered in which sellers can publicly commit to a trading price that differs from the price at which buyers expect to trade elsewhere in the market, and the equilibrium ex ante price offer lies below the price associated with the Nash bargaining split.
Abstract: A matching problem is considered in which sellers can publicly commit to a trading price that differs from the price at which buyers expect to trade elsewhere in the market. When demand and supply are nearly equal, the equilibrium ex ante price offer lies below the price associated with the Nash bargaining split. This relationship reverses when the level of excess demand is large. Sellers always have an incentive to make ex ante offers when prices elsewhere are determined by Nash bargaining. This can be interpreted to mean that Nash bargaining is an unstable pricing institution. Copyright 1991 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this article, the authors distinguish among the effects of reducing the interest rate, shortening the period over which actions are held fixed, and shortening a lag with which accumulated information is reported.
Abstract: In a repeated partnership game with imperfect monitoring, we distinguish among the effects of (1) reducing the interest rate, (2) shortening the period over which actions are held fixed, and (3) shortening the lag with which accumulated information is reported. All three changes are equivalent in games with perfect monitoring. With imperfect monitoring, reducing the interest rate always increases the possibilities for cooperation, but the other two changes always have the reverse effect when the interest rate is small.

Journal ArticleDOI
TL;DR: In this paper, a non-Archimedean variant of subjective expected utility where decisionmakers have lexicographic beliefs is developed, which can be made to satisfy admissibility and yield well-defined conditional probabilities and at the same time allow for "null" events.
Abstract: Conventional Bayesian theory of choice under uncertainty, subjective expected utility theory, fails to satisfy the properties of admissibility and existence of well-defined conditional probabilities; weakly dominated acts may be chosen, and the usual definition of conditional probabilities applies only to nonnull events. This paper develops a non-Archimedean variant of subjective expected utility where decisionmakers have lexicographic beliefs. This generalization can be made to satisfy admissibility and yield well-defined conditional probabilities and at the same time allow for "null" events. Copyright 1991 by The Econometric Society.
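The decision rule implied by lexicographic beliefs is easy to illustrate: evaluate each act by a vector of expected utilities, one for each level of the belief hierarchy, and compare the vectors lexicographically, so lower-level beliefs matter only to break ties at higher levels. The sketch below relies on Python's built-in lexicographic comparison of tuples; the encoding of acts and beliefs is an assumption of the example.

```python
from typing import Sequence

def lexicographic_value(utilities: Sequence[float],
                        belief_hierarchy: Sequence[Sequence[float]]) -> tuple:
    """Expected utility of one act under each probability measure in the belief
    hierarchy, returned as a tuple to be compared lexicographically."""
    return tuple(sum(p * u for p, u in zip(beliefs, utilities))
                 for beliefs in belief_hierarchy)

# Two states; the primary belief puts all mass on state 0, the secondary on state 1.
hierarchy = [[1.0, 0.0], [0.0, 1.0]]
act_a = [1.0, 0.0]        # state-contingent utilities
act_b = [1.0, 5.0]        # weakly dominates act_a
# act_b is ranked above act_a even though they tie under the primary belief,
# which is how admissibility (no weakly dominated choices) is respected.
assert lexicographic_value(act_b, hierarchy) > lexicographic_value(act_a, hierarchy)
```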