
Showing papers in "Econometrica in 1988"


Journal ArticleDOI
TL;DR: In this article, the authors consider estimation and testing of vector autoregression coefficients in panel data, and apply the techniques to analyze the dynamic relationships between wages and hours worked in two samples of American males.
Abstract: This paper considers estimation and testing of vector autoregression coefficients in panel data, and applies the techniques to analyze the dynamic relationships between wages and hours worked in two samples of American males. The model allows for nonstationary individual effects and is estimated by applying instrumental variables to the quasi-differenced autoregressive equations. The empirical results suggest the absence of lagged hours in the wage forecasting equation. The results also show that lagged hours are important in the hours equation. Copyright 1988 by The Econometric Society.

3,736 citations


Journal ArticleDOI
TL;DR: In this article, a random variable (X, Z) in R^p × R^q is considered, and an estimator is constructed that generalizes the ordinary least squares estimator by inserting nonparametric regression estimators into the nonlinear orthogonal projection on Z.
Abstract: We consider a random variable (X, Z) in R^p × R^q. We construct an estimator that generalizes the ordinary least squares estimator by inserting nonparametric regression estimators into the nonlinear orthogonal projection on Z.

2,393 citations


Journal ArticleDOI
TL;DR: The authors empirically tested and rejected classical competitive theories of wage determination by examining differences in wages for equally skilled workers across industries, and found that the dispersion in wages across industries, as measured by the standard deviation of industry wage differentials, is substantial.
Abstract: This paper empirically tests and rejects classical competitive theories of wage determination by examining differences in wages for equally skilled workers across industries. Human capital earnings functions are estimated using cross-sectional and longitudinal data from the CPS and QES. The major finding is that the dispersion in wages across industries, as measured by the standard deviation of industry wage differentials, is substantial. Furthermore, F tests decisively reject the hypothesis that the industry dummy variables are jointly insignificant. These differences are very difficult to link to unobserved differences in ability or to compensating differentials for working conditions. Fixed effects models are estimated using two longitudinal data sets to control for constant, unmeasured worker characteristics that might bias cross-sectional estimates. Because measurement error is a serious problem in looking at workers who report changing industries, we use estimates of industry classification error rates to adjust the longitudinal results. In the fixed effects analysis, the industry wage differentials are sizable and are very similar to the cross-sectional estimates. In addition, the fixed effects estimates are robust under a variety of assumptions about classification errors and are similar using both data sets. These findings cast doubt on explanations of industry wage differentials based on unmeasured ability. Additional analysis finds that the industry wage structure is highly correlated for workers in small and large firms, in different regions of the U.S., and with varying job tenures. Finally, evidence is presented demonstrating that turnover has a negative relationship with industry wage differentials. These findings suggest that workers in high wage industries receive noncompetitive rents.

1,715 citations


Journal ArticleDOI
TL;DR: In this paper, household behavior is modeled as a two-member collectivity taking Pareto-efficient decisions, and the consequences of this assumption are analyzed in a three-good model in which only total consumption and each member's labor supply are observable.
Abstract: Traditionally, household behavior is derived from the maximization of a unique utility function. In this paper, we propose an alternative approach, in which the household is modeled as a two-member collectivity taking Pareto-efficient decisions. The consequences of this assumption are analyzed in a three-good model, in which only total consumption and each member's labor supply are observable. If the agents are assumed egoistic (i.e., they are only concerned with their own leisure and consumption), it is possible to derive falsifiable conditions upon household labor supplies from both a parametric and nonparametric viewpoint. If, alternatively, agents are altruistic, restrictions obtain in the nonparametric context; useful interpretation stems from the comparison with the characterization of aggregate demand for a private-good economy.

1,654 citations


Book ChapterDOI
TL;DR: In this paper, Tirole et al. studied spot asset trading in an environment in which all investors receive the same dividend from a known probability distribution at the end of each of T = 15 (or 30) trading periods.
Abstract: Spot asset trading is studied in an environment in which all investors receive the same dividend from a known probability distribution at the end of each of T = 15 (or 30) trading periods. Fourteen of twenty-two experiments exhibit price bubbles followed by crashes relative to intrinsic dividend value. When traders are experienced this reduces, but does not eliminate, the probability of a bubble. The regression of changes in mean price on lagged excess bids (number of bids minus number of offers in the previous period), P_t − P_{t−1} = α + β(B_{t−1} − O_{t−1}), supports the hypothesis that −α = E(d), the one-period expected value of the dividend, and that β > 0, where excess bids is a surrogate measure of excess demand arising from homegrown capital gains (losses) expectations. Thus, when (B_{t−1} − O_{t−1}) goes to zero we have convergence to rational expectations in the sense of Fama (1970), in that arbitrage becomes unprofitable. The observed bubble phenomenon can also be interpreted as a form of temporary myopia (Tirole, 1982) from which agents learn that capital gains expectations are only temporarily sustainable, ultimately inducing common expectations, or "priors" (Tirole, 1982). Four of twenty-six experiments, all using experienced subjects, yield outcomes that appear to the "chartist's eye" to converge "early" to rational expectations, although even in these cases we estimate β > 0, together with small price fluctuations of a few cents that invite "scalping."
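The excess-bids regression described in the abstract is an ordinary least squares fit of price changes on lagged excess bids. A minimal sketch follows; the data here are invented for illustration and are not the authors' experimental observations:

```python
# Illustrative sketch (made-up numbers, not the paper's data): fit the
# excess-bids regression  P_t - P_{t-1} = alpha + beta * (B_{t-1} - O_{t-1})
# by ordinary least squares.

def ols(x, y):
    """Return (alpha, beta) minimizing the sum of squared residuals."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)
    alpha = my - beta * mx
    return alpha, beta

# Hypothetical per-period data: lagged excess bids (bids minus offers)
# and the corresponding mean-price changes.
excess_bids = [4, 2, 0, -1, -3, -5]
price_changes = [0.9, 0.4, -0.1, -0.3, -0.8, -1.2]

alpha, beta = ols(excess_bids, price_changes)
# Under the paper's hypothesis, beta > 0 and -alpha approximates E(d),
# the one-period expected dividend.
print(round(alpha, 4), round(beta, 4))
```

With these illustrative numbers, beta comes out positive and alpha negative, the sign pattern the hypothesis predicts.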

1,377 citations


Journal ArticleDOI
TL;DR: The authors present a systematic framework for studying infinitely repeated games with discounting, focusing in particular on pure-strategy perfect equilibria.
Abstract: We present a systematic framework for studying infinitely repeated games with discounting, focusing in particular on pure-strategy perfect equilibria.

1,188 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider auctions for a single indivisible object in which the bidders have information about each other that is not available to the seller. They show that the seller can use this information to his own benefit, and completely characterize the environments in which a well-chosen auction gives him the same expected payoff as that obtainable were he able to sell the object with full information about each bidder's willingness to pay.
Abstract: We consider auctions for a single indivisible object, in the case where the bidders have information about each other which is not available to the seller. We show that the seller can use this information to his own benefit, and we completely characterize the environments in which a well chosen auction gives him the same expected payoff as that obtainable were he able to sell the object with full information about each bidder's willingness to pay. We provide this characterization for auctions in which the bidders have dominant strategies, and for those where the relevant equilibrium concept is Bayesian Nash. In both set-ups, the existence of these auctions hinges on the possibility of constructing lotteries with the correct properties. We consider the situation in which an agent, the seller, possesses one indivisible unit of a good to which he attaches no value. But the good has value to a number of potential buyers, and its transfer to one of them would increase social welfare. In particular, the transfer to the buyer with the highest valuation maximizes social welfare. In this paper, we completely characterize environments in which the seller can design an auction that will enable him to capture for himself the full increase in social welfare induced by the transfer of the good to the bidder with the highest willingness to pay. If the seller had full information about the reservation prices of potential buyers, his optimal selling strategy would be very simple. He would announce a price equal to or very close to the highest reservation value. The optimal strategy for the bidder with the highest valuation would be to accept the offer. (Note that we are treating a situation in which the seller can commit himself to a price.) As a result of the exchange, the utility of the seller increases by the full amount of the increase in social welfare, and he has been able to fully extract the surplus.
In many circumstances, however, a seller has only imperfect knowledge of the buyers' willingnesses to pay. In this case, he must find some mechanism, or auction, which will enable him to maximize his benefit from the sale of the object. The auction literature starts with this observation and shows how the seller can, by an astute choice of auction, extract the largest possible fraction of the surplus. In general, the literature has shown that this proportion is strictly less than one. In some circumstances, the bidders will have information about each other which is not available to the seller. For instance, in auctions for petroleum drilling rights, bidders know the results of geological tests which they have

925 citations


Journal ArticleDOI
TL;DR: In this paper, a simple two-period principal/agent model is studied in which the principal updates the incentive scheme after observing the agent's first-period performance. The agent has superior information about his ability, the strategies are required to be perfect, and updating of the principal's beliefs about the agent's ability follows Bayes' rule.
Abstract: The paper studies a simple two-period principal/agent model in which the principal updates the incentive scheme after observing the agent's first-period performance. The agent has superior information about his ability. The principal offers a first period incentive scheme and observes some measure of the agent's first-period performance (cost or profit), which depends on the agent's ability and (unobservable) first-period effort. The relationship is entirely run by short-term contracts. In the second period the principal updates the incentive scheme and the agent is free to accept the new incentive scheme or to quit. The strategies are required to be perfect, and updating of the principal's beliefs about the agent's ability follows Bayes' rule. The central theme of the paper is that the ratchet effect leads to much pooling in the first period. First, for any first-period incentive scheme, there exists no separating equilibrium. Second, when the uncertainty about the agent's ability is small, the optimal scheme must involve a large amount of pooling. The paper also gives necessary and sufficient conditions for the existence of partition equilibria and looks at the effect of cost uncertainty.

684 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide game theoretic foundations for the classic kinked demand curve equilibrium and Edgeworth cycle and analyze a model in which firms take turns choosing prices; the model is intended to capture the idea of reactions based on short-run commitment.
Abstract: We provide game theoretic foundations for the classic kinked demand curve equilibrium and Edgeworth cycle. We analyze a model in which firms take turns choosing prices; the model is intended to capture the idea of reactions based on short-run commitment. In a Markov perfect equilibrium (MPE), a firm's move in any period depends only on the other firm's current price. There are multiple MPEs, consisting of both kinked demand curve equilibria and Edgeworth cycles. In any MPE, profit is bounded away from the Bertrand equilibrium level. We show that a kinked demand curve at the monopoly price is the unique symmetric "renegotiation proof" equilibrium when there is little discounting. We then endogenize the timing by allowing firms to move at any time subject to short-run commitments. We find that firms end up alternating, thus vindicating the ad hoc timing assumption of our simpler model. We also discuss how the model can be enriched to provide explanations for excess capacity and market sharing. Keywords: Tacit collusion, Markov perfect equilibrium, kinked demand curve, Edgeworth cycle, excess capacity, market sharing, endogenous timing.

606 citations


Journal ArticleDOI
TL;DR: In this paper, the authors report on three series of experiments all of which were predicted to have performed identically by the theory of rational expectations, and demonstrate the importance of market institutions and trading instruments in achievement of equilibrium.
Abstract: The idea that markets might aggregate and disseminate information and also resolve conflicts is central to the literature on decentralization (Hurwicz, 1972) and rational expectations (Lucas, 1972). We report on three series of experiments all of which were predicted to have performed identically by the theory of rational expectations. In two of the three series (one in which participants trade a complete set of Arrow-Debreu securities and a second in which all participants have identical preferences), double auction trading leads to efficient aggregation of diverse information and rational expectations equilibrium. Failure of the third series to exhibit such convergence demonstrates the importance of market institutions and trading instruments in achievement of equilibrium.

595 citations


Journal ArticleDOI
TL;DR: The authors test whether a model of reputation formation in an incomplete information game, using sequential equilibrium, predicts behavior of players in an experiment and conclude that sequential equilibrium with homemade incomplete information describes actual behavior well enough that it is plausible to apply it to theoretical settings where individuals make choices.
Abstract: We test whether a model of reputation formation in an incomplete information game, using sequential equilibrium, predicts behavior of players in an experiment. Subjects play an abstracted lending game: a B player lends or does not lend; then if B lends, an E player can pay back or renege. The game is played 8 times, and there is a small controlled probability that the E player's induced preferences make him prefer to pay back (but usually he prefers to renege). In sequential equilibrium, even E players who prefer to renege should pay back in early periods of the game, and renege with increasing frequency in later periods, to establish reputations for preferring to pay back. After many repetitions of the 8-period game, actual play is roughly like the sequential equilibrium, except that E players pay back later in the game and more often than they should. This behavior is rational if B players have a "homemade" prior probability of .17 (in addition to the controlled probability) that E players will prefer to pay back. We conclude that sequential equilibrium with homemade incomplete information describes actual behavior well enough that it is plausible to apply it to theoretical settings where individuals make choices (e.g., product markets, labor markets, bargaining).

Journal ArticleDOI
TL;DR: In this article, a suite of conditio ns that are applicable to the multistatistic case and replace the objectionable convex distribution function assumption is presented. But they do not consider the assumption that the distribution function of output is convex in the agent's action.
Abstract: It is of interest to know when the incentive compatibility conditio n in principal-agent problems can be replaced by the condition that the agent's expected utility be stationary in effort. The Mirrlees-Rogerson conditions do not work if the principal can observe more than one observable statistic. Also, the Mirrlees-Rogerson assumption that the distribution function of output is convex in the agent's action is unsatisfactory even in the context of the basi c model; it is too restrictive. The paper presents a suite of conditio ns that are applicable to the multistatistic case and replaces the objectionable convex distribution function assumption. Copyright 1988 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this paper, the authors examine the use of stage mechanisms in implementation problems and provide a partial characterization of the set of subgame perfect implementable choice rules, showing that in many economic environments virtually any choice rule can be implemented.
Abstract: This paper examines the use of stage mechanisms in implementation problems and provides a partial characterization of the set of subgame perfect implementable choice rules. It is shown that, in many economic environments, virtually any choice rule can be implemented. To illustrate the power of this approach, the paper discusses a number of models in which it is possible to implement the first-best (although it would not have been possible to do so without using stage mechanisms). The diversity of these models suggests that subgame perfect implementation may find wide application. Copyright 1988 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this paper, the authors introduce a class of alternating-move infinite-horizon models of duopoly, where the timing is meant to capture the presence of short-run commitments.
Abstract: The paper introduces a class of alternating-move infinite-horizon models of duopoly. The timing is meant to capture the presence of short-run commitments. Markov perfect equilibrium (MPE) in this context requires strategies to depend only on the action to which one's opponent is currently committed. The dynamic programming equations for an MPE are derived. The first application of the model is to a natural monopoly, in which fixed costs are so large that at most one firm can make a profit. The firms install short-run capacity. In the unique symmetric MPE, only one firm is active and practices the quantity analogue of limit pricing. For commitments of brief duration, the market is almost contestable. We conclude with a discussion of more general models in which the alternating timing is derived rather than imposed. Our companion paper applies the model to price competition and provides equilibrium foundations for kinked demand curves and Edgeworth cycles.

ReportDOI
TL;DR: In this article, it was shown that the variance of the innovation in the stock price is smaller than that of a stock price forecast made from a subset of the market's information set.
Abstract: A standard efficient markets model states that a stock price equals the expected present discounted value of its dividends, with a constant discount rate. This is shown to imply that the variance of the innovation in the stock price is smaller than that of a stock price forecast made from a subset of the market's information set. The implication follows even if prices and dividends require differencing to induce stationarity. The relation between the variances appears not to hold for some annual U.S. stock market data. The rejection of the model is both quantitatively and statistically significant.
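The variance comparison at the heart of this test can be stated compactly. The following is a sketch of the standard present-value setup, with notation chosen here for exposition rather than taken verbatim from the paper:

```latex
% Efficient-markets present-value model with constant discount factor b:
P_t \;=\; \sum_{k=1}^{\infty} b^{k}\, \mathbb{E}\!\left[ d_{t+k} \mid I_t \right]
% For any coarser information set H_t \subseteq I_t (a subset of the
% market's information), the innovation variances must satisfy
\operatorname{Var}\!\bigl(P_t - \mathbb{E}[P_t \mid I_{t-1}]\bigr)
\;\le\;
\operatorname{Var}\!\bigl(P_t - \mathbb{E}[P_t \mid H_{t-1}]\bigr)
```

Intuitively, the market's own forecast uses at least as much information as any outside forecast, so its one-step-ahead surprises should be no more variable; the paper's rejection comes from annual U.S. data violating this ordering.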

Journal ArticleDOI
TL;DR: In this article, the authors examined US household fertility and female labor supply over the life cycle using data from the Panel Study of Income Dynamics and found that while parents cannot perfectly control conceptions variations in child care costs do affect life cycle spacing of births.
Abstract: This paper examines US household fertility and female labor supply over the life cycle using data from the Panel Study of Income Dynamics. The authors investigate how maternal time inputs and market expenditures on offspring, as well as the benefits they yield their parents, vary with ages of offspring and influence female labor supply and contraceptive behavior. The econometric framework combines a female labor supply model and a contraceptive choice index function. It also accounts for the fact that conceptions are not perfectly controllable events. Using longitudinal data on married couples, the authors estimate these equations and test alternative specifications of the technologies governing child care. The findings suggest that while parents cannot perfectly control conceptions, variations in child care costs do affect life cycle spacing of births. Furthermore, the results demonstrate the gains of modeling the linkages between female labor supply and fertility behavior at the household level.

ReportDOI
TL;DR: In this paper, the authors present a variety of estimates of the effect of training on the probability of employment for the 1976 cohort of adult male participants in the Comprehensive Employment and Training Act (CETA) program, ranging from a simple comparison of pre-and post-training employment probabilities between trainees and nonparticipants, to a fully specified first-order Markov model of employment probabilities with individual heterogeneity.
Abstract: Despite over two decades of U.S. experience in operating large scale subsidized training programs for low income and unemployed workers, the effects of these programs are still highly controversial. The controversy arises from the difficulty of specifying the model of participant outcomes in the absence of training that is necessary in any nonexperimental program evaluation. In this paper we suggest that some of these difficulties may be overcome by focusing on a very simple measure of outcomes: namely, the probability of employment. We present a variety of estimates of the effect of training on the probability of employment for the 1976 cohort of adult male participants in the Comprehensive Employment and Training Act (CETA) program. Our methods range from a simple comparison of pre- and post-training employment probabilities between trainees and nonparticipants, to a fully specified first-order Markov model of employment probabilities with individual heterogeneity. There is consistent evidence across methods that CETA participation increased the probability of employment in the three years after training by from 2 to 5 percentage points. Classroom training programs appear to have had significantly larger effects than on-the-job programs, although the estimated effects of both kinds of programs are positive. We also find that movements in and out of employment for trainees and nonparticipants are reasonably well described by a first-order process, conditional on individual heterogeneity. In the context of this model, CETA participation appears to have increased both the probability of moving into employment, and the probability of continuing employment.

Journal ArticleDOI
TL;DR: In this paper, the authors show that under fairly general conditions, ordinary least squares and linear instrumental variables estimators are asymptotically normal when a regression equation has nonstationary right hand side variables.
Abstract: Under fairly general conditions, ordinary least squares and linear instrumental variables estimators are asymptotically normal when a regression equation has nonstationary right hand side variables. Standard formulas may be used to calculate a consistent estimate of the asymptotic variance-covariance matrix of the estimated parameter vector, even if the disturbances are conditionally heteroskedastic and autocorrelated. So inference may proceed in the usual way. The key requirements are that the nonstationary variables share a common unit root and that the unconditional mean of their first differences is nonzero. Copyright 1988 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this article, the concept of a near-integrated vector random process is introduced, a class of processes that helps in working toward a general asymptotic theory of regression for multiple time series.
Abstract: We introduce the concept of a near-integrated vector random process. This class of processes helps in working toward a general asymptotic theory of regression for multiple time series.

Journal ArticleDOI
TL;DR: In this paper, the problem of controlling a stochastic process with unknown parameters over an infinite horizon with discounting is considered, where agents express beliefs about unknown parameters in terms of distributions.
Abstract: The problem of controlling a stochastic process with unknown parameters over an infinite horizon with discounting is considered. Agents express beliefs about unknown parameters in terms of distributions. Under general conditions, the sequence of beliefs converges to a limit distribution. The limit distribution may or may not be concentrated at the true parameter value. In some cases, complete learning is optimal; in others, the optimal strategy does not imply complete learning. The paper concludes with examination of some special cases and a discussion of a procedure for generating examples in which incomplete learning is optimal. Copyright 1988 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this article, the authors extend Varian's (1984) nonparametric production analysis to situations when the set of observed output, input, and price data is not consistent with profit maximization for at least one firm.
Abstract: In this paper we extend Varian's (1984) nonparametric production analysis to situations when the set of observed output, input, and price data is not consistent with profit maximization for at least one firm. In such cases, Varian's results imply that no production possibility set containing all observations can rationalize the observed data. We identify each firm whose performance, given the prices faced by it, may be found consistent with profit maximization relative to some production possibility set containing all observed output-input vectors. We show that the set of all such firms can itself be weakly rationalized in the sense that there exists a (closed, convex, and "monotone") production possibility set that contains all the observations, and relative to which the performance of all the firms in this set is consistent with profit maximization given their respective prices. By definition, firms not included in this largest set of efficient observations unambiguously deviate from profit maximizing behavior for any production possibility set containing all observations. We follow Farrell (1957) and decompose these deviations into technical and allocative efficiency measures, considering as admissible all closed, convex, and "monotone" production possibility sets relative to which the performance of each firm in the efficient set remains consistent with profit maximization. We then describe nonparametric methods for determining the tightest upper and lower bounds on the technical, allocative, and aggregate efficiency measures evaluated relative to all such admissible production possibility sets. It is seen that the tightest upper bound on the technical efficiency measure is the same as the value computed by the nonparametric efficiency evaluation technique known as data envelopment analysis, thus establishing a link between this literature in management science/operations research and the nonparametric production analysis in economics.

Journal ArticleDOI
TL;DR: In this paper, a modified CUSUM test, suggested by J. M. Dufour (1982), was investigated for structural change in a linear model with lagged dependent variables among the regressors.
Abstract: The well-known CUSUM test for structural change is investigated when there are lagged dependent variables among the regressors in a linear model. The authors show that both a modified CUSUM test, suggested by J. M. Dufour (1982), and the straightforward CUSUM test retain their asymptotic significance levels in dynamic models, and find that the power depends crucially on the angle between the mean regressor and the structural shift. Copyright 1988 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this paper, the complexity of a strategy in a repeated game is defined as the cardinality of the induced strategy set, i.e., the number of distinct strategies induced by the original strategy in all possible subgames.
Abstract: A measure of complexity for repeated game strategies is studied. This measure facilitates the investigation of some issues regarding finite rationality and the structure of subgame perfect equilibria of repeated games with discounting. Specifically, the complexity of a strategy in a given repeated game is defined to be the cardinality of the induced strategy set, i.e., the number of distinct strategies induced by the original strategy in all possible subgames. We observe that this cardinality is equal to the size (cardinality of the state set) of the smallest automaton which can implement the strategy. Thus, in a sense, complexity is measured on the basis of the amount of computing power inherent in the strategy. A measure of strategic memory is also studied. The following results are obtained: (1) combining two notions of "bounded rationality" (epsilon equilibrium and finite complexity), we find that every subgame perfect equilibrium of the repeated game can be approximated (with regard to payoffs) by a subgame perfect epsilon equilibrium of finite complexity. (2) For a generic class of normal form stage games, at every discount robust subgame perfect (DRSP) equilibrium, there are necessary relationships among the complexities and memories of the players' strategies. In the two player case, strategies must be equally complex and must have equal memories. (3) For a second class of two player stage games, we show that the payoff vectors for all DRSP equilibria are obtainable via equilibria in which the players' strategies are equally complex and have equal memories.
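The automaton interpretation of the complexity measure can be made concrete. The sketch below is a hypothetical encoding of my own (not code from the paper): a repeated-game strategy as a Moore machine, with complexity the state count of the smallest automaton implementing it. Tit-for-tat in the repeated Prisoner's Dilemma needs exactly two states; always-cooperate needs one.

```python
# Hypothetical illustration: strategies as Moore machines, where each state
# prescribes an action and transitions depend on the opponent's last action.

def reachable_states(machine, start):
    """Return the set of states reachable from `start`. For a minimal
    machine, its size equals the number of distinct continuation strategies
    the strategy induces across subgames (the complexity measure)."""
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        for nxt in machine[state]["next"].values():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Tit-for-tat: play whatever the opponent played last period.
tit_for_tat = {
    "coop":   {"action": "C", "next": {"C": "coop", "D": "defect"}},
    "defect": {"action": "D", "next": {"C": "coop", "D": "defect"}},
}

# Always-cooperate: a single state regardless of history.
always_coop = {"coop": {"action": "C", "next": {"C": "coop", "D": "coop"}}}

print(len(reachable_states(tit_for_tat, "coop")))   # 2 states
print(len(reachable_states(always_coop, "coop")))   # 1 state
```

Result (2) in the abstract then says, for example, that in generic two-player stage games both players' machines must have the same state count at any discount robust subgame perfect equilibrium.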

Journal ArticleDOI
TL;DR: In this article, the effects of spurious detrending in regression are examined in the context of models where the generating mechanism is systematically misspecified by the presence of deterministic time trends.
Abstract: This paper studies the effects of spurious detrending in regression. The asymptotic behavior of traditional least squares estimators and tests is examined in the context of models where the generating mechanism is systematically misspecified by the presence of deterministic time trends. Most previous work on the subject has relied upon Monte Carlo studies to understand the issues involved in detrending data that are generated by integrated processes and our analytical results help to shed light on many of the simulation findings. Standard F tests and Hausman tests are shown to inadequately discriminate between the competing hypotheses. Durbin-Watson statistics, on the other hand, are shown to be valuable measures of series stationarity. The asymptotic properties of regressions and excess volatility tests with detrended integrated time series are also explored.

Journal ArticleDOI
TL;DR: In this article, a definition of social consensus is proposed, based on two restrictions: one on individual preferences and the other on the distribution of preferences. Under this form of consensus, a 64% majority rule exhibits many attractive properties, in contrast to super-majority rules, such as American electoral rules, that have paradoxical properties.
Abstract: We introduce a definition of social consensus based on two restrictions: one on individual preferences and the other on the distribution of preferences. In this case of consensus, a 64% majority rule exhibits many attractive properties, in contrast to super-majority rules with paradoxical properties, such as American electoral rules.

Journal ArticleDOI
TL;DR: For a particular utilitarian social welfare function, the problem faced by a central planner can be broken down into two subproblems: a standard problem of optimally allocating aggregate consumption over time and a problem of distributing aggregate consumption optimally at each moment among those alive.
Abstract: This paper analyzes aspects of optimal fiscal policy for economies with capital accumulation and finitely-lived, heterogeneous agents. For a particular utilitarian social welfare function, the problem faced by a central planner can be broken down into two subproblems: a standard problem of optimally allocating aggregate consumption over time and a problem of distributing aggregate consumption optimally at each moment among those alive. If it can use a sufficiently rich set of lump-sum taxes and transfers, the government can replicate the command optimum as a market equilibrium outcome. No issue of government debt is needed to achieve this decentralization. Copyright 1988 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this paper, household economies of scale (arising from the existence of household public goods, increasing returns in household production, and/or bulk discounts) are incorporated into a utility-theoretic model of household demands.
Abstract: Household economies of scale (arising from the existence of household public goods, increasing returns in household production, and/or bulk discounts) are incorporated into a utility-theoretic model of household demands. Individuals are assumed to be identical and symmetrically treated within households. Economies of scale parameters for five goods are estimated using the quadratic expenditure system and data from the U.S. Consumer Expenditure Survey on expenditures by all-adult households. Results suggest the existence of significant economies of scale in the consumption of all of the included goods, with economies being especially pronounced in the consumption of shelter. Copyright 1988 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this paper, the authors examine whether nonseparable preference structures are important in characterizing life-cycle labor supply and find that the standard assumption of intertemporally separable pre ferences for leisure is not consistent with the data.
Abstract: In this paper, the authors examine whether nonseparable preference structures are important in characterizing life-cycle labor supply. Using longitudinal data on prime-age males from the Panel Study of Income Dynamics, they estimate a model of life-cycle leisure and consumption under uncertainty in which intertemporal preferences are allowed to be nonseparable in leisure. The model tests several alternative specifications considered in the literature. They investigate the robustness of their findings to certain forms of population heterogeneity and types of model misspecification. Across alternative specifications, the authors find that the standard assumption of intertemporally separable preferences for leisure is not consistent with the data. Copyright 1988 by The Econometric Society.

Journal ArticleDOI
TL;DR: This paper examined the convergence properties of Bayesian-Cournot equilibria as the economy is replicated and concluded that large Cournot markets do not aggregate information efficiently except possibly when the production technology exhibits constant returns to scale.
Abstract: Consider a homogeneous product market where firms have private information about an uncertain demand parameter and compete in quantities. We examine the convergence properties of Bayesian-Cournot equilibria as the economy is replicated and conclude that large Cournot (or almost competitive) markets do not aggregate information efficiently except possibly when the production technology exhibits constant returns to scale. Even in a competitive market there is a welfare loss with respect to the first best outcome due to incomplete information in general. Nevertheless a competitive market is efficient, taking as given the decentralized private information structure of the economy. Endogenous (and costly) information acquisition is examined and seen to imply that the market outcome always falls short of the first best level with decreasing returns to scale. The results are also shown to be robust to the addition of extra rounds of competition which allows firms to use the information revealed by past prices. Explicit closed form solutions yielding comparative static results are obtained for models characterized by quadratic payoffs and affine information structures.
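The "almost competitive" large-market limit the abstract builds on can be seen in the simplest full-information benchmark. The sketch below is not the paper's incomplete-information model — it is the textbook symmetric Cournot game with linear inverse demand P = a - bQ and constant marginal cost c, with hypothetical parameter values — but it shows the mechanism: as the number of firms n grows, the equilibrium price converges to marginal cost.

```python
def cournot_price(n, a=10.0, b=1.0, c=2.0):
    """Equilibrium price in a symmetric n-firm Cournot market with
    inverse demand P = a - b*Q and constant marginal cost c."""
    q_i = (a - c) / (b * (n + 1))   # symmetric equilibrium output per firm
    return a - b * n * q_i          # resulting market price

for n in (1, 10, 1000):
    print(n, round(cournot_price(n), 4))
```

With these parameters the monopoly price is 6, and with 1000 firms the price is within a penny of the marginal cost of 2. The paper's point is that this competitive limit need not aggregate private demand information efficiently except under constant returns to scale.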

ReportDOI
TL;DR: In this article, priority service contracts are defined by the rank order in which a customer is served out of the available supply, until all customers are served or supply is exhausted, i.e., a customer's priority in obtaining service.
Abstract: The contracts that interest us here are called priority service contracts. The salient feature of such contracts is that they specify each customer's priority in obtaining service. That is, they specify the rank order in which a customer is served out of the available supply, until all customers are served or supply is exhausted. Such contracts essentially establish queues for customers. Section 1 provides some background about priority service. Section 2 formulates a basic model, offers several illustrations, and describes the two main examples that motivate the theoretical development. Section 3 derives some key results that show how the prices of priority service contracts are designed to induce customers to self-select efficient service orders. Section 4 discusses various ways that state enterprises can organize markets that implement priority service efficiently. Section 5 studies the operation of competitive markets for priority service. Section 6 concludes with some summary remarks. Two themes are emphasized. One is that a state enterprise can promote substantial efficiency gains by substituting priority service for absent spot markets. The other is that oligopolistic firms may have insufficient incentives to offer efficient product diversity; consequently, allocative efficiency depends on the entry of numerous firms. Even so, dispersal of supplies among many firms can prevent productive efficiency when there are advantages from pooling supplies.
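The efficiency claim behind priority pricing — that serving customers in descending order of their value for service maximizes total surplus for any realized supply level — can be checked by brute force on a toy example. The customer values below are hypothetical and not from the paper; a well-designed priority price menu would induce customers to self-select into exactly this serving order.

```python
import itertools

values = [9, 7, 4, 2, 1]   # hypothetical per-customer values for one unit of service
supply = 3                  # realized available units

# Surplus from serving the highest-value customers first (priority order)
priority = sum(sorted(values, reverse=True)[:supply])

# Best achievable surplus over every possible set of served customers
best = max(sum(combo) for combo in itertools.combinations(values, supply))

print(priority, best)  # priority order attains the maximum: 20 20
```

Serving in priority order is a greedy rule, and for a fixed supply of identical units the greedy choice of the highest-value customers is exactly the surplus-maximizing allocation.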