
Showing papers in "The Review of Economic Studies in 1980"


Journal ArticleDOI
TL;DR: The Lagrange multiplier (LM) statistic as mentioned in this paper is an alternative to the Wald (W) and maximum likelihood ratio (LR) procedures and is used to test the effect on the first order conditions for a maximum of the likelihood of imposing the hypothesis.
Abstract: Many econometric models are susceptible to analysis only by asymptotic techniques and there are three principles, based on asymptotic theory, for the construction of tests of parametric hypotheses. These are: (i) the Wald (W) test which relies on the asymptotic normality of parameter estimators, (ii) the maximum likelihood ratio (LR) procedure and (iii) the Lagrange multiplier (LM) method which tests the effect on the first order conditions for a maximum of the likelihood of imposing the hypothesis. In the econometric literature, most attention seems to have been centred on the first two principles. Familiar " t-tests " usually rely on the W principle for their validity while there have been a number of papers advocating and illustrating the use of the LR procedure. However, all three are equivalent in well-behaved problems in the sense that they give statistics with the same asymptotic distribution when the null hypothesis is true and have the same asymptotic power characteristics. Choice of any one principle must therefore be made by reference to other criteria such as small sample properties or computational convenience. In many situations the W test is attractive for this latter reason because it is constructed from the unrestricted estimates of the parameters and their estimated covariance matrix. The LM test is based on estimation with the hypothesis imposed as parametric restrictions so it seems reasonable that a choice between W or LM be based on the relative ease of estimation under the null and alternative hypotheses. Whenever it is easier to estimate the restricted model, the LM test will generally be more useful. It then provides applied researchers with a simple technique for assessing the adequacy of their particular specification. This paper has two aims. The first is to exposit the various forms of the LM statistic and to collect together some of the relevant research reported in the mathematical statistics literature. The second is to illustrate the construction of LM tests by considering a number of particular econometric specifications as examples. It will be found that in many instances the LM statistic can be computed by a regression using the residuals of the fitted model which, because of its simplicity, is itself estimated by OLS. The paper contains five sections. In Section 2, the LM statistic is outlined and some alternative versions of it are discussed. Section 3 gives the derivation of the statistic for
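The abstract notes that in many instances the LM statistic can be computed from a regression using the residuals of the restricted model. The sketch below is a minimal illustration of that general idea (not the paper's own derivations): an LM-type statistic for omitted regressors formed as n times the R-squared of an auxiliary regression of the restricted-model OLS residuals on the full regressor set. All data and variable names are invented for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=(n, 2))      # regressors kept under the null
z = rng.normal(size=(n, 1))      # candidate omitted regressor (the alternative)
y = 1.0 + x @ np.array([0.5, -0.3]) + 0.4 * z[:, 0] + rng.normal(size=n)

X_r = np.column_stack([np.ones(n), x])              # restricted model (null imposed)
b_r, *_ = np.linalg.lstsq(X_r, y, rcond=None)
u = y - X_r @ b_r                                   # residuals of the fitted null model

# Auxiliary regression of the residuals on the full regressor set
X_full = np.column_stack([X_r, z])
g, *_ = np.linalg.lstsq(X_full, u, rcond=None)
u_hat = X_full @ g
r2 = 1.0 - np.sum((u - u_hat) ** 2) / np.sum((u - u.mean()) ** 2)

lm = n * r2                          # LM-type statistic, asymptotically chi-squared(q) under the null
q = z.shape[1]
print(lm, stats.chi2.sf(lm, df=q))   # statistic and p-value
```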

5,826 citations


Journal ArticleDOI
TL;DR: In this article, it is noted that although analysis of covariance in the linear regression model does have the required consistency property, the problem of finding consistent estimators in other models is non-trivial, since the number of incidental parameters increases with sample size.
Abstract: This paper deals with data that has a group structure. A simple example in the context of a linear regression model is $E(y_{it} \mid x, \beta, \alpha) = \beta' x_{it} + \alpha_i$ $(i = 1, \ldots, N;\ t = 1, \ldots, T)$, where there are T observations within each of N groups. The $\alpha_i$ are group specific parameters. Our primary concern is with the estimation of $\beta$, a parameter vector common to all groups. The role of the $\alpha_i$ is to control for group specific effects; i.e. for omitted variables that are constant within a group. The regression function that does not condition on the group will not in general identify $\beta$: $E(y_{it} \mid x, \beta) \neq \beta' x_{it}$. In this case there is an omitted variable bias. An important application is generated by longitudinal or panel data, in which there are two or more observations on each individual. Then the group is the individual, and the $\alpha_i$ capture individual differences. If these person effects are correlated with x, then a regression function that fails to control for them will not identify $\beta$. In another important application the group is a family, with observations on two or more siblings within the family. Then the $\alpha_i$ capture omitted variables that are family specific, and they give a concrete representation to family background. We shall assume that observations from different groups are independent. Then the $\alpha_i$ are incidental parameters (Neyman and Scott (1948)), and $\beta$, which is common to the independent sampling units, is a vector of structural parameters. In the application to sibling data, T is small, typically T = 2, whereas there may be a large number of families. Small T and large N are also characteristic of many of the currently available longitudinal data sets. So a basic statistical issue is to develop an estimator for $\beta$ that has good properties in this case. In particular, the estimator ought to be consistent as $N \to \infty$ for fixed T. It is well-known that analysis of covariance in the linear regression model does have this consistency property. The problem of finding consistent estimators in other models is non-trivial, however, since the number of incidental parameters is increasing with sample size. We shall work with the following probability model: $y_{it}$ is a binary variable with
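The paper itself is concerned with binary outcomes, but the linear-model benchmark it cites, analysis of covariance being consistent for fixed T, can be sketched quickly. Below is a simulated illustration (all parameter values and names are hypothetical) of the within-group estimator, which demeans within groups to sweep out the incidental parameters, contrasted with pooled OLS that ignores them.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 500, 2                                   # many groups, few observations per group
alpha = rng.normal(size=N)                      # incidental group effects
x = alpha[:, None] + rng.normal(size=(N, T))    # regressor correlated with the group effect
beta = 1.5
y = beta * x + alpha[:, None] + rng.normal(size=(N, T))

# Within (analysis-of-covariance) estimator: demeaning within each group
# sweeps out the alpha_i, so only beta remains to be estimated.
x_dm = x - x.mean(axis=1, keepdims=True)
y_dm = y - y.mean(axis=1, keepdims=True)
beta_within = np.sum(x_dm * y_dm) / np.sum(x_dm ** 2)

# Pooled OLS ignoring the group effects is biased here, because x and alpha correlate.
beta_pooled = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
print(beta_within, beta_pooled)                 # within stays near 1.5, pooled drifts upward
```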

2,398 citations



Journal ArticleDOI
TL;DR: The authors examine the properties of additively separable inequality measures and investigate the possibility of decomposition of such a measure by population subgroups, and the scope for treating different groups in different ways within the overall measure.
Abstract: This paper examines the properties of the family of additively separable inequality measures. In particular it investigates the possibility of decomposition of such a measure by population subgroups, and the scope for treating different groups in different ways within the overall measure. This differential treatment of subpopulations is potentially very important, as we shall see from a simplified example. Table I depicts an eight-" person " society arranged into two groups so that persons 5 to 8 (in group 2) have exactly twice the incomes of persons 1 to 4 (in group 1) respectively. Call the values of the inequality measure for group 1, for group 2 and for the whole population $I^1$, $I^2$ and $I^*$ respectively. If the inequality measure used is mean-independent and the same for either group and for the total, and if each income recipient is identical in every respect other than income, we expect Table I to yield $I^1 = I^2$ and $I^* > I^1$. If the measure is decomposable then we can write $I^* = f(I^1, I^2, I^B)$ where $I^B$ is " between-group " inequality found by applying the measure to the vector of group average incomes ($2,500, $5,000).
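As a concrete illustration of the decomposition $I^* = f(I^1, I^2, I^B)$, the sketch below uses the Theil index, one member of the additively separable family, on a hypothetical eight-person society of the kind described above (group 2 incomes exactly twice those of group 1, group means $2,500 and $5,000). The specific individual incomes are invented for the example.

```python
import numpy as np

def theil(y):
    """Theil T index: mean-independent and additively decomposable."""
    y = np.asarray(y, dtype=float)
    mu = y.mean()
    return np.mean((y / mu) * np.log(y / mu))

# Hypothetical eight-person society: group 2 has exactly twice group 1's incomes,
# so group means are 2,500 and 5,000.
g1 = np.array([1000.0, 2000.0, 3000.0, 4000.0])
g2 = 2.0 * g1
y = np.concatenate([g1, g2])

I1, I2, I_star = theil(g1), theil(g2), theil(y)

# Decomposition: income-share-weighted within-group inequality plus between-group
# inequality, the latter obtained by giving every person their group's mean income.
shares = np.array([g1.sum(), g2.sum()]) / y.sum()
within = shares[0] * I1 + shares[1] * I2
between = theil(np.array([2500.0] * 4 + [5000.0] * 4))

print(I1, I2)                      # equal, by mean-independence
print(I_star, within + between)    # total inequality equals within + between
```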

472 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of non-imposed and non-neutral social orderings when individual welfares satisfy various measurability/comparability assumptions, including the cardinality and/or interpersonal comparability of welfares.
Abstract: At a general level, the information that may be required to enable a planner to judge which of two states is socially preferable may be of a diverse character. It is convenient to partition this information into the welfare and non-welfare characteristics of social states. Welfare characteristics consist of the individual welfares achieved in different states; minimally, individual welfares will be ordinal and interpersonally non-comparable, but it is possible to consider situations where information about the cardinality and/or interpersonal comparability of welfares is also relevant. Non-welfare characteristics are more difficult to describe; as well as a physical description of a particular state, they may also be a description of the evolution of a state. Thus, for example, non-welfare characteristics may include information about whether claims bestowed in the past are settled in the state under consideration. In the language of social choice theory, if welfare characteristics are not always deemed relevant then the SWF (the rule for moving from characteristics to an ordering of social states) is said to be imposed, and if non-welfare characteristics are not deemed relevant then the SWF is said to be neutral. Social choice theory is conventionally concerned with nonimposed SWFs; although rarely mentioned, much of it also deals with non-neutral SWFs. For instance, in the problem studied by Arrow (1963), neutrality is not invoked and it does not follow from the axioms that he lays down. Arrow showed that there exists a dictator but, when this dictator is indifferent between two states, it is possible that non-welfare characteristics of the states will determine the social ordering. On the other hand, Arrow took welfares to be ordinal and non-comparable so that a considerable amount of information about welfares was deemed irrelevant, given that such information might be available. In a series of recent papers, the implications of allowing information concerned with the cardinal and comparable nature of welfares to influence the social ordering have been considered. Most of these studies have dealt with the characterization of either utilitarianism (d'Aspremont and Gevers (1977), Deschamps and Gevers (1978), Maskin (1978)) or the lexicographic extension to the Rawlsian maximin criterion (leximin) (Hammond (1976a), Sen (1976) and (1977), Strasnick (1976), d'Aspremont and Gevers (1977), Deschamps and Gevers (1978), Gevers (1979)). As these rules are neutral, conditions must be invoked which ensure the neutrality of the derived social ordering. This paper considers the derivation of non-imposed and non-neutral social orderings when individual welfares satisfy various measurability/comparability assumptions. The axioms used by Arrow are modified so as to admit the influence of different types of information. Section 2 deals with the formulation of the problem and there is a discussion of the various assumptions that can be made about the measurability/comparability of welfares. Section 3 considers the influence that non-welfare characteristics can have upon the social ordering. Further, a procedure is developed which allows permissible rules to be

328 citations


Journal ArticleDOI
W. M. Gorman
TL;DR: In this paper, the author presents one possible approach, based on some rather obvious economic ideas, to the analysis of quality differentials in agricultural marketing, with particular reference to the egg market.
Abstract: In his recent paper Dr Bressler (1956) stated the opinion that the most pressing problem in agricultural marketing research is the analysis of quality differentials. He felt that these were particularly important in the egg market, and that an analysis of these differentials would be useful from the point of view of the Iowan farmers. In this paper I present one possible approach to this problem, based on some rather obvious economic ideas.

313 citations


Journal ArticleDOI
TL;DR: In this article, a model of auto ownership and work-trip mode choices is developed and estimated with explicit account taken of the interaction between the choices, and various explanatory variables are included so that a variety of policies and scenarios can be examined with the models.
Abstract: For analysis of many transportation-related policies, it is useful to know the change which a particular policy would induce in the number of autos owned by households and the number of workers who take transit and auto to work. Models of households' choices of how many autos to own (called the auto ownership choice) and workers' choices of mode (called the work-trip mode choice) are intended to provide information about the effects of various policies. This information can be used, along with other information and ideas, in deciding which policies should be implemented. Previous studies of auto ownership and work-trip mode choices have generally been deficient in important ways. First, most research has not confronted the simultaneity of the choices. Except for Lerman and Ben-Akiva (1975), past researchers have modelled either the auto ownership choice or the work-trip mode choice, not both. When modelling one of the choices, the simultaneity with the other choice has not been satisfactorily incorporated. Of the auto ownership models, Wharton (1977) included an explanatory variable defined as the number of autos the household uses for work trips, but did not adjust its estimation techniques to account for the endogeneity of this variable. None of the other auto ownership models included any variables relating to work-trip mode. In the work-trip mode-choice studies, more emphasis has been placed on the simultaneity of mode choice and auto ownership, but the results have been similarly unsatisfactory. Warner (1962), Lisco (1967), and Quarmby (1967) included auto ownership variables in their models with the (implicit) assumption that the number of autos is exogenous to the mode choice. On the other hand, Lave (1969) recognized that auto ownership is endogenous and did not include an auto ownership variable on the grounds that his model formulation is a reduced-form equation. Train (1976b) used both approaches, estimating mode choice models with and without an auto ownership variable, and noted the problems inherent in each specification. Aside from the simultaneity problem, previous models have limited usefulness because they include only a few explanatory variables. This fact limits the number of policies and scenarios which can be analysed with the models. For example, in most mode choice models the time spent out of the vehicle (walking, waiting for a bus, and so on) is included as an explanatory variable. Since out-of-vehicle time is not decomposed into time spent waiting for transit and time spent walking to and from transit, the effect of policies trading off these two components cannot be analysed. (An example of such a policy is to place more buses on fewer bus lines, thus decreasing wait times and increasing walk times.) Analogously, most auto ownership models ignore the effect of family structure (household size and number of children, for example) on auto ownership decisions. The present study confronts these problems in the previous research. Models of mode choice and auto ownership are developed and estimated with explicit account taken of the interaction between the choices. Numerous explanatory variables are included so that a variety of policies and scenarios can be examined with the models. Throughout the study,

259 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the generalized Hartwick rule is sufficient to give a constant utility maximin path, or a little more precisely, a " regular " maximin path as defined in Burmeister and Hammond (1977), provided that there is free disposal and an absence of " stock reversal ", as explained in Section 4.
Abstract: In a series of recent papers, Hartwick (1977a, b, c, 1978b, c) has shown in a number of special models that keeping investment equal to the rents (really profits from the flow of depletion) from exhaustible resources under competitive pricing yields a path of constant consumption. We shall call this the " Hartwick Rule ". Our purpose is to examine this striking rule in a general context. We shall allow many types of consumption goods and endogenous labour supplies. We shall also allow heterogeneous capital goods, and treat exhaustible or renewable resources as special capital goods: exhaustible resources can be depleted but not produced, renewable ones can also be produced. One restriction we impose is that there is no population growth or technical progress. Hartwick (1977b) does allow these, but as they require a fortuitous coincidence of different exogenous rates if the rule is to remain valid, we do not think it worthwhile to attempt that generalization. In our general framework the Hartwick rule becomes " keep the total value of net investment under competitive pricing equal to zero ". This is then shown to be sufficient to give a constant utility path. The desirability of such a simple unified treatment should be evident. It even proves possible to generalize the rule to " keep the present discounted value of total net investment under competitive pricing constant over time "; indeed, the generalized Hartwick rule is necessary and sufficient for constant utility. More importantly, while these rules give intergenerational equality, it remains to be seen whether they yield the best paths of this kind, i.e. Rawlsian paths. We therefore show that the generalized Hartwick rule is sufficient to give a constant utility maximin path, or a little more precisely, a " regular " maximin path as defined in Burmeister and Hammond (1977), provided that there is free disposal and an absence of " stock reversal ", as explained in Section 4.

255 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that if firms are small relative to the market, then the market outcome is approximately competitive, and if firms have strictly U-shaped average cost curves, then individual firm behaviour converges to competitive behaviour.
Abstract: Despite the fact that the assumptions underlying perfect competition never actually hold, the use of the competitive model, as an idealization, is justified if the predictions of the model approximate the outcomes of situations it is used to represent. In partial equilibrium analysis, this justification is embodied in the " Folk Theorem " which states that if firms are small relative to the market, then the market outcome is approximately competitive. This paper provides a precise statement and proof of the " Folk Theorem " for competitive markets with a single homogeneous good, and free entry and exit. It is shown that if firms are small relative to the market then there is a Cournot equilibrium with free entry; furthermore, any Cournot equilibrium with free entry is approximately competitive. More specifically, if we consider an appropriate sequence of markets in which firms become arbitrarily small relative to the market, then there is a Cournot equilibrium with free entry for all markets in the tail of the sequence, and aggregate equilibrium output converges to perfectly competitive output. If firms have strictly U-shaped average cost curves, then individual firm behaviour converges to competitive behaviour. The treatment of free entry distinguishes this paper from other papers dealing with the " Folk Theorem ", where either the number of firms is exogenous, ruling out free entry, or free entry is treated as being equivalent to a zero profit condition, ignoring the integer problem that arises when the number of firms is finite but unspecified. Firms may become small relative to the market in two ways: through changes in technology, absolute firm size (the smallest output at which minimum average cost is attained) may become small, or, through shifts in demand, the absolute size of the market (the market demand at competitive price) may become large. We allow both types of changes here, though shifts in demand, especially in the form of replication of the consumer sector, may be more familiar. In his conclusion, Ruffin (1971) presents a verbal argument for the " Folk Theorem " which is based on replication of demand and entry. Hart (1979), though not concerned with existence, shows that in a general equilibrium model with differentiated products and free entry, equilibria are approximately competitive (Pareto optimal) when consumers have been replicated a sufficient number of times. The paper is organized as follows: Section 1 contains the perfectly competitive model and its assumptions, Section 2 contains the assumptions and definitions for the imperfectly competitive model, Section 3 contains an example contrasting the usual treatment of the " Folk Theorem " and the present approach, Section 4 contains the proofs of the main results, and Section 5 contains remarks on the results and indicates how some of the assumptions that are used can be weakened.
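A small numerical sketch of the logic described above: in a symmetric Cournot market with linear demand, constant marginal cost and a fixed entry cost, the free-entry number of firms grows as the entry cost shrinks (firms become small relative to the market) and the equilibrium price approaches the competitive level. The demand and cost numbers below are arbitrary, and the sketch ignores the paper's more general setting.

```python
import numpy as np

def cournot_free_entry(a, b, c, F):
    """Symmetric Cournot market with inverse demand P = a - b*Q, constant marginal
    cost c and entry cost F: return the free-entry number of firms and the price.
    Per-firm profit with n firms is (a - c)**2 / (b * (n + 1)**2) - F."""
    n = max(int(np.floor((a - c) / np.sqrt(b * F) - 1)), 0)
    price = (a + n * c) / (n + 1)
    return n, price

a, b, c = 10.0, 1.0, 2.0              # competitive price equals marginal cost, 2.0
for F in [4.0, 1.0, 0.25, 0.01]:      # smaller entry cost: firms small relative to the market
    n, p = cournot_free_entry(a, b, c, F)
    print(F, n, round(p, 3))          # price approaches the competitive level as n grows
```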

206 citations


Journal ArticleDOI
TL;DR: In this paper, incentive schemes for pollution control are classified in terms of the number of rounds of communication between the planner and the rest of the organization, as a prerequisite for assessing the value of establishing additional information channels.
Abstract: The design of incentive schemes for implementing optimal plans in organizations where information differs across agents has been much studied in recent years (see, e.g. Groves (1973), Weitzman (1974, 1978), Maskin (1977), Kwerel (1977) and the contributions to the Review of Economic Studies Symposium (1979)). One manner of classifying these studies is in terms of the number of rounds of communication between the planner and the rest of the organization. Such a classification is useful precisely because it is a pre-requisite for assessing the value of establishing additional information channels. In this note we shall provide such a classification in the context of a simple model of pollution control. By so doing we hope to place at least some of the contributions in a more general perspective. We consider an economic environment consisting of n firms $(i, j = 1, \ldots, n)$. Firm i faces a cost function $C(x_i, \theta_i)$, where $x_i$ is the firm's pollution emission level ($x_i \in R^1$) and $\theta_i$ is a parameter (possibly a vector) known to the firm but not to the regulator. Let $\Theta_i$ be the set of possible values of $\theta_i$, and take $\theta = (\theta_1, \ldots, \theta_i, \ldots, \theta_n)$ and $\theta_{-i} = (\theta_1, \ldots, \theta_{i-1}, \theta_{i+1}, \ldots, \theta_n)$,

197 citations


Journal ArticleDOI
TL;DR: In this article, formulae for optimal quantity-dependent pricing schemes for several products are developed, covering the single product model, Pareto optimality, and multi-product pricing.
Abstract: Focuses on a study which developed formulae for optimal quantity-dependent pricing schemes for several products. Description of the single product model; details on Pareto optimality; information on multi-product pricing. (From Ebsco)

Journal ArticleDOI
TL;DR: In this paper, the authors present a hypothesis about macroeconomic relationships in CPEs drawn originally from unsystematic observation, which they are now able to test systematically against aggregate time-series data.
Abstract: The centrally planned economies (CPEs) are said to suffer from sustained, significant repressed inflation (Grossman (1966); Bush (1973)). With stable official prices, visitors observe queues, shortages, and black market activity, and the East European press reports these phenomena in circumstantial detail. Scholars cite such anecdotal evidence and generalize from it (Katsenelinboigen (1977)). The literature stresses "overfull employment" or "taut" planning (Holzman (1956); Hunter (1961)), "planners' tension ", "pressure" (Levine (1966)) or "suction" (Kornai (1971)); use of the consumer sector as a buffer to absorb unforeseen shocks; overfulfilment of the wage fund plan and underfulfilment of the real wage plan; "excessive" household savings; the rising subsidies required to maintain fixed prices (Garvy (1975)); quality deterioration, and "hidden" price increases. This is a fundamental proposition in the conventional wisdom about CPEs. We view it, however, as a hypothesis about macroeconomic relationships in CPEs drawn originally from unsystematic observation, which we are now able to test systematically against aggregate time-series data. This is our objective here. We do not doubt that there is excess real demand within the state production sector in CPEs, in part imposed consciously by the planners to elicit more output. Nor do we question that this could generate excess demand in the markets linking the state production sector to the household sector: the markets for consumer goods and services and for labour (Kornai (1978) so argues, although he focuses on the state production sector). But for these markets the planners have always stressed the importance of macroeconomic equilibrium because the effects of excess demand here are so clearly dysfunctional (disincentives to labour supply, black markets, weakening of "labour discipline", etc.). The "balance of money incomes and expenditures of the population" (BMIEP) is a key element in the planning process, and the planners dispose of powerful policy instruments to achieve its targets (Portes (1977); Rudcenko (1978)). Again, we do not question that there are numerous chronic and substantial microlevel disequilibria: excess demands for certain goods (housing, meat, automobiles, and many varieties-especially those of higher quality-of other goods at a more disaggregated level), but also well-publicized excess supplies of others (e.g. low-quality clothing often accumulates in "immobile ", "unsaleable" stocks). That relative prices are distorted is hardly surprising or controversial, however, when all consumer prices, at the finest level of


Journal ArticleDOI
TL;DR: In this article, the authors argue that the econometrician's search for an acceptable representation of the process generating the data being analysed is made easier by the use of both economic theory and the methods of time series analysis.
Abstract: structure of economic relationships has long been recognised (e.g. Nerlove (1972)), and has caused some researchers recently to rely almost exclusively on the methods of time series analysis for model building with economic time series data. Furthermore, a number of studies of the forecasting performance of econometric models vis-à-vis that of time series models (e.g. Naylor et al. (1972) and further references in Prothero and Wallis (1977)) have been interpreted as demonstrating the superiority of time series model building methodology over that of econometrics. To the extent that econometric models have been based on static economic theory, with dynamics possibly introduced via serially correlated error processes, or have been in the mould of simple models involving first order dynamics such as the partial adjustment and adaptive expectations models, the implied criticism of econometric modelling is probably valid. However, econometricians need not restrict the range of models and techniques in this way, for they are fortunate in being able to combine structural information from economic theory (especially for long-run equilibrium or steady-state behaviour) with the techniques of time series analysis and those of econometrics. We believe that the econometrician's search for an acceptable representation of the process generating the data being analysed is made easier by the use of both economic theory and the methods of time series analysis, and that the latter are complementary to econometric methods rather than substitutes for them. Rather than abandoning an econometric approach to modelling altogether and using " black-box " time series methods, we favour an approach which uses reasonable statistical procedures to test various hypotheses (which are too often arbitrarily selected and assumed to be valid), contained within a general unrestricted model, and then incorporates this evidence in a model whose structure is suggested by general economic considerations, to obtain an


Journal ArticleDOI
Avishay Braverman
TL;DR: In this paper, a general model for studying the connection between imperfect information and imperfect competition is presented, comparing the methodology involved in generating monopolistic competition due to consumers' imperfect information with the methodology that is involved in creating monopolistically competitive equilibria due to product differentiation.
Abstract: A general model is presented for studying the connection between imperfect information and imperfect competition, comparing the methodology involved in generating monopolistic competition due to consumers' imperfect information with the methodology involved in generating monopolistic competition due to product differentiation. The model specifically shows how different monopolistically competitive equilibria may arise from consumers' imperfect information regarding different prices of a homogeneous commodity. Under such a structure, there is no longer one market place; each firm may constitute a local market, and different consumers with different search costs may equilibrate the market at different prices. The nature of the differences in consumers' search costs determines what type of an equilibrium arises: a perfectly competitive, a monopolistically competitive, or a two-price equilibrium. There is a need for a dynamic model that deals with the problems of maintained ignorance and the possibility of a firm's acquiring a price reputation over time.

Journal ArticleDOI
TL;DR: In this article, the authors define a class of models with several regimes which is flexible enough to cover such situations, and illustrate their argument by reference to an economy which is "controlled " by a policy maker shifting between instruments at some, possibly unknown, points of time.
Abstract: In recent years, increasing attention has been devoted to models with a finite (usually small) number of regimes. Various strategies have been discussed in the literature to handle situations where each regime is characterized by a different value of a common parameter vector. See e.g. Barten and Bronsard (1970), Goldfeld and Quandt (1973), Poirier (1976),... . It appears however that no satisfactory treatment has yet been given to cases where the partitioning between " endogenous " and " exogenous " variables changes over time. Our objective is therefore to define a class of models with several regimes which is flexible enough to cover such situations. For convenience, we shall illustrate our argument by reference to an economy which is "controlled " by a policy maker shifting between instruments at some, possibly unknown, points of time. For tractability we shall mainly restrict our attention to a class of dynamic linear models although the concepts we introduce apply in a much broader framework. The possibility that the switching times could be endogenous to the model, such as in disequilibrium models will not be investigated here: work in progress indicates however that our approach can be extended in such directions. The paper is organized as follows: In Section 2 we shall discuss at length the issues to be faced by means of a simple example, taken from Goldfeld and Quandt (1973). In Section 3 we shall introduce the concepts which are needed for our analysis; linear dynamic models, LIML estimation and exogeneity. In Section 4, we shall discuss models with several regimes and concentrate in particular on imposing appropriate restrictions on the parameters characterizing different regimes. It will be shown that it is possible to preserve some of the operational features of LIML procedures.

Journal ArticleDOI
TL;DR: In this article, a series of models of resource markets whose demand and supply functions incorporate the idea that an exhaustible resource is an asset whose rate of price appreciation is a factor determining holding decisions is presented.
Abstract: In recent years there have been many analyses of the rate of resource depletion, both with a view to defining an optimal depletion rate (as in Dasgupta and Heal (1974), Heal (1975)) and also with a view to analysing the depletion rate that one might expect to result from market forces (as in Dasgupta (1973), Solow (1974), Stiglitz (1974)). It is easily established (see Heal (1975), Solow (1974)) that a necessary condition for a finite stock of an exhaustible resource to be allocated efficiently over time is that the price, net of extraction costs, should rise at a rate equal to the rate of return on other assets. And, not surprisingly, competitive markets will under certain circumstances realize this condition. In particular, if owners of the resource regard it as a capital asset constituting an element of their portfolio, then they will hold it just as long as the return that it gives them (the rate of increase of the net price) is no less than the returns available elsewhere. Equilibrium in the asset market will then imply the realization of the necessary condition mentioned earlier. This simple but convincing theorizing clearly implies that if resource markets are functioning efficiently, there will be a strong association between the rates of change of resource prices and the rates of return on other assets. In particular, as certain commodities (for example, copper, tin, lead and zinc) are exhaustible resources, the theory would predict that in an efficient allocation the rates of change of their prices would be related to rates of return on other assets. Our aim in this paper is to construct and test a series of models of resource markets whose demand and supply functions incorporate the idea that an exhaustible resource is an asset whose rate of price appreciation is a factor determining holding decisions, and which explicitly recognize the possibility of arbitrage between resource markets and markets for other capital assets. The conclusions we reach are very tentative, but suggest that the matter is considerably more complex than simple equilibrium theory would suggest. In particular, the returns to other assets do appear to be important determinants of resource price movements, but it seems to be changes in these returns, rather than their level, that have the greatest influence. There are a variety of possible explanations of this, and we try to discriminate between these in the latter part of the paper.
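The arbitrage condition underlying the models described above can be illustrated with a short calculation: if the net price (price minus extraction cost) must appreciate at the rate of return r available on other assets, the observed gross price grows more slowly than r whenever extraction costs are positive. The numbers below are purely illustrative and are not taken from the paper.

```python
import numpy as np

# Arbitrage/efficiency condition: the net price (price minus extraction cost) of the
# resource must rise at the rate of return r available on other assets.
r, T = 0.05, 10
p0, cost = 12.0, 2.0                               # hypothetical initial price and extraction cost
net = (p0 - cost) * (1 + r) ** np.arange(T + 1)    # net price path implied by the condition
gross = net + cost                                 # observed market price

growth = np.diff(gross) / gross[:-1]
print(np.round(gross, 2))
print(np.round(growth, 4))   # gross price grows more slowly than r while extraction costs are positive
```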

Journal ArticleDOI
TL;DR: In this article, it is shown that the usual unbiased estimator of $\sigma^2$ is $s^2 = (y - Xb)'(y - Xb)/(T - k)$.
Abstract: where $b = (X'X)^{-1}X'y$ is the least squares estimator of $\beta$. It is easily shown that $z$ is $N(\delta, \sigma^2 V)$ where $V = R(X'X)^{-1}R'$. The usual unbiased estimator of $\sigma^2$ is $s^2 = (y - Xb)'(y - Xb)/(T - k)$. In situations in which we wish only to decide whether H is true or not we can use a direct test of H such as an F test. It is perhaps more common that when H is rejected we want to know which components of $\delta$ are different from zero and of the non-zero components which are positive and which negative. In this situation we have a multiple decision problem and a natural solution is to use an induced test. As an example suppose in the case q = 2 that we wish to test the hypothesis H: $\delta_1 = \delta_2 = 0$. Since H is true if and only if the separate hypotheses $H_1$: $\delta_1 = 0$ and $H_2$: $\delta_2 = 0$ are both true, this suggests a sequence of separate tests which will induce a test of H. Testing the two hypotheses $H_1$ and $H_2$ where we are interested in whether $\delta_1$ or $\delta_2$ or both are different from zero induces a multiple decision problem in which the four possible decisions are
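A minimal sketch of the quantities defined above, using simulated data and a hypothetical restriction matrix: it computes $b$, $s^2$, $z = Rb$, $V = R(X'X)^{-1}R'$, the direct F test of H, and the separate t statistics whose combination induces a test of H. It is an illustration of the setup only, not of the paper's multiple comparison procedures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
T, k = 100, 4
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
beta = np.array([1.0, 0.0, 0.0, 0.5])        # the two middle coefficients are truly zero
y = X @ beta + rng.normal(size=T)

b = np.linalg.solve(X.T @ X, X.T @ y)        # least squares estimator
s2 = np.sum((y - X @ b) ** 2) / (T - k)      # usual unbiased estimator of sigma^2

# Hypothesis H: delta = R beta = 0 for the two middle coefficients (q = 2)
R = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
z = R @ b                                    # z ~ N(delta, sigma^2 V)
V = R @ np.linalg.inv(X.T @ X) @ R.T
q = R.shape[0]

F = (z @ np.linalg.solve(V, z)) / (q * s2)   # direct F test of H
t = z / np.sqrt(s2 * np.diag(V))             # separate t tests that induce a test of H
print(F, stats.f.sf(F, q, T - k))
print(t, 2 * stats.t.sf(np.abs(t), T - k))
```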

Journal ArticleDOI
TL;DR: In this article, it was shown that if the test score is a sufficient statistic of productivity, a Nash equilibrium always exists; workers who pass and who fail the test receive wages equal to their respective marginal product and the fee charged for taking the test is equal to the actual cost of administering the test.
Abstract: In most exchanges of labour services, there is substantial ignorance about the ability of individual workers. As long as a firm is not paying wholly on the basis of piece work, it has an incentive to learn the ability of its applicants, and the more able workers have an incentive to sort themselves from the less able. The most common means of sorting workers is through an examination, i.e. a questionnaire or an apprenticeship programme during which performance is monitored. The cost of an examination increases with its accuracy and precision and one would seldom expect a perfectly accurate and precise test to be used by firms. One way of increasing the effectiveness of a test is to impose penalties upon workers who receive low examination scores and rewards for those who receive high scores. Those who think it is likely that they will receive a high score are most likely to apply for the examination. Thus, the firm has converted a single examination into a two-part test; those who " pass " receive high scores not only on the firm examination, but also on their own self-appraisal. We show that when tests (apprenticeships) are used as self-selection devices, so that in effect workers are charged an application fee, a Nash equilibrium with free entry may not exist. When an equilibrium does exist, it is characterized by a wage distribution and workers who fail the test receive a net wage below their net marginal product, while those who pass the test receive a net wage above their net marginal product. The failures subsidize the successes. The application fee paid by a job applicant is an increasing function of his perceived productivity and is greater than the real testing costs incurred by the firm. These results do not depend upon any monopoly power of workers or firms nor upon the test being costly. However, they do depend upon the test being an imprecise measure of productivity. If the test score is a sufficient statistic of productivity, a Nash equilibrium always exists; workers who pass and who fail the test receive wages equal to their respective marginal product and the fee charged for taking the test is equal to the actual cost of administering the test. By introducing informational considerations and viewing low-wage training programmes as tests, we provide a new explanation for the positive correlation between wages and seniority. This increase in wages has been attributed to on-the-job training (see

Journal ArticleDOI
TL;DR: In this paper, a simple three equation model of wage-price inflation is presented, where the endogenous variables to be explained have been taken to be the index of retail prices, the Index of weekly wage rates, and the official average earnings index.
Abstract: This paper reports the results of estimating a simple three equation model of wage-price inflation, where the endogenous variables to be explained have been taken to be the index of retail prices, the index of weekly wage rates, and the official average earnings index. These three variables were chosen to be explained together, firstly to avoid the choice as to whether the wage rates index or the average earnings variable should represent the labour cost variable, and secondly to allow the alternatives of using hourly wage rates, weekly wage rates, and the appropriately adjusted average earnings for the various exogenous variables (such as overtime working) to be resolved empirically. The price equation has been fully discussed already in a previous paper (1976) and will only briefly be discussed here. The wage and earnings equations were estimated by OLS, 2SLS and FIML methods using an eclectic approach to previous explanations of these variables. The form of the model estimated here is a development of that used by Espasa (1973), but also explores some hypotheses suggested by Johnston and Timbrell (1973) and Parkin et al. (1976). In his discussion of the wage equation, Espasa noted that the rate of change of the wage index could be significantly related to the real wage rate (with interpretation as in Sargan (1964)), but also to the ratio of average earnings to the wage rate index, with a possible interpretation that if earnings are high compared with the wage rate then activity is high, and also workers try to consolidate their temporary prosperity by incorporating the higher level of earnings into the basic wage rate. Alternatively the interaction between earnings and wage rates can perhaps be regarded as an inadequate and aggregated representation of the battle of the differentials. Previous work has attempted to build disaggregated models of the labour market in which each occupational group of workers responds to differentials between their own wage level and those of other groups of workers, for example the work by Vernon reported in Sargan (1971). Each macro-variable can be regarded as a differently weighted aggregate of the underlying micro wage-variables, and the dynamic models which are estimated for the macro-variables represent an empirical attempt to capture the complex dynamics of the micro-model. Following the previous work by Johnston and Timbrell (1973) it was decided to explore the use of the income tax retention rate as a variable in the wage equation. It was also decided, following Parkin et al. (1972, 1976), to explore the use of expected rates of price inflation. The equations were initially estimated using single equation estimators, but were re-estimated by simultaneous equation system estimators. All the equations in this paper are in log linear form with the symbols representing the logarithms of the economic variables. The most general form of wage equation used can be summarized as


Journal ArticleDOI
TL;DR: In this paper, the authors developed a test based on Cox's ((1961) and (1962)) procedure for testing separate families of hypotheses; the work is thus an extension of earlier econometric applications of Cox's test to single equation linear regressions and to many equation non-linear regression in Pesaran and Deaton (1978).
Abstract: One of the problems most frequently encountered by the applied econometrician is the choice between logarithmic and linear regression models. Economic theory is rarely of great help although there are cases where one or other specification is clearly inappropriate; for example, in demand analysis constant elasticity specifications are inconsistent with the budget constraint. Nor are standard statistical tests very useful; $R^2$ statistics are not commensurable between models with dependent variables in levels and in logarithms and the comparison of likelihoods has no firm basis in statistical inference. In this paper, we develop a practical test based upon Cox's ((1961) and (1962)) procedure for testing separate families of hypotheses; the work is thus an extension of earlier econometric applications of Cox's test to single equation linear regressions in Pesaran (1974) and to many-equation non-linear regressions in Pesaran and Deaton (1978). The test we develop here is applicable to two competing single-equation models, one of which explains the level of a variable up to an additive error, the other of which explains its logarithm, again up to an additive error. Hence, in terms of the levels of the variables, we are testing for multiplicative versus additive errors, and it is this which differentiates this paper from the earlier work in which an additive error was always assumed. We shall also allow, as in the earlier papers, the deterministic parts of the regressions to be linear or non-linear and to have the same or different independent variables; it is thus possible to test for functional form and specification in a very general way. Section 1 of the paper defines the problem and derives the test statistics. The formulae allow the calculation of two statistics, $N_0$ and $N_1$ say, the first of which is asymptotically distributed as N(0, 1) if the logarithmic specification is correct, the second, for all practical purposes, as N(0, 1) if the linear model is true. Section 2 discusses problems associated with the calculation of the statistics and shows how they can be surmounted. Section 3 presents the results of Monte-Carlo experiments designed to evaluate the potential of the test in practice. We investigate, in particular, the shape of the actual distributions of $N_0$ and $N_1$ in samples of sizes 20, 40 and 80 as well as comparing the performance of the Cox procedure with that of the likelihood ratio test, as proposed by Sargan (1964). Finally, we offer some evidence of the ability of the procedure to detect total misspecification when neither of the hypotheses is true. Section 4 contains a summary and conclusions. The general issues of statistical inference raised by the use of the Cox procedure in econometrics as well as alternative testing procedures have already been widely discussed, see Pesaran and Deaton (1978), Quandt (1974) and Amemiya (1976). In this case, however, there exists one very obvious alternative procedure. This is to specify the model,
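The Cox statistics $N_0$ and $N_1$ require the formulae derived in the paper, but the simpler likelihood-ratio-type comparison it is benchmarked against (in the spirit of Sargan (1964)) can be sketched: the log model's maximized Gaussian likelihood is put in terms of the levels of the variable via the Jacobian term before being compared with the linear model's. The data-generating process below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.uniform(1.0, 5.0, size=n)
y = np.exp(0.5 + 0.8 * np.log(x) + 0.2 * rng.normal(size=n))   # data from a logarithmic model

def max_loglik(ssr, n):
    """Maximized Gaussian log-likelihood of a linear regression with residual sum of squares ssr."""
    return -0.5 * n * (1.0 + np.log(2.0 * np.pi) + np.log(ssr / n))

X_lin = np.column_stack([np.ones(n), x])
X_log = np.column_stack([np.ones(n), np.log(x)])
ssr_lin = np.sum((y - X_lin @ np.linalg.lstsq(X_lin, y, rcond=None)[0]) ** 2)
ssr_log = np.sum((np.log(y) - X_log @ np.linalg.lstsq(X_log, np.log(y), rcond=None)[0]) ** 2)

ll_linear = max_loglik(ssr_lin, n)
ll_log = max_loglik(ssr_log, n) - np.sum(np.log(y))   # Jacobian term puts both in terms of levels
print(ll_linear, ll_log)   # the logarithmic specification should typically score higher here
```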


Journal ArticleDOI
TL;DR: This paper relates the global cycling results of McKelvey (1976, 1979) and Cohen (1979) to Schofield's work on local cycling and shows that global cycling is ubiquitous for the same reason that majority rule equilibria rarely exist, namely, that the distribution of voters is rarely symmetric enough.
Abstract: Recent studies use two distinct approaches to study majority rule intransitivities. First, McKelvey (1976, 1979) and Cohen (1979) examine global cycling sets in multidimensional spaces. Second, Schofield (1977, 1978a, b) investigates local continuous cycling. Both approaches lead to the conclusion that cycling sets tend to be large. In this paper, these studies are related to each other and to the work of Matthews (1978, 1979) on undominated directions. Simple observations lead to a new and stronger result indicating the extreme pervasiveness of global cycling. The key observation is that global cycling is ubiquitous for the same reason that majority rule equilibria rarely exist, namely, that the distribution of voters is rarely symmetric enough.

Journal ArticleDOI
TL;DR: In this article, the authors consider a consumption-loan type economy where each generation lives for a finite number of periods but generations overlap each other and the economy continues without interruption, and they show that any Pareto efficient allocation can be achieved as a compensated equilibrium by a suitable choice of lump-sum (taxes and) transfers (either in the form of commodity transfers or in the form of paper asset transfers).
Abstract: Samuelson (1958) shows that in a certain type of infinite horizon economy, a competitive equilibrium may not be Pareto efficient. He also shows that creation of a monetary asset may restore efficiency of the equilibrium. In this paper, we will investigate this class of economies more fully, focusing upon the relationship between Pareto efficiency and lump-sum (taxes and) transfers. We consider a consumption-loan type economy where each generation lives for a finite number of periods but generations overlap each other and the economy continues without interruption. Following the original Samuelson paper (1958), we assume that all consumption goods are completely perishable and that there is no production. However, we assume that all consumers in this economy can borrow from each other. In this sense, all consumers are capable of issuing IOU's or inside money. Despite the existence of inside money, the economy may still be unable to achieve a Pareto efficient allocation as an Arrow-Debreu competitive equilibrium, i.e. a competitive equilibrium with perfect foresight. This anomaly arises because of a particular feature of the consumption-loan type formulation, namely a double infinity of time and agents (Shell (1971)). In particular, it requires an introduction of paper assets in order to achieve a Pareto efficient equilibrium allocation. With this paper asset (which is not an obligation for any consumer in the economy and therefore which may be called outside money) the economy may be able to borrow from the infinite future and to make everyone in the economy better off. The purpose of this paper is twofold. In the first half of the paper, we shall extend the well-known fundamental theorem of welfare economics. We shall show that any Pareto efficient allocation can be achieved as a compensated equilibrium by a suitable choice of lump-sum (taxes and) transfers (either in the form of commodity transfers or in the form of paper asset transfers). This result, however, is fundamentally different from the well-known result in the finite horizon case. If the horizon is finite, the lump-sum transfers, even in the form of paper assets, must cancel out in aggregate. In other words, the government budget must balance. However, in the consumption-loan type models,

Journal ArticleDOI
Knud Jørgen Munk
TL;DR: In this article, the model considered by Stiglitz and Dasgupta is reformulated using expenditure functions and the purpose of the analysis is to characterize the optimal tax structure when not all commodities can be taxed and to show that this can be done with a minimum of mathematics.
Abstract: In this paper, the model considered by Stiglitz and Dasgupta is reformulated using expenditure functions. The purpose of the analysis is to characterize the optimal tax structure when not all commodities can be taxed and to show that this can be done with a minimum of mathematics. The results will be somewhat at variance with the results obtained by Stiglitz and Dasgupta (1971), especially for the case of decreasing returns to scale. Section 2 contains the mathematical formulation and treatment of the problem. The model differs only in minor respects from that considered by Stiglitz and Dasgupta in the second half of their paper: by not explicitly allowing for public goods, by assuming one consumer (which, however, amounts to the same thing as assuming many equally treated identical consumers) and by treating homothetic production functions as a special case. Section 3 contains the characterization of the optimal tax structure under various assumptions, and in Section 4 the results are summarized.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of estimating the implicit utility function subject to a non-linear, kinked, and even discontinuous budget constraint and propose a general method of estimation when the implicit budget constraint is nonlinear.
Abstract: Individual choices are often characterized by economists as resulting from maximization of an implicit utility function subject to a budget constraint. Informal graphical descriptions of the outcome of this process usually presume that the budget constraint, determined by individual income and prices, is linear-that the price of a good is independent of the amount of it that is purchased, and that income is independent of the amount purchased. But governmental regulations in particular-as well as non-governmental practices-often produce non-linear, kinked, and even discontinuous budget constraints. The relationship between hours worked and income, for example, would be linear if the wage rate didn't depend on hours worked. But, because of the progressive federal income tax structure, individuals actually face a net marginal wage rate that declines with income. The budget constraint is non-linear. Negative income tax plans often prescribe one tax rate up to a so-called " breakeven " point, and another thereafter. There is a kink at the breakeven point.1 Social security regulations impose low tax rates on wage income up to a given level and a very high tax rate on each additional dollar of income. Most existing health insurance plans, as well as proposed national health insurance schemes, include some combination of a deductible, a coinsurance rate, and possibly a maximum health care expenditure level. The price of a dollar's worth of health care is one up to the amount of the deductible; it is the coinsurance rate between the amount of the deductible and the maximum expenditure, and is zero thereafter. Again, the implied budget constraint is non-linear; it has "kinks" in it. Some proposed housing subsidy schemes stipulate that low income families receive " housing" payments, but only after a minimum expenditure for housing. The implied budget constraint is discontinuous. This paper proposes a rather general method of estimation when the implicit budget constraint is non-linear. But it does so by addressing a particular problem-the analysis of data generated by treatments in the recent Housing Demand Experiment, that can be thought of as creating discontinuous individual budget constraints. It rests on the assumption that the relative value that individuals attach to purchased goods can be described by a functional relationship that assigns weights to goods, or to the dollar expenditures for these goods, a " utility " function. The key parameters of this function are "taste" parameters that are assumed to depend on individual characteristics of decision makers and to be random, given measured characteristics. That is, they depend on observed as well as unobserved attributes of individuals or of their environment. In addition, we assume that persons are not always able to match expenditures to hypothetical best, or maximizing, values. Although the approach is motivated by the idea of utility
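As a small illustration of the kind of kinked budget constraint described above, the sketch below computes out-of-pocket cost and the implied marginal price of care under a stylized insurance plan with a deductible, a coinsurance rate and a maximum expenditure level. The plan parameters are invented, and this is not the Housing Demand Experiment treatment or the paper's estimation method.

```python
import numpy as np

def out_of_pocket(spend, deductible=500.0, coinsurance=0.2, max_expenditure=5000.0):
    """Out-of-pocket cost under a stylized plan: the marginal price of a dollar of care
    is 1 below the deductible, the coinsurance rate between the deductible and the
    maximum expenditure level, and 0 beyond it."""
    below = np.minimum(spend, deductible)
    middle = np.clip(spend - deductible, 0.0, max_expenditure - deductible)
    return below + coinsurance * middle

spend = np.array([200.0, 500.0, 2000.0, 5000.0, 8000.0])
print(out_of_pocket(spend))
# Marginal price at each spending level (the kinks in the implied budget constraint):
eps = 1.0
print((out_of_pocket(spend + eps) - out_of_pocket(spend)) / eps)
```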


Journal ArticleDOI
TL;DR: In this paper, a price equation for the British economy is discussed based on a normal-costs assumption similar to that used by Godley-Nordhaus (1972), and the paper can be regarded as an attempt to test a specification similar to those of these authors by considering a wide range of alternative specifications.
Abstract: After a general discussion of alternative formulations of the Almon distributed lag (as in Fair and Jaffee (1971), Godfrey and Poskitt (1975), Robinson (1970), Trivedi and Pagan (1976)), particularly the problem of choosing the maximum lag, the paper uses this in discussing a price equation for the British economy. This is based on a normal-costs assumption similar to that used by Godley-Nordhaus (1972), and the paper can be regarded as an attempt to test a specification similar to that of these authors by considering a wide range of alternative specifications.
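A brief sketch of the Almon device discussed above: the lag coefficients are restricted to lie on a low-order polynomial, so the distributed lag can be estimated by OLS on a small number of transformed regressors. The data-generating process and the choices of maximum lag and polynomial degree below are arbitrary, and the sketch does not address the paper's question of choosing the maximum lag.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, p = 300, 8, 2                    # sample size, maximum lag, polynomial degree
x = rng.normal(size=n + m)
true_beta = np.array([0.1 * (j + 1) * (m - j) for j in range(m + 1)])   # hump-shaped lag weights
y = sum(true_beta[j] * x[m - j: n + m - j] for j in range(m + 1)) + rng.normal(size=n)

# Unrestricted lag matrix: column j holds x_{t-j}
X_lags = np.column_stack([x[m - j: n + m - j] for j in range(m + 1)])

# Almon restriction: beta_j = sum_d gamma_d * j**d, i.e. beta = A @ gamma,
# so the model can be estimated by OLS on the p + 1 transformed regressors Z = X_lags @ A.
A = np.vander(np.arange(m + 1), p + 1, increasing=True)
Z = X_lags @ A
gamma, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_almon = A @ gamma

print(np.round(true_beta, 2))
print(np.round(beta_almon, 2))   # close, because the true lag weights lie on a quadratic
```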