Author

Trevor Breusch

Other affiliations: University of Southampton
Bio: Trevor Breusch is an academic researcher from the Australian National University. He has contributed to research on topics including regression analysis and earnings, has an h-index of 23, and has co-authored 43 publications receiving 10,621 citations. Previous affiliations of Trevor Breusch include the University of Southampton.

Papers
Journal Article
TL;DR: The Lagrange multiplier (LM) statistic, which tests the effect on the first-order conditions for a maximum of the likelihood of imposing the null hypothesis, is expounded in its various forms and its construction is illustrated for a number of econometric specifications.
Abstract: Many econometric models are susceptible to analysis only by asymptotic techniques, and there are three principles, based on asymptotic theory, for the construction of tests of parametric hypotheses. These are: (i) the Wald (W) test, which relies on the asymptotic normality of parameter estimators; (ii) the maximum likelihood ratio (LR) procedure; and (iii) the Lagrange multiplier (LM) method, which tests the effect on the first-order conditions for a maximum of the likelihood of imposing the hypothesis. In the econometric literature, most attention seems to have been centred on the first two principles. Familiar "t-tests" usually rely on the W principle for their validity, while there have been a number of papers advocating and illustrating the use of the LR procedure. However, all three are equivalent in well-behaved problems in the sense that they give statistics with the same asymptotic distribution when the null hypothesis is true and have the same asymptotic power characteristics. Choice of any one principle must therefore be made by reference to other criteria such as small sample properties or computational convenience. In many situations the W test is attractive for this latter reason because it is constructed from the unrestricted estimates of the parameters and their estimated covariance matrix. The LM test is based on estimation with the hypothesis imposed as parametric restrictions, so it seems reasonable that a choice between W and LM be based on the relative ease of estimation under the null and alternative hypotheses. Whenever it is easier to estimate the restricted model, the LM test will generally be more useful. It then provides applied researchers with a simple technique for assessing the adequacy of their particular specification.

This paper has two aims. The first is to exposit the various forms of the LM statistic and to collect together some of the relevant research reported in the mathematical statistics literature. The second is to illustrate the construction of LM tests by considering a number of particular econometric specifications as examples. It will be found that in many instances the LM statistic can be computed by a regression using the residuals of the fitted model which, because of its simplicity, is itself estimated by OLS. The paper contains five sections. In Section 2, the LM statistic is outlined and some alternative versions of it are discussed. Section 3 gives the derivation of the statistic for …
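Many of the LM variants the paper surveys reduce to n·R² from an auxiliary regression involving the OLS residuals. The following is a minimal sketch of that recipe for an omitted-variables test; the variable names and simulated data are illustrative assumptions, not the paper's own example:

```python
# Sketch: LM test for omitted variables via an auxiliary regression.
# Under H0 the candidate variables Z do not belong in the model; the LM
# statistic is n * R^2 from regressing the restricted-model residuals on
# the full regressor set [X, Z], asymptotically chi-squared(q) under H0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # restricted regressors
Z = rng.normal(size=(n, 2))                            # q = 2 candidate variables
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)      # H0 is true here

# Step 1: OLS on the restricted model; keep the residuals
beta_r, *_ = np.linalg.lstsq(X, y, rcond=None)
u = y - X @ beta_r

# Step 2: auxiliary regression of the residuals on [X, Z]
W = np.column_stack([X, Z])
gamma, *_ = np.linalg.lstsq(W, u, rcond=None)
r2 = 1.0 - np.sum((u - W @ gamma) ** 2) / np.sum((u - u.mean()) ** 2)

lm = n * r2
print(f"LM = {lm:.3f}, p = {stats.chi2.sf(lm, df=Z.shape[1]):.3f}")
```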

5,826 citations

Journal Article
TL;DR: In this paper, a simple test for heteroscedastic disturbances in a linear regression model is developed using the framework of the Lagrange multiplier test, and the criterion is given as a readily computed function of the OLS residuals.
Abstract: A simple test for heteroscedastic disturbances in a linear regression model is developed using the framework of the Lagrange multiplier test. For a wide range of heteroscedastic and random coefficient specifications, the criterion is given as a readily computed function of the OLS residuals. Some finite sample evidence is presented to supplement the general asymptotic properties of Lagrange multiplier tests.
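The "readily computed function of the OLS residuals" admits a short implementation. Below is a minimal sketch of the test's auxiliary-regression form (the ESS/2 statistic) on simulated data; names and the data-generating process are illustrative, and statsmodels also ships a packaged het_breuschpagan if that dependency is acceptable:

```python
# Sketch of the Breusch-Pagan test: regress scaled squared OLS residuals
# on the variables suspected of driving the variance; under the null of
# homoskedasticity, ESS/2 is chi-squared with (k - 1) degrees of freedom.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
u = rng.normal(size=n) * np.exp(0.4 * x)        # error variance rises with x
y = X @ np.array([1.0, 2.0]) + u

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta
g = e**2 / (e @ e / n)                          # squared residuals / sigma-hat^2

delta, *_ = np.linalg.lstsq(X, g, rcond=None)   # auxiliary regression
ess = np.sum((X @ delta - g.mean()) ** 2)       # explained sum of squares

lm = ess / 2.0
print(f"LM = {lm:.3f}, p = {stats.chi2.sf(lm, df=X.shape[1] - 1):.4f}")
```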

3,629 citations

Journal Article
01 Dec 1978

1,221 citations

Journal Article
TL;DR: In this paper, the authors discuss the best way to formulate and estimate a dynamic econometric model when interest focuses mainly upon its long-run properties, using results derived for the more general context of transformed regression models, and show how point estimates and the standard errors of long-run multipliers and long-run structural coefficients can be obtained using standard estimation methods.
Abstract: This paper discusses the best way to formulate and estimate a dynamic econometric model when interest focuses mainly upon its long-run properties. Using results derived for the more general context of transformed regression models, it is shown how point estimates and the standard errors of long-run multipliers and long-run structural coefficients can be obtained using standard estimation methods. It is argued that such formulations are preferable to other specifications such as the error correction model. If the explanatory variables that enter the long-run solution are trend-stationary then it is found that no harm is done to the asymptotic properties of the long-run coefficients by omitting short-run dynamics entirely, though this is not recommended in practice. The results of this paper are related to the concept of co-integration and to the work of Engle and Granger. Finally, a new methodology for the construction of dynamic models is proposed.
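For concreteness, in a simple ADL(1,1) model y_t = a + b*y_{t-1} + c0*x_t + c1*x_{t-1} + u_t, the long-run multiplier is theta = (c0 + c1) / (1 - b), and a standard error follows by the delta method. The sketch below only illustrates that arithmetic with made-up numbers; it is not the transformed-regression estimator the paper itself develops:

```python
# Illustrative long-run multiplier arithmetic for an ADL(1,1) model.
# The coefficient values and covariance matrix below are stand-ins.
import numpy as np

b, c0, c1 = 0.6, 0.3, 0.1            # example short-run estimates
theta = (c0 + c1) / (1.0 - b)        # long-run multiplier

# Delta-method standard error from the coefficient covariance matrix V
V = np.diag([0.01, 0.004, 0.004])    # stand-in covariance of (b, c0, c1)
grad = np.array([
    (c0 + c1) / (1.0 - b) ** 2,      # d theta / d b
    1.0 / (1.0 - b),                 # d theta / d c0
    1.0 / (1.0 - b),                 # d theta / d c1
])
se_theta = np.sqrt(grad @ V @ grad)
print(f"theta = {theta:.3f}, se = {se_theta:.3f}")
```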

325 citations

Journal Article
TL;DR: This paper shows that the difference between the AM estimator and the HT estimator lies in the treatment of the time-varying explanatory variables that are uncorrelated with the effects.
Abstract: In an important recent paper, Hausman and Taylor (1981), hereafter HT, considered the instrumental-variable estimation of a regression model using panel data, when the individual effects may be correlated with a subset of the explanatory variables. They provided a simple consistent estimator and an efficient estimator. More recently, Amemiya and MaCurdy (1986), hereafter AM, have suggested an alternative estimator which is more efficient than the HT estimator, under certain conditions and given stronger assumptions than HT made. However, the relationship between the HT and AM papers is less clear than it might be, in part because of notational differences between the two papers. In this paper we clarify the relationship between the HT and AM estimators, and we show that the difference between these estimators lies in the treatment of the time-varying explanatory variables which are uncorrelated with the effects: HT use each such variable as two instruments (means and deviations from means), while AM use such variables as T + 1 instruments (as deviations from means and also separately for each of the T available time periods). This enables us to make clear the conditions under which the AM estimator is more efficient than the HT estimator. We also present each estimator in a form which allows it to be calculated using standard instrumental-variables (two-stage least squares) software. Following the AM path one step further, we then define a third (BMS) estimator which, under yet stronger assumptions, is more efficient than the AM estimator. Both HT and AM use as instruments the deviations from means of the time-varying variables which are correlated with the effects. A more efficient estimator may be obtained by using separately the (T - 1) linearly independent values of these deviations from individual means. Consistency requires that these be legitimate instruments, and whether this is so depends on why these time-varying variables are correlated with the effects. For example, if such correlation arises solely because of a time-invariant component which is removed in taking deviations from individual means, these instruments are legitimate.
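The contrast between the two instrument sets is easy to see in code. Here is a sketch for one time-varying regressor in a balanced panel; the array layout and names are my own illustration, not the authors' notation:

```python
# Sketch of the instrument sets contrasted in the paper, for one
# time-varying regressor x uncorrelated with the effects
# (balanced panel: N individuals, T periods, x[i, t]).
import numpy as np

N, T = 4, 3
rng = np.random.default_rng(2)
x = rng.normal(size=(N, T))

means = x.mean(axis=1, keepdims=True)       # individual means, x_bar_i
within = x - means                          # deviations from individual means

# Hausman-Taylor: two instruments per variable, the individual mean
# (repeated over t) and the within deviation.
Z_ht = np.column_stack([
    np.repeat(means, T, axis=1).ravel(),    # x_bar_i in every period
    within.ravel(),
])

# Amemiya-MaCurdy: T + 1 instruments, the within deviation plus each
# period's level x_is used separately (x_i1, ..., x_iT repeated over t).
Z_am = np.column_stack(
    [within.ravel()] +
    [np.repeat(x[:, [s]], T, axis=1).ravel() for s in range(T)]
)
print(Z_ht.shape, Z_am.shape)               # (N*T, 2) and (N*T, T+1)
```

Stacking such columns for every qualifying regressor, alongside the deviations of the correlated variables, reproduces the instrument counts described above (2 versus T + 1 per variable).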

263 citations


Cited by
Book
01 Jan 2001
TL;DR: This is the essential companion to Jeffrey Wooldridge's widely-used graduate text Econometric Analysis of Cross Section and Panel Data (MIT Press, 2001).
Abstract: The second edition of this acclaimed graduate text provides a unified treatment of two methods used in contemporary econometric research: cross section and panel data methods. By focusing on assumptions that can be given behavioral content, the book maintains an appropriate level of rigor while emphasizing intuitive thinking. The analysis covers both linear and nonlinear models, including models with dynamics and/or individual heterogeneity. In addition to general estimation frameworks (particularly methods of moments and maximum likelihood), specific linear and nonlinear methods are covered in detail, including probit and logit models and their multivariate extensions, Tobit models, models for count data, censored and missing data schemes, causal (or treatment) effects, and duration analysis. Econometric Analysis of Cross Section and Panel Data was the first graduate econometrics text to focus on microeconomic data structures, allowing assumptions to be separated into population and sampling assumptions. This second edition has been substantially updated and revised. Improvements include a broader class of models for missing data problems; more detailed treatment of cluster problems, an important topic for empirical researchers; expanded discussion of "generalized instrumental variables" (GIV) estimation; new coverage (based on the author's own recent research) of inverse probability weighting; a more complete framework for estimating treatment effects with panel data; and a firmly established link between econometric approaches to nonlinear panel data and the "generalized estimating equation" literature popular in statistics and other fields. New attention is given to explaining when particular econometric methods can be applied; the goal is not only to tell readers what does work, but why certain "obvious" procedures do not. The numerous included exercises, both theoretical and computer-based, allow the reader to extend methods covered in the text and discover new insights.

28,298 citations

Journal Article
TL;DR: In this article, a new class of stochastic processes called autoregressive conditional heteroscedastic (ARCH) processes are introduced, which are mean zero, serially uncorrelated processes with nonconstant variances conditional on the past, but constant unconditional variances.
Abstract: Traditional econometric models assume a constant one-period forecast variance. To generalize this implausible assumption, a new class of stochastic processes called autoregressive conditional heteroscedastic (ARCH) processes are introduced in this paper. These are mean zero, serially uncorrelated processes with nonconstant variances conditional on the past, but constant unconditional variances. For such processes, the recent past gives information about the one-period forecast variance. A regression model is then introduced with disturbances following an ARCH process. Maximum likelihood estimators are described and a simple scoring iteration formulated. Ordinary least squares maintains its optimality properties in this set-up, but maximum likelihood is more efficient. The relative efficiency is calculated and can be infinite. To test whether the disturbances follow an ARCH process, the Lagrange multiplier procedure is employed. The test is based simply on the autocorrelation of the squared OLS residuals. This model is used to estimate the means and variances of inflation in the U.K. The ARCH effect is found to be significant and the estimated variances increase substantially during the chaotic seventies.
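The LM test mentioned at the end has a particularly compact form: regress the squared OLS residuals on q of their own lags and refer the resulting n·R² to a chi-squared(q) distribution. A minimal sketch on simulated residuals follows; the names and lag order are illustrative, and statsmodels offers a packaged version (het_arch):

```python
# Sketch of the ARCH LM test: regress squared OLS residuals on their own
# lags; n * R^2 is chi-squared(q) under the null of no ARCH effects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
T, q = 500, 2
e = rng.normal(size=T)                 # stand-in for OLS residuals
e2 = e**2

# Lagged regressors of the squared residuals (constant plus q lags)
Y = e2[q:]
X = np.column_stack([np.ones(T - q)] + [e2[q - j:-j] for j in range(1, q + 1)])

gamma, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ gamma
r2 = 1.0 - resid @ resid / np.sum((Y - Y.mean()) ** 2)

lm = (T - q) * r2
print(f"LM = {lm:.3f}, p = {stats.chi2.sf(lm, df=q):.3f}")
```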

20,728 citations

Journal Article
TL;DR: In this paper, a natural generalization of the ARCH (Autoregressive Conditional Heteroskedastic) process introduced by Engle in 1982 is proposed, which allows past conditional variances to enter the current conditional variance equation.
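For reference, the generalization is the GARCH process, in which the conditional variance depends on lagged conditional variances as well as lagged squared innovations; a GARCH(1,1) recursion is h_t = omega + alpha*u_{t-1}^2 + beta*h_{t-1}. A small simulation sketch with illustrative parameter values:

```python
# Sketch: simulate a GARCH(1,1) process. Parameter values are illustrative;
# omega > 0 and alpha + beta < 1 keep the unconditional variance finite.
import numpy as np

rng = np.random.default_rng(4)
omega, alpha, beta = 0.1, 0.1, 0.8
T = 5000
h = np.empty(T)                        # conditional variances
u = np.empty(T)                        # innovations
h[0] = omega / (1.0 - alpha - beta)    # start at the unconditional variance
u[0] = np.sqrt(h[0]) * rng.standard_normal()
for t in range(1, T):
    h[t] = omega + alpha * u[t - 1] ** 2 + beta * h[t - 1]
    u[t] = np.sqrt(h[t]) * rng.standard_normal()
print(f"sample variance = {u.var():.3f} vs unconditional = {h[0]:.3f}")
```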

17,555 citations

Journal Article
TL;DR: In this paper, a framework is presented for efficient IV estimators of random effects models with information in levels, which can accommodate predetermined variables, including models in which the predetermined variables have constant correlation with the effects.

16,245 citations

Book
28 Apr 2021
TL;DR: This book provides a comprehensive treatment of the econometric analysis of panel data, from one-way and two-way error component regression models through dynamic panel data models, limited dependent variables, and nonstationary panels.
Abstract (table of contents):
Preface.
1. Introduction: 1.1 Panel Data: Some Examples. 1.2 Why Should We Use Panel Data? Their Benefits and Limitations. Note.
2. The One-way Error Component Regression Model: 2.1 Introduction. 2.2 The Fixed Effects Model. 2.3 The Random Effects Model. 2.4 Maximum Likelihood Estimation. 2.5 Prediction. 2.6 Examples. 2.7 Selected Applications. 2.8 Computational Note. Notes. Problems.
3. The Two-way Error Component Regression Model: 3.1 Introduction. 3.2 The Fixed Effects Model. 3.3 The Random Effects Model. 3.4 Maximum Likelihood Estimation. 3.5 Prediction. 3.6 Examples. 3.7 Selected Applications. Notes. Problems.
4. Test of Hypotheses with Panel Data: 4.1 Tests for Poolability of the Data. 4.2 Tests for Individual and Time Effects. 4.3 Hausman's Specification Test. 4.4 Further Reading. Notes. Problems.
5. Heteroskedasticity and Serial Correlation in the Error Component Model: 5.1 Heteroskedasticity. 5.2 Serial Correlation. Notes. Problems.
6. Seemingly Unrelated Regressions with Error Components: 6.1 The One-way Model. 6.2 The Two-way Model. 6.3 Applications and Extensions. Problems.
7. Simultaneous Equations with Error Components: 7.1 Single Equation Estimation. 7.2 Empirical Example: Crime in North Carolina. 7.3 System Estimation. 7.4 The Hausman and Taylor Estimator. 7.5 Empirical Example: Earnings Equation Using PSID Data. 7.6 Extensions. Notes. Problems.
8. Dynamic Panel Data Models: 8.1 Introduction. 8.2 The Arellano and Bond Estimator. 8.3 The Arellano and Bover Estimator. 8.4 The Ahn and Schmidt Moment Conditions. 8.5 The Blundell and Bond System GMM Estimator. 8.6 The Keane and Runkle Estimator. 8.7 Further Developments. 8.8 Empirical Example: Dynamic Demand for Cigarettes. 8.9 Further Reading. Notes. Problems.
9. Unbalanced Panel Data Models: 9.1 Introduction. 9.2 The Unbalanced One-way Error Component Model. 9.3 Empirical Example: Hedonic Housing. 9.4 The Unbalanced Two-way Error Component Model. 9.5 Testing for Individual and Time Effects Using Unbalanced Panel Data. 9.6 The Unbalanced Nested Error Component Model. Notes. Problems.
10. Special Topics: 10.1 Measurement Error and Panel Data. 10.2 Rotating Panels. 10.3 Pseudo-panels. 10.4 Alternative Methods of Pooling Time Series of Cross-section Data. 10.5 Spatial Panels. 10.6 Short-run vs Long-run Estimates in Pooled Models. 10.7 Heterogeneous Panels. Notes. Problems.
11. Limited Dependent Variables and Panel Data: 11.1 Fixed and Random Logit and Probit Models. 11.2 Simulation Estimation of Limited Dependent Variable Models with Panel Data. 11.3 Dynamic Panel Data Limited Dependent Variable Models. 11.4 Selection Bias in Panel Data. 11.5 Censored and Truncated Panel Data Models. 11.6 Empirical Applications. 11.7 Empirical Example: Nurses' Labor Supply. 11.8 Further Reading. Notes. Problems.
12. Nonstationary Panels: 12.1 Introduction. 12.2 Panel Unit Roots Tests Assuming Cross-sectional Independence. 12.3 Panel Unit Roots Tests Allowing for Cross-sectional Dependence. 12.4 Spurious Regression in Panel Data. 12.5 Panel Cointegration Tests. 12.6 Estimation and Inference in Panel Cointegration Models. 12.7 Empirical Example: Purchasing Power Parity. 12.8 Further Reading. Notes. Problems.
References. Index.

10,363 citations