Author

Peter Schmidt

Bio: Peter Schmidt is an academic researcher from National Institutes of Health. The author has contributed to research in topics: Measurement invariance & Estimator. The author has an h-index of 105 and has co-authored 638 publications receiving 61,822 citations. Previous affiliations of Peter Schmidt include University of Potsdam & University of Haifa.


Papers
Journal ArticleDOI
TL;DR: In this paper, a test of the null hypothesis that an observable series is stationary around a deterministic trend is proposed, where the series is expressed as the sum of a deterministic trend, a random walk, and a stationary error.

10,068 citations
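As a worked illustration in the standard notation for this components model (a reconstruction, not quoted from the paper), the series is written as

\[
y_t = \xi t + r_t + \varepsilon_t, \qquad r_t = r_{t-1} + u_t, \qquad u_t \sim \mathrm{iid}(0, \sigma_u^2),
\]

where \(\varepsilon_t\) is stationary. The null of trend stationarity is \(H_0 : \sigma_u^2 = 0\), under which the random-walk component collapses to a constant. The test statistic is an LM statistic built from the partial sums \(S_t\) of the residuals from a regression of \(y_t\) on an intercept and trend,

\[
\hat{\eta} = T^{-2} \sum_{t=1}^{T} S_t^2 \,\big/\, s^2(l),
\]

with \(s^2(l)\) a consistent estimate of the long-run error variance.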

Journal ArticleDOI
TL;DR: In this paper, the authors define the disturbance term as the sum of a symmetric normal and a (negative) half-normal random variable, and consider various aspects of maximum-likelihood estimation for the coefficients of a production function with an additive disturbance term of this sort.

8,058 citations
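In the usual notation (a reconstruction from the model description above, not quoted from the paper), the composed disturbance is

\[
\varepsilon_i = v_i - u_i, \qquad v_i \sim N(0, \sigma_v^2), \qquad u_i \sim \big|N(0, \sigma_u^2)\big|,
\]

whose density is

\[
f(\varepsilon) = \frac{2}{\sigma}\,\phi\!\left(\frac{\varepsilon}{\sigma}\right)\Phi\!\left(-\frac{\varepsilon\lambda}{\sigma}\right), \qquad \sigma^2 = \sigma_u^2 + \sigma_v^2, \quad \lambda = \sigma_u/\sigma_v,
\]

and maximum likelihood maximizes the corresponding log likelihood over \((\beta, \sigma, \lambda)\).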

Journal ArticleDOI
TL;DR: In this paper, the expected value of u, conditional on (v − u), is considered, where v is a normal error term representing pure randomness, and u is a non-negative error term describing technical inefficiency.

3,378 citations
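In the same notation as above (a standard reconstruction, not quoted from the paper), the conditional mean used as a point estimate of firm-level inefficiency is

\[
E[u_i \mid \varepsilon_i] = \sigma_* \left[ \frac{\phi(\varepsilon_i \lambda / \sigma)}{1 - \Phi(\varepsilon_i \lambda / \sigma)} - \frac{\varepsilon_i \lambda}{\sigma} \right], \qquad \sigma_*^2 = \frac{\sigma_u^2 \sigma_v^2}{\sigma^2},
\]

evaluated at the maximum likelihood residuals.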

Journal ArticleDOI
TL;DR: In this article, the authors consider the estimation of a stochastic frontier production function, which is the type introduced by Aigner, Lovell, and Schmidt (1977) and Meeusen and van den Broeck (1977).
Abstract: This article considers estimation of a stochastic frontier production function, the type introduced by Aigner, Lovell, and Schmidt (1977) and Meeusen and van den Broeck (1977). Such a production frontier model consists of a production function of the usual regression type but with an error term equal to the sum of two parts. The first part is typically assumed to be normally distributed and represents the usual statistical noise, such as luck, weather, machine breakdown, and other events beyond the control of the firm. The second part is nonpositive and represents technical inefficiency, that is, failure to produce maximal output given the set of inputs used. Realized output is bounded from above by a frontier that includes the deterministic part of the regression plus the part of the error representing noise; so the frontier is stochastic. There also exist so-called deterministic frontier models, whose error term contains only the nonpositive component, but we will not consider them here (e.g., see Greene 1980). Frontier models arise naturally in the problem of efficiency measurement, since one needs a bound on output to measure efficiency. A good survey of such production functions and their relationship to the measurement of productive efficiency was given by Førsund, Lovell, and Schmidt (1980).

1,518 citations
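A minimal simulation sketch of this estimation problem in Python (hypothetical data and parameter values, not the authors' code), using the normal/half-normal composed-error density given above:

```python
# Simulate and fit a normal/half-normal stochastic frontier by MLE.
# All parameter values below are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
v = rng.normal(scale=0.3, size=n)            # symmetric noise
u = np.abs(rng.normal(scale=0.5, size=n))    # non-negative inefficiency
y = 1.0 + 0.8 * x + v - u                    # stochastic production frontier

def neg_loglik(theta):
    b0, b1, log_sv, log_su = theta
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma = np.hypot(sv, su)                 # sqrt(sv^2 + su^2)
    lam = su / sv
    eps = y - b0 - b1 * x
    # composed-error density: f(eps) = (2/sigma) phi(eps/sigma) Phi(-eps*lam/sigma)
    ll = (np.log(2.0) - np.log(sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = minimize(neg_loglik, x0=np.zeros(4), method="BFGS")
print(res.x)  # estimates of (b0, b1, log sigma_v, log sigma_u)
```

The scale parameters are optimized in logs so the unconstrained optimizer keeps them positive; the conditional mean E[u | ε] shown earlier can then be evaluated at the fitted residuals to score individual firms.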

Journal ArticleDOI
TL;DR: In this article, the authors consider the efficient instrumental variables estimation of a panel data model with heterogeneity in slopes as well as intercepts and apply their methodology to a frontier production function with cross-sectional and temporal variation in levels of technical efficiency.

1,186 citations
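Schematically, in generic notation (a reconstruction, not quoted from the paper), the model lets the intercept vary over both firms and time,

\[
y_{it} = x_{it}'\beta + \alpha_{it} + v_{it},
\]

with \(\alpha_{it}\) a low-dimensional parametric function of time for each firm; technical efficiency of firm i at time t is then measured relative to \(\max_j \alpha_{jt}\), so efficiency levels and rankings can change over the sample period.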


Cited by
Book
01 Jan 2001
TL;DR: This is the essential companion to Jeffrey Wooldridge's widely-used graduate text Econometric Analysis of Cross Section and Panel Data (MIT Press, 2001).
Abstract: The second edition of this acclaimed graduate text provides a unified treatment of two methods used in contemporary econometric research, cross section and panel data methods. By focusing on assumptions that can be given behavioral content, the book maintains an appropriate level of rigor while emphasizing intuitive thinking. The analysis covers both linear and nonlinear models, including models with dynamics and/or individual heterogeneity. In addition to general estimation frameworks (particularly methods of moments and maximum likelihood), specific linear and nonlinear methods are covered in detail, including probit and logit models and their multivariate extensions, Tobit models, models for count data, censored and missing data schemes, causal (or treatment) effects, and duration analysis. Econometric Analysis of Cross Section and Panel Data was the first graduate econometrics text to focus on microeconomic data structures, allowing assumptions to be separated into population and sampling assumptions. This second edition has been substantially updated and revised. Improvements include a broader class of models for missing data problems; more detailed treatment of cluster problems, an important topic for empirical researchers; expanded discussion of "generalized instrumental variables" (GIV) estimation; new coverage (based on the author's own recent research) of inverse probability weighting; a more complete framework for estimating treatment effects with panel data; and a firmly established link between econometric approaches to nonlinear panel data and the "generalized estimating equation" literature popular in statistics and other fields. New attention is given to explaining when particular econometric methods can be applied; the goal is not only to tell readers what does work, but why certain "obvious" procedures do not. The numerous included exercises, both theoretical and computer-based, allow the reader to extend methods covered in the text and discover new insights.

28,298 citations

Report SeriesDOI
TL;DR: In this paper, two alternative linear estimators designed to improve the properties of the standard first-differenced GMM estimator are presented; both estimators require restrictions on the initial conditions process.

19,132 citations
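In generic notation for the dynamic panel model these estimators target (a reconstruction, not quoted from the paper),

\[
y_{it} = \alpha y_{i,t-1} + \eta_i + v_{it},
\]

the standard first-differenced GMM estimator uses lagged levels as instruments for the differenced equation,

\[
E\big[y_{i,t-s}\,\Delta v_{it}\big] = 0, \qquad s \ge 2,
\]

while the additional moment conditions that sharpen it use lagged differences as instruments for the levels equation,

\[
E\big[\Delta y_{i,t-1}\,(\eta_i + v_{it})\big] = 0,
\]

and these extra moments are valid only under stationarity-type restrictions on how the initial observations \(y_{i1}\) relate to the effects \(\eta_i\).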

Journal ArticleDOI
TL;DR: In this paper, a framework for efficient IV estimators of random effects models with information in levels which can accommodate predetermined variables is presented. But the authors do not consider models with predetermined variables that have constant correlation with the effects.

16,245 citations

Journal ArticleDOI
TL;DR: The CCR ratio form introduced by Charnes, Cooper and Rhodes, as part of their Data Envelopment Analysis approach, comprehends both technical and scale inefficiencies via the optimal value of the ratio form, as obtained directly from the data without requiring a priori specification of weights and/or explicit delineation of assumed functional forms of relations between inputs and outputs.
Abstract: In management contexts, mathematical programming is usually used to evaluate a collection of possible alternative courses of action en route to selecting one which is best. In this capacity, mathematical programming serves as a planning aid to management. Data Envelopment Analysis reverses this role and employs mathematical programming to obtain ex post facto evaluations of the relative efficiency of management accomplishments, however they may have been planned or executed. Mathematical programming is thereby extended for use as a tool for control and evaluation of past accomplishments as well as a tool to aid in planning future activities. The CCR ratio form introduced by Charnes, Cooper and Rhodes, as part of their Data Envelopment Analysis approach, comprehends both technical and scale inefficiencies via the optimal value of the ratio form, as obtained directly from the data without requiring a priori specification of weights and/or explicit delineation of assumed functional forms of relations between inputs and outputs. A separation into technical and scale efficiencies is accomplished by the methods developed in this paper without altering the latter conditions for use of DEA directly on observational data. Technical inefficiencies are identified with failures to achieve best possible output levels and/or usage of excessive amounts of inputs. Methods for identifying and correcting the magnitudes of these inefficiencies, as supplied in prior work, are illustrated. In the present paper, a new separate variable is introduced which makes it possible to determine whether operations were conducted in regions of increasing, constant or decreasing returns to scale in multiple input and multiple output situations. The results are discussed and related not only to classical single output economics but also to more modern versions of economics which are identified with "contestable market theories."

14,941 citations
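A minimal sketch of the envelopment-form computation in Python (hypothetical data, not the authors' formulation verbatim): one small linear program per decision-making unit gives its input-oriented efficiency score, and the convexity constraint on the intensity weights is what separates variable from constant returns to scale, so a scale efficiency measure can be formed as the ratio of the two scores.

```python
# Input-oriented DEA efficiency via linear programming (illustrative data).
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 3.0, 5.0],    # inputs: rows = inputs, cols = DMUs
              [3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 2.0, 2.0, 3.0]])   # outputs: rows = outputs, cols = DMUs
m, n = X.shape                         # m inputs, n DMUs
s = Y.shape[0]                         # s outputs

def efficiency(o, vrs=True):
    """Input-oriented score of DMU o; vrs=False gives constant returns."""
    c = np.r_[1.0, np.zeros(n)]              # minimize theta over (theta, lambda)
    A_in = np.c_[-X[:, [o]], X]              # sum_j lam_j x_ij <= theta * x_io
    A_out = np.c_[np.zeros((s, 1)), -Y]      # sum_j lam_j y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    A_eq = np.c_[0.0, np.ones((1, n))] if vrs else None   # sum lam = 1 (VRS)
    b_eq = np.ones(1) if vrs else None
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for o in range(n):
    crs, vrs = efficiency(o, vrs=False), efficiency(o, vrs=True)
    print(o, round(vrs, 3), round(crs / vrs, 3))  # technical, scale efficiency
```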

Book
28 Apr 2021
TL;DR: This book provides a comprehensive graduate-level treatment of panel data econometrics, from the one-way and two-way error component regression models through dynamic, limited dependent variable, and nonstationary panel data models.
Abstract (contents):
Preface.
1. Introduction. 1.1 Panel Data: Some Examples. 1.2 Why Should We Use Panel Data? Their Benefits and Limitations. Note.
2. The One-way Error Component Regression Model. 2.1 Introduction. 2.2 The Fixed Effects Model. 2.3 The Random Effects Model. 2.4 Maximum Likelihood Estimation. 2.5 Prediction. 2.6 Examples. 2.7 Selected Applications. 2.8 Computational Note. Notes. Problems.
3. The Two-way Error Component Regression Model. 3.1 Introduction. 3.2 The Fixed Effects Model. 3.3 The Random Effects Model. 3.4 Maximum Likelihood Estimation. 3.5 Prediction. 3.6 Examples. 3.7 Selected Applications. Notes. Problems.
4. Test of Hypotheses with Panel Data. 4.1 Tests for Poolability of the Data. 4.2 Tests for Individual and Time Effects. 4.3 Hausman's Specification Test. 4.4 Further Reading. Notes. Problems.
5. Heteroskedasticity and Serial Correlation in the Error Component Model. 5.1 Heteroskedasticity. 5.2 Serial Correlation. Notes. Problems.
6. Seemingly Unrelated Regressions with Error Components. 6.1 The One-way Model. 6.2 The Two-way Model. 6.3 Applications and Extensions. Problems.
7. Simultaneous Equations with Error Components. 7.1 Single Equation Estimation. 7.2 Empirical Example: Crime in North Carolina. 7.3 System Estimation. 7.4 The Hausman and Taylor Estimator. 7.5 Empirical Example: Earnings Equation Using PSID Data. 7.6 Extensions. Notes. Problems.
8. Dynamic Panel Data Models. 8.1 Introduction. 8.2 The Arellano and Bond Estimator. 8.3 The Arellano and Bover Estimator. 8.4 The Ahn and Schmidt Moment Conditions. 8.5 The Blundell and Bond System GMM Estimator. 8.6 The Keane and Runkle Estimator. 8.7 Further Developments. 8.8 Empirical Example: Dynamic Demand for Cigarettes. 8.9 Further Reading. Notes. Problems.
9. Unbalanced Panel Data Models. 9.1 Introduction. 9.2 The Unbalanced One-way Error Component Model. 9.3 Empirical Example: Hedonic Housing. 9.4 The Unbalanced Two-way Error Component Model. 9.5 Testing for Individual and Time Effects Using Unbalanced Panel Data. 9.6 The Unbalanced Nested Error Component Model. Notes. Problems.
10. Special Topics. 10.1 Measurement Error and Panel Data. 10.2 Rotating Panels. 10.3 Pseudo-panels. 10.4 Alternative Methods of Pooling Time Series of Cross-section Data. 10.5 Spatial Panels. 10.6 Short-run vs Long-run Estimates in Pooled Models. 10.7 Heterogeneous Panels. Notes. Problems.
11. Limited Dependent Variables and Panel Data. 11.1 Fixed and Random Logit and Probit Models. 11.2 Simulation Estimation of Limited Dependent Variable Models with Panel Data. 11.3 Dynamic Panel Data Limited Dependent Variable Models. 11.4 Selection Bias in Panel Data. 11.5 Censored and Truncated Panel Data Models. 11.6 Empirical Applications. 11.7 Empirical Example: Nurses' Labor Supply. 11.8 Further Reading. Notes. Problems.
12. Nonstationary Panels. 12.1 Introduction. 12.2 Panel Unit Roots Tests Assuming Cross-sectional Independence. 12.3 Panel Unit Roots Tests Allowing for Cross-sectional Dependence. 12.4 Spurious Regression in Panel Data. 12.5 Panel Cointegration Tests. 12.6 Estimation and Inference in Panel Cointegration Models. 12.7 Empirical Example: Purchasing Power Parity. 12.8 Further Reading. Notes. Problems.
References. Index.

10,363 citations
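A minimal sketch in Python of the within (fixed effects) estimator for the one-way error component model of Chapter 2, y_it = x_it'β + μ_i + ν_it (hypothetical simulated data, not from the book):

```python
# Within (fixed effects) estimator for y_it = b*x_it + mu_i + nu_it.
# Demeaning within each individual sweeps out the effect mu_i.
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 5
ids = np.arange(N).repeat(T)
mu = rng.normal(size=N).repeat(T)        # individual effects
x = rng.normal(size=N * T) + 0.5 * mu    # regressor correlated with mu
y = 2.0 * x + mu + rng.normal(size=N * T)

def within(z):
    """Subtract each individual's time mean."""
    means = np.bincount(ids, weights=z) / T
    return z - means[ids]

x_w, y_w = within(x), within(y)
beta_fe = (x_w @ y_w) / (x_w @ x_w)      # consistent within estimator
beta_ols = (x @ y) / (x @ x)             # pooled OLS, biased upward here
print(beta_fe, beta_ols)
```

Because x is built to correlate with the individual effect, pooled OLS overstates the slope while the within estimator recovers it; this is the basic case for fixed effects methods made throughout the book's early chapters.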