Author

Daniel McFadden

Bio: Daniel McFadden is an academic researcher at the University of California, Berkeley. The author has contributed to research in topics including Medicare Part D and discrete choice, has an h-index of 74, and has co-authored 243 publications receiving 60,638 citations. Previous affiliations of Daniel McFadden include Cambridge Systematics and the University of California.


Papers
Journal ArticleDOI
TL;DR: In this article, the adequacy of a mixing specification can be tested simply as an omitted variable test with appropriately defined artificial variables; practical estimation of a parametric mixing family can be carried out by Maximum Simulated Likelihood Estimation or Method of Simulated Moments, and easily computed instruments are provided that make the latter procedure fairly efficient.
Abstract: SUMMARY This paper considers mixed, or random coefficients, multinomial logit (MMNL) models for discrete response, and establishes the following results. Under mild regularity conditions, any discrete choice model derived from random utility maximization has choice probabilities that can be approximated as closely as one pleases by an MMNL model. Practical estimation of a parametric mixing family can be carried out by Maximum Simulated Likelihood Estimation or Method of Simulated Moments, and easily computed instruments are provided that make the latter procedure fairly efficient. The adequacy of a mixing specification can be tested simply as an omitted variable test with appropriately defined artificial variables. An application to a problem of demand for alternative vehicles shows that MMNL provides a flexible and computationally practical approach to discrete response analysis. Copyright © 2000 John Wiley & Sons, Ltd.
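The simulation idea at the heart of the paper, approximating mixed logit choice probabilities by averaging conditional logit probabilities over draws of the random coefficients, can be sketched as below. The attribute matrix, coefficient means, and standard deviations are hypothetical illustrations, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 alternatives, 2 attributes per alternative.
X = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.0, 0.0]])            # attributes (alternatives x attributes)

mu = np.array([0.8, -0.4])            # mean of the random coefficients
sigma = np.array([0.5, 0.3])          # std dev of independent normal mixing

def mmnl_probs(X, mu, sigma, n_draws=10_000):
    """Approximate MMNL choice probabilities by averaging conditional
    logit probabilities over Monte Carlo draws of the coefficients."""
    beta = mu + sigma * rng.standard_normal((n_draws, len(mu)))  # R x K
    v = beta @ X.T                        # R x J systematic utilities
    v -= v.max(axis=1, keepdims=True)     # stabilize the softmax
    p = np.exp(v)
    p /= p.sum(axis=1, keepdims=True)     # conditional logit per draw
    return p.mean(axis=0)                 # simulated mixed logit probabilities

probs = mmnl_probs(X, mu, sigma)
print(probs)   # nonnegative, sums to 1
```

This simulated probability is the building block of Maximum Simulated Likelihood: the estimator maximizes the sum of log simulated probabilities of the observed choices.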

3,967 citations

Journal Article
TL;DR: The problem of translating the theory of economic choice behavior into concrete models suitable for analyzing housing location and methods for controlling the size of data collection and estimation tasks by sampling alternatives from the full set of alternatives are discussed.
Abstract: The problem of translating the theory of economic choice behavior into concrete models suitable for analyzing housing location is discussed. The analysis is based on the premise that the classical, economically rational consumer will choose a residential location by weighing the attributes of each available alternative and by selecting the alternative that maximizes utility. The assumption of independence in the commonly used multinomial logit model of choice is relaxed to permit a structure of perceived similarities among alternatives. In this analysis, choice is described by a multinomial logit model for aggregates of similar alternatives. Also discussed are methods for controlling the size of data collection and estimation tasks by sampling alternatives from the full set of alternatives.
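The sampling-of-alternatives idea can be illustrated numerically: under a uniform sampling protocol that always includes the chosen alternative, the sampling correction term is constant and cancels, so conditional logit estimated on the sampled subsets remains consistent. The sketch below uses a hypothetical single-attribute setup with made-up sample sizes; it is an illustration of the principle, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical data: n choices among J alternatives, one attribute each,
# true taste coefficient beta = 1.
J, n, beta = 50, 5_000, 1.0
x = rng.standard_normal((n, J))
u = beta * x + rng.gumbel(size=(n, J))      # random utility
choice = u.argmax(axis=1)

# Uniformly sample 4 non-chosen alternatives, plus the chosen one.
S = 5
sets = np.empty((n, S), dtype=int)
for i in range(n):
    others = rng.choice(np.delete(np.arange(J), choice[i]),
                        size=S - 1, replace=False)
    sets[i] = np.r_[choice[i], others]
xs = x[np.arange(n)[:, None], sets]         # attributes on the sampled sets

def newton(b=0.0, steps=25):
    """Maximize the conditional logit log-likelihood on the sampled sets
    (chosen alternative is column 0) by Newton's method."""
    for _ in range(steps):
        p = np.exp(b * xs)
        p /= p.sum(axis=1, keepdims=True)
        grad = (xs[:, 0] - (p * xs).sum(axis=1)).sum()
        hess = -((p * xs**2).sum(axis=1) - ((p * xs).sum(axis=1))**2).sum()
        b -= grad / hess
    return b

beta_hat = newton()
print(beta_hat)   # close to the true value 1.0
```

Estimating on 5 alternatives instead of 50 cuts the computational burden by an order of magnitude while still recovering the taste coefficient.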

3,138 citations

Book ChapterDOI
TL;DR: In this paper, conditions for obtaining consistency and asymptotic normality of a very general class of estimators (extremum estimators) are given, along with consistent asymptotic variance estimators that enable approximation of the asymptotic distribution.
Abstract: Asymptotic distribution theory is the primary method used to examine the properties of econometric estimators and tests. We present conditions for obtaining consistency and asymptotic normality of a very general class of estimators (extremum estimators). Consistent asymptotic variance estimators are given to enable approximation of the asymptotic distribution. Asymptotic efficiency is another desirable property then considered. Throughout the chapter, the general results are also specialized to common econometric estimators (e.g. MLE and GMM), and in specific examples we work through the conditions for the various results in detail. The results are also extended to two-step estimators (with finite-dimensional parameter estimation in the first step), estimators derived from nonsmooth objective functions, and semiparametric two-step estimators (with nonparametric estimation of an infinite-dimensional parameter in the first step). Finally, the trinity of test statistics is considered within the quite general setting of GMM estimation, and numerous examples are given.
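A minimal concrete instance of an extremum estimator is the MLE, which maximizes the average log-likelihood, with a plug-in asymptotic variance from the Hessian of the objective. The sketch below uses a hypothetical exponential-rate example with made-up parameter values, not an example from the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: MLE of an exponential rate as an extremum estimator.
lam_true = 2.0
x = rng.exponential(scale=1.0 / lam_true, size=5_000)

# The estimator maximizes the average log-likelihood
#   Q_n(lam) = log(lam) - lam * mean(x);
# the first-order condition gives the closed form below.
lam_hat = 1.0 / x.mean()

# -d2 Q_n / d lam^2 = 1 / lam^2, so the plug-in asymptotic
# standard error is lam_hat / sqrt(n).
se = lam_hat / np.sqrt(len(x))
print(lam_hat, se)   # lam_hat close to 2.0
```

The same recipe, maximize a sample objective, then invert the (estimated) Hessian for the variance, carries over to the general extremum class, including GMM.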

2,954 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, the authors draw on recent progress in the theory of property rights, agency, and finance to develop a theory of ownership structure for the firm, which casts new light on and has implications for a variety of issues in the professional and popular literature.

49,666 citations

Book
01 Jan 2001
TL;DR: This is the essential companion to Jeffrey Wooldridge's widely-used graduate text Econometric Analysis of Cross Section and Panel Data (MIT Press, 2001).
Abstract: The second edition of this acclaimed graduate text provides a unified treatment of two methods used in contemporary econometric research, cross section and panel data methods. By focusing on assumptions that can be given behavioral content, the book maintains an appropriate level of rigor while emphasizing intuitive thinking. The analysis covers both linear and nonlinear models, including models with dynamics and/or individual heterogeneity. In addition to general estimation frameworks (in particular, methods of moments and maximum likelihood), specific linear and nonlinear methods are covered in detail, including probit and logit models and their multivariate extensions, Tobit models, models for count data, censored and missing data schemes, causal (or treatment) effects, and duration analysis. Econometric Analysis of Cross Section and Panel Data was the first graduate econometrics text to focus on microeconomic data structures, allowing assumptions to be separated into population and sampling assumptions. This second edition has been substantially updated and revised. Improvements include a broader class of models for missing data problems; more detailed treatment of cluster problems, an important topic for empirical researchers; expanded discussion of "generalized instrumental variables" (GIV) estimation; new coverage (based on the author's own recent research) of inverse probability weighting; a more complete framework for estimating treatment effects with panel data; and a firmly established link between econometric approaches to nonlinear panel data and the "generalized estimating equation" literature popular in statistics and other fields. New attention is given to explaining when particular econometric methods can be applied; the goal is not only to tell readers what does work, but why certain "obvious" procedures do not.
The numerous included exercises, both theoretical and computer-based, allow the reader to extend methods covered in the text and discover new insights.

28,298 citations

Report SeriesDOI
TL;DR: In this paper, two alternative linear estimators designed to improve the properties of the standard first-differenced GMM estimator are presented; both estimators require restrictions on the initial conditions process.
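The first-differenced approach that these estimators improve upon can be sketched in its simplest form: difference a dynamic panel to remove the fixed effects, then instrument the lagged difference with the second lag in levels (the Anderson-Hsiao-style moment that first-differenced GMM generalizes). The simulation below uses made-up parameter values and is not the paper's own estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical dynamic panel: y_it = rho*y_{i,t-1} + alpha_i + eps_it.
N, T, rho = 2_000, 6, 0.5
alpha = rng.standard_normal(N)                 # individual fixed effects
y = np.zeros((N, T))
y[:, 0] = alpha + rng.standard_normal(N)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + alpha + rng.standard_normal(N)

dy = np.diff(y, axis=1)        # first differences remove alpha_i

# Moment condition: E[y_{t-2} * (dy_t - rho * dy_{t-1})] = 0 for t >= 2,
# since y_{t-2} is uncorrelated with the differenced error eps_t - eps_{t-1}.
dep = dy[:, 1:].ravel()        # dy_t  for t = 2..T-1
reg = dy[:, :-1].ravel()       # dy_{t-1}
inst = y[:, :T - 2].ravel()    # y_{t-2} as instrument

rho_hat = (inst @ dep) / (inst @ reg)   # simple IV estimate
print(rho_hat)   # close to the true 0.5
```

First-differenced GMM stacks all available lags as instruments rather than just one; the paper's contribution concerns the additional moment conditions that initial-conditions restrictions make available.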

19,132 citations

Journal ArticleDOI
TL;DR: In this article, the authors argue that the style in which the builders of large-scale macroeconomic models construct claims for a connection between those models and reality is inappropriate, to the point that claims for identification in these models cannot be taken seriously.
Abstract: Existing strategies for econometric analysis related to macroeconomics are subject to a number of serious objections, some recently formulated, some old. These objections are summarized in this paper, and it is argued that taken together they make it unlikely that macroeconomic models are in fact overidentified, as the existing statistical theory usually assumes. The implications of this conclusion are explored, and an example of econometric work in a non-standard style, taking account of the objections to the standard style, is presented. The study of the business cycle, fluctuations in aggregate measures of economic activity and prices over periods from one to ten years or so, constitutes or motivates a large part of what we call macroeconomics. Most economists would agree that there are many macroeconomic variables whose cyclical fluctuations are of interest, and would agree further that fluctuations in these series are interrelated. It would seem to follow almost tautologically that statistical models involving large numbers of macroeconomic variables ought to be the arena within which macroeconomic theories confront reality and thereby each other. Instead, though large-scale statistical macroeconomic models exist and are by some criteria successful, a deep vein of skepticism about the value of these models runs through that part of the economics profession not actively engaged in constructing or using them. It is still rare for empirical research in macroeconomics to be planned and executed within the framework of one of the large models. In this lecture I intend to discuss some aspects of this situation, attempting both to offer some explanations and to suggest some means for improvement.
I will argue that the style in which their builders construct claims for a connection between these models and reality, the style in which "identification" is achieved for these models, is inappropriate, to the point at which claims for identification in these models cannot be taken seriously. This is a venerable assertion, and there are some good old reasons for believing it, but there are also some reasons which have been more recently put forth. After developing the conclusion that the identification claimed for existing large-scale models is incredible, I will discuss what ought to be done in consequence. The line of argument is: large-scale models do perform useful forecasting and policy-analysis functions despite their incredible identification; the restrictions imposed in the usual style of identification are neither essential to constructing a model which can perform these functions nor innocuous; an alternative style of identification is available and practical. Finally we will look at some empirical work based on an alternative style of macroeconometrics. A six-variable dynamic system is estimated without using such restrictions. (Research for this paper was supported by NSF Grant Soc-76-02482. Lars Hansen executed the computations. The paper has benefited from comments by many people, especially Thomas J. Sargent.)
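The "alternative style" of the empirical section, an unrestricted dynamic system estimated without exclusion restrictions, is what later became known as a vector autoregression. A toy sketch of the idea, with a made-up bivariate system rather than the paper's six variables:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical bivariate VAR(1): y_t = A y_{t-1} + e_t, with no exclusion
# restrictions imposed; every variable enters every equation.
A_true = np.array([[0.5, 0.1],
                   [0.0, 0.4]])
T = 10_000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = y[t - 1] @ A_true.T + 0.1 * rng.standard_normal(2)

# Estimate equation by equation with OLS on all lagged variables.
Y, X = y[1:], y[:-1]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
print(A_hat)   # close to A_true
```

Because every lag of every variable appears in every equation, identification rests on the dynamic structure of the data rather than on the a priori exclusion restrictions the lecture criticizes.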

11,195 citations

Book
28 Apr 2021
TL;DR: This textbook provides a comprehensive treatment of panel data econometrics, covering one-way and two-way error component regression models, hypothesis testing, dynamic panels, limited dependent variables, and nonstationary panels.
Abstract: Contents: Preface.
1. Introduction. 1.1 Panel Data: Some Examples. 1.2 Why Should We Use Panel Data? Their Benefits and Limitations.
2. The One-way Error Component Regression Model. 2.1 Introduction. 2.2 The Fixed Effects Model. 2.3 The Random Effects Model. 2.4 Maximum Likelihood Estimation. 2.5 Prediction. 2.6 Examples. 2.7 Selected Applications. 2.8 Computational Note.
3. The Two-way Error Component Regression Model. 3.1 Introduction. 3.2 The Fixed Effects Model. 3.3 The Random Effects Model. 3.4 Maximum Likelihood Estimation. 3.5 Prediction. 3.6 Examples. 3.7 Selected Applications.
4. Test of Hypotheses with Panel Data. 4.1 Tests for Poolability of the Data. 4.2 Tests for Individual and Time Effects. 4.3 Hausman's Specification Test. 4.4 Further Reading.
5. Heteroskedasticity and Serial Correlation in the Error Component Model. 5.1 Heteroskedasticity. 5.2 Serial Correlation.
6. Seemingly Unrelated Regressions with Error Components. 6.1 The One-way Model. 6.2 The Two-way Model. 6.3 Applications and Extensions.
7. Simultaneous Equations with Error Components. 7.1 Single Equation Estimation. 7.2 Empirical Example: Crime in North Carolina. 7.3 System Estimation. 7.4 The Hausman and Taylor Estimator. 7.5 Empirical Example: Earnings Equation Using PSID Data. 7.6 Extensions.
8. Dynamic Panel Data Models. 8.1 Introduction. 8.2 The Arellano and Bond Estimator. 8.3 The Arellano and Bover Estimator. 8.4 The Ahn and Schmidt Moment Conditions. 8.5 The Blundell and Bond System GMM Estimator. 8.6 The Keane and Runkle Estimator. 8.7 Further Developments. 8.8 Empirical Example: Dynamic Demand for Cigarettes. 8.9 Further Reading.
9. Unbalanced Panel Data Models. 9.1 Introduction. 9.2 The Unbalanced One-way Error Component Model. 9.3 Empirical Example: Hedonic Housing. 9.4 The Unbalanced Two-way Error Component Model. 9.5 Testing for Individual and Time Effects Using Unbalanced Panel Data. 9.6 The Unbalanced Nested Error Component Model.
10. Special Topics. 10.1 Measurement Error and Panel Data. 10.2 Rotating Panels. 10.3 Pseudo-panels. 10.4 Alternative Methods of Pooling Time Series of Cross-section Data. 10.5 Spatial Panels. 10.6 Short-run vs Long-run Estimates in Pooled Models. 10.7 Heterogeneous Panels.
11. Limited Dependent Variables and Panel Data. 11.1 Fixed and Random Logit and Probit Models. 11.2 Simulation Estimation of Limited Dependent Variable Models with Panel Data. 11.3 Dynamic Panel Data Limited Dependent Variable Models. 11.4 Selection Bias in Panel Data. 11.5 Censored and Truncated Panel Data Models. 11.6 Empirical Applications. 11.7 Empirical Example: Nurses' Labor Supply. 11.8 Further Reading.
12. Nonstationary Panels. 12.1 Introduction. 12.2 Panel Unit Roots Tests Assuming Cross-sectional Independence. 12.3 Panel Unit Roots Tests Allowing for Cross-sectional Dependence. 12.4 Spurious Regression in Panel Data. 12.5 Panel Cointegration Tests. 12.6 Estimation and Inference in Panel Cointegration Models. 12.7 Empirical Example: Purchasing Power Parity. 12.8 Further Reading.
References. Index.
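The one-way fixed effects model of Chapter 2 has a simple closed-form "within" estimator: demeaning each individual's data sweeps out the individual effect, after which OLS on the demeaned data is consistent even when the effect is correlated with the regressor. The sketch below uses simulated data with hypothetical parameter values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical one-way error component model:
#   y_it = beta * x_it + mu_i + eps_it, with x correlated with mu_i.
N, T, beta = 500, 8, 1.5
mu = rng.standard_normal((N, 1))              # individual effects
x = rng.standard_normal((N, T)) + mu          # regressor correlated with mu
y = beta * x + mu + 0.5 * rng.standard_normal((N, T))

# Within transformation: subtract individual means to remove mu_i.
x_w = x - x.mean(axis=1, keepdims=True)
y_w = y - y.mean(axis=1, keepdims=True)
beta_fe = (x_w * y_w).sum() / (x_w ** 2).sum()
print(beta_fe)   # close to the true 1.5
```

Pooled OLS on the raw data would be biased upward here because x and mu are positively correlated; the within transformation removes that bias, which is the core argument for the fixed effects model.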

10,363 citations