Journal Article•DOI•

An empirical assessment of fixed and random parameter logit models using crash- and non-crash-specific injury data

01 May 2011-Accident Analysis & Prevention (Accid Anal Prev)-Vol. 43, Iss: 3, pp 1140-1147
TL;DR: Using 5-year data from interstate highways in Indiana, the analysis shows that, while models that do not use detailed crash-specific data do not perform as well as those that do, random parameter models using less detailed data still can provide a reasonable level of accuracy.
About: This article was published in Accident Analysis & Prevention on 2011-05-01 and has received 287 citations to date. The article focuses on the topics: Crash & Poison control.
Citations
Journal Article•DOI•
TL;DR: A review of the evolution of methodological applications and available data in highway-accident research can be found in this article, where fruitful directions for future methodological developments are identified and the role that new data sources will play in defining these directions is discussed.

923 citations

Journal Article•DOI•
TL;DR: In this article, a detailed discussion of unobserved heterogeneity in highway accident data and analysis is presented, along with the strengths and weaknesses of the approaches used to address it, as well as a summary of the fundamental issues and directions for future methodological work on this problem.

843 citations

Journal Article•DOI•
TL;DR: This paper summarizes the evolution of research and current thinking as it relates to the statistical analysis of motor-vehicle injury severities, and provides a discussion of future methodological directions.

818 citations


Cites background from "An empirical assessment of fixed an..."

  • ...A recent study by Anastasopoulos and Mannering (2011) explored the potential loss in model accuracy when using non-crash-specific data as opposed to detailed post-crash data....


Journal Article•DOI•
TL;DR: Several other factors were found to significantly increase the probability of fatal injury for drivers in single-vehicle crashes, most notably: male driver, drunk driving, unsafe speed, older driver (65+) driving an older vehicle, and darkness without streetlights.

276 citations

Journal Article•DOI•
TL;DR: In this article, the authors investigated risk factors that significantly contribute to the injury severity of bicyclists in bicycle/motor-vehicle crashes while systematically accounting for unobserved heterogeneity within the crash data.

249 citations

References
Book•
01 Jan 2003
TL;DR: In this paper, the authors describe the new generation of discrete choice methods, focusing on the many advances that are made possible by simulation, and compare simulation-assisted estimation procedures, including maximum simulated likelihood, method of simulated moments, and method of simulated scores.
Abstract: This book describes the new generation of discrete choice methods, focusing on the many advances that are made possible by simulation. Researchers use these statistical methods to examine the choices that consumers, households, firms, and other agents make. Each of the major models is covered: logit, generalized extreme value, or GEV (including nested and cross-nested logits), probit, and mixed logit, plus a variety of specifications that build on these basics. Simulation-assisted estimation procedures are investigated and compared, including maximum simulated likelihood, method of simulated moments, and method of simulated scores. Procedures for drawing from densities are described, including variance reduction techniques such as antithetics and Halton draws. Recent advances in Bayesian procedures are explored, including the use of the Metropolis-Hastings algorithm and its variant Gibbs sampling. No other book incorporates all these fields, which have arisen in the past 20 years. The procedures are applicable in many fields, including energy, transportation, environmental studies, health, labor, and marketing.

7,768 citations
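The simulation-based estimation the book describes rests on a simple idea: mixed logit choice probabilities are integrals over a mixing distribution, approximated by averaging standard logit probabilities over random coefficient draws. A minimal sketch, assuming an independent-normal mixing distribution (function and variable names are illustrative, not from the book):

```python
import numpy as np

def mixed_logit_probs(X, beta_mean, beta_sd, n_draws=500, seed=0):
    """Simulate mixed logit choice probabilities for one decision-maker.

    X         : (J, K) attribute matrix for J alternatives
    beta_mean : (K,) means of the random coefficients
    beta_sd   : (K,) std. devs. (independent normal mixing assumed)
    """
    rng = np.random.default_rng(seed)
    # Draw R coefficient vectors from the mixing distribution
    betas = beta_mean + beta_sd * rng.standard_normal((n_draws, len(beta_mean)))
    V = betas @ X.T                       # (R, J) systematic utilities
    V -= V.max(axis=1, keepdims=True)     # numerical stability
    expV = np.exp(V)
    logit = expV / expV.sum(axis=1, keepdims=True)  # logit probs per draw
    return logit.mean(axis=0)             # simulated mixed logit probs
```

In practice, the pseudo-random draws here would be replaced by Halton or other quasi-random sequences for variance reduction, as discussed in the book.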

Journal Article•DOI•
TL;DR: In this article, the adequacy of a mixing specification can be tested simply as an omitted-variable test with appropriately defined artificial variables; practical estimation of a parametric mixing family can be carried out by maximum simulated likelihood estimation or the method of simulated moments, and easily computed instruments are provided that make the latter procedure fairly efficient.
Abstract: This paper considers mixed, or random coefficients, multinomial logit (MMNL) models for discrete response, and establishes the following results. Under mild regularity conditions, any discrete choice model derived from random utility maximization has choice probabilities that can be approximated as closely as one pleases by an MMNL model. Practical estimation of a parametric mixing family can be carried out by Maximum Simulated Likelihood Estimation or Method of Simulated Moments, and easily computed instruments are provided that make the latter procedure fairly efficient. The adequacy of a mixing specification can be tested simply as an omitted variable test with appropriately defined artificial variables. An application to a problem of demand for alternative vehicles shows that MMNL provides a flexible and computationally practical approach to discrete response analysis. Copyright © 2000 John Wiley & Sons, Ltd.

3,967 citations

Posted Content•
TL;DR: In this paper, simple quasi-likelihood methods for estimating regression models with a fractional dependent variable and for performing asymptotically valid inference are proposed and applied to a data set of employee participation rates in 401(k) pension plans.
Abstract: We offer simple quasi-likelihood methods for estimating regression models with a fractional dependent variable and for performing asymptotically valid inference. Compared with log-odds type procedures, there is no difficulty in recovering the regression function for the fractional variable, and there is no need to use ad hoc transformations to handle data at the extreme values of zero and one. We also offer some new, simple specification tests by nesting the logit or probit function in a more general functional form. We apply these methods to a data set of employee participation rates in 401(k) pension plans.

3,243 citations

Journal Article•DOI•
TL;DR: In this paper, the authors develop attractive functional forms and simple quasi-likelihood estimation methods for regression models with a fractional dependent variable, and apply these methods to a data set of employee participation rates in 401 (k) pension plans.
Abstract: We develop attractive functional forms and simple quasi-likelihood estimation methods for regression models with a fractional dependent variable. Compared with log-odds type procedures, there is no difficulty in recovering the regression function for the fractional variable, and there is no need to use ad hoc transformations to handle data at the extreme values of zero and one. We also offer some new, robust specification tests by nesting the logit or probit function in a more general functional form. We apply these methods to a data set of employee participation rates in 401(k) pension plans. I. INTRODUCTION Fractional response variables arise naturally in many economic settings. The fraction of total weekly hours spent working, the proportion of income spent on charitable contributions, and participation rates in voluntary pension plans are just a few examples of economic variables bounded between zero and one. The bounded nature of such variables and the possibility of observing values at the boundaries raise interesting functional form and inference issues. In this paper we specify and analyse a class of functional forms with satisfying econometric properties. We also synthesize and expand on the generalized linear models (GLM) literature from statistics and the quasi-likelihood literature from econometrics to obtain robust methods for estimation and inference with fractional response variables. We apply the methods to estimate a model of employee participation rates in 401(k) pension plans. The key explanatory variable of interest is the plan's 'match rate,' the rate at which a firm matches a dollar of employee contributions. The empirical work extends that of Papke (1995), who studied this problem using linear spline methods. Spline methods are flexible, but they do not ensure that predicted values lie in the unit interval.
To illustrate the methodological issues that arise with fractional dependent variables, suppose that a variable y, 0 <= y <= 1, is to be explained by a 1 x K vector of explanatory variables x = (x1, x2, ..., xK), with the convention that x1 = 1.

2,933 citations
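The quasi-likelihood approach this paper describes amounts to fitting a logit conditional mean E[y|x] = 1/(1 + exp(-x'b)) by maximizing the Bernoulli log-likelihood, which remains consistent for fractional y including exact zeros and ones. A minimal sketch using Fisher scoring (names and the scoring loop are illustrative, not the paper's implementation):

```python
import numpy as np

def fractional_logit_qmle(X, y, n_iter=100, tol=1e-8):
    """Bernoulli quasi-MLE with a logit mean: E[y|x] = 1/(1 + exp(-x'b)).

    Valid for y in [0, 1], including exact zeros and ones.
    Uses Fisher scoring (IRLS for the logit link).
    """
    n, k = X.shape
    b = np.zeros(k)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-X @ b))   # fitted conditional means
        W = mu * (1.0 - mu)                 # working weights
        grad = X.T @ (y - mu)               # quasi-score
        H = (X * W[:, None]).T @ X          # expected information
        step = np.linalg.solve(H, grad)
        b = b + step
        if np.max(np.abs(step)) < tol:
            break
    return b
```

For valid inference the paper pairs these point estimates with a robust (sandwich) covariance estimator, which is omitted from this sketch.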

01 Jan 1981

2,235 citations