scispace - formally typeset
Author

L Mountain

Bio: L Mountain is an academic researcher. The author has contributed to research in the topic of Carriageway, has an h-index of 1, and has co-authored 1 publication receiving 3 citations.
Topics: Carriageway

Papers
01 Jan 2013
TL;DR: This paper addresses a number of methodological issues that arise in seeking practical and efficient ways to update PAMs, whether by re-calibration or by re-fitting, including the choice of distributional assumption for overdispersion, and considerations about the most efficient and convenient ways to fit the required models.
Abstract: Reliable predictive accident models (PAMs) have a variety of important uses in traffic safety research and practice. They are used to help identify sites in need of remedial treatment, in the design of transport schemes to assess safety implications, and to estimate the effectiveness of remedial treatments. The PAMs currently in use in the UK are now quite old; the data used in their development was gathered up to 30 years ago. Many changes have occurred over that period in road and vehicle design, in road safety campaigns and legislation, and the national accident rate has fallen substantially. It seems unlikely that these aging models can be relied upon to provide accurate and reliable predictions of accident frequencies on the roads today. This paper addresses a number of methodological issues that arise in seeking practical and efficient ways to update PAMs. Models for accidents on rural single carriageway roads have been chosen to illustrate these issues, including the choice of distributional assumption for overdispersion, the choice of goodness-of-fit measures, questions of independence between observations in different years, and between links on the same scheme, the estimation of trends in the models, the uncertainty of predictions, as well as considerations about the most efficient and convenient ways to fit the required models, given the considerable advances that have been seen in statistical computing software in recent years.
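The overdispersion question the abstract raises can be made concrete: under a negative binomial assumption, Var(Y) = μ + αμ², so a quick method-of-moments estimate of α from a set of annual link counts indicates whether a Poisson model (α ≈ 0) would suffice. A minimal sketch, using invented counts rather than data from the paper:

```python
import statistics

def nb_dispersion(counts):
    """Method-of-moments estimate of the negative binomial
    dispersion alpha, solving Var(Y) = mu + alpha * mu**2.
    A value near zero suggests Poisson is adequate; a clearly
    positive value indicates overdispersion."""
    mu = statistics.mean(counts)
    var = statistics.variance(counts)  # sample variance
    return (var - mu) / mu ** 2

# Hypothetical annual accident counts for a set of road links
counts = [0, 1, 3, 0, 2, 7, 1, 0, 4, 2]
print(round(nb_dispersion(counts), 3))
```

Here the sample variance exceeds the mean, so the estimated α is positive and a negative binomial error structure would be the natural choice.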

4 citations


Cited by
01 Jan 2016
TL;DR: Sample-size guidelines, based on the coefficient of variation of the crash data needed for the calibration process, were prepared; they can be used for all facility types and for both segment and intersection prediction models.
Abstract: The Highway Safety Manual (HSM) prediction models are fitted and validated based on crash data collected from a selected number of states in the United States. Therefore, for a jurisdiction to fully benefit from applying these models, it is necessary to calibrate them to local conditions. The first edition of the HSM recommends calibrating the models using a one-size-fits-all sample size of 30 to 50 locations with a total of at least 100 crashes per year. However, the HSM recommendation is not fully supported by documented studies. The objectives of this paper are consequently to: (1) examine the required sample size based on the characteristics of the data that will be used for the recalibration process; and (2) propose revised guidelines. The objectives were accomplished using simulation runs for different scenarios that characterized the sample mean and variance of the data. The simulation results indicate that as the ratio of the standard deviation to the mean (i.e., the coefficient of variation) of the crash data increases, a larger sample size is warranted to fulfil certain levels of accuracy. Taking this observation into account, sample-size guidelines were prepared based on the coefficient of variation of the crash data that are needed for the recalibration process. The guidelines were then successfully applied to two observed datasets. The proposed guidelines can be used for all facility types and for both segment and intersection prediction models.
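The simulation logic described here, that a higher coefficient of variation in the crash data demands a larger calibration sample, can be sketched as follows. Every number (the predicted crash frequency, the gamma mixing model, the sample sizes) is an assumption for illustration, not taken from the paper:

```python
import math
import random
import statistics

random.seed(7)

def poisson_draw(lam):
    # Knuth's multiplicative algorithm for a Poisson random draw
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def calib_factor_sd(n_sites, cv, reps=400):
    """Spread (std. dev. over replications) of the estimated
    calibration factor C = sum(observed) / sum(predicted) when
    true site means vary around the prediction with the given CV."""
    pred = 2.0                 # assumed predicted crashes per site-year
    shape = 1.0 / cv ** 2      # gamma mixing so that CV(mu) = cv
    estimates = []
    for _ in range(reps):
        total_obs = 0
        for _ in range(n_sites):
            mu = pred * random.gammavariate(shape, 1.0 / shape)
            total_obs += poisson_draw(mu)
        estimates.append(total_obs / (pred * n_sites))
    return statistics.stdev(estimates)

# Noisier data (higher CV) -> a less precise calibration factor,
# so more sites are needed for the same accuracy.
print(calib_factor_sd(30, cv=0.5), calib_factor_sd(30, cv=2.0))
```

Running this shows the spread of the estimated factor growing with the CV at a fixed sample size, which is exactly the observation the paper's guidelines are built on.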

39 citations

01 Jan 2016
TL;DR: Two popular techniques from the two approaches are compared: negative binomial models for the parametric approach and kernel regression for the nonparametric counterpart. The kernel regression method outperforms the model-based approach in predictive performance, and that performance advantage increases noticeably as the data available for calibration grow.
Abstract: Crash data for road safety analysis and modeling are growing steadily in size and completeness due to the latest advances in information technologies. This increased availability of large datasets has generated resurgent interest in applying data-driven nonparametric approaches as an alternative to the traditional parametric models for crash risk prediction. This paper investigates the question of how the relative performance of these two alternative approaches changes as crash data grow. The authors focus on comparing two popular techniques from the two approaches: negative binomial (NB) models for the parametric approach and kernel regression (KR) for the nonparametric counterpart. Using two large crash datasets, the authors investigate the performance of these two methods as a function of the amount of training data. Through a rigorous bootstrapping validation process, the study found that the two approaches exhibit strikingly different patterns, especially in terms of sensitivity to data size. The kernel regression method outperforms the model-based approach (NB) in terms of predictive performance, and that performance advantage increases noticeably as the data available for calibration grow. With the arrival of the Big Data era and the added benefits of enabling automated road safety analysis and improved responsiveness to the latest safety issues, nonparametric techniques (especially modern machine learning approaches) could be included as one of the important tools for road safety studies.
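To give a flavour of the nonparametric side of this comparison: a Nadaraya-Watson kernel regression is just a locally weighted average of the observed crash counts, with weights set by a kernel on the explanatory variable. The sketch below uses a Gaussian kernel on invented flow/crash pairs; the bandwidth and data are assumptions, not the paper's:

```python
import math

def kernel_regression(x_train, y_train, x0, bandwidth=1.0):
    """Nadaraya-Watson estimator with a Gaussian kernel:
    predicts the crash count at query flow x0 as a weighted
    average of observed counts, weighting sites whose flow is
    close to x0 most heavily."""
    weights = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
               for x in x_train]
    return sum(w * y for w, y in zip(weights, y_train)) / sum(weights)

# Hypothetical site data: traffic flow (thousands of vehicles/day)
# and annual crash counts
flows = [2.0, 3.0, 5.0, 6.0, 8.0, 9.0]
crashes = [1, 1, 3, 4, 6, 7]
print(round(kernel_regression(flows, crashes, 5.5), 2))
```

With no functional form assumed, the estimate at flow 5.5 lands between the counts of the two nearest sites; the bandwidth plays the role that the model specification plays for the NB approach, and choosing it well is where the larger datasets pay off.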

10 citations

Graham Wood
01 Jan 2004
TL;DR: In this article, the authors describe how confidence intervals (for example, for the true accident rate at given flows) and prediction intervals (for example, for the number of accidents at a new site with given flows) can be produced using spreadsheet technology.
Abstract: Generalised linear models, with "log" link and either Poisson or negative binomial errors, are commonly used for relating accident rates to explanatory variables. This paper adds to the toolkit for such models. It describes how confidence intervals (for example, for the true accident rate at given flows) and prediction intervals (for example, for the number of accidents at a new site with given flows) can be produced using spreadsheet technology.
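For a log-link model, the confidence-interval recipe this abstract alludes to is exp(η ± 1.96·se(η)), where η is the linear predictor; the paper's spreadsheet formulas implement this kind of calculation. A sketch with hypothetical coefficients and standard error (a prediction interval would additionally account for the Poisson or negative binomial variation of the count itself, which is omitted here):

```python
import math

# Hypothetical fitted model: ln(mu) = b0 + b1 * ln(flow).
# The coefficients and the standard error of the linear predictor
# are assumed values; in practice se_eta comes from the fitted
# model's covariance matrix at this flow.
b0, b1 = -6.0, 0.65
flow = 12000.0
se_eta = 0.12

eta = b0 + b1 * math.log(flow)   # linear predictor
mu = math.exp(eta)               # predicted accident rate

# 95% confidence interval for the true accident rate at this flow:
# exponentiate the interval on the linear-predictor scale
lo = math.exp(eta - 1.96 * se_eta)
hi = math.exp(eta + 1.96 * se_eta)
print(round(mu, 2), round(lo, 2), round(hi, 2))
```

Because the interval is built on the log scale and then exponentiated, it is asymmetric around the point estimate and can never dip below zero, which is appropriate for a rate.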

8 citations

Journal Article
06 Apr 2022
TL;DR: In this article, the authors compare the performance of a scalar calibration factor and a calibration function for different ranges of data characteristics (i.e., sample mean and variance) as well as the sample size.
Abstract: The Highway Safety Manual (HSM) recommends calibrating Safety Performance Functions using a scalar calibration factor. Recently, a few studies explored the merits of estimating a calibration function instead of a calibration factor. Although it seems a promising approach, it is not clear when a calibration function should be preferred over a scalar calibration factor. On the one hand, estimating a scalar factor is easier than estimating a calibration function; on the other hand, the calibration results may improve using a calibration function. This study performs a simulation study to compare the two calibration strategies for different ranges of data characteristics (i.e., sample mean and variance) as well as the sample size. A measure of prediction accuracy is used to compare the two methods. The results show that as the sample size increases, or the variation of the data decreases, the calibration function performs better than the scalar calibration factor. If the analyst can collect a sample of at least 150 locations, the calibration function is recommended over the scalar factor. If the HSM recommendation of 30-50 locations is used and the analyst desires better accuracy, the calibration function is recommended only if the coefficient of variation of the data is less than 2. Otherwise, the calibration factor yields better results.
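The two calibration strategies being compared can be contrasted in a few lines: a scalar factor C = Σobserved/Σpredicted versus a calibration function such as observed ≈ a·predicted^b fitted by least squares on the log scale. The power form is one common choice of calibration function; the study's exact specification may differ, and all counts below are invented:

```python
import math

# Hypothetical HSM-predicted and locally observed crash counts per site
predicted = [1.2, 2.5, 0.8, 3.1, 1.9, 4.2]
observed = [2, 3, 1, 5, 2, 6]

# Strategy 1: scalar calibration factor
C = sum(observed) / sum(predicted)

# Strategy 2: calibration function obs ~ a * pred**b,
# fitted by ordinary least squares on the log-log scale
xs = [math.log(p) for p in predicted]
ys = [math.log(o) for o in observed]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
a = math.exp(ybar - b * xbar)

print(round(C, 3), round(a, 3), round(b, 3))
```

The scalar factor rescales every prediction by the same amount, while the fitted function can bend the relationship (b ≠ 1), which is why it can win when there is enough low-variance data to estimate two parameters reliably.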

1 citation