Journal ArticleDOI

Comparison of parametric and nonparametric models for traffic flow forecasting

TL;DR: This research effort examines the theoretical foundation of nonparametric regression and asks whether nonparametric regression based on heuristically improved forecast generation methods can approach the single-interval traffic flow prediction performance of seasonal ARIMA models.
Abstract: Single point short-term traffic flow forecasting will play a key role in supporting demand forecasts needed by operational network models. Seasonal autoregressive integrated moving average (ARIMA), a classic parametric modeling approach to time series, and nonparametric regression models have been proposed as well suited for application to single point short-term traffic flow forecasting. Past research has shown seasonal ARIMA models to deliver results that are statistically superior to basic implementations of nonparametric regression. However, the advantages associated with a data-driven nonparametric forecasting approach motivate further investigation of refined nonparametric forecasting methods. Following this motivation, this research effort examines the theoretical foundation of nonparametric regression and asks whether nonparametric regression based on heuristically improved forecast generation methods can approach the single-interval traffic flow prediction performance of seasonal ARIMA models.
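As an illustration of the nonparametric side of this comparison, a minimal nearest-neighbor regression forecaster can be sketched in a few lines. This is a toy sketch only: the lag length, distance metric, and flow values below are invented for illustration and are not the paper's heuristics.

```python
import math

def knn_forecast(history, d=3, k=2):
    """Forecast the next flow value via k-NN on lag vectors of length d.

    The current "state" is the last d observations; the forecast averages
    the values that followed the k most similar historical states.
    """
    state = history[-d:]
    scored = []
    for i in range(d, len(history)):
        past = history[i - d:i]                       # candidate historical state
        scored.append((math.dist(past, state), history[i]))
    scored.sort(key=lambda pair: pair[0])             # nearest states first
    neighbors = [successor for _, successor in scored[:k]]
    return sum(neighbors) / len(neighbors)

# A flow series with a repeating pattern: the nearest historical match
# predicts the next value in the cycle.
flows = [10, 12, 11, 10, 12, 11, 10, 12]
prediction = knn_forecast(flows, d=3, k=1)
```

In practice the design choices the paper studies heuristically (state definition, neighbor count, distance weighting) dominate the accuracy of such a forecaster.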
Citations
Journal ArticleDOI
TL;DR: A novel deep-learning-based traffic flow prediction method is proposed that inherently considers spatial and temporal correlations; to the authors' knowledge, it is the first application of a deep architecture that uses autoencoders as building blocks to represent traffic flow features for prediction.
Abstract: Accurate and timely traffic flow information is important for the successful deployment of intelligent transportation systems. Over the last few years, traffic data have been exploding, and we have truly entered the era of big data for transportation. Existing traffic flow prediction methods mainly use shallow traffic prediction models and are still unsatisfying for many real-world applications. This situation inspires us to rethink the traffic flow prediction problem based on deep architecture models with big traffic data. In this paper, a novel deep-learning-based traffic flow prediction method is proposed, which considers the spatial and temporal correlations inherently. A stacked autoencoder model is used to learn generic traffic flow features, and it is trained in a greedy layerwise fashion. To the best of our knowledge, this is the first time that a deep architecture model is applied using autoencoders as building blocks to represent traffic flow features for prediction. Moreover, experiments demonstrate that the proposed method for traffic flow prediction has superior performance.
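The greedy layerwise idea can be sketched on toy data. Assumptions in this sketch: tied-weight linear autoencoder layers, plain SGD, and made-up two-dimensional "features"; the cited work stacks nonlinear autoencoders over real traffic measurements.

```python
import random

def train_layer(data, dim_hidden, lr=0.01, epochs=300, seed=0):
    """Train one tied-weight linear autoencoder layer on reconstruction MSE."""
    dim_in = len(data[0])
    rng = random.Random(seed)
    W = [[rng.uniform(-0.1, 0.1) for _ in range(dim_in)] for _ in range(dim_hidden)]
    for _ in range(epochs):
        for x in data:
            h = [sum(W[j][i] * x[i] for i in range(dim_in)) for j in range(dim_hidden)]
            xhat = [sum(W[j][i] * h[j] for j in range(dim_hidden)) for i in range(dim_in)]
            e = [xhat[i] - x[i] for i in range(dim_in)]
            # Gradient of 0.5 * ||xhat - x||^2 with respect to the tied weights.
            grad = [[e[i] * h[j] + sum(e[p] * W[j][p] for p in range(dim_in)) * x[i]
                     for i in range(dim_in)] for j in range(dim_hidden)]
            for j in range(dim_hidden):
                for i in range(dim_in):
                    W[j][i] -= lr * grad[j][i]
    return W

def encode(W, x):
    return [sum(W[j][i] * x[i] for i in range(len(x))) for j in range(len(W))]

def reconstruction_error(W, data):
    total = 0.0
    for x in data:
        h = encode(W, x)
        xhat = [sum(W[j][i] * h[j] for j in range(len(W))) for i in range(len(x))]
        total += sum((xhat[i] - x[i]) ** 2 for i in range(len(x)))
    return total / len(data)

# Greedy layerwise pretraining: train the first layer on raw data, then
# train the next layer on the first layer's codes.
data = [[v, 2.0 * v] for v in (0.1, 0.2, 0.3, 0.4, 0.5)]
W1 = train_layer(data, dim_hidden=1)
codes = [encode(W1, x) for x in data]
W2 = train_layer(codes, dim_hidden=1)
```

Each layer is trained on the previous layer's output in isolation, which is the "greedy layerwise fashion" the abstract describes; a full system would then fine-tune the stack end-to-end with a prediction layer on top.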

2,306 citations


Cites methods from "Comparison of parametric and nonpar..."

  • ...The SARIMA models and the nonparametric regression forecasting methods were evaluated in [49]....


Journal ArticleDOI
TL;DR: A comparison with different topologies of dynamic neural networks as well as other prevailing parametric and nonparametric algorithms suggests that LSTM NN can achieve the best prediction performance in terms of both accuracy and stability.
Abstract: Neural networks have been extensively applied to short-term traffic prediction in the past years. This study proposes a novel neural network architecture, the Long Short-Term Memory Neural Network (LSTM NN), to capture nonlinear traffic dynamics in an effective manner. The LSTM NN can overcome the issue of back-propagated error decay through memory blocks, and thus exhibits superior capability for time series prediction with long temporal dependency. In addition, the LSTM NN can automatically determine the optimal time lags. To validate the effectiveness of the LSTM NN, travel speed data from traffic microwave detectors in Beijing are used for model training and testing. A comparison with different topologies of dynamic neural networks, as well as other prevailing parametric and nonparametric algorithms, suggests that the LSTM NN achieves the best prediction performance in terms of both accuracy and stability.
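A from-scratch sketch of one LSTM memory-block step shows why the architecture resists back-propagated error decay. All weights below are illustrative constants chosen to demonstrate the gating behavior, not values learned from the Beijing data.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    """One scalar LSTM step: gates decide what to forget, write, and emit."""
    f = sigmoid(p["wf"] * x + p["uf"] * h_prev + p["bf"])    # forget gate
    i = sigmoid(p["wi"] * x + p["ui"] * h_prev + p["bi"])    # input gate
    g = math.tanh(p["wg"] * x + p["ug"] * h_prev + p["bg"])  # candidate value
    c = f * c_prev + i * g  # additive cell update: the path through c avoids
                            # the repeated multiplications that make errors
                            # decay in plain recurrent networks
    o = sigmoid(p["wo"] * x + p["uo"] * h_prev + p["bo"])    # output gate
    return o * math.tanh(c), c

# With the forget gate saturated open and the input gate shut, the cell
# retains its state across a long input sequence.
p = {k: 0.0 for k in ("wf", "uf", "wi", "ui", "wg", "ug", "wo", "uo", "bg", "bo")}
p["bf"], p["bi"] = 10.0, -10.0
h, c = 0.0, 1.0
for speed in [42.0] * 100:       # 100 time steps of a (scaled) speed input
    h, c = lstm_step(speed, h, c, p)
```

After 100 steps the cell state is still close to its initial value of 1.0, which is the memory-block property the abstract credits for handling long temporal dependency.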

1,521 citations


Cites background from "Comparison of parametric and nonpar..."

  • ...In the past decades, a number of ARIMA based time series models have been proposed for traffic prediction (Williams and Hoel, 2003; Smith et al., 2002; Williams, 2001; Chandra and Al-Deek, 2009)....


Journal ArticleDOI
TL;DR: This article presents the theoretical basis for modeling univariate traffic condition data streams as seasonal autoregressive integrated moving average processes, together with empirical results using actual intelligent transportation system data that are found to be consistent with the theoretical hypothesis.
Abstract: This article presents the theoretical basis for modeling univariate traffic condition data streams as seasonal autoregressive integrated moving average processes. This foundation rests on the Wold decomposition theorem and on the assertion that a one-week lagged first seasonal difference applied to discrete interval traffic condition data will yield a weakly stationary transformation. Moreover, empirical results using actual intelligent transportation system data are presented and found to be consistent with the theoretical hypothesis. Conclusions are given on the implications of these assertions and findings relative to ongoing intelligent transportation systems research, deployment, and operations.
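The one-week lagged first seasonal difference described above can be sketched directly. The toy season length and flow values here are invented for illustration; for flow measured every 15 minutes, one week spans 7 * 24 * 4 = 672 intervals.

```python
def seasonal_difference(series, season):
    """One-season-lagged first difference: z[t] = x[t] - x[t - season]."""
    return [series[t] - series[t - season] for t in range(season, len(series))]

# A perfectly week-periodic flow series differences to exactly zero,
# i.e., to a (trivially) weakly stationary series.
week = [120, 340, 510, 480, 300]     # one toy "week" of flows (season = 5)
flows = week * 4                     # four identical weeks
diffed = seasonal_difference(flows, season=len(week))
```

Real traffic data are of course not exactly periodic, but the claim is that this transformation removes enough of the weekly pattern for the remainder to be modeled as a weakly stationary process.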

1,406 citations

Journal ArticleDOI
TL;DR: The OL-SVR model is compared with three well-known prediction models: Gaussian maximum likelihood (GML), Holt exponential smoothing, and artificial neural network models. The comparison suggests that GML, which relies heavily on the recurring characteristics of day-to-day traffic, performs slightly better than the other models under typical traffic conditions, as demonstrated by previous studies.
Abstract: Most literature on short-term traffic flow forecasting has focused mainly on normal, or non-incident, conditions, limiting its applicability to the situations in which traffic flow forecasting is most needed, i.e., incident and atypical conditions. Accurate prediction of short-term traffic flow under atypical conditions, such as vehicular crashes, inclement weather, work zones, and holidays, is crucial to effective and proactive traffic management in the context of intelligent transportation systems (ITS) and, more specifically, dynamic traffic assignment (DTA). To this end, this paper presents an application of a supervised statistical learning technique called Online Support Vector machine for Regression, or OL-SVR, for the prediction of short-term freeway traffic flow under both typical and atypical conditions. The OL-SVR model is compared with three well-known prediction models: Gaussian maximum likelihood (GML), Holt exponential smoothing, and artificial neural network models. The resultant performance comparisons suggest that GML, which relies heavily on the recurring characteristics of day-to-day traffic, performs slightly better than the other models under typical traffic conditions, as demonstrated by previous studies. Yet OL-SVR is the best performer under non-recurring, atypical traffic conditions. It appears that for deployed ITS that must respond in a timely manner to real-world atypical and incident situations, OL-SVR may be a better tool than GML.
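One of the named baselines, Holt exponential smoothing, is simple enough to sketch from scratch. The smoothing constants and data below are arbitrary choices for the sketch, not values from the study.

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    """h-step-ahead forecast from Holt's linear exponential smoothing.

    Tracks a smoothed level and a smoothed trend, then extrapolates:
    forecast = level + horizon * trend.
    """
    level = series[1]                    # initialize from the first two points
    trend = series[1] - series[0]
    for y in series[2:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

# On an exactly linear series the method recovers the trend, so the
# one-step-ahead forecast continues the line.
forecast = holt_forecast([10.0, 12.0, 14.0, 16.0, 18.0])
```

The method's reliance on smooth level-plus-trend behavior is precisely why, like GML, it degrades under the abrupt, non-recurring conditions the paper targets with OL-SVR.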

644 citations


Cites methods from "Comparison of parametric and nonpar..."

  • ...Several other techniques have been applied to predict real-time traffic flow, including multivariate state space time series (Stathopoulos & Karlaftis, 2003), multivariate non-parametric regression (Clark, 2003; Smith & Demetsky, 1996), nearest neighbor non-parametric regression (Davis & Nihan, 1991; Smith et al., 2002), dynamic generalized linear models (Lan & Miaou, 1999), and Kalman filtering models (Okutani & Stephanedes, 1984)....


Journal ArticleDOI
TL;DR: Past research is extended by providing an advanced, genetic algorithm based, multilayered structural optimization strategy that can assist both in the proper representation of traffic flow data with temporal and spatial characteristics as well as in the selection of the appropriate neural network structure.
Abstract: Short-term forecasting of traffic parameters such as flow and occupancy is an essential element of modern Intelligent Transportation Systems research and practice. Although many different methodologies have been used for short-term predictions, literature suggests neural networks as one of the best alternatives for modeling and predicting traffic parameters. However, because of limited knowledge regarding a network’s optimal structure given a specific dataset, researchers have to rely on time consuming and questionably efficient rules-of-thumb when developing them. This paper extends past research by providing an advanced, genetic algorithm based, multilayered structural optimization strategy that can assist both in the proper representation of traffic flow data with temporal and spatial characteristics as well as in the selection of the appropriate neural network structure. Further, it evaluates the performance of the developed network by applying it to both univariate and multivariate traffic flow data from an urban signalized arterial. The results show that the capabilities of a simple static neural network, with genetically optimized step size, momentum and number of hidden units, are very satisfactory when modeling both univariate and multivariate traffic data.
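The genetic-algorithm search over network hyperparameters can be sketched with a stand-in fitness function. Everything below is invented for the sketch: the surrogate objective (with a known optimum), population size, and operators. The paper itself scores candidates by actually training networks and measuring their performance.

```python
import random

def fitness(step_size, momentum, hidden_units):
    # Hypothetical surrogate: best at step_size=0.1, momentum=0.9, hidden=16.
    # A real application would return negative validation error here.
    return -((step_size - 0.1) ** 2
             + (momentum - 0.9) ** 2
             + ((hidden_units - 16) / 16.0) ** 2)

def evolve(pop_size=30, generations=40, seed=0):
    """Evolve (step size, momentum, hidden units) genomes with a simple GA."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0), rng.randint(1, 64))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(*g), reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = (a[0], b[1], a[2])          # simple gene-level crossover
            if rng.random() < 0.3:              # mutation, clipped to bounds
                child = (min(1.0, max(0.0, child[0] + rng.gauss(0.0, 0.05))),
                         min(1.0, max(0.0, child[1] + rng.gauss(0.0, 0.05))),
                         min(64, max(1, child[2] + rng.randint(-4, 4))))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda g: fitness(*g))

best = evolve()
```

Because selection is elitist, the best genome never degrades between generations; the paper's multilayered strategy additionally evolves the input representation, which this toy omits.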

594 citations

References
Book
01 Jan 1970
TL;DR: A complete revision of a classic, seminal, and authoritative book that has been the model for most books on the topic written since 1970, focusing on practical techniques throughout rather than a rigorous mathematical treatment of the subject.
Abstract: From the Publisher: This is a complete revision of a classic, seminal, and authoritative book that has been the model for most books on the topic written since 1970. It focuses on practical techniques throughout, rather than a rigorous mathematical treatment of the subject. It explores the building of stochastic (statistical) models for time series and their use in important areas of application —forecasting, model specification, estimation, and checking, transfer function modeling of dynamic relationships, modeling the effects of intervention events, and process control. Features sections on: recently developed methods for model specification, such as canonical correlation analysis and the use of model selection criteria; results on testing for unit root nonstationarity in ARIMA processes; the state space representation of ARMA models and its use for likelihood estimation and forecasting; score test for model checking; and deterministic components and structural components in time series models and their estimation based on regression-time series model methods.

19,748 citations

Journal ArticleDOI
TL;DR: This revision of a classic, seminal, and authoritative book explores the building of stochastic models for time series and their use in important areas of application —forecasting, model specification, estimation, and checking, transfer function modeling of dynamic relationships, modeling the effects of intervention events, and process control.
Abstract: From the Publisher: This is a complete revision of a classic, seminal, and authoritative book that has been the model for most books on the topic written since 1970. It focuses on practical techniques throughout, rather than a rigorous mathematical treatment of the subject. It explores the building of stochastic (statistical) models for time series and their use in important areas of application —forecasting, model specification, estimation, and checking, transfer function modeling of dynamic relationships, modeling the effects of intervention events, and process control. Features sections on: recently developed methods for model specification, such as canonical correlation analysis and the use of model selection criteria; results on testing for unit root nonstationarity in ARIMA processes; the state space representation of ARMA models and its use for likelihood estimation and forecasting; score test for model checking; and deterministic components and structural components in time series models and their estimation based on regression-time series model methods.

12,650 citations

Journal ArticleDOI
TL;DR: Time Series Analysis and Forecasting: Principles and Practice, as mentioned in this paper; The Oxford Handbook of Quantitative Methods, Vol. 2: Statistical Analysis; Time-Series Forecasting; Practical Time Series Analysis; Applied Bayesian Forecasting and Time Series Analysis; SAS for Forecasting Time Series; Applied Time Series Analysis; Time Series Analysis; Elements of Nonlinear Time Series Analysis and Forecasting; Time Series Analysis and Forecasting by Example.
Abstract: Advances in Time Series Analysis and Forecasting; The Analysis of Time Series; Forecasting: Principles and Practice; Introduction to Time Series Analysis and Forecasting; The Oxford Handbook of Quantitative Methods, Vol. 2: Statistical Analysis; Time-Series Forecasting; Practical Time Series Analysis; Applied Bayesian Forecasting and Time Series Analysis; SAS for Forecasting Time Series; Applied Time Series Analysis; Time Series Analysis; Elements of Nonlinear Time Series Analysis and Forecasting; Time Series Analysis and Forecasting by Example; Introduction to Time Series Analysis and Forecasting; Time Series Analysis and Adjustment; Spatial Time Series; Practical Time Series Forecasting with R; A Very British Affair; Machine Learning for Time Series Forecasting with Python; Time Series with Python; Time Series Analysis: Forecasting & Control, 3/E; Introduction to Time Series Forecasting with Python; The Analysis of Time Series; Time Series Analysis and Its Applications; Forecasting and Time Series Analysis; Introduction to Time Series and Forecasting; Introduction to Time Series Analysis and Forecasting; Time Series Analysis in the Social Sciences; Practical Time Series Analysis; Time Series Analysis and Forecasting; Theory and Applications of Time Series Analysis; Applied Time Series; SAS for Forecasting Time Series, Third Edition; Time Series Analysis; Predictive Modeling Applications in Actuarial Science; Introductory Time Series with R; Hands-On Time Series Analysis with R; Advances in Time Series Forecasting; Time Series Analysis and Forecasting Using Python & R; Advanced Time Series Data Analysis.

6,184 citations

Book
01 Jun 1976
TL;DR: This book covers moving average and autoregressive processes, Fourier analysis, spectral theory and filtering, large sample theory, estimation of the mean and autocorrelations, the periodogram and estimated spectrum, parameter estimation, regression, trend, and seasonality, and unit root and explosive time series.
Abstract: Moving Average and Autoregressive Processes. Introduction to Fourier Analysis. Spectral Theory and Filtering. Some Large Sample Theory. Estimation of the Mean and Autocorrelations. The Periodogram, Estimated Spectrum. Parameter Estimation. Regression, Trend, and Seasonality. Unit Root and Explosive Time Series. Bibliography. Index.

4,532 citations

Journal ArticleDOI
TL;DR: A general approach to time series modelling and to modelling and prediction with ARMA processes, covering stationary processes, the autocorrelation function, forecasting (including prediction of a stationary process in terms of infinitely many past values), the Wold decomposition, and spectral analysis.
Abstract: Preface
1 INTRODUCTION
1.1 Examples of Time Series
1.2 Objectives of Time Series Analysis
1.3 Some Simple Time Series Models
1.3.3 A General Approach to Time Series Modelling
1.4 Stationary Models and the Autocorrelation Function
1.4.1 The Sample Autocorrelation Function
1.4.2 A Model for the Lake Huron Data
1.5 Estimation and Elimination of Trend and Seasonal Components
1.5.1 Estimation and Elimination of Trend in the Absence of Seasonality
1.5.2 Estimation and Elimination of Both Trend and Seasonality
1.6 Testing the Estimated Noise Sequence
1.7 Problems
2 STATIONARY PROCESSES
2.1 Basic Properties
2.2 Linear Processes
2.3 Introduction to ARMA Processes
2.4 Properties of the Sample Mean and Autocorrelation Function
2.4.2 Estimation of $\gamma(\cdot)$ and $\rho(\cdot)$
2.5 Forecasting Stationary Time Series
2.5.3 Prediction of a Stationary Process in Terms of Infinitely Many Past Values
2.6 The Wold Decomposition
Problems
3 ARMA MODELS
3.1 ARMA($p,q$) Processes
3.2 The ACF and PACF of an ARMA($p,q$) Process
3.2.1 Calculation of the ACVF
3.2.2 The Autocorrelation Function
3.2.3 The Partial Autocorrelation Function
3.3 Forecasting ARMA Processes
Problems
4 SPECTRAL ANALYSIS
4.1 Spectral Densities
4.2 The Periodogram
4.3 Time-Invariant Linear Filters
4.4 The Spectral Density of an ARMA Process
Problems
5 MODELLING AND PREDICTION WITH ARMA PROCESSES
5.1 Preliminary Estimation
5.1.1 Yule-Walker Estimation
5.1.3 The Innovations Algorithm
5.1.4 The Hannan-Rissanen Algorithm
5.2 Maximum Likelihood Estimation
5.3 Diagnostic Checking
5.3.1 The Graph of $\{\hat{W}_t,\ t=1,\ldots,n\}$
5.3.2 The Sample ACF of the Residuals

3,732 citations