Author

Xiaolei Ma

Bio: Xiaolei Ma is an academic researcher from Beihang University. The author has contributed to research in the topics of Artificial neural network and Deep learning. The author has an h-index of 29 and has co-authored 83 publications receiving 5,977 citations. Previous affiliations of Xiaolei Ma include the Chinese Ministry of Public Security and the University of Washington.

Papers published on a yearly basis

Papers
Journal ArticleDOI
TL;DR: In this paper, a series of data-mining algorithms is presented to extract an individual truck's trip-chaining information from multiday GPS data; the results showed that 51% of the trucks in the data set had at least one trip chain.
Abstract: Freight systems are a critical yet complex component of the transportation domain. Understanding the dynamics of freight movements will help in better managing freight demand and eventually improve freight system efficiency. This paper presents a series of data-mining algorithms to extract an individual truck's trip-chaining information from multiday GPS data. Individual trucks' anchor points were identified with the density-based spatial clustering of applications with noise (DBSCAN) algorithm. The anchor points were linked to construct individual trucks' trip chains with 3-day GPS data, which showed that 51% of the trucks in the data set had at least one trip chain. A partitioning around medoids (PAM) nonhierarchical clustering algorithm was applied to group trucks with similar trip-chaining characteristics. Four clusters were generated and validated by visual inspection, which confirmed that the trip-chaining statistics were distinct from each other. This study sheds light on modeling freight-chaining behavior.
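The anchor-point identification step rests on DBSCAN. Below is a minimal sketch with a toy pure-NumPy DBSCAN run on synthetic GPS fixes; the point coordinates and the `eps`/`min_pts` values are illustrative assumptions, not the paper's data or settings:

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one cluster label per point (-1 = noise)."""
    n = len(points)
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    # precompute neighborhoods from pairwise Euclidean distances
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        if len(neighbors[i]) < min_pts:
            continue                      # not a core point (noise, for now)
        labels[i] = cluster
        seeds = list(neighbors[i])
        while seeds:                      # grow the cluster from core points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster
            if not visited[j]:
                visited[j] = True
                if len(neighbors[j]) >= min_pts:
                    seeds.extend(neighbors[j])
        cluster += 1
    return labels

# synthetic "GPS fixes": two frequented stops plus one stray reading
fixes = np.array([[0, 0], [0.1, 0], [0, 0.1], [0.1, 0.1],
                  [5, 5], [5.1, 5], [5, 5.1], [5.1, 5.1],
                  [10, 10]], float)
labels = dbscan(fixes, eps=0.5, min_pts=3)  # two anchor clusters, one noise point
```

On real truck traces, the dense clusters of fixes become the anchor points that are then linked into trip chains.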

22 citations

Journal ArticleDOI
21 Sep 2017-Sensors
TL;DR: Copula-based models are proposed for the spatial interpolation of traffic flow from remote traffic microwave sensors and demonstrate significant potential to impute missing data in large-scale transportation networks.
Abstract: Issues of missing data have become increasingly serious with the rapid increase in the usage of traffic sensors. Analyses of the Beijing ring expressway have shown that up to 50% of microwave sensors exhibit missing values. The imputation of missing traffic data urgently needs to be solved, although a precise solution cannot easily be achieved due to the significant number of missing portions. In this study, copula-based models are proposed for the spatial interpolation of traffic flow from remote traffic microwave sensors. Most existing interpolation methods rely only on covariance functions to depict spatial correlation and are unsuitable for coping with anomalies due to the Gaussian assumption. Copula theory overcomes this issue and provides a connection between the correlation function and the marginal distribution function of traffic flow. To validate the copula-based models, a comparison with three kriging methods is conducted. Results indicate that copula-based models outperform kriging methods, especially on roads with irregular traffic patterns. Copula-based models demonstrate significant potential to impute missing data in large-scale transportation networks.
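The core copula idea, separating the dependence structure from the (non-Gaussian) marginals, can be sketched with a Gaussian copula in plain Python/NumPy. Everything here is an illustrative assumption, not the paper's model: the lognormal flows, the 0.8 latent correlation, and the conditional-mean imputation rule.

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()

def normal_scores(x):
    """Empirical-CDF transform to standard-normal scores (the copula step)."""
    ranks = np.argsort(np.argsort(x)) + 1                # ranks 1..n
    return np.array([nd.inv_cdf(r / (len(x) + 1)) for r in ranks])

# synthetic flows from two correlated sensors with skewed (lognormal) marginals
rng = np.random.default_rng(0)
L = np.linalg.cholesky(np.array([[1.0, 0.8], [0.8, 1.0]]))
flow = np.exp(rng.normal(size=(200, 2)) @ L.T)

s1, s2 = normal_scores(flow[:, 0]), normal_scores(flow[:, 1])
rho = np.corrcoef(s1, s2)[0, 1]          # dependence measured on the normal scale

# Gaussian-copula conditional mean E[s2 | s1] = rho * s1, mapped back to
# the flow scale through sensor 2's empirical quantile function
u_hat = np.array([nd.cdf(rho * s) for s in s1])
imputed = np.quantile(flow[:, 1], u_hat)  # interpolated flows for sensor 2
```

Because the dependence is estimated on rank-transformed scores, the skewed marginals do not distort it, which is the property that lets copula models cope where Gaussian-assumption methods struggle.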

22 citations

Journal ArticleDOI
TL;DR: A trajectory reconstruction model integrating the technique for order preference by similarity to an ideal solution (TOPSIS) and depth-first search is built to handle vehicles' incomplete records; results show that the method is affected by the number of missing records.
Abstract: Using perception data to excavate vehicle travel information has been a popular area of study. In order to learn the vehicle travel characteristics in the city of Ruian, we developed a common method…

22 citations

Journal ArticleDOI
TL;DR: A two-stage heuristic method integrating an initial heuristic algorithm and a hybrid heuristic algorithm is proposed to study the VRPSPDP problem; the results indicate that the proposed algorithm is superior to three other algorithms for VRPSPDP in terms of total travel cost and average loading rate.
Abstract: The vehicle routing problem (VRP) is a well-known combinatorial optimization problem in transportation and logistics network systems. Several limitations are associated with the traditional VRP, and relaxing its restrictive conditions has become a research focus over the past few decades. The vehicle routing problem with split deliveries and pickups (VRPSPDP) is proposed specifically to relax the constraints on the number of visits per customer and on vehicle capacity, that is, to allow the deliveries and pickups for each customer to be split more than once. Few studies have focused on the VRPSPDP problem. In this paper, we propose a two-stage heuristic method integrating an initial heuristic algorithm and a hybrid heuristic algorithm to study the VRPSPDP problem. To validate the proposed algorithm, the Solomon benchmark datasets and extended Solomon benchmark datasets were modified for comparison with three other popular algorithms. A total of 18 datasets were used to evaluate the effectiveness of the proposed method. The computational results indicated that the proposed algorithm is superior to these three algorithms for the VRPSPDP in terms of total travel cost and average loading rate.
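A rough feel for the construction stage can be given by a greedy nearest-neighbor heuristic that splits a customer's delivery whenever it exceeds the remaining vehicle capacity. This toy sketch merely stands in for the paper's initial heuristic stage; the instance data, the depot at index 0, and the splitting rule are all assumptions:

```python
import numpy as np

# toy instance (assumed data): depot at index 0, one vehicle type
coords = np.array([[0, 0], [2, 1], [2, -1], [5, 0], [6, 2]], float)
demand = [0, 4, 3, 6, 5]           # units to deliver per customer
capacity = 8                       # vehicle capacity

def greedy_split_routes(coords, demand, capacity):
    """Nearest-neighbor route construction that may split a customer's
    demand across vehicles when it exceeds the remaining capacity."""
    remaining = list(demand)
    routes = []
    while any(r > 0 for r in remaining[1:]):
        load, pos, route = capacity, 0, [0]        # fresh vehicle at the depot
        while load > 0:
            pending = [i for i in range(1, len(demand)) if remaining[i] > 0]
            if not pending:
                break
            nxt = min(pending, key=lambda i: np.linalg.norm(coords[i] - coords[pos]))
            served = min(load, remaining[nxt])     # split if demand exceeds room
            remaining[nxt] -= served
            load -= served
            route.append(nxt)
            pos = nxt
        routes.append(route + [0])                 # return to the depot
    return routes

routes = greedy_split_routes(coords, demand, capacity)
```

The paper's second (hybrid heuristic) stage would then improve such an initial solution, e.g. by reassigning split portions between routes to cut total travel cost.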

19 citations

Journal ArticleDOI
13 Jan 2016-PLOS ONE
TL;DR: An innovative computational approach to accurately estimate OD matrices using link-level traffic flow data is proposed, and useful insight for optimal parameter selection in modeling travelers’ route choice behavior is provided.
Abstract: This paper proposes a two-stage algorithm to simultaneously estimate the origin-destination (OD) matrix, link choice proportions, and the dispersion parameter using partial traffic counts in a congested network. A nonlinear optimization model incorporating a dynamic dispersion parameter is developed, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are applied iteratively until convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error and tested under a range of variation coefficients. The root mean squared errors (RMSEs) of the estimated OD demand and link flows are used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficient. The proposed approach is shown to outperform two established OD estimation methods and to produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA, to validate the robustness and practicality of the methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers' route choice behavior.
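For a fixed link-choice proportion matrix, the GLS half of the two-stage loop reduces to a weighted least-squares solve. A minimal sketch with an assumed 4-link, 3-OD-pair network and noiseless counts; in the full algorithm the matrix `P` would come from the SUE assignment and be re-estimated at every iteration:

```python
import numpy as np

# toy network: 3 OD pairs, 4 counted links; P[l, k] is the assumed share of
# OD pair k's demand that uses link l (supplied by the SUE step in the paper)
P = np.array([[1.0, 0.0, 0.3],
              [0.0, 1.0, 0.7],
              [0.5, 0.5, 0.0],
              [0.5, 0.0, 1.0]])
q_true = np.array([100.0, 200.0, 150.0])   # ground-truth OD demand (assumed)
counts = P @ q_true                        # noiseless link counts for the demo

W = np.eye(len(counts))    # GLS weight matrix; in practice the inverse
                           # covariance of the count errors, identity here
q_hat = np.linalg.solve(P.T @ W @ P, P.T @ W @ counts)   # (P'WP)^-1 P'W c
```

With noiseless counts and a full-column-rank `P`, the GLS step recovers the demand exactly; with noisy partial counts, the alternation with SUE assignment is what pulls the estimate toward equilibrium-consistent flows.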

18 citations


Cited by
Journal ArticleDOI
TL;DR: A comparison with different topologies of dynamic neural networks as well as other prevailing parametric and nonparametric algorithms suggests that LSTM NN can achieve the best prediction performance in terms of both accuracy and stability.
Abstract: Neural networks have been extensively applied to short-term traffic prediction in the past years. This study proposes a novel neural network architecture, the Long Short-Term Memory Neural Network (LSTM NN), to capture nonlinear traffic dynamics in an effective manner. The LSTM NN can overcome the issue of back-propagated error decay through memory blocks, and thus exhibits a superior capability for time-series prediction with long temporal dependency. In addition, the LSTM NN can automatically determine the optimal time lags. To validate the effectiveness of the LSTM NN, travel speed data from traffic microwave detectors in Beijing are used for model training and testing. A comparison with different topologies of dynamic neural networks as well as other prevailing parametric and nonparametric algorithms suggests that the LSTM NN can achieve the best prediction performance in terms of both accuracy and stability.
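The memory-block mechanics that let errors flow across long lags can be sketched as a single NumPy LSTM step. The random weights, the hidden size, and the toy speed series are assumptions for illustration, not the trained Beijing model:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gate pre-activations [i, f, o, g] are stacked in W, U, b."""
    z = W @ x + U @ h + b
    H = len(h)
    # input, forget, and output gates (sigmoid), candidate cell input (tanh)
    i, f, o = (1.0 / (1.0 + np.exp(-z[k * H:(k + 1) * H])) for k in range(3))
    g = np.tanh(z[3 * H:])
    c_new = f * c + i * g           # memory cell: lets errors flow across long lags
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# assumed sizes: 1 input feature (normalized speed), hidden size 4, random weights
rng = np.random.default_rng(1)
H, D = 4, 1
W, U, b = rng.normal(size=(4 * H, D)), rng.normal(size=(4 * H, H)), np.zeros(4 * H)

h = c = np.zeros(H)
for speed in [52.0, 48.5, 47.0, 45.2]:   # toy travel-speed series (km/h)
    h, c = lstm_step(np.array([speed / 50.0]), h, c, W, U, b)
# h is the hidden state a readout layer would map to the next-interval speed
```

The additive cell update `c_new = f * c + i * g` is the point of contrast with a vanilla RNN: gradients pass through it without being repeatedly squashed, which is why long temporal dependencies survive training.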

1,521 citations

Journal ArticleDOI
Zheng Zhao, Weihai Chen, Xingming Wu, Peter C. Y. Chen, Jingmeng Liu
TL;DR: A novel traffic forecast model based on long short-term memory (LSTM) network is proposed, which considers temporal-spatial correlation in traffic system via a two-dimensional network which is composed of many memory units.
Abstract: Short-term traffic forecasting is one of the essential issues in intelligent transportation systems. An accurate forecast enables commuters to choose appropriate travel modes, travel routes, and departure times, which is meaningful for traffic management. To improve forecast accuracy, a feasible way is to develop a more effective approach for traffic data analysis. Abundant traffic data and computation power have emerged in recent years, which motivates us to improve the accuracy of short-term traffic forecasting via deep learning approaches. A novel traffic forecast model based on the long short-term memory (LSTM) network is proposed. Different from conventional forecast models, the proposed LSTM network considers the temporal-spatial correlation in the traffic system via a two-dimensional network composed of many memory units. A comparison with other representative forecast models validates that the proposed LSTM network can achieve a better performance.

1,204 citations

Journal ArticleDOI
TL;DR: In this article, a novel neural network-based traffic forecasting method, the temporal graph convolutional network (T-GCN) model, which combines the graph convolutional network (GCN) and the gated recurrent unit (GRU), is proposed.
Abstract: Accurate and real-time traffic forecasting plays an important role in intelligent traffic systems and is of great significance for urban traffic planning, traffic management, and traffic control. However, traffic forecasting has always been considered an "open" scientific issue, owing to the constraints of the urban road network's topological structure and the law of dynamic change over time. To capture the spatial and temporal dependences simultaneously, we propose a novel neural network-based traffic forecasting method, the temporal graph convolutional network (T-GCN) model, which combines the graph convolutional network (GCN) and the gated recurrent unit (GRU). Specifically, the GCN is used to learn complex topological structures to capture spatial dependence, and the GRU is used to learn the dynamic changes of traffic data to capture temporal dependence. The T-GCN model is then applied to traffic forecasting on the urban road network. Experiments demonstrate that our T-GCN model can obtain the spatio-temporal correlation from traffic data and that its predictions outperform state-of-the-art baselines on real-world traffic datasets. Our TensorFlow implementation of the T-GCN is available at https://www.github.com/lehaifeng/T-GCN .
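The T-GCN composition, a GCN layer for spatial dependence feeding a GRU for temporal dependence, can be sketched in NumPy. The 3-node line graph, random weights, and toy speed series are illustrative assumptions; the authors' repository holds the real implementation:

```python
import numpy as np

def gcn(A, X, W):
    """One GCN layer: D^-1/2 (A + I) D^-1/2 X W with a tanh nonlinearity."""
    A_hat = A + np.eye(len(A))                 # add self-loops
    d = A_hat.sum(axis=1)
    return np.tanh((A_hat / np.sqrt(np.outer(d, d))) @ X @ W)

def gru_step(x, h, Wz, Wr, Wh):
    """Minimal GRU over the concatenated [x, h] (biases omitted)."""
    xh = np.concatenate([x, h])
    z = 1.0 / (1.0 + np.exp(-(Wz @ xh)))       # update gate
    r = 1.0 / (1.0 + np.exp(-(Wr @ xh)))       # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h]))
    return (1.0 - z) * h + z * h_tilde

# toy road graph: 3 segments in a line; the node feature is normalized speed
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
rng = np.random.default_rng(2)
F = 2                                          # GCN output features per node
Wg = rng.normal(size=(1, F))
Wz, Wr, Wh = (rng.normal(size=(3 * F, 6 * F)) for _ in range(3))

h = np.zeros(3 * F)                            # hidden state over all node features
for speed in (50.0, 47.0, 44.0):               # one time step per loop iteration
    X = np.full((3, 1), speed / 50.0)          # same speed on every node, for brevity
    spatial = gcn(A, X, Wg).ravel()            # GCN: spatial dependence
    h = gru_step(spatial, h, Wz, Wr, Wh)       # GRU: temporal dependence
```

Each time step first mixes every road segment's feature with its neighbors' through the normalized adjacency, then lets the GRU carry that spatial summary forward in time, which is the spatio-temporal factorization the abstract describes.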

1,188 citations

Journal ArticleDOI
TL;DR: In this article, the authors provide a thorough overview on using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate the analytics and learning in the IoT domain.
Abstract: In the era of the Internet of Things (IoT), an enormous number of sensing devices collect and/or generate various sensory data over time for a wide range of fields and applications. Based on the nature of the application, these devices will produce big or fast/real-time data streams. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a crucial process that makes IoT a worthy paradigm for businesses and a quality-of-life-improving technology. In this paper, we provide a thorough overview of using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate analytics and learning in the IoT domain. We start by articulating IoT data characteristics and identifying two major treatments for IoT data from a machine learning perspective, namely IoT big data analytics and IoT streaming data analytics. We also discuss why DL is a promising approach to achieve the desired analytics in these types of data and applications. The potential of using emerging DL techniques for IoT data analytics is then discussed, and its promises and challenges are introduced. We present a comprehensive background on different DL architectures and algorithms. We also analyze and summarize major reported research attempts that leveraged DL in the IoT domain. The smart IoT devices that have incorporated DL in their intelligence background are also discussed. DL implementation approaches on fog and cloud centers in support of IoT applications are also surveyed. Finally, we shed light on some challenges and potential directions for future research. At the end of each section, we highlight the lessons learned based on our experiments and review of the recent literature.

903 citations