
Showing papers in "Transportation Research Part C-emerging Technologies in 2013"


Journal ArticleDOI
TL;DR: This study suggests that mobile phone trace data represent a reasonable proxy for individual mobility and show enormous potential as an alternative, more frequently updatable data source and a complement to conventional travel surveys in mobility studies.
Abstract: Large-scale urban sensing data such as mobile phone traces are emerging as an important data source for urban modeling. This study represents a first step towards building a methodology whereby mobile phone data can be more usefully applied to transportation research. In this paper, we present techniques to extract useful mobility information from the mobile phone traces of millions of users to investigate individual mobility patterns within a metropolitan area. The mobile-phone-based mobility measures are compared to mobility measures computed using odometer readings from the annual safety inspections of all private vehicles in the region to check the validity of mobile phone data in characterizing individual mobility and to identify the differences between individual mobility and vehicular mobility. The empirical results can help us understand the intra-urban variation of mobility and the non-vehicular component of overall mobility. More importantly, this study suggests that mobile phone trace data represent a reasonable proxy for individual mobility and show enormous potential as an alternative and more frequently updatable data source and a complement to conventional travel surveys in mobility studies.

527 citations


Journal ArticleDOI
TL;DR: This paper proposes an efficient and effective data-mining procedure that models the travel patterns of transit riders in Beijing, China, identifying trip chains based on the temporal and spatial characteristics of their smart card transaction data.
Abstract: To mitigate the congestion caused by the ever increasing number of privately owned automobiles, public transit is highly promoted by transportation agencies worldwide. A better understanding of travel patterns and regularity at the “magnitude” level will enable transit authorities to evaluate the services they offer, adjust marketing strategies, retain loyal customers and improve overall transit performance. However, it is fairly challenging to identify travel patterns for individual transit riders in a large dataset. This paper proposes an efficient and effective data-mining procedure that models the travel patterns of transit riders in Beijing, China. Transit riders’ trip chains are identified based on the temporal and spatial characteristics of their smart card transaction data. The Density-based Spatial Clustering of Applications with Noise (DBSCAN) algorithm then analyzes the identified trip chains to detect transit riders’ historical travel patterns and the K-Means++ clustering algorithm and the rough-set theory are jointly applied to cluster and classify travel pattern regularities. The performance of the rough-set-based algorithm is compared with those of other prevailing classification algorithms. The results indicate that the proposed rough-set-based algorithm outperforms other commonly used data-mining algorithms in terms of accuracy and efficiency.

510 citations
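The density-based clustering step at the core of the procedure above can be illustrated with a minimal, pure-Python DBSCAN sketch. The boarding coordinates and the `eps`/`min_pts` values below are hypothetical, not taken from the paper:

```python
import math

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id, or -1 for noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1                  # provisionally noise
            continue
        cluster += 1                        # i is a core point: new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster         # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbors(j)
            if len(more) >= min_pts:        # j is also a core point: expand
                queue.extend(more)
    return labels

# two tight groups of boarding locations plus one outlier (all hypothetical)
stops = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1), (20, 20)]
labels = dbscan(stops, eps=0.5, min_pts=2)
```

In the paper's setting the points would be geo-coded boarding/alighting locations of one rider's trip chains, and dense clusters would indicate habitual travel patterns.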


Journal ArticleDOI
TL;DR: This paper provides a broad, but not exhaustive overview of the crowd motion simulation models of the last decades and argues that any model used for crowd simulation should be able to simulate most of the phenomena indicated in this paper.
Abstract: Currently, pedestrian simulation models are used to predict where, when and why hazardous high density crowd movements arise. However, it is questionable whether models developed for low density situations can be used to simulate high density crowd movements. The objective of this paper is to assess the existing pedestrian simulation models with respect to known crowd phenomena in order to ascertain whether these models can indeed be used for the simulation of high density crowds and to indicate any gaps in the field of pedestrian simulation modeling research. This paper provides a broad, but not exhaustive, overview of the crowd motion simulation models of the last decades. It is argued that any model used for crowd simulation should be able to simulate most of the phenomena indicated in this paper. In the paper, cellular automata, social force models, velocity-based models, continuum models, hybrid models, behavioral models and network models are discussed. The comparison shows that the models can roughly be divided into slow but highly precise microscopic modeling attempts and very fast but behaviorally questionable macroscopic modeling attempts. Both sets of models have their use, which is highly dependent on the application the model has originally been developed for. Yet for practical applications that need both precision and speed, the current pedestrian simulation models are inadequate.

407 citations


Journal ArticleDOI
TL;DR: The control of a network of signalized intersections is considered; the advantage of max pressure (MP) over other store-and-forward (SF) network control formulations is that it requires only local information at each intersection and provably maximizes throughput.
Abstract: The control of a network of signalized intersections is considered. Vehicles arrive in iid (independent, identically distributed) streams at entry links, independently make turns at intersections with fixed probabilities or turn ratios, and leave the network upon reaching an exit link. There is a separate queue for each turn movement at each intersection. These are point queues with no limit on storage capacity. At each time the control selects a ‘stage’, which actuates a set of simultaneous vehicle movements at given iid saturation flow rates. Network evolution is modeled as a controlled store-and-forward (SF) queuing network. The control can be a function of the state, which is the vector of all the queue lengths. A set of demands is said to be feasible if there is a control that stabilizes the queues, that is, the time-average of every mean queue length is bounded. The set of feasible demands D is a convex set defined by a collection of linear inequalities involving only the mean values of the demands, turn ratios and saturation rates. If the demands are in the interior Do of D, there is a fixed-time control that stabilizes the queues. The max pressure (MP) control is introduced. At each intersection, MP selects a stage that depends only on the queues adjacent to the intersection. The MP control does not require knowledge of the mean demands. MP stabilizes the network if the demand is in Do. Thus MP maximizes network throughput. MP does require knowledge of mean turn ratios and saturation rates, but an adaptive version of MP will have the same performance, if turn movements and saturation rates can be measured. The advantage of MP over other SF network control formulations is that it (1) only requires local information at each intersection and (2) provably maximizes throughput. Examples show that other local controllers, including priority service and fully actuated control, may not be stabilizing.
Several modifications of MP are offered including one that guarantees minimum green for each approach and another that considers weighted queues; also discussed is the effect of finite storage capacity.

403 citations
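The max pressure selection rule described above admits a very compact sketch: the pressure of a movement is its saturation rate times the difference between the upstream and downstream queues, and MP actuates the stage with maximal total pressure. The queue lengths, saturation rates, and stage definitions below are illustrative assumptions:

```python
def max_pressure_stage(queues, stages, sat_rates, downstream):
    """Pick the stage with maximal pressure at one intersection.

    queues:     dict movement -> queue length on the approach link
    stages:     dict stage -> list of movements it actuates
    sat_rates:  dict movement -> saturation flow rate
    downstream: dict movement -> queue length on the receiving link
                (0 for exit links)
    """
    def pressure(stage):
        return sum(sat_rates[m] * (queues[m] - downstream[m])
                   for m in stages[stage])
    return max(stages, key=pressure)

# hypothetical two-stage intersection: north-south vs east-west movements
queues     = {"NS": 12, "EW": 4}
sat_rates  = {"NS": 1.0, "EW": 1.0}
downstream = {"NS": 2, "EW": 0}
stages     = {"stage_NS": ["NS"], "stage_EW": ["EW"]}
best = max_pressure_stage(queues, stages, sat_rates, downstream)
```

Note how the rule is purely local: it needs only the queues adjacent to this intersection, which is exactly the property the paper highlights.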


Journal ArticleDOI
TL;DR: In this paper, a survey of speed models in maritime transportation is presented, that is, models in which speed is one of the decision variables and a taxonomy of such models is also presented, according to a set of parameters.
Abstract: International shipping accounts for 2.7% of worldwide CO2 emissions, and measures to curb future emissions growth are sought with a high sense of urgency. With the increased quest for greener shipping, reducing the speed of ships has taken on an increased role as one of the measures to be applied toward that end. Speed has already been important for economic reasons, as it is a key determinant of fuel cost, a significant component of the operating cost of ships. Moreover, speed is an important parameter of the overall logistical operation of a shipping company and of the overall supply chain and may directly or indirectly impact fleet size, ship size, cargo inventory costs and shippers’ balance sheets. Changes in ship speed may also induce modal shifts, if cargo can choose other modes because they are faster. However, as emissions are directly proportional to fuel consumed, speed is also very much connected with the environmental dimension of shipping. So when shipping markets are in a depressed state and “slow-steaming” is the prevalent practice for economic reasons, an important side benefit is reduced emissions. In fact there are many indications that this practice, very much applied these days, will be the norm in the future. This paper presents a survey of speed models in maritime transportation, that is, models in which speed is one of the decision variables. A taxonomy of such models is also presented, according to a set of parameters.

385 citations


Journal ArticleDOI
TL;DR: This paper presents a review of highway-based evacuation modeling and simulation and its evolution over the past decade, including the current state of modeling in the forecasting of evacuation travel demand, distribution and assignment of evacuation demand to regional road networks to reach destinations.
Abstract: This paper presents a review of highway-based evacuation modeling and simulation and its evolution over the past decade. The review includes the major components of roadway transportation planning and operations, including the current state of modeling in the forecasting of evacuation travel demand, distribution and assignment of evacuation demand to regional road networks to reach destinations, assignment of evacuees to various modes of transportation, and evaluation and testing of alternative management strategies to increase capacity of evacuation networks or manage demand. Although this discussion does not cover recent work in other modes used in evacuation such as air, rail, and pedestrian, this paper does highlight recent interdisciplinary modeling work in evacuation to help bridge the gap between the behavioral sciences and engineering and the application of emerging techniques for the verification, validation, and calibration of models. The manuscript also calls attention to special considerations and logistical difficulties, which have received limited attention to date. In addition to these concerns, the following future directions are discussed: further interdisciplinary efforts, including incorporating the medical community; using new technologies for communication of warnings and traffic condition information, data collection, and increased modeling resolution and confidence; using real-time information; and further model refinements and validation.

371 citations


Journal ArticleDOI
TL;DR: In this article, a genetic algorithm is developed to solve the multi-station problem through a special binary coding method that indicates a train departure or cancellation at every possible time point, and a local improvement algorithm is presented to find optimal timetables for individual station cases.
Abstract: This article focuses on optimizing a passenger train timetable in a heavily congested urban rail corridor. When peak-hour demand temporally exceeds the maximum loading capacity of a train, passengers may not be able to board the next arrival train, and they may be forced to wait in queues for the following trains. A binary integer programming model incorporated with passenger loading and departure events is constructed to provide a theoretic description for the problem under consideration. Based on time-dependent, origin-to-destination trip records from an automatic fare collection system, a nonlinear optimization model is developed to solve the problem on practically sized corridors, subject to the available train-unit fleet. The latest arrival time of boarded passengers is introduced to analytically calculate effective passenger loading time periods and the resulting time-dependent waiting times under dynamic demand conditions. A by-product of the model is the passenger assignment with strict capacity constraints under oversaturated conditions. Using cumulative input–output diagrams, we present a local improvement algorithm to find optimal timetables for individual station cases. A genetic algorithm is developed to solve the multi-station problem through a special binary coding method that indicates a train departure or cancellation at every possible time point. The effectiveness of the proposed model and algorithm is evaluated using a real-world data set.

369 citations


Journal ArticleDOI
TL;DR: A tensor pattern, which is an extension of the matrix, is introduced into the modeling of traffic data for the first time; it can fully exploit traffic spatial–temporal information and preserve the multi-way nature of traffic data.
Abstract: Missing and suspicious traffic data are inevitable due to detector and communication malfunctions, which adversely affect the transportation management system (TMS). In this paper, a tensor pattern, which is an extension of the matrix, is introduced into the modeling of traffic data for the first time; it can fully exploit traffic spatial–temporal information and preserve the multi-way nature of traffic data. To estimate the missing values, a tensor-decomposition-based imputation method has been developed. This approach not only inherits the advantages of imputation methods based on the matrix pattern for estimating missing points, but also effectively mines the multi-dimensional inherent correlation of traffic data. Experiments demonstrate that the proposed method achieves a better imputation performance than the state-of-the-art imputation approach even when the missing ratio is up to 90%. Furthermore, the experimental results show that the proposed method can address the extreme case where the data of one or several days are completely missing, and additionally it can be employed to recover the missing traffic data in adverse weather as well.

361 citations
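The idea of recovering missing traffic data from low-rank structure can be sketched with a simplified (day × time-of-day) matrix stand-in for the paper's tensor decomposition: fill the gaps, factorize, re-fill from the low-rank reconstruction, and iterate. The synthetic data, rank, and missing ratio below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "days x time-of-day" traffic volumes sharing one daily profile
profile = np.sin(np.linspace(0, np.pi, 24)) + 1.0
data = np.outer(rng.uniform(0.8, 1.2, 30), profile)   # 30 days, 24 hours
mask = rng.random(data.shape) < 0.4                   # 40% of entries missing
observed = np.where(mask, np.nan, data)

# iterative low-rank imputation: fill with the mean, then SVD-refill
filled = np.where(mask, np.nanmean(observed), observed)
for _ in range(50):
    u, s, vt = np.linalg.svd(filled, full_matrices=False)
    approx = (u[:, :2] * s[:2]) @ vt[:2]              # rank-2 reconstruction
    filled = np.where(mask, approx, observed)         # keep observed entries

err = np.abs(filled[mask] - data[mask]).mean()        # imputation error
```

A tensor (e.g. detector × day × time) CP or Tucker decomposition generalizes this same refill loop to three or more correlated dimensions, which is what lets the paper's method survive entirely missing days.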


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the impact of voluntary secondary task uptake on the system supervisory responsibilities of drivers experiencing highly-automated vehicle control, and found that participants became more heavily involved with the in-vehicle entertainment tasks than they were in manual driving, affording less visual attention to the road ahead.
Abstract: Previous research has indicated that high levels of vehicle automation can result in reduced driver situation awareness, but has also highlighted potential benefits of such future vehicle designs through enhanced safety and reduced driver workload. Well-designed automation allows drivers’ visual attention to be focused away from the roadway and toward secondary, in-vehicle tasks. Such tasks may be pleasant distractions from the monotony of system monitoring. This study was undertaken to investigate the impact of voluntary secondary task uptake on the system supervisory responsibilities of drivers experiencing highly-automated vehicle control. Independent factors of Automation Level (manual control, highly-automated) and Traffic Density (light, heavy) were manipulated in a repeated-measures experimental design. 49 drivers participated using a high-fidelity driving simulator that allowed drivers to see, hear and, crucially, feel the impact of their automated vehicle handling. Drivers experiencing automation tended to refrain from behaviours that required them to temporarily retake manual control, such as overtaking, resulting in an increased journey time. Automation improved safety margins in car following; however, this was restricted to conditions of light surrounding traffic. Participants did indeed become more heavily involved with the in-vehicle entertainment tasks than they were in manual driving, affording less visual attention to the road ahead. This might suggest that drivers are happy to forgo their supervisory responsibilities in preference of a more entertaining highly-automated drive. However, they did demonstrate additional attention to the roadway in heavy traffic, implying that these responsibilities are taken more seriously as the supervisory demand of vehicle automation increases.
These results may dampen some concerns over driver underload with vehicle automation, assuming vehicle manufacturers embrace the need for positive system feedback and drivers also fully appreciate their supervisory obligations in such future vehicle designs.

298 citations


Journal ArticleDOI
TL;DR: The probabilistic principal component analysis (PPCA) based imputation method is extended to utilize the information of multiple points to improve imputation performance, and it is shown that imputation errors can be notably reduced if temporal–spatial dependence is appropriately considered.
Abstract: The missing data problem remains a difficulty in a diverse variety of transportation applications, e.g. traffic flow prediction and traffic pattern recognition. To solve this problem, numerous algorithms have been proposed in the last decade to impute the missing data. However, few existing studies have fully used the traffic flow information of neighboring detecting points to improve imputation performance. In this paper, the probabilistic principal component analysis (PPCA) based imputation method, which has been proven to be one of the most effective imputation methods without using temporal or spatial dependence, is extended to utilize the information of multiple points. We systematically examine the potential benefits of multi-point data fusion and study the possible influence of measurement time lags. Tests indicate that the hidden temporal–spatial dependence is nonlinear and could be better retrieved by the kernel probabilistic principal component analysis (KPPCA) based method rather than the PPCA method. Comparison proves that imputation errors can be notably reduced if temporal–spatial dependence has been appropriately considered.

257 citations
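The multi-point intuition, that a detector's gaps can be filled from a correlated neighbor, can be shown with a much-simplified least-squares stand-in for the PPCA/KPPCA machinery. The readings below are hypothetical:

```python
def impute_from_neighbor(target, neighbor):
    """Fill None entries of `target` by simple linear regression on a
    fully observed neighboring detector's readings (a one-dimensional
    stand-in for multi-point PPCA fusion)."""
    pairs = [(n, t) for t, n in zip(target, neighbor) if t is not None]
    n_mean = sum(n for n, _ in pairs) / len(pairs)
    t_mean = sum(t for _, t in pairs) / len(pairs)
    cov = sum((n - n_mean) * (t - t_mean) for n, t in pairs)
    var = sum((n - n_mean) ** 2 for n, _ in pairs)
    slope = cov / var
    return [t if t is not None else t_mean + slope * (n - n_mean)
            for t, n in zip(target, neighbor)]

# hypothetical speeds (km/h) at two adjacent detectors; one reading missing
neighbor = [50, 55, 60, 65, 70]
target   = [48, 53, None, 63, 68]
filled = impute_from_neighbor(target, neighbor)
```

The paper's kernel (KPPCA) variant exists precisely because this linear relationship often does not hold; the nonlinear temporal–spatial dependence is captured in a kernel-induced feature space instead.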


Journal ArticleDOI
TL;DR: In this paper, a genetic algorithm incorporating Monte Carlo simulation is proposed to solve the problem of deadheading in a special case of the stop-skipping problem, allowing a bus vehicle to skip stops between the dispatching terminal point and a designated stop.
Abstract: When a bus is running behind schedule, the stop-skipping scheme allows the bus vehicle to skip one or more stops to reduce its travel time. The deadheading problem is a special case of the stop-skipping problem, allowing a bus vehicle to skip stops between the dispatching terminal point and a designated stop. At the planning level, the optimal operating plans for these two schemes should be tackled for the benefit of the bus operator as well as passengers. This paper aims to propose a methodology for this objective. Thus, three objectives are first proposed to reflect the benefits of the bus operator and/or passengers, including minimizing the total waiting time, total in-vehicle travel time and total operating cost. Then, assuming random bus travel time, the stop-skipping problem is formulated as an optimization model minimizing the weighted sum of the three objectives. The deadheading problem can be formulated via the same minimization model by further adding several new constraints. A Genetic Algorithm Incorporating Monte Carlo Simulation is then proposed to solve the optimization model. As validated by a numerical example, the proposed algorithm can obtain a satisfactory solution close to the global optimum.
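The Monte Carlo component of such an algorithm can be sketched as a fitness evaluation: given a candidate stop-skipping plan, average a weighted cost over random travel-time draws. All demand figures, weights, and distributions below are illustrative assumptions; a genetic algorithm would call this evaluation once per chromosome:

```python
import random

random.seed(42)

waiting = [10, 3, 8, 2, 6]          # passengers waiting per stop (assumed)

def expected_cost(skip_plan, n_draws=2000):
    """Monte Carlo estimate of the weighted cost of a stop-skipping plan.
    skip_plan[i] is True if stop i is skipped."""
    total = 0.0
    for _ in range(n_draws):
        cost = 0.0
        for i, skipped in enumerate(skip_plan):
            cost += random.gauss(120, 15)      # random inter-stop travel time (s)
            if skipped:
                cost += 6.0 * waiting[i]       # penalty: skipped passengers wait longer
            else:
                cost += 30 + 2 * waiting[i]    # dwell time plus boarding time
        total += cost
    return total / n_draws

plan_serve_all = [False] * 5
plan_skip_low  = [False, True, False, True, False]   # skip the low-demand stops
cost_all  = expected_cost(plan_serve_all)
cost_skip = expected_cost(plan_skip_low)
```

With these made-up weights, skipping the two low-demand stops yields a lower expected cost; a GA explores the space of such boolean plans using this noisy objective.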

Journal ArticleDOI
TL;DR: This paper constructs a new kernel function using a wavelet function to capture the non-stationary characteristics of the short-term traffic speed data and uses the Phase Space Reconstruction theory to identify the input space dimension.
Abstract: Based on the previous literature review, this paper builds a short-term traffic speed forecasting model using Support Vector Machine (SVM) regression theory (referred to as the SVM model in this paper). Besides the advantages of the SVM model, it also has some limitations. Perhaps the biggest one lies in the choice of an appropriate kernel function for the practical problem; how to optimize the parameters efficiently and effectively presents another. Unfortunately, these limitations are still research topics in the current literature. This paper makes an effort to investigate these limitations. In order to find an effective way to choose an appropriate and suitable kernel function, this paper constructs a new kernel function using a wavelet function to capture the non-stationary characteristics of the short-term traffic speed data. In order to find an efficient way to identify the model structure parameters, this paper uses Phase Space Reconstruction theory to identify the input space dimension. To take advantage of these components, the paper proposes a short-term traffic speed forecasting hybrid model (Chaos–Wavelet Analysis-Support Vector Machine model, referred to as the C-WSVM model in this paper). Real traffic speed data are applied to evaluate the performance and practicality of the model and the results are encouraging. The theoretical advantage and better performance from the study indicate that the C-WSVM model has good potential to be developed and is feasible for short-term traffic speed forecasting study.
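A commonly used form of wavelet kernel, a product of translated Morlet-type mother wavelets, can be sketched as follows. The lag vectors and dilation parameter `a` are hypothetical, and this is not necessarily the exact kernel constructed in the paper:

```python
import math

def wavelet_kernel(x, y, a=1.0):
    """Translation-invariant wavelet kernel built from the mother wavelet
    h(t) = cos(1.75 t) * exp(-t^2 / 2), applied dimension-wise."""
    k = 1.0
    for xi, yi in zip(x, y):
        t = (xi - yi) / a                       # a is the dilation parameter
        k *= math.cos(1.75 * t) * math.exp(-t * t / 2.0)
    return k

# Gram matrix over a few phase-space-reconstructed speed lag vectors (km/h)
samples = [[60.0, 58.0, 55.0], [59.0, 57.5, 54.0], [30.0, 28.0, 26.0]]
gram = [[wavelet_kernel(x, y, a=5.0) for y in samples] for x in samples]
```

The kernel equals 1 on the diagonal and decays for dissimilar lag vectors, so the similar free-flow samples score much higher against each other than against the congested one.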

Journal ArticleDOI
TL;DR: An analytical queuing and network decomposition model developed to study the complex phenomenon of the propagation of delays within a large network of major airports is described and provides insights into the interactions through which delays propagate through the network and the often-counterintuitive consequences.
Abstract: We describe an analytical queuing and network decomposition model developed to study the complex phenomenon of the propagation of delays within a large network of major airports. The Approximate Network Delays (AND) model computes the delays due to local congestion at individual airports and captures the "ripple effect" that leads to the propagation of these delays. The model operates by iterating between its two main components: a queuing engine (QE) that computes delays at individual airports and a delay propagation algorithm (DPA) that updates flight schedules and demand rates at all the airports in the model in response to the local delays computed by the QE. The QE is a stochastic and dynamic queuing model that treats each airport in the network as an M(t)/E_k(t)/1 queuing system. The AND model is very fast computationally, thus making possible the exploration at a macroscopic level of the impacts of a large number of scenarios and policy alternatives on system-wide delays. It has been applied to a network consisting of the 34 busiest airports in the continental United States and provides insights into the interactions through which delays propagate through the network and the often-counterintuitive consequences. Delay propagation tends to "smoothen" daily airport demand profiles and push more demands into late evening hours. Such phenomena are especially evident at hub airports, where some flights may benefit considerably (by experiencing reduced delays) from the changes that occur in the scheduled demand profile as a result of delays and delay propagation.
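The queuing engine's effect can be roughly illustrated with a deterministic point-queue approximation: delay accumulates whenever time-varying demand exceeds capacity. This is a fluid simplification of the stochastic M(t)/E_k(t)/1 engine, with made-up demand and capacity profiles:

```python
def airport_delays(demand, capacity, dt=1.0):
    """Deterministic point-queue approximation of time-varying congestion:
    the queue grows when hourly demand exceeds capacity, and the delay
    faced by a new arrival is the queue divided by the service rate."""
    queue, delays = 0.0, []
    for d, c in zip(demand, capacity):
        queue = max(0.0, queue + (d - c) * dt)   # operations left waiting
        delays.append(queue / c)                 # hours of delay on arrival
    return delays

# hypothetical profile: demand peaks above a 60 ops/h runway capacity mid-day
demand   = [40, 50, 70, 80, 70, 50, 40]
capacity = [60] * 7
delays = airport_delays(demand, capacity)
```

Note the lag: the worst delay occurs after the demand peak, once the queue has built up. In the AND model these local delays then feed the propagation algorithm, which shifts downstream flight schedules and produces the "ripple effect".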

Journal ArticleDOI
TL;DR: In this paper, the authors used a fleet replacement optimization framework, a wide range of scenarios, and current USA market data to find the key economic and technological breakeven values where electric vehicles become competitive against conventional diesel counterparts.
Abstract: Electric commercial vehicles’ (ECVs) energy costs are almost four times lower than those of conventional diesel trucks, on a per-mile basis, at current USA market values. However, ECVs are approximately three times more expensive in terms of vehicle purchase costs. In addition, although electric vehicles are simpler and cheaper to maintain, there are more uncertainties associated with the life and long-term costs of the ECV batteries. Furthermore, there are limitations in terms of miles driven per day without recharging. These economic and technological tradeoffs motivate this research. Utilizing a fleet replacement optimization framework, a wide range of scenarios, and current USA market data, this research finds the key economic and technological breakeven values where ECVs become competitive against conventional diesel counterparts. The results clearly indicate that only in scenarios with high utilization (over 16,000 miles per year per truck) are electric vehicles competitive; this is especially valid if a battery replacement is not required before the electric commercial vehicle is replaced. The breakeven analysis results show that a 9–27% ECV price reduction can greatly increase their competitiveness when vehicles are driven over 12,000 miles per year.
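The breakeven logic can be sketched by equating annualized costs: a higher purchase price is recovered through lower per-mile costs once annual mileage is high enough. The purchase prices, energy costs, and maintenance costs below are purely illustrative, not the paper's market data:

```python
def annual_cost(v, miles):
    """Annualized total cost of ownership (no discounting, no salvage)."""
    return v["purchase"] / v["life"] + (v["energy"] + v["maint"]) * miles

def breakeven_miles(diesel, electric):
    """Annual mileage at which the two annualized costs are equal."""
    fixed_gap = (electric["purchase"] / electric["life"]
                 - diesel["purchase"] / diesel["life"])
    per_mile_gap = ((diesel["energy"] + diesel["maint"])
                    - (electric["energy"] + electric["maint"]))
    return fixed_gap / per_mile_gap

# hypothetical parameters (USD, USD/mile); per-mile costs favor electric
diesel   = {"purchase": 50_000,  "life": 10, "energy": 0.60, "maint": 0.20}
electric = {"purchase": 150_000, "life": 10, "energy": 0.15, "maint": 0.15}
miles_star = breakeven_miles(diesel, electric)   # miles/year where costs match
```

Below `miles_star` the diesel truck is cheaper per year; above it the ECV wins, which mirrors the paper's finding that competitiveness requires high annual utilization.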

Journal ArticleDOI
TL;DR: A three-layer neural network model is proposed to estimate complete link travel time for individual probe vehicle traversing the link and results suggest that the Artificial Neural Network model outperforms the analytical model.
Abstract: In the urban signalized network, travel time estimation is a challenging subject, especially because urban travel times are intrinsically uncertain due to the fluctuations in traffic demand and supply, traffic signals, stochastic arrivals at the intersections, etc. In this paper, probe vehicles are used as traffic sensors to collect traffic data (speeds, positions and time stamps) in an urban road network. However, due to the low polling frequencies (e.g. 1 min or 5 min), travel times recorded by probe vehicles provide only partial link or route travel times. This paper focuses on the estimation of complete link travel times. Based on the information collected by probe vehicles, a three-layer neural network model is proposed to estimate the complete link travel time for each individual probe vehicle traversing the link. This model is discussed and compared with an analytical estimation model which was developed by Hellinga et al. (2008). The performance of these two models is evaluated with data derived from a VISSIM simulation model. Results suggest that the Artificial Neural Network model outperforms the analytical model.
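A three-layer (input–hidden–output) network's forward pass can be sketched as follows. The input features, layer sizes, and toy weights are assumptions for illustration; in practice the weights are trained by backpropagation on probe-vehicle records:

```python
import math

def mlp_forward(x, w1, b1, w2, b2):
    """Three-layer network: inputs -> sigmoid hidden layer -> linear output.
    Inputs might be normalized partial travel time, speed, and position on
    the link; the output is the estimated complete link travel time (s)."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sig(sum(wi * xi for wi, xi in zip(row, x)) + bi)
              for row, bi in zip(w1, b1)]
    return sum(wo * h for wo, h in zip(w2, hidden)) + b2

# toy weights: 3 inputs, 2 hidden units, 1 output
w1 = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
b1 = [0.0, 0.1]
w2 = [20.0, 15.0]
b2 = 10.0
t_est = mlp_forward([0.6, 0.4, 0.2], w1, b1, w2, b2)
```

With positive output weights and sigmoid activations bounded in (0, 1), the estimate here necessarily lies between b2 and b2 plus the sum of the output weights.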

Journal ArticleDOI
TL;DR: Results show that the use of accelerometer data can make a substantial contribution to successful imputation of transportation mode, and the accelerometer only approach outperforms the GPS only approach in terms of the predictive accuracy.
Abstract: Potential advantages of global positioning systems (GPS) in collecting travel behavior data have been discussed in several publications and evidenced in many recent studies. Most applications depend on GPS information only. However, transportation mode detection that relies only on GPS information may be erroneous due to variance in device performance and settings, and the environment in which measurements are made. Accelerometers, used mainly for identifying people’s physical activities, may offer new opportunities as these devices record data independent of exterior contexts. The purpose of this paper is therefore to examine the merits of employing accelerometer data in combination with GPS data in transportation mode identification. Three approaches (GPS data only, accelerometer data only and a combination of both accelerometer and GPS data) are examined. A Bayesian Belief Network model is used to infer transportation modes and activity episodes simultaneously. Results show that the use of accelerometer data can make a substantial contribution to successful imputation of transportation mode. The accelerometer only approach outperforms the GPS only approach in terms of the predictive accuracy. The approach which combines GPS and accelerometer data yields the best performance.
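The fusion step can be illustrated with a naive-Bayes fragment of such a network: combine GPS-speed evidence with accelerometer-activity evidence, assumed conditionally independent given the mode. All probabilities below are made-up illustrations, not the paper's calibrated tables:

```python
# hypothetical P(mode), P(speed_bin | mode), P(accel_level | mode)
priors = {"walk": 0.3, "bike": 0.2, "bus": 0.2, "car": 0.3}
p_speed = {  # speed bins: low (<10 km/h), mid, high (>40 km/h)
    "walk": {"low": 0.9, "mid": 0.1, "high": 0.0},
    "bike": {"low": 0.3, "mid": 0.6, "high": 0.1},
    "bus":  {"low": 0.2, "mid": 0.5, "high": 0.3},
    "car":  {"low": 0.1, "mid": 0.3, "high": 0.6},
}
p_accel = {  # accelerometer activity level of the traveler's body
    "walk": {"active": 0.9, "calm": 0.1},
    "bike": {"active": 0.7, "calm": 0.3},
    "bus":  {"active": 0.2, "calm": 0.8},
    "car":  {"active": 0.1, "calm": 0.9},
}

def infer_mode(speed_bin, accel_level):
    """Posterior over modes given both evidence sources (naive structure)."""
    scores = {m: priors[m] * p_speed[m][speed_bin] * p_accel[m][accel_level]
              for m in priors}
    z = sum(scores.values())
    return {m: s / z for m, s in scores.items()}

post = infer_mode("high", "calm")   # fast movement, calm accelerometer
```

High speed alone cannot separate car from bus well, but the calm accelerometer trace tips the posterior toward car, which is exactly the complementarity the paper exploits.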

Journal ArticleDOI
TL;DR: In this article, the authors present an iterative approach, which integrates a PEV electricity demand model and a power system simulation to reveal potential bottlenecks in the electric grid caused by PEV energy demand.
Abstract: The introduction of plug-in hybrid electric vehicles (PHEVs) and electric vehicles (EVs), commonly referred to as plug-in electric vehicles (PEVs), could trigger a stepwise electrification of the whole transportation sector. However, the potential impact of PEV charging on the electric grid is not yet fully known. This paper presents an iterative approach, which integrates a PEV electricity demand model and a power system simulation to reveal potential bottlenecks in the electric grid caused by PEV energy demand. An agent-based traffic demand model is used to model the electricity demand of each vehicle over the day. An approach based on interconnected multiple energy carrier systems is used as a model for a possible future energy system. Experiments demonstrate that the model is sensitive to policy changes, e.g., changes in electricity price result in modified charging patterns. By implementing an intelligent vehicle charging solution it is demonstrated how new charging schemes can be designed and tested using the proposed framework.

Journal ArticleDOI
TL;DR: How mobility of urban street networks could be improved by managing vehicle accumulation and redistributing network traffic via strategies such as demand management and disseminating real-time traveler information (adaptive driving) is examined.
Abstract: This study explores the limiting properties of network-wide traffic flow relations under heavily congested conditions in a large-scale complex urban street network; these limiting conditions are emulated in the context of dynamic traffic assignment (DTA) experiments on an actual large network. The primary objectives are to characterize gridlock and understand its dynamics. This study addresses a gap in the literature with regard to the existence of exit flow and recovery period. The one-dimensional theoretical Network Fundamental Diagram (NFD) only represents steady-state behavior and holds only when the inputs change slowly in time and traffic is distributed homogeneously in space. Also, it does not describe the hysteretic behavior of the network traffic when a gridlock forms or when the network recovers. Thus, a model is proposed to reproduce hysteresis and gridlock when homogeneity and steady-state conditions do not hold. It is conjectured that the network average flow can be approximated as a non-linear function of network average density and the variation in link densities. The proposed model is calibrated for the Chicago Central Business District (CBD) network. We also show that complex urban networks with multiple route choices, similar to the idealized network tested previously in the literature, tend to jam at a range of densities that are smaller than the theoretical average network jam density. It is also demonstrated that networks tend to gridlock in many different ways with different configurations. This study examines how the mobility of urban street networks could be improved by managing vehicle accumulation and redistributing network traffic via strategies such as demand management and disseminating real-time traveler information (adaptive driving). This study thus defines and explores some key characteristics and dynamics of urban street network gridlocks, including gridlock formation, propagation, recovery, size, etc.

Journal ArticleDOI
TL;DR: This research provides new possibilities for fully utilizing the partial information obtained from urban taxicab data for estimating network condition, which is not only very useful but also is inexpensive and has much better coverage than traditional sensor data.
Abstract: Taxicabs equipped with Global Positioning System (GPS) devices can serve as useful probes for monitoring the traffic state in an urban area. This paper presents a new descriptive model for estimating hourly averages of urban link travel times using taxicab origin–destination (OD) trip data. The focus of this study is to develop a methodology to estimate link travel times from OD trip data and demonstrate the feasibility of estimating network condition using large-scale geo-location data with partial information. The data, collected from the taxicabs in New York City, provides the locations of origins and destinations, travel times, fares, and other information about taxi trips. The new model infers the possible paths for each trip and then estimates the link travel times by minimizing the error between the expected path travel times and the observed path travel times. The model is evaluated using a test network from Midtown Manhattan. Results indicate that the proposed method can efficiently estimate hourly average link travel times. This research provides new possibilities for fully utilizing the partial information obtained from urban taxicab data for estimating network condition, which is not only very useful but also is inexpensive and has much better coverage than traditional sensor data.
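The estimation step, choosing link travel times that minimize the error between expected path travel times (sums of link times) and observed trip times, can be sketched as a small least-squares problem. This toy version assumes the candidate paths are already known, whereas the model also infers them; the network and observations below are made up.

```python
# Toy version of the estimation step: given candidate paths (as lists of
# link indices) and observed trip travel times, find link travel times
# minimizing the squared error between path sums and observations.

def estimate_link_times(paths, observed, n_links, iters=2000, lr=0.05):
    x = [1.0] * n_links                      # initial guess for link times
    for _ in range(iters):
        grad = [0.0] * n_links
        for links, t_obs in zip(paths, observed):
            err = sum(x[l] for l in links) - t_obs
            for l in links:
                grad[l] += 2.0 * err         # d/dx_l of the squared path error
        x = [max(0.0, xi - lr * g) for xi, g in zip(x, grad)]  # times stay nonnegative
    return x

# Three observed trips over a 3-link network with true link times [2, 3, 5]:
paths = [[0, 1], [1, 2], [0, 2]]
observed = [5.0, 8.0, 7.0]
x = estimate_link_times(paths, observed, n_links=3)
assert all(abs(xi - ti) < 0.1 for xi, ti in zip(x, [2.0, 3.0, 5.0]))
```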

Journal ArticleDOI
TL;DR: In this article, a pseudospectral method is used to solve the problem of train optimal control under constraints and fixed arrival time, where the objective function is a trade-off between the energy consumption and the riding comfort.
Abstract: The optimal trajectory planning problem for train operations under constraints and fixed arrival time is considered. The varying line resistance, variable speed restrictions, and varying maximum traction force are included in the problem definition. The objective function is a trade-off between the energy consumption and the riding comfort. Two approaches are proposed to solve this optimal control problem. First, we propose to use the pseudospectral method, a state-of-the-art method for optimal control problems, which has not been used for train optimal control before. In the pseudospectral method, the optimal trajectory planning problem is recast into a multiple-phase optimal control problem, which is then transformed into a nonlinear programming problem. However, the calculation time for the pseudospectral method is too long for the real-time application in an automatic train operation system. To shorten the computation time, the optimal trajectory planning problem is reformulated as a mixed-integer linear programming (MILP) problem by approximating the nonlinear terms in the problem by piecewise affine functions. The MILP problem can be solved efficiently by existing solvers that are guaranteed to return the global optimum for the proposed MILP problem. Simulation results comparing the pseudospectral method, the new MILP approach, and a discrete dynamic programming approach show that the pseudospectral method has the best control performance, but that if the required computation time is also taken into consideration, the MILP approach yields the best overall performance. More specifically, for the given case study the control performance of the pseudospectral approach is about 10% better than that of the MILP approach, and the computation time of the MILP approach is two to three orders of magnitude smaller than that of the pseudospectral method and the discrete dynamic programming approach.
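The MILP reformulation rests on replacing nonlinear terms with piecewise affine (PWA) approximations. The sketch below approximates a quadratic resistance-style term over speed breakpoints and checks the approximation error; the coefficients are illustrative, not the paper's train model.

```python
# Sketch of the MILP reformulation idea: replace a nonlinear term (here a
# quadratic, Davis-style resistance curve with made-up coefficients) by a
# piecewise affine (PWA) interpolant over speed breakpoints.

def resistance(v, a=1.0, b=0.01, c=0.0005):
    return a + b * v + c * v * v             # nonlinear term to approximate

def pwa_approx(v, breakpoints):
    """Linearly interpolate between exact resistance values at breakpoints."""
    for v0, v1 in zip(breakpoints, breakpoints[1:]):
        if v0 <= v <= v1:
            w = (v - v0) / (v1 - v0)
            return (1 - w) * resistance(v0) + w * resistance(v1)
    raise ValueError("speed outside approximation range")

breakpoints = [0.0, 10.0, 20.0, 30.0, 40.0]
# The PWA surrogate stays close to the nonlinear term across the grid:
errors = [abs(pwa_approx(v, breakpoints) - resistance(v)) for v in range(0, 41)]
assert max(errors) < 0.02
```

In the actual MILP, each affine piece is selected with binary variables, which is what lets off-the-shelf solvers certify a global optimum.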

Journal ArticleDOI
TL;DR: A probabilistic map matching approach that generates a set of potential true paths and associates a likelihood with each of them; estimation results show the viability of applying the proposed method in a real route choice modeling context.
Abstract: Smartphones have the capability of recording various kinds of data from built-in sensors such as GPS in a non-intrusive, systematic way. In transportation studies, such as route choice modeling, the discrete sequences of GPS data need to be associated with the transportation network to generate meaningful paths. The poor quality of GPS data collected from smartphones precludes the use of state-of-the-art map matching methods. In this paper, we propose a probabilistic map matching approach. It generates a set of potential true paths and associates a likelihood with each of them. Both spatial (GPS coordinates) and temporal information (speed and time) is used to calculate the likelihood of the data for a specific path. Applications and analyses on real trips illustrate the robustness and effectiveness of the proposed approach. Also, as an application example, a Path-Size Logit model is estimated based on a sample of real observations. The estimation results show the viability of applying the proposed method in a real route choice modeling context.
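The likelihood construction can be sketched with simple Gaussian noise models on the spatial term (distance from each GPS fix to the candidate path) and the temporal term (plausibility of the implied speeds). The noise models and all parameter values are assumptions for illustration, not the paper's calibrated formulation.

```python
import math

# Score a candidate path by combining a spatial term (how far each GPS fix
# lies from the path) and a temporal term (how plausible the implied speed
# is). Gaussian noise models and parameter values are assumed.

def path_log_likelihood(dists_m, speeds_kmh, sigma_gps=20.0,
                        v_expected=30.0, sigma_v=15.0):
    spatial = sum(-0.5 * (d / sigma_gps) ** 2 for d in dists_m)
    temporal = sum(-0.5 * ((v - v_expected) / sigma_v) ** 2 for v in speeds_kmh)
    return spatial + temporal

# A path close to the fixes with plausible speeds beats a distant one:
good = path_log_likelihood(dists_m=[5, 8, 12], speeds_kmh=[28, 35, 31])
bad = path_log_likelihood(dists_m=[60, 90, 75], speeds_kmh=[28, 35, 31])
assert good > bad

# Log-likelihoods over the candidate set normalize into path probabilities:
z = math.exp(good) + math.exp(bad)
assert abs(math.exp(good) / z + math.exp(bad) / z - 1.0) < 1e-9
```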

Journal ArticleDOI
TL;DR: In this paper, a route-choice experiment with 36 participants, involving 20 repetitions under three different levels of information accuracy, was conducted to investigate the impact of travel time uncertainty.
Abstract: Advanced Travel Information Systems (ATISs) are designed to assist travellers in making better travel choices by providing pre-trip and en-route information, such as travel times, on the relevant alternatives. Travellers’ choices are likely to be sensitive to the accuracy of the provided information in addition to travel time uncertainty. A route-choice experiment with 36 participants, involving 20 repetitions under three different levels of information accuracy, was conducted to investigate the impact of information accuracy. In each repetition, respondents had to choose one of three routes (risky, useless, and reliable). The provided information included descriptive information about the average estimated travel times for each route, prescriptive information regarding the suggested route, and experiential feedback information about the actual travel times on all routes. Aggregate analysis using non-parametric statistics and disaggregate analysis using a mixed logit choice model were applied. The results suggest that decreasing accuracy shifts choices mainly from the riskier to the reliable route, but also to the useless alternative. Prescriptive information has the largest behavioural impact, followed by descriptive and experiential feedback information. Risk attitudes also seem to play a role. The implications for ATIS design and future research are further discussed.

Journal ArticleDOI
TL;DR: This paper investigated the effects of lane-changing on driver behavior by measuring the induced transient behavior and the change in driver characteristics, i.e., changes in driver response time and minimum spacing, and found that the transition largely consists of a pre-insertion transition and a relaxation process.
Abstract: This paper investigates the effects of lane-changing on driver behavior by measuring (i) the induced transient behavior and (ii) the change in driver characteristics, i.e., changes in driver response time and minimum spacing. We find that the transition largely consists of a pre-insertion transition and a relaxation process. These two processes are different but can be reasonably captured with a single model. The findings also suggest that lane-changing induces a regressive effect on driver characteristics: a timid driver (characterized by larger response time and minimum spacing) tends to become less timid and an aggressive driver less aggressive. We offer an extension to Newell's car-following model to describe this regressive effect and verify it using vehicle trajectory data.
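A minimal sketch of the regressive-effect idea, assuming Newell's rule (the follower replicates the leader's trajectory shifted by a response time tau and a spacing d) and an exponential relaxation of tau after the lane change; the exponential form and parameter values are assumptions, not necessarily the paper's exact extension.

```python
import math

# Newell's car-following rule with a response time tau that relaxes toward
# a new value after a lane change. The exponential decay is an assumed
# functional form for illustration.

def tau_relaxed(t, t_lc, tau_before, tau_after, rate=0.5):
    """Response time relaxing from tau_before toward tau_after after t_lc."""
    if t < t_lc:
        return tau_before
    return tau_after + (tau_before - tau_after) * math.exp(-rate * (t - t_lc))

def newell_position(leader_pos, t, tau, d):
    """Newell: follower at t copies the leader's position at t - tau, minus spacing d."""
    return leader_pos(t - tau) - d

leader = lambda t: 20.0 * t                 # leader cruising at 20 m/s
t_lc = 10.0                                 # lane change at t = 10 s
# A timid driver (tau = 2 s) becomes less timid (tau -> 1 s) after the maneuver:
tau_early = tau_relaxed(5.0, t_lc, 2.0, 1.0)
tau_late = tau_relaxed(30.0, t_lc, 2.0, 1.0)
assert tau_early == 2.0 and abs(tau_late - 1.0) < 1e-4
# A smaller tau places the follower closer behind the leader at the same instant:
assert newell_position(leader, 20.0, tau_late, d=8.0) > newell_position(leader, 20.0, tau_early, d=8.0)
```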

Journal ArticleDOI
TL;DR: In this article, a cooperative vehicle intersection control (CVIC) algorithm for an urban intersection that does not require a stop-and-go style traffic signal and demonstrated significant mobility improvements over an actuated traffic signal control.
Abstract: Connected Vehicle (CV) technology, formerly known as IntelliDrive, has emerged and is expected to provide unprecedented improvements in mobility. A recent study developed a cooperative vehicle intersection control (CVIC) algorithm for an urban intersection that does not require a stop-and-go style traffic signal and demonstrated significant mobility improvements over an actuated traffic signal control. This paper expanded the algorithm and implemented it on a corridor consisting of multiple intersections. In addition, this paper investigated sustainability aspects of the CVIC system for an urban traffic control system by applying the surrogate safety assessment model (SSAM) and the VT-Micro model to measure safety and environmental impacts, respectively. A simulation-based case study was performed on a hypothetical arterial consisting of four intersections with eight traffic congestion cases covering low to high volume conditions. When compared to the coordinated actuated control, the CVIC system dramatically reduced the total delay times for the volume cases considered (i.e., 82–100% delay time savings observed). The CVIC system also reduced the number of rear-end crash events by 30–87% for the volume cases considered, indicating that safer driving conditions would be achieved with the CVIC system. Finally, the CVIC system contributed to improving air quality (i.e., 12–36% CO2 emission reduction) and reducing fuel consumption (11–37% fuel savings).

Journal ArticleDOI
TL;DR: This paper introduces a path inference method for low-frequency floating car data, assesses its performance, and compares it to recent methods using a set of ground truth data.
Abstract: The use of probe vehicles in traffic management is growing rapidly. The reason is that the required data collection infrastructure is increasingly in place in urban areas with a significant number of mobile sensors constantly moving and covering expansive areas of the road network. In many cases, the data is sparse in time and location and includes only geo-location and timestamp. Extracting paths taken by the vehicles from such sparse data is an important step towards travel time estimation and is referred to as the map-matching and path inference problem. This paper introduces a path inference method for low-frequency floating car data, assesses its performance, and compares it to recent methods using a set of ground truth data.

Journal ArticleDOI
TL;DR: In this paper, the authors use a surrogate metric of acceptance defined as a threshold frequency of need for alternative transportation above which all users would not accept the inconvenience, and show that although the market acceptance and electrification potential of EVs are severely limited by battery cost, it is possible to determine an optimal EV range.
Abstract: The environmental and economic impact of electric vehicles (EVs) will depend on the fraction of users that can accept an EV of a given capability, and then in turn on how those EVs are actually used. Historically, estimates of the fraction of total travel that could be electrified as a function of EV range are based on vehicle usage data for large populations of vehicles, most often the National Household Travel Survey (NHTS). Two assumptions implicit in such estimates are subject to question: (1) that any user could accept an EV as a primary vehicle and would use it for all trips within its range, and (2) that the usage patterns of any individual EV user are the same as those exhibited by the entire population. The first assumption is clearly unrealistic; willingness to accept an EV is dependent on the transportation needs and alternatives readily available to each individual user. As a surrogate for a priori knowledge of individual preferences, we use a crude metric of acceptance defined as a threshold frequency of need for alternative transportation above which all users would not accept the inconvenience. To test the validity of the second assumption and better estimate market and electrification potential, we analyze roughly 1 year of usage data for each of 133 instrumented vehicles in Minneapolis–St. Paul. We find a characteristic individual usage pattern that does not resemble the average over a large number of vehicles. Using the surrogate metric of EV acceptance and a simple payback model, we show that although the market acceptance and electrification potential of EVs are severely limited by battery cost, it is possible to determine an optimal EV range. Using the same usage data and payback model, we show that plug-in hybrid electric vehicles (PHEVs) can be much more effective than all-electric vehicles in electrifying personal transportation.
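The surrogate acceptance metric can be sketched directly: count the travel days whose distance exceeds the EV range, and accept the EV if their frequency stays below a threshold. The daily-distance data and the 5% threshold below are illustrative, not the paper's values.

```python
# Surrogate acceptance metric: a user "accepts" an EV of a given range if
# the fraction of travel days needing alternative transportation stays
# below a threshold. Data and threshold are made up for illustration.

def needs_alternative(daily_km, ev_range_km):
    return [d > ev_range_km for d in daily_km]

def accepts_ev(daily_km, ev_range_km, threshold=0.05):
    days_over = sum(needs_alternative(daily_km, ev_range_km))
    return days_over / len(daily_km) <= threshold

# One year of hypothetical daily distances: mostly short commutes, a few long trips.
daily_km = [40] * 330 + [90] * 25 + [250] * 10

assert accepts_ev(daily_km, ev_range_km=120)        # 10/365 long days, under 5%
assert not accepts_ev(daily_km, ev_range_km=80)     # 35/365 days over range
```

Sweeping `ev_range_km` against a battery-cost model is what turns this per-user metric into the optimal-range trade-off the abstract describes.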

Journal ArticleDOI
TL;DR: An approach for local traffic state estimation and prediction is presented, which exploits available (traffic and other) information, uses data-driven computational approaches, and has been shown to outperform current state-of-the-art models.
Abstract: Traffic state prediction is a key problem with considerable implications in modern traffic management. Traffic flow theory has provided significant resources, including models based on traffic flow fundamentals that reflect the underlying phenomena, as well as promote their understanding. They also provide the basis for many traffic simulation models. Speed–density relationships, for example, are routinely used in mesoscopic models. In this paper, an approach for local traffic state estimation and prediction is presented, which exploits available (traffic and other) information and uses data-driven computational approaches. An advantage of the method is its flexibility in incorporating additional explanatory variables. It is also believed that the method is more appropriate for use in the context of mesoscopic traffic simulation models, in place of the traditional speed–density relationships. While these general methods and tools are pre-existing, their application to this specific problem and their integration into the proposed framework for the prediction of traffic state are new. The methodology is illustrated using two freeway data sets from Irvine, CA, and Tel Aviv, Israel. As the proposed models are shown to outperform current state-of-the-art models, they could be valuable when integrated into existing traffic estimation and prediction models.
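A minimal data-driven stand-in for a fixed speed-density relationship is a nearest-neighbor regression over historical observations, which also shows how an extra explanatory variable (here, time of day) slots in as a feature. The data and feature choice are synthetic; the paper's actual methods are more elaborate.

```python
# Data-driven alternative to a fixed speed-density curve: predict speed by
# averaging the k nearest historical observations in feature space.
# Features and data are synthetic sketches.

def knn_predict(history, query, k=3):
    """history: list of (feature_tuple, speed); query: feature tuple."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(history, key=lambda fs: dist2(fs[0], query))[:k]
    return sum(speed for _, speed in nearest) / k

# Feature: (density in veh/km, hour of day); target: speed in km/h.
history = [((20, 8), 90), ((25, 8), 85), ((22, 9), 88),
           ((80, 17), 30), ((85, 17), 25), ((90, 18), 20)]

# Low-density morning conditions predict much higher speeds than the
# high-density evening peak:
assert knn_predict(history, (23, 8)) > knn_predict(history, (84, 17))
```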

Journal ArticleDOI
TL;DR: In this paper, the authors developed a random-parameter hazard-based model to understand hurricane evacuation timing by individual households and found that the variables related to household location, destination characteristics, socio-economic characteristics, evacuation notice and household decision making are key determinants of the departure time.
Abstract: The goal of this paper is to develop a random-parameter hazard-based model to understand hurricane evacuation timing by individual households. The choice of departure time during disasters is a complex dynamic process and depends on the risk that the hazard represents, the characteristics of the household and the built environment features. However, the risk responses are heterogeneous across the households; this unobserved heterogeneity is captured through random parameters in the model. The model is estimated with data from Hurricane Ivan including households from Alabama, Louisiana, Florida and Mississippi. It is found that the variables related to household location, destination characteristics, socio-economic characteristics, evacuation notice and household decision making are key determinants of the departure time. As such the developed model provides some fundamental inferences about hurricane evacuation timing behavior.
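The hazard-based formulation can be illustrated with a simple Weibull duration model, where the hazard is the instantaneous rate of departing at time t given that the household has not yet left. The parameter values below are illustrative; the estimated Hurricane Ivan model additionally includes covariates and random parameters to capture household heterogeneity.

```python
import math

# Illustrative Weibull duration model for evacuation departure timing.
# lam (scale) and p (shape) are made-up values, not estimates from the paper.

def weibull_hazard(t, lam, p):
    """Instantaneous rate of departing at t, given no departure before t."""
    return (p / lam) * (t / lam) ** (p - 1)

def weibull_survival(t, lam, p):
    """Probability a household has not yet departed by time t."""
    return math.exp(-(t / lam) ** p)

# With shape p > 1 the departure hazard rises as time passes (positive
# duration dependence), and the survival curve falls accordingly:
assert weibull_hazard(30.0, lam=24.0, p=1.5) > weibull_hazard(10.0, lam=24.0, p=1.5)
assert weibull_survival(30.0, lam=24.0, p=1.5) < weibull_survival(10.0, lam=24.0, p=1.5)
```

Random parameters enter by letting lam (or the covariate coefficients that scale it) vary across households instead of being fixed.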

Journal ArticleDOI
TL;DR: A hybrid rollout algorithm is proposed for the solution of the inventory routing problem in which a supplier has to serve a set of retailers; its performance is evaluated on a large set of randomly generated problem instances.
Abstract: In this paper, we study an inventory routing problem in which a supplier has to serve a set of retailers. For each retailer, a maximum inventory level is defined and a stochastic demand has to be satisfied over a given time horizon. An order-up-to level policy is applied to each retailer, i.e., the quantity sent to each retailer is such that its inventory level reaches the maximum level whenever the retailer is served. An inventory cost is applied to any positive inventory level, while a penalty cost is charged and the excess demand is not backlogged whenever the inventory level is negative. The problem is to determine a shipping strategy that minimizes the expected total cost, given by the sum of the expected total inventory and penalty cost at the retailers and of the expected routing cost. A hybrid rollout algorithm is proposed for the solution of the problem and its performance is evaluated on a large set of randomly generated problem instances.
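The order-up-to policy and the cost structure can be sketched for a single retailer: a delivery tops inventory up to the maximum level, unmet demand is lost (not backlogged) and incurs a penalty. The demands, costs, and serve schedule below are illustrative; the actual problem optimizes the routing across retailers under stochastic demand.

```python
# Order-up-to policy with holding and lost-sales penalty costs for one
# retailer. All numbers are illustrative.

def simulate_retailer(demands, serve_days, max_level=50,
                      start=50, h=1.0, p=10.0):
    inv, cost = start, 0.0
    for day, demand in enumerate(demands):
        if day in serve_days:
            inv = max_level                  # order-up-to replenishment
        inv -= demand
        if inv >= 0:
            cost += h * inv                  # holding cost on positive stock
        else:
            cost += p * (-inv)               # penalty on excess demand
            inv = 0                          # excess demand is not backlogged
    return cost

demands = [20, 20, 20, 20]
# Serving on days 0 and 2 avoids stockouts; never serving incurs penalties:
assert simulate_retailer(demands, serve_days={0, 2}) < simulate_retailer(demands, serve_days=set())
```

The rollout algorithm's job is exactly to pick the serve days (and vehicle routes) that minimize the expected sum of these retailer costs plus routing cost.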

Journal ArticleDOI
TL;DR: This work characterize analytically the error introduced by the VT-macro model relative to the original VT-micro model, and presents an empirical analysis of the error and the computation time based on calibrated models of the Dutch A12 freeway.
Abstract: Traffic control approaches based on on-line optimization require fast and accurate integrated models for traffic flow, emission, and fuel consumption. In this context, one may want to integrate macroscopic traffic flow models with microscopic emission and fuel consumption models, which can result in shorter simulation times with fairly accurate estimates of the emissions and fuel consumption. In general, however, macroscopic traffic flow models and microscopic emission and fuel consumption models cannot be integrated with each other. We provide a general framework to integrate these two kinds of models. We illustrate the approach by considering the macroscopic traffic flow model METANET and the microscopic emission and fuel consumption model VT-micro, resulting in the so-called “VT-macro” model. Moreover, we characterize analytically the error introduced by the VT-macro model relative to the original VT-micro model. We further present an empirical analysis of the error and the computation time based on calibrated models of the Dutch A12 freeway.
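The integration idea can be sketched with the VT-micro functional form, in which the logarithm of an emission or fuel-consumption rate is a polynomial in speed s and acceleration a; in the VT-macro construction, these inputs come from the aggregate speeds and accelerations of the macroscopic flow model. The coefficient values below are made up for illustration.

```python
import math

# VT-micro functional form: ln(rate) = sum over i, j of K[i][j] * s**i * a**j.
# The 4x4 coefficient matrix K below is hypothetical, not a calibrated set.

def vt_micro_rate(s, a, K):
    """Emission/fuel rate (arbitrary units) from speed s and acceleration a."""
    log_rate = sum(K[i][j] * (s ** i) * (a ** j)
                   for i in range(4) for j in range(4))
    return math.exp(log_rate)

# Hypothetical coefficients: a base rate plus mild speed and acceleration terms.
K = [[-1.0, 0.05, 0.0, 0.0],
     [0.01, 0.002, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.0]]

# Accelerating at a given speed should emit more than cruising at that speed:
assert vt_micro_rate(15.0, 1.0, K) > vt_micro_rate(15.0, 0.0, K)
```

The analytical error the paper characterizes stems from evaluating this nonlinear form at segment-averaged (s, a) pairs instead of per-vehicle trajectories.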