
Showing papers in "Transportation Research Part C-emerging Technologies in 2009"


Journal ArticleDOI
TL;DR: An innovative method that combines GPS logs, Geographic Information System (GIS) technology and an interactive web-based validation application is presented, demonstrating that GPS-based methods now provide reliable multi-day data.
Abstract: In the past few decades, travel patterns have become more complex and policy makers demand more detailed information. As a result, conventional data collection methods no longer seem adequate to satisfy all data needs. Travel researchers around the world are currently experimenting with different Global Positioning System (GPS)-based data collection methods. An overview of the literature shows the potential of these methods, especially when algorithms that include spatial data are used to derive trip characteristics from the GPS logs. This article presents an innovative method that combines GPS logs, Geographic Information System (GIS) technology and an interactive web-based validation application. In particular, this approach concentrates on deriving and validating trip purposes and travel modes, as well as allowing for reliable multi-day data collection. In 2007, this method was used in practice in a large-scale study conducted in the Netherlands. In total, 1104 respondents successfully participated in the one-week survey. The project demonstrated that GPS-based methods now provide reliable multi-day data. In comparison with data from the Dutch Travel Survey, travel mode and trip purpose shares were almost equal, while more trips per tour were recorded, which indicates an ability to capture trips that are missed by paper diary methods.
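
Deriving travel modes from raw GPS logs is, at its core, a classification of trip segments from motion characteristics, which the paper refines with GIS data and web-based validation. A purely illustrative sketch of that kind of rule, using assumed speed thresholds and not the authors' actual algorithm:

```python
# Illustrative sketch (not the authors' algorithm): infer the travel mode of a
# GPS trip segment from simple speed statistics, as a rule-based baseline.
from dataclasses import dataclass
from typing import List

@dataclass
class Fix:
    t: float      # seconds since segment start
    speed: float  # m/s, e.g. from GPS Doppler or successive positions

def classify_mode(fixes: List[Fix]) -> str:
    """Crude speed-threshold heuristic; all thresholds are assumptions."""
    speeds = [f.speed for f in fixes]
    avg, peak = sum(speeds) / len(speeds), max(speeds)
    if peak < 2.5:                 # ~9 km/h
        return "walk"
    if peak < 8.0 and avg < 5.0:   # ~29 km/h
        return "bicycle"
    return "motorised"             # car vs. public transport needs GIS data

print(classify_mode([Fix(0, 1.2), Fix(10, 1.5), Fix(20, 1.1)]))  # -> walk
```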

484 citations


Journal ArticleDOI
TL;DR: This paper solves the problem of measuring intersection queue length by exploiting the queue discharge process in the immediate past cycle, using high-resolution "event-based" traffic signal data and applying Lighthill-Whitham-Richards shockwave theory.
Abstract: How to estimate queue length in real time at signalized intersections is a long-standing problem. The problem gets even more difficult when signal links are congested. The traditional input-output approach to queue length estimation can only handle queues that are shorter than the distance between the vehicle detector and the intersection stop line, because the cumulative vehicle count for arrival traffic is not available once the detector is occupied by the queue. In this paper, instead of counting arrival traffic flow in the current signal cycle, we solve the problem of measuring intersection queue length by exploiting the queue discharge process in the immediate past cycle. Using high-resolution "event-based" traffic signal data and applying Lighthill-Whitham-Richards (LWR) shockwave theory, we are able to identify traffic state changes that distinguish queue discharge flow from upstream arrival traffic. Therefore, our approach can estimate time-dependent queue length even when the signal links are congested with long queues. Variations of the queue length estimation model are also presented for when "event-based" data are not available. Our models are evaluated by comparing the estimated maximum queue length with ground truth data observed in the field. Evaluation results demonstrate that the proposed models can estimate long queues with satisfactory accuracy. Limitations of the proposed model are also discussed in the paper.
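
The key ingredient is the LWR shockwave relation: the interface between two traffic states (q1, k1) and (q2, k2) moves at speed w = (q2 - q1) / (k2 - k1). The toy sketch below, with assumed flow and density values rather than the paper's event-based estimates, shows how tracing the queuing and discharge waves of one cycle yields a maximum queue length:

```python
# Minimal LWR shockwave sketch (illustrative only, not the paper's estimator):
# given an assumed arrival state, the saturation (discharge) state and the jam
# density, trace the queuing and discharge waves of one cycle to obtain the
# maximum back-of-queue distance from the stop line.

def wave_speed(q1, k1, q2, k2):
    """Shockwave speed between traffic states (q1, k1) and (q2, k2)."""
    return (q2 - q1) / (k2 - k1)

def max_queue_length(q_arr, k_arr, q_sat, k_sat, k_jam, red_time):
    vq = abs(wave_speed(q_arr, k_arr, 0.0, k_jam))   # arrivals joining the queue
    vd = abs(wave_speed(0.0, k_jam, q_sat, k_sat))   # discharge wave at green onset
    # Both waves move upstream; the discharge wave (assumed faster) starts one
    # red interval later and catches the queuing wave after t_meet seconds.
    t_meet = vq * red_time / (vd - vq)
    return vq * (red_time + t_meet)

# Assumed values: 0.2 veh/s arriving at 0.02 veh/m, saturation flow 0.5 veh/s
# at 0.04 veh/m, jam density 0.15 veh/m, 40 s of red.
print(f"max queue ≈ {max_queue_length(0.2, 0.02, 0.5, 0.04, 0.15, 40.0):.0f} m")
```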

382 citations


Journal ArticleDOI
TL;DR: A preliminary simulation-based investigation of the signal control problem for a large-scale urban road network using two novel methodologies demonstrates the comparative efficiency and real-time feasibility of the developed signal control methods.
Abstract: The problem of designing network-wide traffic signal control strategies for large-scale congested urban road networks is considered. One known and two novel methodologies, all based on the store-and-forward modeling paradigm, are presented and compared. The known methodology is a linear multivariable feedback regulator derived through the formulation of a linear-quadratic optimal control problem. An alternative, novel methodology consists of an open-loop constrained quadratic optimal control problem, whose numerical solution is achieved via quadratic programming. Yet a different formulation leads to an open-loop constrained nonlinear optimal control problem, whose numerical solution is achieved by use of a feasible-direction algorithm. A preliminary simulation-based investigation of the signal control problem for a large-scale urban road network using these methodologies demonstrates the comparative efficiency and real-time feasibility of the developed signal control methods.
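
In the store-and-forward paradigm shared by all three methodologies, each link's queue is updated with the vehicles stored minus those forwarded during the green fraction of the cycle, and the known methodology then computes greens from a linear feedback law. The sketch below illustrates that structure on a made-up two-link example with an assumed diagonal gain matrix, not the paper's LQ-derived regulator:

```python
import numpy as np

# Hedged sketch of the store-and-forward paradigm (illustrative; the network,
# demands and gains are invented, not the paper's case study).
# Queue dynamics per link z:  x_z(k+1) = x_z(k) + T * (inflow_z - S_z * g_z(k) / C)
# Linear feedback regulator:  g(k) = g_nominal - L @ x(k)

T, C = 10.0, 90.0                      # control interval and cycle length (s)
S = np.array([0.5, 0.6])               # saturation flows (veh/s) for two links
inflow = np.array([0.25, 0.30])        # demand inflows (veh/s)
g_nom = np.array([40.0, 45.0])         # nominal green times (s)
L = np.array([[0.05, 0.0],             # feedback gain matrix (assumed values;
              [0.0, 0.05]])            #  an LQ design would supply these)

x = np.array([30.0, 10.0])             # initial queues (veh)
for k in range(5):
    g = np.clip(g_nom - L @ x, 10.0, C - 10.0)      # bounded green times
    x = np.maximum(x + T * (inflow - S * g / C), 0.0)
    print(f"k={k}  greens={g.round(1)}  queues={x.round(1)}")
```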

320 citations


Journal ArticleDOI
TL;DR: It is illustrated how the introduction of better operations research-based decision-support software could very significantly improve the ultimate performance of Freight ITS.
Abstract: While it is certainly too early to make a definitive assessment of the effectiveness of Intelligent Transportation Systems (ITS), it is not too early to take stock of what has been achieved and to think about what could be achieved in the near future. In our opinion, ITS developments have so far been largely hardware-driven and have led to the introduction of many sophisticated technologies in the transportation arena, while the development of the software component of ITS, models and decision-support systems in particular, is lagging behind. To reach the full potential of ITS, one must thus address the challenge of making the most intelligent use possible of the hardware that is being deployed and of the huge wealth of data it provides. We believe that transportation planning and management disciplines, operations research in particular, have a key role to play with respect to this challenge. The paper focuses on Freight ITS: Commercial Vehicle Operations and Advanced Fleet Management Systems, City Logistics, and electronic business. The paper reviews the main issues, technological challenges, and achievements, and illustrates how the introduction of better operations research-based decision-support software could very significantly improve the ultimate performance of Freight ITS.

289 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed an enhanced weight-based topological map-matching (tMM) algorithm, which improves the performance of the original tMM algorithm by introducing two new weights for turn-restriction at junctions and link connectivity.
Abstract: Map-matching (MM) algorithms integrate positioning data from a Global Positioning System (or a number of other positioning sensors) with a spatial road map with the aim of identifying the road segment on which a user (or a vehicle) is travelling and the location on that segment. Amongst the family of MM algorithms, consisting of geometric, topological, probabilistic and advanced methods, topological MM (tMM) algorithms are relatively simple, easy and quick, enabling them to be implemented in real time. Therefore, tMM algorithms are used in many navigation devices manufactured by industry. However, existing tMM algorithms have a number of limitations which affect their performance relative to advanced MM algorithms. This paper demonstrates that it is possible, by addressing these issues, to significantly improve the performance of a tMM algorithm. The paper describes the development of an enhanced weight-based tMM algorithm in which the weights are determined from real-world field data using an optimisation technique. Two new weights, for turn restrictions at junctions and for link connectivity, are introduced to improve matching performance, especially at junctions. A new procedure is developed for the initial map-matching process, and two consistency checks are introduced to minimise mismatches. The enhanced map-matching algorithm was tested using field data from dense urban and suburban areas. The algorithm identified 96.8% and 95.93% of the links correctly for positioning data collected in urban areas of central London and Washington, DC, respectively. For the suburban area in west London, the algorithm achieved 96.71% correct link identification with a horizontal accuracy of 9.81 m (2σ). This is superior to most existing topological MM algorithms and has the potential to support the navigation modules of many Intelligent Transport System (ITS) services.
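
The weight-based idea amounts to scoring each candidate link by a weighted sum of evidence terms (proximity, heading agreement, and the two new link-connectivity and turn-restriction terms) and matching the position fix to the highest-scoring link. A hedged toy version, with invented weights rather than the optimised ones derived from the paper's field data:

```python
import math

# Illustrative weighted scoring of candidate links for one topological
# map-matching step; the weight values and normalisations are assumptions.

def link_score(dist_m, heading_diff_deg, connected, turn_allowed,
               w_dist=0.3, w_head=0.3, w_conn=0.2, w_turn=0.2):
    """Higher score = better candidate; distance/heading are normalised to [0, 1]."""
    s_dist = max(0.0, 1.0 - dist_m / 50.0)                        # proximity
    s_head = max(0.0, math.cos(math.radians(heading_diff_deg)))   # heading agreement
    s_conn = 1.0 if connected else 0.0                            # link connectivity
    s_turn = 1.0 if turn_allowed else 0.0                         # turn restriction
    return w_dist * s_dist + w_head * s_head + w_conn * s_conn + w_turn * s_turn

candidates = {"link_A": (8.0, 5.0, True, True), "link_B": (6.0, 85.0, False, True)}
best = max(candidates, key=lambda k: link_score(*candidates[k]))
print(best)  # -> link_A: close, well aligned, connected and turn-permitted
```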

204 citations


Journal ArticleDOI
TL;DR: A real-time arterial data collection and archival system developed at the University of Minnesota is described, followed by an innovative algorithm for time-dependent arterial travel time estimation using the archived traffic data.
Abstract: Estimation of time-dependent arterial travel time is a challenging task because of the interrupted nature of urban traffic flows. Many research efforts have been devoted to this topic, but their successes are limited and most of them can only be used for offline purposes due to the limited availability of traffic data from signalized intersections. In this paper, we describe a real-time arterial data collection and archival system developed at the University of Minnesota, followed by an innovative algorithm for time-dependent arterial travel time estimation using the archived traffic data. The data collection system simultaneously collects high-resolution “event-based” traffic data, including every vehicle actuation over the loop detectors and every signal phase change, from multiple intersections. Using the “event-based” data, we estimate time-dependent travel time along an arterial by tracing a virtual probe vehicle. At each time step, the virtual probe has three possible maneuvers: acceleration, deceleration and no speed change. The maneuver decision is determined by its own status and the surrounding traffic conditions, which can be estimated based on the availability of traffic data at intersections. An interesting property of the proposed model is that travel time estimation errors can be self-corrected, because the trajectory differences between a virtual probe vehicle and a real one are reduced when both vehicles meet a red signal phase and/or a vehicle queue. Field studies at an 11-intersection arterial corridor along France Avenue in Minneapolis, MN, demonstrate that the proposed model can generate accurate time-dependent travel times under various traffic conditions.
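
The heart of the virtual-probe idea is the per-step maneuver decision. A sketch of that decision rule under strong simplifying assumptions (the thresholds and kinematics are invented, not the paper's calibrated logic):

```python
# Sketch of the virtual-probe decision step: at each time step the probe
# accelerates, decelerates or keeps its speed, based on its own state and the
# estimated conditions ahead. All parameter values are assumptions.

def decide_maneuver(speed, dist_to_obstacle, obstacle_is_stopping,
                    v_free=15.0, decel=2.5, margin=5.0):
    """obstacle = stop bar with a red signal, or the tail of a standing queue.
    Returns 'decelerate', 'accelerate' or 'no_change' (units: m, m/s)."""
    braking_dist = speed ** 2 / (2 * decel) + margin
    if obstacle_is_stopping and dist_to_obstacle <= braking_dist:
        return "decelerate"
    if speed < v_free:
        return "accelerate"
    return "no_change"

# Approaching a red signal 40 m ahead at 14 m/s -> must start braking.
print(decide_maneuver(14.0, 40.0, obstacle_is_stopping=True))   # decelerate
# Green ahead and below free speed -> speed up.
print(decide_maneuver(8.0, 200.0, obstacle_is_stopping=False))  # accelerate
# Cruising at free speed with nothing ahead -> hold.
print(decide_maneuver(15.0, 200.0, obstacle_is_stopping=False)) # no_change
```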

186 citations


Journal ArticleDOI
TL;DR: A practical system is described for the real-time estimation of travel time across an arterial segment with multiple intersections based on matching vehicle signatures from wireless sensors based on a statistical model of the signatures.
Abstract: A practical system is described for the real-time estimation of travel time across an arterial segment with multiple intersections. The system relies on matching vehicle signatures from wireless sensors. The sensors provide a noisy magnetic signature of a vehicle and the precise time when it crosses the sensors. A match (re-identification) of signatures at two locations gives the corresponding travel time of the vehicle. The travel times for all matched vehicles yield the travel time distribution. Matching results can be processed to provide other important arterial performance measures including capacity, volume/capacity ratio, queue lengths, and number of vehicles in the link. The matching algorithm is based on a statistical model of the signatures. The statistical model itself is estimated from the data, and does not require measurement of ‘ground truth’. The procedure does not require measurements of signal settings; in fact, signal settings can be inferred from the matched vehicle results. The procedure is tested on a 1.5 km (0.9 mile)-long segment of San Pablo Avenue in Albany, CA, under different traffic conditions. The segment is divided into three links: one link spans four intersections, and two links each span one intersection.
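
Conceptually, re-identification searches the upstream signatures recorded within a feasible travel-time window for the one most consistent with a downstream signature. The sketch below uses a plain Euclidean distance and hard time bounds as stand-ins for the paper's data-driven statistical model; all values are invented:

```python
import numpy as np

# Illustrative vehicle re-identification sketch: match a downstream magnetic
# signature to the closest upstream signature within a travel-time window.

def match(down_sig, down_time, upstream, t_min=30.0, t_max=300.0):
    """upstream: list of (time, signature); signatures are equal-length arrays.
    Returns (best_index, travel_time) or None if no candidate fits the window."""
    best, best_d = None, np.inf
    for i, (t_up, sig) in enumerate(upstream):
        tt = down_time - t_up
        if not (t_min <= tt <= t_max):
            continue                          # outside the feasible window
        d = np.linalg.norm(down_sig - sig)    # stand-in for a signature likelihood
        if d < best_d:
            best, best_d = (i, tt), d
    return best

rng = np.random.default_rng(0)
sigs = [rng.normal(size=8) for _ in range(3)]
upstream = [(10.0, sigs[0]), (15.0, sigs[1]), (20.0, sigs[2])]
noisy = sigs[1] + rng.normal(scale=0.1, size=8)   # same vehicle, sensor noise
print(match(noisy, 120.0, upstream))              # -> (1, 105.0)
```

The matched travel times across many vehicles then form the travel-time distribution the abstract refers to.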

172 citations


Journal ArticleDOI
TL;DR: In this article, an adaptive traffic signal controller for real-time operation is presented, which is built on approximate dynamic programming (ADP) to reduce computational burden by using an approximation to the value function of the dynamic programming.
Abstract: This paper presents a study on an adaptive traffic signal controller for real-time operation. The controller aims for three operational objectives: dynamic allocation of green time, automatic adjustment to control parameters, and fast revision of signal plans. The control algorithm is built on approximate dynamic programming (ADP). This approach substantially reduces computational burden by using an approximation to the value function of the dynamic programming and reinforcement learning to update the approximation. We investigate temporal-difference learning and perturbation learning as specific learning techniques for the ADP approach. We find in computer simulation that the ADP controllers achieve substantial reduction in vehicle delays in comparison with optimised fixed-time plans. Our results show that substantial benefits can be gained by increasing the frequency at which the signal plans are revised, which can be achieved conveniently using the ADP approach.
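
The temporal-difference flavour of the ADP idea can be pictured with a linear value-function approximation over queue features, updated online while a one-step lookahead chooses which approach gets green. The toy two-approach environment, features and reward below are all assumptions, not the paper's formulation:

```python
import numpy as np

# TD(0) sketch of approximate dynamic programming for a two-approach signal.

def phi(q):                                    # feature vector: queues + bias term
    return np.append(q, 1.0)

theta = np.zeros(3)                            # value-function weights
alpha, gamma = 0.05, 0.95
rng = np.random.default_rng(1)
q = np.array([5.0, 3.0])                       # initial queues (veh)

for step in range(200):
    arrivals = rng.poisson(0.4, 2).astype(float)
    cand = []                                  # next states if approach 0 or 1 is served
    for a in (0, 1):
        nxt = q + arrivals
        nxt[a] = max(0.0, nxt[a] - 3.0)        # saturation discharge on the green approach
        cand.append(nxt)
    a = int(np.argmax([-c.sum() + gamma * theta @ phi(c) for c in cand]))
    nxt = cand[a]
    reward = -nxt.sum()                        # delay proxy: fewer queued vehicles is better
    td_error = reward + gamma * theta @ phi(nxt) - theta @ phi(q)
    theta += alpha * td_error * phi(q)         # TD(0) update of the approximation
    q = nxt

print("learned value-function weights:", theta.round(2))
```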

168 citations


Journal ArticleDOI
TL;DR: This work provides a linear bilevel programming formulation for this hazmat transportation network design problem that takes into account both total risk minimization and risk equity, and provides a heuristic algorithm for the bilevel model that always finds a stable solution.
Abstract: In this work we consider the following hazmat transportation network design problem. A given set of hazmat shipments has to be routed over a road transportation network in order to transport a given amount of hazardous materials from specific origin points to specific destination points, and we assume there are regional and local government authorities that want to regulate hazmat transportation by imposing restrictions on the amount of hazmat traffic over the network links. In particular, the regional authority aims to minimize the total transport risk induced over the entire region in which the transportation network is embedded, while local authorities want the risk over their local jurisdictions to be as low as possible, forcing the regional authority to also ensure risk equity. We provide a linear bilevel programming formulation of this hazmat transportation network design problem that takes into account both total risk minimization and risk equity. We transform the bilevel model into a single-level mixed integer linear program by replacing the second-level (follower) problem with its KKT conditions and by linearizing the complementarity constraints, and we then solve the MIP with a commercial optimization solver. The optimal solution may not be stable, and we provide an approach for testing its stability and for evaluating the range of its solution values when it is not stable. Moreover, since the bilevel model is difficult to solve optimally and its optimal solution may not be stable, we provide a heuristic algorithm for the bilevel model that always finds a stable solution. The proposed bilevel model and heuristic algorithm are tested on real scenarios of an Italian regional network.
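
For orientation, the generic structure of such a bilevel design model can be written as follows. This is only a schematic single-commodity sketch with illustrative notation, not the paper's exact formulation (which handles multiple shipments and an explicit equity measure):

```latex
% Leader (regional authority) decides which links y are open to hazmat traffic;
% the follower (carriers) then routes flow x at minimum cost on the allowed links.
\begin{align*}
\min_{y\in\{0,1\}^{|A|}}\quad & \sum_{a\in A} r_a\, x_a(y)
      && \text{(leader: total risk over the region)}\\
\text{s.t.}\quad & \text{risk-equity constraints for each local jurisdiction},\\
& x(y) \in \arg\min_{x\ge 0}\Big\{\sum_{a\in A} c_a x_a \;:\;
      x \text{ satisfies flow conservation},\ x_a \le M\, y_a\ \forall a\in A\Big\}
      && \text{(follower: minimum-cost routing)}
\end{align*}
```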

166 citations


Journal ArticleDOI
TL;DR: This paper has developed an IEEE FIPA compliant mobile agent system called Mobile-C and designed an agent-based real-time traffic detection and management system (ABRTTDMS), which takes advantages of both stationary agents and mobile agents.
Abstract: Agent technology is rapidly emerging as a powerful computing paradigm for coping with the complexity of dynamic distributed systems, such as traffic control and management systems. However, while a number of agent-based traffic control and management systems have been proposed and multi-agent systems have been studied, to the best of our knowledge mobile agent technology has not been applied to this field. In this paper, we propose to integrate mobile agent technology with multi-agent systems to enhance the ability of traffic management systems to deal with uncertainty in a dynamic environment. In particular, we have developed an IEEE FIPA compliant mobile agent system called Mobile-C and designed an agent-based real-time traffic detection and management system (ABRTTDMS). The system based on Mobile-C takes advantage of both stationary agents and mobile agents. The use of mobile agents allows ABRTTDMS to dynamically deploy new control algorithms and operations in response to unforeseen events and conditions. Mobility also reduces incident response time and data transmission over the network. Simulation of using mobile agents for dynamic algorithm and operation deployment demonstrates that the mobile agent approach offers great flexibility in managing dynamics in complex systems.

157 citations


Journal ArticleDOI
TL;DR: It is concluded that the approach overcomes the drawbacks of both approaches by combining neural networks in a committee using Bayesian inference theory and leads to improved travel time prediction accuracy.
Abstract: Short-term prediction of travel time is one of the central topics in current transportation research and practice. Among the more successful travel time prediction approaches are neural networks and combined prediction models (a 'committee'). However, both approaches have disadvantages. Usually many candidate neural networks are trained and the best performing one is selected. However, it is difficult and arbitrary to select the optimal network. In committee approaches a principled and mathematically sound framework to combine travel time predictions is lacking. This paper overcomes the drawbacks of both approaches by combining neural networks in a committee using Bayesian inference theory. An 'evidence' factor can be calculated for each model, which can be used as a stopping criterion during training, and as a tool to select and combine different neural networks. Along with higher prediction accuracy, this approach allows for accurate estimation of confidence intervals for the predictions. When comparing the committee predictions to single neural network predictions on the A12 motorway in the Netherlands it is concluded that the approach indeed leads to improved travel time prediction accuracy.
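
The committee step itself is straightforward once each network's Bayesian evidence has been computed: predictions are combined with weights proportional to the normalised evidence. A small sketch with invented evidence values and forecasts:

```python
import numpy as np

# Evidence-weighted committee of travel-time predictors (illustrative values).

def committee_predict(predictions, log_evidence):
    """predictions: per-model point forecasts; log_evidence: per-model log evidence."""
    log_e = np.asarray(log_evidence, dtype=float)
    w = np.exp(log_e - log_e.max())           # numerically stable normalisation
    w /= w.sum()
    return float(np.dot(w, predictions)), w

pred = [312.0, 298.0, 305.0]                  # travel-time forecasts (s) of 3 networks
log_ev = [-1520.4, -1498.7, -1503.2]          # log evidence from Bayesian training
tt, weights = committee_predict(pred, log_ev)
print(f"committee forecast ≈ {tt:.1f} s, weights = {weights.round(3)}")
```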

Journal ArticleDOI
TL;DR: This research refines unconventional techniques for estimating speed at single-loop detectors, yielding estimates that approach the accuracy of a dual-loop detector's measurements.
Abstract: Roadway usage, particularly by large vehicles, is one of the fundamental factors determining the lifespan of highway infrastructure. Operating agencies typically employ expensive classification stations to monitor large vehicle usage. Meanwhile, single-loop detectors are the most common vehicle detector and many new, out-of-pavement detectors seek to replace loop detectors by emulating the operation of single-loop detectors. In either case, collecting reliable length data from these detectors has been considered impossible due to the noisy speed estimates provided by conventional data aggregation at single-loop detectors. This research refines non-conventional techniques for estimating speed at single-loop detectors, yielding estimates that approach the accuracy of a dual-loop detector's measurements. Employing these speed estimation advances, this research brings length based vehicle classification to single-loop detectors (and by extension, many of the emerging out-of-pavement detectors). The classification methodology is evaluated against concurrent measurements from video and dual-loop detectors. To capture higher truck volumes than empirically observed, a process of generating synthetic detector actuations is developed. By extending vehicle classification to single-loop detectors, this work leverages the existing investment deployed in single-loop detector count stations and real-time traffic management stations. The work also offers a viable treatment in the event that one of the loops in a dual-loop detector classification station fails and thus, also promises to improve the reliability of existing classification stations.
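
As context for the refinement described above, the conventional single-loop chain runs: estimate speed from count and occupancy under an assumed average effective vehicle length, then convert each vehicle's detector on-time into a length and classify by length thresholds. A back-of-envelope sketch with assumed parameter values (the fixed average length is exactly the assumption the paper's refined estimators relax):

```python
# Conventional single-loop speed estimate and length-based classification.

def estimate_speed(count, occupancy, period_s, avg_eff_len_m=6.5):
    """Speed = N * g / (occ * T), with g an assumed average effective length."""
    return count * avg_eff_len_m / (occupancy * period_s)

def vehicle_length(on_time_s, speed_ms, loop_len_m=1.8):
    """Effective length = speed * on-time; subtract the loop's own length."""
    return speed_ms * on_time_s - loop_len_m

def classify(length_m):
    return "truck" if length_m > 12.0 else "passenger/short vehicle"

v = estimate_speed(count=42, occupancy=0.08, period_s=300)      # ~11.4 m/s
for on_time in (0.55, 2.1):
    length = vehicle_length(on_time, v)
    print(f"on-time {on_time:.2f} s -> length {length:.1f} m -> {classify(length)}")
```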

Journal ArticleDOI
TL;DR: In this article, a car-following model from the driver's perspective is proposed to explain why departure headways follow such a log-normal distribution, which provides an acceptable description of the unseen interactions between the vehicles in a discharging queue.
Abstract: Modeling departure headways at signalized intersections has attracted constant research effort over the last four decades because of its importance, but most previous approaches focus only on the mean departure headways. In this paper, two interesting findings are presented. First, the distributions of the departure headways at each position in a queue are revealed to approximately follow a certain log-normal distribution (except the first one), and the corresponding mean values level out gradually. Second, a car-following model from the driver's perspective is proposed to explain why the departure headways follow such a log-normal distribution. The consistency between the empirical and simulation results indicates that this new car-following model provides an acceptable description of the unseen interactions between the vehicles in a discharging queue. The new model is also useful for simulation-based intersection capacity analysis and traffic signal control.
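
Checking the first finding on field data reduces to fitting a log-normal distribution to the headways observed at each queue position and testing the fit. A quick sketch using synthetic headways in place of field observations:

```python
import numpy as np
from scipy import stats

# Fit a log-normal to departure headways at one queue position and test the fit.
rng = np.random.default_rng(42)
headways = rng.lognormal(mean=np.log(2.0), sigma=0.25, size=200)  # seconds (synthetic)

shape, loc, scale = stats.lognorm.fit(headways, floc=0)   # fix the location at 0
print(f"fitted: median ≈ {scale:.2f} s, sigma ≈ {shape:.2f}")
ks = stats.kstest(headways, "lognorm", args=(shape, loc, scale))
print(f"KS test p-value = {ks.pvalue:.2f}  (large p: log-normal not rejected)")
```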

Journal ArticleDOI
TL;DR: In this paper, the authors proposed model formulations based on arc variables for both flow and design, as well as formulations with path flow variables and new cycle design variables for consolidation-based freight carriers.
Abstract: In this paper, we address the service network design with asset management problem, which integrates asset management considerations into service network design models for consolidation-based freight carriers. We propose model formulations based on arc variables for both flow and design, as well as formulations with path flow variables and new cycle design variables. Problem instances reflecting actual planning problems are used in the computational study to analyze the strengths and weaknesses of the various model formulations and the impact of asset management considerations on the transportation plan and the computational effort. Experimental results indicate that formulations based on cycle variables outperform traditional arc-based formulations, and that considering asset management issues may significantly impact the outcome of service planning models.

Journal ArticleDOI
TL;DR: In this paper, the problem of clean vehicle allocation in an existing public fleet is faced by introducing a method for solving the transit network design problem in a multimodal, demand elastic urban context dealing with the impacts deriving from transportation emissions.
Abstract: The use of fossil fuels in transportation generates harmful emissions that account for nearly half of the total pollutants in urban areas. To deal with this issue, local authorities are dedicating specific efforts to seizing the opportunity offered by new fuels and technological innovations to achieve cleaner urban mobility. In fact, authorities are improving the environmental performance of their public transport fleets by procuring cleaner vehicles, usually called low- and zero-emission vehicles (LEV and ZEV, respectively). Nevertheless, there seems to be a lack of methodologies for supporting stakeholders in decisions related to the introduction of green vehicles, whose allocation should be considered as early as the network design process in order to optimize the available green capacity. In this paper, the problem of clean vehicle allocation in an existing public fleet is addressed by introducing a method for solving the transit network design problem in a multimodal, demand-elastic urban context that accounts for the impacts deriving from transportation emissions. The solution procedure consists of a set of heuristics, including a routine for route generation and a genetic algorithm for finding a sub-optimal set of routes with the associated frequencies.

Journal ArticleDOI
TL;DR: A detailed optimization model is used to analyze one of the proposed traffic control policies, called green wave, which consists in letting trains wait at the stations to avoid speed profile modifications in open corridors, and shows a comparison of the delays obtained.
Abstract: In order to face the expected growth of transport demand in the next years, several new traffic control policies have been proposed and analyzed both to generate timetables and to effectively manage the traffic in real-time. In this paper, a detailed optimization model is used to analyze one such policy, called green wave, which consists in letting trains wait at the stations to avoid speed profile modifications in open corridors. Such policy is expected to be especially effective when the corridors are the bottleneck of the network. However, there is a lack of quantitative studies on the real-time effects of using this policy. To this end, this work shows a comparison of the delays obtained when trains are allowed or not to change their speed profile in open corridors. An extensive computational study is described for two practical dispatching areas of the Dutch railway network.

Journal ArticleDOI
TL;DR: In this paper, the authors present the methodology and results of estimation of an integrated driving behavior model that attempts to integrate various driving decisions, such as lane changing and acceleration decisions jointly and so, captures interdependencies between these behaviors and represents drivers' planning capabilities.
Abstract: This paper presents the methodology and results of estimation of an integrated driving behavior model that attempts to integrate various driving decisions. The model explains lane changing and acceleration decisions jointly and so, captures inter-dependencies between these behaviors and represents drivers' planning capabilities. It introduces new models that capture drivers' choice of a target gap that they intend to use in order to change lanes, and acceleration models that capture drivers' behavior to facilitate the completion of a desired lane change using the target gap. The parameters of all components of the model are estimated simultaneously with the maximum likelihood method and using detailed vehicle trajectory data collected in a freeway section in Arlington, Virginia. The estimation results are presented and discussed in detail.

Journal ArticleDOI
TL;DR: Results indicate that dual memory models offer better representation of the original time-series than classical models; further, forcing the differencing parameter of the ARIMA model to equal 1 leads to over-inflated moving average terms and, consequently, to questionable models with artificial correlation structures.
Abstract: In transportation analyses, autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroskedasticity (GARCH) models have been widely used, mainly because of their well-established theoretical foundation and ease of application. However, they lack the ability to capture long-memory properties and do not jointly treat the mean and variance (variability) of a time series. We employ fractionally integrated dual memory models and compare the results to classical time-series models in a traffic engineering context. Results indicate that dual memory models offer a better representation of the original time series than classical models; further, forcing the differencing parameter of the ARIMA model to equal 1 leads to over-inflated moving average terms and, consequently, to questionable models with artificial correlation structures.
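
The long-memory ingredient is fractional differencing: instead of forcing the integer difference (1 - B)^1, the operator (1 - B)^d with fractional d is expanded into an infinite set of weights. A minimal sketch of those weights and their application (the toy series is random, and d = 0.4 is just an example value):

```python
import numpy as np

# Fractional differencing weights from the binomial expansion of (1 - B)^d.

def frac_diff_weights(d, n):
    """pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return np.array(w)

def frac_diff(series, d):
    w = frac_diff_weights(d, len(series))
    return np.array([np.dot(w[:t + 1][::-1], series[:t + 1])
                     for t in range(len(series))])

x = np.cumsum(np.random.default_rng(0).normal(size=50))   # toy traffic-like series
print(frac_diff(x, d=0.4)[:5].round(3))
# With d = 1 the weights collapse to (1, -1, 0, ...), i.e. the plain first
# difference whose over-inflated moving-average terms the abstract mentions.
```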

Journal ArticleDOI
TL;DR: In this article, a mesoscopic model for airport terminal performance analysis has been developed, that strikes a balance between flexibility and realistic results, adopting a system dynamics approach, enabling quick and easy model building and providing the capability of being adaptable to the configuration and operational characteristics of a wide spectrum of airport terminals in a user-friendly manner.
Abstract: Decision making for airport terminal planning, design and operations is a challenging task, since it should consider significant trade-offs regarding alternative operational policies and physical terminal layout concepts. Existing models and tools for airport terminal analysis and performance assessment are too specific (i.e., models of specific airports) or general simulation platforms that require substantial airport modelling effort. In addition, they are either too detailed (i.e., microscopic) or too aggregate (i.e., macroscopic), affecting, respectively, the flexibility of the model to adapt to any airport and the level of accuracy of the results obtained. Therefore, there is a need for a generic decision support tool that will incorporate sufficient level of detail for assessing airport terminal performance. To bridge this gap, a mesoscopic model for airport terminal performance analysis has been developed, that strikes a balance between flexibility and realistic results, adopting a system dynamics approach. The proposed model has a modular architecture and interface, enabling quick and easy model building and providing the capability of being adaptable to the configuration and operational characteristics of a wide spectrum of airport terminals in a user-friendly manner. The capabilities of the proposed model have been demonstrated through the analysis of the Athens International Airport terminal.

Journal ArticleDOI
TL;DR: In this article, the authors focus on the performance issues of the two on-board subsystems: navigation and communication, and a study of the literature in the field is completed with the evaluation of different system prototypes.
Abstract: Collision avoidance support systems (CASS) are nowadays one of the main fields of interest in the area of road transportation. Among the different approaches, systems based on vehicle cooperation to avoid collisions offer the most promising perspectives. Works available in the current literature have in common that the performance of such solutions relies strongly on the operation of two on-board subsystems: navigation and communications. However, the performance of these two subsystems is usually underestimated when the whole solution is evaluated. Collision avoidance support applications can be considered among the most critical vehicular services, which is why this paper focuses on the performance issues of these two subsystems. The main issues regarding navigation and communication performance are discussed throughout the paper, and a review of the literature in the field is complemented by the evaluation of different system prototypes. Communication and navigation tests in real environments yield further conclusions discussed in the paper.

Journal ArticleDOI
TL;DR: In this article, the authors presented the results of a microscopic traffic simulation study of the potential effects of an overtaking assistant for two-lane rural roads, which was developed to support two-way rural roads.
Abstract: This paper presents the results of a microscopic traffic simulation study of the potential effects of an overtaking assistant for two-lane rural roads. The overtaking assistant is developed to support ...

Journal ArticleDOI
TL;DR: It is thought that waiting profiles provide a promising protocol to tackle the problem of aligning barge rotations with quay schedules of terminals in the port of Rotterdam, as an information exchange based on waiting profiles reduces the average tardiness per barge by almost 80% when compared to the situation with no information exchange.
Abstract: We consider the problem of aligning barge rotations with quay schedules of terminals in the port of Rotterdam. Every time a barge visits the port, it has to make a rotation along, on average, eight terminals to load and unload containers. A central solution, e.g., a trusted party that coordinates the activities of all barges and terminals, is not feasible for several reasons. We propose a multi-agent based approach to the problem, since a multi-agent system can mirror to a large extent the way the business network is currently organized and can provide a solution that is acceptable to each of the parties involved. We examine the value of exchanging different levels of information and evaluate the performance by means of simulation. We compare the results with an off-line scheduling algorithm. The results indicate that, in spite of the limited information available, our distributed approach performs quite well when compared to the central approach. In addition, our experiments indicate that an information exchange based on waiting profiles reduces the average tardiness per barge by almost 80% when compared to the situation with no information exchange. We therefore think that waiting profiles provide a promising protocol for tackling this problem.

Journal ArticleDOI
TL;DR: A Dynamic Multi-Period Routing Problem faced by a company which deals with on-line pick-up requests and has to serve them by a fleet of uncapacitated vehicles over a finite time horizon is considered.
Abstract: We consider a Dynamic Multi-Period Routing Problem (DMPRP) faced by a company which deals with on-line pick-up requests and has to serve them by a fleet of uncapacitated vehicles over a finite time horizon. When a request is issued, a deadline of a given number of days d ⩽ 2 is associated with it: if d = 1 the request has to be satisfied on the same day (unpostponable request), while if d = 2 the request may be served either on the same day or on the day after (postponable request). At the beginning of each day some requests are already known, while others may arrive as time goes on. Every day the company faces on-line requests by possibly making new plans for the service and decides whether or not to serve postponable requests without knowing the set of new requests that will be issued the day after. The company's objective is to satisfy all the received requests while minimizing the average operational costs per day. The daily cost includes a very high cost paid for each request forwarded to a back-up service. We propose different short-term routing strategies and analyze their impact on the long-term objective. Extensive computational results are provided on randomly generated instances simulating different real case scenarios, and conclusions are drawn on the effectiveness of the strategies.

Journal ArticleDOI
TL;DR: It is confirmed that the lower number of (stochastic) equations (independent observed link flows) with respect to the unknowns (O–D flows) represents the main reason for the failure of the O–D matrix correction procedure, and it is shown that satisfactory correction is generally obtained when the number of equations is greater than the number of unknowns.
Abstract: Correction of the O–D matrix from traffic counts is a classical procedure usually adopted in transport engineering by practitioners for improving the overall reliability of transport models. Recently, Papola and Marzano [Papola, A., Marzano, V., 2006. How can we trust in the O–D matrix correction procedure using traffic counts? In: Proceedings of the 2006 ETC Conference, Strasbourg] showed through laboratory experiments that this procedure is generally unable to provide for effective correction of the O–D matrix. From a theoretical standpoint, this result can be justified by the lower number of (stochastic) equations (independent observed link flows) with respect to the unknowns (O–D flows). This paper first confirms that this represents the main reason for the failure of this procedure, showing that satisfactory correction is generally obtained when the number of equations is greater than the number of unknowns. Then, since this circumstance does not occur in practice, where the number of O–D pairs usually far exceeds the number of link counts, we explore alternative assumptions and contexts, allowing for a proper balance between unknowns and equations. This can be achieved by moving to within-day dynamic contexts, where a much larger number of equations are generally available. In order to bound the corresponding increase in the number of unknowns, specific reasonable hypotheses on O–D flow variation across time slices must be introduced. In this respect, we analyze the effectiveness of the O–D matrix correction procedure in the usually adopted linear hypothesis on the dynamic process evolution of O–D flows and under the assumption of constant distribution shares. In the second case it is shown that satisfactory corrections can be performed using a small number of time slices of up to 3 min in length, leading to a time horizon in which the hypothesis of constant distribution shares can be regarded as trustworthy and realistic.
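
The balance between equations and unknowns discussed above can be written schematically as a linear system relating observed counts to O-D flows; the notation below is illustrative, not the paper's:

```latex
% Observed link counts are a noisy linear image of the unknown O-D flows:
\hat{\mathbf{y}} \;=\; \mathbf{A}\,\mathbf{x} + \boldsymbol{\varepsilon},
\qquad \mathbf{x}\in\mathbb{R}^{n_{\mathrm{OD}}},\quad
\hat{\mathbf{y}}\in\mathbb{R}^{n_{\mathrm{counts}}}.
% In the static case n_counts is far smaller than n_OD, so the system is heavily
% under-determined, which is the failure mechanism confirmed above. Stacking one
% block of equations per time slice in the within-day dynamic case raises
% n_counts faster than n_OD, provided O-D variation across slices is constrained.
```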

Journal ArticleDOI
TL;DR: The results of the performance testing conducted in this paper demonstrate the superior predictive accuracy and drastically lower computational requirements of the SPN compared to either the neural network or the nearest neighbor approach.
Abstract: Short-term traffic volume forecasting represents a critical need for Intelligent Transportation Systems. This paper develops a novel forecasting approach inspired by human memory, called the spinning network (SPN). The approach is then used for short-term traffic volume forecasting, utilizing a data set compiled from real-world traffic volume data obtained from the Hampton Roads traffic operations center in Virginia. To assess the accuracy of the SPN approach, its performance is compared to two other approaches, namely a back-propagation neural network and a nearest neighbor approach. The transferability of the SPN approach and its ability to forecast for longer time periods into the future are also assessed. The results of the performance testing conducted in this paper demonstrate the superior predictive accuracy and drastically lower computational requirements of the SPN compared to either the neural network or the nearest neighbor approach. The tests also confirm the ability of the SPN to predict traffic volumes for longer time periods into the future, as well as the transferability of the approach to other sites.

Journal ArticleDOI
TL;DR: In this paper, two different ways to manage availability information in parking facilities were evaluated in a parking facility and the results showed that the level 4 PARC systems are sufficiently effective in small facilities; the option of preparing “false” information when 5 or 10 free spaces are left on a garage level in order to influence user decisions has few practical repercussions.
Abstract: Two different ways to manage availability information in parking facilities are evaluated in this article. First, in level 4 PARC systems (Parking Access and Revenue Control), when occupancy percentages for all garage levels are above 90% and 95%, the information on the variable message sign (VMS) panel indicates that there are no free spaces – censored information. Zoning is understood here as the second information management tool; it consists of placing vehicle detection systems at intermediary points around the facility in such a way as to separate it into internal zones. It is a variation applicable to facilities with level 2 PARC systems, such as those in shopping centres, where a modification of the vehicle counting algorithm in the main program of the PARC system allows the determination of the number of free spaces in specific zones within the parking facility. According to the simulations that were carried out after testing an existing choice model, level 4 PARC systems are sufficiently effective in small facilities; the option of presenting "false" information when 5 or 10 free spaces are left on a garage level in order to influence user decisions has few practical repercussions. However, the separation into internal zones proposed for level 2 PARC systems shows a 16.2% reduction in search time.

Journal ArticleDOI
TL;DR: A machine vision based approach has been considered to emulate the visual abilities of the human operator to enable automation of the process and show a competitive performance when compared to a human operator.
Abstract: Wooden railway sleeper inspections in Sweden are currently performed manually by a human operator; such inspections are to a large extent based on visual analysis. In this paper a machine vision based approach is considered to emulate the visual abilities of the human operator and enable automation of the process. Digital images from both ends (left and right) of the sleepers have been acquired. A pattern recognition approach has been adopted to classify the condition of each sleeper into classes (good or bad) and thereby achieve automation. Appropriate image analysis techniques were applied and relevant features, such as the number of cracks on a sleeper, the average length and width of the cracks, and the condition of the metal plate, were determined. Feature fusion is proposed in order to integrate the features obtained from each end for the classification task which follows. The effect of using classifiers such as the multi-layer perceptron and support vector machines has been tested and compared. Results obtained from the experiments show that the multi-layer perceptron and support vector machines achieve encouraging results, with a classification accuracy of 90%, thereby exhibiting competitive performance when compared to a human operator.
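
The classification stage described above (feature fusion of both sleeper ends, then an MLP versus SVM comparison) maps naturally onto standard tooling. A hedged sketch with synthetic features standing in for the image-derived ones:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Sketch of the classification stage only; image feature extraction is out of
# scope here, and the data below are synthetic.
rng = np.random.default_rng(0)
n = 200
# per end: crack count, mean crack length, mean crack width, plate condition
left, right = rng.normal(size=(n, 4)), rng.normal(size=(n, 4))
X = np.hstack([left, right])                      # feature fusion of both ends
y = (X[:, 0] + X[:, 4] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # 1 = bad

for name, clf in [("MLP", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                         random_state=0)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    model = make_pipeline(StandardScaler(), clf)
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy ≈ {acc:.2f}")
```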

Journal ArticleDOI
TL;DR: An ensemble approach for CD in Free Flight is proposed that is capable of accommodating existing as well as new CD models and can be extended to other ATM concepts.
Abstract: Airborne separation assurance is a key requirement for Free Flight operations. A variety of conflict detection (CD) and resolution algorithms have been developed for this task. A lack of rigorous evaluation and the existence of an infinite number of possible conflict geometries in Free Flight makes the choice of which algorithm to be placed in the cockpit a challenging task for the designers of future air traffic management (ATM) systems. In this paper, we propose an ensemble approach for CD in Free Flight. The ensemble consists of several CD algorithms, a rule set for each algorithm describing its learned behavior from its past performance and a switch mechanism to choose an appropriate CD algorithm given probe characteristics. A novel mechanism to evolve complex conflict scenarios, using genetic algorithms (GA), is developed and integrated in a fast time air traffic simulator to generate the performance data of CD algorithms. Data mining techniques are then employed to identify implicit patterns in the probe characteristics where the CD algorithms missed or falsely identified a conflict. These patterns are formulated as rule sets for each CD algorithm and are then used by a switch in the ensemble to route a probe for conflict prediction. Given probe characteristics, the CD algorithm, which is less likely to miss or falsely identify a conflict, is selected to evaluate the probe for potential conflict. The performance of the ensemble and of individual algorithms is evaluated by comparing the Pareto efficient set of solutions generated by them. The ensemble approach demonstrates a significant reduction in the number of missed detects and false alarms as compared to individual algorithms. The proposed methodology is capable of accommodating existing as well as new CD models and can be extended to other ATM concepts as well.

Journal ArticleDOI
TL;DR: Another simpler solution, with fewer parameters, is investigated, which consists in introducing a relaxation procedure within the car-following rules and proposing a new insertion decision algorithm in order to loosen the links between both model components.
Abstract: A classical way to represent vehicle interactions at merges at the microscopic scale is to combine a gap-acceptance model with a car-following algorithm. However, in congested conditions (when a queue spills back on the major road), outputs of such a combination may be irrelevant if anticipatory aspects of vehicle behaviours are disregarded (like in single-level gap-acceptance models). Indeed, the insertion decision outcomes are so closely bound to the car-following algorithm that irrelevant results are produced. On the one hand, the insertion decision choice is sensitive to numerical errors due to the car-following algorithm. On the other hand, the priority sharing process observed in congestion cannot be correctly reproduced because of the constraints imposed by the car-following on the gap-acceptance model. To get over these issues, more sophisticated gap-acceptance algorithms accounting for cooperation and aggressiveness amongst drivers have been recently developed (multi-level gap-acceptance models). Another simpler solution, with fewer parameters, is investigated in this paper. It consists in introducing a relaxation procedure within the car-following rules and proposing a new insertion decision algorithm in order to loosen the links between both model components. This approach will be shown to accurately model the observed flow allocation pattern in congested conditions at an aggregate scale.
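
The relaxation idea mentioned above can be pictured as a desired time gap that is temporarily reduced at the moment of insertion and then returns to its normal car-following value over a short horizon. A minimal sketch with an assumed linear relaxation and invented parameter values (the paper's actual procedure and constants may differ):

```python
# Relaxation of the desired time gap after a merge, instead of forcing an
# instantaneous correction by the car-following rule.

def desired_time_gap(t_since_insertion, T_normal=1.6, T_insert=0.6, tau=15.0):
    """Linearly relax the desired gap from T_insert back to T_normal over
    tau seconds after the insertion (all values in seconds, assumed)."""
    if t_since_insertion >= tau:
        return T_normal
    frac = t_since_insertion / tau
    return T_insert + (T_normal - T_insert) * frac

for t in (0.0, 5.0, 10.0, 20.0):
    print(f"t = {t:4.1f} s after merge -> desired gap {desired_time_gap(t):.2f} s")
```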

Journal ArticleDOI
TL;DR: This paper presents fast, accurate measurement with an on-board inertial system, together with a method to evaluate measurement uncertainty (particularly for variables obtained indirectly) and an algorithm for segmenting and fitting geometric curves to the experimental points, following current highway design standards.
Abstract: Digital maps can provide support for numerous advanced driver assistance systems (ADAS) aimed at improving road safety. These new uses require more detailed and precise maps. Using a datalogging vehicle to collect roadway geometry data can fulfil these specifications. This paper presents fast, accurate measurement with an on-board inertial system, together with a method to evaluate measurement uncertainty, particularly for variables obtained indirectly. It also presents an algorithm for segmentation and for fitting geometric curves to the experimental points, following current highway design standards. The algorithms have been applied to real road measurements. Segmentation has been performed into straight alignments, circular curves and transition curves, whose characteristic parameters are calculated. It is shown that, with a very small data set, it is possible to reconstruct the measured geometry with little discrepancy from the experimental points.
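
A common way to carry out the segmentation step is to look at how the estimated curvature behaves along the measured points: near zero for straight alignments, approximately constant for circular curves, and steadily varying for transitions. The sketch below illustrates that logic on a synthetic 500 m radius arc; the thresholds and labelling rule are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

# Label a span of measured centre-line points from its discrete curvature.

def curvature(x, y):
    """kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2), invariant to parameterisation."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / np.power(dx * dx + dy * dy, 1.5)

def label(kappa, straight_eps=1e-4, rel_tol=0.1):
    """Near-zero curvature -> straight; nearly constant -> circular curve;
    otherwise -> transition (clothoid-like) span. Thresholds are assumptions."""
    if kappa.mean() < straight_eps:
        return "straight alignment"
    if kappa.std() / kappa.mean() < rel_tol:
        return f"circular curve, R ≈ {1.0 / kappa.mean():.0f} m"
    return "transition curve"

s = np.linspace(0, 200, 400)                               # arc length (m)
x, y = 500 * np.sin(s / 500), 500 * (1 - np.cos(s / 500))  # 500 m radius arc
print(label(curvature(x, y)))
```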