Journal ArticleDOI

A framework for and empirical study of algorithms for traffic assignment

TL;DR: This study implements a flexible software framework that maximises the use of common code, ensuring that algorithms are compared on common ground, and analyses groups of algorithms that are based on common principles.
About: This article is published in Computers & Operations Research. The article was published on 2015-02-01 and is currently open access. It has received 50 citations to date. The article focuses on the topics: Traffic congestion & Network planning and design.

Summary (6 min read)

1. Introduction and Motivation

  • The TA model describes travel behaviour of road users.
  • The wide research interest in this problem has several reasons.
  • It is not clear if the authors used the same framework for all algorithms and how carefully they followed the descriptions of the algorithms available in the literature.
  • In Section 4, various implemented methods for solving traffic assignment are described.

2. Problem Formulation

  • This section introduces a mathematical formulation of the TA problem and notation that is used throughout the paper.
  • Link flow is the number of vehicles per time unit on each link.
  • In the following, it is assumed that these requirements are satisfied.
  • The formulation (2) is sometimes referred to as link-route or path flow formulation [13, 14].
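
For reference, formulation (2) is commonly written as the following path-flow (Beckmann-type) program; the symbols below ($F_k$ for path flows, $D_p$ for the demand of O-D pair $p$, $\delta_{ak}$ for the link-path incidence) are assumed here, since this summary does not reproduce the paper's exact notation:

```latex
\min_{F \ge 0} \;\; \sum_{a \in A} \int_0^{f_a} c_a(x)\,\mathrm{d}x
\quad \text{s.t.} \quad
\sum_{k \in K_p} F_k = D_p \;\; \forall p, \qquad
f_a = \sum_{p} \sum_{k \in K_p} \delta_{ak} F_k \;\; \forall a \in A.
```

The last identity, mapping path flows to link flows, is the projection referred to above as equation (1).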

3. Literature Overview

  • One of the possible ways to classify traffic assignment algorithms is according to how the solution is represented: link-based (solution variables are link flows), path-based (solution variables are path flows) and bush-based (solution variables are link flows coming from a particular origin), see [15].
  • Path-based methods decompose the TA problem into sub-problems corresponding to O-D pairs.
  • Their main idea is to shift flow from the longest path (the path with maximum cost) to the shortest path (the path with minimum cost).
  • During such analysis it is important to pay attention to how the algorithms were compared (re-implemented by the authors of the study or using existing software), what instances were used and how precise the obtained solutions were.
  • Since sometimes contradictory conclusions are drawn in different papers, this alignment is approximate.

4. Algorithms

  • One of the reasons for the development of various traffic assignment algorithms is a specific problem structure that can be exploited by solution methods in many different ways [13].
  • The constraints of the TA problem represent a polyhedral set and the objective function is convex.
  • The algorithms described later in this section belong to this type of non-linear optimisation methods.
  • Another property of traffic assignment that can be exploited is the O-D pair separability: the flow conservation constraints of one O-D pair do not affect those of any other pair.
  • This fact gives rise to various algorithms based on the idea of decomposing the problem into smaller sub-problems that can be solved one after another or in parallel [13].

4.1. Link-based Algorithms

  • This section presents Frank-Wolfe and two of its modifications: conjugate and bi-conjugate Frank-Wolfe methods.
  • These path flows are then projected onto the links using equation (1), which gives the initial feasible link flows.
  • For the TA problem this linearised problem becomes a shortest path problem and, hence, AON is applied [14].
  • The CFW and BFW algorithms use information about previously generated directions of descent in order to find a new one.
  • They do so via so-called points of sight, which store information about previously generated directions of descent (for details see Mitradjieva and Lindberg [20]).
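
As an illustration of the basic FW iteration described above, here is a minimal C++ sketch; the helpers `allOrNothing` and `lineSearch` are assumptions standing in for the framework's real components, not the paper's API:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

using LinkFlows = std::vector<long double>;

// One iteration of the basic FW scheme: solve the linearised sub-problem
// (shortest paths + all-or-nothing loading at current costs), then move to
// a convex combination of the current point and the resulting extreme point.
void frankWolfeStep(
    LinkFlows& f,
    const std::function<LinkFlows(const LinkFlows&)>& allOrNothing,
    const std::function<long double(const LinkFlows&, const LinkFlows&)>& lineSearch)
{
    LinkFlows y = allOrNothing(f);               // extreme point of the feasible set
    const long double lambda = lineSearch(f, y); // step size in [0, 1] along y - f
    for (std::size_t a = 0; a < f.size(); ++a)
        f[a] += lambda * (y[a] - f[a]);          // convex combination stays feasible
}
```

CFW and BFW differ only in how the direction $y - f$ is corrected using the stored information about previous directions.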

4.2. Path-based Algorithms

  • Path-based methods exploit the O-D pair separability and a path flow formulation (2) of the TA problem.
  • Let $K_p^+$ denote the set of paths between O-D pair p that are currently in use, i.e. they carry positive flow.
  • Due to the decomposition by O-D pairs, the solution to the original TA problem is found by solving several single commodity sub-problems sequentially until the desired precision of the solution is reached.
  • As presented in Patriksson [13], for the single commodity sub-problem the feasible direction of descent is defined as the path cost differences between cheaper and costlier paths.
  • If the direction of descent is scaled by the Hessian matrix, then the resulting method corresponds to a constrained version of the Newton method [41].

4.2.1. Path Equilibration

  • The PE algorithm equalises the costs of the current longest path $l \in K_p^+$ with positive flow and the current shortest path $s \in K_p^+$.
  • Gauss-Seidel decomposition refers to the process of solving a problem by decomposing it into several sub-problems and solving them sequentially several times until a required precision is achieved [13].
  • Since only two path costs are equalised, the Newton step can be applied as discussed in Section 4.4.3.
  • Originally PE was implemented to solve the traffic assignment problem with quadratic link cost functions and was applied only to small instances [27].
  • The authors implemented both scaled and non-scaled descent directions for this algorithm.

4.2.2. Gradient Projection

  • The GP algorithm considers several paths at each iteration.
  • After several runs of the algorithm the authors set α to 0.25, because this value allows all tested instances to converge.
  • Chen et al. [42] and Chen et al. [43] also suggest a self-adaptive step size strategy for GP and other algorithms that the authors did not implement in their study.
  • This version of GP with a projection operator can be classified as a scaled gradient projection with a diagonal approximation of the Hessian [14]; a sketch of the update follows this list.
  • This approach was presented in Cheng et al. [45].
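
As a hedged illustration of the flow shift used by this family of methods (a standard presentation of scaled GP for TA; the paper's exact update may differ in detail), each used non-shortest path $k$ of O-D pair $p$ is updated as

```latex
F_k \leftarrow \max\left\{0,\; F_k - \frac{\alpha\,(C_k - C_s)}{s_k}\right\},
\qquad k \in K_p^+ \setminus \{s\},
```

where $s$ is the current shortest path, $C_k$ the cost of path $k$, $\alpha$ the step size (set to 0.25 above), and $s_k$ the sum of the link cost derivatives $c'_a(f_a)$ over the links in which $k$ and $s$ differ (the diagonal Hessian approximation mentioned above); the flow removed from the non-shortest paths is added to $F_s$ so that demand is conserved.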

4.2.3. Projected Gradient

  • The main idea of the PG algorithm is to move flow from the paths whose cost is greater than the current average path cost to the paths whose cost is less than the average value (sketched after this list).
  • For this particular algorithm, it is not obvious how the Hessian matrix can be approximated.
  • It is not clear how to choose these two paths (in GP Newton flow is always shifted from non-shortest paths to the shortest path).
  • Therefore, a scaled direction of descent is not implemented for PG in their study.
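
For reference, the unscaled PG direction can be sketched as follows (a common presentation; notation assumed): with $\bar{C}_p$ the average cost of the currently used paths,

```latex
d_k = \bar{C}_p - C_k, \quad k \in K_p^+,
\qquad
\bar{C}_p = \frac{1}{|K_p^+|} \sum_{k \in K_p^+} C_k,
```

so that $\sum_{k} d_k = 0$ and flow moves from above-average-cost paths ($d_k < 0$) to below-average-cost ones ($d_k > 0$); because no single pair of paths is singled out, there is no obvious two-path Newton scaling.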

4.2.4. Improved Social Pressure

  • The ISP algorithm is based on the idea of “social pressure” which is defined as the difference between the cost of a path and the cost of the shortest path [31].
  • This is equivalent to defining a direction of descent d and moving the solution along this direction.
  • ISP scales some elements of the direction of descent by a scaling factor based on second derivative information.

4.3.1. Algorithm B

  • Its main idea is similar to PE (see Section 4.2.1), i.e. the flow is shifted from the longest used path to the shortest path, but within a given bush.
  • The two segments do not share any links and start at the same node.
  • Once the segments are built, flow is moved from the segment with higher cost to the segment with lower cost.
  • As in the case of PE, in order to find the amount of flow to move several approaches can be implemented, i.e. feasible direction with a line search and Newton step.

4.3.2. Linear User Cost Equilibrium Algorithm

  • This algorithm was proposed by Gentile [8].
  • The main idea of this method is to seek at every node the equilibrium flow coming from the same origin among the in-coming links of this node.
  • LUCE uses flow portions $\varphi_{ij}$ defined for every link (i, j). Unlike algorithm B, its sub-problem is based on these flow portions.
  • This raises certain difficulties related to the fact that the second-order derivative of the objective function with respect to $\varphi_{ij}$ is not available in closed form [47].
  • The practical implications of this are discussed in Xie et al. [47] and in Section 5.2.3.

4.3.3. TAPAS

  • This algorithm was developed by Bar-Gera [37].
  • The algorithm uses paired alternative segments (PAS).
  • First, it loops through all origins (during this stage new PASs are created and flow shifts are performed).
  • In addition, each PAS has a set of relevant origins, which means that when a flow shift within a given PAS is performed, origin flows of relevant origins must be updated (for details see [37]).
  • Unlike algorithm B, one flow move in TAPAS usually involves changes of origin flows of several origins.
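
A minimal sketch of the data a PAS might carry, following the description above (the field names are illustrative assumptions, not Bar-Gera's implementation [37]):

```cpp
#include <cstddef>
#include <vector>

// A paired alternative segment: two link-disjoint segments connecting the
// same pair of nodes, plus the origins whose flows must be updated whenever
// flow is shifted between the two segments.
struct PairedAlternativeSegment {
    std::vector<std::size_t> cheapSegment;    // link indices of the cheaper segment
    std::vector<std::size_t> costlySegment;   // link indices of the costlier segment
    std::vector<std::size_t> relevantOrigins; // origins affected by a flow shift
};
```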

4.4.1. Shortest Path

  • Each TA method requires solving shortest path problems many times.
  • For this purpose, the authors implemented the label correcting algorithm presented in Sheffi [6]; a sketch is given after this list.
  • Once all O-D pairs coming from one origin are considered, a new shortest path tree is calculated for the next origin.
  • This approach might lead to an increase in the total number of iterations required by a path-based algorithm, since the information about new shortest paths is available only when the algorithm proceeds to the next origin. (An implementation of the A* algorithm was kindly provided by Boshen Chen.)
  • The authors did not implement this particular shortest path strategy in their study.
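
A minimal sketch of a label-correcting routine of the kind described in Sheffi [6], using a FIFO candidate list (the graph representation and names are assumptions):

```cpp
#include <cstddef>
#include <deque>
#include <limits>
#include <vector>

struct Arc { std::size_t head; long double cost; };
using Graph = std::vector<std::vector<Arc>>;  // adjacency list per node

// One-to-all shortest path distances from `origin`: node labels are
// repeatedly corrected until no candidate node remains in the list.
std::vector<long double> labelCorrecting(const Graph& g, std::size_t origin) {
    const long double INF = std::numeric_limits<long double>::infinity();
    std::vector<long double> dist(g.size(), INF);
    std::vector<bool> queued(g.size(), false);
    std::deque<std::size_t> candidates;
    dist[origin] = 0.0L;
    candidates.push_back(origin);
    queued[origin] = true;
    while (!candidates.empty()) {
        const std::size_t i = candidates.front();
        candidates.pop_front();
        queued[i] = false;
        for (const Arc& arc : g[i]) {
            if (dist[i] + arc.cost < dist[arc.head]) {
                dist[arc.head] = dist[i] + arc.cost; // correct the label
                if (!queued[arc.head]) {             // schedule re-examination
                    candidates.push_back(arc.head);
                    queued[arc.head] = true;
                }
            }
        }
    }
    return dist;
}
```

A shortest path tree can be recovered by additionally storing the predecessor arc whenever a label is corrected.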

4.4.3. Newton Step

  • This section presents how to calculate a flow shift between two paths in order to equalise their costs using the Newton method.
  • That is why projection onto the feasible set is required: $F_l = F_l - \min\{F_l, \Delta F\}$, $F_s = F_s + \min\{F_l, \Delta F\}$ (15). A Newton step can be applied iteratively to the two paths under consideration until the desired precision is reached.
  • It is usually applied only once [47] and the authors follow this convention.
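
In this notation, the Newton flow shift between the current longest path $l$ and shortest path $s$ is commonly derived as

```latex
\Delta F = \frac{C_l - C_s}{\sum_{a \in (l \cup s) \setminus (l \cap s)} c'_a(f_a)},
```

i.e. the path cost difference divided by the sum of the link cost derivatives over the links in which the two paths differ; the projection in (15) then caps the shift at the flow $F_l$ actually available on the longest path.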

4.4.4. Equilibration Strategies

  • Path- and bush-based algorithms iterate between improving an element of decomposition (add better paths to a set of active paths, add better links to a bush, create new PASs) and equilibrating it.
  • This set of steps can be performed once for each of the decomposition elements or several times before proceeding to the next one.
  • In the following, the strategy of improving and equilibrating each element only once will be called Equilibration I and the strategy of improving and equilibrating a current element several times will be called Equilibration II.
  • These approaches are also presented in the literature [32, 49].
  • The authors also limit the maximum number of these iterations to ten in order to have more control over the total running time of the algorithms.
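
A schematic contrast of the two strategies, with the per-element operations hidden behind assumed callbacks (illustrative only, not the paper's code):

```cpp
#include <functional>
#include <vector>

// Element stands for one decomposition element (the path set of an O-D pair,
// a bush, or a PAS). improve() adds better paths / links / PASs;
// equilibrate() performs one equilibration pass and reports convergence.
struct Element {
    std::function<void()> improve;
    std::function<bool()> equilibrate;
};

// Equilibration I: improve and equilibrate each element exactly once.
void equilibrationI(std::vector<Element>& elements) {
    for (Element& e : elements) {
        e.improve();
        e.equilibrate();
    }
}

// Equilibration II: repeat on the same element before moving on,
// capped at ten inner iterations as described above.
void equilibrationII(std::vector<Element>& elements, int maxInner = 10) {
    for (Element& e : elements) {
        for (int it = 0; it < maxInner; ++it) {
            e.improve();
            if (e.equilibrate()) break; // element equilibrated to precision
        }
    }
}
```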

4.5. Difficulties of Comparison of Groups of Algorithms

  • As can be seen from Sections 4.1-4.3, the algorithms that belong to the same group of methods (link-, path- and bush-based) share the same framework and differ only slightly.
  • This allows fair comparison between the algorithms from the same group.
  • The methods belonging to different groups require special algorithms specific only to that particular group.
  • All path-based methods implement A* to solve point-to-point shortest path problems.
  • There might exist more efficient algorithms for this purpose that might lead to a better performance of the entire group of path-based methods.

5.1. Problem Instances and Computational Environment

  • The authors performed computational tests on the instances available at the website http://www.bgu.ac.il/~bargera/tntp/.
  • The main characteristics of these instances are presented in Table 3.
  • All instances use BPR link cost functions of the form $c_a(f_a) = \mathit{freeFlow} \cdot \left(1 + B \cdot \left(\frac{f_a}{\mathit{capacity}}\right)^{\mathit{power}}\right)$, where freeFlow, B, capacity and power are function parameters (a code sketch follows this list).
  • For all algorithms the authors used extended floating point precision (the C++ long double type).
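
A direct rendering of this cost function and its derivative in the extended precision mentioned above (an illustrative sketch, not the authors' code):

```cpp
#include <cmath>

// BPR link cost function; parameter names mirror the formula above.
struct BprLink {
    long double freeFlow;  // free-flow travel time
    long double B;         // BPR coefficient
    long double capacity;  // practical capacity of the link
    long double power;     // BPR exponent

    // c_a(f_a) = freeFlow * (1 + B * (f_a / capacity)^power)
    long double cost(long double flow) const {
        return freeFlow * (1.0L + B * std::pow(flow / capacity, power));
    }

    // c'_a(f_a), needed by the Newton-type flow shifts of Section 4.4.3.
    long double derivative(long double flow) const {
        return freeFlow * B * power / capacity
             * std::pow(flow / capacity, power - 1.0L);
    }
};
```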

5.2. Results

  • The authors use the relative gap RGAP (see equation (3)) as a convergence criterion because it is a common measure of convergence (see [10, 32, 35, 39]) and it can be calculated for all tested algorithms; a common form of its definition is sketched after this list.
  • The authors decided to perform several numerical tests.
  • Since tests on large instances might require long computational time, the authors first performed tests on small and medium instances with different configuration options for all algorithms.
  • Based on the results of these tests, the authors chose the best configuration options for each algorithm, eliminated methods that did not seem promising, and performed numerical tests on large instances with the remaining algorithms.
  • The required accuracy of the solution was set to $\epsilon = 10^{-14}$, i.e. the algorithms were stopped once the relative gap fell below $\epsilon$.
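
A common definition of the relative gap referred to as equation (3) (the paper's exact normalisation may differ) is

```latex
\mathrm{RGAP} = 1 - \frac{\sum_{p} D_p\, u_p}{\sum_{a \in A} c_a(f_a)\, f_a},
```

where $u_p$ is the current shortest-path cost of O-D pair $p$ and $D_p$ its demand; RGAP vanishes exactly at user equilibrium.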

5.2.1. Test 1: High Precision

  • Test 1 was performed on the five small and medium instances listed in Table 3.
  • This observation regarding different configurations of the algorithms shows that if a line search is applied, it is better to approximate the step size using quadratic approximation.
  • As a result of this choice, this particular configuration does not always perform well, but converges for all instances.
  • CPU times reported in these figures consider only pure iteration time.

5.2.2. Test 2: Equilibration II

  • This numerical test investigates the impact of Equilibration II (several steps of improving and equilibrating are applied to the element of decomposition under consideration before proceeding to the next one).
  • In general, Equilibration II leads to a significant increase of computational time with only a few exceptions: GP Newton performs better on all instances and the majority of algorithms perform better on the Barcelona instance with Equilibration II.
  • The authors observe that convergence patterns are much “smoother” compared to Equilibration I.
  • This is due to the fact that each element of decomposition is closer to an equilibrium, since several equilibration steps, instead of just one, are applied to a given element before proceeding to the next one.
  • [Figure 12: Test 2. Convergence on the Barcelona instance; plotted series include App. B Newton and TAPAS Newton.]

5.2.3. Test 3: Time Limit

  • This numerical test considers the algorithms FW, CFW, BFW, PG and LUCE.
  • LUCE, however, does not correct its direction of descent, which causes issues with convergence.
  • Thus, when the two similar numbers are subtracted in order to find a direction of descent, the precision is lost.
  • Since the step size is bounded by one, this situation might occur during later stages compared to PG.
  • Bottom line: 1. Link-based algorithms, represented by FW, CFW and BFW, converge fast during the initial iterations but start tailing in the vicinity of the equilibrium and cannot achieve highly precise solutions in a reasonable amount of time; 2. LUCE can achieve relatively high precision of around $10^{-12}$.

5.2.4. Test 4: Large Instances

  • Test 4 aims to compare the best algorithms on large instances (the three last instances in Table 3).
  • CFW and FW are not present in this test since they always perform worse compared to BFW.
  • The Philadelphia instance, however, shows a different pattern.
  • LUCE is able to achieve higher precision than any of the path-based approaches.
  • Bottom line: 1. For large instances, TAPAS and algorithm B are usually the best choice if a highly accurate solution is required; 2. A careful implementation of TAPAS is necessary in order to achieve the best results.

5.3. Summary

  • This section summarises the main findings from the performed numerical tests.
  • This issue underlines the fact that it is better to apply methods based on exact second derivative information, which is the case for all path-based methods and for the bush-based methods that shift flows between segments (B and TAPAS).
  • Algorithms also differ in the amount of information generated along with link flows.
  • TAPAS aims at addressing this difficulty by providing consistent path flows that satisfy the condition of proportionality.
  • Bush-based methods can also provide path flow information.


Citations
Journal ArticleDOI
TL;DR: This paper proposes a generalized Physarum model to solve the shortest path problem in directed and asymmetric graphs and extends it further to resolve the network design problem with multiple source nodes and sink nodes and demonstrates that the Physarum solver converges to the user-optimized (Wardrop) equilibrium by dynamically updating the costs of links in the network.
Abstract: Finding an equilibrium state of the traffic assignment plays a significant role in the design of transportation networks. We adapt the path finding mathematical model of slime mold Physarum polycephalum to solve the traffic equilibrium assignment problem. We make three contributions in this paper. First, we propose a generalized Physarum model to solve the shortest path problem in directed and asymmetric graphs. Second, we extend it further to resolve the network design problem with multiple source nodes and sink nodes. At last, we demonstrate that the Physarum solver converges to the user-optimized (Wardrop) equilibrium by dynamically updating the costs of links in the network. In addition, convergence of the developed algorithm is proved. Numerical examples are used to demonstrate the efficiency of the proposed algorithm. The superiority of the proposed algorithm is demonstrated in comparison with several other algorithms, including the Frank-Wolfe algorithm, conjugate Frank-Wolfe algorithm, biconjugate Frank-Wolfe algorithm, and gradient projection algorithm.

38 citations


Cites background from "A framework for and empirical study..."

  • ...basic idea is to decompose the problem into a sequence of subproblems, which operate on acyclic subnetworks of the original transportation network [24]....


Journal ArticleDOI
TL;DR: Numerical experiments show the efficiency of the proposed model for traffic congestion mitigation, and reveal that exploiting the interaction effects between the tradable credit scheme and the discrete network design problem, which amplify their individual effects, achieves better performance than solving the two decision problems sequentially.

28 citations

Journal ArticleDOI
TL;DR: A bilevel mathematical optimization model is formulated to maximize the transportation system resilience and restore its performance through two network reconfiguration schemes: contraflow and crossing elimination at intersections.
Abstract: Evacuating residents out of affected areas is an important strategy for mitigating the impact of natural disasters. However, the resulting abrupt increase in the travel demand during evacuation causes severe congestions across the transportation system, which thereby interrupts other commuters' regular activities. In this article, a bilevel mathematical optimization model is formulated to address this issue, and our research objective is to maximize the transportation system resilience and restore its performance through two network reconfiguration schemes: contraflow (also referred to as lane reversal) and crossing elimination at intersections. Mathematical models are developed to represent the two reconfiguration schemes and characterize the interactions between traffic operators and passengers. Specifically, traffic operators act as leaders to determine the optimal system reconfiguration to minimize the total travel time for all the users (both evacuees and regular commuters), while passengers act as followers by freely choosing the path with the minimum travel time, which eventually converges to a user equilibrium state. For each given network reconfiguration, the lower-level problem is formulated as a traffic assignment problem (TAP) where each user tries to minimize his/her own travel time. To tackle the lower-level optimization problem, a gradient projection method is leveraged to shift the flow from other nonshortest paths to the shortest path between each origin-destination pair, eventually converging to the user equilibrium traffic assignment. The upper-level problem is formulated as a constrained discrete optimization problem, and a probabilistic solution discovery algorithm is used to obtain the near-optimal solution. Two numerical examples are used to demonstrate the effectiveness of the proposed method in restoring the traffic system performance.

26 citations


Cites methods from "A framework for and empirical study..."

  • ...In this article, we choose the gradient projection method (Perederieieva et al., 2015) due to its efficiency and simplicity....


Journal ArticleDOI
TL;DR: This article significantly accelerates the computation of flow patterns, enabling interactive transportation and urban planning applications, by building a traffic assignment procedure upon customizable contraction hierarchies (CCH), revisiting and carefully engineering CCH customization and queries, and adapting CCH to compute batched point-to-point shortest paths.
Abstract: Given an urban road network and a set of origin-destination pairs, the traffic assignment problem asks for the traffic flow on each road segment. Common solution algorithms require a large number of shortest-path computations. In this article, we significantly accelerate the computation of flow patterns, enabling interactive transportation and urban planning applications. We achieve this by building a traffic assignment procedure upon customizable contraction hierarchies (CCH), revisiting and carefully engineering CCH customization and queries, and adapting CCH to compute batched point-to-point shortest paths. Although motivated by the traffic assignment problem, our optimizations apply to CCH in general. In contrast to previous work, our evaluation uses real-world production data for all parts of the input. On a metropolitan area encompassing about 2.7 million inhabitants, we decrease the flow-pattern computation for a typical 1-hour morning peak (a quarter million trips) from 90.9 to 14.1 seconds on one core and 2.4 seconds on a 16-core machine. This represents a speedup of 37 over the state of the art and more than three orders of magnitude over the Dijkstra-based baseline.

25 citations


Cites background or methods from "A framework for and empirical study..."

  • ...While this instance is significantly smaller than road networks studied before for evaluating point-to-point queries [5], it is the largest available to us that provides real-world capacities and O-D pairs, and is still an order of magnitude larger than the instances collected in Reference [3], and the instances considered in a recent overview of the state-of-the-art in the area of traffic assignment [51]....


  • ...This is not surprising [51], since there is a CFW algorithm at the heart of our traffic assignment....


  • ...[51] give a recent overview of practical traffic assignment algorithms....


  • ...However, the most common criterion in both research papers and practice [24, 30, 34, 49, 51, 53, 60] is the relative gap [11]....


  • ...[51], resort to the 50-year-old A* algorithm [37] to compute shortest paths....


Journal ArticleDOI
TL;DR: In this paper, a new class of path-based solution algorithms is proposed to solve the restricted stochastic user equilibrium (RSUE), as introduced in Watling et al. (2015).
Abstract: We propose a new class of path-based solution algorithms to solve the Restricted Stochastic User Equilibrium (RSUE), as introduced in Watling et al. (2015). The class allows a flexible specification of how the choice sets are systematically grown by considering congestion effects and how the flows are allocated among routes. The specification allows adapting traditional path-based stochastic user equilibrium flow allocation methods (originally designed for pre-specified choice sets) to the generic solution algorithm. We also propose a cost transformation function and show that by using this we can, for certain Logit-type choice models, modify existing path-based Deterministic User Equilibrium solution methods to compute RSUE solutions. The transformation function also leads to a two-part relative gap measure for consistently monitoring convergence to a RSUE solution. Numerical tests are reported on two real-life cases, in which we explore convergence patterns and choice set composition and size, for alternative specifications of the RSUE model and solution algorithm.

23 citations


Cites methods from "A framework for and empirical study..."

  • ...In the case of DUE, while several path-based algorithms have been proposed (see Perederieieva et al., 2015, for a recent paper testing several variants), link-based formulations retain an attraction due to the fact that we can only hope at best for DUE uniqueness in the space of link flows, not…...


References
Book
01 Jan 1995

12,671 citations


"A framework for and empirical study..." refers background or methods in this paper

  • ...The main idea behind this type of methods is: starting from a feasible solution, a feasible direction of descent is calculated and the solution is moved along this direction [41]....


  • ...If the direction of descent is scaled by the Hessian matrix, then the resulting method corresponds to a constrained version of the Newton method [41]....


  • ...As mentioned in Bertsekas [41], when the constraint set is polyhedral, as is the case with TA, FW might converge sub-linearly....


  • ...Other optimisation formulations of the TA problem based on a different set of decision variables can be found in Patriksson [13] and Bertsekas [14]....


  • ...If an approximation of the Hessian is used instead, then the method can be viewed as a scaled gradient projection or constrained quasi-Newton approach [41]....


01 Jan 1952

3,821 citations


"A framework for and empirical study..." refers background in this paper

  • ...The most well-known assumptions are the ones following Wardrop’s first principle (also called user equilibrium condition): “The journey times on all the routes actually used are equal, and less than those which would be experienced by a single vehicle on any unused route” [2]....


Journal ArticleDOI

3,154 citations


"A framework for and empirical study..." refers methods in this paper

  • ...The most well-known such algorithm is Frank-Wolfe (FW), a general algorithm for convex optimisation problems [16]....


Book
01 Jan 1985

2,277 citations


"A framework for and empirical study..." refers background or methods in this paper

  • ...Exactly: find a value of λ that minimises the objective function in (11), for example, by using bisection, see [6];...


  • ...A zone refers to an area of a transportation network that can vary from a city block to a neighbourhood [6]....


  • ...For this purpose, we implemented the label correcting algorithm presented in Sheffi [6]....


  • ...If these assumptions are satisfied, solving the following optimisation problem (2) results in the link flows satisfying the user equilibrium condition [6]....


  • ...Moreover, this direction is also a direction of descent [6]....


Frequently Asked Questions (2)
Q1. What are the contributions mentioned in the paper "A framework for and empirical study of algorithms for traffic assignment" ?

Once such a model is created, it can be used to analyse the usage of a road network and to predict the impact of implementing a potential project. In this study, the authors consider the static deterministic user equilibrium TA model. In order to achieve this goal, the authors implement a flexible software framework that maximises usage of common code and, hence, ensures comparison of algorithms on common ground. In addition, the authors implement and compare several different methods for solving sub-problems and discuss issues related to accumulated numerical errors that might occur when highly accurate solutions are required. 

The future development of this research consists in further study of numerical issues when high accuracy is required. In particular, the authors want to investigate how the direction of descent of the PG algorithm can be corrected. It will also be interesting to investigate the impact of randomization and PAS management strategies on the performance of TAPAS.