
Showing papers in "Networks in 2008"


Journal ArticleDOI
01 Jan 2008-Networks
TL;DR: The resource constrained elementary shortest path problem (RCESPP) arises as a pricing subproblem in branch-and-price algorithms for vehicle-routing problems with additional constraints; the optimization of the RCESPP is addressed and three methods are presented and compared.
Abstract: The resource constrained elementary shortest path problem (RCESPP) arises as a pricing subproblem in branch-and-price algorithms for vehicle routing problems with additional constraints. We address the optimization of the RCESPP and we present and compare three methods. The first method is a well-known exact dynamic programming algorithm improved by new ideas, such as bi-directional search with resource-based bounding. The second method consists of a branch-and-bound algorithm, where lower bounds are computed by dynamic programming with state space relaxation; we show how bounded bi-directional search can be adapted to state space relaxation and we present different branching strategies and their hybridization. The third method, called decremental state space relaxation, is a new one; exact dynamic programming and state space relaxation are two special cases of this new method. The experimental comparison of the three methods is definitely favourable to decremental state space relaxation. Computational results are given for different kinds of resources, arising from the capacitated vehicle routing problem, the vehicle routing problem with distribution and collection, and the vehicle routing problem with capacities and time windows.
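The label-setting dynamic programming at the heart of all three methods can be illustrated with a toy forward-labeling routine for a single resource. This is only a sketch, not the authors' algorithm (no bidirectional search, bounding, or state space relaxation); the adjacency format, the single-resource model, and the example data are illustrative assumptions.

```python
def rcespp(adj, s, t, R):
    """Toy forward labeling for the resource constrained elementary shortest
    path problem with a single resource capped at R.  A label is
    (cost, resource, visited); new labels dominated by an existing label are
    discarded.  Exponential in the worst case -- illustration only."""
    start = (0.0, 0.0, frozenset([s]))
    labels = {s: [start]}
    stack = [(s, start)]
    best = float('inf')
    while stack:
        u, (c, r, vis) = stack.pop()
        for v, cost, res in adj[u]:
            if v in vis or r + res > R:   # elementarity + resource feasibility
                continue
            lab = (c + cost, r + res, vis | {v})
            # discard the new label if an existing one at v is at least as good
            # in cost and resource and has visited a subset of its nodes
            if any(c2 <= lab[0] and r2 <= lab[1] and v2 <= lab[2]
                   for c2, r2, v2 in labels.get(v, [])):
                continue
            labels.setdefault(v, []).append(lab)
            if v == t:
                best = min(best, lab[0])
            else:
                stack.append((v, lab))
    return best

# adj: node -> list of (head, cost, resource); costs may be negative,
# as is typical for reduced costs in a pricing subproblem
example = {0: [(1, -2.0, 1.0), (3, 0.0, 1.0)],
           1: [(2, -2.0, 1.0)],
           2: [(3, -2.0, 1.0)],
           3: []}
```

With resource limit 3 the cheapest elementary path in `example` is 0-1-2-3 with cost -6; tightening the limit to 2 forces the direct arc of cost 0.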

258 citations



Journal IssueDOI
01 Jan 2008-Networks
TL;DR: In this article, a Tabu Search algorithm is proposed to solve the 2L-CVRP problem, in which the loading component of the problem is solved through heuristics, lower bounds, and a truncated branch-and-bound procedure.
Abstract: This article addresses the well-known Capacitated Vehicle Routing Problem (CVRP), in the special case where the demand of a customer consists of a certain number of two-dimensional weighted items. The problem calls for the minimization of the cost of transportation needed for the delivery of the goods demanded by the customers, and carried out by a fleet of vehicles based at a central depot. In order to accommodate all items on the vehicles, a feasibility check of the two-dimensional packing (2L) must be executed on each vehicle. The overall problem, denoted as 2L-CVRP, is NP-hard and particularly difficult to solve in practice. We propose a Tabu Search algorithm, in which the loading component of the problem is solved through heuristics, lower bounds, and a truncated branch-and-bound procedure. The effectiveness of the algorithm is demonstrated through extensive computational experiments. © 2007 Wiley Periodicals, Inc. NETWORKS, 2008

173 citations


Journal IssueDOI
01 Dec 2008-Networks
TL;DR: A new algorithm for CSPP is presented that Lagrangianizes those constraints, optimizes the resulting Lagrangian function, identifies a feasible solution, and then closes any optimality gap by enumerating near-shortest paths, measured with respect to the Lagrangianized length.
Abstract: The constrained shortest-path problem (CSPP) generalizes the standard shortest-path problem by adding one or more path-weight side constraints. We present a new algorithm for CSPP that Lagrangianizes those constraints, optimizes the resulting Lagrangian function, identifies a feasible solution, and then closes any optimality gap by enumerating near-shortest paths, measured with respect to the Lagrangianized length. “Near-shortest” implies ε-optimal, with a varying ε that equals the current optimality gap. The algorithm exploits a variety of techniques: a new path-enumeration method; aggregated constraints; preprocessing to eliminate edges that cannot form part of an optimal solution; “reprocessing” that reapplies preprocessing steps as improved solutions are found; and, when needed, a “phase-I procedure” to identify a feasible solution before searching for an optimal one. The new algorithm is often an order of magnitude faster than a state-of-the-art label-setting algorithm on singly constrained randomly generated grid networks. On multiconstrained grid networks, road networks, and networks for aircraft routing the advantage varies but, overall, the new algorithm is competitive with the label-setting algorithm. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008 This article is a US Government work and, as such, is in the public domain in the United States of America.
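The Lagrangian step of such an approach can be sketched for a single side constraint r(P) ≤ R: bisect on the multiplier λ and solve an ordinary shortest-path problem under the penalized length c + λr. A minimal illustration, not the paper's implementation; the graph encoding, bisection bounds, and iteration count are assumptions.

```python
import heapq

def penalized_shortest_path(adj, s, t, lam):
    """Dijkstra under the Lagrangianized length c + lam * r.
    Returns (penalized length, cost, resource) of the best s-t path."""
    dist = {s: 0.0}
    pq = [(0.0, 0.0, 0.0, s)]
    while pq:
        d, c, r, u = heapq.heappop(pq)
        if u == t:
            return d, c, r
        if d > dist.get(u, float('inf')):
            continue
        for v, cost, res in adj[u]:
            nd = d + cost + lam * res
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, c + cost, r + res, v))
    return float('inf'), None, None

def lagrangian_bound(adj, s, t, R, iters=50):
    """Bisection on the multiplier for one side constraint r(P) <= R.
    Returns (lower bound on the optimum, best feasible cost found)."""
    lo, hi = 0.0, 1e6
    bound, best = -float('inf'), float('inf')
    for _ in range(iters):
        lam = (lo + hi) / 2.0
        d, c, r = penalized_shortest_path(adj, s, t, lam)
        if r is None:                      # no s-t path at all
            break
        bound = max(bound, d - lam * R)    # L(lam) <= OPT for every lam >= 0
        if r <= R:
            best = min(best, c)            # feasible: record it, lower the penalty
            hi = lam
        else:
            lo = lam                       # infeasible: raise the penalty
    return bound, best
```

On a small example with a cheap resource-heavy path and an expensive resource-light one, the bisection finds the feasible optimum and a Lagrangian lower bound below it; the remaining gap is what the near-shortest-path enumeration would close.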

130 citations


Journal IssueDOI
01 Oct 2008-Networks
TL;DR: Computational results demonstrate improvements in the objective function values for the shortest path network interdiction problem with asymmetric information over the symmetric-information case.
Abstract: We consider an extension of the shortest path network interdiction problem. In this problem an evader attempts to minimize the length of the shortest path between the origin and the destination in a network, while an interdictor attempts to maximize the length of this shortest path by interdicting network arcs using limited resources. We consider the case where there is asymmetric information, i.e., the evader and the interdictor have different levels of information about the network. We formulate this problem as a nonlinear mixed integer program and show that this formulation can be converted to a linear mixed integer program. Computational results demonstrate improvements in the objective function values over the shortest path network interdiction problem with symmetric information. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008
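For intuition, the max-min structure of shortest path interdiction can be seen in a brute-force toy solver: the interdictor lengthens a budgeted set of arcs and the evader responds with a shortest path. This enumeration is purely illustrative (the paper solves a mixed integer program; the additive-delay model and data here are assumptions).

```python
from itertools import combinations
import heapq

def shortest_path_length(arcs, s, t):
    """Plain Dijkstra; arcs is a dict (tail, head) -> nonnegative length."""
    out = {}
    for (u, v), w in arcs.items():
        out.setdefault(u, []).append((v, w))
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float('inf')):
            continue
        for v, w in out.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float('inf')

def interdict(arcs, s, t, budget, delay):
    """Enumerate every set of `budget` arcs, lengthen each chosen arc by
    `delay`, and keep the set that maximizes the evader's shortest path."""
    best_val, best_set = -float('inf'), None
    for chosen in combinations(arcs, budget):
        mod = dict(arcs)
        for a in chosen:
            mod[a] = arcs[a] + delay
        val = shortest_path_length(mod, s, t)
        if val > best_val:
            best_val, best_set = val, chosen
    return best_val, best_set
```

On a two-path diamond, one unit of budget is useless (the evader switches paths), while two units can block both routes.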

102 citations


Journal IssueDOI
01 Oct 2008-Networks
TL;DR: Computational results are given showing the efficacy of a sampling-based approach to solve the network interdiction problem in which the successful destruction of an arc of the network is a Bernoulli random variable, and the objective is to minimize the maximum expected flow of the adversary.
Abstract: The network interdiction problem involves interrupting an adversary's ability to maximize flow through a capacitated network by destroying portions of the network. A budget constraint limits the amount of the network that can be destroyed. In this article, we study a stochastic version of the network interdiction problem in which the successful destruction of an arc of the network is a Bernoulli random variable, and the objective is to minimize the maximum expected flow of the adversary. Using duality and linearization techniques, an equivalent deterministic mixed integer program is formulated. The structure of the reformulation allows for the application of decomposition techniques for its solution. Using a parallel algorithm designed to run on a distributed computing platform known as a computational grid, we give computational results showing the efficacy of a sampling-based approach to solve the problem. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008

92 citations


Journal IssueDOI
01 Jan 2008-Networks
TL;DR: This work considers the problem of locating hubs and assigning terminals to hubs for a telecommunication network and presents two formulations and shows that the constraints are facet-defining inequalities in both cases.
Abstract: We consider the problem of locating hubs and assigning terminals to hubs for a telecommunication network. The hubs are directly connected to a central node and each terminal node is directly connected to a hub node. The aim is to minimize the cost of locating hubs, assigning terminals and routing the traffic between hubs and the central node. We present two formulations and show that the constraints are facet-defining inequalities in both cases. We test the formulations on a set of instances. Finally, we present a heuristic based on Lagrangian relaxation. © 2007 Wiley Periodicals, Inc. NETWORKS, 2008

63 citations


Journal IssueDOI
01 Jan 2008-Networks
TL;DR: This paper shows that it can obtain an optimal solution of the block-to-train assignment problem within a few minutes of computational time, and can obtain heuristic solutions with 1–2% deviations from the optimal solutions within a few seconds.
Abstract: Railroad planning involves solving two optimization problems: (i) the blocking problem, which determines what blocks to make and how to route traffic over these blocks; and (ii) the train schedule design problem, which determines train origins, destinations, and routes. Once the blocking plan and train schedule have been obtained, the next step is to determine which trains should carry which blocks. This problem, known as the block-to-train assignment problem, is considered in this paper. We provide two formulations for this problem: an arc-based formulation and a path-based formulation. The latter is generally smaller than the former, and it can better handle practical constraints. We also propose exact and heuristic algorithms based on the path-based formulation. Our exact algorithm solves an integer programming formulation with CPLEX using both a priori generation and dynamic generation of paths. Our heuristic algorithms include a Lagrangian relaxation-based method as well as a greedy construction method. We present computational results of our algorithms using the data provided by a major US railroad. We show that we can obtain an optimal solution of the block-to-train assignment problem within a few minutes of computational time, and can obtain heuristic solutions with 1–2% deviations from the optimal solutions within a few seconds. © 2007 Wiley Periodicals, Inc. NETWORKS, 2008

59 citations


Journal IssueDOI
01 Oct 2008-Networks
TL;DR: In this article, the authors consider a stochastic network interdiction problem in which the goal is to detect an evader, who selects a maximum-reliability path, i.e., to maximize the detection probability.
Abstract: We consider a stochastic network interdiction problem in which the goal is to detect an evader, who selects a maximum-reliability path. Subject to a resource constraint, the interdictor installs sensors on a subset of the network's arcs to minimize the value of the evader's maximum-reliability path, i.e., to maximize the detection probability. When this decision is made, the evader's origin–destination pair is known to the interdictor only through a probability distribution. Our model is framed as a stochastic mixed-integer program and solved by an enhanced L-shaped decomposition method. Our primary enhancement is via a valid inequality, which we call a step inequality. In earlier work [Morton et al., IIE Trans 39 (2007), 3–14], we developed step inequalities for the special case in which the evader encounters at most one sensor on an origin–destination path. Here, we generalize the step inequality to the case where the evader encounters multiple sensors. In this more general setting, the step inequality is tightly coupled to the decomposition scheme. An efficient separation algorithm identifies violated step inequalities and strengthens the linear programming relaxation of the L-shaped method's master program. We apply this solution procedure with further computational enhancements to a collection of test problems. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008

55 citations


Journal IssueDOI
01 Oct 2008-Networks
TL;DR: The decontamination problem in a hypercube network of size n is considered and it is shown that when the agents have the capability to clone combined with either visibility or synchronicity, the move complexity can be reduced at the expense of an increase in the number of agents.
Abstract: In this article we consider the decontamination problem in a hypercube network of size n. The nodes of the network are assumed to be contaminated and they have to be decontaminated by a sufficient number of agents. An agent is a mobile entity that asynchronously moves along the network links and decontaminates all the nodes it touches. A decontaminated node that is not occupied by an agent is re-contaminated if it has a contaminated neighbor. We consider some variations of the model based on the capabilities of mobile agents: locality, where the agents can only access local information; visibility, where they can “see” the state of their neighbors; and cloning, where they can create copies of themselves. We also consider synchronicity as an alternative system requirement. For each model, we design a decontamination strategy and we make several observations. For agents with locality, our strategy is based on the use of a coordinator that leads the other agents. Our strategy results in an optimal number of agents, Θ(n/√(log n)), and requires O(n log n) moves and O(n log n) time steps. For agents with visibility, we assume that the agents can move autonomously. In this setting, our decontamination strategy achieves an optimal time complexity (log n time steps), but the number of agents increases to n/2. Finally, we show that when the agents have the capability to clone combined with either visibility or synchronicity, we can reduce the move complexity—which becomes optimal—at the expense of an increase in the number of agents. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008 A preliminary version of this paper appeared in IPDPS 2005.

48 citations


Journal IssueDOI
01 Oct 2008-Networks
TL;DR: It is shown that for some networks, including trees, the optimal searcher and hider strategies have a simple structure; in contrast to the standard game, the searcher can choose his starting point.
Abstract: We analyze a zero-sum game between a blind unit-speed searcher and a stationary hider on a given network Q, where the payoff is the time for the searcher to reach the hider. In contrast to the standard game studied in the literature, we do not assume that the searcher has to start from a fixed point (known to the hider) but can choose his starting point. We show that for some networks, including trees, the optimal searcher and hider strategies have a simple structure. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008

Journal IssueDOI
01 Mar 2008-Networks
TL;DR: Some edge-fault-tolerant properties of the folded hypercube, a variant of the regular hypercube that is obtained by adding an edge to every pair of nodes with complementary addresses, are analyzed.
Abstract: In this article, we analyze some edge-fault-tolerant properties of the folded hypercube, a variant of the regular hypercube that is obtained by adding an edge to every pair of nodes with complementary addresses. We show that an n-dimensional folded hypercube is (n - 2)-edge-fault-tolerant Hamiltonian-connected when n(≥ 2) is even, (n - 1)-edge-fault-tolerant strongly Hamiltonian-laceable when n(≥ 1) is odd, and (n - 2)-edge-fault-tolerant hyper Hamiltonian-laceable when n(≥ 3) is odd. © 2007 Wiley Periodicals, Inc. NETWORKS, 2008

Journal IssueDOI
01 Dec 2008-Networks
TL;DR: It is proved that the problem of designing the fastest Black Hole Search is not polynomial-time approximable within any constant factor less than 389/388 (unless P = NP), and a 6-approximation algorithm is given, thus improving on the 9.3-approximation algorithm.
Abstract: A black hole is a highly harmful stationary process residing in a node of a network and destroying all mobile agents visiting the node without leaving any trace. The Black Hole Search is the task of locating all black holes in a network, through the exploration of its nodes by a set of mobile agents. In this article we consider the problem of designing the fastest Black Hole Search, given the map of the network, the starting node and a subset of nodes of the network initially known to be safe. We study the version of this problem that assumes that there is at most one black hole in the network and there are two agents, which move in synchronized steps. We prove that this problem is not polynomial-time approximable within any constant factor less than 389/388 (unless P = NP). We give a 6-approximation algorithm, thus improving on the 9.3-approximation algorithm from (Czyzowicz et al., Fundamenta Informaticae 71 (2006), 229–242). We also prove APX-hardness for a restricted version of the problem, in which only the starting node is initially known to be safe. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008 Part of this work was done while E. Markou, T. Radzik and F. Sarracco were visiting the LaBRI (Laboratoire Bordelais de Recherche en Informatique) in Bordeaux.

Journal IssueDOI
01 Mar 2008-Networks
TL;DR: This article presents a linear integer programming formulation of the habitat fragmentation problem using graph theory concepts and an empirical application of the model to a real data set involving 744 sites and 32 endangered-threatened bird species is presented.
Abstract: Habitat fragmentation is often cited as one of the most important factors that adversely affect species persistence and survival ability. Contiguity of habitat sites is usually desirable when designing a conservation reserve. If a contiguous reserve is not feasible, due to landscape characteristics or economic constraints, designing a reserve network with minimal fragmentation may be a viable strategy. This article presents a linear integer programming formulation of the problem using graph theory concepts. A graph is constructed where nodes correspond to individual sites and directed arcs are defined for pairs of nodes corresponding to adjacent sites. The model determines a minimal representative tree as a subgraph where each node in the tree corresponds to either a selected reserve site or a gap site. Reserve fragmentation is defined as the sum of gap sites, which is to be minimized. An important computational problem is the formation of cycles when determining the minimal representative tree. This problem is resolved using an iterative procedure that utilizes Dantzig-cuts when a cycle occurs in the solution. Arbitrarily generated data sets are used to explore the computational efficiency of this approach. Finally an empirical application of the model to a real data set involving 744 sites and 32 endangered-threatened bird species is presented. © 2007 Wiley Periodicals, Inc. NETWORKS, 2008

Journal IssueDOI
01 Oct 2008-Networks
TL;DR: A model in which service providers own the routes in a network and set prices to maximize their profits, while users choose the amount of flow to send and the routing of the flow according to Wardrop's principle is studied.
Abstract: In this paper, we present a combined study of price competition and traffic control in a congested network. We study a model in which service providers own the routes in a network and set prices to maximize their profits, while users choose the amount of flow to send and the routing of the flow according to Wardrop's principle. When utility functions of users are concave and have concave first derivatives, we characterize a tight bound of 2/3 on efficiency in pure strategy equilibria of the price competition game. We obtain the same bound under the assumption that there is no fixed latency cost, i.e., the latency of a link at zero flow is equal to zero. These bounds are tight even when the numbers of routes and service providers are arbitrarily large. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008

Journal IssueDOI
01 Jul 2008-Networks
TL;DR: The focus has primarily been on dynamic random graph models that attempt to account for the observed statistical properties of web-like networks through certain dynamic processes guided by simple stochastic rules.
Abstract: Various random graph models have recently been proposed to replicate and explain the topology of large, complex, real-life networks such as the World Wide Web and the Internet. These models are surveyed in this article. Our focus has primarily been on dynamic random graph models that attempt to account for the observed statistical properties of web-like networks through certain dynamic processes guided by simple stochastic rules. Particular attention is paid to the equivalence between mathematical definitions of dynamic random graphs in terms of inductively defined probability spaces and algorithmic definitions of such models in terms of recursive procedures. Several techniques that have been employed for studying dynamic random graphs—both heuristic and analytic—are expounded. Each technique is illustrated through its application in analyzing various graph parameters, such as degree distribution, degree-correlation between adjacent nodes, clustering coefficient, distribution of node-pair distances, and connected-component size. A discussion of the most recent salient work and a comprehensive list of references in this rapidly-expanding area are included. © 2007 Wiley Periodicals, Inc. NETWORKS, 2008
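One of the simplest dynamic processes of this kind is preferential attachment: each arriving node links to m existing nodes chosen with probability proportional to their current degree. A minimal sketch of such a recursive procedure; the parameter names and the retry loop that enforces distinct targets are illustrative choices, not a specific model from the survey.

```python
import random

def preferential_attachment(n, m, seed=None):
    """Grow a graph to n nodes; each arriving node attaches to m distinct
    existing nodes picked with probability proportional to current degree
    (sampled via a list that holds each node once per incident edge)."""
    rng = random.Random(seed)
    repeated = []      # node multiset: multiplicity == current degree
    edges = []
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            if repeated:
                chosen.add(rng.choice(repeated))     # degree-proportional pick
            else:
                chosen.add(rng.randrange(new))       # first step: uniform over seed nodes
        for u in chosen:
            edges.append((new, u))
            repeated.extend((new, u))
    return edges
```

Because high-degree nodes are re-sampled more often, this rule reinforces early arrivals, which is the mechanism behind the heavy-tailed degree distributions such models are meant to replicate.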

Journal IssueDOI
01 Dec 2008-Networks
TL;DR: Using new constructions, NP-hardness of these problems is proved for the first time for all real distance-power gradients α > 0 (resp. α > 1 for broadcast) in 2D, and APX-hardness of all three problems is proved in 3D for all α > 1.
Abstract: We investigate the computational hardness of the connectivity, the strong connectivity, and the broadcast type of range assignment problems in ℝ² and ℝ³. We present new reductions for the connectivity problem, which are easily adapted to suit the other two problems. All reductions are considerably simpler than the technically quite involved ones used in earlier works on these problems. Using our constructions, we can for the first time prove NP-hardness of these problems for all real distance-power gradients α > 0 (resp. α > 1 for broadcast) in 2D, and prove APX-hardness of all three problems in 3D for all α > 1. Our reductions yield improved lower bounds on the approximation ratios for all problems where APX-hardness was already known. In particular, we derive the overall first APX-hardness proof for broadcast. This was an open problem posed in earlier work in this area, as was the question whether (strong) connectivity remains NP-hard for α = 1. In addition, we give the first hardness results for so-called well-spread instances. On the positive side, we prove that two natural greedy algorithms are 2-approximations for (strong) connectivity, and show that the factor 2 is tight in ℝ² for α > 1. We also analyze the performance guarantee of the well-known MST-heuristic as a function of the input size. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008
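The MST-heuristic mentioned above can be sketched for the (symmetric) connectivity version: build a Euclidean minimum spanning tree and give each node a range equal to its longest incident tree edge, so every tree edge is covered in both directions. The point format and the Prim-style implementation are assumptions of this sketch, not details from the paper.

```python
import math

def mst_range_assignment(points, alpha=2.0):
    """MST heuristic for symmetric connectivity range assignment: build a
    Euclidean MST with Prim's algorithm, assign each node a range equal to
    its longest incident tree edge, and return the total power
    sum(range ** alpha)."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n       # cheapest connection distance to the tree
    via = [None] * n            # tree endpoint realizing that distance
    best[0] = 0.0
    longest = [0.0] * n         # longest incident MST edge per node
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=best.__getitem__)
        in_tree[u] = True
        if via[u] is not None:  # record the MST edge (via[u], u)
            longest[u] = max(longest[u], best[u])
            longest[via[u]] = max(longest[via[u]], best[u])
        for w in range(n):
            if not in_tree[w]:
                d = math.dist(points[u], points[w])
                if d < best[w]:
                    best[w], via[w] = d, u
    return sum(r ** alpha for r in longest)
```

For three collinear points at unit spacing and α = 2 the heuristic assigns every node range 1, for a total power of 3.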

Journal IssueDOI
01 May 2008-Networks
TL;DR: The computational complexity of the H-CONTRACTIBILITY problem for certain classes of pattern graphs is determined; in all connected cases that are known to be polynomially solvable, the pattern graph H has a dominating vertex.
Abstract: For a fixed pattern graph H, let H-CONTRACTIBILITY denote the problem of deciding whether a given input graph is contractible to H. This paper is part I of our study on the computational complexity of the H-CONTRACTIBILITY problem. We continue a line of research that was started in 1987 by Brouwer and Veldman, and we determine the computational complexity of the H-CONTRACTIBILITY problem for certain classes of pattern graphs. In particular, we pinpoint the complexity for all graphs H with five vertices except for two graphs, whose polynomial time algorithms are presented in part II. Interestingly, in all connected cases that are known to be polynomially solvable, the pattern graph H has a dominating vertex, whereas in all cases that are known to be NP-complete, the pattern graph H does not have a dominating vertex. © 2007 Wiley Periodicals, Inc. NETWORKS, 2008 An earlier version of this paper appeared in the Proceedings of the 29th International Workshop on Graph-Theoretic Concepts in Computer Science (WG 2003).

Journal IssueDOI
01 Aug 2008-Networks
TL;DR: In this paper, the authors presented polynomial-time algorithms for the two remaining pattern graphs with five vertices, and showed that in all connected cases that are known to be polynomially solvable, the pattern graph H has a dominating vertex.
Abstract: For a fixed pattern graph H, let H-CONTRACTIBILITY denote the problem of deciding whether a given input graph is contractible to H. This article is part II of our study on the computational complexity of the H-CONTRACTIBILITY problem. In the first article we pinpointed the complexity for all pattern graphs with five vertices except for two pattern graphs H. Here, we present polynomial time algorithms for these two remaining pattern graphs. Interestingly, in all connected cases that are known to be polynomially solvable, the pattern graph H has a dominating vertex, whereas in all cases that are known to be NP-complete, the pattern graph H does not have a dominating vertex. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008

Journal IssueDOI
01 Mar 2008-Networks
TL;DR: In this article, it was shown that if it is possible to decide if a set of vertices W ⊆ V is a transversal in time S(n) (where n = |V|), then it is possible to find a minimum size transversal in O(n³ S(n)).
Abstract: A hypergraph H = (V,E) is a subtree hypergraph if there is a tree T on V such that each hyperedge of E induces a subtree of T. Since the number of edges of a subtree hypergraph can be exponential in n = |V|, one cannot always expect to be able to find a minimum size transversal in time polynomial in n. In this paper, we show that if it is possible to decide if a set of vertices W ⊆ V is a transversal in time S(n) (where n = |V|), then it is possible to find a minimum size transversal in O(n³ S(n)). This result provides a polynomial algorithm for the Source Location Problem: a set of (k,l)-sources for a digraph D = (V,A) is a subset S of V such that for any v ∈ V there are k arc-disjoint paths that each join a vertex of S to v and l arc-disjoint paths that each join v to S. The Source Location Problem is to find a minimum size set of (k,l)-sources. We show that this is a case of finding a transversal of a subtree hypergraph, and that in this case S(n) is polynomial. © 2007 Wiley Periodicals, Inc. NETWORKS, 2008

Journal IssueDOI
01 Oct 2008-Networks
TL;DR: It is shown that for partly Eulerian networks, a strategy consisting equiprobably of a minimal (Chinese Postman) covering path and its reverse path is optimal for the searcher, while the optimal hider strategy is to assume that the searcher must start at the center of the tree, and to optimize in that (known) game.
Abstract: We analyze the hide-and-seek game Γ(G) on certain networks G. The hider picks a hiding point y in G and the searcher picks a unit speed path S(t) in G, starting at any point S(0). The payoff in this zero-sum game is the capture time T = T(S,y) = min{t: S(t) = y}. Such games have been studied before, but mainly with the simplifying assumption that the searcher's starting point S(0) is specified and known to the hider. We call a network partly Eulerian if it consists of a tree (of length a and radius r) to which a finite number of disjoint Eulerian networks (of total length b) are attached, each at a single point. We show that for such networks, a strategy consisting equiprobably of a minimal (Chinese Postman) covering path and its reverse path is optimal for the searcher, while the optimal hider strategy is to assume that the searcher must start at the center of the tree, and to optimize in that (known) game. The value of the game Γ(G) is a + b/2 - r. This simplifies and extends a similar result of Dagan and Gal for search games on trees. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008

Journal IssueDOI
01 Dec 2008-Networks
TL;DR: In this article, the optimal linear inequalities for the stability number and the number of edges are obtained for the class of connected graphs, and several optimal inequalities are established for three invariants: the maximum degree, the irregularity, and the diameter.
Abstract: Optimality of a linear inequality in finitely many graph invariants is defined through a geometric approach. For a fixed number of graph vertices, consider all the tuples of values taken by the invariants on a selected class of graphs. Then form the polytope which is the convex hull of all these tuples. By definition, the optimal linear inequalities correspond to the facets of this polytope. They are finite in number, are logically independent, and generate precisely all the linear inequalities valid on the class of graphs. The computer system GraPHedron, developed by some of the authors, is able to produce experimental data about such inequalities for a “small” number of vertices. It greatly helps in conjecturing optimal linear inequalities, which are then hopefully proved for any number of vertices. Two examples are investigated here for the class of connected graphs. First, all the optimal linear inequalities for the stability number and the number of edges are obtained. To this aim, a problem of Ore (1962) related to the Turan Theorem (1941) is solved. Second, several optimal inequalities are established for three invariants: the maximum degree, the irregularity, and the diameter. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008

Journal IssueDOI
01 Jan 2008-Networks
TL;DR: In this article, the Minimum Arborescence Problem (MAP), a generalization of the Minimum Spanning Arborescence Problem (MSAP) and the Directed Node Weighted Steiner Tree Problem (DNWSTP), is considered, and a multicommodity flow reformulation for the MAP is proposed.
Abstract: The Minimum Arborescence problem (MAP) consists of finding a minimum cost arborescence in a directed graph. This problem is NP-Hard and is a generalization of two well-known problems: the Minimum Spanning Arborescence Problem (MSAP) and the Directed Node Weighted Steiner Tree Problem (DNWSTP). We start the model presentation in this paper by describing four models for the MSAP (including two new ones, using so called “connectivity” constraints which forbid disconnected components) and we then describe the changes induced on the polyhedral structure of the problem by the removal of the spanning property. Only two (the two new ones) of the four models for the MSAP remain valid when the spanning property is removed. We also describe a multicommodity flow reformulation for the MAP that differs from well-known multicommodity flow reformulations in the sense that the flow conservation constraints at source and destination are replaced by inequalities. We show that the linear programming relaxation of this formulation is equivalent to the linear programming relaxation of the best of the two previous valid formulations and we also propose two Lagrangean relaxations based on the multicommodity flow reformulation. From the upper bound perspective, we describe a constructive heuristic as well as a local search procedure involving the concept of key path developed earlier for the Steiner Tree Problem. Numerical experiments taken from instances with up to 400 nodes are used to evaluate the proposed methods. © 2007 Wiley Periodicals, Inc. NETWORKS, 2008

Journal IssueDOI
01 Sep 2008-Networks
TL;DR: In this article, bounds are given for the super connectivity of the Cartesian product of two connected graphs, generalizing a result on the super connectedness of the Cartesian product of two regular graphs with maximum connectivity, and it is determined that κ1(Km × Kn) = min{m + 2n - 4, 2m + n - 4} for m + n ≥ 6.
Abstract: The super connectivity κ1 of a connected graph G is the minimum number of vertices whose deletion results in a disconnected graph without isolated vertices; this is a more refined index than the connectivity parameter κ. This article provides bounds for the super connectivity κ1 of the Cartesian product of two connected graphs, and thus generalizes the main result of Shieh on the super connectedness of the Cartesian product of two regular graphs with maximum connectivity. Particularly, we determine that κ1(Km × Kn) = min{m + 2n - 4, 2m + n - 4} for m + n ≥ 6 and state sufficient conditions to guarantee κ1(K2 × G) = 2κ(G). As a consequence, we immediately obtain the super connectivity of the n-cube for n ≥ 3. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008
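The stated formula gives κ1(K3 × K3) = min{3 + 6 - 4, 6 + 3 - 4} = 5, which is small enough to verify by exhaustive search (a brute-force sketch of my own, feasible only for tiny graphs):

```python
from itertools import combinations, product

def components(vertices, adj):
    """Connected components of the subgraph induced by `vertices`."""
    left, comps = set(vertices), []
    while left:
        start = left.pop()
        comp, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w in left:
                    left.discard(w); comp.add(w); stack.append(w)
        comps.append(comp)
    return comps

def super_connectivity(vertices, adj):
    """Smallest vertex cut whose removal leaves a disconnected graph
    with no isolated vertices (None if no such cut exists)."""
    for k in range(1, len(vertices) - 1):
        for cut in combinations(vertices, k):
            rest = [v for v in vertices if v not in cut]
            comps = components(rest, adj)
            if len(comps) > 1 and all(len(c) > 1 for c in comps):
                return k
    return None

# Cartesian product K3 x K3: vertices (i, j), adjacent iff they share a coordinate
verts = list(product(range(3), repeat=2))
adj = {v: [w for w in verts if w != v and (w[0] == v[0] or w[1] == v[1])]
       for v in verts}
```

Note that the ordinary connectivity κ(K3 × K3) is 4 (deleting a vertex's neighborhood isolates it), so κ1 = 5 really is the finer index the abstract describes.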

Journal IssueDOI
01 Sep 2008-Networks
TL;DR: This paper presents cut-based models for the problem of minimizing the total energy required by multicasting, and proves the equivalence in strength between these models and their flow-based counterparts.
Abstract: A multicast session in a wireless ad hoc network concerns routing messages from a source to a set of destination devices. Transmitting messages consumes energy at the source and intermediate devices of the session. Since a battery is the only energy source in many applications of wireless ad hoc networks, energy efficiency is an important performance measure of multicasting. In this paper, we present and analyze integer programming models for the problem of minimizing the total energy required by multicasting. We start from a straightforward multicommodity flow model, which is strengthened by a more efficient representation of transmission power. Further strengthening is accomplished by lifting the capacity constraints of the model. We then present cut-based models for the problem, and prove, from a bounding standpoint, the equivalence in strength between these models and their flow-based counterparts. By expanding the underlying graph, we show that the problem can be transformed into finding a minimum Steiner arborescence. The expanded graph arises also in the separation procedure for solving one of the cut-based models. In addition to a theoretical analysis of the relation between various models, we perform extensive computational experiments to study the numerical strengths of these models and their efficiency in solving the problem. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008
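In the usual energy model behind such formulations (a common Euclidean assumption, not taken from this paper), a node transmitting at power p reaches every node within the corresponding range, so only finitely many power levels per node matter: the costs of reaching each other node. A brute-force baseline over these levels, for tiny instances only:

```python
from itertools import product

def min_multicast_energy(pos, source, dests, alpha=2):
    """Minimum total transmission power so that every destination is
    reachable from `source`, assuming power dist(i, j)**alpha is needed
    for node i to reach node j. pos: list of coordinate tuples."""
    n = len(pos)
    def power_needed(i, j):
        return sum((a - b) ** 2 for a, b in zip(pos[i], pos[j])) ** (alpha / 2)
    # useful power levels for node i: 0 or exactly enough to reach some node
    levels = [sorted({0.0} | {power_needed(i, j) for j in range(n) if j != i})
              for i in range(n)]
    best = None
    for powers in product(*levels):
        # arc i -> j exists when node i's power suffices to reach j
        seen, stack = {source}, [source]
        while stack:
            i = stack.pop()
            for j in range(n):
                if j not in seen and j != i and powers[i] >= power_needed(i, j) > 0:
                    seen.add(j); stack.append(j)
        if all(t in seen for t in dests):
            total = sum(powers)
            if best is None or total < best:
                best = total
    return best
```

The "wireless multicast advantage" is visible here: one power level at a node covers all nodes in range simultaneously, which is exactly why the paper's strengthened models represent transmission power rather than per-arc flow costs.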

Journal IssueDOI
01 May 2008-Networks
TL;DR: It is shown that both the w-Rabin number and the strong w-Rabin number of a k-dimensional folded hypercube are equal to ⌈k/2⌉ for 1 ≤ w ≤ ⌈k/2⌉ - 1, and ⌈k/2⌉ + 1 for ⌈k/2⌉ ≤ w ≤ k + 1, where each path obtained is shortest or second shortest.
Abstract: The w-Rabin number of a network W is the minimum l so that for any w + 1 distinct nodes s, d1,d2,…,dw of W, there exist w node-disjoint paths from s to d1,d2,…,dw, respectively, whose maximal length is not greater than l, where w is not greater than the node connectivity of W. If {d1,d2,…,dw} is allowed to be a multiset, then the resulting minimum l is called the strong w-Rabin number of W. In this article, we show that both the w-Rabin number and the strong w-Rabin number of a k-dimensional folded hypercube are equal to ⌈k/2⌉ for 1 ≤ w ≤ ⌈k/2⌉ - 1, and ⌈k/2⌉ + 1 for ⌈k/2⌉ ≤ w ≤ k + 1, where k ≥ 5. Each path obtained is shortest or second shortest. The results of this paper also solve an open problem raised by Liaw and Chang. © 2007 Wiley Periodicals, Inc. NETWORKS, 2008
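For context: the folded hypercube FQk is the k-cube with an extra edge joining each vertex to its bitwise complement, and its diameter is ⌈k/2⌉, the natural baseline for any Rabin-number-type length bound. A quick BFS check for small k (standard definition of FQk assumed):

```python
from collections import deque

def folded_hypercube_diameter(k):
    """Diameter of FQ_k: vertices are k-bit strings, edges flip one bit
    or complement all bits. BFS from 0 suffices by vertex-transitivity."""
    n = 1 << k
    mask = n - 1
    def neighbors(v):
        return [v ^ (1 << i) for i in range(k)] + [v ^ mask]
    dist = {0: 0}
    q = deque([0])
    while q:
        u = q.popleft()
        for w in neighbors(u):
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return max(dist.values())
```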

Journal IssueDOI
01 Jan 2008-Networks
TL;DR: The tree partitioning problem can be solved in polynomial time either by linear programming or by suitable convex nondifferentiable optimization algorithms, and a dynamic programming algorithm is developed which solves the problem on trees in O(np) time.
Abstract: This paper deals with the following graph partitioning problem. Consider a connected graph with n nodes, p of which are centers, while the remaining ones are units. For each unit-center pair there is a fixed service cost and the goal is to find a partition into connected components such that each component contains only one center and the total service cost is minimum. This problem is known to be NP-hard on general graphs, and here we show that it remains such even if the service cost is monotone and the graph is bipartite. However, in this paper we derive some polynomial time algorithms for trees. For this class of graphs we provide several reformulations of the problem as integer linear programs proving the integrality of the corresponding polyhedra. As a consequence, the tree partitioning problem can be solved in polynomial time either by linear programming or by suitable convex nondifferentiable optimization algorithms. Moreover, we develop a dynamic programming algorithm, whose recursion is based on sequences of minimum weight closure problems, which solves the problem on trees in O(np) time. © 2007 Wiley Periodicals, Inc. NETWORKS, 2008
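On tiny trees the problem can be solved by exhaustively assigning each unit to a center and keeping only labelings in which every center's class induces a connected subgraph containing that center. A brute-force sketch of my own (not the paper's integer programming models or its O(np) dynamic program):

```python
from itertools import product

def _class_connected(adj, label, c):
    """Is center c's class a connected subgraph containing c?"""
    cls = {v for v, l in label.items() if l == c}
    seen, stack = {c}, [c]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w in cls and w not in seen:
                seen.add(w); stack.append(w)
    return seen == cls

def min_cost_partition(adj, centers, cost):
    """Brute-force connected partition. adj: node -> neighbor list;
    cost: dict (unit, center) -> service cost. Returns the minimum
    total cost, or None if no feasible partition exists."""
    units = [v for v in adj if v not in centers]
    best = None
    for choice in product(centers, repeat=len(units)):
        label = dict(zip(units, choice))
        label.update({c: c for c in centers})
        if all(_class_connected(adj, label, c) for c in centers):
            total = sum(cost[(u, label[u])] for u in units)
            if best is None or total < best:
                best = total
    return best
```

The connectivity requirement is what makes assignments interact: on the path 0-1-2-3 with centers 0 and 3, assigning node 2 to center 0 forces node 1 to center 0 as well.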

Journal IssueDOI
01 Aug 2008-Networks
TL;DR: A class of infinite network-flow problems whose flow balance constraints are inequalities is studied and it is shown that the simplex method can be implemented in such a way that each pivot takes only a finite amount of time.
Abstract: We study minimum-cost network-flow problems in networks with a countably infinite number of nodes and arcs and integral flow data. This problem class contains many nonstationary planning problems over time where no natural finite planning horizon exists. We use an intuitive natural dual problem and show that weak and strong duality hold. Using recent results regarding the structure of basic solutions to infinite-dimensional network-flow problems we extend the well-known finite-dimensional network simplex method to the infinite-dimensional case. In addition, we study a class of infinite network-flow problems whose flow balance constraints are inequalities and show that the simplex method can be implemented in such a way that each pivot takes only a finite amount of time. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008

Journal IssueDOI
01 Dec 2008-Networks
TL;DR: A polynomial algorithm for finding valid cycles, which indicates which parts of the routing patterns are in conflict and can be used for changing the routingpatterns to make the problem feasible, is presented.
Abstract: Many telecommunication networks use the open shortest path first (OSPF) protocol for the routing of traffic. In such networks, each router sends the traffic on the shortest paths to the destination, with respect to the link weights assigned. An interesting question is whether or not a set of desired routing patterns can be obtained in an OSPF network by assigning appropriate weights. If not, we wish to find the source of the infeasibility. We study these issues by formulating a mathematical model and investigating its feasibility. A certain structure, called a valid cycle, is found to be present in most infeasible instances. This yields new necessary conditions, stronger than those previously known, for the existence of weights yielding a set of given desired shortest path graphs. A valid cycle indicates which parts of the routing patterns are in conflict and can be used for changing the routing patterns to make the problem feasible. A polynomial algorithm for finding valid cycles is presented, the method is illustrated by a numerical example, and computational tests are reported. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008
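One classical necessary condition, weaker than the valid-cycle conditions studied in the paper, is subpath consistency: if shortest paths are required to be unique, every suffix of a desired path must itself be the desired path from its start node to the destination. A small sketch of that check (my own illustration, not the paper's algorithm):

```python
def subpath_consistent(desired):
    """Necessary condition for realizability by unique shortest paths.
    desired: dict (source, dest) -> list of nodes from source to dest.
    Returns False if some suffix of a desired path contradicts the
    desired path specified for an intermediate node."""
    for (s, t), path in desired.items():
        for i in range(1, len(path) - 1):
            mid = path[i]
            if (mid, t) in desired and desired[(mid, t)] != path[i:]:
                return False
    return True
```

Weights that make [s, a, b, t] the unique shortest s-t path force [a, b, t] to be the unique shortest a-t path, so demanding a different a-t route is immediately infeasible; the paper's valid cycles capture subtler conflicts that this check misses.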

Journal IssueDOI
01 Dec 2008-Networks
TL;DR: Optimal two-period groomings are found for small grooming ratios using techniques from graph and design theory, based on graph decompositions of Kn that embed graph decompositions of Kv for v ≤ n.
Abstract: Minimizing the number of add-drop multiplexers (ADMs) in a unidirectional SONET ring can be formulated as a graph decomposition problem. When traffic requirements are uniform and all-to-all, groomings that minimize the number of ADMs (equivalently, the drop cost) have been characterized for grooming ratio at most six. However, when two different traffic requirements are supported, these solutions do not ensure optimality. In two-period optical networks, n vertices are required to support a grooming ratio of Ca in the first time period, while in the second time period a grooming ratio of Cb, Cb