scispace - formally typeset

Showing papers on "Longest path problem published in 2008"


Proceedings ArticleDOI
25 Mar 2008
TL;DR: This paper proposes a novel algorithm to find the minimum-travel-time path with the best departure time for an LTT(vs, ve, T) query over a large graph; the approach outperforms existing algorithms in both theoretical time complexity and practical efficiency.
Abstract: The spatial and temporal databases have been studied widely and intensively over years. In this paper, we study how to answer queries of finding the best departure time that minimizes the total travel time from a place to another, over a road network, where the traffic conditions dynamically change from time to time. We study a generalized form of this problem, called the time-dependent shortest-path problem. A time-dependent graph GT is a graph that has an edge-delay function, wi, j(t), associated with each edge (vi, vj), to be stored in a database. The edge-delay function wi, j(t) specifies how much time it takes to travel from node vi to node vj, if it departs from vi at time t. A user-specified query is to ask the minimum-travel-time path, from a source node, vs, to a destination node, ve, over the time-dependent graph, GT, with the best departure time to be selected from a time interval T. We denote this user query as LTT(vs, ve, T) over GT. The challenge of this problem is the added complexity due to the time dependency in the time-dependent graph. That is, edge delays are not constants, and can vary from time to time. In this paper, we propose a novel algorithm to find the minimum-travel-time path with the best departure time for a LTT(vs, ve, T) query over a large graph GT. Our approach outperforms existing algorithms in terms of both time complexity in theory and efficiency in practice. We will discuss the design of our algorithm, together with its correctness and complexity. We conducted extensive experimental studies over large graphs and will report our findings.
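The paper's own algorithm is not reproduced in the abstract, but the query it answers can be illustrated with a textbook baseline: a time-dependent Dijkstra for one fixed departure time, plus a naive scan over candidate departure times. The graph encoding, function names, and the FIFO assumption below are illustrative, not taken from the paper.

```python
import heapq

def td_dijkstra(graph, src, dst, t0):
    """Earliest-arrival search for a fixed departure time t0.
    graph[u] maps a neighbour v to an edge-delay function w(t): the
    travel time of (u, v) when departing u at time t. Assumes the FIFO
    property (departing later never yields an earlier arrival)."""
    arrival = {src: t0}
    pq = [(t0, src)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == dst:
            return t - t0                      # total travel time
        if t > arrival.get(u, float("inf")):
            continue                           # stale queue entry
        for v, w in graph[u].items():
            t_arr = t + w(t)                   # delay evaluated at departure time
            if t_arr < arrival.get(v, float("inf")):
                arrival[v] = t_arr
                heapq.heappush(pq, (t_arr, v))
    return None

def best_departure(graph, src, dst, times):
    """Naive LTT(src, dst, T) baseline: try each candidate departure time."""
    return min(times, key=lambda t: td_dijkstra(graph, src, dst, t))
```

The whole point of the paper is to beat this per-departure-time scan, but the sketch makes the semantics of the query concrete.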

252 citations


Book ChapterDOI
07 Jul 2008
TL;DR: It is shown that there are adversary strategies which force the expected cover time of a simple random walk on connected dynamic graphs to be exponential, and a simple strategy is provided, the lazy random walk, that guarantees polynomial cover time regardless of the changes made by the adversary.
Abstract: Motivated by real-world networks and the use of algorithms based on random walks on these networks, we study simple random walks on dynamic undirected graphs with a fixed underlying vertex set, i.e., graphs which are modified by inserting or deleting edges at every step of the walk. We are interested in the expected time needed to visit all the vertices of such a dynamic graph, the cover time, under the assumption that the graph is being modified by an oblivious adversary. It is well known that on connected static undirected graphs the cover time is polynomial in the size of the graph. On the contrary, and somewhat counter-intuitively, we show that there are adversary strategies which force the expected cover time of a simple random walk on connected dynamic graphs to be exponential. We relate this result to the cover time of static directed graphs. In addition we provide a simple strategy, the lazy random walk, that guarantees polynomial cover time regardless of the changes made by the adversary.
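The lazy walk itself is simple to state: at each step, stay put with probability 1/2, otherwise move to a uniformly random neighbour. A minimal simulation sketch (on a static graph, for illustration only; the paper's setting has an adversary rewiring edges at every step):

```python
import random

def lazy_cover_time(adj, start, rng=None):
    """Run a lazy random walk (stay with probability 1/2, else move to
    a uniformly random neighbour) on the connected graph adj until
    every vertex has been visited; return the number of steps taken."""
    rng = rng or random.Random(0)              # seeded for reproducibility
    visited = {start}
    u, steps = start, 0
    while len(visited) < len(adj):
        steps += 1
        if rng.random() < 0.5:
            continue                           # lazy step: stay put
        u = rng.choice(adj[u])                 # move to a random neighbour
        visited.add(u)
    return steps
```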

228 citations


Book ChapterDOI
30 May 2008
TL;DR: A new prototype, based on a multi-criteria generalization of Dijkstra's algorithm, is presented for finding all Pareto-optimal solutions in a multi-criteria setting of the shortest path problem in time-dependent graphs.
Abstract: We study the problem of finding all Pareto-optimal solutions in a multi-criteria setting of the shortest path problem in time-dependent graphs. This has important applications in timetable information systems for train schedules. We present a new prototype to solve this problem in a fully realistic scenario based on a multi-criteria generalization of Dijkstra's algorithm. As optimization criteria we use travel time and number of train changes, as well as a new criterion "reliability of transfers". The performance of the prototype and various speed-up techniques are analyzed experimentally on a large set of real test instances. In comparison with a base-line implementation, our prototype achieves significant speed-up factors of 20 with respect to the number of label creations and of 138 with respect to label insertions into the priority queue. We also compare our prototype with a time-expanded graph model.
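The core of such a prototype is a label-setting search that keeps, at every node, only cost vectors not dominated component-wise by another. A stripped-down sketch with two criteria (the paper's realistic criteria, time dependency, and speed-up techniques are omitted; the names and tuple encoding are illustrative):

```python
import heapq

def pareto_dijkstra(graph, src, dst):
    """Multi-criteria label-setting search with 2-component edge
    weights, e.g. (travel_time, num_changes). Each node keeps only
    labels not dominated component-wise by another label; the result
    is the sorted set of Pareto-optimal cost vectors at dst."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b

    labels = {v: [] for v in graph}            # Pareto set per node
    start = (0, 0)                             # two criteria assumed
    labels[src].append(start)
    pq = [(start, src)]
    while pq:
        lab, u = heapq.heappop(pq)
        if lab not in labels[u]:
            continue                           # label was pruned meanwhile
        for v, w in graph[u].items():
            new = tuple(a + b for a, b in zip(lab, w))
            if any(old == new or dominates(old, new) for old in labels[v]):
                continue                       # dominated: discard
            labels[v] = [old for old in labels[v] if not dominates(new, old)]
            labels[v].append(new)
            heapq.heappush(pq, (new, v))
    return sorted(labels[dst])
```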

120 citations


Journal ArticleDOI
TL;DR: The most critical path and the relative path degree of criticality are defined, which are theoretically sound and easy to use in practice.

94 citations


Journal ArticleDOI
21 May 2008
TL;DR: This paper describes hybrid ant colony algorithms (HACAs) proposed for path planning in sparse graphs and demonstrates the excellent convergence property and robustness of HACAs in uncovering low risk and Hamiltonian visitation paths.
Abstract: The general problem of path planning can be modeled as a traveling salesman problem, which assumes that a graph is fully connected. Such a scenario of full connectivity is, however, not always realistic. One motivating example for us is the application of path planning for unmanned reconnaissance aerial vehicles (URAVs). URAVs are widely deployed for photography or imagery gathering missions over sites of interest. These sites can be targets in a combat zone to be investigated or sites inaccessible by ground transportation, such as those hit by forest fires, earthquakes or other forms of natural disasters. The navigation environment is one where the overall configuration of the problem is a sparse graph. Unlike graphs that are fully connected, sparse graphs are not always Hamiltonian. In this paper, we describe hybrid ant colony algorithms (HACAs) proposed for path planning in sparse graphs, since existing ant colony solvers designed for the TSP do not apply to the present context directly. HACAs are ant-inspired algorithms that incorporate a local search procedure and some heuristic techniques for uncovering feasible routes or paths in a sparse graph within tractable time. Empirical results on a set of generated sparse graphs demonstrate the excellent convergence property and robustness of HACAs in uncovering low-risk and Hamiltonian visitation paths. Further, the obtained results also indicate that HACAs converge to secondary closed paths in situations where a Hamiltonian cycle does not exist theoretically or is not attainable within the bounded computational time window.

74 citations


Proceedings ArticleDOI
15 Dec 2008
TL;DR: It is proved that the generator will produce graphs which obey many patterns and laws observed to date, and an intuitive and easy way to construct weighted, time-evolving graphs is proposed.
Abstract: How do real, weighted graphs change over time? What patterns, if any, do they obey? Earlier studies focus on unweighted graphs, and, with few exceptions, they focus on static snapshots. Here, we report patterns we discover on several real, weighted, time-evolving graphs. The reported patterns can help in detecting anomalies in natural graphs, in making link prediction and in providing more criteria for evaluation of synthetic graph generators. We further propose an intuitive and easy way to construct weighted, time-evolving graphs. In fact, we prove that our generator will produce graphs which obey many patterns and laws observed to date. We also provide empirical evidence to support our claims.

60 citations


Journal ArticleDOI
TL;DR: This paper proposes a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph and shows that the proposed device can solve small and medium instances of the problem in reasonable time.
Abstract: In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time.

60 citations


Journal ArticleDOI
TL;DR: This work gives an efficient algorithm for finding a maximum collection of vertex-disjoint A-paths each of non-zero weight; when A = V, this is equivalent to the maximum matching problem.
Abstract: Let G = (V, E) be an oriented graph whose edges are labelled by the elements of a group Γ and let A ⊆ V. An A-path is a path whose ends are both in A. The weight of a path P in G is the sum of the group values on forward oriented arcs minus the sum of the values on backward oriented arcs in P. (If Γ is not abelian, we sum the labels in their order along the path.) We give an efficient algorithm for finding a maximum collection of vertex-disjoint A-paths each of non-zero weight. When A = V this problem is equivalent to the maximum matching problem.

48 citations


Book ChapterDOI
26 May 2008
TL;DR: The main contribution of this paper is a new polynomial time algorithm for the mwss in claw-free graphs; a rough analysis of its complexity gives a time bound of O(n^6), where n is the number of vertices in the graph, which the authors hope can be improved by a finer analysis.
Abstract: In this paper, we introduce two powerful graph reductions for the maximum weighted stable set (mwss) problem in general graphs. We show that these reductions allow us to reduce the mwss in claw-free graphs to the mwss in a class of quasi-line graphs that we call bipolar-free. For this latter class, we provide a new algorithmic decomposition theorem running in polynomial time. We then exploit this decomposition result and our reduction tools again to transform the problem into either a single matching problem or a longest path computation in an acyclic auxiliary graph (in this latter part we use some results of Pulleyblank and Shepherd [10]). Putting all the pieces together, the main contribution of this paper is a new polynomial time algorithm for the mwss in claw-free graphs. A rough analysis of the complexity of this algorithm gives a time bound of O(n^6), where n is the number of vertices in the graph, which we hope can be improved by a finer analysis. Incidentally, we prove that the mwss problem can be solved efficiently for any class of graphs that admits a "suitable" decomposition into pieces where the mwss is easy.

45 citations


Journal ArticleDOI
TL;DR: It is proved that deciding whether there exist k pairwise vertex/edge disjoint properly edge-colored s-t paths/trails in a c-edge-colored graph G^c is NP-complete even for k = 2 and c = Ω(n²), where n denotes the number of vertices in G^c.

45 citations


Book ChapterDOI
23 Jun 2008
TL;DR: In this article, the authors consider a generalization of the shortest-path problem, called the L-constrained shortest path problem, where the concatenated labels along the shortest path form a word of a regular language.
Abstract: We consider a generalization of the shortest-path problem: given an alphabet Σ, a graph G whose edges are weighted and Σ-labeled, and a regular language L ⊆ Σ*, the L-constrained shortest-path problem consists of finding a shortest path p in G such that the concatenated labels along p form a word of L. This definition allows one to model, e.g., many traffic-planning problems. We present extensions of well-known speed-up techniques for the standard shortest-path problem, and conduct an extensive experimental study of their performance with various networks and language constraints. Our results show that depending on the network type, both goal-directed and bidirectional search speed up the search considerably, while combinations of these do not.
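A standard way to realize this, which the speed-up techniques above build on, is to run Dijkstra on the product of the graph with a DFA for L. A minimal sketch, with an illustrative edge and DFA encoding:

```python
import heapq

def constrained_shortest_path(graph, dfa, src, dst):
    """Dijkstra over (vertex, DFA-state) pairs. graph[u] is a list of
    (v, label, weight) edges; dfa is (start_state, accepting_states,
    transition) with transition(state, label) -> next state or None.
    Returns the length of a shortest s-t path whose edge labels spell
    a word of the regular language, or None if no such path exists."""
    q0, accept, delta = dfa
    dist = {(src, q0): 0}
    pq = [(0, src, q0)]
    while pq:
        d, u, q = heapq.heappop(pq)
        if u == dst and q in accept:
            return d
        if d > dist.get((u, q), float("inf")):
            continue                           # stale queue entry
        for v, label, w in graph[u]:
            q2 = delta(q, label)
            if q2 is None:
                continue                       # word would leave the language
            if d + w < dist.get((v, q2), float("inf")):
                dist[(v, q2)] = d + w
                heapq.heappush(pq, (d + w, v, q2))
    return None
```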

Journal ArticleDOI
TL;DR: A method to locate possible segments that cause extra delays on circuit paths: delay bounds of tested paths are used to build linear constraints, and guiding the solutions returned by a linear programming solver identifies faulty segments, which greatly increases the efficiency of the diagnosis process.
Abstract: Diagnosis tools can be used to speed up the process of finding the root causes of functional or performance problems in a VLSI circuit. In this paper, we propose a method to locate possible segments that cause extra delays on circuit paths. We use the delay bounds of the tested paths to build linear constraints. By guiding the solutions of the linear constraints produced by a linear programming solver, we can identify segments with extra delays. Also, with the ranks of segment delays, we can prioritize the search for possible locations of failed segments. We further propose to reduce the search space by identifying indistinguishable segments: segments in the same category cannot be separated no matter which of them have faults. This approach greatly increases the efficiency of the diagnosis process. Three main features of the proposed method are that: 1) it does not assume any delay fault model; 2) it derives diagnosis results directly from test data; and 3) it is able to diagnose failures caused by multiple delay defects. These features make the proposed method more realistic for solving real problems occurring in the manufacturing process. In the experimental results, for most cases of injecting 5% of the longest path delay, the probabilities are over 90% for locating faulty segments within the list of top-ten suspects, and the average rankings, often referred to as the first hit rank (FHR) and defined as the rank of the first hit of the defect in the ranking list, are among the top five suspect locations for single fault injection. In the experiments with multiple fault injection, the average FHRs are also lower than 5 for all cases of injecting 1% of the longest path delay.

Book ChapterDOI
Hedi Ayed, Djamel Khadraoui, Zineb Habbas, Pascal Bouvry, Jean François Merche
08 Sep 2008
TL;DR: Following the strategy, a new graph structure to abstract multimodal networks is introduced; this step is seen as the implementation of the algorithm, so the authors can get an idea of its performance.
Abstract: Route guidance solutions used to be applied to a single transportation mode. The new trend today is to find route guidance approaches able to propose routes which may involve multiple transportation modes; such route guidance solutions are said to be multimodal. This document presents our contribution to the multimodal route guidance problem. Following our strategy, we introduce a new graph structure to abstract multimodal networks, called a transfer graph. A transfer graph is described by a set of (sub)graphs called components, which are connected via transfer points. By transfer point we mean any node common to two distinct components of a transfer graph, so a transfer graph is distinct from a partitioned graph. An example of a transfer graph is a multimodal network in which the participating unimodal networks are not merged but kept separate. Since a multimodal network is reducible to a transfer graph, the transfer-graph-based approach can be used for multimodal route guidance. Finally, we integrate our approach into the shortest path service of the Carlink project. This step serves as the implementation of our algorithm, so we can get an idea of its performance.

Proceedings Article
20 Jan 2008
TL;DR: In this paper, the authors present a near-linear time algorithm for computing replacement paths in weighted planar directed graphs, in which the lengths of the replacement paths are computed in O(n log³ n) time.
Abstract: Let G = (V(G), E(G)) be a weighted directed graph and let P be a shortest path from s to t in G. In the replacement paths problem we are required to compute, for every edge e in P, the length of a shortest path from s to t that avoids e. The fastest known algorithm for solving the problem in weighted directed graphs is the trivial one: each edge in P is removed from the graph in its turn and the distance from s to t in the modified graph is computed. The running time of this algorithm is O(mn + n² log n), where n = |V(G)| and m = |E(G)|. The replacement paths problem is strongly motivated by two different applications. First, the fastest algorithm to compute the k simple shortest paths from s to t in directed graphs [21, 13] repeatedly computes the replacement paths from s to t. Its running time is O(kn(m + n log n)). Second, the computation of Vickrey pricing of edges in distributed networks can be reduced to the replacement paths problem. An open question raised by Nisan and Ronen [16] asks whether it is possible to compute the Vickrey pricing faster than the trivial algorithm described in the previous paragraph. In this paper we present a near-linear time algorithm for computing replacement paths in weighted planar directed graphs. In particular, the algorithm computes the lengths of the replacement paths in O(n log³ n) time. This result immediately improves the running time of the two applications mentioned above by almost a linear factor. Our algorithm is obtained by combining several new ideas with a data structure of Klein [12] that supports multi-source shortest paths queries in planar directed graphs in logarithmic time. Our algorithm can be adapted to address the variant of the problem in which one is interested in the replacement path itself (rather than the length of the path).
In that case the algorithm is executed in a preprocessing stage constructing a data structure that supports replacement path queries in time O(h), where h is the number of hops in the replacement path. In addition, we can handle the variant in which vertices should be avoided instead of edges.
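The trivial baseline described above, removing each edge of P in turn and recomputing, is easy to state concretely (the adjacency-dict encoding and function names are illustrative; the paper's near-linear algorithm is far more involved):

```python
import heapq

def dijkstra(adj, src, dst, banned=frozenset()):
    """Standard Dijkstra; `banned` is a set of directed edges (u, v)
    that may not be used. Returns inf if dst is unreachable."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue                           # stale queue entry
        for v, w in adj[u].items():
            if (u, v) in banned:
                continue                       # avoid the removed edge
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float("inf")

def replacement_paths_trivial(adj, path):
    """For each edge e on the given shortest path, compute the length
    of a shortest s-t path avoiding e: the O(mn + n^2 log n) baseline
    that the paper improves on for planar graphs."""
    s, t = path[0], path[-1]
    return [dijkstra(adj, s, t, banned={e}) for e in zip(path, path[1:])]
```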

Proceedings ArticleDOI
10 Nov 2008
TL;DR: This paper presents a new, game-theoretic approach to estimating WCET based on performing directed measurements on the target platform, and proves that the algorithm can converge to find the longest path with high probability.
Abstract: Estimating the worst-case execution time (WCET) of tasks is a key step in the design of reliable real-time software and systems. In this paper, we present a new, game-theoretic approach to estimating WCET based on performing directed measurements on the target platform. We model the estimation problem as a game between our algorithm (player) and the environment of the program (adversary), where the player seeks to find the longest path through the program while the adversary sets environment parameters to thwart the player. We present both theoretical and experimental results demonstrating the utility of our approach. On the theoretical side, we prove that our algorithm can converge to find the longest path with high probability. Experimental results indicate that our approach is competitive with an existing technique based on static analysis and integer programming. Moreover, the approach can be easily applied to even complex hardware/software platforms.

Journal ArticleDOI
TL;DR: This work studies different classes of digraphs that generalize tournaments and possess a maximal independent set intersecting every non-augmentable path (in particular, every longest path), and presents results on strongly internally and finally non-augmentable paths.

01 Jan 2008
TL;DR: The first truly subcubic algorithm for finding a maximum weight triangle in a node-weighted graph is obtained, breaking the cubic barrier; a nonalgebraic, combinatorial approach, considered more efficient in practice than methods based on fast matrix multiplication, is also developed.
Abstract: Problems related to computing optimal paths have been abundant in computer science since its emergence as a field. Yet for a large number of such problems we still do not know whether the state-of-the-art algorithms are the best possible. A notable example of this phenomenon is the all pairs shortest paths problem in a directed graph with real edge weights. The best algorithm (modulo small polylogarithmic improvements) for this problem runs in cubic time, a running time known since the 1960s (by Floyd and Warshall). Our grasp of many such fundamental algorithmic questions is far from optimal, and the major goal of this thesis is to bring some new insights into efficiently solving path problems in graphs. We focus on several path problems optimizing different measures: shortest paths, maximum bottleneck paths, minimum nondecreasing paths, and various extensions. For the all-pairs versions of these path problems we use an algebraic approach. We obtain improved algorithms using reductions to fast matrix multiplication. For maximum bottleneck paths and minimum nondecreasing paths we are the first to break the cubic barrier, obtaining truly subcubic strongly polynomial algorithms. We also consider a nonalgebraic, combinatorial approach, which is considered more efficient in practice compared to methods based on fast matrix multiplication. We present a combinatorial data structure that maintains a matrix so that products with given sparse vectors can be computed efficiently. This allows us to obtain good running times for path problems in unweighted sparse graphs. This thesis also gives algorithms for some single source path problems. We obtain the first linear time algorithm for the single source minimum nondecreasing paths problem. We give some extensions to this, including an algorithm to find cheapest minimum nondecreasing paths. Besides finding optimal paths, we consider the related problem of finding optimal cycles. 
In particular, we focus on the problem of finding in a weighted graph a triangle of maximum weight sum. We obtain the first truly subcubic algorithm for finding a maximum weight triangle in a node-weighted graph. We also present algorithms for the edge-weighted case. These algorithms immediately imply good algorithms for finding maximum weight k-cliques, or arbitrary maximum weight pattern subgraphs of fixed size.
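For concreteness, the cubic baseline that the node-weighted triangle result improves on can be sketched as a scan over all vertex triples (encoding illustrative; the thesis's truly subcubic algorithm relies on fast matrix multiplication instead):

```python
from itertools import combinations

def max_weight_triangle(adj, weight):
    """Cubic reference solution for the node-weighted maximum-weight
    triangle: check every vertex triple and keep the heaviest one that
    forms a triangle. adj maps each vertex to its neighbour set;
    weight maps each vertex to its (real) node weight."""
    best, best_tri = None, None
    for u, v, w in combinations(adj, 3):
        if v in adj[u] and w in adj[u] and w in adj[v]:
            total = weight[u] + weight[v] + weight[w]
            if best is None or total > best:
                best, best_tri = total, (u, v, w)
    return best, best_tri
```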

Journal ArticleDOI
TL;DR: A hierarchical path planning algorithm (HIPLA) for real time path planning problems where the computational time is of critical significance and the main idea is to significantly reduce the search space for path computation by searching in a high-level abstraction graph, whose nodes are associated with precomputed risk estimates.
Abstract: Time is a critical factor in several path planning problems such as flood emergency rescue operations, escape planning from fires and chemical warfare agents dispersed in large buildings, evacuation from urban areas during natural disasters such as earthquakes, and military personnel movement. We propose a hierarchical path planning algorithm (HIPLA) for real time path planning problems where the computational time is of critical significance. The main idea of HIPLA is to significantly reduce the search space for path computation by searching in a high-level abstraction graph, whose nodes are associated with precomputed risk estimates. The cumulative risk associated with all nodes along a path determines the quality of a path. We present a detailed experimental analysis of HIPLA by comparing it with two well-known approaches viz., shortest path algorithm (SPAH) [1] and Dijkstra's algorithm with pruning [2] for large node-weighted graphs.

Journal ArticleDOI
TL;DR: This work surveys recent techniques for dynamic graph weights as well as dynamic graph topology, and examines the need for computing point-to-point shortest paths on large-scale road networks whose arcs are weighted with a travelling time which depends on traffic conditions.

Book ChapterDOI
07 Apr 2008
TL;DR: It is proved that deciding whether there exist k pairwise vertex/edge disjoint properly edge-colored s-t paths/trails in a c-edge-colored graph G^c is NP-complete even for k = 2 and c = Ω(n²), where n denotes the number of vertices in G^c.
Abstract: This paper deals with the existence and search of properly edge-colored paths/trails between two, not necessarily distinct, vertices s and t in an edge-colored graph from an algorithmic perspective. First we show that several versions of the s-t path/trail problem have polynomial solutions, including the shortest path/trail case. We give polynomial algorithms for finding a longest properly edge-colored path/trail between s and t for some particular graphs and characterize edge-colored graphs without properly edge-colored closed trails. Next, we prove that deciding whether there exist k pairwise vertex/edge disjoint properly edge-colored s-t paths/trails in a c-edge-colored graph G^c is NP-complete even for k = 2 and c = Ω(n²), where n denotes the number of vertices in G^c. Moreover, we prove that these problems remain NP-complete for c-colored graphs containing no properly edge-colored cycles and c = Ω(n). We obtain some approximation results for those maximization problems, together with polynomial results for some particular classes of edge-colored graphs.
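A simplified relaxation conveys the flavour of the polynomial cases: BFS over (vertex, colour-of-arriving-edge) states finds a shortest properly edge-coloured walk. The paper's genuine path and trail algorithms need more machinery than this, and the encoding and names here are illustrative only.

```python
from collections import deque

def shortest_pec_walk(adj, s, t):
    """BFS over (vertex, colour of the edge used to arrive) states.
    adj[u] is a list of (v, colour) edges. Returns the minimum number
    of edges in a properly edge-coloured s-t walk (consecutive edges
    must differ in colour), or None if no such walk exists."""
    start = (s, None)                          # no incoming colour yet
    dist = {start: 0}
    q = deque([start])
    while q:
        u, c = q.popleft()
        if u == t:
            return dist[(u, c)]
        for v, c2 in adj[u]:
            if c2 == c:
                continue                       # would repeat the previous colour
            if (v, c2) not in dist:
                dist[(v, c2)] = dist[(u, c)] + 1
                q.append((v, c2))
    return None
```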

Journal ArticleDOI
TL;DR: Polynomial time algorithms for finding a longest cycle and a longest path in a Ptolemaic graph are proposed, which use the dynamic programming technique on a laminar structure of cliques, which is a recent characterization of PtoLemaic graphs.
Abstract: The longest path problem asks for a longest path in a given graph. While the graph classes in which the Hamiltonian path problem can be solved efficiently are widely investigated, few graph classes are known in which the longest path problem can be solved efficiently. Polynomial time algorithms for finding a longest cycle and a longest path in a Ptolemaic graph are proposed. Ptolemaic graphs are the graphs that satisfy the Ptolemy inequality; they are exactly the intersection of chordal graphs and distance-hereditary graphs. The algorithms use the dynamic programming technique on a laminar structure of cliques, a recent characterization of Ptolemaic graphs.

Journal ArticleDOI
TL;DR: It is shown that both models under investigation can be transformed to an equivalent reverse 2-median problem on a path and an O(nlogn) algorithm is proposed, where n is the number of vertices of the path.

Journal ArticleDOI
TL;DR: Theoretical properties of multi-level overlay graphs are shown that lead to the definition of a new data structure for the computation and maintenance of an overlay graph of G while weight decrease or weight increase operations are performed on G.
Abstract: Multi-level overlay graphs represent a speed-up technique for shortest paths computation which is based on a hierarchical decomposition of a weighted directed graph G. They have been shown to be experimentally efficient, especially when applied to timetable information. However, no theoretical result on the cost of constructing, maintaining and querying multi-level overlay graphs in a dynamic environment is known. In this paper, we show theoretical properties of multi-level overlay graphs that lead us to the definition of a new data structure for the computation and the maintenance of an overlay graph of G while weight decrease or weight increase operations are performed on G. Our solution is theoretically faster than the recomputation from scratch and allows queries that can be performed more efficiently than running Dijkstra’s shortest paths algorithm on G.

Journal ArticleDOI
TL;DR: Taken overall, the work provides the means to design an entire transparent survivable island that respects the transparent reach limits of a given ultra-long-haul technology.
Abstract: In a transparent optical network it is desirable to have design control over the length of normal working paths and over the end-to-end length of paths in any restored network state. An obvious approach with p-cycles is to limit the maximum allowable circumference of candidate cycles considered in the network design. But this is somewhat inefficient and does not directly control the end-to-end length of paths in a restored state; it only controls the maximum length of protection path-segments that might be substituted into a working path on failure. Another basic strategy is now considered. It consists of systematically matching shorter working paths with longer protection path-segments through p-cycles, and vice versa, with direct consideration of the end-to-end length of paths in the restored network state during the design. This complementary matching notion is studied through an integer linear programming (ILP) model to minimize cost while intelligently associating longer working paths with shorter protection path-segments and vice versa. The basic ILP is adapted in one case to minimize the average restored state path lengths; in another to achieve the least possible longest path length; and, finally, to also constrain all restored path lengths under a fixed limit. Each variation can also be subject to a requirement of using only the theoretically minimal spare capacity or, through bi-criteria methods, a minimal amount of additional spare capacity for the corresponding objective on path lengths. Taken overall the work provides the means to design an entire transparent survivable island that respects the transparent reach limits of a given ultra-long-haul technology. A heuristic combination of ILP and genetic algorithm methods is also developed to solve some of the larger problems and is shown to perform well.

Proceedings ArticleDOI
01 Sep 2008
TL;DR: This paper provides a min-sum algorithm to compute the shortest path between two nodes in a graph with positive edge weights and proves its convergence.
Abstract: Solving the distributed shortest path problem has important applications in the theory of distributed systems, most notably routing. In this paper, we provide and prove the convergence of a min-sum algorithm to compute the shortest path between two nodes in a graph with positive edge weights. Unlike the standard distributed shortest path algorithms, the rate of convergence depends on the weight of the minimal path and not necessarily the number of nodes in the network.
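On a graph, the min-sum updates coincide with synchronous Bellman-Ford-style relaxation towards the target. A minimal sketch (encoding illustrative; the paper's contribution is the convergence-rate analysis, not the update rule itself):

```python
def min_sum_shortest_path(weights, s, t, rounds):
    """Synchronous min-sum message passing. weights[(u, v)] is the
    positive weight of the undirected edge {u, v}. Each round, every
    node re-estimates its distance to t from its neighbours' current
    estimates; with positive weights the estimates converge to the
    true shortest-path distances."""
    nodes = {u for e in weights for u in e}
    est = {v: (0 if v == t else float("inf")) for v in nodes}
    for _ in range(rounds):
        new = dict(est)
        for (u, v), w in weights.items():      # relax both directions
            new[u] = min(new[u], w + est[v])
            new[v] = min(new[v], w + est[u])
        if new == est:
            break                              # converged early
        est = new
    return est[s]
```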

Book ChapterDOI
15 Dec 2008
TL;DR: This work addresses the problem of finding a polynomial-time approximation scheme for shortest bounded-curvature paths in the presence of obstacles and clarifies the critical factors contributing to the complexity of bounded-Curvature motion planning.
Abstract: We address the problem of finding a polynomial-time approximation scheme for shortest bounded-curvature paths in the presence of obstacles. Given an arbitrary environment $\mathcal{E}$ consisting of polygonal obstacles, two feasible configurations, a length l, and an approximation factor ε, our algorithm either (i) verifies that every feasible bounded-curvature path joining the two configurations is longer than l or (ii) constructs such a path Π whose length is at most (1 + ε) times the length of the shortest such path. The run time of our algorithm is polynomial in n (the total number of obstacle vertices and edges in $\mathcal{E}$), m (the bit precision of the input), ε⁻¹, and l. For general polygonal environments, there is no known upper bound on the length, or description, of a shortest feasible bounded-curvature path as a function of n and m. Furthermore, even if the length and description of a shortest path are known to be linear in n and m, finding such a path is known to be NP-hard [14]. Previous results construct (1 + ε) approximations to the shortest ε-robust bounded-curvature path [11,3] in time that is polynomial in n and ε⁻¹. (Intuitively, a path is ε-robust if it remains feasible when simultaneously twisted by some small amount at each of its environment contacts.) Unfortunately, ε-robust solutions do not exist for all problem instances that admit bounded-curvature paths. Furthermore, even if an ε-robust path exists, the shortest bounded-curvature path may be arbitrarily shorter than the shortest ε-robust path. In effect, these earlier results confound two distinct sources of problem difficulty, measured by ε⁻¹ and l. Our result is not only more general, but it also clarifies the critical factors contributing to the complexity of bounded-curvature motion planning.

Journal ArticleDOI
TL;DR: This paper proves the strong path partition conjecture for k=2 for all digraphs; the proof is constructive and extends the known proof for k=1.
Abstract: Berge's strong path partition conjecture from 1982 generalizes and extends Dilworth's theorem and the Greene-Kleitman theorem which are well known for partially ordered sets. The conjecture is known to be true for all digraphs only for k=1 (by the Gallai-Milgram theorem) and for k ≥ λ (where λ is the cardinality of the longest path in the graph). The attempts made, so far, to prove the conjecture for other values of k have yielded proofs for acyclic digraphs, but not for general digraphs. In this paper, we prove the conjecture for k=2 for all digraphs. The proof is constructive and it extends the proof for k=1.
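The threshold in the abstract is stated in terms of λ, the number of vertices on a longest directed path. Computing λ is NP-hard for general digraphs, but for the acyclic case (where earlier partial proofs of the conjecture apply) it is a linear-time dynamic program over a topological order, as this sketch shows:

```python
from collections import defaultdict

def longest_path_vertices(n, edges):
    """Cardinality (number of vertices) of a longest directed path in a
    DAG on vertices 0..n-1 -- the quantity lambda above.  Linear-time DP
    over a topological order (Kahn's algorithm)."""
    adj = defaultdict(list)
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    # build a topological order
    order, stack = [], [v for v in range(n) if indeg[v] == 0]
    while stack:
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    # dp[v] = number of vertices on the longest path ending at v
    dp = [1] * n
    for u in order:
        for v in adj[u]:
            dp[v] = max(dp[v], dp[u] + 1)
    return max(dp)
```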

Proceedings Article
01 Jan 2008
TL;DR: An optimal algorithm is proposed based on the Stair Normal Interval Representation (SNIR) matrix that characterizes proper interval graphs, in which every maximal clique of the graph is represented by one matrix element; the proposed algorithm uses this structural property in order to determine directly the paths in an optimal solution.
Abstract: In this paper we consider the k-fixed-endpoint path cover problem on proper interval graphs, which is a generalization of the path cover problem. Given a graph G and a set T of k vertices, a k-fixed-endpoint path cover of G with respect to T is a set of vertex-disjoint simple paths that covers the vertices of G, such that the vertices of T are all endpoints of these paths. The goal is to compute a k-fixed-endpoint path cover of G with minimum cardinality. We propose an optimal algorithm for this problem with runtime O(n), where n is the number of intervals in G. This algorithm is based on the Stair Normal Interval Representation (SNIR) matrix that characterizes proper interval graphs. In this characterization, every maximal clique of the graph is represented by one matrix element; the proposed algorithm uses this structural property in order to directly determine the paths in an optimal solution.
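The unconstrained special case (no fixed endpoints, i.e., k = 0) already illustrates why proper interval graphs admit simple linear-time path covers: sorting proper intervals by endpoints makes consecutive intervals of the same connected component adjacent, so a minimum path cover has one path per component. The sketch below shows that greedy pass; it is not the paper's SNIR-based algorithm for general k.

```python
def path_cover_proper_intervals(intervals):
    """Minimum path cover of a proper interval graph given as a list of
    (left, right) pairs with no interval containing another; two vertices
    are adjacent iff their intervals overlap.  In a proper interval
    graph, sorting by endpoints makes consecutive intervals of a
    connected component adjacent, so one greedy pass is optimal."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i])
    paths = []
    for i in order:
        left, _ = intervals[i]
        # extend the current path if this interval meets its last vertex
        if paths and intervals[paths[-1][-1]][1] >= left:
            paths[-1].append(i)
        else:
            paths.append([i])
    return paths
```

The number of returned paths equals the number of connected components, which is the minimum, since every connected proper interval graph has a Hamiltonian path.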

Journal ArticleDOI
TL;DR: In this article, a branch-and-price algorithm is proposed to solve the maximum flow problem with flow width constraints, where one block of the decomposition defines the path while the other sends the appropriate amount of flow along it.
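The two roles in that decomposition, finding a path and deciding how much flow to send on it, also appear in the classical augmenting-path view of maximum flow. As a hedged illustration of the underlying (unconstrained) max-flow problem, not of the branch-and-price method itself, here is an Edmonds-Karp sketch:

```python
from collections import deque

def max_flow(n, edges, s, t):
    """Edmonds-Karp maximum flow on vertices 0..n-1: repeatedly find a
    shortest augmenting path (the 'path' decision), then push the
    bottleneck amount of flow along it (the 'flow amount' decision).
    edges is a list of (u, v, capacity) triples."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
    flow = 0
    while True:
        # BFS for a shortest s-t path with residual capacity
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:
            return flow          # no augmenting path left
        # bottleneck capacity along the path, then augment residuals
        v, bottleneck = t, float("inf")
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck
```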

Journal ArticleDOI
TL;DR: Two new bounding operations, the detachment-cut and the H-cut, are introduced to further reduce the size of the search space; the resulting algorithms are compared with those of Fujiwara et al. (2008) using chemical compound data obtained from the KEGG LIGAND database.
Abstract: This paper considers the problem of enumerating all non-isomorphic tree-like chemical graphs with given path frequency, where "tree-like" means that the graph can be viewed as a tree if multiple edges (i.e., edges with the same end points) and a benzene ring are treated as one edge and one vertex, respectively, and "path frequency" is a vector of the numbers of specified vertex-labeled paths that must be realized in every output. This and related problems have several potential applications such as classification of chemical compounds, structure determination using mass-spectrum and/or NMR and design of novel chemical compounds. For this problem, several studies have been done. Recently, Fujiwara et al. (2008) showed two formulations and for each of them, they gave a branch-and-bound algorithm, which combined efficient enumeration of non-isomorphic trees with bounding operations based on the path frequency and the atom-atom bonds to avoid the generation of invalid trees. In this paper, based on their work and a result of Nagamochi (2006), we introduce two new bounding operations, the detachment-cut and the H-cut, to further reduce the size of the search space. We performed computational experiments to compare our proposed algorithms with those of Fujiwara et al. (2008) using some chemical compound data obtained from the KEGG LIGAND database (http://www.genome.jp/kegg/ligand.html). The results show that our proposed algorithms are much faster than their algorithms.
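The "path frequency" feature vector driving this enumeration can be made concrete on a small labeled tree. The sketch below, an illustration rather than the paper's encoding, counts the vertex-label sequences of all simple paths with at most max_len vertices (a path and its reverse are counted separately here; canonicalize the strings if an undirected count is wanted):

```python
from collections import Counter

def path_frequency(labels, edges, max_len):
    """Counts of vertex-label sequences along all simple paths with at
    most max_len vertices in an undirected vertex-labeled graph.
    labels[v] is the label of vertex v; edges are undirected pairs."""
    adj = {v: [] for v in range(len(labels))}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    freq = Counter()

    def extend(path):
        # record the label sequence of the current simple path
        freq["".join(labels[v] for v in path)] += 1
        if len(path) < max_len:
            for w in adj[path[-1]]:
                if w not in path:
                    extend(path + [w])

    for v in range(len(labels)):
        extend([v])
    return freq
```

For the three-vertex chain C-C-O with max_len = 2, for example, the vector records two C vertices, one O vertex, and the two-vertex sequences CC (twice, once per direction), CO, and OC.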