scispace - formally typeset

Showing papers on "Longest path problem published in 2021"


Journal ArticleDOI
TL;DR: Numerical simulations show that the proposed task assignment algorithms yield satisfactory solutions compared with popular genetic algorithms.
Abstract: This article studies the task assignment problem for a fleet of dispersed vehicles to efficiently visit a set of target locations where some target locations might be unreachable for one or several vehicles. The objectives are to visit as many target locations as possible by using the minimum number of vehicles while minimizing the vehicles’ total travel time. We first propose a target merging strategy to deal with the optimization problem, which is in general NP-hard, and show that for the special case of a single vehicle, it requires linear time to calculate the maximum number of targets to be visited. Second, we design a longest path-based algorithm and analyze the cases in which the objective to visit the maximum number of targets by using the minimum number of vehicles can be obtained through the proposed algorithm within linear running time. Once the targets to be visited and the corresponding employed vehicles are determined, the marginal-cost-based target inserting principle to be discussed guarantees that the chosen targets will be visited within a computable finite maximal travel time, which is at most twice the optimal when the cost matrix is symmetric. Integrating the longest path-based algorithm with two target inserting principles used to minimize the vehicles’ total travel time, we design two two-phase task assignment algorithms. Furthermore, we propose a one-phase algorithm to optimize the multiple objectives simultaneously by improving a co-evolutionary multipopulation genetic algorithm. Numerical simulations show that the proposed task assignment algorithms yield satisfactory solutions compared with popular genetic algorithms.
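The marginal-cost-based target inserting principle is not spelled out in the abstract; a generic cheapest-insertion sketch of the idea is below (the function name and the dict-of-dicts cost matrix are illustrative, not the paper's exact formulation):

```python
def marginal_cost_insert(route, targets, dist):
    """Insert each target into the position of `route` that minimizes
    the marginal increase in total travel cost (a generic
    cheapest-insertion sketch). `route` must already contain at least
    two fixed endpoints, e.g. a start and end depot."""
    route = list(route)
    for t in targets:
        best_pos, best_delta = 1, float("inf")
        for i in range(len(route) - 1):
            a, b = route[i], route[i + 1]
            # cost increase of inserting t between consecutive stops a, b
            delta = dist[a][t] + dist[t][b] - dist[a][b]
            if delta < best_delta:
                best_pos, best_delta = i + 1, delta
        route.insert(best_pos, t)
    return route
```

For a symmetric cost matrix, insertion heuristics of this kind are the classic route-building step that yields constant-factor guarantees of the sort the abstract mentions.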

10 citations


Journal ArticleDOI
TL;DR: Numerical experiments show that both DPIA-GR and DPIA-LRGR solve the problem efficiently and outperform CPLEX and GA, but DPIA-LRGR offers better performance.
Abstract: This paper considers a joint order acceptance and scheduling problem under a general scenario. A manufacturer receives multiple orders with a given revenue, processing time, release date, due date, deadline, and earliness and tardiness penalties. The manufacturer can be seen as a single-machine system. Due to limited capacity, the manufacturer cannot process every order and needs to determine the optimal set of accepted orders and corresponding production schedule such that the total profit is maximized. The manufacturer can extend its capacity with overtime by paying an additional cost. A time-indexed formulation is presented to model the problem. Two exact algorithms are proposed. The first algorithm, denoted by DPIA-GR, is a dynamic programming (DP)-based algorithm that starts by solving a relaxed version of the original model and successively recovers the relaxed constraint until an optimal solution to the original problem is achieved. The second algorithm, denoted by DPIA-LRGR, improves DPIA-GR by incorporating Lagrangian relaxation (LR). The subgradient method is employed to find the optimal Lagrangian multipliers. The relaxed model in DPIA-GR and the LR model in DPIA-LRGR can be represented using a weighted di-graph. Both algorithms are equivalent to finding the longest path in the graph and applying a graph reduction strategy to prevent unnecessary computational time and memory usage. A genetic algorithm (GA) is also proposed to solve large-scale versions of the problem. Numerical experiments show that both DPIA-GR and DPIA-LRGR solve the problem efficiently and outperform CPLEX and GA, but DPIA-LRGR offers better performance.
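Both exact algorithms above reduce to a longest path computation on a weighted digraph; since that graph is acyclic, one pass in topological order suffices. A minimal sketch of this core step (generic, not the paper's time-indexed graph):

```python
from collections import defaultdict, deque

def dag_longest_path(n, edges):
    """Maximum-weight path in a DAG with vertices 0..n-1 and edges
    given as (u, v, w) triples; returns (weight, path)."""
    adj = defaultdict(list)
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    # Kahn's algorithm for a topological order
    order, q = [], deque(i for i in range(n) if indeg[i] == 0)
    while q:
        u = q.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    best = [0.0] * n          # best path weight ending at each vertex
    pred = [None] * n
    for u in order:
        for v, w in adj[u]:
            if best[u] + w > best[v]:
                best[v] = best[u] + w
                pred[v] = u
    end = max(range(n), key=best.__getitem__)
    path, x = [], end
    while x is not None:
        path.append(x)
        x = pred[x]
    return best[end], path[::-1]
```

In the paper's setting the vertices would encode (order, time) states and arc weights the order profits, so the heaviest path corresponds to the most profitable accepted-order schedule.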

8 citations


Journal ArticleDOI
TL;DR: Since few methods have dealt with the critical path problem for workflow scheduling in the cloud, this study presents a critical-path based method that builds on the authors' previous optimal workflow scheduling method, which is GWO-based (Grey Wolf Optimization).
Abstract: When each task of the longest path in a task-dependent scientific workflow must meet a deadline, the path is called critical. Tasks in a critical path have priority over tasks in non-critical paths. Since few methods have dealt with the critical path problem for workflow scheduling in the cloud, this study presents a critical-path based method that addresses the problem, building on our previous optimal workflow scheduling method, which is GWO-based (Grey Wolf Optimization). We applied our study to balanced and imbalanced scientific workflows. Our results show that considering the critical path improves the completion time of workflows while maintaining a proper level of resource cost and resource utilization. Moreover, to show the effectiveness of the current study, we compared the performance of the proposed method with non-critical-path-aware algorithms, using three different indicators. The simulation demonstrates that compared to PGWO as the base method, the proposed approach achieves (1) approximately 68% improvement in makespan, (2) more accurate population sampling for about 70% of workflows, and (3) avoidance of cost increases in more than 50% of workflows. Moreover, the proposed method decreases makespan by approximately a factor of 3 compared to constraint-based approaches.

8 citations


Journal ArticleDOI
TL;DR: In this paper, the length of the longest cycle in a sparse random graph $G_{n,p}$, $p = c/n$, $c$ constant, was discussed. The authors were only able to give the values $p_1, p_2$ explicitly, although they could in principle compute any $p_k$.

7 citations


Journal ArticleDOI
TL;DR: The Improved Structural Perturbation Algorithm (ISPA), proposed in this paper, accelerates the calculation of the length of the longest path in the graph at each iteration by partitioning the graph into two types of sets and applying a different process to compute the effect of a perturbation depending on which set the node lies in.

7 citations


Proceedings ArticleDOI
TL;DR: In this article, the authors study mechanisms that select long paths on a graph whose vertices are partitioned among strategic players, and show that no mechanism can compete with the overall longest path while incentivizing approximate truthfulness, i.e., while requiring that hiding nodes cannot increase a player's utility by more than a factor of $1 + o(1)$.
Abstract: Motivated by kidney exchange, we study the following mechanism-design problem: On a directed graph (of transplant compatibilities among patient-donor pairs), the mechanism must select a simple path (a chain of transplantations) starting at a distinguished vertex (an altruistic donor) such that the total length of this path is as large as possible (a maximum number of patients receive a kidney). However, the mechanism does not have direct access to the graph. Instead, the vertices are partitioned over multiple players (hospitals), and each player reports a subset of her vertices to the mechanism. In particular, a player may strategically omit vertices to increase how many of her vertices lie on the path returned by the mechanism. Our objective is to find mechanisms that limit incentives for such manipulation while producing long paths. Unfortunately, in worst-case instances, competing with the overall longest path is impossible while incentivizing (approximate) truthfulness, i.e., requiring that hiding nodes cannot increase a player's utility by more than a factor of $1 + o(1)$. We therefore adopt a semi-random model where a small ($o(n)$) number of random edges are added to worst-case instances. While it remains impossible for truthful mechanisms to compete with the overall longest path, we give a truthful mechanism that competes with a weaker but non-trivial benchmark: the length of any path whose subpaths within each player have a minimum average length. In fact, our mechanism satisfies even a stronger notion of truthfulness, which we call matching-time incentive compatibility. This notion of truthfulness requires that each player not only reports her nodes truthfully but also does not stop the returned path at any of her nodes in order to divert it to a continuation inside her own subgraph.

5 citations


Journal ArticleDOI
TL;DR: A polynomial algorithm for the longest cycle problem on interval graphs was proposed in this paper, which runs in $O(n^8)$ time, where $n$ is the number of vertices of the input graph.

4 citations


Journal ArticleDOI
TL;DR: This paper presents a minimum spectrum utilization (SU) and average path length (APL) approach to solve the (off-line) routing and spectrum allocation problem (RSA) based on combining a simple ordering pre-computation strategy, namely most subcarriers first (MSF) with three nature-inspired algorithms.
Abstract: Flexible optical network architectures are considered a very promising solution where spectrum resources are allocated within flexible frequency grids. This paper presents a minimum spectrum utilization (SU) and average path length (APL) approach to solve the (off-line) routing and spectrum allocation problem (RSA) based on combining a simple ordering pre-computation strategy, namely most subcarriers first (MSF), with three nature-inspired algorithms. These algorithms are ant colony optimization, differential evolution based relative position indexing (DE-RPI), and differential evolution general combinatorial (DE-GC). We begin by showing that MSF is the most effective ordering pre-computation strategy when compared to other well-known typical heuristics in the literature, such as first-fit and longest path first. Then, we apply MSF in combination with the three nature-inspired algorithms to simultaneously optimize the SU and APL. The usefulness of the MSF ordering pre-computation strategy is presented via a comparison of results obtained when using and not using MSF under the same scenarios. The algorithms are evaluated in benchmark optical networks, such as the NSFNet, the European optical network, and the 40-node USA network. We show that DE-RPI with MSF ordering pre-computation is the best option to solve the RSA problem, obtaining an average improvement percentage in the range of 0.9772–4.4086% on the SU and from -0.1668% to 0.8511% on the APL when compared to other meta-heuristics, either with or without the MSF ordering policy.

4 citations


Journal ArticleDOI
TL;DR: In this paper, a quadratic unconstrained binary optimization (QUBO) formulation of the longest path problem on graphs is proposed; the problem is not known to have an efficient classical solution in the general case.

3 citations


Journal ArticleDOI
TL;DR: This paper gives a linear-time parallel algorithm for embedding linear arrays (paths) of maximum length in O-shaped meshes and proves some upper bounds on the length of the longest paths.
Abstract: Embedding an interconnection network into another network is one of the important problems in parallel processing. In this paper, we study the embedding of linear arrays (paths) of maximum length in O-shaped meshes (O-shaped grid graphs), which is equivalent to finding a longest path in an O-shaped mesh (grid graph). An O-shaped mesh is a 2D mesh from which a smaller 2D mesh has been removed; the removed nodes can be regarded as faulty processors. We give a linear-time parallel algorithm for this problem. To show that the algorithm finds an optimal path, we first prove some upper bounds on the length of the longest paths, and then we show how our algorithm meets these upper bounds.
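The basic building block such embedding algorithms stitch together around the hole is the boustrophedon (snake) path, which visits every cell of a full rectangular mesh; a minimal sketch:

```python
def snake_path(rows, cols):
    """Hamiltonian (maximum-length) path in a full rows x cols mesh:
    traverse each row left-to-right and right-to-left alternately,
    so consecutive cells are always mesh-adjacent."""
    path = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cs)
    return path
```

In an O-shaped mesh the hole makes a Hamiltonian path impossible in general, which is why the paper proves upper bounds and shows the algorithm meets them.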

Journal ArticleDOI
02 Mar 2021-Chaos
TL;DR: In this article, the global memory capacity (MC) of a reservoir computing model whose reservoir network is a directed acyclic network (DAN) was examined, and it was shown that the global MC is bounded by the length of the longest path of the reservoir DAN; memory communities, i.e., clusters of reservoir nodes with the same memory profile, were then mined to interpret this bound.
Abstract: Reservoir computing (RC) is an attractive area of research by virtue of its potential for hardware implementation and low training cost. An intriguing research direction in this field is to interpret the underlying dynamics of an RC model by analyzing its short-term memory property, which can be quantified by a global index: memory capacity (MC). In this paper, the global MC of an RC whose reservoir network is specified as a directed acyclic network (DAN) is examined, and we first show that its global MC is theoretically bounded by the length of the longest path of the reservoir DAN. Since the global MC is technically influenced by the model hyperparameters, the dependency of the MC on the hyperparameters of this RC is then explored in detail. In a further study, we employ an improved conventional network embedding method (i.e., struc2vec) to mine the underlying memory communities in the reservoir DAN, which can be regarded as clusters of reservoir nodes with the same memory profile. Experimental results demonstrate that such a memory community structure can provide a concrete interpretation of the global MC of this RC. Finally, the clustered RC is proposed by exploiting the detected memory community structure of the DAN; its prediction performance is verified to be enhanced, with lower training cost, compared with other RC models on several chaotic time series benchmarks.

Journal ArticleDOI
TL;DR: In this article, an improved ant colony algorithm that determines the critical path by setting the path distance and time as negative, while leaving the transition probability unchanged, is proposed to improve the efficiency of critical path computation.
Abstract: In large and complex project schedule networks, existing algorithms to determine the critical path are considerably slow. Therefore, an algorithm with a faster convergence is needed to improve the efficiency of the critical path computation. The ant colony algorithm was first applied to the travelling salesman problem to determine the shortest path. However, many problems require the longest path in practice; the critical path in the scheduling problem is the longest path in the scheduling network. In this study, an improved ant colony algorithm to determine the critical path by setting the path distance and time as negative, while the transition probability remains unchanged, is proposed. The case of a coal power plant engineering, procurement, and construction (EPC) project was considered. The results show that a peak number of optimal solutions appeared at approximately the 9th iteration; however, instabilities and continued fluctuations were observed even afterward, indicating that the algorithm has a certain randomness. Convergence is apparent at the 29th iteration; after the 34th iteration, a singular optimal solution, the longest or critical path, is obtained, indicating that the convergence rate can be controlled and that the critical path can be obtained by setting appropriate parameters in the solution method. This has been found to improve the efficiency of calculating the critical path. Case validation and algorithm performance testing confirmed that the improved ant colony algorithm can solve the critical path problem and make it computationally intelligent.
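As a deterministic baseline for what the ant colony is searching for, the critical path of a small activity network can be computed with the classic forward/backward pass (a sketch with hypothetical activity names, not the EPC case study's data):

```python
def critical_path(acts):
    """acts = {name: (duration, [predecessor names])}, assumed acyclic.
    Forward pass computes earliest finish (EF), backward pass latest
    finish (LF); zero-slack activities (EF == LF) are critical."""
    ef = {}
    def earliest(a):                      # forward pass, memoized
        if a not in ef:
            dur, preds = acts[a]
            ef[a] = dur + max((earliest(p) for p in preds), default=0)
        return ef[a]
    duration = max(earliest(a) for a in acts)

    succs = {a: [] for a in acts}         # invert the precedence lists
    for a, (_, preds) in acts.items():
        for p in preds:
            succs[p].append(a)
    lf = {}
    def latest(a):                        # backward pass, memoized
        if a not in lf:
            lf[a] = min((latest(s) - acts[s][0] for s in succs[a]),
                        default=duration)
        return lf[a]
    critical = [a for a in acts if earliest(a) == latest(a)]
    return duration, critical
```

On large networks this pass is linear in the network size; the paper's motivation is metaheuristic search on networks where exact enumeration of paths becomes the bottleneck.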

Journal ArticleDOI
TL;DR: It is shown that connected graphs admit sublinear longest path transversals. This improves an earlier result of Rautenbach and Sereni and is related to the fifty-year-old question of whether connected ...
Abstract: We show that connected graphs admit sublinear longest path transversals. This improves an earlier result of Rautenbach and Sereni and is related to the fifty-year-old question of whether connected ...

Journal ArticleDOI
TL;DR: This paper proves Seymour's Second Neighborhood Conjecture, which states that every simple oriented graph has a vertex such that the cardinality of its second neighborhood is greater than or equal to the cardinality of its first neighborhood, for 6-antitransitive simple oriented graphs.

Journal ArticleDOI
TL;DR: It is proved that the problem is NP-hard even on directed acyclic graphs when the mean length of each arc is restricted to an integer, and a fully polynomial-time approximation scheme (FPTAS) is presented that iteratively solves deterministic shortest path problems.

Journal ArticleDOI
TL;DR: In undirected graphs, it is shown that MBT has no efficient $\exp(-O(\log^{0.63}{n}))$-approximation under the exponential time hypothesis; the inapproximability results rely on self-improving reductions and structural properties of binary trees.
Abstract: We introduce and investigate the approximability of the maximum binary tree problem (MBT) in directed and undirected graphs. The goal in MBT is to find a maximum-sized binary tree in a given graph. MBT is a natural variant of the well-studied longest path problem, since both can be viewed as finding a maximum-sized tree of bounded degree in a given graph. The connection to longest path motivates the study of MBT in directed acyclic graphs (DAGs), since the longest path problem is solvable efficiently in DAGs. In contrast, we show that MBT in DAGs is hard: it has no efficient $\exp(-O(\log n/\log\log n))$-approximation under the exponential time hypothesis, where $n$ is the number of vertices in the input graph. In undirected graphs, we show that MBT has no efficient $\exp(-O(\log^{0.63} n))$-approximation under the exponential time hypothesis. Our inapproximability results rely on self-improving reductions and structural properties of binary trees. We also show constant-factor inapproximability assuming $\mathbf{P} \neq \mathbf{NP}$. In addition to inapproximability results, we present algorithmic results of two different flavors: (1) We design a randomized algorithm to verify if a given directed graph on $n$ vertices contains a binary tree of size $k$ in $2^k \mathsf{poly}(n)$ time. (2) Motivated by the longest heapable subsequence problem, introduced by Byers, Heeringa, Mitzenmacher, and Zervas (ANALCO 2011), which is equivalent to MBT in permutation DAGs, we design efficient algorithms for MBT in bipartite permutation graphs.

Proceedings ArticleDOI
05 Jan 2021
TL;DR: In this paper, a two-stage multi-objective control method for distribution networks is proposed to solve the problem of optimal dispatching among sources, the network, and flexible loads, and to improve the safe and stable operation of the distribution network.
Abstract: In order to solve the problem of optimal dispatching among the source, network, and flexible loads, and to improve the safety and stability of distribution network operation, this paper proposes a two-stage multi-objective control method for distribution networks. The first stage is day-ahead scheduling, whose control objectives are source-load balance and minimization of the longest feeder path in the distribution network. The second stage is hourly control, where the control objectives of static voltage stability margin and active power loss are used to effectively improve the coordination, safety, and economy of the distribution network. An improved multi-objective harmony search algorithm is proposed to solve this two-stage multi-objective optimization model. Finally, the effectiveness of the proposed two-stage multi-objective control method is verified on the IEEE 33-bus system with distributed generations.

Posted Content
TL;DR: The Fixed Edge-Length Planar Realization problem (FEPR), introduced by Eades and Wormald, asks for a planar straight-line drawing of a weighted planar graph in which the Euclidean length of each edge is prescribed.
Abstract: We study a classic problem introduced thirty years ago by Eades and Wormald. Let $G=(V,E,\lambda)$ be a weighted planar graph, where $\lambda: E \rightarrow \mathbb{R}^+$ is a length function. The Fixed Edge-Length Planar Realization problem (FEPR for short) asks whether there exists a planar straight-line realization of $G$, i.e., a planar straight-line drawing of $G$ where the Euclidean length of each edge $e \in E$ is $\lambda(e)$. Cabello, Demaine, and Rote showed that the FEPR problem is NP-hard, even when $\lambda$ assigns the same value to all the edges and the graph is triconnected. Since the existence of large triconnected minors is crucial to the known NP-hardness proofs, in this paper we investigate the computational complexity of the FEPR problem for weighted $2$-trees, which are $K_4$-minor free. We show its NP-hardness, even when $\lambda$ assigns to the edges only up to four distinct lengths. Conversely, we show that the FEPR problem is linear-time solvable when $\lambda$ assigns to the edges up to two distinct lengths, or when the input has a prescribed embedding. Furthermore, we consider the FEPR problem for weighted maximal outerplanar graphs and prove it to be linear-time solvable if their dual tree is a path, and cubic-time solvable if their dual tree is a caterpillar. Finally, we prove that the FEPR problem for weighted $2$-trees is slice-wise polynomial in the length of the longest path.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a novel placement method based on a layered layout, which reduces the total HPWL and total area by an average of 77.77% and 57.56%, respectively, compared with a Simulated Annealing (SA)-based placement method.
Abstract: With the demand for data processing and storage increasing rapidly, the existing CMOS-based computing system is gradually unable to meet the amplification demand because of its high-power consumption. Due to high frequency and low power consumption, superconducting rapid single-flux-quantum (RSFQ) logic circuit technology is an attractive candidate. In this paper, we propose a novel placement method based on the layer layout. We layer the gates according to the logic level, which is the length of the longest path in terms of the number of clocked gates from any primary input (PI) of the circuit to the gate. Dummy gates are inserted so that any two adjacent layers can form a bipartite graph. Then the gates of each layer are reordered to minimize the half-perimeter wirelength (HPWL). Finally, the gates of each layer are assigned vertical positions to make the edges as straight as possible. The distance between adjacent layers is determined by the number of bending points of the edges between layers. We use several superconducting RSFQ logic circuits to evaluate the effectiveness of our proposed placement method, which uses HPWL and area as evaluation metrics. The experimental results show that compared with the Simulated Annealing (SA)-based placement method, the proposed approach can reduce the total HPWL and total area by an average of 77.77% and 57.56%, respectively.

Posted Content
TL;DR: The Max Growth System (MGS) as discussed by the authors is a generalization of the Infinite Bin Model, which has been the object of study of a number of papers, and is related to the growth of heaviest paths in a charged random graph.
Abstract: Our object of study is the asymptotic growth of heaviest paths in a charged (weighted with signed weights) complete directed acyclic graph. Edge charges are i.i.d. random variables with common distribution $F$ supported on $[-\infty, 1]$ with essential supremum equal to 1 (a charge of $-\infty$ is understood as the absence of an edge). The asymptotic growth rate is a constant that we denote by $C(F)$. Even in the simplest case where $F = p\delta_1 + (1 - p)\delta_{-\infty}$, corresponding to the longest path in the Barak-Erdős random graph, there is no closed-form expression for this function, but good bounds do exist. In this paper we construct a Markovian particle system that we call "Max Growth System" (MGS), and show how it is related to the charged random graph. The MGS is a generalization of the Infinite Bin Model that has been the object of study of a number of papers. We then identify a random functional of the process that admits a stationary version and whose expectation equals the unknown constant $C(F)$. Furthermore, we construct an effective perfect simulation algorithm for this functional which produces samples from the random functional.
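For the special case $F = p\delta_1 + (1 - p)\delta_{-\infty}$ mentioned above, $C(F)$ can be estimated by direct simulation of the Barak-Erdős graph: edges only go from lower to higher labels, so one DP pass gives the longest path (an illustrative Monte-Carlo sketch, not the paper's perfect-simulation algorithm):

```python
import random

def barak_erdos_longest_path(n, p, seed=0):
    """Longest path length in a Barak-Erdos random graph: vertices
    0..n-1 and an edge i -> j for each i < j independently with
    probability p. Edges respect vertex order, so a quadratic DP
    over pairs (i, j) suffices."""
    rng = random.Random(seed)
    best = [0] * n            # longest path ending at each vertex
    for j in range(n):
        for i in range(j):
            if rng.random() < p and best[i] + 1 > best[j]:
                best[j] = best[i] + 1
    return max(best)
```

Dividing the result by $n$ for large $n$ gives a rough estimate of $C(F)$; the paper's perfect simulation algorithm instead produces samples of a stationary functional whose expectation is exactly $C(F)$.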

Journal ArticleDOI
22 Feb 2021-Order
TL;DR: The notion of a properly ordered coloring (POC) of a weighted graph, which generalizes the notion of vertex coloring of a graph, is introduced in this paper; it is shown that the ratio of χPOC(G; t) − 1 to χ(G) − 1 can be bounded by t for any graph G.
Abstract: We introduce the notion of a properly ordered coloring (POC) of a weighted graph, that generalizes the notion of vertex coloring of a graph. Under a POC, if xy is an edge, then the larger weighted vertex receives a larger color; in the case of equal weights of x and y, their colors must be different. In this paper, we shall initiate the study of this special coloring in graphs. For a graph G, we introduce the function f(G) which gives the maximum number of colors required by a POC over all weightings of G. We show that f(G) = l(G), where l(G) is the number of vertices of a longest path in G. Another function we introduce is χPOC(G; t) giving the minimum number of colors required over all weightings of G using t distinct weights. We show that the ratio of χPOC(G; t) − 1 to χ(G) − 1 can be bounded by t for any graph G; in fact, the result is shown by determining χPOC(G; t) when G is a complete multipartite graph. We also determine the minimum number of colors to give a POC on a vertex-weighted graph in terms of the number of vertices of a longest directed path in an orientation of the underlying graph. This extends the so-called Gallai-Hasse-Roy-Vitaver theorem, a classical result concerning the relationship between the chromatic number of a graph G and the number of vertices of a longest directed path in an orientation of G.
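One direction of the Gallai-Hasse-Roy-Vitaver theorem mentioned above is constructive and easy to sketch: given an acyclic orientation, coloring each vertex by the number of vertices on a longest directed path ending there yields a proper coloring (a hedged sketch, assuming the orientation is supplied as a DAG):

```python
from functools import lru_cache

def gallai_roy_coloring(vertices, arcs):
    """Given an acyclic orientation (arcs as (u, v) pairs meaning
    u -> v), color each vertex by the number of vertices on a longest
    directed path ending at it. For every arc u -> v we then have
    color(v) > color(u), so the coloring is proper and uses at most
    as many colors as the longest directed path has vertices."""
    preds = {v: [] for v in vertices}
    for u, v in arcs:
        preds[v].append(u)

    @lru_cache(maxsize=None)
    def color(v):
        return 1 + max((color(u) for u in preds[v]), default=0)

    return {v: color(v) for v in vertices}
```

On an acyclically oriented triangle this produces three distinct colors, matching the chromatic number of the triangle.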

Posted Content
TL;DR: In this paper, the authors study the relative importance of the average in-degree and the cumulative advantage effect and implement a generalization where the in-degree depends on the number of nodes.
Abstract: An important knowledge dimension of science and technology is the extent to which their development is cumulative, that is, the extent to which later findings build on earlier ones. Cumulative knowledge structures can be studied using a network approach, in which nodes represent findings and links represent knowledge flows. Of particular interest to those studies is the notion of network paths and path length. Starting from the Price model of network growth, we derive an exact solution for the path length distribution of all unique paths from a given initial node to each node in the network. We study the relative importance of the average in-degree and cumulative advantage effect and implement a generalization where the in-degree depends on the number of nodes. The cumulative advantage effect is found to fundamentally slow down path length growth. As the collection of all unique paths may contain many redundancies, we additionally consider the subset of the longest paths to each node in the network. As this case is more complicated, we only approximate the longest path length distribution in a simple context. Where the number of all unique paths of a given length grows unbounded, the number of longest paths of a given length converges to a finite limit, which depends exponentially on the given path length. Fundamental network properties and dynamics therefore characteristically shape cumulative structures in those networks, and should therefore be taken into account when studying those structures.

Posted Content
TL;DR: In this paper, the UN Sustainable Development Goal targets are modeled as a graph with the targets as nodes and the interactions between targets as the edges; a web application is developed for coloring the edges, and an exhaustive pairwise comparison, entailing over 14000 comparisons, is done to analyze the intra- and inter-goal target interactions.
Abstract: The United Nations developed the 17 Sustainable Development Goals (SDGs), with 169 targets, to serve as a plan for solving the world's problems and achieving a more sustainable future. This is modeled as a graph with the targets as nodes, and with the interaction between targets as the edges of the graph. An exhaustive binary comparison is done to analyze the intra- and inter-goal target interactions, entailing over 14000 comparisons. The task is to assign a 'color' to an edge: positive (indivisible), zero (consistent) or negative (cancelling). This is done via a panel of experts who evaluate the target interactions through a web application that was developed for coloring the edges. This is an on-going study, and so far, of the 1256 edges colored, only 36 are cancelling (negative), or 2.86%; more than 97% are positive interactions. So far, the "most negative" interactions involve: "Climate Change"; "Life Below Water"; "Peace, Justice and Strong Institutions"; and "Decent Work and Economic Growth". Most useful for planning might be the "longest path of positive targets" feature, which searches, via a directed acyclic graph and a topological sort, for a path with only positive or neutral edges, avoiding targets which have red edges emanating from them, i.e., a path of targets that have no conflicts with other targets. Currently, this path has more than 130 nodes. This study can help researchers analyze which targets enable or constrain each other, what mitigation can be done to avoid conflicts, and can be configured for sub-national or regional study. Web app at: this http URL
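A hedged sketch of the "longest path of positive targets" idea described above: restrict a DAG to non-negative edges, exclude any node with an outgoing negative edge, and run the standard topological-order longest-path DP (the edge encoding and sign convention are illustrative; the study's actual data are not reproduced):

```python
from collections import defaultdict, deque

def longest_conflict_free_path(edges):
    """edges: (u, v, sign) triples with sign in {+1, 0, -1}. Keep only
    non-negative edges, drop every node that has an outgoing negative
    ("red") edge, and return the vertex count of the longest path in
    the remaining DAG."""
    bad = {u for u, _, s in edges if s < 0}
    adj, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v, s in edges:
        if s >= 0 and u not in bad and v not in bad:
            adj[u].append(v)
            indeg[v] += 1
            nodes |= {u, v}
    q = deque(n for n in nodes if indeg[n] == 0)
    best = {n: 1 for n in nodes}   # path length in vertices
    while q:
        u = q.popleft()
        for v in adj[u]:
            best[v] = max(best[v], best[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return max(best.values(), default=0)
```

With the study's coloring, the returned count corresponds to the "more than 130 nodes" figure quoted for the current conflict-free path.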

Posted Content
TL;DR: The notion of a properly ordered coloring (POC) of a weighted graph, which generalizes the notion of vertex coloring of a graph, is introduced in this paper, along with the function f(G) giving the maximum number of colors required by a POC over all weightings of G.
Abstract: We introduce the notion of a properly ordered coloring (POC) of a weighted graph, that generalizes the notion of vertex coloring of a graph. Under a POC, if $xy$ is an edge, then the larger weighted vertex receives a larger color; in the case of equal weights of $x$ and $y$, their colors must be different. In this paper, we shall initiate the study of this special coloring in graphs. For a graph $G$, we introduce the function $f(G)$ which gives the maximum number of colors required by a POC over all weightings of $G$. We show that $f(G)=\ell(G)$, where $\ell(G)$ is the number of vertices of a longest path in $G$. Another function we introduce is $\chi_{POC}(G;t)$ giving the minimum number of colors required over all weightings of $G$ using $t$ distinct weights. We show that the ratio of $\chi_{POC}(G;t)-1$ to $\chi(G)-1$ can be bounded by $t$ for any graph $G$; in fact, the result is shown by determining $\chi_{POC}(G;t)$ when $G$ is a complete multipartite graph. We also determine the minimum number of colors to give a POC on a vertex-weighted graph in terms of the number of vertices of a longest directed path in an orientation of the underlying graph. This extends the so-called Gallai-Hasse-Roy-Vitaver theorem, a classical result concerning the relationship between the chromatic number of a graph $G$ and the number of vertices of a longest directed path in an orientation of $G$.