
Showing papers on "Assignment problem published in 2016"


Journal ArticleDOI
TL;DR: The implementation of the distributed auction algorithm and sequential convex programming using model predictive control produces the swarm assignment and trajectory optimization (SATO) algorithm that transfers a swarm of robots or vehicles to a desired shape in a distributed fashion.
Abstract: This paper presents a distributed guidance and control algorithm for reconfiguring swarms composed of hundreds to thousands of agents with limited communication and computation capabilities. This algorithm solves both the optimal assignment and collision-free trajectory generation for robotic swarms, in an integrated manner, when given the desired shape of the swarm without pre-assigned terminal positions. The optimal assignment problem is solved using a distributed auction assignment that can vary the number of target positions in the assignment, and the collision-free trajectories are generated using sequential convex programming. Finally, model predictive control is used to solve the assignment and trajectory generation in real time using a receding horizon. The model predictive control formulation uses current state measurements to re-solve for the optimal assignment and trajectory. The implementation of the distributed auction algorithm and sequential convex programming using model predictive control produces the swarm assignment and trajectory optimization (SATO) algorithm, which transfers a swarm of robots or vehicles to a desired shape in a distributed fashion. Once the desired shape is uploaded to the swarm, the algorithm determines where each robot goes and how it should get there in a fuel-efficient, collision-free manner. Results of flight experiments using multiple quadcopters show the effectiveness of the proposed SATO algorithm.

160 citations
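The distributed auction step described above can be illustrated with a minimal, serial sketch of Bertsekas-style auction bidding. This is an illustrative toy, not the paper's distributed SATO implementation: agents bid on target positions, raising prices until everyone holds a position.

```python
def auction_assignment(benefit, eps=None):
    """Minimal serial auction algorithm (after Bertsekas) for a square
    assignment problem, maximizing total benefit. With integer benefits
    and eps < 1/n, the final assignment is optimal."""
    n = len(benefit)
    if eps is None:
        eps = 1.0 / (n + 1)
    prices = [0.0] * n
    owner = [-1] * n        # owner[j]: agent currently holding object j
    assigned = [-1] * n     # assigned[i]: object held by agent i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        # net value of each object to agent i at current prices
        values = [benefit[i][j] - prices[j] for j in range(n)]
        j = max(range(n), key=values.__getitem__)
        best = values[j]
        second = max((v for k, v in enumerate(values) if k != j),
                     default=best)
        prices[j] += best - second + eps    # bid just enough to win
        if owner[j] != -1:                  # evict the previous owner
            assigned[owner[j]] = -1
            unassigned.append(owner[j])
        owner[j] = i
        assigned[i] = j
    return assigned
```

Each bid raises a price by at least eps, so the loop terminates; the eps-complementary-slackness argument then bounds the suboptimality by n*eps.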


Journal ArticleDOI
TL;DR: This paper shows that the relation between associations of two observations is the equivalence relation in the data association problem, based on the spatial–temporal constraint that the trajectories of different objects must be disjoint, and develops a connected component model (CCM), which can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems.
Abstract: In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than the information from the adjacent two frames. Since straightforwardly obtaining data associations from multi-frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem by either developing complicated approximate algorithms, or simplifying MDA as a 2D assignment problem based upon the information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is the equivalence relation in the data association problem, based on the spatial–temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.

137 citations


Journal ArticleDOI
TL;DR: This paper reviews research into solving the two-dimensional (2D) rectangular assignment problem and combines the best methods to implement a k-best 2D rectangular assignment algorithm with bounded runtime.
Abstract: This paper reviews research into solving the two-dimensional (2D) rectangular assignment problem and combines the best methods to implement a k-best 2D rectangular assignment algorithm with bounded runtime. This paper condenses numerous results as an understanding of the "best" algorithm, a strong polynomial-time algorithm with a low polynomial order (a shortest augmenting path approach), would require assimilating information from many separate papers, each making a small contribution. 2D rectangular assignment Matlab code is provided.

135 citations
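For intuition, the k-best 2D rectangular assignment problem can be stated via a brute-force reference implementation. This exponential enumeration is only a baseline for tiny instances; the paper's contribution is a bounded-runtime algorithm built on a strong polynomial shortest-augmenting-path solver.

```python
from itertools import permutations

def k_best_assignments(cost, k):
    """Enumerate all row-to-column assignments of an m x n cost matrix
    (m <= n) and return the k cheapest as (total_cost, columns) pairs.
    Exponential reference baseline for tiny instances only."""
    m, n = len(cost), len(cost[0])
    scored = sorted(
        (sum(cost[i][cols[i]] for i in range(m)), cols)
        for cols in permutations(range(n), m)
    )
    return scored[:k]
```

A practical k-best solver would instead rank solutions Murty-style, repeatedly partitioning the solution space and re-solving constrained 2D assignment subproblems.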


Journal ArticleDOI
TL;DR: It is proved that an indefinite relaxation (when solved exactly) almost always discovers the optimal permutation, while a common convex relaxation almost always fails to discover the optimalpermutation.
Abstract: Graph matching—aligning a pair of graphs to minimize their edge disagreements—has received widespread attention from both theoretical and applied communities over the past several decades, including combinatorics, computer vision, and connectomics. Its attention can be partially attributed to its computational difficulty. Although many heuristics have previously been proposed in the literature to approximately solve graph matching, very few have any theoretical support for their performance. A common technique is to relax the discrete problem to a continuous problem, therefore enabling practitioners to bring gradient-descent-type algorithms to bear. We prove that an indefinite relaxation (when solved exactly) almost always discovers the optimal permutation, while a common convex relaxation almost always fails to discover the optimal permutation. These theoretical results suggest that initializing the indefinite algorithm with the convex optimum might yield improved practical performance. Indeed, experimental results illuminate and corroborate these theoretical findings, demonstrating that excellent results are achieved in both benchmark and real data problems by amalgamating the two approaches.

122 citations
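The edge-disagreement objective underlying graph matching can be made concrete with an exhaustive solver for tiny graphs. The relaxations studied in the paper exist precisely because this discrete search is factorially expensive.

```python
from itertools import permutations

def edge_disagreements(A, B, perm):
    """Entrywise disagreement between adjacency matrix A and B relabeled
    by perm, i.e. sum of |A[i][j] - B[perm[i]][perm[j]]|."""
    n = len(A)
    return sum(abs(A[i][j] - B[perm[i]][perm[j]])
               for i in range(n) for j in range(n))

def exact_graph_match(A, B):
    """Exhaustive graph matching: the permutation minimizing edge
    disagreements. Feasible only for very small n."""
    n = len(A)
    return min(permutations(range(n)),
               key=lambda p: edge_disagreements(A, B, p))
```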


Journal ArticleDOI
TL;DR: The authors compare the assignment-based strategy with two popular rule-based strategies and evaluate dispatching strategies in detail in the city of Berlin and the neighboring region of Brandenburg using the microscopic large-scale MATSim simulator.
Abstract: This study proposes and evaluates an efficient real-time taxi dispatching strategy that solves the linear assignment problem to find a globally optimal taxi-to-request assignment at each decision epoch. The authors compare the assignment-based strategy with two popular rule-based strategies. They evaluate dispatching strategies in detail in the city of Berlin and the neighboring region of Brandenburg using the microscopic large-scale MATSim simulator. The assignment-based strategy produced better results for both drivers (less idle driving) and passengers (less waiting). However, computing the assignments for thousands of taxis in a huge road network turned out to be computationally demanding. Certain adaptations pertaining to the cost matrix calculation were necessary to increase the computational efficiency and assure real-time responsiveness.

119 citations
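The gap between the assignment-based strategy and a rule-based one can be seen on a hypothetical two-taxi, two-request cost matrix (brute force stands in for the polynomial linear assignment solver used at each decision epoch):

```python
from itertools import permutations

def global_assignment_cost(cost):
    """Globally optimal taxi-to-request assignment cost for a square
    cost matrix (brute force; for illustration only)."""
    n = len(cost)
    return min(sum(cost[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def greedy_nearest_cost(cost):
    """Rule-based baseline: each request, in order, grabs the cheapest
    still-free taxi, ignoring the effect on later requests."""
    taken, total = set(), 0
    for j in range(len(cost[0])):
        i = min((i for i in range(len(cost)) if i not in taken),
                key=lambda i: cost[i][j])
        taken.add(i)
        total += cost[i][j]
    return total
```

On a matrix where the nearest taxi for the first request is also the only reasonable taxi for the second, the greedy rule pays dearly while the global assignment does not.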


Journal ArticleDOI
Haibin Zhu1
01 Apr 2016
TL;DR: This paper formalizes the group role assignment problem when faced with the constraint of conflicting agents, verifies the benefits of solving the problem, proves that such a problem is a subproblem of the extended integer linear programming (x-ILP) problem, proposes a practical approach to the solution, and assures performance based on the results of experiments.
Abstract: Role assignment is a critical element in the role-based collaboration process. There are many constraints to be considered when undertaking this task. This paper formalizes the group role assignment problem when faced with the constraint of conflicting agents, verifies the benefits of solving the problem, proves that such a problem is a subproblem of the extended integer linear programming (x-ILP) problem, proposes a practical approach to the solution, and assures performance based on the results of experiments. The contributions of this paper include: 1) formalization of the proposed problem; 2) verification of the benefit achieved by avoiding conflicts in role assignment through simulation; 3) theoretical proof that conflict avoidance is a subproblem of the x-ILP problem, which is NP-complete; and 4) a practical solution based on the IBM ILOG CPLEX optimization package (ILOG) and verification of the scale of problems that can be solved with ILOG. The proposed approach is validated by simulation experiments. Its efficiency is verified by comparison with the previous exhaustive search-based approach.

86 citations


Journal ArticleDOI
TL;DR: Taking into account the conditions of the backhaul in terms of delay and wireless channel quality, joint design and optimization of the caching and user association policy to minimize the average download delay is studied in a cache-enabled heterogeneous network.
Abstract: To alleviate the backhaul burden and reduce user-perceived latency, content caching at base stations has been identified as a key technology. However, the caching strategy design at the wireless edge is challenging, especially when both wired backhaul condition and wireless channel quality are considered in the optimization. In this paper, taking into account the conditions of the backhaul in terms of delay and wireless channel quality, joint design and optimization of the caching and user association policy to minimize the average download delay is studied in a cache-enabled heterogeneous network. We first prove the joint caching and association optimization problem is NP-hard based on a reduction to the facility location problem. Furthermore, in order to reduce the complexity, a distributed algorithm is developed by decomposing the NP-hard problem into an assignment problem solvable by the Hungarian method and two simple linear integer subproblems, with the aid of McCormick envelopes and the Lagrange partial relaxation method. Simulation results reveal near-optimal performance that is up to 22% better in terms of delay compared with schemes in the literature, at a low complexity of O(nm^3/ε^2).

82 citations


Journal ArticleDOI
TL;DR: A solution to the M-M assignment problem is proposed by improving the K-M algorithm with backtracking (KMB); the proposed KMB algorithm is shown to be valid, with a worst-case time complexity of O((∑_i a_i)^3).

81 citations


Journal ArticleDOI
TL;DR: A novel distance-reliability ratio algorithm based on a combinatorial fractional programming approach that reduces travel costs by 80% while maximizing reliability when compared to existing algorithms and a novel algorithm that uses an interval estimation heuristic to approximate worker reliabilities is proposed.
Abstract: We introduce the min-cost max-reliability assignment problem in spatial crowdsourcing. We propose a novel distance-reliability ratio approach to address the problem. We extend the proposed approach for dynamic estimation of worker reliabilities. We present the performance of the algorithms on synthetic and real-world datasets. The proposed approach achieves lower travel costs while maximizing reliability. Spatial crowdsourcing has emerged as a new paradigm for solving problems in the physical world with the help of human workers. A major challenge in spatial crowdsourcing is to assign reliable workers to nearby tasks. The goal of such a task assignment process is to maximize task completion in the face of uncertainty. This process is further complicated when task arrivals are dynamic and worker reliability is unknown. Recent research proposals have tried to address the challenge of dynamic task assignment. Yet the majority of the proposals do not consider the dynamism of tasks and workers. They also make the unrealistic assumptions of known deterministic or probabilistic workers' reliabilities. In this paper, we propose a novel approach for dynamic task assignment in spatial crowdsourcing. The proposed approach combines bi-objective optimization with combinatorial multi-armed bandits. We formulate an online optimization problem to maximize task reliability and minimize travel costs in spatial crowdsourcing. We propose the distance-reliability ratio (DRR) algorithm based on a combinatorial fractional programming approach. The DRR algorithm reduces travel costs by 80% while maximizing reliability when compared to existing algorithms. We extend the DRR algorithm for the scenario when worker reliabilities are unknown. We propose a novel algorithm (DRR-UCB) that uses an interval estimation heuristic to approximate worker reliabilities. Experimental results demonstrate that DRR-UCB achieves high reliability in the face of uncertainty.
The proposed approach is particularly suited for real-life dynamic spatial crowdsourcing scenarios, and it generalizes to similar problems in other areas of expert systems. First, it encompasses online assignment problems whose objective function is a ratio of two linear functions. Second, it covers situations where intelligent, repeated assignment decisions are needed under uncertainty.

79 citations
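The interval-estimation idea behind DRR-UCB can be sketched with a UCB1-style optimistic reliability estimate: empirical success rate plus a confidence bonus that shrinks as a worker is assigned more tasks. The exact form below is an assumption for illustration, not the paper's specification.

```python
import math

def ucb_reliability(successes, attempts, t):
    """Optimistic estimate of an unknown worker's reliability at round t:
    empirical mean plus a UCB1-style confidence half-width. Untried
    workers look maximally reliable, encouraging exploration.
    Illustrative only; DRR-UCB's interval heuristic may differ."""
    if attempts == 0:
        return 1.0
    mean = successes / attempts
    return min(1.0, mean + math.sqrt(2.0 * math.log(t) / attempts))
```

Ranking workers by this optimistic score (rather than the raw mean) is what lets an assignment policy keep exploring under uncertainty while converging to reliable workers.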


Journal ArticleDOI
TL;DR: It is shown that the analytical and multi-parametric DSAP is a quadratic assignment problem (QAP) and thus non-deterministic polynomial-time (NP)-hard, and a greedy genetic algorithm (GA) is developed for handling the computational complexity of the DSAP.
Abstract: This article defines a new dynamic storage assignment problem (DSAP) and develops an integrated mechanism for optimization purposes, based on the ABC classification and mutual affinity of products. A product affinity-based heuristic (PABH)—a technique based on data mining—is developed for calculation of pairwise relationships between products. It is shown that the analytical and multi-parametric DSAP is a quadratic assignment problem (QAP) and thus non-deterministic polynomial-time (NP)-hard. A greedy genetic algorithm (GA) is therefore developed for handling the computational complexity of the DSAP. Performance comparisons between the new approach and the traditional ABC classification method are conducted. The experimental results show that a preferred storage assignment approach is to simultaneously maximize the sum of affinity values and the product of zone indicators and order frequencies based on traditional ABC classification. The experiments on a distribution center of a family care product manufacturer indicate a 7.14% to 104.48% improvement in the average order picking time.

75 citations


Journal ArticleDOI
TL;DR: This paper proposes an iterative rounding algorithm and an optimal branch-and-bound (BnB) algorithm to solve the non-orthogonal dynamic spectrum sharing for device-to-device (D2D) communications in the D2D underlaid cellular network and proves that it achieves at least 1/2 of the optimal weighted sum-rate.
Abstract: In this paper, we study the non-orthogonal dynamic spectrum sharing for device-to-device (D2D) communications in the D2D underlaid cellular network. Our design aims to maximize the weighted system sum-rate under the constraints that: 1) each cellular or active D2D link is assigned one subband and 2) the required minimum rates for cellular and active D2D links are guaranteed. To solve this problem, we first characterize the optimal power allocation solution for a given subband assignment. Based on this result, we formulate the subband assignment problem by using the graph-based approach, in which each link corresponds to a vertex and each subband assignment is represented by a hyper-edge. We then propose an iterative rounding algorithm and an optimal branch-and-bound (BnB) algorithm to solve the resulting graph-based problem. We prove that the iterative rounding algorithm achieves at least 1/2 of the optimal weighted sum-rate. Extensive numerical studies illustrate that the proposed iterative rounding algorithm significantly outperforms the conventional spectrum sharing algorithms and attains almost the same system sum-rate as the optimal BnB algorithm.

Journal ArticleDOI
TL;DR: This paper formally defines the problem of bottleneck-aware social event arrangement (BSEA), and devise two greedy heuristic algorithms, Greedy and Random+Greedy, and a local-search-based optimization technique to solve the BSEA problem.
Abstract: With the popularity of mobile computing and social media, various kinds of online event-based social network (EBSN) platforms, such as Meetup, Plancast and Whova, are gaining in prominence. A fundamental task of managing EBSN platforms is to recommend suitable social events to potential users according to the following three factors: spatial locations of events and users, attribute similarities between events and users, and friend relationships among users. However, none of the existing approaches considers all the aforementioned influential factors when they recommend users to proper events. Furthermore, the existing recommendation strategies neglect the bottleneck cases of the global recommendation. Thus, it is impossible for the existing recommendation solutions to be fair in real-world scenarios. In this paper, we first formally define the problem of bottleneck-aware social event arrangement (BSEA), which is proven to be NP-hard. To solve the BSEA problem approximately, we devise two greedy heuristic algorithms, Greedy and Random+Greedy, and a local-search-based optimization technique. In particular, the Greedy algorithm is more effective but less efficient than the Random+Greedy algorithm in most cases. Moreover, a variant of the BSEA problem, called the Extended BSEA problem, is studied, and the above solutions can be extended to address this variant easily. Finally, we conduct extensive experiments on real and synthetic datasets which verify the efficiency and effectiveness of our proposed algorithms.

Journal ArticleDOI
TL;DR: In this paper, the problem of determining the generalized degrees of freedom (GDoF) region achievable by treating interference as Gaussian noise (TIN) derived by Geng et al. from a combinatorial optimization perspective was reformulated for single-antenna Gaussian interference channels, and a low-complexity GDoF-based distributed link scheduling and power control mechanism was proposed.
Abstract: For single-antenna Gaussian interference channels, we reformulate the problem of determining the generalized degrees of freedom (GDoF) region achievable by treating interference as Gaussian noise (TIN) derived by Geng et al. from a combinatorial optimization perspective. We show that the TIN power control problem can be cast into an assignment problem, such that the globally optimal power allocation variables can be obtained by well-known polynomial time algorithms (e.g., centralized Hungarian method or distributed Auction algorithm). Furthermore, the expression of the TIN-achievable GDoF region (TINA region) can be substantially simplified with the aid of maximum weighted matchings. We also provide conditions under which the TINA region is a convex polytope that relax those by Geng et al. For these new conditions, together with a channel connectivity (i.e., interference topology) condition, we show TIN optimality for a new class of interference networks that is not included, nor includes, the class found by Geng et al. Building on the above insights, we consider the problem of joint link scheduling and power control in wireless networks, which has been widely studied as a basic physical layer mechanism for device-to-device communications. Inspired by the relaxed TIN channel strength condition as well as the assignment-based power allocation, we propose a low-complexity GDoF-based distributed link scheduling and power control mechanism (ITLinQ+) that improves upon the ITLinQ scheme proposed by Naderializadeh and Avestimehr and further improves over the heuristic approach known as FlashLinQ. It is demonstrated by simulation that ITLinQ+ without power control provides significant average network throughput gains over both ITLinQ and FlashLinQ, and yet still maintains the same level of implementation complexity. 
Furthermore, when ITLinQ+ is augmented by power control, it provides an energy efficiency substantially larger than that of ITLinQ and FlashLinQ, at the cost of additional complexity and some signaling overhead.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the integrated berth allocation and quay crane assignment problem in container terminals and proposed a deterministic model by considering the setup time of quay cranes.
Abstract: This paper investigates the integrated berth allocation and quay crane assignment problem in container terminals. A deterministic model is formulated by considering the setup time of quay cranes. However, data uncertainties widely exist and may render the deterministic solution infeasible. To handle the uncertainties, a robust optimization model is established. Furthermore, to control the level of conservativeness, another robust optimization model with price constraints is proposed. A genetic algorithm and an insertion heuristic algorithm are suggested to obtain near-optimal solutions. Computational experiments indicate that the presented models and algorithms are effective in solving the problem.

Proceedings ArticleDOI
03 Apr 2016
TL;DR: This work proposes three algorithms that aim at assigning energy efficient trajectories for a fleet of UAVs and adopts the concept of space discretization to aid with collision avoidance, and presents a more realistic view of the space a UAV occupies.
Abstract: Unmanned Aerial Vehicles (UAVs) are miniature aircraft that have proliferated in many military and civil applications. Their affordability allows tasks to be carried out with not just one UAV but a fleet of UAVs. One of the problems that arise with the use of multiple UAVs is the multi-UAV path planning and assignment problem. We propose three algorithms that aim at assigning energy-efficient trajectories for a fleet of UAVs. Our optimal path planning solution (OPP) is formulated using a Mixed Integer Linear Programming (MILP) model. We also propose two other heuristic solutions that are greedy in nature, namely Greedy Least Cost (GLC) and First Detect First Reserve (FDFR). To aid with collision avoidance, we adopt the concept of space discretization and present a more realistic view of the space a UAV occupies. The comparative study of our proposed solutions reveals insightful trade-offs between energy consumption and complexity.

Journal ArticleDOI
TL;DR: An emerging metaheuristic methodology referred to as Cohort Intelligence (CI) in the socio-inspired optimization domain is applied in order to solve three selected combinatorial optimization problems.

Journal ArticleDOI
TL;DR: This letter investigates the pilot contamination problem in massive MIMO networks from a system level point of view and proposes two game-theoretic approaches that model the pilot assignment problem, and proves that one of these games is a potential game and that the adaptation processes following best and better response dynamics will converge to a Nash equilibrium.
Abstract: In this letter, we investigate the pilot contamination problem in massive MIMO networks from a system level point of view. We propose two game-theoretic approaches that model the pilot assignment problem. We also prove that one of these games is a potential game, and show that the adaptation processes following best and better response dynamics will converge to a Nash equilibrium. We then model the problem as an optimization problem. Finally, we compare the game theoretic results with the optimal and random pilot assignments. Our simulation results show that the game solution significantly outperforms the random assignment and performs as well as the optimal pilot assignment.
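The convergence claim for potential games can be demonstrated on a toy congestion game, a stand-in for the pilot-assignment game (the specific game below is a hypothetical example, not the letter's model): improvement moves strictly decrease a potential function, so best-response dynamics must stop at a pure Nash equilibrium.

```python
import random

def best_response_dynamics(n_players, n_resources, seed=0):
    """Best-response dynamics in a toy congestion (potential) game:
    each player's cost is the number of players sharing its resource.
    Each strict improvement lowers Rosenthal's potential, so the loop
    terminates at a pure Nash equilibrium."""
    rng = random.Random(seed)
    choice = [rng.randrange(n_resources) for _ in range(n_players)]
    improved = True
    while improved:
        improved = False
        for i in range(n_players):
            load = [0] * n_resources
            for j, c in enumerate(choice):
                if j != i:                      # load seen by player i
                    load[c] += 1
            best = min(range(n_resources), key=load.__getitem__)
            if load[best] < load[choice[i]]:    # strictly better response
                choice[i] = best
                improved = True
    return choice
```

With four players and two resources, the only equilibrium loads are balanced, and the dynamics find them regardless of the starting point.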

Journal ArticleDOI
TL;DR: In this paper, the authors proposed and analyzed a distance-constrained traffic assignment problem with trip chains embedded in equilibrium network flows, where a trip chain is defined as a series of trips between two possible charging opportunities.
Abstract: This paper proposes and analyzes a distance-constrained traffic assignment problem with trip chains embedded in equilibrium network flows. The purpose of studying this problem is to develop an appropriate modeling tool for characterizing traffic flow patterns in emerging transportation networks that serve a massive adoption of plug-in electric vehicles. This need arises from the facts that electric vehicles suffer from the “range anxiety” issue caused by the unavailability or insufficiency of public electricity-charging infrastructures and the far-below-expectation battery capacity. It is suggested that if range anxiety makes any impact on travel behaviors, it more likely occurs on the trip chain level rather than the trip level, where a trip chain here is defined as a series of trips between two possible charging opportunities (Tamor et al., 2013). The focus of this paper is thus given to the development of the modeling and solution methods for the proposed traffic assignment problem. In this modeling paradigm, given that trip chains are the basic modeling unit for individual decision making, any traveler’s combined travel route and activity location choices under the distance limit results in a distance-constrained, node-sequenced shortest path problem. A cascading labeling algorithm is developed for this shortest path problem and embedded into a linear approximation framework for equilibrium network solutions. The numerical result derived from an illustrative example clearly shows the mechanism and magnitude of the distance limit and trip chain settings in reshaping network flows from the simple case characterized merely by user equilibrium.
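The distance-constrained shortest path at the core of the model can be sketched with a label-setting search, where a label pairs a node with the distance driven since the last charge. This is a simplified, hypothetical rendering of the cascading-labeling idea, not the paper's algorithm.

```python
import heapq

def range_limited_shortest_path(graph, charge_nodes, src, dst, max_range):
    """Shortest path whose distance between consecutive charging
    opportunities never exceeds max_range. graph: {u: [(v, dist), ...]}.
    Labels are (node, range used since the last charge)."""
    pq = [(0, src, 0)]                 # (total distance, node, used range)
    best = {(src, 0): 0}
    while pq:
        dist, u, used = heapq.heappop(pq)
        if u == dst:
            return dist
        if best.get((u, used), float("inf")) < dist:
            continue                   # stale label
        for v, d in graph.get(u, []):
            if used + d > max_range:   # would run out of range on this leg
                continue
            nu = 0 if v in charge_nodes else used + d
            if dist + d < best.get((v, nu), float("inf")):
                best[(v, nu)] = dist + d
                heapq.heappush(pq, (dist + d, v, nu))
    return None                        # no feasible route
```

Note how a charging node resets the range label to zero: that is exactly what makes the trip chain, rather than the single trip, the feasibility unit.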

Posted Content
TL;DR: A modified Physarum-inspired model for the user equilibrium problem is proposed; by decomposing traffic flux based on origin nodes, the traffic flux from different origin–destination pairs can be distinguished in the proposed model.
Abstract: The user equilibrium principle is central to the traffic assignment problem. Traditional algorithms solve the user equilibrium problem with mathematical programming models. Recently, the Physarum model has shown the ability to address user equilibrium and system-optimal traffic assignment problems. However, the Physarum model is not efficient in real traffic networks with two-way traffic characteristics and multiple origin-destination pairs. In this article, a modified Physarum-inspired model for the user equilibrium problem is proposed. By decomposing traffic flux based on origin nodes, the traffic flux from different origin-destination pairs can be distinguished in the proposed model. The Physarum model obtains the equilibrium traffic flux when no shorter path can be discovered between each origin-destination pair. Finally, numerical examples on the Sioux Falls network demonstrate the rationality and convergence properties of the proposed model.

Journal ArticleDOI
01 Sep 2016
TL;DR: An efficient parallelization of the augmenting path search phase of the Hungarian algorithm is described, which reveals that the GPU-accelerated versions are extremely efficient in solving large problems, as compared to their CPU counterparts.
Abstract: Linear assignment is one of the most fundamental problems in operations research. A creative parallelization of a Hungarian-like algorithm on a GPU cluster. Efficient parallelization of the augmenting path search step. Large problems with 1.6 billion variables can be solved. It is probably the fastest LAP solver using a GPU. In this paper, we describe parallel versions of two different variants (classical and alternating tree) of the Hungarian algorithm for solving the Linear Assignment Problem (LAP). We have chosen Compute Unified Device Architecture (CUDA) enabled NVIDIA Graphics Processing Units (GPUs) as the parallel programming architecture because of their ability to perform intense computations on arrays and matrices. The main contribution of this paper is an efficient parallelization of the augmenting path search phase of the Hungarian algorithm. Computational experiments on problems with up to 25 million variables reveal that the GPU-accelerated versions are extremely efficient in solving large problems, as compared to their CPU counterparts. Tremendous parallel speedups are achieved for problems with up to 400 million variables, which are solved within 13 seconds on average. We also tested multi-GPU versions of the two variants on up to 16 GPUs, which show decent scaling behavior for problems with up to 1.6 billion variables and dense cost matrix structure.
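The augmenting-path search that the paper parallelizes can be sketched serially. Below is Kuhn's algorithm for unweighted maximum bipartite matching, which has the same alternating-path search structure the Hungarian algorithm runs on a reduced-cost graph:

```python
def max_bipartite_matching(adj, n_left, n_right):
    """Serial augmenting-path search (Kuhn's algorithm) for maximum
    bipartite matching. adj[u] lists the right-side vertices reachable
    from left vertex u."""
    match_r = [-1] * n_right           # match_r[v]: left partner of v

    def try_augment(u, seen):
        # search for an alternating path that frees a right vertex for u
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_r[v] == -1 or try_augment(match_r[v], seen):
                match_r[v] = u
                return True
        return False

    matching = 0
    for u in range(n_left):
        if try_augment(u, set()):
            matching += 1
    return matching
```

The GPU versions in the paper accelerate exactly this phase by exploring many candidate alternating paths in parallel.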

Proceedings ArticleDOI
22 May 2016
TL;DR: A dynamic two-stage design for downlink OFDMA resource allocation and BBU-RRH assignment in C-RAN is proposed, which achieves not only a high satisfaction rate for mobile users, but also minimal power consumption and significant BBUs savings, compared to state-of-the-art schemes.
Abstract: Cloud-Radio Access Network (C-RAN) is a new emerging technology that holds alluring promises for Mobile network operators regarding capital and operation cost savings. However, many challenges still remain before full commercial deployment of C-RAN solutions. Dynamic resource allocation algorithms are needed to cope with significantly fluctuating traffic loads. Those algorithms must target not only a better quality of service delivery for users, but also less power consumption and better interference management, with the possibility to turn off RRHs that are not transmitting. To this end, we propose in this paper a dynamic two-stage design for downlink OFDMA resource allocation and BBU-RRH assignment in C-RAN. Specifically, we first model the resource and power allocation problem in a mixed integer linear problem for real-time fluctuating traffic of mobile users. Then, we propose a Knapsack formulation to model the BBU-RRH assignment problem. Simulation results show that our proposal achieves not only a high satisfaction rate for mobile users, but also minimal power consumption and significant BBUs savings, compared to state-of-the-art schemes.

Journal ArticleDOI
TL;DR: A novel nonlinear integer programming formulation is presented, its mathematical properties and paradoxical phenomena are analyzed, and a generalized Benders decomposition framework is suggested for its solutions.
Abstract: This article defines, formulates, and solves a new equilibrium traffic assignment problem with side constraints: the traffic assignment problem with relays. The relay requirement arises from the driving situation that the onboard fuel capacity of vehicles is lower than what is needed for accomplishing their trips and the number and distribution of refueling infrastructures over the network are under the expected level. We proposed this problem as a modeling platform for evaluating congested regional transportation networks that serve plug-in electric vehicles in addition to internal combustion engine vehicles, where battery-recharging or battery-swapping stations are scarce. Specifically, we presented a novel nonlinear integer programming formulation, analyzed its mathematical properties and paradoxical phenomena, and suggested a generalized Benders decomposition framework for its solutions. In the algorithmic framework, a gradient projection algorithm and a labeling algorithm are adopted for solving, respectively, the primal problem and the relaxed master problem (the shortest path problem with relays). The modeling and solution methods are implemented for solving a set of example network problems. The numerical analysis results obtained from the implementation clearly show how the driving range limit and relay station location reshape equilibrium network flows.

Journal ArticleDOI
TL;DR: The concept of priority is introduced to the SWDCP-SCmules scheme, and the simulated annealing for priority assignment (SA-PA) algorithm is proposed to guide the priority assignment, in order to quantify the value of data and find a well-performing selection strategy.
Abstract: To enable the intelligent management of a Smart City and improve overall social welfare, it is desirable for the status of infrastructures, detected and reported by intelligent devices embedded in them, to be forwarded to data centers. Using “SCmules” such as taxis to opportunistically communicate with intelligent devices and collect data from the sparse networks they form while moving is an economical and effective way to achieve this goal. In this paper, the social welfare data collection paradigm (SWDCP-SCmules) framework is proposed to collect data generated by intelligent devices and forward them to data centers; in this framework, “SCmules” are data transmitters that pick up data from nearby intelligent devices and then store-carry-forward them to nearby data centers via short-range wireless connections while moving. Because of storage limitations, “SCmules” need to weigh the value of data and discard some less valuable data when necessary. To quantify the value of data and find a well-performing selection strategy, the concept of priority is introduced to the SWDCP-SCmules scheme, and the simulated annealing for priority assignment (SA-PA) algorithm is proposed to guide the priority assignment. SA-PA is a universal algorithm that can improve the performance of the SWDCP-SCmules scheme by finding better priority assignments with respect to various optimization targets, such as maximizing the collection rate or minimizing the redundancy rate; the priority assignment problem is converted into an optimization problem, and simulated annealing is used to optimize the assignment. From the perspective of machine learning, this optimization process is equivalent to automatically learning social-aware patterns from past GPS trajectory data. Experiments based on real GPS trajectory data of taxis in Beijing show the effectiveness and efficiency of the SWDCP-SCmules scheme and the SA-PA algorithm.
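A generic simulated-annealing skeleton in the spirit of SA-PA can be sketched as follows; the permutation encoding of priorities and the toy objective are hypothetical, since the abstract does not fix them:

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t0=1.0, alpha=0.995,
                        steps=2000, seed=42):
    """Generic SA loop: 'energy' scores a candidate priority assignment
    (lower is better), 'neighbor' perturbs one. Worse moves are accepted
    with probability exp((e - e_new) / t), and t cools geometrically."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best, best_e = x, e
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        ey = energy(y)
        if ey < e or rng.random() < math.exp((e - ey) / t):
            x, e = y, ey
            if e < best_e:
                best, best_e = x, e
        t *= alpha
    return best, best_e

# Toy instance: order 6 data items so that more valuable items get
# higher priority (position 0 = highest priority, zero penalty).
values = [3, 1, 4, 1, 5, 9]
def energy(perm):
    return sum(values[p] * i for i, p in enumerate(perm))
def neighbor(perm, rng):
    i, j = rng.sample(range(len(perm)), 2)
    q = list(perm)
    q[i], q[j] = q[j], q[i]
    return q

best, best_e = simulated_annealing(energy, neighbor, list(range(6)))
```

The paper's actual energy function is derived from targets such as collection rate or redundancy rate evaluated over taxi GPS traces; the swap neighborhood above is only one reasonable choice.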

Proceedings ArticleDOI
09 Apr 2016
TL;DR: The original Kuhn-Munkres algorithm is improved by utilizing the sparsity structure of the cost matrix, and two algorithms are proposed, sparsity-based KM (sKM) and parallel KM (pKM); pKM provides a parallel way to solve the assignment problem with considerable accuracy loss.
Abstract: The Kuhn-Munkres algorithm is one of the most popular polynomial-time algorithms for solving the classical assignment problem. The assignment problem is to find an assignment of the jobs to the workers that has minimum cost, given a cost matrix X ∈ R^(m×n), where the element in the i-th row and j-th column represents the cost of assigning the i-th job to the j-th worker. The time complexity of the Kuhn-Munkres algorithm is O(mn^2), which brings a prohibitive computational burden on large-scale matrices, limiting the further usage of these methods in real applications. Motivated by this observation, a series of acceleration skills and parallel techniques have been studied for special structures. In this paper, we improve the original Kuhn-Munkres algorithm by utilizing the sparsity structure of the cost matrix, and propose two algorithms, sparsity-based KM (sKM) and parallel KM (pKM). Furthermore, numerical experiments are given to show the efficiency of our algorithms. We empirically evaluate the proposed algorithms sKM and pKM on randomly generated large-scale datasets. Results show that sKM greatly improves the computational performance, while pKM provides a parallel way to solve the assignment problem with considerable accuracy loss.
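For concreteness, the assignment problem defined in this abstract can be stated as a tiny brute-force search; this is only the problem definition, not the Kuhn-Munkres algorithm itself, whose O(mn^2) machinery is what sKM and pKM accelerate:

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Brute-force solver for the classical assignment problem:
    cost[i][j] is the cost of assigning job i to worker j.
    Returns (best_cost, assignment), where assignment[i] is the
    worker chosen for job i. O(n!) time, so illustration only;
    Kuhn-Munkres solves the same problem in polynomial time."""
    n = len(cost)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return best_cost, best_perm

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
print(min_cost_assignment(cost))  # → (5, (1, 0, 2))
```

A sparse cost matrix, as exploited by sKM, is one in which most entries are effectively forbidden (infinite), shrinking the search the algorithm must do.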

Journal ArticleDOI
TL;DR: This paper introduces a discrete particle swarm optimization algorithm for solving high-order graph matching problems, which incorporates several redefined operations, a problem-specific initialization method based on heuristic information, and a problem-specific local search procedure.

Book
09 Jan 2016
TL;DR: This unique text/reference presents a thorough introduction to the field of structural pattern recognition, with a particular focus on graph edit distance (GED) and a detailed review of a diverse selection of novel methods related to GED.
Abstract: This unique text/reference presents a thorough introduction to the field of structural pattern recognition, with a particular focus on graph edit distance (GED). The book also provides a detailed review of a diverse selection of novel methods related to GED, and concludes by suggesting possible avenues for future research. Topics and features: formally introduces the concept of GED, and highlights the basic properties of this graph matching paradigm; describes a reformulation of GED to a quadratic assignment problem; illustrates how the quadratic assignment problem of GED can be reduced to a linear sum assignment problem; reviews strategies for reducing both the overestimation of the true edit distance and the matching time in the approximation framework; examines the improvement demonstrated by the described algorithmic framework with respect to the distance accuracy and the matching time; includes appendices listing the datasets employed for the experimental evaluations discussed in the book.
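The reduction of GED to a linear sum assignment problem highlighted above is commonly done with an (n+m) × (n+m) cost matrix pairing every node of one graph with every node of the other, plus deletion and insertion rows. The sketch below uses node labels only and unit edit costs, a simplification of the framework described in the book:

```python
from itertools import permutations

INF = float("inf")

def lsap_cost_matrix(labels1, labels2, c_sub=1, c_del=1, c_ins=1):
    """(n+m) x (n+m) cost matrix reducing approximate GED to a linear
    sum assignment problem. Top-left: substitutions; top-right: node
    deletions (diagonal only); bottom-left: node insertions (diagonal
    only); bottom-right: zeros. Edge costs are omitted for brevity."""
    n, m = len(labels1), len(labels2)
    size = n + m
    C = [[0.0] * size for _ in range(size)]
    for i in range(n):
        for j in range(m):
            C[i][j] = 0 if labels1[i] == labels2[j] else c_sub
        for j in range(m, size):          # deletion block
            C[i][j] = c_del if j - m == i else INF
    for i in range(n, size):
        for j in range(m):                # insertion block
            C[i][j] = c_ins if i - n == j else INF
    return C

def solve_lsap(C):
    """Exhaustive LSAP solver (fine for the tiny example below);
    in practice a polynomial-time Hungarian-type solver is used."""
    size = len(C)
    return min(sum(C[i][p[i]] for i in range(size))
               for p in permutations(range(size)))

# Two tiny labelled graphs with node label multisets {a, b} and {a, c, c}.
C = lsap_cost_matrix(["a", "b"], ["a", "c", "c"])
print(solve_lsap(C))  # → 2.0 (substitute b→c and insert one c)
```

The optimal LSAP value is an upper bound on the true edit distance, which is exactly the overestimation the book's later chapters work to reduce.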

Journal ArticleDOI
TL;DR: Two mixed integer programming models are proposed: the first model solves the integrated flight scheduling and fleet assignment problem and the second model further considers the itinerary price elasticity, which can achieve a significant profit improvement under a reasonable computation time.

Journal ArticleDOI
TL;DR: An algorithm is proposed to detect one-mode community structures in bipartite networks and to deduce which one-mode community structures are weighted; it also successfully finds overlapping vertices between one-mode communities.
Abstract: In this paper, an algorithm is proposed to detect one-mode community structures in bipartite networks and to deduce which one-mode community structures are weighted. After analyzing the topological properties of bipartite networks, the bipartite clustering triangular is introduced. First, bipartite networks are projected into two weighted one-mode networks using the bipartite clustering triangular. Then all the maximal sub-graphs are extracted from the two weighted one-mode networks and merged together using a weighted clustering threshold. In addition, the proposed algorithm successfully finds overlapping vertices between one-mode communities. Experimental results on several real-world network datasets show that the performance of the proposed algorithm is satisfactory.

Journal ArticleDOI
TL;DR: A hybrid particle swarm optimization (HPSO), combining an improved PSO with an event-based heuristic, is proposed to deal with two specific seaside operations planning problems, the dynamic and discrete BAP (DDBAP) and the dynamic QCAP (DQCAP).
Abstract: The berth allocation problem (BAP) and the quay crane assignment problem (QCAP) are two essential seaside operations planning problems faced by operational planners of a container terminal. The two planning problems have often been solved by genetic algorithms (GAs), separately or simultaneously. However, almost all these GAs support only time-invariant QC assignment, in which the number of QCs assigned to a ship is unchanged. In this study a hybrid particle swarm optimization (HPSO), combining an improved PSO with an event-based heuristic, is proposed to deal with two specific seaside operations planning problems, the dynamic and discrete BAP (DDBAP) and the dynamic QCAP (DQCAP). In the HPSO, the improved PSO first generates a DDBAP solution and a DQCAP solution with time-invariant QC assignment. Then, the event-based heuristic transforms the DQCAP solution into one with variable-in-time QC assignment, in which the number of QCs assigned to a ship can be further changed. To investigate its effectiveness, the HPSO is compared to a GA (namely GA1) with time-invariant QC assignment and a hybrid GA (HGA) with variable-in-time QC assignment. Experimental results show that the HPSO outperforms the HGA and GA1 in terms of fitness value (FV).
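For reference, a plain continuous PSO skeleton looks as follows; the paper's HPSO is a discrete, problem-specific variant coupled with an event-based heuristic, so this generic template with a sphere objective is only an assumed illustration of the underlying metaheuristic:

```python
import random

def pso_minimize(f, dim, n=20, iters=300, w=0.72, c1=1.49, c2=1.49,
                 lo=-5.0, hi=5.0, seed=1):
    """Canonical PSO: each particle's velocity blends inertia (w),
    attraction to its personal best (c1), and attraction to the
    swarm's global best (c2). Returns (best_position, best_value)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                    # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]             # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest

sphere = lambda x: sum(v * v for v in x)
pos, val = pso_minimize(sphere, dim=2)
```

In a BAP/QCAP setting the position would instead encode berthing times and QC counts, and the update rules must be redefined for that discrete space, which is exactly what the improved PSO in the paper does.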