
Showing papers on "Greedy algorithm published in 2023"


Journal ArticleDOI
TL;DR: In this paper, a mixed-integer linear programming model and a critical path-based accelerated evaluation method are proposed to minimize makespan, total flow time, and total energy consumption simultaneously.
Abstract: The flowshop sequence-dependent group scheduling problem (FSDGSP) with the production efficiency measures has been extensively studied due to its wide industrial applications. However, energy efficiency indicators are often ignored in the literature. This article considers the FSDGSP to minimize makespan, total flow time, and total energy consumption, simultaneously. After the problem-specific knowledge is extracted, a mixed-integer linear programming model and a critical path-based accelerated evaluation method are proposed. Since the FSDGSP includes multiple coupled subproblems, a greedy cooperative co-evolutionary algorithm (GCCEA) is designed to explore the solution space in depth. Meanwhile, a random mutation operator and a greedy energy-saving strategy are employed to adjust the processing speeds of machines to obtain a potential nondominated solution. A large number of experimental results show that the proposed algorithm significantly outperforms the existing classic multiobjective optimization algorithms, which is due to the usage of problem-related knowledge.

24 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the community partition problem under the independent cascade (IC) model in social networks and formulate it as a combinatorial optimization problem that aims at partitioning a given social network into $m$ disjoint communities.
Abstract: Community partition is an important problem in many areas, such as biology networks and social networks. The objective of this problem is to analyze the relationships among data via the network topology. In this article, we consider the community partition problem under the independent cascade (IC) model in social networks. We formulate the problem as a combinatorial optimization problem that aims at partitioning a given social network into $m$ disjoint communities. The objective is to maximize the sum of influence propagation of a social network through maximizing it within each community. The existing work shows that the influence maximization for community partition problem (IMCPP) is NP-hard. We first prove that the objective function of IMCPP under the IC model is neither submodular nor supermodular. Then, both a supermodular upper bound and a submodular lower bound are constructed and proved so that the sandwich framework can be applied. A continuous greedy algorithm and a discrete implementation are devised for the upper and lower bound problems. The algorithms for both problems achieve a $1-1/e$ approximation ratio. We also present a simple greedy algorithm to solve the original objective function and apply the sandwich approximation framework to it to guarantee a data-dependent approximation factor. Finally, our algorithms are evaluated on three real datasets, which clearly verifies the effectiveness of our method in the community partition problem, as well as the advantage of our method against the other methods.

10 citations
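The $1-1/e$ guarantee mentioned above rests on the standard greedy rule for monotone submodular maximization: repeatedly add the element with the largest marginal gain. A minimal sketch on a max-coverage instance (illustrative only; the paper's continuous greedy and sandwich machinery are more involved):

```python
def greedy_max_coverage(sets, k):
    """Standard greedy for monotone submodular maximization: at each
    step take the set with the largest marginal coverage gain.
    Achieves the classic 1 - 1/e approximation for max coverage."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda name: len(sets[name] - covered))
        if not sets[best] - covered:
            break  # no remaining marginal gain
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered
```

For example, `greedy_max_coverage({"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}}, 2)` first takes `"c"` (four new elements), then `"a"`.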


Journal ArticleDOI
TL;DR: In this paper, a mixed-integer linear programming model of DANWFSP with total flowtime criterion is proposed, and a population-based iterated greedy algorithm (PBIGA) is presented to address the problem.
Abstract: This article investigates a distributed assembly no-wait flow-shop scheduling problem (DANWFSP), which has important applications in manufacturing systems. The objective is to minimize the total flowtime. A mixed-integer linear programming model of DANWFSP with the total flowtime criterion is proposed. A population-based iterated greedy algorithm (PBIGA) is presented to address the problem. A new constructive heuristic is presented to generate an initial population with high quality. For DANWFSP, an accelerated NR3 algorithm is proposed to assign jobs to the factories, which improves the efficiency of the algorithm and saves CPU time. To enhance the effectiveness of the PBIGA, the local search method and the destruction-construction mechanisms are designed for the product sequence and job sequence, respectively. A selection mechanism is presented to determine which individuals execute the local search method. An acceptance criterion is proposed to determine whether the offspring are adopted by the population. Finally, the PBIGA and seven state-of-the-art algorithms are tested on 810 large-scale benchmark instances. The experimental results show that the presented PBIGA is an effective algorithm to address the problem and performs better than the recent state-of-the-art algorithms compared in this article.

4 citations


Journal ArticleDOI
TL;DR: In this paper, a hybrid greedy hill climbing algorithm (HGHC) was proposed to ensure both effectiveness and near-optimal results when generating a small amount of test data.
Abstract: In combinatorial testing, the construction of covering arrays is the key challenge because of the multiple aspects that influence it. A wide range of combinatorial problems can be solved using metaheuristic and greedy techniques. Combining a greedy technique with a metaheuristic search technique such as hill climbing (HC) can produce feasible results for combinatorial tests. Metaheuristic methods are used to deal with tuples that may be left over after redundancy elimination by greedy strategies, so that the result is assured to be near-optimal. As a result, the use of both greedy and HC algorithms in a single test generation system is a good candidate if constructed correctly. This study presents a hybrid greedy hill climbing algorithm (HGHC) that ensures both effectiveness and near-optimal results when generating a small amount of test data. To determine its effectiveness, HGHC is compared with the most widely used techniques in terms of test size. In contrast to recent practices for the production of covering arrays (CAs) and mixed covering arrays (MCAs), this hybrid strategy is superior in that it provides strong results while reducing the test size and limiting the loss of unique pairings in CA/MCA generation.

3 citations
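For intuition, here is a toy AETG-style greedy sketch for pairwise covering arrays: repeatedly pick the candidate test covering the most still-uncovered value pairs. This is a hypothetical illustration of the greedy half of such hybrids (brute-force candidate pool, so only viable for tiny models), not the HGHC itself:

```python
from itertools import product, combinations

def pairwise_pairs(test):
    # Set of (param_i, val_i, param_j, val_j) pairs covered by one test.
    return {(i, test[i], j, test[j]) for i, j in combinations(range(len(test)), 2)}

def greedy_covering_array(domains):
    """Greedily build a pairwise covering array: repeatedly add the
    candidate test that covers the most uncovered value pairs."""
    candidates = list(product(*domains))
    uncovered = set()
    for t in candidates:
        uncovered |= pairwise_pairs(t)
    tests = []
    while uncovered:
        best = max(candidates, key=lambda t: len(pairwise_pairs(t) & uncovered))
        tests.append(best)
        uncovered -= pairwise_pairs(best)
    return tests
```

On three boolean parameters this greedy reaches a covering array of four tests, which matches the known optimum for that model.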


Journal ArticleDOI
TL;DR: In this article, a novel method to improve biogeography-based optimization (BBO) for solving the traveling salesman problem (TSP) is developed, which comprises a greedy randomized adaptive search procedure, the 2-opt algorithm, and G2BBO.
Abstract: We develop a novel method to improve biogeography-based optimization (BBO) for solving the traveling salesman problem (TSP). The improved method comprises a greedy randomized adaptive search procedure, the 2-opt algorithm, and G2BBO. The G2BBO formulation is derived and the process flowchart is shown in this article. For solving TSP, G2BBO effectively avoids the local minimum problem and accelerates convergence by optimizing the initial values. To demonstrate, we adopt three public datasets (eil51, eil76, and kroa100) from TSPLIB and compare G2BBO with various well-known algorithms. G2BBO and the other algorithms all perform close to the optimal solutions on eil51 and eil76, where simple TSP coordinates are considered. In the case of kroa100, with more complicated coordinates, G2BBO shows greater performance than the other methods.

3 citations
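The 2-opt component used by G2BBO can be sketched independently: reverse a tour segment whenever doing so shortens the tour, until a local optimum is reached. A minimal first-improvement version (illustrative; function and parameter names are ours):

```python
def tour_length(tour, dist):
    # Total length of the closed tour under distance matrix `dist`.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse the segment between two edges whenever the
    reversal shortens the tour; stop at a 2-opt local optimum."""
    tour, improved = tour[:], True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour) + 1):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, dist) < tour_length(tour, dist) - 1e-12:
                    tour, improved = cand, True
    return tour
```

On a unit square, starting from a crossing tour, 2-opt uncrosses the edges and reaches the perimeter tour of length 4.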


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the UAV trajectory planning strategies for localizing a target mobile device in emergency situations by measuring the strength of the signal (e.g., LTE wireless communication signal) from the target device.
Abstract: This study investigates unmanned aerial vehicle (UAV) trajectory planning strategies for localizing a target mobile device in emergency situations. The global navigation satellite system (GNSS)-based accurate position information of a target mobile device in an emergency may not be always available to first responders. For example, 1) GNSS positioning accuracy may be degraded in harsh signal environments and 2) in countries where emergency positioning service is not mandatory, some mobile devices may not report their locations. Under the cases mentioned above, one way to find the target mobile device is to use UAVs. Dispatched UAVs may search the target directly on the emergency site by measuring the strength of the signal (e.g., LTE wireless communication signal) from the target mobile device. To accurately localize the target mobile device in the shortest time possible, UAVs should fly in the most efficient way possible. The two popular trajectory optimization strategies of UAVs are greedy and predictive approaches. However, the research on localization performances of the two approaches has been evaluated only under favorable settings (i.e., under good UAV geometries and small received signal strength (RSS) errors); more realistic scenarios still remain unexplored. In this study, we compare the localization performance of the greedy and predictive approaches under realistic RSS errors (i.e., up to 6 dB according to the ITU-R channel model).

3 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a centralized heuristic based on tree search guided by a graph convolutional network (GCN) and a 1-step rollout to solve the maximum weighted independent set (MWIS) problem.
Abstract: Efficient scheduling of transmissions is a key problem in wireless networks. The main challenge stems from the fact that optimal link scheduling involves solving a maximum weighted independent set (MWIS) problem, which is known to be NP-hard. In practical schedulers, centralized and distributed greedy heuristics are commonly used to approximately solve the MWIS problem. However, most of these greedy heuristics ignore important topological information of the wireless network. To overcome this limitation, we propose fast heuristics based on graph convolutional networks (GCNs) that can be implemented in centralized and distributed manners. Our centralized heuristic is based on tree search guided by a GCN and 1-step rollout. In our distributed MWIS solver, a GCN generates topology-aware node embeddings that are combined with per-link utilities before invoking a distributed greedy solver. Moreover, a novel reinforcement learning scheme is developed to train the GCN in a non-differentiable pipeline. Test results on medium-sized wireless networks show that our centralized heuristic can reach a near-optimal solution quickly, and our distributed heuristic based on a shallow GCN can reduce by nearly half the suboptimality gap of the distributed greedy solver with minimal increase in complexity. The proposed schedulers also exhibit good generalizability across graph and weight distributions.

3 citations
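The centralized greedy baseline that such GCN-guided methods aim to beat is easy to state: pick the heaviest remaining node, then delete it and its neighbors. A minimal sketch (not the paper's tree-search or distributed solver):

```python
def greedy_mwis(weights, adj):
    """Greedy MWIS baseline: take the highest-weight remaining node,
    add it to the independent set, and delete it and its neighbors.
    Ignores global topology, which is the gap GCN guidance targets."""
    remaining = set(weights)
    chosen = set()
    while remaining:
        v = max(remaining, key=lambda u: weights[u])
        chosen.add(v)
        remaining -= adj[v] | {v}
    return chosen
```

On the path a-b-c with weights 2, 3, 2 the greedy takes `b` (weight 3) and is stuck, while the optimum `{a, c}` has weight 4, a standard example of its suboptimality gap.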


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a data-driven approach to sub-optimally allocate charging stations for electric vehicles (EVs) in an early-stage setting, where the authors investigate the following problem: for a city with a limited budget for public EV charging infrastructure construction, where should the charging stations be deployed to promote the transition from traditional cars to EVs?
Abstract: This paper presents a novel and practical data-driven approach to sub-optimally allocate charging stations for electric vehicles (EVs) in an early-stage setting. Specifically, we investigate the following problem: For a city with a limited budget for public EV charging infrastructure construction, where should the charging stations be deployed in order to promote the transition from traditional cars to EVs? We develop a $\delta$ -nearest model and a $K$ -nearest model that can capture people's satisfaction towards a certain design and formulate the early-stage EV charging station placement problem as a monotone submodular maximization problem utilizing fine-grained population, trip, transportation network and POI data. A greedy-based algorithm is proposed to solve the problem efficiently with a provable approximation ratio. A case study of Haikou is provided to demonstrate the effectiveness of our approach.

3 citations


Journal ArticleDOI
TL;DR: In this paper, a polynomial algorithm is proposed for computing a minimum plain-text representation of k-mer sets, as well as an efficient near-minimum greedy heuristic.
Abstract: We propose a polynomial algorithm computing a minimum plain-text representation of k-mer sets, as well as an efficient near-minimum greedy heuristic. When compressing read sets of large model organisms or bacterial pangenomes, with only a minor runtime increase, we shrink the representation by up to 59% over unitigs and 26% over previous work. Additionally, the number of strings is decreased by up to 97% over unitigs and 90% over previous work. Finally, a small representation has advantages in downstream applications, as it speeds up SSHash-Lite queries by up to 4.26× over unitigs and 2.10× over previous work.

2 citations


Journal ArticleDOI
TL;DR: In this article, an improved greedy strategy with a depth-first search mechanism is proposed to address the coverage problem of underwater targets: using optimal selection criteria for the deployment location at each step and a flexible search process, the global problem is converted into multiple local optimization problems to improve solution efficiency.
Abstract: Owing to the dynamic characteristics of the underwater environment, the set of underwater targets needing to be covered and monitored by sensor nodes often exhibits a variety of irregular distributions, placing a certain burden on node deployment. In this letter, an improved greedy strategy with a depth-first search mechanism is proposed to address the coverage problem of underwater targets. With optimal selection criteria for the deployment location at each step and a flexible search process, the global solution problem is converted into multiple local optimization problems to improve solution efficiency. Thus, the performance of coverage and connectivity can be effectively guaranteed in underwater scenarios with irregular target distributions. The simulation results show that the proposed algorithm is superior in solving the target coverage problem in underwater acoustic sensor networks (UASNs), in terms of node deployment cost and coverage efficiency.

2 citations


Journal ArticleDOI
TL;DR: In this paper, a novel problem that considers the PM problem from the ratio of gain and cost, known as output-to-input ratio maximization (OIRM), is presented.
Abstract: In the last few decades, profit maximization (PM), which mainly considers maximizing net profit, i.e., the difference between gain and cost, has been a prominent issue for online social networks (OSNs). However, the output-to-input ratio, which is an important metric in economics, is also worth studying for OSNs. In this article, we present a novel problem that considers the PM problem from the ratio of gain and cost, known as output-to-input ratio maximization (OIRM). Unfortunately, it is neither submodular nor supermodular. The hill-climbing greedy algorithm for solving this problem is a $1-e^{-(1-c_{g})}$ approximation algorithm, where $c_{g}$ is the curvature of the monotone submodular set function $g$. To speed up the hill-climbing greedy algorithm, we propose the threshold decrease algorithm and prove that its approximation ratio is $1-e^{-(1-c_{g})^{2}}-\epsilon$. In addition, based on the relationship between classical net PM and OIRM, the algorithms for solving PM can also solve OIRM. Finally, we evaluate the performance of our algorithms using massive experiments on real datasets. To the best of our knowledge, this is the first study of OIRM in viral marketing.

Journal ArticleDOI
TL;DR: In this paper, an Iterated Greedy algorithm is proposed for the problem of finding the minimum dominating set in a graph, where the dominating set is defined as a set of vertices such that every vertex outside the set is adjacent to a vertex in the set.
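The classical greedy heuristic for minimum dominating set, the natural baseline an iterated greedy algorithm perturbs and rebuilds, picks at each step the vertex that newly dominates the most vertices. A minimal sketch, not the paper's algorithm:

```python
def greedy_dominating_set(adj):
    """Set-cover-style greedy: repeatedly add the vertex that dominates
    (itself plus its neighbors) the most not-yet-dominated vertices."""
    undominated = set(adj)
    dom = set()
    while undominated:
        v = max(adj, key=lambda u: len(({u} | adj[u]) & undominated))
        dom.add(v)
        undominated -= {v} | adj[v]
    return dom
```

On a star the greedy returns just the center; on a 5-vertex path it returns the two interior vertices `{1, 3}`.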

Proceedings ArticleDOI
27 Feb 2023
TL;DR: In this article, the authors explore a variant of the original influence maximization problem where they wish to reach out to and maximize the probability of infection of a small subset of bounded capacity K. They show that this problem does not exhibit the same submodular guarantees as the original IM problem, so they resort to the theory of gamma-weakly submodular functions.
Abstract: Influence maximization (IM) refers to the problem of finding a subset of nodes in a network through which we could maximize our reach to other nodes in the network. This set is often called the "seed set", and its constituent nodes maximize the social diffusion process. IM has previously been studied in various settings, including under a time deadline, subject to constraints such as that of budget or coverage, and even subject to measures other than the centrality of nodes. The solution approach has generally been to prove that the objective function is submodular, or has a submodular proxy, and thus has a close greedy approximation. In this paper, we explore a variant of the IM problem where we wish to reach out to and maximize the probability of infection of a small subset of bounded capacity K. We show that this problem does not exhibit the same submodular guarantees as the original IM problem, for which we resort to the theory of gamma-weakly submodular functions. Subsequently, we develop a greedy algorithm that maximizes our objective despite the lack of submodularity. We also develop a suitable learning model that out-competes baselines on the task of predicting the top-K infected nodes, given a seed set as input.

Journal ArticleDOI
TL;DR: In this article, a federated consensus-based algorithm (FCB) is proposed to increase computational parallelism and accelerate the convergence of sparse recovery, and the conditions of exact support recovery and an upper bound on the signal recovery error are derived for FCB in the noisy case.
Abstract: In this paper, we develop a new algorithm, named federated consensus-based algorithm (FCB), for sparse recovery, and show its performance in terms of both support recovery and signal recovery. Specifically, FCB is designed on the basis of the federated computational architecture, to increase the computational parallelism and accelerate the convergence. The algorithm design is realized by integrating accelerated projection-based consensus (APC) with greedy techniques. Then, the conditions of exact support recovery and an upper bound of signal recovery error are derived for FCB in the noisy case. From the explicit expression of the signal recovery error bound, it is confirmed that FCB can stably recover sparse signals under appropriate conditions using the coherence statistic of the measurement matrix and the minimum magnitude of nonzero elements of the signal. Experimental results illustrate the performance of FCB, validating our derived conditions of exact support recovery and upper bound of signal recovery error. In summary, FCB utilizes the federated computational architecture, enabling high parallelism and fast convergence, and uses greedy techniques to guarantee stable recovery performance.

Journal ArticleDOI
TL;DR: In this paper, a multi-objective formulation for maximizing the lifetime with target coverage called MO-MMTC is proposed, which accounts for the energy fluctuation among mobile sensors after each movement.
Abstract: Target coverage and lifetime maximization problems are major challenges for mobile wireless sensor networks (MWSN). In this paper, we propose a Multi-Objective formulation for MaxiMizing lifetime with Target Coverage called MO-MMTC, which accounts for the energy fluctuation among mobile sensors after each movement. We prove the formulation to be NP-hard and propose the Enhanced Non-dominated Sorting Genetic Algorithm II (ENSGA-II), a multi-population genetic algorithm, to solve this problem. Experiments are performed to compare ENSGA-II with TV-Greedy, an existing state-of-the-art heuristic for MMTC. Our results show that the proposed algorithm significantly improves many evaluation metrics compared to baseline methods.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a multi-user scheduling algorithm for 5G IoT systems based on reinforcement learning, which does not need to try different user combinations to maximize the throughput and does not repeatedly calculate each user's achievable data rate, so the computational load is reduced.
Abstract: MU-MIMO technology is adopted in 5G to support the increasing number of user terminals accessing 5G IoT systems. The algorithms adopted in the existing literature for user scheduling in MIMO systems are essentially greedy algorithms, which need to repeatedly calculate the achievable data rate (or its low-complexity characterization) of each user during user selection. Due to the large number of IoT terminals, the existing methods generate a huge computational load. In this paper, we propose a multiuser scheduling algorithm for 5G IoT systems based on reinforcement learning. The user terminal's action-value, which denotes the expectation of the user terminal's achievable data rate, is obtained through Q-learning. We define the Q-value as the upper bound of the confidence interval of the user terminal's action-value, and the proposed algorithm selects users on the basis of the Q-value. The proposed algorithm does not need to try different user combinations to maximize the throughput, and it is unnecessary to repeatedly calculate each user's achievable data rate, so the computational load is reduced. Simulation and numerical results show that the computational complexity of the proposed algorithm is lower than that of existing algorithms. At the same time, the system throughput achieved by this algorithm is not lower than that of greedy algorithms.
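The Q-value described above is a UCB-style index: an empirical mean plus an upper-confidence bonus. A generic single-user-per-slot sketch under an assumed noisy-rate model (the function and parameter names are ours, not the paper's):

```python
import math
import random

def ucb_schedule(rates, rounds, seed=0):
    """UCB-style user selection: the index is the empirical mean rate
    plus a confidence bonus; pick the argmax each slot. `rates` maps
    user -> true mean rate of a noisy reward (assumed channel model)."""
    rng = random.Random(seed)
    users = list(rates)
    n = {u: 0 for u in users}
    mean = {u: 0.0 for u in users}
    picks = []
    for t in range(1, rounds + 1):
        untried = [u for u in users if n[u] == 0]
        if untried:
            u = untried[0]  # play each user once before using the index
        else:
            u = max(users, key=lambda x: mean[x] + math.sqrt(2 * math.log(t) / n[x]))
        r = rates[u] + rng.gauss(0, 0.1)  # noisy observed rate
        n[u] += 1
        mean[u] += (r - mean[u]) / n[u]  # incremental mean update
        picks.append(u)
    return picks
```

With a clear rate gap the scheduler concentrates its slots on the stronger user while still probing the weaker one occasionally, which is the exploration-exploitation behavior the abstract relies on.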


Journal ArticleDOI
TL;DR: In this paper, a greedy interference algorithm based on NOMA was proposed for interference management by frequency allocation in 5G devices, where the aim is to achieve a minimized outage and a maximized sum rate.

Journal ArticleDOI
TL;DR: In this paper, the authors have implemented and compared optimization algorithms such as Intra-Route Local Search (IRLS), Inter-Route Local Search, and Tabu Search that improve on the greedy solution of this NP-hard problem.
Abstract: Due to rapid urbanization, timely delivery using vehicle routing is the most pressing issue for e-commerce logistics and distribution. In this study, we articulate multiple vehicle routing problems with a maximum capacity constraint and no time constraint. A brief literature review is conducted to identify the widely used optimization models for perishable goods delivery in the last mile. We have implemented and compared optimization algorithms such as Intra-Route Local Search, Inter-Route Local Search, and Tabu Search that improve on the greedy solution of this NP-hard problem. We compared the solutions provided by these algorithms with the optimal solution, which can be obtained in exponential time, in terms of processing time and precision. The results demonstrate that Tabu search outperforms the other techniques for larger instance sizes, but for smaller instance sizes, local search can produce results comparable to Tabu search in a significantly shorter amount of time. In addition, the impact of instance size on the performance of the aforementioned algorithms is evaluated. A real-life evaluation is done to understand the use cases of this problem for an e-commerce company that will repeatedly face this challenge during warehouse storage management, last-mile delivery planning, and execution. A comprehensive performance comparison of various algorithms is presented to optimize last-mile delivery for future cases in order to reduce the overall carbon footprint and achieve profitability through the use of sustainable modes of distribution.

Journal ArticleDOI
TL;DR: In this article, the weak greedy approximation algorithm (WGAA) is introduced, which, for each $\theta$, produces two sequences of positive integers $(a_n)$ and $(b_n)$ such that (a) $\sum_{n=1}^{\infty} 1/b_n = \theta$; (b) $1/a_{n+1} < \theta - \sum_{i=1}^{n} 1/b_i < 1/(a_{n+1}-1)$ for all $n \geq 1$; (c) there exists $t \geq 1$ such that $b_n/a_n \leq t$ infinitely often.
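For contrast with the weak variant, the classical (strict) greedy underapproximation always takes the largest unit fraction not exceeding the remainder, i.e. the denominator $a_{n+1} = \lceil 1/(\theta - \sum_{i \leq n} 1/b_i) \rceil$. A sketch with exact rational arithmetic (terminating inputs only; the $\theta$ in the article is generally irrational, and the weak variant would instead allow denominators $b_n \geq a_n$):

```python
from fractions import Fraction
from math import ceil

def greedy_unit_fractions(theta, terms):
    """Strict greedy (Sylvester/Fibonacci) expansion: at each step take
    the largest unit fraction not exceeding the remainder, i.e. the
    denominator a = ceil(1/remainder)."""
    out = []
    rem = Fraction(theta)
    for _ in range(terms):
        if rem <= 0:
            break
        a = ceil(Fraction(1) / rem)  # smallest denominator with 1/a <= rem
        out.append(a)
        rem -= Fraction(1, a)
    return out
```

For example, 5/6 expands as 1/2 + 1/3 and 2/3 as 1/2 + 1/6.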

Journal ArticleDOI
TL;DR: In this paper, the notion of weak semi-greedy systems is introduced, inspired by the concepts of semi-greedy and branch semi-greedy systems and weak thresholding sets, and it is proved that in infinite-dimensional Banach spaces, the notions of semi-greedy, branch semi-greedy, weak semi-greedy, and almost greedy Markushevich bases are all equivalent.
Abstract: We introduce and study the notion of weak semi-greedy systems, which is inspired by the concepts of semi-greedy and branch semi-greedy systems and weak thresholding sets, and prove that in infinite dimensional Banach spaces, the notions of semi-greedy, branch semi-greedy, weak semi-greedy, and almost greedy Markushevich bases are all equivalent. This completes and extends some results from (Berná in J Math Anal Appl 470:218–225, 2019; Dilworth et al. in Studia Math 159:67–101, 2003; J Funct Anal 263:3900–3921, 2012). We also exhibit an example of a semi-greedy system that is neither almost greedy nor a Markushevich basis, showing that the Markushevich condition cannot be dropped from the equivalence result. In some cases, we obtain improved upper bounds for the corresponding constants of the systems.

Proceedings ArticleDOI
18 Feb 2023
TL;DR: In this article, a fairness-aware multi-armed bandit is proposed to increase overall player participation and intervention adherence rather than the maximization of total group output, which is traditionally achieved by favoring only high-performing participants.
Abstract: Algorithmic fairness is an essential requirement as AI becomes integrated in society. In the case of social applications where AI distributes resources, algorithms often must make decisions that will benefit a subset of users, sometimes repeatedly or exclusively, while attempting to maximize specific outcomes. How should we design such systems to serve users more fairly? This paper explores this question in the case where a group of users works toward a shared goal in a social exergame called Step Heroes. We identify adverse outcomes in traditional multi-armed bandits (MABs) and formalize the Greedy Bandit Problem. We then propose a solution based on a new type of fairness-aware multi-armed bandit, Shapley Bandits. It uses the Shapley Value for increasing overall player participation and intervention adherence rather than the maximization of total group output, which is traditionally achieved by favoring only high-performing participants. We evaluate our approach via a user study (n=46). Our results indicate that our Shapley Bandits effectively mediates the Greedy Bandit Problem and achieves better user retention and motivation across the participants.
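The Shapley Value that Shapley Bandits build on can be computed exactly for a small group by averaging each player's marginal contribution over all orderings. A generic sketch (not the paper's bandit integration):

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley value by averaging marginal contributions over all
    player orderings; feasible for the handful of players in a small
    group game. `value` maps a frozenset coalition to its worth."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            phi[p] += value(with_p) - value(coalition)
            coalition = with_p
    return {p: v / len(perms) for p, v in phi.items()}
```

By construction the values sum to the grand coalition's worth (efficiency), which is what makes them attractive for distributing credit fairly among participants.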

Journal ArticleDOI
TL;DR: In this article, a reinforcement learning technique (in particular a Q-learning method), combined with an auxiliary graph (AG)-based energy-efficient greedy method, is used to decide the suitable sequence of traffic allocation such that the overall power consumption in the network is reduced.
Abstract: During the network planning phase, optimal network planning implemented through efficient resource allocation and static traffic demand provisioning in IP-over-elastic optical network (IP-over-EON) is significantly more challenging than in the fixed-grid wavelength division multiplexing (WDM) network due to the increased flexibility of IP-over-EON. Mathematical programming based optimization models used for this purpose may not provide solutions for large networks due to their large computational complexity. Instead, a greedy heuristic may be used that intuitively selects traffic elements in sequence from the static traffic demand matrix and provisions them after necessary resource allocation. However, such greedy heuristics generally offer suboptimal solutions, since the appropriate traffic sequence offering optimal performance is rarely selected. In this regard, we propose a reinforcement learning technique (in particular a Q-learning method), combined with an auxiliary graph (AG)-based energy-efficient greedy method, for large network planning. The Q-learning method is used to decide the suitable sequence of traffic allocation such that the overall power consumption in the network is reduced. In the proposed heuristic, each traffic element from the given static traffic demand matrix is successively selected using the Q-learning method and provisioned using the AG-based greedy method.

Journal ArticleDOI
12 Jan 2023
TL;DR: In this paper, a web-based restaurant delivery system that applies a greedy algorithm to optimize routes and shipping costs is presented; the results indicate that the greedy algorithm can determine the best route for couriers to make deliveries so that shipping costs become lower.
Abstract: Food delivery application services have developed significantly in Indonesia. However, several areas have not received such application services. Orders at several restaurants are still placed through social media such as WhatsApp and Facebook, or by phone. Traditional ordering does not provide sufficient means to calculate the delivery cost of orders, resulting in cost-efficiency problems. In addition, order delivery routes are a problem for couriers who have to deliver several orders at once. This research builds a web-based restaurant delivery system by applying a greedy algorithm to optimize routes and shipping costs. The results of this study indicate that the greedy algorithm can determine the best route for couriers to make deliveries so that shipping costs become lower. This research serves as one proof of the application of the greedy algorithm to business problems, and restaurants may use the resulting system to increase the effectiveness of order delivery.
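A common greedy choice for such delivery routing is the nearest-neighbor rule: always drive to the closest unvisited stop. The abstract does not specify the exact greedy criterion used, so this is only an illustrative sketch:

```python
def nearest_neighbor_route(depot, stops, dist):
    """Greedy delivery routing: from the current location always drive
    to the nearest unvisited stop. Fast and simple, though generally
    not optimal. `dist` is a dict-of-dicts of pairwise distances."""
    route, here = [], depot
    todo = set(stops)
    while todo:
        nxt = min(todo, key=lambda s: dist[here][s])
        route.append(nxt)
        todo.remove(nxt)
        here = nxt
    return route
```

For stops laid out along one road, the rule visits them in order of increasing distance from the depot, which is exactly the low-cost sweep a courier would expect.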


Journal ArticleDOI
TL;DR: In this article, a greedy-AutoML framework is proposed to predict soil liquefaction potential, based on stacking ensemble learning (SEL) combined with a greedy search algorithm.

Journal ArticleDOI
TL;DR: In this article, a mixed integer linear programming (MILP) model is formulated for the problem, and a novel greedy algorithm based on path ranking and a column generation (CG) based approximation algorithm are proposed to reduce the time complexity of problem solving.
Abstract: With the rapid development of programmable data plane (PDP), both segment routing (SR) and in-band network telemetry (INT) have attracted intensive interest. Hence, we have previously proposed the technique of SR-INT, which explores the benefits of SR and INT simultaneously and gets rid of the hassle of their accumulated overheads. In this work, we further expand the advantage of SR-INT by studying how to plan the SR-INT schemes of flows at the network level to balance the tradeoff between bandwidth usage and coverage of network monitoring, namely, the problem of “SR-INT orchestration”. A mixed integer linear programming (MILP) model is first formulated for the problem, and we prove its NP-hardness. Then, to reduce the time complexity of problem solving, we propose a novel greedy algorithm based on path ranking and a column generation (CG) based approximation algorithm. Extensive simulations verify the performance of our proposed algorithms.

Journal ArticleDOI
TL;DR: In this article , the authors proposed an efficient surrogate-assisted greedy inclusion algorithm to deal with computationally expensive hypervolume-based environmental selection (HVES), which selects a subpopulation with the maximal hypervolume from the current population and plays a crucial role in guiding evolution.
Abstract: Hypervolume-based evolutionary algorithms have been widely used to handle many-objective optimization problems. In such algorithms, hypervolume-based environmental selection (HVES), which aims at selecting a subpopulation with the maximal hypervolume (HV) from the current population, plays a crucial role in guiding evolution. However, the computation time of HV increases exponentially with the number of objectives, making the HVES task an expensive optimization problem. In this paper, we propose an efficient surrogate-assisted greedy inclusion algorithm to deal with computationally expensive HVES tasks. It uses a lightweight surrogate model, a radial basis function network, to replace the most time-consuming calculations. In addition, an L1-norm distance-based filter is applied as a preselection operator to reduce the search space and avoid unnecessary calculations. Considering the inevitable approximation errors of surrogate models, we also design an online sampling strategy to enhance the reliability of selected solutions. The proposed algorithm is tested on two types of datasets and compared with six state-of-the-art greedy algorithms. Experimental results show that the proposed algorithm performs excellently on most datasets.
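To make greedy inclusion concrete, here is an exact two-objective version: repeatedly add the point whose inclusion most increases the hypervolume. The paper's contribution is replacing the expensive HV evaluations with a surrogate; this toy sketch uses the exact 2-D hypervolume (minimization) and invented points.

```python
def hv2d(points, ref):
    """Hypervolume (minimization) of a 2-D point set w.r.t. a reference point."""
    pts = sorted(points)          # ascending in the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:          # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def greedy_inclusion(points, k, ref):
    """Greedily build a size-k subset maximizing hypervolume."""
    chosen, candidates = [], list(points)
    for _ in range(k):
        best = max(candidates, key=lambda p: hv2d(chosen + [p], ref))
        chosen.append(best)
        candidates.remove(best)
    return chosen

front = [(1, 3), (2, 2), (3, 1)]
subset = greedy_inclusion(front, 2, ref=(4, 4))
```

Each greedy step calls the HV routine once per candidate; in many objectives those calls dominate the runtime, which is exactly what motivates the surrogate model in the paper.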

Journal ArticleDOI
TL;DR: In this article , the authors evaluate approaches that include ant colony optimization (ACO), bee colony optimization (BCO), a hybrid of genetic algorithms and BCO, and a greedy approach.
Abstract: Objectives: Software researchers have been taking advantage of various evolutionary optimization approaches by digitizing them. Test case selection and prioritization based on fault coverage criteria within a time-constrained environment is important in the regression testing problem. Methods: This work empirically evaluates approaches that include evolutionary techniques (ant colony optimization, bee colony optimization, and a combination of genetic algorithms and bee colony optimization) and a greedy approach. These four techniques have been successfully applied to regression testing, and tools have been developed for their implementation. Eight open-source test programs, written in the C language, have been used for empirical evaluation of the regression testing approaches. Findings: The accuracy achieved by t-GSC, a greedy technique, was found to be the lowest, while that of ACO was the best. All four approaches yielded results within a narrow margin of one another, and all four gave excellent time and size gains. Novelty: Many studies in the literature compare regression testing approaches of a similar kind. Instead of repeating such comparisons, this work evaluates two well-accepted approximation paradigms: a hybrid metaheuristic approach and a greedy approach. The aim is to compare the efficiency of the greedy approach against the metaheuristic approaches; it is imperative to compare approaches following different algorithmic paradigms that nevertheless solve the same problem. Keywords: Ant Colony Optimization; Bee Colony Optimization; Genetic Algorithms; Greedy Set Cover; Software Testing; empirical comparison
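The greedy set-cover heuristic named in the keywords (and underlying the t-GSC technique) can be sketched as follows: repeatedly pick the test case that covers the most still-uncovered faults. The test suite and fault identifiers below are invented for illustration.

```python
def greedy_set_cover(tests, all_faults):
    """tests: dict mapping test name -> set of faults it covers."""
    uncovered = set(all_faults)
    selected = []
    while uncovered:
        # greedy choice: test covering the most still-uncovered faults
        best = max(tests, key=lambda t: len(tests[t] & uncovered))
        if not tests[best] & uncovered:
            break  # remaining faults are covered by no test
        selected.append(best)
        uncovered -= tests[best]
    return selected, uncovered

suite = {"t1": {1, 2, 3}, "t2": {2, 4}, "t3": {3, 4, 5}, "t4": {5}}
order, missed = greedy_set_cover(suite, {1, 2, 3, 4, 5})
```

The greedy choice gives the classic ln(n) approximation guarantee for set cover, which is why such heuristics trade a little accuracy (as the findings report for t-GSC) for substantial reductions in suite size and runtime.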

Journal ArticleDOI
TL;DR: In this paper , the authors study the problem of online traffic routing in the context of capacity-constrained parallel road networks and analyze it from two perspectives: a worst-case analysis to identify the limits of deterministic online routing, and a data-driven approach to solving it.
Abstract: Over the past decade, GPS-enabled traffic applications such as Google Maps and Waze have become ubiquitous and have had a significant influence on billions of daily commuters’ travel patterns. A consequence of the online route suggestions of such applications, for example, via greedy routing, has often been an increase in traffic congestion since the induced travel patterns may be far from the system optimum. Spurred by the widespread impact of traffic applications on travel patterns, this work studies online traffic routing in the context of capacity-constrained parallel road networks and analyzes this problem from two perspectives. First, we perform a worst-case analysis to identify the limits of deterministic online routing. Although we find that deterministic online algorithms achieve finite, problem/instance-dependent competitive ratios in special cases, we show that for a general setting the competitive ratio is unbounded. This result motivates us to move beyond worst-case analysis. Here, we consider algorithms that exploit knowledge of past problem instances and show how to design data-driven algorithms whose performance can be quantified and formally generalized to unseen future instances. We then present numerical experiments based on an application case for the San Francisco Bay Area to evaluate the performance of the proposed data-driven algorithms compared with the greedy algorithm and two look-ahead heuristics with access to additional information on the values of time and arrival time parameters of users. Our results show that the developed data-driven algorithms outperform commonly used greedy online-routing algorithms. Furthermore, our work sheds light on the interplay between data availability and achievable solution quality. History: Accepted by Andrea Lodi, Area Editor for Design and Analysis of Algorithms–Discrete. 
Funding: This work was supported by National Science Foundation (NSF) Award 1830554 and by the German Research Foundation (DFG) under [Grant 449261765]. Supplemental Material: The e-companion is available at https://doi.org/10.1287/ijoc.2023.1275 .
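The greedy online-routing baseline that the data-driven algorithms above are compared against can be illustrated on a parallel road network: each arriving user is myopically assigned to the road with the lowest current latency that still has spare capacity. The linear congestion model and all numbers below are toy assumptions, not the paper's setup.

```python
def greedy_online_routing(arrivals, capacities, free_flow):
    """Assign each arriving user to the currently cheapest road with spare capacity."""
    loads = [0] * len(capacities)

    def latency(i):
        # toy congestion model: latency grows linearly with load
        return free_flow[i] * (1 + loads[i] / capacities[i])

    assignment = []
    for _ in range(arrivals):
        open_roads = [i for i in range(len(capacities)) if loads[i] < capacities[i]]
        if not open_roads:
            assignment.append(None)  # every road is at capacity
            continue
        i = min(open_roads, key=latency)  # myopic (greedy) choice
        loads[i] += 1
        assignment.append(i)
    return assignment, loads

# two parallel roads: a short road with small capacity, a longer one with more
assignment, loads = greedy_online_routing(4, capacities=[2, 3], free_flow=[1.0, 2.0])
```

Because each assignment ignores future arrivals, greedy routing can fill the cheap road early and push later users onto slow routes, which is the gap the paper's worst-case analysis formalizes and its data-driven algorithms aim to close.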