
Showing papers on "Heuristic (computer science)" published in 2021


Journal ArticleDOI
TL;DR: Experiments show that the AOA delivers very promising results in solving challenging optimization problems compared with eleven other well-known optimization algorithms.

1,218 citations


Journal ArticleDOI
TL;DR: This article investigates the unmanned aerial vehicle (UAV)-assisted wireless powered Internet-of-Things system, where a UAV takes off from a data center, flies to each of the ground sensor nodes (SNs) in order to transfer energy and collect data from the SNs, and then returns to the data center.
Abstract: This article investigates the unmanned aerial vehicle (UAV)-assisted wireless powered Internet-of-Things system, where a UAV takes off from a data center, flies to each of the ground sensor nodes (SNs) in order to transfer energy and collect data from the SNs, and then returns to the data center. For such a system, an optimization problem is formulated to minimize the average Age of Information (AoI) of the data collected from all ground SNs. Since the average AoI depends on the UAV's trajectory and on the time required for energy harvesting (EH) and data collection at each SN, these factors need to be optimized jointly. Moreover, instead of the traditional linear EH model, we employ a nonlinear model because the behavior of the EH circuits is nonlinear by nature. To solve this nonconvex problem, we propose to decompose it into two subproblems, i.e., a joint energy transfer and data collection time allocation problem and a UAV trajectory planning problem. For the first subproblem, we prove that it is convex and give an optimal solution by using Karush–Kuhn–Tucker (KKT) conditions. This solution is used as the input for the second subproblem, which we solve by designing dynamic programming (DP) and ant colony (AC) heuristic algorithms. The simulation results show that the DP-based algorithm obtains the minimal average AoI of the system, and the AC-based heuristic finds solutions with near-optimal average AoI. The results also reveal that the average AoI increases with the flying altitude of the UAV and linearly with the size of the data collected at each ground SN.

138 citations


Journal ArticleDOI
TL;DR: Experimental results on main parameter selection, path-planning performance in different environments, and diversity of the optimal solution show that IAACO enables the robot to attain a globally optimized path with high real-time performance and stability in path planning.

133 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel dynamical hyperparameter optimization method that adaptively optimizes hyperparameters for a given sequence using an action-prediction network leveraged on continuous deep Q-learning.
Abstract: Hyperparameters are numerical pre-sets whose values are assigned prior to the commencement of a learning process. Selecting appropriate hyperparameters is often critical for achieving satisfactory performance in many vision problems, such as deep learning-based visual object tracking. However, it is often difficult to determine their optimal values, especially if they are specific to each video input. Most hyperparameter optimization algorithms tend to search a generic range and are imposed blindly on all sequences. In this paper, we propose a novel dynamical hyperparameter optimization method that adaptively optimizes hyperparameters for a given sequence using an action-prediction network leveraged on continuous deep Q-learning. Since the observation space for object tracking is significantly more complex than those in traditional control problems, existing continuous deep Q-learning algorithms cannot be directly applied. To overcome this challenge, we introduce an efficient heuristic strategy to handle the high-dimensional state space while also accelerating convergence. The proposed algorithm is applied to improve two representative trackers, a Siamese-based one and a correlation-filter-based one, to evaluate its generalizability. Their superior performance on several popular benchmarks is clearly demonstrated. Our source code is available at https://github.com/shenjianbing/dqltracking.

113 citations


Journal ArticleDOI
TL;DR: A novel deterministic algorithm named multiple sub-target artificial potential field (MTAPF), based on an improved APF, is presented to make the generated path compliant with the USV's dynamics and orientation restrictions; it is validated in simulations and shown to work effectively in different environments.

111 citations


Journal ArticleDOI
TL;DR: A new physics-based optimization algorithm named the Flow Direction Algorithm (FDA) mimics the flow of water toward the outlet point with the lowest height in a drainage basin; experiments demonstrate the superior performance of the FDA in solving challenging problems.

91 citations


Journal ArticleDOI
TL;DR: A fuzzy mixed integer linear programming model is designed for cell formation problems including the scheduling of parts within cells in a cellular manufacturing system (CMS) where several automated guided vehicles (AGVs) are in charge of transferring the exceptional parts.
Abstract: In today's competitive environment, it is essential to design a flexible, responsive manufacturing system with automatic material handling systems. In this study, a fuzzy Mixed Integer Linear Programming (MILP) model is designed for the Cell Formation Problem (CFP), including the scheduling of parts within cells, in a Cellular Manufacturing System (CMS) where several Automated Guided Vehicles (AGVs) are in charge of transferring the exceptional parts. Notably, using these AGVs in a CMS can be challenging from the perspective of mathematical modeling due to the need to account for AGV collisions as well as part pickup/delivery. This paper investigates the role of AGVs and human factors as indispensable components of automation systems in the cell formation and scheduling of parts under fuzzy processing times. The proposed objective function includes minimizing the makespan and the inter-cellular movements of parts. Due to the NP-hardness of the problem, a hybrid Genetic Algorithm (GA/heuristic) and a Whale Optimization Algorithm (WOA) are developed. The experimental results reveal that our proposed algorithms perform well compared to CPLEX and two other well-known algorithms, i.e., Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), in terms of computational efficiency and accuracy. Finally, WOA stands out as the best algorithm for solving the problem.

90 citations


Journal ArticleDOI
TL;DR: The results show that the proposed simulation-based solution approach provides good solutions in terms of both quality and computational time, and that the uncertainty in waiting times may have a significant impact on route plans.

85 citations


Journal ArticleDOI
TL;DR: An improved artificial immune system (IAIS) algorithm is proposed to solve a special case of the flexible job shop scheduling problem (FJSP), where the processing time of each job is a nonsymmetric triangular interval T2FS (IT2FS) value.
Abstract: In practical applications, particularly in flexible manufacturing systems, there is a high level of uncertainty. A type-2 fuzzy logic system (T2FS) has several parameters and an enhanced ability to handle high levels of uncertainty. This article proposes an improved artificial immune system (IAIS) algorithm to solve a special case of the flexible job shop scheduling problem (FJSP), where the processing time of each job is a nonsymmetric triangular interval T2FS (IT2FS) value. First, a novel affinity calculation method considering the IT2FS values is developed. Then, four problem-specific initialization heuristics are designed to enhance both quality and diversity. To strengthen the exploitation abilities, six local search approaches are applied to the routing and scheduling vectors, respectively. Next, a simulated annealing method is embedded to accept antibodies with low affinity, which enhances the exploration abilities of the algorithm. Moreover, a novel population diversity heuristic is presented to eliminate antibodies with high crowding values. Five efficient algorithms are selected for a detailed comparison, and the simulation results demonstrate that the proposed IAIS algorithm is effective for IT2FS FJSPs.

81 citations


Journal ArticleDOI
TL;DR: Computational results indicate that the proposed global supply chain network configuration can respond to global customers' demand in an agile as well as green manner.

77 citations


Journal ArticleDOI
TL;DR: A deep-reinforcement-learning-based three-network double-delay actor-critic (TDAC) control strategy for AGC to handle the strong random disturbance issues induced by the ever-increasing penetration of renewable energy to the power grids is proposed.
Abstract: As conventional automatic generation control (AGC) is inadequate to deal with the strong random disturbance issues induced by the ever-increasing penetration of renewable energy into power grids, this article proposes a deep-reinforcement-learning-based three-network double-delay actor-critic (TDAC) control strategy for AGC to handle this problem; the strategy employs multiple neural networks to fit the system's action strategies and to evaluate their value. The proposed strategy can increase exploration efficiency and the quality of AGC and improve system control performance using a modified actor-critic (AC) method with an incentive heuristic mechanism, while a novel iterative update of the value function is also used to effectively reduce optimization bias and achieve optimal coordinated control of the power grid. Simulations are provided to show the control performance of the strategy. Compared with other smart methods, the simulation study demonstrates that TDAC has excellent exploratory stability and learning ability. Meanwhile, it can also improve the dynamic performance of the power system and achieve regional optimal coordinated control.

Proceedings ArticleDOI
28 Jun 2021
TL;DR: In this article, a comparison study is carried out between different algorithms used in the optimization process to find the best hyperparameter values for a neural network; different evaluation measures, such as accuracy and running time, are used to conduct the comparison.
Abstract: The performance of machine learning algorithms is affected by several factors; some of these factors are related to data quantity, quality, or its features. Another element is the choice of an appropriate algorithm to solve the problem, and one major influence is the parameter configuration based on the problem specification. Parameters in machine learning can be classified into two types: (1) model parameters, which are internal and configurable and whose values can be estimated from data, such as the weights of a deep neural network; and (2) hyperparameters, which are external and whose values cannot be estimated from data, such as the learning rate for training a neural network. Hyperparameter values may be specified by a practitioner, obtained using a heuristic, or carried over from other problems; however, the best values of these parameters are identified when the algorithm achieves its highest accuracy, which can be achieved by tuning the parameters. The main goal of this paper is to conduct a comparison study between different algorithms used in the optimization process to find the best hyperparameter values for the neural network. The algorithms applied are the grid search algorithm, the Bayesian algorithm, and the genetic algorithm. Different evaluation measures, such as accuracy and running time, are used to conduct this comparison.
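For readers unfamiliar with the baseline the paper starts from, the sketch below shows a plain grid search over two hypothetical hyperparameters (learning rate and hidden-layer width); the validation_accuracy function is only a stand-in for an actual training-and-evaluation run, so the names and values are illustrative assumptions rather than the paper's setup.

```python
import itertools
import time

def validation_accuracy(learning_rate, hidden_units):
    """Stand-in for training a network and returning validation accuracy.
    A real study would train and evaluate the model here; this toy surrogate
    just rewards a learning rate near 1e-3 and a hidden layer near 64 units."""
    return 1.0 - abs(learning_rate - 1e-3) * 100 - abs(hidden_units - 64) / 1000

def grid_search(lr_grid, hidden_grid):
    start = time.time()
    best_score, best_config = float("-inf"), None
    for lr, hidden in itertools.product(lr_grid, hidden_grid):  # exhaustive enumeration
        score = validation_accuracy(lr, hidden)
        if score > best_score:
            best_score, best_config = score, (lr, hidden)
    return best_config, best_score, time.time() - start

config, score, elapsed = grid_search([1e-4, 1e-3, 1e-2], [32, 64, 128])
print(config, round(score, 4), f"{elapsed:.6f}s")
```

Bayesian and genetic approaches replace the exhaustive loop with a guided search, which is exactly the running-time trade-off the paper measures.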

Journal ArticleDOI
TL;DR: A multi-start iterated greedy (MSIG) algorithm is proposed to minimize the makespan and has many promising advantages in solving the PM/DPFSP under consideration.
Abstract: In recent years, distributed scheduling problems have been well studied for their close connection with multi-factory production networks. However, the maintenance operations that are commonly carried out on a system to restore it to a specific state are seldom taken into consideration. In this paper, we study a distributed permutation flowshop scheduling problem with preventive maintenance operations (PM/DPFSP). A multi-start iterated greedy (MSIG) algorithm is proposed to minimize the makespan. An improved heuristic is presented for initialization and re-initialization, adding a dropout operation to NEH2 to generate solutions with a high level of quality and dispersion. A destruction phase with tournament selection and a construction phase with an enhanced strategy are introduced to avoid local optima. A local search based on three effective operators is integrated into the MSIG to reinforce the exploitation of local neighborhood solutions. In addition, a restart strategy is adopted if a solution has not been improved in a certain number of consecutive iterations. We conducted extensive experiments to test the performance of the presented MSIG. The computational results indicate that the presented MSIG has many promising advantages in solving the PM/DPFSP under consideration.
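The paper's specific MSIG components (NEH2 with dropout, tournament-based destruction, the three local-search operators, the restart rule) are not reproduced here; the sketch below is only a generic iterated-greedy loop for a single-factory permutation flowshop, showing the makespan computation and the destruction/greedy-reinsertion cycle that MSIG builds on.

```python
import random

def makespan(sequence, proc_times):
    """Completion time of the last job on the last machine of a permutation flowshop."""
    m = len(proc_times[0])
    completion = [0.0] * m
    for job in sequence:
        completion[0] += proc_times[job][0]
        for k in range(1, m):
            completion[k] = max(completion[k], completion[k - 1]) + proc_times[job][k]
    return completion[-1]

def iterated_greedy(proc_times, d=2, iters=200, seed=0):
    rng = random.Random(seed)
    seq = list(range(len(proc_times)))          # arbitrary initial permutation
    best = seq[:]
    for _ in range(iters):
        removed = rng.sample(seq, d)            # destruction: drop d random jobs
        partial = [j for j in seq if j not in removed]
        for job in removed:                     # construction: greedy best-position reinsertion
            candidates = [partial[:i] + [job] + partial[i:] for i in range(len(partial) + 1)]
            partial = min(candidates, key=lambda s: makespan(s, proc_times))
        if makespan(partial, proc_times) <= makespan(best, proc_times):
            best = partial[:]
        seq = partial    # simple acceptance; MSIG adds local search, restarts, and multi-start
    return best, makespan(best, proc_times)

proc = [[3, 2, 4], [1, 5, 2], [4, 1, 3], [2, 3, 1]]   # 4 jobs x 3 machines (toy data)
print(iterated_greedy(proc))
```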

Journal ArticleDOI
29 Jul 2021
TL;DR: In this article, a cooperative multi-stage hyper-heuristic (CMS-HH) algorithm is proposed to address certain combinatorial optimization problems, including Boolean satisfiability problems, one-dimensional packing problems, permutation flow-shop scheduling, personnel scheduling problems, traveling salesman problems, and vehicle routing problems.
Abstract: A hyper-heuristic algorithm is a general solution framework that adaptively selects the optimizer to address complex problems. A classical hyper-heuristic framework consists of two levels: a high-level heuristic and a set of low-level heuristics. The low-level heuristics to be used in the optimization process are chosen by the high-level strategy of the hyper-heuristic. In this study, a Cooperative Multi-Stage Hyper-Heuristic (CMS-HH) algorithm is proposed to address certain combinatorial optimization problems. In the CMS-HH, a genetic algorithm is introduced to perturb the initial solution and increase the diversity of solutions. In the search phase, an online learning mechanism based on multi-armed bandits and relay hybridization is proposed to improve the quality of solutions. In addition, a multi-point search is introduced to search cooperatively with the single-point search when the state of the solution does not change for a period of time. The performance of the CMS-HH algorithm is assessed on six specific combinatorial optimization problems: Boolean satisfiability, one-dimensional packing, permutation flow-shop scheduling, personnel scheduling, traveling salesman, and vehicle routing problems. The experimental results demonstrate the efficiency and significance of the proposed CMS-HH algorithm.
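The CMS-HH's online learning mechanism is only named above, so the sketch below shows a generic way a hyper-heuristic can use multi-armed bandits to pick the next low-level heuristic: a UCB1 selector whose reward is the improvement each heuristic produced. The class and the two toy "heuristics" are illustrative assumptions, not the paper's implementation.

```python
import math
import random

class BanditHeuristicSelector:
    """UCB1 selection over a set of low-level heuristics (illustrative sketch)."""
    def __init__(self, n_heuristics):
        self.counts = [0] * n_heuristics
        self.rewards = [0.0] * n_heuristics
        self.t = 0

    def select(self):
        self.t += 1
        for h, c in enumerate(self.counts):
            if c == 0:                           # try each heuristic once first
                return h
        ucb = [self.rewards[h] / self.counts[h]
               + math.sqrt(2 * math.log(self.t) / self.counts[h])
               for h in range(len(self.counts))]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, h, reward):
        self.counts[h] += 1
        self.rewards[h] += reward

# Toy usage: two "heuristics" perturb a scalar objective; reward is the improvement achieved.
rng = random.Random(1)
heuristics = [lambda x: x - rng.uniform(0, 0.1), lambda x: x - rng.uniform(0, 1.0)]
selector, x = BanditHeuristicSelector(len(heuristics)), 100.0
for _ in range(50):
    h = selector.select()
    new_x = heuristics[h](x)
    selector.update(h, max(0.0, x - new_x))      # improvement used as reward
    x = min(x, new_x)
print(round(x, 3), selector.counts)
```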

Journal ArticleDOI
TL;DR: A deep fully convolutional neural network, DeepRx is proposed, which executes the whole receiver pipeline from frequency domain signal stream to uncoded bits in a 5G-compliant fashion and outperforms traditional methods.
Abstract: Deep learning has solved many problems that are out of reach of heuristic algorithms. It has also been successfully applied in wireless communications, even though the current radio systems are well-understood and optimal algorithms exist for many tasks. While some gains have been obtained by learning individual parts of a receiver, a better approach is to jointly learn the whole receiver. This, however, often results in a challenging nonlinear problem, for which the optimal solution is infeasible to implement. To this end, we propose a deep fully convolutional neural network, DeepRx, which executes the whole receiver pipeline from frequency domain signal stream to uncoded bits in a 5G-compliant fashion. We facilitate accurate channel estimation by constructing the input of the convolutional neural network in a very specific manner using both the data and pilot symbols. Also, DeepRx outputs soft bits that are compatible with the channel coding used in 5G systems. Using 3GPP-defined channel models, we demonstrate that DeepRx outperforms traditional methods. We also show that the high performance can likely be attributed to DeepRx learning to utilize the known constellation points of the unknown data symbols, together with the local symbol distribution, for improved detection accuracy.

Journal ArticleDOI
TL;DR: In this article, an ant colony system (ACS)-based algorithm is proposed to obtain good-enough paths for UAVs and fully cover all regions efficiently, inspired by the foraging behavior of ants, which can find the shortest path between their nest and a food source.
Abstract: Unmanned aerial vehicles (UAVs) have been extensively studied and widely adopted in practical systems owing to their effectiveness and flexibility. Although heterogeneous UAVs have an enormous advantage in improving performance and conserving energy with respect to homogeneous ones, they give rise to a complex path planning problem. Especially in large-scale cooperative search systems with multiple separated regions, coverage path planning, which seeks optimal paths for UAVs to completely visit and search all regions of interest, is NP-hard and difficult to settle. In this work, we focus on the coverage path planning problem of heterogeneous UAVs and present an ant colony system (ACS)-based algorithm to obtain good-enough paths for UAVs and fully cover all regions efficiently. First, models of UAVs and regions are built, and a linear programming-based formulation is presented to exactly provide the best point-to-point flight path for each UAV. Then, inspired by the foraging behavior of ants, which can find the shortest path between their nest and food, an ACS-based heuristic is presented to seek approximately optimal solutions and minimize the time consumption of tasks in the cooperative search system. Experiments on randomly generated regions have been conducted to evaluate the performance of the new heuristic in terms of execution time, task completion time, and deviation ratio.
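As a reference for the ACS machinery the heuristic builds on (not the paper's heterogeneous-UAV model), the sketch below implements a minimal ant colony system on a small distance matrix, with the pseudo-random-proportional transition rule and the local and global pheromone updates; all parameter values are illustrative.

```python
import random

def acs_tour(dist, n_ants=8, iters=100, alpha=0.1, beta=2.0, q0=0.9, rho=0.1, seed=0):
    """Minimal ant colony system for a round trip over all nodes (illustrative sketch)."""
    rng = random.Random(seed)
    n = len(dist)
    tau0 = 1.0 / (n * sum(dist[0]) / (n - 1))            # rough initial pheromone level
    tau = [[tau0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")

    def tour_length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    for _ in range(iters):
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                attract = [tau[i][j] * (1.0 / dist[i][j]) ** beta for j in cand]
                if rng.random() < q0:                      # exploitation: pick the best edge
                    j = cand[max(range(len(cand)), key=attract.__getitem__)]
                else:                                       # biased exploration (roulette wheel)
                    r, acc, j = rng.random() * sum(attract), 0.0, cand[-1]
                    for c, a in zip(cand, attract):
                        acc += a
                        if acc >= r:
                            j = c
                            break
                tau[i][j] = (1 - rho) * tau[i][j] + rho * tau0   # local pheromone update
                tour.append(j)
            length = tour_length(tour)
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):                                  # global update along the best tour
            j = best_tour[(best_tour.index(i) + 1) % n]
            tau[i][j] = (1 - alpha) * tau[i][j] + alpha / best_len
    return best_tour, best_len

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]   # toy symmetric distances
print(acs_tour(dist))
```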

Journal ArticleDOI
TL;DR: In this paper, a hierarchical multiobjective heuristic (HMOH) is proposed to optimize printed-circuit board assembly (PCBA) in a single beam-head surface mounter.
Abstract: This article proposes a hierarchical multiobjective heuristic (HMOH) to optimize printed-circuit board assembly (PCBA) in a single beam-head surface mounter. The beam-head surface mounter is the core facility in a high-mix and low-volume PCBA line. However, as a large-scale, complex, and multiobjective combinatorial optimization problem, the PCBA optimization of the beam-head surface mounter is still a challenge. This article provides a framework for optimizing all the interrelated objectives, which has not been achieved in the existing studies. A novel decomposition strategy is applied. This helps to closely model the real-world problem as the head task assignment problem (HTAP) and the pickup-and-place sequencing problem (PAPSP). These two models consider all the factors affecting the assembly time, including the number of pickup-and-place (PAP) cycles, nozzle changes, simultaneous pickups, and the PAP distances. Specifically, HTAP consists of the nozzle assignment and component allocation, while PAPSP comprises place allocation, feeder set assignment, and place sequencing problems. Adhering strictly to the lexicographic method, the HMOH solves these subproblems in a descending order of importance of their involved objectives. Exploiting the expert knowledge, each subproblem is solved by an elaborately designed heuristic. Finally, the proposed HMOH realizes the complete and optimal PCBA decision making in real time. Using industrial PCB datasets, the superiority of HMOH is elucidated through comparison with the built-in optimizer of the widely used Samsung SM482.
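The surface-mounter subproblems themselves are too involved to reproduce, but the lexicographic principle the HMOH adheres to is easy to illustrate: candidates are compared objective by objective in a fixed order of importance, so a less important objective can only break ties on the more important ones. The objective names and placeholder evaluation functions below are assumptions for illustration only.

```python
# Objectives listed in descending order of importance, each as (name, function to minimize).
# The evaluation functions here are illustrative placeholders, not the paper's models.
objectives = [
    ("pap_cycles",     lambda plan: plan["cycles"]),
    ("nozzle_changes", lambda plan: plan["nozzle_changes"]),
    ("pap_distance",   lambda plan: plan["distance"]),
]

def lexicographic_key(plan):
    """Python tuple comparison is lexicographic, matching the method's ordering."""
    return tuple(f(plan) for _, f in objectives)

candidates = [
    {"cycles": 12, "nozzle_changes": 3, "distance": 840.0},
    {"cycles": 12, "nozzle_changes": 2, "distance": 910.0},
    {"cycles": 13, "nozzle_changes": 1, "distance": 700.0},
]
best = min(candidates, key=lexicographic_key)
print(best)   # the 12-cycle, 2-nozzle-change plan wins despite its longer distance
```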

Journal ArticleDOI
TL;DR: An Adaptive Large Neighborhood Search heuristic algorithm is developed for solving the vehicle routing problem with time windows and delivery robots (VRPTWDR), and insights are provided on the use of self-driving parcel delivery robots as an alternative last mile service.

Journal ArticleDOI
TL;DR: In this article, an efficient optimization algorithm that is a hybrid of the iterated greedy and simulated annealing algorithms (hereinafter, referred to as IGSA) was proposed to solve the flexible job shop scheduling problem with crane transportation processes.
Abstract: In this study, we propose an efficient optimization algorithm that is a hybrid of the iterated greedy and simulated annealing algorithms (hereinafter referred to as IGSA) to solve the flexible job shop scheduling problem with crane transportation processes (CFJSP). Two objectives are considered simultaneously, namely, minimization of the maximum completion time and of the energy consumed during machine processing and crane transportation. Differently from the methods in the literature, crane lift operations are investigated for the first time, accounting for the processing time and energy consumption involved in the crane lift process. The IGSA algorithm is then developed to solve the CFJSPs considered. In the proposed IGSA algorithm, each solution is first represented by a 2-D vector, where one vector represents the scheduling sequence and the other shows the assignment of machines. Subsequently, an improved construction heuristic considering the problem features is proposed, which can decrease the number of replicated insertion positions for the destruction operations. Furthermore, to balance the exploration ability and time complexity of the proposed algorithm, a problem-specific exploration heuristic is developed. Finally, a set of randomly generated instances based on realistic industrial processes is tested. Through comprehensive computational comparisons and statistical analyses, the proposed algorithm is shown to perform highly effectively compared with several efficient algorithms.
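The CFJSP encoding and crane model are beyond a short fragment; the sketch below only shows the simulated-annealing acceptance rule that an iterated greedy/simulated annealing hybrid of this kind typically relies on to decide whether to keep a perturbed solution, applied here to a generic scalar objective rather than the paper's makespan/energy objectives.

```python
import math
import random

def sa_accept(current_cost, candidate_cost, temperature, rng):
    """Metropolis acceptance: always accept improvements, otherwise accept with
    probability exp(-delta / T), which shrinks as the temperature cools."""
    delta = candidate_cost - current_cost
    return delta <= 0 or rng.random() < math.exp(-delta / temperature)

# Toy usage: minimize f(x) = (x - 3)^2 with random perturbations.
rng = random.Random(42)
x, cost, temperature = 10.0, 49.0, 5.0
for step in range(500):
    cand = x + rng.uniform(-1, 1)
    cand_cost = (cand - 3.0) ** 2
    if sa_accept(cost, cand_cost, temperature, rng):
        x, cost = cand, cand_cost
    temperature *= 0.99            # geometric cooling schedule
print(round(x, 3), round(cost, 5))
```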

Journal ArticleDOI
TL;DR: A distributed VNE system with historical archives (HAs) and metaheuristic approaches, using set-based particle swarm optimization (PSO) as the optimizer, is proposed to solve the VNE problem in a distributed way.
Abstract: Virtual network embedding (VNE) is an important problem in network virtualization for the flexible sharing of network resources. While most existing studies focus on centralized embedding for VNE, distributed embedding is considered more scalable and suitable for large-scale scenarios, but how virtual resources can be mapped to substrate resources effectively and efficiently remains a challenging issue. In this paper, we devise a distributed VNE system with historical archives (HAs) and metaheuristic approaches. First, we introduce metaheuristic approaches to each delegation of the distributed embedding system as the optimizer for VNE. Compared to the heuristic-based greedy algorithms used in existing distributed embedding approaches, which are prone to being trapped in local optima, metaheuristic approaches can provide better embedding performance for these distributed delegations. Second, an archive-based strategy is also introduced in the distributed embedding system to assist the metaheuristic algorithms. The archives are used to record up-to-date information on frequently repeated tasks. By utilizing such archives as historical memory, metaheuristic algorithms can further improve embedding performance for frequently repeated tasks. Following this idea, we incorporate set-based particle swarm optimization (PSO) as the optimizer and propose HA-VNE-PSO, a distributed VNE system with HAs and set-based PSO, to solve the VNE problem in a distributed way. HA-VNE-PSO is empirically validated in scenarios of different scales. The experimental results verify that HA-VNE-PSO scales well with respect to substrate networks, and that the HA strategy is indeed effective in different scenarios.
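The set-based PSO itself is not reproduced here; the sketch below only illustrates the historical-archive idea in the simplest possible form: a delegation keeps a small dictionary keyed by a signature of the virtual network request, so a frequently repeated request can be seeded with the best embedding found previously. The data structures and the signature function are illustrative assumptions, not the paper's design.

```python
class EmbeddingArchive:
    """Historical archive of best-known embeddings for repeated VNE requests (sketch)."""
    def __init__(self, capacity=128):
        self.capacity = capacity
        self.best = {}            # request signature -> (cost, embedding)

    @staticmethod
    def signature(request):
        # Order-independent summary of the virtual network request (illustrative).
        return (tuple(sorted(request["node_demands"])),
                tuple(sorted(request["link_demands"])))

    def lookup(self, request):
        return self.best.get(self.signature(request))

    def record(self, request, embedding, cost):
        key = self.signature(request)
        if key not in self.best or cost < self.best[key][0]:
            if len(self.best) >= self.capacity and key not in self.best:
                self.best.pop(next(iter(self.best)))   # evict the oldest entry
            self.best[key] = (cost, embedding)

# Toy usage: a repeated request is seeded from the archive instead of from scratch.
archive = EmbeddingArchive()
request = {"node_demands": [2, 4], "link_demands": [10]}
archive.record(request, embedding={"v0": "s3", "v1": "s7"}, cost=12.5)
print(archive.lookup(request))
```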

Journal ArticleDOI
TL;DR: A novel Mixed-Integer Linear Programming (MILP) model for the Green Inventory-Routing Problem with Time Windows (GIRP-TW) is proposed using a piecewise linearization method, and the augmented TS algorithm is demonstrated to be the best method for yielding high-quality solutions.
Abstract: Transportation accounts for a significant share of each country's Gross Domestic Product (GDP) and is one of the largest consumers of petroleum products. On the other hand, many efforts have recently been made to reduce Greenhouse Gas (GHG) emissions from vehicles by redesigning and planning transportation processes. This paper proposes a novel Mixed-Integer Linear Programming (MILP) mathematical model for the Green Inventory-Routing Problem with Time Windows (GIRP-TW) using a piecewise linearization method. The objective is to minimize the total cost, including fuel consumption cost, driver cost, inventory cost, and vehicle usage cost, taking into account factors such as the volume of the vehicle load, vehicle speed, and road slope. To solve the problem, three meta-heuristic algorithms are designed, including the original and augmented Tabu Search (TS) algorithms and a Differential Evolution (DE) algorithm. In these algorithms, three heuristic methods, an improved Clarke-Wright algorithm, an improved Push-Forward Insertion Heuristic (PFIH), and a heuristic speed optimization algorithm, are also applied to deal with the routing structure of the problem. The performance of the proposed solution techniques is analyzed using well-known test problems and algorithms from the literature. Furthermore, a statistical test is conducted to efficiently provide the required comparisons for large-sized problems. The obtained results demonstrate that the augmented TS algorithm is the best method for yielding high-quality solutions. Finally, a sensitivity analysis is performed to investigate the variability of the objective function.

Journal ArticleDOI
TL;DR: An intelligent and efficient resource allocation and task offloading algorithm based on the deep reinforcement learning framework of multiagent deep deterministic policy gradient (MADDPG) in a dynamic communication environment is proposed, and results show that the proposed algorithm can greatly reduce the energy consumption of each user terminal.
Abstract: Augmented reality (AR) applications have been widely used in the field of the Internet of Things (IoT) because of the good immersive experience they offer users, but their ultralow delay demands and high energy consumption pose a huge challenge to current communication systems and terminal power. The emergence of mobile-edge computing (MEC) provides a promising way to address this challenge. In this article, we study an energy-efficient task offloading and resource allocation scheme for AR in both single-MEC and multi-MEC systems. First, a more specific and detailed AR application model is established as a directed acyclic graph according to its internal functionality. Second, based on this AR model, a joint optimization problem of task offloading and resource allocation is formulated to minimize the energy consumption of each user subject to the latency requirement and the limited resources. The problem is a mixed multiuser competition and cooperation problem, which involves the task offloading decision, uplink/downlink transmission resource allocation, and computing resource allocation of users and the MEC server. Since it is an NP-hard problem and the communication environment is dynamic, it is difficult to solve with genetic or heuristic algorithms. Therefore, we propose an intelligent and efficient resource allocation and task offloading algorithm based on the deep reinforcement learning framework of multiagent deep deterministic policy gradient (MADDPG) in a dynamic communication environment. Finally, simulation results show that the proposed algorithm can greatly reduce the energy consumption of each user terminal.

Journal ArticleDOI
TL;DR: A new practical model for CRP that takes both fairness and satisfaction into account simultaneously is proposed, and experimental results show that MOACS generally outperforms the greedy algorithm and some other popular multi-objective optimization algorithms, especially on large-scale instances.
Abstract: The airline crew rostering problem (CRP) is significant for balancing the workload of crew and for improving the satisfaction rate of crew’s preferences, which is related to the fairness and satisfaction of crew. However, most existing work considers only one objective on fairness or satisfaction. In this study, we propose a new practical model for CRP that takes both fairness and satisfaction into account simultaneously. To solve the multi-objective CRP efficiently, we develop an ant colony system (ACS) algorithm based on the multiple populations for multiple objectives (MPMO) framework, termed multi-objective ACS (MOACS). The main contributions of MOACS lie in three aspects. Firstly, two ant colonies are utilized to optimize fairness and satisfaction objectives, respectively. Secondly, a new hybrid complementary heuristic strategy with three kinds of heuristic information schemes is proposed to avoid ant colonies focusing only on their own objectives. Ant colonies randomly choose one of the three schemes to help explore the Pareto front (PF) sufficiently. Thirdly, a local search strategy with two types of local search respectively for fairness and satisfaction is designed to further approach the global PF. The MOACS is applied to seven real-world monthly CRPs with different sizes from a major North-American airline. Experimental results show that MOACS generally outperforms the greedy algorithm and some other popular multi-objective optimization algorithms, especially on large-scale instances.

Journal ArticleDOI
TL;DR: The experimental results, analyzed with three analytical methods, show that the proposed improved iterative greedy algorithm based on groupthink (gIGA) performs significantly better than the compared algorithms in solving the DAPFSP with the TF criterion.

Journal ArticleDOI
TL;DR: The results show that the genetic algorithm can quickly search through the large solution space as compared to local search optimization algorithms to find an edge placement strategy that minimizes the cost function.
Abstract: Rapid developments in Industry 4.0, machine learning, and digital twins have introduced new latency, reliability, and processing restrictions for the Industrial Internet of Things (IIoT) and mobile devices. However, using current information and communications technology (ICT), it is difficult to optimally provide services that require high computing power and low latency. To meet these requirements, mobile-edge computing is emerging as a ubiquitous computing paradigm that enables the use of network infrastructure components, such as cluster heads/sink nodes in IIoT and cellular network base stations, to provide local data storage and computation servers at the edge of the network. However, selecting optimal locations for edge servers within a network out of a very large number of possibilities, for example to balance workload and minimize access delay, is a challenging problem. In this article, the edge server placement problem is addressed within an existing network infrastructure obtained from Shanghai Telecom's base station data set, which includes a significant number of call data records and the locations of actual base stations. The problem is formulated as a multiobjective constrained optimization problem that places edge servers strategically to balance the workloads of edge servers and reduce access delay between the industrial control center/cellular base stations and the edge servers. Randomly searching through a large number of possible solutions and selecting those closest to the optimum can be very time-consuming; therefore, we apply the genetic algorithm and local search algorithms (hill climbing and simulated annealing) to find the best solution in the smallest number of solution-space explorations. Experimental results compare the performance of the genetic algorithm against the above-mentioned local search algorithms. The results show that the genetic algorithm can search the large solution space more quickly than the local search optimization algorithms to find an edge placement strategy that minimizes the cost function.
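The Shanghai Telecom data set and the paper's exact cost function are not reproduced; the sketch below is a generic genetic algorithm over candidate edge-server placements in which the fitness is an assumed mix of average base-station-to-server distance (a proxy for access delay) and workload imbalance, so all constants and the cost model are illustrative.

```python
import random

def placement_cost(servers, stations):
    """Illustrative cost: mean Manhattan distance to the nearest server plus a
    penalty for workload imbalance across servers (a stand-in, not the paper's model)."""
    loads = [0.0] * len(servers)
    total_dist = 0.0
    for sx, sy, load in stations:
        d, k = min((abs(sx - x) + abs(sy - y), i) for i, (x, y) in enumerate(servers))
        total_dist += d
        loads[k] += load
    return total_dist / len(stations) + 0.01 * (max(loads) - min(loads))

def genetic_placement(stations, n_servers=3, pop_size=30, generations=100, seed=0):
    rng = random.Random(seed)
    xs, ys = [s[0] for s in stations], [s[1] for s in stations]
    random_placement = lambda: [(rng.uniform(min(xs), max(xs)),
                                 rng.uniform(min(ys), max(ys))) for _ in range(n_servers)]
    pop = [random_placement() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: placement_cost(p, stations))
        survivors = pop[:pop_size // 2]                        # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]   # uniform crossover per server
            if rng.random() < 0.3:                             # mutation: jitter one server
                i = rng.randrange(n_servers)
                child[i] = (child[i][0] + rng.gauss(0, 1), child[i][1] + rng.gauss(0, 1))
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda p: placement_cost(p, stations))
    return best, placement_cost(best, stations)

data_rng = random.Random(7)
stations = [(data_rng.uniform(0, 10), data_rng.uniform(0, 10), data_rng.randint(1, 5))
            for _ in range(40)]
best, cost = genetic_placement(stations)
print([(round(x, 2), round(y, 2)) for x, y in best], round(cost, 3))
```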

Journal ArticleDOI
TL;DR: Considering the characteristics of AI-based methods for the Fog application placement problem, this survey targets three main objectives: categorizing evolutionary algorithms, machine learning algorithms, and combinatorial algorithms into subcategories.

Proceedings ArticleDOI
14 Nov 2021
TL;DR: In this article, a novel federated learning system with asynchronous tiers under non-i.i.d. training data is proposed, which combines synchronous, intra-tier and asynchronous, cross-tier training.
Abstract: Federated learning (FL) involves training a model over massive distributed devices, while keeping the training data localized and private. This form of collaborative learning exposes new tradeoffs among model convergence speed, model accuracy, balance across clients, and communication cost, with new challenges including: (1) straggler problem---where clients lag due to data or (computing and network) resource heterogeneity, and (2) communication bottleneck---where a large number of clients communicate their local updates to a central server and bottleneck the server. Many existing FL methods focus on optimizing along only one single dimension of the tradeoff space. Existing solutions use asynchronous model updating or tiering-based, synchronous mechanisms to tackle the straggler problem. However, asynchronous methods can easily create a communication bottleneck, while tiering may introduce biases that favor faster tiers with shorter response latencies. To address these issues, we present FedAT, a novel Federated learning system with Asynchronous Tiers under Non-i.i.d. training data. FedAT synergistically combines synchronous, intra-tier training and asynchronous, cross-tier training. By bridging the synchronous and asynchronous training through tiering, FedAT minimizes the straggler effect with improved convergence speed and test accuracy. FedAT uses a straggler-aware, weighted aggregation heuristic to steer and balance the training across clients for further accuracy improvement. FedAT compresses uplink and downlink communications using an efficient, polyline-encoding-based compression algorithm, which minimizes the communication cost. Results show that FedAT improves the prediction performance by up to 21.09% and reduces the communication cost by up to 8.5×, compared to state-of-the-art FL methods.
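FedAT's published aggregation formula is not given in this summary, so the sketch below shows one plausible straggler-aware weighted average over per-tier models in which tiers that have contributed fewer updates receive larger weights; the weighting rule, data, and function names are assumptions for illustration, not FedAT's actual method.

```python
def straggler_aware_aggregate(tier_models, tier_update_counts):
    """Weighted average of per-tier model vectors.
    Tiers that have contributed fewer updates (stragglers) get proportionally
    larger weights (an illustrative rule, not FedAT's exact formula)."""
    inv = [1.0 / max(1, c) for c in tier_update_counts]
    total = sum(inv)
    weights = [w / total for w in inv]
    dim = len(tier_models[0])
    aggregated = [sum(weights[t] * tier_models[t][i] for t in range(len(tier_models)))
                  for i in range(dim)]
    return aggregated, weights

# Toy usage: three tiers, the third tier is a straggler with only 2 updates so far.
models = [[1.0, 1.0], [2.0, 0.0], [0.0, 4.0]]
aggregated, weights = straggler_aware_aggregate(models, [10, 8, 2])
print([round(w, 3) for w in weights], [round(v, 3) for v in aggregated])
```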

Journal ArticleDOI
TL;DR: The Generalized Particle Swarm Optimization (GEPSO) algorithm is introduced as a new version of the PSO algorithm for continuous-space optimization; it enriches the original PSO by incorporating two new terms into the velocity updating equation, which aim to deepen the interrelations of particles and their knowledge sharing, increase variety in the swarm, and provide a better search of unexplored areas of the search space.
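The two extra velocity terms that GEPSO introduces are not spelled out in this summary; for reference, the sketch below implements only the standard PSO velocity and position update that GEPSO extends, applied to a simple sphere function with commonly used (but here assumed) parameter values.

```python
import random

def pso_sphere(dim=5, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard PSO on f(x) = sum(x_i^2); GEPSO would add further velocity terms."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    gbest = min(pbest, key=f)[:]                  # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive term
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social term
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest, f(gbest)

print(pso_sphere())
```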

Journal ArticleDOI
TL;DR: In this paper, an opposition-based learning grey wolf optimizer (OGWO) is proposed to boost the performance of GWO; it helps the algorithm escape local optima without increasing the computational complexity.
Abstract: The grey wolf optimizer (GWO) is a novel swarm intelligence algorithm. It has received considerable interest from the heuristic algorithm community for its strong optimization capacity and few parameters. However, it is also prone to becoming trapped in local optima when solving complex and multimodal functions. To boost the performance of GWO, an opposition-based learning grey wolf optimizer (OGWO) is proposed. The opposition-based learning approach is incorporated into GWO with a jumping rate, which helps the algorithm escape local optima without increasing the computational complexity. Moreover, the coefficient a is dynamically adjusted by a nonlinear function to balance exploration and exploitation. A series of experiments reveals that the proposed algorithm is superior to conventional heuristic algorithms as well as to GWO and its variants.
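The full OGWO update is not reproduced; the fragment below only illustrates the opposition-based learning step it incorporates: with a given jumping rate, each candidate is reflected about the center of the search bounds and the better of the point and its opposite is kept. The bounds, jumping rate, and objective are illustrative assumptions.

```python
import random

def opposition_step(population, lower, upper, objective, jumping_rate=0.3, rng=random):
    """With probability jumping_rate, replace each solution by its opposite point
    (lower + upper - x per dimension) whenever the opposite has a better objective value."""
    if rng.random() >= jumping_rate:
        return population                        # skip the opposition jump this iteration
    improved = []
    for x in population:
        opposite = [lo + hi - xi for xi, lo, hi in zip(x, lower, upper)]
        improved.append(min(x, opposite, key=objective))
    return improved

# Toy usage: minimize the sphere function over the asymmetric box [-2, 10]^3.
rng = random.Random(3)
sphere = lambda x: sum(v * v for v in x)
pop = [[rng.uniform(-2, 10) for _ in range(3)] for _ in range(5)]
pop = opposition_step(pop, [-2] * 3, [10] * 3, sphere, jumping_rate=1.0, rng=rng)
print([round(sphere(x), 3) for x in pop])
```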

Journal ArticleDOI
TL;DR: In this article, an efficient computing approach is applied to model the spread of Human Immunodeficiency Virus (HIV) infection, which involves CD4+ T-cells, using feed-forward artificial neural networks (FF-ANNs) trained with particle swarm optimization (PSO) and the interior point method (IPM).