
Showing papers on "Load balancing (computing)" published in 2019


Journal ArticleDOI
TL;DR: This paper formulates the joint load balancing and offloading problem as a mixed integer nonlinear program to maximize system utility and develops a low-complexity algorithm that jointly selects VEC servers and optimizes the offloading ratio and computation resources.
Abstract: The emergence of computation-intensive and delay-sensitive on-vehicle applications makes it quite a challenge for vehicles to provide the required computation capacity and, consequently, the desired performance. Vehicular edge computing (VEC) is a new computing paradigm with great potential to enhance vehicular performance by offloading applications from the resource-constrained vehicles to lightweight and ubiquitous VEC servers. Nevertheless, offloading schemes where all vehicles offload their tasks to the same VEC server can limit the performance gain due to overload. To address this problem, in this paper, we propose integrating load balancing with offloading, and we study resource allocation for a multiuser multiserver VEC system. First, we formulate the joint load balancing and offloading problem as a mixed integer nonlinear programming problem to maximize system utility. In particular, we take the IEEE 802.11p protocol into consideration when modeling the system utility. Then, we decouple the problem into two subproblems and develop a low-complexity algorithm that jointly selects the VEC server and optimizes the offloading ratio and computation resources. Numerical results show that the proposed algorithm converges quickly and that our joint VEC server selection and offloading algorithm outperforms the benchmark solutions.

228 citations
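
To make the decoupled structure above concrete, here is a minimal sketch under assumed parameter names (task cycles C, transfer size D, CPU frequencies, uplink rate): each vehicle greedily picks the least-loaded server, then sets its offloading ratio so the local and offloaded parts finish together. This illustrates the general idea only, not the paper's 802.11p-based utility model or its actual algorithm.

```python
# Minimal sketch of joint server selection + partial offloading.
# Assumed model: a task needs C cpu cycles and D bits to transfer;
# local cpu f_loc, server cpu f_srv, uplink rate r. Not the paper's model.

def offload_ratio(C, D, f_loc, f_srv, r):
    """Fraction x of the task to offload so local and remote parts
    finish together (local: (1-x)*C/f_loc, remote: x*(D/r + C/f_srv))."""
    t_loc = C / f_loc                  # time to run the whole task locally
    t_rem = D / r + C / f_srv          # time to ship + run it remotely
    return t_loc / (t_loc + t_rem)     # equalizes the two parallel parts

def assign(vehicles, servers):
    """Greedy server selection: each vehicle picks the currently
    least-loaded server, then splits its task."""
    load = {s["id"]: 0.0 for s in servers}
    plan = {}
    for v in vehicles:
        s = min(servers, key=lambda s: load[s["id"]])
        x = offload_ratio(v["C"], v["D"], v["f_loc"], s["f_srv"], v["r"])
        load[s["id"]] += x * v["C"] / s["f_srv"]
        plan[v["id"]] = (s["id"], round(x, 3))
    return plan

servers = [{"id": 0, "f_srv": 10e9}, {"id": 1, "f_srv": 8e9}]
vehicles = [{"id": i, "C": 2e9, "D": 4e6, "f_loc": 1e9, "r": 6e6}
            for i in range(4)]
print(assign(vehicles, servers))
```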


Journal ArticleDOI
TL;DR: This article constructs an energy-efficient scheduling framework for MEC-enabled IoVs that minimizes the energy consumption of RSUs under task latency constraints, satisfying the heterogeneous communication, computation, and storage requirements of IoVs.
Abstract: Although modern transportation systems facilitate the daily life of citizens, the ever-increasing energy consumption and air pollution challenge the establishment of green cities. Current studies on green IoV generally concentrate on energy management of either battery-enabled RSUs or electric vehicles. However, computing tasks and load balancing among RSUs have not been fully investigated. To satisfy the heterogeneous requirements of communication, computation, and storage in IoVs, this article constructs an energy-efficient scheduling framework for MEC-enabled IoVs to minimize the energy consumption of RSUs under task latency constraints. Specifically, a heuristic algorithm is put forward that jointly considers task scheduling among MEC servers and the downlink energy consumption of RSUs. To the best of our knowledge, this is among the first works to focus on the energy consumption control of MEC-enabled RSUs. Performance evaluations demonstrate the effectiveness of our framework in terms of energy consumption, latency, and task blocking probability. Finally, this article elaborates on major challenges and open issues toward energy-efficient scheduling in IoVs.

200 citations
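
As a rough illustration of this kind of latency-constrained, energy-greedy scheduling, the sketch below assigns each task to the MEC server that meets its deadline at the lowest incremental energy. The energy and latency models (cycles per task, joules per cycle) are invented placeholders; the paper's heuristic additionally models RSU downlink energy.

```python
# Hedged sketch: latency-constrained, energy-greedy task assignment.
# Energy/latency models are placeholders, not the paper's formulation.

def schedule(tasks, servers):
    """Assign each task (cycles, deadline_s) to the feasible server
    with the lowest incremental energy; None if no server fits."""
    busy = {s["id"]: 0.0 for s in servers}   # queued seconds per server
    out = {}
    for t in sorted(tasks, key=lambda t: t["deadline_s"]):
        best, best_e = None, float("inf")
        for s in servers:
            latency = busy[s["id"]] + t["cycles"] / s["hz"]
            energy = t["cycles"] * s["joule_per_cycle"]
            if latency <= t["deadline_s"] and energy < best_e:
                best, best_e = s, energy
        if best:
            busy[best["id"]] += t["cycles"] / best["hz"]
        out[t["id"]] = best["id"] if best else None
    return out

servers = [{"id": "rsu0", "hz": 5e9, "joule_per_cycle": 2e-9},
           {"id": "rsu1", "hz": 2e9, "joule_per_cycle": 1e-9}]
tasks = [{"id": i, "cycles": 1e9, "deadline_s": 0.5 + 0.2 * i}
         for i in range(5)]
print(schedule(tasks, servers))
```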


Journal ArticleDOI
TL;DR: A hybrid task scheduling algorithm named FMPSO, based on a fuzzy system and a modified particle swarm optimization technique, is proposed to enhance load balancing and cloud throughput while minimizing execution time and resource usage.

122 citations


Journal ArticleDOI
TL;DR: A systematic literature review of hierarchical energy-efficient routing protocols reported from 2012 to 2017 is conducted, providing a technical direction for researchers on how to develop routing protocols.
Abstract: In recent years, wireless sensor networks (WSNs) have played a major role in applications such as tracking and monitoring in remote environments. Designing energy-efficient protocols for routing data events is a major challenge due to the dynamic topology and distributed nature of WSNs. The main aim of this paper is to discuss hierarchical routing protocols for improving energy efficiency and network lifetime. The paper discusses hierarchical energy-efficient routing protocols based on classical and swarm intelligence approaches. The routing protocols in both categories can be summarized according to energy efficiency, data aggregation, location awareness, QoS, scalability, load balancing, fault tolerance, and query-based and multipath routing. A systematic literature review has been conducted for hierarchical energy-efficient routing protocols reported from 2012 to 2017. This survey provides a technical direction for researchers on how to develop routing protocols. Finally, research gaps in the reviewed protocols and potential future directions are discussed.

120 citations


Proceedings ArticleDOI
01 Oct 2019
TL;DR: The proposed DeepSlice model will be able to make smart decisions and select the most appropriate network slice, even in case of a network failure, utilizing in-network deep learning and prediction.
Abstract: Existing cellular communications and the upcoming 5G mobile network require meeting high-reliability standards, very low latency, higher capacity, stronger security, and high-speed user connectivity. Mobile operators are looking for a programmable solution that will allow them to accommodate multiple independent tenants on the same physical infrastructure, and 5G networks allow for end-to-end network resource allocation using the concept of Network Slicing (NS). Data-driven decision making will be vital in future communication networks due to the traffic explosion, and Artificial Intelligence (AI) will accelerate 5G network performance. In this paper, we develop a ‘DeepSlice’ model by implementing a Deep Learning (DL) neural network to manage network load efficiency and network availability, utilizing in-network deep learning and prediction. We use available network Key Performance Indicators (KPIs) to train our model to analyze incoming traffic and predict the network slice for an unknown device type. Intelligent resource allocation allows us to use the available resources on existing network slices efficiently and offer load balancing. Our proposed DeepSlice model will be able to make smart decisions and select the most appropriate network slice, even in the case of a network failure.

120 citations
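
The sketch below illustrates the general idea of KPI-driven slice prediction with a small multilayer perceptron on synthetic data. The feature set (throughput, latency, device density), the slice labels, and the network shape are assumptions for illustration, not the DeepSlice architecture.

```python
# Toy slice classifier in the spirit of DeepSlice; features, labels,
# and network shape are illustrative assumptions only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
SLICES = ["eMBB", "URLLC", "mMTC"]

def synth(n):
    """Synthetic KPIs: (throughput_mbps, latency_ms, device_density)."""
    y = rng.integers(0, 3, n)
    base = np.array([[500, 50, 10],     # eMBB: high throughput
                     [10, 1, 10],       # URLLC: ultra-low latency
                     [0.1, 100, 1000]]) # mMTC: massive device density
    X = base[y] * rng.uniform(0.5, 1.5, (n, 3))
    return X, y

X, y = synth(2000)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                    random_state=0).fit(X, y)

# Predict the slice for an unseen, latency-critical device.
probe = [[12.0, 0.8, 15.0]]
print(SLICES[clf.predict(probe)[0]])   # expected: URLLC
```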


Journal ArticleDOI
TL;DR: A state-of-the-art review of issues and challenges associated with existing load-balancing techniques for researchers to develop more effective algorithms is presented.
Abstract: With the growth of computing technologies, cloud computing has added a new paradigm to user services that allows accessing Information Technology services on a pay-per-use basis at any time and from any location. Owing to the flexibility of cloud services, numerous organizations are shifting their business to the cloud, and service providers are establishing more data centers to provide services to users. However, it is essential to provide cost-effective execution of tasks and proper utilization of resources. Several techniques have been reported in the literature to improve performance and resource use based on load balancing, task scheduling, resource management, quality of service, and workload management. Load balancing in the cloud allows data centers to avoid overloading or underloading virtual machines, which is itself a challenge in the field of cloud computing. It is therefore necessary for developers and researchers to design and implement suitable load balancers for parallel and distributed cloud environments. This survey presents a state-of-the-art review of issues and challenges associated with existing load-balancing techniques to help researchers develop more effective algorithms.

120 citations


Journal ArticleDOI
TL;DR: This article focuses on the principles and models of resource allocation algorithms in 5G network slicing, and introduces the basic ideas of SDN and NFV along with their roles in network slicing.
Abstract: With the rapid and sustained growth of network demands, 5G telecommunication networks are expected to provide flexible, scalable, and resilient communication and network services, not only for traditional network operators but also for vertical industries, OTT providers, and third parties with diverse requirements. Network slicing is a promising technology to establish customized end-to-end logical networks comprising dedicated and shared resources. By leveraging SDN and NFV, network slices and their associated resources can be tailored to satisfy diverse QoS and SLA requirements. Resource allocation in network slicing plays a pivotal role in load balancing, resource utilization, and networking performance. In this article, we focus on the principles and models of resource allocation algorithms in 5G network slicing. We first introduce the basic ideas of SDN and NFV and their roles in network slicing. The MO architecture of network slicing is also studied, which provides a fundamental framework for resource allocation algorithms. Then, resource types with corresponding isolation levels in RAN slicing and CN slicing are analyzed, respectively. Furthermore, we categorize the mathematical models of resource allocation algorithms based on their objectives and elaborate on them with typical examples. Finally, open research issues are identified along with potential solutions.

115 citations


Journal ArticleDOI
TL;DR: This paper surveys the state-of-the-art proposed techniques toward minimizing the control to data planes communication overhead and controllers’ consistency traffic to enhance the OpenFlow-SDN scalability in the context of logically centralized distributed SDN control plane architecture.
Abstract: Software-defined networking (SDN) is an emerging network architecture that promises to simplify network management, improve network resource utilization, and boost evolution and innovation in traditional networks. SDN allows the abstraction and centralized management of lower-level network functionalities by decoupling the network logic from the data forwarding devices into logically centralized distributed controllers. However, this separation introduces new scalability and performance challenges in large-scale networks with dynamic traffic and topology conditions. Many research studies have shown that centralization and maintaining global network visibility over the distributed SDN controllers introduce scalability concerns. This paper surveys state-of-the-art techniques for minimizing the control-to-data-plane communication overhead and the controllers' consistency traffic to enhance OpenFlow-SDN scalability in the context of a logically centralized distributed SDN control plane architecture. The survey mainly focuses on four issues: logically centralized visibility, link-state discovery, flow rule placement, and controllers' load balancing. In addition, this paper discusses each issue and presents an updated and detailed study of existing solutions and their limitations in enhancing OpenFlow-SDN scalability and performance. Moreover, it outlines the potential challenges that need to be addressed further to obtain adaptive and scalable OpenFlow-SDN flow control.

106 citations


Journal ArticleDOI
01 Jan 2019
TL;DR: A batch-based clustering and routing protocol is proposed in which the network topology divides the sensor field into equal-sized layers and clusters, together with a routing algorithm that introduces a new node role called “Forwarder”, capable of relaying the data collected from its own layer, as well as from farther forwarders, toward the base station.
Abstract: Advances in sensor technology have enabled the development of small, relatively inexpensive, low-power sensors that are connected through a wireless medium, forming what are known as Wireless Sensor Networks (WSNs). WSNs have a huge number of applications, among which are military target tracking and surveillance. However, sensors operate on limited power resources; therefore, utilizing those resources efficiently has attracted the attention of researchers. In this paper, we propose a Balanced Power-Aware Clustering and Routing protocol (BPA-CRP). Specifically, we develop a batch-based clustering and routing protocol in which the network topology divides the sensor field into equal-sized layers and clusters. The clustering algorithm allows any cluster to operate for multiple rounds (a batch) without any set-up overhead. BPA-CRP assigns four different broadcast ranges to each sensor. Moreover, BPA-CRP introduces a routing algorithm with a new node role called “Forwarder”, which relays the data collected from the layer it resides in, as well as from farther forwarders, toward the base station. Complementing the above, BPA-CRP ends a batch when the energy of any forwarder dips below a certain threshold. Additionally, BPA-CRP introduces the “Only Normal” operation mode, which prevents exhausted nodes from serving as cluster heads or forwarders any longer. All of these enhancements are not only energy-aware but also contribute to efficient load balancing. Finally, we define node death-handling rules that guarantee each node dies smoothly, without loss of data or disruption to the network. Simulation results showed exceptional performance of BPA-CRP over relevant works in terms of network lifetime and network energy utilization. The load balancing capability of BPA-CRP is validated as well.

105 citations


Journal ArticleDOI
TL;DR: The proposed BPT-CNN effectively improves the training performance of CNNs while maintaining accuracy, and introduces task decomposition and scheduling strategies with the objectives of thread-level load balancing and minimum waiting time for critical paths.
Abstract: Benefitting from large-scale training datasets and complex training networks, Convolutional Neural Networks (CNNs) are widely applied in various fields with high accuracy. However, the training process of CNNs is very time-consuming, as large numbers of training samples and iterative operations are required to obtain high-quality weight parameters. In this paper, we focus on the time-consuming training process of large-scale CNNs and propose a Bi-layered Parallel Training (BPT-CNN) architecture for distributed computing environments. BPT-CNN consists of two main components: (a) an outer-layer parallel training for multiple CNN subnetworks on separate data subsets, and (b) an inner-layer parallel training for each subnetwork. In the outer-layer parallelism, we address critical issues of distributed and parallel computing, including data communication, synchronization, and workload balance. A heterogeneous-aware Incremental Data Partitioning and Allocation (IDPA) strategy is proposed, in which large-scale training datasets are partitioned and allocated to the computing nodes in batches according to their computing power. To minimize synchronization waiting during the global weight update process, an Asynchronous Global Weight Update (AGWU) strategy is proposed. In the inner-layer parallelism, we further accelerate the training process for each CNN subnetwork on each computer, where the computation steps of the convolutional layer and the local weight training are parallelized based on task parallelism. We introduce task decomposition and scheduling strategies with the objectives of thread-level load balancing and minimum waiting time for critical paths. Extensive experimental results indicate that the proposed BPT-CNN effectively improves the training performance of CNNs while maintaining accuracy.

101 citations
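
The heterogeneity-aware partitioning idea behind IDPA can be illustrated in a few lines: give each node a share of the training samples proportional to its computing power, correcting for rounding drift. This is a minimal sketch of the proportional-allocation step only, not the paper's full batched scheme.

```python
# Hedged sketch of heterogeneity-aware data partitioning (IDPA-style):
# each node gets a share of samples proportional to its compute power.

def partition(n_samples, powers):
    """Return per-node sample counts proportional to `powers`,
    correcting rounding drift so the counts sum to n_samples."""
    total = sum(powers)
    shares = [int(n_samples * p / total) for p in powers]
    # Hand out the leftover samples to the fastest nodes first.
    leftover = n_samples - sum(shares)
    for i in sorted(range(len(powers)), key=lambda i: -powers[i]):
        if leftover == 0:
            break
        shares[i] += 1
        leftover -= 1
    return shares

# Three heterogeneous workers: relative speeds 4 : 2 : 1.
print(partition(10_000, [4.0, 2.0, 1.0]))  # -> [5715, 2857, 1428]
```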


Journal ArticleDOI
TL;DR: A new F2F (fog-to-fog) collaboration model is proposed that offloads incoming requests among fog nodes according to their load and processing capabilities, via a novel load balancing scheme known as the Fog Resource manAgeMEnt Scheme (FRAMES).

Journal ArticleDOI
TL;DR: The proposed energy-efficient clustering and hierarchical routing algorithm, EESRA, adopts a three-layer hierarchy to minimize the cluster heads’ load and randomize the selection of cluster heads to extend the network lifespan despite an increase in network size.
Abstract: Many recent wireless sensor network (WSN) routing protocols are enhancements to address specific issues with the “low-energy adaptive clustering hierarchy” (LEACH) protocol. Since the performance of LEACH deteriorates sharply with increasing network size, the challenge for new WSN protocols is to extend the network lifespan while maintaining high scalability. This paper introduces an energy-efficient clustering and hierarchical routing algorithm named energy-efficient scalable routing algorithm (EESRA). The goal of the proposed algorithm is to extend the network lifespan despite an increase in network size. The algorithm adopts a three-layer hierarchy to minimize the cluster heads’ load and randomize the selection of cluster heads. Moreover, EESRA uses multi-hop transmissions for intra-cluster communications to implement a hybrid WSN MAC protocol. This paper compares EESRA against other WSN routing protocols in terms of network performance with respect to changes in the network size. The simulation results show that EESRA outperforms the benchmarked protocols in terms of load balancing and energy efficiency on large-scale WSNs.

Journal ArticleDOI
TL;DR: The objective of this work is to introduce an integrated resource scheduling and load balancing algorithm for efficient cloud service provisioning; results show that the proposed method achieves better performance in terms of average success rate, resource scheduling efficiency, and response time.

Journal ArticleDOI
TL;DR: This paper proposes source-based and destination-based multipath cooperative routing algorithms, which deliver different parts of a data flow along multiple link-disjoint paths dynamically and cooperatively, and designs an efficient No-Stop-Wait ACK mechanism for the NCMCR protocol to accelerate the data transmission.
Abstract: Multipath routing can significantly improve network throughput and reduce end-to-end (e2e) delay. Network-coding-based multipath routing removes the complicated coordination among multiple paths and thus further enhances data transmission efficiency. Traditional network-coding-based multipath routing protocols, however, are inefficient for Low Earth Orbit (LEO) satellite networks, with their long link delays and regular network topology. Considering these characteristics, in this paper we first formulate the multipath cooperative routing problem and then propose a Network Coding based Multipath Cooperative Routing (NCMCR) protocol for LEO satellite networks to improve throughput. We propose source-based and destination-based multipath cooperative routing algorithms, which deliver different parts of a data flow along multiple link-disjoint paths dynamically and cooperatively. Furthermore, we design an efficient No-Stop-Wait ACK mechanism for our NCMCR protocol to accelerate data transmission, whereby a source node continuously sends subsequent batches before receiving ACK messages for previously sent batches. Under the proposed acknowledgement mechanism, we theoretically analyze the number of coded packets that should be sent and the number of transmissions per batch required for successful decoding. NS2-based simulation results demonstrate that NCMCR outperforms the most closely related protocols.

Journal ArticleDOI
TL;DR: An optimized solution for network-assisted adaptation specifically targeted at mobile streaming in multi-access edge computing (MEC) environments is presented, together with a heuristic-based algorithm that needs minimal parameter tuning and has relatively low complexity.
Abstract: Nearly all bitrate-adaptive video content delivered today is streamed using protocols that run a purely client-based adaptation logic. The resulting lack of coordination may lead to suboptimal user experience and resource utilization. As a response, approaches that include the network and servers in the adaptation process are emerging. In this article, we present an optimized solution for network-assisted adaptation specifically targeted at mobile streaming in multi-access edge computing (MEC) environments. Due to the NP-hardness of the problem, we have designed a heuristic-based algorithm that needs minimal parameter tuning and has relatively low complexity. We then study the performance of this solution against two popular client-based solutions, namely Buffer-Based Adaptation (BBA) and Rate-Based Adaptation (RBA), as well as against another network-assisted solution. Our objective is twofold: first, to demonstrate the efficiency of our solution, and second, to quantify the benefits of network-assisted adaptation over client-based approaches in mobile edge computing scenarios. The results from our simulations reveal that network-assisted adaptation clearly outperforms the purely client-based DASH heuristics in some, though not all, of the metrics, particularly in situations where the achievable throughput is moderately high or where the link qualities of the mobile clients do not differ substantially from one another.

Journal ArticleDOI
TL;DR: A lightweight and privacy-friendly masking-based spatial data aggregation scheme for secure forecasting of power demand in smart grids is proposed, along with a secure billing solution for smart grids.
Abstract: The concept of smart metering allows real-time measurement of power demand which in turn is expected to result in more efficient energy use and better load balancing. However, finely granular measurements reported by smart meters can lead to starkly increased exposure of sensitive information, including various personal attributes and activities. Even though several security solutions have been proposed in recent years to address this issue, most of the existing solutions are based on public-key cryptographic primitives, such as homomorphic encryption and elliptic curve digital signature algorithms which are ill-suited for the resource constrained smart meters. On the other hand, to address the computational inefficiency issue, some masking-based solutions have been proposed. However, these schemes cannot ensure some of the imperative security properties, such as consumer’s privacy and sender authentication. In this paper, we first propose a lightweight and privacy-friendly masking-based spatial data aggregation scheme for secure forecasting of power demand in smart grids. Our scheme only uses lightweight cryptographic primitives, such as hash functions and exclusive-OR operations. Subsequently, we propose a secure billing solution for smart grids. As compared with existing solutions, our scheme is simple and can ensure better privacy protection and computational efficiency, which are essential for smart grids.
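
The abstract specifies only hash and XOR primitives. As a sketch of how pairwise masking generally works (not necessarily this paper's construction): each pair of meters derives a per-epoch mask from a hash of a shared key; one meter adds it and the other subtracts it, so all masks cancel when the aggregator sums the blinded reports.

```python
# Sketch of pairwise additive masking for private aggregation.
# This construction is a generic illustration, not the paper's scheme.
import hashlib

MOD = 2 ** 32

def mask(key: bytes, epoch: int) -> int:
    """Per-epoch pairwise mask derived from a shared key via SHA-256."""
    h = hashlib.sha256(key + epoch.to_bytes(8, "big")).digest()
    return int.from_bytes(h[:4], "big")

def report(i, reading, keys, epoch):
    """Meter i blinds its reading: add masks shared with higher-indexed
    meters, subtract masks shared with lower-indexed ones."""
    m = reading
    for j, k in keys[i].items():
        m += mask(k, epoch) if j > i else -mask(k, epoch)
    return m % MOD

# Three meters with pairwise shared keys.
keys = {0: {1: b"k01", 2: b"k02"},
        1: {0: b"k01", 2: b"k12"},
        2: {0: b"k02", 1: b"k12"}}
readings = [13, 7, 22]
epoch = 42

blinded = [report(i, r, keys, epoch) for i, r in enumerate(readings)]
total = sum(blinded) % MOD        # masks cancel pairwise
print(total, "==", sum(readings))
```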

Journal ArticleDOI
TL;DR: The proposed GWO-based approach yields higher values of both the clustering and routing fitness functions compared to existing algorithms, namely the genetic algorithm, particle swarm optimization, and multi-objective fuzzy clustering.

Journal ArticleDOI
TL;DR: A detailed encyclopedic review of load balancing techniques is presented, with crucial challenges addressed so as to guide the development of efficient load balancing algorithms in the future.
Abstract: The load unbalancing problem is a multi-variant, multi-constraint problem that degrades the performance and efficiency of computing resources. Load balancing techniques address the two undesirable facets of load unbalancing: overloading and under-loading. Despite the importance of load balancing techniques, to the best of our knowledge there is no comprehensive, extensive, systematic, and hierarchical classification of the existing load balancing techniques. Further, the factors that cause the load unbalancing problem are neither studied nor considered in the literature. This paper presents a detailed encyclopedic review of load balancing techniques. The advantages and limitations of existing methods are highlighted, and crucial challenges are addressed so as to guide the development of more efficient load balancing algorithms in the future. The paper also suggests new insights toward load balancing in cloud computing.

Journal ArticleDOI
TL;DR: To solve the load-balancing problem in cloud environments, the advantages and disadvantages of nature-inspired meta-heuristic algorithms are analyzed, and their significant challenges are considered in order to propose more effective techniques in the future.

Journal ArticleDOI
TL;DR: In this paper, a load balancing user association scheme for mmWave MIMO cellular networks is proposed, where the user association problem is formulated as mixed integer nonlinear programming and a polynomial-time algorithm called worst connection swapping (WCS) is designed to find a near-optimal solution.
Abstract: User association is necessary in dense millimeter wave (mmWave) networks to determine which base station a user connects to in order to balance base station loads and maximize a network utility. Given that mmWave connections are highly directional and vulnerable to small channel variations, user association changes these connections and hence significantly affects the network interference and consequently the users’ instantaneous rates. In this paper, we introduce a new load balancing user association scheme for mmWave MIMO cellular networks that considers these dependencies. We formulate the user association problem as a mixed integer nonlinear program and design a polynomial-time algorithm, called worst connection swapping (WCS), to find a near-optimal solution. Simulation results confirm that the proposed user association scheme improves network performance significantly by adjusting the interference according to the association and, under max-min fairness, also enhances cell-edge users’ transmission rates. We also show how the proposed algorithm can be applied under mobility. Furthermore, the proposed WCS algorithm outperforms other generic algorithms for combinatorial programming, such as the genetic algorithm, in both accuracy and speed, running several orders of magnitude faster; for small networks, where exhaustive search is possible, it reaches the optimal solution.
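
A stripped-down version of the worst-connection-swapping idea reads as a local search: repeatedly take the user with the worst rate share and try re-associating it with another base station, keeping the move only if the network utility improves. The equal-time-sharing log-utility below is a placeholder, not the paper's mmWave MIMO interference model.

```python
# Hedged sketch of worst-connection-swapping (WCS) style local search.
# Rate/utility model is a simple placeholder, not the paper's model.
import math

def utility(assoc, gains):
    """Log-utility with equal time-sharing per BS (placeholder model)."""
    load = [assoc.count(b) for b in range(len(gains[0]))]
    return sum(math.log(gains[u][b] / load[b])
               for u, b in enumerate(assoc))

def wcs(gains, iters=100):
    n_users, n_bs = len(gains), len(gains[0])
    assoc = [max(range(n_bs), key=lambda b: gains[u][b])
             for u in range(n_users)]          # start: best-signal BS
    for _ in range(iters):
        cur = utility(assoc, gains)
        # Pick the user with the worst individual rate share.
        load = [assoc.count(b) for b in range(n_bs)]
        worst = min(range(n_users),
                    key=lambda u: gains[u][assoc[u]] / load[assoc[u]])
        best_move, best_val = None, cur
        for b in range(n_bs):                  # try re-associating it
            if b == assoc[worst]:
                continue
            trial = assoc[:]
            trial[worst] = b
            v = utility(trial, gains)
            if v > best_val:
                best_move, best_val = b, v
        if best_move is None:                  # no improving swap left
            break
        assoc[worst] = best_move
    return assoc

gains = [[5, 4]] * 5 + [[2, 6]]   # 6 users, 2 BSs; BS0 slightly better
print(wcs(gains))                 # two users get swapped to BS1
```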

Journal ArticleDOI
TL;DR: This paper proposes a hybrid metaheuristic technique that combines osmotic behavior with bio-inspired load balancing algorithms to achieve load balancing between physical machines; results show that OH_BAC decreases energy consumption, the number of VM migrations, and the number of shutdown hosts compared to existing algorithms.
Abstract: Cloud computing is growing rapidly as a successful paradigm offering on-demand infrastructure, platform, and software services to clients. Load balancing is one of the important issues in cloud computing: the dynamic workload must be distributed equally among all the nodes to avoid situations where some nodes are overloaded while others are underloaded. Many algorithms have been suggested to perform this task. Recently, a new paradigm for optimization search has emerged by applying osmosis theory from chemistry, forming osmotic computing. Osmotic computing aims to achieve balance in highly distributed environments. The main goal of this paper is to propose a hybrid metaheuristic technique that combines osmotic behavior with bio-inspired load balancing algorithms. The osmotic behavior enables the automatic deployment of virtual machines (VMs) that are migrated through cloud infrastructures. Since the hybrid artificial bee colony and ant colony optimization has proved its efficiency in dynamic cloud computing environments, the paper exploits the advantages of these bio-inspired algorithms to form an osmotic hybrid artificial bee and ant colony (OH_BAC) optimization load balancing algorithm. It overcomes the drawbacks of existing bio-inspired algorithms in achieving load balancing between physical machines. The simulation results show that OH_BAC decreases energy consumption, the number of VM migrations, and the number of shutdown hosts compared to existing algorithms. In addition, it enhances quality of service (QoS), measured by service level agreement violations (SLAV) and performance degradation due to migrations (PDM).

Journal ArticleDOI
TL;DR: A learning-based network path planning method under forwarding constraints for finer-grained and effective traffic engineering is proposed and a sequence-to-sequence model is adapted to learn implicit forwarding paths based on empirical network traffic data.

Journal ArticleDOI
TL;DR: The performance of the proposed load balancing method is evaluated against existing load balancing methods, such as HBB-LB, DLB, and HDLB, using load and capacity as the evaluation metrics.
Abstract: Load balancing is a significant task in cloud computing because cloud servers need to store a vast amount of information, which increases the load on the servers. The objective of a load balancing technique is to maintain a trade-off across servers by distributing the load equally with less power. Accordingly, this paper presents a load balancing technique based on a constraint measure. Initially, the capacity and load of each virtual machine are calculated. If the load of a virtual machine is greater than the balanced threshold value, then the load balancing algorithm is used for allocating the tasks. The load balancing algorithm calculates the deciding factor of each virtual machine and checks the load of the virtual machine. It then calculates the selection factor of each task, and the task with the best selection factor is allocated to the virtual machine. The performance of the proposed load balancing method is evaluated against existing load balancing methods, such as HBB-LB, DLB, and HDLB, for the evaluation metrics of load and capacity. The experimental results show that the proposed method migrates only three tasks, while the existing method HDLB migrates seven.
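
The abstract names a per-VM deciding factor and a per-task selection factor but does not give their formulas, so the sketch below substitutes plausible placeholders (spare capacity and task size, respectively) purely to make the control flow concrete; it is not the paper's method.

```python
# Illustrative control flow only: the deciding/selection factor
# formulas below are placeholders, not those in the paper.

def balance(vms, threshold=0.8):
    """Migrate tasks from overloaded VMs (load/capacity > threshold)
    to the VM with the highest deciding factor (spare capacity)."""
    migrations = []
    for vm in vms:
        while vm["load"] / vm["capacity"] > threshold and vm["tasks"]:
            # Selection factor placeholder: pick the largest task.
            task = max(vm["tasks"], key=lambda t: t["size"])
            # Deciding factor placeholder: most spare capacity.
            target = max((v for v in vms if v is not vm),
                         key=lambda v: v["capacity"] - v["load"])
            if target["load"] + task["size"] > target["capacity"]:
                break                       # nowhere to put it
            vm["tasks"].remove(task)
            vm["load"] -= task["size"]
            target["tasks"].append(task)
            target["load"] += task["size"]
            migrations.append((task["id"], vm["id"], target["id"]))
    return migrations

vms = [{"id": 0, "capacity": 100, "load": 95,
        "tasks": [{"id": "t1", "size": 40}, {"id": "t2", "size": 55}]},
       {"id": 1, "capacity": 100, "load": 20,
        "tasks": [{"id": "t3", "size": 20}]}]
print(balance(vms))   # -> [('t2', 0, 1)]
```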

Journal ArticleDOI
Xinlu Li, Brian Keegan, Fredrick Mtenzi, Thomas Weise, Ming Tan
TL;DR: The Energy-Efficient Load Balancing Ant-based Routing Algorithm (EBAR) as discussed by the authors adopts a pseudo-random route discovery algorithm and an improved pheromone trail update scheme to balance the energy consumption of the sensor nodes.
Abstract: Wireless Sensor Networks (WSNs) are a type of self-organizing networks with limited energy supply and communication ability. One of the most crucial issues in WSNs is to use an energy-efficient routing protocol to prolong the network lifetime. We therefore propose the novel Energy-Efficient Load Balancing Ant-based Routing Algorithm (EBAR) for WSNs. EBAR adopts a pseudo-random route discovery algorithm and an improved pheromone trail update scheme to balance the energy consumption of the sensor nodes. It uses an efficient heuristic update algorithm based on a greedy expected energy cost metric to optimize the route establishment. Finally, in order to reduce the energy consumption caused by the control overhead, EBAR utilizes an energy-based opportunistic broadcast scheme. We simulate WSNs in different application scenarios to evaluate EBAR with respect to performance metrics such as energy consumption, energy efficiency, and predicted network lifetime. The results of this comprehensive study show that EBAR provides a significant improvement in comparison to the state-of-the-art approaches EEABR, SensorAnt, and IACO.
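
EBAR's pseudo-random route discovery is in the family of ant colony system's pseudo-random proportional rule. The sketch below shows that generic pattern together with an energy-weighted pheromone update; the constants and the exact weighting are illustrative assumptions, not EBAR's published update rule.

```python
# Generic ACS-style next-hop choice + energy-aware pheromone update,
# in the spirit of EBAR; constants and weighting are illustrative.
import random

Q0, RHO, ALPHA, BETA = 0.7, 0.1, 1.0, 2.0

def next_hop(neighbors, tau, eta):
    """Pseudo-random proportional rule: exploit the argmax with
    probability Q0, otherwise sample proportionally to tau^a * eta^b."""
    score = {n: tau[n] ** ALPHA * eta[n] ** BETA for n in neighbors}
    if random.random() < Q0:
        return max(neighbors, key=score.get)
    total = sum(score.values())
    r, acc = random.uniform(0, total), 0.0
    for n in neighbors:
        acc += score[n]
        if acc >= r:
            return n
    return neighbors[-1]

def update_pheromone(tau, path, residual_energy):
    """Evaporate, then reinforce a used path in proportion to the
    minimum residual energy along it (illustrative energy weighting)."""
    for n in tau:
        tau[n] *= (1 - RHO)
    bonus = min(residual_energy[n] for n in path)
    for n in path:
        tau[n] += RHO * bonus

tau = {"a": 1.0, "b": 1.0, "c": 1.0}
eta = {"a": 0.5, "b": 0.9, "c": 0.3}     # e.g. inverse expected cost
print(next_hop(["a", "b", "c"], tau, eta))
update_pheromone(tau, ["b"], {"a": 0.8, "b": 0.6, "c": 0.9})
print(tau)
```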

Journal ArticleDOI
TL;DR: Simulation results show that the proposed LBLP can work with the existing routing protocols to improve the network throughput substantially and balance the load even when the switching delay is large.
Abstract: Cooperative channel allocation and scheduling are key issues in wireless mesh networks with multiple interfaces and multiple channels. In this paper, we propose a load balance link layer protocol (LBLP) aiming to cooperatively manage the interfaces and channels to improve network throughput. In LBLP, an interface can work in a sending or receiving mode. For the receiving interfaces, the channel assignment is proposed considering the number, position and status of the interfaces, and a task allocation algorithm based on the Huffman tree is developed to minimize the mutual interference. A dynamic link scheduling algorithm is designed for the sending interfaces, making the tradeoff between the end-to-end delay and the interface utilization. A portion of the interfaces can adjust their modes for load balancing according to the link status and the interface load. Simulation results show that the proposed LBLP can work with the existing routing protocols to improve the network throughput substantially and balance the load even when the switching delay is large.

Journal ArticleDOI
TL;DR: A new Cluster Size Load Balancing for CS algorithm (CSLB-CS) is proposed, which can achieve optimal utilization of the CS method in an IoT-based sensor network and exceeds the performance of hybrid CS and plain CS in terms of network lifetime, overall energy consumption, total amount of data transmitted, and reconstruction error.

Journal ArticleDOI
TL;DR: Simulations demonstrate that BalCon and BalConPlus significantly reduce the load imbalance among SDN controllers by migrating only a small number of switches with low computation overhead; BalConPlus is also immune to the switch migration blackout, an adverse effect present in the baseline BalCon.
Abstract: Multiple distributed controllers have been used in software-defined networks (SDNs) to improve scalability and reliability, where each controller manages one static partition of the network. In this paper, we show that dynamic mapping between switches and controllers can improve efficiency in managing traffic load variations. In particular, we propose balanced controller (BalCon) and BalConPlus, two SDN switch migration schemes that achieve load balance among SDN controllers with small migration cost. BalCon is suitable for scenarios where the network does not require serial processing of switch requests. For other scenarios, BalConPlus is more suitable, as it is immune to the switch migration blackout and does not cause any service disruption. Simulations demonstrate that BalCon and BalConPlus significantly reduce the load imbalance among SDN controllers by migrating only a small number of switches with low computation overhead. We also build a prototype testbed based on the open-source SDN framework RYU to verify the practicality and effectiveness of BalCon and BalConPlus. Experiments confirm the simulation results and show that BalConPlus is immune to the switch migration blackout, an adverse effect in the baseline BalCon.
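
The heart of a migration scheme of this kind can be sketched as a greedy loop: while the controller load spread exceeds a tolerance, move the switch from the hottest controller to the coolest one that best closes the gap. This simplification ignores migration cost and inter-switch traffic patterns, both of which BalCon accounts for.

```python
# Simplified greedy switch migration between SDN controllers.
# BalCon also weighs migration cost and traffic locality; this sketch
# only captures the load-spread reduction loop.

def rebalance(switch_load, placement, tol=0.1, max_moves=50):
    """Move switches from the hottest to the coolest controller until
    the relative load spread drops below `tol`."""
    moves = []
    for _ in range(max_moves):
        load = {}
        for sw, c in placement.items():
            load[c] = load.get(c, 0.0) + switch_load[sw]
        hot = max(load, key=load.get)
        cool = min(load, key=load.get)
        spread = (load[hot] - load[cool]) / load[hot]
        if spread <= tol:
            break
        gap = load[hot] - load[cool]
        # Pick the switch on `hot` whose move best halves the gap.
        cand = [s for s, c in placement.items() if c == hot]
        sw = min(cand, key=lambda s: abs(gap - 2 * switch_load[s]))
        if switch_load[sw] >= gap:       # moving it would not help
            break
        placement[sw] = cool
        moves.append((sw, hot, cool))
    return moves

switch_load = {"s1": 30, "s2": 25, "s3": 20, "s4": 5, "s5": 10}
placement = {"s1": "c1", "s2": "c1", "s3": "c1", "s4": "c2", "s5": "c2"}
print(rebalance(switch_load, placement))  # -> [('s1', 'c1', 'c2')]
print(placement)
```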

Journal ArticleDOI
TL;DR: Through an extensive performance evaluation study and simulation of large-scale scenarios, the results demonstrated that the protocol achieved better performance compared to the state-of-the-art solutions in terms of network lifetime, energy consumption, routing efficiency, sender waiting time, and duplicate packets.
Abstract: Opportunistic Routing (OR) is adapted to improve the performance of low duty-cycled Wireless Sensor Networks by exploiting their broadcast nature. In contrast to traditional routing, where packets are transmitted along pre-determined paths, OR uses a prioritization metric to select a set of candidates as potential forwarders. This solves the sender's waiting time problem. However, too many candidates may wake up simultaneously, generating more duplicate packets, occupying the restricted resources, and hindering packet delivery performance. Consequently, to restrict the number of candidates and to balance the waiting time problem against the duplicate packets problem, this paper proposes a new protocol that combines two main parts. First, each node defines a Candidates Zone (CZ) bounded by a regular four-cornered geometric shape. The packets generated by the node will be routed via paths within the CZ; specifically, only the nodes within the CZ may be selected as candidates. The size of the CZ is controlled by the network density. Second, the candidates within the CZ are prioritized based on the OR metric, which is defined as the product of four distributions: the direction distribution, transmission-distance distribution, perpendicular-distance distribution, and residual-energy distribution. Through an extensive performance evaluation study and simulation of large-scale scenarios, the results demonstrated that our protocol achieves better performance compared to state-of-the-art solutions in terms of network lifetime, energy consumption, routing efficiency, sender waiting time, and duplicate packets.
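
The product-form OR metric can be illustrated geometrically. In the sketch below each of the four factors (direction, transmission distance, perpendicular distance, residual energy) is a simplified stand-in; the paper's actual distribution functions differ.

```python
# Sketch of a product-form OR candidate score; each factor is a
# simplified stand-in for the paper's four distributions.
import math

def or_score(sender, cand, sink, energy, e_max, tx_range):
    """Product of direction, transmission-distance,
    perpendicular-distance, and residual-energy factors."""
    sx, sy = sender
    cx, cy = cand
    tx_dist = math.dist(sender, cand)
    if tx_dist == 0 or tx_dist > tx_range:
        return 0.0
    # Direction: cosine of the angle between sender->cand and
    # sender->sink (clamped to [0, 1]).
    vx, vy = cx - sx, cy - sy
    wx, wy = sink[0] - sx, sink[1] - sy
    cos = (vx * wx + vy * wy) / (tx_dist * math.dist(sender, sink))
    direction = max(cos, 0.0)
    # Transmission distance: prefer longer (but in-range) hops.
    dist_f = tx_dist / tx_range
    # Perpendicular distance from the sender->sink line: prefer small.
    perp = abs(vx * wy - vy * wx) / math.dist(sender, sink)
    perp_f = max(1.0 - perp / tx_range, 0.0)
    # Residual energy: prefer well-charged candidates.
    energy_f = energy / e_max
    return direction * dist_f * perp_f * energy_f

sender, sink = (0.0, 0.0), (100.0, 0.0)
for cand, e in [((8.0, 1.0), 0.9), ((5.0, 6.0), 1.0), ((9.0, -4.0), 0.4)]:
    print(cand, round(or_score(sender, cand, sink, e, 1.0, 10.0), 3))
```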

Journal ArticleDOI
TL;DR: An EDA-GA hybrid scheduling algorithm based on EDA (estimation of distribution algorithm) and GA (genetic algorithm) that can effectively reduce the task completion time and improve the load balancing ability is developed.
Abstract: As one of the hot issues in cloud computing, task scheduling is an important way to meet user needs and achieve multiple goals. With the increasing number of cloud users and the growing demand for cloud computing, how to reduce task completion time and improve system load balancing ability has attracted increasing interest from academia and industry in recent years. To meet these two goals, this paper develops an EDA-GA hybrid scheduling algorithm based on EDA (estimation of distribution algorithm) and GA (genetic algorithm). First, the probability model and sampling method of EDA are used to generate a certain number of feasible solutions. Second, the crossover and mutation operations of GA are used to expand the search range of the solutions. Finally, the optimal scheduling strategy for assigning tasks to virtual machines is realized. The algorithm has the advantages of fast convergence and strong search ability. The algorithm proposed in this paper is compared with EDA and GA on the CloudSim simulation platform. The experimental results show that the EDA-GA hybrid algorithm can effectively reduce task completion time and improve load balancing ability.
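
A compact sketch of the EDA-GA flow described above: keep a probability matrix over task-to-VM assignments, sample a population from it (the EDA step), apply crossover and mutation (the GA step), and re-estimate the matrix from the elite solutions. The population sizes, rates, and makespan fitness are illustrative choices, not the paper's exact parameters.

```python
# Minimal EDA-GA hybrid for task->VM assignment (makespan fitness).
# Population sizes, rates, and the fitness model are illustrative.
import random

T, V = 8, 3                       # tasks, virtual machines
LEN = [4, 7, 3, 9, 5, 6, 2, 8]    # task lengths
SPEED = [2.0, 1.5, 1.0]           # VM speeds

def makespan(sol):
    t = [0.0] * V
    for task, vm in enumerate(sol):
        t[vm] += LEN[task] / SPEED[vm]
    return max(t)

def sample(P):
    """EDA step: draw one assignment per task from the model."""
    return [random.choices(range(V), weights=P[t])[0] for t in range(T)]

def evolve(pop, mut=0.1):
    """GA step: one-point crossover of random parents + mutation."""
    out = []
    for _ in pop:
        a, b = random.sample(pop, 2)
        cut = random.randrange(1, T)
        child = a[:cut] + b[cut:]
        if random.random() < mut:
            child[random.randrange(T)] = random.randrange(V)
        out.append(child)
    return out

P = [[1.0 / V] * V for _ in range(T)]          # uniform EDA model
best = None
for _ in range(30):
    pop = [sample(P) for _ in range(40)]       # EDA sampling
    pop = evolve(pop)                          # GA refinement
    pop.sort(key=makespan)
    elites = pop[:10]
    if best is None or makespan(pop[0]) < makespan(best):
        best = pop[0]
    for t in range(T):                         # re-estimate the model
        for v in range(V):
            cnt = sum(1 for e in elites if e[t] == v)
            P[t][v] = 0.9 * P[t][v] + 0.1 * (cnt / len(elites))

print(best, round(makespan(best), 2))
```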

Journal ArticleDOI
TL;DR: This paper proposes an approach based on a fine-grained Big Data monitoring method to collect and generate traffic statistics using counter values that can provide a more detailed view of network resource utilization.