
Showing papers in "Cluster Computing in 2021"


Journal ArticleDOI
TL;DR: This work presents a novel hybrid antlion optimization algorithm with elite-based differential evolution for solving multi-objective task scheduling problems in cloud computing environments and reveals that MALO outperformed other well-known optimization algorithms.
Abstract: Efficient task scheduling is considered as one of the main critical challenges in cloud computing. Task scheduling is an NP-complete problem, so finding the best solution is challenging, particularly for large task sizes. In the cloud computing environment, several tasks may need to be efficiently scheduled on various virtual machines by minimizing makespan and simultaneously maximizing resource utilization. We present a novel hybrid antlion optimization algorithm with elite-based differential evolution for solving multi-objective task scheduling problems in cloud computing environments. In the proposed method, which we refer to as MALO, the multi-objective nature of the problem derives from the need to simultaneously minimize makespan while maximizing resource utilization. The antlion optimization algorithm was enhanced by utilizing elite-based differential evolution as a local search technique to improve its exploitation ability and to avoid getting trapped in local optima. Two experimental series were conducted on synthetic and real trace datasets using the CloudSim tool kit. The results revealed that MALO outperformed other well-known optimization algorithms. MALO converged faster than the other approaches for larger search spaces, making it suitable for large scheduling problems. Finally, the results were analyzed using statistical t-tests, which showed that MALO obtained a significant improvement in the results.
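As an illustration of the two objectives named in this abstract, the sketch below scores a candidate task-to-VM assignment by a weighted sum of makespan and (inverted) resource utilization. It is only a minimal sketch: the array encoding, equal weights, and helper names are assumptions, not the authors' MALO implementation.

```python
import numpy as np

def fitness(schedule, task_lengths, vm_mips, w=0.5):
    """Score a candidate schedule on the two objectives in the abstract.

    schedule[i]     = index of the VM that task i is assigned to
    task_lengths[i] = length of task i (e.g., million instructions)
    vm_mips[j]      = processing speed of VM j
    """
    finish = np.zeros(len(vm_mips))
    for task, vm in enumerate(schedule):
        finish[vm] += task_lengths[task] / vm_mips[vm]

    makespan = finish.max()                  # to be minimized
    utilization = finish.mean() / makespan   # to be maximized, in (0, 1]
    # A weighted sum turns both objectives into one score to minimize.
    return w * makespan + (1.0 - w) * (1.0 - utilization)

# Toy example: 6 tasks on 3 VMs
rng = np.random.default_rng(0)
tasks = rng.integers(100, 1000, size=6)
vms = np.array([250, 500, 1000])
candidate = rng.integers(0, 3, size=6)
print(candidate, round(fitness(candidate, tasks, vms), 4))
```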

223 citations


Journal ArticleDOI
TL;DR: This conceptual model integrates the unified theory of acceptance and use of technology model with the task-technology fit (TTF) and information system success (ISS) models, with trust-based information technology innovation adoption constructs, and finds that the ISS, TTF, and UTAUT models positively influence the key factors affecting supply chain employees’ willingness to adopt blockchain.
Abstract: Blockchain overcomes numerous complicated problems related to the confidentiality, integrity, and availability of fast and secure distributed systems. Using data from a cross-sectoral survey of 449 industries, we investigate factors that hinder or facilitate blockchain adoption in supply chains. To capture the most vital aspects of blockchain adoption in supply chains, our conceptual model integrates the unified theory of acceptance and use of technology (UTAUT) model with the task-technology fit (TTF) and information system success (ISS) models, together with trust-based information technology innovation adoption constructs. Using structural equation modelling, we find that the ISS, TTF, and UTAUT models positively influence the key factors affecting supply chain employees’ willingness to adopt blockchain. Our results show that the UTAUT’s social influence factor has no significant effect on the intention to adopt blockchain, while inter-organisational trust has a significant effect on the relationship between the UTAUT dimension and the intention to adopt blockchain.

81 citations


Journal ArticleDOI
TL;DR: This paper presents a hybrid solution to the resource provisioning issue using workload analysis in a cloud environment, utilizing the Imperialist Competition Algorithm and K-means to cluster the workloads submitted by end-users.
Abstract: In recent years, the cloud computing paradigm has emerged as an internet-based technology to realize the utility model of computing for serving compute-intensive applications. In the cloud computing paradigm, IT and business resources, such as servers, storage, network, and applications, can be dynamically provisioned to cloud workloads submitted by end-users. Since the cloud workloads submitted to cloud providers are heterogeneous in terms of quality attributes, the management and analysis of cloud workloads to satisfy Quality of Service (QoS) requirements can play an important role in cloud resource management. Therefore, it is necessary to provision appropriate resources to cloud workloads by clustering them according to QoS metrics. In this paper, we present a hybrid solution to the resource provisioning issue using workload analysis in a cloud environment. Our solution utilizes the Imperialist Competition Algorithm (ICA) and K-means for clustering the workloads submitted by end-users. We also use a decision tree algorithm to determine scaling decisions for efficient resource provisioning. The effectiveness of the proposed approach is evaluated under two real workload traces. The simulation results demonstrate that the proposed solution reduces the total cost by up to 6.2% and the response time by up to 6.4%, and increases CPU utilization by up to 13.7% and elasticity by up to 30.8% compared with the other approaches.
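To make the two-step pipeline described above concrete, the sketch below clusters synthetic workload records by QoS-style attributes with plain K-means and then trains a decision tree on placeholder scaling labels. It is illustrative only: the features, thresholds, and labels are assumptions, and the paper's ICA-enhanced clustering is replaced by standard K-means.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Synthetic workload records described by QoS-style attributes:
# [requested CPU (MIPS), requested memory (MB), deadline (s)]
workloads = np.column_stack([
    rng.uniform(100, 2000, 300),
    rng.uniform(256, 8192, 300),
    rng.uniform(1, 60, 300),
])

# Step 1: cluster workloads by their QoS attributes (plain K-means here;
# the paper seeds and refines this step with the Imperialist Competition Algorithm).
clusters = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(workloads)

# Step 2: train a decision tree that maps (workload cluster, current utilization)
# to a scaling decision. The labels below are synthetic placeholders.
utilization = rng.uniform(0, 1, 300)
X = np.column_stack([clusters, utilization])
y = np.where(utilization > 0.8, "scale_out",
             np.where(utilization < 0.3, "scale_in", "no_op"))
tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, y)

print(tree.predict([[2, 0.9], [0, 0.2]]))
```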

77 citations


Journal ArticleDOI
TL;DR: This paper surveys the literature to highlight recent IoT security and privacy issues and how blockchain can be utilized to overcome them; it also addresses the challenges and open security issues that blockchain may impose on current IoT systems.
Abstract: The constant development of interrelated computing devices and the emergence of new network technologies have caused a dramatic growth in the number of Internet of Things (IoT) devices. This has brought great convenience to people’s lives, with applications leveraged to revolutionize everyday objects connected in different aspects of life such as smart homes, healthcare, transportation, environment, agriculture, and military. This interconnectivity of IoT objects takes place through networks on centralized cloud infrastructure that is not constrained by national or jurisdictional boundaries. It is crucial to maintain security, robustness, and trustless authentication to guarantee the secure exchange of critical user data among IoT objects. Consequently, blockchain technology has recently emerged as a tenable solution offering such prominent features. Blockchain’s secure decentralization can overcome the security, authentication, and maintenance limitations of the current IoT ecosystem. In this paper we conduct a comprehensive literature review addressing recent security and privacy challenges related to IoT, categorized according to the IoT layered architecture: perception, network, and application layers. Further, we investigate blockchain technology as a key pillar to overcome many of IoT's security and privacy problems. Additionally, we explore blockchain technology and its added value when combined with other new technologies, such as machine learning, especially in intrusion detection systems. Moreover, we highlight the challenges and privacy issues that result from integrating blockchain into IoT applications. Finally, we propose a framework of IoT security and privacy requirements via blockchain technology. Our main contribution is to survey the literature exhaustively, highlighting recent IoT security and privacy issues and how blockchain can be utilized to overcome them; nevertheless, we also address the challenges and open security issues that blockchain may impose on current IoT systems. The research findings form a solid foundation for an efficient and secure adoption of IoT and blockchain.

60 citations


Journal ArticleDOI
TL;DR: In this paper, the authors survey the existing studies which optimize the task offloading in edge networks with mobility management and compare the listed state-of-the-art research works based on the components identified from taxonomy.
Abstract: The technological evolution of mobile devices, such as smartphones, laptops, wearables, and other handheld devices, has led to the emergence of user applications in the learning, social networking, entertainment, and community computing domains. Many such applications are fully or partially offloaded to nearby servers equipped with high computing and storage resources. Delivering task offloading results to users is a challenge in networks where user mobility is frequent, leading to increased latency, higher energy consumption, and inefficient resource utilization. In this paper, we survey the existing studies that optimize task offloading in edge networks with mobility management. We formulate a taxonomy of the research domain for classifying research works. We compare the listed state-of-the-art research works based on the components identified from the taxonomy. Moreover, we discuss future research directions for mobility-, security-, and scalability-aware MEC offloading.

56 citations


Journal ArticleDOI
TL;DR: A new method that uses random-key encoding with the Harris Hawk Optimization algorithm to generate a tour, preserving the main capabilities of the HHO algorithm while taking advantage of mechanisms that operate in the continuous-valued problem space.
Abstract: The Travelling Salesman Problem (TSP) is an NP-hard problem for which various solutions have been offered so far. Using the Harris Hawk Optimization (HHO) algorithm, this paper presents a new method that uses random-key encoding to generate a tour. This method maintains the main capabilities of the HHO algorithm on the one hand and takes advantage of mechanisms that operate in the continuous-valued problem space on the other. For the exploration phase, the DE/best/2 mutation mechanism, which is also employed in the exploitation phase alongside the main strategies of the HHO algorithm, was used. Ten neighborhood search operators are used, four of which are newly introduced. These operators were intelligently selected using the MCF. The Lin-Kernighan local search mechanism was utilized to improve the proposed algorithm's performance, and the Metropolis acceptance strategy was employed to escape the local optima trap. In addition, 80 instances from TSPLIB were evaluated to demonstrate the performance and efficiency of the proposed algorithm. The results showed the excellent performance of the proposed algorithm.
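The key idea of random-key encoding is that any real-valued position vector can be decoded into a valid tour by sorting, which lets a continuous optimizer such as HHO search the discrete TSP space. The sketch below shows only this decoding step on a toy instance; it is not the authors' code, and the distance construction is an assumption.

```python
import numpy as np

def decode_random_keys(position):
    """Decode a continuous position vector into a tour (permutation).

    The city with the smallest key is visited first, and so on, so every
    real-valued vector maps to a valid TSP tour.
    """
    return np.argsort(position)

def tour_length(tour, dist):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

# Toy example: 5 cities with random coordinates.
rng = np.random.default_rng(42)
coords = rng.random((5, 2))
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

position = rng.random(5)             # one "hawk" position in continuous space
tour = decode_random_keys(position)
print(tour, round(tour_length(tour, dist), 4))
```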

54 citations


Journal ArticleDOI
TL;DR: An early DDoS detection tool is created using the SNORT IDS (Intrusion Detection System), integrated with popular SDN controllers (OpenDaylight and Open Networking Operating System); it is found that ODL takes the least time to detect a successful DDoS attack and takes longer to go down than ONOS.
Abstract: Software-defined networking (SDN) is an approach to networking that provides many advantages by separating the intelligence of the network (the controller) from the underlying network infrastructure (the data plane). However, this isolation also gives rise to many security concerns; therefore, the need to protect the network from various attacks is becoming mandatory. Distributed Denial of Service (DDoS) in SDN is one such attack that is becoming a hurdle to its growth. Before DDoS attacks can be mitigated, the primary step is to detect them. In this paper, an early DDoS detection tool is created using the SNORT IDS (Intrusion Detection System). This tool is integrated with popular SDN controllers (OpenDaylight and Open Networking Operating System). For the experimental setup, five different network scenarios are considered. In each scenario, the number of hosts, switches, and data packets varies. The Mininet emulation tool is used to create the hosts and switches, whereas penetration tools (Hping3, Nping, Xerxes, Tor Hammer, and LOIC) are used to generate the data packets. The generated data packets range from 50,000 to 250,000 per second, and the number of hosts/switches ranges from 50 to 250 in each scenario. The data traffic is directed at the controllers, and the packets are evaluated using Wireshark. Our DDoS detection system is analyzed on the basis of various parameters, such as the time to detect the DDoS attack, Round Trip Time (RTT), percentage of packet loss, and type of DDoS attack. It is found that ODL takes the least time to detect a successful DDoS attack and takes longer to go down than ONOS. Our tool ensures the timely detection of fast DDoS attacks, which preserves the performance of the SDN controller without compromising the overall functionality of the entire network.

51 citations


Journal ArticleDOI
TL;DR: The proposed SCAGA achieved better performance by balancing the exploitation and exploration strategies of the search space and was the best over all the tested datasets from the UCI machine learning repository.
Abstract: Feature selection (FS) is a real-world problem that can be solved using optimization techniques. These techniques build a predictive model that minimizes the classifier's prediction errors by selecting informative or important features and discarding redundant, noisy, and irrelevant attributes in the original dataset. A new hybrid feature selection method is proposed using the Sine Cosine Algorithm (SCA) and Genetic Algorithm (GA), called SCAGA. Typically, optimization methods have two main search strategies: exploration of the search space and exploitation to determine the optimal solution. The proposed SCAGA achieved better performance by balancing the exploitation and exploration strategies of the search space. SCAGA was also evaluated using the following criteria: classification accuracy, worst fitness, mean fitness, best fitness, the average number of features, and standard deviation. The results show that the maximum classification accuracy was obtained with a minimal number of features. The results were also compared with the basic Sine Cosine Algorithm (SCA) and other related approaches published in the literature, such as Ant Lion Optimization and Particle Swarm Optimization. The comparison showed that the results obtained by SCAGA were the best over all the tested datasets from the UCI machine learning repository.
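Wrapper-style FS metaheuristics such as the one described here typically evaluate a binary feature mask by combining classification error with the fraction of selected features. The sketch below shows such a fitness function with a KNN classifier on a public dataset; the weighting and classifier are assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

def fs_fitness(mask, alpha=0.99):
    """Score a binary feature mask (1 = feature kept).

    Combines classification error (weight alpha) with the ratio of selected
    features (weight 1 - alpha); both terms are to be minimized.
    """
    if mask.sum() == 0:
        return 1.0  # an empty subset is the worst possible solution
    acc = cross_val_score(KNeighborsClassifier(5), X[:, mask == 1], y, cv=5).mean()
    return alpha * (1.0 - acc) + (1.0 - alpha) * mask.sum() / mask.size

rng = np.random.default_rng(0)
mask = (rng.random(X.shape[1]) > 0.5).astype(int)
print(mask.sum(), "features, fitness =", round(fs_fitness(mask), 4))
```

A hybrid such as SCAGA would evolve these masks, with the SCA/GA operators proposing new masks and this kind of fitness guiding the search.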

50 citations


Journal ArticleDOI
TL;DR: This paper presents task offloading in the form of a multi-objective optimization problem with a focus on reducing both total power consumption of the system and the delay in executing tasks using two meta-heuristic methods, namely the non-dominated sorting genetic algorithm (NSGA-II) and the Bees algorithm.
Abstract: Due to the limitations associated with the processing capability of mobile devices in cloud environments, various tasks are offloaded to the cloud server. This has led to an increase in the efficiency of mobile applications in the two decades since the advent of the cloud paradigm. However, task offloading may not be a suitable option for delay-sensitive mobile applications because the cloud server is usually located remotely from mobile users. To overcome this problem, fog computing, also known as “Cloud at the Edge”, has been introduced as a complementary solution. On the other hand, although fog computing brings computing and radio resources closer to mobile devices, fog nodes cannot adequately meet users’ needs due to limited computing resources. To minimize delays in responding to mobile users’ requests, it is necessary to establish a trade-off between local execution of requests on end-devices and the fog environment. In this paper, we present task offloading in the form of a multi-objective optimization problem with a focus on reducing both total power consumption of the system and the delay in executing tasks. Then, considering the NP-hardness of the problem, we solve it using two meta-heuristic methods, namely the non-dominated sorting genetic algorithm (NSGA-II) and the Bees algorithm. The simulation results supported the robustness of both meta-heuristic algorithms in terms of energy consumption and delay reduction. The proposed methods achieve a better tradeoff concerning both offloading probability and the power required for data transmission.

50 citations


Journal ArticleDOI
TL;DR: In this article, a hybrid dragonfly algorithm is proposed for task scheduling in IoT cloud computing applications, which mimics the swarming behaviors of dragonflies to decrease the makespan and increase resource utilization.
Abstract: Effective task scheduling is recognized as one of the main critical challenges in cloud computing; it is an essential step for effectively exploiting cloud computing resources, as several tasks may need to be efficiently scheduled on various virtual machines by minimizing makespan and maximizing resource utilization. Task scheduling is an NP-hard problem, and consequently, finding the best solution may be difficult, particularly for Big Data applications. This paper presents an intelligent Big Data task scheduling approach for IoT cloud computing applications using a hybrid Dragonfly Algorithm. The Dragonfly algorithm is a newly introduced optimization algorithm for solving optimization problems which mimics the swarming behaviors of dragonflies. Our algorithm, MHDA, aims to decrease the makespan and increase resource utilization, and is thus a multi-objective approach. β-hill climbing is utilized as a local exploratory search to enhance the Dragonfly Algorithm’s exploitation ability and avoid being trapped in local optima. Two experimental studies were conducted on synthetic and real trace datasets using the CloudSim toolkit to compare MHDA to other well-known algorithms for solving task scheduling problems. The analysis, which included the use of a t-test, revealed that MHDA outperformed other well-known algorithms: MHDA converged faster than other methods, making it useful for Big Data task scheduling applications, and it achieved 17.12% improvement in the results.
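β-hill climbing, the local search used above for exploitation, perturbs each dimension of the current solution slightly and, with probability β, resets it to a random value, accepting only improvements. The sketch below is a generic continuous-domain version under assumed parameter names; it is not the MHDA scheduler itself.

```python
import numpy as np

def beta_hill_climbing(f, x0, bounds, beta=0.1, bandwidth=0.05, iters=500, seed=0):
    """Minimize f with beta-hill climbing (a sketch of the local search
    used to sharpen exploitation in hybrids like the one described above)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x, fx = x0.copy(), f(x0)
    for _ in range(iters):
        # N-operator: small random nudge of every dimension
        cand = x + bandwidth * (hi - lo) * rng.uniform(-1, 1, size=x.size)
        # beta-operator: with probability beta, replace a dimension with a random value
        beta_mask = rng.random(x.size) < beta
        cand[beta_mask] = rng.uniform(lo, hi, size=beta_mask.sum())
        cand = np.clip(cand, lo, hi)
        fc = f(cand)
        if fc < fx:               # greedy acceptance
            x, fx = cand, fc
    return x, fx

sphere = lambda v: float(np.sum(v ** 2))
best, val = beta_hill_climbing(sphere, np.full(5, 3.0), (-5.0, 5.0))
print(best.round(3), round(val, 6))
```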

49 citations


Journal ArticleDOI
TL;DR: In this paper, an efficient intrusion detection system (IDS) for the cloud environment was developed using ensemble feature selection and classification techniques; it is based on a univariate ensemble feature selection technique.
Abstract: Cloud computing is a preferred option for organizations around the globe; it offers scalable and internet-based computing resources as a flexible service. Security is a key concern in any cloud solution due to its distributed nature. Security and privacy are major obstacles to the success of the on-demand service, as it is easily vulnerable to intruders. A huge upsurge in network traffic has paved the way to security breaches that are more complicated and widespread, and traditional intrusion detection systems (IDS) have become inefficient at tackling these attacks. In this research, we developed an efficient Intrusion Detection System (IDS) for the cloud environment using ensemble feature selection and classification techniques. The proposed method relies on a univariate ensemble feature selection technique, which is used to select valuable reduced feature sets from the given intrusion datasets, while the ensemble classifier competently fuses single classifiers to produce a robust classifier using a voting technique. The proposed ensemble-based method effectively classifies whether network traffic behavior is normal or an attack. The proposed method was measured by applying various performance evaluation metrics and ROC-AUC (area under the receiver operating characteristic curve) across various classifiers. The results of the proposed methodology achieved considerable performance enhancement compared with other existing methods. Moreover, we performed a pairwise t-test and showed that the performance of the proposed method was statistically significantly different from other existing approaches. Finally, the outcome of this investigation was obtained with the best accuracy and lowest false alarm rate (FAR).
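The pipeline described in this abstract, univariate feature selection followed by a voting ensemble over single classifiers, can be sketched directly with scikit-learn. The dataset, the value of k, and the chosen base classifiers below are placeholders, not the paper's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder traffic data: rows are flows, the label marks normal vs. attack.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=12, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)

# Univariate selection keeps the k most informative attributes,
# then a majority-vote ensemble fuses three single classifiers.
ids = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=15),
    VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("rf", RandomForestClassifier(n_estimators=100, random_state=7)),
            ("nb", GaussianNB()),
        ],
        voting="hard",
    ),
)
ids.fit(X_tr, y_tr)
print("held-out accuracy:", round(ids.score(X_te, y_te), 3))
```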

Journal ArticleDOI
TL;DR: The state-of-the-art schemes applied in detecting and mitigating anomalies in SDNs are explained, categorized, and compared, and the research gaps and major existing research issues regarding SDN anomaly detection are highlighted.
Abstract: Software defined networking (SDN) decouples the network control and data planes. Despite the various advantages of SDNs, they are vulnerable to security attacks such as anomalies, intrusions, and Denial-of-Service (DoS) attacks. On the other hand, any anomaly or intrusion in SDNs can affect many important domains, such as banking systems and national security. Therefore, anomaly detection is a broad research domain, and to mitigate these security problems, a great deal of research has been conducted in the literature. In this paper, the state-of-the-art schemes applied in detecting and mitigating anomalies in SDNs are explained, categorized, and compared. The paper categorizes SDN anomaly detection mechanisms into five categories: (1) flow counting schemes, (2) information-based schemes, (3) entropy-based schemes, (4) deep learning, and (5) hybrid schemes. The research gaps and major open research issues regarding SDN anomaly detection are highlighted. We hope that the analyses, comparisons, and classifications provide directions for further research.
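One of the categories listed above, the entropy-based scheme, flags an attack when the entropy of destination addresses in a traffic window drops sharply, i.e., many flows converge on one victim. The following minimal sketch illustrates the idea on synthetic packet windows; the threshold and address pattern are assumptions.

```python
from collections import Counter
import math

def dst_entropy(window):
    """Shannon entropy of destination addresses in a window of packets."""
    counts = Counter(window)
    total = len(window)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

normal = [f"10.0.0.{i % 50}" for i in range(1000)]    # traffic spread over 50 hosts
attack = ["10.0.0.7"] * 900 + normal[:100]            # 90% of packets hit one victim

threshold = 2.0  # assumed cut-off; real schemes calibrate it from baseline traffic
for name, window in [("normal", normal), ("attack", attack)]:
    h = dst_entropy(window)
    print(f"{name}: entropy={h:.2f} ->", "anomaly" if h < threshold else "ok")
```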

Journal ArticleDOI
TL;DR: The algorithm introduced in this paper utilizes a load balancing routine to maximize resources’ efficiency at execution time and performs task scheduling with the least makespan and cost.
Abstract: Cloud infrastructures are suitable environments for processing large scientific workflows. Nowadays, new challenges are emerging in the field of optimizing workflows so that they can meet users’ service quality requirements. The key to workflow optimization is the scheduling of workflow tasks, which is a well-known NP-hard problem. Although several methods based on the genetic algorithm have been proposed for task scheduling in clouds, our proposed method is more efficient than other proposed methods due to the use of new and modified genetic operators and a load balancing routine. Moreover, a solution obtained from a heuristic is used as one of the initial population chromosomes, and an efficient routine is used to generate the rest of the initial population chromosomes. An adaptive fitness function that takes both cost and makespan into account is used. The algorithm introduced in this paper utilizes a load balancing routine to maximize resources’ efficiency at execution time. The performance of the proposed algorithm is evaluated by comparing its results with state-of-the-art algorithms of this field, and the results indicate that the proposed algorithm has remarkable superiority over other algorithms and performs task scheduling with the least makespan and cost.

Journal ArticleDOI
TL;DR: This work aims at answering three major questions: firstly, how privacy models and privacy techniques correlate with each other; secondly, how the privacy-utility trade-off can be fixed by using different combinations of privacy models and privacy techniques; and lastly, what are the most relevant privacy techniques that can be adapted to achieve privacy of EHRs on the cloud.
Abstract: Electronic health records (EHRs) are increasingly employed to maintain, store and share varied types of patient data. The data can also be utilized for various research purposes, such as clinical trials or epidemic control strategies. With the increasing cost and scarcity of healthcare services, healthcare organizations feel at ease in outsourcing these services to cloud-based EHRs, which serve as pay-as-you-go (PAYG) “e-health cloud” models that help healthcare organizations handle existing and imminent demands while restricting their costs. Technologies can carry risks; hence the privacy of information in these systems is of utmost importance. Despite its increased effectiveness and the growing eagerness for its adoption, not much care is being devoted to the privacy issues that might arise. Privacy preservation needs to be reviewed in light of the changing privacy rules and legislation regarding sensitive personal data. Our work aims at answering three major questions: firstly, how privacy models and privacy techniques correlate with each other; secondly, how the privacy-utility trade-off can be fixed by using different combinations of privacy models and privacy techniques; and lastly, what are the most relevant privacy techniques that can be adapted to achieve privacy of EHRs on the cloud.

Journal ArticleDOI
TL;DR: Performance evaluation shows that CTOS and MTOP outperform existing task offloading and scheduling methods in the VFCN in terms of costs and meeting deadlines for IoT applications.
Abstract: These days, the usage of Internet of Vehicle Things (IVoT) applications such as E-Business, E-Train, and E-Ambulance has been growing progressively. These applications require mobility-aware, delay-sensitive services to execute their tasks. With this motivation, the study makes the following contributions. Initially, the study devises a novel cooperative vehicular fog cloud network (VFCN) based on container microservices, which offers cost-efficient and mobility-aware services with rich resources for processing. The study then devises the cost-efficient task offloading and scheduling (CEMOTS) algorithm framework, which consists of the mobility-aware task offloading phase (MTOP) method, which determines the optimal offloading time to minimize the communication cost of applications. Furthermore, CEMOTS offers Cooperative Task Offloading Scheduling (CTOS), including task sequencing and scheduling. The goal is to reduce the applications' communication and computation costs under a given deadline constraint. Performance evaluation shows that CTOS and MTOP outperform existing task offloading and scheduling methods in the VFCN in terms of costs and meeting deadlines for IoT applications.

Journal ArticleDOI
TL;DR: This paper designed a particle swarm optimization trained fuzzy neural network algorithm to solve the path planning problem of intelligent driving vehicles and designed new update rules for inertia weight and learning factors to overcome these problems.
Abstract: The basic fuzzy neural network algorithm suffers from slow convergence and a large amount of calculation, so this paper designs a particle swarm optimization trained fuzzy neural network algorithm to solve this problem. Traditional particle swarm optimization easily falls into local extrema and has low efficiency, so this paper designs new update rules for the inertia weight and learning factors to overcome these problems. We also design training rules for the improved particle swarm optimization to train the fuzzy neural network, and the hybrid algorithm is applied to solve the path planning problem of intelligent driving vehicles. The efficiency and practicability of the algorithm are demonstrated by experiments.
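For reference, the classical PSO update with a linearly decreasing inertia weight and time-varying learning factors illustrates the kind of rules this paper modifies. The sketch below shows this standard form on a toy objective; it is not the authors' specific update rules or their fuzzy-neural-network training procedure.

```python
import numpy as np

def pso(f, dim=10, n=30, iters=200, w_max=0.9, w_min=0.4, seed=0):
    """Minimize f with PSO using a linearly decreasing inertia weight
    and time-varying cognitive/social learning factors."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()

    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters    # inertia weight decays over time
        c1 = 2.5 - 2.0 * t / iters                 # cognitive factor shrinks
        c2 = 0.5 + 2.0 * t / iters                 # social factor grows
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, -5, 5)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, val = pso(lambda v: float(np.sum(v ** 2)))
print(round(val, 6))
```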

Journal ArticleDOI
TL;DR: In this paper, a critical review of recent Blockchain-based methods capable of decentralizing the future Internet is conducted, and two research aspects of Blockchain with high impact on realizing the decentralized Internet are identified and investigated with respect to current Internet and Blockchain challenges, while keeping various design considerations in mind.
Abstract: Blockchain has made an impact on today’s technology by revolutionizing the financial industry through the utilization of cryptocurrencies using decentralized control. This has been followed by extending Blockchain to span several other industries and applications for its capabilities in verification. With the current trend of pursuing the decentralized Internet, many methods have been proposed to achieve decentralization considering different aspects of the current Internet model, ranging from infrastructure and protocols to services and applications. This paper investigates Blockchain’s capacity to provide a robust and secure decentralized model for the Internet. The paper conducts a critical review of recent Blockchain-based methods capable of decentralizing the future Internet. We identify and investigate two research aspects of Blockchain with high impact on realizing the decentralized Internet with respect to current Internet and Blockchain challenges, while keeping various design considerations in mind. The first aspect is the consensus algorithms, which are vital components for the decentralization of the Blockchain. We identify three key consensus algorithms, including PoP, Paxos, and PoAH, that are more adequate for reaching consensus in such a tremendous-scale Blockchain-enabled architecture for the Internet. The second aspect we investigate is the compliance of Blockchain with various emerging Internet technologies and the impact of Blockchain on those technologies. Such emerging Internet technologies in combination with Blockchain would help to overcome Blockchain’s established flaws in a way that is more optimized, efficient, and applicable for Internet decentralization.

Journal ArticleDOI
TL;DR: An algorithm, namely, GradCent, based on the Stochastic Gradient Descent technique, is proposed, used to develop an upper CPU utilization threshold for detecting overloaded hosts by using a real CPU workload and a dynamic VM selection algorithm called Minimum Size Utilization (MSU) for selecting the VMs from an overloaded host for VM consolidation.
Abstract: Traditional data centers are shifting toward the cloud computing paradigm. These data centers support the increasing demand for computation and data storage that consumes a massive amount of energy at a huge cost to the cloud service provider and the environment. Considerable energy is wasted to constantly operate idle virtual machines (VMs) on hosts during periods of low load. Dynamic consolidation of VMs from overloaded or underloaded hosts is an effective strategy for improving energy consumption and resource utilization in cloud data centers. The dynamic consolidation of VMs from an overloaded host directly influences the service level agreements (SLAs), utilization of resources, and quality of service (QoS) delivered by the system. We propose an algorithm, namely GradCent, based on the Stochastic Gradient Descent technique. This algorithm is used to develop an upper CPU utilization threshold for detecting overloaded hosts by using a real CPU workload. Moreover, we propose a dynamic VM selection algorithm called Minimum Size Utilization (MSU) for selecting the VMs from an overloaded host for VM consolidation. GradCent and MSU maintain the trade-off between energy consumption minimization and QoS maximization under a specified SLA goal. We used CloudSim simulations with real-world workload traces from more than a thousand PlanetLab VMs. The proposed algorithms minimized energy consumption and SLA violations by 23% and 27.5% on average, respectively, compared with baseline schemes.

Journal ArticleDOI
TL;DR: The novelty of the proposed algorithm is to enhance search performance by making the algorithm greedy and using random numbers generated according to Chaos Theory in a green cloud computing environment, in order to minimize the makespan and cost of performing tasks and to reduce energy consumption.
Abstract: A workflow is composed of interdependent tasks, and workflow scheduling in the cloud environment refers to ordering the workflow tasks on virtual machines on the cloud platform. The number of possible orderings grows with the number of virtual machines and the variety of task sizes. Reaching an order with the least makespan is an NP-hard problem, and the hardness of this problem increases even more with several contradictory goals. Hence, a meta-heuristic algorithm is required to reach a near-optimal solution. The proposed algorithm is a hybridization of the ant lion optimizer (ALO) with the Sine Cosine Algorithm (SCA), used multi-objectively to solve the problem of scheduling scientific workflows. The novelty of the proposed algorithm is to enhance search performance by making the algorithm greedy and using random numbers generated according to Chaos Theory in the green cloud computing environment. The purpose is to minimize the makespan and cost of performing tasks, to reduce energy consumption so as to have a green cloud environment, and to increase throughput. The WorkflowSim simulator was used for implementation, and the results were compared with the SPEA2 workflow scheduling algorithm. The results show a decrease in the energy consumed and the makespan.
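"Random numbers according to Chaos Theory" typically means replacing a uniform random number generator with a chaotic map such as the logistic map. The minimal sketch below generates such a sequence; the specific map, seed, and parameter are assumptions rather than the paper's choice.

```python
def logistic_map_sequence(x0=0.7, r=4.0, n=10):
    """Generate n chaotic values in (0, 1) with the logistic map.

    Metaheuristics often substitute such sequences for uniform random numbers
    to diversify the search; x0 must avoid fixed points such as 0, 0.5, or 1.
    """
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

print([round(v, 4) for v in logistic_map_sequence()])
```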

Journal ArticleDOI
TL;DR: In this paper, the authors propose the EdgeSDN-I4COVID architecture for intelligent and efficient management of the smart industry during COVID-19, considering IoT networks, and present the SDN-enabled layers (data, control, and application) to effectively and automatically monitor IoT data from a remote location.
Abstract: The industrial ecosystem has been unprecedentedly affected by the COVID-19 pandemic because of its immense contact restrictions. Therefore, manufacturing and socio-economic operations that require human involvement have been significantly disrupted since the beginning of the outbreak. The social-distancing lesson of the potential new-normal world seems to force stakeholders to encourage the deployment of contactless Industry 4.0 architectures. Thus, the need for human-less or less-human operations to keep these IoT-enabled ecosystems running without interruption has motivated us to design and demonstrate an intelligent automated framework. In this research, we propose the "EdgeSDN-I4COVID" architecture for the intelligent and efficient management of the smart industry during COVID-19, considering IoT networks. Moreover, the article presents the SDN-enabled layers, namely data, control, and application, to effectively and automatically monitor IoT data from a remote location. In addition, the proposed convergence between SDN and NFV provides an efficient control mechanism for managing IoT sensor data. Besides, it offers robust data integration on the surface and the devices required for Industry 4.0 during the COVID-19 pandemic. Finally, the article justifies the above contributions through performance evaluations in an appropriate simulation setup and environment.

Journal ArticleDOI
TL;DR: The proposed blind and robust approach for medical image protection consists in embedding patient information and image acquisition data in the image and the integration was performed in the medium frequencies of the image.
Abstract: In order to enhance the security of medical images exchanged in telemedicine, we propose in this paper a blind and robust approach for medical image protection. This approach consists in embedding patient information and image acquisition data in the image. This imperceptible integration must generate the least possible distortion; the watermarked image must present the same clinical reading as the original image. The proposed approach is applied in the frequency domain. For this purpose, four transforms were used: the discrete wavelet transform, the non-subsampled contourlet transform, the non-subsampled shearlet transform, and the discrete cosine transform. All these transforms were combined with Schur decomposition, and the watermark bits were integrated in the upper triangular matrix. To obtain a satisfactory compromise between robustness and imperceptibility, the integration was performed in the medium frequencies of the image. Imperceptibility and robustness experiments show that the proposed methods maintain a high quality of watermarked images and are remarkably robust against several conventional attacks.
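To illustrate the transform-plus-Schur embedding step described above, the sketch below applies a DCT to one 8x8 block, takes its Schur decomposition, and quantizes a single coefficient of the upper triangular matrix to carry one bit. The coefficient position, quantization rule, and strength are assumptions; the paper's exact embedding positions and the other three transforms are not reproduced.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.linalg import schur

def embed_bit(block, bit, delta=12.0):
    """Embed one watermark bit into an 8x8 image block.

    DCT -> Schur decomposition -> quantize one upper-triangular coefficient
    so its parity encodes the bit -> invert both transforms.
    """
    coeffs = dctn(block, norm="ortho")
    T, Z = schur(coeffs)                 # coeffs = Z @ T @ Z.T
    q = np.round(T[0, 1] / delta)
    if int(q) % 2 != bit:                # force the parity of the quantized value
        q += 1
    T[0, 1] = q * delta
    return idctn(Z @ T @ Z.T, norm="ortho")

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (8, 8))
marked = embed_bit(block, 1)
print("max pixel change:", round(float(np.abs(marked - block).max()), 2))
```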

Journal ArticleDOI
TL;DR: A Deep Reinforcement Learning (DRL) based job scheduler that dispatches the jobs in real time to tackle the problem of dynamic and complex cloud workloads and can significantly outperform the commonly used real-time scheduling algorithms.
Abstract: As cloud vendors provide better performance, auto-scaling, load balancing, and optimized execution along with low infrastructure maintenance, more and more companies migrate their services to the cloud. Since the cloud workload is dynamic and complex, scheduling the jobs submitted by users in an effective way is proving to be a challenging task. Although many advanced job scheduling approaches have been proposed in past years, almost all of them are designed to handle batch jobs rather than real-time workloads, in which user requests may be submitted at any time and in any number. In this work, we propose a Deep Reinforcement Learning (DRL) based job scheduler that dispatches jobs in real time to tackle this problem. Specifically, we focus on scheduling user requests in such a way as to provide quality of service (QoS) to the end-user along with a significant reduction in the cost of executing jobs on virtual instances. We implemented our method with a Deep Q-learning Network (DQN) model, and our experimental results demonstrate that our approach can significantly outperform commonly used real-time scheduling algorithms.
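As a minimal illustration of the DQN component mentioned above, the sketch below defines a small Q-network that maps scheduler state features to one Q-value per VM and picks a dispatch target with an epsilon-greedy policy. The state dimension, network sizes, and feature semantics are assumptions; replay-buffer training and the paper's reward design are omitted.

```python
import random
import torch
import torch.nn as nn

N_VMS, STATE_DIM = 5, 12     # assumed sizes: the action is which VM receives the job

class QNetwork(nn.Module):
    """Maps a scheduler state (e.g., queue and VM load features) to one Q-value per VM."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state):
        return self.net(state)

def select_action(qnet, state, epsilon):
    """Epsilon-greedy dispatch: explore a random VM or pick the highest Q-value."""
    if random.random() < epsilon:
        return random.randrange(N_VMS)
    with torch.no_grad():
        return int(qnet(state).argmax().item())

qnet = QNetwork(STATE_DIM, N_VMS)
state = torch.randn(STATE_DIM)           # placeholder features for a pending job
print("chosen VM:", select_action(qnet, state, epsilon=0.1))
```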

Journal ArticleDOI
TL;DR: A distributed access control system based on blockchain technology to secure IoT data and can solve the problem of a single point of failure of access control by providing the dynamic and fine-grained access control for IoT data.
Abstract: With the development of the Internet of Things (IoT) field, more and more data are generated by IoT devices and transferred over the network. However, a large amount of IoT data is sensitive, and the leakage of such data is a privacy breach. The security of sensitive IoT data is a big issue, as the data is shared over an insecure network channel. Current solutions include symmetric encryption and access controls to secure the data transfer, but they have drawbacks such as a single point of failure. Blockchain is a promising distributed ledger technology that can prevent the malicious tampering of data, offering reliable data storage. This paper proposes a distributed access control system based on blockchain technology to secure IoT data. The proposed mechanism is based on fog computing and the concept of the alliance chain. The method uses mixed linear and nonlinear spatiotemporal chaotic systems (MLNCML) and the least significant bit (LSB) to encrypt the IoT data on an edge node and then uploads the encrypted data to the cloud. The proposed mechanism solves the problem of a single point of failure of access control by providing dynamic and fine-grained access control for IoT data. The experimental results demonstrate that this method can protect the privacy of IoT data efficiently.
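The LSB step mentioned above hides payload bits in the least significant bit of each cover byte. The sketch below shows a generic embed/extract pair in NumPy; the MLNCML chaotic keystream and the blockchain access-control logic from the paper are not reproduced, and the data sizes are placeholders.

```python
import numpy as np

def lsb_embed(cover, bits):
    """Hide a bit array in the least significant bit of each cover byte."""
    stego = cover.copy()
    stego[:len(bits)] = (stego[:len(bits)] & 0xFE) | bits
    return stego

def lsb_extract(stego, n_bits):
    """Recover the first n_bits hidden bits."""
    return stego[:n_bits] & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=64, dtype=np.uint8)   # e.g., flattened image bytes
secret = rng.integers(0, 2, size=32, dtype=np.uint8)    # e.g., encrypted sensor bits
stego = lsb_embed(cover, secret)
assert np.array_equal(lsb_extract(stego, 32), secret)
print("payload recovered; max byte change:",
      int(np.abs(stego.astype(int) - cover.astype(int)).max()))
```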

Journal ArticleDOI
TL;DR: The findings showed that blockchain can overcome IoT restrictions such as data protection and privacy, supply distributed storage, transparency, trust, and secure distributed IoT networks, and provide a beneficial guarantee for the privacy and security of IoT users.
Abstract: The Internet of Things (IoT) has infiltrated extensively into our lifestyles. Nevertheless, IoT privacy remains a significant obstacle, primarily because of the large size and distributed nature of IoT networks. At the same time, numerous safety, authentication, and maintenance problems of IoT systems have been overcome by the decentralized nature of blockchain. To address these privacy difficulties, the privacy challenges of IoT-based blockchain are examined systematically. In total, 61 papers were retrieved from electronic databases, and after applying different filters, 20 related articles were obtained and analyzed. The findings show that blockchain can overcome IoT restrictions such as data protection and privacy. It can also supply distributed storage, transparency, trust, and secure distributed IoT networks, and provide a beneficial guarantee for the privacy and security of IoT users. At the same time, it has low scalability, high computing complexity, latency unsuitable for IoT, and high bandwidth overhead.

Journal ArticleDOI
TL;DR: An effective micro-genetic algorithm is presented in order to choose suitable destinations between physical hosts for VMs in data center physical resources to provide invaluable improvements in terms of power consumption compared with other methods.
Abstract: Efficiency in cloud servers’ power consumption is of paramount importance. Power efficiency reduces greenhouse gas emissions, establishing the concept of green computing. One beneficial approach is to apply power-aware methods to decide where to allocate virtual machines (VMs) among the data center's physical resources. Virtualization is utilized as a promising technology for power-aware VM allocation methods. Since VM allocation is an NP-complete problem, we make use of evolutionary algorithms to solve it. This paper presents an effective micro-genetic algorithm for choosing suitable destinations among physical hosts for VMs. Our evaluations in a simulation environment show that the micro-genetic approach provides invaluable improvements in terms of power consumption compared with other methods.

Journal ArticleDOI
TL;DR: In this article, a new spectral clustering algorithm is proposed for attributed graphs such that the identified communities have structural cohesiveness and attribute homogeneity; it improves the similarity degree among pairs of nodes in the same density region of the attributed network.
Abstract: The most basic and significant issue in complex network analysis is community detection, which is a branch of machine learning. Most current community detection approaches consider only a network's topological structure, losing the potential to use node attribute information. In attributed networks, both the topological structure and node attributes are important features for community detection. In recent years, the spectral clustering algorithm has received much interest as one of the best performing algorithms in the subcategory of dimensionality reduction. This algorithm applies the eigenvalues of the affinity matrix to map data to a low-dimensional space. In the present paper, a new version of spectral clustering, named Attributed Spectral Clustering (ASC), is applied to attributed graphs so that the identified communities have structural cohesiveness and attribute homogeneity. Since the performance of spectral clustering heavily depends on the goodness of the affinity matrix, the ASC algorithm uses the Topological and Attribute Random Walk Affinity Matrix (TARWAM) as a new affinity matrix to calculate the similarity between nodes. TARWAM utilizes a biased random walk to integrate network topology and attribute information. It can improve the similarity degree among pairs of nodes in the same density region of the attributed network, without the need for parameter tuning. The proposed approach is compared to other primary and recent attributed graph clustering algorithms on synthetic and real datasets. The experimental results show that the proposed approach is more effective and accurate than other state-of-the-art attributed graph clustering techniques.
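The pipeline behind any spectral clustering variant, including the ASC described above, is: build an affinity matrix, form the normalized Laplacian, embed nodes with its leading eigenvectors, and cluster the embedding with k-means. The sketch below uses a plain Gaussian affinity on toy points in place of TARWAM, which is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(affinity, k):
    """Cluster nodes given an affinity matrix (normalized-cut style)."""
    d = affinity.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    lap = np.eye(len(affinity)) - d_inv_sqrt @ affinity @ d_inv_sqrt
    # Eigenvectors of the k smallest eigenvalues embed each node in R^k.
    _, vecs = np.linalg.eigh(lap)
    embedding = vecs[:, :k]
    embedding /= np.linalg.norm(embedding, axis=1, keepdims=True) + 1e-12
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embedding)

# Two noisy point clouds stand in for two communities; a Gaussian kernel on
# pairwise distances plays the role of the affinity matrix (ASC would use TARWAM).
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
dists = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
affinity = np.exp(-dists ** 2)
print(spectral_clustering(affinity, k=2))
```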

Journal ArticleDOI
TL;DR: A novel mathematical model is proposed to plan FOG-assisted CnCI for IoTH networks that considers wireless link interfacing gateways as a virtual machine (VM) and the performance of DFWA-3-LSM is better than other experimented algorithms.
Abstract: Transmitting electronic medical records (EMR) and other communication in the modern Internet of Things (IoT) healthcare ecosystem is both delay- and integrity-sensitive. Transmitting and computing volumes of EMR data on traditional clouds away from healthcare facilities is a main source of trust deficit in IoT-enabled applications. Reliable IoT-enabled healthcare (IoTH) applications demand careful deployment of computing and communication infrastructure (CnCI). This paper presents a FOG-assisted CnCI model for reliable healthcare facilities. Planning a secure and reliable CnCI for IoTH networks is a challenging optimization task. We propose a novel mathematical model (i.e., integer programming) to plan FOG-assisted CnCI for IoTH networks. It considers a wireless link interfacing gateways as a virtual machine (VM). An IoTH network contains three types of wirelessly communicating nodes: VMs, reduced computing power gateways (RCPG), and full computing power gateways (FCPG). The objective is to minimize the weighted sum of infrastructure and operational costs of IoTH network planning. A swarm intelligence-based evolutionary approach is used to solve IoTH network planning for superior-quality solutions in a reasonable time. The discrete fireworks algorithm with three local search methods (DFWA-3-LSM) outperformed the other experimented algorithms in terms of average planning cost for all experimented problem instances. The DFWA-3-LSM lowered the average planning cost by 17.31%, 17.23%, and 18.28% when compared against the discrete artificial bee colony with 3 LSM (DABC-3-LSM), low-complexity biogeography-based optimization (LC-BBO), and the genetic algorithm, respectively. Statistical analysis demonstrates that the performance of DFWA-3-LSM is better than that of the other experimented algorithms. The proposed mathematical model is envisioned for secure, reliable, and cost-effective EMR data manipulation and other communication in healthcare.

Journal ArticleDOI
TL;DR: This paper presents a comprehensive survey of SoR strategies in cloud computing, proposes a classification of existing works based on the research methods they use, and presents a tabular representation of all relevant features to facilitate the comparison of SoR techniques and the proposal of new enhanced strategies.
Abstract: Recent years have witnessed significant interest in migrating different applications into cloud platforms. In this context, one of the main challenges for cloud application providers is how to ensure high availability of the delivered applications while meeting users’ QoS. In this respect, replication techniques are commonly applied to efficiently handle this issue. According to the granularity used for replication, the literature identifies two major approaches: replicating either the service or the underlying data. The latter is known as Data-oriented Replication (DoR), while the former is referred to as Service-oriented Replication (SoR). DoR is discussed extensively in the available literature, and several surveys have already been published. However, SoR is still in its infancy, and there is a lack of research studies. Hence, in this paper we present a comprehensive survey of SoR strategies in cloud computing. We propose a classification of existing works based on the research methods they use. Then, we carry out an in-depth study and analysis of these works. In addition, a tabular representation of all relevant features is presented to facilitate the comparison of SoR techniques and the proposal of new enhanced strategies.

Journal ArticleDOI
TL;DR: This paper conducts an extensive experimental evaluation and analysis of six popular deep learning frameworks, namely TensorFlow, MXNet, PyTorch, Theano, Chainer, and Keras, using three types of DL architectures: Convolutional Neural Networks (CNN), Faster Region-based Convolutional Neural Networks (Faster R-CNN), and Long Short Term Memory (LSTM).
Abstract: Deep Learning (DL) has achieved remarkable progress over the last decade on various tasks such as image recognition, speech recognition, and natural language processing. In general, three main crucial aspects fueled this progress: the increasing availability of large amounts of digitized data, the increasing availability of affordable parallel and powerful computing resources (e.g., GPUs), and the growing number of open source deep learning frameworks that facilitate and ease the development process of deep learning architectures. In practice, the increasing popularity of deep learning frameworks calls for benchmarking studies that can effectively evaluate and understand the performance characteristics of these systems. In this paper, we conduct an extensive experimental evaluation and analysis of six popular deep learning frameworks, namely TensorFlow, MXNet, PyTorch, Theano, Chainer, and Keras, using three types of DL architectures: Convolutional Neural Networks (CNN), Faster Region-based Convolutional Neural Networks (Faster R-CNN), and Long Short Term Memory (LSTM). Our experimental evaluation considers different aspects for its comparison, including accuracy, training time, convergence, and resource consumption patterns. Our experiments have been conducted on both CPU and GPU environments using different datasets. We report and analyze the performance characteristics of the studied frameworks. In addition, we report a set of insights and important lessons that we have learned from conducting our experiments.

Journal ArticleDOI
TL;DR: An event-driven IoT architecture is presented for data analysis of reliable healthcare applications, including context, event, and service layers, and the CEP method as a novel solution and automated intelligence is applied in the event layer.
Abstract: The Internet of Things (IoT) is enhancing the intelligence of societies through a rapid transition to a smarter, automatic, responsive world, driven by the dramatic increase in the number of sensors deployed around the world. Collecting, modeling, and reasoning over the data generated by sensors play a crucial role in data analysis. Analyzing and interpreting real-time information transmitted through heterogeneous wireless networks are challenges that IoT applications encounter. Complex Event Processing (CEP) is a data stream tracking method used to extract meaningful data from the network for real-time decision making. Instant data analysis, early diagnosis, and effective treatment of patients from massive volumes of data are considered indispensable requirements that have made the healthcare industry more reliant on real-time event processing than other industries. To achieve actionable insights, forecast anomalies, and increase healthcare quality, the CEP method is introduced in this area. In this paper, an event-driven IoT architecture is presented for data analysis in reliable healthcare applications, comprising context, event, and service layers. Dependability parameters are considered in each layer, and the CEP method, as a novel solution and automated intelligence, is applied in the event layer. Implementation results showed that the CEP method increased reliability, reduced costs, and improved healthcare quality.
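As a minimal illustration of a CEP rule of the kind used in the event layer described above, the sketch below slides a window over a vital-sign stream and raises a complex event when enough high readings occur in a short interval. The event name, thresholds, and window size are assumptions, not the paper's rules.

```python
from collections import deque

def cep_heart_rate(stream, window=5, high=120, min_hits=3):
    """Emit a complex event when >= min_hits of the last `window` readings exceed `high`."""
    recent = deque(maxlen=window)
    alerts = []
    for t, bpm in enumerate(stream):
        recent.append(bpm)
        if sum(r > high for r in recent) >= min_hits:
            alerts.append((t, "TACHYCARDIA_SUSPECTED"))
    return alerts

readings = [80, 85, 90, 130, 128, 95, 132, 135, 140, 100]
print(cep_heart_rate(readings))
```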