scispace - formally typeset

Showing papers on "Cloud computing published in 2021"


Journal ArticleDOI
TL;DR: 6G with additional technical requirements beyond those of 5G will enable faster and further communications to the extent that the boundary between physical and cyber worlds disappears.
Abstract: The fifth generation (5G) wireless communication networks have been deployed worldwide since 2020, and more capabilities are in the process of being standardized, such as mass connectivity, ultra-reliability, and guaranteed low latency. However, 5G will not meet all requirements of the future in 2030 and beyond, and sixth generation (6G) wireless communication networks are expected to provide global coverage, enhanced spectral/energy/cost efficiency, higher intelligence levels, better security, etc. To meet these requirements, 6G networks will rely on new enabling technologies, i.e., air interface and transmission technologies and novel network architecture, such as waveform design, multiple access, channel coding schemes, multi-antenna technologies, network slicing, cell-free architecture, and cloud/fog/edge computing. Our vision on 6G is that it will have four new paradigm shifts. First, to satisfy the requirement of global coverage, 6G will not be limited to terrestrial communication networks, which will need to be complemented with non-terrestrial networks such as satellite and unmanned aerial vehicle (UAV) communication networks, thus achieving a space-air-ground-sea integrated communication network. Second, all spectra will be fully explored to further increase data rates and connection density, including the sub-6 GHz, millimeter wave (mmWave), terahertz (THz), and optical frequency bands. Third, facing the big datasets generated by the use of extremely heterogeneous networks, diverse communication scenarios, large numbers of antennas, wide bandwidths, and new service requirements, 6G networks will enable a new range of smart applications with the aid of artificial intelligence (AI) and big data technologies. Fourth, network security will have to be strengthened when developing 6G networks. This article provides a comprehensive survey of recent advances and future trends in these four aspects.
Clearly, 6G with additional technical requirements beyond those of 5G will enable faster and further communications to the extent that the boundary between physical and cyber worlds disappears.

935 citations


Journal ArticleDOI
TL;DR: A smart, Deep Reinforcement Learning based Resource Allocation (DRLRA) scheme, which can allocate computing and network resources adaptively, reduce the average service time and balance the use of resources under varying MEC environment is proposed.
Abstract: The development of mobile devices with improving communication and perceptual capabilities has brought about a proliferation of numerous complex and computation-intensive mobile applications. Mobile devices with limited resources face more severe capacity constraints than ever before. As a new concept of network architecture and an extension of cloud computing, Mobile Edge Computing (MEC) seems to be a promising solution to meet this emerging challenge. However, MEC also has some limitations, such as the high cost of infrastructure deployment and maintenance, as well as the severe pressure that the complex and changeable edge computing environment puts on MEC servers. Consequently, how to allocate computing resources and network resources rationally to satisfy the requirements of mobile devices under changeable MEC conditions has become a major challenge. To combat this issue, we propose a smart, Deep Reinforcement Learning based Resource Allocation (DRLRA) scheme, which can allocate computing and network resources adaptively, reduce the average service time and balance the use of resources under varying MEC environments. Experimental results show that the proposed DRLRA performs better than the traditional OSPF algorithm under changing MEC conditions.
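The allocation idea sketched in this abstract can be illustrated with a toy tabular Q-learner standing in for the paper's deep RL agent; the server count, queue model, and reward shaping below are invented for illustration, not taken from DRLRA:

```python
import random

random.seed(42)
N_SERVERS, ALPHA, EPS = 3, 0.2, 0.2
Q = {}  # (state, action) -> estimated value

def choose(state):
    """Epsilon-greedy action selection over edge servers."""
    if random.random() < EPS:                  # explore
        return random.randrange(N_SERVERS)
    return max(range(N_SERVERS), key=lambda a: Q.get((state, a), 0.0))

for _ in range(20000):
    # A random snapshot of per-server queue lengths plays the role of
    # the varying MEC environment (queues capped at 2 for a tiny table).
    state = tuple(random.randint(0, 2) for _ in range(N_SERVERS))
    action = choose(state)
    reward = -state[action]   # shorter queue -> shorter expected service time
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward - old)

def allocate(state):
    """Greedy policy after training: route the task to the best server."""
    return max(range(N_SERVERS), key=lambda a: Q.get((state, a), 0.0))

print(allocate((2, 0, 1)))  # routes to the idle server, index 1
```

After training, the greedy policy routes each task to the least-loaded server, which is the behavior (shorter service time, balanced load) that DRLRA's reward is designed to induce at much larger scale with a neural function approximator.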

261 citations


Journal ArticleDOI
TL;DR: The landscape of MAR through the past and its future prospects with respect to the 5G systems and complementary technology MEC are discussed and an informative analysis of the network formation of current and future MAR systems in terms of cloud, edge, localized, and hybrid architectural options is provided.
Abstract: The Augmented Reality (AR) technology enhances the human perception of the world by combining the real environment with the virtual space. With the explosive growth of powerful, less expensive mobile devices, and the emergence of sophisticated communication infrastructure, Mobile Augmented Reality (MAR) applications are gaining increased popularity. MAR allows users to run AR applications on mobile devices with greater mobility and at a lower cost. The emerging 5G communication technologies act as critical enablers for future MAR applications to achieve ultra-low latency and extremely high data rates while Multi-access Edge Computing (MEC) brings enhanced computational power closer to the users to complement MAR. This paper extensively discusses the landscape of MAR through the past and its future prospects with respect to the 5G systems and complementary technology MEC. The paper especially provides an informative analysis of the network formation of current and future MAR systems in terms of cloud, edge, localized, and hybrid architectural options. The paper discusses key application areas for MAR and their future with the advent of 5G technologies. The paper also discusses the requirements and limitations of MAR technical aspects such as communication, mobility management, energy management, service offloading and migration, security, and privacy and analyzes the role of 5G technologies.

259 citations


Journal ArticleDOI
TL;DR: This survey provides a comprehensive tutorial on federated learning and its associated concepts, technologies and learning approaches, and designs a three-level classification scheme that first categorizes the federated learning literature based on the high-level challenge that it tackles, and then classifies each high-level challenge into a set of specific low-level challenges to foster a better understanding of the topic.
Abstract: The communication and networking field is hungry for machine learning decision-making solutions to replace the traditional model-driven approaches, which have proved not expressive enough to capture the ever-growing complexity and heterogeneity of the modern systems in the field. Traditional machine learning solutions assume the existence of (cloud-based) central entities that are in charge of processing the data. Nonetheless, the difficulty of accessing private data, together with the high cost of transmitting raw data to the central entity, gave rise to a decentralized machine learning approach called Federated Learning. The main idea of federated learning is to perform an on-device collaborative training of a single machine learning model without having to share the raw training data with any third-party entity. Although a few survey articles on federated learning already exist in the literature, the motivation of this survey stems from three essential observations. The first one is the lack of a fine-grained multi-level classification of the federated learning literature, where the existing surveys base their classification on only one criterion or aspect. The second observation is that the existing surveys focus only on some common challenges, but disregard other essential aspects such as reliable client selection, resource management and training service pricing. The third observation is the lack of explicit and straightforward directives for researchers to help them design future federated learning solutions that overcome the state-of-the-art research gaps. To address these points, we first provide a comprehensive tutorial on federated learning and its associated concepts, technologies and learning approaches. We then survey and highlight the applications and future directions of federated learning in the domain of communication and networking.
Thereafter, we design a three-level classification scheme that first categorizes the federated learning literature based on the high-level challenge that they tackle. Then, we classify each high-level challenge into a set of specific low-level challenges to foster a better understanding of the topic. Finally, we provide, within each low-level challenge, a fine-grained classification based on the technique used to address this particular challenge. For each category of high-level challenges, we provide a set of desirable criteria and future research directions that are aimed to help the research community design innovative and efficient future solutions. To the best of our knowledge, our survey is the most comprehensive in terms of challenges and techniques it covers and the most fine-grained in terms of the multi-level classification scheme it presents.
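The on-device collaborative training described above rests on a simple aggregation step. A minimal FedAvg-style sketch follows; the dataset-size weighting is the standard heuristic, not a scheme drawn from this survey:

```python
def fed_avg(client_weights, client_sizes):
    """Average client parameter vectors, weighted by local dataset size.
    Only parameters cross the network; raw training data never leaves
    the device."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Three clients with different amounts of local data send their locally
# trained parameters (toy 2-dimensional models).
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 30, 60]
print(fed_avg(weights, sizes))  # [4.0, 5.0]
```

In a full round, the server would broadcast this average back to the clients, each client would train locally again, and the cycle would repeat; the survey's challenges (client selection, resource management, pricing) all sit around this core loop.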

252 citations


Journal ArticleDOI
TL;DR: The major purpose of this work is to create a novel and secure cache decision system (CDS) in a wireless network that operates over an SB, which will offer users a safer and more efficient environment for browsing the Internet, and for sharing and managing large-scale data in the fog.
Abstract: This work proposes an innovative secure infrastructure which operates in a wireless-mobile 6G network for managing big data (BD) on smart buildings (SBs). Owing to the rapid growth of the telecommunication field, new challenges arise. Furthermore, a new type of wireless network infrastructure, the sixth generation (6G), provides all the benefits of its previous versions and also improves on issues its predecessors had. In addition, technologies related to telecommunications, such as the Internet of Things, cloud computing (CC) and edge computing (EC), can operate through a 6G wireless network. Taking all this into account, we propose a scenario that combines the functions of the Internet of Things with CC, EC and BD in order to achieve a smart and secure environment. The major purpose of this work is to create a novel and secure cache decision system (CDS) in a wireless network that operates over an SB, which will offer users a safer and more efficient environment for browsing the Internet, and for sharing and managing large-scale data in the fog. This CDS consists of two types of servers: one cloud server and one edge server. To motivate our proposal, we study related cache systems, which are listed, presented, and compared in this work.
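A toy two-tier lookup can illustrate the kind of decision such a CDS makes between its edge and cloud servers; the popularity threshold, capacities, and promotion rule are invented assumptions, not the paper's policy:

```python
from collections import Counter

EDGE_CAPACITY = 2          # the edge cache is small (assumed)
POPULARITY_THRESHOLD = 2   # promote to the edge after this many requests

edge_cache, cloud_cache = {}, {}
request_counts = Counter()

def fetch(key, origin):
    """Serve a request, preferring the edge, then the cloud, then the origin."""
    request_counts[key] += 1
    if key in edge_cache:
        return edge_cache[key], "edge"
    if key in cloud_cache:
        value = cloud_cache[key]
    else:
        value = origin[key]        # fetch from the origin server
        cloud_cache[key] = value   # always keep a cloud copy
    # Cache decision: promote popular items to the smaller edge cache.
    if request_counts[key] >= POPULARITY_THRESHOLD and len(edge_cache) < EDGE_CAPACITY:
        edge_cache[key] = value
    return value, "cloud"

origin = {"video-1": "payload-1", "video-2": "payload-2"}
print(fetch("video-1", origin)[1])  # cloud (first request, fetched via origin)
print(fetch("video-1", origin)[1])  # cloud (now popular: promoted to the edge)
print(fetch("video-1", origin)[1])  # edge  (served from the edge cache)
```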

229 citations


Journal ArticleDOI
TL;DR: This work presents a novel hybrid antlion optimization algorithm with elite-based differential evolution for solving multi-objective task scheduling problems in cloud computing environments and reveals that MALO outperformed other well-known optimization algorithms.
Abstract: Efficient task scheduling is considered as one of the main critical challenges in cloud computing. Task scheduling is an NP-complete problem, so finding the best solution is challenging, particularly for large task sizes. In the cloud computing environment, several tasks may need to be efficiently scheduled on various virtual machines by minimizing makespan and simultaneously maximizing resource utilization. We present a novel hybrid antlion optimization algorithm with elite-based differential evolution for solving multi-objective task scheduling problems in cloud computing environments. In the proposed method, which we refer to as MALO, the multi-objective nature of the problem derives from the need to simultaneously minimize makespan while maximizing resource utilization. The antlion optimization algorithm was enhanced by utilizing elite-based differential evolution as a local search technique to improve its exploitation ability and to avoid getting trapped in local optima. Two experimental series were conducted on synthetic and real trace datasets using the CloudSim tool kit. The results revealed that MALO outperformed other well-known optimization algorithms. MALO converged faster than the other approaches for larger search spaces, making it suitable for large scheduling problems. Finally, the results were analyzed using statistical t-tests, which showed that MALO obtained a significant improvement in the results.
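The two objectives MALO balances can be made concrete with a small fitness function; the runtime model (length divided by VM speed) and the weighted aggregation below are common simplifications, not necessarily the paper's exact formulation:

```python
def evaluate(schedule, task_lengths, vm_speeds, w=0.5):
    """schedule[i] = VM index for task i. Returns a weighted fitness that
    combines makespan (to minimise) and mean VM utilisation (to maximise);
    lower fitness is better."""
    finish = [0.0] * len(vm_speeds)
    for task, vm in enumerate(schedule):
        finish[vm] += task_lengths[task] / vm_speeds[vm]  # runtime = length/speed
    makespan = max(finish)
    utilisation = sum(finish) / (len(vm_speeds) * makespan)
    return w * makespan + (1 - w) * (1 - utilisation)

lengths = [4, 2, 6, 2]       # task lengths (e.g. millions of instructions)
speeds = [1.0, 2.0]          # VM speeds (e.g. MIPS)
balanced = [0, 0, 1, 1]      # tasks 0,1 on VM0; tasks 2,3 on VM1
skewed = [1, 1, 1, 1]        # everything piled onto VM1
assert evaluate(balanced, lengths, speeds) < evaluate(skewed, lengths, speeds)
```

A metaheuristic like MALO searches over such `schedule` vectors to minimise this fitness, with the elite-based differential evolution step refining the best candidates found so far.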

223 citations


Journal ArticleDOI
TL;DR: In this article, a comprehensive review of emerging technologies for the internet of things (IoT)-based smart agriculture is presented, including unmanned aerial vehicles, wireless technologies, open-source IoT platforms, software defined networking (SDN), network function virtualization (NFV), cloud/fog computing, and middleware platforms.
Abstract: This paper presents a comprehensive review of emerging technologies for the internet of things (IoT)-based smart agriculture. We begin by summarizing the existing surveys and describing emergent technologies for the agricultural IoT, such as unmanned aerial vehicles, wireless technologies, open-source IoT platforms, software defined networking (SDN), network function virtualization (NFV) technologies, cloud/fog computing, and middleware platforms. We also provide a classification of IoT applications for smart agriculture into seven categories: smart monitoring, smart water management, agrochemical applications, disease management, smart harvesting, supply chain management, and smart agricultural practices. Moreover, we provide a taxonomy and a side-by-side comparison of the state-of-the-art methods toward supply chain management based on the blockchain technology for agricultural IoTs. Furthermore, we present real projects that use most of the aforementioned technologies, which demonstrate their great performance in the field of smart agriculture. Finally, we highlight open research challenges and discuss possible future research directions for agricultural IoTs.

218 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a comprehensive survey on AIoT to show how AI can empower the IoT to make it faster, smarter, greener, and safer, and highlight the challenges facing AIoT and some potential research opportunities.
Abstract: In the Internet-of-Things (IoT) era, billions of sensors and devices collect and process data from the environment, transmit them to cloud centers, and receive feedback via the Internet for connectivity and perception. However, transmitting massive amounts of heterogeneous data, perceiving complex environments from these data, and then making smart decisions in a timely manner are difficult. Artificial intelligence (AI), especially deep learning, is now a proven success in various areas, including computer vision, speech recognition, and natural language processing. AI introduced into the IoT heralds the era of AI of things (AIoT). This article presents a comprehensive survey on AIoT to show how AI can empower the IoT to make it faster, smarter, greener, and safer. Specifically, we briefly present the AIoT architecture in the context of cloud computing, fog computing, and edge computing. Then, we present progress in AI research for IoT from four perspectives: 1) perceiving; 2) learning; 3) reasoning; and 4) behaving. Next, we summarize some promising applications of AIoT that are likely to profoundly reshape our world. Finally, we highlight the challenges facing AIoT and some potential research opportunities.

216 citations


Journal ArticleDOI
Lei Liu, Chen Chen, Qingqi Pei, Sabita Maharjan, Yan Zhang
TL;DR: A comprehensive survey of state-of-the-art research on VEC can be found in this paper, where the authors provide an overview of VEC, including the introduction, architecture, key enablers, advantages, challenges as well as several attractive application scenarios.
Abstract: As one key enabler of Intelligent Transportation System (ITS), Vehicular Ad Hoc Network (VANET) has received remarkable interest from academia and industry. The emerging vehicular applications and the exponentially growing data have naturally led to increased needs for communication, computation and storage resources, and to strict performance requirements on response time and network bandwidth. In order to deal with these challenges, Mobile Edge Computing (MEC) is regarded as a promising solution. MEC pushes powerful computational and storage capacities from the remote cloud to the edge of networks in close proximity to vehicular users, which enables low latency and reduced bandwidth consumption. Driven by the benefits of MEC, many efforts have been devoted to integrating vehicular networks into MEC, thereby forming a novel paradigm named Vehicular Edge Computing (VEC). In this paper, we provide a comprehensive survey of state-of-the-art research on VEC. First of all, we provide an overview of VEC, including the introduction, architecture, key enablers, advantages, challenges as well as several attractive application scenarios. Then, we describe several typical research topics where VEC is applied. After that, we present a careful literature review on existing research work in VEC by classification. Finally, we identify open research issues and discuss future research directions.

205 citations


Journal ArticleDOI
TL;DR: A blockchain-enhanced security access control scheme that supports traceability and revocability has been proposed in IIoT for smart factories and has shown that the size of the public/private keys is smaller compared to other schemes, and the overhead time is less for public key generation, data encryption, and data decryption stages.
Abstract: The industrial Internet of Things (IIoT) supports recent developments in data management and information services, as well as services for smart factories. Nowadays, many mature IIoT cloud platforms are available to serve smart factories. However, because IIoT cloud platforms are only partially trusted, how to achieve secure storage, access control, information update and deletion for smart factory data, as well as the tracking and revocation of malicious users, has become an urgent problem. To solve these problems, in this article, a blockchain-enhanced security access control scheme that supports traceability and revocability is proposed in IIoT for smart factories. The blockchain first performs unified identity authentication, and stores all public keys, user attribute sets, and the revocation list. The system administrator then generates system parameters and issues private keys to users. The domain administrator is responsible for formulating domain security and privacy-protection policies, and performing encryption operations. If the attributes meet the access policies and the user's ID is not in the revocation list, the user can obtain the intermediate decryption parameters from the edge/cloud servers. Malicious users can be tracked and revoked during all stages if needed, which ensures the system security under the Decisional Bilinear Diffie–Hellman (DBDH) assumption and can resist multiple attacks. The evaluation has shown that the size of the public/private keys is smaller compared to other schemes, and the overhead time is less for the public key generation, data encryption, and data decryption stages.
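The access rule described above (attributes satisfy the policy, and the user is not revoked) can be sketched as a plain check. The real scheme enforces this cryptographically via attribute-based encryption under the DBDH assumption, which this toy does not attempt to reproduce:

```python
def may_decrypt(user_id, user_attrs, policy_attrs, revocation_list):
    """Grant the intermediate decryption parameters only if the user's
    attribute set satisfies the access policy (here: a simple AND of
    required attributes) and the user's ID has not been revoked."""
    return policy_attrs <= set(user_attrs) and user_id not in revocation_list

revoked = {"user-13"}                  # the on-chain revocation list
policy = {"factory-A", "engineer"}     # attributes the policy requires
assert may_decrypt("user-7", {"factory-A", "engineer", "shift-2"}, policy, revoked)
assert not may_decrypt("user-13", {"factory-A", "engineer"}, policy, revoked)  # revoked
assert not may_decrypt("user-7", {"factory-A"}, policy, revoked)  # attributes insufficient
```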

200 citations


Journal ArticleDOI
TL;DR: A deep blockchain framework (DBF) designed to offer security-based distributed intrusion detection and privacy-based blockchain with smart contracts in IoT networks and is compared with peer privacy-preserving intrusion detection techniques, and the experimental outcomes reveal that DBF outperforms the other competing models.
Abstract: There has been significant research in incorporating both blockchain and intrusion detection to improve data privacy and detect existing and emerging cyberattacks, respectively. In these approaches, learning-based ensemble models can facilitate the identification of complex malicious events and concurrently ensure data privacy. Such models can also be used to provide additional security and privacy assurances during the live migration of virtual machines (VMs) in the cloud and to protect Internet-of-Things (IoT) networks. This would allow the secure transfer of VMs between data centers or cloud providers in real time. This article proposes a deep blockchain framework (DBF) designed to offer security-based distributed intrusion detection and privacy-based blockchain with smart contracts in IoT networks. The intrusion detection method employs a bidirectional long short-term memory (BiLSTM) deep learning algorithm to deal with sequential network data and is assessed using the UNSW-NB15 and BoT-IoT data sets. The privacy-based blockchain and smart contract methods are developed using the Ethereum library to provide privacy to the distributed intrusion detection engines. The DBF framework is compared with peer privacy-preserving intrusion detection techniques, and the experimental outcomes reveal that DBF outperforms the other competing models. The framework has the potential to be used as a decision support system that can assist users and cloud providers in securely migrating their data in a timely and reliable manner.

Journal ArticleDOI
TL;DR: This article reformulates the microservice coordination problem using the Markov decision process framework and then proposes a reinforcement learning-based online microservice coordination algorithm to learn the optimal strategy, and proves that the offline algorithm can find the optimal solution while the online algorithm can achieve near-optimal performance.
Abstract: As an emerging service architecture, microservice enables decomposition of a monolithic web service into a set of independent lightweight services which can be executed independently. With mobile edge computing, microservices can be further deployed in edge clouds dynamically, launched quickly, and migrated across edge clouds easily, providing better services for users in proximity. However, user mobility can result in frequent switching of nearby edge clouds, which increases the service delay when users move away from their serving edge clouds. To address this issue, this article investigates microservice coordination among edge clouds to enable seamless and real-time responses to service requests from mobile users. The objective of this work is to devise the optimal microservice coordination scheme which can reduce the overall service delay with low costs. To this end, we first propose a dynamic programming-based offline microservice coordination algorithm that can achieve the globally optimal performance. However, the offline algorithm heavily relies on the availability of prior information such as computation request arrivals, time-varying channel conditions, and edge clouds' computation capabilities, which is hard to obtain. Therefore, we reformulate the microservice coordination problem using the Markov decision process framework and then propose a reinforcement learning-based online microservice coordination algorithm to learn the optimal strategy. Theoretical analysis proves that the offline algorithm can find the optimal solution while the online algorithm can achieve near-optimal performance. Furthermore, based on two real-world datasets, i.e., the Telecom's base station dataset and the Taxi Track dataset from Shanghai, experiments are conducted.
The experimental results demonstrate that the proposed online algorithm outperforms existing algorithms in terms of service delay and migration costs, and the achieved performance is close to the optimal performance obtained by the offline algorithm.
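The offline dynamic-programming formulation can be illustrated on a miniature instance: at each step of a known user path, the microservice either stays on its current edge cloud, paying a distance-based service delay (an assumed model), or migrates, paying a fixed cost (an invented number):

```python
MIGRATION_COST = 2.0   # fixed cost per migration (invented)

def service_delay(user_loc, service_loc):
    return abs(user_loc - service_loc)   # assumed distance-based delay model

def optimal_cost(user_path, clouds):
    """Minimum total (delay + migration) cost over all placement sequences.
    best[c] = cheapest cost so far with the service hosted on cloud c."""
    best = {c: service_delay(user_path[0], c) for c in clouds}
    for loc in user_path[1:]:
        best = {
            c: min(best[p] + (0 if p == c else MIGRATION_COST) for p in clouds)
               + service_delay(loc, c)
            for c in clouds
        }
    return min(best.values())

# A user moves from location 0 to 5; edge clouds sit at locations 0 and 5.
print(optimal_cost([0, 1, 4, 5], clouds=[0, 5]))  # 4.0: migrate once, mid-route
```

The online RL algorithm in the paper learns the same stay-or-migrate trade-off without knowing the user path in advance.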

Journal ArticleDOI
TL;DR: A novel healthcare monitoring framework based on the cloud environment and a big data analytics engine is proposed to precisely store and analyze healthcare data, and to improve the classification accuracy.

Journal ArticleDOI
TL;DR: A weighted cost model to minimize the execution time and energy consumption of IoT applications, in a computing environment with multiple IoT devices, multiple fog/edge servers and cloud servers is proposed and a new application placement technique based on the Memetic Algorithm is proposed to make batch application placement decision for concurrent IoT applications.
Abstract: Fog/Edge computing emerges as a novel computing paradigm that harnesses resources in the proximity of the Internet of Things (IoT) devices so that, alongside the cloud servers, they can provide services in a timely manner. However, due to the ever-increasing growth of IoT devices with resource-hungry applications, fog/edge servers with limited resources cannot efficiently satisfy the requirements of the IoT applications. Therefore, the application placement in the fog/edge computing environment, in which several distributed fog/edge servers and centralized cloud servers are available, is a challenging issue. In this article, we propose a weighted cost model to minimize the execution time and energy consumption of IoT applications in a computing environment with multiple IoT devices, multiple fog/edge servers and cloud servers. In addition, a new application placement technique based on the Memetic Algorithm is proposed to make batch application placement decisions for concurrent IoT applications. Due to the heterogeneity of IoT applications, we also propose a lightweight pre-scheduling algorithm to maximize the number of parallel tasks for the concurrent execution. The performance results demonstrate that our technique significantly improves the weighted cost of IoT applications by up to 65 percent in comparison to its counterparts.
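A weighted cost of the kind minimized here can be sketched as follows; the normalization constants and the candidate placements are illustrative assumptions, not the paper's model:

```python
def weighted_cost(exec_time, energy, max_time, max_energy, w=0.5):
    """Normalise both objectives and trade them off with weight w;
    lower is better."""
    return w * (exec_time / max_time) + (1 - w) * (energy / max_energy)

# Candidate placements for one application: (execution time in s, energy in J).
edge = weighted_cost(2.0, 1.0, max_time=10.0, max_energy=8.0)
cloud = weighted_cost(1.0, 6.0, max_time=10.0, max_energy=8.0)    # fast but power-hungry
device = weighted_cost(8.0, 0.5, max_time=10.0, max_energy=8.0)   # frugal but slow
best = min((edge, "edge"), (cloud, "cloud"), (device, "device"))
print(best[1])  # edge: it balances both objectives here
```

A memetic algorithm would search over such placement choices for whole batches of concurrent applications, rather than scoring one application at a time as this sketch does.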

Journal ArticleDOI
TL;DR: This paper proposes a lite distributed semantic communication system based on DL, named L-DeepSC, for text transmission with low complexity, where the data transmission from the IoT devices to the cloud/edge works at the semantic level to improve transmission efficiency.
Abstract: The rapid development of deep learning (DL) and widespread applications of the Internet-of-Things (IoT) have made the devices smarter than before, and enabled them to perform more intelligent tasks. However, it is challenging for any IoT device to train and run DL models independently due to its limited computing capability. In this paper, we consider an IoT network where the cloud/edge platform performs the DL based semantic communication (DeepSC) model training and updating while IoT devices perform data collection and transmission based on the trained model. To make it affordable for IoT devices, we propose a lite distributed semantic communication system based on DL, named L-DeepSC, for text transmission with low complexity, where the data transmission from the IoT devices to the cloud/edge works at the semantic level to improve transmission efficiency. Particularly, by pruning the model redundancy and lowering the weight resolution, the L-DeepSC becomes affordable for IoT devices and the bandwidth required for model weight transmission between IoT devices and the cloud/edge is reduced significantly. Through analyzing the effects of fading channels in forward-propagation and back-propagation during the training of L-DeepSC, we develop a channel state information (CSI) aided training process to decrease the effects of fading channels on transmission. Meanwhile, we tailor the semantic constellation to make it implementable on capacity-limited IoT devices. Simulations demonstrate that the proposed L-DeepSC achieves competitive performance compared with traditional methods, especially in the low signal-to-noise ratio (SNR) region. In particular, it can reach a compression ratio as large as 40× without performance degradation.
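The two compression steps named above, pruning model redundancy and lowering the weight resolution, can be sketched in isolation; the threshold and bit-width are illustrative, not the paper's settings:

```python
def prune(weights, threshold):
    """Remove model redundancy: zero out weights of small magnitude."""
    return [0.0 if abs(w) < threshold else w for w in weights]

def quantise(weights, bits):
    """Lower the weight resolution to 2**bits uniformly spaced levels."""
    lo, hi = min(weights), max(weights)
    step = (hi - lo) / (2 ** bits - 1)
    return [lo + round((w - lo) / step) * step for w in weights]

w = [0.02, -0.8, 0.5, -0.01, 0.3]
pruned = prune(w, threshold=0.05)    # -> [0.0, -0.8, 0.5, 0.0, 0.3]
compact = quantise(pruned, bits=2)   # only 4 distinct weight values remain
print(pruned)
print(compact)
```

Pruned zeros and the small quantisation codebook are what shrink the model-weight transfers between the cloud/edge and the IoT devices.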

Journal ArticleDOI
TL;DR: This paper proposes the first certificateless public verification scheme against procrastinating auditors (CPVPA) by using blockchain technology, presents rigorous security proofs to demonstrate the security of CPVPA, and conducts a comprehensive performance evaluation to show that CPVPA is efficient.
Abstract: The deployment of cloud storage services has significant benefits in managing data for users. However, it also causes many security concerns, and one of them is data integrity. Public verification techniques can enable a user to employ a third-party auditor to verify the data integrity on her/his behalf, whereas existing public verification schemes are vulnerable to procrastinating auditors who may not perform verifications on time. Furthermore, most public verification schemes are constructed on the public key infrastructure (PKI), and thereby suffer from the certificate management problem. In this paper, we propose a certificateless public verification scheme against procrastinating auditors (CPVPA) by using blockchain technology. The key idea is to require auditors to record each verification result into a transaction on a blockchain. Because transactions on the blockchain are time-sensitive, the verification can be time-stamped after the transaction is recorded into the blockchain, which enables users to check whether auditors perform the verifications at the prescribed time. Moreover, CPVPA is built on certificateless cryptography, and is free from the certificate management problem. We present rigorous security proofs to demonstrate the security of CPVPA, and conduct a comprehensive performance evaluation to show that CPVPA is efficient.
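The key idea, time-stamped verification records on a tamper-evident ledger, can be mimicked with a toy in-memory hash chain; a real deployment uses a public blockchain, and this sketch omits all of CPVPA's certificateless cryptography:

```python
import hashlib
import json

chain = [{"hash": "0" * 64}]   # genesis placeholder

def record_verification(result, timestamp):
    """Append the auditor's verification result as a hash-chained block."""
    block = {"prev": chain[-1]["hash"], "result": result, "time": timestamp}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)

def audited_on_time(deadline):
    """Did any recorded verification happen no later than the deadline?"""
    return any(block["time"] <= deadline for block in chain[1:])

record_verification("data intact", timestamp=100)
print(audited_on_time(deadline=120))  # True: the auditor verified on schedule
print(audited_on_time(deadline=50))   # False: the verification came too late
```

Because each block commits to its predecessor's hash, an auditor cannot quietly backdate a late verification, which is exactly the procrastination the scheme is designed to expose.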

Journal ArticleDOI
TL;DR: In this paper, the authors propose a concept of connected vehicles that exploits vehicular ad hoc network (VANET) communication, an embedded system integrated with sensors which acquires the static and dynamic parameters of the electric vehicle, and cloud integration and big data analytics tools.
Abstract: Testing and implementation of integrated and intelligent transport systems (IITS) of an electric vehicle need many high-performance and high-precision subsystems. The existing systems confine themselves to limited features and have driving range anxiety, charging and discharging time issues, and inter- and intravehicle communication problems. These issues are the critical barriers to the penetration of EVs into a smart grid. This paper proposes a concept which consists of connected vehicles that exploit vehicular ad hoc network (VANET) communication, an embedded system integrated with sensors which acquires the static and dynamic parameters of the electric vehicle, and cloud integration and big data analytics tools. Vehicle control information is generated based on machine learning-based control systems. This paper also focuses on improving the overall performance (discharge time and cycle life) of a lithium-ion battery, increasing the range of the electric vehicle, enhancing the safety of the battery, acquiring the static and dynamic parameters and driving pattern of the electric vehicle, establishing vehicular ad hoc network (VANET) communication, and handling and analyzing the acquired data with the help of various big data analytics techniques.

Journal ArticleDOI
TL;DR: A real-time vehicle counter that combines vehicle detection and vehicle tracking algorithms to realize the detection of traffic flow is proposed.
Abstract: An intelligent transportation system (ITS) plays an important role in public transport management, security and other issues. Traffic flow detection is an important part of the ITS. Based on the real-time acquisition of urban road traffic flow information, an ITS provides intelligent guidance for relieving traffic jams and reducing environmental pollution. The traffic flow detection in an ITS usually adopts the cloud computing mode. The edge of the network will transmit all the captured video to the cloud computing center. However, the increasing traffic monitoring has brought great challenges to the storage, communication and processing of traditional transportation systems based on cloud computing. To address this issue, a traffic flow detection scheme based on deep learning on the edge node is proposed in this article. First, we propose a vehicle detection algorithm based on the YOLOv3 (You Only Look Once) model trained with a great volume of traffic data. We pruned the model to ensure its efficiency on the edge equipment. After that, the DeepSORT (Deep Simple Online and Realtime Tracking) algorithm is optimized by retraining the feature extractor for multiobject vehicle tracking. Then, we propose a real-time vehicle counter that combines the vehicle detection and vehicle tracking algorithms to realize the detection of traffic flow. Finally, the vehicle detection network and multiple-object tracking network are migrated and deployed on the edge device Jetson TX2 platform, and we verify the correctness and efficiency of our framework. The test results indicate that our model can efficiently detect the traffic flow with an average processing speed of 37.9 FPS (frames per second) and an average accuracy of 92.0% on the edge device.
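The counting stage that sits on top of detection (YOLOv3) and tracking (DeepSORT) reduces to checking when a tracked centroid crosses a virtual line; in this sketch the track trajectories are hand-made stand-ins for tracker output:

```python
COUNT_LINE_Y = 100   # y-coordinate of the virtual counting line (assumed)

def count_vehicles(tracks):
    """tracks: {track_id: [centroid y per frame]}. Count vehicles whose
    centroid trajectory crosses the counting line in either direction."""
    counted = 0
    for ys in tracks.values():
        crossed = any(a < COUNT_LINE_Y <= b or b < COUNT_LINE_Y <= a
                      for a, b in zip(ys, ys[1:]))
        counted += crossed
    return counted

tracks = {
    1: [80, 95, 110, 130],   # drives through the line -> counted
    2: [150, 140, 135],      # stays on one side -> not counted
    3: [120, 105, 90],       # crosses in the opposite direction -> counted
}
print(count_vehicles(tracks))  # 2
```

Because the tracker assigns stable IDs, each vehicle is counted once even though it appears in many frames; that is the main reason counting needs tracking on top of per-frame detection.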

Journal ArticleDOI
TL;DR: Since 2010, the WeNMR project has implemented numerous web-based services to facilitate the use of advanced computational tools by researchers in structural biology; these services now operate as Thematic Services in the European Open Science Cloud (EOSC) portal.
Abstract: Structural biology aims at characterizing the structural and dynamic properties of biological macromolecules in atomic detail. Gaining insight into three-dimensional structures of biomolecules and their interactions is critical for understanding the vast majority of cellular processes, with direct applications in health and food sciences. Since 2010, the WeNMR project (www.wenmr.eu) has implemented numerous web-based services to facilitate the use of advanced computational tools by researchers in the field, using the high-throughput computing infrastructure provided by EGI. These services have been further developed in subsequent initiatives under H2020 projects and are now operating as Thematic Services in the European Open Science Cloud (EOSC) portal (www.eosc-portal.eu), processing more than 12 million jobs and using around 4000 CPU-years per year. Here we review 10 years of successful e-infrastructure solutions serving a large worldwide community of over 23,000 users to date, providing them with user-friendly, web-based solutions that run complex workflows in structural biology. The current set of active WeNMR portals is described, together with the complex backend machinery that allows distributed computing resources to be harvested efficiently.

Journal ArticleDOI
TL;DR: The IoT/IIoT critical infrastructure in Industry 4.0 is introduced, the blockchain and edge computing paradigms are briefly presented, and it is shown how the convergence of these two paradigms can enable secure and scalable critical infrastructures.
Abstract: Critical infrastructure systems are vital to underpin the functioning of a society and economy. Due to the ever-increasing number of Internet-connected Internet-of-Things (IoT)/Industrial IoT (IIoT) devices, and the high volume of data generated and collected, security and scalability are becoming burning concerns for critical infrastructures in Industry 4.0. Blockchain technology is essentially a distributed and secure ledger that records all transactions in a hierarchically expanding chain of blocks. Edge computing brings cloud capabilities closer to the computation tasks. The convergence of the blockchain and edge computing paradigms can overcome the existing security and scalability issues. In this article, we first introduce IoT/IIoT critical infrastructure in Industry 4.0, and then we briefly present the blockchain and edge computing paradigms. After that, we show how the convergence of these two paradigms can enable secure and scalable critical infrastructures. Then, we provide a survey of the state of the art in the security, privacy, and scalability of IoT/IIoT critical infrastructures. A list of potential research challenges and open issues is also provided, which can serve as a useful resource to guide future research.

Journal ArticleDOI
TL;DR: In this paper, a federated edge learning framework is proposed to aggregate local learning updates at the network edge in lieu of users' raw data to accelerate the training process of deep neural networks.
Abstract: The training task in classical machine learning models, such as deep neural networks, is generally implemented at a remote cloud center for centralized learning, which is typically time-consuming and resource-hungry. It also incurs serious privacy issues and long communication latency, since a large amount of data is transmitted to the centralized node. To overcome these shortcomings, we consider a newly emerged framework, namely federated edge learning, to aggregate local learning updates at the network edge in lieu of users' raw data. Aiming at accelerating the training process, we first define a novel performance evaluation criterion, called learning efficiency. We then formulate a training acceleration optimization problem in the CPU scenario, where each user device is equipped with a CPU. Closed-form expressions for joint batchsize selection and communication resource allocation are developed, and some insightful results are highlighted. Further, we extend our learning framework to the GPU scenario. The optimal solution in this scenario is shown to have a structure similar to that of the CPU scenario, suggesting that our proposed algorithm is applicable in more general systems. Finally, extensive experiments validate the theoretical analysis and demonstrate that the proposed algorithm can reduce the training time and improve the learning accuracy simultaneously.
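The aggregation step at the heart of federated edge learning can be illustrated with a minimal sketch, assuming local updates are flat weight vectors averaged in proportion to each device's sample count. The function name and data layout are illustrative, not taken from the paper:

```python
# Toy sketch of the aggregation performed at the edge server: only model
# updates leave the devices, never raw data. Each update is weighted by
# the number of local samples that produced it.

def federated_average(updates):
    """updates: list of (weights, n_samples); weights is a flat list of floats."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    global_w = [0.0] * dim
    for w, n in updates:
        for i in range(dim):
            global_w[i] += w[i] * n / total
    return global_w
```

The paper's contribution lies in how to choose batch sizes and communication resources around this step, not in the averaging itself, which is the standard federated aggregation rule.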

Journal ArticleDOI
TL;DR: The authors propose JointDNN, an efficient, adaptive, and practical engine for collaborative computation between a mobile device and the cloud for DNNs in both the inference and training phases.
Abstract: Deep learning models are being deployed in many mobile intelligent applications. End-side services, such as intelligent personal assistants, autonomous cars, and smart home services, often employ either simple local models on the mobile device or complex remote models on the cloud. However, recent studies have shown that partitioning the DNN computations between the mobile device and the cloud can improve latency and energy efficiency. In this paper, we propose JointDNN, an efficient, adaptive, and practical engine for collaborative computation between a mobile device and the cloud for DNNs in both the inference and training phases. JointDNN not only provides an energy- and performance-efficient method of querying DNNs for the mobile side but also benefits the cloud server by reducing its workload and communications compared to the cloud-only approach. Given the DNN architecture, we investigate the efficiency of processing some layers on the mobile device and some layers on the cloud server. We provide optimization formulations at layer granularity for forward- and backward-propagation in DNNs, which can adapt to mobile battery limitations, cloud server load constraints, and quality of service. JointDNN achieves up to 18 and 32 times reductions in the latency and mobile energy consumption of querying DNNs, respectively, compared to the status-quo approaches.
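The layer-granularity partitioning idea can be illustrated with a simplified sketch that assumes a single split point, with early layers on the mobile device and the rest on the cloud. JointDNN's actual formulation is more general (it covers arbitrary layer assignments and the training phase); all names and cost arrays below are hypothetical:

```python
# Simplified single-split partitioning sketch: try every split point k
# (mobile runs layers 0..k-1, cloud runs layers k..n-1) and pick the one
# minimizing compute cost plus the cost of shipping the activation.

def best_split(mobile_cost, cloud_cost, transfer_cost):
    """
    mobile_cost[i], cloud_cost[i]: per-layer latency on each side.
    transfer_cost[k]: cost of sending the activation at split point k
    (k = 0 means all layers run in the cloud; k = n means all on the mobile).
    Returns (best_k, best_total).
    """
    n = len(mobile_cost)
    best_k, best_total = None, float("inf")
    for k in range(n + 1):
        total = sum(mobile_cost[:k]) + transfer_cost[k] + sum(cloud_cost[k:])
        if total < best_total:
            best_k, best_total = k, total
    return best_k, best_total
```

The same scan could use energy instead of latency as the per-layer cost, which mirrors the paper's point that the best split adapts to battery limits and server load.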

Journal ArticleDOI
TL;DR: In this paper, a taxonomy of federated learning over IoT networks is presented, and a set of metrics such as sparsification, robustness, quantization, scalability, security, and privacy is used to evaluate recent advances.
Abstract: The Internet of Things (IoT) will be ripe for the deployment of novel machine learning algorithms for both network and application management. However, given the presence of massively distributed and private datasets, it is challenging to use classical centralized learning algorithms in the IoT. To overcome this challenge, federated learning can be a promising solution that enables on-device machine learning without the need to migrate private end-user data to a central cloud. In federated learning, only learning model updates are transferred between end-devices and the aggregation server. Although federated learning can offer better privacy preservation than centralized machine learning, it still has privacy concerns. In this paper, we first present the recent advances of federated learning towards enabling federated learning-powered IoT applications. A set of metrics such as sparsification, robustness, quantization, scalability, security, and privacy is delineated in order to rigorously evaluate these advances. Second, we devise a taxonomy for federated learning over IoT networks. Finally, we present several open research challenges with their possible solutions.

Journal ArticleDOI
TL;DR: This paper provides a complete survey of 5G technology in the agricultural sector, discussing the need for and role of smart and precision farming; the benefits of 5G; applications of 5G in precision farming, such as real-time monitoring, virtual consultation and predictive maintenance, data analytics, and cloud repositories; and future prospects.

Journal ArticleDOI
TL;DR: In comparison to existing decentralized fine-grained searchable encryption schemes, the proposed scheme achieves a significant reduction in the storage and computational cost of the secret key associated with users.
Abstract: The concept of sharing personal health data over cloud storage in a healthcare cyber-physical system has become popular in recent times, as it improves access quality. The privacy of health data can only be preserved by keeping it in an encrypted form, but this affects usability and flexibility in terms of effective search. Attribute-based searchable encryption (ABSE) has proven its worth by providing fine-grained searching capabilities in shared cloud storage. However, it is not practical to apply this scheme to devices with limited resources and storage capacity, because a typical ABSE scheme involves computationally intensive operations. In a healthcare cloud-based cyber-physical system (CCPS), the data is often collected by resource-constrained devices; therefore, here also, we cannot directly apply ABSE schemes. In the proposed work, the inherent computational cost of the ABSE scheme is managed by executing its computationally intensive tasks on the blockchain network. This makes the proposed scheme suitable for online storage and retrieval of personal health data in a typical CCPS. With the assistance of blockchain technology, the proposed scheme offers two main benefits. First, it is free from a trusted authority, which makes it genuinely decentralized and free from a single point of failure. Second, it is computationally efficient, because the computational load is distributed among the consensus nodes in the blockchain network. Specifically, the task of initializing the system, which is the most computationally intensive, and the task of partial search token generation, which is the most frequent operation, are now the responsibility of the consensus nodes. This eliminates the need for a trusted authority and reduces the burden on data users, respectively.
Further, in comparison to existing decentralized fine-grained searchable encryption schemes, the proposed scheme achieves a significant reduction in the storage and computational cost of the secret key associated with users. This has been verified both theoretically and practically in the performance analysis section.
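The core trick of pushing expensive cryptographic work to consensus nodes can be illustrated with a toy example (this is not the paper's actual ABSE protocol): a modular exponentiation, a dominant cost in such schemes, is split additively across nodes so that no single node sees the full exponent and the user only performs cheap multiplications.

```python
# Toy sketch of distributing an expensive computation g^a mod p across
# consensus nodes. Assumes p is prime and gcd(g, p) = 1, so exponents can
# be reduced modulo the group order q = p - 1 (Fermat's little theorem).
import random

def split_exponentiation(g, a, p, n_nodes=3):
    q = p - 1  # exponent arithmetic is valid modulo the group order
    shares = [random.randrange(q) for _ in range(n_nodes - 1)]
    shares.append((a - sum(shares)) % q)        # shares sum to a (mod q)
    partials = [pow(g, s, p) for s in shares]   # heavy work, done by the nodes
    result = 1
    for part in partials:                       # cheap recombination by the user
        result = (result * part) % p
    return result
```

In the actual scheme the distributed operations are system initialization and partial search token generation, but the load-sharing principle is the same: each node contributes a partial result from a share of the secret material.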

Journal ArticleDOI
TL;DR: A new realistic testbed architecture for an IoT network, deployed at the IoT lab of the University of New South Wales (UNSW) Canberra, is presented, and four machine learning-based anomaly detection algorithms are validated, demonstrating high detection accuracy.

Journal ArticleDOI
TL;DR: An energy-efficient dynamic task offloading algorithm is developed that chooses the optimal computing place (the IoT device, the MEC server, or the MCC server) in an online manner, with the goal of jointly minimizing energy consumption and task response time.
Abstract: With the proliferation of compute-intensive and delay-sensitive mobile applications, large amounts of computational resources with stringent latency requirements are required on Internet-of-Things (IoT) devices. One promising solution is to offload complex computing tasks from IoT devices to either mobile-edge computing (MEC) or mobile cloud computing (MCC) servers. MEC servers are much closer to IoT devices and thus have lower latency, while MCC servers can provide flexible and scalable computing capability to support complicated applications. To address the tradeoff between limited computing capacity and high latency, and to ensure data integrity during the offloading process, we consider a blockchain scenario where edge computing and cloud computing collaborate toward secure task offloading. We further propose a blockchain-enabled IoT-Edge-Cloud computing architecture that benefits from both MCC and MEC, where MEC servers offer lower-latency computing services while MCC servers provide stronger computation power. Moreover, we develop an energy-efficient dynamic task offloading (EEDTO) algorithm that chooses the optimal computing place in an online manner, either the IoT device, the MEC server, or the MCC server, with the goal of jointly minimizing the energy consumption and task response time. The Lyapunov optimization technique is applied to control the computation and communication costs incurred by different types of applications and the dynamic changes of wireless environments. During the optimization, the best computing location for each task is chosen adaptively without requiring future system information as prior knowledge. Compared with previous offloading schemes with/without MEC and MCC cooperation, EEDTO achieves energy-efficient offloading decisions with relatively lower computational complexity.
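The per-task decision EEDTO makes can be caricatured as picking the location with the minimum weighted energy-plus-delay cost. The simple linear cost model and all parameter values below are illustrative assumptions, not the paper's Lyapunov-based formulation:

```python
# Hedged sketch of a three-way offloading decision: run the task locally,
# on the MEC server (fast uplink, moderate CPU), or on the MCC server
# (slow uplink, strong CPU), minimizing a weighted energy + delay cost.

def choose_location(task_cycles, data_bits, w_energy=0.5, w_time=0.5):
    places = {
        # (cpu_hz, joules_per_cycle, uplink_bps); local execution sends no data
        "device": (1e9,  1e-9,  None),
        "mec":    (5e9,  2e-10, 50e6),
        "mcc":    (20e9, 2e-10, 10e6),
    }
    tx_power = 0.5  # watts spent by the device while transmitting (assumed)
    best, best_cost = None, float("inf")
    for name, (hz, jpc, bps) in places.items():
        t_comm = 0.0 if bps is None else data_bits / bps
        t = task_cycles / hz + t_comm
        # local execution costs compute energy; offloading costs radio energy
        e = task_cycles * jpc if bps is None else tx_power * t_comm
        cost = w_energy * e + w_time * t
        if cost < best_cost:
            best, best_cost = name, cost
    return best
```

Even this caricature reproduces the qualitative behavior the abstract describes: data-heavy light tasks stay on the device, moderate tasks go to the nearby MEC server, and compute-heavy tasks justify the trip to the MCC server.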

Journal ArticleDOI
TL;DR: A many-objective intelligent algorithm with a sine function is presented to implement the model, based on the observation that the variation tendency of the diversity strategy in the population is similar to a sine function; the algorithm demonstrates excellent scheduling efficiency and hence enhanced security.
Abstract: The Internet of Things (IoT) is a huge network that establishes ubiquitous connections between smart devices and objects. The flourishing of IoT has led to an unprecedented data explosion; traditional data storage and processing techniques suffer from low efficiency, and malicious use of the data can cause further security losses. Multicloud is a high-performance secure computing platform that combines multiple cloud providers for data processing, and the distributed multicloud platform ensures the security of data to some extent. Based on multicloud and task scheduling in IoT, this article constructs a many-objective distributed scheduling model with six objectives: total time, cost, cloud throughput, energy consumption, resource utilization, and load balancing. Furthermore, this article presents a many-objective intelligent algorithm with a sine function to implement the model, based on the observation that the variation tendency of the diversity strategy in the population is similar to a sine function. The experimental results demonstrate excellent scheduling efficiency and hence enhanced security. This work provides a new idea for addressing the difficult problem of data processing in IoT.
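One plausible reading of the sine-function diversity idea is a diversity pressure that rises and falls periodically over generations. The sketch below is an illustrative interpretation under that assumption, not the paper's actual operator; the function name, bounds, and cycle count are all hypothetical:

```python
# Illustrative sine-shaped diversity schedule for an evolutionary algorithm:
# the fraction of offspring produced by diversity-preserving operators
# oscillates between a low and a high bound as generations advance.
import math

def diversity_ratio(gen, max_gen, low=0.1, high=0.9, cycles=2):
    """Fraction of offspring produced by diversity-preserving operators."""
    phase = math.sin(2 * math.pi * cycles * gen / max_gen)  # in [-1, 1]
    return low + (high - low) * (phase + 1) / 2             # in [low, high]
```

An algorithm using such a schedule would alternate between exploration-heavy and convergence-heavy phases, which is one way to balance the six competing objectives the model defines.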

Journal ArticleDOI
TL;DR: A secure intrusion detection scheme with blockchain-based data transmission and a classification model for CPS in the healthcare sector, which achieves privacy and security and uses a multiple share creation (MSC) model to generate multiple shares of the captured image.