
Showing papers on "Testbed published in 2021"


Journal ArticleDOI
TL;DR: A new realistic testbed architecture for an IoT network deployed at the IoT lab of the University of New South Wales (UNSW) at Canberra is presented, and four machine learning-based anomaly detection algorithms are validated on it, achieving high detection accuracy.

136 citations
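As a rough illustration of the kind of pipeline such a testbed feeds, the following is a minimal sketch of training one possible anomaly detector on flow-level features; the feature set, the synthetic data, and the choice of IsolationForest are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: fit an anomaly detector on flow-level features of the
# sort an IoT testbed might export (feature columns are illustrative).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: packets/s, bytes/s, mean inter-arrival time (ms)
normal = rng.normal(loc=[50, 4e4, 20], scale=[5, 5e3, 2], size=(1000, 3))
attack = rng.normal(loc=[400, 3e5, 2], scale=[50, 4e4, 0.5], size=(50, 3))

model = IsolationForest(contamination=0.05, random_state=0).fit(normal)
pred = model.predict(np.vstack([normal[:10], attack[:10]]))  # +1 normal, -1 anomaly
print(pred)
```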


Journal ArticleDOI
TL;DR: A container scheduling system that enables serverless platforms to make efficient use of edge infrastructures and a method to automatically fine-tune the weights of scheduling constraints to optimize high-level operational objectives such as minimizing task execution time, uplink usage, or cloud execution cost is presented.

95 citations
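A minimal sketch of weighted-constraint scoring for container placement is shown below; the constraint names, weights, and node attributes are invented for illustration (the paper additionally auto-tunes the weights against operational objectives such as execution time or cloud cost).

```python
# Minimal sketch of weighted-constraint scoring for edge/cloud placement.
# Constraint names and weights are assumptions, not the paper's scheduler.
def score(node, task, w):
    latency_pen = node["rtt_ms"] / 100.0                     # prefer nearby nodes
    uplink_pen = task["input_mb"] / max(node["uplink_mbps"], 1e-9)
    cost_pen = node["usd_per_hour"]
    locality = 0.0 if task["image"] in node["cached_images"] else 1.0
    return -(w["latency"] * latency_pen + w["uplink"] * uplink_pen
             + w["cost"] * cost_pen + w["locality"] * locality)

nodes = [
    {"name": "edge-1", "rtt_ms": 5, "uplink_mbps": 50, "usd_per_hour": 0.0,
     "cached_images": {"detector:v1"}},
    {"name": "cloud-1", "rtt_ms": 60, "uplink_mbps": 500, "usd_per_hour": 0.3,
     "cached_images": set()},
]
task = {"image": "detector:v1", "input_mb": 20}
weights = {"latency": 1.0, "uplink": 0.5, "cost": 2.0, "locality": 1.0}
best = max(nodes, key=lambda n: score(n, task, weights))
print(best["name"])
```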


Journal ArticleDOI
TL;DR: In this paper, the authors design a rigorous testbed for measuring the one-way packet delays between a 5G end device via a radio access network (RAN) to a packet core with sub-microsecond precision as well as measuring the packet core delay with nanosecond precision.
Abstract: A 5G campus network is a 5G network for the users affiliated with the campus organization, e.g., an industrial campus, covering a prescribed geographical area. A 5G campus network can operate as a so-called 5G non-standalone (NSA) network (which requires 4G Long-Term Evolution (LTE) spectrum access) or as a 5G standalone (SA) network (without 4G LTE spectrum access). 5G campus networks are envisioned to enable new use cases, which require cyclic delay-sensitive industrial communication, such as robot control. We design a rigorous testbed for measuring the one-way packet delays between a 5G end device via a radio access network (RAN) to a packet core with sub-microsecond precision as well as for measuring the packet core delay with nanosecond precision. With our testbed design, we conduct detailed measurements of the one-way download (downstream, i.e., core to end device) as well as one-way upload (upstream, i.e., end device to core) packet delays and losses for both 5G SA and 5G NSA hardware and network operation. We also measure the corresponding 5G SA and 5G NSA packet core processing delays for download and upload. We find that typically 95% of the SA download packet delays are in the range from 4–10 ms, indicating a fairly wide spread of the packet delays. Also, existing packet core implementations regularly incur packet processing latencies up to 0.4 ms, with outliers above one millisecond. Our measurement results inform the further development and refinement of 5G SA and 5G NSA campus networks for industrial use cases. We make the measurement data traces publicly available as the IEEE DataPort 5G Campus Networks: Measurement Traces dataset (DOI 10.21227/xe3c-e968).

66 citations
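For context, the post-processing side of such a measurement is straightforward once synchronized timestamps are available; the sketch below (with made-up numbers) computes one-way delays and the percentiles bounding the middle 95% of samples, while timestamp acquisition itself (hardware timestamping, clock synchronization) is the hard part addressed by the testbed.

```python
# Sketch of the post-processing step: given synchronized send/receive
# timestamps (seconds), compute one-way delays and the 2.5th/97.5th
# percentiles bounding the middle 95% of samples.
import statistics

send_ts = [0.000, 0.001, 0.002, 0.003, 0.004]
recv_ts = [0.0052, 0.0071, 0.0066, 0.0128, 0.0091]

delays_ms = [(r - s) * 1e3 for s, r in zip(send_ts, recv_ts)]
q = statistics.quantiles(delays_ms, n=40)   # cut points in 2.5% steps
print(f"median={statistics.median(delays_ms):.2f} ms, "
      f"2.5%={q[0]:.2f} ms, 97.5%={q[-1]:.2f} ms")
```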


Journal ArticleDOI
TL;DR: A contract-based framework coined SDN-RMbw (Software-Defined Networking Resilience Management for Bandwidth) is presented, in which components are bound to bandwidth contracts and a resilience manager, aiming to provide fault resilience and to adapt to different network-state changes.
Abstract: In this paper, we address a key challenge of managing the required bandwidth for traffic flows in industrial cyber-physical systems (ICPS). To manage the required bandwidth and to improve fault resilience in such industrial networks, software-defined networks (SDN) are used. We propose a contract-based framework with the use of SDN where the components are bound to bandwidth contracts and a resilience manager. The bandwidth contracts state the bandwidth requirements of the traffic flows. Based on the newly calculated routes, an observer detects whether the contract still satisfies the bandwidth requirements of the traffic flows or the contract gets violated (termed a fault). To provide resilience to such faults in the network, a resilience manager integrated with control logic decides and executes a suitable response strategy (depending upon the severity of the fault). The proposed framework is evaluated using the Ryu SDN controller on a hardware testbed. Different tests on the hardware testbed show that the proposed framework provides enhanced network resilience compared to the baseline mechanisms. Besides, our extensive experimental emulations are carried out on the Mininet SDN tool for testing the scalability of the proposed framework.

52 citations
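A toy version of the contract-checking step might look like the following; flow names, bandwidth figures, and the bottleneck-link rule are assumptions for illustration rather than details of SDN-RMbw.

```python
# Illustrative observer loop: flag a contract as violated when the bandwidth
# available on a flow's (re)computed route drops below the contracted value.
contracts = {"flow-A": 10.0, "flow-B": 4.0}          # required Mbps per flow
route_bw = {"flow-A": [20.0, 9.5, 15.0],             # per-link available Mbps
            "flow-B": [8.0, 6.0]}                    # along the current route

def check(flow):
    available = min(route_bw[flow])                  # bottleneck link
    return available, available < contracts[flow]

for flow in contracts:
    bw, violated = check(flow)
    action = "trigger resilience manager" if violated else "ok"
    print(f"{flow}: bottleneck={bw} Mbps, required={contracts[flow]} Mbps -> {action}")
```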


Journal ArticleDOI
20 Aug 2021
TL;DR: Intelligent edge computing, an emerging technology for reducing energy consumption in processing AI tasks, is introduced to build green AI computing for IIoT applications, and a novel algorithm to optimize the scheduling of different AI tasks is proposed.
Abstract: Artificial Intelligence (AI) technology is a huge opportunity for the industrial Internet of Things (IIoT) in the fourth industrial revolution (Industry 4.0). However, most AI-driven applications need high-end servers to process complex AI tasks, bringing high energy consumption to IIoT environments. In this article, we introduce intelligent edge computing, an emerging technology for reducing the energy consumed in processing AI tasks, to build green AI computing for IIoT applications. We first propose an intelligent edge computing framework with a heterogeneous architecture to offload most AI tasks from servers. To enhance the energy efficiency of various computing resources, we propose a novel algorithm to optimize the scheduling of different AI tasks. In the performance evaluation, we build a small testbed to show the energy efficiency of AI-driven IIoT applications with intelligent edge computing. Meanwhile, extensive simulation results show that the proposed online scheduling strategy consumes less than 80% of the energy of the static scheduling strategy and 70% of that of the first-in, first-out (FIFO) strategy in most settings.

41 citations
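As a hedged illustration of why energy-aware dispatch helps, the toy model below compares sending every task to a big server against greedily picking the cheaper device per task; the power and speed figures are invented, and the paper's online scheduling algorithm (and its FIFO baseline) is more elaborate than this rule.

```python
# Toy energy model comparing two dispatch policies for AI tasks on
# heterogeneous resources (numbers are made up for illustration only).
devices = {
    "edge-npu": {"power_w": 10.0, "speedup": 1.0},
    "server-gpu": {"power_w": 250.0, "speedup": 5.0},
}
tasks = [2.0, 0.5, 4.0, 1.0]   # seconds each task takes on the edge NPU

def energy(dev, base_seconds):
    d = devices[dev]
    return d["power_w"] * (base_seconds / d["speedup"])   # joules

greedy = sum(min(energy(d, t) for d in devices) for t in tasks)
all_server = sum(energy("server-gpu", t) for t in tasks)  # naive baseline
print(f"greedy-to-cheapest: {greedy:.0f} J, all-to-server: {all_server:.0f} J")
```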


Journal ArticleDOI
TL;DR: In this article, an artificial intelligence (AI)-based intrusion detection architecture comprising Deep Learning Engines (DLEs) for identification and classification of the vehicular traffic in the Internet of Vehicles (IoV) networks into potential cyberattack types is presented.
Abstract: Recent advances in the Internet of Things (IoT) and the adoption of IoT in vehicular networks have led to a new and promising paradigm called the Internet of Vehicles (IoV). However, the mode of communication in IoV being wireless in nature poses serious cybersecurity challenges. With many vehicles being connected in the IoV network, the vehicular data is set to explode. Traditional intrusion detection techniques may not be suitable in these scenarios with an extremely large amount of vehicular data being generated at an unprecedented rate and with various types of cybersecurity attacks being launched. Thus, there is a need for the development of advanced intrusion detection techniques capable of handling possible cyberattacks in these networks. Toward this end, we present an artificial intelligence (AI)-based intrusion detection architecture comprising Deep Learning Engines (DLEs) for identification and classification of the vehicular traffic in the IoV networks into potential cyberattack types. Also, taking into consideration the mobility of the vehicles and the real-time requirements of the IoV networks, these DLEs will be deployed on Multi-access Edge Computing (MEC) servers instead of running on the remote cloud. Extensive experimental results using popular evaluation metrics and average prediction time on a MEC testbed demonstrate the effectiveness of the proposed scheme.

36 citations


Journal ArticleDOI
TL;DR: A detailed comparative study on communication and computational overheads, security, and functionality features reveals that the proposed ACPBS-IoT provides superior security, more functionality features, and better or comparable overheads than other existing competing access control schemes.
Abstract: Surveillance drones, also known as unmanned aerial vehicles (UAVs), are aircraft that are utilized to collect video recordings, still images, or live video of targets, such as vehicles, people or specific areas. Particularly in battlefield surveillance, there is a high possibility of eavesdropping on, inserting, modifying or deleting messages during communications among the deployed drones and the ground station server (GSS). This allows an adversary to launch several potential attacks, such as man-in-the-middle, impersonation, drone hijacking, replay attacks, etc. Moreover, anonymity and untraceability are two crucial security properties that need to be maintained in a battlefield surveillance communication environment. To deal with such a crucial security problem, we propose a new access control protocol for battlefield surveillance in a drone-assisted Internet of Things (IoT) environment, called ACPBS-IoT. Through a detailed security analysis using formal and informal (non-mathematical) methods, and also formal security verification under an automated software simulation tool, we show that the proposed ACPBS-IoT can resist several potential attacks relevant to the battlefield surveillance scenario. Furthermore, testbed experiments for various cryptographic primitives have been performed for measuring the execution time. Finally, a detailed comparative study on communication and computational overheads, security and functionality features reveals that the proposed ACPBS-IoT provides superior security, more functionality features, and better or comparable overheads than other existing competing access control schemes.

36 citations
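The testbed timing of cryptographic primitives can be approximated with a simple microbenchmark like the one below, which times SHA-256 and HMAC-SHA-256 from the Python standard library; the scheme itself relies on further primitives (e.g., elliptic-curve operations) not shown here.

```python
# Sketch of a microbenchmark for cryptographic primitives on a testbed node.
import hashlib, hmac, os, time

msg = os.urandom(64)
key = os.urandom(32)

def bench(fn, runs=10000):
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs * 1e6   # microseconds per op

print(f"SHA-256: {bench(lambda: hashlib.sha256(msg).digest()):.2f} us")
print(f"HMAC-SHA-256: {bench(lambda: hmac.new(key, msg, hashlib.sha256).digest()):.2f} us")
```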


Proceedings ArticleDOI
10 May 2021
TL;DR: In this article, an Automated Machine Learning (AutoML) framework is proposed for jointly configuring the service and wireless network parameters, towards maximizing the analytics accuracy subject to minimum frame rate constraints.
Abstract: Video analytics constitute a core component of many wireless services that require processing of voluminous data streams emanating from handheld devices. Multi-Access Edge Computing (MEC) is a promising solution for supporting such resource-hungry services, but there is a plethora of configuration parameters affecting their performance in an unknown and possibly time-varying fashion. To overcome this obstacle, we propose an Automated Machine Learning (AutoML) framework for jointly configuring the service and wireless network parameters, towards maximizing the analytics’ accuracy subject to minimum frame rate constraints. Our experiments with a bespoke prototype reveal the volatile and system/data-dependent performance of the service, and motivate the development of a Bayesian online learning algorithm which optimizes on-the-fly the service performance. We prove that our solution is guaranteed to find a near-optimal configuration using safe exploration, i.e., without ever violating the set frame rate thresholds. We use our testbed to further evaluate this AutoML framework in a variety of scenarios, using real datasets.

35 citations
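A minimal sketch of the safe-exploration idea, assuming a Gaussian-process surrogate over a single one-dimensional configuration knob: only configurations whose lower confidence bound on frame rate clears the threshold are treated as safe candidates. The kernel, noise level, and data are illustrative, not the paper's AutoML framework.

```python
# Safe candidate filtering with a GP surrogate on synthetic observations.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(8, 1))                   # configurations tried so far
fps = 20 + 15 * X[:, 0] + rng.normal(0, 0.5, 8)      # observed frame rate
fps_gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=0.25).fit(X, fps)

cand = np.linspace(0, 1, 101).reshape(-1, 1)
mu, sd = fps_gp.predict(cand, return_std=True)
safe = mu - 2 * sd >= 25.0                           # frame-rate threshold (fps)
print(f"{safe.sum()} of {len(cand)} candidate configs considered safe")
```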


Journal ArticleDOI
01 May 2021-Agronomy
TL;DR: This study implemented a custom-based sensor node, gateway, and handheld device for real-time transmission of agricultural data to a cloud server and concludes that hybrid range-based localization algorithms are more reliable and scalable for deployment in the agricultural field.
Abstract: The Internet of Things (IoT) is transforming all applications into real-time monitoring systems. Due to the advancement in sensor technology and communication protocols, the implementation of the IoT is occurring rapidly. In agriculture, the IoT is encouraging implementation of real-time monitoring of crop fields from any remote location. However, there are several agricultural challenges regarding low power use and long-range transmission for effective implementation of the IoT. These challenges are overcome by integrating a long-range (LoRa) communication modem with customized, low-power hardware for transmitting agricultural field data to a cloud server. In this study, we implemented a custom-based sensor node, gateway, and handheld device for real-time transmission of agricultural data to a cloud server. Moreover, we calibrated certain LoRa field parameters, such as link budget, spreading factor, and receiver sensitivity, to extract the correlation of these parameters on a custom-built LoRa testbed in MATLAB. An energy harvesting mechanism is also presented in this article for analyzing the lifetime of the sensor node. Furthermore, this article addresses the significance and distinct kinds of localization algorithms. Based on the MATLAB simulation, we conclude that hybrid range-based localization algorithms are more reliable and scalable for deployment in the agricultural field. Finally, a real-time experiment was conducted to analyze the performance of custom sensor nodes, gateway, and handheld devices.

35 citations
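For readers unfamiliar with the LoRa parameters mentioned above, the back-of-the-envelope calculation below shows how spreading factor drives receiver sensitivity and hence link budget; the SNR limits and noise figure are typical transceiver values, not numbers from this study.

```python
# LoRa link-budget sketch: sensitivity = thermal noise floor + noise figure
# + SNR limit of the spreading factor; link budget = Tx power - sensitivity.
import math

BW_HZ = 125_000
NOISE_FIGURE_DB = 6
SNR_LIMIT_DB = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}
TX_POWER_DBM = 14

for sf, snr in SNR_LIMIT_DB.items():
    sensitivity = -174 + 10 * math.log10(BW_HZ) + NOISE_FIGURE_DB + snr
    link_budget = TX_POWER_DBM - sensitivity
    print(f"SF{sf}: sensitivity {sensitivity:.1f} dBm, link budget {link_budget:.1f} dB")
```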


Journal ArticleDOI
TL;DR: The hardware implementation of wireless SHARP (w-SHARP), a promising wireless technology for real-time industrial applications that follows the principles of time-sensitive networking and provides time synchronization, time-aware scheduling with bounded latency and high reliability is presented.
Abstract: Real-time industrial applications in the scope of Industry 4.0 present significant challenges from the communication perspective: low latency, ultra-reliability, and determinism. Given that wireless networks provide a significant cost reduction, lower deployment time, and free movement of the wireless nodes, wireless solutions have attracted industry attention. However, industrial networks are mostly built by wired means because state-of-the-art wireless networks cannot cope with the requirements of industrial applications. In this article, we present the hardware implementation of wireless SHARP (w-SHARP), a promising wireless technology for real-time industrial applications. w-SHARP follows the principles of time-sensitive networking and provides time synchronization, time-aware scheduling with bounded latency, and high reliability. The implementation has been carried out on a field-programmable gate array-based software-defined radio platform. We demonstrate, through a hardware testbed, that w-SHARP is able to provide ultra-low control cycles, low latency, and high reliability. This implementation may open new perspectives in the implementation of high-performance industrial wireless networks, as both PHY and MAC layers can now be optimized for specific industrial applications.

32 citations


Journal ArticleDOI
TL;DR: In this article, an actor-critic-based distributed application placement technique, working based on the IMPORTance weighted Actor-Learner Architectures (IMPALA), is proposed for efficient distributed experience trajectory generation that significantly reduces exploration costs of agents.
Abstract: Fog/Edge computing is a novel computing paradigm supporting resource-constrained Internet of Things (IoT) devices by the placement of their tasks on edge and/or cloud servers. Recently, several Deep Reinforcement Learning (DRL)-based placement techniques have been proposed in fog/edge computing environments, which are only suitable for centralized setups. The training of well-performing DRL agents requires a large amount of training data, while obtaining training data is costly. Hence, these centralized DRL-based techniques lack generalizability and quick adaptability, thus failing to efficiently tackle application placement problems. Moreover, many IoT applications are modeled as Directed Acyclic Graphs (DAGs) with diverse topologies. Satisfying the dependencies of DAG-based IoT applications incurs additional constraints and increases the complexity of the placement problem. To overcome these challenges, we propose an actor-critic-based distributed application placement technique, working based on the IMPortance weighted Actor-Learner Architecture (IMPALA). IMPALA is known for efficient distributed experience trajectory generation that significantly reduces the exploration costs of agents. Besides, it uses an adaptive off-policy correction method for faster convergence to optimal solutions. Our technique uses recurrent layers to capture the temporal behaviors of input data and a replay buffer to improve sample efficiency. The performance results, obtained from simulation and testbed experiments, demonstrate that our technique significantly reduces the execution cost of IoT applications, by up to 30% compared to its counterparts.

Journal ArticleDOI
TL;DR: A novel simulation-driven platform named E-ALPHA (Edge-based Assisted Living Platform for Home cAre), which supports both the Edge and Cloud Computing paradigms for developing innovative AAL services in scenarios of different scales, is proposed.
Abstract: The ever-growing aging of the population has emphasized the importance of in-home AAL (Ambient Assisted Living) services for monitoring and improving its well-being and health, especially in the context of care facilities (retirement villages, clinics, senior neighborhoods, etc.). The paper proposes a novel simulation-driven platform named E-ALPHA (Edge-based Assisted Living Platform for Home cAre) which supports both the Edge and Cloud Computing paradigms to develop innovative AAL services in scenarios of different scales. E-ALPHA flexibly combines Edge, Cloud or Edge/Cloud deployments, supports different communication protocols, and fosters interoperability with other IoT platforms. Moreover, the simulation-based design helps in preliminarily assessing (i) the expected performance of the service to be deployed according to the infrastructural characteristics of each specific small, medium and large scenario; and (ii) the most appropriate application/platform configuration for a real deployment (kind and number of involved devices, Edge- or Cloud-based deployment, required connectivity type, etc.). In this direction, two different use cases modeled according to realistic input (coming from past experience involving a real testbed) are shown in order to demonstrate the potential of the proposed simulation-driven AAL platform.

Journal ArticleDOI
TL;DR: A novel evaluation testbed which, by building on the outcomes of many separate works reported in the literature, aims to support a comprehensive analysis of the considered design space and an experimental protocol for collecting objective and subjective measures is proposed.
Abstract: A common operation performed in Virtual Reality (VR) environments is locomotion. Although real walking can represent a natural and intuitive way to manage displacements in such environments, its use is generally limited by the size of the area tracked by the VR system (typically, the size of a room) or requires expensive technologies to cover particularly extended settings. A number of approaches have been proposed to enable effective explorations in VR, each characterized by different hardware requirements and costs, and capable of providing different levels of usability and performance. However, the lack of a well-defined methodology for assessing and comparing available approaches makes it difficult to identify, among the various alternatives, the best solutions for selected application domains. To deal with this issue, this article introduces a novel evaluation testbed which, by building on the outcomes of many separate works reported in the literature, aims to support a comprehensive analysis of the considered design space. An experimental protocol for collecting objective and subjective measures is proposed, together with a scoring system able to rank locomotion approaches based on a weighted set of requirements. Testbed usage is illustrated in a use case requiring the selection of the technique to adopt in a given application scenario.


Journal ArticleDOI
TL;DR: A comprehensive review of the advancement of CP-SGs and their corresponding testbeds, including diverse testing paradigms, is performed, broadly discussing CP-SG testbed architectures along with the associated functions and main vulnerabilities.
Abstract: The integration of improved control techniques with advanced information technologies enables the rapid development of smart grids. The necessity of having an efficient, reliable, and flexible communication infrastructure is achieved by enabling real-time data exchange between numerous intelligent and traditional electrical grid elements. The performance and efficiency of the power grid are enhanced with the incorporation of communication networks, intelligent automation, advanced sensors, and information technologies. Although smart grid technologies bring about valuable economic, social, and environmental benefits, testing the combination of heterogeneous and co-existing Cyber-Physical-Smart Grids (CP-SGs) with conventional technologies presents many challenges. The examination of both hardware and software components of the Smart Grid (SG) system is essential prior to the deployment in real-time systems. This can take place by developing a prototype to mimic the real operational circumstances with adequate configurations and precision. Therefore, it is essential to summarize state-of-the-art technologies of industrial control system testbeds and evaluate new technologies and vulnerabilities with the motivation of stimulating discoveries and designs. In this paper, a comprehensive review of the advancement of CP-SGs with their corresponding testbeds including diverse testing paradigms has been performed. In particular, we broadly discuss CP-SG testbed architectures along with the associated functions and main vulnerabilities. The testbed requirements, constraints, and applications are also discussed. Finally, the trends and future research directions are highlighted and specified.

Journal ArticleDOI
TL;DR: In this paper, the authors present a dataset to support researchers in the validation process of solutions such as Intrusion Detection Systems (IDS) based on artificial intelligence and machine learning techniques for the detection and categorization of threats in Cyber Physical Systems (CPS).
Abstract: This paper presents a dataset to support researchers in the validation process of solutions such as Intrusion Detection Systems (IDS) based on artificial intelligence and machine learning techniques for the detection and categorization of threats in Cyber-Physical Systems (CPS). To this end, data were acquired from a hardware-in-the-loop Water Distribution Testbed (WDT) which emulates water flowing between eight tanks via solenoid valves, pumps, and pressure and flow sensors. The testbed is composed of a real subsystem that is virtually connected to a simulated one. The proposed dataset encompasses both physical and network data in order to highlight the consequences of attacks on the physical process as well as on network traffic behaviour. Simulation data are organized in four different acquisitions, for a total duration of 2 hours, considering a normal scenario and multiple anomalies due to cyber and physical attacks.

Journal ArticleDOI
17 Feb 2021-Sensors
TL;DR: The present work defines a novel framework for Cloud Gaming performance evaluation that evaluates the Cloud Gaming approach for different transport networks and scenarios, automating the acquisition of the gaming metrics and identifies the main parameters involved in its performance.
Abstract: Cloud Gaming is a cutting-edge paradigm in video game provision where the graphics rendering and logic are computed in the cloud. This allows users' thin-client systems with much more limited capabilities to offer an experience comparable with traditional local and online gaming, but with reduced hardware requirements. In contrast, this approach stresses the communication networks between the client and the cloud. In this context, it is necessary to know how to configure the network in order to provide service with the best quality. To that end, the present work defines a novel framework for Cloud Gaming performance evaluation. This system is implemented in a real testbed and evaluates the Cloud Gaming approach for different transport networks (Ethernet, WiFi, and LTE (Long Term Evolution)) and scenarios, automating the acquisition of the gaming metrics. From this, the impact on the overall gaming experience is analyzed, identifying the main parameters involved in its performance. Hence, future lines for Cloud Gaming QoE-based (Quality of Experience) optimization and network configuration are established for new-generation networks, such as 4G and 5G (Fourth and Fifth Generation of Mobile Networks).

Journal ArticleDOI
TL;DR: FogMon is described, a C++ distributed monitoring prototype targeting Fog computing infrastructures that features a self-organising peer-to-peer topology with self-restructuring mechanisms, and differential monitoring updates, which ensure scalability, fault-tolerance and low communication overhead.
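The differential-update idea can be illustrated with a few lines: a node reports only the metrics that drifted beyond a tolerance since its last report. Metric names and thresholds below are invented for the sketch, not taken from FogMon.

```python
# Illustrative differential monitoring update: send only metrics whose change
# since the last report exceeds a per-metric tolerance.
last_sent = {"cpu": 0.40, "mem": 0.55, "rtt_ms": 12.0}
current   = {"cpu": 0.42, "mem": 0.74, "rtt_ms": 12.3}
tolerance = {"cpu": 0.05, "mem": 0.05, "rtt_ms": 2.0}

delta = {k: v for k, v in current.items()
         if abs(v - last_sent[k]) > tolerance[k]}
if delta:
    print("send differential update:", delta)
    last_sent.update(delta)
else:
    print("nothing to send")
```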

Journal ArticleDOI
TL;DR: An extended review is performed regarding the technologies currently used for the implementation of Automatic Weather Stations, and the usage of new emerging technologies such as the Internet of Things, Edge Computing, Deep Learning, LPWAN, etc. in the implementation of future AWS-based observation systems is presented.
Abstract: Automatic Weather Stations (AWS) are extensively used for gathering meteorological and climatic data. The World Meteorological Organization (WMO) provides publications with guidelines for the implementation, installation, and usage of these stations. Nowadays, in the new era of the Internet of Things, there is an ever-increasing necessity for the implementation of automatic observing systems that will provide scientists with the real-time data needed to design and apply proper environmental policy. In this paper, an extended review is performed regarding the technologies currently used for the implementation of Automatic Weather Stations. Furthermore, we also present the usage of new emerging technologies such as the Internet of Things, Edge Computing, Deep Learning, LPWAN, etc. in the implementation of future AWS-based observation systems. Finally, we present a case study and results from a testbed AWS (project AgroComp) developed by our research team. The results include test measurements from low-cost sensors installed on the unit and predictions provided by Deep Learning algorithms running locally.

Journal ArticleDOI
TL;DR: This article investigates the problem of coordinated path following for fixed-wing unmanned aerial vehicles (UAVs) with speed constraints in the two-dimensional plane with a proposed hybrid control law and develops a hardware-in-the-loop simulation testbed of the multi-UAV system.
Abstract: This article investigates the problem of coordinated path following for fixed-wing unmanned aerial vehicles (UAVs) with speed constraints in the two-dimensional plane. The objective is to steer a fleet of UAVs along the path(s) while achieving the desired sequenced inter-UAV arc distance. In contrast to the previous coordinated path-following studies, we are able through our proposed hybrid control law to deal with the forward speed and the angular speed constraints of fixed-wing UAVs. More specifically, the hybrid control law makes all the UAVs work at two different levels: 1) those UAVs whose path-following errors are within an invariant set (i.e., the designed coordination set) work at the coordination level and 2) the other UAVs work at the single-agent level. At the coordination level, we prove that even with speed constraints, the proposed control law can make sure the path-following errors reduce to zero, while the inter-UAV arc distances converge to the desired value. At the single-agent level, analysis for the path-following error entering the coordination set is provided. We develop a hardware-in-the-loop simulation testbed of the multi-UAV system by using actual autopilots and the X-Plane simulator. The effectiveness of the proposed approach is corroborated with both numerical simulation and the testbed.

Journal ArticleDOI
TL;DR: In this article, a dynamic radio access network slicing resource sharing method aimed to guarantee optimal service level agreements through the monitoring of each slice tenant's key performance indicators is presented, and the solution is validated using a testbed based on the main 5G functionalities.
Abstract: Emerging 5G systems will need to seamlessly guarantee novel types of services in a multi-domain ecosystem. New methodologies of network and infrastructure sharing facilitate the cooperation among the operators, exploiting the core and access sections of the system architecture. Network slicing (NS) is the operators' best technique for building and managing a network. Without NS, the 5G requirements in terms of flexibility, optimal resource utilization, and investment returns cannot materialize. Before slicing is commercially available, different sections of the 5G architecture should be modified to include NS. In this work, we present a novel dynamic radio access network slicing resource sharing method aimed to guarantee optimal service level agreements through the monitoring of each slice tenant's key performance indicators. The experiments are conducted following the 3GPP specifications, and the solution is validated using a testbed based on the main 5G functionalities.

Journal ArticleDOI
TL;DR: A comprehensive simulation platform for conventional, connected and automated driving from a transportation cyber–physical system perspective, which tightly combines the core components of V2X communication, traffic networks, and autonomous/conventional vehicle model is designed.
Abstract: A comprehensive assessment of connected and automated driving is imperative before its large-scale deployment in reality, which can be economically and effectively implemented via a credible simulation platform. Nonetheless, the key components of traffic dynamics, vehicle modeling, and traffic environment are oversimplified in existing simulators. Current traffic simulators normally simplify the function of connected and autonomous vehicles by proposing incremental improvements to the conventional traffic flow modeling methods, which cannot reflect the characteristics of the realistic connected and autonomous vehicles. On the other hand, typical autonomous vehicle simulators only focus on individual function verification in some specific traffic scenarios, omitting the network-level evaluation by integrating both large-scale traffic networks and vehicle-to-anything (V2X) communication. This paper designs a comprehensive simulation platform for conventional, connected and automated driving from a transportation cyber–physical system perspective, which tightly combines the core components of V2X communication, traffic networks, and autonomous/conventional vehicle model. Specifically, three popular open-source simulators SUMO, Omnet++, and Webots are integrated and connected via the traffic control interface, and the whole simulation platform will be deployed in a Client/Server model. As the demonstration, two typical applications, traffic flow optimization and vehicle eco-driving, are implemented in the simulation platform. The proposed platform provides an ideal and credible testbed to explore the potential social/economic impact of connected and automated driving from the individual level to the large-scale network level.
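On the traffic side, such platforms typically drive SUMO through its TraCI interface; the hedged sketch below steps a simulation and overrides the speed of (hypothetically named) connected vehicles, standing in for commands that would arrive from the network and vehicle simulators. It assumes a local SUMO installation and a scenario file at the placeholder path.

```python
# Hedged sketch of the traffic side of a co-simulation loop via TraCI.
import traci

traci.start(["sumo", "-c", "scenario/intersection.sumocfg"])  # placeholder config path
try:
    for step in range(300):
        traci.simulationStep()
        for vid in traci.vehicle.getIDList():
            if vid.startswith("cav_"):                # hypothetical CAV naming scheme
                traci.vehicle.setSpeed(vid, 8.0)      # eco-driving advice (m/s)
finally:
    traci.close()
```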

Journal ArticleDOI
TL;DR: In this article, a real-time testbed for cyber-physical industrial control systems is proposed, where the Tennessee Eastman process is simulated in realtime on a PC and closed-loop controllers are implemented on the Siemens PLCs.
Abstract: Due to the recent increase in the deployment of Cyber-Physical Industrial Control Systems in different critical infrastructures, addressing the cyber-security challenges of these systems is vital for assuring their reliability and secure operation in the presence of malicious cyber attacks. Towards this end, developing a testbed to generate real-time datasets for critical infrastructure that would be utilized for the validation of real-time attack detection algorithms is highly needed. This paper investigates and proposes the design and implementation of a cyber-physical industrial control system testbed where the Tennessee Eastman process is simulated in real-time on a PC and the closed-loop controllers are implemented on Siemens PLCs. False data injection cyber attacks are injected into the developed testbed through a man-in-the-middle structure where malicious hackers can modify, in real time, the sensor measurements that are sent to the PLCs. Furthermore, various cyber attack detection algorithms are developed and implemented in real-time on the testbed, and their performance and capabilities are compared and evaluated.
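A generic residual-based detector of the kind evaluated on such testbeds can be sketched as a one-sided CUSUM on the gap between measured and model-predicted sensor values; the signal, drift, and threshold below are illustrative, not the paper's algorithms.

```python
# One-sided CUSUM on the residual between measured and expected sensor values;
# an accumulating bias (simulated false data injection) raises an alarm.
import numpy as np

rng = np.random.default_rng(2)
expected = 50.0                                     # model-predicted level
measured = expected + rng.normal(0, 0.5, 200)
measured[120:] += 2.0                               # injected false data bias

drift, threshold, s = 0.25, 5.0, 0.0
for k, y in enumerate(measured):
    s = max(0.0, s + (y - expected) - drift)        # one-sided CUSUM update
    if s > threshold:
        print(f"alarm at sample {k}")
        break
```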

Journal ArticleDOI
TL;DR: An efficient algorithm that adaptively adjusts batch size with scaled learning rate for heterogeneous devices to reduce the waiting time and save battery life is proposed.
Abstract: The emerging Federated Learning (FL) enables IoT devices to collaboratively learn a shared model based on their local datasets. However, end-device heterogeneity will magnify the inherent synchronization barrier issue of FL and result in non-negligible waiting times when local models are trained with an identical batch size. Moreover, the useless waiting time will further lead to a great strain on the devices' limited battery life. Herein, we aim to alleviate the negative impact of the synchronization barrier through adaptive batch sizes during model training. When using different batch sizes, the stability and convergence of the global model should be enforced by assigning appropriate learning rates to different devices. Therefore, we first study the relationship between batch size and learning rate, and formulate a scaling rule to guide the setting of the learning rate in terms of batch size. Then we theoretically analyze the convergence rate of the global model and obtain a convergence upper bound. On these bases, we propose an efficient algorithm that adaptively adjusts the batch size with a scaled learning rate for heterogeneous devices to reduce the waiting time and save battery life. We conduct extensive simulations and testbed experiments, and the experimental results demonstrate the effectiveness of our method.
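The baseline intuition behind such a scaling rule is the classic linear relationship between batch size and learning rate, sketched below; the paper derives its own rule and convergence bound, so this is only the starting point, not their result.

```python
# Classic linear-scaling heuristic: scale the learning rate by the same factor
# as the batch size relative to a reference pair (values are placeholders).
BASE_BATCH, BASE_LR = 32, 0.01

def scaled_lr(batch_size, base_batch=BASE_BATCH, base_lr=BASE_LR):
    return base_lr * batch_size / base_batch

for bs in (16, 32, 64, 128):     # heterogeneous devices pick different batch sizes
    print(f"batch {bs:>3} -> lr {scaled_lr(bs):.4f}")
```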

Journal ArticleDOI
30 Jan 2021-Sensors
TL;DR: In this paper, the suitability of both warm and cold start deployment modes for Internet of Things (IoT) applications is explored, considering a low-resource testbed comparable to an edge node.
Abstract: Serverless computing, especially implemented through Function-as-a-Service (FaaS) platforms, has recently been gaining popularity as an application deployment model in which functions are automatically instantiated when called and scaled when needed. When a warm start deployment mode is used, the FaaS platform gives users the perception of constantly available resources. Conversely, when a cold start mode is used, containers running the application's modules are automatically destroyed when the application has been executed. The latter can lead to considerable resource and cost savings. In this paper, we explore the suitability of both modes for deploying Internet of Things (IoT) applications, considering a low-resource testbed comparable to an edge node. We discuss the implementation and the experimental analysis of an IoT serverless platform that includes typical IoT service elements. A performance study in terms of resource consumption and latency is presented for the warm and cold start deployment modes, implemented using OpenFaaS, a well-known open-source FaaS framework that allows testing a cold start deployment with a precise inactivity time setup thanks to its flexibility. This experimental analysis allows us to evaluate the aptness of the two deployment modes under different operating conditions: exploiting the OpenFaaS minimum inactivity time setup, we find that the cold start mode can be convenient in order to save edge nodes' limited resources, but only if the data transmission period is significantly higher than the time needed to trigger container shutdown.
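The closing condition can be captured in a toy decision model: cold start pays a spin-up delay per invocation but only saves resources when the gap between IoT messages exceeds the platform's idle-shutdown window. The numbers below are placeholders, not measurements from the paper.

```python
# Toy warm/cold start trade-off model with placeholder parameters.
COLD_START_S = 1.2        # container spin-up latency per invocation
IDLE_SHUTDOWN_S = 60.0    # inactivity time before the container is destroyed

def prefer_cold_start(msg_period_s):
    # Cold start only pays off if the container would sit idle long enough
    # to actually be destroyed between messages.
    return msg_period_s > IDLE_SHUTDOWN_S

for period in (10, 60, 300, 3600):
    mode = "cold" if prefer_cold_start(period) else "warm"
    extra = COLD_START_S if mode == "cold" else 0.0
    print(f"message every {period:>4} s -> {mode} start (+{extra:.1f} s latency/call)")
```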

Journal ArticleDOI
TL;DR: A two-step credit-based Raft consensus mechanism, which can select the orderer nodes dynamically to achieve fast and reliable consensus based on historical behavior records stored in the ledger is designed.
Abstract: 5G-enabled Industrial Internet of Things (IIoT) deployment will bring more severe security and privacy challenges, which puts forward higher requirements for access control. Blockchain-based access control methods have become a promising security technology, but they still face high latency in the consensus process and weak adaptability to dynamic changes in the network environment. This paper proposes a novel access control framework for 5G-enabled IIoT based on a consortium blockchain. We design three types of chaincodes for the framework, named Policy Management Chaincode (PMC), Access Control Chaincode (ACC) and Credit Evaluation Chaincode (CEC). The PMC and ACC are deployed on the same data channel to implement the management of access control policies and the authorization of access. The CEC, deployed on another channel, is used to add behavior records collected from IIoT devices and calculate the credit value of an IIoT domain. Specifically, we design a two-step credit-based Raft consensus mechanism, which can select the orderer nodes dynamically to achieve fast and reliable consensus based on historical behavior records stored in the ledger. Furthermore, we implement the proposed framework on a real-world testbed and compare it with a framework based on Practical Byzantine Fault Tolerance (PBFT) consensus. The experiment results show that our proposed framework can maintain a lower consensus time, at the 100 ms level, and achieves 4 to 5 times the throughput with lower hardware resource consumption and communication overhead. Besides, our design also improves the security and robustness of the access control process.
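The credit-based selection step can be sketched as scoring peers from their historical behavior records and proposing the top-scoring ones as orderers for the next term; the record fields and weights below are invented for illustration, not the paper's scoring function.

```python
# Sketch of credit-based orderer selection from historical behavior records.
records = {
    "peer0": {"valid_tx": 980, "invalid_tx": 2, "timeouts": 1},
    "peer1": {"valid_tx": 890, "invalid_tx": 30, "timeouts": 9},
    "peer2": {"valid_tx": 940, "invalid_tx": 5, "timeouts": 3},
    "peer3": {"valid_tx": 700, "invalid_tx": 1, "timeouts": 0},
}

def credit(r):
    # Reward valid transactions, penalize invalid ones and timeouts.
    return r["valid_tx"] - 10 * r["invalid_tx"] - 20 * r["timeouts"]

orderers = sorted(records, key=lambda p: credit(records[p]), reverse=True)[:3]
print("orderer set for next term:", orderers)
```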

Journal ArticleDOI
TL;DR: A novel resource allocation scheme that optimizes the network energy efficiency of a C-RAN and proposes a provably-convergent iterative method to solve the resulting Weighted Sum-Rate maximization problem.
Abstract: Cloud Radio Access Network (C-RAN) is a key architecture for 5G cellular wireless network that aims at improving spectral and energy efficiency of the network by uniting traditional RAN with cloud computing. In this paper, a novel resource allocation scheme that optimizes the network energy efficiency of a C-RAN is designed. First, an energy consumption model that characterizes the computation energy of the BaseBand Unit (BBU) is introduced based on empirical results collected from a programmable C-RAN testbed. Then, an optimization problem is formulated to maximize the energy efficiency of the network, subject to practical constraints including Quality of Service (QoS) requirement, radio remote head transmit power, and fronthaul capacity limits. The formulated Network Energy Efficiency Maximization (NEEM) problem jointly considers the tradeoff among the network accumulated data rate, BBU power consumption, fronthaul cost, and beamforming design. To deal with the non-convexity and mixed-integer nature of the problem, we utilize successive convex approximation methods to transform the original problem into the equivalent Weighted Sum-Rate (WSR) maximization problem. We then propose a provably-convergent iterative method to solve the resulting WSR problem. Extensive simulation results coupled with real-time experiments on a small-scale C-RAN testbed show the effectiveness of our proposed resource allocation scheme and its advantages over existing approaches.


Journal ArticleDOI
13 Jul 2021
TL;DR: The Quantum Scientific Computing Open User Testbed (QSCOUT) at Sandia National Laboratories is a trapped-ion qubit system designed to evaluate the potential of near-term quantum hardware in scientific computing applications.
Abstract: The Quantum Scientific Computing Open User Testbed (QSCOUT) at Sandia National Laboratories is a trapped-ion qubit system designed to evaluate the potential of near-term quantum hardware in scientific computing applications for the U.S. Department of Energy and its Advanced Scientific Computing Research program. Similar to commercially available platforms, it offers quantum hardware that researchers can use to perform quantum algorithms, investigate noise properties unique to quantum systems, and test novel ideas that will be useful for larger and more powerful systems in the future. However, unlike most other quantum computing testbeds, the QSCOUT allows both quantum circuit and low-level pulse control access to study new modes of programming and optimization. The purpose of this article is to provide users and the general community with details of the QSCOUT hardware and its interface, enabling them to take maximum advantage of its capabilities.

Book ChapterDOI
01 Jan 2021
TL;DR: A routing protocol is suggested that manages routing tasks, with TLS security embedded in the OpenFlow packet header, for the case of heterogeneous distributed SDN controllers.
Abstract: With the rapid growth in the network, the demand for more flexible and secure network programmability is increasing. With the introduction of technologies like SDN, a new platform for communication evolved which is free from traditional network barriers like vendor lock-in, limited scope for innovation, buggy software, etc. No doubt, SDN has solved these problems, but SDN security issues are of major concern. SDN security is a key research area that is gaining popularity. Our paper discusses SDN security challenges. In our experiments, we tried to depict the network performance and utilisation of resources in the case of security attacks. With the aim of securing SDN in the case of heterogeneous distributed SDN controllers, we suggest a routing protocol that manages routing tasks, where TLS security has been embedded in the OpenFlow packet header. We simulated the system with our testbed of virtual machines, with different attacks scripted in Python.
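As a hedged illustration of adding TLS to a controller-switch style channel, the standard-library sketch below wraps a TCP connection on the OpenFlow port in TLS; the certificate paths and endpoint are placeholders, and production deployments would configure TLS on the switch and controller themselves rather than in a helper script.

```python
# Minimal TLS-wrapped connection sketch using Python's standard library only.
import socket, ssl

ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="ca.pem")  # placeholder CA
ctx.load_cert_chain(certfile="controller.crt", keyfile="controller.key")    # placeholder certs

with socket.create_connection(("203.0.113.10", 6653)) as raw:               # OpenFlow TLS port
    with ctx.wrap_socket(raw, server_hostname="sdn-switch.example") as tls:
        print("negotiated", tls.version(), "with cipher", tls.cipher()[0])
```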