
Showing papers on "Testbed published in 2020"


Proceedings Article
01 Jan 2020
TL;DR: The Chameleon testbed is a case study in adapting the cloud paradigm for computer science research, and it makes the case that utilizing mainstream technology in research testbeds can increase efficiency without compromising on functionality.
Abstract: The Chameleon testbed is a case study in adapting the cloud paradigm for computer science research. In this paper, we explain how this adaptation was achieved, evaluate it from the perspective of supporting the most experiments for the most users, and make a case that utilizing mainstream technology in research testbeds can increase efficiency without compromising on functionality. We also highlight the opportunity inherent in the shared digital artifacts generated by testbeds and give an overview of the efforts we’ve made to develop it to foster reproducibility.

108 citations


Journal ArticleDOI
TL;DR: An edge computing-based deep learning model is proposed, which utilizes edge computing to migrate the deep learning process from cloud servers to edge nodes, reducing data transmission demands in the IIoT network and mitigating network congestion.
Abstract: As a typical application of the Internet of Things (IoT), the Industrial IoT (IIoT) connects all the related IoT sensing and actuating devices ubiquitously so that the monitoring and control of numerous industrial systems can be realized. Deep learning, as one viable way to carry out big-data-driven modeling and analysis, can be integrated into IIoT systems to aid the automation and intelligence of IIoT systems. As deep learning requires large computation power, it is commonly deployed in cloud servers. Thus, the data collected by IoT devices must be transmitted to the cloud for the training process, contributing to network congestion and affecting the performance of the IoT network as well as the supported applications. To address this issue, in this article, we leverage the fog/edge computing paradigm and propose an edge computing-based deep learning model, which utilizes edge computing to migrate the deep learning process from cloud servers to edge nodes, reducing data transmission demands in the IIoT network and mitigating network congestion. Since edge nodes have limited computation ability compared to servers, we design a mechanism to optimize the deep learning model so that its requirements for computational power can be reduced. To evaluate our proposed solution, we design a testbed implemented in the Google cloud and deploy the proposed convolutional neural network (CNN) model, utilizing a real-world IIoT data set to evaluate our approach. Our experimental results confirm the effectiveness of our approach, which can not only reduce the network traffic overhead for IIoT but also maintain the classification accuracy in comparison with several baseline schemes.
(Footnote: Certain commercial equipment, instruments, or materials are identified in this article in order to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the materials or equipment identified are necessarily the best available for the purpose.)
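
The abstract does not detail the authors' model-optimization mechanism. As an illustrative sketch of one common way to cut a CNN's compute cost for edge nodes, the PyTorch snippet below swaps standard convolutions for depthwise-separable ones; all layer sizes and the 10-class output are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: the abstract does not specify the authors'
# optimization mechanism. Depthwise-separable convolutions are one common
# way to shrink a CNN's multiply-accumulate cost for edge nodes.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """A 3x3 depthwise conv followed by a 1x1 pointwise conv:
    far fewer multiplies than a standard 3x3 convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return torch.relu(self.pointwise(self.depthwise(x)))

# Hypothetical edge classifier for 32x32 single-channel sensor "images".
model = nn.Sequential(
    DepthwiseSeparableConv(1, 16),
    nn.MaxPool2d(2),
    DepthwiseSeparableConv(16, 32),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),           # 10 hypothetical fault classes
)

x = torch.randn(8, 1, 32, 32)    # a dummy batch of IIoT samples
print(model(x).shape)            # torch.Size([8, 10])
```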

108 citations


Proceedings ArticleDOI
16 Apr 2020
TL;DR: COSMOS' computing and network architectures, the critical building blocks, and its programmability at different layers are described, including software-defined radios, 28 GHz millimeter-wave phased array modules, optical transport network, core and edge cloud, and control and management software.
Abstract: This paper focuses on COSMOS - Cloud enhanced Open Software defined MObile wireless testbed for city-Scale deployment. The COSMOS testbed is being deployed in West Harlem (New York City) as part of the NSF Platforms for Advanced Wireless Research (PAWR) program. It will enable researchers to explore the technology "sweet spot" of ultra-high bandwidth and ultra-low latency in the most demanding real-world environment. We describe the testbed's architecture, the design and deployment challenges, and the experience gained during the design and pilot deployment. Specifically, we describe COSMOS' computing and network architectures, the critical building blocks, and its programmability at different layers. The building blocks include software-defined radios, 28 GHz millimeter-wave phased array modules, optical transport network, core and edge cloud, and control and management software. We describe COSMOS' deployment phases in a dense urban environment, the research areas that could be studied in the testbed, and specific example experiments. Finally, we discuss our experience with using COSMOS as an educational tool.

96 citations


Journal ArticleDOI
TL;DR: An online algorithm, named Dedas, which greedily schedules newly arriving tasks and considers whether to replace some existing tasks in order to satisfy the new deadlines, is proposed.
Abstract: In this article, we study online deadline-aware task dispatching and scheduling in edge computing. We jointly consider the management of networking and computing resources to meet the maximum number of deadlines. We propose an online algorithm, named Dedas, which greedily schedules newly arriving tasks and considers whether to replace some existing tasks in order to satisfy the new deadlines. We derive a non-trivial competitive ratio of Dedas theoretically, and our analysis is asymptotically tight. Besides, we implement a distributed approximation, D-Dedas, with better scalability and less than 10 percent performance loss compared with the centralized algorithm Dedas. We then build DeEdge, an edge computing testbed installed with typical latency-sensitive applications such as IoT sensor monitoring and face matching. We adopt a real-world data trace from the Google cluster for large-scale emulations. Extensive testbed experiments and simulations demonstrate that the deadline miss ratio of Dedas is stable for online tasks and is reduced by up to 60 percent compared with state-of-the-art methods. Moreover, Dedas performs well in minimizing the average task completion time.
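
The abstract describes Dedas only at a high level. The toy sketch below illustrates the greedy admit-or-replace idea for a single server with an earliest-deadline-first feasibility test; the real algorithm jointly manages networking and computing resources, and all names here are illustrative.

```python
# Minimal single-server sketch of the greedy admit-or-replace idea the
# abstract describes for Dedas. The real algorithm jointly handles
# networking and computing resources; this toy version only checks
# whether all queued tasks can still meet their deadlines (EDF test).
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    proc_time: float   # processing demand
    deadline: float    # absolute deadline

def feasible(queue, now=0.0):
    """All deadlines met when tasks run in earliest-deadline-first order?"""
    t = now
    for task in sorted(queue, key=lambda k: k.deadline):
        t += task.proc_time
        if t > task.deadline:
            return False
    return True

def admit(queue, new, now=0.0):
    """Greedily admit `new`; else try replacing one queued task."""
    if feasible(queue + [new], now):
        queue.append(new)
        return True
    for victim in list(queue):
        trial = [t for t in queue if t is not victim] + [new]
        if feasible(trial, now):
            queue.remove(victim)      # replacement makes deadlines satisfiable
            queue.append(new)
            return True
    return False                      # reject: no feasible placement

q = [Task("a", 2, 4), Task("b", 3, 9)]
print(admit(q, Task("c", 1, 2)))      # True: it fits before the others
```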

80 citations


Journal ArticleDOI
TL;DR: This paper proposes a flow-based policy framework based on two-tier virtualization for vehicular networks using SDNs and presents a proof of concept for leveraging machine learning-enabled resource classification and management through experimental evaluation on a special-purpose testbed established in a custom Mininet setup.
Abstract: Current cellular technology and vehicular networks cannot keep pace with the rapid growth of vehicular network demands. Resource management has become a complex and challenging objective for achieving the expected outcomes in a vehicular environment. The 5G cellular network promises to provide ultra-high-speed, reduced-delay, and reliable communications. The development of new technologies such as network function virtualization (NFV) and software defined networking (SDN) is critical to enabling 5G. The SDN-based 5G network can provide an excellent platform for autonomous vehicles because SDN offers open programmability and flexibility for incorporating new services. The SDN separation of control and data planes enables centralized and efficient management of resources in a very optimized and secure manner by having a global overview of the whole network. The SDN also provides flexibility in communication administration and resource management, which are of critical importance, in terms of safety, privacy, and security, given the ad-hoc nature of vehicular network infrastructures. In addition, it promises overall improved performance. In this paper, we propose a flow-based policy framework based on two-tier virtualization for vehicular networks using SDNs. Vehicle-to-vehicle (V2V) communication becomes possible with wireless virtualization, where different radio resources are allocated to V2V communications based on the flow classification, i.e., safety-related or non-safety flows, and the controller is responsible for managing the overall vehicular environment and V2X communications. The motivation behind this study is to implement a machine learning-enabled architecture to cater to the sophisticated demands of modern vehicular Internet infrastructures. The inclination towards robust communications in 5G-enabled networks has made it somewhat tricky to manage network slicing efficiently. This paper also presents a proof of concept for leveraging machine learning-enabled resource classification and management through experimental evaluation on a special-purpose testbed established in a custom Mininet setup. Furthermore, the results have been evaluated using Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and Deep Neural Network (DNN) models. In concluding the paper, it is shown that the LSTM outperformed the other classification techniques with promising results.
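
As a hedged sketch of the flow-classification step (the paper's feature set and architecture are not given in the abstract), the snippet below shows the shape of an LSTM classifier that maps a per-flow packet-feature sequence to a safety/non-safety label; every dimension is hypothetical.

```python
# Hedged sketch: the abstract compares LSTM, CNN, and DNN classifiers for
# flow classification (e.g., safety vs. non-safety flows). This minimal
# LSTM takes a sequence of per-packet features per flow; all dimensions
# here are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

class FlowLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # classify from the last time step

model = FlowLSTM()
flows = torch.randn(32, 20, 6)            # 32 flows, 20 packets, 6 features
logits = model(flows)
pred = logits.argmax(dim=1)               # 0 = non-safety, 1 = safety (toy)
print(pred.shape)                         # torch.Size([32])
```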

77 citations


Journal ArticleDOI
TL;DR: A data-driven IDS is designed by analyzing the link load behaviors of the Road Side Unit in the IoV against various attacks leading to irregular fluctuations of traffic flows, and a deep learning architecture based on the Convolutional Neural Network is designed to extract the features of link loads and detect intrusions aimed at RSUs.
Abstract: As an industrial application of the Internet of Things (IoT), the Internet of Vehicles (IoV) is one of the most crucial techniques for Intelligent Transportation Systems (ITS), a basic element of smart cities. The primary issue for the deployment of ITS based on IoV is security, for both users and infrastructures. An Intrusion Detection System (IDS) is important for protecting IoV users from malware-based attacks and for ensuring the security of users and infrastructures. In this paper, we design a data-driven IDS by analyzing the link load behaviors of the Road Side Unit (RSU) in the IoV against various attacks that lead to irregular fluctuations of traffic flows. A deep learning architecture based on the Convolutional Neural Network (CNN) is designed to extract the features of link loads and detect intrusions aimed at RSUs. The proposed architecture is composed of a traditional CNN and a fundamental error term, in view of the convergence of the backpropagation algorithm. Meanwhile, a theoretical analysis of the convergence is provided via a probabilistic representation of the proposed CNN-based deep architecture. We finally evaluate the accuracy of our method by implementing it on the testbed.

75 citations


Journal ArticleDOI
TL;DR: Information-Centric edge (ICedge), a general-purpose networking framework that streamlines service invocation and improves the reuse of redundant computation at the edge, is designed and prototyped, resulting in lower task completion times and efficient use of edge computing resources.
Abstract: In today's era of an explosion of Internet of Things (IoT) and end-user devices, and the data volumes emanating at the network's edge, the network should be more in-tune with meeting the needs of these demanding edge computing applications. To this end, we design and prototype Information-Centric edge (ICedge), a general-purpose networking framework that streamlines service invocation and improves the reuse of redundant computation at the edge. ICedge runs on top of named-data networking, a realization of the information-centric networking vision, and handles the "low-level" network communication on behalf of applications. ICedge features a fully distributed design that: 1) enables users to get seamlessly on-boarded onto an edge network; 2) delivers application-invoked tasks to edge nodes for execution in a timely manner; and 3) offers naming abstractions and network-based mechanisms to enable (partial or full) reuse of the results of already executed tasks among users, which we call "compute reuse," resulting in lower task completion times and efficient use of edge computing resources. Our simulation and testbed deployment results demonstrate that ICedge can achieve up to 50× lower task completion times by leveraging its network-based compute reuse mechanism compared to cases where reuse is not available.
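
A minimal sketch of the "compute reuse" idea: name a task by its service and input so that a result computed once can be returned to later requesters. The dictionary and naming convention below are illustrative stand-ins for ICedge's NDN-based name lookup.

```python
# Minimal sketch of "compute reuse": name each task by the service and
# its input so a result computed once can be returned to later
# requesters. A dict stands in for ICedge's NDN name lookup; the naming
# scheme below is illustrative, not the paper's.
import hashlib

result_store = {}   # task name -> cached result

def task_name(service: str, input_bytes: bytes) -> str:
    digest = hashlib.sha256(input_bytes).hexdigest()[:16]
    return f"/edge/{service}/{digest}"   # hypothetical naming convention

def invoke(service, fn, input_bytes):
    name = task_name(service, input_bytes)
    if name in result_store:             # full reuse: skip execution
        return result_store[name], True
    result = fn(input_bytes)             # execute at the edge node
    result_store[name] = result
    return result, False

res, reused = invoke("wordcount", lambda b: len(b.split()), b"a b c")
print(res, reused)                       # 3 False
res, reused = invoke("wordcount", lambda b: len(b.split()), b"a b c")
print(res, reused)                       # 3 True  (computation reused)
```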

68 citations


Journal ArticleDOI
TL;DR: The aim of DSAJ is to learn the dynamic and complex spectrum environment and obtain an optimal communication strategy through mathematical modeling and applications for both single-user and multi-user systems.
Abstract: Due to the openness of the transmission medium, it is necessary for radio systems to have anti-jamming abilities. Traditional anti-jamming methods such as sequence-based frequency hopping and direct sequence spread spectrum have shortcomings of low spectral efficiency and fixed communication patterns. With the development of software-defined radio, jamming devices are increasingly advanced and efficient. In this article, we propose a new paradigm for anti-jamming called DSAJ. With the help of cognitive radio and machine learning, the aim of DSAJ is to learn the dynamic and complex spectrum environment and obtain an optimal communication strategy. We first introduce the basic concept of anti-jamming communications and provide a brief summary of anti-jamming methods. Then, through a case study, mathematical modeling and applications of DSAJ are discussed for both single-user and multi-user systems. A real-life DSAJ testbed is described, and some potential research directions are discussed.
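
As a toy illustration of learning-based anti-jamming (not the paper's model), the sketch below uses a tabular Q-learning agent that learns to dodge a sweep jammer over five channels; the environment and all parameters are invented for illustration.

```python
# Hedged toy model of learning-based anti-jamming: a tabular Q-learning
# agent picks a channel each slot and learns to dodge a sweep jammer.
# The environment and all parameters are illustrative, not from the paper.
import random

N_CH, EPISODES = 5, 5000
alpha, gamma, eps = 0.1, 0.9, 0.1
Q = [[0.0] * N_CH for _ in range(N_CH)]   # state = jammer's last channel

jammer = 0
state = jammer
for _ in range(EPISODES):
    # epsilon-greedy channel choice
    if random.random() < eps:
        action = random.randrange(N_CH)
    else:
        action = max(range(N_CH), key=lambda a: Q[state][a])
    jammer = (jammer + 1) % N_CH          # sweep jammer moves one channel
    reward = 1.0 if action != jammer else -1.0
    nxt = jammer
    Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
    state = nxt

# After training, the agent avoids the jammer's next channel: from
# state 0 the jammer will occupy channel 1, so the best action is not 1.
print(max(range(N_CH), key=lambda a: Q[0][a]))
```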

66 citations


Posted Content
TL;DR: In this article, the authors provide a unified and taxonomic view of current GNN explainability methods, provide a standardized testbed for evaluations, and generate a set of benchmark graph datasets specifically for explainability.
Abstract: Deep learning methods are achieving ever-increasing performance on many artificial intelligence tasks. A major limitation of deep models is that they are not amenable to interpretability. This limitation can be circumvented by developing post hoc techniques to explain the predictions, giving rise to the area of explainability. Recently, explainability of deep models on images and texts has achieved significant progress. In the area of graph data, graph neural networks (GNNs) and their explainability are experiencing rapid developments. However, there is neither a unified treatment of GNN explainability methods nor a standard benchmark and testbed for evaluations. In this survey, we provide a unified and taxonomic view of current GNN explainability methods. Our unified and taxonomic treatment of this subject sheds light on the commonalities and differences of existing methods and sets the stage for further methodological developments. To facilitate evaluations, we generate a set of benchmark graph datasets specifically for GNN explainability. We summarize current datasets and metrics for evaluating GNN explainability. Altogether, this work provides a unified methodological treatment of GNN explainability and a standardized testbed for evaluations.

65 citations


Proceedings ArticleDOI
30 Jul 2020
TL;DR: A network-wide architectural design, OmniMon, which simultaneously achieves resource efficiency and full accuracy in flow-level telemetry for large-scale data centers and addresses consistency in network-wide epoch synchronization and accountability in error-free packet loss inference.
Abstract: Network telemetry is essential for administrators to monitor massive data traffic in a network-wide manner. Existing telemetry solutions often face the dilemma between resource efficiency (i.e., low CPU, memory, and bandwidth overhead) and full accuracy (i.e., error-free and holistic measurement). We break this dilemma via a network-wide architectural design OmniMon, which simultaneously achieves resource efficiency and full accuracy in flow-level telemetry for large-scale data centers. OmniMon carefully coordinates the collaboration among different types of entities in the whole network to execute telemetry operations, such that the resource constraints of each entity are satisfied without compromising full accuracy. It further addresses consistency in network-wide epoch synchronization and accountability in error-free packet loss inference. We prototype OmniMon in DPDK and P4. Testbed experiments on commodity servers and Tofino switches demonstrate the effectiveness of OmniMon over state-of-the-art telemetry designs.
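
A toy illustration of why epoch-synchronized, per-flow counters make loss inference error-free: with consistent epochs, a flow's loss is exactly the sender-side count minus the receiver-side count. The entity and flow names below are invented, not OmniMon's actual data structures.

```python
# Toy illustration of error-free loss inference from network-wide,
# epoch-synchronized per-flow counters: with consistent epochs, the loss
# of a flow is exactly (packets counted at the sender) minus (packets
# counted at the receiver). Entity and flow names are made up.
from collections import defaultdict

# counters[entity][(epoch, flow)] = packet count
counters = defaultdict(lambda: defaultdict(int))

def count_packet(entity, epoch, flow):
    counters[entity][(epoch, flow)] += 1

# Epoch 7: host h1 sends 5 packets of flow f1; switch sw sees only 3.
for _ in range(5):
    count_packet("h1", 7, "f1")
for _ in range(3):
    count_packet("sw", 7, "f1")

def inferred_loss(src, dst, epoch, flow):
    return counters[src][(epoch, flow)] - counters[dst][(epoch, flow)]

print(inferred_loss("h1", "sw", 7, "f1"))   # 2 packets lost in epoch 7
```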

57 citations


Journal ArticleDOI
TL;DR: A novel semi-supervised dual isolation forest-based (DIF) attack detection system that has been developed using normal process operation data only is demonstrated on scaled-down ICSs known as the Secure Water Treatment testbed and the Water Distribution testbed.
Abstract: The cybersecurity of industrial control systems (ICSs) is becoming increasingly critical given the current advancement in cyber activity and Internet of Things (IoT) technologies, and their direct impact on several aspects of life such as safety, economy, and security. This paper presents a novel semi-supervised dual isolation forest-based (DIF) attack detection system that has been developed using normal process operation data only and is demonstrated on two scaled-down ICSs: the Secure Water Treatment (SWaT) testbed and the Water Distribution (WADI) testbed. The proposed cyber-attack detection framework is composed of two isolation forest models that are trained independently, using the normalized raw data and a pre-processed version of the data obtained via Principal Component Analysis (PCA), respectively, to detect attacks by separating away anomalies. The performance of the proposed method is compared with previous works and demonstrates improvements in terms of attack detection capability, computational requirements, and applicability to high-dimensional systems.
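
A minimal sklearn sketch of the dual-isolation-forest idea as described: one forest on normalized raw features and one on a PCA projection, both trained on normal data only. The union alarm rule, the dimensions, and the synthetic data are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of the dual isolation forest (DIF) idea: two forests
# trained on normal-operation data only, one on normalized raw features
# and one on a PCA-reduced view; a sample is flagged if either forest
# isolates it. Dimensions and data are illustrative stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_normal = rng.normal(0, 1, size=(500, 10))      # stand-in for SWaT sensors

scaler = StandardScaler().fit(X_normal)
Xn = scaler.transform(X_normal)
pca = PCA(n_components=3).fit(Xn)

if_raw = IsolationForest(random_state=0).fit(Xn)
if_pca = IsolationForest(random_state=0).fit(pca.transform(Xn))

def detect(X):
    Xs = scaler.transform(X)
    raw_flag = if_raw.predict(Xs) == -1          # -1 means anomaly
    pca_flag = if_pca.predict(pca.transform(Xs)) == -1
    return raw_flag | pca_flag                   # alarm if either fires

X_attack = rng.normal(6, 1, size=(5, 10))        # crude injected anomaly
print(detect(X_attack))                          # expect mostly True
```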

Journal ArticleDOI
TL;DR: This research paper conducts a comprehensive analysis of previous studies on IoT device security with a focus on the various tools used to test IoT devices and the vulnerabilities that were found.

Journal ArticleDOI
TL;DR: This study introduces a modular factory testbed, emphasizing transformability and modularity under a distributed shop-floor control architecture, and presents the main technologies and methods from the four aspects of rapid factory transformation: self-layout recognition, rapid workstation and robot reprogramming, inter-layer information sharing, and configurable software for shop- floor monitoring.
Abstract: The recent manufacturing trend toward mass customization and further personalization of products requires factories to be smarter than ever before in order to: (1) quickly respond to customer requirements, (2) resiliently retool machinery and adjust operational parameters for unforeseen system failures and product quality problems, and (3) retrofit old systems with upcoming new technologies. Furthermore, product lifecycles are becoming shorter due to unbounded and unpredictable customer requirements, thereby requiring reconfigurable and versatile manufacturing systems that underpin the basic building blocks of smart factories. This study introduces a modular factory testbed, emphasizing transformability and modularity under a distributed shop-floor control architecture. The main technologies and methods, being developed and verified through the testbed, are presented from the four aspects of rapid factory transformation: self-layout recognition, rapid workstation and robot reprogramming, inter-layer information sharing, and configurable software for shop-floor monitoring.

Journal ArticleDOI
TL;DR: UbiFlow, the first software-defined IoT system for combined ubiquitous flow control and mobility management in urban heterogeneous networks, is presented; it adopts multiple controllers to divide the urban-scale SDN into different geographic partitions and achieve distributed control of IoT flows.
Abstract: The growth of Internet of Things (IoT) devices with multiple radio interfaces has resulted in a number of urban-scale deployments of IoT multinetworks, where heterogeneous wireless communication solutions coexist (e.g., WiFi, Bluetooth, Cellular). Managing the multinetworks for seamless IoT access and handover, especially in mobile environments, is a key challenge. Software-defined networking (SDN) is emerging as a promising paradigm for quick and easy configuration of network devices, but its application in urban-scale multinetworks requiring heterogeneous and frequent IoT access is not well studied. In this paper, we present UbiFlow, the first software-defined IoT system for combined ubiquitous flow control and mobility management in urban heterogeneous networks. UbiFlow adopts multiple controllers to divide urban-scale SDN into different geographic partitions (assigning one controller per partition) and achieve distributed control of IoT flows. A distributed hashing-based overlay structure is proposed to maintain network scalability and consistency. Based on this UbiFlow overlay structure, the relevant issues pertaining to mobility management such as scalable control, fault tolerance, and load balancing have been carefully examined and studied. The UbiFlow controller differentiates flow scheduling based on per-device requirements and whole-partition capabilities. Therefore, it can present a network status view and optimized selection of access points in multinetworks to satisfy IoT flow requests, while guaranteeing network performance for each partition. Simulation and realistic testbed experiments confirm that UbiFlow can successfully achieve scalable mobility management and robust flow scheduling in IoT multinetworks; e.g., 67.21 percent throughput improvement, 72.99 percent delay reduction, and 69.59 percent jitter improvement, compared with alternative SDN systems.
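
A small sketch of the distributed-hashing idea behind the overlay: controllers occupy points on a hash ring, and each device is served by the first controller clockwise from its hash. The plain ring (no virtual nodes) and the names below are illustrative, not UbiFlow's actual structure.

```python
# Small sketch of the distributed-hashing idea behind UbiFlow's overlay:
# controllers occupy points on a hash ring, and each device/flow is
# served by the first controller clockwise from its hash. The ring and
# names are illustrative, not UbiFlow's actual structure.
import bisect
import hashlib

def h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, controllers):
        self.ring = sorted((h(c), c) for c in controllers)

    def lookup(self, device_id: str) -> str:
        keys = [k for k, _ in self.ring]
        i = bisect.bisect(keys, h(device_id)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["ctrl-downtown", "ctrl-harbor", "ctrl-campus"])
for dev in ["sensor-17", "phone-42", "bus-ap-3"]:
    print(dev, "->", ring.lookup(dev))
# Adding or removing one controller remaps only the keys in its arc,
# which is what keeps the overlay scalable and consistent.
```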

Proceedings ArticleDOI
06 Jul 2020
TL;DR: A novel integrated sensing and communication system based on the 5G New Radio frame structure, using millimeter wave (mmWave) communication technology to guarantee low-latency, high-data-rate information sharing among vehicles.
Abstract: To avoid fatal collisions caused by sensor failures, sharing raw sensor information among autonomous driving vehicles is critically important to guarantee driving safety with an enhanced see-through ability. This paper proposes a novel integrated sensing and communication system based on the 5G New Radio frame structure, using millimeter wave (mmWave) communication technology to guarantee low-latency, high-data-rate information sharing among vehicles. Smart weighted-grid-search-based fast beam alignment and beam tracking algorithms are proposed and evaluated on the developed hardware testbed. Field test results show that the proposed algorithms achieve a stable data rate of 2.8 Gbps within 500 ms latency in a mobile vehicle communication scenario.
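
The abstract does not specify the smart weighted grid search itself, so the sketch below shows only the generic coarse-to-fine beam-alignment idea it builds on: probe a coarse angular grid, then refine around the best beam, which needs far fewer probes than an exhaustive sweep. The toy RSS model and step sizes are invented.

```python
# Hedged sketch of coarse-to-fine beam alignment (the paper's weighted
# grid search is not detailed in the abstract): probe a coarse angular
# grid, then iteratively refine around the strongest beam.
import math

def rss(angle_deg, target=37.0, beamwidth=10.0):
    """Toy received-signal-strength model peaked at the target angle."""
    return math.exp(-((angle_deg - target) / beamwidth) ** 2)

def align(lo=0.0, hi=180.0, coarse_step=15.0, refine_rounds=3):
    probes = 0
    step = coarse_step
    while True:
        grid = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
        probes += len(grid)
        best = max(grid, key=rss)
        if refine_rounds == 0:
            return best, probes
        lo, hi = best - step, best + step   # zoom into the best sector
        step /= 4
        refine_rounds -= 1

angle, probes = align()
print(f"aligned to {angle:.1f} deg with {probes} probes")
# An exhaustive 0.25-degree sweep would need ~720 probes instead.
```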

Journal ArticleDOI
23 Oct 2020
TL;DR: The development of a PDM platform based on the K11 planetary rover prototype is described, with a variety of injectable fault modes being investigated for electrical, mechanical, and power subsystems of the testbed, along with methods for data collection and processing.
Abstract: As fault diagnosis and prognosis systems in aerospace applications become more capable, the ability to utilize information supplied by them becomes increasingly important. While certain types of vehicle health data can be effectively processed and acted upon by crew or support personnel, others, due to their complexity or time constraints, require either automated or semi-automated reasoning. Prognostics-enabled Decision Making (PDM) is an emerging research area that aims to integrate prognostic health information and knowledge about the future operating conditions into the process of selecting subsequent actions for the system. The newly developed PDM algorithms require suitable software and hardware platforms for testing under realistic fault scenarios. The paper describes the development of such a platform, based on the K11 planetary rover prototype. A variety of injectable fault modes are being investigated for electrical, mechanical, and power subsystems of the testbed, along with methods for data collection and processing. In addition to the hardware platform, a software simulator with matching capabilities has been developed. The simulator allows for prototyping and initial validation of the algorithms prior to their deployment on the K11. The simulator is also available to the PDM algorithms to assist with the reasoning process. A reference set of diagnostic, prognostic, and decision making algorithms is also described, followed by an overview of the current test scenarios and the results of their execution on the simulator.

Journal ArticleDOI
TL;DR: A state-of-the-art "EdgeDrone" concept has been engineered in which the standard IoT message transfer strategy and the opportunistic ad-hoc network are amalgamated.

Journal ArticleDOI
TL;DR: In this paper, a lightweight process migration-based computational offloading framework is proposed for resource-intensive IoT application processing in mobile edge computing networks, which does not require application binaries at edge servers and thus seamlessly migrates native applications.
Abstract: Mobile devices have become an indispensable component of Internet of Things (IoT). However, these devices have resource constraints in processing capabilities, battery power, and storage space, thus hindering the execution of computation-intensive applications that often require broad bandwidth, stringent response time, long-battery life, and heavy-computing power. Mobile cloud computing and mobile edge computing (MEC) are emerging technologies that can meet the aforementioned requirements using offloading algorithms. In this article, we analyze the effect of platform-dependent native applications on computational offloading in edge networks and propose a lightweight process migration-based computational offloading framework. The proposed framework does not require application binaries at edge servers and thus seamlessly migrates native applications. The proposed framework is evaluated using an experimental testbed. Numerical results reveal that the proposed framework saves almost 44% of the execution time and 84% of the energy consumption. Hence, the proposed framework shows profound potential for resource-intensive IoT application processing in MEC.
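
A toy version of the underlying offloading decision: migrate a task to the edge when the estimated migration-plus-remote-execution time beats local execution. The cost model and all numbers below are hypothetical, not the paper's measured 44%/84% savings.

```python
# Toy offloading decision in the spirit of MEC frameworks: migrate a task
# to the edge when estimated (migration + remote execution) time beats
# local execution. The model and every number are hypothetical.
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float        # CPU cycles required
    state_bytes: float   # process state to migrate

DEVICE_HZ = 1.0e9        # mobile CPU speed
EDGE_HZ   = 8.0e9        # edge server speed
UPLINK_BPS = 20e6        # wireless uplink

def local_time(t: Task) -> float:
    return t.cycles / DEVICE_HZ

def offload_time(t: Task) -> float:
    migrate = t.state_bytes * 8 / UPLINK_BPS   # ship the process state
    return migrate + t.cycles / EDGE_HZ

def should_offload(t: Task) -> bool:
    return offload_time(t) < local_time(t)

heavy = Task(cycles=4e9, state_bytes=2e6)      # compute-bound task
light = Task(cycles=2e7, state_bytes=5e6)      # state-heavy, cheap task
print(should_offload(heavy))   # True: 0.8 s + 0.5 s < 4 s local
print(should_offload(light))   # False: migration time dominates
```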

Journal ArticleDOI
30 Nov 2020 - Sensors
TL;DR: This article details the development of a smart irrigation system able to cover large urban areas thanks to the use of Low-Power Wide-Area Network (LPWAN) sensor nodes based on LoRa and LoRaWAN and shows the radio planning tool accuracy, which allows for optimizing the sensor network topology and the overall performance of the network in terms of coverage, cost, and energy consumption.
Abstract: Climate change is driving new solutions to manage water more efficiently. Such solutions involve the development of smart irrigation systems where Internet of Things (IoT) nodes are deployed throughout large areas. In addition, in such areas, wireless communications can be difficult due to the presence of obstacles and metallic objects that block electromagnetic wave propagation totally or partially. This article details the development of a smart irrigation system able to cover large urban areas thanks to the use of Low-Power Wide-Area Network (LPWAN) sensor nodes based on LoRa and LoRaWAN. IoT nodes collect soil temperature/moisture and air temperature data, and control water supply autonomously, either by making use of fog computing gateways or by relying on remote commands sent from a cloud. Since the selection of IoT node and gateway locations is essential for good connectivity and reduced energy consumption, this article uses an in-house 3D-ray launching radio-planning tool to determine the best locations in real scenarios. Specifically, this paper provides details on the modeling of a university campus, which includes elements like buildings, roads, green areas, or vehicles. In such a scenario, simulations and empirical measurements were performed for two different testbeds: a LoRaWAN testbed that operates at 868 MHz and a testbed based on LoRa with 433 MHz transceivers. All the measurements agree with the simulation results, showing the impact of shadowing effects and material features (e.g., permittivity, conductivity) on the electromagnetic propagation of near-ground and underground LoRaWAN communications. Higher RF power levels are observed for 433 MHz due to the higher transmitted power level and the lower radio propagation losses, and even in the worst gateway location, the received power level is higher than the sensitivity threshold (-148 dBm). Regarding water consumption, the provided estimations indicate that the proposed smart irrigation system is able to reduce water consumption by roughly 23% just by considering weather forecasts. The obtained results provide useful guidelines for future smart irrigation developers and show the accuracy of the radio planning tool, which allows for optimizing the sensor network topology and the overall performance of the network in terms of coverage, cost, and energy consumption.
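
A small link-budget sketch of the coverage check mentioned in the abstract: compare received power against the cited -148 dBm sensitivity. A simple log-distance path-loss model stands in for the paper's 3D ray-launching tool; the antenna gains, transmit powers, distance, and path-loss exponent are illustrative assumptions.

```python
# Small link-budget sketch of the coverage check described: compare the
# received power to the LoRa sensitivity threshold (-148 dBm). A simple
# log-distance path-loss model stands in for the paper's 3D ray-launching
# tool; every parameter value below is illustrative.
import math

def path_loss_db(d_m, f_mhz, n=2.7):
    # FSPL(dB) = 20log10(d_km) + 20log10(f_MHz) + 32.44, evaluated at 1 m,
    # then extended with a log-distance decay of exponent n.
    fspl_1m = 20 * math.log10(f_mhz) + 20 * math.log10(1e-3) + 32.44
    return fspl_1m + 10 * n * math.log10(d_m)

def received_dbm(ptx_dbm, gtx_dbi, grx_dbi, d_m, f_mhz):
    return ptx_dbm + gtx_dbi + grx_dbi - path_loss_db(d_m, f_mhz)

SENSITIVITY = -148.0   # dBm, as cited in the abstract
for f, ptx in [(868.0, 14.0), (433.0, 20.0)]:   # hypothetical setups
    prx = received_dbm(ptx, 2.0, 2.0, d_m=800.0, f_mhz=f)
    print(f"{f} MHz: Prx = {prx:.1f} dBm ->",
          "link OK" if prx > SENSITIVITY else "below sensitivity")
# As in the abstract, the 433 MHz link shows a higher received power
# thanks to higher transmit power and lower propagation losses.
```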

Journal ArticleDOI
TL;DR: A novel DDoS attack mitigation scheme for SDN-based Internet Service Provider (ISP) networks is proposed for TCP-SYN and ICMP flood attacks, utilizing a machine learning approach, i.e., K-Nearest-Neighbor (KNN) and XGBoost.
Abstract: Keeping Internet users protected from cyberattacks and other threats is one of the most prominent security challenges for network operators nowadays. Among other critical threats, distributed denial-of-service (DDoS) has become one of the most widespread attacks on the Internet and is very challenging to mitigate appropriately, as DDoS attacks bring systems down through resource exhaustion. Software-defined networking (SDN) has recently emerged as a new networking technology offering unprecedented programmability that allows network operators to configure and manage their infrastructures dynamically. The flexible processing and centralized management of the SDN controller allow complex security algorithms and mitigation methods to be deployed flexibly. In this paper, we propose a novel DDoS attack mitigation scheme for SDN-based Internet Service Provider (ISP) networks targeting TCP-SYN and ICMP flood attacks, utilizing a machine learning approach, i.e., K-Nearest-Neighbor (KNN) and XGBoost. By deploying a testbed, we implement the proposed algorithms, evaluate their accuracy, and address the trade-off between accuracy and mitigation efficiency. Through extensive experiments, the results show that the algorithms can efficiently mitigate the attack by over 98.0% while benign traffic is not affected.
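
A minimal sklearn sketch of the classification step with K-Nearest-Neighbor (one of the two algorithms used); the two traffic features and the synthetic data are stand-ins, as the abstract does not list the paper's feature set.

```python
# Minimal sketch of the detection step: a K-Nearest-Neighbor classifier
# over per-source traffic features. The two features and synthetic data
# below are stand-ins; the paper's feature set is not given in the abstract.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# features: [SYN packets/s, ICMP packets/s]
benign = rng.normal([20, 5], [5, 2], size=(200, 2))
attack = rng.normal([900, 400], [100, 80], size=(200, 2))
X = np.vstack([benign, attack])
y = np.array([0] * 200 + [1] * 200)      # 0 = benign, 1 = DDoS

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn.predict([[25, 4], [850, 390]]))   # expect [0 1]
# In the paper's setup, the SDN controller would then install
# drop/rate-limit rules for sources classified as attackers.
```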

Journal ArticleDOI
TL;DR: The design and implementation of a comprehensive multi-camera-based testbed for 3-D tracking and control of UAVs, which employs smart cameras with field-programmable gate array modules to allow for real-time computation at a frame rate of 100 Hz is presented.
Abstract: Flight testbeds with multiple unmanned aerial vehicles (UAVs) are especially important to support research on multi-vehicle-related algorithms. The existing platforms usually lack a generic and complete solution allowing for software and hardware design. For such a purpose, this paper presents the design and implementation of a comprehensive multi-camera-based testbed for 3-D tracking and control of UAVs. First, the testbed software consists of a multi-camera system and a ground control system, which performs image processing, camera calibration, 3-D reconstruction, pose estimation, and motion control. In the multi-camera system, the positions and orientations of UAVs are first reconstructed by using epipolar geometric constraints and triangulation methods and then filtered by an extended Kalman filter (EKF). In the ground control system, a classical proportional–derivative (PD) controller is designed to receive the navigation data from the multi-camera system and then generates control commands to the target vehicles. Then, the testbed hardware employs smart cameras with field-programmable gate array (FPGA) modules to allow for real-time computation at a frame rate of 100 Hz. Lightweight quadcopter Parrot Bebop drones are chosen as the target UAVs and require no modification to the hardware. Artificial infrared reflective markers are asymmetrically mounted on target vehicles and observed by multiple infrared cameras located around the flight region. Finally, extensive experiments are performed to demonstrate that the proposed testbed is a comprehensive and complete platform with good scalability applicable for research on a variety of advanced guidance, navigation, and control algorithms.
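
A one-axis sketch of the classical PD position loop named in the abstract, running at the testbed's 100 Hz frame rate; the gains and the crude kinematic plant are toy values, not the paper's tuning.

```python
# One-axis sketch of the classical PD position controller the testbed
# uses: the multi-camera system supplies the pose estimate, and the
# controller outputs a velocity command. Gains and dynamics are toy values.
KP, KD = 1.2, 0.6            # hypothetical PD gains
DT = 0.01                    # 100 Hz, matching the camera frame rate

def pd_step(setpoint, pos, prev_err):
    err = setpoint - pos
    cmd = KP * err + KD * (err - prev_err) / DT   # velocity command
    return cmd, err

pos, prev_err = 0.0, 1.0     # start at origin, target x = 1.0 m
for _ in range(1000):
    cmd, prev_err = pd_step(1.0, pos, prev_err)
    pos += cmd * DT          # crude kinematic response to the command
print(round(pos, 3))         # ~1.0 after 10 s of simulated flight
```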

Journal ArticleDOI
TL;DR: The required enhancements in the 5G service-based architecture (SBA) to support UAV services, including UAV navigation and air traffic management, weather forecasting, and UAV connectivity management, are analyzed, while emphasizing the role of UAVs as core network equipment with radio and backhaul capabilities.
Abstract: This article provides an overview of enhanced network services, while emphasizing the role of UAVs as core network equipment with radio and backhaul capabilities. Initially, we elaborate the various deployment options, focusing on UAVs as airborne radio, backhaul and core network equipment, pointing out the benefits and limitations. We then analyze the required enhancements in the SBA to support UAV services including UAV navigation and air traffic management, weather forecasting and UAV connectivity management. The use of airborne UAVs network services is assessed via qualitative means, considering the impact on vehicular applications. Finally, an evaluation has been conducted via a testbed implementation, to explore the performance of UAVs as edge cloud nodes, hosting an ACS function responsible for the control and orchestration of a UAV fleet.

Journal ArticleDOI
TL;DR: An integrated approach to analyzing and designing the cyber security system for a given CPS, in which the physical threats are identified first to guide the risk assessment process, and a mathematical model of the physical system is derived using a hybrid automaton to enumerate potential hazardous states of the system.

Journal ArticleDOI
TL;DR: A new set of simulation models, organized in a testbed, is presented, providing researchers with a platform able to credibly represent the complexity of modern semiconductor manufacturing.
Abstract: We present a new set of simulation models, organized in a testbed. The aim of the testbed is to provide researchers with a platform able to credibly represent the complexity of modern semiconductor manufacturing. The testbed is open to public use and so far includes four models. A high-volume/low-mix model and a low-volume/high-mix model constitute the foundation of the testbed. Two additional models incorporate the complexity of engineering lots. We conclude this paper by presenting a case study demonstrating that the third and fourth models can be used to assess the performance of integrated dispatching strategies for production and engineering lots.

Journal ArticleDOI
TL;DR: A novel handover roaming mechanism for the Long Range Wide Area Network (LoRaWAN) protocol that relies on the trusted 5G network to perform IoT device authentication and key management, thereby extending the mobility and roaming capabilities of LoRaWAN to a global scale, is proposed.
Abstract: Despite the latest research efforts to foster mobility and roaming in heterogeneous Low Power Wide Area Networks (LP-WANs), handover roaming of Internet of Things (IoT) devices is not a success, mainly due to fragmentation, difficulties establishing trust across different network domains, and the lack of interoperability among different LP-WAN wireless protocols. To cope with this issue, this paper proposes a novel handover roaming mechanism for the Long Range Wide Area Network (LoRaWAN) protocol that relies on the trusted 5G network to perform IoT device authentication and key management, thereby extending the mobility and roaming capabilities of LoRaWAN to a global scale. The proposal enables interoperability between the 5G network and LoRaWAN, whereby multi-Radio Access Technology IoT (multi-RAT IoT) devices can exploit both technologies interchangeably, thereby fostering novel IoT mobility and roaming use cases for LP-WANs not experimented with so far. Two integration approaches for LoRaWAN and 5G have been proposed, either assuming 5G spectrum connectivity with standard 5G authentication or performing 5G authentication over the LoRaWAN network. The solution has been deployed, implemented, and validated in a real, integrated 5G-LoRaWAN testbed, showing its feasibility and security viability.

Journal ArticleDOI
TL;DR: A brain-like distributed control security (BLCS) architecture for F-RON in CPS is proposed by introducing a brain-like security (BLS) scheme to accomplish secure cross-domain control with tripartite controller verification in the scenario of decentralized F-RON for distributed computing and communications.
Abstract: The cyber-physical system (CPS) operates, controls, and coordinates physical systems integrated by a computing and communication core, as applied in Industry 4.0. To accommodate CPS services, fog radio and optical networks (F-RON) have become an important supporting cyber-physical infrastructure, taking advantage of both the inherent ubiquity of wireless technology and the large capacity of optical networks. However, cyber security is the biggest issue in the CPS scenario, as there is a trade-off between security control and privacy exposure in F-RON. To deal with this issue, we propose a brain-like distributed control security (BLCS) architecture for F-RON in CPS by introducing a brain-like security (BLS) scheme. BLCS can accomplish secure cross-domain control with tripartite controller verification in the scenario of decentralized F-RON for distributed computing and communications, without needing to disclose the private information of each domain against cyber-attacks. BLS utilizes partial information to perform control identification through a relation network and deep learning over a behavior library. The functional modules of the BLCS architecture are illustrated, including the various controllers and a brain-like knowledge base. The interworking procedures of the distributed control security modes based on BLS are described. The overall feasibility and efficiency of the architecture are experimentally verified on a software defined network testbed in terms of average mistrust rate, path provisioning latency, packet loss probability, and blocking probability. The emulation results are obtained and dissected based on the testbed.

Journal ArticleDOI
TL;DR: This work proposes an intent defined optical network (IDON) architecture for artificial intelligence-based automated operation and maintenance of optical networks driven by service objectives, introducing a self-adapted generation and optimization (SAGO) policy in a customized manner.
Abstract: Traditionally, the operation and maintenance of optical networks rely on the experience of engineers to configure network parameters, involving command-line interfaces, middleware scripting, and troubleshooting. However, with the emergence of new B5G applications, traditional configuration cannot meet the requirement of real-time automatic configuration. Operators need a new way to configure the underlying optical transport network without manual intervention. To cope with this issue, we propose an intent defined optical network (IDON) architecture for artificial intelligence-based automated operation and maintenance driven by service objectives, by introducing a self-adapted generation and optimization (SAGO) policy in a customized manner. The IDON platform has three key innovations: intent-oriented configuration translation, a self-adapted generation and optimization policy, and closed-loop intent guarantee operation. Focusing specifically on communication requirements, the IDON uses natural language processing to construct semantic graphs to understand, interact with, and create the required network configuration. Then, deep reinforcement learning (DRL) is utilized to find the composition policy that satisfies the intent through the dynamic integration of fine-grained policies. Finally, a deep neural evolutionary network (DNEN) is introduced to achieve the intent guarantee at the millisecond level. The feasibility and efficiency are verified on an enhanced SDN testbed. We conclude by discussing several related challenges and opportunities for unveiling a promising upcoming future of intent defined optical networks.

Journal ArticleDOI
TL;DR: Key capabilities of Arena are described by providing examples of published work that employed Arena for applications as diverse as synchronized MIMO transmission schemes, multi-hop ad hoc networking, multi-cell 5G networks, AI-powered Radio-Frequency fingerprinting, secure wireless communications, and spectrum sensing for cognitive radio.

Journal ArticleDOI
25 Sep 2020 - Sensors
TL;DR: An experimental 5G testbed has been designed integrating C-RAN and IoT networks and the proposed DELTA machine learning model implemented on a 3D multi-layered fingerprint radiomap has outperformed traditional algorithms such as Support Vector Machine (SVM) and K-Nearest Neighbor (KNN).
Abstract: In the near future, fifth-generation wireless technology is expected to be rolled out, offering low latency, high bandwidth, and multiple antennas deployed in a single access point. This ecosystem will help further enhance various location-based scenarios such as asset tracking in smart factories, precise smart management of hydroponic indoor vertical farms, and indoor way-finding in smart hospitals. Such a system will also integrate existing technologies like the Internet of Things (IoT), WiFi, and other network infrastructures. In this respect, 5G precise indoor localization using heterogeneous IoT technologies (Zigbee, Raspberry Pi, Arduino, BLE, etc.) is a challenging research area. In this work, an experimental 5G testbed has been designed integrating C-RAN and IoT networks. This testbed is used to improve both vertical and horizontal localization (3D localization) in a 5G IoT environment. To achieve this, we propose the DEep Learning-based co-operaTive Architecture (DELTA) machine learning model implemented on a 3D multi-layered fingerprint radiomap. DELTA begins by estimating the 2D location; the output is then recursively used to predict the 3D location of a mobile station. This approach will benefit use cases such as 3D indoor navigation in multi-floor smart factories or in large complex buildings. Finally, we observed that the proposed model outperformed traditional algorithms such as Support Vector Machine (SVM) and K-Nearest Neighbor (KNN).
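
As a hedged sketch of DELTA's two-stage idea (a 2D estimate fed back in to predict the third dimension), the snippet below chains two simple sklearn models on synthetic fingerprints; the paper's actual model is a deep architecture, and all data here is invented.

```python
# Hedged sketch of DELTA's two-stage idea: first estimate the 2D position
# from an RSSI fingerprint, then feed that estimate back in to predict the
# floor (the third dimension). Simple sklearn models and synthetic
# fingerprints stand in for the paper's deep architecture.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

rng = np.random.default_rng(2)
n, n_aps = 600, 8
xy = rng.uniform(0, 50, size=(n, 2))             # ground-truth 2D positions
floor = rng.integers(0, 3, size=n)               # 3 floors
# synthetic fingerprints: RSSI depends on position and floor
ap_xy = rng.uniform(0, 50, size=(n_aps, 2))
dist = np.linalg.norm(xy[:, None, :] - ap_xy[None, :, :], axis=2)
rssi = (-40 - 20 * np.log10(dist + 1)
        - 6 * floor[:, None] + rng.normal(0, 2, (n, n_aps)))

stage1 = KNeighborsRegressor(5).fit(rssi, xy)    # fingerprint -> (x, y)
xy_hat = stage1.predict(rssi)
stage2 = KNeighborsClassifier(5).fit(            # fingerprint + (x, y) -> floor
    np.hstack([rssi, xy_hat]), floor)

test = rssi[:5]
xy_pred = stage1.predict(test)
floor_pred = stage2.predict(np.hstack([test, xy_pred]))
print(xy_pred.round(1), floor_pred)
```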

Proceedings ArticleDOI
01 Jun 2020
TL;DR: This work introduces a novel testbed, called 5GIIK, that provides implementation, management, and orchestration of network slices across all network domains and different access technologies, and identifies design criteria that are a superset of the features present in other state-of-the-art testbeds.
Abstract: Network slicing aims to shape 5G as a flexible, scalable, and demand-oriented network. Research communities deploy small-scale and cost-efficient testbeds in order to evaluate network slicing functionalities. We introduce a novel testbed, called 5GIIK, that provides implementation, management, and orchestration of network slices across all network domains and different access technologies. Our methodology identifies design criteria that are a superset of the features present in other state-of-the-art testbeds and determines appropriate open-source tools for implementing them. 5GIIK is one of the most comprehensive testbeds because it provides additional features and capabilities such as slice provision dynamicity, real-time monitoring of VMs, and VNF onboarding to different VIMs. We illustrate the potential of the proposed testbed and present initial results.