
Showing papers on "Testbed published in 2018"


Journal ArticleDOI
TL;DR: CUIDATS is presented, an IoT hybrid monitoring system for health care environments which integrates RFID and WSN technologies in a single platform providing location, status, and tracking of patients and assets.

121 citations


Journal ArticleDOI
TL;DR: A comprehensive hybrid agent framework combining the foundation for intelligent physical agents (FIPA), IEC 61850, and data distribution service (DDS) standards is proposed, and it was shown that the developed agent platform is industrially applicable for actual smart grid field deployment.
Abstract: This paper presents the development and application of a real-time testbed for multiagent system interoperability. As utility independent private microgrids are installed constantly, standardized interoperability frameworks are required to define behavioral models of the individual agents for expandability and plug-and-play operation. In this paper, we propose a comprehensive hybrid agent framework combining the Foundation for Intelligent Physical Agents (FIPA), IEC 61850, and data distribution service (DDS) standards. The IEC 61850 logical node concept is extended using FIPA-based agent communication language with application-specific attributes and deliberative behavior modeling capability. The DDS middleware is adopted to enable a real-time publisher-subscriber interoperability mechanism between platforms. The proposed multi-agent framework was validated in a laboratory-based testbed involving developed intelligent electronic device prototypes and actual microgrid setups. Experimental results were demonstrated for both decentralized and distributed control approaches. Secondary and tertiary control levels of a microgrid were demonstrated for a decentralized hierarchical control case study. A consensus-based economic dispatch case study was demonstrated as a distributed control example. It was shown that the developed agent platform is industrially applicable for actual smart grid field deployment.
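To illustrate the kind of message such a hybrid framework exchanges, the sketch below wraps an IEC 61850 logical node attribute in a FIPA-ACL-style message that could be serialized onto a publish-subscribe (e.g., DDS) topic. The class name, fields, topic, and example values are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field
import json
import time

@dataclass
class AgentMessage:
    """FIPA-ACL-style message carrying an IEC 61850 logical node attribute."""
    performative: str          # e.g. "inform", "request", "propose"
    sender: str                # agent / IED identifier
    receiver: str              # target agent or pub-sub topic name
    logical_node: str          # IEC 61850 logical node, e.g. "MMXU1"
    attribute: str             # data attribute, e.g. "TotW.mag.f"
    value: float
    timestamp: float = field(default_factory=time.time)

    def to_payload(self) -> bytes:
        """Serialize for a publish-subscribe transport (e.g. a DDS topic)."""
        return json.dumps(self.__dict__).encode("utf-8")

# Example: a secondary-control agent informs peers of measured active power.
msg = AgentMessage(performative="inform",
                   sender="IED_feeder_3",
                   receiver="topic/microgrid/secondary_control",
                   logical_node="MMXU1",
                   attribute="TotW.mag.f",
                   value=42.7)
print(msg.to_payload())
```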

113 citations


Journal ArticleDOI
TL;DR: 5G-QoE would enable a holistic video flow self-optimisation system employing the cutting-edge Scalable H.265 video encoding to transmit UHD video applications in a QoE-aware manner.
Abstract: Traffic on future fifth-generation (5G) mobile networks is predicted to be dominated by challenging video applications such as mobile broadcasting, remote surgery and augmented reality, demanding real-time, ultra-high-quality delivery. Two of the main expectations of 5G networks are that they will be able to handle ultra-high-definition (UHD) video streaming and that they will deliver services that meet the requirements of the end user’s perceived quality by adopting quality of experience (QoE) aware network management approaches. This paper proposes a 5G-QoE framework to address the QoE modeling for UHD video flows in 5G networks. Particularly, it focuses on providing a QoE prediction model that is both sufficiently accurate and of low enough complexity to be employed as a continuous real-time indicator of the “health” of video application flows at the scale required in future 5G networks. The model has been developed and implemented as part of the EU 5G PPP SELFNET autonomic management framework, where it provides a primary indicator of the likely perceptual quality of UHD video application flows traversing a realistic multi-tenanted 5G mobile edge network testbed. The proposed 5G-QoE framework has been implemented in the 5G testbed, and the high accuracy of QoE prediction has been validated through comparing the predicted QoE values with not only subjective testing results but also empirical measurements in the testbed. As such, 5G-QoE would enable a holistic video flow self-optimisation system employing the cutting-edge Scalable H.265 video encoding to transmit UHD video applications in a QoE-aware manner.
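A minimal sketch of a continuous, low-complexity QoE indicator computed from flow-level measurements is given below. The weights, normalisations, and thresholds are placeholder assumptions; the actual 5G-QoE model is derived from subjective tests as described above and is considerably more elaborate.

```python
def estimate_qoe(bitrate_mbps, packet_loss, delay_ms,
                 max_bitrate_mbps=25.0):
    """Map flow-level measurements to a 1-5 MOS-like QoE score.

    Illustrative weights only; not the SELFNET 5G-QoE model.
    """
    # Normalise each impairment to [0, 1].
    rate_term = min(bitrate_mbps / max_bitrate_mbps, 1.0)
    loss_term = max(0.0, 1.0 - 20.0 * packet_loss)    # loss hurts quickly
    delay_term = max(0.0, 1.0 - delay_ms / 400.0)     # assume a 400 ms budget
    score = 1.0 + 4.0 * (0.5 * rate_term + 0.3 * loss_term + 0.2 * delay_term)
    return round(min(score, 5.0), 2)

print(estimate_qoe(bitrate_mbps=18.0, packet_loss=0.002, delay_ms=35))
```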

108 citations


Journal ArticleDOI
TL;DR: A wireless sensor network (WSN)-based IoT platform for wide area and heterogeneous sensing applications, consisting of one or multiple WSNs, gateways, a Web server, and a database, that can fulfill the high throughput requirement for high-rate applications and the requirement of long battery life for low-rate applications at the same time.
Abstract: Internet of Things (IoT) is not only a promising research topic but also a blooming industrial trend. Although the basic idea is to bring things or objects into the Internet, there are various approaches, because an IoT system is highly application oriented. This paper presents a wireless sensor network (WSN)-based IoT platform for wide area and heterogeneous sensing applications. The platform, consisting of one or multiple WSNs, gateways, a Web server, and a database, provides a reliable connection between sensors in the field and the database on the Internet. The WSN is built based on the IEEE 802.15.4e time slotted channel hopping protocol, because it offers benefits such as multi-hop transmission, collision-free transmission, and high energy efficiency. In addition to the design of customized hardware for range extension, a new synchronization scheme and a burst transmission feature are also presented to boost the network capacity and reduce the energy waste. As a result, the proposed platform can fulfill the high throughput requirement for high-rate applications and the requirement of long battery life for low-rate applications at the same time. We have developed a testbed on our campus to validate the proposed system.
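The sketch below illustrates, in simplified form, how each node could be given a contiguous burst of dedicated (slot, channel) cells inside a TSCH slotframe. The scheduler, parameters, and node names are assumptions for illustration only; the paper's synchronization scheme and burst transmission feature are more elaborate.

```python
def build_slotframe(nodes, slotframe_len=101, burst_cells=4):
    """Assign each node a burst of contiguous dedicated TSCH cells.

    Hypothetical scheduler for illustration; not the paper's algorithm.
    """
    schedule = {}
    slot = 1                          # slot 0 reserved for beacons
    for i, node in enumerate(nodes):
        if slot + burst_cells > slotframe_len:
            raise ValueError("slotframe too short for requested schedule")
        channel = i % 16              # IEEE 802.15.4 offers 16 channels at 2.4 GHz
        schedule[node] = [(slot + k, channel) for k in range(burst_cells)]
        slot += burst_cells
    return schedule

print(build_slotframe(["sensor-A", "sensor-B", "sensor-C"]))
```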

102 citations


Journal ArticleDOI
TL;DR: A novel radio resource allocation algorithm leveraging multiobjective reinforcement learning and artificial neural network ensembles able to manage available resources and conflicting mission-based goals is proposed.
Abstract: Future spacecraft communication subsystems will potentially benefit from software-defined radios controlled by artificial intelligence algorithms. In this paper, we propose a novel radio resource allocation algorithm leveraging multiobjective reinforcement learning and artificial neural network ensembles able to manage available resources and conflicting mission-based goals. The uncertainty in the performance of thousands of possible radio parameter combinations and the dynamic behavior of the radio channel over time producing a continuous multidimensional state–action space requires a fixed-size memory continuous state–action mapping instead of the traditional discrete mapping. In addition, actions need to be decoupled from states in order to allow for online learning, performance monitoring, and resource allocation prediction. The proposed approach leverages the authors’ previous research on constraining decisions predicted to have poor performance through “virtual environment exploration.” The simulation results show the performance for different communication mission profiles, and accuracy benchmarks are provided for future research reference. The proposed approach constitutes part of the core cognitive engine proof-of-concept delivered to the NASA John H. Glenn Research Center’s SCaN Testbed radios on-board the International Space Station.
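The following sketch conveys the general idea of predicting the performance of candidate radio parameter sets with a small neural-network ensemble and discarding actions expected to perform poorly ("virtual exploration") before committing to one. The toy data, feature dimensions, and threshold are assumptions; the paper's cognitive engine uses its own mission-specific formulation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy history of (channel state + radio parameters) -> observed performance.
X_hist = rng.uniform(size=(500, 6))           # e.g. SNR, rate, power, ...
y_hist = X_hist @ rng.uniform(size=6) + rng.normal(scale=0.05, size=500)

# Ensemble of small neural networks trained on bootstrap samples.
ensemble = []
for seed in range(5):
    idx = rng.integers(0, len(X_hist), len(X_hist))
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=800,
                       random_state=seed).fit(X_hist[idx], y_hist[idx])
    ensemble.append(net)

def choose_action(state, candidate_actions, threshold=0.4):
    """'Virtual exploration': predict performance of each candidate parameter
    set and discard those expected to perform poorly before transmitting."""
    best, best_score = None, -np.inf
    for a in candidate_actions:
        x = np.concatenate([state, a]).reshape(1, -1)
        score = np.mean([net.predict(x)[0] for net in ensemble])
        if score >= threshold and score > best_score:
            best, best_score = a, score
    return best, best_score

state = rng.uniform(size=3)                   # current channel conditions
actions = [rng.uniform(size=3) for _ in range(20)]
print(choose_action(state, actions))
```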

99 citations


Journal ArticleDOI
10 Nov 2018-Sensors
TL;DR: A novel mist computing testbed is presented and the importance of selecting a proper ECC curve is demonstrated, showing that, for the tested devices, some curves present worse energy consumption and data throughput than other curves that provide a higher security level.
Abstract: The latest Internet of Things (IoT) edge-centric architectures allow for unburdening higher layers from part of their computational and data processing requirements. In the specific case of fog computing systems, they reduce greatly the requirements of cloud-centric systems by processing in fog gateways part of the data generated by end devices, thus providing services that were previously offered by a remote cloud. Thanks to recent advances in System-on-Chip (SoC) energy efficiency, it is currently possible to create IoT end devices with enough computational power to process the data generated by their sensors and actuators while providing complex services, which in recent years led to the development of the mist computing paradigm. To allow mist computing nodes to provide the previously mentioned benefits and guarantee the same level of security as in other architectures, end-to-end standard security mechanisms need to be implemented. In this paper, a high-security energy-efficient fog and mist computing architecture and a testbed are presented and evaluated. The testbed makes use of Transport Layer Security (TLS) 1.2 Elliptic Curve Cryptography (ECC) and Rivest-Shamir-Adleman (RSA) cipher suites (compliant with the requirements of the forthcoming TLS 1.3 standard), which are evaluated and compared in terms of energy consumption and data throughput for a fog gateway and two mist end devices. The obtained results lead to the conclusion that ECC outperforms RSA in both energy consumption and data throughput for all the tested security levels. Moreover, the importance of selecting a proper ECC curve is demonstrated, showing that, for the tested devices, some curves present worse energy consumption and data throughput than other curves that provide a higher security level. As a result, this article not only presents a novel mist computing testbed, but also provides guidelines for future researchers to identify efficient and secure implementations for advanced IoT devices.
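A minimal sketch of how handshake cost can be compared between an ECC-based and an RSA-based TLS 1.2 cipher suite from a client is given below; on a constrained device the same measurement would be paired with a power monitor to obtain energy figures. The target host is a placeholder and the two OpenSSL suite names are examples, not necessarily the exact suites evaluated in the paper.

```python
import socket
import ssl
import time

def handshake_time(host, cipher, port=443):
    """Measure one TLS 1.2 handshake restricted to a single cipher suite."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2   # keep the suite choice meaningful
    ctx.set_ciphers(cipher)                        # restrict to one suite
    ctx.load_default_certs()
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host):
            pass                                   # handshake completes here
    return time.perf_counter() - start

# ECDHE/ECDSA (ECC) versus static RSA key exchange and authentication.
for suite in ("ECDHE-ECDSA-AES128-GCM-SHA256", "AES128-GCM-SHA256"):
    try:
        print(suite, f"{handshake_time('example.org', suite):.3f} s")
    except OSError as exc:
        print(suite, "handshake failed (suite not offered by server?):", exc)
```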

91 citations


Journal ArticleDOI
TL;DR: The development of a Supervisory Control and Data Acquisition (SCADA) system testbed used for cybersecurity research, which provides a good understanding of the effects and consequences of attacks on real SCADA environments.
Abstract: This paper presents the development of a Supervisory Control and Data Acquisition (SCADA) system testbed used for cybersecurity research. The testbed consists of a water storage tank’s control system, which is a stage in the process of water treatment and distribution. Sophisticated cyber-attacks were conducted against the testbed. During the attacks, the network traffic was captured, and features were extracted from the traffic to build a dataset for training and testing different machine learning algorithms. Five traditional machine learning algorithms were trained to detect the attacks: Random Forest, Decision Tree, Logistic Regression, Naive Bayes and KNN. Then, the trained machine learning models were built and deployed in the network, where new tests were made using online network traffic. The performance obtained during the training and testing of the machine learning models was compared to the performance obtained during the online deployment of these models in the network. The results show the efficiency of the machine learning models in detecting the attacks in real time. The testbed provides a good understanding of the effects and consequences of attacks on real SCADA environments.
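A compact sketch of training and comparing the five classifiers mentioned above with scikit-learn follows. The synthetic feature matrix is only a stand-in for the features extracted from the captured SCADA traffic, which are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Placeholder for the traffic features extracted in the paper
# (packet rates, protocol fields, payload statistics, ...).
X, y = make_classification(n_samples=2000, n_features=12, weights=[0.8, 0.2],
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test), digits=3))
```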

90 citations


Proceedings ArticleDOI
20 May 2018
TL;DR: A deep-learning classifier is presented that learns hardware imperfections of low-power radios that are challenging to emulate, even for high-power adversaries.
Abstract: At its peak, the Internet-of-Things will largely be composed of low-power devices with wireless radios attached. Yet, secure authentication of these devices amidst adversaries with much higher power and computational capability remains a challenge, even for advanced cryptographic and wireless security protocols. For instance, a high-power software radio could simply replay chunks of signals from a low-power device to emulate it. This paper presents a deep-learning classifier that learns hardware imperfections of low-power radios that are challenging to emulate, even for high-power adversaries. We build an LSTM framework, specifically sensitive to signal imperfections that persist over long durations. Experimental results from a testbed of 30 low-power nodes demonstrate high resilience to advanced software radio adversaries.
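The sketch below outlines an LSTM-based classifier over IQ sample sequences of the kind described above, using Keras. Layer sizes, sequence length, and the random stand-in data are assumptions, not the authors' architecture.

```python
import numpy as np
import tensorflow as tf

NUM_DEVICES = 30        # low-power nodes in the testbed
SEQ_LEN = 256           # complex baseband samples per example
FEATURES = 2            # I and Q components

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, FEATURES)),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_DEVICES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in data; real inputs are captured IQ bursts per transmitter.
X = np.random.randn(256, SEQ_LEN, FEATURES).astype("float32")
y = np.random.randint(0, NUM_DEVICES, size=256)
model.fit(X, y, epochs=1, batch_size=32)
```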

87 citations


Journal ArticleDOI
TL;DR: The proposed FabRec system decentralizes critical information about the manufacturer and makes it available on a peer-to-peer network composed of fiduciary nodes to ensure transparency and data provenance through a verifiable audit trail.

87 citations


Journal ArticleDOI
TL;DR: The multihop real-time BLE (MRT-BLE) protocol is proposed, a real-time protocol developed on top of BLE that allows for bounded packet delays over mesh networks and also provides priority support.
Abstract: Industrial wireless sensor networks (IWSNs) are used to acquire sensor data that need real-time processing; therefore, they require predictable behavior and real-time guarantees. To be cost effective, IWSNs are also expected to be low cost and low power. In this context, Bluetooth low energy (BLE) is a promising technology, as it allows implementing low-cost industrial networks. As BLE is a short-range technology, a multihop mesh network is needed to cover a large area. Nevertheless, the recently published Bluetooth mesh networking specifications do not provide support for real-time communications over multihop mesh networks. To overcome this limitation, this paper proposes the multihop real-time BLE (MRT-BLE) protocol, a real-time protocol developed on top of BLE that allows for bounded packet delays over mesh networks. MRT-BLE also provides priority support. This paper describes in detail the MRT-BLE protocol and how to implement it on commercial off-the-shelf devices. Two kinds of performance evaluation for the MRT-BLE protocol are provided. The first one is a worst-case end-to-end delay analysis, while the second one is based on the experimental results obtained through measurements on a real testbed.

85 citations


Journal ArticleDOI
TL;DR: The SDN technology is introduced into the hierarchical structure to decouple data and control planes of ECC and CCN, and an SDN protocol is designed to control the data forwarding.
Abstract: Edge-centric computing (ECC) and content-centric networking (CCN) will be the most important technologies in future 5G networks. However, due to different architectures and protocols, it is still a challenge to fuse ECC and CCN together and provide manageable and flexible services. In this article, we present ECCN, an orchestrating scheme that integrates ECC and CCN into a hierarchical structure with software defined networking (SDN). We introduce the SDN technology into the hierarchical structure to decouple data and control planes of ECC and CCN, and then design an SDN protocol to control the data forwarding. We also implement two demonstration applications in our testbed to evaluate the ECCN scheme. The experimental results from the testbed applications and extensive simulations show that ECCN outperforms the original structures.

Journal ArticleDOI
TL;DR: A system identification framework with the new BN-modulated waveform and the clinical HIL simulation testbed can help develop future model-based closed-loop electrical brain stimulation systems for treatment of neurological and neuropsychiatric disorders.
Abstract: Objective: Closed-loop electrical brain stimulation systems may enable a precisely-tailored treatment for neurological and neuropsychiatric disorders by controlling the stimulation based on neural activity feedback in real time. Developing model-based closed-loop systems requires a principled system identification framework to quantify the effect of input stimulation on output neural activity by learning an input-output (IO) dynamic model from data. Further, developing these systems needs a realistic clinical simulation testbed to design and validate the closed-loop controllers derived from the IO models before testing in human patients. Approach: First, we design a control-theoretic system identification framework to build dynamic IO models for neural activity that are amenable to closed-loop control design. To enable tractable model-based control, we use a data-driven linear state-space IO model that characterizes the effect of input on neural activity in terms of a low-dimensional hidden neural state. To learn the model parameters, we design a novel input waveform, a pulse train modulated by stochastic binary noise (BN) parameters, that we show is optimal for collecting informative IO datasets in system identification and conforms to clinical safety requirements. Second, we further extend this waveform to a generalized BN (GBN)-modulated waveform to reduce the required system identification time. Third, to enable extensive testing of system identification and closed-loop control, we develop a real-time closed-loop clinical hardware-in-the-loop (HIL) simulation testbed using the [Formula: see text] microelectrode recording and stimulation device, which incorporates stochastic noises, unknown disturbances and stimulation artifacts. Using this testbed, we implement both the system identification and the closed-loop controller by taking control of mood in depression as an example. Results: Testbed simulation results show that the closed-loop controller designed from IO models identified with the BN-modulated waveform achieves tight control, and performs similarly to a controller that knows the true IO model of neural activity. When system identification time is limited, performance is further improved using the GBN-modulated waveform. Significance: The system identification framework with the new BN-modulated waveform and the clinical HIL simulation testbed can help develop future model-based closed-loop electrical brain stimulation systems for treatment of neurological and neuropsychiatric disorders.
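In generic notation (not necessarily the paper's own), the data-driven linear state-space IO model described above takes the standard form:

```latex
\begin{aligned}
x_{t+1} &= A\,x_t + B\,u_t + w_t, \\
y_t &= C\,x_t + v_t,
\end{aligned}
```

where x_t is the low-dimensional hidden neural state, u_t the stimulation input defined by the BN-modulated pulse-train parameters, y_t the recorded neural activity features, w_t and v_t process and measurement noise, and A, B, C the matrices learned from the IO dataset.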

Journal ArticleDOI
TL;DR: An architecture to enable autonomic slice networking is presented and is experimentally demonstrated by means of a complex use case for a multidomain multilayer multiprotocol label switching-over-optical network.
Abstract: Network slices combine resource virtualization with the isolation level required by future 5G applications. In addition, the use of monitoring and data analytics help to maintain the required network performance, while reducing total cost of ownership. In this paper, an architecture to enable autonomic slice networking is presented. Extended nodes make local decisions close to network devices, whereas centralized domain systems collate and export metered data transparently to customer controllers, all of them leveraging customizable and isolated data analytics processes. Discovered knowledge can be applied for both proactive and reactive network slice reconfiguration, triggered either by service providers or customers, thanks to the interaction with state-of-the-art software-defined networking controllers and planning tools. The architecture is experimentally demonstrated by means of a complex use case for a multidomain multilayer multiprotocol label switching (MPLS)-over-optical network. In particular, the use case consists of the following observe–analyze–act loops: 1) proactive network slice rerouting after bit error rate (BER) degradation detection in a lightpath supporting a virtual link (vlink); 2) reactive core network restoration after optical link failure; and 3) reactive network slice rerouting after the degraded lightpath is restored. The proposed architecture is experimentally validated on a distributed testbed connecting premises in UPC (Spain) and CNIT (Italy).

Journal ArticleDOI
TL;DR: The fog framework for intelligent public safety in vehicular environments (FISVER) applies fog computing to smart video surveillance-based STS to enhance crime assistance in a cost-efficient way, and delivers outstanding system performance and device survivability behavior over typical STS use cases.
Abstract: Smart transportation safety (STS) envisions improving public safety through a significant paradigm shift toward pro-active police authority responses to crime. The application of smart surveillance in STS is critical for automatic and accurate identification of events in case of security threats in target environments. Cloud computing reduces costs and high resource consumption of smart surveillance capable STS systems, at the cost of introducing additional latency through far away centralized systems. In this paper, the fog framework for intelligent public safety in vehicular environments (FISVER) applies fog computing in smart video surveillance-based STS to enhance crime assistance in a cost-efficient way. Through fog-FISVER, in-vehicle and fog infrastructures support autonomous and real-time crime detection on public bus services. A fog-FISVER laboratory testbed prototype was created and extensive evaluations in a real testbed were performed. Results show that fog-FISVER delivers outstanding system performance and device survivability behavior over typical STS use cases.

Journal ArticleDOI
TL;DR: A secure controller-to-controller (C-to-C) protocol is designed that allows SDN controllers lying in different autonomous systems (AS) to securely communicate and transfer attack information with each other, thus saving valuable time and network resources.
Abstract: Software Defined Networking (SDN) has proved itself to be a backbone in the new network design and is quickly becoming an industry standard. The idea of separation of control plane and data plane is the key concept behind SDN. SDN not only allows us to program and monitor our networks but it also helps in mitigating some key network problems. Distributed denial of service (DDoS) attack is among them. In this paper we propose a collaborative DDoS attack mitigation scheme using SDN. We design a secure controller-to-controller (C-to-C) protocol that allows SDN controllers lying in different autonomous systems (AS) to securely communicate and transfer attack information with each other. This enables efficient notification along the path of an ongoing attack and effective filtering of traffic near the source of attack, thus saving valuable time and network resources. We also introduce three different deployment approaches, i.e., linear, central, and mesh, in our testbed. Based on the experimental results we demonstrate that our SDN based collaborative scheme is fast and reliable in efficiently mitigating DDoS attacks in real time with very small computational footprints.
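As a rough illustration of the controller-to-controller exchange, the sketch below builds and verifies a signed attack-notification message that one SDN controller could forward to a peer controller in another AS. The message fields, the HMAC-based integrity check, and the pre-shared key are assumptions; the paper's secure C-to-C protocol is not reproduced here.

```python
import hashlib
import hmac
import json
import time

SHARED_KEY = b"pre-shared-key-between-controllers"   # placeholder secret

def build_attack_notice(victim_prefix, attack_rate_mbps, origin_as):
    """Build a signed attack-notification message for a peer SDN controller."""
    body = {
        "type": "ddos-mitigation-request",
        "victim_prefix": victim_prefix,
        "observed_rate_mbps": attack_rate_mbps,
        "origin_as": origin_as,
        "timestamp": time.time(),
    }
    payload = json.dumps(body, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": body, "hmac": tag}

def verify_attack_notice(message):
    """Check integrity before installing filtering rules near the source."""
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["hmac"])

notice = build_attack_notice("203.0.113.0/24", 950, origin_as=64500)
print(verify_attack_notice(notice))   # True -> proceed with upstream filtering
```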

Proceedings ArticleDOI
22 Jun 2018
TL;DR: A new method of virtualization is proposed to integrate machining data and operations into digital twins using an Internet-scale machine tool communication method (MTComm), and performance analysis shows that the MTComm-based digital twins have excellent efficiency.
Abstract: Digital twins simulate physical world objects by creating 'as-is' virtual images in a cyberspace. In order to create a well-synchronized digital-twin simulator in manufacturing, information and activities of a physical machine need to be virtualized. Many existing digital twins stream read-only data of machine sensors and do not incorporate operations of manufacturing machines through the Internet. In this paper, a new method of virtualization is proposed to integrate machining data and operations into digital twins using an Internet-scale machine tool communication method. A fully functional digital twin is implemented in the CPMC testbed using MTComm, and several manufacturing application scenarios are developed to evaluate the proposed method and system. Performance analysis shows that it is capable of providing data-driven visual monitoring of a manufacturing process and performing manufacturing operations through digital twins over the Internet. Results of the experiments also show that the MTComm-based digital twins have excellent efficiency.

Journal ArticleDOI
12 May 2018-Sensors
TL;DR: The framework shows that a combination of fog and cloud computing with a distributed data modeling at the sensor device for wireless sensor networks can be beneficial for Industrial Internet of Things applications.
Abstract: Digitalization is a global trend becoming ever more important to our connected and sustainable society. This trend also affects industry where the Industrial Internet of Things is an important part, and there is a need to conserve spectrum as well as energy when communicating data to a fog or cloud back-end system. In this paper we investigate the benefits of fog computing by proposing a novel distributed learning model on the sensor device and simulating the data stream in the fog, instead of transmitting all raw sensor values to the cloud back-end. To save energy and to communicate as few packets as possible, the updated parameters of the learned model at the sensor device are communicated in longer time intervals to a fog computing system. The proposed framework is implemented and tested in a real world testbed in order to make quantitative measurements and evaluate the system. Our results show that the proposed model can achieve a 98% decrease in the number of packets sent over the wireless link, and the fog node can still simulate the data stream with an acceptable accuracy of 97%. We also observe an end-to-end delay of 180 ms in our proposed three-layer framework. Hence, the framework shows that a combination of fog and cloud computing with a distributed data modeling at the sensor device for wireless sensor networks can be beneficial for Industrial Internet of Things applications.
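The sketch below conveys the core idea described above: transmit the parameters of a locally learned model instead of raw samples, and simulate the stream at the fog node from those parameters. The quadratic trend model, window size, and error measure are illustrative assumptions rather than the paper's model.

```python
import numpy as np

WINDOW = 60          # samples per reporting interval
rng = np.random.default_rng(1)

def sensor_side(readings):
    """Fit a small local model (here a quadratic trend) to the latest window
    and return only its coefficients instead of the raw samples."""
    t = np.arange(len(readings))
    return np.polyfit(t, readings, deg=2)          # 3 floats vs. 60 readings

def fog_side(coeffs, horizon=WINDOW):
    """Simulate the data stream at the fog node from the received model."""
    t = np.arange(horizon)
    return np.polyval(coeffs, t)

# One reporting interval of a slowly drifting sensor with noise.
raw = 20 + 0.05 * np.arange(WINDOW) + rng.normal(scale=0.2, size=WINDOW)
coeffs = sensor_side(raw)                          # transmitted uplink payload
reconstructed = fog_side(coeffs)

error = np.mean(np.abs(raw - reconstructed) / np.abs(raw)) * 100
print(f"payload reduced from {WINDOW} values to {len(coeffs)},"
      f" mean reconstruction error {error:.2f}%")
```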

Journal ArticleDOI
TL;DR: Hy-LP is proposed, a novel hybrid protocol and development framework for Industrial IoT (IIoT) systems that enables the seamless communication of IIoT sensors and actuators, within and across domains, while also facilitating the integration of the Industrial Cloud.

Journal ArticleDOI
TL;DR: Experimental results obtained indicate that the proposed cognitive small world model achieves energy balancing, increases network lifetime, improves energy efficiency, and reduces data latency when compared to results obtained using various state-of-the-art approaches.
Abstract: Energy balancing and faster data transfer over a wireless sensor network (WSN) is an important problem in applications like cyber-physical systems, Internet of things, and context-aware pervasive systems. Addressing this problem leads to increased network lifetime and improved network feasibility for real time applications. In WSNs, sensor nodes transfer the data using multihop data transmission model. The large number of hops required for data transmission leads to poor energy balancing and large data latency across the network. In this paper, we utilize a recent development in social networks called small world characteristics for proposing a novel method of low-latency and energy-balanced data transmission over WSN. Small world WSN (SW-WSN) exhibits low average path length and high average clustering coefficient. A cognitive SW-WSN is developed by adding new links between a selected fraction of nodes and the sink. A new data routing method is also proposed by optimizing energy cost of the links. This method yields uniform energy consumption and faster data transfer. Experiments are conducted using simulations and real node deployments over a WSN testbed. The performance of the proposed method is evaluated by conducting exhaustive analysis of network lifetime, residual energy, and data latency over the WSN. Experimental results obtained indicate that the proposed cognitive small world model achieves energy balancing, increases network lifetime, improves energy efficiency, and reduces data latency when compared to results obtained using various state-of-the-art approaches. The results are motivating enough for the proposed method to be used in large and medium scale network applications.

Journal ArticleDOI
TL;DR: This paper considers the problem of low-complexity target tracking to cover and follow moving targets using flying robots (Parrot AR.Drone quadcopters), and proposes three computationally efficient approaches: predictive fuzzy, predictive incremental fuzzy, and local incremental fuzzy.
Abstract: Mobile wireless sensor networks have been extensively deployed for enhancing environmental monitoring and surveillance. The availability of low-cost mobile robots equipped with a variety of sensors makes them promising in target coverage tasks. They are particularly suitable where quick, inexpensive, or nonlasting visual sensing solutions are required. In this paper, we consider the problem of low complexity target tracking to cover and follow moving targets using flying robots. We tackle this problem by clustering targets while estimating the camera location and orientation for each cluster separately through a cover-set coverage method. We also leverage partial knowledge of target mobility to enhance the efficiency of our proposed algorithms. Three computationally efficient approaches are developed: predictive fuzzy , predictive incremental fuzzy , and local incremental fuzzy . The objective is to find a compromise among coverage efficiency, traveled distance, number of drones required, and complexity. The targets move according to one of the following three possible mobility patterns: random waypoint, Manhattan grid, and reference point group mobility patterns. The feasibility of our algorithms and their performance are also tested on a real-world indoor testbed called drone-be-gone , using Parrot AR.Drone quadcopters. The deployment confirms the results obtained with simulations and highlights the suitability of the proposed solutions for real-time applications.

Journal ArticleDOI
TL;DR: A universal learning framework, called an AI framework based on deep reinforcement learning (DRL), adopts convolutional and recurrent neural networks to model the potential spatial features and sequential features from the raw wireless signal automatically; it achieves significant improvements and learns intuitive features.
Abstract: To solve the policy optimizing problem in many scenarios of smart wireless network management using a single universal algorithm, this letter proposes a universal learning framework, which is called AI framework based on deep reinforcement learning (DRL). This framework can also solve the problem that the state is painful to design in traditional RL. This AI framework adopts convolutional neural network and recurrent neural network to model the potential spatial features (i.e., location information) and sequential features from the raw wireless signal automatically. These features can be taken as the state definition of DRL. Meanwhile, this framework is suitable for many scenarios, such as resource management and access control due to DRL. The mean value of throughput, the standard deviation of throughput, and handover counts are used to evaluate its performance on the mobility management problem in the wireless local area network on a practical testbed. The results show that the framework gets significant improvements and learns intuitive features automatically.
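A minimal Keras sketch of the state encoder implied above, a CNN over each raw-signal snapshot followed by an RNN over time feeding Q-values for a DRL agent, is shown below. Input dimensions, layer sizes, and the number of actions are assumptions, not the authors' architecture.

```python
import tensorflow as tf

NUM_ACTIONS = 4        # e.g. candidate access points to hand over to
WINDOW = 32            # consecutive signal snapshots
SIGNAL_DIM = 64        # raw wireless signal features per snapshot

# CNN extracts spatial features from each snapshot, LSTM captures their
# temporal evolution, and the head outputs one Q-value per action.
q_network = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, SIGNAL_DIM, 1)),
    tf.keras.layers.TimeDistributed(
        tf.keras.layers.Conv1D(16, kernel_size=5, activation="relu")),
    tf.keras.layers.TimeDistributed(tf.keras.layers.GlobalMaxPooling1D()),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_ACTIONS, activation="linear"),
])
q_network.compile(optimizer="adam", loss="mse")
q_network.summary()
```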

Proceedings ArticleDOI
20 May 2018
TL;DR: Fogbed is presented, a framework and toolset integration for rapid prototyping of fog components in virtualized environments that enables the deployment of fog nodes as software containers under different network configurations and meets the requirements of low cost, flexible setup and compatibility with real world technologies.
Abstract: The fog computing paradigm extends cloud resources near data sources to overcome limitations of cloud-based IoT centralized architectures. It involves running services and applications on the nodes between the IoT devices and the cloud. The prototyping and testing of such distributed software is challenging due to the fact that not only the fog service has to be tested but also its integration with management systems. Despite recent advances in fog platforms, there exists no readily available testbed which can help researchers to design and test real world fog applications. To this end, network simulators and cloud middleware are adapted to enable the investigation of fog solutions. This paper presents Fogbed, a framework and toolset integration for rapid prototyping of fog components in virtualized environments. Using a desktop approach, Fogbed enables the deployment of fog nodes as software containers under different network configurations. Its design meets the requirements of low cost, flexible setup and compatibility with real world technologies. Unlike current approaches, the proposed framework allows for the testing of fog components with third-party systems through standard interfaces. A scheme to observe the behavior of the environment is provided. A case study is presented to demonstrate a fog service analysis. In addition, future developments and research directions are discussed.
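For flavour, the sketch below uses the docker Python SDK to spin up a containerized fog node on an isolated virtual network, which is the style of desktop experiment Fogbed automates. This is not Fogbed's own API; the image, names, and network are placeholders, and a local Docker daemon is assumed.

```python
import docker   # pip install docker; requires a running Docker daemon

client = docker.from_env()

# An isolated network emulating one fog domain between devices and the cloud.
net = client.networks.create("fog-domain-1", driver="bridge")

# A containerized fog node running a placeholder service image.
fog_node = client.containers.run(
    "nginx:alpine",                 # stand-in for a real fog service image
    name="fog-node-1",
    network="fog-domain-1",
    detach=True,
)

print(fog_node.status, [c.name for c in client.containers.list()])

# Tear down the experiment.
fog_node.stop()
fog_node.remove()
net.remove()
```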

Book ChapterDOI
06 Sep 2018
TL;DR: Four feasible attack scenarios on EPIC are described, and two of them, namely a power supply interruption attack and a physical damage attack, are demonstrated together with possible mitigations.
Abstract: Testbeds that realistically mimic the operation of critical infrastructure are of significant value to researchers. One such testbed, named Electrical Power and Intelligent Control (EPIC), is described in this paper together with examples of its use for research in the design of secure smart-grids. EPIC includes generation, transmission, smart home, and micro-grid. EPIC enables researchers to conduct research in an active and realistic environment. It can also be used to understand the cascading effects of failures in one Industrial Control System (ICS) on another, and to assess the effectiveness of novel attack detection algorithms. Four feasible attack scenarios on EPIC are described. Two of these scenarios, demonstrated on EPIC, namely a power supply interruption attack and a physical damage attack, and possible mitigation, are also described.

Journal ArticleDOI
TL;DR: An integrated heterogeneous networking scheme for multi-access edge computing and fiber-wireless access networks that uses network virtualization to achieve the dynamic orchestration of the network, storage, and computing resources to meet diverse application demands is proposed.
Abstract: With the widespread use of smart mobile devices, the exponential growth of mobile Internet traffic and newly emerging services, such as Internet of Things, virtual reality/augmented reality, and serious games, the network performance requirements for delay and bandwidth are increasing. The inherent long-distance propagation and possible network congestion of mobile cloud computing may lead to excessive latency, which cannot satisfy the new delay-sensitive mobile applications. The proximity of edge computing provides the possibility of low-latency access and raises increasing interest from non-mobile operators; therefore, edge computing faces a variety of access network technologies, including wired (fixed) and wireless (mobile) access. In this paper, we propose an integrated heterogeneous networking scheme for multi-access edge computing and fiber-wireless access networks that uses network virtualization to achieve the dynamic orchestration of the network, storage, and computing resources to meet diverse application demands. The global view and centralized control of the entire network and the unified scheduling of the resources in the scheme anticipate the convergence of various types of access networks and the edge cloud. The multipath transmission of the service flows is further combined as an instance of integrated edge cloud networking. An experimental testbed is established in the laboratory, and the performance of the multi-access edge computing and networking is evaluated to verify the feasibility and effectiveness of the scheme. The results demonstrate that the scheme can effectively improve the network performance.

Proceedings ArticleDOI
11 Apr 2018
TL;DR: OpenUAV is presented, an open-source testbed for UAV education and research that overcomes the cost and setup barriers of existing approaches and is believed to be the first open-source, cloud-enabled testbed for UAVs.
Abstract: Multirotor Unmanned Aerial Vehicles (UAV) have grown in popularity for research and education, overcoming challenges associated with fixed wing and ground robots. Unfortunately, extensive physical testing can be expensive and time consuming because of short flight times due to battery constraints and safety precautions. Simulation tools offer a low barrier to entry and enable testing and validation before field trials. However, most of the well-known simulators today have a high barrier to entry due to the need for powerful computers and the time required for initial set up. In this paper, we present OpenUAV, an open source test bed for UAV education and research that overcomes these barriers. We leverage the Containers as a Service (CaaS) technology to enable students and researchers to carry out simulations on the cloud. We have based our framework on open-source tools including ROS, Gazebo, Docker, PX4, and Ansible, and we designed the simulation framework so that it has no special hardware requirements. Two use-cases are presented. First, we show how a UAV can navigate around obstacles, and second, we test a multi-UAV swarm formation algorithm. To our knowledge, this is the first open-source, cloud-enabled testbed for UAVs. The code is available on GitHub: https://github.com/Open-UAV.

Journal ArticleDOI
TL;DR: An extensive power characterization of the system’s operation is presented, and an open-source wireless camera network is provided that can adapt to the requirements of future outdoor video monitoring applications.

Journal ArticleDOI
TL;DR: The configuration and characteristics of the power control system network, an area where industrial IoT technology is applied, are analyzed, and a testbed environment is built that will be able to stably incorporate new security technologies into critical industrial infrastructure.
Abstract: In the era of Industry 4.0, information and communication technology (ICT) has been applied to various critical infrastructures, such as power plants, smart factories, and financial networks, to ensure and automate industrial systems. In particular, in the field of power control systems, ICT technology such as industrial internet of things (IoT) is applied for efficient remote measurement. Therefore, legacy systems that were previously operated as standalone now have contact points with the external networks. In this trend, security vulnerabilities from legacy ICT have been inherited by power control systems. Therefore, various security technologies are being researched and developed to cope with cyber vulnerabilities and threats. However, it is risky to apply novel security technologies that are not verified as secure, to power control systems, the availability of which must be guaranteed to provide electricity consistently. Thus, verifying the effectiveness and stability of new security technologies is necessary to apply the technologies to power control systems. In this paper, we analyze the configuration and characteristics of the power control systems network, which is an area where industrial IoT technology is applied. We also build a testbed environment that can verify the security technology and conduct experiments to confirm the security technology for the power control system and the suitability of the testbed. The proposed testbed will be able to stably incorporate new security technologies into the critical industrial infrastructure. Further, it is also expected that the security and stability of the system will be enhanced.

Journal ArticleDOI
TL;DR: This article designs capacity-centric FiWi broadband access networks enhanced with edge computing, together with the resulting fiber backhaul sharing and computation offloading capabilities, and proposes a TDMA-based polling scheme for resource management to guarantee low end-to-end latency.
Abstract: Recently, edge computing has emerged as a promising computing paradigm to meet stringent quality-of-service requirements of an increasing number of latency-sensitive applications. The core principle of edge computing is to bring the capability of cloud computing in close proximity to mobile devices, sensors, actuators, connected things and end users, thereby supporting various types of services and applications at the network edge. In this article, we design capacity-centric FiWi broadband access networks enhanced with edge computing as well as resulting fiber backhaul sharing and computation offloading capabilities. More specifically, we introduce the concept of FiWi enhanced two-level edge computing at the access edge cloud and metro edge cloud. To guarantee low end-to-end latency, we propose a TDMA based polling scheme for resource management. Furthermore, given the vital importance of experimentally demonstrating the potential and practical limitations of edge computing, we develop an experimental testbed for edge computing across converged FiWi broadband access networks. The proof-of-concept demonstration of the testbed is studied in terms of response time and response time efficiency of both edge clouds, including their respective energy consumption.

Proceedings ArticleDOI
20 May 2018
TL;DR: The proposed ANN-based classifier is shown to outperform the hybrid hierarchical AMC (HH-AMC) system and is flexible enough to easily expand the dictionary of modulation formats for other applications.
Abstract: In this paper, we design and evaluate a practical automatic modulation classification (AMC) system that can be readily deployed to provide robust performance in various real-time commercial scenarios. Thus, our main goal is to develop a robust AMC algorithm with low computational complexity for easy implementation and practical deployment. To this end, we utilize recently revitalized machine learning based approaches used for various classification purposes. In our proposed AMC architecture, we first propose various statistics that serve as features of the AMC signals; next, we design an artificial neural network (ANN) based classifier that performs AMC over a wide range of SNRs. We employ the Nesterov accelerated adaptive moment (NADAM) estimation technique to improve the classification performance of our ANN. Further, to establish the practical feasibility of our proposed architecture, we implement it on an SDR testbed. The proposed ANN-based classifier is shown to outperform the hybrid hierarchical AMC (HH-AMC) system and is flexible enough to easily expand the dictionary of modulation formats for other applications.
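A minimal Keras sketch of a feature-based ANN classifier trained with the Nadam optimizer, in the spirit of the architecture above, follows. The number of modulation classes, feature count, layer sizes, and the random stand-in data are assumptions, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf

NUM_CLASSES = 8        # modulation formats in the dictionary
NUM_FEATURES = 21      # statistical features computed from each signal burst

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-ins for feature vectors extracted at different SNRs.
X = np.random.randn(1024, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=1024)
model.fit(X, y, validation_split=0.2, epochs=3, batch_size=64)
```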

Journal ArticleDOI
30 Jul 2018-Sensors
TL;DR: The experimental results show that the proposed OFQS function automatically adapts to the number of instances, providing Quality of Service differentiation based on the requirements of different Smart Grid applications, and delivers lower packet delivery latency and a higher packet delivery ratio while extending the lifetime of the network compared to solutions in the literature.
Abstract: The Smart Grid (SG) aims to transform the current electric grid into a "smarter" network where the integration of renewable energy resources, energy efficiency and fault tolerance are the main benefits. This is done by interconnecting every energy source, storage point or central control point with connected devices, where heterogeneous SG applications and signalling messages will have different requirements in terms of reliability, latency and priority. Hence, data routing and prioritization are the main challenges in such networks. So far, the RPL (Routing Protocol for Low-Power and Lossy Networks) protocol is widely used in Smart Grids for distributing commands over the grid. RPL assures traffic differentiation at the network layer in wireless sensor networks through the logical subdivision of the network in multiple instances, each one relying on a specific Objective Function. However, RPL is not optimized for Smart Grids, as its main objective functions and their associated metrics do not allow Quality of Service differentiation. To overcome this, we propose OFQS, an objective function with a multi-objective metric that considers the delay and the remaining energy in the battery of the nodes alongside the dynamic quality of the communication links. Our function automatically adapts to the number of instances (traffic classes), providing Quality of Service differentiation based on the requirements of the different Smart Grid applications. We tested our approach on a real sensor testbed. The experimental results show that our proposal provides a lower packet delivery latency and a higher packet delivery ratio while extending the lifetime of the network compared to solutions in the literature.
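The sketch below illustrates the general shape of a multi-objective RPL routing cost combining link quality, delay, and residual energy per instance. The weights and normalisations are placeholder assumptions; the actual OFQS metric and its per-instance adaptation are defined in the paper.

```python
def ofqs_link_cost(etx, delay_ms, residual_energy, instance_delay_budget_ms,
                   w_link=0.4, w_delay=0.4, w_energy=0.2):
    """Combine link quality (ETX), per-hop delay and residual battery energy
    into a single additive routing cost, weighted per RPL instance.

    Illustrative weights and normalisations only; not the OFQS definition.
    """
    link_term = etx                                    # >= 1, lower is better
    delay_term = delay_ms / instance_delay_budget_ms   # normalised to budget
    energy_term = 1.0 - residual_energy                # residual_energy in [0, 1]
    return w_link * link_term + w_delay * delay_term + w_energy * energy_term

# Prefer the parent whose cost is lowest for a latency-critical instance.
candidates = {
    "parent-A": ofqs_link_cost(etx=1.2, delay_ms=40, residual_energy=0.9,
                               instance_delay_budget_ms=100),
    "parent-B": ofqs_link_cost(etx=1.0, delay_ms=90, residual_energy=0.3,
                               instance_delay_budget_ms=100),
}
print(min(candidates, key=candidates.get), candidates)
```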