
Showing papers on "Testbed published in 2019"


Journal ArticleDOI
TL;DR: A number of key technical challenges as well as the potential solutions associated with 6G, including physical-layer transmission techniques, network designs, security approaches, and testbed developments are outlined.
Abstract: With the fast development of smart terminals and emerging new applications (e.g., real-time and interactive services), wireless data traffic has drastically increased, and current cellular networks (even the forthcoming 5G) cannot completely meet the quickly rising technical requirements. To meet the coming challenges, the sixth generation (6G) mobile network is expected to set a high technical standard built on new spectrum and energy-efficient transmission techniques. In this article, we sketch the potential requirements and present an overview of the latest research on the promising techniques evolving toward 6G, which have recently attracted considerable attention. Moreover, we outline a number of key technical challenges as well as the potential solutions associated with 6G, including physical-layer transmission techniques, network designs, security approaches, and testbed developments.

731 citations


Journal ArticleDOI
TL;DR: Experiments on the testbed reveal that the proposed FCDAA enhances energy efficiency and battery lifetime at acceptable reliability (∼0.95) by appropriately tuning the duty cycle and TPC, unlike conventional methods.
Abstract: Due to challenging issues such as computational complexity and higher delay in cloud computing, edge computing has overtaken the conventional approach by efficiently and fairly allocating resources, i.e., power and battery lifetime, in Internet of Things (IoT)-based industrial applications. In the meantime, intelligent and accurate resource management by artificial intelligence (AI) has become the center of attention, especially in industrial applications. Coordinating AI at the edge will remarkably enhance the range and computational speed of IoT-based devices in industries. The challenging issue in these power-hungry, short-battery-lifetime, and delay-intolerant portable devices, however, is the inappropriate and inefficient classical approach to fair resource allotment. It is also interpreted through extensive industrial datasets that a dynamic wireless channel cannot be supported by typical power-saving and battery-lifetime techniques, for example, predictive transmission power control (TPC) and baseline schemes. Thus, this paper proposes 1) a forward central dynamic and available approach (FCDAA) that adapts the running time of sensing and transmission processes in IoT-based portable devices; 2) a system-level battery model that evaluates the energy dissipation in IoT devices; and 3) a data reliability model for edge AI-based IoT devices over a hybrid TPC and duty-cycle network. Two important cases, static (i.e., product processing) and dynamic (i.e., vibration and fault diagnosis), are introduced for proper monitoring of the industrial platform. Experiments on the testbed reveal that the proposed FCDAA enhances energy efficiency and battery lifetime at acceptable reliability (∼0.95) by appropriately tuning the duty cycle and TPC, unlike conventional methods.
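The abstract describes a system-level battery model driven by duty cycle and TPC. The sketch below is a minimal, hypothetical version of such a model; the power figures, battery capacity, and model form are illustrative assumptions, not the paper's actual FCDAA formulation.

```python
# Minimal sketch of a system-level energy/battery model for a duty-cycled
# IoT node with transmission power control (TPC). All parameters and the
# model form are illustrative assumptions, not the paper's FCDAA model.

def node_lifetime_hours(duty_cycle, tx_power_mw, battery_mwh=3700.0,
                        sense_power_mw=10.0, sleep_power_mw=0.05,
                        radio_overhead_mw=20.0):
    """Estimate battery lifetime from average power draw.

    duty_cycle   -- fraction of time the node is actively sensing/transmitting
    tx_power_mw  -- radiated transmit power selected by TPC
    """
    active_power = sense_power_mw + radio_overhead_mw + tx_power_mw
    avg_power = duty_cycle * active_power + (1.0 - duty_cycle) * sleep_power_mw
    return battery_mwh / avg_power  # hours until the battery is drained


if __name__ == "__main__":
    # Lowering the duty cycle or the TPC level extends lifetime, which is the
    # trade-off FCDAA tunes against the reliability target (~0.95).
    for dc in (0.01, 0.05, 0.10):
        for p in (1.0, 10.0):
            print(f"duty={dc:.2f} tx={p:>4.1f} mW -> "
                  f"{node_lifetime_hours(dc, p):8.1f} h")
```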

213 citations


Journal ArticleDOI
TL;DR: The proposed security testbed is aimed at testing all types of IoT devices, with different software/hardware configurations, by performing standard and advanced security testing, and is effective at detecting vulnerabilities and compromised IoT devices.
Abstract: The Internet of Things (IoT) is a global ecosystem of information and communication technologies aimed at connecting any type of object (thing), at any time, and in any place, to each other and to the Internet. One of the major problems associated with the IoT is the heterogeneous nature of such deployments; this heterogeneity poses many challenges, particularly, in the areas of security and privacy. Specifically, security testing and analysis of IoT devices is considered a very complex task, as different security testing methodologies, including software and hardware security testing approaches, are needed. In this paper, we propose an innovative security testbed framework targeted at IoT devices. The security testbed is aimed at testing all types of IoT devices, with different software/hardware configurations, by performing standard and advanced security testing. Advanced analysis processes based on machine learning algorithms are employed in the testbed in order to monitor the overall operation of the IoT device under test. The architectural design of the proposed security testbed along with a detailed description of the testbed implementation is discussed. The testbed operation is demonstrated on different IoT devices using several specific IoT testing scenarios. The results obtained demonstrate that the testbed is effective at detecting vulnerabilities and compromised IoT devices.
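The testbed monitors each device under test with machine-learning-based analysis. As a rough illustration of one plausible building block, the sketch below runs unsupervised anomaly detection over per-device traffic features with scikit-learn's IsolationForest; the feature set, synthetic data, and contamination rate are assumptions rather than the paper's actual pipeline.

```python
# Sketch of ML-based monitoring of an IoT device under test: unsupervised
# anomaly detection over per-device traffic features. The features and
# contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Columns: packets/s, mean packet size (bytes), distinct destination IPs.
baseline = np.column_stack([
    rng.normal(20, 3, 500),    # normal traffic rate
    rng.normal(120, 15, 500),  # normal packet size
    rng.normal(3, 1, 500),     # few destinations
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A compromised device (e.g., a scanning or DDoS bot) looks very different.
observed = np.array([
    [21, 118, 3],     # benign-looking sample
    [900, 64, 250],   # flood-like sample
])
print(detector.predict(observed))  # 1 = normal, -1 = flagged as anomalous
```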

114 citations


Proceedings ArticleDOI
01 Apr 2019
TL;DR: An online algorithm, Dedas, is proposed that greedily schedules newly arriving tasks and considers replacing some existing tasks so that new deadlines can be satisfied; a non-trivial competitive ratio is derived theoretically, and the analysis is asymptotically tight.
Abstract: In this paper, we study online deadline-aware task dispatching and scheduling in edge computing. We jointly consider management of the networking bandwidth and computing resources to meet the maximum number of deadlines. We propose an online algorithm, Dedas, which greedily schedules newly arriving tasks and considers whether to replace some existing tasks so that the new deadlines can be satisfied. We derive a non-trivial competitive ratio theoretically, and our analysis is asymptotically tight. We then build DeEdge, an edge computing testbed installed with typical latency-sensitive applications such as IoT sensor monitoring and face matching. In addition, we adopt a real-world data trace from the Google cluster for large-scale emulations. Extensive testbed experiments and simulations demonstrate that the deadline miss ratio of Dedas is stable for online tasks and is reduced by up to 60% compared with state-of-the-art methods. Moreover, Dedas performs well in minimizing the average task completion time.
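To make the "greedy admission with replacement" idea concrete, here is a minimal single-resource sketch: a new task is accepted if the earliest-deadline-first schedule still meets every deadline, and otherwise one admitted task may be evicted to make room. The task fields and the replacement policy are simplifying assumptions and not the Dedas algorithm itself.

```python
# Sketch of deadline-aware admission with replacement on a single resource,
# in the spirit of (but much simpler than) Dedas.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    work: float      # remaining processing time (s)
    deadline: float  # absolute deadline (s)

def edf_feasible(tasks, now=0.0):
    """All deadlines met when tasks run earliest-deadline-first on one server."""
    t = now
    for task in sorted(tasks, key=lambda x: x.deadline):
        t += task.work
        if t > task.deadline:
            return False
    return True

def admit(admitted, new, now=0.0):
    """Greedy admission with single-task replacement."""
    if edf_feasible(admitted + [new], now):
        return admitted + [new], None            # fits as-is
    for victim in sorted(admitted, key=lambda x: -x.work):
        rest = [t for t in admitted if t is not victim]
        if edf_feasible(rest + [new], now):
            return rest + [new], victim          # replace one task
    return admitted, None                        # reject the new task

if __name__ == "__main__":
    running = [Task("A", 2, 5), Task("B", 3, 9)]
    running, evicted = admit(running, Task("C", 1, 3))
    print([t.name for t in running], "evicted:", evicted.name if evicted else None)
```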

111 citations


Journal ArticleDOI
TL;DR: A framework for IoT is presented that employs an edge computing layer of Fog nodes controlled and managed by an SDN network to achieve high reliability and availability for latency-sensitive IoT applications and achieves higher efficiency in terms of latency and resource utilization.
Abstract: Designing Internet of Things (IoT) applications faces many challenges including security, massive traffic, high availability, high reliability and energy constraints. Recent distributed computing paradigms, such as Fog and multi-access edge computing (MEC), software-defined networking (SDN), network virtualization and blockchain can be exploited in IoT networks, either combined or individually, to overcome the aforementioned challenges while maintaining system performance. In this paper, we present a framework for IoT that employs an edge computing layer of Fog nodes controlled and managed by an SDN network to achieve high reliability and availability for latency-sensitive IoT applications. The SDN network is equipped with distributed controllers and distributed resource-constrained OpenFlow switches. Blockchain is used to ensure decentralization in a trustful manner. Additionally, a data offloading algorithm is developed to allocate various processing and computing tasks to the OpenFlow switches based on their current workload. Moreover, a traffic model is proposed to model and analyze the traffic in different parts of the network. The proposed algorithm is evaluated in simulation and in a testbed. Experimental results show that the proposed framework achieves higher efficiency in terms of latency and resource utilization.
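The offloading algorithm assigns tasks to OpenFlow switches based on their current workload. The following sketch shows one simple workload-aware placement rule of that kind; the data structures, capacity model, and least-loaded policy are illustrative assumptions rather than the paper's algorithm.

```python
# Sketch of a workload-aware offloading decision: assign each incoming task to
# the OpenFlow switch with the lowest relative load that still has spare
# capacity. Data structures and the capacity model are assumptions.

def place_task(switch_load, capacity, task_cost):
    """Return the switch id chosen for the task, or None if all are saturated.

    switch_load -- dict: switch id -> current load (e.g., CPU units in use)
    capacity    -- dict: switch id -> total capacity
    task_cost   -- load the new task would add
    """
    candidates = [s for s in switch_load
                  if switch_load[s] + task_cost <= capacity[s]]
    if not candidates:
        return None  # fall back to the cloud / reject
    best = min(candidates, key=lambda s: switch_load[s] / capacity[s])
    switch_load[best] += task_cost
    return best

if __name__ == "__main__":
    load = {"sw1": 6.0, "sw2": 2.0, "sw3": 9.0}
    cap = {"sw1": 10.0, "sw2": 10.0, "sw3": 10.0}
    for cost in (1.0, 4.0, 4.0):
        print(place_task(load, cap, cost), load)
```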

110 citations


Journal ArticleDOI
TL;DR: A new AI-enabled smart edge architecture for heterogeneous IoT that combines edge computing, caching, and communication is proposed, together with the Smart-Edge-CoCaCo algorithm, whose computation delay is lower than that of the traditional cloud computing model as computing task data and the number of concurrent users increase.
Abstract: The development of mobile communication technology, hardware, distributed computing, and artificial intelligence (AI) technology has promoted the application of edge computing in the field of heterogeneous IoT in order to overcome the defects of the traditional cloud computing model in the era of big data. In this article, we first propose a new AI-enabled smart edge with heterogeneous IoT architecture that combines edge computing, caching, and communication. Then we propose the Smart-Edge-CoCaCo algorithm. To minimize total delay and confirm the computation offloading decision, Smart-Edge-CoCaCo uses joint optimization of the wireless communication model, the collaborative filter caching model in the edge cloud, and the computation offloading model. Finally, we built an emotion interaction testbed to perform computational delay experiments in real environments. The experiment results show that the computation delay of the Smart-Edge-CoCaCo algorithm is lower than that of the traditional cloud computing model as the computing task data and the number of concurrent users increase.
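The core of such an offloading decision is a delay comparison between local, edge, and cloud execution, with edge caching removing the upload cost. The sketch below illustrates that comparison under a very simplified transmission-plus-computation model; all numbers and the model itself are assumptions, not the paper's Smart-Edge-CoCaCo formulation.

```python
# Sketch of a delay-driven computation-offloading decision: run locally, at
# the edge, or in the cloud, whichever minimizes total delay. A cache hit at
# the edge removes the upload cost. Numbers and model are assumptions.

def total_delay(data_bits, cycles, uplink_bps, cpu_hz, cached=False):
    tx = 0.0 if cached else data_bits / uplink_bps
    return tx + cycles / cpu_hz

def offload_decision(data_bits, cycles, *, local_hz, edge_hz, cloud_hz,
                     edge_bps, cloud_bps, edge_cache_hit=False):
    options = {
        "local": total_delay(0, cycles, 1.0, local_hz, cached=True),
        "edge": total_delay(data_bits, cycles, edge_bps, edge_hz,
                            cached=edge_cache_hit),
        "cloud": total_delay(data_bits, cycles, cloud_bps, cloud_hz),
    }
    choice = min(options, key=options.get)
    return choice, options

if __name__ == "__main__":
    # 2 MB frame, 1e9 CPU cycles of processing.
    print(offload_decision(2 * 8e6, 1e9,
                           local_hz=1e9, edge_hz=8e9, cloud_hz=32e9,
                           edge_bps=50e6, cloud_bps=10e6,
                           edge_cache_hit=False))
```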

83 citations


Proceedings ArticleDOI
01 Apr 2019
TL;DR: This paper presents the design and evaluation of a latency-aware edge computing platform, aiming to minimize the end-to-end latency for edge applications, built on Apache Storm and featuring an orchestration framework that breaks down an edge application into Storm tasks as defined by a directed acyclic graph (DAG).
Abstract: Running computer vision algorithms on images or videos collected by mobile devices represents a new class of latency-sensitive applications that expect to benefit from edge cloud computing. These applications often demand real-time responses (e.g., <100 ms), which cannot be satisfied by traditional cloud computing. However, the edge cloud architecture is inherently distributed and heterogeneous, requiring new approaches to resource allocation and orchestration. This paper presents the design and evaluation of a latency-aware edge computing platform, aiming to minimize the end-to-end latency for edge applications. The proposed platform is built on Apache Storm, and consists of multiple edge servers with heterogeneous computation (including both GPUs and CPUs) and networking resources. Central to our platform is an orchestration framework that breaks down an edge application into Storm tasks as defined by a directed acyclic graph (DAG) and then maps these tasks onto heterogeneous edge servers for efficient execution. An experimental proof-of-concept testbed is used to demonstrate that the proposed platform can indeed achieve low end-to-end latency: considering a real-time 3D scene reconstruction application, it is shown that the testbed can support up to 30 concurrent streams with an average per-frame latency of 32 ms, and can achieve 40% latency reduction relative to the baseline Storm scheduling approach.
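The orchestration step maps a DAG of tasks onto heterogeneous servers. A minimal greedy illustration of that idea is sketched below: tasks are visited in topological order and each is placed on the server that finishes it earliest. The DAG, server speeds, and cost model are assumptions, not the paper's scheduler.

```python
# Sketch of mapping a DAG of stream-processing tasks onto heterogeneous edge
# servers by greedily minimizing each task's estimated finish time. The DAG,
# server speeds, and cost model are illustrative assumptions.
from graphlib import TopologicalSorter

# task -> set of tasks it depends on
dag = {"decode": set(), "detect": {"decode"}, "track": {"detect"},
       "render": {"track"}}
work = {"decode": 2.0, "detect": 8.0, "track": 3.0, "render": 1.0}  # Gcycles
servers = {"cpu-edge": 2.0, "gpu-edge": 10.0}  # effective Gcycles/s

server_free = {s: 0.0 for s in servers}   # time each server becomes idle
finish = {}                                # task -> finish time
placement = {}

for task in TopologicalSorter(dag).static_order():
    ready = max((finish[d] for d in dag[task]), default=0.0)
    # Pick the server that finishes this task earliest.
    best = min(servers, key=lambda s: max(ready, server_free[s])
               + work[task] / servers[s])
    start = max(ready, server_free[best])
    finish[task] = start + work[task] / servers[best]
    server_free[best] = finish[task]
    placement[task] = best

print(placement)
print("end-to-end latency:", round(max(finish.values()), 2), "s")
```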

67 citations


Book ChapterDOI
08 May 2019
TL;DR: Chameleon as discussed by the authors is a deeply reconfigurable, bare-metal testbed designed to cover a wide range of Computer Science research and education needs; it is organized as a set of research or education projects, each providing resources for a number of users.
Abstract: Computer Science experiments, whether they comprise the development of new system tools and algorithms or performance evaluation of new hardware, typically require direct access to resources that support deep reconfigurability, the ability to work in an isolated environment, as well as experimenting at scale and on up-to-date hardware. The original Chameleon system was designed to cover a wide range of community needs. Networks continue to evolve, and the network fabric is as much a part of the research focus of Chameleon as the compute or storage. Appliances, i.e., bare metal images, are the main equivalent of “programming tools” in Chameleon. The Chameleon team has developed and supports system appliances to provide access to the variety of hardware on Chameleon and to match the demands of our user community. Chameleon is organized as a set of research or education projects, each providing resources for a number of users. The word “testbed” is thus typically associated with special access resources.

67 citations


Journal ArticleDOI
TL;DR: X60 is introduced, the first SDR-based testbed for 60 GHz WLANs, featuring fully programmable MAC/PHY/Network layers, multi-Gbps rates, and a user-configurable 12-element phased antenna array, and it is found that a one-to-one MCS to SNR mapping is hard to obtain in typical indoor environments.

62 citations


Posted Content
TL;DR: The F1/10 testbed carries a full suite of sensors and perception, planning, control, and networking software stacks that are similar to full-scale solutions, and the paper demonstrates how the platform can be used to augment research and education in autonomous systems, making autonomy more accessible.
Abstract: In 2005 DARPA labeled the realization of viable autonomous vehicles (AVs) a grand challenge; a short time later the idea became a moonshot that could change the automotive industry. Today, the question of safety stands between reality and solved. Given the right platform the CPS community is poised to offer unique insights. However, testing the limits of safety and performance on real vehicles is costly and hazardous. The use of such vehicles is also outside the reach of most researchers and students. In this paper, we present F1/10: an open-source, affordable, and high-performance 1/10 scale autonomous vehicle testbed. The F1/10 testbed carries a full suite of sensors, perception, planning, control, and networking software stacks that are similar to full scale solutions. We demonstrate key examples of the research enabled by the F1/10 testbed, and how the platform can be used to augment research and education in autonomous systems, making autonomy more accessible.

57 citations


Journal ArticleDOI
TL;DR: The potential role of machine learning in the link-to-link aspect of the communication systems is discussed, and aspects of the specific neural-network-based reinforcement learning algorithm formation and on-orbit testing are discussed.
Abstract: The National Aeronautics and Space Administration (NASA) is in the midst of defining and developing the future space and ground architecture for the coming decades to return science and exploration discovery data back to investigators on Earth. Optimizing the data return from these missions requires planning, design, standards, and operations coordinated from formulation and development throughout the mission. The use of automation enhanced by cognition and machine learning is a potential method for optimizing data return, reducing the costs of operations, and helping manage the complexity of automated systems. In this article, we discuss the potential role of machine learning in the link-to-link aspect of the communication systems. An experiment using NASA's Space Communication and Navigation Testbed onboard the International Space Station and the ground station located at NASA John H. Glenn Research Center demonstrates for the first time the benefits and challenges of applying machine learning to space links in the actual flight environment. The experiment used machine learning decisions to configure a space link from the ISS-based testbed to the ground station to achieve multiple objectives related to data throughput, bandwidth, and power. Aspects of the specific neural-network-based reinforcement learning algorithm formation and on-orbit testing are discussed.

Proceedings ArticleDOI
02 Jul 2019
TL;DR: By using X-LSTM to predict future usage, a slice broker is better able to provision a slice and reduce over-provisioning and SLA violation costs by more than 10% in comparison to LSTM and ARIMA.
Abstract: Network slicing will allow 5G network operators to offer a diverse set of services over a shared physical infrastructure. We focus on supporting the operation of the Radio Access Network (RAN) slice broker, which maps slice requirements into allocation of Physical Resource Blocks (PRBs). We first develop a new metric, REVA, based on the number of PRBs available to a single Very Active bearer. REVA is independent of channel conditions and allows easy derivation of an individual wireless link's throughput. In order for the slice broker to efficiently utilize the RAN, there is a need for reliable and short-term prediction of resource usage by a slice. To support such prediction, we construct an LTE testbed and develop custom additions to the scheduler. Using data collected from the testbed, we compute REVA and develop a realistic time series prediction model for REVA. Specifically, we present the X-LSTM prediction model, based upon Long Short-Term Memory (LSTM) neural networks. Evaluated with data collected in the testbed, X-LSTM outperforms the autoregressive integrated moving average (ARIMA) model and LSTM neural networks by up to 31%. X-LSTM also achieves over 91% accuracy in predicting REVA. By using X-LSTM to predict future usage, a slice broker is better able to provision a slice and reduce over-provisioning and SLA violation costs by more than 10% in comparison to LSTM and ARIMA.
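For context on the kind of model involved, the sketch below trains a plain LSTM baseline for one-step-ahead prediction of a REVA-like resource-usage series in PyTorch. It is a generic baseline rather than the authors' X-LSTM, and the synthetic data, window length, and hyperparameters are assumptions.

```python
# Plain LSTM baseline for one-step-ahead prediction of a REVA-like resource
# usage series (PyTorch). Generic sketch, not the paper's X-LSTM.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "PRBs available to a very active bearer" series with a periodic load pattern.
t = torch.arange(0, 2000, dtype=torch.float32)
series = 50 + 20 * torch.sin(2 * torch.pi * t / 96) + 3 * torch.randn_like(t)

WINDOW = 32
X = torch.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
y = series[WINDOW:]
X = X.unsqueeze(-1)  # (samples, window, 1 feature)

class Predictor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)  # predict the next value

model = Predictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):  # short demo run
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch} mse {loss.item():.2f}")
```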

Journal ArticleDOI
TL;DR: This paper presents a distributed MAC framework assisted by machine learning for the Heterogeneous IoT system, where the IoT devices coexist with the WiFi users in the unlicensed industrial, scientific, and medical spectrum.
Abstract: Nowadays, an Internet-of-Things (IoT) connected system brings a tremendous paradigm shift into medium access control (MAC) design. In this paper, we present a distributed MAC framework assisted by machine learning for the heterogeneous IoT system, where the IoT devices coexist with the WiFi users in the unlicensed industrial, scientific, and medical (ISM) spectrum. Specifically, the superframe is divided into two phases: a rendezvous phase and a transmission phase. During the rendezvous phase, the gateway, which is capable of machine learning, predicts the number of WiFi users and IoT devices by performing a triangular handshake on the primary channel. The prediction takes advantage of a deep neural network (DNN) model which is pretrained offline on our universal software radio peripheral (USRP2) testbed. The gateway allocates the frequency channels to the WiFi and IoT systems based on the inference results. Then, the IoT devices and WiFi users initiate data transmissions during the transmission phase. Furthermore, system throughput is analyzed and optimized in two typical scenarios, respectively. An optimized MAC framework is proposed to maximize the total system throughput by finding the key design parameters. Analytical and simulation results, obtained using ns-2, demonstrate the effectiveness of the proposed MAC framework.

Journal ArticleDOI
TL;DR: Numerical results show that near-optimal RWA can be obtained with the ML approach, while reducing computational time up to 93% in comparison to a traditional optimization approach based on integer linear programming.
Abstract: Recently, machine learning (ML) has attracted the attention of both researchers and practitioners to address several issues in the optical networking field. This trend has been mainly driven by the huge amount of available data (i.e., signal quality indicators, network alarms, etc.) and by the large number of optimization parameters that characterize current optical networks (such as modulation format, lightpath routes, transport wavelength, etc.). In this paper, we leverage techniques from the ML discipline to efficiently accomplish the routing and wavelength assignment (RWA) for an input traffic matrix in an optical WDM network. Numerical results show that near-optimal RWA can be obtained with our approach, while reducing computational time by up to 93% in comparison to a traditional optimization approach based on integer linear programming. Moreover, to further demonstrate the effectiveness of our approach, we deployed the ML classifier into an ONOS-based software-defined optical network laboratory testbed, where we evaluate the performance of the overall RWA process in terms of computational time.
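One way to read the approach is as supervised classification: traffic-matrix features map to a precomputed (route, wavelength) choice obtained offline from an exact solver. The sketch below illustrates that framing with random stand-in data and a random forest; the data, labels, and model choice are assumptions, not the paper's classifier.

```python
# Sketch of casting RWA as supervised classification: features derived from a
# traffic matrix map to the index of a precomputed (route, wavelength) choice
# obtained offline from an ILP solver. Data and model choice are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_nodes, n_labels = 2000, 6, 8

# Flattened traffic matrices as features; labels stand in for ILP solutions.
X = rng.random((n_samples, n_nodes * n_nodes))
y = (X.sum(axis=1) * n_labels).astype(int) % n_labels  # synthetic ground truth

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```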

Journal ArticleDOI
TL;DR: The outcomes show better performance by the proposed system in terms of resource efficiency, agility, and scalability over the traditional IoT surveillance systems and state-of-the-art (SoA) approaches.
Abstract: In this paper, we design and implement a distributed Internet of Things (IoT) framework called IoT-guard, for an intelligent, resource-efficient, and real-time security management system. The system, consisting of edge-fog computational layers, will aid in crime prevention and predict crime events in a smart home environment (SHE). The IoT-guard will detect and confirm crime events in real-time, using Artificial Intelligence (AI) and an event-driven approach to send crime data to protective services and police units enabling immediate action while conserving resources, such as energy, bandwidth (BW), and memory and Central Processing Unit (CPU) usage. In this study, we implement an IoT-guard laboratory testbed prototype and perform evaluations on its efficiency for real-time security application. The outcomes show better performance by the proposed system in terms of resource efficiency, agility, and scalability over the traditional IoT surveillance systems and state-of-the-art (SoA) approaches.

Proceedings ArticleDOI
24 Jun 2019
TL;DR: This paper proposes an approach that emulates such infrastructures in the cloud, where developers can freely design emulated fog infrastructures, configure their performance characteristics, and inject failures at runtime to evaluate their application in various deployments and failure scenarios.
Abstract: Fog computing is an emerging computing paradigm that uses processing and storage capabilities located at the edge, in the cloud, and possibly in between. Testing fog applications, however, is hard since runtime infrastructures will typically be in use or may not exist, yet. In this paper, we propose an approach that emulates such infrastructures in the cloud. Developers can freely design emulated fog infrastructures, configure their performance characteristics, and inject failures at runtime to evaluate their application in various deployments and failure scenarios. We also present our proof-of-concept implementation MockFog and show that application performance is comparable when running on MockFog or a small fog infrastructure testbed.
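To make "configure performance characteristics and inject failures" concrete, the generic sketch below describes an emulated fog node and derives the Linux tc/netem command that would enforce its link behaviour or a crash-like failure. This mirrors the idea behind MockFog but is not its API; the class, field names, and defaults are assumptions.

```python
# Generic sketch of an emulated fog node descriptor plus the Linux
# traffic-shaping command that would enforce its link characteristics or an
# injected failure. Not MockFog's API; names and defaults are assumptions.
from dataclasses import dataclass

@dataclass
class EmulatedNode:
    name: str
    cpu_cores: int
    mem_mb: int
    delay_ms: int = 0
    loss_pct: float = 0.0
    iface: str = "eth0"

    def netem_cmd(self):
        """tc/netem invocation that applies this node's link behaviour."""
        return (f"tc qdisc replace dev {self.iface} root netem "
                f"delay {self.delay_ms}ms loss {self.loss_pct}%")

    def inject_failure(self):
        """Model a crashed edge node as 100% packet loss."""
        self.loss_pct = 100.0
        return self.netem_cmd()

edge = EmulatedNode("edge-1", cpu_cores=2, mem_mb=1024, delay_ms=25, loss_pct=0.5)
print(edge.netem_cmd())
print(edge.inject_failure())
```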

Journal ArticleDOI
01 Sep 2019-Heliyon
TL;DR: The results of the operational tests and the cost comparison of the in-house designed and developed Web-SCADA system prove its reliability and low cost (on average 86% cheaper than a standard brand-name solution) for controlling, monitoring, and logging data, as well as for local and remote operation of the system, when applied to the HRES microgrid testbed.

Journal ArticleDOI
TL;DR: Measurements show that enabling NDN with edge computing is a promising approach to reduce latency and backbone network traffic, and that it is capable of processing large amounts of data quickly and delivering the results to users in real time.
Abstract: Named data networking (NDN) and edge cloud computing (ECC) are emerging technologies that are considered among the most representative technologies for the future Internet. Both are promising enablers for future networks, such as fifth generation (5G) and beyond, which require fast information response times. We believe that clear benefits can be achieved from the interplay of NDN and ECC, making future network technology much more flexible, secure, and efficient. In this paper, therefore, we integrate NDN with ECC in order to achieve fast information response times. Our framework is based on an N-tier architecture and comprises three main tiers. NDN is located at Tier 1 (things/end devices) and comprises all the basic functionality that connects Internet of Things (IoT) devices with Tier 2 (edge computing), where we have deployed our edge node application. Tier 2 is then further connected with Tier 3 (cloud computing), where our cloud node application is deployed at multiple hops on a Microsoft Azure cloud machine located in Virginia, WA, USA. We implement an NDN-based ECC framework and the outcomes are evaluated through testbed experiments and simulations in terms of interest aggregation, round trip time (RTT), and service lookup time, both for single-query lookups and under various traffic loads (load-based lookup time) from the IoT devices. Our measurements show that enabling NDN with edge computing is a promising approach to reduce latency and backbone network traffic, and is capable of processing large amounts of data quickly and delivering the results to the users in real time.

Journal ArticleDOI
TL;DR: This paper presents a framework for a context-aware intrusion detection of a widely deployed Building Automation and Control network, and develops runtime models for service interactions and functionality patterns by modeling the heterogeneous information that is continuously acquired from building assets into a novel BAS context aware data structure.

Journal ArticleDOI
TL;DR: A flexible and low-cost testbed for D-MIMO systems, where a sigma–delta over fiber (SDoF) solution is proposed to address the fundamental challenge of phase-coherent transmission between multiple distributed access points (APs) from a central unit (CU).
Abstract: Distributed multiple-input-multiple-output (D-MIMO) is a very promising technique to improve capacity and quality of service for emerging 5G systems. In this paper, we propose a flexible and low-cost testbed for D-MIMO systems, where a sigma–delta over fiber (SDoF) solution is proposed to address the fundamental challenge of phase-coherent transmission between multiple distributed access points (APs) from a central unit (CU). Employing low-cost standardized optical data interconnects, SDoF brings very flexible, software-controlled transmission of RF signals over optical fiber. An SDoF-based D-MIMO testbed with a CU feeding 12 distributed APs has been realized and experimentally evaluated. Initial experiments of D-MIMO and conventional colocated MIMO systems at 2.365 GHz have been performed to demonstrate the flexibility and potential of the proposed testbed as a new powerful tool in the design and analysis of emerging communication system concepts.
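The core idea behind sigma-delta over fiber is that a sigma-delta modulator turns an RF waveform into a 1-bit stream that a standard binary optical interconnect can carry, with the analog signal recovered by band-pass filtering at the AP. The sketch below shows a textbook first-order sigma-delta modulator and a crude reconstruction; the oversampling ratio, test tone, and filter are assumptions, not the testbed's actual modulator design.

```python
# First-order sigma-delta modulation of an RF-like tone into a 1-bit stream,
# then crude low-pass reconstruction. Illustrative of the SDoF principle only.
import numpy as np

def sigma_delta_1bit(x):
    """First-order sigma-delta modulation of a signal scaled to [-1, 1]."""
    integrator, out = 0.0, np.empty_like(x)
    for n, sample in enumerate(x):
        integrator += sample - (out[n - 1] if n else 0.0)
        out[n] = 1.0 if integrator >= 0 else -1.0
    return out

fs = 200e6                               # oversampled 1-bit rate
t = np.arange(20000) / fs
x = 0.5 * np.sin(2 * np.pi * 2e6 * t)    # 2 MHz test tone
bits = sigma_delta_1bit(x)

# Crude reconstruction: moving-average low-pass of the 1-bit stream.
recovered = np.convolve(bits, np.ones(25) / 25, mode="same")
rms_err = float(np.sqrt(np.mean((recovered - x) ** 2)))
print("reconstruction RMS error:", round(rms_err, 3))
```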

Journal ArticleDOI
TL;DR: The developed testbed is envisioned as a tool to help smart grid researchers with the study of relevant research problems such as assessing power system resilience against cyber attacks and threats, and verifying the performance of cyber-enabled control schemes.

Proceedings ArticleDOI
22 Feb 2019
TL;DR: This article elaborates the design of the miniature robotic car, the Cambridge Minicar, as well as the fleet’s control architecture, and introduces a unique experimental testbed that consists of a fleet of 16 miniature Ackermann-steering vehicles.
Abstract: We introduce a unique experimental testbed that consists of a fleet of 16 miniature Ackermann-steering vehicles. We are motivated by a lack of available low-cost platforms to support research and education in multi-car navigation and trajectory planning. This article elaborates the design of our miniature robotic car, the Cambridge Minicar, as well as the fleet’s control architecture. Our experimental testbed allows us to implement state-of-the-art driver models as well as autonomous control strategies, and test their validity in a real, physical multi-lane setup. Through experiments on our miniature highway, we are able to tangibly demonstrate the benefits of cooperative driving on multi-lane road topographies. Our setup paves the way for indoor large-fleet experimental research.
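As a flavor of the "state-of-the-art driver models" such a miniature fleet can run on each vehicle, the sketch below implements the Intelligent Driver Model (IDM), a widely used car-following model. Whether the paper uses IDM specifically is not stated here, and the parameter values are rough, small-scale assumptions.

```python
# Intelligent Driver Model (IDM) car-following sketch, the kind of driver
# model a miniature multi-car testbed can execute per vehicle. Parameters are
# illustrative assumptions scaled roughly for a small platform.
import math

def idm_acceleration(v, v_lead, gap, *, v0=0.6, T=1.0, a_max=0.3, b=0.4,
                     s0=0.1, delta=4):
    """IDM acceleration for a follower.

    v      -- follower speed (m/s)
    v_lead -- leader speed (m/s)
    gap    -- bumper-to-bumper distance to the leader (m)
    """
    dv = v - v_lead
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a_max * b)))
    return a_max * (1 - (v / v0) ** delta - (s_star / gap) ** 2)

# Closing in on a slower leader -> strong deceleration; large gap -> speed up.
print(round(idm_acceleration(0.5, 0.3, 0.3), 3))
print(round(idm_acceleration(0.2, 0.2, 2.0), 3))
```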

Journal ArticleDOI
TL;DR: This paper presents a feature and empirical comparison of several open source CoAP implementations and also analyzes the security libraries they use and analyzes CoAP libraries’ performance in terms of latency, memory and CPU consumption in a real testbed deployed in an industrial scenario, in order to help in adopting a decision criterion for similar deployments.
Abstract: Over the last few years, the Internet of Things (IoT) has grown in protocols, implementations and use cases. In terms of communication protocols, the Constrained Application Protocol (CoAP) prevails among the rest, such as MQ Telemetry Transport (MQTT) or the Advanced Message Queuing Protocol (AMQP). This protocol is lightweight, capable of running on resource-constrained devices and networks, and can be secured using Datagram Transport Layer Security (DTLS). Having a secure channel of communication is important in IoT environments, since IoT devices affect the physical world and exchange personal private data. There exist many implementations of CoAP, each with its own particular features and requirements. Therefore, it is important to choose the CoAP implementation that best suits the specific requirements of each application. This paper presents a feature and empirical comparison of several open source CoAP implementations and also analyzes the security libraries they use. First of all, it surveys current CoAP implementations, and compares them in terms of built-in core, extensions, target platform, programming language and interoperability. Then, a theoretical analysis of the security libraries is provided. Finally, it analyzes CoAP libraries' performance in terms of latency, memory and CPU consumption in a real testbed deployed in an industrial scenario, in order to help in adopting a decision criterion for similar deployments.
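As an illustration of the kind of latency measurement such a comparison relies on, the sketch below times repeated CoAP GET requests from a Python client, here assuming the aiocoap library on the client side (any CoAP stack could sit on either end); the target URI is a placeholder documentation address.

```python
# Sketch of measuring CoAP request latency with the aiocoap client library
# (an assumption; the paper compares several implementations). The target
# URI below is a hypothetical example address.
import asyncio
import statistics
import time

from aiocoap import Context, Message, GET

async def measure(uri, repetitions=50):
    ctx = await Context.create_client_context()
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        response = await ctx.request(Message(code=GET, uri=uri)).response
        samples.append((time.perf_counter() - start) * 1000.0)  # ms
    return response.code, statistics.mean(samples), statistics.stdev(samples)

if __name__ == "__main__":
    code, mean_ms, std_ms = asyncio.run(
        measure("coap://192.0.2.10/sensors/temperature"))  # example address
    print(f"last code={code}, latency={mean_ms:.1f} +/- {std_ms:.1f} ms")
```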

Proceedings ArticleDOI
01 Feb 2019
TL;DR: The design of ISAAC, the Idaho CPS Smart Grid Cybersecurity Testbed, is introduced, which emulates a realistic power utility and provides researchers with the tools needed to develop and test integrated cybersecurity solutions.
Abstract: The landscape of cyber and other threats to Cyber Physical Systems (CPS), such as the Power Grid, is growing rapidly. Realistic and reconfigurable testbeds are needed to be able to develop, test, improve, and deploy practical cybersecurity solutions for CPS. We introduce the design of ISAAC, the Idaho CPS Smart Grid Cybersecurity Testbed. ISAAC is a cross-domain, distributed, and reconfigurable testbed, which emulates a realistic power utility and provides researchers with the tools needed to develop and test integrated cybersecurity solutions. Some components of ISAAC are fully functional, with ongoing research projects utilizing the functional components. When fully developed, the capabilities of ISAAC will include: 1) Multiple emulated power utility substations and control networks; 2) Emulating wide-area power transmission and distribution systems, 3) Emulated SCADA control centers, 4) Advanced visualization and cyber-analytics, including machine learning. ISAAC will enable the development, testing, evaluation, and validation of holistic cyber-physical security approaches for cyber physical systems and the Smart Grid. We hope that our endeavor, ISAAC, will help further the boundaries of CPS research and education.

Proceedings ArticleDOI
16 Apr 2019
TL;DR: The essence of the approach is to adopt local (node-level) scheduling through a time window allocation among the nodes that allows each node to schedule its transmissions using a real-time scheduling policy locally and online.
Abstract: Industry 4.0 is a new industry trend which relies on a data-driven business model to set the productivity requirements of the cyber physical system. To meet these requirements, Industry 4.0 cyber physical systems need to be highly scalable, adaptive, real-time, and reliable. Recent successful industrial wireless standards such as WirelessHART have emerged as a feasible approach for such cyber physical systems. For reliable and real-time communication in highly unreliable environments, they adopt a high degree of redundancy. While a high degree of redundancy is crucial to real-time control, it causes a huge waste of energy, bandwidth, and time under a centralized approach, and is therefore less suitable for scalability and handling network dynamics. To address these challenges, we propose DistributedHART, a distributed real-time scheduling system for WirelessHART networks. The essence of our approach is to adopt local (node-level) scheduling through a time window allocation among the nodes that allows each node to schedule its transmissions using a real-time scheduling policy locally and online. DistributedHART obviates the need to create and disseminate a central global schedule, thereby significantly reducing resource usage and enhancing scalability. To our knowledge, it is the first distributed real-time multi-channel scheduler for WirelessHART. We have implemented DistributedHART and experimented on a 130-node testbed. Our testbed experiments as well as simulations show at least 85% less energy consumption in DistributedHART compared to the existing centralized approach while ensuring similar schedulability.
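The local, window-based scheduling idea can be illustrated with a tiny sketch: within its allocated time window, a node orders pending transmissions by earliest deadline and sends as many as fit, one per slot. The slot size, window layout, and task fields below are assumptions, not the DistributedHART protocol itself.

```python
# Sketch of node-local EDF scheduling inside an allocated time window,
# loosely inspired by the window-based idea. Details are assumptions.

SLOT_MS = 10  # one transmission per 10 ms slot, a common WirelessHART value

def schedule_window(pending, window_start_ms, window_slots):
    """Return (sent, deferred) after EDF scheduling inside one window.

    pending -- list of (flow_id, absolute_deadline_ms)
    """
    sent, deferred = [], []
    slot_time = window_start_ms
    for flow_id, deadline in sorted(pending, key=lambda p: p[1]):
        if len(sent) < window_slots and slot_time + SLOT_MS <= deadline:
            sent.append(flow_id)
            slot_time += SLOT_MS
        else:
            deferred.append((flow_id, deadline))
    return sent, deferred

if __name__ == "__main__":
    queue = [("f3", 95), ("f1", 40), ("f2", 120)]
    print(schedule_window(queue, window_start_ms=20, window_slots=2))
```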

Proceedings ArticleDOI
15 Mar 2019
TL;DR: BlueFlood adapts concurrent transmissions, as introduced by Glossy, to Bluetooth, and achieves 99% end-to-end delivery ratio in multi-hop Bluetooth networks with a duty cycle of 0.13% for 1-second intervals.
Abstract: Bluetooth is an omnipresent communication technology, available on billions of connected devices today. While it has been traditionally limited to peer-to-peer and star network topology, the recent Bluetooth 5 standard introduces new operating modes to allow for increased reliability and Bluetooth Mesh supports multi-hop networking based on message flooding. In this paper, we present BlueFlood. It adapts concurrent transmissions, as introduced by Glossy, to Bluetooth. The result is fast and efficient network-wide data dissemination in multi-hop Bluetooth networks. Moreover, we show that BlueFlood floods can be reliably received by off-the-shelf Bluetooth devices such as smartphones, opening new applications of concurrent transmissions and a seamless integration with existing technologies. We present an in-depth experimental feasibility study of concurrent transmissions over Bluetooth PHY in a controlled environment. Further, we build a small-scale testbed where we evaluate BlueFlood in real-world settings of a residential environment. We show that BlueFlood achieves 99% end-to-end delivery ratio in multi-hop networks with a duty cycle of 0.13% for 1-second intervals.

Journal ArticleDOI
TL;DR: An analysis of the adaptive data rate (ADR) scheme through a performance evaluation of a permanent outdoor LoRaWAN testbed shows that the scheme's assignments exhibit signs of oscillation, with nodes being instructed to abruptly change between SFs.
Abstract: Low-power wide-area network (LPWAN) protocols, such as the long range wide area network (LoRaWAN), are key to ensuring scalable wireless communication for Internet of Things devices. In this paper, an analysis of this protocol through a performance evaluation of a permanent outdoor LoRaWAN testbed is presented. To ensure accurate results, tests lasted at least 17 h and required 1000 packets per node. The evaluation focused on the impact that the adaptive data rate (ADR) scheme, payload length, link checks, and acknowledgements had on the packet delivery ratio (PDR) of the testbed. The collected data showed that enabling the ADR scheme reduced the PDR. The ADR scheme had six data rates, each consisting of a spreading factor and bandwidth combination, to choose from. Analysis revealed that the scheme primarily assigned either the fastest data rate (SF7BW250) or the slowest (SF12BW125) to nodes, regardless of distance. Furthermore, the scheme's assignments showed signs of oscillation, with nodes being instructed to abruptly change between SFs. The impact of payload length and link checks on the PDR was not pronounced, but enabling acknowledgements did show significant improvements.
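The gap between the two data rates the ADR scheme tended to pick is large: time-on-air grows steeply with spreading factor and shrinks with bandwidth. The sketch below computes LoRa time-on-air for SF7BW250 and SF12BW125 using the standard Semtech airtime formula; the payload length, coding rate, and preamble length are illustrative choices.

```python
# LoRa time-on-air for the data rates discussed in the paper (SF7BW250 vs
# SF12BW125), following the standard Semtech airtime formula. Payload length,
# coding rate, and preamble length are illustrative choices.
import math

def lora_time_on_air_ms(payload_bytes, sf, bw_hz, cr=1, preamble=8,
                        explicit_header=True, crc=True):
    t_sym = (2 ** sf) / bw_hz * 1000.0                 # symbol time (ms)
    de = 1 if (bw_hz == 125_000 and sf >= 11) else 0   # low data rate optimize
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    payload_symbols = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25) * t_sym + payload_symbols * t_sym

if __name__ == "__main__":
    for label, sf, bw in [("SF7BW250", 7, 250_000), ("SF12BW125", 12, 125_000)]:
        print(label, round(lora_time_on_air_ms(20, sf, bw), 1), "ms")
```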

Journal ArticleDOI
TL;DR: The results show that the performance of the HIL testbed matches that of the actual testing vehicles very well, with about 1% error, and two new testing capabilities have been demonstrated through two CAV applications.
Abstract: Connected and autonomous vehicle (CAV) applications focusing on energy optimization have attracted a lot of attention recently. However, it is challenging to evaluate various optimizations and controls in real-world traffic scenarios due to safety and technical concerns. In light of this, our previous work has developed a hardware-in-the-loop (HIL) testbed with a laboratory powertrain research platform to evaluate CAV applications. An actual engine is loaded by a hydrostatic dynamometer whose loading torque is controlled in real-time by the simulated vehicle dynamics. The HIL testbed mimics the performance of a target vehicle and the dynamometer generates the same load as the target vehicle. In this work, the HIL testbed is further enhanced to match the performance of actual testing vehicles at the Federal Highway Administration (FHWA) and a living lab is developed to incorporate real traffic information. The same engine as in the actual testing vehicles at the FHWA was installed and the vehicle models were calibrated using testing data from actual vehicles. The same roadway conditions (speed limit, the degree of road slope, etc.) were input into the testbed, and the dynamometer generates the same load as an actual vehicle's engine would experience. The HIL testbed emulates the performance of an actual vehicle and both fuel consumption and emissions are measured by precise instruments in the lab. In eight testing scenarios, the results have shown that the performance of the HIL testbed matches that of the actual testing vehicles very well, with about 1% error. In addition, the living lab has enabled the HIL testbed to interact with real traffic and extended the testing capabilities of the HIL testbed. Two new testing capabilities have been demonstrated through two CAV applications. This is an exciting result since the HIL testbed could provide an effective and economical way for the testing of fuel consumption and emissions on various roadway conditions for CAVs and other types of vehicles.
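In such a HIL setup, the dynamometer load comes from simulated vehicle dynamics: the road-load force on the target vehicle is converted into an equivalent torque at the engine. The sketch below shows that calculation under a textbook road-load model; the vehicle parameters and driveline assumptions are illustrative, not the FHWA test vehicle's values.

```python
# Sketch of deriving the dynamometer load from simulated vehicle dynamics:
# road-load force converted to an equivalent engine torque. Parameters are
# illustrative assumptions.
import math

def engine_load_torque_nm(speed_mps, accel_mps2, grade_rad, *,
                          mass_kg=1600.0, c_rr=0.010, c_d=0.30,
                          frontal_area_m2=2.2, rho_air=1.2,
                          wheel_radius_m=0.32, overall_ratio=8.0,
                          driveline_eff=0.92):
    """Torque the dynamometer must apply so the engine 'sees' the target vehicle."""
    f_inertia = mass_kg * accel_mps2
    f_grade = mass_kg * 9.81 * math.sin(grade_rad)
    f_rolling = c_rr * mass_kg * 9.81 * math.cos(grade_rad)
    f_aero = 0.5 * rho_air * c_d * frontal_area_m2 * speed_mps ** 2
    wheel_torque = (f_inertia + f_grade + f_rolling + f_aero) * wheel_radius_m
    return wheel_torque / (overall_ratio * driveline_eff)

if __name__ == "__main__":
    # Cruising at 25 m/s (~56 mph) on a 2% grade with mild acceleration.
    print(round(engine_load_torque_nm(25.0, 0.2, math.atan(0.02)), 1), "N·m")
```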

Journal ArticleDOI
Wooseong Kim
02 May 2019-Sensors
TL;DR: This study investigated the applicability of WiGig technology to GiV2V communications through experiments on a real vehicular testbed and demonstrated that the instantaneous throughput was sufficient to exchange large amounts of data between moving vehicles in different road environments.
Abstract: Millimeter wave (mmWave) vehicle-to-vehicle (V2V) communication has received significant attention as one of the key applications in 5G technology, and is called Giga-V2V (GiV2V). The ultra-wide band of GiV2V allows vehicles to transfer gigabit data within a few seconds, which can enable platooning of autonomous vehicles. The platooning process requires the rich data of a 4K dash-camera and LiDAR sensors for accurate vehicle control. To achieve this, 3GPP, the global standards organization that provides specifications for 5G mobile technology, is developing a new standard for GiV2V technology by extending the existing specification for device-to-device (D2D) communication. Meanwhile, in the last decade, the mmWave spectrum has been used in wireless local area networks (WLAN) for indoor devices, such as home appliances, based on the IEEE 802.11ad (also known as Wireless Gigabit Alliance (WiGig)) technology, which supports gigabit wireless connectivity over approximately 10 m in the 60-GHz frequency spectrum. The WiGig technology has been commercialized and used for various applications ranging from Internet access points to set-top boxes for televisions. In this study, we investigated the applicability of WiGig technology to GiV2V communications through experiments on a real vehicular testbed. To achieve this, we built a testbed using commercial off-the-shelf WiGig devices and performed experiments to measure inter-vehicle connectivity on a campus and on city roads with different permitted vehicle speeds. The experimental results demonstrate that disconnections occurred frequently due to the short radio range and the connectivity varied with the vehicle speed. However, the instantaneous throughput was sufficient to exchange large amounts of data between moving vehicles in different road environments.