
Showing papers on "Emulation published in 2021"


Journal ArticleDOI
TL;DR: Results show RSIR outperforms Dijkstra's algorithm in terms of stretch, link throughput, packet loss, and delay when available bandwidth, delay, and loss are considered individually or jointly for the computation of optimal paths.
Abstract: Traditional routing protocols employ limited information to make routing decisions, which can lead to slow adaptation to traffic variability, as well as restricted support for the Quality of Service (QoS) requirements of applications. This article introduces a novel approach for routing in Software-Defined Networking (SDN), called Reinforcement Learning and Software-Defined Networking Intelligent Routing (RSIR). RSIR adds a Knowledge Plane to SDN and defines a routing algorithm based on Reinforcement Learning (RL) that takes link-state information into account to make routing decisions. This algorithm capitalizes on the interaction with the environment, the intelligence provided by RL, and the global view and control of the network furnished by SDN, to compute and install, in advance, optimal routes in the forwarding devices. RSIR was extensively evaluated by emulation using real traffic matrices. Results show RSIR outperforms Dijkstra's algorithm in terms of stretch, link throughput, packet loss, and delay when available bandwidth, delay, and loss are considered individually or jointly for the computation of optimal paths. The results demonstrate that RSIR is an attractive solution for intelligent routing in SDN.
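As a hedged toy sketch of the RL ingredient (not RSIR itself, which adds a Knowledge Plane and SDN-wide link-state collection), tabular Q-learning over (node, next-hop) pairs with a link metric as negative reward already converges to metric-optimal paths:

```python
# Minimal Q-learning routing sketch; topology, metrics and names are illustrative.
import random

links = {("s", "a"): 5.0, ("a", "d"): 5.0, ("s", "b"): 2.0, ("b", "d"): 2.0}  # delays
nbrs = {"s": ["a", "b"], "a": ["d"], "b": ["d"], "d": []}
Q = {(u, v): 0.0 for (u, v) in links}
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):                     # learning episodes toward destination d
    u = "s"
    while u != "d":
        v = (random.choice(nbrs[u]) if random.random() < eps
             else max(nbrs[u], key=lambda n: Q[(u, n)]))
        reward = -links[(u, v)]          # lower link delay -> higher reward
        future = max((Q[(v, n)] for n in nbrs[v]), default=0.0)
        Q[(u, v)] += alpha * (reward + gamma * future - Q[(u, v)])
        u = v

path, u = ["s"], "s"
while u != "d":                          # greedy extraction of the learned route
    u = max(nbrs[u], key=lambda n: Q[(u, n)])
    path.append(u)
print(path)                              # expected: ['s', 'b', 'd'] (lower delay)
```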

50 citations


Journal ArticleDOI
TL;DR: This article proposes a framework for efficient dispatching of stateless tasks to in-network executors so as to minimize the response times while exhibiting short- and long-term fairness, also leveraging information from a virtualized network infrastructure when available.
Abstract: Serverless computing is becoming widely adopted among cloud providers, thus making increasingly popular the Function-as-a-Service (FaaS) programming model, where the developers realize services by packaging sequences of stateless function calls. The current technologies are very well suited to data centers, but cannot provide equally good performance in decentralized environments, such as edge computing systems, which are expected to be typical for Internet of Things (IoT) applications. In this article, we fill this gap by proposing a framework for efficient dispatching of stateless tasks to in-network executors so as to minimize the response times while exhibiting short- and long-term fairness, also leveraging information from a virtualized network infrastructure when available. Our solution is shown to be simple enough to be installed on devices with limited computational capabilities, such as IoT gateways, especially when using a hierarchical forwarding extension. We evaluate the proposed platform by means of extensive emulation experiments with a prototype implementation in realistic conditions. The results show that it is able to smoothly adapt to the mobility of clients and to the variations of their service request patterns, while coping promptly with network congestion.
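A hedged toy sketch of the dispatching idea (the estimator and names are illustrative, not the paper's algorithm): each stateless task goes to the in-network executor with the lowest estimated response time, blending measured network RTT with current queue occupancy.

```python
# Least-estimated-response-time dispatcher; all figures are illustrative.
class Dispatcher:
    def __init__(self, executors):
        # executors: {name: measured network RTT in ms}; no tasks in flight yet
        self.rtt = dict(executors)
        self.inflight = {e: 0 for e in executors}

    def dispatch(self):
        # expected response = network RTT + assumed 10 ms service time per queued task
        est = lambda e: self.rtt[e] + 10.0 * self.inflight[e]
        e = min(self.rtt, key=est)
        self.inflight[e] += 1
        return e

    def complete(self, e):
        self.inflight[e] -= 1

d = Dispatcher({"gw-edge": 2.0, "fog-1": 8.0, "cloud": 30.0})
print([d.dispatch() for _ in range(6)])  # edge first, spilling to fog as it queues up
```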

49 citations


Journal ArticleDOI
TL;DR: Fifty-two applications of DT across distinct engineering domains are presented, each including detailed information, the state of the art, methodology, proposed approach development, experimental and/or emulation-based performance demonstration, and a concluding summary of the developed tool/technique along with its future scope.
Abstract: Digital transformation (DT) is the adoption of digital tools, techniques, approaches, and mechanisms to transform businesses, applications, and services, and to upgrade manual processes into automation. DT enables system efficacy via automation, innovation, and creativity. In the engineering domain, DT also means replacing manual and/or conventional processes with automation, in order to handle big-data problems efficiently and to harness static/dynamic system information without knowing the system parameters. DT presents both opportunities and challenges to developers and users in an organization, such as the development and adoption of new tools and techniques in systems and society across various applications (i.e., digital twins, cybersecurity, condition monitoring and fault detection & diagnosis (FDD), forecasting and prediction, intelligent data analytics, healthcare monitoring, feature extraction and selection, intelligent manufacturing and production, future cities, advanced construction, resilient infrastructure, greater sustainability, etc.). Additionally, owing to the high impact of advanced artificial intelligence, machine learning, and data analytics techniques, the benefits harnessed from DT are growing globally. The integration of DT into all areas therefore delivers value to users and developers alike. In this editorial, fifty-two applications of DT from distinct engineering domains are presented, each including detailed information, the state of the art, methodology, proposed approach development, experimental and/or emulation-based performance demonstration, and a concluding summary of the developed tool/technique along with its future scope.

46 citations


Journal ArticleDOI
TL;DR: This article surveys state-of-the-art techniques in the field of ac grid emulation from the perspective of multiple spatial scales and multiple time scales; future trends and conclusions are also provided.
Abstract: The high penetration of distributed generation and active loads has made the power electronic converter a vital component in the modern power grid, and the broad employment of grid-connected converters at various power levels is making the grid impedance and characteristics complicated. Consequently, in order to validate more advanced features such as the reliability and stability performance of grid-connected converters, there is an emerging need to emulate grid behaviors from more aspects. This article surveys the state-of-the-art techniques in the field of ac grid emulation from the perspective of multiple spatial scales and multiple time scales. Four major concepts used for grid emulation, each with its characteristic principles, are summarized: concept I (analog simulation with under-scaled components), concept II (grid characteristics in the real-time simulator), concept III (grid characteristics in the converter structure), and concept IV (grid characteristics in the converter controller). The practical implementation regarding the circuit topology and the power supply for the grid emulation system is also discussed. Finally, future trends and conclusions in the field of ac grid emulation are provided.

44 citations


Posted Content
TL;DR: In this article, the authors present Emukit, a highly adaptable Python toolkit for enriching decision making under uncertainty, allowing users to apply state-of-the-art methods including Bayesian optimization, multi-fidelity emulation, experimental design, Bayesian quadrature and sensitivity analysis.
Abstract: Decision making in uncertain scenarios is a ubiquitous challenge in real-world systems. Tools to deal with this challenge include simulations to gather information and statistical emulation to quantify uncertainty. The machine learning community has developed a number of methods to facilitate decision making, but so far they are scattered across multiple toolkits and generally rely on a fixed backend. In this paper, we present Emukit, a highly adaptable Python toolkit for enriching decision making under uncertainty. Emukit allows users to: (i) use state-of-the-art methods including Bayesian optimization, multi-fidelity emulation, experimental design, Bayesian quadrature and sensitivity analysis; (ii) easily prototype new decision making methods for new problems. Emukit is agnostic to the underlying modeling framework and enables users to use their own custom models. We show how Emukit can be used on three exemplary case studies.
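As a hedged illustration of the workflow described above, the sketch below wires a GPy model into an Emukit Bayesian-optimization loop; the module paths and call signatures follow Emukit's public examples and may differ across versions.

```python
# Minimal Emukit Bayesian-optimization loop on a toy 1-D objective.
import numpy as np
import GPy
from emukit.core import ParameterSpace, ContinuousParameter
from emukit.model_wrappers import GPyModelWrapper
from emukit.bayesian_optimization.loops import BayesianOptimizationLoop

def objective(x: np.ndarray) -> np.ndarray:
    # Toy stand-in for an expensive simulator (shape (n, 1) -> (n, 1)).
    return np.sin(3.0 * x) + x ** 2 - 0.7 * x

space = ParameterSpace([ContinuousParameter("x", -2.0, 2.0)])
x0 = np.random.uniform(-2.0, 2.0, (5, 1))                 # initial design
model = GPyModelWrapper(GPy.models.GPRegression(x0, objective(x0)))

loop = BayesianOptimizationLoop(space=space, model=model)
loop.run_loop(objective, 15)                              # 15 further evaluations
print(loop.get_results().minimum_location)                # best x found so far
```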

34 citations


Journal ArticleDOI
TL;DR: The objective of this article is to provide a clear presentation of the discretization of continuous-time sliding-mode controllers, also known in the Automatic Control literature as the emulation method, when the implicit (backward) Euler scheme is used.
Abstract: The objective of this article is to provide a clear presentation of the discretization of continuous-time sliding-mode controllers, also known in the Automatic Control literature as the emulation method, when the implicit (backward) Euler scheme is used. First-order, second-order and homogeneous controllers are considered. The main theoretical results are recalled in each case, and the focus is put on the discrete-time implementation structure and on the algorithms that allow the designer to solve, at each time step, the one-step generalized equations needed to compute the controllers. The article ends with some open issues.
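To make the one-step generalized equation concrete, here is a hedged minimal sketch (illustrative, not the article's algorithm) of the implicit-Euler discretization of a first-order sliding-mode controller for a perturbed integrator; the closed-form projection below is the solution of that generalized equation for this simple plant.

```python
import numpy as np

def implicit_smc_step(sigma: float, h: float, K: float) -> float:
    # Backward Euler on sigma_dot = u with u in -K*sgn(sigma_{k+1}):
    # solving the one-step generalized equation reduces to a projection.
    if abs(sigma) <= h * K:
        return -sigma / h            # selection inside the set-valued sign: sigma -> 0
    return -K * np.sign(sigma)       # saturated branch, as in explicit SMC

# Closed loop with a matched disturbance |d| < K: no numerical chattering.
h, K, sigma = 1e-3, 2.0, 1.0
for k in range(5000):
    u = implicit_smc_step(sigma, h, K)
    d = 0.5 * np.sin(2 * np.pi * 1.0 * k * h)
    sigma += h * (u + d)
print(f"|sigma| after 5 s: {abs(sigma):.2e}")   # stays O(h); the sign never chatters
```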

32 citations


Journal ArticleDOI
TL;DR: A deep learning aided constraint encoding method to tackle the frequency-constrained microgrid scheduling problem, using a neural network to approximate the nonlinear function between system operating condition and frequency nadir.
Abstract: In this paper, we introduce a deep learning aided constraint encoding method to tackle the frequency-constrained microgrid scheduling problem. The nonlinear function between system operating condition and frequency nadir is approximated by a neural network, which admits an exact mixed-integer programming (MIP) formulation. This formulation is then integrated with the scheduling problem to encode the frequency constraint. With the stronger representation power of the neural network, the resulting commands can ensure adequate frequency response in a realistic setting in addition to islanding success. The proposed method is validated on a modified 33-node system. Successful islanding with a secure response is simulated under the scheduled commands using a detailed three-phase model in Simulink. The advantages of our model are particularly remarkable when the inertia emulation functions from wind turbine generators are considered.
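As a hedged sketch of the "exact MIP encoding" idea (illustrative names and bounds, not the paper's model), the standard big-M formulation below embeds a single trained ReLU unit as mixed-integer constraints, which is the building block for encoding a whole network inside a scheduling problem.

```python
# Big-M MIP encoding of one ReLU unit y = max(0, w.x + b); M must upper-bound
# |w.x + b| over the admissible input box. Weights here are illustrative.
import pulp

w, b, M = [1.5, -0.5], 0.2, 100.0
prob = pulp.LpProblem("relu_encoding", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{i}", lowBound=-10, upBound=10) for i in range(2)]
y = pulp.LpVariable("y", lowBound=0)
z = pulp.LpVariable("z", cat="Binary")          # z = 1  <=>  the unit is active

pre = pulp.lpSum(w[i] * x[i] for i in range(2)) + b   # pre-activation w.x + b
prob += y >= pre                  # y >= w.x + b
prob += y <= pre + M * (1 - z)    # binds when active (z = 1)
prob += y <= M * z                # forces y = 0 when inactive (z = 0)
# A full scheduling model would now constrain y (the predicted frequency nadir)
# to stay above the security threshold and add its cost objective.
```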

32 citations


Proceedings ArticleDOI
24 Jun 2021
TL;DR: SCOPE as mentioned in this paper is an open and softwarized prototyping platform for NextG systems, made up of: (i) a ready-to-use, portable open-source container for instantiating softwarized and programmable cellular network elements (e.g., base stations and users); (ii) an emulation module for diverse real-world deployments, channels and traffic conditions for testing new solutions; (iii) a data collection module for artificial intelligence and machine learning-based applications; and (iv) a set of open APIs for users to control network element functionalities in real time.
Abstract: The cellular networking ecosystem is being radically transformed by openness, softwarization, and virtualization principles, which will steer NextG networks toward solutions running on "white box" infrastructures. Telco operators will be able to truly bring intelligence to the network, dynamically deploying and adapting its elements at run time according to current conditions and traffic demands. Deploying intelligent solutions for softwarized NextG networks, however, requires extensive prototyping and testing procedures, currently largely unavailable. To this aim, this paper introduces SCOPE, an open and softwarized prototyping platform for NextG systems. SCOPE is made up of: (i) A ready-to-use, portable open-source container for instantiating softwarized and programmable cellular network elements (e.g., base stations and users); (ii) an emulation module for diverse real-world deployments, channels and traffic conditions for testing new solutions; (iii) a data collection module for artificial intelligence and machine learning-based applications, and (iv) a set of open APIs for users to control network element functionalities in real time. Researchers can use SCOPE to test and validate NextG solutions over a variety of large-scale scenarios before implementing them on commercial infrastructures. We demonstrate the capabilities of SCOPE and its platform independence by prototyping exemplary cellular solutions in the controlled environment of Colosseum, the world's largest wireless network emulator. We then port these solutions to indoor and outdoor testbeds, namely, to Arena and POWDER, a PAWR platform.

29 citations


Proceedings ArticleDOI
18 Oct 2021
TL;DR: In this article, the authors propose a configurable GPU power model called AccelWattch that can be driven by emulation and trace-driven environments, hardware counters, or a mix of the two, models both PTX and SASS ISAs, accounts for power gating and control-flow divergence, and supports DVFS.
Abstract: Graphics Processing Units (GPUs) are rapidly dominating the accelerator space, as illustrated by their widespread adoption in the data analytics and machine learning markets. At the same time, performance per watt has emerged as a crucial evaluation metric together with peak performance. As such, GPU architects require robust tools that will enable them to model both the performance and the power consumption of modern GPUs. However, while GPU performance modeling has progressed in great strides, power modeling has lagged behind. To mitigate this problem, we propose AccelWattch, a configurable GPU power model that resolves two long-standing needs: the lack of a detailed and accurate cycle-level power model for modern GPU architectures, and the inability to capture their constant and static power with existing tools. AccelWattch can be driven by emulation and trace-driven environments, hardware counters, or a mix of the two; models both PTX and SASS ISAs; accounts for power gating and control-flow divergence; and supports DVFS. We integrate AccelWattch with GPGPU-Sim and Accel-Sim to facilitate its widespread use. We validate AccelWattch on an NVIDIA Volta GPU and show that it achieves strong correlation against hardware power measurements. Finally, we demonstrate that AccelWattch can enable reliable design space exploration: by directly applying AccelWattch tuned for Volta to GPU configurations resembling NVIDIA Pascal and Turing GPUs, we obtain accurate power models for these architectures.
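Purely as a hedged illustration of the counter-driven flavor of such power modeling (not AccelWattch's actual model), the sketch below fits a constant/static power term plus per-component dynamic weights to measured power by least squares.

```python
# Counter-based power-model fit on synthetic data; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
counters = rng.uniform(0, 1, (50, 4))          # e.g. SM, L2, DRAM, tensor activity
true_w = np.array([60.0, 15.0, 25.0, 40.0])    # hidden per-unit dynamic power (W)
measured = 45.0 + counters @ true_w            # 45 W constant/static floor

X = np.column_stack([np.ones(len(counters)), counters])   # intercept = static power
w, *_ = np.linalg.lstsq(X, measured, rcond=None)
print(f"static ~ {w[0]:.1f} W, dynamic weights ~ {np.round(w[1:], 1)}")
```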

28 citations


Proceedings ArticleDOI
14 Nov 2021
TL;DR: APNN-TC as discussed by the authors is the first arbitrary precision neural network framework to exploit quantization benefits on Ampere GPU Tensor Cores, achieving significant speedup over CUTLASS kernels across various NN models such as ResNet and VGG.
Abstract: Over the years, accelerating neural networks with quantization has been widely studied. Unfortunately, prior efforts with diverse precisions (e.g., 1-bit weights and 2-bit activations) are usually restricted by limited precision support on GPUs (e.g., int1 and int4). To break such restrictions, we introduce the first Arbitrary Precision Neural Network framework (APNN-TC) to fully exploit quantization benefits on Ampere GPU Tensor Cores. Specifically, APNN-TC first incorporates a novel emulation algorithm to support arbitrary short bit-width computation with int1 compute primitives and XOR/AND Boolean operations. Second, APNN-TC integrates arbitrary precision layer designs to efficiently map our emulation algorithm to Tensor Cores with novel batching strategies and specialized memory organization. Third, APNN-TC embodies a novel arbitrary precision NN design to minimize memory access across layers and further improve performance. Extensive evaluations show that APNN-TC can achieve significant speedup over CUTLASS kernels across various NN models, such as ResNet and VGG.
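As a hedged, pure-Python sketch of the emulation principle named above (not APNN-TC's kernels), an arbitrary short-bit-width dot product can be rebuilt from 1-bit planes combined with AND and popcount, the kind of primitive that int1 Tensor Core operations provide.

```python
# Bit-plane emulation of a low-precision dot product via AND + popcount.
import numpy as np

def bitplane_dot(x, w, xbits=2, wbits=2):
    acc = 0
    for i in range(xbits):               # plane i of the activations
        xi = (x >> i) & 1
        for j in range(wbits):           # plane j of the weights
            wj = (w >> j) & 1
            acc += int(np.sum(xi & wj)) << (i + j)   # AND + popcount, shifted
    return acc

x = np.array([1, 2, 3, 0], dtype=np.uint8)   # 2-bit unsigned activations
w = np.array([3, 1, 2, 2], dtype=np.uint8)   # 2-bit unsigned weights
assert bitplane_dot(x, w) == int(np.dot(x, w))   # 1*3 + 2*1 + 3*2 + 0*2 = 11
```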

Journal ArticleDOI
TL;DR: The artificial neural network model, trained offline by the data collected from the traditional fast MPC method, is used to control the MMCs with high accuracy and can replace the role of the traditional MPC.
Abstract: This article proposes a machine learning (ML)-based emulation of model predictive control (MPC) for modular multilevel converters (MMCs). In particular, an artificial neural network model, trained offline on data collected from the traditional fast MPC method, is used to control the MMCs with high accuracy. With this offline training, the majority of the computational burden is transferred from online to offline. Therefore, the proposed ML MPC can replace the traditional MPC. The experimental results show that the proposed ML-based MPC has the same performance as the conventional MPC but a significantly more computationally efficient structure. The findings from this letter provide grounds for many other applications of ML-based emulation of complex controllers in power electronic systems.
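A hedged sketch of the offline-training idea (illustrative stand-in data, not the letter's controller or dataset): pairs of states and MPC outputs logged from a conventional controller train a small network that replaces the online optimization with a single forward pass.

```python
# Offline imitation of an MPC policy; the "logged" data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, (5000, 6))               # e.g. arm currents, cap voltages
u_mpc = np.tanh(states @ rng.normal(size=(6, 2)))    # stand-in for logged MPC outputs

policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
policy.fit(states, u_mpc)                  # offline: all heavy computation happens here
u_fast = policy.predict(states[:1])        # online: one cheap forward pass per step
```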

Proceedings ArticleDOI
17 Feb 2021
TL;DR: EGEMM-TC as discussed by the authors employs an extendable workflow of hardware profiling and operation design to generate a lightweight emulation algorithm on Tensor Cores with extended-precision, including highly-efficient tensorization to exploit the Tensor Core memory architecture and the instruction-level optimizations to coordinate the emulation computation and memory access.
Abstract: Nvidia Tensor Cores achieve high performance with half-precision matrix inputs tailored towards deep learning workloads. However, this limits the application of Tensor Cores especially in the area of scientific computing with high precision requirements. In this paper, we build Emulated GEMM on Tensor Cores (EGEMM-TC) to extend the usage of Tensor Cores to accelerate scientific computing applications without compromising the precision requirements. First, EGEMM-TC employs an extendable workflow of hardware profiling and operation design to generate a lightweight emulation algorithm on Tensor Cores with extended-precision. Second, EGEMM-TC exploits a set of Tensor Core kernel optimizations to achieve high performance, including the highly-efficient tensorization to exploit the Tensor Core memory architecture and the instruction-level optimizations to coordinate the emulation computation and memory access. Third, EGEMM-TC incorporates a hardware-aware analytic model to offer large flexibility for automatic performance tuning across various scientific computing workloads and input datasets. Extensive evaluations show that EGEMM-TC can achieve on average 3.13× and 11.18× speedup over the cuBLAS kernels and the CUDA-SDK kernels on CUDA Cores, respectively. Our case study on several scientific computing applications further confirms that EGEMM-TC can generalize the usage of Tensor Cores and achieve about 1.8× speedup compared to the hand-tuned, highly-optimized implementations running on CUDA Cores.
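As a hedged CPU-side sketch of the classic precision-splitting idea behind such emulated GEMMs (illustrative, not EGEMM-TC's Tensor Core kernels), each fp32 operand is split into an fp16 head plus an fp16 residual, and the product is reassembled from half-precision partial products accumulated in fp32.

```python
# Emulated higher-precision GEMM from fp16 pieces (three-term split).
import numpy as np

def split_fp16(a32: np.ndarray):
    hi = a32.astype(np.float16)                      # coarse fp16 head
    lo = (a32 - hi.astype(np.float32)).astype(np.float16)  # fp16 residual
    return hi, lo

rng = np.random.default_rng(1)
A = rng.standard_normal((64, 64), dtype=np.float32)
B = rng.standard_normal((64, 64), dtype=np.float32)
Ah, Al = split_fp16(A); Bh, Bl = split_fp16(B)

f32 = lambda m: m.astype(np.float32)                 # mimic fp16-in / fp32-accumulate
C_emul = f32(Ah) @ f32(Bh) + f32(Ah) @ f32(Bl) + f32(Al) @ f32(Bh)
err = np.abs(C_emul - A @ B).max()
print(f"max abs error vs fp32 GEMM: {err:.2e}")      # far below plain fp16 GEMM error
```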

Journal ArticleDOI
TL;DR: In this article, the authors provide a comprehensive guide for the practitioner or system emulation researcher to understand the challenges involved in creating, emulating, and analyzing a system from obtaining firmwares to post emulation analysis.
Abstract: System emulation and firmware re-hosting have become popular techniques to answer various security and performance related questions, such as determining whether a firmware contain security vulnerabilities or meet timing requirements when run on a specific hardware platform. While this motivation for emulation and binary analysis has previously been explored and reported, starting to either work or research in the field is difficult. To this end, we provide a comprehensive guide for the practitioner or system emulation researcher. We layout common challenges faced during firmware re-hosting, explaining successive steps and surveying common tools used to overcome these challenges. We provide classification techniques on five different axes, including emulator methods, system type, fidelity, emulator purpose, and control. These classifications and comparison criteria enable the practitioner to determine the appropriate tool for emulation. We use our classifications to categorize popular works in the field and present 28 common challenges faced when creating, emulating, and analyzing a system from obtaining firmwares to post emulation analysis.

Journal ArticleDOI
TL;DR: In this paper, the authors present a hybrid model for uncovering tactics, techniques, and procedures (TTPs) through offensive security, specifically threat hunting via adversary emulation, based on a novel approach of embedding an adversary emulation model (mapping each respective phase) inside the threat hunting approach.
Abstract: Attackers increasingly seek to compromise organizations and their critical data with advanced stealthy methods, often utilising legitimate tools. In the main, organisations employ reactive approaches for cyber security, focused on rectifying immediate incidents and preventing repeat attacks, through protections such as vulnerability assessment and penetration testing (VAPT), security information and event management (SIEM), firewalls, anti-spam/anti-malware solutions and system patches. Such systems have weaknesses in addressing modern stealthy attacks. Proactive approaches have been seen as part of the solution to this problem. However, approaches such as VAPT have limited scope and only work with threats that have already been discovered. Promising methods such as threat hunting are gaining momentum, enabling organisations to identify and rapidly respond to any potential attacks, though they have been criticised for their significant cost. In this paper, we present a novel hybrid model for uncovering tactics, techniques, and procedures (TTPs) through offensive security, specifically threat hunting via adversary emulation. The proposed technique is based on a novel approach of embedding an adversary emulation model (mapping each respective phase) inside the threat hunting approach. The experimental results show that the proposed approach, using threat hunting via adversary emulation, is effective in countering advanced-level threats, and its threat detection utilizes minimal resources. The proposed approach can be used to develop an offensive-security-aware environment for organizations to uncover advanced attack mechanisms and test their ability for attack detection.

Journal ArticleDOI
TL;DR: In this paper, the authors set out the current state of climate model emulation and demonstrate how, despite some challenges, recent advances in machine learning provide new opportunities for creating useful statistical models of the climate.
Abstract: Modern weather and climate models share a common heritage and often even components; however, they are used in different ways to answer fundamentally different questions. As such, attempts to emulate them using machine learning should reflect this. While the use of machine learning to emulate weather forecast models is a relatively new endeavour, there is a rich history of climate model emulation. This is primarily because, while weather modelling is an initial condition problem, which intimately depends on the current state of the atmosphere, climate modelling is predominantly a boundary condition problem. To emulate the response of the climate to different drivers, therefore, representation of the full dynamical evolution of the atmosphere is neither necessary nor, in many cases, desirable. Climate scientists are also typically interested in different questions. Indeed, emulating the steady-state climate response has been possible for many years, and it provides significant speed increases that allow solving inverse problems, e.g., for parameter estimation. Nevertheless, the large datasets, non-linear relationships and limited training data make climate a domain rich in interesting machine learning challenges. Here, I seek to set out the current state of climate model emulation and demonstrate how, despite some challenges, recent advances in machine learning provide new opportunities for creating useful statistical models of the climate. This article is part of the theme issue 'Machine learning for weather and climate modelling'.

Journal ArticleDOI
TL;DR: The TraceGen framework is presented: an automated system focused on the emulation of user actions to create realistic and comprehensive artefacts in an auditable and reproducible manner. It is able to produce background artefacts at scale, with realism comparable to their human-generated counterparts.

Journal ArticleDOI
TL;DR: In this paper, an optimal cascaded two-degree-of-freedom proportional-integral (2DOF-PI) and proportional-derivative with filter (PDF) controller was proposed for the load frequency control (LFC) mechanism.

Journal ArticleDOI
TL;DR: In this paper, a Gaussian Process Regression (GPR) based interpolation algorithm is proposed for non-intrusive reduced-order models (ROMs) for advection-dominated systems.
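A hedged sketch of the non-intrusive ROM idea (all names and data are illustrative): a Gaussian process maps a parameter to reduced (e.g. POD) coefficients, so new parameter values need no full-order solve.

```python
# GPR interpolation of reduced coordinates over a 1-D parameter; synthetic data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

mu_train = np.linspace(0.1, 1.0, 12)[:, None]          # training parameter values
coeffs = np.column_stack([np.sin(4 * mu_train[:, 0]),  # stand-in reduced coordinates
                          np.cos(2 * mu_train[:, 0])])

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
gpr.fit(mu_train, coeffs)
coeffs_new, std = gpr.predict(np.array([[0.55]]), return_std=True)  # fast online query
```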

Proceedings ArticleDOI
17 Feb 2021
TL;DR: FABulous as mentioned in this paper is an embedded open-source FPGA framework that provides templates for logic, arithmetic, memory and I/O blocks that can be easily stitched together, whilst enabling users to add their own fully customized blocks and primitives.
Abstract: At the end of CMOS-scaling, the role of architecture design is increasingly gaining importance. Supporting this trend, customizable embedded FPGAs are an ingredient in ASIC architectures to provide the advantages of reconfigurable hardware exactly where and how it is most beneficial. To enable this, we are introducing the FABulous embedded open-source FPGA framework. FABulous is designed to fulfill the objectives of ease of use, maximum portability to different process nodes, good control for customization, and delivering good area, power, and performance characteristics of the generated FPGA fabrics. The framework provides templates for logic, arithmetic, memory, and I/O blocks that can be easily stitched together, whilst enabling users to add their own fully customized blocks and primitives. The FABulous ecosystem generates the embedded FPGA fabric for chip fabrication, integrates Yosys, ABC, VPR and nextpnr as FPGA CAD tools, and handles bitstream generation and post-fabrication tests. Additionally, we provide an emulation path for system development. FABulous was demonstrated for an ASIC integrating a RISC-V core with an embedded FPGA fabric for custom instruction set extensions using a TSMC 180nm process and an open-source 45nm process node.

Posted Content
TL;DR: Colosseum as discussed by the authors is an open-access and publicly-available large-scale wireless testbed for experimental research via virtualized and softwarized waveforms and protocol stacks on a fully programmable, "white-box" platform.
Abstract: Colosseum is an open-access and publicly-available large-scale wireless testbed for experimental research via virtualized and softwarized waveforms and protocol stacks on a fully programmable, "white-box" platform. Through 256 state-of-the-art Software-defined Radios and a Massive Channel Emulator core, Colosseum can model virtually any scenario, enabling the design, development and testing of solutions at scale in a variety of deployments and channel conditions. These Colosseum radio-frequency scenarios are reproduced through high-fidelity FPGA-based emulation with finite-impulse response filters. Filters model the taps of desired wireless channels and apply them to the signals generated by the radio nodes, faithfully mimicking the conditions of real-world wireless environments. In this paper we describe the architecture of Colosseum and its experimentation and emulation capabilities. We then demonstrate the effectiveness of Colosseum for experimental research at scale through exemplary use cases including prevailing wireless technologies (e.g., cellular and Wi-Fi) in spectrum sharing and unmanned aerial vehicle scenarios. A roadmap for Colosseum future updates concludes the paper.
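As a hedged software sketch of the FIR-based channel emulation described above (illustrative taps, not Colosseum's FPGA implementation), convolving transmitted samples with a small set of complex taps mimics a multipath wireless channel.

```python
# Tapped-FIR channel emulation of a transmitted tone; all values illustrative.
import numpy as np

fs = 1e6                                                     # sample rate (Hz)
taps = np.array([1.0, 0.0, 0.45 * np.exp(1j * 0.8), 0.2j])   # multipath tap profile
tx = np.exp(2j * np.pi * 50e3 * np.arange(4096) / fs)        # 50 kHz test tone

rx = np.convolve(tx, taps, mode="full")[: tx.size]           # apply emulated channel
rx += (np.random.randn(tx.size) + 1j * np.random.randn(tx.size)) * 0.01  # add AWGN
```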

Journal ArticleDOI
TL;DR: A botnet identification algorithm equipped with a cluster expurgation rule, which, under appropriate technical conditions, is shown to provide exact classification of bots and normal users as the observation window size increases, is designed.
Abstract: In a Distributed Denial of Service (DDoS) attack, a network (botnet) of dispersed agents (bots) sends requests to a website to saturate its resources. Since the requests are sent by automata, the typical way to detect them is to look for some repetition pattern or commonalities between requests of the same user or from different users. For this reason, recent DDoS variants exploit communication layers that offer broader possibilities in terms of admissible request patterns, such as, e.g., the application layer. In this case, the malicious agents can pick legitimate messages from an emulation dictionary, and each individual agent sends a relatively low number of admissible requests, so as to make its activity inconspicuous. This problem has been recently addressed under the assumption that all the members of the botnet use the same emulation dictionary. This situation is an idealization of what occurs in practice, since different clusters of agents typically share only part of a global emulation dictionary. The diversity among the emulation dictionaries across different clusters introduces significant complexity into the botnet identification challenge. This work tackles this issue and provides the following main contributions. We obtain an analytical characterization of the message innovation rate of the DDoS attack with multiple emulation dictionaries. Exploiting this result, we design a botnet identification algorithm equipped with a cluster expurgation rule, which, under appropriate technical conditions, is shown to provide exact classification of bots and normal users as the observation window size increases. Then, an experimental campaign over real network traces is conducted to assess the validity of the theoretical analysis, as well as to examine the effect of a number of non-ideal effects that are unavoidably observed in practical scenarios.

Journal ArticleDOI
TL;DR: In this paper, an emulation platform that considers both the physical layer and the network (this is, able to emulate the end-to-end chain) is envisaged in the EmulRadio4Rail project.
Abstract: Radio access technologies (RATs) are a key topic in railways, enabling better service in terms of shorter headways between trains, higher safety levels and higher customer satisfaction. Very often, these railway RATs need a lot of time to be developed, tested and put into service, which implies inefficiency and bottlenecks in the evolution of railway systems. To address this situation, an emulation platform that considers both the physical layer and the network (that is, able to emulate the end-to-end chain) is envisaged in the EmulRadio4Rail project. Therefore, the physical layer of many railway scenarios must be emulated, which is a remarkable challenge because railways are very diverse. We see Tapped-Delay Line (TDL) models as the most efficient way to perform emulation with the available hardware. In the literature, there are TDL-based channel models for all the scenarios we considered but one: tunnels. Therefore, in order to fill this gap, we develop a novel TDL model for railway tunnels, considering the impact of the rolling stock (both high-speed railway (HSR) and subway trains). The proposed model allows the full characterization of this scenario in terms of power-delay profile (PDP), Doppler spectrum and fading characteristics.

Book ChapterDOI
01 Apr 2021
TL;DR: The platform significantly outperforms most legacy network emulators in terms of scalability, agility, and extensibility, with much lower emulation costs.
Abstract: Network emulation is an essential method to test network architecture, protocols and application software during a network’s entire life-cycle. Compared with simulation and test-bed methods, network emulation possesses the advantages of accuracy and cost-efficiency. However, legacy network emulators are typically restricted in scalability, agility, and extensibility, which prevents them from being widely used. In this paper, we introduce the currently prevalent cloud computing and related technologies, including resource virtualization, NFV (network function virtualization), SDN (software-defined networking), traffic control and flow steering, to the network emulation domain. We design and implement an innovative cloud-based network emulation platform, aiming at providing users Network Emulation as a Service (NEaaS), which can be conveniently deployed on both public and private clouds. We carried out performance evaluation and discussion on this platform. It turns out that the platform significantly outperforms most legacy network emulators in terms of scalability, agility, and extensibility, with much lower emulation costs.

Journal ArticleDOI
TL;DR: In this study, a neural network (NN) emulator for radiation parameterization was developed for use in an operational weather forecasting model at the Korea Meteorological Administration.
Abstract: In this study, a neural network (NN) emulator for radiation parameterization was developed for use in an operational weather forecasting model at the Korea Meteorological Administration. The de...

Posted ContentDOI
01 May 2021
TL;DR: Deep Reinforcement Learning and Software-Defined Networking Intelligent Routing (DRSIR) is introduced, a routing algorithm based on Deep RL (DRL) in SDN that overcomes the limitations of RL-based solutions.
Abstract: Traditional routing protocols employ limited information to make routing decisions, which leads to slow adaptation to traffic variability and restricted support for the quality of service requirements of applications. To address these shortcomings, in previous work, we proposed RSIR, a routing solution based on Reinforcement Learning (RL) in Software-Defined Networking (SDN). However, RL-based solutions usually suffer from longer learning processes when dealing with large action and state spaces. This paper introduces a different routing approach called Deep Reinforcement Learning and Software-Defined Networking Intelligent Routing (DRSIR). DRSIR defines a routing algorithm based on Deep RL (DRL) in SDN that overcomes the limitations of RL-based solutions. DRSIR considers path-state metrics to produce proactive, efficient, and intelligent routing that adapts to dynamic traffic changes. DRSIR was evaluated by emulation using real and synthetic traffic matrices. The results show that this solution outperforms routing algorithms based on Dijkstra's algorithm, as well as RSIR, in terms of stretch, packet loss, and delay. Moreover, the results obtained demonstrate that DRSIR provides a practical and viable solution for routing in SDN.

Journal ArticleDOI
TL;DR: With the proposed VIEC scheme, the inertia time constant can be flexibly emulated and its value can be automatically adjusted according to the rate of change of grid frequency as well as grid frequency deviations, which leads to more effective inertial support across different timescales.
Abstract: This paper proposes a novel variable-inertia emulation control (VIEC) scheme that enables voltage-source-converter based high-voltage DC (VSC-HVDC) transmission systems to flexibly support AC grid frequency stability like synchronous generators. The VIEC scheme allows us to extract the energy from the augmented DC capacitance for inertia emulation by controlling the DC voltage without affecting the stability of the AC system connected on the remote side. In particular, with the proposed VIEC scheme, the inertia time constant can be flexibly emulated and its value can be automatically adjusted according to the rate of change of grid frequency as well as grid frequency deviations. This leads to more effective inertial support across different timescales. Modal analysis is carried out to investigate the impacts of inertia and DC capacitance, and to obtain the optimal control parameters. The effectiveness and advantages of the proposed VIEC scheme are demonstrated on an IEEE benchmark system in the presence of faults and load changes.
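As a hedged illustration (a standard energy-balance relation from the inertia-emulation literature, not necessarily this paper's exact formulation), equating the dc capacitor's power exchange with the inertial response of a machine of inertia constant $H_v$ and rating $S_n$ shows how the dc-voltage trajectory must track grid frequency:

```latex
% Capacitor power exchange matched to the inertial power of an emulated machine:
\[
  C_{dc}\, V_{dc}\, \frac{\mathrm{d}V_{dc}}{\mathrm{d}t}
  \;=\; \frac{2\, H_v\, S_n}{f_0}\, \frac{\mathrm{d}f}{\mathrm{d}t}
\]
```

Making $H_v$ variable, as VIEC does, then amounts to rescheduling this proportionality online according to the measured rate of change of frequency and the frequency deviation.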

Journal ArticleDOI
TL;DR: A testbed is proposed that evaluates the UW communication system in a controlled aquatic environment and simulates the UW channel and sound propagation models and preliminary results obtained show that the proposed solution is indeed superior to existing solutions as it is cost-effective, requires less effort and is reliable.
Abstract: Underwater Wireless Sensor Networks (UWSNs) are playing a vital role in exploring unseen underwater (UW) natural resources. However, performance evaluation of UWSNs is still a challenging research problem. Various techniques, such as in-field testing, simulation and emulation, have been used for the purpose, but they all have limitations. For example, in-field testing is expensive as well as extensive; similarly, a simulation model based on assumptions may not provide precise results. Consequently, it is crucial to have a solution that is reliable, inexpensive and requires less effort to validate the functionality of UWSNs and their components. In this paper, a testbed is proposed that evaluates the UW communication system in a controlled aquatic environment and simulates the UW channel and sound propagation models. The constraint of physical access to the testbed facility is resolved by using a web-based monitoring and controlling graphical user interface. Preliminary results obtained by using the testbed to evaluate the performance of UW communication systems, together with the simulations performed, show that the proposed solution is superior to existing solutions, as it is cost-effective, requires less effort and is reliable.

Journal ArticleDOI
TL;DR: Algorithms are offered that quickly handle the massive output of a surge model while addressing the missing data at unsubmerged locations and a new optimal design criterion for selecting simulations that accounts for the log transform required to statistically model surge data are included.
Abstract: Probabilistic hurricane storm surge forecasting using a high-fidelity model has been considered impractical due to the overwhelming computational expense to run thousands of simulations. This article demonstrates that modern statistical tools enable good forecasting performance using a small number of carefully chosen simulations. This article offers algorithms that quickly handle the massive output of a surge model while addressing the missing data at unsubmerged locations. Also included is a new optimal design criterion for selecting simulations that accounts for the log transform required to statistically model surge data. Hurricane Michael (2018) is used as a testbed for this investigation and provides evidence for the approach’s efficacy in comparison to the existing probabilistic surge forecast method.

Journal ArticleDOI
TL;DR: A comprehensive review of quantum computing emulators and quantum key distillation accelerators on FPGAs can be found in this article, with a balance between theoretical, implementational, and technological results.
Abstract: In the past decades, field-programmable gate arrays (FPGAs) have demonstrated an interesting physical platform to facilitate quantum information processing, particularly in the emergence of domain-specific hardware accelerators for quantum computing emulation and quantum key distillation. While conventional general-purpose hardware platforms have been used for quantum information processing, FPGAs promise deep pipeline parallelism, adaptable interface, and trivial support for custom-precision operation. Therefore, the time is ripe for describing recent development of quantum computing emulators and quantum key distillation accelerators on FPGAs. In this article, we provide a comprehensive review of the state-of-the-art in this active field, with a balance between theoretical, implementational, and technological results. Challenges and promising research opportunities are also discussed.