
Showing papers on "Emulation published in 2015"


Journal ArticleDOI
TL;DR: A novel method that has the potential to improve the calibration of complex infectious disease models (hereafter called simulators) is presented in the form of a tutorial and a case study in which a dynamic, event-driven, individual-based stochastic HIV simulator is history matched using extensive demographic, behavioural and epidemiological data available from Uganda.
Abstract: Advances in scientific computing have allowed the development of complex models that are being routinely applied to problems in disease epidemiology, public health and decision making. The utility of these models depends in part on how well they can reproduce empirical data. However, fitting such models to real world data is greatly hindered both by large numbers of input and output parameters, and by long run times, such that many modelling studies lack a formal calibration methodology. We present a novel method that has the potential to improve the calibration of complex infectious disease models (hereafter called simulators). We present this in the form of a tutorial and a case study where we history match a dynamic, event-driven, individual-based stochastic HIV simulator, using extensive demographic, behavioural and epidemiological data available from Uganda. The tutorial describes history matching and emulation. History matching is an iterative procedure that reduces the simulator's input space by identifying and discarding areas that are unlikely to provide a good match to the empirical data. History matching relies on the computational efficiency of a Bayesian representation of the simulator, known as an emulator. Emulators mimic the simulator's behaviour, but are often several orders of magnitude faster to evaluate. In the case study, we use a 22 input simulator, fitting its 18 outputs simultaneously. After 9 iterations of history matching, a non-implausible region of the simulator input space was identified that was 10^11 times smaller than the original input space. Simulator evaluations made within this region were found to have a 65% probability of fitting all 18 outputs. History matching and emulation are useful additions to the toolbox of infectious disease modellers. Further research is required to explicitly address the stochastic nature of the simulator as well as to account for correlations between outputs.
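
As a rough illustration of the history-matching step described above, the sketch below computes implausibility with a toy one-input, one-output Gaussian-process emulator. It is not the authors' code: the stand-in simulator, the observation and discrepancy variances, and the conventional 3-sigma cutoff are all illustrative assumptions.

```python
import numpy as np

# Toy stand-in for an expensive simulator (illustrative only).
def simulator(x):
    return np.sin(3 * x) + 0.3 * x

# --- Build a simple Gaussian-process emulator from a few simulator runs ---
def sq_exp(a, b, length=0.5, var=1.0):
    return var * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

X_train = np.linspace(0.0, 2.0, 8)            # design points (simulator runs)
y_train = simulator(X_train)
K_inv = np.linalg.inv(sq_exp(X_train, X_train) + 1e-8 * np.eye(X_train.size))

def emulate(x):
    """Posterior mean and variance of the emulator at new inputs x."""
    k = sq_exp(x, X_train)
    mean = k @ K_inv @ y_train
    var = sq_exp(x, x).diagonal() - np.einsum('ij,jk,ik->i', k, K_inv, k)
    return mean, np.maximum(var, 0.0)

# --- One wave of history matching: discard implausible inputs ---
z_obs, var_obs, var_disc = 0.9, 0.01, 0.01    # observation / discrepancy variances (assumed)
candidates = np.linspace(0.0, 2.0, 2000)      # cheap to test via the emulator
mean, var = emulate(candidates)
implausibility = np.abs(z_obs - mean) / np.sqrt(var + var_obs + var_disc)
non_implausible = candidates[implausibility < 3.0]   # conventional 3-sigma cutoff
print(f"non-implausible fraction: {non_implausible.size / candidates.size:.1%}")
```

In a real application this filtering is repeated over several waves, with new simulator runs placed inside the surviving region each time.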

135 citations


Posted Content
TL;DR: This paper introduces a methodology and implements a scalable framework for discovery of vulnerabilities in embedded web interfaces regardless of the devices' vendor, type, or architecture, and presents the first fully automated framework that applies dynamic firmware analysis techniques to achieve automated vulnerability discovery within embedded firmware images.
Abstract: Embedded devices are becoming more widespread, interconnected, and web-enabled than ever. However, recent studies showed that these devices are far from being secure. Moreover, many embedded systems rely on web interfaces for user interaction or administration. Unfortunately, web security is known to be difficult, and therefore the web interfaces of embedded systems represent a considerable attack surface. In this paper, we present the first fully automated framework that applies dynamic firmware analysis techniques to achieve, in a scalable manner, automated vulnerability discovery within embedded firmware images. We apply our framework to study the security of embedded web interfaces running in Commercial Off-The-Shelf (COTS) embedded devices, such as routers, DSL/cable modems, VoIP phones, IP/CCTV cameras. We introduce a methodology and implement a scalable framework for discovery of vulnerabilities in embedded web interfaces regardless of the vendor, device, or architecture. To achieve this goal, our framework performs full system emulation to achieve the execution of firmware images in a software-only environment, i.e., without involving any physical embedded devices. Then, we analyze the web interfaces within the firmware using both static and dynamic tools. We also present some interesting case-studies, and discuss the main challenges associated with the dynamic analysis of firmware images and their web interfaces and network services. The observations we make in this paper shed light on an important aspect of embedded devices which was not previously studied at a large scale. We validate our framework by testing it on 1925 firmware images from 54 different vendors. We discover important vulnerabilities in 185 firmware images, affecting nearly a quarter of vendors in our dataset. These experimental results demonstrate the effectiveness of our approach.

131 citations


Journal ArticleDOI
TL;DR: A nanoscale, solid-state physically evolving network is experimentally demonstrated, based on the self-organization of Ag nanoclusters under an electric field, which allows the emulation of heterosynaptic plasticity, an important learning rule in biological systems.
Abstract: A nanoscale, solid-state physically evolving network is experimentally demonstrated, based on the self-organization of Ag nanoclusters under an electric field. The adaptive nature of the network is determined by the collective inputs from multiple terminals and allows the emulation of heterosynaptic plasticity, an important learning rule in biological systems. These effects are universally observed in devices based on different switching materials.

129 citations


Journal ArticleDOI
TL;DR: This paper proposes a new cloud-based automation architecture for industrial automation, which includes different functionalities from feedback control and telemetry to plant optimization and enterprise management, and focuses on the feedback control layer as the most time-critical and demanding functionality.
Abstract: New cloud services are being developed to support a wide variety of real-life applications. In this paper, we introduce a new cloud service: industrial automation, which includes different functionalities from feedback control and telemetry to plant optimization and enterprise management. We focus our study on the feedback control layer as the most time-critical and demanding functionality. Today’s large-scale industrial automation projects are expensive and time-consuming. Hence, we propose a new cloud-based automation architecture, and we analyze cost and time savings under the proposed architecture. We show that significant cost and time savings can be achieved, mainly due to the virtualization of controllers and the reduction of hardware cost and associated labor. However, the major difficulties in providing cloud-based industrial automation systems are timeliness and reliability. Offering automation functionalities from the cloud over the Internet puts the controlled processes at risk due to varying communication delays and potential failure of virtual machines and/or links. Thus, we design an adaptive delay compensator and a distributed fault tolerance algorithm to mitigate delays and failures, respectively. We theoretically analyze the performance of the proposed architecture when compared to the traditional systems and prove zero or negligible change in performance. To experimentally evaluate our approach, we implement our controllers on commercial clouds and use them to control: (i) a physical model of a solar power plant, where we show that the fault-tolerance algorithm effectively makes the system unaware of faults, and (ii) industry-standard emulation with large injected delays and disturbances, where we show that the proposed cloud-based controllers perform indistinguishably from the best-known counterparts: local controllers.

105 citations


Journal ArticleDOI
01 Nov 2015
TL;DR: The architectural concept of SUNSET is described and some exemplary results of its use in the field are presented; the framework allows the performance investigation of underwater systems under different settings and configurations and significantly reduces the cost and complexity of at-sea trials.
Abstract: The Sapienza University Networking framework for underwater Simulation Emulation and real-life Testing (SUNSET) is a toolkit for the implementation and testing of protocols for underwater sensor networks. SUNSET enables a radical new way of performing experimental research on underwater communications. It allows protocol designers and implementors to easily realize their solutions and to evaluate their performance through simulation, in-lab emulation and trials at sea in a direct and transparent way, and independently of specific underwater hardware platforms. SUNSET provides a complete toolchain of pre-deployment and deployment time tools able to identify risks, malfunctioning and under-performing solutions before incurring the expense of going to sea. Novel underwater systems can therefore be rapidly and easily investigated. Heterogeneous underwater communication technologies from different vendors can be used, allowing the evaluation of the impact of different combinations of hardware and software on the overall system performance. Using SUNSET, underwater devices can be reconfigured and controlled remotely in real time, using acoustic links. This allows the performance investigation of underwater systems under different settings and configurations and significantly reduces the cost and complexity of at-sea trials. This paper describes the architectural concept of SUNSET and presents some exemplary results of its use in the field. The SUNSET framework has been extensively validated during more than fifteen at-sea experimental campaigns in the past four years. Several of these have been conducted jointly with the NATO STO Centre for Maritime Research and Experimentation (CMRE) under a collaboration between the University of Rome and CMRE.

100 citations


Proceedings ArticleDOI
09 Feb 2015
TL;DR: Results show that the simulation environment has a remarkable effect on the time required to build a topology: for instance, the powerful-resources scenario needed only 0.19 s, whereas 5.611 s were needed when resources were limited.

Abstract: In this paper an evaluation of the SDN emulation tool Mininet is conducted. Tests study Mininet's limitations with respect to the simulation environment and its resource capabilities. To evaluate the latter, the scalability of Mininet in terms of creating many topologies is tested with a varying number of nodes under two different environment scenarios. Results show that the simulation environment has a remarkable effect on the time required to build a topology: for instance, the powerful-resources scenario needed only 0.19 s, whereas 5.611 s were needed when resources were limited. As the number of nodes was increased, the build time grew in both scenarios, reaching 242.842 s and 3718.117 s for the powerful and less capable resource scenarios, respectively.
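
The kind of topology-construction timing reported here is easy to reproduce with Mininet's Python API. The sketch below is our own illustration (not the authors' scripts); it assumes Mininet and Open vSwitch are installed and must be run as root, and the host counts are arbitrary.

```python
#!/usr/bin/env python
"""Time how long Mininet takes to build a single-switch topology (sketch)."""
import time
from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo
from mininet.log import setLogLevel

def time_topology_build(num_hosts):
    topo = SingleSwitchTopo(k=num_hosts)        # one switch, num_hosts hosts
    start = time.time()
    net = Mininet(topo=topo, controller=None)   # no controller needed just to time construction
    net.start()
    elapsed = time.time() - start
    net.stop()
    return elapsed

if __name__ == '__main__':
    setLogLevel('warning')
    for n in (16, 64, 256):                     # illustrative sizes, not the paper's
        print(f"{n} hosts: built in {time_topology_build(n):.2f} s")
```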

99 citations


Journal ArticleDOI
TL;DR: A useful and comprehensive comparison between floating- and fixed-point arithmetic for hardware implementation is provided, and the differences between deeply pipelined and highly parallel realization schemes and the contribution of schematic and textual programming-language methods for design configuration of electrical machine models are addressed.
Abstract: Hardware-in-the-loop (HIL) technology is increasingly becoming the preferred, reliable, and cost-effective alternative in a virtual scenario for tedious, time-consuming, and expensive tests on real devices. This paper presents a digital hardware emulation of commonly used electrical machines for HIL simulation on field-programmable gate arrays (FPGAs) in a general framework. This paper provides a useful and comprehensive comparison between floating- and fixed-point arithmetic for hardware implementation, addresses the differences between deeply pipelined and highly parallel realization schemes, and considers the contribution of schematic and textual programming-language methods for design configuration of electrical machine models. The hardware implementation by these approaches is evaluated in terms of real-time step size, accuracy, and hardware resource consumption. Finally, an experimentally measured electrical machine behavior is employed to demonstrate the effectiveness of the emulated electrical machine.
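
The floating- versus fixed-point trade-off can be illustrated outside any FPGA toolchain with a quick numerical experiment. The first-order test equation and the Q4.12 format below are arbitrary choices of ours, not the paper's machine model or word lengths.

```python
# Fixed-point (Q4.12) vs. double-precision update of x <- x + gain*(u - x).
FRAC_BITS = 12
SCALE = 1 << FRAC_BITS

def to_fixed(value):
    return int(round(value * SCALE))

def fixed_step(x_fx, u_fx, gain_fx):
    # (u - x) * gain, with the Q4.12 * Q4.12 product rescaled back to Q4.12.
    return x_fx + (((u_fx - x_fx) * gain_fx) >> FRAC_BITS)

u, gain = 1.0, 0.01                            # input and step gain (arbitrary)
x_float, x_fixed = 0.0, to_fixed(0.0)
u_fx, gain_fx = to_fixed(u), to_fixed(gain)

for _ in range(1000):
    x_float += (u - x_float) * gain
    x_fixed = fixed_step(x_fixed, u_fx, gain_fx)

print(f"float: {x_float:.6f}   fixed: {x_fixed / SCALE:.6f}   "
      f"abs. error: {abs(x_float - x_fixed / SCALE):.2e}")
```

The fixed-point trajectory stalls once the quantized increment rounds to zero, which is exactly the kind of accuracy-versus-resource effect the paper evaluates on the FPGA.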

91 citations


Proceedings ArticleDOI
24 Nov 2015
TL;DR: Quartz is a performance emulator for persistent memory that emulates a wide range of NVM latencies and bandwidth characteristics for performance evaluation of emerging byte-addressable NVMs and their impact on application performance.
Abstract: Next-generation non-volatile memory (NVM) technologies, such as phase-change memory and memristors, can enable computer systems infrastructure to continue keeping up with the voracious appetite of data-centric applications for large, cheap, and fast storage. Persistent memory has emerged as a promising approach to accessing emerging byte-addressable non-volatile memory through processor load/store instructions. Due to lack of commercially available NVM, system software researchers have mainly relied on emulation to model persistent memory performance. However, existing emulation approaches are either too simplistic, or too slow to emulate large-scale workloads, or require special hardware. To fill this gap and encourage wider adoption of persistent memory, we developed a performance emulator for persistent memory, called Quartz. Quartz enables an efficient emulation of a wide range of NVM latencies and bandwidth characteristics for performance evaluation of emerging byte-addressable NVMs and their impact on applications performance (without modifying or instrumenting their source code) by leveraging features available in commodity hardware. Our emulator is implemented on three latest Intel Xeon-based processor architectures: Sandy Bridge, Ivy Bridge, and Haswell. To assist researchers and engineers in evaluating design decisions with emerging NVMs, we extend Quartz for emulating the application execution on future systems with two types of memory: fast, regular volatile DRAM and slower persistent memory. We evaluate the effectiveness of our approach by using a set of specially designed memory-intensive benchmarks and real applications. The accuracy of the proposed approach is validated by running these programs both on our emulation platform and a multisocket (NUMA) machine that can support a range of memory latencies. We show that Quartz can emulate a range of performance characteristics with low overhead and good accuracy (with emulation errors 0.2% - 9%).
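
Quartz's central idea, as described above, is to inject software delays at the end of short epochs, sized by the memory traffic observed in the epoch, so that memory appears slower than the underlying DRAM. The sketch below is a drastically simplified illustration: the `memory_accesses_in_epoch` hook is hypothetical (the real emulator reads hardware performance counters), and the latency and epoch numbers are assumptions.

```python
import time

DRAM_LATENCY_NS = 100          # assumed baseline DRAM latency
NVM_LATENCY_NS = 400           # emulated persistent-memory latency
EPOCH_SEC = 0.001              # epoch length between delay injections

def memory_accesses_in_epoch():
    """Hypothetical hook; a real emulator reads hardware miss counters here."""
    return 50_000

def run_with_emulated_nvm(n_epochs):
    extra_ns_per_access = NVM_LATENCY_NS - DRAM_LATENCY_NS
    for _ in range(n_epochs):
        epoch_start = time.perf_counter()
        # ... one epoch of the application's work would run here ...
        delay_s = memory_accesses_in_epoch() * extra_ns_per_access * 1e-9
        # Stretch the epoch by the computed delay, emulating slower memory.
        while time.perf_counter() - epoch_start < EPOCH_SEC + delay_s:
            pass

start = time.perf_counter()
run_with_emulated_nvm(n_epochs=10)
print(f"10 epochs took {time.perf_counter() - start:.3f} s (0.010 s without throttling)")
```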

87 citations


Journal ArticleDOI
Rong Yu, Yan Zhang1, Yi Liu, Stein Gjessing1, Mohsen Guizani2 
TL;DR: This article presents a comprehensive introduction to PUE attacks, from the attack rationale and its impact on CR networks, to detection and defense approaches, and proposes an admission-control-based defense approach to mitigate the performance degradation of a CR network under a PUE attack.
Abstract: Cognitive radio is a promising technology for next-generation wireless networks in order to efficiently utilize the limited spectrum resources and satisfy the rapidly increasing demand for wireless applications and services. Security is a very important but not well addressed issue in CR networks. In this article we focus on security problems arising from primary user emulation (PUE) attacks in CR networks. We present a comprehensive introduction to PUE attacks, from the attack rationale and its impact on CR networks, to detection and defense approaches. In order to secure CR networks against PUE attacks, a two-level database-assisted detection approach is proposed to detect such attacks. Energy detection and location verification are combined for fast and reliable detection. An admission-control-based defense approach is proposed to mitigate the performance degradation of a CR network under a PUE attack. Illustrative results are presented to demonstrate the effectiveness of the proposed detection and defense approaches.
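
To make the first stage of such a detection pipeline concrete, here is a textbook energy-detection test (a generic sketch under Gaussian-noise assumptions, not the authors' scheme; the false-alarm target and the test signal are arbitrary). Energy detection alone cannot distinguish a legitimate primary user from an emulated one, which is exactly why the paper pairs it with database-assisted location verification.

```python
import numpy as np

Q_INV_001 = 2.326   # standard-normal quantile for a 1% false-alarm probability

def energy_detect(samples, noise_power):
    """Flag a transmission (primary user or PUE attacker) when the measured
    energy exceeds a threshold set for the desired false-alarm rate."""
    n = samples.size
    energy = np.sum(np.abs(samples) ** 2)
    threshold = noise_power * (n + np.sqrt(2 * n) * Q_INV_001)
    return energy > threshold

rng = np.random.default_rng(0)
idle_band = rng.normal(scale=1.0, size=1000)                    # noise only
busy_band = idle_band + 0.7 * np.sin(0.1 * np.arange(1000))     # noise plus a transmission
print(energy_detect(idle_band, noise_power=1.0),
      energy_detect(busy_band, noise_power=1.0))
```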

82 citations


Proceedings ArticleDOI
17 May 2015
TL;DR: MALT is a debugging framework that employs System Management Mode, a CPU mode in the x86 architecture, to transparently study armored malware; it reduces the attack surface at the software level and advances state-of-the-art debugging transparency.
Abstract: With the rapid proliferation of malware attacks on the Internet, understanding these malicious behaviors plays a critical role in crafting effective defense. Advanced malware analysis relies on virtualization or emulation technology to run samples in a confined environment, and to analyze malicious activities by instrumenting code execution. However, virtual machines and emulators inevitably create artifacts in the execution environment, making these approaches vulnerable to detection or subversion. In this paper, we present MALT, a debugging framework that employs System Management Mode, a CPU mode in the x86 architecture, to transparently study armored malware. MALT does not depend on virtualization or emulation and thus is immune to threats targeting such environments. Our approach reduces the attack surface at the software level, and advances state-of-the-art debugging transparency. MALT embodies various debugging functions, including register/memory accesses, breakpoints, and four stepping modes. We implemented a prototype of MALT on two physical machines, and we conducted experiments by testing an array of existing anti-virtualization, anti-emulation, and packing techniques against MALT. The experimental results show that our prototype remains transparent and undetected against the samples. Furthermore, our prototype of MALT introduces moderate but manageable overheads on both Windows and Linux platforms.

73 citations


Patent
22 Dec 2015
TL;DR: In this article, a cloud-based multi-tier cyber analytics system is provided for integration of cloud-side and on-premise analytics for industrial systems, which includes an emulation runtime engine that executes a virtualized controller on a cloud platform.
Abstract: A cloud-based multi-tier cyber analytics system is provided for integration of cloud-side and on-premise analytics for industrial systems. The analytics system includes an emulation runtime engine that executes a virtualized controller on a cloud platform. The runtime engine serves as a core analytics component by providing a control-level analytics engine with application programming interfaces (APIs) that enable seamless interaction of distributed simulations, cloud level services, and hardware industrial controllers. A cloud-based framework integrates soft control, hard control, and simulation with cloud-level services, and includes components that facilitate near real-time data streaming from the plant floor to the cloud platform to yield an industrial Internet of Things (IoT).

Proceedings ArticleDOI
17 Jun 2015
TL;DR: Experimental results demonstrate that embedding virtual time into Mininet significantly enhances its performance fidelity, and therefore, results in a useful platform for the SDN community to conduct scalable experiments with high fidelity.
Abstract: The advancement of software-defined networking (SDN) technology is highly dependent on the successful transformations from in-house research ideas to real-life products. To enable such transformations, a testbed offering scalable and high fidelity networking environment for testing and evaluating new/existing designs is extremely valuable. Mininet, the most popular SDN emulator by far, is designed to achieve both accuracy and scalability by running unmodified code of network applications in lightweight Linux Containers. However, Mininet cannot guarantee performance fidelity under high workloads, in particular when the number of concurrent active events is more than the number of parallel cores. In this project, we develop a lightweight virtual time system in Linux container and integrate the system with Mininet, so that all the containers have their own virtual clocks rather than using the physical system clock which reflects the serialized execution of multiple containers. With the notion of virtual time, all the containers perceive virtual time as if they run independently and concurrently. As a result, interactions between the containers and the physical system are artificially scaled, making a network appear to be ten times faster from the viewpoint of applications within the containers than it actually is. We also design an adaptive virtual time scheduling subsystem in Mininet, which is responsible to balance the experiment speed and fidelity. Experimental results demonstrate that embedding virtual time into Mininet significantly enhances its performance fidelity, and therefore, results in a useful platform for the SDN community to conduct scalable experiments with high fidelity.
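
The effect of a per-container virtual clock can be conveyed with a toy time-dilation wrapper. This is purely a user-space illustration of the idea: the actual system adjusts the kernel clock each container sees rather than wrapping calls like this, and the TDF value is arbitrary.

```python
import time

class VirtualClock:
    """Scale elapsed wall-clock time by a time-dilation factor (TDF).
    With TDF = 10, one real second appears as 0.1 virtual seconds, so the
    emulated network looks ten times faster from the application's viewpoint."""
    def __init__(self, tdf):
        self.tdf = tdf
        self.real_start = time.time()

    def now(self):
        return self.real_start + (time.time() - self.real_start) / self.tdf

clock = VirtualClock(tdf=10)
t0 = clock.now()
time.sleep(1.0)                   # one real second of (serialized) emulation work
print(f"virtual seconds elapsed: {clock.now() - t0:.2f}")   # roughly 0.10
```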

Journal ArticleDOI
TL;DR: This paper designs and implements 3D virtual labs for science e-learning, considered a low-cost alternative for educators and students; the resulting application combines advanced visualization, interactive management through complex virtual devices, and intelligent components.

Journal ArticleDOI
TL;DR: A Bayesian approach to Gaussian process modeling capable of incorporating monotonicity information for computer model emulation is developed.
Abstract: In statistical modeling of computer experiments, prior information is sometimes available about the underlying function. For example, the physical system simulated by the computer code may be known to be monotone with respect to some or all inputs. We develop a Bayesian approach to Gaussian process modeling capable of incorporating monotonicity information for computer model emulation. Markov chain Monte Carlo methods are used to sample from the posterior distribution of the process given the simulator output and monotonicity information. The performance of the proposed approach in terms of predictive accuracy and uncertainty quantification is demonstrated in a number of simulated examples as well as a real queuing system application.
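
A crude way to see what "incorporating monotonicity information" buys is to fit an ordinary Gaussian-process emulator and simply reject posterior draws that are not monotone at a set of check points. The sketch below does exactly that with a toy simulator; it only illustrates the constraint, not the MCMC scheme developed in the paper, and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(x):                                    # toy monotone simulator
    return np.log1p(3.0 * x)

X = np.linspace(0.0, 2.0, 9)                         # design points
y = simulator(X) + rng.normal(0.0, 0.05, X.size)     # noisy simulator output

def kernel(a, b, ls=1.0, var=1.0):
    return var * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

grid = np.linspace(0.0, 2.0, 12)                     # prediction / check points
K = kernel(X, X) + 0.05 ** 2 * np.eye(X.size)
Ks = kernel(grid, X)
mean = Ks @ np.linalg.solve(K, y)
cov = kernel(grid, grid) - Ks @ np.linalg.solve(K, Ks.T)

# Keep only posterior draws that are non-decreasing at the check points.
draws = rng.multivariate_normal(mean, cov + 1e-6 * np.eye(grid.size), size=5000)
kept = draws[np.all(np.diff(draws, axis=1) >= 0.0, axis=1)]
print(f"kept {kept.shape[0]} of {draws.shape[0]} draws")
if kept.shape[0]:
    print("monotonicity-constrained emulator mean:", np.round(kept.mean(axis=0), 3))
```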

Proceedings ArticleDOI
01 Sep 2015
TL;DR: This work proposes novel algorithms for translating signal temporal logic assertions to hardware runtime monitors implemented in field programmable gate array (FPGA) and evaluates the approach on two examples: the mixed signal bounded stabilization property and the serial peripheral interface (SPI) communication protocol.
Abstract: Due to the heterogeneity and complexity of systems-of-systems (SoS), their simulation is becoming very time consuming, expensive and hence impractical. As a result, design simulation is increasingly being complemented with more efficient design emulation. Runtime monitoring of emulated designs would provide a precious support in the verification activities of such complex systems. We propose novel algorithms for translating signal temporal logic (STL) assertions to hardware runtime monitors implemented in field programmable gate array (FPGA). In order to accommodate to this hardware specific setting, we restrict ourselves to past and bounded future temporal operators interpreted over discrete time. We evaluate our approach on two examples: the mixed signal bounded stabilization property and the serial peripheral interface (SPI) communication protocol. These case studies demonstrate the suitability of our approach for runtime monitoring of both digital and mixed signal systems.
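
As a software mock-up of what such a hardware monitor computes, the sketch below checks a bounded past-time property of the form H[0,N] (|error| <= bound), i.e. "over the last N samples the error stayed within the bound", using only a sliding window of Booleans, which is essentially the shift-register state an FPGA monitor would keep. The property, bound, and trace are our own illustration, not the paper's case studies.

```python
from collections import deque

class BoundedHistoricallyMonitor:
    """Discrete-time monitor for H[0,N] (|error| <= bound):
    true once the error has stayed within the bound for the last N samples."""
    def __init__(self, n_samples, bound):
        self.window = deque(maxlen=n_samples)   # acts like an N-bit shift register
        self.bound = bound

    def step(self, error_sample):
        self.window.append(abs(error_sample) <= self.bound)
        return len(self.window) == self.window.maxlen and all(self.window)

# Example: a step response that overshoots, then settles within +/-0.05.
monitor = BoundedHistoricallyMonitor(n_samples=5, bound=0.05)
trace = [0.8, 0.4, 0.12, 0.04, -0.03, 0.02, 0.01, -0.02, 0.00, 0.01]
for t, e in enumerate(trace):
    print(f"t={t}: verdict={monitor.step(e)}")
```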

Proceedings ArticleDOI
27 Apr 2015
TL;DR: Two approaches for real-time end-to-end over-the-air testing for intelligent traffic systems using virtual radio environments are considered based on coherent wavefield synthesis and emulating the essential features of a realistic radio channel in vehicular ad-hoc networks.
Abstract: We consider two approaches for real-time end-to-end over-the-air testing for intelligent traffic systems using virtual radio environments. One approach is based on coherent wavefield synthesis, while the other aims at emulating the essential features of a realistic radio channel in vehicular ad-hoc networks. Due to finite hardware resources, the former is limited to electrically small systems-under-test, while the latter is suited for typical vehicle-to-vehicle scenarios. Three major aspects of the emulation of radio environments in our facilities in Ilmenau, Germany, are described: 1. Emulation hardware, 2. Channel sounding, modeling, and feature extraction, 3. Emulation methods.

Proceedings ArticleDOI
13 Apr 2015
TL;DR: The manuscript describes implementation alternatives of the virtual function chaining in a SDN scenario, showing that both layer 2 and layer 3 approaches are functionally viable.
Abstract: This manuscript investigates the issue of implementing chains of network functions in a “softwarized” environment where edge network middle-boxes are replaced by software appliances running in virtual machines within a data center. The primary goal is to show that this approach allows space and time diversity in service chaining, with a higher degree of dynamism and flexibility with respect to conventional hardware-based architectures. The manuscript describes implementation alternatives of the virtual function chaining in a SDN scenario, showing that both layer 2 and layer 3 approaches are functionally viable. A proof-of-concept implementation with the Mininet emulation platform is then presented to provide a practical example of the feasibility and degree of complexity of such approaches.

Journal ArticleDOI
TL;DR: In this article, an associative processor combines data storage and data processing, and functions as a massively parallel SIMD processor and a memory at the same time, and an analytic performance model of this computer architecture is introduced.
Abstract: This study presents a computer architecture, where a last-level cache and a SIMD accelerator are replaced by an associative processor. Associative processor combines data storage and data processing, and functions as a massively parallel SIMD processor and a memory at the same time. An analytic performance model of this computer architecture is introduced. Comparative analysis supported by cycle-accurate simulation and emulation shows that this architecture may outperform a conventional computer architecture comprising a SIMD coprocessor and a shared last-level cache while consuming less power.

Journal ArticleDOI
TL;DR: This paper surveys virtual machine migration techniques and the parameters available for VM migration in cloud computing.
Abstract: Cloud computing delivers computing services over the Internet. Cloud services help individuals and organizations use data that are managed by third parties at remote locations. A virtual machine (VM) is an emulation of a particular computer system. In cloud computing, virtual machine migration is a useful tool for moving operating system instances across physical machines; it is used for load balancing, fault management, low-level system maintenance, and reducing energy consumption. There are various techniques and parameters available for VM migration, and this paper surveys these virtual machine migration techniques.

Journal ArticleDOI
TL;DR: The proposed TUNIE architecture is a large-scale emulation testbed for DTN protocol evaluation based on network virtualization capable of simulating reliable DTN environments and obtaining an accurate system performance evaluation.
Abstract: Delay-tolerant networks, DTNs, are characterized by lacking end-to-end paths between communication sources and destinations. A variety of routing schemes have been proposed to provide communication services in DTNs, and credible and flexible protocol evaluation tools are in demand in order to test these DTN routing schemes. By examining the evolution of DTN protocol testing and evaluation, this article discusses the trend toward large-scale mobility trace supported emulation, and we propose TUNIE, a large-scale emulation testbed for DTN protocol evaluation based on network virtualization. Unlike the existing simulation tools and real-life testbeds, which either cannot provide a realistic DTN environment setup or are too costly and time-consuming, our proposed TUNIE architecture is capable of simulating reliable DTN environments and obtaining an accurate system performance evaluation. By system prototype and implementation, we demonstrate TUNIE as a flexible platform for evaluating DTN protocol performance.

Proceedings ArticleDOI
01 Oct 2015
TL;DR: Emulation of a single link across a wide range of system parameters shows that theoretical results on age are indicative of real-world behavior, but limitations of the theoretical models and their impact on the emulated age are also identified.
Abstract: This work focuses on evaluating a new metric for the timeliness of information in a status monitoring system, referred to as the age of information or the status age. Investigation into the age of information metric is fairly recent and has primarily been focused on theoretical analysis, and there has been no evaluation of the metric in a realistic system of networked nodes. We evaluate the age metric in a realistic wireless system using the open source network emulation tools CORE and EMANE. Our goal is to validate theoretical results on age for a single link across a wide range of system parameters. In addition to verifying existing theoretical results, we go further by adding more theoretical results that better model the emulated system than currently offered in the theory. Our results indicate that the theoretical results are indicative of real world behavior, but we also identify the limitations of the theoretical models and the impact on the emulated age.
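
For reference, the age metric being emulated here is straightforward to compute from a log of (generation time, delivery time) pairs: the instantaneous age is the current time minus the generation time of the freshest delivered update, and the usual summary is its time average. The sketch below is a generic calculation (not tied to CORE/EMANE) assuming updates are delivered in order and the age starts at zero.

```python
def time_average_age(updates, horizon):
    """updates: list of (generation_time, delivery_time) pairs, in delivery order.
    Between deliveries the age grows with slope 1; at a delivery it drops to
    (delivery_time - generation_time). Returns the time-averaged age over [0, horizon]."""
    area, t, age = 0.0, 0.0, 0.0
    for gen, dlv in updates:
        dt = dlv - t
        area += age * dt + 0.5 * dt * dt        # trapezoid under the age sawtooth
        t, age = dlv, dlv - gen
    dt = horizon - t
    area += age * dt + 0.5 * dt * dt
    return area / horizon

# Example: periodic updates delivered with a few hundred milliseconds of delay.
log = [(1.0, 1.3), (2.0, 2.4), (3.0, 3.2), (4.0, 4.5)]
print(f"time-average age: {time_average_age(log, horizon=5.0):.3f} s")
```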

Proceedings ArticleDOI
26 May 2015
TL;DR: An approach is proposed that can generalize noisy task demonstrations to a new goal point and to an environment with obstacles and incorporates different types of learning from demonstration, which correspond to different kinds of observational learning as outlined in developmental psychology.
Abstract: Dynamic Movement Primitives (DMPs) are a common method for learning a control policy for a task from demonstration. This control policy consists of differential equations that can create a smooth trajectory to a new goal point. However, DMPs only have a limited ability to generalize the demonstration to new environments and solve problems such as obstacle avoidance. Moreover, standard DMP learning does not cope with the noise inherent to human demonstrations. Here, we propose an approach for robot learning from demonstration that can generalize noisy task demonstrations to a new goal point and to an environment with obstacles. This strategy for robot learning from demonstration results in a control policy that incorporates different types of learning from demonstration, which correspond to different types of observational learning as outlined in developmental psychology.
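
For readers unfamiliar with DMPs, the sketch below shows the standard one-dimensional formulation: a critically damped transformation system driven by a phase-indexed forcing term extracted from a single demonstration, then rolled out to a new goal. It is the textbook DMP only; the paper's contributions (handling noisy demonstrations, obstacles, and the different learning-from-demonstration modes) are not reproduced, and all gains are conventional defaults.

```python
import numpy as np

alpha_z, beta_z, alpha_x, tau = 25.0, 25.0 / 4.0, 3.0, 1.0
dt = 0.002

# --- Demonstration: a smooth minimum-jerk reach from 0 to 1 over one second ---
T = np.arange(0.0, 1.0, dt)
y_demo = 10 * T**3 - 15 * T**4 + 6 * T**5
yd_demo = np.gradient(y_demo, dt)
ydd_demo = np.gradient(yd_demo, dt)
g_demo, y0_demo = y_demo[-1], y_demo[0]

# Forcing term that reproduces the demonstration, indexed by the phase variable x.
x_demo = np.exp(-alpha_x * T / tau)
f_demo = tau**2 * ydd_demo - alpha_z * (beta_z * (g_demo - y_demo) - tau * yd_demo)

def forcing(x):
    # Plain interpolation over the recorded phase (basis-function regression is
    # normally used); the phase decreases over time, so flip for np.interp.
    return np.interp(x, x_demo[::-1], f_demo[::-1])

def rollout(y0, g, steps=T.size):
    """Integrate the DMP toward a (possibly new) goal g."""
    y, z, x = y0, 0.0, 1.0
    traj = []
    for _ in range(steps):
        f = forcing(x) * (g - y0) / (g_demo - y0_demo)   # scale forcing to the new goal
        z_dot = (alpha_z * (beta_z * (g - y) - z) + f) / tau
        y += dt * z / tau
        z += dt * z_dot
        x += dt * (-alpha_x * x) / tau
        traj.append(y)
    return np.array(traj)

new_traj = rollout(y0=0.0, g=1.5)                        # generalize to a new goal
print("final position:", round(float(new_traj[-1]), 3), "(new goal: 1.5)")
```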

Proceedings ArticleDOI
24 Mar 2015
TL;DR: Dockemu provides the researcher the flexibility to rapidly create networks on up-to-date operating systems, with a user-friendly method of installation and configuration that translates into a streamlined workflow for emulation experiments.
Abstract: Dockemu was developed out of the need for a well-designed tool for emulating networks. The tool utilizes technologies that are tailored to a current researcher's needs, delivering a robust and dynamic framework. In the past, most emulation tools have tried to provide solutions without taking into consideration that installing and configuring the software can be very time consuming, not to mention that setting up and running an actual experiment can also become a very complex task. Our approach gives the researcher the flexibility to rapidly create networks (wired or wireless) on up-to-date operating systems, with a user-friendly method of installation and configuration, which translates into a streamlined workflow for emulation experiments. Applications or prototypes can also be developed directly on a real-world OS. This saves time, because prototypes only need to be developed once (not for a simulator first and then for a final OS), and the results are also more accurate. Dockemu utilizes virtualization with Linux Containers through Docker and Linux bridging, along with NS-3 for the emulation of layers 1 and 2 of the OSI model.

Journal ArticleDOI
TL;DR: A hardware-in-the-loop simulation platform for emulating large-scale intelligent transportation systems is presented, which embeds a real vehicle into SUMO, a microscopic road traffic simulation package.
Abstract: A hardware-in-the-loop simulation platform for emulating large-scale intelligent transportation systems is presented. The platform embeds a real vehicle into SUMO, a microscopic road traffic simulation package. Emulations, consisting of the real vehicle, and potentially thousands of simulated vehicles, are run in real time. The platform provides an opportunity for real drivers to gain a feel of being in a large-scale connected vehicle scenario. Various applications of the platform are presented.
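
The coupling of a real vehicle into SUMO can be sketched with SUMO's TraCI Python API. The snippet below is only an outline of the idea: the configuration file name, route id, vehicle id, and the `read_real_vehicle_position()` hook are assumptions of ours, and the real platform adds the real-time synchronization and driver feedback that make the emulation usable.

```python
import traci   # SUMO's Traffic Control Interface (bundled with SUMO)

def read_real_vehicle_position():
    """Hypothetical hook returning (x, y, angle) from the real car's GPS/CAN feed;
    here it just returns a fixed pose for illustration."""
    return 100.0, 50.0, 90.0

traci.start(["sumo", "-c", "scenario.sumocfg", "--step-length", "0.1"])
# Ghost vehicle standing in for the real car; a route named "route0" is assumed to exist.
traci.vehicle.add("real_car", "route0")

for _ in range(600):                                   # 60 s at 100 ms steps
    x, y, angle = read_real_vehicle_position()
    # Overwrite the ghost vehicle's pose with the real vehicle's measurements
    # (edge "", lane -1, keepRoute=2 allow free placement on the map).
    traci.vehicle.moveToXY("real_car", "", -1, x, y, angle, keepRoute=2)
    traci.simulationStep()                             # the simulated traffic advances

traci.close()
```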

Journal ArticleDOI
TL;DR: This work focuses on substituting a computationally expensive simulator by a cheap emulator to enable studying applications where running the simulator is prohibitively expensive, and discusses the approach and evaluates its performance based on a typical example in the realm of computational wind engineering.
Abstract: This work focuses on substituting a computationally expensive simulator by a cheap emulator to enable studying applications where running the simulator is prohibitively expensive. The procedure consists of two steps. In a first step, the emulator is calibrated to closely mimic the simulator response for a number of pre-defined cases. In a second step, the calibrated emulator is used as a surrogate for the simulator in the otherwise prohibitively expensive application. An appealing feature of the proposed framework, contrary to other approaches, is that the uncertainty on the emulator prediction can be determined. While the proposed framework is applicable in virtually all areas of natural sciences, we discuss the approach and evaluate its performance based on a typical example in the realm of computational wind engineering, namely the determination of the wind field in an urban area. Highlights: statistical model emulation is put forward as an alternative to physics-based simulation; the main advantages are 10^6-10^8 times lower computational cost and controllable accuracy; an extensive performance assessment demonstrates the reliability of the method; step-by-step instructions and references are given to start using model emulation; and sensitivity to all user-based choices is studied, with rules of thumb suggested.

Journal ArticleDOI
TL;DR: This approach yields performance predictions of classical dense linear algebra kernels accurate to within a few percent, obtained in a matter of seconds, which allows both runtime and application designers to quickly decide which optimization to enable or whether it is worth investing in higher-end graphics processing units.
Abstract: Multi-core architectures comprising several graphics processing units (GPUs) have become mainstream in the field of high-performance computing. However, obtaining the maximum performance of such heterogeneous machines is challenging as it requires carefully off-loading computations and managing data movements between the different processing units. The most promising and successful approaches so far build on task-based runtimes that abstract the machine and rely on opportunistic scheduling algorithms. As a consequence, the problem gets shifted to choosing the task granularity, task graph structure, and optimizing the scheduling strategies. Trying different combinations of these different alternatives is also itself a challenge. Indeed, obtaining accurate measurements requires reserving the target system for the whole duration of experiments. Furthermore, observations are limited to the few available systems at hand and may be difficult to generalize. In this article, we show how we crafted a coarse-grain hybrid simulation/emulation of StarPU, a dynamic runtime for hybrid architectures, over SimGrid, a versatile simulator of distributed systems. This approach allows one to obtain performance predictions of classical dense linear algebra kernels accurate to within a few percent in a matter of seconds, which allows both runtime and application designers to quickly decide which optimization to enable or whether it is worth investing in higher-end graphics processing units or not. Additionally, it allows one to conduct robust and extensive scheduling studies in a controlled environment whose characteristics are very close to real platforms while having reproducible behavior. Copyright © 2015 John Wiley & Sons, Ltd.

Proceedings ArticleDOI
11 May 2015
TL;DR: This paper proposes solutions, based on the Mininet environment and the POX Openflow controller, that emulate the effects of three different energy saving protocols, and is validated by comparing energy savings obtained by activating these protocols in an emulated network topology inspired by the Brazilian Research Network.
Abstract: A significant number of green, energy-saving network protocols have been invented in recent years in response to demand for reducing the amount of energy consumed by network infrastructure. In this paper, we report on the difficulties we encountered when building an SDN environment that could emulate energy saving protocols operating at different layers of the network. We propose solutions, based on the Mininet environment and the POX Openflow controller, that emulate the effects of three different energy saving protocols. Our approach is validated by comparing energy savings obtained by activating these protocols in an emulated network topology inspired by the Brazilian Research Network.

Proceedings ArticleDOI
30 Sep 2015
TL;DR: An emulation-based study reveals insights about the broad design space, the expected impact of workload, and gains due to multi-threaded execution in a general purpose two-layer packet-level caching system.
Abstract: Recent work motivates the design of Information-centric routers that make use of hierarchies of memory to jointly scale in the size and speed of content stores. The present paper advances this understanding by (i) instantiating a general purpose two-layer packet-level caching system, (ii) investigating the solution design space via emulation, and (iii) introducing a proof-of-concept prototype. The emulation-based study reveals insights about the broad design space, the expected impact of workload, and gains due to multi-threaded execution. The full-blown system prototype experimentally confirms that, by exploiting both DRAM and SSD memory technologies, ICN routers can sustain cache operations in excess of 10Gbps running on off-the-shelf hardware.
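
The layered content-store idea can be captured structurally in a few lines: a small, fast layer (DRAM) in front of a large, slower layer (SSD), with least-recently-used eviction in both and DRAM victims demoted to the SSD layer. This is only a structural sketch of the general approach under assumed slot counts, not the prototype's multi-threaded data path operating on real memory and SSD devices.

```python
from collections import OrderedDict

class TwoLayerStore:
    """Packet-level content store: small fast layer (DRAM) in front of a
    large slow layer (SSD). LRU in both; DRAM evictions are demoted to SSD."""
    def __init__(self, dram_slots, ssd_slots):
        self.dram, self.ssd = OrderedDict(), OrderedDict()
        self.dram_slots, self.ssd_slots = dram_slots, ssd_slots

    def insert(self, name, packet):
        self._put(self.dram, self.dram_slots, name, packet)

    def lookup(self, name):
        if name in self.dram:                        # hit in the fast layer
            self.dram.move_to_end(name)
            return self.dram[name]
        if name in self.ssd:                         # hit in the slow layer:
            packet = self.ssd.pop(name)              # promote back to DRAM
            self.insert(name, packet)
            return packet
        return None                                  # miss: fetch upstream

    def _put(self, layer, slots, name, packet):
        layer[name] = packet
        layer.move_to_end(name)
        if len(layer) > slots:
            old_name, old_packet = layer.popitem(last=False)
            if layer is self.dram:                   # demote the DRAM victim to SSD
                self._put(self.ssd, self.ssd_slots, old_name, old_packet)

store = TwoLayerStore(dram_slots=2, ssd_slots=4)
for i in range(5):
    store.insert(f"/video/seg{i}", b"payload")
print(store.lookup("/video/seg0") is not None)       # True: served from the SSD layer
```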

Journal ArticleDOI
TL;DR: The design process of the compensator is described, and it is validated in both the time and frequency domains, and the effectiveness of the compensate is demonstrated by simulation and experimental emulation of aero gas engine dynamics.
Abstract: A system compensator is designed to cancel the mechanical characteristics of a power electronics-controlled induction motor drive with a coupled electrical generator system. The emulation scheme using the compensator enables accurate emulation of high-speed and high-power mechanical system dynamics. The compensator is developed based on the system transient response of the test rig, considering the full operating range of the test system. The design process of the compensator is described, and it is validated in both the time and frequency domains. Finally, the effectiveness of the compensator is demonstrated by simulation and experimental emulation of aero gas engine dynamics.

Journal ArticleDOI
TL;DR: In this paper, the authors extend the emulation logic of violent conflict contagion to the potential contagion of nonviolent conflict via spillover effects and the desire to emulate events abroad.
Abstract: Violent domestic conflicts spread between countries via spillover effects and the desire to emulate events abroad. Herein, we extend this emulation logic to the potential for the contagion of nonvi...