
Showing papers on "Emulation" published in 2012


Proceedings ArticleDOI
10 Dec 2012
TL;DR: This paper puts CBE to the test, using the prototype, Mininet-HiFi, to reproduce key results from published network experiments such as DCTCP, Hedera, and router buffer sizing and suggests that CBE makes research results easier to reproduce and build upon.
Abstract: In an ideal world, all research papers would be runnable: simply click to replicate all results, using the same setup as the authors. One approach to enable runnable network systems papers is Container-Based Emulation (CBE), where an environment of virtual hosts, switches, and links runs on a modern multicore server, using real application and kernel code with software-emulated network elements. CBE combines many of the best features of software simulators and hardware testbeds, but its performance fidelity is unproven. In this paper, we put CBE to the test, using our prototype, Mininet-HiFi, to reproduce key results from published network experiments such as DCTCP, Hedera, and router buffer sizing. We report lessons learned from a graduate networking class at Stanford, where 37 students used our platform to replicate 18 published results of their own choosing. Our experiences suggest that CBE makes research results easier to reproduce and build upon.
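
As a concrete illustration of the container-based emulation workflow described above, the sketch below builds a tiny two-host topology with CPU-limited hosts and shaped links using the Mininet Python API (Mininet 2.x grew out of the Mininet-HiFi prototype). The topology, bandwidth, delay, and CPU fractions are illustrative choices, not parameters taken from the paper.

```python
# Minimal sketch of a container-based emulation experiment with the Mininet
# Python API; run as root on a Linux host with Mininet installed.
# Topology and resource limits below are illustrative, not from the paper.
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.node import CPULimitedHost
from mininet.link import TCLink

class TwoHostTopo(Topo):
    def build(self):
        # Each "host" is a lightweight container (network namespace) pinned
        # to a fraction of one CPU, so application code runs unmodified.
        h1 = self.addHost('h1', cpu=0.25)
        h2 = self.addHost('h2', cpu=0.25)
        s1 = self.addSwitch('s1')
        # Software links with shaped bandwidth, delay, and queue size.
        self.addLink(h1, s1, bw=10, delay='1ms', max_queue_size=100)
        self.addLink(h2, s1, bw=10, delay='1ms', max_queue_size=100)

if __name__ == '__main__':
    net = Mininet(topo=TwoHostTopo(), host=CPULimitedHost, link=TCLink)
    net.start()
    h1, h2 = net.get('h1', 'h2')
    net.pingAll()                                   # sanity check: connectivity
    h2.cmd('iperf -s &')                            # start an iperf server on h2
    print(h1.cmd('iperf -c %s -t 5' % h2.IP()))     # measure achievable throughput
    net.stop()
```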

507 citations


Proceedings ArticleDOI
19 Sep 2012
TL;DR: This paper presents Multi2Sim, an open-source, modular, and fully configurable toolset that enables ISA-level simulation of an x86 CPU and an AMD Evergreen GPU, and addresses program emulation correctness, as well as architectural simulation accuracy, using AMD's OpenCL benchmark suite.
Abstract: Accurate simulation is essential for the proper design and evaluation of any computing platform. With the current move toward the CPU-GPU heterogeneous computing era, researchers need a simulation framework that can model both kinds of computing devices and their interaction. In this paper, we present Multi2Sim, an open-source, modular, and fully configurable toolset that enables ISA-level simulation of an x86 CPU and an AMD Evergreen GPU. Focusing on a model of the AMD Radeon 5870 GPU, we address program emulation correctness, as well as architectural simulation accuracy, using AMD's OpenCL benchmark suite. Simulation capabilities are demonstrated with a preliminary architectural exploration study and workload characterization examples. The project source code, benchmark packages, and a detailed user's guide are publicly available at www.multi2sim.org.

440 citations


Proceedings ArticleDOI
22 Aug 2012
TL;DR: An energy emulation tool that allows developers to estimate the energy use of their mobile apps on their development workstation itself, scaling the emulated resources, including processing speed and network characteristics, to match app behavior to that on a real mobile device.
Abstract: Battery life is a critical performance and user experience metric on mobile devices. However, it is difficult for app developers to measure the energy used by their apps, and to explore how energy use might change with conditions that vary outside of the developer's control such as network congestion, choice of mobile operator, and user settings for screen brightness. We present an energy emulation tool that allows developers to estimate the energy use for their mobile apps on their development workstation itself. The proposed techniques scale the emulated resources including the processing speed and network characteristics to match the app behavior to that on a real mobile device. We also enable exploring multiple operating conditions that the developers cannot easily reproduce in their lab. The estimation of energy relies on power models for various components, and we also add new power models for components not modeled in prior works such as AMOLED displays. We also present a prototype implementation of this tool and evaluate it through comparisons with real device energy measurements.
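
The component power models the abstract refers to are typically linear in observable utilization counters. The sketch below illustrates the general idea with hypothetical coefficients (the CPU, radio, and AMOLED numbers are made up for illustration and are not the paper's calibrated models); an AMOLED term is included because its power depends on displayed pixel intensity rather than a fixed backlight.

```python
# Illustrative utilization-based energy estimate; every coefficient below is a
# hypothetical placeholder, not a calibrated value from the paper.
RADIO_W = {'idle': 0.01, 'wifi_active': 0.25, '3g_active': 0.60}   # radio power states

def cpu_power(util, freq_ghz, base_w=0.10, per_ghz_w=0.35):
    """Active CPU power modeled as linear in utilization and clock frequency."""
    return base_w + per_ghz_w * freq_ghz * util

def amoled_power(avg_rgb, per_channel_w=(2.5e-7, 3.0e-7, 4.5e-7), pixels=1_000_000):
    """AMOLED power grows with emitted light, per colour channel and per pixel."""
    r, g, b = avg_rgb                      # average channel intensities in [0, 1]
    wr, wg, wb = per_channel_w
    return pixels * (wr * r + wg * g + wb * b)

def energy_joules(samples, dt=1.0):
    """Integrate total power over fixed-length sampling intervals (seconds)."""
    total = 0.0
    for s in samples:
        p = (cpu_power(s['cpu_util'], s['freq_ghz'])
             + RADIO_W[s['net_state']]
             + amoled_power(s['avg_rgb']))
        total += p * dt
    return total

trace = [  # one-second samples that an emulated run of an app might produce
    {'cpu_util': 0.8, 'freq_ghz': 1.2, 'net_state': '3g_active', 'avg_rgb': (0.2, 0.2, 0.2)},
    {'cpu_util': 0.1, 'freq_ghz': 0.6, 'net_state': 'idle',      'avg_rgb': (0.9, 0.9, 0.9)},
]
print('estimated energy: %.2f J' % energy_joules(trace))
```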

296 citations


Patent
05 Nov 2012
TL;DR: In this paper, the authors describe a method for detecting suspicious behavior associated with an object, instantiating an emulation environment in response to the detected suspicious behavior, processing, recording responses to, and tracing operations of the object within the emulation environment, detecting a divergence between the operations traced in a virtualization environment and those traced in the emulation environment, re-instantiating the virtualization environment, providing the recorded responses from the emulation environment to the object in the virtualization environment, monitoring the object's operations, identifying untrusted actions, and generating a report regarding those actions.
Abstract: Systems and methods for virtualization and emulation malware enabled detection are described. In some embodiments, a method comprises intercepting an object, instantiating and processing the object in a virtualization environment, tracing operations of the object while processing within the virtualization environment, detecting suspicious behavior associated with the object, instantiating an emulation environment in response to the detected suspicious behavior, processing, recording responses to, and tracing operations of the object within the emulation environment, detecting a divergence between the traced operations of the object within the virtualization environment to the traced operations of the object within the emulation environment, re-instantiating the virtualization environment, providing the recorded response from the emulation environment to the object in the virtualization environment, monitoring the operations of the object within the re-instantiation of the virtualization environment, identifying untrusted actions from the monitored operations, and generating a report regarding the identified untrusted actions of the object.
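
The patent's core comparison step, finding where the object's traced operations in the virtualization environment diverge from those in the emulation environment, can be pictured as a first-mismatch search over two operation logs. The sketch below is a conceptual illustration of that step only (the operation names are invented), not an implementation of the patented system.

```python
from itertools import zip_longest

def first_divergence(virt_trace, emu_trace):
    """Return the index and differing operations where two traces diverge,
    or None if the traces are identical element for element."""
    for i, (v_op, e_op) in enumerate(zip_longest(virt_trace, emu_trace)):
        if v_op != e_op:
            return i, v_op, e_op
    return None

# Invented example traces: the object hides behavior when it detects emulation.
virt = ['open_file', 'read_registry', 'connect 10.0.0.5:443', 'write_payload']
emu  = ['open_file', 'read_registry', 'sleep 60000']
print(first_divergence(virt, emu))   # -> (2, 'connect 10.0.0.5:443', 'sleep 60000')
```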

233 citations


Journal ArticleDOI
TL;DR: The main aim of the paper is to provide an introduction to emulation modelling together with a unified strategy for its application, so that modellers from different disciplines can better appreciate how it may be applied in their area of expertise.
Abstract: Emulation modelling is an effective way of overcoming the large computational burden associated with the process-based models traditionally adopted by the environmental modelling community. An emulator is a low-order, computationally efficient model identified from the original large model and then used to replace it for computationally intensive applications. As the number and forms of problems that benefit from the identification and subsequent use of an emulator are very large, emulation modelling has emerged in different sectors of science, engineering and social science. For this reason, a variety of different strategies and techniques have been proposed in the last few years. The main aim of the paper is to provide an introduction to emulation modelling, together with a unified strategy for its application, so that modellers from different disciplines can better appreciate how it may be applied in their area of expertise. Particular emphasis is devoted to Dynamic Emulation Modelling (DEMo), a methodological approach that preserves the dynamic nature of the original process-based model, with consequent advantages in a wide variety of problem areas. The different techniques and approaches to DEMo are considered in two macro categories: structure-based methods, where the mathematical structure of the original model is manipulated into a simpler, more computationally efficient form; and data-based approaches, where the emulator is identified and estimated from a data-set generated from planned experiments conducted on the large simulation model. The main contribution of the paper is a unified, six-step procedure that can be applied to most kinds of dynamic emulation problem.
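
As a small, hedged illustration of the data-based branch described above: the sketch treats a toy nonlinear ODE as the stand-in "large" process-based model, generates a design of simulation runs, and identifies a low-order linear ARX emulator by least squares. The toy model, input design, and ARX order are illustrative choices, not the paper's case studies or its six-step procedure.

```python
import numpy as np

# Stand-in "process-based model": a nonlinear first-order system driven by u(t);
# in a real application this would be the expensive simulator.
def run_large_model(u, dt=0.1, x0=0.0):
    x, out = x0, []
    for uk in u:
        x = x + dt * (-0.8 * x - 0.2 * x ** 3 + uk)
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 2000)        # planned experiment: persistently exciting input
y = run_large_model(u)

# Identify a low-order ARX emulator: y[k] ~ a1*y[k-1] + a2*y[k-2] + b1*u[k-1].
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)

# Validate one-step-ahead emulation on fresh data from the same input design.
u_val = rng.uniform(-1, 1, 500)
y_val = run_large_model(u_val)
Phi_val = np.column_stack([y_val[1:-1], y_val[:-2], u_val[1:-1]])
rmse = np.sqrt(np.mean((Phi_val @ theta - y_val[2:]) ** 2))
print('ARX coefficients:', np.round(theta, 3), ' validation RMSE:', round(float(rmse), 5))
```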

146 citations


Journal ArticleDOI
TL;DR: This paper proposes a defense strategy against the PUE attack in CR networks using belief propagation, which avoids the deployment of the additional sensor networks and expensive hardware used in the existing literature.
Abstract: Cognitive radio (CR) is a promising technology for future wireless spectrum allocation to improve the usage of the licensed bands. However, CR wireless networks are susceptible to various attacks and cannot offer efficient security. Primary user emulation (PUE) is one of the most serious attacks on CR networks, as it can significantly increase the spectrum access failure probability. In this paper, we propose a defense strategy against the PUE attack in CR networks using belief propagation, which avoids the deployment of the additional sensor networks and expensive hardware used in the existing literature. In our proposed approach, each secondary user calculates the local function and the compatibility function, computes the messages, exchanges messages with the neighboring users, and calculates the beliefs until convergence. Then, the PUE attacker will be detected, and all the secondary users in the network will be notified via broadcast about the characteristics of the attacker's signal. Therefore, all secondary users can avoid the PUE attacker's primary emulation signal in the future. Simulation results show that our proposed approach converges quickly and is effective in detecting the PUE attacker.
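
The message-passing loop the abstract outlines maps directly onto standard loopy belief propagation over the secondary-user neighbor graph. Below is a minimal sketch using invented local likelihoods and a generic agreement-favoring compatibility function; it illustrates the compute-messages/exchange/compute-beliefs cycle, not the paper's specific functions or detection threshold.

```python
import numpy as np

# Neighbor graph over five secondary users (undirected edges).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (1, 3)]
neighbors = {i: [] for i in range(5)}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

# Local functions: each SU's likelihood over {legitimate primary, PUE attacker},
# e.g. from how far its received-signal statistics deviate from the primary's profile.
phi = np.array([[0.7, 0.3], [0.6, 0.4], [0.2, 0.8], [0.3, 0.7], [0.5, 0.5]])

# Compatibility function: neighboring SUs observing the same transmitter
# should tend to agree on its label.
psi = np.array([[0.8, 0.2], [0.2, 0.8]])

# Messages m[(i, j)] = message from user i to neighbor j, initialized uniform.
msgs = {(i, j): np.ones(2) / 2 for i in neighbors for j in neighbors[i]}

for _ in range(50):
    new_msgs, delta = {}, 0.0
    for (i, j), old in msgs.items():
        incoming = np.ones(2)                 # product of messages into i, except from j
        for k in neighbors[i]:
            if k != j:
                incoming *= msgs[(k, i)]
        m = psi.T @ (phi[i] * incoming)       # sum over s_i of phi * psi * incoming
        m /= m.sum()
        new_msgs[(i, j)] = m
        delta = max(delta, np.abs(m - old).max())
    msgs = new_msgs
    if delta < 1e-6:                          # converged
        break

beliefs = []
for i in neighbors:
    b = phi[i].copy()
    for k in neighbors[i]:
        b *= msgs[(k, i)]
    beliefs.append(b / b.sum())
print(np.round(beliefs, 3))   # per-SU posterior over {legitimate, PUE attacker}
```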

114 citations


Journal ArticleDOI
TL;DR: This thematic issue aims at providing a guide and reference for modellers in choosing appropriate emulation modelling approaches and understanding their features; tools and applications of sensitivity analysis in the context of environmental modelling are also addressed.
Abstract: Emulation (also denoted as metamodelling in the literature) is an important and expanding area of research and represents one of the major advances in the study of complex mathematical models, with applications ranging from model reduction to sensitivity analysis. Despite the stunning increase in computing power over recent decades, computational limitations remain a major barrier to the effective and systematic use of large-scale, process-based simulation models in rational environmental decision-making. Whereas complex models may provide clear advantages when the goal of the modelling exercise is to enhance our understanding of the natural processes, they introduce problems of model identifiability caused by over-parameterization and suffer from high computational burden when used in management and planning problems, i.e. when they are combined with optimization routines. Therefore, a combination of techniques for complex model reduction with procedures for data assimilation and learning-based control could help to bridge the gap between science and the operational use of models for decision-making. Furthermore, sensitivity analysis is a well-known and established tool for evaluating the robustness of model-based results in management and planning, and is often performed in tandem with emulation. Indeed, emulators provide an efficient means for doing a sensitivity analysis for large and expensive models. This thematic issue aims at providing a guide and reference for modellers in choosing appropriate emulation modelling approaches and understanding their features. Tools and applications of sensitivity analysis in the context of environmental modelling, a typical complement of emulation in most applications, are also addressed. We hope that this thematic issue provides a useful benchmark in the academic literature for this important and expanding area of research, and will create an opportunity for dialogue between methodological and user-focused research.

111 citations


Proceedings ArticleDOI
31 Mar 2012
TL;DR: This work takes advantage of ubiquitous multicore platforms, using a multithreaded approach to implement DBT, and demonstrates with a multi-threaded DBT prototype, called HQEMU, that it can improve QEMU performance by factors of 2.4X and 4X on the SPEC 2006 integer and floating point benchmarks, respectively.
Abstract: Dynamic binary translation (DBT) is a core technology for many important applications such as system virtualization, dynamic binary instrumentation and security. However, there are several factors that often impede its performance: (1) emulation overhead before translation; (2) translation and optimization overhead; and (3) translated code quality. For the dynamic binary translator itself, a further issue is retargetability: supporting guest applications from different instruction-set architectures (ISAs) on host machines that also have different ISAs, an important feature for system virtualization. In this work, we take advantage of ubiquitous multicore platforms and use a multithreaded approach to implement DBT. By running the translators and the dynamic binary optimizers on different threads on different cores, the overhead that DBT imposes on the target applications can be off-loaded, affording DBT more sophisticated optimization techniques as well as support for retargetability. Using QEMU (a popular retargetable DBT for system virtualization) and LLVM (Low Level Virtual Machine) as our building blocks, we demonstrate with a multi-threaded DBT prototype, called HQEMU, that it can improve QEMU performance by factors of 2.4X and 4X on the SPEC 2006 integer and floating point benchmarks for x86 to x86-64 emulation, respectively; i.e. it is only 2.5X and 2.1X slower than native execution of the same benchmarks on x86-64, as opposed to 6X and 8.4X slowdowns on QEMU. For ARM to x86-64 emulation, HQEMU gains a factor of 2.4X speedup over QEMU for the SPEC 2006 integer benchmarks.
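
The key structural idea, running the optimizer on separate cores so that heavyweight translation stays off the emulation thread's critical path, can be sketched with a work queue. Python stands in here for the C/LLVM machinery and the "optimization" is faked with a sleep; this illustrates only the decoupling, not HQEMU's actual translator.

```python
import queue, threading, time

hot_traces = queue.Queue()      # emulation thread submits hot code regions here
code_cache = {}                 # optimizer thread installs "optimized" entries
cache_lock = threading.Lock()

def optimizer_worker():
    # Runs on another core: heavyweight optimization never stalls emulation.
    while True:
        pc = hot_traces.get()
        if pc is None:
            return
        time.sleep(0.05)                            # stand-in for LLVM-level optimization
        with cache_lock:
            code_cache[pc] = 'optimized(%#x)' % pc

def emulate(blocks):
    exec_counts = {}
    for pc in blocks:
        with cache_lock:
            optimized = code_cache.get(pc)
        if optimized is None:
            # Cheap baseline translation/interpretation on the emulation thread.
            exec_counts[pc] = exec_counts.get(pc, 0) + 1
            if exec_counts[pc] == 50:               # region became hot: hand it off
                hot_traces.put(pc)
        # else: dispatch straight into the optimized code-cache entry

worker = threading.Thread(target=optimizer_worker, daemon=True)
worker.start()
emulate([0x400000, 0x400040] * 1000)                # emulation proceeds without blocking
time.sleep(0.2)                                     # let the optimizer drain the queue
hot_traces.put(None)
worker.join()
print(code_cache)
```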

107 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that AMUSE can emulate soft error effects for complex circuits including microprocessors and memories, considering the real delays of an ASIC technology, and support massive fault injection campaigns, in the order of tens of millions of faults within acceptable time.
Abstract: Estimation of soft error sensitivity is crucial in order to devise optimal mitigation solutions that can satisfy reliability requirements with reduced impact on area, performance, and power consumption. In particular, the estimation of Single Event Transient (SET) effects for complex systems that include a microprocessor is challenging, due to the huge potential number of different faults and effects that must be considered, and the delay-dependent nature of SET effects. In this paper, we propose a multilevel FPGA emulation-based fault injection approach for evaluation of SET effects called AMUSE (Autonomous MUltilevel emulation system for Soft Error evaluation). This approach integrates Gate level and Register-Transfer level models of the circuit under test in an FPGA and is able to switch to the appropriate model as needed during emulation. Fault injection is performed at the Gate level, which provides delay accuracy, while fault propagation across clock cycles is performed at the Register-Transfer level for higher performance. Experimental results demonstrate that AMUSE can emulate soft error effects for complex circuits, including microprocessors and memories, considering the real delays of an ASIC technology, and support massive fault injection campaigns, on the order of tens of millions of faults, within acceptable time.
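
To make the "massive fault injection campaign" idea concrete, the toy sketch below exhaustively flips single bits in a small software stand-in for a register file and classifies each run against a golden result. It is a conceptual software analogy of the campaign loop only; AMUSE itself injects faults at the gate level inside an FPGA.

```python
import random

def workload(values):
    # Toy "circuit under test": report the maximum of a register file.
    return max(values)

random.seed(1)
regs = [random.randrange(256) for _ in range(32)]
golden = workload(regs)

# Exhaustive single-bit-upset campaign: 32 registers x 8 bits = 256 faults.
outcomes = {'masked': 0, 'corrupted': 0}
for reg in range(len(regs)):
    for bit in range(8):
        faulty = list(regs)
        faulty[reg] ^= 1 << bit          # single-event upset: flip one bit
        outcomes['masked' if workload(faulty) == golden else 'corrupted'] += 1
print(outcomes)
```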

102 citations


Book
15 Oct 2012
TL;DR: In this article, an extensive introduction to the modeling of photovoltaic generators and their emulation by means of power electronic converters is presented, which can aid in understanding and improving design and set up of new PV plants.
Abstract: Modeling of photovoltaic sources and their emulation by means of power electronic converters are challenging issues. The former is tied to knowledge of the electrical behavior of the PV generator; the latter consists in its realization by a suitable power amplifier. This extensive introduction to the modeling of PV generators and their emulation by means of power electronic converters will aid in understanding and improving the design and set-up of new PV plants. The main benefit of reading Photovoltaic Sources is the ability to tackle the emulation of photovoltaic generators through the design of suitable equipment in which voltage and current are the same as in a real source. This is achieved in two steps: modeling the electrical behavior of the source, then designing the power converter, including its control, for the laboratory emulator. This approach allows the reader to create an indoor virtual photovoltaic plant, in which the environmental conditions can be imposed by the user, for testing real operation including maximum power point tracking, partial shading, and control for grid or load interfacing. Photovoltaic Sources is intended to meet the demands of postgraduate-level students, and should prove useful to professional engineers and researchers dealing with the problems associated with modeling and emulation of photovoltaic sources.
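
A common starting point for the source-modelling step is the single-diode equation, which is implicit in the current and therefore solved numerically; the reference emulator then has to reproduce the resulting I-V curve. The sketch below uses illustrative parameters (not values from the book) and a plain bisection solve to sweep the curve and locate the maximum power point.

```python
import math

def pv_current(v, iph=8.0, i0=1e-6, rs=0.2, rsh=300.0, n=1.3, ns=60, t=298.15):
    """Solve the implicit single-diode equation for module current at voltage v:
    I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh."""
    vt = 1.380649e-23 * t / 1.602176634e-19          # thermal voltage kT/q
    a = n * ns * vt

    def f(i):
        return iph - i0 * (math.exp((v + i * rs) / a) - 1) - (v + i * rs) / rsh - i

    if f(0.0) <= 0.0:                                # beyond open-circuit voltage
        return 0.0
    lo, hi = 0.0, iph + 1.0                          # f(lo) > 0 and f(hi) < 0
    for _ in range(60):                              # plain bisection (f is monotone in i)
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sweep the I-V curve and locate the maximum power point (MPP).
points = [(v / 10.0, pv_current(v / 10.0)) for v in range(0, 340)]
vmpp, impp = max(points, key=lambda p: p[0] * p[1])
print('Isc ~ %.2f A; MPP ~ %.1f V x %.2f A = %.1f W'
      % (points[0][1], vmpp, impp, vmpp * impp))
```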

93 citations


Journal ArticleDOI
TL;DR: The main novelty of the proposed framework is that it provides a set of experimental capabilities missing from other approaches, e.g. safe experimentation with real malware and the flexibility to use different physical processes.

Journal ArticleDOI
TL;DR: Preliminary results show that the proposed approach significantly simplifies the learning of good operating policies and can highlight interesting properties of the system to be controlled.
Abstract: The optimal management of large environmental systems is often limited by the high computational burden associated with the process-based models commonly adopted to describe such systems. In this paper we propose a novel data-driven Dynamic Emulation Modelling approach for the construction of small, computationally efficient models that accurately emulate the main dynamics of the original process-based model, but with lower computational requirements. The approach combines the many advantages of data-based modelling in representing complex, non-linear relationships, but preserves the state-space representation, which is particularly effective in several applications (e.g. optimal management and data assimilation) and facilitates the ex-post physical interpretation of the emulator structure, thus enhancing the credibility of the model to stakeholders and decision-makers. The core mechanism is a novel variable selection procedure that is recursively applied to a data-set of input, state and output variables generated via simulation of the process-based model. The approach is demonstrated on a real-world case study concerning the optimal operation of a selective withdrawal reservoir (Tono Dam, Japan) suffering from downstream water quality problems. The emulator is identified on a data-set generated with a 1D coupled hydrodynamic-ecological model and subsequently used to design the optimal operating policy for the dam. Preliminary results show that the proposed approach significantly simplifies the learning of good operating policies and can highlight interesting properties of the system to be controlled.

Proceedings ArticleDOI
03 Mar 2012
TL;DR: This work proposes V2E, a new analysis platform that combines hardware virtualization and software emulation, ensures the execution replay is precise, and can easily adjust to analyze various forms of malware.
Abstract: A transparent and extensible malware analysis platform is essential for defeating malware. This platform should be transparent so malware cannot easily detect and bypass it. It should also be extensible to provide strong support for heavyweight instrumentation and analysis efficiency. However, no existing platform can meet both requirements. Leveraging hardware virtualization technology, analysis platforms like Ether can achieve good transparency, but their instrumentation support and analysis efficiency are poor. In contrast, software emulation provides strong support for code instrumentation and good analysis efficiency by using dynamic binary translation. However, analysis platforms based on software emulation can be easily detected by malware and are thus poor in transparency. To achieve both transparency and extensibility, we propose a new analysis platform that combines hardware virtualization and software emulation. The essence is precise heterogeneous replay: the malware execution is recorded via hardware virtualization and then replayed in software. Our design ensures the execution replay is precise. Moreover, with page-level recording granularity, the platform can easily adjust to analyze various forms of malware (a process, a kernel module, or a shared library). We implemented a prototype called V2E and demonstrated its capability and efficiency by conducting an extensive evaluation with both synthetic samples and 14 real-world emulation-resistant malware samples.

Journal ArticleDOI
TL;DR: This paper focuses on debugging digital controllers to be implemented in Field Programmable Gate Arrays or Application Specific Integrated Circuits, which are designed in hardware description languages, with the main conclusion that 32-bit floating point is not enough for medium and high switching frequencies.
Abstract: Debugging digital controllers for power converters can be a problem because there are both digital and analog components. This paper focuses on debugging digital controllers to be implemented in Field Programmable Gate Arrays or Application Specific Integrated Circuits, which are designed in hardware description languages. Four methods are proposed and described. All of them allow simulation, and two methods also allow emulation: synthesizing the model of the converter to run the complete closed-loop system in actual hardware. The first method consists in using a mixed analog and digital simulator. This is the easiest alternative for the designer, but simulation time can be a problem, especially for long simulations like those necessary in power factor correction or when the controller is very complex, for example, with embedded processors. The alternative is to use pure digital models, generating a digital model of the power converter. Three methods are proposed: real type, float type and fixed point models (in the latter case including hand-coded and automatic-coded descriptions). Float and fixed point models are synthesizable, so emulation is possible, achieving speedups over 20 000. The results obtained with each method are presented, highlighting the advantages and disadvantages of each one. Apart from that, an analysis of the necessary resolution in the variables is presented, the main conclusion being that 32-bit floating point is not enough for medium and high switching frequencies.
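
The resolution claim in the last sentence is easy to reproduce: when a synthesizable converter model integrates its state with the very small time steps implied by high switching frequencies, the per-step increments can fall below single-precision resolution and are silently dropped. The numbers below are a generic illustration of that effect, not the paper's converter model.

```python
import numpy as np

# A converter state variable near 1.0 integrated with a per-step increment of
# 5e-8 (a tiny v*dt/L term at a high switching frequency). Single-precision
# resolution near 1.0 is ~1.2e-7, so each increment is below one ULP and is lost.
steps, delta = 200_000, 5e-8

state32 = np.float32(1.0)
state64 = 1.0
for _ in range(steps):
    state32 += np.float32(delta)
    state64 += delta

print('float32 state:', state32)               # still exactly 1.0
print('float64 state:', round(state64, 6))     # ~1.01, the physically correct drift
print('float32 eps  :', np.finfo(np.float32).eps)
```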

06 Aug 2012
TL;DR: This paper methodically models the Tor network by exploring and justifying every modeling choice required to produce accurate Tor experimentation environments and finds that this model enables experiments that characterize Tor's load and performance with reasonable accuracy.
Abstract: Live Tor network experiments are difficult due to Tor's distributed nature and the privacy requirements of its client base. Alternative experimentation approaches, such as simulation and emulation, must make choices about how to model various aspects of the Internet and Tor that are not possible or not desirable to duplicate or implement directly. This paper methodically models the Tor network by exploring and justifying every modeling choice required to produce accurate Tor experimentation environments. We validate our model using two state-of-the-art Tor experimentation tools and measurements from the live Tor network. We find that our model enables experiments that characterize Tor's load and performance with reasonable accuracy.

Proceedings ArticleDOI
03 Dec 2012
TL;DR: Several novel mechanisms by which an attacker can delude an emulator are introduced, along with a novel approach to generating execution traces that uses a hardware feature available on commodity x86 processors so that the processor itself generates the traces.
Abstract: A detailed understanding of the behavior of exploits and malicious software is necessary to obtain a comprehensive overview of vulnerabilities in operating systems or client applications, and to develop protection techniques and tools. To this end, a lot of research has been done in the last few years on binary analysis techniques to efficiently and precisely analyze code. Most of the common analysis frameworks are based on software emulators since such tools offer fine-grained control over the execution of a given program. Naturally, this leads to an arms race where the attackers are constantly searching for new methods to detect such analysis frameworks in order to successfully evade analysis. In this paper, we focus on two aspects. As a first contribution, we introduce several novel mechanisms by which an attacker can delude an emulator. In contrast to existing detection approaches that perform a dedicated test on the environment and combine the test with an explicit conditional branch, our detection mechanisms introduce code sequences that have an implicitly different behavior on a native machine when compared to an emulator. Such differences in behavior are caused by the side-effects of the particular operations and imperfections in the emulation process that cannot be mitigated easily. Motivated by these findings, we introduce a novel approach to generate execution traces: we propose to utilize the processor itself to generate such traces. More precisely, we propose to use a hardware feature called branch tracing, available on commodity x86 processors, in which the log of all branches taken during code execution is generated directly by the processor. Effectively, the logging is thus performed at the lowest level possible. We evaluate the practical viability of this approach.

Proceedings ArticleDOI
01 Oct 2012
TL;DR: This paper investigates and compares the currently available simulation/emulation software for IoT emulation, finding that the current solutions are mostly appropriate for small- and medium-scale emulation but are not suitable for large-scale testing that reaches millions of nodes running concurrently.
Abstract: Internet of Things (IoT) is increasingly used in a plethora of fields to enable radically new ways for various purposes, ranging from monitoring the environment to enhancing the wellbeing of human life. With the ever-increasing size of such networks, it is fundamental to understand the issues that come with scaling on different networking layers. A cost-efficient approach to examine large-scale networks is to use simulators or emulators to test the infrastructure and its ability to support the desired applications. In this paper, we investigate and compare the currently available simulation/emulation software. We found that the current solutions are mostly appropriate for small- and medium-scale emulation; however, they are not suitable for large-scale testing that reaches millions of nodes running concurrently. We then propose a large-scale IoT emulator called MAMMotH and present a brief overview of its design. Finally, we discuss some of the current issues and future directions, e.g. radio link simulation.

Journal ArticleDOI
TL;DR: A new model for an ideal operational amplifier that does not include implicit equations and is thus suitable for implementation using wave digital filters (WDFs) is introduced, and a novel WDF model for a diode is proposed using the Lambert W function.
Abstract: This brief presents a generic model to emulate distortion circuits using operational amplifiers and diodes. Distortion circuits are widely used for enhancing the sound of guitars and other musical instruments. This brief introduces a new model for an ideal operational amplifier that does not include implicit equations and is thus suitable for implementation using wave digital filters (WDFs). Furthermore, a novel WDF model for a diode is proposed using the Lambert W function. A comparison of output signals of the proposed models to those obtained from a reference simulation using SPICE shows that the distortion characteristics are accurately reproduced over a wide frequency range. Additionally, the proposed model enables real-time emulation of distortion circuits using ten multiplications, 22 additions, and two interpolations from a lookup table per output sample.
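
The Lambert W route the brief takes can be illustrated on the underlying circuit equation: for a Shockley diode fed through a series resistance R from a source voltage u, the implicit relation i = Is(exp((u - Ri)/(nVt)) - 1) has the closed form i = (nVt/R) W((Is R)/(nVt) exp((u + Is R)/(nVt))) - Is. The sketch below evaluates that closed form and cross-checks it against a numerical solve; the component values are illustrative, and this is the textbook diode-resistor case rather than the brief's full WDF formulation.

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import brentq

IS, N, VT, R = 2.52e-9, 1.752, 25.85e-3, 2.2e3   # illustrative 1N4148-like values

def diode_current_lambertw(u):
    """Closed-form current of a diode in series with R across source voltage u."""
    a = N * VT
    w = lambertw((IS * R / a) * np.exp((u + IS * R) / a)).real
    return a / R * w - IS

def diode_current_numeric(u):
    """Reference: solve i = Is*(exp((u - R*i)/(n*Vt)) - 1) by bracketed root finding."""
    f = lambda i: IS * (np.exp((u - R * i) / (N * VT)) - 1) - i
    return brentq(f, -IS, u / R + 1.0)

for u in (0.2, 0.6, 1.0, 5.0):
    print(u, diode_current_lambertw(u), diode_current_numeric(u))
```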

Proceedings ArticleDOI
03 Jun 2012
TL;DR: A high-precision, low-overhead embedded test structure for measuring path delays to detect the delay anomalies introduced by hardware Trojans and is minimally invasive to the design as it leverages the existing scan structures.
Abstract: The horizontal dissemination of the chip fabrication industry has raised new concerns over Integrated Circuit (IC) Trust, in particular, the threat of malicious functionality, i.e., a Hardware Trojan, that is added by an adversary to an IC. In this paper, we propose the use of a high-precision, low-overhead embedded test structure for measuring path delays to detect the delay anomalies introduced by hardware Trojans. The proposed test structure, called REBEL, is minimally invasive to the design as it leverages the existing scan structures. In this work, we integrate REBEL into a structural description of a pipelined Floating Point Unit. Trojan emulation circuits, designed to model internal wire loads introduced by a hardware Trojan, are inserted into the design at multiple places. The emulation cell incorporates an analog control pin to allow a variety of hardware Trojan loading scenarios to be investigated. We evaluate the detection sensitivity of REBEL for detecting hardware Trojans using regression analysis and hardware data collected from 62 copies of the chip fabricated in 90nm CMOS technology.

Journal ArticleDOI
TL;DR: A novel approach to the modelling and emulation of general mem-systems without the necessity of utilizing a digital potentiometer and additional mutators is described.
Abstract: The recently published memristor emulator is based on a digital potentiometer, which is controlled by a microprocessor according to a programmed algorithm. After completing the emulator with suitable mutators, it is also possible to emulate memcapacitors and meminductors. This paper describes a novel approach to the modelling and emulation of general mem-systems without the necessity of utilizing a digital potentiometer and additional mutators.
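
For readers who want to experiment with a mem-system model in software, the sketch below integrates the widely used HP linear-drift memristor model under a sinusoidal drive and reports the resulting memristance swing; it is a generic textbook model with illustrative parameters, not the emulator or mutator-free approach proposed in the paper.

```python
import numpy as np

# HP linear-drift memristor: M(x) = Ron*x + Roff*(1 - x), dx/dt = (mu*Ron/D^2)*i(t).
RON, ROFF, D, MU = 100.0, 16e3, 10e-9, 1e-14
K = MU * RON / D ** 2

dt, T = 1e-4, 1.0
t = np.arange(0.0, T, dt)
v = 1.2 * np.sin(2 * np.pi * 2.0 * t)           # 2 Hz sinusoidal drive

x = 0.5                                          # normalized dopant-boundary position
m_trace = np.empty_like(t)
for k, vk in enumerate(v):
    m = RON * x + ROFF * (1.0 - x)               # instantaneous memristance
    m_trace[k] = m
    i = vk / m
    x = min(1.0, max(0.0, x + K * i * dt))       # forward-Euler state update, clipped

# The same voltage amplitude yields different currents on the rising and falling
# half-cycles because the memristance changes with charge: a pinched hysteresis loop.
print('memristance swings between %.0f and %.0f ohm' % (m_trace.min(), m_trace.max()))
```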

Journal ArticleDOI
TL;DR: A new methodology for performing multilevel emulation of a complex model as a function of any decision within a predefined class that makes specific use of a scenario ensemble of opportunity on a fast or early version of a simulator and a small, well‐chosen, design on the current simulator of interest.
Abstract: When using computer models to provide policy support it is normal to encounter ensembles that test only a handful of feasible or idealized decision scenarios. We present a new methodology for performing multilevel emulation of a complex model as a function of any decision within a predefined class that makes specific use of a scenario ensemble of opportunity on a fast or early version of a simulator and a small, well-chosen, design on our current simulator of interest. The method exploits a geometrical approach to Bayesian inference and is designed to be fast, to facilitate detailed diagnostic checking of our emulators by allowing us to carry out many analyses very quickly. Our motivating application involved constructing an emulator for the UK Met Office Hadley Centre coupled climate model HadCM3 as a function of carbon dioxide forcing, which was part of a ‘RAPID’ programme deliverable to the UK Met Office funded by the Natural Environment Research Council. Our application involved severe time pressure as well as limited access to runs of HadCM3 and a scenario ensemble of opportunity on a lower resolution version of the model.

Patent
19 Mar 2012
TL;DR: In this paper, a model of a computing device and an application that is executable in the computing device are identified, and a video signal of the application is encoded into a media stream.
Abstract: Disclosed are various embodiments that facilitate remote emulation of computing devices. A model of a computing device and an application that is executable in the computing device are identified. The application is executed in a hosted environment. A video signal of the application is encoded into a media stream. A user interface is encoded for rendering in a client. The user interface includes a graphical representation of the model of the computing device. A screen of the graphical representation of the model of the computing device is configured to render at least a portion of the video signal from the media stream.

Patent
15 Jun 2012
TL;DR: Emulation of an instruction is terminated prior to completing the execution of the instruction, after the emulation routine completes any previously initiated unit of operation of the instruction.
Abstract: Embodiments relate to intra-instructional transaction abort handling. An aspect includes using an emulation routine to execute an instruction within a transaction. The instruction includes at least one unit of operation. The transaction effectively delays committing stores to memory until the transaction has completed successfully. After receiving an abort indication, emulation of the instruction is terminated prior to completing the execution of the instruction. The instruction is terminated after the emulation routine completes any previously initiated unit of operation of the instruction.

Proceedings ArticleDOI
01 Dec 2012
TL;DR: This paper investigates performance and trade-offs related to TCAM emulation in FPGAs, and analyzes the impact of encoding different key ranges on rules for different configurations in terms of the search key length and the number of rules.
Abstract: Packet classification techniques are continuously challenged as network bandwidth increases and new services are deployed. Ternary Content Addressable Memories (TCAMs) have traditionally been used for scenarios requiring high-speed packet processing. However, TCAM-based classification suffers from high power consumption and clock rate limitations. Among several proposed solutions, TCAM emulation through RAM has emerged as a more flexible and energy-efficient strategy. On the other hand, Field-Programmable Gate Array (FPGA) devices have been evolving, providing not only abundant logical resources but also an increasing number of integrated RAM blocks. This paper investigates performance and trade-offs related to TCAM emulation in FPGAs. In particular, we analyze the impact of encoding different key ranges on rules for different configurations in terms of the search key length and the number of rules. To validate and evaluate actual performance, we report and discuss results of real implementations on FPGA devices. Our work shows that classification rates above 300 Mpps for both large keys and rule sets can be implemented with only a few megabits of RAM when considering up to medium-size range-matching intervals.
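
The range-encoding trade-off the paper analyzes stems from the classic expansion of an arbitrary range into ternary prefixes: each rule that matches a port or length range must be split into several prefix rules before it can be stored in a TCAM (or a RAM-based TCAM emulation). The sketch below implements the standard expansion and shows the well-known worst case for a 16-bit field; it is a generic illustration, not the paper's specific encoding scheme.

```python
def range_to_prefixes(lo, hi, width):
    """Split the integer range [lo, hi] into a minimal set of ternary prefixes.

    Each prefix is returned as (value, prefix_len); bits below prefix_len are
    wildcards. This is the classic expansion used when storing range rules in
    TCAM-style match tables."""
    prefixes = []
    while lo <= hi:
        # Largest aligned power-of-two block starting at lo that fits in the range.
        size = 1
        while lo % (size * 2) == 0 and lo + size * 2 - 1 <= hi and size * 2 <= 1 << width:
            size *= 2
        prefixes.append((lo, width - size.bit_length() + 1))
        lo += size
    return prefixes

# A friendly range expands compactly ...
print(range_to_prefixes(1024, 2047, 16))        # one prefix: (1024, /6)
# ... while the classic worst case for a 16-bit port field needs 30 prefixes.
worst = range_to_prefixes(1, 65534, 16)
print(len(worst), 'prefixes for the range [1, 65534]')
```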

Patent
17 Aug 2012
TL;DR: In this paper, a near field communications (NFC) device is disclosed that interacts with other NFC devices to exchange information and/or the data, and the NFC device can include a plurality of secure elements each configured to store one or more card emulation instances.
Abstract: A near field communications (NFC) device is disclosed that interacts with other NFC devices to exchange information and/or data. The NFC device can include a plurality of secure elements, each configured to store one or more card emulation instances. Each card emulation instance is associated with an application identification (AID) and a priority value. The NFC device can route communications between another NFC device and the plurality of secure elements based on each card emulation instance's associated AID and priority value.

Patent
26 Nov 2012
TL;DR: In this article, instructions of an application program are emulated such that they are carried out sequentially in a first virtual execution environment that represents the user-mode data processing of the operating system.
Abstract: Instructions of an application program are emulated such that they are carried out sequentially in a first virtual execution environment that represents the user-mode data processing of the operating system. A system API call requesting execution of a user-mode system function is detected. In response, the instructions of the user-mode system function called by the API are emulated according to a second emulation mode in which the instructions of the user-mode system function are carried out sequentially in a second virtual execution environment that represents the user-mode data processing of the operating system, including tracking certain processor and memory states affected by the instructions of the user-mode system function. Results of the emulating of the application program instructions according to the first emulation mode are analyzed for any presence of malicious code.

Proceedings ArticleDOI
03 Dec 2012
TL;DR: An approach for a novel cloud layer called Hardware as a Service (HaaS), which allows distinct hardware components to be used over the Internet analogously to cloud services, with its applicability explained through a distributed development process involving an anti-lock braking system and an adaptive cruise control system in the automotive industry.
Abstract: Cloud computing has already been adopted in a broad range of application domains. However, domains like the distributed development of embedded systems are still unable to benefit from the advancements of cloud computing. Besides general security concerns, a common obstacle often is the incompatibility between such applications and the cloud. In particular, if applications need direct access to hardware elements, cloud computing cannot be used. In this paper we describe an approach for a novel cloud layer called Hardware as a Service (HaaS), which allows distinct hardware components to be used over the Internet analogously to other cloud services. HaaS focuses on the transparent integration into an operating system of remote hardware that is distributed over multiple geographical locations. Furthermore, HaaS will not only enable interconnection of physical systems, but also virtual hardware emulation. Therefore, we consider in this paper only the use of emulated hardware and the interconnection with hardware models. To demonstrate the tremendous improvement offered by a HaaS cloud, we explain its applicability in a distributed development process using an anti-lock braking system and an adaptive cruise control system in the automotive industry.

Journal ArticleDOI
TL;DR: An evaluation framework and a set of tests that allow assessment of the degree to which system emulation preserves original characteristics and thus significant properties of digital artifacts are presented.
Abstract: Accessible emulation is often the method of choice for maintaining digital objects, specifically complex ones such as applications, business processes, or electronic art. However, validating the emulator's ability to faithfully reproduce the original behavior of digital objects is complicated. This article presents an evaluation framework and a set of tests that allow assessment of the degree to which system emulation preserves original characteristics and thus significant properties of digital artifacts. The original system, hardware, and software properties are described. An identical environment is then recreated via emulation. Automated user input is used to eliminate potential confounders. The properties of a rendered form of the object are then extracted automatically or manually, either in a target state, a series of states, or as a continuous stream. The concepts described in this article enable preservation planners to evaluate how emulation affects the behavior of digital objects compared to their behavior in the original environment. We also review how these principles can and should be applied to the evaluation of migration and other preservation strategies as a general principle of evaluating the invocation and faithful rendering of digital objects and systems. The article concludes with design requirements for emulators developed for digital preservation tasks.

Journal ArticleDOI
TL;DR: Simulation results demonstrate that the proposed belief updating system achieves better performance than other models for the secondary user in terms of greater payoff, lower probability of missing primary user and better robustness to the inaccurate estimation of the primary user's state.
Abstract: Cognitive radio (CR) enabled dynamic spectrum access (DSA) networks are designed to detect and opportunistically utilise the unused or under-utilised spectrum bands. However, due to the open paradigm of CR networks and the lack of proactive security protocols, DSA networks are vulnerable to various denial-of-service threats. The authors propose a game-theoretic framework to study the primary user emulation attack (PUEA) on CR nodes. A non-cooperative dynamic multistage game between the secondary nodes and the adversaries generating the PUEA is formulated. The pure-strategy and mixed-strategy Nash equilibria for the secondary user and malicious attacker are investigated. Moreover, a novel belief updating system is proposed for the secondary user to learn the state of the primary user as the game evolves. Simulation results demonstrate that the proposed belief updating system achieves better performance than other models for the secondary user in terms of greater payoff, lower probability of missing the primary user and better robustness to inaccurate estimation of the primary user's state.

Journal ArticleDOI
TL;DR: A cooperative localization method specifically suited to CRNs, which relies on TDoA measurements and Taylor-series estimation, is proposed, and results show the effectiveness of the proposed method and its suitability to typical CRN scenarios.
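
Since only the summary of this entry is shown here, the sketch below illustrates the generic Taylor-series (Gauss-Newton) TDoA step the TL;DR refers to: linearize the range-difference equations around a position guess and iterate a least-squares correction. Geometry, noise level, and anchor placement are invented for the example and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
source = np.array([37.0, 61.0])

# TDoA measurements expressed as range differences w.r.t. anchor 0 (plus noise).
d = np.linalg.norm(anchors - source, axis=1)
meas = (d[1:] - d[0]) + rng.normal(0.0, 0.5, size=3)

x = np.array([50.0, 50.0])                  # initial guess at the centre of the area
for _ in range(10):                         # Taylor-series / Gauss-Newton iterations
    r = np.linalg.norm(anchors - x, axis=1)
    pred = r[1:] - r[0]
    # Jacobian of each range difference w.r.t. the position estimate.
    J = (x - anchors[1:]) / r[1:, None] - (x - anchors[0]) / r[0]
    dx, *_ = np.linalg.lstsq(J, meas - pred, rcond=None)
    x = x + dx
    if np.linalg.norm(dx) < 1e-6:
        break

print('estimated position:', np.round(x, 2), ' true position:', source)
```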