
Showing papers on "Overhead (computing) published in 2013"


Proceedings ArticleDOI
01 Oct 2013
TL;DR: This paper proposes a framework for deploying multiple controllers within a WAN that dynamically adjusts the number of active controllers and delegates to each controller a subset of OpenFlow switches according to network dynamics, while ensuring minimal flow setup time and communication overhead.
Abstract: Software Defined Networking (SDN) has emerged as a new paradigm that offers the programmability required to dynamically configure and control a network. A traditional SDN implementation relies on a logically centralized controller that runs the control plane. However, in a large-scale WAN deployment, this rudimentary centralized approach has several limitations related to performance and scalability. To address these issues, recent proposals have advocated deploying multiple controllers that work cooperatively to control a network. Nonetheless, this approach brings in an interesting problem, which we call the Dynamic Controller Provisioning Problem (DCPP). DCPP dynamically adapts the number of controllers and their locations with changing network conditions, in order to minimize flow setup time and communication overhead. In this paper, we propose a framework for deploying multiple controllers within a WAN. Our framework dynamically adjusts the number of active controllers and delegates to each controller a subset of OpenFlow switches according to network dynamics, while ensuring minimal flow setup time and communication overhead. To this end, we formulate the optimal controller provisioning problem as an Integer Linear Program (ILP) and propose two heuristics to solve it. Simulation results show that our solution minimizes flow setup time while incurring very low communication overhead.
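As a rough illustration of the provisioning trade-off described above, the following sketch brute-forces controller placement and switch assignment over small instances; the cost model, data structures, and k_max cap are illustrative assumptions, not the paper's ILP or heuristics.

from itertools import combinations

def provision_controllers(switches, sites, latency, sync_cost, k_max=3):
    """Pick a set of controller sites and a switch assignment that minimizes
    flow-setup latency plus inter-controller synchronization overhead.
    Brute force over small instances only; the paper formulates this as an
    ILP and solves it with heuristics. latency[s][c] is the delay from
    switch s to candidate controller site c (assumed given)."""
    best = None
    for k in range(1, k_max + 1):
        for ctrls in combinations(sites, k):
            # each switch is delegated to its closest active controller
            assign = {s: min(ctrls, key=lambda c: latency[s][c]) for s in switches}
            setup = sum(latency[s][assign[s]] for s in switches)
            sync = sync_cost * k * (k - 1) / 2   # pairwise controller state synchronization
            cost = setup + sync
            if best is None or cost < best[0]:
                best = (cost, ctrls, assign)
    return best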

363 citations


Proceedings Article
14 Aug 2013
TL;DR: kBouncer, a practical runtime ROP exploit prevention technique for the protection of third-party applications based on the detection of abnormal control transfers that take place during ROP code execution, has low runtime overhead even when stressed with specially crafted workloads.
Abstract: Return-oriented programming (ROP) has become the primary exploitation technique for system compromise in the presence of non-executable page protections. ROP exploits are facilitated mainly by the lack of complete address space randomization coverage or the presence of memory disclosure vulnerabilities, necessitating additional ROP-specific mitigations. In this paper we present a practical runtime ROP exploit prevention technique for the protection of third-party applications. Our approach is based on the detection of abnormal control transfers that take place during ROP code execution. This is achieved using hardware features of commodity processors, which incur negligible runtime overhead and allow for completely transparent operation without requiring any modifications to the protected applications. Our implementation for Windows 7, named kBouncer, can be selectively enabled for installed programs in the same fashion as user-friendly mitigation toolkits like Microsoft's EMET. The results of our evaluation demonstrate that kBouncer has low runtime overhead of up to 4%, when stressed with specially crafted workloads that continuously trigger its core detection component, while it has negligible overhead for actual user applications. In our experiments with in-the-wild ROP exploits, kBouncer successfully protected all tested applications, including Internet Explorer, Adobe Flash Player, and Adobe Reader.
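A toy sketch of the kind of control-transfer check described above, flagging returns whose targets are not preceded by a call instruction; the record format and the is_call_preceded helper are hypothetical stand-ins for the hardware branch-record data that kBouncer inspects.

def looks_like_rop(lbr_entries, is_call_preceded):
    """Flag a chain of indirect branches that resembles ROP execution:
    a return whose target is not call-preceded, or a long run of returns
    suggestive of gadget chaining. `lbr_entries` is a list of
    (branch_type, target_addr) tuples and `is_call_preceded` a predicate
    over addresses -- both hypothetical interfaces."""
    gadget_run = 0
    for branch_type, target in lbr_entries:
        if branch_type == "ret" and not is_call_preceded(target):
            return True                    # return to a non-call-preceded address
        gadget_run = gadget_run + 1 if branch_type == "ret" else 0
        if gadget_run >= 8:                # many consecutive returns in a row
            return True
    return False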

313 citations


Journal ArticleDOI
TL;DR: This paper proposes an efficient algorithm, based on iteratively solving a sequence of group LASSO problems, that performs BS clustering and beamformer design jointly rather than separately as is done in existing approaches for partial coordinated transmission.
Abstract: We consider the interference management problem in a multicell MIMO heterogeneous network. Within each cell there is a large number of distributed micro/pico base stations (BSs) that can be potentially coordinated for joint transmission. To reduce coordination overhead, we consider user-centric BS clustering so that each user is served by only a small number of (potentially overlapping) BSs. Thus, given the channel state information, our objective is to jointly design the BS clustering and the linear beamformers for all BSs in the network. In this paper, we formulate this problem from a sparse optimization perspective, and propose an efficient algorithm that is based on iteratively solving a sequence of group LASSO problems. A novel feature of the proposed algorithm is that it performs BS clustering and beamformer design jointly rather than separately as is done in the existing approaches for partial coordinated transmission. Moreover, the cluster size can be controlled by adjusting a single penalty parameter in the nonsmooth regularized utility function. The convergence of the proposed algorithm (to a stationary solution) is guaranteed, and its effectiveness is demonstrated via extensive simulation.
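Schematically, and with illustrative notation rather than the paper's exact formulation, the joint design solves a utility maximization with a group-sparsity penalty on the per-BS beamformers v_{qk} from BS q to user k:

\max_{\{v_{qk}\}} \;\; \sum_{k} u_k\big(\mathrm{SINR}_k(\{v_{qk}\})\big) \;-\; \lambda \sum_{k}\sum_{q} \alpha_{qk}\,\lVert v_{qk} \rVert_2

Driving an entire group norm ||v_{qk}||_2 to zero removes BS q from user k's serving cluster, which is how the mixed l1/l2 (group LASSO) penalty induces user-centric clustering, and the single parameter λ trades cluster size against utility.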

309 citations


Journal ArticleDOI
TL;DR: This paper proposes a secure multi-owner data sharing scheme, named Mona, for dynamic groups in the cloud, leveraging group signature and dynamic broadcast encryption techniques, so that any cloud user can anonymously share data with others.
Abstract: Thanks to its low maintenance cost, cloud computing provides an economical and efficient solution for sharing group resources among cloud users. Unfortunately, sharing data in a multi-owner manner while preserving data and identity privacy from an untrusted cloud is still a challenging issue, due to the frequent change of the membership. In this paper, we propose a secure multi-owner data sharing scheme, named Mona, for dynamic groups in the cloud. By leveraging group signature and dynamic broadcast encryption techniques, any cloud user can anonymously share data with others. Meanwhile, the storage overhead and encryption computation cost of our scheme are independent of the number of revoked users. In addition, we analyze the security of our scheme with rigorous proofs, and demonstrate the efficiency of our scheme in experiments.

302 citations


Book ChapterDOI
18 Mar 2013
TL;DR: This paper proposes a push-based approach to performance monitoring in flow-based networks, in which the network informs us of performance changes rather than being queried on demand, and discusses how the proposed passive approach can be combined with active approaches at low overhead.
Abstract: Flow-based programmable networks must continuously monitor performance metrics, such as link utilization, in order to quickly adapt forwarding rules in response to changes in workload. However, existing monitoring solutions either require special instrumentation of the network or impose significant measurement overhead. In this paper, we propose a push-based approach to performance monitoring in flow-based networks, where we let the network inform us of performance changes, rather than query it ourselves on demand. Our key insight is that control messages sent by switches to the controller carry information that allows us to estimate performance. In OpenFlow networks, PacketIn and FlowRemoved messages--sent by switches to the controller upon the arrival of a new flow or upon the expiration of a flow entry, respectively--enable us to compute the utilization of links between switches. We conduct a) experiments on a real testbed, and b) simulations with real enterprise traces, to show accuracy, and that it can refresh utilization information frequently (e.g., at most every few seconds) given a constant stream of control messages. Since the number of control messages may be limited by the properties of traffic (e.g., long flows trigger sparse FlowRemoved's) or by the choices made by operators (e.g., proactive or wildcard rules eliminate or limit PacketIn's), we discuss how our proposed passive approach can be combined with active approaches with low overhead.
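A minimal sketch of the utilization estimate described above, attributing each expired flow's average rate to the links on its path; the message fields and route map used here are simplified assumptions rather than a full OpenFlow controller implementation.

from collections import defaultdict

def estimate_link_utilization(flow_removed_msgs, routes, capacity_bps):
    """Rough per-link utilization from FlowRemoved statistics. Each message
    is assumed to carry the flow's byte count and duration; `routes` maps a
    flow id to the list of links it traversed (known to the controller)."""
    util = defaultdict(float)
    for msg in flow_removed_msgs:
        if msg["duration_sec"] == 0:
            continue
        rate_bps = 8.0 * msg["byte_count"] / msg["duration_sec"]
        for link in routes[msg["flow_id"]]:
            util[link] += rate_bps / capacity_bps[link]
    return util   # fraction of capacity attributed to recently expired flows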

279 citations


Journal ArticleDOI
TL;DR: In this article, a distributed convex optimization framework is developed for energy trading between islanded microgrids, where the problem consists of several islanded microgrids that exchange energy flows by means of an arbitrary topology, and a subgradient-based cost minimization algorithm is proposed that converges to the optimal solution in a practical number of iterations.
Abstract: In this paper, a distributed convex optimization framework is developed for energy trading between islanded microgrids. More specifically, the problem consists of several islanded microgrids that exchange energy flows by means of an arbitrary topology. Due to scalability issues and in order to safeguard local information on cost functions, a subgradient-based cost minimization algorithm is proposed that converges to the optimal solution in a practical number of iterations and with a limited communication overhead. Furthermore, this approach allows for a very intuitive economics interpretation that explains the algorithm iterations in terms of "supply--demand model" and "market clearing". Numerical results are given in terms of convergence rate of the algorithm and attained costs for different network topologies.
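A minimal dual-subgradient sketch of the supply-demand / market-clearing reading given above: a shared price moves with the network-wide imbalance while each microgrid keeps its cost function private. The interfaces, step size, and iteration count are illustrative assumptions.

def clear_market(microgrids, price=0.0, step=0.05, iters=200):
    """Dual subgradient iteration: each microgrid best-responds to the current
    price using its local (private) cost model, and the price moves with the
    supply-demand imbalance until the exchange approximately clears.
    `microgrids` is a list of callables mapping price -> net injection."""
    for _ in range(iters):
        net = sum(g(price) for g in microgrids)   # total surplus (+) or deficit (-)
        price -= step * net                        # subgradient step on the dual (price) variable
    return price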

251 citations


Journal ArticleDOI
TL;DR: A general-purpose framework for interconnecting scientific simulation programs using a homogeneous, unified interface that conveniently separates all component numerical modules in memory and provides a platform to combine existing simulation codes or develop new physical solver codes within a rich “ecosystem” of interchangeable modules.

229 citations


Proceedings ArticleDOI
16 Aug 2013
TL;DR: New algorithms are introduced that trade the time required to perform a consistent update against the rule-space overhead required to implement it, and it is shown how to optimize the rule space used by representing the minimization problem as a mixed integer linear program.
Abstract: A consistent update installs a new packet-forwarding policy across the switches of a software-defined network in place of an old policy. While doing so, such an update guarantees that every packet entering the network either obeys the old policy or the new one, but not some combination of the two. In this paper, we introduce new algorithms that trade the time required to perform a consistent update against the rule-space overhead required to implement it. We break an update into k rounds that each transfer part of the traffic to the new configuration. The more rounds used, the slower the update, but the smaller the rule-space overhead. To ensure consistency, our algorithm analyzes the dependencies between rules in the old and new policies to determine which rules to add and remove on each round. In addition, we show how to optimize rule space used by representing the minimization problem as a mixed integer linear program. Moreover, to ensure the largest flows are moved first, while using rule space efficiently, we extend the mixed integer linear program with additional constraints. Our initial experiments show that a 6-round, optimized incremental update decreases rule space overhead from 100% to less than 10%. Moreover, if we cap the maximum rule-space overhead at 5% and assume the traffic flow volume follows Zipf's law, we find that 80% of the traffic may be transferred to the new policy in the first round and 99% in the first 3 rounds.
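As a toy illustration of the round-based migration idea (not the paper's mixed integer program), the sketch below moves the heaviest flows first and packs each round up to a rule-space budget; the one-temporary-rule-per-flow accounting is an assumption made for simplicity.

def plan_rounds(flow_volumes, k, rule_budget):
    """Greedy stand-in for the paper's optimization: transfer the heaviest
    flows first, packing each round until its extra-rule budget is exhausted.
    `flow_volumes` maps flow id -> traffic volume; returns a list of k rounds."""
    rounds = [[] for _ in range(k)]
    flows = sorted(flow_volumes, key=flow_volumes.get, reverse=True)
    for r in range(k):
        used = 0
        while flows and used < rule_budget:
            rounds[r].append(flows.pop(0))   # one temporary rule per moved flow (assumption)
            used += 1
    rounds[-1].extend(flows)                 # anything left over moves in the final round
    return rounds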

212 citations


Journal ArticleDOI
TL;DR: Numerical results are given to validate the theoretical findings, highlighting the inherent tradeoffs facing small cells, namely exploration/exploitation, myopic/foresighted behavior and complete/incomplete information.
Abstract: In this paper, a decentralized and self-organizing mechanism for small cell networks (such as micro-, femto- and picocells) is proposed. In particular, an application to the case in which small cell networks aim to mitigate the interference caused to the macrocell network, while maximizing their own spectral efficiencies, is presented. The proposed mechanism is based on new notions of reinforcement learning (RL) through which small cells jointly estimate their time-average performance and optimize their probability distributions with which they judiciously choose their transmit configurations. Here, a minimum signal to interference plus noise ratio (SINR) is guaranteed at the macrocell user equipment (UE), while the small cells maximize their individual performances. The proposed RL procedure is fully distributed as every small cell base station requires only an observation of its instantaneous performance which can be obtained from its UE. Furthermore, it is shown that the proposed mechanism always converges to an epsilon Nash equilibrium when all small cells share the same interest. In addition, this mechanism is shown to possess better convergence properties and incur less overhead than existing techniques such as best response dynamics, fictitious play or classical RL. Finally, numerical results are given to validate the theoretical findings, highlighting the inherent tradeoffs facing small cells, namely exploration/exploitation, myopic/foresighted behavior and complete/incomplete information.

208 citations


Proceedings ArticleDOI
04 Nov 2013
TL;DR: This work proposes a dynamic PoR scheme with constant client storage whose bandwidth cost is comparable to a Merkle hash tree, thus being very practical, and shows how to make the scheme publicly verifiable, providing the first dynamic PoR scheme with such a property.
Abstract: Proofs of Retrievability (PoR), proposed by Juels and Kaliski in 2007, enable a client to store n file blocks with a cloud server so that later the server can prove possession of all the data in a very efficient manner (i.e., with constant computation and bandwidth). Although many efficient PoR schemes for static data have been constructed, only two dynamic PoR schemes exist. The scheme by Stefanov et al. (ACSAC 2012) uses a large amount of client storage and has a large audit cost. The scheme by Cash et al. (EUROCRYPT 2013) is mostly of theoretical interest, as it employs Oblivious RAM (ORAM) as a black box, leading to increased practical overhead (e.g., it requires about 300 times more bandwidth than our construction). We propose a dynamic PoR scheme with constant client storage whose bandwidth cost is comparable to a Merkle hash tree, thus being very practical. Our construction outperforms the constructions of Stefanov et al. and Cash et al., both in theory and in practice. Specifically, for n outsourced blocks of β bits each, writing a block requires β + O(λ log n) bandwidth and O(β log n) server computation (λ is the security parameter). Audits are also very efficient, requiring β + O(λ² log n) bandwidth. We also show how to make our scheme publicly verifiable, providing the first dynamic PoR scheme with such a property. We finally provide a very efficient implementation of our scheme.
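For reference, the Merkle-hash-tree cost that the scheme's bandwidth is compared against looks like the standard path check below (a generic sketch, not the paper's own authentication structure): verifying or updating one of n blocks touches O(log n) sibling hashes.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_proof(leaf: bytes, index: int, proof: list, root: bytes) -> bool:
    """Standard Merkle path check: recompute the root from a leaf and its
    sibling hashes (ordered bottom-up). The proof length is O(log n), which
    is the bandwidth baseline the dynamic PoR scheme is compared against."""
    node = h(leaf)
    for sibling in proof:
        if index % 2 == 0:
            node = h(node + sibling)
        else:
            node = h(sibling + node)
        index //= 2
    return node == root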

204 citations


Journal ArticleDOI
TL;DR: A lightweight and dependable trust system (LDTS) is proposed for WSNs that employ clustering algorithms, together with a self-adaptive weighted method for trust aggregation at the CH level that surpasses the limitations of traditional weighting methods for trust factors.
Abstract: The resource efficiency and dependability of a trust system are the most fundamental requirements for any wireless sensor network (WSN). However, existing trust systems developed for WSNs are incapable of satisfying these requirements because of their high overhead and low dependability. In this work, we propose a lightweight and dependable trust system (LDTS) for WSNs that employ clustering algorithms. First, a lightweight trust decision-making scheme is proposed based on the nodes' identities (roles) in the clustered WSNs, which is suitable for such WSNs because it facilitates energy saving. By canceling feedback between cluster members (CMs) or between cluster heads (CHs), this approach can significantly improve system efficiency while reducing the effect of malicious nodes. More importantly, considering that CHs take on large amounts of data forwarding and communication tasks, a dependability-enhanced trust evaluating approach is defined for cooperation between CHs. This approach can effectively reduce networking consumption while remaining robust against malicious, selfish, and faulty CHs. Moreover, a self-adaptive weighted method is defined for trust aggregation at the CH level. This approach surpasses the limitations of traditional weighting methods for trust factors, in which weights are assigned subjectively. Theory as well as simulation results show that LDTS demands less memory and communication overhead compared with the current typical trust systems for WSNs.

Proceedings ArticleDOI
Ying Zhang
09 Dec 2013
TL;DR: A novel method is proposed that performs adaptive zooming in the aggregation of flows to be measured and can detect anomalies more accurately with less overhead, together with a prediction-based algorithm that dynamically changes the granularity of measurement along both the spatial and the temporal dimensions.
Abstract: The accuracy and granularity of network flow measurement play a critical role in many network management tasks, especially for anomaly detection. Despite its importance, traffic monitoring often introduces overhead to the network; thus, operators have to employ sampling and aggregation to avoid overloading the infrastructure. However, such sampled and aggregated information may affect the accuracy of traffic anomaly detection. In this work, we propose a novel method that performs adaptive zooming in the aggregation of flows to be measured. In order to better balance the monitoring overhead and the anomaly detection accuracy, we propose a prediction-based algorithm that dynamically changes the granularity of measurement along both the spatial and the temporal dimensions. To control the load on each individual switch, we carefully delegate monitoring rules network-wide. Using real-world data and three simple anomaly detectors, we show that the adaptive counting can detect anomalies more accurately with less overhead.

Proceedings ArticleDOI
08 Apr 2013
TL;DR: A scalable influence approximation algorithm, the Independent Path Algorithm (IPA), is proposed for the Independent Cascade (IC) diffusion model; it efficiently approximates influence by treating an independent influence path as the influence evaluation unit and is implemented in a demo application for influence maximization.
Abstract: As social network services connect people across the world, influence maximization, i.e., finding the most influential nodes (or individuals) in the network, is being actively researched with applications to viral marketing. One crucial challenge in scalable influence maximization processing is evaluating influence, which is #P-hard and thus hard to solve in polynomial time. We propose a scalable influence approximation algorithm, the Independent Path Algorithm (IPA), for the Independent Cascade (IC) diffusion model. IPA efficiently approximates influence by considering an independent influence path as an influence evaluation unit. IPA is also easily parallelized by simply adding a few lines of OpenMP meta-programming expressions. Also, the overhead of maintaining influence paths in memory is relieved by safely throwing away insignificant influence paths. Extensive experiments conducted on large-scale real social networks show that IPA is an order of magnitude faster and uses less memory than the state-of-the-art algorithms. Our experimental results also show that parallel versions of IPA speed up further as the number of CPU cores increases, and more speed-up is achieved for larger datasets. The algorithms have been implemented in our demo application for influence maximization (available at http://dm.postech.ac.kr/ipa demo), which efficiently finds the most influential nodes in a social network.
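A small sketch of the path-based influence estimate described above: sum the probabilities of influence paths from the seed, treating paths as independent and discarding paths whose probability falls below a threshold. The graph representation and threshold are illustrative assumptions, and this sequential version ignores IPA's OpenMP parallelization.

def path_influence(graph, seed, threshold=1e-3):
    """Approximate the influence of `seed` under the IC model by summing the
    propagation probabilities of influence paths, treated as independent, and
    pruning any path whose probability drops below `threshold`.
    `graph[u]` is a dict of successor -> edge probability (assumed format)."""
    influence = {}
    stack = [(seed, 1.0, {seed})]
    while stack:
        node, prob, visited = stack.pop()
        for nxt, p in graph.get(node, {}).items():
            if nxt in visited:
                continue
            q = prob * p
            if q < threshold:              # insignificant path: throw it away
                continue
            influence[nxt] = influence.get(nxt, 0.0) + q
            stack.append((nxt, q, visited | {nxt}))
    return 1.0 + sum(influence.values())   # count the seed itself as influenced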

Proceedings ArticleDOI
01 Dec 2013
TL;DR: A finite horizon, zero-sum, nonstationary stochastic game approach is employed to minimize the worst-case control and detection cost, and an optimal control policy for switching between control-cost optimal and secure (but cost-suboptimal) controllers in presence of replay attacks is obtained.
Abstract: The existing tradeoff between control system performance and the detection rate for replay attacks highlights the need to provide an optimal control policy that balances the security overhead with control cost. We employ a finite horizon, zero-sum, nonstationary stochastic game approach to minimize the worst-case control and detection cost, and obtain an optimal control policy for switching between control-cost optimal (but nonsecure) and secure (but cost-suboptimal) controllers in presence of replay attacks. To formulate the game, we quantify game parameters using knowledge of the system dynamics, controller design and utilized statistical detector. We show that the optimal strategy for the system exists, and present a suboptimal algorithm used to calculate the system's strategy by combining robust game techniques and a finite horizon stationary stochastic game algorithm. Our approach can be generalized for any system with multiple finite cost, time-invariant linear controllers/estimators/intrusion detectors.

Book ChapterDOI
18 Aug 2013
TL;DR: In the setting of secure two-party computation, two parties wish to securely compute a joint function of their private inputs while revealing only the output; the cut-and-choose methodology used to harden Yao's garbled circuits against a malicious garbler introduces significant overhead, both in computation and in communication.
Abstract: In the setting of secure two-party computation, two parties wish to securely compute a joint function of their private inputs, while revealing only the output. One of the primary techniques for achieving efficient secure two-party computation is that of Yao’s garbled circuits (FOCS 1986). In the semi-honest model, where just one garbled circuit is constructed and evaluated, Yao’s protocol has proven itself to be very efficient. However, a malicious adversary who constructs the garbled circuit may construct a garbling of a different circuit computing a different function, and this cannot be detected (due to the garbling). In order to solve this problem, many circuits are sent and some of them are opened to check that they are correct while the others are evaluated. This methodology, called cut-and-choose, introduces significant overhead, both in computation and in communication, and is mainly due to the number of circuits that must be used in order to prevent cheating.

Journal ArticleDOI
TL;DR: Vmst is an entirely new technique that can automatically bridge the semantic gap and generate VMI tools, enabling an in-VM inspection program to automatically become an out-of-VM introspection program.
Abstract: It is generally believed to be a tedious, time-consuming, and error-prone process to develop a virtual machine introspection (VMI) tool because of the semantic gap. Recent advances show that the semantic gap can be largely narrowed by reusing the executed code from a trusted OS kernel. However, the limitation of such an approach is that it only reuses the exercised code through a training process, which suffers from code coverage issues. Thus, in this article, we present Vmst, a new technique that can seamlessly bridge the semantic gap and automatically generate VMI tools. The key idea is that, through system-wide instruction monitoring, Vmst automatically identifies the introspection-related data from a secure-VM and online redirects these data accesses to the kernel memory of a product-VM, without any training. Vmst offers a number of new features and capabilities. Particularly, it enables an in-VM inspection program (e.g., ps) to automatically become an out-of-VM introspection program. We have tested Vmst with over 25 commonly used utilities on top of a number of different OS kernels including Linux and Microsoft Windows. The experimental results show that our technique is general (largely OS-independent), and it introduces 9.3X overhead for Linux utilities and 19.6X overhead for Windows utilities on average for the introspected program compared to the native in-VM execution without data redirection.

Book ChapterDOI
03 Mar 2013
TL;DR: A new method for secure two-party Random Access Memory (RAM) program computation that does not require taking a program and first turning it into a circuit is presented, and the method achieves logarithmic overhead compared to an insecure program execution.
Abstract: We present a new method for secure two-party Random Access Memory (RAM) program computation that does not require taking a program and first turning it into a circuit. The method achieves logarithmic overhead compared to an insecure program execution. At the heart of our construction is a new Oblivious RAM construction where a client interacts with two non-communicating servers. Our two-server Oblivious RAM for n reads/writes requires O(n) memory for the servers, O(1) memory for the client, and O(log n) amortized read/write overhead for data access. The constants in the big-O notation are tiny, and we show that the storage and data access overhead of our solution concretely compares favorably to the state-of-the-art single-server schemes. Our protocol enjoys an important feature from a practical perspective as well. At the heart of almost all previous single-server Oblivious RAM solutions, a crucial but inefficient process known as oblivious sorting was required. In our two-server model, we describe a new technique to bypass oblivious sorting, and show how this can be carefully blended with existing techniques to attain a more practical Oblivious RAM protocol in comparison to all prior work. As alluded to above, our two-server Oblivious RAM protocol leads to a novel application in the realm of secure two-party RAM program computation. We observe that in secure two-party computation, Alice and Bob can play the roles of two non-colluding servers. We show that our Oblivious RAM construction can be composed with an extended version of the Ostrovsky-Shoup compiler to obtain a new method for secure two-party program computation with lower overhead than all existing constructions.

Proceedings ArticleDOI
23 Feb 2013
TL;DR: Experimental results demonstrate that, when this online error detection approach is used together with checkpointing, it improves the time to obtain correct results by up to several orders of magnitude over the traditional offline approach.
Abstract: Soft errors are one-time events that corrupt the state of a computing system but not its overall functionality. Large supercomputers are especially susceptible to soft errors because of their large number of components. Soft errors can generally be detected offline through the comparison of the final computation results of two duplicated computations, but this approach often introduces significant overhead. This paper presents Online-ABFT, a simple but efficient online soft error detection technique that can detect soft errors in the widely used Krylov subspace iterative methods in the middle of the program execution so that the computation efficiency can be improved through the termination of the corrupted computation in a timely manner soon after a soft error occurs. Based on a simple verification of orthogonality and residual, Online-ABFT is easy to implement and highly efficient. Experimental results demonstrate that, when this online error detection approach is used together with checkpointing, it improves the time to obtain correct results by up to several orders of magnitude over the traditional offline approach.
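A small NumPy sketch of the kind of orthogonality and residual verification described above, phrased for a CG-style Krylov solver; the thresholds and the rollback policy are illustrative assumptions rather than Online-ABFT's exact tests.

import numpy as np

def online_checks(A, b, x_k, r_k, r_prev, tol=1e-8):
    """Two cheap consistency tests in the spirit of Online-ABFT for Krylov
    solvers: (1) the recursively updated residual r_k should match b - A x_k,
    and (2) successive CG residuals should remain (nearly) orthogonal.
    A violation signals a likely soft error, so roll back to a checkpoint."""
    true_r = b - A @ x_k
    residual_ok = np.linalg.norm(true_r - r_k) <= tol * (np.linalg.norm(b) + 1.0)
    ortho_ok = abs(np.dot(r_k, r_prev)) <= tol * (np.linalg.norm(r_k) * np.linalg.norm(r_prev) + 1.0)
    return residual_ok and ortho_ok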

Journal ArticleDOI
TL;DR: A neighbor coverage-based probabilistic rebroadcast protocol for reducing routing overhead in MANETs is proposed, which can significantly decrease the number of retransmissions so as to reduce the routing overhead, and can also improve the routing performance.
Abstract: Due to high mobility of nodes in mobile ad hoc networks (MANETs), there exist frequent link breakages which lead to frequent path failures and route discoveries. The overhead of a route discovery cannot be neglected. In a route discovery, broadcasting is a fundamental and effective data dissemination mechanism, where a mobile node blindly rebroadcasts the first received route request packets unless it has a route to the destination, and thus it causes the broadcast storm problem. In this paper, we propose a neighbor coverage-based probabilistic rebroadcast protocol for reducing routing overhead in MANETs. In order to effectively exploit the neighbor coverage knowledge, we propose a novel rebroadcast delay to determine the rebroadcast order, and then we can obtain the more accurate additional coverage ratio by sensing neighbor coverage knowledge. We also define a connectivity factor to provide the node density adaptation. By combining the additional coverage ratio and connectivity factor, we set a reasonable rebroadcast probability. Our approach combines the advantages of the neighbor coverage knowledge and the probabilistic mechanism, which can significantly decrease the number of retransmissions so as to reduce the routing overhead, and can also improve the routing performance.
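A toy sketch of how the two quantities above might be combined into a rebroadcast probability; the exact combination rule, the density adaptation, and the critical-neighbor constant are illustrative assumptions, and the protocol additionally uses a coverage-based rebroadcast delay to order candidates.

def rebroadcast_probability(my_neighbors, sender_neighbors, critical_neighbors=6):
    """Combine the additional-coverage ratio (fraction of this node's neighbors
    the sender did not already cover) with a density-adaptive connectivity
    factor; the product, capped at 1, serves as the rebroadcast probability."""
    my_neighbors, sender_neighbors = set(my_neighbors), set(sender_neighbors)
    uncovered = my_neighbors - sender_neighbors
    coverage_ratio = len(uncovered) / max(len(my_neighbors), 1)
    connectivity_factor = critical_neighbors / max(len(my_neighbors), 1)  # >1 when sparse, <1 when dense
    return min(coverage_ratio * connectivity_factor, 1.0)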

Proceedings ArticleDOI
12 Feb 2013
TL;DR: This work designed a novel workload-independent data structure called the VT-tree which extends the LSM-tree to efficiently handle sequential and file-system workloads and provides efficient and scalable access to both large and small data items regardless of the access pattern.
Abstract: As the Internet and the amount of data grows, the variability of data sizes grows too--from small MP3 tags to large VM images. With applications using increasingly more complex queries and larger data-sets, data access patterns have become more complex and randomized. Current storage systems focus on optimizing for one band of workloads at the expense of other workloads due to limitations in existing storage system data structures. We designed a novel workload-independent data structure called the VT-tree which extends the LSM-tree to efficiently handle sequential and file-system workloads. We designed a system based solely on VT-trees which offers concurrent access to data via file system and database APIs, transactional guarantees, and consequently provides efficient and scalable access to both large and small data items regardless of the access pattern. Our evaluation shows that our user-level system has 2-6.6× better performance for random-write workloads and only a small average overhead for other workloads.

01 Jan 2013
TL;DR: In this paper, a Position based Opportunistic Routing protocol (POR) is proposed to address the problem of delivering data packets for highly dynamic mobile ad hoc networks in a reliable and timely manner.
Abstract: This paper addresses the problem of delivering data packets for highly dynamic mobile ad hoc networks in a reliable and timely manner. Most existing ad hoc routing protocols are susceptible to node mobility, especially for large-scale networks. Driven by this issue, we propose an efficient Position-based Opportunistic Routing protocol (POR) which takes advantage of the stateless property of geographic routing and the broadcast nature of the wireless medium. When a data packet is sent out, some of the neighbor nodes that have overheard the transmission will serve as forwarding candidates, and take turns to forward the packet if it is not relayed by the specific best forwarder within a certain period of time. By utilizing such in-the-air backup, communication is maintained without being interrupted. The additional latency incurred by local route recovery is greatly reduced and the duplicate relaying caused by packet reroute is also decreased. In case of a communication hole, a Virtual Destination-based Void Handling (VDVH) scheme is further proposed to work together with POR. Both theoretical analysis and simulation results show that POR achieves excellent performance even under high node mobility with acceptable overhead, and the new void handling scheme also works well.
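A toy sketch of the opportunistic forwarding idea described above: overhearing neighbors rank themselves by progress toward the destination and back off proportionally, with lower-ranked candidates cancelling if they overhear the packet being relayed. The slot length and ranking metric are illustrative assumptions, not POR's exact rules.

def rank_candidates(neighbors, dest, dist):
    """Order overhearing neighbors by remaining distance to the destination
    (smaller is better); `dist` is a distance function over node positions."""
    return sorted(neighbors, key=lambda n: dist(n, dest))

def forwarding_delay(progress_rank, slot_ms=5):
    """Return the backoff delay for a candidate with the given rank: the best
    forwarder (rank 0) relays immediately, later candidates wait and only
    step in if they have not overheard the relayed packet by then."""
    return progress_rank * slot_ms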

Proceedings ArticleDOI
21 Apr 2013
TL;DR: This paper presents an in-depth study of several different AC and DC measurement methodologies as well as model approaches on test systems with the latest processor generations from both Intel and AMD.
Abstract: Energy efficiency is of steadily growing importance in virtually all areas from mobile to high performance computing. Therefore, many research projects focus on this topic and strongly rely on power measurements from their test platforms. The need for finer grained measurement data, both in terms of temporal and spatial resolution (component breakdown), often collides with very rudimentary measurement setups that rely, e.g., on non-professional power meters, IPMI-based platform data, or model-based interfaces such as RAPL or APM. This paper presents an in-depth study of several different AC and DC measurement methodologies as well as model approaches on test systems with the latest processor generations from both Intel and AMD. We analyze the most important aspects such as signal quality, time resolution, accuracy, and measurement overhead, and use a calibrated, professional power analyzer as our reference.

Book ChapterDOI
03 Mar 2013
TL;DR: This protocol is the first to obtain these properties for Boolean circuits; it relies on new homomorphic authentication schemes based on asymptotically good codes with an additional multiplication property.
Abstract: We present a protocol for securely computing a Boolean circuit C in presence of a dishonest and malicious majority. The protocol is unconditionally secure, assuming a preprocessing functionality that is not given the inputs. For a large number of players the work for each player is the same as computing the circuit in the clear, up to a constant factor. Our protocol is the first to obtain these properties for Boolean circuits. On the technical side, we develop new homomorphic authentication schemes based on asymptotically good codes with an additional multiplication property. We also show a new algorithm for verifying the product of Boolean matrices in quadratic time with exponentially small error probability, where previous methods only achieved constant error.
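The quadratic-time verification of Boolean matrix products mentioned above is in the spirit of Freivalds' classical randomized check; the sketch below is that textbook version (error probability 2^-trials after repeated trials), not the paper's improved algorithm, and assumes 0/1 integer NumPy matrices.

import numpy as np

def verify_boolean_product(A, B, C, trials=40, rng=None):
    """Freivalds-style check that C = A·B over GF(2) in O(trials · n^2) time:
    test A(Br) == Cr for random 0/1 vectors r. A single trial misses a wrong
    product with probability at most 1/2, so `trials` repetitions give error
    at most 2^-trials."""
    rng = rng or np.random.default_rng()
    n = A.shape[1]
    for _ in range(trials):
        r = rng.integers(0, 2, size=n, dtype=np.int64)
        lhs = (A @ ((B @ r) % 2)) % 2
        rhs = (C @ r) % 2
        if not np.array_equal(lhs, rhs):
            return False
    return True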

Journal ArticleDOI
Xu Xiuqiang, Gaoning He, Shunqing Zhang, Yan Chen, Shugong Xu
TL;DR: A two-layer network functionality separation scheme is proposed, targeting low control signaling overhead and flexible network reconfiguration for future mobile networks; it achieves significant energy reduction over traditional LTE networks and can be recommended as a candidate solution for future green mobile networks.
Abstract: Traditional wireless networks are designed for ubiquitous network access provision with low-rate voice services, which thus preserve the homogeneous architecture and tight coupling for infrastructures such as base stations. With the traffic explosion and the paradigm shift from voice-oriented services to data-oriented services, traditional homogeneous architecture no longer maintains its optimality, and heterogeneous deployment with flexible network control capability becomes a promising evolution direction. To achieve this goal, in this article, we propose a two-layer network functionality separation scheme, targeting low control signaling overhead and flexible network reconfiguration for future mobile networks. The proposed scheme is shown to support all kinds of user activities defined in current networks. Moreover, we give two examples to illustrate how the proposed scheme can be applied to multicarrier networks and suggest two important design principles for future green networks. Numerical results show that the proposed scheme achieves significant energy reduction over traditional LTE networks, and can be recommended as a candidate solution for future green mobile networks.

Journal ArticleDOI
TL;DR: This paper studies relay selection schemes for two-way amplify-and-forward (AF) relay networks and develops a multiple-relay selection (MRS) scheme based on the maximization of the worse signal-to-noise ratio of the two end users.
Abstract: This paper studies relay selection schemes for two-way amplify-and-forward (AF) relay networks. For a network with two users that exchange information via multiple AF relays, we first consider a single-relay selection (SRS) scheme based on the maximization of the worse signal-to-noise ratio (SNR) of the two end users. The cumulative distribution function (CDF) of the worse SNR of the two users and its approximations are obtained, based on which the block error rate (BLER), the diversity order, the outage probability, and the sum-rate of the two-way network are derived. Then, with the help of a relay ordering, a multiple-relay selection (MRS) scheme is developed. The training overhead and feedback requirement for the implementation of the relay selection schemes are discussed. Numerical and simulation results are provided to corroborate the analytical results.
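A one-line sketch of the max-min single-relay selection criterion described above; the per-relay end-to-end SNR lists are assumed to be available (in practice they come from the training and feedback phase whose overhead the paper discusses).

def select_relay(snr_user1, snr_user2):
    """Single-relay selection for two-way AF relaying: return the index of the
    relay that maximizes the worse (bottleneck) of the two end users' SNRs.
    snr_user1[i] and snr_user2[i] are the SNRs of users 1 and 2 via relay i."""
    return max(range(len(snr_user1)), key=lambda i: min(snr_user1[i], snr_user2[i]))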

Journal ArticleDOI
TL;DR: This paper uses the non-decision directed maximum likelihood criterion for estimating the channel delay and derives the Cramer-Rao lower bound and evaluates the performance of the proposed synchronization algorithm by investigating its mean square error.
Abstract: Synchronization is an essential feature of any communication system. Due to the very low throughput of molecular communications systems, blind synchronization is preferred in order to reduce communications overhead. In this paper, we present the first blind synchronization algorithm for the diffusion-based molecular communication channel. Considering a diffusion-based physical channel model, we use the non-decision directed maximum likelihood criterion for estimating the channel delay. We then derive the Cramer-Rao lower bound and evaluate the performance of the proposed synchronization algorithm by investigating its mean square error.

Journal ArticleDOI
01 Jan 2013
TL;DR: The EIBAS scheme achieves a minimization of communication overhead, allowing the total energy consumption to be reduced by up to 48.5% compared to previous identity-based broadcast authentication schemes.
Abstract: In this paper, we propose an efficient identity-based broadcast authentication scheme, EIBAS, to achieve security requirements in wireless sensor networks. To minimize communication and computational costs, we use a pairing-optimal identity-based signature scheme with message recovery, where the original message of the signature is not required to be transmitted together with the signature, as it can be recovered according to the verification/message recovery process. The EIBAS scheme achieves a minimization of communication overhead, allowing the total energy consumption to be reduced by up to 48.5% compared to previous identity-based broadcast authentication schemes.

Journal ArticleDOI
TL;DR: An algorithm for Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is explored; it is implemented and available in MOLPRO and as a standalone code, and is designed for high-level parallelism and linear scaling with walker number.
Abstract: For many decades, quantum chemical method development has been dominated by algorithms which involve increasingly complex series of tensor contractions over one-electron orbital spaces. Procedures for their derivation and implementation have evolved to require the minimum amount of logic and rely heavily on computationally efficient library-based matrix algebra and optimized paging schemes. In this regard, the recent development of exact stochastic quantum chemical algorithms to reduce computational scaling and memory overhead requires a contrasting algorithmic philosophy, but one which when implemented efficiently can often achieve higher accuracy/cost ratios with small random errors. Additionally, they can exploit the continuing trend for massive parallelization which hinders the progress of deterministic high-level quantum chemical algorithms. In the Quantum Monte Carlo community, stochastic algorithms are ubiquitous but the discrete Fock space of quantum chemical methods is often unfamiliar, and the methods introduce new concepts required for algorithmic efficiency. In this paper, we explore these concepts and detail an algorithm used for Full Configuration Interaction Quantum Monte Carlo (FCIQMC), which is implemented and available in MOLPRO and as a standalone code, and is designed for high-level parallelism and linear-scaling with walker number. Many of the algorithms are also in use in, or can be transferred to, other stochastic quantum chemical methods and implementations. We apply these algorithms to the strongly correlated Chromium dimer, to demonstrate their efficiency and parallelism.

Journal ArticleDOI
TL;DR: It is found that, using the best available techniques, for parameters of practical interest, block code state distillation does not always lead to lower overhead, and, when it does, the overhead reduction is typically less than a factor of three.
Abstract: State distillation is the process of taking a number of imperfect copies of a particular quantum state and producing fewer better copies. Until recently, the lowest overhead method of distilling states produced a single improved |A⟩ state given 15 input copies. New block code state distillation methods can produce k improved |A⟩ states given 3k + 8 input copies, potentially significantly reducing the overhead associated with state distillation. We construct an explicit surface code implementation of block code state distillation and quantitatively compare the overhead of this approach to the old. We find that, using the best available techniques, for parameters of practical interest, block code state distillation does not always lead to lower overhead, and, when it does, the overhead reduction is typically less than a factor of three.
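Counting raw inputs alone (and ignoring the code-distance and error-suppression effects the paper actually analyzes), producing k improved states costs 15k copies with the 15-to-1 protocol versus 3k + 8 with the block code, so the block code uses fewer raw inputs for every k ≥ 1 (e.g., 11 versus 15 copies at k = 1); the paper's finding is that this naive count overstates the practical saving once the surface code implementation is taken into account.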

Proceedings ArticleDOI
25 Mar 2013
TL;DR: This paper experimentally investigates the factors that affect the power consumption and the duration of virtual machine migration, using the KVM platform to show that a live migration entails an energy overhead whose size varies with the size of the virtual machine and the available network bandwidth.
Abstract: Live migration, the process of moving a virtual machine (VM) interruption-free between physical hosts, is a core concept in modern data centers. Power management strategies use live migration to consolidate services in a cluster environment and to switch off underutilized machines to save power. However, most migration models do not consider the energy cost of migration. This paper experimentally investigates the factors that affect the power consumption and the duration of virtual machine migration. We use the KVM platform for our experiment and show that a live migration entails an energy overhead and the size of this overhead varies with the size of the virtual machine and the available network bandwidth.