
Showing papers on "Overhead (computing)" published in 2011


Journal ArticleDOI
TL;DR: A state-of-the-art survey of cooperative sensing is provided, addressing the issues of cooperation method, cooperative gain, and cooperation overhead.

1,800 citations


Journal ArticleDOI
TL;DR: A total cost minimization is formulated that allows for a flexible tradeoff between flow-level performance and energy consumption, and simple greedy-on and greedy-off algorithms are proposed, inspired by the mathematical background of the submodularity maximization problem.
Abstract: Energy-efficiency, one of the major design goals in wireless cellular networks, has received much attention lately, due to increased awareness of environmental and economic issues for network operators. In this paper, we develop a theoretical framework for BS energy saving that encompasses dynamic BS operation and the related problem of user association together. Specifically, we formulate a total cost minimization that allows for a flexible tradeoff between flow-level performance and energy consumption. For the user association problem, we propose an optimal energy-efficient user association policy and further present a distributed implementation with provable convergence. For the BS operation problem (i.e., BS switching on/off), which is a challenging combinatorial problem, we propose simple greedy-on and greedy-off algorithms that are inspired by the mathematical background of the submodularity maximization problem. Moreover, we propose other heuristic algorithms based on the distances between BSs or the utilizations of BSs that do not impose any additional signaling overhead and thus are easy to implement in practice. Extensive simulations under various practical configurations demonstrate that the proposed user association and BS operation algorithms can significantly reduce energy consumption.

479 citations
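
The greedy-off idea can be pictured with a minimal sketch: repeatedly switch off the base station whose removal increases the total (flow-level plus energy) cost the least, and stop when no switch-off helps. The cost model and function names below are illustrative assumptions, not the paper's actual formulation.

    # Hedged sketch of a greedy-off loop for BS switching (illustrative cost model).
    def total_cost(active_bss, users, assign_cost, energy_cost):
        """Flow-level cost of serving every user from its cheapest active BS, plus BS energy."""
        if not active_bss:
            return float("inf")
        serve = sum(min(assign_cost[(u, b)] for b in active_bss) for u in users)
        return serve + sum(energy_cost[b] for b in active_bss)

    def greedy_off(bss, users, assign_cost, energy_cost):
        active = set(bss)
        best = total_cost(active, users, assign_cost, energy_cost)
        while len(active) > 1:
            # Try switching off the BS whose removal lowers the total cost the most.
            candidate = min(active,
                            key=lambda b: total_cost(active - {b}, users, assign_cost, energy_cost))
            cand_cost = total_cost(active - {candidate}, users, assign_cost, energy_cost)
            if cand_cost >= best:
                break                       # no single switch-off improves the cost any further
            active.remove(candidate)
            best = cand_cost
        return active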


Proceedings ArticleDOI
Alan Shieh, Srikanth Kandula, Albert Greenberg, Changhoon Kim, Bikas Saha
30 Mar 2011
TL;DR: This work presents Seawall, a network bandwidth allocation scheme that divides network capacity based on an administrator-specified policy that adds little overhead and achieves strong performance isolation.
Abstract: While today's data centers are multiplexed across many non-cooperating applications, they lack effective means to share their network. Relying on TCP's congestion control, as we show from experiments in production data centers, opens up the network to denial of service attacks and performance interference. We present Seawall, a network bandwidth allocation scheme that divides network capacity based on an administrator-specified policy. Seawall computes and enforces allocations by tunneling traffic through congestion controlled, point to multipoint, edge to edge tunnels. The resulting allocations remain stable regardless of the number of flows, protocols, or destinations in the application's traffic mix. Unlike alternate proposals, Seawall easily supports dynamic policy changes and scales to the number of applications and churn of today's data centers. Through evaluation of a prototype, we show that Seawall adds little overhead and achieves strong performance isolation.

384 citations
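
As a rough illustration of policy-based capacity division (not Seawall's actual congestion-controlled tunnels), link capacity can be split among competing entities in proportion to administrator-assigned weights, independent of how many flows each entity opens. The entity names and numbers below are made up for the example.

    # Weight-proportional bandwidth shares per traffic source (flow count does not matter).
    def policy_shares(link_capacity_mbps, entity_weights):
        total = sum(entity_weights.values())
        return {e: link_capacity_mbps * w / total for e, w in entity_weights.items()}

    # Tenant B gets twice tenant A's share even if A opens many more TCP flows.
    print(policy_shares(10_000, {"tenantA": 1, "tenantB": 2, "tenantC": 1}))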


Posted Content
TL;DR: This work presents a construction of fully homomorphic encryption schemes that for security parameter λ can evaluate any width-Ω(λ) circuit with t gates in time t · polylog(λ), and introduces permuting/routing techniques to move plaintext elements across these vectors efficiently.
Abstract: We show that homomorphic evaluation of (wide enough) arithmetic circuits can be accomplished with only polylogarithmic overhead. Namely, we present a construction of fully homomorphic encryption (FHE) schemes that for security parameter λ can evaluate any width-Ω(λ) circuit with t gates in time t · polylog(λ). To get low overhead, we use the recent batch homomorphic evaluation techniques of Smart-Vercauteren and Brakerski-Gentry-Vaikuntanathan, who showed that homomorphic operations can be applied to “packed” ciphertexts that encrypt vectors of plaintext elements. In this work, we introduce permuting/routing techniques to move plaintext elements across these vectors efficiently. Hence, we are able to implement general arithmetic circuits in a batched fashion without ever needing to “unpack” the plaintext vectors. We also introduce some other optimizations that can speed up homomorphic evaluation in certain cases. For example, we show how to use the Frobenius map to raise plaintext elements to powers of p at the “cost” of a linear operation.

334 citations
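
The "packed ciphertext" idea can be pictured at the plaintext level: a vector of plaintext slots is operated on element-wise (SIMD-style), and a rotation moves values across slots in the way the paper's permuting/routing techniques move elements between slots of a ciphertext. The sketch below manipulates plain NumPy vectors only; no encryption is involved and the slot count is arbitrary.

    import numpy as np

    slots = 8                       # stand-in for the number of plaintext slots per ciphertext
    a = np.arange(slots)            # "packed" plaintext vector a
    b = 3 * np.arange(slots)        # "packed" plaintext vector b

    added = a + b                   # one homomorphic addition would act on all slots at once
    multiplied = a * b              # likewise for slot-wise multiplication
    rotated = np.roll(a, 1)         # analogue of routing/permuting values across slots

    print(added, multiplied, rotated, sep="\n")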


Proceedings ArticleDOI
15 Feb 2011
TL;DR: A Content-Aware Flash Translation Layer (CAFTL) is proposed to enhance the endurance of SSDs at the device level: it reduces write traffic to flash memory by removing unnecessary duplicate writes and extends available free flash memory space by coalescing redundant data in SSDs.
Abstract: Although flash-memory-based solid state drives (SSDs) exhibit high performance and low power consumption, a critical concern is their limited lifespan along with the associated reliability issues. In this paper, we propose to build a Content-Aware Flash Translation Layer (CAFTL) to enhance the endurance of SSDs at the device level. With no need of any semantic information from the host, CAFTL can effectively reduce write traffic to flash memory by removing unnecessary duplicate writes and can also substantially extend available free flash memory space by coalescing redundant data in SSDs, which further improves the efficiency of garbage collection and wear-leveling. In order to retain high data access performance, we have also designed a set of acceleration techniques to reduce the runtime overhead and minimize the performance impact caused by extra computational cost. Our experimental results show that our solution can effectively identify up to 86.2% of the duplicate writes, which translates to a write traffic reduction of up to 24.2% and extends the flash space by up to 31.2%. Meanwhile, CAFTL incurs only a minimal performance overhead of up to 0.5%.

331 citations
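
A minimal sketch of the deduplication step at the translation layer, assuming a simplified fingerprint store: hash each incoming page, and if the fingerprint is already mapped to a physical block, remap the logical page instead of writing again. CAFTL's actual design (fingerprint sampling, acceleration techniques, reference counting for garbage collection) is considerably more involved.

    import hashlib

    class TinyDedupFTL:
        def __init__(self):
            self.l2p = {}        # logical page number -> physical block address
            self.fp2p = {}       # content fingerprint -> physical block address
            self.next_pba = 0
            self.flash_writes = 0

        def write(self, lpn, data: bytes):
            fp = hashlib.sha1(data).digest()
            pba = self.fp2p.get(fp)
            if pba is None:                 # new content: actually program a flash page
                pba = self.next_pba
                self.next_pba += 1
                self.flash_writes += 1
                self.fp2p[fp] = pba
            self.l2p[lpn] = pba             # duplicate content: only update the mapping

    ftl = TinyDedupFTL()
    ftl.write(0, b"same payload")
    ftl.write(1, b"same payload")           # deduplicated, no extra flash write
    print(ftl.flash_writes)                 # -> 1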


Proceedings ArticleDOI
10 Apr 2011
TL;DR: This paper proposes a novel approach for user-centric data dissemination in DTNs, based on a social centrality metric, that aims to satisfy user interests and maximize the cost-effectiveness of data dissemination.
Abstract: Data dissemination is useful for many applications of Disruption Tolerant Networks (DTNs). Current data dissemination schemes are generally network-centric, ignoring user interests. In this paper, we propose a novel approach for user-centric data dissemination in DTNs, which takes user interests into account and maximizes the cost-effectiveness of data dissemination. Our approach is based on a social centrality metric, which considers the social contact patterns and interests of mobile users simultaneously, and thus ensures effective relay selection. The performance of our approach is evaluated from both theoretical and experimental perspectives. By formal analysis, we derive a lower bound on the cost-effectiveness of data dissemination and analytically investigate the tradeoff between the effectiveness of relay selection and the overhead of maintaining network information. By trace-driven simulations, we show that our approach achieves better cost-effectiveness than existing data dissemination schemes.

206 citations
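
One way to picture an interest-aware centrality metric of this kind: score each candidate relay by how often it meets other users, weighted by how well those users' interests match the data being disseminated. The contact-probability and interest inputs below are simplified stand-ins, not the metric defined in the paper.

    def social_centrality(candidate, contact_prob, interests, data_keywords):
        """contact_prob[(candidate, u)] is in [0, 1]; interests[u] and data_keywords are keyword sets."""
        score = 0.0
        for user, user_interests in interests.items():
            if user == candidate:
                continue
            match = len(user_interests & data_keywords) / max(len(data_keywords), 1)
            score += contact_prob.get((candidate, user), 0.0) * match
        return score

    def pick_relay(candidates, contact_prob, interests, data_keywords):
        # Choose the relay with the highest interest-weighted contact score.
        return max(candidates,
                   key=lambda c: social_centrality(c, contact_prob, interests, data_keywords))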


StandardDOI
01 Jan 2011
TL;DR: In this article, the authors identify factors that contribute to lightning-caused faults on the line insulation of overhead distribution lines and suggest improvements to existing and new constructions.
Abstract: Factors that contribute to lightning-caused faults on the line insulation of overhead distribution lines and suggested improvements to existing and new constructions are identified in this guide.

193 citations


Journal ArticleDOI
TL;DR: The root causes of energy overhead in continuous sensing are examined and it is shown that energy-efficient continuous sensing can be achieved through proper system design.
Abstract: Today's mobile phones come with a rich set of built-in sensors such as accelerometers, ambient light sensors, compasses, and pressure sensors, which can measure various phenomena on and around the phone. Gathering user context such as user activity, geographic location, and location type requires continuous sampling of sensor data. However, such sampling shortens a phone's battery life because of the associated energy overhead. This article examines the root causes of this energy overhead and shows that energy-efficient continuous sensing can be achieved through proper system design.

179 citations


Book ChapterDOI
29 Aug 2011
TL;DR: This work proposes ISABELA, an effective method for In-situ Sort-And-B-spline Error-bounded Lossy Abatement of scientific data that is widely regarded as effectively incompressible; ISABELA significantly outperforms existing lossy compression methods such as Wavelet compression.
Abstract: Modern large-scale scientific simulations running on HPC systems generate data in the order of terabytes during a single run. To lessen the I/O load during a simulation run, scientists are forced to capture data infrequently, thereby making data collection an inherently lossy process. Yet, lossless compression techniques are hardly suitable for scientific data due to its inherently random nature; for the applications used here, they offer less than 10% compression rate. They also impose significant overhead during decompression, making them unsuitable for data analysis and visualization that require repeated data access. To address this problem, we propose an effective method for In-situ Sort-And-B-spline Error-bounded Lossy Abatement (ISABELA) of scientific data that is widely regarded as effectively incompressible. With ISABELA, we apply a preconditioner to seemingly random and noisy data along spatial resolution to achieve an accurate fitting model that guarantees a ≥ 0.99 correlation with the original data. We further take advantage of temporal patterns in scientific data to compress data by ≈ 85%, while introducing only a negligible overhead on simulations in terms of runtime. ISABELA significantly outperforms existing lossy compression methods, such as Wavelet compression. Moreover, besides being a communication-free and scalable compression technique, ISABELA is an inherently local decompression method, namely it does not decode the entire data, making it attractive for random access.

174 citations
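
A toy version of the sort-then-fit idea: sort the noisy window (storing the permutation as index metadata), fit a smooth curve to the now-monotonic values, and keep per-point corrections wherever the fit violates the error bound. A low-degree polynomial stands in here for the paper's cubic B-spline fit, and the window size and tolerances are arbitrary.

    import numpy as np

    def sort_and_fit_encode(window, degree=5, rel_error=0.01):
        order = np.argsort(window)                    # index metadata (the sorting permutation)
        sorted_vals = window[order]
        x = np.linspace(0.0, 1.0, len(window))
        coeffs = np.polyfit(x, sorted_vals, degree)   # stand-in for the B-spline fit
        approx = np.polyval(coeffs, x)
        tolerance = rel_error * (np.abs(sorted_vals) + 1e-12)
        bad = np.nonzero(np.abs(approx - sorted_vals) > tolerance)[0]
        fixups = {int(i): float(sorted_vals[i]) for i in bad}   # enforce the error bound
        return order, coeffs, fixups

    def sort_and_fit_decode(order, coeffs, fixups, n):
        x = np.linspace(0.0, 1.0, n)
        approx = np.polyval(coeffs, x)
        for i, v in fixups.items():
            approx[i] = v
        out = np.empty(n)
        out[order] = approx                           # undo the sort
        return out

    data = np.random.randn(1024)
    order, coeffs, fixups = sort_and_fit_encode(data)
    recovered = sort_and_fit_decode(order, coeffs, fixups, len(data))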


Posted Content
TL;DR: In this article, the authors propose REFIM, a low-complexity and fully distributed interference management (IM) scheme for the downlink of heterogeneous multi-cell networks.
Abstract: Due to the increasing demand of capacity in wireless cellular networks, the small cells such as pico and femto cells are becoming more popular to enjoy a spatial reuse gain, and thus cells with different sizes are expected to coexist in a complex manner. In such a heterogeneous environment, the role of interference management (IM) becomes of more importance, but technical challenges also increase, since the number of cell-edge users, suffering from severe interference from the neighboring cells, will naturally grow. In order to overcome low performance and/or high complexity of existing static and other dynamic IM algorithms, we propose a novel low-complexity and fully distributed IM scheme, called REFIM, for the downlink of heterogeneous multi-cell networks. We first formulate a general optimization problem that turns out to require intractable computation complexity for global optimality. To have a practical solution with low computational and signaling overhead, which is crucial for low-cost small-cell solutions, e.g., femto cells, in REFIM, we decompose it into per-BS problems based on the notion of reference user and reduce feedback overhead over backhauls both temporally and spatially. We evaluate REFIM through extensive simulations under various configurations, including the scenarios from a real deployment of BSs. We show that, compared to the schemes without IM, REFIM can yield more than 40% throughput improvement of cell-edge users while increasing the overall performance by 10~107%. This is equal to about 95% performance of the existing centralized IM algorithm that is known to be near-optimal but hard to implement in practice due to prohibitive complexity. We also show that as long as interference is managed well, the spectrum sharing policy can outperform the best spectrum splitting policy where the number of subchannels is optimally divided between macro and femto cells.

169 citations


Proceedings ArticleDOI
11 Apr 2011
TL;DR: New algorithms for continuous outlier monitoring in data streams, based on sliding windows, are proposed; they reduce the required storage overhead, run faster than previously proposed techniques, and offer significant flexibility.
Abstract: Anomaly detection is considered an important data mining task, aiming at the discovery of elements (also known as outliers) that show significant diversion from the expected case. More specifically, given a set of objects the problem is to return the suspicious objects that deviate significantly from the typical behavior. As in the case of clustering, the application of different criteria leads to different definitions for an outlier. In this work, we focus on distance-based outliers: an object x is an outlier if there are fewer than k objects lying at distance at most R from x. The problem offers significant challenges when a stream-based environment is considered, where data arrive continuously and outliers must be detected on-the-fly. There are a few research works studying the problem of continuous outlier detection. However, none of these proposals meets the requirements of modern stream-based applications for the following reasons: (i) they demand a significant storage overhead, (ii) their efficiency is limited and (iii) they lack flexibility. In this work, we propose new algorithms for continuous outlier monitoring in data streams, based on sliding windows. Our techniques are able to reduce the required storage overhead, run faster than previously proposed techniques and offer significant flexibility. Experiments performed on real-life as well as synthetic data sets verify our theoretical study.
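
The outlier definition above is easy to pin down in code: keep the current window and flag any object with fewer than k neighbors within distance R. The brute-force 1-D version below ignores the paper's efficiency techniques and exists only to make the semantics concrete.

    from collections import deque

    def neighbor_count(window, x, R):
        # number of other objects within distance R of x (subtract 1 for x itself)
        return sum(1 for y in window if abs(y - x) <= R) - 1

    def sliding_window_outliers(stream, window_size, R, k):
        """Yield (point, is_outlier) after each arrival, for 1-D points."""
        window = deque(maxlen=window_size)          # old objects expire automatically
        for x in stream:
            window.append(x)
            yield x, neighbor_count(window, x, R) < k

    data = [1.0, 1.1, 0.9, 5.0, 1.05, 0.95, 1.2]
    for point, flagged in sliding_window_outliers(data, window_size=5, R=0.5, k=2):
        print(point, "outlier" if flagged else "inlier")

Note that a real monitor also re-evaluates older points as the window slides; that bookkeeping is exactly where the storage and speed gains discussed above come from.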

Proceedings ArticleDOI
05 Dec 2011
TL;DR: A novel cost-effective defense technique called control flow locking, which allows for effective enforcement of control flow integrity with a small performance overhead and denies any potential gains an attacker might obtain from what is permitted in the threat model.
Abstract: Code-reuse attacks are software exploits in which an attacker directs control flow through existing code with a malicious result. One such technique, return-oriented programming, is based on "gadgets" (short pre-existing sequences of code ending in a ret instruction) being executed in arbitrary order as a result of a stack corruption exploit. Many existing code-reuse defenses have relied upon a particular attribute of the attack in question (e.g., the frequency of ret instructions in a return-oriented attack), which leads to incomplete protection, while a smaller number of efforts in protecting all exploitable control flow transfers suffer from limited deployability due to high performance overhead. In this paper, we present a novel cost-effective defense technique called control flow locking, which allows for effective enforcement of control flow integrity with a small performance overhead. Specifically, instead of immediately determining whether a control flow violation happens before the control flow transfer takes place, control flow locking lazily detects the violation after the transfer. To still restrict attackers' capability, our scheme guarantees that the deviation from the normal control flow graph will only occur at most once. Further, our scheme ensures that this deviation cannot be used to craft a malicious system call, which denies any potential gains an attacker might obtain from what is permitted in the threat model. We have developed a proof-of-concept prototype in Linux and our evaluation demonstrates desirable effectiveness and performance overhead competitive with existing techniques. In several benchmarks, our scheme is able to achieve significant gains.

Proceedings ArticleDOI
15 Feb 2011
TL;DR: A cluster-based deduplication system is presented that can deduplicate with high throughput, support deduplication ratios comparable to that of a single system, and maintain a low variation in the storage utilization of individual nodes.
Abstract: As data have been growing rapidly in data centers, deduplication storage systems continuously face challenges in providing the corresponding throughputs and capacities necessary to move backup data within backup and recovery window times. One approach is to build a cluster deduplication storage system with multiple deduplication storage system nodes. The goal is to achieve scalable throughput and capacity using extremely high-throughput (e.g. 1.5 GB/s) nodes, with a minimal loss of compression ratio. The key technical issue is to route data intelligently at an appropriate granularity. We present a cluster-based deduplication system that can deduplicate with high throughput, support deduplication ratios comparable to that of a single system, and maintain a low variation in the storage utilization of individual nodes. In experiments with dozens of nodes, we examine tradeoffs between stateless data routing approaches with low overhead and stateful approaches that have higher overhead but avoid imbalances that can adversely affect deduplication effectiveness for some datasets in large clusters. The stateless approach has been deployed in a two-node commercial system that achieves 3 GB/s for multi-stream deduplication throughput and currently scales to 5.6 PB of storage (assuming 20X total compression).
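
The stateless end of the tradeoff can be sketched as follows: group chunks into a superchunk, pick a representative fingerprint, and route the whole superchunk to node hash(representative) mod N, so similar data tends to land on the same node without any global index. The chunking and representative choice below are simplified assumptions.

    import hashlib

    def fingerprint(chunk: bytes) -> int:
        return int.from_bytes(hashlib.sha1(chunk).digest()[:8], "big")

    def route_superchunk(chunks, num_nodes):
        """Stateless routing: send all chunks of a superchunk to one node,
        chosen from a representative (here, the minimum) chunk fingerprint."""
        representative = min(fingerprint(c) for c in chunks)
        return representative % num_nodes

    superchunk = [b"chunk-a", b"chunk-b", b"chunk-c"]
    print(route_superchunk(superchunk, num_nodes=8))

A stateful variant would instead consult an index of where matching fingerprints were stored before, buying better deduplication at the price of extra lookup overhead, which is the balance the experiments above explore.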

Journal ArticleDOI
TL;DR: A new one-time signature scheme is proposed that can reduce the storage cost by a factor of 8 and the signature size by 40% compared with existing schemes, making it more appropriate for smart grid applications where the receivers have limited storage or where data communication is frequent and short.
Abstract: Multicast has been envisioned to be useful in many smart grid applications such as demand-response, wide area protection, in-substation protection, and various operation and control tasks. Since the multicast messages are related to critical control, authentication is necessary to prevent message forgery attacks. In this paper, we first identify the requirements of multicast communication and multicast authentication in the smart grid. Based on these requirements, we find that one-time signature based multicast authentication is a promising solution, due to its short authentication delay and low computation cost. However, existing one-time signatures are not designed for the smart grid and they may have high storage and bandwidth overhead. To address this problem, we propose a new one-time signature scheme which can reduce the storage cost by a factor of 8 and reduce the signature size by 40% compared with existing schemes. Thus, our scheme is more appropriate for smart grid applications where the receivers have limited storage (e.g., home appliances and field devices) or where data communication is frequent and short (e.g., phasor data). These gains come at the cost of increased computation in signature generation and/or verification; fortunately, our scheme can flexibly allocate the computations between the sender and receiver based on their computing resources. We formulate the computation allocation as a nonlinear integer programming problem to minimize the signing cost under a certain verification cost and propose a heuristic solution to solve it.
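
For readers unfamiliar with one-time signatures, the sketch below shows the classic Lamport construction (reveal one hash preimage per message bit), not the paper's storage-optimized scheme; it mainly conveys why key storage and signature size are the pain points the proposed scheme attacks.

    import hashlib, os

    H = lambda data: hashlib.sha256(data).digest()
    MSG_BITS = 256

    def keygen():
        sk = [(os.urandom(32), os.urandom(32)) for _ in range(MSG_BITS)]
        pk = [(H(a), H(b)) for a, b in sk]       # note the bulky key material: 2 * 256 hashes
        return sk, pk

    def bits_of(msg: bytes):
        digest = H(msg)
        return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(MSG_BITS)]

    def sign(sk, msg: bytes):
        return [sk[i][b] for i, b in enumerate(bits_of(msg))]   # reveal one preimage per bit

    def verify(pk, msg: bytes, sig):
        return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits_of(msg)))

    sk, pk = keygen()                            # a key pair must never sign two messages
    signature = sign(sk, b"load-shed feeder 12")
    print(verify(pk, b"load-shed feeder 12", signature))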

01 Jan 2011
TL;DR: This paper presents a model built on the SPINS security building blocks, because sensor networks operate in very resource-constrained environments and only the SPINS security protocol can fulfill the requirements of the proposed model.
Abstract: Sensor networks are a dominant technology among wireless communication technologies due to their efficiency. Security is a critical issue for every type of network, whether sensor networks or other networks. So far, many researchers have considered physically implementing sensor nodes and sensor networks, but their work has not been enough to provide adequate security for the communicating devices during communication. This paper presents a model that builds on the SPINS security building blocks, because sensor networks operate in very resource-constrained environments and only the SPINS security protocol can fulfill the requirements of the proposed model. SPINS provides two security building blocks, SNEP and µTESLA. This model introduces some unique processing-unit features, such as a beacon message and a data controller unit. The security model achieves its targets, but the main issues in sensor networks, namely power management (short battery life), computation overhead, and the low storage capacity of memory, remain unsolved. The proposed model scenarios have been simulated in QualNet 4.5.

Proceedings Article
29 Mar 2011
TL;DR: This work designs and evaluates a hierarchical heavy hitters algorithm that identifies large traffic aggregates, while striking a good balance between measurement accuracy and switch overhead.
Abstract: Traffic measurement plays an important role in many network-management tasks, such as anomaly detection and traffic engineering. However, existing solutions either rely on custom hardware designed for a specific task, or introduce a high overhead for data collection and analysis. Instead, we argue that a practical traffic-measurement solution should run on commodity network elements, support a range of measurement tasks, and provide accurate results with low overhead. Inspired by the capabilities of OpenFlow switches, we explore a measurement framework where switches match packets against a small collection of rules and update traffic counters for the highest-priority match. A separate controller can read the counters and dynamically tune the rules to quickly "drill down" to identify large traffic aggregates. As the first step towards designing measurement algorithms for this framework, we design and evaluate a hierarchical heavy hitters algorithm that identifies large traffic aggregates, while striking a good balance between measurement accuracy and switch overhead.
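
The drill-down loop can be illustrated with a minimal sketch: poll a counter per installed prefix rule, and split only the prefixes whose traffic exceeds the heavy-hitter threshold into two longer prefixes for the next round. The read_prefix_counter callback and the IPv4-only prefix handling are assumptions for the example.

    def children(prefix):
        """Split an IPv4 prefix (base, length) into its two children."""
        base, length = prefix
        o = list(map(int, base.split(".")))
        as_int = (o[0] << 24) | (o[1] << 16) | (o[2] << 8) | o[3]
        sibling = as_int | (1 << (31 - length))
        fmt = lambda v: ".".join(str((v >> s) & 0xFF) for s in (24, 16, 8, 0))
        return [(fmt(as_int), length + 1), (fmt(sibling), length + 1)]

    def heavy_hitter_prefixes(read_prefix_counter, threshold, max_len=32):
        """read_prefix_counter((base, length)) -> bytes matched by that rule last interval."""
        heavy, frontier = [], [("0.0.0.0", 0)]
        while frontier:
            prefix = frontier.pop()
            if read_prefix_counter(prefix) < threshold:
                continue
            if prefix[1] == max_len:
                heavy.append(prefix)
                continue
            kids = children(prefix)
            hot_kids = [k for k in kids if read_prefix_counter(k) >= threshold]
            if hot_kids:
                frontier.extend(hot_kids)     # drill down with finer rules next interval
            else:
                heavy.append(prefix)          # heavy here, but not attributable to a finer prefix
        return heavy

In the OpenFlow setting described above, each call to read_prefix_counter would correspond to the controller installing a wildcard rule and reading back its counter in a following measurement interval, subject to the switch's limited rule budget.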

Proceedings Article
01 Jan 2011
TL;DR: This paper presents experimental results on two popular state-of-the-art virtualization platforms, Citrix XenServer 5.5 and VMware ESX 4.0, and proposes a basic, generic performance prediction model for the two different types of hypervisor architectures.
Abstract: Due to trends like Cloud Computing and Green IT, virtualization technologies are gaining increasing importance. They promise energy and cost savings by sharing physical resources, thus making resource usage more efficient. However, resource sharing and other factors have direct effects on system performance, which are not yet well-understood. Hence, performance prediction and performance management of services deployed in virtualized environments like public and private Clouds is a challenging task. Because of the large variety of virtualization solutions, a generic approach to predict the performance overhead of services running on virtualization platforms is highly desirable. In this paper, we present experimental results on two popular state-of-the-art virtualization platforms, Citrix XenServer 5.5 and VMware ESX 4.0, as representatives of the two major hypervisor architectures. Based on these results, we propose a basic, generic performance prediction model for the two different types of hypervisor architectures. The target is to predict the performance overhead for executing services on virtualized platforms.

Journal ArticleDOI
Yuan Liao
TL;DR: In this article, the authors propose two fault-location algorithms for overhead distribution systems that provide a unified solution, applicable to all types of faults, which eliminates or reduces iterative procedures. The proposed methods are based on the bus impedance matrix, through which the substation voltage and current quantities can be expressed as a function of the fault location and fault resistance.
Abstract: Various methods have been proposed in the past for locating faults on distribution systems, which generally entail iterative procedures. This paper presents novel fault-location algorithms for overhead distribution systems that provide a unified solution that eliminates or reduces iterative procedures applicable to all types of faults. Two types of methods, respectively, for nonradial systems and radial systems have been proposed by utilizing voltage and current measurements at the local substation. The proposed methods are based on the bus impedance matrix, through which the substation voltage and current quantities can be expressed as a function of the fault location and fault resistance, a solution to which yields the fault location. The methods are developed in phase domain and, consequently, are naturally applicable to unbalanced systems. The assumptions made are that the distribution network parameters and topology are known so that the bus impedance matrix can be developed. Simulation studies have demonstrated that both types of methods are accurate and quite robust to load variations and measurement errors.
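
The bus-impedance-matrix relationship the methods build on can be written compactly for a single-phase illustration; the notation below is a generic textbook form, assuming a fault through resistance R_f at a (possibly fictitious) bus k, not the paper's exact phase-domain equations.

    % Superposition on the pre-fault state using the bus impedance matrix Z.
    % Fault current at bus k through fault resistance R_f, and post-fault voltage at substation bus m:
    I_f = \frac{V_k^{\mathrm{pre}}}{Z_{kk} + R_f}, \qquad
    V_m = V_m^{\mathrm{pre}} - Z_{mk}\, I_f
    % Z_{kk} and Z_{mk} depend on where along the line the fictitious fault bus k sits, so matching
    % the measured substation voltage and current against these expressions yields the fault
    % distance and the fault resistance R_f.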

Patent
27 Jan 2011
TL;DR: In this article, the authors describe a master electronic circuit (300) that includes a storage (324) representing a wireless collision avoidance networking process (332), involving collision avoidance overhead, combined with a schedulable process (345) including a serial data transfer process and a scheduler; a wireless modem (350) operable to transmit and receive wireless signals for the networking process; and a processor (320) coupled with the storage (324) and the wireless modem (350) and operable to execute the scheduler to establish and transmit a schedule (110) for plural serial data transfers.
Abstract: A master electronic circuit (300) includes a storage (324) representing a wireless collision avoidance networking process (332) involving collision avoidance overhead and combined with a schedulable process (345) including a serial data transfer process and a scheduler, a wireless modem (350) operable to transmit and receive wireless signals for the networking process (332), and a processor (320) coupled with the storage (324) and with the wireless modem (350) and operable to execute the scheduler to establish and transmit a schedule (110) for plural serial data transfers involving the processor (320) and distinct station identifications, and to execute the serial data transfers inside the wireless networking process and according to the schedule so as to avoid at least some of the collision avoidance overhead. Other electronic circuits, processes of making and using, and systems are disclosed.

Journal ArticleDOI
TL;DR: Compared to MORE, a state-of-the-art NC-based OR protocol, CCACK improves both throughput and fairness, by up to 20x and 124%, respectively, with average improvements of 45% and 8.8%, respectively.
Abstract: The use of random linear network coding (NC) has significantly simplified the design of opportunistic routing (OR) protocols by removing the need of coordination among forwarding nodes for avoiding duplicate transmissions. However, NC-based OR protocols face a new challenge: How many coded packets should each forwarder transmit? To avoid the overhead of feedback exchange, most practical existing NC-based OR protocols compute offline the expected number of transmissions for each forwarder using heuristics based on periodic measurements of the average link loss rates and the ETX metric. Although attractive due to their minimal coordination overhead, these approaches may suffer significant performance degradation in dynamic wireless environments with continuously changing levels of channel gains, interference, and background traffic. In this paper, we propose CCACK, a new efficient NC-based OR protocol. CCACK exploits a novel Cumulative Coded ACKnowledgment scheme that allows nodes to acknowledge network-coded traffic to their upstream nodes in a simple way, oblivious to loss rates, and with negligible overhead. Through extensive simulations and testbed experiments, we show that CCACK greatly improves both throughput and fairness compared to MORE, a state-of-the-art NC-based OR protocol.
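
The bookkeeping every NC-based OR protocol needs, including the ACK logic CCACK builds on, is deciding whether a received coded packet is innovative, i.e. whether its coefficient vector increases the rank of what the node already holds. The GF(2) rank test below is a generic sketch (practical schemes usually work over GF(2^8)) and is not CCACK's cumulative-ACK construction itself.

    def is_innovative(basis, coeffs):
        """basis: reduced coefficient vectors as ints (one bit per source packet), kept sorted descending.
        coeffs: candidate coefficient vector over GF(2). Returns (innovative, new_basis)."""
        v = coeffs
        for b in basis:                               # basis is processed in descending order
            if v & (1 << (b.bit_length() - 1)):       # candidate has b's leading bit set
                v ^= b                                # eliminate it
        if v == 0:
            return False, basis                       # linearly dependent: nothing new
        return True, sorted(basis + [v], reverse=True)

    basis = []
    for pkt in (0b1010, 0b0110, 0b1100):              # coefficient vectors over 4 source packets
        ok, basis = is_innovative(basis, pkt)
        print(bin(pkt), "innovative" if ok else "redundant")   # the third one is 1010 XOR 0110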

Book ChapterDOI
15 May 2011
TL;DR: This work presents a method, relying on witness indistinguishability, to compile Yao's two-player garbled circuit protocol into one that is secure against malicious adversaries, and develops and analyzes new solutions to issues arising with this transformation.
Abstract: We present a method to compile Yao's two-player garbled circuit protocol into one that is secure against malicious adversaries; the compilation relies on witness indistinguishability. Our approach can enjoy lower communication and computation overhead than methods based on cut-and-choose [13] and lower overhead than methods based on zero-knowledge proofs [8] (or Σ-protocols [14]). To do so, we develop and analyze new solutions to issues arising with this transformation:
-- How to guarantee the generator's input consistency
-- How to support different outputs for each player without adding extra gates to the circuit of the function f being computed
-- How the evaluator can retrieve input keys but avoid selective failure attacks
-- Challenging 3/5 of the circuits is near optimal for cut-and-choose (and better than challenging 1/2)
Our protocols require the existence of secure-OT and claw-free functions that have a weak malleability property. We discuss an experimental implementation of our protocol to validate our efficiency claims.

Journal ArticleDOI
02 Sep 2011-Sensors
TL;DR: Two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position are proposed.
Abstract: The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.
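
A compact sketch of weighted least squares circular (range-based) positioning: linearize the circle equations by subtracting a reference anchor and weight each remaining equation by the confidence of its RSS-derived distance. The anchor layout, distances, and weights are invented for the example, and the paper's hyperbolic variant and weighting choices differ in detail.

    import numpy as np

    def wls_circular(anchors, dists, weights):
        """anchors: (n, 2) positions; dists: estimated ranges; weights: per-measurement confidence.
        The first anchor is the linearization reference (its own weight is ignored here)."""
        anchors, dists, weights = map(np.asarray, (anchors, dists, weights))
        x0, y0, d0 = anchors[0, 0], anchors[0, 1], dists[0]
        A = 2.0 * (anchors[1:] - anchors[0])                       # linear system A p = b
        b = (d0**2 - dists[1:]**2
             + anchors[1:, 0]**2 - x0**2 + anchors[1:, 1]**2 - y0**2)
        W = np.diag(weights[1:])
        return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)           # p = (A^T W A)^{-1} A^T W b

    anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]
    true_pos = np.array([3.0, 4.0])
    dists = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
    print(wls_circular(anchors, dists, weights=[1.0, 1.0, 0.5, 0.8]))   # ~ [3. 4.]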

Book ChapterDOI
14 Aug 2011
TL;DR: New authenticated data structures are presented that allow any entity to publicly verify a proof attesting the correctness of primitive set operations such as intersection, union, subset and set difference, based on the bilinear q-strong Diffie-Hellman assumption.
Abstract: We study the design of protocols for set-operation verification, namely the problem of cryptographically checking the correctness of outsourced set operations performed by an untrusted server over a dynamic collection of sets that are owned (and updated) by a trusted source. We present new authenticated data structures that allow any entity to publicly verify a proof attesting the correctness of primitive set operations such as intersection, union, subset and set difference. Based on a novel extension of the security properties of bilinear-map accumulators as well as on a primitive called accumulation tree, our protocols achieve optimal verification and proof complexity (i.e., only proportional to the size of the query parameters and the answer), as well as optimal update complexity (i.e., constant), while incurring no extra asymptotic space overhead. The proof construction is also efficient, adding a logarithmic overhead to the computation of the answer of a set-operation query. In contrast, existing schemes entail high communication and verification costs or high storage costs. Applications of interest include efficient verification of keyword search and database queries. The security of our protocols is based on the bilinear q-strong Diffie-Hellman assumption.

Proceedings ArticleDOI
01 Dec 2011
TL;DR: LTE4V2X is presented, a novel framework for a centralized vehicular network organization using LTE that takes advantage of a centralized architecture around the eNodeB in order to optimize cluster management and provide better performance.
Abstract: Vehicular networks face a number of new challenges, particularly due to the extremely dynamic network topology and the large, variable number of mobile nodes. To overcome these problems, an effective solution is to organize the network in a way that facilitates management tasks and permits the deployment of a wide range of applications, such as urban sensing applications. This paper presents LTE4V2X, a novel framework for centralized vehicular network organization using LTE. It takes advantage of a centralized architecture around the eNodeB in order to optimize cluster management and provide better performance. We study its performance against a decentralized organization protocol for a well-known urban sensing application, the Floating Car Data (FCD) application. We analyze the performance of LTE4V2X using the NS-3 simulation environment and a realistic urban mobility model. We show that it improves performance by lowering the overhead induced by control messages, reducing FCD packet losses, and enhancing the goodput.

Journal ArticleDOI
TL;DR: Batch verification tremendously increases message verification speed, and since identity-based cryptography is employed in the scheme to generate private keys for pseudo identities, certificates are not required and transmission overhead can be significantly reduced.
Abstract: In this paper, an efficient identity-based batch signature verification scheme is proposed for vehicular communications. With the proposed scheme, vehicles can verify a batch of signatures once instead of in a one-by-one manner. Hence the message verification speed can be tremendously increased. To identify invalid signatures in a batch of signatures, this paper adopts a group testing technique, which can find the invalid signatures with a small number of batch verifications. In addition, a trust authority in our scheme is capable of tracing a vehicle's real identity from its pseudo identity, and therefore conditional privacy preserving can also be achieved. Moreover, since identity-based cryptography is employed in the scheme to generate private keys for pseudo identities, certificates are not required and thus transmission overhead can be significantly reduced.
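
The group-testing step for locating invalid signatures can be shown generically: verify the whole batch at once and, only when it fails, split it and recurse, so the number of batch verifications stays small when few signatures are bad. The batch_verify callback stands in for the scheme's actual batch check.

    def find_invalid(signatures, batch_verify):
        """batch_verify(sigs) -> True iff every signature in sigs is valid.
        Returns the invalid signatures via divide-and-conquer group testing."""
        if not signatures or batch_verify(signatures):
            return []
        if len(signatures) == 1:
            return list(signatures)
        mid = len(signatures) // 2
        return (find_invalid(signatures[:mid], batch_verify)
                + find_invalid(signatures[mid:], batch_verify))

    # Toy run: signatures are ints and the "invalid" ones are the negative values.
    sigs = [1, 2, -3, 4, 5, 6, -7, 8]
    print(find_invalid(sigs, batch_verify=lambda s: all(x > 0 for x in s)))   # -> [-3, -7]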

Proceedings ArticleDOI
09 Oct 2011
TL;DR: A novel resource-management scheme is presented that supports so-called malleable applications, which can adapt their level of parallelism to the assigned resources; extensive studies and experiments show that it is practically useful for employment in large many-core systems.
Abstract: The trend towards many-core systems comes with various issues, among them their highly dynamic and non-predictable workloads. Hence, new paradigms for managing resources of many-core systems are of paramount importance. The problem of resource management, e.g. mapping applications to processor cores, is NP-hard though, requiring heuristics especially when performed online. In this paper, we therefore present a novel resource-management scheme that supports so-called malleable applications. These applications can adapt their level of parallelism to the assigned resources. By design, our (decentralized) scheme is scalable and it copes with the computational complexity by focusing on local decision-making. Our simulations show that the quality of the mapping decisions of our approach stays close to the mapping quality of state-of-the-art (i.e. centralized) online schemes for malleable applications, but at a reduced overall communication overhead (only about 12.75% on a 1024-core system with a total workload of 32 multi-threaded applications). In addition, our approach is scalable as opposed to a centralized scheme and therefore it is practically useful for employment in large many-core systems, as our extensive studies and experiments show.

Journal ArticleDOI
01 Jun 2011
TL;DR: In this article, a family of hybrid algorithms for adaptive indexing in column-store database systems is presented, and compared with traditional full index lookup and scan of unordered data.
Abstract: Adaptive indexing is characterized by the partial creation and refinement of the index as side effects of query execution. Dynamic or shifting workloads may benefit from preliminary index structures focused on the columns and specific key ranges actually queried --- without incurring the cost of full index construction. The costs and benefits of adaptive indexing techniques should therefore be compared in terms of initialization costs, the overhead imposed upon queries, and the rate at which the index converges to a state that is fully-refined for a particular workload component. Based on an examination of database cracking and adaptive merging, which are two techniques for adaptive indexing, we seek a hybrid technique that has a low initialization cost and also converges rapidly. We find the strengths and weaknesses of database cracking and adaptive merging complementary. One has a relatively high initialization cost but converges rapidly. The other has a low initialization cost but converges relatively slowly. We analyze the sources of their respective strengths and explore the space of hybrid techniques. We have designed and implemented a family of hybrid algorithms in the context of a column-store database system. Our experiments compare their behavior against database cracking and adaptive merging, as well as against both traditional full index lookup and scan of unordered data. We show that the new hybrids significantly improve over past methods while at least two of the hybrids come very close to the "ideal performance" in terms of both overhead per query and convergence to a final state.
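
Database cracking, one of the two techniques being hybridized, is easy to show in miniature: each range query physically partitions the column around the requested bounds as a side effect, so later queries scan ever smaller pieces. The list-based cracker below is a simplification, assuming an integer column and ignoring the boundary tree a real column store would keep.

    import bisect

    class CrackedColumn:
        def __init__(self, values):
            self.data = list(values)    # physically reorganized as a side effect of queries
            self.cracks = []            # sorted (pivot, pos): data[:pos] < pivot <= data[pos:]

        def _crack(self, key):
            pivots = [p for p, _ in self.cracks]
            i = bisect.bisect_left(pivots, key)
            if i < len(pivots) and pivots[i] == key:
                return self.cracks[i][1]                  # already cracked on this key
            lo = self.cracks[i - 1][1] if i > 0 else 0
            hi = self.cracks[i][1] if i < len(self.cracks) else len(self.data)
            piece = self.data[lo:hi]
            smaller = [v for v in piece if v < key]
            self.data[lo:hi] = smaller + [v for v in piece if v >= key]
            pos = lo + len(smaller)
            self.cracks.insert(i, (key, pos))
            return pos

        def range_query(self, low, high):                 # values in [low, high)
            a = self._crack(low)
            b = self._crack(high)
            return self.data[a:b]

    col = CrackedColumn([7, 1, 9, 3, 8, 2, 6, 4, 5])
    print(sorted(col.range_query(3, 7)))                  # -> [3, 4, 5, 6]
    print(col.cracks)                                     # boundaries accumulated as side effects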

Journal ArticleDOI
TL;DR: A method using G-networks is developed to incorporate both the effect of user traffic and the overhead in QoS and energy consumption introduced by the control traffic that will be needed to carry out the re-routing decisions, and this approach results in an algorithm which has O(N³) time complexity.
Abstract: Although there is great interest in reducing energy consumption for all areas of human activity, many of the proposed approaches such as the smart home or the smart grid are actually prone to an increase in the use of Information and Communication Technologies (ICT) which itself is a big consumer of energy. It is remarkable that ICT's carbon imprint is of the order of 2% of the world total, comparable to the carbon imprint of air travel. Thus, it is imperative to address energy savings in ICT and, in particular, in data centres and networks. This paper follows up on our previous work that seeks novel ways to reduce the energy consumption in packet networks which constitute the backbone of the Internet and of the information society as a whole. Here we discuss the use of routing control as a means to reduce energy consumption while remaining aware of QoS considerations, and propose a method that uses a queueing theoretic analysis and optimization technique to distribute traffic so as to reduce a cost function that comprises both energy and QoS. A method using G-networks is developed to incorporate both the effect of user traffic and the overhead in QoS and energy consumption introduced by the control traffic that will be needed to carry out the re-routing decisions. For an N-node network, we show that this approach results in an algorithm which has O(N³) time complexity. Because this approach may be too costly in computational overhead and delays, we also propose another approach that uses load balancing and which would be much simpler to implement.

Posted Content
TL;DR: ANDaNA, presented in this paper, is an NDN add-on tool that borrows a number of features from Tor and provides anonymity comparable to Tor's with a lower relative overhead.
Abstract: Content-centric networking -- also known as information-centric networking (ICN) -- shifts emphasis from hosts and interfaces (as in today's Internet) to data. Named data becomes addressable and routable, while locations that currently store that data become irrelevant to applications. Named Data Networking (NDN) is a large collaborative research effort that exemplifies the content-centric approach to networking. NDN has some innate privacy-friendly features, such as lack of source and destination addresses on packets. However, as discussed in this paper, NDN architecture prompts some privacy concerns mainly stemming from the semantic richness of names. We examine privacy-relevant characteristics of NDN and present an initial attempt to achieve communication privacy. Specifically, we design an NDN add-on tool, called ANDaNA, that borrows a number of features from Tor. As we demonstrate via experiments, it provides comparable anonymity with lower relative overhead.

Book ChapterDOI
27 Sep 2011
TL;DR: This work views event sequences as observation sequences of a Hidden Markov Model, uses an HMM model of the monitored program to "fill in" sampling-induced gaps in observation sequences, and extends the classic forward algorithm for HMM state estimation to compute the probability that the property is satisfied by an execution of the program.
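
The gap-tolerant forward pass at the heart of this approach can be sketched directly: at steps where sampling dropped the observation, the usual emission term is simply omitted, so the recursion sums over every event the program could have produced there. The two-state HMM below is an arbitrary toy, and the full approach additionally tracks the state of the monitored temporal property, which this sketch leaves out.

    import numpy as np

    def forward_with_gaps(pi, A, B, obs):
        """pi: (S,) initial distribution; A: (S, S) row-stochastic transitions; B: (S, O) emissions.
        obs: event sequence where None marks a sampling-induced gap. Returns P(observed events)."""
        alpha = pi * B[:, obs[0]] if obs[0] is not None else pi.copy()
        for o in obs[1:]:
            alpha = alpha @ A                  # propagate the state estimate one step
            if o is not None:
                alpha = alpha * B[:, o]        # weight by the emission likelihood when observed
            # on a gap there is no emission factor, i.e. we sum over all possible unseen events
        return float(alpha.sum())

    pi = np.array([0.6, 0.4])
    A = np.array([[0.7, 0.3],
                  [0.2, 0.8]])
    B = np.array([[0.9, 0.1],                  # state 0 mostly emits event 0
                  [0.2, 0.8]])                 # state 1 mostly emits event 1
    print(forward_with_gaps(pi, A, B, obs=[0, None, 1, 1]))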
Abstract: We introduce the concept of Runtime Verification with State Estimation and show how this concept can be applied to estimate the probability that a temporal property is satisfied by a run of a program when monitoring overhead is reduced by sampling. In such situations, there may be gaps in the observed program executions, thus making accurate estimation challenging. To deal with the effects of sampling on runtime verification, we view event sequences as observation sequences of a Hidden Markov Model (HMM), use an HMM model of the monitored program to "fill in" sampling-induced gaps in observation sequences, and extend the classic forward algorithm for HMM state estimation (which determines the probability of a state sequence, given an observation sequence) to compute the probability that the property is satisfied by an execution of the program. To validate our approach, we present a case study based on the mission software for a Mars rover. The results of our case study demonstrate high prediction accuracy for the probabilities computed by our algorithm. They also show that our technique is much more accurate than simply evaluating the temporal property on the given observation sequences, ignoring the gaps.