
Showing papers on "Overhead (computing) published in 2006"


Proceedings Article
30 May 2006
TL;DR: It is shown that with reasonable overhead, a Provenance-Aware Storage System can provide useful functionality not available in today's file systems or provenance management systems.
Abstract: A Provenance-Aware Storage System (PASS) is a storage system that automatically collects and maintains provenance or lineage, the complete history or ancestry of an item. We discuss the advantages of treating provenance as meta-data collected and maintained by the storage system, rather than as manual annotations stored in a separately administered database. We describe a PASS implementation, discussing the challenges it presents, performance cost it incurs, and the new functionality it enables. We show that with reasonable overhead, we can provide useful functionality not available in today's file systems or provenance management systems.
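The idea of treating provenance as metadata collected by the storage layer can be sketched in a few lines. This is an illustrative toy, not the PASS implementation; the `ProvenanceStore` class and its method names are invented for the example.

```python
# Toy sketch (not PASS): a storage layer that records lineage as metadata
# alongside each write, so the full ancestry of an item can be reconstructed
# without a separately administered database.

class ProvenanceStore:
    def __init__(self):
        self.data = {}        # name -> content
        self.provenance = {}  # name -> list of (operation, input names)

    def write(self, name, content, inputs=(), operation="write"):
        """Store content and record which inputs produced it."""
        self.data[name] = content
        self.provenance.setdefault(name, []).append((operation, tuple(inputs)))

    def lineage(self, name, seen=None):
        """Return the transitive set of ancestors of `name`."""
        seen = set() if seen is None else seen
        for _, inputs in self.provenance.get(name, []):
            for parent in inputs:
                if parent not in seen:
                    seen.add(parent)
                    self.lineage(parent, seen)
        return seen

store = ProvenanceStore()
store.write("raw.csv", "a,b\n1,2")
store.write("clean.csv", "a,b\n1,2", inputs=["raw.csv"], operation="clean")
store.write("report.txt", "summary", inputs=["clean.csv"], operation="summarize")
# store.lineage("report.txt") now contains both clean.csv and raw.csv
```

Because lineage is written at the same time as the data, queries like "what did this file come from" need no manual annotation step.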

588 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a game theoretic framework to analyze the behavior of cognitive radios for distributed adaptive channel allocation, which can be formulated as a potential game, and thus converges to a deterministic channel allocation Nash equilibrium point.
Abstract: In this work, we propose a game theoretic framework to analyze the behavior of cognitive radios for distributed adaptive channel allocation. We define two different objective functions for the spectrum sharing games, which capture the utility of selfish users and cooperative users, respectively. Based on the utility definition for cooperative users, we show that the channel allocation problem can be formulated as a potential game, and thus converges to a deterministic channel allocation Nash equilibrium point. Alternatively, a no-regret learning implementation is proposed for both scenarios and is shown to have performance similar to that of the potential game when cooperation is enforced, but with a higher variability across users. The no-regret learning formulation is particularly useful to accommodate selfish users. Non-cooperative learning games have the advantage of a very low overhead for information exchange in the network. We show that cooperation-based spectrum sharing etiquette improves the overall network performance at the expense of an increased overhead required for information exchange.
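The convergence claim for the cooperative formulation can be illustrated with a toy congestion game: give each radio a cost equal to the number of other radios on its channel. Such a game admits an exact potential function, so asynchronous best responses terminate at a pure Nash equilibrium. This sketch illustrates the generic potential-game property, not the paper's specific utility functions.

```python
import random

def best_response_allocation(n_users, n_channels, seed=0):
    """Best-response dynamics for a toy channel-allocation potential game.

    Each user's cost is the number of other users sharing its channel (a
    congestion game, hence an exact potential game), so repeated strict
    best responses must converge to a pure Nash equilibrium.
    """
    rng = random.Random(seed)
    choice = [rng.randrange(n_channels) for _ in range(n_users)]
    changed = True
    while changed:
        changed = False
        for u in range(n_users):
            # count the other users on each channel
            counts = [0] * n_channels
            for v, ch in enumerate(choice):
                if v != u:
                    counts[ch] += 1
            best = min(range(n_channels), key=lambda ch: counts[ch])
            if counts[best] < counts[choice[u]]:
                choice[u] = best      # strict improvement: potential decreases
                changed = True
    return choice

alloc = best_response_allocation(10, 3)
```

At any equilibrium the channel loads differ by at most one, since a radio on a heavier channel could otherwise strictly lower its cost by switching.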

556 citations


Proceedings ArticleDOI
31 Oct 2006
TL;DR: It is shown that protothreads significantly reduce the complexity of a number of widely used programs previously written with event-driven state machines, with a memory overhead of only two bytes per protothread.
Abstract: Event-driven programming is a popular model for writing programs for tiny embedded systems and sensor network nodes. While event-driven programming can keep the memory overhead down, it enforces a state machine programming style which makes many programs difficult to write, maintain, and debug. We present a novel programming abstraction called protothreads that makes it possible to write event-driven programs in a thread-like style, with a memory overhead of only two bytes per protothread. We show that protothreads significantly reduce the complexity of a number of widely used programs previously written with event-driven state machines. For the examined programs the majority of the state machines could be entirely removed. In the other cases the number of states and transitions was drastically decreased. With protothreads the number of lines of code was reduced by one third. The execution time overhead of protothreads is on the order of a few processor cycles.
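The thread-like style can be loosely mimicked with Python generators, which, like protothreads, suspend at explicit points and keep only a continuation rather than a full per-thread stack. This is a conceptual analogy only; real protothreads are C macros that store a two-byte continuation point.

```python
# Generator-based analogy for a protothread: straight-line waiting code
# instead of a hand-written event-driven state machine.

def blink(events, log):
    """A 'protothread': waits for timer events in sequential style."""
    while True:
        while not events.pop("timer", False):
            yield                      # like PT_WAIT_UNTIL(timer expired)
        log.append("toggle LED")

events, log = {}, []
pt = blink(events, log)
next(pt)                               # run to the first wait point
for _ in range(3):
    events["timer"] = True             # the event loop posts a timer event
    next(pt)                           # resume the protothread
print(log)                             # ['toggle LED', 'toggle LED', 'toggle LED']
```

The event loop stays in control (each `next` runs until the next wait point), while the handler code reads top to bottom instead of dispatching on an explicit state variable.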

418 citations


Patent
27 Mar 2006
TL;DR: In this patent, radio frames are divided into a plurality of subframes, and data is transmitted within those subframes, with a frame duration selected from two or more possible frame durations.
Abstract: During operation, radio frames are divided into a plurality of subframes. Data is transmitted within those subframes, with a frame duration selected from two or more possible frame durations.

373 citations


Proceedings ArticleDOI
28 Jun 2006
TL;DR: A case for HPC with virtual machines is presented by introducing a framework which addresses the performance and management overhead associated with VM-based computing and shows that HPC applications can achieve almost the same performance as those running in a native, non-virtualized environment.
Abstract: Virtual machine (VM) technologies are experiencing a resurgence in both industry and research communities. VMs offer many desirable features such as security, ease of management, OS customization, performance isolation, check-pointing, and migration, which can be very beneficial to the performance and the manageability of high performance computing (HPC) applications. However, very few HPC applications currently run in a virtualized environment due to the performance overhead of virtualization. Further, using VMs for HPC also introduces additional challenges such as management and distribution of OS images. In this paper we present a case for HPC with virtual machines by introducing a framework which addresses the performance and management overhead associated with VM-based computing. Two key ideas in our design are: Virtual Machine Monitor (VMM) bypass I/O and scalable VM image management. VMM-bypass I/O achieves high communication performance for VMs by exploiting the OS-bypass feature of modern high speed interconnects such as InfiniBand. Scalable VM image management significantly reduces the overhead of distributing and managing VMs in large scale clusters. Our current implementation is based on the Xen VM environment and InfiniBand. However, many of our ideas are readily applicable to other VM environments and high speed interconnects. We carry out detailed analysis on the performance and management overhead of our VM-based HPC framework. Our evaluation shows that HPC applications can achieve almost the same performance as those running in a native, non-virtualized environment. Therefore, our approach holds promise to bring the benefits of VMs to HPC applications with very little degradation in performance.

352 citations


Proceedings ArticleDOI
11 Dec 2006
TL;DR: This work proposes address space layout permutation (ASLP), which introduces a high degree of randomness (or high entropy) with minimal performance overhead, and modifies the Linux operating system kernel to permute stack, heap, and memory-mapped regions.
Abstract: Address space randomization is an emerging and promising method for stopping a broad range of memory corruption attacks. By randomly shifting critical memory regions at process initialization time, address space randomization converts an otherwise successful malicious attack into a benign process crash. However, existing approaches either introduce insufficient randomness, or require source code modification. While insufficient randomness allows successful brute-force attacks, as shown in recent studies, the required source code modification prevents this effective method from being used for commodity software, which is the major source of exploited vulnerabilities on the Internet. We propose Address Space Layout Permutation (ASLP), which introduces a high degree of randomness (or high entropy) with minimal performance overhead. Essential to ASLP is a novel binary rewriting tool that can place the static code and data segments of a compiled executable at a randomly specified location and perform fine-grained permutation of procedure bodies in the code segment as well as static data objects in the data segment. We have also modified the Linux operating system kernel to permute stack, heap, and memory mapped regions. Together, ASLP completely permutes memory regions in an application. Our security and performance evaluation shows minimal performance overhead with orders of magnitude improvement in randomness (e.g., up to 29 bits of randomness on a 32-bit architecture).
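The entropy comparison can be made concrete: for address-space randomization, the number of bits of randomness is the log of the number of distinct placements an attacker must search. The sketch below uses invented example numbers, not the paper's measurements (which reach up to 29 bits by also permuting at finer granularity than whole-image placement).

```python
import math

def placement_entropy_bits(region_bytes, image_bytes, align=4096):
    """Entropy in bits from choosing a uniformly random aligned base
    address: log2 of the number of feasible placements in the region."""
    slots = (region_bytes - image_bytes) // align + 1
    return math.log2(slots)

# A 1 MiB executable placed page-aligned anywhere in a 3 GiB user space:
bits = placement_entropy_bits(3 * 2**30, 2**20)

# Coarser alignment (e.g. 64 KiB) shrinks the space an attacker must
# brute-force, which is why fine-grained permutation adds entropy.
coarse_bits = placement_entropy_bits(3 * 2**30, 2**20, align=65536)
```

With too few bits, the brute-force attacks cited in the abstract become feasible: an attacker simply retries until the guessed base address is right.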

313 citations


Proceedings ArticleDOI
11 Dec 2006
TL;DR: A novel algorithm called SCALE is derived that provides a significant performance improvement over the existing iterative water-filling (IWF) algorithm in multi-user DSL networks, doing so with comparably low complexity.
Abstract: Dynamic Spectrum Management of Digital Subscriber Lines (DSL) has the potential to dramatically increase the capacity of the aging last-mile copper access network. This paper takes an important step toward fulfilling this potential through power spectrum balancing. We derive a novel algorithm called SCALE that provides a significant performance improvement over the existing iterative water-filling (IWF) algorithm in multi-user DSL networks, doing so with comparably low complexity. The algorithm is easily distributed through measurement and limited message-passing with the use of a Spectrum Management Center. We outline how overhead can be managed, and show that in the limit of zero message-passing, performance reduces to IWF. Numerical convergence of SCALE was found to be extremely fast when applied to VDSL, with performance exceeding that of iterative water-filling in just a few iterations, and reaching over 90% of the final rate in under five iterations. Lastly, we return to the problem of iterative water-filling and derive a new algorithm named SCAWF that is shown to be a very simple way to waterfill, particularly suited to the multi-user context.

285 citations


Journal ArticleDOI
TL;DR: Simulations indicate that the accuracy of the CFO estimates asymptotically achieves the Cramer-Rao bound and the proposed algorithm requires increased overhead but has more flexibility as it can be used with any subcarrier assignment scheme.
Abstract: Maximum-likelihood estimation of the carrier frequency offset (CFO), timing error, and channel response of each active user in the uplink of an orthogonal frequency-division multiple-access system is investigated in this study, assuming that a training sequence is available. The exact solution to this problem turns out to be too complex for practical purposes as it involves a search over a multidimensional domain. However, making use of the alternating projection method, we replace the above search with a sequence of mono-dimensional searches. This results in an estimation algorithm of a reasonable complexity which is suitable for practical applications. As compared with other existing semi-blind methods, the proposed algorithm requires increased overhead but has more flexibility as it can be used with any subcarrier assignment scheme. Simulations indicate that the accuracy of the CFO estimates asymptotically achieves the Cramer-Rao bound.

285 citations


Journal ArticleDOI
TL;DR: It is shown that the analysis of system call arguments and the use of Bayesian classification improve detection accuracy and resilience against evasion attempts, and a tool based on this approach is described.
Abstract: Intrusion detection systems (IDSs) are used to detect traces of malicious activities targeted against the network and its resources. Anomaly-based IDSs build models of the expected behavior of applications by analyzing events that are generated during the applications' normal operation. Once these models have been established, subsequent events are analyzed to identify deviations, on the assumption that anomalies represent evidence of an attack. Host-based anomaly detection systems often rely on system call sequences to characterize the normal behavior of applications. Recently, it has been shown how these systems can be evaded by launching attacks that execute legitimate system call sequences. The evasion is possible because existing techniques do not take into account all available features of system calls. In particular, system call arguments are not considered. We propose two primary improvements upon existing host-based anomaly detectors. First, we apply multiple detection models to system call arguments. Multiple models allow the arguments of each system call invocation to be evaluated from several different perspectives. Second, we introduce a sophisticated method of combining the anomaly scores from each model into an overall aggregate score. The combined anomaly score determines whether an event is part of an attack. Individual anomaly scores are often contradictory and, therefore, a simple weighted sum cannot deliver reliable results. To address this problem, we propose a technique that uses Bayesian networks to perform system call classification. We show that the analysis of system call arguments and the use of Bayesian classification improve detection accuracy and resilience against evasion attempts. In addition, the paper describes a tool based on our approach and provides a quantitative evaluation of its performance in terms of both detection effectiveness and overhead. A comparison with four related approaches is also presented.

277 citations


01 Jan 2006
TL;DR: This paper introduces a model called m-confidentiality which deals with minimality attacks, and proposes a feasible solution that can prevent such attacks with very little overhead and information loss.
Abstract: Data publishing generates much concern over the protection of individual privacy. Recent studies consider cases where the adversary may possess different kinds of knowledge about the data. In this paper, we show that knowledge of the mechanism or algorithm of anonymization for data publication can also lead to extra information that assists the adversary and jeopardizes individual privacy. In particular, all known mechanisms try to minimize information loss and such an attempt provides a loophole for attacks. We call such an attack a minimality attack. In this paper, we introduce a model called m-confidentiality which deals with minimality attacks, and propose a feasible solution. Our experiments show that minimality attacks are practical concerns on real datasets and that our algorithm can prevent such attacks with very little overhead and information loss.

273 citations


Journal ArticleDOI
TL;DR: This paper derives and analyzes distributed state estimators of dynamical stochastic processes, whereby the low communication cost is effected by requiring the transmission of a single bit per observation.
Abstract: When dealing with decentralized estimation, it is important to reduce the cost of communicating the distributed observations, a problem receiving revived interest in the context of wireless sensor networks. In this paper, we derive and analyze distributed state estimators of dynamical stochastic processes, whereby the low communication cost is effected by requiring the transmission of a single bit per observation. Following a Kalman filtering (KF) approach, we develop recursive algorithms for distributed state estimation based on the sign of innovations (SOI). Even though SOI-KF can afford minimal communication overhead, we prove that in terms of performance and complexity it comes very close to the clairvoyant KF which is based on the analog-amplitude observations. Reinforcing our conclusions, we show that the SOI-KF applied to distributed target tracking based on distance-only observations yields accurate estimates at low communication cost.
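The sign-of-innovations idea can be sketched for a scalar random-walk state: the sensor transmits one bit, the sign of the innovation, and the filter applies a quantized correction. The sqrt(2/pi) factor below is the one associated with sign-quantizing a Gaussian innovation; the noise values are invented for illustration, and this scalar toy is not the paper's multivariate formulation.

```python
import math, random

def soi_kf_track(steps=200, q=0.01, r=0.25, seed=1):
    """Scalar sign-of-innovations KF sketch: the sensor sends only
    sign(y - prediction), i.e. one bit per observation."""
    rng = random.Random(seed)
    x_true, x_hat, p = 0.0, 0.0, 1.0
    errors = []
    for _ in range(steps):
        x_true += rng.gauss(0, math.sqrt(q))       # random-walk state
        y = x_true + rng.gauss(0, math.sqrt(r))    # sensor observation
        p += q                                     # time update of variance
        s = 1.0 if y >= x_hat else -1.0            # the single transmitted bit
        gain = math.sqrt(2 / math.pi) * p / math.sqrt(p + r)
        x_hat += gain * s                          # quantized correction
        p -= (2 / math.pi) * p * p / (p + r)       # variance update
        errors.append(abs(x_true - x_hat))
    return sum(errors[-50:]) / 50                  # steady-state mean error

avg_err = soi_kf_track()
```

Despite receiving only one bit per step, the estimate stays within a fraction of the observation noise of the true state, which is the qualitative point of the SOI-KF comparison with the clairvoyant filter.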

Proceedings ArticleDOI
01 Jan 2006
TL;DR: This work proposes a distributed, cluster-based anomaly detection algorithm that achieves accuracy comparable to that of a centralized scheme with a significant reduction in communication overhead.
Abstract: Identifying misbehaviors is an important challenge for monitoring, fault diagnosis and intrusion detection in wireless sensor networks. A key problem is how to minimise the communication overhead and energy consumption in the network when identifying misbehaviors. Our approach to this problem is based on a distributed, cluster-based anomaly detection algorithm. We minimise the communication overhead by clustering the sensor measurements and merging clusters before sending a description of the clusters to the other nodes. In order to evaluate our distributed scheme, we implemented our algorithm in a simulation based on the sensor data gathered from the Great Duck Island project. We demonstrate that our scheme achieves accuracy comparable to that of a centralised scheme with a significant reduction in communication overhead.

Proceedings ArticleDOI
31 Oct 2006
TL;DR: It is shown that run-time dynamic linking is an effective method for reprogramming even resource constrained wireless sensor nodes, and a combination of native code and virtual machine code provides good energy efficiency.
Abstract: From experience with wireless sensor networks it has become apparent that dynamic reprogramming of the sensor nodes is a useful feature. The resource constraints in terms of energy, memory, and processing power make sensor network reprogramming a challenging task. Many different mechanisms for reprogramming sensor nodes have been developed, ranging from full image replacement to virtual machines. We have implemented an in-situ run-time dynamic linker and loader that use the standard ELF object file format. We show that run-time dynamic linking is an effective method for reprogramming even resource constrained wireless sensor nodes. To evaluate our dynamic linking mechanism we have implemented an application-specific virtual machine and a Java virtual machine and compare the energy cost of the different linking and execution models. We measure the energy consumption and execution time overhead on real hardware to quantify the energy costs for dynamic linking. Our results suggest that while in general the overhead of a virtual machine is high, a combination of native code and virtual machine code provides good energy efficiency. Dynamic run-time linking can be used to update the native code, even in heterogeneous networks.

Proceedings ArticleDOI
14 May 2006
TL;DR: This paper derives and analyzes distributed state estimators of dynamical stochastic processes, whereby the low communication cost is effected by requiring the transmission of a single bit per observation.
Abstract: We derive and analyze distributed state estimators of dynamical stochastic processes, whereby low communication cost is effected by requiring the transmission of a single bit per observation. Following a Kalman filtering (KF) approach, we develop recursive algorithms for distributed state estimation based on the sign of innovations (SOI). Even though SOI-KF can afford minimal communication overhead, we prove that in terms of performance and complexity it comes very close to the clairvoyant KF which is based on the analog-amplitude observations. Reinforcing our conclusions, we show that the SOI-KF applied to distributed target tracking based on distance-only observations yields accurate estimates at low communication cost.

Journal ArticleDOI
TL;DR: This paper proposes algorithms that continuously monitor the incoming data and maintain the skyline incrementally, and utilizes several interesting properties of stream skylines to improve space/time efficiency by expunging data from the system as early as possible (i.e., before their expiration).
Abstract: The skyline of a multidimensional data set contains the "best" tuples according to any preference function that is monotonic on each dimension. Although skyline computation has received considerable attention in conventional databases, the existing algorithms are inapplicable to stream applications because 1) they assume static data that are stored in the disk (rather than continuously arriving/expiring), 2) they focus on "one-time" execution that returns a single skyline (in contrast to constantly tracking skyline changes), and 3) they aim at reducing the I/O overhead (as opposed to minimizing the CPU-cost and main-memory consumption). This paper studies skyline computation in stream environments, where query processing takes into account only a "sliding window" covering the most recent tuples. We propose algorithms that continuously monitor the incoming data and maintain the skyline incrementally. Our techniques utilize several interesting properties of stream skylines to improve space/time efficiency by expunging data from the system as early as possible (i.e., before their expiration). Furthermore, we analyze the asymptotical performance of the proposed solutions, and evaluate their efficiency with extensive experiments.
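The early-expunging idea can be sketched directly: a tuple dominated by a newer arrival can be discarded immediately, because the dominator expires later and masks it for the rest of its lifetime. The class below is an illustrative toy with invented names, not the paper's algorithms.

```python
from collections import deque

def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly
    better in at least one (smaller is better here)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

class StreamSkyline:
    """Sliding-window skyline sketch. A newly arrived tuple expunges older
    dominated tuples before their expiration; a new tuple dominated by an
    older one must still be kept, since it may enter the skyline when the
    older dominator expires."""
    def __init__(self, window):
        self.window = window
        self.buf = deque()    # (timestamp, tuple); dominated-by-newer removed

    def insert(self, t, point):
        # drop tuples that have slid out of the window
        while self.buf and self.buf[0][0] <= t - self.window:
            self.buf.popleft()
        # expunge older tuples dominated by the newcomer (early eviction)
        self.buf = deque((ts, q) for ts, q in self.buf
                         if not dominates(point, q))
        self.buf.append((t, point))

    def skyline(self):
        pts = [q for _, q in self.buf]
        return [p for p in pts
                if not any(dominates(q, p) for q in pts if q != p)]

sk = StreamSkyline(window=10)
for t, p in [(0, (5, 5)), (1, (3, 6)), (2, (2, 2))]:
    sk.insert(t, p)
```

After the third insert, both earlier tuples are gone from memory, not just from the skyline, which is the space saving the abstract refers to.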

Patent
05 Jun 2006
TL;DR: In this article, a computer system may be configured to dynamically select a memory virtualization and corresponding virtual-to-physical address translation technique during execution of an application and to dynamically employ the selected technique in place of a current technique without re-initializing the application.
Abstract: A computer system may be configured to dynamically select a memory virtualization and corresponding virtual-to-physical address translation technique during execution of an application and to dynamically employ the selected technique in place of a current technique without re-initializing the application. The computer system may be configured to determine that a current address translation technique incurs a high overhead for the application's current workload and may be configured to select a different technique dependent on various performance criteria and/or a user policy. Dynamically employing the selected technique may include reorganizing a memory, reorganizing a translation table, allocating a different block of memory to the application, changing a page or segment size, or moving to or from a page-based, segment-based, or function-based address translation technique. A selected translation technique may be dynamically employed for the application independent of a translation technique employed for a different application.

Journal ArticleDOI
TL;DR: An overview of DDM applications and algorithms for P2P environments is offered, focusing particularly on local algorithms that perform data analysis by using computing primitives with limited communication overhead.
Abstract: Peer-to-peer (P2P) networks are gaining popularity in many applications such as file sharing, e-commerce, and social networking, many of which deal with rich, distributed data sources that can benefit from data mining. P2P networks are, in fact, well-suited to distributed data mining (DDM), which deals with the problem of data analysis in environments with distributed data, computing nodes, and users. This article offers an overview of DDM applications and algorithms for P2P environments, focusing particularly on local algorithms that perform data analysis by using computing primitives with limited communication overhead. The authors describe both exact and approximate local P2P data mining algorithms that work in a decentralized and communication-efficient manner.

Proceedings ArticleDOI
27 Jun 2006
TL;DR: This paper proposes to address the problem of thresholded counts by setting local thresholds at each monitoring node and initiating communication only when the locally observed data exceeds these local thresholds, and explores algorithms in two categories: static and adaptive thresholds.
Abstract: Monitoring is an issue of primary concern in current and next generation networked systems. For example, the objective of sensor networks is to monitor their surroundings for a variety of different applications like atmospheric conditions, wildlife behavior, and troop movements among others. Similarly, monitoring in data networks is critical not only for accounting and management, but also for detecting anomalies and attacks. Such monitoring applications are inherently continuous and distributed, and must be designed to minimize the communication overhead that they introduce. In this context we introduce and study a fundamental class of problems called "thresholded counts" where we must return the aggregate frequency count of an event that is continuously monitored by distributed nodes with a user-specified accuracy whenever the actual count exceeds a given threshold value. In this paper we propose to address the problem of thresholded counts by setting local thresholds at each monitoring node and initiating communication only when the locally observed data exceeds these local thresholds. We explore algorithms in two categories: static and adaptive thresholds. In the static case, we consider thresholds based on a linear combination of two alternate strategies, and show that there exists an optimal blend of the two strategies that results in minimum communication overhead. We further show that this optimal blend can be found using a steepest descent search. In the adaptive case, we propose algorithms that adjust the local thresholds based on the observed distributions of updated information. We use extensive simulations not only to verify the accuracy of our algorithms and validate our theoretical results, but also to evaluate the performance of our algorithms. We find that both approaches yield significant savings over the naive approach of centralized processing.
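The local-threshold mechanism can be sketched as follows: each node reports only when its count has grown by a full local threshold since its last report, so the coordinator's aggregate lags the true count by at most nodes x threshold while communication drops by the threshold factor. This is a minimal static-threshold sketch, not the paper's optimal blended strategy.

```python
def monitor(updates, n_nodes, local_threshold):
    """Static local-threshold sketch for thresholded counts.

    Each node reports its local count only when it has increased by at
    least local_threshold since the last report, so the coordinator's
    estimate is stale by less than n_nodes * local_threshold."""
    local = [0] * n_nodes       # true per-node counts
    reported = [0] * n_nodes    # last value each node sent the coordinator
    messages = 0
    for node, inc in updates:
        local[node] += inc
        if local[node] - reported[node] >= local_threshold:
            reported[node] = local[node]   # send the updated local count
            messages += 1
    return sum(reported), messages

updates = [(i % 4, 1) for i in range(400)]   # 400 unit increments, 4 nodes
estimate, messages = monitor(updates, 4, local_threshold=10)
```

Here 400 updates cost only 40 messages, a 10x saving over forwarding every update, at the price of a bounded undercount between reports.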

Proceedings ArticleDOI
25 Sep 2006
TL;DR: A scalable solution that uses a delay-based overlay structure to organize nodes based on their proximity to one another, using a small number of delay experiments, which results in effective clustering with acceptable overhead.
Abstract: Computational grids have not scaled effectively due to administrative hurdles to resource and user participation. Most production grids are essentially multi-site supercomputer centers, rather than truly open and heterogeneous sets of resources that can join and leave dynamically, and that can provide support for an equally dynamic set of users. Large-scale grids containing individual resources with more autonomy about when and how they join and leave will require self-organizing grid middleware services that do not require centralized administrative control. This paper considers one such service, namely the dynamic discovery of high-performance variable-size clusters of grid nodes. A brute force approach to the problem of identifying these "ad-hoc clusters" would require excessive overhead in terms of both message exchange and computation. Therefore, we propose a scalable solution that uses a delay-based overlay structure to organize nodes based on their proximity to one another, using a small number of delay experiments. This overlay can then be used to provide a variable-size set of promising candidate nodes that can then be used as a cluster, or tested further to improve the selection. Simulation results show that this approach results in effective clustering with acceptable overhead.

Proceedings ArticleDOI
03 Apr 2006
TL;DR: The core of SUBSKY is a transformation that converts multi-dimensional data to 1D values, and enables several effective pruning heuristics, and can be implemented in any relational database.
Abstract: Given a set of multi-dimensional points, the skyline contains the best points according to any preference function that is monotone on all axes. In practice, applications that require skyline analysis usually provide numerous candidate attributes, and various users depending on their interests may issue queries regarding different (small) subsets of the dimensions. Formally, given a relation with a large number (e.g., ≥ 10) of attributes, a query aims at finding the skyline in an arbitrary subspace with a low dimensionality (e.g., 2). The existing algorithms do not support subspace skyline retrieval efficiently because they (i) require scanning the entire database at least once, or (ii) are optimized for one particular subspace but incur significant overhead for other subspaces. In this paper, we propose a technique SUBSKY which settles the problem using a single B-tree, and can be implemented in any relational database. The core of SUBSKY is a transformation that converts multi-dimensional data to 1D values, and enables several effective pruning heuristics. Extensive experiments with real data confirm that SUBSKY outperforms alternative approaches significantly in both efficiency and scalability.

Proceedings ArticleDOI
25 Oct 2006
TL;DR: This paper shows that Network Coding is practical in a P2P setting since it incurs little overhead, both in terms of CPU processing and IO activity, and it results in smooth, fast downloads, and efficient server utilization.
Abstract: In this paper we present the first implementation of a P2P content distribution system that uses Network Coding. Using results from live trials with several hundred nodes, we provide a detailed performance analysis of such a P2P system. In contrast to prior work, which mainly relies on monitoring P2P systems at particular locations, we are able to provide performance results from a variety of novel angles by monitoring all components in the P2P distribution. In particular, we show that Network Coding is practical in a P2P setting since it incurs little overhead, both in terms of CPU processing and I/O activity, and it results in smooth, fast downloads, and efficient server utilization. We also study the importance of topology construction algorithms in real scenarios and study the effect of peers behind NATs and firewalls, showing that the system is surprisingly robust to a large number of unreachable peers. Finally, we present performance results related to verifying network encoded blocks on-the-fly using special security primitives called Secure-Random-Checksums.
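The encoding and decoding core of network coding is compact enough to sketch. Over GF(2) a coded block is simply the XOR of a random subset of source blocks, and a receiver decodes once its coefficient vectors reach full rank. This toy operates on small integers and illustrates the principle only; it is not the paper's implementation (which, among other things, verifies blocks with Secure-Random-Checksums).

```python
import random

def encode(blocks, n_packets, rng):
    """Random linear network coding over GF(2): each coded packet is
    (coefficient vector, XOR of the chosen source blocks). Real systems
    typically use a larger field such as GF(2^8)."""
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            coeffs[rng.randrange(k)] = 1   # avoid a useless all-zero packet
        payload = 0
        for c, b in zip(coeffs, blocks):
            if c:
                payload ^= b
        packets.append((coeffs, payload))
    return packets

def decode(packets, k):
    """Gauss-Jordan elimination over GF(2); returns the source blocks,
    or None if the received packets do not have full rank."""
    rows = [(c[:], p) for c, p in packets]
    for col in range(k):
        pivot = next((i for i in range(col, len(rows)) if rows[i][0][col]), None)
        if pivot is None:
            return None                    # rank deficient: need more packets
        rows[col], rows[pivot] = rows[pivot], rows[col]
        pc, pp = rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rc, rp = rows[i]
                rows[i] = ([a ^ b for a, b in zip(rc, pc)], rp ^ pp)
    return [rows[i][1] for i in range(k)]
```

Because any sufficiently large set of independent coded packets suffices, peers never need to chase one specific missing block, which is the scheduling advantage network coding brings to P2P distribution.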

01 Jan 2006
TL;DR: This work considers the design of low-overhead, obstruction-free software transactional memory for non-garbage-collected languages and eliminates dynamic allocation of transactional metadata and co-locates data that are separate in other systems, thereby reducing the expected number of cache misses on the common-case code path.
Abstract: Recent years have seen the development of several different systems for software transactional memory (STM). Most either employ locks in the underlying implementation or depend on thread-safe general-purpose garbage collection to collect stale data and metadata. We consider the design of low-overhead, obstruction-free software transactional memory for non-garbage-collected languages. Our design eliminates dynamic allocation of transactional metadata and co-locates data that are separate in other systems, thereby reducing the expected number of cache misses on the common-case code path, while preserving nonblocking progress and requiring no atomic instructions other than single-word load, store, and compare-and-swap (or load-linked/store-conditional). We also employ a simple, epoch-based storage management system and introduce a novel conservative mechanism to make reader transactions visible to writers without inducing additional metadata copying or dynamic allocation. Experimental results show throughput significantly higher than that of existing nonblocking STM systems, and highlight significant, application-specific differences among conflict detection and validation strategies.

Journal ArticleDOI
TL;DR: In this article, two methods to model overhead distribution lines' failure rates are presented, based on a Poisson regression model and a Bayesian network model, which uses conditional probabilities of failures given different weather states.
Abstract: Weather is one of the major factors affecting the reliability of power distribution systems. An effective method to model weather's impact on overhead distribution lines' failure rates will enable utilities to compare their systems' reliabilities under different weather conditions. This will allow them to make the right decisions to obtain the best operation and maintenance plan to reduce impacts of weather on reliabilities. Two methods to model overhead distribution lines' failure rates are presented in this paper. The first is based on a Poisson regression model, and it captures the counting nature of failure events on overhead distribution lines. The second is a Bayesian network model, which uses conditional probabilities of failures given different weather states. Both methods are used to predict the yearly weather-related failure events on overhead lines. This is followed by a Monte Carlo analysis to determine prediction bounds. The results obtained by these models are compared to evaluate their salient features.
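The conditional-on-weather idea reduces, in the simplest case, to a per-state Poisson rate estimate: failures divided by exposure time in each weather state, which is the Poisson maximum-likelihood estimate for a purely categorical covariate. All numbers below are invented for illustration and are not the paper's data or models.

```python
def failure_rates_by_weather(observations):
    """Per-weather-state failure-rate estimates: for each state, total
    failures divided by total exposure hours (the Poisson MLE when the
    only covariate is the weather category)."""
    totals = {}
    for state, hours, failures in observations:
        f, h = totals.get(state, (0, 0.0))
        totals[state] = (f + failures, h + hours)
    return {state: f / h for state, (f, h) in totals.items()}

# Hypothetical history: (weather state, exposure hours, observed failures)
history = [("normal", 8000.0, 4), ("storm", 600.0, 6),
           ("normal", 8200.0, 5), ("storm", 500.0, 5)]
rates = failure_rates_by_weather(history)        # failures per hour by state

# Predicted yearly failures given a forecast of hours per weather state
forecast_hours = {"normal": 8160.0, "storm": 600.0}
expected_failures = sum(rates[s] * h for s, h in forecast_hours.items())
```

Even this toy shows the effect the paper models: the storm-hour rate is more than an order of magnitude above the normal-weather rate, so a small number of storm hours dominates the yearly prediction.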

Proceedings ArticleDOI
26 Jun 2006
TL;DR: This paper demonstrates that TaintTrace is effective in protecting against various attacks while maintaining a modest slowdown of 5.5 times, offering significant improvements over similar tools.
Abstract: TaintTrace is a high performance flow tracing tool that protects systems against security exploits. It is based on dynamic execution binary rewriting empowering our tool with fine-grained monitoring of system activities such as the tracking of the usage and propagation of data originated from the network. The challenge lies in minimizing the run-time overhead of the tool. TaintTrace uses a number of techniques such as direct memory mapping to optimize performance. In this paper, we demonstrate that TaintTrace is effective in protecting against various attacks while maintaining a modest slowdown of 5.5 times, offering significant improvements over similar tools.
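The core mechanism such tools share is shadow memory: every tracked address carries a taint bit, copies and arithmetic propagate taint from sources to destinations, and security-sensitive uses of tainted data (e.g. a jump target) are flagged. The sketch below illustrates that general technique only; it is not TaintTrace's implementation, which rewrites binaries and uses direct memory mapping for the shadow state.

```python
# Generic shadow-memory taint-tracking sketch (not TaintTrace itself).
# Addresses and "instructions" are simulated; real tools instrument
# machine code at runtime.

shadow = {}                          # address -> tainted?

def load_from_network(addr):
    shadow[addr] = True              # network input is untrusted -> tainted

def mov(dst, src):
    shadow[dst] = shadow.get(src, False)          # taint follows copies

def add(dst, src1, src2):
    shadow[dst] = shadow.get(src1, False) or shadow.get(src2, False)

def check_jump_target(addr):
    # Control flow derived from tainted data signals a likely exploit.
    if shadow.get(addr, False):
        raise RuntimeError("tainted jump target")

load_from_network(0x1000)
mov(0x2000, 0x1000)                  # taint propagates through the copy
add(0x3000, 0x2000, 0x4000)          # and through arithmetic
detected = False
try:
    check_jump_target(0x3000)
except RuntimeError:
    detected = True
assert detected
```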

Proceedings ArticleDOI
24 Jul 2006
TL;DR: A formal specification of the protocol is defined and an efficient scheme for the implementation of elasticity that involves no datapath overhead is presented, opening up opportunities for microarchitectural design.
Abstract: A simple protocol for latency-insensitive design is presented. The main features of the protocol are the efficient implementation of elastic communication channels and the automatable design methodology. With this approach, fine-granularity elasticity can be introduced at the level of functional units (e.g. ALUs, memories). A formal specification of the protocol is defined and an efficient scheme for the implementation of elasticity that involves no datapath overhead is presented. The opportunities this protocol opens for microarchitectural design are discussed.
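Elastic channels of this kind are typically built around a valid/stop handshake: the sender asserts `valid` when data is present, the receiver asserts `stop` for back-pressure, and a two-slot buffer preserves full throughput under momentary stalls. The sketch below is a behavioral simplification under those assumptions, not the paper's actual protocol or circuit implementation.

```python
# Behavioral sketch of an elastic channel with valid/stop back-pressure.
# A two-slot buffer lets the sender keep streaming while the receiver
# briefly stalls; hardware versions implement this with latches.

class ElasticBuffer:
    CAPACITY = 2                     # two slots avoid throughput loss

    def __init__(self):
        self.slots = []

    @property
    def stop(self):                  # back-pressure toward the sender
        return len(self.slots) >= self.CAPACITY

    def push(self, data):            # sender side: legal only when not stopped
        assert not self.stop
        self.slots.append(data)

    @property
    def valid(self):                 # data available on the receiver side
        return bool(self.slots)

    def pop(self):
        assert self.valid
        return self.slots.pop(0)

ch = ElasticBuffer()
ch.push("a")
ch.push("b")
assert ch.stop                       # buffer full: sender must stall
assert ch.pop() == "a"
assert not ch.stop                   # a slot freed: transfers resume
```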

Proceedings ArticleDOI
05 Nov 2006
TL;DR: The experimental results show that the proposed adaptive mechanism could provide significant performance improvement over the well-known coarse-grained management mechanism NFTL (NAND flash translation layer) over realistic workloads.
Abstract: While the capacity of flash-memory storage systems keeps increasing significantly, effective and efficient management of flash-memory space has become a critical design issue. Different granularities in space management impose different management costs and mapping efficiency. In this paper, we explore an address translation mechanism that can dynamically and adaptively switch between two granularities in the mapping of logical block addresses into physical block addresses in flash memory management. The objective is to provide good performance in address mapping and space utilization and, at the same time, to keep the memory space requirements and the garbage collection overhead under proper management. The experimental results show that the proposed adaptive mechanism could provide significant performance improvement over the well-known coarse-grained management mechanism NFTL (NAND Flash Translation Layer) over realistic workloads.
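The two granularities being traded off can be contrasted concretely: block-level mapping keeps one table entry per logical block (small table, but every page in the block must move together), while page-level mapping keeps one entry per page (flexible, but a much larger table). The sketch below only contrasts the two translations; the adaptive switching policy and garbage collection of the paper are not modeled, and all names are illustrative.

```python
# Illustrative contrast of coarse (block-level) vs. fine (page-level)
# address translation in a flash translation layer. The paper's scheme
# switches between these adaptively; that policy is not modeled here.

PAGES_PER_BLOCK = 64

class BlockMap:
    """One table entry per logical block: small table, rigid placement."""
    def __init__(self):
        self.table = {}              # logical block -> physical block

    def translate(self, lpn):
        phys_block = self.table[lpn // PAGES_PER_BLOCK]
        return phys_block * PAGES_PER_BLOCK + lpn % PAGES_PER_BLOCK

class PageMap:
    """One table entry per logical page: large table, flexible placement."""
    def __init__(self):
        self.table = {}              # logical page -> physical page

    def translate(self, lpn):
        return self.table[lpn]

bmap = BlockMap()
bmap.table[0] = 7                    # logical block 0 -> physical block 7
assert bmap.translate(5) == 7 * PAGES_PER_BLOCK + 5

pmap = PageMap()
pmap.table[5] = 123                  # page 5 can live anywhere
assert pmap.translate(5) == 123
```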


Proceedings ArticleDOI
01 Jan 2006
TL;DR: An algorithm is proposed for this collaborative graph discovery problem, and the inferred topology is shown to greatly improve the efficiency of mobility forwarding, achieving end-to-end delays comparable to those of epidemic approaches while requiring significantly lower transmission overhead.
Abstract: Mobile wireless ad hoc and sensor networks can be permanently partitioned in many interesting scenarios. This implies that instantaneous end-to-end routes do not exist. Nevertheless, when nodes are mobile, it is possible to forward messages to their destinations through mobility. We observe that in many practical settings, spatial node distributions are very heterogeneous and possess concentration points of high node density. The locations of these concentration points and the flow of nodes between them tend to be stable over time. This motivates a novel mobility model, where nodes move randomly between stable islands of connectivity, where they are likely to encounter other nodes, while connectivity is very limited outside these islands. Our goal is to exploit such a stable topology of concentration points by developing algorithms that allow nodes to collaborate to discover this topology and to use it for efficient mobility forwarding. We achieve this without any external signals to nodes, such as geographic positions or fixed beacons; instead, we rely only on the evolution of the set of neighbors of each node. We propose an algorithm for this collaborative graph discovery problem and show that the inferred topology can greatly improve the efficiency of mobility forwarding. Using both synthetic and data-driven mobility models we show through simulations that our approach achieves end-to-end delays comparable to those of epidemic approaches, while requiring a significantly lower transmission overhead.

Proceedings ArticleDOI
17 Jul 2006
TL;DR: A multi-hop routing protocol called MURU finds robust paths in urban VANETs, achieving a high end-to-end packet delivery ratio with low overhead; the design is justified through theoretical analysis and the protocol is evaluated with extensive simulations.
Abstract: Vehicular ad hoc networks (VANETs) are going to be an important communication infrastructure in our life. Because of high mobility and frequent link disconnection, it becomes quite challenging to establish a robust multi-hop path that helps packet delivery from the source to the destination. This paper presents a multi-hop routing protocol, called MURU, that is able to find robust paths in urban VANETs to achieve high end-to-end packet delivery ratio with low overhead. MURU tries to minimize the probability of path breakage by exploiting mobility information of each vehicle in VANETs. A new metric called expected disconnection degree (EDD) is used to select the most robust path from the source to the destination. MURU is fully distributed and does not incur much overhead, which makes MURU highly scalable for VANETs. The design is sufficiently justified through theoretical analysis and the protocol is evaluated with extensive simulations. Simulation results demonstrate that MURU significantly outperforms existing ad hoc routing protocols in terms of packet delivery ratio, packet delay and control overhead.
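The path-selection idea can be sketched with a simplified stand-in for the EDD metric: assign each link a breakage probability (in MURU this is predicted from vehicle mobility) and prefer the path whose links are jointly least likely to break. The per-link probabilities and the combining rule below are invented for illustration; the actual EDD formula in the paper differs.

```python
# Simplified robust-path selection in the spirit of MURU's EDD metric.
# Per-link breakage probabilities are made up; MURU derives its metric
# from vehicle position/velocity prediction.

def path_break_prob(link_break_probs):
    # A path survives only if every link survives (independence assumed).
    survive = 1.0
    for p in link_break_probs:
        survive *= (1.0 - p)
    return 1.0 - survive

def most_robust(paths):
    """paths: name -> list of per-link breakage probabilities."""
    return min(paths, key=lambda name: path_break_prob(paths[name]))

paths = {
    "short_but_fragile": [0.6, 0.5],       # few hops, fast relative motion
    "longer_but_stable": [0.1, 0.1, 0.1],  # more hops, co-moving vehicles
}
assert most_robust(paths) == "longer_but_stable"
```

This favors the three-hop path (joint breakage ~0.27) over the two-hop one (~0.8), mirroring why hop count alone is a poor metric in VANETs.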

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a technique to convert a large class of existing honest-verifier zero-knowledge protocols into ones with stronger properties in the common reference string model, such as non-malleability and universal composability.
Abstract: Recently there has been an interest in zero-knowledge protocols with stronger properties, such as concurrency, simulation soundness, non-malleability, and universal composability. In this paper we show a novel technique to convert a large class of existing honest-verifier zero-knowledge protocols into ones with these stronger properties in the common reference string model. More precisely, our technique utilizes a signature scheme existentially unforgeable against adaptive chosen-message attacks, and transforms any Σ-protocol (which is honest-verifier zero-knowledge) into a simulation sound concurrent zero-knowledge protocol. We also introduce Ω-protocols, a variant of Σ-protocols for which our technique further achieves the properties of non-malleability and/or universal composability. In addition to its conceptual simplicity, a main advantage of this new technique over previous ones is that it avoids the Cook-Levin theorem, which tends to be rather inefficient. Indeed, our technique allows for very efficient instantiation based on the security of some efficient signature schemes and standard number-theoretic assumptions. For instance, one instantiation of our technique yields a universally composable zero-knowledge protocol under the Strong RSA assumption, incurring an overhead of a small constant number of exponentiations, plus the generation of two signatures.
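A concrete Σ-protocol the transformation could be applied to is Schnorr's protocol for proving knowledge of a discrete logarithm, which follows the commitment/challenge/response pattern and is honest-verifier zero-knowledge. The sketch below runs it with toy parameters (p = 1019, q = 509, g = 4, where g has order q mod p); the paper's technique would wrap such a protocol with a signature scheme, which is not shown here.

```python
# Schnorr's Sigma-protocol for knowledge of x with h = g^x mod p.
# Toy parameters only: q = 509 is prime, q | p - 1, and g = 4 has order q.

import random

p, q, g = 1019, 509, 4
x = 123                          # prover's secret
h = pow(g, x, p)                 # public value h = g^x mod p

# 1. Commitment: prover picks random r and sends a = g^r.
r = random.randrange(q)
a = pow(g, r, p)

# 2. Challenge: verifier sends random e.
e = random.randrange(q)

# 3. Response: prover sends z = r + e*x mod q.
z = (r + e * x) % q

# Verification: g^z == a * h^e (mod p).
assert pow(g, z, p) == (a * pow(h, e, p)) % p
```

Honest-verifier zero knowledge follows because a simulator can pick z and e first and solve for a; the transformation in the paper is what lifts this to simulation-sound concurrent zero knowledge.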