Showing papers on "Communication complexity published in 2005"


Journal ArticleDOI
TL;DR: Data Streams: Algorithms and Applications surveys the emerging area of algorithms for processing data streams and associated applications, which rely on metric embeddings, pseudo-random computations, sparse approximation theory and communication complexity.
Abstract: In the data stream scenario, input arrives very rapidly and there is limited memory to store the input. Algorithms have to work with one or few passes over the data, space less than linear in the input size or time significantly less than the input size. In the past few years, a new theory has emerged for reasoning about algorithms that work within these constraints on space, time, and number of passes. Some of the methods rely on metric embeddings, pseudo-random computations, sparse approximation theory and communication complexity. The applications for this scenario include IP network traffic analysis, mining text message streams and processing massive data sets in general. Researchers in Theoretical Computer Science, Databases, IP Networking and Computer Systems are working on the data stream challenges. This article is an overview and survey of data stream algorithmics and is an updated version of [1].
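
For a concrete feel of the one-pass, sublinear-space regime the survey describes, here is a minimal sketch of the classic Misra-Gries frequent-items summary, a standard example of the genre rather than code from the survey itself:

```python
def misra_gries(stream, k):
    """One-pass frequent-items sketch using at most k-1 counters.

    Any item occurring more than len(stream)/k times is guaranteed
    to survive among the returned candidates."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # Decrement every counter; drop those that hit zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

print(misra_gries("abracadabra", k=3))  # 'a' (5 of 11 items) must survive
```

The sketch uses O(k) counters regardless of the stream length, which is exactly the "space less than linear in the input size" constraint described above.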

1,598 citations


Book ChapterDOI
11 Jul 2005
TL;DR: A single-database private information retrieval (PIR) scheme with communication complexity ${\mathcal O}(k+d)$, where k ≥ log n is a security parameter that depends on the database size n and d is the bit-length of the retrieved database block.
Abstract: We present a single-database private information retrieval (PIR) scheme with communication complexity ${\mathcal O}(k+d)$, where k ≥ log n is a security parameter that depends on the database size n and d is the bit-length of the retrieved database block. This communication complexity is better asymptotically than previous single-database PIR schemes. The scheme also gives improved performance for practical parameter settings whether the user is retrieving a single bit or very large blocks. For large blocks, our scheme achieves a constant “rate” (e.g., 0.2), even when the user-side communication is very low (e.g., two 1024-bit numbers). Our scheme and security analysis is presented using general groups with hidden smooth subgroups; the scheme can be instantiated using composite moduli, in which case the security of our scheme is based on a simple variant of the “Φ-hiding” assumption by Cachin, Micali and Stadler [2].

353 citations


Journal ArticleDOI
TL;DR: In this paper, the authors define complexity as the measure of uncertainty in achieving the functional requirements (FRs) of a system within their specified design range, which leads to the existence of four different types of complexity: time-independent real complexity, time-independent imaginary complexity, time-dependent combinatorial complexity, and time-dependent periodic complexity.

187 citations


Journal ArticleDOI
DaeHo Seo, Akif Ali, Won-Taek Lim, Nauman Rafique, Mithuna Thottethodi
01 May 2005
TL;DR: The major contribution of this paper is the design of an oblivious routing algorithm - O1TURN - with provable near-optimal worst-case throughput, good average-case throughput, low design complexity and minimal number of network hops for 2D-mesh networks, thus satisfying all the stated design goals.
Abstract: Minimizing latency and maximizing throughput are important goals in the design of routing algorithms for interconnection networks. Ideally, we would like a routing algorithm to (a) route packets using the minimal number of hops to reduce latency and preserve communication locality, (b) deliver good worst-case and average-case throughput and (c) enable low-complexity (and hence, low latency) router implementation. In this paper, we focus on routing algorithms for an important class of interconnection networks: two-dimensional (2D) mesh networks. Existing routing algorithms for mesh networks fail to satisfy one or more of the design goals mentioned above. Variously, the routing algorithms suffer from poor worst-case throughput (ROMM [13], DOR [23]), poor latency due to increased packet hops (VALIANT [31]) or increased latency due to hardware complexity (minimal-adaptive [7, 30]). The major contribution of this paper is the design of an oblivious routing algorithm - O1TURN - with provable near-optimal worst-case throughput, good average-case throughput, low design complexity and minimal number of network hops for 2D-mesh networks, thus satisfying all the stated design goals. O1TURN offers optimal worst-case throughput when the network radix (k in a k×k network) is even. When the network radix is odd, O1TURN is within a 1/k^2 factor of optimal worst-case throughput. O1TURN achieves superior or comparable average-case throughput with global traffic as well as local traffic. For example, O1TURN achieves 18.8%, 0.7% and 13.6% higher average-case throughput than DOR, ROMM and VALIANT routing, respectively, when averaged over one million random traffic patterns on an 8×8 network. Finally, we demonstrate that O1TURN is well suited for a partitioned router implementation that is of similar delay complexity as a simple dimension-ordered router. Our implementation incurs a marginal increase in switch arbitration delay that is completely hidden in pipelined routers as it is not on the clock-critical path.
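
The selection rule behind O1TURN is simple: each packet follows one of the two dimension-ordered minimal paths (X-then-Y or Y-then-X), chosen uniformly at random, so every route makes at most one turn. A minimal sketch of that rule, with the hop-list representation being our own illustration:

```python
import random

def o1turn_route(src, dst):
    """Return a hop sequence from src to dst on a 2D mesh, picking the
    XY or YX dimension-ordered minimal path with equal probability."""
    (sx, sy), (dx, dy) = src, dst
    step = lambda a, b: [1 if b > a else -1] * abs(b - a)
    x_hops = [(s, 0) for s in step(sx, dx)]  # unit moves along X
    y_hops = [(0, s) for s in step(sy, dy)]  # unit moves along Y
    # O1TURN: one of the two minimal one-turn paths, chosen at random.
    return x_hops + y_hops if random.random() < 0.5 else y_hops + x_hops

print(o1turn_route((0, 0), (2, 3)))  # e.g. X,X then Y,Y,Y, or Y-first
```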

174 citations


Book ChapterDOI
14 Aug 2005
TL;DR: A constant-round protocol for general secure multiparty computation which makes a black-box use of a pseudorandom generator and which withstands an active, adaptive adversary corrupting a minority of the parties.
Abstract: We present a constant-round protocol for general secure multiparty computation which makes a black-box use of a pseudorandom generator. In particular, the protocol does not require expensive zero-knowledge proofs and its communication complexity does not depend on the computational complexity of the underlying cryptographic primitive. Our protocol withstands an active, adaptive adversary corrupting a minority of the parties. Previous constant-round protocols of this type were only known in the semi-honest model or for restricted classes of functionalities.

160 citations


Journal ArticleDOI
TL;DR: New iterative soft-input soft-output (SISO) detection schemes for intersymbol interference (ISI) channels are proposed; computer simulations verify that the SP algorithm converges to a good approximation of the exact marginal APPs of the transmitted symbols if the FG has girth at least 6.
Abstract: In this paper, based on the application of the sum-product (SP) algorithm to factor graphs (FGs) representing the joint a posteriori probability (APP) of the transmitted symbols, we propose new iterative soft-input soft-output (SISO) detection schemes for intersymbol interference (ISI) channels. We have verified by computer simulations that the SP algorithm converges to a good approximation of the exact marginal APPs of the transmitted symbols if the FG has girth at least 6. For ISI channels whose corresponding FG has girth 4, the application of a stretching technique allows us to obtain an equivalent girth-6 graph. For sparse ISI channels, the proposed algorithms have advantages in terms of complexity over optimal detection schemes based on the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm. They also allow a parallel implementation of the receiver and the possibility of a more efficient complexity reduction. The application to joint detection and decoding of low-density parity-check (LDPC) codes is also considered and results are shown for some partial-response magnetic channels. In these cases as well, we show that the proposed algorithms have a limited performance loss with respect to what can be obtained when the optimal "serial" BCJR algorithm is used for detection. Therefore, for their parallel implementation, they represent a favorable alternative to the modified "parallel" BCJR algorithm proposed in the literature for application to magnetic channels.
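
As context for the detection schemes above, a toy sum-product computation on a cycle-free two-variable factor graph; these are the generic textbook message updates, not the paper's ISI detector, and on loopy graphs (girth at least 6, per the paper) the same updates are simply iterated as an approximation:

```python
import numpy as np

# Tiny tree-shaped factor graph: p(x1, x2) proportional to g1(x1) g2(x2) f(x1, x2).
g1 = np.array([0.6, 0.4])           # unary factor on x1
g2 = np.array([0.3, 0.7])           # unary factor on x2
f = np.array([[0.9, 0.1],           # pairwise factor, indexed f[x1, x2]
              [0.2, 0.8]])

# Sum-product: a factor-to-variable message sums the factor against the
# incoming messages from the factor's other variables.
m_x2_to_f = g2                       # x2's only other neighbor is g2
m_f_to_x1 = f @ m_x2_to_f            # sum over x2 of f(x1, x2) * m(x2)

marginal_x1 = g1 * m_f_to_x1
marginal_x1 /= marginal_x1.sum()
print(marginal_x1)                   # exact marginal: this graph has no cycles
```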

158 citations


Proceedings ArticleDOI
12 Dec 2005
TL;DR: This paper proposes a formal model for a network of robotic agents that move and communicate and defines notions of robotic network, control and communication law, coordination task, and time and communication complexity.
Abstract: This paper proposes a formal model for a network of robotic agents that move and communicate. Building on concepts from distributed computation, robotics and control theory, we define notions of robotic network, control and communication law, coordination task, and time and communication complexity. We illustrate our model and compute the proposed complexity measures in the example of a network of locally connected agents on a circle that agree upon a direction of motion and pursue their immediate neighbors.
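
A toy discrete-time rendition of the circular pursuit example; the update rule and gain below are our own illustrative choices rather than the paper's control and communication law. Each agent moves a fixed fraction of the angular gap toward its counterclockwise neighbor, and the inter-agent gaps converge to the uniform spacing 2π/n:

```python
import numpy as np

def circle_pursuit(n=8, steps=200, gain=0.3, seed=0):
    """Each agent repeatedly moves `gain` of the angular gap toward its
    counterclockwise neighbor; returns the final inter-agent gaps."""
    rng = np.random.default_rng(seed)
    theta = np.sort(rng.uniform(0, 2 * np.pi, n))
    for _ in range(steps):
        gap = (np.roll(theta, -1) - theta) % (2 * np.pi)  # gap to neighbor
        theta = (theta + gain * gap) % (2 * np.pi)
    return (np.roll(theta, -1) - theta) % (2 * np.pi)

print(circle_pursuit())  # all gaps approach 2*pi/8 ~ 0.785
```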

124 citations


Journal ArticleDOI
TL;DR: This work addresses the information-theoretic setting for PIR, where the user's privacy should be unconditionally protected against computationally unbounded servers, and presents a general construction, whose abstract components can be instantiated to yield both old and new families of PIR protocols.

112 citations


Proceedings ArticleDOI
15 May 2005
TL;DR: A model of configuration complexity is developed that represents systems as a set of nested containers with configuration controls and derives various metrics that indicate configuration complexity, including execution complexity, parameter complexity, and memory complexity.
Abstract: The complexity of configuring computing systems is a major impediment to the adoption of new information technology (IT) products and greatly increases the cost of IT services. This paper develops a model of configuration complexity and demonstrates its value for a change management system. The model represents systems as a set of nested containers with configuration controls. From this representation, we derive various metrics that indicate configuration complexity, including execution complexity, parameter complexity, and memory complexity. We apply this model to a J2EE-based enterprise application and its associated middleware stack to assess the complexity of the manual configuration process for this application. We then show how an automated change management system can greatly reduce configuration complexity.
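
A hypothetical Python rendering of the nested-container idea; the container names and the simple count below are our own illustration of one of the metrics (parameter complexity), not the paper's exact definitions:

```python
from dataclasses import dataclass, field

@dataclass
class Container:
    """A system element with configuration controls and sub-containers."""
    name: str
    controls: list = field(default_factory=list)   # configuration parameters
    children: list = field(default_factory=list)   # nested containers

def parameter_complexity(c: Container) -> int:
    """Count every control reachable in the container tree."""
    return len(c.controls) + sum(parameter_complexity(k) for k in c.children)

stack = Container("enterprise app", ["context-root"], [
    Container("app server", ["heap-size", "port"], [
        Container("datasource", ["jdbc-url", "user", "password"]),
    ]),
])
print(parameter_complexity(stack))  # 6 parameters to configure by hand
```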

96 citations


Journal ArticleDOI
TL;DR: A new and generic rate-distortion-complexity model is proposed that can generate DIA descriptions for image and video decoding algorithms running on various hardware architectures and explicitly model the complexity involved in decoding a bitstream by a generic receiver.
Abstract: Existing research on Universal Multimedia Access has mainly focused on adapting multimedia to the network characteristics while overlooking the receiver capabilities. Alternatively, part 7 of the MPEG-21 standard entitled Digital Item Adaptation (DIA) defines description tools to guide the multimedia adaptation process based on both the network conditions and the available receiver resources. In this paper, we propose a new and generic rate-distortion-complexity model that can generate such DIA descriptions for image and video decoding algorithms running on various hardware architectures. The novelty of our approach is in virtualizing complexity, i.e., we explicitly model the complexity involved in decoding a bitstream by a generic receiver. This generic complexity is translated dynamically into "real" complexity, which is architecture-specific. The receivers can then negotiate with the media server/proxy the transmission of a bitstream having a desired complexity level based on their resource constraints. Hence, unlike in previous streaming systems, multimedia transmission can be optimized in an integrated rate-distortion-complexity setting by minimizing the incurred distortion under joint rate-complexity constraints.

91 citations


DOI
01 Jan 2005
TL;DR: This PhD thesis focuses on fair exchange protocols and radio frequency identification protocols; to overcome the impossibility of guaranteeing fairness without additional assumptions, it proposes two approaches, the first of which attaches to each participant a guardian angel, a security module conceived by a trustworthy authority whose behavior cannot deviate from the established rules.
Abstract: This PhD thesis focuses on fair exchange protocols and radio frequency identification (RFID) protocols. Fair exchange stems from a daily-life problem: how can two people exchange objects (material or immaterial) fairly, that is, without anyone being hurt in the exchange? More formally, if Alice and Bob hold objects mA and mB respectively, then the exchange is fair if, at the end of the protocol, Alice and Bob have received mB and mA respectively, or neither Alice nor Bob has received the expected information, even partially. Ensuring fairness in an exchange is impossible without introducing additional assumptions. Thus, we propose two approaches to overcome this problem. The first consists in attaching to each person a guardian angel, that is, a security module conceived by a trustworthy authority and whose behavior cannot deviate from the established rules. In such a model, the fairness of the exchange can be ensured with a probability as close to 1 as desired, at the price, however, of a communication-complexity cost. We then use results from distributed algorithms to generalize this approach to n people. Finally, we propose a second approach that consists in no longer considering the exchange in isolation, but placing it back in its context, at the heart of a network, where each member of the pair has a few honest neighbors. In this framework, fairness can rely on these neighbors, who are solicited only in the case of a conflict during the exchange. We then look into radio frequency identification (RFID), which consists in remotely identifying objects or subjects carrying a transponder. The great achievements of radio frequency identification today rest essentially on the willingness to develop low-cost and small-size transponders, which consequently have limited computation and storage capabilities. For this reason, many questions have been raised regarding RFID's potential and limitations, particularly in terms of security and privacy. Since this is a recent problem, the works presented in this document first outline the framework completely by introducing certain basic concepts. In particular, we present and classify threats, we show the link between traceability and the communication model, and we analyze existing RFID protocols. We also present the complexity issues due to key management. We show that the solution proposed by Molnar and Wagner has weaknesses, and we propose another solution based on time-memory trade-offs. Finally, we continue our time-memory trade-off analysis by proposing a method based on checkpoints, which allows detecting false alarms in a probabilistic manner.

Journal ArticleDOI
TL;DR: This paper presents a simple, but effective method of enhancing and exploiting diversity from multiple packet transmissions in systems that employ nonbinary linear modulations such as phase-shift keying (PSK) and quadrature amplitude modulation (QAM).
Abstract: In this paper, we present a simple, but effective method of enhancing and exploiting diversity from multiple packet transmissions in systems that employ nonbinary linear modulations such as phase-shift keying (PSK) and quadrature amplitude modulation (QAM). This diversity improvement results from redesigning the symbol mapping for each packet transmission. By developing a general framework for evaluating the upper bound of the bit error rate (BER) with multiple transmissions, a criterion to obtain optimal symbol mappings is attained. The optimal adaptation scheme reduces to solutions of the well known quadratic assignment problem (QAP). Symbol mapping adaptation only requires a small increase in receiver complexity but provides very substantial BER gains when applied to additive white Gaussian noise (AWGN) and flat-fading channels.
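
A toy numeric check of the underlying idea for 8-PSK; the paper obtains optimal mappings by solving a QAP, whereas the specific remapping below is our own illustrative choice. Retransmitting under a different bit-to-symbol mapping raises the worst-case combined squared distance between any two labels:

```python
import cmath

# 8-PSK constellation: symbol m sits at angle 2*pi*m/8 on the unit circle.
psk8 = [cmath.exp(2j * cmath.pi * m / 8) for m in range(8)]

def min_combined_d2(remap):
    """Worst-case summed squared distance over two transmissions of the
    same label: tx1 sends symbol a, tx2 sends symbol remap[a]."""
    return min(abs(psk8[a] - psk8[b]) ** 2 +
               abs(psk8[remap[a]] - psk8[remap[b]]) ** 2
               for a in range(8) for b in range(8) if a != b)

same = list(range(8))                    # plain retransmission
spread = [3 * m % 8 for m in range(8)]   # illustrative remapping
print(min_combined_d2(same))    # ~1.17: nearest neighbors stay weak
print(min_combined_d2(spread))  # 4.0: tx1's weak pairs land far apart in tx2
```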

Journal ArticleDOI
TL;DR: A new self-healing key distribution scheme is proposed, which is optimal in terms of user memory storage and more efficient in Terms of communication complexity than the previous results.
Abstract: The main property of the self-healing key distribution scheme is that users are capable of recovering lost group keys on their own, without requesting additional transmission from the group manager. In this paper, we propose a new self-healing key distribution scheme, which is optimal in terms of user memory storage and more efficient in terms of communication complexity than the previous results.

Proceedings ArticleDOI
22 May 2005
TL;DR: Efficient implementations of the embedding are shown to yield solutions to various computational problems involving edit distance, including sketching, communication complexity, and nearest neighbor search.
Abstract: We show that $\{0,1\}^d$ endowed with edit distance embeds into $\ell_1$ with distortion $2^{O(\sqrt{\log d \log\log d})}$. We further show efficient implementations of the embedding that yield solutions to various computational problems involving edit distance. These include sketching, communication complexity, and nearest neighbor search. For all these problems, we improve upon previous bounds.
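
For reference, the metric being embedded is ordinary edit (Levenshtein) distance. The standard O(|s|·|t|) dynamic program below is included as context only; the paper's contribution is the embedding, not this computation:

```python
def edit_distance(s: str, t: str) -> int:
    """Minimum number of insertions, deletions and substitutions
    transforming s into t (row-by-row Levenshtein DP)."""
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        cur = [i]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + 1,              # delete a
                           cur[j - 1] + 1,           # insert b
                           prev[j - 1] + (a != b)))  # substitute a -> b
        prev = cur
    return prev[-1]

print(edit_distance("0110", "1001"))  # 3
```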

Journal ArticleDOI
TL;DR: Analysing a multilateral negotiation framework, where autonomous agents agree on a sequence of deals to exchange sets of discrete resources in order to both further their own goals and achieve a socially optimal distribution of resources, allows different aspects of complexity to be distinguished.
Abstract: We study the complexity of a multilateral negotiation framework, where autonomous agents agree on a sequence of deals to exchange sets of discrete resources in order to both further their own goals and achieve a distribution of resources that is socially optimal. When analysing such a framework, we can distinguish different aspects of complexity: How many deals are required to reach an optimal allocation of resources? How many communicative exchanges are required to agree on one such deal? How complex a communication language do we require? And finally, how complex is the reasoning task faced by each agent?

Journal ArticleDOI
TL;DR: NetRec, a dynamic network reconfiguration algorithm for tolerating multiple node and link failures in high-speed networks with arbitrary topology, is presented, and the termination, liveness, and safety of the proposed algorithm are proven.
Abstract: Component failures in high-speed computer networks can result in significant topological changes. In such cases, a network reconfiguration algorithm must be executed to restore the connectivity between the network nodes. Most contemporary networks use either static reconfiguration algorithms or stop the user traffic in order to prevent cyclic dependencies in the routing tables. The goal is to present NetRec, a dynamic network reconfiguration algorithm for tolerating multiple node and link failures in high-speed networks with arbitrary topology. The algorithm updates the routing tables asynchronously and does not require any global knowledge about the network topology. Certain phases of NetRec are executed in parallel, which reduces the reconfiguration time. The algorithm suspends the application traffic in small regions of the network only while the routing tables are being updated. The message complexity of NetRec is analyzed and the termination, liveness, and safety of the proposed algorithm are proven. Additionally, results from validation of the algorithm in a distributed network-validation testbed Distant, based on the MPI 1.2 features for building arbitrary virtual topologies, are presented.

Book ChapterDOI
28 Feb 2005
TL;DR: Three protocols are presented that solve the set-disjointness problem in the setting where Alice and Bob wish to disclose no information to each other about their sets beyond the single bit: "whether the intersection is empty or not."
Abstract: Two parties, say Alice and Bob, possess two sets of elements that belong to a universe of possible values and wish to test whether these sets are disjoint or not. In this paper we consider the above problem in the setting where Alice and Bob wish to disclose no information to each other about their sets beyond the single bit: “whether the intersection is empty or not.” This problem has many applications in commercial settings where two mutually distrustful parties wish to decide with minimum possible disclosure whether there is any overlap between their private datasets. We present three protocols that solve the above problem that meet different efficiency and security objectives and data representation scenarios. Our protocols are based on Homomorphic encryption and in our security analysis, we consider the semi-honest setting as well as the malicious setting. Our most efficient construction for a large universe in terms of overall communication complexity uses a new encryption primitive that we introduce called “superposed encryption.” We formalize this notion and provide a construction that may be of independent interest. For dealing with the malicious adversarial setting we take advantage of recent efficient constructions of Universally-Composable commitments based on verifiable encryption as well as zero-knowledge proofs of language membership.

Proceedings ArticleDOI
07 Nov 2005
TL;DR: An in-depth study of the communication requirements across a broad spectrum of important scientific applications (computational methods include finite-difference, lattice-Boltzmann, particle-in-cell, sparse linear algebra, particle-mesh Ewald, and FFT-based solvers) is performed to guide architectural choices for the design and implementation of interconnects for future HPC systems.
Abstract: As thermal constraints reduce the pace of CPU performance improvements, the cost and scalability of future HPC architectures are increasingly dominated by the interconnect. In this paper we perform an in-depth study of the communication requirements across a broad spectrum of important scientific applications, whose computational methods include: finite-difference, lattice-Boltzmann, particle-in-cell, sparse linear algebra, particle-mesh Ewald, and FFT-based solvers. We use the IPM (Integrated Performance Monitoring) profiling framework to collect detailed statistics on communication topology and message volume with minimal impact to code performance. By characterizing the parallelism and communication requirements of such a diverse set of applications, we hope to guide architectural choices for the design and implementation of interconnects for future HPC systems.

Book ChapterDOI
Felix Brandt1
01 Dec 2005
TL;DR: In this paper, a set of primitives based on El Gamal encryption is proposed to construct efficient multiparty computation protocols for low-complexity functions, such as the Hamming distance of two bitstrings and the greater-than function.
Abstract: We propose a set of primitives based on El Gamal encryption that can be used to construct efficient multiparty computation protocols for certain low-complexity functions. In particular, we show how to privately count the number of true Boolean disjunctions of literals and pairwise exclusive disjunctions of literals. Applications include efficient two-party protocols for computing the Hamming distance of two bitstrings and the greater-than function. The resulting protocols only require 6 rounds of interaction (in the random oracle model) and their communication complexity is $\mathcal{O}(kQ)$ where k is the length of bit-strings and Q is a security parameter. The protocols are secure against active adversaries but do not provide fairness. Security relies on the decisional Diffie-Hellman assumption and error probability is negligible in Q.
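
A toy version of the multiplicatively homomorphic El Gamal encryption that such primitives build on; this is the textbook scheme over a 31-bit prime, far too small to be secure, and it illustrates only the homomorphism, not the paper's protocols:

```python
import random

p, g = 2147483647, 7       # Mersenne prime 2^31 - 1; 7 is a primitive root
x = random.randrange(2, p - 1)    # secret key
h = pow(g, x, p)                  # public key

def enc(m):
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), m * pow(h, r, p) % p)

def dec(c):
    a, b = c
    return b * pow(a, p - 1 - x, p) % p   # b / a^x  (mod p)

# El Gamal is multiplicatively homomorphic: multiplying ciphertexts
# componentwise multiplies the plaintexts underneath.
c1, c2 = enc(6), enc(7)
prod = (c1[0] * c2[0] % p, c1[1] * c2[1] % p)
print(dec(prod))   # 42 == 6 * 7
```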

Journal ArticleDOI
TL;DR: This paper considers the communications involved in the execution of a complex application, deployed on a heterogeneous platform, modeled by a graph where resources have different communication and computation speeds, and shows how to compute the best throughput using linear programming and how to exhibit a periodic schedule.
Abstract: In this paper, we consider the communications involved in the execution of a complex application, deployed on a heterogeneous platform. Such applications extensively use macrocommunication schemes, for example, to broadcast data items. Rather than aiming at minimizing the execution time of a single broadcast, we focus on the steady-state operation. We assume that there is a large number of messages to be broadcast in pipeline fashion, and we aim at maximizing the throughput, i.e., the (rational) number of messages which can be broadcast every time-step. We target heterogeneous platforms, modeled by a graph where resources have different communication and computation speeds. Achieving the best throughput may well require that the target platform is used in totality: we show that neither spanning trees nor DAGs are as powerful as general graphs. We show how to compute the best throughput using linear programming, and how to exhibit a periodic schedule, first when restricting to a DAG, and then when using a general graph. The polynomial compactness of the description comes from the decomposition of the schedule into several broadcast trees that are used concurrently to reach the best throughput. It is important to point out that a concrete scheduling algorithm based upon the steady-state operation is asymptotically optimal, in the class of all possible schedules (not only periodic solutions).

Journal ArticleDOI
TL;DR: A distributed clustering algorithm that computes energy-efficient broadcast trees in polynomial time is presented, and a reduction in the energy needed to form the broadcast tree that is linear in the number of source nodes is observed.
Abstract: This paper addresses the energy-efficient broadcasting problem in ad hoc wireless networks. First, we show that finding the minimum-energy broadcast tree is NP-complete. We then develop a distributed clustering algorithm that computes energy-efficient broadcast trees in polynomial time. Our distributed algorithm computes all N possible broadcast trees simultaneously, while requiring O(N/sup 2/) messages to be exchanged between nodes. We compare our algorithm's performance to the best-known centralized algorithm, and show that it constructs trees consuming, on average, only 18% more energy. We also consider the possibility of having multiple source nodes that can be used to broadcast the message and adapt our algorithm to compute energy-efficient broadcast trees with multiple source nodes. We observe a reduction in the amount of energy needed to form the broadcast tree that is linear in the number of source nodes.
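
For context, the best-known centralized algorithm for this problem is commonly taken to be the greedy Broadcast Incremental Power (BIP) heuristic of Wieselthier et al.; that it is the exact baseline used in this paper is our assumption. A compact sketch of BIP's greedy rule:

```python
import math

def bip(nodes, source, alpha=2):
    """Greedy Broadcast Incremental Power: repeatedly cover the node whose
    extra transmit power is smallest, exploiting the fact that raising one
    node's power reaches every node within the new range (the "wireless
    multicast advantage"). `nodes` maps id -> (x, y) coordinates."""
    power = {v: 0.0 for v in nodes}     # current transmit power per node
    covered, total = {source}, 0.0
    while len(covered) < len(nodes):
        best = None
        for u in covered:
            for v in nodes.keys() - covered:
                inc = max(math.dist(nodes[u], nodes[v]) ** alpha - power[u], 0.0)
                if best is None or inc < best[0]:
                    best = (inc, u, v)
        inc, u, v = best
        power[u] += inc
        total += inc
        covered.add(v)
    return total

pts = {0: (0, 0), 1: (1, 0), 2: (2, 0), 3: (0, 2)}
print(bip(pts, source=0))  # total energy of the greedy broadcast tree
```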

Journal ArticleDOI
TL;DR: A class of reconfigurable LDPCC characterized by low encoding and decoding complexity, called generalized irregular repeat-accumulate (GeIRA) codes, is presented for both space and terrestrial wireless communications.
Abstract: Low-density parity-check codes (LDPCC) have recently been investigated as a possible solution for high-data-rate applications, for both space and terrestrial wireless communications. A main issue is the search for low-complexity encoding and decoding schemes. In this letter we present a class of reconfigurable LDPCC characterized by low encoding and decoding complexity: we call them generalized irregular repeat-accumulate (GeIRA) codes.

Proceedings ArticleDOI
11 Jun 2005
TL;DR: Two new complexity measures for Boolean functions, sumPI and maxPI, are introduced; maxPI(f) is always at least as large as sumPI(f), and is derived from sumPI in such a way that maxPI^2(f) remains a lower bound on formula size.
Abstract: We introduce two new complexity measures for Boolean functions, which we name sumPI and maxPI. The quantity sumPI has been emerging through a line of research on quantum query complexity lower bounds via the so-called quantum adversary, culminating with the realization that these many different formulations are in fact equivalent. Given that sumPI turns out to be such a robust invariant of a function, we begin to investigate this quantity in its own right and see that it also has applications to classical complexity theory. As a surprising application we show that sumPI^2(f) is a lower bound on the formula size, and even, up to a constant multiplicative factor, the probabilistic formula size of f. We show that several formula size lower bounds in the literature, specifically Khrapchenko and its extensions [Khrapchenko, 1971, Koutsoupias, 1993], including a key lemma of [Hastad, 1998], are in fact special cases of our method. The second quantity we introduce, maxPI(f), is always at least as large as sumPI(f), and is derived from sumPI in such a way that maxPI^2(f) remains a lower bound on formula size. Our main result is proven via a combinatorial lemma which relates the square of the spectral norm of a matrix to the squares of the spectral norms of its submatrices. The generality of this lemma gives that our methods can also be used to lower bound the communication complexity of relations, and a related combinatorial quantity, the rectangle partition number. To exhibit the strengths and weaknesses of our methods, we look at the sumPI and maxPI complexity of a few examples, including the recursive majority of three function, a function defined by Ambainis [2003], and the collision problem.

Journal ArticleDOI
TL;DR: It is shown that every regular language L has either constant, logarithmic or linear two-party communication complexity (in a worst-case partition sense) and similar classifications for the communication complexity of regular languages are proved for the simultaneous, probabilistic, simultaneous probabilistic and Mod_p-counting models of communication.
Abstract: We show that every regular language L has either constant, logarithmic or linear two-party communication complexity (in a worst-case partition sense). We prove similar classifications for the communication complexity of regular languages for the simultaneous, probabilistic, simultaneous probabilistic and Mod_p-counting models of communication.

Proceedings ArticleDOI
11 Jun 2005
TL;DR: It is proved that corruption, one of the most powerful measures used to analyze 2-party randomized communication complexity, satisfies a strong direct sum property under rectangular distributions.
Abstract: We prove that corruption, one of the most powerful measures used to analyze 2-party randomized communication complexity, satisfies a strong direct sum property under rectangular distributions. This direct sum bound holds even when the error is allowed to be exponentially close to 1. We use this to analyze the complexity of the widely-studied set disjointness problem in the usual "number-on-the-forehead" (NOF) model of multiparty communication complexity.

Book ChapterDOI
11 Jul 2005
TL;DR: An analogue of the Hadamard property of matrices for tensors in multiple dimensions is defined and it is shown that any k-party communication problem represented by a Hadamard tensor must have Ω(n/2^k) multiparty communication complexity.
Abstract: We develop a new method for estimating the discrepancy of tensors associated with multiparty communication problems in the “Number on the Forehead” model of Chandra, Furst and Lipton. We define an analogue of the Hadamard property of matrices for tensors in multiple dimensions and show that any k-party communication problem represented by a Hadamard tensor must have Ω(n/2^k) multiparty communication complexity. We also exhibit constructions of Hadamard tensors, giving Ω(n/2^k) lower bounds on multiparty communication complexity for a new class of explicitly defined Boolean functions.

Proceedings ArticleDOI
18 Mar 2005
TL;DR: This paper develops a reception model for UWB multiple access based on frame-rate sampled signals in lieu of chip-rate samples; this model enables low-complexity MUD, of which a reduced-rank Wiener filter for blind symbol detection is examined.
Abstract: Realizing the large user capacity planned for ultra-wideband (UWB) systems motivates multiuser detection (MUD). However, it is impractical to implement conventional chip-rate MUD methods, because UWB signaling gives rise to high detection complexity and difficulty in capturing energy scattered by dense multipath. In this paper, we develop a reception model for UWB multiple access based on frame-rate sampled signals in lieu of chip-rate samples. This model enables low-complexity MUD, of which we examine a reduced-rank Wiener filter for blind symbol detection. We show that frame-rate UWB samples have a small number of distinct eigenvalues in the data covariance matrix, resulting in rapid convergence of reduced-rank filtering. The proposed MUD method exhibits good performance at low complexity, even in the presence of strong frequency-selective multipath fading.
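
A minimal sketch of an eigen-subspace reduced-rank Wiener filter; this is the generic construction, not the paper's UWB-specific receiver. When the data covariance has only a few dominant eigenvalues, as the paper observes for frame-rate samples, a low-rank solve is already nearly optimal:

```python
import numpy as np

def reduced_rank_wiener(R, p, rank):
    """Project onto the `rank` dominant eigenvectors of the covariance R,
    then solve the Wiener-Hopf equation inside that subspace."""
    _, vecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    U = vecs[:, -rank:]                  # dominant subspace
    w_sub = np.linalg.solve(U.T @ R @ U, U.T @ p)
    return U @ w_sub                     # filter expressed in full dimension

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 2))
R = A @ A.T + 0.01 * np.eye(8)           # rank-2 signal + weak noise floor
p = R @ rng.standard_normal(8)           # cross-correlation vector
w = reduced_rank_wiener(R, p, rank=2)
print(np.linalg.norm(R @ w - p))         # small: rank 2 is nearly optimal
```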

Proceedings ArticleDOI
31 Oct 2005
TL;DR: A T-decomposition algorithm with O(n log n) time and space complexity is presented; T-decomposition involves the parsing of potentially very large strings, which in turn requires algorithms with good time complexity.
Abstract: T-decomposition maps a finite string into a series of parameters for a recursive string construction algorithm. Initially developed for the communication of coding trees (M. R. Titchener, June 1996), (U. Guenther, Feb. 2001), T-decomposition has since been studied within the context of information measures. This involves the parsing of potentially very large strings, which in turn requires algorithms with good time complexity. This paper presents a T-decomposition algorithm with O(n log n) time and space complexity.

Proceedings ArticleDOI
09 May 2005
TL;DR: This approach combines the advantages of two overlay architectures, Chord-like regular networks and unstructured networks with epidemic communication, and provides higher robustness as well as higher speed and lower message complexity than the respective base methods.
Abstract: In this paper, we present an efficient algorithm for performing a broadcast operation in P2P grids. Our approach combines the advantages of two overlay architectures: Chord-like regular networks and unstructured networks with epidemic communication. The resulting meta-architecture provides higher robustness as well as higher speed and lower message complexity than the respective base methods. Preliminary experiments show the viability of the introduced approach.
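
A toy of the epidemic half of this design; the sketch below is generic push gossip with our own parameter choices, while the paper's meta-architecture additionally layers in Chord-style regular structure:

```python
import random

def push_gossip(n, fanout=3, seed=0):
    """Round-based push epidemic broadcast: every informed node forwards
    the message to `fanout` random peers per round. Returns the number of
    rounds and messages until all n nodes are informed."""
    random.seed(seed)
    informed, rounds, messages = {0}, 0, 0
    while len(informed) < n:
        rounds += 1
        fresh = set()
        for _ in informed:
            for peer in random.sample(range(n), fanout):
                messages += 1
                fresh.add(peer)
        informed |= fresh
    return rounds, messages

print(push_gossip(1000))  # roughly log(n) rounds; robust but message-heavy
```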