
Showing papers on "Communication complexity published in 2004"


Proceedings ArticleDOI
Mark Coates1
26 Apr 2004
TL;DR: Two methodologies for performing distributed particle filtering in a sensor network are described; the second adds a predictive scalar quantizer training step to the more standard particle filtering framework, allowing adaptive encoding of the measurements.
Abstract: This paper describes two methodologies for performing distributed particle filtering in a sensor network. It considers the scenario in which a set of sensor nodes make multiple, noisy measurements of an underlying, time-varying state that describes the monitored system. The goal of the proposed algorithms is to perform on-line, distributed estimation of the current state at multiple sensor nodes, whilst attempting to minimize communication overhead. The first algorithm relies on likelihood factorization and the training of parametric models to approximate the likelihood factors. The second algorithm adds a predictive scalar quantizer training step into the more standard particle filtering framework, allowing adaptive encoding of the measurements. As its primary example, the paper describes the application of the quantization-based algorithm to tracking a maneuvering object. The paper concludes with a discussion of the limitations of the presented technique and an indication of future avenues for enhancement.
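The quantization idea can be sketched in a few lines: a bootstrap particle filter that sees only coarsely quantized measurements. This is a minimal stand-in assuming a random-walk state model, Gaussian noise, and a fixed uniform quantizer over a hypothetical range; the paper's adaptively trained predictive quantizer is not reproduced.

```python
import math
import random

def quantize(z, levels=8, lo=-10.0, hi=10.0):
    """3-bit uniform scalar quantizer (hypothetical range); stands in for
    the paper's adaptively trained predictive quantizer."""
    z = min(max(z, lo), hi)
    step = (hi - lo) / levels
    idx = min(int((z - lo) / step), levels - 1)
    return lo + (idx + 0.5) * step        # reconstruction level

def particle_filter(measurements, n_particles=500, meas_std=1.0):
    """Bootstrap particle filter for a scalar random-walk state observed
    in Gaussian noise, fed only the quantized measurements."""
    random.seed(0)
    particles = [random.gauss(0.0, 2.0) for _ in range(n_particles)]
    estimates = []
    for z in measurements:
        # propagate through the random-walk dynamics
        particles = [p + random.gauss(0.0, 0.5) for p in particles]
        # weight by likelihood of the (quantized) measurement
        weights = [math.exp(-0.5 * ((z - p) / meas_std) ** 2) for p in particles]
        total = sum(weights) or 1e-300
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # multinomial resampling
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates

# toy run: the state drifts upward; the sensor transmits only 3-bit codes
random.seed(1)
truth = [0.3 * (t + 1) for t in range(30)]
measurements = [quantize(x + random.gauss(0.0, 1.0)) for x in truth]
est = particle_filter(measurements)
rmse = math.sqrt(sum((e - x) ** 2 for e, x in zip(est, truth)) / len(truth))
```

Even with only eight quantization levels, the filter tracks the drifting state to within roughly the quantizer step size, which is the tradeoff the paper's adaptive encoding is designed to manage.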

302 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that for every Bell's inequality, including those which are not yet known, there always exists a communication complexity problem for which a protocol assisted by states which violate the inequality is more efficient than any classical protocol.
Abstract: We prove that for every Bell's inequality, including those which are not yet known, there always exists a communication complexity problem for which a protocol assisted by states which violate the inequality is more efficient than any classical protocol. Violation of Bell's inequalities is thus a necessary and sufficient condition for a quantum protocol to beat the classical ones.

257 citations


Journal ArticleDOI
TL;DR: This work discusses and analyzes a specific group key agreement technique which supports dynamic group membership and handles network failures, such as group partitions and merges, and is simple, fault-tolerant, and well-suited for high-delay networks.
Abstract: In recent years, collaborative and group-oriented applications and protocols have gained popularity. These applications typically involve communication over open networks; security thus is naturally an important requirement. Group key management is one of the basic building blocks in securing group communication. Most prior research in group key management focused on minimizing computation overhead, in particular minimizing expensive cryptographic operations. However, continued advances in computing power have not been matched by a decrease in network communication delay. Thus, communication latency, especially in high-delay long-haul networks, increasingly dominates the key setup latency, replacing computation delay as the main latency contributor. Hence, there is a need to minimize the size of messages and, especially, the number of rounds in cryptographic protocols. Since most previously proposed group key management techniques optimize computational (cryptographic) overhead, they are particularly impacted by high communication delay. In this work, we discuss and analyze a specific group key agreement technique which supports dynamic group membership and handles network failures, such as group partitions and merges. This technique is very communication-efficient and provably secure against hostile eavesdroppers as well as various other attacks specific to group settings. Furthermore, it is simple, fault-tolerant, and well-suited for high-delay networks.

184 citations


Book ChapterDOI
13 Jul 2004
TL;DR: These schemes are based on Paillier's cryptosystem, which along with its variants has drawn extensive study in recent cryptographic research and has many important applications.
Abstract: We study the problem of single-database private information retrieval, and present a solution with only logarithmic server-side communication complexity and a solution with only logarithmic user-side communication complexity. Previously the best result could only achieve polylogarithmic communication on each side, and was based on certain less well-studied assumptions in number theory [6]. On the contrary, our schemes are based on Paillier's cryptosystem [16], which along with its variants has drawn extensive study in recent cryptographic research [3, 4, 8, 9] and has many important applications [7, 8].
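The property of Paillier's cryptosystem that such PIR constructions exploit is its additive homomorphism: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. A toy instance (the primes below are illustratively small and wholly insecure):

```python
import random
from math import gcd

# toy primes -- vastly too small for any real security
p, q = 1789, 1907
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1)   # phi(n) suffices in place of Carmichael's lambda here
mu = pow(lam, -1, n)      # since L(g^lam mod n^2) = lam (mod n) when g = n + 1

def L(u):
    return (u - 1) // n

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(41), encrypt(58)
plain_sum = decrypt((c1 * c2) % n2)   # homomorphic addition: 41 + 58
```

The randomizer r makes encryption probabilistic, so a server handling a PIR query can operate on ciphertexts without learning which entry the user requested.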

153 citations


Book ChapterDOI
Yehuda Lindell1
19 Feb 2004
TL;DR: It is proved that there exist many functionalities that cannot be securely computed in the setting of concurrent self composition, and a communication complexity lower bound on protocols that securely compute a large class of functionalities in this setting is proved.
Abstract: In the setting of concurrent self composition, a single protocol is executed many times concurrently by a single set of parties. In this paper, we prove that there exist many functionalities that cannot be securely computed in this setting. We also prove a communication complexity lower bound on protocols that securely compute a large class of functionalities in this setting. Specifically, we show that any protocol that computes a functionality from this class and remains secure for m concurrent executions, must have bandwidth of at least m bits. Our results hold for the plain model (where no trusted setup phase is assumed), and for the case that the parties may choose their inputs adaptively, based on previously obtained outputs. While proving our impossibility result, we also show that for many functionalities, security under concurrent self composition (where a single secure protocol is run many times) is actually equivalent to the seemingly more stringent requirement of security under concurrent general composition (where a secure protocol is run concurrently with other arbitrary protocols). This observation has significance beyond the impossibility results that are derived by it for concurrent self composition.

135 citations


Journal ArticleDOI
TL;DR: It is proved that the problem of finding a broadcast tree that minimizes the energy cost is NP-hard, and three centralized heuristic algorithms are proposed: a shortest path tree heuristic, a greedy heuristic, and a node-weighted Steiner tree-based heuristic.
Abstract: In this paper, we discuss energy efficient broadcast in ad hoc wireless networks. The problem of our concern is: given an ad hoc wireless network, find a broadcast tree such that the energy cost of the broadcast tree is minimized. Each node in the network is assumed to have a fixed level of transmission power. We first prove that the problem is NP-hard and propose three heuristic algorithms, namely, shortest path tree heuristic, greedy heuristic, and node weighted Steiner tree-based heuristic, which are centralized algorithms. The approximation ratio of the node weighted Steiner tree-based heuristic is proven to be (1 + 2 ln(n - 1)). Extensive simulations have been conducted and the results have demonstrated the efficiency of the proposed algorithms.
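The shortest path tree heuristic can be sketched as follows, assuming the tree's energy cost is the sum of the fixed transmission powers of its relaying (non-leaf) nodes, per the paper's model; the graph, link costs, and power values below are hypothetical.

```python
import heapq

def shortest_path_tree(adj, src):
    """Dijkstra; adj[u] = {v: link_cost}. Returns parent pointers of the SPT."""
    dist = {src: 0.0}
    parent = {src: None}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(pq, (nd, v))
    return parent

def broadcast_cost(parent, power):
    """Energy cost = sum of the fixed transmission powers of all
    relaying (non-leaf) nodes in the broadcast tree."""
    transmitters = {p for p in parent.values() if p is not None}
    return sum(power[u] for u in transmitters)

# hypothetical 5-node ad hoc network, source 'a'
adj = {
    'a': {'b': 1, 'c': 4},
    'b': {'a': 1, 'c': 1, 'd': 5},
    'c': {'a': 4, 'b': 1, 'd': 1},
    'd': {'b': 5, 'c': 1, 'e': 1},
    'e': {'d': 1},
}
power = {'a': 2.0, 'b': 1.5, 'c': 1.0, 'd': 1.0, 'e': 3.0}
tree = shortest_path_tree(adj, 'a')
cost = broadcast_cost(tree, power)
```

Here the SPT chains a→b→c→d→e, so every node but the leaf 'e' transmits once; the greedy and Steiner-based heuristics from the paper instead try to reduce the number of relaying nodes.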

132 citations


Journal ArticleDOI
TL;DR: In this paper, a cross-layer view of the roles of signal processing in random access networks, and vice versa, is presented, and two cases where cross-layer design has a quantifiable impact on system performance are discussed.
Abstract: In this paper, a cross-layer view of the roles of signal processing in random access networks, and vice versa, is presented. Two cases where cross-layer design has a quantifiable impact on system performance are discussed. The first case is a small network (such as a wireless LAN) where a few nodes with bursty arrivals communicate with an access point. The design objective is to achieve the highest throughput among users with variable rate and delay constraints. The impact of PHY layer design on the MAC protocol is examined, illustrating a tradeoff between allocating resources to the PHY layer and to the MAC layer. The second case, in contrast, deals with large-scale sensor networks where each node carries little information but is severely constrained by its computation and communication complexity and, most importantly, battery power. This paper emphasizes that the design of signal processing algorithms must take into account the role of the MAC and the nature of random arrivals and bursty transmissions.

131 citations


Book ChapterDOI
01 Jan 2004
TL;DR: The idea that the less coordination a multi-robot system requires, the better it should scale to large numbers of robots is captured.
Abstract: We examine the scalability of multi-robot algorithms. In particular, we attempt to capture the idea that the less coordination a multi-robot system requires, the better it should scale to large numbers of robots. To that end, we introduce a notion of communication complexity of multi-robot (or more generally, distributed control) systems as a surrogate for coordination. We describe a formalism, called CCL, for specifying multi-robot systems and algorithms for which the definition of communication complexity arises naturally. We then analyze the communication complexity of several, in some cases novel, multi-robot communication schemes each representative of one of several natural complexity classes.

105 citations


Proceedings ArticleDOI
13 Jun 2004
TL;DR: The Hidden Matching Problem HM_n is defined and it is proved that any randomized linear one-way protocol with bounded error for this problem requires Ω((n log n)^{1/3}) bits of communication.
Abstract: We give the first exponential separation between quantum and bounded-error randomized one-way communication complexity. Specifically, we define the Hidden Matching Problem HM_n: Alice gets as input a string x ∈ {0, 1}^n and Bob gets a perfect matching M on the n coordinates. Bob's goal is to output a tuple ⟨i, j, b⟩ such that the edge (i, j) belongs to the matching M and b = x_i ⊕ x_j. We prove that the quantum one-way communication complexity of HM_n is O(log n), yet any randomized one-way protocol with bounded error must use Ω(√n) bits of communication. No asymptotic gap for one-way communication was previously known. Our bounds also hold in the model of Simultaneous Messages (SM), and hence we provide the first exponential separation between quantum SM and randomized SM with public coins. For a Boolean decision version of HM_n, we show that the quantum one-way communication complexity remains O(log n) and that the 0-error randomized one-way communication complexity is Ω(n). We prove that any randomized linear one-way protocol with bounded error for this problem requires Ω((n log n)^{1/3}) bits of communication.
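A few lines suffice to pin down the combinatorial task (the quantum protocol itself is of course not captured here); the instance below is a hypothetical n = 6 example.

```python
import random

def valid_answers(x, matching):
    """All tuples (i, j, b) a correct protocol for HM_n may output:
    (i, j) must be an edge of Bob's matching and b = x_i XOR x_j."""
    return {(i, j, x[i] ^ x[j]) for (i, j) in matching}

# n = 6: Alice's bit string and Bob's perfect matching on the coordinates
random.seed(0)
x = [random.randrange(2) for _ in range(6)]
matching = [(0, 3), (1, 4), (2, 5)]
answers = valid_answers(x, matching)
```

Note that Bob may answer for any one edge of his matching, which is what lets a short quantum message about x succeed where classical one-way messages need Ω(√n) bits.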

105 citations


Proceedings ArticleDOI
17 Oct 2004
TL;DR: This paper studies the computational model that results if the streaming model is augmented with a sorting primitive, and argues that this model is highly practical, and that a wide range of important problems can be efficiently solved in this (relatively weak) model.
Abstract: The need to deal with massive data sets in many practical applications has led to a growing interest in computational models appropriate for large inputs. The most important quality of a realistic model is that it can be efficiently implemented across a wide range of platforms and operating systems. In this paper, we study the computational model that results if the streaming model is augmented with a sorting primitive. We argue that this model is highly practical, and that a wide range of important problems can be efficiently solved in this (relatively weak) model. Examples are undirected connectivity, minimum spanning trees, and red-blue line segment intersection, among others. This suggests that using more powerful, harder to implement models may not always be justified. Our main technical contribution is to show a hardness result for the "streaming and sorting" model, which demonstrates that the main limitation of this model is that it can only access one data stream at a time. Since our model is strong enough to solve "pointer chasing" problems, the communication complexity based techniques commonly used in showing lower bounds for the streaming model cannot be adapted to our model. We therefore have to employ different techniques to obtain these results. Finally, we compare our model to a popular restriction of external memory algorithms that access their data mostly sequentially.

101 citations


Journal ArticleDOI
TL;DR: A family of new algorithms for rate-fidelity optimal packetization of scalable source bit streams with uneven error protection does away with the expediency of fractional bit allocation, a limitation of some existing algorithms.
Abstract: In this paper, we present a family of new algorithms for rate-fidelity optimal packetization of scalable source bit streams with uneven error protection. In the most general setting, where no assumption is made on the probability function of packet loss or on the rate-fidelity function of the scalable code stream, one of our algorithms can find the globally optimal solution to the problem in O(N²L²) time, compared to a previously obtained O(N³L²) complexity, where N is the number of packets and L is the packet payload size. If the rate-fidelity function of the input is convex, the time complexity can be reduced to O(NL²) for a class of erasure channels, including channels for which the probability function of losing n packets is monotonically decreasing in n, and independent erasure channels with packet erasure rate no larger than N/(2(N + 1)). Furthermore, our O(NL²) algorithm for the convex case can be modified to find an approximate solution for the general case. All of our algorithms do away with the expediency of fractional bit allocation, a limitation of some existing algorithms.

Book ChapterDOI
04 Oct 2004
TL;DR: The first MAC protocols that satisfy all of these requirements are given, i.e., distributed, contention-free, self-stabilizing MAC protocols which do not assume a global time reference.
Abstract: A MAC protocol specifies how nodes in a sensor network access a shared communication channel. Desired properties of such MAC protocol are: it should be distributed and contention-free (avoid collisions); it should self-stabilize to changes in the network (such as arrival of new nodes), and these changes should be contained, i.e., affect only the nodes in the vicinity of the change; it should not assume that nodes have a global time reference, i.e., nodes may not be time-synchronized. We give the first MAC protocols that satisfy all of these requirements, i.e., we give distributed, contention-free, self-stabilizing MAC protocols which do not assume a global time reference. Our protocols self-stabilize from an arbitrary initial state, and if the network changes the changes are contained and the protocol adjusts to the local topology of the network. The communication complexity, number and size of messages, for the protocol to stabilize is small (logarithmic in network size).

Journal ArticleDOI
TL;DR: This paper studies the SIMULTANEOUS MESSAGES (SM) model of multiparty communication complexity, a restricted version of the CFL game in which the players are not allowed to communicate with each other, and proves lower and upper bounds on the SM complexity of several classes of explicit functions.
Abstract: In the multiparty communication game (CFL game) of Chandra, Furst, and Lipton [Proceedings of the 15th Annual ACM Symposium on Theory of Computing, Boston, MA, 1983, pp. 94--99], k players collaboratively evaluate a function f(x_0, . . . , x_{k-1}) in which player i knows all inputs except x_i. The players have unlimited computational power. The objective is to minimize communication. In this paper, we study the SIMULTANEOUS MESSAGES (SM) model of multiparty communication complexity. The SM model is a restricted version of the CFL game in which the players are not allowed to communicate with each other. Instead, each of the k players simultaneously sends a message to a referee, who sees none of the inputs. The referee then announces the function value. We prove lower and upper bounds on the SM complexity of several classes of explicit functions. Our lower bounds extend to randomized SM complexity via an entropy argument. A lemma establishing a tradeoff between average Hamming distance and range size for transformations of the Boolean cube might be of independent interest. Our lower bounds on SM complexity imply an exponential gap between the SM model and the CFL model for up to (log n)^{1-ε} players for any ε > 0. This separation is obtained by comparing the respective complexities of the Generalized Addressing Function, GAF_{G,k}, where G is a group of order n. We also combine our lower bounds on SM complexity with the ideas of Hastad and Goldmann [Comput. Complexity, 1 (1991), pp. 113--129] to derive superpolynomial lower bounds for certain depth-2 circuits computing a function related to the GAF function. We prove some counterintuitive upper bounds on SM complexity. We show that GAF_{Z_2^t,3} has SM complexity O(n^{0.92}). When the number of players is at least c log n, for some constant c > 0, our SM protocol for GAF_{Z_2^t,k} has polylog(n) complexity.
We also examine a class of functions defined by certain depth-2 circuits. This class includes the Generalized Inner Product function and Majority of Majorities. When the number of players is at least 2+log n, we obtain polylog(n) upper bounds for this class of functions.

Proceedings ArticleDOI
26 Apr 2004
TL;DR: This work adopts a non-asymptotic approach, optimizes both the sensing and fusion sides with respect to the probability of detection error, and shows that the optimal fusion rule has an interesting structure similar to the majority-voting rule.
Abstract: In this paper, we address the problem of optimizing the detection performance of sensor networks under communication constraints on the common access channel. Our work helps in understanding tradeoffs between sensor network parameters like the number of sensors, the degree of quantization at each local sensor, and SNR. Traditionally, this problem is tackled using asymptotic assumptions on the number of sensors, an approach that leads to the abstraction of important details such as the structure of the fusion center. We adopt a non-asymptotic approach and optimize both the sensing and the fusion sides with respect to the probability of detection error. We show that the optimal fusion rule has an interesting structure similar to the majority-voting rule. In addition, we study the convergence, with respect to the number of sensors, of the performance of the fusion rule. We show that convergence is SNR dependent and that, in low-SNR environments, asymptotics may require a large number of sensors.
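A rough Monte-Carlo sketch of the setting: one-bit local decisions fused by a plain majority vote, assuming Gaussian noise and a fixed local threshold. This illustrates the convergence of detection performance with the number of sensors, not the paper's optimized fusion rule.

```python
import random

def detection_error(n_sensors, noise_std=1.0, trials=2000):
    """Monte-Carlo estimate of the detection error probability when
    n one-bit sensors (local threshold 0.5) are fused by majority vote."""
    random.seed(0)
    errors = 0
    for _ in range(trials):
        h = random.randrange(2)               # hypothesis: signal absent/present
        signal = float(h)
        # each sensor thresholds its own noisy observation
        votes = sum(signal + random.gauss(0.0, noise_std) > 0.5
                    for _ in range(n_sensors))
        decision = 1 if 2 * votes > n_sensors else 0
        errors += decision != h
    return errors / trials

few = detection_error(n_sensors=3)
many = detection_error(n_sensors=21)
```

With these hypothetical parameters the error drops sharply from 3 to 21 sensors; at lower SNR (larger noise_std) the same improvement requires far more sensors, which is the paper's point about asymptotics.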

Proceedings ArticleDOI
17 May 2004
TL;DR: Ways in which emergence science can be applied to distributed computing are discussed, avoiding some of the compromises associated with traditionally-designed applications.
Abstract: Natural distributed systems are adaptive, scalable and fault-tolerant. Emergence science describes how higher-level self-regulatory behaviour arises in natural systems from many participants following simple rule-sets. Emergence advocates simple communication models, autonomy and independence, enhancing robustness and self-stabilization. High-quality distributed applications such as autonomic systems must satisfy the appropriate nonfunctional requirements which include scalability, efficiency, robustness, low-latency and stability. However the traditional design of distributed applications, especially in terms of the communication strategies employed, can introduce compromises between these characteristics. This paper discusses ways in which emergence science can be applied to distributed computing, avoiding some of the compromises associated with traditionally-designed applications. To demonstrate the effectiveness of this paradigm, an emergent election algorithm is described and its performance evaluated. The design incorporates nondeterministic behaviour. The resulting algorithm has very low communication complexity, and is simultaneously very stable, scalable and robust.

Proceedings ArticleDOI
24 Mar 2004
TL;DR: This paper extends the existing atomicity consistency criterion defined for multiwriter/multireader shared memory in a crash-stop model, by providing two new criteria for the crash-recovery model, and introduces lower bounds on the log-complexity for each of the two corresponding types of robust shared memory emulations.
Abstract: A shared memory abstraction can be robustly emulated over an asynchronous message passing system where any process can fail by crashing and possibly recover (crash-recovery model), by having (a) the processes exchange messages to synchronize their read and write operations and (b) log key information on their local stable storage. This paper extends the existing atomicity consistency criterion defined for multiwriter/multireader shared memory in a crash-stop model, by providing two new criteria for the crash-recovery model. We introduce lower bounds on the log-complexity for each of the two corresponding types of robust shared memory emulations. We demonstrate that our lower bounds are tight by providing algorithms that match them. Besides being optimal, these algorithms have the same message and time complexity as their most efficient counterpart we know of in the crash-stop model.

Proceedings ArticleDOI
25 Oct 2004
TL;DR: A dynamic proxy tree-based framework is proposed that can efficiently disseminate data from a dynamic source to multiple mobile sinks for applications such as mobile target detection and tracking.
Abstract: In wireless sensor networks, efficiently disseminating data from a dynamic source to multiple mobile sinks is important for applications such as mobile target detection and tracking. A tree-based multicasting scheme can be used. However, due to the short communication range of each sensor node and the frequent movement of sources and sinks, a sink may fail to receive data due to broken paths, and the tree should frequently be reconfigured to reconnect sources and sinks. To address the problem, we propose a dynamic proxy tree-based framework. A big challenge in implementing the framework is how to reconfigure the proxy tree efficiently as sources and sinks change. We model the problem as on-line construction of a minimum Steiner tree in a Euclidean plane, and propose centralized schemes to solve it. Considering the strict energy constraints in wireless sensor networks, we further propose two distributed on-line schemes, a shortest path-based (SP) scheme and a spanning range-based (SR) scheme. Extensive simulations are conducted to evaluate the schemes. The results show that the distributed schemes have similar performance to the centralized ones, and among the distributed schemes, SR outperforms SP.

Book ChapterDOI
02 May 2004
TL;DR: This work revisits the following open problem in information-theoretic cryptography: can computationally unbounded players compute an arbitrary function of their inputs with polynomial communication complexity and a linear threshold of unconditional privacy?
Abstract: We revisit the following open problem in information-theoretic cryptography: Does the communication complexity of unconditionally secure computation depend on the computational complexity of the function being computed? For instance, can computationally unbounded players compute an arbitrary function of their inputs with polynomial communication complexity and a linear threshold of unconditional privacy? Can this be done using a constant number of communication rounds?

Proceedings ArticleDOI
21 Jun 2004
TL;DR: It is observed that any attempt to give short quantum proofs for the class co-NP has to go beyond black box arguments, and that for any Boolean function G(X_1, ..., X_N), if both G and ¬G have QMA black box protocols that make at most T queries to the black box, then there is a classical deterministic black box protocol for G that makes O(T^6) queries to the black box.
Abstract: We study the power of quantum proofs, or more precisely, the power of quantum Merlin-Arthur (QMA) protocols, in two well-studied models of quantum computation: the black box model and the communication complexity model. Our main results are obtained for the communication complexity model. For this model, we identify a complete promise problem for QMA protocols, the linear subspaces distance problem. The problem is of a geometrical nature: each player gets a linear subspace of R^m and considers the sphere of unit vectors in that subspace. Their goal is to output 1 if the distance between the two spheres is very small (say, smaller than 0.1 · √2) and 0 if the distance is very large (say, larger than 0.9 · √2). We show that: 1. The QMA communication complexity of the problem is O(log m). 2. The (classical) MA communication complexity of the problem is Ω(m^ε) (for some ε > 0). 3. The (standard) quantum communication complexity of the problem is Ω(√m). In particular, this gives an exponential separation between QMA communication complexity and MA communication complexity. For the black box model we give several observations. First, we observe that the block sensitivity method, as well as the polynomial method for proving lower bounds on the number of queries, can both be extended to QMA protocols. We use these methods to obtain lower bounds for the QMA black box complexity of functions. In particular, we obtain a tight lower bound of Ω(N) for the QMA black box complexity of a random function, and a tight lower bound of Ω(√N) for the QMA black box query complexity of NOR(X_1, ..., X_N). In particular, this shows that any attempt to give short quantum proofs for the class co-NP has to go beyond black box arguments.
We also observe that for any Boolean function G(X_1, ..., X_N), if both G and ¬G have QMA black box protocols that make at most T queries to the black box, then there is a classical deterministic black box protocol for G that makes O(T^6) queries to the black box. In particular, this shows that in the black box model QMA ∩ co-QMA = P. On the positive side, we observe that any (total or partial) Boolean function G(X_1, ..., X_N) has a QMA black box protocol with proofs of length N that makes only O(√N) queries to the black box. Finally, we observe a very simple proof of the exponential separation (for promise problems) between QMA black box complexity and (classical) MA black box complexity (first obtained by Watrous).

Proceedings ArticleDOI
01 Nov 2004
TL;DR: An algorithm designed to efficiently construct a decision tree over heterogeneously distributed data without centralizing it is presented; experimental results show that using only 20% of the communication cost necessary to centralize the data yields trees with accuracy at least 80% of that of the trees produced by the centralized version.
Abstract: We present an algorithm designed to efficiently construct a decision tree over heterogeneously distributed data without centralizing it. We compare our algorithm against a standard centralized decision tree implementation in terms of accuracy as well as communication complexity. Our experimental results show that by using only 20% of the communication cost necessary to centralize the data, we can achieve trees with accuracy at least 80% of that of the trees produced by the centralized version.

Proceedings ArticleDOI
Boaz Patt-Shamir1
25 Jul 2004
TL;DR: This note presents distributed protocols for computing the median with sublinear space and communication complexity per node and observes that any deterministic protocol that counts the number of distinct data items must have linear complexity in the worst case.
Abstract: We consider a scenario where nodes in a sensor network hold numeric items, and the task is to evaluate simple functions of the distributed data. In this note we present distributed protocols for computing the median with sublinear space and communication complexity per node. Specifically, we give a deterministic protocol for computing median with polylog complexity and a randomized protocol that computes an approximate median with polyloglog communication complexity per node. On the negative side, we observe that any deterministic protocol that counts the number of distinct data items must have linear complexity in the worst case.
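One way sampling buys sublinear communication can be sketched as follows: only sampled nodes report their values, and the median of the sample approximates the global median. This is a hedged illustration of the general idea with hypothetical parameters, not the protocol from the note.

```python
import random

def approximate_median(items, sample_size=101):
    """Only the sampled nodes would need to communicate -- a sketch of
    the sampling idea; the returned value's rank is near n/2 w.h.p."""
    random.seed(0)
    sample = random.sample(items, min(sample_size, len(items)))
    sample.sort()
    return sample[len(sample) // 2]

# values held one per node across a simulated 10,000-node network
random.seed(1)
data = list(range(10_000))
random.shuffle(data)

approx = approximate_median(data)
rank = sorted(data).index(approx)   # true rank of the estimate
```

A sample of size s gives rank error on the order of n/√s, so per-node communication stays far below shipping all n values to one point.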

Proceedings ArticleDOI
29 Nov 2004
TL;DR: The concept of architecture-aware regular and irregular repeat-accumulate (AARA) code design is proposed to achieve high-performance decoders for RA codes of large block length.
Abstract: This paper investigates high-performance decoder design for regular and irregular repeat-accumulate (RA) codes of large block length. In order to achieve throughputs and bit-error rate performance that are in line with future trends in high-speed communications, high-throughput and low-power decoders of low complexity are needed. To meet such conflicting requirements for long codes, the concept of architecture-aware RA (AARA) code design is proposed. AARA code design decouples the complexity of the decoder from the code structure by inducing structural regularity features that are amenable to efficient and scalable decoder implementations. Design methods for AARA codes with structured permuters, for which an iterative decoding algorithm performs well under message passing, are analogous to those for AA LDPC codes. Algorithmic and architectural optimizations that address the latency, memory overhead, and complexity problems typical of iterative decoders for long RA codes are investigated, and a staggered decoding schedule is introduced. AARA decoders using the proposed schedule have a substantial advantage over serial and parallel RA decoders.

Journal ArticleDOI
TL;DR: The model of cellular automata is fascinating because very simple local rules can generate complex global behaviors, as mentioned in this paper, and the relationship between local and global functions is the subject of many studies.
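The "simple local rule, complex global behavior" phenomenon is easy to demonstrate with an elementary (one-dimensional, binary, radius-1) cellular automaton, where the entire local rule fits in one byte — rule 110, for instance, is known to be Turing complete. A minimal update step:

```python
def ca_step(cells, rule):
    """One synchronous update of an elementary cellular automaton with
    periodic boundary.  'rule' is the Wolfram rule number: bit k of the
    rule gives the new state for neighborhood value k = 4*left + 2*center + right."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

# rule 90 computes the XOR of the two neighbors, producing a Sierpinski pattern
row = ca_step([0, 0, 1, 0, 0], 90)
```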

Journal ArticleDOI
TL;DR: Pseudo-telepathy is a surprising application of quantum information processing to communication complexity and a survey of recent and not-so-recent work on the subject is presented.
Abstract: Quantum information processing is at the crossroads of physics, mathematics and computer science. It is concerned with what we can and cannot do with quantum information that goes beyond the abilities of classical information processing devices. Communication complexity is an area of classical computer science that aims at quantifying the amount of communication necessary to solve distributed computational problems. Quantum communication complexity uses quantum mechanics to reduce the amount of communication that would be classically required. Pseudo-telepathy is a surprising application of quantum information processing to communication complexity. Thanks to entanglement, perhaps the most nonclassical manifestation of quantum mechanics, two or more quantum players can accomplish a distributed task with no need for communication whatsoever, which would be an impossible feat for classical players. After a detailed overview of the principle and purpose of pseudo-telepathy, we present a survey of recent and not-so-recent work on the subject. In particular, we describe and analyse all the pseudo-telepathy games currently known to the authors.
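The best-known pseudo-telepathy game is the GHZ (Mermin) game: three players receive bits (r, s, t) with r XOR s XOR t = 0 and must, without communicating, output bits whose XOR equals r OR s OR t. Players sharing a GHZ state win every time; classically, exhaustive search over all deterministic strategies shows at most 3 of the 4 inputs can be won (shared randomness cannot help, as it is a mixture of deterministic strategies):

```python
from itertools import product

def best_classical_ghz():
    """Exhaustive search over deterministic classical strategies for the
    GHZ game.  A strategy for one player is a map {0,1} -> {0,1}, so
    there are 4^3 = 64 joint strategies to check."""
    inputs = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # all r^s^t == 0
    best = 0
    for a, b, c in product(product((0, 1), repeat=2), repeat=3):
        wins = sum((a[r] ^ b[s] ^ c[t]) == (r | s | t) for r, s, t in inputs)
        best = max(best, wins)
    return best
```

Summing the four winning conditions mod 2 shows why 4/4 is impossible: each strategy bit appears twice on the left (XORing to 0) while the right-hand sides XOR to 1.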

Proceedings ArticleDOI
Ilan Newman1
21 Jun 2004
TL;DR: This work presents the first linear complexity protocols for several classes of Boolean functions, including the OR function, functions that have O(1)-minterm (maxterm) size, functions that have linear size AC/sub 0/ formulae and some other functions.
Abstract: We consider a fault tolerance broadcast network of n processors each holding one bit of information. The goal is to compute a given Boolean function on the n bits. In each step, a processor may broadcast one bit of information. Each listening processor receives the bit that was broadcast with error probability bounded by a fixed constant /spl epsi/. The errors in different steps, as well as for different receiving processors in the same step, are mutually independent. The protocols that are considered in this model are oblivious protocols: at each step, the processors that broadcast are fixed in advance, independent of the input and the outcome of previous steps. The primary complexity measure in this model is the total number of broadcasts performed by the protocol. We present here the first linear complexity protocols for several classes of Boolean functions, including the OR function, functions that have O(1)-minterm (maxterm) size, functions that have linear size AC/sub 0/ formulae and some other functions. This answers an open question of Yao (1997) concerning this fault tolerance model of El-Gamal (1984) and Gallager (1988).
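To see why linear total complexity is nontrivial: the naive fix for channel noise is to broadcast each bit k times and majority-vote, which drives the error down exponentially in k but costs Theta(n log n) broadcasts overall if reliable reception is needed for every bit. A small simulation of this baseline (the model only; not the paper's protocols):

```python
import random

def noisy_receive(bit, eps, rng):
    """Each listener receives the broadcast bit flipped with probability eps."""
    return bit ^ (rng.random() < eps)

def majority_decode(bit, eps, k, rng):
    """Naive fault tolerance: broadcast k times, majority-vote.
    Error decays exponentially in k, but costs k broadcasts per bit --
    exactly the log-factor blowup the paper's linear protocols avoid."""
    votes = sum(noisy_receive(bit, eps, rng) for _ in range(k))
    return int(votes > k / 2)

rng = random.Random(0)
# with eps = 0.2 and k = 15, a decoding error needs >= 8 of 15 flips,
# which happens with probability well under 1%
errs = sum(majority_decode(1, 0.2, 15, rng) != 1 for _ in range(10000))
```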

Proceedings ArticleDOI
25 Oct 2004
TL;DR: In this article, the authors propose a new primitive called private inference control (PIC), which is a means for the server to provide inference control without learning what information is being retrieved.
Abstract: Access control can be used to ensure that database queries pertaining to sensitive information are not answered. This is not enough to prevent users from learning sensitive information, though, because users can combine non-sensitive information to discover something sensitive. Inference control prevents users from obtaining sensitive information via such "inference channels"; however, existing inference control techniques are not private - that is, they require the server to learn what queries the user is making in order to deny inference-enabling queries. We propose a new primitive - private inference control (PIC) - which is a means for the server to provide inference control without learning what information is being retrieved. PIC is a generalization of private and symmetrically-private information retrieval (PIR/SPIR). While it is straightforward to implement access control using PIR (simply omit sensitive information from the database), it is nontrivial to implement inference control efficiently. We measure the efficiency of a PIC protocol in terms of its communication complexity, its round complexity, and the work the server performs per query. Under existing cryptographic assumptions, we give a PIC scheme which is simultaneously optimal, up to logarithmic factors, in the work the server performs per query, the total communication complexity, and the number of rounds of interaction. We also present a scheme requiring more communication but sufficient storage of state by the server to facilitate private user revocation. Finally, we present a generic reduction which shows that one can focus on designing PIC schemes for which the inference channels take a particularly simple threshold form.
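To ground the PIR primitive that PIC generalizes, here is the textbook two-server information-theoretic PIR scheme: the client sends a random index set S to one server and S with index i toggled to the other; XORing the two one-bit answers yields bit i, and neither server alone learns anything about i. This illustrates PIR only, not the PIC construction of the paper:

```python
import random

def pir_query(db_len, i, rng):
    """Client side: a uniformly random subset S for server 1,
    and S with index i toggled for server 2."""
    s1 = {j for j in range(db_len) if rng.random() < 0.5}
    s2 = s1 ^ {i}  # symmetric difference flips membership of i only
    return s1, s2

def pir_answer(db, s):
    """Server side: XOR together the selected database bits."""
    a = 0
    for j in s:
        a ^= db[j]
    return a

def pir_retrieve(db, i, rng):
    """The two answers differ exactly in the contribution of db[i]."""
    s1, s2 = pir_query(len(db), i, rng)
    return pir_answer(db, s1) ^ pir_answer(db, s2)
```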

Journal Article
TL;DR: In this article, it was shown that the question of whether the communication complexity of unconditionally secure computation depends on the computational complexity of the function being computed is closely related to the problem of obtaining efficient protocols for (information-theoretic) private information retrieval, and hence also to constructing short locally-decodable error-correcting codes.
Abstract: We revisit the following open problem in information-theoretic cryptography: Does the communication complexity of unconditionally secure computation depend on the computational complexity of the function being computed? For instance, can computationally unbounded players compute an arbitrary function of their inputs with polynomial communication complexity and a linear threshold of unconditional privacy? Can this be done using a constant number of communication rounds? We provide an explanation for the difficulty of resolving these questions by showing that they are closely related to the problem of obtaining efficient protocols for (information-theoretic) private information retrieval and hence also to the problem of constructing short locally-decodable error-correcting codes. The latter is currently considered to be among the most intriguing open problems in complexity theory.
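The PIR/locally-decodable-code connection can be made concrete with the Hadamard code: the codeword lists the parity of x against every mask, so any bit of x is recoverable from just two codeword positions. The code has exponential length, which is exactly why "short" locally-decodable codes are the hard open problem the abstract refers to. A sketch:

```python
import random

def hadamard_encode(x):
    """Hadamard code: one parity <x, a> for every mask a in {0,1}^n.
    Length 2^n -- exponential, hence the open problem of short LDCs."""
    n = len(x)
    return [sum(x[j] & ((a >> j) & 1) for j in range(n)) % 2
            for a in range(1 << n)]

def local_decode(code, n, i, rng):
    """Recover x[i] from two codeword queries at random positions:
    by linearity of parities, <x, a> xor <x, a xor e_i> = x[i]."""
    a = rng.randrange(1 << n)
    return code[a] ^ code[a ^ (1 << i)]
```

Splitting the two queries between two non-colluding servers gives a PIR scheme: each server sees a single uniformly random mask, hence learns nothing about i.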

Proceedings ArticleDOI
15 Nov 2004
TL;DR: An optimality criterion is given and the convergence of the algorithm for deterministic environments is proved and variable and hierarchical communication strategies are introduced which considerably reduce the number of communications.
Abstract: We present a new algorithm for cooperative reinforcement learning in multiagent systems. We consider autonomous and independently learning agents, and we seek to obtain an optimal solution for the team as a whole while keeping the learning as decentralized as possible. Coordination between agents occurs through communication, namely the mutual notification algorithm. We define the learning problem as a decentralized process using the MDP formalism. We then give an optimality criterion and prove the convergence of the algorithm for deterministic environments. We introduce variable and hierarchical communication strategies which considerably reduce the number of communications. Finally, we study the convergence properties and communication overhead on a small example.
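The flavor of "communicate only when necessary" can be sketched with two independent Q-learners on a one-shot cooperative matrix game, where an agent sends a message only when its greedy action changes. This is an illustration in the spirit of mutual notification, not the paper's exact algorithm; all names and parameters are mine:

```python
import random

def decentralized_q(payoff, episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Two independent eps-greedy Q-learners sharing a reward
    payoff[a0][a1].  A 'notification' is counted only when an agent's
    greedy action changes, so communication is far rarer than learning
    steps -- the effect the variable strategies of the paper amplify."""
    rng = random.Random(seed)
    n = len(payoff)
    q = [[0.0] * n, [0.0] * n]
    greedy = [0, 0]
    notifications = 0
    for _ in range(episodes):
        acts = [g if rng.random() > eps else rng.randrange(n) for g in greedy]
        r = payoff[acts[0]][acts[1]]              # shared team reward
        for k in (0, 1):
            q[k][acts[k]] += alpha * (r - q[k][acts[k]])
            new_greedy = max(range(n), key=q[k].__getitem__)
            if new_greedy != greedy[k]:
                greedy[k] = new_greedy
                notifications += 1                # one notification message
    return greedy, notifications
```

On a game where action 0 dominates for the first agent (rewards 10/8 versus 2/0), learning locks in quickly and notifications stay a tiny fraction of the episode count.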

Journal ArticleDOI
TL;DR: It is shown that the privacy loss in computing a function can be decreased exponentially by using quantum protocols, while the class of privately computable functions (i.e., those with privacy loss 0) is not enlarged by quantum protocols.
Abstract: This paper studies privacy and secure function evaluation in communication complexity. The focus is on quantum versions of the model and on protocols with only approximate privacy against honest players. We show that the privacy loss (the minimum divulged information) in computing a function can be decreased exponentially by using quantum protocols, while the class of privately computable functions (i.e., those with privacy loss 0) is not enlarged by quantum protocols. Quantum communication combined with small information leakage on the other hand makes certain functions computable (almost) privately which are not computable using either quantum communication without leakage or classical communication with leakage. We also give an example of an exponential reduction of the communication complexity of a function by allowing a privacy loss of o(1) instead of privacy loss 0.