Showing papers on "Communication complexity published in 1998"


Proceedings ArticleDOI
23 May 1998
TL;DR: In this paper, the authors introduce a model of symmetrically private information retrieval (SPIR), where the privacy of the data, as well as the privacy of the user, is guaranteed.
Abstract: Private information retrieval (PIR) schemes allow a user to retrieve the ith bit of an n-bit data string x, replicated in k ≥ 2 databases (in the information-theoretic setting) or in k ≥ 1 databases (in the computational setting), while keeping the value of i private. The main cost measure for such a scheme is its communication complexity. In this paper we introduce a model of symmetrically-private information retrieval (SPIR), where the privacy of the data, as well as the privacy of the user, is guaranteed. That is, in every invocation of a SPIR protocol, the user learns only a single physical bit of x and no other information about the data. Previously known PIR schemes severely fail to meet this goal. We show how to transform PIR schemes into SPIR schemes (with information-theoretic privacy), paying a constant factor in communication complexity. To this end, we introduce and utilize a new cryptographic primitive, called conditional disclosure of secrets, which we believe may be a useful building block for the design of other cryptographic protocols. In particular, we get a k-database SPIR scheme of complexity O(n^{1/(2k-1)}) for every constant k ≥ 2 and an O(log n)-database SPIR scheme of complexity O(log^2 n · log log n). All our schemes require only a single round of interaction, and are resilient to any dishonest behavior of the user. These results also yield the first implementation of a distributed version of (n choose 1)-OT (1-out-of-n oblivious transfer) with information-theoretic security and sublinear communication complexity.

485 citations
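
For background, the transformation above starts from a plain PIR scheme. The classic 2-database information-theoretic PIR protocol fits in a few lines: the user sends each replicated database a query that is individually uniformly random, and XORs the two answers. The sketch below is only this textbook baseline (with linear communication), not the sublinear schemes of the paper; note that the user here also learns a random parity of x, exactly the kind of extra information the SPIR model forbids.

```python
import secrets

def pir_query(n, i):
    """User side: two queries, each individually a uniformly random subset of [n]."""
    q1 = [secrets.randbits(1) for _ in range(n)]
    q2 = q1.copy()
    q2[i] ^= 1                      # the queries differ only at the target index i
    return q1, q2

def pir_answer(x, query):
    """Database side: XOR of the selected bits of the replicated data string x."""
    return sum(b for b, sel in zip(x, query) if sel) % 2

# toy run: each database sees a random-looking query, yet the user recovers x[i]
x = [1, 0, 1, 1, 0, 0, 1, 0]
i = 5
q1, q2 = pir_query(len(x), i)
assert pir_answer(x, q1) ^ pir_answer(x, q2) == x[i]
```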


Posted Content
TL;DR: In this paper, a simple and general simulation technique that transforms any black-box quantum algorithm (a la Grover's database search algorithm) to a quantum communication protocol for a related problem, in a way that fully exploits the quantum parallelism is presented.
Abstract: We present a simple and general simulation technique that transforms any black-box quantum algorithm (a la Grover's database search algorithm) into a quantum communication protocol for a related problem, in a way that fully exploits the quantum parallelism. This allows us to obtain new positive and negative results. The positive results are novel quantum communication protocols that are built from nontrivial quantum algorithms via this simulation. These protocols, combined with (old and new) classical lower bounds, are shown to provide the first asymptotic separation results between the quantum and classical (probabilistic) two-party communication complexity models. In particular, we obtain a quadratic separation for the bounded-error model and an exponential separation for the zero-error model. The negative results transform known quantum communication lower bounds into computational lower bounds in the black-box model. In particular, we show that the quadratic speed-up achieved by Grover for the OR function cannot be obtained for the PARITY or MAJORITY functions in the bounded-error model, nor for the OR function itself in the exact case. This dichotomy naturally suggests a study of bounded-depth predicates (i.e. those in the polynomial hierarchy) between OR and MAJORITY. We present black-box algorithms that achieve a near-quadratic speed-up for all such predicates.

391 citations


Proceedings ArticleDOI
Klaus Becker, Uta Wille
01 Nov 1998
TL;DR: For every measure of protocol complexity, it is shown that the corresponding bound is realistic for Diffie-Hellman-based protocols by referring to or introducing protocols that match the bound or exceed it by only one.
Abstract: Communication complexity has always been an important issue when designing group key distribution systems. This paper systematically studies what can be achieved for the most common measures of protocol complexity. Lower bounds for the total number of messages, the total number of exchanges, and the number of necessary rounds are established, whereby models that allow broadcasting have to be distinguished from those that do not. For every measure of protocol complexity, we furthermore show that the corresponding bound is realistic for Diffie-Hellman-based protocols by referring to or introducing protocols that match the bound or exceed it by only one.

305 citations
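
To make the complexity measures above concrete, the toy sketch below runs a hypercube-style group Diffie-Hellman key agreement for four parties: two rounds, one message per party per round, so 2^d parties need d rounds. It is a minimal sketch with illustrative, insecure parameters (a Mersenne-prime modulus, no authentication), not one of the protocols analysed in the paper.

```python
import secrets

P = 2**127 - 1      # illustrative prime modulus (not a secure choice)
G = 3               # illustrative base

def dh(secret, peer_public):
    """One Diffie-Hellman step: combine own secret exponent with the peer's public value."""
    return pow(peer_public, secret, P)

# Round 1: pairs (0,1) and (2,3) each agree on a sub-key.
a = [secrets.randbelow(P - 2) + 1 for _ in range(4)]   # private exponents
pub = [pow(G, s, P) for s in a]
k01 = dh(a[0], pub[1])          # parties 0 and 1 both hold g^(a0*a1)
k23 = dh(a[2], pub[3])          # parties 2 and 3 both hold g^(a2*a3)

# Round 2: pairs (0,2) and (1,3) repeat the exchange using the sub-keys as exponents.
key_side_01 = dh(k01, pow(G, k23, P))
key_side_23 = dh(k23, pow(G, k01, P))
assert key_side_01 == key_side_23   # all four parties share g^(k01*k23)
```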


Journal ArticleDOI
TL;DR: This paper considers two-party communication complexity, the “asymmetric case”, when the input sizes of the two players differ significantly, and derives two generally applicable methods of proving lower bounds and obtain several applications.

216 citations


Journal ArticleDOI
TL;DR: This work presents a methodology for the design of scheduling algorithms that provide the same end-to-end delay bound as that of WFQ and bounded unfairness without the complexity of GPS emulation, and produces a class of algorithms, called rate-proportional servers (RPSs).
Abstract: Generalized processor sharing (GPS) has been considered an ideal scheduling discipline based on its end-to-end delay bounds and fairness properties. Until recently, emulation of GPS in a packet server has been regarded as the ideal means of designing a packet-level scheduling algorithm to obtain low delay bounds and bounded unfairness. Strict emulation of GPS, as required in the weighted fair queueing (WFQ) scheduler, however, incurs a time complexity of O(N), where N is the number of sessions sharing the link. Efforts in the past to simplify the implementation of WFQ, such as self-clocked fair queueing (SCFQ), have resulted in degrading its isolation properties, thus affecting the delay bound. We present a methodology for the design of scheduling algorithms that provide the same end-to-end delay bound as that of WFQ and bounded unfairness, without the complexity of GPS emulation. The resulting class of algorithms, called rate-proportional servers (RPSs), is based on isolating the scheduler properties that give rise to ideal delay and fairness behavior. Network designers can use this methodology to construct efficient fair-queueing algorithms, balancing fairness against implementation complexity.

203 citations
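
For readers who have not seen virtual-time schedulers, the sketch below implements the SCFQ-style finish-tag rule mentioned above: each packet gets a virtual finish time proportional to its length divided by its session's weight, and packets are served in tag order. This is the simplified baseline whose weaker delay bound motivates rate-proportional servers; it is an illustration with assumed names, not the RPS algorithms of the paper.

```python
import heapq

class SelfClockedFairQueue:
    """Minimal SCFQ-style scheduler: serve packets in order of virtual finish tags."""
    def __init__(self, weights):
        self.weights = weights                        # session -> service weight
        self.last_finish = {s: 0.0 for s in weights}  # last finish tag per session
        self.vtime = 0.0                              # virtual time = tag of packet in service
        self.heap, self.seq = [], 0

    def enqueue(self, session, length):
        start = max(self.vtime, self.last_finish[session])
        finish = start + length / self.weights[session]
        self.last_finish[session] = finish
        heapq.heappush(self.heap, (finish, self.seq, session))
        self.seq += 1

    def dequeue(self):
        finish, _, session = heapq.heappop(self.heap)
        self.vtime = finish                           # SCFQ: virtual time tracks served tags
        return session

# session A has twice B's weight, so it receives roughly two thirds of the service
q = SelfClockedFairQueue({"A": 2.0, "B": 1.0})
for _ in range(3):
    q.enqueue("A", 100)
    q.enqueue("B", 100)
print([q.dequeue() for _ in range(6)])   # e.g. ['A', 'B', 'A', 'A', 'B', 'B']
```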


Book ChapterDOI
24 Sep 1998
TL;DR: The Arrow distributed directory protocol is devised, a scalable and local mechanism for ensuring mutually exclusive access to mobile objects, and has communication complexity optimal within a factor of (1+MST-stretch(G))/2, where MST-stretch(G) is the “minimum spanning tree stretch” of the underlying network.
Abstract: Most practical techniques for locating remote objects in a distributed system suffer from problems of scalability and locality of reference. We have devised the Arrow distributed directory protocol, a scalable and local mechanism for ensuring mutually exclusive access to mobile objects. This directory has communication complexity optimal within a factor of (1+MST-stretch(G))/2, where MST-stretch(G) is the “minimum spanning tree stretch” of the underlying network.

137 citations
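
The core of the Arrow protocol is a single pointer per node on a fixed spanning tree: arrows initially point toward the object's owner, and a find request flips every arrow it traverses so that the tree ends up pointing at the new tail of the request queue. The sequential sketch below models only this pointer mechanics (object hand-off and concurrent requests are omitted); the node names and tree are made up for illustration.

```python
class ArrowDirectory:
    """Pointer mechanics of the Arrow directory on a fixed spanning tree (sequential sketch)."""
    def __init__(self, arrow):
        # arrow[v] is v's single outgoing pointer; the node with arrow[v] == v is the queue tail.
        self.arrow = dict(arrow)

    def find(self, requester):
        """Route find(requester) along the arrows, flipping each one; return the old tail."""
        prev, u = requester, requester
        while self.arrow[u] != u:
            nxt = self.arrow[u]
            self.arrow[u] = prev     # point back toward where the request came from
            prev, u = u, nxt
        self.arrow[u] = prev         # the old tail will later forward the object to the requester
        return u

# path network 0 - 1 - 2 - 3 with the object initially at node 0 (all arrows point left)
d = ArrowDirectory({0: 0, 1: 0, 2: 1, 3: 2})
print(d.find(3))   # -> 0: node 0 was the tail; the arrows now lead back toward node 3
print(d.find(1))   # -> 3: node 3 was the tail; node 1 becomes the new tail
```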


Book ChapterDOI
17 Feb 1998
TL;DR: This work considers the communication complexity of the binary inner product function in a variation of the two-party scenario where the parties have an a priori supply of particles in an entangled quantum state and proves linear lower bounds for both exact protocols, as well as for protocols that determine the answer with bounded-error probability.
Abstract: We consider the communication complexity of the binary inner product function in a variation of the two-party scenario where the parties have an a priori supply of particles in an entangled quantum state. We prove linear lower bounds for both exact protocols, as well as for protocols that determine the answer with bounded-error probability. Our proofs employ a novel kind of "quantum" reduction from a quantum information theory problem to the problem of computing the inner product. The communication required for the former problem can then be bounded by an application of Holevo's theorem. We also give a specific example of a probabilistic scenario where entanglement reduces the communication complexity of the inner product function by one bit.

133 citations
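
For reference, the function studied here is the binary inner product, and the result can be summarized as a linear lower bound that entanglement cannot beat (the notation below is ours):

```latex
\mathrm{IP}_n(x,y) \;=\; \bigoplus_{i=1}^{n} x_i \wedge y_i ,
\qquad x, y \in \{0,1\}^n .
```

The paper shows that even with an a priori supply of entangled particles, any protocol computing IP_n exactly, or with bounded error, must communicate Ω(n) bits.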


Book ChapterDOI
24 Sep 1998
TL;DR: A new Leader Election algorithm, with O(n) time complexity and O( n · lg(n)) message transmission complexity, is proposed, which uses a special form of the propagation of information with feedback (PIF) building block tuned to the broadcast media.
Abstract: The paper addresses the problem of solving classic distributed algorithmic problems under the practical model of Broadcast Communication Networks. Our main result is a new Leader Election algorithm, with O(n) time complexity and O(n · lg(n)) message transmission complexity. Our distributed solution uses a special form of the propagation of information with feedback (PIF) building block tuned to the broadcast media, and a special counting and joining approach for the election procedure phase. The latter is required for achieving the linear time.

94 citations


Book ChapterDOI
TL;DR: A simple calculus of agents is proposed that allows implementations of such distributed infrastructure algorithms to be expressed as encodings, or compilations, of the whole calculus into the fragment with only location dependent communication.
Abstract: We study communication primitives for interaction between mobile agents. They can be classified into two groups. At a low level there are location dependent primitives that require a programmer to know the current site of a mobile agent in order to communicate with it. At a high level there are location independent primitives that allow communication with a mobile agent irrespective of its current site and of any migrations. Implementation of these requires delicate distributed infrastructure. We propose a simple calculus of agents that allows implementations of such distributed infrastructure algorithms to be expressed as encodings, or compilations, of the whole calculus into the fragment with only location dependent communication. These encodings give executable descriptions of the algorithms, providing a clean implementation strategy for prototype languages. The calculus is equipped with a precise semantics, providing a solid basis for understanding the algorithms and for reasoning about their correctness and robustness. Two sample infrastructure algorithms are presented as encodings.

82 citations


Proceedings ArticleDOI
08 Nov 1998
TL;DR: An exponential gap is established between quantum and classical sampling complexity, for the set disjointness function, which is the first exponential gap for any task where the classical probabilistic algorithm is allowed to err.
Abstract: Sampling is an important primitive in probabilistic and quantum algorithms. In the spirit of communication complexity, given a function f: X × Y → {0,1} and a probability distribution D over X × Y, we define the sampling complexity of (f, D) as the minimum number of bits Alice and Bob must communicate for Alice to pick x ∈ X and Bob to pick y ∈ Y, as well as a value z, such that the resulting distribution of (x, y, z) is close to the distribution (D, f(D)). In this paper we initiate the study of sampling complexity, in both the classical and quantum models. We give several variants of the definition. We completely characterize some of these tasks, and give upper and lower bounds on others. In particular, this allows us to establish an exponential gap between quantum and classical sampling complexity for the set-disjointness function. This is the first exponential gap for any task where the classical probabilistic algorithm is allowed to err.

66 citations


Proceedings ArticleDOI
30 Mar 1998
TL;DR: Performance results indicate that the additional functionality of the algorithm comes at the cost of 30% longer response times within the authors' test environment for distributed execution when compared with an unprioritized algorithm, suggesting that the algorithm should be used when strict priority ordering is required.
Abstract: A number of solutions have been proposed for the problem of mutual exclusion in distributed systems. Some of these approaches have since been extended to a prioritized environment suitable for real-time applications, but they impose a higher message-passing overhead than our approach. We present a new protocol for prioritized mutual exclusion in a distributed environment. Our approach uses a token-based model working on a logical tree structure, which is dynamically modified. In addition, we utilize a set of local queues whose union would resemble a single global queue. Furthermore, our algorithm is designed for out-of-order message delivery, handles messages asynchronously and supports multiple requests from one node for multi-threaded nodes. The prioritized algorithm has an average overhead of O(log(n)) messages per request for mutual exclusion, with a worst-case overhead of O(n), where n represents the number of nodes in the system. Thus, our prioritized algorithm matches the message complexity of the best non-prioritized algorithms, while, to our knowledge, previous prioritized algorithms have a higher message complexity. Our concept of local queues can be incorporated into arbitrary token-based protocols, with or without priority support, to reduce the number of messages. Performance results indicate that the additional functionality of our algorithm comes at the cost of 30% longer response times within our test environment for distributed execution when compared with an unprioritized algorithm. This result suggests that the algorithm should be used when strict priority ordering is required.

Proceedings ArticleDOI
10 Aug 1998
TL;DR: An algorithm to determine the message transmission delay upper bound to predict worst-case message delay is developed and results show that the delay upper bounds calculated using the proposed algorithm are very close to actual average message transmission delays for messages with high priorities.
Abstract: In this paper we propose a real-time communication scheme that can be used in general point-to-point real-time multicomputer systems with wormhole switching. Real-time communication should satisfy the two requirements of predictability and priority handling. Since traditional wormhole switching does not support priority handling which is essential in real-time computing, flit-level preemption is adopted in our wormhole switching. Also, we develop an algorithm to determine the message transmission delay upper bound to predict worst-case message delay. Simulation results show that the delay upper bounds calculated using the proposed algorithm are very close to actual average message transmission delays for messages with high priorities.

Proceedings ArticleDOI
01 Jul 1998
TL;DR: This work investigates the optimization of the communication subsystem of Time Warp simulators using dynamic message aggregation, where Time Warp messages with the same destination LP, occurring in close temporal proximity are dynamically aggregated and sent as a single physical message.
Abstract: In message-passing environments, the message send time is dominated by overheads that are relatively independent of the message size. Therefore, fine-grained applications (such as Time Warp simulators) suffer high overheads because of frequent communication. We investigate the optimization of the communication subsystem of Time Warp simulators using dynamic message aggregation. Under this scheme, Time Warp messages with the same destination LP that occur in close temporal proximity are dynamically aggregated and sent as a single physical message. Several aggregation strategies that attempt to minimize the communication overhead without harming the progress of the simulation (because of messages being delayed) are developed. The performance of the strategies is evaluated on a network of workstations and on an SMP, using a number of applications that have different communication behavior.
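
The aggregation policy described above fits in a few lines: buffer messages per destination LP and flush either when the buffer reaches a size threshold or when its oldest message has waited too long. The sketch below is a generic illustration of such a policy with made-up names and thresholds, not the specific strategies evaluated in the paper.

```python
import time

class MessageAggregator:
    """Buffer messages per destination; flush on a size threshold or an age timeout."""
    def __init__(self, send_fn, max_bytes=4096, max_age=0.005):
        self.send_fn = send_fn        # send_fn(dest, payload_list): one physical message
        self.max_bytes = max_bytes
        self.max_age = max_age        # seconds the oldest buffered message may wait
        self.buffers = {}             # dest -> [first_enqueue_time, payloads, total_bytes]

    def send(self, dest, payload):
        buf = self.buffers.setdefault(dest, [time.monotonic(), [], 0])
        buf[1].append(payload)
        buf[2] += len(payload)
        if buf[2] >= self.max_bytes:
            self.flush(dest)

    def poll(self):
        """Call periodically: flush buffers whose oldest message exceeded the age limit."""
        now = time.monotonic()
        for dest in [d for d, b in self.buffers.items() if now - b[0] >= self.max_age]:
            self.flush(dest)

    def flush(self, dest):
        buf = self.buffers.pop(dest, None)
        if buf and buf[1]:
            self.send_fn(dest, buf[1])   # one physical message carrying all buffered payloads

# three small event messages to LP 7 leave as a single physical message
agg = MessageAggregator(lambda dest, msgs: print(f"to LP {dest}: {len(msgs)} events, one send"))
for ev in (b"e1", b"e2", b"e3"):
    agg.send(7, ev)
agg.flush(7)
```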

Proceedings ArticleDOI
23 May 1998
TL;DR: The multiparty communication model of Chandra, Furst, and Lipton (1983) is generalized to functions with b-bit output, and new families of explicit boolean functions for which Ω(n/c^k) bits of communication are required to find the “missing bit” are constructed.
Abstract: We generalize the multiparty communication model of Chandra, Furst, and Lipton (1983) to functions with b-bit output (b = 1 in the CFL model). We allow the players to receive up to b − 1 bits of information from an all-powerful benevolent Helper who can see all the input. Extending results of Babai, Nisan, and Szegedy (1992) to this model, we construct families of explicit functions for which Ω(n/c^k) bits of communication are required to find the “missing bit,” where n is the length of each player’s input and k is the number of players. As a consequence we settle the problem of separating the one-way vs. multiround communication complexities (in the CFL sense) for k ≤ (1 − ε) log n players, extending a result of Nisan and Wigderson (1991) who demonstrated this separation for k = 3 players. As a byproduct we obtain Ω(n/c^k) lower bounds for the multiparty complexity (in the CFL sense) of new families of explicit boolean functions (not derivable from BNS).

Proceedings ArticleDOI
28 Jul 1998
TL;DR: A uniform framework for developing adaptive communication schedules for various collective communication patterns is presented, developed at run-time, based on network performance information obtained from a directory service.
Abstract: Heterogeneous network-based systems are emerging as attractive computing platforms for HPC applications. We discuss fundamental research issues that must be addressed to enable network-aware communication at the application level. We present a uniform framework for developing adaptive communication schedules for various collective communication patterns. Schedules are developed at run-time, based on network performance information obtained from a directory service. We illustrate our framework by developing communication schedules for total exchange. Our first algorithm develops a schedule by computing a series of matchings in a bipartite graph. We also present an O(P^3) heuristic algorithm, whose completion time is within twice the optimal. This algorithm is based on the open shop scheduling problem. Simulation results show performance improvements of a factor of 5 over well-known homogeneous scheduling techniques.
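
As a concrete baseline for the schedule-as-matchings idea, the homogeneous total-exchange schedule below runs P−1 rounds in which processor p sends to (p + r) mod P; every round is a permutation of senders to receivers, so there are no receive conflicts. This is only the uniform starting point, shown for illustration; the paper's run-time heuristics additionally weight the matchings by measured network performance.

```python
def total_exchange_schedule(P):
    """Return P-1 rounds of (sender, receiver) pairs covering all ordered pairs p != q."""
    return [[(p, (p + r) % P) for p in range(P)] for r in range(1, P)]

# 4 processors: 3 rounds, each a conflict-free permutation
for r, pairs in enumerate(total_exchange_schedule(4), start=1):
    print(f"round {r}: {pairs}")
```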

Journal ArticleDOI
TL;DR: The main objective is to extend the communication complexity approach of [4, 5] to a wider class of proof systems and obtain an effective interpolation in the form of a protocol of small real communication complexity.
Abstract: We introduce a notion of a real game (a generalisation of the Karchmer-Wigderson game, cf. [3]) and of real communication complexity, and relate this complexity to the size of monotone real formulas and circuits. We give an exponential lower bound for tree-like monotone protocols (defined in [4, Definition 2.2]) of small real communication complexity solving the monotone communication complexity problem associated with the bipartite perfect matching problem. This work is motivated by research on interpolation theorems for propositional logic (in particular, by a problem posed in [5, Section 8]). Our main objective is to extend the communication complexity approach of [4, 5] to a wider class of proof systems. In this direction we obtain an effective interpolation in the form of a protocol of small real communication complexity. Together with the above-mentioned lower bound for tree-like protocols, this yields as a corollary a lower bound on the number of steps for particular semantic derivations of Hall's theorem (these include tree-like cutting planes proofs, for which an exponential lower bound was demonstrated in [2]).

Proceedings ArticleDOI
12 Oct 1998
TL;DR: A novel distributed acquisition algorithm, which has similar message complexity as the search approach and similar acquisition delay as the update approach, and which outperforms all other approaches in terms of call blocking probability under uniform as well as non-uniform traffic.
Abstract: There are two approaches to designing distributed channel allocation algorithms: search and update. The update approach has a shorter acquisition delay and a lower call blocking rate, but higher message complexity. On the other hand, the search approach has lower message complexity, but a longer acquisition delay and a higher call blocking rate. In this paper we first propose a novel distributed acquisition algorithm, which has message complexity similar to the search approach and acquisition delay similar to the update approach. Then, we present a channel selection algorithm and integrate it into our distributed acquisition algorithm. By a rigorous analysis in terms of delay and message complexity, we show that our channel acquisition algorithm performs significantly better than the update approach (Dong and Lai 1997) and the search approach (Prakash et al. 1995). Detailed simulation experiments are carried out in order to evaluate our proposed methodology. The performance of our algorithm is compared with those of the geometric strategy (Baiocchi et al. 1995), the search approach, and the update approach. Simulation results show that our algorithm outperforms all other approaches in terms of call blocking probability under uniform as well as non-uniform traffic.

Journal ArticleDOI
TL;DR: This work introduces a new class of parallel algorithms for the exact computation of systems with pairwise mutual interactions of n elements, so-called n²-problems, and can reduce the inter-processor communication complexity to O(n·√p).
Abstract: We introduce a new class of parallel algorithms for the exact computation of systems with pairwise mutual interactions of n elements, so-called n²-problems. Hitherto, practical conventional parallelization strategies could achieve a complexity of O(n·p) with respect to the inter-processor communication, p being the number of processors. Our new approach can reduce the inter-processor communication complexity to O(n·√p). In the framework of additive number theory, the determination of the optimal communication pattern can be formulated as an h-range minimization problem that can be solved numerically. Based on a complexity model, the scaling behavior of the new algorithm is numerically tested on the Connection Machine CM5. As a real-life example, we have implemented a fast code for globular cluster n-body simulations, a generic n²-problem, on the CRAY T3D, with striking success. Our parallel method promises to be useful in various scientific and engineering fields such as polymer chain computations, protein folding, signal processing, and, in particular, parallel level-3 BLAS.

Book ChapterDOI
01 Sep 1998
TL;DR: Reliability is achieved by computationally efficient MDS array codes that eliminate single points of failure in the systems, thus providing more reliability and flexibility to the systems.
Abstract: As the need for data explodes with the passage of time and the increase of computing power, data storage becomes more and more important. Distributed storage, as distributed computing before it, is coming of age as a good solution to make systems highly available, i.e., highly scalable, reliable and efficient. The focus of this thesis is how to achieve data reliability and efficiency in distributed storage systems. This thesis consists of two parts. The first part deals with the reliability of distributed storage systems. Reliability is achieved by computationally efficient MDS array codes that eliminate single points of failure in the systems, thus providing more reliability and flexibility to the systems. Two new classes of MDS array codes are presented: the X-Code and the B-Code. The encoding operations of both codes are optimal, i.e., their update complexity achieves the theoretical lower bound. They distribute parity bits over all columns rather than concentrating them on some parity columns. The X-Code has a very simple geometrical structure, and the B-Code is related to a three-decade-old graph theory problem, perfect one-factorization (or P1F) of a complete graph: solutions to one problem lead directly to solutions to the other. The second part of the thesis deals with the efficiency of distributed storage systems. While it is intuitive that redundancy can bring reliability to a system, this thesis gives another direction: using redundancy actively to improve performance (efficiency) of distributed data systems. General analytical results on the performance of (n, k) systems are given, as are two schemes to improve the performance of general data server systems, namely the data distribution and the data acquisition schemes. In addition, a novel deterministic voting scheme is proposed, which generalizes all known simple deterministic voting algorithms. It can be tuned to various application environments with different error rates to drastically reduce average communication complexity. These schemes are based on error-correcting codes, particularly the MDS array codes developed in the first part. Finally, some research problems related to storage systems are proposed as future directions.
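
The X-Code and B-Code have specific two-erasure-correcting layouts, but the basic operation they build on, rebuilding a lost column from parity, is easy to see in the single-erasure (RAID-5-like) special case. The sketch below shows only that simplified case, as an illustration of what "parity distributed over columns" buys; it is not the codes from the thesis.

```python
from functools import reduce

def add_parity(columns):
    """Append one parity column: the bytewise XOR of all data columns."""
    parity = bytes(reduce(lambda a, b: a ^ b, row) for row in zip(*columns))
    return columns + [parity]

def recover(columns):
    """Rebuild the single missing column (marked None) as the XOR of the survivors."""
    survivors = [c for c in columns if c is not None]
    missing = bytes(reduce(lambda a, b: a ^ b, row) for row in zip(*survivors))
    idx = columns.index(None)
    return columns[:idx] + [missing] + columns[idx + 1:]

# three data columns plus parity: any single lost column can be rebuilt
stored = add_parity([b"abcd", b"efgh", b"ijkl"])
damaged = stored.copy()
damaged[1] = None
assert recover(damaged) == stored
```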

Journal ArticleDOI
TL;DR: This paper presents a distributed algorithm for finding the articulation points in an n node communication network represented by a connected undirected graph and shows that the algorithm requires O( n) messages and O(n) units of time and is optimal in communication complexity to within a constant factor.

Proceedings ArticleDOI
08 Nov 1998
TL;DR: It is demonstrated that there are scenarios in which a high-speed link from the server to the client can be used to greatly reduce the number of bits sent from the client to the server across a slower link.
Abstract: In this paper we examine the problem of sending an n-bit data item from a client to a server across an asymmetric communication channel. We demonstrate that there are scenarios in which a high-speed link from the server to the client can be used to greatly reduce the number of bits sent from the client to the server across a slower link. In particular, we assume that the data item is drawn from a probability distribution D that is known to the server but not to the client. We present several protocols in which the expected number of bits transmitted by the server and client are O(n) and O(H(D)+1), respectively, where H(D) is the binary entropy of D (and can range from 0 to n). These protocols are within a small constant factor of optimal in terms of the number of bits sent by the client. The expected number of rounds of communication between the server and client in the simplest of our protocols is O(H(D)). We also give a protocol for which the expected number of rounds is only O(1), but which requires more computational effort on the part of the server. A third technique provides a tradeoff between the computational effort and the number of rounds.
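
The client-side bound above is stated in terms of the binary entropy of the distribution D, which for reference is

```latex
H(D) \;=\; -\sum_{x \in \{0,1\}^n} D(x)\,\log_2 D(x), \qquad 0 \le H(D) \le n .
```

As a sanity check on the O(H(D)+1) figure: if D is uniform over a support of size 2^h, then H(D) = h, and an index into that support takes about h bits; the protocols in the paper let the server, which knows D, steer the client toward such a short description at the price of O(n) expected bits in the reverse direction.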

Proceedings ArticleDOI
26 May 1998
TL;DR: A delay-optimal quorum-based mutual exclusion algorithm which reduces the synchronization delay to T and still has the low message complexity O(K) (K is the size of the quorum, which can be as low as log N).
Abstract: The performance of a mutual exclusion algorithm is measured by the number of messages exchanged per critical section execution and the delay between successive executions of the critical section. There is a message complexity and synchronization delay trade-off in mutual exclusion algorithms. Lamport's (1978) algorithm and Ricart and Agrawala's (1981) algorithm both have a synchronization delay of T, but their message complexity is O(N). Maekawa's (1985) algorithm reduces message complexity to O(√N); however, it increases the synchronization delay to 2T. After Maekawa's algorithm, many quorum-based mutual exclusion algorithms have been proposed to reduce message complexity or increase the resiliency to site and communication link failures. Since these algorithms are Maekawa-type algorithms, they also suffer from the long synchronization delay 2T. We propose a delay-optimal quorum-based mutual exclusion algorithm which reduces the synchronization delay to T and still has the low message complexity O(K) (K is the size of the quorum, which can be as low as log N). A correctness proof and detailed performance analysis are provided.
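
For context on where the O(K) figure comes from, the sketch below constructs the classic Maekawa-style grid quorums of size 2√N − 1, the construction behind the O(√N) message complexity cited above; tree-structured quorums can shrink K further, down to about log N in the best case. This is background on quorum systems in general, not the delay-optimal algorithm of the paper.

```python
import math

def grid_quorum(node, N):
    """Maekawa-style quorum: the full row plus the full column of `node` in a √N x √N grid."""
    side = math.isqrt(N)
    assert side * side == N, "this illustration assumes N is a perfect square"
    row, col = divmod(node, side)
    return {row * side + c for c in range(side)} | {r * side + col for r in range(side)}

# with N = 16 nodes every quorum has 7 members and any two quorums intersect
N = 16
quorums = [grid_quorum(v, N) for v in range(N)]
assert all(len(q) == 2 * math.isqrt(N) - 1 for q in quorums)
assert all(q1 & q2 for q1 in quorums for q2 in quorums)
```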

Journal ArticleDOI
TL;DR: A deterministic majority voting algorithm for NMR systems that uses error-correcting codes to drastically reduce the average case communication complexity and shows that the efficiency of the voting algorithm can be improved by choosing the parameters of the error-Correcting code to match the probability of the computational faults.
Abstract: Distributed voting is an important problem in reliable computing. In an N Modular Redundant (NMR) system, the N computational modules execute identical tasks and need to periodically vote on their current states. In this paper, we propose a deterministic majority voting algorithm for NMR systems. Our voting algorithm uses error-correcting codes to drastically reduce the average-case communication complexity. In particular, we show that the efficiency of our voting algorithm can be improved by choosing the parameters of the error-correcting code to match the probability of the computational faults. For example, consider an NMR system with 31 modules, each with a state of m bits, where each module has an independent computational error probability of 10^-3. In this NMR system, our algorithm can reduce the average-case communication complexity to approximately 1.0825m, compared with the communication complexity of 31m of the naive algorithm in which every module broadcasts its local result to all other modules. We have also implemented the voting algorithm over a network of workstations. The experimental performance results match the theoretical predictions well.
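
The paper's savings come from MDS error-correcting codes; as a much simpler stand-in that conveys the same average-case idea, pay almost nothing when all modules agree (the common case) and fall back to a full exchange only on disagreement, one can vote on short fingerprints first. The sketch below is that simplified stand-in with made-up parameters, not the coding-theoretic algorithm of the paper.

```python
import hashlib

def vote(states, tag_bytes=8):
    """Majority vote over module states; exchange full states only if fingerprints disagree.

    Returns (winner, bits_communicated) as a rough communication count.
    """
    n = len(states)
    tags = [hashlib.sha256(s).digest()[:tag_bytes] for s in states]
    bits = n * tag_bytes * 8                  # phase 1: every module broadcasts a fingerprint
    if len(set(tags)) == 1:
        return states[0], bits                # unanimous: no full states are exchanged
    bits += sum(len(s) * 8 for s in states)   # phase 2 (rare): broadcast full states
    counts = {}
    for s in states:
        counts[s] = counts.get(s, 0) + 1
    return max(counts, key=counts.get), bits

good, bad = b"x" * 1000, b"y" * 1000
print(vote([good] * 31)[1], "bits in a fault-free round")
print(vote([good] * 30 + [bad])[1], "bits when one module is faulty")
```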

Journal ArticleDOI
TL;DR: A protocol is presented that solves the end-to-end problem with logarithmic space and polynomial communication at the same time, an exponential memory improvement over all previous polynomial-communication solutions.
Abstract: Communication between processors is the essence of distributed computing: clearly, without communication, distributed computation is impossible. However, as networks become larger and larger, the frequency of link failures increases. The end-to-end communication problem asks how to efficiently carry out fault-free communication between two processors over a network, in spite of such frequent link failures. The sole minimum assumption is that the two processors that are trying to communicate are not permanently disconnected (i.e., the communication should proceed even when there does not (ever) simultaneously exist an operational path between the two processors that are trying to communicate). We present a protocol that solves the end-to-end problem with logarithmic space and polynomial communication at the same time. This is an exponential memory improvement over all previous polynomial-communication solutions. That is, all previous polynomial-communication solutions needed at least a linear (in n, the size of the network) amount of memory per link. Our protocol transfers packets over the network, maintains a simple-to-compute O(log n)-bit potential function at each link in order to perform routing, and uses a novel technique of packet canceling which allows us to keep only one packet per link. The computations of both our potential function and our packet-canceling policy are totally local in nature.

Proceedings ArticleDOI
23 Jun 1998
TL;DR: Non-redundant fault-tolerant communication algorithms for faulty k-ary n-cubes are introduced, which include: one-to-all broadcasting, all-To- all broadcasting, one- to-all personalized communication and all-to the all personalized communication.
Abstract: Non-redundant fault-tolerant communication algorithms for faulty k-ary n-cubes are introduced. These include: one-to-all broadcasting, all-to-all broadcasting, one-to-all personalized communication and all-to-all personalized communication. Each of these algorithms can tolerate up to (2n-2) node failures provided that k>(2n-2) and k>3. The communication complexities of the proposed algorithms are derived when cut-through or store-and-forward packet routing is used. The proposed algorithms are close to optimal in terms of communication time.

Proceedings ArticleDOI
17 Dec 1998
TL;DR: This work discusses a closely related problem which is to find a delay-bounded cost-optimal path (DBCP) between a specified source and destination node and presents an exact solution to the DBCP which is based on the branch-and-bound paradigm.
Abstract: As the amount of data transmitted over a network increases and high-bandwidth applications requiring point-to-multipoint communications, like videoconferencing, distributed database management or cooperative work, become widespread, it becomes very important to optimize network resources. One such optimization is multicast tree generation. The problem of generating a minimum cost multicast tree given the network topology and the costs associated with the connecting links can be modelled as a Steiner tree problem, which is NP-complete. Much work has been done in the direction of obtaining near-optimal multicast trees when the objective is only to minimize the cost. However, many real-time applications such as videoconferencing require that data be sent within prespecified delay limits in order to avoid problems such as anachronism and lack of synchronization. We deal with the delay-bounded cost-optimal multicast tree (DBCMT) generation problem. Specifically, we discuss a closely related problem, which is to find a delay-bounded cost-optimal path (DBCP) between a specified source and destination node. Such a path can be used as a starting point to solve the DBCMT. We present an exact solution to the DBCP problem which is based on the branch-and-bound paradigm. We also propose a heuristic technique to solve the DBCP using the principle of evolutionary computing. The results obtained using the two techniques are compared for several large networks.
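
Since the exact method above is branch-and-bound, a minimal sketch of a delay-bounded cost-optimal path (DBCP) search is shown below: a depth-first enumeration of simple paths that prunes any partial path exceeding the delay bound or the best cost found so far. It is a generic illustration of the paradigm with a made-up graph, not the authors' algorithm or their evolutionary heuristic.

```python
def delay_bounded_cheapest_path(graph, src, dst, delay_bound):
    """graph: {u: [(v, cost, delay), ...]}. Returns (best_cost, path) or (None, None)."""
    best = {"cost": float("inf"), "path": None}

    def dfs(u, cost, delay, path):
        if delay > delay_bound or cost >= best["cost"]:
            return                                    # prune: infeasible or already dominated
        if u == dst:
            best["cost"], best["path"] = cost, path
            return
        for v, c, d in graph.get(u, []):
            if v not in path:                         # keep paths simple
                dfs(v, cost + c, delay + d, path + [v])

    dfs(src, 0, 0, [src])
    return (best["cost"], best["path"]) if best["path"] else (None, None)

# the cheapest path 0-1-3 violates the delay bound of 10, so 0-2-3 is returned instead
g = {0: [(1, 1, 9), (2, 3, 2)], 1: [(3, 1, 9)], 2: [(3, 3, 2)]}
print(delay_bounded_cheapest_path(g, 0, 3, delay_bound=10))   # -> (6, [0, 2, 3])
```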

Journal ArticleDOI
TL;DR: It is shown that the problem of determining the minimum cost necessary to perform fault-tolerant gossiping among a given set of participants is NP-hard and approximate (with respect to the cost) fault-Toleran gossiping algorithms are given.

Journal ArticleDOI
TL;DR: The notion of the communication complexity unit based on the smallest event communication group is introduced, and a generalized software complexity metric for measuring the maintenance complexity of distributed programs is presented.
Abstract: With the increasing complexity of distributed programs, we must do our best to establish mechanisms for maintaining them. This paper introduces the notion of the communication complexity unit based on the smallest event communication group, and presents a generalized software complexity metric for measuring the maintenance complexity of distributed programs. We affirm that the smallest event communication group is a unifying mechanism for event and process abstraction, whatever distributed programming language is used, and advocate it as a basic, generalized communication complexity unit. We have applied the proposed distributed software complexity metric to a moderately complex example. Experience indicates that the proposed metric is indeed very useful.

Book ChapterDOI
24 Aug 1998
TL;DR: The aim of this paper is to create a general framework for the use of two-party communication protocols for lower bound proofs on multilective computations and the derivation of new nontrivial lower bounds on multilesctive VLSI circuits.
Abstract: Communication complexity of two-party (multiparty) protocols has established itself as a successful method for proving lower bounds on the complexity of concrete problems for numerous computing models. While the relations between communication complexity and oblivious, semilective computations are usually transparent, and the main difficulty is reduced to proving nontrivial lower bounds on the communication complexity of given computing problems, the situation changes essentially if one considers non-oblivious or multilective computations. The known lower bound proofs for such computations are far from transparent, and the crucial ideas of these proofs are often hidden behind some nontrivial combinatorial analysis. The aim of this paper is to create a general framework for the use of two-party communication protocols in lower bound proofs for multilective computations. The result is not only a transparent presentation of some known lower bounds on the complexity of multilective computations on distinct computing models, but also the derivation of new nontrivial lower bounds on multilective VLSI circuits.

Proceedings ArticleDOI
22 Oct 1998
TL;DR: A new distributed delay-constrained unicast routing algorithm which can always find a delay- Constrained path with small message complexity if such a path exists.
Abstract: We propose a new distributed delay-constrained unicast routing algorithm which can always find a delay-constrained path with small message complexity if such a path exists. At each network node, limited information about the network state is needed and only a small amount of computation is required. Simulation results show that the proposed algorithm has much better cost performance than the least delay path algorithm.