Journal ArticleDOI

Adversarial Queuing on the Multiple Access Channel

TL;DR: It is proved that no acknowledgment-based protocol can be stable for injection rates larger than 3/(1 + lg n), establishing the impossibility of achieving even stability by restricted protocols.
Abstract: We study deterministic broadcasting on multiple access channels when packets are injected continuously. The quality of service is considered in the framework of adversarial queuing. An adversary is determined by injection rate and burstiness, the latter denoting the number of packets that can be injected simultaneously in a round. We consider only injection rates that are less than 1. A protocol is stable when the numbers of packets in queues stay bounded at all rounds, and it is of fair latency when waiting times of packets in queues are O(burstiness/rate). For channels with collision detection, we give a full-sensing protocol of fair latency for injection rates that are at most 1/(2(⌈lg n⌉ + 1)), where n is the number of stations, and show that fair latency is impossible to achieve for injection rates that are ω(1/log n). For channels without collision detection, we present a full-sensing protocol of fair latency for injection rates that are at most 1/(c lg² n), for some c > 0. We show that there exists an acknowledgment-based protocol that has fair latency for injection rates that are at most 1/(c n lg² n), for some c > 0, and develop an explicit acknowledgment-based protocol of fair latency for injection rates that are at most 1/(27 n² ln n). Regarding the impossibility of achieving just stability by restricted protocols, we prove that no acknowledgment-based protocol can be stable for injection rates larger than 3/(1 + lg n).
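The stability notion above can be made concrete with a toy simulation (an illustrative sketch under simplifying assumptions, not one of the paper's protocols): a leaky-bucket adversary with injection rate rho and burstiness burst feeds n stations, while a naive round-robin (TDMA) schedule broadcasts one packet per round. Round-robin is only guaranteed stable for rates below 1/n in the worst case, far weaker than the bounds above, but it shows what "queues stay bounded at all rounds" means.

```python
import random

def simulate(n=4, rho=0.2, burst=3, rounds=20000, seed=1):
    """Toy adversarial-queuing check (illustrative only): a leaky-bucket
    adversary may inject at most rho*t + burst packets in the first t
    rounds; a round-robin schedule lets station t mod n broadcast one
    packet per round. Returns the largest total backlog observed."""
    rng = random.Random(seed)
    queues = [0] * n
    credit = float(burst)          # unspent injection budget (rho*t + burst)
    max_backlog = 0
    for t in range(rounds):
        credit += rho
        while credit >= 1:         # adversary injects greedily, here at
            queues[rng.randrange(n)] += 1  # randomly chosen stations
            credit -= 1
        s = t % n                  # round-robin: one broadcast per round
        if queues[s] > 0:
            queues[s] -= 1
        max_backlog = max(max_backlog, sum(queues))
    return max_backlog
```

With rho = 0.2 and n = 4 the backlog stays small; pushing rho above 1/n with all packets aimed at one station makes the backlog grow without bound, i.e., round-robin is unstable there.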
Citations
Proceedings ArticleDOI
19 Jun 2016
TL;DR: A new backoff protocol for a shared communications channel is offered that guarantees expected constant throughput with only O(log(log* N)) access attempts in expectation, and new algorithms for approximate counting and leader election are introduced with the same performance guarantees.
Abstract: For decades, randomized exponential backoff has provided a critical algorithmic building block in situations where multiple devices seek access to a shared resource. Surprisingly, despite this history, the performance of standard backoff is poor under worst-case scheduling of demands on the resource: (i) subconstant throughput can occur under plausible scenarios, and (ii) each of N devices requires Omega(log N) access attempts before obtaining the resource. In this paper, we address these shortcomings by offering a new backoff protocol for a shared communications channel that guarantees expected constant throughput with only O(log(log* N)) access attempts in expectation. Central to this result are new algorithms for approximate counting and leader election with the same performance guarantees.
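The Omega(log N) attempts weakness of standard backoff can be contrasted with a minimal simulation of classic binary exponential backoff (an illustrative sketch, not the O(log(log* N)) protocol of this paper; the slot mechanics are simplified assumptions):

```python
import random

def backoff(n=16, max_rounds=100000, seed=7):
    """Minimal binary-exponential-backoff sketch: n stations contend for
    one channel; a slot succeeds only if exactly one station transmits,
    and every collider doubles its window and redraws a random wait.
    Returns (rounds_used, attempts_per_station)."""
    rng = random.Random(seed)
    window = [1] * n        # current backoff window per station
    wait = [0] * n          # slots left before the next attempt
    attempts = [0] * n
    done = [False] * n
    for t in range(max_rounds):
        senders = [i for i in range(n) if not done[i] and wait[i] == 0]
        if len(senders) == 1:           # exactly one transmitter: success
            attempts[senders[0]] += 1
            done[senders[0]] = True
        elif len(senders) > 1:          # collision: double windows, redraw
            for i in senders:
                attempts[i] += 1
                window[i] *= 2
                wait[i] = rng.randrange(window[i])
        for i in range(n):
            if not done[i] and wait[i] > 0:
                wait[i] -= 1
        if all(done):
            return t + 1, attempts
    return max_rounds, attempts
```

Windows double on every collision, so the last stations to succeed typically pay on the order of log N failed attempts, matching weakness (ii) described above.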

48 citations

Proceedings ArticleDOI
10 Jan 2016
TL;DR: This paper presents a relatively simple backoff protocol, Re-Backoff, that has, at its heart, a version of exponential backoff; it guarantees expected constant throughput with dynamic process arrivals and requires only an expected polylogarithmic number of access attempts per process.
Abstract: Randomized exponential backoff is a widely deployed technique for coordinating access to a shared resource. A good backoff protocol should, arguably, satisfy three natural properties: (i) it should provide constant throughput, wasting as little time as possible; (ii) it should require few failed access attempts, minimizing the amount of wasted effort; and (iii) it should be robust, continuing to work efficiently even if some of the access attempts fail for spurious reasons. Unfortunately, exponential backoff has some well-known limitations in two of these respects: it provides poor (sub-constant) throughput in the worst case, and is not robust to adversarial disruption. The goal of this paper is to "fix" exponential backoff by making it scalable, particularly focusing on the case where processes arrive in an on-line, worst-case fashion. We present a relatively simple backoff protocol, Re-Backoff, that has, at its heart, a version of exponential backoff. It guarantees expected constant throughput with dynamic process arrivals and requires only an expected polylogarithmic number of access attempts per process. Re-Backoff is also robust to periods where the shared resource is unavailable for some time. If it is unavailable for D time slots, Re-Backoff provides the following guarantees. When the number of packets is a finite n, the average expected number of access attempts for successfully sending a packet is O(log²(n + D)). In the infinite case, the average expected number of access attempts for successfully sending a packet is O(log²(η + D)), where η is the maximum number of processes that are ever in the system concurrently.

46 citations

Proceedings ArticleDOI
16 Jul 2012
TL;DR: In this article, a stochastic and an adversarial model are introduced to bound the packet injection in a wireless network; the geometry of the network and power control are taken into account, which allows the network's performance to be increased significantly.
Abstract: We consider protocols that serve communication requests arising over time in a wireless network that is subject to interference. Unlike previous approaches, we take the geometry of the network and power control into account, both of which allow the network's performance to be increased significantly. We introduce a stochastic and an adversarial model to bound the packet injection. Although taken as the primary motivation, this approach is not only suitable for models based on the signal-to-interference-plus-noise ratio (SINR); it also covers virtually all other common interference models, for example the multiple-access channel, the protocol model, the radio-network model, and distance-2 matching. Packet-routing networks allowing each edge or each node to transmit or receive one packet at a time can be modeled as well. Starting from an algorithm for the respective scheduling problem with static transmission requests, we build distributed stable protocols. This is more involved than in previous, similar approaches because the algorithms we consider do not necessarily scale linearly when scaling the input instance. We can guarantee a throughput that is as large as that of the original static algorithm. In particular, for SINR models the competitive ratios of the protocol in comparison to optimal ones in the respective model are between constant and O(log² m) for a network of size m.

44 citations

Book ChapterDOI
26 Jun 2011
TL;DR: This work studies broadcasting on multiple access channels with dynamic packet arrivals and jamming, giving upper bounds on the worst-case packet latency of protocols in terms of the parameters defining the adversaries.
Abstract: We study broadcasting on multiple access channels with dynamic packet arrivals and jamming. The presented protocols are for the medium-access-control layer. The mechanisms of timing of packet arrivals and determination of which rounds are jammed are represented by adversarial models. Packet arrivals are constrained by the average rate of injections and the number of packets that can arrive in one round. Jamming is constrained by the rate with which the adversary can jam rounds and by the number of consecutive rounds that can be jammed. Broadcasting is performed by deterministic distributed protocols. We give upper bounds on worst-case packet latency of protocols in terms of the parameters defining adversaries. Experiments include both deterministic and randomized protocols. A simulation environment we developed is designed to represent adversarial properties of jammed channels understood as restrictions imposed on adversaries.

44 citations


Cites background or methods from "Adversarial Queuing on the Multiple..."

  • ...We recall the definition of an adversary without jamming, as used in [2, 12, 13]....

  • ...[12] proposed to apply adversarial queuing to deterministic distributed broadcast protocols for multiple-access channels in the model with queues at stations....

  • ...This upper bound on the number of packets waiting in queues is a natural performance metric, see [12, 13]....

  • ...We consider two classes of protocols: full sensing protocols and adaptive ones, these definitions were introduced for deterministic protocols in [12, 13]....

  • ...We adapt deterministic distributed protocols, as introduced in [12, 13]....

Journal ArticleDOI
TL;DR: A protocol is developed that achieves throughput 1 for any number of stations against leaky-bucket adversaries, and it is shown that protocols that either are fair or do not let queue sizes affect the order of transmissions cannot be stable in systems of at least four stations against window adversaries.
Abstract: We consider deterministic distributed broadcasting on multiple access channels in the framework of adversarial queuing. Packets are injected dynamically by an adversary that is constrained by the injection rate and the number of packets that may be injected simultaneously; the latter we call burstiness. The maximum injection rate that an algorithm can handle in a stable manner is called the throughput of the algorithm. We develop an algorithm that achieves throughput $1$ for any number of stations against leaky-bucket adversaries. The algorithm has $O(n^2+\text{burstiness})$ packets queued simultaneously at any time, where $n$ is the number of stations; this upper bound is proved to be best possible. An algorithm is called fair when each packet is eventually broadcast. We show that no algorithm can be both stable and fair for a system of at least two stations against leaky-bucket adversaries. We study in detail small systems of exactly two and three stations against window adversaries to exhibit differences in quality of broadcast among classes of algorithms. For two stations, we show that fair latency can be achieved by a full sensing algorithm, while there is no stable acknowledgment-based algorithm. For three stations, we show that fair latency can be achieved by a general algorithm, while no full sensing algorithm can be stable. Finally, we show that algorithms that either are fair or do not let the queue sizes affect the order of transmissions cannot be stable in systems of at least four stations against window adversaries.

39 citations

References
Book
01 Jan 1996
TL;DR: This book familiarizes readers with important problems, algorithms, and impossibility results in the area, and teaches readers how to reason carefully about distributed algorithms: to model them formally, devise precise specifications for their required behavior, prove their correctness, and evaluate their performance with realistic measures.
Abstract: In Distributed Algorithms, Nancy Lynch provides a blueprint for designing, implementing, and analyzing distributed algorithms. She directs her book at a wide audience, including students, programmers, system designers, and researchers. Distributed Algorithms contains the most significant algorithms and impossibility results in the area, all in a simple automata-theoretic setting. The algorithms are proved correct, and their complexity is analyzed according to precisely defined complexity measures. The problems covered include resource allocation, communication, consensus among distributed processes, data consistency, deadlock detection, leader election, global snapshots, and many others. The material is organized according to the system model: first by the timing model and then by the interprocess communication mechanism. The material on system models is isolated in separate chapters for easy reference. The presentation is completely rigorous, yet is intuitive enough for immediate comprehension. This book familiarizes readers with important problems, algorithms, and impossibility results in the area: readers can then recognize the problems when they arise in practice, apply the algorithms to solve them, and use the impossibility results to determine whether problems are unsolvable. The book also provides readers with the basic mathematical tools for designing new algorithms and proving new impossibility results. In addition, it teaches readers how to reason carefully about distributed algorithms: to model them formally, devise precise specifications for their required behavior, prove their correctness, and evaluate their performance with realistic measures.
Table of Contents:
1. Introduction
2. Modelling I: Synchronous Network Model
3. Leader Election in a Synchronous Ring
4. Algorithms in General Synchronous Networks
5. Distributed Consensus with Link Failures
6. Distributed Consensus with Process Failures
7. More Consensus Problems
8. Modelling II: Asynchronous System Model
9. Modelling III: Asynchronous Shared Memory Model
10. Mutual Exclusion
11. Resource Allocation
12. Consensus
13. Atomic Objects
14. Modelling IV: Asynchronous Network Model
15. Basic Asynchronous Network Algorithms
16. Synchronizers
17. Shared Memory versus Networks
18. Logical Time
19. Global Snapshots and Stable Properties
20. Network Resource Allocation
21. Asynchronous Networks with Process Failures
22. Data Link Protocols
23. Partially Synchronous System Models
24. Mutual Exclusion with Partial Synchrony
25. Consensus with Partial Synchrony

4,340 citations

Book
01 Jan 1979
TL;DR: This classic in stochastic network modelling broke new ground when it was published in 1979, and it remains a superb introduction to reversibility and its applications thanks to the author's clear and easy-to-read style.
Abstract: This classic in stochastic network modelling broke new ground when it was published in 1979, and it remains a superb introduction to reversibility and its applications. The book concerns behaviour in equilibrium of vector stochastic processes or stochastic networks. When a stochastic network is reversible its analysis is greatly simplified, and the first chapter is devoted to a discussion of the concept of reversibility. The rest of the book focuses on the various applications of reversibility and the extent to which the assumption of reversibility can be relaxed without destroying the associated tractability. Now back in print for a new generation, this book makes enjoyable reading for anyone interested in stochastic processes thanks to the author's clear and easy-to-read style. Elementary probability is the only prerequisite and exercises are interspersed throughout.

2,480 citations

Journal ArticleDOI
Robert M. Metcalfe, David R. Boggs
TL;DR: The design principles and implementation are described, based on experience with an operating Ethernet of 100 nodes along a kilometer of coaxial cable, of a model for estimating performance under heavy loads and a packet protocol for error controlled communication.
Abstract: Ethernet is a branching broadcast communication system for carrying digital data packets among locally distributed computing stations. The packet transport mechanism provided by Ethernet has been used to build systems which can be viewed as either local computer networks or loosely coupled multiprocessors. An Ethernet's shared communication facility, its Ether, is a passive broadcast medium with no central control. Coordination of access to the Ether for packet broadcasts is distributed among the contending transmitting stations using controlled statistical arbitration. Switching of packets to their destinations on the Ether is distributed among the receiving stations using packet address recognition. Design principles and implementation are described, based on experience with an operating Ethernet of 100 nodes along a kilometer of coaxial cable. A model for estimating performance under heavy loads and a packet protocol for error controlled communication are included for completeness.

1,701 citations

Journal ArticleDOI
TL;DR: The information theoretic approach and the collision resolution approach to multiaccess channels are reviewed in terms of the underlying communication problems that both are modeling.
Abstract: The information theoretic approach and the collision resolution approach to multiaccess channels are reviewed in terms of the underlying communication problems that both are modeling. Some perspective on the strengths and weaknesses of these approaches is given, and the need for a more combined approach focused on coding and decoding techniques is argued.

629 citations

Book
01 Jan 2005

521 citations