
A Quantitative Measure Of Fairness And Discrimination For Resource Allocation In Shared Computer Systems

TL;DR: The Index of Fairness proposed in this paper is a quantitative measure applicable to any resource sharing or allocation problem; it is independent of the total amount of the resource, and it always lies between 0 and 1.
Abstract: Fairness is an important performance criterion in all resource allocation schemes, including those in distributed computer systems. However, it is often specified only qualitatively. The quantitative measures proposed in the literature are either too specific to a particular application, or suffer from some undesirable characteristics. In this paper, we have introduced a quantitative measure called Index of Fairness. The index is applicable to any resource sharing or allocation problem. It is independent of the amount of the resource. The fairness index always lies between 0 and 1. This boundedness aids intuitive understanding of the fairness index. For example, a distribution algorithm with a fairness of 0.10 means that it is unfair to 90% of the users. Also, the discrimination index can be defined as 1 - fairness index.
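The abstract does not reproduce the formula itself, but the measure defined in the paper is the widely used Jain fairness index, f(x) = (Σ xᵢ)² / (n · Σ xᵢ²) for an allocation x₁, …, xₙ. A minimal sketch in Python (the function name fairness_index and the example allocation are illustrative, not from the paper):

```python
def fairness_index(allocations):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2).

    Returns a value in (0, 1]; it is 1 when all users receive equal shares
    and roughly k/n when only k of the n users receive a fair share.
    """
    n = len(allocations)
    total = sum(allocations)
    sum_sq = sum(x * x for x in allocations)
    return (total * total) / (n * sum_sq)

# Example: one of ten users receives the whole resource -> index = 0.10,
# i.e. the allocation is unfair to 90% of the users.
alloc = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(fairness_index(alloc))        # 0.1
print(1 - fairness_index(alloc))    # discrimination index = 0.9
```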
Citations
Proceedings Article•DOI•
21 Oct 2001
TL;DR: This work presents the SEDA design and an implementation of an Internet services platform based on this architecture, and describes several control mechanisms for automatic tuning and load conditioning, including thread pool sizing, event batching, and adaptive load shedding.
Abstract: We propose a new design for highly concurrent Internet services, which we call the staged event-driven architecture (SEDA). SEDA is intended to support massive concurrency demands and simplify the construction of well-conditioned services. In SEDA, applications consist of a network of event-driven stages connected by explicit queues. This architecture allows services to be well-conditioned to load, preventing resources from being overcommitted when demand exceeds service capacity. SEDA makes use of a set of dynamic resource controllers to keep stages within their operating regime despite large fluctuations in load. We describe several control mechanisms for automatic tuning and load conditioning, including thread pool sizing, event batching, and adaptive load shedding. We present the SEDA design and an implementation of an Internet services platform based on this architecture. We evaluate the use of SEDA through two applications: a high-performance HTTP server and a packet router for the Gnutella peer-to-peer file sharing network. These results show that SEDA applications exhibit higher performance than traditional service designs, and are robust to huge variations in load.
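For readers unfamiliar with the architecture, here is a minimal sketch of a single SEDA-style stage under the model described above: an explicit event queue, a small thread pool draining it, and a crude admission check that sheds load when the queue grows too long. The names (Stage, MAX_QUEUE) are illustrative, and the paper's dynamic resource controllers (thread pool sizing, event batching) are omitted.

```python
import queue
import threading
import time

MAX_QUEUE = 1000  # beyond this, new events are rejected (load shedding)

class Stage:
    def __init__(self, name, handler, num_threads=4):
        self.name = name
        self.handler = handler              # event-handling function for this stage
        self.events = queue.Queue()         # explicit queue connecting stages
        self.threads = [threading.Thread(target=self._run, daemon=True)
                        for _ in range(num_threads)]
        for t in self.threads:
            t.start()

    def enqueue(self, event):
        """Admission control: reject events rather than overcommit resources."""
        if self.events.qsize() >= MAX_QUEUE:
            return False                    # caller must back off or drop
        self.events.put(event)
        return True

    def _run(self):
        while True:
            event = self.events.get()       # worker threads block on the queue
            self.handler(event)

# Example: a two-stage pipeline, parse -> respond
respond = Stage("respond", lambda ev: print("response for", ev))
parse = Stage("parse", lambda ev: respond.enqueue(ev.upper()))
parse.enqueue("request-1")
time.sleep(0.5)                             # let the daemon threads drain the pipeline
```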

975 citations

Proceedings Article•DOI•
Michael Isard, Vijayan Prabhakaran, Jon Currey, Udi Wieder, Kunal Talwar, Andrew V. Goldberg
11 Oct 2009
TL;DR: It is argued that data-intensive computation benefits from a fine-grain resource sharing model that differs from the coarser semi-static resource allocations implemented by most existing cluster computing architectures.
Abstract: This paper addresses the problem of scheduling concurrent jobs on clusters where application data is stored on the computing nodes. This setting, in which scheduling computations close to their data is crucial for performance, is increasingly common and arises in systems such as MapReduce, Hadoop, and Dryad as well as many grid-computing environments. We argue that data-intensive computation benefits from a fine-grain resource sharing model that differs from the coarser semi-static resource allocations implemented by most existing cluster computing architectures. The problem of scheduling with locality and fairness constraints has not previously been extensively studied under this resource-sharing model. We introduce a powerful and flexible new framework for scheduling concurrent distributed jobs with fine-grain resource sharing. The scheduling problem is mapped to a graph data structure, where edge weights and capacities encode the competing demands of data locality, fairness, and starvation-freedom, and a standard solver computes the optimal online schedule according to a global cost model. We evaluate our implementation of this framework, which we call Quincy, on a cluster of a few hundred computers using a varied workload of data- and CPU-intensive jobs. We evaluate Quincy against an existing queue-based algorithm and implement several policies for each scheduler, with and without fairness constraints. Quincy achieves better fairness when fairness is requested, while substantially improving data locality. The volume of data transferred across the cluster is reduced by up to a factor of 3.9 in our experiments, leading to a throughput increase of up to 40%.
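A toy illustration of the flow-based formulation described above, assuming tasks and machines become nodes of a min-cost-flow graph whose edge costs encode data-locality preferences and a standard solver picks the assignment. The node names, costs, and graph shape here are illustrative and far simpler than Quincy's actual encoding; networkx stands in as the "standard solver".

```python
import networkx as nx

G = nx.DiGraph()
# Per-task placement costs: cheap where the task's input data lives.
tasks = {"t1": {"m1": 1, "m2": 10},
         "t2": {"m1": 10, "m2": 1}}
machines = ["m1", "m2"]

for t in tasks:
    G.add_node(t, demand=-1)                          # each task supplies one unit of flow
for m in machines:
    G.add_edge(m, "sink", capacity=1, weight=0)       # each machine can run one task
G.add_node("sink", demand=len(tasks))                 # all flow must reach the sink
for t, prefs in tasks.items():
    for m, cost in prefs.items():
        G.add_edge(t, m, capacity=1, weight=cost)     # cost encodes data locality

flow = nx.min_cost_flow(G)
assignment = {t: m for t, ms in flow.items() if t in tasks
              for m, f in ms.items() if f == 1}
print(assignment)   # {'t1': 'm1', 't2': 'm2'} -- a locality-respecting schedule
```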

949 citations

Proceedings Article•DOI•
22 Aug 2005
TL;DR: The objective is to define an access method optimized for throughput and fairness, able to dynamically adapt to physical channel conditions, to operate near optimum for a wide range of error rates, and to provide equal time shares when hosts use different bit rates.
Abstract: We consider wireless LANs such as IEEE 802.11 operating in the unlicensed radio spectrum. While their nominal bit rates have increased considerably, the MAC layer remains practically unchanged despite much research effort spent on improving its performance. We observe that most proposals for tuning the access method focus on a single aspect and disregard others. Our objective is to define an access method optimized for throughput and fairness, able to dynamically adapt to physical channel conditions, to operate near optimum for a wide range of error rates, and to provide equal time shares when hosts use different bit rates. We propose a novel access method derived from 802.11 DCF [2] (Distributed Coordination Function) in which all hosts use similar values of the contention window CW to benefit from good short-term access fairness. We call our method Idle Sense, because each host observes the mean number of idle slots between transmission attempts to dynamically control its contention window. Unlike other proposals, Idle Sense enables each host to estimate its frame error rate, which can be used for switching to the right bit rate. We present simulations showing how the method leads to high throughput, low collision overhead, and low delay. The method also features fast reactivity and time-fair channel allocation.
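A hypothetical sketch of the control loop described above: each host measures the mean number of idle slots between transmission attempts and applies AIMD to its contention window so that all hosts converge toward a common target. The target and the AIMD constants below are placeholders, not the values derived in the paper.

```python
TARGET_IDLE = 5.7     # desired mean idle slots between attempts (placeholder)
ALPHA = 1.05          # multiplicative factor applied to CW (placeholder)
EPSILON = 0.001       # additive step applied to 1/CW (placeholder)

def update_cw(cw, observed_idle):
    """AIMD on 1/CW: too few idle slots -> back off, too many -> contend harder."""
    if observed_idle < TARGET_IDLE:
        cw = cw * ALPHA                     # channel too busy: enlarge CW
    else:
        cw = 1.0 / (1.0 / cw + EPSILON)     # channel underused: shrink CW slightly
    return max(cw, 1.0)

cw = 32.0
for idle in [3.1, 4.0, 7.2, 6.5, 2.8]:      # example per-interval measurements
    cw = update_cw(cw, idle)
    print(round(cw, 2))
```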

541 citations

Proceedings Article•DOI•
17 Aug 2015
TL;DR: TIMELY is the first delay-based congestion control protocol for use in the datacenter, and it achieves its results despite having an order of magnitude fewer RTT signals than earlier delay- based schemes such as Vegas.
Abstract: Datacenter transports aim to deliver low latency messaging together with high throughput. We show that simple packet delay, measured as round-trip times at hosts, is an effective congestion signal without the need for switch feedback. First, we show that advances in NIC hardware have made RTT measurement possible with microsecond accuracy, and that these RTTs are sufficient to estimate switch queueing. Then we describe how TIMELY can adjust transmission rates using RTT gradients to keep packet latency low while delivering high bandwidth. We implement our design in host software running over NICs with OS-bypass capabilities. We show using experiments with up to hundreds of machines on a Clos network topology that it provides excellent performance: turning on TIMELY for OS-bypass messaging over a fabric with PFC lowers 99th-percentile tail latency by 9X while maintaining near line-rate throughput. Our system also outperforms DCTCP running in an optimized kernel, reducing tail latency by 13X. To the best of our knowledge, TIMELY is the first delay-based congestion control protocol for use in the datacenter, and it achieves its results despite having an order of magnitude fewer RTT signals (due to NIC offload) than earlier delay-based schemes such as Vegas.
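A rough sketch of RTT-gradient rate control in the spirit of the description above: the sender smooths consecutive RTT differences, increases its rate additively while the gradient is non-positive, and backs off multiplicatively as the gradient grows. The constants, the smoothing scheme, and the omitted low/high RTT threshold modes are illustrative assumptions, not the paper's parameters.

```python
DELTA = 10e6        # additive rate step, bits/s (placeholder)
BETA = 0.8          # multiplicative decrease weight (placeholder)
MIN_RTT = 20e-6     # normalization base for the gradient, seconds (placeholder)
EWMA = 0.1          # smoothing weight for RTT differences (placeholder)

class RateController:
    def __init__(self, rate):
        self.rate = rate
        self.prev_rtt = None
        self.smoothed_diff = 0.0

    def on_rtt(self, rtt):
        if self.prev_rtt is not None:
            diff = rtt - self.prev_rtt
            self.smoothed_diff = (1 - EWMA) * self.smoothed_diff + EWMA * diff
            gradient = self.smoothed_diff / MIN_RTT          # normalized RTT gradient
            if gradient <= 0:
                self.rate += DELTA                           # RTTs flat or falling: probe up
            else:
                self.rate *= max(0.0, 1 - BETA * gradient)   # RTTs rising: back off
        self.prev_rtt = rtt
        return self.rate

rc = RateController(5e9)                                     # start at 5 Gb/s
for sample in [25e-6, 24e-6, 30e-6, 38e-6, 33e-6]:           # example RTT samples
    print(round(rc.on_rtt(sample) / 1e9, 3), "Gb/s")
```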

442 citations

Journal Article•DOI•
TL;DR: The scope of this work is to give an overview of the security threats and challenges that cognitive radios and cognitive radio networks face, along with the current state-of-the-art to detect the corresponding attacks.
Abstract: With the rapid proliferation of new technologies and services in the wireless domain, spectrum scarcity has become a major concern. The allocation of the Industrial, Scientific and Medical (ISM) band has enabled the explosion of new technologies (e.g. Wi-Fi) due to its licence-exempt characteristic. The widespread adoption of Wi-Fi technology, combined with the rapid penetration of smart phones running popular user services (e.g. online social networks), has substantially overcrowded the ISM band. On the other hand, according to a number of recent reports, several parts of the statically allocated licensed bands are under-utilized. This has brought up the idea of the opportunistic use of these bands through so-called cognitive radios and cognitive radio networks. Cognitive radios enable transmission in several licensed bands without causing harmful interference to licensed users. Along with the realization of cognitive radios, new security threats have been raised. Adversaries can exploit several vulnerabilities of this new technology and cause severe performance degradation. Security threats are mainly related to two fundamental characteristics of cognitive radios: cognitive capability and reconfigurability. Threats related to the cognitive capability include attacks launched by adversaries that mimic primary transmitters, and transmission of false observations related to spectrum sensing. Reconfigurability can be exploited by attackers through the use of malicious code installed in cognitive radios. Furthermore, as cognitive radio networks are wireless in nature, they face all classic threats present in conventional wireless networks. The scope of this work is to give an overview of the security threats and challenges that cognitive radios and cognitive radio networks face, along with the current state of the art in detecting the corresponding attacks. In addition, future challenges are addressed.

434 citations