Author

Jeff Edmonds

Bio: Jeff Edmonds is an academic researcher from York University. The author has contributed to research on upper and lower bounds and scheduling (computing). The author has an h-index of 26 and has co-authored 62 publications receiving 2,738 citations. Previous affiliations of Jeff Edmonds include the University of Toronto and Keele University.


Papers
Journal ArticleDOI
TL;DR: An information-theoretic lower bound is given showing that for any set of priorities the total length of the encoding packets must be at least the girth, so the system introduced is optimal in terms of total encoding length.
Abstract: We introduce a new method, called priority encoding transmission, for sending messages over lossy packet-based networks. When a message is to be transmitted, the user specifies a priority value for each part of the message. Based on the priorities, the system encodes the message into packets for transmission and sends them to (possibly multiple) receivers. The priority value of each part of the message determines the fraction of encoding packets sufficient to recover that part. Thus even if some of the encoding packets are lost en route, each receiver is still able to recover the parts of the message for which a sufficient fraction of the encoding packets are received. For any set of priorities for a message, we define a natural quantity called the girth of the priorities. We develop systems for implementing any given set of priorities such that the total length of the encoding packets is equal to the girth. On the other hand, we give an information-theoretic lower bound showing that for any set of priorities the total length of the encoding packets must be at least the girth. Thus the system we introduce is optimal in terms of the total encoding length. This work has immediate applications to multimedia and high-speed networks, especially those with bursty sources and multiple receivers with heterogeneous capabilities. Implementations of the system show promise of being practical.
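
A minimal sketch of the girth quantity as the abstract describes it. The formulation is an assumption on our part, not taken from the paper's definitions: if message part i has length b_i and priority f_i (the fraction of encoding packets sufficient to recover it), each part plausibly costs b_i / f_i bits of encoding, and the girth is the sum of these costs. The part lengths and fractions below are illustrative.

    # Girth of a priority assignment: sum of length/fraction over all
    # message parts (assumed formulation).  Per the paper's lower bound,
    # no encoding can use fewer bits in total; the PET system matches it.
    def girth(parts):
        """parts: list of (length_bits, fraction) pairs, 0 < fraction <= 1."""
        return sum(length / fraction for length, fraction in parts)

    # Example: the most important part is recoverable from half of the
    # packets, the least important only when 95% of them arrive.
    parts = [(1000, 0.50), (2000, 0.75), (4000, 0.95)]
    print(girth(parts))  # ~8877.2 bits of encoding, at minimum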

648 citations

Proceedings ArticleDOI
20 Nov 1994
TL;DR: A novel approach for sending messages over lossy packet-based networks that allows a user to specify a different priority on each segment of the message, and an information-theoretic proof that there is no system that implements a set of priorities with rate greater than one.
Abstract: We introduce a novel approach for sending messages over lossy packet-based networks. The new method, called Priority Encoding Transmission, allows a user to specify a different priority on each segment of the message. Based on the priorities, the sender uses the system to encode the segments into packets for transmission. The system ensures recovery of the segments in order of their priority. The priority of a segment determines the minimum number of packets sufficient to recover the segment. We define a measure for a set of priorities, called the rate, which dictates how much information about the message must be contained in each bit of the encoding. We develop systems for implementing any set of priorities with rate equal to one. We also give an information-theoretic proof that there is no system that implements a set of priorities with rate greater than one. This work has applications to multimedia and high-speed networks, especially those with bursty sources and multiple receivers with heterogeneous capabilities.
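
Reading the rate measure in the same hedged spirit (again an assumed formulation, not the paper's definition): if segment i of b_i bits must be recoverable from any p_i of the packets, each l bits long, then the quantity below measures how much message information each encoding bit must carry, and it is at most one exactly when the girth fits within the total encoding length.

    # "Rate" of a set of priorities under the assumed formulation.
    def rate(segments, packet_len):
        """segments: list of (length_bits, min_packets) pairs."""
        return sum(b / (p * packet_len) for b, p in segments)

    segments = [(1000, 4), (2000, 6), (4000, 8)]
    print(rate(segments, packet_len=2048))  # ~0.53; above 1 would be
                                            # unimplementable per the proof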

331 citations

Proceedings ArticleDOI
23 Oct 1995
TL;DR: In this paper, the authors describe erasure codes where both the encoding and decoding algorithms run in linear time and where r is only slightly larger than n. The encoding algorithm produces a set of l-bit packets of total length cn from an n-bit message, and the decoding algorithm is able to recover the message from any set of packets whose total length is r.
Abstract: An (n,c,l,r) erasure code consists of an encoding algorithm and a decoding algorithm with the following properties. The encoding algorithm produces a set of l-bit packets of total length cn from an n-bit message. The decoding algorithm is able to recover the message from any set of packets whose total length is r, i.e., from any set of r/l packets. We describe erasure codes where both the encoding and decoding algorithms run in linear time and where r is only slightly larger than n.
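
The codes in the paper achieve linear-time encoding and decoding through sparse structure; the sketch below is only a hedged stand-in that uses dense random GF(2) combinations (with cubic-time decoding) to illustrate the headline property, namely why r need only be slightly larger than n: a few extra random parity equations make the linear system full rank with high probability.

    import random

    def encode_bit(message_bits, n, rng):
        # one encoding bit: XOR (parity) of a random subset of message bits
        mask = rng.getrandbits(n)
        return mask, bin(mask & message_bits).count("1") % 2

    def decode(equations, n):
        # Gauss-Jordan elimination over GF(2), using ints as bit rows
        pivots = {}                        # pivot position -> (mask, value)
        for mask, val in equations:
            for pos, (pmask, pval) in list(pivots.items()):
                if mask >> pos & 1:        # reduce by existing pivots
                    mask, val = mask ^ pmask, val ^ pval
            if mask == 0:
                continue                   # redundant equation
            pos = mask.bit_length() - 1
            for q, (qmask, qval) in list(pivots.items()):
                if qmask >> pos & 1:       # clear the new pivot everywhere
                    pivots[q] = (qmask ^ mask, qval ^ val)
            pivots[pos] = (mask, val)
        if len(pivots) < n:
            return None                    # not yet full rank
        return sum(1 << p for p, (_, v) in pivots.items() if v)

    rng = random.Random(1)
    n = 64
    message = rng.getrandbits(n)
    equations, decoded = [], None
    while decoded is None:                 # keep receiving until decodable
        equations.append(encode_bit(message, n, rng))
        decoded = decode(equations, n)
    assert decoded == message
    print(len(equations), "received bits for", n, "message bits")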

143 citations

Journal ArticleDOI
TL;DR: This work proves several separations which show that in a generic relativized world the search classes are distinct and there is a standard search problem in each of them that is not computationally equivalent to any decision problem.

137 citations

Proceedings ArticleDOI
Jeff Edmonds
01 May 1999
TL;DR: It is proved that if none of the jobs are “strictly” fully parallelizable, then Equi-partition performs competitively with no extra processors, and new upper and lower bound techniques applicable in this more difficult scenario are provided.
Abstract: We consider non-clairvoyant multiprocessor scheduling of jobs with arbitrary arrival times and changing execution characteristics. The problem has been studied extensively when either the jobs all arrive at time zero, or all the jobs are fully parallelizable, or the scheduler has considerable knowledge about the jobs. This paper considers the problem for the first time without any of these three restrictions, although our algorithm is given more resources than the adversary. We provide new upper and lower bound techniques applicable in this more difficult scenario. The results are of both theoretical and practical interest. In our model, a job can arrive at any arbitrary time, and its execution characteristics can change through the life of the job, from fully parallelizable to completely sequential. We assume that the scheduler has no knowledge about the jobs except for knowing when a job arrives and when it completes. (This is why we say that the scheduler is completely in the dark.) Given all this, we prove that the scheduling algorithm Equi-partition, though simple, performs within a constant factor of the optimal scheduler as long as it is given at least twice as many processors. Moreover, we prove that if none of the jobs are “strictly” fully parallelizable, then Equi-partition performs competitively with no extra processors. We also consider other variations: faster processors, fewer preemptions, and a wider range of execution characteristics.
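
Equi-partition itself is simple enough to state in a few lines: at every moment the processors are split evenly among the jobs currently in the system. The discrete-time simulation below is an illustrative sketch, not the paper's model; the speedup rule (each phase has a parallelism cap c, so a job with share s progresses at rate min(s, c), with cap 1 meaning fully sequential) and the job data are assumptions for demonstration.

    def equipartition(jobs, processors, dt=0.01):
        """jobs: list of {'arrival': t, 'phases': [[work, cap], ...]} dicts.
        Returns each job's completion time under even processor sharing."""
        t = 0.0
        finish = [None] * len(jobs)
        while any(f is None for f in finish):
            alive = [i for i, j in enumerate(jobs)
                     if j["arrival"] <= t and finish[i] is None]
            share = processors / len(alive) if alive else 0.0
            for i in alive:
                phases = jobs[i]["phases"]
                work, cap = phases[0]
                phases[0][0] = work - min(share, cap) * dt  # capped speedup
                if phases[0][0] <= 0:
                    phases.pop(0)              # phase done; move to the next
                    if not phases:
                        finish[i] = t + dt
            t += dt
        return finish

    jobs = [
        {"arrival": 0.0, "phases": [[4.0, 4], [1.0, 1]]},  # parallel, then sequential
        {"arrival": 1.0, "phases": [[2.0, 2]]},            # modestly parallel
    ]
    print(equipartition(jobs, processors=4))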

119 citations


Cited by
Journal ArticleDOI
27 Jun 2005
TL;DR: The recent development of practical distributed video coding schemes is reviewed, finding that the rate-distortion performance is superior to conventional intraframe coding, but there is still a gap relative to conventional motion-compensated interframe coding.
Abstract: Distributed coding is a new paradigm for video compression, based on Slepian and Wolf's and Wyner and Ziv's information-theoretic results from the 1970s. This paper reviews the recent development of practical distributed video coding schemes. Wyner-Ziv coding, i.e., lossy compression with receiver side information, enables low-complexity video encoding where the bulk of the computation is shifted to the decoder. Since the interframe dependence of the video sequence is exploited only at the decoder, an intraframe encoder can be combined with an interframe decoder. The rate-distortion performance is superior to conventional intraframe coding, but there is still a gap relative to conventional motion-compensated interframe coding. Wyner-Ziv coding is naturally robust against transmission errors and can be used for joint source-channel coding. A Wyner-Ziv MPEG encoder that protects the video waveform rather than the compressed bit stream achieves graceful degradation under deteriorating channel conditions without a layered signal representation.
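
The shift of complexity to the decoder can be illustrated with the textbook scalar coset example underlying Wyner-Ziv coding. This toy is a standard illustration, not one of the schemes the paper reviews: the encoder sends only the coset index of a quantized sample, and the decoder resolves the ambiguity using its correlated side information.

    STEP, COSETS = 1.0, 8      # quantizer step size; number of cosets

    def encode(x):
        # send only the coset index of the quantized sample:
        # log2(COSETS) bits rather than a full-range index
        return round(x / STEP) % COSETS

    def decode(coset, y):
        # candidates carrying this coset index are COSETS*STEP apart;
        # the side information y picks the right one
        base = round(y / (STEP * COSETS)) * COSETS + coset
        q = min((base - COSETS, base, base + COSETS),
                key=lambda c: abs(c * STEP - y))
        return q * STEP

    x, y = 13.7, 13.2            # source sample; correlated side information
    print(decode(encode(x), y))  # 14.0: matches the encoder's quantized x
    # correct whenever |x - y| stays below roughly COSETS * STEP / 2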

1,342 citations

Journal ArticleDOI
Luigi Rizzo
01 Apr 1997
TL;DR: A very basic description of erasure codes is provided, an implementation of a simple but very flexible erasure code to be used in network protocols is described, and its performance and possible applications are discussed.
Abstract: Reliable communication protocols require that all the intended recipients of a message receive the message intact. Automatic Repeat reQuest (ARQ) techniques are used in unicast protocols, but they do not scale well to multicast protocols with large groups of receivers, since segment losses tend to become uncorrelated, greatly reducing the effectiveness of retransmissions. In such cases, Forward Error Correction (FEC) techniques can be used, consisting of the transmission of redundant packets (based on error correcting codes) to allow the receivers to recover from independent packet losses. Despite the widespread use of error correcting codes in many fields of information processing, and a general consensus on the usefulness of FEC techniques within some of the Internet protocols, very few actual implementations of the latter exist. This probably derives from the different types of applications, and from concerns related to the complexity of implementing such codes in software. To fill this gap, in this paper we provide a very basic description of erasure codes, describe an implementation of a simple but very flexible erasure code to be used in network protocols, and discuss its performance and possible applications. Our code is based on Vandermonde matrices computed over GF(p^r), can be implemented very efficiently on common microprocessors, and is suited to a number of different applications, which are briefly discussed in the paper. An implementation of the erasure code shown in this paper is available from the author, and is able to encode/decode data at speeds up to several MB/s running on a Pentium 133.
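
In the spirit of the Vandermonde-based code described above, though not Rizzo's implementation: a short systematic Reed-Solomon erasure code over the prime field GF(257). A real packet code would use GF(2^8) table arithmetic so every symbol fits a byte; the prime field keeps this sketch to one-line arithmetic at the cost that an encoded symbol can be 256.

    P = 257  # prime field; byte-sized message symbols all fit below P

    def lagrange_eval(pts, t):
        # evaluate the unique degree < len(pts) polynomial through pts at t
        total = 0
        for i, (xi, yi) in enumerate(pts):
            num = den = 1
            for j, (xj, _) in enumerate(pts):
                if i != j:
                    num = num * (t - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    def encode(message, n):
        # systematic: packet x < k carries the message symbol itself,
        # the remaining packets carry extrapolated polynomial values
        k = len(message)
        pts = list(enumerate(message))
        return [(x, message[x] if x < k else lagrange_eval(pts, x))
                for x in range(n)]

    def decode(packets, k):
        # any k surviving (x, y) packets determine the polynomial
        pts = packets[:k]
        return [lagrange_eval(pts, t) for t in range(k)]

    import random
    msg = [104, 105, 33]
    packets = encode(msg, n=6)             # any 3 of the 6 packets suffice
    survivors = random.sample(packets, 3)  # half the packets lost
    assert decode(survivors, 3) == msg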

1,067 citations

Proceedings ArticleDOI
12 May 2002
TL;DR: This work considers the problem that arises when the server is overwhelmed by the volume of requests from its clients, and proposes Cooperative Networking (CoopNet), where clients cooperate to distribute content, thereby alleviating the load on the server.
Abstract: In this paper, we discuss the problem of distributing streaming media content, both live and on-demand, to a large number of hosts in a scalable way. Our work is set in the context of the traditional client-server framework. Specifically, we consider the problem that arises when the server is overwhelmed by the volume of requests from its clients. As a solution, we propose Cooperative Networking (CoopNet), where clients cooperate to distribute content, thereby alleviating the load on the server. We discuss the proposed solution in some detail, pointing out the interesting research issues that arise, and present a preliminary evaluation using traces gathered at a busy news site during the flash crowd that occurred on September 11, 2001.
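
A toy sketch of the redirection idea only; the class, names, and policy here are invented for illustration, and the paper's actual protocol involves far more machinery around peer selection and content striping.

    class Server:
        """Serve while capacity lasts; then redirect to recent fetchers."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.load = 0
            self.recent = {}              # url -> clients holding a copy

        def request(self, url, client):
            if self.load < self.capacity:
                self.load += 1            # toy model: load never drains
                self.recent.setdefault(url, []).append(client)
                return ("content", url)
            peers = self.recent.get(url, [])
            return ("redirect", peers[-3:])  # a few fresh peer candidates

    s = Server(capacity=2)
    print(s.request("/news", "a"))        # ('content', '/news')
    print(s.request("/news", "b"))        # ('content', '/news')
    print(s.request("/news", "c"))        # ('redirect', ['a', 'b'])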

914 citations

Journal ArticleDOI
TL;DR: The survey outlines fundamental results about multiprocessor real-time scheduling that hold independent of the scheduling algorithms employed, and provides a taxonomy of the different scheduling methods, and considers the various performance metrics that can be used for comparison purposes.
Abstract: This survey covers hard real-time scheduling algorithms and schedulability analysis techniques for homogeneous multiprocessor systems. It reviews the key results in this field from its origins in the late 1960s to the latest research published in late 2009. The survey outlines fundamental results about multiprocessor real-time scheduling that hold independent of the scheduling algorithms employed. It provides a taxonomy of the different scheduling methods, and considers the various performance metrics that can be used for comparison purposes. A detailed review is provided covering partitioned, global, and hybrid scheduling algorithms, approaches to resource sharing, and the latest results from empirical investigations. The survey identifies open issues, key research challenges, and likely productive research directions.
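
As a concrete taste of the partitioned class the survey covers (the task set and the choice of test are illustrative, not drawn from the survey itself): tasks are assigned to processors first-fit, and a processor accepts a task only while its total utilization stays under the classic Liu-Layland rate-monotonic bound n(2^(1/n) - 1).

    def ll_bound(n):
        # Liu-Layland utilization bound for n tasks under rate-monotonic
        return n * (2 ** (1 / n) - 1)

    def partition_first_fit(utilizations, processors):
        bins = [[] for _ in range(processors)]
        for u in utilizations:
            for b in bins:
                if sum(b) + u <= ll_bound(len(b) + 1):
                    b.append(u)
                    break
            else:
                return None               # this test cannot place the task
        return bins

    tasks = [0.3, 0.4, 0.2, 0.5, 0.1, 0.35]
    print(partition_first_fit(tasks, processors=3))
    # [[0.3, 0.4], [0.2, 0.5], [0.1, 0.35]]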

910 citations

Proceedings ArticleDOI
04 May 1997
TL;DR: In this article, the authors presented randomized constructions of linear-time encodable and decodable codes that can transmit over lossy channels at rates extremely close to capacity.
Abstract: We present randomized constructions of linear-time encodable and decodable codes that can transmit over lossy channels at rates extremely close to capacity. The encoding and decoding algorithms for these codes have fast and simple software implementations. Partial implementations of our algorithms are faster by orders of magnitude than the best software implementations of any previous algorithm for this problem. We expect these codes will be extremely useful for applications such as real-time audio and video transmission over the Internet, where lossy channels are common and fast decoding is a requirement. Despite the simplicity of the algorithms, their design and analysis are mathematically intricate. The design requires the careful choice of a random irregular bipartite graph, where the structure of the irregular graph is extremely important. We model the progress of the decoding algorithm by a set of differential equations. The solution to these equations can then be expressed as polynomials in one variable with coefficients determined by the graph structure. Based on these polynomials, we design a graph structure that guarantees successful decoding with high probability.
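
The decoding process described above can be sketched with a toy peeling decoder. The graph here is random with hand-picked degrees, quite unlike the carefully optimized irregular distributions the paper designs: each check packet is the XOR of a few message symbols, and decoding repeatedly resolves any check with exactly one unknown neighbor.

    import random

    def make_checks(message, count, degree, rng):
        checks = []
        for _ in range(count):
            nbrs = set(rng.sample(range(len(message)), degree))
            val = 0
            for i in nbrs:
                val ^= message[i]         # check packet: XOR of neighbors
            checks.append((nbrs, val))
        return checks

    def peel(checks, n):
        known, progress = {}, True
        while progress and len(known) < n:
            progress = False
            for nbrs, val in checks:
                unknown = nbrs - known.keys()
                if len(unknown) == 1:     # solvable: one unknown neighbor
                    i = unknown.pop()
                    for j in nbrs - {i}:
                        val ^= known[j]
                    known[i] = val
                    progress = True
        return known if len(known) == n else None

    rng = random.Random(0)
    message = [rng.randrange(256) for _ in range(100)]
    checks = (make_checks(message, 30, 1, rng)      # seeds for peeling
              + make_checks(message, 250, 3, rng))  # sparse XOR checks
    decoded = peel(checks, len(message))
    print("decoded" if decoded == dict(enumerate(message)) else
          "stalled: this random graph needs more or better checks")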

872 citations