
Showing papers on "Online algorithm published in 1990"


Proceedings ArticleDOI
01 Apr 1990
TL;DR: The existence of an efficient “simulation” of randomized on-line algorithms by deterministic ones is proved, and this simulation is best possible in general.
Abstract: Against an adaptive adversary, we show that the power of randomization in on-line algorithms is severely limited. We prove the existence of an efficient “simulation” of randomized on-line algorithms by deterministic ones, which is best possible in general. The proof of the upper bound is existential. We deal with the issue of computing the efficient deterministic algorithm, and show that this is possible in very general cases.

220 citations


Proceedings ArticleDOI
22 Oct 1990
TL;DR: A simple algorithm that guarantees response time essentially polynomial in δ_j is presented; it is based on the notion of a distribution queue and has a compact implementation.
Abstract: Presents an efficient distributed online algorithm for scheduling jobs that are created dynamically, subject to resource constraints that require that certain pairs of jobs not run concurrently. The focus is on the response time of the system to each job, i.e., the length of the time interval that starts when the job is created or assigned to a processor and ends at the instant the execution of the job begins. The goal is to provide guarantees on the response time to each job j in terms of the density of arrivals of jobs that conflict with j. The model is completely asynchronous and includes various resource allocation problems that have been studied extensively, including the dining philosophers problem and its generalizations to arbitrary networks. In these versions of the problem, the resource requirements of each new job j determine an upper bound δ_j on the number of jobs that can exist concurrently in the system and conflict with j. Given such upper bounds, no scheduling algorithm can guarantee a response time better than δ_j times the maximum execution or message transmission time. A simple algorithm that guarantees response time essentially polynomial in δ_j is presented. It is based on the notion of a distribution queue and has a compact implementation.
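The flavor of the response-time guarantee can be sketched in a toy centralized model (a hypothetical simplification for illustration; the paper's algorithm is distributed and asynchronous, and `schedule` is an invented name): a job waits only for conflicting jobs already in the system, so its response time is governed by the number of such jobs.

```python
# Toy centralized sketch (hypothetical simplification): each job has unit
# execution time, and a job may not start until every earlier-arriving
# conflicting job has finished. Its response time is then bounded by the
# number of conflicting jobs present when it arrives.

def schedule(jobs, conflicts):
    """jobs: list of (job_id, arrival_time); conflicts: job_id -> set of
    conflicting job ids that arrived earlier. Returns job_id -> start_time."""
    start, finish = {}, {}
    for jid, arrive in sorted(jobs, key=lambda j: j[1]):
        ready = arrive
        for other in conflicts.get(jid, ()):
            if other in finish:                 # wait for earlier conflicts
                ready = max(ready, finish[other])
        start[jid] = ready
        finish[jid] = ready + 1                 # unit execution time
    return start
```

With three mutually conflicting jobs arriving at time 0, the third starts at time 2: its wait equals its two conflicting predecessors, the δ_j-style dependence described above.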

64 citations


Proceedings ArticleDOI
01 Sep 1990
TL;DR: Focusing on the uniform and the harmonic distributions, rather than worst-case distributions, it is shown that if there are no restrictions on how the processors are partitioned, these distributions cause no waste with an offline algorithm but a waste of 20% to 37% for online algorithms.
Abstract: Gang scheduling (the scheduling of a number of interacting threads to run simultaneously on distinct processors) can leave processors idle if the sizes of the gangs do not match the number of available processors. Given an optimal offline algorithm, it is shown that the wasted processing power can range from 0% to 50%, depending on the distribution of gang sizes. Focusing on the uniform and the harmonic distributions, rather than worst-case distributions, it is shown that if there are no restrictions on how the processors are partitioned, these distributions cause no waste with an offline algorithm but a waste of 20% to 37% for online algorithms. Using the distributed hierarchical control scheme, which is similar to buddy systems for memory allocation, a waste of 10% to 20% should be expected for offline algorithms, and 21% to 51% for online algorithms.
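What "waste" means here can be made concrete with a small first-fit sketch (a hypothetical toy model, not the paper's distributed hierarchical control scheme; `online_waste` is an invented name): gangs are packed online into time slots of P processors, and waste is the fraction of processor-slots left idle.

```python
# Hypothetical first-fit illustration of gang-scheduling waste: each time
# slot offers P processors; each arriving gang is placed in the first slot
# with enough free processors. Waste = idle processor-slots / total.

def online_waste(gang_sizes, P):
    slots = []                          # processors used in each time slot
    for g in gang_sizes:
        for i, used in enumerate(slots):
            if used + g <= P:
                slots[i] += g           # first slot that fits this gang
                break
        else:
            slots.append(g)             # open a new time slot
    idle = sum(P - used for used in slots)
    return idle / (P * len(slots))
```

For gangs of sizes 3, 3, 2, 2 on 4 processors, this online packing uses three slots and wastes 1/6 of the processor-slots.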

19 citations


Proceedings ArticleDOI
C.J. Walter1
26 Jun 1990
TL;DR: The author describes the online algorithm used in the multicomputer architecture for fault tolerance (MAFT) to diagnose faulty system elements; by using syndrome information that categorizes detected errors as either symmetric or asymmetric, bounds for correct diagnosis can be deduced.
Abstract: The author presents an approach to the consistent diagnosis of error monitoring observations in a distributed fault-tolerant computing system, even when the faulty source produces arbitrary errors. He describes the online algorithm used in the multicomputer architecture for fault tolerance (MAFT) to diagnose faulty system elements. By the use of syndrome information which categorizes detected errors as either symmetric or asymmetric, bounds for correct diagnosis can be deduced. Finally, an interactive consistency algorithm is employed to guarantee consistent diagnosis in a distributed environment and to provide online verification of all diagnostic units.

13 citations


Proceedings ArticleDOI
05 Dec 1990
TL;DR: Whether it is possible to have an optimal online algorithm for unidirectional ring, out-tree, in-tree, bidirectional tree, and bidirectional ring networks is discussed.
Abstract: Whether it is possible to have an optimal online algorithm for unidirectional ring, out-tree, in-tree, bidirectional tree, and bidirectional ring networks is discussed. The problem is considered under various restrictions of four parameters: origin node, destination node, release time, and deadline. For unidirectional ring, it is shown that no such algorithm can exist unless one of the four parameters is fixed. For out-tree, it is shown that no such algorithm can exist unless one of three parameters (origin node, destination node, and release time) is fixed. For in-tree, it is shown that no such algorithm can exist unless one of three parameters (origin node, destination node, and deadline) is fixed. For bidirectional tree, it is shown that no such algorithm can exist unless the origin node or the destination node is fixed. For bidirectional ring, it is shown that no such algorithm can exist unless the origin node and either the destination node or the release time are fixed.

11 citations


Proceedings ArticleDOI
01 Apr 1990
TL;DR: This work abstracts the problem of adaptively choosing locations in a long computation to a server problem in which k servers move along a line in a single direction, modeling the fact that most computations are not reversible.
Abstract: Motivated by applications in data compression, debugging, and physical simulation, we consider the problem of adaptively choosing locations in a long computation at which to save intermediate results. Such checkpoints allow faster recomputation of arbitrary requested points within the computation. We abstract the problem to a server problem in which k servers move along a line in a single direction, modeling the fact that most computations are not reversible. Since checkpoints may be arbitrarily copied, we allow a server to jump to any location currently occupied by another server. We present on-line algorithms and analyze their competitiveness. We give lower bounds on the competitiveness of any on-line algorithm and show that our algorithms achieve these bounds within relatively small factors.
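A minimal sketch of the unidirectional server abstraction (hypothetical naming; the paper's algorithms and competitive bounds are more refined): servers sit at checkpointed positions on a line and only move forward, and serving a request costs the forward distance recomputed from the nearest checkpoint at or before it.

```python
# Toy model of the one-directional k-server abstraction: servers (saved
# checkpoints) only move forward; a request at position p is served by
# advancing the nearest server at or below p, paying the recomputation
# distance. (Copying to another server's position is free in the model,
# but is not exercised in this sketch.)

def serve(servers, request):
    """servers: checkpoint positions (assumed to include one <= request).
    Returns (new_positions, cost)."""
    src = max(s for s in servers if s <= request)   # nearest usable checkpoint
    cost = request - src                            # forward recomputation
    servers = sorted(servers)
    servers[servers.index(src)] = request
    return sorted(servers), cost
```

With checkpoints at 0 and 10, a request at 7 must recompute 7 steps from 0, while a request at 12 costs only 2 steps from the checkpoint at 10; the online question is where to keep the k checkpoints so such costs stay competitive.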

10 citations


Proceedings ArticleDOI
Ola Dahl1, Lars Nielsen1
13 May 1990
TL;DR: A secondary controller for online time scaling of nominal high-performance trajectories is proposed to handle path following with torques close to the limits; the analysis shows that if the nominal velocity profile is within certain limits, the actual velocity profile is bounded.
Abstract: A secondary controller for online time scaling of nominal high-performance trajectories has been proposed to handle path following with torques close to the limits. Such nominal reference trajectories are typically available from an offline optimization. The stability properties of a closed-loop system including a robot, a primary controller, and a new secondary controller are studied. The analysis shows that if the nominal velocity profile is within certain limits, the result is a bounded actual velocity profile. The stability of the closed-loop system is then obtained by requiring a specified tracking performance for the primary controller.

4 citations


Proceedings ArticleDOI
05 Dec 1990
TL;DR: A recursive optimization algorithm is formulated using likelihood ratio estimates to minimize the steady-state probability of loss with respect to the load sharing parameters, and almost sure convergence of the algorithm is proved.
Abstract: The likelihood ratio method is studied as a possible approach for sensitivity analysis of discrete event systems. A load sharing problem is considered for a multiqueue system in which customers have soft real-time constraints: if the waiting time of a customer exceeds a given random amount (called the laxity of the customer), then the customer is considered lost. A recursive optimization algorithm is formulated using likelihood ratio estimates to minimize the steady-state probability of loss with respect to the load sharing parameters, and almost sure convergence of the algorithm is proved. The algorithm can be used for online optimization of the real-time system, and does not require a priori knowledge of the arrival rate of customers to the system or the service time and laxity distributions. To illustrate the results, simulation examples are presented.
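The recursive update has the familiar Robbins-Monro shape; below is a generic stochastic-approximation sketch under stated assumptions (the likelihood-ratio gradient estimator is replaced by an artificial noisy gradient, and all names are invented), not the paper's estimator.

```python
import random

# Generic stochastic-approximation sketch: theta_{n+1} = theta_n - a_n * G_n,
# where G_n is an unbiased estimate of the gradient of the cost. Here the
# likelihood-ratio estimate is mimicked by grad(theta) + Gaussian noise.

def robbins_monro(grad, theta0, steps=2000, seed=0):
    rng = random.Random(seed)
    theta = theta0
    for n in range(1, steps + 1):
        g = grad(theta) + rng.gauss(0.0, 0.1)  # noisy gradient estimate
        theta -= g / n                          # a_n = 1/n: sum a_n diverges,
    return theta                                # sum a_n^2 converges
```

Minimizing (θ - 2)² this way drives θ toward 2, mirroring the almost-sure convergence the paper proves for the loss-probability objective.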

4 citations


Proceedings ArticleDOI
06 Jun 1990
TL;DR: The concept of online processing is presented as an effective approach to overcome the data distribution overhead in parallel real-time implementation of low-level vision and a two-stage pipelined module that allows online implementations is described.
Abstract: The concept of online processing is presented as an effective approach to overcome the data distribution overhead in parallel real-time implementation of low-level vision. Online processing refers to pipelining each input item with the pipeline rate equal to the input arrival rate. Online algorithms are presented for four benchmark low-level tasks, namely two-dimensional convolution, two-dimensional rank value filtering, connected component labeling, and Hough transform. A two-stage pipelined module that allows online implementations is described. This module is to function as a coprocessor to nodes in B-HIVE, a loosely coupled multiprocessor for integrated vision.
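The online idea for one of the benchmarks, two-dimensional convolution, can be sketched directly (a hypothetical illustration: `online_convolve` is an invented name, and the real module is a two-stage hardware pipeline): each arriving row is consumed immediately and only the last K rows are buffered, so processing keeps pace with the input arrival rate.

```python
from collections import deque

# Row-streaming ("online") 2D convolution sketch: rows arrive one at a
# time; a K-row sliding buffer emits one output row per arrival once the
# buffer is full. Valid convolution (no padding) with a K x K kernel.

def online_convolve(rows, kernel):
    K = len(kernel)
    buf = deque(maxlen=K)               # only the last K input rows are kept
    for row in rows:                    # process each row as it arrives
        buf.append(row)
        if len(buf) == K:
            W = len(row)
            yield [sum(kernel[i][j] * buf[i][c + j]
                       for i in range(K) for j in range(K))
                   for c in range(W - K + 1)]
```

On a 3x3 image with an all-ones 2x2 kernel, the first output row is emitted as soon as the second input row arrives.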

3 citations


01 Jan 1990
TL;DR: It is shown here that for any class satisfying the property of closure under exception lists, the PAC-learnability of the class implies the existence of an Occam algorithm for the class, which reveals a close relationship between PAC-learning and information compression for a wide range of interesting classes.
Abstract: The distribution-independent model of concept learning from examples ("PAC-learning") due to Valiant is investigated. It has previously been shown that the existence of an Occam algorithm for a class of concepts is a sufficient condition for the PAC-learnability of that class. (An Occam algorithm is a randomized polynomial-time algorithm that, when given as input a sample of strings of some unknown concept to be learned, outputs a small description of a concept that is consistent with the sample.) It is shown here that for any class satisfying the property of closure under exception lists, the PAC-learnability of the class implies the existence of an Occam algorithm for the class. Thus the existence of randomized Occam algorithms exactly characterizes PAC-learnability for all concept classes with this property. This reveals a close relationship between PAC-learning and information compression for a wide range of interesting classes. The PAC-learning model is then extended to that of semi-supervised learning (ss-learning), in which a collection of disjoint concepts is to be simultaneously learned with only partial information concerning concept membership available to the learning algorithm. It is shown that many PAC-learnable concept classes are also ss-learnable. Several sets of sufficient conditions for a class to be ss-learnable are given. A prediction-based definition of learning multiple concept classes has been given and shown to be equivalent to ss-learning. The predictive ability of automata less powerful than Turing machines is investigated. Models for prediction by deterministic finite state machines, 1-counter machines, and deterministic pushdown automata are defined, and the classes of languages that can be predicted by these types of automata are precisely characterized. In addition, upper bounds are given for the size of classes that can be predicted by such automata. Two new online protocols for graph algorithms are defined. 
Bounds on the performance of online algorithms for the graph bandwidth, vertex cover, independent set, and dominating set problems are demonstrated. Various results are proved for algorithms operating according to a standard online protocol as well as the two new protocols.
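The closure-under-exception-lists property is easy to sketch concretely (a hypothetical representation, not the paper's construction): a concept together with a finite exception list defines a new concept that flips membership exactly on the exceptions, which is what lets a hypothesis be patched into a small description consistent with the sample.

```python
# Hypothetical sketch of a concept modified by an exception list: the new
# concept agrees with the base concept everywhere except on E, where
# membership is flipped (an XOR with the indicator of E).

def with_exceptions(concept, exceptions):
    E = frozenset(exceptions)
    return lambda x: concept(x) != (x in E)
```

For instance, patching the concept "even numbers" with exception list {2, 3} excludes 2 and includes 3, while leaving all other numbers unchanged.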

1 citation