
Showing papers on "Probabilistic analysis of algorithms" published in 1989


Book ChapterDOI
17 Aug 1989
TL;DR: This paper describes and analyzes skip lists and presents new techniques for analyzing probabilistic algorithms.
Abstract: Skip lists are a practical, probabilistic data structure that can be used in place of balanced trees. Algorithms for insertion and deletion in skip lists are much simpler and significantly faster than equivalent algorithms for balanced trees. This paper describes and analyzes skip lists and presents new techniques for analyzing probabilistic algorithms.

843 citations
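
Since the abstract describes the data structure itself, a minimal sketch may help. This is an illustrative Python rendering of the usual skip-list idea (random geometric node levels, per-level forward pointers), not Pugh's own pseudocode; names such as MAX_LEVEL are arbitrary.

```python
import random

class SkipList:
    """Minimal skip list: each node gets a random level (coin flips), and
    higher levels act as express lanes that skip over lower-level nodes."""

    MAX_LEVEL = 16

    def __init__(self):
        # Header's forward pointers; a node is represented as [key, forwards].
        self.head = [None] * self.MAX_LEVEL

    def _random_level(self):
        lvl = 1
        while lvl < self.MAX_LEVEL and random.random() < 0.5:
            lvl += 1
        return lvl

    def insert(self, key):
        update, forward = [None] * self.MAX_LEVEL, self.head
        for i in reversed(range(self.MAX_LEVEL)):
            while forward[i] is not None and forward[i][0] < key:
                forward = forward[i][1]        # advance along level i
            update[i] = forward                # last pointer array before key at level i
        node = [key, [None] * self._random_level()]
        for i in range(len(node[1])):          # splice the node into each of its levels
            node[1][i] = update[i][i]
            update[i][i] = node

    def __contains__(self, key):
        forward = self.head
        for i in reversed(range(self.MAX_LEVEL)):
            while forward[i] is not None and forward[i][0] < key:
                forward = forward[i][1]
        return forward[0] is not None and forward[0][0] == key
```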


Book
01 Jan 1989
TL;DR: Algorithms Involving Sequences and Sets, Algebraic and Numeric Algorithms, Parallel Algorithms, and NP-Completeness.
Abstract: Introduction. Mathematical Induction. Analysis of Algorithms. Data Structures. Design of Algorithms by Induction. Algorithms Involving Sequences and Sets. Graph Algorithms. Geometric Algorithms. Algebraic and Numeric Algorithms. Reductions. NP-Completeness. Parallel Algorithms.

466 citations


Journal ArticleDOI
TL;DR: A probabilistic version of the maximal covering location problem is introduced, structured as a zero-one linear programming problem and solved on a medium-sized transportation network representing Baltimore City.
Abstract: A probabilistic version of the maximal covering location problem is introduced here. The maximum availability location problem (MALP) positions p servers in such a way as to maximize the population which will find a server available within a time standard with α reliability. The maximum availability problem builds on the probabilistic location set covering problem in concept and on backup covering and expected covering models in technical detail. MALP bears the same relation to the probabilistic location set covering problem as the deterministic maximal covering problem bears to the deterministic location set covering problem. The maximum availability problem is structured here as a zero–one linear programming problem and solved on a medium-sized transportation network representing Baltimore City.

426 citations
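
A hedged sketch of how MALP is commonly written as a zero-one (here integer) program, under the assumption that b is the smallest number of in-range servers giving α availability when each server is busy with probability q, i.e. the least b with 1 − q^b ≥ α. The instance data and the PuLP modeling are illustrative only.

```python
import math
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, LpInteger, lpSum

# Hypothetical instance: three demand nodes, three candidate sites.
pop = {1: 500, 2: 300, 3: 800}                 # population at demand node i
N = {1: ["A", "B"], 2: ["B"], 3: ["B", "C"]}   # sites within the time standard of i
sites = ["A", "B", "C"]
p, q, alpha = 2, 0.3, 0.9                      # servers, busy fraction, reliability

# Smallest b with 1 - q**b >= alpha: b servers in range give alpha availability.
b = math.ceil(math.log(1 - alpha) / math.log(q))

prob = LpProblem("MALP", LpMaximize)
x = {j: LpVariable(f"x_{j}", lowBound=0, cat=LpInteger) for j in sites}  # servers at j
y = {i: LpVariable(f"y_{i}", cat=LpBinary) for i in pop}  # node i covered with alpha?

prob += lpSum(pop[i] * y[i] for i in pop)              # maximize covered population
for i in pop:
    prob += lpSum(x[j] for j in N[i]) >= b * y[i]      # at least b servers in range
prob += lpSum(x.values()) == p                         # position exactly p servers
prob.solve()
print({j: int(x[j].value()) for j in sites}, {i: int(y[i].value()) for i in pop})
```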


Proceedings ArticleDOI
05 Jun 1989
TL;DR: A probabilistic power domain construction is given for the category of inductively complete partial orders.
Abstract: A probabilistic power domain construction is given for the category of inductively complete partial orders. It is the partial order of continuous […]

401 citations


Proceedings ArticleDOI
30 Oct 1989
TL;DR: The use of highly expanding bipartite multigraphs (called dispersers) to greatly reduce the error of probabilistic algorithms at the cost of few additional random bits is treated.
Abstract: The use of highly expanding bipartite multigraphs (called dispersers) to greatly reduce the error of probabilistic algorithms at the cost of few additional random bits is treated. Explicit constructions of such graphs are generalized and used to obtain the following results: (1) the error probability of any RP (BPP) algorithm can be made exponentially small at the cost of only a constant factor increase in the number of random bits; (2) RP (BPP) algorithms with some weak bit-fixing sources can be simulated.

205 citations
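
For context, the baseline that result (1) improves on is independent repetition with majority voting, which makes BPP error exponentially small but multiplies the random bits by the number of runs; the disperser-based construction pays only a constant factor. A toy sketch of that baseline, where bpp_algorithm is a hypothetical stand-in:

```python
import random

def amplify_majority(bpp_algorithm, x, k, bits_per_run):
    """Naive BPP error reduction: k independent runs, majority vote.
    Error falls exponentially in k (Chernoff bound), but the randomness
    cost is k * bits_per_run, the overhead the disperser construction removes."""
    votes = sum(
        bpp_algorithm(x, random.getrandbits(bits_per_run)) for _ in range(k)
    )
    return votes > k // 2
```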


Journal ArticleDOI
TL;DR: This paper presents some results on the probabilistic analysis of learning, illustrating the applicability of these results to settings such as connectionist networks.
Abstract: This paper presents some results on the probabilistic analysis of learning, illustrating the applicability of these results to settings such as connectionist networks. In particular, it concerns the learning of sets and functions from examples and background information. After a formal statement of the problem, some theorems are provided identifying the conditions necessary and sufficient for efficient learning, with respect to measures of information complexity and computational complexity. Intuitive interpretations of the definitions and theorems are provided.

159 citations
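
As standard background for the information-complexity conditions mentioned above (not the paper's own, more general, measures): for a finite hypothesis class H, roughly (1/ε)(ln|H| + ln(1/δ)) examples suffice for a consistent learner to be probably approximately correct.

```python
import math

def pac_sample_bound(h_size, epsilon, delta):
    """Classical PAC bound for a finite hypothesis class H: with
    m >= (1/epsilon) * (ln|H| + ln(1/delta)) examples, any hypothesis
    consistent with the sample has error <= epsilon with prob. >= 1 - delta."""
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / epsilon)

# e.g. |H| = 2**20 Boolean rules, epsilon = 0.05, delta = 0.01 -> 370 examples
print(pac_sample_bound(2 ** 20, 0.05, 0.01))
```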


Journal ArticleDOI
TL;DR: The author presents a simple solution for the committee coordination problem, which encompasses the synchronization and exclusion problems associated with implementing multiway rendezvous, and shows how it can be implemented to develop a family of algorithms.
Abstract: The author presents a simple solution for the committee coordination problem, which encompasses the synchronization and exclusion problems associated with implementing multiway rendezvous, and shows how it can be implemented to develop a family of algorithms. The algorithms use message counts to solve the synchronization problem, and they solve the exclusion problem by using a circulating token or by using auxiliary resources as in the solutions for the dining or drinking philosophers' problems. Results of a simulation study of the performance of the algorithms are presented. The experiments measured the response time and message complexity of each algorithm as a function of variations in the model parameters, including network topology and level of conflict in the system. The results show that the response time for algorithms proposed is significantly better than for existing algorithms, whereas the message complexity is considerably worse. >

111 citations


Proceedings ArticleDOI
01 Jun 1989
TL;DR: Several algorithms are designed and developed to generate the K most critical paths in non-increasing order of their delays, and experimental results show their effectiveness.
Abstract: Path-extracting algorithms are an important part of the timing analysis approach. In this paper we design and develop several algorithms which generate the K most critical paths in non-increasing order of their delays. The effectiveness of these algorithms is shown by experimental results.

95 citations
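
One generic way to produce the K most critical paths of a timing graph in non-increasing delay order, offered as an illustration rather than as the paper's algorithms: best-first search over partial paths, ranked by delay so far plus the exact longest remaining delay from a reverse-topological dynamic program.

```python
import heapq
from collections import defaultdict, deque

def k_most_critical(edges, source, sink, K):
    """Yield up to K source->sink paths of a DAG as (delay, path),
    in non-increasing delay order. edges = [(u, v, delay), ...]."""
    graph, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v, d in edges:
        graph[u].append((v, d))
        indeg[v] += 1
        nodes.update((u, v))
    # Kahn topological order, then the longest remaining delay to the sink.
    order, queue = [], deque(n for n in nodes if indeg[n] == 0)
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, _ in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    rest = {n: float("-inf") for n in nodes}
    rest[sink] = 0.0
    for u in reversed(order):
        for v, d in graph[u]:
            rest[u] = max(rest[u], d + rest[v])
    # Best-first search: the bound (delay + rest) is exact, so completed
    # paths pop off the heap in non-increasing delay order.
    heap, out = [(-rest[source], 0.0, (source,))], []
    while heap and len(out) < K:
        _, delay, path = heapq.heappop(heap)
        u = path[-1]
        if u == sink:
            out.append((delay, path))
            continue
        for v, d in graph[u]:
            if rest[v] != float("-inf"):       # v can still reach the sink
                heapq.heappush(heap, (-(delay + d + rest[v]), delay + d, path + (v,)))
    return out
```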


Journal ArticleDOI
TL;DR: This book provides the first unified and current view of the principal exact computational algorithms that have been developed to date for the analysis of product form queueing networks and suggests the existence of even more efficient algorithms that might be developed.
Abstract: In the last fifteen years queueing networks have come into widespread use as models of computer systems and computer-communication networks. Their popularity derives primarily from the particular class of product form queueing networks that make available a large variety of modeling features and for which a number of efficient computational algorithms have been developed. This book provides the first unified and current view of the principal exact computational algorithms that have been developed to date for the analysis of product form queueing networks. The authors cover both recent important advances such as the Recursion by Chain Algorithm (RECAL), Mean Value Analysis by Chain (MVAC), and the Distribution Analysis by Chain (DAC) algorithms, and established algorithms such as the Convolution Algorithm and Mean Value Analysis within the context of a unified theory based on the notions of decomposition and aggregation. The theory gives a useful general constructive methodology for the development of computational algorithms and unifies the seemingly unconnected developments that have taken place. It also provides intuitive insight into the problem of constructing efficient algorithms and suggests the existence of other even more efficient algorithms that might be developed. Adrian E. Conway is Senior Member of Technical Staff at GTE Laboratories Incorporated. Nicolas D. Georganas is Dean of the Faculty of Engineering at the University of Ottawa. "Queueing Networks: Exact Computational Algorithms" is included in the Computer Systems series, edited by Herb Schwetman.

73 citations
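
Of the algorithms the book unifies, plain Mean Value Analysis is compact enough to show here. A minimal sketch of the exact single-class recursion (the chain-based RECAL, MVAC, and DAC algorithms generalize far beyond this):

```python
def mva(demands, customers, think_time=0.0):
    """Exact Mean Value Analysis for a closed, single-class, product-form
    network of queueing stations. demands[k] = visit ratio * mean service
    time at station k. Returns throughput, residence times, queue lengths."""
    q = [0.0] * len(demands)                  # queue lengths at population n - 1
    for n in range(1, customers + 1):
        r = [d * (1.0 + qk) for d, qk in zip(demands, q)]  # arrival theorem
        x = n / (think_time + sum(r))                      # system throughput
        q = [x * rk for rk in r]                           # Little's law per station
    return x, r, q

# e.g. CPU plus two disks, 20 customers:
print(mva([0.05, 0.04, 0.03], 20))
```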


Journal ArticleDOI
TL;DR: Algorithms are presented for improving the overall effectiveness and efficiency of the decomposition-simulation approach for multi-area reliability evaluation; they provide substantial reductions in computational time, alleviating one of the drawbacks of this approach.
Abstract: Algorithms are presented for improving the overall effectiveness and efficiency of the decomposition-simulation approach for multi-area reliability evaluation. An algorithm for developing a multi-area load model is first described. This algorithm is based on clustering concepts and captures correlation between area loads. Next, algorithms for simplifying frequency calculations are described. These algorithms provide substantial reductions in the computational time requirement, thus alleviating one of the drawbacks of the decomposition-simulation approach. System studies are presented to demonstrate the relative merits of all the algorithms introduced here.

60 citations
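
The clustering-based load model can be illustrated generically: cluster the simultaneous hourly area-load vectors so that each centroid, weighted by its relative frequency, becomes one state of a discrete joint load model that preserves inter-area correlation. A numpy-only k-means sketch under that assumption; the data layout and state count are hypothetical, and the paper's actual clustering procedure may differ.

```python
import numpy as np

def load_model(hourly_loads, n_states, iters=50, seed=0):
    """hourly_loads: (hours, areas) array of simultaneous area loads.
    Returns (states, probs): n_states centroids and their probabilities,
    a discrete joint load model that keeps area-load correlation."""
    rng = np.random.default_rng(seed)
    x = np.asarray(hourly_loads, float)
    centers = x[rng.choice(len(x), n_states, replace=False)]
    for _ in range(iters):
        # Assign each hour to its nearest centroid, then recompute centroids.
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_states):
            if (labels == k).any():
                centers[k] = x[labels == k].mean(axis=0)
    probs = np.bincount(labels, minlength=n_states) / len(x)
    return centers, probs
```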


Proceedings ArticleDOI
30 Oct 1989
TL;DR: An Ω((log n)^2) bound on the probabilistic communication complexity of monotonic st-connectivity is proved, and it is deduced that every nonmonotonic NC^1 circuit for st-connectivity requires a constant fraction of negated input variables.
Abstract: The authors demonstrate an exponential gap between deterministic and probabilistic complexity and between the probabilistic complexity of monotonic and nonmonotonic relations. They then prove, as their main result, an Ω((log n)^2) bound on the probabilistic communication complexity of monotonic st-connectivity. From this they deduce that every nonmonotonic NC^1 circuit for st-connectivity requires a constant fraction of negated input variables.

Proceedings ArticleDOI
01 Jan 1989
Abstract: In probabilistic structural analysis, the performance or response functions are usually implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based second-moment method, which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computation than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
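
The mean-based second-moment baseline the paper improves on is easy to sketch: linearize the response at the input means and propagate variances. The AMV correction step (re-evaluating the exact response along the mean-value locus) is deliberately omitted below; the finite-difference gradient and the independence of the inputs are assumptions of this sketch.

```python
import numpy as np

def mean_value_moments(g, mu, sigma, h=1e-6):
    """First-order mean-value approximation of the response moments:
    linearize g at the means, so mean ~= g(mu) and
    var ~= sum_i (dg/dx_i * sigma_i)^2 for independent inputs.
    AMV improves on this by re-evaluating the exact g along the
    mean-value most-probable-point locus; that step is omitted here."""
    mu = np.asarray(mu, float)
    grad = np.array([
        (g(mu + h * e) - g(mu - h * e)) / (2 * h)   # central differences
        for e in np.eye(len(mu))
    ])
    mean = g(mu)
    var = np.sum((grad * np.asarray(sigma)) ** 2)
    return mean, var
```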

Journal ArticleDOI
Eric Allender
TL;DR: In this paper, it is shown that if secure generators exist, then there are fast deterministic simulations of probabilistic algorithms; the nature of the simulations and the class of algorithms for which simulations can be exhibited depends on the notion of security which is assumed.


Book ChapterDOI
01 Dec 1989
TL;DR: It is concluded that learning and inference algorithms need to be more closely coupled in the area of probabilistic production rules.
Abstract: In this paper we address the problem of learning sets of probabilistic rules. While attention has been focused for some time now on the learning of fixed classification rule structures such as decision trees, the problem of identifying useful sets of probabilistic production rules is relatively new. We discuss our recent work in this area and conclude that learning and inference algorithms need to be more closely coupled.

Journal ArticleDOI
TL;DR: Two power-series algorithms are described, one building on the dual-affine scaling algorithm and the other on a primal-dual path-following algorithm; empirically they accelerate convergence by reducing the number of iterations.
Abstract: Many interior-point linear programming algorithms have been proposed since the Karmarkar algorithm for linear programming problems appeared in 1984. These algorithms follow tangent (first-order) approximations to the families of continuous trajectories that underlie them. This paper describes power-series variants of such algorithms that follow higher-order, truncated, power-series approximations to such trajectories. The choice of the power-series parameter is important to the performance of such algorithms, and this paper describes an apparently good choice of parameter. We describe two power-series algorithms; one builds on the dual-affine scaling algorithm and the other on a primal-dual path-following algorithm. Empirical results indicate that, compared to first-order methods, these higher-order power-series algorithms accelerate convergence by reducing the number of iterations. Both of these power-series algorithms have been successfully implemented in the AT&T KORBX® system.
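
For orientation, here is the tangent (first-order) move that such power-series methods extend, in its primal affine-scaling form. The paper's dual-affine and primal-dual variants differ in detail, so this is a sketch of the algorithm family, not of the KORBX implementations.

```python
import numpy as np

def affine_scaling_step(A, c, x, frac=0.66):
    """One tangent (first-order) primal affine-scaling step for
    min c'x  s.t.  Ax = b, x > 0, from an interior feasible x.
    Power-series algorithms follow higher-order truncated expansions
    of such trajectories; this shows only the first-order move."""
    D2 = np.diag(x ** 2)                             # scaling by the current iterate
    w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)    # dual estimate
    dx = -D2 @ (c - A.T @ w)                         # descent direction; A @ dx = 0
    neg = dx < 0
    step = frac * np.min(-x[neg] / dx[neg]) if neg.any() else 1.0  # stay interior
    return x + step * dx
```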

Journal ArticleDOI
Guangye Li
TL;DR: Two algorithms for solving sparse nonlinear systems of equations, with some promise of being very effective in practice, are presented: the CM-successive column correction algorithm and a modified CM-successive column correction algorithm.
Abstract: This paper presents two algorithms for solving sparse nonlinear systems of equations: the CM-successive column correction algorithm and a modified CM-successive column correction algorithm. A q-superlinear convergence theorem and an r-convergence order estimate are given for both algorithms. Some numerical results and detailed comparisons with some previously established algorithms show that the new algorithms have some promise of being very effective in practice.
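
For contrast with the paper's column-correction updates, the classical dense Broyden iteration below shows the quasi-Newton setting; the CM-successive column correction algorithms instead update the sparse Jacobian approximation one column at a time to preserve sparsity, which this generic sketch does not attempt.

```python
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=100):
    """Classical Broyden quasi-Newton iteration for F(x) = 0, as a dense
    baseline for the sparse column-correction methods of the paper."""
    x = np.asarray(x0, float)
    B = np.eye(len(x))                         # initial Jacobian approximation
    Fx = F(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -Fx)            # quasi-Newton step
        x = x + s
        Fnew = F(x)
        if np.linalg.norm(Fnew) < tol:
            break
        y = Fnew - Fx
        B += np.outer(y - B @ s, s) / (s @ s)  # Broyden rank-one secant update
        Fx = Fnew
    return x
```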


Proceedings Article
01 Jan 1989
TL;DR: The first explicit constructions of highly expanding bipartite multigraphs (called here dispersers) appeared in [Sze, AKS87]; these constructions are used and generalized to show that the error probability of any RP (BPP) algorithm can be made exponentially small.
Abstract: Sipser [Sip86] was the first to suggest the use of highly expanding bipartite multigraphs (called here dispersers) to drastically reduce the error of probabilistic algorithms at the cost of few additional random bits. The first explicit constructions of such graphs appeared in [Sze, AKS87]. We use and generalize these constructions to obtain: (1) the error probability of any RP (BPP) algorithm can be made exponentially small at the cost of only a constant factor increase in the number of random bits; (2) simulations of RP (BPP) algorithms with some weak bit-fixing sources.

Journal ArticleDOI
TL;DR: The polynomial average-case behaviour of the well-known switching algorithm proposed by Lin and Kernighan for the Euclidean Travelling Salesman Problem is derived theoretically, yielding a bound of O(n^18) with probability 1 − c/n, whereas in practice the algorithm works slightly better.
Abstract: The well-known switching algorithm proposed by Lin and Kernighan for the Euclidean Travelling Salesman Problem has proved to be a simple, efficient algorithm for medium-size problems (though it often gets trapped in local optima). Although its complexity status is still open, it has been observed to be polynomially bounded in practice when applied to uniformly distributed points in the unit square. In this paper this polynomial behaviour is derived theoretically. (However, we come up with a bound of O(n^18) with probability 1 − c/n, whereas in practice the algorithm works slightly better.)
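
The switching move itself is easy to exhibit. Below is the simplest (2-opt) form of Lin-Kernighan-style switching for Euclidean instances, a minimal sketch rather than the full variable-depth algorithm analyzed in the paper.

```python
import math

def two_opt(points, tour):
    """Simplest 'switching' local search: reverse a tour segment whenever
    the switch shortens the Euclidean tour, until no improving switch
    exists (a local optimum). Lin-Kernighan generalizes this to deeper,
    variable-depth switches."""
    def d(a, b):
        return math.dist(points[a], points[b])
    n, improved = len(tour), True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n - (i == 0)):   # skip adjacent edge pairs
                a, b = tour[i], tour[i + 1]
                c, e = tour[j], tour[(j + 1) % n]
                if d(a, c) + d(b, e) < d(a, b) + d(c, e) - 1e-12:
                    tour[i + 1 : j + 1] = reversed(tour[i + 1 : j + 1])
                    improved = True
    return tour
```

In practice such local search is run from several random starting tours, since (as the abstract notes) it often gets trapped in local optima.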

Book ChapterDOI
26 Sep 1989
TL;DR: The present paper was motivated by Schieber and Snir's work [SS89] and in particular by their list of open questions.
Abstract: In [Ang80] it was shown that, from symmetry considerations, there is no deterministic algorithm to elect a leader (i.e., to break the symmetry) in a general anonymous network [ASW85]. Following [Ang80], many probabilistic algorithms for electing a leader and/or breaking the symmetry were proposed [IR81, ASW85, AAHK86, CV86, FS86, SS89]. However, only Schieber and Snir have considered the general case of an arbitrary-topology asynchronous network. The present paper was motivated by their work [SS89] and in particular by their list of open questions.
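
A toy, fully centralized illustration of why randomization circumvents the deterministic impossibility: repeated fair coin flips among indistinguishable candidates break symmetry in expected O(log n) rounds. The cited papers achieve this with messages in an anonymous network; nothing below models that.

```python
import random

def break_symmetry(n):
    """n indistinguishable candidates repeatedly flip fair coins; whenever
    at least one flips heads, the tails drop out. Identities below exist
    only for bookkeeping, not as distinguishing information."""
    candidates = list(range(n))
    rounds = 0
    while len(candidates) > 1:
        rounds += 1
        heads = [c for c in candidates if random.random() < 0.5]
        if heads:                    # someone distinguished themselves this round
            candidates = heads
    return candidates[0], rounds
```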

Journal ArticleDOI
TL;DR: Through Monte Carlo simulation, the statistical characteristics and frequency histogram of the kinematic errors are analysed by computer, with the spatial RCSR linkage taken as a numerical example.
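
The Monte Carlo step can be shown generically, with the actual RCSR linkage kinematics left abstract: sample the dimension errors, push each sample through a position function, and read statistics off the results. All names and tolerances below are hypothetical.

```python
import random
import statistics

def mc_kinematic_error(position_fn, nominal, tolerances, trials=10_000):
    """Propagate random dimension errors through a kinematic position
    function and summarize the resulting output-error distribution."""
    errors = []
    for _ in range(trials):
        dims = [x + random.gauss(0.0, s) for x, s in zip(nominal, tolerances)]
        errors.append(position_fn(dims) - position_fn(nominal))
    return statistics.mean(errors), statistics.stdev(errors)

# e.g. a stand-in 'position function' of three link lengths:
mean_err, std_err = mc_kinematic_error(lambda d: d[0] + 0.5 * d[1] - d[2],
                                        [100.0, 50.0, 30.0], [0.05, 0.05, 0.02])
```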

Journal ArticleDOI
TL;DR: Two algorithms are presented to parallelize the identification of clusters on a lattice, a step necessary for a variety of problems such as percolation and non-local spin-update algorithms.
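
The serial computation being parallelized can be stated compactly: union-find (Hoshen-Kopelman-style) labeling of occupied lattice sites into connected clusters, as needed by percolation studies and non-local (e.g. Swendsen-Wang) spin updates. A minimal serial sketch; the paper's contribution is parallelizing this step.

```python
def label_clusters(grid):
    """Union-find labeling of occupied sites on a 2-D lattice: returns a
    map site -> cluster representative for every occupied site."""
    parent = {}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]      # path halving
            a = parent[a]
        return a
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra
    rows, cols = len(grid), len(grid[0])
    for i in range(rows):
        for j in range(cols):
            if grid[i][j]:
                parent[(i, j)] = (i, j)
                if i and grid[i - 1][j]:       # merge with occupied neighbor above
                    union((i - 1, j), (i, j))
                if j and grid[i][j - 1]:       # merge with occupied neighbor left
                    union((i, j - 1), (i, j))
    return {site: find(site) for site in parent}
```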

01 Jan 1989
TL;DR: The main result of this work is a real-valued function, denoted $\Omega$, that characterizes the proximal relationship of two non-point convex bodies and, unlike similar functions, provides useful information when the sets under consideration intersect.
Abstract: An important problem in robotics is to determine an obstacle avoidance control which transfers the system along a collision-free path among obstacles in the work space. Most algorithms which have been developed to solve this problem are based on a mapping of the given problem into a finite state space, which always resulted in a problem that can be solved by a computer but has high computational complexity. Certain previously developed algorithms worked directly with the continuous problem space and are called continuum algorithms. These algorithms were relatively unsophisticated when compared to their finite-state counterparts. This observation served as a motivation for the research reported here, in which it was sought to provide continuum algorithms with a more rigorous theoretical base. The main result of this work is the development of a real-valued function, denoted by the symbol $\Omega$ and called Omega, which characterizes the proximal relationship of two non-point convex bodies. This function differs from similar functions reported previously in that it also provides useful information when the sets under consideration intersect. In this thesis, the need for such a function is established, and certain computationally useful properties of the developed function are proven. These results are subsequently used in a path-planning algorithm based on the use of the $\Omega$ function and the Potential Function approach. Results of this algorithm are presented and compared with those of previously developed algorithms.


01 Jan 1989
TL;DR: In this article, it was shown that for any integer n and ε > 0, a pseudorandom bit generator can be computed by uniform polynomial-size, constant-depth, unbounded fan-in circuits.
Abstract: We explicitly construct, for every integer n and ε > 0, a family of functions (pseudo-random bit generators) f_{n,ε}: {0,1}^{n^ε} → {0,1}^n with the following property: for a random seed, the pseudorandom output "looks random" to any polynomial-size, constant-depth, unbounded fan-in circuit. Moreover, the functions f_{n,ε} themselves can be computed by uniform polynomial-size, constant-depth circuits. Some (interrelated) consequences of this result are given below. 1) Deterministic simulation of probabilistic algorithms. The constant-depth analogues of the probabilistic complexity classes RP and BPP are contained in the deterministic complexity classes DSPACE(n^ε) and DTIME(2^{n^ε}) for any ε > 0. 2) Making probabilistic constructions deterministic. Some probabilistic constructions of structures that elude explicit constructions can be simulated in the above complexity classes. 3) Approximate counting. The number of satisfying assignments to a (CNF or DNF) formula, if not too small, can be arbitrarily well approximated in DSPACE(n^ε) and DTIME(2^{n^ε}), for any ε > 0. We also present two results for the special case of depth-2 circuits. They deal, respectively, with finding a satisfying assignment and approximately counting the number of assignments. For example, for 3-CNF formulas with a fixed fraction of satisfying assignments, both tasks can be performed in polynomial time!


Journal ArticleDOI
TL;DR: The paper gives a state-of-the-art survey of the development of approximation scheduling algorithms with proven worst-case performance guarantees.
Abstract: The paper gives a state-of-the-art survey of the field of approximation scheduling algorithms. Main attention is concentrated on algorithms with proven worst-case performance guarantees.

Proceedings ArticleDOI
16 Dec 1989
TL;DR: The preliminary results presented here demonstrate that these algorithms have remarkably lower complexity and cost, work well under model variability, and perform nearly optimally.
Abstract: We consider real-time sequential detection and estimation problems for non-Gaussian signal and noise models. We develop optimal algorithms and several architectures for real-time implementation based on numerical algorithms, including asynchronous implementations of multigrid algorithms. These implementations are of high complexity, costly, and cannot easily accommodate model variability. We then propose and analyze a different class of algorithms, which are symbolic, of the neural-network type. The preliminary results presented here demonstrate that these algorithms have remarkably lower complexity and cost, work well under model variability, and perform nearly optimally. We also discuss how algorithms of this type are incorporated into the DELPHI system for integrated design of signal processing systems.
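
As background for the sequential-detection setting, Wald's sequential probability ratio test is the classical real-time detector: accumulate the log-likelihood ratio sample by sample and stop at the first threshold crossing. A generic sketch, not the paper's non-Gaussian multigrid or neural implementations.

```python
import math

def sprt(samples, llr, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test. llr(x) returns
    log( p(x | H1) / p(x | H0) ); thresholds come from Wald's
    approximations A ~ (1 - beta) / alpha and B ~ beta / (1 - alpha)."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    s, n = 0.0, 0
    for n, x in enumerate(samples, start=1):
        s += llr(x)                 # accumulate evidence one sample at a time
        if s >= upper:
            return "accept H1", n
        if s <= lower:
            return "accept H0", n
    return "undecided", n
```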