
Showing papers on "Probabilistic analysis of algorithms published in 1982"


Journal ArticleDOI
TL;DR: This correspondence is concerned with the development of algorithms for special-purpose VLSI arrays; the approach used is to identify algorithm transformations which favorably modify the index set and the data dependences, but preserve the ordering imposed on the index set by the data dependences.
Abstract: This correspondence is concerned with the development of algorithms for special-purpose VLSI arrays. The approach used in this correspondence is to identify algorithm transformations which favorably modify the index set and the data dependences, but preserve the ordering imposed on the index set by the data dependences. Conditions for the existence of such transformations are given for a class of algorithms. Also, a methodology is proposed for the synthesis of VLSI algorithms.

164 citations
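
The central condition in this line of work is that a transformation of the index set must not reverse any data dependence. As a rough illustration (not the paper's own synthesis procedure), the sketch below checks whether a candidate linear transformation maps every dependence vector of a loop nest to a lexicographically positive vector; the matrix-multiplication-style dependence vectors and the skewing matrix are assumed examples.

```python
import numpy as np

def lexicographically_positive(v):
    """True if the first nonzero entry of v is positive (the zero vector is not)."""
    for x in v:
        if x != 0:
            return x > 0
    return False

def preserves_dependences(T, dependences):
    """Check that a linear index-set transformation T maps every dependence
    vector to a lexicographically positive vector, i.e. the ordering imposed
    on the index set by the dependences is preserved."""
    return all(lexicographically_positive(T @ d) for d in dependences)

# Dependence vectors of a simple 3-nested matrix-multiplication loop
# (a standard illustrative example, not taken from the paper).
deps = [np.array([0, 0, 1]), np.array([0, 1, 0]), np.array([1, 0, 0])]

# A candidate transformation: skew the indices so the first transformed
# coordinate can serve as a "time" axis for a systolic array.
T = np.array([[1, 1, 1],
              [0, 1, 0],
              [0, 0, 1]])

print(preserves_dependences(T, deps))   # True: ordering is preserved
print(preserves_dependences(-T, deps))  # False: ordering would be reversed
```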


Journal ArticleDOI
TL;DR: A model of algorithm design based on the analysis of the protocols of two subjects designing three convex hull algorithms is presented, observing that the time a subject takes to design an algorithm is proportional to the number of components in the algorithm's data-flow representation.
Abstract: By studying the problem-solving techniques that people use to design algorithms we can learn something about building systems that automatically derive algorithms or assist human designers. In this paper we present a model of algorithm design based on our analysis of the protocols of two subjects designing three convex hull algorithms. The subjects work mainly in a data-flow problem space in which the objects are representations of partially specified algorithms. A small number of general-purpose operators construct and modify the representations; these operators are adapted to the current problem state by means-ends analysis. The problem space also includes knowledge-rich schemas such as divide and conquer that subjects incorporate into their algorithms. A particularly versatile problem-solving method in this problem space is symbolic execution, which can be used to refine, verify, or explain components of an algorithm. The subjects also work in a task-domain space about geometry. The interplay between problem solving in the two spaces makes possible the process of discovery. We have observed that the time a subject takes to design an algorithm is proportional to the number of components in the algorithm's data-flow representation. Finally, the details of the problem spaces provide a model for building a robust automated system.

119 citations



Journal ArticleDOI
TL;DR: Methods of successive approximation for solving linear systems or minimization problems are accelerated by aggregation-disaggregation processes; these processes are characterized by means of Galerkin approximations, which in turn permits analysis of the methods.

81 citations
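
A minimal numerical sketch of the general idea, not the paper's specific scheme: damped Jacobi sweeps for Ax = b are accelerated by a coarse correction obtained from a Galerkin-type projection of the residual equation onto piecewise-constant aggregates. The test matrix, aggregation pattern, and damping factor are illustrative assumptions.

```python
import numpy as np

def aggregation_matrix(n, n_agg):
    """Piecewise-constant aggregation: P[i, j] = 1 if fine index i belongs to aggregate j."""
    P = np.zeros((n, n_agg))
    size = n // n_agg
    for i in range(n):
        P[i, min(i // size, n_agg - 1)] = 1.0
    return P

def aggregated_jacobi(A, b, n_agg=5, sweeps=50):
    """Damped Jacobi sweeps accelerated by a coarse Galerkin correction:
    solve (P^T A P) e = P^T r on the aggregates and disaggregate P e back."""
    n = len(b)
    P = aggregation_matrix(n, n_agg)
    D_inv = 1.0 / np.diag(A)
    x = np.zeros(n)
    for _ in range(sweeps):
        x = x + 0.7 * D_inv * (b - A @ x)        # fine-level relaxation
        r = b - A @ x
        A_c = P.T @ A @ P                        # Galerkin coarse (aggregated) operator
        e_c = np.linalg.solve(A_c, P.T @ r)      # coarse correction
        x = x + P @ e_c                          # disaggregate and correct
    return x

# Illustrative test problem: 1-D Poisson matrix.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = aggregated_jacobi(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```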


Proceedings ArticleDOI
03 Nov 1982
TL;DR: The average-case behaviour of the Next-Fit algorithm for bin-packing is analyzed, closed-form expressions for distributions of interest are obtained, and asymptotically perfect packing is established.
Abstract: We analyze the average-case behaviour of the Next-Fit algorithm for bin-packing, and obtain closed-form expressions for distributions of interest. Our analysis is based on a novel technique of partitioning the interval (0, 1) suitably and then formulating the problem as a matrix-differential equation. We compare our analytic results with previously known simulation results and show that there is an excellent agreement between the two. We also explain a certain empirically observed anomaly in the behaviour of the algorithm. Finally we establish that asymptotically perfect packing is possible when input items are drawn from a monotonically decreasing density function.

44 citations
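
The paper's analysis is exact; the short simulation below merely reproduces the kind of empirical comparison it refers to, running Next-Fit on items drawn uniformly from (0, 1) and reporting the ratio of bins used to the total item size. The sample sizes are arbitrary choices.

```python
import random

def next_fit(items, capacity=1.0):
    """Next-Fit bin packing: keep a single open bin; if the next item
    does not fit, close the bin and open a new one."""
    bins, level = 0, capacity  # start "full" so the first item opens a bin
    for x in items:
        if level + x > capacity:
            bins += 1
            level = x
        else:
            level += x
    return bins

random.seed(0)
n, trials = 100_000, 5
for _ in range(trials):
    items = [random.random() for _ in range(n)]
    used = next_fit(items)
    # For uniform items this ratio tends toward about 4/3, a classical result.
    print(f"bins used / total item size = {used / sum(items):.4f}")
```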


Journal ArticleDOI
Stathis Zachos1
TL;DR: It is shown that many definitions that arise naturally from different types of algorithms are equivalent in defining the same class R (or ZPP, respectively).
Abstract: Various types of probabilistic algorithms play an increasingly important role in computer science, especially in computational complexity theory. Probabilistic polynomial time complexity classes are defined and compared to each other, emphasizing some structural relationships to the known complexity classes P, NP, PSPACE. The classes R and ZPP, corresponding to the so-called Las Vegas polynomial time bounded algorithms, are given special attention. It is shown that many definitions that arise naturally from different types of algorithms are equivalent in defining the same class R (or ZPP, respectively). These robustness results ultimately justify the tractability of the above probabilistic classes.

41 citations
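
For concreteness, a textbook-style Las Vegas routine (not taken from the paper): the answer it returns is always correct, and only the number of random probes varies, with constant expectation here. ZPP-style algorithms are the polynomial-expected-time analogue of this behaviour.

```python
import random

def find_one_las_vegas(bits):
    """Las Vegas search: probe random positions until a 1 is found.
    The returned index is always correct; only the number of probes is random.
    If half the entries are 1, the expected number of probes is 2."""
    n = len(bits)
    probes = 0
    while True:
        probes += 1
        i = random.randrange(n)
        if bits[i] == 1:
            return i, probes

random.seed(1)
n = 1_000_000
bits = [1] * (n // 2) + [0] * (n // 2)
random.shuffle(bits)
total_probes = sum(find_one_las_vegas(bits)[1] for _ in range(10_000))
print("average probes:", total_probes / 10_000)  # close to 2
```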


Journal Article
TL;DR: A number of tests based on the likelihood ratio statistic have been developed; these tests and the available information on their power are summarized in this paper, together with methods for comparing models whose utility functions have substantially different functional forms or that are based on different behavioural paradigms.
Abstract: Probabilistic choice models, such as logit and probit models, are highly sensitive to a variety of specification errors, including the use of incorrect functional forms for the systematic component of the utility function, incorrect specification of the probability distribution of the random component of the utility function, and incorrect specification of the choice set. Specification errors can cause large forecasting errors, so it is of considerable importance to have means of testing models for the presence of these errors. A number of tests based on the likelihood ratio statistic have been developed. These tests and available information on their power are summarized in this paper. The likelihood ratio test can entail considerable computational difficulty, owing to the need to evaluate the likelihood function for both the null and alternative hypotheses. Substantial gains in computational efficiency can be achieved through the use of a test that requires evaluating the likelihood function only for the null hypothesis. A Lagrange multiplier test that has this property is described, and numerical examples of its computational properties are given. An important disadvantage of conventional specification tests is that they do not permit comparisons of models that belong to different parametric families in order to determine which model best explains the available data. Thus, these tests cannot be used to compare models whose utility functions have substantially different functional forms or models that are based on different behavioural paradigms. Several methods for dealing with this problem, including the construction of hybrid models and the Cox test of separate families of hypotheses, are described. (Author/TRRL)

33 citations
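
A minimal sketch of the likelihood ratio test for nested binary logit specifications, using statsmodels; the simulated data and the particular null hypothesis (wrongly omitting one regressor) are assumptions for illustration. The Lagrange multiplier test discussed in the paper would avoid fitting the alternative model altogether.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 2000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
p = 1 / (1 + np.exp(-(0.5 + 1.0 * x1 + 0.5 * x2)))   # "true" utility uses both regressors
y = rng.binomial(1, p)

X_alt = sm.add_constant(np.column_stack([x1, x2]))    # alternative: full specification
X_null = sm.add_constant(x1)                          # null: x2 wrongly omitted

ll_alt = sm.Logit(y, X_alt).fit(disp=0).llf           # log-likelihood under the alternative
ll_null = sm.Logit(y, X_null).fit(disp=0).llf         # log-likelihood under the null

lr = 2 * (ll_alt - ll_null)                           # likelihood ratio statistic
df = X_alt.shape[1] - X_null.shape[1]
print(f"LR = {lr:.2f}, p-value = {chi2.sf(lr, df):.4g}")
```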


Proceedings ArticleDOI
01 Jan 1982
TL;DR: In this article, practical procedures are described for integrating around faulty cells on a wafer; the procedures minimize the length of the longest wire in the network, thus minimizing the communication time between cells, and are proved reliable under a probabilistic model of cell failure.
Abstract: VLSI technologists are fast developing wafer-scale integration. Rather than partitioning a silicon wafer into chips as is usually done, the idea behind wafer-scale integration is to assemble an entire system (or network of chips) on a single wafer, thus avoiding the costs and performance loss associated with individual packaging of chips. A major problem with assembling a large system of microprocessors on a single wafer, however, is that some of the processors, or cells, on the wafer are likely to be defective. In the paper, we describe practical procedures for integrating "around" such faults. The procedures are designed to minimize the length of the longest wire in the system, thus minimizing the communication time between cells. Although the underlying network problems are NP-complete, we prove that the procedures are reliable by assuming a probabilistic model of cell failure. We also discuss applications of the work to problems in VLSI layout theory, graph theory, fault-tolerant systems, planar geometry, and the probabilistic analysis of algorithms.

30 citations
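
As a toy illustration of the objective only (decidedly not the paper's procedures, which achieve far better bounds), the sketch below strings the live cells of a randomly faulted grid into a chain by a naive snake-order traversal and reports the longest wire that results, under the independent-failure model mentioned in the abstract. The grid size and failure probability are assumptions.

```python
import random

def snake_chain(rows, cols, faulty):
    """Visit the grid in boustrophedon (snake) order, skipping faulty cells,
    and return the resulting chain of live cells."""
    chain = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            if (r, c) not in faulty:
                chain.append((r, c))
    return chain

def longest_wire(chain):
    """Manhattan length of the longest wire between consecutive cells in the chain."""
    return max(abs(r1 - r2) + abs(c1 - c2)
               for (r1, c1), (r2, c2) in zip(chain, chain[1:]))

random.seed(2)
rows = cols = 20
p_fail = 0.2   # probabilistic model: each cell fails independently with this probability
faulty = {(r, c) for r in range(rows) for c in range(cols) if random.random() < p_fail}
chain = snake_chain(rows, cols, faulty)
print("live cells:", len(chain), " longest wire:", longest_wire(chain))
```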


Journal ArticleDOI
TL;DR: In this article, a family of hierarchical algorithms for nonlinear structural equations is presented, based on the Davidenko-Branin type homotopy and shown to yield consistent hierarchical perturbation equations.
Abstract: A family of hierarchical algorithms for nonlinear structural equations is presented. The algorithms are based on the Davidenko-Branin type homotopy and shown to yield consistent hierarchical perturbation equations. The algorithms appear to be particularly suitable to problems involving bifurcation and limit point calculations. An important by-product of the algorithms is that they provide a systematic and economical means for computing the stepsize at each iteration stage when a Newton-like method is employed to solve the systems of equations. Some sample problems are provided to illustrate the characteristics of the algorithms.

25 citations
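
A compact sketch of homotopy continuation in the Davidenko style on a small algebraic system (not the structural equations or the hierarchical scheme of the paper): the Newton homotopy H(x, t) = F(x) - (1 - t)F(x0) is differentiated with respect to t, the resulting ODE dx/dt = -J(x)^{-1} F(x0) is integrated with explicit Euler steps, and a few Newton iterations polish the endpoint. The example system and step counts are assumptions.

```python
import numpy as np

def F(x):
    """Illustrative nonlinear system (not from the paper)."""
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[0] * x[1] - 1.0])

def J(x):
    """Jacobian of F."""
    return np.array([[2 * x[0], 2 * x[1]],
                     [x[1],     x[0]]])

def davidenko_homotopy(x0, steps=100):
    """Track H(x, t) = F(x) - (1 - t) F(x0) = 0 from t = 0 to t = 1 by
    integrating Davidenko's ODE dx/dt = -J(x)^{-1} F(x0), then polish
    the endpoint with a few Newton iterations on F itself."""
    x = np.array(x0, dtype=float)
    f0 = F(x)
    dt = 1.0 / steps
    for _ in range(steps):                      # explicit Euler along the homotopy path
        x = x - dt * np.linalg.solve(J(x), f0)
    for _ in range(5):                          # Newton correction at t = 1
        x = x - np.linalg.solve(J(x), F(x))
    return x

x = davidenko_homotopy([2.0, 1.0])
print(x, "residual:", np.linalg.norm(F(x)))
```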


Journal ArticleDOI
TL;DR: In this article, a probabilistic methodology is presented for liquefaction analysis of horizontally layered sand deposits, subjected to vertically propagating seismic S waves, and three different deposit models are developed.
Abstract: A probabilistic methodology is presented for liquefaction analysis of horizontally layered sand deposits, subjected to vertically propagating seismic S waves. Three different deposit models are developed. The models are effectively one-dimensional and differ in that they include or neglect pore pressure diffusion and stiffness reduction due to pore pressure buildup. Conclusions are drawn about sensitivity of the results to different assumptions about the mechanical behavior of the deposit, the effect of vertical and horizontal variations of soil properties, and about the importance of statistical variability of the response spectrum at bedrock given peak acceleration. An approximate procedure is proposed for the calculation of the probability of almost complete layer liquefaction, for given input motion and given layer average SPT results. Predictions from the approximate procedure are in good agreement with historical data.

24 citations


Journal ArticleDOI
TL;DR: Two linear time algorithms are shown to solve the n-object SUBSET-SUM problem with probability approaching 1 as n gets large, for a uniform instance distribution.

Book ChapterDOI
John H. Reif1
12 Jul 1982
TL;DR: It is shown that parallelism uniformly speeds up time bounded probabilistic sequential RAM computations by nearly a quadratic factor, and that probabilistic choice can be eliminated from parallel computation by introducing nonuniformity.
Abstract: This paper introduces probabilistic choice to synchronous parallel machine models, in particular parallel RAMs. The power of probabilistic choice in parallel computations is illustrated by parallelizing some known probabilistic sequential algorithms. We characterize the computational complexity of time, space, and processor bounded probabilistic parallel RAMs in terms of the computational complexity of probabilistic sequential RAMs. We show that parallelism uniformly speeds up time bounded probabilistic sequential RAM computations by nearly a quadratic factor. We also show that probabilistic choice can be eliminated from parallel computations by introducing nonuniformity.

Journal ArticleDOI
TL;DR: The present paper is a bibliography which contains 70 references dealing with probabilistic evaluation of time complexity and performance accuracy of deterministic algorithms for combinatorial decision and optimization problems.
Abstract: Probabilistic methods in evaluation of performance efficiency of combinatorial optimization algorithms are of continuously growing interest, and rapidly increasing effort by researchers in this field is observed. The present paper is a bibliography which contains 70 references dealing with probabilistic evaluation of time complexity and performance accuracy of deterministic algorithms for combinatorial decision and optimization problems. Some entries of the bibliography, mainly those having appeared in journals and nonperiodical issues of limited distribution, are briefly annotated (18 references). Basic notions and definitions facilitating better understanding and plain presentation of the different results are given.


Book ChapterDOI
01 Jan 1982
TL;DR: The average performance of the LPT processor scheduling algorithm is analyzed, under the assumption that task times are drawn from a uniform distribution on (0,1].
Abstract: The average performance of the LPT processor scheduling algorithm is analyzed, under the assumption that task times are drawn from a uniform distribution on (0,1]. The ratio of the expected length of the schedule to the expected length of a preemptive schedule is shown to be bounded by 1 + O(m^2/n^2), where m is the number of processors and n is the number of tasks.
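
A quick empirical check in the same spirit (the paper's bound is analytic): LPT assigns tasks in decreasing order of length to the currently least-loaded processor, and the makespan is compared with the preemptive lower bound max(total work / m, longest task). The numbers of processors, tasks, and trials below are arbitrary choices.

```python
import heapq
import random

def lpt_makespan(tasks, m):
    """Largest Processing Time first: assign each task, in decreasing order
    of length, to the machine with the smallest current load."""
    loads = [0.0] * m
    heapq.heapify(loads)
    for t in sorted(tasks, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + t)
    return max(loads)

random.seed(3)
m = 5
for n in (50, 200, 1000):
    ratios = []
    for _ in range(200):
        tasks = [random.random() for _ in range(n)]
        preemptive = max(sum(tasks) / m, max(tasks))   # preemptive schedule length
        ratios.append(lpt_makespan(tasks, m) / preemptive)
    print(f"n = {n:5d}: mean ratio = {sum(ratios) / len(ratios):.5f}")  # approaches 1 as n grows
```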

Journal ArticleDOI
TL;DR: This paper considers the problem of maximizing the expected value of multicommodity flows in a network in which the arcs experience probabilistic loss rates; an arc-chain formulation and an efficient algorithm for computing an optimal solution are provided.
Abstract: This paper considers the problem of maximizing the expected value of multicommodity flows in a network in which the arcs experience probabilistic loss rates. Consideration of probabilistic losses is particularly relevant in communication and transportation networks. An arc-chain formulation of the problem and an efficient algorithm for computing an optimal solution are provided. The algorithm involves a modified column generation technique for identifying a constrained chain. Computational experience with the algorithm is included.

Proceedings ArticleDOI
05 May 1982
TL;DR: This paper looks at the question of how fast a probabilistic machine can simulate another; the approach should be of interest in its own right, in view of the great attention that probabilistic algorithms have recently attracted.
Abstract: The results of this paper concern the question of how fast machines with one type of storage media can simulate machines with a different type of storage media. Most work on this question has focused on the question of how fast one deterministic machine can simulate another. In this paper we shall look at the question of how fast a probabilistic machine can simulate another. This approach should be of interest in its own right, in view of the great attention that probabilistic algorithms have recently attracted.

Journal ArticleDOI
TL;DR: In this article, a method for determining the probability of failure is proposed that uses widely available stability charts, with the peak and residual (undrained) shear strengths as the only random variables; the concept of a residual factor is used to specify the proportion of the slip surface which has passed from the peak to the residual strength state.
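
A toy Monte Carlo version of the general idea, not the paper's actual procedure or data: the peak and residual undrained strengths are treated as the only random variables, combined through an assumed residual factor into an average mobilised strength, and a chart-style factor-of-safety formula gives an estimate of the probability of failure. Every distribution and parameter below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

# Illustrative assumptions (not the paper's data): lognormal undrained strengths in kPa.
c_peak = rng.lognormal(mean=np.log(60.0), sigma=0.20, size=n)   # peak strength
c_res  = rng.lognormal(mean=np.log(35.0), sigma=0.25, size=n)   # residual strength
R = 0.4                  # residual factor: fraction of slip surface at residual strength
gamma, H = 18.0, 12.0    # unit weight (kN/m^3) and slope height (m), illustrative
N_s = 6.0                # stability number read off a chart for the assumed geometry

c_avg = (1.0 - R) * c_peak + R * c_res          # average mobilised strength
fos = N_s * c_avg / (gamma * H)                 # chart-style factor of safety
print("estimated probability of failure:", np.mean(fos < 1.0))
```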

Proceedings Article
01 Jan 1982
TL;DR: An endless resilient member is carried by the edge of each flexible member with each resilient member being of a length greater than the peripheral distance around the window opening adjacent thereto and adapted to snap therethrough and then retain a position outwardly thereof.
Abstract: The space between and surrounding aligned window openings of a vehicle cab and a camper carried thereby is sealed by an endless flexible member of sheet material of a length to surround the aligned window openings and of a width greater than the space therebetween. An endless resilient member is carried by the edge of each flexible member with each resilient member being of a length greater than the peripheral distance around the window opening adjacent thereto and adapted to snap therethrough and then retain a position outwardly thereof.

Journal ArticleDOI
TL;DR: This paper explores the problem of finding behavioral models of finite state systems, given observed data of systems which are presumed to lie in this category, and several possible approaches to developing a solution to this general problem are discussed.
Abstract: This paper explores the problem of finding behavioral models of finite state systems, given observed data of systems which are presumed to lie in this category. Following an informal introduction, a formal definition of an identification problem is given, along with the form that a solution to such a problem may be expected to take. The most general type of such problem involving finite state systems is described, and several possible approaches to developing a solution to this general problem are discussed. Several more specialized classes of systems and their identification problems are developed. For these, a set of algorithms is described which may be used to solve them. The complexity of these algorithms is discussed.

Book ChapterDOI
05 Apr 1982
TL;DR: It is shown that the Cantor-Zassenhaus probabilistic step to find a factor over GF(p) of a polynomial that is the product of equal-degree factors takes O(n^3 L^2(p) L_β(p)^2) units of time.
Abstract: We have shown that the Cantor-Zassenhaus probabilistic step to find a factor over GF(p) of a polynomial that is the product of equal-degree factors takes O(n^3 L^2(p) L_β(p)^2) units of time. This is, using classical arithmetic, the cost in Rabin's algorithm to find a root of an irreducible factor of degree d in the extension field GF(p^d). The constants involved in Rabin's algorithm seem to be higher than in the simple Cantor-Zassenhaus test, which is the most promising candidate for a probabilistic algorithm to be compared with the deterministic Berlekamp-Hensel algorithm. A careful analysis, backed up by measurements using current technology in computer algebra, demonstrates that the time has not yet come to beat Berlekamp-Hensel. This is mainly due to the cost of exponentiation both in Berlekamp's Q-matrix and in the probabilistic test of Cantor-Zassenhaus, which makes both algorithms intractable for large primes. In contradistinction, the restriction of the Berlekamp-Hensel algorithm to small primes is computationally its greatest strength.
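
For reference, a self-contained sketch of the Cantor-Zassenhaus probabilistic splitting step over GF(p) for odd p, using plain coefficient-list arithmetic; the demonstration input (a product of distinct linear factors, i.e. the equal-degree case d = 1) is an assumed example, and no attempt is made at the fast arithmetic whose cost the paper analyses.

```python
import random

# Polynomials over GF(p) are lists of coefficients, lowest degree first.

def trim(f):
    """Drop trailing zero coefficients (keep at least one entry)."""
    while len(f) > 1 and f[-1] == 0:
        f.pop()
    return f

def poly_mod(f, g, p):
    """Remainder of f modulo g over GF(p)."""
    f = trim([c % p for c in f])
    inv_lead = pow(g[-1], p - 2, p)          # inverse of g's leading coefficient
    while len(f) >= len(g) and any(f):
        coef = f[-1] * inv_lead % p
        shift = len(f) - len(g)
        for i, gi in enumerate(g):
            f[shift + i] = (f[shift + i] - coef * gi) % p
        f = trim(f)
    return f

def poly_mul_mod(f, g, m, p):
    """Product f*g reduced modulo m over GF(p)."""
    res = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g):
                res[i + j] = (res[i + j] + fi * gj) % p
    return poly_mod(res, m, p)

def poly_pow_mod(f, e, m, p):
    """f**e reduced modulo m over GF(p), by square-and-multiply."""
    result, base = [1], poly_mod(f, m, p)
    while e:
        if e & 1:
            result = poly_mul_mod(result, base, m, p)
        base = poly_mul_mod(base, base, m, p)
        e >>= 1
    return result

def poly_gcd(f, g, p):
    """Monic gcd of f and g over GF(p)."""
    while any(g):
        f, g = g, poly_mod(f, g, p)
    inv = pow(f[-1], p - 2, p)
    return [c * inv % p for c in f]

def cantor_zassenhaus_split(f, d, p):
    """One probabilistic splitting step: f is assumed to be a monic squarefree
    product of at least two irreducible degree-d factors over GF(p), p an odd
    prime.  A random a(x) yields a nontrivial factor gcd(a^((p^d-1)/2) - 1, f)
    with probability roughly 1/2 or better, so few trials are expected."""
    n = len(f) - 1                           # degree of f
    while True:
        a = [random.randrange(p) for _ in range(n)]
        a = trim(a) if any(a) else [1]
        g = poly_gcd(a, f, p)
        if 1 <= len(g) - 1 < n:
            return g                         # lucky: a already shares a factor with f
        b = poly_pow_mod(a, (p ** d - 1) // 2, f, p)
        b = trim([(b[0] - 1) % p] + b[1:])   # a^((p^d-1)/2) - 1 mod f
        if any(b):
            g = poly_gcd(b, f, p)
            if 1 <= len(g) - 1 < n:
                return g

random.seed(5)
p = 103
# Assumed demonstration input: f = (x-2)(x-3)(x-5) mod 103 = x^3 - 10x^2 + 31x - 30,
# a squarefree product of degree-1 factors (the equal-degree case with d = 1).
f = [73, 31, 93, 1]
print("nontrivial monic factor found:", cantor_zassenhaus_split(f, 1, p))
```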

Book ChapterDOI
01 Jan 1982
TL;DR: The value of the optimal solution of a random instance of the Knapsack problem is analyzed and the performance of a simple greedy heuristic for the solution of this problem is evaluated.
Abstract: In this paper the value of the optimal solution of a random instance of the Knapsack problem is analyzed. With respect to this value, the performance of a simple greedy heuristic for the solution of this problem is evaluated. The results are compared with the performance of other greedy heuristics.
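
A small experiment in the same spirit (the instance distribution and the comparison below are assumptions, not the paper's model): random knapsack instances are solved by the greedy heuristic that takes items in decreasing value-to-weight ratio, and the result is compared with the exact optimum from dynamic programming.

```python
import random

def greedy_knapsack(values, weights, capacity):
    """Greedy heuristic: take items in decreasing value/weight ratio while they fit."""
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
    total_v = total_w = 0
    for i in order:
        if total_w + weights[i] <= capacity:
            total_w += weights[i]
            total_v += values[i]
    return total_v

def optimal_knapsack(values, weights, capacity):
    """Exact optimum by dynamic programming over integer capacities."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

random.seed(6)
n, trials = 100, 10
ratios = []
for _ in range(trials):
    values = [random.randint(1, 100) for _ in range(n)]
    weights = [random.randint(1, 100) for _ in range(n)]
    capacity = sum(weights) // 2
    ratios.append(greedy_knapsack(values, weights, capacity) /
                  optimal_knapsack(values, weights, capacity))
print("mean greedy/optimal ratio:", sum(ratios) / trials)
```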


Journal ArticleDOI
TL;DR: In this article, a probabilistic analysis of steel beam-columns in a typical medium-rise office building designed in accordance with current AISC specifications was conducted, where risks were evaluated in terms of probability of failure for several failure modes and for various combinations of dead, live and wind loads.

Journal ArticleDOI
TL;DR: A brief review of the use of probabilistic models, ranging from simple applications of probability and classical statistical theory to the extremely complex models needed to solve current problems, can be found in this paper; an example of earthquake ground motion is considered, and it is shown that a rather simple model yields interesting results.

Journal ArticleDOI
TL;DR: Computational algorithms specific to the analysis of covariance are discussed for the treatment of both balanced and unbalanced data and the solution of missing data problems.
Abstract: Computational algorithms specific to the analysis of covariance are discussed for the treatment of both balanced and unbalanced data. The use of covariance algorithms in the solution of missing data problems is also considered.
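
Not the paper's own algorithms, but a quick reminder of the model being computed: an analysis of covariance fits a treatment factor together with a continuous covariate. The sketch below uses the statsmodels formula interface on simulated, deliberately unbalanced data; variable names and group sizes are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
# Deliberately unbalanced group sizes (illustrative data).
groups = np.repeat(["A", "B", "C"], [15, 40, 25])
covariate = rng.normal(size=len(groups))
effect = {"A": 0.0, "B": 1.0, "C": 2.0}
y = np.array([effect[g] for g in groups]) + 1.5 * covariate + rng.normal(size=len(groups))

df = pd.DataFrame({"y": y, "group": groups, "x": covariate})
model = smf.ols("y ~ C(group) + x", data=df).fit()    # analysis of covariance
print(model.params)
print(sm.stats.anova_lm(model, typ=2))                # covariate-adjusted ANOVA table
```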


Book ChapterDOI
01 Jan 1982
TL;DR: The first part of the paper examines three important probabilistic algorithms that together illustrate many of the important points of the field and generalizes from those examples to provide a more systematic view.
Abstract: This paper is a brief introduction to the field of probabilistic analysis of algorithms; it is not a comprehensive survey. The first part of the paper examines three important probabilistic algorithms that together illustrate many of the important points of the field, and the second part then generalizes from those examples to provide a more systematic view.

01 Jan 1982
TL;DR: In this paper, a probabilistic analysis of learning characteristics of the class of stochastic gradient-descent algorithms for adapting the parameters of a non-recursive linear filter in order to identify a time-varying system from noisy measurements is presented.
Abstract: The solution to the linear mean-square estimation problem on the basis of a priori knowledge of the relevant correlations is well known. However, in many applications a priori knowledge of the correlations is not available and the correlations may be changing with time. In this case, adaptive estimation is indicated. In this research, probabilistic analysis of learning characteristics of the class of stochastic-gradient-descent algorithms for adapting the parameters of a non-recursive linear filter in order to identify a time-varying system from noisy measurements is presented. Characteristics analyzed include stability, rate of convergence, steady state average and RMS misadjustments, average- and RMS-optimum adaptation step sizes, and sensitivity. A measure of the degree of nonstationarity for the system to be identified is proposed and its effects together with the effects of gradient-averaging, adaptation step size, and signal-to-noise ratio, on the learning characteristics are studied. The tracking performance of this class of algorithms is studied for the case in which the unknown system evolves according to (i) a first-order Markov random process and (ii) a periodic process. Simulation results that verify the accuracy of theoretical performance predictions are presented.
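
A minimal simulation in the spirit of this analysis (the step size, noise level, and the particular first-order Markov drift model are illustrative assumptions): an LMS, i.e. stochastic-gradient-descent, adaptive filter identifies a slowly drifting FIR system from noisy measurements, and the steady-state parameter error reflects the usual trade-off governed by the adaptation step size.

```python
import numpy as np

rng = np.random.default_rng(8)
n_taps, n_samples = 8, 20_000
mu = 0.02                      # adaptation step size
alpha, drift = 0.9995, 0.01    # first-order Markov model for the unknown system

w_true = rng.normal(size=n_taps)        # unknown, time-varying system parameters
w_hat = np.zeros(n_taps)                # adaptive filter weights
x_buf = np.zeros(n_taps)                # most recent inputs (regressor vector)
err_sq = []

for _ in range(n_samples):
    # The unknown system drifts according to a first-order Markov process.
    w_true = alpha * w_true + drift * rng.normal(size=n_taps)
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.normal()
    d = w_true @ x_buf + 0.1 * rng.normal()     # noisy measurement of the system output
    e = d - w_hat @ x_buf                       # a priori estimation error
    w_hat = w_hat + mu * e * x_buf              # stochastic-gradient (LMS) update
    err_sq.append(np.sum((w_true - w_hat) ** 2))

print("steady-state mean squared parameter error:",
      np.mean(err_sq[n_samples // 2:]))
```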