
Showing papers on "Probabilistic analysis of algorithms published in 1981"


Journal ArticleDOI
TL;DR: The theoretical differences and similarities among the various algorithms for automatic connected-word recognition are discussed and an experimental comparison shows that for typical applications, the level-building algorithm performs better than either the two-level DP matching or the sampling algorithm.
Abstract: Several different algorithms have been proposed for time registering a test pattern and a concatenated (isolated word) sequence of reference patterns for automatic connected-word recognition. These algorithms include the two-level dynamic programming algorithm, the sampling approach, and the level-building approach. In this paper, we discuss the theoretical differences and similarities among the various algorithms. An experimental comparison of these algorithms for a connected-digit recognition task is also given. The comparison shows that for typical applications, the level-building algorithm performs better than either the two-level DP matching or the sampling algorithm.
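
As a hedged illustration of the time-registration step these algorithms share, here is a minimal dynamic time warping (DTW) alignment in Python; the one-dimensional features and simple step pattern are simplifications for exposition, not the level-building algorithm itself.

```python
# Minimal DTW alignment between a test pattern and one reference pattern,
# sketching the kind of time registration connected-word recognizers build on.
# Real systems align multi-dimensional spectral features; 1-D used for brevity.
import numpy as np

def dtw_distance(test, ref):
    """Accumulated cost of the best time alignment of two feature sequences."""
    n, m = len(test), len(ref)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(test[i - 1] - ref[j - 1])           # local distance
            D[i, j] = cost + min(D[i - 1, j - 1],          # diagonal step
                                 D[i - 1, j],              # test advances
                                 D[i, j - 1])              # reference advances
    return D[n, m]

test = np.array([0.0, 0.1, 0.9, 1.0, 0.2])
ref = np.array([0.0, 1.0, 0.0])
print(dtw_distance(test, ref))
```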

458 citations


Journal ArticleDOI
TL;DR: A new probabilistic method is presented which, when combined with the above algorithms, avoids the need for both resultants and linear equations and leads to algorithms which are conceptually simpler than previous methods.
Abstract: in F[t]. There are two standard methods; one is due to E. Berlekamp, the other appears to be a "folk method". Both are described by D. Knuth in [3, pp. 381-397]. Further improvements, for special values of q, are given by R. Moenck in [5]. See also E. Berlekamp [1]. (Note that both methods as described in the references apply only when d = 1; however, straightforward modifications, described below, allow d > 1.) Both methods use the calculation of resultants (or equivalently the solution of linear equations) to reduce the problem to finding the roots of a polynomial which has all of its roots in F. When p is very small, probabilistic methods are used. An improvement to the "folk method", along with a more explicit calculation of the work required, has recently been given by M. Rabin [6]. We present here a new probabilistic method which, when combined with the above algorithms, avoids the need for both resultants and linear equations. It leads to algorithms which are conceptually simpler than previous methods. Moreover, it works equally well for all finite fields F, regardless of the magnitude of q. When used for factoring a quadratic x² − a, it reduces to Berlekamp's algorithm. Other standard algorithms for factoring quadratics are due to D. H. Lehmer [4] and D. Shanks [7]. Our algorithm is also suitable for finding solutions of polynomial equations over finite fields.
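
The splitting idea can be sketched concretely. Below is a minimal Python illustration (assuming a small odd prime p and a squarefree input whose roots all lie in F_p) of the randomized gcd split gcd(f(x), (x+δ)^((p−1)/2) − 1): each root r of f lands on the "residue" side exactly when r+δ is a nonzero square, so a random δ separates the roots with probability roughly 1/2 per trial.

```python
# Randomized root-separation sketch over F_p (small odd prime, illustrative).
import random

p = 101

def trim(a):
    a = [c % p for c in a]
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def polymul(a, b):
    r = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] = (r[i + j] + x * y) % p
    return trim(r)

def polymod(a, m):
    a, m = trim(a), trim(m)
    inv = pow(m[-1], p - 2, p)            # inverse of leading coefficient
    while len(a) >= len(m) and a != [0]:
        c, k = a[-1] * inv % p, len(a) - len(m)
        for i, y in enumerate(m):
            a[i + k] = (a[i + k] - c * y) % p
        a = trim(a)
    return a

def polypowmod(base, e, m):               # base^e mod m by repeated squaring
    result, base = [1], polymod(base, m)
    while e:
        if e & 1:
            result = polymod(polymul(result, base), m)
        base = polymod(polymul(base, base), m)
        e >>= 1
    return result

def polygcd(a, b):
    a, b = trim(a), trim(b)
    while b != [0]:
        a, b = b, polymod(a, b)
    return [c * pow(a[-1], p - 2, p) % p for c in a]   # returned monic

def split(f):
    """Nontrivial monic factor of a squarefree f with all roots in F_p."""
    while True:
        delta = random.randrange(p)
        h = polypowmod([delta, 1], (p - 1) // 2, f)    # (x+delta)^((p-1)/2) mod f
        h[0] = (h[0] - 1) % p                          # ... minus 1
        g = polygcd(f, trim(h))
        if 0 < len(g) - 1 < len(f) - 1:                # proper factor found
            return g

f = polymul(polymul([-2, 1], [-3, 1]), [-5, 1])        # (x-2)(x-3)(x-5) mod p
print(split(f))   # coefficients, low degree first, of a nontrivial factor
```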

332 citations


Journal ArticleDOI
TL;DR: It is shown that the family of all probabilistic sets, a notion based on both probability theory and fuzzy concepts, constitutes a complete pseudo-Boolean algebra.

216 citations


Proceedings ArticleDOI
28 Oct 1981

141 citations


Journal ArticleDOI
TL;DR: In the algorithms for finding minimum spanning trees, bridges, and fundamental cycles, the number of processors used is small enough that the parallel algorithm is efficient in comparison with the best sequential algorithms for these problems.
Abstract: Algorithms for solving graph problems on an unbounded parallel model of computation are considered. Parallel algorithms of time complexity $O(\log ^2 n)$ are described for finding biconnected components, bridges, minimum spanning trees and fundamental cycles. In the algorithms for finding minimum spanning trees, bridges, and fundamental cycles, the number of processors used is small enough that the parallel algorithm is efficient in comparison with the best sequential algorithms for these problems. Several other algorithms are presented which are especially suitable for processing sparse graphs.
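
For orientation, here is a sequential Borůvka-style MST sketch in Python: its O(log n) phases, each contracting every component along its cheapest outgoing edge, are the per-phase work that parallel algorithms of this kind distribute across processors. This illustrates the phase structure only, not the paper's PRAM algorithm.

```python
def boruvka_mst(n, edges):
    """MST of a connected weighted graph; edges are (u, v, weight) triples."""
    parent = list(range(n))

    def find(x):                          # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, components = [], n
    while components > 1:                 # O(log n) phases
        cheapest = [None] * n
        for u, v, w in edges:             # cheapest edge leaving each component
            ru, rv = find(u), find(v)
            if ru != rv:
                for r in (ru, rv):
                    if cheapest[r] is None or w < cheapest[r][2]:
                        cheapest[r] = (u, v, w)
        for e in cheapest:                # contract along the selected edges
            if e is not None:
                ru, rv = find(e[0]), find(e[1])
                if ru != rv:
                    parent[ru] = rv
                    mst.append(e)
                    components -= 1
    return mst

print(boruvka_mst(4, [(0, 1, 4), (1, 2, 2), (0, 2, 5), (2, 3, 1)]))
```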

118 citations


Proceedings ArticleDOI
28 Oct 1981
TL;DR: It is shown that the size of the ring cannot be calculated by any probabilistic algorithm in which the processes can sense termination and any algorithm may yield an incorrect value.
Abstract: Given a ring (cycle) of n processes, it is required to design the processes so that they will be able to choose a leader (a uniquely designated process) by sending messages along the ring. If the processes are indistinguishable there is no deterministic algorithm, and therefore probabilistic algorithms are proposed. These algorithms need not terminate, but their expected complexity (time or number of bits of communication) is bounded by a function of n. If the processes work asynchronously then on the average O(n log²n) bits are transmitted. In the above cases the size n of the ring was assumed to be known. If n is not known, it is suggested first to determine the value of n and then use the above algorithm. However, n may only be determined probabilistically and any algorithm may yield an incorrect value. In addition, it is shown that the size of the ring cannot be calculated by any probabilistic algorithm in which the processes can sense termination.
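
A toy, centralized simulation of the underlying randomized symmetry breaking (not the paper's message-passing protocol): each process draws a random identity and a round succeeds when the maximum is unique. It illustrates why such algorithms terminate only with probability 1, yet have bounded expected cost.

```python
import random

def elect_leader(n, id_space=None, seed=None):
    """Repeat rounds of random identity draws until the maximum is unique."""
    rng = random.Random(seed)
    id_space = id_space or n           # ~n identities keep ties reasonably rare
    rounds = 0
    while True:                        # terminates with probability 1 only
        rounds += 1
        ids = [rng.randrange(id_space) for _ in range(n)]
        top = max(ids)
        if ids.count(top) == 1:        # unique maximum: a leader emerges
            return ids.index(top), rounds

leader, rounds = elect_leader(16, seed=3)
print(f"process {leader} elected after {rounds} round(s)")
```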

111 citations


01 Jan 1981
TL;DR: In this paper, it is argued that Bayesian statistics is the more appropriate vehicle for the probabilistic analysis of rare events, and a short review is given with some discussion on the representation of ignorance.
Abstract: The sources of uncertainty in probabilistic risk analysis are discussed, using the event and fault-tree methodology as an example. The role of statistics in quantifying these uncertainties is investigated. A class of uncertainties is identified which is, at present, unquantifiable using either classical or Bayesian statistics. It is argued that Bayesian statistics is the more appropriate vehicle for the probabilistic analysis of rare events, and a short review is given with some discussion on the representation of ignorance.
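
A minimal numeric sketch of the Bayesian route for rare events, with all numbers illustrative: a Beta prior on a per-demand failure probability, updated by observed demands, where a diffuse (e.g., Jeffreys) prior is one conventional stand-in for prior ignorance.

```python
a, b = 0.5, 0.5              # Jeffreys Beta prior, a stand-in for "ignorance"
failures, demands = 0, 1000  # rare event: no failures seen in 1000 demands

a_post = a + failures        # conjugate Beta-Binomial update
b_post = b + (demands - failures)
posterior_mean = a_post / (a_post + b_post)
print(f"posterior mean failure probability ~ {posterior_mean:.2e}")
```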

93 citations


Book ChapterDOI
31 Aug 1981
TL;DR: By giving a matrix inversion algorithm that uses a small amount of space, a result of Simon is improved: for constructible functions f(n) ∉ o(log n), f(n) tape-bounded probabilistic Turing machines can be simulated on deterministic ones within (f(n))² space.
Abstract: By giving a matrix inversion algorithm that uses a small amount of space, a result of Simon is improved: For constructible functions f(n) ∉ o(log n), f(n) tape-bounded probabilistic Turing machines can be simulated on deterministic ones within (f(n))² space.

32 citations


Journal ArticleDOI
TL;DR: A maximum matching is a matching of maximum cardinality; the set of nodes that take part in such a maximum matching is denoted by Nodes(G), and the cardinality of the matching is denoted by Card(G).

29 citations


Journal ArticleDOI
TL;DR: Three algorithms whose convergence can be proved are devised and implemented; computational experimentation indicates that substantial improvement can be achieved using the two-phase method.

27 citations


Proceedings ArticleDOI
Barry K. Rosen1
26 Jan 1981
TL;DR: Demand-driven algorithms capable of updating without extensive reanalysis are studied: accurate where accuracy is demanded, but not burdened by the cost of providing much more than is demanded.
Abstract: In analysis of programs and many other kinds of computation, algorithms are commonly written to have an input P and an output Q, where both P and Q are large and complicated objects. For example, P might be a routing problem and Q might be a solution to P. Although documented and discussed in this exhaustive style, algorithms are sometimes intended for use in contexts with two departures from the one-time analysis of an entire new input. First, the current value of P is the result of a small change to a previous value of P. We want to update the results of the previous analysis without redoing all of the work. Second, accurate information is only wanted in some designated portion of the large output Q. Possibly inaccurate information may appear elsewhere in Q. We want the analysis to be demand-driven: accurate where accuracy is demanded, but not burdened by the cost of providing much more than is demanded. This paper studies demand-driven algorithms capable of updating without extensive reanalysis. Such algorithms are called incremental, in contrast with the one-time analysis of an entire new input contemplated by exhaustive algorithms. In some cases, it is shown that an exhaustive algorithm can be easily recast in an efficient incremental style. Other algorithms for the same problem may be much less amenable to incremental use. It is shown that traditional exhaustive computational complexity bounds are poor predictors of incremental performance. The main examples are taken from global data flow analysis.
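
A toy, hedged illustration of the incremental, demand-driven idea (a spreadsheet-style dependency graph, not Rosen's data flow framework): a small input change dirties only its transitive dependents, and a demand recomputes only the dirty cells needed for the demanded result.

```python
class Cell:
    """A value node in a dependency graph, recomputed only on demand."""
    def __init__(self, fn, deps=()):
        self.fn, self.deps = fn, list(deps)
        self.value, self.dirty = None, True
        self.dependents = []
        for d in self.deps:
            d.dependents.append(self)

    def set(self, v):                     # a small change to an input...
        self.fn = lambda: v
        self._invalidate()                # ...dirties only what it can affect

    def _invalidate(self):
        if self.dirty:
            return
        self.dirty = True
        for d in self.dependents:
            d._invalidate()

    def get(self):                        # demand-driven: recompute iff dirty
        if self.dirty:
            self.value = self.fn(*[d.get() for d in self.deps])
            self.dirty = False
        return self.value

x, y = Cell(lambda: 2), Cell(lambda: 3)
s = Cell(lambda a, b: a + b, deps=(x, y))
print(s.get())    # 5, computed once
x.set(12)         # dirties x and s, but not y
print(s.get())    # 15, recomputed using y's cached value
```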

Proceedings ArticleDOI
12 May 1981
TL;DR: A probabilistic analysis of several arbitration algorithms according to several criteria that reflect their relative performances in rendering equal service to all competing devices and allocating available bus bandwidth efficiently reveals that under heavy bus loads, the dynamic priority and FCFS algorithms offer significantly better performances than do the static priority and FTS schemes.
Abstract: Current multiprocessing systems are often organized by connecting several devices with similar characteristics (usually processors) to a common bus. These devices require access with minimal delay; access is controlled by the bus arbitration algorithm. This paper presents a probabilistic analysis of several arbitration algorithms according to several criteria that reflect their relative performances in (1) rendering equal service to all competing devices and (2) allocating available bus bandwidth efficiently. The sensitivity of these criteria to the number of devices on the bus, the speed of the bus, and the distribution of interrequest times is considered. A probabilistic model for the quantitative comparison of these algorithms is constructed in which multiple devices repeatedly issue bus requests at random intervals according to an arbitrary distribution function and are serviced according to one of the algorithms; the devices do not buffer bus requests. The algorithms studied include the static priority, fixed time slice (FTS), two dynamic priority, and first-come, first-served (FCFS) schemes. The performance measures are computed by simulation. The analysis reveals that under heavy bus loads, the dynamic priority and FCFS algorithms offer significantly better performance by these measures than do the static priority and FTS schemes.
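
A toy slotted-time simulation in the spirit of the model described (the load parameters and unit service time below are our illustrative assumptions, not the paper's): devices re-request a random interval after service with no buffering, and we compare per-device mean waits under static priority versus FCFS.

```python
import random
from collections import deque

def simulate(policy, n_dev=8, slots=200_000, max_gap=14, seed=0):
    random.seed(seed)
    next_req = [random.randint(1, max_gap) for _ in range(n_dev)]
    waiting, arrival = set(), {}
    fifo = deque()
    waits, served = [0.0] * n_dev, [0] * n_dev
    for t in range(slots):
        for d in range(n_dev):                    # new requests (unbuffered)
            if next_req[d] == t:
                waiting.add(d); arrival[d] = t; fifo.append(d)
        if waiting:                               # bus serves one request/slot
            if policy == "static":
                d = min(waiting)                  # lowest device number wins
                fifo.remove(d)
            else:                                 # first-come, first-served
                d = fifo.popleft()
            waiting.discard(d)
            waits[d] += t - arrival[d]
            served[d] += 1
            next_req[d] = t + 1 + random.randint(1, max_gap)
    return [w / s if s else float("inf") for w, s in zip(waits, served)]

for policy in ("static", "fcfs"):
    print(policy, [round(w, 1) for w in simulate(policy)])
```

Under heavy load the static scheme's waits grow sharply with device number while FCFS keeps them roughly equal, which is the qualitative contrast the paper's measures capture.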

Proceedings ArticleDOI
11 May 1981
TL;DR: This paper considers a fixed (possibly infinite) set π of distributed asynchronous processes which at various times are willing to communicate with each other, and describes probabilistic algorithms for synchronizing this communication with boolean “flag” variables.
Abstract: This paper considers a fixed (possibly infinite) set π of distributed asynchronous processes which at various times are willing to communicate with each other. We describe probabilistic algorithms for synchronizing this communication with boolean “flag” variables, each of which can be written by only one process and read by at most one other process. Under very weak assumptions (process speeds may vary over time within fixed arbitrary bounds; each process may be willing to communicate with a time-varying, though bounded, set of processes; and no probabilistic assumptions are made about system behavior), we show that our synchronization algorithms have real-time response: if a pair of processes are mutually willing to communicate within a constant time interval, they establish communication in that interval with high likelihood (for the worst-case behavior of the system). Our communication model and synchronization algorithms are quite robust. They are applied to solve a large class of real-time resource synchronization problems, as well as to the real-time implementation of the synchronization primitives of Hoare's multiprocessing language CSP.
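
As a hedged toy model (not the paper's flag protocol), the following simulation shows the flavor of the constant-expected-time response claim: two identical, mutually willing processes flip a coin each slot and connect in the first slot where the coins differ, so the expected number of slots is 2.

```python
import random

def slots_to_connect(rng):
    """Slots until two symmetric coin-flipping processes pair up."""
    slots = 0
    while True:
        slots += 1
        if rng.random() < 0.5:    # the two coins differ w.p. 1/2 per slot
            return slots          # one talks, one listens: connected

rng = random.Random(0)
trials = 100_000
mean = sum(slots_to_connect(rng) for _ in range(trials)) / trials
print(f"mean slots to connect ~ {mean:.3f} (expectation: 2)")
```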

Journal ArticleDOI
TL;DR: A survey is given of some results on the complexity of algorithms and computations published up to 1973.
Abstract: A survey is given of some results on the complexity of algorithms and computations published up to 1973.

01 Oct 1981
TL;DR: In this paper, a model for dynamic analysis of floating bridges exposed to short-crested sea waves is presented, where the sea surface is idealized as a three-dimensional zero mean ergodic Gaussian field from which the wave loading process is derived using linear potential theory.
Abstract: Probabilistic methods for dynamic analysis of floating bridges exposed to short-crested sea waves are presented. The sea surface is idealized as a three-dimensional zero mean ergodic Gaussian field from which the wave loading process is derived using linear potential theory. The structural idealization is based on the finite element method. Structural response is obtained both in the frequency domain using the spectral approach and in the time domain by a sample path approach using Monte Carlo simulation. To exemplify the theory, results are presented for a 1200 metre long, continuous curved floating bridge. The report contains the paper presented at "European Simulation Meeting, Modelling and Simulation of Large-Scale Structural Systems", Capri, Italy, September 24-25, 1981. (Author/TRRL)
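
The sample-path side of the report can be sketched with the standard spectral Monte Carlo device: realize a zero-mean Gaussian elevation record as a sum of cosines with random phases, with component amplitudes a_i = sqrt(2 S(ω_i) Δω). The spectral shape below is a generic placeholder, not the report's sea state.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.linspace(0.2, 2.5, 200)        # frequency grid [rad/s]
dw = w[1] - w[0]
S = np.exp(-1.25 / w**4) / w**5       # placeholder one-sided spectrum S(w)
amp = np.sqrt(2.0 * S * dw)           # component amplitudes
phase = rng.uniform(0.0, 2.0 * np.pi, w.size)

t = np.linspace(0.0, 600.0, 6001)     # a 10-minute elevation record
eta = (amp[:, None] * np.cos(np.outer(w, t) + phase[:, None])).sum(axis=0)
print(f"sample std = {eta.std():.3f}, target = {np.sqrt((S * dw).sum()):.3f}")
```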


Journal ArticleDOI
TL;DR: A class of recognition algorithms based on a mixed principle is described: the construction uses the basic ideas both of “Kora”-type algorithms and of algorithms for the calculation of estimates.
Abstract: A class of recognition algorithms based on a mixed principle is described: the construction uses the basic ideas both of “Kora”-type algorithms and of algorithms for the calculation of estimates.

Journal ArticleDOI
TL;DR: The implementation of the probabilistic analysis in ADINA is outlined, and the evaluation of these matrices by analytical integration is discussed, which is computationally very effective and maintains full accuracy in the dynamic properties of the structural model.

01 May 1981
TL;DR: In this paper, the authors evaluated the likelihood of pore pressure build-up during an earthquake and developed a probabilistic pore-pressure model using laboratory data and an effective stress model.
Abstract: This research is primarily concerned with the evaluation of the likelihood of pore pressure build-up during an earthquake. As part of a long-term project, the report reviews methods for pore pressure development and liquefaction and documents and investigates the behavior of cohesionless soils in cyclic loading. Described are the development and application of a probabilistic pore pressure model using laboratory data; the probabilistic development of pore pressure using an effective stress model; and a probabilistic pore pressure analysis and seismic hazard evaluation. A generalized methodology is presented. Recommendations for future work include the improvement and application of the models and specific research tasks for completion of the methodology. The models developed in this study are shown to be advantageous over conventional methods.

Journal ArticleDOI
TL;DR: In this paper, the authors reviewed the loads that affect geotechnical designs, characterized the uncertainties in each load component, and examined the level of variability and uncertainty associated with the effect of each component on the foundation.
Abstract: The loads that affect geotechnical designs are reviewed, characterizing the uncertainties in each load component. Probabilistic models for evaluating each component are examined. Whenever data are available, the level of variabilities and uncertainties associated with the effect of each component on the foundation is assessed. Satisfactory performance of a geotechnical system relies, in part, on a good understanding of the characteristics of the loads and environments to which the system is subjected. Uncertainties do exist in these factors. For dynamic loads due to earthquake excitation and wave action, neither the magnitude nor the frequency of occurrence over the expected duration of the system can be known with certainty. The long-term stability of a soil slope also depends on stochastic fluctuations of the pore pressure due to seasonal variation and other temporal changes in the environment. (ASCE)

Proceedings ArticleDOI
05 Aug 1981
TL;DR: This paper presents linear probabilistic algorithms for verifying the validity of the equality f1(x1,...,xN) = f2(x1,...,xN) for rational functions f1 and f2, which can be of the form of a rational combination of rational functions.
Abstract: For many computational problems it is not known whether verification of a result can be done faster than its computation. For instance, it is unknown whether the verification of the validity of the integer equality x*y=z needs fewer bit operations than a computation of the product x*y. It is sometimes much easier, however, to speed up the computation probabilistically if just the verification of the result is involved. In this paper we present linear probabilistic algorithms for verification of the validity of the equality f1(x1,...,xN)=f2(x1,...,xN) for rational functions f1 and f2, which can be of the form of a rational combination of rational functions.
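
The verification idea admits a compact sketch: a mismatch under a random modulus (or at a random evaluation point mod p) is a proof of inequality, while agreement across trials is probabilistic evidence of equality, Schwartz-Zippel style. The moduli, prime, and trial counts below are illustrative choices, not the paper's.

```python
import random

def probably_equal_product(x, y, z, trials=20, bits=30):
    """Is x*y == z? A mismatch mod any m proves inequality."""
    for _ in range(trials):
        m = random.getrandbits(bits) | 1          # random odd modulus
        if (x % m) * (y % m) % m != z % m:
            return False                          # certainly unequal
    return True                                   # equal with high probability

def probably_equal_functions(f1, f2, nvars, p=2**61 - 1, trials=10):
    """Schwartz-Zippel style test of f1 == f2 at random points mod p."""
    for _ in range(trials):
        xs = [random.randrange(1, p) for _ in range(nvars)]
        if f1(*xs) % p != f2(*xs) % p:
            return False
    return True

x, y = 123456789, 987654321
print(probably_equal_product(x, y, x * y))        # True
print(probably_equal_product(x, y, x * y + 1))    # False
print(probably_equal_functions(lambda a, b: (a + b) ** 2,
                               lambda a, b: a * a + 2 * a * b + b * b, 2))
```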

01 Jan 1981
TL;DR: In this article, a probabilistic model is proposed for evaluating the variation of stability risks over time after platform installation, and a Bayesian procedure is formulated to update the statistics of the consolidation parameters from settlement measurements.
Abstract: A probabilistic model is proposed for evaluating the variation of stability risks over time after platform installation. A Bayesian procedure is formulated to update the statistics of the consolidation parameters from settlement measurements, and reevaluate the probability of foundation failure in subsequent years. A case study is used to demonstrate the proposed methodology. For the covering abstract of the conference see TRIS no 450958.

Journal Article
TL;DR: In this article, a simplified probabilistic approach to the determination of the short-term ($\phi_u = 0$) reliability of clayey slopes under static and seismic conditions is presented.
Abstract: A simplified probabilistic approach to the determination of the short-term ($\phi_u = 0$) reliability of clayey slopes under static and seismic conditions is presented. The uncertainties associated with (a) the undrained strength of soil and its spatial variation and (b) the analytic procedure used to assess the safety of the slope are considered, and probabilistic tools are introduced for their description and amelioration. The probability of the failure of a slope under static loading is first determined. The effect of an earthquake on the slope is introduced through an equivalent horizontal peak acceleration (deterministic), and the new probability of failure is obtained by using Bayes' theorem. Finally, the developed procedure is illustrated in an example, the results of which are presented and discussed. (Author)
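
A toy Monte Carlo version of the first step (model and numbers are illustrative assumptions, not the paper's): treat the undrained strength as lognormal, compare it with a driving shear stress for the static case, and add a pseudo-static increment for the earthquake case.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
su = rng.lognormal(mean=np.log(50.0), sigma=0.25, size=n)  # undrained strength [kPa]
tau_static = 35.0       # driving shear stress, static case [kPa]
tau_seismic = 12.0      # pseudo-static increment from peak acceleration [kPa]

pf_static = np.mean(su < tau_static)
pf_quake = np.mean(su < tau_static + tau_seismic)
print(f"P(failure), static:   {pf_static:.3f}")
print(f"P(failure), + quake:  {pf_quake:.3f}")
```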


Book ChapterDOI
01 Jan 1981
TL;DR: This paper discusses some of the techniques which are useful when carrying out analysis of the performance of a deterministic algorithm under some assumption about the distribution of inputs, and the problem of choosing an appropriate distribution.
Abstract: Randomness arises in connection with algorithms and their analysis in at least two different ways. Some algorithms (sometimes called coin-flipping algorithms) provide their own randomness, perhaps through the use of random number generators. Sometimes, though, it is useful to analyze the performance of a deterministic algorithm under some assumption about the distribution of inputs. We briefly survey some work which gives a perspective on such problems. Next we discuss some of the techniques which are useful when carrying out this type of analysis. Finally, we briefly discuss the problem of choosing an appropriate distribution.
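
Distributional analysis in miniature, as a hedged example of the second kind of randomness: count the comparisons quicksort makes on uniformly random permutations and compare with the classical average of about 2n ln n.

```python
import math
import random

def quicksort_comparisons(a):
    """Comparison count for a simple first-element-pivot quicksort."""
    if len(a) <= 1:
        return 0
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

n, trials = 1000, 50
mean = sum(quicksort_comparisons(random.sample(range(10 * n), n))
           for _ in range(trials)) / trials
print(f"observed mean ~ {mean:.0f} comparisons; 2 n ln n ~ {2 * n * math.log(n):.0f}")
```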

Proceedings ArticleDOI
01 Dec 1981
TL;DR: The convergence theory of algorithms for minimizing potential functions has received considerable attention, but some of the conditions that are prerequisite to this theory are not of a constructive nature; the results presented here remedy this for a class of fixed-point algorithms.
Abstract: a solution of the given problem. For brevity, we refer to this as a convergent algorithm. Some of the properties needed to show that a fixed-point algorithm is convergent depend directly on the fixed-point formulation. Other properties that are needed to show convergence may depend more on some alternate reformulation. For example, many fixed-point problems can be rewritten as potential function problems in which the solution set minimizes some potential function. The convergence theory of algorithms for minimizing potential functions has received considerable attention [6, 7, 8, 15, 23], but some of the conditions that are prerequisite to this theory are not of a constructive nature. Some of our results remedy this for a class of fixed-point algorithms.
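
A small sketch of the fixed-point/potential-function connection described above: gradient descent is the fixed-point iteration x ← g(x) = x − α∇f(x), whose fixed points are exactly the stationary points of the potential f, and checking that f decreases at every step is the kind of constructive convergence certificate at issue. The potential below is an illustrative choice.

```python
import numpy as np

def f(x):                        # illustrative strongly convex potential
    return 0.5 * x @ x + 0.1 * np.sin(x).sum()

def grad_f(x):
    return x + 0.1 * np.cos(x)

def g(x, alpha=0.5):             # fixed-point map: x -> x - alpha * grad f(x)
    return x - alpha * grad_f(x)

x = np.array([3.0, -2.0])
for it in range(100):
    x_new = g(x)
    # constructive certificate: the potential must decrease at every step
    assert f(x_new) <= f(x) + 1e-12
    if np.linalg.norm(x_new - x) < 1e-10:    # (approximate) fixed point
        break
    x = x_new
print(f"fixed point ~ {x_new}, f = {f(x_new):.6f}, {it + 1} steps")
```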

Journal ArticleDOI
C. A. Rau1, P. M. Besuner1
TL;DR: In this article, the basic concepts of probabilistic fracture mechanics are used to assess and control risk, and several specific examples are described that illustrate how these methods can be applied to assess risk and to provide a quantitative basis for establishing design, operation or maintenance allowables.
Abstract: Statistical variations in input parameters that affect structural reliability have historically been incorporated approximately in engineering designs by application of safety factors. Increased concerns over the injury potential and costs of licensing, insurance, field repairs or recalls, and product liability claims now demand more quantitative evaluation of possible flaws or unusual usage conditions that might result from statistical variations or uncertainties. This paper describes the basic concepts of probabilistic fracture mechanics that are used to assess and control risk. Recent developments in combined analysis methods are presented that utilize field experience data with probabilistic analysis to improve the accuracy of the structural integrity predictions. Several specific examples are described that illustrate how these probabilistic methods are used to assess risk and to provide a quantitative basis for establishing design, operation or maintenance allowables. These procedures, which realistically model the actual statistical variations that exist, can eliminate unnecessarily conservative approximations and often achieve improved reliability at reduced cost.
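
A toy probabilistic fracture mechanics calculation (all distributions and numbers are illustrative assumptions, not drawn from the paper): failure occurs when the stress intensity factor K = Yσ√(πa) exceeds the toughness K_IC, and Monte Carlo over the scatter in flaw size, stress, and toughness estimates the failure probability.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
Y = 1.12                                   # geometry factor (edge crack, assumed)
a = rng.lognormal(np.log(5e-3), 0.6, n)    # flaw depth [m]
sigma = rng.normal(200e6, 20e6, n)         # applied stress [Pa]
kic = rng.normal(60e6, 6e6, n)             # fracture toughness [Pa*sqrt(m)]

K = Y * sigma * np.sqrt(np.pi * a)         # stress intensity factor
pf = np.mean(K > kic)
print(f"estimated failure probability ~ {pf:.3f}")
```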

Book ChapterDOI
01 Jan 1981