
Showing papers on "Probabilistic analysis of algorithms published in 1990"


Book
01 Jan 1990
TL;DR: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures and presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers.
Abstract: From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition, this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm, a design technique, an application area, or a related topic. The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning.

21,651 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that probabilistic inference using belief networks is NP-hard and that it seems unlikely that an exact algorithm can be developed to perform inference efficiently over all classes of belief networks and that research should be directed toward the design of efficient special-case, average-case and approximation algorithms.

1,877 citations


Book
01 Jan 1990
TL;DR: Algorithms in C is a comprehensive repository of algorithms, complete with code, with extensive treatment of searching and advanced data structures, sorting, string processing, computational geometry, graph problems, and mathematical algorithms.
Abstract: Algorithms in C is a comprehensive repository of algorithms, complete with code. If you're in a pinch and need to code something up fast, this book is the place to look. Starting with basic data structures, Algorithms in C covers an enormous scope of information, with extensive treatment of searching and advanced data structures, sorting, string processing, computational geometry, graph problems, and mathematical algorithms. Although the manual often neglects to provide rigorous analysis, the text surrounding the algorithms provides clear and relevant insight into why the algorithms work.

1,043 citations


Journal ArticleDOI
TL;DR: An advanced mean-based method is presented, capable of establishing the full probability distributions to provide additional information for reliability design and can be used to solve problems involving nonmonotonic functions that result in truncated distributions.
Abstract: In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite-element methods. In such cases, the commonly used probabilistic analysis tool is the mean-based second-moment method, which provides only the first two statistical moments. This paper presents an advanced mean-based method, which is capable of establishing the full probability distributions to provide additional information for reliability design. The method requires slightly more computations than the mean-based second-moment method but is highly efficient relative to the other alternative methods. Several examples are presented to demonstrate the method. In particular, the examples show that the new mean-based method can be used to solve problems involving nonmonotonic functions that result in truncated distributions.

466 citations
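As an illustration of the baseline that the advanced method above improves on, here is a minimal sketch of a mean-based second-moment (first-order) estimate for an implicitly defined response function; the response function and input statistics below are hypothetical stand-ins, not the paper's finite-element models.

# Minimal first-order, mean-based second-moment sketch: approximate the mean and
# variance of a response function from its value and finite-difference gradient
# at the mean of independent random inputs. The response function is a stand-in.
import numpy as np

def mean_based_second_moment(g, mu, sigma, h=1e-6):
    """First-order estimates of E[g(X)] and Var[g(X)] for independent inputs."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    g0 = g(mu)
    grad = np.empty_like(mu)
    for i in range(mu.size):                 # finite-difference gradient at the mean
        x = mu.copy()
        x[i] += h
        grad[i] = (g(x) - g0) / h
    return g0, np.sum((grad * sigma) ** 2)   # (mean, variance) approximations

# Hypothetical response: a deflection-like quantity of load P and stiffness k.
g = lambda x: x[0] / x[1]
mean, var = mean_based_second_moment(g, mu=[10.0, 2.0], sigma=[1.0, 0.2])
print(mean, var ** 0.5)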


Journal ArticleDOI
22 Oct 1990
TL;DR: A model of machine learning in which the concept to be learned may exhibit uncertain or probabilistic behavior is investigated, and an underlying theory of learning p-concepts is developed in detail.
Abstract: A model of machine learning in which the concept to be learned may exhibit uncertain or probabilistic behavior is investigated. Such probabilistic concepts (or p-concepts) may arise in situations such as weather prediction, where the measured variables and their accuracy are insufficient to determine the outcome with certainty. It is required that learning algorithms be both efficient and general in the sense that they perform well for a wide class of p-concepts and for any distribution over the domain. Many efficient algorithms for learning natural classes of p-concepts are given, and an underlying theory of learning p-concepts is developed in detail.

425 citations


Proceedings ArticleDOI
01 Jan 1990
TL;DR: New randomized on-line algorithms for snoopy caching and the spin-block problem are presented and achieve competitive ratios approaching e/(e−1) ≈ 1.58 against an oblivious adversary, a surprising improvement over the best possible ratio in the deterministic case.
Abstract: Competitive analysis is concerned with comparing the performance of on-line algorithms with that of optimal off-line algorithms. In some cases randomization can lead to algorithms with improved performance ratios on worst-case sequences. In this paper we present new randomized on-line algorithms for snoopy caching and the spin-block problem. These algorithms achieve competitive ratios approaching e/(e−1) ≈ 1.58 against an oblivious adversary. These ratios are optimal and are a surprising improvement over the best possible ratio in the deterministic case, which is 2. We also consider the situation when the request sequences for these problems are generated according to an unknown probability distribution. In this case we show that deterministic algorithms that adapt to the observed request statistics also have competitive factors approaching e/(e−1). Finally, we obtain randomized algorithms for the 2-server problem on a class of isosceles triangles. These algorithms are optimal against an oblivious adversary and have competitive ratios that approach e/(e−1). This compares with the ratio of 3/2 that can be achieved on an equilateral triangle.

352 citations
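For readers unfamiliar with how ratios approaching e/(e−1) arise, the following sketch simulates the standard randomized-threshold strategy for ski-rental-type problems such as spin-block: spin (pay 1 per time unit) until a randomly drawn threshold, then block and pay the fixed cost. The threshold density is the usual one from the randomized ski-rental analysis; the cost B and arrival times are illustrative, not taken from the paper.

# Randomized threshold strategy for a ski-rental-type problem: spin until a
# random threshold T drawn with density p(t) = e^(t/B) / (B(e-1)) on [0, B],
# then block and pay the fixed context-switch cost B.
import math
import random

def random_threshold(B):
    # Inverse-transform sampling from the density above.
    u = random.random()
    return B * math.log(1.0 + u * (math.e - 1.0))

def online_cost(wait_time, B):
    """Cost paid by the randomized strategy when the event arrives after wait_time."""
    T = random_threshold(B)
    return wait_time if wait_time <= T else T + B

def competitive_ratio_estimate(B=100.0, trials=50_000):
    worst = 0.0
    for wait_time in (0.3 * B, B, 3.0 * B):          # a few adversarial arrival times
        avg = sum(online_cost(wait_time, B) for _ in range(trials)) / trials
        opt = min(wait_time, B)                      # offline optimum: wait or block
        worst = max(worst, avg / opt)
    return worst

print(competitive_ratio_estimate())                  # roughly e/(e-1) ≈ 1.58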


Journal ArticleDOI
TL;DR: The active set approach provides a unifying framework for studying algorithms for isotonic regression, simplifies the exposition of existing algorithms and leads to several new efficient algorithms, including a new O(n) primal feasible active set algorithm.
Abstract: In this and subsequent papers we will show that several algorithms for the isotonic regression problem may be viewed as active set methods. The active set approach provides a unifying framework for studying algorithms for isotonic regression, simplifies the exposition of existing algorithms and leads to several new efficient algorithms. We also investigate the computational complexity of several algorithms.

271 citations
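The O(n) primal feasible algorithm itself is not reproduced here; as an indication of what an isotonic regression solver does, the following is a minimal pool-adjacent-violators sketch, in which merging violating blocks plays the role of maintaining an active set of binding order constraints.

# Pool-adjacent-violators sketch for isotonic regression: least-squares fit
# subject to x1 <= x2 <= ... <= xn. Not the paper's exact active set algorithm.
def isotonic_regression(y, w=None):
    w = [1.0] * len(y) if w is None else list(w)
    # Each block: [weighted mean, total weight, number of points merged].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Pool while the order constraint between the last two blocks is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, n1 + n2])
    fit = []
    for mean, _, count in blocks:
        fit.extend([mean] * count)
    return fit

print(isotonic_regression([1.0, 3.0, 2.0, 4.0, 3.5]))   # [1.0, 2.5, 2.5, 3.75, 3.75]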


Journal ArticleDOI
TL;DR: It is shown how to compute, in polynomial time, a simplicial packing of size O(r^d) which covers d-space, each of whose simplices intersects O(n/r) hyperplanes, and improves on various probabilistic bounds in geometric complexity.
Abstract: The combination of divide-and-conquer and random sampling has proven very effective in the design of fast geometric algorithms. A flurry of efficient probabilistic algorithms have been recently discovered, based on this happy marriage. We show that all those algorithms can be derandomized with only polynomial overhead. In the process we establish results of independent interest concerning the covering of hypergraphs and we improve on various probabilistic bounds in geometric complexity. For example, given n hyperplanes in d-space and any integer r large enough, we show how to compute, in polynomial time, a simplicial packing of size O(r^d) which covers d-space, each of whose simplices intersects O(n/r) hyperplanes.

261 citations


Proceedings ArticleDOI
01 Apr 1990
TL;DR: The existence of an efficient “simulation” of randomized on-line algorithms by deterministic ones is proved, and this simulation is shown to be best possible in general.
Abstract: Against an adaptive adversary, we show that the power of randomization in on-line algorithms is severely limited! We prove the existence of an efficient “simulation” of randomized on-line algorithms by deterministic ones, which is best possible in general. The proof of the upper bound is existential. We deal with the issue of computing the efficient deterministic algorithm, and show that this is possible in very general cases.

220 citations


Journal ArticleDOI
TL;DR: Probabilistic algorithms are proposed to overcome the impossibility of deterministic leader election in a ring of n indistinguishable processors that communicate by sending messages along the ring.
Abstract: Given a ring of n processors, it is required to design the processors such that they will be able to choose a leader (a uniquely designated processor) by sending messages along the ring. If the processors are indistinguishable, then there exists no deterministic algorithm to solve the problem. To overcome this difficulty, probabilistic algorithms are proposed. The algorithms may run forever, but they terminate within finite time on the average. For the synchronous case several algorithms are presented: the simplest requires, on the average, the transmission of no more than 2.442n bits and O(n) time. More sophisticated algorithms trade time for communication complexity. If the processors work asynchronously, then on the average O(n log n) bits are transmitted. In the above cases the size of the ring is assumed to be known to all the processors. If the size is not known, then finding it may be done only with high probability: any algorithm may yield incorrect results (with nonzero probability) for some values of n. Another difficulty is that, if we insist on correctness, the processors may not explicitly terminate. Rather, the entire ring reaches an inactive state, in which no processor initiates communication.

218 citations
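A toy sketch of the probabilistic idea (random draws repeated until the maximum is achieved by a single candidate) is given below; it illustrates why such algorithms terminate in finite expected time, and it is not the paper's bit- and message-efficient protocol.

# Toy probabilistic leader election among indistinguishable processors: in each
# phase the remaining candidates draw random labels and only those holding the
# maximum stay candidates; phases repeat until exactly one candidate remains.
import random

def elect_leader(n, label_space=2):
    candidates = list(range(n))        # positions on the ring; all start as candidates
    phases = 0
    while len(candidates) > 1:
        phases += 1
        draws = {p: random.randrange(label_space) for p in candidates}
        best = max(draws.values())
        candidates = [p for p, d in draws.items() if d == best]   # ties retry next phase
    return candidates[0], phases

leader, phases = elect_leader(16)
print(f"leader at position {leader} after {phases} phases")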


Journal ArticleDOI
TL;DR: Two algorithms for the k-satisfiability problem are presented; a probabilistic analysis shows that the first algorithm finds a solution with probability approaching one for a wide range of parameter values.
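The two algorithms themselves are not described in the TL;DR; purely as an illustration of the kind of simple randomized procedure whose success probability such analyses study, here is a generic greedy random-order assignment sketch for k-SAT (hypothetical, not the paper's algorithms).

# Illustrative randomized greedy assignment for k-SAT. A clause is a tuple of
# non-zero ints: literal v means variable v is True, -v means variable v is False.
import random

def greedy_random_sat(clauses, n_vars):
    assignment = {}
    for v in random.sample(range(1, n_vars + 1), n_vars):   # variables in random order
        # Count each polarity of v among clauses not yet satisfied.
        pos = sum(1 for c in clauses if v in c)
        neg = sum(1 for c in clauses if -v in c)
        assignment[v] = pos >= neg
        lit = v if assignment[v] else -v
        clauses = [c for c in clauses if lit not in c]       # drop satisfied clauses
    return assignment, len(clauses) == 0                     # (assignment, all satisfied?)

clauses = [(1, -2, 3), (-1, 2, 3), (1, 2, -3), (-1, -2, -3)]
assignment, ok = greedy_random_sat(clauses, 3)
print(assignment, ok)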

Journal ArticleDOI
TL;DR: A bibliography is presented on the subject of power system analysis from a probabilistic point of view, focusing on probabilistic loadflow, probabilistic short circuit, and probabilistic dynamic analysis (probabilistic security).
Abstract: A bibliography is presented on the subject of power system analysis from a probabilistic point of view. Comprising 269 references, this bibliography focuses on probabilistic loadflow, probabilistic short circuit, and probabilistic dynamic analysis (probabilistic security). An additional section covering general topics (analysis and engineering) and a list of previous bibliographies are also presented.

Journal ArticleDOI
TL;DR: This work presents new algorithms for computing transitive closure of large database relations that do not depend on the length of paths in the underlying graph and proposes a new methodology for evaluating the performance of recursive queries.
Abstract: We present new algorithms for computing transitive closure of large database relations. Unlike iterative algorithms, such as the seminaive and logarithmic algorithms, the termination of our algorithms does not depend on the length of paths in the underlying graph (hence the name direct algorithms). Besides reachability computations, the proposed algorithms can also be used for solving path problems. We discuss issues related to the efficient implementation of these algorithms, and present experimental results that show the direct algorithms perform uniformly better than the iterative algorithms. A side benefit of this work is that we have proposed a new methodology for evaluating the performance of recursive queries.
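For contrast with the direct algorithms, the following is a minimal sketch of the seminaive iterative evaluation mentioned above, whose number of rounds grows with the longest path length in the underlying graph; the relation is a small in-memory stand-in for a database relation.

# Seminaive (iterative) transitive closure: each round joins only the newly
# derived pairs with the base relation, so the number of rounds tracks the
# longest path length -- exactly the dependence the direct algorithms avoid.
def seminaive_closure(edges):
    closure = set(edges)
    delta = set(edges)                       # pairs derived in the previous round
    while delta:
        new = {(a, d)
               for (a, b) in delta
               for (c, d) in edges if b == c} - closure
        closure |= new
        delta = new
    return closure

print(sorted(seminaive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]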

Journal ArticleDOI
TL;DR: A general, modular technique for designing efficient leader finding algorithms in distributed, asynchronous networks is developed, and in some cases the message complexity of the resulting algorithms is better by a constant factor than that of previously known algorithms.
Abstract: A general, modular technique for designing efficient leader finding algorithms in distributed, asynchronous networks is developed. This technique reduces the problem of efficient leader finding to a simpler problem of efficient serial traversing of the corresponding network. The message complexity of the resulting leader finding algorithms is bounded by (f(n) + n)(log2 k + 1) [or (f(m) + n)(log2 k + 1)], where n is the number of nodes in the network [m is the number of edges in the network], k is the number of nodes that start the algorithm, and f(n) [f(m)] is the message complexity of traversing the nodes [edges] of the network. The time complexity of these algorithms may be as large as their message complexity. This technique does not require that the FIFO discipline is obeyed by the links. The local memory needed for each node, besides the memory needed for the traversal algorithm, is logarithmic in the maximal identity of a node in the network. This result achieves in a unified way the best known upper bounds on the message complexity of leader finding algorithms for circular, complete, and general networks. It is also shown to be applicable to other classes of networks, and in some cases the message complexity of the resulting algorithms is better by a constant factor than that of previously known algorithms.

Journal ArticleDOI
TL;DR: The classical problems reviewed are the traveling salesman problem, minimal spanning tree, minimal matching, greedy matching, minimal triangulation, and others, each optimization problem considered for finite sets of points in ℝ^d.
Abstract: The classical problems reviewed are the traveling salesman problem, minimal spanning tree, minimal matching, greedy matching, minimal triangulation, and others. Each optimization problem is considered for finite sets of points in ℝ^d, and the feature of principal interest is the value of the associated objective function. Special attention is given to the asymptotic behavior of this value under probabilistic assumptions, but both probabilistic and worst case analyses are surveyed.
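As one example of the asymptotics surveyed, the Beardwood–Halton–Hammersley theorem for the traveling salesman functional on n points drawn independently and uniformly from [0,1]^d (d ≥ 2) has the form

\[
\lim_{n\to\infty} \frac{L_{\mathrm{TSP}}(X_1,\dots,X_n)}{n^{(d-1)/d}} = \beta_{\mathrm{TSP}}(d) \quad \text{almost surely},
\]

where \beta_{\mathrm{TSP}}(d) is a positive constant depending only on the dimension and known only numerically (estimates for d = 2 place it near 0.71).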

Journal ArticleDOI
TL;DR: Techniques applicable to the simulation of repetitive construction operations are described and a practical example application is presented to demonstrate how they can be applied.
Abstract: Before full adaptation of simulation techniques for the analysis and design of construction operations can be implemented, the appropriate statistical tools must be understood and applied as part of the simulation experiment. Most simulation models in construction can be treated as stochastic models. The proper analysis of such models requires: (1) Application of input modeling techniques; (2) appropriate analysis of output parameters of concern based on multiple runs; and (3) validation and verification of the results. This paper describes techniques applicable to the simulation of repetitive construction operations and presents a practical example application to demonstrate how they can be applied. Procedures for selecting input models, methods for solving for the parameters of selected distributions, and goodness‐of‐fit testing for construction data are reviewed. The discussions on output analysis were limited to simulation output that is normal because it is frequently encountered in construction simu...
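As a small illustration of the input-modeling and goodness-of-fit step described above, the sketch below fits a lognormal input model to activity-duration data and checks it with a Kolmogorov–Smirnov test using scipy; the data are synthetic stand-ins, not the paper's field observations.

# Input modeling for a stochastic construction simulation: fit a candidate
# distribution to observed activity durations and test the fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
durations = rng.lognormal(mean=1.0, sigma=0.4, size=200)   # hypothetical cycle times

# Fit a lognormal input model and test it with Kolmogorov-Smirnov.
shape, loc, scale = stats.lognorm.fit(durations, floc=0.0)
ks_stat, p_value = stats.kstest(durations, "lognorm", args=(shape, loc, scale))
print(f"fitted sigma={shape:.3f}, median={scale:.3f}, KS p-value={p_value:.3f}")
# A large p-value means the lognormal model is not rejected and can drive the
# simulation of the repetitive operation.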

Journal ArticleDOI
TL;DR: In this paper, a probabilistic damage analysis method is presented to generate seismic fragility curves of structures; uncertainties in the parameters that define the earthquake-structure system are characterized by several representative values selected according to each parameter's uncertainty range and its use in engineering practice.
Abstract: This paper presents a probabilistic damage analysis method to generate seismic fragility curves of structures. Uncertainties in earthquake and structure are quantified by evaluating uncertainties in parameters that define the earthquake‐structure system. The uncertainty in each parameter is characterized by several representative values that are selected considering the uncertainty range of the parameter and its use in engineering practices. Samples of structures and earthquake motions are constructed from the combination of these representative values, then the Latin hypercube sampling technique is used to construct the samples of earthquake‐structure system. For each sample, the nonlinear seismic analysis is performed to produce response data, which are then statistically analyzed. On the other hand, five limit states representing various degrees of structural damage are defined and the statistics of the structural capacity corresponding to each limit state can be established. The fragility curve is gen...
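The following is a minimal sketch of the Latin hypercube sampling step mentioned above: build a stratified uniform design and map it to physical parameters through inverse CDFs. The two parameters and their distributions are hypothetical placeholders, not the paper's earthquake-structure model.

# Latin hypercube sampling: each column of the design visits every stratum of
# [0, 1] exactly once, then the design is mapped to physical parameters.
import numpy as np
from scipy import stats

def latin_hypercube(n_samples, n_params, rng):
    strata = np.array([rng.permutation(n_samples) for _ in range(n_params)]).T
    return (strata + rng.random((n_samples, n_params))) / n_samples

rng = np.random.default_rng(1)
u = latin_hypercube(n_samples=10, n_params=2, rng=rng)

# Map the design to physical parameters through inverse CDFs (placeholder models):
damping = stats.uniform(loc=0.02, scale=0.06).ppf(u[:, 0])       # 2%-8% damping ratio
peak_accel = stats.lognorm(s=0.5, scale=0.3).ppf(u[:, 1])        # spectral accel. (g)
print(np.column_stack([damping, peak_accel]))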

Journal ArticleDOI
TL;DR: It is clear that the field is still in early stages of development; the algorithms and probability models tend to be simplistic, and estimates of performance are far more common than exact measures.

01 Jan 1990
TL;DR: Methods of approximating quantities related to the solutions of stochastic differential systems based on the simulation of time-discrete Markov chains are presented, and an application to an engineering problem (the study of stability of the motion of a helicopter blade) is described.
Abstract: We present methods of approximating quantities related to the solutions of stochastic differential systems based on the simulation of time-discrete Markov chains. The motivations come from random mechanics and the numerical integration of certain deterministic PDEs by probabilistic algorithms. We state theoretical results concerning the rates of convergence of these methods. We give results of numerical tests, and we describe an application of this approach to an engineering problem (the study of stability of the motion of a helicopter blade).
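A minimal sketch of the approach, under the assumption that one wants a weak approximation of E[f(X_T)] for a stochastic differential equation, is the Euler time-discretization below; geometric Brownian motion is used only because its exact answer is known, and it is not the helicopter-blade model studied in the paper.

# Weak approximation of E[f(X_T)] by simulating a time-discrete Markov chain
# (the Euler-Maruyama scheme) for dX = mu*X dt + sigma*X dW.
import numpy as np

def euler_weak_estimate(x0, mu, sigma, T, steps, paths, f, rng):
    dt = T / steps
    x = np.full(paths, x0, dtype=float)
    for _ in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=paths)
        x = x + mu * x * dt + sigma * x * dw       # Euler-Maruyama transition
    return f(x).mean()

rng = np.random.default_rng(2)
est = euler_weak_estimate(x0=1.0, mu=0.05, sigma=0.2, T=1.0,
                          steps=50, paths=100_000, f=lambda x: x, rng=rng)
print(est, "vs exact", np.exp(0.05))               # E[X_T] = x0 * exp(mu*T)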

Journal ArticleDOI
01 Aug 1990-Networks
TL;DR: This article has developed a randomized approximation scheme, BN-RAS, for doing probabilistic inference in belief networks that can, in many circumstances, perform efficient approximate inference in large and richly interconnected models.
Abstract: Researchers in decision analysis and artificial intelligence (AI) have used Bayesian belief networks to build probabilistic expert systems. Using standard methods drawn from the theory of computational complexity, workers in the field have shown that the problem of probabilistic inference in belief networks is difficult and almost certainly intractable. We have developed a randomized approximation scheme, BN-RAS, for doing probabilistic inference in belief networks. The algorithm can, in many circumstances, perform efficient approximate inference in large and richly interconnected models. Unlike previously described stochastic algorithms for probabilistic inference, the randomized approximation scheme (ras) computes a priori bounds on running time by analyzing the structure and contents of the belief network. In this article, we describe BN-RAS precisely and analyze its performance mathematically.
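BN-RAS itself is not reproduced here; as a sketch of the family of stochastic inference methods to which it belongs, the following estimates a posterior in a hypothetical two-node belief network by forward (rejection) sampling.

# Forward sampling for belief-network inference on a toy two-node network;
# the network structure and probabilities are hypothetical.
import random

def sample_network():
    disease = random.random() < 0.01                  # P(disease) = 0.01
    p_symptom = 0.9 if disease else 0.1               # P(symptom | disease)
    symptom = random.random() < p_symptom
    return disease, symptom

def estimate_posterior(n_samples=100_000):
    """Estimate P(disease | symptom) by rejection sampling on forward samples."""
    accepted = with_disease = 0
    for _ in range(n_samples):
        disease, symptom = sample_network()
        if symptom:                                    # keep samples matching the evidence
            accepted += 1
            with_disease += disease
    return with_disease / accepted

print(estimate_posterior())   # exact value: 0.009 / (0.009 + 0.099) ≈ 0.083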

01 Jan 1990
TL;DR: This dissertation investigates properties of conditional independence in relation to the elicitation, organization and inference of probabilistic expert systems and develops an efficient algorithm to find this solution.
Abstract: This dissertation investigates properties of conditional independence in relation to the elicitation, organization and inference of probabilistic expert systems. Qualitative notions of interaction, connectedness, mediation and causation are given formal probabilistic underpinning; graph-based representations and algorithms are developed for processing these notions. A partial axiomatic characterization is established of the predicate $I(X,Z,Y)$ to read: "X is conditionally independent of Y, given Z". This characterization facilitates both a graphical representation of dependence information and a solution to the implication problem of deciding whether an arbitrary independence statement $I(X,Z,Y)$ logically follows from a given set $\Sigma$ of such statements. The solution of the implication problem is the key for identifying what information is unnecessary for performing a given computation. An algorithm is developed that identifies this information in probabilistic networks. The algorithm's correctness and optimality stem from the soundness and completeness of probabilistic networks with respect to probability theory. An enhanced version of the algorithm extends its applicability to networks that encode functional dependencies. Probabilistic dependence is also used to formalize the notion of interactions among variables; a class of distributions is identified for which this formal definition exhibits qualitative properties normally attributed to the word "interact". Finally, the problem is addressed of deciding whether a given distribution can be represented as a graph of certain structure. Conditions are identified for the existence of a unique solution, an efficient algorithm is developed to find this solution, and a relationship to the problem of discovering causality from statistical data is discussed.

Journal ArticleDOI
TL;DR: Combined algorithms are proposed that incorporate a priori knowledge about the solution in the form of constraints and converge faster than previously published algorithms.
Abstract: A class of iterative signal restoration algorithms is derived based on a representation theorem for the generalized inverse of a matrix. These algorithms exhibit a first or higher order of convergence, and some of them consist of an online and an offline computational part. The conditions for convergence, the rate of convergence of these algorithms, and the computational load required to achieve the same restoration results are derived. An iterative algorithm is also presented which exhibits a higher rate of convergence than the standard quadratic algorithm with no extra computational load. These algorithms can be applied to the restoration of signals of any dimensionality. The presented approach unifies a large number of iterative restoration algorithms. Based on the convergence properties of these algorithms, combined algorithms are proposed that incorporate a priori knowledge about the solution in the form of constraints and converge faster than previously published algorithms.
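As a rough illustration of constraint-incorporating iterative restoration (not the paper's generalized-inverse construction), the sketch below runs a Landweber-type iteration with a nonnegativity constraint applied at each step; the measurement operator and signal are synthetic.

# Constrained Landweber-type iteration: x <- x + beta * H^T (y - H x), followed by
# projection onto the nonnegative orthant as a priori knowledge about the signal.
import numpy as np

def constrained_landweber(H, y, beta, iterations):
    x = np.zeros(H.shape[1])
    for _ in range(iterations):
        x = x + beta * H.T @ (y - H @ x)       # gradient step toward H x = y
        x = np.clip(x, 0.0, None)              # constraint: signal is nonnegative
    return x

rng = np.random.default_rng(3)
x_true = np.array([0.0, 1.0, 2.0, 0.0, 3.0])
H = rng.normal(size=(8, 5))                    # stand-in measurement/blurring operator
y = H @ x_true + rng.normal(scale=0.01, size=8)
beta = 1.0 / np.linalg.norm(H, 2) ** 2         # step size below 2/||H||^2 for convergence
print(constrained_landweber(H, y, beta, 500).round(2))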


Journal ArticleDOI
28 May 1990
TL;DR: It is shown that such problems can be solved exactly with high probability, in a well-defined sense, under the assumption that all coefficients are drawn uniformly and independently from [0, 1].
Abstract: We analyse the generalised assignment problem under the assumption that all coefficients are drawn uniformly and independently from [0, 1]. We show that such problems can be solved exactly with high probability, in a well-defined sense. The results are closely related to earlier work of Lueker, Goldberg and Marchetti-Spaccamela and ourselves.

Proceedings ArticleDOI
02 Dec 1990
TL;DR: The paper presents theoretical analysis of the deterministic complexity of the load balancing problem (LBP) and shows certain cases of the LBP to be NP-complete.
Abstract: The paper presents theoretical analysis of the deterministic complexity of the load balancing problem (LBP). Because of the difficulty of the general problem, research in the area mostly restricts itself to probabilistic or approximation algorithms, or to the average behavior of a network. The paper provides deterministic analysis of the problem for general networks. It focuses on the worst-case complexity analysis of the problem. It shows certain cases of the LBP to be NP-complete. The paper also discusses situations closely related to computer networks, where there is a global view of load distribution in the network; it provides a polynomial algorithm for solving the load balancing problem in this network.

Journal ArticleDOI
TL;DR: In this paper, a model with a probabilistic evolution of damage in the material is proposed, based on a two-level approach in which the micro-level is constituted of elastic-brittle springs whose strength follows a probabilistic distribution.
Abstract: A model is proposed with a probabilistic evolution of damage in the material. The constitutive law is built through the development of a two-level approach. The micro-level is assumed to be constituted of elastic-brittle springs whose strength follows a probabilistic distribution. The representative macro-volume of material contains a given number of these elementary defects and its damage (loss of elastic properties) is computed from the knowledge of the local states. The macro-behavior results from the interactions between all the micro-defects. The model may be considered as representative of the behavior of brittle and almost brittle materials. It exhibits scattering and size effect. A Weibull distribution law is assumed for the local probabilities of failure and a parallel loose bundle connects the micro-defects. These simple hypotheses lead to analytical expressions of the probabilistic constitutive law. The approach developed appears as an intermediate model between continuous damage mechanics and probabilistic brittle fracture. The knowledge of a single parameter N, the number of defects in a given volume, provides the degree of ductility of the material.
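A small Monte Carlo sketch of the two-level picture described above is given below: the macro-volume is treated as a loose parallel bundle of elastic-brittle springs with Weibull-distributed failure strains, and damage at a given strain is the fraction of broken springs; all parameter values are illustrative.

# Parallel loose bundle of elastic-brittle micro-springs with Weibull strengths:
# damage D(eps) is the fraction of broken springs and the surviving springs carry
# the macro-stress (unit micro-stiffness assumed).
import numpy as np

rng = np.random.default_rng(4)
N = 10_000                                   # number of elementary defects (springs)
m, eps0 = 4.0, 1.0                           # Weibull modulus and scale (strain units)
strengths = eps0 * rng.weibull(m, size=N)    # failure strain of each micro-spring

for eps in np.linspace(0.0, 2.0, 9):
    damage = np.mean(strengths <= eps)       # D(eps): fraction of broken springs
    stress = (1.0 - damage) * eps            # load shared by the survivors
    print(f"strain={eps:.2f}  damage={damage:.3f}  stress={stress:.3f}")
# The sample damage approaches 1 - exp(-(eps/eps0)**m), the assumed Weibull form.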

Book ChapterDOI
01 Jan 1990
TL;DR: KNET, a software environment for constructing knowledge-based systems within the axiomatic framework of decision theory, contains a randomized approximation scheme for probabilistic inference that computes a priori bounds on running time by analyzing the structure and contents of the belief network.
Abstract: In recent years, researchers in decision analysis and artificial intelligence (AI) have used Bayesian belief networks to build models of expert opinion. Using standard methods drawn from the theory of computational complexity, workers in the field have shown that the problem of probabilistic inference in belief networks is difficult and almost certainly intractable. KNET, a software environment for constructing knowledge-based systems within the axiomatic framework of decision theory, contains a randomized approximation scheme for probabilistic inference. The algorithm can, in many circumstances, perform efficient approximate inference in large and richly interconnected models of medical diagnosis. Unlike previously described stochastic algorithms for probabilistic inference, the randomized approximation scheme computes a priori bounds on running time by analyzing the structure and contents of the belief network.

Journal ArticleDOI
01 Jun 1990
TL;DR: Only section headings are available: Computation Theory, Algebra, Analytic Number Theory, Algebraic Geometry, Probability.

Journal ArticleDOI
TL;DR: In this article, a probabilistic analysis of the distribution of collapsing soils is proposed as a rational approach for quantifying risk involved for a project in an area where such soils are found.
Abstract: Collapsing soils, which undergo a large decrease in bulk volume virtually instantaneously upon saturation and/or load application, are found in arid and semi-arid regions of the world. In the western and midwestern U.S., problems resulting from collapsing soils are being recognized due to rapid industrial and urban developments. A probabilistic analysis of the distribution of such soils would be a rational approach for quantifying risk involved for a project in an area where such soils are found. Indicator kriging was applied to seven sets of collapse and collapse-related soil parameters to obtain the probability that a certain parameter is more or less than a predefined critical value for low, medium, and high collapse susceptibility. Results are presented in the form of probability contour plots with known variance of estimation of the probability. The ability to predict the probability of occurrence of collapse and collapse-related soil parameters for different critical values with a known degree of certainty is invaluable to planners, developers, and geotechnical engineers.

Journal Article
TL;DR: The NESSUS software system, developed at SwRI as part of the Probabilistic Structural Analysis Methods (PSAM) project discussed by the authors, integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures.
Abstract: The Probabilistic Structural Analysis Methods (PSAM) project developed at SwRI integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. A number of software components are contained in the NESSUS system and include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. This paper discusses the NESSUS software system and its ability to carry out the goals of the PSAM project.