
Showing papers on "Probabilistic analysis of algorithms published in 2003"


Journal ArticleDOI
TL;DR: A method of assigning functions based on a probabilistic analysis of graph neighborhoods in a protein-protein interaction network that exploits the fact that graph neighbors are more likely to share functions than nodes which are not neighbors.
Abstract: Motivation: The development of experimental methods for genome scale analysis of molecular interaction networks has made possible new approaches to inferring protein function. This paper describes a method of assigning functions based on a probabilistic analysis of graph neighborhoods in a protein-protein interaction network. The method exploits the fact that graph neighbors are more likely to share functions than nodes which are not neighbors. A binomial model of local neighbor function labeling probability is combined with a Markov random field propagation algorithm to assign function probabilities for proteins in the network. Results: We applied the method to a protein-protein interaction dataset for the yeast Saccharomyces cerevisiae using the Gene Ontology (GO) terms as function labels. The method reconstructed known GO term assignments with high precision, and produced putative GO assignments to 320 proteins that currently lack GO annotation, which represents about 10% of the unlabeled proteins in S. cerevisiae.
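The local scoring step can be illustrated with a short sketch: score a candidate GO term for an unannotated protein by the binomial tail probability of seeing that many annotated neighbors carrying the term, given a background frequency. The graph, labels, and background frequency below are illustrative assumptions, and the Markov random field propagation step that the paper combines with such local estimates is not shown.

```python
from math import comb

def neighborhood_score(graph, labels, protein, go_term, background_freq):
    """Score a GO term for an unannotated protein from its interaction neighbors.

    Returns P(X >= k) for X ~ Binomial(n, background_freq), where n is the
    number of annotated neighbors and k of them carry go_term.  A small tail
    probability means the term is over-represented in the neighborhood.
    (Illustrative sketch only; the published method feeds local estimates
    into a Markov random field propagation step.)
    """
    neighbors = [p for p in graph[protein] if p in labels]
    n = len(neighbors)
    k = sum(1 for p in neighbors if go_term in labels[p])
    # Binomial upper-tail probability of seeing >= k labeled neighbors by chance.
    return sum(comb(n, i) * background_freq**i * (1 - background_freq)**(n - i)
               for i in range(k, n + 1))

# Toy usage: a 4-protein interaction graph with partial GO annotation.
graph = {"P1": {"P2", "P3", "P4"}, "P2": {"P1"}, "P3": {"P1"}, "P4": {"P1"}}
labels = {"P2": {"GO:0006412"}, "P3": {"GO:0006412"}, "P4": {"GO:0005975"}}
print(neighborhood_score(graph, labels, "P1", "GO:0006412", background_freq=0.05))
```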

387 citations


Journal ArticleDOI
TL;DR: The upper and lower bounds on the maximum load are tight up to additive constants, proving that the Always-Go-Left algorithm achieves an almost optimal load balancing among all sequential multiple-choice algorithms.
Abstract: This article deals with randomized allocation processes placing sequentially n balls into n bins. We consider multiple-choice algorithms that choose d locations (bins) for each ball at random, inspect the content of these locations, and then place the ball into one of them, for example, in a location with minimum number of balls. The goal is to achieve a good load balancing. This objective is measured in terms of the maximum load, that is, the maximum number of balls in the same bin. Multiple-choice algorithms have been studied extensively in the past. Previous analyses typically assume that the d locations for each ball are drawn uniformly and independently from the set of all bins. We investigate whether a nonuniform or dependent selection of the d locations of a ball may lead to a better load balancing. Three types of selection, resulting in three classes of algorithms, are distinguished: (1) uniform and independent, (2) nonuniform and independent, and (3) nonuniform and dependent. Our first result shows that the well-studied uniform greedy algorithm (class 1) does not obtain the smallest possible maximum load. In particular, we introduce a nonuniform algorithm (class 2) that obtains a better load balancing. Surprisingly, this algorithm uses an unfair tie-breaking mechanism, called Always-Go-Left, resulting in an asymmetric assignment of the balls to the bins. Our second result is a lower bound showing that a dependent allocation (class 3) cannot yield significant further improvement. Our upper and lower bounds on the maximum load are tight up to additive constants, proving that the Always-Go-Left algorithm achieves an almost optimal load balancing among all sequential multiple-choice algorithms. Furthermore, we show that the results for the Always-Go-Left algorithm can be generalized to allocation processes with more balls than bins and even to infinite processes in which balls are inserted and deleted by an oblivious adversary.
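To make the three classes concrete, the sketch below simulates the uniform d-choice greedy rule against an Always-Go-Left-style rule in which the bins are split into d groups, one candidate bin is drawn from each group, and ties are broken toward the leftmost group. The group partition and parameters are illustrative assumptions rather than the paper's exact construction.

```python
import random

def uniform_greedy(n_balls, n_bins, d=2):
    """Class 1: d locations drawn uniformly and independently; place in least-loaded."""
    load = [0] * n_bins
    for _ in range(n_balls):
        choices = [random.randrange(n_bins) for _ in range(d)]
        load[min(choices, key=lambda b: load[b])] += 1
    return max(load)

def always_go_left(n_balls, n_bins, d=2):
    """Class 2 flavor: one candidate per group of n/d bins, ties broken to the left."""
    load = [0] * n_bins
    group = n_bins // d                       # assumes d divides n_bins
    for _ in range(n_balls):
        choices = [g * group + random.randrange(group) for g in range(d)]
        best = min(choices, key=lambda b: (load[b], b))  # tie -> leftmost group
        load[best] += 1
    return max(load)

if __name__ == "__main__":
    random.seed(0)
    n = 100_000
    print("uniform greedy max load:", uniform_greedy(n, n))
    print("always-go-left max load:", always_go_left(n, n))
```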

260 citations


Journal ArticleDOI
TL;DR: In this paper, a non-probabilistic interval analysis method for the dynamical response of structures with uncertain-but-bounded parameters is presented. The results show that the response region of the structure with uncertain-but-bounded parameters contains the region produced by the probabilistic approach.

229 citations


Proceedings ArticleDOI
27 Oct 2003
TL;DR: A family of bitmap algorithms that address the problem of counting the number of distinct header patterns (flows) seen on a high-speed link and can be used to detect DoS attacks and port scans and to solve measurement problems.
Abstract: This paper presents a family of bitmap algorithms that address the problem of counting the number of distinct header patterns (flows) seen on a high speed link. Such counting can be used to detect DoS attacks and port scans, and to solve measurement problems. Counting is especially hard when processing must be done within a packet arrival time (8 nsec at OC-768 speeds) and, hence, must require only a small number of accesses to limited, fast memory. A naive solution that maintains a hash table requires several Mbytes because the number of flows can be above a million. By contrast, our new probabilistic algorithms take very little memory and are fast. The reduction in memory is particularly important for applications that run multiple concurrent counting instances. For example, we replaced the port scan detection component of the popular intrusion detection system Snort with one of our new algorithms. This reduced memory usage on a ten minute trace from 50 Mbytes to 5.6 Mbytes while maintaining a 99.77% probability of alarming on a scan within 6 seconds of when the large-memory algorithm would. The best known prior algorithm (probabilistic counting) takes 4 times more memory on port scan detection and 8 times more on a measurement application. Fundamentally, this is because our algorithms can be customized to take advantage of special features of applications such as a large number of instances that have very small counts or prior knowledge of the likely range of the count.
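The basic bitmap idea behind this family can be stated in a few lines: hash each flow identifier into a fixed-size bitmap and estimate the number of distinct flows from the fraction of bits still zero (the classic linear-counting estimator). The sketch below shows only that baseline estimator; the paper's refinements of it, such as bitmaps tuned to small counts or to a known count range, are not reproduced.

```python
import hashlib
from math import log

class BitmapCounter:
    """Estimate the number of distinct flows with a fixed-size bitmap
    (linear counting).  Memory stays at m bits no matter how many packets arrive."""

    def __init__(self, m=8192):
        self.m = m
        self.bits = bytearray(m // 8)

    def add(self, flow_id: bytes):
        h = int.from_bytes(hashlib.blake2b(flow_id, digest_size=8).digest(), "big") % self.m
        self.bits[h // 8] |= 1 << (h % 8)

    def estimate(self):
        zero = sum(bin(b ^ 0xFF).count("1") for b in self.bits)  # bits still zero
        return self.m * log(self.m / zero) if zero else float("inf")

# Toy usage: 5000 distinct "flows" hashed into an 8192-bit bitmap.
c = BitmapCounter()
for i in range(5000):
    c.add(f"10.0.{i % 256}.{i // 256}:{80 + i % 7}".encode())
print(round(c.estimate()))
```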

216 citations


Proceedings ArticleDOI
09 Jun 2003
TL;DR: Simple and easy-to-analyze randomized approximation algorithms are given for several well-studied NP-hard network design problems, including a simple constant-factor approximation algorithm for the single-sink buy-at-bulk network design problem.
Abstract: We give simple and easy-to-analyze randomized approximation algorithms for several well-studied NP-hard network design problems. Our algorithms improve over the previously best known approximation ratios. Our main results are the following. We give a randomized 3.55-approximation algorithm for the connected facility location problem. The algorithm requires three lines to state, one page to analyze, and improves the best-known performance guarantee for the problem. We give a 5.55-approximation algorithm for virtual private network design. Previously, constant-factor approximation algorithms were known only for special cases of this problem. We give a simple constant-factor approximation algorithm for the single-sink buy-at-bulk network design problem. Our performance guarantee improves over what was previously known, and is an order of magnitude improvement over previous combinatorial approximation algorithms for the problem.

201 citations


Journal Article
TL;DR: A new family of topic-ranking algorithms for multi-labeled documents is described that achieves state-of-the-art results and outperforms topic-ranking adaptations of Rocchio's algorithm and of the Perceptron algorithm.
Abstract: We describe a new family of topic-ranking algorithms for multi-labeled documents. The motivation for the algorithms stems from recent advances in online learning algorithms. The algorithms are simple to implement and are also time and memory efficient. We provide a unified analysis of the family of algorithms in the mistake bound model. We then discuss experiments with the proposed family of topic-ranking algorithms on the Reuters-21578 corpus and the new corpus released by Reuters in 2000. On both corpora, the algorithms we present achieve state-of-the-art results and outperform topic-ranking adaptations of Rocchio's algorithm and of the Perceptron algorithm.
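As a rough illustration of the online, mistake-driven flavor of such algorithms, the sketch below implements a generic perceptron-style topic-ranking update: every relevant topic scored no higher than some irrelevant topic is promoted, and the offending irrelevant topics are demoted. It is a simplified stand-in, not the paper's exact update rules or normalization.

```python
import numpy as np

def rank_update(W, x, relevant, lr=1.0):
    """One perceptron-style topic-ranking update (generic sketch).

    W: (num_topics, num_features) prototype matrix; x: document feature vector;
    relevant: set of topic indices labeled for this document.
    Relevant topics scored no higher than some irrelevant topic are promoted,
    and the offending irrelevant topics are demoted."""
    scores = W @ x
    irrelevant = [t for t in range(W.shape[0]) if t not in relevant]
    violations = [(r, s) for r in relevant for s in irrelevant
                  if scores[r] <= scores[s]]
    step = lr / max(len(violations), 1)
    for r, s in violations:
        W[r] += step * x
        W[s] -= step * x
    return W

# Toy usage: 3 topics, 4 features, one multi-labeled document.
rng = np.random.default_rng(0)
W = np.zeros((3, 4))
x = rng.standard_normal(4)
W = rank_update(W, x, relevant={0, 2})
print(np.round(W @ x, 3))   # relevant topics 0 and 2 now score above topic 1
```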

201 citations


Dissertation
01 Jan 2003
TL;DR: This thesis proposes a novel, hybrid approach, combining features of both symbolic and explicit implementations and shows that this technique can almost match the speed of sparse matrix based implementations, but uses significantly less memory.
Abstract: In this thesis, we present efficient implementation techniques for probabilistic model checking, a method which can be used to analyse probabilistic systems such as randomised distributed algorithms, fault-tolerant processes and communication networks. A probabilistic model checker inputs a probabilistic model and a specification, such as "the message will be delivered with probability 1", "the probability of shutdown occurring is at most 0.02" or "the probability of a leader being elected within 5 rounds is at least 0.98", and can automatically verify if the specification is true in the model. Motivated by the success of symbolic approaches to non-probabilistic model checking, which are based on a data structure called binary decision diagrams (BDDs), we present an extension to the probabilistic case, using multi-terminal binary decision diagrams (MTBDDs). We demonstrate that MTBDDs can be used to perform probabilistic analysis of large, structured models with more than 7.5 billion states, well out of the reach of conventional explicit techniques based on sparse matrices. We also propose a novel, hybrid approach, combining features of both symbolic and explicit implementations, and show, using results from a wide range of case studies, that this technique can almost match the speed of sparse matrix based implementations, but uses significantly less memory. This increases, by approximately an order of magnitude, the size of model which can be handled on a typical workstation.
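Whatever the storage scheme (MTBDD, sparse matrix, or the hybrid), the numerical core of a probabilistic model checker is an iterative linear-algebra computation over the transition matrix. The sketch below shows that kind of core computation for one common query, the probability of eventually reaching a target state in a discrete-time Markov chain, over an explicit sparse representation; it illustrates the computation involved, not the thesis's hybrid engine.

```python
def reach_probability(transitions, targets, n_states, tol=1e-10, max_iter=100000):
    """For every state of a discrete-time Markov chain, iteratively compute the
    probability of eventually reaching a target state.

    transitions: dict mapping state -> list of (successor, probability).
    Uses the fixed-point iteration x_s = 1 for targets,
    x_s = sum over successors t of P(s, t) * x_t otherwise.
    """
    x = [1.0 if s in targets else 0.0 for s in range(n_states)]
    for _ in range(max_iter):
        delta = 0.0
        for s in range(n_states):
            if s in targets:
                continue
            new = sum(p * x[t] for t, p in transitions.get(s, []))
            delta = max(delta, abs(new - x[s]))
            x[s] = new
        if delta < tol:
            break
    return x

# Toy 3-state chain: state 2 is the target; from state 0 it is reached
# with probability 0.6 (the self-loop on 0 is resolved by the fixed point).
T = {0: [(0, 0.5), (1, 0.2), (2, 0.3)], 1: [(1, 1.0)]}
print(reach_probability(T, targets={2}, n_states=3))
```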

201 citations


Book ChapterDOI
01 Jan 2003
TL;DR: Important analytical tools are presented, discussed, and applied to well-chosen example functions in the analysis of different variants of evolutionary algorithms on selected functions.
Abstract: Many experiments have shown that evolutionary algorithms are useful randomized search heuristics for optimization problems. In order to learn more about the reasons for their efficiency and in order to obtain proven results on evolutionary algorithms it is necessary to develop a theory of evolutionary algorithms. Such a theory is still in its infancy. A major part of a theory is the analysis of different variants of evolutionary algorithms on selected functions. Several results of this kind have been obtained during the last years. Here important analytical tools are presented, discussed, and applied to well-chosen example functions.
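A standard entry point for this kind of runtime analysis is the (1+1) evolutionary algorithm on the OneMax function (the number of ones in a bit string), for which the expected optimization time is Theta(n log n). The sketch below is a generic illustration of that textbook setup, with parameters chosen for illustration rather than taken from the chapter.

```python
import random

def one_max(x):
    """OneMax: number of ones in the bit string; the maximum n is the all-ones string."""
    return sum(x)

def one_plus_one_ea(n, fitness, rng=random.Random(0)):
    """(1+1) EA: flip each bit independently with probability 1/n and accept the
    offspring if it is at least as fit.  Returns the number of generations
    until the optimum (fitness n, as for OneMax) is found."""
    x = [rng.randint(0, 1) for _ in range(n)]
    generations = 0
    while fitness(x) < n:
        y = [bit ^ 1 if rng.random() < 1.0 / n else bit for bit in x]
        if fitness(y) >= fitness(x):
            x = y
        generations += 1
    return generations

# Expected runtime on OneMax is Theta(n log n); one run for n = 100:
print(one_plus_one_ea(100, one_max))
```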

195 citations


Proceedings ArticleDOI
11 Oct 2003
TL;DR: This paper shows that plain old DPLL equipped with memorization (an algorithm the authors call #DPLLCache) can solve both of these problems with time complexity that is at least as good as state-of-the-art exact algorithms, and that it can also achieve the best known time-space tradeoff.
Abstract: Bayesian inference is an important problem with numerous applications in probabilistic reasoning. Counting satisfying assignments is a closely related problem of fundamental theoretical importance. In this paper, we show that plain old DPLL equipped with memorization (an algorithm we call #DPLLCache) can solve both of these problems with time complexity that is at least as good as state-of-the-art exact algorithms, and that it can also achieve the best known time-space tradeoff. We then proceed to show that there are instances where #DPLLCache can achieve an exponential speedup over existing algorithms.
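A toy version of the underlying idea, DPLL-style branching for model counting with a cache keyed on the residual formula, might look as follows. The real #DPLLCache algorithm uses component decomposition and far more careful cache keys; this is only an illustration of the memoization principle.

```python
def count_models(cnf, num_vars):
    """Count satisfying assignments of a CNF formula given as an iterable of
    clauses, each a set of nonzero ints (DIMACS-style literals; no empty clauses).
    Toy illustration of DPLL branching plus memoization on residual formulas."""
    cache = {}

    def condition(clauses, lit):
        """Set lit to true: drop satisfied clauses, shrink the others."""
        out = []
        for c in clauses:
            if lit in c:
                continue                      # clause satisfied
            if -lit in c:
                c = c - {-lit}                # literal falsified
                if not c:
                    return None               # empty clause: conflict
            out.append(frozenset(c))
        return frozenset(out)

    def dpll(clauses, free_vars):
        if clauses is None:
            return 0
        if not clauses:
            return 2 ** len(free_vars)        # every remaining variable is free
        key = (clauses, free_vars)
        if key in cache:
            return cache[key]
        v = abs(next(iter(next(iter(clauses)))))   # branch on a variable that occurs
        rest = free_vars - {v}
        total = dpll(condition(clauses, v), rest) + dpll(condition(clauses, -v), rest)
        cache[key] = total
        return total

    clauses = frozenset(frozenset(c) for c in cnf)
    return dpll(clauses, frozenset(range(1, num_vars + 1)))

# (x1 or x2) and (not x1 or x3) over 3 variables has 4 satisfying assignments.
print(count_models([{1, 2}, {-1, 3}], 3))
```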

177 citations


01 Jan 2003
TL;DR: This thesis shows how probabilistic algorithms can be formally verified using a mechanical theorem prover, and verifies the Miller-Rabin primality test, defining a version with strong properties that can be executed in the logic to prove compositeness of numbers.
Abstract: This thesis shows how probabilistic algorithms can be formally verified using a mechanical theorem prover. We begin with an extensive foundational development of probability, creating a higher-order logic formalization of mathematical measure theory. This allows the definition of the probability space we use to model a random bit generator, which informally is a stream of coin-flips, or technically an infinite sequence of IID Bernoulli(1/2) random variables. Probabilistic programs are modelled using the state-transformer monad familiar from functional programming, where the random bit generator is passed around in the computation. Functions remove random bits from the generator to perform their calculation, and then pass back the changed random bit generator with the result. Our probability space modelling the random bit generator allows us to give precise probabilistic specifications of such programs, and then verify them in the theorem prover. We also develop technical support designed to expedite verification: probabilistic quantifiers; a compositional property subsuming measurability and independence; a probabilistic while loop together with a formal concept of termination with probability 1. We also introduce a technique for reducing properties of a probabilistic while loop to properties of programs that are guaranteed to terminate: these can then be established using induction and standard methods of program correctness. We demonstrate the formal framework with some example probabilistic programs: sampling algorithms for four probability distributions; some optimal procedures for generating dice rolls from coin flips; the symmetric simple random walk. In addition, we verify the Miller-Rabin primality test, a well-known and commercially used probabilistic algorithm. Our fundamental perspective allows us to define a version with strong properties, which we can execute in the logic to prove compositeness of numbers.
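Outside the theorem prover, the state-transformer style that the thesis formalizes looks like ordinary functional programming: each probabilistic function consumes bits from the front of a coin-flip stream and returns its result together with the remaining stream. The sketch below shows that style informally in Python for one of the example tasks mentioned, generating a fair die roll from coin flips by rejection; it is an analogue for illustration, not HOL code.

```python
import random

def flip(bits):
    """Consume one coin flip: return (bit, remaining stream)."""
    bit = next(bits)
    return bit, bits

def three_bits(bits):
    """Consume three flips and return their value in 0..7 plus the remaining stream."""
    value = 0
    for _ in range(3):
        b, bits = flip(bits)
        value = (value << 1) | b
    return value, bits

def die_roll(bits):
    """Fair die roll (1..6) from coin flips by rejection sampling:
    draw 3 bits, retry on 6 or 7.  Terminates with probability 1."""
    v, bits = three_bits(bits)
    if v >= 6:
        return die_roll(bits)
    return v + 1, bits

# Model the random bit generator as an (effectively) infinite stream of fair flips.
rng = random.Random(42)
coin_flips = iter(lambda: rng.randint(0, 1), None)   # callable never returns None
roll, coin_flips = die_roll(coin_flips)
print(roll)
```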

167 citations


Journal ArticleDOI
TL;DR: These algorithms can be regarded as generalizations of the previously proposed set-membership NLMS (SM-NLMS) algorithm, and include two constraint sets in order to construct a space of feasible solutions for the coefficient updates.
Abstract: This paper presents and analyzes novel data selective normalized adaptive filtering algorithms with two data reuses. The algorithms [the set-membership binormalized LMS (SM-BN-DRLMS) algorithms] are derived using the concept of set-membership filtering (SMF). These algorithms can be regarded as generalizations of the previously proposed set-membership NLMS (SM-NLMS) algorithm. They include two constraint sets in order to construct a space of feasible solutions for the coefficient updates. The algorithms include data-dependent step sizes that provide fast convergence and low excess mean-squared error (MSE). Convergence analyses in the mean-squared sense are presented, and closed-form expressions are given for both white and colored input signals. Simulation results show good performance of the algorithms in terms of convergence speed, final misadjustment, and reduced computational complexity.
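For reference, the earlier SM-NLMS rule that these algorithms generalize updates the coefficients only when the output error magnitude exceeds a bound gamma, with a data-dependent step size. The sketch below shows that simpler rule (the regularization term delta and the toy experiment are illustrative assumptions); the paper's binormalized, two-data-reuse variants are not reproduced.

```python
import numpy as np

def sm_nlms_update(w, x, d, gamma, delta=1e-8):
    """One set-membership NLMS step (sketch).

    w: current coefficient vector, x: input vector, d: desired output,
    gamma: error-magnitude bound defining the constraint set.
    Updates only if |e| > gamma, with data-dependent step mu = 1 - gamma/|e|.
    """
    e = d - w @ x
    if abs(e) > gamma:
        mu = 1.0 - gamma / abs(e)
        w = w + mu * e * x / (x @ x + delta)
    return w

# Toy system identification: adapt toward an unknown 4-tap filter.
rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.2, 0.1])
w = np.zeros(4)
for _ in range(2000):
    x = rng.standard_normal(4)
    d = w_true @ x + 0.01 * rng.standard_normal()
    w = sm_nlms_update(w, x, d, gamma=0.05)
print(np.round(w, 3))
```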

Journal ArticleDOI
TL;DR: This paper develops a fast, linear-programming-based, approximation scheme that exploits the decomposable structure and is guaranteed to produce feasible solutions for a stochastic capacity expansion problem.
Abstract: Planning for capacity expansion forms a crucial part of the strategic-level decision making in many applications. Consequently, quantitative models for economic capacity expansion planning have been the subject of intense research. However, much of the work in this area has been restricted to linear cost models and/or limited degree of uncertainty to make the problems analytically tractable. This paper addresses a stochastic capacity expansion problem where the economies-of-scale in expansion costs are handled via fixed-charge cost functions, and forecast uncertainties in the problem parameters are explicitly considered by specifying a set of scenarios. The resulting formulation is a multistage stochastic integer program. We develop a fast, linear-programming-based, approximation scheme that exploits the decomposable structure and is guaranteed to produce feasible solutions for this problem. Through a probabilistic analysis, we prove that the optimality gap of the heuristic solution almost surely vanishes asymptotically as the problem size increases.

Journal ArticleDOI
TL;DR: Multichannel affine and fast affine projection algorithms are introduced for active noise control or acoustic equalization and it is shown that they can provide the best convergence performance (even over recursive-least-squares algorithms) when nonideal noisy acoustic plant models are used in the adaptive systems.
Abstract: In the field of adaptive signal processing, it is well known that affine projection algorithms or their low-computational implementations fast affine projection algorithms can produce a good tradeoff between convergence speed and computational complexity. Although these algorithms typically do not provide the same convergence speed as recursive-least-squares algorithms, they can provide a much improved convergence speed compared to stochastic gradient descent algorithms, without the high increase of the computational load or the instability often found in recursive-least-squares algorithms. In this paper, multichannel affine and fast affine projection algorithms are introduced for active noise control or acoustic equalization. Multichannel fast affine projection algorithms have been previously published for acoustic echo cancellation, but the problem of active noise control or acoustic equalization is a very different one, leading to different structures, as explained in the paper. The computational complexity of the new algorithms is evaluated, and it is shown through simulations that not only can the new algorithms provide the expected tradeoff between convergence performance and computational complexity, they can also provide the best convergence performance (even over recursive-least-squares algorithms) when nonideal noisy acoustic plant models are used in the adaptive systems.

Journal ArticleDOI
TL;DR: In this paper, the authors present a methodology for conducting a site-specific probabilistic analysis of fault displacement hazard, which can be applied to any region and indicate the type of data needed to apply the methodology elsewhere.
Abstract: We present a methodology for conducting a site-specific probabilistic analysis of fault displacement hazard. Two approaches are outlined. The first relates the occurrence of fault displacement at or near the ground surface to the occurrence of earthquakes in the same manner as is done in a standard probabilistic seismic hazard analysis (PSHA) for ground shaking. The methodology for this approach is taken directly from PSHA methodology with the ground-motion attenuation function replaced by a fault displacement attenuation function. In the second approach, the rate of displacement events and the distribution for fault displacement are derived directly from the characteristics of the faults or geologic features at the site of interest. The methodology for probabilistic fault displacement hazard analysis (PFDHA) was developed for a normal faulting environment and the probability distributions we present may have general application in similar tectonic regions. In addition, the general methodology is applicable to any region and we indicate the type of data needed to apply the methodology elsewhere.

Proceedings ArticleDOI
01 Dec 2003
TL;DR: The truncated graph based scheduling algorithm (TGSA) is introduced that provides probabilistic guarantees for the throughput performance of the network; analytical and simulation results indicate that, counterintuitively, maximization of the cardinality of independent sets does not necessarily increase the throughput of a network.
Abstract: Many published algorithms used for scheduling transmissions in packet radio networks are based on finding maximal independent sets in an underlying graph. Such algorithms are developed under the assumptions of variations of the protocol interference model, which does not take the aggregated effect of interference into consideration. We provide a probabilistic analysis for the throughput performance of such graph based scheduling algorithms under the physical interference model. We show that in many scenarios a significant portion of transmissions scheduled based on the protocol interference model result in unacceptable signal-to-interference and noise ratio (SINR) at intended receivers. Our analytical as well as simulation results indicate that, counterintuitively, maximization of the cardinality of independent sets does not necessarily increase the throughput of a network. We introduce the truncated graph based scheduling algorithm (TGSA) that provides probabilistic guarantees for the throughput performance of the network.
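The gap between the two models is easy to state in code: a schedule that is conflict-free under the protocol (graph) model may still contain links whose signal-to-interference-and-noise ratio falls below the decoding threshold once all concurrent transmitters are summed. The sketch below performs that physical-model check for a set of links scheduled in the same slot; the gains, powers, noise level, and threshold beta are illustrative values.

```python
def sinr_feasible(links, gain, power, noise, beta):
    """Check each scheduled link under the physical interference model.

    links: list of (tx, rx) pairs scheduled in the same slot.
    gain[(tx, rx)]: channel gain between nodes; power[tx]: transmit power.
    Returns the subset of links whose SINR >= beta at the intended receiver.
    """
    ok = []
    for tx, rx in links:
        signal = power[tx] * gain[(tx, rx)]
        interference = sum(power[t] * gain[(t, rx)]
                           for t, _ in links if t != tx)
        if signal / (noise + interference) >= beta:
            ok.append((tx, rx))
    return ok

# Two links that a protocol-model schedule might allow concurrently:
# link (a, b) fails the SINR test while (c, d) passes.
gain = {("a", "b"): 1.0, ("c", "d"): 1.0, ("c", "b"): 0.2, ("a", "d"): 0.05}
power = {"a": 1.0, "c": 1.0}
print(sinr_feasible([("a", "b"), ("c", "d")], gain, power, noise=0.01, beta=8.0))
```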

01 Jan 2003
TL;DR: In this article, deterministic and probabilistic algorithms for testing whether Q(x, y) is identically equal to zero are presented; the deterministic algorithm expands Q(x, y) into individual terms, opening all parentheses, and then cancels out all equal terms with opposite signs.
Abstract: Let Q(x, y) be the two-variable polynomial Q(x, y) = (x + y)^7 - x^7 - y^7 - 7xy(x + y)(x^2 + xy + y^2)^2 and assume that we want to ascertain whether Q(x, y) is identically equal to zero. A deterministic algorithm for this task would be to expand Q(x, y) into individual terms, opening all parentheses, and then cancel out all equal terms with opposite signs. Q(x, y) is identically zero if and only if all the terms cancel. A probabilistic algorithm for the same problem can be described as follows:
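In its standard form, such a probabilistic test evaluates Q at randomly chosen points, relying on the fact that a nonzero polynomial of total degree d vanishes at a random point drawn from a size-S set with probability at most d/S (the Schwartz-Zippel bound). A minimal sketch of that textbook test, not necessarily the note's exact formulation:

```python
import random

def Q(x, y):
    return (x + y)**7 - x**7 - y**7 - 7*x*y*(x + y)*(x**2 + x*y + y**2)**2

def probably_identically_zero(poly, degree, trials=20, field_size=10**9 + 7):
    """Randomized identity test: evaluate at random points modulo a large prime.

    Assuming the polynomial does not vanish identically modulo the prime, each
    trial detects a nonzero polynomial with probability at least
    1 - degree/field_size (Schwartz-Zippel), so surviving all trials makes
    'identically zero' overwhelmingly likely."""
    p = field_size
    for _ in range(trials):
        x, y = random.randrange(p), random.randrange(p)
        if poly(x, y) % p != 0:
            return False          # definitely not identically zero
    return True                   # identically zero with high probability

print(probably_identically_zero(Q, degree=7))   # True: the identity holds
```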

Journal Article
TL;DR: This paper makes a deep study of the reasons for the algorithms' inefficiency, analyzes the properties of the indiscernibility relation, proposes and proves an equivalent and efficient method for computing the positive region, and designs a complete algorithm for the reduction of attributes.
Abstract: This paper makes a deep study of the reasons for the algorithms' inefficiency, focusing mainly on two important concepts: the indiscernibility relation and the positive region. It analyzes the properties of the indiscernibility relation and proposes and proves an equivalent and efficient method for computing the positive region. Some efficient basic algorithms for rough set methods are thus introduced, with a detailed analysis of their time complexity and a comparison with existing algorithms. Furthermore, this paper investigates the incremental computation of the positive region. Based on the above results, a complete algorithm for the reduction of attributes is designed and its completeness is proved. In addition, its time complexity and space complexity are analyzed in detail. In order to test the efficiency of the algorithm, experiments are conducted on data sets from the UCI machine learning repository. Theoretical analysis and experimental results show that the reduction algorithm is more efficient than the existing algorithms.
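The positive-region computation at the heart of the paper can be illustrated by hash-based grouping: partition the objects by their condition-attribute values and keep the blocks whose members all share the same decision value. The sketch below implements that definition directly; the paper's equivalent but more efficient formulation and its incremental version are not reproduced.

```python
from collections import defaultdict

def positive_region(objects, condition_attrs, decision_attr):
    """Return the C-positive region of the decision attribute.

    objects: list of dicts (attribute -> value).
    A block of the indiscernibility relation on condition_attrs belongs to the
    positive region iff all of its objects agree on decision_attr.
    """
    blocks = defaultdict(list)
    for i, obj in enumerate(objects):
        key = tuple(obj[a] for a in condition_attrs)   # hash-based partition
        blocks[key].append(i)
    pos = []
    for members in blocks.values():
        decisions = {objects[i][decision_attr] for i in members}
        if len(decisions) == 1:                        # block is consistent
            pos.extend(members)
    return sorted(pos)

# Toy decision table: objects 0 and 1 are indiscernible but disagree on d.
table = [
    {"a": 1, "b": 0, "d": "yes"},
    {"a": 1, "b": 0, "d": "no"},
    {"a": 0, "b": 1, "d": "yes"},
]
print(positive_region(table, ["a", "b"], "d"))   # -> [2]
```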

Journal ArticleDOI
TL;DR: A new inverse scattering algorithm for reconstructing the structure of highly reflecting fiber Bragg gratings based on solving the Gel'fand-Levitan-Marchenko integral equation in a layer-peeling procedure, which enables one to solve numerically difficult inverse scattering problems, where previous algorithms failed to give an accurate result.
Abstract: We demonstrate a new inverse scattering algorithm for reconstructing the structure of highly reflecting fiber Bragg gratings. The method, called integral layer-peeling (ILP), is based on solving the Gel'fand-Levitan-Marchenko (GLM) integral equation in a layer-peeling procedure. Unlike in previously published layer-peeling algorithms, the structure of each layer in the ILP algorithm can have a nonuniform profile. Moreover, errors due to the limited bandwidth used to sample the reflection coefficient do not rapidly accumulate along the grating. Therefore, the error in the new algorithm is smaller than in previous layer-peeling algorithms. The ILP algorithm is compared to two discrete layer-peeling algorithms and to an iterative solution to the GLM equation. The comparison shows that the ILP algorithm enables one to solve numerically difficult inverse scattering problems, where previous algorithms failed to give an accurate result. The complexity of the ILP algorithm is of the same order as in previous layer-peeling algorithms. When a small error is acceptable, the complexity of the ILP algorithm can be significantly reduced below the complexity of previously published layer-peeling algorithms.

Book ChapterDOI
TL;DR: Since Smart targets both the classroom and realistic industrial settings as a learning, research, and application tool, it is written in a modular way that allows for easy integration of new formalisms and solution algorithms.
Abstract: We describe the main features of Smart, a software package providing a seamless environment for the logic and probabilistic analysis of complex systems. Smart can combine different formalisms in the same modeling study. For the analysis of logical behavior, both explicit and symbolic state-space generation techniques, as well as symbolic CTL model-checking algorithms, are available. For the study of stochastic and timing behavior, both sparse-storage and Kronecker numerical solution approaches are available when the underlying process is a Markov chain. In addition, discrete-event simulation is always applicable regardless of the stochastic nature of the process, but certain classes of non-Markov models can still be solved numerically. Finally, since Smart targets both the classroom and realistic industrial settings as a learning, research, and application tool, it is written in a modular way that allows for easy integration of new formalisms and solution algorithms.

Journal ArticleDOI
TL;DR: Evidence theory is proposed to handle the epistemic uncertainty that stems from lack of knowledge about a structural system and an intermediate complexity wing example is used to evaluate the relevance of evidence theory to an uncertainty quantification problem for the preliminary design of airframe structures.
Abstract: Over the past decade, classical probabilistic analysis has been a popular approach among the uncertainty quantification methods. As the complexity and performance requirements of a structural system are increased, the quantification of uncertainty becomes more complicated, and various forms of uncertainties should be taken into consideration. Because of the need to characterize the distribution of probability, classical probability theory may not be suitable for a large complex system such as an aircraft, in that our information is never complete because of lack of knowledge and statistical data. Evidence theory, also known as Dempster-Shafer theory, is proposed to handle the epistemic uncertainty that stems from lack of knowledge about a structural system. Evidence theory provides us with a useful tool for aleatory (random) and epistemic (subjective) uncertainties. An intermediate complexity wing example is used to evaluate the relevance of evidence theory to an uncertainty quantification problem for the preliminary design of airframe structures. Also, methods for efficient calculations in large-scale problems are discussed.
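The basic quantities of evidence theory are straightforward to compute once a basic probability assignment (BPA) over subsets of the frame of discernment is given: belief sums the mass of the subsets contained in an event, plausibility sums the mass of the subsets intersecting it, and the gap between them expresses epistemic uncertainty. A minimal sketch with an illustrative BPA:

```python
def belief(event, bpa):
    """Bel(A) = sum of masses of focal elements contained in A."""
    return sum(m for focal, m in bpa.items() if set(focal) <= set(event))

def plausibility(event, bpa):
    """Pl(A) = sum of masses of focal elements intersecting A."""
    return sum(m for focal, m in bpa.items() if set(focal) & set(event))

# Illustrative BPA over the frame {"safe", "fail"}: 0.3 of the mass is
# uncommitted (assigned to the whole frame), reflecting lack of knowledge.
bpa = {
    frozenset({"safe"}): 0.6,
    frozenset({"fail"}): 0.1,
    frozenset({"safe", "fail"}): 0.3,
}
event = {"safe"}
print(belief(event, bpa), plausibility(event, bpa))   # 0.6 and 0.9
```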

Proceedings ArticleDOI
10 Apr 2003
TL;DR: An order of magnitude improvement in run times of likelihood computations is demonstrated using stochastic greedy algorithms for optimizing the order of conditioning and summation operations in genetic linkage analysis.
Abstract: Genetic linkage analysis is a challenging application which requires Bayesian networks consisting of thousands of vertices. Consequently, computing the likelihood of data, which is needed for learning linkage parameters, using exact inference procedures calls for an extremely efficient implementation that carefully optimizes the order of conditioning and summation operations. In this paper we present the use of stochastic greedy algorithms for optimizing this order. Our algorithm has been incorporated into the newest version of superlink, which is currently the fastest genetic linkage program for exact likelihood computations in general pedigrees. We demonstrate an order of magnitude improvement in run times of likelihood computations using our new optimization algorithm, and hence enlarge the class of problems that can be handled effectively by exact computations.
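The object being optimized is an elimination (conditioning and summation) order for the Bayesian network, and the general flavor of a stochastic greedy search for one is sketched below: repeat a greedy construction many times, at each step choosing at random among the few lowest-cost vertices, and keep the best order found. The min-fill-style cost and the randomization scheme are generic illustrations, not superlink's exact heuristics.

```python
import random

def fill_in_cost(graph, v):
    """Number of edges that eliminating v would add between its neighbors."""
    nbrs = list(graph[v])
    return sum(1 for i in range(len(nbrs)) for j in range(i + 1, len(nbrs))
               if nbrs[j] not in graph[nbrs[i]])

def eliminate(graph, v):
    """Connect v's neighbors pairwise, then remove v from the graph."""
    nbrs = graph.pop(v)
    for u in nbrs:
        graph[u].discard(v)
        graph[u] |= nbrs - {u}

def stochastic_greedy_order(graph, restarts=50, pool=3, rng=random.Random(0)):
    """Repeat a randomized greedy construction and keep the cheapest order found."""
    best_order, best_cost = None, float("inf")
    for _ in range(restarts):
        g = {v: set(nb) for v, nb in graph.items()}
        order, cost = [], 0
        while g:
            ranked = sorted(g, key=lambda v: fill_in_cost(g, v))
            v = rng.choice(ranked[:pool])        # pick among the `pool` best
            cost += fill_in_cost(g, v)
            eliminate(g, v)
            order.append(v)
        if cost < best_cost:
            best_order, best_cost = order, cost
    return best_order, best_cost

# Toy moral graph of a small network.
g = {"A": {"B", "C"}, "B": {"A", "C", "D"}, "C": {"A", "B", "E"}, "D": {"B"}, "E": {"C"}}
print(stochastic_greedy_order(g))
```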

01 Jan 2003
TL;DR: The purpose of the thesis is to develop new interior point algorithms for solving linear optimization problems, and it is proved that these algorithms are polynomial.
Abstract: In this paper the abstract of the thesis "New Interior Point Algorithms in Linear Programming" is presented. The purpose of the thesis is to develop new interior point algorithms for solving linear optimization problems. The theoretical complexity of the new algorithms is calculated, and we also prove that these algorithms are polynomial. The thesis is composed of seven chapters. In the first chapter a short history of interior point methods is discussed. In the following three chapters some variants of the affine scaling, the projective, and the path-following algorithms are presented. In the last three chapters new path-following interior point algorithms are defined. In the fifth chapter a new method for constructing search directions for interior point algorithms is introduced, and a new primal-dual path-following algorithm is defined. Polynomial complexity of this algorithm is proved; we note that this complexity matches the best complexity currently known. In the sixth chapter, using an approach similar to the one defined in the previous chapter, a new class of search directions for the self-dual problem is introduced. A new primal-dual algorithm is defined for solving the self-dual linear optimization problem, and polynomial complexity is proved. In the last chapter the method proposed in the fifth chapter is generalized to target-following methods. A conceptual target-following algorithm is defined, and this algorithm is particularized in order to obtain a new primal-dual weighted-path-following method. The complexity of this algorithm is computed.

Book ChapterDOI
01 Jan 2003
TL;DR: This work studies an embedded active learner that can limit its predictions to almost arbitrary computable aspects of spatio-temporal events; it constructs probabilistic algorithms that map event sequences to abstract internal representations (IRs) and predict IRs from IRs computed earlier.
Abstract: Details of complex event sequences are often not predictable, but their reduced abstract representations are. I study an embedded active learner that can limit its predictions to almost arbitrary computable aspects of spatio-temporal events. It constructs probabilistic algorithms that (1) control interaction with the world, (2) map event sequences to abstract internal representations (IRs), (3) predict IRs from IRs computed earlier. Its goal is to create novel algorithms generating IRs useful for correct IR predictions, without wasting time on those learned before. This requires an adaptive novelty measure which is implemented by a co-evolutionary scheme involving two competing modules collectively designing (initially random) algorithms representing experiments. Using special instructions, the modules can bet on the outcome of IR predictions computed by algorithms they have agreed upon. If their opinions differ then the system checks who's right, punishes the loser (the surprised one), and rewards the winner. An evolutionary or reinforcement learning algorithm forces each module to maximize reward. This motivates both modules to lure each other into agreeing upon experiments involving predictions that surprise it. Since each module essentially can veto experiments it does not consider profitable, the system is motivated to focus on those computable aspects of the environment where both modules still have confident but different opinions. Once both share the same opinion on a particular issue (via the loser's learning process, e.g., the winner is simply copied onto the loser), the winner loses a source of reward -- an incentive to shift the focus of interest onto novel experiments. My simulations include an example where surprise-generation of this kind helps to speed up external reward.

Proceedings ArticleDOI
15 Jul 2003
TL;DR: The paper applies weak probabilistic bisimulation to prove that the type system guarantees the probabilistic noninterference property, and shows that the language can safely be extended with a fork command that allows new threads to be spawned.
Abstract: To be practical, systems for ensuring secure information flow must be as permissive as possible. To this end, the author recently proposed a type system for multi-threaded programs running under a uniform probabilistic scheduler; it allows the running times of threads to depend on the values of H variables, provided that these timing variations cannot affect the values of L variables. But these timing variations preclude a proof of the soundness of the type system using the framework of probabilistic bisimulation, because probabilistic bisimulation is too strict regarding time. To address this difficulty, this paper proposes a notion of weak probabilistic bisimulation for Markov chains, allowing two Markov chains to be regarded as equivalent even when one "runs" more slowly than the other. The paper applies weak probabilistic bisimulation to prove that the type system guarantees the probabilistic noninterference property. Finally, the paper shows that the language can safely be extended with a fork command that allows new threads to be spawned.

Book ChapterDOI
13 Oct 2003
TL;DR: The limitations of the deterministic formulation of scheduling are outlined and a probabilistic approach is motivated, with one model being chosen as a basic framework.
Abstract: The limitations of the deterministic formulation of scheduling are outlined and a probabilistic approach is motivated. A number of models are reviewed with one being chosen as a basic framework. Response-time analysis is extended to incorporate a probabilistic characterisation of task arrivals and execution times. Copulas are used to represent dependencies.
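In the probabilistic setting, the scalar worst-case execution times of classical response-time analysis become distributions, and a basic building block is convolution: the distribution of the sum of two independent execution times. A minimal sketch of that step (independence is assumed here; the chapter's point is precisely that copulas are needed when it does not hold):

```python
from collections import defaultdict

def convolve(pmf_a, pmf_b):
    """Distribution of the sum of two independent discrete execution times.

    Each pmf maps an execution time to its probability."""
    out = defaultdict(float)
    for ta, pa in pmf_a.items():
        for tb, pb in pmf_b.items():
            out[ta + tb] += pa * pb
    return dict(out)

def exceedance(pmf, deadline):
    """Probability that the response time exceeds the deadline."""
    return sum(p for t, p in pmf.items() if t > deadline)

# Illustrative execution-time distributions for a job and one interfering job.
c1 = {2: 0.7, 3: 0.3}
c2 = {1: 0.5, 4: 0.5}
response = convolve(c1, c2)
print(response, exceedance(response, deadline=5))
```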

Book
01 Jan 2003
TL;DR: Mesh algorithms for computational geometry: preliminaries, the convex hull, smallest enclosing figures, nearest point problem, line segments and simple polygons, intersection of convex sets, diameter, iso-oriented rectangles and polygons, Voronoi diagram.
Abstract: Part 1 Overview: models of computation, forms of input, problems, data movement operations, sample algorithms, further remarks. Part 2 Fundamental mesh algorithms: definitions, lower bounds, primitive mesh algorithms, matrix algorithms, algorithms involving ordered data, further remarks. Part 3 Mesh algorithms for images and graphs: fundamental graph algorithms, connected components, internal distances, convexity, external distances, further remarks. Part 4 Mesh algorithms for computational geometry: preliminaries, the convex hull, smallest enclosing figures, nearest point problem, line segments and simple polygons, intersection of convex sets, diameter, iso-oriented rectangles and polygons, Voronoi diagram, further remarks. Part 5 Tree-like pyramid algorithms: definitions, lower bounds, fundamental algorithms, image algorithms, further remarks. Part 6 Hybrid pyramid algorithms: graphs as unordered edges, graphs as adjacency matrices, digitized pictures, convexity, data movement operations, optimality, further remarks.

Proceedings Article
26 Oct 2003
TL;DR: A technique for harmonic analysis is presented that partitions a piece of music into contiguous regions and labels each with the key, mode, and functional chord, e.g. tonic, dominant, etc.
Abstract: A technique for harmonic analysis is presented that partitions a piece of music into contiguous regions and labels each with the key, mode, and functional chord, e.g. tonic, dominant, etc. The analysis is performed with a hidden Markov model and, as such, is automatically trainable from generic MIDI files and capable of finding the globally optimal harmonic labeling. Experiments are presented highlighting our current state of the art. An extension to a more complex probabilistic graphical model is outlined in which music is modeled as a collection of voices that evolve independently given the harmonic progression.
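The globally optimal labeling referred to above comes from Viterbi decoding of the hidden Markov model. The sketch below is a generic log-space Viterbi with two toy states standing in for the (key, mode, chord-function) labels; the probabilities are illustrative, not values trained from MIDI data.

```python
import math

def viterbi(observations, states, log_start, log_trans, log_emit):
    """Most probable state sequence for a hidden Markov model (log-space Viterbi)."""
    best = {s: log_start[s] + log_emit[s](observations[0]) for s in states}
    back = []
    for obs in observations[1:]:
        new, ptr = {}, {}
        for t in states:
            prev = max(states, key=lambda s: best[s] + log_trans[s][t])
            new[t] = best[prev] + log_trans[prev][t] + log_emit[t](obs)
            ptr[t] = prev
        back.append(ptr)
        best = new
    # Trace back the optimal labeling.
    last = max(states, key=lambda s: best[s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Toy two-label example standing in for harmonic (key, mode, function) states.
states = ["tonic", "dominant"]
log_start = {"tonic": math.log(0.8), "dominant": math.log(0.2)}
log_trans = {"tonic": {"tonic": math.log(0.7), "dominant": math.log(0.3)},
             "dominant": {"tonic": math.log(0.4), "dominant": math.log(0.6)}}
emit_p = {"tonic": {"do": 0.6, "sol": 0.4}, "dominant": {"do": 0.2, "sol": 0.8}}
log_emit = {s: (lambda obs, s=s: math.log(emit_p[s][obs])) for s in states}
print(viterbi(["do", "sol", "sol", "do"], states, log_start, log_trans, log_emit))
```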

Journal ArticleDOI
TL;DR: A general probabilistic framework applicable to any search algorithm, whose net effect is to reduce the search radius, is presented; its practical performance is illustrated empirically on a particular class of algorithms, where large improvements in search time are obtained at the cost of a very small error probability.

Journal ArticleDOI
01 Jan 2003 - Optik
TL;DR: Several phase-shifting algorithms with an arbitrary but constant phase shift between captured intensity frames are proposed, so that the phase evaluation process does not depend on linear phase-shift errors.