
Showing papers on "Markov chain" published in 2005


Journal ArticleDOI
TL;DR: HHpred is a fast server for remote protein homology detection and structure prediction; it is the first to implement pairwise comparison of profile hidden Markov models (HMMs) and allows searching a wide choice of databases.
Abstract: HHpred is a fast server for remote protein homology detection and structure prediction and is the first to implement pairwise comparison of profile hidden Markov models (HMMs). It allows searching a wide choice of databases, such as the PDB, SCOP, Pfam, SMART, COGs and CDD. It accepts a single query sequence or a multiple alignment as input. Within only a few minutes it returns the search results in a user-friendly format similar to that of PSI-BLAST. Search options include local or global alignment and scoring secondary structure similarity. HHpred can produce pairwise query-template alignments, multiple alignments of the query with a set of templates selected from the search results, as well as 3D structural models that are calculated by the MODELLER software from these alignments. A detailed help facility is available. As a demonstration, we analyze the sequence of SpoVT, a transcriptional regulator from Bacillus subtilis. HHpred can be accessed at http://protevo.eb.tuebingen.mpg.de/hhpred.

3,347 citations


Book
01 Jan 2005
TL;DR: A textbook on probability and computing that develops randomized algorithms and probabilistic analysis, progressing from basic probability, Chernoff bounds, and the probabilistic method through Markov chains, the Monte Carlo method, coupling, and martingales to balanced allocations.
Abstract: Preface 1. Events and probability 2. Discrete random variables and expectation 3. Moments and deviations 4. Chernoff bounds 5. Balls, bins and random graphs 6. The probabilistic method 7. Markov chains and random walks 8. Continuous distributions and the Poisson process 9. Entropy, randomness and information 10. The Monte Carlo method 11. Coupling of Markov chains 12. Martingales 13. Pairwise independence and universal hash functions 14. Balanced allocations References.

2,543 citations


Book
18 Feb 2005
TL;DR: This volume is essential reading for statisticians working in spatial theory and its applications, as well as quantitative researchers in a wide range of science fields where spatial data analysis is important.
Abstract: Gaussian Markov Random Field (GMRF) models are most widely used in spatial statistics, a very active area of research in which few up-to-date reference works are available. Gaussian Markov Random Field: Theory and Applications is the first book on the subject that provides a unified framework of GMRFs with particular emphasis on the computational aspects. The book includes extensive case studies and online a c-library for fast and exact simulation. With chapters contributed by leading researchers in the field, this volume is essential reading for statisticians working in spatial theory and its applications, as well as quantitative researchers in a wide range of science fields where spatial data analysis is important.

2,106 citations


Journal ArticleDOI
TL;DR: The process of iterating or diffusing the Markov matrix is seen as a generalization of some aspects of the Newtonian paradigm, in which local infinitesimal transitions of a system lead to global macroscopic descriptions by integration.
Abstract: We provide a framework for structural multiscale geometric organization of graphs and subsets of R(n). We use diffusion semigroups to generate multiscale geometries in order to organize and represent complex structures. We show that appropriately selected eigenfunctions or scaling functions of Markov matrices, which describe local transitions, lead to macroscopic descriptions at different scales. The process of iterating or diffusing the Markov matrix is seen as a generalization of some aspects of the Newtonian paradigm, in which local infinitesimal transitions of a system lead to global macroscopic descriptions by integration. We provide a unified view of ideas from data analysis, machine learning, and numerical analysis.
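
As an illustration of the construction described in this abstract (not code from the paper), the following minimal Python sketch builds a Markov (diffusion) matrix from Gaussian affinities and uses its leading right eigenvectors as multiscale coordinates; the kernel choice, the bandwidth eps, and all names are assumptions made for the example.

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_coords=2, t=1):
    """Toy diffusion-map embedding: eigenvectors of a Markov matrix
    built from Gaussian affinities (bandwidth eps is an assumption)."""
    # Pairwise squared distances and Gaussian kernel
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    # Row-normalize to obtain a Markov (diffusion) matrix
    P = K / K.sum(axis=1, keepdims=True)
    # Eigendecompose; right eigenvectors give diffusion coordinates
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Skip the trivial eigenvalue 1; scale by eigenvalue^t for diffusion time t
    lam = vals.real[order][1:n_coords + 1]
    psi = vecs.real[:, order][:, 1:n_coords + 1]
    return psi * lam ** t

X = np.random.rand(100, 3)          # illustrative data
coords = diffusion_map(X, eps=0.5)  # multiscale coordinates of the points
```

Iterating P (larger t) damps the smaller eigenvalues, which is the multiscale diffusion process the abstract refers to.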

1,654 citations


Proceedings ArticleDOI
15 Aug 2005
TL;DR: A novel training approach directly maximizes mean average precision rather than the likelihood of the training data; experiments show that modeling term dependencies yields significant improvements, especially on the larger web collections.
Abstract: This paper develops a general, formal framework for modeling term dependencies via Markov random fields. The model allows for arbitrary text features to be incorporated as evidence. In particular, we make use of features based on occurrences of single terms, ordered phrases, and unordered phrases. We explore full independence, sequential dependence, and full dependence variants of the model. A novel approach is developed to train the model that directly maximizes the mean average precision rather than maximizing the likelihood of the training data. Ad hoc retrieval experiments are presented on several newswire and web collections, including the GOV2 collection used at the TREC 2004 Terabyte Track. The results show significant improvements are possible by modeling dependencies, especially on the larger web collections.

996 citations


Journal Article
TL;DR: While estimating the gradient of a log-density function is, in principle, a very difficult non-parametric problem, a surprising result is proved that gives a simple formula for the objective function, which reduces to a sample average of a sum of derivatives of the log-density given by the model.
Abstract: One often wants to estimate statistical models where the probability density function is known only up to a multiplicative normalization constant. Typically, one then has to resort to Markov Chain Monte Carlo methods, or approximations of the normalization constant. Here, we propose that such models can be estimated by minimizing the expected squared distance between the gradient of the log-density given by the model and the gradient of the log-density of the observed data. While the estimation of the gradient of log-density function is, in principle, a very difficult non-parametric problem, we prove a surprising result that gives a simple formula for this objective function. The density function of the observed data does not appear in this formula, which simplifies to a sample average of a sum of some derivatives of the log-density given by the model. The validity of the method is demonstrated on multivariate Gaussian and independent component analysis models, and by estimating an overcomplete filter set for natural image data.
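
To make the sample-average formula concrete, here is a hedged Python sketch of this estimator (score matching) specialized to a zero-mean Gaussian with precision matrix Lam; the model family, step size, and synthetic data are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

def score_matching_loss(Lam, X):
    """Sample-average objective: 1/2 * ||grad_x log p||^2 plus the sum of
    second derivatives, for the model log p(x) = -x' Lam x / 2 + const.
    The intractable normalization constant never appears."""
    scores = -X @ Lam                 # model score (gradient of log-density)
    return (0.5 * (scores ** 2).sum(axis=1) - np.trace(Lam)).mean()

# Illustrative synthetic data; not the paper's experiments
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0.0, 0.0], [[2.0, 0.6], [0.6, 1.0]], size=5000)
S = X.T @ X / len(X)                  # sample second-moment matrix

# Gradient descent on the precision matrix; the analytic gradient of the
# loss above is 0.5 * (S @ Lam + Lam @ S) - I
Lam = np.eye(2)
for _ in range(300):
    Lam -= 0.1 * (0.5 * (S @ Lam + Lam @ S) - np.eye(2))
# Lam now approximates the inverse covariance, estimated without MCMC
# or any normalization constant.
```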

992 citations



Journal ArticleDOI
TL;DR: A particle filter that effectively deals with interacting targets, targets that are influenced by the proximity and/or behavior of other targets, is described; the traditional importance sampling step is replaced with a novel Markov chain Monte Carlo (MCMC) sampling step to obtain a more efficient MCMC-based multitarget filter.
Abstract: We describe a particle filter that effectively deals with interacting targets, targets that are influenced by the proximity and/or behavior of other targets. The particle filter includes a Markov random field (MRF) motion prior that helps maintain the identity of targets throughout an interaction, significantly reducing tracker failures. We show that this MRF prior can be easily implemented by including an additional interaction factor in the importance weights of the particle filter. However, the computational requirements of the resulting multitarget filter render it unusable for large numbers of targets. Consequently, we replace the traditional importance sampling step in the particle filter with a novel Markov chain Monte Carlo (MCMC) sampling step to obtain a more efficient MCMC-based multitarget filter. We also show how to extend this MCMC-based filter to address a variable number of interacting targets. Finally, we present both qualitative and quantitative experimental results, demonstrating that the resulting particle filters deal efficiently and effectively with complicated target interactions.

900 citations


Reference EntryDOI
15 Jul 2005
TL;DR: The Gibbs sampler is a special case in which the proposal distributions are conditional distributions of single components of a vector parameter, and Markov chain Monte Carlo is particularly useful for the evaluation of posterior distributions in complex Bayesian models.
Abstract: Markov chain Monte Carlo (MCMC) is a technique for estimating by simulation the expectation of a statistic in a complex model. Successive random selections form a Markov chain, the stationary distribution of which is the target distribution. It is particularly useful for the evaluation of posterior distributions in complex Bayesian models. In the Metropolis–Hastings algorithm, items are selected from an arbitrary “proposal” distribution and are retained or not according to an acceptance rule. The Gibbs sampler is a special case in which the proposal distributions are conditional distributions of single components of a vector parameter. Various special cases and applications are considered. Keywords: Metropolis–Hastings; Gibbs sampler; stationary distribution; proposal distribution; hybrid chain; reversible jump; hierarchical; missing data
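
A minimal random-walk Metropolis-Hastings sketch in Python, illustrating the acceptance rule described above; the symmetric Gaussian proposal, its scale, and the target are illustrative assumptions for the example.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps=10000, scale=1.0, seed=0):
    """Random-walk Metropolis: propose from a symmetric Gaussian and
    accept with probability min(1, pi(x') / pi(x))."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    logp = log_target(x)
    chain = []
    for _ in range(n_steps):
        prop = x + scale * rng.normal(size=x.shape)   # proposal draw
        logp_prop = log_target(prop)
        if np.log(rng.uniform()) < logp_prop - logp:  # acceptance rule
            x, logp = prop, logp_prop
        chain.append(x.copy())
    return np.array(chain)

# Target: unnormalized standard normal (illustrative); the normalizing
# constant is never needed, which is the point of the method
samples = metropolis_hastings(lambda x: -0.5 * (x ** 2).sum(), np.zeros(2))
```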

843 citations


Journal ArticleDOI
TL;DR: This work considers the stabilization problem for a class of networked control systems in the discrete-time domain with random delays, and it is shown that the state-feedback gains are mode-dependent.
Abstract: We consider the stabilization problem for a class of networked control systems in the discrete-time domain with random delays. The sensor-to-controller and controller-to-actuator delays are modeled as two Markov chains, and the resulting closed-loop systems are jump linear systems with two modes. The necessary and sufficient conditions on the existence of stabilizing controllers are established. It is shown that state-feedback gains are mode-dependent. An iterative linear matrix inequality (LMI) approach is employed to calculate the state-feedback gains.
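
For intuition only, here is a hedged Python sketch simulating a Markov jump linear closed loop with mode-dependent feedback gains; the plant matrices, gains, and delay-mode chain are made-up assumptions, and the LMI synthesis itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed two-mode example (not the paper's plant): dynamics A, input
# matrices B, and mode-dependent gains K found offline (e.g. via LMIs)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = [np.array([[0.0], [0.1]]), np.array([[0.0], [0.2]])]
K = [np.array([[-10.0, -5.0]]), np.array([[-5.0, -2.5]])]
T = np.array([[0.8, 0.2], [0.3, 0.7]])   # Markov chain over delay modes

x, mode, norms = np.array([1.0, 0.0]), 0, []
for k in range(200):
    x = (A + B[mode] @ K[mode]) @ x      # jump linear closed-loop step
    mode = rng.choice(2, p=T[mode])      # random delay-mode transition
    norms.append(np.linalg.norm(x))
# norms decay toward zero, i.e. this switched closed loop is stable
```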

838 citations


Journal ArticleDOI
TL;DR: This work considers a robust control problem for a finite-state, finite-action Markov decision process, where uncertainty on the transition matrices is described in terms of possibly nonconvex sets, and shows that perfect duality holds for this problem, and that it can be solved with a variant of the classical dynamic programming algorithm, the "robust dynamic programming" algorithm.
Abstract: Optimal solutions to Markov decision problems may be very sensitive with respect to the state transition probabilities. In many practical problems, the estimation of these probabilities is far from accurate. Hence, estimation errors are limiting factors in applying Markov decision processes to real-world problems. We consider a robust control problem for a finite-state, finite-action Markov decision process, where uncertainty on the transition matrices is described in terms of possibly nonconvex sets. We show that perfect duality holds for this problem, and that as a consequence, it can be solved with a variant of the classical dynamic programming algorithm, the "robust dynamic programming" algorithm. We show that a particular choice of the uncertainty sets, involving likelihood regions or entropy bounds, leads to both a statistically accurate representation of uncertainty, and a complexity of the robust recursion that is almost the same as that of the classical recursion. Hence, robustness can be added at practically no extra computing cost. We derive similar results for other uncertainty sets, including one with a finite number of possible values for the transition matrices. We describe in a practical path planning example the benefits of using a robust strategy instead of the classical optimal strategy; even if the uncertainty level is only crudely guessed, the robust strategy yields a much better worst-case expected travel time.
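
The robust recursion is easy to sketch for the finite-uncertainty-set case mentioned in the abstract: at each state-action pair, nature picks the worst candidate transition row. The Python sketch below is an illustration under assumed array shapes, not the authors' implementation.

```python
import numpy as np

def robust_value_iteration(R, P_candidates, gamma=0.9, tol=1e-8):
    """Robust dynamic programming with a finite uncertainty set.
    R: (S, A) rewards; P_candidates: list of (S, A, S) transition arrays
    (shapes are assumptions made for this sketch)."""
    V = np.zeros(R.shape[0])
    while True:
        # Worst-case expected next-state value over the uncertainty set
        worst = np.min([P @ V for P in P_candidates], axis=0)  # (S, A)
        Q = R + gamma * worst
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Example with two candidate kernels for a 2-state, 2-action problem
R = np.array([[1.0, 0.0], [0.0, 1.0]])
P1 = np.ones((2, 2, 2)) * 0.5
P2 = np.zeros((2, 2, 2)); P2[:, :, 0] = 0.9; P2[:, :, 1] = 0.1
V, policy = robust_value_iteration(R, [P1, P2])
```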

BookDOI
01 Aug 2005
TL;DR: The book is organized around four themes (statistics, computation, algebra, and biology) and presents studies ranging from parametric inference, tree construction using singular value decomposition, and analysis of point mutations in vertebrate genomes to extending statistical models from trees to splits graphs and applications of interval methods to phylogenetics.
Abstract: Preface Part I. Introduction to the Four Themes: 1. Statistics L. Pachter and B. Sturmfels 2. Computation L. Pachter and B. Sturmfels 3. Algebra L. Pachter and B. Sturmfels 4. Biology L. Pachter and B. Sturmfels Part II. Studies on the Four Themes: 5. Parametric inference R. Mihaescu 6. Polytope propagation on graphs M. Joswig 7. Parametric sequence alignment C. Dewey and K. Woods 8. Bounds for optimal sequence alignment S. Elizalde 9. Inference functions S. Elizalde 10. Geometry of Markov chains E. Kuo 11. Equations defining hidden Markov models N. Bray and J. Morton 12. The EM algorithm for hidden Markov models I. B. Hallgrimsdottir, A. Milowski and J. Yu 13. Homology mapping with Markov random fields A. Caspi 14. Mutagenetic tree models N. Beerenwinkel and M. Drton 15. Catalog of small trees M. Casanellas, L. Garcia and S. Sullivant 16. The strand symmetric model M. Casanellas and S. Sullivant 17. Extending statistical models from trees to splits graphs D. Bryant 18. Small trees and generalized neighbor-joining M. Contois and D. Levy 19. Tree construction using Singular Value Decomposition N. Eriksson 20. Applications of interval methods to phylogenetics R. Sainudiin and R. Yoshida 21. Analysis of point mutations in vertebrate genomes J. Al-Aidroos and S. Snir 22. Ultra-conserved elements in vertebrate genomes M. Drton, N. Eriksson and G. Leung Index.

Book ChapterDOI
01 Feb 2005
TL;DR: This chapter treats statistical methods for network evolution with a focus on models where a continuous-time network evolution is assumed although the observations are made at discrete time points.
Abstract: This chapter treats statistical methods for network evolution. It is argued that it is most fruitful to consider models where network evolution is represented as the result of many (usually non-observed) small changes occurring between the consecutively observed networks. Accordingly, the focus is on models where a continuous-time network evolution is assumed although the observations are made at discrete time points (two or more). Three models are considered in detail, all based on the assumption that the observed networks are outcomes of a Markov process evolving in continuous time. The independent arcs model is a trivial baseline model. The reciprocity model expresses effects of reciprocity, but lacks other structural effects. The actor-oriented model is based on a model of actors changing their outgoing ties as a consequence of myopic stochastic optimization of an objective function. This framework offers the flexibility to represent a variety of network effects. An estimation algorithm is treated, based on a Markov chain Monte Carlo implementation of the method of moments. Preprint of: Snijders, Tom A.B. (2005). Models for Longitudinal Network Data. Chapter 11 (pp. 215–247) in P. Carrington, J. Scott, and S. Wasserman (Eds.), Models and methods in social network analysis. New York: Cambridge University Press.

Journal ArticleDOI
TL;DR: In this article, an improved Perron cluster analysis algorithm for the detection of metastable conformations in molecular conformation dynamics is presented, which is more robust than earlier suggestions.

Journal ArticleDOI
TL;DR: In this paper, a Bayesian framework for parsing images into their constituent visual patterns is presented, which optimizes the posterior probability and outputs a scene representation as a "parsing graph", in a spirit similar to parsing sentences in speech and natural language.
Abstract: In this paper we present a Bayesian framework for parsing images into their constituent visual patterns. The parsing algorithm optimizes the posterior probability and outputs a scene representation as a "parsing graph", in a spirit similar to parsing sentences in speech and natural language. The algorithm constructs the parsing graph and re-configures it dynamically using a set of moves, which are mostly reversible Markov chain jumps. This computational framework integrates two popular inference approaches--generative (top-down) methods and discriminative (bottom-up) methods. The former formulates the posterior probability in terms of generative models for images defined by likelihood functions and priors. The latter computes discriminative probabilities based on a sequence (cascade) of bottom-up tests/filters. In our Markov chain algorithm design, the posterior probability, defined by the generative models, is the invariant (target) probability for the Markov chain, and the discriminative probabilities are used to construct proposal probabilities to drive the Markov chain. Intuitively, the bottom-up discriminative probabilities activate top-down generative models. In this paper, we focus on two types of visual patterns--generic visual patterns, such as texture and shading, and object patterns including human faces and text. These types of patterns compete and cooperate to explain the image, and so image parsing unifies image segmentation, object detection, and recognition. (If we use generic visual patterns only, then image parsing will correspond to image segmentation; Tu and Zhu, 2002. IEEE Trans. PAMI, 24(5):657--673.) We illustrate our algorithm on natural images of complex city scenes and show examples where image segmentation can be improved by allowing object specific knowledge to disambiguate low-level segmentation cues, and conversely where object detection can be improved by using generic visual patterns to explain away shadows and occlusions.

Proceedings ArticleDOI
20 Jun 2005
TL;DR: This work addresses the problem of segmenting 3D scan data into objects or object classes by using a recently proposed maximum-margin framework to discriminatively train the model from a set of labeled scans and automatically learn the relative importance of the features for the segmentation task.
Abstract: We address the problem of segmenting 3D scan data into objects or object classes. Our segmentation framework is based on a subclass of Markov random fields (MRFs) which support efficient graph-cut inference. The MRF models incorporate a large set of diverse features and enforce the preference that adjacent scan points have the same classification label. We use a recently proposed maximum-margin framework to discriminatively train the model from a set of labeled scans; as a result we automatically learn the relative importance of the features for the segmentation task. Performing graph-cut inference in the trained MRF can then be used to segment new scenes very efficiently. We test our approach on three large-scale datasets produced by different kinds of 3D sensors, showing its applicability to both outdoor and indoor environments containing diverse objects.

Journal ArticleDOI
TL;DR: It is shown under certain conditions that the stochastic process generated is ergodic, with the appropriate stationary distribution; this result is used to analyse an adaptive version of the random walk Metropolis algorithm where the scale parameter σ is sequentially adapted using a Robbins-Monro type algorithm in order to find the optimal scale parameter σopt.
Abstract: We look at adaptive Markov chain Monte Carlo algorithms that generate stochastic processes based on sequences of transition kernels, where each transition kernel is allowed to depend on the history of the process. We show under certain conditions that the stochastic process generated is ergodic, with appropriate stationary distribution. We use this result to analyse an adaptive version of the random walk Metropolis algorithm where the scale parameter σ is sequentially adapted using a Robbins-Monro type algorithm in order to find the optimal scale parameter σopt. We close with a simulation example.
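
To make the adaptation concrete, here is a hedged Python sketch of a random walk Metropolis sampler whose log-scale is adjusted by a Robbins-Monro recursion; the 0.234 target acceptance rate and the diminishing 1/n gains are standard choices assumed for the example, not taken from the paper.

```python
import numpy as np

def adaptive_rw_metropolis(log_target, x0, n_steps=20000,
                           target_accept=0.234, seed=0):
    """Random-walk Metropolis whose scale sigma is adapted on the fly
    with a Robbins-Monro recursion toward a target acceptance rate
    (0.234 is a common rule of thumb; an assumption, not from the paper)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    logp = log_target(x)
    log_sigma, chain = 0.0, []
    for n in range(1, n_steps + 1):
        prop = x + np.exp(log_sigma) * rng.normal(size=x.shape)
        logp_prop = log_target(prop)
        accept = np.log(rng.uniform()) < logp_prop - logp
        if accept:
            x, logp = prop, logp_prop
        # Robbins-Monro step: diminishing gains make the adaptation die out,
        # which is the kind of condition needed for ergodicity
        log_sigma += (accept - target_accept) / n
        chain.append(x.copy())
    return np.array(chain), np.exp(log_sigma)

samples, sigma = adaptive_rw_metropolis(lambda x: -0.5 * (x ** 2).sum(),
                                        np.zeros(2))
```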

Proceedings ArticleDOI

20 Jun 2005
TL;DR: A principled Bayesian method for detecting and segmenting instances of a particular object category within an image, providing a coherent methodology for combining top-down and bottom-up cues and developing an efficient method, OBJ CUT, to obtain segmentations using this model.
Abstract: In this paper, we present a principled Bayesian method for detecting and segmenting instances of a particular object category within an image, providing a coherent methodology for combining top-down and bottom-up cues. The work draws together two powerful formulations: pictorial structures (PS) and Markov random fields (MRFs), both of which have efficient algorithms for their solution. The resulting combination, which we call the object category specific MRF, suggests a solution to the problem that has long dogged MRFs, namely that they provide a poor prior for specific shapes. In contrast, our model provides a prior that is global across the image plane using the PS. We develop an efficient method, OBJ CUT, to obtain segmentations using this model. Novel aspects of this method include an efficient algorithm for sampling the PS model, and the observation that the expected log likelihood of the model can be increased by a single graph cut. Results are presented on two object categories, cows and horses. We compare our methods to the state of the art in object category specific image segmentation and demonstrate significant improvements.

Journal ArticleDOI
01 Apr 2005-Energy
TL;DR: In this paper, the authors used the transition matrix approach of the Markov chain process to generate wind speed time series from two meteorological stations in Malaysia, and compared the observed wind speeds with the synthetically generated ones.
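
A minimal Python sketch of the transition-matrix approach, with synthetic stand-in data rather than the Malaysian station records; the discretization into 1 m/s classes and all names are assumptions made for the example.

```python
import numpy as np

def fit_transition_matrix(states, n_states):
    """Count observed transitions and row-normalize to get a first-order
    Markov transition matrix over discretized wind-speed classes."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # Unvisited states get a uniform row so the chain is always defined
    return np.where(rows > 0, counts / np.maximum(rows, 1), 1.0 / n_states)

def synthesize(P, n_steps, state0=0, seed=0):
    """Sample a synthetic state sequence from the fitted chain."""
    rng = np.random.default_rng(seed)
    seq = [state0]
    for _ in range(n_steps - 1):
        seq.append(rng.choice(len(P), p=P[seq[-1]]))
    return np.array(seq)

# Illustrative use with fake data: discretize speeds into 1 m/s classes
wind = np.abs(np.random.default_rng(1).normal(6.0, 2.0, size=5000))
classes = np.clip(wind.astype(int), 0, 14)
P = fit_transition_matrix(classes, n_states=15)
synthetic = synthesize(P, n_steps=5000)  # compare statistics to `classes`
```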

Proceedings Article
30 Jul 2005
TL;DR: This paper defines a general framework for activity recognition by building upon and extending Relational Markov Networks and develops an efficient inference and learning technique based on MCMC that can accurately label a person's activity locations.
Abstract: In this paper we define a general framework for activity recognition by building upon and extending Relational Markov Networks. Using the example of activity recognition from location data, we show that our model can represent a variety of features including temporal information such as time of day, spatial information extracted from geographic databases, and global constraints such as the number of homes or workplaces of a person. We develop an efficient inference and learning technique based on MCMC. Using GPS location data collected by multiple people we show that the technique can accurately label a person's activity locations. Furthermore, we show that it is possible to learn good models from less data by using priors extracted from other people's data.

Proceedings ArticleDOI
13 Mar 2005
TL;DR: It is established that any piecewise linear movement applied to a user preserves the uniform distribution of position and direction, provided that users were initially distributed uniformly throughout the space with equal likelihood of being pointed in any direction.
Abstract: A number of mobility models have been proposed for the purpose of either analyzing or simulating the movement of users in a mobile wireless network. Two of the more popular are the random waypoint and the random direction models. The random waypoint model is physically appealing but difficult to understand. Although the random direction model is less appealing physically, it is much easier to understand. User speeds are easily calculated, unlike for the waypoint model, and, as we observe, user positions and directions are uniformly distributed. The contribution of this paper is to establish this last property for a rich class of random direction models that allow future movements to depend on past movements. To this end, we consider finite one- and two-dimensional spaces. We consider two variations, the random direction model with wrap around and with reflection. We establish a simple relationship between these two models and, for both, show that positions and directions are uniformly distributed for a class of Markov movement models regardless of initial position. In addition, we establish a sample path property for both models, namely that any piecewise linear movement applied to a user preserves the uniform distribution of position and direction provided that users were initially distributed uniformly throughout the space with equal likelihood of being pointed in any direction.
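
A hedged simulation sketch of the random direction model with wrap-around on a torus, in Python; the exponential leg lengths (one Markov movement model in the class considered) and all parameters are illustrative assumptions. Position and direction histograms of the output should be approximately uniform, consistent with the result above.

```python
import numpy as np

def random_direction_wrap(n_users=100, n_steps=1000, speed=1.0,
                          size=100.0, mean_leg=20.0, seed=0):
    """Random direction model with wrap-around on a [0, size)^2 torus:
    each user moves in a straight line for an exponential leg length,
    then picks a fresh uniform direction. Parameter names are illustrative."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, size, size=(n_users, 2))
    theta = rng.uniform(0, 2 * np.pi, size=n_users)
    remaining = rng.exponential(mean_leg, size=n_users)
    for _ in range(n_steps):
        step = np.stack([np.cos(theta), np.sin(theta)], axis=1) * speed
        pos = (pos + step) % size                 # wrap around the edges
        remaining -= speed
        done = remaining <= 0                     # legs that have ended
        theta[done] = rng.uniform(0, 2 * np.pi, size=done.sum())
        remaining[done] = rng.exponential(mean_leg, size=done.sum())
    return pos, theta  # empirically ~uniform in position and direction
```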

Proceedings ArticleDOI
07 Aug 2005
TL;DR: An algorithm for learning the structure of MLNs from relational databases is developed, combining ideas from inductive logic programming (ILP) and feature induction in Markov networks.
Abstract: Markov logic networks (MLNs) combine logic and probability by attaching weights to first-order clauses, and viewing these as templates for features of Markov networks. In this paper we develop an algorithm for learning the structure of MLNs from relational databases, combining ideas from inductive logic programming (ILP) and feature induction in Markov networks. The algorithm performs a beam or shortest-first search of the space of clauses, guided by a weighted pseudo-likelihood measure. This requires computing the optimal weights for each candidate structure, but we show how this can be done efficiently. The algorithm can be used to learn an MLN from scratch, or to refine an existing knowledge base. We have applied it in two real-world domains, and found that it outperforms using off-the-shelf ILP systems to learn the MLN structure, as well as pure ILP, purely probabilistic and purely knowledge-based approaches.

Proceedings ArticleDOI
08 Jun 2005
TL;DR: In this paper, the stabilization problem for a class of networked control systems in the discrete-time domain with random delays is considered, and the necessary and sufficient conditions on the existence of the stabilizing controllers are established.
Abstract: We consider the stabilization problem for a class of networked control systems in the discrete-time domain with random delays. The sensor-to-controller and controller-to-actuator delays are modeled as two Markov chains, and the resulting closed-loop systems are jump linear systems with two modes. The necessary and sufficient conditions on the existence of the stabilizing controllers are established. It is shown that the state-feedback gains are mode-dependent. An iterative linear matrix inequality (LMI) approach is employed to calculate the state-feedback gains.

Journal ArticleDOI
08 Jul 2005-Science
TL;DR: This work used Bayesian inference to derive a probability distribution that represents the unknown structure and its precision, and implemented the approach using Markov chain Monte Carlo techniques; the method provides an objective figure of merit and improves structural quality.
Abstract: Macromolecular structures calculated from nuclear magnetic resonance data are not fully determined by experimental data but depend on subjective choices in data treatment and parameter settings. This makes it difficult to objectively judge the precision of the structures. We used Bayesian inference to derive a probability distribution that represents the unknown structure and its precision. This probability distribution also determines additional unknowns, such as theory parameters, that previously had to be chosen empirically. We implemented this approach by using Markov chain Monte Carlo techniques. Our method provides an objective figure of merit and improves structural quality.

Journal ArticleDOI
TL;DR: The authors discuss the applications of the Perron-Frobenius theorem in diverse areas such as steady state behavior of Markov chains, power control in wireless networks, commodity pricing models in economics, population growth models, and Web search engines.
Abstract: The Perron-Frobenius theorem provides a simple characterization of the eigenvectors and eigenvalues of certain types of matrices with nonnegative entries. The importance of the Perron-Frobenius theorem stems from the fact that eigenvalue problems on these types of matrices frequently arise in many different fields of science and engineering. In this article, the authors discuss the applications of this theorem in diverse areas such as steady state behavior of Markov chains, power control in wireless networks, commodity pricing models in economics, population growth models, and Web search engines. The article starts with a review and discussion of the mathematical foundations.
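
As a concrete instance of the steady-state application, the following Python sketch uses power iteration to find the stationary distribution of a row-stochastic matrix, the positive eigenvector for eigenvalue 1 guaranteed by the Perron-Frobenius theorem for an irreducible aperiodic chain; the example matrix is made up, and the same iteration underlies Web search rankings.

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=10000):
    """Power iteration on a row-stochastic matrix: by Perron-Frobenius,
    an irreducible aperiodic chain has a unique positive left eigenvector
    for eigenvalue 1, which is the stationary distribution."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])   # start from uniform
    for _ in range(max_iter):
        pi_next = pi @ P                          # one step of the chain
        if np.abs(pi_next - pi).max() < tol:
            return pi_next
        pi = pi_next
    return pi

# Made-up 3-state chain for illustration
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])
print(stationary_distribution(P))   # sums to 1 and satisfies pi = pi @ P
```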

Journal ArticleDOI
TL;DR: This article provides an introduction to Bayesian statistics, hierarchical modeling, and Markov chain Monte Carlo computational techniques and shows that a signal detection analysis of recognition memory data leads to asymptotic underestimation of sensitivity.
Abstract: Although many nonlinear models of cognition have been proposed in the past 50 years, there has been little consideration of corresponding statistical techniques for their analysis. In analyses with nonlinear models, unmodeled variability from the selection of items or participants may lead to asymptotically biased estimation. This asymptotic bias, in turn, renders inference problematic. We show, for example, that a signal detection analysis of recognition memory data leads to asymptotic underestimation of sensitivity. To eliminate asymptotic bias, we advocate hierarchical models in which participant variability, item variability, and measurement error are modeled simultaneously. By accounting for multiple sources of variability, hierarchical models yield consistent and accurate estimates of participant and item effects in recognition memory. This article is written in tutorial format; we provide an introduction to Bayesian statistics, hierarchical modeling, and Markov chain Monte Carlo computational techniques.

Journal ArticleDOI
TL;DR: In this paper, an input-output hidden Markov model (IOHMM) is proposed for analyzing and forecasting electricity spot prices in the Spanish electricity market, which provides both good predictions in terms of accuracy as well as dynamic information about the market.
Abstract: In competitive electricity markets, in addition to the uncertainty of exogenous variables such as energy demand, water inflows, and availability of generation units and fuel costs, participants are faced with the uncertainty of their competitors' behavior. The analysis of electricity price time series reflects a switching nature, related to discrete changes in competitors' strategies, which can be represented by a set of dynamic models sequenced together by a Markov chain. An input-output hidden Markov model (IOHMM) is proposed for analyzing and forecasting electricity spot prices. The model provides both good predictions in terms of accuracy as well as dynamic information about the market. In this way, different market states are identified and characterized by their more relevant explanatory variables. Moreover, a conditional probability transition matrix governs the probabilities of remaining in the same state, or changing to another, whenever a new market session is opened. The model has been successfully applied to real clearing prices in the Spanish electricity market.

Book
01 Jan 2005
TL;DR: A monograph on the numerical solution of structured Markov chains, presenting M/G/1-type Markov chains and phase-type queues along with algorithms based on functional iterations, logarithmic reduction, and cyclic reduction.
Abstract: TOOLS Introduction to Markov chains Structured matrix analysis Matrix equations and canonical factorization STRUCTURED MARKOV CHAINS M/G/1-type Markov chains Phase-type queues ALGORITHMS Functional iterations Logarithmic reduction and cyclic reduction Alternative approaches Specialized structures Appendix Notations List of Algorithms Bibliography

Journal ArticleDOI
TL;DR: In this article, a simple method is presented for generating alternative CTPDFs which can significantly speed up the convergence of Markov chain Monte Carlo (MCMC), making it practical to apply MCMC to multiple planet systems.
Abstract: Precise radial velocity measurements have led to the discovery of ~170 extrasolar planetary systems. Understanding the uncertainties in the orbital solutions will become increasingly important as the discovery space for extrasolar planets shifts to planets with smaller masses and longer orbital periods. The method of Markov chain Monte Carlo (MCMC) provides a rigorous method for quantifying the uncertainties in orbital parameters in a Bayesian framework (Ford 2005a). The main practical challenge for the general application of MCMC is the need to construct Markov chains which quickly converge. The rate of convergence is very sensitive to the choice of the candidate transition probability distribution function (CTPDF). Here we explain one simple method for generating alternative CTPDFs which can significantly speed convergence by one to three orders of magnitude. We have numerically tested dozens of CTPDFs with simulated radial velocity data sets to identify those which perform well for different types of orbits and suggest a set of CTPDFs for general application. Additionally, we introduce other refinements to the MCMC algorithm for radial velocity planets, including an improved treatment of the uncertainties in the radial velocity observations, an algorithm for automatically choosing step sizes, an algorithm for automatically determining reasonable stopping times, and the use of importance sampling for including the dynamical evolution of multiple planet systems. Together, these improvements make it practical to apply MCMC to multiple planet systems. We demonstrate the improvements in efficiency by analyzing a variety of extrasolar planetary systems.

Proceedings ArticleDOI
19 Sep 2005
TL;DR: A novel performance analysis technique for large scale systems modelled in the stochastic process algebra PEPA, in which the underlying mathematical representation is a set of coupled ordinary differential equations (ODEs) rather than a continuous time Markov chain.
Abstract: In this paper we present a novel performance analysis technique for large scale systems modelled in the stochastic process algebra PEPA. In contrast to the well-known approach of analysing via continuous time Markov chains, our underlying mathematical representation is a set of coupled ordinary differential equations (ODEs). This analysis process supports all of the combinators of the PEPA algebra and is well suited to systems with large numbers of replicated components. The paper presents an elegant procedure for the generation of the ODEs and compares the results of this analysis with more conventional methods.