Author

Sibylle Sturtz

Bio: Sibylle Sturtz is an academic researcher from the Technical University of Dortmund. The author has contributed to research in the topics of Bayesian probability and medicine, has an h-index of 3, and has co-authored 4 publications that have received 2,209 citations.

Papers
Journal Article
TL;DR: The R2WinBUGS package provides convenient functions to call WinBUGS from R and automatically writes the data and scripts in a format readable by WinBUGS for processing in batch mode, which has been possible since WinBUGS version 1.4.
Abstract: The R2WinBUGS package provides convenient functions to call WinBUGS from R. It automatically writes the data and scripts in a format readable by WinBUGS for processing in batch mode, which has been possible since version 1.4. After the WinBUGS process has finished, the resulting data can either be read back into R by the package itself, which gives a compact graphical summary of inference and convergence diagnostics, or passed to the coda package for further analyses of the output. Examples are given to demonstrate the usage of this package.
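As a rough illustration of the workflow the abstract describes, the sketch below calls WinBUGS from R via the package's bugs() function and then hands the raw output to coda; the model file, data, and parameter names are invented for the example, not taken from the paper.

    # Minimal sketch, assuming WinBUGS 1.4+ is installed and a BUGS model
    # definition exists in "model.txt" (a hypothetical file, not shown here).
    library(R2WinBUGS)

    y <- rnorm(20, mean = 5, sd = 2)                 # toy data
    data  <- list(y = y, n = length(y))
    inits <- function() list(mu = rnorm(1), sigma = runif(1))

    fit <- bugs(data, inits,
                parameters.to.save = c("mu", "sigma"),
                model.file = "model.txt",
                n.chains = 3, n.iter = 10000)
    print(fit)   # compact summary with convergence diagnostics (Rhat)
    plot(fit)    # graphical summary of inference and convergence

    # Alternatively, keep the raw CODA output files and hand them to coda:
    files <- bugs(data, inits, c("mu", "sigma"), "model.txt",
                  n.chains = 3, n.iter = 10000, codaPkg = TRUE)
    library(coda)
    samples <- read.bugs(files)                      # an mcmc.list object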

1,633 citations

Journal Article
TL;DR: It is observed that the SMR and the age-standardised mortality rate (ASM) are strongly correlated and lead to comparable results.
Abstract: The number of deaths associated with a particular cause can be expressed in different ways. In spatial epidemiology, two widely used measures are the standardised mortality ratio (SMR) and the so-called mortality rate. This paper compares these two ways of expressing mortality using a descriptive and a model-based approach. Age-standardised versions of both measures have been investigated by a descriptive analysis of temporal and spatial patterns and by employing different Bayesian spatial models to study their performance. We observed that the SMR and the age-standardised mortality rate (ASM) are strongly correlated and lead to comparable results. This demonstration is based on mortality data by age, stratified into five-year ranges, from the cause-of-death statistics for ischaemic heart disease and lung cancer in 54 counties of the German state of North Rhine-Westphalia between 1980 and 1997.
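For concreteness, here is a small sketch of how the two measures are computed for a single area with five-year age strata; all counts, reference rates, and standard-population weights are invented for illustration.

    # Hypothetical counts for one county across five age strata (made up)
    deaths   <- c(2, 5, 14, 30, 55)                      # observed deaths
    persons  <- c(40000, 35000, 30000, 20000, 10000)     # person-years at risk
    ref_rate <- c(0.00004, 0.00012, 0.0004, 0.0013, 0.005)  # reference rates
    std_w    <- c(0.30, 0.27, 0.22, 0.14, 0.07)          # standard weights (sum to 1)

    # Indirect standardisation: SMR = observed deaths / expected deaths
    expected <- persons * ref_rate
    smr <- sum(deaths) / sum(expected)

    # Direct standardisation: ASM = standard-weighted sum of stratum rates
    asm <- sum(std_w * deaths / persons)

    c(SMR = smr, ASM_per_100000 = asm * 1e5)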

4 citations

Journal Article
TL;DR: The Bayesian Detection of Clusters and Discontinuities model is found to have advantages in situations dominated by abruptly changing risk, while the Poisson/gamma random field model convinces by its flexibility in estimating random field structures and in incorporating covariates.
Abstract: Bayesian hierarchical models usually model the risk surface on the same arbitrary geographical units for all data sources. Poisson/gamma random field models overcome this restriction, as the underlying risk surface can be specified independently of the resolution of the data. Moreover, covariates may be considered as either excess or relative risk factors. We compare the performance of the Poisson/gamma random field model to the Markov random field (MRF)-based ecologic regression model and the Bayesian Detection of Clusters and Discontinuities (BDCD) model, in both a simulation study and a real data example. We find the BDCD model to have advantages in situations dominated by abruptly changing risk, while the Poisson/gamma random field model convinces by its flexibility in estimating random field structures and in incorporating covariates. The MRF-based ecologic regression model is inferior. WinBUGS code for Poisson/gamma random field models is provided.
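The WinBUGS code mentioned at the end of the abstract is not reproduced here. As a much-simplified stand-in, the sketch below writes out a basic Poisson-gamma disease-mapping model in the BUGS language (area-level gamma risks only, without the random field construction or covariates), which could then be run through R2WinBUGS as in the earlier example.

    # Simplified stand-in, NOT the paper's model: area counts O with
    # expected counts E and gamma-distributed relative risks theta.
    model_string <- "
    model {
      for (i in 1:N) {
        O[i] ~ dpois(mu[i])
        mu[i] <- E[i] * theta[i]        # expected counts times relative risk
        theta[i] ~ dgamma(alpha, beta)  # gamma-distributed area-level risks
      }
      alpha ~ dexp(0.1)                 # vague hyperpriors (illustrative)
      beta  ~ dexp(0.1)
    }"
    writeLines(model_string, "pg_model.txt")
    # fit <- bugs(list(O = O, E = E, N = length(O)), inits = NULL,
    #             parameters.to.save = c("theta", "alpha", "beta"),
    #             model.file = "pg_model.txt", n.chains = 2, n.iter = 5000)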

3 citations

Journal Article
TL;DR: In this paper, the authors focus on simple and readily applicable approaches to fit a distribution to empirically observed heterogeneity data from a set of meta-analyses and to translate the fitted distribution into a (prior) probability distribution.
Abstract: In Bayesian meta-analysis, the specification of prior probabilities for the between-study heterogeneity is commonly required, and is of particular benefit in situations where only few studies are included. Among the considerations in the set-up of such prior distributions, the consultation of available empirical data on a set of relevant past analyses sometimes plays a role. How exactly to summarize historical data sensibly is not immediately obvious; in particular, the investigation of an empirical collection of heterogeneity estimates will not target the actual problem and will usually only be of limited use. The commonly used normal-normal hierarchical model for random-effects meta-analysis is extended to infer a heterogeneity prior. Using an example data set, we demonstrate how to fit a distribution to empirically observed heterogeneity data from a set of meta-analyses, and how to translate the result into a (prior) probability distribution. Considerations also include the choice of a parametric distribution family; here, we focus on simple and readily applicable approaches.
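A naive point of departure, and one the abstract explicitly cautions is of limited use because it ignores the estimation uncertainty that the paper's extended hierarchical model propagates, is to fit a parametric family directly to a collection of past heterogeneity estimates; the log-normal family and all values below are assumptions for illustration.

    # Naive illustration only: fit a candidate parametric family to
    # invented heterogeneity estimates from past meta-analyses.
    library(MASS)
    tau_hat <- c(0.12, 0.25, 0.08, 0.40, 0.18, 0.30)   # made-up estimates
    fit <- fitdistr(tau_hat, densfun = "lognormal")    # maximum likelihood fit
    fit$estimate   # meanlog/sdlog of a candidate log-normal prior for tau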

1 citation


Cited by
Journal Article
TL;DR: A fatal flaw of NHST is reviewed and some benefits of Bayesian data analysis are introduced and illustrative examples of multiple comparisons in Bayesian analysis of variance and Bayesian approaches to statistical power are presented.
Abstract: Bayesian methods have garnered huge interest in cognitive science as an approach to models of cognition and perception. On the other hand, Bayesian methods for data analysis have not yet made much headway in cognitive science against the institutionalized inertia of 20th century null hypothesis significance testing (NHST). Ironically, specific Bayesian models of cognition and perception may not long endure the ravages of empirical verification, but generic Bayesian methods for data analysis will eventually dominate. It is time that Bayesian data analysis became the norm for empirical methods in cognitive science. This article reviews a fatal flaw of NHST and introduces the reader to some benefits of Bayesian data analysis. The article presents illustrative examples of multiple comparisons in Bayesian analysis of variance and Bayesian approaches to statistical power.

6,081 citations

Journal Article
TL;DR: This work considers approximate Bayesian inference in a popular subset of structured additive regression models: latent Gaussian models, in which the latent field is Gaussian, controlled by a few hyperparameters, and combined with non-Gaussian response variables. For such models, very accurate approximations to the posterior marginals can be computed directly.
Abstract: Structured additive regression models are perhaps the most commonly used class of models in statistical applications. The class includes, among others, (generalized) linear models, (generalized) additive models, smoothing spline models, state space models, semiparametric regression, spatial and spatiotemporal models, log-Gaussian Cox processes and geostatistical and geoadditive models. We consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters and with non-Gaussian response variables. The posterior marginals are not available in closed form owing to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, in terms of both convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo sampling is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations is computational: where Markov chain Monte Carlo algorithms need hours or days to run, our approximations provide more precise estimates in seconds or minutes. Another advantage of our approach is its generality, which makes it possible to perform Bayesian analysis in an automatic, streamlined way, and to compute model comparison criteria and various predictive measures so that models can be compared and the model under study can be challenged.
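As a hypothetical sketch of what fitting such a latent Gaussian model looks like in practice, the lines below use the INLA package for R (the software associated with this methodology, distributed via its own repository rather than CRAN); the data and model are invented for illustration.

    # Hypothetical sketch using the INLA package; toy data and model.
    library(INLA)
    df <- data.frame(y = rpois(100, lambda = 5), idx = 1:100)
    # Poisson response with an iid Gaussian random effect: a small latent
    # Gaussian model whose posterior marginals INLA approximates directly
    result <- inla(y ~ 1 + f(idx, model = "iid"),
                   family = "poisson", data = df)
    summary(result)          # runs in seconds, where MCMC might take hours
    result$marginals.fixed   # approximate posterior marginals (intercept)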

4,164 citations

Journal Article
TL;DR: A hierarchical Bayesian approach to MTC, implemented using WinBUGS and R, is taken; both methods are shown to be useful in identifying potential inconsistencies in different types of network, and they illustrate how the direct and indirect evidence combine to produce the posterior MTC estimates of relative treatment effects.
Abstract: Pooling of direct and indirect evidence from randomized trials, known as mixed treatment comparisons (MTC), is becoming increasingly common in the clinical literature. MTC allows coherent judgements on which of the several treatments is the most effective and produces estimates of the relative effects of each treatment compared with every other treatment in a network. We introduce two methods for checking consistency of direct and indirect evidence. The first method (back-calculation) infers the contribution of indirect evidence from the direct evidence and the output of an MTC analysis and is useful when the only available data consist of pooled summaries of the pairwise contrasts. The second, more general but computationally intensive, method is based on 'node-splitting', which separates evidence on a particular comparison (node) into 'direct' and 'indirect' and can be applied to networks where trial-level data are available. Methods are illustrated with examples from the literature. We take a hierarchical Bayesian approach to MTC implemented using WinBUGS and R. We show that both methods are useful in identifying potential inconsistencies in different types of network and that they illustrate how the direct and indirect evidence combine to produce the posterior MTC estimates of relative treatment effects. This allows users to understand how MTC synthesis is pooling the data, and what is 'driving' the final estimates. We end with some considerations on the modelling assumptions being made and the problems with the extension of the back-calculation method to trial-level data, and we discuss our methods in the context of the existing literature.
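To give a feel for the back-calculation idea, the sketch below works under a simple normal approximation in which the precisions of direct and indirect evidence add up to the pooled MTC precision; this is a heavily simplified reading of the method, and all numbers are invented.

    # Sketch under a normal approximation (not the paper's exact method):
    # given the direct estimate and the pooled MTC posterior for one
    # contrast (e.g. a log odds ratio), back out the implied indirect
    # evidence and test direct against indirect.
    d_dir <- 0.50; se_dir <- 0.20   # direct estimate and standard error
    d_mtc <- 0.30; se_mtc <- 0.15   # pooled MTC posterior mean and sd
    w_dir <- 1 / se_dir^2
    w_mtc <- 1 / se_mtc^2
    w_ind <- w_mtc - w_dir          # assume precisions add (approximation)
    d_ind <- (w_mtc * d_mtc - w_dir * d_dir) / w_ind
    z <- (d_dir - d_ind) / sqrt(1 / w_dir + 1 / w_ind)
    2 * pnorm(-abs(z))              # two-sided p-value for inconsistency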

1,559 citations

Journal Article
TL;DR: In this paper, the authors integrate perspectives from meteorologists, climatologists, statisticians, and hydrologists to identify generic end user (in particular, impact modeler) needs and to discuss downscaling capabilities and gaps.
Abstract: Precipitation downscaling improves the coarse resolution and poor representation of precipitation in global climate models and helps end users to assess the likely hydrological impacts of climate change. This paper integrates perspectives from meteorologists, climatologists, statisticians, and hydrologists to identify generic end user (in particular, impact modeler) needs and to discuss downscaling capabilities and gaps. End users need a reliable representation of precipitation intensities and temporal and spatial variability, as well as physical consistency, independent of region and season. In addition to presenting dynamical downscaling, we review perfect prognosis statistical downscaling, model output statistics, and weather generators, focusing on recent developments to improve the representation of space-time variability. Furthermore, evaluation techniques to assess downscaling skill are presented. Downscaling adds considerable value to projections from global climate models. Remaining gaps are uncertainties arising from sparse data; representation of extreme summer precipitation, subdaily precipitation, and full precipitation fields on fine scales; capturing changes in small-scale processes and their feedback on large scales; and errors inherited from the driving global climate model.

1,443 citations

Journal Article
Klaus F. X. Mayer, Jane Rogers, Jaroslav Doležel, Curtis J. Pozniak, Kellye Eversole, Catherine Feuillet, Bikram S. Gill, Bernd Friebe, Adam J. Lukaszewski, Pierre Sourdille, Takashi R. Endo, M. Kubaláková, Jarmila Číhalíková, Zdeňka Dubská, Jan Vrána, Romana Šperková, Hana Šimková, Melanie Febrer, Leah Clissold, Kirsten McLay, Kuldeep Singh, Parveen Chhuneja, Nagendra K. Singh, Jitendra P. Khurana, Eduard Akhunov, Frédéric Choulet, Adriana Alberti, Valérie Barbe, Patrick Wincker, Hiroyuki Kanamori, Fuminori Kobayashi, Takeshi Itoh, Takashi Matsumoto, Hiroaki Sakai, Tsuyoshi Tanaka, Jianzhong Wu, Yasunari Ogihara, Hirokazu Handa, P. Ron Maclachlan, Andrew G. Sharpe, Darrin Klassen, David Edwards, Jacqueline Batley, Odd-Arne Olsen, Simen Rød Sandve, Sigbjørn Lien, Burkhard Steuernagel, Brande B. H. Wulff, Mario Caccamo, Sarah Ayling, Ricardo H. Ramirez-Gonzalez, Bernardo J. Clavijo, Jonathan M. Wright, Matthias Pfeifer, Manuel Spannagl, Mihaela Martis, Martin Mascher, Jarrod Chapman, Jesse Poland, Uwe Scholz, Kerrie Barry, Robbie Waugh, Daniel S. Rokhsar, Gary J. Muehlbauer, Nils Stein, Heidrun Gundlach, Matthias Zytnicki, Véronique Jamilloux, Hadi Quesneville, Thomas Wicker, Primetta Faccioli, Moreno Colaiacovo, Antonio Michele Stanca, Hikmet Budak, Luigi Cattivelli, Natasha Glover, Lise Pingault, Etienne Paux, Sapna Sharma, Rudi Appels, Matthew I. Bellgard, Brett Chapman, Thomas Nussbaumer, Kai Christian Bader, Hélène Rimbert, Shichen Wang, Ron Knox, Andrzej Kilian, Michael Alaux, Françoise Alfama, Loïc Couderc, Nicolas Guilhot, Claire Viseux, Mikaël Loaec, Beat Keller, Sébastien Praud
18 Jul 2014 - Science
TL;DR: Insights into the genome biology of a polyploid crop provide a springboard for faster gene isolation, rapid genetic marker development, and precise breeding to meet the needs of increasing food demand worldwide.
Abstract: An ordered draft sequence of the 17-gigabase hexaploid bread wheat (Triticum aestivum) genome has been produced by sequencing isolated chromosome arms. We have annotated 124,201 gene loci distributed nearly evenly across the homeologous chromosomes and subgenomes. Comparative gene analysis of wheat subgenomes and extant diploid and tetraploid wheat relatives showed that high sequence similarity and structural conservation are retained, with limited gene loss, after polyploidization. However, across the genomes there was evidence of dynamic gene gain, loss, and duplication since the divergence of the wheat lineages. A high degree of transcriptional autonomy and no global dominance was found for the subgenomes. These insights into the genome biology of a polyploid crop provide a springboard for faster gene isolation, rapid genetic marker development, and precise breeding to meet the needs of increasing food demand worldwide.

1,421 citations