
Showing papers in "Statistical Methods in Medical Research in 2011"


Journal ArticleDOI
TL;DR: This work proposes convolution prior modifications to the well known BYM model for attainment of identifiability and Bayesian robustness in univariate and multivariate disease mapping and spatial regression.
Abstract: We discuss the nature of Gaussian Markov random fields (GMRFs) as they are typically formulated via full conditionals, also named conditional autoregressive or CAR formulations, to represent small area relative risks ensemble priors within a Bayesian hierarchical model framework for statistical inference in disease mapping and spatial regression. We present a partial review on GMRF/CAR and multivariate GMRF prior formulations in univariate and multivariate disease mapping models and communicate insights into various prior characteristics for representing disease risks variability and 'spatial interaction.' We also propose convolution prior modifications to the well known BYM model for attainment of identifiability and Bayesian robustness in univariate and multivariate disease mapping and spatial regression. Several illustrative examples of disease mapping and spatial regression are presented.
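The convolution (BYM) prior discussed here combines a spatially structured intrinsic CAR (ICAR) component with an unstructured heterogeneity component. The sketch below is only an illustration of that structure, using a tiny hypothetical adjacency matrix rather than anything from the paper: it shows how the ICAR log-density reduces to a sum of squared differences over neighbouring pairs and how the two components add up to the log relative risks.

```python
import numpy as np

def icar_logdensity(u, W, tau):
    """Unnormalised log-density of an intrinsic CAR (ICAR) prior:
    -tau/2 * sum over neighbouring pairs (i~j) of (u_i - u_j)^2."""
    i, j = np.nonzero(np.triu(W))            # each neighbour pair counted once
    return -0.5 * tau * np.sum((u[i] - u[j]) ** 2)

# Toy 4-area adjacency matrix (hypothetical, for illustration only)
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

rng = np.random.default_rng(1)
u = rng.normal(size=4); u -= u.mean()        # structured effect, sum-to-zero for identifiability
v = rng.normal(scale=0.3, size=4)            # unstructured heterogeneity
alpha = -0.1                                 # overall level on the log relative risk scale

log_rr = alpha + u + v                       # BYM convolution: structured + unstructured
print(icar_logdensity(u, W, tau=1.0), np.exp(log_rr))
```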

74 citations


Journal ArticleDOI
TL;DR: This article reviews three approaches for analysing multivariate longitudinal data in the light of the associated theory, applications and software and combines all the outcomes into a single joint multivariate model.
Abstract: Repeated observation of multiple outcomes is common in biomedical and public health research. Such experiments result in multivariate longitudinal data, which are unique in the sense that they allow the researcher to study the joint evolution of these outcomes over time. Special methods are required to analyse such data because repeated observations on any given response are likely to be correlated over time while multiple responses measured at a given time point will also be correlated. We review three approaches for analysing such data in the light of the associated theory, applications and software. The first method consists of the application of univariate longitudinal tools to a single summary outcome. The second method aims at estimating regression coefficients without explicitly modelling the underlying covariance structure of the data. The third method combines all the outcomes into a single joint multivariate model. We also introduce a multivariate longitudinal dataset and use it to illustrate some of the techniques discussed in the article.

62 citations


Journal ArticleDOI
TL;DR: The various approaches to seamless phase II/III designs based upon the group-sequential approach, the combination test approach and the adaptive Dunnett method are described in a unified framework to allow choice of an appropriate methodology by a trialist considering conducting such a trial.
Abstract: In recent years, there has been a drive to save development costs and shorten time-to-market of new therapies. Research into novel trial designs to facilitate this goal has led to, amongst other approaches, the development of methodology for seamless phase II/III designs. Such designs allow treatment or dose selection at an interim analysis and comparative evaluation of efficacy with control, in the same study. These methods have gained much attention because of their potential advantages compared to conventional drug development programmes with separate trials for individual phases. In this article, we review the various approaches to seamless phase II/III designs based upon the group-sequential approach, the combination test approach and the adaptive Dunnett method. The objective of this article is to describe the approaches in a unified framework and highlight their similarities and differences to allow choice of an appropriate methodology by a trialist considering conducting such a trial.
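One of the reviewed approaches, the combination test, merges the stage-wise p-values of a seamless phase II/III trial into a single test statistic. A minimal sketch of the weighted inverse normal combination function is given below; the weights, p-values and significance level are illustrative assumptions, not values from the article, and the additional closed-testing adjustment needed for treatment selection is omitted.

```python
from scipy.stats import norm

def inverse_normal_combination(p1, p2, w1=0.5**0.5, w2=0.5**0.5):
    """Combine stage-wise p-values p1 (selection stage) and p2 (confirmatory stage)
    with pre-specified weights satisfying w1**2 + w2**2 = 1."""
    z = w1 * norm.ppf(1 - p1) + w2 * norm.ppf(1 - p2)
    return 1 - norm.cdf(z)                      # combined p-value

# Illustrative stage-wise p-values for the selected treatment arm
p_combined = inverse_normal_combination(p1=0.08, p2=0.03)
print(p_combined, p_combined <= 0.025)          # one-sided 2.5% level (assumed)
```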

61 citations


Journal ArticleDOI
TL;DR: The article considers the diagnostic odds ratio, a special summarising function of specificity and sensitivity for a given diagnostic test, which has been suggested as a measure of diagnostic discriminatory power, and shows that this strategy is not to be recommended since it might easily lead to cut-off values on the boundary of the parameter range.
Abstract: The article considers the diagnostic odds ratio (DOR), a special summarising function of specificity and sensitivity for a given diagnostic test, which has been suggested as a measure of diagnostic discriminatory power. In the situation of a continuous diagnostic test a cut-off value has to be chosen and it is a common practice to choose the cut-off value on the basis of the maximised diagnostic odds ratio. We show that this strategy is not to be recommended since it might easily lead to cut-off values on the boundary of the parameter range. This is illustrated by means of some examples. The source of the deficient behaviour of the diagnostic odds ratio lies in the convexity of the log-diagnostic odds ratio as a function of the cut-off value. This can easily be seen in practice by plotting a non-parametric estimate of the log-DOR against the cut-off value. In fact, it is shown for the case of a normally distributed diseased and a normally distributed non-diseased population with equal variances that the log-DOR is a convex function of the cut-off value. It is also shown that these problems are not present for the Youden index, which appears to be a better choice.
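For two normal populations with equal variances, the behaviour described above can be reproduced numerically: the log-DOR keeps increasing towards the boundary of the cut-off range, whereas the Youden index has an interior maximum. The parameter values and grid below are arbitrary illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

mu0, mu1, sigma = 0.0, 1.5, 1.0             # non-diseased / diseased means (assumed values)
c = np.linspace(-3, 4.5, 200)               # candidate cut-off values

se = 1 - norm.cdf(c, mu1, sigma)            # sensitivity: P(X > c | diseased)
sp = norm.cdf(c, mu0, sigma)                # specificity: P(X <= c | non-diseased)

log_dor = np.log(se * sp) - np.log((1 - se) * (1 - sp))   # log diagnostic odds ratio
youden = se + sp - 1                                       # Youden index

print("cut-off maximising log-DOR :", c[np.nanargmax(log_dor)])   # drifts to the grid boundary
print("cut-off maximising Youden  :", c[np.argmax(youden)])        # interior, near (mu0 + mu1)/2
```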

58 citations


Journal ArticleDOI
TL;DR: A relatively non-technical and practically orientated review of statistical methods that can be used to estimate dose-response relationships in randomised controlled psychotherapy trials in which participants fail to attend all of the planned sessions of therapy.
Abstract: We present a relatively non-technical and practically orientated review of statistical methods that can be used to estimate dose-response relationships in randomised controlled psychotherapy trials in which participants fail to attend all of the planned sessions of therapy. Here we are investigating the effects on treatment outcome of the number of sessions attended when the latter is possibly subject to hidden selection effects (hidden confounding). The aim is to estimate the parameters of a structural mean model (SMM) using randomisation, and possibly randomisation by covariate interactions, as instrumental variables. We describe, compare and illustrate the equivalence of the use of a simple G-estimation algorithm and two two-stage least squares procedures that are traditionally used in economics.
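The two-stage least squares route mentioned above can be sketched directly: regress the number of sessions attended on the randomised arm (the instrument), then regress the outcome on the predicted number of sessions. The simulated data, variable names and effect sizes below are assumptions for illustration only, and the naive second-stage standard errors are not computed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.integers(0, 2, n)                         # randomised arm (instrument)
u = rng.normal(size=n)                            # hidden confounder
sessions = np.clip(np.round(4 * z + 2 * u + rng.normal(size=n)), 0, 8)   # dose actually received
y = 0.5 * sessions + 3 * u + rng.normal(size=n)   # outcome; true causal effect per session = 0.5

# Stage 1: regress sessions attended on the instrument (randomisation)
X1 = np.column_stack([np.ones(n), z])
sessions_hat = X1 @ np.linalg.lstsq(X1, sessions, rcond=None)[0]

# Stage 2: regress the outcome on the predicted number of sessions
X2 = np.column_stack([np.ones(n), sessions_hat])
beta = np.linalg.lstsq(X2, y, rcond=None)[0]

naive = np.linalg.lstsq(np.column_stack([np.ones(n), sessions]), y, rcond=None)[0]
print("2SLS effect per session:", beta[1], "  naive OLS (confounded):", naive[1])
```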

49 citations


Journal ArticleDOI
TL;DR: Through a case study from a replicate cross-over study it is shown how, given suitable replication, it is possible to isolate the component of variation corresponding to patient-by-treatment interaction and hence investigate the possibility of individual response to treatment.
Abstract: It is a common belief that individual variation in response to treatment is an important explanation for the variation in observed outcomes in clinical trials. If such variation is large, it seems reasonable to suppose that progress in treating disease will be advanced by classifying patients according to their ability, or not, to 'respond' to particular treatments. We consider that there is currently a lost opportunity in drug development. There is a great deal of talk about individual response to treatment and tailor-made drugs. However, relatively little work is being done to formally investigate, using suitable designs, where individual response to treatment may be important. Through a case study from a replicate cross-over study we show how, given suitable replication, it is possible to isolate the component of variation corresponding to patient-by-treatment interaction and hence investigate the possibility of individual response to treatment.
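Given replication, the patient-by-treatment interaction component can be estimated as a random treatment effect in a mixed model. The sketch below is not the authors' analysis (period effects and the actual design are ignored); it only illustrates the idea on simulated replicate cross-over data, with all variance values assumed.

```python
import numpy as np, pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_pat = 60
pat = np.repeat(np.arange(n_pat), 4)                       # each patient: 2 treatments x 2 replicates
trt = np.tile([0, 1, 0, 1], n_pat)
b_pat = np.repeat(rng.normal(0, 1.0, n_pat), 4)            # patient random intercept
b_int = np.repeat(rng.normal(0, 0.7, n_pat), 4)            # patient-by-treatment interaction
y = 1.0 + 0.5 * trt + b_pat + b_int * trt + rng.normal(0, 1.0, len(pat))

df = pd.DataFrame({"y": y, "trt": trt, "patient": pat})
# Random intercept and random treatment effect per patient; the estimated variance of the
# random 'trt' term corresponds to patient-by-treatment interaction (true value 0.7**2 = 0.49)
fit = smf.mixedlm("y ~ trt", df, groups=df["patient"], re_formula="~trt").fit()
print(fit.cov_re)        # 2x2 random-effects covariance: intercept and trt variances
```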

46 citations


Journal ArticleDOI
TL;DR: The results demonstrate that self-report measures as well as traditional variants of the RRT, which do not take cheating into account, may provide considerably distorted estimates of the prevalence of medication non-adherence.
Abstract: Medication non-adherence is a serious problem for medical research and clinical practice. Self-reports are only moderately valid, and objective methods are cumbersome and expensive to administer. We sought to improve self-reports of medication non-adherence using a cheating detection extension of the randomised-response-technique (RRT). This RRT variant encourages more honest responses by offering interviewees a higher degree of anonymity while simultaneously allowing us to estimate the proportion of respondents disobeying the RRT instructions. A total of 597 patients were asked to report their lifetime prevalence of medication non-adherence under one of two different questioning procedures, direct questioning or randomised-response. When questioned directly, only 20.9% of patients admitted to intentional medication non-adherent behaviour, as opposed to 32.7% of patients under RRT conditions. Additionally, the cheating detection extension revealed a significant proportion of patients (47.1%) disobeying the instructions.

37 citations


Journal ArticleDOI
TL;DR: This article considers a study to examine human sperm cell DNA damage obtained from single-cell electrophoresis (COMET assay) experiment in which the outcome measures present a typical example of log-normal data with excess zeros, and extends the previous methods by incorporating a hierarchical structure using latent random variables to take into account both inter- and intra-subject variations in zero-inflated log- normal data.
Abstract: Although considerable attention has been given to zero-inflated count data, research on zero-inflated log-normal data is limited. In this article, we consider a study to examine human sperm cell DNA damage obtained from single-cell electrophoresis (COMET assay) experiment in which the outcome measures present a typical example of log-normal data with excess zeros. The problem is further complicated by the fact that each study subject has multiple outcomes at each of up to three visits separated by six-week intervals. Previous methods for zero-inflated log-normal data are based on either simple experimental designs, where comparison of means of zero-inflated log-normal data across different experiment groups is of primary interest, or longitudinal measurements, where only one observation is available for each subject at each visit. Their methods cannot be applied when multiple observations per visit are possible and both inter- and intra-subject variations are present. Our zero-inflated model extends the previous methods by incorporating a hierarchical structure using latent random variables to take into account both inter- and intra-subject variations in zero-inflated log-normal data. An EM algorithm has been developed to obtain the maximum likelihood estimates of the parameters; their standard errors can be estimated by a parametric bootstrap. The model is illustrated using the COMET assay data.
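The hierarchical model extends the basic zero-inflated log-normal distribution, under which an observation is zero with probability p and otherwise log-normally distributed. For the simple i.i.d. case (without the latent random effects of the article, which require the EM algorithm) the maximum likelihood estimates have closed forms, as sketched below on simulated data.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p_zero, mu, sigma = 500, 0.3, 1.0, 0.8
is_zero = rng.random(n) < p_zero
y = np.where(is_zero, 0.0, rng.lognormal(mu, sigma, n))   # zero-inflated log-normal sample

# Closed-form MLEs for the simple (non-hierarchical) zero-inflated log-normal model
positive = y[y > 0]
p_hat = np.mean(y == 0)                         # probability of a structural zero
mu_hat = np.mean(np.log(positive))              # log-scale mean of the positive part
sigma_hat = np.std(np.log(positive), ddof=0)    # log-scale SD (MLE uses ddof=0)

# Overall mean of a zero-inflated log-normal variable
mean_hat = (1 - p_hat) * np.exp(mu_hat + 0.5 * sigma_hat**2)
print(p_hat, mu_hat, sigma_hat, mean_hat)
```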

34 citations


Journal ArticleDOI
TL;DR: This article investigates the performance of three competing proposals of fitting marginal linear models to clustered longitudinal data, namely, GEE, within-cluster resampling (WCR) and cluster-weighted generalised estimating equations (CWGEE), and concludes that CWGEE appears to be the recommended choice for marginal parametric inference with clusters longitudinal data that achieves similar parameter estimates and test statistics as WCR while avoiding Monte Carlo computation.
Abstract: Clustered longitudinal data are often collected as repeated measures on subjects arising in clusters. Examples include periodontal disease study, where the measurements related to the disease status of each tooth are collected over time for each patient, which can be considered as a cluster. For such applications, the number of teeth for each patient may be related to the overall oral health of the individual and hence may influence the distribution of the outcome measure of interest leading to an informative cluster size. Under such situations, generalised estimating equations (GEE) may lead to invalid inferences. In this article, we investigate the performance of three competing proposals of fitting marginal linear models to clustered longitudinal data, namely, GEE, within-cluster resampling (WCR) and cluster-weighted generalised estimating equations (CWGEE). We show by simulations and theoretical calculations that, when the cluster size is informative, GEE provides biased estimators, while both WCR and CWGEE provide unbiased estimators.
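For a linear marginal model with an independence working correlation, cluster-weighted estimation amounts to down-weighting each observation by its cluster size, so every cluster contributes equally to the estimating equation. The toy simulation below, in which informative cluster size is induced by letting a cluster-level effect depend on cluster size, contrasts the unweighted and cluster-weighted solutions; it is an illustration of the weighting idea, not the article's simulation study.

```python
import numpy as np

rng = np.random.default_rng(4)
clusters = []
for _ in range(300):
    n_i = rng.integers(2, 9)                     # cluster size (e.g. number of teeth followed)
    b_i = 0.4 * (n_i - 5) + rng.normal(0, 0.5)   # cluster effect depends on size -> informative
    x = rng.normal(size=n_i)
    y_i = 1.0 + 0.5 * x + b_i + rng.normal(0, 1, n_i)
    clusters.append((x, y_i, n_i))

X = np.concatenate([np.column_stack([np.ones(n), x]) for x, _, n in clusters])
y = np.concatenate([y_i for _, y_i, _ in clusters])
w = np.concatenate([np.full(n, 1.0 / n) for _, _, n in clusters])    # cluster-weighted: weight 1/n_i

ols = np.linalg.lstsq(X, y, rcond=None)[0]                            # unweighted (ordinary GEE, independence)
cwgee = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))         # cluster-weighted solution
print("unweighted:", ols, " cluster-weighted:", cwgee)                 # intercepts differ under informativeness
```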

33 citations


Journal ArticleDOI
TL;DR: This article emphasises the specific considerations necessary for designing good quality simulation studies, including defining data generation processes, data analytic methods, decision criteria and also determining the presentation of results for all intended audiences.
Abstract: Clinical trial simulation studies can be used to assess the impact of many aspects of trial design, conduct, analysis and decision making on trial performance metrics. Simulation studies can play a...
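The elements listed above (a data-generating process, a data-analytic method and a decision criterion) can be assembled into a small simulation loop. The design values below are illustrative assumptions; the performance metric reported is empirical power.

```python
import numpy as np
from scipy.stats import ttest_ind

def simulate_trial(n_per_arm=100, effect=0.3, sd=1.0, alpha=0.05, n_sim=2000, seed=5):
    """Clinical trial simulation: data generation -> analysis -> decision, repeated n_sim times."""
    rng = np.random.default_rng(seed)
    decisions = []
    for _ in range(n_sim):
        control = rng.normal(0.0, sd, n_per_arm)           # data-generating process
        treated = rng.normal(effect, sd, n_per_arm)
        p = ttest_ind(treated, control).pvalue              # data-analytic method
        decisions.append(p < alpha)                         # decision criterion (declare success)
    return np.mean(decisions)                               # performance metric: empirical power

print(simulate_trial())      # roughly 0.56 power for these assumed design values
```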

27 citations


Journal ArticleDOI
TL;DR: This article describes how a wide range of group sequential designs can easily be implemented using two accessible SAS functions: one is a standard function, while the other is part of the interactive matrix language of SAS, PROC IML.
Abstract: The methodology of group sequential trials is now well established and widely implemented. The benefits of the group sequential approach are generally acknowledged, and its use, when applied properly, is accepted by researchers and regulators. This article describes how a wide range of group sequential designs can easily be implemented using two accessible SAS functions. One of these, PROBBNRM, is a standard function, while the other, SEQ, is part of the interactive matrix language of SAS, PROC IML. The account focuses on the essentials of the approach and reveals how straightforward it can be. The design of studies is described, including their evaluation in terms of the distribution of final sample size. The conduct of the interim analyses is discussed, with emphasis on the consequences of inevitable departures from the planned schedule of information accrual. The computations required for the final analysis, allowing for the sequential design, are closely related to those conducted at the design stage. Illustrative examples are given and listings of suitable SAS code are provided.
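The core computation behind such designs is the joint multivariate normal distribution of the interim test statistics, whose correlations are determined by the information fractions. The Python sketch below is a rough analogue of what those SAS functions provide, not a translation of them; the two-look boundaries are illustrative O'Brien-Fleming-type values.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

t = np.array([0.5, 1.0])              # information fractions at the two looks (assumed)
b = np.array([2.797, 1.977])          # illustrative O'Brien-Fleming-type boundaries

rho = np.sqrt(t[0] / t[1])            # corr(Z1, Z2) = sqrt(t1 / t2) under the null
joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

p_stop1 = 1 - norm.cdf(b[0])                                   # reject at the first look
# P(Z1 < b1, Z2 >= b2) = P(Z2 >= b2) - P(Z1 >= b1, Z2 >= b2);
# the last term equals P(Z1 <= -b1, Z2 <= -b2) by symmetry of the centred bivariate normal.
p_stop2 = (1 - norm.cdf(b[1])) - joint.cdf([-b[0], -b[1]])

print("P(reject at look 1)          :", round(p_stop1, 5))
print("P(continue, reject at look 2):", round(float(p_stop2), 5))
print("overall one-sided type I error:", round(p_stop1 + float(p_stop2), 5))
```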

Journal ArticleDOI
TL;DR: Two simulation studies are used to compare methods for providing appropriate standard errors in this spatial setting and four methods are extended to the change-of-support case where X is observed at points, but Y is observed for areal units, and these approaches are compared via simulation.
Abstract: When a response variable Y is measured on one set of points and a spatially varying predictor variable X is measured on a different set of points, X and Y have different supports and thus are spatially misaligned. To draw inference about the association between X and Y, X is commonly predicted at the points for which Y is observed, and Y is regressed on the predicted X. If X is predicted using kriging or some other smoothing approach, use of the predicted instead of the true (unobserved) X values in the regression results in unbiased estimates of the regression parameters. However, the naive standard errors of these parameters tend to be too small. In this article, two simulation studies are used to compare methods for providing appropriate standard errors in this spatial setting. Three of the methods are extended to the change-of-support case where X is observed at points, but Y is observed for areal units, and these approaches are also compared via simulation.

Journal ArticleDOI
TL;DR: These routines were applied to data on about 20 years of weekly Portuguese number of deaths by pneumonia and influenza showing that, in this case, the parameter that had the highest impact on influenza-associated deaths estimates was the a priori chosen type of period used.
Abstract: The occurrence of influenza epidemics during winters, in the northern hemisphere countries, is known to be associated with observed excess mortality for all causes. A large variety of methods have been developed in order to estimate, from weekly or monthly mortality time series, the number of influenza-associated deaths in each season. The present work focuses on the group of methods characterised by fitting statistical models to interrupted mortality time series. The study objective is to find a common ground between these methods in order to describe and compare them. They are unified in a single class, being categorised according to three main parameters: the model used to fit the interrupted time series and obtain a baseline, the a priori chosen type of periods used to estimate the influenza epidemic periods and the procedure used to fit the model to the time series (iterative or non-iterative). This generalisation led quite naturally to the construction of a set of user friendly R-routines, package flu...
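One classical member of the class of methods described is a Serfling-type cyclic regression: a baseline is fitted to the weeks assumed free of influenza, and excess deaths in epidemic periods are the observed counts minus that baseline. The sketch below is a generic illustration on simulated weekly counts, not a reimplementation of the authors' R package, and the epidemic-period definition is an assumption.

```python
import numpy as np

rng = np.random.default_rng(6)
weeks = np.arange(520)                                        # ~10 years of weekly data
seasonal = 100 + 0.02 * weeks + 15 * np.cos(2 * np.pi * weeks / 52.18)
epidemic = np.isin(weeks % 52, np.arange(8))                  # assumed epidemic weeks each winter
deaths = rng.poisson(seasonal + 40 * epidemic)

# Serfling-type baseline: trend + annual sine/cosine, fitted on the non-epidemic weeks only
X = np.column_stack([np.ones_like(weeks), weeks,
                     np.sin(2 * np.pi * weeks / 52.18), np.cos(2 * np.pi * weeks / 52.18)])
coef = np.linalg.lstsq(X[~epidemic], deaths[~epidemic], rcond=None)[0]
baseline = X @ coef

excess = np.where(epidemic, deaths - baseline, 0.0)           # influenza-associated deaths
print("estimated excess deaths per season:", excess.sum() / 10)
```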

Journal ArticleDOI
TL;DR: Empirical results show that Wald-type, score-type and bootstrap confidence intervals based on the dependence model perform satisfactorily for small to large sample sizes in the sense that their empirical coverage probabilities are close to the pre-specified nominal confidence level and are hence recommended.
Abstract: Bilateral dichotomous data are very common in modern medical comparative studies (e.g. comparison of two treatments in ophthalmologic, orthopaedic and otolaryngologic studies) in which information involving paired organs (e.g. eyes, ears and hips) is available from each subject. In this article, we study various confidence interval estimators for proportion difference based on Wald-type statistics, Fieller theorem, likelihood ratio statistic, score statistics and bootstrap resampling method under the dependence and/or independence models for bilateral binary data. Performance is evaluated with respect to the coverage probability and expected width via simulation studies. Our empirical results show that (1) ignoring the dependence feature of bilateral data could lead to severely incorrect coverage probabilities; and (2) Wald-type, score-type and bootstrap confidence intervals based on the dependence model perform satisfactorily for small to large sample sizes in the sense that their empirical coverage probabilities are close to the pre-specified nominal confidence level.

Journal ArticleDOI
TL;DR: It is shown how the sample size is quite sensitive to assumptions about the control response, and recommended that the Bayesian methods described in this article be adopted to assess sample size.
Abstract: Non-inferiority trials are motivated in the context of clinical research where a proven active treatment exists and placebo-controlled trials are no longer acceptable for ethical reasons. Instead, active-controlled trials are conducted where a treatment is compared to an established treatment with the objective of demonstrating that it is non-inferior to this treatment. We review and compare the methodologies for calculating sample sizes and suggest appropriate methods to use. We demonstrate how the simplest method of using the anticipated response is predominantly consistent with simulations. In the context of trials with binary outcomes with expected high proportions of positive responses, we show how the sample size is quite sensitive to assumptions about the control response. We recommend when designing such a study that sensitivity analyses be performed with respect to the underlying assumptions and that the Bayesian methods described in this article be adopted to assess sample size.
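The sensitivity of the sample size to the assumed control response can be explored by placing a prior on that response and averaging the simulated power, an assurance-type calculation in the spirit of, though not necessarily identical to, the Bayesian methods of the article. All priors, margins and design values below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def ni_power(p_control, n, margin=0.10, alpha=0.025, n_sim=2000):
    """Simulated power of a non-inferiority test of two proportions (new vs control, truly equal rates)."""
    x_c = rng.binomial(n, p_control, n_sim)
    x_t = rng.binomial(n, p_control, n_sim)            # assume the new treatment is truly equivalent
    pc, pt = x_c / n, x_t / n
    se = np.sqrt(pc * (1 - pc) / n + pt * (1 - pt) / n)
    z = (pt - pc + margin) / se                         # conclude non-inferiority if z exceeds the critical value
    return np.mean(z > norm.ppf(1 - alpha))

# Average the power over a prior on the control response (assurance-style calculation)
control_draws = rng.beta(80, 20, 200)                   # prior centred on a control response of 0.80
print("power at control response 0.80:", ni_power(0.80, n=300))
print("prior-averaged power          :", np.mean([ni_power(p, n=300) for p in control_draws]))
```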

Journal ArticleDOI
TL;DR: This article derives sample size formulae for the non-randomised triangular design based on the power analysis approach and numerically compares the sample sizes required for the randomised Warner design with those necessary for the direct questioning (DDQ), and extends the one-sample problem to the two- sample problem.
Abstract: Sample size determination is an essential component in public health survey designs on sensitive topics (e.g. drug abuse, homosexuality, induced abortions and pre or extramarital sex). Recently, non-randomised models have been shown to be an efficient and cost effective design when comparing with randomised response models. However, sample size formulae for such non-randomised designs are not yet available. In this article, we derive sample size formulae for the non-randomised triangular design based on the power analysis approach. We first consider the one-sample problem. Power functions and their corresponding sample size formulae for the one- and two-sided tests based on the large-sample normal approximation are derived. The performance of the sample size formulae is evaluated in terms of (i) the accuracy of the power values based on the estimated sample sizes and (ii) the sample size ratio of the non-randomised triangular design and the design of direct questioning (DDQ). We also numerically compare the sample sizes required for the randomised Warner design with those required for the DDQ and the non-randomised triangular design. Theoretical justification is provided. Furthermore, we extend the one-sample problem to the two-sample problem. An example based on an induced abortion study in Taiwan is presented to illustrate the proposed methods.
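The key relation of the non-randomised triangular model is that the probability of the 'triangle' answer is lambda = pi + (1 - pi)p, where pi is the prevalence of the sensitive attribute and p the known probability of the innocuous category. A normal-approximation sample size can then be written on the lambda scale; the sketch below follows this standard power-analysis logic and may differ in detail from the exact formulae derived in the article, with all numerical inputs being illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def n_triangular(pi0, pi1, p, alpha=0.05, power=0.80):
    """Approximate sample size for a one-sided test of H0: pi = pi0 vs H1: pi = pi1 under the
    non-randomised triangular model, in which P(triangle answer) = lambda = pi + (1 - pi) * p."""
    lam0 = pi0 + (1 - pi0) * p
    lam1 = pi1 + (1 - pi1) * p
    za, zb = norm.ppf(1 - alpha), norm.ppf(power)
    n = (za * np.sqrt(lam0 * (1 - lam0)) + zb * np.sqrt(lam1 * (1 - lam1))) ** 2 / (lam1 - lam0) ** 2
    return int(np.ceil(n))

# p is the known probability of the innocuous category (e.g. birthday in a given quarter)
print(n_triangular(pi0=0.10, pi1=0.20, p=0.25))   # triangular design
print(n_triangular(pi0=0.10, pi1=0.20, p=0.0))    # p = 0 reduces to direct questioning
```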

Journal ArticleDOI
TL;DR: This article proposes an additional estimation method, based on inverse probability weighting, for the attributable fraction, and carries out a simulation study to examine the performance of the inverse probability weighted estimator, and to compare it to the maximum likelihood estimation.
Abstract: The attributable fraction is commonly used in epidemiology to quantify the impact of an exposure on a disease. Several estimation methods have been suggested in the literature, including maximum likelihood estimation. In this article we propose an additional estimation method, based on inverse probability weighting. This method is particularly useful when a model for the exposure distribution can be well specified. We carry out a simulation study to examine the performance of the inverse probability weighted estimator, and to compare it to the maximum likelihood estimator.
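The idea can be sketched in a few lines: fit a model for the exposure given confounders, weight the unexposed subjects by the inverse of their probability of being unexposed to estimate the counterfactual disease risk, and form the attributable fraction. The simulation below is purely illustrative and is not the article's simulation study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 20000
z = rng.normal(size=n)                                           # confounder
a = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * z))), n)             # exposure depends on the confounder
y = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 1.0 * a + 0.7 * z))), n)   # disease indicator

# Fit the exposure model (here a logistic regression on the confounder)
ps_model = sm.Logit(a, sm.add_constant(z)).fit(disp=0)
p_unexposed = 1 - ps_model.predict(sm.add_constant(z))           # P(A = 0 | Z)

# IPW estimate of the counterfactual risk had everyone been unexposed
risk_0 = np.mean((a == 0) * y / p_unexposed)
af_ipw = 1 - risk_0 / np.mean(y)                                  # attributable fraction
print("IPW attributable fraction:", round(af_ipw, 3))
```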

Journal ArticleDOI
TL;DR: The mixture survival model is extended to population-based grouped survival data and the personal cure rate is estimated using the colorectal cancer survival data from the Surveillance, Epidemiology and End Results Programme.
Abstract: Cancer patients are subject to multiple competing risks of death and may die from causes other than the cancer diagnosed. The probability of not dying from the cancer diagnosed, which is one of the patients' main concerns, is sometimes called the 'personal cure' rate. Two approaches, namely the cause-specific hazards approach and the mixture model approach, have been used to model competing-risk survival data. In this article, we first show the connection and differences between crude cause-specific survival in the presence of other causes and net survival in the absence of other causes. The mixture survival model is extended to population-based grouped survival data to estimate the personal cure rate. Using the colorectal cancer survival data from the Surveillance, Epidemiology and End Results (SEER) Program, we estimate the probabilities of dying from colorectal cancer, heart disease and other causes by age at diagnosis, race and American Joint Committee on Cancer (AJCC) stage.

Journal ArticleDOI
TL;DR: In this paper, a linear combination of K independent binomial proportions L = Σ β_i p_i (of which the one- and two-proportion problems are special cases) is considered and the score method is shown to verify the desirable properties of spatial and parametric convexity.
Abstract: Statistical methods for carrying out asymptotic inferences (tests or confidence intervals) relative to one or two independent binomial proportions are very frequent. However, inferences about a linear combination of K independent proportions L = Σ β_i p_i (of which the one- and two-proportion problems are special cases) have received very little attention, focused exclusively on the classic Wald method. In this article the authors approach the problem from the more efficient viewpoint of the score method, which can be solved using a free program available from the webpage quoted in the article. In addition, the article offers approximate formulas that are easy to calculate, gives a general proof of Agresti's heuristic method (consisting of adding a certain number of successes and failures to the original results before applying Wald's method) and, finally, proves that the score method (which verifies the desirable properties of spatial and parametric convexity) is the best option in comparison with other methods.
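For reference, the classic Wald interval for L = Σ β_i p_i, and the Agresti-style adjustment discussed above (adding pseudo-successes and pseudo-failures before applying Wald's formula), can be written as below. The number of pseudo-observations added is an assumption for illustration; the score interval itself relies on the authors' free program and is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def wald_ci_linear_comb(x, n, beta, alpha=0.05, add=0.0):
    """Wald CI for L = sum_i beta_i * p_i from independent binomials x_i ~ Bin(n_i, p_i).
    'add' pseudo-successes and pseudo-failures are added to each proportion (Agresti-style)."""
    x, n, beta = map(np.asarray, (x, n, beta))
    p = (x + add) / (n + 2 * add)
    L = np.sum(beta * p)
    se = np.sqrt(np.sum(beta**2 * p * (1 - p) / (n + 2 * add)))
    z = norm.ppf(1 - alpha / 2)
    return L - z * se, L + z * se

# Illustrative data: difference of two proportions, L = p1 - p2
print(wald_ci_linear_comb(x=[8, 2], n=[20, 25], beta=[1, -1]))            # classic Wald
print(wald_ci_linear_comb(x=[8, 2], n=[20, 25], beta=[1, -1], add=1.0))   # adjusted (add-one variant)
```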

Journal ArticleDOI
TL;DR: This work compares using complete cases with multiple imputation using backward selection (backwards stepping) and least angle regression and finds that the coefficients are slightly different and the estimated standard errors are smaller in the complete cases.
Abstract: We consider variable selection when missing values are present in the predictor variables. We compare using complete cases with multiple imputation using backward selection (backwards stepping) and least angle regression. These are studied using a data set from a rheumatological disease (myositis). We find that the coefficients are slightly different and the estimated standard errors are smaller in the complete cases (not a surprise). This seems to be due to the fact that, because the estimated residual variance is small, the complete cases are more homogeneous than the full data cases.

Journal ArticleDOI
TL;DR: This article provides statistical tests that allow for examination of several local statistics across multiple spatial scales, and yet avoid the need for simulation, using data on leukemia from central New York State.
Abstract: Local spatial statistics are used to test for spatial association in some variable of interest, and to test for clustering around predefined locations. Such statistics require that a neighbourhood be defined around the location of interest. This is done by specifying weights for surrounding regions, and this is tantamount to specification of the scale at which the local dependence or clustering is tested. In practice, weights are usually assigned exogenously, with little thought given to their definition. Most common is the definition of binary adjacency - weights are set equal to one if the region is adjacent to the focal region and to zero otherwise. But this implies a spatial scale that may or may not be the best one to evaluate the variable under study - the actual scale of dependence or clustering may be smaller or larger. An alternative strategy is to try different sets of weights corresponding to different spatial scales. The purpose of this article is to provide statistical tests that allow for examination of several local statistics across multiple spatial scales, and yet avoid the need for simulation. Application of these tests leads to a choice of spatial scale through the weights, as well as an assessment of statistical significance. The approach is illustrated using data on leukemia from central New York State.
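The dependence on the chosen weights can be seen directly with a local Moran-type statistic, I_i = z_i Σ_j w_ij z_j, computed under two different neighbourhood definitions. The grid, weights and data below are toy assumptions, and this sketch does not include the article's significance assessment across scales.

```python
import numpy as np

rng = np.random.default_rng(9)
# Regions on a 10 x 10 grid with a raised-rate cluster in one corner (toy example)
coords = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)
rate = rng.normal(size=100)
rate[(coords[:, 0] < 3) & (coords[:, 1] < 3)] += 1.5

z = (rate - rate.mean()) / rate.std()
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)

def local_moran(z, W):
    W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)    # row-standardise the weights
    return z * (W @ z)                                         # I_i = z_i * sum_j w_ij z_j

W_adj = ((dist <= 1.0) & (dist > 0)).astype(float)             # binary rook adjacency (small scale)
W_band = ((dist <= 3.0) & (dist > 0)).astype(float)            # distance band of 3 (larger scale)

print("region 0, adjacency scale :", local_moran(z, W_adj)[0])
print("region 0, distance-band 3 :", local_moran(z, W_band)[0])
```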

Journal ArticleDOI
TL;DR: A multivariate spatial beta-binomial (BB) model for these data that accommodates both over-dispersion as well as latent spatial associations and provides a superior estimation and model fit as compared to other sub-models that do not consider modelling spatial associations.
Abstract: One of the most important indicators of dental caries prevalence is the total count of decayed, missing or filled surfaces in a tooth. These count data are often clustered in nature (several count responses clustered within a subject), over-dispersed as well as spatially referenced (a diseased tooth might be positively influencing the decay process of a set of neighbouring teeth). In this article, we develop a multivariate spatial beta-binomial (BB) model for these data that accommodates both over-dispersion as well as latent spatial associations. Using a Bayesian paradigm, the re-parameterised marginal mean (as well as variance) under the BB framework are modelled using a regression on subject/tooth-specific co-variables and a conditionally autoregressive prior that models the latent spatial process. The necessity of exploiting spatial associations to model count data arising in dental caries research is demonstrated using a small simulation study. Real data confirm that our spatial BB model provides superior estimation and model fit as compared to other sub-models that do not consider modelling spatial associations.

Journal ArticleDOI
TL;DR: An item response theory model is proposed to analyse psychiatric questionnaires that contain embarrassing items, using Bayesian methods to estimate its parameters and a simulation study is considered to evaluate the performance of the proposed estimators.
Abstract: We propose an item response theory model to analyse psychiatric questionnaires that contain embarrassing items. We use Bayesian methods to estimate its parameters and consider a simulation study to evaluate the performance of the proposed estimators. The results are illustrated with the analysis of data collected to evaluate teenager depression, highlighting the gender difference in the probabilities of ‘crying crisis’, a trait known to embarrass some male populations.

Journal ArticleDOI
TL;DR: It is shown that the exposure mean, variance and intraclass correlation are the only additional parameters needed for exact solutions for the required sample size, if compound symmetry of residuals can be assumed, or to a good approximation if residuals follow a damped exponential correlation structure.
Abstract: Existing study design formulas for longitudinal studies assume that the exposure is time invariant or that it varies in a manner that is controlled by design. However, in observational studies, the investigator does not control how exposure varies within subjects over time. Typically, a large number of exposure patterns are observed, with differences in the number of exposed periods per participant and with changes in the cross-sectional mean of exposure over time. This article provides formulas for study design calculations that incorporate these features for studies with a continuous outcome and a time-varying exposure, for cases where the effect of exposure on the response is assumed to be constant over time. We show that incorrectly using the formulas for time-invariant exposure can produce substantial overestimation of the required sample size. It is shown that the exposure mean, variance and intraclass correlation are the only additional parameters needed for exact solutions for the required sample size, if compound symmetry of residuals can be assumed, or to a good approximation if residuals follow a damped exponential correlation structure. The methods are applied to several examples. A publicly available programme to perform the calculations is provided.
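The quantity underlying such formulas is the GLS variance of the exposure coefficient given the within-subject residual covariance and the distribution of exposure patterns. The sketch below obtains that variance by Monte Carlo over simulated exposure patterns under compound-symmetry residuals and converts it into a required sample size; it is a generic calculation under assumed inputs, not the article's closed-form expressions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
T, sigma2, rho = 4, 1.0, 0.5                                     # visits, residual variance, within-subject corr
V = sigma2 * ((1 - rho) * np.eye(T) + rho * np.ones((T, T)))      # compound-symmetry residual covariance
Vinv = np.linalg.inv(V)

# Time-varying binary exposure: prevalence rises over visits, correlated within subject
p_exposed = np.array([0.2, 0.3, 0.4, 0.5])

info = np.zeros((2, 2))
n_mc = 5000
for _ in range(n_mc):                                             # average the information over exposure patterns
    subj_prop = rng.beta(2, 2)                                    # subject-level propensity (induces exposure ICC)
    x = rng.binomial(1, np.clip(p_exposed * 2 * subj_prop, 0, 1))
    X = np.column_stack([np.ones(T), x])                          # intercept + time-varying exposure
    info += X.T @ Vinv @ X
var_per_subject = np.linalg.inv(info / n_mc)[1, 1]                # Var(beta_hat) contribution of one subject

effect, alpha, power = 0.3, 0.05, 0.80                            # assumed detectable effect and error rates
n = (norm.ppf(1 - alpha / 2) + norm.ppf(power))**2 * var_per_subject / effect**2
print("required number of subjects:", int(np.ceil(n)))
```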

Journal ArticleDOI
TL;DR: A probability-based model involving the use of direct likelihood formulation and generalised linear modelling (GLM) approaches useful in estimating important disease parameters from longitudinal or repeated measurement data is developed.
Abstract: This article aims to develop a probability-based model involving the use of direct likelihood formulation and generalised linear modelling (GLM) approaches useful in estimating important disease parameters from longitudinal or repeated measurement data. The current application is based on infection with respiratory syncytial virus. The force of infection and the recovery rate or per capita loss of infection are the parameters of interest. However, because of the limitations arising from the study design and, consequently, the data generated, only the force of infection is estimable. The problem of dealing with time-varying disease parameters is also addressed in the article by fitting piecewise constant parameters over time via the GLM approach. The current model formulation is based on that published in White LJ, Buttery J, Cooper B, Nokes DJ and Medley GF. Rotavirus within day care centres in Oxfordshire, UK: characterization of partial immunity. Journal of Royal Society Interface 2008; 5: 1481–1490, with an application to rotavirus transmission and immunity.

Journal ArticleDOI
TL;DR: Results show that the heterogeneity increases with the variability of length of follow-up for OR and RR, but not for the ratio of the logarithms of survival probability, which avoids the problems mentioned above when hazards are proportional.
Abstract: Odds ratios (ORs) and relative risks (RRs) are sensitive to the length of follow-up. In meta-analyses, pooling such results from studies with different lengths of follow-up may lead to an artificial heterogeneity and discrepancy caused by the choice of the summary index. In this article, we explore the utility of a meta-analysis method that uses the ratio of logarithms of survival probability as the measure of association, and that avoids the problems mentioned above when hazards are proportional. Meta-analyses of ORs, RRs and ratios of logarithms of survival probability are compared through a simulation study, in which data are simulated from a proportional hazard model and the length of follow-up varies across studies using realistic patterns of variability. Results show that the heterogeneity increases with the variability of length of follow-up for OR and RR, but not for the ratio of the logarithms of survival probability. A published meta-analysis is used to illustrate the method.
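A small worked example makes the point: under proportional hazards, S1(t) = S0(t)^HR, so the ratio log S1(t) / log S0(t) equals the hazard ratio at every follow-up time, while the OR and RR for the event drift as follow-up lengthens. The hazard ratio and follow-up survival values below are arbitrary illustrative assumptions.

```python
import numpy as np

hr = 0.7                                      # assumed true hazard ratio (proportional hazards)
s0 = np.array([0.95, 0.80, 0.60, 0.40])       # control survival at increasing lengths of follow-up
s1 = s0 ** hr                                 # treated survival under proportional hazards

rr = (1 - s1) / (1 - s0)                                   # relative risk of the event
odds = lambda p: p / (1 - p)
or_ = odds(1 - s1) / odds(1 - s0)                          # odds ratio of the event
log_surv_ratio = np.log(s1) / np.log(s0)                   # the ratio-of-log-survival measure

print("RR by follow-up length :", np.round(rr, 3))               # changes with follow-up
print("OR by follow-up length :", np.round(or_, 3))              # changes with follow-up
print("log-survival ratio     :", np.round(log_surv_ratio, 3))   # constant, equal to 0.7
```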

Journal ArticleDOI
TL;DR: A Bayesian Poisson mixed linear model is proposed in order to describe the observed cases of influenza-like illness for every sentinel and week of surveillance, based on information from sentinel networks.
Abstract: The threat of pandemics has made influenza surveillance systems a priority in epidemiology services around the world. The emergence of A-H1N1 influenza has required accurate surveillance systems in order to undertake specific actions only when and where they are necessary. In that sense, the main goal of this article is to describe a novel methodology for monitoring the geographical distribution of the incidence of influenza-like illness, as a proxy for influenza, based on information from sentinel networks. A Bayesian Poisson mixed linear model is proposed in order to describe the observed cases of influenza-like illness for every sentinel and week of surveillance. This model includes a spatio-temporal random effect that shares information in space by means of a kernel convolution process and in time by means of a first order autoregressive process. The extrapolation of this term to sites where information on incidence is not available will allow us to visualise the geographical distribution of the disease for every week of study. The following article shows the performance of this model in the Comunitat Valenciana's Sentinel Network (one of the 17 autonomous regions of Spain) as a real case study of this methodology.

Journal ArticleDOI
TL;DR: A Bayesian hierarchical model is presented to evaluate the effect of long-range and local range PM10 during air pollution episodes on hospital admissions for cardio-respiratory diseases in Greater London.
Abstract: In this paper, we present a Bayesian hierarchical model to evaluate the effect of long-range and local range PM10 during air pollution episodes on hospital admissions for cardio-respiratory diseases in Greater London. These episodes in 2003 are matched with the same periods during the previous year, used as a control. A baseline dose-response function is estimated for the controls and carried forward in the episodes, which are characterised by an additional component that estimates their specific effect on the health outcome.

Journal ArticleDOI
TL;DR: This article defines a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and shows that its limiting distribution is a scaled chi-square distribution, and proposes two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity.
Abstract: For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
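The quantity of interest is the sensitivity at the cut-off that yields a chosen specificity. A non-parametric point estimate with a simple percentile bootstrap interval (for comparison with, not a replacement for, the empirical likelihood intervals of the article) can be coded as below on simulated test scores; the distributions and specificity level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
non_diseased = rng.normal(0.0, 1.0, 200)             # simulated test scores (assumed distributions)
diseased = rng.normal(1.2, 1.0, 150)

def sens_at_spec(neg, pos, spec=0.90):
    """Sensitivity at the cut-off giving the required specificity (higher score = more diseased)."""
    cut = np.quantile(neg, spec)                      # cut-off: spec-quantile of the non-diseased scores
    return np.mean(pos > cut)

est = sens_at_spec(non_diseased, diseased)

# Percentile bootstrap interval, resampling each group separately
boot = [sens_at_spec(rng.choice(non_diseased, non_diseased.size, replace=True),
                     rng.choice(diseased, diseased.size, replace=True))
        for _ in range(2000)]
print("sensitivity at 90% specificity:", est,
      "95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]))
```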

Journal ArticleDOI
TL;DR: The problem of detecting all effective and superior combinations in a factorial drug efficacy trial is stated in terms of two hypothesis families, full and reduced, which allows individual detection of the efficacy and superiority of combinations resulting in more detailed conclusions.
Abstract: The problem of detecting all effective and superior combinations in a factorial drug efficacy trial is stated in terms of two hypothesis families, full and reduced. The reduced problem formulation allows identification of all simultaneously effective and superior combinations. The full formulation allows individual detection of the efficacy and superiority of combinations resulting in more detailed conclusions. While the full hypothesis family deals with simpler parameters, the true mean effect differences, it has three times as many hypotheses as the reduced family. The reduced family is comprised of hypotheses concerning a gain-parameter, which is defined as the minimum of the true mean differences and leading to a fairly complicated structure. For each problem formulation, Holm’s, Hochberg’s and two resampling approaches are studied with respect to strong control of overall error rate and several power measures. Holm’s and Hochberg’s approaches are recommended for the reduced family, while the step-dow...
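Holm's step-down and Hochberg's step-up procedures, two of the four approaches compared, can be applied to any family of p-values in a few lines; the p-values below are arbitrary illustrative numbers, not results from the trial setting of the article.

```python
import numpy as np

def holm(pvals, alpha=0.05):
    """Holm step-down: reject the ordered p-values while p_(k) <= alpha / (m - k + 1)."""
    order = np.argsort(pvals)
    m = len(pvals)
    reject = np.zeros(m, dtype=bool)
    for k, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - k):
            reject[idx] = True
        else:
            break                                   # stop at the first non-rejection
    return reject

def hochberg(pvals, alpha=0.05):
    """Hochberg step-up: find the largest k with p_(k) <= alpha / (m - k + 1); reject all smaller."""
    order = np.argsort(pvals)
    m = len(pvals)
    reject = np.zeros(m, dtype=bool)
    for k in range(m - 1, -1, -1):                  # scan from the largest p-value downwards
        if pvals[order[k]] <= alpha / (m - k):
            reject[order[:k + 1]] = True
            break
    return reject

p = np.array([0.011, 0.020, 0.035, 0.045])
print("Holm    :", holm(p))                         # rejects only the smallest p-value here
print("Hochberg:", hochberg(p))                     # rejects all four in this example
```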