Journal ArticleDOI

Estimating Sensitivity and Sojourn Time in Screening for Colorectal Cancer: A Comparison of Statistical Approaches

15 Sep 1998 - American Journal of Epidemiology (Oxford University Press) - Vol. 148, Iss. 6, pp. 609-619
TL;DR: Various analytic strategies for fitting exponential models to data from a screening program for colorectal cancer conducted in Calvados, France, between 1991 and 1994 are considered, yielding estimates of mean sojourn time and sensitivity.
Abstract: The effectiveness of cancer screening depends crucially on two elements: the sojourn time (that is, the duration of the preclinical screen-detectable period) and the sensitivity of the screening test. Previous literature on methods of estimating mean sojourn time and sensitivity has largely concentrated on breast cancer screening. Screening for colorectal cancer has been shown to be effective in randomized trials, but there is little literature on the estimation of sojourn time and sensitivity. It would be useful to know whether methods commonly used in breast cancer screening can also be applied to colorectal cancer screening. In this paper, the authors consider various analytic strategies for fitting exponential models to data from a screening program for colorectal cancer conducted in Calvados, France, between 1991 and 1994. The models yielded estimates of mean sojourn time of approximately 2 years for 45- to 54-year-olds, 3 years for 55- to 64-year-olds, and 6 years for 65- to 74-year-olds. Estimates of sensitivity were approximately 75%, 50%, and 40% for persons aged 45-54, 55-64, and 65-74 years, respectively. There is room for improvement in all models in terms of goodness of fit, particularly for the first year after screening, but results from randomized trials indicate that the sensitivity estimates are roughly correct. Am J Epidemiol 1998;148:609-19.
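
A hedged illustration of the exponential model referred to above: under the standard three-state model with exponentially distributed sojourn time (mean 1/lambda) and test sensitivity S, the incidence of interval cancers at time t after a negative screen is approximately J(1 - S exp(-lambda t)), where J is the underlying clinical incidence, and the expected yield of a first (prevalence) screen in steady state is approximately S J / lambda per person. The Python sketch below fits S and lambda by maximum likelihood to placeholder counts; all inputs are illustrative assumptions, not the Calvados data or the authors' exact analysis.

    # Sketch of the classical exponential sojourn-time / sensitivity model
    # (a simplified illustration, not the authors' exact analysis or data).
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import poisson

    # Illustrative placeholder inputs (NOT the Calvados counts).
    t_mid = np.array([0.5, 1.5, 2.5])               # midpoints of years 1-3 after a negative screen
    pyears = np.array([20000.0, 18000.0, 15000.0])  # person-years at risk in each band
    obs_interval = np.array([7, 11, 11])            # interval cancers observed in each band
    J = 0.0010                                      # assumed background incidence (cases per person-year)
    n_screened, obs_screen = 25000, 45              # persons screened and screen-detected cancers

    def neg_loglik(params):
        S, lam = params
        # expected interval cancers at time t after a negative screen: J * (1 - S * exp(-lam * t))
        mu_interval = J * pyears * (1.0 - S * np.exp(-lam * t_mid))
        # expected yield of a first (prevalence) screen in steady state: S * J / lam per person
        mu_screen = n_screened * S * J / lam
        return -(poisson.logpmf(obs_interval, mu_interval).sum()
                 + poisson.logpmf(obs_screen, mu_screen))

    fit = minimize(neg_loglik, x0=[0.6, 0.5], bounds=[(0.01, 1.0), (0.01, 5.0)])
    S_hat, lam_hat = fit.x
    print(f"sensitivity ~ {S_hat:.2f}, mean sojourn time ~ {1 / lam_hat:.1f} years")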

Citations
Journal ArticleDOI
TL;DR: In this paper, the mean preclinical detectable phase (PCDP) of open-angle glaucoma was estimated using a prevalence/incidence calculation and a Markov chain Monte Carlo (MCMC) model.
Abstract: Importance A 50% reduction of glaucoma-related blindness has previously been demonstrated in a population that was screened for open-angle glaucoma. Ongoing screening trials of high-risk populations and forthcoming low-cost screening methods suggest that such screening may become more common in the future. One would then need to estimate a key component of the natural history of chronic disease, the mean preclinical detectable phase (PCDP). Knowledge of the PCDP is essential for the planning and early evaluation of screening programs and has been estimated for several types of cancer that are screened for. Objective To estimate the mean PCDP for open-angle glaucoma. Design, Setting, and Participants A large population-based screening for open-angle glaucoma was conducted from October 1992 to January 1997 in Malmö, Sweden, including 32 918 participants aged 57 to 77 years. A retrospective medical record review was conducted to assess the prevalence of newly detected cases at the screening, incidence of new cases after the screening, and the expected clinical incidence, ie, the number of new glaucoma cases expected to be detected without a screening. The latter was derived from incident cases in the screened age cohorts before the screening started and from older cohorts not invited to the screening. A total of 2029 patients were included in the current study. Data were analyzed from March 2020 to October 2021. Main Outcomes and Measures The length of the mean PCDP was calculated by 2 different methods: first, by dividing the prevalence of screen-detected glaucoma with the clinical incidence, assuming that the screening sensitivity was 100% and second, by using a Markov chain Monte Carlo (MCMC) model simulation that simultaneously derived both the length of the mean PCDP and the sensitivity of the screening. Results Of 2029 included patients, 1352 (66.6%) were female. Of 1420 screened patients, the mean age at screening was 67.4 years (95% CI, 67.2-67.7). The mean length of the PCDP of the whole study population was 10.7 years (95% CI, 8.7-13.0) by the prevalence/incidence method and 10.1 years (95% credible interval, 8.9-11.2) by the MCMC method. Conclusions and Relevance The mean PCDP was similar for both methods of analysis, approximately 10 years. A mean PCDP of 10 years found in the current study allows for screening with reasonably long intervals, eg, 5 years.

1 citations
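
The prevalence/incidence method used in the study above follows from the steady-state relation prevalence = incidence x mean duration: assuming 100% screening sensitivity, mean PCDP is roughly the prevalence of screen-detected disease divided by the expected clinical incidence. A minimal sketch with illustrative inputs (not the Malmö figures):

    # Prevalence/incidence estimate of the mean preclinical detectable phase (PCDP).
    # Illustrative inputs only; not the Malmo study data.
    n_screened = 30000            # persons attending the screen
    screen_detected = 600         # previously undiagnosed cases found at the screen
    clinical_incidence = 0.002    # expected clinical cases per person-year without screening

    prevalence = screen_detected / n_screened        # 0.02
    mean_pcdp = prevalence / clinical_incidence      # about 10 years
    print(f"estimated mean PCDP ~ {mean_pcdp:.1f} years (assumes 100% screening sensitivity)")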

Dissertation
01 Sep 2009
TL;DR: This dissertation simulates relative risks for cancer screening under lognormal preclinical duration distributions, parameterized by combinations of mode and standard deviation, and compares them with observed relative risks estimated by logistic regression across four screening study designs.
Abstract: University of Minnesota Ph.D. dissertation. September 2009. Major: Environmental Health. Advisor: Dr. Timothy R. Church. 1 computer file (PDF); xiii, 139 pages, appendix pages 123-139.

1 citations


Cites background or methods from "Estimating Sensitivity and Sojourn ..."

  • ...Mean and Median simulated relative risk (RR) values at selected preclinical duration lognormal distribution parameterization combinations for the mode and standard deviation (StDev) of (1,1), (5,3), and (10,5) across the four selected study designs ....

  • ...97 estimated RR_observed and 12 RR_simulated due to using several different mode years (1, 3, 5, 10) and standard deviation years (1, 3, 5) for the lognormal preclinical duration distribution....

  • ...The position on the x-axis represents the observed RR estimated using a logistic regression model, one calculated for each of the four study designs where vertical range represents the 12 different simulated RRs (obtained through combination of mode (1,3,5,10) and standard deviation (1,3,5) year model parameterizations for the preclinical duration distribution) for that study design....

  • ...The 12 relative risks were simulated using a combination of four preclinical duration distribution parameters for the mode (1,3,5,10) and three standard deviations (sd) (1,3,5)....

  • ...Mean and Median simulated relative risk (RR) values at selected preclinical duration lognormal distribution parameterization combinations for the mode and standard deviation (StDev) of (1,1), (5,3), and (10,5) ....

Journal ArticleDOI
TL;DR: In this paper, the authors formulate a mathematical relationship for how empirical sensitivity varies with the screening interval and the mean preclinical sojourn time, and identify conditions under which empirical sensitivity exceeds or falls short of true sensitivity.
Abstract: The true sensitivity of a cancer screening test, defined as the frequency with which the test returns a positive result if the cancer is present, is a key indicator of diagnostic performance. Given the challenges of directly assessing test sensitivity in a prospective screening program, proxy measures for true sensitivity are frequently reported. We call one such proxy empirical sensitivity, as it is given by the observed ratio of screen-detected cancers to the sum of screen-detected and interval cancers. In the setting of the canonical three-state Markov model for progression from preclinical onset to clinical diagnosis, we formulate a mathematical relationship for how empirical sensitivity varies with the screening interval and the mean preclinical sojourn time and identify conditions under which empirical sensitivity exceeds or falls short of true sensitivity. In particular, when the inter-screening interval is short relative to the mean sojourn time, empirical sensitivity tends to exceed true sensitivity, unless true sensitivity is high. The Breast Cancer Surveillance Consortium (BCSC) has reported an estimate of 0.87 for the empirical sensitivity of digital mammography. We show that this corresponds to a true sensitivity of 0.82 under a mean sojourn time of 3.6 years estimated based on breast cancer screening trials. However, the BCSC estimate of empirical sensitivity corresponds to even lower true sensitivity under more contemporary, longer estimates of mean sojourn time. Consistently applied nomenclature that distinguishes empirical sensitivity from true sensitivity is needed to ensure that published estimates of sensitivity from prospective screening studies are properly interpreted.

1 citations
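
The distinction drawn above can be sketched under the same steady-state exponential model: per unit preclinical onset rate, the expected screen-detected yield is S x MST, and the expected interval cancers over an interval Delta are Delta - S x MST x (1 - exp(-Delta/MST)), so empirical sensitivity is the first quantity divided by their sum. The snippet below uses simplified first-screen assumptions and assumed parameter values; it illustrates the qualitative behaviour (a short interval relative to the sojourn time inflates empirical sensitivity) rather than reproducing the paper's exact formula or the BCSC estimates.

    # Steady-state sketch: "empirical" sensitivity = screen-detected / (screen-detected + interval cancers)
    # under the three-state exponential model. Simplified assumptions; not the paper's exact derivation.
    import numpy as np

    def empirical_sensitivity(true_sens, mst, interval):
        """true_sens: per-screen sensitivity; mst: mean sojourn time (years); interval: screening interval (years)."""
        lam = 1.0 / mst
        screen_detected = true_sens * mst                    # per unit preclinical onset rate
        interval_cancers = interval - true_sens * mst * (1.0 - np.exp(-lam * interval))
        return screen_detected / (screen_detected + interval_cancers)

    for delta in (1.0, 2.0, 3.0):
        print(f"interval {delta:.0f} y: empirical sensitivity {empirical_sensitivity(0.82, 3.6, delta):.2f}")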

01 Jan 2008
TL;DR: Evidence is found that the transmissibility of influenza varied with time over the period September 1918 to November 1918 in these two locations, with an increasing then decreasing transmissibility in Baltimore and a strictly decreasing transmissibility in Newark.
Abstract: Mathematical modelling is an effective tool for studying infectious disease epidemics. Stochastic models have been increasingly used in recent studies due to their ability to quantify uncertainty. Chapter 2 of my dissertation discusses the building of stochastic compartment models to analyze time-series infectious disease data, and the application of Bayesian methods to estimate the parameters. With an emphasis on modeling disease transmissibility, population-level time series of influenza morbidity and mortality from the 1918 pandemic in Baltimore, MD and Newark, NJ are analyzed. We find evidence that the transmissibility of influenza varies with time over the period September 1918 to November 1918 in these two locations, with an increasing then decreasing transmissibility in Baltimore and a strictly decreasing transmissibility in Newark. In contrast to traditional population-level models, simulation-based computational models that feature the "micro" structure of the population have been developed to capture fine-grained disease dynamics and control strategies. These "agent-based models" (ABMs) generally require a large number of input parameters, with empirical data insufficient to provide estimates for all of them (in statistical terms, nonidentifiability). The availability of prior information at different model levels makes statistical inference a challenging task, for computational infectious disease models and for more general ABMs. Prior information at various model levels is combined and used to update information on the input parameters. The standard Bayesian approach to this updating induces changes in the ABM stochastic structure. Chapter 3 of my dissertation reports on an Optimal Constrained Bayesian Updating method to address these issues, including retaining the original ABM structure. Subject to retaining the ABM structure, the approach produces an updated distribution on inputs as close as possible to the standard Bayesian solution. A disease natural history estimation example is presented in Chapter 4 to illustrate the optimal constrained Bayesian updating method. Chapter 5 summarizes and discusses future work.

1 citations
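
A minimal sketch of the kind of stochastic compartment model described in Chapter 2 above, here a discrete-time chain-binomial SIR with a time-varying transmission rate; the parameter values and the declining-transmissibility shape are assumptions for illustration, not the dissertation's fitted 1918 influenza model.

    # Discrete-time stochastic SIR (chain binomial) with time-varying transmissibility.
    import numpy as np

    rng = np.random.default_rng(0)
    N, I, R = 100_000, 10, 0
    S = N - I - R
    gamma = 0.25                              # daily recovery probability (assumed)
    days = 90
    beta = np.linspace(0.45, 0.15, days)      # transmissibility declining over time (assumed shape)

    for t in range(days):
        p_inf = 1.0 - np.exp(-beta[t] * I / N)       # per-susceptible daily infection probability
        new_inf = rng.binomial(S, p_inf)
        new_rec = rng.binomial(I, gamma)
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

    print(f"final size: {R} recovered of {N}")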

Journal ArticleDOI
TL;DR: In this paper, the authors analyzed data from the English Faecal Immunochemical Testing (FIT) pilot, comprising 27,238 individuals aged 59-75, screened for colorectal cancer (CRC) using FIT.
Abstract: The NHS Bowel Cancer Screening Programme (BCSP) faces endoscopy capacity challenges from the COVID-19 pandemic and plans to lower the screening starting age. This may necessitate modifying the interscreening interval or threshold. We analysed data from the English Faecal Immunochemical Testing (FIT) pilot, comprising 27,238 individuals aged 59-75, screened for colorectal cancer (CRC) using FIT. We estimated screening sensitivity to CRC, adenomas, advanced adenomas (AA) and mean sojourn time of each pathology by faecal haemoglobin (f-Hb) thresholds, then predicted the detection of these abnormalities by interscreening interval and f-Hb threshold. Current 2-yearly screening with an f-Hb threshold of 120 μg/g was estimated to generate 16,092 colonoscopies, prevent 186 CRCs, detect 1142 CRCs, 7086 adenomas and 4259 AAs per 100,000 screened over 15 years. A higher threshold at 180 μg/g would reduce required colonoscopies to 11,500, prevent 131 CRCs, detect 1077 CRCs, 4961 adenomas and 3184 AAs. A longer interscreening interval of 3 years would reduce required colonoscopies to 10,283, prevent 126 and detect 909 CRCs, 4796 adenomas and 2986 AAs. Increasing the f-Hb threshold was estimated to be more efficient than increasing the interscreening interval regarding overall colonoscopies per screen-benefited cancer. Increasing the interval was more efficient regarding colonoscopies per cancer prevented.

1 citations

References
Journal ArticleDOI
TL;DR: The focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, and the results are derived as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations.
Abstract: The Gibbs sampler, the algorithm of Metropolis and similar iterative simulation methods are potentially very helpful for summarizing multivariate distributions. Used naively, however, iterative simulation can give misleading answers. Our methods are simple and generally applicable to the output of any iterative simulation; they are designed for researchers primarily interested in the science underlying the data and models they are analyzing, rather than for researchers interested in the probability theory underlying the iterative simulations themselves. Our recommended strategy is to use several independent sequences, with starting points sampled from an overdispersed distribution. At each step of the iterative simulation, we obtain, for each univariate estimand of interest, a distributional estimate and an estimate of how much sharper the distributional estimate might become if the simulations were continued indefinitely. Because our focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, we derive our results as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations. The methods are illustrated on a random-effects mixture model applied to experimental measurements of reaction times of normal and schizophrenic patients.

13,884 citations
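
The diagnostic described above (the potential scale reduction factor, now usually written R-hat) compares between-chain and within-chain variance across several independent sequences started from overdispersed points. A minimal sketch of the original scalar version, without later refinements such as rank normalisation:

    # Gelman-Rubin potential scale reduction factor (R-hat) for one scalar estimand.
    import numpy as np

    def gelman_rubin(chains):
        """chains: array of shape (m, n) holding m independent simulated sequences of length n."""
        chains = np.asarray(chains, dtype=float)
        m, n = chains.shape
        W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
        B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance (scaled by n)
        var_plus = (n - 1) / n * W + B / n        # pooled estimate of the posterior variance
        return np.sqrt(var_plus / W)

    # Well-mixed chains should give a value close to 1.
    rng = np.random.default_rng(0)
    print(gelman_rubin(rng.normal(size=(4, 1000))))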

Journal ArticleDOI
TL;DR: In this paper, three sampling-based approaches, namely stochastic substitution, the Gibbs sampler, and the sampling-importance-resampling algorithm, are compared and contrasted in relation to various joint probability structures frequently encountered in applications.
Abstract: Stochastic substitution, the Gibbs sampler, and the sampling-importance-resampling algorithm can be viewed as three alternative sampling- (or Monte Carlo-) based approaches to the calculation of numerical estimates of marginal probability distributions. The three approaches will be reviewed, compared, and contrasted in relation to various joint probability structures frequently encountered in applications. In particular, the relevance of the approaches to calculating Bayesian posterior densities for a variety of structured models will be discussed and illustrated.

6,294 citations
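
As a concrete illustration of the Gibbs sampler compared in this reference, here is the standard textbook toy example (not taken from the paper): sampling a bivariate normal with correlation rho by alternately drawing each coordinate from its full conditional, which is itself normal.

    # Toy Gibbs sampler for a bivariate normal (zero means, unit variances, correlation rho).
    # Full conditionals: x | y ~ N(rho * y, 1 - rho^2), and symmetrically for y | x.
    import numpy as np

    rng = np.random.default_rng(0)
    rho, n_iter = 0.8, 10_000
    sd = np.sqrt(1.0 - rho ** 2)

    x, y = 0.0, 0.0
    samples = np.empty((n_iter, 2))
    for i in range(n_iter):
        x = rng.normal(rho * y, sd)    # draw x from its full conditional given y
        y = rng.normal(rho * x, sd)    # draw y from its full conditional given x
        samples[i] = x, y

    print(np.corrcoef(samples[1000:].T)[0, 1])   # should be close to rho after burn-in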

Book
01 Jan 1996
TL;DR: Mathematica has defined the state of the art in technical computing for over a decade, and has become a standard in many of the world's leading companies and universities as discussed by the authors.
Abstract: From the Publisher: Mathematica has defined the state of the art in technical computing for over a decade, and has become a standard in many of the world's leading companies and universities. From simple calculator operations to large-scale programming and the preparation of interactive documents, Mathematica is the tool of choice.

3,566 citations
