Journal ArticleDOI

Estimating Sensitivity and Sojourn Time in Screening for Colorectal Cancer: A Comparison of Statistical Approaches

15 Sep 1998 - American Journal of Epidemiology (Oxford University Press) - Vol. 148, Iss. 6, pp. 609-619
TL;DR: Various analytic strategies for fitting exponential models to data from a screening program for colorectal cancer conducted in Calvados, France, between 1991 and 1994 are considered, yielding estimates of mean sojourn time and sensitivity.
Abstract: The effectiveness of cancer screening depends crucially on two elements: the sojourn time (that is, the duration of the preclinical screen-detectable period) and the sensitivity of the screening test. Previous literature on methods of estimating mean sojourn time and sensitivity has largely concentrated on breast cancer screening. Screening for colorectal cancer has been shown to be effective in randomized trials, but there is little literature on the estimation of sojourn time and sensitivity. It is therefore of interest to determine whether methods commonly used in breast cancer screening can also be applied to colorectal cancer screening. In this paper, the authors consider various analytic strategies for fitting exponential models to data from a screening program for colorectal cancer conducted in Calvados, France, between 1991 and 1994. The models yielded estimates of mean sojourn time of approximately 2 years for 45- to 54-year-olds, 3 years for 55- to 64-year-olds, and 6 years for 65- to 74-year-olds. Estimates of sensitivity were approximately 75%, 50%, and 40% for persons aged 45-54, 55-64, and 65-74 years, respectively. There is room for improvement in all models in terms of goodness of fit, particularly for the first year after screening, but results from randomized trials indicate that the sensitivity estimates are roughly correct. Am J Epidemiol 1998;148:609-19.
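For intuition, here is a minimal sketch of the standard exponential sojourn-time model that this kind of analysis rests on (a generic formulation, not the authors' exact likelihood). Preclinical duration is taken as exponential with mean sojourn time MST, and Day's formula, 1 - sensitivity * exp(-t/MST), gives the expected interval-cancer incidence at time t after a negative screen as a proportion of background clinical incidence; the 1- and 2-year evaluation points are illustrative choices.

```python
import numpy as np

# Sketch of the standard exponential sojourn-time model used in screening
# evaluation (not the authors' exact likelihood).  Preclinical duration is
# Exponential(lambda) with mean sojourn time MST = 1/lambda, and the test
# detects a preclinical case with probability `sens`.

def expected_interval_ratio(t_years, mst, sens):
    """Expected interval-cancer incidence at time t after a negative screen,
    as a proportion of background clinical incidence (Day's formula:
    1 - sens * exp(-t / MST)), assuming stable background incidence."""
    lam = 1.0 / mst
    return 1.0 - sens * np.exp(-lam * t_years)

# Point estimates reported in the abstract, by age group: (MST years, sens).
groups = {"45-54": (2.0, 0.75), "55-64": (3.0, 0.50), "65-74": (6.0, 0.40)}
for ages, (mst, sens) in groups.items():
    r1, r2 = (expected_interval_ratio(t, mst, sens) for t in (1.0, 2.0))
    print(f"{ages}: MST={mst} y, sens={sens:.0%} -> "
          f"interval/background incidence at 1 y: {r1:.2f}, at 2 y: {r2:.2f}")
```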


Citations
Journal ArticleDOI
TL;DR: Findings support the hypothesis that colonoscopic removal of adenomatous polyps prevents death from colorectal cancer.
Abstract: BACKGROUND In the National Polyp Study (NPS), colorectal cancer was prevented by colonoscopic removal of adenomatous polyps. We evaluated the long-term effect of colonoscopic polypectomy in a study on mortality from colorectal cancer. METHODS We included in this analysis all patients prospectively referred for initial colonoscopy (between 1980 and 1990) at NPS clinical centers who had polyps (adenomas and nonadenomas). The National Death Index was used to identify deaths and to determine the cause of death; follow-up time was as long as 23 years. Mortality from colorectal cancer among patients with adenomas removed was compared with the expected incidence-based mortality from colorectal cancer in the general population, as estimated from the Surveillance Epidemiology and End Results (SEER) Program, and with the observed mortality from colorectal cancer among patients with nonadenomatous polyps (internal control group). RESULTS Among 2602 patients who had adenomas removed during participation in the study, after a median of 15.8 years, 1246 patients had died from any cause and 12 had died from colorectal cancer. Given an estimated 25.4 expected deaths from colorectal cancer in the general population, the standardized incidence-based mortality ratio was 0.47 (95% confidence interval [CI], 0.26 to 0.80) with colonoscopic polypectomy, suggesting a 53% reduction in mortality. Mortality from colorectal cancer was similar among patients with adenomas and those with nonadenomatous polyps during the first 10 years after polypectomy (relative risk, 1.2; 95% CI, 0.1 to 10.6). CONCLUSIONS These findings support the hypothesis that colonoscopic removal of adenomatous polyps prevents death from colorectal cancer. (Funded by the National Cancer Institute and others.)
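As a quick sanity check on the arithmetic above (a back-of-envelope sketch, not the paper's incidence-based method, so the interval need not match the published one exactly):

```python
from scipy.stats import chi2

# Back-of-envelope check of the standardized mortality ratio reported above.
# An SMR is observed deaths / expected deaths; an exact Poisson confidence
# interval for the observed count yields an approximate CI for the ratio.
observed, expected = 12, 25.4
smr = observed / expected                            # 12 / 25.4 ~= 0.47
lo = chi2.ppf(0.025, 2 * observed) / 2 / expected    # exact Poisson lower bound
hi = chi2.ppf(0.975, 2 * (observed + 1)) / 2 / expected  # exact upper bound
print(f"SMR = {smr:.2f} (exact Poisson 95% CI {lo:.2f}-{hi:.2f})")
print(f"Implied mortality reduction: {1 - smr:.0%}")  # ~53%
```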

2,381 citations

Journal ArticleDOI
06 Jun 2001 - JAMA
TL;DR: A framework to guide individualized cancer screening decisions in older patients may be more useful to the practicing clinician than age guidelines because it anchors decisions through quantitative estimates of life expectancy, risk of cancer death, and screening outcomes based on published data.
Abstract: Considerable uncertainty exists about the use of cancer screening tests in older people, as illustrated by the different age cutoffs recommended by various guideline panels. We suggest that a framework to guide individualized cancer screening decisions in older patients may be more useful to the practicing clinician than age guidelines. Like many medical decisions, cancer screening decisions require weighing quantitative information, such as risk of cancer death and likelihood of beneficial and adverse screening outcomes, as well as qualitative factors, such as individual patients' values and preferences. Our framework first anchors decisions through quantitative estimates of life expectancy, risk of cancer death, and screening outcomes based on published data. Potential benefits of screening are presented as the number needed to screen to prevent 1 cancer-specific death, based on the estimated life expectancy during which a patient will be screened. Estimates reveal substantial variability in the likelihood of benefit for patients of similar ages with varying life expectancies. In fact, patients with life expectancies of less than 5 years are unlikely to derive any survival benefit from cancer screening. We also consider the likelihood of potential harm from screening according to patient factors and test characteristics. Some of the greatest harms of screening occur by detecting cancers that would never have become clinically significant. This becomes more likely as life expectancy decreases. Finally, since many cancer screening decisions in older adults cannot be answered solely by quantitative estimates of benefits and harms, considering the estimated outcomes according to the patient's own values and preferences is the final step for making informed screening decisions.
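The framework's core arithmetic can be sketched in a few lines. The mortality rate, life expectancies, and relative risk reduction below are invented for illustration; only the structure, NNS = 1 / absolute risk reduction accumulated over remaining life expectancy, comes from the abstract.

```python
# Hypothetical illustration of the framework's arithmetic (numbers invented
# for the example, not taken from the paper).  The number needed to screen
# (NNS) to prevent one cancer death is 1 / absolute risk reduction, where the
# absolute risk accrues only over the patient's remaining life expectancy.

def number_needed_to_screen(annual_cancer_mortality, life_expectancy_years,
                            relative_risk_reduction):
    baseline_risk = annual_cancer_mortality * life_expectancy_years
    absolute_risk_reduction = baseline_risk * relative_risk_reduction
    return 1.0 / absolute_risk_reduction

# Same age, same screening test, different life expectancies:
for le in (15, 10, 5):
    nns = number_needed_to_screen(annual_cancer_mortality=0.001,
                                  life_expectancy_years=le,
                                  relative_risk_reduction=0.20)
    print(f"Life expectancy {le:>2} y -> NNS ~ {nns:,.0f}")
```

As the sketch shows, halving life expectancy doubles the number needed to screen, which is why patients with life expectancies under about 5 years are unlikely to derive survival benefit.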

955 citations

Journal ArticleDOI
TL;DR: Screening data with tumor measurements can provide population-based estimates of tumor growth and screen test sensitivity directly linked to tumor size, and there is a large variation in breast cancer tumor growth, with faster growth among younger women.
Abstract: Knowledge of tumor growth is important in the planning and evaluation of screening programs, clinical trials, and epidemiological studies. Studies of tumor growth rates in humans are usually based on small and selected samples. In the present study, based on the Norwegian Breast Cancer Screening Program, tumor growth was estimated from a large population using a new estimation procedure/model. A likelihood-based estimating procedure was used, in which both tumor growth and the screen test sensitivity were modeled as continuously increasing functions of tumor size. The method was applied to cancer incidence and tumor measurement data from 395,188 women aged 50 to 69 years. Tumor growth varied considerably between subjects, with 5% of tumors taking less than 1.2 months to grow from 10 mm to 20 mm in diameter, and another 5% taking more than 6.3 years. The mean time a tumor needed to grow from 10 mm to 20 mm in diameter was estimated as 1.7 years, increasing with age. The screen test sensitivity was estimated to increase sharply with tumor size, rising from 26% at 5 mm to 91% at 10 mm. Compared with previously used Markov models for tumor progression, the applied model fit the data considerably better (85% increased predictive power) and provided estimates directly linked to tumor size. Screening data with tumor measurements can provide population-based estimates of tumor growth and screen test sensitivity directly linked to tumor size. There is large variation in breast cancer tumor growth, with faster growth among younger women.
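To make the size-dependent sensitivity idea concrete, here is a sketch assuming a logistic curve in tumor diameter (a common choice, not necessarily the paper's exact functional form), calibrated to the two points reported in the abstract: 26% at 5 mm and 91% at 10 mm.

```python
import numpy as np

# Hedged sketch: sensitivity as a continuously increasing function of tumor
# size.  We assume a logistic curve in diameter and solve for the intercept
# and slope that pass through the two reported points.
def logit(p):
    return np.log(p / (1 - p))

d1, p1, d2, p2 = 5.0, 0.26, 10.0, 0.91
b1 = (logit(p2) - logit(p1)) / (d2 - d1)   # slope per mm of diameter
b0 = logit(p1) - b1 * d1                   # intercept

def sensitivity(diameter_mm):
    return 1 / (1 + np.exp(-(b0 + b1 * diameter_mm)))

for d in (5, 7.5, 10, 15, 20):
    print(f"{d:>4} mm -> estimated sensitivity {sensitivity(d):.0%}")
```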

179 citations


Cites background, methods, or results from "Estimating Sensitivity and Sojourn ..."

  • ...To make the results comparable with estimates provided in previous studies [5,12-15], all cases of ductal carcinoma in situ (DCIS) – a noninvasive form of breast tumor – were included....

    [...]

  • ...time unit (month) without screening – to simplify calculations, the rate is assumed constant over time as in the earlier used Markov model [5,12], probably giving a good approximation in the limited time span used in the estimation – and f_gs is the probability that a clinical...

    [...]

  • ...Overall, sensitivity investigations indicate that the new model is probably less vulnerable to several potential biases than the Markov model [5,12], possibly as a result of more utilized data....

    [...]

  • ...Generally, this model combines many of the advantages of the large population-based Markov methods [5,12], with more specific tumor growth estimates found in clinical studies of overlooked cancers....

    [...]

  • ...These studies [1] are usually analyzed using Markov models [5,6], where the mean time for a breast cancer tumor to grow from screening-detectable size to clinical detection without screening – the so-called mean sojourn time – and the STS are estimated....

    [...]

Journal ArticleDOI
TL;DR: Lead-time bias is the main determinant of the short-term benefit provided by surveillance for HCC, but this benefit becomes real over the long term, confirming the clinical utility of an earlier diagnosis of HCC.
Abstract: Background & Aims: Lead-time is the interval by which diagnosis is advanced by screening/surveillance relative to the symptomatic detection of a disease. Any screening program, including surveillance for hepatocellular carcinoma (HCC), is subject to lead-time bias, and data on lead-time for HCC are lacking. The aims of the present study were to calculate lead-time and to assess its impact on the benefit obtainable from the surveillance of cirrhotic patients. Methods: One thousand three hundred and eighty Child–Pugh class A/B patients from the ITA.LI.CA database, in whom HCC was detected during semiannual surveillance (n=850), annual surveillance (n=234), or at symptomatic presentation (n=296), were selected. Lead-time was estimated by means of appropriate formulas and Monte Carlo simulation, including 1000 patients for each arm. Results: The 5-year overall survival after HCC diagnosis was 32.7% in semiannually surveilled patients, 25.2% in annually surveilled patients, and 12.2% in symptomatic patients. Conclusions: Lead-time bias is the main determinant of the short-term benefit provided by surveillance for HCC, but this benefit becomes real over the long term, confirming the clinical utility of an earlier diagnosis of HCC.

109 citations


Cites methods from "Estimating Sensitivity and Sojourn ..."

  • ...This study was aimed at accurately estimating the lead time affecting semiannual and annual surveillance for HCC through a rigorous mathematical model already proposed in other cancer screening programs [15-16]....

    [...]

  • ...This distribution was used to calculate the transition rate to symptomatic disease and lead time, using the appropriate formula (equation 4) [15,16,19]....

    [...]

Journal ArticleDOI
TL;DR: Dietary patterns that reflect a Western way of life are associated with a higher risk of colorectal tumors.
Abstract: Little is known about the dietary patterns associated with colorectal tumors along the adenoma-carcinoma sequence. Scores for dietary patterns were obtained by factor analysis in women from the French cohort of the European Prospective Investigation into Cancer and Nutrition (1993-2000). Their association with colorectal tumors was investigated in 516 adenoma cases (175 high-risk adenomas) and 4,804 polyp-free women and in 172 colorectal cancer cases and 67,312 cancer-free women. The authors identified four dietary patterns: "healthy" (vegetables, fruit, yogurt, sea products, and olive oil); "Western" (potatoes, pizzas and pies, sandwiches, sweets, cakes, cheese, cereal products, processed meat, eggs, and butter); "drinker" (sandwiches, snacks, processed meat, and alcoholic beverages); and "meat eaters" (meat, poultry, and margarine). For quartile 4 versus quartile 1, an increased risk of adenoma was observed with high scores of the Western pattern (multivariate relative risk (RR) = 1.39, 95% confidence interval: 1.00, 1.94; p(trend) = 0.03) and the drinker pattern (RR = 1.42, 95% confidence interval: 1.10, 1.83; p(trend) = 0.01). The meat-eaters pattern was positively associated with colorectal cancer risk (for quartile 4 vs. quartile 1: RR = 1.58, 95% confidence interval: 0.98, 2.53; p(trend) = 0.02). Dietary patterns that reflect a Western way of life are associated with a higher risk of colorectal tumors.
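For readers unfamiliar with the method, here is a sketch of how factor analysis turns food-group intakes into per-subject pattern scores (synthetic data; the food groups and loadings are invented, not the cohort's):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Illustrative sketch of deriving dietary-pattern scores by factor analysis.
# Rows are subjects, columns are standardized food-group intakes (synthetic).
rng = np.random.default_rng(0)
food_groups = ["vegetables", "fruit", "processed_meat", "alcohol", "sweets"]
intakes = rng.standard_normal((500, len(food_groups)))

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(intakes)      # per-subject pattern scores
loadings = fa.components_               # which foods define each pattern

for k, pattern in enumerate(loadings):
    top = sorted(zip(food_groups, pattern), key=lambda x: -abs(x[1]))[:3]
    print(f"Pattern {k + 1}: " + ", ".join(f"{f} ({w:+.2f})" for f, w in top))

# Subjects are then ranked into quartiles of each score, and quartile 4 is
# compared with quartile 1 in a relative-risk model, as in the abstract.
quartiles = np.digitize(scores[:, 0],
                        np.quantile(scores[:, 0], [0.25, 0.5, 0.75])) + 1
```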

106 citations

References
Journal ArticleDOI
TL;DR: The focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, and the results are derived as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations.
Abstract: The Gibbs sampler, the algorithm of Metropolis and similar iterative simulation methods are potentially very helpful for summarizing multivariate distributions. Used naively, however, iterative simulation can give misleading answers. Our methods are simple and generally applicable to the output of any iterative simulation; they are designed for researchers primarily interested in the science underlying the data and models they are analyzing, rather than for researchers interested in the probability theory underlying the iterative simulations themselves. Our recommended strategy is to use several independent sequences, with starting points sampled from an overdispersed distribution. At each step of the iterative simulation, we obtain, for each univariate estimand of interest, a distributional estimate and an estimate of how much sharper the distributional estimate might become if the simulations were continued indefinitely. Because our focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, we derive our results as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations. The methods are illustrated on a random-effects mixture model applied to experimental measurements of reaction times of normal and schizophrenic patients.
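A minimal sketch of the diagnostic described above, computing the potential scale reduction factor (R-hat) from several chains started at overdispersed points (simplified; it omits the degrees-of-freedom correction in the paper's version):

```python
import numpy as np

# Potential scale reduction factor: compare between-chain to within-chain
# variance; values near 1 indicate the chains have mixed.
def gelman_rubin(chains):
    """chains: array of shape (m_chains, n_samples) for one scalar estimand."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)          # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()    # mean within-chain variance
    var_plus = (n - 1) / n * W + B / n       # pooled variance estimate
    return np.sqrt(var_plus / W)             # ~1 at convergence, >1 otherwise

# Example: four chains from overdispersed starts that have not yet mixed.
rng = np.random.default_rng(0)
starts = [-10, -3, 3, 10]
chains = [s + np.cumsum(rng.normal(0, 0.1, 2000)) for s in starts]
print(f"R-hat (non-converged chains): {gelman_rubin(chains):.2f}")
```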

13,884 citations

Journal ArticleDOI
TL;DR: In this paper, three sampling-based approaches, namely stochastic substitution, the Gibbs sampler, and the sampling-importance-resampling algorithm, are compared and contrasted in relation to various joint probability structures frequently encountered in applications.
Abstract: Stochastic substitution, the Gibbs sampler, and the sampling-importance-resampling algorithm can be viewed as three alternative sampling- (or Monte Carlo-) based approaches to the calculation of numerical estimates of marginal probability distributions. The three approaches will be reviewed, compared, and contrasted in relation to various joint probability structures frequently encountered in applications. In particular, the relevance of the approaches to calculating Bayesian posterior densities for a variety of structured models will be discussed and illustrated.
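A toy example of one of the reviewed approaches: a Gibbs sampler for a standard bivariate normal with correlation rho, alternating draws from the exact full conditionals.

```python
import numpy as np

# Gibbs sampler for a standard bivariate normal with correlation rho: each
# step draws one coordinate from its exact conditional given the other,
# illustrating the sampling-based marginalization the paper reviews.
def gibbs_bivariate_normal(rho, n_samples=5000, seed=0):
    rng = np.random.default_rng(seed)
    cond_sd = np.sqrt(1 - rho**2)          # sd of x | y (and of y | x)
    x = y = 0.0
    samples = np.empty((n_samples, 2))
    for i in range(n_samples):
        x = rng.normal(rho * y, cond_sd)   # draw x | y
        y = rng.normal(rho * x, cond_sd)   # draw y | x
        samples[i] = x, y
    return samples

draws = gibbs_bivariate_normal(rho=0.8)
print("sample correlation:", np.corrcoef(draws.T)[0, 1].round(2))  # ~0.8
```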

6,294 citations


Book
01 Jan 1996
TL;DR: Mathematica has defined the state of the art in technical computing for over a decade, and has become a standard in many of the world's leading companies and universities as discussed by the authors.
Abstract: From the Publisher: Mathematica has defined the state of the art in technical computing for over a decade, and has become a standard in many of the world's leading companies and universities. From simple calculator operations to large-scale programming and the preparation of interactive documents, Mathematica is the tool of choice.

3,566 citations
