
Showing papers in "Clinical Epidemiology in 2017"


Journal ArticleDOI
TL;DR: Multiple imputation is an alternative method to deal with missing data, which accounts for the uncertainty associated with missing data, and provides unbiased and valid estimates of associations based on information from the available data.
Abstract: Missing data are ubiquitous in clinical epidemiological research. Individuals with missing data may differ from those with no missing data in terms of the outcome of interest and prognosis in general. Missing data are often categorized into the following three types: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). In clinical epidemiological research, missing data are seldom MCAR. Missing data can constitute considerable challenges in the analyses and interpretation of results and can potentially weaken the validity of results and conclusions. A number of methods have been developed for dealing with missing data. These include complete-case analyses, missing indicator method, single value imputation, and sensitivity analyses incorporating worst-case and best-case scenarios. If applied under the MCAR assumption, some of these methods can provide unbiased but often less precise estimates. Multiple imputation is an alternative method to deal with missing data, which accounts for the uncertainty associated with missing data. Multiple imputation is implemented in most statistical software under the MAR assumption and provides unbiased and valid estimates of associations based on information from the available data. The method affects not only the coefficient estimates for variables with missing data but also the estimates for other variables with no missing data.

562 citations
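
The multiple-imputation workflow described in the abstract can be sketched with simulated data: impute each missing value several times with random noise reflecting its uncertainty, analyze each completed data set, and pool the results (a simplified version of Rubin's rules). Everything below is illustrative, not the paper's method or data; the missingness is completely at random here for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (illustrative only): outcome y depends on covariate x;
# 30% of x is missing, here missing completely at random for simplicity
n = 500
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)
x_obs = x.copy()
x_obs[rng.random(n) < 0.3] = np.nan

def impute_once(x_obs, y, rng):
    """One stochastic regression imputation of x given y: predicted values
    plus random noise, so repeated imputations reflect the uncertainty
    about the missing values."""
    obs = ~np.isnan(x_obs)
    slope, intercept = np.polyfit(y[obs], x_obs[obs], 1)
    resid_sd = np.std(x_obs[obs] - (intercept + slope * y[obs]))
    x_imp = x_obs.copy()
    mis = ~obs
    x_imp[mis] = intercept + slope * y[mis] + rng.normal(scale=resid_sd, size=mis.sum())
    return x_imp

# Multiple imputation: analyze m completed data sets, then pool the estimates
m = 20
estimates = []
for _ in range(m):
    x_imp = impute_once(x_obs, y, rng)
    est, _ = np.polyfit(x_imp, y, 1)  # analysis model: y ~ x, keep the slope
    estimates.append(est)

pooled = np.mean(estimates)          # pooled point estimate (true slope is 1.5)
between = np.var(estimates, ddof=1)  # between-imputation variance (Rubin's rules)
print(round(pooled, 2))
```

In practice the imputation model would include all analysis variables and auxiliary predictors, and the full Rubin's rules variance (within plus between components) would be used; standard software implements this under the MAR assumption, as the abstract notes.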


Journal ArticleDOI
TL;DR: The strong assumptions underlying the assessment of interaction, and particularly mediation, require clinicians and epidemiologists to take extra care when conducting observational studies in the context of health care databases, which may limit the applicability of interaction and mediation assessments.
Abstract: We revisited the three interrelated epidemiological concepts of effect modification, interaction and mediation for clinical investigators and examined their applicability when using research databases. The standard methods that are available to assess interaction, effect modification and mediation are explained and exemplified. For each concept, we first give a simple "best-case" example from a randomized controlled trial, followed by a structurally similar example from an observational study using research databases. Our explanation of the examples is based on recent theoretical developments and insights in the context of large health care databases. Terminology is sometimes ambiguous for what constitutes effect modification and interaction. The strong assumptions underlying the assessment of interaction, and particularly mediation, require clinicians and epidemiologists to take extra care when conducting observational studies in the context of health care databases. These strong assumptions may limit the applicability of interaction and mediation assessments, at least until the biases and limitations of these assessments when using large research databases are clarified.

145 citations


Journal ArticleDOI
TL;DR: This review summarizes Medicare data, including the types of data that are captured and how they may be used in epidemiologic and health outcomes research, and highlights strengths, limitations, and key considerations when designing a study using Medicare data.
Abstract: Medicare is the federal health insurance program for individuals in the US who are aged ≥65 years, select individuals with disabilities aged <65 years, and individuals with end-stage renal disease. The Centers for Medicare and Medicaid Services grants researchers access to Medicare administrative claims databases for epidemiologic and health outcomes research. The data cover beneficiaries' encounters with the health care system and receipt of therapeutic interventions, including medications, procedures, and services. Medicare data have been used to describe patterns of morbidity and mortality, describe burden of disease, compare effectiveness of pharmacologic therapies, examine cost of care, evaluate the effects of provider practices on the delivery of care and patient outcomes, and explore the health impacts of important Medicare policy changes. Considering that the vast majority of US citizens ≥65 years of age have Medicare insurance, analyses of Medicare data are now essential for understanding the provision of health care among older individuals in the US and are critical for providing real-world evidence to guide decision makers. This review is designed to provide researchers with a summary of Medicare data, including the types of data that are captured, and how they may be used in epidemiologic and health outcomes research. We highlight strengths, limitations, and key considerations when designing a study using Medicare data. Additionally, we illustrate the potential impact that Centers for Medicare and Medicaid Services policy changes may have on data collection, coding, and ultimately on findings derived from the data.

143 citations


Journal ArticleDOI
TL;DR: Charlson comorbidity index scores from chart review and administrative data showed good agreement and predicted 30-day and 1-year mortality in ICU patients almost as well as the physiology-based SAPS II.
Abstract: Purpose This study compared the Charlson comorbidity index (CCI) information derived from chart review and administrative systems to assess the completeness and agreement between scores, evaluate the capacity to predict 30-day and 1-year mortality in intensive care unit (ICU) patients, and compare the predictive capacity with that of the Simplified Acute Physiology Score (SAPS) II model. Patients and methods Using data from 959 patients admitted to a general ICU in a Norwegian university hospital from 2007 to 2009, we compared the CCI score derived from chart review and administrative systems. Agreement was assessed using % agreement, kappa, and weighted kappa. The capacity to predict 30-day and 1-year mortality was assessed using logistic regression, model discrimination with the c-statistic, and calibration with a goodness-of-fit statistic. Results The CCI was complete (n=959) when calculated from chart review, but less complete from administrative data (n=839). Agreement was good, with a weighted kappa of 0.667 (95% confidence interval: 0.596-0.714). The c-statistics for categorized CCI scores from charts and administrative data were similar in the model that included age, sex, and type of admission: 0.755 and 0.743 for 30-day mortality, respectively, and 0.783 and 0.775, respectively, for 1-year mortality. Goodness-of-fit statistics supported the model fit. Conclusion The CCI scores from chart review and administrative data showed good agreement and predicted 30-day and 1-year mortality in ICU patients. CCI combined with age, sex, and type of admission predicted mortality almost as well as the physiology-based SAPS II.

107 citations
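
The c-statistic used above to compare model discrimination is the probability that a randomly selected patient who died was assigned a higher score than a randomly selected survivor, counting ties as one half; it equals the area under the ROC curve. A minimal sketch with made-up data, not the study's:

```python
import numpy as np

def c_statistic(scores, outcomes):
    """Concordance (c) statistic: probability that a randomly chosen case
    (outcome=1) has a higher predicted score than a randomly chosen control,
    with ties counted as 0.5. Equivalent to the area under the ROC curve."""
    scores = np.asarray(scores, dtype=float)
    outcomes = np.asarray(outcomes)
    case = scores[outcomes == 1]
    ctrl = scores[outcomes == 0]
    # Compare every case score with every control score
    diff = case[:, None] - ctrl[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

# Hypothetical toy data: higher comorbidity scores among decedents
scores = [0, 1, 2, 2, 3, 5, 1, 4]
died   = [0, 0, 0, 1, 1, 1, 0, 1]
print(round(float(c_statistic(scores, died)), 3))
```

A c-statistic of 0.5 means no discrimination and 1.0 means perfect separation; the study's values around 0.74-0.78 indicate moderate-to-good discrimination.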


Journal ArticleDOI
TL;DR: The types of potential confounding factors typically lacking in large health care databases are described and strategies for confounding control when data on important confounders are unavailable are suggested.
Abstract: Population-based health care databases are a valuable tool for observational studies as they reflect daily medical practice for large and representative populations. A constant challenge in observational designs is, however, to rule out confounding, and the value of these databases for a given study question accordingly depends on completeness and validity of the information on confounding factors. In this article, we describe the types of potential confounding factors typically lacking in large health care databases and suggest strategies for confounding control when data on important confounders are unavailable. Using Danish health care databases as examples, we present the use of proxy measures for important confounders and the use of external adjustment. We also briefly discuss the potential value of active comparators, high-dimensional propensity scores, self-controlled designs, pseudorandomization, and the use of positive or negative controls.

104 citations


Journal ArticleDOI
TL;DR: This review summarizes the literature on stress disorders (classified according to the International Classification of Diseases, 10th Revision [ICD-10]), including acute stress reaction, PTSD, adjustment disorder, and unspecified stress reactions, and the prevalence and incidence of each disorder.
Abstract: Given the ubiquity of traumatic events, it is not surprising that posttraumatic stress disorder (PTSD) - a common diagnosis following one of these experiences - is characterized as conferring a large burden for individuals and society. Although there is recognition of the importance of PTSD diagnoses throughout psychiatry, the literature on other diagnoses one may receive following a stressful or traumatic event is scant. This review summarizes the literature on stress disorders (classified according to the International Classification of Diseases, 10th Revision [ICD-10]), including acute stress reaction, PTSD, adjustment disorder and unspecified stress reactions. This review focuses on the literature related to common psychiatric and somatic consequences of these disorders. The prevalence and incidence of each disorder are described. A review of epidemiologic studies on comorbid mental health conditions, including depression, anxiety and substance abuse, is included, as well as a review of epidemiologic studies on somatic outcomes, including cancer, cardiovascular disease and gastrointestinal disorders. Finally, the current literature on all-cause mortality and suicide following stress disorder diagnoses is reviewed. Stress disorders are a critical public health issue with potentially deleterious outcomes that have a significant impact on those living with these disorders, the health care system and society. It is only through an awareness of the impact of stress disorders that appropriate resources can be allocated to prevention and treatment. Future research should expand the work done to date beyond the examination of PTSD, so that the field may obtain a more complete picture of the impact all stress disorders have on the many people living with these diagnoses.

95 citations


Journal ArticleDOI
TL;DR: Benefits of big data for clinical epidemiology include improved precision of estimates, which is especially important for reassuring ("null") findings; ability to conduct meaningful analyses in subgroups of patients; and rapid detection of safety signals.
Abstract: Routinely recorded health data have evolved from mere by-products of health care delivery or billing into a powerful research tool for studying and improving patient care through clinical epidemiologic research. Big data in the context of epidemiologic research means large interlinkable data sets within a single country or networks of multinational databases. Several Nordic, European, and other multinational collaborations are now well established. Advantages of big data for clinical epidemiology include improved precision of estimates, which is especially important for reassuring ("null") findings; ability to conduct meaningful analyses in subgroups of patients; and rapid detection of safety signals. Big data will also provide new possibilities for research by enabling access to linked information from biobanks, electronic medical records, patient-reported outcome measures, automatic and semiautomatic electronic monitoring devices, and social media. The sheer amount of data, however, does not eliminate and may even amplify systematic error. Therefore, methodologies addressing systematic error, clinical knowledge, and underlying hypotheses are more important than ever to ensure that the signal is discernable behind the noise.

93 citations


Journal ArticleDOI
TL;DR: The screening test of Ishii et al showed the best properties for distinguishing those at risk of sarcopenia from those not at risk, and such instruments can be used in clinical practice chiefly to rule out individuals who do not have the syndrome.
Abstract: Background Sarcopenia leads to serious adverse health consequences. There is a dearth of screening tools for this condition, and performances of these instruments have rarely been evaluated. Our aim was to compare the performance of five screening tools for identifying elders at risk of sarcopenia against five diagnostic definitions. Subjects and methods We gathered cross-sectional data of elders from the SarcoPhAge ("Sarco"penia and "Ph"ysical Impairment with Advancing "Age") study. Lean mass was measured with X-ray absorptiometry, muscle strength with a dynamometer and physical performance with the Short Physical Performance Battery (SPPB) test. Performances of screening methods were described using sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and area under the curve (AUC), according to five diagnostic definitions of sarcopenia. For each screening tool, optimal cutoff points were computed using two methods. Results A total of 306 subjects (74.8±5.9 years, 59.5% women) were included. The prevalence of sarcopenia varied from 5.7% to 16.7% depending on the definition. The best sensitivity (up to 100%) and the best NPV (up to 99.1%) were obtained with the screening test of Ishii et al, regardless of the definition applied. The highest AUC (up to 0.914) was also demonstrated by the instrument of Ishii et al. The most specific tool was the algorithm of the European Working Group on Sarcopenia in Older People (EWGSOP; up to 91.1%). All NPVs were above 87.0%, and all PPVs were below 51.0%. New cutoffs related to each screening instrument were also proposed to better discriminate sarcopenic individuals from non-sarcopenic individuals. Conclusion Screening instruments for sarcopenia can be used in clinical practice chiefly to rule out individuals who do not have the syndrome.
The screening test of Ishii et al showed the best properties for distinguishing those at risk of sarcopenia from those not at risk.

75 citations
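
The metrics reported above (sensitivity, specificity, PPV, NPV) all derive from a 2x2 table of screening result against the diagnostic reference. A minimal sketch; the counts are hypothetical, chosen only to mimic the reported pattern of high NPV and low PPV, and are not the study's data:

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from a 2x2 table.
    NPV is the key quantity here: among those who screen negative,
    the proportion truly free of the condition."""
    return {
        "sensitivity": tp / (tp + fn),  # detected among the truly diseased
        "specificity": tn / (tn + fp),  # negative among the truly healthy
        "ppv": tp / (tp + fp),          # diseased among screen-positives
        "npv": tn / (tn + fn),          # healthy among screen-negatives
    }

# Hypothetical counts for a screening tool against one diagnostic definition
m = screening_metrics(tp=45, fp=60, fn=5, tn=196)
print({k: round(v, 3) for k, v in m.items()})
```

With a low-prevalence condition such as sarcopenia, even a reasonably specific test yields many false positives relative to true positives, which is why all PPVs in the study stayed below 51% while NPVs stayed high.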


Journal ArticleDOI
TL;DR: This article provides an overview of standard methods in the analysis phase, such as stratification, standardization, multivariable regression analysis and propensity score (PS) methods, together with the more advanced high-dimensional propensity score (HD-PS) method.
Abstract: In observational studies, control of confounding can be done in the design and analysis phases. Using examples from large health care database studies, this article provides clinicians with an overview of standard methods in the analysis phase, such as stratification, standardization, multivariable regression analysis and propensity score (PS) methods, together with the more advanced high-dimensional propensity score (HD-PS) method. We describe the progression from simple stratification confined to the inclusion of a few potential confounders to complex modeling procedures such as the HD-PS approach by which hundreds of potential confounders are extracted from large health care databases. Stratification and standardization assist in the understanding of the data at a detailed level, while accounting for potential confounders. Incorporating several potential confounders in the analysis typically implies the choice between multivariable analysis and PS methods. Although PS methods have gained remarkable popularity in recent years, there is an ongoing discussion on the advantages and disadvantages of PS methods as compared to those of multivariable analysis. Furthermore, the HD-PS method, despite its generous inclusion of potential confounders, is also associated with potential pitfalls. All methods are dependent on the assumption of no unknown, unmeasured and residual confounding and suffer from the difficulty of identifying true confounders. Even in large health care databases, insufficient or poor data may contribute to these challenges. The trend in data collection is to compile more fine-grained data on lifestyle and severity of diseases, based on self-reporting and modern technologies. This will surely improve our ability to incorporate relevant confounders or their proxies. However, despite a remarkable development of methods that account for confounding and new data opportunities, confounding will remain a serious issue.
Considering the advantages and disadvantages of different methods, we emphasize the importance of the clinical input and of the interplay between clinicians and analysts to ensure a proper analysis.

71 citations
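
Stratification and standardization, the first methods discussed above, can be illustrated with simulated data: estimate the exposure-outcome association within strata of a confounder, then weight the stratum-specific estimates by the confounder's distribution. The cohort, effect sizes, and confounder below are entirely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated cohort: a binary confounder z raises both treatment probability
# and outcome risk (confounding by indication); true treatment effect = +0.10
n = 20000
z = rng.random(n) < 0.4
treat = rng.random(n) < np.where(z, 0.7, 0.3)
p_out = 0.1 + 0.2 * z + 0.1 * treat
out = rng.random(n) < p_out

def risk(mask):
    return out[mask].mean()

# Crude (confounded) risk difference ignores z and overstates the effect
crude = risk(treat) - risk(~treat)

# Standardization: stratum-specific risk differences weighted by
# the population distribution of the confounder
rd = 0.0
for zv in (False, True):
    weight = (z == zv).mean()
    rd += weight * (risk(treat & (z == zv)) - risk(~treat & (z == zv)))

print(round(crude, 3), round(rd, 3))
```

The same logic underlies PS stratification; the propensity score simply collapses many confounders into one scalar on which to stratify, which is what makes the HD-PS extension to hundreds of covariates feasible.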


Journal ArticleDOI
TL;DR: The DanFunD cohort was initiated to outline the epidemiology of functional somatic syndromes (FSS) and is the first larger coordinated epidemiological study focusing exclusively on FSS, and the specific aims of the cohort were to test delimitations of FSS, estimate prevalence and incidence rates, identify risk factors, delimitate the pathogenic pathways, and explore the consequences of FSS.
Abstract: The Danish study of Functional Disorders (DanFunD) cohort was initiated to outline the epidemiology of functional somatic syndromes (FSS) and is the first larger coordinated epidemiological study focusing exclusively on FSS. FSS are prevalent in all medical settings and can be defined as syndromes that, after appropriate medical assessment, cannot be explained in terms of a conventional medical or surgical disease. FSS are frequent and the clinical importance varies from vague symptoms to extreme disability. No well-described medical explanations exist for FSS, and how to delimit FSS remains a controversial topic. The specific aims of the cohort were to test delimitations of FSS, estimate prevalence and incidence rates, identify risk factors, delimitate the pathogenic pathways, and explore the consequences of FSS. The study population comprises a random sample of 9,656 men and women aged 18-76 years from the general population examined from 2011 to 2015. The survey comprises screening questionnaires for five types of FSS, ie, fibromyalgia, whiplash-associated disorder, multiple chemical sensitivity, irritable bowel syndrome, and chronic fatigue syndrome, and for the unifying diagnostic category of bodily distress syndrome. Additional data included a telephone-based diagnostic interview assessment for FSS, questionnaires on physical and mental health, personality traits, lifestyle, use of health care services and social factors, and a physical examination with measures of cardiorespiratory and morphological fitness, metabolic fitness, neck mobility, heart rate variability, and pain sensitivity. A biobank including serum, plasma, urine, DNA, and microbiome has been established, and central registry data from both responders and nonresponders are similarly available on morbidity, mortality, reimbursement of medicine, health care use, and social factors.
A complete 5-year follow-up is scheduled to take place from 2017 to 2020, and further reexaminations will be planned. Several projects using the DanFunD data are ongoing, and findings will be published in the coming years.

65 citations


Journal ArticleDOI
TL;DR: The 30-day hospital readmission rate was 25% following hospitalization for COPD in an Australian tertiary hospital, comparable to published international rates, and the LACE index had only moderate discriminative ability to predict 30-day readmission in patients hospitalized for COPD.
Abstract: Background and objective Patients hospitalized for acute exacerbation of chronic obstructive pulmonary disease (COPD) have a high 30-day hospital readmission rate, which has a large impact on the health care system and patients' quality of life. The use of a prediction model to quantify a patient's risk of readmission may assist in directing interventions to patients who will benefit most. The objective of this study was to calculate the rate of 30-day readmissions and evaluate the accuracy of the LACE index (length of stay, acuity of admission, co-morbidities, and emergency department visits within the last 6 months) for 30-day readmissions in a general hospital population of COPD patients. Methods All patients admitted with a principal diagnosis of COPD to Liverpool Hospital, a tertiary hospital in Sydney, Australia, between 2006 and 2016 were included in the study. A LACE index score was calculated for each patient and assessed using receiver operator characteristic curves. Results During the study period, 2,662 patients had 5,979 hospitalizations for COPD. Four percent of patients died in hospital and 25% were readmitted within 30 days; 56% of all 30-day readmissions were again due to COPD. The most common reasons for readmission, following COPD, were heart failure, pneumonia, and chest pain. The LACE index had moderate discriminative ability to predict 30-day readmission (C-statistic =0.63). Conclusion The 30-day hospital readmission rate was 25% following hospitalization for COPD in an Australian tertiary hospital and as such comparable to international published rates. The LACE index only had moderate discriminative ability to predict 30-day readmission in patients hospitalized for COPD.
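
For reference, the LACE index sums points for the four components named above: Length of stay, Acuity of admission, Comorbidity (Charlson), and Emergency department visits in the prior 6 months. The point assignments below follow the commonly published scoring (van Walraven et al); treat this as an illustrative sketch and verify the points against the original derivation before any reuse:

```python
def lace_score(los_days, emergent, charlson, ed_visits_6mo):
    """LACE index: Length of stay, Acuity of admission, Comorbidity
    (Charlson), and ED visits in the prior 6 months. Higher totals
    indicate higher predicted 30-day readmission/death risk."""
    # L: length of stay in days
    if los_days < 1:
        l = 0
    elif los_days <= 3:
        l = los_days      # 1, 2, or 3 points
    elif los_days <= 6:
        l = 4
    elif los_days <= 13:
        l = 5
    else:
        l = 7
    a = 3 if emergent else 0              # A: emergency (acute) admission
    c = charlson if charlson <= 3 else 5  # C: Charlson score, capped at 5
    e = min(ed_visits_6mo, 4)             # E: ED visits, capped at 4
    return l + a + c + e

# Example: 8-day emergency admission, Charlson score 2, one prior ED visit
print(lace_score(8, True, 2, 1))  # → 11 (maximum possible score is 19)
```

Scores range from 0 to 19; a single threshold on this score is what the receiver operating characteristic analysis above evaluates, yielding the reported C-statistic of 0.63.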

Journal ArticleDOI
TL;DR: The Chinese health care system and its implications for medical research, especially clinical epidemiology, are reviewed, and the construction of the Chinese health information system as well as several existing registers and research projects on health data are described.
Abstract: China has gone through a comprehensive health care insurance reform since 2003 and achieved universal health insurance coverage in 2011. The new health care insurance system provides China with a huge opportunity for the development of health care and medical research when its rich medical resources are fully unfolded. In this study, we review the Chinese health care system and its implication for medical research, especially within clinical epidemiology. First, we briefly review the population register system, the distribution of the urban and rural population in China, and the development of the Chinese health care system after 1949. In the following sections, we describe the current Chinese health care delivery system and the current health insurance system. We then focus on the construction of the Chinese health information system as well as several existing registers and research projects on health data. Finally, we discuss the opportunities and challenges of the health care system in regard to clinical epidemiology research. China now has three main insurance schemes. The Urban Employee Basic Medical Insurance (UEBMI) covers urban employees and retired employees. The Urban Residence Basic Medical Insurance (URBMI) covers urban residents, including children, students, elderly people without previous employment, and unemployed people. The New Rural Cooperative Medical Scheme (NRCMS) covers rural residents. The Chinese Government has made efforts to build up health information data, including electronic medical records. The establishment of universal health care insurance with linkage to medical records will provide potentially huge research opportunities in the future. However, constructing a complete register system at a nationwide level is challenging. In the future, China will demand increased capacity of researchers and data managers, in particular within clinical epidemiology, to explore the rich resources.

Journal ArticleDOI
TL;DR: It is suggested that LEL is associated with higher overall and premature mortality and that the association is affected by MM, lifestyle factors, and quality of life.
Abstract: Objective: Multimorbidity (MM) is more prevalent among people of lower socioeconomic status (SES), and both MM and SES are associated with higher mortality rates. However, little is known about the relationship between SES, MM, and mortality. This study investigates the association between educational level and mortality, and to what extent MM modifies this association. Methods: We followed 239,547 individuals invited to participate in the Danish National Health Survey 2010 (mean follow-up time: 3.8 years). MM was assessed by using information on drug prescriptions and diagnoses for 39 long-term conditions. Data on educational level were provided by Statistics Denmark. Date of death was obtained from the Civil Registration System. Information on lifestyle factors and quality of life was collected from the survey. The main outcomes were overall and premature mortality (death before the age of 75). Results: Of a total of 12,480 deaths, 6,607 (9.5%) occurred among people with low educational level (LEL) and 1,272 (2.3%) among people with high educational level (HEL). The mortality rate was higher among people with LEL compared with HEL in groups of people with 0–1 disease (hazard ratio: 2.26, 95% confidence interval: 2.00–2.55) and ≥4 diseases (hazard ratio: 1.14, 95% confidence interval: 1.04–1.24), respectively (adjusted model). The absolute number of deaths was six times higher among people with LEL than those with HEL in those with ≥4 diseases. The 1-year cumulative mortality proportion for overall death in those with ≥4 diseases was 5.59% for people with HEL versus 7.27% for people with LEL, and the 1-year cumulative mortality proportion for premature death was 2.93% for people with HEL versus 4.04% for people with LEL. Adjusting for potential mediating factors such as lifestyle and quality of life eliminated the statistical association between educational level and mortality in people with MM.
Conclusion: Our study suggests that LEL is associated with higher overall and premature mortality and that the association is affected by MM, lifestyle factors, and quality of life.

Journal ArticleDOI
TL;DR: DANBIO held a high proportion of true RA cases (96%) and was found to be superior to the DNPR with regard to the validity of the diagnosis, and both registries were estimated to have a high completeness of RA cases treated in hospital care.
Abstract: Objectives In Denmark, patients with rheumatoid arthritis (RA) are registered in the nationwide clinical DANBIO quality register and the Danish National Patient Registry (DNPR). The aim was to study the validity of the RA diagnosis and to estimate the completeness of relevant RA cases in each registry. Study design and setting Patients registered for the first time in 2011 with a diagnosis of RA were identified in DANBIO and DNPR in January 2013. For DNPR, filters were applied to reduce false-positive cases. The diagnosis was verified by a review of patient records. We calculated the positive predictive values (PPVs) of the RA diagnosis registrations in DANBIO and DNPR, and estimated the registry completeness of relevant RA cases for both DANBIO and DNPR. Updated data from 2011 to 2015 from DANBIO were retrieved to identify patients with delayed registration, and the registry completeness and PPV were recalculated. Results We identified 1,678 unique patients in DANBIO or in DNPR. The PPV (2013 dataset) was 92% in DANBIO and 79% in DNPR. PPV for DANBIO on the 2015 update was 96%. The registry completeness of relevant RA cases was 43% in DANBIO, increasing to 91% in the 2015 update and 90% in DNPR. Conclusion DANBIO held a high proportion of true RA cases (96%) and was found to be superior to the DNPR (79%) with regard to the validity of the diagnosis. Both registries were estimated to have a high completeness of RA cases treated in hospital care (~90%).

Journal ArticleDOI
TL;DR: A high level of agreement and validity of diagnosis and procedure codes in the Danish Colorectal Cancer Screening Database (DCCSD) indicates that DCCSD reflects the hospital records well, and may be a valuable data source for future research on colorectal cancer screening.
Abstract: Background In Denmark, a nationwide screening program for colorectal cancer was implemented in March 2014. Along with this, a clinical database for program monitoring and research purposes was established. Objective The aim of this study was to estimate the agreement and validity of diagnosis and procedure codes in the Danish Colorectal Cancer Screening Database (DCCSD). Methods All individuals with a positive immunochemical fecal occult blood test (iFOBT) result who were invited to screening in the first 3 months since program initiation were identified. From these, a sample of 150 individuals was selected using stratified random sampling by age, gender and region of residence. Data from the DCCSD were compared with data from hospital records, which were used as the reference. Agreement, sensitivity, specificity and positive and negative predictive values were estimated for categories of codes "clean colon", "colonoscopy performed", "overall completeness of colonoscopy", "incomplete colonoscopy", "polypectomy", "tumor tissue left behind", "number of polyps", "lost polyps", "risk group of polyps" and "colorectal cancer and polyps/benign tumor". Results Hospital records were available for 136 individuals. Agreement was highest for "colorectal cancer" (97.1%) and lowest for "lost polyps" (88.2%). Sensitivity varied between moderate and high, with 60.0% for "incomplete colonoscopy" and 98.5% for "colonoscopy performed". Specificity was 92.7% or above, except for the categories "colonoscopy performed" and "overall completeness of colonoscopy", where the specificity was low; however, the estimates were imprecise. Conclusion A high level of agreement between categories of codes in DCCSD and hospital records indicates that DCCSD reflects the hospital records well. Further, the validity of the categories of codes varied from moderate to high. Thus, the DCCSD may be a valuable data source for future research on colorectal cancer screening.

Journal ArticleDOI
TL;DR: Results from this meta-analysis support the growing evidence of an association between coffee/caffeine intake and the risk of SAB and are supportive of a precautionary principle advised by health organizations, although the advised limit of two to three cups of coffee/200–300 mg caffeine per day may be too high.
Abstract: Objective The aim was to investigate whether coffee or caffeine consumption is associated with reproductive endpoints among women with natural fertility (ie, time to pregnancy [TTP] and spontaneous abortion [SAB]) and among women in fertility treatment (ie, clinical pregnancy rate or live birth rate). Design This study was a systematic review and dose-response meta-analysis including data from case-control and cohort studies. Methods An extensive literature search was conducted in MEDLINE and Embase, with no time or language restrictions. Also, reference lists were searched manually. Two independent reviewers assessed the manuscript quality using the Newcastle-Ottawa Scale (NOS). A two-stage dose-response meta-analysis was applied to assess a potential association between coffee/caffeine consumption and the outcomes: TTP, SAB, clinical pregnancy, and live birth. Heterogeneity between studies was assessed using Cochrane Q-test and I2 statistics. Publication bias was assessed using Egger's regression test. Results The pooled results showed that coffee/caffeine consumption is associated with a significantly increased risk of SAB for 300 mg caffeine/day (relative risk [RR]: 1.37, 95% confidence interval [95% CI]: 1.19; 1.57) and for 600 mg caffeine/day (RR: 2.32, 95% CI: 1.62; 3.31). No association was found between coffee/caffeine consumption and outcomes of fertility treatment (based on two studies). No clear association was found between exposure to coffee/caffeine and natural fertility as measured by fecundability odds ratio (based on three studies) or TTP (based on two studies). Conclusion Results from this meta-analysis support the growing evidence of an association between coffee/caffeine intake and the risk of SAB. However, viewing the reproductive capacity in a broader perspective, there seems to be little, if any, association between coffee/caffeine consumption and fecundity.
In general, results from this study are supportive of a precautionary principle advised by health organizations such as the European Food Safety Authority (EFSA) and the World Health Organization (WHO), although the advised limit of a maximum of two to three cups of coffee/200-300 mg caffeine per day may be too high.
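The heterogeneity statistics named in the methods (Cochran's Q and I2) are related by I2 = max(0, (Q − df)/Q) × 100%. A minimal fixed-effect sketch of that computation, using invented log relative risks and variances purely for illustration (not data from the review):

```python
def cochran_q_and_i2(effects, variances):
    """Fixed-effect Cochran's Q and the I^2 heterogeneity statistic.

    effects: study-level effect estimates (e.g., log relative risks)
    variances: their squared standard errors
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical log-RRs and variances from three studies (illustration only)
q, i2 = cochran_q_and_i2([0.31, 0.26, 0.45], [0.010, 0.020, 0.015])
```

When Q does not exceed its degrees of freedom, I2 is truncated at zero, indicating no detectable between-study heterogeneity.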

Journal ArticleDOI
TL;DR: Social inequality in screening uptake was evident among both men and women in the Danish CRC screening program, even though the program is free of charge and the screening kit is based on FIT and mailed directly to the individuals.
Abstract: INTRODUCTION Fecal occult blood tests are recommended for colorectal cancer (CRC) screening in Europe. Recently, the fecal immunochemical test (FIT) has come into use. Sociodemographic differences between participants and nonparticipants may be less pronounced when using FIT as there are no preceding dietary restrictions and only one specimen is required. The aim of this study was to examine the associations between sociodemographic characteristics and nonparticipation for both genders, with special emphasis on those who actively unsubscribe from the program. METHODS The study was a national, register-based, cross-sectional study among men and women randomized to be invited to participate in the prevalence round of the Danish CRC screening program between March 1 and December 31, 2014. Prevalence ratios (PRs) were used to quantify the association between sociodemographic characteristics and nonparticipation (including active nonparticipation). PRs were assessed using Poisson regression with robust error variance. RESULTS The likelihood of being a nonparticipant was highest in the younger part of the population; however, for women, the association across age groups was U-shaped. Female immigrants were more likely to be nonparticipants. Living alone, being on social welfare, and having lower income were factors that were associated with nonparticipation among both men and women. For both men and women, there was a U-shaped association between education and nonparticipation. For both men and women, the likelihood of active nonparticipation rose with age; it was lowest among non-western immigrants and highest among social welfare recipients. CONCLUSION Social inequality in screening uptake was evident among both men and women in the Danish CRC screening program, even though the program is free of charge and the screening kit is based on FIT and mailed directly to the individuals. 
Interventions are needed to bridge this gap if CRC screening is to avoid aggravating existing inequalities in CRC-related morbidity and mortality.
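Prevalence ratios such as those above are estimated with Poisson regression and robust error variance; the crude (unadjusted) version reduces to a 2x2 table with a Wald confidence interval built on the log scale. A sketch with entirely hypothetical counts:

```python
import math

def prevalence_ratio(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Crude prevalence ratio with a 95% CI computed on the log scale."""
    p1 = cases_exposed / n_exposed
    p0 = cases_unexposed / n_unexposed
    pr = p1 / p0
    # Wald standard error of the log prevalence ratio
    se_log = math.sqrt(1 / cases_exposed - 1 / n_exposed
                       + 1 / cases_unexposed - 1 / n_unexposed)
    lo = math.exp(math.log(pr) - 1.96 * se_log)
    hi = math.exp(math.log(pr) + 1.96 * se_log)
    return pr, (lo, hi)

# Hypothetical counts: nonparticipation among welfare recipients vs. others
pr, ci = prevalence_ratio(300, 500, 2000, 8000)
```

Regression with robust variance generalizes this to adjust for the sociodemographic covariates described in the abstract.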

Journal ArticleDOI
TL;DR: It is demonstrated that substantial heterogeneity exists in the definition of overall, severe/major, and nocturnal hypoglycemia across RCTs investigating T2D interventions.
Abstract: OBJECTIVE To understand the severity and potential impact of heterogeneity in definitions of hypoglycemia used in diabetes research, we aimed to review the hypoglycemia definitions adopted in randomized controlled trials (RCTs). METHODS We reviewed 109 RCTs included in the Canadian Agency for Drugs and Technologies in Health reports on second- and third-line therapy for patients with type 2 diabetes (T2D). RESULTS Nearly 60% (n=66) of the studies reviewed presented definitions for overall hypoglycemia, and another 20% (n=22) of the studies reported results for hypoglycemia but did not report a definition. Among these 66 studies, only 9 (14%) followed the American Diabetes Association/European Medicines Agency specified guidelines to define hypoglycemia, with an exact threshold of plasma glucose ≤3.9 mmol/L. Fifty-two of the 66 studies (79%) used a threshold considerably lower than the recommended ≤3.9 mmol/L, and 16 studies used a threshold between 3.8 and 4.0 mmol/L. The proportion of trials that used a cutoff value of <3.1 mmol/L appeared to be broadly similar among the more commonly used non-insulin treatments: GLP-1s (7 of 18 [39%]), thiazolidinediones (TZDs; 6 of 11 [55%]), DPP-4s (12 of 19 [64%]), and sulfonylureas (11 of 20 [55%]). Among trials with intermediate- to long-acting insulins (neutral protamine Hagedorn insulin, detemir, glargine), 7 of 26 trials (27%) used a cutoff of <3.1 mmol/L. The definition of severe hypoglycemia was also subject to substantial heterogeneity, in both the threshold utilized and the accompanying soft definitions. CONCLUSION This review demonstrates that substantial heterogeneity exists in the definition of overall, severe/major, and nocturnal hypoglycemia across RCTs investigating T2D interventions.
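For reference, the ADA/EMA alert threshold of 3.9 mmol/L corresponds to roughly 70 mg/dL; plasma glucose converts between the two units via the molar mass of glucose (about 180.16 g/mol). A quick sketch:

```python
GLUCOSE_MOLAR_MASS = 180.16  # g/mol, so 1 mmol/L of glucose ~ 18.016 mg/dL

def mmol_per_l_to_mg_per_dl(mmol_per_l):
    """Convert a plasma glucose concentration from mmol/L to mg/dL."""
    return mmol_per_l * GLUCOSE_MOLAR_MASS / 10

# The ADA/EMA hypoglycemia alert threshold of 3.9 mmol/L is ~70 mg/dL
threshold_mg_dl = mmol_per_l_to_mg_per_dl(3.9)
```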

Journal ArticleDOI
TL;DR: Overall, the performance of the models was poorer in the external validation than in the original population, affirming the importance of external validation.
Abstract: Objective In medicine, many more prediction models have been developed than are implemented or used in clinical practice. These models cannot be recommended for clinical use before external validity is established. Though various models to predict mortality in dialysis patients have been published, very few have been validated and none are used in routine clinical practice. The aim of the current study was to identify existing models for predicting mortality in dialysis patients through a review and subsequently to externally validate these models in the same large independent patient cohort, in order to assess and compare their predictive capacities. Methods A systematic review was performed following the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines. To account for missing data, multiple imputation was performed. The original prediction formulae were extracted from selected studies. The probability of death per model was calculated for each individual within the Netherlands Cooperative Study on the Adequacy of Dialysis (NECOSAD). The predictive performance of the models was assessed based on their discrimination and calibration. Results In total, 16 articles were included in the systematic review. External validation was performed in 1,943 dialysis patients from NECOSAD for a total of seven models. The models performed moderately to well in terms of discrimination, with C-statistics ranging from 0.710 (interquartile range 0.708-0.711) to 0.752 (interquartile range 0.750-0.753) for a time frame of 1 year. According to the calibration, most models overestimated the probability of death. Conclusion Overall, the performance of the models was poorer in the external validation than in the original population, affirming the importance of external validation. Floege et al's models showed the highest predictive performance. 
The present study is a step forward in the use of a prediction model as a useful tool for nephrologists, using evidence-based medicine that combines individual clinical expertise, patients' choices, and the best available external evidence.
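The C-statistic reported above can be read, for a binary outcome, as the probability that a randomly chosen patient who died was assigned a higher predicted risk than a randomly chosen survivor. A pure-Python sketch with hypothetical predicted risks (the study itself used survival data, where the computation additionally accounts for censoring):

```python
from itertools import product

def c_statistic(predicted_risk, died):
    """Concordance (C-statistic) for a binary outcome: the fraction of
    (death, survivor) pairs in which the death has the higher predicted
    risk; ties count one half."""
    deaths = [r for r, d in zip(predicted_risk, died) if d]
    survivors = [r for r, d in zip(predicted_risk, died) if not d]
    concordant = 0.0
    for rd, rs in product(deaths, survivors):
        if rd > rs:
            concordant += 1.0
        elif rd == rs:
            concordant += 0.5
    return concordant / (len(deaths) * len(survivors))

# Hypothetical predicted 1-year death probabilities and observed outcomes
c = c_statistic([0.8, 0.6, 0.3, 0.2, 0.1], [1, 1, 0, 1, 0])
```

A value of 0.5 corresponds to chance discrimination; the 0.71-0.75 range reported above indicates moderate to good discrimination.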

Journal ArticleDOI
TL;DR: Compared with an approach using only contemporaneous data to define cohorts, the approach based on future redemption data generated a substantially higher short-term association between low-dose ASA use and major bleeding on the absolute, but not the relative, scale possibly due to selection and immortal time biases.
Abstract: Background A principle of cohort studies is that cohort membership is defined by current rather than future exposure information. Pharmacoepidemiologic studies using existing databases are vulnerable to violation of this principle. We evaluated the impact of using data on future redemption of prescriptions to determine cohort membership, motivated by a published example seeking to emulate a "per-protocol" association between continuous versus never use of low-dose acetylsalicylic acid (ASA) and major bleeding (e.g., cerebral hemorrhage or gastrointestinal bleeding). Materials and methods Danish medical registry data from 2004 to 2011 were used to construct two analytic cohorts. In Cohort 1, we used information about future redemption of low-dose ASA prescriptions to identify cohorts of continuous and never-ASA users. In Cohort 2, we identified ASA initiators and non-initiators using only contemporaneous data and censored follow-up for changes in use over time. We implemented propensity score-matched Poisson regression to evaluate associations between ASA use and major bleeding and estimated adjusted incidence rate differences (IRDs) per 1,000 person-years and ratios (IRRs) overall and stratified by time since initiation. Results Among >6 million eligible Danish adults, we identified 403,693 low-dose ASA initiators (Cohort 2), of whom 189,150 were defined as continuous users (Cohort 1). Overall, IRDs and IRRs were similar across cohorts. However, the IRD for major bleeding in the first 90 days was substantially larger in Cohort 1 (IRD=25 per 1,000 person-years) compared with Cohort 2 (IRD=10 per 1,000 person-years). Conclusion Using future medication redemption data to define baseline cohorts violates basic epidemiologic principles. 
Compared with an approach using only contemporaneous data to define cohorts, the approach based on future redemption data generated a substantially higher short-term association between low-dose ASA use and major bleeding on the absolute, but not the relative, scale possibly due to selection and immortal time biases.
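The incidence rate differences and ratios (IRDs and IRRs) compared above reduce to simple person-time arithmetic before any propensity score adjustment; a sketch with invented counts:

```python
def rate_difference_and_ratio(events_1, pyears_1, events_0, pyears_0):
    """Incidence rate difference (per 1,000 person-years) and incidence
    rate ratio for an exposed (1) vs. unexposed (0) group."""
    rate_1 = events_1 / pyears_1
    rate_0 = events_0 / pyears_0
    ird_per_1000 = (rate_1 - rate_0) * 1000
    irr = rate_1 / rate_0
    return ird_per_1000, irr

# Hypothetical bleeding counts and person-years in the first 90 days
ird, irr = rate_difference_and_ratio(120, 4000, 50, 5000)
```

As the abstract illustrates, the same data can look similar on the relative (IRR) scale while differing substantially on the absolute (IRD) scale.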

Journal ArticleDOI
TL;DR: The results of this study suggest that lamotrigine and carbamazepine are safer treatment options than valproate in pregnancy and should be considered as alternative treatment options for women of childbearing potential and in pregnancy.
Abstract: OBJECTIVE: The aim of this study was to examine the prevalence of major congenital malformations associated with antiepileptic drug (AED) treatment in pregnancy. PATIENTS AND METHODS: Using data from The Health Improvement Network, we identified women who have given live birth and their offspring. Four subgroups were selected based on the AED treatment in early pregnancy: valproate, carbamazepine, lamotrigine and women not receiving AED treatment. We compared the prevalence of major congenital malformations within children of these four groups and estimated prevalence ratios (PRs) using Poisson regression adjusted for maternal age, sex of child, quintiles of Townsend deprivation score and indication for treatment. RESULTS: In total, 240,071 women were included in the study. A total of 229 women were prescribed valproate in pregnancy, 357 were prescribed lamotrigine, 334 were prescribed carbamazepine, and 239,151 women were not prescribed AEDs. Fifteen out of 229 (6.6%) women prescribed valproate gave birth to a child with a major congenital malformation. The figures for lamotrigine, carbamazepine and women not prescribed AEDs were 2.7%, 3.3% and 2.2%, respectively. The prevalence of major congenital malformation was similar for women prescribed lamotrigine or carbamazepine compared to women with no AED treatment in pregnancy. For women prescribed valproate in polytherapy, the prevalence was fourfold higher. After adjustments, the effect estimates attenuated, but the prevalence remained two- to threefold higher in women prescribed valproate. CONCLUSION: The results of our study suggest that lamotrigine and carbamazepine are safer treatment options than valproate in pregnancy and should be considered as alternative treatment options for women of childbearing potential and in pregnancy.

Journal ArticleDOI
TL;DR: The methods used to validate asthma diagnoses in electronic health records are described; studies testing a range of case definitions show wide variation in validity, suggesting that the choice of case definition is important for obtaining asthma definitions with optimal validity.
Abstract: Objective To describe the methods used to validate asthma diagnoses in electronic health records and summarize the results of the validation studies. Background Electronic health records are increasingly being used for research on asthma to inform health services and health policy. Validation of the recording of asthma diagnoses in electronic health records is essential to use these databases for credible epidemiological asthma research. Methods We searched EMBASE and MEDLINE databases for studies that validated asthma diagnoses detected in electronic health records up to October 2016. Two reviewers independently assessed the full text against the predetermined inclusion criteria. Key data including author, year, data source, case definitions, reference standard, and validation statistics (including sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) were summarized in two tables. Results Thirteen studies met the inclusion criteria. Most studies demonstrated a high validity using at least one case definition (PPV >80%). Ten studies used a manual validation as the reference standard; each had at least one case definition with a PPV of at least 63%, up to 100%. We also found two studies using a second independent database to validate asthma diagnoses. The PPVs of the best performing case definitions ranged from 46% to 58%. We found one study which used a questionnaire as the reference standard to validate a database case definition; the PPV of the case definition algorithm in this study was 89%. Conclusion Attaining high PPVs (>80%) is possible using each of the discussed validation methods. Identifying asthma cases in electronic health records is possible with high sensitivity, specificity or PPV, by combining multiple data sources, or by focusing on specific test measures. 
Studies testing a range of case definitions show wide variation in the validity of each definition, suggesting that the choice of case definition may be important for obtaining asthma definitions with optimal validity.
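The validation statistics summarized above (sensitivity, specificity, PPV, NPV) all derive from a 2x2 table of the case definition against the reference standard. A sketch with hypothetical counts:

```python
def validation_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 validation table
    of a case definition against a reference standard."""
    sensitivity = tp / (tp + fn)   # true cases the definition detects
    specificity = tn / (tn + fp)   # true non-cases it correctly excludes
    ppv = tp / (tp + fp)           # flagged patients who are true cases
    npv = tn / (tn + fn)           # unflagged patients who are non-cases
    return sensitivity, specificity, ppv, npv

# Hypothetical counts from validating an asthma case definition
sens, spec, ppv, npv = validation_metrics(tp=90, fp=10, fn=30, tn=870)
```

Note that PPV, the statistic most often reported in these validation studies, depends on disease prevalence in the source population, unlike sensitivity and specificity.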

Journal ArticleDOI
TL;DR: In 2.3 years of follow-up, approximately 17% of patients with NAFLD had stage progression or died; T2DM was associated with approximately twice the risk of disease progression, and mortality risk increased with disease stage.
Abstract: Purpose To identify the characteristics and initial disease severity of patients with nonalcoholic fatty liver disease (NAFLD) and assess incidence and risk factors for disease progression in a retrospective study. Methods Patients ≥18 years of age without alcoholism or other liver diseases (eg, hepatitis B/C) were selected from Geisinger Health System electronic medical record data from 2004 to 2015. Initial disease stage was stratified into uncomplicated NAFLD, advanced fibrosis, cirrhosis, hepatocellular carcinoma (HCC), and liver transplant using clinical biomarkers, diagnosis, and procedure codes. Disease progression was defined as stage progression or death and analyzed via Kaplan-Meier plots and multistate models. Results In the NAFLD cohort (N=18,754), 61.5% were women, 39.0% had type 2 diabetes mellitus (T2DM), and the mean body mass index was 38.2±10.2 kg/m2. At index, 69.9% had uncomplicated NAFLD, 11.7% had advanced fibrosis, and 17.8% had cirrhosis. Of 18,718 patients assessed for progression, 17.3% progressed (11.0% had stage progression, 6.3% died without evidence of stage progression) during follow-up (median=842 days). Among subgroups, 12.3% of those without diabetes mellitus progressed vs 24.7% of those with T2DM. One-year mortality increased from 0.5% in uncomplicated NAFLD to 22.7% in HCC. After liver transplant, mortality decreased to 5.6% per year. Conclusions In 2.3 years of follow-up, approximately 17% of patients progressed or died without evidence of stage progression. T2DM was associated with approximately twice the risk of disease progression, and mortality risk increased with disease stage. Early diagnosis and monitoring of disease progression, especially in patients with T2DM, is warranted.
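The Kaplan-Meier estimates used above follow the product-limit recipe: at each distinct event time, survival is multiplied by (1 − events/at-risk). A minimal sketch with invented follow-up data (the study's multistate models go beyond this single-endpoint view):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate at each distinct event time.

    times: follow-up time per patient
    events: 1 = progression/death observed, 0 = censored
    """
    records = sorted(zip(times, events))
    survival = 1.0
    curve = []
    i = 0
    while i < len(records):
        t = records[i][0]
        deaths = sum(e for tt, e in records if tt == t)
        at_risk = sum(1 for tt, _ in records if tt >= t)
        if deaths:
            survival *= 1 - deaths / at_risk
            curve.append((t, survival))
        while i < len(records) and records[i][0] == t:  # skip ties at t
            i += 1
    return curve

# Hypothetical follow-up times (days) and progression indicators
curve = kaplan_meier([100, 200, 200, 400, 500], [1, 0, 1, 1, 0])
```

Censored patients (event = 0) leave the risk set without contributing an event, which is how the estimator handles incomplete follow-up.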

Journal ArticleDOI
TL;DR: The current taxonomy, epidemiology, and management of sessile serrated polyps with an emphasis on the clinical and public health impact of these lesions are outlined.
Abstract: Serrated polyps (SPs) of the colorectum pose a novel challenge to practicing gastroenterologists. Previously thought benign and unimportant, there is now compelling evidence that SPs are responsible for a significant percentage of incident colorectal cancer worldwide. In contrast to conventional adenomas, which tend to be slow growing and polypoid, SPs have unique features that undermine current screening and surveillance practices. For example, sessile serrated polyps (SSPs) are flat, predominately right-sided, and thought to have the potential for rapid growth. Moreover, SSPs are subject to wide variations in endoscopic detection and pathologic interpretation. Unfortunately, little is known about the natural history of SPs, and current guidelines are based largely on expert opinion. In this review, we outline the current taxonomy, epidemiology, and management of SPs with an emphasis on the clinical and public health impact of these lesions.

Journal ArticleDOI
TL;DR: Lichtensztajn et al. leveraged routine linkage to hospital discharge data to construct a comorbidity index for cancer registries, which can provide important clinically relevant information for population-based cancer outcomes research.
Abstract: Author(s): Lichtensztajn, Daphne Y; Giddings, Brenda M; Morris, Cyllene R; Parikh-Patel, Arti; Kizer, Kenneth W. The presence of comorbid medical conditions can significantly affect a cancer patient's treatment options, quality of life, and survival. However, these important data are often lacking from population-based cancer registries. Leveraging routine linkage to hospital discharge data, a comorbidity score was calculated for patients in the California Cancer Registry (CCR) database. California cancer cases diagnosed between 1991 and 2013 were linked to statewide hospital discharge data. A Deyo and Romano adapted Charlson Comorbidity Index was calculated for each case, and the association of comorbidity score with overall survival was assessed with Kaplan-Meier curves and Cox proportional hazards models. Using a subset of Medicare-enrolled CCR cases, the index was validated against a comorbidity score derived using Surveillance, Epidemiology, and End Results (SEER)-Medicare linked data. A comorbidity score was calculated for 71% of CCR cases. The majority (60.2%) had no relevant comorbidities. Increasing comorbidity score was associated with poorer overall survival. In a multivariable model, high comorbidity conferred twice the risk of death compared to no comorbidity (hazard ratio 2.33, 95% CI: 2.32-2.34). In the subset of patients with a SEER-Medicare-derived score, the sensitivity of the hospital discharge-based index for detecting any comorbidity was 76.5%.
The association between overall mortality and comorbidity score was stronger for the hospital discharge-based score than for the SEER-Medicare-derived index, and its predictive ability, as measured by Harrell's C index, was also slightly better (C index 0.62 versus 0.59, P<0.001). Despite some limitations, using hospital discharge data to construct a comorbidity index for cancer registries is a feasible and valid method to enhance registry data, which can provide important clinically relevant information for population-based cancer outcomes research.
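The Deyo and Romano adaptations referenced above map ICD discharge codes to Charlson conditions, which are then summed using the Charlson weights. The weighting step can be sketched as follows; the condition list is abridged (the full index has 17-19 conditions depending on the adaptation) and the example patient is invented:

```python
# Original Charlson weights for a few example conditions (the Deyo and
# Romano adaptations map ICD discharge codes to condition flags like these)
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "diabetes": 1,
    "renal_disease": 2,
    "moderate_severe_liver_disease": 3,
    "metastatic_solid_tumor": 6,
}

def charlson_score(conditions):
    """Sum the weights of the comorbid conditions recorded for a patient."""
    return sum(CHARLSON_WEIGHTS[c] for c in conditions)

# Hypothetical patient whose discharge records flag diabetes and renal disease
score = charlson_score(["diabetes", "renal_disease"])
```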

Journal ArticleDOI
TL;DR: The five-marker panel showed similar diagnostic efficacy for the detection of early- and late-stage CRC and could contribute to the development of powerful blood-based tests for CRC screening in the future.
Abstract: Objective Reliable noninvasive biomarkers for early detection of colorectal cancer (CRC) are highly desirable for efficient population-based screening with high adherence rates. We aimed to discover and validate blood-based protein markers for the early detection of CRC. Patients and methods A two-stage design with a discovery and a validation set was used. In the discovery phase, plasma levels of 92 protein markers and serum levels of TP53 autoantibody were measured in 226 clinically recruited CRC patients and 118 controls who were free of colorectal neoplasms at screening colonoscopy. An algorithm predicting the presence of CRC was derived by Lasso regression and validated in a validation set consisting of all available 41 patients with CRC and a representative sample of 106 participants with advanced adenomas and 107 controls free of neoplasm from a large screening colonoscopy cohort (N=6018). Receiver operating characteristic (ROC) analyses were conducted to evaluate the diagnostic performance of individual biomarkers and biomarker combinations. Results An algorithm based on growth differentiation factor 15 (GDF-15), amphiregulin (AREG), Fas antigen ligand (FasL), Fms-related tyrosine kinase 3 ligand (Flt3L) and TP53 autoantibody was constructed. In the validation set, the areas under the curves of this five-marker algorithm were 0.82 (95% CI, 0.74-0.90) for detecting CRC and 0.60 (95% CI, 0.52-0.69) for detecting advanced adenomas. At cutoffs yielding 90% specificity, the sensitivities (95% CI) for detecting CRC and advanced adenomas were 56.4% (38.4%-71.8%) and 22.0% (13.4%-35.4%), respectively. The five-marker panel showed similar diagnostic efficacy for the detection of early- and late-stage CRC. Conclusion The identified most promising biomarkers could contribute to the development of powerful blood-based tests for CRC screening in the future.
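The "cutoffs yielding 90% specificity" used above can be obtained empirically as the score below which 90% of controls fall, after which sensitivity is read off among cases. A sketch with made-up panel scores (not data from the study):

```python
def cutoff_for_specificity(control_scores, target_specificity=0.90):
    """Score cutoff at which the target fraction of controls fall below
    the cutoff (i.e., test negative), yielding the target specificity."""
    ranked = sorted(control_scores)
    index = min(len(ranked) - 1, round(target_specificity * len(ranked)))
    return ranked[index]

def sensitivity_at_cutoff(case_scores, cutoff):
    """Fraction of cases scoring at or above the cutoff (test positive)."""
    return sum(s >= cutoff for s in case_scores) / len(case_scores)

# Hypothetical panel scores for 10 controls and 5 cases
controls = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.9]
cases = [0.3, 0.6, 0.7, 0.8, 0.95]
cut = cutoff_for_specificity(controls)
sens = sensitivity_at_cutoff(cases, cut)
```

In practice the cutoff would be derived in the discovery set and applied unchanged in the validation set, mirroring the two-stage design above.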

Journal ArticleDOI
TL;DR: An HLB seems to be protective for long-duration troublesome LBP in men, and for long-duration troublesome NP in women.
Abstract: Background: The role of healthy lifestyle behavior (HLB) in terms of physical activity, alcohol intake, smoking, and diet put together has not yet been explored for the risk of low back pain (LBP) ...

Journal ArticleDOI
TL;DR: Although the majority of studies suggest a link between exposure to infected/colonized roommates and prior room occupants, methodological improvements such as increasing the statistical power and conducting universal screening for colonization would provide more definitive evidence needed to establish causality.
Abstract: Pathogens that cause health care-associated infections (HAIs) are known to survive on surfaces and equipment in health care environments despite routine cleaning. As a result, the infection status of prior room occupants and roommates may play a role in HAI transmission. We performed a systematic review of the literature evaluating the association between patients' exposure to infected/colonized hospital roommates or prior room occupants and their risk of infection/colonization with the same organism. A PubMed search for English articles published in 1990-2014 yielded 330 studies, which were screened by three reviewers. Eighteen articles met our inclusion criteria. Multiple studies reported positive associations between infection and exposure to roommates with influenza and group A streptococcus, but no associations were found for Clostridium difficile, methicillin-resistant Staphylococcus aureus, Cryptosporidium parvum, or Pseudomonas cepacia; findings were mixed for vancomycin-resistant enterococci (VRE). Positive associations were found between infection/colonization and exposure to rooms previously occupied by patients with Pseudomonas aeruginosa and Acinetobacter baumannii, but no associations were found for resistant Gram-negative organisms; findings were mixed for C. difficile, methicillin-resistant S. aureus, and VRE. Although the majority of studies suggest a link between exposure to infected/colonized roommates and prior room occupants, methodological improvements such as increasing the statistical power and conducting universal screening for colonization would provide more definitive evidence needed to establish causality.

Journal ArticleDOI
TL;DR: An increased risk of breast cancer with greater breast density in Korean women was confirmed, which was consistent regardless of BI-RADS assessment category, time interval after initial non-recall results, and menopausal status.
Abstract: Purpose The purpose of this study was to investigate the effects of breast density on breast cancer risk among women screened via a nationwide mammographic screening program. Patients and methods We conducted a nested case-control study for a randomly selected population of 1,561 breast cancer patients and 6,002 matched controls from the National Cancer Screening Program. Breast density was measured and recorded by two independent radiologists using the Breast Imaging Reporting and Data System (BI-RADS). Associations between BI-RADS density and breast cancer risk were evaluated according to screening results, time elapsed since receiving non-recall results, age, and menopausal status after adjusting for possible covariates. Results Breast cancer risk for women with extremely dense breasts was five times higher (adjusted odds ratio [aOR] =5.0; 95% confidence interval [CI] =3.7-6.7) than that for women with an almost entirely fatty breast, although the risk differed between recalled women (aOR =3.3, 95% CI =2.3-3.6) and women with non-recalled results (aOR =12.1, 95% CI =6.3-23.3, P-heterogeneity =0.001). aORs for BI-RADS categories of breast density were similar when subjects who developed cancer after showing non-recall findings during initial screening were grouped according to time until cancer diagnosis thereafter (<1 and ≥1 year). The prevalence of dense breasts was higher in younger women, and the association between a denser breast and breast cancer was stronger in younger women (heterogeneously dense breast: aOR =7.0, 95% CI =2.4-20.3, women in their 40s) than in older women (aOR =2.5, 95% CI =1.1-6.0, women in their 70s or older). In addition, while the positive association remained, irrespective of menopausal status, the effect of a dense breast on breast cancer risk was stronger in premenopausal women.
Conclusion This study confirmed an increased risk of breast cancer with greater breast density in Korean women, which was consistent regardless of BI-RADS assessment category, time interval after initial non-recall results, and menopausal status.

Journal ArticleDOI
TL;DR: Large differences in PA time derived from the LASA Physical Activity Questionnaire and the wrist-worn accelerometer were observed, related to body-mass index, level of disability, and presence of depressive symptoms.
Abstract: _Background:_ Agreement between questionnaires and accelerometers to measure physical activity (PA) differs between studies and might be related to demographic, lifestyle, and health characteristics, including disability and depressive symptoms. _Methods:_ We included 1,410 individuals aged 51–94 years from the population-based Rotterdam Study. Participants completed the LASA Physical Activity Questionnaire and wore a wrist-worn accelerometer on the nondominant wrist for 1 week thereafter. We compared the Spearman correlation and disagreement (level and direction) for total PA across levels of demographic, lifestyle, and health variables. The level of disagreement was defined as the absolute difference between questionnaire- and accelerometer-derived PA, whereas the direction of disagreement was defined as questionnaire PA minus accelerometer PA. We used linear regression analyses with the level and direction of disagreement as outcome, including all demographic, lifestyle, and health variables in the model. _Results:_ We observed a Spearman correlation of 0.30 between questionnaire- and accelerometer-derived PA in the total population. The level of disagreement (ie, absolute difference) was 941.9 (standard deviation [SD] 747.0) minutes/week, and the PA reported by questionnaire was on average 529.4 (SD 1,079.5) minutes/week lower than PA obtained by the accelerometer. The level of disagreement decreased with higher educational levels. Additionally, participants with obesity, higher disability scores, and more depressive symptoms underestimated their self-reported PA more than their healthier counterparts. _Conclusion:_ We observed large differences in PA time derived from the LASA Physical Activity Questionnaire and the wrist-worn accelerometer. Differences between the methods were related to body-mass index, level of disability, and presence of depressive symptoms. 
Future studies using questionnaires and/or accelerometers should account for these differences.
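The agreement measures above (Spearman correlation between the two instruments, plus the signed questionnaire-minus-accelerometer difference as the direction of disagreement) can be sketched in pure Python; the PA minutes/week below are hypothetical:

```python
def _ranks(values):
    """Average ranks (ties shared) as used by Spearman's correlation."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank across the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical questionnaire vs. accelerometer PA minutes/week
questionnaire = [300, 600, 150, 900, 420]
accelerometer = [800, 1100, 700, 1500, 900]
rho = spearman(questionnaire, accelerometer)
# Direction of disagreement: questionnaire PA minus accelerometer PA
direction = sum(q - a for q, a in zip(questionnaire, accelerometer)) / 5
```

A negative mean direction, as in the abstract, indicates that self-reported PA falls short of accelerometer-derived PA even when the two measures rank participants similarly.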