scispace - formally typeset

Showing papers by "Johns Hopkins University published in 2002"


Journal ArticleDOI
TL;DR: This review discusses patterns of DNA methylation and chromatin structure in neoplasia and the molecular alterations that might cause them and/or underlie altered gene expression in cancer.
Abstract: Patterns of DNA methylation and chromatin structure are profoundly altered in neoplasia and include genome-wide losses of, and regional gains in, DNA methylation. The recent explosion in our knowledge of how chromatin organization modulates gene transcription has further highlighted the importance of epigenetic mechanisms in the initiation and progression of human cancer. These epigenetic changes -- in particular, aberrant promoter hypermethylation that is associated with inappropriate gene silencing -- affect virtually every step in tumour progression. In this review, we discuss these epigenetic events and the molecular alterations that might cause them and/or underlie altered gene expression in cancer.

5,492 citations



Journal ArticleDOI
TL;DR: Research on the mental and physical health sequelae of intimate partner violence is reviewed, and increased assessment of and intervention for intimate partner violence in health-care settings are recommended.

3,615 citations


Posted Content
TL;DR: Weak instruments arise when the instruments in linear instrumental variables (IV) regression are weakly correlated with the included endogenous variables; in generalized method of moments (GMM), weak instruments correspond to weak identification of some or all of the unknown parameters.
Abstract: Weak instruments arise when the instruments in linear instrumental variables (IV) regression are weakly correlated with the included endogenous variables. In generalized method of moments (GMM), more generally, weak instruments correspond to weak identification of some or all of the unknown parameters. Weak identification leads to GMM statistics with nonnormal distributions, even in large samples, so that conventional IV or GMM inferences are misleading. Fortunately, various procedures are now available for detecting and handling weak instruments in the linear IV model and, to a lesser degree, in nonlinear GMM.
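The standard first-pass diagnostic from this literature, the first-stage F statistic (with F < 10 a common rule of thumb for weakness), can be sketched on simulated data. Everything below is illustrative: the data-generating process, coefficients, and variable names are invented, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)                  # instrument
u = rng.normal(size=n)                  # structural error
x = 0.05 * z + u + rng.normal(size=n)   # endogenous regressor, only weakly related to z

# First-stage regression of x on a constant and z
Z = np.column_stack([np.ones(n), z])
beta, _, _, _ = np.linalg.lstsq(Z, x, rcond=None)
rss = np.sum((x - Z @ beta) ** 2)
tss = np.sum((x - x.mean()) ** 2)

# F statistic for H0: the coefficient on z is zero (1 restriction)
F = (tss - rss) / (rss / (n - 2))
print(f"first-stage F = {F:.2f}")  # values under ~10 flag weakness (Staiger-Stock rule of thumb)
```

With a first-stage coefficient this small the F statistic typically lands far below the conventional threshold, which is exactly the setting in which IV point estimates and standard errors become unreliable.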

3,378 citations


Journal ArticleDOI
TL;DR: The Multi-Ethnic Study of Atherosclerosis was initiated in July 2000 to investigate the prevalence, correlates, and progression of subclinical cardiovascular disease (CVD) in a population-based sample of 6,500 men and women aged 45-84 years, who will be followed for the identification and characterization of CVD events.
Abstract: The Multi-Ethnic Study of Atherosclerosis was initiated in July 2000 to investigate the prevalence, correlates, and progression of subclinical cardiovascular disease (CVD) in a population-based sample of 6,500 men and women aged 45-84 years. The cohort will be selected from six US field centers. Approximately 38% of the cohort will be White, 28% African-American, 23% Hispanic, and 11% Asian (of Chinese descent). Baseline measurements will include measurement of coronary calcium using computed tomography; measurement of ventricular mass and function using cardiac magnetic resonance imaging; measurement of flow-mediated brachial artery endothelial vasodilation, carotid intimal-medial wall thickness, and distensibility of the carotid arteries using ultrasonography; measurement of peripheral vascular disease using ankle and brachial blood pressures; electrocardiography; and assessments of microalbuminuria, standard CVD risk factors, sociodemographic factors, life habits, and psychosocial factors. Blood samples will be assayed for putative biochemical risk factors and stored for use in nested case-control studies. DNA will be extracted and lymphocytes will be immortalized for genetic studies. Measurement of selected subclinical disease indicators and risk factors will be repeated for the study of progression over 7 years. Participants will be followed through 2008 for identification and characterization of CVD events, including acute myocardial infarction and other coronary heart disease, stroke, peripheral vascular disease, and congestive heart failure; therapeutic interventions for CVD; and mortality.

3,367 citations


Journal ArticleDOI
TL;DR: It is the right time for medical societies and public health regulators to consider the causal role of human papillomavirus infections in cervical cancer and to define its preventive and clinical implications.
Abstract: The causal role of human papillomavirus infections in cervical cancer has been documented beyond reasonable doubt. The association is present in virtually all cervical cancer cases worldwide. It is the right time for medical societies and public health regulators to consider this evidence and to define its preventive and clinical implications. A comprehensive review of key studies and results is presented.

3,333 citations


Journal ArticleDOI
24 Apr 2002-JAMA
TL;DR: The 2001 Bethesda System terminology reflects important advances in biological understanding of cervical neoplasia and cervical screening technology.
Abstract: Objectives: The Bethesda 2001 Workshop was convened to evaluate and update the 1991 Bethesda System terminology for reporting the results of cervical cytology. A primary objective was to develop a new approach to broaden participation in the consensus process. Participants: Forum groups composed of 6 to 10 individuals were responsible for developing recommendations for discussion at the workshop. Each forum group included at least 1 cytopathologist, cytotechnologist, clinician, and international representative to ensure a broad range of views and interests. More than 400 cytopathologists, cytotechnologists, histopathologists, family practitioners, gynecologists, public health physicians, epidemiologists, patient advocates, and attorneys participated in the workshop, which was convened by the National Cancer Institute and cosponsored by 44 professional societies. More than 20 countries were represented. Evidence: Literature review, expert opinion, and input from an Internet bulletin board were all considered in developing recommendations. The strength of evidence of the scientific data was considered of paramount importance. Consensus Process: Bethesda 2001 was a year-long iterative review process. An Internet bulletin board was used for discussion of issues and drafts of recommendations. More than 1000 comments were posted to the bulletin board over the course of 6 months. The Bethesda Workshop, held April 30-May 2, 2001, was open to the public. Postworkshop recommendations were posted on the bulletin board for a last round of critical review prior to finalizing the terminology. Conclusions: Bethesda 2001 was developed with broad participation in the consensus process. The 2001 Bethesda System terminology reflects important advances in biological understanding of cervical neoplasia and cervical screening technology.

3,122 citations


Journal ArticleDOI
TL;DR: In this article, a convergence rate for the factor estimates is established that allows consistent estimation of the number of factors, and panel information criteria are proposed for that estimation.
Abstract: In this paper we develop some econometric theory for factor models of large dimensions. The focus is the determination of the number of factors (r), which is an unresolved issue in the rapidly growing literature on multifactor models. We first establish the convergence rate for the factor estimates that will allow for consistent estimation of r. We then propose some panel criteria and show that the number of factors can be consistently estimated using the criteria. The theory is developed under the framework of large cross-sections (N) and large time dimensions (T). No restriction is imposed on the relation between N and T. Simulations show that the proposed criteria have good finite sample properties in many configurations of the panel data encountered in practice.
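The panel-criterion idea can be sketched in a few lines: estimate k factors by principal components, then trade the residual variance V(k) off against a penalty that grows with k and with the panel dimensions. The simulated data and the particular penalty shown below are illustrative assumptions in the spirit of the paper's criteria, not its exact formulas.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, r_true = 50, 100, 3
F = rng.normal(size=(T, r_true))              # common factors
Lam = rng.normal(size=(N, r_true))            # factor loadings
X = F @ Lam.T + 0.5 * rng.normal(size=(T, N)) # observed panel: signal plus noise

def V(k, X):
    """Mean squared residual after removing k principal-component factors."""
    if k == 0:
        return np.mean(X ** 2)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xhat = (U[:, :k] * s[:k]) @ Vt[:k]        # rank-k reconstruction
    return np.mean((X - Xhat) ** 2)

def ic(k, X):
    T, N = X.shape
    # Penalty increasing in k and shrinking as N, T grow (one IC-style choice)
    penalty = k * (N + T) / (N * T) * np.log(min(N, T))
    return np.log(V(k, X)) + penalty

k_hat = min(range(9), key=lambda k: ic(k, X))
print("estimated number of factors:", k_hat)
```

Because the residual variance drops sharply until k reaches the true number of factors and only marginally thereafter, the penalized criterion bottoms out at the true r in well-separated designs like this one.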

2,863 citations


Journal ArticleDOI
31 Oct 2002-Nature
TL;DR: A thermomechanical treatment of Cu is described that results in a bimodal grain size distribution, with micrometre-sized grains embedded inside a matrix of nanocrystalline and ultrafine (<300 nm) grains, which impart high strength, as expected from an extrapolation of the Hall–Petch relationship.
Abstract: Nanocrystalline metals--with grain sizes of less than 100 nm--have strengths exceeding those of coarse-grained and even alloyed metals, and are thus expected to have many applications. For example, pure nanocrystalline Cu (refs 1-7) has a yield strength in excess of 400 MPa, which is six times higher than that of coarse-grained Cu. But nanocrystalline materials often exhibit low tensile ductility at room temperature, which limits their practical utility. The elongation to failure is typically less than a few per cent; the regime of uniform deformation is even smaller. Here we describe a thermomechanical treatment of Cu that results in a bimodal grain size distribution, with micrometre-sized grains embedded inside a matrix of nanocrystalline and ultrafine (<300 nm) grains. The matrix grains impart high strength, as expected from an extrapolation of the Hall-Petch relationship. Meanwhile, the inhomogeneous microstructure induces strain hardening mechanisms that stabilize the tensile deformation, leading to a high tensile ductility--65% elongation to failure, and 30% uniform elongation. We expect that these results will have implications in the development of tough nanostructured metals for forming operations and high-performance structural applications including microelectromechanical and biomedical systems.
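The Hall-Petch extrapolation invoked above, sigma_y = sigma_0 + k_y / sqrt(d), is a one-line formula; the numbers below use rough illustrative constants for Cu (not the paper's fitted parameters) to show how strength rises as grain size shrinks from the coarse to the nanocrystalline regime.

```python
import math

sigma_0 = 25.0   # MPa, friction stress (illustrative value for Cu)
k_y = 0.11       # MPa * m^0.5, Hall-Petch slope (illustrative)

results = {}
for d_nm in (10_000, 300, 100):   # coarse-grained, ultrafine, nanocrystalline
    d_m = d_nm * 1e-9
    results[d_nm] = sigma_0 + k_y / math.sqrt(d_m)
    print(f"d = {d_nm:>6} nm -> sigma_y ~ {results[d_nm]:.0f} MPa")
```

With these rough constants the yield strength climbs from tens of MPa at coarse grain sizes to a few hundred MPa below 100 nm, the same order-of-magnitude trend the abstract reports for pure Cu.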

2,531 citations


Journal ArticleDOI
TL;DR: The Sloan Digital Sky Survey (SDSS) is an imaging and spectroscopic survey that will eventually cover approximately one-quarter of the celestial sphere and collect spectra of ≈10⁶ galaxies, 100,000 quasars, 30,000 stars, and 30,000 serendipity targets.
Abstract: The Sloan Digital Sky Survey (SDSS) is an imaging and spectroscopic survey that will eventually cover approximately one-quarter of the celestial sphere and collect spectra of ≈10⁶ galaxies, 100,000 quasars, 30,000 stars, and 30,000 serendipity targets. In 2001 June, the SDSS released to the general astronomical community its early data release, roughly 462 deg² of imaging data including almost 14 million detected objects and 54,008 follow-up spectra. The imaging data were collected in drift-scan mode in five bandpasses (u, g, r, i, and z); our 95% completeness limits for stars are 22.0, 22.2, 22.2, 21.3, and 20.5, respectively. The photometric calibration is reproducible to 5%, 3%, 3%, 3%, and 5%, respectively. The spectra are flux- and wavelength-calibrated, with 4096 pixels from 3800 to 9200 Å at R ≈ 1800. We present the means by which these data are distributed to the astronomical community, descriptions of the hardware used to obtain the data, the software used for processing the data, the measured quantities for each observed object, and an overview of the properties of this data set.

2,422 citations


Journal ArticleDOI
TL;DR: The persistence of the engrafted hMSCs and their in situ differentiation in the heart may represent the basis for using these adult stem cells for cellular cardiomyoplasty.
Abstract: Background— Cellular cardiomyoplasty has been proposed as an alternative strategy for augmenting the function of diseased myocardium. We investigated the potential of human mesenchymal stem cells (hMSCs) from adult bone marrow to undergo myogenic differentiation once transplanted into the adult murine myocardium. Methods and Results— A small bone marrow aspirate was taken from the iliac crest of healthy human volunteers, and hMSCs were isolated as previously described. The stem cells, labeled with lacZ, were injected into the left ventricle of CB17 SCID/beige adult mice. At 4 days after injection, none of the engrafted hMSCs expressed myogenic markers. A limited number of cells survived past 1 week and over time morphologically resembled the surrounding host cardiomyocytes. Immunohistochemistry revealed de novo expression of desmin, β-myosin heavy chain, α-actinin, cardiac troponin T, and phospholamban at levels comparable to those of the host cardiomyocytes; sarcomeric organization of the contractile pro...

Journal ArticleDOI
TL;DR: Better primary care, especially coordination of care, could reduce avoidable hospitalization rates, especially for individuals with multiple chronic conditions.
Abstract: Methods: A cross-sectional analysis was conducted on a nationally random sample of 1,217,103 Medicare fee-for-service beneficiaries aged 65 and older living in the United States and enrolled in both Medicare Part A and Medicare Part B during 1999. Multiple logistic regression was used to analyze the influence of age, sex, and number of types of chronic conditions on the risk of incurring inpatient hospitalizations for ambulatory care sensitive conditions and hospitalizations with preventable complications among aged Medicare beneficiaries. Results: In 1999, 82% of aged Medicare beneficiaries had 1 or more chronic conditions, and 65% had multiple chronic conditions. Inpatient admissions for ambulatory care sensitive conditions and hospitalizations with preventable complications increased with the number of chronic conditions. For example, Medicare beneficiaries with 4 or more chronic conditions were 99 times more likely than a beneficiary without any chronic conditions to have an admission for an ambulatory care sensitive condition (95% confidence interval, 86-113). Per capita Medicare expenditures increased with the number of types of chronic conditions from $211 among beneficiaries without a chronic condition to $13,973 among beneficiaries with 4 or more types of chronic conditions. Conclusions: The risk of an avoidable inpatient admission or a preventable complication in an inpatient setting increases dramatically with the number of chronic conditions. Better primary care, especially coordination of care, could reduce avoidable hospitalization rates, especially for individuals with multiple chronic conditions.

Journal ArticleDOI
TL;DR: In this article, the authors describe the algorithm that selects the main sample of galaxies for spectroscopy in the Sloan Digital Sky Survey (SDSS) from the photometric data obtained by the imaging survey.
Abstract: We describe the algorithm that selects the main sample of galaxies for spectroscopy in the Sloan Digital Sky Survey (SDSS) from the photometric data obtained by the imaging survey. Galaxy photometric properties are measured using the Petrosian magnitude system, which measures flux in apertures determined by the shape of the surface brightness profile. The metric aperture used is essentially independent of cosmological surface brightness dimming, foreground extinction, sky brightness, and the galaxy central surface brightness. The main galaxy sample consists of galaxies with r-band Petrosian magnitudes r ≤ 17.77 and r-band Petrosian half-light surface brightnesses μ₅₀ ≤ 24.5 mag arcsec⁻². These cuts select about 90 galaxy targets per square degree, with a median redshift of 0.104. We carry out a number of tests to show that (1) our star-galaxy separation criterion is effective at eliminating nearly all stellar contamination while removing almost no genuine galaxies, (2) the fraction of galaxies eliminated by our surface brightness cut is very small (~0.1%), (3) the completeness of the sample is high, exceeding 99%, and (4) the reproducibility of target selection based on repeated imaging scans is consistent with the expected random photometric errors. The main cause of incompleteness is blending with saturated stars, which becomes more significant for brighter, larger galaxies. The SDSS spectra are of high enough signal-to-noise ratio (S/N > 4 per pixel) that essentially all targeted galaxies (99.9%) yield a reliable redshift (i.e., with statistical error less than 30 km s⁻¹). About 6% of galaxies that satisfy the selection criteria are not observed because they have a companion closer than the 55'' minimum separation of spectroscopic fibers, but these galaxies can be accounted for in statistical analyses of clustering or galaxy properties.
The uniformity and completeness of the galaxy sample make it ideal for studies of large-scale structure and the characteristics of the galaxy population in the local universe.
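The two headline selection cuts (r ≤ 17.77 and μ₅₀ ≤ 24.5 mag arcsec⁻²) amount to a simple catalog filter. The toy rows and field names below are invented for this sketch and are not the actual SDSS catalog schema.

```python
# Selection limits quoted in the abstract
R_PETRO_MAX = 17.77   # r-band Petrosian magnitude
MU50_MAX = 24.5       # r-band half-light surface brightness, mag/arcsec^2

# Hypothetical catalog rows (field names are invented for this sketch)
catalog = [
    {"id": 1, "r_petro": 16.9, "mu50": 22.1},  # passes both cuts
    {"id": 2, "r_petro": 18.2, "mu50": 21.0},  # fails the magnitude cut
    {"id": 3, "r_petro": 17.5, "mu50": 25.0},  # fails the surface-brightness cut
]

targets = [g for g in catalog
           if g["r_petro"] <= R_PETRO_MAX and g["mu50"] <= MU50_MAX]
print([g["id"] for g in targets])  # -> [1]
```

The real pipeline layers star-galaxy separation, deblending, and fiber-collision handling on top of these cuts, but the magnitude and surface-brightness limits are exactly this kind of threshold test applied per object.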

Journal ArticleDOI
TL;DR: There was a statistically significant positive correlation between percent maximal cytoreduction and log median survival time, and this correlation remained significant after controlling for all other variables.
Abstract: PURPOSE: To evaluate the relative effect of percent maximal cytoreductive surgery and other prognostic variables on survival among cohorts of patients with advanced-stage ovarian carcinoma treated with platinum-based chemotherapy. MATERIALS AND METHODS: Eighty-one cohorts of patients with stage III or IV ovarian carcinoma (6,885 patients) were identified from articles in MEDLINE (1989 through 1998). Linear regression models, with weighted correlation calculations, were used to assess the effects on log median survival time of the proportion of each cohort undergoing maximal cytoreduction, dose-intensity of the platinum compound administered, proportion of patients with stage IV disease, median age, and year of publication. RESULTS: There was a statistically significant positive correlation between percent maximal cytoreduction and log median survival time, and this correlation remained significant after controlling for all other variables (P < .001). Each 10% increase in maximal cytoreduction was associat...

Journal ArticleDOI
13 Nov 2002-JAMA
TL;DR: Results support the effectiveness and durability of the cognitive training interventions in improving targeted cognitive abilities and were of a magnitude equivalent to the amount of decline expected in elderly persons without dementia over 7- to 14-year intervals.
Abstract: Context: Cognitive function in older adults is related to independent living and need for care. However, few studies have addressed whether improving cognitive functions might have short- or long-term effects on activities related to living independently. Objective: To evaluate whether 3 cognitive training interventions improve mental abilities and daily functioning in older, independent-living adults. Design: Randomized, controlled, single-blind trial with recruitment conducted from March 1998 to October 1999 and 2-year follow-up through December 2001. Setting and Participants: Volunteer sample of 2832 persons aged 65 to 94 years recruited from senior housing, community centers, and hospital/clinics in 6 metropolitan areas in the United States. Interventions: Participants were randomly assigned to 1 of 4 groups: 10-session group training for memory (verbal episodic memory; n = 711), or reasoning (ability to solve problems that follow a serial pattern; n = 705), or speed of processing (visual search and identification; n = 712); or a no-contact control group (n = 704). For the 3 treatment groups, 4-session booster training was offered to a 60% random sample 11 months later. Main Outcome Measures: Cognitive function and cognitively demanding everyday functioning. Results: Thirty participants were incorrectly randomized and were excluded from the analysis. Each intervention improved the targeted cognitive ability compared with baseline, durable to 2 years (P<.001 for all). Eighty-seven percent of speed-, 74% of reasoning-, and 26% of memory-trained participants demonstrated reliable cognitive improvement immediately after the intervention period. Booster training enhanced training gains in speed (P<.001) and reasoning (P<.001) interventions (speed booster, 92%; no booster, 68%; reasoning booster, 72%; no booster, 49%), which were maintained at 2-year follow-up (P<.001 for both). No training effects on everyday functioning were detected at 2 years. Conclusions: Results support the effectiveness and durability of the cognitive training interventions in improving targeted cognitive abilities. Training effects were of a magnitude equivalent to the amount of decline expected in elderly persons without dementia over 7- to 14-year intervals. Because of minimal functional decline across all groups, longer follow-up is likely required to observe training effects on everyday function.

Journal ArticleDOI
25 Sep 2002-JAMA
TL;DR: These are the first population-based estimates for neuropsychiatric symptoms in MCI, indicating a high prevalence associated with this condition as well.
Abstract: Context: Mild cognitive impairment (MCI) may be a precursor to dementia, at least in some cases. Dementia and MCI are associated with neuropsychiatric symptoms in clinical samples. Only 2 population-based studies exist of the prevalence of these symptoms in dementia, and none exist for MCI. Objective: To estimate the prevalence of neuropsychiatric symptoms in dementia and MCI in a population-based study. Design: Cross-sectional study derived from the Cardiovascular Health Study, a longitudinal cohort study. Setting and Participants: A total of 3608 participants were cognitively evaluated using data collected longitudinally over 10 years and additional data collected in 1999-2000 in 4 US counties. Dementia and MCI were classified using clinical criteria and adjudicated by committee review by expert neurologists and psychiatrists. A total of 824 individuals completed the Neuropsychiatric Inventory (NPI); 362 were classified as having dementia, 320 as having MCI, and 142 did not meet criteria for MCI or dementia. Main Outcome Measure: Prevalence of neuropsychiatric symptoms, based on ratings on the NPI in the previous month and from the onset of cognitive symptoms. Results: Of the 682 individuals with dementia or MCI, 43% of MCI participants (n = 138) exhibited neuropsychiatric symptoms in the previous month (29% rated as clinically significant), with depression (20%), apathy (15%), and irritability (15%) being most common. Among the dementia participants, 75% (n = 270) had exhibited a neuropsychiatric symptom in the past month (62% were clinically significant); 55% (n = 199) reported 2 or more and 44% (n = 159) 3 or more disturbances in the past month. In participants with dementia, the most frequent disturbances were apathy (36%), depression (32%), and agitation/aggression (30%). Eighty percent of dementia participants (n = 233) and 50% of MCI participants (n = 139) exhibited at least 1 NPI symptom from the onset of cognitive symptoms. There were no differences in prevalence of neuropsychiatric symptoms between participants with Alzheimer-type dementia and those with other dementias, with the exception of aberrant motor behavior, which was more frequent in Alzheimer-type dementia (5.4% vs 1%; P = .02). Conclusions: Neuropsychiatric symptoms occur in the majority of persons with dementia over the course of the disease. These are the first population-based estimates for neuropsychiatric symptoms in MCI, indicating a high prevalence associated with this condition as well. These symptoms have serious adverse consequences and should be inquired about and treated as necessary. Study of neuropsychiatric symptoms in the context of dementia may improve our understanding of brain-behavior relationships.

Journal ArticleDOI
TL;DR: This finding suggests that reaction of cysteine thiols is followed by rapid formation of protein disulfide linkages, which are the direct sensors of inducers of the phase 2 system.
Abstract: Coordinate induction of phase 2 proteins and elevation of glutathione protect cells against the toxic and carcinogenic effects of electrophiles and oxidants. All inducers react covalently with thiols at rates that are closely related to their potencies. Inducers disrupt the cytoplasmic complex between the actin-bound protein Keap1 and the transcription factor Nrf2, thereby releasing Nrf2 to migrate to the nucleus where it activates the antioxidant response element (ARE) of phase 2 genes and accelerates their transcription. We cloned, overexpressed, and purified murine Keap1 and demonstrated on native gels the formation of complexes of Keap1 with the Neh2 domain of Nrf2 and their concentration-dependent disruption by inducers such as sulforaphane and bis(2-hydroxybenzylidene)acetone. The kinetics, stoichiometry, and order of reactivities of the most reactive of the 25 cysteine thiol groups of Keap1 have been determined by tritium incorporation from [3H]dexamethasone mesylate (an inducer and irreversible modifier of thiols) and by UV spectroscopy with sulforaphane, 2,2′-dipyridyl disulfide and 4,4′-dipyridyl disulfide (titrants of thiol groups), and two closely related Michael reaction acceptors [bis(2- and 4-hydroxybenzylidene)acetones] that differ 100-fold in inducer potency and the UV spectra of which are bleached by thiol addition. With large excesses of these reagents nearly all thiols of Keap1 react, but sequential reaction with three successive single equivalents (per cysteine residue) of dipyridyl disulfides revealed excellent agreement with pseudo-first order kinetics, rapid successive declines in reaction velocity, and the stoichiometric formation of two equivalents of thiopyridone per reacted cysteine. This finding suggests that reaction of cysteine thiols is followed by rapid formation of protein disulfide linkages. 
The most reactive residues of Keap1 (C257, C273, C288, and C297) were identified by mapping the dexamethasone-modified cysteines by mass spectrometry of tryptic peptides. These residues are located in the intervening region between BTB and Kelch repeat domains of Keap1 and probably are the direct sensors of inducers of the phase 2 system.

Journal ArticleDOI
08 Nov 2002-Science
TL;DR: A computational model is presented that describes the temporal control of NF-κB activation by the coordinated degradation and synthesis of IκB proteins and demonstrates that IκBα is responsible for strong negative feedback that allows for a fast turn-off of the NF-κB response.
Abstract: Nuclear localization of the transcriptional activator NF-κB (nuclear factor κB) is controlled in mammalian cells by three isoforms of NF-κB inhibitor protein: IκBα, -β, and -ɛ. Based on simplifying reductions of the IκB–NF-κB signaling module in knockout cell lines, we present a computational model that describes the temporal control of NF-κB activation by the coordinated degradation and synthesis of IκB proteins. The model demonstrates that IκBα is responsible for strong negative feedback that allows for a fast turn-off of the NF-κB response, whereas IκBβ and -ɛ function to reduce the system's oscillatory potential and stabilize NF-κB responses during longer stimulations. Bimodal signal-processing characteristics with respect to stimulus duration are revealed by the model and are shown to generate specificity in gene expression.
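A deliberately tiny caricature of the IκBα negative-feedback loop can illustrate the fast turn-off behavior described here: a transient stimulus degrades the inhibitor, freeing NF-κB, and free NF-κB then drives resynthesis of the inhibitor, which re-sequesters NF-κB and shuts the response off. This two-variable Euler sketch is NOT the paper's model; every rate constant and unit is invented.

```python
dt = 0.001
i = 0.5        # free IkBa inhibitor (arbitrary units)
trace = []     # free nuclear NF-kB over time
t = 0.0
while t < 20.0:
    S = 1.0 if t < 1.0 else 0.0           # transient stimulus (e.g., a short TNF pulse)
    n = 1.0 / (1.0 + 10.0 * i)            # free NF-kB under fast IkB-binding equilibrium
    di = 2.0 * n - 5.0 * S * i - 0.1 * i  # NF-kB-driven synthesis vs. degradation of IkBa
    i += dt * di
    trace.append(n)
    t += dt

peak, final = max(trace), trace[-1]
print(f"peak NF-kB ~ {peak:.2f}, late-time NF-kB ~ {final:.2f}")
```

Even this toy loop reproduces the qualitative signature in the abstract: a pulse of nuclear NF-κB during the stimulus, followed by feedback resynthesis of the inhibitor that drives activity back toward a low level.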

Journal ArticleDOI
TL;DR: In this paper, the 158 standard stars that define the u'g'r'i'z' photometric system are presented; these stars form the basis for the photometric calibration of the Sloan Digital Sky Survey.
Abstract: We present the 158 standard stars that define the u'g'r'i'z' photometric system. These stars form the basis for the photometric calibration of the Sloan Digital Sky Survey. The defining instrument system and filters, the observing process, the reduction techniques, and the software used to create the stellar network are all described. We briefly discuss the history of the star selection process, the derivation of a set of transformation equations for the UBVR_C I_C system, and plans for future work.

Journal ArticleDOI
16 Oct 2002-JAMA
TL;DR: Applying these approaches to the general population as a component of public health and clinical practice can help prevent blood pressure from increasing and can help decrease elevated blood pressure levels for those with high normal blood pressure or hypertension.
Abstract: The National High Blood Pressure Education Program Coordinating Committee published its first statement on the primary prevention of hypertension in 1993. This article updates the 1993 report, using new and further evidence from the scientific literature. Current recommendations for primary prevention of hypertension involve a population-based approach and an intensive targeted strategy focused on individuals at high risk for hypertension. These 2 strategies are complementary and emphasize 6 approaches with proven efficacy for prevention of hypertension: engage in moderate physical activity; maintain normal body weight; limit alcohol consumption; reduce sodium intake; maintain adequate intake of potassium; and consume a diet rich in fruits, vegetables, and low-fat dairy products and reduced in saturated and total fat. Applying these approaches to the general population as a component of public health and clinical practice can help prevent blood pressure from increasing and can help decrease elevated blood pressure levels for those with high normal blood pressure or hypertension.

Journal ArticleDOI
TL;DR: In this article, statistical downscaling of hydrologic extremes is considered, and future challenges are reviewed, including the development of more rigorous statistical methodology for regional analysis of extremes and the extension of Bayesian methods to more fully quantify uncertainty in extremal estimation.

Journal ArticleDOI
TL;DR: In this paper, a biologically motivated computational model of bottom-up visual selective attention was used to examine the degree to which stimulus salience guides the allocation of attention, as measured from human eye movements recorded while participants viewed a series of digitized images of complex natural and artificial scenes.

Journal ArticleDOI
TL;DR: Long-term survival following liver resection for colorectal metastases has improved significantly in recent years at this institution, and contributing factors may include the use of newer preoperative and intraoperative imaging, increased use of chemotherapy, and salvage surgical therapy.
Abstract: Objective: To examine trends in outcomes of patients undergoing resection at a single tertiary care referral center over a 16-year period. Background Data: Hepatic resection is considered the treatment of choice in selected patients with colorectal metastasis confined to the liver. Although a variety of retrospective studies have demonstrated improvements in short-term outcomes in recent years, changes in long-term survival over time are less well established. Methods: Data from 226 consecutive patients undergoing potentially curative liver resection for colorectal metastases between 1984 and 1999 were analyzed. Actuarial survival rates related to prognostic determinants were analyzed using the log-rank test. Results: The median survival for the entire cohort was 46 months, with 5- and 10-year survival rates of 40% and 26%, respectively. Ninety-three patients operated on between 1984 and 1992 were found to have an overall survival of 31% at 5 years, compared with 58% for the 133 patients operated on during the more recent period (1993-1999). Both overall and disease-free survival were significantly better in the recent time period than in the earlier period on both univariate and multivariate analyses. Other independent factors associated with improved survival included number of metastatic tumors ≤ 3, negative resection margin, and CEA < 100. Comparisons were made between time periods for a variety of patient, tumor, and treatment-related factors. Among all parameters studied, only resection type (anatomical versus nonanatomical), use of intraoperative ultrasonography, and perioperative chemotherapy administration differed between the early and recent time periods. Conclusions: Long-term survival following liver resection for colorectal metastases has improved significantly in recent years at our institution. Although the reasons for this survival trend are not clear, contributing factors may include the use of newer preoperative and intraoperative imaging, increased use of chemotherapy, and salvage surgical therapy.
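The actuarial survival rates quoted above (40% at 5 years, 26% at 10 years) come from standard product-limit survival estimation. As a hedged sketch of that machinery only, not the study's code or data, here is a minimal pure-Python Kaplan-Meier estimator run on a made-up toy cohort (times in months; event = 1 for death, 0 for censoring at last follow-up):

```python
# Minimal Kaplan-Meier (product-limit) survival sketch on hypothetical data.
def kaplan_meier(times, events):
    """Return [(time, survival)] at each distinct event (death) time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = censored = 0
        # Gather all subjects with follow-up ending at time t.
        while i < len(data) and data[i][0] == t:
            if data[i][1] == 1:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            # Multiply in the conditional probability of surviving past t.
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= deaths + censored
    return curve

# Illustrative toy cohort, NOT the study's 226 patients.
times = [6, 12, 12, 24, 30, 46, 60, 60]
events = [1, 1, 0, 1, 1, 1, 0, 0]
for t, s in kaplan_meier(times, events):
    print(f"month {t}: S = {s:.3f}")
```

A log-rank test would then compare two such curves (e.g., the 1984-1992 vs. 1993-1999 cohorts) by pooling observed-minus-expected deaths across event times.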

Journal ArticleDOI
TL;DR: The most prevalent diseases were non-food-borne gastroenteritis, food-borne illness, gastroesophageal reflux disease, and irritable bowel syndrome, followed by gallbladder disease, which had the highest annual direct costs in the United States.

Journal ArticleDOI
TL;DR: This work proposes a general framework for comparing treatments while adjusting for posttreatment variables, yielding principal effects based on principal stratification, and applies it to surrogate or biomarker endpoints, showing that current definitions of surrogacy do not generally have the desired interpretation as causal effects of treatment on outcome.
Abstract: Many scientific problems require that treatment comparisons be adjusted for posttreatment variables, but the estimands underlying standard methods are not causal effects. To address this deficiency, we propose a general framework for comparing treatments adjusting for posttreatment variables that yields principal effects based on principal stratification. Principal stratification with respect to a posttreatment variable is a cross-classification of subjects defined by the joint potential values of that posttreatment variable under each of the treatments being compared. Principal effects are causal effects within a principal stratum. The key property of principal strata is that they are not affected by treatment assignment and therefore can be used just as any pretreatment covariate, such as age category. As a result, the central property of our principal effects is that they are always causal effects and do not suffer from the complications of standard posttreatment-adjusted estimands. We discuss briefly that such principal causal effects are the link between three recent applications with adjustment for posttreatment variables: (i) treatment noncompliance, (ii) missing outcomes (dropout) following treatment noncompliance, and (iii) censoring by death. We then attack the problem of surrogate or biomarker endpoints, where we show, using principal causal effects, that all current definitions of surrogacy, even when perfectly true, do not generally have the desired interpretation as causal effects of treatment on outcome. We go on to formulate estimands based on principal stratification and principal causal effects and show their superiority.
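As a schematic illustration of the cross-classification the abstract describes (hedged: the data are made up, and the stratum labels follow the familiar treatment-noncompliance example, which the abstract lists as application (i)):

```python
# Principal stratification sketch: subjects are classified by the JOINT
# potential values of a binary posttreatment variable S under control (s0)
# and under treatment (s1). These labels are the compliance special case.
STRATA = {
    (0, 0): "never-taker",   # S = 0 under both arms
    (1, 1): "always-taker",  # S = 1 under both arms
    (0, 1): "complier",      # S = 1 only under treatment
    (1, 0): "defier",        # S = 1 only under control
}

def principal_stratum(s0, s1):
    return STRATA[(s0, s1)]

# Because (s0, s1) is defined before treatment is assigned, the stratum acts
# like a pretreatment covariate: outcome comparisons within a stratum are
# causal (principal) effects. Hypothetical subjects:
subjects = [(0, 1), (0, 1), (0, 0), (1, 1)]
counts = {}
for s0, s1 in subjects:
    label = principal_stratum(s0, s1)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # {'complier': 2, 'never-taker': 1, 'always-taker': 1}
```

The practical difficulty, which the framework addresses, is that only one of (s0, s1) is ever observed per subject, so stratum membership must be inferred rather than read off.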

Journal ArticleDOI
01 Apr 2002
TL;DR: One can classify ways to establish the interpretability of quality-of-life measures as anchor based or distribution based; anchor-based approaches rely on an independent standard, or anchor, that is itself interpretable and at least moderately correlated with the instrument being explored.
Abstract: One can classify ways to establish the interpretability of quality-of-life measures as anchor based or distribution based. Anchor-based measures require an independent standard or anchor that is itself interpretable and at least moderately correlated with the instrument being explored. One can further classify anchor-based approaches into population-focused and individual-focused measures. Population-focused approaches are analogous to construct validation and rely on multiple anchors that frame an individual's response in terms of the entire population (eg, a group of patients with a score of 40 has a mortality of 20%). Anchors for population-based approaches include status on a single item, diagnosis, symptoms, disease severity, and response to treatment. Individual-focused approaches are analogous to criterion validation. These methods, which rely on a single anchor and establish a minimum important difference in change in score, require 2 steps. The first step establishes the smallest change in score that patients consider, on average, to be important (the minimum important difference). The second step estimates the proportion of patients who have achieved that minimum important difference. Anchors for the individual-focused approach include global ratings of change within patients and global ratings of differences between patients. Distribution-based methods rely on expressing an effect in terms of the underlying distribution of results. Investigators may express effects in terms of between-person standard deviation units, within-person standard deviation units, and the standard error of measurement. No single approach to interpretability is perfect. Use of multiple strategies is likely to enhance the interpretability of any particular instrument.
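The distribution-based quantities listed at the end of the abstract reduce to simple formulas: an effect expressed in standard-deviation units divides the mean change by the relevant SD, and the standard error of measurement is SD × sqrt(1 − reliability). A hedged numeric sketch, with entirely hypothetical values (no real instrument is implied):

```python
import math

def effect_size(mean_change, sd):
    """Effect expressed in standard-deviation units of the score."""
    return mean_change / sd

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - reliability): the scale's measurement noise."""
    return sd * math.sqrt(1 - reliability)

sd = 10.0           # hypothetical baseline SD of the instrument's score
change = 5.0        # hypothetical mean change after treatment
reliability = 0.75  # hypothetical test-retest reliability

print(effect_size(change, sd))                         # 0.5 SD units
print(standard_error_of_measurement(sd, reliability))  # 5.0 score points
```

Changes smaller than the SEM are hard to distinguish from measurement noise, which is why the abstract treats it as one yardstick for interpretability.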

Journal ArticleDOI
TL;DR: The authors carried out this review to assess the current epidemiologic evidence on heat-related mortality and conclude that, as the US population becomes more urbanized and the number of elderly people continues to increase, the threat of heat-related mortality will probably become more severe.
Abstract: The effect of elevated temperature on mortality is a public health threat of considerable magnitude. Every year, a large number of hospitalizations and deaths occur in association with exposure to elevated ambient temperatures (1, 2). An average of 400 deaths annually are counted as directly related to heat in the United States, with the highest death rates occurring in persons aged 65 years or more (3). The actual magnitude of heat-related mortality may be notably greater than what has been reported, since we do not have widely accepted criteria for determining heat-related death (4, 5–7), and heat may not be listed on the death certificate as causing or contributing to death. Persons living in urban environments may be at particularly increased risk for mortality from ambient heat exposure, since urban areas typically have higher heat indexes (combinations of temperature and humidity (8)) than surrounding suburban or rural areas, a phenomenon known as the “urban heat island effect” (9). Moreover, urban areas retain heat during the night more efficiently (10). Thus, as the US population becomes more urbanized and the number of elderly people continues to increase (11), the threat of heat-related mortality will probably become more severe. Many of these deaths may be preventable with adequate warning and an appropriate response to heat emergencies, but preventive efforts are complicated by the short time interval that may elapse between high temperature exposure and death. Thus, prevention programs must be based around prospective and rapid identification of high-risk conditions and persons. We carried out this review to assess the current epidemiologic evidence available for this purpose.

Journal ArticleDOI
TL;DR: The authors found a strong association of the temperature-mortality relation with latitude, with a greater effect of colder temperatures on mortality risk in more-southern cities and of warmer temperatures in more-northern cities.
Abstract: Episodes of extremely hot or cold temperatures are associated with increased mortality. Time-series analyses show an association between temperature and mortality across a range of less extreme temperatures. In this paper, the authors describe the temperature-mortality association for 11 large eastern US cities in 1973-1994 by estimating the relative risks of mortality using log-linear regression analysis for time-series data and by exploring city characteristics associated with variations in this temperature-mortality relation. Current and recent days' temperatures were the weather components most strongly predictive of mortality, and mortality risk generally decreased as temperature increased from the coldest days to a certain threshold temperature, which varied by latitude, above which mortality risk increased as temperature increased. The authors also found a strong association of the temperature-mortality relation with latitude, with a greater effect of colder temperatures on mortality risk in more-southern cities and of warmer temperatures in more-northern cities. The percentage of households with air conditioners in the south and heaters in the north, which serve as indicators of socioeconomic status of the city population, also predicted weather-related mortality. The model developed in this analysis is potentially useful for projecting the consequences of climate-change scenarios and offering insights into susceptibility to the adverse effects of weather.
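In a log-linear model of the kind the abstract describes, log(expected daily deaths) is linear in temperature above a threshold, so the relative risk for a rise of d degrees is exp(beta × d). A hedged sketch with a hypothetical coefficient (not an estimate from the paper):

```python
import math

def relative_risk(beta, delta_temp):
    """RR for a delta_temp-degree increase in a log-linear mortality model."""
    return math.exp(beta * delta_temp)

beta = 0.02  # hypothetical log-RR per degree above the city's threshold
for d in (1, 5, 10):
    print(f"+{d} degrees above threshold: RR = {relative_risk(beta, d):.3f}")
```

The paper's city-specific thresholds (which vary with latitude) correspond to the temperature at which this above-threshold slope takes over from the cold-side decline.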

Journal ArticleDOI
TL;DR: A significant proportion of occult SDB in the general population would be missed if screening or case finding were based solely on increased body habitus or male sex, particularly in older adults.
Abstract: Background: Sleep-disordered breathing (SDB) is common but largely undiagnosed in the general population. Information on demographic patterns of SDB occurrence and its predictive factors in the general population is needed to target high-risk groups that may benefit from diagnosis. Methods: The sample comprised 5615 community-dwelling men and women aged between 40 and 98 years who were enrolled in the Sleep Heart Health Study. Data were collected by questionnaire, clinical examinations, and in-home polysomnography. Sleep-disordered breathing status was based on the average number of apnea and hypopnea episodes per hour of sleep (apnea-hypopnea index [AHI]). We used multiple logistic regression modeling to estimate cross-sectional associations of selected participant characteristics with SDB defined by an AHI of 15 or greater. Results: Male sex, age, body mass index, neck girth, snoring, and repeated breathing pause frequency were independent, significant correlates of an AHI of 15 or greater. People reporting habitual snoring, loud snoring, and frequent breathing pauses were 3 to 4 times more likely to have an AHI of 15 or greater vs an AHI less than 15, but there were weaker associations for other factors with an AHI of 15 or greater. The odds ratios (95% confidence interval) for an AHI of 15 or greater vs an AHI less than 15 were 1.6 and 1.5, respectively, for 1-SD increments in body mass index and neck girth. As age increased, the magnitude of associations for SDB and body habitus, snoring, and breathing pauses decreased. Conclusions: A significant proportion of occult SDB in the general population would be missed if screening or case finding were based solely on increased body habitus or male sex. Breathing pauses and obesity may be particularly insensitive for identifying SDB in older people. A better understanding of predictive factors for SDB, particularly in older adults, is needed.
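The per-SD odds ratios reported above compose multiplicatively on the log-odds scale: an OR of 1.6 per 1-SD increment in body mass index corresponds to a logistic coefficient b = ln(1.6), so a 2-SD increment carries OR = exp(2b) = 1.6². A hedged sketch of that arithmetic, with a hypothetical standard error for the confidence-interval helper:

```python
import math

def or_per_k_sd(or_per_sd, k):
    """Odds ratio for a k-SD increment, given the per-1-SD odds ratio."""
    return math.exp(k * math.log(or_per_sd))

def wald_ci(or_point, se_log_or):
    """Usual Wald 95% CI on the odds-ratio scale: exp(b +/- 1.96*se)."""
    b = math.log(or_point)
    return math.exp(b - 1.96 * se_log_or), math.exp(b + 1.96 * se_log_or)

print(or_per_k_sd(1.6, 2))   # ~2.56: two SDs of BMI more than double the odds
lo, hi = wald_ci(1.6, 0.05)  # 0.05 is a hypothetical SE on the log-OR scale
print(round(lo, 2), round(hi, 2))
```

This is only the interpretation layer; fitting the multiple logistic model itself would of course estimate b and its standard error jointly with the other covariates.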

Book
01 Jan 2002
TL;DR: In this book, Blyth analyzes the two periods of deep-seated institutional change that characterized the twentieth century, the 1930s and the 1970s, and demonstrates the critical role economic ideas played in making institutional change possible.
Abstract: This book picks up where Karl Polanyi's study of economic and political change left off. Building upon Polanyi's conception of the double movement, Blyth analyzes the two periods of deep-seated institutional change that characterized the twentieth century: the 1930s and the 1970s. Blyth views both sets of changes as part of the same dynamic. In the 1930s labor reacted against the exigencies of the market and demanded state action to mitigate the market's effects by 'embedding liberalism.' In the 1970s, those who benefited least from such 'embedding' institutions, namely business, reacted against these constraints and sought to overturn that institutional order. Blyth demonstrates the critical role economic ideas played in making institutional change possible. Great Transformations rethinks the relationship between uncertainty, ideas, and interests, achieving profound new insights on how, and under what conditions, institutional change takes place.