
Showing papers by "Glenn M. Chertow published in 2010"


Journal ArticleDOI
TL;DR: In this article, the authors used the Coronary Heart Disease (CHD) Policy Model to quantify the benefits of potentially achievable, population-wide reductions in dietary salt of up to 3 g per day (1200 mg of sodium per day).
Abstract: Background The U.S. diet is high in salt, with the majority coming from processed foods. Reducing dietary salt is a potentially important target for the improvement of public health. Methods We used the Coronary Heart Disease (CHD) Policy Model to quantify the benefits of potentially achievable, population-wide reductions in dietary salt of up to 3 g per day (1200 mg of sodium per day). We estimated the rates and costs of cardiovascular disease in subgroups defined by age, sex, and race; compared the effects of salt reduction with those of other interventions intended to reduce the risk of cardiovascular disease; and determined the cost-effectiveness of salt reduction as compared with the treatment of hypertension with medications. Results Reducing dietary salt by 3 g per day is projected to reduce the annual number of new cases of CHD by 60,000 to 120,000, stroke by 32,000 to 66,000, and myocardial infarction by 54,000 to 99,000 and to reduce the annual number of deaths from any cause by 44,000 to 92,000. All segments of the population would benefit, with blacks benefiting proportionately more, women benefiting particularly from stroke reduction, older adults from reductions in CHD events, and younger adults from lower mortality rates. The cardiovascular benefits of reduced salt intake are on par with the benefits of population-wide reductions in tobacco use, obesity, and cholesterol levels. A regulatory intervention designed to achieve a reduction in salt intake of 3 g per day would save 194,000 to 392,000 quality-adjusted life-years and $10 billion to $24 billion in health care costs annually. Such an intervention would be cost-saving even if only a modest reduction of 1 g per day were achieved gradually between 2010 and 2019 and would be more cost-effective than using medications to lower blood pressure in all persons with hypertension. 
Conclusions Modest reductions in dietary salt could substantially reduce cardiovascular events and medical costs and should be a public health target.

1,128 citations


Journal ArticleDOI
TL;DR: Frequent hemodialysis, as compared with conventional hemodialysis, was associated with favorable results with respect to the composite outcomes of death or change in left ventricular mass and death or change in a physical-health composite score but prompted more frequent interventions related to vascular access.
Abstract: Background In this randomized clinical trial, we aimed to determine whether increasing the frequency of in-center hemodialysis would result in beneficial changes in left ventricular mass, self-reported physical health, and other intermediate outcomes among patients undergoing maintenance hemodialysis. Methods Patients were randomly assigned to undergo hemodialysis six times per week (frequent hemodialysis, 125 patients) or three times per week (conventional hemodialysis, 120 patients) for 12 months. The two coprimary composite outcomes were death or change (from baseline to 12 months) in left ventricular mass, as assessed by cardiac magnetic resonance imaging, and death or change in the physical-health composite score of the RAND 36-item health survey. Secondary outcomes included cognitive performance; self-reported depression; laboratory markers of nutrition, mineral metabolism, and anemia; blood pressure; and rates of hospitalization and of interventions related to vascular access. Results Patients in the frequent-hemodialysis group averaged 5.2 sessions per week; the weekly standard Kt/V(urea) (the product of the urea clearance and the duration of the dialysis session normalized to the volume of distribution of urea) was significantly higher in the frequent-hemodialysis group than in the conventional-hemodialysis group (3.54±0.56 vs. 2.49±0.27). Frequent hemodialysis was associated with significant benefits with respect to both coprimary composite outcomes (hazard ratio for death or increase in left ventricular mass, 0.61; 95% confidence interval [CI], 0.46 to 0.82; hazard ratio for death or a decrease in the physical-health composite score, 0.70; 95% CI, 0.53 to 0.92). Patients randomly assigned to frequent hemodialysis were more likely to undergo interventions related to vascular access than were patients assigned to conventional hemodialysis (hazard ratio, 1.71; 95% CI, 1.08 to 2.73). 
Frequent hemodialysis was associated with improved control of hypertension and hyperphosphatemia. There were no significant effects of frequent hemodialysis on cognitive performance, self-reported depression, serum albumin concentration, or use of erythropoiesis-stimulating agents. Conclusions Frequent hemodialysis, as compared with conventional hemodialysis, was associated with favorable results with respect to the composite outcomes of death or change in left ventricular mass and death or change in a physical-health composite score but prompted more frequent interventions related to vascular access. (Funded by the National Institute of Diabetes and Digestive and Kidney Diseases and others; ClinicalTrials.gov number, NCT00264758.).
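The weekly standard Kt/V reported above is built from per-session urea clearance, treatment time, and the urea distribution volume. As a rough illustration of the underlying single-session quantity, here is a minimal sketch; the function name is an assumption, and the trial's weekly standardized dose additionally accounts for session frequency and urea kinetics.

```python
def session_kt_v(urea_clearance_ml_min, session_minutes, urea_volume_l):
    """Single-session Kt/V: urea clearance (K) times session length (t),
    normalized to the urea distribution volume (V).

    Illustrative only; the trial's weekly standard Kt/V additionally
    accounts for session frequency and urea rebound.
    """
    cleared_litres = urea_clearance_ml_min * session_minutes / 1000.0  # K*t in litres
    return cleared_litres / urea_volume_l
```

For example, a clearance of 250 mL/min over a 240-minute session with V = 40 L gives a session Kt/V of 1.5; delivering clearance over six weekly sessions rather than three is what drives the higher weekly standard Kt/V seen in the frequent-hemodialysis group.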

878 citations


Journal ArticleDOI
TL;DR: In critically-ill patients, the dilution of sCr by fluid accumulation may lead to underestimation of the severity of AKI and increases the time required to identify a 50% relative increase in sCr.
Abstract: Introduction: Serum creatinine concentration (sCr) is the marker used for diagnosing and staging acute kidney injury (AKI) in the RIFLE and AKIN classification systems, but is influenced by several factors including its volume of distribution. We evaluated the effect of fluid accumulation on sCr to estimate severity of AKI. Methods: In 253 patients recruited from a prospective observational study of critically-ill patients with AKI, we calculated cumulative fluid balance and computed a fluid-adjusted sCr concentration reflecting the effect of volume of distribution during the development phase of AKI. The time to reach a relative 50% increase from the reference sCr using the crude and adjusted sCr was compared. We defined late recognition to estimate severity of AKI when this time interval to reach 50% relative increase between the crude and adjusted sCr exceeded 24 hours. Results: The median cumulative fluid balance increased from 2.7 liters on day 2 to 6.5 liters on day 7. The difference between adjusted and crude sCr was significantly higher at each time point and progressively increased from a median difference of 0.09 mg/dL to 0.65 mg/dL after six days. Sixty-four (25%) patients met criteria for a late recognition to estimate severity progression of AKI. This group of patients had a lower urine output and a higher daily and cumulative fluid balance during the development phase of AKI. They were more likely to need dialysis but showed no difference in mortality compared to patients who did not meet the criteria for late recognition of severity progression. Conclusions: In critically-ill patients, the dilution of sCr by fluid accumulation may lead to underestimation of the severity of AKI and increases the time required to identify a 50% relative increase in sCr. A simple formula to correct sCr for fluid balance can improve staging of AKI and provide a better parameter for earlier recognition of severity progression.
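The correction described here scales measured sCr by the relative expansion of its volume of distribution. A minimal sketch is below, assuming total body water is approximated as 60% of body weight; the function name and the TBW fraction are illustrative assumptions, not the study's exact protocol.

```python
def fluid_adjusted_scr(scr_mg_dl, cumulative_fluid_balance_l, weight_kg,
                       tbw_fraction=0.6):
    """Adjust serum creatinine for dilution by accumulated fluid.

    Total body water (TBW) is approximated as tbw_fraction * weight;
    measured sCr is scaled up by the fractional expansion of TBW.
    """
    total_body_water_l = tbw_fraction * weight_kg
    return scr_mg_dl * (1.0 + cumulative_fluid_balance_l / total_body_water_l)
```

With a cumulative balance of 6 L in a 100 kg patient (TBW ≈ 60 L), a measured sCr of 1.0 mg/dL adjusts to 1.1 mg/dL, the same order as the median crude-versus-adjusted differences of 0.09 to 0.65 mg/dL reported above.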

362 citations


Journal ArticleDOI
TL;DR: Cognitive impairment, especially impaired executive function, is common among hemodialysis patients, but with the exception of CNS-active medications, is not strongly associated with several ESRD- and dialysis-associated factors.
Abstract: Background and objectives: Cognitive impairment is common among persons with ESRD, but the underlying mechanisms are unknown. This study evaluated the prevalence of cognitive impairment and association with modifiable ESRD- and dialysis-associated factors in a large group of hemodialysis patients. Design, setting, participants, & measurements: Cross-sectional analyses were conducted on baseline data collected from 383 subjects participating in the Frequent Hemodialysis Network trials. Global cognitive impairment was defined as a score <80 on the Modified Mini-Mental State Exam, and impaired executive function was defined as a score ≥300 seconds on the Trailmaking B test. Five main categories of explanatory variables were examined: urea clearance, nutritional markers, hemodynamic measures, anemia, and central nervous system (CNS)-active medications. Results: Subjects had a mean age of 51.6 ± 13.3 years and a median ESRD vintage of 2.6 years. Sixty-one subjects (16%) had global cognitive impairment, and 110 subjects (29%) had impaired executive function. In addition to several nonmodifiable factors, the use of H1-receptor antagonists and opioids were associated with impaired executive function. No strong association was found between several other potentially modifiable factors associated with ESRD and dialysis therapy, such as urea clearance, proxies of dietary protein intake and other nutritional markers, hemodynamic measures, and anemia with global cognition and executive function after adjustment for case-mix factors. Conclusions: Cognitive impairment, especially impaired executive function, is common among hemodialysis patients, but with the exception of CNS-active medications, is not strongly associated with several ESRD- and dialysis-associated factors.

163 citations


Journal ArticleDOI
TL;DR: Physical activity was found to be extremely low, with scores for all age and gender categories below the 5th percentile of healthy individuals; 95% of patients had scores consonant with low fitness.

162 citations


Journal ArticleDOI
TL;DR: 25-Hydroxyvitamin D deficiency was associated with substantially higher odds of frailty among older Americans.
Abstract: Wilhelm-Leen ER, Hall YN, deBoer IH, Chertow GM. (Stanford University School of Medicine; University of Washington; Stanford University School of Medicine, Palo Alto, CA, USA). Vitamin D deficiency and frailty in older Americans. J Intern Med 2010; 268: 171–180. Objective. To explore the relation between 25-hydroxyvitamin D deficiency and frailty. Frailty is a multidimensional phenotype that describes declining physical function and a vulnerability to adverse outcomes in the setting of physical stress such as illness or hospitalization. Low serum concentrations of 25-hydroxyvitamin D are known to be associated with multiple chronic diseases such as cardiovascular disease and diabetes, in addition to all-cause mortality. Design. Using data from the Third National Health and Nutrition Examination Survey (NHANES III), we evaluated the association between low serum 25-hydroxyvitamin D concentration and frailty, defined according to a set of criteria derived from a definition previously described and validated. Subjects. Nationally representative survey of noninstitutionalized US residents collected between 1988 and 1994. Results. 25-Hydroxyvitamin D deficiency, defined as a serum concentration <15 ng/mL, was associated with a 3.7-fold increase in the odds of frailty amongst whites and a fourfold increase in the odds of frailty amongst non-whites. This association persisted after sensitivity analyses adjusting for season of the year and latitude of residence, intended to reduce misclassification of persons as 25-hydroxyvitamin D deficient or insufficient. Conclusion. Low serum 25-hydroxyvitamin D concentrations are associated with frailty amongst older adults.

154 citations


Journal Article
TL;DR: A systematic review found information relating to the effectiveness and safety of the following interventions: cinacalcet, darbepoetin, erythropoietin, haemodialysis, increased-dose peritoneal dialysis, mupirocin, sevelamer, standard-dose dialysis, and statins.
Abstract: What are the effects of different doses for peritoneal dialysis? What are the effects of different doses and membrane fluxes for hemodialysis? What are the effects of interventions aimed at preventing secondary complications?

108 citations


Journal ArticleDOI
TL;DR: Infection-related hospitalization is frequent in older patients on dialysis therapy, and a broad range of infections, many unrelated to dialysis access, result in hospitalization in this population.

95 citations


Journal ArticleDOI
TL;DR: Among patients hospitalized with AKI, weekend admission is associated with a higher risk for death compared with admission on a weekday, and increased mortality was also associated with weekend admission among patients with AKI as a secondary diagnosis across a spectrum of co-existing medical diagnoses.
Abstract: Admission to the hospital on weekends is associated with increased mortality for several acute illnesses. We investigated whether patients admitted on a weekend with acute kidney injury (AKI) were more likely to die than those admitted on a weekday. Using the Nationwide Inpatient Sample, a large database of admissions to acute care, nonfederal hospitals in the United States, we identified 963,730 admissions with a diagnosis of AKI between 2003 and 2006. Of these, 214,962 admissions (22%) designated AKI as the primary reason for admission (45,203 on a weekend and 169,759 on a weekday). We used logistic regression models to examine the adjusted odds of in-hospital mortality associated with weekend versus weekday admission. Compared with admission on a weekday, patients admitted with a primary diagnosis of AKI on a weekend had a higher odds of death [adjusted odds ratio (OR) 1.07, 95% confidence interval (CI) 1.02 to 1.12]. The risk for death with admission on a weekend for AKI was more pronounced in smaller hospitals (adjusted OR 1.17, 95% CI 1.03 to 1.33) compared with larger hospitals (adjusted OR 1.07, 95% CI 1.01 to 1.13). Increased mortality was also associated with weekend admission among patients with AKI as a secondary diagnosis across a spectrum of co-existing medical diagnoses. In conclusion, among patients hospitalized with AKI, weekend admission is associated with a higher risk for death compared with admission on a weekday.
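The adjusted odds ratios quoted here come from logistic regression: a coefficient β on the weekend-admission indicator maps to an odds ratio exp(β), with a Wald confidence interval constructed on the log-odds scale. A minimal sketch of that mapping (the function names are illustrative):

```python
import math

def odds_ratio(beta):
    """Odds ratio implied by a logistic-regression coefficient."""
    return math.exp(beta)

def or_confidence_interval(beta, std_err, z=1.96):
    """Approximate 95% Wald CI for the odds ratio: exponentiate the
    interval for beta computed on the log-odds scale."""
    return (math.exp(beta - z * std_err), math.exp(beta + z * std_err))
```

An OR of 1.07 corresponds to β = ln(1.07) ≈ 0.068, and intervals such as 1.02 to 1.12 are symmetric multiplicatively (on the log scale) rather than additively.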

94 citations


Journal ArticleDOI
TL;DR: This work modeled 245 hemodialysis sessions in 210 patients enrolled in the Frequent Hemodialysis Network Daily and Nocturnal clinical trials and predicted standard Kt/V with a high level of accuracy, even when substantial fluid removal and residual urea clearance were present.

80 citations


Journal ArticleDOI
TL;DR: In patients with acute kidney injury, glomerular filtration rate estimating equations can be improved by incorporating data on creatinine generation and fluid balance, which could improve evaluation and management and guide interventions.
Abstract: Background. In critically ill patients with acute kidney injury, estimates of kidney function are used to modify drug dosing, adjust nutritional therapy and provide dialytic support. However, estimating glomerular filtration rate is challenging due to fluctuations in kidney function, creatinine production and fluid balance. We hypothesized that commonly used glomerular filtration rate prediction equations overestimate kidney function in patients with acute kidney injury and that improved estimates could be obtained by methods incorporating changes in creatinine generation and fluid balance. Methods. We analysed data from a multicentre observational study of acute kidney injury in critically ill patients. We identified 12 non-dialysed, non-oliguric patients with consecutive increases in creatinine for at least 3 and up to 7 days who had measurements of urinary creatinine clearance. Glomerular filtration rate was estimated by the Cockcroft–Gault, Modification of Diet in Renal Disease and Jelliffe equations, and by the Jelliffe equation with creatinine adjusted for fluid balance (Modified Jelliffe), and compared to measured urinary creatinine clearance. Results. Glomerular filtration rates estimated by the Jelliffe and Modification of Diet in Renal Disease equations correlated best with urinary creatinine clearances. Estimated glomerular filtration rate by Cockcroft–Gault, Modification of Diet in Renal Disease and Jelliffe overestimated urinary creatinine clearance by 80%, 33% and 10%, respectively, while Modified Jelliffe underestimated it by 2%. Conclusion. In patients with acute kidney injury, glomerular filtration rate estimating equations can be improved by incorporating data on creatinine generation and fluid balance. A better assessment of glomerular filtration rate in acute kidney injury could improve evaluation and management and guide interventions.
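Of the equations compared above, Cockcroft–Gault is the simplest to state. A sketch follows; the "Modified Jelliffe" idea of substituting a creatinine corrected for fluid balance can in principle be applied to any such estimator. The function name is an assumption, and this is not the study's exact implementation.

```python
def cockcroft_gault_crcl(age_years, weight_kg, scr_mg_dl, female=False):
    """Cockcroft-Gault creatinine clearance estimate in mL/min.

    CrCl = (140 - age) * weight / (72 * sCr), multiplied by 0.85 for women.
    In non-steady-state AKI this equation overestimated measured urinary
    creatinine clearance by ~80% in the study above.
    """
    crcl = (140.0 - age_years) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl
```

Because creatinine appears in the denominator, substituting a creatinine corrected upward for positive fluid balance lowers the estimate, moving it toward the measured clearance.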

Journal ArticleDOI
TL;DR: Shorter hemodialysis sessions were associated with higher mortality when marginal structural analysis was used to adjust for time-dependent confounding, with consistent findings across prespecified subgroups.

Journal ArticleDOI
TL;DR: When normalized to body surface area rather than V, the dose of dialysis in women in the HEMO Study was substantially lower than in men, consistent with the hypothesis that when dialysis dose is expressed as Kt/V, women, due to their lower V/SA ratio, require a higher amount than men.
Abstract: Background and objectives: In the Hemodialysis (HEMO) Study, the lower death rate in women but not in men assigned to the higher dose (Kt/V) could have resulted from use of “V” as the normalizing factor, since women have a lower anthropometric V per unit of surface area (V/SA) than men. Design, setting, participants, & measurements: The effect of Kt/V on mortality was re-examined after normalizing for surface area and expressing dose as surface area normalized standard Kt/V (SAn-stdKt/V). Results: Both men and women in the high-dose group received approximately 16% more dialysis (when expressed as SAn-stdKt/V) than the controls. SAn-stdKt/V clustered into three levels: 2.14/wk for conventional dose women, 2.44/wk for conventional dose men or 2.46/wk for high-dose women, and 2.80/wk for high-dose men. V/SA was associated with the effect of dose assignment on the risk of death; above 20 L/m2, the mortality hazard ratio = 1.23 (0.99 to 1.53); below 20 L/m2, hazard ratio = 0.78 (0.65 to 0.95), P = 0.002. Within gender, V/SA did not modify the effect of dose on mortality. Conclusions: When normalized to body surface area rather than V, the dose of dialysis in women in the HEMO Study was substantially lower than in men. The lowest surface-area-normalized dose was received by women randomized to the conventional dose arm, possibly explaining the sex-specific response to dialysis dose. Results are consistent with the hypothesis that when dialysis dose is expressed as Kt/V, women, due to their lower V/SA ratio, require a higher amount than men.
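The V/SA ratio central to this analysis combines the anthropometric urea volume with body surface area, commonly computed with the DuBois formula. Below is a minimal sketch of the ratio plus an illustrative rescaling of weekly standard Kt/V to a reference V/SA; the reference value of 20 L/m² (borrowed from the threshold above) and the function names are assumptions, not the study's exact method.

```python
def dubois_bsa(height_cm, weight_kg):
    """DuBois body surface area in m^2."""
    return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)

def v_per_sa(urea_volume_l, bsa_m2):
    """Urea distribution volume per unit body surface area (L/m^2)."""
    return urea_volume_l / bsa_m2

def sa_normalized_std_ktv(std_ktv_week, urea_volume_l, bsa_m2,
                          reference_v_per_sa=20.0):
    """Rescale weekly standard Kt/V to a reference V/SA ratio.

    SA-normalized dose is proportional to stdKt/V * (V/SA), so a patient
    with V/SA below the reference (typical of women) receives a lower
    surface-area-normalized dose for the same Kt/V.
    """
    return std_ktv_week * v_per_sa(urea_volume_l, bsa_m2) / reference_v_per_sa
```

This direction of the rescaling matches the finding above: for the same Kt/V, women's lower V/SA implies a lower surface-area-normalized dose.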

Journal ArticleDOI
TL;DR: Holter findings in patients on hemodialysis are characterized by sympathetic overactivity and vagal withdrawal and are associated with higher LVM and impaired physical performance and understanding the spectrum of autonomic heart rate modulation and its determinants could help to guide preventive and therapeutic strategies.
Abstract: Background and objectives: Cardiovascular events are common in patients with ESRD. Whether sympathetic overactivity or vagal withdrawal contribute to cardiovascular events is unclear. We determined the general prevalence and clinical correlates of heart rate variability in patients on hemodialysis. Design, setting, participants, & measurements: We collected baseline information on demographics, clinical conditions, laboratory values, medications, physical performance, left ventricular mass (LVM), and 24-hour Holter monitoring on 239 subjects enrolled in the Frequent Hemodialysis Network Daily Trial. Results: The mean R-R interval was 812 ± 217 ms. The SD of R-R intervals was 79.1 ± 40.3 ms. Spectral power analyses showed low-frequency (sympathetic modulation of heart rate) and high-frequency power (HF; vagal modulation of heart rate) to be 106.0 (interquartile range, 48.0 to 204 ms2) and 42.4 ms2 (interquartile range, 29.4 to 56.3 ms2), respectively. LVM was inversely correlated with log HF (−0.02 [−0.0035; −0.0043]) and the R-R interval (−1.00 [−1.96; −0.032]). Physical performance was associated with mean R-R intervals (1.98 [0.09; 3.87]) and SD of R-R intervals (0.58 [0.049; 1.10]). After adjustment for age, race, ESRD vintage, diabetes, and physical performance, the relationship between log HF and LVM (per 10 g) remained significant (−0.025 [−0.042; −0.0085]). Conclusions: Holter findings in patients on hemodialysis are characterized by sympathetic overactivity and vagal withdrawal and are associated with higher LVM and impaired physical performance. Understanding the spectrum of autonomic heart rate modulation and its determinants could help to guide preventive and therapeutic strategies.

Journal ArticleDOI
TL;DR: Adults with CKD stages 3 to 5 cared for within an urban public health system were relatively young and predominantly nonwhite, both factors associated with a higher risk of progression to ESRD.
Abstract: Background and objectives: In the United States, relatively little is known about clinical outcomes of chronic kidney disease (CKD) in vulnerable populations utilizing public health systems. The primary study objectives were to describe patient characteristics, incident ESRD, and mortality in adults with nondialysis-dependent CKD receiving care in the health care safety net. Design, setting, participants, & measurements: Time to ESRD and time to death were examined among a cohort of 15,353 ambulatory adults with nondialysis-dependent CKD from the Community Health Network of San Francisco. Results: The mean age of the CKD cohort was 59.0 ± 13.8 years; 50% of the cohort was younger than 60 years and 26% was younger than 50 years. Most (72%) were members of nonwhite racial-ethnic groups, 73% were indigent (annual income <$15,000) and 18% were uninsured. In adjusted analyses, blacks [hazard ratio (95% confidence interval), 4.00 (2.99 to 5.35)], Hispanics [2.20 (1.46 to 3.30)], and Asians/Pacific Islanders [3.84 (2.73 to 5.40)] had higher risks of progression to ESRD than non-Hispanic whites. The higher risk of progression to ESRD among nonwhite compared with white persons with CKD was not explained by lower relative mortality. Conclusions: Adults with CKD stages 3 to 5 cared for within an urban public health system were relatively young and predominantly nonwhite—both factors associated with a higher risk of progression to ESRD. These findings call for targeted efforts to assess the burden and progression of CKD within other public and safety-net health systems in this country.

Journal ArticleDOI
TL;DR: Hospital days per patient-year were statistically and clinically significantly lower among nonprofit dialysis providers, suggesting that the indirect incentives in Medicare's current payment system may be insufficient for for-profit providers to achieve optimal patient outcomes.
Abstract: Several studies have examined differences in mortality rates among for-profit and nonprofit dialysis facilities. These studies have been widely criticized for having insufficiently considered risk adjustment and possible referral bias. A recent comprehensive analysis conducted by Brooks et al. (2006) utilized an instrumental variable approach and demonstrated no significant difference in mortality rates among for-profit and nonprofit dialysis facilities. Because mortality is a terminal event that happens infrequently (at most once for each patient), we are concerned that statistical tests lack sufficient power to detect differences in mortality among for-profits and nonprofits. This paper considers a more frequent event, days in the hospital, to test whether there are significant differences among for-profit and nonprofit dialysis. Selecting a clinical outcome for comparing for-profit and nonprofit dialysis providers should be based on a theoretical justification of how the behavior of providers differs based on their profit status and leads to differences in that outcome. The most common justification is based on the assumption of different objectives: for-profits are profit maximizers while nonprofits adopt public interest goals (Horwitz 2007). Therefore, for-profits will be more likely than nonprofits to provide profitable services and less likely to provide unprofitable ones. Empirical evidence supporting that theoretical justification in the context of nonprofit and for-profit hospitals has been reported previously. To apply this theoretical justification in the context of dialysis we need to identify a clinical outcome that is inadequately rewarded by the existing reimbursement system: an outcome in which the financial rewards to the provider from interventions that aim to improve it are less than the cost of these interventions. Days in the hospital could provide such a clinical outcome. 
Preventing infections and other complications in dialysis patients that lead to lengthy hospitalizations is within the realm of activities undertaken by dialysis providers. However, the financial rewards are modest in the form of avoiding missed treatments while the costs can be considerable. If the costs exceed the rewards, then we expect for-profit providers to have higher hospitalization days than nonprofit providers. If the costs are less than the rewards, we would expect the opposite. One would expect relatively high costs of hospitalization avoidance strategies, as these would generally require input from professionals (e.g., physicians, pharmacists, or registered nurses, functioning to monitor or screen for drug–drug interactions or problems with vascular access) whose services would be more expensive than those conducted for the routine provision of dialysis, which are principally performed by technicians. We explored these issues using Medicare data to test for differences in hospital outcomes among patients treated in for-profit versus nonprofit facilities.

Journal ArticleDOI
TL;DR: Subjects enrolled in ADVANCE have extensive CAC at baseline; the study should help determine whether cinacalcet attenuates progression of vascular calcification.
Abstract: Background The ADVANCE (A Randomized Study to Evaluate the Effects of Cinacalcet plus Low-Dose Vitamin D on Vascular Calcification in Subjects with Chronic Kidney Disease Receiving Haemodialysis) Study objective is to assess the effect of cinacalcet plus low-dose active vitamin D versus flexible dosing of active vitamin D on progression of coronary artery calcification (CAC) in haemodialysis patients. We report the ADVANCE Study design and baseline subject characteristics. Methods ADVANCE is a multinational, multicentre, randomized, open-label study. Adult haemodialysis patients with moderate to severe secondary hyperparathyroidism (intact parathyroid hormone [iPTH] >300 pg/mL or bio-intact PTH >160 pg/mL) and baseline CAC score ≥30 were stratified by CAC score (30–399, 400–999, ≥1000) and randomized in a 1:1 ratio to cinacalcet (30–180 mg/day) plus low-dose active vitamin D (cinacalcet group) or flexible dosing of active vitamin D alone (control). The study had three phases: screening, 20-week dose titration and 32-week follow-up. CAC scores obtained by cardiac computed tomography were determined at screening and weeks 28 and 52. The primary end point was percentage change in CAC score from baseline to Week 52. Results Subjects (n = 360) were randomized to cinacalcet or control. Mean age was 61.5 years, 43% were women, and median dialysis vintage was 36.7 months (range, 2.7–351.5 months). The baseline geometric mean CAC score by the Agatston method was 548.7 (95% confidence interval, 480.5–626.6). Baseline CAC score was independently associated with age, sex, dialysis vintage, diabetes and iPTH. Subjects also had extensive aortic and valvular calcification at baseline. Conclusions Subjects enrolled in ADVANCE have extensive CAC at baseline. The ADVANCE Study should help determine whether cinacalcet attenuates progression of vascular calcification.

Journal ArticleDOI
TL;DR: The data showed that a population-wide reduction in dietary salt of 3 g per day (1200 mg of sodium per day) would reduce the annual numbers of new cases of CHD, stroke, and myocardial infarction, and would reduce the annual number of deaths from any cause by 44,000 to 92,000.
Abstract: Extensive evidence links high salt intake to an increased risk of hypertension and cardiovascular disease. Attempts in the United States to lower dietary salt intake by encouraging individuals to follow guidelines on recommended daily intake have been largely ineffective, and dietary salt intake is increasing. Accordingly, calls have been made for population-wide interventions to reduce dietary salt in the US diet. The investigators in this report used the Coronary Heart Disease (CHD) Policy Model to make a quantitative estimate of potentially achievable, population-wide reductions in dietary salt of up to 3 g per day (1200 mg of sodium per day). The effect and cost-effectiveness of salt reduction were compared with those of other interventions used to reduce the risk of cardiovascular disease. Estimated rates and costs of cardiovascular disease were stratified in subgroups defined by age, sex, and race. Projections were calculated for the potential annual benefits for each subgroup with regard to reduction of CHD, stroke, myocardial infarction, and deaths from any cause. The data showed that a population-wide reduction in dietary salt of 3 g per day (1200 mg of sodium per day) would reduce the annual number of new cases of CHD by 60,000 to 120,000, stroke by 32,000 to 66,000, myocardial infarction by 54,000 to 99,000, and the number of deaths from any cause by 44,000 to 92,000. Lower daily dietary salt intake would benefit all adult age groups. Among the subgroups, reductions in stroke would be greater among women, reductions in CHD would be especially beneficial in older adults, and younger adults would have lower mortality rates. Reduction in risk for all categories would occur in blacks, especially for lowered risk of hypertension and stroke. The expected benefit for cardiovascular disease would be of similar magnitude to or greater than that achieved by interventions targeting tobacco, obesity, and cholesterol.
A national regulatory intervention that reduces salt intake by 3 g per day would produce a gain of 194,000 to 392,000 quality-adjusted life-years and a savings of $10 billion to $24 billion in annual health care costs. A reduction in dietary salt of only 1 g achieved gradually over the years 2010 through 2019 would result in cost savings and would be more cost-effective than using antihypertensive therapy for all persons with hypertension. These findings demonstrate the likely adverse cardiovascular outcomes for failure to reduce salt consumption in the US population and call for an immediate major public health initiative to prevent these largely avoidable outcomes.

Journal Article
TL;DR: In the USA and Japan, approximately two thirds of people with ESRD receive haemodialysis, a quarter have kidney transplants, and a tenth receive peritoneal dialysis.
Abstract: Introduction End stage renal disease (ESRD) affects over 1500 people per million population in countries with a high prevalence, such as the USA and Japan. Approximately two thirds of people with ESRD receive haemodialysis, a quarter have kidney transplants, and a tenth receive peritoneal dialysis.


Journal Article
TL;DR: Key concepts of geriatrics (frailty, dementia and palliative care) are discussed, as nephrologists frequently participate in decision-making directed toward balancing longevity, functional status and the burden of therapy.
Abstract: Nephrologists care for an increasing number of elderly patients on hemodialysis. As such, an understanding of the overlap among complications of hemodialysis and geriatric syndromes is crucial. This article reviews hemodialysis management issues including vascular access, hypertension, anemia and bone and mineral disorders with an attention towards the distinct medical needs of the elderly. Key concepts of geriatrics frailty, dementia and palliative care are also discussed, as nephrologists frequently participate in decision-making directed toward balancing longevity, functional status and the burden of therapy.

Journal ArticleDOI
01 Aug 2010-Ndt Plus
TL;DR: This case report is remarkable for its severe hypercalcaemia requiring haemodialysis, large adenoma size, acute-on-chronic kidney injury and markedly elevated PTH concentration in association with primary HPT in CKD.
Abstract: Objective. This study aims to highlight the challenges in the diagnosis of hyperparathyroidism (HPT) in patients with advanced chronic kidney disease (CKD). Methods. In this report, we describe a middle-aged Filipino gentleman with underlying CKD who presented with intractable nausea, vomiting, severe and medically refractory hypercalcaemia and parathyroid hormone (PTH) concentrations in excess of 2400 pg/mL. The underlying pathophysiology as well as the aetiologies and current relevant literature are discussed. We also suggest an appropriate diagnostic approach to identify and promptly treat patients with CKD, HPT and hypercalcaemia. Results. Evaluation confirmed the presence of a large parathyroid adenoma; HPT and hypercalcaemia resolved rapidly following resection. Conclusion. This case report is remarkable for its severe hypercalcaemia requiring haemodialysis, large adenoma size, acute-on-chronic kidney injury and markedly elevated PTH concentration in association with primary HPT in CKD.

Journal ArticleDOI
TL;DR: Updated assessments of comorbidity significantly strengthen the ability to predict death in patients on hemodialysis, and future studies in dialysis should invest the necessary resources to include repeated assessments of comorbidity.
Abstract: When evaluating clinical characteristics and outcomes in patients on hemodialysis, the prevalence and severity of comorbidity may change over time. Knowing whether updated assessments of comorbidity enhance predictive power will assist the design of future studies. We conducted a secondary data analysis of 1846 prevalent hemodialysis patients from 15 US clinical centers enrolled in the HEMO study. Our primary explanatory variable was the Index of Coexistent Diseases score, which aggregates comorbidities, as a time-constant and time-varying covariate. Our outcomes of interest were all-cause mortality, time to first hospitalization, and total hospitalizations. We used Cox proportional hazards regression. Accounting for an updated comorbidity assessment over time yielded a more robust association with mortality than accounting for baseline comorbidity alone. The variation explained by time-varying comorbidity assessments on time to death was greater than age, baseline serum albumin, diabetes, or any other covariates. There was a less pronounced advantage of updated comorbidity assessments on determining time to hospitalization. Updated assessments of comorbidity significantly strengthen the ability to predict death in patients on hemodialysis. Future studies in dialysis should invest the necessary resources to include repeated assessments of comorbidity.
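The time-varying covariate analysis described above requires each patient's repeated comorbidity assessments to be expanded into counting-process (start, stop] intervals before fitting the Cox model. A minimal sketch of that restructuring step, assuming a hypothetical record layout (assessment times and Index of Coexistent Diseases scores) rather than the HEMO study's actual data format:

```python
from typing import Dict, List


def to_counting_process(patient_id: str,
                        followup_end: float,
                        died: bool,
                        scores: List[Dict]) -> List[Dict]:
    """Expand a patient's repeated comorbidity scores into (start, stop]
    intervals suitable for a time-varying Cox proportional hazards model.

    `scores` is a list of {"time": t, "icd": s} records sorted by
    assessment time, with the first record at time 0 (baseline).
    The death indicator is attached only to the final interval.
    """
    rows = []
    for i, rec in enumerate(scores):
        start = rec["time"]
        stop = scores[i + 1]["time"] if i + 1 < len(scores) else followup_end
        if stop <= start:
            continue  # skip zero-length intervals
        rows.append({
            "id": patient_id,
            "start": start,
            "stop": stop,
            "icd": rec["icd"],  # comorbidity score in force over this interval
            "event": died and (i == len(scores) - 1),
        })
    return rows
```

Each output row carries the comorbidity score in force over its interval, so a patient whose score is updated mid-follow-up contributes multiple rows; this is the standard input format for time-varying Cox regression, in contrast to the single baseline row used for a time-constant analysis.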

Journal ArticleDOI
TL;DR: Of the handful of randomized trials in patients with type 2 diabetes and hypertension, no prior trial has achieved average BPs <130/80 mm Hg, and trial conclusions have been mixed.

Journal ArticleDOI
TL;DR: The study suggests that a low-risk population can be identified based on demographic and clinical risk factors; the authors stratified patients into low-, medium-, and high-risk groups, and all patients with a history of hydronephrosis were considered high risk.
Abstract: Acute kidney injury is a common complication in hospitalized patients, occurring in approximately 10% of hospitalizations,1,2 and the incidence of AKI appears to be on the rise.3,4 Although the most common cause of hospital-acquired AKI is acute tubular necrosis,5 physicians frequently rule out urinary tract obstruction as the underlying cause of AKI using ultrasonography. While renal ultrasonography is a safe and noninvasive test, it is not without cost. Moreover, since obstruction is a relatively uncommon cause of hospital-acquired AKI, the majority of ultrasonography results obtained are negative. Therefore, it is likely that in at least a subset of patients with AKI, ultrasonography has limited utility and may not be cost-effective. Licurse et al attempt to refine our diagnostic algorithm in AKI by developing a scoring system to identify patients at high and low risk of AKI due to urinary tract obstruction.6 The study population comprised all hospitalized patients with suspected AKI who underwent renal ultrasonography at Yale–New Haven Hospital between January 2005 and May 2009. Suspected AKI was based on the indication for ultrasonography. For inclusion into this study, patients were subsequently confirmed to have AKI, defined as a rise in serum creatinine concentration of at least 0.3 mg/dL. To identify clinical risk factors for hydronephrosis, a derivation sample of 100 patients with hydronephrosis diagnosed with ultrasonography and 100 randomly selected controls was used; results were subsequently validated using 797 ultrasonography studies obtained over 16 months. The authors considered 36 variables for inclusion in their risk stratification model, including factors predisposing to obstruction or to other common specific causes of AKI, such as prerenal azotemia. The primary study outcomes were hydronephrosis and hydronephrosis requiring an intervention (urologic stent and/or nephrostomy tube). 
The authors also considered the incremental benefit of identifying incidental findings by ultrasonography. The authors identified multiple risk factors for hydronephrosis on univariate analysis; reassuringly, these factors included the following: history of hydronephrosis, history of abdominal or pelvic cancer, prior pelvic surgery, or a single functioning kidney. Patients with a history of heart failure, granular casts on urinalysis, elevated leukocyte count, documented hypotension, or exposure to aspirin, diuretics, or vancomycin during hospitalization were less likely to have hydronephrosis. The authors’ final predictive model included 7 variables. In the derivation sample, the model had an area under the receiver operating characteristic curve of 0.79; in the validation sample, the corresponding value was 0.80, indicating satisfactory but not excellent discrimination. The authors proceeded to create a risk score for use in clinical practice. They assigned point scores to identified risk factors and stratified patients into low-, medium-, and high-risk groups; all patients with a history of hydronephrosis were considered high risk. In the validation sample, the overall prevalence of hydronephrosis was 10.6%. According to the authors’ clinical decision rule, 27.8% of patients were assigned to the low-risk group; this group had a prevalence of hydronephrosis of 3.1%, and only 1 patient required intervention. The prevalence of hydronephrosis was 10.7% and 16.1% in the intermediate- and high-risk groups, respectively. With this model, the number needed to screen to find 1 case of hydronephrosis in the low-risk group was 32, and for 1 case of hydronephrosis requiring an intervention, 223. What then are the implications of this stratification scheme for clinical practice? 
The study suggests that a low-risk population can be identified based on demographic and clinical risk factors, and that in this population, the prevalence of hydronephrosis, and in particular hydronephrosis requiring an intervention, is quite low. However, the majority of patients who underwent ultrasonography did not fall into the low-risk category; nearly 3 of 4 patients were considered intermediate or high risk. The authors report a sizable potential cost savings by avoiding ultrasonography in low-risk patients, which they estimate at approximately $42 000 per year at their own institution, assuming a cost of $200 per test. There are some limitations to this study that should be considered. First, only patients with AKI who underwent ultrasonography were considered in this analysis. Since all patients with AKI were not studied, the true incidence of AKI associated with urinary tract obstruction cannot be accurately assessed. While it is possible that even lower-risk groups might be identified if all patients with AKI had been studied, differential test ordering could have introduced important biases. For example, if AKI were less well recognized in elderly patients owing to more modest elevations in serum creatinine concentration, prevalence rates of hydronephrosis might have been underestimated. In determining the value of screening by focusing on the number of patients in whom surgically remediable hydronephrosis was identified, the authors ignore the value of definitive diagnostic information in the workup of AKI and the potential for nonsurgical approaches (eg, avoidance of anticholinergics, narcotic analgesics and other drugs, placement of Foley catheters) to ameliorate urinary tract obstruction. This was a single-center study; while the identified risk factors carry face validity, the incidence of AKI associated with urinary tract obstruction in the low-, intermediate-, and high-risk groups is likely to vary widely by institution. 
Finally, the authors fail to provide an estimate of the costs of not identifying a case of hydronephrosis. Kidney damage may be more severe and often irreversible if urinary tract obstruction is protracted. The personal costs to a patient with postrenal AKI who fails to recover and requires maintenance dialysis are enormous; the financial costs to society of a preventable case of end-stage renal disease should be factored into overall estimates of the costs of screening. The authors’ suggestion that ultrasonography be deferred in low-risk patients until other diagnostic studies prove unrevealing or where there is an inadequate response to conservative measures (eg, volume expansion) is reasonable. In our clinical practice, we have generally followed the path directed by the Licurse et al decision rule. We do not routinely perform ultrasonography on the recognition of hospital-acquired AKI. However, given the clinical implications of progressive AKI, particularly when severe enough to require dialysis, we proceed with ultrasonography in all patients—no matter their risk profile—if they have progressive AKI and if we anticipate the need for dialysis. Other studies have suggested that the proportion of patients with urinary tract obstruction as a cause of community-acquired AKI is significantly higher than for patients with hospital-acquired AKI.7 As such, routine ultrasonography in patients with community-acquired AKI would seem justified in the absence of more recent contradictory studies that focus on this patient population. In conclusion, the carefully conducted study by Licurse et al helps to rationalize the diagnostic algorithm for AKI. Just as one would not advocate ordering an extensive panel of serologic studies when rapidly progressive glomerulonephritis is unlikely to be the cause of AKI, the reflex to “check a renal ultrasound” should not necessarily be exercised in all patients, at least not immediately. 
A more deliberate approach to diagnostic imaging in AKI is likely to conserve some resources without compromising care. These and related comparative effectiveness studies are essential to refine our approach to complex diseases, particularly when mortality, morbidity, and costs are high and therapeutic options are either of marginal efficacy or altogether absent. Acute kidney injury certainly fits that bill.
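The number-needed-to-screen figures quoted above follow directly from the reported prevalences, since NNS is simply the reciprocal of prevalence. A minimal illustration using the study's low-risk group numbers:

```python
def number_needed_to_screen(prevalence: float) -> float:
    """Expected number of ultrasound studies needed to find one case,
    given the prevalence of the finding in the screened group."""
    if not 0 < prevalence <= 1:
        raise ValueError("prevalence must be in (0, 1]")
    return 1.0 / prevalence


# Low-risk group: hydronephrosis prevalence of 3.1% -> about 32 scans per case
nns_any = number_needed_to_screen(0.031)

# Hydronephrosis requiring intervention: the reported NNS of 223
# implies a prevalence of roughly 0.45% in the low-risk group
nns_intervention = number_needed_to_screen(1 / 223)
```

This is why the decision rule's value concentrates in the low-risk stratum: at a prevalence of 10.7% (intermediate risk) the NNS falls to about 9, so deferring ultrasonography there would forgo far more diagnoses per test avoided.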

Journal ArticleDOI
TL;DR: The results suggest that the standard “constant G” urea kinetic model may need to be modified, particularly for nocturnal dialysis; the modeling error was much less pronounced for the 6/week daily and 3/week conventional treatments.
Abstract: Classic urea modeling assumes that both urea generation rate (G) and residual renal urea clearance (Kru) are constant throughout the week, but this may not be true. Reductions in intradialysis G could be caused by lower plasma amino acid levels due to predialysis/intradialysis fasting and also to losses of amino acids into the dialysate. Intradialytic reductions in Kru could be due to lower intravascular volume, blood pressure, or osmotic load. To determine the possible effects of reduced G or Kru during dialysis on the calculation of the volume of distribution (V) and Kt/Vurea, we modeled 3 and 6/week nocturnal, 6/week short daily, and 3/week conventional hemodialysis. A modified 2-pool mathematical model of urea mass balance with a constant time-averaged G was used, but the model was altered to allow adjustment of the ratio of dialytic/interdialytic G (Gd/Gid) and dialytic/total Kru (Krud/Kru) to vary from 1.0 down to near zero. In patients dialyzed six times per week for 400 minutes per session, when Gd/Gid was decreased from 1.0 to 0.05, the predicted urea reduction ratio (URR) increased from 68.9% to 80.2%. To achieve an increased URR of this magnitude under conditions of constant G (Gd/Gid=1.0) required a decrease in modeled urea volume (V) of 36%. At Gd/Gid ratios of 0.8 or 0.6 (corresponding to 20% or 40% reductions in intradialysis G), the modeled URR was increased to 71.0% or 73.3%, causing a 7% or 15% factitious decrease in V. The error was intermediate for the 3/week nocturnal schedule, and was much less pronounced for the 6/week daily and 3/week conventional treatments. Reductions in intradialytic Kru had the opposite effect, lowering the predicted URR and increasing the apparent V, but here the errors were of much lesser amplitude. The results suggest that, particularly for nocturnal dialysis, the standard "constant G" urea kinetic model may need to be modified.
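The modified two-pool model itself is not reproduced here, but the single-pool quantities it perturbs are easy to state: the urea reduction ratio (URR) from pre- and post-dialysis BUN, and the second-generation Daugirdas estimate of single-pool Kt/V. A sketch, assuming purely illustrative inputs (a predialysis BUN of 100 mg/dL, a 70-kg patient, 2 L of ultrafiltration, a 400-minute session) that are not taken from the study:

```python
import math


def urr(pre_bun: float, post_bun: float) -> float:
    """Urea reduction ratio as a fraction (e.g. 0.689 for 68.9%)."""
    return (pre_bun - post_bun) / pre_bun


def sp_ktv(pre_bun: float, post_bun: float,
           hours: float, uf_liters: float, weight_kg: float) -> float:
    """Second-generation Daugirdas single-pool Kt/V estimate:
    -ln(R - 0.008*t) + (4 - 3.5*R) * UF/W, with R = post/pre BUN."""
    r = post_bun / pre_bun
    return -math.log(r - 0.008 * hours) + (4 - 3.5 * r) * uf_liters / weight_kg


# The abstract's URR of 68.9% (Gd/Gid = 1.0) vs 80.2% (Gd/Gid = 0.05)
# maps to substantially different apparent single-pool doses:
low = sp_ktv(pre_bun=100, post_bun=31.1, hours=400 / 60,
             uf_liters=2.0, weight_kg=70)   # roughly 1.4
high = sp_ktv(pre_bun=100, post_bun=19.8, hours=400 / 60,
              uf_liters=2.0, weight_kg=70)  # roughly 2.0
```

This illustrates the abstract's central point: if intradialytic urea generation falls (lower Gd/Gid), the observed URR rises for reasons unrelated to clearance, and back-calculating V from the standard constant-G model produces a factitiously low volume.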


Journal ArticleDOI
TL;DR: Selected results are reported from the Studying the Treatment of Acute Hypertension (STAT) registry, a recently completed observational study whose goal is to improve the understanding of the clinical condition of acute, severe hypertension managed in a critical care setting and treated with intravenous antihypertensive therapy.
Abstract: In this issue of Circulation, Szczech et al1 report selected results from the Studying the Treatment of Acute Hypertension (STAT) registry, a recently completed observational study whose goal is “to improve the understanding of the clinical condition of acute, severe hypertension … managed in a critical care setting and treated with intravenous antihypertensive therapy.”2 The STAT registry is sponsored by the Medicines Company, a publicly traded entity focused on “advancing the treatment of critical care patients through the delivery of innovative, cost-effective medicines to the worldwide hospital marketplace.”3 The stated goal of the analyses presented was to “define the risk among patients with acute severe hypertension and acute kidney injury (AKI), and the risk associated with both AKI and chronic kidney disease (CKD) on cardiovascular outcomes and mortality.”1 Although the extensive array of analyses presented by Szczech et al highlights an important area of investigation, readers should be circumspect about the conclusions reached. Before proceeding to the study results, we must carefully consider the authors’ definitions. The authors define CKD as an estimated glomerular filtration rate (eGFR) <90 mL/min/1.73 m2, calculated using the 4-variable Modification of Diet in Renal Disease study equation, designating CKD in nearly 4 of every 5 STAT registry enrollees. Although some have advocated that eGFR in the range of 60 to 89 mL/min/1.73 m2 represents mild CKD, several studies have demonstrated that the Modification of Diet in Renal Disease study equation often underestimates true GFR above the range of 25 to 55 mL/min/1.73 m2, the population from which the original equation was derived.4 Moreover, even liberal definitions of CKD require a chronic element (eg, reduced eGFR for 3 or more months). 
The authors define baseline kidney function using single serum creatinine determinations up to 12 months before admission (“where available”), which could be particularly problematic if the serum creatinine concentration were not in steady state in the setting of acute illness. Thus, misclassification of CKD by errors inherent in the Modification of Diet in Renal Disease study equation and single rather than multiple values indicating the persistence of impaired kidney function probably yielded a sizeable overestimate in CKD prevalence. Such an error may have diminished the apparent risk associated with CKD and inflated the risk associated with AKI. Adding insult to injury (pun intended), the authors employ a definition of AKI that also maximizes its prevalence by calculating change in eGFR from “baseline” to “nadir” without considering the length of hospitalization or the number of serum creatinine determinations. Thus, if a 70-year-old white woman were admitted with a serum creatinine concentration of 1.0 mg/dL (corresponding to an eGFR of 55 mL/min/1.73m2 defined as “moderate CKD”) and had subsequent serum creatinine concentrations of 1.0, 1.0, 1.0, 0.9, 1.0, 1.3, and 1.0 mg/dL (with the nadir eGFR calculated at 40 mL/min/1.73m2), the relative change in eGFR (15 divided by 55, or 27%) would be classified as AKI (“risk” by the risk, injury, failure, loss, end-stage renal disease criteria). In this example, the patient probably had neither CKD nor AKI but was misclassified as having both, rendering true estimates of risk uninterpretable. The STAT registry designated enrollees as having acute, severe hypertension managed in emergency or critical care settings. However, the reader is unable to determine whether hypertension was truly acute or chronic. Overall, 89% of the study sample had a history of hypertension, including 94% and 98% of those with advanced and end-stage kidney disease, respectively. 
Indeed, the cohort may be better defined as hypertension requiring intervention (probably owing to some untoward clinical manifestation), rather than “acute” and “severe.” Much is made of AKI in this article. The authors are indeed correct that small changes in serum creatinine have been associated with adverse outcomes in other settings,5 although the incidence of clinically relevant AKI in this study was relatively low. Despite the severity of hypertension, only 122 (8%) developed “injury” using the definition of a relative change in eGFR ≥50% adopted by proponents of the risk, injury, failure, loss, end-stage renal disease criteria. Moreover, only when considering left ventricular dysfunction and moderate to severe bleeding was there a significant difference with a “dose response” by AKI stage. Although the authors acknowledge some of the limitations of their work, others should be highlighted to help place the results in a proper context. The authors failed to provide sufficient detail about in-hospital testing. For example, how was acute left ventricular dysfunction evaluated? Were echocardiograms routinely performed for the purpose of research? The authors also failed to note whether cardiac and cerebrovascular events were evaluated routinely or by protocol. If all enrollees were not screened for subarachnoid hemorrhage (because in all likelihood imaging was ordered in response to signs and/or symptoms), the incidence estimates, which suggest a paradoxically higher risk with better kidney function, could simply reflect the threshold for testing rather than the true incidence because the denominator is unknown. The authors’ contention that a critical mass of functioning nephrons “compensates” or somehow otherwise protects an individual from target organ injury is appealing to nephrologists but is not supported by facts. So what facts can be gleaned from the article by Szczech et al? 
First, in the setting of severe hypertension, a decline in kidney function in-hospital is more common in persons with underlying CKD, as has been shown in other settings.6,7 Second, currently available renal diagnostic studies (ie, serum creatinine or derivations thereof) are neither sufficiently specific nor sensitive to guide therapy or reliably predict outcomes. Finally, determining whether kidney disease is a cause or consequence of severe hypertension and understanding the mechanism(s) linking kidney disease to cardiovascular disease are likely to be exceptionally worthwhile pursuits at both the bench and the bedside.
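The editorial's worked example (a 70-year-old white woman whose eGFR falls from 55 to a nadir of 40 mL/min/1.73 m2, a ~27% relative change) can be reproduced with the abbreviated 4-variable MDRD study equation. A sketch, assuming the IDMS-traceable coefficient of 175 (which matches the figures quoted; the original 1999 equation used 186):

```python
def mdrd_egfr(scr_mg_dl: float, age: float,
              female: bool, black: bool) -> float:
    """Abbreviated (4-variable) MDRD study equation, IDMS-traceable
    form (coefficient 175), returning eGFR in mL/min/1.73 m^2."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr


# 70-year-old white woman, serum creatinine 1.0 then 1.3 mg/dL:
baseline = mdrd_egfr(1.0, 70, female=True, black=False)  # rounds to 55
nadir = mdrd_egfr(1.3, 70, female=True, black=False)     # rounds to 40

# A >25% relative drop in eGFR qualifies as "risk" under the RIFLE criteria
relative_drop = (baseline - nadir) / baseline
```

The example makes the editorial's point concrete: a transient 0.3 mg/dL creatinine blip, well within laboratory and biological variation, is enough to cross both the "moderate CKD" and the RIFLE "risk" thresholds when single measurements are used.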

Journal ArticleDOI
TL;DR: Applying the abbreviated 4-variable MDRD equation to 571 353 adult outpatients undergoing a general health examination from the Swedish Apolipoprotein-related Mortality risk (AMORIS) cohort showed strikingly different prevalence estimates of CKD depending on which GFR estimating equation was used.
Abstract: In 2002, the US National Kidney Foundation released clinical practice guidelines [the Kidney Disease Outcomes Quality Initiative (K/DOQI)] aimed at standardizing and simplifying the definition of chronic kidney disease (CKD). These guidelines classify CKD in part by glomerular filtration rate (GFR) estimated either by the Cockcroft–Gault or the Modification of Diet in Renal Disease (MDRD) study equations. The Cockcroft–Gault formula was derived from 249 mostly male Canadians [1] and the MDRD equation from 1628 mostly white study subjects with measured GFR by iothalamate of 22–55 mL min−1 1.73 m−2 [2]. The wisdom of applying an equation derived within a population with moderate to advanced CKD to the general population is debatable, yet the MDRD equation has become the ‘industry standard’ for estimating GFR, as evidenced by its widespread application in clinical research and its routine reporting in many laboratories. In the current issue of the Journal of Internal Medicine, Holzmann et al. [3] applied the abbreviated 4-variable MDRD equation to 571 353 adult outpatients undergoing a general health examination from the Swedish Apolipoprotein-related Mortality risk (AMORIS) cohort. Using a Cox proportional hazards regression model, the authors examined the relations between estimated GFR and risk of first myocardial infarction or death. In addition to using the MDRD equation, analyses were repeated using the so-called Mayo quadratic equation to estimate GFR, a formula derived in 2004 that included persons with CKD as well as healthy prospective kidney donors [4]. The Holzmann et al. 
study showed that for those individuals with an estimated GFR of 30–59 mL min−1 1.73 m−2 by either the MDRD or Mayo quadratic equation, adjusted hazard ratios for first myocardial infarction (MI) and all-cause mortality were significantly increased as compared with the referent group of those with normal to near-normal kidney function (estimated GFR >90 mL min−1 1.73 m−2). The group with advanced CKD (estimated GFR <30 mL min−1 1.73 m−2) had the highest risks of both outcomes. With the Mayo quadratic equation, even those with mildly reduced estimated GFR (60–89 mL min−1 1.73 m−2) had increased risks relative to persons with estimated GFR >90 mL min−1 1.73 m−2. In contrast, using the MDRD equation, the adjusted relative risk of death in this mildly reduced group was actually decreased by 23% (21–25%) relative to persons with estimated GFR >90 mL min−1 1.73 m−2 and there was no difference in MI risk. Aside from differences in relative risk estimates, Holzmann et al. also showed strikingly different prevalence estimates of CKD depending on which GFR estimating equation was used. With the MDRD equation, only 37.4% of participants had normal kidney function versus 86.6% when the Mayo quadratic equation was used. As no gold standard was used to directly measure GFR, which estimate is closer to the truth? The monotonic increase in risk associated with lower estimated GFR using the Mayo quadratic equation is biologically plausible. It is difficult to explain why impaired kidney function would be associated with enhanced survival (as was seen with application of the MDRD equation), based on known correlates of CKD and the totality of evidence from experimental and epidemiological studies. A more likely explanation is that when applied to the general population on a single measurement, the MDRD equation misclassifies a large fraction of persons with normal or near normal kidney function as having mild (60–89 mL min−1 1.73 m−2) or moderate ‘CKD’ (30–59 mL min−1 1.73 m−2). After initial publication of the K/DOQI clinical practice guidelines, several studies have shown that the MDRD equation underestimates measured GFR around or above the upper levels at which the equation was derived [4, 7, 8]. 
Misclassification of mild CKD has potentially enormous public health implications. If the prevalence estimates in this study were applied to the general adult population of the United States, the number of people affected / diagnosed / classified with mildly reduced GFR (estimated GFR 60–89 mL min−1 1.73 m−2) could differ by 4- to 5-fold depending on whether the Mayo quadratic or the MDRD equation were used. As the population attributable risk depends directly on the prevalence of disease, focusing efforts to modify cardiovascular risk factors on the largest of all groups with kidney disease would most likely have the largest impact on public health. On the other hand, targeting millions of people who may be erroneously categorized as having reduced GFR would be a colossal waste of time, effort and money. It is therefore precisely in this population (i.e. CKD stage II and the upper range of CKD stage III), where the MDRD equation performs least well, that accurate identification and diagnosis of bona fide, clinically meaningful kidney disease is most imperative. Erroneously informing a patient that he or she has CKD could also lead to a host of adverse downstream consequences, including unnecessary office visits, blood tests, and interventions; difficulties obtaining health or life insurance; and undue anxiety, stress and worry. From the provider’s perspective, the time spent explaining misclassification of disease states by regression equations and / or reassuring a patient that he or she effectively does not have kidney disease is time that could otherwise be spent in more productive activities that could advance health and prevent disease and disability. Finally, misclassification of mild to moderate CKD could contribute to kidney transplant gridlock, as prospective living donors might be incorrectly deemed unsuitable based on falsely low MDRD GFR estimates. In summary, Holzmann et al. 
present an interesting and extremely important study that highlights both the calculated and real hazards inherent in applying a GFR estimating equation derived within a population with moderate to severe CKD to the general population. The major strengths of the study include its large, well-characterized cohort with relatively long follow-up time. The study also has several limitations. First, only a single serum creatinine determination was required for inclusion in the study. Within-person variation of serum creatinine concentrations can be substantial, and repeated measurements of serum creatinine would likely have reduced the magnitude of misclassification bias using both equations. Second, the serum creatinine, despite being measured at a single laboratory, was not calibrated. Calibration could significantly influence the performance of the MDRD equation, and calibration could have reduced misclassification error. One study found higher measured serum creatinine by 20.3 µmol L−1 when compared with assays of the same sample performed in the original MDRD laboratory [9]. Third, the study did not have information on, and therefore could not control for, several other risk factors for cardiovascular disease such as albuminuria, hypertension, smoking, and antihypertensive medication use. Most of these limitations were acknowledged by the authors. In our collective enthusiasm to arrive at a convenient method by which CKD can be recognized, diagnosed and staged, we (including these editorialists) have been led like lemmings off a cliff for the better part of a decade, unquestioningly applying the MDRD equation to anyone and everyone, without recalling (or conveniently forgetting) the population in which it was derived. Until additional studies have been conducted, a more thoughtful strategy would be to consider the application of alternative GFR estimating equations in different situations. 
For example, the MDRD equation can be used to estimate GFR in persons known to have moderate to advanced CKD while the Mayo quadratic equation may be better suited to the general population. Whether noncreatinine-based markers such as cystatin C will offer significant advantages over creatinine-based GFR estimating equations remains to be seen. Whatever the approach, before blindly applying the MDRD equation to a patient in an ambulatory practice or a cohort in an epidemiological study, it is worthwhile to take a moment to pause and ponder its appropriateness while considering its alternatives.
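For reference, the K/DOQI GFR categories invoked throughout this editorial can be sketched as a simple lookup. This applies the GFR thresholds only; under the actual guideline, stages 1 and 2 additionally require evidence of kidney damage (e.g. albuminuria), which is noted in the returned labels:

```python
def kdoqi_gfr_category(egfr: float) -> str:
    """Map estimated GFR (mL/min/1.73 m^2) to the K/DOQI GFR category.

    Note: under K/DOQI, stages 1-2 require evidence of kidney damage
    in addition to the GFR threshold; GFR alone does not define CKD there.
    """
    if egfr < 0:
        raise ValueError("eGFR cannot be negative")
    if egfr >= 90:
        return "stage 1 (normal GFR; CKD only with kidney damage)"
    if egfr >= 60:
        return "stage 2 (mildly reduced; CKD only with kidney damage)"
    if egfr >= 30:
        return "stage 3 (moderately reduced)"
    if egfr >= 15:
        return "stage 4 (severely reduced)"
    return "stage 5 (kidney failure)"
```

Seen this way, the editorial's concern is precisely about inputs near the 60 and 90 mL/min/1.73 m2 boundaries: a systematic underestimate by the MDRD equation in healthy people shifts large numbers of them across a threshold into a "CKD" category that a different estimating equation would not assign.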