Showing papers in "JAMA in 1998"
••
TL;DR: Alternative medicine use and expenditures increased substantially between 1990 and 1997, attributable primarily to an increase in the proportion of the population seeking alternative therapies, rather than increased visits per patient.
Abstract: Context.—A prior national survey documented the high prevalence and costs of
alternative medicine use in the United States in 1990. Objective.—To document trends in alternative medicine use in the United States
between 1990 and 1997. Design.—Nationally representative random household telephone surveys using comparable
key questions were conducted in 1991 and 1997 measuring utilization in 1990
and 1997, respectively. Participants.—A total of 1539 adults in 1991 and 2055 in 1997. Main Outcome Measures.—Prevalence, estimated costs, and disclosure of alternative therapies
to physicians. Results.—Use of at least 1 of 16 alternative therapies during the previous year
increased from 33.8% in 1990 to 42.1% in 1997 (P≤.001).
The therapies increasing the most included herbal medicine, massage, megavitamins,
self-help groups, folk remedies, energy healing, and homeopathy. The probability
of users visiting an alternative medicine practitioner increased from 36.3%
to 46.3% (P=.002). In both surveys alternative therapies
were used most frequently for chronic conditions, including back problems,
anxiety, depression, and headaches. There was no significant change in disclosure
rates between the 2 survey years; 39.8% of alternative therapies were disclosed
to physicians in 1990 vs 38.5% in 1997. The percentage of users paying entirely
out-of-pocket for services provided by alternative medicine practitioners
did not change significantly between 1990 (64.0%) and 1997 (58.3%) (P=.36). Extrapolations to the US population suggest a 47.3%
increase in total visits to alternative medicine practitioners, from 427 million
in 1990 to 629 million in 1997, thereby exceeding total visits to all US primary
care physicians. An estimated 15 million adults in 1997 took prescription
medications concurrently with herbal remedies and/or high-dose vitamins (18.4%
of all prescription users). Estimated expenditures for alternative medicine
professional services increased 45.2% between 1990 and 1997 and were conservatively
estimated at $21.2 billion in 1997, with at least $12.2 billion paid out-of-pocket.
This exceeds the 1997 out-of-pocket expenditures for all US hospitalizations.
Total 1997 out-of-pocket expenditures relating to alternative therapies were
conservatively estimated at $27.0 billion, which is comparable with the projected
1997 out-of-pocket expenditures for all US physician services. Conclusions.—Alternative medicine use and expenditures increased substantially between
1990 and 1997, attributable primarily to an increase in the proportion of
the population seeking alternative therapies, rather than increased visits
per patient.
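The reported 47.3% rise in total visits follows directly from the extrapolated visit counts; a quick arithmetic check (a sketch only, since the survey's population weighting is more involved than this):

```python
# Sanity-check the reported increase in extrapolated visits (weighting omitted).
visits_1990 = 427e6  # extrapolated total visits to alternative practitioners, 1990
visits_1997 = 629e6  # extrapolated total visits, 1997

pct_increase = (visits_1997 - visits_1990) / visits_1990 * 100
print(f"Increase in total visits: {pct_increase:.1f}%")  # ≈ 47.3%
```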
6,814 citations
••
TL;DR: Treatment with oral conjugated equine estrogen plus medroxyprogesterone acetate did not reduce the overall rate of CHD events in postmenopausal women with established coronary disease and the treatment did increase the rate of thromboembolic events and gallbladder disease.
Abstract: Context.—Observational studies have found lower rates of coronary heart disease
(CHD) in postmenopausal women who take estrogen than in women who do not,
but this potential benefit has not been confirmed in clinical trials. Objective.—To determine if estrogen plus progestin therapy alters the risk for
CHD events in postmenopausal women with established coronary disease. Design.—Randomized, blinded, placebo-controlled secondary prevention trial. Setting.—Outpatient and community settings at 20 US clinical centers. Participants.—A total of 2763 women with coronary disease, younger than 80 years,
and postmenopausal with an intact uterus. Mean age was 66.7 years. Intervention.—Either 0.625 mg of conjugated equine estrogens plus 2.5 mg of medroxyprogesterone
acetate in 1 tablet daily (n=1380) or a placebo of identical appearance (n=1383).
Follow-up averaged 4.1 years; 82% of those assigned to hormone treatment were
taking it at the end of 1 year, and 75% at the end of 3 years. Main Outcome Measures.—The primary outcome was the occurrence of nonfatal myocardial infarction
(MI) or CHD death. Secondary cardiovascular outcomes included coronary revascularization,
unstable angina, congestive heart failure, resuscitated cardiac arrest, stroke
or transient ischemic attack, and peripheral arterial disease. All-cause mortality
was also considered. Results.—Overall, there were no significant differences between groups in the
primary outcome or in any of the secondary cardiovascular outcomes: 172 women
in the hormone group and 176 women in the placebo group had MI or CHD death
(relative hazard [RH], 0.99; 95% confidence interval [CI], 0.80-1.22). The
lack of an overall effect occurred despite a net 11% lower low-density lipoprotein
cholesterol level and 10% higher high-density lipoprotein cholesterol level
in the hormone group compared with the placebo group (each P<.001). Within the overall null effect, there was a statistically
significant time trend, with more CHD events in the hormone group than in
the placebo group in year 1 and fewer in years 4 and 5. More women in the
hormone group than in the placebo group experienced venous thromboembolic
events (34 vs 12; RH, 2.89; 95% CI, 1.50-5.58) and gallbladder disease (84
vs 62; RH, 1.38; 95% CI, 1.00-1.92). There were no significant differences
in several other end points for which power was limited, including fracture,
cancer, and total mortality (131 vs 123 deaths; RH, 1.08; 95% CI, 0.84-1.38). Conclusions.—During an average follow-up of 4.1 years, treatment with oral conjugated
equine estrogen plus medroxyprogesterone acetate did not reduce the overall
rate of CHD events in postmenopausal women with established coronary disease.
The treatment did increase the rate of thromboembolic events and gallbladder
disease. Based on the finding of no overall cardiovascular benefit and a pattern
of early increase in risk of CHD events, we do not recommend starting this
treatment for the purpose of secondary prevention of CHD. However, given the
favorable pattern of CHD events after several years of therapy, it could be
appropriate for women already receiving this treatment to continue.
5,991 citations
••
TL;DR: Lovastatin reduces the risk for the first acute major coronary event in men and women with average TC and LDL-C levels and below-average HDL-C levels and supports the inclusion of HDL-C in risk-factor assessment and the need for reassessment of the National Cholesterol Education Program guidelines.
Abstract: 0.63; 95% confidence interval [CI], 0.50-0.79; P<.001), myocardial infarction (95 vs 57 myocardial infarctions; RR, 0.60; 95% CI, 0.43-0.83; P = .002), unstable angina (87 vs 60 first unstable angina events; RR, 0.68; 95% CI, 0.49-0.95; P = .02), coronary revascularization procedures (157 vs 106 procedures; RR, 0.67; 95% CI, 0.52-0.85; P = .001), coronary events (215 vs 163 coronary events; RR, 0.75; 95% CI, 0.61-0.92; P = .006), and cardiovascular events (255 vs 194 cardiovascular events; RR, 0.75; 95% CI, 0.62-0.91; P = .003). Lovastatin (20-40 mg daily) reduced LDL-C by 25% to 2.96 mmol/L (115 mg/dL) and increased HDL-C by 6% to 1.02 mmol/L (39 mg/dL). There were no clinically relevant differences in safety parameters between treatment groups. Conclusions.— Lovastatin reduces the risk for the first acute major coronary event in men and women with average TC and LDL-C levels and below-average HDL-C levels. These findings support the inclusion of HDL-C in risk-factor assessment, confirm the benefit of LDL-C reduction to a target goal, and suggest the need for reassessment of the National Cholesterol Education Program guidelines regarding pharmacological intervention.
5,301 citations
••
TL;DR: The incidence of serious and fatal adverse drug reactions in US hospitals was found to be extremely high, and data suggest that ADRs represent an important clinical issue.
Abstract: Objective.—To estimate the incidence of serious and fatal adverse drug reactions
(ADR) in hospital patients. Data Sources.—Four electronic databases were searched from 1966 to 1996. Study Selection.—Of 153, we selected 39 prospective studies from US hospitals. Data Extraction.—Data extracted independently by 2 investigators were analyzed by a random-effects
model. To obtain the overall incidence of ADRs in hospitalized patients, we
combined the incidence of ADRs occurring while in the hospital plus the incidence
of ADRs causing admission to hospital. We excluded errors in drug administration,
noncompliance, overdose, drug abuse, therapeutic failures, and possible ADRs.
Serious ADRs were defined as those that required hospitalization, were permanently
disabling, or resulted in death. Data Synthesis.—The overall incidence of serious ADRs was 6.7% (95% confidence interval
[CI], 5.2%-8.2%) and of fatal ADRs was 0.32% (95% CI, 0.23%-0.41%) of hospitalized
patients. We estimated that in 1994 overall 2,216,000 (1,721,000-2,711,000) hospitalized
patients had serious ADRs and 106,000 (76,000-137,000) had fatal ADRs, making
these reactions between the fourth and sixth leading cause of death. Conclusions.—The incidence of serious and fatal ADRs in US hospitals was found to
be extremely high. While our results must be viewed with circumspection because
of heterogeneity among studies and small biases in the samples, these data
nevertheless suggest that ADRs represent an important clinical issue.
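The national estimates follow from applying the pooled incidence rates to the 1994 hospitalized population; working backward, the figures imply roughly 33 million hospitalizations (a back-of-envelope reconstruction, not the authors' exact method):

```python
# Back out the implied 1994 hospitalization count from the reported estimates,
# then re-derive the fatal-ADR figure from the pooled incidence rates.
serious_rate = 0.067       # pooled incidence of serious ADRs (6.7%)
fatal_rate = 0.0032        # pooled incidence of fatal ADRs (0.32%)
serious_cases = 2_216_000  # reported national estimate of serious ADRs

hospitalizations = serious_cases / serious_rate
print(f"Implied hospitalizations: {hospitalizations / 1e6:.1f} million")  # ≈ 33.1 million
print(f"Implied fatal ADRs: {hospitalizations * fatal_rate:,.0f}")        # ≈ 106,000
```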
4,764 citations
••
TL;DR: This work proposes a simple method to approximate a risk ratio from the adjusted odds ratio and derive an estimate of an association or treatment effect that better represents the true relative risk.
Abstract: Logistic regression is used frequently in cohort studies and clinical trials. When the incidence of an outcome of interest is common in the study population (>10%), the adjusted odds ratio derived from the logistic regression can no longer approximate the risk ratio. The more frequent the outcome, the more the odds ratio overestimates the risk ratio when it is more than 1 or underestimates it when it is less than 1. We propose a simple method to approximate a risk ratio from the adjusted odds ratio and derive an estimate of an association or treatment effect that better represents the true relative risk.
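The correction commonly attributed to this method divides the adjusted odds ratio by a factor involving the outcome incidence in the reference (unexposed) group, P0: RR = OR / (1 − P0 + P0 × OR). A minimal sketch under that assumption (verify the formula against the paper before relying on it):

```python
def or_to_rr(odds_ratio: float, p0: float) -> float:
    """Approximate a risk ratio from an adjusted odds ratio.

    p0 is the incidence of the outcome in the reference (unexposed) group.
    Formula as commonly cited for this correction:
        RR = OR / (1 - p0 + p0 * OR)
    """
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

# With a common outcome (40% incidence in the reference group), an adjusted
# OR of 3.0 overstates the relative risk considerably:
print(or_to_rr(3.0, 0.40))  # ≈ 1.67

# With a rare outcome (1%), OR and RR nearly coincide, as expected:
print(or_to_rr(3.0, 0.01))  # ≈ 2.94
```

Note how the divergence grows with the baseline incidence, matching the abstract's point that the approximation breaks down when the outcome exceeds roughly 10%.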
3,616 citations
••
TL;DR: Low-risk patients had estimates of 5-year PSA outcome after treatment with RP, RT, or implant with or without neoadjuvant androgen deprivation that were not statistically different, whereas intermediate- and high-risk patients treated with RP or RT did better than those treated by implant.
Abstract: Context.—Interstitial radiation (implant) therapy is used to treat clinically
localized adenocarcinoma of the prostate, but how it compares with other treatments
is not known. Objective.—To estimate control of prostate-specific antigen (PSA) after radical
prostatectomy (RP), external beam radiation (RT), or implant with or without
neoadjuvant androgen deprivation therapy in patients with clinically localized
prostate cancer. Design.—Retrospective cohort study of outcome data compared using Cox regression
multivariable analyses. Setting and Patients.—A total of 1872 men treated between January 1989 and October 1997 with
an RP (n=888) or implant with or without neoadjuvant androgen deprivation
therapy (n=218) at the Hospital of the University of Pennsylvania, Philadelphia,
or RT (n=766) at the Joint Center for Radiation Therapy, Boston, Mass, were
enrolled. Main Outcome Measure.—Actuarial freedom from PSA failure (defined as PSA outcome). Results.—The relative risk (RR) of PSA failure in low-risk patients (stage T1c,
T2a and PSA level ≤10 ng/mL and Gleason score ≤6) treated using RT,
implant plus androgen deprivation therapy, or implant therapy was 1.1 (95%
confidence interval [CI], 0.5-2.7), 0.5 (95% CI, 0.1-1.9), and 1.1 (95% CI,
0.3-3.6), respectively, compared with those patients treated with RP. The
RRs of PSA failure in the intermediate-risk patients (stage T2b or Gleason
score of 7 or PSA level >10 and ≤20 ng/mL) and high-risk patients (stage
T2c or PSA level >20 ng/mL or Gleason score ≥8) treated with implant compared
with RP were 3.1 (95% CI, 1.5-6.1) and 3.0 (95% CI, 1.8-5.0), respectively.
The addition of androgen deprivation to implant therapy did not improve PSA
outcome in high-risk patients but resulted in a PSA outcome that was not statistically
different compared with the results obtained using RP or RT in intermediate-risk
patients. These results were unchanged when patients were stratified using
the traditional rankings of biopsy Gleason scores of 2 through 4 vs 5 through
6 vs 7 vs 8 through 10.Conclusions.—Low-risk patients had estimates of 5-year PSA outcome after treatment
with RP, RT, or implant with or without neoadjuvant androgen deprivation that
were not statistically different, whereas intermediate- and high-risk patients
treated with RP or RT did better than those treated by implant. Prospective
randomized trials are needed to verify these findings.
3,408 citations
••
TL;DR: Along with being more educated and reporting poorer health status, the majority of alternative medicine users appear to be doing so not so much as a result of being dissatisfied with conventional medicine but largely because they find these health care alternatives to be more congruent with their own values, beliefs, and philosophical orientations toward health and life.
Abstract: Context.—Research both in the United States and abroad suggests that significant
numbers of people are involved with various forms of alternative medicine.
However, the reasons for such use are, at present, poorly understood. Objective.—To investigate possible predictors of alternative health care use. Methods.—Three primary hypotheses were tested. People seek out these alternatives
because (1) they are dissatisfied in some way with conventional treatment;
(2) they see alternative treatments as offering more personal autonomy and
control over health care decisions; and (3) the alternatives are seen as more
compatible with the patients' values, worldview, or beliefs regarding the
nature and meaning of health and illness. Additional predictor variables explored
included demographics and health status. Design.—A written survey examining use of alternative health care, health status,
values, and attitudes toward conventional medicine. Multiple logistic regression
analyses were used in an effort to identify predictors of alternative health
care use. Setting and Participants.—A total of 1035 individuals randomly selected from a panel who had agreed
to participate in mail surveys and who live throughout the United States. Main Outcome Measure.—Use of alternative medicine within the previous year. Results.—The response rate was 69%. The following variables emerged as predictors
of alternative health care use: more education (odds ratio [OR], 1.2; 95%
confidence interval [CI], 1.1-1.3); poorer health status (OR, 1.3; 95% CI,
1.1-1.5); a holistic orientation to health (OR, 1.4; 95% CI, 1.1-1.9); having
had a transformational experience that changed the person's worldview (OR,
1.8; 95% CI, 1.3-2.5); any of the following health problems: anxiety (OR,
3.1; 95% CI, 1.6-6.0); back problems (OR, 2.3; 95% CI, 1.7-3.2); chronic pain
(OR, 2.0; 95% CI, 1.1-3.5); urinary tract problems (OR, 2.2; 95% CI, 1.3-3.5);
and classification in a cultural group identifiable by their commitment to
environmentalism, commitment to feminism, and interest in spirituality and
personal growth psychology (OR, 2.0; 95% CI, 1.4-2.7). Dissatisfaction with
conventional medicine did not predict use of alternative medicine. Only 4.4%
of those surveyed reported relying primarily on alternative therapies. Conclusion.—Along with being more educated and reporting poorer health status, the
majority of alternative medicine users appear to be doing so not so much as
a result of being dissatisfied with conventional medicine but largely because
they find these health care alternatives to be more congruent with their own
values, beliefs, and philosophical orientations toward health and life.
2,691 citations
••
TL;DR: In women with low BMD but without vertebral fractures, 4 years of alendronate safely increased BMD and decreased the risk of first vertebral deformity.
Abstract: Context.—Alendronate sodium reduces fracture risk in
postmenopausal women who have vertebral fractures, but its effects on
fracture risk have not been studied for women without vertebral
fractures. Objective.—To test the hypothesis that 4 years of
alendronate would decrease the risk of clinical and vertebral fractures
in women who have low bone mineral density (BMD) but no vertebral
fractures. Design.—Randomized, blinded, placebo-controlled trial. Setting.—Eleven community-based clinical research centers. Subjects.—Women aged 54 to 81 years with a femoral neck BMD
of 0.68 g/cm² or less (Hologic Inc, Waltham, Mass) but no
vertebral fracture; 4432 were randomized to alendronate or placebo and
4272 (96%) completed outcome measurements at the final visit (an
average of 4.2 years later). Intervention.—All participants reporting calcium intakes of
1000 mg/d or less received a supplement containing 500 mg of calcium
and 250 IU of cholecalciferol. Subjects were randomly assigned to
either placebo or 5 mg/d of alendronate sodium for 2 years followed by
10 mg/d for the remainder of the trial. Main Outcome Measures.—Clinical fractures confirmed
by x-ray reports, new vertebral deformities detected by morphometric
measurements on radiographs, and BMD measured by dual x-ray
absorptiometry. Results.—Alendronate increased BMD at all sites studied
(P<.001) and reduced clinical fractures from 312 in the
placebo group to 272 in the intervention group, but not significantly
so (14% reduction; relative hazard [RH], 0.86; 95% confidence
interval [CI], 0.73-1.01). Alendronate reduced clinical fractures by
36% in women with baseline osteoporosis at the femoral neck (>2.5
SDs below the normal young adult mean; RH, 0.64; 95% CI, 0.50-0.82;
treatment-control difference, 6.5%; number needed to treat [NNT],
15), but there was no significant reduction among those with higher BMD
(RH, 1.08; 95% CI, 0.87-1.35). Alendronate decreased the risk of
radiographic vertebral fractures by 44% overall (relative risk, 0.56;
95% CI, 0.39-0.80; treatment-control difference, 1.7%; NNT, 60).
Alendronate did not increase the risk of gastrointestinal or other
adverse effects. Conclusions.—In women with low BMD but without vertebral
fractures, 4 years of alendronate safely increased BMD and decreased
the risk of first vertebral deformity. Alendronate significantly
reduced the risk of clinical fractures among women with osteoporosis
but not among women with higher BMD.
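The reported numbers needed to treat follow from the treatment-control risk differences, since NNT = 1 / absolute risk reduction:

```python
# NNT = 1 / absolute risk reduction (risk difference as a fraction).
nnt_osteoporosis = 1 / 0.065  # 6.5% risk difference in clinical fractures
nnt_vertebral = 1 / 0.017     # 1.7% risk difference in vertebral fractures

print(round(nnt_osteoporosis))  # 15, as reported
print(round(nnt_vertebral))     # ≈ 59; the abstract rounds to 60
```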
2,254 citations
••
TL;DR: The published results from these prospective studies are remarkably consistent for each factor, indicating moderate but highly statistically significant associations with CHD, even though mechanisms that might account for these associations are not clear.
Abstract: Context.—A large number of epidemiologic studies have reported on associations
between various "inflammatory" factors and coronary heart disease (CHD). Objective.—To assess the associations of blood levels of fibrinogen, C-reactive
protein (CRP), and albumin and leukocyte count with the subsequent risk of
CHD. Data Sources.—Meta-analyses of any long-term prospective studies of CHD published
before 1998 on any of these 4 factors. Studies were identified by MEDLINE
searches, scanning of relevant reference lists, hand searching of cardiology,
epidemiology, and other relevant journals, and discussions with authors of
relevant reports. Study Selection.—All relevant studies identified were included. Data Extraction.—The following information was abstracted from published reports (supplemented,
in several cases, by the authors): size and type of cohort, mean age, mean
duration of follow-up, assay methods, degree of adjustment for confounders,
and relationship of CHD risk to the baseline assay results. Data Synthesis.—For fibrinogen, with 4018 CHD cases in 18 studies, comparison of individuals
in the top third with those in the bottom third of the baseline measurements
yielded a combined risk ratio of 1.8 (95% confidence interval [CI], 1.6-2.0)
associated with a difference in long-term usual mean fibrinogen levels of
2.9 µmol/L (0.1 g/dL) between the top and bottom thirds (10.3 vs 7.4
µmol/L [0.35 vs 0.25 g/dL]). For CRP, with 1053 CHD cases in 7 studies,
the combined risk ratio of 1.7 (95% CI, 1.4-2.1) was associated with a difference
of 1.4 mg/L (2.4 vs 1.0 mg/L). For albumin, with 3770 CHD cases in 8 studies,
the combined risk ratio of 1.5 (95% CI, 1.3-1.7) was associated with a difference
of 4 g/L (38 vs 42 g/L, ie, an inverse association). For leukocyte count,
with 5337 CHD cases in the 7 largest studies, the combined risk ratio of 1.4
(95% CI, 1.3-1.5) was associated with a difference of 2.8×10⁹/L
(8.4 vs 5.6×10⁹/L). Each of these overall results was highly
significant (P<.0001). Conclusions.—The published results from these prospective studies are remarkably
consistent for each factor, indicating moderate but highly statistically significant
associations with CHD. Hence, even though mechanisms that might account for
these associations are not clear, further study of the relevance of these
factors to the causation of CHD is warranted.
2,089 citations
••
TL;DR: Physician computer order entry decreased the rate of nonintercepted serious medication errors by more than half, although this decrease was larger for potential ADEs than for errors that actually resulted in an ADE.
Abstract: Context.—Adverse drug events (ADEs) are a significant and costly cause of injury
during hospitalization. Objectives.—To evaluate the efficacy of 2 interventions for preventing nonintercepted
serious medication errors, defined as those that either resulted in or had
potential to result in an ADE and were not intercepted before reaching the
patient. Design.—Before-after comparison between phase 1 (baseline) and phase 2 (after
intervention was implemented) and, within phase 2, a randomized comparison
between physician computer order entry (POE) and the combination of POE plus
a team intervention. Setting.—Large tertiary care hospital. Participants.—For the comparison of phase 1 and 2, all patients admitted to a stratified
random sample of 6 medical and surgical units in a tertiary care hospital
over a 6-month period, and for the randomized comparison during phase 2, all
patients admitted to the same units and 2 randomly selected additional units
over a subsequent 9-month period. Interventions.—A physician computer order entry system (POE) for all units and a team-based
intervention that included changing the role of pharmacists, implemented for
half the units. Main Outcome Measure.—Nonintercepted serious medication errors. Results.—Comparing identical units between phases 1 and 2, nonintercepted serious
medication errors decreased 55%, from 10.7 events per 1000 patient-days to
4.86 events per 1000 (P=.01). The decline occurred
for all stages of the medication-use process. Preventable ADEs declined 17%
from 4.69 to 3.88 (P=.37), while nonintercepted potential
ADEs declined 84% from 5.99 to 0.98 per 1000 patient-days (P=.002). When POE-only was compared with the POE plus team intervention
combined, the team intervention conferred no additional benefit over POE. Conclusions.—Physician computer order entry decreased the rate of nonintercepted
serious medication errors by more than half, although this decrease was larger
for potential ADEs than for errors that actually resulted in an ADE.
2,073 citations
••
TL;DR: Patients with cancer and single metastases to the brain who receive treatment with surgical resection and postoperative radiotherapy have fewer recurrences of cancer in the brain and are less likely to die of neurologic causes than similar patients treated with surgical resection alone.
Abstract: Context.—For the treatment of a single metastasis to the brain, surgical resection
combined with postoperative radiotherapy is more effective than treatment
with radiotherapy alone. However, the efficacy of postoperative radiotherapy
after complete surgical resection has not been established. Objective.—To determine if postoperative radiotherapy resulted in improved neurologic
control of disease and increased survival. Design.—Multicenter, randomized, parallel group trial. Setting.—University-affiliated cancer treatment facilities. Patients.—Ninety-five patients who had single metastases to the brain that were
treated with complete surgical resections (as verified by postoperative magnetic
resonance imaging) between September 1989 and November 1997 were entered into
the study. Interventions.—Patients were randomly assigned to treatment with postoperative whole-brain
radiotherapy (radiotherapy group, 49 patients) or no further treatment (observation
group, 46 patients) for the brain metastasis, with median follow-up of 48
weeks and 43 weeks, respectively. Main Outcome Measures.—The primary end point was recurrence of tumor in the brain; secondary
end points were length of survival, cause of death, and preservation of ability
to function independently. Results.—Recurrence of tumor anywhere in the brain was less frequent in the radiotherapy
group than in the observation group (9 [18%] of 49 vs 32 [70%] of 46; P<.001). Postoperative radiotherapy prevented brain
recurrence at the site of the original metastasis (5 [10%] of 49 vs 21 [46%]
of 46; P<.001) and at other sites in the brain
(7 [14%] of 49 vs 17 [37%] of 46; P <.01). Patients
in the radiotherapy group were less likely to die of neurologic causes than
patients in the observation group (6 [14%] of 43 who died vs 17 [44%] of 39; P =.003). There was no significant difference between the
2 groups in overall length of survival or the length of time that patients
remained functionally independent. Conclusions.—Patients with cancer and single metastases to the brain who receive
treatment with surgical resection and postoperative radiotherapy have fewer
recurrences of cancer in the brain and are less likely to die of neurologic
causes than similar patients treated with surgical resection alone.
••
TL;DR: The CDSSs can enhance clinical performance for drug dosing, preventive care, and other aspects of medical care, but not convincingly for diagnosis.
Abstract: Context.—Many computer software developers and vendors claim that their systems
can directly improve clinical decisions. As for other health care interventions,
such claims should be based on careful trials that assess their effects on
clinical performance and, preferably, patient outcomes. Objective.—To systematically review controlled clinical trials assessing the effects
of computer-based clinical decision support systems (CDSSs) on physician performance
and patient outcomes. Data Sources.—We updated earlier reviews covering 1974 to 1992 by searching the MEDLINE,
EMBASE, INSPEC, SCISEARCH, and the Cochrane Library bibliographic databases
from 1992 to March 1998. Reference lists and conference proceedings were reviewed
and evaluators of CDSSs were contacted. Study Selection.—Studies were included if they involved the use of a CDSS in a clinical
setting by a health care practitioner and assessed the effects of the system
prospectively with a concurrent control. Data Extraction.—The validity of each relevant study (scored from 0-10) was evaluated
in duplicate. Data on setting, subjects, computer systems, and outcomes were
abstracted and a power analysis was done on studies with negative findings. Data Synthesis.—A total of 68 controlled trials met our criteria, 40 of which were published
since 1992. Quality scores ranged from 2 to 10, with more recent trials rating
higher (mean, 7.7) than earlier studies (mean, 6.4) (P<.001).
Effects on physician performance were assessed in 65 studies and 43 found
a benefit (66%). These included 9 of 15 studies on drug dosing systems, 1
of 5 studies on diagnostic aids, 14 of 19 preventive care systems, and 19
of 26 studies evaluating CDSSs for other medical care. Six of 14 studies assessing
patient outcomes found a benefit. Of the remaining 8 studies, only 3 had a
power of greater than 80% to detect a clinically important effect. Conclusions.—Published studies of CDSSs are increasing rapidly, and their quality
is improving. The CDSSs can enhance clinical performance for drug dosing,
preventive care, and other aspects of medical care, but not convincingly for
diagnosis. The effects of CDSSs on patient outcomes have been insufficiently
studied.
••
TL;DR: The hypothesis that when complex surgical oncologic procedures are provided by surgical teams in hospitals with specialty expertise, mortality rates are lower is supported.
Abstract: Context.—Hospitals that treat a relatively high volume of patients for selected
surgical oncology procedures report lower surgical in-hospital mortality rates
than hospitals with a low volume of the procedures, but the reports do not
take into account length of stay or adjust for case mix. Objective.—To determine whether hospital volume was inversely associated with 30-day
operative mortality, after adjusting for case mix. Design and Setting.—Retrospective cohort study using the Surveillance, Epidemiology, and
End Results (SEER)–Medicare linked database in which the hypothesis
was prospectively specified. Surgeons determined in advance the surgical oncology
procedures for which the experience of treating a larger volume of patients
was most likely to lead to the knowledge or technical expertise that might
offset surgical fatalities. Patients.—All 5013 patients in the SEER registry aged 65 years or older at cancer
diagnosis who underwent pancreatectomy, esophagectomy, pneumonectomy, liver
resection, or pelvic exenteration, using incident cancers of the pancreas,
esophagus, lung, colon, and rectum, and various genitourinary cancers diagnosed
between 1984 and 1993. Main Outcome Measure.—Thirty-day mortality in relation to procedure volume, adjusted for comorbidity,
patient age, and cancer stage. Results.—Higher volume was linked with lower mortality for pancreatectomy (P=.004), esophagectomy (P<.001),
liver resection (P=.04), and pelvic exenteration
(P=.04), but not for pneumonectomy (P=.32). The most striking results were for esophagectomy, for which
the operative mortality rose to 17.3% in low-volume hospitals, compared with
3.4% in high-volume hospitals, and for pancreatectomy, for which the corresponding
rates were 12.9% vs 5.8%. Adjustments for case mix and other patient factors
did not change the finding that low volume was strongly associated with excess
mortality. Conclusions.—These data support the hypothesis that when complex surgical oncologic
procedures are provided by surgical teams in hospitals with specialty expertise,
mortality rates are lower.
••
TL;DR: Although reducing the prevalence of health risk behaviors in low-income populations is an important public health goal, socioeconomic differences in mortality are due to a wider array of factors and, therefore, would persist even with improved health behaviors among the disadvantaged.
Abstract: Context.— A prominent hypothesis regarding social inequalities in mortality is that the elevated risk among the socioeconomically disadvantaged is largely due to the higher prevalence of health risk behaviors among those with lower levels of education and income. Objective.— To investigate the degree to which 4 behavioral risk factors (cigarette smoking, alcohol drinking, sedentary lifestyle, and relative body weight) explain the observed association between socioeconomic characteristics and all-cause mortality. Design.— Longitudinal survey study investigating the impact of education, income, and health behaviors on the risk of dying within the next 7.5 years. Participants.— A nationally representative sample of 3617 adult women and men participating in the Americans’ Changing Lives survey. Main Outcome Measure.— All-cause mortality verified through the National Death Index and death certificate reviews. Results.— Educational differences in mortality were explained in full by the strong association between education and income. Controlling for age, sex, race, urbanicity, and education, the hazard rate ratio of mortality was 3.22 (95% confidence interval [CI], 2.01-5.16) for those in the lowest-income group and 2.34 (95% CI, 1.49-3.67) for those in the middle-income group. When health risk behaviors were considered, the risk of dying was still significantly elevated for the lowest-income group (hazard rate ratio, 2.77; 95% CI, 1.74-4.42) and the middle-income group (hazard rate ratio, 2.14; 95% CI, 1.38-3.25). Conclusion.— Although reducing the prevalence of health risk behaviors in low-income populations is an important public health goal, socioeconomic differences in mortality are due to a wider array of factors and, therefore, would persist even with improved health behaviors among the disadvantaged.
••
TL;DR: More regression of coronary atherosclerosis occurred after 5 years than after 1 year in the experimental group, whereas in the control group, coronary atherosclerosis continued to progress and more than twice as many cardiac events occurred.
Abstract: Context.— The Lifestyle Heart Trial demonstrated that intensive lifestyle changes may lead to regression of coronary atherosclerosis after 1 year. Objectives.— To determine the feasibility of patients to sustain intensive lifestyle changes for a total of 5 years and the effects of these lifestyle changes (without lipid-lowering drugs) on coronary heart disease. Design.— Randomized controlled trial conducted from 1986 to 1992 using a randomized invitational design. Patients.— Forty-eight patients with moderate to severe coronary heart disease were randomized to an intensive lifestyle change group or to a usual-care control group, and 35 completed the 5-year follow-up quantitative coronary arteriography. Setting.— Two tertiary care university medical centers. Intervention.— Intensive lifestyle changes (10% fat whole foods vegetarian diet, aerobic exercise, stress management training, smoking cessation, group psychosocial support) for 5 years. Main Outcome Measures.— Adherence to intensive lifestyle changes, changes in coronary artery percent diameter stenosis, and cardiac events. Results.— Experimental group patients (20 [71%] of 28 patients completed 5-year follow-up) made and maintained comprehensive lifestyle changes for 5 years, whereas control group patients (15 [75%] of 20 patients completed 5-year follow-up) made more moderate changes. In the experimental group, the average percent diameter stenosis at baseline decreased by 1.75 absolute percentage points after 1 year (a 4.5% relative improvement) and by 3.1 absolute percentage points after 5 years (a 7.9% relative improvement). In contrast, the average percent diameter stenosis in the control group increased by 2.3 percentage points after 1 year (a 5.4% relative worsening) and by 11.8 percentage points after 5 years (a 27.7% relative worsening) (P = .001 between groups).
Twenty-five cardiac events occurred in 28 experimental group patients vs 45 events in 20 control group patients during the 5-year follow-up (risk ratio for any event for the control group, 2.47 [95% confidence interval, 1.48-4.20]). Conclusions.— More regression of coronary atherosclerosis occurred after 5 years than after 1 year in the experimental group. In contrast, in the control group, coronary atherosclerosis continued to progress and more than twice as many cardiac events occurred.
••
TL;DR: It is demonstrated that the prevalence of community-acquired MRSA among children without identified risk factors is increasing, and the spectrum of disease associated with MRSA isolation is defined.
Abstract: Context.—Community-acquired methicillin-resistant Staphylococcus
aureus (MRSA) infections in children have occurred primarily in individuals
with recognized predisposing risks. Community-acquired MRSA infections in
the absence of identified risk factors have been reported infrequently. Objectives.—To determine whether community-acquired MRSA infections in children
with no identified predisposing risks are increasing and to define the spectrum
of disease associated with MRSA isolation. Design.—Retrospective review of medical records. Patients.—Hospitalized children with S aureus isolated
between August 1988 and July 1990 (1988-1990) and between August 1993 and
July 1995 (1993-1995). Setting.—The University of Chicago Children's Hospital. Main Outcome Measures.—Prevalence of community-acquired MRSA over time, infecting vs colonizing
isolates, and risk factors for disease. Results.—The number of children hospitalized with community-acquired MRSA disease
increased from 8 in 1988-1990 to 35 in 1993-1995. Moreover, the prevalence
of community-acquired MRSA without identified risk increased from 10 per 100,000
admissions in 1988-1990 to 259 per 100,000 admissions in 1993-1995 (P<.001), and a greater proportion of isolates produced clinical
infection. The clinical syndromes associated with MRSA in children without
identified risk were similar to those associated with community-acquired methicillin-susceptible S aureus. Notably, 7 (70%) of 10 community-acquired MRSA
isolates obtained from children with an identified risk were nonsusceptible
to at least 2 drugs, compared with only 6 (24%) of 25 isolates obtained from
children without an identified risk (P=.02). Conclusions.—These findings demonstrate that the prevalence of community-acquired
MRSA among children without identified risk factors is increasing.
••
TL;DR: Many US children watch a great deal of television and are inadequately vigorously active, and vigorous activity levels are lowest among girls, non-Hispanic blacks, and Mexican Americans.
Abstract: Context.—Physical inactivity contributes to weight gain in adults, but whether
this relationship is true for children of different ethnic groups is not well
established. Objective.—To assess participation in vigorous activity and television watching
habits and their relationship to body weight and fatness in US children. Design.—Nationally representative cross-sectional survey with an in-person interview
and medical examination. Setting and Participants.—Between 1988 and 1994, 4063 children aged 8 through 16 years were examined
as part of the National Health and Nutrition Examination Survey III. Mexican
Americans and non-Hispanic blacks were oversampled to produce reliable estimates
for these groups. Main Outcome Measures.—Episodes of weekly vigorous activity and daily hours of television watched,
and their relationship to body mass index and body fatness. Results.—Eighty percent of US children reported performing 3 or more bouts of
vigorous activity each week. This rate was lower in non-Hispanic black and
Mexican American girls (69% and 73%, respectively). Twenty percent of US children
participated in 2 or fewer bouts of vigorous activity per week, and the rate
was higher in girls (26%) than in boys (17%). Overall, 26% of US children
watched 4 or more hours of television per day and 67% watched at least 2 hours
per day. Non-Hispanic black children had the highest rates of watching 4 or
more hours of television per day (42%). Boys and girls who watched 4 or more
hours of television each day had greater body fat (P<.001)
and had a greater body mass index (P<.001) than
those who watched less than 2 hours per day. Conclusions.—Many US children watch a great deal of television and are inadequately
vigorously active. Vigorous activity levels are lowest among girls, non-Hispanic
blacks, and Mexican Americans. Intervention strategies to promote lifelong
physical activity among US children are needed to stem the adverse health
consequences of inactivity.
••
TL;DR: Gabapentin monotherapy appears to be efficacious for the treatment of pain and sleep interference associated with diabetic peripheral neuropathy and exhibits positive effects on mood and quality of life.
Abstract: Context.—Pain is the most disturbing symptom of
diabetic peripheral neuropathy. As many as 45% of patients with
diabetes mellitus develop peripheral neuropathies. Objective.—To evaluate the effect of gabapentin monotherapy
on pain associated with diabetic peripheral neuropathy. Design.—Randomized, double-blind, placebo-controlled,
8-week trial conducted between July 1996 and March 1997. Setting.—Outpatient clinics at 20 sites. Patients.—The 165 patients enrolled had a 1- to 5-year
history of pain attributed to diabetic neuropathy and a minimum 40-mm
pain score on the Short-Form McGill Pain Questionnaire visual analogue
scale. Intervention.—Gabapentin (titrated from 900 to 3600
mg/d or maximum tolerated dosage) or placebo. Main Outcome Measures.—The primary efficacy measure was
daily pain severity as measured on an 11-point Likert scale (0, no
pain; 10, worst possible pain). Secondary measures included sleep
interference scores, the Short-Form McGill Pain Questionnaire scores,
Patient Global Impression of Change and Clinical Global Impression of
Change, the Short Form–36 Quality of Life Questionnaire scores, and
the Profile of Mood States results. Results.—Eighty-four patients received gabapentin and 70
(83%) completed the study; 81 received placebo and 65 (80%) completed
the study. By intent-to-treat analysis, gabapentin-treated patients'
mean daily pain score at the study end point (baseline, 6.4; end point,
3.9; n = 82) was significantly lower (P<.001) compared with
the placebo-treated patients' end-point score (baseline, 6.5; end
point, 5.1; n = 80). All secondary outcome measures of pain were
significantly better in the gabapentin group than in the placebo group.
Additional statistically significant differences favoring gabapentin
treatment were observed in measures of quality of life (Short Form–36
Quality of Life Questionnaire and Profile of Mood States). Adverse
events experienced significantly more frequently in the gabapentin
group were dizziness (20 [24%] in the gabapentin group vs 4
[4.9%] in the control group; P<.001) and somnolence (19
[23%] in the gabapentin group vs 5 [6%] in the control group;
P = .003). Confusion was also more frequent in the gabapentin
group (7 [8%] vs 1 [1.2%]; P = .06). Conclusion.—Gabapentin monotherapy appears to be
efficacious for the treatment of pain and sleep interference associated
with diabetic peripheral neuropathy and exhibits positive effects on
mood and quality of life.
••
TL;DR: Gabapentin is effective in the treatment of pain and sleep interference associated with PHN, and mood and quality of life also improve with gabapentin therapy.
Abstract: Context.—Postherpetic neuralgia (PHN) is a syndrome
of often intractable neuropathic pain following herpes zoster
(shingles) that eludes effective treatment in many patients. Objective.—To determine the efficacy and safety of the
anticonvulsant drug gabapentin in reducing PHN pain. Design.—Multicenter, randomized, double-blind,
placebo-controlled, parallel design, 8-week trial conducted from August
1996 through July 1997. Setting.—Sixteen US outpatient clinical centers. Participants.—A total of 229 subjects were randomized. Intervention.—A 4-week titration period to a maximum dosage
of 3600 mg/d of gabapentin or matching placebo. Treatment was
maintained for another 4 weeks at the maximum tolerated dose.
Concomitant tricyclic antidepressants and/or narcotics were continued
if therapy was stabilized prior to study entry and remained constant
throughout the study. Main Outcome Measures.—The primary efficacy measure was
change in the average daily pain score based on an 11-point Likert
scale (0, no pain; 10, worst possible pain) from baseline week to the
final week of therapy. Secondary measures included average daily sleep
scores, Short-Form McGill Pain Questionnaire (SF-MPQ), Subject Global
Impression of Change and investigator-rated Clinical Global Impression
of Change, Short Form-36 (SF-36) Quality of Life Questionnaire, and
Profile of Mood States (POMS). Safety measures included the frequency
and severity of adverse events. Results.—One hundred thirteen patients received
gabapentin, and 89 (78.8%) completed the study; 116 received placebo,
and 95 (81.9%) completed the study. By intent-to-treat analysis,
subjects receiving gabapentin had a statistically significant reduction
in average daily pain score from 6.3 to 4.2 points compared with a
change from 6.5 to 6.0 points in subjects randomized to receive placebo
(P<.001). Secondary measures of pain as well as changes in
pain and sleep interference showed improvement with gabapentin
(P<.001). Many measures within the SF-36 and POMS also
significantly favored gabapentin (P≤.01). Somnolence,
dizziness, ataxia, peripheral edema, and infection were all more
frequent in the gabapentin group, but withdrawals were comparable in
the 2 groups (15 [13.3%] in the gabapentin group vs 11 [9.5%] in
the placebo group). Conclusions.—Gabapentin is effective in the
treatment of pain and sleep interference associated with PHN. Mood and
quality of life also improve with gabapentin
therapy.
••
TL;DR: Persistent pain was a commonly reported health problem among primary care patients and was consistently associated with psychological illness across centers, suggesting caution in drawing conclusions about the role of culture in shaping responses to persistent pain.
Abstract: Context.—There is little information on the extent of persistent pain across
cultures. Even though pain is a common reason for seeking health care, information
on the frequency and impacts of persistent pain among primary care patients
is inadequate. Objective.—To assess the prevalence and impact of persistent pain among primary
care patients. Design and Setting.—Survey data were collected from representative samples of primary care
patients as part of the World Health Organization Collaborative Study of Psychological
Problems in General Health Care, conducted in 15 centers in Asia, Africa,
Europe, and the Americas. Participants.—Consecutive primary care attendees between the age of majority (typically
18 years) and 65 years were screened (n=25,916) and stratified random samples
interviewed (n=5438). Main Outcome Measures.—Persistent pain, defined as pain present most of the time for a period
of 6 months or more during the prior year, and psychological illness were
assessed by the Composite International Diagnostic Interview. Disability was
assessed by the Groningen Social Disability Schedule and by activity-limitation
days in the prior month. Results.—Across all 15 centers, 22% of primary care patients reported persistent
pain, but there was wide variation in prevalence rates across centers (range,
5.5%-33.0%). Relative to patients without persistent pain, pain sufferers
were more likely to have an anxiety or depressive disorder (adjusted odds
ratio [OR], 4.14; 95% confidence interval [CI], 3.52-4.86), to experience
significant activity limitations (adjusted OR, 1.63; 95% CI, 1.41-1.89), and
to have unfavorable health perceptions (adjusted OR, 1.26; 95% CI, 1.07-1.49).
The relationship between psychological disorder and persistent pain was observed
in every center, while the relationship between disability and persistent
pain was inconsistent across centers. Conclusions.—Persistent pain was a commonly reported health problem among primary
care patients and was consistently associated with psychological illness across
centers. Large variation in frequency and the inconsistent relationship between
persistent pain and disability across centers suggest caution in drawing
conclusions about the role of culture in shaping responses to persistent pain
when comparisons are based on patient samples drawn from a limited number
of health care settings in each culture.
••
TL;DR: The quality of health care can be precisely defined and measured with a degree of scientific accuracy comparable with that of most measures used in clinical medicine, but current improvement efforts will not succeed without a major, systematic effort to overhaul how health care services are delivered, how clinicians are educated and trained, and how quality is assessed and improved.
Abstract: Objective.—To identify issues related to the quality of health care in the United
States, including its measurement, assessment, and improvement, requiring
action by health care professionals or other constituencies in the public
or private sectors. Participants.—The National Roundtable on Health Care Quality, convened by the Institute
of Medicine, a component of the National Academy of Sciences, comprised 20
representatives of the private and public sectors, practicing medicine and
nursing, representing academia, business, consumer advocacy, and the health
media, and including the heads of federal health programs. The roundtable
met 6 times between February 1996 and January 1998. It explored ongoing, rapid
changes in health care and the implications of these changes for the quality
of health and health care in the United States. Evidence.—Roundtable members held discussions with a wide variety of experts,
convened conferences, commissioned papers, and drew on their individual professional
experience. Consensus Process.—At the end of its deliberations, roundtable members reached consensus
on the conclusions described in this article by a series of discussions at
committee meetings and reviews of successive draft documents, the first of
which was created by the listed authors and the Institute of Medicine project
director. The drafts were revised following these discussions, and the final
document was approved according to the formal report review procedures of
the National Research Council of the National Academy of Sciences. Conclusions.—The quality of health care can be precisely defined and measured with
a degree of scientific accuracy comparable with that of most measures used
in clinical medicine. Serious and widespread quality problems exist throughout
American medicine. These problems, which may be classified as underuse, overuse,
or misuse, occur in small and large communities alike, in all parts of the
country, and with approximately equal frequency in managed care and fee-for-service
systems of care. Very large numbers of Americans are harmed as a direct result.
Quality of care is the problem, not managed care. Current efforts to improve
will not succeed unless we undertake a major, systematic effort to overhaul
how we deliver health care services, educate and train clinicians, and assess
and improve quality.
••
TL;DR: The WHR and waist circumference are independently associated with risk of CHD in women, and both were strongly associated with increased risk of CHD even among women with a BMI of 25 kg/m2 or less.
Abstract: Context.—Obesity is a well-established risk factor
for coronary heart disease (CHD), but whether regional fat distribution
contributes independently to risk remains unclear. Objective.—To compare waist-hip ratio (WHR) and waist
circumference in determining risk of CHD in women. Design and Setting.—Prospective cohort study among US
female registered nurses participating in the Nurses' Health Study
conducted between 1986, when the nurses completed a questionnaire, and
follow-up in June 1994. Participants.—A total of 44,702 women aged 40 to 65
years who provided waist and hip circumferences and were free of prior
CHD, stroke, or cancer in 1986. Main Outcome Measures.—Incidence of CHD (nonfatal
myocardial infarction or CHD death). Results.—During 8 years of follow-up, 320 CHD events (251
myocardial infarctions and 69 CHD deaths) were documented. Higher WHR
and greater waist circumference were independently associated with a
significantly increased age-adjusted risk of CHD. After adjusting for
body mass index (BMI) (defined as weight in kilograms divided by the
square of height in meters) and other cardiac risk factors, women with
a WHR of 0.88 or higher had a relative risk (RR) of 3.25 (95%
confidence interval [CI], 1.78-5.95) for CHD compared with women with
a WHR of less than 0.72. A waist circumference of 96.5 cm (38 in) or
more was associated with an RR of 3.06 (95% CI, 1.54-6.10). The WHR
and waist circumference were independently strongly associated with
increased risk of CHD also among women with a BMI of 25
kg/m2 or less. After adjustment for reported hypertension,
diabetes, and high cholesterol level, a WHR of 0.76 or higher or waist
circumference of 76.2 cm (30 in) or more was associated with more than
a 2-fold higher risk of CHD. Conclusions.—The WHR and waist circumference are
independently associated with risk of CHD in
women.
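As a side note on the anthropometric definitions quoted in the abstract above, both measures reduce to simple ratios. The sketch below computes them directly from those definitions; the measurement values are invented for illustration and are not study data:

```python
# Hypothetical illustration of the two measures defined in the abstract above.
# BMI = weight in kilograms divided by the square of height in meters;
# WHR = waist circumference divided by hip circumference (same units).
# The input values are invented, not Nurses' Health Study data.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: kg / m^2."""
    return weight_kg / height_m ** 2

def waist_hip_ratio(waist_cm: float, hip_cm: float) -> float:
    """Waist-hip ratio: waist / hip, dimensionless."""
    return waist_cm / hip_cm

print(round(bmi(68.0, 1.65), 2))          # 68 kg at 1.65 m -> 24.98
print(waist_hip_ratio(88.0, 100.0))       # -> 0.88, the abstract's high-WHR cutoff
```

The 0.88 and 25 kg/m2 thresholds mirror the cutoffs discussed in the abstract, which is why they are used as example outputs here.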
••
TL;DR: Patients with metastatic colon and lung cancer overestimate their survival probabilities, and these estimates may influence their preferences about medical therapies.
Abstract: Context.— Previous studies have documented that cancer patients tend to overestimate the probability of long-term survival. If patient preferences about the trade-offs between the risks and benefits associated with alternative treatment strategies are based on inaccurate perceptions of prognosis, then treatment choices may not reflect each patient’s true values. Objective.— To test the hypothesis that among terminally ill cancer patients an accurate understanding of prognosis is associated with a preference for therapy that focuses on comfort over attempts at life extension. Design.— Prospective cohort study. Setting.— Five teaching hospitals in the United States. Patients.— A total of 917 adults hospitalized with stage III or IV non-small cell lung cancer or colon cancer metastatic to liver in phases 1 and 2 of the Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments (SUPPORT). Main Outcome Measures.— Proportion of patients favoring life-extending therapy over therapy focusing on relief of pain and discomfort, patient and physician estimates of the probability of 6-month survival, and actual 6-month survival. Results.— Patients who thought they were going to live for at least 6 months were more likely (odds ratio [OR], 2.6; 95% confidence interval [CI], 1.8-3.7) to favor life-extending therapy over comfort care compared with patients who thought there was at least a 10% chance that they would not live 6 months. This OR was highest (8.5; 95% CI, 3.0-24.0) among patients who estimated their 6-month survival probability at greater than 90% but whose physicians estimated it at 10% or less. Patients overestimated their chances of surviving 6 months, while physicians estimated prognosis quite accurately. Patients who preferred life-extending therapy were more likely to undergo aggressive treatment, but controlling for known prognostic factors, their 6-month survival was no better.
Conclusions.— Patients with metastatic colon and lung cancer overestimate their survival probabilities and these estimates may influence their preferences about medical therapies.
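The odds ratios reported above compare the odds of favoring life-extending therapy between patient groups. As a sketch of the underlying arithmetic only (the 2x2 cell counts below are invented, not SUPPORT data), an odds ratio is computed from a 2x2 table like this:

```python
# Hypothetical 2x2 table (counts invented for illustration; not SUPPORT data):
#                      favored life-extending   favored comfort care
# optimistic patients        a = 120                  b = 80
# other patients             c = 60                   d = 104

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Odds ratio for a 2x2 table: (a/b) / (c/d) = (a*d) / (b*c)."""
    return (a * d) / (b * c)

print(round(odds_ratio(120, 80, 60, 104), 1))  # -> 2.6
```

The invented counts were chosen so the result matches the magnitude of the unadjusted OR quoted in the abstract; the study's reported ORs also adjust for covariates, which this raw calculation does not.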
••
Brown University, Harvard University, International AIDS Society, Stanford University, University of British Columbia, University of California, San Diego, University of Alabama at Birmingham, University of Colorado Denver, Istituto Superiore di Sanità, University of Paris, University of California, San Francisco
TL;DR: New data have provided a stronger rationale for earlier initiation of more aggressive therapy than previously recommended and reinforce the importance of careful selection of initial drug regimen for each patient for optimal long-term clinical benefit and adherence.
Abstract: Objective.—To provide recommendations for antiretroviral therapy based on information
available in mid-1998. Participants.—An international panel of physicians with expertise in antiretroviral
research and care of patients with human immunodeficiency virus (HIV) infection,
first convened by the International AIDS Society–USA in December 1995. Evidence.—The panel reviewed available clinical and basic science study
results (including phase 3 controlled trials; clinical, virologic, and immunologic
end point data; data presented at research conferences; and studies of HIV
pathophysiology); opinions of panel members were also considered. Recommendations
were limited to drugs available in mid-1998. Consensus Process.—Panel members monitor new clinical research reports and interim
results. The full panel meets regularly to discuss how the new information
may change treatment recommendations. Updated recommendations are developed
through consensus of the entire panel at each stage of development. Conclusions.—Accumulating data from clinical and pathogenesis studies continue
to support early institution of potent antiretroviral therapy in patients
with HIV infection. A variety of combination regimens show potency, expanding
choices for initial regimens for individual patients. Plasma HIV RNA assays
with increased sensitivity are important in monitoring therapeutic response;
however, more data are needed to determine precisely the HIV RNA levels that
define treatment failure. Long-term adverse drug effects are beginning to
emerge, requiring ongoing attention. Some issues regarding optimal long-term
approaches to antiretroviral management are unresolved. The increased complexity
in HIV management requires ongoing monitoring of new data for optimal treatment
of HIV infection.
••
TL;DR: An evaluation of the adequacy of pain management in elderly and minority cancer patients admitted to nursing homes found that age, gender, race, marital status, physical function, depression, and cognitive status were independently associated with the presence of pain.
Abstract: Context.—Cancer pain can be relieved with pharmacological agents as indicated
by the World Health Organization (WHO). All too frequently pain management
is reported to be poor. Objective.—To evaluate the adequacy of pain management in elderly and minority
cancer patients admitted to nursing homes. Design.—Retrospective, cross-sectional study. Setting.—A total of 1492 Medicare-certified and/or Medicaid-certified nursing
homes in 5 states participating in the Health Care Financing Administration's
demonstration project, which evaluated the implementation of the Resident
Assessment Instrument and its Minimum Data Set. Study Population.—A group of 13,625 cancer patients aged 65 years and older discharged
from the hospital to any of the facilities from 1992 to 1995. Data were from
the multilinked Systematic Assessment of Geriatric Drug Use via Epidemiology
(SAGE) database. Main Outcome Measures.—Prevalence and predictors of daily pain and of analgesic treatment.
Pain assessment was based on patients' report and was completed by a multidisciplinary
team of nursing home personnel that observed, over a 7-day period, whether
each resident complained or showed evidence of pain daily. Results.—A total of 4003 patients (24%, 29%, and 38% of those aged ≥85 years,
75 to 84 years, and 65 to 74 years, respectively) reported daily pain. Age,
gender, race, marital status, physical function, depression, and cognitive
status were all independently associated with the presence of pain. Of patients
with daily pain, 16% received a WHO level 1 drug, 32% a WHO level 2 drug,
and only 26% received morphine. Patients aged 85 years and older were less
likely to receive either weak opiates or morphine than those aged 65 to 74
years (13% vs 38%, respectively). More than a quarter of patients (26%) in
daily pain did not receive any analgesic agent. Patients older than 85 years
in daily pain were also more likely to receive no analgesia (odds ratio [OR],
1.40; 95% confidence interval [CI], 1.13-1.73). Other independent predictors
of failing to receive any analgesic agent were minority race (OR, 1.63; 95%
CI, 1.18-2.26 for African Americans), low cognitive performance (OR, 1.23;
95% CI, 1.05-1.44), and the number of other medications received (OR, 0.65;
95% CI, 0.5-0.84 for 11 or more medications). Conclusions.—Daily pain is prevalent among nursing home residents with cancer and
is often untreated, particularly among older and minority patients.
••
TL;DR: Reduced sodium intake and weight loss constitute a feasible, effective, and safe nonpharmacologic therapy of hypertension in older persons.
Abstract: Context.—Nonpharmacologic interventions are frequently recommended for treatment
of hypertension in the elderly, but there is a paucity of evidence from randomized
controlled trials in support of this recommendation. Objective.—To determine whether weight loss or reduced sodium intake is effective
in the treatment of older persons with hypertension. Design.—Randomized controlled trial. Participants.—A total of 975 men and women aged 60 to 80 years with systolic blood
pressure lower than 145 mm Hg and diastolic blood pressure lower than 85 mm
Hg while receiving treatment with a single antihypertensive medication. Setting.—Four academic health centers. Intervention.—The 585 obese participants were randomized to reduced sodium intake,
weight loss, both, or usual care, and the 390 nonobese participants were randomized
to reduced sodium intake or usual care. Withdrawal of antihypertensive medication
was attempted after 3 months of intervention. Main Outcome Measure.—Diagnosis of high blood pressure at 1 or more follow-up visits, or treatment
with antihypertensive medication, or a cardiovascular event during follow-up
(range, 15-36 months; median, 29 months). Results.—The combined outcome measure was less frequent among those assigned
vs not assigned to reduced sodium intake (relative hazard ratio, 0.69; 95%
confidence interval [CI], 0.59-0.81; P<.001) and,
in obese participants, among those assigned vs not assigned to weight loss
(relative hazard ratio, 0.70; 95% CI, 0.57-0.87; P<.001).
Relative to usual care, hazard ratios among the obese participants were 0.60
(95% CI, 0.45-0.80; P<.001) for reduced sodium
intake alone, 0.64 (95% CI, 0.49-0.85; P=.002) for
weight loss alone, and 0.47 (95% CI, 0.35-0.64; P<.001)
for reduced sodium intake and weight loss combined. The frequency of cardiovascular
events during follow-up was similar in each of the 6 treatment groups. Conclusion.—Reduced sodium intake and weight loss constitute a feasible, effective,
and safe nonpharmacologic therapy of hypertension in older persons.
••
TL;DR: Objective measures of subclinical disease and disease severity were independent and joint predictors of 5-year mortality in older adults, along with male sex, relative poverty, physical activity, smoking, indicators of frailty, and disability.
Abstract: Context—Multiple factors contribute to mortality in older adults, but the extent
to which subclinical disease and other factors contribute independently to
mortality risk is not knownObjective—To determine the disease, functional, and personal characteristics that
jointly predict mortality in community-dwelling men and women aged 65 years
or olderDesign—Prospective population-based cohort study with 5 years of follow-up
and a validation cohort of African Americans with 425-year follow-upSetting—Four US communitiesParticipants—A total of 5201 and 685 men and women aged 65 years or older in the
original and African American cohorts, respectivelyMain Outcome Measures—Five-year mortalityResults—In the main cohort, 646 deaths (12%) occurred within 5 years Using
Cox proportional hazards models, 20 characteristics (of 78 assessed) were
each significantly (P<05) and independently associated
with mortality: increasing age, male sex, income less than $50000 per year,
low weight, lack of moderate or vigorous exercise, smoking for more than 50
pack-years, high brachial (>169 mm Hg) and low tibial (≤127 mm Hg) systolic
blood pressure, diuretic use by those without hypertension or congestive heart
failure, elevated fasting glucose level (>72 mmol/L [130 mg/dL]), low albumin
level (≤37 g/L), elevated creatinine level (≥106 µmol/L [12 mg/dL]),
low forced vital capacity (≤206 mL), aortic stenosis (moderate or severe)
and abnormal left ventricular ejection fraction (by echocardiography), major
electrocardiographic abnormality, stenosis of internal carotid artery (by
ultrasound), congestive heart failure, difficulty in any instrumental activity
of daily living, and low cognitive function by Digit Symbol Substitution test
score Neither high-density lipoprotein cholesterol nor low-density lipoprotein
cholesterol was associated with mortality After adjustment for other factors,
the association between age and mortality diminished, but the reduction in
mortality with female sex persisted Finally, the risk of mortality was validated
in the second cohort; quintiles of risk ranged from 2% to 39% and 0% to 26%
for the 2 cohortsConclusions—Objective measures of subclinical disease and disease severity were
independent and joint predictors of 5-year mortality in older adults, along
with male sex, relative poverty, physical activity, smoking, indicators of
frailty, and disability Except for history of congestive heart failure, objective,
quantitative measures of disease were better predictors of mortality than
was clinical history of disease.
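The validation step described above — ranking subjects by predicted risk, splitting them into quintiles, and comparing observed mortality across the quintiles — can be sketched as follows. This is an illustrative sketch, not the study's code; the function name and the toy data are hypothetical.

```python
# Illustrative sketch of quintile-based risk validation (not the study's code).
def quintile_mortality(risk_scores, died):
    """Bin subjects into quintiles of predicted risk (ascending) and
    report the observed mortality rate in each bin."""
    order = sorted(range(len(risk_scores)), key=lambda i: risk_scores[i])
    n = len(order)
    rates = []
    for q in range(5):
        idx = order[q * n // 5:(q + 1) * n // 5]
        rates.append(sum(died[i] for i in idx) / len(idx))
    return rates

# Hypothetical toy data: higher risk score, higher observed mortality.
scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
deaths = [0,   0,   0,   0,   1,   0,   1,   1,   1,   1]
print(quintile_mortality(scores, deaths))  # → [0.0, 0.0, 0.5, 1.0, 1.0]
```

A model validates in the sense the abstract describes when observed mortality rises monotonically from the lowest to the highest risk quintile, as in this toy output.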
••
TL;DR: Although some children are being diagnosed as having ADHD with insufficient evaluation and in some cases stimulant medication is prescribed when treatment alternatives exist, there is little evidence of widespread overdiagnosis or misdiagnosis of ADHD or of widespread overprescription of methylphenidate by physicians.
Abstract: Objective.— To deal with public and professional concern regarding possible overprescription of attention-deficit/hyperactivity disorder (ADHD) medications, particularly methylphenidate, by reviewing issues related to the diagnosis, optimal treatment, and actual care of ADHD patients and of evidence of patient misuse of ADHD medications. Data Sources.— Literature review using a National Library of Medicine database search for 1975 through March 1997 on the terms attention deficit disorder with hyperactivity, methylphenidate, stimulants, and stimulant abuse and dependence. Relevant documents from the Drug Enforcement Administration were also reviewed. Study Selection.— All English-language studies dealing with children of elementary school through high school age were included. Data Extraction.— All searched articles were selected and were made available to coauthors for review. Additional articles known to coauthors were added to the initial list, and a consensus was developed among the coauthors regarding the articles most pertinent to the issues requested in the resolution calling for this report. Relevant information from these articles was included in the report. Data Synthesis.— Diagnostic criteria for ADHD are based on extensive empirical research and, if applied appropriately, lead to the diagnosis of a syndrome with high interrater reliability, good face validity, and high predictability of course and medication responsiveness. The criteria of what constitutes ADHD in children have broadened, and there is a growing appreciation of the persistence of ADHD into adolescence and adulthood. As a result, more children (especially girls), adolescents, and adults are being diagnosed and treated with stimulant medication, and children are being treated for longer periods of time.
Epidemiologic studies using standardized diagnostic criteria suggest that 3% to 6% of the school-aged population (elementary through high school) may suffer from ADHD, although the percentage of US youth being treated for ADHD is at most at the lower end of this prevalence range. Pharmacotherapy, particularly use of stimulants, has been extensively studied and generally provides significant short-term symptomatic and academic improvement. There is little evidence that stimulant abuse or diversion is currently a major problem, particularly among those with ADHD, although recent trends suggest that this could increase with the expanding production and use of stimulants. Conclusions.— Although some children are being diagnosed as having ADHD with insufficient evaluation and in some cases stimulant medication is prescribed when treatment alternatives exist, there is little evidence of widespread overdiagnosis or misdiagnosis of ADHD or of widespread overprescription of methylphenidate by physicians. JAMA. 1998;279:1100-1107
••
TL;DR: Use of the percentage of free PSA can reduce unnecessary biopsies in patients undergoing evaluation for prostate cancer, with a minimal loss in sensitivity in detecting cancer.
Abstract: Context.—The percentage of free prostate-specific antigen (PSA) in serum has
been shown to enhance the specificity of PSA testing for prostate cancer detection,
but earlier studies provided only preliminary cutoffs for clinical use.Objective.—To develop risk assessment guidelines and a cutoff value for defining
abnormal percentage of free PSA in a population of men to whom the test would
be applied.Design.—Prospective blinded study using the Tandem PSA and free PSA assays (Hybritech
Inc, San Diego, Calif).Setting.—Seven nationwide university medical centers.Participants.—A total of 773 men (379 with prostate cancer, 394 with benign prostatic
disease) 50 to 75 years of age with a palpably benign prostate gland, PSA
level of 4.0 to 10.0 ng/mL, and histologically confirmed diagnosis.Main Outcome Measures.—A percentage of free PSA cutoff that maintained 95% sensitivity for
prostate cancer detection, and probability of cancer for individual patients.Results.—The percentage of free PSA may be used in 2 ways: as a single cutoff
(ie, perform a biopsy for all patients at or below a cutoff of 25% free PSA)
or as an individual patient risk assessment (ie, base biopsy decisions on
each patient's risk of cancer). The 25% free PSA cutoff detected 95% of cancers
while avoiding 20% of unnecessary biopsies. The cancers associated with greater
than 25% free PSA were more prevalent in older patients, and generally were
less threatening in terms of tumor grade and volume. For individual patients,
a lower percentage of free PSA was associated with a higher risk of cancer
(range, 8%-56%). In the multivariate model used, the percentage of free PSA
was an independent predictor of prostate cancer (odds ratio [OR], 3.2; 95%
confidence interval [CI], 2.5-4.1; P<.001) and
contributed significantly more than age (OR, 1.2; 95% CI, 0.92-1.55) or total
PSA level (OR, 1.0; 95% CI, 0.92-1.11) in this cohort of subjects with total
PSA values between 4.0 and 10.0 ng/mL.Conclusions.—Use of the percentage of free PSA can reduce unnecessary biopsies in
patients undergoing evaluation for prostate cancer, with a minimal loss in
sensitivity in detecting cancer. A cutoff of 25% or less free PSA is recommended
for patients with PSA values between 4.0 and 10.0 ng/mL and a palpably benign
gland, regardless of patient age or prostate size. To our knowledge, this
study is the largest series to date evaluating the percentage of free PSA
in a population representative of patients in whom the test would be used
in clinical practice.
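The single-cutoff rule described above (biopsy when the percentage of free PSA is at or below 25%, applied only to men with total PSA between 4.0 and 10.0 ng/mL and a palpably benign gland) can be sketched in code. The function names, range check, and example values are ours; this is an illustration of the published cutoff, not clinical software.

```python
# Illustrative sketch of the 25% free PSA cutoff from the abstract
# (function names and error handling are hypothetical).
def percent_free_psa(free_psa, total_psa):
    """Percentage of free PSA: free / total x 100 (both in ng/mL)."""
    return 100.0 * free_psa / total_psa

def biopsy_recommended(free_psa, total_psa, palpably_benign=True):
    """Apply the single-cutoff rule only within its validated range:
    total PSA 4.0-10.0 ng/mL and a palpably benign gland."""
    if not (4.0 <= total_psa <= 10.0) or not palpably_benign:
        raise ValueError("rule studied only for PSA 4.0-10.0 ng/mL, benign gland")
    return percent_free_psa(free_psa, total_psa) <= 25.0

# Example: total PSA 6.0 ng/mL, free PSA 1.2 ng/mL -> 20% free -> biopsy.
print(biopsy_recommended(1.2, 6.0))  # True
```

The range check mirrors the abstract's caution that the cutoff was developed only for this population; outside it, the individual risk-assessment approach the authors describe would apply instead.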