Regenstrief Institute

Nonprofit · Indianapolis, Indiana, United States

About: Regenstrief Institute is a nonprofit organization based in Indianapolis, Indiana, United States. It is known for research contributions in the topics Health care & Population. The organization has 742 authors who have published 2042 publications receiving 96,966 citations.


Papers
Journal Article
TL;DR: An association between DED presence in the primary and permanent dentitions was observed; further studies are necessary to fully characterise such a relationship.
Abstract: AIM To determine if the presence of developmental enamel defects (DED) in the primary dentition, as well as other factors, is a risk indicator for the presence of DED in the permanent dentition in children with mixed dentition. MATERIALS AND METHODS A cross-sectional study was undertaken in 1296 schoolchildren ages six to 12 years. DED [FDI, 1982] in both dentitions were identified by means of an oral exam scoring enamel opacities [classified as demarcated or diffuse] and enamel hypoplasia. Sociodemographic and socioeconomic variables were collected through a questionnaire. Socioeconomic status (SES) was determined based on the occupation and maximum level of education of the parents. Statistical analysis included logistic regression. RESULTS Mean age of participants was 8.40 +/- 1.68 years; 51.6% were boys. DED prevalence was 7.5% in the permanent dentition and 10.0% in the primary dentition. The logistic regression model, adjusting for sociodemographic and socioeconomic variables, showed that for each primary tooth with DED, the odds of observing DED in the permanent dentition increased 1.38 times [95% CI = 1.17-1.64; p < 0.001]. CONCLUSION An association between DED presence in both permanent and primary dentitions was observed. Further studies are necessary to fully characterise such a relationship.

33 citations

Journal Article · DOI
TL;DR: An electronic medical record system was used to determine the distribution of medications supplied to older urban adults and to examine the correlations of these distributions with healthcare costs and use.
Abstract: OBJECTIVES: The amount of medication dispensed to older adults for the treatment of chronic disease must be balanced carefully. Insufficient medication supplies lead to inadequate treatment of chronic disease, whereas excessive supplies represent wasted resources and the potential for toxicity. We used an electronic medical record system to determine the distribution of medications supplied to older urban adults and to examine the correlations of these distributions with healthcare costs and use. DESIGN: A cross-sectional study using data acquired over 3 years (1994–1996). SETTING: A tax-supported urban public healthcare system consisting of a 300-bed hospital, an emergency department, and a network of community-based ambulatory care centers. PATIENTS: Patients were >60 years of age and had at least one prescription refill and at least two ambulatory visits or one hospitalization during the 3-year period. MEASUREMENTS: Focusing on 12 major categories of drugs used to treat chronic diseases, we determined the amounts and direct costs of these medications dispensed to older adult patients. Amounts of medications that were needed by patients to medicate themselves adequately were compared with the medication supply actually dispensed considering all sources of care (primary, emergency, and inpatient). We calculated the excess drug costs attributable to oversupply of medication (>120% of the amount needed) and the drug cost reduction caused by undersupply of medication (<80% of the amount needed). We also compared total healthcare use and costs for patients who had an oversupply, an undersupply, or an appropriate supply of their medications. RESULTS: The cohort comprised 4164 patients with a mean age of 71 ± 7 (SD) who received a mean of 3 ± 2 (SD) drugs for chronic conditions. There were 668 patients (16%) who received <80% of the supply needed and 1938 patients (47%) who received >120%. The total direct cost of targeted medications for 3 years was $1.96 million or, on average, $654,000 annually. During the 3-year period, patients receiving >120% of their needed medications had excess direct medication costs of $279,084 or $144 per patient, whereas patients receiving <80% of drugs needed had reduced medication costs of $423,438 or $634 per patient. Multivariable analyses revealed that both under- and oversupplies of medication were associated with a greater likelihood of emergency department visits and hospital admissions. CONCLUSIONS: More than one-half of the older adults in our study had under- or oversupplies of medications for the treatment of their chronic diseases. Such inappropriate supplies of medications are associated with healthcare utilization and costs. J Am Geriatr Soc 48:760–768, 2000.
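The study's supply-adequacy categories (>120% of the amount needed as oversupply, <80% as undersupply) amount to a simple ratio classification. A minimal sketch of that rule, with an illustrative `classify_supply` helper and day counts that are not taken from the paper:

```python
def classify_supply(dispensed_days: float, needed_days: float) -> str:
    """Classify a patient's dispensed medication supply relative to need.

    Thresholds follow the abstract above: oversupply if more than 120%
    of the amount needed was dispensed, undersupply if less than 80%.
    """
    ratio = dispensed_days / needed_days
    if ratio > 1.20:
        return "oversupply"
    if ratio < 0.80:
        return "undersupply"
    return "appropriate"

# Illustrative examples over a one-year horizon:
print(classify_supply(450, 365))  # 123% of need -> "oversupply"
print(classify_supply(250, 365))  # ~68% of need -> "undersupply"
print(classify_supply(365, 365))  # 100% of need -> "appropriate"
```

The per-patient excess and reduced costs reported in the abstract would then be aggregated within the "oversupply" and "undersupply" groups, respectively.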

33 citations

Journal Article · DOI
TL;DR: PEFR added no predictive information to that contained in AQLQ scores and clinical and demographic data; the findings support the National Institutes of Health asthma guidelines' recommendation to routinely assess symptoms but not PEFR.
Abstract: OBJECTIVE: To investigate peak expiratory flow rate (PEFR) and quality of life scores for their ability to predict exacerbations of asthma.

33 citations

Journal Article · DOI
TL;DR: In this article, the authors apply Fair Information Practice (FIP) principles to electronic health records (EHRs) to allow patient control over who views their data, and demonstrate the benefits of these principles.
Abstract: INTRODUCTION Applying Fair Information Practice principles to electronic health records (EHRs) requires allowing patient control over who views their data.

32 citations

Journal Article · DOI
TL;DR: An important advance in the ability to obtain useful data from narrative reports is described; capturing diagnoses directly from physicians, as is currently done for cytology and pathology reports, requires first that physicians use computers as their primary reporting medium and second that they limit their diagnoses and impressions to standard codes.
Abstract: Computers and other machinery of the Information Age have been touted as bringing a revolution to medical care that would improve its quality and lower its costs [1]. However, accomplishing these tasks requires electronic medical record systems that are not merely electronic renditions of paper charts. For maximum effect, electronic medical record systems should actively participate in improving patient outcomes. The first attempts to improve care with electronic medical records began more than 20 years ago with the computerizing of guidelines for simple preventive care and for identifying abnormal test results and potential drug interactions [2, 3]. Over the ensuing two decades, computers have become much faster (by orders of magnitude) and much less expensive. Meanwhile, partly in response to increasing health care costs and research showing that medical practice varied greatly among geographic locations and practices [4], professional organizations and federal agencies began developing more sophisticated clinical practice guidelines [5]. The automation of early guidelines through computers improved health care delivery [6-8] and, occasionally, patient outcomes [9, 10]. Electronic medical records thus offer a way to efficiently improve and monitor the processes and outcomes of care. The ability to implement practice guidelines using electronic medical record systems depends on having sufficient data. Comprehensive electronic medical record systems that can store long-term data and implement guidelines are still uncommon. However, as more processes in health care become computerized (for example, laboratories, pharmacies, and billing offices), more clinical data are being stored electronically. Emerging standards for data transmission [11] and coding [12, 13] will augment the building of comprehensive data repositories from disparate data sources. 
Dictated clinical notes of patient encounters and textual reports of procedures and imaging studies are large, mostly untapped sources of important data. One way to electronically capture these data is by paying technicians to read, hand-code, and enter summary codes from such reports. One data-entry technician who is paid $30 000 can code and enter data for 100 000 reports a year. This cost ($0.30 per report processed) is less than 1% of the charge for these tests and procedures and compares favorably with the typical 6% to 8% overhead charged by most billing organizations. However, data-entry technicians require management (hiring, firing, training, and oversight) and introduce another source of error into patients' electronic records. Diagnostic impressions can also be directly captured from physicians as they dictate reports for imaging tests, pathology specimens, and procedures. Although we currently use this method for cytology and pathology reports, it requires first that physicians use computers as their primary reporting medium and second that they limit their diagnoses and impressions to standard codes. Neither of these requirements is prevalent today. Yet, a wealth of useful information remains locked in free-text reports. In this issue, Hripcsak and coworkers [14] from Columbia-Presbyterian Medical Center describe an important advance in our ability to obtain useful data from narrative reports. Their natural language processing software did as well as radiologists and internists in coding diagnoses from chest radiographs. Importantly, they did not erroneously assume that any single coding of the report was the criterion standard but rather relied on standard measures of interobserver agreement. The authors stress that the information gleaned from clinical reports by their language processor is not meant to replace the physician. Rather, their software extracts data for their clinical alert system [15]. 
Limiting the role these data play in providing care is important because, for now and into the future, neither fast computers nor sophisticated programs [16] can distill all of the nuances of language contained in dictated reports. Although difficult to measure and perhaps impossible to reproduce, the physician's gestalt will remain a critical component of clinical decision making. Hence, the electronic medical record system at Columbia-Presbyterian also stores and displays the full-text reports of radiographs [17]. We do disagree with one stance taken by Hripcsak and colleagues. To avoid false-positive reminders and thus prevent physicians from losing confidence in the clinical alert system, they have programmed their language processor to err on the side of specificity. That is, they have used stricter criteria to define their diagnosis codes and so reduce the number of false-positive alerts. We believe that the proper role of a reminder or alert system is to prevent clinicians from overlooking details and clinical relationships. Physicians at our institution accept our more sensitive (and less specific) system as long as they know from the start that some of the reminders will be wrong and should be ignored [7, 18]. Moreover, the strength of the recommendation can be varied with the reliability of the data. Because malpractice suits and, more importantly, clinical mistakes are triggered more often by errors of omission than errors of commission [19], a stance favoring sensitivity would be more appropriate. Hripcsak and colleagues studied only one type of radiograph (chest) in one environment (the hospital). To be more broadly useful, natural language processors will have to be specific to both the type of report (for example, radiographs, scintigrams, procedure reports, histories, and physical examinations) and setting (for example, inpatient, emergency department, outpatient). Obviously, much work needs to be done.
But the task is finite, and beginning at the beginning necessitates taking the technology that Hripcsak and colleagues have developed (and we hope will continue to refine) and applying it first to the most common textual reports and their most common conditions. Some crossover of terminology between reports will occur (for example, some of the codes for describing chest radiographs will apply to the physical examination of the chest), and testing the reliability and validity of such systems will become more standardized (and easier). The authors' methods have added greatly to this end. We applaud their pioneering efforts and the Editors of Annals for publishing a paper that would normally be buried in a highly technical journal with a limited circulation. The success of this experiment in natural language processing is a small but distinct step toward realizing the lofty goals of electronic medical records [1].
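The editorial's argument for tuning an alert system toward sensitivity rather than specificity can be made concrete with the standard confusion-matrix definitions. A minimal sketch with hypothetical alert counts (none of these numbers come from the Hripcsak study):

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true cases the alert system catches (true positive rate)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of non-cases correctly left un-alerted (true negative rate)."""
    return tn / (tn + fp)

# A stricter alerting rule fires fewer alerts: fewer false positives,
# but more true cases are missed.
strict = {"tp": 80, "fn": 20, "tn": 95, "fp": 5}
# A looser, more sensitive rule catches more cases at the cost of
# more false-positive alerts that clinicians must ignore.
loose = {"tp": 95, "fn": 5, "tn": 80, "fp": 20}

print(sensitivity(strict["tp"], strict["fn"]), specificity(strict["tn"], strict["fp"]))  # 0.8 0.95
print(sensitivity(loose["tp"], loose["fn"]), specificity(loose["tn"], loose["fp"]))      # 0.95 0.8
```

Favoring sensitivity, as the editorial recommends, means accepting the lower specificity of the second rule so that errors of omission are minimized.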

32 citations


Authors

Showing all 752 results

Name | H-index | Papers | Citations
Earl S. Ford | 130 | 404 | 116628
Andrew J. Saykin | 122 | 887 | 52431
Michael W. Weiner | 121 | 738 | 54667
Terry M. Therneau | 117 | 447 | 59144
Ting-Kai Li | 109 | 494 | 39558
Kurt Kroenke | 107 | 478 | 110326
E. John Orav | 100 | 379 | 34557
Li Shen | 84 | 558 | 26812
William M. Tierney | 84 | 423 | 24235
Robert S. Dittus | 82 | 252 | 32718
C. Conrad Johnston | 80 | 177 | 30409
Matthew Stephens | 80 | 216 | 98924
Morris Weinberger | 78 | 367 | 23600
Richard M. Frankel | 74 | 334 | 24885
Patrick J. Loehrer | 73 | 279 | 21068
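The H-index reported for each author above is defined as the largest h such that the author has h papers with at least h citations each. A minimal sketch of that computation (the citation list is illustrative, not drawn from the table):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers have at least 4 citations
```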
Network Information
Related Institutions (5)
Veterans Health Administration: 98.4K papers, 4.8M citations (89% related)
Oregon Health & Science University: 65.1K papers, 3.3M citations (87% related)
Brigham and Women's Hospital: 110.5K papers, 6.8M citations (86% related)
University of Texas Health Science Center at Houston: 42.5K papers, 2.1M citations (85% related)
Beth Israel Deaconess Medical Center: 52.5K papers, 2.9M citations (85% related)

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 2
2022 | 20
2021 | 170
2020 | 127
2019 | 154
2018 | 133