
Showing papers presented at "American Medical Informatics Association Annual Symposium in 2008"


Proceedings Article
06 Nov 2008
TL;DR: Analysis of data from several clinical research databases at a single academic medical center, assessing the frequency, distribution and features of data entry errors, found that many errors were non-random, organized in spatial and cognitive clusters, and that some could potentially affect the interpretation of study results.
Abstract: Errors in clinical research databases are common, but relatively little is known about their characteristics and optimal detection and prevention strategies. We have analyzed data from several clinical research databases at a single academic medical center to assess the frequency, distribution and features of data entry errors. Error rates detected by the double-entry method ranged from 2.3% to 26.9%. Errors were due both to mistakes in data entry and to misinterpretation of the information in the original documents. Error detection based on data constraint failure significantly underestimated total error rates, and constraint-based alarms integrated into the database appear to prevent only a small fraction of errors. Many errors were non-random, organized in spatial and cognitive clusters, and some could potentially affect the interpretation of the study results. Further investigation is needed into methods for the detection and prevention of data errors in research.

118 citations
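The double-entry method the abstract relies on is simple to sketch: two people independently key the same source document, and any field on which the two passes disagree is flagged. A minimal Python illustration (the record fields and values here are hypothetical, not from the study):

```python
def double_entry_errors(entry_a, entry_b):
    """Compare two independent data entries field by field.

    Returns the discrepant field names and the error rate
    (discrepant fields / total fields compared).
    """
    fields = sorted(set(entry_a) | set(entry_b))
    discrepant = [f for f in fields if entry_a.get(f) != entry_b.get(f)]
    rate = len(discrepant) / len(fields) if fields else 0.0
    return discrepant, rate

# Hypothetical record keyed by field name, transcribed twice
first_pass  = {"age": 64, "sbp": 142, "diagnosis": "CHF"}
second_pass = {"age": 64, "sbp": 124, "diagnosis": "CHF"}

bad_fields, rate = double_entry_errors(first_pass, second_pass)
# "sbp" disagrees (142 vs 124, a plausible transposition error)
```

A discrepancy only shows that one of the two passes is wrong; resolving which one requires going back to the source document, which is why double entry is costly.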


Proceedings Article
06 Nov 2008
TL;DR: This PHR analysis shows that all forms of PHRs have initial net negative value; interoperable PHRs provide the most value, followed by third-party and payer-tethered PHRs, which also show positive net value.
Abstract: Personal health records (PHRs) are a rapidly growing area of health information technology despite a lack of significant value-based assessment. Here we present an assessment of the potential value of PHR systems, looking at both costs and benefits. We examine provider-tethered, payer-tethered, and third-party PHRs, as well as idealized interoperable PHRs. An analytical model was developed that considered eight PHR application and infrastructure functions. Our analysis projects the initial and annual costs and annual benefits of PHRs to the entire US over the next 10 years. This PHR analysis shows that all forms of PHRs have initial net negative value. However, at the end of 10 years, steady-state annual net value ranges from -$29 billion to $13 billion. Interoperable PHRs provide the most value, followed by third-party PHRs and payer-tethered PHRs, which also show positive net value. Provider-tethered PHRs consistently demonstrate negative net value.

92 citations


Proceedings Article
06 Nov 2008
TL;DR: Discretization acts as a variable selection method in addition to transforming the continuous values of a variable to discrete ones, and can significantly improve the classification performance of machine learning algorithms, including algorithms like Naive Bayes that are sensitive to the dimensionality of the data.
Abstract: Discretization acts as a variable selection method in addition to transforming the continuous values of the variable to discrete ones. Machine learning algorithms such as Support Vector Machines and Random Forests have been used for classification in high-dimensional genomic and proteomic data due to their robustness to the dimensionality of the data. We show that discretization can significantly improve the classification performance of these algorithms, as well as of algorithms like Naive Bayes that are sensitive to the dimensionality of the data.

88 citations
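As a rough illustration of the preprocessing step the abstract describes, here is a minimal equal-width discretizer in Python. This is only a sketch: supervised schemes such as entropy-based discretization are more common for genomic data, and the paper does not specify this particular method.

```python
def discretize(values, n_bins=3):
    """Equal-width discretization: map each continuous value to a bin index.

    Values at the upper edge are clamped into the last bin.
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant column
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

# Hypothetical expression values for one gene across six samples
expr = [0.1, 0.4, 0.35, 2.8, 2.9, 1.5]
bins = discretize(expr, n_bins=3)
# -> [0, 0, 0, 2, 2, 1]: the continuous feature becomes a 3-level factor
```

A feature whose values all collapse into one bin carries no information and can be dropped, which is how discretization doubles as variable selection.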


Proceedings Article
06 Nov 2008
TL;DR: Analysis of the failed matches of PatientsLikeMe symptom terms reveals challenges for online patient communication, not only with healthcare professionals, but with other patients.
Abstract: PatientsLikeMe is an online social networking community for patients. Subcommunities center on three distinct diagnoses: Amyotrophic Lateral Sclerosis, Multiple Sclerosis and Parkinson’s Disease. Community members can describe their symptoms to others in natural language terms, resulting in folksonomic tags available for clinical analysis and for browsing by other users to find “patients like me”. Forty-three percent of PatientsLikeMe symptom terms are present as exact (24%) or synonymous (19%) terms in the Unified Medical Language System Metathesaurus (National Library of Medicine; 2007AC). Slightly more than half of the symptom terms either do not match the UMLS, or are unclassifiable. A clinical vocabulary, SNOMED CT, accounts for 93% of the matching terms. Analysis of the failed matches reveals challenges for online patient communication, not only with healthcare professionals, but with other patients. In a Web 2.0 environment with lowered barriers between consumers and professionals, a deficiency in knowledge representation affects not only the professionals, but the consumers as well.

86 citations


Proceedings Article
Li Li1, Herbert S. Chase, Chintan Patel1, Carol Friedman1, Chunhua Weng 
06 Nov 2008
TL;DR: It is concluded that NLP-processed patient reports supplement important information for eligibility screening and should be used in combination with structured data.
Abstract: The prevalence of electronic medical record (EMR) systems has made mass-screening for clinical trials viable through secondary uses of clinical data, which often exist in both structured and free text formats. The tradeoffs of using information in either data format for clinical trials screening are understudied. This paper compares the results of clinical trial eligibility queries over ICD9-encoded diagnoses and NLP-processed textual discharge summaries. The strengths and weaknesses of both data sources are summarized along the following dimensions: information completeness, expressiveness, code granularity, and accuracy of temporal information. We conclude that NLP-processed patient reports supplement important information for eligibility screening and should be used in combination with structured data.

84 citations


Proceedings Article
06 Nov 2008
TL;DR: This work compared 3 approaches to semantic-similarity metrics (which rely on expert opinion, ontologies only, and information content), with 4 metrics applied to SNOMED-CT, and found poor agreement between the information-content-based metrics and the ontology-only metric.
Abstract: Semantic-similarity measures quantify concept similarities in a given ontology. Potential applications for these measures include search, data mining, and knowledge discovery in database or decision-support systems that utilize ontologies. To date, there have not been comparisons of the different semantic-similarity approaches on a single ontology. Such a comparison can offer insight on the validity of different approaches. We compared 3 approaches to semantic-similarity metrics (which rely on expert opinion, ontologies only, and information content) with 4 metrics applied to SNOMED-CT. We found that there was poor agreement among those metrics based on information content with the ontology-only metric. The metric based only on the ontology structure correlated most with expert opinion. Our results suggest that metrics based on the ontology only may be preferable to information-content-based metrics, and point to the need for more research on validating the different approaches.

84 citations
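An "ontology only" metric of the kind compared here can be sketched as an inverse shortest-path measure over the is-a hierarchy. The toy hierarchy below is hypothetical and merely stands in for an ontology like SNOMED CT; the paper's actual four metrics are not reproduced.

```python
from collections import deque

# Toy is-a hierarchy (child -> parent), standing in for a real ontology
is_a = {
    "pneumonia": "lung disease",
    "asthma": "lung disease",
    "lung disease": "disease",
    "hepatitis": "liver disease",
    "liver disease": "disease",
}

def path_similarity(a, b):
    """Ontology-only metric: inverse of the shortest is-a path length."""
    edges = {}
    for child, parent in is_a.items():
        edges.setdefault(child, set()).add(parent)
        edges.setdefault(parent, set()).add(child)
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:  # breadth-first search over the taxonomy graph
        node, d = frontier.popleft()
        if node == b:
            return 1.0 / (1.0 + d)
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None  # concepts not connected
```

Siblings under the same parent score higher than concepts whose nearest common ancestor is the root, which matches the intuition the ontology-structure metric encodes.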


Proceedings Article
06 Nov 2008
TL;DR: This paper demonstrates how a subset of anatomy relevant to the domain of radiology can be derived from an anatomy reference ontology, the Foundational Model of Anatomy (FMA) Ontology, to create an application ontology that is robust and expressive enough to incorporate and accommodate all salient anatomical knowledge necessary to support existing and emerging systems for managing anatomical information related to radiology.
Abstract: Domain reference ontologies are being developed to serve as generalizable and reusable sources designed to support any application specific to the domain. The challenge is how to develop ways to derive or adapt pertinent portions of reference ontologies into application ontologies. In this paper we demonstrate how a subset of anatomy relevant to the domain of radiology can be derived from an anatomy reference ontology, the Foundational Model of Anatomy (FMA) Ontology, to create an application ontology that is robust and expressive enough to incorporate and accommodate all salient anatomical knowledge necessary to support existing and emerging systems for managing anatomical information related to radiology. The principles underlying this work are applicable to domains beyond radiology, so our results could be extended to other areas of biomedicine in the future.

78 citations


Proceedings Article
06 Nov 2008
TL;DR: A method that extracts medication information from discharge summaries using a broader definition of medication information than previous studies, including drug names appearing with and without dosage information, misspelled drug names, and contextual information.
Abstract: We present a method that extracts medication information from discharge summaries. The program relies on parsing rules written as a set of regular expressions and on a user-configurable drug lexicon. Our evaluation shows a precision of 94% and recall of 83% in the extraction of medication information. We use a broader definition of medication information than previous studies, including drug names appearing with and without dosage information, misspelled drug names, and contextual information.

75 citations
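A lexicon-plus-regular-expression extractor of the kind described can be sketched as follows. The mini-lexicon, dose units, and frequency tokens are illustrative assumptions, not the paper's actual rules:

```python
import re

# Hypothetical mini-lexicon standing in for the user-configurable drug lexicon
DRUG_LEXICON = ["lisinopril", "metformin", "warfarin"]

# Drug name, optionally followed by a dose and a frequency token
MED_RE = re.compile(
    r"\b(?P<drug>%s)\b"
    r"(?:\s+(?P<dose>\d+(?:\.\d+)?)\s*(?P<unit>mg|mcg|g))?"
    r"(?:\s+(?P<freq>daily|bid|tid|qid))?" % "|".join(DRUG_LEXICON),
    re.IGNORECASE,
)

def extract_meds(text):
    """Return one dict per mention; dose/unit/freq are None when absent."""
    return [m.groupdict() for m in MED_RE.finditer(text)]

note = "Discharged on Lisinopril 10 mg daily and metformin."
meds = extract_meds(note)
# Captures a fully specified mention and a bare drug name without dosage
```

Making the dose and frequency groups optional is what lets the extractor cover the paper's broader definition: drug names with and without accompanying dosage information.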


Proceedings Article
06 Nov 2008
TL;DR: This pilot study developed a set of pictographs and used them to enhance two mock-up discharge instructions and found that patient comprehension and recall of discharge instructions could be improved by supplementing free texts with pictographs.
Abstract: Inpatient discharge instructions provide critical information for patients to manage their own care. These instructions are typically free-text and not easy for patients to understand and remember. In this pilot study, we developed a set of pictographs through a participatory design process and used them to enhance two mock-up discharge instructions. Tested on 13 healthy volunteers, the pictograph enhancement resulted in statistically significantly better recall rates (p<0.001). This suggests that patient comprehension and recall of discharge instructions could be improved by supplementing free text with pictographs.

72 citations


Proceedings ArticleDOI
19 Jun 2008
TL;DR: It is hypothesized that machine-learning algorithms (MLA) can classify completer and simulated suicide notes as well as mental health professionals (MHP), an important first step in developing an evidence-based suicide predictor for emergency department use.
Abstract: We hypothesize that machine-learning algorithms (MLA) can classify completer and simulated suicide notes as well as mental health professionals (MHP). Five MHPs classified 66 simulated or completer notes; MLAs were used for the same task. Results: MHPs were accurate 71% of the time; using the sequential minimal optimization algorithm (SMO), MLAs were accurate 78% of the time. There was no significant difference between the MLA and MHP classifiers. This is an important first step in developing an evidence-based suicide predictor for emergency department use.

72 citations


Proceedings Article
06 Nov 2008
TL;DR: The iPad tool as discussed by the authors enables researchers and clinicians to create semantic annotations on radiological images, enabling them to describe images and image regions using a graphical interface that maps their descriptions to structured ontologies semi-automatically.
Abstract: Radiological images contain a wealth of information, such as anatomy and pathology, which is often not explicit and computationally accessible. Information schemes are being developed to describe the semantic content of images, but such schemes can be unwieldy to operationalize because there are few tools to enable users to capture structured information easily as part of the routine research workflow. We have created iPad, an open source tool enabling researchers and clinicians to create semantic annotations on radiological images. iPad hides the complexity of the underlying image annotation information model from users, permitting them to describe images and image regions using a graphical interface that maps their descriptions to structured ontologies semi-automatically. Image annotations are saved in a variety of formats, enabling interoperability among medical records systems, image archives in hospitals, and the Semantic Web. Tools such as iPad can help reduce the burden of collecting structured information from images, and it could ultimately enable researchers and physicians to exploit images on a very large scale and glean the biological and physiological significance of image content.

Proceedings Article
06 Nov 2008
TL;DR: It is concluded that SNOMED CT based computable rules are accurate enough for the automated biosurveillance of pneumonias from radiological reports.
Abstract: Radiological reports are a rich source of clinical data which can be mined to assist with biosurveillance of emerging infectious diseases. In addition to biosurveillance, radiological reports are an important source of clinical data for health service research. Pneumonias and other radiological findings on chest xray or chest computed tomography (CT) are one type of relevant finding to both biosurveillance and health services research. In this study we examined the ability of a Natural Language Processing system to accurately identify pneumonias and other lesions from within free-text radiological reports. The system encoded the reports in the SNOMED CT Ontology and then a set of SNOMED CT based rules were created in our Health Archetype Language aimed at the identification of these radiological findings and diagnoses. The encoded rule was executed against the SNOMED CT encodings of the radiological reports. The accuracy of the reports was compared with a Clinician review of the Radiological Reports. The accuracy of the system in the identification of pneumonias was high with a Sensitivity (recall) of 100%, a specificity of 98%, and a positive predictive value (precision) of 97%. We conclude that SNOMED CT based computable rules are accurate enough for the automated biosurveillance of pneumonias from radiological reports.
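The accuracy figures reported above follow from standard confusion-matrix arithmetic, which can be sketched as follows (the counts below are hypothetical, chosen only to yield values close to those reported, not taken from the study):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity (recall), specificity, and positive predictive value
    (precision) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

# Hypothetical counts from comparing rule output against clinician review:
# 30 pneumonias found, 1 false alarm, 60 correctly ruled out, 0 missed
sens, spec, ppv = diagnostic_metrics(tp=30, fp=1, tn=60, fn=0)
```

With zero false negatives, sensitivity is exactly 100% regardless of the other counts, which is the property that matters most for a biosurveillance screen.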

Proceedings Article
06 Nov 2008
TL;DR: A hierarchical section header terminology was developed, supporting mappings to LOINC and other vocabularies; it contained 1109 concepts and 4332 synonyms and may enable better clinical note understanding and interoperability.
Abstract: Clinical documentation is often expressed in natural language text, yet providers often use common organizations that segment these notes in sections, such as “history of present illness” or “physical examination.” We developed a hierarchical section header terminology, supporting mappings to LOINC and other vocabularies; it contained 1109 concepts and 4332 synonyms. Physicians evaluated it compared to LOINC and the Evaluation and Management billing schema using a randomly selected corpus of history and physical notes. Evaluated documents contained a median of 54 sections and 27 “major sections.” There were 16,196 total sections in the evaluation note corpus. The terminology contained 99.9% of the clinical sections; LOINC matched 77% of section header concepts and 20% of section header strings in those documents. The section terminology may enable better clinical note understanding and interoperability. Future development and integration into natural language processing systems is needed.
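Mapping a note's section headers to concepts through a synonym table, which is what such a terminology enables, can be sketched as follows (the two-concept terminology fragment is hypothetical, far smaller than the 1109-concept terminology described):

```python
# Hypothetical fragment of a section-header terminology: concept -> synonyms
SECTION_TERMINOLOGY = {
    "history of present illness": ["history of present illness", "hpi",
                                   "present illness"],
    "physical examination": ["physical examination", "physical exam", "pe"],
}

# Invert to a synonym -> concept index for matching headers found in notes
SYNONYM_INDEX = {
    syn: concept
    for concept, syns in SECTION_TERMINOLOGY.items()
    for syn in syns
}

def map_section_header(header):
    """Normalize a raw header string and look up its concept, if any."""
    return SYNONYM_INDEX.get(header.strip().rstrip(":").lower())

concept = map_section_header("HPI:")
# The abbreviation resolves to its canonical section concept
```

The evaluation in the abstract is essentially this lookup run over 16,196 sections: string-level matching (LOINC matched 20% of strings) is much weaker than synonym-aware concept matching (99.9% coverage).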

Proceedings Article
06 Nov 2008
TL;DR: The obesity challenge is discussed, some approaches to automatically identifying obese patients and obesity co-morbidities from medical records are reviewed, and the challenge results are presented.
Abstract: The second i2b2 workshop on Natural Language Processing (NLP) for clinical records presents a shared-task and challenge on the automated extraction of obesity information from narrative patient records. The goal of the obesity challenge is to continue i2b2's effort to open patient records to studies by the NLP and Medical Informatics communities for the advancement of the state of the art in medical language processing. For this, i2b2 made available a set of de-identified patient records that are hand-annotated by medical professionals for obesity-related information, and invited the development of systems that can automatically mark the presence of obesity and co-morbidities in each patient from information in their records. In this workshop, we will discuss the obesity challenge, review some approaches to automatically identifying obese patients and obesity co-morbidities from medical records, and present the challenge results. The findings of the i2b2 challenge on obesity will shed light onto the state of the art in natural language processing for multi-label multi-class classification of narrative records for clinical applications.

Proceedings Article
06 Nov 2008
TL;DR: This paper explored supervised machine learning approaches to automatically classify an ad hoc clinical question into general topics and then evaluated different methods for automatically extracting keywords from an ad-hoc clinical question, achieving an average F-score of 56% on the 4,654 clinical questions maintained by the National Library of Medicine.
Abstract: Automatically extracting information needs from ad hoc clinical questions is an important step towards medical question answering. In this work, we first explored supervised machine-learning approaches to automatically classify an ad hoc clinical question into general topics. We then evaluated different methods for automatically extracting keywords from an ad hoc clinical question. Our methods were evaluated on the 4,654 clinical questions maintained by the National Library of Medicine. Our best systems or methods showed F-score of 76% for the task of question-topic classification and an average F-score of 56% for extracting keywords from ad hoc clinical questions.

Proceedings Article
06 Nov 2008
TL;DR: This work evaluated several alternate classification feature systems including unigram, n-gram, MeSH, and natural language processing (NLP) feature sets for their usefulness on 15 SR tasks, using the area under the receiver operating curve as a measure of goodness.
Abstract: Automated document classification can be a valuable tool for enhancing the efficiency of creating and updating systematic reviews (SRs) for evidence-based medicine. One way document classification can help is in performing work prioritization: given a set of documents, order them such that the most likely useful documents appear first. We evaluated several alternate classification feature systems including unigram, n-gram, MeSH, and natural language processing (NLP) feature sets for their usefulness on 15 SR tasks, using the area under the receiver operating curve as a measure of goodness. We also examined the impact of topic-specific training data compared to general SR inclusion data. The best feature set used a combination of n-gram and MeSH features. NLP-based features were not found to improve performance. Furthermore, topic-specific training data usually provides a significant performance gain over more general SR training.
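The area under the receiver operating curve used as the measure of goodness here has a direct rank interpretation for work prioritization: the probability that a randomly chosen relevant document is ranked above a randomly chosen irrelevant one. A few lines make that concrete (scores and labels are made up):

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney pairwise formulation; ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores; label 1 = document included in the review
a = auc([0.9, 0.8, 0.4, 0.3], [1, 0, 1, 0])
# 3 of the 4 positive/negative pairs are ordered correctly -> 0.75
```

An AUC of 1.0 means every included document would appear before every excluded one in the prioritized list, which is exactly the reviewer's ideal.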

Proceedings Article
06 Nov 2008
TL;DR: The development of an agent based simulation tool that has been designed to evaluate the impact of various physician staffing configurations on patient waiting times in the emergency department is described.
Abstract: Emergency department overcrowding is a problem that threatens the public health of communities and compromises the quality of care given to individual patients. The Institute of Medicine recommends that hospitals employ information technology and operations research methods to reduce overcrowding. This paper describes the development of an agent based simulation tool that has been designed to evaluate the impact of various physician staffing configurations on patient waiting times in the emergency department. We evaluate the feasibility of this tool at a single hospital emergency department.

Proceedings Article
06 Nov 2008
TL;DR: Experience and experimental results suggest that FRIL has the potential to increase the accuracy of data linkage across all studies involving record linkage, and will enable researchers to objectively assess the quality of linked data.
Abstract: A fine-grained record integration and linkage tool (FRIL) is presented. The tool extends traditional record linkage tools with a richer set of parameters. Users may systematically and iteratively explore the optimal combination of parameter values to enhance linking performance and accuracy. Results of linking a birth defects monitoring program and birth certificate data using FRIL show 99% precision and 95% recall when compared to results obtained through handcrafted algorithms, and the process took significantly less time to complete. Experience and experimental results suggest that FRIL has the potential to increase the accuracy of data linkage across all studies involving record linkage. In particular, FRIL will enable researchers to objectively assess the quality of linked data.
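Record linkage of the kind FRIL performs can be sketched, at its simplest, as thresholded string comparison between candidate pairs. The greedy one-to-one matcher below is only an illustration of the idea, not FRIL's actual algorithm, which exposes many more tunable parameters (per-field distance metrics, weights, acceptance levels):

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Crude string similarity in [0, 1]; FRIL offers several such metrics."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def link_records(left, right, threshold=0.85):
    """Greedily link each left record to its best unused right record
    whose name similarity clears the threshold."""
    links, used = [], set()
    for i, rec_l in enumerate(left):
        best, best_score = None, threshold
        for j, rec_r in enumerate(right):
            if j in used:
                continue
            score = name_similarity(rec_l["name"], rec_r["name"])
            if score >= best_score:
                best, best_score = j, score
        if best is not None:
            used.add(best)
            links.append((i, best, round(best_score, 2)))
    return links

# Hypothetical rosters echoing the birth-defects / birth-certificate setting
births = [{"name": "Jon Smith"}, {"name": "Ana Diaz"}]
registry = [{"name": "Anna Diaz"}, {"name": "John Smith"}]
pairs = link_records(births, registry)
```

The threshold is the kind of parameter a FRIL user would tune iteratively: raising it trades recall for precision, and the right setting depends on how noisy the name fields are.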

Proceedings Article
06 Nov 2008
TL;DR: The preliminary study demonstrated that this method for knowledge acquisition of disease-symptom pairs from clinical reports is effective, and can be applied to detect other clinical associations, such as between diseases and medications.
Abstract: Knowledge of associations between biomedical entities, such as disease-symptoms, is critical for many automated biomedical applications. In this work, we develop automated methods for acquisition and discovery of medical knowledge embedded in clinical narrative reports. MedLEE, a Natural Language Processing (NLP) system, is applied to extract and encode clinical entities from narrative clinical reports obtained from New York-Presbyterian Hospital (NYPH), and associations between the clinical entities are determined based on statistical methods adjusted by volume tests. We focus on two types of entities, disease and symptom, in this study. Evaluation based on a random sample of disease-symptom associations indicates an overall recall of 90% and a precision of 92%. In conclusion, the preliminary study demonstrated that this method for knowledge acquisition of disease-symptom pairs from clinical reports is effective. The automated method is generalizable, and can be applied to detect other clinical associations, such as between diseases and medications.
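A simple association statistic over co-occurrence counts illustrates the core idea: a disease-symptom pair is interesting when it co-occurs more often than chance would predict. The counts below are hypothetical, and the paper's MedLEE extraction and volume-test adjustment are not reproduced:

```python
def cooccurrence_lift(n_both, n_disease, n_symptom, n_reports):
    """Lift: observed disease-symptom co-occurrence divided by the count
    expected if the two entities were independent across reports."""
    expected = n_disease * n_symptom / n_reports
    return n_both / expected

# Hypothetical counts over a corpus of encoded clinical reports
lift = cooccurrence_lift(n_both=80, n_disease=200, n_symptom=400,
                         n_reports=10000)
# Independence would predict 8 co-mentions; 80 observed -> lift of 10
```

A lift near 1 means the pair co-occurs at chance level; the statistical machinery in the paper serves to decide when a lift this far above 1 is significant rather than a small-sample artifact.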

Proceedings Article
06 Nov 2008
TL;DR: These efforts to develop a novel query tool implemented in a large operational system at the Washington Hospital Center are described and the design of the interface to specify temporal patterns and the visual presentation of results are described.
Abstract: As electronic health records (EHR) become more widespread, they enable clinicians and researchers to pose complex queries that can benefit immediate patient care and deepen understanding of medical treatment and outcomes. However, current query tools make complex temporal queries difficult to pose, and physicians have to rely on computer professionals to specify the queries for them. This paper describes our efforts to develop a novel query tool implemented in a large operational system at the Washington Hospital Center (Microsoft Amalga, formerly known as Azyxxi). We describe our design of the interface to specify temporal patterns and the visual presentation of results, and report on a pilot user study looking for adverse reactions following radiology studies using contrast.

Proceedings Article
06 Nov 2008
TL;DR: A corpus of 324 health documents consisting of six different types of texts was compiled, and a panel of five health literacy experts assigned each document a readability level (1-7 Likert scale); the expert ratings were highly correlated with a patient representative's readability ratings.
Abstract: Developing easy-to-read health texts for consumers continues to be a challenge in health communication. Though readability formulae such as Flesch-Kincaid Grade Level have been used in many studies, they were found to be inadequate to estimate the difficulty of some types of health texts. One impediment to the development of new readability assessment techniques is the absence of a gold standard that can be used to validate them. To overcome this deficiency, we have compiled a corpus of 324 health documents consisting of six different types of texts. These documents were manually reviewed and assigned a readability level (1-7 Likert scale) by a panel of five health literacy experts. The expert assigned ratings were found to be highly correlated with a patient representative’s readability ratings (r = 0.81, p<0.0001).
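For context, the Flesch-Kincaid grade level the abstract mentions is computed as 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59. A sketch with an approximate vowel-run syllable counter (real implementations handle silent e's and other exceptions that this one ignores):

```python
import re

def count_syllables(word):
    """Rough syllable count: runs of vowels, with a minimum of one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level from sentence, word, and syllable counts."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / sentences
            + 11.8 * syllables / len(words)
            - 15.59)

grade = flesch_kincaid_grade(
    "Take one tablet daily. Call your doctor if pain persists.")
```

The formula sees only sentence length and syllable counts, which is precisely the inadequacy the corpus is meant to expose: two health texts with identical surface statistics can differ greatly in actual difficulty.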

Proceedings Article
06 Nov 2008
TL;DR: This work builds computer models that accurately predict citation counts of biomedical publications within a deep horizon of ten years using only predictive information available at publication time.
Abstract: The single most important bibliometric criterion for judging the impact of biomedical papers and their authors’ work is the number of citations received which is commonly referred to as “citation count”. This metric however is unavailable until several years after publication time. In the present work, we build computer models that accurately predict citation counts of biomedical publications within a deep horizon of ten years using only predictive information available at publication time. Our experiments show that it is indeed feasible to accurately predict future citation counts with a mixture of content-based and bibliometric features using machine learning methods. The models pave the way for practical prediction of the long-term impact of publication, and their statistical analysis provides greater insight into citation behavior.

Proceedings Article
06 Nov 2008
TL;DR: This study highlights a single architecture to extract a wide array of information elements from full-text publications of randomized clinical trials (RCTs), which combines a text classifier with a weak regular expression matcher.
Abstract: Clinical trials are one of the most valuable sources of scientific evidence for improving the practice of medicine. The Trial Bank project aims to improve structured access to trial findings by including formalized trial information into a knowledge base. Manually extracting trial information from published articles is costly, but automated information extraction techniques can assist. The current study highlights a single architecture to extract a wide array of information elements from full-text publications of randomized clinical trials (RCTs). This architecture combines a text classifier with a weak regular expression matcher. We tested this two-stage architecture on 88 RCT reports from 5 leading medical journals, extracting 23 elements of key trial information such as eligibility rules, sample size, intervention, and outcome names. Results prove this to be a promising avenue to help critical appraisers, systematic reviewers, and curators quickly identify key information elements in published RCT articles.
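The "weak regular expression matcher" stage can be illustrated for one of the information elements, sample size. The pattern below is a hypothetical example of such a matcher, not one of the paper's actual patterns; in the two-stage architecture it would only run on sentences the text classifier has already flagged as candidates:

```python
import re

# Weak matcher: a number following an enrollment verb and preceding
# a subject noun is taken as the sample size
SAMPLE_SIZE_RE = re.compile(
    r"(?:randomi[sz]ed|enrolled|recruited)\s+(\d[\d,]*)\s+"
    r"(?:patients|participants|subjects)",
    re.IGNORECASE,
)

def extract_sample_size(sentence):
    """Return the sample size as an int, or None when no pattern fires."""
    m = SAMPLE_SIZE_RE.search(sentence)
    return int(m.group(1).replace(",", "")) if m else None

n = extract_sample_size(
    "We randomized 1,024 patients to treatment or placebo.")
```

The matcher is "weak" by design: on arbitrary text it would misfire, but restricted to classifier-selected sentences its precision becomes acceptable, which is the point of the two-stage architecture.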

Proceedings Article
06 Nov 2008
TL;DR: It is described how news of a possible effect of lithium on the course of Amyotrophic Lateral Sclerosis was acquired by and diffused through an on-line community and led to participation in a patient-driven observational study of lithium and ALS.
Abstract: The Internet is not simply being used to search for information about disease and treatment. It is also being used by online disease-focused communities to organize their own experience base and to harness their own talent and insight in service to the cause of achieving better health outcomes. We describe how news of a possible effect of lithium on the course of Amyotrophic Lateral Sclerosis (ALS) was acquired by and diffused through an on-line community and led to participation in a patient-driven observational study of lithium and ALS. Our discussion suggests how the social web drives demand for patient-centered health informatics.

Proceedings Article
06 Nov 2008
TL;DR: There are approximately 108,390 IT professionals in healthcare in the US, and it is concluded that moving the entire US to higher levels of adoption (Stage 4) will require an additional 40,784 IT professionals.
Abstract: One of the essential ingredients for health information technology implementation is a well-trained and competent workforce. However, this workforce has not been quantified or otherwise characterized well. We extracted data from the HIMSS Analytics Database and extrapolated our findings to the US as a whole. We found that there are approximately 108,390 IT professionals in healthcare in the US. In addition, the amount of IT staff hired varies by level of EMR adoption, with the rate of IT FTE per bed starting at 0.082 FTE per bed at the lowest level of the EMR Adoption Model (Stage 0) and increasing to 0.210 FTE per bed at higher levels (Stage 4). We can extrapolate nationally to conclude that moving the entire US to higher levels of adoption (Stage 4) will require an additional 40,784 IT professionals. There are limitations to this analysis, including that the data are limited to IT professionals who are mainly in hospitals and do not include those who, for example, work for vendors or in non-clinical settings. Furthermore, data on biomedical informatics professionals are still virtually non-existent. Our analysis adds to data showing that increasing attention must be paid to the workforce that will develop, implement, and evaluate HIT applications. Further research is essential to better characterize all types of workers needed for adoption of health information technology, including their job roles, required competencies, and optimal education.

Proceedings Article
06 Nov 2008
TL;DR: This study is the first to propose a user-validated metric different from readability formulas, and shows that the percentage of function words in a sentence correlates significantly with ease of understanding as indicated by users, but not with the commonly used readability formula levels.
Abstract: Although understanding health information is important, the texts provided are often difficult to understand. There are formulas to measure readability levels, but there is little understanding of how linguistic structures contribute to these difficulties. We are developing a toolkit of linguistic metrics that are validated with representative users and can be measured automatically. In this study, we provide an overview of our corpus and how readability differs by topic and source. We compare two documents for three groups of linguistic metrics. We report on a user study evaluating one of the differentiating metrics: the percentage of function words in a sentence. Our results show that this percentage correlates significantly with ease of understanding as indicated by users but not with the readability formula levels commonly used. Our study is the first to propose a user validated metric, different from readability formulas.
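The differentiating metric evaluated here, the percentage of function words in a sentence, is straightforward to compute automatically. The function-word list below is a small illustrative subset, not the study's actual list:

```python
import re

# Illustrative subset of English function words (closed-class words);
# a real list would be considerably longer
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "is", "are", "and",
                  "or", "for", "with", "that", "this", "it", "you", "your"}

def function_word_percentage(sentence):
    """Percentage of tokens in a sentence that are function words."""
    words = re.findall(r"[A-Za-z']+", sentence.lower())
    if not words:
        return 0.0
    return 100.0 * sum(w in FUNCTION_WORDS for w in words) / len(words)

pct = function_word_percentage("Take the medicine with a glass of water.")
# 4 of the 8 tokens are function words
```

Unlike readability formulas, which depend on word and sentence length, this metric reflects grammatical density, which is one plausible reason it tracks user-reported ease of understanding where the formulas do not.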

Proceedings Article
06 Nov 2008
TL;DR: BioPortal enables ontology users to learn what biomedical ontologies exist, what a particular ontology might be good for, and how individual ontologies relate to one another, as well as define relationships among those ontologies.
Abstract: Background Ontologies provide domain knowledge to drive data integration, information retrieval, natural-language processing, and decision support. The National Center for Biomedical Ontology, one of the seven National Centers for Biomedical Computing created under the NIH Roadmap, is developing BioPortal, a Web-based system that serves as a repository for biomedical ontologies. BioPortal defines relationships among those ontologies and between the ontologies and online data resources such as PubMed, ClinicalTrials.gov, and the Gene Expression Omnibus (GEO). BioPortal supports not only the technical requirements for access to biomedical ontologies either via Web browsers or via Web services, but also community-based participation in the evaluation and evolution of ontology content. BioPortal enables ontology users to learn what biomedical ontologies exist, what a particular ontology might be good for, and how individual ontologies relate to one another.

Proceedings Article
06 Nov 2008
TL;DR: There is little difference in the indexing performance when lemmatization or stemming is used, however, the multi-terminology approach outperforms indexing relying on a single terminology in terms of recall.
Abstract: Background: To assist with the development of a French online quality-controlled health gateway (CISMeF), an automatic indexing tool assigning MeSH descriptors to medical text in French was created. The French Multi-Terminology Indexer (F-MTI) relies on a multi-terminology approach involving four prominent medical terminologies and the mappings between them. Objective: In this paper, we compare lemmatization and stemming as methods to process French medical text for indexing. We also evaluate the multi-terminology approach implemented in F-MTI. Methods: The indexing strategies were assessed on a corpus of 18,814 resources indexed manually. Results: There is little difference in the indexing performance when lemmatization or stemming is used. However, the multi-terminology approach outperforms indexing relying on a single terminology in terms of recall. Conclusion: F-MTI will soon be used in the CISMeF production environment and in a Health Multi-Terminology Server in French.
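The lemmatization-versus-stemming comparison above can be illustrated in miniature. The sketch below uses a toy lemma dictionary and crude suffix-stripping rules, not F-MTI's actual linguistic resources; it only shows why the two normalizations can yield different index terms (stemming truncates to a shared prefix, lemmatization maps to a dictionary form, and irregular forms such as "chevaux" defeat suffix rules entirely).

```python
# Toy lemma table: surface form -> dictionary form. Real lemmatizers
# use full morphological lexicons.
LEMMAS = {"chevaux": "cheval", "maladies": "maladie", "rénales": "rénal"}

def lemmatize(token: str) -> str:
    """Dictionary lookup; falls back to the surface form if unknown."""
    return LEMMAS.get(token, token)

def stem(token: str) -> str:
    """Crude suffix stripping, in the spirit (only) of a French stemmer."""
    for suffix in ("ales", "ies", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

for t in ("maladies", "rénales", "chevaux"):
    print(f"{t}: stem={stem(t)} lemma={lemmatize(t)}")
```

Index terms built from stems ("malad") match more surface variants at the cost of precision, while lemma-based terms ("maladie") stay interpretable; the study found the choice mattered little compared with adding terminologies.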

Proceedings Article
06 Nov 2008
TL;DR: Evaluation of CDSS will be of utmost importance in the future with increasing use of electronic health records, and usability engineering principles can identify interface problems that may lead to potential medical adverse events, and should be incorporated early in the software design phase.
Abstract: INTRODUCTION Clinical decision support systems (CDSS) have the potential to reduce adverse medical events, but improper design can introduce new forms of error. CDSS pertaining to community acquired pneumonia and neutropenic fever were studied to determine whether usability of the graphical user interface might contribute to potential adverse medical events. METHODS Automated screen capture of 4 CDSS being used by volunteer emergency physicians was analyzed using structured methods. RESULTS 422 events were recorded over 56 sessions. In total, 169 negative comments, 55 positive comments, 130 neutral comments, 21 application events, 34 problems, 6 slips, and 5 mistakes were identified. Three mistakes could have had life-threatening consequences. CONCLUSION Evaluation of CDSS will be of utmost importance in the future with increasing use of electronic health records. Usability engineering principles can identify interface problems that may lead to potential medical adverse events, and should be incorporated early in the software design phase.

Proceedings Article
06 Nov 2008
TL;DR: A synthesis of the literature and a Delphi approach using three rounds of surveys with an expert panel resulted in identification of informatics competencies for nursing leaders that address computer skills, informatics knowledge, and informatics skills.
Abstract: Historically, educational preparation did not address informatics competencies; thus managers, administrators, and executives may not be prepared to use, or lead change in the use of, health information technologies. A number of resources for informatics competencies exist; however, no comprehensive list was found addressing the unique knowledge and skills required in the role of a manager or administrator. The purpose of this study was to develop informatics competencies for nursing leaders. A synthesis of the literature and a Delphi approach using three rounds of surveys with an expert panel resulted in identification of informatics competencies for nursing leaders that address computer skills, informatics knowledge, and informatics skills.