
Showing papers in "Yearb Med Inform in 2014"


Journal ArticleDOI
TL;DR: In reviewing the literature for the past three years, this work focuses on "big data" in the context of EHR systems and reports on some examples of how secondary use of data has been put into practice.
Abstract: Objectives: Implementation of Electronic Health Record (EHR) systems continues to expand. The massive number of patient encounters results in large amounts of stored data. Transforming clinical data into knowledge to improve patient care has been the goal of biomedical informatics professionals for many decades, and this work is now increasingly recognized outside our field. In reviewing the literature for the past three years, we focus on “big data” in the context of EHR systems and we report on some examples of how secondary use of data has been put into practice. Methods: We searched the PubMed database for articles from January 1, 2011 to November 1, 2013. We initiated the search with keywords related to “big data” and EHR. We identified relevant articles, and additional keywords from the retrieved articles were added. Based on the new keywords, more articles were retrieved and we manually narrowed down the set utilizing predefined inclusion and exclusion criteria. Results: Our final review includes articles categorized into the themes of data mining (pharmacovigilance, phenotyping, natural language processing), data application and integration (clinical decision support, personal monitoring, social media), and privacy and security. Conclusion: The increasing adoption of EHR systems worldwide makes it possible to capture large amounts of clinical data. There is an increasing number of articles addressing the theme of “big data”, and the concepts associated with these articles vary. The next step is to transform healthcare big data into actionable knowledge.

157 citations


Journal ArticleDOI
TL;DR: Current research that takes advantage of "Big Data" in health and biomedical informatics applications is summarized, highlighting ongoing development of powerful new methods for turning that large-scale, and often complex, data into information that provides new insights into human health, in a range of different areas.
Abstract: Objectives: To summarise current research that takes advantage of “Big Data” in health and biomedical informatics applications. Methods: Survey of trends in this work, and exploration of literature describing how large-scale structured and unstructured data sources are being used to support applications from clinical decision making and health policy, to drug design and pharmacovigilance, and further to systems biology and genetics. Results: The survey highlights ongoing development of powerful new methods for turning that large-scale, and often complex, data into information that provides new insights into human health, in a range of different areas. Consideration of this body of work identifies several important paradigm shifts that are facilitated by Big Data resources and methods: in clinical and translational research, from hypothesis-driven research to data-driven research, and in medicine, from evidence-based practice to practice-based evidence. Conclusions: The increasing scale and availability of large quantities of health data require strategies for data management, data linkage, and data integration beyond the limits of many existing information systems, and substantial effort is underway to meet those needs. As our ability to make sense of that data improves, the value of the data will continue to increase. Health systems, genetics and genomics, population and public health; all areas of biomedicine stand to benefit from Big Data and the associated technologies.

149 citations


Journal ArticleDOI
TL;DR: The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasm, to fully exploit the available technologies, and to improve data processing and data management regulations.
Abstract: Big data are receiving increasing attention in biomedicine and healthcare. It is therefore important to understand the reason why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries relevant implications in terms of reproducibility of research studies and management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasm, to fully exploit the available technologies, and to improve data processing and data management regulations.
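The map-reduce model mentioned in this abstract splits an analysis into a per-record map step and a merging reduce step. The sketch below is a minimal, single-machine illustration of that pattern in Python, counting diagnosis codes over a handful of invented encounter records; it is not taken from the paper, and real deployments run the same two phases across a cluster.

```python
# Minimal illustration of the map-reduce pattern on hypothetical coded encounters.
# The records and code values below are invented for demonstration only.
from functools import reduce
from collections import Counter

encounters = [
    {"patient_id": 1, "icd10": ["E11.9", "I10"]},   # hypothetical encounter records
    {"patient_id": 2, "icd10": ["I10"]},
    {"patient_id": 3, "icd10": ["E11.9", "E78.5"]},
]

def map_phase(record):
    # Emit per-record counts of diagnosis codes.
    return Counter(record["icd10"])

def reduce_phase(acc, partial):
    # Merge partial counts; on a real cluster this merge runs per key across nodes.
    acc.update(partial)
    return acc

code_counts = reduce(reduce_phase, map(map_phase, encounters), Counter())
print(code_counts)  # Counter({'E11.9': 2, 'I10': 2, 'E78.5': 1})
```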

140 citations


Journal ArticleDOI
TL;DR: The concept of Big Data and associated analytics are to be taken seriously when approaching the use of vast volumes of both structured and unstructured data in science and health-care.
Abstract: Objectives: As technology continues to evolve and rise in various industries, such as healthcare, science, education, and gaming, a sophisticated concept known as Big Data is surfacing. The concept of analytics aims to understand data. We set out to portray and discuss perspectives of the evolving use of Big Data in science and healthcare and to examine some of the opportunities and challenges. Methods: A literature review was conducted to highlight the implications associated with the use of Big Data in scientific research and healthcare innovations, both on a large and small scale. Results: Scientists and healthcare providers may learn from one another when it comes to understanding the value of Big Data and analytics. Small data, derived by patients and consumers, also requires analytics to become actionable. Connectivism provides a framework for the use of Big Data and analytics in the areas of science and healthcare. This theory assists individuals to recognize and synthesize how human connections are driving the increase in data. Despite the volume and velocity of Big Data, it is truly about technology connecting humans and assisting them to construct knowledge in new ways. Concluding Thoughts: The concept of Big Data and associated analytics are to be taken seriously when approaching the use of vast volumes of both structured and unstructured data in science and healthcare. Future exploration of issues surrounding data privacy, confidentiality, and education is needed. A greater focus on data from social media, the quantified-self movement, and the application of analytics to “small data” would also be useful.

127 citations


Journal ArticleDOI
TL;DR: Spatial analysis of diseases and health service planning are well-established research areas, and the development of future technologies and new application areas for GIS and data-gathering technologies such as GPS, smartphones, remote sensing, etc. will be nudging research in GIS and health.
Abstract: Objectives: The application of GIS in health science has increased over the last decade and new innovative application areas have emerged. This study reviews the literature and builds a framework to provide a conceptual overview of the domain, and to promote strategic planning for further research of GIS in health. Method: The framework is based on literature from the library databases Scopus and Web of Science. The articles were identified based on keywords and initially selected for further study based on titles and abstracts. A grounded theory-inspired method was applied to categorize the selected articles in main focus areas. Subsequent frequency analysis was performed on the identified articles in areas of infectious and non-infectious diseases and continent of origin. Results: A total of 865 articles were included. Four conceptual domains within GIS in health sciences comprise the framework: spatial analysis of disease, spatial analysis of health service planning, public health, and health technologies and tools. Frequency analysis by disease status and location shows that malaria and schistosomiasis are the most commonly analyzed infectious diseases, while cancer and asthma are the most frequently analyzed non-infectious diseases. Across categories, articles from North America predominate, and in the category of spatial analysis of diseases an equal number of studies concern Asia. Conclusion: Spatial analysis of diseases and health service planning are well-established research areas. The development of future technologies and new application areas for GIS and data-gathering technologies such as GPS, smartphones, remote sensing, etc. will be nudging research in GIS and health.

98 citations


Journal ArticleDOI
TL;DR: The major challenge for the near future is to transform analytical methods that are used in the biomedical and health domain, to fit the distributed storage and processing model that is required to handle big data, while ensuring confidentiality of the data being analyzed.
Abstract: Objectives: To review technical and methodological challenges for big data research in biomedicine and health. Methods: We discuss sources of big datasets, survey infrastructures for big data storage and big data processing, and describe the main challenges that arise when analyzing big data. Results: The life and biomedical sciences are massively contributing to the big data revolution through secondary use of data that were collected during routine care and through new data sources such as social media. Efficient processing of big datasets is typically achieved by distributing computation over a cluster of computers. Data analysts should be aware of pitfalls related to big data such as bias in routine care data and the risk of false-positive findings in high-dimensional datasets. Conclusions: The major challenge for the near future is to transform analytical methods that are used in the biomedical and health domain, to fit the distributed storage and processing model that is required to handle big data, while ensuring confidentiality of the data being analyzed.
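To illustrate the "distributed storage and processing model" this abstract refers to, the sketch below recasts a simple statistic (a mean) as per-partition partial results that are combined afterwards. The data, the partitioning, and the use of Python's multiprocessing module are illustrative assumptions; on a real cluster the same decomposition would run across machines rather than local processes.

```python
# Sketch of recasting a simple statistic (a mean) as per-partition partial results
# that can be combined afterwards - the basic move needed to fit an analysis to
# distributed processing. Data and partitioning are invented for illustration.
from multiprocessing import Pool

partitions = [
    [5.1, 6.2, 5.8],        # imagine each list living on a different node
    [7.0, 6.4],
    [5.5, 6.1, 6.9, 5.9],
]

def partial_stats(values):
    # Each worker returns (sum, count) for its own partition only.
    return sum(values), len(values)

if __name__ == "__main__":
    with Pool(processes=3) as pool:
        partials = pool.map(partial_stats, partitions)
    total, n = map(sum, zip(*partials))
    print(f"mean over all partitions: {total / n:.2f}")
```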

70 citations


Journal ArticleDOI
TL;DR: There is a compelling need to manage and use big data to make better decisions to support the transformation of healthcare to the personalized, evidence-supported model of the future.
Abstract: Objectives: This survey explores the role of big data and health analytics developed by IBM in supporting the transformation of healthcare by augmenting evidence-based decision-making. Methods: Some problems in healthcare and strategies for change are described. It is argued that change requires better decisions, which, in turn, require better use of the many kinds of healthcare information. Analytic resources that address each of the information challenges are described. Examples of the role of each of the resources are given. Results: There are powerful analytic tools that utilize the various kinds of big data in healthcare to help clinicians make more personalized, evidence-based decisions. Such resources can extract relevant information and provide insights that clinicians can use to make evidence-supported decisions. There are early suggestions that these resources have clinical value. As with all analytic tools, they are limited by the amount and quality of data. Conclusion: Big data is an inevitable part of the future of healthcare. There is a compelling need to manage and use big data to make better decisions to support the transformation of healthcare to the personalized, evidence-supported model of the future. Cognitive computing resources are necessary to manage the challenges in employing big data in healthcare. Such tools have been and are being developed. The analytic resources themselves do not drive, but support, healthcare transformation.

66 citations


Journal ArticleDOI
TL;DR: Considerations for the use of EHR data provide a starting point for practical applications and a CRI research agenda, which will be facilitated by CRI's key role in the infrastructure of a learning healthcare system.
Abstract: Objectives: The goal of this survey is to discuss the impact of the growing availability of electronic health record (EHR) data on the evolving field of Clinical Research Informatics (CRI), which is the union of biomedical research and informatics. Results: Major challenges for the use of EHR-derived data for research include the lack of standard methods for ensuring that data quality, completeness, and provenance are sufficient to assess the appropriateness of their use for research. Areas that need continued emphasis include methods for integrating data from heterogeneous sources, guidelines (including explicit phenotype definitions) for using these data in both pragmatic clinical trials and observational investigations, strong data governance to better understand and control quality of enterprise data, and promotion of national standards for representing and using clinical data. Conclusions: The use of EHR data has become a priority in CRI. Awareness of underlying clinical data collection processes will be essential in order to leverage these data for clinical research and patient care, and will require multi-disciplinary teams representing clinical research, informatics, and healthcare operations. Considerations for the use of EHR data provide a starting point for practical applications and a CRI research agenda, which will be facilitated by CRI's key role in the infrastructure of a learning healthcare system.
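One concrete ingredient this survey calls for is "explicit phenotype definitions". The toy rule below sketches what such a definition can look like in code; the codes, thresholds, and record layout are hypothetical and much simpler than published phenotyping algorithms.

```python
# Toy rule-based phenotype definition of the kind the survey calls for
# ("explicit phenotype definitions"). Codes, thresholds, and the record layout
# are hypothetical and far simpler than real phenotyping algorithms.
def has_type2_diabetes_phenotype(patient):
    has_dx = any(code.startswith("E11") for code in patient.get("icd10", []))
    on_metformin = any("metformin" in m.lower() for m in patient.get("medications", []))
    high_a1c = any(lab["name"] == "HbA1c" and lab["value"] >= 6.5
                   for lab in patient.get("labs", []))
    # Require a diagnosis code plus at least one corroborating data type.
    return has_dx and (on_metformin or high_a1c)

example = {
    "icd10": ["E11.9"],
    "medications": ["Metformin 500 mg"],
    "labs": [{"name": "HbA1c", "value": 7.1}],
}
print(has_type2_diabetes_phenotype(example))  # True
```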

52 citations


Journal ArticleDOI
TL;DR: In order to successfully utilize wearable sensor data to infer wellbeing, and enable proactive health management, standards and ontologies must be developed which allow for data to be shared between research groups and between commercial systems, promoting the integration of these data into health information systems.
Abstract: Objectives: The aim of this paper is to discuss how recent developments in the field of big data may potentially impact the future use of wearable sensor systems in healthcare. Methods: The article draws on the scientific literature to support the opinions presented by the IMIA Wearable Sensors in Healthcare Working Group. Results: The following is discussed: the potential for wearable sensors to generate big data; how complementary technologies, such as a smartphone, will augment the concept of a wearable sensor and alter the nature of the monitoring data created; how standards would enable sharing of data and advance scientific progress. Importantly, attention is drawn to statistical inference problems for which big datasets provide little assistance, or may hinder the identification of a useful solution. Finally, a discussion is presented on risks to privacy and possible negative consequences arising from intensive wearable sensor monitoring. Conclusions: Wearable sensor systems have the potential to generate datasets which are currently beyond our capabilities to easily organize and interpret. In order to successfully utilize wearable sensor data to infer wellbeing, and enable proactive health management, standards and ontologies must be developed which allow for data to be shared between research groups and between commercial systems, promoting the integration of these data into health information systems. However, policy and regulation will be required to ensure that the detailed nature of wearable sensor data is not misused to invade privacy or prejudice individuals.
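The conclusion's call for shared standards and ontologies ultimately comes down to agreeing on a common record shape for each observation. The snippet below shows one invented, minimal schema for a single wearable-sensor reading; it is not an existing standard, only an illustration of the fields (subject, device, quantity, unit, timestamp) that any shared format would need to pin down.

```python
# A minimal, hypothetical shared record for one wearable-sensor observation.
# This is an invented schema, not an existing standard; it only illustrates the
# fields that research groups and vendors would need to agree on before sharing.
import json
from datetime import datetime, timezone

observation = {
    "subject_id": "participant-042",                             # hypothetical identifier
    "device": {"vendor": "ExampleWearables", "model": "HR-1"},   # hypothetical device
    "quantity": "heart_rate",
    "value": 72,
    "unit": "beats/min",
    "timestamp": datetime(2014, 3, 1, 8, 30, tzinfo=timezone.utc).isoformat(),
}
print(json.dumps(observation, indent=2))
```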

49 citations


Journal ArticleDOI
TL;DR: The theoretical and foundational models of human factors and ergonomics (HFE) that are being advocated for achieving patient safety and quality, and their use in the evaluation of healthcare systems are examined; and the potential for macroergonomic HFE approaches within the context of current research in biomedical informatics is examined.
Abstract: Objectives: Recent federal mandates and incentives have spurred the rapid growth, development and adoption of health information technology (HIT). While providing significant benefits for better data integration, organization, and availability, recent reports have raised questions regarding their potential to cause medication errors, decreased clinician performance, and lowered efficiency. The goal of this survey article is to (a) examine the theoretical and foundational models of human factors and ergonomics (HFE) that are being advocated for achieving patient safety and quality, and their use in the evaluation of healthcare systems; and (b) examine the potential for macroergonomic HFE approaches within the context of current research in biomedical informatics. Methods: We reviewed the literature (2007-2013) on the use of HFE approaches in healthcare settings, from databases such as PubMed, CINAHL, and Cochrane. Results: Based on the review, we discuss the systems-oriented models, their use in the evaluation of HIT, and examples of their use in the evaluation of EHR systems, clinical workflow processes, and medication errors. We also discuss the opportunities for better integrating HFE methods within biomedical informatics research and its potential advantages. Conclusions: The use of HFE methods is still in its infancy - better integration of HFE within the design lifecycle and quality improvement efforts can further the ability of informatics researchers to address the key concerns regarding the complexity in clinical settings and develop HIT solutions that are designed within the social fabric of the considered setting.

44 citations


Journal ArticleDOI
TL;DR: The big data solution, using flexible markup, provides a route to improved utilization of processing power for organizing patient records in genotype and phenotype research.
Abstract: Objectives: Given the quickening speed of discovery of variant disease drivers from combined patient genotype and phenotype data, the objective is to provide methodology using big data technology to support the definition of deep phenotypes in medical records. Methods: As the vast stores of genomic information increase with next generation sequencing, the importance of deep phenotyping increases. The growth of genomic data and adoption of Electronic Health Records (EHR) in medicine provides a unique opportunity to integrate phenotype and genotype data into medical records. The method by which collections of clinical findings and other health-related data are leveraged to form meaningful phenotypes is an active area of research. Longitudinal data stored in EHRs provide a wealth of information that can be used to construct phenotypes of patients. We focus on a practical problem around data integration for deep phenotype identification within EHR data. The use of big data approaches is described that enable scalable markup of EHR events that can be used for semantic and temporal similarity analysis to support the identification of phenotype and genotype relationships. Conclusions: Stead and colleagues' 2005 concept of using light standards to increase the productivity of software systems by riding on the wave of hardware/processing power is described as a harbinger for designing future healthcare systems. The big data solution, using flexible markup, provides a route to improved utilization of processing power for organizing patient records in genotype and phenotype research.
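As a rough illustration of the flexible markup idea, the sketch below tags EHR events with lightweight labels and compares the timing of two tagged events across patients as a crude temporal-similarity cue. The event tags, dates, and comparison are invented for demonstration and are not the authors' method.

```python
# Lightweight markup of EHR events plus a crude temporal comparison between two
# patients' event sequences. Tags, dates, and the similarity cue are invented.
from datetime import date

patient_a = [("diagnosis:E11.9", date(2012, 3, 1)), ("lab:HbA1c", date(2012, 3, 15))]
patient_b = [("diagnosis:E11.9", date(2013, 7, 2)), ("lab:HbA1c", date(2013, 8, 20))]

def gap_days(events, first, second):
    # Days between the earliest occurrences of two tagged events.
    lookup = {tag: when for tag, when in reversed(events)}  # keeps earliest occurrence
    return (lookup[second] - lookup[first]).days

# Compare the diagnosis-to-lab interval across patients (a temporal similarity cue).
gap_a = gap_days(patient_a, "diagnosis:E11.9", "lab:HbA1c")   # 14
gap_b = gap_days(patient_b, "diagnosis:E11.9", "lab:HbA1c")   # 49
print(abs(gap_a - gap_b))  # smaller differences suggest more similar trajectories
```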

Journal ArticleDOI
TL;DR: The paper discusses how current research in the area of smart homes and ambient assisted living will be influenced by the use of big data, and concludes that it will be important to make information usable for managers and improve decision making, tailor smart home services based on big data, develop new business models, increase competition, and identify policies to ensure privacy, security and liability.
Abstract: Objectives: To discuss how current research in the area of smart homes and ambient assisted living will be influenced by the use of big data. Methods: A scoping review of literature published in scientific journals and conference proceedings was performed, focusing on smart homes, ambient assisted living and big data over the years 2011-2014. Results: The health and social care market has lagged behind other markets when it comes to the introduction of innovative IT solutions, and the market faces a number of challenges as the use of big data increases. First, there is a need for a sustainable and trustful information chain where the needed information can be transferred from all producers to all consumers in a structured way. Second, there is a need for big data strategies and policies to manage the new situation where information is handled and transferred independently of the place of the expertise. Finally, there is a possibility to develop new and innovative business models for a market that supports cloud computing, social media, crowdsourcing etc. Conclusions: The interdisciplinary area of big data, smart homes and ambient assisted living is no longer only of interest for IT developers; it is also of interest for decision makers as customers make more informed choices among today's services. In the future it will be important to make information usable for managers and improve decision making, tailor smart home services based on big data, develop new business models, increase competition and identify policies to ensure privacy, security and liability.

Journal ArticleDOI
TL;DR: Big data raises new ethical dilemmas, though its analysis methods can bring complementary real-time capabilities for monitoring epidemics and assessing vaccine benefit-risk balance.
Abstract: BACKGROUND: Generally, the benefits and risks of vaccines can be determined from studies carried out as part of regulatory compliance, followed by surveillance of routine data; however, there are some rarer and longer-term events that require new methods. Big data generated by increasingly affordable personalised computing and by pervasive computing devices are rapidly growing, and low-cost, high-volume cloud computing makes the processing of these data inexpensive. OBJECTIVE: To describe how big data and related analytical methods might be applied to assess the benefits and risks of vaccines. METHOD: We reviewed the literature on the use of big data to improve health, applied to generic vaccine use cases that illustrate the benefits and risks of vaccination. We defined a use case as the interaction between a user and an information system to achieve a goal. We used flu vaccination and pre-school childhood immunisation as exemplars. RESULTS: We reviewed three big data use cases relevant to assessing vaccine benefits and risks: (i) big data processing using crowdsourcing, distributed big data processing, and predictive analytics; (ii) data integration from heterogeneous big data sources, e.g. the increasing range of devices in the “internet of things”; and (iii) real-time monitoring of epidemics as well as vaccine effects via social media and other data sources. CONCLUSIONS: Big data raises new ethical dilemmas, though its analysis methods can bring complementary real-time capabilities for monitoring epidemics and assessing vaccine benefit-risk balance.
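For a flavour of the kind of signal analysis such use cases can feed, the sketch below computes a reporting odds ratio, a common disproportionality measure in pharmacovigilance, from an invented 2x2 table of spontaneous reports. It is offered only as background; the paper's use cases are broader than this single statistic.

```python
# Reporting odds ratio (ROR), a common disproportionality signal measure in
# pharmacovigilance, computed on an invented 2x2 table of spontaneous reports.
# This is illustrative background, not the method used in the reviewed paper.
import math

a = 30    # reports: vaccine of interest AND event of interest
b = 970   # reports: vaccine of interest, other events
c = 45    # reports: other vaccines, event of interest
d = 3955  # reports: other vaccines, other events

ror = (a / b) / (c / d)
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)                     # SE of log(ROR)
ci_low, ci_high = (math.exp(math.log(ror) + k * se_log) for k in (-1.96, 1.96))
print(f"ROR = {ror:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```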

Journal ArticleDOI
TL;DR: If the authors ever hope to have tools that can rapidly provide evidence for the daily practice of medicine, they need a science of health data, perhaps modeled after the science of astronomy.
Abstract: Objectives: To provide an overview of the benefits of clinical data collected as a by-product of the care process, the potential problems with large aggregations of these data, the policy frameworks that have been formulated, and the major challenges in the coming years. Methods: This report summarizes some of the major observations from AMIA and IMIA conferences held on this admittedly broad topic from 2006 through 2013. This report also includes many unsupported opinions of the author. Results: The benefits of aggregating larger and larger sets of routinely collected clinical data are well documented and of great societal value. These large data sets will probably never answer all possible clinical questions for methodological reasons. Non-traditional sources of health data that are patient-sourced will pose new data science challenges. Conclusions: If we ever hope to have tools that can rapidly provide evidence for the daily practice of medicine, we need a science of health data, perhaps modeled after the science of astronomy.

Journal ArticleDOI
TL;DR: Adapting health care systems to serve current and future needs requires new streams of data to enable better self-management, improve shared decision making, and provide more virtual care.
Abstract: Objective: Address current topics in consumer health informatics. Methods: Literature review. Results: Current health care delivery systems need to be more effective in the management of chronic conditions as the population grows older and experiences escalating chronic illness that threatens to consume more health care resources than countries can afford. Most health care systems are positioned poorly to accommodate this. Meanwhile, the availability of ever more powerful and cheaper information and communication technology, both for professionals and consumers, has raised the capacity to gather and process information, communicate more effectively, and monitor the quality of care processes. Conclusion: Adapting health care systems to serve current and future needs requires new streams of data to enable better self-management, improve shared decision making, and provide more virtual care. Changes in reimbursement for health care services, increased adoption of relevant technologies, patient engagement, and calls for data transparency raise the importance of patient-generated health information, remote monitoring, non-visit based care, and other innovative care approaches that foster more frequent contact with patients and better management of chronic conditions.

Journal ArticleDOI
TL;DR: The challenges developing countries would face and enumerate the options to be used to achieve successful implementations of Big Data programs are described.
Abstract: Background: The volume of data, the velocity with which they are generated, and their variety and lack of structure hinder their use. This creates the need to change the way information is captured, stored, processed, and analyzed, leading to the paradigm shift called Big Data. Objectives: To describe the challenges and possible solutions for developing countries when implementing Big Data projects in the health sector. Methods: A non-systematic review of the literature was performed in PubMed and Google Scholar. The following keywords were used: “big data”, “developing countries”, “data mining”, “health information systems”, and “computing methodologies”. A thematic review of selected articles was performed. Results: There are challenges when implementing any Big Data program including exponential growth of data, special infrastructure needs, need for a trained workforce, need to agree on interoperability standards, privacy and security issues, and the need to include people, processes, and policies to ensure their adoption. Developing countries have particular characteristics that hinder further development of these projects. Conclusions: The advent of Big Data promises great opportunities for the healthcare field. In this article, we attempt to describe the challenges developing countries would face and enumerate the options to be used to achieve successful implementations of Big Data programs.

Journal ArticleDOI
TL;DR: The development of new coursework should focus on those who will become experts, with training aiming to provide skills in "deep analytical talent" as well as those who need knowledge to support such individuals.
Abstract: Objective: The growing volume and diversity of health and biomedical data indicate that the era of Big Data has arrived for healthcare. This has many implications for informatics, not only in terms of implementing and evaluating information systems, but also for the work and training of informatics researchers and professionals. This article addresses the question: What do biomedical and health informaticians working in analytics and Big Data need to know? Methods: We hypothesize a set of skills that we hope will be discussed among academic and other informaticians. Results: The set of skills includes: Programming - especially with data-oriented tools, such as SQL and statistical programming languages; Statistics - working knowledge to apply tools and techniques; Domain knowledge - depending on one’s area of work, bioscience or health care; and Communication - being able to understand needs of people and organizations, and articulate results back to them. Conclusion: Biomedical and health informatics educational programs must introduce concepts of analytics, Big Data, and the underlying skills to use and apply them into their curricula. The development of new coursework should focus on those who will become experts, with training aiming to provide skills in “deep analytical talent” as well as those who need knowledge to support such individuals.
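As a small, concrete example of the data-oriented programming skills listed in this abstract (SQL plus a scripting language), the snippet below runs a grouped count over an invented encounters table using Python's built-in sqlite3 module; the table and codes are hypothetical.

```python
# Tiny illustration of the SQL-plus-scripting skills the article lists,
# using Python's built-in sqlite3 module on an invented encounters table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE encounters (patient_id INTEGER, dx_code TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO encounters VALUES (?, ?, ?)",
    [(1, "E11.9", 2013), (2, "I10", 2013), (3, "E11.9", 2012), (1, "I10", 2013)],
)

# How many distinct patients carry each diagnosis code?
for code, n in conn.execute(
    "SELECT dx_code, COUNT(DISTINCT patient_id) FROM encounters GROUP BY dx_code"
):
    print(code, n)
conn.close()
```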

Journal ArticleDOI
TL;DR: Reuse of abundant clinical data for research is speeding discovery, and implementation of genomic data into clinical medicine is impacting care with new classes of data rarely used previously in medicine.
Abstract: Objective: To provide a survey of recent progress in the use of large-scale biologic data to impact clinical care, and the impact the reuse of electronic health record data has made in genomic discovery. Method: Survey of key themes in translational bioinformatics, primarily from 2012 and 2013. Result: This survey focuses on four major themes: the growing use of Electronic Health Records (EHRs) as a source for genomic discovery, adoption of genomics and pharmacogenomics in clinical practice, the possible use of genomic technologies for drug repurposing, and the use of personal genomics to guide care. Conclusion: Reuse of abundant clinical data for research is speeding discovery, and implementation of genomic data into clinical medicine is impacting care with new classes of data rarely used previously in medicine.

Journal ArticleDOI
TL;DR: The demonstrated promise of mobile phones in the poorest countries encourages a future in which IMIA takes a lead role in leveraging mHealth for citizen empowerment through Consumer Health Informatics.
Abstract: Objectives: Evolving technology and infrastructure can benefit patients even in the poorest countries through mobile health (mHealth). Yet what makes mobile-phone-based services succeed in low- and middle-income countries (LMIC), and what opportunities does the future hold that still need to be studied? We showcase demonstrator services that leverage mobile phones in the hands of patients to promote health and facilitate health care. Methods: We surveyed the recent biomedical literature for demonstrator services that illustrate well-considered examples of mobile phone interventions for consumer health. We draw upon those examples to discuss enabling factors, scalability, reach, and potential of mHealth as well as obstacles in LMIC. Results: Among the 227 articles returned by a PubMed search, we identified 55 articles that describe services targeting health consumers equipped with mobile phones. From those articles, we showcase 19 as demonstrator services across clinical care, prevention, infectious diseases, and population health. Services range from education, reminders, reporting, and peer support, to epidemiologic reporting, and care management with phone communication and messages. Key achievements include timely adherence to treatment and appointments, clinical effectiveness of treatment reminders, increased vaccination coverage and uptake of screening, and capacity for efficient disease surveillance. We discuss methodologies of delivery and evaluation of mobile-phone-based mHealth in LMIC, including service design, social context, and environmental factors for success. Conclusion: The demonstrated promise of mobile phones in the poorest countries encourages a future in which IMIA takes a lead role in leveraging mHealth for citizen empowerment through Consumer Health Informatics.

Journal ArticleDOI
TL;DR: It was found that regional and national usability studies can complement smaller scale usability studies, and that they are needed in order to understand larger trends regarding system usability.
Abstract: Objectives: The objective of this paper is to explore approaches to understanding the usability of health information systems at regional and national levels. Methods: Several different methods are discussed in case studies from Denmark, Finland and Canada. They range from small scale qualitative studies involving usability testing of systems to larger scale national level questionnaire studies aimed at assessing the use and usability of health information systems by entire groups of health professionals. Results: It was found that regional and national usability studies can complement smaller scale usability studies, and that they are needed in order to understand larger trends regarding system usability. Despite adoption of EHRs, many health professionals rate the usability of the systems as low. A range of usability issues have been noted when data is collected on a large scale through use of widely distributed questionnaires and websites designed to monitor user perceptions of usability. Conclusion: As health information systems are deployed on a widespread basis, studies that examine systems used regionally or nationally are required. In addition, collection of large scale data on the usability of specific IT products is needed in order to complement smaller scale studies of specific systems.

Journal ArticleDOI
TL;DR: In this article, the authors select the best medical informatics research works published in 2013 on electronic health record (EHR) adoption, design, and impact, from the perspective of human factors and organizational issues.
Abstract: Objectives: To select the best medical informatics research works published in 2013 on electronic health record (EHR) adoption, design, and impact, from the perspective of human factors and organizational issues (HFOI). Methods: We selected 2,764 papers by querying PubMed (MeSH and TIAB) as well as using a manual search. Papers were evaluated based on pre-defined exclusion and inclusion criteria from their title, keywords, and abstract to select 15 candidate best papers, which were finally reviewed by 4 external reviewers using a standard evaluation grid. Results: Five papers were selected as best papers to illustrate how human factors approaches can improve EHR adoption and design. Among other contributions, these works: (i) make use of the observational and analysis methodologies of social and cognitive sciences to understand clinicians' attitudes towards EHRs, EHR use patterns, and impact on care processes, workflows, information exchange, and coordination of care; (ii) take into account macro- (environmental) and meso- (organizational) level factors to analyze EHR adoption or lack thereof; (iii) highlight the need for qualitative studies to analyze the unexpected side effects of EHRs on cognitive and work processes as well as the persistent use of paper. Conclusion: Selected papers tend to demonstrate that HFOI approaches and methodologies are essential to bridge the gap between EHR systems and end users, and to reduce regularly reported adoption failures and unexpected consequences.

Journal ArticleDOI
TL;DR: Insight is provided into the potential benefits and challenges of applying big data approaches to healthcare as well as how to position these approaches to achieve health system objectives such as patient safety or patient-engaged care delivery.
Abstract: Objectives: While big data offers enormous potential for improving healthcare delivery, many of the existing claims concerning big data in healthcare are based on anecdotal reports and theoretical vision papers, rather than scientific evidence based on empirical research. Historically, the implementation of health information technology has resulted in unintended consequences at the individual, organizational and social levels, but these unintended consequences of collecting data have remained unaddressed in the literature on big data. The objective of this paper is to provide insights into big data from the perspective of people, social and organizational considerations. Method: We draw upon the concept of persona to define the digital persona as the intersection of data, tasks and context for different user groups. We then describe how the digital persona can serve as a framework to understanding sociotechnical considerations of big data implementation. We then discuss the digital persona in the context of micro, meso and macro user groups across the 3 Vs of big data. Results: We provide insights into the potential benefits and challenges of applying big data approaches to healthcare as well as how to position these approaches to achieve health system objectives such as patient safety or patient-engaged care delivery. We also provide a framework for defining the digital persona at a micro, meso and macro level to help understand the user contexts of big data solutions. Conclusion: While big data provides great potential for improving healthcare delivery, it is essential that we consider the individual, social and organizational contexts of data use when implementing big data solutions.
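A minimal way to picture the digital persona described in this abstract is as a small data structure capturing data, tasks, and context for a user group at a given level. The sketch below is a hypothetical rendering of that idea, not a definition taken from the paper.

```python
# Hypothetical rendering of the "digital persona" idea: the intersection of
# data, tasks, and context for a user group at the micro, meso, or macro level.
# Field names and the example instance are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DigitalPersona:
    level: str                                    # "micro", "meso", or "macro"
    user_group: str
    data: list = field(default_factory=list)      # which data sources matter to this group
    tasks: list = field(default_factory=list)     # what the group needs to accomplish
    context: str = ""                             # setting in which the data are used

clinician = DigitalPersona(
    level="micro",
    user_group="primary care clinician",
    data=["EHR notes", "lab results"],
    tasks=["review history", "document encounter"],
    context="outpatient visit",
)
print(clinician)
```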

Journal ArticleDOI
TL;DR: The determining factors, presented here, are in the author's opinion crucial for conducting successful research and for developing a research career.
Abstract: Objective: What are the determining factors for good research in medical informatics or, from a broader perspective, in biomedical and health informatics? Method: From the many lessons learned during my professional career, I tried to identify a fair sampling of such factors. On the occasion of giving the IMIA Award of Excellence lecture during MedInfo 2013, they were presented for discussion. Results: Sixteen determining factors (df) have been identified: early identification and promotion (df1), appropriate education (df2), stimulating persons and environments (df3), sufficient time and backtracking opportunities (df4), breadth of medical informatics competencies (df5), considering the necessary preconditions for good medical informatics research (df6), easy access to high-quality knowledge (df7), sufficient scientific career opportunities (df8), appropriate conditions for sustainable research (df9), ability to communicate and to solve problems (df10), as well as to convey research results (df11) in a highly inter- and multidisciplinary environment, ability to think for all and, when needed, taking the lead (df12), always staying unbiased (df13), always keeping doubt (df14), but also always trying to provide solutions (df15), and, finally, being aware that life is more (df16). Conclusions: Medical Informatics is an inter- and multidisciplinary discipline “avant la lettre”. Compared to monodisciplinary research, inter- and multidisciplinary research does not only provide significant opportunities for solving major problems in science and in society. It also faces considerable additional challenges for medical informatics as a scientific field. The determining factors, presented here, are in my opinion crucial for conducting successful research and for developing a research career. Since medical informatics as a field has today become an important driving force for research progress, especially in biomedicine and health care, but also in fields like computer science, it may be helpful to consider such factors in relation with research and education in our discipline.

Journal ArticleDOI
TL;DR: NLP tools are close to being serious competitors to humans in some annotation tasks, and their use could drastically increase the amount of data usable for meaningful use of EHRs.
Abstract: Objective: To summarize the best papers in the field of Knowledge Representation and Management (KRM). Methods: A comprehensive review of the medical informatics literature was performed to select some of the most interesting papers on KRM and natural language processing (NLP) published in 2013. Results: Four articles were selected; one focuses on Electronic Health Record (EHR) interoperability for clinical pathway personalization based on structured data. The other three focus on NLP (corpus creation, de-identification, and co-reference resolution) and highlight the improvement in NLP tool performance. Conclusion: NLP tools are close to being serious competitors to humans in some annotation tasks. Their use could drastically increase the amount of data usable for meaningful use of EHRs.
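De-identification, one of the NLP tasks highlighted in this summary, can be pictured in its most naive form as pattern masking. The sketch below applies two regular expressions to an invented note; the de-identification systems reviewed in this section go far beyond such simple pattern matching.

```python
# Deliberately naive de-identification sketch: mask dates and phone numbers in an
# invented clinical note using regular expressions. Real de-identification systems
# reviewed in this section use far richer NLP than pattern matching.
import re

note = "Seen on 03/15/2013. Call 555-867-5309 to schedule follow-up."

patterns = {
    r"\b\d{2}/\d{2}/\d{4}\b": "[DATE]",
    r"\b\d{3}-\d{3}-\d{4}\b": "[PHONE]",
}
for pattern, tag in patterns.items():
    note = re.sub(pattern, tag, note)

print(note)  # Seen on [DATE]. Call [PHONE] to schedule follow-up.
```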

Journal ArticleDOI
TL;DR: Some of the major healthcare information technology infrastructures in Turkey are described, namely, Sağlık-Net (Turkish for "Health-Net"), the Centralized Hospital Appointment System, the Basic Health Statistics Module, the Core Resources Management System, and the e-prescription system of the Social Security Institution.
Abstract: Objectives: The objective of this paper is to describe some of the major healthcare information technology (IT) infrastructures in Turkey, namely, Sağlık-Net (Turkish for “Health-Net”), the Centralized Hospital Appointment System, the Basic Health Statistics Module, the Core Resources Management System, and the e-prescription system of the Social Security Institution. International collaboration projects that are integrated with Sağlık-Net are also briefly summarized. Methods: The authors provide a survey of some of the major healthcare IT infrastructures in Turkey. Results: Sağlık-Net has two main components: the National Health Information System (NHIS) and the Family Medicine Information System (FMIS). The NHIS is a nation-wide infrastructure for sharing patients’ Electronic Health Records (EHRs). So far, EHRs of 78.9 million people have been created in the NHIS. Similarly, family medicine is operational in the whole country via the FMIS. The Centralized Hospital Appointment System enables citizens to easily make appointments with healthcare providers. The Basic Health Statistics Module is used for collecting information about health status, risks, and indicators across the country. The Core Resources Management System speeds up the flow of information between the headquarters and Provincial Health Directorates. The e-prescription system is linked with Sağlık-Net and seamlessly integrated with the healthcare provider information systems. Finally, Turkey is involved in several international projects for experience sharing and disseminating national developments. Conclusion: With the introduction of the “Health Transformation Program” in 2003, a number of successful healthcare IT infrastructures have been developed in Turkey. Currently, work is going on to enhance and further improve their functionality.

Journal ArticleDOI
TL;DR: The potential of big data in biomedicine has been highlighted in various viewpoint papers and editorials, but the promises still exceed the current outcomes; the best papers published in 2013 are selected and summarized.
Abstract: Objectives: To select the best papers published in 2013 in the field of big data and smart health strategies, and summarize outstanding research efforts. Methods: A systematic search was performed using two major bibliographic databases for relevant journal papers. The references obtained were reviewed in a two-stage process, starting with a blinded review performed by the two section editors, and followed by a peer review process operated by external reviewers recognized as experts in the field. Results: The complete review process selected four best papers, illustrating various aspects of the special theme, among them: (a) using large volumes of unstructured data and, specifically, clinical notes from Electronic Health Records (EHRs) for pharmacovigilance; (b) knowledge discovery via querying large volumes of complex (both structured and unstructured) biological data using big data technologies and relevant tools; (c) methodologies for applying cloud computing and big data technologies in the field of genomics, and (d) system architectures enabling high-performance access to and processing of large datasets extracted from EHRs. Conclusions: The potential of big data in biomedicine has been highlighted in various viewpoint papers and editorials. The review of current scientific literature illustrated a variety of interesting methods and applications in the field, but the promises still exceed the current outcomes. As we get closer to a solid foundation with respect to a common understanding of the relevant concepts and technical aspects, and the use of standardized technologies and tools, we can anticipate reaching the potential that big data offer for personalized medicine and smart health strategies in the near future.

Journal ArticleDOI
TL;DR: Progress includes device miniaturization and increased longevity, coupled with efficient pacing functions, multisite pacing modes, leadless pacing, and better recognition of supraventricular and ventricular tachycardias in order to deliver appropriate therapy.
Abstract: Objectives: The goal of this paper is to review some important developments of the past year in implantable devices. Methods: The first cardiac implantable device was designed to maintain an adequate heart rate, either because the heart's natural pacemaker is not fast enough or because there is a block in the heart's electrical conduction system. During the last forty years, pacemakers have evolved considerably; they have become programmable and allow clinicians to configure optimal pacing modes for each patient. Various technological aspects (electrodes, connectors, diagnostic algorithms, therapies, ...) have progressed, and cardiac implants now address several clinical applications: management of arrhythmias, cardioversion/defibrillation, and cardiac resynchronization therapy. Results: Observed progress includes device miniaturization and increased longevity, coupled with efficient pacing functions, multisite pacing modes, leadless pacing, and better recognition of supraventricular and ventricular tachycardias in order to deliver appropriate therapy. Subcutaneous implants, new modes of stimulation (leadless implants and ultrasound leads), quadripolar leads, and new sensors and algorithms for hemodynamic management are introduced and briefly described. In each case, the main results of the past two years are highlighted and placed in historical context, and remaining limitations are addressed. Conclusion: Some important technological improvements are described. New trends for the future, such as remote follow-up of patients and the treatment of heart failure by neuromodulation, are also considered in a dedicated section.

Journal ArticleDOI
TL;DR: A synopsis of the articles selected for the 2014 edition of the IMIA Yearbook illustrates current research regarding the impact and the evaluation of health information technology and the latest developments in health information exchange.
Abstract: Objectives: To summarize excellent current research in the field of Health Information Systems. Method: Creation of a synopsis of the articles selected for the 2014 edition of the IMIA Yearbook. Results: Four papers from international peer reviewed journals were selected and are summarized. Conclusions: Selected articles illustrate current research regarding the impact and the evaluation of health information technology and the latest developments in health information exchange.

Journal ArticleDOI
TL;DR: The selection and evaluation process of this Yearbook's section on Bioinformatics and Translational Informatics yielded three excellent articles regarding data management and genome medicine, including VEST, a supervised machine learning tool for prioritizing variants found in exome sequencing projects that are more likely involved in human Mendelian diseases.
Abstract: Objective: To summarize excellent current research in the field of Bioinformatics and Translational Informatics with application in the health domain. Method: We provide a synopsis of the articles selected for the IMIA Yearbook 2014, from which we attempt to derive a synthetic overview of current and future activities in the field. A first step of selection was performed by querying MEDLINE with a list of MeSH descriptors completed by a list of terms adapted to the section. Each section editor independently evaluated the set of 1,851 articles, and 15 articles were retained for peer review. Results: The selection and evaluation process of this Yearbook's section on Bioinformatics and Translational Informatics yielded three excellent articles regarding data management and genome medicine. In the first article, the authors present VEST (Variant Effect Scoring Tool), a supervised machine learning tool for prioritizing variants found in exome sequencing projects that are more likely to be involved in human Mendelian diseases. In the second article, the authors show how to infer the surnames of male individuals by crossing anonymous publicly available genomic data from the Y chromosome with public genealogy data banks. The third article presents a statistical framework called iCluster+ that can perform pattern discovery in integrated cancer genomic data. This framework was able to determine different tumor subtypes in colon cancer. Conclusions: The current research activities still attest to the continuing convergence of Bioinformatics and Medical Informatics, with a focus this year on large-scale biological, genomic, and Electronic Health Records data. Indeed, there is a need for powerful tools for managing and interpreting complex data, but also a need for user-friendly tools developed for clinicians in their daily practice. All the recent research and development efforts are contributing to the challenge of achieving clinical impact and even moving towards personalized medicine in the near future.

Journal ArticleDOI
TL;DR: The development of a mechanical multi-channel analyser for clinical laboratories that handled discrete sample technology and could prevent carry-over to the next test samples while incorporating computer technology to improve the quality of test results is highlighted.
Abstract: Objectives: This paper discusses the early history and development of a clinical analyser system in Sweden (AutoChemist, 1965). It highlights the importance of such a high-capacity system both for clinical use and health care screening. The device was developed to assure the quality of results and to automatically handle the orders, store the results in digital form for later statistical analyses, and distribute the results to the patients' physicians by using the computer integrated with the analyser. Results: The most important result of the construction of an analyser able to produce analytical results on a mass scale was the development of a mechanical multi-channel analyser for clinical laboratories that handled discrete sample technology and could prevent carry-over to the next test samples, while incorporating computer technology to improve the quality of test results. The AutoChemist could handle 135 samples per hour in an 8-hour shift across up to 24 possible analysis channels, i.e., roughly 135 × 24 ≈ 3,200 results per hour. Later versions would double this capacity. Some customers used the equipment 24 hours per day. Conclusions: With a capacity of 3,000 to 6,000 analyses per hour, pneumatically driven pipettes, special units for corrosive liquids or special activities, and an integrated computer, the AutoChemist system was unique and the largest of its kind for many years. Its successor, the AutoChemist PRISMA (PRogrammable Individually Selective Modular Analyzer), was smaller in size but had a higher capacity. Both analysers established new standards of operation for clinical laboratories and encouraged others to use new technologies for building new analysers.