IEEE Journal of Biomedical and Health Informatics
Institute of Electrical and Electronics Engineers
About: IEEE Journal of Biomedical and Health Informatics is an academic journal published by the Institute of Electrical and Electronics Engineers. The journal publishes primarily in the areas of computer science and medicine and has the ISSN identifier 2168-2194. Over its lifetime, it has published 2902 articles, which have received 83790 citations. The journal is also known as IEEE J Biomed Health Inform and Biomedical and Health Informatics.
Topics: Computer science, Medicine, Artificial intelligence, Segmentation, Pattern recognition (psychology)
TL;DR: The emergence of 'ambient-assisted living' (AAL) tools for older adults based on the ambient intelligence paradigm is reviewed, and the state-of-the-art AAL technologies, tools, and techniques are summarized.
Abstract: In recent years, we have witnessed a rapid surge in assisted living technologies due to a rapidly aging society. The aging population, the increasing cost of formal health care, the caregiver burden, and the importance that individuals place on living independently all motivate the development of innovative assisted-living technologies for safe and independent aging. In this survey, we summarize the emergence of 'ambient-assisted living' (AAL) tools for older adults based on the ambient intelligence paradigm. We summarize the state-of-the-art AAL technologies, tools, and techniques, and we look at current and future challenges.
TL;DR: In this paper, the authors survey the current research on applying deep learning to clinical tasks based on EHR data, where they find a variety of deep learning techniques and frameworks being applied to several types of clinical applications including information extraction, representation learning, outcome prediction, phenotyping, and deidentification.
Abstract: The past decade has seen an explosion in the amount of digital information stored in electronic health records (EHRs). While primarily designed for archiving patient information and performing administrative healthcare tasks like billing, many researchers have found secondary use of these records for various clinical informatics applications. Over the same period, the machine learning community has seen widespread advances in the field of deep learning. In this review, we survey the current research on applying deep learning to clinical tasks based on EHR data, where we find a variety of deep learning techniques and frameworks being applied to several types of clinical applications including information extraction, representation learning, outcome prediction, phenotyping, and deidentification. We identify several limitations of current research involving topics such as model interpretability, data heterogeneity, and lack of universal benchmarks. We conclude by summarizing the state of the field and identifying avenues of future deep EHR research.
TL;DR: This paper proposes the use of deep learning approaches for breast ultrasound lesion detection and investigates three different methods: a Patch-based LeNet, a U-Net, and a transfer learning approach with a pretrained FCN-AlexNet.
Abstract: Breast lesion detection using ultrasound imaging is considered an important step of computer-aided diagnosis systems. Over the past decade, researchers have demonstrated the possibilities to automate the initial lesion detection. However, the lack of a common dataset impedes research when comparing the performance of such algorithms. This paper proposes the use of deep learning approaches for breast ultrasound lesion detection and investigates three different methods: a Patch-based LeNet, a U-Net, and a transfer learning approach with a pretrained FCN-AlexNet. Their performance is compared against four state-of-the-art lesion detection algorithms (i.e., Radial Gradient Index, Multifractal Filtering, Rule-based Region Ranking, and Deformable Part Models). In addition, this paper compares and contrasts two conventional ultrasound image datasets acquired from two different ultrasound systems. Dataset A comprises 306 (60 malignant and 246 benign) images and Dataset B comprises 163 (53 malignant and 110 benign) images. To overcome the lack of public datasets in this domain, Dataset B will be made available for research purposes. The results demonstrate an overall improvement by the deep learning approaches when assessed on both datasets in terms of True Positive Fraction, False Positives per image, and F-measure.
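The evaluation metrics named in the abstract above (True Positive Fraction, False Positives per image, and F-measure) can be illustrated with a short sketch. The function name and the detection counts below are hypothetical and are not taken from the paper; the formulas are the standard definitions of these metrics.

```python
def detection_metrics(tp, fp, fn, n_images):
    """Standard lesion-detection metrics from raw counts.

    tp/fp/fn are counts of true-positive, false-positive, and
    false-negative detections over a whole dataset of n_images.
    """
    tpf = tp / (tp + fn)            # True Positive Fraction (recall)
    fps_per_image = fp / n_images   # False Positives per image
    precision = tp / (tp + fp)
    f_measure = 2 * precision * tpf / (precision + tpf)
    return tpf, fps_per_image, f_measure

# Hypothetical counts for a dataset the size of Dataset A (306 images):
tpf, fppi, f1 = detection_metrics(tp=270, fp=45, fn=36, n_images=306)
```

Note that the F-measure is the harmonic mean of precision and the True Positive Fraction, so it penalizes a detector that achieves high recall only by emitting many false positives per image.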
TL;DR: Some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled are discussed.
Abstract: This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.
TL;DR: DREAMER, a multimodal database consisting of electroencephalogram (EEG) and electrocardiogram (ECG) signals recorded during affect elicitation by means of audio-visual stimuli, indicates the prospects of using low-cost devices for affect recognition applications.
Abstract: In this paper, we present DREAMER, a multimodal database consisting of electroencephalogram (EEG) and electrocardiogram (ECG) signals recorded during affect elicitation by means of audio-visual stimuli. Signals from 23 participants were recorded along with the participants' self-assessment of their affective state after each stimulus, in terms of valence, arousal, and dominance. All the signals were captured using portable, wearable, wireless, low-cost, and off-the-shelf equipment that has the potential to allow the use of affective computing methods in everyday applications. A baseline for participant-wise affect recognition using EEG- and ECG-based features, as well as their fusion, was established through supervised classification experiments using support vector machines (SVMs). The self-assessment of the participants was evaluated through comparison with the self-assessments from another study using the same audio-visual stimuli. Classification results for valence, arousal, and dominance of the proposed database are comparable to the ones achieved for other databases that use nonportable, expensive, medical-grade devices. These results indicate the prospects of using low-cost devices for affect recognition applications. The proposed database will be made publicly available in order to allow researchers to achieve a more thorough evaluation of the suitability of these capturing devices for affect recognition applications.
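The supervised classification baseline described above can be sketched as follows. This is a minimal illustration, not the authors' code: the feature values are synthetic, the binary high/low valence labels are random, the number of clips per participant is assumed, and leave-one-participant-out cross-validation is one plausible protocol (the paper's exact evaluation scheme may differ).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut

# Synthetic stand-in for EEG/ECG features: 23 participants (as in DREAMER),
# an assumed 18 clips per participant, 10 hypothetical features per clip.
rng = np.random.default_rng(0)
n_participants, clips_per_participant, n_features = 23, 18, 10
X = rng.normal(size=(n_participants * clips_per_participant, n_features))
y = rng.integers(0, 2, size=len(X))  # random binary high/low valence labels
groups = np.repeat(np.arange(n_participants), clips_per_participant)

# Standardize features, then classify with an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# Leave-one-participant-out: each fold holds out all clips of one participant.
accuracies = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean accuracy over {len(accuracies)} folds: {np.mean(accuracies):.2f}")
```

With random labels the mean accuracy stays near chance level; with real affect-correlated features it is the quantity a baseline like the one in the abstract would report.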