Journal of Healthcare Engineering
Hindawi Publishing Corporation
About: Journal of Healthcare Engineering is an open-access academic journal published by Hindawi Publishing Corporation. The journal publishes primarily in the areas of medicine and computer science and has the ISSN identifier 2040-2295. Over its lifetime, it has published 3117 articles, which have received 25199 citations.
Topics: Medicine, Computer science, Artificial intelligence, Internal medicine, Convolutional neural network
TL;DR: Because it is difficult to obtain a large pneumonia dataset for this classification task, several data augmentation algorithms were deployed to improve the validation and classification accuracy of the CNN model, achieving remarkable validation accuracy.
Abstract: This study proposes a convolutional neural network model trained from scratch to classify and detect the presence of pneumonia from a collection of chest X-ray image samples. Unlike other methods that rely solely on transfer learning approaches or traditional handcrafted techniques to achieve a remarkable classification performance, we constructed a convolutional neural network model from scratch to extract features from a given chest X-ray image and classify it to determine if a person is infected with pneumonia. This model could help mitigate the reliability and interpretability challenges often faced when dealing with medical imagery. Unlike other deep learning classification tasks with sufficient image repositories, it is difficult to obtain a large pneumonia dataset for this classification task; therefore, we deployed several data augmentation algorithms to improve the validation and classification accuracy of the CNN model and achieved remarkable validation accuracy.
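The abstract does not list the specific augmentation transforms used, but the general idea of enlarging a small image dataset with label-preserving transforms can be sketched as below. The flip and shift operations are standard examples chosen for illustration, not necessarily the ones the authors deployed; images are represented as plain 2D lists of pixel values to keep the sketch dependency-free.

```python
import random

def horizontal_flip(image):
    """Mirror a 2D image (a list of pixel rows) left to right."""
    return [row[::-1] for row in image]

def shift_right(image, pixels, fill=0):
    """Translate the image right by `pixels`, padding the left edge with `fill`."""
    return [[fill] * pixels + row[:-pixels] for row in image]

def augment(images, seed=0):
    """Return the originals plus one flipped and one shifted copy of each image."""
    rng = random.Random(seed)
    out = list(images)
    for img in images:
        out.append(horizontal_flip(img))
        out.append(shift_right(img, rng.randint(1, 2)))
    return out
```

For chest X-rays, shifts and mild zooms are usually safe because pathology location varies between patients, whereas aggressive rotations can distort anatomy; any real pipeline would tune the transform set to the modality.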
TL;DR: A comparative study is performed to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.
Abstract: Colorectal cancer (CRC) is the third leading cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search of polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset and taking advantage of advances in the semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.
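Segmentation benchmarks of this kind are commonly scored per class with intersection-over-union (IoU) between the predicted and ground-truth masks. The sketch below computes IoU for binary masks; it is a generic illustration of the metric, not the paper's actual evaluation code.

```python
def iou(pred, target):
    """Intersection-over-union for two binary masks given as flat lists of 0/1."""
    inter = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, target) if p == 1 or t == 1)
    # Two empty masks agree perfectly, so define IoU as 1.0 in that case.
    return inter / union if union else 1.0
```

For a multi-class benchmark such as this one (4 endoluminal classes), the per-class IoUs are typically averaged into a mean IoU, which is the figure usually reported for FCN baselines.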
TL;DR: Three types of deep neural networks are designed for lung cancer classification; the CNN achieved the best performance among the three networks, with an accuracy of 84.15%, a sensitivity of 83.96%, and a specificity of 84.32%.
Abstract: Lung cancer is the most common cancer; it cannot be ignored and causes death when health care comes late. Currently, CT can be used to help doctors detect lung cancer in its early stages. In many cases, diagnosing lung cancer depends on the experience of doctors, which may lead to some patients being overlooked and cause problems. Deep learning has proved to be a popular and powerful method in many medical imaging diagnosis areas. In this paper, three types of deep neural networks (CNN, DNN, and SAE) are designed for lung cancer classification. These networks are applied to the CT image classification task, with some modifications for benign and malignant lung nodules, and were evaluated on the LIDC-IDRI database. The experimental results show that the CNN achieved the best performance among the three networks, with an accuracy of 84.15%, a sensitivity of 83.96%, and a specificity of 84.32%.
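The three figures reported above are standard confusion-matrix statistics for a binary (benign vs. malignant) classifier. A minimal sketch of how they are derived from predictions, with malignant coded as 1 (this is the textbook definition, not the paper's evaluation code):

```python
def confusion_stats(preds, labels):
    """Accuracy, sensitivity (recall on malignant=1), and specificity
    (recall on benign=0) from binary predictions and ground-truth labels."""
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    tn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 0)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    accuracy = (tp + tn) / len(labels)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity
```

Reporting sensitivity and specificity separately matters in this setting: a high sensitivity means few malignant nodules are missed, while a high specificity means few benign nodules trigger unnecessary follow-up.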
TL;DR: The goal of this analysis is to demonstrate, through a thorough survey of 3D-printing applications in the medical field, the usefulness, drawbacks, and power of the technology.
Abstract: Three-dimensional (3D) printing refers to a number of manufacturing technologies that generate a physical model from digital information. Medical 3D printing was once an ambitious pipe dream. However, time and investment made it real. Nowadays, 3D printing technology represents a big opportunity to help pharmaceutical and medical companies create more specific drugs, enabling rapid production of medical implants and changing the way that doctors and surgeons plan procedures. Patient-specific 3D-printed anatomical models are becoming increasingly useful tools in today's practice of precision medicine and personalized treatment. In the future, 3D-printed implantable organs will probably be available, reducing waiting lists and increasing the number of lives saved. Additive manufacturing for healthcare is still very much a work in progress, but it is already applied in many different ways in the medical field, which, already reeling under immense pressure to deliver optimal performance at reduced cost, stands to gain unprecedented benefits from this good-as-gold technology. The goal of this analysis is to demonstrate, through a thorough survey of 3D-printing applications in the medical field, the usefulness, drawbacks, and power of the technology.
TL;DR: This review highlights the advances of state-of-the-art activity recognition approaches, especially activity representation and classification methods, and classifies the existing literature with a detailed taxonomy covering representation and classification methods as well as the datasets they use.
Abstract: Human activity recognition (HAR) aims to recognize activities from a series of observations of the actions of subjects and the environmental conditions. Vision-based HAR research is the basis of many applications, including video surveillance, health care, and human-computer interaction (HCI). This review highlights the advances of state-of-the-art activity recognition approaches, especially activity representation and classification methods. For the representation methods, we sort out a chronological research trajectory from global representations to local representations and recent depth-based representations. For the classification methods, we conform to the categorization of template-based methods, discriminative models, and generative models, and review several prevalent methods. Next, representative and available datasets are introduced. Aiming to provide an overview of those methods and a convenient way of comparing them, we classify the existing literature with a detailed taxonomy covering representation and classification methods as well as the datasets they use. Finally, we investigate directions for future research.