Journal ArticleDOI

Accurate Segmentation of Nuclear Regions with Multi-Organ Histopathology Images Using Artificial Intelligence for Cancer Diagnosis in Personalized Medicine.

04 Jun 2021-Journal of Personalized Medicine (Multidisciplinary Digital Publishing Institute)-Vol. 11, Iss: 6, pp 515
TL;DR: Wang et al. adopt a new nuclear segmentation network empowered by residual skip connections, offering an automated alternative to the tedious manual inspection of histopathology images under high-resolution microscopes.
Abstract: Accurate nuclear segmentation in histopathology images plays a key role in digital pathology. It is considered a prerequisite for the determination of cell phenotype, nuclear morphometrics, cell classification, and the grading and prognosis of cancer. However, it is a very challenging task because of the different types of nuclei, large intraclass variations, and diverse cell morphologies. Consequently, the manual inspection of such images under high-resolution microscopes is tedious and time-consuming. Alternatively, artificial intelligence (AI)-based automated techniques, which are fast and robust, and require less human effort, can be used. Recently, several AI-based nuclear segmentation techniques have been proposed. They have shown a significant performance improvement for this task, but there is room for further improvement. Thus, we propose an AI-based nuclear segmentation technique in which we adopt a new nuclear segmentation network empowered by residual skip connections to address these challenges. Experiments were performed on two publicly available datasets: (1) The Cancer Genome Atlas (TCGA), and (2) Triple-Negative Breast Cancer (TNBC). The results show that our proposed technique achieves an aggregated Jaccard index (AJI) of 0.6794, Dice coefficient of 0.8084, and F1-measure of 0.8547 on the TCGA dataset, and an AJI of 0.7332, Dice coefficient of 0.8441, precision of 0.8352, recall of 0.8306, and F1-measure of 0.8329 on the TNBC dataset. These values are higher than those of the state-of-the-art methods.
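The reported scores follow standard overlap definitions. As a minimal illustrative sketch (not the authors' evaluation code), Dice, Jaccard, precision, recall, and F1 can be computed from binary masks represented as sets of foreground pixel coordinates; note that the aggregated Jaccard index (AJI) additionally matches predicted and ground-truth nuclei instance by instance, which this sketch omits:

```python
def overlap_metrics(pred, gt):
    """Overlap metrics between two binary masks, each given as a
    set of (row, col) foreground pixel coordinates."""
    tp = len(pred & gt)   # true positives: pixels in both masks
    fp = len(pred - gt)   # false positives: predicted only
    fn = len(gt - pred)   # false negatives: missed ground truth
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    jaccard = tp / (tp + fp + fn) if tp + fp + fn else 1.0  # i.e., IoU
    return {"precision": precision, "recall": recall, "f1": f1,
            "dice": dice, "jaccard": jaccard}

# Toy example: a 2x2 predicted blob vs. a ground-truth blob shifted
# one column to the right; they share two pixels.
pred = {(0, 0), (0, 1), (1, 0), (1, 1)}
gt = {(0, 1), (1, 1), (0, 2), (1, 2)}
m = overlap_metrics(pred, gt)
```

At the pixel level, F1 and Dice coincide; the differing F1 and Dice values in the abstract suggest the F1-measure is computed at the object (nucleus) level rather than per pixel.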
Citations
Journal ArticleDOI
TL;DR: A dual-stream residual dense network (DSRD-Net) is proposed for surgical instrument segmentation in conventional robot-assisted minimally invasive procedures (RMIS); it mainly utilizes the strength of residual, dense, and atrous spatial pyramid pooling architectures.
Abstract: In conventional robot-assisted minimally invasive procedures (RMIS), surgeons have narrow visual and complex working spaces, along with specular reflection, blood, camera-lens fogging, and complex backgrounds, which increase the risk of human error and tissue damage. The use of deep learning-based techniques can decrease these risks by providing segmented instruments, real-time tracking, pose estimation, and surgeons’ skill assessment. Recently, several deep learning-based methods have been proposed for surgical instrument segmentation. These methods have shown significant performance for the RMIS. However, we found that most of these methods still have scope for improvement in terms of accuracy, robustness, and computational cost. In addition, gastrointestinal pathologies have not been explored in previous studies. Therefore, we propose a dual-stream residual dense network (DSRD-Net), an accurate and robust deep learning-based surgical instrument segmentation method that mainly utilizes the strength of residual, dense, and atrous spatial pyramid pooling architectures. Our proposed method was tested on publicly available gastrointestinal endoscopy (the Kvasir-Instrument Dataset) and abdominal porcine procedures datasets (The 2017 Robotic Instrument Segmentation Challenge Dataset). The experimental results show that the proposed method outperforms the state-of-the-art methods.
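Atrous (dilated) spatial pyramid pooling, mentioned above, samples the input at several dilation rates in parallel so that a single layer sees context at multiple scales. A quick way to see the effect is the effective kernel size of a dilated convolution; the rates 6, 12, and 18 below are illustrative DeepLab-style defaults, not DSRD-Net's actual configuration:

```python
def effective_kernel(k, rate):
    """Effective spatial extent of a k x k convolution with the given
    dilation rate: (rate - 1) zeros are inserted between kernel taps."""
    return k + (k - 1) * (rate - 1)

# A 3x3 kernel at increasing dilation rates covers an ever-larger
# context window without adding any parameters.
fields = {r: effective_kernel(3, r) for r in (1, 6, 12, 18)}
```

This is why pyramids of dilation rates can capture both fine instrument edges and large background context at a fixed computational cost.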

8 citations

Journal ArticleDOI
TL;DR: The findings showed that in many cases, the use of artificial intelligence methods had effective application in personalized medicine.
Abstract: Purpose Artificial intelligence (AI) techniques are used in precision medicine to explore novel genotype and phenotype data. The main aims of precision medicine include early diagnosis, screening, and a personalized treatment regime for a patient based on genetic features and characteristics. The main objective of this study was to review AI techniques and their effectiveness in neoplasm precision medicine. Materials and Methods A comprehensive search was performed in the Medline (through PubMed), Scopus, ISI Web of Science, IEEE Xplore, Embase, and Cochrane databases from inception to December 29, 2021, to identify studies that used AI methods for cancer precision medicine and to evaluate the outcomes of the models. Results Sixty-three studies were included in this systematic review. The main AI approaches in 17 papers (26.9%) were linear and nonlinear models (random forest or decision trees), while 21 papers used rule-based systems and deep learning models. Notably, 62% of the articles were conducted in the United States and China. R was the most frequently used software, and breast and lung cancer were the most commonly studied neoplasms. In 34 of the 63 papers, genomic data such as gene expression, somatic mutation, phenotype, and proteomics data together with drug response (functional data) were used as input to the AI methods; in 16 papers (25.3%), drug-response (functional) data were used to personalize treatment. The maximum values of the assessment indicators, such as accuracy, sensitivity, specificity, precision, recall, and area under the curve (AUC), in the included studies were 0.99, 1.00, 0.96, 0.98, 0.99, and 0.9929, respectively. Conclusion The findings showed that in many cases, artificial intelligence methods have effective applications in personalized medicine.

7 citations

Journal ArticleDOI
TL;DR: Li et al. propose a domain-adaptive CAD framework, namely the dilated aggregation-based lightweight network (DAL-Net), for effective recognition of trivial COVID-19 lesions in CT scans.
Abstract: Background: Early and accurate detection of COVID-19-related findings (such as well-aerated regions, ground-glass opacity, crazy paving and linear opacities, and consolidation) in lung computed tomography (CT) scans is crucial for preventive measures and treatment. However, the visual assessment of lung CT scans is a time-consuming process, particularly for trivial lesions, and requires medical specialists. Method: A recent breakthrough in deep learning methods has boosted the diagnostic capability of computer-aided diagnosis (CAD) systems and further aided health professionals in making effective diagnostic decisions. In this study, we propose a domain-adaptive CAD framework, namely the dilated aggregation-based lightweight network (DAL-Net), for effective recognition of trivial COVID-19 lesions in CT scans. Our network design achieves a fast execution speed (inference time is 43 ms on a single image) with optimal memory consumption (almost 9 MB). To evaluate the performance of the proposed and state-of-the-art models, we considered two publicly accessible datasets, namely COVID-19-CT-Seg (comprising a total of 3520 images of 20 different patients) and MosMed (including a total of 2049 images of 50 different patients). Results: Our method achieves an average area under the curve (AUC) of up to 98.84%, 98.47%, and 95.51% for COVID-19-CT-Seg, MosMed, and cross-dataset evaluation, respectively, and outperforms various state-of-the-art methods. Conclusions: These results demonstrate that deep learning-based models are an effective tool for building a robust CAD solution based on CT data in response to the ongoing COVID-19 crisis.

6 citations

Journal ArticleDOI
TL;DR: A systematic survey on nucleus segmentation using deep learning over the last five years (2017–2021), highlighting various segmentation models and exploring their similarities, strengths, datasets utilized, and unfolding research areas.
Abstract: Nucleus segmentation is an imperative step in the qualitative study of imaging datasets, considered as an intricate task in histopathology image analysis. Segmenting a nucleus is an important part of diagnosing, staging, and grading cancer, but overlapping regions make it hard to separate and tell apart independent nuclei. Deep Learning is swiftly paving its way in the arena of nucleus segmentation, attracting quite a few researchers with its numerous published research articles indicating its efficacy in the field. This paper presents a systematic survey on nucleus segmentation using deep learning in the last five years (2017–2021), highlighting various segmentation models (U-Net, SCPP-Net, Sharp U-Net, and LiverNet) and exploring their similarities, strengths, datasets utilized, and unfolding research areas.

4 citations

Journal ArticleDOI
TL;DR: Three different deep learning-based frameworks are proposed to identify different types of shoulder implants in X-ray scans, chief among them an efficient ensemble network called the Inception Mobile Fully-Connected Convolutional Network (IMFC-Net), which comprises two custom-designed convolutional neural networks and a classifier.
Abstract: Background: Early recognition of prostheses before reoperation can reduce perioperative morbidity and mortality. Because of the intricacy of shoulder biomechanics, accurate classification of implant models before surgery is fundamental for planning the correct medical procedure and setting the apparatus for personalized medicine. Expert surgeons usually use X-ray images of prostheses to set the patient-specific apparatus. However, this subjective method is time-consuming and prone to errors. Method: As an alternative, artificial intelligence has played a vital role in orthopedic surgery and clinical decision-making for accurate prosthesis placement. In this study, three different deep learning-based frameworks are proposed to identify different types of shoulder implants in X-ray scans. We mainly propose an efficient ensemble network called the Inception Mobile Fully-Connected Convolutional Network (IMFC-Net), which comprises two custom-designed convolutional neural networks and a classifier. To evaluate the performance of the IMFC-Net and state-of-the-art models, experiments were performed with a public data set of 597 de-identified patients (597 shoulder implants). Moreover, to demonstrate the generalizability of IMFC-Net, experiments were performed with two augmentation techniques and without augmentation, in which our model ranked first, with a considerable difference from the comparison models. A gradient-weighted class activation map technique was also used to find the distinct implant characteristics needed for IMFC-Net classification decisions. Results: The results confirmed that the proposed IMFC-Net model yielded an average accuracy of 89.09%, a precision rate of 89.54%, a recall rate of 86.57%, and an F1-score of 87.94%, which were higher than those of the comparison models. Conclusion: The proposed model is efficient and can minimize the revision complexities of implants.

4 citations

References
Book ChapterDOI
05 Oct 2015
TL;DR: Ronneberger et al. propose a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; it can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopy stacks.
Abstract: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .
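The contracting/expanding design with unpadded 3×3 convolutions is what produces U-Net's characteristic input/output size mismatch. The bookkeeping below is a sketch of that arithmetic only (four resolution levels, as in the original paper), not of the network itself:

```python
def unet_output_size(size, depth=4):
    """Trace the spatial size of a feature map through a U-Net-style
    contracting path, bottleneck, and expanding path, assuming unpadded
    3x3 convolutions (each pair of convs costs 4 pixels) and 2x2
    max pooling / up-convolutions."""
    for _ in range(depth):      # contracting path
        size -= 4               # two unpadded 3x3 convolutions
        size //= 2              # 2x2 max pooling
    size -= 4                   # two bottleneck convolutions
    for _ in range(depth):      # expanding path
        size *= 2               # 2x2 up-convolution
        size -= 4               # two unpadded 3x3 convs (after crop + concat)
    return size

# The sizes quoted in the U-Net paper: a 572x572 input tile yields
# a 388x388 segmentation map.
out = unet_output_size(572)   # -> 388
```

This is also why the expanding path must crop the skip-connected feature maps before concatenation: the contracting-path maps are larger than their expanding-path counterparts at every level.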

49,590 citations

Journal ArticleDOI
TL;DR: Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis that facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system.
Abstract: Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.

43,540 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
Abstract: Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.
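The skip architecture described above can be sketched as upsampling a coarse, deep score map, summing it element-wise with a finer-stride score map, and then upsampling the fused result to input resolution. The toy code below uses nearest-neighbor upsampling and plain lists for brevity; the paper itself uses learned deconvolution layers initialized to bilinear interpolation:

```python
def upsample(grid, factor):
    """Nearest-neighbor upsampling of a 2D score map (list of lists)."""
    return [[v for v in row for _ in range(factor)]
            for row in grid for _ in range(factor)]

def add_maps(a, b):
    """Element-wise sum of two equally sized score maps."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

# FCN-16s-style fusion: a stride-32 prediction (here 2x2) is upsampled
# 2x and fused with a stride-16 prediction from a shallower layer
# (here 4x4 zeros as a stand-in), then upsampled 16x to full resolution.
coarse = [[1, 2], [3, 4]]              # deep, semantically strong scores
fine = [[0] * 4 for _ in range(4)]     # shallow, spatially finer scores
fused = add_maps(upsample(coarse, 2), fine)   # 4x4 fused map
full = upsample(fused, 16)                    # 64x64: input resolution
```

Fusing at stride 16 (FCN-16s) or stride 8 (FCN-8s) recovers spatial detail that a single 32x upsampling of the deepest layer would blur away.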

28,225 citations

Proceedings ArticleDOI
20 Mar 2017
TL;DR: This work presents a conceptually simple, flexible, and general framework for object instance segmentation, which extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition.
Abstract: We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without tricks, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code will be made available.

14,299 citations