Showing papers in "Medical Image Analysis in 2019"
••
TL;DR: A review of recent advances in medical imaging that use the adversarial training scheme, presented in the hope of benefiting researchers interested in this technique.
1,053 citations
••
TL;DR: Experimental results show that AG models consistently improve the prediction performance of the base architectures across different datasets and training sizes while preserving computational efficiency.
966 citations
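The attention-gate (AG) mechanism summarized above can be sketched in a few lines. This is a minimal numpy illustration of an additive attention gate, not the paper's implementation: the 1x1 convolutions are reduced to per-pixel linear maps, and `W_x`, `W_g`, and `psi` are random placeholder weights.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate: per-pixel coefficients alpha in (0, 1)
    re-weight the skip-connection features x using the gating signal g."""
    # x: (H, W, Cx) skip features; g: (H, W, Cg) gating signal (same grid)
    q = np.maximum(x @ W_x + g @ W_g, 0.0)   # ReLU(W_x x + W_g g), (H, W, Ci)
    alpha = sigmoid(q @ psi)                 # (H, W, 1) attention coefficients
    return x * alpha                         # gated skip features

rng = np.random.default_rng(0)
H, W, Cx, Cg, Ci = 8, 8, 16, 32, 8
x = rng.standard_normal((H, W, Cx))
g = rng.standard_normal((H, W, Cg))
gated = attention_gate(x, g,
                       rng.standard_normal((Cx, Ci)),
                       rng.standard_normal((Cg, Ci)),
                       rng.standard_normal((Ci, 1)))
assert gated.shape == x.shape
```

Because alpha lies in (0, 1), gating only attenuates features, which is how AGs suppress irrelevant regions while preserving computational efficiency.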
••
TL;DR: Fast AnoGAN (f‐AnoGAN), a generative adversarial network (GAN) based unsupervised learning approach capable of identifying anomalous images and image segments, that can serve as imaging biomarker candidates is presented.
777 citations
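The f-AnoGAN score combines an image reconstruction residual with a discriminator feature residual; inputs far from the learned manifold of normal data score high. Below is a hedged toy sketch with linear stand-ins for the trained encoder, generator, and discriminator feature map (`A`, `encoder`, `disc_features` are illustrative placeholders, not the paper's networks).

```python
import numpy as np

def anomaly_score(x, encoder, generator, disc_features, kappa=1.0):
    """f-AnoGAN-style score: image reconstruction residual plus a
    discriminator feature residual; high scores flag anomalous inputs."""
    x_hat = generator(encoder(x))            # reconstruction via latent code
    img_res = np.mean((x - x_hat) ** 2)      # pixel-space residual
    feat_res = np.mean((disc_features(x) - disc_features(x_hat)) ** 2)
    return img_res + kappa * feat_res

# toy stand-ins: a linear "generator" so normal samples reconstruct well
rng = np.random.default_rng(1)
A = rng.standard_normal((16, 4))                       # decoder matrix
encoder = lambda x: np.linalg.lstsq(A, x, rcond=None)[0]
generator = lambda z: A @ z
disc_features = lambda x: x[:8]                        # crude feature map

normal = A @ rng.standard_normal(4)        # lies on the generator manifold
anomaly = normal + 3.0                     # shifted off-manifold
assert anomaly_score(normal, encoder, generator, disc_features) < \
       anomaly_score(anomaly, encoder, generator, disc_features)
```

In the paper the score is computed per image and per region, which is what allows anomalous image segments, not just whole images, to be flagged.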
••
TL;DR: A novel convolutional neural network is presented for simultaneous nuclear segmentation and classification; it leverages the instance-rich information encoded in the vertical and horizontal distances of nuclear pixels to their centres of mass to separate clustered nuclei, resulting in an accurate segmentation.
554 citations
••
TL;DR: In this article, a survey of semi-supervised, multiple-instance and transfer learning in medical image segmentation is presented, and connections between these learning scenarios as well as opportunities for future research are discussed.
531 citations
••
TL;DR: In this paper, the Deep Learning Image Registration (DLIR) framework is proposed for unsupervised affine and deformable image registration, where CNNs are trained for image registration by exploiting image similarity analogous to conventional intensity-based image registration.
488 citations
••
TL;DR: A novel self-supervised learning strategy based on context restoration is proposed in order to better exploit unlabelled images and is validated in three common problems in medical imaging: classification, localization, and segmentation.
393 citations
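The context-restoration pretext task corrupts an unlabelled image by repeatedly swapping small patches, then trains a network to restore the original. A minimal numpy sketch of the corruption step (a simplified version that does not prevent patch overlap; patch size and swap count are illustrative):

```python
import numpy as np

def corrupt_by_patch_swap(image, patch=4, n_swaps=10, rng=None):
    """Context-restoration pretext task: swap pairs of small patches so the
    intensity distribution is roughly preserved but spatial context is
    broken; a network is then trained to restore the original image."""
    rng = rng or np.random.default_rng()
    out = image.copy()
    h, w = out.shape[:2]
    for _ in range(n_swaps):
        y1, x1 = rng.integers(0, h - patch), rng.integers(0, w - patch)
        y2, x2 = rng.integers(0, h - patch), rng.integers(0, w - patch)
        a = out[y1:y1 + patch, x1:x1 + patch].copy()
        out[y1:y1 + patch, x1:x1 + patch] = out[y2:y2 + patch, x2:x2 + patch]
        out[y2:y2 + patch, x2:x2 + patch] = a
    return out

img = np.arange(32 * 32, dtype=float).reshape(32, 32)
corrupted = corrupt_by_patch_swap(img, rng=np.random.default_rng(0))
assert corrupted.shape == img.shape
assert not np.array_equal(corrupted, img)
```

The restoration network learns features useful for the downstream classification, localization, and segmentation tasks without requiring any labels.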
••
TL;DR: In this article, the authors compared stain color augmentation and normalization techniques and quantified their effect on CNN classification performance using a heterogeneous dataset of hematoxylin and eosin histopathology images from 4 organs and 9 pathology laboratories.
362 citations
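Stain color augmentation of the kind compared in that study perturbs the color channels of a histopathology patch to simulate lab-to-lab staining variability. The paper works in stain-specific color spaces such as HED; this hedged sketch applies the analogous per-channel jitter directly in RGB, with `alpha` and `beta` as illustrative jitter ranges.

```python
import numpy as np

def stain_color_jitter(rgb, alpha=0.05, beta=0.05, rng=None):
    """Lightweight stain-augmentation stand-in: each colour channel c is
    perturbed as c' = a_c * c + b_c with a_c ~ U(1-alpha, 1+alpha) and
    b_c ~ U(-beta, beta), simulating staining variability across labs."""
    rng = rng or np.random.default_rng()
    a = rng.uniform(1 - alpha, 1 + alpha, size=3)
    b = rng.uniform(-beta, beta, size=3)
    return np.clip(rgb * a + b, 0.0, 1.0)

patch = np.random.default_rng(0).uniform(size=(64, 64, 3))  # fake H&E patch
aug = stain_color_jitter(patch, rng=np.random.default_rng(1))
assert aug.shape == patch.shape
assert aug.min() >= 0.0 and aug.max() <= 1.0
```

Augmentation of this kind exposes the CNN to staining variation at training time, whereas stain normalization instead removes that variation from the inputs; the study quantifies both strategies.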
••
TL;DR: This paper presents the Grand Challenge on Breast Cancer Histology images (BACH), organized in conjunction with the 15th International Conference on Image Analysis and Recognition (ICIAR 2018).
335 citations
••
TL;DR: A probabilistic generative model is presented and an unsupervised learning-based inference algorithm is derived that uses insights from classical registration methods and makes use of recent developments in convolutional neural networks (CNNs).
251 citations
••
TL;DR: In this paper, the authors propose a novel up-sampling path that incorporates long skip and short-cut connections to overcome the feature-map explosion in conventional FCN-based architectures.
••
TL;DR: The proposed synergic deep learning model using multiple deep convolutional neural networks simultaneously and enabling them to mutually learn from each other achieves the state‐of‐the‐art performance in these medical image classification tasks.
••
TL;DR: A differentiable penalty is proposed, which enforces inequality constraints directly in the loss function, avoiding expensive Lagrangian dual iterates and proposal generation and has the potential to close the gap between weakly and fully supervised learning in semantic medical image segmentation.
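The differentiable penalty described can be sketched concretely: the predicted foreground size V (the sum of per-pixel probabilities) is pushed into an allowed interval [lower, upper] by a quadratic penalty added to the loss, with no Lagrangian dual iterates or proposal generation. A minimal numpy sketch, with the interval bounds as illustrative values:

```python
import numpy as np

def size_penalty(probs, lower, upper):
    """Differentiable penalty enforcing an inequality constraint on the
    predicted foreground size V = sum of per-pixel probabilities:
    zero inside [lower, upper], quadratic outside."""
    v = probs.sum()
    if v < lower:
        return (v - lower) ** 2
    if v > upper:
        return (v - upper) ** 2
    return 0.0

probs = np.full((8, 8), 0.5)                 # predicted foreground map, V = 32
assert size_penalty(probs, 20, 40) == 0.0              # constraint satisfied
assert size_penalty(probs, 40, 50) == (32 - 40) ** 2   # too small: penalized
```

Because the penalty is piecewise smooth in the network outputs, it can be minimized directly with standard gradient descent alongside a partial-supervision loss.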
••
TL;DR: Combining localized classification via CNNs with statistical anatomical knowledge via SSMs results in a state‐of‐the‐art segmentation method for knee bones and cartilage from MRI data.
••
TL;DR: A fully convolutional neural network is proposed that counters the loss of information caused by max‐pooling by re‐introducing the original image at multiple points within the network; random transformations applied at test time produce an enhanced segmentation result and simultaneously generate an uncertainty map highlighting areas of ambiguity.
••
Fudan University, Shanghai Jiao Tong University, Graz University of Technology, University of Lübeck, University of Lorraine, Royal Institute of Technology, Shenzhen University, The Chinese University of Hong Kong, University of Central Florida, Southeast University, François Rabelais University, Wuhan University, Chinese Academy of Sciences, University of Bern, University of Edinburgh, King's College London, National Institutes of Health
TL;DR: This work presents the methodologies and evaluation results for the WHS algorithms selected from the submissions to the Multi-Modality Whole Heart Segmentation (MM-WHS) challenge, in conjunction with MICCAI 2017.
••
TL;DR: This work proposes a CNN architecture that learns to split the localization task into two simpler sub‐problems, reducing the overall need for large training datasets, and proposes a fully convolutional SpatialConfiguration‐Net (SCN), which outperforms related methods in terms of landmark localization error.
••
TL;DR: Zhang et al. designed a fully convolutional network subject to dual guidance: ground-truth guidance using deformation fields obtained by an existing registration method, and image-dissimilarity guidance using the difference between the images after registration.
••
TL;DR: The achieved results are promising given the difficulty of the tasks and weakly‐labeled nature of the ground truth, however, further research is needed to improve the practical utility of image analysis methods for this task.
••
TL;DR: This work introduces a novel framework for multi-organ segmentation of abdominal regions by using organ-attention networks with reverse connections (OAN-RCs) which are applied to 2D views, of the 3D CT volume, and output estimates which are combined by statistical fusion exploiting structural similarity.
••
TL;DR: In this paper, a graph neural network was incorporated into a unified CNN architecture to exploit both local appearances and global vessel structures for vessel segmentation; the proposed method outperforms or is on par with current state-of-the-art methods in terms of average precision and the area under the receiver operating characteristic curve.
••
TL;DR: The proposed Micro‐Net targets better object localization in the face of varying intensities and is robust to noise; results on publicly available data sets show that the proposed network outperforms recent deep learning algorithms.
••
TL;DR: An iterative instance segmentation approach that uses a fully convolutional neural network to segment and label vertebrae one after the other, independently of the number of visible vertebrae, is proposed and compares favorably with state‐of‐the‐art methods.
••
TL;DR: An automated breast cancer diagnosis model for ultrasonography images using deep convolutional neural networks with multi‐scale kernels and skip connections is developed and achieves a performance comparable to human sonographers and can be applied to clinical scenarios.
••
TL;DR: A semi-supervised adversarial classification (SSAC) model that can be trained by using both labeled and unlabeled data for benign-malignant lung nodule classification is proposed and achieves superior performance on the benchmark LIDC-IDRI dataset.
••
TL;DR: The proposed algorithm is able to accurately and efficiently determine the direction and radius of coronary arteries based on information derived directly from the image data, and once trained allows fast automatic or interactive extraction of coronary artery trees from CCTA images.
••
TL;DR: Novel deep reinforcement learning (RL) strategies to train agents that can precisely and robustly localize target landmarks in medical scans are evaluated and the performance of these agents surpasses state‐of‐the‐art supervised and RL methods.
••
TL;DR: This paper first selects discriminative instances and then utilizes these instances to diagnose diseases with the proposed RMDL approach; it also builds a large whole-slide gastric histopathology image dataset with detailed pixel-level annotations.
••
TL;DR: An automatic and accurate system for detecting mitosis in histopathology images uses a deep segmentation network to produce a segmentation map; a novel concentric loss function is proposed to train the semantic segmentation network on weakly supervised mitosis data.
••
TL;DR: This study proposes a novel deep-learning-based CAD system, guided by task-specific prior knowledge, for automated nodule detection and classification in ultrasound images, and demonstrates that the proposed method is effective in the discrimination of thyroid nodules.