Open Access Book Chapter DOI

Models Genesis: Generic Autodidactic Models for 3D Medical Image Analysis

TLDR
The authors' extensive experiments demonstrate that their Models Genesis significantly outperform learning from scratch in all five target 3D applications, covering both segmentation and classification; this performance is attributed to a unified self-supervised learning framework built on a simple yet powerful observation.
Abstract
Transfer learning from natural images to medical images has been established as one of the most practical paradigms in deep learning for medical image analysis. However, to fit this paradigm, 3D imaging tasks in the most prominent imaging modalities (e.g., CT and MRI) have to be reformulated and solved in 2D, losing rich 3D anatomical information and inevitably compromising performance. To overcome this limitation, we have built a set of models, called Generic Autodidactic Models, nicknamed Models Genesis, because they are created ex nihilo (with no manual labeling), self-taught (learned by self-supervision), and generic (serving as source models for generating application-specific target models). Our extensive experiments demonstrate that our Models Genesis significantly outperform learning from scratch in all five target 3D applications, covering both segmentation and classification. More importantly, learning a model from scratch in 3D does not necessarily yield better performance than transfer learning from ImageNet in 2D, yet our Models Genesis consistently top all 2D approaches, including fine-tuning models pre-trained on ImageNet as well as fine-tuning the 2D versions of our Models Genesis, confirming the importance of 3D anatomical information and the significance of our Models Genesis for 3D medical imaging. This performance is attributed to our unified self-supervised learning framework, built on a simple yet powerful observation: the sophisticated yet recurrent anatomy in medical images can serve as strong supervision signals for deep models to learn common anatomical representations automatically via self-supervision. As open science, all pre-trained Models Genesis are available at https://github.com/MrGiovanni/ModelsGenesis.
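
The framework's pretext task is image restoration: a 3D sub-volume is deliberately distorted, and an encoder-decoder network learns to recover the original, forcing the encoder to internalize anatomical structure without any labels. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation (which lives in the repository above); the `distort` function uses a gamma-style intensity remap and local voxel shuffling as simplified stand-ins for the paper's full transformation suite, and the toy network replaces the 3D U-Net used in practice.

```python
import torch
import torch.nn as nn

def distort(x):
    """Toy stand-in for the Models Genesis transformations:
    a gamma-style intensity remap plus local voxel shuffling."""
    gamma = torch.empty(1).uniform_(0.5, 2.0).item()
    y = x.clamp(0, 1) ** gamma                   # non-linear intensity remap
    d, h, w = y.shape[-3:]
    zi, yi, xi = (torch.randint(0, s - 8, (1,)).item() for s in (d, h, w))
    patch = y[..., zi:zi+8, yi:yi+8, xi:xi+8].reshape(*y.shape[:-3], -1)
    perm = torch.randperm(patch.shape[-1])       # shuffle voxels in an 8^3 cube
    y[..., zi:zi+8, yi:yi+8, xi:xi+8] = patch[..., perm].reshape(
        *y.shape[:-3], 8, 8, 8)
    return y

# Toy 3D encoder-decoder; the paper uses a 3D U-Net instead.
model = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

volume = torch.rand(2, 1, 32, 64, 64)            # fake CT sub-volumes in [0, 1]
restored = model(distort(volume))
loss = nn.functional.mse_loss(restored, volume)  # restore the original
opt.zero_grad(); loss.backward(); opt.step()
```

Fine-tuning then reuses the trained encoder (plus decoder, for segmentation targets) as the initialization for an application-specific model.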


Citations
Journal Article DOI

A review of deep learning in medical imaging: Imaging traits, technology trends, case studies with progress highlights, and future promises

TL;DR: This survey article presents traits of medical imaging, highlights both clinical needs and technical challenges in medical imaging, and describes how emerging trends in DL are addressing these issues, including the topics of network architecture, sparse and noisy labels, federated learning, interpretability, uncertainty quantification, and so on.
Book Chapter DOI

Self-Supervision with Superpixels: Training Few-shot Medical Image Segmentation without Annotation

TL;DR: A novel self-supervised few-shot segmentation (FSS) framework for medical images that eliminates the need for annotations during training; superpixel-based pseudo-labels are generated to provide supervision.
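
To make the superpixel idea concrete, here is a hedged sketch (not the paper's code) of generating a pseudo-mask from off-the-shelf SLIC superpixels with scikit-image; each randomly chosen superpixel acts as a free "class" for one few-shot training episode, with an augmented view of the same image serving as the query.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_pseudo_label(image, n_segments=100, rng=None):
    """Build a binary pseudo-mask by selecting one random superpixel.

    image: 2D grayscale array in [0, 1] (e.g., one MRI/CT slice).
    Returns (mask, segments), where mask marks the chosen superpixel.
    """
    if rng is None:
        rng = np.random.default_rng()
    # channel_axis=None marks the image as grayscale (scikit-image >= 0.19).
    segments = slic(image, n_segments=n_segments,
                    channel_axis=None, start_label=1)
    chosen = rng.integers(1, segments.max() + 1)
    mask = (segments == chosen).astype(np.uint8)
    return mask, segments

# Each pseudo-mask defines the foreground "class" for one episode:
# the model segments the same superpixel in an augmented query view.
image = np.random.rand(128, 128)
mask, _ = superpixel_pseudo_label(image)
```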
Journal Article DOI

Toward data-efficient learning: A benchmark for COVID-19 CT lung and infection segmentation.

TL;DR: This work presents the first data-efficient learning benchmark for medical image segmentation and provides more than 40 pre-trained baseline models, which not only serve as out-of-the-box segmentation tools but also save computational time for researchers interested in COVID-19 CT lung and infection segmentation.
Posted Content

MoCo-CXR: MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models

TL;DR: This study demonstrates that MoCo pretraining provides high-quality representations and transferable initializations for chest X-ray interpretation, suggesting that pretraining on unlabeled X-rays can provide transfer-learning benefits for a target task.
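
As background for readers unfamiliar with MoCo, its core machinery is a momentum-updated key encoder plus a queue of negative embeddings scored with an InfoNCE loss. The PyTorch sketch below shows only those two pieces, as a simplified illustration rather than the MoCo-CXR code; `query_enc`, `key_enc`, and `queue` are placeholder names.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(query_enc, key_enc, m=0.999):
    # Key encoder trails the query encoder: k = m * k + (1 - m) * q.
    for q_p, k_p in zip(query_enc.parameters(), key_enc.parameters()):
        k_p.data.mul_(m).add_(q_p.data, alpha=1 - m)

def info_nce_logits(q, k, queue, t=0.07):
    """q, k: (N, D) embeddings of two views of the same images;
    queue: (D, K) stored negatives, assumed L2-normalized.
    The positive pair is always class 0."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    l_pos = torch.einsum("nd,nd->n", q, k).unsqueeze(1)  # (N, 1)
    l_neg = torch.einsum("nd,dk->nk", q, queue)          # (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / t
    labels = torch.zeros(q.size(0), dtype=torch.long)
    return logits, labels  # feed to F.cross_entropy
```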
References
Proceedings Article DOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal Article DOI

Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning

TL;DR: Two specific computer-aided detection problems are studied, thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification, achieving state-of-the-art performance on mediastinal LN detection and reporting the first five-fold cross-validation classification results on ILD.
Posted Content

Context Encoders: Feature Learning by Inpainting

TL;DR: Context Encoders is a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings; it can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
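
The inpainting pretext task, which Models Genesis also draws on, is easy to state: mask a region, predict it from its surroundings, and penalize reconstruction error only inside the hole (the Context Encoders paper adds an adversarial loss on top). Below is a minimal PyTorch sketch of the masked reconstruction loss, with `net` standing in for any image-to-image encoder-decoder:

```python
import torch
import torch.nn.functional as F

def inpainting_loss(net, images, hole=32):
    """Mask a random square region per image, predict it from the
    surrounding context, and score reconstruction inside the hole only."""
    n, c, h, w = images.shape
    mask = torch.zeros(n, 1, h, w)
    for i in range(n):
        y0 = torch.randint(0, h - hole, (1,)).item()
        x0 = torch.randint(0, w - hole, (1,)).item()
        mask[i, :, y0:y0+hole, x0:x0+hole] = 1.0
    corrupted = images * (1 - mask)   # zero out the hole
    pred = net(corrupted)             # encoder-decoder prediction
    return F.mse_loss(pred * mask, images * mask)
```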
Journal Article DOI

Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning?

TL;DR: This paper considers four distinct medical imaging applications in three specialties, involving classification, detection, and segmentation across three different imaging modalities, and investigates how deep CNNs trained from scratch perform compared with pre-trained CNNs fine-tuned in a layer-wise manner.
Journal Article DOI

Self-Supervised Visual Feature Learning With Deep Neural Networks: A Survey

TL;DR: Provides an extensive review of deep learning-based self-supervised visual feature learning methods, a subset of unsupervised learning that learns general image and video features from large-scale unlabeled data without using any human-annotated labels.