Open Access Journal Article (DOI)

Attention gated networks: Learning to leverage salient regions in medical images.

TL;DR
Experimental results show that AG models consistently improve the prediction performance of the base architectures across different datasets and training sizes while preserving computational efficiency.
About
This article was published in Medical Image Analysis on 2019-02-05 and is currently open access. It has received 966 citations to date. The article focuses on the topics: Convolutional neural network & Contextual image classification.
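For orientation, below is a minimal PyTorch sketch of the additive attention gate (AG) idea the article builds on, assuming the usual formulation: a gating signal from a coarser layer is combined with skip-connection features to produce per-pixel attention coefficients that rescale the skip features. Channel sizes, module names, and resampling choices are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an additive attention gate (AG); sizes and names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    def __init__(self, in_channels, gating_channels, inter_channels):
        super().__init__()
        # Map skip features onto the (coarser) gating grid
        self.theta_x = nn.Conv2d(in_channels, inter_channels, kernel_size=2, stride=2)
        self.phi_g = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, x, g):
        # Additive attention: sigmoid(psi(relu(theta(x) + phi(g))))
        theta_x = self.theta_x(x)
        phi_g = F.interpolate(self.phi_g(g), size=theta_x.shape[2:],
                              mode="bilinear", align_corners=False)
        alpha = torch.sigmoid(self.psi(F.relu(theta_x + phi_g)))
        # Resample the coefficients back to x's resolution and gate x
        alpha = F.interpolate(alpha, size=x.shape[2:],
                              mode="bilinear", align_corners=False)
        return x * alpha

# Usage: gate a skip connection with a deeper, coarser gating signal
x = torch.randn(1, 64, 64, 64)    # skip-connection features
g = torch.randn(1, 128, 32, 32)   # gating signal
gated = AttentionGate(64, 128, 32)(x, g)   # -> (1, 64, 64, 64)
```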


Citations
Journal Article (DOI)

Inf-Net: Automatic COVID-19 Lung Infection Segmentation From CT Images

TL;DR: The authors propose a COVID-19 Lung Infection Segmentation Deep Network (Inf-Net) to automatically identify infected regions from chest CT slices, in which a parallel partial decoder aggregates the high-level features and generates a global map.
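The partial-decoder idea mentioned above can be illustrated with a simplified sketch: only the high-level encoder features are reduced, brought to a common resolution, and fused into one coarse global map. This omits the paper's receptive-field blocks and exact fusion scheme; layer names and channel sizes are assumptions.

```python
# Simplified sketch of a partial decoder over high-level features only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialDecoder(nn.Module):
    def __init__(self, channels_per_level, mid_channels=32):
        super().__init__()
        self.reduce = nn.ModuleList(
            [nn.Conv2d(c, mid_channels, kernel_size=1) for c in channels_per_level]
        )
        self.fuse = nn.Conv2d(mid_channels * len(channels_per_level), 1,
                              kernel_size=3, padding=1)

    def forward(self, feats):
        # Upsample every level to the resolution of the shallowest high-level map
        target = feats[0].shape[2:]
        reduced = [F.interpolate(conv(f), size=target, mode="bilinear", align_corners=False)
                   for conv, f in zip(self.reduce, feats)]
        return self.fuse(torch.cat(reduced, dim=1))   # coarse global segmentation map

# Usage with three high-level feature maps from some encoder backbone
f3, f4, f5 = torch.randn(1, 256, 44, 44), torch.randn(1, 512, 22, 22), torch.randn(1, 1024, 11, 11)
global_map = PartialDecoder([256, 512, 1024])([f3, f4, f5])   # -> (1, 1, 44, 44)
```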
Journal Article (DOI)

Explainable Machine Learning for Scientific Insights and Discoveries

TL;DR: In this paper, the authors provide a survey of recent scientific works that incorporate machine learning and the way that explainable machine learning is used in combination with domain knowledge from the application areas.
Journal Article (DOI)

U-Net and Its Variants for Medical Image Segmentation: A Review of Theory and Applications

TL;DR: This narrative literature review examines the numerous developments and breakthroughs in the U-net architecture, provides observations on recent trends, and discusses the many innovations that have advanced deep learning and how these tools facilitate the U-net.
Journal Article (DOI)

Deep semantic segmentation of natural and medical images: a review

TL;DR: This review categorizes the leading deep learning-based medical and non-medical image segmentation solutions into six main groups: deep architectural, data synthesis-based, loss function-based, sequenced models, weakly supervised, and multi-task methods.
Book Chapter (DOI)

TransFuse: Fusing Transformers and CNNs for Medical Image Segmentation

TL;DR: TransFuse combines Transformers and CNNs in a parallel style, in which both global dependency and low-level spatial detail can be captured efficiently in a much shallower manner.
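A rough sketch of the parallel-branch idea: CNN features and transformer features at the same spatial scale are fused side by side instead of stacked sequentially. This is a plain concat-and-convolve stand-in, not the paper's BiFusion module; names and sizes are assumptions.

```python
# Stand-in fusion of parallel CNN and transformer feature maps.
import torch
import torch.nn as nn

class ParallelFusion(nn.Module):
    def __init__(self, cnn_channels, trans_channels, out_channels):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(cnn_channels + trans_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, cnn_feat, trans_feat):
        # Both branches are assumed to deliver maps at the same resolution
        return self.fuse(torch.cat([cnn_feat, trans_feat], dim=1))

fused = ParallelFusion(256, 384, 256)(torch.randn(2, 256, 28, 28), torch.randn(2, 384, 28, 28))
```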
References
Book Chapter (DOI)

U-Net: Convolutional Networks for Biomedical Image Segmentation

TL;DR: Ronneberger et al. propose a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; the network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopy stacks.
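A minimal sketch of the U-Net pattern described here, assuming the usual contracting path, expanding path, and concatenating skip connections; depth and channel counts are reduced for illustration and do not match the original architecture.

```python
# Tiny U-Net-style encoder-decoder with a single skip connection.
import torch
import torch.nn as nn

def double_conv(in_c, out_c):
    return nn.Sequential(
        nn.Conv2d(in_c, out_c, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_c, out_c, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        self.enc1 = double_conv(in_channels, 64)
        self.enc2 = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec1 = double_conv(128, 64)            # 64 upsampled + 64 skip channels
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                           # skip features
        e2 = self.enc2(self.pool(e1))               # bottleneck
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)                        # per-pixel class scores

logits = TinyUNet()(torch.randn(1, 1, 128, 128))    # -> (1, 2, 128, 128)
```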
Proceedings Article (DOI)

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
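The fully convolutional idea can be shown with a toy sketch: the classifier becomes a 1x1 convolution, so an input of any size yields a correspondingly sized per-pixel score map. The backbone below is a stand-in, not the VGG-based network used in the paper.

```python
# Toy fully convolutional network: 1x1-conv classifier plus upsampling.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    def __init__(self, num_classes=21):
        super().__init__()
        self.features = nn.Sequential(              # downsamples by 4x
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(128, num_classes, kernel_size=1)   # "dense" layer as 1x1 conv

    def forward(self, x):
        score = self.classifier(self.features(x))
        # Upsample coarse scores back to the input resolution
        return F.interpolate(score, size=x.shape[2:], mode="bilinear", align_corners=False)

# Arbitrary input sizes give correspondingly sized outputs
print(TinyFCN()(torch.randn(1, 3, 224, 224)).shape)   # (1, 21, 224, 224)
print(TinyFCN()(torch.randn(1, 3, 300, 500)).shape)   # (1, 21, 300, 500)
```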

Automatic differentiation in PyTorch

TL;DR: This paper describes the automatic differentiation module of PyTorch, a library designed to enable rapid research on machine learning models. The module performs differentiation of purely imperative programs, with an emphasis on extensibility and low overhead.
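A small example of the imperative style described here, using the public torch.autograd API: gradients are recorded as ordinary Python code (including control flow) executes and are obtained with a single backward() call.

```python
# Reverse-mode autodiff over an ordinary imperative program.
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
w = torch.tensor([0.5, -1.0, 2.0], requires_grad=True)

y = (w * x).sum()     # ordinary imperative code
if y > 0:             # Python control flow is differentiated as executed
    y = y * 2

y.backward()          # reverse-mode differentiation through the recorded graph
print(x.grad)         # dy/dx = 2 * w -> tensor([ 1., -2.,  4.])
print(w.grad)         # dy/dw = 2 * x -> tensor([2., 4., 6.])
```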
Journal Article (DOI)

A survey on deep learning in medical image analysis

TL;DR: This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year, to survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks.
Journal Article (DOI)

Dermatologist-level classification of skin cancer with deep neural networks

TL;DR: This work demonstrates an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists, trained end-to-end from images directly, using only pixels and disease labels as inputs.
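A hedged sketch of the pixels-and-labels-only setup this describes: an ImageNet-pretrained CNN is fine-tuned end to end on labeled lesion images. The paper used Inception v3; the class count, dataset path, and training-loop details below are illustrative assumptions, not the paper's protocol.

```python
# Fine-tuning an ImageNet-pretrained Inception v3 on labeled lesion images (sketch).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

num_classes = 9   # assumed number of disease classes, for illustration only

model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_classes)                      # replace main head
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_classes)  # replace aux head

tfm = transforms.Compose([
    transforms.Resize((299, 299)),    # Inception v3 input size
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("path/to/lesion_images", transform=tfm)     # hypothetical layout
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True, drop_last=True)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    logits, aux_logits = model(images)   # train-mode forward returns main and auxiliary logits
    loss = criterion(logits, labels) + 0.4 * criterion(aux_logits, labels)   # common aux weighting
    loss.backward()
    optimizer.step()
```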