A token-mixer architecture for CAD-RADS classification of coronary stenosis on multiplanar reconstruction CT images
Marco Penso, Sara Moccia, Enrico G. Caiani, Gloria Caredda, Maria Luisa Lampus, M. Ludovica Carerj, Mario Babbaro, Mauro Pepi, Mattia Chiesa, Gianluca Pontone, et al.
TLDR
In this paper, a token-mixer architecture (ConvMixer) was proposed to learn structural relationships over the whole coronary artery. It consists of a patch embedding layer followed by repeated convolutional blocks that learn long-range dependencies between pixels.
About
This article was published in Computers in Biology and Medicine on 2022-12-26 and is currently open access. It has received 1 citation to date. The article focuses on the topics: Medicine & Computer science.
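To make the architecture described above concrete, here is a minimal PyTorch sketch of a generic ConvMixer (following the published ConvMixer design: a strided-convolution patch embedding followed by repeated depthwise/pointwise convolutional blocks with residual connections). This is not the authors' exact CAD-RADS model; `dim`, `depth`, the kernel and patch sizes, and the number of output classes are illustrative assumptions.

```python
import torch
import torch.nn as nn


class Residual(nn.Module):
    """Adds a skip connection around an inner module."""
    def __init__(self, fn):
        super().__init__()
        self.fn = fn

    def forward(self, x):
        return self.fn(x) + x


def ConvMixer(dim, depth, kernel_size=9, patch_size=7, n_classes=6):
    return nn.Sequential(
        # Patch embedding: a strided convolution that maps each
        # patch_size x patch_size patch to a dim-dimensional token.
        nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size),
        nn.GELU(),
        nn.BatchNorm2d(dim),
        # Repeated mixer blocks: depthwise conv mixes spatial locations
        # (long-range when kernel_size is large), pointwise conv mixes channels.
        *[nn.Sequential(
            Residual(nn.Sequential(
                nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
                nn.GELU(),
                nn.BatchNorm2d(dim),
            )),
            nn.Conv2d(dim, dim, kernel_size=1),
            nn.GELU(),
            nn.BatchNorm2d(dim),
        ) for _ in range(depth)],
        nn.AdaptiveAvgPool2d((1, 1)),
        nn.Flatten(),
        nn.Linear(dim, n_classes),
    )
```

A forward pass on a batch of images, e.g. `ConvMixer(32, 2, kernel_size=3, patch_size=4)(torch.randn(2, 3, 32, 32))`, yields one score per class; six output classes would correspond to CAD-RADS categories 0–5.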
Citations
Journal Article
CAD-RADS scoring of coronary CT angiography with Multi-Axis Vision Transformer: a clinically-inspired deep learning pipeline
Alessia Gerbasi, Arianna Dagliati, Giuseppe Albi, Mattia Chiesa, Daniele Andreini, Andrea Baggiano, Saima Mushtaq, Gianluca Pontone, Riccardo Bellazzi, Gualtiero I. Colombo, et al.
TL;DR: In this paper, the authors propose a fully automated and visually explainable deep learning pipeline to be used as a decision support system for coronary artery disease screening. The pipeline performs two classification tasks: identifying patients who require further clinical investigation, and classifying patients into subgroups based on the degree of stenosis, according to commonly used CAD-RADS thresholds.
References
Proceedings Article
Attention is All you Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
TL;DR: This paper proposes the Transformer, a simple network architecture based solely on attention mechanisms, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
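The core operation of the attention-only architecture above is scaled dot-product attention. A minimal NumPy sketch (shapes and variable names are illustrative, not the paper's reference code):

```python
import numpy as np


def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Q: (..., n_queries, d_k), K: (..., n_keys, d_k), V: (..., n_keys, d_v).
    Returns (..., n_queries, d_v).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)  # (..., n_queries, n_keys)
    return softmax(scores) @ V
```

Each output row is a convex combination of the value rows, weighted by query-key similarity; the `1/sqrt(d_k)` scaling keeps the logits well-conditioned as the key dimension grows.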
Posted Content
Deep Residual Learning for Image Recognition
TL;DR: This work presents a residual learning framework that eases the training of networks substantially deeper than those used previously, and provides comprehensive empirical evidence that residual networks are easier to optimize and gain accuracy from considerably increased depth.
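The residual idea above amounts to letting each block learn an offset added to an identity shortcut. A hedged PyTorch sketch of a basic residual block (channel count and layer arrangement are illustrative, modeled on the common two-convolution design):

```python
import torch
import torch.nn as nn


class BasicBlock(nn.Module):
    """Two 3x3 convolutions with an identity shortcut: y = F(x) + x."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The shortcut lets gradients flow directly through the addition,
        # which is what makes very deep stacks trainable.
        return self.relu(out + x)
```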
Proceedings Article
Going deeper with convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich
TL;DR: This paper introduces Inception, a deep convolutional neural network architecture that set a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
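The distinctive building block of the architecture above processes the input at several receptive-field sizes in parallel and concatenates the results. A simplified PyTorch sketch of one Inception-style module (branch widths are illustrative; the published module also inserts 1x1 reduction convolutions before the larger kernels, omitted here for brevity):

```python
import torch
import torch.nn as nn


class InceptionModule(nn.Module):
    """Parallel 1x1, 3x3, 5x5, and pooled branches, concatenated on channels."""
    def __init__(self, c_in, c1, c3, c5, cp):
        super().__init__()
        self.b1 = nn.Conv2d(c_in, c1, kernel_size=1)
        self.b3 = nn.Conv2d(c_in, c3, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(c_in, c5, kernel_size=5, padding=2)
        self.bp = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(c_in, cp, kernel_size=1),
        )

    def forward(self, x):
        # All branches preserve spatial size, so outputs can be concatenated.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)
```

The output has `c1 + c3 + c5 + cp` channels at the input's spatial resolution.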
Proceedings Article
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
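The normalization described above standardizes each feature over the mini-batch and then applies a learned scale and shift. A minimal NumPy sketch of the training-time forward pass (the running statistics used at inference are omitted; names are illustrative):

```python
import numpy as np


def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch normalization over a mini-batch.

    x: (N, D) activations; gamma, beta: (D,) learned scale and shift.
    """
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta
```

By keeping each layer's input distribution stable across training steps, this reduces the internal covariate shift the paper's title refers to, allowing larger learning rates.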
Proceedings Article
Densely Connected Convolutional Networks
TL;DR: This paper proposes DenseNet, which connects each layer to every other layer in a feed-forward fashion, alleviating the vanishing-gradient problem, strengthening feature propagation, encouraging feature reuse, and substantially reducing the number of parameters.
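In the dense connectivity pattern above, each layer receives the concatenation of all preceding feature maps. A simplified PyTorch sketch of one dense block (the `growth` rate and layer composition are illustrative; the published blocks also use bottleneck 1x1 convolutions):

```python
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """Each layer consumes the concatenation of all earlier feature maps
    and contributes `growth` new channels."""
    def __init__(self, c_in, growth, n_layers):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.BatchNorm2d(c_in + i * growth),
                nn.ReLU(inplace=True),
                nn.Conv2d(c_in + i * growth, growth, 3, padding=1, bias=False),
            )
            for i in range(n_layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Concatenate everything produced so far, then add `growth` channels.
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```

The block's output has `c_in + n_layers * growth` channels, which is why each layer can stay narrow while the network as a whole reuses features aggressively.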