Author

Ismail Ben Ayed

Bio: Ismail Ben Ayed is an academic researcher from École de technologie supérieure. The author has contributed to research on topics including segmentation and image segmentation. The author has an h-index of 34 and has co-authored 200 publications receiving 4,260 citations. Previous affiliations of Ismail Ben Ayed include École Normale Supérieure and Université de Montréal.


Papers
Journal ArticleDOI
TL;DR: This work is the first to study subcortical structure segmentation on such large-scale and heterogeneous data; the proposed method yielded segmentations that are highly consistent with a standard atlas-based approach, while running in a fraction of the time needed by atlas-based methods and avoiding registration/normalization steps.

367 citations

Journal ArticleDOI
TL;DR: HyperDenseNet is proposed, a 3-D fully convolutional neural network that extends the definition of dense connectivity to multi-modal segmentation problems; the network has total freedom to learn more complex combinations between the modalities, within and in between all levels of abstraction, which significantly increases its representational power.
Abstract: Recently, dense connections have attracted substantial attention in computer vision because they facilitate gradient flow and implicit deep supervision during training. In particular, DenseNet, which connects each layer to every other layer in a feed-forward fashion, has shown impressive performance in natural image classification tasks. We propose HyperDenseNet, a 3-D fully convolutional neural network that extends the definition of dense connectivity to multi-modal segmentation problems. Each imaging modality has a path, and dense connections occur not only between pairs of layers within the same path but also between layers across different paths. This contrasts with existing multi-modal CNN approaches, in which modeling several modalities relies entirely on a single joint layer (or level of abstraction) for fusion, typically either at the input or at the output of the network. The proposed network therefore has total freedom to learn more complex combinations between the modalities, within and in between all levels of abstraction, which significantly increases its representational power. We report extensive evaluations over two different and highly competitive multi-modal brain tissue segmentation challenges, iSEG 2017 and MRBrainS 2013, the former focusing on six-month infant data and the latter on adult images. HyperDenseNet yielded significant improvements over many state-of-the-art segmentation networks, ranking at the top on both benchmarks. We further provide a comprehensive experimental analysis of feature reuse, which confirms the importance of hyper-dense connections in multi-modal representation learning. Our code is publicly available.
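The hyper-dense connectivity described above can be illustrated with a short sketch. Below is a minimal, simplified illustration in PyTorch (not the authors' released code) of a two-path 3-D block in which every layer of each modality path receives the concatenated outputs of all preceding layers from both paths; the class name, growth rate, and patch sizes are assumptions made for the example.

```python
# Minimal sketch of hyper-dense connectivity between two modality paths.
# Not the authors' implementation; names and hyper-parameters are illustrative.
import torch
import torch.nn as nn

class HyperDenseBlock2Path(nn.Module):
    """Toy 3-D hyper-dense block with two modality paths (e.g., T1 and T2)."""
    def __init__(self, in_channels=1, growth=8, num_layers=3):
        super().__init__()
        self.path_a = nn.ModuleList()
        self.path_b = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            # Each layer sees the accumulated features of BOTH paths (2 * channels inputs).
            self.path_a.append(nn.Conv3d(2 * channels, growth, kernel_size=3, padding=1))
            self.path_b.append(nn.Conv3d(2 * channels, growth, kernel_size=3, padding=1))
            channels += growth  # dense concatenation grows each path's channel count

    def forward(self, xa, xb):
        feats_a, feats_b = [xa], [xb]
        for layer_a, layer_b in zip(self.path_a, self.path_b):
            # Hyper-dense input: concatenate all previous features from both paths.
            joint = torch.cat(feats_a + feats_b, dim=1)
            feats_a.append(torch.relu(layer_a(joint)))
            feats_b.append(torch.relu(layer_b(joint)))
        return torch.cat(feats_a + feats_b, dim=1)

if __name__ == "__main__":
    block = HyperDenseBlock2Path()
    t1 = torch.randn(1, 1, 16, 16, 16)  # modality A patch
    t2 = torch.randn(1, 1, 16, 16, 16)  # modality B patch
    print(block(t1, t2).shape)          # fused multi-modal feature map
```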

366 citations

Journal ArticleDOI
TL;DR: A differentiable penalty is proposed that enforces inequality constraints directly in the loss function, avoiding expensive Lagrangian dual iterates and proposal generation; it has the potential to close the gap between weakly and fully supervised learning in semantic medical image segmentation.
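As a rough illustration of the idea in this TL;DR, the sketch below shows one way a differentiable penalty enforcing an inequality constraint on the predicted region size could be added to a segmentation loss in PyTorch; the function name, the quadratic form of the penalty, and the bounds are assumptions for the example, not the paper's exact formulation.

```python
# Illustrative sketch of a differentiable size-constraint penalty (assumed quadratic form).
import torch

def size_penalty(probs, a, b):
    """probs: soft foreground probabilities (N, H, W); a, b: lower/upper size bounds in pixels."""
    size = probs.sum(dim=(1, 2))                  # predicted soft region size per image
    below = torch.clamp(a - size, min=0) ** 2     # penalize if size < a
    above = torch.clamp(size - b, min=0) ** 2     # penalize if size > b
    return (below + above).mean()

# Usage sketch: total loss = partial cross-entropy on the labeled pixels + weighted penalty.
# loss = partial_ce + 1e-3 * size_penalty(probs, a=100.0, b=5000.0)
```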

238 citations

Journal ArticleDOI
TL;DR: Best results show that an average 80% Dice accuracy and a 1 cm Hausdorff distance can be expected from semi-automated algorithms for this challenging task on these datasets, and that an automated algorithm can reach similar performance at the expense of a high computational burden.
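For context, the two metrics quoted above can be computed as in the minimal sketch below, assuming binary masks and unit voxel spacing; the helper names are hypothetical and this is not the challenge's official evaluation code.

```python
# Illustrative computation of the Dice overlap and Hausdorff distance between binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-12)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the voxel coordinates of two binary masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```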

220 citations

Journal ArticleDOI
TL;DR: This study investigates multiregion graph cut image partitioning via kernel mapping of the image data; the method affords an effective alternative to complex modeling of the original image data while taking advantage of the computational benefits of graph cuts.
Abstract: The purpose of this study is to investigate multiregion graph cut image partitioning via kernel mapping of the image data. The image data is transformed implicitly by a kernel function so that the piecewise constant model of the graph cut formulation becomes applicable. The objective function contains an original data term that evaluates the deviation of the transformed data, within each segmentation region, from the piecewise constant model, and a smoothness, boundary-preserving regularization term. The method affords an effective alternative to complex modeling of the original image data while taking advantage of the computational benefits of graph cuts. Using a common kernel function, energy minimization typically consists of iterating image partitioning by graph cut iterations and evaluations of region parameters via fixed-point computation. A quantitative and comparative performance assessment is carried out over a large number of experiments using synthetic grey-level data as well as natural images from the Berkeley database. The effectiveness of the method is also demonstrated through a set of experiments with real images of a variety of types, such as medical, synthetic aperture radar, and motion maps.
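To make the kernel-induced data term and the fixed-point step concrete, here is a small sketch assuming a common RBF kernel K: in feature space the distance between a pixel value I_p and a region parameter mu_l reduces to 2(1 - K(I_p, mu_l)), and the region parameter is updated as a kernel-weighted mean of its pixels. Function names and parameter values are illustrative, not the authors' code.

```python
# Sketch of an RBF kernel-induced data term and fixed-point region-parameter update.
import numpy as np

def rbf_kernel(x, y, sigma=10.0):
    return np.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))

def kernel_data_term(image, mu, sigma=10.0):
    """Per-pixel cost of assigning pixels of `image` to the region with parameter `mu`."""
    return 2.0 * (1.0 - rbf_kernel(image.astype(float), mu, sigma))

def fixed_point_update(image, labels, region, mu, sigma=10.0, iters=20):
    """Fixed-point iteration for the parameter `mu` of one segmentation region."""
    pixels = image[labels == region].astype(float)
    for _ in range(iters):
        w = rbf_kernel(pixels, mu, sigma)              # kernel weights of the region's pixels
        mu = (w * pixels).sum() / max(w.sum(), 1e-12)  # kernel-weighted mean update
    return mu
```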

219 citations


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are given in this book, along with a discussion of combining models in the context of machine learning and classification.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

01 Jan 1979
TL;DR: This special issue aims at gathering recent advances in learning with shared information methods and their applications in computer vision and multimedia analysis, and especially encourages papers addressing interesting real-world computer vision and multimedia applications.
Abstract: In the real world, a realistic setting for computer vision or multimedia recognition problems is that some classes contain lots of training data while many classes contain only a small amount. How to use frequent classes to help learn rare classes, for which it is harder to collect training data, is therefore an open question. Learning with shared information is an emerging topic in machine learning, computer vision and multimedia analysis. Different levels of components can be shared during the concept modeling and machine learning stages, such as generic object parts, attributes, transformations, regularization parameters and training examples. Regarding specific methods, multi-task learning, transfer learning and deep learning can be seen as different strategies for sharing information. These learning with shared information methods are very effective in solving real-world large-scale problems. This special issue aims at gathering recent advances in learning with shared information methods and their applications in computer vision and multimedia analysis. Both state-of-the-art works and literature reviews are welcome for submission. Papers addressing interesting real-world computer vision and multimedia applications are especially encouraged. Topics of interest include, but are not limited to:
• Multi-task learning or transfer learning for large-scale computer vision and multimedia analysis
• Deep learning for large-scale computer vision and multimedia analysis
• Multi-modal approaches for large-scale computer vision and multimedia analysis
• Different sharing strategies, e.g., sharing generic object parts, attributes, transformations, regularization parameters and training examples
• Real-world computer vision and multimedia applications based on learning with shared information, e.g., event detection, object recognition, object detection, action recognition, human head pose estimation, object tracking, location-based services, semantic indexing
• New datasets and metrics to evaluate the benefit of the proposed sharing ability for a specific computer vision or multimedia problem
• Survey papers on the topic of learning with shared information
Authors who are unsure whether their planned submission is in scope may contact the guest editors prior to the submission deadline with an abstract, in order to receive feedback.

1,758 citations

Journal ArticleDOI
TL;DR: This paper measures how far state-of-the-art deep learning methods can go at assessing cardiac MRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies; the results open the door to highly accurate and fully automatic analysis of cardiac MRI.
Abstract: Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac magnetic resonance images (multi-slice 2-D cine MRI) is a common clinical task to establish diagnosis. The automation of the corresponding tasks has thus been the subject of intense research over the past decades. In this paper, we introduce the “Automatic Cardiac Diagnosis Challenge” dataset (ACDC), the largest publicly available and fully annotated dataset for the purpose of cardiac MRI (CMR) assessment. The dataset contains data from 150 multi-equipment CMRI recordings with reference measurements and classification from two medical experts. The overarching objective of this paper is to measure how far state-of-the-art deep learning methods can go at assessing CMRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies. In the wake of the 2017 MICCAI-ACDC challenge, we report results from deep learning methods provided by nine research groups for the segmentation task and four groups for the classification task. Results show that the best methods faithfully reproduce the expert analysis, leading to a mean correlation score of 0.97 for the automatic extraction of clinical indices and an accuracy of 0.96 for automatic diagnosis. These results clearly open the door to highly accurate and fully automatic analysis of cardiac MRI. We also identify scenarios for which deep learning methods are still failing. Both the dataset and detailed results are publicly available online, while the platform remains open for new submissions.
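As an illustration of the clinical indices mentioned above, the sketch below derives one such index (the ejection fraction) from end-diastolic and end-systolic left-ventricle cavity masks; the helper names and the voxel spacing are hypothetical, and this is not the challenge's evaluation pipeline.

```python
# Illustrative derivation of a clinical index (ejection fraction) from binary LV masks.
import numpy as np

def volume_ml(mask, spacing_mm):
    """Volume of a binary 3-D mask in millilitres, given voxel spacing in mm."""
    voxel_ml = np.prod(spacing_mm) / 1000.0
    return mask.sum() * voxel_ml

def ejection_fraction(lv_ed_mask, lv_es_mask, spacing_mm=(1.25, 1.25, 10.0)):
    """Ejection fraction (%) from end-diastolic and end-systolic LV cavity masks."""
    edv = volume_ml(lv_ed_mask, spacing_mm)  # end-diastolic volume
    esv = volume_ml(lv_es_mask, spacing_mm)  # end-systolic volume
    return 100.0 * (edv - esv) / max(edv, 1e-12)
```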

1,056 citations

Journal ArticleDOI
TL;DR: A review of recent advances in medical imaging using the adversarial training scheme with the hope of benefiting researchers interested in this technique.

1,053 citations