Author
Maxime Sermesant
Other affiliations: King's College London, Microsoft, University of Bordeaux, …
Bio: Maxime Sermesant is an academic researcher at the French Institute for Research in Computer Science and Automation (Inria). His research spans topics including population modelling and ventricular tachycardia. He has an h-index of 48 and has co-authored 246 publications receiving 8,232 citations. His previous affiliations include King's College London and Microsoft.
Papers published on a yearly basis
Papers
University of Lyon, University of Burgundy, Université de Sherbrooke, The Chinese University of Hong Kong, Pompeu Fabra University, Stanford University, Queen Mary University of London, University of Crete, Indian Institute of Technology Madras, French Institute for Research in Computer Science and Automation, German Cancer Research Center, Mannheim University of Applied Sciences, ETH Zurich, Utrecht University, Yonsei University, University of Nice Sophia Antipolis
TL;DR: Measures how far state-of-the-art deep learning methods can go at assessing cardiac MRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies, opening the door to highly accurate and fully automatic analysis of cardiac MRI.
Abstract: Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac magnetic resonance images (multi-slice 2-D cine MRI) is a common clinical task to establish diagnosis. The automation of the corresponding tasks has thus been the subject of intense research over the past decades. In this paper, we introduce the “Automatic Cardiac Diagnosis Challenge” dataset (ACDC), the largest publicly available and fully annotated dataset for the purpose of cardiac MRI (CMR) assessment. The dataset contains 150 CMR recordings acquired on multiple types of equipment, with reference measurements and classification from two medical experts. The overarching objective of this paper is to measure how far state-of-the-art deep learning methods can go at assessing CMR, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies. In the wake of the 2017 MICCAI-ACDC challenge, we report results from deep learning methods provided by nine research groups for the segmentation task and four groups for the classification task. Results show that the best methods faithfully reproduce the expert analysis, leading to a mean correlation score of 0.97 for the automatic extraction of clinical indices and an accuracy of 0.96 for automatic diagnosis. These results clearly open the door to highly accurate and fully automatic analysis of cardiac MRI. We also identify scenarios for which deep learning methods are still failing. Both the dataset and detailed results are publicly available online, while the platform will remain open for new submissions.
1,056 citations
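Segmentation results such as those in the ACDC challenge above are typically compared to expert contours with overlap metrics like the Dice score. A minimal sketch of that metric (illustrative only, not the challenge's evaluation code):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two binary segmentation masks (1.0 = perfect)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    if not pred.any() and not truth.any():
        return 1.0  # convention: two empty masks agree perfectly
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

# Toy 2-D example: two overlapping square "ventricle" masks of 16 pixels each.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(dice_coefficient(a, b))  # overlap = 9 px -> 2*9/(16+16) = 0.5625
```

The Dice score is preferred over plain pixel accuracy for structures like the myocardium because the background dominates the image, so accuracy alone would look high even for a poor contour.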
TL;DR: A new model to simulate the three-dimensional (3-D) growth of glioblastoma multiforme (GBM), the most aggressive glial tumor, and a new coupling equation taking into account the mechanical influence of the tumor cells on the invaded tissues are proposed.
Abstract: We propose a new model to simulate the three-dimensional (3-D) growth of glioblastoma multiforme (GBM), the most aggressive glial tumor. The GBM speed of growth depends on the invaded tissue: faster in white than in gray matter, it is stopped by the dura or the ventricles. These different structures are introduced into the model using an atlas matching technique. The atlas includes both the segmentations of anatomical structures and diffusion information in white matter fibers. We use the finite element method (FEM) to simulate the invasion of the GBM in the brain parenchyma and its mechanical interaction with the invaded structures (mass effect). Depending on the considered tissue, the former effect is modeled with a reaction-diffusion or a Gompertz equation, while the latter is based on a linear elastic brain constitutive equation. In addition, we propose a new coupling equation taking into account the mechanical influence of the tumor cells on the invaded tissues. The tumor growth simulation is assessed by comparing the in-silico GBM growth with the real growth observed on two magnetic resonance images (MRIs) of a patient acquired six months apart. Results show the feasibility of this new conceptual approach and justify its further evaluation.
363 citations
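The reaction-diffusion idea behind the tumor growth model above can be illustrated in one dimension. This is a deliberately minimal finite-difference sketch of Fisher-type growth, ∂u/∂t = ∂/∂x(D ∂u/∂x) + ρu(1−u), with a tissue-dependent diffusivity D; the actual paper uses a 3-D finite-element simulation coupled to brain mechanics, and the tissue map, diffusivities, and rate below are illustrative values, not taken from it:

```python
import numpy as np

n, dx, dt, steps = 200, 0.1, 0.001, 2000
rho = 1.0                                        # proliferation rate (illustrative)
D = np.where(np.arange(n) < n // 2, 0.05, 0.5)   # slow "gray" vs fast "white" matter
u = np.zeros(n)
u[n // 2] = 1.0                                  # seed the tumour at the interface

for _ in range(steps):
    flux = D[:-1] * np.diff(u) / dx              # D du/dx at interior cell faces
    lap = np.diff(flux, prepend=0.0, append=0.0) / dx  # zero-flux boundaries
    u = np.clip(u + dt * (lap + rho * u * (1 - u)), 0.0, 1.0)

# The invasion front travels farther into the high-D ("white matter") half,
# reproducing the qualitative faster-in-white-matter behaviour of the model.
print(u[n // 2 + 20] > u[n // 2 - 20])
```

The Fisher front speed scales as 2√(Dρ), so the tenfold diffusivity contrast above makes the right-hand front roughly three times faster, which is the mechanism the model exploits to capture preferential spread along white matter.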
TL;DR: Results indicate that this proactive model, which integrates a priori knowledge on the cardiac anatomy and on its dynamical behavior, can improve the accuracy and robustness of the extraction of functional parameters from cardiac images even in the presence of noisy or sparse data.
Abstract: This paper presents a new three-dimensional electromechanical model of the two cardiac ventricles designed both for the simulation of their electrical and mechanical activity, and for the segmentation of time series of medical images. First, we present the volumetric biomechanical models built. Then the transmembrane potential propagation is simulated, based on FitzHugh-Nagumo reaction-diffusion equations. The myocardium contraction is modeled through a constitutive law including an electromechanical coupling. Simulation of a cardiac cycle, with boundary conditions representing blood pressure and volume constraints, leads to the correct estimation of global and local parameters of the cardiac function. This model enables the introduction of pathologies and the simulation of electrophysiology interventions. Moreover, it can be used for cardiac image analysis. A new proactive deformable model of the heart is introduced to segment the two ventricles in time series of cardiac images. Preliminary results indicate that this proactive model, which integrates a priori knowledge on the cardiac anatomy and on its dynamical behavior, can improve the accuracy and robustness of the extraction of functional parameters from cardiac images even in the presence of noisy or sparse data. Such a model also allows the simulation of cardiovascular pathologies in order to test therapy strategies and to plan interventions.
240 citations
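The FitzHugh-Nagumo dynamics used for potential propagation in the model above can be sketched for a single cell. This is a hedged illustration with the classic textbook parameters, integrated with forward Euler; the paper couples these equations with diffusion over a 3-D ventricular mesh, and its parameter values differ:

```python
import numpy as np

# FitzHugh-Nagumo: dv/dt = v - v^3/3 - w + I,  dw/dt = eps*(v + a - b*w)
a, b, eps, I = 0.7, 0.8, 0.08, 0.5   # classic values; I is a stimulus current
dt, steps = 0.01, 100_000
v, w = -1.0, 1.0                     # membrane potential and recovery variable
v_trace = np.empty(steps)

for t in range(steps):
    dv = v - v**3 / 3 - w + I        # fast activation dynamics
    dw = eps * (v + a - b * w)       # slow recovery dynamics
    v, w = v + dt * dv, w + dt * dw
    v_trace[t] = v

# With this sustained stimulus the cell fires repeatedly: v keeps swinging
# between roughly -2 and +2 instead of settling at a fixed point.
print(v_trace[steps // 2:].max() > 1.0, v_trace[steps // 2:].min() < -1.0)
```

The two-variable structure (fast activation, slow recovery) is what makes the model excitable yet cheap enough to solve on every node of a volumetric heart mesh.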
TL;DR: Presents how personalisation of an electromechanical model of the myocardium can predict the acute haemodynamic changes associated with cardiac resynchronisation therapy (CRT), demonstrating the potential of physiological models personalised from images and electrophysiology signals to improve patient selection and CRT planning.
231 citations
Cited by
Technische Universität München, ETH Zurich, University of Bern, Harvard University, National Institutes of Health, University of Debrecen, University Hospital Heidelberg, McGill University, University of Pennsylvania, French Institute for Research in Computer Science and Automation, University at Buffalo, Microsoft, University of Cambridge, Stanford University, University of Virginia, Imperial College London, Massachusetts Institute of Technology, Columbia University, Sabancı University, Old Dominion University, RMIT University, Purdue University, General Electric
TL;DR: The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) was organized in conjunction with the MICCAI 2012 and 2013 conferences; twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients.
Abstract: In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients, manually annotated by up to four raters, and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
3,699 citations
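The label fusion that outperformed the individual BRATS algorithms can be sketched as a per-voxel plurality vote. This flat version is a simplification: the benchmark used a hierarchical variant that respects the nesting of tumor sub-regions, and the function name below is illustrative:

```python
import numpy as np

def majority_vote(segmentations):
    """Fuse integer label maps (same shape) by per-voxel plurality vote."""
    stack = np.stack(segmentations)               # (n_raters, *volume_shape)
    n_labels = stack.max() + 1
    # Count votes per label, then pick the winning label at each voxel.
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy 1-D "segmentations" with labels 0 (background), 1, and 2.
s1 = np.array([0, 1, 1, 2, 2])
s2 = np.array([0, 1, 2, 2, 2])
s3 = np.array([0, 0, 1, 2, 0])
print(majority_vote([s1, s2, s3]))  # -> [0 1 1 2 2]
```

Voting exploits the observation reported above: individual algorithms make different, partly uncorrelated errors, so their consensus is more reliable than any single output.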
TL;DR: nnU-Net is a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training, and post-processing, for any new task.
Abstract: Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.
2,040 citations
TL;DR: This special issue aims at gathering recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis, with particular encouragement for papers addressing interesting real-world computer vision and multimedia applications.
Abstract: In the real world, a realistic setting for computer vision or multimedia recognition problems is that some classes contain lots of training data while many classes contain only a small amount. How to use frequent classes to help learn rare classes, for which training data is harder to collect, is therefore an open question. Learning with shared information is an emerging topic in machine learning, computer vision, and multimedia analysis. Different levels of components can be shared during the concept modeling and machine learning stages, such as generic object parts, attributes, transformations, regularization parameters, and training examples. Regarding specific methods, multi-task learning, transfer learning, and deep learning can be seen as different strategies for sharing information. These learning-with-shared-information methods are very effective at solving real-world large-scale problems. This special issue aims at gathering the recent advances in such methods and their applications in computer vision and multimedia analysis. Both state-of-the-art work and literature reviews are welcome for submission. Papers addressing interesting real-world computer vision and multimedia applications are especially encouraged.
Topics of interest include, but are not limited to:
• Multi-task learning or transfer learning for large-scale computer vision and multimedia analysis
• Deep learning for large-scale computer vision and multimedia analysis
• Multi-modal approaches for large-scale computer vision and multimedia analysis
• Different sharing strategies, e.g., sharing generic object parts, sharing attributes, sharing transformations, sharing regularization parameters, and sharing training examples
• Real-world computer vision and multimedia applications based on learning with shared information, e.g., event detection, object recognition, object detection, action recognition, human head pose estimation, object tracking, location-based services, and semantic indexing
• New datasets and metrics to evaluate the benefit of the proposed sharing ability for the specific computer vision or multimedia problem
• Survey papers regarding the topic of learning with shared information
Authors who are unsure whether their planned submission is in scope may contact the guest editors prior to the submission deadline with an abstract, in order to receive feedback.
1,758 citations
Johns Hopkins University, Leipzig University, Korea University, Yale University, West Virginia University, University of Barcelona, St George's, University of London, Indiana University, National Yang-Ming University, Cleveland Clinic, Aarhus University, University at Buffalo, Imperial College London, Primary Children's Hospital, Erasmus University Rotterdam, Yeshiva University, Ghent University, Baylor University, Virginia Commonwealth University, Harvard University, Federal University of São Paulo, University of California, San Francisco, Beaumont Hospital, Boston University, University of Oklahoma, University of Michigan, Carlos III Health Institute, University of Melbourne, Saint Louis University, Université de Montréal, University of Pennsylvania, McGill University, Mayo Clinic, Lahey Hospital & Medical Center, Royal Adelaide Hospital, University of Milan, University of Toronto, Loyola University Chicago, Jikei University School of Medicine
TL;DR: The aim of this 2017 Consensus Statement is to provide a state-of-the-art review of the field of catheter and surgical ablation of atrial fibrillation (AF) and to report the findings of a writing group convened by five international societies.
1,626 citations