
Showing papers by "Nicholas Ayache published in 2019"


Journal ArticleDOI
TL;DR: Image compensation successfully realigned feature distributions computed from different CT imaging protocols and should facilitate multicenter radiomic studies.
Abstract: Background Radiomics extracts features from medical images more precisely and more accurately than visual assessment. However, radiomics features are affected by CT scanner parameters such as reconstruction kernel or section thickness, thus obscuring underlying biologically important texture features. Purpose To investigate whether a compensation method could correct for the variations of radiomic feature values caused by using different CT protocols. Materials and Methods Phantom data involving 10 texture patterns and 74 patients in cohorts 1 (19 men; 42 patients; mean age, 60.4 years; September-October 2013) and 2 (16 men; 32 patients; mean age, 62.1 years; January-September 2007) scanned by using different CT protocols were retrospectively included. For any radiomic feature, the compensation approach identified a protocol-specific transformation to express all data in a common space devoid of protocol effects. The differences in statistical distributions between protocols were assessed by using Friedman tests before and after compensation. Principal component analyses were performed on the phantom data to evaluate the ability to distinguish between texture patterns after compensation. Results In the phantom data, the statistical distributions of features differed between protocols for all radiomic features and texture patterns (P < .05). Principal component analysis demonstrated that, after compensation, each texture pattern was no longer displayed as different clusters corresponding to different imaging protocols, unlike what was observed before compensation. The correction for scanner effect was confirmed in patient data with 100% (10 of 10 features for cohort 1) and 98% (87 of 89 features for cohort 2) of P values less than .05 before compensation, compared with 30% (three of 10) and 15% (13 of 89) after compensation.
Conclusion Image compensation successfully realigned feature distributions computed from different CT imaging protocols and should facilitate multicenter radiomic studies. © RSNA, 2019 Online supplemental material is available for this article. See also the editorial by Steiger and Sood in this issue.
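The core idea of the compensation step (a protocol-specific transformation that maps each feature distribution into a common space) can be illustrated with a minimal location-scale realignment. This is a hedged sketch of the general principle, not the authors' exact transformation; the function name and the reference-protocol convention are assumptions for illustration:

```python
import numpy as np

def realign(features_by_protocol, ref="A"):
    """Map each protocol's distribution of one radiomic feature onto the
    reference protocol's mean and standard deviation (location-scale
    realignment). A simplified stand-in for the compensation transform."""
    ref_vals = np.asarray(features_by_protocol[ref], float)
    mu_ref, sd_ref = ref_vals.mean(), ref_vals.std()
    out = {}
    for protocol, vals in features_by_protocol.items():
        v = np.asarray(vals, float)
        # standardize within protocol, then rescale to the reference space
        out[protocol] = (v - v.mean()) / (v.std() + 1e-12) * sd_ref + mu_ref
    return out

# Toy example: the same texture measured under two CT protocols
feats = {"A": [1.0, 2.0, 3.0, 4.0], "B": [11.0, 12.5, 13.0, 14.5]}
aligned = realign(feats)
```

After realignment, the per-protocol means and standard deviations coincide, so a test across protocols would no longer detect a protocol effect on this feature.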

223 citations


Journal ArticleDOI
TL;DR: In this article, a conditional variational autoencoder network is proposed to learn a low-dimensional probabilistic deformation model from data which can be used for the registration and the analysis of deformations.
Abstract: We propose to learn a low-dimensional probabilistic deformation model from data, which can be used for the registration and analysis of deformations. The latent variable model maps similar deformations close to each other in an encoding space. It enables comparing deformations, generating normal or pathological deformations for any new image, and transporting deformations from one image pair to any other image. Our unsupervised method is based on variational inference. In particular, we use a conditional variational autoencoder network and constrain transformations to be symmetric and diffeomorphic by applying a differentiable exponentiation layer with a symmetric loss function. We also present a formulation that includes spatial regularization such as diffusion-based filters. In addition, our framework provides multi-scale velocity field estimations. We evaluated our method on 3-D intra-subject registration using 334 cardiac cine-MRIs. On this dataset, our method showed state-of-the-art performance, with a mean DICE score of 81.2% and a mean Hausdorff distance of 7.3 mm using 32 latent dimensions, compared with three state-of-the-art methods, while also producing more regular deformation fields. The average time per registration was 0.32 s. In addition, we visualized the learned latent space and showed that the encoded deformations can be used to transport deformations and to cluster diseases, with a classification accuracy of 83% after applying a linear projection.
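The diffeomorphic constraint above is commonly implemented by exponentiating a stationary velocity field with scaling and squaring. The following 1-D toy version (plain NumPy, not the paper's differentiable network layer; the function name and step count are illustrative) shows the mechanism:

```python
import numpy as np

def exp_velocity_1d(v, n_steps=6):
    """Integrate a stationary 1-D velocity field by scaling and squaring:
    start from a small deformation id + v / 2**n_steps, then compose the
    map with itself n_steps times. Returns phi with phi[i] = final
    position of a particle starting at grid position i."""
    x = np.arange(len(v), dtype=float)
    phi = x + np.asarray(v, float) / (2 ** n_steps)  # small initial step
    for _ in range(n_steps):
        phi = np.interp(phi, x, phi)  # phi <- phi o phi (linear interp)
    return phi

# Constant velocity 0.5: interior points should move by exactly 0.5
phi = exp_velocity_1d(np.full(20, 0.5))
```

The composed map stays monotonic, hence invertible in 1-D, which is the property the differentiable exponentiation layer preserves in higher dimensions.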

173 citations


Journal ArticleDOI
TL;DR: In this paper, a CNN-based model was proposed to combine the advantages of the short-range 3D context and the long-range 2D context for tumor segmentation in multisequence MR images.

85 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed to use both fully annotated and weakly annotated data to train a deep learning model for segmentation, which achieved a significant improvement in segmentation performance compared to standard supervised learning.
Abstract: Most of the current state-of-the-art methods for tumor segmentation are based on machine learning models trained on manually segmented images. This type of training data is particularly costly, as manual delineation of tumors is not only time-consuming but also requires medical expertise. On the other hand, images with a provided global label (indicating presence or absence of a tumor) are less informative but can be obtained at a substantially lower cost. We propose to use both types of training data (fully annotated and weakly annotated) to train a deep learning model for segmentation. The idea of our approach is to extend segmentation networks with an additional branch performing image-level classification. The model is jointly trained for segmentation and classification tasks to exploit the information contained in weakly annotated images while preventing the network from learning features that are irrelevant for the segmentation task. We evaluate our method on the challenging task of brain tumor segmentation in magnetic resonance images from the Brain Tumor Segmentation 2018 Challenge. We show that the proposed approach provides a significant improvement in segmentation performance compared to standard supervised learning. The observed improvement is proportional to the ratio between weakly annotated and fully annotated images available for training.
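The joint training objective described above (a segmentation branch supervised only on fully annotated cases, plus an image-level classification branch supervised on all cases) can be sketched as a masked combined loss. Names, the weighting, and the plain-NumPy cross-entropy are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def bce(logit, target):
    """Elementwise binary cross-entropy from logits."""
    p = 1.0 / (1.0 + np.exp(-np.asarray(logit, float)))
    return -(target * np.log(p + 1e-9) + (1 - target) * np.log(1 - p + 1e-9))

def joint_loss(seg_logits, seg_masks, cls_logits, cls_labels, has_mask, w=0.5):
    """Segmentation loss over fully annotated samples only, plus a
    classification loss over all samples (weakly annotated included)."""
    seg_terms = [bce(seg_logits[i], seg_masks[i]).mean()
                 for i in range(len(has_mask)) if has_mask[i]]
    seg_loss = np.mean(seg_terms) if seg_terms else 0.0
    cls_loss = np.mean([bce(cls_logits[i], cls_labels[i])
                        for i in range(len(cls_labels))])
    return seg_loss + w * cls_loss

# Sample 0 is fully annotated, sample 1 only has an image-level label
seg_logits = [np.array([2.0, -2.0]), np.array([0.0, 0.0])]
seg_masks = [np.array([1.0, 0.0]), np.array([0.0, 0.0])]
loss = joint_loss(seg_logits, seg_masks, [3.0, -3.0], [1, 0], has_mask=[1, 0])
```

Weakly annotated samples contribute only to the classification term, so their (absent) masks never influence the gradient of the segmentation branch.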

82 citations


Journal ArticleDOI
TL;DR: An explainable, simple, and flexible model for pathology classification based on a novel approach to extract image-derived features characterizing the shape and motion of the heart, with performance comparable to that of the state of the art.

70 citations


Journal ArticleDOI
TL;DR: The fast CRT pacing predictions are a step toward noninvasive CRT patient selection and therapy optimisation, helping clinicians in these difficult tasks.
Abstract: Goal: Noninvasive cardiac electrophysiology (EP) model personalisation has raised interest, for instance for predicting the response to cardiac resynchronization therapy (CRT). However, current methods have restricted clinical applicability, in particular because they are limited to simple situations and computationally expensive. Methods: In this manuscript we propose an approach to tackle these two issues. First, we analyze more complex propagation patterns (multiple onsets and scar tissue) using relevance vector regression and shape dimensionality reduction on a large simulated database. Second, this learning is performed offline on a reference anatomy and transferred onto patient-specific anatomies in order to achieve fast personalized predictions online. Results: We evaluated our method on a dataset of 20 dyssynchrony patients with a total of 120 different cardiac cycles. The comparison with a commercially available electrocardiographic imaging (ECGI) method shows a good identification of the cardiac activation pattern. From the cardiac parameters estimated in sinus rhythm, we predicted five different paced patterns for each patient. The comparison with the body surface potential mappings (BSPM) measured during pacing and the ECGI method indicates a good predictive power. Conclusion: We showed that learning offline from a large simulated database on a reference anatomy was able to capture the main cardiac EP characteristics from noninvasive measurements for fast patient-specific predictions. Significance: The fast CRT pacing predictions are a step toward noninvasive CRT patient selection and therapy optimisation, helping clinicians in these difficult tasks.
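The offline-learning / fast-online-prediction pattern described in the Methods can be sketched with ordinary ridge regression standing in for the relevance vector regression used in the paper; the data shapes, names, and the linear forward model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Offline: fit a regression on a large simulated database mapping EP
# parameters (e.g. onset location, conduction velocity) to a
# low-dimensional representation of the simulated surface signals.
X = rng.normal(size=(500, 4))                 # simulated EP parameters
true_w = np.array([2.0, -1.0, 0.5, 0.0])      # ground-truth linear map
y = X @ true_w + 0.01 * rng.normal(size=500)  # simulated signal coordinate

lam = 1e-3                                    # ridge regularisation
w = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)

# Online: prediction for a new (patient-specific) parameter set is a
# single matrix-vector product, hence very fast.
x_new = np.array([1.0, 0.0, -1.0, 2.0])
pred = x_new @ w
```

All of the expensive work (simulating the database and fitting) happens offline; the per-patient online step is negligible, which is what makes near-real-time pacing predictions possible.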

40 citations


Journal ArticleDOI
TL;DR: A deformation-based framework to jointly model the influence of aging and Alzheimer's disease (AD) on the brain morphological evolution and shows promising results to describe age-related brain diseases over long time scales.

35 citations


Proceedings Article
24 May 2019
TL;DR: This work extends the variational framework of VAE to bring parsimony and interpretability when jointly accounting for latent relationships across multiple channels, and makes it possible to identify the joint effect of age and pathology in describing clinical condition in a large-scale clinical cohort.
Abstract: Interpretable modeling of heterogeneous data channels is essential in medical applications, for example when jointly analyzing clinical scores and medical images. Variational Autoencoders (VAE) are powerful generative models that learn representations of complex data. The flexibility of VAE may come at the expense of a lack of interpretability in describing the joint relationship between heterogeneous data. To tackle this problem, in this work we extend the variational framework of VAE to bring parsimony and interpretability when jointly accounting for latent relationships across multiple channels. In the latent space, this is achieved by constraining the variational distribution of each channel to a common target prior. Parsimonious latent representations are enforced by variational dropout. Experiments on synthetic data show that our model correctly identifies the prescribed latent dimensions and data relationships across multiple testing scenarios. When applied to imaging and clinical data, our method makes it possible to identify the joint effect of age and pathology in describing clinical condition in a large-scale clinical cohort.
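Constraining each channel's variational distribution to a common target prior amounts to adding, per channel, a KL-divergence penalty toward the same shared Gaussian. A minimal sketch of the closed-form Gaussian KL term (notation and function name are assumptions, not the paper's code):

```python
import numpy as np

def kl_to_prior(mu, logvar, mu0=0.0, logvar0=0.0):
    """KL( N(mu, exp(logvar)) || N(mu0, exp(logvar0)) ), summed over
    latent dimensions. Each channel's encoder output would be penalised
    with this term toward the same target prior N(mu0, exp(logvar0))."""
    return 0.5 * np.sum(
        logvar0 - logvar
        + (np.exp(logvar) + (mu - mu0) ** 2) / np.exp(logvar0)
        - 1.0
    )
```

A channel whose posterior already matches the shared prior pays no penalty, so minimising the sum of these terms pulls all channels toward a common latent representation.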

28 citations


Journal ArticleDOI
TL;DR: The use of three-dimensional fully convolutional neural networks is proposed to predict FLAIR pulse sequences from other MRI pulse sequences, and results show that this method is competitive for FLAIR synthesis.
Abstract: Multiple sclerosis (MS) is a white matter (WM) disease characterized by the formation of WM lesions, which can be visualized by magnetic resonance imaging (MRI). The fluid-attenuated inversion recovery (FLAIR) MRI pulse sequence is used clinically and in research for the detection of WM lesions. However, in clinical settings, some MRI pulse sequences can be missing because of various constraints. The use of three-dimensional fully convolutional neural networks is proposed to predict FLAIR pulse sequences from other MRI pulse sequences. In addition, the contribution of each input pulse sequence is evaluated with a pulse sequence-specific saliency map. This approach is tested on a real MS image dataset and evaluated by comparing it with other methods and by assessing the lesion contrast in the synthetic FLAIR pulse sequence. Both the qualitative and quantitative results show that this method is competitive for FLAIR synthesis.

24 citations


Journal ArticleDOI
TL;DR: This work proposes a new approach called Sketcher-Refiner generative adversarial networks (GANs) with specifically designed adversarial loss functions that outperform the state-of-the-art methods in terms of image quality and myelin content prediction and shows potential for clinical management of patients with MS.

24 citations


Journal ArticleDOI
TL;DR: The results derived from this study are a proof of concept that the use of model-based feature augmentation strengthens the performance of a purely image driven learning scheme for the prediction of cardiac ablation targets.
Abstract: Goal: We present a model-based feature augmentation scheme to improve the performance of a learning algorithm for the detection of cardiac radio-frequency ablation (RFA) targets with respect to learning from images alone. Methods: Initially, we compute image features from delayed-enhanced magnetic resonance imaging (DE-MRI) to describe local tissue heterogeneities and feed them into a machine learning framework with uncertainty assessment for the identification of potential ablation targets. Next, we introduce the use of a patient-specific image-based model derived from DE-MRI coupled with the Mitchell–Schaeffer electrophysiology model and a dipole formulation for the simulation of intracardiac electrograms. Relevant features are extracted from these simulated signals which serve as a feature augmentation scheme for the learning algorithm. We assess the classifier's performance when using only image features and with model-based feature augmentation. Results: We obtained average classification scores of 97.2% accuracy, 82.4% sensitivity, and 95.0% positive predictive value by using a model-based feature augmentation scheme. Preliminary results also show that training the algorithm on the closest patient from the database, instead of using all the patients, improves the classification results. Conclusion: We presented a feature augmentation scheme based on biophysical cardiac electrophysiology modeling to increase the prediction scores of a machine learning framework for the RFA target prediction. Significance: The results derived from this study are a proof of concept that the use of model-based feature augmentation strengthens the performance of a purely image driven learning scheme for the prediction of cardiac ablation targets.

Book ChapterDOI
13 Oct 2019
TL;DR: In this paper, an unsupervised generative deformation model is applied within a temporal convolutional network which leads to a diffeomorphic motion model encoded as a low-dimensional motion matrix.
Abstract: We propose to learn a probabilistic motion model from a sequence of images. Besides spatio-temporal registration, our method can predict motion from a limited number of frames, which is useful for temporal super-resolution. The model is based on a probabilistic latent space and a novel temporal dropout training scheme. This enables simulation and interpolation of realistic motion patterns given only one or any subset of frames of a sequence. The encoded motion can also be transported from one subject to another without the need for inter-subject registration. An unsupervised generative deformation model is applied within a temporal convolutional network, which leads to a diffeomorphic motion model, encoded as a low-dimensional motion matrix. Applied to cardiac cine-MRI sequences, we show improved registration accuracy and spatio-temporally smoother deformations compared to three state-of-the-art registration algorithms. We also demonstrate the model's applicability to motion transport by simulating a pathology in a healthy case. Furthermore, we show an improved motion reconstruction from incomplete sequences compared to linear and cubic interpolation.

Journal ArticleDOI
TL;DR: In this paper, an approach based on an algorithm called Iteratively Updated Priors (IUP) is proposed, performing successive personalisations of a full database through maximum a posteriori (MAP) estimation, where the prior at each iteration is set from the distribution of personalised parameters in the database at the previous iteration; at convergence, the estimated parameters of the population lie on a linear subspace of reduced (and possibly sufficient) dimension in which, for each case of the database, there is a (possibly unique) parameter value for which the simulation fits the measurements.
Abstract: Personalised cardiac models are a virtual representation of the patient heart, with parameter values for which the simulation fits the available clinical measurements. Models usually have a large number of parameters while the available data for a given patient are typically limited to a small set of measurements; thus, the parameters cannot be estimated uniquely. This is a practical obstacle for clinical applications, where accurate parameter values can be important. Here, we explore an original approach based on an algorithm called Iteratively Updated Priors (IUP), in which we perform successive personalisations of a full database through maximum a posteriori (MAP) estimation, where the prior probability at an iteration is set from the distribution of personalised parameters in the database at the previous iteration. At the convergence of the algorithm, estimated parameters of the population lie on a linear subspace of reduced (and possibly sufficient) dimension in which, for each case of the database, there is a (possibly unique) parameter value for which the simulation fits the measurements. We first show how this property can help the modeller select a relevant parameter subspace for personalisation. In addition, since the resulting priors in this subspace represent the population statistics in this subspace, they can be used to perform consistent parameter estimation for cases where measurements are possibly different or missing in the database, which we illustrate with the personalisation of a heterogeneous database of 811 cases.
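The IUP loop can be sketched for a single scalar parameter with a brute-force MAP step over a grid; the toy simulator, the grid, and the Gaussian prior/noise models here are illustrative assumptions, not the cardiac model of the paper:

```python
import numpy as np

def iup(measurements, simulate, n_iter=20, sigma_obs=0.1):
    """Iteratively Updated Priors, schematic 1-parameter version.
    Each iteration personalises every case by MAP under the current
    Gaussian prior, then refits the prior to the personalised values."""
    mu, sigma = 0.0, 10.0                    # broad initial prior
    grid = np.linspace(-5.0, 5.0, 2001)      # brute-force MAP search grid
    personalised = []
    for _ in range(n_iter):
        personalised = []
        for y in measurements:
            loglik = -0.5 * ((simulate(grid) - y) / sigma_obs) ** 2
            logprior = -0.5 * ((grid - mu) / sigma) ** 2
            personalised.append(grid[np.argmax(loglik + logprior)])
        mu, sigma = np.mean(personalised), np.std(personalised) + 1e-6
    return mu, sigma, personalised

# Toy forward model: measurement = 2 * parameter
mu, sigma, params = iup([2.0, 4.0], lambda p: 2.0 * p)
```

Over the iterations, the prior tightens around the distribution of personalised parameters in the population, which is what later allows consistent estimation for cases with different or missing measurements.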

Posted Content
TL;DR: An unsupervised generative deformation model is applied within a temporal convolutional network which leads to a diffeomorphic motion model, encoded as a low-dimensional motion matrix, which enables simulation and interpolation of realistic motion patterns given only one or any subset of frames of a sequence.
Abstract: We propose to learn a probabilistic motion model from a sequence of images. Besides spatio-temporal registration, our method can predict motion from a limited number of frames, which is useful for temporal super-resolution. The model is based on a probabilistic latent space and a novel temporal dropout training scheme. This enables simulation and interpolation of realistic motion patterns given only one or any subset of frames of a sequence. The encoded motion can also be transported from one subject to another without the need for inter-subject registration. An unsupervised generative deformation model is applied within a temporal convolutional network, which leads to a diffeomorphic motion model, encoded as a low-dimensional motion matrix. Applied to cardiac cine-MRI sequences, we show improved registration accuracy and spatio-temporally smoother deformations compared to three state-of-the-art registration algorithms. We also demonstrate the model's applicability to motion transport by simulating a pathology in a healthy case. Furthermore, we show an improved motion reconstruction from incomplete sequences compared to linear and cubic interpolation.

Posted Content
TL;DR: Unsupervised analysis of image-derived shape and motion features extracted from 3822 cardiac 4D MRIs of the UK Biobank identifies two small clusters which probably correspond to two pathological categories.
Abstract: We perform unsupervised analysis of image-derived shape and motion features extracted from 3822 cardiac 4D MRIs of the UK Biobank. First, with a previously published feature extraction method based on deep learning models, we extract from each case 9 feature values characterizing both cardiac shape and motion. Second, a feature selection is performed to remove highly correlated feature pairs. Third, clustering is carried out using a Gaussian mixture model on the selected features. After analysis, we identify two small clusters which probably correspond to two pathological categories. Further confirmation using a trained classification model and dimensionality reduction tools is carried out to support this discovery. Moreover, we examine the differences between the other large clusters and compare our measures with the ground truth.
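The second step (removing highly correlated feature pairs before clustering) can be sketched with a greedy correlation filter; the threshold, names, and greedy order are illustrative assumptions, and the subsequent Gaussian-mixture clustering is left out:

```python
import numpy as np

def drop_correlated(X, names, thresh=0.95):
    """Keep a feature only if its absolute correlation with every
    already-kept feature stays below the threshold (greedy filter)."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < thresh for k in keep):
            keep.append(j)
    return X[:, keep], [names[j] for j in keep]

# Toy features: the third column duplicates the first (scaled)
rng = np.random.default_rng(1)
a, b = rng.normal(size=200), rng.normal(size=200)
X = np.column_stack([a, b, 2.0 * a + 0.001 * rng.normal(size=200)])
X_sel, kept = drop_correlated(X, ["shape", "motion", "shape_x2"])
```

Removing near-duplicate features this way avoids a Gaussian mixture fit on a degenerate (nearly singular) covariance.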

Posted Content
TL;DR: In this article, a deformation-based framework is proposed to jointly model the influence of aging and Alzheimer's disease on the brain morphological evolution, which can be used to generate plausible morphological trajectories associated with the disease.
Abstract: In this study we propose a deformation-based framework to jointly model the influence of aging and Alzheimer's disease (AD) on the brain morphological evolution. Our approach combines a spatio-temporal description of both processes into a generative model. A reference morphology is deformed along specific trajectories to match subject specific morphologies. It is used to define two imaging progression markers: 1) a morphological age and 2) a disease score. These markers can be computed locally in any brain region. The approach is evaluated on brain structural magnetic resonance images (MRI) from the ADNI database. The generative model is first estimated on a control population, then, for each subject, the markers are computed for each acquisition. The longitudinal evolution of these markers is then studied in relation with the clinical diagnosis of the subjects and used to generate possible morphological evolutions. In the model, the morphological changes associated with normal aging are mainly found around the ventricles, while the Alzheimer's disease specific changes are more located in the temporal lobe and the hippocampal area. The statistical analysis of these markers highlights differences between clinical conditions even though the inter-subject variability is quite high. In this context, the model can be used to generate plausible morphological trajectories associated with the disease. Our method gives two interpretable scalar imaging biomarkers assessing the effects of aging and disease on brain morphology at the individual and population level. These markers confirm an acceleration of apparent aging for Alzheimer's subjects and can help discriminate clinical conditions even in prodromal stages. More generally, the joint modeling of normal and pathological evolutions shows promising results to describe age-related brain diseases over long time scales.

Patent
Julian Krebs, Hervé Delingette, Nicholas Ayache, Tommaso Mansi, Shun Miao
04 Jul 2019
TL;DR: For registration of medical images with deep learning, a neural network is designed to include a diffeomorphic layer in the architecture to provide for more regularized and realistic registration.
Abstract: For registration of medical images with deep learning, a neural network is designed to include a diffeomorphic layer in the architecture. The network may be trained using supervised or unsupervised approaches. By enforcing the diffeomorphic characteristic in the architecture of the network, the training of the network and application of the learned network may provide for more regularized and realistic registration.

Journal ArticleDOI
TL;DR: A sparse high-dimensional discriminant analysis method which performs a class-specific variable selection through Bayesian sparsity, with an exemplar application to cancer characterization based on medical imaging using radiomic feature extraction.
Abstract: Although the ongoing digital revolution in fields such as chemometrics, genomics or personalized medicine gives hope for considerable progress in these areas, it also provides more and more high-dimensional data to analyze and interpret. A common task in those fields is discriminant analysis, which however may suffer from the high dimensionality of the data. Recent advances, through subspace classification or variable selection methods, have made it possible to reach either excellent classification performances or useful visualizations and interpretations. Obviously, it is of great interest to have both excellent classification accuracies and a meaningful variable selection for interpretation. This work addresses this issue by introducing a subspace discriminant analysis method which performs a class-specific variable selection through Bayesian sparsity. The resulting classification methodology is called sparse high-dimensional discriminant analysis (sHDDA). Contrary to most sparse methods, which are based on the Lasso, sHDDA relies on a Bayesian modeling of the sparsity pattern and avoids the painstaking and sensitive cross-validation of the sparsity level. The main features of sHDDA are illustrated on simulated and real-world data. In particular, we propose an exemplar application to cancer characterization based on medical imaging using radiomic feature extraction.

Posted Content
TL;DR: An efficient algorithm to train neural networks for an end-to-end segmentation of multiple and non-exclusive classes is proposed, addressing problems related to computational costs and missing ground truth segmentations for a subset of classes.
Abstract: Planning of radiotherapy involves accurate segmentation of a large number of organs at risk, i.e. organs for which irradiation doses should be minimized to avoid important side effects of the therapy. We propose a deep learning method for segmentation of organs at risk inside the brain region, from Magnetic Resonance (MR) images. Our system performs segmentation of eight structures: eye, lens, optic nerve, optic chiasm, pituitary gland, hippocampus, brainstem and brain. We propose an efficient algorithm to train neural networks for an end-to-end segmentation of multiple and non-exclusive classes, addressing problems related to computational costs and missing ground truth segmentations for a subset of classes. We enforce anatomical consistency of the result in a postprocessing step; in particular, we introduce a graph-based algorithm for segmentation of the optic nerves, enforcing the connectivity between the eyes and the optic chiasm. We report cross-validated quantitative results on a database of 44 contrast-enhanced T1-weighted MRIs with provided segmentations of the considered organs at risk, which were originally used for radiotherapy planning. In addition, the segmentations produced by our model on an independent test set of 50 MRIs are evaluated by an experienced radiotherapist in order to qualitatively assess their accuracy. The mean distances between produced segmentations and the ground truth ranged from 0.1 mm to 0.7 mm across different organs. A vast majority (96%) of the produced segmentations were found acceptable for radiotherapy planning.
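Training with missing ground truth for a subset of classes, over non-exclusive labels, is typically handled with a per-class sigmoid loss in which absent annotations are masked out. A hedged NumPy sketch of that idea (not the authors' exact algorithm; names and shapes are assumptions):

```python
import numpy as np

def masked_multilabel_bce(logits, targets, label_present):
    """Per-voxel sigmoid binary cross-entropy over non-exclusive classes.
    Classes whose ground truth is missing for this image
    (label_present[c] == 0) contribute nothing to the loss.
    logits, targets: (n_classes, n_voxels); label_present: (n_classes,)."""
    p = 1.0 / (1.0 + np.exp(-logits))
    bce = -(targets * np.log(p + 1e-9) + (1 - targets) * np.log(1 - p + 1e-9))
    w = np.asarray(label_present, float)[:, None]   # per-class mask
    return (bce * w).sum() / max(w.sum() * logits.shape[1], 1.0)

# Two classes, two voxels; ground truth for class 1 is missing
logits = np.array([[2.0, -2.0], [0.5, 0.5]])
targets = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = masked_multilabel_bce(logits, targets, [1, 0])
```

Because the classes are non-exclusive (a voxel can belong to both the optic nerve and the brain), an independent sigmoid per class is used rather than a softmax over mutually exclusive labels.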

01 Jan 2019
TL;DR: A probabilistic generative model for disentangling spatio-temporal disease trajectories from series of high-dimensional brain images makes it possible to disentangle differential temporal progression patterns mapping brain regions key to neurodegeneration, while revealing a disease-specific time scale associated with the clinical diagnosis.
Abstract: We introduce a probabilistic generative model for disentangling spatio-temporal disease trajectories from series of high-dimensional brain images. The model is based on spatio-temporal matrix factorization, where inference on the sources is constrained by anatomically plausible statistical priors. To model realistic trajectories, the temporal sources are defined as monotonic and time-reparametrized Gaussian Processes. To account for the non-stationarity of brain images, we model the spatial sources as sparse codes convolved at multiple scales. The method was tested on synthetic data, comparing favourably with standard blind source separation approaches. The application to large-scale imaging data from a clinical study makes it possible to disentangle differential temporal progression patterns mapping brain regions key to neurodegeneration, while revealing a disease-specific time scale associated with the clinical diagnosis.

Journal ArticleDOI
21 Feb 2019-Cancers
TL;DR: The broad strategic directions and key advances of OncoAge are outlined and some of the issues faced by this consortium are summarized, as well as the short- and long-term perspectives.
Abstract: It is generally accepted that carcinogenesis and aging are two biological processes, which are known to be associated. Notably, the frequency of certain cancers (including lung cancer) increases significantly with the age of patients, and there is now a wealth of data showing that multiple mechanisms leading to malignant transformation and to aging are interconnected, defining the so-called common biology of aging and cancer. OncoAge, a consortium launched in 2015, brings together the multidisciplinary expertise of leading public hospital services and academic laboratories to foster the transfer of scientific knowledge rapidly acquired in the fields of cancer biology and aging into innovative medical practice and silver economy development. This is achieved through the development of shared technical platforms (for research on genome stability, (epi)genetics, biobanking, immunology, metabolism, and artificial intelligence), clinical research projects, clinical trials, and education. OncoAge focuses mainly on two pilot pathologies, which benefit from the expertise of several members, namely lung and head and neck cancers. This review outlines the broad strategic directions and key advances of OncoAge and summarizes some of the issues faced by this consortium, as well as the short- and long-term perspectives.

Posted Content
TL;DR: In this paper, a probabilistic generative model for disentangling spatio-temporal disease trajectories from series of high-dimensional brain images is introduced, where inference on the sources is constrained by anatomically plausible statistical priors.
Abstract: We introduce a probabilistic generative model for disentangling spatio-temporal disease trajectories from series of high-dimensional brain images. The model is based on spatio-temporal matrix factorization, where inference on the sources is constrained by anatomically plausible statistical priors. To model realistic trajectories, the temporal sources are defined as monotonic and time-reparametrized Gaussian Processes. To account for the non-stationarity of brain images, we model the spatial sources as sparse codes convolved at multiple scales. The method was tested on synthetic data, comparing favourably with standard blind source separation approaches. The application to large-scale imaging data from a clinical study makes it possible to disentangle differential temporal progression patterns mapping brain regions key to neurodegeneration, while revealing a disease-specific time scale associated with the clinical diagnosis.

Journal Article
TL;DR: In this article, the relevance of radiomic features based on dual-point 18F-FDOPA PET images to distinguish between recurrence and radiation-induced necrosis was studied.
Abstract: Objectives: In glioblastomas, the differentiation between recurrence and radiation-induced necrosis after initial treatment is often difficult on MRI. 18F-FDOPA PET improves differential diagnosis but is not perfectly accurate. We studied the relevance of radiomic features based on dual-point 18F-FDOPA PET images to distinguish between recurrence and radiation-induced necrosis. Methods: After an initial standard treatment (STUPP protocol ± 2nd line bevacizumab therapy), 78 patients with a glioblastoma and with a clinical suspicion of recurrence were retrospectively included in this study. The final diagnosis was based on pathological data or, if not available, on a 6-month clinical/imaging patient follow up. Two static PET-CT scans were performed 20 and 90 min after the injection of 2 MBq/kg of 18F-FDOPA (mCT-Siemens: OSEM 5 iterations, 24 subsets), called respectively PET-20 and PET-90. The PET images were automatically registered using a rigid registration based on CT images, and we created a subtraction image (PET-90 minus PET-20 = PET-sub). Based on PET-20 images, we segmented the contralateral striatum (VOI-S) using a threshold equal to 50% of SUVmax. We used the same value of threshold to segment the suspicious lesion (VOI-L). For each patient, we copied the VOIs on the PET-90 and PET-sub images. For each VOI and image, we computed 43 radiomic features using LIFEx software, including SUVmax, SUVmean, Metabolic Volume (MV) and TLG, as well as histogram, shape and texture indices (resampling step: bin width of 0.1 SUV). We evaluated the performance to differentiate recurrence and radiation-induced necrosis using high dimensional discriminant analysis (HDDA) based on all features and after a first step of variable selection. The performance was evaluated using the Youden index (Y = sensitivity + specificity - 1).
Within the leave-one-out cross-validation, the selection step consists of choosing, based on the (N-1) training patients, the number of top features (5, 10, 15, 20, 25 or 30) ranked by the p-value of a Wilcoxon test. We determined the number of features that maximizes Y and tested this combination on the Nth patient. We applied this methodology to VOI-L feature values and to the ratio between VOI-L and VOI-S feature values for PET-20, PET-90 and PET-sub, and to the difference of VOI-L feature values between PET-90 and PET-20. We compared the results with the visual assessment of PET-20 images using the conventional “Lizzaraga scale”. Results: 68 patients had tumor recurrence and 10 had radiation necrosis. Visual interpretation led to Y = 0.27 (Se = 97%, Sp = 30%). Using radiomic features, Y ranged between -0.26 and 0.45. The best performance was obtained for VOI-L radiomic features extracted from PET-sub images, with Y = 0.45 (Se = 65%, Sp = 80%), for an average selection of 20 ± 3 features. Studying the correlations among the most frequently selected features highlighted three types of information: features highly correlated with MV (segmented on PET-20), features associated with Entropy, and features linked to Homogeneity. By comparison, the best performance based on PET-20 images was obtained with the ratio of radiomic features between VOI-L and VOI-S (Y = 0.25). The combination of SUVmax and MV based on VOI-L from PET-20 led to a Y of -0.12. Conclusions: In glioblastomas, we demonstrated that, thanks to a machine learning approach designed for low-sample-size/high-dimensional data, it is possible to distinguish recurrence from radiation necrosis with better performance than visual assessment. The best result was obtained from parametric images reflecting the evolution of 18F-FDOPA uptake between 20 and 90 min post-injection. 
Our results should be validated on an independent cohort, but they confirm that modern machine learning methods applied to medical images can improve patient management.
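The nested selection loop described in the abstract (rank features by Wilcoxon rank-sum p-value on the training fold, pick the feature count from a fixed grid that maximizes Youden's J, then score the held-out patient) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it substitutes scikit-learn's linear discriminant analysis for HDDA, and the function and variable names are our own.

```python
import numpy as np
from scipy.stats import ranksums
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def youden(y_true, y_pred):
    """Youden index J = sensitivity + specificity - 1 for binary labels {0, 1}."""
    se = np.mean(y_pred[y_true == 1] == 1)
    sp = np.mean(y_pred[y_true == 0] == 0)
    return se + sp - 1


def loo_with_selection(X, y, k_grid=(5, 10, 15, 20, 25, 30)):
    """Leave-one-out CV with per-fold feature ranking and count selection.

    For each held-out patient: rank features on the N-1 training patients by
    Wilcoxon rank-sum p-value, choose the feature count from k_grid that
    maximizes J on the training fold, then predict the held-out patient.
    """
    n, d = X.shape
    preds = np.empty(n, dtype=int)
    for i in range(n):
        train = np.arange(n) != i
        Xt, yt = X[train], y[train]
        # rank features by two-sample Wilcoxon rank-sum p-value
        pvals = np.array([ranksums(Xt[yt == 1, j], Xt[yt == 0, j]).pvalue
                          for j in range(d)])
        order = np.argsort(pvals)
        # pick the feature count that maximizes J on the training fold
        best_k, best_j = k_grid[0], -np.inf
        for k in k_grid:
            k = min(k, d)
            clf = LinearDiscriminantAnalysis().fit(Xt[:, order[:k]], yt)
            j = youden(yt, clf.predict(Xt[:, order[:k]]))
            if j > best_j:
                best_j, best_k = j, k
        clf = LinearDiscriminantAnalysis().fit(Xt[:, order[:best_k]], yt)
        preds[i] = clf.predict(X[i:i + 1, order[:best_k]])[0]
    return preds, youden(y, preds)
```

Because the ranking and the choice of feature count are redone inside every fold, the held-out patient never influences the selection, which is what keeps the reported Y honest on small cohorts.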

Journal ArticleDOI
01 Dec 2019
TL;DR: In this article, the authors present an approach consisting of developing algorithms that integrate heterogeneous data (imaging, biological data, sensors, and clinical data).
Abstract: Today there is a growing need for harmonization and innovation in cognitive and behavioral assessments. Current tools are sometimes too invasive, too costly, or too time-consuming for a routine consultation. New, ecologically valid and sensitive methods could therefore be useful to improve accessibility as first-line screening in populations with neuropsychiatric disorders. Information and communication technologies (ICT) are non-invasive solutions that have proven useful for identifying subjects at the earliest clinical stages of neurodegenerative diseases [1] , [2] . Current research focuses on the value of ICT at the preclinical stage, as a marker of progression, in therapeutic trials, and in psychiatric disorders [3] , [4] . This session aims to illustrate usage scenarios for innovative digital tools that could be used for large-scale screening and for patient follow-up in clinical trials. Lea Domain, psychiatry resident at the Guillaume-Regnier hospital center in Rennes, will present the preliminary results of the DEFLUENCE study. This study aims to determine whether qualitative alterations in verbal fluency tests, measured automatically, can constitute a prognostic biomarker of the course of depression. Alexandra Konig, neuropsychologist and researcher at the CoBTeK laboratory, will present the Δelta tablet application, which allows clinicians to administer and automatically analyze classical neuropsychological tests, even remotely, using artificial intelligence (AI) and automated analysis of facial expression and voice. An example of a computerized psycholinguistic analysis of a clinical interview will also be presented. 
Clement Abi Nader, PhD student in the Epione team at INRIA, will present work on modeling the progression of Alzheimer's disease from longitudinal clinical data. This approach consists of developing algorithms that integrate heterogeneous data (imaging, biological data, sensors, clinical data).

07 Jul 2019
TL;DR: This paper defines a novel classification method for Barrett’s images based on fractal textures that performs particularly well on pre-cancer stages with an overall accuracy of 89.2%.
Abstract: Barrett’s esophagus is a complication of gastroesophageal reflux disease in which the esophageal epithelium undergoes a transformation carrying a high risk of progression to adenocarcinoma. Surveillance of changes in the esophageal mucosa is essential to estimate cancer progression. Confocal laser endomicroscopy is a novel imaging technique allowing physicians to perform in-vivo, real-time histological analysis, decreasing the number of biopsies needed for diagnosis. This paper uses the notion of a local density function to extract characteristic tissue morphologies, which allows us to define a novel classification method for Barrett’s images based on fractal textures. The method performs particularly well on pre-cancer stages, with an overall accuracy of 89.2%.
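As a loose illustration of the kind of fractal texture measurement this line of work builds on, the sketch below estimates the box-counting dimension of a binary tissue mask, which can serve as one texture feature among others. This is a generic estimator under our own assumptions (function name, box sizes), not the authors' local-density-function method.

```python
import numpy as np


def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a 2-D binary mask.

    Counts the number of occupied s x s boxes N(s) at each box size s and
    fits log N(s) ~ -D log s; the magnitude of the slope D is the estimate.
    """
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        # trim the image so it tiles exactly into s x s boxes
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
        # a box is "occupied" if it contains any foreground pixel
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

A filled region yields a dimension near 2 and a thin curve near 1; textures produced by irregular glandular structures typically fall in between, which is what makes such exponents usable as discriminative features.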