
Showing papers by "Lucian Mihai Itu published in 2021"


Journal ArticleDOI
TL;DR: In this paper, the authors present an approach for assessing AI for predicting treatment response in triple-negative breast cancer (TNBC), using real-world data and molecular-omics data from clinical data warehouses and biobanks.
Abstract: BACKGROUND Artificial intelligence (AI) has the potential to transform our healthcare systems significantly. New AI technologies based on machine learning approaches should play a key role in clinical decision-making in the future. However, their implementation in health care settings remains limited, mostly due to a lack of robust validation procedures. There is a need to develop reliable assessment frameworks for the clinical validation of AI. We present here an approach for assessing AI for predicting treatment response in triple-negative breast cancer (TNBC), using real-world data and molecular-omics data from clinical data warehouses and biobanks. METHODS The European "ITFoC (Information Technology for the Future Of Cancer)" consortium designed a framework for the clinical validation of AI technologies for predicting treatment response in oncology. RESULTS This framework is based on seven key steps specifying: (1) the intended use of AI, (2) the target population, (3) the timing of AI evaluation, (4) the datasets used for evaluation, (5) the procedures used for ensuring data safety (including data quality, privacy and security), (6) the metrics used for measuring performance, and (7) the procedures used to ensure that the AI is explainable. This framework forms the basis of a validation platform that we are building for the "ITFoC Challenge". This community-wide competition will make it possible to assess and compare AI algorithms for predicting the response to TNBC treatments with external real-world datasets. CONCLUSIONS The predictive performance and safety of AI technologies must be assessed in a robust, unbiased and transparent manner before their implementation in healthcare settings. We believe that the considerations of the ITFoC consortium will contribute to the safe transfer and implementation of AI in clinical settings, in the context of precision oncology and personalized care.

13 citations


Journal ArticleDOI
TL;DR: An encoding method is proposed that enables typical HE schemes to operate on real-valued numbers of arbitrary precision and size and is evaluated on two real-world scenarios relying on EEG signals: seizure detection and prediction of predisposition to alcoholism.
Abstract: Data privacy is a major concern when accessing and processing sensitive medical data. A promising approach among privacy-preserving techniques is homomorphic encryption (HE), which allows for computations to be performed on encrypted data. Currently, HE still faces practical limitations related to high computational complexity, noise accumulation, and sole applicability at the bit or small-integer level. We propose herein an encoding method that enables typical HE schemes to operate on real-valued numbers of arbitrary precision and size. The approach is evaluated on two real-world scenarios relying on EEG signals: seizure detection and prediction of predisposition to alcoholism. A supervised machine learning-based approach is formulated, and training is performed using a direct (non-iterative) fitting method that requires a fixed and deterministic number of steps. Experiments on synthetic data of varying size and complexity are performed to determine the impact on runtime and error accumulation. The computational time for training the models increases but remains manageable, while the inference time remains in the order of milliseconds. The prediction performance of the models operating on encoded and encrypted data is comparable to that of standard models operating on plaintext data.

10 citations


Journal ArticleDOI
TL;DR: A privacy-preserving cloud-based machine learning framework for wearable devices, a library for fast implementation and deployment of deep learning-based solutions on homomorphically encrypted data, and a proof-of-concept study for atrial fibrillation detection from electrocardiograms recorded on a wearable device are proposed.
Abstract: Medical wearable devices monitor health data and, coupled with data analytics, cloud computing, and artificial intelligence (AI), enable early detection of disease. Privacy issues arise when personal health information is sent or processed outside the device. We propose a framework that ensures the privacy and integrity of personal medical data while performing AI-based homomorphically encrypted data analytics in the cloud. The main contributions are: (i) a privacy-preserving cloud-based machine learning framework for wearable devices, (ii) CipherML—a library for fast implementation and deployment of deep learning-based solutions on homomorphically encrypted data, and (iii) a proof-of-concept study for atrial fibrillation (AF) detection from electrocardiograms recorded on a wearable device. In the context of AF detection, two approaches are considered: a multi-layer perceptron (MLP) which receives as input the ECG features computed and encrypted on the wearable device, and an end-to-end deep convolutional neural network (1D-CNN), which receives as input the encrypted raw ECG data. The CNN model achieves a lower mean F1-score than the hand-crafted feature-based model, illustrating the benefit of hand-crafted features over deep convolutional neural networks, especially in settings with limited training data. Compared to state-of-the-art results, the two privacy-preserving approaches lead, with reasonable computational overhead, to slightly lower but still similar results: the small performance drop is caused by limitations related to the use of homomorphically encrypted data instead of plaintext data. The findings highlight the potential of the proposed framework to enhance the functionality of wearables through privacy-preserving AI, by providing, within a reasonable amount of time, results equivalent to those achieved without privacy-enhancing mechanisms. While the chosen homomorphic encryption scheme prioritizes performance and utility, certain security shortcomings remain open for future development.
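The abstract does not specify the MLP's architecture; HE-friendly networks typically replace non-polynomial activations (ReLU, sigmoid) with low-degree polynomials so the whole forward pass reduces to additions and multiplications. The square activation and the layer sizes below are a common illustrative choice, not necessarily what CipherML uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    """Single-hidden-layer MLP with a square activation — HE-friendly
    because the forward pass uses only additions and multiplications."""
    h = (x @ W1 + b1) ** 2   # polynomial activation (illustrative choice)
    return h @ W2 + b2       # linear output layer

# Hypothetical shapes: 8 hand-crafted ECG features -> 16 hidden -> 1 AF score
W1 = rng.normal(size=(8, 16)); b1 = rng.normal(size=16)
W2 = rng.normal(size=(16, 1)); b2 = rng.normal(size=1)

score = mlp_forward(rng.normal(size=(1, 8)), W1, b1, W2, b2)
print(score.shape)  # (1, 1)
```

In the encrypted setting, the same arithmetic would be applied to ciphertexts of the features, which is why keeping the activation polynomial (and the network shallow) matters for noise growth and runtime.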

4 citations


Journal ArticleDOI
TL;DR: Explainability and interpretability have become core requirements for AI algorithms, to ensure that the rationale behind output inference can be revealed; the paper reviews recent developments addressing these challenges and discusses the clinical impact of proposed solutions.
Abstract: Medical imaging provides valuable input for managing cardiovascular disease (CVD), ranging from risk assessment to diagnosis, therapy planning and follow-up. Artificial intelligence (AI) based medical image analysis algorithms nowadays provide state-of-the-art results in CVD management, mainly due to the increase in computational power and data storage capacities. Various challenges remain to be addressed to speed up the adoption of AI based solutions in routine CVD management. Although medical imaging and health data in general are abundant, the access and transfer of such data are difficult to realize due to ethical considerations. Hence, AI algorithms are often trained on relatively small datasets, thus limiting their robustness, and potentially leading to biased or skewed results for certain patient or pathology sub-groups. Furthermore, explainability and interpretability have become core requirements for AI algorithms, to ensure that the rationale behind output inference can be revealed. The paper focuses on recent developments related to these two challenges, discusses the clinical impact of proposed solutions, and provides conclusions for further research and development. It also presents examples related to the diagnosis of stable coronary artery disease, a whole-body circulation model for the assessment of structural heart disease, and the diagnosis and treatment planning of aortic coarctation, a congenital heart disease.

4 citations


Journal ArticleDOI
TL;DR: In this paper, a framework is proposed for automatically and robustly personalizing aortic hemodynamic computations for the assessment of pre- and post-intervention CoA patients from 3D rotational angiography (3DRA) data.
Abstract: Coarctation of Aorta (CoA) is a congenital disease consisting of a narrowing that obstructs the systemic blood flow. This proof-of-concept study aimed to develop a framework for automatically and robustly personalizing aortic hemodynamic computations for the assessment of pre- and post-intervention CoA patients from 3D rotational angiography (3DRA) data. We propose a framework that combines hemodynamic modelling and machine learning (ML) based techniques, and relies on 3DRA data for non-invasive pressure computation in CoA patients. The key features of our framework are a parameter estimation method for calibrating inlet and outlet boundary conditions, and regional mechanical wall properties, to ensure that the computational results match the patient-specific measurements, and an improved ML based pressure drop model capable of predicting the instantaneous pressure drop for a wide range of flow conditions and anatomical CoA variations. We evaluated the framework by investigating 6 patient datasets, under pre- and post-operative settings, and, since all calibration procedures converged successfully, the proposed approach is deemed robust. We compared the peak-to-peak and the cycle-averaged pressure drop computed using the reduced-order hemodynamic model with the catheter based measurements, before and after virtual and actual stenting. The mean absolute error for the peak-to-peak pressure drop, which is the most relevant measure for clinical decision making, was 2.98 mmHg for the pre- and 2.11 mmHg for the post-operative setting. Moreover, the proposed method is computationally efficient: the average execution time was only 2.1 ± 0.8 minutes on a standard hardware configuration. The use of 3DRA for hemodynamic modelling could allow for a complete hemodynamic assessment, as well as virtual interventions or surgeries and predictive modeling. However, before such an approach can be used routinely, significant advancements are required for automating the workflow.
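The abstract describes calibrating boundary conditions until computed results match patient-specific measurements, without giving the algorithm. A minimal sketch of that idea — tuning a single outlet parameter so a toy surrogate model reproduces a measured pressure drop — is shown below; the model, bisection scheme, and all numbers are hypothetical stand-ins, not the paper's reduced-order model.

```python
# Illustrative calibration loop: tune an outlet resistance R so that a
# stand-in hemodynamic model reproduces a measured pressure drop.
# The quadratic flow-pressure surrogate below is a toy, not the paper's model.

def model_pressure_drop(R, flow=5.0):
    """Toy surrogate: pressure drop as a function of outlet resistance R."""
    return R * flow + 0.02 * R * flow**2

def calibrate(target_dp, lo=0.0, hi=100.0, tol=1e-6):
    """Bisect on R until the computed drop matches the measurement,
    exploiting the monotone dependence of the drop on R."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if model_pressure_drop(mid) < target_dp:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

R = calibrate(target_dp=30.0)           # e.g. catheter-measured drop in mmHg
print(round(model_pressure_drop(R), 3))  # ≈ 30.0
```

The actual framework calibrates several inlet/outlet conditions and regional wall properties simultaneously, so a multivariate estimation method would replace the scalar bisection here; the structure — iterate the forward model until computed and measured quantities agree — is the same.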