Open Access Journal Article (DOI)

Deep learning models for electrocardiograms are susceptible to adversarial attack.

TLDR
A method to construct smoothed adversarial examples for ECG tracings that are invisible to human expert evaluation is developed, and it is shown that a deep learning model for arrhythmia detection from single-lead ECG [6] is vulnerable to this type of attack.
Abstract
Electrocardiogram (ECG) acquisition is increasingly widespread in medical and commercial devices, necessitating the development of automated interpretation strategies. Recently, deep neural networks have been used to automatically analyze ECG tracings and outperform physicians in detecting certain rhythm irregularities [1]. However, deep learning classifiers are susceptible to adversarial examples, which are created from raw data to fool the classifier such that it assigns the example to the wrong class, but which are undetectable to the human eye [2,3]. Adversarial examples have also been created for medical-related tasks [4,5]. However, traditional attack methods to create adversarial examples do not extend directly to ECG signals, as such methods introduce square-wave artefacts that are not physiologically plausible. Here we develop a method to construct smoothed adversarial examples for ECG tracings that are invisible to human expert evaluation and show that a deep learning model for arrhythmia detection from single-lead ECG [6] is vulnerable to this type of attack. Moreover, we provide a general technique for collating and perturbing known adversarial examples to create multiple new ones. The susceptibility of deep learning ECG algorithms to adversarial misclassification implies that care should be taken when evaluating these models on ECGs that may have been altered, particularly when incentives for causing misclassification exist. The development of an algorithm that can imperceptibly manipulate electrocardiographic data to fool a deep learning model for diagnosing cardiac arrhythmia highlights the potential vulnerability of artificial intelligence-enabled diagnosis to adversarial attacks.
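The paper's own attack is not reproduced on this page; the sketch below only illustrates the general idea the abstract describes — a projected-gradient-style perturbation of a 1-D tracing in which the added noise is smoothed at each step (here by convolution with a Gaussian kernel) so it does not show the square-wave artefacts of a plain L-infinity attack. The tiny stand-in classifier, kernel width, epsilon and step size are all illustrative assumptions, not the architecture or hyperparameters used in the paper.

```python
# Illustrative sketch only: PGD-style attack on a 1-D ECG classifier where the
# perturbation is convolved with a Gaussian kernel at every step, keeping the
# adversarial noise smooth. All settings below are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(width: int = 31, sigma: float = 5.0) -> torch.Tensor:
    """1-D Gaussian kernel used to smooth the perturbation."""
    x = torch.arange(width, dtype=torch.float32) - width // 2
    k = torch.exp(-x ** 2 / (2 * sigma ** 2))
    return (k / k.sum()).view(1, 1, -1)

def smooth_pgd_attack(model, ecg, label, eps=0.1, step=0.01, iters=40, kernel=None):
    """Return a smoothed adversarial version of `ecg` (shape: [batch, 1, samples])."""
    kernel = gaussian_kernel() if kernel is None else kernel
    delta = torch.zeros_like(ecg, requires_grad=True)
    for _ in range(iters):
        loss = F.cross_entropy(model(ecg + delta), label)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()                      # ascent on the loss
            delta.copy_(F.conv1d(delta, kernel,
                                 padding=kernel.shape[-1] // 2))   # smooth the noise
            delta.clamp_(-eps, eps)                                # keep it small
        delta.grad.zero_()
    return (ecg + delta).detach()

if __name__ == "__main__":
    # Stand-in classifier: 2 rhythm classes from a single-lead, 1000-sample tracing.
    model = nn.Sequential(nn.Conv1d(1, 8, 16), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
                          nn.Flatten(), nn.Linear(8, 2))
    ecg = torch.randn(1, 1, 1000)          # synthetic tracing, for illustration only
    adv = smooth_pgd_attack(model, ecg, torch.tensor([0]))
    print((adv - ecg).abs().max())         # perturbation stays within eps
```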


Citations
Journal Article (DOI)

Deep learning and the electrocardiogram: review of the current state-of-the-art.

TL;DR: In the past decade, deep learning, a subset of artificial intelligence and machine learning, has been used to identify patterns in big healthcare datasets for disease phenotyping, event prediction, and complex decision making, as discussed by the authors.
Journal Article (DOI)

Application of artificial intelligence to the electrocardiogram.

TL;DR: This review describes the mathematical background behind supervised AI algorithms and discusses selected AI ECG cardiac screening algorithms, including those for the detection of left ventricular dysfunction, of episodic atrial fibrillation from a tracing recorded during normal sinus rhythm, and of other structural and valvular diseases.
Posted Content

CLOCS: Contrastive Learning of Cardiac Signals Across Space, Time, and Patients

TL;DR: It is shown that CLOCS consistently outperforms the state-of-the-art methods, BYOL and SimCLR, when performing a linear evaluation of, and fine-tuning on, downstream tasks.
References
Journal Article (DOI)

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition; gradient-based learning is used to synthesize a complex decision surface that can classify high-dimensional patterns such as handwritten characters.
Dissertation

Learning Multiple Layers of Features from Tiny Images

TL;DR: In this paper, the authors describe how to train a multi-layer generative model of natural images using a dataset of millions of tiny colour images.
Proceedings Article

Intriguing properties of neural networks

TL;DR: It is found that there is no distinction between individual high-level units and random linear combinations of high-level units according to various methods of unit analysis, and it is suggested that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
Proceedings Article

Explaining and Harnessing Adversarial Examples

TL;DR: It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature; this view is supported by new quantitative results and provides the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
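The linearity argument in this reference motivates the fast gradient sign method (FGSM): perturb the input one step in the direction of the sign of the loss gradient. A minimal sketch follows; the `model`, inputs and `eps` are placeholders supplied by the caller, not values from the paper.

```python
# Minimal FGSM sketch: a single signed-gradient step that maximizes a
# first-order (linear) approximation of the loss around the input.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()
```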
Posted Content

Towards Deep Learning Models Resistant to Adversarial Attacks

TL;DR: This work studies the adversarial robustness of neural networks through the lens of robust optimization and suggests security against a first-order adversary as a natural and broad security guarantee.
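The first-order adversary studied in this reference is typically instantiated as projected gradient descent (PGD): repeated signed-gradient steps, each followed by projection back into an L-infinity ball around the original input. The sketch below illustrates that loop; the step size, radius and iteration count are illustrative assumptions, not the paper's settings.

```python
# PGD sketch: iterated FGSM-style steps with projection into the eps-ball
# around the original input. Hyperparameters are illustrative only.
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=0.03, alpha=0.01, iters=10):
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()            # ascent step
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)   # project into eps-ball
        x_adv = x_adv.detach()
    return x_adv
```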