Proceedings ArticleDOI

Deep Unsupervised Representation Learning for Abnormal Heart Sound Classification

Abstract
Given the world-wide prevalence of heart disease, the robust and automatic detection of abnormal heart sounds could have profound effects on patient care and outcomes. In this regard, a comparison of conventional and state-of-the-art deep learning based computer audition paradigms is presented herein for the audio classification task of distinguishing normal, mildly abnormal, and moderately/severely abnormal phonocardiogram recordings. In particular, we explore the suitability of deep feature representations learnt by sequence-to-sequence autoencoders based on the auDeep toolkit. Key results, gained on the new Heart Sounds Shenzhen corpus, indicate that a fused combination of deep unsupervised features is well suited to the three-way classification problem, achieving our highest unweighted average recall of 47.9% on the test partition.
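
The metric reported above, unweighted average recall (UAR), can be computed as a minimal sketch, assuming the standard definition of UAR as the macro-average of per-class recalls (so a rare class counts as much as a common one); the label names below are illustrative:

```python
from collections import defaultdict

def unweighted_average_recall(y_true, y_pred):
    """Mean of per-class recalls over the classes present in y_true."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return sum(correct[c] / total[c] for c in total) / len(total)

# Toy three-way example mirroring the task's classes.
y_true = ["normal", "normal", "mild", "mild", "severe", "severe"]
y_pred = ["normal", "mild",   "mild", "mild", "normal", "severe"]
# Per-class recalls: normal 1/2, mild 2/2, severe 1/2 -> UAR = 2/3
print(round(unweighted_average_recall(y_true, y_pred), 3))  # 0.667
```

Because each class contributes equally, a classifier that always predicts the majority class scores only 1/3 UAR on a three-way task, which is why UAR is preferred over accuracy for imbalanced clinical data.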


Citations
Journal ArticleDOI

Scalogram based prediction model for respiratory disorders using optimized convolutional neural networks.

TL;DR: A pre-trained, optimized AlexNet convolutional neural network (CNN) architecture is proposed for predicting respiratory disorders, achieving a significant accuracy improvement over existing state-of-the-art techniques in the literature.
Journal ArticleDOI

Recurrent Neural Network-Based Semantic Variational Autoencoder for Sequence-to-Sequence Learning

TL;DR: The proposed RNN-SVAE learns the mean and standard deviation of a continuous semantic space to take advantage of the variational method; experimental results on three natural language tasks confirm that it yields higher performance than two benchmark models.
Journal ArticleDOI

Natural Language Processing Methods for Acoustic and Landmark Event-Based Features in Speech-Based Depression Detection

TL;DR: A framework is proposed for analysing speech as a sequence of acoustic events, combining acoustic words with speech landmarks (articulation-related speech events), and its application to depression detection is investigated.
Journal ArticleDOI

Machine Listening for Heart Status Monitoring: Introducing and Benchmarking HSS—The Heart Sounds Shenzhen Corpus

TL;DR: This paper introduces the publicly accessible Heart Sounds Shenzhen Corpus (HSS), provides a survey of machine learning work in the area of heart sound recognition, and establishes a benchmark for HSS utilising standard acoustic features and machine learning models.
Proceedings ArticleDOI

Predicting Biological Signals from Speech: Introducing a Novel Multimodal Dataset and Results

TL;DR: The BioSpeech Database is presented: a novel database of audio and biological signals (blood volume pulse (BVP) and skin conductance) from 55 individuals speaking aloud in front of others while having their emotional state annotated in real time.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
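
The adaptive moment estimates mentioned in the TL;DR can be sketched for a single scalar parameter; this is a minimal illustration of the published update rule, and the toy objective, learning rate, and iteration count are assumptions for demonstration only:

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter theta.

    m and v are exponentially decayed estimates of the first and second
    moments of the gradient; dividing by (1 - b**t) corrects the bias
    introduced by initialising them at zero.
    """
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)   # bias-corrected second moment
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimise f(x) = x**2 (gradient 2x), starting from x = 1.0.
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.05)
print(x)  # should have moved close to the minimum at 0
```

A useful property visible here: on the very first step the bias-corrected moments cancel the gradient's scale, so the update magnitude is approximately the learning rate itself.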
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
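
The mechanism behind this TL;DR can be sketched in a few lines; this is a minimal illustration assuming the common "inverted dropout" formulation (scaling at train time rather than test time), not code from the paper:

```python
import random

def dropout(xs, p, training=True, rng=random):
    """Inverted dropout: zero each activation with probability p and scale
    the survivors by 1/(1-p), so the expected activation is unchanged;
    at inference time the layer is the identity."""
    if not training or p == 0.0:
        return list(xs)
    keep = 1.0 - p
    return [x / keep if rng.random() < keep else 0.0 for x in xs]

random.seed(0)
acts = [0.5, 1.0, -0.2, 0.8]
print(dropout(acts, p=0.5))                   # random mask, survivors doubled
print(dropout(acts, p=0.5, training=False))   # identity at inference
```

Randomly silencing units during training prevents co-adaptation: no unit can rely on a specific partner being present, which is the regularising effect the TL;DR describes.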
Journal ArticleDOI

The WEKA data mining software: an update

TL;DR: This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.
Proceedings Article

Sequence to Sequence Learning with Neural Networks

TL;DR: The authors used a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector.
Posted Content

Sequence to Sequence Learning with Neural Networks

TL;DR: This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions about sequence structure, and finds that reversing the order of the words in all source sentences markedly improved the LSTM's performance, because doing so introduced many short-term dependencies between the source and target sentences that made the optimization problem easier.