Open Access Journal ArticleDOI

Optimal Feature Search for Vigilance Estimation Using Deep Reinforcement Learning

TLDR
A deep Q-network (DQN) algorithm was designed, using conventional feature engineering and deep convolutional neural network methods, to extract the optimal features from electroencephalogram and electrocardiogram measurements, and the results suggest that the DQN could be applied to investigating biomarkers for physiological responses and optimizing the classification system to reduce the input resources.
Abstract
A low level of vigilance is one of the main reasons for traffic and industrial accidents. We conducted experiments to evoke a low level of vigilance and record physiological data through single-channel electroencephalogram (EEG) and electrocardiogram (ECG) measurements. In this study, a deep Q-network (DQN) algorithm was designed, using conventional feature engineering and deep convolutional neural network (CNN) methods, to extract the optimal features. The DQN yielded the optimal features: two CNN features from ECG and two conventional features from EEG. The ECG features were the more significant for tracking transitions within the alertness continuum with the DQN. Classification performed with this small number of features gave results similar to those obtained using all of the features. This suggests that the DQN could be applied to investigating biomarkers of physiological responses and to optimizing the classification system to reduce the input resources.
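The paper does not include code here; the following is a minimal sketch of how a DQN can search a binary feature-selection mask in the spirit described above. The toggle actions, the stand-in reward function, the feature count, and the layer sizes are illustrative assumptions, not the authors' implementation.

    # Hedged sketch (not the authors' code): a toy deep Q-network that searches a
    # binary feature-selection mask, assuming the reward is the improvement in
    # accuracy of a downstream vigilance classifier. The classifier and the
    # candidate feature pool are synthetic stand-ins.
    import numpy as np
    import torch
    import torch.nn as nn

    N_FEATURES = 8                      # hypothetical pool of EEG/ECG candidate features
    N_ACTIONS = N_FEATURES              # action i toggles feature i in the mask

    def reward_fn(mask: np.ndarray) -> float:
        """Stand-in for classifier accuracy on the selected features;
        in the paper this would come from the vigilance classifier."""
        informative = np.array([1, 1, 0, 0, 1, 0, 0, 1])   # hypothetical ground truth
        hit = (mask * informative).sum() / informative.sum()
        penalty = 0.05 * mask.sum()                        # prefer few features
        return float(hit - penalty)

    q_net = nn.Sequential(nn.Linear(N_FEATURES, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))
    opt = torch.optim.Adam(q_net.parameters(), lr=1e-2)
    gamma, eps = 0.9, 0.2                                  # discount and exploration rate

    for episode in range(200):
        mask = np.zeros(N_FEATURES, dtype=np.float32)      # start with no features selected
        for step in range(N_FEATURES):
            q_values = q_net(torch.from_numpy(mask))
            action = (np.random.randint(N_ACTIONS) if np.random.rand() < eps
                      else int(q_values.argmax()))
            next_mask = mask.copy()
            next_mask[action] = 1.0 - next_mask[action]    # toggle the chosen feature
            r = reward_fn(next_mask) - reward_fn(mask)     # improvement as reward
            with torch.no_grad():
                target = r + gamma * q_net(torch.from_numpy(next_mask)).max()
            loss = (q_values[action] - target) ** 2        # one-step Q-learning update
            opt.zero_grad(); loss.backward(); opt.step()
            mask = next_mask

    print("selected feature mask:", mask)

A replay buffer and target network, standard in full DQN implementations, are omitted here for brevity.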


Citations
Journal ArticleDOI

EEG Signal Multichannel Frequency-Domain Ratio Indices for Drowsiness Detection Based on Multicriteria Optimization.

TL;DR: In this paper, the authors used an evolutionary metaheuristic algorithm to find a near-optimal set of features and channels from which the indices are calculated, and showed that drowsiness is best described by the powers in the delta and alpha bands.
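For illustration, a short sketch of the kind of frequency-domain ratio indices referred to above. The band edges and the example ratios are common conventions in the drowsiness literature and are assumptions here, not the set actually selected by the paper's metaheuristic.

    # Hedged sketch: EEG band-power ratio indices computed from Welch's PSD.
    import numpy as np
    from scipy.signal import welch

    FS = 256                                   # hypothetical sampling rate in Hz
    BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_powers(eeg: np.ndarray, fs: int = FS) -> dict:
        """Absolute band powers of one EEG channel via Welch's PSD."""
        freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
        return {name: np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                               freqs[(freqs >= lo) & (freqs < hi)])
                for name, (lo, hi) in BANDS.items()}

    def ratio_indices(eeg: np.ndarray) -> dict:
        """Example frequency-domain ratio indices (illustrative choices)."""
        p = band_powers(eeg)
        return {
            "alpha/beta": p["alpha"] / p["beta"],
            "(theta+alpha)/beta": (p["theta"] + p["alpha"]) / p["beta"],
            "delta/alpha": p["delta"] / p["alpha"],
        }

    # Example on synthetic noise standing in for a 10 s single-channel EEG segment
    segment = np.random.randn(FS * 10)
    print(ratio_indices(segment))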
Journal ArticleDOI

Intelligent Feature Selection for ECG-Based Personal Authentication Using Deep Reinforcement Learning

TL;DR: In this article, the optimal features of electrocardiogram (ECG) signals for implementing a personal authentication system were investigated using a reinforcement learning (RL) algorithm, and the deep learning architecture in the RL agent was constructed automatically with an optimization approach called Bayesian optimization hyperband.
Book ChapterDOI

Reinforcement learning in EEG-based human-robot interaction

TL;DR: In this chapter, the authors introduce the basics of reinforcement learning, its use in EEG classification, and RL-based robot learning in EEG-based human-robot interaction.
References
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
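As a pointer to how this regularizer is typically applied in the kind of networks used above, here is a minimal sketch; the layer sizes and dropout rate are illustrative, not taken from any of the papers listed.

    # Hedged sketch: dropout in a small classifier head for physiological signals.
    import torch.nn as nn

    classifier = nn.Sequential(
        nn.Linear(64, 128),
        nn.ReLU(),
        nn.Dropout(p=0.5),      # randomly zeroes 50% of activations during training
        nn.Linear(128, 2),      # e.g. alert vs. low-vigilance
    )
    classifier.train()          # dropout active during training
    classifier.eval()           # dropout disabled at inference time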
Journal ArticleDOI

Human-level control through deep reinforcement learning

TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Journal ArticleDOI

Mastering the game of Go with deep neural networks and tree search

TL;DR: Using this search algorithm, the program AlphaGo achieved a 99.8% winning rate against other Go programs and defeated the human European Go champion by 5 games to 0, the first time that a computer program has defeated a human professional player in the full-sized game of Go.
Book

Markov Decision Processes: Discrete Stochastic Dynamic Programming

TL;DR: Puterman provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models, focusing primarily on infinite-horizon discrete-time models and models with discrete state spaces, while also examining models with arbitrary state spaces, finite-horizon models, and continuous-time discrete-state models.
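For reference, the infinite-horizon discounted model treated in this book, and underlying the DQN above, is usually summarized by the Bellman optimality equation; the notation below is standard, not quoted from the book:

    v^*(s) = \max_{a \in A(s)} \Big[ r(s,a) + \gamma \sum_{s' \in S} p(s' \mid s,a)\, v^*(s') \Big]

Here gamma is the discount factor, r(s,a) the expected reward, and p(s'|s,a) the transition probability; the DQN approximates the corresponding action-value function with a neural network.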
Journal ArticleDOI

PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals.

TL;DR: The newly inaugurated Research Resource for Complex Physiologic Signals, comprising PhysioBank, PhysioToolkit, and PhysioNet, was created under the auspices of the National Center for Research Resources (NCRR).