Author

Ye Yuan

Bio: Ye Yuan is an academic researcher from Beijing University of Technology. The author has contributed to research in topics: Deep learning & Activity recognition. The author has an h-index of 17 and has co-authored 24 publications receiving 1,075 citations.

Papers
Proceedings ArticleDOI
19 Jul 2018
TL;DR: An end-to-end framework named Event Adversarial Neural Network (EANN), which can derive event-invariant features and thus benefit the detection of fake news on newly arrived events, is proposed.
Abstract: As news reading on social media becomes more and more popular, fake news has become a major issue concerning the public and government. Fake news can take advantage of multimedia content to mislead readers and spread widely, which can cause negative effects or even manipulate public events. One of the unique challenges for fake news detection on social media is how to identify fake news on newly emerged events. Unfortunately, most of the existing approaches can hardly handle this challenge, since they tend to learn event-specific features that cannot be transferred to unseen events. In order to address this issue, we propose an end-to-end framework named Event Adversarial Neural Network (EANN), which can derive event-invariant features and thus benefit the detection of fake news on newly arrived events. It consists of three main components: the multi-modal feature extractor, the fake news detector, and the event discriminator. The multi-modal feature extractor is responsible for extracting the textual and visual features from posts. It cooperates with the fake news detector to learn the discriminable representation for the detection of fake news. The role of the event discriminator is to remove event-specific features and keep features shared among events. Extensive experiments are conducted on multimedia datasets collected from Weibo and Twitter. The experimental results show that our proposed EANN model can outperform the state-of-the-art methods and learn transferable feature representations.
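
The adversarial component described above can be illustrated with a short, hypothetical sketch: a multi-modal feature extractor feeds both a fake-news detector and an event discriminator, with a gradient-reversal layer pushing the extractor toward event-invariant features. Layer sizes, fusion by simple concatenation, and the input feature dimensions are assumptions for illustration, not the paper's exact architecture.

```python
# Minimal sketch of an EANN-style adversarial setup (illustrative only).
import torch
import torch.nn as nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity on the forward pass; flips the gradient sign on backward,
    so the feature extractor is pushed toward event-invariant features."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class EANNSketch(nn.Module):
    def __init__(self, text_dim=300, img_dim=512, hidden=128, n_events=10):
        super().__init__()
        # Multi-modal feature extractor: project text and image features,
        # then fuse by concatenation (an assumed, simplified fusion).
        self.text_fc = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.img_fc = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        # Fake news detector: binary classifier on the fused representation.
        self.detector = nn.Linear(2 * hidden, 2)
        # Event discriminator: predicts the event id; the reversed gradient
        # encourages the extractor to discard event-specific cues.
        self.event_disc = nn.Linear(2 * hidden, n_events)

    def forward(self, text_feat, img_feat, lambd=1.0):
        fused = torch.cat([self.text_fc(text_feat), self.img_fc(img_feat)], dim=1)
        fake_logits = self.detector(fused)
        event_logits = self.event_disc(GradReverse.apply(fused, lambd))
        return fake_logits, event_logits
```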

627 citations

Proceedings ArticleDOI
15 Oct 2018
TL;DR: EI, a deep-learning-based device-free activity recognition framework that can remove the environment- and subject-specific information contained in activity data and extract environment/subject-independent features shared by the data collected on different subjects under different environments, is proposed.
Abstract: Driven by a wide range of real-world applications, significant efforts have recently been made to explore device-free human activity recognition techniques that utilize the information collected by various wireless infrastructures to infer human activities without the need for the monitored subject to carry a dedicated device. Existing device-free human activity recognition approaches and systems, though yielding reasonably good performance in certain cases, are faced with a major challenge. The wireless signals arriving at the receiving devices usually carry substantial information that is specific to the environment where the activities are recorded and the human subject who conducts the activities. For this reason, an activity recognition model that is trained on a specific subject in a specific environment typically does not work well when applied to predict another subject's activities recorded in a different environment. To address this challenge, in this paper, we propose EI, a deep-learning-based device-free activity recognition framework that can remove the environment- and subject-specific information contained in the activity data and extract environment/subject-independent features shared by the data collected on different subjects under different environments. We conduct extensive experiments on four different device-free activity recognition testbeds: WiFi, ultrasound, 60 GHz mmWave, and visible light. The experimental results demonstrate the superior effectiveness and generalizability of the proposed EI framework.
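
As a rough illustration of the adversarial training idea (not the paper's published training procedure), the sketch below alternates between updating a domain discriminator on environment/subject labels and updating the feature extractor plus activity classifier to both recognize activities and confuse that discriminator. The model classes, optimizers, and the balancing weight alpha are hypothetical.

```python
# Illustrative training step for domain-adversarial activity recognition.
import torch
import torch.nn.functional as F

def ei_training_step(extractor, activity_clf, domain_disc,
                     opt_main, opt_disc, x, y_act, y_dom, alpha=0.1):
    # 1) Update the domain discriminator on frozen features.
    with torch.no_grad():
        feats = extractor(x)
    dom_loss = F.cross_entropy(domain_disc(feats), y_dom)
    opt_disc.zero_grad(); dom_loss.backward(); opt_disc.step()

    # 2) Update extractor + activity classifier: minimize the activity loss
    #    while *confusing* the discriminator (maximize its loss).
    #    opt_main should cover only the extractor and classifier parameters.
    feats = extractor(x)
    act_loss = F.cross_entropy(activity_clf(feats), y_act)
    confuse_loss = -F.cross_entropy(domain_disc(feats), y_dom)
    loss = act_loss + alpha * confuse_loss
    opt_main.zero_grad(); loss.backward(); opt_main.step()
    return act_loss.item(), dom_loss.item()
```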

340 citations

Journal ArticleDOI
TL;DR: A new autoencoder-based multi-view learning model is constructed by incorporating both inter- and intra-channel correlations of EEG signals to unleash the power of multi-channel information, with a channel-wise competition mechanism added in the training phase.
Abstract: The recent advances in pervasive sensing technologies have enabled us to monitor and analyze the multi-channel electroencephalogram (EEG) signals of epilepsy patients to prevent serious outcomes caused by epileptic seizures. To avoid manual visual inspection of long-term EEG readings, automatic EEG seizure detection has garnered increasing attention among researchers. In this paper, we present a unified multi-view deep learning framework to capture brain abnormalities associated with seizures based on multi-channel scalp EEG signals. The proposed approach is an end-to-end model that is able to jointly learn multi-view features from both unsupervised multi-channel EEG reconstruction and supervised seizure detection via spectrogram representation. We construct a new autoencoder-based multi-view learning model by incorporating both inter- and intra-channel correlations of the EEG signals to unleash the power of multi-channel information. By adding a channel-wise competition mechanism in the training phase, we propose a channel-aware seizure detection module to guide our multi-view structure to focus on important and relevant EEG channels. To validate the effectiveness of the proposed framework, extensive experiments against nine baselines, including both traditional handcrafted feature extraction and conventional deep learning methods, are carried out on a benchmark scalp EEG dataset. Experimental results show that the proposed model achieves a higher average accuracy and F1-score of 94.37% and 85.34%, respectively, using 5-fold subject-independent cross-validation, demonstrating a powerful and effective method for EEG seizure detection.
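
A minimal sketch of the joint reconstruction-plus-detection idea, assuming per-channel autoencoders for intra-channel structure and a softmax-based channel attention as a stand-in for the channel-wise competition mechanism. The layer sizes, attention form, and loss weighting are illustrative assumptions rather than the paper's exact design.

```python
# Illustrative channel-aware autoencoder + classifier for multi-channel EEG.
import torch
import torch.nn as nn

class ChannelAwareAE(nn.Module):
    def __init__(self, n_channels=23, in_dim=1024, hidden=64, n_classes=2):
        super().__init__()
        # One encoder/decoder pair per EEG channel (intra-channel view).
        self.encoders = nn.ModuleList(
            [nn.Linear(in_dim, hidden) for _ in range(n_channels)])
        self.decoders = nn.ModuleList(
            [nn.Linear(hidden, in_dim) for _ in range(n_channels)])
        # One score per channel; softmax over channels acts as a simple
        # "competition" so informative channels dominate the pooled feature.
        self.attn = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):                       # x: (batch, channels, in_dim)
        codes = torch.stack([torch.relu(enc(x[:, i]))
                             for i, enc in enumerate(self.encoders)], dim=1)
        recons = torch.stack([dec(codes[:, i])
                              for i, dec in enumerate(self.decoders)], dim=1)
        weights = torch.softmax(self.attn(codes).squeeze(-1), dim=1)  # (B, C)
        pooled = (weights.unsqueeze(-1) * codes).sum(dim=1)
        return self.classifier(pooled), recons

# Joint loss (sketch): cross-entropy on the seizure labels plus the
# mean-squared reconstruction error, with an assumed weighting.
```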

163 citations

Journal ArticleDOI
TL;DR: This paper uses a convolutional neural network to capture locally important information in EHRs and then feeds the learned representation into a triplet loss or a softmax cross-entropy loss, which better represents the longitudinal EHR sequences.
Abstract: Predicting patients' risk of developing certain diseases is an important research topic in healthcare. Accurately identifying and ranking the similarity among patients based on their historical records is a key step in personalized healthcare. Electronic health records (EHRs), which are irregularly sampled and have varied patient visit lengths, cannot be directly used to measure patient similarity due to the lack of an appropriate representation. Moreover, an effective approach is needed to measure patient similarity on EHRs. In this paper, we propose two novel deep similarity learning frameworks which simultaneously learn patient representations and measure pairwise similarity. We use a convolutional neural network (CNN) to capture locally important information in EHRs and then feed the learned representation into a triplet loss or a softmax cross-entropy loss. After training, we can obtain pairwise distances and similarity scores. Utilizing the similarity information, we then perform disease prediction and patient clustering. Experimental results show that the CNN can better represent the longitudinal EHR sequences, and our proposed frameworks outperform state-of-the-art distance metric learning methods.
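
A minimal sketch of the triplet-loss variant described above, assuming a 1-D CNN over multi-hot visit code vectors; the input shape, kernel size, pooling choice, and embedding dimensions are illustrative assumptions.

```python
# Illustrative CNN embedding + triplet loss for patient similarity learning.
import torch
import torch.nn as nn

class PatientCNN(nn.Module):
    def __init__(self, n_codes=256, emb_dim=64, out_dim=128):
        super().__init__()
        self.conv = nn.Conv1d(n_codes, emb_dim, kernel_size=3, padding=1)
        self.proj = nn.Linear(emb_dim, out_dim)

    def forward(self, visits):                  # visits: (batch, n_codes, T)
        h = torch.relu(self.conv(visits))       # local visit patterns
        h = torch.max(h, dim=2).values          # max-pool over time
        return self.proj(h)                     # patient embedding

model = PatientCNN()
triplet = nn.TripletMarginLoss(margin=1.0)

# Anchor/positive would share a disease label; negative would not
# (toy random tensors stand in for real EHR sequences here).
anchor = model(torch.randn(8, 256, 20))
positive = model(torch.randn(8, 256, 20))
negative = model(torch.randn(8, 256, 20))
loss = triplet(anchor, positive, negative)
# After training, pairwise similarity can be read off as the negative
# Euclidean distance (or cosine similarity) between embeddings.
```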

90 citations

Proceedings ArticleDOI
20 Aug 2017
TL;DR: A multi-view deep learning model is proposed to capture brain abnormalities from multi-channel epileptic EEG signals for seizure detection, and it is shown to be effective in detecting epileptic seizures.
Abstract: With the advances in pervasive sensor technologies, physiological signals can be captured continuously to prevent the serious outcomes caused by epilepsy. Detection of epileptic seizure onset from collected multi-channel electroencephalogram (EEG) signals has attracted much attention recently. Deep learning is a promising method to analyze large-scale unlabeled data. In this paper, we propose a multi-view deep learning model to capture brain abnormalities from multi-channel epileptic EEG signals for seizure detection. Specifically, we first generate EEG spectrograms using the short-time Fourier transform (STFT) to represent the time-frequency information after signal segmentation. Second, we adopt stacked sparse denoising autoencoders (SSDA) to learn multiple features in an unsupervised manner by considering both intra- and inter-channel correlations of the EEG signals, denoted as intra-channel and cross-channel features, respectively. Third, we add an SSDA-based channel selection procedure using a proposed response rate to reduce the dimensionality of the intra-channel features. Finally, we concatenate the learned features and apply a fully connected SSDA model with a softmax classifier to jointly learn the cross-patient seizure detector in a supervised fashion. To evaluate the performance of the proposed model, we carry out experiments on a real-world benchmark EEG dataset and compare it with six baselines. Extensive experimental results demonstrate that the proposed learning model is able to extract latent features with meaningful interpretation, and hence is effective in detecting epileptic seizures.
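
The first two steps, spectrogram generation with the STFT and unsupervised feature learning with a denoising autoencoder layer of the kind that would be stacked, can be sketched as follows. The sampling rate, window length, corruption noise level, and layer sizes are assumptions, and the sparsity penalty of the SSDA is omitted for brevity.

```python
# Illustrative STFT spectrogram + single denoising autoencoder layer.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

def eeg_spectrogram(segment, fs=256, nperseg=128):
    """Log-magnitude STFT spectrogram of a 1-D EEG segment."""
    _, _, Zxx = stft(segment, fs=fs, nperseg=nperseg)
    return np.log1p(np.abs(Zxx))                # (freq_bins, time_frames)

class DenoisingAE(nn.Module):
    """One layer of a stacked denoising autoencoder: corrupt the input,
    then reconstruct the clean version (sparsity penalty omitted)."""
    def __init__(self, in_dim, hidden):
        super().__init__()
        self.enc = nn.Linear(in_dim, hidden)
        self.dec = nn.Linear(hidden, in_dim)

    def forward(self, x, noise_std=0.1):
        noisy = x + noise_std * torch.randn_like(x)
        code = torch.sigmoid(self.enc(noisy))
        return self.dec(code), code

# Usage: flatten one spectrogram and reconstruct it (toy random segment).
spec = eeg_spectrogram(np.random.randn(1024))
x = torch.tensor(spec.ravel(), dtype=torch.float32).unsqueeze(0)
ae = DenoisingAE(in_dim=x.shape[1], hidden=256)
recon, code = ae(x)
loss = nn.functional.mse_loss(recon, x)
```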

88 citations


Cited by
01 Jan 2013
TL;DR: From the experience of several industrial trials on smart grids with communication infrastructures, it is expected that traditional carbon-fuel-based power plants can cooperate with emerging distributed renewable energy sources such as wind and solar to reduce carbon fuel consumption and the consequent greenhouse gas emissions such as carbon dioxide.
Abstract: A communication infrastructure is essential to the success of the emerging smart grid. A scalable and pervasive communication infrastructure is crucial to both the construction and operation of a smart grid. In this paper, we present the background and motivation of communication infrastructures in smart grid systems. We also summarize the major requirements that smart grid communications must meet. From the experience of several industrial trials on smart grids with communication infrastructures, we expect that traditional carbon-fuel-based power plants can cooperate with emerging distributed renewable energy sources such as wind and solar to reduce carbon fuel consumption and the consequent greenhouse gas emissions such as carbon dioxide. Consumers can minimize their energy expenses by adjusting their intelligent home appliance operations to avoid peak hours and utilize renewable energy instead. We further explore the challenges for a communication infrastructure as part of a complex smart grid system. Since a smart grid system might have over millions of consumers and devices, the demands on its reliability and security are extremely critical. Through a communication infrastructure, a smart grid can improve power reliability and quality to eliminate electricity blackouts. Security is a challenging issue since ongoing smart grid systems face increasing vulnerabilities as more and more automation, remote monitoring/control, and supervision entities are interconnected.

1,036 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a review of 154 studies that apply deep learning to EEG, published between 2010 and 2018, and spanning different application domains such as epilepsy, sleep, brain-computer interfacing, and cognitive and affective monitoring.
Abstract: Context: Electroencephalography (EEG) is a complex signal and can require several years of training, as well as advanced signal processing and feature extraction methodologies, to be correctly interpreted. Recently, deep learning (DL) has shown great promise in helping make sense of EEG signals due to its capacity to learn good feature representations from raw data. Whether DL truly presents advantages as compared to more traditional EEG processing approaches, however, remains an open question. Objective: In this work, we review 154 papers that apply DL to EEG, published between January 2010 and July 2018, and spanning different application domains such as epilepsy, sleep, brain-computer interfacing, and cognitive and affective monitoring. We extract trends and highlight interesting approaches from this large body of literature in order to inform future research and formulate recommendations. Methods: Major databases spanning the fields of science and engineering were queried to identify relevant studies published in scientific journals, conferences, and electronic preprint repositories. Various data items were extracted for each study pertaining to (1) the data, (2) the preprocessing methodology, (3) the DL design choices, (4) the results, and (5) the reproducibility of the experiments. These items were then analyzed one by one to uncover trends. Results: Our analysis reveals that the amount of EEG data used across studies varies from less than ten minutes to thousands of hours, while the number of samples seen during training by a network varies from a few dozen to several millions, depending on how epochs are extracted. Interestingly, we saw that more than half the studies used publicly available data and that there has also been a clear shift from intra-subject to inter-subject approaches over the last few years. About [Formula: see text] of the studies used convolutional neural networks (CNNs), while [Formula: see text] used recurrent neural networks (RNNs), most often with a total of 3-10 layers. Moreover, almost one-half of the studies trained their models on raw or preprocessed EEG time series. Finally, the median gain in accuracy of DL approaches over traditional baselines was [Formula: see text] across all relevant studies. More importantly, however, we noticed that studies often suffer from poor reproducibility: a majority of papers would be hard or impossible to reproduce given the unavailability of their data and code. Significance: To help the community progress and share work more effectively, we provide a list of recommendations for future studies and emphasize the need for more reproducible research. We also make our summary table of DL and EEG papers available and invite authors of published work to contribute to it directly. A planned follow-up to this work will be an online public benchmarking portal listing reproducible results.

699 citations

Posted Content
TL;DR: A structured and comprehensive overview of research methods in deep learning-based anomaly detection is presented, grouping state-of-the-art research techniques into different categories based on the underlying assumptions and approach adopted.
Abstract: Anomaly detection is an important problem that has been well-studied within diverse research areas and application domains. The aim of this survey is two-fold: first, we present a structured and comprehensive overview of research methods in deep learning-based anomaly detection. Furthermore, we review the adoption of these methods for anomaly detection across various application domains and assess their effectiveness. We have grouped state-of-the-art research techniques into different categories based on the underlying assumptions and approach adopted. Within each category, we outline the basic anomaly detection technique along with its variants and present the key assumptions used to differentiate between normal and anomalous behavior. For each category, we also present the advantages and limitations and discuss the computational complexity of the techniques in real application domains. Finally, we outline open issues in research and challenges faced while adopting these techniques.
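
For readers unfamiliar with the area, a representative reconstruction-based technique of the kind such surveys categorize can be sketched as follows: an autoencoder is trained on normal data, and samples with a high reconstruction error are flagged as anomalous. The architecture, threshold percentile, and synthetic data are purely illustrative and are not drawn from any specific paper listed here.

```python
# Illustrative reconstruction-error anomaly detection with an autoencoder.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, in_dim=32, hidden=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, in_dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_scores(model, x):
    """Per-sample reconstruction error used as the anomaly score."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

# After training on normal data only, pick a threshold from the training
# scores (e.g. the 95th percentile) and flag test samples above it.
model = AE()
train_scores = anomaly_scores(model, torch.randn(1000, 32))
threshold = torch.quantile(train_scores, 0.95)
test_flags = anomaly_scores(model, torch.randn(10, 32)) > threshold
```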

522 citations

Journal ArticleDOI
TL;DR: A systematic review of deep learning models for electronic health record (EHR) data is conducted, and various deep learning architectures for analyzing different data sources and their target applications are illustrated.

478 citations