Shengchen Li
Researcher at Beijing University of Posts and Telecommunications
Publications - 35
Citations - 167
Shengchen Li is an academic researcher from Beijing University of Posts and Telecommunications. The author has contributed to research on topics including singing and convolutional neural networks. The author has an h-index of 5, having co-authored 35 publications receiving 96 citations. Previous affiliations of Shengchen Li include Xi'an Jiaotong-Liverpool University and Queen Mary University of London.
Papers
Book Chapter
Transfer Learning for Music Classification and Regression Tasks Using Artist Tags
TL;DR: Experimental results show that features learned from artist tags in a transfer-learning setting can be effectively applied to music genre classification and music emotion recognition tasks.
Journal Article
Computer Audition for Healthcare: Opportunities and Challenges
Kun Qian, Xiao Li, Haifeng Li, Shengchen Li, Wei Li, Zuoliang Ning, Shuai Yu, Limin Hou, Gang Tang, Jing Lu, Feng Li, Shufei Duan, Chengcheng Du, Yao Cheng, Yujun Wang, Lin Gan, Yoshiharu Yamamoto, Björn Schuller +17 more
TL;DR: This research presents a novel and scalable approach called “Embedded Intelligence for Health Care and Wellbeing” (EERING) that combines natural language processing and artificial intelligence (AI) to provide real-time information about a person’s brain activity.
Proceedings Article
Sound Event Detection with Sequentially Labelled Data Based on Connectionist Temporal Classification and Unsupervised Clustering
TL;DR: A connectionist temporal classification (CTC)-based SED system is proposed that uses sequentially labelled data (SLD) instead of strongly labelled data, together with a novel unsupervised clustering stage; results indicate the effectiveness of the proposed two-stage method trained on SLD without any onset/offset times of sound events.
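The key property CTC exploits is that a frame-wise label sequence can be collapsed into an event sequence without onset/offset annotations. As a minimal sketch (not the paper's implementation; the function name and toy posteriors are illustrative), greedy CTC decoding takes the best label per frame, merges consecutive repeats, and drops the blank symbol:

```python
import numpy as np

def ctc_greedy_decode(posteriors, blank=0):
    """Collapse frame-wise predictions into an event sequence:
    argmax per frame, merge consecutive repeats, then drop blanks."""
    path = np.argmax(posteriors, axis=1)  # best label per frame
    merged = [p for i, p in enumerate(path) if i == 0 or p != path[i - 1]]
    return [int(p) for p in merged if p != blank]

# toy posteriors over {blank=0, event A=1, event B=2} for 6 frames
posteriors = np.array([
    [0.9, 0.05, 0.05],   # blank
    [0.1, 0.80, 0.10],   # A
    [0.1, 0.80, 0.10],   # A (repeat, merged away)
    [0.9, 0.05, 0.05],   # blank
    [0.1, 0.10, 0.80],   # B
    [0.9, 0.05, 0.05],   # blank
])
print(ctc_greedy_decode(posteriors))  # → [1, 2]
```

This is why SLD suffices for training: the loss only needs the ordered event labels, not the times at which each event starts or ends.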
Journal Article
Polyphonic audio tagging with sequentially labelled data using CRNN with learnable gated linear units
TL;DR: A Connectionist Temporal Classification (CTC) loss function on top of a Convolutional Recurrent Neural Network with learnable Gated Linear Units is used to tag polyphonic audio recordings, based on a new type of audio label data: Sequentially Labelled Data (SLD).
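The gated linear unit mentioned here computes GLU(x) = (xW + b) ⊙ σ(xV + c): one linear branch carries the features while a sigmoid branch acts as a learned per-feature gate in (0, 1). A minimal numpy sketch (shapes and random weights are illustrative, not the paper's configuration):

```python
import numpy as np

def gated_linear_unit(x, W, V, b, c):
    """GLU(x) = (xW + b) * sigmoid(xV + c): the sigmoid branch
    scales each linear output by a gate between 0 and 1."""
    gate = 1.0 / (1.0 + np.exp(-(x @ V + c)))
    return (x @ W + b) * gate

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))        # 4 time frames, 8 input features
W, V = rng.standard_normal((2, 8, 6))  # two 8x6 projection matrices
b, c = np.zeros(6), np.zeros(6)
y = gated_linear_unit(x, W, V, b, c)
print(y.shape)  # (4, 6)
```

Because the gate is strictly between 0 and 1, the output magnitude never exceeds that of the ungated linear branch, which lets the network learn to attenuate irrelevant time-frequency regions.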
Proceedings Article
Multi-level Attention Model with Deep Scattering Spectrum for Acoustic Scene Classification
TL;DR: Deep scattering spectrum (DSS) features are combined with a CNN-based multi-level attention model for ASC tasks; results show that the DSS features provide an 11%-14% relative improvement in accuracy over log-mel features within a state-of-the-art framework.
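At the core of attention models like this one is attention pooling over time: each frame embedding is scored against a learned query, the scores are softmax-normalized, and the clip-level embedding is the weighted average, so informative frames dominate. A minimal single-level sketch (the query vector and shapes are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def attention_pool(h, w):
    """Pool frame embeddings h (T, D) into one clip embedding (D,)
    using softmax(h @ w) as per-frame attention weights."""
    scores = h @ w                     # (T,) relevance per frame
    a = np.exp(scores - scores.max())  # stable softmax
    a /= a.sum()                       # weights sum to 1
    return a @ h                       # weighted average over frames

rng = np.random.default_rng(1)
h = rng.standard_normal((10, 4))  # 10 frames, 4-dim embeddings
w = rng.standard_normal(4)        # learned query (random here)
z = attention_pool(h, w)
print(z.shape)  # (4,)
```

A multi-level variant applies such pooling to the feature maps of several intermediate CNN layers and combines the resulting embeddings before classification.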