
Rui Yang

Researcher at Sun Yat-sen University

Publications - 17
Citations - 583

Rui Yang is an academic researcher from Sun Yat-sen University. The author has contributed to research in the topics of Digital audio and Speech coding, has an h-index of 11, and has co-authored 17 publications receiving 485 citations. Previous affiliations of Rui Yang include the New Jersey Institute of Technology.

Papers
Journal ArticleDOI

Geometric Invariant Audio Watermarking Based on an LCM Feature

TL;DR: Both the theoretical analysis and the experimental results show that the proposed audio watermarking scheme is not only resilient to common signal-processing operations but also withstands challenging audio geometric distortions, achieving the best robustness against simultaneous geometric distortions.
Proceedings ArticleDOI

Detecting digital audio forgeries by checking frame offsets

TL;DR: This work is the first to investigate digital forensics on the MP3 format; it demonstrates the validity of the proposed approach in detecting common forgeries such as deletion, insertion, substitution, and splicing.
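
As a rough illustration of the frame-offset idea (a sketch under assumptions, not the paper's exact procedure), the snippet below walks MPEG-1 Layer III frame headers in an MP3 byte stream and records where each frame starts; frames in an untouched stream tile the file contiguously, so gaps that force a resync between consecutive offsets can hint at deletion or splicing. The header parsing is simplified and restricted to MPEG-1 Layer III.

```python
# Hypothetical illustration: locate consecutive MPEG-1 Layer III frames and
# record their byte offsets. Irregularities in the offset sequence (gaps that
# force a resync) can indicate that frames were cut out or inserted.

BITRATES = [0, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320]  # kbps
SAMPLE_RATES = [44100, 48000, 32000]  # Hz (MPEG-1 sampling-rate indices 0..2)

def frame_length(header: bytes):
    """Frame length in bytes for an MPEG-1 Layer III header, or None if invalid."""
    if len(header) < 4 or header[0] != 0xFF or (header[1] & 0xFE) != 0xFA:
        return None  # not an MPEG-1 Layer III frame sync
    bitrate_idx = header[2] >> 4
    sr_idx = (header[2] >> 2) & 0x03
    padding = (header[2] >> 1) & 0x01
    if bitrate_idx in (0, 15) or sr_idx == 3:
        return None  # free-format or reserved values
    return 144 * BITRATES[bitrate_idx] * 1000 // SAMPLE_RATES[sr_idx] + padding

def frame_offsets(data: bytes):
    """Byte offsets of consecutive frames, resyncing whenever a header is invalid."""
    offsets, pos = [], data.find(b"\xff")
    while pos != -1 and pos + 4 <= len(data):
        length = frame_length(data[pos:pos + 4])
        if length is None:
            pos = data.find(b"\xff", pos + 1)  # gap: resync on the next candidate
            continue
        offsets.append(pos)
        pos += length
    return offsets
```

In an unedited stream the differences between consecutive offsets match the computed frame lengths; a forgery check would look for positions where the frames only line up again after a resync.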
Proceedings ArticleDOI

Defeating fake-quality MP3

TL;DR: This work uses the numbers of small-value MDCT coefficients as features to discriminate fake-quality MP3 from normal MP3, and is the first to expose fake-quality MP3.
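
A minimal sketch of the feature idea, assuming a plain MDCT applied directly to overlapping waveform frames (MP3 actually applies its MDCT to polyphase subband samples, so this is only a stand-in); the frame length, hop, and threshold below are hypothetical choices, not the paper's settings.

```python
import numpy as np

def mdct(frame: np.ndarray) -> np.ndarray:
    """MDCT of a length-2N frame with a sine window, returning N coefficients."""
    N = frame.size // 2
    n = np.arange(2 * N)
    window = np.sin(np.pi / (2 * N) * (n + 0.5))
    basis = np.cos(np.pi / N * (n[:, None] + 0.5 + N / 2) * (np.arange(N)[None, :] + 0.5))
    return (window * frame) @ basis

def small_coeff_counts(signal: np.ndarray, frame_len: int = 1152, thresh: float = 1e-3) -> np.ndarray:
    """Per-frame counts of MDCT coefficients whose magnitude falls below `thresh`."""
    hop = frame_len // 2  # 50% overlap, as the MDCT requires
    frames = [signal[i:i + frame_len] for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([int((np.abs(mdct(f)) < thresh).sum()) for f in frames])
```

The intuition is that coarse quantization at a low bitrate zeroes out many coefficients, and transcoding such a file to a higher bitrate preserves them, so unusually large per-frame counts of near-zero coefficients can flag a fake-quality file.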
Proceedings ArticleDOI

Detecting double compression of audio signal

TL;DR: This work is the first to detect double compression of an audio signal; it uses support vector machine classifiers with feature vectors formed from the distributions of the first digits of the quantized MDCT (modified discrete cosine transform) coefficients.
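
A hedged sketch of this feature construction, assuming the quantized MDCT coefficients have already been extracted by a partial MP3 decode (not shown): build the empirical distribution of first significant digits (1-9) per file and train a support vector machine on those 9-dimensional vectors. The scikit-learn classifier, the RBF kernel, and the placeholder inputs are assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.svm import SVC

def first_digit_histogram(coeffs: np.ndarray) -> np.ndarray:
    """Normalized histogram of the first significant digit (1..9) of nonzero coefficients."""
    mags = np.abs(coeffs[coeffs != 0]).astype(float)
    if mags.size == 0:
        return np.zeros(9)
    # the first significant digit of x is x / 10**floor(log10(x)), truncated to an integer
    digits = np.clip((mags / 10 ** np.floor(np.log10(mags))).astype(int), 1, 9)
    hist = np.bincount(digits, minlength=10)[1:10]
    return hist / hist.sum()

def train_detector(coeffs_per_file, labels):
    """coeffs_per_file: list of 1-D arrays of quantized MDCT coefficients (placeholder inputs);
    labels: 1 for double-compressed, 0 for single-compressed."""
    X = np.stack([first_digit_histogram(c) for c in coeffs_per_file])
    return SVC(kernel="rbf").fit(X, labels)  # kernel choice is an assumption
```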
Journal ArticleDOI

Detection of Double Compressed AMR Audio Using Stacked Autoencoder

TL;DR: This paper proposes a framework for detecting double-compressed AMR audio based on a stacked autoencoder (SAE) network and the universal background model-Gaussian mixture model (UBM-GMM), using the SAE to learn optimal features automatically from the audio waveforms.
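
As a rough sketch of the SAE feature-learning stage only, the code below greedily pretrains a small stack of autoencoders on waveform frames and returns the top-layer activations; the layer sizes, optimizer, and PyTorch implementation are all assumptions, and the subsequent UBM-GMM modeling (e.g., fitting a background Gaussian mixture to the learned features) is omitted.

```python
import torch
import torch.nn as nn

class AELayer(nn.Module):
    """One autoencoder layer: a sigmoid encoder plus a linear decoder."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.dec = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def pretrain_layer(layer: AELayer, x: torch.Tensor, epochs: int = 20, lr: float = 1e-3) -> AELayer:
    """Greedy unsupervised pretraining of one layer by reconstruction loss."""
    opt = torch.optim.Adam(layer.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(layer(x), x)
        loss.backward()
        opt.step()
    return layer

def stacked_features(frames: torch.Tensor, dims=(512, 256, 128)) -> torch.Tensor:
    """Pretrain a stack of autoencoders on waveform frames and return the top-layer features."""
    x, in_dim = frames, frames.shape[1]
    for hid in dims:  # hypothetical layer sizes
        layer = pretrain_layer(AELayer(in_dim, hid), x)
        with torch.no_grad():
            x = layer.enc(x)
        in_dim = hid
    return x  # features that would feed the UBM-GMM stage
```

Here `frames` would be fixed-length windows cut from the decoded AMR waveform, e.g. a float tensor of shape (num_frames, frame_len).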