Lingyun Xiang

Researcher at Changsha University of Science and Technology

Publications: 21
Citations: 453

Lingyun Xiang is an academic researcher at Changsha University of Science and Technology whose work spans computer science and steganalysis. The author has an h-index of 7 and has co-authored 15 publications receiving 316 citations.

Papers
Journal Article

Coverless real-time image information hiding based on image block matching and dense convolutional network

TL;DR: Experimental results demonstrate that the proposed coverless information hiding approach based on deep learning is more robust and achieves higher retrieval accuracy and capacity than some existing coverless image information hiding methods.
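The coverless idea is that no cover image is ever modified: the sender retrieves natural images whose features already encode the secret bits. Below is a minimal Python sketch of that retrieval pipeline, using block-mean comparisons as a crude stand-in for the paper's dense convolutional network features; the function names, the 4x4 grid, and the 15-bit chunk size are illustrative assumptions, not the authors' implementation.

import numpy as np

def block_hash(img, grid=4):
    """Map a grayscale image (2-D array) to a bit string by comparing mean
    intensities of adjacent blocks (stand-in for the DenseNet features)."""
    h, w = img.shape
    bh, bw = h // grid, w // grid
    means = [img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].mean()
             for r in range(grid) for c in range(grid)]
    # one bit per adjacent-block comparison: 16 blocks -> 15 bits
    return ''.join('1' if means[i] > means[i + 1] else '0'
                   for i in range(len(means) - 1))

def build_index(images, grid=4):
    """Index a database of natural images by hash, so images that already
    'contain' a given bit pattern can be retrieved."""
    index = {}
    for img in images:
        index.setdefault(block_hash(img, grid), []).append(img)
    return index

def hide(message_bits, index, chunk=15):
    """Split the secret into chunks and pick one matching image per chunk.
    The selected images are transmitted unmodified, so nothing is embedded
    into them. Raises KeyError when no image matches a chunk; real systems
    pad the message or enlarge the database."""
    chunks = [message_bits[i:i + chunk]
              for i in range(0, len(message_bits), chunk)]
    return [index[c][0] for c in chunks]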
Journal Article

Reversible Natural Language Watermarking Using Synonym Substitution and Arithmetic Coding

TL;DR: Experimental results illustrate that the proposed reversible natural language watermarking method can extract the watermark successfully, recover the original text losslessly, and achieve a high embedding capacity.
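The embedding mechanism can be pictured with a toy synonym table: each substitutable word carries one bit, encoded by which member of its synonym pair appears, while a log of the original choices keeps the scheme reversible. In the paper that recovery information is compressed with arithmetic coding; the sketch below simply returns the log, and the word pairs are illustrative assumptions.

# Toy synonym pairs: position 0 encodes bit '0', position 1 encodes bit '1'.
SYNONYM_PAIRS = [("big", "large"), ("fast", "quick"), ("start", "begin")]
LOOKUP = {w: pair for pair in SYNONYM_PAIRS for w in pair}

def embed(text, bits):
    """Carry one watermark bit per substitutable word by choosing which
    member of its synonym pair appears; log the original index so the
    text can be restored losslessly."""
    out, log, i = [], [], 0
    for word in text.split():
        pair = LOOKUP.get(word)
        if pair and i < len(bits):
            log.append(pair.index(word))    # recovery information
            out.append(pair[int(bits[i])])  # synonym choice encodes the bit
            i += 1
        else:
            out.append(word)
    return " ".join(out), log

def extract(text):
    """Read the watermark back from which synonym of each pair occurs."""
    return "".join(str(LOOKUP[w].index(w))
                   for w in text.split() if w in LOOKUP)

For example, embed("the big dog runs fast", "10") yields "the large dog runs fast" with recovery log [0, 0], and extract reads "10" back from the watermarked text.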
Journal Article

Discrete Multi-graph Hashing for Large-Scale Visual Search

TL;DR: A novel hashing method, discrete multi-graph hashing (DMGH), uses a multi-graph learning technique to fuse multiple views and adaptively learns the weight of each view to reduce the distortion error in the quantization stage.
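As a rough illustration of the graph-fusion idea, the sketch below combines per-view affinity matrices with convex weights and binarizes Laplacian eigenvectors into hash bits. This is a simplified spectral stand-in under stated assumptions: DMGH learns the view weights adaptively and keeps the codes discrete during optimization, whereas here the weights are supplied and the codes are sign-rounded afterwards.

import numpy as np

def fuse_and_hash(views, n_bits, weights=None):
    """views: list of symmetric n x n affinity matrices, one per view.
    Returns an n x n_bits matrix of +/-1 hash codes."""
    if weights is None:
        weights = [1.0 / len(views)] * len(views)
    W = sum(a * V for a, V in zip(weights, views))  # fused similarity graph
    L = np.diag(W.sum(axis=1)) - W                  # unnormalized Laplacian
    _, vecs = np.linalg.eigh(L)                     # eigenvalues ascending
    # skip the trivial constant eigenvector, binarize the next n_bits
    return np.where(vecs[:, 1:n_bits + 1] > 0, 1, -1)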
Journal Article

TUMK-ELM: A Fast Unsupervised Heterogeneous Data Learning Approach

TL;DR: A fast two-stage unsupervised multiple kernel extreme learning machine (TUMK-ELM) alternately extracts information from multiple sources and learns the heterogeneous data representation, each stage with a closed-form solution, which makes it extremely fast.
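The "multiple kernel" ingredient can be pictured as one kernel matrix per heterogeneous source, fused by convex weights; TUMK-ELM alternates closed-form updates of the learned representation and of those weights. The sketch below shows only the kernel-fusion step, with RBF kernels and fixed weights as assumptions; the fused kernel could then feed any kernel-based clustering.

import numpy as np

def rbf_kernel(X, gamma):
    """Gaussian kernel matrix for one feature set (n samples x d features)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def fused_kernel(feature_sets, gammas, weights):
    """One kernel per heterogeneous source, combined with convex weights.
    In TUMK-ELM the weights are learned in closed form; here they are
    simply supplied, as are the kernel bandwidths."""
    return sum(w * rbf_kernel(X, g)
               for w, X, g in zip(weights, feature_sets, gammas))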
Journal Article

Novel Linguistic Steganography Based on Character-Level Text Generation

TL;DR: A character-level linguistic steganographic method embeds the secret information into characters instead of words by employing a long short-term memory (LSTM) based language model; among the compared methods it has the fastest running speed and the highest embedding capacity.
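Generation-based steganography of this kind typically works as follows: at each step the language model's most probable next characters are ranked, and the next k secret bits select which candidate is emitted; the receiver, holding the same model, re-ranks the candidates to read the bits back. A minimal sketch, where model(prefix) is an assumed stand-in for the paper's LSTM that returns a dict of next-character probabilities:

def embed_bits(model, seed, bits, k=1):
    """Generate stego text one character at a time: keep the model's top
    2**k next characters and let the next k secret bits pick which
    candidate is emitted."""
    text, i = seed, 0
    while i < len(bits):
        probs = model(text)
        ranked = sorted(probs, key=probs.get, reverse=True)[:2 ** k]
        idx = int(bits[i:i + k].ljust(k, '0'), 2)  # pad the final chunk
        text += ranked[idx]
        i += k
    return text

def extract_bits(model, seed, stego, k=1):
    """Receiver side: re-rank candidates with the same model and recover
    k bits from the rank of each emitted character."""
    bits, text = "", seed
    for ch in stego[len(seed):]:
        probs = model(text)
        ranked = sorted(probs, key=probs.get, reverse=True)[:2 ** k]
        bits += format(ranked.index(ch), "0{}b".format(k))
        text += ch
    return bits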