scispace - formally typeset

Yongfeng Huang

Researcher at Tsinghua University

Publications - 237
Citations - 4855

Yongfeng Huang is an academic researcher from Tsinghua University. The author has contributed to research in the topics of Steganography and Steganalysis, has an h-index of 30, and has co-authored 230 publications receiving 2,703 citations. Previous affiliations of Yongfeng Huang include the Association for Computing Machinery and Microsoft.

Papers
Journal ArticleDOI

A Multi-grained Log Auditing Scheme for Cloud Data Confidentiality

TL;DR: This paper designs a logging mechanism that supports multi-grained data access using a Merkle Hash Tree structure, and presents a log auditing approach that achieves data confidentiality auditing and leakage investigation by constructing an Access List.
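The scheme above relies on a Merkle Hash Tree so that an auditor holding only the root hash can detect tampering with any logged entry. The following is a minimal illustrative sketch of that core idea, not the paper's actual scheme; the function names and the log format are assumptions for the example.

```python
import hashlib

def _h(data: bytes) -> bytes:
    # SHA-256 digest used for both leaf and internal nodes
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute the Merkle root over a list of leaf payloads."""
    if not leaves:
        return _h(b"")
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Changing any single log entry changes the root, so modification
# of the audit log is detectable from the root alone.
logs = [b"read:fileA", b"write:fileB", b"read:fileC"]
root = merkle_root(logs)
assert root != merkle_root([b"read:fileA", b"write:fileX", b"read:fileC"])
```

In a real auditing setting the verifier would also use per-leaf membership proofs (the sibling hashes along the path to the root) rather than recomputing the whole tree.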
Book ChapterDOI

Fragile Watermarking Based Proofs of Retrievability for Archival Cloud Data

TL;DR: This paper proposes a novel fragile-watermarking-based, publicly auditable POR scheme for archival cloud data, which not only improves the efficiency of the audit process but also simultaneously ensures privacy preservation and resistance to replay attacks.
Proceedings ArticleDOI

Protocol of Steganography in Streaming Media on VOIP Network Based on Variable Length Coding

TL;DR: The experimental results confirm the theoretical prediction that the proposed protocol achieves reliable and efficient steganography on VoIP networks.
Posted Content

Behavioral Security in Covert Communication Systems

TL;DR: A new covert communication framework is proposed that considers both content security and behavioral security during information transmission; the authors hope this framework will help researchers design better covert communication systems.
Posted Content

Fastformer: Additive Attention Can Be All You Need

TL;DR: The authors propose Fastformer, an efficient Transformer model based on additive attention: instead of modeling pairwise interactions between tokens, it first uses an additive attention mechanism to model global contexts, and then transforms each token representation based on its interaction with the global context representations.
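The summary above can be sketched numerically: tokens interact only through pooled global query/key vectors, so the cost is linear in sequence length rather than quadratic. This is a simplified single-head sketch of Fastformer-style additive attention in NumPy; the scoring vectors `wq`/`wk` stand in for learned parameters, and the multi-head structure and output transformation of the actual model are omitted.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(Q, K, V, wq, wk):
    """Simplified Fastformer-style additive attention (single head).

    Q, K, V: (seq_len, d) token representations.
    wq, wk:  (d,) scoring vectors (learned in the real model).
    """
    d = Q.shape[1]
    alpha = softmax(Q @ wq / np.sqrt(d))   # per-token query importance scores
    g = alpha @ Q                          # global query vector, shape (d,)
    P = g * K                              # element-wise global-query/key interaction
    beta = softmax(P @ wk / np.sqrt(d))    # per-token key importance scores
    k_global = beta @ P                    # global key vector, shape (d,)
    return k_global * V + Q                # value interaction plus residual

rng = np.random.default_rng(0)
seq_len, d = 8, 16
out = additive_attention(rng.normal(size=(seq_len, d)),
                         rng.normal(size=(seq_len, d)),
                         rng.normal(size=(seq_len, d)),
                         rng.normal(size=d),
                         rng.normal(size=d))
assert out.shape == (seq_len, d)
```

Note that every reduction here is a weighted sum over tokens (O(n·d)), which is what makes additive attention cheaper than the O(n²·d) pairwise attention matrix of a standard Transformer.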