Open Access Journal ArticleDOI

Large-scale JPEG steganalysis using hybrid deep-learning framework

TLDR
This paper proposes a generic hybrid deep-learning framework for JPEG steganalysis that incorporates the domain knowledge behind rich steganalytic models. The framework involves two main stages: the first is hand-crafted, corresponding to the convolution phase and the quantization & truncation phase of the rich models; the second is a compound deep neural network whose parameters are learned during training.
Abstract
Adoption of deep learning in image steganalysis is still in its initial stage. In this paper we propose a generic hybrid deep-learning framework for JPEG steganalysis that incorporates the domain knowledge behind rich steganalytic models. Our proposed framework involves two main stages. The first stage is hand-crafted, corresponding to the convolution phase and the quantization & truncation phase of the rich models. The second stage is a compound deep neural network containing multiple deep subnets whose model parameters are learned during training. We provide experimental evidence and theoretical reflections to argue that the introduction of threshold quantizers, though it disables gradient-descent-based learning of the bottom convolution phase, is indeed cost-effective. We have conducted extensive experiments on a large-scale dataset extracted from ImageNet. The primary dataset used in our experiments contains 500,000 cover images, while our largest contains five million. Our experiments show that integrating quantization and truncation into deep-learning steganalyzers boosts detection performance by a clear margin. Furthermore, we demonstrate that our framework is insensitive to JPEG blocking artifact alterations and that the learned model can easily be transferred to a different attacking target and even a different dataset. These properties are of critical importance in practical applications.
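To make the hand-crafted first stage concrete, here is a minimal numpy sketch of the quantization & truncation (Q&T) phase applied to one high-pass noise residual; the kernel, quantization step q and threshold T are illustrative placeholders, not the paper's exact settings.

```python
import numpy as np
from scipy.signal import convolve2d

# Hand-crafted first stage: a fixed high-pass convolution followed by
# quantization & truncation (Q&T), as in rich steganalytic models.
# The kernel, step q and threshold T are illustrative placeholders.

KV = np.array([[-1,  2, -1],
               [ 2, -4,  2],
               [-1,  2, -1]], dtype=np.float32)  # an example high-pass kernel

def quantize_truncate(image, kernel=KV, q=1.0, T=4):
    """Compute a noise residual, quantize with step q, truncate to [-T, T]."""
    residual = convolve2d(image, kernel, mode="valid")
    quantized = np.round(residual / q)   # threshold quantizer: non-differentiable,
    return np.clip(quantized, -T, T)     # so this stage is not learned by SGD

# usage on a random stand-in for a decompressed JPEG plane
img = np.random.randint(0, 256, (256, 256)).astype(np.float32)
features = quantize_truncate(img)
```

The `np.round` step is exactly what blocks gradient descent through this stage: its derivative is zero almost everywhere, which is why the paper argues the fixed, hand-crafted front end is a cost-effective trade-off.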


Citations
Journal ArticleDOI

Automatic Steganographic Distortion Learning Using a Generative Adversarial Network

TL;DR: Experimental results show that the proposed automatic steganographic distortion learning framework can effectively evolve from nearly naïve random $\pm 1$ embedding at the beginning of training to much more advanced content-adaptive embedding that concentrates secret bits in textured regions.
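As a rough illustration of the content-adaptive $\pm 1$ embedding such a framework learns, the sketch below simulates per-pixel $\pm 1$ modifications from a change-probability map; the generator network that would produce the map is omitted, and the flat map used here is a stand-in.

```python
import numpy as np

# Sketch of probabilistic +/-1 embedding driven by a learned change map:
# given per-pixel change probabilities p (which the trained generator
# would output), simulate stego modifications.

def simulate_pm1_embedding(p, seed=0):
    """Draw modifications m in {-1, 0, +1} with P(+1) = P(-1) = p/2."""
    rng = np.random.default_rng(seed)
    u = rng.random(p.shape)
    m = np.zeros(p.shape, dtype=np.int8)
    m[u < p / 2] = 1
    m[(u >= p / 2) & (u < p)] = -1
    return m

p = np.full((8, 8), 0.1)          # a trained generator would concentrate
m = simulate_pm1_embedding(p)     # probability mass in textured regions
```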
Journal ArticleDOI

Invisible steganography via generative adversarial networks

TL;DR: A novel CNN architecture named ISGAN is proposed to conceal a secret gray image in a color cover image on the sender side and exactly extract the secret image on the receiver side.
Posted Content

Invisible Steganography via Generative Adversarial Networks

TL;DR: A novel CNN architecture named ISGAN is proposed to conceal a secret gray image in a color cover image on the sender side and exactly extract the secret image on the receiver side, achieving state-of-the-art performance on the LFW, PASCAL-VOC12 and ImageNet datasets.
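A minimal sketch of the hiding step this family of methods describes, assuming the gray secret is stacked with the color cover as a fourth input channel; the layer sizes are illustrative, not ISGAN's actual architecture.

```python
import torch
import torch.nn as nn

# Hiding step sketch: concatenate the gray secret with the color cover
# along the channel axis and let an encoder CNN emit a 3-channel stego
# image; a companion decoder would recover the secret. Illustrative only.

encoder = nn.Sequential(
    nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Sigmoid(),  # stego in [0, 1]
)

cover = torch.rand(1, 3, 64, 64)    # color cover image
secret = torch.rand(1, 1, 64, 64)   # gray secret image
stego = encoder(torch.cat([cover, secret], dim=1))
```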
Journal ArticleDOI

CNN Based Adversarial Embedding with Minimum Alteration for Image Steganography.

TL;DR: Adversarial embedding adjusts the costs of image-element modifications according to the gradients backpropagated from the CNN classifier targeted by the attack, so that the modification direction has a higher probability of matching the sign of the gradient.
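A hedged sketch of the gradient-guided cost adjustment described above; the function name, the cost arrays, and the symmetric scaling factor alpha are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

# Lower the cost of the modification direction that matches the sign of
# the backpropagated gradient and raise the opposite one, so embedding
# changes tend to follow sign(grad). Alpha is an assumed scaling factor.

def adjust_costs(rho_plus, rho_minus, grad, alpha=2.0):
    """Return asymmetric +1/-1 embedding costs biased by sign(grad)."""
    rho_p = np.where(grad > 0, rho_plus / alpha, rho_plus * alpha)
    rho_m = np.where(grad < 0, rho_minus / alpha, rho_minus * alpha)
    return rho_p, rho_m

rho = np.ones((8, 8))                        # uniform baseline costs
grad = np.random.randn(8, 8)                 # stand-in for CNN gradients
rho_p, rho_m = adjust_costs(rho, rho, grad)
```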
Posted Content

End-to-end Trained CNN Encode-Decoder Networks for Image Steganography

TL;DR: A convolutional neural network based encoder-decoder architecture for embedding images as payload is proposed, achieving state-of-the-art payload capacity at high PSNR and SSIM values.
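A minimal sketch of the extraction side of such an encoder-decoder system, assuming a gray image payload; the layer sizes are illustrative and the PSNR/SSIM evaluation of the stego image is omitted.

```python
import torch
import torch.nn as nn

# Extraction sketch: a decoder CNN recovers the payload from the stego
# image produced by the encoder. Layer sizes are illustrative only.

decoder = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1), nn.Sigmoid(),  # recovered payload
)

stego = torch.rand(1, 3, 64, 64)
payload_hat = decoder(stego)
```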
References
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: Inception is a deep convolutional neural network architecture that achieved the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
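A minimal PyTorch sketch of an Inception-style block, with parallel 1x1, 3x3 and 5x5 branches plus a pooled branch concatenated along the channel axis; the channel counts are illustrative, not GoogLeNet's.

```python
import torch
import torch.nn as nn

# Inception-style block: several filter sizes applied in parallel,
# with 1x1 bottlenecks keeping the 3x3/5x5 branches cheap.

class InceptionBlock(nn.Module):
    def __init__(self, c_in):
        super().__init__()
        self.b1 = nn.Conv2d(c_in, 16, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv2d(c_in, 8, 1),   # 1x1 bottleneck
                                nn.Conv2d(8, 16, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(c_in, 8, 1),
                                nn.Conv2d(8, 16, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(c_in, 16, 1))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

y = InceptionBlock(32)(torch.randn(1, 32, 56, 56))   # -> (1, 64, 56, 56)
```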
Posted Content

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Batch Normalization normalizes layer inputs for each training mini-batch to reduce internal covariate shift in deep neural networks, and achieves state-of-the-art performance on ImageNet.
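A minimal numpy sketch of training-mode batch normalization over one mini-batch; running statistics and backpropagation are omitted.

```python
import numpy as np

# Normalize each feature over the batch axis, then apply a learnable
# scale (gamma) and shift (beta). Training-mode statistics only.

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)   # zero mean, unit variance per feature
    return gamma * x_hat + beta

batch = np.random.randn(64, 128)            # 64 samples, 128 features
normalized = batch_norm(batch)
```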
Journal ArticleDOI

Deep learning in neural networks

TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.
Posted Content

Caffe: Convolutional Architecture for Fast Feature Embedding

TL;DR: Caffe is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.
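A hedged usage sketch with the pycaffe bindings, assuming a deployment prototxt and trained weights are available; the file paths and the input blob name "data" are hypothetical placeholders.

```python
import numpy as np
import caffe  # requires Caffe's Python bindings (pycaffe)

# Load a trained network and run one forward pass. The paths and the
# input blob name "data" are placeholders for an actual deployment.

caffe.set_mode_cpu()
net = caffe.Net("deploy.prototxt", "weights.caffemodel", caffe.TEST)

net.blobs["data"].data[...] = np.random.randn(*net.blobs["data"].data.shape)
out = net.forward()   # dict mapping output blob names to arrays
```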
Journal ArticleDOI

Rich Models for Steganalysis of Digital Images

TL;DR: A novel general strategy for building steganography detectors for digital images is proposed: assemble a rich model of the image noise component as a union of many diverse submodels formed by the joint distributions of neighboring samples from quantized noise residuals obtained with linear and nonlinear high-pass filters.
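A hedged numpy sketch of one such submodel: a quantized, truncated high-pass residual summarized by a co-occurrence histogram of horizontal neighbor pairs. The kernel, step q, threshold T and co-occurrence order are illustrative, not a specific SRM submodel.

```python
import numpy as np
from scipy.signal import convolve2d

# One rich-model submodel: high-pass residual -> quantize & truncate ->
# joint histogram of horizontally neighboring residual pairs.

def cooccurrence_submodel(image, q=1.0, T=2):
    kernel = np.array([[1, -2, 1]], dtype=np.float32)        # 1-D high-pass filter
    r = np.clip(np.round(convolve2d(image, kernel, mode="valid") / q), -T, T)
    bins = 2 * T + 1
    hist = np.zeros((bins, bins))
    left, right = r[:, :-1].astype(int) + T, r[:, 1:].astype(int) + T
    np.add.at(hist, (left, right), 1)                        # 2nd-order co-occurrence
    return hist / hist.sum()

img = np.random.randint(0, 256, (128, 128)).astype(np.float32)
features = cooccurrence_submodel(img)                        # 5x5 = 25-D submodel
```

The full rich model is the union of many such submodels over different filters, quantization steps, and scan directions, which is exactly the diversity the hybrid framework's hand-crafted stage inherits.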