Journal ArticleDOI

Depth-Wise Separable Convolutions and Multi-Level Pooling for an Efficient Spatial CNN-Based Steganalysis

TLDR
The experimental results show that the proposed CNN structure is significantly better than five other methods when used to detect three spatial steganography algorithms, WOW, S-UNIWARD and HILL, across a wide variety of datasets and payloads.
Abstract
For steganalysis, many studies have shown that convolutional neural networks (CNNs) perform better than the two-part structure of traditional machine learning methods. Existing CNN architectures use various tricks to improve steganalysis performance, such as fixed convolutional kernels, an absolute value layer, data augmentation and domain knowledge. However, some aspects of network structure design have not been extensively studied so far, such as different convolutions (Inception, Xception, etc.) and different ways of pooling (spatial pyramid pooling, etc.). In this paper, we focus on designing a new CNN structure to improve the detection accuracy of spatial-domain steganography. First, we use $3\times 3$ kernels instead of the traditional $5\times 5$ kernels and optimize the convolution kernels in the preprocessing layer. The smaller convolution kernels reduce the number of parameters and model the features in a small local region. Next, we use separable convolutions to exploit the channel correlation of the residuals, compress the image content and increase the signal-to-noise ratio (between the stego signal and the image signal). Then, we use spatial pyramid pooling (SPP) to aggregate the local features and enhance their representation ability through multi-level pooling. Finally, data augmentation is adopted to further improve network performance. The experimental results show that the proposed CNN structure is significantly better than five other methods, namely SRM, Ye-Net, Xu-Net, Yedroudj-Net and SRNet, when used to detect three spatial steganography algorithms, WOW, S-UNIWARD and HILL, across a wide variety of datasets and payloads.
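To make the abstract's pipeline concrete, the following is a minimal PyTorch sketch of how 3x3 preprocessing kernels, depthwise separable convolutions and multi-level spatial pyramid pooling can be combined. It is an illustration under stated assumptions, not the authors' implementation: the block names (SepConvBlock, StegoCNNSketch), channel counts and pooling levels (1, 2, 4) are hypothetical choices.

# Minimal sketch (PyTorch) of the building blocks described in the abstract.
# Names, channel counts and pooling levels are illustrative assumptions,
# not the authors' exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SepConvBlock(nn.Module):
    """Depthwise separable 3x3 convolution: per-channel spatial filtering
    followed by a 1x1 pointwise convolution that mixes channels."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return F.relu(self.bn(self.pointwise(self.depthwise(x))))

def spatial_pyramid_pool(x, levels=(1, 2, 4)):
    """Multi-level pooling: adaptively average-pool the feature map to
    several grid sizes and concatenate the results into one fixed-length vector."""
    feats = [F.adaptive_avg_pool2d(x, l).flatten(1) for l in levels]
    return torch.cat(feats, dim=1)

class StegoCNNSketch(nn.Module):
    def __init__(self, base_ch=32, num_classes=2):
        super().__init__()
        # Preprocessing with small (learnable) 3x3 kernels instead of 5x5 ones.
        self.preprocess = nn.Conv2d(1, base_ch, 3, padding=1, bias=False)
        self.features = nn.Sequential(
            SepConvBlock(base_ch, base_ch),
            SepConvBlock(base_ch, 2 * base_ch),
            nn.AvgPool2d(2),
            SepConvBlock(2 * base_ch, 4 * base_ch),
        )
        spp_dim = 4 * base_ch * sum(l * l for l in (1, 2, 4))
        self.classifier = nn.Linear(spp_dim, num_classes)

    def forward(self, x):
        x = self.features(self.preprocess(x))
        return self.classifier(spatial_pyramid_pool(x))

# Example: a batch of four 256x256 grayscale images -> cover/stego logits.
logits = StegoCNNSketch()(torch.randn(4, 1, 256, 256))
print(logits.shape)  # torch.Size([4, 2])

Because the SPP stage always produces a fixed-length feature vector, a network built this way can accept images of different sizes, which is one reason multi-level pooling is attractive for steganalysis over varied datasets.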


Citations
Book ChapterDOI

Deep learning in steganography and steganalysis

TL;DR: The structure of a deep neural network is presented in a generic way, the networks proposed in the existing literature for different steganalysis scenarios are reviewed, and steganography by deep learning is discussed.
Journal ArticleDOI

Deep convolutional neural network for enhancing traffic sign recognition developed on Yolo V4

TL;DR: Experiments show that Yolo V4_1 (with SPP) outperforms state-of-the-art schemes, achieving 99.4% accuracy along with the best total BFLOPS and the best mAP (99.32%), and that SPP improves the performance of all models in the experiments.
Journal ArticleDOI

GBRAS-Net: A Convolutional Neural Network Architecture for Spatial Image Steganalysis

TL;DR: In this paper, a CNN architecture is presented that combines a preprocessing stage using filter banks to enhance steganographic noise, a feature extraction stage using depthwise and separable convolutional layers, and skip connections.
Journal ArticleDOI

Pooling in convolutional neural networks for medical image analysis: a survey and an empirical study

TL;DR: In this article, a comprehensive review of the pooling techniques proposed in the computer vision and medical image analysis literature is provided, and an extensive set of experiments is conducted to compare a selected set of pooling algorithms on two medical image classification problems, HEp-2 cell and diabetic retinopathy image classification.
Proceedings ArticleDOI

ImageNet Pre-trained CNNs for JPEG Steganalysis

TL;DR: This paper investigates pre-trained computer-vision deep architectures, such as EfficientNet, MixNet and ResNet, for steganalysis, and demonstrates that avoiding pooling/stride in the first layers enables better performance, as also noticed by other top competitors.
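As a rough illustration of the "avoid early pooling/stride" observation in that TL;DR, the sketch below shows one plausible way to adapt a standard torchvision ResNet-18: keep the pretrained stem weights but set the stem stride to 1, remove the early max-pooling, and attach a two-class cover/stego head. This is an assumption about how such an adaptation could look, not the cited paper's exact recipe.

# Hypothetical adaptation of an ImageNet-pretrained ResNet-18 for steganalysis,
# keeping full spatial resolution in the first layers so the weak stego signal
# is not destroyed by early downsampling. Not the paper's exact configuration.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.conv1.stride = (1, 1)      # keep pretrained stem weights, drop the stride
model.maxpool = nn.Identity()    # remove the early max-pooling
model.fc = nn.Linear(model.fc.in_features, 2)  # cover vs. stego logits

logits = model(torch.randn(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 2])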
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting model won 1st place in the ILSVRC 2015 classification task.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3x3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: This paper proposes Inception, a deep convolutional neural network architecture that achieved the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.