Author

Fangjun Huang

Bio: Fangjun Huang is an academic researcher at Sun Yat-sen University. He has contributed to research topics including JPEG and information hiding, has an h-index of 12, and has co-authored 28 publications receiving 1,346 citations. His previous affiliations include the New Jersey Institute of Technology and the Chinese Academy of Sciences.

Papers
Journal Article (DOI)
TL;DR: An edge-adaptive scheme is proposed that selects the embedding regions according to the size of the secret message and the difference between two consecutive pixels in the cover image; it enhances security significantly compared with typical LSB-based approaches as well as their edge-adaptive variants, while preserving higher visual quality of the stego images.
Abstract: The least-significant-bit (LSB)-based approach is a popular type of steganographic algorithm in the spatial domain. However, we find that in most existing approaches, the choice of embedding positions within a cover image mainly depends on a pseudorandom number generator, without considering the relationship between the image content itself and the size of the secret message. Thus, the smooth/flat regions in the cover images will inevitably be contaminated after data hiding, even at a low embedding rate, and this leads to poor visual quality and low security according to our analysis and extensive experiments, especially for images with many smooth regions. In this paper, we expand LSB matching revisited (LSBMR) image steganography and propose an edge-adaptive scheme that selects the embedding regions according to the size of the secret message and the difference between two consecutive pixels in the cover image. For lower embedding rates, only the sharper edge regions are used, keeping the smoother regions as they are. When the embedding rate increases, more edge regions can be released adaptively for data hiding by adjusting just a few parameters. Experimental results on 6000 natural images, evaluated with three specific and four universal steganalytic algorithms, show that the new scheme enhances security significantly compared with typical LSB-based approaches as well as their edge-adaptive counterparts, such as pixel-value-differencing-based approaches, while preserving higher visual quality of the stego images.
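The region-selection logic above lends itself to a compact sketch. The following Python fragment is a minimal illustration rather than the authors' full LSBMR scheme: it shows how candidate pixel pairs could be chosen by thresholding the difference of consecutive pixels, and how the threshold could be relaxed as the payload grows. The pair traversal order and the function names are our assumptions.

```python
import numpy as np

def select_pairs(cover: np.ndarray, t: int) -> np.ndarray:
    """Indices of consecutive-pixel pairs with |difference| >= t; only these
    sharp-edge pairs would carry message bits."""
    flat = cover.astype(np.int32).ravel()
    pairs = flat[: flat.size // 2 * 2].reshape(-1, 2)  # drop odd trailing pixel
    return np.flatnonzero(np.abs(pairs[:, 0] - pairs[:, 1]) >= t)

def pick_threshold(cover: np.ndarray, n_bits: int, t_max: int = 31) -> int:
    """Largest threshold whose selected regions still hold the payload,
    assuming 2 bits per pixel pair as in LSB matching revisited."""
    for t in range(t_max, -1, -1):       # from sharpest edges toward smooth
        if 2 * select_pairs(cover, t).size >= n_bits:
            return t
    raise ValueError("payload too large for this cover image")
```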

594 citations

Journal Article (DOI)
TL;DR: A new, simple yet effective framework for RDH in the encrypted domain, in which the server manager does not need to design a new RDH scheme to match the encryption algorithm applied by the content owner, and most previously proposed RDH schemes can be applied to the encrypted image directly.
Abstract: Over the past decade and more, hundreds of reversible data hiding (RDH) algorithms have been reported. By exploiting the correlation between neighboring pixels (or coefficients), extra information can be embedded into the host image reversibly. However, these RDH algorithms cannot be applied in the encrypted domain directly, since the correlation between neighboring pixels disappears after encryption. To accomplish RDH in the encrypted domain, specific RDH schemes have been designed according to the encryption algorithm utilized. In this paper, we propose a new, simple yet effective framework for RDH in the encrypted domain. In the proposed framework, the pixels in a plain image are first divided into sub-blocks of size $m\times n$ . Then, with an encryption key, a key stream (a stream of random or pseudorandom bits/bytes that are combined with a plaintext message to produce the encrypted message) is generated, and the pixels in the same sub-block are encrypted with the same key-stream byte. After the stream encryption, the encrypted $m\times n$ sub-blocks are randomly permuted with a permutation key. Since the correlation between neighboring pixels within each sub-block is well preserved in the encrypted domain, most previously proposed RDH schemes can be applied to the encrypted image directly. One of the main merits of the proposed framework is that the RDH scheme is independent of the image encryption algorithm. That is, the server manager (or channel administrator) does not need to design a new RDH scheme according to the encryption algorithm chosen by the content owner; instead, he or she can accomplish the data hiding by applying any of the numerous previously proposed RDH algorithms to the encrypted domain directly.
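The encryption step of the framework is easy to visualize in code. Below is a minimal sketch, not the paper's exact construction: every pixel of an $m\times n$ sub-block is XORed with the same key-stream byte, and the sub-blocks are then permuted. numpy's seeded generator stands in for a real stream cipher and key-derivation scheme, which is an assumption for illustration.

```python
import numpy as np

def encrypt_blocks(img: np.ndarray, m: int, n: int,
                   enc_key: int, perm_key: int):
    """img: uint8 grayscale image whose sides are multiples of m and n."""
    h, w = img.shape
    assert h % m == 0 and w % n == 0, "image must tile into m x n blocks"
    blocks = (img.reshape(h // m, m, w // n, n)
                 .transpose(0, 2, 1, 3)
                 .reshape(-1, m, n))                    # (num_blocks, m, n)
    # one key-stream byte per block, shared by every pixel in that block,
    # so the intra-block pixel correlation survives encryption
    rng = np.random.default_rng(enc_key)
    stream = rng.integers(0, 256, blocks.shape[0], dtype=np.uint8)
    cipher = blocks ^ stream[:, None, None]
    # block-level permutation under the permutation key
    order = np.random.default_rng(perm_key).permutation(blocks.shape[0])
    return cipher[order], order      # keep `order` to invert the permutation
```

Because decryption only XORs the same stream and inverts the permutation, any RDH method that exploits intra-block pixel correlation can run on the encrypted blocks unchanged, which is exactly the independence the framework claims.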

178 citations

Journal Article (DOI)
TL;DR: A new histogram shifting-based RDH scheme for JPEG images is proposed, in which the zero coefficients remain unchanged and only coefficients with values 1 and -1 are expanded to carry message bits, and a block selection strategy based on the number of zero coefficients in each 8 × 8 block can be utilized to adaptively choose DCT coefficients for data hiding.
Abstract: Among the various digital image formats used in daily life, the Joint Photographic Experts Group (JPEG) format is the most popular. Therefore, reversible data hiding (RDH) in JPEG images is important and useful for many applications, such as archive management and image authentication. However, RDH in JPEG images is considerably more difficult than in uncompressed images, because there is less information redundancy in JPEG images than in uncompressed images, and any modification in the compressed domain may introduce more distortion in the host image. Furthermore, along with the embedding capacity and fidelity (visual quality) that have to be considered for uncompressed images, the storage size of the marked JPEG file must also be considered. In this paper, based on the philosophy behind the JPEG encoder and the statistical properties of discrete cosine transform (DCT) coefficients, we present some basic insights into how to select quantized DCT coefficients for RDH. Then, a new histogram shifting-based RDH scheme for JPEG images is proposed, in which the zero coefficients remain unchanged and only coefficients with values 1 and −1 are expanded to carry message bits. Moreover, a block selection strategy based on the number of zero coefficients in each $8\times 8$ block is proposed, which can be utilized to adaptively choose DCT coefficients for data hiding. Experimental results demonstrate that the proposed method easily achieves high embedding capacity and good visual quality. The storage size of the host JPEG file is also well preserved.
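The shifting-and-expansion rule is simple enough to state in a few lines. The sketch below applies it to a flat array of quantized AC coefficients; the paper's block selection strategy and the JPEG entropy-coding round trip are omitted, and the helper names are ours.

```python
import numpy as np

def hs_embed(coeffs: np.ndarray, bits) -> np.ndarray:
    """Embed one bit into every +1/-1 coefficient (caller supplies exactly
    that many bits); larger coefficients are shifted to make room, and
    zeros stay untouched so the JPEG file size is well preserved."""
    out, it = coeffs.copy(), iter(bits)
    for i, c in enumerate(coeffs):
        if c == 1:
            out[i] = c + next(it)      # 1 -> 1 or 2
        elif c == -1:
            out[i] = c - next(it)      # -1 -> -1 or -2
        elif c >= 2:
            out[i] = c + 1             # shift right
        elif c <= -2:
            out[i] = c - 1             # shift left
    return out

def hs_extract(marked: np.ndarray):
    """Invert hs_embed: recover the bits and the original coefficients."""
    bits, restored = [], marked.copy()
    for i, c in enumerate(marked):
        if c in (1, -1):
            bits.append(0)
        elif c in (2, -2):
            bits.append(1)
            restored[i] = 1 if c == 2 else -1
        elif c >= 3:
            restored[i] = c - 1
        elif c <= -3:
            restored[i] = c + 1
    return bits, restored
```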

174 citations

Journal Article (DOI)
TL;DR: This algorithm is based on the observation that in the process of recompressing a JPEG image with the same quantization matrix over and over again, the number of different JPEG coefficients will monotonically decrease in general.
Abstract: Detection of double joint photographic experts group (JPEG) compression is of great significance in the field of digital forensics. Some successful approaches have been presented for detecting double JPEG compression when the primary compression and the secondary compression use different quantization matrices. However, when the primary compression and the secondary compression use the same quantization matrix, no detection method had been reported. In this paper, we present a method that can detect double JPEG compression with the same quantization matrix. Our algorithm is based on the observation that when a JPEG image is recompressed with the same quantization matrix over and over again, the number of different JPEG coefficients, i.e., the quantized discrete cosine transform coefficients that differ between two successive versions, will in general monotonically decrease. For example, the number of different JPEG coefficients between the singly and doubly compressed images is generally larger than the number of different JPEG coefficients between the corresponding doubly and triply compressed images. Via a novel random-perturbation strategy implemented on the JPEG coefficients of the recompressed test image, we can find a “proper” random-perturbation ratio. For different images, this universal “proper” ratio generates a dynamically changing threshold, which can be utilized to discriminate between singly and doubly compressed images. Furthermore, our method has the potential to detect triple JPEG compression, quadruple JPEG compression, and so on.
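The core counting step is easy to prototype. The sketch below is a minimal illustration that leans on two third-party packages as assumptions: Pillow, whose quality="keep" option re-encodes a JPEG with its original quantization tables, and jpegio, which exposes the quantized DCT coefficients of a JPEG file as numpy arrays. The paper's random-perturbation strategy and the derived threshold are omitted.

```python
import numpy as np
import jpegio                      # assumed third-party package
from PIL import Image

def n_changed_coefficients(path: str, recompressed_path: str) -> int:
    """Recompress `path` once with the same quantization matrix and count
    how many quantized DCT coefficients differ between the two files."""
    Image.open(path).save(recompressed_path, "JPEG", quality="keep")
    a = jpegio.read(path).coef_arrays
    b = jpegio.read(recompressed_path).coef_arrays
    return sum(int(np.count_nonzero(x != y)) for x, y in zip(a, b))
```

A detector built on the observation above would compare such counts across successive recompressions: a singly compressed image generally yields a larger count at the first recompression step than a doubly compressed image does at its next step.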

171 citations

Journal Article (DOI)
TL;DR: A new channel selection rule is presented, which can be utilized to find the discrete cosine transform (DCT) coefficients that may introduce minimal detectable distortion for data hiding in JPEG steganography.
Abstract: In this paper, we present a new channel selection rule for joint photographic experts group (JPEG) steganography, which can be utilized to find the discrete cosine transform (DCT) coefficients that may introduce minimal detectable distortion for data hiding. Three factors are considered in our proposed channel selection rule, i.e., the perturbation error (PE), the quantization step (QS), and the magnitude of quantized DCT coefficient to be modified (MQ). Experimental results demonstrate that higher security performance can be obtained in JPEG steganography via our new channel selection rule.
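As a rough illustration of how the three factors could be combined into a per-coefficient cost, consider the sketch below. The multiplicative combination is our assumption for illustration, not the paper's exact rule; it only encodes the stated intuition that a small quantization step, a small coefficient magnitude, and a large perturbation error all make a modification less detectable.

```python
import numpy as np

def channel_costs(raw_dct: np.ndarray, q_table: np.ndarray) -> np.ndarray:
    """raw_dct: unquantized 8x8 DCT coefficients of one block;
    q_table: the 8x8 quantization steps. Lower cost = better channel."""
    scaled = raw_dct / q_table
    quantized = np.round(scaled)
    pe = np.abs(scaled - quantized)      # perturbation (rounding) error, <= 0.5
    mq = np.abs(quantized)               # magnitude of quantized coefficient
    qs = q_table                         # quantization step
    return qs * (1.0 + mq) * (1.0 - pe)  # illustrative combination of QS, MQ, PE
```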

93 citations


Cited by
Journal Article (DOI)
TL;DR: This paper provides a state-of-the-art review and analysis of the different existing steganography methods, along with common standards and guidelines drawn from the literature, offers some recommendations, and advocates an object-oriented embedding mechanism.

1,572 citations

Journal Article (DOI)
TL;DR: A novel general strategy for building steganography detectors for digital images by assembling a rich model of the noise component as a union of many diverse submodels formed by joint distributions of neighboring samples from quantized image noise residuals obtained using linear and nonlinear high-pass filters.
Abstract: We describe a novel general strategy for building steganography detectors for digital images. The process starts with assembling a rich model of the noise component as a union of many diverse submodels formed by joint distributions of neighboring samples from quantized image noise residuals obtained using linear and nonlinear high-pass filters. In contrast to previous approaches, we make the model assembly a part of the training process, driven by samples drawn from the corresponding cover and stego sources. Ensemble classifiers are used to assemble the model as well as the final steganalyzer, due to their low computational complexity and ability to work efficiently with high-dimensional feature spaces and large training sets. We demonstrate the proposed framework on three steganographic algorithms designed to hide messages in images represented in the spatial domain: HUGO, the edge-adaptive algorithm by Luo et al., and optimally coded ternary ±1 embedding. For each algorithm, we apply a simple submodel-selection technique to increase the detection accuracy per model dimensionality and show how the detection saturates with increasing complexity of the rich model. By observing how different submodels engage in detection, an interesting interplay between the embedding and detection is revealed. Steganalysis built around rich image models combined with ensemble classifiers is a promising direction towards automating steganalysis for a wide spectrum of steganographic schemes.
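One submodel of such a rich model can be sketched in a few lines: a linear high-pass residual is quantized and truncated, then summarized by a co-occurrence histogram of neighboring residual samples. The constants below (q = 1, T = 2, horizontal pairs only) are simplified assumptions; the full model unions many such submodels with diverse filters and scan directions.

```python
import numpy as np

def submodel_features(img: np.ndarray, q: float = 1.0, T: int = 2) -> np.ndarray:
    """A single rich-model-style submodel: first-order horizontal residual,
    quantized/truncated to [-T, T], then a joint histogram of horizontal
    residual pairs, flattened into a normalized feature vector."""
    x = img.astype(np.int64)
    r = x[:, 1:] - x[:, :-1]                          # linear high-pass residual
    r = np.clip(np.round(r / q), -T, T).astype(int)   # quantize and truncate
    left, right = r[:, :-1].ravel(), r[:, 1:].ravel()
    k = 2 * T + 1
    hist = np.zeros((k, k))
    np.add.at(hist, (left + T, right + T), 1)         # co-occurrence counts
    return (hist / hist.sum()).ravel()                # (2T+1)^2 features
```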

1,553 citations

Journal Article (DOI)
TL;DR: This paper proposes an alternative, well-known machine learning tool (ensemble classifiers implemented as random forests) and argues that it is ideally suited for steganalysis.
Abstract: Today, the most accurate steganalysis methods for digital media are built as supervised classifiers on feature vectors extracted from the media. The tool of choice for the machine learning seems to be the support vector machine (SVM). In this paper, we propose an alternative, well-known machine learning tool (ensemble classifiers implemented as random forests) and argue that it is ideally suited for steganalysis. Ensemble classifiers scale much more favorably w.r.t. the number of training examples and the feature dimensionality, with performance comparable to the much more complex SVMs. The significantly lower training complexity opens up the possibility for the steganalyst to work with rich (high-dimensional) cover models and train on larger training sets, two key elements that appear necessary to reliably detect modern steganographic algorithms. Ensemble classification is portrayed here as a powerful developer tool that allows fast construction of steganography detectors with markedly improved detection accuracy across a wide range of embedding methods. The power of the proposed framework is demonstrated on three steganographic methods that hide messages in JPEG images.
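The workflow the paper argues for is short to write down. In the sketch below, scikit-learn's random forest stands in for the paper's FLD-based ensemble (an assumption for illustration), and feature extraction, e.g. with a rich model, is presumed to have happened elsewhere.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_steganalyzer(cover_feats: np.ndarray, stego_feats: np.ndarray):
    """Train an ensemble classifier on cover/stego feature vectors and
    report its error on a held-out half of the data."""
    X = np.vstack([cover_feats, stego_feats])
    y = np.r_[np.zeros(len(cover_feats)), np.ones(len(stego_feats))]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                              stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, max_features="sqrt")
    clf.fit(X_tr, y_tr)
    return clf, 1.0 - clf.score(X_te, y_te)   # held-out detection error
```

The appeal over an SVM is practical: no kernel or hyperparameter search is needed to get a competitive baseline, and training time grows modestly with feature dimensionality, which is what makes very rich feature sets usable at all.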

967 citations

Journal Article (DOI)
TL;DR: This paper proposes a universal distortion design called universal wavelet relative distortion (UNIWARD) that can be applied for embedding in an arbitrary domain and demonstrates experimentally using rich models as well as targeted attacks that steganographic methods built using UNIWARD match or outperform the current state of the art in the spatial domain, JPEG domain, and side-informed JPEG domain.
Abstract: Currently, the most successful approach to steganography in empirical objects, such as digital media, is to embed the payload while minimizing a suitably defined distortion function. The design of the distortion is essentially the only task left to the steganographer, since efficient practical codes exist that embed near the payload-distortion bound. The practitioner's goal is to design the distortion to obtain a scheme with high empirical statistical undetectability. In this paper, we propose a universal distortion design called universal wavelet relative distortion (UNIWARD) that can be applied for embedding in an arbitrary domain. The embedding distortion is computed as a sum of relative changes of coefficients in a directional filter bank decomposition of the cover image. The directionality forces the embedding changes to those parts of the cover object that are difficult to model in multiple directions, such as textures or noisy regions, while avoiding smooth regions and clean edges. We demonstrate experimentally, using rich models as well as targeted attacks, that steganographic methods built using UNIWARD match or outperform the current state of the art in the spatial domain, JPEG domain, and side-informed JPEG domain.
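The shape of the UNIWARD cost is compact: a unit pixel change is charged by the reciprocal magnitudes of the directional filter coefficients it disturbs. The sketch below substitutes tiny difference kernels for the paper's Daubechies-8 wavelet filter bank and glosses over the exact correlation geometry, so it is a structural illustration only.

```python
import numpy as np
from scipy.signal import convolve2d

KERNELS = [np.array([[-1.0, 1.0]]),              # horizontal (stand-in)
           np.array([[-1.0], [1.0]]),            # vertical (stand-in)
           np.array([[-1.0, 0.0], [0.0, 1.0]])]  # diagonal (stand-in)

def uniward_like_cost(img: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Per-pixel embedding cost: sum over directions of the relative change
    a unit pixel modification causes in the directional residuals."""
    x = img.astype(np.float64)
    cost = np.zeros_like(x)
    for k in KERNELS:
        resid = convolve2d(x, k, mode="same", boundary="symm")
        # accumulate 1/(sigma + |residual|) back onto the pixels whose
        # change would touch each residual coefficient
        cost += convolve2d(1.0 / (sigma + np.abs(resid)), np.abs(k),
                           mode="same", boundary="symm")
    return cost   # low in textures/noise, high in smooth regions & clean edges
```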

859 citations

Proceedings Article (DOI)
01 Dec 2012
TL;DR: A new approach to defining additive steganographic distortion in the spatial domain, where the change in the output of directional high-pass filters after changing one pixel is weighted and then aggregated using the reciprocal Hölder norm to define the individual pixel costs.
Abstract: This paper presents a new approach to defining additive steganographic distortion in the spatial domain. The change in the output of directional high-pass filters after changing one pixel is weighted and then aggregated using the reciprocal Hölder norm to define the individual pixel costs. In contrast to other adaptive embedding schemes, the aggregation rule is designed to force the embedding changes to highly textured or noisy regions and to avoid clean edges. Consequently, the new embedding scheme appears markedly more resistant to steganalysis using rich models. The actual embedding algorithm is realized using syndrome-trellis codes to minimize the expected distortion for a given payload.
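The aggregation rule can be sketched in the same spirit. Per-direction suitabilities are computed by spreading the absolute residuals back through the filter support, and their reciprocals are then combined; the plain summation below is one simple instance of the reciprocal Hölder aggregation, and the toy filter bank is our assumption, not the paper's wavelet construction.

```python
import numpy as np
from scipy.signal import convolve2d

FILTERS = [np.array([[-1.0, 1.0]]),              # horizontal difference
           np.array([[-1.0], [1.0]]),            # vertical difference
           np.array([[0.0, -1.0], [1.0, 0.0]])]  # diagonal difference

def wow_like_costs(img: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Per-pixel cost that is low only where every direction is noisy:
    predictability in a single direction (a clean edge) drives it up."""
    x = img.astype(np.float64)
    cost = np.zeros_like(x)
    for f in FILTERS:
        resid = np.abs(convolve2d(x, f, mode="same", boundary="symm"))
        # suitability: how strongly local content excites this direction
        xi = convolve2d(resid, np.abs(f), mode="same", boundary="symm")
        cost += 1.0 / (xi + eps)    # reciprocal aggregation across directions
    return cost
```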

728 citations