
Showing papers on "JPEG published in 2020"


Journal ArticleDOI
TL;DR: Experimental results show that the proposed convolutional neural network based framework for detecting image operator chains not only achieves strong detection performance but also distinguishes the operation order in cases that previous works were unable to identify.
Abstract: Many forensic techniques have recently been developed to determine whether an image has undergone a specific manipulation operation. When multiple consecutive operations are applied to images, forensic analysts need not only to identify the existence of each manipulation operation, but also to determine the order of the involved operations. However, image operator chain detection is still a challenging problem. In this paper, an order forensics framework for detecting image operator chains based on a convolutional neural network (CNN) is presented. A two-stream CNN architecture is designed to capture both tampering artifact evidence and local noise residual evidence. Specifically, a new CNN-based method is proposed for forensically detecting a chain made of two image operators, which automatically learns manipulation detection features directly from image data. Further, we empirically investigate the robustness of the proposed method in two practical scenarios: forensic investigators have no access to the operating parameters, and manipulations are applied to a JPEG compressed image. Experimental results show that the proposed framework not only achieves strong detection performance but also distinguishes the operation order in cases that previous works were unable to identify.
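
As a rough illustration of what such a two-stream design can look like, the following PyTorch sketch feeds one stream the grayscale patch itself (tampering artifacts) and the other a fixed high-pass residual of it (local noise), then fuses both feature vectors for chain classification. The residual filter, layer sizes, and four-class layout are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical two-stream CNN sketch for operator-chain detection (illustrative only).
import torch
import torch.nn as nn

class TwoStreamChainCNN(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        # Fixed high-pass filter that exposes local noise residuals.
        hp = torch.tensor([[[-1., 2., -1.], [2., -4., 2.], [-1., 2., -1.]]]) / 4.0
        self.residual = nn.Conv2d(1, 1, 3, padding=1, bias=False)
        self.residual.weight = nn.Parameter(hp.unsqueeze(0), requires_grad=False)

        def stream():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4))

        self.artifact_stream = stream()   # sees the patch itself
        self.noise_stream = stream()      # sees the high-pass residual
        self.classifier = nn.Linear(2 * 32 * 4 * 4, num_classes)

    def forward(self, x):                 # x: (B, 1, H, W) grayscale patches
        a = self.artifact_stream(x)
        n = self.noise_stream(self.residual(x))
        feat = torch.cat([a.flatten(1), n.flatten(1)], dim=1)
        return self.classifier(feat)

# Classes might be e.g. unaltered, operator A only, operator B only, A-then-B vs. B-then-A.
logits = TwoStreamChainCNN()(torch.randn(2, 1, 64, 64))
print(logits.shape)   # torch.Size([2, 4])
```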

131 citations


Journal ArticleDOI
TL;DR: This paper models and implements a discrete wavelet transform (DWT) based deep learning model for image compression in IoUT, validates the DWT–CNN model through an extensive set of experiments, and shows that it is superior to existing methods such as super-resolution convolutional neural networks (SRCNN), JPEG, and JPEG2000 in terms of both compression performance and reconstructed image quality.
Abstract: Recently, advances in the Internet of Things (IoT) have expanded its application to underwater environments, leading to the development of a new field, the Internet of Underwater Things (IoUT). It offers a broad range of applications such as atmosphere observation, habitat monitoring of sea animals, defense, and disaster prediction. Transmission of images captured by smart underwater objects is very challenging due to the nature of the underwater environment and necessitates an efficient image transmission strategy for IoUT. In this paper, we model and implement a discrete wavelet transform (DWT) based deep learning model for image compression in IoUT. To achieve effective compression with better reconstructed image quality, a convolutional neural network (CNN) is used at both the encoding and decoding sides. We validate the DWT–CNN model through an extensive set of experiments and show that the presented deep learning model is superior to existing methods such as super-resolution convolutional neural networks (SRCNN), JPEG, and JPEG2000 in terms of compression performance as well as reconstructed image quality. The DWT–CNN model attains an average peak signal-to-noise ratio (PSNR) of 53.961 with an average space saving (SS) of 79.7038%.
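
For reference, the two figures of merit quoted at the end of the abstract can be computed as below; this is a minimal sketch of the commonly used definitions (assumed here, since the paper does not restate them), not code from the paper.

```python
# Assumed standard definitions of PSNR and space saving (SS).
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def space_saving(original_bytes, compressed_bytes):
    # SS = 1 - compressed size / original size, expressed as a percentage.
    return 100.0 * (1.0 - compressed_bytes / original_bytes)

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
rec = np.clip(img + np.random.normal(0, 2, img.shape), 0, 255).astype(np.uint8)
print(psnr(img, rec), space_saving(65536, 13300))
```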

125 citations


Journal ArticleDOI
TL;DR: The proposed watermarking method, based on 4 × 4 image blocks using the redundant wavelet transform with singular value decomposition and human visual system (HVS) characteristics expressed by entropy values, provides high robustness, especially under image processing, JPEG2000, and JPEG XR attacks.
Abstract: With the rapid growth of internet technology, image watermarking has become a popular copyright protection method for digital images. In this paper, we propose a watermarking method based on 4 × 4 image blocks using redundant wavelet transform with singular value decomposition, considering human visual system (HVS) characteristics expressed by entropy values. The blocks which have the lower HVS entropies are selected for embedding the watermark. The watermark is embedded by examining the U(2,1) and U(3,1) components of the orthogonal matrix obtained from singular value decomposition of the redundant wavelet transformed image block, where an optimal threshold value based on the trade-off between robustness and imperceptibility is used. In order to provide additional security, a binary watermark is scrambled by Arnold transform before the watermark is embedded into the host image. The proposed scheme is tested under various image processing, compression and geometrical attacks. The test results are compared to other watermarking schemes that use SVD techniques. The experimental results demonstrate that our method can achieve higher imperceptibility and robustness under different types of attacks compared to existing schemes. Our method provides high robustness especially under image processing attacks, JPEG2000 and JPEG XR attacks. It has been observed that the proposed method achieves better performance over the recent existing watermarking schemes.
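
A hedged sketch of the U-component embedding idea mentioned above is given below: one watermark bit is encoded by pushing apart the magnitudes of the second and third entries of the first column of U (the U(2,1) and U(3,1) components) by a threshold T. The RDWT step, HVS-based block selection, and Arnold scrambling are omitted, and the exact modification rule is an assumption for illustration, not the paper's.

```python
# Illustrative U-component embedding/extraction on a single 4x4 block.
import numpy as np

def embed_bit(block, bit, T=0.04):
    U, S, Vt = np.linalg.svd(block.astype(np.float64), full_matrices=False)
    u21, u31 = U[1, 0], U[2, 0]
    avg = (abs(u21) + abs(u31)) / 2.0
    # Push the two magnitudes apart in a direction chosen by the bit.
    if bit == 1:
        U[1, 0] = np.sign(u21) * (avg + T / 2.0)
        U[2, 0] = np.sign(u31) * (avg - T / 2.0)
    else:
        U[1, 0] = np.sign(u21) * (avg - T / 2.0)
        U[2, 0] = np.sign(u31) * (avg + T / 2.0)
    return U @ np.diag(S) @ Vt          # watermarked block (clipping omitted)

def extract_bit(block):
    U, _, _ = np.linalg.svd(block.astype(np.float64), full_matrices=False)
    return 1 if abs(U[1, 0]) >= abs(U[2, 0]) else 0

blk = np.random.rand(4, 4) * 255
print(extract_bit(embed_bit(blk, 1)), extract_bit(embed_bit(blk, 0)))  # typically prints: 1 0
```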

76 citations


Journal ArticleDOI
TL;DR: A lightweight Dense Dilated Fusion Network (DDFN) is designed as an embodiment of the boosting unit; it addresses gradient vanishing during training caused by network cascading while making efficient use of a limited number of parameters.
Abstract: We propose a Deep Boosting Framework (DBF) for real-world image denoising by integrating the deep learning technique into the boosting algorithm. The DBF replaces conventional handcrafted boosting units with elaborate convolutional neural networks, which brings notable advantages in terms of both performance and speed. We design a lightweight Dense Dilated Fusion Network (DDFN) as an embodiment of the boosting unit, which addresses the vanishing of gradients during training due to the cascading of networks while promoting the efficiency of limited parameters. The capabilities of the proposed method are first validated on several representative simulation tasks, including non-blind and blind Gaussian denoising and JPEG image deblocking. We then focus on a practical scenario to tackle the complex and challenging real-world noise. To facilitate learning-based methods, including ours, we build a new Real-world Image Denoising (RID) dataset, which contains 200 pairs of high-resolution images with diverse scene content under various shooting conditions. Moreover, we conduct a comprehensive analysis of the domain shift issue for real-world denoising and propose an effective one-shot domain transfer scheme to address this issue. Comprehensive experiments on widely used benchmarks demonstrate that the proposed method significantly surpasses existing methods on the task of real-world image denoising. Code and dataset are available at https://github.com/ngchc/deepBoosting .

69 citations


Proceedings Article
01 Jan 2020
TL;DR: A linearly-assembled pixel-adaptive regression network (LAPAR) is proposed, which casts the direct LR to HR mapping learning into a linear coefficient regression task over a dictionary of multiple predefined filter bases, which renders the model highly lightweight and easy to optimize while achieving state-of-the-art results on SISR benchmarks.
Abstract: Single image super-resolution (SISR) deals with a fundamental problem of upsampling a low-resolution (LR) image to its high-resolution (HR) version. The last few years have witnessed impressive progress propelled by deep learning methods. However, one critical challenge faced by existing methods is to strike a sweet spot between deep model complexity and resulting SISR quality. This paper addresses this pain point by proposing a linearly-assembled pixel-adaptive regression network (LAPAR), which casts the direct LR to HR mapping learning into a linear coefficient regression task over a dictionary of multiple predefined filter bases. Such a parametric representation renders our model highly lightweight and easy to optimize while achieving state-of-the-art results on SISR benchmarks. Moreover, based on the same idea, LAPAR is extended to tackle other restoration tasks, e.g., image denoising and JPEG image deblocking, and again yields strong performance. The code is available at this https URL.
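
The core "linearly-assembled" step can be written in a few lines; the sketch below (my own illustration, not the released code at the paper's URL) shows how per-pixel coefficients predicted by a network are combined with a fixed dictionary of filter bases to form the restored image.

```python
# Illustrative linear assembly over a filter-base dictionary (names and sizes assumed).
import torch
import torch.nn.functional as F

def assemble(upsampled, coeffs, bases):
    """upsampled: (B,1,H,W) bicubic-upsampled LR image
       coeffs:    (B,K,H,W) per-pixel regression weights predicted by a CNN
       bases:     (K,1,k,k) predefined filter dictionary (e.g. Gaussians)"""
    K, _, k, _ = bases.shape
    filtered = F.conv2d(upsampled, bases, padding=k // 2)   # (B,K,H,W)
    return (coeffs * filtered).sum(dim=1, keepdim=True)     # (B,1,H,W)

B, K, H, W = 1, 8, 32, 32
out = assemble(torch.randn(B, 1, H, W),
               torch.softmax(torch.randn(B, K, H, W), 1),
               torch.randn(K, 1, 5, 5))
print(out.shape)   # torch.Size([1, 1, 32, 32])
```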

54 citations


Journal ArticleDOI
TL;DR: A series of experimental results demonstrates that the proposed algorithm can extract embedded messages with significantly higher accuracy after different attacks than state-of-the-art adaptive steganography and robust watermarking algorithms, while maintaining good resistance to detection.
Abstract: Considering that traditional image steganography techniques suffer from a potential risk of failure over lossy channels, an enhanced adaptive steganography that is robust against multiple image processing attacks is proposed, while maintaining good detection resistance. First, a robust domain construction method is proposed that uses robust element extraction and optimal element modification, and can be applied to both spatial and JPEG images. Then, a robust steganography is proposed based on “Robust Domain Constructing + RS-STC Codes,” combined with cover selection, robust cover extraction, message coding, and embedding with minimized costs. In addition, to provide a theoretical basis for message extraction integrity, the fault tolerance of the proposed algorithm is derived using an error model based on burst errors and decoding damage. Finally, building on a parameter discussion for robust domain construction, performance experiments are conducted, and recommended coding parameters are given for lossy channels with different attacks using the analytical results for fault tolerance. A series of experimental results demonstrates that the proposed algorithm can extract embedded messages with significantly higher accuracy after different attacks, such as compression, noising, and scaling, than state-of-the-art adaptive steganography and robust watermarking algorithms, while maintaining good resistance to detection.

48 citations


Journal ArticleDOI
TL;DR: This paper proposes a method to estimate the downscaling factors of pre-JPEG compressed images that have been downscaled after JPEG compression; it adopts the difference-image extremum interval histogram and combines it with the spectral method to obtain the final estimate.
Abstract: Resampling detection is one of the most important topics in image forensics, and the most widely used method in resampling detection is spectral analysis. Since JPEG is the most widely used image format, it is reasonable to assume that resampling operations are performed on JPEG images. JPEG block artifacts introduce severe interference into spectrum-based methods and degrade their detection performance. In addition, the spectral characteristics of downscaling scenarios are very weak, so the detection of downscaling still presents a considerable challenge to forensic applications. In this paper, we propose a method to estimate the downscaling factors of pre-JPEG compressed images when the image is downscaled after JPEG compression. We first analyze the spectrum of scaled images and give an exact formulation of how the scaling factors influence the appearance of periodic artifacts. The expected positions of the characteristic resampling peaks are analytically derived. For the downscaling scenario, the shifted JPEG block artifacts produce periodic peaks, which cause misdetection of the characteristic peaks. We find that the interval between adjacent extrema of difference images follows a geometric distribution and that, for JPEG images, the distribution has periodic peaks. Hence, we adopt the difference-image extremum interval histogram and combine it with the spectral method to obtain the final estimate. The experimental results demonstrate that the proposed detection method outperforms some state-of-the-art methods.
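
To make the key statistic more concrete, here is a rough sketch, based on one reading of the abstract, of a difference-image extremum interval histogram: local extrema are located along each row of a horizontal difference image and the gaps between neighbouring extrema are histogrammed; for JPEG images this histogram exhibits periodic peaks. Details such as the differencing direction and the handling of plateaus are assumptions.

```python
# Assumed construction of the difference-image extremum interval histogram.
import numpy as np

def extremum_interval_histogram(image, max_interval=32):
    diff = np.diff(image.astype(np.int32), axis=1)          # horizontal difference image
    hist = np.zeros(max_interval + 1, dtype=np.int64)
    for row in diff:
        interior = row[1:-1]
        is_ext = ((interior > row[:-2]) & (interior > row[2:])) | \
                 ((interior < row[:-2]) & (interior < row[2:]))
        pos = np.flatnonzero(is_ext) + 1                     # positions of local extrema
        gaps = np.diff(pos)                                  # intervals between extrema
        for g in gaps[(gaps >= 1) & (gaps <= max_interval)]:
            hist[g] += 1
    return hist

img = np.random.randint(0, 256, (64, 128), dtype=np.uint8)
print(extremum_interval_histogram(img)[:10])
```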

47 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an end-to-end image compression system based on compressive sensing, which integrates the conventional scheme of compressive sampling (on the entire image) and reconstruction with quantization and entropy coding.
Abstract: We present an end-to-end image compression system based on compressive sensing. The presented system integrates the conventional scheme of compressive sampling (on the entire image) and reconstruction with quantization and entropy coding. The compression performance, in terms of decoded image quality versus data rate, is shown to be comparable with JPEG and significantly better at the low rate range. We study the parameters that influence the system performance, including (i) the choice of sensing matrix, (ii) the trade-off between quantization and compression ratio, and (iii) the reconstruction algorithms. We propose an effective method to select, among all possible combinations of quantization step and compression ratio, the ones that yield the near-best quality at any given bit rate. Furthermore, our proposed image compression system can be directly used in the compressive sensing camera, e.g., the single pixel camera, to construct a hardware compressive sampling system.
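
A minimal sketch of the sensing-plus-quantization front end described above is shown below, with a random Gaussian sensing matrix applied to the vectorized image and uniform scalar quantization of the measurements; the entropy coder and the reconstruction algorithm are omitted, and all parameter values are illustrative.

```python
# Illustrative compressive sampling + uniform quantization of the measurements.
import numpy as np

rng = np.random.default_rng(0)
n = 64 * 64                      # image as a length-n vector
ratio = 0.25                     # compression ratio m / n
m = int(ratio * n)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix

x = rng.integers(0, 256, n).astype(np.float64)   # stand-in image vector
y = Phi @ x                                      # compressive measurements
step = 8.0                                       # quantization step
y_q = np.round(y / step).astype(np.int32)        # symbols that would be entropy-coded
y_hat = y_q * step                               # dequantized measurements at the decoder
print(m, float(np.mean((y - y_hat) ** 2)))       # quantization MSE on the measurements
```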

47 citations


Journal ArticleDOI
TL;DR: The main originality of this scheme lies in its ability to provide watermarking-based security services from both encrypted and compressed image bitstreams without having to decrypt or decompress them, even partially.
Abstract: In this paper, we propose the first joint watermarking-encryption-compression scheme for protecting medical images. The main originality of this scheme lies in its ability to provide watermarking-based security services from both encrypted and compressed image bitstreams without having to decrypt or decompress them, even partially. More precisely, there is no need to decrypt the encrypted image bitstream or to decode the compressed image bitstream in order to extract watermarks. A second contribution is that it combines in a single algorithm the bit-substitution watermarking modulation with JPEG-LS and the AES block cipher in its CBC mode. Decompression, decryption, and message extraction are conducted separately. Doing so makes our scheme compliant with the medical image standard DICOM. This scheme allows tracing images and controlling their reliability (i.e., based on proofs of image integrity and authenticity) either from the encrypted domain or from the compressed one. Experiments conducted on broad sets of retina and ultrasound medical images demonstrate the capability of our system to securely make a message available in both the encrypted and compressed domains while minimizing image distortion. The achieved watermarking capacities are large enough to support several watermarking-based security services at the same time.

47 citations


Journal ArticleDOI
TL;DR: The proposed scheme first decomposes the image into two planes and then employs prediction error expansion (PEE) to embed the secret data into the HSB plane; the experimental results show that the proposed scheme outperforms previous works.

42 citations


Journal ArticleDOI
TL;DR: An effective method is proposed to detect recompression in color images by using the conversion, rounding, and truncation errors of pixels in a spherical coordinate system; experimental results show that the performance of the proposed method is better than that of existing methods.
Abstract: Detection of double Joint Photographic Experts Group (JPEG) compression is an important part of image forensics. Although methods have been presented in past studies for detecting double JPEG compression with different quantization matrices, the detection of double JPEG compression with the same quantization matrix is still a challenging problem. In this paper, an effective method is proposed to detect recompression in color images by using the conversion error, rounding error, and truncation error of pixels in a spherical coordinate system. The randomness of truncation, rounding, and quantization errors results in random conversion errors. The number of pixels with conversion errors is used to extract six-dimensional features. The truncation and rounding errors of a pixel in its three channels are mapped to the spherical coordinate system based on the relation of a color image to the pixel values in the three channels; they are converted into an amplitude and angles to extract 30-dimensional features, and 8-dimensional auxiliary features are extracted from the number of special points and special blocks. As a result, a total of 44-dimensional features is used for classification with a support vector machine (SVM). Thereafter, the support vector machine recursive feature elimination (SVM-RFE) method is used to improve the classification accuracy. The experimental results show that the performance of the proposed method is better than that of existing methods.
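
The final classification stage can be sketched as follows with scikit-learn, using synthetic stand-in data in place of the paper's 44-dimensional error-statistics features; the feature values, split, and SVM settings are placeholders.

```python
# SVM classification with SVM-RFE feature pruning over 44-D features (synthetic data).
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split

X = np.random.rand(400, 44)          # 44-D features per image (placeholder values)
y = np.random.randint(0, 2, 400)     # 1 = double-compressed, 0 = single-compressed
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=30).fit(X_tr, y_tr)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(selector.transform(X_tr), y_tr)
print(clf.score(selector.transform(X_te), y_te))   # ~0.5 on random data
```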

Posted Content
TL;DR: This work creates a novel architecture that is parameterized by the JPEG file's quantization matrix, which allows a single model to achieve state-of-the-art performance over models trained for specific quality settings.
Abstract: The JPEG image compression algorithm is the most popular method of image compression because of its ability to achieve large compression ratios. However, to achieve such high compression, information is lost. For aggressive quantization settings, this leads to a noticeable reduction in image quality. Artifact correction has been studied in the context of deep neural networks for some time, but the current state-of-the-art methods require a different model to be trained for each quality setting, greatly limiting their practical application. We solve this problem by creating a novel architecture which is parameterized by the JPEG file's quantization matrix. This allows our single model to achieve state-of-the-art performance over models trained for specific quality settings.

Journal ArticleDOI
TL;DR: This paper examines the ability of deep image compressors to be “aware” of the additional objective of raw reconstruction and describes a general framework that enables deep networks targeting image compression to jointly consider both image fidelity errors and raw reconstruction errors.
Abstract: Deep learning-based image compressors are actively being explored in an effort to supersede conventional image compression algorithms, such as JPEG. Conventional and deep learning-based compression algorithms focus on minimizing image fidelity errors in the nonlinear standard RGB (sRGB) color space. However, for many computer vision tasks, the sensor's linear raw-RGB image is desirable. Recent work has shown that the original raw-RGB image can be reconstructed using only small amounts of metadata embedded inside the JPEG image [1] . However, [1] relied on the conventional JPEG encoding that is unaware of the raw-RGB reconstruction task. In this paper, we examine the ability of deep image compressors to be “aware” of the additional objective of raw reconstruction. Towards this goal, we describe a general framework that enables deep networks targeting image compression to jointly consider both image fidelity errors and raw reconstruction errors. We describe this approach in two scenarios: (1) the network is trained from scratch using our proposed joint loss, and (2) a network originally trained only for sRGB fidelity loss is later fine-tuned to incorporate our raw reconstruction loss. When compared to sRGB fidelity-only compression, our combined loss leads to appreciable improvements in PSNR of the raw reconstruction with only minor impact on sRGB fidelity as measured by MS-SSIM.
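
A minimal sketch of such a joint objective, in my own formulation rather than the authors' code, is a weighted sum of the sRGB fidelity loss and the raw reconstruction loss:

```python
# Illustrative joint loss combining sRGB fidelity and raw-RGB reconstruction terms.
import torch
import torch.nn.functional as F

def joint_loss(srgb_pred, srgb_target, raw_pred, raw_target, lambda_raw=0.5):
    fidelity = F.mse_loss(srgb_pred, srgb_target)   # standard sRGB fidelity term
    raw_rec = F.mse_loss(raw_pred, raw_target)      # raw-RGB reconstruction term
    return fidelity + lambda_raw * raw_rec          # lambda_raw balances the two objectives

loss = joint_loss(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64),
                  torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
print(float(loss))
```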

Proceedings ArticleDOI
22 Jun 2020
TL;DR: A novel method for steganography in JPEG-compressed images is presented, extending the so-called MiPOD scheme, which is based on minimizing the detection accuracy of the most-powerful test under a Gaussian model of independent DCT coefficients; the method is also applied to the problem of embedding into color JPEG images.
Abstract: This short paper presents a novel method for steganography in JPEG-compressed images, extending the so-called MiPOD scheme based on minimizing the detection accuracy of the most-powerful test under a Gaussian model of independent DCT coefficients. This method is also applied to address the problem of embedding into color JPEG images. The main issue in this case is that the color channels are not processed in the same way; hence, a statistically based approach is expected to bring significant improvements when heterogeneous channels need to be considered together. The results presented show that, on the one hand, the extension of MiPOD to the JPEG domain, referred to as J-MiPOD, is very competitive compared with current state-of-the-art embedding schemes. On the other hand, we also show that addressing the problem of embedding in JPEG color images is far from straightforward and that future work is required to better understand how to deal with color channels in JPEG images.

Proceedings ArticleDOI
06 Dec 2020
TL;DR: This paper investigates pre-trained computer-vision deep architectures, such as the EfficientNet, MixNet, and ResNet for steganalysis, and demonstrates that avoiding pooling/stride in the first layers enables better performance, as noticed by other top competitors.
Abstract: In this paper, we investigate pre-trained computer-vision deep architectures, such as the EfficientNet, MixNet, and ResNet for steganalysis. These models pre-trained on ImageNet can be rather quickly refined for JPEG steganalysis while offering significantly better performance than CNNs designed purposely for steganalysis, such as the SRNet, trained from scratch. We show how different architectures compare on the ALASKA II dataset. We demonstrate that avoiding pooling/stride in the first layers enables better performance, as noticed by other top competitors, which aligns with the design choices of many CNNs designed for steganalysis. We also show how pre-trained computer-vision deep architectures perform on the ALASKA I dataset.
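
The "avoid pooling/stride in the first layers" observation can be illustrated on any ImageNet-pretrained backbone; the sketch below uses a torchvision ResNet-18 simply as a stand-in for the EfficientNet/MixNet/ResNet models mentioned above, removing the early stride and pooling so that faint stego signals are not destroyed before the first convolutions. This is not the competitors' training setup, only a sketch of the design point.

```python
# Adapting a pre-trained backbone for steganalysis: keep early resolution, two-class head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)  # downloads ImageNet weights
model.conv1.stride = (1, 1)        # keep full resolution so faint stego noise survives
model.maxpool = nn.Identity()      # drop the early pooling stage
model.fc = nn.Linear(model.fc.in_features, 2)   # cover vs. stego

with torch.no_grad():
    print(model(torch.randn(1, 3, 256, 256)).shape)   # torch.Size([1, 2])
```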

Journal ArticleDOI
TL;DR: This paper presents a JPEG crypto-compression method that allows a JPEG crypto-compressed image to be recompressed several times, without any information about the secret key or the original image content, and produces an image whose visual quality after decryption is very similar to that of the original picture.
Abstract: The rising popularity of social networks and cloud computing has greatly increased the number of JPEG compressed image exchanges. In this context, the security of the transmission channel and/or the cloud storage can be susceptible to privacy leaks. Selective encryption is an efficient tool to mask the image content and to protect confidentiality while remaining format-compliant. However, image processing in the encrypted domain is not a trivial task. In this paper, we present a JPEG crypto-compression method which allows us to recompress a JPEG crypto-compressed image several times, without any information about the secret key or the original image content. Indeed, using the proposed method, each recompression can be done directly on the JPEG bitstream by removing the last bit of the code representation of each non-zero coefficient, adapting the entropy-coded part, and slightly modifying the quantization table. This method is efficient for recompressing JPEG crypto-compressed images in terms of compression ratio. Moreover, according to the obtained results, the decryption of the recompressed image produces an image with a visual quality very similar to that of the original image.

Journal ArticleDOI
TL;DR: A novel scheme for JPEG steganalysis is proposed that designs diverse base filters able to obtain image residuals from various directions, and introduces a cascade filter generation strategy to construct a set of high-order cascade filters from the base filters.
Abstract: Steganalysis is a technique for detecting the existence of secret information hidden in digital media. In this paper, we propose a novel scheme for JPEG steganalysis. In this scheme, we first design the diverse base filters which are able to obtain the image residuals from various directions. Then, we propose a cascade filter generation strategy to construct a set of high order cascade filters from the base filters. We further select the cascade filters with the maximum diversity. The selected filters are convolved with the decompressed JPEG image to obtain residuals which capture the subtle embedding traces. The residuals, termed as the maximum diversity cascade filter residual, are eventually used to extract features to train an ensemble classifier for classification. The experiments are carried out on the detection of stego-images generated using common JPEG steganographic schemes, the results of which demonstrate the effectiveness of the proposed scheme for JPEG steganalysis.
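
Under one reading of the abstract, a cascade filter is simply the composition (convolution) of several base filters, and the residual is the decompressed image filtered by it; the sketch below illustrates this with toy difference filters. The actual base filter set and the diversity-based selection are not reproduced here.

```python
# Composing base filters into a cascade filter and extracting a residual (toy example).
import numpy as np
from scipy.signal import convolve2d

base = [np.array([[1, -1]]),        # horizontal first-order difference
        np.array([[1], [-1]]),      # vertical first-order difference
        np.array([[1, -2, 1]])]     # horizontal second-order difference

def cascade(filters):
    out = filters[0]
    for f in filters[1:]:
        out = convolve2d(out, f)    # compose base filters into one kernel
    return out

img = np.random.rand(32, 32)                    # stand-in for a decompressed JPEG image
kernel = cascade([base[0], base[1]])            # a second-order cascade filter
residual = convolve2d(img, kernel, mode="same") # residual used for feature extraction
print(kernel.shape, residual.shape)
```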

Journal ArticleDOI
TL;DR: A pseudo-blind system is proposed that estimates the quality factor of a given compressed image and then applies a network trained for a similar quality factor; the experimental results show that the proposed pseudo-blind network performs better than the blind one in various cases and requires fewer computations.
Abstract: This paper presents methods based on convolutional neural networks (CNNs) for removing compression artifacts. We modify the Inception module for the image restoration problem and use it as a building block for constructing blind and non-blind artifact removal networks. It is known that a CNN trained in a non-blind scenario (known compression quality factor) performs better than the one trained in a blind scenario (unknown factor), and our network is not an exception. However, the blind system is more practical because the compression quality factor is not always available or does not reflect the actual quality when the image is a transcoded or requantized image. Hence, in this paper, we also propose a pseudo-blind system that estimates the quality factor for a given compressed image and then applies a network that is trained with a similar quality factor. For this purpose, we propose a CNN that estimates the compression quality factor and prepare several non-blind artifact removal networks that are trained for some specific compression quality factors. We train the networks and conduct experiments on widely used compression standards, such as JPEG, MPEG-2, H.264, and HEVC. In addition, we conduct experiments for dynamically changing and transcoded videos to demonstrate the effectiveness of the quality estimation method. The experimental results show that the proposed pseudo-blind network performs better than the blind one for the various cases stated above and requires fewer computations.
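
The pseudo-blind pipeline can be pictured as an estimator followed by a dispatcher over a bank of quality-specific restorers, as in the schematic sketch below; the module names, quality-factor buckets, and toy networks are illustrative assumptions, not the paper's models.

```python
# Schematic pseudo-blind artifact removal: estimate QF, dispatch to the nearest prepared network.
import torch
import torch.nn as nn

PREPARED_QFS = [10, 20, 30, 40, 50, 60, 70, 80, 90]   # assumed bucket choices

class PseudoBlindRestorer(nn.Module):
    def __init__(self, qf_estimator, restorers):
        super().__init__()
        self.qf_estimator = qf_estimator
        self.restorers = nn.ModuleDict({str(q): m for q, m in restorers.items()})

    def forward(self, x):
        qf = float(self.qf_estimator(x).mean())            # regressed quality factor
        nearest = min(PREPARED_QFS, key=lambda q: abs(q - qf))
        return self.restorers[str(nearest)](x)             # dispatch to the matching network

# Toy stand-ins so the sketch runs end to end.
est = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                    nn.Flatten(), nn.Linear(4, 1))
restorers = {q: nn.Conv2d(1, 1, 3, padding=1) for q in PREPARED_QFS}
out = PseudoBlindRestorer(est, restorers)(torch.randn(1, 1, 64, 64))
print(out.shape)
```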

Journal ArticleDOI
TL;DR: A novel steganalysis method for JPEG images is introduced that is universal in the sense that it reliably detects any type of steganography as well as small payloads; the best detection in practice is obtained with machine learning tools.
Abstract: A novel steganalysis method for JPEG images is introduced that is universal in the sense that it reliably detects any type of steganography as well as small payloads. It is limited to quality factors 99 and 100. The detection statistic is formed from the rounding errors in the spatial domain after decompressing the JPEG image. The attack works whenever, during compression, the discrete cosine transform is applied to an integer-valued signal. Reminiscent of the well-established JPEG compatibility steganalysis, we call the new approach the “reverse JPEG compatibility attack.” While the attack is introduced and analyzed under simplifying assumptions using reasoning based on statistical signal detection, the best detection in practice is obtained with machine learning tools. Experiments on diverse datasets of both grayscale and color images, five steganographic schemes, and a variety of JPEG compressors demonstrate the universality and applicability of this steganalysis method in practice.
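
The idea behind the detection statistic can be demonstrated on synthetic data: at quality 100 the de-quantized DCT blocks decompress to nearly integer pixel values, so the spatial-domain rounding errors are tiny, while DCT-domain embedding changes make them look like noise. The toy sketch below uses the variance of these rounding errors as the statistic; it is a simplification of the attack, not the paper's detector.

```python
# Toy demonstration of spatial rounding errors as a detection statistic.
import numpy as np
from scipy.fft import dctn, idctn

def rounding_error_variance(dct_blocks):
    """dct_blocks: (N, 8, 8) de-quantized DCT coefficient blocks."""
    spatial = idctn(dct_blocks, axes=(1, 2), norm="ortho") + 128.0   # back to pixel range
    errors = spatial - np.round(spatial)                              # spatial rounding errors
    return float(np.var(errors))

rng = np.random.default_rng(1)
# Cover-like blocks: DCT of integer pixel blocks, so rounding errors are essentially zero.
pix = rng.integers(0, 256, (100, 8, 8)).astype(np.float64) - 128.0
cover = dctn(pix, axes=(1, 2), norm="ortho")
# Stego-like blocks: sparse +-1 changes applied in the DCT domain.
stego = cover + rng.choice([-1.0, 0.0, 1.0], size=cover.shape, p=[0.05, 0.9, 0.05])
print(rounding_error_variance(cover), rounding_error_variance(stego))  # small vs. larger
```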

Proceedings ArticleDOI
30 May 2020
TL;DR: This work proposes JPEG for ACTivations (JPEG-ACT), a lossy activation offload accelerator for training CNNs that works by discarding redundant spatial information, and shows how to optimize the JPEG algorithm so as to ensure convergence and maintain accuracy during training.
Abstract: A reduction in the time it takes to train machine learning models can be translated into improvements in accuracy. An important factor that increases training time in deep neural networks (DNNs) is the need to store large amounts of temporary data during the back-propagation algorithm. To enable training very large models, this temporary data can be offloaded from limited-size GPU memory to CPU memory, but this data movement incurs large performance overheads. We observe that in one important class of DNNs, convolutional neural networks (CNNs), there is spatial correlation in these temporary values. We propose JPEG for ACTivations (JPEG-ACT), a lossy activation offload accelerator for training CNNs that works by discarding redundant spatial information. JPEG-ACT adapts the well-known JPEG algorithm from 2D image compression to activation compression. We show how to optimize the JPEG algorithm so as to ensure convergence and maintain accuracy during training. JPEG-ACT achieves 2.4× higher training performance compared to prior offload accelerators, and 1.6× compared to prior activation compression methods. An efficient hardware implementation allows JPEG-ACT to consume less than 1% of the power and area of a modern GPU.

Journal ArticleDOI
TL;DR: A novel coding framework for higher-dimensional image modalities, called Steered Mixture-of-Experts (SMoE), is proposed; it performs comparably to the state of the art at low-to-mid-range bitrates with respect to the subjective visual quality of 4-D LF images and 5-D LF video.
Abstract: Research in light field (LF) processing has heavily increased over the last decade. This is largely driven by the desire to achieve the same level of immersion and navigational freedom for camera-captured scenes as it is currently available for CGI content. Standardization organizations such as MPEG and JPEG continue to follow conventional coding paradigms in which viewpoints are discretely represented on 2-D regular grids. These grids are then further decorrelated through hybrid DPCM/transform techniques. However, these 2-D regular grids are less suited for high-dimensional data, such as LFs. We propose a novel coding framework for higher-dimensional image modalities, called Steered Mixture-of-Experts (SMoE). Coherent areas in the higher-dimensional space are represented by single higher-dimensional entities, called kernels. These kernels hold spatially localized information about light rays at any angle arriving at a certain region. The global model consists thus of a set of kernels which define a continuous approximation of the underlying plenoptic function. We introduce the theory of SMoE and illustrate its application for 2-D images, 4-D LF images, and 5-D LF video. We also propose an efficient coding strategy to convert the model parameters into a bitstream. Even without provisions for high-frequency information, the proposed method performs comparable to the state of the art for low-to-mid range bitrates with respect to subjective visual quality of 4-D LF images. In case of 5-D LF video, we observe superior decorrelation and coding performance with coding gains of a factor of 4x in bitrate for the same quality. At least equally important is the fact that our method inherently has desired functionality for LF rendering which is lacking in other state-of-the-art techniques: (1) full zero-delay random access, (2) light-weight pixel-parallel view reconstruction, and (3) intrinsic view interpolation and super-resolution.

Journal ArticleDOI
TL;DR: The proposed RDH scheme based on the presented negative influence models can achieve low image distortion and a small increase in the file size of marked images.
Abstract: Reversible data hiding (RDH) can be used to imperceptibly embed data into images in a reversible manner. Many RDH schemes have been developed for uncompressed images. However, JPEG compressed images are more widely used in our daily lives. The existing RDH techniques for JPEG images may cause significant distortion or a large increase in the file size of marked images. In this paper, a novel RDH scheme for JPEG images is proposed. First, the negative influence models of data embedding, including image visual distortion model and file size change model, are mathematically established. Then, a negative index for each frequency is defined as the weighted sum of the normalized average image visual distortion and the normalized average file size change per 1-bit hidden data, and the frequencies with small negative indices will be used for data embedding with a high priority. The weighting factor can be adjusted according to the user’s preference for less image distortion or smaller file size. Lastly, secret data is embedded into non-zero quantized AC coefficients of the selected frequencies in ascending order of zero-run length. Extensive experiments conducted on typical images and a well-known image database show that the presented negative influence models are effective, and the proposed RDH scheme based on the models can achieve low image distortion and small increase in the file size of marked images.
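
The frequency-selection rule described above can be sketched as a small scoring function: the negative index of a frequency is a weighted sum of its normalized average visual distortion per embedded bit and its normalized average file-size change per embedded bit, and frequencies are used in ascending order of this index. The normalization, the weighting factor alpha, and the example numbers below are assumptions for illustration.

```python
# Assumed form of the per-frequency "negative index" used to prioritize embedding.
import numpy as np

def negative_index(distortion_per_bit, size_change_per_bit, alpha=0.5):
    d = np.asarray(distortion_per_bit, dtype=np.float64)
    s = np.asarray(size_change_per_bit, dtype=np.float64)
    d_norm = d / d.max()                  # normalized average distortion per bit
    s_norm = s / s.max()                  # normalized average file-size change per bit
    return alpha * d_norm + (1.0 - alpha) * s_norm

d = np.array([0.8, 0.5, 1.2, 0.9])        # placeholder distortion statistics for 4 frequencies
s = np.array([1.1, 0.7, 0.6, 1.4])        # placeholder file-size increase statistics
idx = negative_index(d, s, alpha=0.6)     # alpha reflects the user's preference
print(np.argsort(idx))                    # embedding priority: smallest negative index first
```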

Journal ArticleDOI
TL;DR: In this paper, a 2-channel-based CNN was proposed to compare camera fingerprint and image noise at patch level, which can be used to identify the source of an image.
Abstract: Source identification is an important topic in image forensics, since it allows tracing back the origin of an image. This represents precious information for claiming intellectual property, but also for revealing the authors of illicit materials. In this letter we address the problem of device identification based on sensor noise and propose a fast and accurate solution using convolutional neural networks (CNNs). Specifically, we propose a 2-channel-based CNN that learns a way of comparing the camera fingerprint and image noise at patch level. The proposed solution turns out to be much faster than the conventional approach and to ensure increased accuracy. This makes the approach particularly suitable in scenarios where large databases of images are analyzed, such as over social networks. In this vein, since images uploaded to social media usually undergo at least two compression stages, we include investigations on double JPEG compressed images, always reporting higher accuracy than standard approaches.

Journal ArticleDOI
TL;DR: A new JPEG RDH scheme based on pairwise nonzero AC coefficient expansion (pairwise NACE) is proposed and an adaptive embedding strategy based on block and DCT frequency selection is proposed in order to preserve the visual quality and reduce the file size increase of the marked JPEG image.

Journal ArticleDOI
TL;DR: This work proposes a JPEG RDH method that considers both the rate-distortion behavior and the file size expansion at the same time when designing the algorithm, and shows that the proposed algorithm outperforms state-of-the-art methods in terms of rate-distortion and file-size-expansion performance.
Abstract: Among various methods of reversible data hiding (RDH) in JPEG images, only rate-distortion, i.e. the image quality with given payload, is taken into consideration during algorithm designing. Howeve...

Journal ArticleDOI
TL;DR: The results show a slight influence of image format and compression levels in flat or slightly flat surfaces; in the case of a complex 3D model, instead, the choice of a format became important and processing times were found to also play a key role, especially in point cloud generation.
Abstract: The aim of this study is to evaluate the degradation of accuracy and image quality for the TIFF format and for different compression levels of the JPEG format compared with the raw images acquired by a UAV platform. Experiments were carried out using a DJI Mavic 2 Pro with a Hasselblad L1D-20c camera on three test sites. Post-processing of the images was performed using software based on structure-from-motion and multi-view stereo approaches. The results show a slight influence of image format and compression level on flat or nearly flat surfaces; in the case of a complex 3D model, instead, the choice of format becomes important. Across all tests, processing times were found to also play a key role, especially in point cloud generation. The qualitative and quantitative analysis carried out on the different orthophotos allowed us to highlight a modest impact from the use of the TIFF format and a strong influence as the JPEG compression level increases.

Journal ArticleDOI
TL;DR: This study combines cryptography (the Twofish and Triple Data Encryption Standard (3DES) algorithms) and steganography (least significant bits) to solve the problem of attackers hacking biometric templates for malicious purposes, which has become a major problem in iris recognition systems.
Abstract: This study combines cryptography (the Twofish and Triple Data Encryption Standard (3DES) algorithms) and steganography (least significant bits, LSB) to solve the problem of attackers hacking biometric templates for malicious purposes, which has become a major problem in iris recognition systems. Twofish and 3DES are well-established, secure cryptographic algorithms used to transform readable secret data (a plain image) into an unreadable format (a cipher image), while LSB is a steganographic algorithm that embeds the ciphertext/image directly into a cover image to produce an image known as a stego image. In this work, the Hough transform, Daugman's rubber-sheet model, and the Log-Gabor filter were used for iris image segmentation, normalization, and feature extraction, and the generated iris template was encrypted using the 3DES and Twofish algorithms. The cipher image was then embedded into a cover image to produce a stego image using LSB. The embedding of the secret image changes the cover (master) file only slightly, in a way that cannot be perceived by the human eye, and only a JPEG image was used as the master or cover file. The two levels of security provide high embedding capacity and high-quality stego images that are able to withstand attackers.
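
The LSB embedding step at the end of the pipeline can be sketched as below; this covers only the bit-substitution into the cover pixels (the Twofish/3DES encryption of the iris template and the iris-processing stages are out of scope), and all array shapes and payload sizes are chosen arbitrarily for illustration.

```python
# Illustrative least-significant-bit embedding and extraction on a raw pixel array.
import numpy as np

def lsb_embed(cover, payload_bits):
    flat = cover.flatten().copy()
    # Clear the least significant bit of each used pixel, then write the payload bit.
    flat[:len(payload_bits)] = (flat[:len(payload_bits)] & 0xFE) | payload_bits
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    return stego.flatten()[:n_bits] & 1

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)       # stand-in cover image
bits = np.random.randint(0, 2, 500).astype(np.uint8)              # already-encrypted payload bits
stego = lsb_embed(cover, bits)
print(np.array_equal(lsb_extract(stego, 500), bits))               # True
```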

Journal ArticleDOI
TL;DR: A new JPEG steganographic method that can resist repetitive compression during network transmission, without even knowing the compression process controlled by the network service providers is designed.

Journal ArticleDOI
TL;DR: A forgery detection technique is proposed that exploits the artifacts originating from manipulations performed on JPEG encoded images and achieves better detection rates than state-of-the-art methods.

Journal ArticleDOI
01 Dec 2020
TL;DR: A framework that improves robustness in image forgery detection is presented, based on a camera identification model built on convolutional neural networks, together with in-depth inspection through layer visualization and an experimental analysis of the influence of the learned features.
Abstract: Images available on online sharing platforms have a high probability of having been modified, with global transformations such as compression, resizing, or filtering masking possible alterations. Such manipulations impose many constraints on forgery detection algorithms. This article presents a framework that improves robustness in image forgery detection. The most important step of our framework is to take into account the image quality corresponding to the chosen application. To do so, we rely on a camera identification model based on convolutional neural networks. Lossy compression such as JPEG is considered the most common type of intentional or inadvertent concealment of image forgery, which led us to evaluate our proposal on this manipulation. Thus, our CNN is trained on a mixture of compressed images of different qualities and uncompressed images. Experimental results show the importance of this step in improving the effectiveness of our approach compared with recent approaches from the literature. To better interpret the trained CNN, we propose an in-depth inspection, first through layer visualization and then through an experimental analysis of the influence of the learned features. This analysis led us to a more robust and accurate framework. Finally, we applied this improved system to an image forgery detection application and obtained promising results.