
Showing papers on "JPEG published in 2015"


Posted Content
TL;DR: A general framework for variable-rate image compression and a novel architecture based on convolutional and deconvolutional LSTM recurrent networks are proposed, which provide better visual quality than (headerless) JPEG, JPEG2000 and WebP, with a storage size reduced by 10% or more.
Abstract: A large fraction of Internet traffic is now driven by requests from mobile devices with relatively small screens and often stringent bandwidth requirements. Due to these factors, it has become the norm for modern graphics-heavy websites to transmit low-resolution, low-bytecount image previews (thumbnails) as part of the initial page load process to improve apparent page responsiveness. Increasing thumbnail compression beyond the capabilities of existing codecs is therefore a current research focus, as any byte savings will significantly enhance the experience of mobile device users. Toward this end, we propose a general framework for variable-rate image compression and a novel architecture based on convolutional and deconvolutional LSTM recurrent networks. Our models address the main issues that have prevented autoencoder neural networks from competing with existing image compression algorithms: (1) our networks only need to be trained once (not per-image), regardless of input image dimensions and the desired compression rate; (2) our networks are progressive, meaning that the more bits are sent, the more accurate the image reconstruction; and (3) the proposed architecture is at least as efficient as a standard purpose-trained autoencoder for a given number of bits. On a large-scale benchmark of 32×32 thumbnails, our LSTM-based approaches provide better visual quality than (headerless) JPEG, JPEG2000 and WebP, with a storage size that is reduced by 10% or more.
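The progressive property in point (2) can be illustrated with a much-simplified sketch (assuming PyTorch; plain strided convolutions stand in for the paper's convolutional/deconvolutional LSTM units, and all module and parameter names are illustrative rather than taken from the paper): each iteration encodes the remaining residual, so every additional batch of bits refines the reconstruction.

```python
# Much-simplified sketch of progressive residual coding, assuming PyTorch.
# Plain strided convolutions stand in for the paper's conv/deconv LSTM units;
# names and sizes are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class OneIteration(nn.Module):
    """One encode -> binarize -> decode step operating on the current residual."""
    def __init__(self, bits=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, bits, 4, stride=2, padding=1))
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(bits, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, residual):
        # Hard sign gives the transmittable {-1, +1} code; a real implementation
        # would use a straight-through estimator so gradients can pass through.
        code = torch.sign(torch.tanh(self.enc(residual)))
        return code, self.dec(code)

def progressive_encode(image, step, n_iters=8):
    """Each iteration re-codes the remaining residual, so sending more
    iterations' worth of bits yields a more accurate reconstruction."""
    recon = torch.zeros_like(image)
    codes = []
    for _ in range(n_iters):
        code, decoded = step(image - recon)
        recon = recon + decoded
        codes.append(code)
    return codes, recon

# Illustrative usage on a batch of 32x32 thumbnails scaled to [-1, 1]:
# thumbs = torch.rand(8, 3, 32, 32) * 2 - 1
# codes, recon = progressive_encode(thumbs, OneIteration())
```

Truncating the list of codes after any iteration still yields a usable (coarser) reconstruction, which is the progressive behaviour the abstract describes.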

432 citations


Journal ArticleDOI
TL;DR: A novel feature set for steganalysis of JPEG images engineered as first-order statistics of quantized noise residuals obtained from the decompressed JPEG image using 64 kernels of the discrete cosine transform (DCT) (the so-called undecimated DCT).
Abstract: This paper introduces a novel feature set for steganalysis of JPEG images. The features are engineered as first-order statistics of quantized noise residuals obtained from the decompressed JPEG image using the 64 kernels of the discrete cosine transform (DCT) (the so-called undecimated DCT). This approach can be interpreted as a projection model in the JPEG domain, thus forming a counterpart to the projection spatial rich model. The most appealing aspects of the proposed steganalysis feature set are its low computational complexity, its lower dimensionality in comparison with other rich models, and its competitive performance with respect to previously proposed JPEG-domain steganalysis features.
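As an illustration of the kind of features described above (a sketch only, not the authors' exact feature set, quantization or dimensionality), one can build the 64 DCT basis patterns, convolve the decompressed image with each of them, and collect first-order statistics of the quantized residuals:

```python
# Illustrative sketch (not the authors' exact implementation): first-order
# statistics of quantized residuals from the 64 kernels of the 8x8 DCT.
import numpy as np
from scipy.signal import convolve2d

def dct_kernels(block=8):
    """The 64 separable 8x8 DCT basis patterns used as convolution kernels."""
    k = np.arange(block)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * block))
    basis[0, :] /= np.sqrt(2)
    basis *= np.sqrt(2.0 / block)
    return [np.outer(basis[u], basis[v]) for u in range(block) for v in range(block)]

def undecimated_dct_features(gray_image, q=4.0, T=4):
    """Histogram of quantized/truncated absolute residuals for each kernel."""
    feats = []
    for ker in dct_kernels():
        resid = convolve2d(gray_image.astype(np.float64), ker, mode='valid')
        r = np.minimum(np.round(np.abs(resid) / q), T).astype(int)
        hist = np.bincount(r.ravel(), minlength=T + 1) / r.size
        feats.append(hist)
    return np.concatenate(feats)   # 64 histograms of (T+1) bins each
```

Because every kernel contributes only a short histogram, the resulting descriptor stays small, which reflects the low dimensionality the abstract highlights.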

350 citations


Proceedings ArticleDOI
17 Jun 2015
TL;DR: The experimental results show that the proposed steganalysis feature achieves competitive performance compared with other steganalysis features when used to detect adaptive JPEG steganography such as UED, J-UNIWARD and SI-UNIWARD.
Abstract: Adaptive JPEG steganographic schemes constrain the embedding changes to complicated texture regions, where it is difficult to preserve the image texture features at all scales and orientations; a steganalysis feature extraction method based on two-dimensional (2D) Gabor filters is therefore proposed. The 2D Gabor filters have certain optimal joint localization properties in the spatial domain and in the spatial frequency domain. They can describe the image texture features from different scales and orientations, so the changes in image statistical characteristics caused by steganographic embedding can be captured more effectively. In the proposed feature extraction method, the decompressed JPEG image is first filtered by 2D Gabor filters with different scales and orientations. Then, histogram features are extracted from all the filtered images. Lastly, an ensemble classifier trained on the proposed steganalysis feature serves as the final steganalyzer. The experimental results show that the proposed steganalysis feature achieves competitive performance compared with other steganalysis features when used to detect adaptive JPEG steganography such as UED, J-UNIWARD and SI-UNIWARD.
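A minimal sketch of this pipeline (illustrative scales, orientations and histogram settings; not the authors' parameter choices) might look as follows:

```python
# Illustrative sketch of the pipeline described above: filter the decompressed
# image with a bank of 2D Gabor filters at several scales and orientations and
# collect histogram features. Parameters are illustrative, not the paper's.
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(sigma, theta, lam, gamma=0.5, size=11):
    """Real part of a 2D Gabor filter."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()   # zero-mean so flat regions give zero response

def gabor_histogram_features(gray_image, sigmas=(1.0, 2.0), n_orient=8, bins=16):
    feats = []
    for sigma in sigmas:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            resp = convolve(gray_image.astype(np.float64),
                            gabor_kernel(sigma, theta, lam=4 * sigma))
            hist, _ = np.histogram(resp, bins=bins, range=(-255, 255), density=True)
            feats.append(hist)
    return np.concatenate(feats)
```

In the paper, features of this kind are then fed to an ensemble classifier acting as the final steganalyzer.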

252 citations


Journal ArticleDOI
TL;DR: A thorough objective investigation of the performance-complexity trade-offs offered by these techniques on medical data is carried out, and a comparison of the presented techniques to H.265/MPEG-H HEVC, currently the state-of-the-art video codec, is provided.
Abstract: The amount of image data generated each day in health care is ever increasing, especially in combination with the improved scanning resolutions and the importance of volumetric image data sets. Handling these images raises the requirement for efficient compression, archival and transmission techniques. Currently, JPEG 2000's core coding system, defined in Part 1, is the default choice for medical images as it is the DICOM-supported compression technique offering the best available performance for this type of data. Yet, JPEG 2000 provides many options that allow for further improving compression performance, for which DICOM offers no guidelines. Moreover, over recent years, various studies seem to indicate that performance improvements in wavelet-based image coding are possible when employing directional transforms. In this paper, we thoroughly investigate techniques allowing for improving the performance of JPEG 2000 for volumetric medical image compression. For this purpose, we make use of a newly developed generic codec framework that supports JPEG 2000 with its volumetric extension (JP3D), various directional wavelet transforms as well as a generic intra-band prediction mode. A thorough objective investigation of the performance-complexity trade-offs offered by these techniques on medical data is carried out. Moreover, we provide a comparison of the presented techniques to H.265/MPEG-H HEVC, currently the state-of-the-art video codec. Additionally, we present results of a first-time study on the subjective visual performance when using the aforementioned techniques. This enables us to provide a set of guidelines and settings on how to optimally compress medical volumetric images at an acceptable complexity level. Highlights: We investigated how to optimally compress volumetric medical images with JP3D. We extend JP3D with directional wavelets and intra-band prediction. Volumetric wavelets and entropy coding improve the compression performance. Compression gains for medical images with directional wavelets are often minimal. We recommend further adoption of JP3D for volumetric medical image compression.

139 citations


Journal ArticleDOI
TL;DR: The results show that the fusion method visually improves the quality of the output image and outperforms previous DCT-based techniques and state-of-the-art methods in terms of objective evaluation.
Abstract: Multi-focus image fusion in wireless visual sensor networks (WVSN) is a process of fusing two or more images to obtain a new one which contains a more accurate description of the scene than any of the individual source images. In this letter, we propose an efficient algorithm to fuse multi-focus images or videos using discrete cosine transform (DCT) based standards in WVSN. The spatial frequencies of the corresponding blocks from the source images are calculated as the contrast criteria, and the blocks with the larger spatial frequencies compose the DCT presentation of the output image. Experiments on many pairs of multi-focus images coded in the Joint Photographic Experts Group (JPEG) standard are conducted to evaluate the fusion performance. The results show that our fusion method visually improves the quality of the output image and outperforms previous DCT-based techniques and state-of-the-art methods in terms of objective evaluation.
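The block-selection rule can be sketched as follows (a simplified illustration that computes the spatial frequency directly on 8×8 pixel blocks of two registered grayscale sources; the paper operates on the blocks' DCT representations inside the WVSN, and the function names here are illustrative):

```python
# Minimal sketch of the block-selection rule described above. For clarity,
# spatial frequency is computed on 8x8 pixel blocks of two registered grayscale
# source images; the paper works with the blocks' DCT representations.
import numpy as np

def spatial_frequency(block):
    """SF = sqrt(row-frequency^2 + column-frequency^2) of one block."""
    b = np.asarray(block, dtype=np.float64)
    rf = np.sqrt(np.mean(np.diff(b, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(b, axis=0) ** 2))
    return np.hypot(rf, cf)

def fuse_multifocus(img_a, img_b, bs=8):
    """Copy, block by block, whichever source block has the larger spatial frequency."""
    h, w = img_a.shape
    fused = img_a.copy()
    for i in range(0, h - h % bs, bs):
        for j in range(0, w - w % bs, bs):
            a = img_a[i:i + bs, j:j + bs]
            b = img_b[i:i + bs, j:j + bs]
            fused[i:i + bs, j:j + bs] = a if spatial_frequency(a) >= spatial_frequency(b) else b
    return fused
```

Spatial frequency acts here as a simple sharpness (contrast) criterion, so the in-focus block from either source wins at each location.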

138 citations


Proceedings ArticleDOI
TL;DR: A novel feature set called PHase Aware pRojection Model (PHARM) in which residuals obtained using a small number of small-support kernels are represented using first-order statistics of their random projections as in the projection spatial rich model PSRM.
Abstract: State-of-the-art JPEG steganographic algorithms, such as J-UNIWARD, are currently better detected in the spatial domain than in the JPEG domain. Rich models built from pixel residuals seem to better capture the impact of embedding than features constructed as co-occurrences of quantized JPEG coefficients. However, when steganalyzing JPEG steganographic algorithms in the spatial domain, the pixels’ statistical properties vary because of the underlying 8 × 8 pixel grid imposed by the compression. In order to detect JPEG steganography more accurately, we split the statistics of noise residuals based on their phase w.r.t. the 8 × 8 grid. Because of the heterogeneity of pixels in a decompressed image, it also makes sense to keep the kernel size of the pixel predictors small, as larger kernels mix up qualitatively different statistics and thus lose detection power. Based on these observations, we propose a novel feature set called PHase Aware pRojection Model (PHARM) in which residuals obtained using a small number of small-support kernels are represented using first-order statistics of their random projections, as in the projection spatial rich model PSRM. The benefit of making the features “phase-aware” is shown experimentally on selected modern JPEG steganographic algorithms, with the biggest improvement seen for J-UNIWARD. Additionally, the PHARM feature vector can be computed at a fraction of the computational cost of existing projection rich models.
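The "phase splitting" idea can be illustrated with a small sketch (not the full PHARM feature set, which additionally uses several kernels and random projections; the kernel and quantization settings below are illustrative): compute a small-support residual and keep separate first-order statistics for each of the 64 positions a pixel can occupy within the 8×8 JPEG grid.

```python
# Illustrative sketch of phase splitting (not the full PHARM feature set): a
# small-support residual is computed on the decompressed image, and separate
# first-order statistics are kept for each of the 64 phases w.r.t. the 8x8 grid.
import numpy as np
from scipy.signal import convolve2d

def phase_split_histograms(gray_image, q=1.0, T=3):
    kernel = np.array([[0, 0, 0], [1, -2, 1], [0, 0, 0]], dtype=float)  # small-support predictor
    resid = convolve2d(gray_image.astype(np.float64), kernel, mode='same')
    r = np.clip(np.round(resid / q), -T, T).astype(int) + T            # quantize and truncate
    feats = []
    for dy in range(8):
        for dx in range(8):
            sub = r[dy::8, dx::8]                                       # one phase of the 8x8 grid
            hist = np.bincount(sub.ravel(), minlength=2 * T + 1) / sub.size
            feats.append(hist)
    return np.concatenate(feats)                                        # 64 * (2T+1) values
```

Keeping the 64 phases separate prevents the qualitatively different statistics of pixels near and away from block boundaries from being averaged together.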

101 citations


Proceedings ArticleDOI
07 Jun 2015
TL;DR: The strength of the proposed approach is in directly restoring DCT coefficients of the latent image to prevent the spreading of quantization errors into the pixel domain, while at the same time using online machine-learnt local spatial features to regulate the solution of the underlying inverse problem.
Abstract: Arguably the most common cause of image degradation is compression. This paper presents a novel approach to restoring JPEG-compressed images. The main innovation is in exploiting the residual redundancies of JPEG code streams and the sparsity properties of latent images. The restoration is a sparse coding process carried out jointly in the DCT and pixel domains. The strength of the proposed approach is in directly restoring DCT coefficients of the latent image to prevent the spreading of quantization errors into the pixel domain, while at the same time using online machine-learnt local spatial features to regulate the solution of the underlying inverse problem. Experimental results are encouraging and show the promise of the new approach in significantly improving the quality of DCT-coded images.
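Schematically (a sketch, not the paper's exact formulation), restoring the DCT coefficients directly amounts to enforcing the quantization-interval constraint implied by the received code stream while a learned sparsity prior regulates the solution:

$$\min_{x}\; \Psi_{\mathrm{sparse}}(x) \quad \text{s.t.} \quad \bigl|\,\mathrm{DCT}_{8\times 8}(x)_k - q_k\,\hat c_k\,\bigr| \le q_k/2 \ \ \text{for every coefficient } k,$$

where $\hat c_k$ are the received quantized DCT coefficients, $q_k$ the corresponding quantization steps, and $\Psi_{\mathrm{sparse}}$ a sparsity-promoting prior built from the machine-learnt local spatial features. Keeping the solution inside the quantization intervals is what prevents the quantization errors from spreading into the pixel domain.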

83 citations



Journal ArticleDOI
TL;DR: The present work is the first of two papers on a variational model for image reconstruction whose specific features are twofold: first, data fidelity is realized by interval constraints on the coefficients of a Riesz basis representation, and second, total generalized variation of arbitrary order is employed as image prior.
Abstract: A variational model for image reconstruction is introduced and analyzed in function space. Specific to the model is the data fidelity, which is realized via a basis transformation with respect to a Riesz basis followed by interval constraints. This setting in particular covers the task of reconstructing images constrained to data obtained from JPEG or JPEG 2000 compressed files. As image prior, the total generalized variation (TGV) functional of arbitrary order is employed. The present paper, the first of two works that deal with both analytical and numerical aspects of the model, provides a comprehensive analysis in function space and defines concrete instances for particular applications. A new, noncoercive existence result and optimality conditions, including a characterization of the subdifferential of the TGV functional, are obtained in the general setting.
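In schematic form (a sketch; the paper's precise functional-analytic setting is richer), the model reads

$$\min_{u}\; \mathrm{TGV}_{\alpha}^{k}(u) \quad \text{s.t.} \quad \langle u, b_i\rangle \in [l_i, r_i] \ \ \text{for all } i,$$

where $(b_i)$ is the Riesz basis (e.g., the block-DCT basis for JPEG or a wavelet basis for JPEG 2000), $[l_i, r_i]$ are the intervals determined by the quantized coefficients stored in the compressed file, and $\mathrm{TGV}_{\alpha}^{k}$ is the total generalized variation of order $k$ acting as the image prior.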

76 citations


Journal ArticleDOI
TL;DR: This work presents an efficient semi-local approximation scheme to large-scale Gaussian processes that allows efficient learning of task-specific image enhancements from example images without reducing quality.
Abstract: Improving the quality of degraded images is a key problem in image processing, but the breadth of the problem leads to domain-specific approaches for tasks such as super-resolution and compression artifact removal. Recent approaches have shown that a general approach is possible by learning application-specific models from examples; however, learning models sophisticated enough to generate high-quality images is computationally expensive, and so specific per-application or per-dataset models are impractical. To solve this problem, we present an efficient semi-local approximation scheme to large-scale Gaussian processes. This allows efficient learning of task-specific image enhancements from example images without reducing quality. As such, our algorithm can be easily customized to specific applications and datasets, and we show the efficiency and effectiveness of our approach across five domains: single-image super-resolution for scene, human face, and text images, and artifact removal in JPEG- and JPEG 2000-encoded images.

72 citations


Journal ArticleDOI
TL;DR: The proposed image variational deconvolution framework outperforms the state-of-the-art median filtering anti-forensics, with a better forensic undetectability against existing detectors as well as a higher visual quality of the processed image.
Abstract: Median filtering enjoys its popularity as a widely adopted image denoising and smoothing tool. It is also used by anti-forensic researchers to help disguise traces of other image processing operations, e.g., image resampling and JPEG compression. This paper proposes an image variational deconvolution framework for both quality enhancement and anti-forensics of median filtered (MF) images. The proposed optimization-based framework consists of a convolution term, a fidelity term with respect to the MF image, and a prior term. The first term is for the approximation of the median filtering process, using a convolution kernel. The second fidelity term keeps the processed image to some extent still close to the MF image, retaining some denoising or other image processing artifact hiding effects. Using the generalized Gaussian as the distribution model, the last image prior term regularizes the pixel value derivative of the obtained image so that its distribution resembles the original one. Our method can serve as an MF image quality enhancement technique, whose efficacy is validated by experiments conducted on MF images which had previously been corrupted by “salt & pepper” noise. Using another parameter setting and with an additional pixel value perturbation procedure, the proposed method outperforms the state-of-the-art median filtering anti-forensics, with better forensic undetectability against existing detectors as well as higher visual quality of the processed image. Furthermore, the feasibility of concealing image resampling traces and JPEG blocking artifacts is demonstrated by experiments using the proposed median filtering anti-forensic method.
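In schematic form, the three terms described above combine into an objective of the following kind (a sketch; the exact norms, weights and the additional pixel-value perturbation step are as specified in the paper):

$$\min_{x,\,k}\; \|k * x - y\|_2^2 \;+\; \lambda_1\,\|x - y\|_2^2 \;+\; \lambda_2 \sum_i \bigl|(\nabla x)_i\bigr|^{\alpha},$$

where $y$ is the median filtered image, $x$ the restored image, $k$ a convolution kernel approximating the median filtering process, and the last term corresponds to the generalized Gaussian model of the pixel-value derivatives.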

Proceedings ArticleDOI
29 Oct 2015
TL;DR: This paper presents the use of conventional image-based compression methods for 3D point clouds, and reports the results of several lossless compression methods and of lossy JPEG on point cloud compression.
Abstract: Modern 3D laser scanners make it easy to collect large 3D point clouds. In this paper we present the use of conventional image-based compression methods for 3D point clouds. We map the point cloud onto panorama images to encode the range, reflectance and color value for each point. An encoding method is presented to map the floating-point measured ranges onto a three-channel image. The image compression methods are used to compress the generated panorama images. We present the results of several lossless compression methods and of lossy JPEG on point cloud compression. Lossless compression methods are designed to retain the original data. On the other hand, lossy compression methods sacrifice detail for a higher compression ratio. This produces artefacts in the recovered point cloud data. We study the effects of these artefacts on the encoded range data. A filtering process is presented for determining range outliers from the uncompressed point clouds.
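One possible realization of the range encoding described above is to quantize each range to 24 bits and spread it over three 8-bit channels (a minimal sketch; the paper's exact mapping, range limit and channel layout may differ):

```python
# Minimal sketch of packing floating-point ranges into a three-channel 8-bit
# image and recovering them. The 24-bit quantization and the r_max value are
# illustrative assumptions, not necessarily the paper's mapping.
import numpy as np

def encode_range(r_m, r_max=120.0):
    """Quantize ranges in [0, r_max) metres to 24 bits and split into 3 bytes."""
    q = np.clip(np.asarray(r_m, dtype=np.float64) / r_max, 0.0, 1.0 - 1e-9) * (2**24 - 1)
    q = q.astype(np.uint32)
    return np.stack([(q >> 16) & 0xFF, (q >> 8) & 0xFF, q & 0xFF], axis=-1).astype(np.uint8)

def decode_range(channels, r_max=120.0):
    c = channels.astype(np.uint32)
    q = (c[..., 0] << 16) | (c[..., 1] << 8) | c[..., 2]
    return q.astype(np.float64) / (2**24 - 1) * r_max
```

Lossless codecs recover these bytes exactly, while lossy JPEG perturbs them, which is the source of the range artefacts studied in the paper.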

Journal ArticleDOI
TL;DR: A new stereoscopic image quality assessment database rendered using the 2D-image-plus-depth source, called MCL-3D, is described and the performance benchmarking of several known 2D and 3D image quality metrics is presented.
Abstract: A new stereoscopic image quality assessment database rendered using the 2D-image-plus-depth source, called MCL-3D, is described and the performance benchmarking of several known 2D and 3D image quality metrics using the MCL-3D database is presented in this work. Nine image-plus-depth sources are first selected, and a depth-image-based rendering (DIBR) technique is used to render stereoscopic image pairs. Distortions applied to either the texture image or the depth image before stereoscopic image rendering include: Gaussian blur, additive white noise, down-sampling blur, JPEG and JPEG 2000 (JP2K) compression and transmission error. Furthermore, the distortion caused by imperfect rendering is also examined. The MCL-3D database contains 693 stereoscopic image pairs, where one third of them are of resolution 1024×768 and two thirds are of resolution 1920×1080. The pair-wise comparison was adopted in the subjective test for user friendliness, and Mean Opinion Scores (MOS) were computed accordingly. Finally, we evaluate the performance of several 2D and 3D image quality metrics applied to MCL-3D. All texture images, depth images, rendered image pairs in MCL-3D and their MOS values obtained in the subjective test are available to the public (http://mcl.usc.edu/mcl-3d-database/) for future research and development.

Proceedings ArticleDOI
30 Jul 2015
TL;DR: This paper proposes an Encryption-then-Compression system using a JPEG-friendly perceptual encryption method, which enables encryption to be conducted prior to JPEG compression and provides approximately the same compression performance as JPEG compression without any encryption.
Abstract: In many multimedia applications, image encryption has to be conducted prior to image compression. This paper proposes an Encryption-then-Compression system using a JPEG-friendly perceptual encryption method, which enables encryption to be conducted prior to JPEG compression. The proposed encryption method provides approximately the same compression performance as JPEG compression without any encryption, for both grayscale and color images. It is also shown that the proposed system, which consists of four block-based encryption steps, provides a reasonably high level of security. Most conventional perceptual encryption methods have not been designed for international compression standards; this paper focuses on the JPEG standard, one of the most widely used image compression standards.

Proceedings ArticleDOI
19 Apr 2015
TL;DR: The experimental results demonstrated that the proposed ETC system achieved both acceptable compression performance and a sufficiently large key space for secure image communication while remaining compatible with the JPEG 2000 standard.
Abstract: A new Encryption-then-Compression (ETC) system for the JPEG 2000 standard is proposed in this paper. An ETC system makes image communication secure and efficient by combining perceptual encryption with image compression. The proposed system uses sign-scrambling and block-shuffling of discrete wavelet transform (DWT) coefficients as perceptual encryption. Unlike conventional ETC systems, the proposed system is compatible with the JPEG 2000 standard because the perceptually encrypted coefficients can be efficiently compressed by JPEG 2000. The experimental results demonstrated that the proposed system achieved both acceptable compression performance and a sufficiently large key space for secure image communication while remaining compatible with the JPEG 2000 standard.
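A minimal sketch of the two named encryption operations applied to one DWT subband is shown below (assuming PyWavelets for the transform; key management, the choice of subbands, block size and the JPEG 2000 codestream itself are omitted or illustrative):

```python
# Minimal sketch of sign-scrambling and block-shuffling of DWT coefficients.
# Assumes PyWavelets; key handling, subband selection and block size are
# illustrative, and the JPEG 2000 coding stage is not shown.
import numpy as np
import pywt

def encrypt_subband(coeffs, key, block=8):
    rng = np.random.default_rng(key)
    out = coeffs.astype(np.float64)
    # 1) pseudo-random sign scrambling of every coefficient
    out *= rng.choice([-1.0, 1.0], size=out.shape)
    # 2) pseudo-random shuffling of non-overlapping blocks of coefficients
    h, w = out.shape
    bh, bw = h // block, w // block
    blocks = [out[i*block:(i+1)*block, j*block:(j+1)*block].copy()
              for i in range(bh) for j in range(bw)]
    order = rng.permutation(len(blocks))
    for idx, src in enumerate(order):
        i, j = divmod(idx, bw)
        out[i*block:(i+1)*block, j*block:(j+1)*block] = blocks[src]
    return out

# Illustrative usage on one level of a 2D DWT:
# LL, (LH, HL, HH) = pywt.dwt2(image, 'db4')
# LH_enc = encrypt_subband(LH, key=1234)
```

Because both operations act on whole wavelet coefficients, the encrypted subbands can still be handed to a standard JPEG 2000 encoder, which is what keeps the scheme standard-compatible.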

Journal ArticleDOI
TL;DR: This paper presents a novel scheme to implement blind image watermarking based on the feature parameters extracted from a composite domain including the discrete wavelet transform (DWT), singular value decomposition (SVD), and discrete cosine transform (DCT).

Journal ArticleDOI
01 Nov 2015-Optik
TL;DR: An improved medical image compression technique based on region of interest (ROI) is proposed to maximize compression and a set of experiments is designed to assess the effectiveness of the proposed compression method.

Proceedings ArticleDOI
19 Apr 2015
TL;DR: The proposed algorithm for the compression of plenoptic images is compared with state-of-the-art image compression algorithms, namely JPEG 2000 and JPEG XR, and the obtained results demonstrate that it improves coding efficiency.
Abstract: Plenoptic images are obtained from the projection of light crossing a matrix of microlens arrays, which replicates the scene from different directions onto a camera sensor. Plenoptic images have a different structure with respect to regular digital images, and novel algorithms for data compression are currently under research. This paper proposes an algorithm for the compression of plenoptic images. The micro images composing a plenoptic image are processed by an adaptive prediction tool, aiming at reducing data correlation before entropy coding takes place. The algorithm is compared with state-of-the-art image compression algorithms, namely JPEG 2000 and JPEG XR. The obtained results demonstrate that the proposed algorithm improves coding efficiency.

Journal ArticleDOI
TL;DR: A simple yet very effective detection algorithm to identify decompressed JPEG images is developed; extensive experiments on various sources of images show that it outperforms the state-of-the-art methods by a large margin, especially for high-quality compressed images.
Abstract: To identify whether an image has been JPEG compressed is an important issue in forensic practice. The state-of-the-art methods fail to identify high-quality compressed images, which are common on the Internet. In this paper, we provide a novel quantization noise-based solution to reveal the traces of JPEG compression. Based on the analysis of noises in multiple-cycle JPEG compression, we define a quantity called forward quantization noise. We analytically derive that a decompressed JPEG image has a lower variance of forward quantization noise than its uncompressed counterpart. With this conclusion, we develop a simple yet very effective detection algorithm to identify decompressed JPEG images. Through extensive experiments on various sources of images, we show that our method outperforms the state-of-the-art methods by a large margin, especially for high-quality compressed images. We also demonstrate that the proposed method is robust to small image size and chroma subsampling. The proposed algorithm can be applied in practical applications such as Internet image classification and forgery detection.
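The detection idea can be sketched as follows (an illustration under the assumption that the forward quantization noise is, roughly, the rounding error of the block-DCT coefficients; the threshold is illustrative, not the paper's calibrated test):

```python
# Illustrative sketch of the detection idea (not the paper's calibrated
# detector): the rounding error of the block-DCT coefficients has variance
# close to 1/12 for never-compressed content and noticeably lower for
# decompressed JPEG images.
import numpy as np
from scipy.fft import dctn

def forward_quantization_noise_variance(gray_image, block=8):
    img = gray_image.astype(np.float64) - 128.0          # JPEG level shift
    h = img.shape[0] - img.shape[0] % block
    w = img.shape[1] - img.shape[1] % block
    noise = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            d = dctn(img[i:i + block, j:j + block], norm='ortho')
            noise.append((np.round(d) - d).ravel())       # forward quantization noise
    return float(np.var(np.concatenate(noise)))

def looks_decompressed(gray_image, threshold=0.06):
    """Threshold is illustrative only; the paper derives a principled test."""
    return forward_quantization_noise_variance(gray_image) < threshold
```

For an uncompressed natural image the fractional parts of the DCT coefficients are roughly uniform, giving a variance near 1/12 ≈ 0.083, whereas prior quantization pulls the coefficients toward integer multiples and lowers that variance.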

Proceedings ArticleDOI
26 May 2015
TL;DR: This paper addresses the limited availability of image datasets suitable for studying and evaluating HDR image compression by creating a publicly available dataset of 20 HDR images and corresponding versions compressed at four different bit rates with three profiles of the upcoming JPEG XT standard.
Abstract: Recent advances in high dynamic range (HDR) capturing and display technologies have attracted a lot of interest in HDR imaging. Many issues that are considered as being resolved for conventional low dynamic range (LDR) images pose new challenges in the HDR context. One such issue is the lack of standards for HDR image compression. Another is the limited availability of image datasets suitable for studying and evaluating HDR image compression. In this paper, we address this problem by creating a publicly available dataset of 20 HDR images and corresponding versions compressed at four different bit rates with three profiles of the upcoming JPEG XT standard for HDR image compression. The images cover different scenes, dynamic ranges, and acquisition methods (fusion from several exposures, a frame of an HDR video, and CGI-generated images). The dataset also includes Mean Opinion Scores (MOS) for each compressed version of the images, obtained from extensive subjective experiments using a SIM2 HDR monitor.

Journal ArticleDOI
TL;DR: The DCT-GIST image representation model, which is useful for summarizing the context of the scene, is introduced; it closely matches other state-of-the-art methods based on bags of Textons collected on a spatial hierarchy.

Proceedings ArticleDOI
17 Jun 2015
TL;DR: A new image encryption scheme specially designed to protect JPEG images in cloud photo storage services, which allows efficient reconstruction of an accurate low-resolution thumbnail from the ciphertext image, but aims to prevent the extraction of any more detailed information.
Abstract: With more and more data being stored in the cloud, securing multimedia data is becoming increasingly important. Use of existing encryption methods with cloud services is possible, but makes many web-based applications difficult or impossible to use. In this paper, we propose a new image encryption scheme specially designed to protect JPEG images in cloud photo storage services. Our technique allows efficient reconstruction of an accurate low-resolution thumbnail from the ciphertext image, but aims to prevent the extraction of any more detailed information. This will allow efficient storage and retrieval of image data in the cloud but protect its contents from outside hackers or snooping cloud administrators. Experiments of the proposed approach using an online selfie database show that it can achieve a good balance of privacy, utility, image quality, and file size.

Journal ArticleDOI
TL;DR: Inspired by the blocking artifact characteristics matrix (BACM), a method to detect tampering caused by seam modification on JPEG retargeted images without knowledge of the original image is proposed in this paper.
Abstract: Content-aware image retargeting has been investigated over the last decade as a paradigm of image modification for proper display on different screen sizes. Modifications, such as seam carving or seam insertion, have been introduced to achieve the aforesaid image retargeting. The changes in an image are not easily recognizable by human eyes. Inspired by the blocking artifact characteristics matrix (BACM), a method to detect tampering caused by seam modification on JPEG retargeted images without knowledge of the original image is proposed in this paper. In a BACM block matrix, we found that the original JPEG image demonstrates regular symmetrical data, whereas the symmetrical data in a block reconstructed by seam modification is destroyed. Twenty-two features are proposed to train the data using a support vector machine classification method. The experimental results clearly demonstrate that the proposed method provides a very high recognition rate for those JPEG retargeted images. The source codes and the complete experimental data can be accessed at http://videominelabtw/DETS/

Journal ArticleDOI
TL;DR: A rate-distortion performance analysis of the HEVC MSP profile in comparison to other popular still image and video compression schemes, including JPEG, JPEG 2000, JPEG XR, H.264/MPEG-4 AVC, VP8, VP9, and WebP is presented.
Abstract: The first version of the High Efficiency Video Coding (HEVC) standard was approved by both ITU-T and ISO/IEC in 2013 and includes three profiles: Main and Main 10 for typical video data with 8 and 10 bits, respectively, as well as a profile referred to as Main Still Picture (MSP) profile. Apparently, the MSP profile extends the HEVC application space toward still images which, in turn, brings up the question of how this HEVC profile performs relative to existing still image coding technologies. This paper aims at addressing this question from a coding-efficiency point-of-view by presenting a rate-distortion performance analysis of the HEVC MSP profile in comparison to other popular still image and video compression schemes, including JPEG, JPEG 2000, JPEG XR, H.264/MPEG-4 AVC, VP8, VP9, and WebP. In summary, it can be stated that the HEVC MSP profile provides average bit-rate savings in the range from 10% to 44% relative to the whole set of competing video and still image compression schemes when averaged over a representative test set of photographic still images. Compared with Baseline JPEG alone, the average bit-rate saving for the HEVC MSP profile is 44%.

Journal ArticleDOI
TL;DR: This work proposes a new compression format, .zfib, for streamline tractography datasets reconstructed from diffusion magnetic resonance imaging (dMRI), which is highly compressible and opens new opportunities for connectomics and tractometry applications.

Proceedings ArticleDOI
01 Nov 2015
TL;DR: This paper describes a general principle for incorporating the side-information in any steganographic scheme designed to minimize embedding distortion and appears to improve empirical security of existing embedding schemes by a rather large margin.
Abstract: Side-informed steganography is a term used for embedding secret messages while utilizing a higher quality form of the cover object called the precover. The embedding algorithm typically makes use of the quantization errors available when converting the precover to a lower quality cover object. Virtually all previously proposed side-informed steganographic schemes were limited to the case when the side-information is in the form of an uncompressed image and the embedding uses the unquantized DCT coefficients to improve the security when JPEG compressing the precover. Inspired by the side-informed (SI) UNIWARD embedding scheme, in this paper we describe a general principle for incorporating the side-information in any steganographic scheme designed to minimize embedding distortion. Further improvement in security is obtained by allowing a ternary embedding operation instead of binary and computing the costs from the unquantized cover. The usefulness of the proposed embedding paradigm is demonstrated on a wide spectrum of various information-reducing image processing operations, including image downsampling, color depth reduction, and filtering. Side-information appears to improve empirical security of existing embedding schemes by a rather large margin.
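As a sketch in the spirit of SI-UNIWARD (the exact modulation used in this paper may differ), the quantization error can be used to make the change that moves a coefficient back toward its unquantized value cheaper, while the opposite change keeps its full cost:

```python
# Sketch, in the spirit of SI-UNIWARD, of side-informed cost modulation for a
# ternary embedding: e is the quantization error of each DCT coefficient
# (unquantized minus rounded value, |e| <= 0.5) and rho the baseline cost.
# The (1 - 2|e|) factor is the SI-UNIWARD-style modulation; details of the
# paper's general principle may differ.
import numpy as np

def side_informed_ternary_costs(rho, e):
    """Return (cost of a +1 change, cost of a -1 change) per coefficient."""
    toward = (1.0 - 2.0 * np.abs(e)) * rho   # change in the direction of sign(e)
    away = rho                                # change in the opposite direction
    cost_plus = np.where(e >= 0, toward, away)
    cost_minus = np.where(e >= 0, away, toward)
    return cost_plus, cost_minus
```

Coefficients whose rounding error was close to 0.5 thus become nearly free to flip toward the precover value, which is where the security gain from the side-information comes from.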

Proceedings ArticleDOI
TL;DR: This paper will review and analyze past and on-going work for the compression of digital holographic data, focusing on JPEG and MPEG compression techniques for holograms.
Abstract: Holography has the potential to become the ultimate 3D experience. Nevertheless, in order to achieve practical working systems, major scientific and technological challenges have to be tackled. In particular, as digital holographic data represents a huge amount of information, the development of efficient compression techniques is a key component. This problem has gained significant attention by the research community during the last 10 years. Given that holograms have very different signal properties when compared to natural images and video sequences, existing compression techniques (e.g. JPEG or MPEG) remain suboptimal, calling for innovative compression solutions. In this paper, we will review and analyze past and on-going work for the compression of digital holographic data.

Journal ArticleDOI
TL;DR: A statistical analysis of JPEG noises, including the quantization noise and the rounding noise during a JPEG compression cycle, reveals that the noise distributions in higher compression cycles are different from those in the first compression cycle, and they are dependent on the quantization parameters used between two successive cycles.
Abstract: In this paper, we present a statistical analysis of JPEG noises, including the quantization noise and the rounding noise during a JPEG compression cycle. The JPEG noises in the first compression cycle have been well studied; however, so far less attention has been paid on the statistical model of JPEG noises in higher compression cycles. Our analysis reveals that the noise distributions in higher compression cycles are different from those in the first compression cycle, and they are dependent on the quantization parameters used between two successive cycles. To demonstrate the benefits from the analysis, we apply the statistical model in JPEG quantization step estimation. We construct a sufficient statistic by exploiting the derived noise distributions, and justify that the statistic has several special properties to reveal the ground-truth quantization step. Experimental results demonstrate that the proposed estimator can uncover JPEG compression history with a satisfactory performance.
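Schematically (a sketch using the usual single-cycle definitions; the paper's contribution is the analysis of how these distributions change across higher compression cycles), for an unquantized DCT coefficient $D$ with quantization step $q$ and a real-valued decompressed pixel value $p$:

$$n_{\mathrm{quant}} = q\,\mathrm{round}(D/q) - D, \qquad n_{\mathrm{round}} = \mathrm{round}(p) - p .$$

It is the dependence of these noise distributions on the quantization parameters of two successive cycles that the proposed quantization-step estimator exploits.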

Journal ArticleDOI
TL;DR: These techniques can be implemented in the field for storing and transmitting medical images in a secure manner, as demonstrated by the experimental results.
Abstract: Exchanging a medical image over a network or storing it in a particular place in a secure manner has become a challenge. To overcome this, secure medical image Lossless Compression (LC) schemes have been proposed. The original grayscale medical images are encrypted by the Tailored Visual Cryptography Encryption Process (TVCE), the proposed encryption system. Four types of processes, which play a vital role, are adopted to generate the encrypted images: a Splitting Process, a Converting Process, a Pixel Process and a Merging Process. The encrypted medical image is then compressed either by the proposed compression algorithm, the Pixel Block Short Algorithm (PBSA), or by an adopted conventional LC algorithm, JPEG 2000LS. The two methods compress the encrypted medical images separately, and decompression is likewise performed separately. The encrypted images obtained by decompression are decrypted by the Tailored Visual Cryptography Decryption Process (TVCD), which involves four types of processes: a Segregation Process, an Inverse Pixel Process, an 8-Bit-to-Decimal Conversion Process and an Amalgamate Process. From these processes, two original images are reconstructed, one for each compression algorithm, and the two combinations are compared with each other on various parameters. These techniques can be implemented in the field for storing and transmitting medical images in a secure manner, and the Confidentiality, Integrity and Availability (CIA) properties of a medical image are demonstrated by the experimental results. This paper focuses only on the proposed visual cryptography scheme.

Journal ArticleDOI
01 Dec 2015-Optik
TL;DR: Experimental results reveal that the proposed DCT-Arnold chaotic based watermarking algorithm not only attains satisfactory imperceptibility, but also achieves stronger robustness against most common attacks such as JPEG compression, cropping, various kinds of added noise, filtering, and scaling.