
Showing papers on "Lossless JPEG published in 2018"


Journal ArticleDOI
TL;DR: A JPEG encryption algorithm is proposed, which enciphers an image to a smaller size, keeps the format compliant with JPEG decoders, and outperforms a previous work in terms of separation capability, embedding capacity and security.
Abstract: While most techniques of reversible data hiding in encrypted images (RDH-EI) are developed for uncompressed images, this paper provides a separable reversible data hiding protocol for encrypted JPEG bitstreams. We first propose a JPEG encryption algorithm, which enciphers an image to a smaller size and keeps the format compliant with JPEG decoders. After a content owner uploads the encrypted JPEG bitstream to a remote server, a data hider embeds an additional message into the encrypted copy without changing the bitstream size. On the recipient side, the original bitstream can be reconstructed losslessly using an iterative recovery algorithm based on the blocking artifact. Since message extraction and image recovery are separable, anyone who has the embedding key can extract the message from the marked encrypted copy. Experimental results show that the proposed method outperforms a previous work in terms of separation capability, embedding capacity and security.
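The recovery step scores candidate reconstructions by how pronounced their blocking artifacts are. The sketch below is a minimal boundary-discontinuity score in Python, assuming the image is a 2-D NumPy array of luminance values; it illustrates the general idea rather than the authors' iterative recovery algorithm.

```python
import numpy as np

def blocking_artifact_score(img, block=8):
    """Sum of absolute luminance jumps across 8x8 block boundaries.

    In blocking-artifact-based recovery, a candidate reconstruction with a
    lower boundary-discontinuity score is taken to be closer to the original
    (illustrative scoring only, not the authors' iterative algorithm).
    """
    img = img.astype(np.float64)
    h, w = img.shape
    score = 0.0
    for x in range(block, w, block):      # vertical boundaries between blocks
        score += np.abs(img[:, x] - img[:, x - 1]).sum()
    for y in range(block, h, block):      # horizontal boundaries between blocks
        score += np.abs(img[y, :] - img[y - 1, :]).sum()
    return score
```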

95 citations


Journal ArticleDOI
TL;DR: Compared with the current S-UNIWARD steganography, the message extraction error rates of the proposed algorithm after JPEG compression decrease from about 50% to nearly 0, and the algorithm not only possesses comparable JPEG compression resistance, but also has stronger detection resistance and higher operation efficiency.
Abstract: In order to improve the JPEG compression resistance of current steganography algorithms that resist statistical detection, an adaptive steganography algorithm resisting JPEG compression and detection based on dither modulation is proposed. Utilizing the adaptive dither modulation algorithm based on the quantization tables, the embedding domains resisting JPEG compression for spatial images and JPEG images are determined separately. Then the embedding cost function is constructed by the embedding cost calculation algorithm based on side information. Finally, RS coding is combined with STCs to realize minimum-cost message embedding while improving the correct rates of the extracted messages after JPEG compression. The experimental results demonstrate that the algorithm can be applied to both spatial images and JPEG images. Compared with the current S-UNIWARD steganography, the message extraction error rates of the proposed algorithm after JPEG compression decrease from about 50% to nearly 0; compared with current JPEG compression and detection resistant steganography algorithms, the proposed algorithm not only possesses comparable JPEG compression resistance, but also has stronger detection resistance and higher operation efficiency.
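Dither modulation is a form of quantization index modulation (QIM): a bit is embedded by snapping a coefficient onto one of two interleaved lattices whose spacing comes from the JPEG quantization table. The minimal sketch below shows only that single-coefficient mechanic; the paper's adaptive step selection, cost function, and RS/STC coding are not reproduced.

```python
import numpy as np

def qim_embed(coeff, bit, step):
    """Embed one bit into a coefficient by snapping it onto one of two
    interleaved lattices (spacing `step`, offset by step/2): plain dither
    modulation / quantization index modulation."""
    dither = 0.0 if bit == 0 else step / 2.0
    return np.round((coeff - dither) / step) * step + dither

def qim_extract(coeff, step):
    """Recover the bit by checking which lattice the received coefficient
    is closer to (robust to perturbations smaller than step/4)."""
    d0 = abs(coeff - np.round(coeff / step) * step)
    d1 = abs(coeff - (np.round((coeff - step / 2.0) / step) * step + step / 2.0))
    return 0 if d0 <= d1 else 1
```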

39 citations


Journal ArticleDOI
TL;DR: This work proposes a statistical model designed for the meaningful interpretation of image distortion data, which is affected by visual search and imprecision of manual marking, and demonstrates the utility of such a metric in visually lossless JPEG compression, super-resolution and watermarking.
Abstract: A large number of imaging and computer graphics applications require localized information on the visibility of image distortions. Existing image quality metrics are not suitable for this task as they provide a single quality value per image. Existing visibility metrics produce visual difference maps, and are specifically designed for detecting just noticeable distortions but their predictions are often inaccurate. In this work, we argue that the key reason for this problem is the lack of large image collections with a good coverage of possible distortions that occur in different applications. To address the problem, we collect an extensive dataset of reference and distorted image pairs together with user markings indicating whether distortions are visible or not. We propose a statistical model that is designed for the meaningful interpretation of such data, which is affected by visual search and imprecision of manual marking. We use our dataset for training existing metrics and we demonstrate that their performance significantly improves. We show that our dataset with the proposed statistical model can be used to train a new CNN-based metric, which outperforms the existing solutions. We demonstrate the utility of such a metric in visually lossless JPEG compression, super-resolution and watermarking.
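One way such a visibility metric supports visually lossless JPEG compression is by searching for the lowest quality factor at which no distortion is reported as visible. The sketch below assumes a hypothetical `distortion_visible(reference, decoded)` predicate standing in for the paper's CNN metric, uses Pillow for JPEG encoding, and assumes visibility decreases monotonically with the quality setting.

```python
import io
from PIL import Image

def visually_lossless_quality(reference, distortion_visible, q_min=40, q_max=100):
    """Binary-search the lowest JPEG quality whose decoded output the
    visibility predicate judges indistinguishable from the reference.
    Assumes visibility decreases monotonically with the quality setting."""
    best = q_max
    lo, hi = q_min, q_max
    while lo <= hi:
        q = (lo + hi) // 2
        buf = io.BytesIO()
        reference.save(buf, format="JPEG", quality=q)
        buf.seek(0)
        decoded = Image.open(buf).convert(reference.mode)
        if distortion_visible(reference, decoded):
            lo = q + 1       # distortion visible: need higher quality
        else:
            best = q         # invisible: try lower quality
            hi = q - 1
    return best
```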

36 citations


Journal ArticleDOI
TL;DR: This paper proposes a technique for image watermarking during JPEG compression to address the optimal trade-off between major performance parameters including embedding and compression rates, robustness and embedding alterations against different known signal processing attacks.
Abstract: This paper presents a computationally efficient joint imperceptible image watermarking and Joint Photographic Experts Group (JPEG) compression scheme. In recent times, the transmission and storage of digital documents/information over unsecured channels are enormous concerns, and nearly all digital documents are compressed before they are stored or transmitted to save bandwidth. There are many similar computational operations performed during watermarking and compression, which leads to computational redundancy and time delay. This demands the development of a joint watermarking and compression scheme for various multimedia contents. In this paper, we propose a technique for image watermarking during JPEG compression to address the optimal trade-off between major performance parameters, including embedding and compression rates, robustness, and embedding alterations against different known signal processing attacks. The performance of the proposed technique is extensively evaluated in terms of peak signal to noise ratio (PSNR), correlation, compression ratio and execution time for different discrete cosine transform (DCT) block and watermark sizes. Embedding is done on DCT coefficients using additive watermarking.
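Additive DCT-domain watermarking adds a scaled, key-seeded pseudorandom sequence to selected coefficients of each 8x8 block. The sketch below is a stand-alone illustration of that embedding step on a grayscale NumPy image, not the paper's joint watermarking-and-compression pipeline; the coefficient position and the strength alpha are arbitrary choices.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_additive_watermark(img, key=0, alpha=2.0, block=8):
    """Add a key-seeded +/-1 sequence, scaled by alpha, to one mid-frequency
    DCT coefficient of every 8x8 block of a grayscale image."""
    rng = np.random.default_rng(key)
    out = img.astype(np.float64).copy()
    h, w = out.shape
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            blk = dctn(out[y:y + block, x:x + block], norm="ortho")
            blk[3, 4] += alpha * rng.choice((-1.0, 1.0))   # illustrative mid-frequency slot
            out[y:y + block, x:x + block] = idctn(blk, norm="ortho")
    return np.clip(out, 0, 255).astype(np.uint8)
```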

24 citations


Journal ArticleDOI
TL;DR: A tricky anti-forensic method has been proposed to remove the traces left by JPEG compression in both the spatial domain and the discrete cosine transform domain, and a novel Least Cuckoo Search algorithm is devised in this paper.
Abstract: The lossy nature of the JPEG compression leaves traces which are utilized by the forensic agents to identify the local tampering in the image. In this paper, a tricky anti-forensic method h...

22 citations


Journal ArticleDOI
Nan Jiang, Xiaowei Lu, Hao Hu, Yijie Dang, Yongquan Cai
TL;DR: This paper uses GQIR (the generalized quantum image representation) to represent an image, and tries to decrease the number of operations used in preparation, which is also known as quantum image compression.
Abstract: Quantum image processing has been a hot topic. The first step is to store an image into qubits, which is called quantum image preparation. Different quantum image representations may have different preparation methods. In this paper, we use GQIR (the generalized quantum image representation) to represent an image, and try to decrease the number of operations used in preparation, which is also known as quantum image compression. Our compression scheme is based on JPEG (named after its developer, the Joint Photographic Experts Group), the most widely used method for still image compression on classical computers. We input the quantized JPEG coefficients into qubits and then convert them into pixel values. Theoretical analysis and experimental results show that the compression ratio of our scheme is clearly higher than that of the previous compression method.
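The classical front end the scheme relies on is the standard JPEG pipeline of blockwise DCT followed by quantization; the resulting integer coefficients are what get loaded into qubits. The sketch below computes them for a single 8x8 block with the standard JPEG luminance table; the quantum preparation itself is outside the scope of this sketch.

```python
import numpy as np
from scipy.fft import dctn

# Standard JPEG luminance quantization table (quality 50), from the JPEG spec.
Q50 = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=np.float64)

def quantized_jpeg_coefficients(block8x8):
    """Level-shift, apply the 2-D DCT, and quantize one 8x8 pixel block.
    These integer coefficients are the classical data the scheme stores in qubits."""
    shifted = block8x8.astype(np.float64) - 128.0
    coeffs = dctn(shifted, norm="ortho")
    return np.round(coeffs / Q50).astype(np.int32)
```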

21 citations


Journal ArticleDOI
TL;DR: A hybrid method combining geometry-adaptive partitioning and quadtree partitioning is proposed to achieve adaptive irregular segmentation for medical images; it not only exploits the spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance.
Abstract: To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method that combines geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least-squares (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
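Region-based LS prediction fits a small set of weights that map each pixel's causal neighbors to the pixel itself, separately for every region, and the residuals are then entropy coded. The sketch below fits and applies such a predictor for one region using only the west, north, and north-west neighbors; the paper's segmentation and exact predictor context are not reproduced.

```python
import numpy as np

def ls_predict_region(img, coords):
    """Fit least-squares weights mapping each pixel's (west, north, north-west)
    neighbors to the pixel itself over one region, and return the weights and
    the prediction residuals an entropy coder would then encode."""
    img = img.astype(np.float64)
    A, b = [], []
    for (y, x) in coords:
        if y == 0 or x == 0:
            continue  # skip pixels without a full causal neighborhood
        A.append([img[y, x - 1], img[y - 1, x], img[y - 1, x - 1]])
        b.append(img[y, x])
    A, b = np.asarray(A), np.asarray(b)
    weights, *_ = np.linalg.lstsq(A, b, rcond=None)
    residuals = b - A @ weights
    return weights, residuals
```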

18 citations


Journal ArticleDOI
TL;DR: The CVQ-SA algorithm, with codebook optimization by Simulated Annealing for the compression of CT images, was validated in terms of metrics like Peak Signal to Noise Ratio, Mean Square Error and Compression Ratio, and the results were superior when compared with the classical VQ, CVQ, JPEG lossless and JPEG lossy algorithms.
Abstract: The role of compression is vital in telemedicine for the storage and transmission of medical images. This work is based on the Contextual Vector Quantization (CVQ) compression algorithm with codebook optimization by Simulated Annealing (SA) for the compression of CT images. The region of interest (foreground) and the background are first separated by a region growing algorithm. The region of interest is encoded with a low compression ratio and high bit rate; the background region is encoded with a high compression ratio and low bit rate. The codebooks generated from the foreground and background are merged and optimized by the simulated annealing algorithm. The performance of the CVQ-SA algorithm was validated in terms of metrics like Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE) and Compression Ratio (CR); the results were superior when compared with the classical VQ, CVQ, JPEG lossless and JPEG lossy algorithms. The algorithms were developed in Matlab 2010a and tested on real-time abdomen CT datasets. The quality of the reconstructed image was also validated by metrics like Structural Content (SC), Normalized Absolute Error (NAE) and Normalized Cross Correlation (NCC), and statistical analysis was performed with the Mann-Whitney U test. The outcome of this work will be an aid in the field of telemedicine for the transfer of medical images.
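The evaluation metrics used here are standard; for reference, a minimal implementation of MSE, PSNR and compression ratio for 8-bit images could look like the following.

```python
import numpy as np

def mse(original, reconstructed):
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal to Noise Ratio in dB; infinite when reconstruction is exact."""
    m = mse(original, reconstructed)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def compression_ratio(original_size_bytes, compressed_size_bytes):
    """Ratio of uncompressed size to compressed size."""
    return original_size_bytes / compressed_size_bytes
```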

13 citations


Journal ArticleDOI
TL;DR: Five new techniques are proposed to further improve the performance of context-based adaptive arithmetic coding and make the frequency table of CAAC converge to the true probability distribution rapidly and hence improve the coding efficiency.
Abstract: Context-based adaptive arithmetic coding (CAAC) has high coding efficiency and is adopted by the majority of advanced compression algorithms. In this paper, five new techniques are proposed to further improve the performance of CAAC. They make the frequency table (the table used to estimate the probability distribution of data according to the past input) of CAAC converge to the true probability distribution rapidly and hence improve the coding efficiency. Instead of varying only one entry of the frequency table, the proposed range-adjusting scheme adjusts the entries near to the current input value together. With the proposed mutual-learning scheme, the frequency tables of the contexts highly correlated to the current context are also adjusted. The proposed increasingly adjusting step scheme applies a greater adjusting step for recent data. The proposed adaptive initialization scheme uses a proper model to initialize the frequency table. Moreover, a local frequency table is generated according to local information. We perform several simulations on edge-directed prediction-based lossless image compression, coefficient encoding in JPEG, bit plane coding in JPEG 2000, and motion vector residue coding in video compression. All simulations confirm that the proposed techniques can reduce the bit rate and are beneficial for data compression.
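The range-adjusting idea can be illustrated with a simple adaptive-frequency-table update that, instead of incrementing only the coded symbol's entry, gives its neighbors a decaying share of the increment. The sketch below uses assumed step and spread parameters and illustrates the idea rather than the paper's exact rule; the mutual-learning, increasing-step, adaptive-initialization and local-table schemes are omitted.

```python
def range_adjust_update(freq, symbol, step=8, spread=2):
    """Adaptive frequency-table update in the spirit of range adjusting:
    the coded symbol receives the full increment and nearby symbols receive
    a linearly decaying share, so the table tracks smooth sources faster.
    (Illustrative rule with assumed step/spread, not the paper's exact scheme.)"""
    n = len(freq)
    decay = step // (spread + 1)
    for offset in range(-spread, spread + 1):
        idx = symbol + offset
        if 0 <= idx < n:
            freq[idx] += max(step - abs(offset) * decay, 1)
    return freq
```

Starting every entry at 1 keeps all symbols codable, e.g. `freq = range_adjust_update([1] * 256, symbol=130)`.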

12 citations


Journal ArticleDOI
TL;DR: A model is proposed for classifying bulk JPEG images produced by the data carving process or other means into three different classes, to solve the problem of identifying forgery quickly and effectively; it can help investigators to automatically classify JPEG images, which reduces the time needed in the overall digital investigation process.
Abstract: From the digital forensics point of view, image forgery is considered evidence that could provide a major breakthrough in the investigation process. Additionally, the development of storage device technologies has increased storage space significantly. Thus, a digital investigator can be overwhelmed by the amount of data on storage devices that needs to be analysed. In this paper, we propose a model for classifying bulk JPEG images produced by the data carving process or other means into three different classes to solve the problem of identifying forgery quickly and effectively. The first class is JPEG images that contain errors or corrupted data, the second class is JPEG images that contain forged regions, and the third is JPEG images that have no signs of corruption or forgery. To test the proposed model, some experiments were conducted on our own dataset in addition to the CASIA V2 image forgery dataset. The experiments covered different types of forgery techniques. The results yielded a classification accuracy of around 88% using five different machine learning methods on the CASIA V2 dataset. It can be concluded that the proposed model can help investigators to automatically classify JPEG images, which reduces the time needed in the overall digital investigation process.

7 citations


Book ChapterDOI
01 Jan 2018
TL;DR: The proposed JPEG compression algorithms are built on the image smoothing operators mean, median, harmonic mean, and contra-harmonic mean, which enhance speed while minimizing memory requirements by reducing the number of encoded bits.
Abstract: The volume of information conveyed across the Internet has soared exponentially over the past two decades. Image compression is a significant approach to shrinking an image, and JPEG is the prevailing still image compression standard for bandwidth preservation, so images can be processed and transmitted sooner. In this paper, the proposed JPEG algorithms show better results than conventional JPEG-compressed data, in terms of the number of encoded bits, on images that may be corrupted with speckle, Poisson, and salt-and-pepper noise. The proposed JPEG compression algorithms are built on the image smoothing operators mean, median, harmonic mean, and contra-harmonic mean, which enhance speed while minimizing memory requirements by reducing the number of encoded bits.
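Of the four smoothing operators, the contra-harmonic mean is the least familiar: each output pixel is the ratio of the window's (Q+1)-power sum to its Q-power sum, so positive orders suppress pepper noise and negative orders suppress salt noise. The sketch below is a direct, unoptimized implementation on a grayscale NumPy image; the window size and order Q are illustrative defaults, not values taken from the chapter.

```python
import numpy as np

def contraharmonic_mean_filter(img, size=3, Q=1.5):
    """Contra-harmonic mean filter: each output pixel equals
    sum(window**(Q+1)) / sum(window**Q) over a size x size neighborhood.
    Positive Q suppresses pepper noise; negative Q suppresses salt noise."""
    img = img.astype(np.float64)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    eps = 1e-12  # guards against division by zero for all-zero windows
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            win = padded[y:y + size, x:x + size]
            out[y, x] = (win ** (Q + 1)).sum() / ((win ** Q).sum() + eps)
    return np.clip(out, 0, 255).astype(np.uint8)
```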

Journal ArticleDOI
TL;DR: This paper proposes a novel countering method based on noise level estimation to distinguish uncompressed images from forged ones, and analyzes the strategies available to the investigator and the forger in the case that they are aware of each other's existence.
Abstract: Quantization artifacts and blocking artifacts are the two well-known types of fingerprints of JPEG compression. Most JPEG forensic techniques focus on these fingerprints. However, recent research shows that these fingerprints can be intentionally concealed via anti-forensics, which in turn makes current JPEG forensic methods vulnerable. A typical JPEG anti-forensic method adds anti-forensic dither to the DCT coefficients and erases blocking artifacts to remove the traces of compression history. To deal with this challenge in JPEG forensics, in this paper we propose a novel countering method based on noise level estimation to distinguish uncompressed images from forged ones. The experimental results show that the proposed method achieves superior performance on several image databases with only a one-dimensional feature. It is also worth emphasizing that the proposed threshold-based method has an explicit physical meaning and is simple to implement in practice. Moreover, we analyze the strategies available to the investigator and the forger in the case that they are aware of each other's existence. Game theory is used to evaluate the ultimate performance when both sides adopt their Nash equilibrium strategies.
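A threshold rule of this kind needs a scalar noise-level estimate. The sketch below uses a generic median-absolute-deviation estimate on a horizontal difference residual, which is a common rough estimator but not necessarily the one used in the paper, together with a placeholder, uncalibrated threshold.

```python
import numpy as np

def estimate_noise_level(img):
    """Rough noise standard deviation from the median absolute deviation of a
    horizontal difference residual. 0.6745 is the MAD-to-sigma factor for
    Gaussian noise; sqrt(2) accounts for differencing two noisy pixels."""
    residual = np.diff(img.astype(np.float64), axis=1)
    mad = np.median(np.abs(residual - np.median(residual)))
    return mad / (0.6745 * np.sqrt(2.0))

def flag_by_noise_level(img, threshold=1.0):
    """Threshold rule in the spirit of the paper: compare the noise-level
    estimate against a calibrated threshold to separate uncompressed images
    from anti-forensically forged ones. The threshold here is a placeholder."""
    return estimate_noise_level(img) > threshold
```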

Proceedings ArticleDOI
06 Mar 2018
TL;DR: This paper divides the image into ROI and non-ROI parts, adopts a block-based image compression technique to decrease the range of error diffusion, and applies JPEG-LS lossless compression to the ROI blocks and JPEG-LS near-lossless compression to the blocks in the non-ROI (unimportant) regions.
Abstract: The JPEG-LS lossless compression algorithm is used in many specialized applications that emphasize the attainment of high fidelity, owing to its lower complexity and better compression ratios than the lossless JPEG standard. However, it cannot prevent error diffusion because of the context dependence of the algorithm, and it has a low compression rate compared to lossy compression. In this paper, we first divide the image into two parts: ROI regions and non-ROI regions. Then we adopt a block-based image compression technique to decrease the range of error diffusion. We apply JPEG-LS lossless compression to the image blocks which include all or part of the region of interest (ROI), and JPEG-LS near-lossless compression to the image blocks contained entirely in the non-ROI (unimportant) regions. Finally, a set of experiments is designed to assess the effectiveness of the proposed compression method.
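The per-block mode decision is simple to express once a JPEG-LS codec is available. In the sketch below the codec is a caller-supplied `encode_block(pixels, near)` callable (for example a wrapper around a CharLS binding; that wrapper and its signature are assumptions), with `near=0` meaning lossless.

```python
import numpy as np

def encode_blocks(img, roi_mask, encode_block, block=16, near=2):
    """Split the image into blocks and encode each independently with JPEG-LS:
    lossless (near=0) if the block touches the ROI mask, near-lossless with the
    given `near` bound otherwise. `encode_block(pixels, near)` is a placeholder
    for a real JPEG-LS codec binding supplied by the caller."""
    h, w = img.shape
    streams = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = img[y:y + block, x:x + block]
            in_roi = bool(roi_mask[y:y + block, x:x + block].any())
            streams.append(((y, x), encode_block(tile, 0 if in_roi else near)))
    return streams
```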

Book ChapterDOI
31 Oct 2018
TL;DR: By using the technique presented in this chapter, JPEG image data can be encoded in real time and may thereafter be streamed to a network destination, saved as an image file, or appended to a movie sequence.
Abstract: Transporting frames within a DirectX application to a target destination is a nontrivial task. For example, both low latency and available bandwidth are important factors when streaming image frames via a network connection. By using the technique presented in this chapter, JPEG image data can be encoded in real time. The resulting data may thereafter be streamed to a network destination, saved as an image file, or appended to a movie sequence.

Book ChapterDOI
05 Jul 2018
TL;DR: This study presents a comparison of lossless compression techniques: the LOCO-I, LZW, and Lossless JPEG algorithms, which were tested on 5 modalities of medical images to find the best algorithm for compressing medical images.
Abstract: Medical images play an important role in medical science. Through medical images, doctors can make more accurate diagnoses and provide better treatment for patients. However, medical images consume a large amount of storage; therefore, data compression needs to be applied to them. This study presents a comparison of lossless compression techniques: the LOCO-I, LZW, and Lossless JPEG algorithms, tested on 5 modalities of medical images. All the algorithms are theoretically and practically lossless image compression methods, with an MSE equal to 0. Moreover, LZW offers a higher compression ratio and a faster decompression process, but a slower compression process, than the other two algorithms. Lossless JPEG has the lowest compression ratio and requires more time both to compress and to decompress the images. Therefore, in this study, LZW is in general the best algorithm for compressing medical images.
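Such a comparison amounts to round-tripping each image through each codec, checking that the MSE is exactly 0, and recording the compression ratio and timings. The sketch below is a minimal harness in which each codec is passed in as a pair of compress/decompress callables; the actual LOCO-I, LZW and Lossless JPEG implementations are assumed to be provided elsewhere.

```python
import time
import numpy as np

def evaluate_codec(name, compress, decompress, img):
    """Round-trip one image through a codec (compress returns bytes,
    decompress returns a NumPy array) and report whether it is truly lossless
    (MSE == 0), its compression ratio, and the compress/decompress times."""
    t0 = time.perf_counter()
    stream = compress(img)
    t1 = time.perf_counter()
    restored = decompress(stream)
    t2 = time.perf_counter()
    err = np.mean((img.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return {
        "codec": name,
        "lossless": err == 0.0,
        "compression_ratio": img.nbytes / len(stream),
        "compress_s": t1 - t0,
        "decompress_s": t2 - t1,
    }
```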