
Showing papers on "Lossless JPEG" published in 2007


Proceedings ArticleDOI
20 Sep 2007
TL;DR: The goal of this paper is to determine the steganographic capacity of JPEG images (the largest payload that can be undetectably embedded) with respect to current best steganalytic methods and to evaluate the influence of specific design elements and principles.
Abstract: The goal of this paper is to determine the steganographic capacity of JPEG images (the largest payload that can be undetectably embedded) with respect to current best steganalytic methods. Additionally, by testing selected steganographic algorithms we evaluate the influence of specific design elements and principles, such as the choice of the JPEG compressor, matrix embedding, adaptive content-dependent selection channels, and minimal distortion steganography using side information at the sender. From our experiments, we conclude that the average steganographic capacity of grayscale JPEG images with quality factor 70 is approximately 0.05 bits per non-zero AC DCT coefficient.
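
As a rough sense of scale for the reported 0.05 bits per non-zero AC DCT coefficient (bpnc), the sketch below converts such a rate into an absolute payload; the coefficient count is an assumed example figure, not a number from the paper.

```python
# Hedged illustration: convert a capacity quoted in bits per non-zero AC DCT
# coefficient (bpnc) into bytes for one image. The coefficient count used in
# the example is an assumed, illustrative value.

def payload_bytes(nonzero_ac_coeffs: int, bpnc: float = 0.05) -> float:
    """Payload (bytes) that fits at `bpnc` bits per non-zero AC coefficient."""
    return nonzero_ac_coeffs * bpnc / 8.0

# A 512x512 grayscale JPEG at quality 70 might retain on the order of
# 60,000 non-zero AC coefficients (assumed figure):
print(f"{payload_bytes(60_000):.0f} bytes")   # ~375 bytes
```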

390 citations


Proceedings ArticleDOI
27 Feb 2007
TL;DR: A novel statistical model based on Benford's law for the probability distributions of the first digits of the block-DCT and quantized JPEG coefficients is presented, and a parametric logarithmic law, i.e., the generalized Benford's law, is formulated.
Abstract: In this paper, a novel statistical model based on Benford's law for the probability distributions of the first digits of the block-DCT and quantized JPEG coefficients is presented. A parametric logarithmic law, i.e., the generalized Benford's law, is formulated. Furthermore, some potential applications of this model in image forensics are discussed in this paper, including the detection of JPEG compression for images in bitmap format, the estimation of the JPEG compression Q-factor for JPEG-compressed bitmap images, and the detection of double-compressed JPEG images. The results of our extensive experiments demonstrate the effectiveness of the proposed statistical model.
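
The abstract does not spell the law out, but the generalized Benford's law is usually written as p(n) = N·log10(1 + 1/(s + n^q)) for first digits n = 1..9, with N a normalizing constant and (s, q) fitted per JPEG quality factor. The sketch below computes that distribution and an empirical first-digit histogram of quantized coefficients; the parameter values are placeholders, not the ones fitted in the paper.

```python
import numpy as np

def generalized_benford(s: float, q: float) -> np.ndarray:
    """First-digit probabilities p(n) = N*log10(1 + 1/(s + n^q)), n = 1..9.
    N is chosen so the probabilities sum to 1. The (s, q) values depend on
    the JPEG quality factor; the ones used below are only placeholders."""
    n = np.arange(1, 10)
    unnorm = np.log10(1.0 + 1.0 / (s + n ** q))
    return unnorm / unnorm.sum()

# Standard Benford's law is the special case s = 0, q = 1:
print(generalized_benford(0.0, 1.0))   # [0.301, 0.176, ..., 0.046]

def first_digit_hist(coeffs: np.ndarray) -> np.ndarray:
    """Empirical first-digit distribution of the non-zero quantized coefficients."""
    mags = np.abs(coeffs[coeffs != 0])
    digits = (mags // 10 ** np.floor(np.log10(mags))).astype(int)
    return np.bincount(digits, minlength=10)[1:10] / len(digits)
```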

287 citations


Proceedings ArticleDOI
15 Apr 2007
TL;DR: A novel method for the detection of image tampering operations in JPEG images by exploiting the blocking artifact characteristics matrix (BACM) to train a support vector machine (SVM) classifier for recognizing whether an image is an original JPEG image or has been cropped from another JPEG image and re-saved as a JPEG image.
Abstract: One of the most common practices in image tampering involves cropping a patch from a source and pasting it onto a target. In this paper, we present a novel method for the detection of such tampering operations in JPEG images. The lossy JPEG compression introduces inherent blocking artifacts into the image, and our method exploits such artifacts to serve as a 'watermark' for the detection of image tampering. We develop the blocking artifact characteristics matrix (BACM) and show that, for original JPEG images, the BACM exhibits a regular symmetrical shape; for images that are cropped from another JPEG image and re-saved as JPEG images, the regular symmetrical property of the BACM is destroyed. We fully exploit this property of the BACM and derive representation features from the BACM to train a support vector machine (SVM) classifier for recognizing whether an image is an original JPEG image or has been cropped from another JPEG image and re-saved as a JPEG image. We present experimental results to show the efficacy of our method.
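
The sketch below is a much-simplified stand-in for the BACM idea: it scores the pixel discontinuity at every possible 8x8 grid offset, so that an untouched JPEG shows a regular, grid-aligned pattern while a cropped-and-resaved image does not. The function name and the exact measure are my own; the paper's BACM definition and its derived SVM features differ in detail.

```python
import numpy as np

def blocking_artifact_matrix(img: np.ndarray) -> np.ndarray:
    """Simplified 8x8 map of blocking-artifact strength, one entry per grid
    offset (a rough stand-in for the BACM, not the paper's definition)."""
    img = img.astype(np.float64)
    h, w = img.shape
    m = np.zeros((8, 8))
    for i in range(8):                       # horizontal boundaries at rows i, i+8, ...
        rows = np.arange(i, h - 1, 8)
        m[i, :] += np.mean(np.abs(img[rows + 1, :] - img[rows, :]))
    for j in range(8):                       # vertical boundaries at columns j, j+8, ...
        cols = np.arange(j, w - 1, 8)
        m[:, j] += np.mean(np.abs(img[:, cols + 1] - img[:, cols]))
    return m / m.max()
```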

197 citations


Journal ArticleDOI
TL;DR: A novel steganographic method based on JPEG and the Particle Swarm Optimization (PSO) algorithm is proposed; it has larger message capacity and better image quality than Chang et al.'s method, together with a high security level.

179 citations


Book ChapterDOI
22 Aug 2007
TL;DR: This paper proposes a lossless data hiding technique for JPEG images based on histogram pairs that embeds data into the JPEG quantized 8x8 block DCT coefficients and can obtain higher payload than the prior arts.
Abstract: This paper proposes a lossless data hiding technique for JPEG images based on histogram pairs. It embeds data into the JPEG quantized 8x8 block DCT coefficients and can achieve good performance in terms of PSNR versus payload through manipulating histogram pairs with optimum threshold and optimum region of the JPEG DCT coefficients. It can obtain higher payload than the prior arts. In addition, the increase of JPEG file size after data embedding remains unnoticeable. These have been verified by our extensive experiments.
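
As a flavor of how reversible embedding in quantized DCT coefficients works, here is a generic histogram-shifting sketch with a single threshold T; the paper's histogram-pair scheme with optimized threshold and coefficient region is more elaborate, so treat this as an assumption-laden simplification (DC coefficients would normally be excluded).

```python
def embed(coeffs, bits, T=1):
    """Simplified reversible histogram-shift embedding on quantized AC DCT
    coefficients (a generic sketch, not the paper's histogram-pair scheme).
    Coefficients with |c| > T are shifted outward to free the bins next to
    +/-T; one bit is embedded into each coefficient equal to +/-T."""
    out, it = [], iter(bits)
    for c in coeffs:
        if c > T:
            out.append(c + 1)
        elif c < -T:
            out.append(c - 1)
        elif c == T:
            out.append(c + next(it, 0))     # pads with 0 once the message ends
        elif c == -T:
            out.append(c - next(it, 0))
        else:
            out.append(c)
    return out

def extract(coeffs, T=1):
    """Recover the embedded bits and restore the original coefficients."""
    bits, orig = [], []
    for c in coeffs:
        if c in (T, -T):
            bits.append(0); orig.append(c)
        elif c in (T + 1, -T - 1):
            bits.append(1); orig.append(T if c > 0 else -T)
        elif c > T + 1:
            orig.append(c - 1)
        elif c < -T - 1:
            orig.append(c + 1)
        else:
            orig.append(c)
    return bits, orig

stego = embed([0, 1, -1, 3, 2], bits=[1, 0], T=1)   # -> [0, 2, -1, 4, 3]
assert extract(stego, T=1) == ([1, 0], [0, 1, -1, 3, 2])
```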

91 citations


Proceedings ArticleDOI
TL;DR: The overall rate-distortion performance of JPEG 2000, AVC/H.264 High 4:4:4 Intra and HD Photo is quite comparable for the three coding approaches, within an average range of ±10% in bitrate variations, and all three outperform the conventional JPEG.
Abstract: In this paper, we report a study evaluating rate-distortion performance between JPEG 2000, AVC/H.264 High 4:4:4 Intra and HD Photo. A set of ten high definition color images with different spatial resolutions has been used. Both the PSNR and the perceptual MSSIM index were considered as distortion metrics. Results show that, for the material used to carry out the experiments, the overall performance, in terms of compression efficiency, is quite comparable for the three coding approaches, within an average range of ±10% in bitrate variations, all of them outperforming the conventional JPEG.
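
For reference, the pixel-based distortion metric used in the comparison is standard PSNR; a minimal implementation (assuming 8-bit data with peak value 255) is sketched below, with MS-SSIM left to a dedicated library.

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a decoded image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```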

69 citations


Journal ArticleDOI
TL;DR: The spectral redundancy in hyperspectral images is exploited using a context-match method driven by the correlation between adjacent bands, which compares favorably with recently proposed lossless compression algorithms in terms of compression, with a lower complexity.
Abstract: In this paper, a new algorithm for lossless compression of hyperspectral images is proposed. The spectral redundancy in hyperspectral images is exploited using a context-match method driven by the correlation between adjacent bands. This method is suitable for hyperspectral images in the band-sequential format. Moreover, this method compares favorably with recently proposed lossless compression algorithms in terms of compression, with a lower complexity.

61 citations


Journal ArticleDOI
TL;DR: It is found that the image contrast and the average gray level play important roles in image compression and quality evaluation and in the future, the image gray level and contrast effect should be considered in developing new objective metrics.
Abstract: Previous studies have shown that Joint Photographic Experts Group (JPEG) 2000 compression is better than JPEG at higher compression ratio levels. However, some findings revealed that this is not valid at lower levels. In this study, the qualities of compressed medical images in this compression-ratio range (∼20), including computed radiography, computed tomography head and body, mammographic, and magnetic resonance T1 and T2 images, were estimated using a pixel-based metric (peak signal-to-noise ratio) and two 8 × 8 window-based metrics [Q index and Moran peak ratio (MPR)]. To diminish the effects of blocking artifacts from JPEG, jump windows were used in both window-based metrics. Comparing the image quality indices between jump and sliding windows, the results showed that blocking artifacts were produced by JPEG compression, even at low compression ratios. However, even after the blocking artifacts were omitted in JPEG compressed images, JPEG2000 outperformed JPEG at low compression levels. We found in this study that the image contrast and the average gray level play important roles in image compression and quality evaluation. There were drawbacks in all the metrics that we used. In the future, the image gray level and contrast effect should be considered in developing new objective metrics.

48 citations


Journal ArticleDOI
Yair Wiseman1
TL;DR: Replacing the traditional Huffman compression used by JPEG with Burrows-Wheeler compression yields a better compression ratio for high quality images, and if the image is synthetic, even a poor quality image can be compressed better.
Abstract: Recently, the use of the Burrows-Wheeler method for data compression has been expanded. A method of enhancing the compression efficiency of the common JPEG standard is presented in this paper, exploiting the Burrows-Wheeler compression technique. The paper suggests replacing the traditional Huffman compression used by JPEG with Burrows-Wheeler compression. When using high quality images, this replacement will yield a better compression ratio. If the image is synthetic, even a poor quality image can be compressed better.
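
A minimal sketch of the substitution being proposed: the symbol stream that JPEG would hand to its Huffman coder is instead passed through a Burrows-Wheeler transform and a move-to-front stage (a back-end entropy coder such as run-length plus arithmetic coding would follow). The naive O(n^2 log n) BWT below is for illustration only and the function names are mine.

```python
def bwt(data: bytes) -> tuple[bytes, int]:
    """Naive Burrows-Wheeler transform (O(n^2 log n), illustration only):
    returns the last column of the sorted rotation matrix plus the index of
    the original rotation, which is enough to invert the transform."""
    n = len(data)
    order = sorted(range(n), key=lambda i: data[i:] + data[:i])
    last = bytes(data[(i - 1) % n] for i in order)
    return last, order.index(0)

def move_to_front(data: bytes) -> list[int]:
    """Move-to-front recoding: recently seen symbols get small indices."""
    table = list(range(256))
    out = []
    for b in data:
        i = table.index(b)
        out.append(i)
        table.insert(0, table.pop(i))
    return out
```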

41 citations


Journal Article
TL;DR: The underlying innovative technology is described in detail and its performance is characterized for lossless and near lossless representation, both in conjunction with an AAC coder and as a stand-alone compression engine.
Abstract: Recently the MPEG Audio standardization group has successfully concluded the standardization process on technology for lossless coding of audio signals. A summary of the scalable lossless coding (SLS) technology as one of the results of this standardization work is given. MPEG-4 scalable lossless coding provides a fine-grain scalable lossless extension of the well-known MPEG-4 AAC perceptual audio coder up to fully lossless reconstruction at word lengths and sampling rates typically used for high-resolution audio. The underlying innovative technology is described in detail and its performance is characterized for lossless and near lossless representation, both in conjunction with an AAC coder and as a stand-alone compression engine. A number of application scenarios for the new technology are discussed.

37 citations


Journal ArticleDOI
TL;DR: A comparison of subjective image quality between JPEG and JPEG 2000 to establish whether JPEG 2000 does indeed demonstrate significant improvements in visual quality, and a particular focus of this work is the inherent scene dependency of the two algorithms and their influence on subjective imagequality results.
Abstract: The original JPEG compression standard is efficient at low to medium levels of compression with relatively low levels of loss in visual image quality and has found widespread use in the imaging industry. Excessive compression using JPEG however, results in well-known artifacts such as "blocking" and "ringing," and the variation in image quality as a result of differing scene content is well documented. JPEG 2000 has been developed to improve on JPEG in terms of functionality and image quality at lower bit rates. One of the more fundamental changes is the use of a discrete wavelet transform instead of a discrete cosine transform, which provides several advantages both in terms of the way in which the image is encoded and overall image quality. This study involves a comparison of subjective image quality between JPEG and JPEG 2000 to establish whether JPEG 2000 does indeed demonstrate significant improvements in visual quality. A particular focus of this work is the inherent scene dependency of the two algorithms and their influence on subjective image quality results. Further work on the characterization of scene content is carried out in a connected study [S. Triantaphillidou, E. Allen, and R. E. Jacobson, "Image quality comparison between JPEG and JPEG2000. II. Scene dependency, scene analysis, and classification"].

Journal ArticleDOI
TL;DR: An error-resilient arithmetic coder with a forbidden symbol is used in order to improve the performance of the joint source/channel scheme and the practical relevance of the proposed joint decoding approach is demonstrated within the JPEG 2000 coding standard.
Abstract: In this paper, an innovative joint source/channel coding scheme is presented. The proposed approach enables iterative soft decoding of arithmetic codes by means of a soft-in soft-out decoder based on suboptimal search and pruning of a binary tree. An error-resilient arithmetic coder with a forbidden symbol is used in order to improve the performance of the joint source/channel scheme. The performance in the case of transmission across the AWGN channel is evaluated in terms of word error probability and compared to a traditional separated approach. The interleaver gain, the convergence property of the system, and the optimal source/channel rate allocation are investigated. Finally, the practical relevance of the proposed joint decoding approach is demonstrated within the JPEG 2000 coding standard. In particular, an iterative channel and JPEG 2000 decoder is designed and tested in the case of image transmission across the AWGN channel.
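
The forbidden-symbol trick can be summarized in one line of arithmetic: reserving a fraction ε of the coder's interval for a symbol that is never transmitted costs about -log2(1-ε) extra bits per coded symbol, and decoding that symbol flags a channel error. A sketch of just this redundancy/error-detection trade-off (not the paper's soft-input decoder) follows.

```python
import math

def forbidden_symbol_redundancy(eps: float) -> float:
    """Extra rate (bits per coded symbol) paid for reserving probability `eps`
    of the arithmetic coder's interval for a forbidden symbol that is never
    encoded; decoding it signals an error."""
    return -math.log2(1.0 - eps)

for eps in (0.01, 0.05, 0.1):
    print(eps, round(forbidden_symbol_redundancy(eps), 4))
```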

Patent
19 Feb 2007
TL;DR: In this article, the authors proposed a lossless data hiding technique for JPEG images based on histogram pairs that can imperceptibly hide data into digital images and can reconstruct the original image without any distortion after the hidden data have been extracted in various digital image formats including, but not limited to Joint Photographic Experts Group (JPEG).
Abstract: Embodiments of the invention are directed toward reversible/invertible and lossless image data hiding that can imperceptibly hide data into digital images and can reconstruct the original image without any distortion after the hidden data have been extracted, in various digital image formats including, but not limited to, Joint Photographic Experts Group (JPEG). In particular, embodiments of the invention provide a lossless data hiding technique for JPEG images based on histogram pairs that embeds data into the JPEG quantized 8×8 block DCT coefficients and achieves good performance in terms of peak signal-to-noise ratio (PSNR) versus payload through manipulating histogram pairs with optimum threshold and optimum region of the JPEG DCT coefficients. Furthermore, the invented technology is expected to be applicable to the I-frames of Moving Picture Experts Group (MPEG) video for various applications including annotation, authentication, and forensics.

Journal ArticleDOI
TL;DR: Lossless compression schemes for ECG signals based on neural network predictors and entropy encoders are presented and it is shown that superior performances in terms of the distortion parameters of the reconstructed signals can be achieved with the proposed schemes.
Abstract: This paper presents lossless compression schemes for ECG signals based on neural network predictors and entropy encoders. Decorrelation is achieved by nonlinear prediction in the first stage and encoding of the residues is done by using lossless entropy encoders in the second stage. Different types of lossless encoders, such as Huffman, arithmetic, and run-length encoders, are used. The performances of the proposed neural network predictor-based compression schemes are evaluated using standard distortion and compression efficiency measures. Selected records from the MIT-BIH arrhythmia database are used for performance evaluation. The proposed compression schemes are compared with linear predictor-based compression schemes and it is shown that about 11% improvement in compression efficiency can be achieved for neural network predictor-based schemes with the same quality and similar setup. They are also compared with other known ECG compression methods and the experimental results show that superior performances in terms of the distortion parameters of the reconstructed signals can be achieved with the proposed schemes.
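
A toy version of the two-stage pipeline, with a fixed polynomial predictor standing in for the neural-network predictor (an assumption made for brevity): decorrelate by prediction, then estimate the residual entropy that a lossless Huffman/arithmetic/run-length back end could approach.

```python
import numpy as np

def residual_entropy(signal: np.ndarray, ) -> float:
    """Stage 1: decorrelate with a simple second-order predictor (a stand-in
    for the paper's neural-network predictor). Stage 2: the empirical entropy
    of the residuals, in bits per sample, bounds what a lossless entropy
    coder could achieve on them."""
    x = signal.astype(np.int64)
    pred = 2 * x[1:-1] - x[:-2]            # linear extrapolation from two past samples
    resid = x[2:] - pred
    _, counts = np.unique(resid, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```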

Proceedings ArticleDOI
12 Nov 2007
TL;DR: This note explores correlation between adjacent rows (or columns) at the block boundaries for predicting DCT coefficients of the first row/column of DCT blocks to reduce the average JPEG DC residual for images compressed at the default quality level.
Abstract: The JPEG baseline algorithm follows a block-based coding approach and therefore, it does not explore source redundancy at the sub-block level. This note explores correlation between adjacent rows (or columns) at the block boundaries for predicting DCT coefficients of the first row/column of DCT blocks. Experimental results show that our prediction method reduces the average JPEG DC residual by about 75% for images compressed at the default quality level. The same for AC01/10 coefficients is about 30%. It reduces the final code bits by about 4.55% of the total image code for grey images. Our method can be implemented as a part of the JPEG codec without requiring any changes to its control structure or to its code stream syntax.

Proceedings ArticleDOI
01 Nov 2007
TL;DR: The zigzag unit typically found in implementations of JPEG encoders is eliminated, and the division operation of the quantization step is replaced by a combination of multiplication and shift operations.
Abstract: This paper presents the implementation of a JPEG encoder that exploits minimal usage of FPGA resources. The encoder compresses an image as a stream of 8×8 blocks, with each element of the block applied and processed individually. The zigzag unit typically found in implementations of JPEG encoders is eliminated. The division operation of the quantization step is replaced by a combination of multiplication and shift operations. The encoder is implemented on a Xilinx Spartan-3 FPGA and is benchmarked against two software implementations on four test images. It is demonstrated that it yields performance of similar quality while requiring very limited FPGA resources. A co-emulation technique is applied to reduce development time and to test and verify the encoder design.
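
The division-free quantization mentioned above is the classic multiply-and-shift reciprocal trick; a minimal software sketch is given below, with k = 16 and the rounding bias chosen purely for illustration (a hardware design would fix these per quantization-table entry).

```python
def make_quantizer(q_step: int, k: int = 16):
    """Division-free JPEG quantization: round(x / q) is approximated by a
    multiply with the precomputed reciprocal m = 2^k // q followed by a
    right shift. k = 16 is an illustrative precision."""
    m = (1 << k) // q_step
    half = 1 << (k - 1)                      # rounding bias
    def quantize(x: int) -> int:
        mag = (abs(x) * m + half) >> k
        return mag if x >= 0 else -mag
    return quantize

q16 = make_quantizer(16)
print([q16(c) for c in (0, 7, 24, -25)])     # [0, 0, 2, -2], matching round(c / 16)
```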

Proceedings ArticleDOI
12 Nov 2007
TL;DR: The proposed method is intended to retrieve similar images, including their compressed versions, and to identify the exact match and all compressed versions of a query image simultaneously, and is robust to JPEG compression, particularly for image identification purposes.
Abstract: We propose a fast method to retrieve images from a JPEG image database. The proposed method is intended to retrieve similar images, including their compressed versions, and to identify the exact match and all compressed versions of a query image simultaneously. The similarity level is measured based on the signs of the non-zero DCT coefficients, which serve as features. The method is simple and fast because the DCT coefficient signs can be obtained by only entropy-decoding the bitstream. There is no need to calculate features explicitly. Furthermore, the method is robust to JPEG compression, particularly for image identification purposes.
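
A small sketch of the sign-based similarity idea: take the signs of the DCT coefficients as the feature and score two blocks by the fraction of agreeing signs. In the paper the signs come straight from entropy-decoding the JPEG bitstream, so the DCT below (via SciPy) is only for illustration and the function names are mine.

```python
import numpy as np
from scipy.fft import dctn

def sign_feature(block: np.ndarray) -> np.ndarray:
    """Signs of the 2-D DCT coefficients of a block (recomputed from pixels
    here; in the paper they are read directly from the entropy-decoded
    bitstream, so no transform is needed)."""
    return np.sign(dctn(block.astype(np.float64), norm="ortho"))

def sign_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of coefficient positions whose signs agree."""
    return float(np.mean(sign_feature(a) == sign_feature(b)))
```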

Proceedings ArticleDOI
01 Oct 2007
TL;DR: A scheme which hides data in bitmap images, in a way that there is almost no perceptible difference between the original image and this new image and which is also resistant to JPEG compression is proposed.
Abstract: Steganography is the science and art of hiding data into inconspicuous media. Various steganography schemes exist which can be utilized to hide data in digital images without bringing about any perceptible change in the image. Most of these schemes lack the robustness to retain the hidden data after the image has been converted to another format using a lossy compression algorithm. We propose a scheme which hides data in bitmap images, in a way that there is almost no perceptible difference between the original image and this new image and which is also resistant to JPEG compression. In all the tests we were able to retrieve the whole data from an image after we had hidden it in a raster graphics image and the image had been compressed using the JPEG algorithm. JPEG compression is performed independently on blocks of 8x8 pixels in an image while converting it to the JPEG format. The proposed scheme makes use of this property.

Proceedings ArticleDOI
J. Takada1, S. Senda1, Hiroki Hihara2, M. Hamai2, T. Oshima2, S. Hagino2 
23 Jul 2007
TL;DR: The method, which is called HIREW, is based on hierarchical interpolating prediction and adaptive Golomb-Rice coding, and achieves 7-35 times faster compression than existing methods such as JPEG2000 and JPEG-LS, at similar compression ratios.
Abstract: This paper presents a fast lossless image compression method for space and satellite images. The method, which we call HIREW, is based on hierarchical interpolating prediction and adaptive Golomb-Rice coding, and achieves 7-35 times faster compression than existing methods such as JPEG2000 and JPEG-LS, at similar compression ratios. Additionally, unlike JPEG-LS, it supports additional features such as progressive decompression using resolution scaling. An implementation of this codec will be used in the Japan Aerospace Exploration Agency (JAXA)'s Venus Climate Orbiter mission (PLANET-C).
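
For the entropy stage, a minimal Golomb-Rice encoder is sketched below: prediction residuals are mapped to non-negative integers and coded as a unary quotient plus k remainder bits. The hierarchical interpolating prediction of HIREW and its adaptive choice of k are not reproduced; the parameter values are illustrative.

```python
def rice_encode(values, k: int) -> str:
    """Golomb-Rice code with parameter k: quotient in unary, remainder in k
    bits. Residuals are first mapped to non-negative integers
    (0, -1, 1, -2, ... -> 0, 1, 2, 3, ...)."""
    out = []
    for v in values:
        u = 2 * v if v >= 0 else -2 * v - 1          # zig-zag map to non-negative
        q, r = u >> k, u & ((1 << k) - 1)
        out.append("1" * q + "0" + format(r, f"0{k}b"))
    return "".join(out)

print(rice_encode([0, -1, 3, 12], k=2))   # an adaptive coder would pick k from local statistics
```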

Proceedings ArticleDOI
29 Sep 2007
TL;DR: This paper describes the world's first JPEG 2000 and Motion JPEG 2000 encoder on Cell/B.E.E, a novel multi-core microprocessor designed to provide high-performance processing capabilities for a wide range of applications, and develops all of the code from scratch for effective multilevel parallelization.
Abstract: The Cell Broadband Engine (Cell/B.E.) is a novel multi-core microprocessor designed to provide high-performance processing capabilities for a wide range of applications. In this paper, we describe the world's first JPEG 2000 and Motion JPEG 2000 encoder on Cell/B.E. Novel parallelization techniques for a Motion JPEG 2000 encoder that unleash the performance of the Cell/B.E. are proposed. Our Motion JPEG 2000 encoder consists of multiple video frame encoding servers on a cluster system for high-level parallelization. Each video frame encoding server runs on a heterogeneous multi-core Cell/B.E. processor, and utilizes its 8 Synergistic Processor Elements (SPEs) for low-level parallelization of the time consuming parts of the JPEG 2000 encoding process, such as the wavelet transform, the bit modeling, and the arithmetic coding. The effectiveness of high-level parallelization by the cluster system is also described, not only for the parallel encoding, but also for scalable performance improvement for real-time encoding and future enhancements. We developed all of the code from scratch for effective multilevel parallelization. Our results show that the Cell/B.E. is extremely efficient for this workload compared with commercially available processors, and thus we conclude that the Cell/B.E. is quite suitable for encoding next generation large pixel formats, such as 4K/2K-Digital Cinema.

Journal ArticleDOI
TL;DR: This paper proposes straightforward extensions to the JPEG2000 image compression standard which allow for the efficient coding of floating-point data, and test results show that the proposed lossless methods have raw compression performance that is competitive with, and sometimes exceeds, current state-of-the-art methods.
Abstract: Many scientific applications require that image data be stored in floating-point format due to the large dynamic range of the data. These applications pose a problem if the data needs to be compressed since modern image compression standards, such as JPEG2000, are only defined to operate on fixed-point or integer data. This paper proposes straightforward extensions to the JPEG2000 image compression standard which allow for the efficient coding of floating-point data. These extensions maintain desirable properties of JPEG2000, such as lossless and rate distortion optimal lossy decompression from the same coded bit stream, scalable embedded bit streams, error resilience, and implementation on low-memory hardware. Although the proposed methods can be used for both lossy and lossless compression, the discussion in this paper focuses on, and the test results are limited to, the lossless case. Test results on real image data show that the proposed lossless methods have raw compression performance that is competitive with, and sometimes exceeds, current state-of-the-art methods.
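
One simple reversible preprocessing step in this spirit is to reinterpret each IEEE-754 float32 sample as a sign-magnitude integer so that an integer-only coder such as JPEG2000 can handle it losslessly; the sketch below shows this mapping (ignoring the +0.0/-0.0 distinction) and is only an assumption about the kind of mapping involved, not the paper's exact extension.

```python
import numpy as np

def float32_to_int(x: np.ndarray) -> np.ndarray:
    """Reversible map from IEEE-754 float32 samples to signed integers: the
    raw bit pattern is reinterpreted as sign + magnitude. A sketch of the
    kind of preprocessing the paper's extensions define more carefully."""
    bits = x.view(np.uint32).astype(np.int64)
    sign = bits >> 31
    mag = bits & 0x7FFFFFFF
    return np.where(sign == 1, -mag, mag)

samples = np.array([0.0, 1.5, -1.5, 3.25e-7], dtype=np.float32)
print(float32_to_int(samples))
```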

Proceedings ArticleDOI
13 Dec 2007
TL;DR: The proposed algorithm outperforms existing entropy encoding algorithms such as Huffman and arithmetic coding in terms of compressed file size and encoding and decoding time, and is well suited for multimedia applications.
Abstract: Entropy encoding is a term referring to lossless coding techniques that replace data elements with coded representations. Entropy encoding, in combination with transformation and quantization, results in significantly reduced data size. For any conventional multimedia coding, entropy encoding is a bit-assigning, lossless module. Since entropy encoding is a lossless module, compression ratio is the only constraint. Thus this paper develops a new entropy coding technique with higher compression ratio and minimum computational complexity. Huffman encoding and arithmetic coding are well-known entropy encoding methods applied in the JPEG and MPEG coding standards. In this paper, an efficient entropy encoding technique for multimedia coding is proposed. The proposed algorithm uses the concept of the number of occurrences in a sequence of symbols. According to the rank of its occurrence, the number of bits and groups are assigned and coded effectively. Based on the available channel bandwidth, the appropriate bit-rate can also be achieved by using the recursive property of the proposed encoding algorithm. Here, two levels of recursion have been considered for the proposed algorithm. The experiments were conducted on various multimedia data such as text, image, audio, and video sequences. It is observed that the proposed algorithm outperforms existing entropy encoding algorithms such as Huffman and arithmetic coding in terms of compressed file size and encoding and decoding time. Thus the proposal is well suited for multimedia applications.

Patent
02 Jul 2007
TL;DR: In this paper, a method and apparatus for a novel statistical model based on Benford's law for the probability distributions of the first digits of the block-DCT and quantized JPEG coefficients is presented.
Abstract: A method and apparatus for a novel statistical model based on Benford's law for the probability distributions of the first digits of the block-DCT and quantized JPEG coefficients. A parametric logarithmic law, the generalized Benford's law, is formulated. Furthermore, some potential applications of this model in image forensics are described, including the detection of JPEG compression for images in bitmap format, the estimation of the JPEG compression Q-factor for JPEG-compressed bitmap images, and the detection of double-compressed JPEG images. Experimental results demonstrate the effectiveness of the statistical model used in embodiments of the invention.

Journal ArticleDOI
TL;DR: The experimental results confirm that common features can always be extracted from JPEG- and JPEG 2000-compressed domains irrespective of the values of the compression ratio and the types of WT kernels used.

Journal ArticleDOI
01 May 2007
TL;DR: A priority-driven scheduling approach is introduced into the coding algorithm, which makes the transmission of important parts earlier and with more data than other parts; this can satisfy users with the desired image quality and leads to a significant reduction of the important parts' deadline misses.
Abstract: Since high-quality image/video systems based on the JPEG/MPEG compression standards often require power-expensive implementations at relatively high bit-rates, they have not been widely used in low-power wireless applications. To alleviate this problem, we designed, implemented, and evaluated a strategy that can adapt to different compression and transmission rates. (1) It gives important parts of an image higher priority over unimportant parts. Therefore, the high-priority parts can achieve high image quality, while the low-priority parts, with a slight sacrifice of quality, can achieve a huge compression rate and thus save the power/energy of a low-power wireless system. (2) We also introduce a priority-driven scheduling approach into our coding algorithm, which makes the transmission of important parts earlier and with more data than other parts. Through a balanced trade-off between the available time/bandwidth/power and the image quality, this adaptive strategy can satisfy users with the desired image quality and lead to a significant reduction of the important parts' deadline misses.

Proceedings ArticleDOI
12 Nov 2007
TL;DR: This paper proposes a novel algorithm to find the optimal SDQ coefficient indices in the form of run-size pairs among all possible candidates given that the other two parameters are fixed and formulates an iterative algorithm to jointly optimize the run-length coding, Huffman coding and quantization step sizes.
Abstract: JPEG optimization strives to achieve the best rate-distortion performance while remaining faithful to the JPEG syntax. Given an image, if soft decision quantization (SDQ) is applied to its DCT coefficients, then the Huffman table, quantization step sizes, and SDQ coefficients are three free parameters over which a JPEG encoder can optimize. In this paper, we first propose a novel algorithm to find the optimal SDQ coefficient indices in the form of run-size pairs among all possible candidates, given that the other two parameters are fixed. Based on this algorithm, we then formulate an iterative algorithm to jointly optimize the run-length coding, Huffman coding, and quantization step sizes. The proposed iterative algorithm achieves a compression performance better than any previously known JPEG compression results and even exceeds the quoted PSNR results of some state-of-the-art wavelet-based image coders like Shapiro's embedded zerotree wavelet algorithm at the common bit rates under comparison.

Journal ArticleDOI
TL;DR: A wavelet-based DVC scheme that utilizes the current JPEG 2000 standard and has scalability with regard to resolution and quality is proposed, together with the introduction of a Gray code.
Abstract: Distributed Video Coding (DVC), based on the theorems proposed by Slepian-Wolf and Wyner-Ziv, is attracting attention as a new paradigm for video compression. Some DVC systems use intra-frame compression based on the discrete cosine transform (DCT). Unfortunately, conventional DVC systems have low affinity with DCT. In this paper, we propose a wavelet-based DVC scheme that utilizes the current JPEG 2000 standard. Accordingly, the scheme has scalability with regard to resolution and quality. In addition, we propose two methods to increase the coding gain of the new DVC scheme. One is the introduction of a Gray code, and the other involves optimum quantization. An interesting point is that though our proposed method uses a Gray code, it still achieves quality scalability. Tests confirmed that the PSNR is increased by about 5 dB by the two methods, and the PSNR of the new scheme (with both methods) is about 1.5--3 dB higher than that of conventional JPEG 2000.
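
The Gray-code step can be illustrated directly: binary-reflected Gray code makes adjacent quantization indices differ in a single bit, which reduces bit-plane mismatches between the Wyner-Ziv data and its side information. A minimal sketch of the mapping (only the bit mapping, none of the DVC machinery) follows.

```python
def to_gray(b: int) -> int:
    """Binary-reflected Gray code: adjacent integers differ in exactly one bit."""
    return b ^ (b >> 1)

def from_gray(g: int) -> int:
    """Inverse mapping: XOR of all right shifts of the Gray codeword."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

assert all(from_gray(to_gray(i)) == i for i in range(256))
```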

Patent
28 Nov 2007
TL;DR: In this paper, a steganographic method for JPEG lossless compressed images based on predictive coding is proposed; it uses modular arithmetic when inserting data, which not only reduces the alteration that data insertion causes to the carrier image, preserving a rather high image quality, but also allows the inserted data to be stored either in the lossless compressed JPEG code streams or in the compressed stego images.
Abstract: This invention discloses a steganographic method for JPEG lossless compressed images based on predictive coding. When inserting data into a JPEG lossless compressed image, it first performs Huffman decoding to obtain the prediction errors, then inserts secret data into the prediction errors, and finally Huffman-encodes the modified prediction errors to generate the stego image in JPEG lossless coding. Modular arithmetic is used during insertion, which not only reduces the alteration that data insertion causes to the carrier image, preserving a rather high image quality, but also allows the inserted data to be stored either in the lossless compressed JPEG code stream or in the compressed stego image.
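
A generic sketch of modular-arithmetic embedding in prediction errors, which keeps the change to each error small: one base-M digit is hidden per error and recovered as the error modulo M. The mapping below is my own illustration and may differ from the patent's exact construction.

```python
def embed_digit(err: int, digit: int, M: int = 4) -> int:
    """Embed one base-M secret digit into a Huffman-decoded prediction error
    using modular arithmetic, changing the error by at most M//2. A generic
    sketch; the patent's exact mapping may differ."""
    delta = (digit - err) % M
    if delta > M // 2:
        delta -= M                      # take the smaller signed adjustment
    return err + delta

def extract_digit(err: int, M: int = 4) -> int:
    """Recover the hidden digit from the modified prediction error."""
    return err % M

e = -7
stego = embed_digit(e, digit=2)         # hide the digit 2
assert extract_digit(stego) == 2 and abs(stego - e) <= 2
```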

Proceedings ArticleDOI
27 Jun 2007
TL;DR: A perceptual image coder for the compression of monochrome images is presented here, in which the coding structure is coupled with a vision model to produce coded images with an improved visual quality at low bit-rates.
Abstract: A perceptual image coder for the compression of monochrome images is presented here, in which the coding structure is coupled with a vision model to produce coded images with improved visual quality at low bit-rates. The coder is an improvement on the Joint Photographic Experts Group (JPEG, Discrete Cosine Transform (DCT)-based) image compression standard, and the structure can easily be extended as an improvement on the new JPEG standard. The proposed coding structure incorporates the human vision model in all stages of compression, and gives very good results compared to the existing JPEG standard. Though the mathematical models used are not new, the simple structure, which is similar to the coding structure of JPEG, incorporates the visual processing stages in a systematic manner, in the same way that visual neurons process the signal. The results presented in this paper reveal that the proposed structure gives very good perceptual quality compared to the JPEG scheme, especially at lower bit rates. One of the major advantages of the proposed scheme is that it can easily be extended to a structure in which rate-control optimization can be incorporated, as in the new JPEG scheme.

Journal ArticleDOI
TL;DR: This paper proposes a multi-scale support vector regression (MS-SVR) approach, which can model images with both steep and smooth variations very well, resulting in good performance; the proposed MS-SVR based algorithm is tested on some standard images and achieves better performance than standard SVR.