Showing papers on "Lossless compression published in 2010"


Journal ArticleDOI
TL;DR: Almost lossless analog compression of analog memoryless sources is studied in an information-theoretic framework in which the compressor or decompressor is constrained by various regularity conditions, in particular linearity of the compressor and Lipschitz continuity of the decompressor.
Abstract: In Shannon theory, lossless source coding deals with the optimal compression of discrete sources. Compressed sensing is a lossless coding strategy for analog sources by means of multiplication by real-valued matrices. In this paper we study almost lossless analog compression for analog memoryless sources in an information-theoretic framework, in which the compressor or decompressor is constrained by various regularity conditions, in particular linearity of the compressor and Lipschitz continuity of the decompressor. The fundamental limit is shown to be the information dimension proposed by Rényi in 1959.
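For reference, the Rényi information dimension that the abstract identifies as the fundamental limit is the standard quantity defined from the entropy of a uniform quantization of the source (when the limit exists; otherwise upper and lower limits are used):

```latex
% Rényi information dimension of a real-valued random variable X,
% with <X>_m denoting the quantization of X to precision 1/m.
d(X) \;=\; \lim_{m \to \infty} \frac{H\!\left(\langle X \rangle_m\right)}{\log m},
\qquad \langle X \rangle_m \;=\; \frac{\lfloor m X \rfloor}{m}
```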

228 citations


Journal ArticleDOI
TL;DR: A resolution-progressive compression scheme is proposed that compresses an encrypted image progressively in resolution, so that the decoder can observe a low-resolution version of the image, study local statistics based on it, and use those statistics to decode the next resolution level.
Abstract: Lossless compression of encrypted sources can be achieved through Slepian-Wolf coding. For encrypted real-world sources, such as images, the key to improve the compression efficiency is how the source dependency is exploited. Approaches in the literature that make use of Markov properties in the Slepian-Wolf decoder do not work well for grayscale images. In this correspondence, we propose a resolution progressive compression scheme which compresses an encrypted image progressively in resolution, such that the decoder can observe a low-resolution version of the image, study local statistics based on it, and use the statistics to decode the next resolution level. Good performance is observed both theoretically and experimentally.

217 citations


01 Jan 2010
TL;DR: The Huffman algorithm is analyzed and compared with other common compression techniques, such as arithmetic coding, LZW, and run-length encoding, that make large amounts of data easier to store.
Abstract: Data compression is also called source coding. It is the process of encoding information using fewer bits than an uncoded representation would, by making use of specific encoding schemes. Compression is a technology for reducing the quantity of data used to represent content without excessively reducing its quality. It also reduces the number of bits required to store and/or transmit digital media, and makes large amounts of data easier to store. Various compression techniques are available; in this paper, I analyze the Huffman algorithm and compare it with other common compression techniques such as arithmetic coding, LZW, and run-length encoding.
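As a concrete reference for the Huffman technique being compared, here is a minimal Python sketch of Huffman code construction and encoding; it illustrates the standard algorithm only and is not the implementation analyzed in the paper.

```python
import heapq
from collections import Counter
from itertools import count

def huffman_code(data: bytes) -> dict:
    """Build a Huffman code (symbol -> bit string) from symbol frequencies."""
    freq = Counter(data)
    tie = count()                       # tie-breaker so the heap never compares dicts
    heap = [(w, next(tie), {sym: ""}) for sym, w in freq.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                  # degenerate input with a single distinct symbol
        return {sym: "0" for sym in heap[0][2]}
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (w1 + w2, next(tie), merged))
    return heap[0][2]

text = b"abracadabra abracadabra"
table = huffman_code(text)
bits = "".join(table[b] for b in text)
print(f"{len(text) * 8} raw bits -> {len(bits)} Huffman-coded bits (plus the code table)")
```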

166 citations


Journal ArticleDOI
TL;DR: This work presents a lossless compression algorithm that has been designed for fast on-line data compression, and cache compression in particular, and reduces the proposed algorithm to a register transfer level hardware design, permitting performance, power consumption, and area estimation.
Abstract: Microprocessor designers have been torn between tight constraints on the amount of on-chip cache memory and the high latency of off-chip memory, such as dynamic random access memory. Accessing off-chip memory generally takes an order of magnitude more time than accessing on-chip cache, and two orders of magnitude more time than executing an instruction. Computer systems and microarchitecture researchers have proposed using hardware data compression units within the memory hierarchies of microprocessors in order to improve performance, energy efficiency, and functionality. However, most past work, and all work on cache compression, has made unsubstantiated assumptions about the performance, power consumption, and area overheads of the proposed compression algorithms and hardware. It is not possible to determine whether compression at levels of the memory hierarchy closest to the processor is beneficial without understanding its costs. Furthermore, as we show in this paper, raw compression ratio is not always the most important metric. In this work, we present a lossless compression algorithm that has been designed for fast on-line data compression, and cache compression in particular. The algorithm has a number of novel features tailored for this application, including combining pairs of compressed lines into one cache line and allowing parallel compression of multiple words while using a single dictionary and without degradation in compression ratio. We reduced the proposed algorithm to a register transfer level hardware design, permitting performance, power consumption, and area estimation. Experiments comparing our work to previous work are described.
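The paper's coder is a register-transfer-level hardware design; purely as an illustration of the kind of dictionary-plus-pattern word compression it describes, the following Python sketch compresses the 32-bit words of a cache line against a small dictionary using a few assumed pattern codes (zero word, full match, upper-half match, literal). The pattern set and FIFO replacement policy are illustrative assumptions, not the paper's specification.

```python
def compress_line(words, dict_size=16):
    """Pattern-compress the 32-bit words of one cache line against a small dictionary.

    Emits (tag, payload) pairs; a hardware coder would pack these into variable-length codes.
    """
    dictionary = []                                         # recent distinct words, FIFO replacement
    out = []
    for w in words:
        full = [i for i, d in enumerate(dictionary) if d == w]
        part = [i for i, d in enumerate(dictionary) if d >> 16 == w >> 16]
        if w == 0:
            out.append(("zero", None))                      # all-zero word: short tag only
        elif full:
            out.append(("match", full[0]))                  # full match: tag + dictionary index
        elif part:
            out.append(("partial", (part[0], w & 0xFFFF)))  # upper half matches: tag + index + low 16 bits
        else:
            out.append(("literal", w))                      # no match: tag + full 32-bit word
        if not full:
            dictionary.append(w)
            if len(dictionary) > dict_size:
                dictionary.pop(0)
    return out

line = [0x00000000, 0xDEADBEEF, 0xDEAD0001, 0xDEADBEEF, 0x12345678]
print(compress_line(line))
```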

161 citations


Journal Article
TL;DR: An analysis of Least Significant Bit (LSB) based steganography and Discrete Cosine Transform (DCT) based steganography is presented; both methods are implemented and their performance analyzed.
Abstract: This paper presents an analysis of Least Significant Bit (LSB) based steganography and Discrete Cosine Transform (DCT) based steganography. LSB based steganography embeds the text message in the least significant bits of a digital picture. Least significant bit (LSB) insertion is a common, simple approach to embedding information in a cover file. Unfortunately, it is vulnerable to even small image manipulations. Converting an image from a format like GIF or BMP, which reconstructs the original message exactly (lossless compression), to a JPEG, which does not (lossy compression), and then back could destroy the information hidden in the LSBs. DCT based steganography embeds the text message in the least significant bits of the Discrete Cosine (DC) coefficients of the digital picture. When information is hidden inside video, the program hiding the information usually performs the DCT. DCT works by slightly changing each of the images in the video, only to an extent that is not noticeable by the human eye. Both methods have been implemented and their performance analyzed in this paper.
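A minimal NumPy sketch of the LSB embedding idea analyzed here is shown below; it is a generic illustration (a DCT-based variant would embed in quantized transform coefficients instead) and not the authors' implementation.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, message: bytes) -> np.ndarray:
    """Hide message bits in the least significant bit of each pixel (8-bit grayscale)."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = cover.flatten()                     # flatten() returns a copy, so the cover is untouched
    if bits.size > flat.size:
        raise ValueError("cover image too small for message")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits    # clear the LSB, then set it to the message bit
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
msg = b"hidden text"
stego = embed_lsb(cover, msg)
assert extract_lsb(stego, len(msg)) == msg
print("max pixel change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))  # at most 1
```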

131 citations


Journal ArticleDOI
TL;DR: The essence of invertible image sharing approaches is that the revealed content of the secret image must be lossless and the distorted stego images must be revertible to the original cover image. This work transforms the secret pixels into the m-ary notational system and calculates the information needed to reconstruct the original pixels from the camouflaged pixels.

124 citations


Proceedings ArticleDOI
18 Mar 2010
TL;DR: Conventional image/video sensors acquire visual information from a scene in time-quantized fashion at some predetermined frame rate, which leads, depending on the dynamic contents of the scene, to a more or less high degree of redundancy in the image data.
Abstract: Conventional image/video sensors acquire visual information from a scene in time-quantized fashion at some predetermined frame rate. Each frame carries the information from all pixels, regardless of whether or not this information has changed since the last frame had been acquired, which is usually not long ago. This method obviously leads, depending on the dynamic contents of the scene, to a more or less high degree of redundancy in the image data. Acquisition and handling of these dispensable data consume valuable resources; sophisticated and resource-hungry video compression methods have been developed to deal with these data.

122 citations


01 Dec 2010
TL;DR: An experimental comparison of a number of different lossless data compression algorithms is presented and it is stated which algorithm performs well for text data.
Abstract: Data compression is a common requirement for most computerized applications. There are a number of data compression algorithms dedicated to compressing different data formats, and even for a single data type there are a number of algorithms that use different approaches. This paper examines lossless data compression algorithms and compares their performance. A set of selected algorithms is examined and implemented to evaluate their performance in compressing text data, and an experimental comparison of these algorithms is presented. The article concludes by stating which algorithm performs well for text data.
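The abstract does not list the specific algorithms compared, but an experiment of this kind can be sketched with the general-purpose lossless codecs in the Python standard library (DEFLATE via zlib, BWT-based bzip2, and LZMA); the harness below is illustrative, and the file path is a placeholder.

```python
import bz2, lzma, zlib

def compare(text: bytes):
    """Report the compression ratio (original / compressed) for several lossless codecs."""
    codecs = {
        "zlib (DEFLATE)": zlib.compress,
        "bz2 (BWT)":      bz2.compress,
        "lzma (LZMA)":    lzma.compress,
    }
    for name, fn in codecs.items():
        compressed = fn(text)
        print(f"{name:15s} {len(text) / len(compressed):5.2f}x")

with open("sample.txt", "rb") as f:   # any text file; the path is a placeholder
    compare(f.read())
```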

120 citations


01 Jan 2010
TL;DR: A lossless method of image compression and decompression using a simple coding technique called Huffman coding is proposed; it is simple to implement and uses little memory.
Abstract: The need for an efficient technique for compressing images is ever increasing, because raw images require large amounts of disk space, which is a significant disadvantage for transmission and storage. Even though many compression techniques already exist, a technique that is faster, more memory efficient, and simpler would better suit the requirements of the user. In this paper we propose a lossless method of image compression and decompression using a simple coding technique called Huffman coding. This technique is simple to implement and uses little memory. A software algorithm has been developed and implemented to compress and decompress a given image using Huffman coding techniques on a MATLAB platform.

118 citations


Patent
09 Jun 2010
TL;DR: In this paper, the authors describe a hardware-accelerated lossless data compression system that includes a plurality of hash memories, each associated with a different lane of a plurality of lanes (each lane including data bytes of a data unit being received by the compression apparatus).
Abstract: Systems for hardware-accelerated lossless data compression are described. At least some embodiments include data compression apparatus that includes a plurality of hash memories each associated with a different lane of a plurality of lanes (each lane including data bytes of a data unit being received by the compression apparatus), an array including array elements each including a plurality of validity bits (each validity bit within an array element corresponding to a different lane of the plurality of lanes), control logic that initiates a read of a hash memory entry if a corresponding validity bit indicates that said entry is valid, and an encoder that compresses at least the data bytes for the lane associated with the hash memory comprising the valid entry if said valid entry comprises data that matches the lane data bytes.

110 citations


Journal ArticleDOI
Jaemoon Kim1, Chong-Min Kyung1
TL;DR: A lossless EC algorithm for HD video sequences and a related hardware architecture are proposed, consisting of a hierarchical prediction method based on pixel averaging and copying, followed by significant bit truncation (SBT).
Abstract: Increasing the image size of a video sequence aggravates the memory bandwidth problem of a video coding system. Despite many embedded compression (EC) algorithms proposed to overcome this problem, no lossless EC algorithm able to handle high-definition (HD) size video sequences has been proposed thus far. In this paper, a lossless EC algorithm for HD video sequences and related hardware architecture is proposed. The proposed algorithm consists of two steps. The first is a hierarchical prediction method based on pixel averaging and copying. The second step involves significant bit truncation (SBT) which encodes prediction errors in a group with the same number of bits so that the multiple prediction errors are decoded in a clock cycle. The theoretical lower bound of the compression ratio of the SBT coding was also derived. Experimental results have shown a 60% reduction of memory bandwidth on average. Hardware implementation results have shown that a throughput of 14.2 pixels/cycle can be achieved with 36 K gates, which is sufficient to handle HD-size video sequences in real time.
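A small Python sketch of the significant-bit-truncation idea described above follows: prediction errors are processed in fixed-size groups, and every error in a group is written with the bit width of the group's largest mapped value, so the whole group can be decoded in one cycle. The group size, sign mapping, and width-field size are illustrative assumptions, not the paper's parameters.

```python
def sbt_encode(errors, group_size=4):
    """Encode prediction errors in fixed-size groups with a shared bit width (SBT-style)."""
    out = []
    for i in range(0, len(errors), group_size):
        group = errors[i:i + group_size]
        mapped = [2 * e if e >= 0 else -2 * e - 1 for e in group]   # sign interleaving to non-negatives
        width = max(1, max(mapped).bit_length())                    # bits needed for the largest value
        out.append((width, mapped))     # a real coder writes `width`, then fixed-width fields
    return out

def sbt_bits(encoded, width_field_bits=4):
    return sum(width_field_bits + w * len(g) for w, g in encoded)

errors = [0, -1, 2, 1, 35, -3, 0, 4]
enc = sbt_encode(errors)
print(enc)
print("bits:", sbt_bits(enc), "vs raw 8-bit:", 8 * len(errors))
```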

Book
08 Dec 2010
TL;DR: The principles of digital image compression, as well as some of the algorithms used, are explained in detail in the preface and Appendix A.
Abstract: Preface. 1. Principles of digital image compression. 2. Compression algorithm fundamentals. 3. CCITT facsimile compression standards. 4. JBIG compression standard. 5. JPEG compression standard. 6. Digital video compression standards. 7. Digital image compression: advanced topics. Appendix A: Mathematical descriptions. Appendix B: Fast DCT algorithms. Glossary. Information on ISO/IEC standards. Information on ITU standards. Bibliography. Index.

Journal ArticleDOI
TL;DR: A lossless compression algorithm for hyperspectral images inspired by the distributed-source-coding (DSC) principle is proposed and three algorithms based on this paradigm provide different tradeoffs between compression performance, error resilience, and complexity.
Abstract: In this paper, we propose a lossless compression algorithm for hyperspectral images inspired by the distributed-source-coding (DSC) principle. DSC refers to separate compression and joint decoding of correlated sources, which are taken as adjacent bands of a hyperspectral image. This concept is used to design a compression scheme that provides error resilience, very low complexity, and good compression performance. These features are obtained employing scalar coset codes to encode the current band at a rate that depends on its correlation with the previous band, without encoding the prediction error. Iterative decoding employs the decoded version of the previous band as side information and uses a cyclic redundancy code to verify correct reconstruction. We develop three algorithms based on this paradigm, which provide different tradeoffs between compression performance, error resilience, and complexity. Their performance is evaluated on raw and calibrated AVIRIS images and compared with several existing algorithms. Preliminary results of a field-programmable gate array implementation are also provided, which show that the proposed algorithms can sustain an extremely high throughput.
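The scalar coset coding at the heart of this scheme can be sketched in a few lines of Python: only the k least significant bits of a pixel are transmitted, and the decoder picks the coset member closest to the co-located pixel of the previously decoded band. The rate selection and CRC verification used by the actual algorithms are omitted, and the numbers below are illustrative.

```python
def coset_encode(pixel: int, k: int) -> int:
    """Transmit only the k least significant bits (the scalar coset index)."""
    return pixel % (1 << k)

def coset_decode(coset: int, side_info: int, k: int) -> int:
    """Pick the coset member closest to the co-located pixel of the previous band."""
    step = 1 << k
    base = (side_info // step) * step + coset
    candidates = (base - step, base, base + step)
    return min(candidates, key=lambda v: abs(v - side_info))

# Reconstruction is exact whenever |current - previous| < 2**(k - 1).
current, previous, k = 1037, 1041, 4
idx = coset_encode(current, k)          # only k = 4 bits are sent instead of, e.g., 12
assert coset_decode(idx, previous, k) == current
```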

Journal ArticleDOI
TL;DR: The problem of functional compression is considered, motivated by applications to sensor networks and privacy-preserving databases, and an asymptotic characterization of conditional graph coloring for an OR product of graphs, generalizing a result of Korner (1973), is obtained.
Abstract: Motivated by applications to sensor networks and privacy-preserving databases, we consider the problem of functional compression. The objective is to separately compress possibly correlated discrete sources such that an arbitrary but fixed deterministic function of those sources can be computed given the compressed data from each source. We consider both the lossless and lossy computation of a function. Specifically, we present results on the rate regions for three instances of the problem where there are two sources: 1) lossless computation where one source is available at the decoder; 2) under a special condition, lossless computation where both sources are separately encoded; and 3) lossy computation where one source is available at the decoder. For all of these instances, we present a layered architecture for distributed coding: first preprocess data at each source using colorings of certain characteristic graphs and then use standard distributed source coding (a la Slepian and Wolf's scheme) to compress them. For the first instance, our results extend the approach developed by Orlitsky and Roche (2001) in the sense that our scheme requires a simpler structure of coloring rather than independent sets as in the previous case. As an intermediate step to obtain these results, we obtain an asymptotic characterization of conditional graph coloring for an OR product of graphs, generalizing a result of Korner (1973), which should be of interest in its own right.

Journal ArticleDOI
TL;DR: A novel method for generic visible watermarking with a capability of lossless image recovery is proposed, based on the use of deterministic one-to-one compound mappings for overlaying a variety of visible watermarks of arbitrary sizes on cover images.
Abstract: A novel method for generic visible watermarking with a capability of lossless image recovery is proposed. The method is based on the use of deterministic one-to-one compound mappings of image pixel values for overlaying a variety of visible watermarks of arbitrary sizes on cover images. The compound mappings are proved to be reversible, which allows for lossless recovery of original images from watermarked images. The mappings may be adjusted to yield pixel values close to those of desired visible watermarks. Different types of visible watermarks, including opaque monochrome and translucent full color ones, are embedded as applications of the proposed generic approach. A two-fold monotonically increasing compound mapping is created and proved to yield more distinctive visible watermarks in the watermarked image. Security protection measures by parameter and mapping randomizations have also been proposed to deter attackers from illicit image recoveries. Experimental results demonstrating the effectiveness of the proposed approach are also included.

Proceedings ArticleDOI
01 Sep 2010
TL;DR: A new approach to joint source-channel coding is presented in the context of communicating correlated sources over multiple access channels, whereby the source encoding and channel decoding operations are decoupled and the same codeword is used for both source coding and channel coding.
Abstract: A new approach to joint source-channel coding is presented in the context of communicating correlated sources over multiple access channels. Similar to the separation architecture, the joint source-channel coding system architecture in this approach is modular, whereby the source encoding and channel decoding operations are decoupled. However, unlike the separation architecture, the same codeword is used for both source coding and channel coding, which allows the resulting coding scheme to achieve the performance of the best known schemes despite its simplicity. In particular, it recovers as special cases previous results on lossless communication of correlated sources over multiple access channels by Cover, El Gamal, and Salehi, distributed lossy source coding by Berger and Tung, and lossy communication of the bivariate Gaussian source over the Gaussian multiple access channel by Lapidoth and Tinguely. The proof of achievability involves a new technique for analyzing the probability of decoding error when the message index depends on the codebook itself. Applications of the new joint source-channel coding system architecture in other settings are also discussed.

Proceedings ArticleDOI
23 Aug 2010
TL;DR: The effectiveness of the proposed scheme, demonstrated through experiments on various medical images using several image quality metrics, supports the argument that the method will help maintain Electronic Patient Report (EPR)/DICOM data privacy and medical image integrity.
Abstract: In this article, a new fragile, blind, high-payload-capacity, ROI (Region of Interest) preserving medical image watermarking (MIW) technique in the spatial domain for grayscale medical images is proposed. We present a watermarking scheme that combines a lossless data compression and encryption technique in application to medical images. The effectiveness of the proposed scheme, demonstrated through experiments on various medical images using image quality metrics such as PSNR, MSE, and MSSIM, enables us to argue that the method will help maintain Electronic Patient Report (EPR)/DICOM data privacy and medical image integrity.

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the technique not only possesses the robustness to resist the image-manipulation attacks under consideration but also, on average, is superior to the other existing methods considered in the paper.

Journal ArticleDOI
TL;DR: Experimental results show that the performance of the proposed scheme is significantly improved and the original cover image can be recovered without any distortion after the hidden data have been extracted if the stego-image remains intact.

Journal ArticleDOI
TL;DR: A digital scheme that combines ideas from the lossless version of the problem, i.e., Slepian-Wolf coding over broadcast channels, and dirty paper coding, is presented and analyzed, and it is shown that it is more advantageous to send the refinement information to the receiver with "better" combined quality.
Abstract: This paper addresses lossy transmission of a common source over a broadcast channel when there is correlated side information at the receivers, with emphasis on the quadratic Gaussian and binary Hamming cases. A digital scheme that combines ideas from the lossless version of the problem, i.e., Slepian-Wolf coding over broadcast channels, and dirty paper coding, is presented and analyzed. This scheme uses layered coding where the common layer information is intended for both receivers and the refinement information is destined only for one receiver. For the quadratic Gaussian case, a quantity characterizing the combined quality of each receiver is identified in terms of channel and side information parameters. It is shown that it is more advantageous to send the refinement information to the receiver with "better" combined quality. In the case where all receivers have the same overall quality, the presented scheme becomes optimal. Unlike its lossless counterpart, however, the problem eludes a complete characterization.

Journal ArticleDOI
TL;DR: To detect double MP3 compression, this paper extracts statistical features from the modified discrete cosine transform and applies a support vector machine to the extracted features for classification; the results show that the designed method is highly effective for detecting faked MP3 files.
Abstract: MPEG-1 Audio Layer 3, more commonly referred to as MP3, is a popular audio format for consumer audio storage and a de facto standard of digital audio compression for the transfer and playback of music on digital audio players. MP3 audio forgery manipulations generally uncompress an MP3 file, tamper with the file in the temporal domain, and then compress the doctored audio file back into MP3 format. If the compression quality of the doctored MP3 file is different from that of the original MP3 file, the doctored MP3 file is said to have undergone double MP3 compression. Although double MP3 compression does not prove malicious tampering, it is evidence of manipulation and thus may warrant further forensic analysis since, e.g., faked MP3 files can be generated by using double MP3 compression at a higher bit-rate for the second compression to claim a higher quality of the audio files. To detect double MP3 compression, in this paper we extract statistical features from the modified discrete cosine transform and apply a support vector machine to the extracted features for classification. Experimental results show that our designed method is highly effective for detecting faked MP3 files. Our study also indicates that the detection performance is closely related to the bit-rate of the first-time MP3 encoding and the bit-rate of the second-time MP3 encoding.
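A schematic Python sketch of this kind of detection pipeline follows: compute statistics of framewise transform coefficients of the decoded audio and train a support vector machine on them. A plain DCT and generic moments stand in for the paper's MDCT-based features, scikit-learn is assumed to be available, and the synthetic clips exist only to keep the sketch runnable.

```python
import numpy as np
from scipy.fft import dct
from sklearn.svm import SVC

def frame_features(audio: np.ndarray, frame_len: int = 576) -> np.ndarray:
    """Per-file statistics of framewise transform coefficients (stand-in for MDCT features)."""
    n = (len(audio) // frame_len) * frame_len
    coeffs = dct(audio[:n].reshape(-1, frame_len), norm="ortho", axis=1)
    absc = np.abs(coeffs)
    return np.array([absc.mean(), absc.std(),
                     (absc < 1e-3).mean(),                 # fraction of near-zero coefficients
                     absc[:, frame_len // 2:].mean()])     # high-frequency energy

# In the real pipeline, `clips` would be decoded single- and double-compressed MP3 files
# and `labels` their ground truth; synthetic noise is used here only to make the demo run.
rng = np.random.default_rng(0)
clips = [rng.standard_normal(576 * 40) * (1.0 if i % 2 else 0.3) for i in range(40)]
labels = np.array([i % 2 for i in range(40)])
X = np.vstack([frame_features(c) for c in clips])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```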

Journal ArticleDOI
TL;DR: Experimental comparisons with existing approaches validate the usefulness of the proposed multiple semi-fragile watermarking approach, which is based on the integer wavelet transform and offers improved security against collage attacks, enhanced robustness, and the capability of producing a better-quality recovered image.

Proceedings ArticleDOI
11 Nov 2010
TL;DR: The reconstructed EEG signals are applied to REACT, a state-of-the-art seizure detection algorithm, in order to determine the effect of lossy compression on its seizure detection ability.
Abstract: Compression of biosignals is an important means of conserving power in wireless body area networks and ambulatory monitoring systems. In contrast to lossless compression techniques, lossy compression algorithms can achieve higher compression ratios and hence, higher power savings, at the expense of some degradation of the reconstructed signal. In this paper, a variant of the lossy JPEG2000 algorithm is applied to Electroencephalogram (EEG) data from the Freiburg epilepsy database. By varying compression parameters, a range of reconstructions of varying signal fidelity is produced. Although lossy compression has been applied to EEG data in previous studies, it is unclear what level of signal degradation, if any, would be acceptable to a clinician before diagnostically significant information is lost. In this paper, the reconstructed EEG signals are applied to REACT, a state-of-the-art seizure detection algorithm, in order to determine the effect of lossy compression on its seizure detection ability. By using REACT in place of a clinician, many hundreds of hours of reconstructed EEG data are efficiently analysed, thereby allowing an analysis of the amount of EEG signal distortion that can be tolerated. The corresponding compression ratios that can be achieved are also presented.

Book ChapterDOI
25 Apr 2010
TL;DR: A set of domain-specific lossless compression schemes that achieve over 40× compression of fragments, outperforming bzip2 by over 6×, is introduced, and the study of using 'lossy' quality values is initiated.
Abstract: With the advent of next-generation sequencing technologies, the cost of sequencing whole genomes is poised to go below $1000 per human individual in a few years. As more and more genomes are sequenced, analysis methods are undergoing rapid development, making it tempting to store sequencing data for long periods of time so that the data can be re-analyzed with the latest techniques. The challenging open research problems, huge influx of data, and rapidly improving analysis techniques have created the need to store and transfer very large volumes of data. Compression can be achieved at many levels, including the trace level (compressing image data), the sequence level (compressing a genomic sequence), and the fragment level (compressing a set of short, redundant fragment reads, along with quality values on the base calls). We focus on fragment-level compression, which is the pressing need today. Our paper makes two contributions, implemented in a tool, SlimGene. First, we introduce a set of domain-specific lossless compression schemes that achieve over 40× compression of fragments, outperforming bzip2 by over 6×. Including quality values, we show a 5× compression using less running time than bzip2. Second, given the discrepancy between the compression factors obtained with and without quality values, we initiate the study of using 'lossy' quality values. Specifically, we show that a lossy quality value quantization results in 14× compression but has minimal impact on downstream applications like SNP calling that use the quality values. Discrepancies between SNP calls made on the lossy and lossless versions of the data are limited to low-coverage areas, where even the SNP calls made on the lossless version are marginal.
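A small Python sketch of the kind of lossy quality-value quantization studied here is given below: Phred-scaled quality scores are collapsed into a few representative bins, which shrinks the entropy of the quality stream while preserving the coarse confidence levels used by downstream tools such as SNP callers. The bin boundaries are illustrative assumptions, not SlimGene's.

```python
import bz2
import random

def quantize_qualities(quals: bytes, bins=((0, 2), (3, 14), (15, 30), (31, 93))) -> bytes:
    """Map each Phred quality (Sanger offset 33) onto the midpoint of its bin."""
    lut = bytearray(range(256))                 # identity lookup table
    for lo, hi in bins:
        mid = (lo + hi) // 2
        for q in range(lo, hi + 1):
            lut[q + 33] = mid + 33
    return bytes(lut[b] for b in quals)

random.seed(0)
quals = bytes(random.randint(33, 33 + 41) for _ in range(100_000))   # synthetic Phred 0..41 stream
coarse = quantize_qualities(quals)
print(len(bz2.compress(quals)), "bytes ->", len(bz2.compress(coarse)), "bytes after 4-bin quantization")
```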

Journal ArticleDOI
TL;DR: The proposed 3-D scalable compression method achieves a higher reconstruction quality, in terms of the peak signal-to-noise ratio, than that achieved by 3D-JPEG2000 with VOI coding, when using the MAXSHIFT and general scaling-based methods.
Abstract: We present a novel 3-D scalable compression method for medical images with optimized volume of interest (VOI) coding. The method is presented within the framework of interactive telemedicine applications, where different remote clients may access the compressed 3-D medical imaging data stored on a central server and request the transmission of different VOIs from an initial lossy to a final lossless representation. The method employs the 3-D integer wavelet transform and a modified EBCOT with 3-D contexts to create a scalable bit-stream. Optimized VOI coding is attained by an optimization technique that reorders the output bit-stream after encoding, so that those bits belonging to a VOI are decoded at the highest quality possible at any bit-rate, while allowing for the decoding of background information with peripherally increasing quality around the VOI. The bit-stream reordering procedure is based on a weighting model that incorporates the position of the VOI and the mean energy of the wavelet coefficients. The background information with peripherally increasing quality around the VOI allows for placement of the VOI into the context of the 3-D image. Performance evaluations based on real 3-D medical imaging data showed that the proposed method achieves a higher reconstruction quality, in terms of the peak signal-to-noise ratio, than that achieved by 3D-JPEG2000 with VOI coding, when using the MAXSHIFT and general scaling-based methods.

Journal ArticleDOI
TL;DR: In this paper, a passive lossless snubber cell and its dual structure for reducing the switching loss of a range of switching converters are presented; the cell provides zero-current-switching and zero-voltage-switching conditions for turning the switch on and off, respectively, over a wide load range.
Abstract: A passive lossless snubber cell and its dual structure for reducing the switching loss of a range of switching converters are presented. The proposed snubber cell has several advantages over existing snubbering techniques. First, it provides zero-current-switching and zero-voltage-switching conditions for turning on and off, respectively, the switch over a wide load range. Second, it does not introduce extra voltage stress on the switch. Third, by taking the ripple current through the switch into account, the peak switch current during the snubber resonance period is designed to be less than the designed switch current without the snubber. Hence, the proposed snubber does not introduce extra current stress on the switch. The operating principle, procedure of designing the values of the components, and soft-switching range of the snubber will be given. The connections of the snubber cells to different switching converters will be illustrated. A performance comparison among the proposed snubber and a prior-art snubber will be addressed. The proposed snubber has been successfully applied to an example of a 200-W, 380-V/24-V, 100-kHz two-switch flyback converter. Experimental results are in good agreement with the theoretical predictions.

Journal ArticleDOI
TL;DR: The results show that only in the almost-sure setup can the authors effectively exploit the signal correlations to achieve effective gains in sampling efficiency; an efficient and robust distributed sampling and reconstruction algorithm based on annihilating filters is also proposed.
Abstract: We study the distributed sampling and centralized reconstruction of two correlated signals, modeled as the input and output of an unknown sparse filtering operation. This is akin to a Slepian-Wolf setup, but in the sampling rather than the lossless compression case. Two different scenarios are considered: In the case of universal reconstruction, we look for a sensing and recovery mechanism that works for all possible signals, whereas in what we call almost sure reconstruction, we allow to have a small set (with measure zero) of unrecoverable signals. We derive achievability bounds on the number of samples needed for both scenarios. Our results show that, only in the almost sure setup can we effectively exploit the signal correlations to achieve effective gains in sampling efficiency. In addition to the above theoretical analysis, we propose an efficient and robust distributed sampling and reconstruction algorithm based on annihilating filters. We evaluate the performance of our method in one synthetic scenario, and two practical applications, including the distributed audio sampling in binaural hearing aids and the efficient estimation of room impulse responses. The numerical results confirm the effectiveness and robustness of the proposed algorithm in both synthetic and practical setups.

Journal ArticleDOI
TL;DR: It is demonstrated that under certain conditions the visual quality of compressed images can be slightly better than that of the original noisy images, due to the image filtering effect of lossy compression.
Abstract: This paper concerns lossy compression of images corrupted by additive noise. The main contribution of the paper is that the analysis is carried out from the viewpoint of compressed image visual quality. Several coders for which the compression ratio is controlled in different manners are considered. Visual quality metrics that are the most adequate for the considered application (WSNR, MSSIM, PSNR-HVS-M, and PSNR-HVS) are used. It is demonstrated that under certain conditions the visual quality of compressed images can be slightly better than the quality of the original noisy images due to the image filtering effect of lossy compression. The "optimal" parameters of the coders for which this positive effect can be observed depend upon the standard deviation of the noise. This allows an automatic procedure to be proposed for compressing noisy images in the neighborhood of the optimal operation point, that is, where visual quality either improves or degrades only insignificantly. Comparison results for a set of grayscale test images and several noise variances are presented.
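The effect described, lossy compression acting as a mild denoiser near an optimal operation point, can be reproduced with a quality-factor sweep of an ordinary JPEG coder, measuring fidelity against the clean reference. The sketch below uses Pillow's JPEG coder and plain PSNR as stand-ins for the paper's coders and HVS-based metrics, and the image path is a placeholder.

```python
import io
import numpy as np
from PIL import Image

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255 ** 2 / mse)

clean = np.array(Image.open("test_image.png").convert("L"))        # placeholder path
noisy = np.clip(clean + np.random.normal(0, 10, clean.shape), 0, 255).astype(np.uint8)

print("noisy vs clean:", round(psnr(noisy, clean), 2), "dB")
for q in (95, 75, 50, 30):
    buf = io.BytesIO()
    Image.fromarray(noisy).save(buf, format="JPEG", quality=q)
    size = buf.tell()
    buf.seek(0)
    rec = np.array(Image.open(buf))
    # Near the "optimal operation point" the compressed image can score higher than the noisy one.
    print(f"quality={q}: {round(psnr(rec, clean), 2)} dB vs clean, {size} bytes")
```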

Journal ArticleDOI
TL;DR: This paper proposes a hardware-friendly IntDCT that can be applied to both lossless and lossy coding, and is validated by its application to lossless-to-lossy image coding.
Abstract: A discrete cosine transform (DCT) can be easily implemented in software and hardware for the JPEG and MPEG formats. However, even though some integer DCTs (IntDCTs) for lossless-to-lossy image coding have been proposed, such transform requires redesigned devices. This paper proposes a hardware-friendly IntDCT that can be applied to both lossless and lossy coding. Our IntDCT is implemented by direct-lifting of DCT and inverse DCT (IDCT). Consequently, any existing DCT device can be directly applied to every lifting block. Although our method requires a small side information block (SIB), it is validated by its application to lossless-to-lossy image coding.
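What makes a direct-lifting construction lossless is that each lifting step adds a rounded function of one part of the signal to the other part, so it can be undone exactly by subtracting the same rounded value. The toy Python example below demonstrates this perfect-reconstruction property with a DCT used as the lifting operator; it illustrates the general principle, not the paper's specific factorization.

```python
import numpy as np
from scipy.fft import dct

def lift_forward(x0, x1):
    """One integer lifting step: x1 += round(DCT(x0)). Invertible despite the rounding."""
    return x0, x1 + np.round(dct(x0, norm="ortho")).astype(int)

def lift_inverse(y0, y1):
    # x0 is unchanged by the forward step, so the same rounded prediction can be subtracted exactly.
    return y0, y1 - np.round(dct(y0, norm="ortho")).astype(int)

x0 = np.random.randint(0, 256, 8)
x1 = np.random.randint(0, 256, 8)
y0, y1 = lift_forward(x0, x1)
assert np.array_equal(lift_inverse(y0, y1)[1], x1)   # exact (lossless) reconstruction
```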

Journal ArticleDOI
TL;DR: A novel reversible data-hiding scheme that embeds secret data into a transformed image and achieves lossless reconstruction of vector quantization (VQ) indices is presented.
Abstract: This work presents a novel reversible data-hiding scheme that embeds secret data into a transformed image and achieves lossless reconstruction of vector quantization (VQ) indices. The VQ compressed image is modified by the side-matched VQ scheme to yield a transformed image. Distribution of the transformed image is employed to achieve high embedding capacity and a low bit rate. Moreover, three configurations, under-hiding, normal-hiding, and over-hiding schemes, are utilized to improve the proposed scheme further for various applications. Experimental results demonstrate that the proposed scheme significantly enhances the compression ratio and embedding capacity. Experimental results also show that the proposed scheme achieves the best performance among approaches in literature in terms of the compression ratio and embedding capacity.