
Showing papers on "Lossless compression published in 2012"


Journal ArticleDOI
TL;DR: SCALCE, a 'boosting' scheme based on the Locally Consistent Parsing technique, is presented; it reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome.
Abstract: Motivation: The high throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated worldwide. Currently, most HTS data are compressed through general purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a ‘boosting’ scheme based on the Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Results: Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19 when the goal is to compress the reads alone. In fact, on SCALCE-reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly, even the running time of SCALCE + gzip improves on that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for improving bzip2, SCALCE + gzip provides up to 2.01 times better compression while improving the running time by a factor of 5.17. SCALCE also provides the option to compress the quality scores as well as the read names, in addition to the reads themselves. This is achieved by compressing the quality scores through order-3 Arithmetic Coding (AC) and the read names through gzip, exploiting the reordering SCALCE provides on the reads. This way, in comparison with gzip compression of the unordered FASTQ files (including reads, read names and quality scores), SCALCE (together with gzip and arithmetic encoding) can provide up to a 3.34-fold improvement in the compression rate and a 1.26-fold improvement in running time.
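The key observation behind this result is that a general-purpose compressor such as gzip benefits greatly when highly similar reads end up next to each other within its window. The sketch below illustrates that effect on synthetic reads with a naive bucketing key (the lexicographically smallest 12-mer of each read); it is only a hypothetical stand-in for SCALCE's Locally Consistent Parsing, and all data and names in it are made up.

```python
# Demo: reordering similar reads before gzip improves the compression rate,
# even though the compressor itself is unchanged.
import gzip
import random

def compressed_size(reads):
    return len(gzip.compress("\n".join(reads).encode()))

random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(20000))
# Simulate overlapping 100 bp reads sampled from the genome.
reads = [genome[i:i + 100] for i in (random.randrange(0, 19900) for _ in range(5000))]

# Naive bucketing key: the smallest 12-mer of each read, so reads sharing long
# substrings tend to become neighbours (a crude minimizer-style grouping).
reordered = sorted(reads, key=lambda r: min(r[i:i + 12] for i in range(len(r) - 11)))

print("unordered :", compressed_size(reads), "bytes")
print("reordered :", compressed_size(reordered), "bytes")
```

On data with heavy read overlap, the reordered stream typically compresses noticeably better, which is the "boosting" effect the paper exploits far more systematically.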

155 citations


Journal ArticleDOI
TL;DR: The lossless coding mode of the High Efficiency Video Coding (HEVC) main profile that bypasses transform, quantization, and in-loop filters is described and a sample-based angular intra prediction method is presented to improve the coding efficiency.
Abstract: The lossless coding mode of the High Efficiency Video Coding (HEVC) main profile that bypasses transform, quantization, and in-loop filters is described. Compared to the HEVC nonlossless coding mode with the smallest quantization parameter value (i.e., 0 for 8-b video and -12 for 10-b video), the HEVC lossless coding mode provides perfect fidelity and an average bit-rate reduction of 3.2%-13.2%. It also significantly outperforms the existing lossless compression solutions, such as JPEG2000 and JPEG-LS for images as well as 7-Zip and WinRAR for data archiving. To further improve the coding efficiency of the HEVC lossless mode, a sample-based angular intra prediction (SAP) method is presented. The SAP employs the same prediction mode signaling method and the sample interpolation method as the HEVC block-based angular prediction, but uses adjacent neighbors for better intra prediction accuracy and performs prediction sample by sample. The experimental results reveal that the SAP provides an additional bit-rate reduction of 1.8%-11.8% on top of the HEVC lossless coding mode.
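To see why predicting sample by sample from adjacent, already-reconstructed neighbors reduces residual magnitude compared with predicting an entire block from its boundary, consider the simplified, horizontal-only sketch below; it is not the SAP algorithm itself (which covers the full set of angular modes), just an illustration of the underlying effect on a synthetic smooth block.

```python
import numpy as np

rng = np.random.default_rng(1)
# A smooth 16x17 tile: column 0 plays the role of the already-reconstructed
# neighbours to the left of a 16x16 block (columns 1..16).
tile = np.clip(np.arange(17) * 4 + rng.normal(0, 2, (16, 17)), 0, 255).astype(np.int32)
neighbours, block = tile[:, :1], tile[:, 1:]

# Block-based horizontal prediction: every sample in a row is predicted from
# the single boundary pixel to the left of the block.
res_block = block - neighbours

# Sample-based prediction: each sample is predicted from its adjacent left
# neighbour, which is already available in lossless coding.
res_sample = tile[:, 1:] - tile[:, :-1]

print("mean |residual|, block-based :", np.abs(res_block).mean())
print("mean |residual|, sample-based:", np.abs(res_sample).mean())
```

Smaller residuals translate directly into fewer bits for the entropy coder, which is where the additional bit-rate reduction reported above comes from.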

114 citations


Proceedings ArticleDOI
13 May 2012
TL;DR: This work utilizes a two-level hierarchical sort for BWT, designs a novel scan-based parallel MTF algorithm, and implements a parallel reduction scheme to build the Huffman tree, thereby parallelizing the bzip2 compression pipeline.
Abstract: We present parallel algorithms and implementations of a bzip2-like lossless data compression scheme for GPU architectures. Our approach parallelizes three main stages in the bzip2 compression pipeline: Burrows-Wheeler transform (BWT), move-to-front transform (MTF), and Huffman coding. In particular, we utilize a two-level hierarchical sort for BWT, design a novel scan-based parallel MTF algorithm, and implement a parallel reduction scheme to build the Huffman tree. For each algorithm, we perform detailed performance analysis, discuss its strengths and weaknesses, and suggest future directions for improvements. Overall, our GPU implementation is dominated by BWT performance and is 2.78× slower than bzip2, with BWT and MTF-Huffman respectively 2.89× and 1.34× slower on average.
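For reference, the stages being parallelized are straightforward to state serially; the sketch below gives the textbook CPU formulations of the BWT (via rotation sorting) and the move-to-front transform, with Huffman coding omitted for brevity. It is purely illustrative and unrelated to the GPU kernels in the paper.

```python
# Serial reference of two bzip2 stages: Burrows-Wheeler transform and MTF.
def bwt(s: bytes) -> bytes:
    s = s + b"\x00"                       # unique sentinel, assumed absent in s
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return bytes(rot[-1] for rot in rotations)

def mtf(data: bytes) -> list:
    table = list(range(256))
    out = []
    for b in data:
        idx = table.index(b)
        out.append(idx)
        table.insert(0, table.pop(idx))   # move the symbol to the front
    return out

text = b"banana_bandana_banana"
transformed = bwt(text)
print(transformed)
print(mtf(transformed))
```

The BWT groups equal symbols together, the MTF turns those runs into many small indices, and Huffman coding then exploits the skewed index distribution; the paper's contribution is doing each of these steps in parallel on the GPU.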

93 citations


Proceedings ArticleDOI
19 Sep 2012
TL;DR: This work investigates the microarchitectural changes required to support lossless compression of data transferred between the GPU and its off-chip memory, providing higher effective bandwidth, and proposes applying lossless compression to floating-point numbers after truncating their least-significant bits (i.e., lossy compression overall).
Abstract: State-of-the-art graphic processing units (GPUs) provide very high memory bandwidth, but the performance of many general-purpose GPU (GPGPU) workloads is still bounded by memory bandwidth. Although compression techniques have been adopted by commercial GPUs, they are only used for compressing texture and color data, not data for GPGPU workloads. Furthermore, the microarchitectural details of GPU compression are proprietary and its performance benefits have not been previously published. In this paper, we first investigate required microarchitectural changes to support lossless compression techniques for data transferred between the GPU and its off-chip memory to provide higher effective bandwidth. Second, by exploiting some characteristics of floating-point numbers in many GPGPU workloads, we propose to apply lossless compression to floating-point numbers after truncating their least-significant bits (i.e., lossy compression). This can reduce the bandwidth usage even further with very little impact on overall computational accuracy. Finally, we demonstrate that a GPU with our lossless and lossy compression techniques can improve the performance of memory-bound GPGPU workloads by 26% and 41% on average.
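The floating-point proposal amounts to zeroing low-order mantissa bits before handing the data to a lossless coder, trading a bounded precision loss for better compressibility. The sketch below, with made-up data and thresholds, shows the effect using zlib as a stand-in byte compressor.

```python
import numpy as np, zlib

rng = np.random.default_rng(0)
data = rng.normal(0, 1, 1 << 16).astype(np.float32)

def truncated(a: np.ndarray, bits: int) -> np.ndarray:
    """Zero out the `bits` least-significant mantissa bits of each float32."""
    mask = np.uint32((0xFFFFFFFF << bits) & 0xFFFFFFFF)
    return (a.view(np.uint32) & mask).view(np.float32)

for bits in (0, 8, 16):
    t = truncated(data, bits)
    size = len(zlib.compress(t.tobytes(), 9))
    err = float(np.max(np.abs(t - data)))
    print(f"truncated bits={bits:2d}  compressed={size:6d} B  max abs error={err:.2e}")
```

With bits=0 the scheme degenerates to purely lossless compression; increasing the truncation shrinks the compressed size further at the cost of a small, bounded relative error.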

86 citations


Journal ArticleDOI
TL;DR: This paper recovers the secret image losslessly while enhancing the contrast of the previewed image, and introduces a new definition of contrast to evaluate the visual quality of the previewed image.

79 citations


01 Jan 2012
TL;DR: Experimental results demonstrate that the proposed technique provides sufficiently high compression ratios compared with other compression techniques.
Abstract: Image compression is a key technology in the transmission and storage of digital images because of the vast amount of data associated with them. This research proposes a new image compression scheme with a pruning proposal based on the discrete wavelet transform (DWT). The effectiveness of the algorithm has been demonstrated on several real images, and its performance has been compared with other common compression standards. The algorithm has been implemented using Visual C++ and tested on a Pentium Core 2 Duo 2.1 GHz PC with 1 GB RAM. Experimental results demonstrate that the proposed technique provides sufficiently high compression ratios compared with other compression techniques.
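The paper's specific pruning proposal is not detailed in the abstract, but the general DWT-plus-pruning pattern it builds on can be sketched as follows: one level of a Haar transform, zeroing of small detail coefficients, and a generic byte coder. The threshold, data and coder below are illustrative assumptions only.

```python
import numpy as np, zlib

rng = np.random.default_rng(2)
# Smooth synthetic image (a diagonal ramp) with a little sensor-like noise.
img = (np.add.outer(np.arange(128), np.arange(128)) / 4.0
       + rng.normal(0, 0.3, (128, 128))).astype(np.float32)

def haar2d(a):
    lo = (a[:, 0::2] + a[:, 1::2]) / 2           # horizontal average / difference
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    ll = (lo[0::2, :] + lo[1::2, :]) / 2         # then the same vertically
    lh = (lo[0::2, :] - lo[1::2, :]) / 2
    hl = (hi[0::2, :] + hi[1::2, :]) / 2
    hh = (hi[0::2, :] - hi[1::2, :]) / 2
    return ll, lh, hl, hh

ll, lh, hl, hh = haar2d(img)
threshold = 0.5                                  # prune small detail coefficients
pruned = [ll] + [np.where(np.abs(b) < threshold, np.float32(0), b) for b in (lh, hl, hh)]

raw = len(zlib.compress(img.tobytes(), 9))
packed = len(zlib.compress(np.concatenate([b.ravel() for b in pruned]).tobytes(), 9))
print("compressed original  :", raw, "bytes")
print("compressed pruned DWT:", packed, "bytes")
```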

72 citations


Book
28 Sep 2012
TL;DR: This edited volume collects chapters on image compression, text compression and coding theory, covering topics such as tree-structured vector quantization, fractal image compression, arithmetic coding, Ziv-Lempel compressors with deferred innovation and massively parallel systolic algorithms for real-time dictionary-based text compression.
Abstract: Introduction. Part 1: Image Compression. 1. Image Compression and Tree-Structured Vector Quantization R.M. Gray, P.C. Cosman, E.A. Riskin. 2. Fractal Image Compression Using Iterated Transforms Y. Fisher, E.W. Jacobs, R.D. Boss. 3. Optical Techniques for Image Compression J.H. Reif, A. Yoshida. Part 2: Text Compression. 4. Practical Implementations of Arithmetic Coding P.G. Howard, J.S. Vitter. 5. Context Modeling for Text Compression D.S. Hirschberg, D.A. Lelewer. 6. Ziv-Lempel Compressors with Deferred-Innovation M. Cohn. 7. Massively Parallel Systolic Algorithms for Real-Time Dictionary-Based Text Compression J.A. Storer. Part 3: Coding Theory. 8. Variations on a Theme by Gallager R.-M. Capocelli, A. De Santis. 9. On the Coding Delay of a General Coder M.J. Weinberger, A. Lempel, J. Ziv. 10. Finite State Two-Dimensional Compressibility D. Sheinwald. Bibliography. Index.

72 citations


01 Apr 2012
TL;DR: In this paper, the authors presented new topologies for realizing one lossless grounded inductor and two floating inductors employing a single differential difference current conveyor and a minimum number of passive components, two resistors, and one grounded capacitor.
Abstract: In this work, we present new topologies for realizing one lossless grounded inductor and two floating inductors, one lossless and one lossy, employing a single differential difference current conveyor (DDCC) and a minimum number of passive components: two resistors and one grounded capacitor. The floating inductors are based on an ordinary dual-output differential difference current conveyor (DO-DDCC), while the grounded lossless inductor is based on a modified dual-output differential difference current conveyor (MDO-DDCC). The proposed lossless floating inductor is obtained from the lossy one by employing a negative impedance converter (NIC). The non-ideality effects of the active element on the simulated inductors are investigated. To demonstrate the performance of the proposed grounded inductance simulator, it is used as an example to construct a parallel resonant circuit. SPICE simulation results are given to confirm the theoretical analysis.

71 citations


Proceedings ArticleDOI
12 Nov 2012
TL;DR: It is shown that excellent compression gains can be achieved when investing a moderate amount of memory, and four lossless compression algorithms for smart meters are proposed.
Abstract: Smart meters are increasingly penetrating the market, resulting in enormous data volumes to be communicated. In many cases, embedded devices collect the metering data and transmit them wirelessly to achieve cheap and facile deployment. Bandwidth, however, is scarce, and transmission occupies the spectrum. Smart meter data should hence be compressed prior to transmission. Here, solutions for personal computers are not applicable, as they are too resource-demanding. In this paper, we propose four lossless compression algorithms for smart meters. We analyze processing time and compression gains and compare the results with five off-the-shelf compression algorithms. We show that excellent compression gains can be achieved when investing a moderate amount of memory. A discussion of the suitability of the algorithms for different kinds of metering data is presented.
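The paper's four algorithms are not spelled out in the abstract, but the kind of lightweight lossless scheme that fits an embedded meter can be sketched as delta encoding followed by a byte-oriented variable-length integer code; everything below (readings, mapping, code) is an illustrative assumption rather than one of the proposed algorithms.

```python
# Lightweight lossless sketch: delta encoding + zigzag mapping + varint bytes.
def zigzag(n: int) -> int:
    return (n << 1) ^ (n >> 63)            # map signed deltas to unsigned integers

def varint(n: int) -> bytes:
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))  # high bit marks "more bytes follow"
        if not n:
            return bytes(out)

def compress(readings):
    prev, out = 0, bytearray()
    for r in readings:
        out += varint(zigzag(r - prev))     # small deltas become single bytes
        prev = r
    return bytes(out)

readings = [23510, 23512, 23512, 23515, 23519, 23520]   # e.g. cumulative Wh counts
encoded = compress(readings)
print(f"{len(encoded)} bytes vs {len(readings) * 4} bytes as raw 32-bit values")
```

Because meter readings change slowly, most deltas fit in one byte, which is exactly the kind of memory- and CPU-frugal gain the paper quantifies for its own algorithms.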

71 citations


Journal ArticleDOI
TL;DR: This paper presents an adaptive lossless data compression (ALDC) algorithm that performs compression losslessly using multiple code options and demonstrates the merits of the proposed compression algorithm in comparison with other recently proposed lossless compression algorithms for WSNs.
Abstract: Energy is an important consideration in the design and deployment of wireless sensor networks (WSNs) since sensor nodes are typically powered by batteries with limited capacity. Since the communication unit on a wireless sensor node is the major power consumer, data compression is one of the possible techniques that can help reduce the amount of data exchanged between wireless sensor nodes, resulting in power savings. However, wireless sensor networks possess significant limitations in communication, processing, storage, bandwidth, and power. Thus, any data compression scheme proposed for WSNs must be lightweight. In this paper, we present an adaptive lossless data compression (ALDC) algorithm for wireless sensor networks. Our proposed ALDC scheme performs compression losslessly using multiple code options. Adaptive compression schemes allow compression to dynamically adjust to a changing source. The data sequence to be compressed is partitioned into blocks, and the optimal compression scheme is applied for each block. Using various real-world sensor datasets, we demonstrate the merits of our proposed compression algorithm in comparison with other recently proposed lossless compression algorithms for WSNs.
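The "multiple code options per block" idea can be illustrated with a simple cost model: partition the samples into blocks, evaluate a small set of Rice code parameters on each block, and keep the cheapest, paying a couple of bits to signal the choice. The block size and code options below are hypothetical, not ALDC's.

```python
# Block-adaptive selection among several Rice code options (cost model only).
def rice_bits(value: int, k: int) -> int:
    """Bit cost of coding a non-negative value with Rice parameter k."""
    return (value >> k) + 1 + k            # unary quotient + stop bit + k remainder bits

def block_cost(block, k):
    # Map signed samples to non-negative ones (zigzag-style) before costing.
    return sum(rice_bits(abs(v) * 2 - (v < 0), k) for v in block)

def adaptive_cost(samples, block_size=16, options=(0, 1, 2, 3)):
    total = 0
    for i in range(0, len(samples), block_size):
        block = samples[i:i + block_size]
        total += 2 + min(block_cost(block, k) for k in options)  # 2 bits signal the option
    return total

deltas = [0, 1, -1, 0, 2, 0, 0, -3, 1, 0, 0, 5, -2, 0, 1, 0] * 8
print("adaptive:", adaptive_cost(deltas), "bits;  fixed k=3:", block_cost(deltas, 3), "bits")
```

Choosing the code option per block is what lets such a scheme track a changing source without the cost of retraining a full statistical model on a resource-limited node.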

70 citations


Journal ArticleDOI
TL;DR: A similarity measure based on compression with dictionaries, the Fast Compression Distance (FCD), is proposed; it reduces the complexity of compression-based similarity methods without degradation in performance.

Journal ArticleDOI
TL;DR: Experiments show that both lossy and lossless transformations are useful, and that simple coding methods, which consume less computing resources, are highly competitive, especially when random access to reads is needed.
Abstract: Motivation: The growth of next-generation sequencing means that more effective and efficient archiving methods are needed to store the generated data for public dissemination and in anticipation of more mature analytical methods later. This article examines methods for compressing the quality score component of the data to partly address this problem. Results: We compare several compression policies for quality scores, in terms of both compression effectiveness and overall efficiency. The policies employ lossy and lossless transformations with one of several coding schemes. Experiments show that both lossy and lossless transformations are useful, and that simple coding methods, which consume less computing resources, are highly competitive, especially when random access to reads is needed. Availability and implementation: Our C++ implementation, released under the Lesser General Public License, is available for download at http://www.cb.k.u-tokyo.ac.jp/asailab/members/rwan. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
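One widely used lossy transformation for quality scores is coarse binning, after which a generic lossless coder does much better because the alphabet shrinks. The sketch below uses made-up bin edges and zlib as the coder; the paper evaluates a broader set of transformations and coding schemes.

```python
import zlib, random

random.seed(3)
quals = bytes(random.choice(range(33, 74)) for _ in range(100000))  # Phred+33 scores

def bin_quality(q: int) -> int:
    """Map a Phred score (offset 33) to the lower edge of an 8-value bin."""
    score = q - 33
    edges = [0, 2, 10, 20, 25, 30, 35, 40]        # illustrative bin edges
    return 33 + max(e for e in edges if e <= score)

binned = bytes(bin_quality(q) for q in quals)
print("lossless only     :", len(zlib.compress(quals, 9)), "bytes")
print("binned + lossless :", len(zlib.compress(binned, 9)), "bytes")
```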

Proceedings ArticleDOI
01 Apr 2012
TL;DR: The In-Situ Orthogonal Byte Aggregate Reduction Compression (ISOBAR-compress) methodology is introduced as a preconditioner of lossless compression to identify and optimize the compression efficiency and throughput of hard-to-compress datasets.
Abstract: Efficient handling of large volumes of data is a necessity for exascale scientific applications and database systems. To address the growing imbalance between the amount of available storage and the amount of data being produced by high speed (FLOPS) processors on the system, data must be compressed to reduce the total amount of data placed on the file systems. General-purpose lossless compression frameworks, such as zlib and bzlib2, are commonly used on datasets requiring lossless compression. Quite often, however, many scientific data sets compress poorly, referred to as hard-to-compress datasets, due to the negative impact of highly entropic content represented within the data. An important problem in better lossless data compression is to identify the hard-to-compress information and subsequently optimize the compression techniques at the byte-level. To address this challenge, we introduce the In-Situ Orthogonal Byte Aggregate Reduction Compression (ISOBAR-compress) methodology as a preconditioner of lossless compression to identify and optimize the compression efficiency and throughput of hard-to-compress datasets.
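The byte-level idea can be illustrated by viewing a float64 stream one byte position at a time, testing each resulting stream for compressibility, and passing the incompressible (hard-to-compress) ones through untouched. The threshold and data below are assumptions for illustration; ISOBAR's actual analysis is more involved.

```python
import numpy as np, zlib

rng = np.random.default_rng(4)
data = np.cumsum(rng.normal(0, 1e-3, 1 << 16)).astype(np.float64)  # smooth "simulation" signal

cols = data.view(np.uint8).reshape(-1, 8).T      # byte position i of every value -> row i
threshold = 0.9                                  # ratio above which a stream is "hard"
total = 0
for i, col in enumerate(cols):
    raw = col.tobytes()
    packed = zlib.compress(raw, 6)
    ratio = len(packed) / len(raw)
    hard = ratio > threshold
    total += len(raw) if hard else len(packed)
    print(f"byte {i}: ratio={ratio:.2f} {'stored raw' if hard else 'compressed'}")
print("total with preconditioning:", total, "vs plain zlib:", len(zlib.compress(data.tobytes(), 6)))
```

Typically the sign/exponent byte streams compress very well while the low mantissa bytes are nearly random; skipping the latter avoids wasting compressor throughput on incompressible content.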

Journal ArticleDOI
TL;DR: Capacity, one of the main problems of steganography, has been increased to 7.042%, and the security of the proposed method is supported by employing stego keys and combinatorics-based coding.

Book ChapterDOI
27 Aug 2012
TL;DR: This paper focuses on developing effective and efficient algorithms for compressing scientific simulation data computed on structured and unstructured grids, in which the data is modeled as a graph that gets decomposed into sets of vertices satisfying a user-defined error constraint.
Abstract: This paper focuses on developing effective and efficient algorithms for compressing scientific simulation data computed on structured and unstructured grids. A paradigm for lossy compression of this data is proposed in which the data computed on the grid is modeled as a graph, which gets decomposed into sets of vertices that satisfy a user-defined error constraint e. Each set of vertices is replaced by a constant value with reconstruction error bounded by e. A comprehensive set of experiments is conducted by comparing these algorithms with other state-of-the-art scientific data compression methods. Over our benchmark suite, our methods obtained compression to 1% of the original size with an average PSNR of 43.00 dB and to 3% of the original size with an average PSNR of 63.30 dB. In addition, our schemes outperform other state-of-the-art lossy compression approaches and require on average 25% of the space required by them for similar or better PSNR levels.
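The error-bounded decomposition is easiest to see in one dimension: grow a run while its value range stays within 2e, then replace the run by its midpoint, which keeps every sample within e of its reconstruction. The sketch below is only this 1-D analogue of the graph decomposition described above, on made-up data.

```python
import numpy as np

rng = np.random.default_rng(5)
signal = np.cumsum(rng.normal(0, 0.05, 5000))
eps = 0.1

values, lengths = [], []
lo = hi = signal[0]
count = 0
for x in signal:
    lo2, hi2 = min(lo, x), max(hi, x)
    if hi2 - lo2 <= 2 * eps:                # run still fits within the error budget
        lo, hi, count = lo2, hi2, count + 1
    else:                                   # close the current run, start a new one
        values.append((lo + hi) / 2)
        lengths.append(count)
        lo = hi = x
        count = 1
values.append((lo + hi) / 2)
lengths.append(count)

recon = np.repeat(values, lengths)
print("runs:", len(values), "of", len(signal), "samples; max error:",
      float(np.max(np.abs(recon - signal))))
```

Storing one value plus a length per run is what yields the large size reductions, and the max error printed at the end stays below eps by construction.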

Journal ArticleDOI
TL;DR: The results demonstrate that the polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average, and ACE, the combined predictor, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.
Abstract: In this work, we address the problem of lossless compression of scientific and medical floating-point volume data. We propose two prediction-based compression methods that share a common framework, which consists of a switched prediction scheme wherein the best predictor out of a preset group of linear predictors is selected. Such a scheme is able to adapt to different datasets as well as to varying statistics within the data. The first method, called APE (Adaptive Polynomial Encoder), uses a family of structured interpolating polynomials for prediction, while the second method, which we refer to as ACE (Adaptive Combined Encoder), combines predictors from previous work with the polynomial predictors to yield a more flexible, powerful encoder that is able to effectively decorrelate a wide range of data. In addition, in order to facilitate efficient visualization of compressed data, our scheme provides an option to partition floating-point values in such a way as to provide a progressive representation. We compare our two compressors to existing state-of-the-art lossless floating-point compressors for scientific data, with our data suite including both computer simulations and observational measurements. The results demonstrate that our polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average. ACE, our combined predictor, while somewhat slower, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.
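A switched prediction scheme of this general kind can be sketched by evaluating a small preset family of linear predictors per block and keeping whichever gives the smallest residuals; the family and block size below are illustrative and do not reproduce APE or ACE.

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.sin(np.linspace(0, 60, 4096)) + 0.01 * rng.normal(size=4096)

predictors = {
    "previous":  lambda a, i: a[i - 1],                 # constant extrapolation
    "linear":    lambda a, i: 2 * a[i - 1] - a[i - 2],  # linear extrapolation
    "quadratic": lambda a, i: 3 * a[i - 1] - 3 * a[i - 2] + a[i - 3],
}

block = 64
chosen, residual_sum = [], 0.0
for start in range(3, len(x), block):
    idx = range(start, min(start + block, len(x)))
    costs = {name: sum(abs(x[i] - p(x, i)) for i in idx) for name, p in predictors.items()}
    best = min(costs, key=costs.get)        # per-block switch: keep the cheapest predictor
    chosen.append(best)
    residual_sum += costs[best]
print("mean |residual| with switching:", residual_sum / (len(x) - 3))
print("blocks per predictor:", {n: chosen.count(n) for n in predictors})
```

Only the per-block choice needs to be signaled to the decoder, so the scheme adapts to varying statistics at a negligible side-information cost, which is the property the paper exploits.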

Proceedings ArticleDOI
07 May 2012
TL;DR: This paper describes a new depth map encoding algorithm which aims at exploiting the intrinsic properties of depth maps; it is assessed against JPEG-2000 and HEVC, both in terms of PSNR of the depth maps versus rate and in terms of PSNR of the synthesized virtual views.
Abstract: The multi-view plus depth video (MVD) format has recently been introduced for 3DTV and free-viewpoint video (FVV) scene rendering. Given one view (or several views) with its depth information, depth image-based rendering techniques have the ability to generate intermediate views. The MVD format, however, generates large volumes of data which need to be compressed for storage and transmission. This paper describes a new depth map encoding algorithm which aims at exploiting the intrinsic depth map properties. Depth images indeed represent the scene surface and are characterized by areas of smoothly varying grey levels separated by sharp edges at the position of object boundaries. Preserving these characteristics is important to enable high quality view rendering at the receiver side. The proposed algorithm proceeds in three steps: the edges at object boundaries are first detected using a Sobel operator. The positions of the edges are encoded using the JBIG algorithm. The luminance values of the pixels along the edges are then encoded using an optimized path encoder. The decoder runs a fast diffusion-based inpainting algorithm which fills in the unknown pixels within the objects by starting from their boundaries. The performance of the algorithm is assessed against JPEG-2000 and HEVC, both in terms of PSNR of the depth maps versus rate and in terms of PSNR of the synthesized virtual views.

Journal ArticleDOI
TL;DR: A new lossless progressive compression algorithm based on rate-distortion optimization for meshes with color attributes and a new metric which estimates the geometry and color importance of each vertex during the simplification in order to faithfully preserve the feature elements is proposed.
Abstract: We propose a new lossless progressive compression algorithm based on rate-distortion optimization for meshes with color attributes; the quantization precision of both the geometry and the color information is adapted to each intermediate mesh during the encoding/decoding process. This quantization precision can either be optimally determined with the use of a mesh distortion measure or quasi-optimally decided based on an analysis of the mesh complexity in order to reduce the calculation time. Furthermore, we propose a new metric which estimates the geometry and color importance of each vertex during the simplification in order to faithfully preserve the feature elements. Experimental results show that our method outperforms the state-of-the-art algorithm for colored meshes and competes with the most efficient algorithms for non-colored meshes.

Book
30 Oct 2012
TL;DR: This edition of Introduction to Data Compression provides an extensive introduction to the theory underlying today's compression techniques, with detailed instruction for their applications, using several examples to explain the concepts.
Abstract: Each edition of Introduction to Data Compression has widely been considered the best introduction and reference text on the art and science of data compression, and the fourth edition continues in this tradition. Data compression techniques and technology are ever-evolving, with new applications in image, speech, text, audio, and video. The fourth edition includes all the cutting-edge updates the reader will need during the work day and in class. Khalid Sayood provides an extensive introduction to the theory underlying today's compression techniques, with detailed instruction for their applications, using several examples to explain the concepts. Encompassing the entire field of data compression, Introduction to Data Compression includes lossless and lossy compression, Huffman coding, arithmetic coding, dictionary techniques, context-based compression, and scalar and vector quantization. Khalid Sayood provides a working knowledge of data compression, giving the reader the tools to develop a complete and concise compression package upon completion of his book. New content has been added, including a more detailed description of the JPEG 2000 standard and speech coding for internet applications. The book explains established and emerging standards in depth, including JPEG 2000, JPEG-LS, MPEG-2, H.264, JBIG 2, ADPCM, LPC, CELP, MELP, and iLBC. Source code is provided via a companion web site that gives readers the opportunity to build their own algorithms and choose and implement techniques in their own applications.
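As a flavour of the material, one of the textbook staples listed above, Huffman coding, can be built in a few lines with the standard heap construction; the sketch below is a generic illustration, not code from the book's companion site.

```python
import heapq
from collections import Counter

def huffman_code(text: str) -> dict:
    # Heap entries: [frequency, tie-breaker, {symbol: codeword}].
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}     # prefix 0 on the lighter subtree
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tick, merged])
        tick += 1
    return heap[0][2]

message = "introduction to data compression"
code = huffman_code(message)
encoded = "".join(code[s] for s in message)
print(code)
print(len(encoded), "bits vs", 8 * len(message), "bits uncompressed")
```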

Journal ArticleDOI
TL;DR: This paper proposes an effective solution to RLDE by improving the histogram rotation (HR)-based embedding model; it eliminates the "salt-and-pepper" noise in HR through a pixel adjustment mechanism, so reliable regions for embedding can be well constructed.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed method for tumor segmentation from mammogram images, based on an improved watershed transform using prior information, gives promising results in compression applications.
Abstract: In this study, an automatic image segmentation method is proposed for tumor segmentation from mammogram images by means of an improved watershed transform using prior information. The segmentation results for the individual regions are then used to perform lossy and lossless compression for storage efficiency according to the importance of the region data. This is carried out in two procedures: region segmentation and region compression. In the first procedure, the Canny edge detector is used to detect the edge between the background and the breast. An improved watershed transform based on intrinsic prior information is then adopted to extract the tumor boundary. Finally, the mammograms are segmented into tumor, breast without tumor and background. In the second procedure, vector quantization (VQ) with a competitive Hopfield neural network (CHNN) is applied to the three regions with different compression rates according to the importance of the region data, so as to simultaneously preserve important tumor features and reduce the size of the mammograms for storage efficiency. Experimental results show that the proposed method gives promising results in compression applications.

Patent
07 Jun 2012
TL;DR: In this paper, the authors describe a computing device (300) that includes a storage (325) operable over time to include video and graphics content, the storage having a first set of instructions representing lossy video compression (130) and a second set of instructions representing lossless compression (120), and a processor (320) coupled with the storage (325), the processor (320) being operable to electronically analyze (110) at least a portion of the content for motion based on magnitudes of motion vectors and, on detection of a significant amount of motion, to activate the first set of instructions (130) to compress at least the motion video, and otherwise to activate the second set of instructions representing lossless compression (120) to compress at least the graphics.
Abstract: A computing device (300) includes a storage (325) that over time is operable to include video and graphics content, and the storage has a first set of instructions representing lossy video compression (130) and a second set of instructions representing lossless compression (120); and a processor (320) coupled with said storage (325), said processor (320) operable to electronically analyze (110) at least a portion of the content for motion based on magnitudes of motion vectors and, on detection of a significant amount of motion, further operable to activate the first set of instructions (130) to compress at least the motion video, and otherwise to activate the second set of instructions representing lossless compression (120) to compress at least the graphics. Other devices, systems, and processes are disclosed.

Journal ArticleDOI
TL;DR: An application for 3D cameras is proposed in which the depth map is reversibly hidden in the corresponding 2D images; it is prospective for cameras capable of simultaneously capturing the 2D image and the resultant depth map of an object.

Journal ArticleDOI
TL;DR: K_m proves to be a finer-grained measure and a potential alternative approach to lossless compression algorithms for small entities, where compression fails, and a first Beta version of an Online Algorithmic Complexity Calculator (OACC) is announced, based on a combination of theoretical concepts and numerical calculations.
Abstract: We show that real-value approximations of Kolmogorov-Chaitin complexity (K_m), calculated via the algorithmic Coding theorem from the output frequency of a large set of small deterministic Turing machines with up to 5 states (and 2 symbols), are in agreement with the number of instructions used by the Turing machines producing s, which is consistent with strict integer-value program-size complexity. Nevertheless, K_m proves to be a finer-grained measure and a potential alternative approach to lossless compression algorithms for small entities, where compression fails. We also show that neither K_m nor the number of instructions used shows any correlation with Bennett's Logical Depth LD(s) other than what is predicted by the theory. The agreement between theory and numerical calculations shows that despite the undecidability of these theoretical measures, approximations are stable and meaningful, even for small programs and for short strings. We also announce a first Beta version of an Online Algorithmic Complexity Calculator (OACC), based on a combination of theoretical concepts, as a numerical implementation of the Coding Theorem Method.
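Numerically, the Coding Theorem Method estimates K_m(s) as -log2 of the output frequency of s over the enumerated machines. The toy frequency table below is entirely made up, purely to show the shape of the calculation.

```python
import math

# Hypothetical output counts of a few strings over some machine enumeration.
output_counts = {"0": 500000, "1": 499000, "01": 120000, "0101": 9000, "011010": 310}
total = sum(output_counts.values())

def K_m(s: str) -> float:
    """Coding-theorem-style estimate: rarer outputs get higher complexity."""
    return -math.log2(output_counts[s] / total)

for s in output_counts:
    print(f"{s!r:10s} K_m estimate = {K_m(s):5.2f} bits")
```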

Journal ArticleDOI
TL;DR: The results show that the C-DPCM method with adaptive prediction length has a lower bits-per-pixel value than the original C-DPCM method for Consultative Committee for Space Data Systems 2006 AVIRIS test images.
Abstract: This letter explores the use of adaptive prediction length in the clustered differential pulse code modulation (C-DPCM) lossless compression method for hyperspectral images. In the C-DPCM method, linear prediction is performed using coefficients optimized for each spectral cluster separately. The difference between the predicted and original values is entropy coded using an adaptive range coder for each cluster. The results show that the C-DPCM method with adaptive prediction length has a lower bits-per-pixel value than the original C-DPCM method for Consultative Committee for Space Data Systems 2006 AVIRIS test images. Both calibrated and uncalibrated image compression results are improved by adaptive prediction length.

Posted Content
TL;DR: This paper proposes a new algorithm for data compression, called j-bit encoding (JBE), which manipulates each bit of data inside a file to minimize its size without losing any data after decoding, and is therefore classified as lossless compression.
Abstract: People tend to store a lot of files in their storage. When the storage nears its limit, they try to reduce the size of those files to a minimum by using data compression software. In this paper we propose a new algorithm for data compression, called j-bit encoding (JBE). This algorithm manipulates each bit of data inside a file to minimize the size without losing any data after decoding, and is therefore classified as lossless compression. This basic algorithm is intended to be combined with other data compression algorithms to optimize the compression ratio. The performance of the algorithm is measured by comparing combinations of different data compression algorithms.

Journal ArticleDOI
TL;DR: This paper presents an online algorithm for lightweight grammar-based compression based on the LCA algorithm, which guarantees a nearly optimal compression ratio and space, and proposes a more practical encoding based on a parentheses representation of a binary tree.
Abstract: Grammar-based compression is a well-studied technique to construct a context-free grammar (CFG) deriving a given text uniquely. In this work, we propose an online algorithm for grammar-based compression. Our algorithm guarantees an O(log^2 n)-approximation ratio for the minimum grammar size, where n is the input size, and it runs in input-linear time and output-linear space. In addition, we propose a practical encoding, which transforms a restricted CFG into a more compact representation. Experimental results, obtained by comparison with standard compressors, demonstrate that our algorithm is especially effective for highly repetitive text.
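To show what a grammar-based representation of a repetitive text looks like, the sketch below uses a tiny offline Re-Pair-style compressor that repeatedly replaces the most frequent adjacent pair with a fresh nonterminal; the paper's algorithm is online and quite different, so this is only a conceptual illustration.

```python
from collections import Counter

def grammar_compress(seq):
    seq = list(seq)                         # byte values 0..255 are the terminals
    rules, next_id = {}, 256                # ids >= 256 are nonterminals
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:                       # nothing repeats any more
            break
        rules[next_id] = pair               # new rule: next_id -> pair
        out, i = [], 0
        while i < len(seq):                 # greedy left-to-right replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(next_id)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq, next_id = out, next_id + 1
    return seq, rules

start, rules = grammar_compress(b"abcabcabcabcabcabc")
print("start symbol sequence:", start)
print("grammar rules:", rules)
```

On highly repetitive input the start sequence collapses to a handful of symbols plus a small rule set, which is exactly the regime where the paper reports its strongest results.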

Journal ArticleDOI
TL;DR: This paper outlines the comparison of compression methods such as Shape-Adaptive Wavelet Transform and Scaling-Based ROI, JPEG2000 Max-Shift ROI Coding, JPEG2000 Scaling-Based ROI Coding, Discrete Cosine Transform, Discrete Wavelet Transform and Subband Block Hierarchical Partitioning on the basis of compression ratio and compression quality.
Abstract: Medical image compression plays a key role as hospitals move towards filmless imaging and go completely digital. Image compression will allow Picture Archiving and Communication Systems (PACS) to reduce the file sizes on their storage requirements while maintaining relevant diagnostic information. Lossy compression schemes are not used in medical image compression due to possible loss of useful clinical information and as operations like enhancement may lead to further degradations in the lossy compression. Medical imaging poses the great challenge of having compression algorithms that reduce the loss of fidelity as much as possible so as not to contribute to diagnostic errors and yet have high compression rates for reduced storage and transmission time. This paper outlines the comparison of compression methods such as Shape-Adaptive Wavelet Transform and Scaling-Based ROI, JPEG2000 Max-Shift ROI Coding, JPEG2000 Scaling-Based ROI Coding, Discrete Cosine Transform, Discrete Wavelet Transform and Subband Block Hierarchical Partitioning on the basis of compression ratio and compression quality.

Proceedings ArticleDOI
10 Apr 2012
TL;DR: A low-complexity integer-reversible spectral-spatial transform that allows for efficient lossless and lossy compression of color-filter-array images, allowing for very high quality offline post-processing, but with camera-raw files that can be half the size of those of existing camera-raw formats.
Abstract: We present a low-complexity integer-reversible spectral-spatial transform that allows for efficient lossless and lossy compression of color-filter-array images (also referred to as camera-raw images). The main advantage of this new transform is that it maps the pixel array values into a format that can be directly compressed in a lossless, lossy, or progressive-to-lossless manner by an existing typical image coder such as JPEG 2000 or JPEG XR. Thus, no special codec design is needed for compressing the camera-raw data. Another advantage is that the new transform allows for mild compression of camera-raw data in a near-lossless format, allowing for very high quality offline post-processing, but with camera-raw files that can be half the size of those of existing camera-raw formats.
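What "integer-reversible" means in practice can be shown with a generic lifting-style transform such as the reversible color transform from JPEG 2000, applied here to plain R, G and B planes as a simplification of a Bayer pattern (which actually has two green sub-planes); this is an assumed stand-in, not the transform proposed in the paper.

```python
import numpy as np

def forward_rct(r, g, b):
    y = (r + 2 * g + b) >> 2        # integer luma (floor division by 4)
    cb = b - g
    cr = r - g
    return y, cb, cr

def inverse_rct(y, cb, cr):
    g = y - ((cb + cr) >> 2)        # exact inverse thanks to the floor arithmetic
    r = cr + g
    b = cb + g
    return r, g, b

rng = np.random.default_rng(7)
r, g, b = (rng.integers(0, 1024, (64, 64), dtype=np.int32) for _ in range(3))
assert all(np.array_equal(x, y) for x, y in zip((r, g, b), inverse_rct(*forward_rct(r, g, b))))
print("forward + inverse transform is exactly lossless")
```

Because the round-trip is bit-exact on integers, the transformed planes can be handed to any existing lossless or lossy coder, which is the "no special codec design needed" property the abstract highlights.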

Journal ArticleDOI
TL;DR: A new hierarchical approach to resolution-scalable lossless and near-lossless (NLS) compression that combines the adaptability of DPCM schemes with new hierarchical oriented predictors to provide resolution scalability with better compression performance than the usual hierarchical interpolation predictor or the wavelet transform.
Abstract: We propose a new hierarchical approach to resolution-scalable lossless and near-lossless (NLS) compression. It combines the adaptability of DPCM schemes with new hierarchical oriented predictors to provide resolution scalability with better compression performance than the usual hierarchical interpolation predictor or the wavelet transform. Because the proposed hierarchical oriented prediction (HOP) is not really efficient on smooth images, we also introduce new predictors, which are dynamically optimized using a least-square criterion. Lossless compression results, obtained on a large-scale medical image database, are more than 4% better on CTs and 9% better on MRIs than resolution-scalable JPEG-2000 (J2K) and close to nonscalable CALIC. The HOP algorithm is also well suited for NLS compression, providing an interesting rate-distortion tradeoff compared with JPEG-LS, and an equivalent or better PSNR than J2K at high bit rates on noisy (native) medical images.
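The near-lossless principle used here is the standard one: quantize the prediction residual with step 2*delta + 1 so the reconstruction error never exceeds delta. The sketch below applies it with a plain left-neighbour DPCM predictor rather than the paper's hierarchical oriented predictors, on made-up data.

```python
import numpy as np

def nls_dpcm(row, delta):
    """Near-lossless DPCM: the decoder can rebuild `recon` from `symbols`."""
    recon, symbols = [int(row[0])], [int(row[0])]
    step = 2 * delta + 1
    for x in row[1:]:
        pred = recon[-1]                              # left-neighbour prediction
        q = int(np.round((int(x) - pred) / step))     # quantized residual (coded symbol)
        symbols.append(q)
        recon.append(pred + q * step)                 # decoder-side reconstruction
    return np.array(recon), symbols

row = np.clip(np.cumsum(np.random.default_rng(8).integers(-3, 4, 512)) + 128, 0, 255)
for delta in (0, 1, 2):
    recon, _ = nls_dpcm(row, delta)
    print(f"delta={delta}: max reconstruction error = {int(np.max(np.abs(recon - row)))}")
```

With delta = 0 the scheme is exactly lossless, and increasing delta trades a strictly bounded per-pixel error for smaller residual symbols, which is the rate-distortion tradeoff mentioned above.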