
Showing papers on "Lossless compression published in 2014"


Journal ArticleDOI
TL;DR: A fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity.
Abstract: Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
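
The fixed per-block rate makes the mapping from array coordinates to compressed storage purely arithmetic, which is what enables block-granularity random access. A minimal sketch of that addressing logic, assuming a user-chosen bits-per-block budget and a caller-supplied block decoder (BITS_PER_BLOCK, BLOCK_SIDE, and decode_block are illustrative names, not the paper's API):

    BITS_PER_BLOCK = 1024      # user-specified rate: bits per block of 4^d values
    BLOCK_SIDE = 4             # blocks span 4 values along each dimension

    def read_value(compressed, shape, coord, decode_block):
        """Random access to one value of a d-dimensional array stored at a fixed rate."""
        # Which block the coordinate falls in, and the offset inside that block.
        block_coord = [c // BLOCK_SIDE for c in coord]
        local_coord = [c % BLOCK_SIDE for c in coord]
        # Linearize the block coordinate (row-major over the grid of blocks).
        blocks_per_dim = [(s + BLOCK_SIDE - 1) // BLOCK_SIDE for s in shape]
        block_index = 0
        for b, n in zip(block_coord, blocks_per_dim):
            block_index = block_index * n + b
        start_bit = block_index * BITS_PER_BLOCK   # fixed rate: a single multiplication
        # decode_block is assumed to return the block's 4^d values as a flat list.
        values = decode_block(compressed, start_bit, BITS_PER_BLOCK)
        local_index = 0
        for c in local_coord:
            local_index = local_index * BLOCK_SIDE + c
        return values[local_index]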

449 citations


Journal ArticleDOI
TL;DR: A highly efficient image encryption-then-compression (ETC) system, where both lossless and lossy compression are considered, and the proposed image encryption scheme operated in the prediction error domain is shown to be able to provide a reasonably high level of security.
Abstract: In many practical scenarios, image encryption has to be conducted prior to image compression. This has led to the problem of how to design a pair of image encryption and compression algorithms such that compressing the encrypted images can still be efficiently performed. In this paper, we design a highly efficient image encryption-then-compression (ETC) system, where both lossless and lossy compression are considered. The proposed image encryption scheme operated in the prediction error domain is shown to be able to provide a reasonably high level of security. We also demonstrate that an arithmetic coding-based approach can be exploited to efficiently compress the encrypted images. More notably, the proposed compression approach applied to encrypted images is only slightly worse, in terms of compression efficiency, than the state-of-the-art lossless/lossy image coders, which take original, unencrypted images as inputs. In contrast, most existing ETC solutions incur a significant penalty in compression efficiency.

173 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed algorithm not only achieves high embedding capacity but also enhances the PSNR of the stego image.
Abstract: Steganography is the knowledge and art of hiding secret data inside other information, and it is widely used in information security systems. Various methods have been proposed in the literature, but most of them cannot both prevent visual degradation and provide a large embedding capacity. In this paper, we propose a spatial-domain method, based on a genetic algorithm (GA), with tunable visual image quality and lossless data recovery. The main idea of the proposed technique is to model the steganography problem as a search and optimization problem. Experimental results, in comparison with other currently popular steganography techniques, demonstrate that the proposed algorithm not only achieves a high embedding capacity but also enhances the PSNR of the stego image.

150 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed scheme significantly outperforms previous approaches to reversible data hiding in encrypted images based on lossless compression of encrypted data.

143 citations


Journal ArticleDOI
08 May 2014-PLOS ONE
TL;DR: In this article, the authors present a numerical approach to the problem of approximating the Kolmogorov-Chaitin complexity of short strings, motivated by the notion of algorithmic probability.
Abstract: Drawing on various notions from theoretical computer science, we present a novel numerical approach, motivated by the notion of algorithmic probability, to the problem of approximating the Kolmogorov-Chaitin complexity of short strings. The method is an alternative to the traditional lossless compression algorithms, which it may complement, the two being serviceable for different string lengths. We provide a thorough analysis for all binary strings of length and for most strings of length by running all Turing machines with 5 states and 2 symbols (with reduction techniques) using the most standard formalism of Turing machines, used, for example, in the Busy Beaver problem. We address the question of stability and error estimation, the sensitivity of the continued application of the method for wider coverage and better accuracy, and provide statistical evidence suggesting robustness. As with compression algorithms, this work promises to deliver a range of applications, and to provide insight into the question of complexity calculation of finite (and short) strings. Additional material can be found at the Algorithmic Nature Group website at http://www.algorithmicnature.org. An Online Algorithmic Complexity Calculator implementing this technique and making the data available to the research community is accessible at http://www.complexitycalculator.com.
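
In rough outline (my paraphrase; the paper's exact normalization may differ), the output frequency of a string s over the enumerated machines serves as an estimate of its algorithmic probability, and the coding theorem turns that frequency into a complexity estimate:

    D(s) = \frac{|\{T : T \text{ halts with output } s\}|}{|\{T : T \text{ halts}\}|},
    \qquad K(s) \approx -\log_2 D(s)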

125 citations


Journal ArticleDOI
TL;DR: The source dispersion and the source varentropy rate are defined and characterized; together with the entropy rate, the varentropy rate serves to tightly approximate the fundamental nonasymptotic limits of fixed-to-variable compression for all but very small block lengths.
Abstract: This paper provides an extensive study of the behavior of the best achievable rate (and other related fundamental limits) in variable-length strictly lossless compression. In the non-asymptotic regime, the fundamental limits of fixed-to-variable lossless compression with and without prefix constraints are shown to be tightly coupled. Several precise, quantitative bounds are derived, connecting the distribution of the optimal code lengths to the source information spectrum, and an exact analysis of the best achievable rate for arbitrary sources is given. Fine asymptotic results are proved for arbitrary (not necessarily prefix) compressors on general mixing sources. Nonasymptotic, explicit Gaussian approximation bounds are established for the best achievable rate on Markov sources. The source dispersion and the source varentropy rate are defined and characterized. Together with the entropy rate, the varentropy rate serves to tightly approximate the fundamental nonasymptotic limits of fixed-to-variable compression for all but very small block lengths.
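
For orientation, the Gaussian (dispersion) approximation referred to here takes the familiar form below, with H the entropy rate, V the varentropy rate, n the block length, \epsilon the allowed excess-rate probability, and Q^{-1} the inverse Gaussian tail function; this is the generic shape of such approximations rather than the paper's exact statement, which includes further lower-order terms:

    R^*(n, \epsilon) \approx H + \sqrt{\frac{V}{n}}\, Q^{-1}(\epsilon)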

121 citations


Journal ArticleDOI
TL;DR: A new lossy image compression technique that combines singular value decomposition (SVD) and wavelet difference reduction (WDR), using the SVD stage to boost the performance of the WDR compression.

120 citations


Proceedings ArticleDOI
12 May 2014
TL;DR: This work uses the Open Computing Language (OpenCL) to implement high-speed data compression (Gzip) on a field-programmable gate array (FPGA), achieving a throughput of ~3 GB/s with more than a 2x compression ratio over standard compression benchmarks.
Abstract: Hardware implementation of lossless data compression is important for optimizing the capacity/cost/power of storage devices in data centers, as well as communication channels in high-speed networks. In this work we use the Open Computing Language (OpenCL) to implement high-speed data compression (Gzip) on a field-programmable gate array (FPGA). We show how we make use of a heavily-pipelined custom hardware implementation to achieve a high throughput of ~3 GB/s with more than a 2x compression ratio over standard compression benchmarks. When compared against a highly-tuned CPU implementation, the performance-per-watt of our OpenCL FPGA implementation is 12x better and the compression ratio is on par. Additionally, we compare our implementation to a hand-coded commercial implementation of Gzip to quantify the gap between a high-level language like OpenCL and a hardware description language like Verilog. OpenCL performance is 5.3% lower than Verilog, and the area is 2% more logic and 25% more of the FPGA's available memory resources, but the productivity gains are significant.

110 citations


Journal ArticleDOI
TL;DR: It is shown that numerical approximations of Kolmogorov complexity (K) of graphs and networks capture some group-theoretic and topological properties of empirical networks, ranging from metabolic to social networks, and of small synthetic networks that are produced.
Abstract: We show that numerical approximations of Kolmogorov complexity ( K ) of graphs and networks capture some group-theoretic and topological properties of empirical networks, ranging from metabolic to social networks, and of small synthetic networks that we have produced. That K and the size of the group of automorphisms of a graph are correlated opens up interesting connections to problems in computational geometry, and thus connects several measures and concepts from complexity science. We derive these results via two different Kolmogorov complexity approximation methods applied to the adjacency matrices of the graphs and networks. The methods used are the traditional lossless compression approach to Kolmogorov complexity, and a normalised version of a Block Decomposition Method (BDM) based on algorithmic probability theory.
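
As a sketch of the Block Decomposition Method (notation mine, and the paper's normalised variant rescales this quantity): the adjacency matrix is partitioned into small blocks whose complexities are looked up in a precomputed CTM table, and repeated blocks contribute only logarithmically in their multiplicity:

    \mathrm{BDM}(G) = \sum_{(r_i, n_i)} \left( \mathrm{CTM}(r_i) + \log_2 n_i \right)

where r_i ranges over the distinct blocks of the adjacency matrix and n_i is the number of times block r_i occurs.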

88 citations


Journal ArticleDOI
TL;DR: A new representation method for multidimensional color images, called an n-qubit normal arbitrary superposition state (NASS), where n qubits represent the colors and coordinates of 2^n pixels, is proposed.

88 citations


Journal ArticleDOI
TL;DR: This paper is a survey of lossy image compression using the Discrete Cosine Transform; it covers the JPEG compression algorithm, which is used for full-colour still image applications, and describes all of its components.
Abstract: Due to the increasing requirements for transmission of images in computer and mobile environments, research in the field of image compression has increased significantly. Image compression plays a crucial role in digital image processing, and it is also very important for efficient transmission and storage of images. When we compute the number of bits per image resulting from typical sampling rates and quantization methods, we find that image compression is needed. Therefore, the development of efficient techniques for image compression has become necessary. This paper is a survey of lossy image compression using the Discrete Cosine Transform; it covers the JPEG compression algorithm, which is used for full-colour still image applications, and describes all of its components.

Journal ArticleDOI
TL;DR: This cTIM scheme is implemented in the range extensions of high-efficiency video coding (HEVC-RExt) as an additional tool of intracoding to complement conventional spatial angular prediction to better exploit the screen content redundancy.
Abstract: This paper presents an advanced screen content coding solution using a color table and index map (cTIM) method. This cTIM scheme is implemented in the range extensions of high-efficiency video coding (HEVC-RExt) as an additional intracoding tool that complements conventional spatial angular prediction to better exploit screen content redundancy. For each coding unit, a number of major colors are selected to form the color table, and the original pixel block is then translated into the corresponding index map. A 1D or hybrid 1D/2D string match scheme is introduced to derive matched pairs of the index map for better compression. Leveraging the color distribution similarity between neighboring image blocks, a color table merge mode is developed so that the table can be carried implicitly. For those blocks whose color table has to be signaled explicitly, intertable color sharing and intratable color differential predictive coding are applied to reduce the signaling overhead. Extensive experiments have been performed and demonstrate significant coding efficiency improvements over conventional HEVC-RExt, resulting in 26%, 18%, and 15% bit rate reductions in the lossless case and 23%, 19%, and 13% Bjontegaard Delta-rate improvements in the lossy case for typical screen content with text and graphics, for the all-intra, random-access, and low-delay-B encoder settings, respectively. A detailed performance study and complexity analysis (as well as comparisons with other algorithms) are also included to evidence the efficiency of the proposed algorithm.
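
A minimal sketch of the color-table/index-map translation step, under the simplifying assumptions that "major colors" are simply the most frequent pixel values of the coding unit and that pixels are treated as scalar values; the HEVC-RExt integration, string matching, table merge, and escape coding are not shown:

    from collections import Counter

    def build_color_table(block_pixels, max_colors=8):
        """Pick the most frequent pixel values of a coding unit as the color table."""
        return [color for color, _ in Counter(block_pixels).most_common(max_colors)]

    def to_index_map(block_pixels, table):
        """Translate each pixel to a palette index; colors outside the table are
        mapped to the nearest table entry in this sketch (no escape coding)."""
        def nearest(p):
            return min(range(len(table)), key=lambda i: abs(table[i] - p))
        return [nearest(p) for p in block_pixels]

    # Toy 4x4 block of screen content: two text colors over a flat background.
    block = [255, 255, 0, 0,
             255, 255, 0, 0,
             128, 128, 0, 0,
             128, 128, 0, 0]
    table = build_color_table(block)
    index_map = to_index_map(block, table)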

Journal ArticleDOI
TL;DR: This paper presents a low-power ECG recording system-on-chip (SoC) with on-chip low-complexity lossless ECG compression for data reduction in wireless/ambulatory ECG sensor devices.
Abstract: This paper presents a low-power ECG recording system-on-chip (SoC) with on-chip low-complexity lossless ECG compression for data reduction in wireless/ambulatory ECG sensor devices. The chip uses a linear slope predictor for data compression and incorporates a novel low-complexity dynamic coding-packaging scheme to frame the prediction error into a fixed-length 16-bit format. The proposed technique achieves an average compression ratio of 2.25× on the MIT/BIH ECG database. Implemented in a standard 0.35 μm process, the compressor uses 0.565 K gates/channel, occupies 0.4 mm² for four channels, and consumes 535 nW/channel at 2.4 V for ECG sampled at 512 Hz. The small size and ultra-low power consumption make the proposed technique suitable for wearable ECG sensor applications.
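
A minimal sketch of linear-slope prediction followed by residual coding, which is the general shape of such a compressor; the exact predictor used on the chip and its fixed-length 16-bit packaging are not reproduced here:

    def slope_predict_residuals(samples):
        """Predict x[n] by linear extrapolation 2*x[n-1] - x[n-2] and keep residuals.
        Smooth ECG segments give small residuals, which is what the subsequent
        coding-packaging stage exploits."""
        residuals = list(samples[:2])            # first two samples stored as-is
        for n in range(2, len(samples)):
            pred = 2 * samples[n - 1] - samples[n - 2]
            residuals.append(samples[n] - pred)
        return residuals

    def reconstruct(residuals):
        """Exact (lossless) inverse of slope_predict_residuals."""
        out = list(residuals[:2])
        for n in range(2, len(residuals)):
            pred = 2 * out[n - 1] - out[n - 2]
            out.append(residuals[n] + pred)
        return out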

Journal ArticleDOI
09 Jul 2014
TL;DR: A comparison of several lossless and lossy data compression algorithms and a discussion of their methodology under the exascale environment, noting an increasing trend of new domain-driven algorithms that exploit the inherent characteristics exhibited in many scientific datasets.
Abstract: While periodic checkpointing has been an important mechanism for tolerating faults in high performance computing (HPC) systems, it is cost-prohibitive as the HPC system approaches exascale. Applying compression techniques is one common way to mitigate such burdens by reducing the data size, but they are often found to be less effective for scientific datasets. Traditional lossless compression techniques that look for repeated patterns are ineffective for scientific data in which high-precision data is used and hence common patterns are rare to find. In this paper, we present a comparison of several lossless and lossy data compression algorithms and discuss their methodology under the exascale environment. As data volume increases, we discover an increasing trend of new domain-driven algorithms that exploit the inherent characteristics exhibited in many scientific datasets, such as relatively small changes in data values from one simulation iteration to the next or among neighboring data. In particular, significant data reduction has been observed in lossy compression. This paper also discusses how the errors introduced by lossy compression are controlled and the tradeoffs with the compression ratio.

Proceedings ArticleDOI
16 Nov 2014
TL;DR: NUMARCK, Northwestern University Machine learning Algorithm for Resiliency and Checkpointing, is proposed, which makes use of the emerging distributions of data changes between consecutive simulation iterations and encodes them into an indexing space that can be concisely represented.
Abstract: Data checkpointing is an important fault tolerance technique in High Performance Computing (HPC) systems. As HPC systems move towards exascale, the storage space and time costs of checkpointing threaten to overwhelm not only the simulation but also the post-simulation data analysis. One common practice to address this problem is to apply compression algorithms to reduce the data size. However, traditional lossless compression techniques that look for repeated patterns are ineffective for scientific data in which high-precision data is used and hence common patterns are rare to find. This paper exploits the fact that in many scientific applications, the relative changes in data values from one simulation iteration to the next are not very significantly different from each other. Thus, capturing the distribution of relative changes in data instead of storing the data itself allows us to incorporate the temporal dimension of the data and learn the evolving distribution of the changes. We show that an order of magnitude data reduction becomes achievable within guaranteed user-defined error bounds for each data point. We propose NUMARCK, Northwestern University Machine learning Algorithm for Resiliency and Checkpointing, which makes use of the emerging distributions of data changes between consecutive simulation iterations and encodes them into an indexing space that can be concisely represented. We evaluate NUMARCK using two production scientific simulations, FLASH and CMIP5, and demonstrate superior performance in terms of compression ratio and compression accuracy. More importantly, our algorithm allows users to specify the maximum tolerable error on a per point basis, while compressing the data by an order of magnitude.
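
A minimal sketch of the core idea of binning relative changes between iterations so that only small bin indices need to be stored; NUMARCK learns the bin boundaries from the observed change distribution, whereas the uniform binning below is an assumption made purely for illustration:

    import numpy as np

    def encode_changes(prev, curr, error_bound=0.01):
        """Quantize the relative change (curr - prev) / prev into uniform bins of
        width 2*error_bound, so each value is recoverable within the user-defined
        relative error bound (absolute where prev is zero)."""
        base = np.where(prev == 0, 1.0, prev)
        ratio = (curr - prev) / base
        bins = np.round(ratio / (2 * error_bound)).astype(np.int32)
        return bins                          # small integers, easy to index and compress

    def decode_changes(prev, bins, error_bound=0.01):
        base = np.where(prev == 0, 1.0, prev)
        return prev + bins * (2 * error_bound) * base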

Journal ArticleDOI
TL;DR: In this study, capacity and security issues of text steganography are addressed by proposing a compression-based approach; Huffman coding is chosen due to its frequent use in the literature and significant compression ratio.
Abstract: In this study, capacity and security issues of text steganography have been considered by proposing a compression-based approach. Because textual data is used in the steganography, the employed data compression algorithm first has to be lossless. Accordingly, Huffman coding has been chosen due to its frequent use in the literature and significant compression ratio. Besides, the proposed method constructs and uses stego-keys in order to increase security. Secret information has been hidden in a text chosen from a previously constructed text base that consists of naturally generated texts. Email has been chosen as the communication channel between the two parties, so the stego cover has been arranged as a forward mail platform. As a result of the performed experiments, the average capacity has been computed as 7.962 % for a secret message with 300 characters (or 300∙8 bits). Finally, a comparison of the proposed method with other contemporary methods in the literature has been carried out in Section 5.

Patent
24 Sep 2014
TL;DR: In this article, a fast near-lossless compression of 3D voxel grids is proposed, which includes four steps: voxelization of the 3D geometry, decomposing the voxel space into consecutive slices, encoding each slice with chain codes, and compressing the chain code with entropy coding.
Abstract: Fast near-lossless compression includes four steps: voxelization of the 3D geometry, decomposing the 3D voxel space into consecutive slices, encoding each slice with chain codes, and compressing the chain code with entropy coding. Decompression works by applying the aforementioned steps in inverse order. Smoothing over the voxels' centers is applied afterwards in order to reconstruct the input 3D points. Optionally, a 3D mesh is reconstructed over the approximate point cloud in order to obtain the original geometric object. The quality of the compression/decompression is controlled by the resolution of the 3D voxel grid.

Proceedings ArticleDOI
19 May 2014
TL;DR: This work presents a framework that optimizes compression and query efficiency by allowing bitmaps to be compressed using variable encoding lengths while still maintaining alignment to avoid explicit decompression.
Abstract: Bitmap indices are widely used for large read-only repositories in data warehouses and scientific databases. Their binary representation allows for the use of bitwise operations and specialized run-length compression techniques. Due to a trade-off between compression and query efficiency, bitmap compression schemes are aligned using a fixed encoding length size (typically the word length) to avoid explicit decompression during query time. In general, smaller encoding lengths provide better compression, but require more decoding during query execution. However, when the difference in size is considerable, it is possible for smaller encodings to also provide better execution time. We posit that a tailored encoding length for each bit vector will provide better performance than a one-size-fits-all approach. We present a framework that optimizes compression and query efficiency by allowing bitmaps to be compressed using variable encoding lengths while still maintaining alignment to avoid explicit decompression. Efficient algorithms are introduced to process queries over bitmaps compressed using different encoding lengths. An input parameter controls the aggressiveness of the compression providing the user with the ability to tune the tradeoff between space and query time. Our empirical study shows this approach achieves significant improvements in terms of both query time and compression ratio for synthetic and real data sets. Compared to 32-bit WAH, VAL-WAH produces up to 1.8× smaller bitmaps and achieves query times that are 30% faster.
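
For context, a minimal sketch of word-aligned run-length coding of a bit vector in the WAH style (31 payload bits per 32-bit word); the variable encoding lengths and the query algorithms of the proposed VAL-WAH scheme are not shown:

    def wah_compress(bits, w=32):
        """Word-Aligned Hybrid style encoding: a literal word stores w-1 raw bits;
        a fill word (top bit set) encodes a run of identical (w-1)-bit groups."""
        payload = w - 1
        bits = bits + [0] * (-len(bits) % payload)        # pad to a whole group
        groups = [tuple(bits[i:i + payload]) for i in range(0, len(bits), payload)]
        words, i = [], 0
        while i < len(groups):
            g = groups[i]
            if all(b == g[0] for b in g):                 # homogeneous group: emit a fill
                run = 1
                while i + run < len(groups) and groups[i + run] == g:
                    run += 1
                words.append((1 << (w - 1)) | (g[0] << (w - 2)) | run)
                i += run
            else:                                         # mixed group: emit a literal
                value = 0
                for b in g:
                    value = (value << 1) | b
                words.append(value)
                i += 1
        return words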

Journal ArticleDOI
TL;DR: One of the transformations, RDgDb, which requires just 2 integer subtractions per image pixel, on average results in the best ratios for JPEG2000 and JPEG XR, while for a specific set, or in the case of JPEG-LS, its compression ratios are either the best or within 0.1 bpp of the best.
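
One plausible reading of RDgDb, consistent with the two-subtraction cost mentioned above (the exact component ordering and signs are my assumption, not taken from the paper): keep the red channel and code the other two as successive differences, which is trivially reversible in integer arithmetic:

    def rdgdb_forward(r, g, b):
        """R is kept; Dg = R - G and Db = G - B cost two integer subtractions."""
        return r, r - g, g - b

    def rdgdb_inverse(r, dg, db):
        g = r - dg
        b = g - db
        return r, g, b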

Journal ArticleDOI
TL;DR: A new lossless color image compression algorithm, based on the hierarchical prediction and context-adaptive arithmetic coding, that enables the use of upper, left, and lower pixels for the pixel prediction, whereas the conventional raster scan prediction methods use upper and left pixels.
Abstract: This paper presents a new lossless color image compression algorithm based on hierarchical prediction and context-adaptive arithmetic coding. For the lossless compression of an RGB image, the image is first decorrelated by a reversible color transform, and the Y component is then encoded by a conventional lossless grayscale image compression method. For encoding the chrominance images, we develop a hierarchical scheme that enables the use of upper, left, and lower pixels for the pixel prediction, whereas conventional raster scan prediction methods use only upper and left pixels. An appropriate context model for the prediction error is also defined, and arithmetic coding is applied to the error signal corresponding to each context. For several sets of images, it is shown that the proposed method further reduces the bit rates compared with JPEG2000 and JPEG-XR.
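
A minimal sketch of the hierarchical idea for a chrominance plane, under my own simplifying assumptions: even rows are coded first by any raster-scan method, and each pixel of an odd row is then predicted from its already-decoded upper, lower, and left neighbors (the weighting and the context modeling of the actual codec are not reproduced):

    def odd_row_residuals(plane):
        """Prediction residuals for odd rows of a 2-D list `plane`, assuming the
        even rows (and previously coded odd-row pixels) are available to the decoder."""
        residuals = []
        for y in range(1, len(plane) - 1, 2):            # odd rows with both neighbors
            row = []
            for x in range(len(plane[y])):
                up, down = plane[y - 1][x], plane[y + 1][x]
                left = plane[y][x - 1] if x > 0 else (up + down) // 2
                pred = (up + down + 2 * left) // 4       # illustrative weighting only
                row.append(plane[y][x] - pred)
            residuals.append(row)
        return residuals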

Journal ArticleDOI
TL;DR: Using various real-world WSN data sets, it is shown that the proposed algorithm significantly outperforms existing popular lossless compression algorithms for WSNs such as LEC and S-LZW.
Abstract: Data compression is a useful technique in the deployments of resource-constrained wireless sensor networks (WSNs) for energy conservation. In this letter, we present a new lossless data compression algorithm in WSNs. Compared to existing WSN data compression algorithms, our proposed algorithm is not only efficient but also highly robust for diverse WSN data sets with very different characteristics. Using various real-world WSN data sets, we show that the proposed algorithm significantly outperforms existing popular lossless compression algorithms for WSNs such as LEC and S-LZW. The robustness of our algorithm has been demonstrated, and the insight is provided. The energy consumption of our devised algorithm is also analyzed.

Proceedings ArticleDOI
01 Jun 2014
TL;DR: A theoretical framework is introduced that can be used to give bounds on solution quality for any perfect-recall extensive-form game, and it is proved that level-by-level abstraction can be too myopic and thus fail to find even obvious lossless abstractions.
Abstract: ion has emerged as a key component in solving extensive-form games of incomplete information. However, lossless abstractions are typically too large to solve, so lossy abstraction is needed. All prior lossy abstraction algorithms for extensive-form games either 1) had no bounds on solution quality or 2) depended on specific equilibrium computation approaches, limited forms of abstraction, and only decreased the number of information sets rather than nodes in the game tree. We introduce a theoretical framework that can be used to give bounds on solution quality for any perfect-recall extensive-form game. The framework uses a new notion for mapping abstract strategies to the original game, and it leverages a new equilibrium refinement for analysis. Using this framework, we develop the first general lossy extensive-form game abstraction method with bounds. Experiments show that it finds a lossless abstraction when one is available and lossy abstractions when smaller abstractions are desired. While our framework can be used for lossy abstraction, it is also a powerful tool for lossless abstraction if we set the bound to zero. Prior abstraction algorithms typically operate level by level in the game tree. We introduce the extensive-form game tree isomorphism and action subset selection problems, both important problems for computing abstractions on a level-by-level basis. We show that the former is graph isomorphism complete, and the latter NP-complete. We also prove that level-by-level abstraction can be too myopic and thus fail to find even obvious lossless abstractions.

Journal ArticleDOI
TL;DR: A JPEG 2000-based codec framework is proposed that provides a generic architecture suitable for the compression of many types of off-axis holograms, and is extended with a JPEG 2000 codec at its core, extended with fully arbitrary wavelet decomposition styles and directional wavelet transforms.
Abstract: With the advent of modern computing and imaging technologies, digital holography is becoming widespread in various scientific disciplines such as microscopy, interferometry, surface shape measurements, vibration analysis, data encoding, and certification. Therefore, designing an efficient data representation technology is of particular importance. Off-axis holograms have very different signal properties with respect to regular imagery, because they represent a recorded interference pattern with its energy biased toward the high-frequency bands. This causes traditional image coders, which assume an underlying 1/f^2 power spectral density distribution, to perform suboptimally for this type of imagery. We propose a JPEG 2000-based codec framework that provides a generic architecture suitable for the compression of many types of off-axis holograms. This framework has a JPEG 2000 codec at its core, extended with (1) fully arbitrary wavelet decomposition styles and (2) directional wavelet transforms. Using this codec, we report significant improvements in coding performance for off-axis holography relative to the conventional JPEG 2000 standard, with Bjontegaard delta-peak signal-to-noise ratio improvements ranging from 1.3 to 11.6 dB for lossy compression in the 0.125 to 2.00 bpp range and bit-rate reductions of up to 1.6 bpp for lossless compression.

Journal ArticleDOI
TL;DR: It is shown that the rate controller has excellent performance in terms of accuracy in the output rate and rate-distortion characteristics, and is extremely competitive with respect to state-of-the-art transform coding.
Abstract: Predictive coding is attractive for compression onboard spacecraft thanks to its low computational complexity, modest memory requirements and the ability to accurately control quality on a pixel-by-pixel basis. Traditionally, predictive compression focused on the lossless and near-lossless modes of operation where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders due to the dependencies between quantization and prediction in the feedback loop, and the lack of a signal representation that packs the signal's energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme intended for onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image so as to achieve the desired target rate while minimizing distortion. The rate control algorithm allows achieving lossy compression, near-lossless compression, and any in-between type of compression, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, in order to show its performance, in this paper we tailor it to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that allows performing lossless, near-lossless and lossy compression in a single package. We show that the rate controller has excellent performance in terms of accuracy in the output rate and rate-distortion characteristics, and is extremely competitive with respect to state-of-the-art transform coding.
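
As a toy illustration of the kind of allocation problem involved (not the algorithm of the paper): given per-region rate/distortion estimates for a set of candidate quantizers, pick one quantizer per region so that the total rate meets the target while distortion stays low. A simple greedy refinement by marginal gain looks like this:

    def allocate_quantizers(regions, target_rate):
        """regions: list of per-region option lists [(rate, distortion), ...],
        each sorted from coarse (low rate) to fine (high rate).
        Start from the coarsest quantizer everywhere, then repeatedly spend the
        remaining rate budget where it buys the largest distortion reduction."""
        choice = [0] * len(regions)
        total_rate = sum(opts[0][0] for opts in regions)
        while True:
            best_gain, best_i = 0.0, None
            for i, opts in enumerate(regions):
                j = choice[i]
                if j + 1 < len(opts):
                    extra = opts[j + 1][0] - opts[j][0]
                    gain = (opts[j][1] - opts[j + 1][1]) / max(extra, 1e-12)
                    if total_rate + extra <= target_rate and gain > best_gain:
                        best_gain, best_i = gain, i
            if best_i is None:
                return choice
            j = choice[best_i]
            total_rate += regions[best_i][j + 1][0] - regions[best_i][j][0]
            choice[best_i] = j + 1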

Journal ArticleDOI
TL;DR: A novel lossless beam-forming network architecture for limited field of view and multi-beam linear arrays is introduced, which is based on the overlapped sub-array technique and presents a number of advantages if compared to previous architectures.
Abstract: A novel lossless beam-forming network (BFN) architecture for limited field of view and multi-beam linear arrays is introduced. The BFN, which is based on the overlapped sub-array technique, presents a number of advantages compared to previous architectures. The most attractive feature is a substantial reduction in the number of control elements and associated active devices (i.e., amplifiers, variable phase shifters, etc.). This advantage is achieved by adopting a lossless topology with reduced implementation complexity, together with a rigorous and analytical design methodology that allows the BFN to be fully defined at the architectural level and the sub-array radiation pattern to be optimized in a least mean square error sense.

Proceedings ArticleDOI
Liwei Guo1, Wei Pu1, Feng Zou1, Joel Sole1, Marta Karczewicz1, Rajan Laxman Joshi1 
01 Oct 2014
TL;DR: Simulation has been performed using the common screen content coding test condition defined by JCT-VC, and the results show that palette coding can effectively improve screen content coding efficiency for both lossless and lossy scenarios.
Abstract: With the prevalence of high speed Internet access, emerging video applications such as remote desktop sharing, virtual desktop infrastructure, and wireless display require high compression efficiency of screen content. However, traditional intra and inter video coding tools were designed primarily for natural content. Screen content has significantly different characteristics compared with natural content, e.g. sharp edges and less or no noise, which makes those traditional coding tools less effective. In this research, a new color palette based video coding tool is presented. Different from traditional intra and inter prediction, which mainly removes redundancy between different coding units, palette coding targets the redundancy of repetitive pixel values/patterns within the coding unit. In the palette coding mode, a lookup table named the palette, which maps pixel values into table indices (also called palette indices), is signaled first. Then the mapped indices for a coding unit (which we call the index block) are coded with a novel three-mode run-length entropy coding. Some encoder-side optimizations for palette coding are also presented in detail in this paper. Simulation has been performed using the common screen content coding test condition defined by JCT-VC, and the results show that palette coding can effectively improve screen content coding efficiency for both lossless and lossy scenarios.

Journal ArticleDOI
TL;DR: This work proposes to endow the lossless compression algorithm (LEC), previously proposed by us in the context of wireless sensor networks, with two simple adaptation schemes relying on the novel concept of appropriately rotating the prefix-free tables, and shows that the adaptation schemes can achieve significant compression efficiencies in all the datasets.
Abstract: Internet of Things (IoT) devices are typically powered by small batteries with a limited capacity. Thus, saving as much power as possible becomes crucial to extend their lifetime and therefore to allow their use in real application domains. Since radio communication is in general the main cause of power consumption, one of the most used approaches to save energy is to limit the transmission/reception of data, for instance, by means of data compression. However, IoT devices are also characterized by limited computational resources, which impose the development of specifically designed algorithms. To this aim, we propose to endow the lossless compression algorithm (LEC), previously proposed by us in the context of wireless sensor networks, with two simple adaptation schemes relying on the novel concept of appropriately rotating the prefix-free tables. We tested the proposed schemes on several datasets collected in several real sensor network deployments by monitoring four different environmental phenomena, namely, air and surface temperatures, solar radiation and relative humidity. We show that the adaptation schemes can achieve significant compression efficiencies on all the datasets. Further, we compare these results with the ones obtained by LEC and, by means of a non-parametric multiple statistical test, we show that the performance improvements introduced by the adaptation schemes are statistically significant.
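
A minimal sketch of the adaptation idea as I read it (the concrete LEC codewords and the actual rotation rule are not reproduced; the sliding-window trigger below is an assumption): the prefix-free table itself is kept fixed, but the mapping between residual groups and codewords is rotated so that the group seen most often recently receives the shortest codeword:

    from collections import Counter, deque

    # Assumed prefix-free table; TABLE[0] is the shortest codeword.
    TABLE = ['00', '01', '10', '110', '1110', '11110', '111110', '1111110']
    recent = deque(maxlen=32)            # sliding window of recently seen groups

    def current_rotation():
        """Rotate so the most frequent recent group maps to the shortest codeword."""
        if not recent:
            return 0
        return Counter(recent).most_common(1)[0][0]

    def encode_group(group):
        """Emit the codeword for a residual's bit-length group under the current
        rotation; a decoder tracking the same window stays in sync."""
        rotation = current_rotation()
        codeword = TABLE[(group - rotation) % len(TABLE)]
        recent.append(group)             # update statistics after coding the symbol
        return codeword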

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a general framework to select quantizers in each spatial and spectral region of an image to achieve the desired target rate while minimizing distortion and showed that the rate controller has excellent performance in terms of accuracy in the output rate, rate-distortion characteristics, and is extremely competitive with respect to state-of-the-art transform coding.
Abstract: Predictive coding is attractive for compression on board spacecraft due to its low computational complexity, modest memory requirements, and the ability to accurately control quality on a pixel-by-pixel basis. Traditionally, predictive compression focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders due to the dependencies between quantization and prediction in the feedback loop and the lack of a signal representation that packs the signal’s energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme intended for onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image to achieve the desired target rate while minimizing distortion. The rate control algorithm allows achieving lossy compression, near-lossless compression, and any in-between type of compression, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, in order to show its performance, in this paper, we tailor it to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that allows performing lossless, near-lossless, and lossy compression in a single package. We show that the rate controller has excellent performance in terms of accuracy in the output rate and rate–distortion characteristics, and is extremely competitive with respect to state-of-the-art transform coding.

Journal ArticleDOI
TL;DR: A dynamic compression scheme is proposed to deal with the challenge of ultralow power and real-time wireless ECG application and has high-energy efficiency, low computational complexity, less resource consumption, and rapid time response.
Abstract: Wireless body sensor network-enabled electrocardiogram (ECG) biosensors are a novel solution for patient-centric telecardiology. With this solution, the prevention and early diagnosis of cardiovascular diseases can be effectively improved. However, the energy efficiency of present wireless ECG biosensors still needs to be improved. In this paper, a dynamic compression scheme is proposed to deal with the challenge of ultralow-power and real-time wireless ECG application. This compression scheme consists of a digital integrate-and-fire sampler and a lossless entropy encoder, which can reduce airtime over energy-hungry wireless links and improve the energy efficiency of the biosensors. The efficiency improvement is evidenced by experiments using the MIT-BIH arrhythmia database on a MICAz node. The lifetime of the MICAz node implementing the dynamic compression scheme can be extended by up to 76.60% with high signal recovery quality. This scheme is also compared with the digital wavelet transform-based and compressed sensing-based compression schemes. All experimental results indicate that the proposed scheme has high energy efficiency, low computational complexity, less resource consumption, and rapid time response.

Patent
30 Oct 2014
TL;DR: In this paper, adaptive compression and decompression for dictionaries of a column-store database can reduce the amount of memory used for columns of the database, allowing a system to keep column data in memory for more columns, while delays for access operations remain acceptable.
Abstract: Innovations for adaptive compression and decompression for dictionaries of a column-store database can reduce the amount of memory used for columns of the database, allowing a system to keep column data in memory for more columns, while delays for access operations remain acceptable. For example, dictionary compression variants use different compression techniques and implementation options. Some dictionary compression variants provide more aggressive compression (reduced memory consumption) but result in slower run-time performance. Other dictionary compression variants provide less aggressive compression (higher memory consumption) but support faster run-time performance. As another example, a compression manager can automatically select a dictionary compression variant for a given column in a column-store database. For different dictionary compression variants, the compression manager predicts run-time performance and compressed dictionary size, given the values of the column, and selects one of the dictionary compression variants.