
Showing papers on "Lossless compression" published in 2016


Proceedings ArticleDOI
23 May 2016
TL;DR: This paper proposes a novel HPC data compression method that works very effectively on compressing large-scale HPC data sets, evaluates it using 13 real-world HPC applications across different scientific domains, and compares it to many other state-of-the-art compression methods.
Abstract: Today's HPC applications are producing extremely large amounts of data, thus it is necessary to use efficient compression before storing them to parallel file systems. In this paper, we optimize error-bounded HPC data compression by proposing a novel HPC data compression method that works very effectively on compressing large-scale HPC data sets. The compression method starts by linearizing multi-dimensional snapshot data. The key idea is to fit/predict the successive data points with the best-fit selection of curve fitting models. The data that can be predicted precisely will be replaced by the code of the corresponding curve-fitting model. As for the unpredictable data that cannot be approximated by curve-fitting models, we perform an optimized lossy compression via a binary representation analysis. We evaluate our proposed solution using 13 real-world HPC applications across different scientific domains, and compare it to many other state-of-the-art compression methods (including Gzip, FPC, ISABELA, NUMARCK, ZFP, FPZIP, etc.). Experiments show that the compression ratio of our compressor ranges from 3.3:1 to 436:1, which is higher than that of the second-best solution, ZFP, by as little as 2x and as much as an order of magnitude for most cases. The compression time of SZ is comparable to that of other solutions, while its decompression time is lower than that of the second-best solution by 50%-90%. On an extreme-scale use case, experiments show that the compression ratio of SZ exceeds that of ZFP by 80%.
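As a rough illustration of the best-fit curve-fitting prediction described in this abstract (a sketch of the idea, not the authors' SZ implementation; the three predictor models, the three-point window, and the error-bound handling are illustrative assumptions), the following Python fragment records which points can be replaced by a small model code and which fall through to the lossy stage:

```python
# Sketch of best-fit curve-fitting prediction for error-bounded compression.
# For brevity it predicts from the original neighbours; a real codec such as
# SZ predicts from the already-reconstructed values so that the error bound
# still holds after decompression.

def predict_with_models(data, err_bound):
    codes = []          # per point: index of the winning model, or None
    unpredictable = []  # values that no model captured within the bound
    for i, x in enumerate(data):
        if i < 3:
            codes.append(None)
            unpredictable.append(x)
            continue
        a, b, c = data[i - 3], data[i - 2], data[i - 1]
        preds = (
            c,                  # model 0: constant (previous value)
            2 * c - b,          # model 1: linear extrapolation
            3 * c - 3 * b + a,  # model 2: quadratic extrapolation
        )
        best = min(range(3), key=lambda k: abs(preds[k] - x))
        if abs(preds[best] - x) <= err_bound:
            codes.append(best)          # point replaced by a short model code
        else:
            codes.append(None)
            unpredictable.append(x)     # handled by the lossy fallback stage
    return codes, unpredictable

codes, rest = predict_with_models([1.0, 1.1, 1.2, 1.3, 1.4, 9.0], err_bound=0.01)
print(codes, rest)
```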

341 citations


Journal ArticleDOI
TL;DR: This paper proposes lossless, reversible, and combined data hiding schemes for ciphertext images encrypted by public-key cryptosystems with probabilistic and homomorphic properties.
Abstract: This paper proposes lossless, reversible, and combined data hiding schemes for ciphertext images encrypted by public-key cryptosystems with probabilistic and homomorphic properties. In the lossless scheme, the ciphertext pixels are replaced with new values to embed the additional data into several least significant bit planes of the ciphertext pixels by multilayer wet paper coding. The embedded data can then be directly extracted from the encrypted domain, and the data-embedding operation does not affect the decryption of the original plaintext image. In the reversible scheme, a preprocessing step is employed to shrink the image histogram before image encryption, so that the modification of encrypted images for data embedding will not cause any pixel oversaturation in the plaintext domain. Although a slight distortion is introduced, the embedded data can be extracted and the original image can be recovered from the directly decrypted image. Due to the compatibility between the lossless and reversible schemes, the data-embedding operations of the two schemes can be performed simultaneously in an encrypted image. With the combined technique, a receiver may extract part of the embedded data before decryption, and extract the remaining embedded data and recover the original plaintext image after decryption.

230 citations


Journal ArticleDOI
TL;DR: Experimental results and security analysis demonstrate that the proposed algorithm has a high security, fast speed and can resist various attacks.

192 citations


Proceedings ArticleDOI
01 Sep 2016
TL;DR: For any type of image, this method performs as well as or better (on average) than any of the existing image formats for lossless compression.
Abstract: We present a novel lossless image compression algorithm. It achieves better compression than popular lossless image formats like PNG and lossless JPEG 2000. Existing image formats have specific strengths and weaknesses: e.g., JPEG works well for photographs, while PNG works well for line drawings or images with few distinct colors. For any type of image, our method performs as well as or better (on average) than any of the existing image formats for lossless compression. Interlacing is improved compared to PNG, making the format suitable for progressive decoding and responsive web design.

116 citations


Journal ArticleDOI
TL;DR: The algorithm employs a discrete cosine transform dictionary to sparsely represent the color image and then combines it with an encryption algorithm based on a hyper-chaotic system to achieve image compression and encryption simultaneously.
Abstract: To address the low security and limited compression performance of existing joint image encryption and compression techniques, an improved algorithm for joint image compression and encryption is proposed. The algorithm employs a discrete cosine transform dictionary to sparsely represent the color image and then combines it with an encryption algorithm based on a hyper-chaotic system to achieve image compression and encryption simultaneously. Experimental analysis shows that the proposed algorithm performs well in terms of both security and compression.
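As a rough sketch of the sparse-representation stage mentioned above (the encryption stage is omitted, and the block size, sparsity level, and helper names are assumptions rather than the paper's code), a block can be approximated by keeping only the largest-magnitude coefficients over an orthonormal DCT basis:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows = frequencies).
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1.0 / np.sqrt(2.0)
    return C * np.sqrt(2.0 / n)

def sparse_dct_approx(block, keep):
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T                     # 2-D DCT of the block
    thresh = np.sort(np.abs(coeffs), axis=None)[-keep]
    sparse = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
    return C.T @ sparse @ C                      # reconstruct from the sparse code

block = np.add.outer(np.arange(8.0), np.arange(8.0))   # smooth 8x8 test block
approx = sparse_dct_approx(block, keep=8)
print(np.abs(approx - block).max())              # small reconstruction error
```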

100 citations


Journal ArticleDOI
01 Mar 2016
TL;DR: A chaos-based image encryption and lossless compression algorithm using a hash table and the Chinese Remainder Theorem is proposed, and simulation results show the high effectiveness and security features of the proposed algorithm.
Abstract: No useful information is leaked out as full encryption is done. A very large key space of 10^195 is achieved. A complex diffusion matrix provides strong sensitivity. A lossless compression ratio of 5:1 is achieved. A chaos-based image encryption and lossless compression algorithm using a hash table and the Chinese Remainder Theorem is proposed. Initially, the Henon map is used to generate the scrambled blocks of the input image. The scrambled blocks undergo a fixed number of iterations, determined by the plain image, using the Arnold cat map. Since a hyper-chaotic system has more complex dynamical characteristics than an ordinary chaotic system, the confused image is further permuted using the index sequence generated by the hyper-chaotic system along with a hash table structure. The permuted image is divided into blocks, and diffusion is carried out either by using the Lorenz equations or by using another complex matrix generated appropriately from the plain image. Along with diffusion, compression is also carried out by the Chinese Remainder Theorem for each block. This encryption algorithm has a high key space, good NPCR and UACI values, and very low correlation among adjacent pixels. Simulation results show the high effectiveness and security features of the proposed algorithm.
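A minimal sketch of how a Henon map could drive the initial scrambling stage (illustrative only; the map parameters, seed, and block handling are assumptions rather than the paper's settings):

```python
# Toy illustration: iterate the Henon map, then use the rank order of the
# chaotic x-values as a permutation for scrambling image blocks.
import numpy as np

def henon_permutation(n, x0=0.1, y0=0.3, a=1.4, b=0.3):
    x, y = x0, y0
    vals = np.empty(n)
    for i in range(n):
        x, y = 1 - a * x * x + y, b * x
        vals[i] = x
    return np.argsort(vals)          # chaotic values -> scrambling order

perm = henon_permutation(16)
blocks = list(range(16))             # stand-ins for 16 image blocks
scrambled = [blocks[p] for p in perm]
print(perm, scrambled)
```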

93 citations


Journal ArticleDOI
01 Aug 2016
TL;DR: This paper initiates work on compressed linear algebra (CLA), in which lightweight database compression techniques are applied to matrices and then linear algebra operations such as matrix-vector multiplication are executed directly on the compressed representations.
Abstract: Large-scale machine learning (ML) algorithms are often iterative, using repeated read-only data access and I/O-bound matrix-vector multiplications to converge to an optimal model. It is crucial for performance to fit the data into single-node or distributed main memory. General-purpose, heavy- and lightweight compression techniques struggle to achieve both good compression ratios and fast decompression speed to enable block-wise uncompressed operations. Hence, we initiate work on compressed linear algebra (CLA), in which lightweight database compression techniques are applied to matrices and then linear algebra operations such as matrix-vector multiplication are executed directly on the compressed representations. We contribute effective column compression schemes, cache-conscious operations, and an efficient sampling-based compression algorithm. Our experiments show that CLA achieves in-memory operations performance close to the uncompressed case and good compression ratios that allow us to fit larger datasets into available memory. We thereby obtain significant end-to-end performance improvements up to 26x or reduced memory requirements.
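To make the idea concrete, here is a toy sketch (an illustration of the general principle, not the paper's column compression schemes or cache-conscious kernels) in which each column is stored as distinct-value groups and a matrix-vector product is computed directly on that compressed form:

```python
import numpy as np

def compress_columns(M):
    # Per column: map each distinct non-zero value to the list of its row indices.
    cols = []
    for j in range(M.shape[1]):
        groups = {}
        for i, val in enumerate(M[:, j]):
            if val != 0.0:
                groups.setdefault(val, []).append(i)
        cols.append(groups)
    return M.shape[0], cols

def matvec_compressed(num_rows, cols, v):
    # y = M v computed without ever materialising the uncompressed matrix.
    y = np.zeros(num_rows)
    for j, groups in enumerate(cols):
        for val, rows in groups.items():
            y[rows] += val * v[j]      # one scaled scatter per distinct value
    return y

M = np.array([[1., 1., 0.], [1., 2., 0.], [1., 2., 3.]])
num_rows, cols = compress_columns(M)
v = np.array([1., 2., 3.])
assert np.allclose(matvec_compressed(num_rows, cols, v), M @ v)
```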

84 citations


Journal ArticleDOI
TL;DR: A low-power 3-lead electrocardiogram (ECG)-on-chip with integrated real-time QRS detection and lossless data compression for wearable wireless ECG sensors allows computational resources to be shared among multiple functions, thus lowering the overall system power.
Abstract: This brief presents the design of a low-power 3-lead electrocardiogram (ECG)-on-chip with integrated real-time QRS detection and lossless data compression for wearable wireless ECG sensors. Data compression and QRS detection can reduce the sensor power by up to 2–5 times. A joint QRS detection and lossless data compression circuit allows computational resources to be shared among multiple functions, thus lowering the overall system power. The proposed technique achieves an average compression ratio of 2.15 on standard test data. The QRS detector achieves a sensitivity (Se) of 99.58% and a positive predictivity (+P) of 99.57% at 256 Hz when tested with the MIT/BIH database. Implemented in a 0.35 μm process, the circuit consumes 0.96 μW at 2.4 V with a core area of 1.56 mm² for two-channel ECG compression and QRS detection. Small size and ultralow-power consumption make the chip suitable for usage in wearable/ambulatory ECG sensors.

78 citations


Journal ArticleDOI
TL;DR: This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks, where the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key; the performance of the LZW compression technique is compared with other conventional compression methods on the basis of compression ratio.
Abstract: In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data is very sensitive and cannot tolerate any illegal change. Analysis based on an illegally changed image could result in a wrong medical decision. Digital watermarking techniques can be used to authenticate images and to detect, as well as recover, illegal changes made to teleradiology images. Watermarking of medical images with heavy-payload watermarks causes perceptual degradation of the image, which directly affects medical diagnosis. To maintain the perceptual and diagnostic quality of the image during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. The lossless compression of the watermark reduces its payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods on the basis of compression ratio. LZW was found to perform better and was used for lossless compression of the watermark in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery.
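For reference, a minimal LZW encoder sketch (a generic textbook version, not the paper's implementation) that could be used to shrink a watermark bit-string before embedding:

```python
def lzw_encode(data: bytes):
    # Dictionary starts with all single-byte strings; longer strings are added
    # as they are seen, and each emitted code stands for the longest known match.
    dictionary = {bytes([i]): i for i in range(256)}
    w, out, next_code = b"", [], 256
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])
            dictionary[wc] = next_code
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

watermark = b"0101010101010101"          # stand-in for ROI + secret-key bits
codes = lzw_encode(watermark)
print(len(codes), "codes for", len(watermark), "input bytes")
```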

74 citations


Journal ArticleDOI
TL;DR: This paper reports on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which climate scientists are challenged to examine features of the data relevant to their interests, and to identify which of the ensemble members have been compressed and reconstructed.
Abstract: High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that applying lossy data compression to climate simulation data is both advantageous in terms of data reduction and generally acceptable in terms of effects on scientific results.

73 citations


Journal ArticleDOI
TL;DR: The proposed JND model can outperform the conventional JND guided compression schemes by providing better visual quality at the same coding bits and outperforms the state-of-the-art schemes in terms of the distortion masking ability.
Abstract: We propose a novel just noticeable difference (JND) model for a screen content image (SCI). The distinct properties of the SCI result in different behaviors of the human visual system when viewing the textual content, which motivate us to employ a local parametric edge model with an adaptive representation of the edge profile in JND modeling. In particular, we decompose each edge profile into its luminance, contrast, and structure, and then evaluate the visibility threshold in different ways. The edge luminance adaptation, contrast masking, and structural distortion sensitivity are studied in subjective experiments, and the final JND model is established based on the edge profile reconstruction with tolerable variations. Extensive experiments are conducted to verify the proposed JND model, which confirm that it is accurate in predicting the JND profile, and outperforms the state-of-the-art schemes in terms of the distortion masking ability. Furthermore, we explore the applicability of the proposed JND model in the scenario of perceptually lossless SCI compression, and experimental results show that the proposed scheme can outperform the conventional JND guided compression schemes by providing better visual quality at the same coding bits.

Posted Content
TL;DR: The Block Decomposition Method (BDM), as discussed by the authors, extends the Coding Theorem Method (CTM), which approximates local estimations of algorithmic complexity.
Abstract: We investigate the properties of a Block Decomposition Method (BDM), which extends the power of a Coding Theorem Method (CTM) that approximates local estimations of algorithmic complexity based upon Solomonoff-Levin's theory of algorithmic probability providing a closer connection to algorithmic complexity than previous attempts based on statistical regularities e.g. as spotted by some popular lossless compression schemes. The strategy behind BDM is to find small computer programs that produce the components of a larger, decomposed object. The set of short computer programs can then be artfully arranged in sequence so as to produce the original object and to estimate an upper bound on the length of the shortest computer program that produces said original object. We show that the method provides efficient estimations of algorithmic complexity but that it performs like Shannon entropy when it loses accuracy. We estimate errors and study the behaviour of BDM for different boundary conditions, all of which are compared and assessed in detail. The measure may be adapted for use with more multi-dimensional objects than strings, objects such as arrays and tensors. To test the measure we demonstrate the power of CTM on low algorithmic-randomness objects that are assigned maximal entropy (e.g. $\pi$) but whose numerical approximations are closer to the theoretical low algorithmic-randomness expectation. We also test the measure on larger objects including dual, isomorphic and cospectral graphs for which we know that algorithmic randomness is low. We also release implementations of the methods in most major programming languages---Wolfram Language (Mathematica), Matlab, R, Perl, Python, Pascal, C++, and Haskell---and a free online algorithmic complexity calculator.
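The aggregation rule behind BDM can be sketched as follows (an illustration under stated assumptions: ctm_lookup is a hypothetical table of precomputed CTM values for small blocks, the numbers in it are made up for the example, and simple fixed-size slicing stands in for the boundary-condition strategies the paper analyses):

```python
import math
from collections import Counter

def bdm(string, block_size, ctm_lookup):
    """BDM(X) = sum over distinct blocks b of CTM(b) + log2(multiplicity of b)."""
    blocks = [string[i:i + block_size] for i in range(0, len(string), block_size)]
    counts = Counter(blocks)
    return sum(ctm_lookup[b] + math.log2(n) for b, n in counts.items())

# Hypothetical CTM values for a few 4-bit blocks (illustrative numbers only).
ctm_lookup = {"0000": 9.0, "1111": 9.0, "0101": 12.5, "0110": 13.1}
print(bdm("0000" * 8, 4, ctm_lookup))            # highly repetitive, low estimate
print(bdm("0101" + "0110" * 7, 4, ctm_lookup))   # more distinct blocks, higher estimate
```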

Journal ArticleDOI
TL;DR: A framework for lossless image compression based on the integer DTT is proposed; results show that the iDTT algorithm not only achieves a higher compression ratio than the iDCT method but is also compatible with the widely used JPEG standard.

Journal ArticleDOI
TL;DR: It is shown that for some modern coders that take HVS into consideration it is possible to give practical recommendations on setting a fixed PCC to provide a desired visual quality in a non-iterative manner.
Abstract: The problem of how to automatically provide a desired (required) visual quality in lossy compression of still images and video frames is considered in this paper. The quality can be measured based on different conventional and visual quality metrics. In this paper, we mainly employ human visual system (HVS) based metrics PSNR-HVS-M and MSSIM since both of them take into account several important peculiarities of HVS. To provide a desired visual quality with high accuracy, iterative image compression procedures are proposed and analyzed. An experimental study is performed for a large number of grayscale test images. We demonstrate that there exist several coders for which the number of iterations can be essentially decreased using a reasonable selection of the starting value and the variation interval for the parameter controlling compression (PCC). PCC values attained at the end of the iterative procedure may heavily depend upon the coder used and the complexity of the image. Similarly, the compression ratio also considerably depends on the above factors. We show that for some modern coders that take HVS into consideration it is possible to give practical recommendations on setting a fixed PCC to provide a desired visual quality in a non-iterative manner. The case when original images are corrupted by visible noise is also briefly studied.
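A generic sketch of the iterative procedure described above, phrased as a bisection search over the parameter controlling compression (PCC). The callables compress_decompress and visual_quality are placeholders (assumptions) for the coder under test and a metric such as PSNR-HVS-M or MSSIM; monotonicity of quality in the PCC and the starting interval are also assumptions:

```python
def tune_pcc(original, target, compress_decompress, visual_quality,
             lo=1.0, hi=100.0, tol=0.1, max_iters=12):
    """Find a PCC value whose resulting visual quality is close to `target`."""
    for _ in range(max_iters):
        mid = 0.5 * (lo + hi)
        quality = visual_quality(original, compress_decompress(mid))
        if abs(quality - target) <= tol:
            return mid
        # Assume quality increases monotonically with the PCC value.
        if quality < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A good choice of the starting interval (lo, hi) per coder is exactly what the paper reports as reducing the number of iterations needed.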

Journal ArticleDOI
TL;DR: A low-complexity field-programmable gate array (FPGA) implementation of the recent CCSDS 123 standard for multispectral and hyperspectral image (MHI) compression is presented, which demonstrates its main features in terms of compression efficiency and suitability for implementation on the available on-board technologies.
Abstract: An efficient compression of hyperspectral images on-board satellites is mandatory in current and future space missions in order to save bandwidth and storage space. Reducing the data volume in space is a challenge that has been faced with a twofold approach: to propose new highly efficient compression algorithms; and to present technologies and strategies to execute the compression in the hardware available on-board. The Consultative Committee for Space Data Systems (CCSDS), a consortium of the major space agencies in the world, has recently issued the CCSDS 123 standard for multispectral and hyperspectral image (MHI) compression, with the aim of facilitating the inclusion of on-board compression on satellites by the space industry. In this paper, we present a low-complexity field-programmable gate array (FPGA) implementation of this recent CCSDS 123 standard, which demonstrates its main features in terms of compression efficiency and suitability for an implementation on the available on-board technologies. A hardware architecture is conceived and designed with the aim of achieving low hardware occupancy and high performance on a space-qualified FPGA from the Microsemi RTAX family. The resulting FPGA implementation is therefore suitable for on-board compression. The effect of several CCSDS-123 configuration parameters on the compression efficiency and hardware complexity is taken into consideration to provide flexibility in such a way that the implementation can be adapted to different application scenarios. Synthesis results show a very low occupancy of 34% and a maximum frequency of 43 MHz on a space-qualified RTAX1000S. The benefits of the proposed implementation are further evidenced by a demonstrator, which is implemented on a commercial prototyping board from Xilinx. Finally, a comparison with other FPGA implementations of on-board data compression algorithms is provided.

Proceedings Article
22 Feb 2016
TL;DR: The results show that the proposed design solution can largely reduce the write stress on SLC-mode flash memory pages without significant latency overhead and meanwhile incurs relatively small silicon implementation cost.
Abstract: Inside modern SSDs, a small portion of MLC/TLC NAND flash memory blocks operate in SLC-mode to serve as write buffer/cache and/or store hot data. These SLC-mode blocks absorb a large percentage of write operations. To balance memory wear-out, such MLC/TLC-to-SLC configuration rotates among all the memory blocks inside SSDs. This paper presents a simple yet effective design approach to reduce write stress on SLC-mode flash blocks and hence improve the overall SSD lifetime. The key is to implement well-known delta compression without being subject to the read latency and data management complexity penalties inherent to conventional practice. The underlying theme is to leverage the partial programmability of SLC-mode flash memory pages to ensure that the original data and all the subsequent deltas always reside in the same memory physical page. To avoid the storage capacity overhead, we further propose to combine intra-sector lossless data compression with intra-page delta compression, leading to opportunistic in-place delta compression. This paper presents specific techniques to address important issues for its practical implementation, including data error correction, and intra-page data placement and management. We carried out comprehensive experiments, simulations, and ASIC (application-specific integrated circuit) design. The results show that the proposed design solution can largely reduce the write stress on SLC-mode flash memory pages without significant latency overhead and meanwhile incurs relatively small silicon implementation cost.
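A toy sketch of the delta-compression idea discussed above (illustrative only: the XOR delta, the zlib codec, and the 4 KB sector size are assumptions; the paper's contribution concerns where the compressed delta is placed inside the same SLC-mode flash page and how errors are corrected, which is not modelled here):

```python
import zlib

def make_delta(original: bytes, updated: bytes) -> bytes:
    # XOR of old and new content is mostly zeros for small updates,
    # so it compresses extremely well.
    delta = bytes(a ^ b for a, b in zip(original, updated))
    return zlib.compress(delta, 9)

def apply_delta(original: bytes, packed: bytes) -> bytes:
    delta = zlib.decompress(packed)
    return bytes(a ^ b for a, b in zip(original, delta))

sector = bytes(4096)                       # originally stored sector
update = bytes(4095) + b"\x01"             # rewrite with a one-byte change
packed = make_delta(sector, update)
assert apply_delta(sector, packed) == update
print(len(packed), "bytes stored instead of", len(update))
```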

Journal ArticleDOI
TL;DR: A robust, secure, and lossless digital image watermarking scheme based on DWT and DCT is presented, which embeds watermarks such as the patient's name, disease name, hospital name, and doctor's signature into the original medical image and preserves patient privacy.

Journal ArticleDOI
TL;DR: This paper proposes a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information, and jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains.
Abstract: The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared with the JPEG coded image collections, our method achieves average bit savings of more than 31%.

01 Jul 2016
TL;DR: This specification defines a lossless compressed data format that compresses data using a combination of the LZ77 algorithm and Huffman coding, with efficiency comparable to the best currently available general-purpose compression methods.
Abstract: This specification defines a lossless compressed data format that compresses data using a combination of the LZ77 algorithm and Huffman coding, with efficiency comparable to the best currently available general-purpose compression methods.
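The format described here is DEFLATE (LZ77 plus Huffman coding). As a quick usage illustration, Python's standard zlib module can produce and consume a raw DEFLATE stream when a negative wbits value is used to omit the zlib header and trailer:

```python
import zlib

data = b"lossless compression " * 100

# Raw DEFLATE stream (no zlib wrapper) at maximum compression level.
co = zlib.compressobj(level=9, method=zlib.DEFLATED, wbits=-15)
deflated = co.compress(data) + co.flush()

inflated = zlib.decompress(deflated, wbits=-15)
assert inflated == data
print(f"{len(data)} bytes -> {len(deflated)} bytes")
```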

Proceedings ArticleDOI
01 Aug 2016
TL;DR: In this article, the authors propose two new techniques to increase the degree of parallelism during decompression, which can exploit the massive parallelism of modern multi-core processors and GPUs for data decompression within a block.
Abstract: Today's exponentially increasing data volumes and the high cost of storage make compression essential for the Big Data industry. Although research has concentrated on efficient compression, fast decompression is critical for analytics queries that repeatedly read compressed data. While decompression can be parallelized somewhat by assigning each data block to a different process, break-through speed-ups require exploiting the massive parallelism of modern multi-core processors and GPUs for data decompression within a block. We propose two new techniques to increase the degree of parallelism during decompression. The first technique exploits the massive parallelism of GPU and SIMD architectures. The second sacrifices some compression efficiency to eliminate data dependencies that limit parallelism during decompression. We evaluate these techniques on the decompressor of the DEFLATE scheme, called Inflate, which is based on LZ77 compression and Huffman encoding. We achieve a 2× speed-up in a head-to-head comparison with several multi-core CPU-based libraries, while achieving a 17% energy saving with comparable compression ratios.
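As a coarse sketch of the baseline block-level parallelism this abstract starts from (each independently compressed block handed to a separate worker; the paper's actual contribution, intra-block SIMD/GPU parallelism, is not shown, and the block size and zlib codec are assumptions):

```python
import zlib
from concurrent.futures import ProcessPoolExecutor

def compress_blocks(data, block_size=1 << 16):
    # Compress each block independently so blocks can be decompressed in parallel.
    return [zlib.compress(data[i:i + block_size])
            for i in range(0, len(data), block_size)]

def parallel_decompress(blocks):
    with ProcessPoolExecutor() as pool:
        return b"".join(pool.map(zlib.decompress, blocks))

if __name__ == "__main__":
    payload = b"big data " * 200000
    assert parallel_decompress(compress_blocks(payload)) == payload
```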

Proceedings ArticleDOI
01 Mar 2016
TL;DR: In this article, the average status update age is derived from the exponential bound on the probability of error in streaming source coding with delay, and an age optimal block coding scheme is proposed based on an approximation of the average age by converting the streaming source decoding system into a D/G/1 queue.
Abstract: We examine lossless data compression from an average delay perspective. An encoder receives input symbols one per unit time from an i.i.d. source and submits binary codewords to a FIFO buffer that transmits bits at a fixed rate to a receiver/decoder. Each input symbol at the encoder is viewed as a status update by the source and the system performance is characterized by the status update age, defined as the number of time units (symbols) the decoder output lags behind the encoder input. An upper bound on the average status age is derived from the exponential bound on the probability of error in streaming source coding with delay. Apart from the influence of the error exponent that describes the convergence of the error, this upper bound also scales with the constant multiplier term in the error probability. However, the error exponent does not lead to an accurate description of the status age for small delay and small blocklength. An age optimal block coding scheme is proposed based on an approximation of the average age by converting the streaming source coding system into a D/G/1 queue. We compare this scheme to the error exponent optimal coding scheme which uses the method of types. We show that maximizing the error exponent is not equivalent to minimizing the average status age.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: Recommendations are made for the use of compression techniques in the construction of onboard systems for remote sensing, together with the major advantages and disadvantages of image compression methods in terms of FPGA implementation.
Abstract: A review of the main methods for lossless image compression that can be used in remote sensing tasks is given. The major advantages and disadvantages of image compression methods in terms of implementation on FPGAs are shown. Recommendations are made for the use of compression techniques in the construction of onboard systems for remote sensing.

Journal ArticleDOI
TL;DR: A high-throughput, memory-efficient pipelined architecture for a Fast Efficient Set Partitioning in Hierarchical Trees (SPIHT) image compression system is described; it attains high PSNR and compression ratio values and produces a highly accurate image after decompression.
Abstract: In this research paper, a high-throughput, memory-efficient pipelined architecture for a Fast Efficient Set Partitioning in Hierarchical Trees (SPIHT) image compression system is described. The main aim of this paper is to compress and implement the image without any loss of information. A spatial-oriented tree approach is used in the Fast Efficient SPIHT algorithm for compression, and a Spartan 3 EDK kit is used for the hardware implementation analysis. An integer wavelet transform is used for the encoding and decoding process in the SPIHT algorithm. A pipelined architecture is used for the FPGA implementation because it is well suited to hardware utilization. An image file generally occupies a large amount of memory; the proposed approach reduces the memory size with no loss during transmission. In this way, higher PSNR and compression ratio (CR) values are attained, and a highly accurate image is produced after decompression, compared with the results of previous algorithms. The hardware tools used are a dual-core processor and an FPGA Spartan 3 EDK kit; the software environment is the Windows 8 operating system with MATLAB 7.8.

Journal ArticleDOI
TL;DR: This paper improves predictive lossy compression in several ways, using a standard issued by the Consultative Committee for Space Data Systems, namely CCSDS-123, as an example of application, and proposes a constant-signal-to-noise-ratio algorithm that bounds the maximum relative error between each pixel of the reconstructed image and the corresponding pixel of the original image.
Abstract: Predictive lossy compression has been shown to represent a very flexible framework for lossless and lossy onboard compression of multispectral and hyperspectral images with quality and rate control. In this paper, we improve predictive lossy compression in several ways, using a standard issued by the Consultative Committee for Space Data Systems, namely CCSDS-123, as an example of application. First, exploiting the flexibility in the error control process, we propose a constant-signal-to-noise-ratio algorithm that bounds the maximum relative error between each pixel of the reconstructed image and the corresponding pixel of the original image. This is very useful to avoid low-energy areas of the image being affected by large errors. Second, we propose a new rate control algorithm that has very low complexity and provides performance equal to or better than existing work. Third, we investigate several entropy coding schemes that can speed up the hardware implementation of the algorithm and, at the same time, improve coding efficiency. These advances make predictive lossy compression an extremely appealing framework for onboard systems due to its simplicity, flexibility, and coding efficiency.
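One simple way to realise a bound on the maximum relative error of every reconstructed sample, as in the constant-signal-to-noise-ratio mode described above, is to quantize uniformly in the log domain. The following sketch illustrates the principle only; it is not the CCSDS-123 near-lossless machinery, and it assumes non-zero samples:

```python
import math

def quantize_relative(x, rel_err):
    # Uniform quantization of log|x| with step log(1 + rel_err) keeps the
    # relative reconstruction error conservatively within rel_err.
    step = math.log1p(rel_err)
    q = round(math.log(abs(x)) / step)        # integer index to be entropy-coded
    return math.copysign(math.exp(q * step), x)

x = 137.0
xr = quantize_relative(x, rel_err=0.01)
assert abs(xr - x) / abs(x) <= 0.01
print(x, "->", xr)
```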

Book
25 May 2016
TL;DR: This book describes recent results in the application of universal codes to prediction and the statistical analysis of time series, including attacks on block ciphers and a homogeneity test used to determine authorship of literary texts.
Abstract: Universal codes efficiently compress sequences generated by stationary and ergodic sources with unknown statistics, and they were originally designed for lossless data compression. In the meantime, it was realized that they can be used for solving important problems of prediction and statistical analysis of time series, and this book describes recent results in this area. The first chapter introduces and describes the application of universal codes to prediction and the statistical analysis of time series; the second chapter describes applications of selected statistical methods to cryptography, including attacks on block ciphers; and the third chapter describes a homogeneity test used to determine authorship of literary texts. The book will be useful for researchers and advanced students in information theory, mathematical statistics, time-series analysis, and cryptography. It is assumed that the reader has some grounding in statistics and in information theory.

Posted Content
TL;DR: In this article, the authors propose two new techniques to increase the degree of parallelism during decompression of DEFLATE, which is based on LZ77 compression and Huffman encoding and achieves a 2X speedup in a head-to-head comparison with several multi-core CPU-based libraries.
Abstract: Today's exponentially increasing data volumes and the high cost of storage make compression essential for the Big Data industry. Although research has concentrated on efficient compression, fast decompression is critical for analytics queries that repeatedly read compressed data. While decompression can be parallelized somewhat by assigning each data block to a different process, break-through speed-ups require exploiting the massive parallelism of modern multi-core processors and GPUs for data decompression within a block. We propose two new techniques to increase the degree of parallelism during decompression. The first technique exploits the massive parallelism of GPU and SIMD architectures. The second sacrifices some compression efficiency to eliminate data dependencies that limit parallelism during decompression. We evaluate these techniques on the decompressor of the DEFLATE scheme, called Inflate, which is based on LZ77 compression and Huffman encoding. We achieve a 2X speed-up in a head-to-head comparison with several multi-core CPU-based libraries, while achieving a 17% energy saving with comparable compression ratios.

Journal ArticleDOI
TL;DR: Compared with existing methods, experiments show the feasibility and efficiency of the proposed method, especially in terms of embedding capacity, embedding quality, and error-free recovery with increasing payload.
Abstract: In this paper, a novel reversible data hiding algorithm for encrypted images is proposed. In the encryption phase, a chaotic sequence is applied to encrypt the original image. Then the least significant bits (LSBs) of pixels in the encrypted image are losslessly compressed to leave room for secret data. With an auxiliary bit stream, the lossless compression is realized by the Hamming distance calculation between the LSB stream and the auxiliary stream. At the receiving terminal, the operation is flexible; that is, it meets the requirement of separability. With the decryption key, a receiver can get access to the marked decrypted image, which is similar to the original one. With the data-hiding key, the receiver can successfully extract secret data from the marked encrypted image. With both keys, the receiver can obtain the secret data and exactly recover the original image. Compared with existing methods, experiments show the feasibility and efficiency of the proposed method, especially in terms of embedding capacity, embedding quality, and error-free recovery with increasing payload.

Journal ArticleDOI
TL;DR: Several methods for preprocessing of phasor angles are presented, including a new method, frequency compensated difference encoding, that is able to significantly reduce angle data entropy, together with an entropy encoder based on Golomb-Rice codes that is well suited to high-throughput signal compression.
Abstract: Phasor measurement units (PMUs) are being increasingly deployed to improve monitoring and control of the power grid due to their improved data synchronization and reporting rates in comparison with legacy metering devices. However, one drawback of their higher data rates is the associated increase in bandwidth (for transmission) and storage requirements (for data archives). Fortunately, typical grid behavior can lead to significant compression opportunities for phasor angle measurements. For example, operation of the grid at near-nominal frequency results in small changes in phase angles between frames, and the similarity in frequencies throughout the system results in a high level of correlation between phasor angles of different PMUs. This paper presents several methods for preprocessing of phasor angles that take advantage of these system characteristics, including a new method—frequency compensated difference encoding—that is able to significantly reduce angle data entropy. After the preprocessor stage, the signal is input to an entropy encoder, based on Golomb–Rice codes, that is ideal for high-throughput signal compression. The ability of the proposed methods to compress phase angles is demonstrated using a large corpus of data—over 1 billion phasor angles from 25 data sets—captured during typical and atypical grid conditions.
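A minimal sketch of the two-stage pipeline described above, using plain first differences of the angles (rather than the paper's frequency-compensated variant) followed by Golomb-Rice coding of zigzag-mapped residuals; the fixed Rice parameter k and the integer centi-degree scaling are illustrative assumptions:

```python
def zigzag(n):
    # Map signed residuals to unsigned: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    return 2 * n if n >= 0 else -2 * n - 1

def rice_encode(value, k):
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")   # unary quotient + k-bit remainder

def encode_angles(angles, k=3):
    bits, prev = [], angles[0]       # the first angle would be sent verbatim
    for a in angles[1:]:
        bits.append(rice_encode(zigzag(a - prev), k))
        prev = a
    return "".join(bits)

# Angles in centi-degrees; near-nominal frequency keeps the differences small.
print(encode_angles([1200, 1203, 1201, 1206]))
```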

Journal ArticleDOI
01 Jan 2016 - Optik
TL;DR: A block-based lossless image compression algorithm using the Hadamard transform and Huffman encoding is presented; it is a simple, low-complexity algorithm that yields better results in terms of compression ratio than existing lossless compression algorithms such as JPEG 2000.

Journal ArticleDOI
TL;DR: The purpose of image compression is to decrease the redundancy and irrelevance of image data so that the data can be recorded or sent in an efficient form, which decreases the transmission time in the network and raises the transmission speed.
Abstract: Image compression is an application of data compression that encodes the actual image with fewer bits. The purpose of image compression is to decrease the redundancy and irrelevance of image data so that the data can be recorded or sent in an efficient form; image compression therefore decreases the transmission time in the network and raises the transmission speed. In the lossless technique of image compression, no data are lost during compression. Two questions arise: how to perform image compression, and which type of technology to use. For this reason, two types of approaches are commonly explained, called the lossless and lossy image compression approaches. These techniques are easy to apply and consume very little memory. An algorithm has also been introduced and applied to compress images and to decompress them back using the Huffman encoding technique.
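For illustration, a minimal Huffman code construction in Python (a generic textbook sketch rather than the algorithm used in the paper) showing how symbol frequencies map to variable-length prefix codes:

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes):
    # Each heap entry: [total frequency, tie-breaker, {symbol: code-so-far}].
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, [f1 + f2, counter, merged])
        counter += 1
    return heap[0][2]

data = b"abracadabra"
codes = huffman_codes(data)
encoded = "".join(codes[b] for b in data)
print(codes, len(encoded), "bits instead of", 8 * len(data))
```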