
Showing papers on "Lossless compression published in 2011"


Journal ArticleDOI
TL;DR: The biomimetic CMOS dynamic vision and image sensor described in this paper is based on a QVGA array of fully autonomous pixels containing event-based change detection and pulse-width-modulation imaging circuitry, which ideally results in lossless video compression through complete temporal redundancy suppression at the pixel level.
Abstract: The biomimetic CMOS dynamic vision and image sensor described in this paper is based on a QVGA (304×240) array of fully autonomous pixels containing event-based change detection and pulse-width-modulation (PWM) imaging circuitry. Exposure measurements are initiated and carried out locally by the individual pixel that has detected a change of brightness in its field-of-view. Pixels do not rely on external timing signals and independently and asynchronously request access to an (asynchronous arbitrated) output channel when they have new grayscale values to communicate. Pixels that are not stimulated visually do not produce output. The visual information acquired from the scene, temporal contrast and grayscale data, are communicated in the form of asynchronous address-events (AER), with the grayscale values being encoded in inter-event intervals. The pixel-autonomous and massively parallel operation ideally results in lossless video compression through complete temporal redundancy suppression at the pixel level. Compression factors depend on scene activity and peak at ~1000 for static scenes. Due to the time-based encoding of the illumination information, very high dynamic range - intra-scene DR of 143 dB static and 125 dB at 30 fps equivalent temporal resolution - is achieved. A novel time-domain correlated double sampling (TCDS) method yields array FPN of 56 dB (9.3 bit) for >10 Lx illuminance.
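The grayscale readout described above is time-based: a pixel's exposure is communicated as the interval between two address-events, with brighter pixels producing shorter intervals. The sketch below is a minimal decoding illustration only, assuming intensity is inversely proportional to the inter-event interval; the event tuple layout and the constant k are illustrative placeholders, not the sensor's actual protocol.

```python
# Minimal sketch: recovering grayscale from inter-event intervals, assuming
# intensity ~ 1 / (interval between the two PWM events of one exposure).

def decode_grayscale(events, k=1.0):
    """events: list of (timestamp_us, x, y, kind) tuples, where kind is
    'start' or 'end' for the two PWM events of one exposure measurement."""
    start = {}   # (x, y) -> timestamp of the first PWM event
    gray = {}    # (x, y) -> reconstructed intensity estimate
    for t, x, y, kind in events:
        if kind == "start":
            start[(x, y)] = t
        elif (x, y) in start:
            dt = t - start.pop((x, y))      # inter-event interval
            if dt > 0:
                gray[(x, y)] = k / dt       # assumed inverse mapping
    return gray

# Example: a bright pixel (short interval) vs. a dark pixel (long interval).
events = [(0, 1, 1, "start"), (10, 1, 1, "end"),
          (0, 2, 2, "start"), (200, 2, 2, "end")]
print(decode_grayscale(events, k=1000.0))   # {(1, 1): 100.0, (2, 2): 5.0}
```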

632 citations


Journal ArticleDOI
TL;DR: These anti-forensic techniques are capable of removing forensically detectable traces of image compression without significantly impacting an image's visual quality and can be used to render several forms of image tampering such as double JPEG compression, cut-and-paste image forgery, and image origin falsification undetectable through compression-history-based forensic means.
Abstract: As society has become increasingly reliant upon digital images to communicate visual information, a number of forensic techniques have been developed to verify the authenticity of digital images. Amongst the most successful of these are techniques that make use of an image's compression history and its associated compression fingerprints. Little consideration has been given, however, to anti-forensic techniques capable of fooling forensic algorithms. In this paper, we present a set of anti-forensic techniques designed to remove forensically significant indicators of compression from an image. We do this by first developing a generalized framework for the design of anti-forensic techniques to remove compression fingerprints from an image's transform coefficients. This framework operates by estimating the distribution of an image's transform coefficients before compression, then adding anti-forensic dither to the transform coefficients of a compressed image so that their distribution matches the estimated one. We then use this framework to develop anti-forensic techniques specifically targeted at erasing compression fingerprints left by both JPEG and wavelet-based coders. Additionally, we propose a technique to remove statistical traces of the blocking artifacts left by image compression algorithms that divide an image into segments during processing. Through a series of experiments, we demonstrate that our anti-forensic techniques are capable of removing forensically detectable traces of image compression without significantly impacting an image's visual quality. Furthermore, we show how these techniques can be used to render several forms of image tampering such as double JPEG compression, cut-and-paste image forgery, and image origin falsification undetectable through compression-history-based forensic means.
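The core idea of the framework is easy to illustrate: quantized transform coefficients cluster at integer multiples of the quantization step, and the anti-forensic dither spreads each coefficient back across its quantization bin according to an estimated pre-compression model. The sketch below is an illustrative reconstruction of that step only, assuming a zero-mean Laplacian coefficient model with known scale b and step q; it is not the authors' exact estimator or parameterization.

```python
import numpy as np

def laplace_cdf(x, b):
    x = np.asarray(x, dtype=float)
    return np.where(x < 0, 0.5 * np.exp(x / b), 1.0 - 0.5 * np.exp(-x / b))

def laplace_icdf(u, b):
    return np.where(u < 0.5, b * np.log(2.0 * u), -b * np.log(2.0 * (1.0 - u)))

def add_antiforensic_dither(coeffs, q, b, seed=0):
    """coeffs: dequantized AC coefficients (integer multiples of the step q).
    Redistributes each coefficient across its quantization bin so the result
    follows the assumed zero-mean Laplacian with scale b."""
    rng = np.random.default_rng(seed)
    coeffs = np.asarray(coeffs, dtype=float)
    lo, hi = coeffs - q / 2.0, coeffs + q / 2.0
    u = rng.uniform(laplace_cdf(lo, b), laplace_cdf(hi, b))  # truncated to the bin
    u = np.clip(u, 1e-12, 1.0 - 1e-12)
    return laplace_icdf(u, b)

# Example: quantize synthetic Laplacian coefficients, then dither them back.
rng = np.random.default_rng(1)
orig = rng.laplace(scale=6.0, size=10000)
q = 8.0
quantized = q * np.round(orig / q)          # comb-shaped histogram (the fingerprint)
dithered = add_antiforensic_dither(quantized, q=q, b=6.0)
```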

214 citations


Book ChapterDOI
29 Aug 2011
TL;DR: This work proposes an effective method for In-situ Sort-And-B-spline Error-bounded Lossy Abatement (ISABELA) of scientific data that is widely regarded as effectively incompressible and significantly outperforms existing lossy compression methods, such as Wavelet compression.
Abstract: Modern large-scale scientific simulations running on HPC systems generate data in the order of terabytes during a single run. To lessen the I/O load during a simulation run, scientists are forced to capture data infrequently, thereby making data collection an inherently lossy process. Yet, lossless compression techniques are hardly suitable for scientific data due to its inherently random nature; for the applications used here, they offer less than 10% compression rate. They also impose significant overhead during decompression, making them unsuitable for data analysis and visualization that require repeated data access. To address this problem, we propose an effective method for In-situ Sort-And-B-spline Error-bounded Lossy Abatement (ISABELA) of scientific data that is widely regarded as effectively incompressible. With ISABELA, we apply a preconditioner to seemingly random and noisy data along spatial resolution to achieve an accurate fitting model that guarantees a ≥ 0.99 correlation with the original data. We further take advantage of temporal patterns in scientific data to compress data by ≈ 85%, while introducing only a negligible overhead on simulations in terms of runtime. ISABELA significantly outperforms existing lossy compression methods, such as Wavelet compression. Moreover, besides being a communication-free and scalable compression technique, ISABELA is an inherently local decompression method, namely it does not decode the entire data, making it attractive for random access.
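ISABELA's preconditioner is sorting: a window of seemingly random values becomes a smooth monotone curve that a low-order B-spline fits accurately, so only the spline coefficients and the sort permutation need to be stored. The sketch below is a minimal illustration of that idea using SciPy's spline fitting; the window size, knot count, and the omission of index compression and error-bound enforcement are simplifications, not the paper's tuned configuration.

```python
import numpy as np
from scipy.interpolate import splrep, splev

def isabela_window_encode(window, n_knots=16):
    """Sort a window of noisy values, fit a cubic B-spline to the monotone
    sorted curve, and keep the permutation + spline as the representation
    (index storage costs are ignored in this sketch)."""
    order = np.argsort(window)                 # permutation needed to undo the sort
    sorted_vals = window[order]
    x = np.arange(len(window), dtype=float)
    knots = np.linspace(x[1], x[-2], n_knots)  # interior knots
    tck = splrep(x, sorted_vals, t=knots, k=3)
    return order, tck

def isabela_window_decode(order, tck, n):
    approx_sorted = splev(np.arange(n, dtype=float), tck)
    out = np.empty(n)
    out[order] = approx_sorted                 # undo the sort
    return out

rng = np.random.default_rng(1)
w = rng.normal(size=1024)
order, tck = isabela_window_encode(w)
w_hat = isabela_window_decode(order, tck, len(w))
print(np.corrcoef(w, w_hat)[0, 1])             # correlation close to 0.99+
```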

174 citations


Journal ArticleDOI
TL;DR: The higher the compression ratio and the smoother the original image, the better the quality of the reconstructed image.
Abstract: This work proposes a novel scheme for lossy compression of an encrypted image with flexible compression ratio. A pseudorandom permutation is used to encrypt an original image, and the encrypted data are efficiently compressed by discarding the excessively rough and fine information of coefficients generated from orthogonal transform. After receiving the compressed data, with the aid of spatial correlation in natural image, a receiver can reconstruct the principal content of the original image by iteratively updating the values of coefficients. This way, the higher the compression ratio and the smoother the original image, the better the quality of the reconstructed image.

172 citations


Journal ArticleDOI
TL;DR: A novel framework for LDE is developed by incorporating the merits from the generalized statistical quantity histogram (GSQH) and the histogram-based embedding and is secure for copyright protection because of the safe storage and transmission of side information.
Abstract: Histogram-based lossless data embedding (LDE) has been recognized as an effective and efficient way to protect the copyright of multimedia. Recently, an LDE method using the statistical quantity histogram has achieved good performance; it utilizes the similarity of the arithmetic average of difference histograms (AADH) to reduce the diversity of images and ensure stable LDE performance. However, this method depends strongly on several assumptions, which limits its applications in practice. In addition, the capacities of images with a flat AADH, e.g., texture images, are relatively low. To address these issues, we develop a novel framework for LDE by incorporating the merits of the generalized statistical quantity histogram (GSQH) and histogram-based embedding. Algorithmically, we design the GSQH-driven LDE framework carefully so that it: (1) utilizes the similarity and sparsity of the GSQH to construct an efficient embedding carrier, leading to a general and stable framework; (2) is widely adaptable to different kinds of images, due to the use of a divide-and-conquer strategy; (3) is scalable to different capacity requirements and avoids the capacity problems caused by a flat histogram distribution; (4) is conditionally robust against JPEG compression under a suitable scale factor; and (5) is secure for copyright protection because of the safe storage and transmission of side information. Thorough experiments over three kinds of images demonstrate the effectiveness of the proposed framework.

167 citations


Proceedings Article
19 Jun 2011
TL;DR: This paper presents several language model implementations that are both highly compact and fast to query, including a simple but novel language model caching technique that improves the query speed of the language models (and SRILM) by up to 300%.
Abstract: N-gram language models are a major resource bottleneck in machine translation. In this paper, we present several language model implementations that are both highly compact and fast to query. Our fastest implementation is as fast as the widely used SRILM while requiring only 25% of the storage. Our most compact representation can store all 4 billion n-grams and associated counts for the Google n-gram corpus in 23 bits per n-gram, the most compact lossless representation to date, and even more compact than recent lossy compression techniques. We also discuss techniques for improving query speed during decoding, including a simple but novel language model caching technique that improves the query speed of our language models (and SRILM) by up to 300%.
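The caching idea exploits the fact that a decoder repeatedly scores the same n-grams within and across hypotheses, so a small hash map placed in front of the slower, compact language model absorbs most lookups. A minimal memoizing wrapper is sketched below; backing_lm and its score method are placeholders for whatever compact representation is used, not the paper's API.

```python
class CachedLM:
    """Memoizing wrapper around a (possibly slow, compact) n-gram LM.
    backing_lm is assumed to expose score(ngram_tuple) -> log-probability."""

    def __init__(self, backing_lm, max_size=1_000_000):
        self.lm = backing_lm
        self.cache = {}
        self.max_size = max_size

    def score(self, ngram):
        hit = self.cache.get(ngram)
        if hit is not None:
            return hit                        # served from the cache
        value = self.lm.score(ngram)          # expensive compact-storage lookup
        if len(self.cache) < self.max_size:   # simple bound; no eviction policy
            self.cache[ngram] = value
        return value
```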

158 citations


Proceedings ArticleDOI
22 May 2011
TL;DR: The proposed compression scheme using RLS-DLA learned dictionaries in the 9/7 wavelet domain performs better than using dictionaries learned by other methods, and the compression rate is just below the JPEG-2000 rate which is promising considering the simple entropy coding used.
Abstract: The recently presented recursive least squares dictionary learning algorithm (RLS-DLA) is tested in a general image compression application. Dictionaries are learned in the pixel domain and in the 9/7 wavelet domain, and then tested in a straightforward compression scheme. Results are compared with state-of-the-art compression methods. The proposed compression scheme using RLS-DLA learned dictionaries in the 9/7 wavelet domain performs better than using dictionaries learned by other methods. The compression rate is just below the JPEG-2000 rate which is promising considering the simple entropy coding used.

123 citations


01 Jan 2011
TL;DR: This paper attempts to give a recipe for selecting one of the popular image compression algorithms based on Wavelet, JPEG/DCT, VQ, and Fractal approaches.
Abstract: Image compression is now essential for applications such as transmission and storage in databases. In this paper we review and discuss image compression: the need for compression, its principles, the classes of compression, and various image compression algorithms. This paper attempts to give a recipe for selecting one of the popular image compression algorithms based on Wavelet, JPEG/DCT, VQ, and Fractal approaches. We review and discuss the advantages and disadvantages of these algorithms for compressing grayscale images, and give an experimental comparison on the commonly used 256×256 Lenna image and one 400×400 fingerprint image.

87 citations


Journal ArticleDOI
TL;DR: This paper presents a highly integrated VLSI implementation of a mixed bio-signal lossless data compressor capable of handling multichannel electroencephalogram (EEG), electrocardiogram (ECG) and diffuse optical tomography (DOT) bio-Signal data for reduced storage and communication bandwidth requirements in portable, wireless brain-heart monitoring systems used in hospital or home care settings.
Abstract: This paper presents a highly integrated VLSI implementation of a mixed bio-signal lossless data compressor capable of handling multichannel electroencephalogram (EEG), electrocardiogram (ECG) and diffuse optical tomography (DOT) bio-signal data for reduced storage and communication bandwidth requirements in portable, wireless brain-heart monitoring systems used in hospital or home care settings. The compressor integrated in a multiprocessor brain-heart monitoring IC comprises 15 k gates and 12 kbits of RAM, occupying a total area of 58 k μm2 in 65 nm CMOS technology. Results demonstrate an average compression ratio (CR) of 2.05, and a simulated power consumption of 170 μW at an operating condition of 24 MHz clock and 1.0 V core voltage. Nominal power savings of 43% and 47% at the transmitter can be achieved when employing Bluetooth and Zigbee transceivers, respectively.

83 citations


01 Jan 2011
TL;DR: A survey of basic lossless data compression algorithms is provided, comparing statistical compression techniques and dictionary-based compression techniques on text data.
Abstract: Data compression is the science and art of representing information in a compact form. For decades, data compression has been one of the critical enabling technologies for the ongoing digital multimedia revolution. Many data compression algorithms are available to compress files of different formats. This paper provides a survey of basic lossless data compression algorithms. Experimental results and comparisons of lossless compression algorithms using statistical compression techniques and dictionary-based compression techniques were obtained on text data. Among the statistical coding techniques, the Shannon-Fano coding, Huffman coding, adaptive Huffman coding, run-length encoding, and arithmetic coding algorithms are considered. The Lempel-Ziv scheme, a dictionary-based technique, is divided into two families: those derived from LZ77 (LZ77, LZSS, LZH and LZB) and those derived from LZ78 (LZ78, LZW and LZFG). A set of interesting conclusions is derived on this basis.
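As a concrete reference point for the surveyed techniques, run-length encoding is the simplest of the methods listed above: it replaces each run of identical symbols with a (count, symbol) pair. The sketch below is a minimal round-trip example, not the survey's benchmarked implementation.

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Replace runs of identical bytes with (count, byte) pairs."""
    out = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1                      # extend the current run
        out.append((j - i, data[i]))
        i = j
    return out

def rle_decode(pairs) -> bytes:
    return b"".join(bytes([sym]) * count for count, sym in pairs)

assert rle_decode(rle_encode(b"aaabbbbcc")) == b"aaabbbbcc"
```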

80 citations


Journal ArticleDOI
TL;DR: The present work enhances the basic DUDE scheme by incorporating statistical modeling tools that have proven successful in addressing similar issues in lossless image compression; the resulting denoisers significantly surpass the state of the art in the case of salt-and-pepper (S&P) and M-ary symmetric noise, and perform well for Gaussian noise.
Abstract: We present an extension of the discrete universal denoiser DUDE, specialized for the denoising of grayscale images. The original DUDE is a low-complexity algorithm aimed at recovering discrete sequences corrupted by discrete memoryless noise of known statistical characteristics. It is universal, in the sense of asymptotically achieving, without access to any information on the statistics of the clean sequence, the same performance as the best denoiser that does have access to such information. The DUDE, however, is not effective on grayscale images of practical size. The difficulty lies in the fact that one of the DUDE's key components is the determination of conditional empirical probability distributions of image samples, given the sample values in their neighborhood. When the alphabet is relatively large (as is the case with grayscale images), even for a small-sized neighborhood, the required distributions would be estimated from a large collection of sparse statistics, resulting in poor estimates that would not enable effective denoising. The present work enhances the basic DUDE scheme by incorporating statistical modeling tools that have proven successful in addressing similar issues in lossless image compression. Instantiations of the enhanced framework, which is referred to as iDUDE, are described for examples of additive and nonadditive noise. The resulting denoisers significantly surpass the state of the art in the case of salt and pepper (S&P) and M-ary symmetric noise, and perform well for Gaussian noise.

Journal ArticleDOI
TL;DR: 2-D-based compression schemes yielded higher lossless compression than standard vector-based compression, predictive, and entropy coding schemes, and were investigated and compared with other schemes such as the JPEG2000 image compression standard, predictive-coding-based Shorten, and simple entropy coding.

Journal ArticleDOI
TL;DR: The efficiency of the proposed scheme is demonstrated by results, especially when compared to the method presented in a recently published paper based on block truncation coding using the pattern fitting principle.
Abstract: This paper considers the design of a lossy image compression algorithm dedicated to color still images. After a preprocessing step (mean removal and RGB to YCbCr transformation), the DCT is applied, followed by an iterative phase (using the bisection method) comprising thresholding, quantization, dequantization, the inverse DCT, the YCbCr to RGB transform, and mean recovery. This is done to guarantee that a desired quality, fixed in advance using the well-known PSNR metric, is met. To obtain the best possible compression ratio (CR), the next step applies a proposed adaptive scanning that provides, for each n×n DCT block, a corresponding (n×n) vector containing the maximum possible run of zeros at its end. The last step is the application of a modified systematic lossless encoder. The efficiency of the proposed scheme is demonstrated by results, especially when compared to the method presented in a recently published paper based on block truncation coding using the pattern fitting principle.
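The iterative phase is essentially a root-finding loop: the threshold is adjusted by bisection until the reconstruction reaches the target PSNR. The sketch below illustrates only that control loop; compress_decompress is a hypothetical callable standing in for the paper's DCT/threshold/quantize/inverse chain, and monotonicity of PSNR in the threshold is assumed.

```python
import numpy as np

def psnr(orig, recon, peak=255.0):
    mse = np.mean((orig.astype(float) - recon.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def find_threshold(image, compress_decompress, target_psnr,
                   lo=0.0, hi=100.0, iters=20):
    """Bisection on the threshold so the reconstruction meets target_psnr.
    Returns (approximately) the largest threshold still meeting the target,
    assuming PSNR decreases monotonically as the threshold grows."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        recon = compress_decompress(image, mid)   # hypothetical codec round trip
        if psnr(image, recon) >= target_psnr:
            lo = mid          # quality still good enough: push the threshold higher
        else:
            hi = mid          # quality dropped below the target: back off
    return lo
```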

Journal ArticleDOI
TL;DR: The sequence memoizer is a new hierarchical Bayesian model for discrete sequence data that captures long range dependencies and power-law characteristics, while remaining computationally attractive.
Abstract: Probabilistic models of sequences play a central role in most machine translation, automated speech recognition, lossless compression, spell-checking, and gene identification applications to name but a few. Unfortunately, real-world sequence data often exhibit long range dependencies which can only be captured by computationally challenging, complex models. Sequence data arising from natural processes also often exhibits power-law properties, yet common sequence models do not capture such properties. The sequence memoizer is a new hierarchical Bayesian model for discrete sequence data that captures long range dependencies and power-law characteristics, while remaining computationally attractive. Its utility as a language model and general purpose lossless compressor is demonstrated.

Proceedings ArticleDOI
05 Mar 2011
TL;DR: This paper has created the highly parallel GFC compression algorithm for double-precision floating-point data, specifically designed for GPUs, that can improve internode communication throughput on current and upcoming networks by fully saturating the interconnection links with compressed data.
Abstract: Numeric simulations often generate large amounts of data that need to be stored or sent to other compute nodes. This paper investigates whether GPUs are powerful enough to make real-time data compression and decompression possible in such environments, that is, whether they can operate at the 32- or 40-Gb/s throughput of emerging network cards. The fastest parallel CPU-based floating-point data compression algorithm operates below 20 Gb/s on eight Xeon cores, which is significantly slower than the network speed and thus insufficient for compression to be practical in high-end networks. As a remedy, we have created the highly parallel GFC compression algorithm for double-precision floating-point data. This algorithm is specifically designed for GPUs. It compresses at a minimum of 75 Gb/s, decompresses at 90 Gb/s and above, and can therefore improve internode communication throughput on current and upcoming networks by fully saturating the interconnection links with compressed data.
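To make the compression task concrete, fast floating-point compressors of this general kind typically predict each value from its predecessor, XOR (or subtract) prediction and value, and store only the non-zero tail bytes plus a small count of leading zero bytes. The sketch below illustrates that generic idea on the CPU under those assumptions; it is not GFC's actual on-the-wire format or its GPU kernel.

```python
import struct

def fp_delta_compress(values):
    """Generic predecessor-XOR + leading-zero-byte suppression for doubles."""
    out = bytearray()
    prev = 0
    for v in values:
        bits = struct.unpack("<Q", struct.pack("<d", v))[0]
        diff = bits ^ prev                                  # residual vs. predecessor
        raw = diff.to_bytes(8, "big")
        nz = next((k for k, byte in enumerate(raw) if byte), 8)  # leading zero bytes
        out.append(nz)                                      # 1-byte header
        out += raw[nz:]                                     # remaining tail bytes
        prev = bits
    return bytes(out)

def fp_delta_decompress(blob, count):
    values, prev, pos = [], 0, 0
    for _ in range(count):
        nz = blob[pos]; pos += 1
        tail = blob[pos:pos + 8 - nz]; pos += 8 - nz
        prev ^= int.from_bytes(b"\x00" * nz + tail, "big")
        values.append(struct.unpack("<d", struct.pack("<Q", prev))[0])
    return values

data = [1.0, 1.0, 1.0000001, 2.5, 2.5]
assert fp_delta_decompress(fp_delta_compress(data), len(data)) == data  # lossless
```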

Proceedings ArticleDOI
26 Sep 2011
TL;DR: This paper presents an implementation of the Lempel-Ziv-Storer-Szymanski (LZSS) lossless data compression algorithm on GPUs using NVIDIA's Compute Unified Device Architecture (CUDA) framework and provides an API for in-memory compression without the need for reading from and writing to files.
Abstract: Increasing needs in efficient storage management and better utilization of network bandwidth with less data transfer have led the computing community to consider data compression as a solution. However, compression introduces extra overhead and performance can suffer. The key elements in making the decision to use compression are execution time and compression ratio. Due to its negative performance impact, compression is often neglected. General-purpose computing on graphics processing units (GPUs) introduces new opportunities where parallelism is available. Our work targets these opportunities in GPU-based systems by exploiting parallelism in compression algorithms. In this paper we present an implementation of the Lempel-Ziv-Storer-Szymanski (LZSS) lossless data compression algorithm using NVIDIA's Compute Unified Device Architecture (CUDA) framework. Our implementation of the LZSS algorithm on GPUs significantly improves the performance of the compression process compared to a CPU-based implementation without any loss in compression ratio. This can support GPU-based clusters in solving application bandwidth problems. Our system outperforms the serial CPU LZSS implementation by up to 18x, the parallel threaded version by up to 3x, and the BZIP2 program by up to 6x in terms of compression time, showing the promise of CUDA systems in lossless data compression. To give programmers an easy-to-use tool, our work also provides an API for in-memory compression without the need for reading from and writing to files, in addition to the version involving I/O.
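For reference, the sequential logic that the GPU implementation parallelizes is a sliding-window match search followed by emitting either a literal or an (offset, length) token. The sketch below is a deliberately naive CPU reference of LZSS, not the paper's CUDA kernel; the window and match-length parameters are arbitrary.

```python
def lzss_compress(data: bytes, window=4096, min_match=3, max_match=18):
    """Naive LZSS: emit ('L', byte) literals or ('M', offset, length) matches.
    O(n * window) search; real implementations use hashing or parallel search."""
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):         # scan the sliding window
            length = 0
            while (length < max_match and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= min_match:
            out.append(("M", best_off, best_len))
            i += best_len
        else:
            out.append(("L", data[i]))
            i += 1
    return out

def lzss_decompress(tokens) -> bytes:
    buf = bytearray()
    for tok in tokens:
        if tok[0] == "L":
            buf.append(tok[1])
        else:
            _, off, length = tok
            for _ in range(length):
                buf.append(buf[-off])                  # handles overlapping copies
    return bytes(buf)

sample = b"abracadabra abracadabra"
assert lzss_decompress(lzss_compress(sample)) == sample
```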

Journal ArticleDOI
TL;DR: A software-based lossless ECG compression algorithm has been developed using reversed logic, and the data is reconstructed with an almost negligible difference compared with the original (PRD 0.023%).

Journal ArticleDOI
TL;DR: A novel transform based on the Karhunen-Loève transform is introduced, which, while obtaining a better coding performance than the wavelets, does not have the mentioned disadvantages of the KLT.
Abstract: Spectral transforms are widely used for the coding of remote-sensing imagery, with the Karhunen-Loève transform (KLT) and wavelets being the two most common transforms. The KLT presents a higher coding performance than the wavelets. However, it also carries several disadvantages: high computational cost and memory requirements, difficult implementation, and lack of scalability. In this paper, we introduce a novel transform based on the KLT, which, while obtaining a better coding performance than the wavelets, does not have the mentioned disadvantages of the KLT. Due to its very small amount of side information, the transform can be applied in a line-based scheme, which particularly reduces the transform memory requirements. Extensive experiments are conducted on Airborne Visible/Infrared Imaging Spectrometer and Hyperion images, for both lossy and lossless compression, in combination with various hyperspectral coders. The effects on Reed-Xiaoli anomaly detection and k-means clustering are also reported. The theoretical and experimental evidence suggests that the proposed transform might be a good replacement for the wavelets as a spectral decorrelator in many of the situations where the KLT is not a suitable option.
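For context, the classical KLT that the proposed transform approximates is a data-dependent rotation along the spectral axis: compute the band covariance, take its eigenvectors, and project every pixel's spectral vector onto them. The sketch below shows that baseline KLT step only (the paper's contribution is precisely to avoid its cost and side information); the cube layout (bands, rows, cols) is an assumption.

```python
import numpy as np

def spectral_klt(cube):
    """cube: hyperspectral data shaped (bands, rows, cols).
    Returns decorrelated coefficients, the transform matrix, and band means."""
    bands, rows, cols = cube.shape
    X = cube.reshape(bands, -1).astype(float)   # one spectral vector per pixel
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = Xc @ Xc.T / Xc.shape[1]               # bands x bands covariance
    _, vecs = np.linalg.eigh(cov)               # eigenvectors in ascending order
    klt = vecs[:, ::-1].T                       # principal components first
    coeffs = klt @ Xc
    return coeffs.reshape(bands, rows, cols), klt, mean

def inverse_spectral_klt(coeffs, klt, mean):
    bands, rows, cols = coeffs.shape
    X = klt.T @ coeffs.reshape(bands, -1) + mean   # klt is orthogonal
    return X.reshape(bands, rows, cols)
```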

Journal ArticleDOI
TL;DR: The proposed lossless algorithm achieves a compression ratio of approximately 73% for endoscopic images and offers a better compression ratio, lower computational complexity, and a smaller memory requirement than existing lossless compression standards such as JPEG-LS.
Abstract: We present a lossless, low-complexity image compression algorithm for endoscopic images. The algorithm consists of a static prediction scheme and a combination of Golomb-Rice and unary encoding. It does not require any buffer memory and is suitable for any commercial low-power image sensor that outputs image pixels in raster-scan fashion. The proposed lossless algorithm achieves a compression ratio of approximately 73% for endoscopic images. Compared to existing lossless compression standards such as JPEG-LS, the proposed scheme has a better compression ratio, lower computational complexity, and a smaller memory requirement. The algorithm is implemented in a 0.18 µm CMOS technology, occupies a 0.16 mm × 0.16 mm silicon area, and consumes 18 µW of power when operating at 2 frames per second.
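The entropy coder named above is simple enough to sketch: prediction residuals are mapped to non-negative integers, the quotient by 2^k is written in unary, and the remainder in k binary bits. The example below illustrates Golomb-Rice coding of residuals with a fixed k; the prediction step itself is omitted and the parameter choice is a placeholder, not the paper's static predictor or configuration.

```python
def zigzag(r):
    return 2 * r if r >= 0 else -2 * r - 1      # map residual to a non-negative int

def rice_encode(residuals, k=3):
    bits = []
    for r in residuals:
        v = zigzag(r)
        q, rem = v >> k, v & ((1 << k) - 1)
        bits += [1] * q + [0]                                   # unary quotient
        bits += [(rem >> i) & 1 for i in reversed(range(k))]    # k-bit remainder
    return bits

def rice_decode(bits, count, k=3):
    out, pos = [], 0
    for _ in range(count):
        q = 0
        while bits[pos] == 1:
            q += 1; pos += 1
        pos += 1                                 # skip the terminating 0
        rem = 0
        for _ in range(k):
            rem = (rem << 1) | bits[pos]; pos += 1
        v = (q << k) | rem
        out.append(v // 2 if v % 2 == 0 else -(v + 1) // 2)     # undo zigzag
    return out

res = [0, -1, 2, 5, -7, 12]
assert rice_decode(rice_encode(res), len(res)) == res
```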

Journal ArticleDOI
TL;DR: It is shown that encoding 3D models using lossless data compression algorithms prior to secret sharing helps reduce share sizes and remove redundancies and patterns that possibly ease cryptanalysis.
Abstract: In this paper, we propose two secret sharing approaches for 3D models using the Blakley and Thien-Lin schemes. We show that encoding 3D models using lossless data compression algorithms prior to secret sharing helps reduce share sizes and remove redundancies and patterns that possibly ease cryptanalysis. The proposed approaches provide a higher tolerance against data corruption/loss than existing 3D protection mechanisms, such as encryption. Experimental results are provided to demonstrate the secrecy and safety of the proposed schemes. The feasibility of the proposed algorithms is demonstrated on various 3D models.

Book
30 Dec 2011
TL;DR: This book contains a wealth of illustrations, step-by-step descriptions of algorithms, examples and practice problems, which make it an ideal textbook for senior undergraduate and graduate students, as well as a useful self-study tool for researchers and professionals.
Abstract: With clear and easy-to-understand explanations, this book covers the fundamental concepts and coding methods of signal compression, whilst still retaining technical depth and rigor. It contains a wealth of illustrations, step-by-step descriptions of algorithms, examples and practice problems, which make it an ideal textbook for senior undergraduate and graduate students, as well as a useful self-study tool for researchers and professionals. Principles of lossless compression are covered, as are various entropy coding techniques, including Huffman coding, arithmetic coding and Lempel-Ziv coding. Scalar and vector quantization and trellis coding are thoroughly explained, and a full chapter is devoted to mathematical transformations including the KLT, DCT and wavelet transforms. The workings of transform and subband/wavelet coding systems, including JPEG2000 and SBHP image compression and H.264/AVC video compression, are explained and a unique chapter is provided on set partition coding, shedding new light on SPIHT, SPECK, EZW and related methods.

Journal ArticleDOI
TL;DR: The simulation and experimental results show that the Wavelet-Bandelets method has a higher compression ratio than Wavelet methods and all the other methods investigated in this paper, while it still maintains low NRMS error.
Abstract: In transformation-based compression algorithms for digital holograms of three-dimensional objects, the balance between compression ratio and normalized root-mean-square (NRMS) error is always at the core of algorithm development. The wavelet transform method achieves a high compression ratio, but its NRMS error is also high. To solve this issue, we propose a hologram compression method using the Wavelet-Bandelets transform. Our simulation and experimental results show that the Wavelet-Bandelets method has a higher compression ratio than wavelet methods and all the other methods investigated in this paper, while still maintaining a low NRMS error.

Proceedings ArticleDOI
29 Dec 2011
TL;DR: Experimental results show that the proposed compression approach improves the compression efficiency when compared to the traditional MPEG-4/AVC compression method for II.
Abstract: Integral imaging is an attractive auto-stereoscopic three-dimensional (3D) technique for next-generation 3DTV. To improve its video quality, new techniques are required to effectively compress the huge volume of integral image (II) data. In this paper, a new compression method implemented with multi-view video coding (MVC) is provided and applied to sub-images (SI). An SI is an alternative form of 2D image transformed from the original II. Each SI represents the 3D scene from parallel viewing directions and offers superior compression capabilities compared with the originally captured elemental images (EI). For this reason, we arrange the group of SIs in the format of multi-view video (MVV) and then encode the generated MVV with the MVC standard. Experimental results show that our proposed compression approach improves the compression efficiency compared to the traditional MPEG-4/AVC compression method for II.

Journal ArticleDOI
TL;DR: A lossy reference frame compression technique is described that can be used in video coding with minimal impact on quality while significantly reducing power and bandwidth requirements, and that remains compatible with all existing video standards.
Abstract: The large external memory bandwidth requirement leads to increased system power dissipation and cost in video coding applications. The majority of the external memory traffic in a video encoder is due to reference data accesses. We describe a lossy reference frame compression technique that can be used in video coding with minimal impact on quality while significantly reducing power and bandwidth requirements. The low-cost, transformless compression technique uses a lossy reference for motion estimation to reduce memory traffic, and a lossless reference for motion compensation (MC) to avoid drift. Thus, it is compatible with all existing video standards. We calculate the quantization error bound and show that, by storing the quantization error separately, the bandwidth overhead due to MC can be reduced significantly. The technique meets key requirements specific to the video encoding application. A 24-39% reduction in peak bandwidth and a 23-31% reduction in total average power consumption are observed for IBBP sequences.

Journal ArticleDOI
TL;DR: Runtime lossless data compression is applied to enable the opportunistic use of a stronger error correction code (ECC) with more coding redundancy in data storage systems, trading the extra error correction capability to improve other system performance metrics at runtime; low-density parity-check (LDPC) codes serve as the ECC.
Abstract: Lossless data compression for data storage has become less popular as mass data storage systems are becoming increasingly cheap. This leaves many files stored on mass data storage media uncompressed although they are losslessly compressible. This paper proposes to exploit the lossless compressibility of those files to improve the underlying storage system performance metrics such as energy efficiency and access speed, other than saving storage space as in conventional practice. The key idea is to apply runtime lossless data compression to enable an opportunistic use of a stronger error correction code (ECC) with more coding redundancy in data storage systems, and trade such opportunistic extra error correction capability to improve other system performance metrics in the runtime. Since data storage is typically realized in the unit of equal-sized sectors (e.g., 512 B or 4 KB user data per sector), we only apply this strategy to each individual sector independently in order to be completely transparent to the firmware, operating systems, and users. Using low-density parity check (LDPC) code as ECC in storage systems, this paper quantitatively studies the effectiveness of this design strategy in both hard disk drives and NAND flash memories. For hard disk drives, we use this design strategy to reduce average hard disk drive read channel signal processing energy consumption, and results show that up to 38 percent read channel energy saving can be achieved. For NAND flash memories, we use this design strategy to improve average NAND flash memory write speed, and results show that up to 36 percent write speed improvement can be achieved for 2 bits/cell NAND flash memories.
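The per-sector decision logic is simple: try to compress the sector's user data; if the compressed payload leaves enough room inside the fixed sector for the additional parity, store it with the stronger code, otherwise fall back to the normal code. The sketch below illustrates only that control path; zlib stands in for whatever lossless compressor is used, and the byte budgets are made-up numbers, not the paper's LDPC configurations.

```python
import zlib

SECTOR_USER_BYTES = 4096      # fixed user payload per sector (assumed)
NORMAL_PARITY = 512           # parity bytes of the baseline ECC (assumed)
STRONG_PARITY = 1024          # parity bytes of the stronger, opportunistic ECC (assumed)

def encode_sector(user_data: bytes) -> tuple[str, bytes]:
    """Choose ECC strength per sector based on lossless compressibility,
    keeping the decision transparent to firmware and the OS."""
    assert len(user_data) == SECTOR_USER_BYTES
    compressed = zlib.compress(user_data, level=6)
    extra_room = SECTOR_USER_BYTES - len(compressed)
    if extra_room >= STRONG_PARITY - NORMAL_PARITY:
        # Compressed payload plus the stronger parity still fits in the sector.
        return "strong-ecc", compressed
    return "normal-ecc", user_data    # store uncompressed with the baseline ECC
```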

Patent
29 Aug 2011
TL;DR: In this article, techniques relating to modifying packet data to be sent across a communication link and/or bus are discussed, where data may be modified in accordance with one or more data processing algorithms, and according to the capabilities of a destination device to receive such modified data.
Abstract: Techniques are disclosed relating to modifying packet data to be sent across a communication link and/or bus. Data may be modified in accordance with one or more data processing algorithms, and according to the capabilities of a destination device to receive such modified data. Lossless compression algorithms may be used on data in order to achieve a higher effective bandwidth over a particular bus or link. Encryption algorithms may be used, as well as data format conversion algorithms. One or more processing elements of a communication channel controller or other structure within a computing device may be used to modify packet data, which may be in PCI-Express format in some embodiments. A packet prefix or header may be used to store an indication of what algorithm(s) has been used to modify packet data so that a destination device can process packets accordingly.

Proceedings ArticleDOI
16 Sep 2011
TL;DR: A spatial division design shows a speedup of 72x in the four-GPU-based implementation of the PPVQ compression scheme, which consists of linear prediction, bit depth partitioning, vector quantization, and entropy coding.
Abstract: For ultraspectral sounder data, which features thousands of channels at each observation location, lossless compression is desirable to save storage space and transmission time without losing precision in the retrieval of geophysical parameters. Predictive partitioned vector quantization (PPVQ) has been proven to be an effective lossless compression scheme for ultraspectral sounder data. It consists of linear prediction, bit-depth partitioning, vector quantization, and entropy coding. In our previous work, the two most time-consuming stages, linear prediction and vector quantization, were identified for GPU implementation. For GIFTS data, using a spectral division strategy to share the compression workload among four GPUs, a speedup of ~42x was achieved. To further enhance the speedup, this work explores a spatial division strategy for sharing the workload in processing the six parts of a GIFTS datacube. As a result, the total processing time of a GIFTS datacube on four GPUs can be less than 13 seconds, which is equivalent to a speedup of ~72x. The use of multiple GPUs for PPVQ compression is thus promising as a low-cost and effective compression solution for ultraspectral sounder data for rebroadcast use.

Journal ArticleDOI
TL;DR: By designing an adaptive threshold value in the extraction process, the proposed blind watermarking scheme is more robust for resisting common attacks such as median filtering, average filtering, and Gaussian noise.
Abstract: This paper proposes a blind watermarking scheme based on wavelet tree quantization for copyright protection. In such a quantization scheme, there exists a large, significant difference between embedding a watermark bit 1 and a watermark bit 0; the scheme therefore does not require the original image or watermark during the watermark extraction process. As a result, the watermarked images look visually lossless in comparison with the original ones, and the proposed method can effectively resist common image processing attacks, especially JPEG compression and low-pass filtering. Moreover, by designing an adaptive threshold value in the extraction process, our method is more robust in resisting common attacks such as median filtering, average filtering, and Gaussian noise. Experimental results show that the watermarked image looks visually identical to the original, and the watermark can be effectively extracted.

Journal ArticleDOI
TL;DR: A spatial division design shows a speedup of 72x in the four-GPU-based implementation of the PPVQ compression scheme, which consists of linear prediction, bit depth partitioning, vector quantization, and entropy coding.
Abstract: For large-volume ultraspectral sounder data, compression is desirable to save storage space and transmission time. To retrieve the geophysical parameters without losing precision, the ultraspectral sounder data compression has to be lossless. Recently there has been a boom in the use of graphics processing units (GPUs) to speed up scientific computations. By identifying the time-dominant portions of the code that can be executed in parallel, significant speedup can be achieved using GPUs. Predictive partitioned vector quantization (PPVQ) has been proven to be an effective lossless compression scheme for ultraspectral sounder data. It consists of linear prediction, bit-depth partitioning, vector quantization, and entropy coding. The two most time-consuming stages, linear prediction and vector quantization, are chosen for GPU-based implementation. By exploiting the data-parallel characteristics of these two stages, a spatial division design shows a speedup of 72x in our four-GPU-based implementation of the PPVQ compression scheme.

Journal ArticleDOI
TL;DR: A compression algorithm is presented, based on a novel scheme of assigning binary bits to small segments of DNA bases to compress both repetitive and non-repetitive DNA sequences; it achieves the best compression ratio for DNA sequences from larger genomes.
Abstract: Data compression is concerned with how information is organized in data. Efficient storage means removing redundancy from the data stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences, based on a novel scheme of assigning binary bits to small segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences from larger genomes. The significantly better compression results show that the “DNABIT Compress” algorithm outperforms the other compression algorithms considered. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to exact-repeat and reverse-repeat fragments of a DNA sequence is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the best existing methods could not achieve a ratio below 1.72 bits/base.
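As a baseline for the ratios quoted above, packing each of the four bases into two bits already gives 2 bits/base; DNABIT Compress's gain to about 1.58 bits/base comes from additionally assigning short codes to exact and reverse repeats. The sketch below shows only the trivial 2-bit packing baseline, not the algorithm's repeat coding.

```python
BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BITS_TO_BASE = {v: k for k, v in BASE_TO_BITS.items()}

def pack_dna(seq: str) -> bytes:
    """Pack a DNA string (A/C/G/T only) at 2 bits per base, 4 bases per byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASE_TO_BITS[base]
        byte <<= 2 * (4 - len(seq[i:i + 4]))       # left-align a short final chunk
        out.append(byte)
    return bytes(out)

def unpack_dna(packed: bytes, length: int) -> str:
    bases = []
    for byte in packed:
        for shift in (6, 4, 2, 0):
            bases.append(BITS_TO_BASE[(byte >> shift) & 0b11])
    return "".join(bases[:length])                 # drop padding bases

s = "ACGTACGTTTG"
assert unpack_dna(pack_dna(s), len(s)) == s
```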