
Showing papers on "Data compression" published in 2010


Journal ArticleDOI
TL;DR: Extensive tests of videos, natural images, and psychological patterns show that the proposed PQFT model is more effective in saliency detection and can predict eye fixations better than other state-of-the-art models in previous literature.
Abstract: Salient areas in natural scenes are generally regarded as areas which the human eye will typically focus on, and finding these areas is the key step in object detection. In computer vision, many models have been proposed to simulate the behavior of eyes such as SaliencyToolBox (STB), Neuromorphic Vision Toolkit (NVT), and others, but they demand high computational cost and computing useful results mostly relies on their choice of parameters. Although some region-based approaches were proposed to reduce the computational complexity of feature maps, these approaches still were not able to work in real time. Recently, a simple and fast approach called spectral residual (SR) was proposed, which uses the SR of the amplitude spectrum to calculate the image's saliency map. However, in our previous work, we pointed out that it is the phase spectrum, not the amplitude spectrum, of an image's Fourier transform that is key to calculating the location of salient areas, and proposed the phase spectrum of Fourier transform (PFT) model. In this paper, we present a quaternion representation of an image which is composed of intensity, color, and motion features. Based on the principle of PFT, a novel multiresolution spatiotemporal saliency detection model called phase spectrum of quaternion Fourier transform (PQFT) is proposed in this paper to calculate the spatiotemporal saliency map of an image by its quaternion representation. Distinct from other models, the added motion dimension allows the phase spectrum to represent spatiotemporal saliency in order to perform attention selection not only for images but also for videos. In addition, the PQFT model can compute the saliency map of an image under various resolutions from coarse to fine. Therefore, the hierarchical selectivity (HS) framework based on the PQFT model is introduced here to construct the tree structure representation of an image. With the help of HS, a model called multiresolution wavelet domain foveation (MWDF) is proposed in this paper to improve coding efficiency in image and video compression. Extensive tests of videos, natural images, and psychological patterns show that the proposed PQFT model is more effective in saliency detection and can predict eye fixations better than other state-of-the-art models in previous literature. Moreover, our model requires low computational cost and, therefore, can work in real time. Additional experiments on image and video compression show that the HS-MWDF model can achieve higher compression rate than the traditional model.
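As a rough illustration of the phase-spectrum idea behind PFT/PQFT, the sketch below computes a saliency map from the phase of an ordinary 2-D FFT of a single grayscale channel; the paper's PQFT model instead applies a quaternion Fourier transform to intensity, color, and motion features, so this is a simplified stand-in rather than the authors' implementation.

```python
# Simplified single-channel phase-spectrum saliency (PFT-style sketch).
# PQFT extends this idea to a quaternion FFT over intensity, color, and
# motion features; this is not the authors' code.
import numpy as np
from scipy.ndimage import gaussian_filter

def phase_saliency(image: np.ndarray, sigma: float = 8.0) -> np.ndarray:
    """Saliency map from the phase spectrum of a 2-D Fourier transform."""
    spectrum = np.fft.fft2(image.astype(np.float64))
    phase_only = np.exp(1j * np.angle(spectrum))   # keep phase, drop amplitude
    recon = np.fft.ifft2(phase_only)
    saliency = np.abs(recon) ** 2                  # squared magnitude of the reconstruction
    return gaussian_filter(saliency, sigma)        # smooth to obtain the final map
```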

944 citations


Book
09 Aug 2010
TL;DR: This book unravels the mysteries behind the latest H.264 standard and delves deeper into each of the operations in the codec, providing readers with practical advice on how to get the most out of the standard.
Abstract: H.264 Advanced Video Coding or MPEG-4 Part 10 is fundamental to a growing range of markets such as high definition broadcasting, internet video sharing, mobile video and digital surveillance. This book reflects the growing importance and implementation of H.264 video technology. Offering a detailed overview of the system, it explains the syntax, tools and features of H.264 and equips readers with practical advice on how to get the most out of the standard. Packed with clear examples and illustrations to explain H.264 technology in an accessible and practical way. Covers basic video coding concepts, video formats and visual quality. Explains how to measure and optimise the performance of H.264 and how to balance bitrate, computation and video quality. Analyses recent work on scalable and multi-view versions of H.264, case studies of H.264 codecs and new technological developments such as the popular High Profile extensions. An invaluable companion for developers, broadcasters, system integrators, academics and students who want to master this burgeoning state-of-the-art technology. "[This book] unravels the mysteries behind the latest H.264 standard and delves deeper into each of the operations in the codec. The reader can implement (simulate, design, evaluate, optimize) the codec with all profiles and levels. The book ends with extensions and directions (such as SVC and MVC) for further research." Professor K. R. Rao, The University of Texas at Arlington, co-inventor of the Discrete Cosine Transform

663 citations



Journal ArticleDOI
TL;DR: The new JPEG error analysis method can reliably detect JPEG image blocks which are as small as 8 × 8 pixels and compressed with quality factors as high as 98, which is important for analyzing and locating small tampered regions within a composite image.
Abstract: JPEG is one of the most extensively used image formats. Understanding the inherent characteristics of JPEG may play a useful role in digital image forensics. In this paper, we introduce JPEG error analysis to the study of image forensics. The main errors of JPEG include quantization, rounding, and truncation errors. Through theoretically analyzing the effects of these errors on single and double JPEG compression, we have developed three novel schemes for image forensics including identifying whether a bitmap image has previously been JPEG compressed, estimating the quantization steps of a JPEG image, and detecting the quantization table of a JPEG image. Extensive experimental results show that our new methods significantly outperform existing techniques especially for the images of small sizes. We also show that the new method can reliably detect JPEG image blocks which are as small as 8 × 8 pixels and compressed with quality factors as high as 98. This performance is important for analyzing and locating small tampered regions within a composite image.

260 citations


Journal ArticleDOI
TL;DR: This paper studies almost lossless analog compression for analog memoryless sources in an information-theoretic framework in which the compressor or decompressor is constrained by various regularity conditions, in particular linearity of the compressor and Lipschitz continuity of the decompressor.
Abstract: In Shannon theory, lossless source coding deals with the optimal compression of discrete sources. Compressed sensing is a lossless coding strategy for analog sources by means of multiplication by real-valued matrices. In this paper we study almost lossless analog compression for analog memoryless sources in an information-theoretic framework, in which the compressor or decompressor is constrained by various regularity conditions, in particular linearity of the compressor and Lipschitz continuity of the decompressor. The fundamental limit is shown to be the information dimension proposed by Rényi in 1959.
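For context, the information dimension that appears as the fundamental limit is usually written as follows (a standard definition; when the limit does not exist, one takes lim sup and lim inf for the upper and lower dimensions):

```latex
% Rényi information dimension of a real-valued random variable X, where
% <X>_m is X uniformly quantized with step 1/m and H(.) is Shannon entropy.
d(X) = \lim_{m \to \infty} \frac{H\!\left(\langle X \rangle_m\right)}{\log m},
\qquad
\langle X \rangle_m = \frac{\lfloor m X \rfloor}{m}.
```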

228 citations


Journal ArticleDOI
TL;DR: A foveation model as well as a foveated JND (FJND) model in which the spatial and temporal JND models are enhanced to account for the relationship between visibility and eccentricity is described.
Abstract: Traditional video compression methods remove spatial and temporal redundancy based on the signal statistical correlation. However, to reach higher compression ratios without perceptually degrading the reconstructed signal, the properties of the human visual system (HVS) need to be better exploited. Research effort has been dedicated to modeling the spatial and temporal just-noticeable-distortion (JND) based on the sensitivity of the HVS to luminance contrast, and accounting for spatial and temporal masking effects. This paper describes a foveation model as well as a foveated JND (FJND) model in which the spatial and temporal JND models are enhanced to account for the relationship between visibility and eccentricity. Since the visual acuity decreases when the distance from the fovea increases, the visibility threshold increases with increased eccentricity. The proposed FJND model is then used for macroblock (MB) quantization adjustment in H.264/advanced video coding (AVC). For each MB, the quantization parameter is optimized based on its FJND information. The Lagrange multiplier in the rate-distortion optimization is adapted so that the MB noticeable distortion is minimized. The performance of the FJND model has been assessed with various comparisons and subjective visual tests. It has been shown that the proposed FJND model can increase the visual quality versus rate performance of the H.264/AVC video coding scheme.
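The eccentricity dependence described here is commonly captured with a cortical-magnification model of the Geisler-Perry type; the sketch below uses that general form with commonly quoted placeholder constants, which are not taken from the FJND paper.

```python
# Illustrative foveated contrast-threshold model (Geisler-Perry form).
# ct0, alpha, and e2 are placeholder values, not the ones calibrated
# in the FJND paper.
import numpy as np

def contrast_threshold(f_cpd, ecc_deg, ct0=1.0 / 64, alpha=0.106, e2=2.3):
    """Contrast threshold vs. spatial frequency (cycles/deg) and eccentricity (deg)."""
    ct = ct0 * np.exp(alpha * f_cpd * (ecc_deg + e2) / e2)
    return np.minimum(ct, 1.0)          # physical contrast cannot exceed 1

# Thresholds grow with eccentricity, so a foveal JND can be scaled up for
# peripheral macroblocks before the quantization parameter is adjusted.
print(contrast_threshold(8.0, 0.0), contrast_threshold(8.0, 10.0))
```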

194 citations


Journal ArticleDOI
TL;DR: A novel video compression scheme based on a highly flexible hierarchy of unit representation which includes three block concepts: coding unit (CU), prediction unit (PU), and transform unit (TU), which was a candidate in the competitive phase of the high-efficiency video coding (HEVC) standardization work.
Abstract: This paper proposes a novel video compression scheme based on a highly flexible hierarchy of unit representation which includes three block concepts: coding unit (CU), prediction unit (PU), and transform unit (TU). This separation of the block structure into three different concepts allows each to be optimized according to its role; the CU is a macroblock-like unit which supports region splitting in a manner similar to a conventional quadtree, the PU supports nonsquare motion partition shapes for motion compensation, while the TU allows the transform size to be defined independently from the PU. Several other coding tools are extended to arbitrary unit size to maintain consistency with the proposed design, e.g., transform size is extended up to 64 × 64 and intraprediction is designed to support an arbitrary number of angles for variable block sizes. Other novel techniques such as a new noncascading interpolation filter design allowing arbitrary motion accuracy and a leaky prediction technique using both open-loop and closed-loop predictors are also introduced. The video codec described in this paper was a candidate in the competitive phase of the high-efficiency video coding (HEVC) standardization work. Compared to H.264/AVC, it demonstrated bit rate reductions of around 40% based on objective measures and around 60% based on subjective testing with 1080p sequences. It has been partially adopted into the first standardization model of the collaborative phase of the HEVC effort.
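A minimal sketch of the recursive, rate-distortion-driven splitting that such a CU quadtree implies is shown below; rd_cost is a hypothetical callback standing in for the encoder's mode decision and is not part of the proposal's actual software.

```python
# Sketch of a recursive, RD-based coding-unit (CU) split decision, in the
# spirit of the quadtree CU structure described above.  rd_cost(x, y, size)
# is a hypothetical callback returning the rate-distortion cost of coding
# a block without further splitting.
def best_partition(x, y, size, rd_cost, min_size=8):
    """Return (cost, partition_tree) for the CU at (x, y) of the given size."""
    no_split_cost = rd_cost(x, y, size)
    if size <= min_size:
        return no_split_cost, (x, y, size)

    half = size // 2
    split_cost, children = 0.0, []
    for dy in (0, half):
        for dx in (0, half):
            c, sub = best_partition(x + dx, y + dy, half, rd_cost, min_size)
            split_cost += c
            children.append(sub)

    if split_cost < no_split_cost:          # keep the cheaper of split vs. no split
        return split_cost, children
    return no_split_cost, (x, y, size)

# Toy usage with a dummy cost that penalizes the top-left corner more heavily.
cost, tree = best_partition(0, 0, 64, lambda x, y, s: s * s * (1.5 if x < 16 and y < 16 else 0.9))
```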

193 citations


Journal ArticleDOI
TL;DR: A video copy detection system which efficiently matches individual frames and then verifies their spatio-temporal consistency and shows that this system obtains excellent results for the TRECVID 2008 copy detection task.
Abstract: This paper introduces a video copy detection system which efficiently matches individual frames and then verifies their spatio-temporal consistency. The approach for matching frames relies on a recent local feature indexing method, which is at the same time robust to significant video transformations and efficient in terms of memory usage and computation time. We match either keyframes or uniformly sampled frames. To further improve the results, a verification step robustly estimates a spatio-temporal model between the query video and the potentially corresponding video segments. Experimental results evaluate the different parameters of our system and measure the trade-off between accuracy and efficiency. We show that our system obtains excellent results for the TRECVID 2008 copy detection task.

173 citations


Journal ArticleDOI
TL;DR: A video coding architecture is described that is based on nested and pre-configurable quadtree structures for flexible and signal-adaptive picture partitioning that was ranked among the five best performing proposals, both in terms of subjective and objective quality.
Abstract: A video coding architecture is described that is based on nested and pre-configurable quadtree structures for flexible and signal-adaptive picture partitioning. The primary goal of this partitioning concept is to provide a high degree of adaptability for both temporal and spatial prediction as well as for the purpose of space-frequency representation of prediction residuals. At the same time, a leaf merging mechanism is included in order to prevent excessive partitioning of a picture into prediction blocks and to reduce the amount of bits for signaling the prediction signal. For fractional-sample motion-compensated prediction, a fixed-point implementation of the maximal-order minimum-support algorithm is presented that uses a combination of infinite impulse response (IIR) and finite impulse response (FIR) filtering. Entropy coding utilizes the concept of probability interval partitioning entropy codes that offers new ways for parallelization and enhanced throughput. The presented video coding scheme was submitted to a joint call for proposals of the ITU-T Video Coding Experts Group and ISO/IEC Moving Picture Experts Group and was ranked among the five best performing proposals, both in terms of subjective and objective quality.

171 citations


Journal ArticleDOI
TL;DR: This algorithm is based on the observation that in the process of recompressing a JPEG image with the same quantization matrix over and over again, the number of different JPEG coefficients will monotonically decrease in general.
Abstract: Detection of double joint photographic experts group (JPEG) compression is of great significance in the field of digital forensics. Some successful approaches have been presented for detecting double JPEG compression when the primary compression and the secondary compression have different quantization matrices. However, when the primary compression and the secondary compression have the same quantization matrix, no detection method has been reported yet. In this paper, we present a method which can detect double JPEG compression with the same quantization matrix. Our algorithm is based on the observation that in the process of recompressing a JPEG image with the same quantization matrix over and over again, the number of different JPEG coefficients, i.e., the quantized discrete cosine transform coefficients between the sequential two versions, will monotonically decrease in general. For example, the number of different JPEG coefficients between the singly and doubly compressed images is generally larger than the number of different JPEG coefficients between the corresponding doubly and triply compressed images. Via a novel random perturbation strategy implemented on the JPEG coefficients of the recompressed test image, we can find a "proper" randomly perturbed ratio. For different images, this universal "proper" ratio will generate a dynamically changed threshold, which can be utilized to discriminate the singly compressed image and doubly compressed image. Furthermore, our method has the potential to detect triple JPEG compression, four times JPEG compression, etc.
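The monotone-decrease observation can be reproduced with a toy JPEG-like round trip (blockwise DCT, flat quantization, rounding, and clipping); the sketch below only illustrates the effect and is not the authors' detector, which additionally relies on the random perturbation strategy described above.

```python
# Recompressing with the SAME quantization step generally makes the number of
# changed quantized DCT coefficients shrink from one generation to the next.
# Toy version on a grayscale array; not the authors' implementation.
import numpy as np
from scipy.fftpack import dct, idct

def block_dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def block_idct2(coef):
    return idct(idct(coef, axis=0, norm='ortho'), axis=1, norm='ortho')

def jpeg_like_round_trip(img, q):
    """Quantize/dequantize 8x8 DCT blocks with step q; return (pixels, coefficients)."""
    h, w = img.shape
    out = np.empty_like(img, dtype=np.float64)
    coefs = np.empty_like(out)
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            c = np.round(block_dct2(img[i:i+8, j:j+8].astype(np.float64)) / q)
            coefs[i:i+8, j:j+8] = c
            out[i:i+8, j:j+8] = block_idct2(c * q)
    return np.clip(np.round(out), 0, 255), coefs

img = np.random.randint(0, 256, (64, 64)).astype(np.float64)
q = 4.0                                   # single flat quantization step for simplicity
prev_coefs = None
for gen in range(1, 5):
    img, coefs = jpeg_like_round_trip(img, q)
    if prev_coefs is not None:
        print(f"generation {gen}: {np.count_nonzero(coefs != prev_coefs)} coefficients changed")
    prev_coefs = coefs
```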

171 citations


01 Jan 2010
TL;DR: The Huffman algorithm is analyzed and compared with other common compression techniques such as Arithmetic coding, LZW, and Run Length Encoding, with the aim of making large amounts of data easier to store.
Abstract: Data compression is also called source coding. It is the process of encoding information using fewer bits than an uncoded representation would, by making use of specific encoding schemes. Compression is a technology for reducing the quantity of data used to represent any content without excessively reducing the quality of the picture. It also reduces the number of bits required to store and/or transmit digital media. Compression is a technique that makes storing large amounts of data easier. There are various techniques available for compression; in this paper, I have analyzed the Huffman algorithm and compared it with other common compression techniques such as Arithmetic coding, LZW, and Run Length Encoding.
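For reference, a minimal Huffman code construction (the textbook greedy merge on a min-heap) looks like the following; this is a generic sketch, not code from the paper.

```python
# Minimal Huffman coding sketch, shown only to illustrate the technique the
# paper compares against Arithmetic coding, LZW, and Run Length Encoding.
import heapq
from collections import Counter

def huffman_codes(data: str) -> dict:
    """Build a prefix-free code table {symbol: bitstring} for the input."""
    heap = [[freq, i, [sym, ""]] for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol input
        return {heap[0][2][0]: "0"}
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]          # prepend a bit for the lighter subtree
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], *lo[2:], *hi[2:]])
    return {sym: code for sym, code in heap[0][2:]}

table = huffman_codes("abracadabra")
encoded = "".join(table[c] for c in "abracadabra")
print(table, len(encoded), "bits vs", 8 * len("abracadabra"), "uncompressed")
```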

Journal ArticleDOI
TL;DR: This work presents a lossless compression algorithm that has been designed for fast on-line data compression, and cache compression in particular, and reduces the proposed algorithm to a register transfer level hardware design, permitting performance, power consumption, and area estimation.
Abstract: Microprocessor designers have been torn between tight constraints on the amount of on-chip cache memory and the high latency of off-chip memory, such as dynamic random access memory. Accessing off-chip memory generally takes an order of magnitude more time than accessing on-chip cache, and two orders of magnitude more time than executing an instruction. Computer systems and microarchitecture researchers have proposed using hardware data compression units within the memory hierarchies of microprocessors in order to improve performance, energy efficiency, and functionality. However, most past work, and all work on cache compression, has made unsubstantiated assumptions about the performance, power consumption, and area overheads of the proposed compression algorithms and hardware. It is not possible to determine whether compression at levels of the memory hierarchy closest to the processor is beneficial without understanding its costs. Furthermore, as we show in this paper, raw compression ratio is not always the most important metric. In this work, we present a lossless compression algorithm that has been designed for fast on-line data compression, and cache compression in particular. The algorithm has a number of novel features tailored for this application, including combining pairs of compressed lines into one cache line and allowing parallel compression of multiple words while using a single dictionary and without degradation in compression ratio. We reduced the proposed algorithm to a register transfer level hardware design, permitting performance, power consumption, and area estimation. Experiments comparing our work to previous work are described.

Journal ArticleDOI
01 Jan 2010
TL;DR: An ECG signal processing method with a quad level vector (QLV) is proposed for the ECG Holter system to achieve better performance with low computational complexity.
Abstract: An ECG signal processing method with a quad level vector (QLV) is proposed for the ECG Holter system. The ECG processing consists of the compression flow and the classification flow, and the QLV is proposed for both flows to achieve better performance with low computational complexity. The compression algorithm is performed by using the ECG skeleton and Huffman coding. Unit block size optimization, adaptive threshold adjustment, and 4-bit-wise Huffman coding methods are applied to reduce the processing cost while maintaining the signal quality. The heartbeat segmentation and the R-peak detection methods are employed for the classification algorithm. The performance is evaluated by using the Massachusetts Institute of Technology-Boston's Beth Israel Hospital (MIT-BIH) Arrhythmia Database, and a noise robustness test is also performed to assess the reliability of the algorithm. The average compression ratio is 16.9:1 with a 0.641% percentage root mean square difference value, and the encoding rate is 6.4 kbps. The accuracy of the R-peak detection is 100% without noise and 95.63% in the worst case with -10-dB SNR noise. The overall processing cost is reduced by 45.3% with the proposed compression techniques.
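The distortion figure quoted above is the percentage root-mean-square difference; one common convention (some works subtract the signal mean in the denominator) is:

```latex
% Percentage root-mean-square difference (PRD) between original samples x_i
% and reconstructed samples \hat{x}_i, and the compression ratio (CR);
% standard definitions, shown for reference.
\mathrm{PRD} = 100 \times
  \sqrt{\frac{\sum_{i=1}^{N} (x_i - \hat{x}_i)^2}{\sum_{i=1}^{N} x_i^2}},
\qquad
\mathrm{CR} = \frac{\text{bits of the original record}}{\text{bits of the compressed record}}.
```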

Journal ArticleDOI
TL;DR: A complete survey of representative video encryption algorithms proposed so far is given, showing that each scheme has its own strengths and weaknesses and that no single scheme can meet all specific requirements.

Proceedings ArticleDOI
18 Mar 2010
TL;DR: Conventional image/video sensors acquire visual information from a scene in time-quantized fashion at some predetermined frame rate, which leads, depending on the dynamic contents of the scene, to a more or less high degree of redundancy in the image data.
Abstract: Conventional image/video sensors acquire visual information from a scene in time-quantized fashion at some predetermined frame rate. Each frame carries the information from all pixels, regardless of whether or not this information has changed since the last frame had been acquired, which is usually not long ago. This method obviously leads, depending on the dynamic contents of the scene, to a more or less high degree of redundancy in the image data. Acquisition and handling of these dispensable data consume valuable resources; sophisticated and resource-hungry video compression methods have been developed to deal with these data.

01 Dec 2010
TL;DR: An experimental comparison of a number of different lossless data compression algorithms is presented and it is stated which algorithm performs well for text data.
Abstract: Data compression is a common requirement for most computerized applications. There are a number of data compression algorithms which are dedicated to compressing different data formats. Even for a single data type there are a number of different compression algorithms, which use different approaches. This paper examines lossless data compression algorithms and compares their performance. A set of selected algorithms is examined and implemented to evaluate the performance in compressing text data. An experimental comparison of a number of different lossless data compression algorithms is presented in this paper. The article concludes by stating which algorithm performs well for text data.

Proceedings ArticleDOI
14 Mar 2010
TL;DR: A new sharpness measure is proposed in which sharpness is identified as strong local phase coherence evaluated in the complex wavelet transform domain; tests show that the proposed algorithm correlates well with subjective quality evaluations.
Abstract: Sharpness is one of the most determining factors in the perceptual assessment of image quality. Objective image sharpness measures may play important roles in the design and optimization of visual perception-based auto-focus systems and image enhancement, restoration and compression algorithms. Here we propose a new sharpness measure where sharpness is identified as strong local phase coherence evaluated in the complex wavelet transform domain. Our test using the LIVE blur database shows that the proposed algorithm correlates well with subjective quality evaluations. An additional advantage of our approach is that other image distortions such as compression, median filtering and noise contamination that may affect perceptual sharpness can also be detected.

01 Jan 2010
TL;DR: A lossless method of image compression and decompression using a simple coding technique called Huffman coding is proposed, which is simple in implementation and utilizes less memory.
Abstract: The need for an efficient technique for image compression is ever increasing because raw images need large amounts of disk space, which is a big disadvantage during transmission and storage. Even though there are many compression techniques already available, a better technique that is faster, memory efficient, and simple would surely suit the requirements of the user. In this paper we propose a lossless method of image compression and decompression using a simple coding technique called Huffman coding. This technique is simple in implementation and utilizes less memory. A software algorithm has been developed and implemented to compress and decompress a given image using Huffman coding techniques on a MATLAB platform.

Journal ArticleDOI
TL;DR: A robust single-image super-resolution method for enlarging low quality web image/video degraded by downsampling and compression is proposed and it is verified that this adaptive regularization can steadily and greatly improve the pair matching accuracy in learning-based super- resolution.
Abstract: This paper proposes a robust single-image super-resolution method for enlarging low quality web image/video degraded by downsampling and compression. To simultaneously improve the resolution and perceptual quality of such web image/video, we bring forward a practical solution which combines adaptive regularization and learning-based super-resolution. The contribution of this work is twofold. First, we propose to analyze the image energy change characteristics during the iterative regularization process, i.e., the energy change ratio between primitive (e.g., edges, ridges and corners) and nonprimitive fields. Based on the revealed convergence property of the energy change ratio, appropriate regularization strength can then be determined to well balance compression artifacts removal and primitive components preservation. Second, we verify that this adaptive regularization can steadily and greatly improve the pair matching accuracy in learning-based super-resolution. Consequently, their combination effectively eliminates the quantization noise and meanwhile faithfully compensates the missing high-frequency details, yielding robust super-resolution performance in the compression scenario. Experimental results demonstrate that our solution produces visually pleasing enlargements for various web images/videos.

Journal ArticleDOI
TL;DR: This work proposes an approach to perform lossy compression on a single node based on a differential pulse code modulation scheme with quantization of the differences between consecutive samples, and discusses how this approach outperforms LTC, a lossy compression algorithm purposely designed to be embedded in sensor nodes, in terms of compression rate and complexity.
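A minimal sketch of the described DPCM-style scheme, quantizing the difference between each sample and the previous reconstructed sample, is given below; the fixed step size and the absence of an entropy coder are simplifications, not details taken from the paper.

```python
# Minimal DPCM-style lossy compressor sketch: quantize the difference between
# each sample and the previous *reconstructed* sample.
import numpy as np

def dpcm_encode(samples, step=4):
    codes, prev = [], 0
    for s in samples:
        q = int(round((s - prev) / step))    # quantized difference
        codes.append(q)
        prev = prev + q * step               # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, step=4):
    out, prev = [], 0
    for q in codes:
        prev = prev + q * step
        out.append(prev)
    return np.array(out)

x = np.array([100, 102, 105, 110, 111, 109, 108])
codes = dpcm_encode(x)
print(codes, dpcm_decode(codes))             # small-magnitude codes are cheap to entropy-code
```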

Proceedings ArticleDOI
14 Mar 2010
TL;DR: It is shown how the proper addition of noise to an image's discrete cosine transform coefficients can sufficiently remove quantization artifacts which act as indicators of JPEG compression while introducing an acceptable level of distortion.
Abstract: The widespread availability of photo editing software has made it easy to create visually convincing digital image forgeries. To address this problem, there has been much recent work in the field of digital image forensics. There has been little work, however, in the field of anti-forensics, which seeks to develop a set of techniques designed to fool current forensic methodologies. In this work, we present a technique for disguising an image's JPEG compression history. An image's JPEG compression history can be used to provide evidence of image manipulation, supply information about the camera used to generate an image, and identify forged regions within an image. We show how the proper addition of noise to an image's discrete cosine transform coefficients can sufficiently remove quantization artifacts which act as indicators of JPEG compression while introducing an acceptable level of distortion. Simulation results are provided to verify the efficacy of this anti-forensic technique.
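Conceptually, the technique perturbs the DCT coefficients so their histogram no longer shows the comb-like quantization pattern; the sketch below uses simple uniform within-bin noise, whereas the paper shapes the added noise to a model of the original coefficient distribution, so treat this only as an approximation of the idea.

```python
# Conceptual anti-forensic dithering sketch: add within-bin noise to
# dequantized DCT coefficients (quantization step q) so the quantization
# comb in the coefficient histogram is smeared out.  Uniform noise is a
# simplification of the paper's model-based noise shaping.
import numpy as np

def dither_coefficients(dequantized_coefs, q):
    noise = np.random.uniform(-q / 2.0, q / 2.0, size=dequantized_coefs.shape)
    return dequantized_coefs + noise
```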

Patent
09 Jun 2010
TL;DR: In this paper, the authors describe a hardware-accelerated lossless data compression system that includes a plurality of hash memories each associated with a different lane of a plurality-of-lanes (each lane including data bytes of a data unit being received by the compression apparatus).
Abstract: Systems for hardware-accelerated lossless data compression are described. At least some embodiments include data compression apparatus that includes a plurality of hash memories each associated with a different lane of a plurality of lanes (each lane including data bytes of a data unit being received by the compression apparatus), an array including array elements each including a plurality of validity bits (each validity bit within an array element corresponding to a different lane of the plurality of lanes), control logic that initiates a read of a hash memory entry if a corresponding validity bit indicates that said entry is valid, and an encoder that compresses at least the data bytes for the lane associated with the hash memory comprising the valid entry if said valid entry comprises data that matches the lane data bytes.

Journal ArticleDOI
TL;DR: The results obtained by performance evaluations using MPEG-4 coded video streams have demonstrated the effectiveness of the proposed NR video quality metric.
Abstract: A no-reference (NR) quality measure for networked video is introduced using information extracted from the compressed bit stream without resorting to complete video decoding. This NR video quality assessment measure accounts for three key factors which affect the overall perceived picture quality of networked video, namely, picture distortion caused by quantization, quality degradation due to packet loss and error propagation, and temporal effects of the human visual system. First, the picture quality in the spatial domain is measured, for each frame, relative to quantization under an error-free transmission condition. Second, picture quality is evaluated with respect to packet loss and the subsequent error propagation. The video frame quality in the spatial domain is, therefore, jointly determined by coding distortion and packet loss. Third, a pooling scheme is devised as the last step of the proposed quality measure to capture the perceived quality degradation in the temporal domain. The results obtained by performance evaluations using MPEG-4 coded video streams have demonstrated the effectiveness of the proposed NR video quality metric.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed scheme improves the coding efficiency even more than 10 dB at most bit rates for compound images and keeps a comparable efficient performance to H.264 for natural images.
Abstract: Compound images are a combination of text, graphics and natural image. They present strong anisotropic features, especially on the text and graphics parts. These anisotropic features often render conventional compression inefficient. Thus, this paper proposes a novel coding scheme from the H.264 intraframe coding. In the scheme, two new intramodes are developed to better exploit spatial correlation in compound images. The first is the residual scalar quantization (RSQ) mode, where intrapredicted residues are directly quantized and coded without transform. The second is the base colors and index map (BCIM) mode that can be viewed as an adaptive color quantization. In this mode, an image block is represented by several representative colors, referred to as base colors, and an index map to compress. Every block selects its coding mode from two new modes and the previous intramodes in H.264 by rate-distortion optimization (RDO). Experimental results show that the proposed scheme improves the coding efficiency even more than 10 dB at most bit rates for compound images and keeps a comparable efficient performance to H.264 for natural images.
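The BCIM mode can be pictured as a per-block palette: a few base colors plus an index map. The sketch below uses a plain k-means-style palette for a single block; the codec's actual base-color selection, prediction, and entropy coding are more elaborate.

```python
# Base-colors-and-index-map (BCIM) style block representation sketch:
# approximate each block by n_colors representative colors and an index map.
import numpy as np

def bcim_block(block, n_colors=4, iters=10):
    """Return (base_colors, index_map) approximating an HxWx3 block."""
    pixels = block.reshape(-1, 3).astype(np.float64)
    # initialize base colors from evenly spaced pixels
    centers = pixels[np.linspace(0, len(pixels) - 1, n_colors).astype(int)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        idx = dists.argmin(axis=1)
        for k in range(n_colors):
            if np.any(idx == k):
                centers[k] = pixels[idx == k].mean(axis=0)
    return centers, idx.reshape(block.shape[:2])

block = np.random.randint(0, 256, (16, 16, 3))
colors, index_map = bcim_block(block)
reconstructed = colors[index_map].astype(np.uint8)   # decode: look up each index
```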

Journal ArticleDOI
TL;DR: The main contributions of this paper are the introduction of the 1D DFT along the temporal direction for watermarking, which enables robustness against video compression, and the Radon transform-based watermark embedding and extraction, which provides robustness against geometric transformations.

Journal ArticleDOI
Jaemoon Kim, Chong-Min Kyung
TL;DR: A lossless EC algorithm for HD video sequences and a related hardware architecture are proposed, consisting of a hierarchical prediction method based on pixel averaging and copying, followed by significant bit truncation (SBT).
Abstract: Increasing the image size of a video sequence aggravates the memory bandwidth problem of a video coding system. Despite many embedded compression (EC) algorithms proposed to overcome this problem, no lossless EC algorithm able to handle high-definition (HD) size video sequences has been proposed thus far. In this paper, a lossless EC algorithm for HD video sequences and related hardware architecture is proposed. The proposed algorithm consists of two steps. The first is a hierarchical prediction method based on pixel averaging and copying. The second step involves significant bit truncation (SBT) which encodes prediction errors in a group with the same number of bits so that the multiple prediction errors are decoded in a clock cycle. The theoretical lower bound of the compression ratio of the SBT coding was also derived. Experimental results have shown a 60% reduction of memory bandwidth on average. Hardware implementation results have shown that a throughput of 14.2 pixels/cycle can be achieved with 36 K gates, which is sufficient to handle HD-size video sequences in real time.
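The SBT idea of coding a whole group of prediction errors with one shared bit width can be sketched as follows; the zig-zag sign mapping and header layout here are illustrative choices, not the paper's exact bitstream format.

```python
# Significant-bit-truncation (SBT) style group coding sketch: all prediction
# errors in a group are written with the same bit width, chosen from the
# largest magnitude in the group.
def zigzag(e):
    """Map a signed integer to a non-negative one (0, -1, 1, -2, ... -> 0, 1, 2, 3, ...)."""
    return 2 * e if e >= 0 else -2 * e - 1

def sbt_encode_group(errors):
    mapped = [zigzag(e) for e in errors]
    width = max(mapped).bit_length() or 1      # shared bit width for the whole group
    bits = "".join(format(v, f"0{width}b") for v in mapped)
    return width, bits

width, bits = sbt_encode_group([3, -2, 0, 5])
print(width, bits)   # decoder reads the width once, then fixed-size fields
```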

Proceedings ArticleDOI
15 Dec 2010
TL;DR: This work presents a full-reference video quality metric geared specifically towards the requirements of Computer Graphics applications as a faster computational alternative to subjective evaluation.
Abstract: Numerous current Computer Graphics methods produce video sequences as their outcome. The merit of these methods is often judged by assessing the quality of a set of results through lengthy user studies. We present a full-reference video quality metric geared specifically towards the requirements of Computer Graphics applications as a faster computational alternative to subjective evaluation. Our metric can compare a video pair with arbitrary dynamic ranges, and comprises a human visual system model for a wide range of luminance levels that predicts distortion visibility through models of luminance adaptation, spatiotemporal contrast sensitivity, and visual masking. We present applications of the proposed metric to quality prediction of HDR video compression and temporal tone mapping, comparison of different rendering approaches and qualities, and assessing the impact of variable frame rate on perceived quality.

Journal ArticleDOI
TL;DR: An efficient algorithm is proposed for improved image compression and reconstruction based on the fuzzy transform, exploiting monotonicity and Lipschitz continuity invariance.
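For reference, F-transform compression is built on the discrete direct and inverse fuzzy transforms; in the usual notation (samples f(p_j) and a fuzzy partition A_1, ..., A_n of the domain, extended to 2-D by a product partition for images) they read:

```latex
% Discrete (direct) fuzzy transform of samples f(p_1),...,f(p_l) with respect
% to a fuzzy partition A_1,...,A_n, and its inverse, as commonly used in
% F-transform image compression.
F_k = \frac{\sum_{j=1}^{l} f(p_j)\, A_k(p_j)}{\sum_{j=1}^{l} A_k(p_j)},
\qquad
\hat{f}(x) = \sum_{k=1}^{n} F_k\, A_k(x).
```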

Proceedings ArticleDOI
24 Mar 2010
TL;DR: This work introduces an alternative Lempel-Ziv text parsing, LZ-End, that converges to the entropy and in practice gets very close to LZ77; LZ-End is ideal as a compression format for highly repetitive sequence databases where access to individual sequences is required.
Abstract: We introduce an alternative Lempel-Ziv text parsing, LZ-End, that converges to the entropy and in practice gets very close to LZ77. LZ-End forces phrase sources to finish at the end of a previous phrase. Whereas most Lempel-Ziv parsings can decompress the text only from the beginning, LZ-End is the only parsing we know of that is able to decompress arbitrary phrases in optimal time, while staying closely competitive with LZ77, especially on highly repetitive collections, where LZ77 excels. Thus LZ-End is ideal as a compression format for highly repetitive sequence databases, where access to individual sequences is required, and it also opens the door to compressed indexing schemes for such collections.

Journal ArticleDOI
TL;DR: It is shown that optimal protocols for noisy channel coding of public or private information over either classical or quantum channels can be directly constructed from two more primitive information-theoretic protocols: privacy amplification and information reconciliation, also known as data compression with side information.
Abstract: We show that optimal protocols for noisy channel coding of public or private information over either classical or quantum channels can be directly constructed from two more primitive information-theoretic tools: privacy amplification and information reconciliation, also known as data compression with side information. We do this in the one-shot scenario of structureless resources, and formulate our results in terms of the smooth min- and max-entropy. In the context of classical information theory, this shows that essentially all two-terminal protocols can be reduced to these two primitives, which are in turn governed by the smooth min- and max-entropies, respectively. In the context of quantum information theory, the recently-established duality of these two protocols means essentially all two-terminal protocols can be constructed using just a single primitive.
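For orientation, the classical, unconditioned min- and max-entropies and their smooth versions can be written as below, in one common convention in which the max-entropy is the Rényi entropy of order 1/2; the conditional and quantum versions the paper actually works with generalize these.

```latex
% Min-/max-entropy of a distribution P_X and their eps-smooth versions,
% with delta(.,.) a suitable distance on distributions (one common convention).
H_{\min}(X) = -\log \max_{x} P_X(x), \qquad
H_{\max}(X) = 2 \log \sum_{x} \sqrt{P_X(x)},
\\[4pt]
H_{\min}^{\varepsilon}(X) = \max_{Q:\,\delta(Q,P_X)\le\varepsilon} H_{\min}(Q),
\qquad
H_{\max}^{\varepsilon}(X) = \min_{Q:\,\delta(Q,P_X)\le\varepsilon} H_{\max}(Q).
```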