
Showing papers on "Data compression published in 2009"


Journal ArticleDOI
Hany Farid1
TL;DR: A technique to detect whether part of an image was initially compressed at a lower quality than the rest of the image is described; the approach is applicable to images of both high and low quality and resolution.
Abstract: When creating a digital forgery, it is often necessary to combine several images, for example, when compositing one person's head onto another person's body. If these images were originally of different JPEG compression quality, then the digital composite may contain a trace of the original compression qualities. To this end, we describe a technique to detect whether part of an image was initially compressed at a lower quality than the rest of the image. This approach is applicable to images of both high and low quality and resolution.

427 citations
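To make the idea above concrete, here is a minimal sketch of a ghost-style probe: resave the image at a range of candidate JPEG qualities and look for regions whose blockwise error dips at a quality below the final save quality. This is a rough illustration of the general principle, not the paper's exact detector; the file name, block size, quality range, and grayscale conversion are placeholder assumptions.

```python
# Sketch only: probe for regions previously saved at a lower JPEG quality.
import io
import numpy as np
from PIL import Image

def jpeg_ghost_maps(img, qualities=range(30, 95, 5), block=16):
    """Return one blockwise squared-error map per candidate JPEG quality."""
    x = np.asarray(img.convert("L"), dtype=np.float64)
    h, w = (x.shape[0] // block) * block, (x.shape[1] // block) * block
    x = x[:h, :w]
    maps = {}
    for q in qualities:
        buf = io.BytesIO()
        img.convert("L").save(buf, format="JPEG", quality=q)
        buf.seek(0)
        y = np.asarray(Image.open(buf), dtype=np.float64)[:h, :w]
        d = (x - y) ** 2
        # Average the squared error over non-overlapping blocks.
        maps[q] = d.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return maps

if __name__ == "__main__":
    maps = jpeg_ghost_maps(Image.open("composite.jpg"))  # hypothetical file name
    # A spliced region originally saved at quality q0 tends to show an unusually
    # low-error ("ghost") patch in maps[q] for q near q0.
```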


Book
09 Nov 2009
TL;DR: This book provides a comprehensive reference for the many different types and methods of compression, including a detailed and helpful taxonomy, analysis of the most common methods, and discussions of the use and comparative benefits of methods and how to use them.
Abstract: Data compression is one of the most important fields and tools in modern computing. From archiving data, to CD-ROMs, and from coding theory to image analysis, many facets of modern computing rely upon data compression. This book provides a comprehensive reference for the many different types and methods of compression. Included are a detailed and helpful taxonomy, analysis of the most common methods, and discussions of the use and comparative benefits of methods, along with descriptions of how to use them. Detailed descriptions and explanations of the most well-known and frequently used compression methods are covered in a self-contained fashion, with an accessible style and technical level for specialists and non-specialists.

322 citations


Journal ArticleDOI
TL;DR: The results show that, although its rate-distortion (R-D) performance is worse, platelet-based depth coding outperforms H.264, due to improved sharp edge preservation.
Abstract: This article investigates the interaction between different techniques for depth compression and view synthesis rendering with multiview video plus scene depth data. Two different approaches for depth coding are compared, namely H.264/MVC, using temporal and inter-view reference images for efficient prediction, and the novel platelet-based coding algorithm, characterized by being adapted to the special characteristics of depth-images. Since depth-images are a 2D representation of the 3D scene geometry, depth-image errors lead to geometry distortions. Therefore, the influence of geometry distortions resulting from coding artifacts is evaluated for both coding approaches in two different ways. First, the variation of 3D surface meshes is analyzed using the Hausdorff distance, and second, the distortion is evaluated for 2D view synthesis rendering, where color and depth information are used together to render virtual intermediate camera views of the scene. The results show that, although its rate-distortion (R-D) performance is worse, platelet-based depth coding outperforms H.264, due to improved sharp edge preservation. Therefore, depth coding needs to be evaluated with respect to geometry distortions.

287 citations


Journal ArticleDOI
TL;DR: A DCT-based JND model for monochrome pictures is proposed that incorporates the spatial contrast sensitivity function (CSF), the luminance adaptation effect, and the contrast masking effect based on block classification, and is shown to be consistent with the human visual system.
Abstract: In the field of image and video processing, an effective compression algorithm should remove not only the statistically redundant information but also the perceptually insignificant components from the pictures. The just-noticeable distortion (JND) profile is an efficient model to represent those perceptual redundancies. Human eyes are usually not sensitive to distortion below the JND threshold. In this paper, a DCT-based JND model for monochrome pictures is proposed. This model incorporates the spatial contrast sensitivity function (CSF), the luminance adaptation effect, and the contrast masking effect based on block classification. Gamma correction is also considered to compensate for the original luminance adaptation effect, which gives more accurate results. In order to extend the proposed JND profile to video images, the temporal modulation factor is included by incorporating the temporal CSF and eye movement compensation. Moreover, a psychophysical experiment was designed to parameterize the proposed model. Experimental results show that the proposed model is consistent with the human visual system (HVS). Compared with other JND profiles, the proposed model can tolerate more distortion and has much better perceptual quality. This model can be easily applied in many related areas, such as compression, watermarking, error protection, perceptual distortion metrics, and so on.

257 citations
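The model's multiplicative structure (base CSF threshold times luminance adaptation times contrast masking) can be sketched as below. All slopes, breakpoints, and the masking exponent are placeholders rather than the paper's psychophysically fitted parameters; block classification and the temporal factor are omitted, and the base-threshold matrix t_base is assumed to be supplied from a spatial CSF table.

```python
# Sketch of a multiplicative DCT-domain JND threshold (placeholder constants).
import numpy as np
from scipy.fft import dctn

def jnd_thresholds(block, t_base):
    """block: 8x8 luminance values in [0, 255]; t_base: 8x8 base thresholds from a spatial CSF."""
    mean_lum = block.mean()
    # Luminance adaptation: more distortion is tolerated in very dark or very bright areas.
    if mean_lum <= 60:
        f_lum = 1.0 + (60.0 - mean_lum) / 60.0        # placeholder slope
    elif mean_lum >= 170:
        f_lum = 1.0 + (mean_lum - 170.0) / 85.0       # placeholder slope
    else:
        f_lum = 1.0
    base = t_base * f_lum
    # Contrast masking: large coefficients mask distortion at their own frequency.
    coeffs = dctn(block - 128.0, norm="ortho")
    f_mask = np.maximum(1.0, (np.abs(coeffs) / base) ** 0.36)  # placeholder exponent
    f_mask[0, 0] = 1.0                                          # no extra masking on DC
    return base * f_mask                                        # per-coefficient JND
```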


Proceedings ArticleDOI
19 Apr 2009
TL;DR: A distributed compressive video sensing (DCVS) framework is proposed to simultaneously capture and compress video data, where almost all computation burdens can be shifted to the decoder, resulting in a very low-complexity encoder.
Abstract: Low-complexity video encoding is applicable to several emerging applications. Recently, distributed video coding (DVC) has been proposed to reduce encoding complexity to the order of that for still image encoding. In addition, compressive sensing (CS) has been applied to directly and efficiently capture compressed image data. In this paper, by integrating the respective characteristics of DVC and CS, a distributed compressive video sensing (DCVS) framework is proposed to simultaneously capture and compress video data, where almost all computation burdens can be shifted to the decoder, resulting in a very low-complexity encoder. At the decoder, compressed video can be efficiently reconstructed using the modified GPSR (gradient projection for sparse reconstruction) algorithm. With the assistance of the proposed initialization and stopping criteria for GPSR, derived from statistical dependencies among successive video frames, our modified GPSR algorithm can terminate faster and reconstruct video of better quality. The performance of our DCVS method is demonstrated via simulations to outperform three known CS reconstruction algorithms.

228 citations
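The decoder-side reconstruction can be illustrated with a generic stand-in for the modified GPSR described above: a plain ISTA solver warm-started from the previous reconstructed frame, which captures the idea of exploiting inter-frame dependencies for initialization. The measurement matrix, step size, threshold, sparsifying basis (identity), and problem sizes below are all assumptions for illustration.

```python
# Sketch: sparse recovery of a CS-sampled frame, warm-started from the previous frame.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_recover(y, phi, x_init, lam=0.05, n_iter=200):
    """Recover x from y = phi @ x, initialized at x_init (e.g. the previous frame)."""
    L = np.linalg.norm(phi, 2) ** 2          # Lipschitz constant of the gradient
    x = x_init.copy()
    for _ in range(n_iter):
        x = soft(x + phi.T @ (y - phi @ x) / L, lam / L)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 256, 96
    phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
    x_prev = np.zeros(n); x_prev[rng.choice(n, 8, replace=False)] = 1.0
    x_cur = x_prev.copy(); x_cur[rng.choice(n, 2, replace=False)] += 0.5  # small frame change
    y = phi @ x_cur
    x_hat = ista_recover(y, phi, x_init=x_prev)       # warm start speeds convergence
    print(float(np.linalg.norm(x_hat - x_cur)))
```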


Journal ArticleDOI
TL;DR: FPC is described and evaluated, a fast lossless compression algorithm for linear streams of 64-bit floating-point data that works well on hard-to-compress scientific data sets and meets the throughput demands of high-performance systems.
Abstract: Many scientific programs exchange large quantities of double-precision data between processing nodes and with mass storage devices. Data compression can reduce the number of bytes that need to be transferred and stored. However, data compression is only likely to be employed in high-end computing environments if it does not impede the throughput. This paper describes and evaluates FPC, a fast lossless compression algorithm for linear streams of 64-bit floating-point data. FPC works well on hard-to-compress scientific data sets and meets the throughput demands of high-performance systems. A comparison with five lossless compression schemes, BZIP2, DFCM, FSD, GZIP, and PLMI, on 4 architectures and 13 data sets shows that FPC compresses and decompresses one to two orders of magnitude faster than the other algorithms at the same geometric-mean compression ratio. Moreover, FPC provides a guaranteed throughput as long as the prediction tables fit into the L1 data cache. For example, on a 1.6-GHz Itanium 2 server, the throughput is 670 Mbytes/s regardless of what data are being compressed.

224 citations
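The core of FPC-style coding can be sketched as follows: two hash-table predictors guess each 64-bit value, the closer prediction is XORed with the true bits, and only the non-zero low-order bytes of the residual plus a small header are emitted. The hash functions, table sizes, and header layout below are simplified stand-ins, not FPC's published definitions, and the matching decoder is omitted.

```python
# Simplified sketch of FPC-style predictive coding for a stream of float64 values.
import struct

TABLE_BITS = 16
MASK = (1 << TABLE_BITS) - 1

def fpc_encode(values):
    fcm = [0] * (1 << TABLE_BITS)     # "last value" predictor table
    dfcm = [0] * (1 << TABLE_BITS)    # "last difference" predictor table
    h1 = h2 = 0
    last = 0
    out = bytearray()
    for v in values:
        bits = struct.unpack("<Q", struct.pack("<d", v))[0]
        p1 = fcm[h1]
        p2 = (dfcm[h2] + last) & 0xFFFFFFFFFFFFFFFF
        r1, r2 = bits ^ p1, bits ^ p2
        sel, resid = (0, r1) if r1 <= r2 else (1, r2)
        nz = (resid.bit_length() + 7) // 8             # residual bytes to keep
        out.append((sel << 4) | nz)                    # 1-byte header (sketch)
        out += resid.to_bytes(8, "little")[:nz]
        # Update predictor tables and hashes (simplified update rules).
        diff = (bits - last) & 0xFFFFFFFFFFFFFFFF
        fcm[h1] = bits
        dfcm[h2] = diff
        h1 = ((h1 << 6) ^ (bits >> 48)) & MASK
        h2 = ((h2 << 2) ^ (diff >> 40)) & MASK
        last = bits
    return bytes(out)
```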


Patent
30 Mar 2009
TL;DR: In this article, a system for deduplicating data comprises a card operable to receive at least one data block and a processor on the card that generates a hash for each data block.
Abstract: A system for deduplicating data comprises a card operable to receive at least one data block and a processor on the card that generates a hash for each data block. The system further comprises a first module that determines a processing status for the hash and a second module that discards duplicate hashes and their data blocks and writes unique hashes and their data blocks to a computer readable medium. In one embodiment, the processor also compresses each data block using a compression algorithm.

209 citations
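The patent's hash-and-discard flow reduces to a few lines. The sketch below is a generic software illustration, not the claimed card-based implementation; the block size, SHA-256, and zlib are assumptions.

```python
# Sketch: hash each block, keep only unique blocks (compressed), record the layout.
import hashlib
import zlib

def deduplicate(stream, store, index, block_size=4096):
    """stream: file-like object; store: dict hash -> compressed bytes; index: ordered hash list."""
    while True:
        block = stream.read(block_size)
        if not block:
            break
        digest = hashlib.sha256(block).hexdigest()
        index.append(digest)                    # keeps the original layout for reconstruction
        if digest not in store:                 # unique block: compress and keep it
            store[digest] = zlib.compress(block)

def reconstruct(store, index):
    return b"".join(zlib.decompress(store[d]) for d in index)
```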


Proceedings ArticleDOI
01 Sep 2009
TL;DR: The effectiveness of LIVEcut is shown through timing comparisons to other interactive methods, accuracy comparisons to unsupervised methods, and qualitatively through selections on various video sequences.
Abstract: Video sequences contain many cues that may be used to segment objects in them, such as color, gradient, color adjacency, shape, temporal coherence, camera and object motion, and easily-trackable points. This paper introduces LIVEcut, a novel method for interactively selecting objects in video sequences by extracting and leveraging as much of this information as possible. Using a graph-cut optimization framework, LIVEcut propagates the selection forward frame by frame, allowing the user to correct any mistakes along the way if needed. Enhanced methods of extracting many of the features are provided. In order to use the most accurate information from the various potentially-conflicting features, each feature is automatically weighted locally based on its estimated accuracy using the previous implicitly-validated frame. Feature weights are further updated by learning from the user corrections required in the previous frame. The effectiveness of LIVEcut is shown through timing comparisons to other interactive methods, accuracy comparisons to unsupervised methods, and qualitatively through selections on various video sequences.

199 citations


Proceedings ArticleDOI
18 Mar 2009
TL;DR: Unlike conventional DVC schemes, the DISCOS framework can perform most encoding operations in the analog domain with very low complexity, making it a promising candidate for real-time, practical applications where the analog-to-digital conversion is expensive, e.g., in Terahertz imaging.
Abstract: This paper proposes a novel framework called Distributed Compressed Video Sensing (DISCOS), a solution for Distributed Video Coding (DVC) based on Compressed Sensing (CS) theory. The DISCOS framework compressively samples each video frame independently at the encoder and recovers video frames jointly at the decoder by exploiting an interframe sparsity model and by performing sparse recovery with side information. Simulation results show that DISCOS significantly outperforms the baseline CS-based scheme of intraframe-coding and intraframe-decoding. Moreover, our DISCOS framework can perform most encoding operations in the analog domain with very low complexity. This makes DISCOS a promising candidate for real-time, practical applications where the analog-to-digital conversion is expensive, e.g., in Terahertz imaging.

183 citations


Proceedings ArticleDOI
01 Sep 2009
TL;DR: An approximate representation of bag-of-features is proposed, obtained by projecting the corresponding histogram onto a set of pre-defined sparse projection functions, producing several image descriptors; the method is at least one order of magnitude faster than standard bag-of-features while providing excellent search quality.
Abstract: One of the main limitations of image search based on bag-of-features is the memory usage per image. Only a few million images can be handled on a single machine in reasonable response time. In this paper, we first evaluate how the memory usage is reduced by using lossless index compression. We then propose an approximate representation of bag-of-features obtained by projecting the corresponding histogram onto a set of pre-defined sparse projection functions, producing several image descriptors. Coupled with a proper indexing structure, an image is represented by a few hundred bytes. A distance expectation criterion is then used to rank the images. Our method is at least one order of magnitude faster than standard bag-of-features while providing excellent search quality.

182 citations
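A generic version of the histogram-projection step can be sketched with fixed signed sparse random projections; the paper's specific projection functions, indexing structure, and distance-expectation ranking are not reproduced, and the dimensions and density below are arbitrary assumptions.

```python
# Sketch: shrink a bag-of-features histogram with pre-defined sparse projections.
import numpy as np

def make_sparse_projections(n_visual_words, n_proj=32, density=0.01, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.random((n_proj, n_visual_words))
    signs = rng.choice([-1.0, 1.0], size=P.shape)
    return np.where(P < density, signs, 0.0)      # mostly zeros, a few +/-1 entries

def project_histogram(hist, P):
    return (P @ hist).astype(np.float32)          # a few dozen values per image

# Usage: compute project_histogram(bof_histogram, P) for each database image,
# then rank candidates by distance between the projected descriptors.
```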


Journal ArticleDOI
TL;DR: This paper proposes a simple lossless entropy compression (LEC) algorithm which can be implemented in a few lines of code, requires very low computational power, compresses data on the fly and uses a very small dictionary whose size is determined by the resolution of the analog-to-digital converter.
Abstract: Energy is a primary constraint in the design and deployment of wireless sensor networks (WSNs), since sensor nodes are typically powered by batteries with a limited capacity. Energy efficiency is generally achieved by reducing radio communication, for instance, limiting transmission/reception of data. Data compression can be a valuable tool in this direction. The limited resources available in a sensor node demand, however, the development of specifically designed compression algorithms. In this paper, we propose a simple lossless entropy compression (LEC) algorithm which can be implemented in a few lines of code, requires very low computational power, compresses data on the fly and uses a very small dictionary whose size is determined by the resolution of the analog-to-digital converter. We have evaluated the effectiveness of LEC by compressing four temperature and relative humidity data sets collected by real WSNs, and solar radiation, seismic and ECG data sets. We have obtained compression ratios up to 70.81% and 62.08% for temperature and relative humidity data sets, respectively, and of the order of 70% for the other data sets. Then, we have shown that LEC outperforms two specifically designed compression algorithms for WSNs. Finally, we have compared LEC with gzip, bzip2, rar, classical Huffman and arithmetic encodings.
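The flavor of LEC-style coding is easy to sketch: each delta between consecutive ADC readings is mapped to a bit-length category, the category is sent with a short fixed prefix code, and the delta itself follows in that many bits, JPEG-DC style. The prefix table, the 16-bit raw first sample, and the string-of-bits output below are illustrative assumptions rather than the published codec.

```python
# Sketch of LEC-style difference coding for small integer sensor readings.
PREFIX = {0: "00", 1: "010", 2: "011", 3: "100", 4: "101",
          5: "110", 6: "1110", 7: "11110", 8: "111110"}   # placeholder prefix codes

def lec_encode(samples):
    bits, prev = [], samples[0]
    bits.append(format(prev, "016b"))           # send first sample raw (assumes 16-bit ADC)
    for s in samples[1:]:
        d, prev = s - prev, s
        n = abs(d).bit_length()                 # category: 0 means d == 0
        bits.append(PREFIX[n])
        if n:
            value = d if d > 0 else d + (1 << n) - 1   # JPEG-style mapping of negatives
            bits.append(format(value, "0{}b".format(n)))
    return "".join(bits)

# Example: lec_encode([305, 306, 306, 303]) codes each small delta in a few bits.
```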

Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed method is insensitive to heart rate variation, introduces negligible error in the processed PPG signals due to the additional processing, preserves all the morphological features of the PPG, provides 35 dB reduction in motion artifacts, and achieves a data compression factor of 12.
Abstract: Pulse oximeters require artifact-free clean photoplethysmograph (PPG) signals obtained at red and infrared (IR) wavelengths for the estimation of the level of oxygen saturation (SpO2) in the arterial blood of a patient. Movement of a patient corrupts a PPG signal with motion artifacts and introduces large errors in the computation of SpO2. A novel method for removing motion artifacts from corrupted PPG signals by applying Fourier series analysis on a cycle-by-cycle basis is presented in this paper. Aside from artifact reduction, the proposed method also provides data compression. Experimental results indicate that the proposed method is insensitive to heart rate variation, introduces negligible error in the processed PPG signals due to the additional processing, preserves all the morphological features of the PPG, provides 35 dB reduction in motion artifacts, and achieves a data compression factor of 12.
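The cycle-by-cycle idea can be illustrated directly: keep only the first K harmonics of each PPG cycle, which both suppresses in-band artifact and represents the cycle with 2K+1 coefficients instead of its raw samples. Cycle boundaries are assumed to come from a separate beat detector, and K is an arbitrary choice, not the paper's setting.

```python
# Sketch: truncated Fourier-series representation of one PPG cycle.
import numpy as np

def fourier_series_cycle(cycle, K=8):
    """cycle: 1-D array of samples covering exactly one PPG period."""
    n = len(cycle)
    t = np.arange(n) / n                        # one period mapped to [0, 1)
    a0 = cycle.mean()
    coeffs, recon = [a0], np.full(n, a0)
    for k in range(1, K + 1):
        c = np.cos(2 * np.pi * k * t)
        s = np.sin(2 * np.pi * k * t)
        ak = 2.0 * np.mean(cycle * c)           # Fourier coefficients of the cycle
        bk = 2.0 * np.mean(cycle * s)
        coeffs += [ak, bk]
        recon = recon + ak * c + bk * s
    return np.array(coeffs), recon              # 2K+1 numbers vs n raw samples
```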

Proceedings ArticleDOI
06 May 2009
TL;DR: This paper considers the application of CS to video signals in order to lessen the sensing and compression burdens in single- and multi-camera imaging systems, and develops a coarse-to-fine reconstruction algorithm for CS recovery.
Abstract: Compressive Sensing (CS) allows the highly efficient acquisition of many signals that could be difficult to capture or encode using conventional methods. From a relatively small number of random measurements, a high-dimensional signal can be recovered if it has a sparse or near-sparse representation in a basis known to the decoder. In this paper, we consider the application of CS to video signals in order to lessen the sensing and compression burdens in single- and multi-camera imaging systems. In standard video compression, motion compensation and estimation techniques have led to improved sparse representations that are more easily compressible; we adapt these techniques for the problem of CS recovery. Using a coarse-to-fine reconstruction algorithm, we alternate between the tasks of motion estimation and motion-compensated wavelet-domain signal recovery. We demonstrate that our algorithm allows the recovery of video sequences from fewer measurements than either frame-by-frame or inter-frame difference recovery methods.

Proceedings ArticleDOI
06 May 2009
TL;DR: Depth enhanced stereo (DES) is introduced as a flexible, generic, and efficient 3D video format that can unify all others and serve as a universal 3D video format in the future.
Abstract: Recently, popularity of 3D video has been growing significantly and it may turn into a home user mass market in the near future. However, diversity of 3D video content formats is still hampering wide success. An overview of available and emerging 3D video formats and standards is given, which are mostly related to specific types of applications and 3D displays. This includes conventional stereo video, multiview video, video plus depth, multiview video plus depth and layered depth video. Features and limitations are explained. Finally, depth enhanced stereo (DES) is introduced as a flexible, generic, and efficient 3D video format that can unify all others and serve as universal 3D video format in the future.

Journal ArticleDOI
Jianwei Ma1
TL;DR: A new sampling theory named compressed sensing is applied to aerospace remote sensing to reduce data acquisition and imaging cost; it would lead to new instruments with less storage space, less power consumption, and smaller size than currently used charge-coupled device cameras, which would match effective needs particularly for probes sent very far away.
Abstract: In this letter, we apply a new sampling theory named compressed sensing (CS) for aerospace remote sensing to reduce data acquisition and imaging cost. Only single or multiple pixels need to be recorded directly, without an additional compression step, which mitigates the problems of power consumption, data storage, and transmission without degrading the spatial resolution and quality of pictures. The CS remote sensing includes two steps: encoding imaging and decoding recovery. A noiselet-transform-based single-pixel imaging and a random Fourier-sampling-based multipixel imaging are alternatively used for encoding, and an iterative curvelet thresholding method is used for decoding. The new sensing mechanism shifts onboard imaging cost to offline decoding recovery. It would lead to new instruments with less storage space, less power consumption, and smaller size than currently used charge-coupled device (CCD) cameras, which would match effective needs particularly for probes sent very far away. Numerical experiments on potential applications for the Chinese Chang'e-1 lunar probe are presented.

Journal ArticleDOI
TL;DR: An automatic key frame extraction method dedicated to summarizing consumer video clips acquired from digital cameras is developed, and its effectiveness is demonstrated by comparing the results with two alternative methods against the ground truth agreed by multiple judges.
Abstract: Extracting key frames from video is of great interest in many applications, such as video summary, video organization, video compression, and prints from video. Key frame extraction is not a new problem but existing literature has focused primarily on sports or news video. In the personal or consumer video space, the biggest challenges for key frame selection are the unconstrained content and lack of any pre-imposed structures. First, in a psychovisual study, we conduct ground truth collection of key frames from video clips taken by digital cameras (as opposed to camcorders) using both first- and third-party judges. The goals of this study are to: 1) create a reference database of video clips reasonably representative of the consumer video space; 2) identify consensus key frames by which automated algorithms can be compared and judged for effectiveness, i.e., ground truth; and 3) uncover the criteria used by both first- and third-party human judges so these criteria can influence algorithm design. Next, we develop an automatic key frame extraction method dedicated to summarizing consumer video clips acquired from digital cameras. Analysis of spatio-temporal changes over time provides semantically meaningful information about the scene and the camera operator's general intents. In particular, camera and object motion are estimated and used to derive motion descriptors. A video clip is segmented into homogeneous parts based on major types of camera motion (e.g., pan, zoom, pause, steady). Dedicated rules are used to extract candidate key frames from each segment. In addition, confidence measures are computed for the candidates to enable ranking in semantic relevance. This method is scalable so that one can produce any desired number of key frames from the candidates. Finally, we demonstrate the effectiveness of our method by comparing the results with two alternative methods against the ground truth agreed by multiple judges.

Proceedings ArticleDOI
06 May 2009
TL;DR: A new Distributed Video Coding algorithm based on Compressive Sampling principles, which can be useful in those video applications that require very low complex encoders but is less efficient than another state-of-the-art DVC technique.
Abstract: In this paper, we propose a new Distributed Video Coding (DVC) algorithm based on Compressive Sampling principles. Our encoding algorithm transmits a set of measurements of every frame block. Using these measurements, the decoder finds an approximation of each block as a linear combination of a small number of blocks in previously transmitted frames. Thanks to the simplicity of the encoding, our algorithm can be useful in those video applications that require very low complex encoders. However, our algorithm is less efficient than another state-of-the-art DVC technique.

BookDOI
28 Dec 2009
TL;DR: A survey of recent results in the field of compression of remotely sensed 3D data is provided, with a particular interest in hyperspectral imagery, including lossy techniques where there is a tradeoff between the compression achieved and the quality of the decompressed image.
Abstract: Hyperspectral Data Compression provides a survey of recent results in the field of compression of remote sensed 3D data, with a particular interest in hyperspectral imagery. Chapter 1 addresses compression architecture, and reviews and compares compression methods. Chapters 2 through 4 focus on lossless compression (where the decompressed image must be bit for bit identical to the original). Chapter 5, contributed by the editors, describes a lossless algorithm based on vector quantization with extensions to near lossless and possibly lossy compression for efficient browsing and pure pixel classification. Chapter 6 deals with near lossless compression, while Chapter 7 considers lossy techniques constrained by almost perfect classification. Chapters 8 through 12 address lossy compression of hyperspectral imagery, where there is a tradeoff between compression achieved and the quality of the decompressed image. Chapter 13 examines artifacts that can arise from lossy compression.

Journal ArticleDOI
TL;DR: A state-of-the-art, scalable electrophysiology platform designed for acquisition, compression, encryption, and storage of large-scale data is described that incorporates lossless data compression using range-encoded differences, a 32-bit cyclic redundancy checksum to ensure data integrity, and 128-bit encryption for protection of patient information.
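A toy version of such a storage path, with zlib standing in for the platform's range-coded difference compression and with encryption omitted, looks like the sketch below; the block layout is an assumption for illustration only.

```python
# Sketch: difference-encode a block of samples, compress, and append a CRC-32.
import zlib
import numpy as np

def pack_block(samples):
    samples = np.asarray(samples)
    deltas = np.diff(samples, prepend=0).astype(np.int32)   # first delta is the first sample
    payload = zlib.compress(deltas.tobytes())
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "little")

def unpack_block(blob):
    payload, crc = blob[:-4], int.from_bytes(blob[-4:], "little")
    if zlib.crc32(payload) != crc:
        raise ValueError("block failed CRC check")            # integrity check on read-back
    deltas = np.frombuffer(zlib.decompress(payload), dtype=np.int32)
    return np.cumsum(deltas)                                  # undo the difference coding
```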

Proceedings ArticleDOI
16 Mar 2009
TL;DR: This paper proposes a rate-efficient codec designed for tree-based retrieval by encoding a tree histogram, which can achieve a more than 5x rate reduction compared to sending compressed feature descriptors.
Abstract: For mobile image matching applications, a mobile device captures a query image, extracts descriptive features, and transmits these features wirelessly to a server. The server recognizes the query image by comparing the extracted features to its database and returns information associated with the recognition result. For slow links, query feature compression is crucial for low-latency retrieval. Previous image retrieval systems transmit compressed feature descriptors, which is well suited for pairwise image matching. For fast retrieval from large databases, however, scalable vocabulary trees are commonly employed. In this paper, we propose a rate-efficient codec designed for tree-based retrieval. By encoding a tree histogram, our codec can achieve a more than 5x rate reduction compared to sending compressed feature descriptors. By discarding the order amongst a list of features, histogram coding requires 1.5x lower rate than sending a tree node index for every feature. A statistical analysis is performed to study how the entropy of encoded symbols varies with tree depth and the number of features.
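The tree-histogram idea can be sketched as follows: quantize each query feature to a vocabulary-tree leaf, keep only the per-leaf counts (discarding feature order), and send the sorted non-empty leaves with gap-coded ids. The varint coding and the vocabulary_tree.quantize call below are illustrative assumptions, not the paper's entropy coder or API.

```python
# Sketch: order-free histogram coding of tree-quantized features.
from collections import Counter

def varint(n):
    """LEB128-style variable-length encoding of a non-negative integer."""
    out = bytearray()
    while True:
        b, n = n & 0x7F, n >> 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def encode_tree_histogram(leaf_ids):
    hist = Counter(leaf_ids)                 # order among features is discarded
    out, prev = bytearray(), 0
    out += varint(len(hist))
    for leaf in sorted(hist):
        out += varint(leaf - prev)           # gap between successive non-empty leaves
        out += varint(hist[leaf] - 1)        # counts are >= 1
        prev = leaf
    return bytes(out)

# Usage (hypothetical API):
# leaf_ids = [vocabulary_tree.quantize(f) for f in query_features]
# payload = encode_tree_histogram(leaf_ids)
```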

Journal ArticleDOI
TL;DR: A compression scheme that combines efficient storage with fast retrieval for the information in a node and exploits the properties of the Web Graph without assuming an ordering of the URLs, so that it may be applied to more general graphs.
Abstract: The Web Graph is a large-scale graph that does not fit in main memory, so that lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval for the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so that it may be applied to more general graphs. Tests on some datasets in use achieve space savings of about 10% over existing methods.

Journal ArticleDOI
TL;DR: A quantitative comparison between the energy costs associated with direct transmission of uncompressed images and sensor platform-based JPEG compression followed by transmission of the compressed image data is presented.
Abstract: One of the most important goals of current and future sensor networks is energy-efficient communication of images. This paper presents a quantitative comparison between the energy costs associated with 1) direct transmission of uncompressed images and 2) sensor platform-based JPEG compression followed by transmission of the compressed image data. JPEG compression computations are mapped onto various resource-constrained platforms using a design environment that allows computation using the minimum integer and fractional bit-widths needed in view of other approximations inherent in the compression process and choice of image quality parameters. Advanced applications of JPEG, such as region of interest coding and successive/progressive transmission, are also examined. Detailed experimental results examining the tradeoffs in processor resources, processing/transmission time, bandwidth utilization, image quality, and overall energy consumption are presented.
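The trade-off being measured reduces to a simple energy balance: direct transmission costs the raw byte count times the per-byte radio energy, while compress-then-send costs the JPEG computation plus the smaller payload. The constants in the sketch below are illustrative placeholders, not measurements from the paper.

```python
# Back-of-the-envelope comparison of the two strategies in the abstract.
def total_energy(raw_bytes, compressed_bytes, e_tx_per_byte, e_compress):
    e_direct = raw_bytes * e_tx_per_byte                     # send uncompressed
    e_local = e_compress + compressed_bytes * e_tx_per_byte  # compress on-node, then send
    return e_direct, e_local

if __name__ == "__main__":
    # Hypothetical numbers: 320x240 8-bit image, 10:1 JPEG ratio,
    # 1.2 uJ/byte radio cost, 15 mJ for on-node JPEG encoding.
    direct, local = total_energy(320 * 240, 320 * 240 // 10, 1.2e-6, 15e-3)
    print(f"direct: {direct*1e3:.1f} mJ, compress-then-send: {local*1e3:.1f} mJ")
```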

Journal ArticleDOI
TL;DR: This paper proposes a practical approach of uniform down sampling in image space and yet making the sampling adaptive by spatially varying, directional low-pass prefiltering, which outperforms JPEG 2000 in PSNR measure at low to medium bit rates and achieves superior visual quality.
Abstract: Recently, many researchers started to challenge a long-standing practice of digital photography, oversampling followed by compression, and to pursue more intelligent sparse sampling techniques. In this paper, we propose a practical approach of uniform down-sampling in image space while making the sampling adaptive by spatially varying, directional low-pass prefiltering. The resulting down-sampled prefiltered image remains a conventional square sample grid, and, thus, it can be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then upconverts it to the original resolution in a constrained least squares restoration process, using a 2-D piecewise autoregressive model and the knowledge of directional low-pass prefiltering. The proposed compression approach of collaborative adaptive down-sampling and upconversion (CADU) outperforms JPEG 2000 in PSNR measure at low to medium bit rates and achieves superior visual quality, as well. The superior low bit-rate performance of the CADU approach seems to suggest that oversampling not only wastes hardware resources and energy but can also be counterproductive to image quality given a tight bit budget.

Journal ArticleDOI
01 Sep 2009
TL;DR: This paper proposes a method that combines JPEG-LS and an interframe coding with motion vectors to enhance the compression performance over using JPEG-LS alone, achieving average compression gains of 13.3% and 26.3% over the methods of using JPEG-LS and JPEG 2000 alone, respectively.
Abstract: Hospitals and medical centers produce an enormous amount of digital medical images every day, especially in the form of image sequences, which requires considerable storage space. One solution could be the application of lossless compression. Among available methods, JPEG-LS has excellent coding performance. However, it only compresses a single picture with intracoding and does not utilize the interframe correlation among pictures. Therefore, this paper proposes a method that combines JPEG-LS and an interframe coding with motion vectors to enhance the compression performance of using JPEG-LS alone. Since the interframe correlation between two adjacent images in a medical image sequence is usually not as high as that in a general video image sequence, the interframe coding is activated only when the interframe correlation is high enough. With six capsule endoscope image sequences under test, the proposed method achieves average compression gains of 13.3% and 26.3% over the methods of using JPEG-LS and JPEG2000 alone, respectively. Similarly, for an MRI image sequence, coding gains of 77.5% and 86.5% are correspondingly obtained.
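The activation rule described above can be sketched as a per-frame mode decision; zlib stands in for JPEG-LS, the correlation threshold is an assumed value, and motion search is omitted.

```python
# Sketch: switch between intra coding and interframe residual coding
# based on the correlation between adjacent frames.
import zlib
import numpy as np

CORR_THRESHOLD = 0.9     # assumed activation threshold

def encode_frame(frame, prev_frame=None):
    if prev_frame is not None:
        corr = np.corrcoef(frame.ravel(), prev_frame.ravel())[0, 1]
        if corr >= CORR_THRESHOLD:
            residual = frame.astype(np.int16) - prev_frame.astype(np.int16)
            return b"P" + zlib.compress(residual.tobytes())          # interframe mode
    return b"I" + zlib.compress(frame.astype(np.int16).tobytes())    # intra mode
```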

Journal ArticleDOI
TL;DR: The results of a comprehensive workload evaluation of the MediaBench II benchmark suite demonstrate the high processing regularity of video workloads as compared with general-purpose workloads, and illustrate how the growing complexity of the emerging video standards is beginning to negatively impact video workload characteristics.

Patent
31 Jul 2009
TL;DR: Column-based data encoding is described, where raw data to be compressed is organized by columns; as first and second layers of data-size reduction, dictionary encoding and/or value encoding are applied to the column data to create integer sequences that correspond to the columns, and a hybrid greedy run-length encoding and bit packing compression algorithm further compacts the data according to an analysis of bit savings.
Abstract: The subject disclosure relates to column based data encoding where raw data to be compressed is organized by columns, and then, as first and second layers of reduction of the data size, dictionary encoding and/or value encoding are applied to the data as organized by columns, to create integer sequences that correspond to the columns. Next, a hybrid greedy run length encoding and bit packing compression algorithm further compacts the data according to an analysis of bit savings. Synergy of the hybrid data reduction techniques in concert with the column-based organization, coupled with gains in scanning and querying efficiency owing to the representation of the compact data, results in substantially improved data compression at a fraction of the cost of conventional systems.
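The layered pipeline in the abstract can be illustrated with a toy version: dictionary-encode a column into integer ids, then greedily decide per run whether run-length coding pays off, leaving short runs for bit packing. The run-length threshold and the output format are illustrative assumptions, not the patented encoding.

```python
# Sketch: dictionary encoding followed by a greedy run-length / bit-packing split.
def dictionary_encode(column):
    dictionary, ids = {}, []
    for v in column:
        ids.append(dictionary.setdefault(v, len(dictionary)))
    return dictionary, ids

def rle_or_pack(ids, min_run=4):
    """Greedy pass: long runs become (value, length) pairs, the rest stay as literals."""
    out, i = [], 0
    while i < len(ids):
        j = i
        while j < len(ids) and ids[j] == ids[i]:
            j += 1
        if j - i >= min_run:
            out.append(("run", ids[i], j - i))        # worth run-length encoding
        else:
            out.extend(("lit", v) for v in ids[i:j])  # left for bit packing
        i = j
    return out

# Usage:
# d, ids = dictionary_encode(["red", "red", "red", "red", "blue", "red"])
# print(rle_or_pack(ids))   # [('run', 0, 4), ('lit', 1), ('lit', 0)]
```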

Journal ArticleDOI
TL;DR: This paper proposes a novel mapping scheme, known as the rhombic dodecahedron map (RD map) to represent data over the spherical domain, and shows that with its ultra-fast data indexing capability, it can playback omnidirectional videos with very high frame rates on conventional PCs with GPU support.
Abstract: Omnidirectional videos are usually mapped to a planar domain for encoding with off-the-shelf video compression standards. However, existing work typically neglects the effect of the sphere-to-plane mapping. In this paper, we show that by carefully designing the mapping, we can improve the visual quality, stability and compression efficiency of encoding omnidirectional videos. Here we propose a novel mapping scheme, known as the rhombic dodecahedron map (RD map), to represent data over the spherical domain. By using a family of skew great circles as the subdivision kernel, the RD map not only produces a sampling pattern with very low discrepancy, it can also support a highly efficient data indexing mechanism over the spherical domain. Since the proposed map is quad-based, geodesic-aligned, and of very low area and shape distortion, we can reliably apply 2-D wavelet-based and DCT-based encoding methods that were originally designed for planar perspective videos. In the end, we perform a series of analyses and experiments to investigate and verify the effectiveness of the proposed method; with its ultra-fast data indexing capability, we show that we can play back omnidirectional videos at very high frame rates on conventional PCs with GPU support.

Journal ArticleDOI
TL;DR: This review focuses on a systematic presentation of the key areas of bioinformatics and computational biology where compression has been used, and a unifying organization of the main ideas and techniques is provided.
Abstract: Motivation: Textual data compression, and the associated techniques coming from information theory, are often perceived as being of interest for data communication and storage. However, they are also deeply related to classification and data mining and analysis. In recent years, a substantial effort has been made for the application of textual data compression techniques to various computational biology tasks, ranging from storage and indexing of large datasets to comparison and reverse engineering of biological networks. Results: The main focus of this review is on a systematic presentation of the key areas of bioinformatics and computational biology where compression has been used. When possible, a unifying organization of the main ideas and techniques is also provided. Availability: It goes without saying that most of the research results reviewed here offer software prototypes to the bioinformatics community. The supplementary material (see below) provides pointers to software and benchmark datasets for a range of applications of broad interest. Contact: raffaele@math.unipa.it Supplementary: In addition to providing references to software, the supplementary material also gives a brief presentation of some fundamental results and techniques related to this paper. It is at: http://www.math.unipa.it/~raffaele/suppMaterial/compReview/

Journal ArticleDOI
TL;DR: An iterative algorithm is presented to jointly optimize run-length coding, Huffman coding, and quantization table selection that results in a compressed bitstream completely compatible with existing JPEG and MPEG decoders, but is also computationally efficient.
Abstract: To maximize rate distortion performance while remaining faithful to the JPEG syntax, the joint optimization of the Huffman tables, quantization step sizes, and DCT indices of a JPEG encoder is investigated. Given Huffman tables and quantization step sizes, an efficient graph-based algorithm is first proposed to find the optimal DCT indices in the form of run-size pairs. Based on this graph-based algorithm, an iterative algorithm is then presented to jointly optimize run-length coding, Huffman coding, and quantization table selection. The proposed iterative algorithm not only results in a compressed bitstream completely compatible with existing JPEG and MPEG decoders, but is also computationally efficient. Furthermore, when tested over standard test images, it achieves the best JPEG compression results, to the extent that its own JPEG compression performance even exceeds the quoted PSNR results of some state-of-the-art wavelet-based image coders such as Shapiro's embedded zerotree wavelet algorithm at the common bit rates under comparison. Both the graph-based algorithm and the iterative algorithm can be applied to application areas such as web image acceleration, digital camera image compression, MPEG frame optimization, and transcoding, etc.
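For readers unfamiliar with the run-size pairs being optimized, the sketch below shows how they arise from a quantized 8x8 DCT block: zig-zag scan, count zero runs, and pair each run with the size category of the next non-zero coefficient. End-of-block handling is simplified, and the graph-based index selection and the Huffman/quantizer updates themselves are not shown.

```python
# Sketch: forming JPEG-style (run, size) pairs from a quantized 8x8 DCT block.
import numpy as np

# Standard 8x8 zig-zag order: diagonals of increasing r+c, alternating direction.
ZIGZAG = sorted(((r, c) for r in range(8) for c in range(8)),
                key=lambda rc: (rc[0] + rc[1],
                                rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def run_size_pairs(qblock):
    """qblock: 8x8 array of quantized DCT coefficients (DC coefficient handled separately)."""
    pairs, run = [], 0
    for r, c in ZIGZAG[1:]:                      # skip the DC coefficient
        v = int(qblock[r, c])
        if v == 0:
            run += 1
            if run == 16:                        # ZRL symbol: a run of 16 zeros
                pairs.append((15, 0))
                run = 0
        else:
            pairs.append((run, abs(v).bit_length()))   # (zero run, size category)
            run = 0
    pairs.append((0, 0))                         # end-of-block (simplified)
    return pairs
```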

Journal ArticleDOI
TL;DR: The experimental results demonstrate the superiority of the proposed reversible visible watermarking scheme compared to existing methods; the scheme also adopts data compression for further reduction in the recovery packet size and improvement in embedding capacity.
Abstract: A reversible (also called lossless, distortion-free, or invertible) visible watermarking scheme is proposed to satisfy the applications, in which the visible watermark is expected to combat copyright piracy but can be removed to losslessly recover the original image. We transparently reveal the watermark image by overlapping it on a user-specified region of the host image through adaptively adjusting the pixel values beneath the watermark, depending on the human visual system-based scaling factors. In order to achieve reversibility, a reconstruction/recovery packet, which is utilized to restore the watermarked area, is reversibly inserted into non-visibly-watermarked region. The packet is established according to the difference image between the original image and its approximate version instead of its visibly watermarked version so as to alleviate its overhead. For the generation of the approximation, we develop a simple prediction technique that makes use of the unaltered neighboring pixels as auxiliary information. The recovery packet is uniquely encoded before hiding so that the original watermark pattern can be reconstructed based on the encoded packet. In this way, the image recovery process is carried out without needing the availability of the watermark. In addition, our method adopts data compression for further reduction in the recovery packet size and improvement in embedding capacity. The experimental results demonstrate the superiority of the proposed scheme compared to the existing methods.