
Showing papers on "Data compression" published in 2006


Journal ArticleDOI
TL;DR: This work presents a distributed random linear network coding approach for transmission and compression of information in general multisource multicast networks, and shows that this approach can take advantage of redundant network capacity for improved success probability and robustness.
Abstract: We present a distributed random linear network coding approach for transmission and compression of information in general multisource multicast networks. Network nodes independently and randomly select linear mappings from inputs onto output links over some field. We show that this achieves capacity with probability exponentially approaching 1 with the code length. We also demonstrate that random linear coding performs compression when necessary in a network, generalizing error exponents for linear Slepian-Wolf coding in a natural way. Benefits of this approach are decentralized operation and robustness to network changes or link failures. We show that this approach can take advantage of redundant network capacity for improved success probability and robustness. We illustrate some potential advantages of random linear network coding over routing in two examples of practical scenarios: distributed network operation and networks with dynamically varying connections. Our derivation of these results also yields a new bound on required field size for centralized network coding on general multicast networks.
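
The core mechanism in the abstract (nodes forwarding random linear combinations of their inputs, receivers inverting them) can be sketched in a few lines. The sketch below works over the prime field GF(257) purely for convenience; practical systems typically use GF(2^8) or larger, and nothing here is taken from the paper's construction beyond the general idea.

```python
import numpy as np

P = 257  # small prime field for illustration; deployed systems typically use GF(2^8) or larger

def random_encode(source, num_coded, rng):
    """Each coded packet is a random linear combination of the source packets (mod P)."""
    coeffs = rng.integers(0, P, size=(num_coded, len(source)))
    coded = (coeffs @ (np.array(source, dtype=np.int64) % P)) % P
    return coeffs, coded

def solve_mod_p(A, B, p=P):
    """Gauss-Jordan elimination over GF(p); returns None if A is singular."""
    A, B = A.astype(np.int64) % p, B.astype(np.int64) % p
    n = A.shape[0]
    for col in range(n):
        pivots = [r for r in range(col, n) if A[r, col] % p != 0]
        if not pivots:
            return None
        r = pivots[0]
        A[[col, r]], B[[col, r]] = A[[r, col]], B[[r, col]]
        inv = pow(int(A[col, col]), -1, p)
        A[col], B[col] = (A[col] * inv) % p, (B[col] * inv) % p
        for row in range(n):
            if row != col and A[row, col]:
                f = A[row, col]
                A[row] = (A[row] - f * A[col]) % p
                B[row] = (B[row] - f * B[col]) % p
    return B

rng = np.random.default_rng(0)
source = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]   # three source packets

# Decoding succeeds whenever the random coefficient matrix happens to be invertible,
# which occurs with probability approaching 1 as the field size grows.
recovered = None
while recovered is None:
    coeffs, coded = random_encode(source, num_coded=3, rng=rng)
    recovered = solve_mod_p(coeffs, coded)
assert (recovered == np.array(source)).all()
```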

2,806 citations


Book
01 Dec 2006
TL;DR: Detailed descriptions and explanations of the most well-known and frequently used compression methods are covered in a self-contained fashion, with an accessible style and technical level for specialists and nonspecialists.
Abstract: Data compression is one of the most important fields and tools in modern computing. From archiving data, to CD ROMs, and from coding theory to image analysis, many facets of modern computing rely upon data compression. Data Compression provides a comprehensive reference for the many different types and methods of compression. Included are a detailed and helpful taxonomy, analysis of the most common methods, and discussions on the use and comparative benefits of methods, and descriptions of how to use them. The presentation is organized into the main branches of the field of data compression: run length encoding, statistical methods, dictionary-based methods, image compression, audio compression, and video compression. Detailed descriptions and explanations of the most well-known and frequently used compression methods are covered in a self-contained fashion, with an accessible style and technical level for specialists and nonspecialists. Topics and features: coverage of video compression, including MPEG-1 and H.261; thorough coverage of wavelet methods, including CWT, DWT, EZW and the new Lifting Scheme technique; complete audio compression; the QM coder used in JPEG and JBIG, including the new JPEG 2000 standard; image transformations and detailed coverage of the discrete cosine transform and Haar transform; coverage of the EIDAC method for compressing simple images; prefix image compression; ACB and FHM curve compression; geometric compression and the edgebreaker technique. Data Compression provides an invaluable reference and guide for all computer scientists, computer engineers, electrical engineers, signal/image processing engineers and other scientists needing a comprehensive compilation for a broad range of compression methods.

1,745 citations


Journal ArticleDOI
TL;DR: A new method is proposed for identifying a digital camera from its images based on the sensor's pattern noise; for each camera under investigation, a reference pattern noise that serves as a unique identification fingerprint is determined by averaging the noise obtained from multiple images using a denoising filter.
Abstract: In this paper, we propose a new method for the problem of digital camera identification from its images based on the sensor's pattern noise. For each camera under investigation, we first determine its reference pattern noise, which serves as a unique identification fingerprint. This is achieved by averaging the noise obtained from multiple images using a denoising filter. To identify the camera from a given image, we consider the reference pattern noise as a spread-spectrum watermark, whose presence in the image is established by using a correlation detector. Experiments on approximately 320 images taken with nine consumer digital cameras are used to estimate false alarm rates and false rejection rates. Additionally, we study how the error rates change with common image processing, such as JPEG compression or gamma correction.
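
A rough sketch of the pipeline the abstract describes, under simplifying assumptions: grayscale floating-point images, and a Gaussian filter standing in for the paper's wavelet-based denoiser. Function names and the threshold are illustrative, not from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Image minus its denoised version; the paper uses a wavelet-based denoiser,
    a Gaussian filter is only a crude stand-in here."""
    return img - gaussian_filter(img, sigma)

def reference_pattern(images):
    """Average the residuals of many images from one camera to suppress scene
    content and retain the sensor's pattern noise."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(a, b):
    """Normalized correlation used as the detection statistic."""
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def identify(test_image, fingerprints, threshold=0.01):
    """Attribute the image to the camera whose reference pattern correlates most
    strongly with the image's residual, provided the correlation exceeds a
    threshold chosen from the desired false-alarm rate."""
    residual = noise_residual(test_image)
    scores = {cam: correlation(residual, fp) for cam, fp in fingerprints.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] > threshold else (None, scores[best])
```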

1,195 citations


Journal ArticleDOI
TL;DR: This article summarizes and categorizes hardware-based test vector compression techniques for scan architectures, which fall broadly into three categories: code-based schemes use data compression codes to encode test cubes; linear-decompression-based schemes decompress the data using only linear operations; and broadcast-scan-based schemes rely on broadcasting the same values to multiple scan chains.
Abstract: Test data compression consists of test vector compression on the input side and response compaction on the output side. Test vector compression has been an active area of research. This article summarizes and categorizes these techniques. The focus is on hardware-based test vector compression techniques for scan architectures. Test vector compression schemes fall broadly into three categories: code-based schemes use data compression codes to encode test cubes; linear-decompression-based schemes decompress the data using only linear operations (that is, LFSRs and XOR networks); and broadcast-scan-based schemes rely on broadcasting the same values to multiple scan chains.
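
As a toy illustration of the first (code-based) category, the sketch below fills the don't-care bits of a test cube so that runs become long and then run-length encodes the result; real schemes apply Golomb, FDR, or dictionary codes on top, so this is only meant to convey the flavor.

```python
def fill_dont_cares(cube: str) -> str:
    """Replace 'X' (don't-care) bits with the previous care bit to lengthen runs."""
    out, last = [], "0"
    for bit in cube:
        last = bit if bit != "X" else last
        out.append(last)
    return "".join(out)

def run_length_encode(bits: str):
    """Encode as (bit, run length) pairs; a real scheme would then apply a
    variable-length code such as Golomb or FDR to the run lengths."""
    runs, current, count = [], bits[0], 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = b, 1
    runs.append((current, count))
    return runs

test_cube = "XX1XXXX0XXXX1111XXXX"
print(run_length_encode(fill_dont_cares(test_cube)))   # [('0', 2), ('1', 5), ('0', 5), ('1', 8)]
```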

429 citations


Proceedings ArticleDOI
31 Oct 2006
TL;DR: This paper discusses the design issues involved with implementing, adapting, and customizing compression algorithms specifically geared for sensor nodes and shows how different amounts of compression can lead to energy savings on both the compressing node and throughout the network.
Abstract: Sensor networks are fundamentally constrained by the difficulty and energy expense of delivering information from sensors to sink. Our work has focused on garnering additional significant energy improvements by devising computationally-efficient lossless compression algorithms on the source node. These reduce the amount of data that must be passed through the network and to the sink, and thus have energy benefits that are multiplicative with the number of hops the data travels through the network. Currently, if sensor system designers want to compress acquired data, they must either develop application-specific compression algorithms or use off-the-shelf algorithms not designed for resource-constrained sensor nodes. This paper discusses the design issues involved with implementing, adapting, and customizing compression algorithms specifically geared for sensor nodes. While developing Sensor LZW (S-LZW) and some simple, but effective, variations to this algorithm, we show how different amounts of compression can lead to energy savings on both the compressing node and throughout the network, and that the savings depend heavily on the radio hardware. To validate and evaluate our work, we apply it to datasets from several different real-world deployments and show that our approaches can reduce energy consumption by up to a factor of 4.5 across the network.
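
For orientation, here is a minimal LZW compressor with a capped, frozen dictionary, roughly the kind of memory-bounded variant a sensor node could run; it is not the S-LZW algorithm from the paper, just an illustration of the dictionary-size trade-off it studies.

```python
def lzw_compress(data: bytes, max_dict_size: int = 512):
    """Minimal LZW with a capped (then frozen) dictionary, in the spirit of a
    RAM-constrained sensor node; output is a list of integer codes."""
    dictionary = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])
            if len(dictionary) < max_dict_size:      # freeze once the table is full
                dictionary[wc] = len(dictionary)
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out                                        # packing codes into bits is omitted

codes = lzw_compress(b"temp=21.5;temp=21.5;temp=21.6;" * 16)
print(len(codes), "codes for", 16 * 30, "input bytes")
```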

424 citations


Journal ArticleDOI
TL;DR: This work demonstrates that, with several typical compression algorithms, there is actually a net energy increase when compression is applied before transmission, and suggestions are made to avoid it.
Abstract: Wireless transmission of a single bit can require over 1000 times more energy than a single computation. It can therefore be beneficial to perform additional computation to reduce the number of bits transmitted. If the energy required to compress data is less than the energy required to send it, there is a net energy savings and an increase in battery life for portable computers. This article presents a study of the energy savings possible by losslessly compressing data prior to transmission. A variety of algorithms were measured on a StrongARM SA-110 processor. This work demonstrates that, with several typical compression algorithms, there is actually a net energy increase when compression is applied before transmission. Reasons for this increase are explained and suggestions are made to avoid it. One such energy-aware suggestion is asymmetric compression, the use of one compression algorithm on the transmit side and a different algorithm for the receive path. By choosing the lowest-energy compressor and decompressor on the test platform, overall energy to send and receive data can be reduced by 11% compared with a well-chosen symmetric pair, or up to 57% over the default symmetric zlib scheme.
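
The break-even point described in the abstract can be made concrete with a back-of-envelope model; the energy numbers below are hypothetical placeholders, not the paper's StrongARM measurements.

```python
def energy_per_original_bit(e_radio_nj, e_compress_nj, compression_ratio):
    """CPU energy spent compressing one input bit plus radio energy for the
    fraction of that bit that still has to be transmitted."""
    return e_compress_nj + e_radio_nj / compression_ratio

no_compression = energy_per_original_bit(1000.0, 0.0, 1.0)      # 1000 nJ per original bit
fast_codec     = energy_per_original_bit(1000.0, 150.0, 2.0)    #  650 nJ: net win
heavy_codec    = energy_per_original_bit(1000.0, 900.0, 2.5)    # 1300 nJ: net loss

# The heavy codec compresses better yet costs more energy overall, which is exactly
# the counter-intuitive "net energy increase" effect the article reports; choosing
# different codecs for the transmit and receive paths (asymmetric compression)
# optimizes each direction separately.
```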

402 citations


Journal ArticleDOI
TL;DR: This work proposes a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications, and achieves state-of-the-art compression rates and speeds.
Abstract: Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that needs to be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data.

401 citations


Proceedings ArticleDOI
09 Jul 2006
TL;DR: In this paper, an investigation of H.264/MPEG4-AVC conforming coding with hierarchical B pictures is presented, and simulation results show that, in comparison to the widely used IBBP structure, coding gains can be achieved at the expense of an increased coding delay.
Abstract: In this paper, an investigation of H.264/MPEG4-AVC conforming coding with hierarchical B pictures is presented. We analyze the coding delay and memory requirements, describe details of an improved encoder control, and compare the coding efficiency for different coding delays. Additionally, the coding efficiency of hierarchical B picture coding is compared to that of MCTF-based coding by using identical coding structures and a similar degree of encoder optimization. Our simulation results show that, in comparison to the widely used IBBP structure, coding gains of more than 1 dB can be achieved at the expense of an increased coding delay. Further experiments have shown that the coding efficiency gains obtained by using the additional update steps in MCTF coding are generally smaller than the losses resulting from the required open-loop encoder control.
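
To make the coding structure concrete, the sketch below lists the coding order and temporal levels of a dyadic hierarchical-B GOP; quantizer cascading and the paper's encoder-control details are not modeled.

```python
def hierarchical_b_gop(gop_size: int = 8):
    """Coding order and temporal level for one GOP of dyadic hierarchical B pictures.
    The key picture (display index gop_size) is coded first; each B picture is
    bi-predicted from the two already-coded pictures that bracket it."""
    order = [(gop_size, 0)]                 # (display index, temporal level)
    def split(lo, hi, level):
        if hi - lo < 2:
            return
        mid = (lo + hi) // 2
        order.append((mid, level))
        split(lo, mid, level + 1)
        split(mid, hi, level + 1)
    split(0, gop_size, 1)
    return order

print(hierarchical_b_gop(8))
# [(8, 0), (4, 1), (2, 2), (1, 3), (3, 3), (6, 2), (5, 3), (7, 3)]
# The structural coding delay grows with the GOP size, which is the
# delay/efficiency trade-off the paper quantifies.
```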

394 citations


Proceedings ArticleDOI
01 Dec 2006
TL;DR: This paper proposes algorithms and hardware to support a new theory of compressive imaging based on a new digital image/video camera that directly acquires random projections of the signal without first collecting the pixels/voxels.
Abstract: Compressive Sensing is an emerging field based on the revelation that a small group of non-adaptive linear projections of a compressible signal contains enough information for reconstruction and processing. In this paper, we propose algorithms and hardware to support a new theory of Compressive Imaging. Our approach is based on a new digital image/video camera that directly acquires random projections of the signal without first collecting the pixels/voxels. Our camera architecture employs a digital micromirror array to perform optical calculations of linear projections of an image onto pseudo-random binary patterns. Its hallmarks include the ability to obtain an image with a single detection element while measuring the image/video fewer times than the number of pixels; this can significantly reduce the computation required for video acquisition/encoding. Because our system relies on a single photon detector, it can also be adapted to image at wavelengths that are currently impossible with conventional CCD and CMOS imagers. We are currently testing a prototype design for the camera and include experimental results.
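
The measurement model in the abstract (inner products of the scene with pseudo-random binary micromirror patterns, one scalar per pattern) can be sketched on a synthetic sparse signal; orthogonal matching pursuit is used below as a simple stand-in for the reconstruction algorithms used in practice, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 64, 4, 24                       # signal length, sparsity, number of measurements

# A synthetic sparse "scene" stands in for an image that is sparse in some basis.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

# Pseudo-random binary measurement patterns, as a micromirror array would apply;
# each measurement is a single scalar, i.e. the output of the single detector.
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = Phi @ x

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: a simple sparse reconstruction, standing in for
    the convex-optimization solvers commonly used in compressive sensing."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print(np.linalg.norm(x - x_hat))          # small with high probability for these sizes
```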

374 citations


Journal ArticleDOI
TL;DR: It is proved theoretically, and corroborated with examples, that when the noise distributions are either completely known, partially known or completely unknown, distributed estimation is possible with minimal bandwidth requirements which can achieve the same order of mean square error (MSE) performance as the corresponding centralized clairvoyant estimators.
Abstract: This paper provides an overview of distributed estimation-compression problems encountered with wireless sensor networks (WSN). A general formulation of distributed compression-estimation under rate constraints was introduced, pertinent signal processing algorithms were developed, and emerging tradeoffs were delineated from an information theoretic perspective. Specifically, we designed rate-constrained distributed estimators for various signal models with variable knowledge of the underlying data distributions. We proved theoretically, and corroborated with examples, that when the noise distributions are either completely known, partially known or completely unknown, distributed estimation is possible with minimal bandwidth requirements which can achieve the same order of mean square error (MSE) performance as the corresponding centralized clairvoyant estimators.
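
One of the simplest members of the family surveyed here is mean estimation from a single bit per sensor under known Gaussian noise; the sketch below is that textbook special case, not the paper's general rate-constrained formulation, and the parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
theta, sigma, tau, n_sensors = 1.3, 1.0, 0.0, 10_000

# Each sensor observes theta in Gaussian noise and transmits a single bit.
x = theta + sigma * rng.normal(size=n_sensors)
bits = (x > tau).astype(float)

# Fusion center: invert P(bit = 1) = 1 - Phi((tau - theta) / sigma).
p_hat = bits.mean()
theta_hat = tau - sigma * norm.ppf(1.0 - p_hat)

# For comparison, the clairvoyant estimator that sees the raw observations.
theta_clairvoyant = x.mean()
print(theta_hat, theta_clairvoyant)   # both close to 1.3; the 1-bit MSE is larger
                                      # only by a constant factor, not in order.
```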

362 citations


Journal ArticleDOI
TL;DR: An image hashing paradigm using visually significant feature points is proposed that withstands standard benchmark attacks, including compression, geometric distortions of scaling and small-angle rotation, and common signal-processing operations.
Abstract: We propose an image hashing paradigm using visually significant feature points. The feature points should be largely invariant under perceptually insignificant distortions. To satisfy this, we propose an iterative feature detector to extract significant geometry preserving feature points. We apply probabilistic quantization on the derived features to introduce randomness, which, in turn, reduces vulnerability to adversarial attacks. The proposed hash algorithm withstands standard benchmark (e.g., Stirmark) attacks, including compression, geometric distortions of scaling and small-angle rotation, and common signal-processing operations. Content changing (malicious) manipulations of image data are also accurately detected. Detailed statistical analysis in the form of receiver operating characteristic (ROC) curves is presented and reveals the success of the proposed scheme in achieving perceptual robustness while avoiding misclassification.

Journal ArticleDOI
TL;DR: A practical quality-aware image encoding, decoding and quality analysis system, which employs a novel reduced-reference image quality assessment algorithm based on a statistical model of natural images and a previously developed quantization watermarking-based data hiding technique in the wavelet transform domain.
Abstract: We propose the concept of quality-aware image, in which certain extracted features of the original (high-quality) image are embedded into the image data as invisible hidden messages. When a distorted version of such an image is received, users can decode the hidden messages and use them to provide an objective measure of the quality of the distorted image. To demonstrate the idea, we build a practical quality-aware image encoding, decoding and quality analysis system, which employs: 1) a novel reduced-reference image quality assessment algorithm based on a statistical model of natural images and 2) a previously developed quantization watermarking-based data hiding technique in the wavelet transform domain.

Patent
08 Apr 2006
TL;DR: In this paper, a method for compressing data comprises the steps of: analyzing a data block of an input data stream to identify a data type of the data block, the input data stream comprising a plurality of disparate data types; performing content dependent data compression on the data block if its data type is identified; and performing content independent data compression on the data block if its data type is not identified.
Abstract: Systems and methods for providing fast and efficient data compression using a combination of content independent data compression and content dependent data compression. In one aspect, a method for compressing data comprises the steps of: analyzing a data block of an input data stream to identify a data type of the data block, the input data stream comprising a plurality of disparate data types; performing content dependent data compression on the data block, if the data type of the data block is identified; performing content independent data compression on the data block, if the data type of the data block is not identified.
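
A hypothetical sketch of the claimed flow: inspect a block for a recognizable data type and apply a type-specific strategy, otherwise fall back to a content-independent compressor. The magic-byte table and the per-type codec choices below are illustrative and not taken from the patent.

```python
import bz2
import zlib

# Minimal type detector based on magic bytes; a real system would use codecs
# actually tuned to each data type rather than these generic stand-ins.
MAGIC = {
    b"\xff\xd8\xff": "jpeg",   # already-compressed image
    b"\x89PNG": "png",         # already-compressed image
    b"%PDF": "pdf",
}

def detect_type(block: bytes):
    for magic, name in MAGIC.items():
        if block.startswith(magic):
            return name
    return None

def compress_block(block: bytes):
    kind = detect_type(block)
    if kind in ("jpeg", "png"):
        return "store", block                 # content-dependent choice: do not recompress
    if kind == "pdf":
        return "bz2", bz2.compress(block)     # content-dependent choice for this type
    return "zlib", zlib.compress(block)       # content-independent fallback

method, payload = compress_block(b"plain text block " * 100)
print(method, len(payload))
```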

Journal ArticleDOI
TL;DR: A novel framework for lossless (invertible) authentication watermarking is presented, which enables zero-distortion reconstruction of the un-watermarked images upon verification and enables public(-key) authentication without granting access to the perfect original and allows for efficient tamper localization.
Abstract: We present a novel framework for lossless (invertible) authentication watermarking, which enables zero-distortion reconstruction of the un-watermarked images upon verification. As opposed to earlier lossless authentication methods that required reconstruction of the original image prior to validation, the new framework allows validation of the watermarked images before recovery of the original image. This reduces computational requirements in situations when either the verification step fails or the zero-distortion reconstruction is not needed. For verified images, integrity of the reconstructed image is ensured by the uniqueness of the reconstruction procedure. The framework also enables public(-key) authentication without granting access to the perfect original and allows for efficient tamper localization. Effectiveness of the framework is demonstrated by implementing the framework using hierarchical image authentication along with lossless generalized-least significant bit data embedding.

Journal ArticleDOI
TL;DR: The proposed algorithm is based on an interesting use of the integer wavelet transform followed by a fast adaptive context-based Golomb-Rice coding for lossless compression of color mosaic images generated by a Bayer CCD color filter array.
Abstract: Lossless compression of color mosaic images poses a unique and interesting problem of spectral decorrelation of spatially interleaved R, G, B samples. We investigate reversible lossless spectral-spatial transforms that can remove statistical redundancies in both spectral and spatial domains and discover that a particular wavelet decomposition scheme, called Mallat wavelet packet transform, is ideally suited to the task of decorrelating color mosaic data. We also propose a low-complexity adaptive context-based Golomb-Rice coding technique to compress the coefficients of Mallat wavelet packet transform. The lossless compression performance of the proposed method on color mosaic images is apparently the best so far among the existing lossless image codecs.
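
Since the entropy-coding stage here is adaptive context-based Golomb-Rice coding, a minimal non-adaptive Rice coder may help fix ideas; the context modeling and the mapping of signed wavelet coefficients to non-negative integers (e.g. a zigzag map) are omitted.

```python
def rice_encode(values, k):
    """Golomb-Rice code with parameter 2**k: quotient in unary, remainder in k bits.
    Inputs must be non-negative; adaptive, context-based selection of k is omitted."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.extend([1] * q + [0])                  # unary quotient, 0 as terminator
        bits.extend((r >> i) & 1 for i in reversed(range(k)))
    return bits

def rice_decode(bits, count, k):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:
            q, i = q + 1, i + 1
        i += 1                                      # skip the terminator
        r = 0
        for _ in range(k):
            r, i = (r << 1) | bits[i], i + 1
        out.append((q << k) | r)
    return out

vals = [3, 0, 7, 12, 1]
assert rice_decode(rice_encode(vals, k=2), len(vals), k=2) == vals
```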

Journal ArticleDOI
TL;DR: A novel approach to spam filtering based on adaptive statistical data compression models that outperform currently established spam filters, as well as a number of methods proposed in previous studies.
Abstract: Spam filtering poses a special problem in text categorization, of which the defining characteristic is that filters face an active adversary, which constantly attempts to evade filtering. Since spam evolves continuously and most practical applications are based on online user feedback, the task calls for fast, incremental and robust learning algorithms. In this paper, we investigate a novel approach to spam filtering based on adaptive statistical data compression models. The nature of these models allows them to be employed as probabilistic text classifiers based on character-level or binary sequences. By modeling messages as sequences, tokenization and other error-prone preprocessing steps are omitted altogether, resulting in a method that is very robust. The models are also fast to construct and incrementally updateable. We evaluate the filtering performance of two different compression algorithms: dynamic Markov compression and prediction by partial matching. The results of our empirical evaluation indicate that compression models outperform currently established spam filters, as well as a number of methods proposed in previous studies.
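
The classify-by-compressibility idea can be approximated crudely with an off-the-shelf compressor, as below; the paper's models (DMC and PPM) are adaptive and incrementally updateable, which this stand-in is not, so treat it only as an illustration of the principle.

```python
import bz2

def compressed_size(data: bytes) -> int:
    return len(bz2.compress(data))

def classify(message: bytes, spam_corpus: bytes, ham_corpus: bytes) -> str:
    """Assign the message to the class whose training corpus it extends most cheaply,
    i.e. the class model under which the message has the lower (approximate)
    cross-entropy."""
    spam_cost = compressed_size(spam_corpus + message) - compressed_size(spam_corpus)
    ham_cost = compressed_size(ham_corpus + message) - compressed_size(ham_corpus)
    return "spam" if spam_cost < ham_cost else "ham"

# Usage: build spam_corpus and ham_corpus by concatenating labeled training messages,
# then call classify() on each incoming message.
```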

Journal ArticleDOI
TL;DR: Subjective and objective tests reveal that the proposed watermarking scheme maintains high audio quality and is simultaneously highly robust to pirate attacks, including MP3 compression, low-pass filtering, amplitude scaling, time scaling, digital-to-analog/analog- to-digital reacquisition, cropping, sampling rate change, and bit resolution transformation.
Abstract: This work proposes a method of embedding digital watermarks into audio signals in the time domain. The proposed algorithm exploits differential average-of-absolute-amplitude relations within each group of audio samples to represent one-bit information. The principle of low-frequency amplitude modification is employed to scale amplitudes in a group manner (unlike the sample-by-sample manner as used in pseudonoise or spread-spectrum techniques) in selected sections of samples so that the time-domain waveform envelope can be almost preserved. Besides, when the frequency-domain characteristics of the watermark signal are controlled by applying absolute hearing thresholds in the psychoacoustic model, the distortion associated with watermarking is hardly perceivable by human ears. The watermark can be blindly extracted without knowledge of the original signal. Subjective and objective tests reveal that the proposed watermarking scheme maintains high audio quality and is simultaneously highly robust to pirate attacks, including MP3 compression, low-pass filtering, amplitude scaling, time scaling, digital-to-analog/analog-to-digital reacquisition, cropping, sampling rate change, and bit resolution transformation. Security of embedded watermarks is enhanced by adopting unequal section lengths determined by a secret key.

Journal ArticleDOI
TL;DR: Experimental results show that the performance of the proposed reversible data hiding scheme based on side match vector quantization (SMVQ) for digitally compressed images is better than those of other information hiding schemes for VQ-based and SMVQ-based compressed images.
Abstract: Many researchers have studied reversible data hiding techniques in recent years and most have proposed reversible data hiding schemes that guarantee only that the original cover image can be reconstructed completely. Once the secret data are embedded in the compression domain and the receiver wants to store the cover image in a compression mode to save storage space, the receiver must extract the secret data, reconstruct the cover image, and compress the cover image again to generate compression codes. In this paper, we present a reversible data hiding scheme based on side match vector quantization (SMVQ) for digitally compressed images. With this scheme, the receiver only performs two steps to achieve the same goal: extract the secret data and reconstruct the original SMVQ compression codes. In terms of the size of the secret data, the visual quality, and the compression rate, experimental results show that the performance of our proposed scheme is better than those of other information hiding schemes for VQ-based and SMVQ-based compressed images. The experimental results further confirm the effectiveness and reversibility of the proposed scheme.

Journal ArticleDOI
01 Jul 2006
TL;DR: A compact reconstruction function is introduced that is used to decompose an HDR video stream into a residual stream and a standard LDR stream, which can be played on existing MPEG decoders, such as DVD players.
Abstract: To embrace the imminent transition from traditional low-contrast video (LDR) content to superior high dynamic range (HDR) content, we propose a novel backward compatible HDR video compression (HDR MPEG) method. We introduce a compact reconstruction function that is used to decompose an HDR video stream into a residual stream and a standard LDR stream, which can be played on existing MPEG decoders, such as DVD players. The reconstruction function is finely tuned to the content of each HDR frame to achieve strong decorrelation between the LDR and residual streams, which minimizes the amount of redundant information. The size of the residual stream is further reduced by removing invisible details prior to compression using our HDR-enabled filter, which models luminance adaptation, contrast sensitivity, and visual masking based on the HDR content. Designed especially for DVD movie distribution, our HDR MPEG compression method features low storage requirements for HDR content, resulting in a 30% size increase over the LDR video sequence. The proposed compression method does not impose restrictions or modify the appearance of the LDR or HDR video. This is important for backward compatibility of the LDR stream with current DVD appearance, and also enables independent fine tuning, tone mapping, and color grading of both streams.

Journal ArticleDOI
TL;DR: This paper discusses the various error resilience schemes employed by H.264/AVC, and experimental results are presented to show their performance.

Journal ArticleDOI
TL;DR: This paper proposes a context-based adaptive method to speed up multiple reference frame ME, and shows that the proposed algorithm can maintain essentially the same video quality as an exhaustive search over several reference frames.
Abstract: In the new video coding standard H.264/AVC, motion estimation (ME) is allowed to search multiple reference frames. Therefore, the required computation is highly increased, and it is in proportion to the number of searched reference frames. However, the reduction in prediction residues is mostly dependent on the nature of sequences, not on the number of searched frames. Sometimes the prediction residues can be greatly reduced, but frequently a lot of computation is wasted without achieving any better coding performance. In this paper, we propose a context-based adaptive method to speed up the multiple reference frames ME. Statistical analysis is first applied to the available information for each macroblock (MB) after intra-prediction and inter-prediction from the previous frame. Context-based adaptive criteria are then derived to determine whether it is necessary to search more reference frames. The reference frame selection criteria are related to selected MB modes, inter-prediction residues, intra-prediction residues, motion vectors of subpartitioned blocks, and quantization parameters. Many available standard video sequences are tested as examples. The simulation results show that the proposed algorithm can maintain essentially the same video quality as an exhaustive search of multiple reference frames. Meanwhile, 76%-96% of the computation for searching unnecessary reference frames can be avoided. Moreover, our fast reference frame selection is orthogonal to conventional fast block matching algorithms, and they can be easily combined to achieve further efficient implementations.

Journal ArticleDOI
TL;DR: A robust watermarking algorithm using balanced multiwavelet transform is proposed, which achieves simultaneous orthogonality and symmetry without requiring any input prefiltering, making this transform a good candidate for real-time watermarking implementations such as audio broadcast monitoring and DVD video watermarking.
Abstract: In this paper, a robust watermarking algorithm using balanced multiwavelet transform is proposed. The latter transform achieves simultaneous orthogonality and symmetry without requiring any input prefiltering. Therefore, considerable reduction in computational complexity is possible, making this transform a good candidate for real-time watermarking implementations such as audio broadcast monitoring and DVD video watermarking. The embedding scheme is image adaptive using a modified version of a well-established perceptual model. Therefore, the strength of the embedded watermark is controlled according to the local properties of the host image. This has been achieved by the proposed perceptual model, which is only dependent on the image activity and is not dependent on the multifilter sets used, unlike those developed for scalar wavelets. This adaptivity is a key factor for achieving the imperceptibility requirement often encountered in watermarking applications. In addition, the watermark embedding scheme is based on the principles of spread-spectrum communications to achieve higher watermark robustness. The optimal bounds for the embedding capacity are derived using a statistical model for balanced multiwavelet coefficients of the host image. The statistical model is based on a generalized Gaussian distribution. Limits of data hiding capacity clearly show that balanced multiwavelets provide higher watermarking rates. This increase could also be exploited as a side channel for embedding watermark synchronization recovery data. Finally, the analytical expressions are contrasted with experimental results where the robustness of the proposed watermarking system is evaluated against standard watermarking attacks.

Journal ArticleDOI
TL;DR: Two methods are proposed, early SKIP mode decision and selective intra mode decision, which are verified to significantly reduce the entire encoding time by about 60% with only negligible coding loss.
Abstract: The MPEG-4 Part-10 AVC/H.264 standard employs several powerful coding methods to obtain high compression efficiency. To reduce the temporal and spatial redundancy more effectively, motion compensation uses variable block sizes, and directional intra prediction investigates all available coding modes to decide the best one. When the rate-distortion (R-D) optimization technique is used, an encoder finds the coding mode having minimum R-D cost. Due to the large number of combinations of coding modes, the decision process requires extremely high computation. To reduce the complexity, we propose two methods: early SKIP mode decision and selective intra mode decision. Simulation results verify that the proposed methods significantly reduce the entire encoding time by about 60% with only negligible coding loss.

Proceedings ArticleDOI
19 Apr 2006
TL;DR: The system simultaneously computes random projections of the sensor data and disseminates them throughout the network using a simple gossiping algorithm, providing a practical and universal approach to decentralized compression and content distribution in wireless sensor networks.
Abstract: Developing energy efficient strategies for the extraction, transmission, and dissemination of information is a core theme in wireless sensor network research. In this paper we present a novel system for decentralized data compression and predistribution. The system simultaneously computes random projections of the sensor data and disseminates them throughout the network using a simple gossiping algorithm. These summary statistics are stored in an efficient manner and can be extracted from a small subset of nodes anywhere in the network. From these measurements one can reconstruct an accurate approximation of the data at all nodes in the network, provided the original data is compressible in a certain sense which need not be known to the nodes ahead of time. The system provides a practical and universal approach to decentralized compression and content distribution in wireless sensor networks. Two example applications, network health monitoring and field estimation, demonstrate the utility of our method.
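
The essence of the scheme (every node ends up holding the same random projections of the whole data set) can be sketched with idealized pairwise gossip that ignores network topology; all sizes and names below are illustrative, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(3)
n_nodes, n_proj = 50, 10

x = rng.normal(size=n_nodes)                       # one sensor reading per node
Phi = rng.choice([-1.0, 1.0], size=(n_proj, n_nodes))

# Each node i starts with its own contribution to every projection.
state = Phi * x                                     # state[j, i] = Phi[j, i] * x[i]

# Randomized pairwise gossip: repeatedly average the states of two random nodes.
# Every node's state converges to the network-wide average, i.e. (Phi @ x) / n_nodes,
# so the random projections can later be read out of any small subset of nodes.
for _ in range(20_000):
    i, j = rng.choice(n_nodes, size=2, replace=False)
    avg = (state[:, i] + state[:, j]) / 2.0
    state[:, i] = avg
    state[:, j] = avg

projections_estimate = state[:, 0] * n_nodes        # read back from a single node
print(np.max(np.abs(projections_estimate - Phi @ x)))   # close to zero after gossiping
```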

Proceedings ArticleDOI
09 Jul 2006
TL;DR: A coding scheme utilizing an H.264/MPEG4-AVC codec is presented to handle the specific requirements of multi-view datasets, namely temporal and inter-view correlation; two main features of the coder are used: hierarchical B pictures for temporal dependencies and an adapted prediction scheme to exploit inter-view dependencies.
Abstract: Efficient Multi-view coding requires coding algorithms that exploit temporal, as well as inter-view dependencies between adjacent cameras. Based on a spatiotemporal analysis on the multi-view data set, we present a coding scheme utilizing an H.264/MPEG4-AVC codec. To handle the specific requirements of multi-view datasets, namely temporal and inter-view correlation, two main features of the coder are used: hierarchical B pictures for temporal dependencies and an adapted prediction scheme to exploit inter-view dependencies. Both features are set up in the H.264/MPEG4-AVC configuration file, such that coding and decoding is purely based on standardized software. Additionally, picture reordering before coding to optimize coding efficiency and inverse reordering after decoding to obtain individual views are applied. Finally, coding results are shown for the proposed multi-view coder and compared to simulcast anchor and simulcast hierarchical B picture coding.

Patent
08 Apr 2006
TL;DR: In this paper, a data storage and retrieval accelerator is proposed to reduce the time required to store and retrieve data, whether from computer to disk, in conjunction with random access memory, in a display controller, or in an input/output controller.
Abstract: Systems and methods for providing accelerated data storage and retrieval utilizing lossless data compression and decompression. A data storage accelerator includes one or a plurality of high speed data compression encoders that are configured to simultaneously or sequentially losslessly compress data at a rate equivalent to or faster than the transmission rate of an input data stream. The compressed data is subsequently stored in a target memory or other storage device whose input data storage bandwidth is lower than the original input data stream bandwidth. Similarly, a data retrieval accelerator includes one or a plurality of high speed data decompression decoders that are configured to simultaneously or sequentially losslessly decompress data at a rate equivalent to or faster than the input data stream from the target memory or storage device. The decompressed data is then output at a data rate that is greater than the output rate from the target memory or data storage device. The data storage and retrieval accelerator method and system may be employed: in a disk storage adapter to reduce the time required to store and retrieve data from computer to disk; in conjunction with random access memory to reduce the time required to store and retrieve data from random access memory; in a display controller to reduce the time required to send display data to the display controller or processor; and/or in an input/output controller to reduce the time required to store, retrieve, or transmit data.

Journal Article
TL;DR: This work proposes an approach that can detect doctored JPEG images and further locate the doctored parts, by examining the double quantization effect hidden among the DCT coefficients.
Abstract: The steady improvement in image/video editing techniques has enabled people to synthesize realistic images/videos conveniently. Some legal issues may occur when a doctored image cannot be distinguished from a real one by visual examination. Realizing that it might be impossible to develop a method that is universal for all kinds of images and that JPEG is the most frequently used image format, we propose an approach that can detect doctored JPEG images and further locate the doctored parts, by examining the double quantization effect hidden among the DCT coefficients. To date, this approach is the only one that can locate the doctored part automatically. And it has several other advantages: the ability to detect images doctored by different kinds of synthesizing methods (such as alpha matting and inpainting, besides simple image cut/paste), the ability to work without fully decompressing the JPEG images, and the fast speed. Experiments show that our method is effective for JPEG images, especially when the compression quality is high.
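
The double quantization effect can be illustrated on a synthetic Laplacian-distributed DCT coefficient: quantizing twice with different steps leaves periodic peaks and gaps in the histogram, which a simple periodicity score picks up. This is only a cartoon of the statistical trace the paper exploits, not its detection or localization method.

```python
import numpy as np

rng = np.random.default_rng(4)

def quantize(values, q):
    return np.round(values / q) * q

def periodic_artifact_score(values, q=3, max_coeff=60, window=5):
    """Mean relative deviation of the coefficient histogram from a smoothed copy of
    itself; double quantization leaves periodic peaks and gaps, so the score is
    markedly higher than for singly quantized data."""
    edges = np.arange(-max_coeff - q / 2.0, max_coeff + q, q)
    hist, _ = np.histogram(values, bins=edges)
    smooth = np.convolve(hist, np.ones(window) / window, mode="same")
    return float(np.mean(np.abs(hist - smooth) / (smooth + 1.0)))

# Laplacian model of one AC DCT coefficient; re-saving an edited JPEG quantizes the
# coefficient twice (here with step 5, then step 3).
coeffs = rng.laplace(scale=8.0, size=200_000)
single = quantize(coeffs, q=3)
double = quantize(quantize(coeffs, q=5), q=3)
print(periodic_artifact_score(single), periodic_artifact_score(double))
```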

Journal ArticleDOI
TL;DR: A fast-acting level control circuit for the cGC filter is described and it is shown how psychophysical data involving two-tone suppression and compression can be used to estimate the parameter values for this dynamic version of the c GC filter (referred to as the "dcGC" filter).
Abstract: It is now common to use knowledge about human auditory processing in the development of audio signal processors. Until recently, however, such systems were limited by their linearity. The auditory filter system is known to be level-dependent as evidenced by psychophysical data on masking, compression, and two-tone suppression. However, there were no analysis/synthesis schemes with nonlinear filterbanks. This paper describes such a scheme based on the compressive gammachirp (cGC) auditory filter. It was developed to extend the gammatone filter concept to accommodate the changes in psychophysical filter shape that are observed to occur with changes in stimulus level in simultaneous, tone-in-noise masking. In models of simultaneous noise masking, the temporal dynamics of the filtering can be ignored. Analysis/synthesis systems, however, are intended for use with speech sounds where the glottal cycle can be long with respect to auditory time constants, and so they require specification of the temporal dynamics of the auditory filter. In this paper, we describe a fast-acting level control circuit for the cGC filter and show how psychophysical data involving two-tone suppression and compression can be used to estimate the parameter values for this dynamic version of the cGC filter (referred to as the "dcGC" filter). One important advantage of analysis/synthesis systems with a dcGC filterbank is that they can inherit previously refined signal processing algorithms developed with conventional short-time Fourier transforms (STFTs) and linear filterbanks.

Proceedings ArticleDOI
28 Mar 2006
TL;DR: A new compression algorithm that is tailored to scientific computing environments where large amounts of floating-point data often need to be transferred between computers as well as to and from storage devices is described and evaluated.
Abstract: In scientific computing environments, large amounts of floating-point data often need to be transferred between computers as well as to and from storage devices. Compression can reduce the number of bits that need to be transferred and stored. However, the run-time overhead due to compression may be undesirable in high-performance settings where short communication latencies and high bandwidths are essential. This paper describes and evaluates a new compression algorithm that is tailored to such environments. It typically compresses numeric floating-point values better and faster than other algorithms do. On our data sets, it achieves compression ratios between 1.2 and 4.2 as well as compression and decompression throughputs between 2.8 and 5.9 million 64-bit double-precision numbers per second on a 3 GHz Pentium 4 machine.
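
The general flavor of predictor-based float compression (hash-indexed value prediction, XOR residuals, leading-zero encoding) can be sketched as below; this simplified single-predictor version is not the paper's implementation, and a real coder would pack and entropy-code the residual bytes rather than keep them as Python objects.

```python
import struct

def predictive_compress(values, table_bits=16):
    """Predict each double from a hash-indexed table of previously seen values,
    XOR prediction and truth, and emit (leading-zero-byte count, remaining bytes).
    Exact predictions cost one count and zero payload bytes."""
    table = [0] * (1 << table_bits)
    mask = (1 << table_bits) - 1
    h, out = 0, []
    for v in values:
        bits = struct.unpack("<Q", struct.pack("<d", v))[0]
        residual = bits ^ table[h]
        if residual == 0:
            out.append((8, b""))
        else:
            lz = (64 - residual.bit_length()) // 8
            out.append((lz, residual.to_bytes(8 - lz, "big")))
        table[h] = bits
        h = ((h << 6) ^ (bits >> 48)) & mask        # simplified FCM-style hash update
    return out

values = [float(i % 8) for i in range(64)]            # a stream with repeating structure
chunks = predictive_compress(values)
compressed = sum(1 + len(payload) for _, payload in chunks)   # one header byte per value here
print(compressed, "bytes for", 8 * len(values), "uncompressed bytes")
```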

Journal ArticleDOI
01 Jul 2006
TL;DR: A lossy compression algorithm for large databases of motion capture data using Bezier curves and clustered principal component analysis is presented, which can compress 6 hours 34 minutes of human motion capture from 1080 MB data into 35.5 MB with little visible degradation.
Abstract: We present a lossy compression algorithm for large databases of motion capture data. We approximate short clips of motion using Bezier curves and clustered principal component analysis. This approximation has a smoothing effect on the motion. Contacts with the environment (such as foot strikes) have important detail that needs to be maintained. We compress these environmental contacts using a separate, JPEG-like compression algorithm and ensure these contacts are maintained during decompression. Our method can compress 6 hours 34 minutes of human motion capture from 1080 MB of data into 35.5 MB with little visible degradation. Compression and decompression are fast: our research implementation can decompress at about 1.2 milliseconds/frame, 7 times faster than real-time (for 120 frames per second animation). Our method also yields a smaller compressed representation for the same error or produces a smaller error for the same compressed size.
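
The curve-fitting step can be illustrated on a single degree of freedom: a least-squares cubic Bezier fit replaces a short clip of samples with four control values. The clustered PCA across degrees of freedom and the separate handling of contacts are not shown, and the clip below is synthetic.

```python
import numpy as np

def fit_cubic_bezier(samples):
    """Least-squares fit of a cubic Bezier curve to one degree of freedom of a short
    motion clip, using uniform parameterization."""
    n = len(samples)
    t = np.linspace(0.0, 1.0, n)
    basis = np.stack([(1 - t) ** 3,
                      3 * t * (1 - t) ** 2,
                      3 * t ** 2 * (1 - t),
                      t ** 3], axis=1)                 # n x 4 Bernstein basis
    control, *_ = np.linalg.lstsq(basis, samples, rcond=None)
    return control, basis @ control                    # 4 control values, reconstruction

clip = np.sin(np.linspace(0, np.pi, 32)) * 30.0        # e.g. a joint angle over 32 frames
control_points, approx = fit_cubic_bezier(clip)
print(np.max(np.abs(clip - approx)))                   # small relative to the 30-degree swing
```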