
Showing papers on "Lossless compression published in 2013"


Journal ArticleDOI
TL;DR: A new polar coding scheme is proposed, which can attain the channel capacity without any alphabet extension by invoking results on polar coding for lossless compression, and it is shown that the proposed scheme achieves a better tradeoff between complexity and decoding error probability in many cases.
Abstract: This paper considers polar coding for asymmetric settings, that is, channel coding for asymmetric channels and lossy source coding for nonuniform sources and/or asymmetric distortion measures. The difficulty for asymmetric settings comes from the fact that the optimal symbol distributions of codewords are not always uniform. It is known that such nonuniform distributions can be realized by Gallager's scheme which maps multiple auxiliary symbols distributed uniformly to an actual symbol. However, the complexity of Gallager's scheme increases considerably for the case that the optimal distribution cannot be approximated by simple rational numbers. To overcome this problem for the asymmetric settings, a new polar coding scheme is proposed, which can attain the channel capacity without any alphabet extension by invoking results on polar coding for lossless compression. It is also shown that the proposed scheme achieves a better tradeoff between complexity and decoding error probability in many cases.

219 citations


Journal ArticleDOI
TL;DR: It is proved that, for independent identically distributed gray-scale host signals, the proposed method asymptotically approaches the rate-distortion bound of RDH as long as perfect compression can be realized, and establishes the equivalency between reversible data hiding and lossless data compression.
Abstract: State-of-the-art schemes for reversible data hiding (RDH) usually consist of two steps: first construct a host sequence with a sharp histogram via prediction errors, and then embed messages by modifying the histogram with methods, such as difference expansion and histogram shift. In this paper, we focus on the second stage, and propose a histogram modification method for RDH, which embeds the message by recursively utilizing the decompression and compression processes of an entropy coder. We prove that, for independent identically distributed (i.i.d.) gray-scale host signals, the proposed method asymptotically approaches the rate-distortion bound of RDH as long as perfect compression can be realized, i.e., the entropy coder can approach entropy. Therefore, this method establishes the equivalency between reversible data hiding and lossless data compression. Experiments show that this coding method can be used to improve the performance of previous RDH schemes and the improvements are more significant for larger images.
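For readers unfamiliar with the baseline this paper generalizes, the sketch below illustrates the classic histogram-shift embedding step on an integer host sequence (e.g., prediction errors). It is not the authors' recursive entropy-coder construction; the function names, the peak-bin choice, and the toy host data are illustrative assumptions.

```python
# Minimal sketch of classic histogram-shift reversible embedding (the baseline
# this paper improves on), not the authors' recursive entropy-coder method.
import numpy as np

def hs_embed(errors, bits, peak=0):
    """Embed bits at the histogram peak `peak`; shift values above the peak by 1."""
    out = errors.copy()
    out[errors > peak] += 1                     # make room in the bin next to the peak
    idx = np.where(errors == peak)[0]           # embeddable positions
    bits = np.asarray(bits[:len(idx)])
    out[idx[:len(bits)]] += bits                # peak -> peak (bit 0) or peak+1 (bit 1)
    return out, len(bits)

def hs_extract(marked, n_bits, peak=0):
    """Recover the embedded bits and restore the original host exactly."""
    idx = np.where((marked == peak) | (marked == peak + 1))[0][:n_bits]
    bits = (marked[idx] == peak + 1).astype(int).tolist()
    restored = marked.copy()
    restored[idx] = peak                        # undo the embedding
    restored[restored > peak] -= 1              # undo the shift
    return bits, restored

host = np.array([0, 1, 0, -2, 0, 3, 0, -1])
marked, n = hs_embed(host, [1, 0, 1, 1])
bits, recovered = hs_extract(marked, n)
assert bits == [1, 0, 1, 1] and (recovered == host).all()   # reversible by construction
```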

208 citations


01 Jan 2013
TL;DR: This study proposes a context-based adaptive lossless image codec (CALIC) that addresses the need for efficient methods and tools for implementing data compression in medical applications.
Abstract: Compression methods are important in many medical applications to ensure fast interactivity through large sets of images (e.g., volumetric data sets, image databases), for searching context-dependent images, and for quantitative analysis of measured data. Medical data are increasingly represented in digital form. The limited transmission bandwidth and storage space on one side and the growing size of image datasets on the other have created a need for efficient methods and tools for implementation. Many techniques for achieving data compression have been introduced. In this study, we propose a context-based adaptive lossless image codec (CALIC).
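As a rough illustration of what context-based prediction in CALIC looks like, here is a sketch of its gradient-adjusted predictor (GAP) with the commonly published threshold constants; CALIC's context modeling, bias cancellation, and entropy coding stages are omitted, and the helper name is my own.

```python
import numpy as np

def gap_predict(img, r, c):
    """Gradient-adjusted prediction (GAP) for pixel (r, c) of a 2-D grayscale array.
    Assumes r >= 2 and 2 <= c < width - 1 (border pixels handled separately)."""
    W,  N  = int(img[r, c - 1]), int(img[r - 1, c])
    NW, NE = int(img[r - 1, c - 1]), int(img[r - 1, c + 1])
    WW, NN = int(img[r, c - 2]), int(img[r - 2, c])
    NNE    = int(img[r - 2, c + 1])
    dh = abs(W - WW) + abs(N - NW) + abs(N - NE)    # local horizontal gradient
    dv = abs(W - NW) + abs(N - NN) + abs(NE - NNE)  # local vertical gradient
    if dv - dh > 80:                                # sharp horizontal edge
        return W
    if dh - dv > 80:                                # sharp vertical edge
        return N
    pred = (W + N) / 2.0 + (NE - NW) / 4.0
    if   dv - dh > 32: pred = (pred + W) / 2.0
    elif dv - dh > 8:  pred = (3 * pred + W) / 4.0
    elif dh - dv > 32: pred = (pred + N) / 2.0
    elif dh - dv > 8:  pred = (3 * pred + N) / 4.0
    return pred

# The prediction residual img[r, c] - round(gap_predict(img, r, c)) is what the
# codec's context modeling and arithmetic coder would then compress losslessly.
```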

170 citations


Journal ArticleDOI
TL;DR: An algorithm is proposed by which an image of N pixels and m different colors is stored in a quantum system using just 2N+m qubits, together with a quantum-search-based segmentation algorithm that finds all solutions in expected time O(t√N).
Abstract: A set of quantum states for M colors and another set of quantum states for N coordinates are proposed in this paper to represent the M colors and the coordinates of the N pixels of an image, respectively. We design an algorithm by which an image of N pixels and m different colors is stored in a quantum system using just 2N+m qubits. An algorithm for quantum image compression is proposed. Simulation results on the Lena image show a lossless compression ratio of 2.058. Moreover, an image segmentation algorithm based on quantum search, which can find all solutions in expected time O(t√N), is proposed, where N is the number of pixels and t is the number of targets to be segmented.

132 citations


Journal ArticleDOI
TL;DR: The proposed scheme combines lossless data compression and encryption techniques to embed electronic health record (EHR)/DICOM metadata, image hash, indexing keyword, doctor identification code and tamper localization information in medical images.

90 citations


Journal ArticleDOI
TL;DR: An ECG compression method based on the beta wavelet with a lossless encoding technique is presented, demonstrating the superiority of this technique in terms of compression ratio while maintaining a desirable signal quality.

89 citations


Journal ArticleDOI
TL;DR: This paper reimplemented several state-of-the-art methods in a comparable manner, and measured various performance factors with the authors' benchmark, including compression ratio, computation time, model maintenance cost, approximation quality, and robustness to noisy data.
Abstract: As the volumes of sensor data being accumulated are likely to soar, data compression has become essential in a wide range of sensor-data applications. This has led to a plethora of data compression techniques for sensor data, in particular model-based approaches have been spotlighted due to their significant compression performance. These methods, however, have never been compared and analyzed under the same setting, rendering a "right" choice of compression technique for a particular application very difficult. Addressing this problem, this paper presents a benchmark that offers a comprehensive empirical study on the performance comparison of the model-based compression techniques. Specifically, we reimplemented several state-of-the-art methods in a comparable manner, and measured various performance factors with our benchmark, including compression ratio, computation time, model maintenance cost, approximation quality, and robustness to noisy data. We then provide in-depth analysis of the benchmark results, obtained by using 11 different real data sets consisting of 346 heterogeneous sensor data signals. We believe that the findings from the benchmark will be able to serve as a practical guideline for applications that need to compress sensor data.
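To make the notion of "model-based compression" concrete, the sketch below shows one representative model family from this literature, a greedy piecewise-constant approximation under an absolute (L-infinity) error bound. It is not a reimplementation of any specific method in the benchmark; function names and the sample readings are assumptions.

```python
def pca_compress(values, eps):
    """Greedy piecewise-constant model: emit (run_length, level) segments such that
    every reading differs from its segment level by at most eps."""
    segments = []
    start, lo, hi = 0, values[0], values[0]
    for i in range(1, len(values)):
        new_lo, new_hi = min(lo, values[i]), max(hi, values[i])
        if new_hi - new_lo > 2 * eps:           # current segment cannot absorb values[i]
            segments.append((i - start, (lo + hi) / 2.0))
            start, lo, hi = i, values[i], values[i]
        else:
            lo, hi = new_lo, new_hi
    segments.append((len(values) - start, (lo + hi) / 2.0))
    return segments

def pca_decompress(segments):
    return [level for run, level in segments for _ in range(run)]

readings = [20.1, 20.2, 20.1, 20.3, 25.0, 25.1, 25.2, 20.0]
model = pca_compress(readings, eps=0.2)         # e.g. [(4, 20.2), (3, 25.1), (1, 20.0)]
assert all(abs(a - b) <= 0.2 for a, b in zip(readings, pca_decompress(model)))
```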

86 citations


Journal ArticleDOI
TL;DR: NGC enables lossless and lossy compression and introduces the following two novel ideas: first, a way to reduce the number of required code words by exploiting common features of reads mapped to the same genomic positions; second, a highly configurable way for the quantization of per-base quality values, which takes their influence on downstream analyses into account.
Abstract: A major challenge of current high-throughput sequencing experiments is not only the generation of the sequencing data itself but also their processing, storage and transmission. The enormous size of these data motivates the development of data compression algorithms usable for the implementation of the various storage policies that are applied to the produced intermediate and final result files. In this article, we present NGC, a tool for the compression of mapped short read data stored in the wide-spread SAM format. NGC enables lossless and lossy compression and introduces the following two novel ideas: first, we present a way to reduce the number of required code words by exploiting common features of reads mapped to the same genomic positions; second, we present a highly configurable way for the quantization of per-base quality values, which takes their influence on downstream analyses into account. NGC, evaluated with several real-world data sets, saves 33–66% of disc space using lossless and up to 98% disc space using lossy compression. By applying two popular variant and genotype prediction tools to the decompressed data, we could show that the lossy compression modes preserve >99% of all called variants while outperforming comparable methods in some configurations.
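The lossy part of NGC concerns the quantization of per-base quality values. The toy function below shows only the general idea of mapping Phred scores into a few representative bins before lossless coding; the bin edges and representative values are arbitrary assumptions, not NGC's actual (highly configurable) scheme.

```python
# Toy per-base quality quantization: collapse Phred scores into a few bins so the
# downstream lossless coder sees far fewer distinct symbols.
BINS = [(0, 9, 6), (10, 19, 15), (20, 29, 25), (30, 60, 37)]   # (lo, hi, representative)

def quantize_qualities(qual_string, offset=33):
    out = []
    for ch in qual_string:
        q = max(0, min(60, ord(ch) - offset))                  # clamp to the table range
        rep = next(r for lo, hi, r in BINS if lo <= q <= hi)
        out.append(chr(rep + offset))
    return "".join(out)

# Many distinct FASTQ quality symbols collapse to a handful of values:
print(quantize_qualities("IIIIHGF@#!"))
```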

79 citations


Journal Article
TL;DR: In this paper, a lossless compression scheme is presented for LIDAR data: low-flying aircraft equipped with modern laser-range scanning technology, called Light Detection and Ranging (LIDAR), can collect precise elevation information for entire cities, counties, and even states.
Abstract: This article describes how low-flying aircraft equipped with modern laser-range scanning technology, also called Light Detection and Ranging (LIDAR), can collect precise elevation information for entire cities, counties, and even states. By shooting 100,000 or more laser pulses per second at the Earth's surface, these aircraft often take measurements at resolutions exceeding one point per square meter. Derivatives of this data such as digital elevation models are used in numerous applications to assess flood hazards, to plan solar and wind applications, to carry out forest inventories, and to aid in power grid maintenance. However, the sheer amount of LIDAR data collected poses a significant challenge as billions of elevation samples need to be stored, processed, and distributed. The article describes a lossless compression scheme for LIDAR in the binary LAS format. The compressed LAZ files are only 7 to 25 percent of the original size. The encoding and decoding speeds are several million points per second. Compression is streaming and decompression supports random access. On a national scale, the compression savings of storing LAZ instead of LAS can be measured in Petabytes of data, i.e., data that no longer needs to be hosted, backed up, and served.
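LAZ itself uses carefully tuned predictors and arithmetic coding; the sketch below only conveys the basic idea of delta-coding integer LAS coordinates against the previous point before a generic lossless entropy stage (zlib here as a stand-in), with streaming and random access omitted.

```python
import struct, zlib

def compress_points(points):
    """points: iterable of (x, y, z) integer LAS coordinates. Delta-code each point
    against the previous one, then compress the small residuals losslessly."""
    prev, buf = (0, 0, 0), bytearray()
    for p in points:
        buf += struct.pack("<iii", p[0] - prev[0], p[1] - prev[1], p[2] - prev[2])
        prev = p
    return zlib.compress(bytes(buf), 9)

def decompress_points(blob):
    raw = zlib.decompress(blob)
    prev, out = (0, 0, 0), []
    for off in range(0, len(raw), 12):
        dx, dy, dz = struct.unpack_from("<iii", raw, off)
        prev = (prev[0] + dx, prev[1] + dy, prev[2] + dz)
        out.append(prev)
    return out

pts = [(100000, 200000, 5000), (100001, 200002, 5001), (100003, 200003, 5001)]
assert decompress_points(compress_points(pts)) == pts
```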

78 citations


Journal ArticleDOI
Wei Sun1, Zhe-Ming Lu1, Yu-Chun Wen1, Fa-Xin Yu1, Rong-Jun Shen1 
TL;DR: Experimental results show that the novel reversible data hiding scheme based on the joint neighbor coding technique for BTC-compressed images outperforms three existing BTC-based data hiding works, in terms of the bit rate, capacity, and efficiency.
Abstract: Reversible data hiding has been a hot research topic because it can recover both the host media and hidden data without distortion. Because most digital images are stored and transmitted in compressed forms, such as JPEG, vector quantization, and block truncation coding (BTC), reversible data hiding schemes in compressed domains have received more and more attention. Compared with transform coding, BTC has significantly lower complexity and a smaller memory requirement; it is therefore an ideal data hiding domain. Traditional data hiding schemes in the BTC domain modify the BTC encoding stage or the BTC-compressed data according to the secret bits; they have relatively low efficiency and may reduce the image quality. This paper presents a novel reversible data hiding scheme based on the joint neighbor coding technique for BTC-compressed images, which further losslessly encodes the BTC-compressed data according to the secret bits. First, BTC is performed on the original image to obtain the BTC-compressed data, which can be represented by a high mean table, a low mean table, and a bitplane sequence. Then, the secret data are losslessly embedded in both the high mean and low mean tables. Our hiding scheme is a lossless method based on the relation between the current value and the neighboring ones in the mean tables. In addition, it can embed an average of 2 bits per mean value, which increases the capacity and efficiency. Experimental results show that our scheme outperforms three existing BTC-based data hiding works in terms of bit rate, capacity, and efficiency.
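The scheme embeds data by losslessly re-encoding the high and low mean tables; the sketch below only shows the underlying BTC representation (high mean, low mean, bitplane) that those tables come from, not the joint neighbor coding embedding itself.

```python
import numpy as np

def btc_encode_block(block):
    """Classic block truncation coding of one block: a high mean (for pixels at or
    above the block mean), a low mean (for the rest), and a one-bit-per-pixel plane."""
    m = block.mean()
    bitplane = (block >= m).astype(np.uint8)
    high = int(round(block[bitplane == 1].mean()))
    low = int(round(block[bitplane == 0].mean())) if (bitplane == 0).any() else high
    return high, low, bitplane

def btc_decode_block(high, low, bitplane):
    return np.where(bitplane == 1, high, low).astype(np.uint8)

block = np.array([[10, 12, 200, 210],
                  [11, 13, 205, 208],
                  [ 9, 12, 199, 211],
                  [10, 14, 202, 207]])
h, l, bp = btc_encode_block(block)    # here high = 205, low = 11
```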

76 citations


Journal ArticleDOI
01 Feb 2013
TL;DR: A robust lossless copyright protection scheme, based on overlapping discrete cosine transform (DCT) and singular value decomposition (SVD), is presented, demonstrating the robustness of the proposed scheme against different image-manipulation attacks.
Abstract: In this paper, a robust lossless copyright protection scheme, based on overlapping discrete cosine transform (DCT) and singular value decomposition (SVD), is presented. The original host image is separated into overlapping blocks, to which the DCT is applied. Direct current (DC) coefficients are extracted from the transformed blocks to form a DC-map. A series of random positions is selected on the map and SVD is performed to construct an ownership share which is used for copyright verification. Simulations are carried out, demonstrating the robustness of the proposed scheme against different image-manipulation attacks.

Journal ArticleDOI
TL;DR: Compared with the previous low-complexity and high performance techniques, this work achieves lower hardware cost, lower power consumption, and a better compression rate than other lossless ECG encoder designs.
Abstract: An efficient VLSI architecture of a lossless ECG encoding circuit is proposed for wireless healthcare monitoring applications. To reduce the transmission and storage data, a novel lossless compression algorithm is proposed for ECG signal compression. It consists of a novel adaptive trending predictor and a novel two-stage entropy encoder based on two Huffman coding tables. The proposed lossless ECG encoder was implemented using only simple arithmetic units. To improve performance, the encoder was designed with pipeline technology and the two-stage entropy encoder was implemented with a look-up table architecture. The VLSI architecture of this work contains 3.55 K gate counts and its core area is 45,987 µm², synthesised in a 0.18 µm CMOS process. It can operate at a 100 MHz processing rate with only 36.4 µW. The data compression rate reaches an average value of 2.43 for the MIT-BIH Arrhythmia Database. Compared with previous low-complexity and high-performance techniques, this work achieves lower hardware cost, lower power consumption, and a better compression rate than other lossless ECG encoder designs.
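The sketch below illustrates the two building blocks in spirit only: a fixed slope predictor standing in for the adaptive trending predictor, and a two-class split of residuals (short code vs. escape-plus-raw) standing in for the two Huffman tables. The symbol tags, the threshold, and the toy samples are assumptions, not the paper's design.

```python
def predict(samples, i):
    """Fixed slope predictor x[i] ~= 2*x[i-1] - x[i-2]; the paper's predictor adapts
    its choice on the fly, which is omitted here."""
    if i == 0:
        return 0
    if i == 1:
        return samples[0]
    return 2 * samples[i - 1] - samples[i - 2]

def encode(samples, small=8):
    """Two-stage entropy idea: frequent small residuals map to a first symbol table
    (tagged 'S' here), rare large residuals emit an escape plus the raw value."""
    symbols = []
    for i, x in enumerate(samples):
        e = x - predict(samples, i)
        symbols.append(("S", e) if -small <= e < small else ("ESC", e))
    return symbols

ecg = [512, 514, 517, 521, 526, 700, 705, 709]    # made-up integer ECG samples
print(encode(ecg))
```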

Journal ArticleDOI
TL;DR: A general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO), and a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression).
Abstract: In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
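As a concrete (if greatly simplified) picture of referential compression, the sketch below encodes a target sequence as (reference position, match length, next character) entries against a reference, seeding matches with a k-mer index. FRESCO's actual matching, reference rewriting, and second-order compression are not modelled, and the toy sequences are assumptions.

```python
def referential_compress(target, reference, k=8):
    """Greedy referential compression: encode target as (ref_pos, length, next_char)
    entries against a reference, seeding matches with a k-mer index."""
    index = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], i)
    entries, pos = [], 0
    while pos < len(target):
        ref_pos, length = index.get(target[pos:pos + k], -1), 0
        if ref_pos >= 0:                       # extend the seeded match as far as possible
            while (pos + length < len(target) and ref_pos + length < len(reference)
                   and target[pos + length] == reference[ref_pos + length]):
                length += 1
        nxt = target[pos + length] if pos + length < len(target) else ""
        entries.append((ref_pos, length, nxt))
        pos += length + 1
    return entries

def referential_decompress(entries, reference):
    return "".join(reference[p:p + n] + c if n else c for p, n, c in entries)

ref = "ACGTACGTTTGACCAGTACGTACGGGT"
tgt = "ACGTACGTTTGACCAGTACGAACGGGT"            # one substitution relative to ref
enc = referential_compress(tgt, ref)
assert referential_decompress(enc, ref) == tgt
```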

Journal ArticleDOI
TL;DR: In this paper, lossless and near-lossless compression algorithms for multichannel electroencephalogram (EEG) signals are presented based on image and volumetric coding, consisting of a wavelet-based lossy coding layer followed by arithmetic coding on the residual.
Abstract: In this paper, lossless and near-lossless compression algorithms for multichannel electroencephalogram (EEG) signals are presented based on image and volumetric coding. Multichannel EEG signals have significant correlation among spatially adjacent channels; moreover, EEG signals are also correlated across time. Suitable representations are proposed to utilize those correlations effectively. In particular, multichannel EEG is represented either in the form of image (matrix) or volumetric data (tensor), next a wavelet transform is applied to those EEG representations. The compression algorithms are designed following the principle of “lossy plus residual coding,” consisting of a wavelet-based lossy coding layer followed by arithmetic coding on the residual. Such approach guarantees a specifiable maximum error between original and reconstructed signals. The compression algorithms are applied to three different EEG datasets, each with different sampling rate and resolution. The proposed multichannel compression algorithms achieve attractive compression ratios compared to algorithms that compress individual channels separately.
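The "lossy plus residual" principle is easy to state in a few lines: code the signal coarsely, then code the residual with a quantizer whose step guarantees the error bound (delta = 0 gives lossless operation). The sketch below uses plain uniform quantization in place of the paper's wavelet lossy layer and zlib in place of arithmetic coding; it is an illustration of the principle, not the paper's coder.

```python
import numpy as np, zlib

def encode(x, delta=0, coarse_step=64):
    """Lossy layer (crude uniform quantization) plus a residual layer whose quantizer
    step 2*delta + 1 bounds the absolute reconstruction error by delta."""
    x = np.asarray(x, dtype=np.int64)
    lossy = np.round(x / coarse_step).astype(np.int64) * coarse_step
    q = np.round((x - lossy) / (2 * delta + 1)).astype(np.int64)
    return lossy, zlib.compress(q.tobytes())          # zlib stands in for arithmetic coding

def decode(lossy, payload, delta=0):
    q = np.frombuffer(zlib.decompress(payload), dtype=np.int64)
    return lossy + q * (2 * delta + 1)

eeg = np.array([120, 118, 121, 400, 402, 399, 130, 127])
lossy, payload = encode(eeg, delta=2)                  # near-lossless: |error| <= 2
assert np.max(np.abs(decode(lossy, payload, delta=2) - eeg)) <= 2
lossy0, payload0 = encode(eeg, delta=0)                # delta = 0 -> exact recovery
assert np.array_equal(decode(lossy0, payload0), eeg)
```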

Journal ArticleDOI
TL;DR: This paper takes the perspective of the forensic analyst, and shows how it is possible to counter the aforementioned anti-forensic method revealing the traces of JPEG compression, regardless of the quantization matrix being used.
Abstract: Due to the lossy nature of transform coding, JPEG introduces characteristic traces in the compressed images. A forensic analyst might reveal these traces by analyzing the histogram of discrete cosine transform (DCT) coefficients and exploit them to identify local tampering, copy-move forgery, etc. At the same time, it has been recently shown that a knowledgeable adversary can possibly conceal the traces of JPEG compression, by adding a dithering noise signal in the DCT domain, in order to restore the histogram of the original image. In this paper, we study the processing chain that arises in the case of JPEG compression anti-forensics. We take the perspective of the forensic analyst, and we show how it is possible to counter the aforementioned anti-forensic method revealing the traces of JPEG compression, regardless of the quantization matrix being used. Tests on a large image dataset demonstrated that the proposed detector was able to achieve an average accuracy equal to 93%, rising above 99% when excluding the case of nearly lossless JPEG compression.

Journal ArticleDOI
TL;DR: The proposed 2-D dual-mode LDWT architecture has the merits of low transpose memory (TM), low latency, and regular signal flow, making it suitable for very large-scale integration implementation, and can be applied to real-time visual operations such as JPEG2000, motion-JPEG2000, MPEG-4 still texture object decoding, and wavelet-based scalable video coding applications.
Abstract: Memory requirements (for storing intermediate signals) and critical path are essential issues for 2-D (or multidimensional) transforms. This paper presents new algorithms and hardware architectures to address the above issues in a 2-D dual-mode (supporting 5/3 lossless and 9/7 lossy coding) lifting-based discrete wavelet transform (LDWT). The proposed 2-D dual-mode LDWT architecture has the merits of low transpose memory (TM), low latency, and regular signal flow, making it suitable for very large-scale integration implementation. The TM requirements of the N×N 2-D 5/3 mode LDWT and 2-D 9/7 mode LDWT are 2N and 4N, respectively. Comparison results indicate that the proposed hardware architecture has a lower TM requirement than previous architectures. As a result, it can be applied to real-time visual operations such as JPEG2000, Motion JPEG2000, MPEG-4 still texture object decoding, and wavelet-based scalable video coding applications.
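The reversible 5/3 lifting filter used in the lossless mode is standard (JPEG2000 Part 1); a one-dimensional software sketch is given below for even-length signals with whole-sample symmetric extension. The paper's actual contribution, the low-transpose-memory 2-D hardware scheduling and the 9/7 lossy mode, is not modelled here.

```python
def dwt53_forward(x):
    """Reversible 1-D 5/3 lifting DWT (even-length integer input, whole-sample
    symmetric extension). Returns (approximation, detail) integer subbands."""
    n = len(x)
    d, s = [], []
    for i in range(n // 2):                              # predict step
        right = x[2 * i + 2] if 2 * i + 2 < n else x[n - 2]
        d.append(x[2 * i + 1] - ((x[2 * i] + right) // 2))
    for i in range(n // 2):                              # update step
        left = d[i - 1] if i > 0 else d[0]               # symmetric extension: d[-1] = d[0]
        s.append(x[2 * i] + ((left + d[i] + 2) // 4))
    return s, d

def dwt53_inverse(s, d):
    n = 2 * len(s)
    x = [0] * n
    for i in range(n // 2):                              # undo update
        left = d[i - 1] if i > 0 else d[0]
        x[2 * i] = s[i] - ((left + d[i] + 2) // 4)
    for i in range(n // 2):                              # undo predict
        right = x[2 * i + 2] if 2 * i + 2 < n else x[n - 2]
        x[2 * i + 1] = d[i] + ((x[2 * i] + right) // 2)
    return x

row = [52, 55, 61, 66, 70, 61, 64, 73]
assert dwt53_inverse(*dwt53_forward(row)) == row         # perfect reconstruction
```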

Posted Content
TL;DR: This paper analyzes the main types of existing image compression methods, namely lossless and lossy techniques, and presents a survey of existing research papers.
Abstract: This paper addresses various image compression techniques and presents a survey of the existing research. Compression of an image differs significantly from compression of raw binary data, so dedicated techniques are used for image compression. Two basic families of methods exist, namely lossless and lossy image compression. More recent work extends these basic methods; in some areas, neural networks and genetic algorithms are also used for image compression. Keywords: Image Compression; Lossless; Lossy; Redundancy; Benefits of Compression.

Journal ArticleDOI
TL;DR: In this article, general expressions for the quality factor (Q) of antennas are minimized to obtain lower bounds on the Q of electrically small, lossy or lossless, combined electric and magnetic dipole antennas.
Abstract: General expressions for the quality factor (Q) of antennas are minimized to obtain lower-bound formulas for the Q of electrically small, lossy or lossless, combined electric and magnetic dipole antennas ...

Journal ArticleDOI
TL;DR: A novel reversible data-hiding scheme in the index tables of vector quantization (VQ) compressed images based on an index mapping mechanism that ensures the correctness of secret data extraction and the lossless recovery of the index table.

Proceedings ArticleDOI
01 Sep 2013
TL;DR: A new IQ data compression scheme with 1/2 compression ratio is proposed, which can be easily implemented by using both IQ bit width reduction and a common lossless audio compression scheme.
Abstract: In the Centralized-RAN (C-RAN), some baseband units (BBUs) are centralized in the same location, which is typically less than 20 km away from remote radio heads (RRHs) locally distributed. Each RRH is connected to BBUs via front-haul link by optical fibers which convey the digital IQ data by CPRI protocol. This paper proposes a new IQ data compression scheme with 1/2 compression ratio, which can be easily implemented by using both IQ bit width reduction and a common lossless audio compression scheme. Through performance evaluation, it is verified that the proposed method meets the requirements from an implementation point of view. By applying the proposed scheme to the front-haul link, the installation cost of optical fibers can be reduced by half.

Book ChapterDOI
26 May 2013
TL;DR: This study introduces a novel lossless compression technique for RDF datasets, called Rule Based Compression (RB Compression), that compresses datasets by generating a set of new logical rules from the dataset and removing triples that can be inferred from these rules.
Abstract: Linked data has experienced accelerated growth in recent years. With the continuing proliferation of structured data, demand for RDF compression is becoming increasingly important. In this study, we introduce a novel lossless compression technique for RDF datasets, called Rule Based Compression (RB Compression) that compresses datasets by generating a set of new logical rules from the dataset and removing triples that can be inferred from these rules. Unlike other compression techniques, our approach not only takes advantage of syntactic verbosity and data redundancy but also utilizes semantic associations present in the RDF graph. Depending on the nature of the dataset, our system is able to prune more than 50% of the original triples without affecting data integrity.
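A toy version of the idea, much simpler than the paper's rule mining, is sketched below: whenever every subject that carries one (predicate, object) pair also carries another, record that implication as a rule and drop the inferable triples, re-deriving them at decompression time. The rule form and the example data are assumptions for illustration.

```python
from collections import defaultdict

def rb_compress(triples):
    """Mine 100%-confidence rules (p1, o1) -> (p2, o2) over subjects and drop triples
    that the rules can re-derive. Rules only fire on premises that are kept."""
    by_subject = defaultdict(set)
    for s, p, o in triples:
        by_subject[s].add((p, o))
    pairs = {po for pos in by_subject.values() for po in pos}
    rules = [(a, b) for a in pairs for b in pairs
             if a != b and all(b in pos for pos in by_subject.values() if a in pos)]
    removed = set()
    for a, b in rules:
        for s, pos in by_subject.items():
            if a in pos and b in pos:
                premise, conclusion = (s, *a), (s, *b)
                if premise not in removed and conclusion not in removed:
                    removed.add(conclusion)
    return rules, [t for t in triples if t not in removed]

def rb_decompress(rules, kept):
    triples, changed = set(kept), True
    while changed:                                 # forward-chain rules to a fixpoint
        changed = False
        for a, b in rules:
            for s, p, o in list(triples):
                if (p, o) == a and (s, *b) not in triples:
                    triples.add((s, *b))
                    changed = True
    return triples

data = [("alice", "type", "Student"), ("alice", "enrolledIn", "CS"),
        ("bob", "type", "Student"), ("bob", "enrolledIn", "CS"),
        ("carol", "type", "Professor")]
rules, kept = rb_compress(data)
assert rb_decompress(rules, kept) == set(data)     # lossless despite dropped triples
```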

Proceedings ArticleDOI
11 Aug 2013
TL;DR: A parallel framework for solving the all-pairs similarity search in metric spaces with flexible support for multiple metrics of interest and an on-the-fly lossless compression strategy to reduce both the running time and the final output size is proposed.
Abstract: Given a set of entities, the all-pairs similarity search aims at identifying all pairs of entities that have similarity greater than (or distance smaller than) some user-defined threshold. In this article, we propose a parallel framework for solving this problem in metric spaces. Novel elements of our solution include: i) flexible support for multiple metrics of interest; ii) an autonomic approach to partition the input dataset with minimal redundancy to achieve good load-balance in the presence of limited computing resources; iii) an on-the-fly lossless compression strategy to reduce both the running time and the final output size. We validate the utility, scalability and the effectiveness of the approach on hundreds of machines using real and synthetic datasets.

Journal ArticleDOI
TL;DR: The experimental results reveal that the proposed method is better for preserving the important features of SAR images with a competitive compression performance than JPEG, JPEG2000, and a single-scale dictionary-based compression scheme.
Abstract: In this letter, we focus on a new compression scheme for synthetic aperture radar (SAR) amplitude images. The last decade has seen a growing interest in the study of dictionary learning and sparse representation, which have been proved to perform well on natural image compression. Because of the special techniques of radar imaging, SAR images have some distinct properties when compared with natural images that can affect the design of a compression method. First, we introduce SAR properties, sparse representation, and dictionary learning theories. Second, we propose a novel SAR image compression scheme by using multiscale dictionaries. The experimental results carried out on amplitude SAR images reveal that, when compared with JPEG, JPEG2000, and a single-scale dictionary-based compression scheme, the proposed method is better for preserving the important features of SAR images with a competitive compression performance.

Patent
19 Dec 2013
TL;DR: In this article, a point of focus in the NED field of view is identified, often based on natural user input data, and allowed loss image data is transmitted and extracted from received image data allowing for lossy transmission.
Abstract: Technology is described for reducing display update time for a near-eye display (NED) device. A point of focus in the NED field of view is identified, often based on natural user input data. A communication module of a computer system communicatively coupled to the NED device transmits lossless priority data, an example of which is user focal region image data, using one or more communication techniques for satisfying lossless transmission criteria. Allowed loss image data is identified based at least in part on its distance vector from a point of focus in the display field of view. An example of allowed loss image data is image data to be displayed outside the user focal region. The allowed loss image data is transmitted and extracted from received image data allowing for lossy transmission.

Proceedings ArticleDOI
01 Sep 2013
TL;DR: This work designs an entropy coding scheme that seeks the internal ordering of the descriptor that minimizes the number of bits necessary to represent it and evaluates the discriminative power of descriptors as a function of rate, in order to investigate the trade-offs in a bandwidth constrained scenario.
Abstract: Binary descriptors have recently emerged as low-complexity alternatives to state-of-the-art descriptors such as SIFT. The descriptor is represented by means of a binary string, in which each bit is the result of a pair-wise comparison of smoothed pixel values properly selected in a patch around each keypoint. Previous works have focused on the construction of the descriptor, neglecting the opportunity of performing lossless compression. In this paper, we propose two contributions. First, we design an entropy coding scheme that seeks the internal ordering of the descriptor that minimizes the number of bits necessary to represent it. Second, we compare different selection strategies that can be adopted to identify which pair-wise comparisons to use when building the descriptor. Unlike previous works, we evaluate the discriminative power of descriptors as a function of rate, in order to investigate the trade-offs in a bandwidth constrained scenario.
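For context, here is a BRIEF-style sketch of how such a binary descriptor is built from pairwise comparisons of smoothed pixel values. The entropy coding and pair-selection strategies that are the actual subject of the paper are not reproduced, and the smoothing window and random pair sampling are arbitrary choices.

```python
import numpy as np

def binary_descriptor(patch, pairs, half=2):
    """Each bit compares box-smoothed intensities at two locations of the patch."""
    def smooth(y, x):
        return patch[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1].mean()
    return np.array([1 if smooth(*a) > smooth(*b) else 0 for a, b in pairs],
                    dtype=np.uint8)

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(32, 32)).astype(float)      # stand-in image patch
pairs = [tuple(map(tuple, rng.integers(4, 28, size=(2, 2)))) for _ in range(16)]
bits = binary_descriptor(patch, pairs)                          # 16-bit toy descriptor
```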

Journal ArticleDOI
TL;DR: A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) based on matrix/tensor decomposition models that achieves attractive compression ratios compared to compressing individual channels separately.
Abstract: A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.

Journal ArticleDOI
TL;DR: Visibility thresholds (VTs) are measured and used for quantization of subband signals in JPEG2000 in order to hide coding artifacts caused by quantization, and are experimentally determined from statistically modeled quantization distortion.
Abstract: Due to exponential growth in image sizes, visually lossless coding is increasingly being considered as an alternative to numerically lossless coding, which has limited compression ratios. This paper presents a method of encoding color images in a visually lossless manner using JPEG2000. In order to hide coding artifacts caused by quantization, visibility thresholds (VTs) are measured and used for quantization of subband signals in JPEG2000. The VTs are experimentally determined from statistically modeled quantization distortion, which is based on the distribution of wavelet coefficients and the dead-zone quantizer of JPEG2000. The resulting VTs are adjusted for locally changing backgrounds through a visual masking model, and then used to determine the minimum number of coding passes to be included in the final codestream for visually lossless quality under the desired viewing conditions. Codestreams produced by this scheme are fully JPEG2000 Part-I compliant.

Journal ArticleDOI
TL;DR: A general method for compressing the modulation time-bandwidth product of analog signals is introduced, inspired by operation of Fovea centralis in the human eye and by anamorphic transformation in visual arts, to alleviate the storage and transmission bottlenecks associated with "big data".
Abstract: A general method for compressing the modulation time-bandwidth product of analog signals is introduced and experimentally demonstrated. As one of its applications, this physics-based signal grooming performs feature-selective stretch, enabling a conventional digitizer to capture fast temporal features that were beyond its bandwidth. At the same time, the total digital data size is reduced. The compression is lossless and is achieved through a same-domain transformation of the signal's complex field, performed in the analog domain prior to digitization. Our method is inspired by operation of Fovea centralis in the human eye and by anamorphic transformation in visual arts. The proposed transform can also be performed in the digital domain as a digital data compression algorithm to alleviate the storage and transmission bottlenecks associated with "big data".

Proceedings ArticleDOI
01 Sep 2013
TL;DR: A novel method to compress video content based on image retargeting that introduces a non-uniform antialiasing technique that significantly improves the image resampling quality and achieves a significant improvement of the visual quality of salient image regions.
Abstract: In this paper we propose a novel method to compress video content based on image retargeting. First, a saliency map is extracted from the video frames either automatically or according to user input. Next, nonlinear image scaling is performed which assigns a higher pixel count to salient image regions and fewer pixels to non-salient regions. The non-linearly downscaled images can then be compressed using existing compression techniques and decoded and upscaled at the receiver. To this end we introduce a non-uniform antialiasing technique that significantly improves the image resampling quality. The overall process is complementary to existing compression methods and can be seamlessly incorporated into existing pipelines. We compare our method to JPEG 2000 and H.264/AVC-10 and show that, at the cost of visual quality in non-salient image regions, our method achieves a significant improvement of the visual quality of salient image regions in terms of Structural Similarity (SSIM) and Peak Signal-to-Noise-Ratio (PSNR) quality measures, in particular for scenarios with high compression ratios.

Proceedings ArticleDOI
Javier Lorca1, L. Cucala1
04 Jun 2013
TL;DR: A lossless compression technique is explored for LTE and LTE-Advanced wireless networks where actual compression ratios depend upon the resources occupancy, thereby allowing for statistical multiplexing in the aggregation network which translates into significant cost reductions.
Abstract: Some of the most advanced functionalities in LTE and LTE-Advanced wireless networks rely upon some kind of collaborative processing between cells, as happens in coordinated scheduling, Cooperative Multi-Point (CoMP), or enhanced Inter-Cell Interference Coordination (eICIC), among other techniques. In some of these functionalities the required amount of information exchange between cells is so high that centralized processing scenarios represent a more viable alternative, whereby central nodes perform baseband processing tasks and remote radio heads are connected to them via high-capacity fiber links (usually known as fronthaul links). The high fiber cost incurred by these so-called Cloud-RAN architectures is the main drawback for practical deployments, and compression techniques are therefore needed at the fronthaul network. The present paper explores a lossless compression technique for LTE and LTE-Advanced wireless networks where actual compression ratios depend upon the resources occupancy, thereby allowing for statistical multiplexing in the aggregation network which translates into significant cost reductions. Compression ratios as high as 30:1 can be achieved in lightly loaded LTE 2×2 MIMO cells while values around 6:1 are typically obtained for 50% cell loads.