
Showing papers on "Lossless compression published in 2009"


Posted Content
TL;DR: Polar codes, introduced recently by Arıkan, are the first family of codes known to achieve capacity of symmetric channels using a low complexity successive cancellation decoder, and several techniques to improve their finite-length performance are discussed.
Abstract: Polar codes, introduced recently by Arıkan, are the first family of codes known to achieve capacity of symmetric channels using a low complexity successive cancellation decoder. Although these codes, combined with successive cancellation, are optimal in this respect, their finite-length performance is not record breaking. We discuss several techniques through which their finite-length performance can be improved. We also study the performance of these codes in the context of source coding, both lossless and lossy, in the single-user context as well as for distributed applications.

249 citations


Journal ArticleDOI
TL;DR: FPC is described and evaluated, a fast lossless compression algorithm for linear streams of 64-bit floating-point data that works well on hard-to-compress scientific data sets and meets the throughput demands of high-performance systems.
Abstract: Many scientific programs exchange large quantities of double-precision data between processing nodes and with mass storage devices. Data compression can reduce the number of bytes that need to be transferred and stored. However, data compression is only likely to be employed in high-end computing environments if it does not impede the throughput. This paper describes and evaluates FPC, a fast lossless compression algorithm for linear streams of 64-bit floating-point data. FPC works well on hard-to-compress scientific data sets and meets the throughput demands of high-performance systems. A comparison with five lossless compression schemes, BZIP2, DFCM, FSD, GZIP, and PLMI, on 4 architectures and 13 data sets shows that FPC compresses and decompresses one to two orders of magnitude faster than the other algorithms at the same geometric-mean compression ratio. Moreover, FPC provides a guaranteed throughput as long as the prediction tables fit into the L1 data cache. For example, on a 1.6-GHz Itanium 2 server, the throughput is 670 Mbytes/s regardless of what data are being compressed.
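
FPC's speed comes from a tight predict-XOR-encode loop over the raw 64-bit patterns: a table-based predictor guesses the next value, the guess is XORed with the actual value, and only the few nonzero residual bytes are stored. The Python sketch below illustrates that loop with a single FCM-style hash-table predictor; the table size, hash, and byte-oriented residual code are illustrative assumptions, not the published FPC format.

```python
import struct

def fpc_like_encode(values, table_bits=16):
    """Toy predict-XOR-encode loop in the spirit of FPC (illustrative only)."""
    table = [0] * (1 << table_bits)   # FCM-style prediction table
    hist = 0                          # hash of recently seen values
    out = bytearray()
    for v in values:
        bits = struct.unpack('<Q', struct.pack('<d', v))[0]  # raw 64-bit pattern
        pred = table[hist]
        table[hist] = bits
        hist = ((hist << 6) ^ (bits >> 48)) & ((1 << table_bits) - 1)
        resid = bits ^ pred           # good predictions leave many leading zero bits
        nbytes = (resid.bit_length() + 7) // 8
        out.append(nbytes)            # 1 length byte + only the nonzero residual bytes
        out += resid.to_bytes(nbytes, 'little')
    return bytes(out)
```

Because the decoder recovers each value as `resid ^ pred`, it can rebuild the same prediction table on the fly, so no table ever needs to be transmitted.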

224 citations


Proceedings ArticleDOI
01 Sep 2009
TL;DR: An approximate representation of bag-of-features is proposed, obtained by projecting the corresponding histogram onto a set of pre-defined sparse projection functions to produce several image descriptors; it is at least one order of magnitude faster than standard bag-of-features while providing excellent search quality.
Abstract: One of the main limitations of image search based on bag-of-features is the memory usage per image. Only a few million images can be handled on a single machine in reasonable response time. In this paper, we first evaluate how the memory usage is reduced by using lossless index compression. We then propose an approximate representation of bag-of-features obtained by projecting the corresponding histogram onto a set of pre-defined sparse projection functions, producing several image descriptors. Coupled with a proper indexing structure, an image is represented by a few hundred bytes. A distance expectation criterion is then used to rank the images. Our method is at least one order of magnitude faster than standard bag-of-features while providing excellent search quality.
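
The descriptor construction can be pictured as summing a handful of pre-defined sparse groups of histogram bins; the sketch below uses random groups and an arbitrary descriptor size purely for illustration (the group construction and sizes are assumptions, not the paper's projection functions).

```python
import random

def make_sparse_projections(vocab_size, n_proj, group_size, seed=0):
    """Each projection sums a small, fixed subset of visual-word bins."""
    rng = random.Random(seed)
    return [rng.sample(range(vocab_size), group_size) for _ in range(n_proj)]

def project_histogram(hist, projections):
    """Compress a sparse BoF histogram {word_id: count} into a short descriptor."""
    return [sum(hist.get(w, 0) for w in group) for group in projections]

# Example: a 20k-word vocabulary reduced to a 128-dimensional descriptor.
proj = make_sparse_projections(vocab_size=20000, n_proj=128, group_size=32)
desc = project_histogram({17: 2, 4242: 1, 19999: 3}, proj)
```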

182 citations


Journal ArticleDOI
TL;DR: This paper proposes a simple lossless entropy compression (LEC) algorithm which can be implemented in a few lines of code, requires very low computational power, compresses data on the fly and uses a very small dictionary whose size is determined by the resolution of the analog-to-digital converter.
Abstract: Energy is a primary constraint in the design and deployment of wireless sensor networks (WSNs), since sensor nodes are typically powered by batteries with a limited capacity. Energy efficiency is generally achieved by reducing radio communication, for instance, limiting transmission/reception of data. Data compression can be a valuable tool in this direction. The limited resources available in a sensor node demand, however, the development of specifically designed compression algorithms. In this paper, we propose a simple lossless entropy compression (LEC) algorithm which can be implemented in a few lines of code, requires very low computational power, compresses data on the fly and uses a very small dictionary whose size is determined by the resolution of the analog-to-digital converter. We have evaluated the effectiveness of LEC by compressing four temperature and relative humidity data sets collected by real WSNs, and solar radiation, seismic and ECG data sets. We have obtained compression ratios up to 70.81% and 62.08% for temperature and relative humidity data sets, respectively, and of the order of 70% for the other data sets. Then, we have shown that LEC outperforms two specifically designed compression algorithms for WSNs. Finally, we have compared LEC with gzip, bzip2, rar, classical Huffman and arithmetic encodings.
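
LEC's per-sample step is essentially the DC-coefficient coding of baseline JPEG applied to the difference between consecutive ADC readings: a short prefix code gives the bit length of the difference, followed by that many bits identifying the difference within its group. A minimal sketch, with an illustrative prefix table rather than the exact codes of the paper:

```python
# Illustrative prefix codes for the group index n (bit length of the difference);
# LEC uses JPEG-style Huffman codes, so this exact table is an assumption.
PREFIX = {0: '00', 1: '010', 2: '011', 3: '100', 4: '101',
          5: '110', 6: '1110', 7: '11110', 8: '111110'}

def lec_encode(samples):
    bits, prev = '', 0
    for s in samples:
        d = s - prev
        prev = s
        n = abs(d).bit_length()                 # group index
        bits += PREFIX[n]
        if n:                                   # append n bits identifying d in its group
            index = d if d > 0 else d + (1 << n) - 1
            bits += format(index, '0{}b'.format(n))
    return bits

print(lec_encode([20, 21, 21, 19]))   # small differences -> few bits per sample
```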

160 citations


Journal ArticleDOI
TL;DR: A series of techniques are applied to James Watson's genome that in combination reduce it to a mere 4MB, small enough to be sent as an email attachment.
Abstract: Summary: The amount of genomic sequence data being generated and made available through public databases continues to increase at an ever-expanding rate. Downloading, copying, sharing and manipulating these large datasets are becoming difficult and time consuming for researchers. We need to consider using advanced compression techniques as part of a standard data format for genomic data. The inherent structure of genome data allows for more efficient lossless compression than can be obtained through the use of generic compression programs. We apply a series of techniques to James Watson's genome that in combination reduce it to a mere 4MB, small enough to be sent as an email attachment. Availability: Our algorithms are implemented in C++ and are freely available from http://www.ics.uci.edu/~xhx/project/DNAzip. Contact: [email protected]; [email protected]. Supplementary information: Supplementary data are available at Bioinformatics online.
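
The key to the 4MB figure is that an individual genome is stored as its differences from a reference sequence (SNPs and indels) rather than as raw bases. A hedged sketch of the position-delta idea for SNPs; the encoding below is illustrative, not the DNAzip format:

```python
def encode_variants(snps):
    """Store SNPs as (delta-encoded position, alternate base) pairs.

    `snps` is a list of (position, alt_base) tuples relative to a reference
    genome; returning deltas keeps the numbers small and highly compressible.
    Purely illustrative - DNAzip's actual format differs.
    """
    out, last = [], 0
    for pos, alt in sorted(snps):
        out.append((pos - last, alt))
        last = pos
    return out

print(encode_variants([(10534, 'A'), (10601, 'T'), (52020, 'G')]))
# -> [(10534, 'A'), (67, 'T'), (41419, 'G')]
```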

155 citations


Journal ArticleDOI
TL;DR: The uplink of a backhaul-constrained, MIMO coordinated network with N + 1 multi-antenna base stations that cooperate in order to decode the users' data, and that are linked by means of a common lossless backhaul, is considered.
Abstract: We consider the uplink of a backhaul-constrained, MIMO coordinated network. That is, a single-frequency network with N + 1 multi-antenna base stations (BSs) that cooperate in order to decode the users' data, and that are linked by means of a common lossless backhaul, of limited capacity R. To implement the receive cooperation, we propose distributed compression: N BSs, upon receiving their signals, compress them using a multi-source lossy compression code. Then, they send the compressed vectors to a central BS, which performs users' decoding. Distributed Wyner-Ziv coding is proposed to be used, and is designed in this work. The first part of the paper is devoted to a network with a unique multi-antenna user, that transmits a predefined Gaussian space-time codeword. For such a scenario, the "compression noise" covariance at the BSs is optimized, considering the user's achievable rate as the performance metric. In particular, for N = 1 the optimum covariance is derived in closed form, while for N > 1 an iterative algorithm is devised. The second part of the contribution focusses on the multi-user scenario. For it, the achievable rate region is obtained by means of the optimum "compression noise" covariances for sum-rate and weighted sum-rate, respectively.
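
For context, the Gaussian "compression noise" model referred to above follows the standard quantization test channel; with assumed notation (y_i the signal received at BS i, Phi_i its covariance, Sigma_i the compression-noise covariance being optimized), the basic relations are sketched below. Distributed Wyner-Ziv binning, which the paper designs, lets the BSs spend less backhaul than this simple per-BS rate.

```latex
% Gaussian test channel at BS i: compression adds independent "compression noise"
\hat{\mathbf{y}}_i = \mathbf{y}_i + \mathbf{z}_i,
\qquad \mathbf{z}_i \sim \mathcal{CN}\!\left(\mathbf{0},\, \boldsymbol{\Sigma}_i\right),
\qquad i = 1,\dots,N.
% Backhaul spent by BS i without binning, and the common backhaul constraint:
I(\mathbf{y}_i;\hat{\mathbf{y}}_i)
  = \log_2 \frac{\det\!\left(\boldsymbol{\Phi}_i + \boldsymbol{\Sigma}_i\right)}
                {\det \boldsymbol{\Sigma}_i},
\qquad
\sum_{i=1}^{N} I(\mathbf{y}_i;\hat{\mathbf{y}}_i) \;\le\; R.
```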

128 citations


Patent
Axel Lakus-Becker1
17 Nov 2009
TL;DR: In this article, a computer implemented method of storing pixel data corresponding to a pixel is disclosed, where a first and a second set of pixel data are determined for the pixel and parity bits for the first and second sets are generated, using error correction.
Abstract: A computer implemented method of storing pixel data corresponding to a pixel is disclosed. A first and a second set of pixel data are determined for the pixel. Parity bits for the first set of pixel data are generated, using error correction. An encoded version of the first set of pixel data including the parity bits is stored. An encoded version of the second set of pixel data is stored, using lossless data compression, for use in decoding the first set of pixel data.

127 citations


BookDOI
28 Dec 2009
TL;DR: A survey of recent results in the field of compression of remotely sensed 3D data, with a particular interest in hyperspectral imagery, is provided, where there is a tradeoff between compression achieved and the quality of the decompressed image.
Abstract: Hyperspectral Data Compression provides a survey of recent results in the field of compression of remotely sensed 3D data, with a particular interest in hyperspectral imagery. Chapter 1 addresses compression architecture, and reviews and compares compression methods. Chapters 2 through 4 focus on lossless compression (where the decompressed image must be bit for bit identical to the original). Chapter 5, contributed by the editors, describes a lossless algorithm based on vector quantization with extensions to near lossless and possibly lossy compression for efficient browsing and pure pixel classification. Chapter 6 deals with near lossless compression, while Chapter 7 considers lossy techniques constrained by almost perfect classification. Chapters 8 through 12 address lossy compression of hyperspectral imagery, where there is a tradeoff between compression achieved and the quality of the decompressed image. Chapter 13 examines artifacts that can arise from lossy compression.

119 citations


Journal ArticleDOI
TL;DR: A state-of-the-art, scalable, electrophysiology platform designed for acquisition, compression, encryption, and storage of large-scale data is described that incorporates lossless data compression using range-encoded differences, a 32-bit cyclic redundancy checksum to ensure data integrity, and 128-bit encryption for protection of patient information.

119 citations


Journal ArticleDOI
TL;DR: A compression scheme that combines efficient storage with fast retrieval for the information in a node and exploits the properties of the Web Graph without assuming an ordering of the URLs, so that it may be applied to more general graphs.
Abstract: The Web Graph is a large-scale graph that does not fit in main memory, so that lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval for the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so that it may be applied to more general graphs. Tests on some commonly used datasets achieve space savings of about 10% over existing methods.
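
A typical baseline for this kind of scheme stores each node's sorted adjacency list as successive gaps and packs the gaps with a variable-length byte code, since most gaps are small. The sketch below shows only that baseline; the paper's method adds its own node-information layout on top of such ideas.

```python
def varint(n):
    """7-bits-per-byte variable-length encoding (continuation bit in the MSB)."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def encode_adjacency(neighbors):
    """Gap-encode a sorted adjacency list: small gaps give short byte codes."""
    out, prev = bytearray(varint(len(neighbors))), 0
    for v in sorted(neighbors):
        out += varint(v - prev)
        prev = v
    return bytes(out)

print(encode_adjacency([1001, 1003, 1050, 9000]).hex())
```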

114 citations


Journal ArticleDOI
01 Sep 2009
TL;DR: This paper proposes a method that combines JPEG-LS and an interframe coding with motion vectors to enhance the compression performance of using JPEG-LS alone, and achieves average compression gains of 13.3% and 26.3% over the methods of using JPEG-LS and JPEG2000 alone, respectively.
Abstract: Hospitals and medical centers produce an enormous amount of digital medical images every day, especially in the form of image sequences, which requires considerable storage space. One solution could be the application of lossless compression. Among available methods, JPEG-LS has excellent coding performance. However, it only compresses a single picture with intracoding and does not utilize the interframe correlation among pictures. Therefore, this paper proposes a method that combines the JPEG-LS and an interframe coding with motion vectors to enhance the compression performance of using JPEG-LS alone. Since the interframe correlation between two adjacent images in a medical image sequence is usually not as high as that in a general video image sequence, the interframe coding is activated only when the interframe correlation is high enough. With six capsule endoscope image sequences under test, the proposed method achieves average compression gains of 13.3% and 26.3% over the methods of using JPEG-LS and JPEG2000 alone, respectively. Similarly, for an MRI image sequence, coding gains of 77.5% and 86.5% are correspondingly obtained.
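
The mode decision described above — interframe residual coding only when two adjacent frames are correlated enough, otherwise plain intra coding — can be sketched as follows. The correlation threshold and the `jpegls_encode` placeholder are assumptions for illustration, and motion estimation is omitted.

```python
import numpy as np

CORR_THRESHOLD = 0.9   # illustrative; the paper tunes when interframe coding pays off

def choose_mode(prev_frame, cur_frame):
    """Return 'inter' when adjacent frames are similar enough to code a residual."""
    a = prev_frame.astype(np.float64).ravel()
    b = cur_frame.astype(np.float64).ravel()
    corr = np.corrcoef(a, b)[0, 1]
    return 'inter' if corr >= CORR_THRESHOLD else 'intra'

def encode_frame(prev_frame, cur_frame, jpegls_encode):
    """jpegls_encode is a placeholder for any lossless still-image coder."""
    if prev_frame is not None and choose_mode(prev_frame, cur_frame) == 'inter':
        residual = cur_frame.astype(np.int16) - prev_frame.astype(np.int16)
        return ('inter', jpegls_encode(residual))   # residual is coded losslessly
    return ('intra', jpegls_encode(cur_frame))
```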

Journal ArticleDOI
TL;DR: The experimental results demonstrate the superiority of the proposed reversible visible watermarking scheme compared to the existing methods; the scheme also adopts data compression for further reduction in the recovery packet size and improvement in embedding capacity.
Abstract: A reversible (also called lossless, distortion-free, or invertible) visible watermarking scheme is proposed to satisfy the applications, in which the visible watermark is expected to combat copyright piracy but can be removed to losslessly recover the original image. We transparently reveal the watermark image by overlapping it on a user-specified region of the host image through adaptively adjusting the pixel values beneath the watermark, depending on the human visual system-based scaling factors. In order to achieve reversibility, a reconstruction/recovery packet, which is utilized to restore the watermarked area, is reversibly inserted into non-visibly-watermarked region. The packet is established according to the difference image between the original image and its approximate version instead of its visibly watermarked version so as to alleviate its overhead. For the generation of the approximation, we develop a simple prediction technique that makes use of the unaltered neighboring pixels as auxiliary information. The recovery packet is uniquely encoded before hiding so that the original watermark pattern can be reconstructed based on the encoded packet. In this way, the image recovery process is carried out without needing the availability of the watermark. In addition, our method adopts data compression for further reduction in the recovery packet size and improvement in embedding capacity. The experimental results demonstrate the superiority of the proposed scheme compared to the existing methods.

Journal ArticleDOI
TL;DR: To underscore the potential impact of exploiting calibration-induced artifacts in the standard AVIRIS data sets, a compression algorithm is presented that achieves noticeably smaller compressed sizes for these data sets than is reported for any other algorithm.
Abstract: Algorithms for compression of hyperspectral data are commonly evaluated on a readily available collection of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images. These images are the end product of processing raw data from the instrument, and their sample value distributions contain artificial regularities that are introduced by the conversion of raw data values to radiance units. It is shown that some of the best reported lossless compression results for these images are achieved by algorithms that significantly exploit these artifacts. This fact has not been widely reported and may not be widely recognized. Compression performance comparisons involving such algorithms and these standard AVIRIS images can be misleading if they are extrapolated to images that lack such artifacts, such as unprocessed hyperspectral images. In fact, two of these algorithms are shown to achieve rather unremarkable compression performance on a set of more recent AVIRIS images that do not contain appreciable calibration-induced artifacts. This newer set of AVIRIS images, which contains both calibrated and raw images, is made available for compression experiments. To underscore the potential impact of exploiting calibration-induced artifacts in the standard AVIRIS data sets, a compression algorithm is presented that achieves noticeably smaller compressed sizes for these data sets than is reported for any other algorithm.

Journal ArticleDOI
TL;DR: The experimental results show that the proposed method achieves the best visual quality of reconstructed images compared with the two related works, and obtains an embedding capacity as high as that of Lin and Chang's method, followed by Yang et al.'s method.

Journal ArticleDOI
TL;DR: A modified version of the embedded block coder with optimized truncation (EBCOT), tailored according to the characteristics of the data, encodes the residual data generated after prediction to provide resolution and quality scalability.
Abstract: We propose a novel symmetry-based technique for scalable lossless compression of 3D medical image data. The proposed method employs the 2D integer wavelet transform to decorrelate the data and an intraband prediction method to reduce the energy of the sub-bands by exploiting the anatomical symmetries typically present in structural medical images. A modified version of the embedded block coder with optimized truncation (EBCOT), tailored according to the characteristics of the data, encodes the residual data generated after prediction to provide resolution and quality scalability. Performance evaluations on a wide range of real 3D medical images show an average improvement of 15% in lossless compression ratios when compared to other state-of-the-art lossless compression methods that also provide resolution and quality scalability, including 3D-JPEG2000, JPEG2000, and H.264/AVC intra-coding.
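
The reversible 2D integer wavelet used in this family of codecs is typically the lifting-based 5/3 transform of lossless JPEG 2000; a one-dimensional sketch of its forward and inverse steps is shown below, with symmetric boundary extension and integer arithmetic throughout. The paper's contribution — the symmetry-driven intraband prediction and the modified EBCOT — sits on top of such a transform and is not shown.

```python
def int53_forward(x):
    """Reversible 5/3 lifting on an even-length list: returns (lowpass, highpass)."""
    n = len(x)
    even = lambda i: x[2*i + 2] if 2*i + 2 < n else x[n - 2]   # symmetric extension
    d = [x[2*i + 1] - (x[2*i] + even(i)) // 2 for i in range(n // 2)]
    s = [x[2*i] + (d[i - 1 if i else 0] + d[i] + 2) // 4 for i in range(n // 2)]
    return s, d

def int53_inverse(s, d):
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):
        x[2*i] = s[i] - (d[i - 1 if i else 0] + d[i] + 2) // 4
    for i in range(len(d)):
        nb = x[2*i + 2] if 2*i + 2 < n else x[n - 2]
        x[2*i + 1] = d[i] + (x[2*i] + nb) // 2
    return x

sig = [10, 12, 11, 13, 40, 42, 41, 43]
assert int53_inverse(*int53_forward(sig)) == sig   # bit-exact reconstruction
```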

Proceedings ArticleDOI
01 Nov 2009
TL;DR: A method is proposed to achieve compression of encrypted image data based on compressive sensing, with joint decoding/decryption performed using a modified basis pursuit decoding method.
Abstract: The problem of lossy compression of encrypted image data is considered in this paper. A method is proposed to achieve compression of the encrypted image data based on a compressive sensing technique. Joint decoding/decryption is proposed with the modified basis pursuit decoding method to take care of encryption. Simulation results are provided to demonstrate the compression results of the proposed compression method based on compressive sensing.

Journal ArticleDOI
TL;DR: An algorithm and a hardware architecture of a new type EC codec engine with multiple modes are presented and the proposed four-tree pipelining scheme can reduce 83% latency and 67% buffer size between transform and entropy coding.
Abstract: In a typical portable multimedia system, external access, which is usually dominated by block-based video content, induces more than half of total system power. Embedded compression (EC) effectively reduces external access caused by video content by reducing the data size. In this paper, an algorithm and a hardware architecture of a new type of EC codec engine with multiple modes are presented. Lossless mode, and lossy modes with rate control modes and quality control modes, are all supported by a single algorithm. The proposed four-tree pipelining scheme can reduce 83% latency and 67% buffer size between transform and entropy coding. The proposed EC codec engine can save 62%, 66%, and 77% external access at lossless mode, half-size mode, and quarter-size mode and can be used in various system power conditions. With a TSMC 0.18 μm 1P6M CMOS logic process, the proposed EC codec engine can encode or decode CIF 30-frames-per-second video data and achieve power saving of more than 109 mW. The EC codec engine itself consumes only 2 mW power.

Journal ArticleDOI
TL;DR: This paper presents an information-theoretic analysis based on the concept of conditional entropy, which is used to assess the available amount of correlation and the potential compression gain, and proposes a new lossless compression algorithm that employs a Kalman filter in the prediction stage.
Abstract: Hyperspectral images exhibit significant spectral correlation, whose exploitation is crucial for compression. In this paper, we investigate the problem of predicting a given band of a hyperspectral image using more than one previous band. We present an information-theoretic analysis based on the concept of conditional entropy, which is used to assess the available amount of correlation and the potential compression gain. Then, we propose a new lossless compression algorithm that employs a Kalman filter in the prediction stage. Simulation results are presented on Airborne Visible Infrared Imaging Spectrometer, Hyperspectral Digital Imagery Collection Experiment, and Hyperspectral Mapper scenes, showing competitive performance with other state-of-the-art compression algorithms.
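
As a purely illustrative stand-in for the paper's Kalman-filter prediction stage, the sketch below runs an independent scalar Kalman filter per pixel across the band index and uses the filtered state as the prediction of the next band; the random-walk state model and the noise variances are assumptions, not the paper's design.

```python
import numpy as np

def kalman_band_prediction(cube, q=1.0, r=25.0):
    """Predict band k from bands 0..k-1 with an independent scalar Kalman
    filter per pixel (state = slowly varying pixel intensity, random walk).
    Returns the prediction residuals, which an entropy coder would compress."""
    bands, h, w = cube.shape
    xhat = cube[0].astype(np.float64)      # state estimate, initialised from band 0
    p = np.full((h, w), 1e3)               # estimate variance
    residuals = np.empty_like(cube, dtype=np.int32)
    residuals[0] = 0
    for k in range(1, bands):
        p_pred = p + q                     # predict step (random-walk model)
        pred = np.rint(xhat)               # integer prediction of band k
        residuals[k] = cube[k].astype(np.int32) - pred.astype(np.int32)
        gain = p_pred / (p_pred + r)       # update step with band k as measurement
        xhat = xhat + gain * (cube[k] - xhat)
        p = (1.0 - gain) * p_pred
    return residuals
```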

Patent
Fejzo Zoran1
09 Jan 2009
TL;DR: In this paper, an adaptive segmentation technique that fixes segment start points based on constraints imposed by the existence of a desired RAP and/or detected transient in the frame and selects a optimum segment duration in each frame to reduce encoded frame payload subject to an encoded segment payload constraint
Abstract: A lossless audio codec encodes/decodes a lossless variable bit rate (VBR) bitstream with random access point (RAP) capability to initiate lossless decoding at a specified segment within a frame and/or multiple prediction parameter set (MPPS) capability partitioned to mitigate transient effects. This is accomplished with an adaptive segmentation technique that fixes segment start points based on constraints imposed by the existence of a desired RAP and/or detected transient in the frame and selects an optimum segment duration in each frame to reduce encoded frame payload subject to an encoded segment payload constraint. RAP and MPPS are particularly applicable to improve overall performance for longer frame durations.

Journal ArticleDOI
TL;DR: The author proposes to use reversible data hiding applications with a vector quantisation (VQ)-compressed image to provide a higher hiding capacity and a better stego-image quality.
Abstract: Reversible data hiding is required and preferable in many applications such as medical diagnosis, military, law enforcement, fine art work and so on. The author proposes to use reversible data hiding applications with a vector quantisation (VQ)-compressed image. The histogram of the prediction VQ-compressed image is explored. The prediction VQ encoded image is identical to traditional VQ encoding. The index of prediction encoded VQ images is modified to embed secret data. Furthermore, the VQ images can be completely reconstructed by the recovery procedure. The experimental results show the performance of the proposed method and the efficiency of the embedding, extraction and recovery procedures. In comparison with other VQ-based schemes, the proposed method provides a higher hiding capacity and a better stego-image quality. Also, the lossless VQ image is recovered.
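
A common way to embed bits reversibly in an index map of this kind is histogram shifting: the most frequent index value carries one bit per occurrence, and a small range of values is shifted to make room. The sketch below shows that generic step only — it is not the paper's exact embedding rule — and assumes a spare index value is available.

```python
from collections import Counter

def histogram_shift_embed(indices, bits):
    """Generic reversible histogram-shifting embedding (illustrative): hide
    bits in occurrences of the most frequent index value."""
    hist = Counter(indices)
    peak = max(hist, key=hist.get)                   # most frequent index value
    zero = next(v for v in range(peak + 1, max(indices) + 2) if hist[v] == 0)
    out, it = [], iter(bits)
    for v in indices:
        if peak < v < zero:
            out.append(v + 1)                        # shift to free the bin peak+1
        elif v == peak:
            out.append(v + next(it, 0))              # embed one bit (0 or 1)
        else:
            out.append(v)
    return out, (peak, zero)                         # side info needed for recovery
```

Recovery simply inverts the mapping: values equal to `peak` or `peak + 1` yield a bit and restore `peak`, and values in between are shifted back down by one.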

Proceedings ArticleDOI
19 Apr 2009
TL;DR: The amount of inter-channel redundancy for Higher Order Ambisonics is investigated, and lossless compression techniques that build on this redundancy are studied, with a focus on low-delay algorithms for real-time, or two-way, applications.
Abstract: When Higher Order Ambisonics (HOA) is used to represent a sound field, the channels might contain a lot of redundancy in some cases. This redundancy can be exploited in order to provide more efficient network transmission and storage. In this work the amount of inter-channel redundancy for Higher Order Ambisonics is investigated. Furthermore, lossless compression techniques that build on this redundancy are studied, with a focus on low-delay algorithms for real-time, or two-way, applications. The presented encoding scheme results in a delay of 256 samples, but with a rather high computational complexity both for encoding and decoding. The system also preserves the desired features of the HOA format, such as the scalability and the ability to reproduce over arbitrary loudspeaker layouts.
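
One simple way to exploit the inter-channel redundancy is to predict each higher-order channel from a reference channel (for example W) with a per-block least-squares gain and code only the residual. The sketch below shows this generic idea, not the low-delay scheme of the paper; for bit-exact lossless coding the gain would additionally be quantized and the residual formed against the quantized prediction on integer samples.

```python
import numpy as np

def interchannel_residual(w_channel, x_channel, block=256):
    """Per-block least-squares gain prediction of one HOA channel from another;
    the residual (plus the gains) is what a lossless coder would then store."""
    gains, residual = [], np.empty_like(x_channel, dtype=np.float64)
    for start in range(0, len(x_channel), block):
        w = w_channel[start:start + block].astype(np.float64)
        x = x_channel[start:start + block].astype(np.float64)
        g = float(np.dot(w, x) / np.dot(w, w)) if np.any(w) else 0.0
        gains.append(g)
        residual[start:start + block] = x - g * w
    return gains, residual
```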

Journal ArticleDOI
15 Jul 2009
TL;DR: A novel progressive lossless mesh compression algorithm based on Incremental Parametric Refinement, where the connectivity is uncontrolled in a first step, yielding visually pleasing meshes at each resolution level while saving connectivity information compared to previous approaches.
Abstract: In this paper, we propose a novel progressive lossless mesh compression algorithm based on Incremental Parametric Refinement, where the connectivity is uncontrolled in a first step, yielding visually pleasing meshes at each resolution level while saving connectivity information compared to previous approaches. The algorithm starts with a coarse version of the original mesh, which is further refined by means of a novel refinement scheme. The mesh refinement is driven by a geometric criterion, in the spirit of surface reconstruction algorithms, aiming at generating uniform meshes. The vertex coordinates are also quantized and transmitted in a progressive way, following a geometric criterion, efficiently allocating the bit budget. With this assumption, the generated intermediate meshes tend to exhibit a uniform sampling. The potential discrepancy between the resulting connectivity and the original one is corrected at the end of the algorithm. We provide a proof-of-concept implementation, yielding very competitive results compared to previous works in terms of rate/distortion trade-off.

Journal ArticleDOI
TL;DR: In this article, the authors considered multiple description (MD) coding for the Gaussian source with K descriptions under the symmetric mean-squared error (MSE) distortion constraints, and provided an approximate characterization of the rate region.
Abstract: We consider multiple description (MD) coding for the Gaussian source with K descriptions under the symmetric mean-squared error (MSE) distortion constraints, and provide an approximate characterization of the rate region. We show that the rate region can be sandwiched between two polytopes, between which the gap can be upper-bounded by constants dependent on the number of descriptions, but independent of the distortion constraints. Underlying this result is an exact characterization of the lossless multilevel diversity source coding problem: a lossless counterpart of the MD problem. This connection provides a polytopic template for the inner and outer bounds to the rate region. In order to establish the outer bound, we generalize Ozarow's technique to introduce a strategic expansion of the original probability space by more than one random variable. For the symmetric rate case with any number of descriptions, we show that the gap between the upper bound and the lower bound for the individual description rate-distortion function is no larger than 0.92 bit. The results developed in this work also suggest that the "separation" approach of combining successive refinement quantization and lossless multilevel diversity coding is a competitive one, since its performance is only a constant away from the optimum. The results are further extended to general sources under the MSE distortion measure, where a similar but looser bound on the gap holds.

Journal ArticleDOI
TL;DR: A scheme for lossy compression of discrete memoryless sources that proves asymptotic optimality of the scheme for any separable (letter-by-letter) bounded distortion criterion and presents a suboptimal compression algorithm that exhibits near-optimal performance for moderate block lengths.
Abstract: We propose a scheme for lossy compression of discrete memoryless sources: The compressor is the decoder of a nonlinear channel code, constructed from a sparse graph. We prove asymptotic optimality of the scheme for any separable (letter-by-letter) bounded distortion criterion. We also present a suboptimal compression algorithm, which exhibits near-optimal performance for moderate block lengths.

Proceedings ArticleDOI
04 May 2009
TL;DR: It is revealed that software-based data compression cannot be considered a universal solution to reduce energy consumption, although in some cases compression is found to save substantial energy and improve performance.
Abstract: Data compression has been claimed to be an attractive solution to save energy consumption in high-end servers and data centers. However, there has not been a study to explore this. In this paper, we present a comprehensive evaluation of energy consumption for various file compression techniques implemented in software. We apply various compression tools available on Linux to a variety of data files, and we try them on server class and workstation class systems. We compare their energy and performance results against raw reads and writes. Our results reveal that software based data compression cannot be considered as a universal solution to reduce energy consumption. Various factors like the type of the data file, the compression tool being used, the read-to-write ratio of the workload, and the hardware configuration of the system impact the efficacy of this technique. In some cases, however, we found compression to save substantial energy and improve performance.
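
The trade-off the study measures can be reduced to a break-even check: compression pays off only when the energy spent compressing plus handling the smaller file undercuts the energy of handling the raw file. A toy sketch with hypothetical per-megabyte energy figures (placeholders, not measurements from the paper):

```python
def compression_saves_energy(raw_bytes, ratio, compress_joules_per_mb,
                             io_joules_per_mb):
    """Break-even check: is (compress + write compressed) cheaper than raw write?"""
    mb = raw_bytes / 1e6
    raw_energy = mb * io_joules_per_mb
    compressed_energy = mb * compress_joules_per_mb + (mb / ratio) * io_joules_per_mb
    return compressed_energy < raw_energy

# Hypothetical numbers: a 2:1 ratio only pays off if compression itself is cheap.
print(compression_saves_energy(500e6, ratio=2.0,
                               compress_joules_per_mb=0.8, io_joules_per_mb=1.0))
```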

Journal ArticleDOI
TL;DR: A new transform scheme of multiplierless reversible time-domain lapped transform and Karhunen-Loeve transform (RTDLT/KLT) for lossy-to-lossless hyperspectral image compression that can realize progressive lossy to lossless compression from a single embedded code-stream file is proposed.
Abstract: We propose a new transform scheme of multiplierless reversible time-domain lapped transform and Karhunen-Loeve transform (RTDLT/KLT) for lossy-to-lossless hyperspectral image compression. Instead of applying the discrete wavelet transform (DWT) in the spatial domain, RTDLT is applied for decorrelation. RTDLT can be achieved by existing discrete cosine transform and pre- and postfilters, while the reversible transform is guaranteed by a matrix factorization method. In the spectral direction, a reversible integer low-complexity KLT is used for decorrelation. Owing to the completely reversible transform, the proposed method can realize progressive lossy-to-lossless compression from a single embedded code-stream file. Numerical experiments on benchmark images show that the proposed transform scheme performs better than 5/3DWT-based methods in both lossy and lossless compression, and is comparable with the optimal 9/7DWT-FloatKLT-based lossy compression method.

Journal ArticleDOI
TL;DR: This paper presents an extension of dynamic mesh compression techniques based on PCA, which allows a very compact representation of moving 3D surfaces; since the data can be encoded very efficiently, the size of the PCA basis cannot be neglected when considering the overall performance of a compression algorithm.
Abstract: In this paper, we present an extension of dynamic mesh compression techniques based on PCA. Such representation allows very compact representation of moving 3D surfaces; however, it requires some side information to be transmitted along with the main data. The biggest part of this information is the PCA basis, and since the data can be encoded very efficiently, the size of the basis cannot be neglected when considering the overall performance of a compression algorithm. We present a new work in this area, as none of the papers about PCA based compression really addresses this issue. We will show that for an efficient and accurate encoding there are better choices than even sophisticated algorithms such as LPC. We will present results showing that our approach can reduce the size of the basis by 90% with respect to direct encoding, which can lead to approximately 25% increase of performance of the compression algorithm without any significant loss of accuracy. Such improvement moves the performance of the PCA encoder beyond the performance of current state of the art dynamic mesh compression algorithms, such as the recently adopted MPEG standard, FAMC.

Patent
25 Feb 2009
TL;DR: In this article, the control of signal compression is coordinated by selectively modifying control parameters affecting the bit rate, sample rate, dynamic range, and compression operations, which can include a ratio parameter that indicates the relative or proportional amounts of change to the control parameters.
Abstract: Control of signal compression is coordinated by selectively modifying control parameters affecting the bit rate, sample rate, dynamic range, and compression operations. Selected control parameters are modified according to a control function. The control function can include a ratio parameter that indicates the relative or proportional amounts of change to the control parameters. Alternatively, the control function can be represented in a lookup table with values for the selected control parameters related by the control function. The input signal samples can be resampled according to a sample rate control parameter. The dynamic range of signal samples can be selectively adjusted according to a dynamic range control parameter to form modified signal samples. The resampling and dynamic range adjustment can be applied in any order. The modified signal samples are encoded according to a compression control parameter to form compressed samples. The encoder can apply lossless or lossy encoding.
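
The coordinated-control idea — a single control function or lookup table that jointly sets the sample rate, dynamic range, and encoder behavior — can be pictured as in the sketch below; the table entries and parameter names are invented for illustration and are not taken from the patent.

```python
# Hypothetical lookup table: one "compression level" knob jointly selects the
# resample factor, the dynamic-range shift (bits dropped), and the encoder mode.
CONTROL_TABLE = {
    0: {"resample": 1.0, "range_shift": 0, "encoder": "lossless"},
    1: {"resample": 1.0, "range_shift": 2, "encoder": "lossy"},
    2: {"resample": 0.5, "range_shift": 4, "encoder": "lossy"},
}

def apply_control(samples, level):
    """Apply the jointly selected parameters to integer signal samples."""
    cfg = CONTROL_TABLE[level]
    step = int(round(1 / cfg["resample"]))
    resampled = samples[::step]                          # crude decimation
    adjusted = [s >> cfg["range_shift"] for s in resampled]
    return adjusted, cfg["encoder"]                      # hand off to the chosen encoder
```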

Journal ArticleDOI
TL;DR: Experimental results show that not only is the image lossless but also the proposed method can effectively resist the common malicious attacks.
Abstract: A copyright protection method for digital image with 1/T rate forward error correction (FEC) is proposed in this paper. In this method, the original image is lossless and the watermark is robust to malicious attacks, including geometric attacks such as scaling, rotation, cropping, print-photocopy-scan, and scaling-cropping attacks and nongeometric attacks such as low-pass filtering, sharpening, and JPEG compression attacks. The watermark logo is fused with noise bits to improve the security, and later XORed with the feature value of the image by 1/T rate FEC. During extraction, the watermark bits are determined by majority voting, and the extraction procedure needs neither the original image nor the watermark logo. Experimental results show that not only is the image lossless but also the proposed method can effectively resist the common malicious attacks. Since the proposed method is based on the spatial domain and there is no need to perform a frequency transform, the embedding and extraction performances are quite improved.
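
The extraction side can be illustrated with the repetition/majority-voting part of the scheme: each watermark bit is carried by T embedded copies (the 1/T-rate FEC) and recovered by majority vote. The bit layout below is an assumption for illustration, not the paper's exact placement.

```python
def majority_vote_extract(extracted_bits, T):
    """Recover watermark bits from a 1/T repetition code by majority voting.
    `extracted_bits` holds T consecutive noisy copies of each watermark bit."""
    watermark = []
    for i in range(0, len(extracted_bits), T):
        copies = extracted_bits[i:i + T]
        watermark.append(1 if sum(copies) * 2 > len(copies) else 0)
    return watermark

# Example: T = 5, one copy of each bit flipped by an attack - still recovered.
noisy = [1, 1, 0, 1, 1,   0, 0, 0, 1, 0]
print(majority_vote_extract(noisy, T=5))   # -> [1, 0]
```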

Patent
29 Jul 2009
TL;DR: In this article, a foreground identifier corresponding to pixels in number not more than a prescribed number K of pixels is replaced with a background identifier, and the respective binary images are subjected to lossless compression.
Abstract: When priority is placed on a small file size, the number N of kinds of foreground identifiers used for identifying color information of respective pixels of a foreground of a color image is reduced to M, which is smaller than N. For this purpose, a foreground identifier corresponding to not more than a prescribed number K of pixels is replaced with a background identifier. When M is not more than a prescribed number P, M binary images respectively corresponding to the M kinds of identifiers are generated on the basis of a foreground layer including the M kinds of foreground identifiers, the respective binary images are subjected to lossless compression, and a background layer generated on the basis of the foreground layer is subjected to lossy compression.
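
The pruning step described above — reassign to the background any foreground color identifier that covers at most K pixels, so that fewer binary layers need lossless coding — can be sketched as follows; the function and variable names are illustrative.

```python
from collections import Counter

def prune_foreground_identifiers(fg_layer, background_id, k):
    """fg_layer is a 2D list of identifiers; identifiers covering <= k pixels
    are merged into the background so fewer binary layers need lossless coding."""
    counts = Counter(v for row in fg_layer for v in row if v != background_id)
    small = {ident for ident, n in counts.items() if n <= k}
    pruned = [[background_id if v in small else v for v in row] for row in fg_layer]
    kept = sorted(set(counts) - small)
    return pruned, kept       # one binary mask per kept identifier gets lossless coding
```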