
Showing papers on "Run-length encoding published in 2011"


BookDOI
TL;DR: A simple quantisation and coding scheme of colour MP decomposition based on Run Length Encoding (RLE) which can achieve comparable performance to JPEG 2000 even though the latter utilises careful data modelling at the coding stage.
Abstract: We present and evaluate a novel idea for scalable lossy colour image coding with Matching Pursuit (MP) performed in a transform domain. The idea is to exploit correlations in RGB colour space between image subbands after wavelet transformation rather than in the spatial domain. We propose a simple quantisation and coding scheme of colour MP decomposition based on Run Length Encoding (RLE) which can achieve comparable performance to JPEG 2000 even though the latter utilises careful data modelling at the coding stage. Thus, the obtained image representation has the potential to outperform JPEG 2000 with a more sophisticated coding algorithm.

37 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel decode-aware compression technique to improve both compression and decompression efficiencies and outperforms the existing compression approaches by 15%, while the decompression hardware for variable-length coding is capable of operating at the speed closest to the best known field-programmable gate array-based decoder for fixed-length coding.
Abstract: Bitstream compression is important in reconfigurable system design since it reduces the bitstream size and the memory requirement. It also improves the communication bandwidth and thereby decreases the reconfiguration time. Existing research in this field has explored two directions: efficient compression with slow decompression, or fast decompression at the cost of compression efficiency. This paper proposes a novel decode-aware compression technique to improve both compression and decompression efficiencies. The three major contributions of this paper are: 1) smart placement of compressed bitstreams that can significantly decrease the overhead of the decompression engine; 2) selection of profitable parameters for bitstream compression; and 3) efficient combination of bitmask-based compression and run length encoding of repetitive patterns. Our proposed technique outperforms the existing compression approaches by 15%, while our decompression hardware for variable-length coding is capable of operating at the speed closest to the best known field-programmable gate array-based decoder for fixed-length coding.

28 citations
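The third contribution above, combining bitmask-based compression with run length encoding of repetitive patterns, can be illustrated with a toy sketch. Everything below is an assumption for illustration (the dictionary, the token tuples, and the single-bit mask limit); the paper's actual decode-aware bitstream format is not reproduced:

```python
def bitmask_rle_compress(words, dictionary):
    """Toy sketch of bitmask-based compression combined with RLE.

    Each input word becomes a dictionary reference, a dictionary
    reference plus a one-bit "bitmask" fix-up, or a literal; runs of
    identical words are collapsed into a single (run, token) entry.
    """
    out, i = [], 0
    while i < len(words):
        w = words[i]
        run = 1                      # run length encoding of repeats
        while i + run < len(words) and words[i + run] == w:
            run += 1
        if w in dictionary:
            token = ('DICT', dictionary.index(w))
        else:
            token = None
            for idx, entry in enumerate(dictionary):
                diff = w ^ entry
                if diff and diff & (diff - 1) == 0:  # exactly one bit differs
                    token = ('MASK', idx, diff.bit_length() - 1)
                    break
            if token is None:
                token = ('LIT', w)   # no profitable match: store verbatim
        out.append((run, token))
        i += run
    return out
```

In a real decode-aware scheme the dictionary entries and run thresholds would be chosen by profiling the bitstream, in the spirit of contribution 2 above.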


Journal ArticleDOI
TL;DR: This study concentrates on the lossless compression of images using an approximate matching technique and run length encoding; the performance of this method is compared with the available JPEG compression technique, showing good agreement.
Abstract: Image compression is currently a prominent topic for both military and commercial researchers. Due to the rapid growth of digital media and the consequent need to reduce storage and transmit images effectively, image compression is needed. Image compression attempts to reduce the number of bits required to digitally represent an image while maintaining its perceived visual quality. This study concentrates on the lossless compression of images using an approximate matching technique and run length encoding. The performance of this method is compared with the available JPEG compression technique over a wide number of images, showing good agreement.

25 citations
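Run length encoding, the lossless back end used above, reduces a sequence to (count, value) pairs. A minimal sketch follows; the 255-run cap, so the count fits in one byte, is an assumption and not taken from the paper:

```python
def rle_encode(data):
    """Collapse a sequence into (count, value) pairs; runs are capped
    at 255 so the count fits in a single byte."""
    runs = []
    for b in data:
        if runs and runs[-1][1] == b and runs[-1][0] < 255:
            runs[-1][0] += 1
        else:
            runs.append([1, b])
    return [(c, v) for c, v in runs]

def rle_decode(runs):
    """Expand (count, value) pairs back to the original sequence."""
    out = []
    for count, value in runs:
        out.extend([value] * count)
    return out
```

RLE only wins when runs are long, which is presumably why the paper pairs it with an approximate matching step before encoding.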


Proceedings ArticleDOI
16 May 2011
TL;DR: Results show that I/O speedup may double by using an SSD vs. HDD disk on larger seismic datasets and a simple predictive model for the execution time is developed, which should be a good tool for predicting when to take advantage of multithreaded compression.
Abstract: One of the main challenges of modern computer systems is to overcome the ever more prominent limitations of disk I/O and memory bandwidth, which today are thousands-fold slower than computational speeds. In this paper, we investigate reducing memory bandwidth and overall I/O and memory access times by using multithreaded compression and decompression of large datasets. Since the goal is to achieve a significant overall speedup of I/O, both the level of compression achieved and the efficiency of the compression and decompression algorithms are of importance. Several compression methods for efficient disk access for large seismic datasets are implemented and empirically tested on several modern CPUs and GPUs, including the Intel i7 and the NVIDIA C2050 GPU. To reduce I/O time, both lossless and lossy symmetrical compression algorithms, as well as hardware alternatives, are tested. Results show that I/O speedup may double by using an SSD vs. an HDD disk on larger seismic datasets. Lossy methods investigated include variations of DCT-based methods in several dimensions, combined with lossless compression methods such as RLE (Run-Length Encoding) and Huffman encoding. Our best compression rate (0.16%) and speedups (6 for HDD and 3.2 for SSD) are achieved by using DCT in 3D combined with a modified RLE for lossy methods. It has an average error of 0.46%, which is very acceptable for seismic applications. A simple predictive model for the execution time is also developed and shows a maximum error of 5% vs. our obtained results. It should thus be a good tool for predicting when to take advantage of multithreaded compression. This model and other techniques developed in this paper should also be applicable to several other data-intensive applications.

16 citations
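The DCT-plus-RLE pipeline above can be sketched in one dimension. The paper uses a 3D DCT and a modified RLE; this 1-D orthonormal DCT-II with thresholding and zero-run coding is only an illustrative stand-in:

```python
import math

def dct(x):
    """Orthonormal DCT-II of a 1-D signal (naive O(n^2) version)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def idct(c):
    """Inverse of dct (orthonormal DCT-III)."""
    n = len(c)
    out = []
    for i in range(n):
        s = c[0] * math.sqrt(1 / n)
        s += sum(c[k] * math.sqrt(2 / n) * math.cos(math.pi * (i + 0.5) * k / n)
                 for k in range(1, n))
        out.append(s)
    return out

def lossy_rle(signal, threshold):
    """Zero out small DCT coefficients, then run-length encode the zeros
    as (zero_run, coefficient) pairs; the trailing zero count plays the
    role of an end-of-block marker."""
    coeffs = [c if abs(c) >= threshold else 0.0 for c in dct(signal)]
    runs, zeros = [], 0
    for c in coeffs:
        if c == 0.0:
            zeros += 1
        else:
            runs.append((zeros, c))
            zeros = 0
    return runs, zeros
```

Smooth seismic traces concentrate energy in a few low-frequency coefficients, so thresholding leaves long zero runs for the RLE stage to exploit.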


Journal Article
TL;DR: The proposed enhancement decreases the size of the compressed image, whereas the original method, used primarily for compressing binary images, increases the size of the original image mostly when used for color images.
Abstract: In this paper, we present a proposed enhancement of the image compression process using the RLE algorithm. The proposed enhancement decreases the size of the compressed image, whereas the original method, used primarily for compressing binary images [1], increases the size of the original image mostly when used for color images. The enhanced algorithm is tested on a sample consisting of ten BMP 24-bit true-color images; an application built in Visual Basic 6.0 shows the size before and after the compression process and computes the compression ratio for both the original RLE and the enhanced RLE algorithm.

14 citations


Proceedings ArticleDOI
11 Jul 2011
TL;DR: The proposed run length encoding scheme removes the unintended redundancy by using an ordered pair only when a zero occurs and using the same EOB (End of Block) parameter at the end of each block.
Abstract: This work aims to present an optimized scheme for the entropy encoding part of JPEG image compression by modifying the run length encoding method. In the JPEG (Joint Photographic Experts Group) image compression algorithm, run length coding performs the actual compression by removing the redundancy from transformed and quantized image data. Using the fact that the processes preceding run length coding in the JPEG compression algorithm produce a large number of zeros, the original run length coding uses an ordered pair (a,b), where ‘a’ is the length of the run of consecutive zeros preceding the ASCII character ‘b’. It has been observed that the occurrence of consecutive ASCII characters at the input introduces another redundancy into the encoded data, i.e. ‘a = 0’ before each consecutive ASCII character ‘b’. The proposed run length encoding scheme removes this unintended redundancy by using an ordered pair only when a zero occurs, and uses the same EOB (End of Block) parameter at the end of each block. The proposed encoding scheme does not alter the PSNR value of the algorithm. Using Matlab simulation, the proposed scheme has been tested on various images over a range of quantization (quality) factors, and the results confirm the effectiveness of the new run length encoding scheme in reducing the run length encoded data.

13 citations
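The difference between the classic JPEG-style run length coding and the proposed variant can be sketched as follows. The token representation is illustrative; the paper operates on quantized DCT coefficient blocks rather than plain lists:

```python
EOB = 'EOB'  # end-of-block marker, shared by both schemes

def rle_standard(block):
    """Classic JPEG-style RLE: every nonzero value carries the length
    of the zero run that precedes it, even when that run is empty."""
    out, zeros = [], 0
    for v in block:
        if v == 0:
            zeros += 1
        else:
            out.append((zeros, v))
            zeros = 0
    out.append(EOB)  # trailing zeros are implied by EOB
    return out

def rle_modified(block):
    """Sketch of the proposed variant: a (run, value) pair is emitted
    only when the value is actually preceded by zeros; consecutive
    nonzero values are stored bare, removing the redundant run = 0."""
    out, zeros = [], 0
    for v in block:
        if v == 0:
            zeros += 1
        else:
            out.append((zeros, v) if zeros else v)
            zeros = 0
    out.append(EOB)
    return out
```

On a block with consecutive nonzero values, the modified stream stores bare values where the classic scheme stores redundant (0, value) pairs, which is exactly the redundancy the paper removes.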


Posted Content
TL;DR: These are the first algorithms that achieve running times polynomial in the size of the compressed input and output representations of a string T, and are theoretically faster in the worst case than any algorithm which first decompresses the string for the conversion.
Abstract: We consider the problem of {\em restructuring} compressed texts without explicit decompression. We present algorithms which allow conversions from compressed representations of a string $T$ produced by any grammar-based compression algorithm to representations produced by several specific compression algorithms, including LZ77, LZ78, run length encoding, and some grammar-based compression algorithms. These are the first algorithms that achieve running times polynomial in the size of the compressed input and output representations of $T$. Since most of the representations we consider can achieve exponential compression, our algorithms are theoretically faster in the worst case than any algorithm which first decompresses the string for the conversion.

12 citations


Patent
17 Aug 2011
TL;DR: In this article, a template matching algorithm based on similarity measure is adopted and an index template is built through a run length encoding method to appoint pixels participating in operation, wherein the template is selected by adopting a maximal overlap region so as to contain image characteristics as much as possible, and the index template only contains pixels with edge characteristics or other pixels with obvious gray level change.
Abstract: The invention relates to an automatic image mosaicking method for a high-accuracy image measuring apparatus for super-view-field parts. A template matching algorithm based on a similarity measure is adopted, and an index template is built through a run length encoding method to specify the pixels participating in the operation. The template is selected using the maximal overlap region so as to contain as many image characteristics as possible, and the index template contains only pixels with edge characteristics or other pixels with obvious gray-level change. The invention has the following advantages: the template is determined automatically without man-machine interaction, enhancing efficiency; the determined template carries most of the image characteristics of the objects to be measured; the pixels participating in the operation have obvious characteristics while the other pixels do not participate, so matching accuracy is enhanced; the pixels specified by the index template occupy only a very small part of the whole template, so operational efficiency is enhanced; and the index template is built through the run length encoding method, so storage efficiency is high and computer processing is convenient.

12 citations
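The index-template idea, storing only the salient pixels as run length encoded spans and restricting the match computation to them, can be sketched as below. The sum-of-absolute-differences metric is an assumption for illustration; the patent only speaks of a generic similarity measure:

```python
def rle_index(mask):
    """Encode the positions of salient (1) pixels in a flat binary
    mask as (start, length) runs."""
    runs, start = [], None
    for i, m in enumerate(list(mask) + [0]):  # sentinel closes a trailing run
        if m and start is None:
            start = i
        elif not m and start is not None:
            runs.append((start, i - start))
            start = None
    return runs

def masked_sad(template, window, runs):
    """Sum of absolute differences computed over the indexed pixels
    only; non-salient pixels never enter the loop."""
    return sum(abs(template[i] - window[i])
               for start, length in runs
               for i in range(start, start + length))
```

Because only the runs are iterated, the cost of each match scales with the number of salient pixels rather than the full template area, mirroring the efficiency claim above.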


Proceedings Article
Amin, Qureshi, Junaid, Habib, Anjum 
01 Jan 2011

10 citations


01 Jan 2011
TL;DR: 5D-ODETLAP, a lossy compression algorithm and implementation for high-dimensional geospatial data, is described; it exploits the spatial dependency and autocorrelation in every dimension of these large datasets.
Abstract: This thesis describes HD-ODETLAP, a geospatial data compression technique for lossily compressing high-dimensional geospatial datasets. A five-dimensional (5D) geospatial dataset consists of several multivariable 4D datasets, which are sequences of time-varying volumetric 3D geographical datasets. These datasets are typically very large and demand a great amount of resources for storage and transmission. HD-ODETLAP consists of two stages of work. First, we build the foundation of the HD-ODETLAP method from the 3D-ODETLAP method, which targets the compression of 3D geospatial datasets. With proper point selection, our 3D-ODETLAP method approximates uncompressed 3D data using an over-determined system of linear equations. This approximation is then refined via an error metric, and the two steps alternate until a predefined satisfactory approximation is found. The chosen representative sample set of the original 3D dataset is then encoded using simple Run Length Encoding (RLE) and a prefix coding technique. Second, based on 3D-ODETLAP, we present 5D-ODETLAP, a lossy compression algorithm and implementation for high-dimensional geospatial data. 5D-ODETLAP exploits the spatial dependency and autocorrelation in every dimension of these large datasets. This is an advance on traditional methods that compress only lower-dimensional slices. 5D-ODETLAP greedily selects a characteristic subset of the original 5D dataset, chosen to minimize information loss. The selected set of points is further compressed using a coder built from classic encoding methods; that coded set of points is the compressed representation of the dataset. To uncompress the data, 5D-ODETLAP recomputes the values at each point in 5D by solving a sparse over-determined linear system of equations. After preliminary tests of 5D-ODETLAP, we optimize it by using a much more advanced encoding method than the simple RLE and prefix coding.
The second advance in 5D-ODETLAP is our incorporation of a CUDA-based conjugate gradient linear solver into this framework, which exploits the massive and inexpensive parallelism available in modern GPUs. We have interfaced CUDA with Matlab to maximize programming efficiency and to minimize data transfer overhead. We have tested 5D-ODETLAP with various datasets and error metrics. At the same mean percentage error, the compressed file produced by 5D-ODETLAP is on average 7.67 and 2.14 times smaller than those produced by JPEG2000 and 3D-SPIHT respectively across our eight test datasets. 5D-ODETLAP's advantage is even larger under the same maximum percentage error. 5D-ODETLAP places no restrictions on the data types, and it has the flexibility to properly adjust the parameter settings for other datasets with spatial and temporal redundancy.

8 citations


Proceedings ArticleDOI
26 Oct 2011
TL;DR: An algorithm for ECG signal compression, based on the combination of run length encoding and the discrete wavelet transform and intended for simulated transmission via the IEEE 802.11b WLAN channel, is presented in this work.
Abstract: An algorithm for ECG signal compression, based on the combination of run length encoding and the discrete wavelet transform and intended for simulated transmission via the IEEE 802.11b WLAN channel, is presented in this work. The algorithm consists of two basic phases: ECG signal compression and transmission via the IEEE 802.11b WLAN channel. It applies run length coding to the thresholded discrete wavelet transform of the real ECG signal. In terms of compression efficiency, applying the compression procedure to several ECG records presenting diverse cardiac status, selected from the MIT-BIH arrhythmia database, achieves a compression ratio of around 10:1, a normalized root mean squared error (NRMSE) of 4%, and a (mean ± standard deviation) of the difference between the restored ECG signal and the original one of around (3×10⁻⁶) ± 0.03. The end point of this work is to simulate transmission of the compressed ECG signal via the IEEE 802.11b WLAN channel. The unavoidable distortion introduced by the transmission channel reduces the compression ratio to about 6.7:1, the cost of preserving the ECG signal's fidelity.
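The compression phase above, thresholding a discrete wavelet transform and reporting fidelity as NRMSE, can be sketched with a one-level Haar transform. The paper's actual wavelet, threshold rule, and decomposition depth are not given here; Haar is a stand-in:

```python
import math

R2 = math.sqrt(2.0)

def haar_fwd(x):
    """One level of the orthonormal Haar wavelet transform,
    returning approximation and detail bands (len(x) must be even)."""
    a = [(x[2 * i] + x[2 * i + 1]) / R2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / R2 for i in range(len(x) // 2)]
    return a, d

def haar_inv(a, d):
    """Inverse of haar_fwd: perfect reconstruction without thresholding."""
    x = []
    for ai, di in zip(a, d):
        x += [(ai + di) / R2, (ai - di) / R2]
    return x

def nrmse(orig, recon):
    """Normalised root mean squared error, the fidelity metric quoted above."""
    mse = sum((o - r) ** 2 for o, r in zip(orig, recon)) / len(orig)
    return math.sqrt(mse) / (max(orig) - min(orig))
```

Thresholding the detail band `d` leaves mostly zeros, which is what makes the subsequent run length coding of the coefficients effective.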

Journal ArticleDOI
TL;DR: This work uses 24-bit colour images, splits the video into a sequence of frames, converts their colour from RGB to YUV components, and computes the PSNR of the resulting compressed image.
Abstract: Video compression techniques play a vital role in storage and transmission over limited bandwidth. A video file contains a repeated sequence of still images, 24 or more per second, which represents the motion in the video. This repetition generates a large amount of temporal redundancy, since successive images differ only slightly, in addition to the spatial redundancy contained in the image data itself. In this work we use 24-bit colour images. VCDCUT is used to split the video into a sequence of frames, whose colour is then converted from RGB to YUV components. A skew algorithm applies motion estimation only to the Y (luminance) component of the frame sequence; a wavelet transform is then applied to the components, and the (U, V) components are subsampled. A quantization stage transforms the image from two dimensions to one dimension using the run length encoding (RLE) algorithm. In the decompression process, inverse run length decoding is performed first, followed by dequantization; the inverse wavelet transform is applied to the Y component, the (U, V) components are upsampled, and the YUV components are converted back to RGB. An enhancement step is applied to the result in order to remove noise. Finally, the PSNR is computed to assess the quality of the resulting compressed image.
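The RGB-to-YUV split described above can be sketched with the standard BT.601 conversion; the paper does not state which YUV variant it uses, so these coefficients are an assumption:

```python
def rgb_to_yuv(r, g, b):
    """BT.601 RGB -> YUV: Y carries luminance (where motion estimation
    and the wavelet transform are applied above), while U and V carry
    chrominance, which can be subsampled with little visible loss."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v
```

Separating luminance from chrominance is what allows the pipeline to spend most of its bits on the Y component, to which the eye is most sensitive.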

Proceedings ArticleDOI
01 Nov 2011
TL;DR: This work shows that further compression gains can be achieved for color-mapped images over GIF when a structured arithmetic coder is used along with the pseudo-distance metric, instead of a Huffman coder as suggested by others.
Abstract: Color-mapped images are widely used in many applications, especially on the WWW, and are usually compressed with the Graphic Interchange Format (GIF) without any loss. In our recent work, we showed that further compression gains can be achieved for color-mapped images over GIF when a structured arithmetic coder is used along with the pseudo-distance metric, instead of a Huffman coder as suggested by others. In this work, we show that further compression gains are possible when block-sorting transformations are employed along with the pseudo-distance technique.

Book ChapterDOI
01 Jan 2011
TL;DR: A new approach for removing blocking artifacts in reconstructed block-encoded images is presented. Digital halftoning is a nonlinear system that quantizes a gray level image to one bit per pixel.
Abstract: A new approach for removing blocking artifacts in reconstructed block-encoded images is presented in [1]. The perceptual quality of video affected by packet losses, low resolution and low bit-rate coding by the H.264/AVC encoder is studied in [2]. Digital halftoning is a nonlinear system that quantizes a gray level image to one bit per pixel [3]. Halftoning by error diffusion scans the image, quantizes the current pixel, and subtracts the quantization error from neighboring pixels in fixed proportions according to the error filter. The error filter is designed to minimize a local weighted error introduced by quantization.

Journal ArticleDOI
TL;DR: This paper intends to focus on a comparative investigation of three near-lossless image compression techniques: NLIC (near lossless image compression), SPIHT with DWT (Discrete Wavelet Transform), and RLE (Run Length Encoding) with DCT (Discrete Cosine Transform).
Abstract: Minimization of storage space and transmission time are the two most important driving factors in image compression for telemedicine. Keeping this in view, this paper intends to focus on a comparative investigation of three near-lossless image compression techniques: NLIC (near lossless image compression), SPIHT with DWT (Discrete Wavelet Transform), and RLE (Run Length Encoding) with DCT (Discrete Cosine Transform). These techniques are analyzed and tested on various types of square state-of-the-art photographic images and medical images. The performance evaluation parameters PSNR (Peak Signal to Noise Ratio), CR (Compression Ratio), RMSE (Root Mean Square Error), and Computational Time (CT) are calculated to evaluate the performance of the mentioned near-lossless image compression techniques.


Patent
28 Oct 2011
TL;DR: In this article, a data processing device is discussed that includes a data encoding system and a data decoding system, which is operable to receive a data input, and to combine the precoded parity bits and a derivative of the run length limited output to yield an output data set.
Abstract: Various embodiments of the present invention provide systems, devices and methods for data processing. As an example, a data processing device is discussed that includes a data encoding system and a data decoding system. The data encoding system is operable to receive a data input, and to: apply a maximum transition run length encoding to the data input to yield a run length limited output; apply a low density parity check encoding algorithm to the run length limited output to yield a number of original parity bits; apply a precode algorithm to the original parity bits to yield precoded parity bits; and combine the precoded parity bits and a derivative of the run length limited output to yield an output data set.

Patent
27 Apr 2011
TL;DR: In this article, a radar video signal encoding method for a ship is presented, where a sector is selected as an encoding unit, and the method conforms with the human eye characteristic, so real-time transmission of video information is facilitated.
Abstract: The invention discloses a radar video signal encoding method for a ship. A radar video signal for the ship is a polar-coordinate image, while a circular image under rectangular coordinates is displayed at the display end, so coordinate conversion of the image is needed. Conventional encoding methods generally do not consider the problem of coordinate conversion, and conventional coordinate conversion methods do not consider the human eye characteristics during image display. In this method, a sector is selected as the encoding unit, which conforms with the human eye characteristics, so real-time transmission of video information is facilitated; and by selecting a rational sector point sorting mode, leakage points generated in the coordinate conversion process are eliminated, subsequent run length encoding and entropy encoding are facilitated, and the compression ratio is improved. The method specifically comprises the following steps: establishing a sector structure to form a sort table; querying the sort table to perform the coordinate conversion; and performing run length encoding and entropy encoding on the converted one-dimensional signal and performing data flow encapsulation so as to encode all radar video signals.

Patent
13 Apr 2011
TL;DR: In this article, a compression mode combining a color table with an RLE (Run Length Encoding) compression algorithm is proposed for LOGO pictures, which are characterized by relatively pure colors and large areas of the same color.
Abstract: The invention discloses a picture file compression method which is particularly suitable for LOGO pictures. Based on the characteristics of relatively pure colors and large same-color areas in LOGO pictures, the method adopts a compression mode which combines a color table with an RLE (Run Length Encoding) compression algorithm: only the subscripts into the color table are stored for each pixel of the picture, and runs of adjacent identical pixels are represented by a color value and the count of consecutive occurrences. The invention achieves a high picture compression rate while remaining simple to operate.
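The colour-table-plus-RLE mode can be sketched as follows; this is a minimal illustration, not the patent's actual file layout:

```python
def compress_logo(pixels):
    """Build a colour table, then run-length encode the palette
    indices; effective when an image has few colours and large
    flat areas, as logos typically do."""
    palette, index = [], {}
    for p in pixels:
        if p not in index:
            index[p] = len(palette)
            palette.append(p)
    runs = []
    for p in pixels:
        i = index[p]
        if runs and runs[-1][1] == i:
            runs[-1][0] += 1
        else:
            runs.append([1, i])
    return palette, [(c, i) for c, i in runs]

def decompress_logo(palette, runs):
    """Expand the runs and look each index back up in the colour table."""
    return [palette[i] for count, i in runs for _ in range(count)]
```

On a logo-like input the run list is far shorter than the pixel list, while decompression is a trivial table lookup.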

Journal ArticleDOI
TL;DR: This paper decomposes the captured fingerprint image by applying the lifting wavelet transform, and uses an efficient second adaptive run-length encoding in the high-frequency part of the Huffman stream to improve the encoding efficiency.
Abstract: The FBI’s WSQ (Wavelet Scalar Quantization) algorithm is currently well and widely used as the standard in the field of fingerprint image compression. In this paper, based on WSQ, we first decompose the captured fingerprint image by applying the lifting wavelet transform, and then use an efficient second adaptive run-length encoding in the high-frequency part of the Huffman stream to improve the encoding efficiency. Our experimental results show that the performance of the proposed method is better than that of WSQ at the same bit rate.

Book ChapterDOI
25 Feb 2011
TL;DR: It has been observed that, there is a significant reduction in the computation time, and the amount of memory (Run dimension) required for the proposed work.
Abstract: In this paper, we have proposed a modified run-length encoding (RLE) method for binary patterns. This method of encoding considers only the run-lengths of ‘1’ bit sequences. The proposed distance algorithm takes binary patterns in the proposed encoded form and computes the distance between patterns. Handwritten digit data in binary sequence form is encoded using our proposed encoding method, and the proposed distance algorithm is used to classify the digits in the encoded form itself. It has been observed that there is a significant reduction in the computation time and the amount of memory (run dimension) required for the proposed work.
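Encoding only the runs of ‘1’ bits can be sketched as below. The distance function shown is a plain L1 distance over aligned run lengths, an illustrative assumption; the paper's actual distance algorithm is not reproduced here:

```python
from itertools import zip_longest

def ones_runs(bits):
    """Run lengths of the '1' sequences only, e.g. 0110111 -> [2, 3];
    the positions and lengths of '0' runs are discarded."""
    runs, n = [], 0
    for b in bits:
        if b:
            n += 1
        elif n:
            runs.append(n)
            n = 0
    if n:
        runs.append(n)
    return runs

def run_distance(p, q):
    """Illustrative distance between two encoded patterns: L1 distance
    over aligned run lengths, padding the shorter vector with zeros."""
    return sum(abs(a - b) for a, b in zip_longest(p, q, fillvalue=0))
```

Classification then works entirely on the short run vectors, which is where the reported savings in computation time and memory come from.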

Journal ArticleDOI
TL;DR: The paper presents results of compression using the Run Length Encoding (RLE) scheme on speech signals from the International Phonetic Alphabet (IPA) database, and it is observed that the RLE scheme gives a higher Compression Ratio (CR) for noisy speech signals than for non-noisy speech signals.
Abstract: The paper presents results of compression using the Run Length Encoding (RLE) scheme on speech signals from the International Phonetic Alphabet (IPA) database. These speech signals are first compressed with no noise added; they are then compressed after some noise is added to them. It is observed that the RLE scheme gives a higher Compression Ratio (CR) for noisy speech signals than for non-noisy speech signals. The performance of the RLE scheme on standard as well as noisy speech signals is compared with compression by Huffman coding. The obtained results indicate that the RLE scheme gives a higher CR than Huffman coding.

Patent
08 Apr 2011
TL;DR: In this paper, a data compression and restoration method is provided to increase the compression rate by adaptively selecting among various algorithms according to the image data; its first step determines whether the upper bits of the n-th and (n+1)-th pixel data match the upper bits of the preceding pixel data.
Abstract: PURPOSE: A data compression and restoration method is provided that increases the compression rate by adaptively selecting one among various algorithms according to the image data. CONSTITUTION: The method comprises the following steps: determining whether the upper bits are the same by comparing the upper bits of the n-th and (n+1)-th pixel data with the upper bits of the pixel data preceding the n-th pixel data; a PPC, TPPC & PFC encoding step which outputs pixel data; an RLE encoding step which outputs compressed pixel data by removing repeated pixel data through run length encoding (RLE); storing either the PPC, TPPC & PFC encoding output or the RLE encoding output according to the compression algorithm selected based on a slope between the compression rate and the pixel data; and restoring the compressed data by selecting a restoration algorithm according to the slope between the compression rate and the pixel data.

Patent
29 Sep 2011
TL;DR: In this article, the run length encoding method is composed of four steps, i.e., the second or third shortest code words are used for shorter or longer sequences of pixels of a predetermined color (transparent), while the shortest code words are used for single pixels having different colors.
Abstract: PROBLEM TO BE SOLVED: To provide a method for optimally encoding a subtitle layer or subpicture layers. SOLUTION: The size of subtitle bitmaps may exceed video frame dimensions, so that only portions are displayed at a time. The bitmaps are a separate layer lying above the video for synchronized video subtitles, and contain a plurality of transparent pixels. The run length encoding method is composed of four steps. That is, the second or third shortest code words are used for shorter or longer sequences of pixels of a predetermined color (transparent), while the shortest code words are used for single pixels having different colors, and the third or fourth shortest code words are used for shorter or longer sequences of an equal color value. COPYRIGHT: (C)2011,JPO&INPIT

Patent
03 Feb 2011
TL;DR: In this article, an advanced adaptation for bitmap encoding for HDTV per frame is defined for the Blu-ray Disc Prerecorded format, and optimized compression effects are provided for such subtitling bitmaps, but this method is achieved by a four-stage run length encoding.
Abstract: PROBLEM TO BE SOLVED: To encode superimposed subtitling used for text information and graphics data on bitmaps. SOLUTION: If subtitle bitmaps exceed a video frame, only portions are displayed at a time. The bitmaps are a separate layer lying above the video, e.g., for synchronized video subtitles, animations and navigation menus, and therefore contain many transparent pixels. An advanced adaptation of per-frame bitmap encoding for HDTV is defined for the Blu-ray Disc Prerecorded format, and optimized compression is provided for such subtitling bitmaps; this is achieved by a four-stage run length encoding. Shorter or longer sequences of transparent pixels are encoded using the second or third shortest code words, single pixels of different color are encoded using the shortest code words, and shorter and longer sequences of pixels of equal color use the third or fourth shortest code words. COPYRIGHT: (C)2011,JPO&INPIT
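The four-stage codeword assignment described in this patent family can be sketched as a classifier from runs to codeword-length classes (1 = shortest). The threshold between "shorter" and "longer" runs is a hypothetical parameter, not taken from the patent:

```python
SHORT = 16  # hypothetical boundary between "shorter" and "longer" runs

def codeword_class(run_length, colour, transparent=0):
    """Return the codeword-length class (1 = shortest) for a run:
    transparent runs get the 2nd/3rd shortest codes, single coloured
    pixels the shortest, and equal-colour runs the 3rd/4th shortest."""
    if colour == transparent:
        return 2 if run_length <= SHORT else 3
    if run_length == 1:
        return 1
    return 3 if run_length <= SHORT else 4
```

The assignment reflects the statistics of subtitle bitmaps: single coloured pixels are by far the most common event, so they receive the shortest codes, while the abundant transparent runs come next.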

Patent
27 Apr 2011
TL;DR: In this article, the run length encoding method is composed of four steps, i.e., the second or third shortest code words are used for shorter or longer sequences of pixels of a predetermined color (transparent), while the shortest code words are used for single pixels having different colors.
Abstract: PROBLEM TO BE SOLVED: To provide a method for optimally encoding a subtitle layer or subpicture layers for high-resolution video. SOLUTION: Bitmaps are a separate layer lying above the video for subtitles synchronized with video images, for example, animations and navigation menus, and contain a plurality of transparent pixels. The run length encoding method is composed of four steps. That is, the second or third shortest code words are used for shorter or longer sequences of pixels of a predetermined color (transparent), while the shortest code words are used for single pixels having different colors, and the third or fourth shortest code words are used for shorter or longer sequences of an equal color value. COPYRIGHT: (C)2011,JPO&INPIT