
Showing papers on "Run-length encoding" published in 2016


Journal ArticleDOI
TL;DR: This paper proposes an efficient method for ECG signal compression using the discrete wavelet transform and run-length encoding, based on decomposition of the ECG signal, a thresholding stage, and encoding of the final data.
Abstract: The storage of ECG records presents an important issue in medical practice. These records can contain hours of data, which requires a large amount of storage space. Compression of the ECG signal is widely used to deal with this issue, but the process risks losing important features of the signal, and this loss could negatively influence the analysis of the heart condition. In this paper, we propose an efficient method for ECG signal compression using the discrete wavelet transform and run-length encoding. The method is based on the decomposition of the ECG signal, a thresholding stage, and the encoding of the final data. It is tested on some of the MIT-BIH arrhythmia signals from the international Physionet database and shows high performance compared with other recently published methods.

10 citations
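The pipeline described above (decompose, threshold, encode the resulting zero runs) can be made concrete in a few lines. A minimal sketch assuming the PyWavelets package (pywt); the wavelet, level, threshold, and token format below are illustrative choices, not the paper's parameters:

```python
import numpy as np
import pywt  # PyWavelets

def compress_ecg(signal, wavelet="db4", level=4, thresh=0.05):
    # 1. Decompose the signal into wavelet coefficients.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat = np.concatenate(coeffs)
    # 2. Threshold: zero out coefficients below a fraction of the peak.
    flat[np.abs(flat) < thresh * np.abs(flat).max()] = 0.0
    # 3. Run-length encode the (now long) runs of zeros.
    encoded, i = [], 0
    while i < len(flat):
        if flat[i] == 0.0:
            j = i
            while j < len(flat) and flat[j] == 0.0:
                j += 1
            encoded.append(("Z", j - i))    # a run of zeros
            i = j
        else:
            encoded.append(("V", flat[i]))  # a surviving coefficient
            i += 1
    return encoded
```

Thresholding zeroes out most detail coefficients, so the run-length stage is where virtually all of the size reduction comes from.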


Proceedings ArticleDOI
01 Sep 2016
TL;DR: In this work, a lossless compression technique is proposed in which the input data is split into two sets, which increases data redundancy, before the algorithm is applied.
Abstract: Space systems demand some of the most precise and accurate technology. The bodies of space systems are subjected to many vibrations and shocks. To ensure stable operation, these vibrations are monitored continuously at a high sampling rate, which produces a large amount of data. The data is transmitted over a wireless network to the ground station, so data compression becomes unavoidable, not only for efficient utilization of transmission bandwidth but also for reduced storage requirements. Even the smallest vibration is critical in space applications, so lossless compression techniques are preferred. In this work, a lossless compression technique is proposed in which the input data is split into two sets, which increases data redundancy, before the algorithm is applied. Modified move-to-front (MTF) coding is performed on this data, where the positional values of a dictionary are transmitted instead of the sample values. Run-length encoding (RLE) is then applied to the MTF-coded data, and all successively repeating number patterns in the data set are recognized and RLE coded. The compression ratios obtained for data without and with noise are 4.15 and 2.75, respectively.

7 citations
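The MTF-then-RLE chain works because slowly varying sensor samples map to small, frequently repeated dictionary indices, which then form long runs. A minimal sketch of that chain (plain MTF over byte values; the paper's "modified" MTF details are not given, so this is the textbook version):

```python
def mtf_encode(data):
    # dictionary of all byte values; emit each symbol's current position,
    # then move that symbol to the front of the dictionary
    dictionary = list(range(256))
    out = []
    for s in data:
        idx = dictionary.index(s)
        out.append(idx)
        dictionary.pop(idx)
        dictionary.insert(0, s)
    return out

def rle_encode(data):
    # collapse each run of identical values into a (value, count) pair
    out, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((data[i], j - i))
        i = j
    return out

# slowly varying samples -> MTF yields many repeated small indices -> RLE collapses them
samples = bytes([10, 10, 10, 11, 11, 10, 10, 10])
print(rle_encode(mtf_encode(samples)))
```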


Journal ArticleDOI
TL;DR: This work applies the proposed integer multiwavelet transform (IMWT) to lossy image compression, encoding the transform coefficients with magnitude-set coding and run-length encoding, which results in a low bit count.
Abstract: The performance of wavelets in the field of image processing is well established. Multiwavelets are the next step in wavelet theory, and they take the performance of wavelets to the next level. In this work, the performance of the integer multiwavelet transform (IMWT) for lossy compression has been studied. The proposed IMWT shows better performance in lossy reconstruction of images than the existing lossy reconstruction. This work applies the proposed IMWT to lossy image compression with encoding techniques such as magnitude-set coding and run-length encoding. The transform coefficients are coded by means of magnitude-set coding and run-length coding, which results in a low bit count. The transform coefficient matrix is coded without taking the sign values into consideration, using the Magnitude-Set Variable-Length Integer representation. The sign data of the coefficients is coded as a bit plane with zero threshold. This bit plane may be used as-is or coded to reduce the bits per pixel. The simulation was carried out using Matlab.

6 citations


Journal Article
TL;DR: A modification to the MRLE technique in which the constant-size 'Comp-Bit List' is replaced by a 'Variable-Size Comp-Bit List'; the new technique is referred to as improved MRLE (iMRLE).
Abstract: Run-length encoding (RLE) is one of the simplest and most primitive lossless data compression techniques. RLE sometimes doubles the size of the compressed data stream. To overcome this disadvantage, several algorithms have been introduced, one of which is Mespotine RLE (MRLE). This paper introduces a modification to the MRLE technique in which the constant-size 'Comp-Bit List' is replaced by a 'Variable-Size Comp-Bit List', and refers to the new technique as improved MRLE (iMRLE). The paper discusses the details of the 'Variable-Size Comp-Bit List', utilizes this concept for lossless compression and decompression of 8-bit grayscale medical images, and extends it to 16-bit grayscale medical images. Image quality metrics such as compression ratio (CR), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and entropy are used to check the quality of the decompressed images obtained using the iMRLE technique. Finally, the compression ratios achieved by the existing MRLE and the proposed iMRLE techniques for 8-bit and 16-bit grayscale images are assessed, and iMRLE is found to produce the best results for lossless compression and decompression of medical images.

6 citations
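The 'Comp-Bit List' at the heart of MRLE flags, per byte value, whether run-length coding is applied to that value at all, so values that rarely repeat never pay the (value, count) overhead that makes plain RLE expand. A minimal sketch of that idea (a simplified interpretation for illustration, not Mespotine's or the paper's exact format, and it ignores run-count overflow):

```python
def mrle_encode(data):
    # Pass 1: collect runs, then flag a value as "compressible" only
    # if emitting (value, count) pairs is shorter than raw bytes.
    runs, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        runs.append((data[i], j - i))
        i = j
    raw_len = {v: 0 for v in range(256)}
    rle_len = {v: 0 for v in range(256)}
    for v, n in runs:
        raw_len[v] += n   # cost if emitted verbatim
        rle_len[v] += 2   # cost of one (value, count) pair
    comp_bits = [1 if rle_len[v] < raw_len[v] else 0 for v in range(256)]
    # Pass 2: emit runs for flagged values, raw bytes for the rest.
    out = []
    for v, n in runs:
        if comp_bits[v]:
            out.append((v, n))
        else:
            out.extend([v] * n)
    return comp_bits, out
```

Per the abstract, iMRLE's contribution is to replace this fixed 256-entry list with a variable-size one.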


Journal ArticleDOI
TL;DR: This study shows that the wave atom transform is more appropriate than the wavelet transform, since it offers a higher compression ratio and better speech quality.
Abstract: This paper proposes a new adaptive speech compression system based on the discrete wave atom transform. First, the signal is decomposed into wave atoms; then the wave atom coefficients are truncated using a new adaptive threshold that depends on an SNR estimate. The thresholded coefficients are quantized using a Max-Lloyd scalar quantizer and then encoded using zero run-length encoding followed by Huffman coding. Numerous simulations are performed to demonstrate the robustness of the approach. The results are compared with wavelet-based compression using objective criteria, namely CR, SNR, PSNR, and NRMSE. This study shows that the wave atom transform is more appropriate than the wavelet transform, since it offers a higher compression ratio and better speech quality.

6 citations
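Zero run-length encoding, the step applied above after quantization, stores each nonzero coefficient together with the count of zeros that precede it. A minimal sketch (the (run, value) pairing convention is an illustrative choice; the paper's exact symbol format is not given):

```python
def zero_rle(coeffs):
    # encode as (zero_run, value) pairs: each nonzero quantized
    # coefficient is preceded by the count of zeros before it
    out, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            out.append((run, c))
            run = 0
    if run:
        out.append((run, None))  # trailing zeros, no closing value
    return out

print(zero_rle([0, 0, 5, 0, 0, 0, -2, 0]))  # [(2, 5), (3, -2), (1, None)]
```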


Proceedings ArticleDOI
01 Oct 2016
TL;DR: A novel compression technique is proposed for genomic data that converts each base of a DNA sequence into binary form using a 2-bit encoding, applies a modified run-length encoding followed by Huffman coding, and converts the result into ASCII characters.
Abstract: Modern biotechnology produces large amounts of genomic data. The explosion of DNA data poses challenges for understanding genomic structure, disk storage, and computation, so the development of efficient compression techniques is essential for handling genomic data storage. Data compression is used to store the data in less memory, and the properties of DNA sequences offer a chance to build DNA-specific compression algorithms. In this paper, a novel compression technique is proposed for genomic data. In the first stage, each base of the DNA sequence is converted into binary form using a 2-bit encoding, and a modified run-length encoding is applied to the resulting binary string. In the second stage, the output is compressed again using Huffman encoding, and the encoded sequence is converted into ASCII characters. The technique is quite simple and effective.

5 citations
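The first stage is easy to make concrete: with only four bases, each one fits in 2 bits, and repeated bases collapse under RLE. A minimal sketch of both pieces (the packing order and run format are illustrative assumptions; the paper's "modified" RLE is not specified):

```python
BASE2BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack_dna(seq):
    # 2-bit encoding: four bases per byte
    out, acc, nbits = bytearray(), 0, 0
    for base in seq:
        acc = (acc << 2) | BASE2BITS[base]
        nbits += 2
        if nbits == 8:
            out.append(acc)
            acc, nbits = 0, 0
    if nbits:
        out.append(acc << (8 - nbits))  # pad the final byte
    return bytes(out)

def rle(seq):
    # run-length encode repeated bases as (base, count) pairs
    out, i = [], 0
    while i < len(seq):
        j = i
        while j < len(seq) and seq[j] == seq[i]:
            j += 1
        out.append((seq[i], j - i))
        i = j
    return out

print(pack_dna("ACGTAAAA").hex())  # '1b00': 00011011 00000000
print(rle("AAAACCGT"))             # [('A', 4), ('C', 2), ('G', 1), ('T', 1)]
```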


Proceedings ArticleDOI
01 Jan 2016
TL;DR: A dual-tree complex wavelet transform (DTCWT) method for power quality monitoring is proposed, integrating a run-length encoding technique to compress disturbance data; Matlab was used to generate test signals and to implement the different data compression algorithms.
Abstract: Power quality disturbance is a challenging issue that has been attracting the attention of many researchers, and it has become very necessary to monitor power quality disturbances for analysis and other purposes. The quantity of data captured by present power quality monitoring systems has been increasing drastically, and such large volumes are difficult to store and transmit, so a compression technique is required to reduce the storage space. This paper proposes a dual-tree complex wavelet transform (DTCWT) method for power quality monitoring and integrates a run-length encoding technique to compress the disturbance data. Voltage sags, swells, transients, and flickers are the power quality disturbances used to test the proposed method. Matlab was used to generate the test signals and to implement the different data compression algorithms.

4 citations


Journal ArticleDOI
TL;DR: In the proposed approach, a hybrid test pattern compression technique is used along with different schemes such as Huffman and run-length encoding, and an improved compression ratio is obtained.
Abstract: Background: VLSI testing plays a crucial role in the design of a VLSI chip. Advances in technology have led to an increasing density of transistors and increased circuit complexity in a chip. With the increasing number of inputs, the memory overhead associated with storing test patterns increases, so the test pattern volume needs to be compressed. Method: In the proposed approach, a hybrid test pattern compression technique is used along with different schemes such as Huffman and run-length encoding. These encoding schemes are applied to ISCAS'85 and ISCAS'89 benchmark circuits, and the results are compared and analyzed based on their compression ratios. Findings: The proposed approach obtains an improved compression ratio compared with the existing techniques in the literature.

4 citations


Journal ArticleDOI
TL;DR: This study develops a novel high-capacity steganographic access control scheme in data hiding that transforms sharp bitstreams into smooth bitstreams before they are hidden in a cover image, while maintaining high image quality.
Abstract: The vast potential of information and communication technologies, such as computer-based communication networks and telecommunication systems, continues to drive innovation. This paper is a novel attempt in the field of authorization and access control that uses steganography, which converts data into a form that cannot be interpreted by unauthorized persons. The proposed algorithm compresses data and hides it in a cover medium by improving on the existing run-length technique in steganography. It uses a compression technique that exploits the redundancy of a bitstream and then embeds the compressed stream in an image. Run-length encoding is used to compress the bitstream being embedded in the cover image. However, run-length encoding compresses inefficiently on sharp bitstreams, such as the pattern "101010101". This study develops a novel high-capacity steganographic access control scheme that transforms sharp bitstreams into smooth bitstreams before they are hidden in the cover image. The scheme performs a logical exclusive-OR (XOR) operation to smooth the secret bitstream and embeds the result in the cover medium. Additionally, the scheme employs the generalized difference expansion transform for image recovery after data extraction, so image fidelity can be preserved. The experimental results show that the scheme achieves a higher embedding capacity than previous approaches while maintaining high image quality.

4 citations
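The smoothing step can be realized with a prefix-XOR transform: XOR each bit with its predecessor, which maps an alternating pattern into a single long run and is exactly invertible. This is one plausible reading of the XOR operation described above, shown as a minimal sketch:

```python
def xor_smooth(bits):
    # out[i] = bits[i] ^ bits[i-1] (out[0] = bits[0]);
    # an alternating pattern like 101010101 becomes the run 111111111
    out, prev = [], 0
    for b in bits:
        out.append(b ^ prev)
        prev = b
    return out

def xor_unsmooth(bits):
    # inverse: a running XOR recovers the original bitstream exactly
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

sharp = [1, 0, 1, 0, 1, 0, 1, 0, 1]
smooth = xor_smooth(sharp)           # [1, 1, 1, 1, 1, 1, 1, 1, 1]
assert xor_unsmooth(smooth) == sharp
```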


Proceedings ArticleDOI
01 Nov 2016
TL;DR: The ARLE circuit can be directly applied to real-time dataflow encoding between an FPGA and external storage devices; it not only ensures the real-time property (RTP) but also reduces the circuit scale (CS).
Abstract: To reduce the pressure of data storage and transmission on satellites, researchers have implemented a method for extracting object region data from remote sensing images in orbit. This method stores and downloads the pixels of regions of interest by labeling those regions, but encoding data volume (EDV), hardware scale, and the real-time property (RTP) are difficult to balance. To solve this problem, this paper proposes an Adaptive Run-Length Encoding (ARLE) circuit that is used for target region labeling and implemented on an FPGA. The circuit is designed around a cascaded structure that is simple, lightweight, modular, extensible, and portable. Experiments show that, compared with existing methods, the ARLE circuit achieves better compression and better resource utilization; it not only ensures the real-time property but also reduces the circuit scale (CS). The target region extraction method can easily be extended to various scenarios requiring rapid target region extraction, and the ARLE circuit can be directly applied to real-time dataflow encoding between an FPGA and external storage devices.

3 citations


Journal Article
TL;DR: A new text steganography method is proposed that is based on a parser and the ASCII codes of non-printing characters; it hides the secret information in an English cover text after coding the secret message and compressing it using a modified run-length encoding (RLE) method.
Abstract: Data hiding (steganography) is a method used for data security purposes and to protect data during transmission. Steganography hides the communication between two parties by embedding a secret message inside another cover (audio, text, image, or video). In this paper, a new text steganography method is proposed that is based on a parser and the ASCII codes of non-printing characters; it hides the secret information in an English cover text after coding the secret message and compressing it using a modified run-length encoding (RLE) method. The proposed method achieves a high capacity ratio for steganography (five times the cover text length) compared with other methods, and provides a transparency of 1.0 according to some of the similarity measures used in steganography.

Proceedings ArticleDOI
01 May 2016
TL;DR: A hybrid test data compression method is presented that is targeted at minimizing the volume of test data, reducing both the memory required to store the test data and the time required to test it.
Abstract: A hybrid test data compression method is presented that is targeted at minimizing the volume of test data, reducing both the memory required to store the test data and the time required to test it. The scheme is called hybrid because it combines a transform with an encoding scheme. In the proposed approach, encoding schemes such as frequency-directed run-length (FDR) encoding and Shannon-Fano encoding are applied to the transformed data. The proposed scheme is applied to ISCAS'85, ISCAS'89, and ITC'99 benchmark circuits, and the results are compared in terms of their compression ratios.
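Frequency-directed run-length codes encode each run of 0s terminated by a 1 with a codeword whose prefix and tail lengths grow together, so the short runs that dominate test sets get the shortest codewords. A sketch of the classic FDR construction from the test-compression literature (Chandra and Chakrabarty); the paper's exact variant is not specified:

```python
def fdr_codeword(run_length):
    # group k covers run lengths 2^k - 2 .. 2^(k+1) - 3 and uses a
    # (k-1)-ones-then-zero prefix plus a k-bit tail
    k, group_start = 1, 0
    while run_length > (1 << (k + 1)) - 3:
        k += 1
        group_start = (1 << k) - 2
    prefix = "1" * (k - 1) + "0"
    tail = format(run_length - group_start, f"0{k}b")
    return prefix + tail

def fdr_encode(bits):
    # encode runs of 0s, each terminated by a 1
    # (a trailing run with no terminating 1 would need an end marker)
    out, run = [], 0
    for b in bits:
        if b == 0:
            run += 1
        else:
            out.append(fdr_codeword(run))
            run = 0
    return "".join(out)

# run lengths 0 and 1 -> '00', '01'; runs 2..5 -> '1000'..'1011'
print(fdr_encode([1, 0, 1, 0, 0, 1]))  # '00' + '01' + '1000'
```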

Journal ArticleDOI
Peng Wu, Shunping Zhou, Bo Wan, Fang Fang, Sha Zhou
TL;DR: An improved two-dimensional run-length encoding (I2DRLE) scheme for representing grayscale images that applies predefined patterns to represent various data and can sharply reduce the number of blocks in the image representation.
Abstract: In this paper, we propose an improved two-dimensional run-length encoding (I2DRLE) scheme for representing grayscale images. The conventional 2D run-length encoding scheme is simple, effective, and widely used, but it is not well suited to representing non-block images. Our approach is a new data compression algorithm inspired by 2D run-length encoding and quadtrees; it applies predefined patterns to represent various data and can sharply reduce the number of blocks in the image representation. Experimental results show that the method is an effective lossless grayscale image encoding scheme.
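A basic form of 2D run-length encoding covers the image with maximal uniform rectangles, each stored as (row, col, height, width, value); the predefined patterns mentioned above generalize this to non-rectangular shapes. A minimal sketch of the greedy rectangle variant (illustrative only, not the I2DRLE format):

```python
import numpy as np

def rle_2d(img):
    # greedily cover the image with uniform rectangles:
    # grow each block right, then down, as far as the value repeats
    h, w = img.shape
    used = np.zeros((h, w), dtype=bool)
    blocks = []
    for r in range(h):
        for c in range(w):
            if used[r, c]:
                continue
            v = img[r, c]
            cw = 1  # extend right
            while c + cw < w and not used[r, c + cw] and img[r, c + cw] == v:
                cw += 1
            ch = 1  # extend down while the whole row segment matches
            while r + ch < h and np.all(img[r + ch, c:c + cw] == v) \
                    and not used[r + ch, c:c + cw].any():
                ch += 1
            used[r:r + ch, c:c + cw] = True
            blocks.append((r, c, ch, cw, int(v)))
    return blocks

img = np.array([[1, 1, 2],
                [1, 1, 2],
                [3, 3, 3]])
print(rle_2d(img))  # [(0, 0, 2, 2, 1), (0, 2, 2, 1, 2), (2, 0, 1, 3, 3)]
```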

Proceedings ArticleDOI
01 Mar 2016
TL;DR: An in-depth investigation of the potential of Run Length Encoding and Lempel-Ziv based methods for efficiently computing convolutions between a sequence of patterns of a fixed shape/size and a given image.
Abstract: The computation of convolutions is a fundamental problem that arises in applications from fields as diverse as digital signal processing, image processing, and string processing, among others. Here, we provide an in-depth investigation of the potential of run-length encoding and Lempel-Ziv based methods for efficiently computing convolutions between a sequence of patterns of a fixed shape/size and a given image. Our contribution consists in developing new methods and variants of existing ones and providing extensive empirical evaluations of them. Our fastest method outperforms a highly optimized implementation based on the Fast Fourier Transform for small patterns.
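The reason RLE helps here: over a constant run, the dot product with the pattern collapses to the run's value times a difference of the pattern's prefix sums, so each run contributes in O(1) per output position. A minimal 1D sketch of that principle (an illustration, not the paper's algorithms; for clarity it scans every run per window rather than only the overlapping ones):

```python
import numpy as np

def rle(sig):
    # standard run-length encoding: list of (value, length) pairs
    runs, i = [], 0
    while i < len(sig):
        j = i
        while j < len(sig) and sig[j] == sig[i]:
            j += 1
        runs.append((sig[i], j - i))
        i = j
    return runs

def conv_rle(runs, pattern):
    # cross-correlation of an RLE-compressed signal with a pattern:
    # P[b] - P[a] gives sum(pattern[a:b]) in O(1) per run
    n = sum(length for _, length in runs)
    m = len(pattern)
    P = np.concatenate(([0.0], np.cumsum(pattern)))
    starts = np.cumsum([0] + [length for _, length in runs])
    out = np.zeros(n - m + 1)
    for k in range(len(out)):                  # window covers signal[k:k+m]
        for (v, length), s in zip(runs, starts):
            a, b = max(s, k), min(s + length, k + m)
            if a < b:                          # run overlaps the window
                out[k] += v * (P[b - k] - P[a - k])
    return out

sig = [2, 2, 2, 2, 0, 0, 0, 1, 1]
print(conv_rle(rle(sig), np.array([1.0, 1.0, 1.0])))
# [6. 6. 4. 2. 0. 1. 2.]
```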

Journal ArticleDOI
TL;DR: New modified RLE algorithms compress grayscale images in lossy and lossless modes, exploiting the probability of pixel repetition and the pixel values to reduce the size of the encoded data by sending a 1 bit instead of the original pixel value whenever the value is repeated.
Abstract: This paper presents new modified RLE algorithms for lossy and lossless compression of grayscale images. They depend on the probability of pixel repetition in the image and on the pixel values, reducing the size of the encoded data by sending a 1 bit instead of the original pixel value whenever the value is repeated. The proposed algorithms achieve a good reduction in encoded size compared with the other compression methods used for comparison, and reduce encoding time by a good ratio.
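The repeat-flag idea can be sketched as a bitstream in which each pixel is either a single 1 bit ("same as the previous pixel") or a 0 bit followed by the 8-bit value. This is a plausible reading of the scheme described above; the paper's exact bit layout is not given:

```python
def flagbit_encode(pixels):
    # emit "1" for a repeat of the previous pixel,
    # or "0" + 8-bit value when the value changes
    out, prev = [], None
    for p in pixels:
        if p == prev:
            out.append("1")            # 1 flag bit instead of an 8-bit value
        else:
            out.append("0" + format(p, "08b"))
            prev = p
    return "".join(out)

row = [200, 200, 200, 15, 15, 200]
enc = flagbit_encode(row)
print(len(enc), "bits vs", 8 * len(row))  # 30 bits vs 48
```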

Journal ArticleDOI
29 Nov 2016
TL;DR: This research proposes a new lossless compression algorithm called YRL that improves RLE using the idea of relative encoding: neighboring pixel values can be treated as the same value by saving their small differences (relative values) separately.
Abstract: Data compression can save storage space and accelerate data transfer. Among the many compression algorithms, run-length encoding (RLE) is simple and fast, and it can be used to compress many types of data. However, RLE is not very effective for lossless image compression because of the many small differences between neighboring pixels. This research proposes a new lossless compression algorithm called YRL that improves RLE using the idea of relative encoding: YRL can treat neighboring pixel values as the same value by saving their small differences (relative values) separately. Tests using various standard test images show that YRL achieves an average compression ratio of 75.805% for 24-bit bitmaps and 82.237% for 8-bit bitmaps, while RLE achieves an average compression ratio of 100.847% for 24-bit bitmaps and 97.713% for 8-bit bitmaps.
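The relative-encoding idea can be sketched as follows: pixels within a small tolerance of a run's base value are treated as "equal" for RLE purposes, and their small deltas go into a separate side stream so reconstruction stays lossless. The tolerance and stream layout below are illustrative assumptions, not YRL's specification:

```python
def yrl_encode(pixels, tol=2):
    # pixels within `tol` of the run's base value join the run;
    # their exact deltas are saved separately for lossless decoding
    runs, deltas = [], []
    i = 0
    while i < len(pixels):
        base, j = pixels[i], i
        while j < len(pixels) and abs(pixels[j] - base) <= tol:
            deltas.append(pixels[j] - base)  # small relative values
            j += 1
        runs.append((base, j - i))           # RLE over "equal" pixels
        i = j
    return runs, deltas

pixels = [100, 101, 100, 102, 180, 181, 180]
print(yrl_encode(pixels))
# ([(100, 4), (180, 3)], [0, 1, 0, 2, 0, 1, 0])
```

Decoding replays each run's base value and adds back the stored deltas, so the reconstruction is exact.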

Journal ArticleDOI
TL;DR: An attempt is made to compress DNA sequences using run-length encoding (RLE) in its simplest form and to obtain a better compression ratio and compression gain.
Abstract: Collecting and organising the DNA data of all organisms has become the cornerstone of basic biological science. It is indispensable to all areas utilising knowledge of DNA sequences, such as diagnostics, biotechnology, forensic biology, and drug design. This field is developing fast, with demand and the growth of databases each inducing the other's growth. Due to this ever-increasing demand, the size of DNA databases is growing exponentially. The data compression community has shown that data can be considerably reduced in size if repetitions are exploited, thereby increasing the effective capacity of the storage media. To address this issue, an attempt is made to compress DNA sequences using run-length encoding (RLE) in its simplest form and to obtain a better compression ratio and compression gain.

Patent
Bas Hulsken
09 Sep 2016
TL;DR: In this paper, a method and apparatus for fast and efficient image compression and decompression are presented, comprising transform coding of the image data to generate an image representation using transform coefficients; bit-plane serialization of that representation; and, for each bit plane, optimal prefix encoding of bits sharing local context, run-length encoding of 0-bit sequences, and storage of the resulting coefficients, sign first and then bits in order of significance from most to least significant, in a seektable in a header section.
Abstract: The invention relates to a method and apparatus for fast and efficient image compression and decompression, comprising transform coding of image data to generate an image representation using transform coefficients; bit-plane serialization of that representation; and, for each bit plane, optimal prefix encoding of bits sharing local context and run-length encoding of 0-bit sequences. The coefficients received after optimal prefix encoding and zero run-length encoding are stored starting with the sign, followed by the bits in order of significance from the most significant to the least significant bit, in a seektable in a header section.

Proceedings ArticleDOI
17 Mar 2016
TL;DR: A new algorithm, FREQVCTDB (Frequent Vertical Compressed Transaction Database), is developed to preserve memory space by compressing the transaction database in a vertical data layout, guided by statistical analysis and prior knowledge about the dataset.
Abstract: In the modern digital era, enormous collections of information arrive from all sorts of day-to-day activities, and handling and storing these data pose pressing challenges. For the past three decades, many researchers have developed algorithms to meet the crucial challenges of frequent pattern mining, but there is still room for improvement. On this basis, a new algorithm, FREQVCTDB (Frequent Vertical Compressed Transaction Database), is developed to preserve memory space by compressing the transaction database in a vertical data layout using statistical analysis. The algorithm handles both dense and sparse datasets based on statistical analysis with prior knowledge about the dataset, and it proceeds in three phases. In phase 1, the horizontal data layout is converted into a vertical data layout; the frequency of each item is then counted and checked against the minimum-support threshold to determine whether each item is frequent. In phase 2, the input data is analyzed using statistical properties of the transaction database: (a) density (maximum or minimum), (b) a distance function, and (c) an entropy function from information theory. Based on this analysis, run-length encoding is applied in phase 3 to compress the frequent patterns. Dense and sparse datasets taken from the Fimi.cs.uk website are analysed and discussed; these datasets are processed, and the experimental results are presented graphically as validation results.

Journal ArticleDOI
TL;DR: An attempt is made to compare the traditional methods of performing image compression with the artificial neural network approach.
Abstract: Compression is a highly essential part of image processing and a necessity of the modern world, required in various fields. It is the process of representing image data using fewer bits than the original requires; by performing image compression, a certain amount of the data used to store the image can be reduced. Compression is necessary when a large amount of data is to be stored or transferred. This paper reviews some of the conventional methods for achieving image compression, viz. run-length encoding, DCT, and DWT, to name a few. Artificial neural networks can also be used to achieve image compression. Here, an attempt is made to compare the traditional methods of performing image compression with the artificial neural network approach. Keywords: Compression, Run-length encoding, DCT, DWT, Levenberg-Marquardt.

Patent
07 Dec 2016
TL;DR: In this paper, a transparent scrambling method for block data based on Hadoop MapReduce is proposed, which includes carrying out zigzag mapping and run-length encoding on the discrete cosine transform coefficients according to the DCT encoding characteristics of a video image.
Abstract: The invention discloses a transparent scrambling method for block data based on Hadoop MapReduce. The method comprises the following steps: carrying out zigzag mapping and run-length encoding on the discrete cosine transform coefficients according to the discrete cosine transform encoding characteristics of a video image; and applying a scrambling algorithm to the discrete cosine transform coefficients after the zigzag mapping and run-length encoding, to realize transparent scrambling of the video image data. Compared with the prior art, the method fully considers the discrete cosine transform (DCT) encoding characteristics of the video image and introduces the scrambling algorithm after the zigzag mapping and run-length encoding, so the original video format is not changed and the compatibility of the video data format is ensured; the method is therefore well suited to online video on demand, digital TV, and other video service applications.
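Zigzag mapping followed by run-length encoding is the standard JPEG/MPEG ordering: the 8x8 DCT block is read along anti-diagonals so the trailing high-frequency zeros cluster into long runs. A minimal sketch of that pair of steps (the (zero_run, value) pairing and EOB marker follow JPEG convention; the patent's exact format is not given):

```python
import numpy as np

def zigzag_order(n=8):
    # JPEG-style zigzag scan: walk anti-diagonals, alternating direction
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag[::-1] if s % 2 == 0 else diag)
    return order

def zigzag_rle(block):
    # serialize an n x n DCT block in zigzag order, then
    # encode as (zero_run, value) pairs, JPEG-style
    seq = [block[i, j] for i, j in zigzag_order(block.shape[0])]
    out, run = [], 0
    for c in seq:
        if c == 0:
            run += 1
        else:
            out.append((run, int(c)))
            run = 0
    out.append("EOB")  # end-of-block marks the trailing zeros
    return out

block = np.zeros((8, 8), dtype=int)
block[0, 0], block[0, 1], block[1, 0], block[0, 2] = 50, -3, 6, 2
print(zigzag_rle(block))  # [(0, 50), (0, -3), (0, 6), (2, 2), 'EOB']
```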

Patent
27 Aug 2016
TL;DR: In this paper, a method for multi-dimensional run-length encoding of an unprocessed data file is presented. However, the method is not well suited to large data sets, as it requires the data points along the traversal path to be generated.
Abstract: Methods and systems for multi-dimensional run-length encoding of data are provided. In one embodiment, a method for multi-dimensional run-length encoding of an unprocessed data file is provided. The method includes obtaining an admission key and determining a traversal path within a virtual multi-dimensional shape based on the admission key. The method also includes transforming the unprocessed data of the unprocessed data file into a plurality of compressed data segments. Also, the method includes plotting the plurality of compressed data segments onto a plurality of data points along the traversal path to obtain a plurality of secured data segments. Further, the method includes generically sorting the plurality of secured data segments to obtain a plurality of generically sorted data segments, and writing the plurality of generically sorted data segments into a processed data file.