
Showing papers on "Run-length encoding published in 2021"


Journal ArticleDOI
TL;DR: In this article, a 3-level Haar wavelet transform is used as a common building block to save resources in an FPGA system that performs color image blur detection and compression in parallel with encryption.
Abstract: This paper presents a 3-in-1 standalone FPGA system which can perform color image blur detection in parallel with compression and encryption. Both blur detection and compression are based on the 3-level Haar wavelet transform, which is used as a common building block to save resources. The compression is based on performing the hard thresholding scheme followed by the Run Length Encoding (RLE) technique. The encryption is based on the 128-bit Advanced Encryption Standard (AES), which is considered one of the most secure algorithms. Moreover, the modified Lorenz chaotic system is combined with the AES to perform the Cipher Block Chaining (CBC) mode. The proposed system is realized using HDL and implemented using Xilinx on an XC5VLX50T FPGA. The system utilizes only 25% of the available slices. Furthermore, the system can achieve a throughput of 3.458 Gbps, which is suitable for real-time applications. To validate the compression performance, the system has been tested with all the standard 256×256 images. It is shown that, depending on the amount of detail in the image, the system can achieve 30 dB PSNR at compression ratios in the range of 0.08-0.38. The proposed system can be integrated with digital cameras to process the captured images on-the-fly prior to transmission or storage. Based on the application, the blurred images can be either marked for future enhancement or simply filtered out.
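The compression stage pairs hard thresholding with RLE, which works because thresholding zeroes most small wavelet coefficients and leaves long zero runs. A minimal sketch of that idea (the threshold value and coefficient data here are illustrative, not the paper's):

```python
import numpy as np

def hard_threshold(coeffs, t):
    """Zero out wavelet coefficients whose magnitude falls below t."""
    out = coeffs.copy()
    out[np.abs(out) < t] = 0.0
    return out

def rle_encode(seq):
    """Collapse the sequence into (value, run-length) pairs; after
    thresholding, long zero runs dominate and compress well."""
    runs, i = [], 0
    while i < len(seq):
        j = i
        while j < len(seq) and seq[j] == seq[i]:
            j += 1
        runs.append((float(seq[i]), j - i))
        i = j
    return runs

coeffs = np.array([9.1, 0.2, -0.1, 0.0, 0.0, 4.7, 0.3, 0.0])
print(rle_encode(hard_threshold(coeffs, 0.5)))
# [(9.1, 1), (0.0, 4), (4.7, 1), (0.0, 2)]
```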

10 citations


Journal ArticleDOI
09 Oct 2021-Irbm
TL;DR: In this paper, a novel electrocardiogram data compression technique is presented which utilizes modified run-length encoding of wavelet coefficients. The proposed technique can be utilized for compression of ECG records from Holter monitoring.
Abstract: Objective In cardiac patient-care, compression of long-term ECG data is essential to minimize the data storage requirement and transmission cost. Hence, this paper presents a novel electrocardiogram data compression technique which utilizes modified run-length encoding of wavelet coefficients. Method First, wavelet transform is applied to the ECG data, which decomposes it and packs maximum energy into fewer transform coefficients. The wavelet transform coefficients are quantized using dead-zone quantization. It discards small-valued coefficients lying in the dead-zone interval, while other coefficients are kept at the formulated quantized output interval. Among all the quantized coefficients, an average value is assigned to those coefficients for which energy packing efficiency is less than 99.99%. The obtained coefficients are encoded using modified run-length coding. It offers a higher compression ratio than conventional run-length coding without any loss of information. Results Compression performance of the proposed technique is evaluated using different ECG records taken from the MIT-BIH arrhythmia database. The average compression performance in terms of compression ratio, percent root mean square difference, normalized percent mean square difference, and signal to noise ratio is 17.18, 3.92, 6.36, and 28.27 dB respectively for 48 ECG records. Conclusion The compression results obtained by the proposed technique are better than those of techniques recently introduced by others. The proposed technique can be utilized for compression of ECG records from Holter monitoring.
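The dead-zone quantizer described above discards coefficients inside an interval around zero and maps the rest to quantized output levels. A rough sketch under assumed conventions (uniform bins of width `step` outside the dead zone, midpoint reconstruction); the paper's exact bin formulation may differ:

```python
import numpy as np

def dead_zone_quantize(c, dz, step):
    """Dead-zone quantizer: coefficients with |c| <= dz map to 0,
    the rest to signed bin indices of width `step`."""
    mag = np.abs(c)
    idx = np.where(mag <= dz, 0, np.floor((mag - dz) / step) + 1)
    return (np.sign(c) * idx).astype(int)

def dequantize(q, dz, step):
    """Reconstruct each nonzero bin at its interval midpoint."""
    mag = np.where(q == 0, 0.0, dz + (np.abs(q) - 0.5) * step)
    return np.sign(q) * mag

c = np.array([0.03, -0.8, 2.4, 0.1, -3.9])
q = dead_zone_quantize(c, dz=0.2, step=0.5)
print(q, dequantize(q, 0.2, 0.5))   # zeros inside the dead zone
```

The run of zeros produced for small coefficients is exactly what the modified run-length coder then exploits.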

7 citations


Proceedings ArticleDOI
19 Sep 2021
TL;DR: This work proposes a novel event compression algorithm based on a quad tree (QT) segmentation map derived from the adjacent intensity images that achieves greater than 6× compression compared to the state of the art.
Abstract: With several advantages over conventional RGB cameras, event cameras have provided new opportunities for tackling visual tasks under challenging scenarios with fast motion, high dynamic range, and/or power constraints. Yet unlike image/video compression, the performance of event compression algorithms is far from satisfactory and practical. The main challenge for compressing events is the unique event data form, i.e., a stream of asynchronously fired event tuples, each encoding the 2D spatial location, timestamp, and polarity (denoting an increase or decrease in brightness). Since events only encode temporal variations, they lack spatial structure, which is crucial for compression. To address this problem, we propose a novel event compression algorithm based on a quad tree (QT) segmentation map derived from the adjacent intensity images. The QT informs 2D spatial priority within the 3D space-time volume. In the event encoding step, events are first aggregated over time to form polarity-based event histograms. The histograms are then variably sampled via Poisson Disk Sampling prioritized by the QT-based segmentation map. Next, differential encoding and run length encoding are employed for encoding the spatial and polarity information of the sampled events, respectively, followed by Huffman encoding to produce the final encoded events. Our Poisson Disk Sampling based Lossy Event Compression (PDS-LEC) algorithm performs rate-distortion based optimal allocation. On average, our algorithm achieves greater than 6× compression compared to the state of the art.
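As a small illustration of the aggregation step, events can be binned over a time window into per-pixel, per-polarity count maps. This sketch assumes a simple `(x, y, t, p)` tuple format with `p` in {-1, +1}; the paper's actual data layout and window selection are not specified here:

```python
import numpy as np

def polarity_histograms(events, h, w, t0, t1):
    """Aggregate an event stream over the window [t0, t1) into two
    per-pixel count maps, one per polarity."""
    hist = np.zeros((2, h, w), dtype=np.int32)
    for x, y, t, p in events:
        if t0 <= t < t1:
            hist[0 if p > 0 else 1, y, x] += 1
    return hist

events = [(3, 1, 0.01, +1), (3, 1, 0.02, +1), (5, 2, 0.03, -1)]
print(polarity_histograms(events, h=4, w=8, t0=0.0, t1=0.05)[0, 1, 3])  # 2
```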

6 citations


Journal ArticleDOI
TL;DR: A modified video compression model is proposed that adapts the genetic algorithm to build an optimal codebook for adaptive vector quantization that is used as an activation function inside the neural network’s hidden layer to achieve higher compression ratio.
Abstract: Video compression has great significance in the communication of motion pictures. Video compression techniques try to remove the different types of redundancy within or between video sequences. In the temporal domain, the video compression techniques remove the redundancies between the highly correlated consecutive frames of the video. In the spatial domain, the video compression techniques remove the redundancies between the highly correlated neighboring pixels (samples) in the same frame. Evolving neural-network based video coding research efforts are focused on improving existing video codecs by performing better predictions that are incorporated within the same codec framework, or on holistic methods of end-to-end video compression schemes. Current neural network-based video compression adopts a static codebook to achieve compression, which leads to an inability to learn from new samples. This paper proposes a modified video compression model that adapts the genetic algorithm to build an optimal codebook for adaptive vector quantization that is used as an activation function inside the neural network’s hidden layer. A background subtraction algorithm is employed to extract motion objects within frames to generate the context-based initial codebook. Furthermore, Differential Pulse Code Modulation (DPCM) is utilized for lossless compression of significant wavelet coefficients, whereas low energy coefficients are lossy compressed using Learning Vector Quantization (LVQ) neural networks. Finally, Run Length Encoding (RLE) is engaged to encode the quantized coefficients to achieve a higher compression ratio. Experiments have demonstrated the system’s ability to achieve a higher compression ratio with acceptable quality as measured by PSNR.
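The DPCM stage used here for the significant wavelet coefficients stores a first value plus successive differences, which is trivially invertible. A minimal lossless sketch (integer samples assumed so reconstruction is exact):

```python
import numpy as np

def dpcm_encode(samples):
    """Store the first sample, then successive differences only."""
    samples = np.asarray(samples, dtype=np.int64)
    return samples[0], np.diff(samples)

def dpcm_decode(first, residuals):
    """Invert DPCM by cumulatively summing the residuals."""
    return np.concatenate(([first], first + np.cumsum(residuals)))

first, res = dpcm_encode([118, 120, 121, 121, 119])
assert dpcm_decode(first, res).tolist() == [118, 120, 121, 121, 119]
print(res)  # [ 2  1  0 -2] -- small residuals compress well
```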

5 citations


Proceedings ArticleDOI
23 Mar 2021
TL;DR: In this paper, a bijective Burrows-Wheeler-Scott Transform (BWST) is used to generate a reversible permutation of the input byte array with long repetitions of similar symbols, after which each byte is remapped so that the most frequent byte values are mapped to the lowest binary values.
Abstract: The binary representation of an arbitrary string does not contain long runs of repeating bits, but, first, reading all most significant bits of all bytes, then all second most significant bits and so on, results in much longer average runs. We use this observation in combination with several preprocessing steps to obtain a lossless RLE based compression algorithm comparable to ZIP: First, the uncompressed byte array is analyzed and for each byte its number of occurrences is counted. In parallel, a bijective Burrows-Wheeler-Scott Transform is applied, which produces a reversible permutation of the input byte array with long repetitions of similar symbols. Afterwards, each byte is remapped, where the most frequent byte values are mapped to the lowest binary values. The resulting byte array is interpreted in a specific way, known as Bit-Layers text representation, where all bits of same significance are read consecutively, starting with the most significant bits, resulting in long average runs of identical bits. On this representation, a run length encoding (RLE) is applied and the runs are counted to generate a Huffman tree. Then, the runs are output with a variable length code, together with the mapping needed to decompress the file.
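The Bit-Layers representation at the heart of this pipeline is a bit-plane transposition: read bit 7 of every byte, then bit 6, and so on. A small round-trip sketch of that transform alone (the byte remapping, BWST, RLE, and Huffman stages are omitted):

```python
def to_bit_layers(data: bytes) -> str:
    """Read bit 7 of every byte, then bit 6 of every byte, and so on,
    producing long runs when the high bits are mostly zero."""
    return "".join(
        str((b >> bit) & 1) for bit in range(7, -1, -1) for b in data
    )

def from_bit_layers(bits: str, n: int) -> bytes:
    """Invert the transposition for n original bytes."""
    out = [0] * n
    for k, c in enumerate(bits):
        layer, i = divmod(k, n)
        out[i] |= int(c) << (7 - layer)
    return bytes(out)

data = bytes([1, 3, 0, 2, 1])        # small values => high layers all zero
layers = to_bit_layers(data)
print(layers)                         # leading layers are long runs of '0'
assert from_bit_layers(layers, len(data)) == data
```

After the frequency-based remapping described above, most byte values are small, so the upper bit layers degenerate into very long zero runs that RLE handles well.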

4 citations


Proceedings ArticleDOI
03 Jul 2021
TL;DR: In this paper, the authors proposed an efficient EEG data compression system in terms of time and space complexities, which consists of three main units: preprocessing unit, compression unit, and reconstruction unit.
Abstract: The Electroencephalography (EEG) signals that indicate the electrical activity of the brain are acquired with a high sampling rate. Consequently, the size of the recorded EEG data is large. For storing and transmitting these data, large space and bandwidth are demanded. Therefore, preprocessing and compressing EEG data are important for efficient data transmission and storage. The purpose of this approach is to design an efficient EEG data compression system in terms of time and space complexities. The proposed system consists of three main units: a preprocessing unit, a compression unit, and a reconstruction unit. The core of the compression process occurs in the compression unit. Different combinations of hybrid lossy/lossless compression techniques were tried in the compression process. In this study, both the Discrete Cosine Transform and the Discrete Wavelet Transform were evaluated as the lossy compression algorithm. Arithmetic Coding and Run Length Encoding were then evaluated as the lossless compression algorithm. The final results showed that combining the Discrete Cosine Transform and Run Length Encoding yields the best system complexity and compression ratio. This approach achieved up to CR = 94% at RMSE = 0.188.
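A toy version of the winning DCT-plus-RLE combination might look as follows, using scipy's DCT for illustration. The coefficient-selection rule (keeping a fixed fraction of the largest coefficients) is an assumption for illustration, not the paper's stated criterion:

```python
import numpy as np
from scipy.fft import dct, idct

def compress_epoch(x, keep=0.1):
    """DCT an EEG epoch, zero all but the largest `keep` fraction of
    coefficients, and run-length encode the resulting zero runs."""
    c = dct(x, norm="ortho")
    k = max(1, int(keep * len(c)))
    thresh = np.sort(np.abs(c))[-k]
    c[np.abs(c) < thresh] = 0.0
    runs, i = [], 0                  # (value, run) pairs; zero runs dominate
    while i < len(c):
        j = i
        while j < len(c) and c[j] == c[i]:
            j += 1
        runs.append((c[i], j - i))
        i = j
    return runs

def reconstruct(runs):
    c = np.concatenate([np.full(n, v) for v, n in runs])
    return idct(c, norm="ortho")

x = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.05 * np.random.randn(256)
print(len(compress_epoch(x)), "runs vs", 256, "samples")
```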

3 citations


Journal ArticleDOI
TL;DR: Simulation findings indicate that adding RLE after the DCT algorithm yields the best performance in terms of compression ratio and complexity.
Abstract: Due to the high sampling rate, the recorded Electrocardiogram (ECG) data are huge. For storing and transmitting ECG data, more space and bandwidth are therefore needed. It is also very important to preprocess and compress the ECG data so that they can be distributed and processed effectively with less bandwidth and less space. This manuscript is aimed at creating an effective ECG compression method. In this method, the recorded ECG data are processed first in the pre-processing unit (ProUnit). In this unit, the ECG data are standardized and segmented. The resulting ECG data are then sent to the Compression Unit (CompUnit). The unit consists of a lossy compression algorithm (LosyComp) followed by a lossless compression algorithm (LossComp). The lossy compression algorithm transforms the low-redundancy ECG data into data with high redundancy. The data's high redundancy is then exploited by the LossComp algorithm to reach a high compression ratio (CR) with no further degradation. The LosyComp algorithms recommended in this manuscript are the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). LossComp algorithms such as Arithmetic Encoding (Arithm) and Run Length Encoding (RLE) are also suggested. To evaluate the proposed method, we measure the Compression Time (CompTime), Reconstruction Time (RecTime), RMSE, and CR. The simulation findings indicate that adding RLE after the DCT algorithm yields the best performance in terms of CR and complexity: with DCT used initially as the LosyComp algorithm, followed by RLE as the LossComp algorithm, CR = 55% at RMSE = 0.14 and above 94% at RMSE = 0.2.

3 citations


DOI
24 Sep 2021
TL;DR: In this article, the authors analyze the performance of different compression algorithms, including Adaptive Huffman Encoding Algorithm (AHE), Shannon Fano Algorithm, and Run Length Encoding algorithm.
Abstract: Compression is the art of representing data in a compact form rather than in its original or uncompressed form. Using data compression, the size of a particular file can be reduced. This is very helpful when processing, storing, or transferring a huge file, which demands large resources. The speed of transmission depends on the number of bits sent, the time needed for the encoder to generate the coded message, and the time needed for the decoder to recover the original ensemble. In a data storage application, the degree of compression is the primary concern. Compression can be classified as either lossless or lossy. Lossless compression methods reconstruct the original data from the compressed file without any loss of information. Thus, the data does not change during the compression and decompression processes. These kinds of compression algorithms are called reversible compressions, since the original message is reconstructed by the decompression process. This paper analyzes the performance of the Huffman Encoding Algorithm, Lempel Zev Welch Algorithm, Arithmetic Encoding Algorithm, Adaptive Huffman Encoding Algorithm, Shannon Fano Algorithm, and Run Length Encoding Algorithm. Specifically, the efficiency of these algorithms in compressing text data is assessed.
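For reference, the Run Length Encoding algorithm compared in this survey is the simplest of the six: each run of a repeated symbol collapses to a (symbol, count) pair. A minimal sketch:

```python
def rle_encode(text: str) -> list[tuple[str, int]]:
    """Classic run-length encoding: collapse each run of a repeated
    symbol into a (symbol, count) pair."""
    runs = []
    for ch in text:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def rle_decode(runs) -> str:
    """Losslessly expand the (symbol, count) pairs back to text."""
    return "".join(ch * n for ch, n in runs)

runs = rle_encode("AAAABBBCCD")
print(runs)                      # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
assert rle_decode(runs) == "AAAABBBCCD"
```

On text without long symbol runs this expands rather than compresses, which is why the survey's statistical coders (Huffman, arithmetic, LZW) typically win on natural text.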

2 citations


Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed an improved algorithm based on run length encoding to compress power dispatch data, which not only saves a lot of space, but also improves the speed of data retrieval.

2 citations


Journal ArticleDOI
01 Mar 2021
TL;DR: A lossless compression method based on adaptive interval run-length encoding is proposed, which automatically identifies the frame format of telemetry data and carries out longitudinal adaptive-interval run-length encoding of the inter-frame differential data to improve the compression efficiency.
Abstract: To address the problem of massive historical telemetry data storage in flight test, a lossless compression method based on adaptive interval run-length encoding is proposed. To overcome the low compression efficiency of the traditional run-length encoding algorithm on word data, the algorithm studies the storage characteristics of telemetry data, automatically identifies the frame format, and applies longitudinal adaptive-interval run-length encoding to the inter-frame differential data to improve the compression efficiency. The test results show that the compression ratio of the improved algorithm is improved by 58.1% and 1.5% compared with the traditional run-length encoding algorithm and the inter-frame differential lateral run-length encoding algorithm, respectively.
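The core idea, inter-frame differencing followed by run-length encoding down each field (longitudinal, i.e. column-wise), can be sketched as below. The adaptive interval selection and frame-format detection that the paper adds are omitted, and the frame layout is illustrative:

```python
import numpy as np

def longitudinal_rle(frames):
    """Difference telemetry frames field-by-field (inter-frame DPCM),
    then run-length encode each field's residuals down the columns;
    slowly changing channels yield long zero runs."""
    frames = np.asarray(frames, dtype=np.int64)
    resid = np.vstack([frames[0], np.diff(frames, axis=0)])
    encoded = []
    for col in resid.T:                      # one field (word) per column
        runs, i = [], 0
        while i < len(col):
            j = i
            while j < len(col) and col[j] == col[i]:
                j += 1
            runs.append((int(col[i]), j - i))
            i = j
        encoded.append(runs)
    return encoded

frames = [[100, 7], [100, 7], [100, 8], [101, 8]]   # two telemetry fields
print(longitudinal_rle(frames))
```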

1 citation


Proceedings ArticleDOI
Liyuan Cai, Hao Luo, Chen Yao, Lianfang Wang, Liyong Ma
13 Aug 2021
TL;DR: In this article, a lossless compression method combining variational mode decomposition (VMD) and run length encoding (RLE) for marine diesel high-frequency data is proposed.
Abstract: In a marine diesel engine vibration monitoring system, a large amount of high-frequency data is detected by high-frequency sensors. Using data compression technology can effectively reduce the amount of data monitored by the ship vibration monitoring system and improve the communication efficiency of the system. In this paper, a lossless compression method combining variational mode decomposition (VMD) and run length encoding (RLE) for marine diesel high-frequency data is proposed. First, the signal is decomposed by VMD, and then the decomposed sub-signals and residual signals are encoded. In order to obtain a better compression ratio, a compression method combining RLE and Lempel-Ziv-Welch (LZW) is adopted. Experimental results show that this method can compress high-frequency diesel engine monitoring data effectively and save network bandwidth.
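The paper's entropy-coding stage combines RLE with Lempel-Ziv-Welch. As a reference point, a textbook LZW encoder is compact; this sketch is not the authors' implementation and omits the VMD decomposition entirely:

```python
def lzw_encode(data: bytes) -> list[int]:
    """Textbook LZW: grow a dictionary of byte sequences and emit
    the code of the longest known prefix at each step."""
    table = {bytes([i]): i for i in range(256)}
    out, w = [], b""
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

print(lzw_encode(b"ABABABAB"))   # repeated pairs collapse to few codes
```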

Journal ArticleDOI
TL;DR: A spatial lossy compression algorithm for gray scale images is presented that exploits the inter-pixel and the psycho-visual data redundancies in images and achieves a promising quality vs. compression ratio results.
Abstract: Image compression is vital for many areas such as communication and storage of data that is rapidly growing nowadays. In this paper, a spatial lossy compression algorithm for gray scale images is presented. It exploits the inter-pixel and the psycho-visual data redundancies in images. The proposed technique finds paths of connected pixels that fluctuate in value within some small threshold. The path is calculated by looking at the 4-neighbors of a pixel and then choosing the best one based on two conditions: the first is that the selected pixel must not be included in another path, and the second is that the difference between the first pixel in the path and the selected pixel is within the specified threshold value. A path starts with a given pixel and consists of the locations of the subsequently selected pixels. A run-length encoding scheme is applied on paths to harvest the inter-pixel redundancy. After applying the proposed algorithm on several test images, promising quality vs. compression ratio results have been achieved.
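A possible reading of the path-growing step is sketched below. The tie-breaking rule among valid 4-neighbors is not fully specified in the abstract, so choosing the neighbor closest in value to the path's first pixel is an assumption, and `visited` would be shared across all paths in a full encoder:

```python
def build_path(img, start, thresh):
    """Greedily grow a path of 4-connected pixels whose values stay
    within `thresh` of the path's first pixel."""
    h, w = len(img), len(img[0])
    y0, x0 = start
    base = img[y0][x0]
    visited = {start}          # assumption: per-path here, global in practice
    path = [start]
    y, x = start
    while True:
        candidates = [
            (ny, nx)
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
            if 0 <= ny < h and 0 <= nx < w
            and (ny, nx) not in visited
            and abs(img[ny][nx] - base) <= thresh
        ]
        if not candidates:
            return path
        # assumed tie-break: the neighbour closest in value to the first pixel
        y, x = min(candidates, key=lambda p: abs(img[p[0]][p[1]] - base))
        visited.add((y, x))
        path.append((y, x))

img = [[10, 11, 50], [12, 11, 52], [13, 90, 51]]
print(build_path(img, (0, 0), thresh=3))
```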

Journal ArticleDOI
TL;DR: This paper proposes an audio steganography method with high capacity, based on run-length encoding and the integer wavelet transform.
Abstract: This paper proposes an audio steganography method based on run length encoding and integer wavelet transform which can be used to hide a secret message in digital audio. The major contribution of the proposed scheme is an audio steganography with high capacity, where the secret information is compressed by run length encoding. In the applicable scenario, the main purpose is to hide as much information as possible in the cover audio files. First, the secret information is chaotically scrambled, then the result of scrambling is run length encoded, and finally, the secret information is embedded into integer wavelet coefficients. The experimental results and comparison with an existing technique show that, by utilizing the lossless compression of run length encoding and the attack resistance of the wavelet domain, the proposed method has improved capacity and good audio quality, and can achieve blind extraction while maintaining imperceptibility and strong robustness.
Keywords: Audio Steganography, Integer Wavelet, Large Capacity, Run Length Encoding

Journal ArticleDOI
TL;DR: The proposed coding method provides a high level of security and complexity and produces ASCII non-printing characters, which can be employed for steganography purposes to obtain complete similarity between secret text and cover text.
Abstract: Transferring data in a safe way is one of the things that have been of interest since ancient times. Data hiding, or steganography, is a method used to protect data during its transmission. Coding is a method to represent data in another shape, so a high level of security is achieved by using coding and hiding data together. The proposed method is a hybrid between coding and hiding, but this paper focuses on the proposed data coding part only: the cover text (used for information hiding) is used to extract private information shared between the sender and the receiver for the coding process, and the output of the proposed coding method can then be used for the information hiding process. Applying the proposed coding method provides a high level of security and complexity and produces ASCII non-printing characters, which can be employed for steganography purposes to obtain complete similarity between secret text and cover text.

Posted Content
TL;DR: In this article, the authors present a combination of preprocessing steps that turn arbitrary byte-encoded input data into a bit-string, combining this approach with a dynamic byte remapping as well as a Burrows-Wheeler-Scott transform on a byte level.
Abstract: The Run Length Encoding (RLE) compression method is a long-standing simple lossless compression scheme which is easy to implement and achieves good compression on input data which contains repeating consecutive symbols. In its pure form, RLE is not applicable to natural text or other input data with short sequences of identical symbols. We present a combination of preprocessing steps that turn arbitrary input data in a byte-wise encoding into a bit-string which is highly suitable for RLE compression. The main idea is to first read all most significant bits of the input byte-string, followed by the second most significant bits, and so on. We combine this approach with a dynamic byte remapping as well as a Burrows-Wheeler-Scott transform on a byte level. Finally, we apply a Huffman encoding on the output of the bit-wise RLE encoding to allow for more dynamic lengths of code words encoding the runs of the RLE. With our technique we achieve a lossless compression which is better than standard RLE compression by a factor of 8 on average.
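Of the preprocessing steps, the dynamic byte remapping is the easiest to isolate: ranking byte values by frequency and mapping the most frequent to the lowest codes keeps the high bit layers mostly zero after transposition. A sketch under that reading (tie-breaking among equal frequencies is arbitrary here):

```python
from collections import Counter

def byte_remap(data: bytes) -> tuple[bytes, list[int]]:
    """Map the most frequent byte values to the lowest codes so that,
    after bit-layer transposition, the high bit layers are mostly zero."""
    ranked = [b for b, _ in Counter(data).most_common()]
    ranked += [b for b in range(256) if b not in ranked]
    fwd = {b: i for i, b in enumerate(ranked)}
    return bytes(fwd[b] for b in data), ranked  # `ranked` inverts the map

remapped, table = byte_remap(b"mississippi")
print(list(remapped))            # frequent 'i'/'s' become codes 0 and 1
assert bytes(table[v] for v in remapped) == b"mississippi"
```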

Journal ArticleDOI
TL;DR: The algorithm proposed in this paper optimizes bit-level Run-Length Encoding data compression, uses special encoding of repeating data blocks, and, if necessary, combines it with delta data transformation or representation of data in its original form, intending to increase compression efficiency compared to a conventional bit-level Run-Length Encoding approach.
Abstract: Lossless data compression algorithms can use statistical redundancy to represent data using a fewer number of bits in comparison to the original uncompressed data. Run-Length Encoding is one of the simplest lossless compression algorithms in terms of understanding its principles and software implementation, as well as in terms of temporal and spatial complexity. If this principle is applied to individual bits of the original uncompressed data without respecting the byte boundaries, this approach is referred to as bit-level Run-Length Encoding. The algorithm for lossless data compression proposed in this paper optimizes bit-level Run-Length Encoding data compression, uses special encoding of repeating data blocks, and, if necessary, combines it with delta data transformation or representation of data in its original form, intending to increase compression efficiency compared to a conventional bit-level Run-Length Encoding approach. The advantage of the proposed algorithm is the increase of the compression ratio in comparison to bit-level Run-Length Encoding, with higher time and space consumption as the trade-off. Test results, obtained by compressing the segmentation metadata of different volume datasets, are presented in the last part of the paper.
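Two of the ingredients, bit-level run extraction that ignores byte boundaries and the optional delta transform, can be sketched independently of the paper's block encoding. The zeros-first alternating-run convention below is an assumption for illustration:

```python
def bit_level_runs(data: bytes) -> list[int]:
    """Run lengths over the raw bit stream, ignoring byte boundaries.
    The first run counts 0-bits (possibly empty); runs then alternate."""
    bits = "".join(f"{b:08b}" for b in data)
    runs, i, cur = [], 0, "0"
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == cur:
            j += 1
        runs.append(j - i)
        cur = "1" if cur == "0" else "0"
        i = j
    return runs

def delta(data: bytes) -> bytes:
    """Optional delta transform: byte-wise differences mod 256, which
    can lengthen bit runs on slowly varying data."""
    return bytes([data[0]] + [(b - a) % 256 for a, b in zip(data, data[1:])])

raw = bytes([200, 201, 202, 203])
print(bit_level_runs(raw))         # many short runs
print(bit_level_runs(delta(raw)))  # fewer, longer runs of zeros
```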

Journal ArticleDOI
26 Mar 2021
TL;DR: It was found that in comparison with the adaptation of the combined encoder structure using direct determination of the arithmetic code volume of each bit plane of DPCM pixel values, the proposedEncoder structure provides a significant reduction in computational complexity while maintaining high image compression ratios.
Abstract: The problem of increasing the efficiency of coding of halftone images in the space of bit planes of differences in pixel values obtained using differential coding (DPCM – Differential pulse-code modulation) is considered. For a compact representation of DPCM pixel values, it is proposed to use a combined compression encoder that implements arithmetic coding and run-length coding. An arithmetic encoder provides high compression ratios, but has high computational complexity and significant encoding overhead. This makes it effective primarily for compressing the mean-value bit-planes of DPCM pixel values. Run-length coding is extremely simple and outperforms arithmetic coding in compressing long sequences of repetitive symbols that often occur in the upper bit planes of DPCM pixel values. For DPCM bit planes of pixel values of any image, a combination of simple run length coders and complex arithmetic coders can be selected that provides the maximum compression ratio for each bit plane and all planes in general with the least computational complexity. As a result, each image has its own effective combined encoder structure, which depends on the distribution of bits in the bit planes of the DPCM pixel values. To adapt the structure of the combined encoder to the distribution of bits in the bit planes of DPCM pixel values, the article proposes to use prediction of the volume of arithmetic code based on entropy and comparison of the obtained predicted value with the volume of run length code. The entropy is calculated based on the values of the number of repetitions of ones and zero symbols, which are obtained as intermediate results of the run length encoding. This does not require additional computational costs. It was found that in comparison with the adaptation of the combined encoder structure using direct determination of the arithmetic code volume of each bit plane of DPCM pixel values, the proposed encoder structure provides a significant reduction in computational complexity while maintaining high image compression ratios.
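The selection rule can be illustrated as follows: predict the arithmetic code volume of a bit plane from its zero/one counts via entropy (counts that fall out of the RLE pass anyway), then keep whichever coder is predicted to be cheaper. The fixed 8-bit run code size below is an illustrative assumption, not the paper's format:

```python
import math

def predict_arith_bits(n_zeros: int, n_ones: int) -> float:
    """Entropy-based prediction of the arithmetic code size (in bits)
    for a bit plane, from symbol counts gathered during RLE."""
    n = n_zeros + n_ones
    h = 0.0
    for k in (n_zeros, n_ones):
        if k:
            p = k / n
            h -= p * math.log2(p)
    return n * h

def choose_coder(runs: list[int], run_code_bits: int = 8) -> str:
    """Pick the cheaper coder for one bit plane: compare the RLE output
    size against the predicted arithmetic code size."""
    n_zeros = sum(runs[0::2])          # alternating runs: zeros first
    n_ones = sum(runs[1::2])
    rle_bits = len(runs) * run_code_bits
    return "RLE" if rle_bits <= predict_arith_bits(n_zeros, n_ones) else "arithmetic"

print(choose_coder([5000, 2, 4000, 1]))   # very long runs -> "RLE"
print(choose_coder([3, 2, 1, 4, 2, 3]))   # mixed bits -> "arithmetic"
```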

Posted Content
TL;DR: In this paper, a run length encoding algorithm for lossless data compression that exploits positional redundancy by representing data in a two-dimensional model of concentric circles is presented, which enables detection of runs (each of a different character) in which runs need not be contiguous.
Abstract: A new run length encoding algorithm for lossless data compression that exploits positional redundancy by representing data in a two-dimensional model of concentric circles is presented. This visual transform enables detection of runs (each of a different character) in which runs need not be contiguous and hence, is a generalization of run length encoding. Its advantages and drawbacks are characterized by comparing its performance with TurboRLE.


Journal ArticleDOI
TL;DR: The proposed RDH scheme is also useful for secure message transmission where the receiver is concerned about the restoration of the cover image, and the results show that the proposed scheme performs better than the existing schemes.
Abstract: Histogram shifting-based Reversible Data Hiding (RDH) is a well-explored information security domain for secure message transmission. In this paper, we propose a novel RDH scheme that considers the block-wise histograms of the image. Most of the existing histogram shifting techniques carry additional overhead information to recover the overflow and/or the underflow pixels. In the new scheme, the meta-data that is required for a block is embedded within the same block in such a way that the receiver can perform image recovery and data extraction. As per the proposed data hiding process, all the blocks need not be used for data hiding, so we have used marker information to distinguish between the blocks which are used to hide data and the blocks which are not. Since the marker information needs to be embedded within the image, we have compressed the marker information using run-length encoding. The run-length encoded sequence is represented by an Elias gamma encoding procedure. The compression of the marker information ensures a better Embedding Rate (ER) for the proposed scheme. The proposed RDH scheme will be useful for secure message transmission scenarios where the restoration of the cover image is also a concern. The proposed scheme's experimental analysis is conducted on the USC-SIPI image dataset maintained by the University of Southern California, and the results show that the proposed scheme performs better than the existing schemes.
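Elias gamma coding, used here to represent the run-length-encoded marker sequence, writes a unary length prefix followed by the binary value, so short runs cost few bits. A minimal sketch:

```python
def elias_gamma(n: int) -> str:
    """Elias gamma code for n >= 1: (bit-length - 1) zeros, then n in
    binary; small values get short codes."""
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def encode_marker_runs(runs: list[int]) -> str:
    """Concatenate the gamma codes of the marker-bit run lengths."""
    return "".join(elias_gamma(r) for r in runs)

print(elias_gamma(1), elias_gamma(5), elias_gamma(9))  # 1 00101 0001001
print(encode_marker_runs([3, 1, 12]))
```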


Journal ArticleDOI
TL;DR: Various compression techniques such as NULL Suppression, Dictionary Encoding, Run Length Encoding, Bit Vector Encoding and Lempel Ziv Encoding, which are being popularly used to optimise the performance of columnar databases, are discussed.
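As an example of one of these techniques, dictionary encoding replaces a low-cardinality column's values with small integer codes, after which run-length encoding can collapse the repeated ids. A sketch (the column data is illustrative):

```python
def dictionary_encode(column: list[str]) -> tuple[list[int], list[str]]:
    """Dictionary encoding for a low-cardinality column: store each
    distinct value once and replace cells with small integer codes."""
    dictionary: list[str] = []
    codes: dict[str, int] = {}
    out = []
    for v in column:
        if v not in codes:
            codes[v] = len(dictionary)
            dictionary.append(v)
        out.append(codes[v])
    return out, dictionary

col = ["DE", "DE", "DE", "US", "US", "FR"]
ids, d = dictionary_encode(col)
print(ids, d)   # [0, 0, 0, 1, 1, 2] ['DE', 'US', 'FR']
# run-length encoding the id stream then collapses the repeats further
```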

Journal ArticleDOI
O Elshamy, I Fayed, A Khalifa, A Abdeen, N Zaher 
01 Feb 2021
TL;DR: The solution presents a low-cost alternative for start-up agricultural projects in third world countries, where network instability is an issue and off-the-shelf expensive solutions are not a viable option.
Abstract: In this paper a method for the effective transmission of data from a Wireless Sensor Network (WSN) in a remote agricultural location with limited connectivity is proposed, along with a meaningful visualisation of the data at the user end to help in decision making and control of the irrigation system. The paper encompasses three main segments, which are data compression, checking for network availability, and the user interface. First, data compression is implemented to reduce the amount of transmitted data, due to the limited connectivity in the area, which translates into low data rates and a high cost of sending the data. Four different techniques are compared: Huffman encoding, Lempel-Ziv-Welch (LZW), differential encoding, and Run Length Encoding (RLE), where Huffman encoding showed the best results when combined with differential encoding. The second phase is the handling of poor network conditions in the agricultural plant by switching between two different communication modes: mobile data and Short Message Service (SMS). Lastly, the user monitoring centre is a mobile application that allows the user to monitor and control the agricultural plant remotely. It includes mapping and visualization of temperature and soil moisture data and a control system for the irrigation system. The solution presents a low-cost alternative for start-up agricultural projects in third world countries, where network instability is an issue and off-the-shelf expensive solutions are not a viable option.
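The winning combination, differential encoding followed by Huffman coding, works because sensor readings change slowly: the delta stream has few distinct symbols, and Huffman assigns them short codes. A compact sketch (the reading values and rounding step are illustrative):

```python
import heapq
from collections import Counter

def huffman_code(symbols) -> dict:
    """Build a Huffman code table from symbol frequencies. Heap entries
    carry an integer tiebreaker so dicts are never compared."""
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol case
        return {s: "0" for s in heap[0][2]}
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
    return heap[0][2]

readings = [21.0, 21.1, 21.1, 21.2, 21.2, 21.2]        # temperature samples
deltas = [round(b - a, 1) for a, b in zip(readings, readings[1:])]
table = huffman_code(deltas)
print(deltas, table)    # few distinct deltas -> short codes
```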