
Showing papers on "Run-length encoding published in 2020"


Journal ArticleDOI
TL;DR: In this article, the authors used the Frei-Chen bases technique and a Modified Run Length Encoding (RLE) to compress images; the average subspace is applied at the first stage, in which the blocks with the highest energy are replaced by a single value representing the average of the pixels in the corresponding block.

8 citations


Proceedings ArticleDOI
Jia Shi
11 Jun 2020
TL;DR: This paper proposes an incremental heuristic that identifies the set of columns to be compressed and the order of rows that offer a better compression ratio, and improves the compression rate by up to 25% on test data, compared with compressing all columns of a table.
Abstract: Effective compression is essential when databases are used in Big Data applications. For in-memory columnar databases, compression can help to load the columns faster and speed up query evaluation. In this paper, we consider compressing columns using the Run Length Encoding (RLE). This algorithm encodes each region with identical value using a single run. The question we study in this paper is 'how to rearrange table columns for better compression?' We observe that not every column of a table benefits from column compression in an ideal column arrangement. Because finding the optimal column arrangement is NP-hard, we propose an incremental heuristic that identifies the set of columns to be compressed and the order of rows that offer a better compression ratio. Our preliminary experiments confirm that our algorithm improves the compression rate by up to 25% on test data, compared with compressing all columns of a table.
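
As a point of reference for the abstract above, here is a minimal sketch of the plain RLE primitive the paper builds on and of why row order matters for it; the toy column values are invented for illustration, and this is not the paper's incremental heuristic for choosing columns and row orders.

```python
def rle_encode(column):
    """Encode a column as (value, run_length) pairs."""
    runs = []
    for value in column:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([value, 1])   # start a new run
    return [tuple(r) for r in runs]

# Row order matters: reordering rows so that equal values are adjacent
# (here simply by sorting) collapses the column into far fewer runs.
col = ["DE", "US", "DE", "US", "DE", "US"]
print(rle_encode(col))          # 6 runs of length 1: no gain
print(rle_encode(sorted(col)))  # 2 runs: much better compression
```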

6 citations


Journal ArticleDOI
TL;DR: This work introduces a selective image encryption technique that encrypts predetermined bulks of the original image data in order to reduce the encryption/decryption time and the computational complexity of processing the huge image data.
Abstract: Most of today’s techniques encrypt all of the image data, which consumes a tremendous amount of time and computational payload. This work introduces a selective image encryption technique that encrypts predetermined bulks of the original image data in order to reduce the encryption/decryption time and the computational complexity of processing the huge image data. The technique applies a compression algorithm based on the Discrete Cosine Transform (DCT). Two approaches are implemented based on color space conversion as a preprocessing step for the compression phases, YCbCr and RGB, where the resultant compressed sequence is selectively encrypted using a randomly generated combined secret key. The results showed a significant reduction in image quality degradation when applying the system based on YCbCr over RGB, where the compression ratio was raised in some of the tested images to 50% for the same Peak Signal to Noise Ratio (PSNR). The usage of 1-D DCT reduced the transform time by 47:1 compared to the same transform using 2-D DCT. The values of the adaptive scalar quantization parameters were reduced to half for the luminance (Y band) to preserve the visual quality, while the chrominance (Cb and Cr bands) was quantized with the predetermined quantization parameters. In the hybrid encoder, horizontal zigzag block scanning was applied to scan the image. The Detailed Coefficient (DC) values are highly correlated in this arrangement; the DC coefficients are losslessly compressed by Differential Pulse Code Modulation (DPCM) and the Accumulative Coefficients (AC) are compressed using Run Length Encoding (RLE). As a consequence, the compression gain obtained by the compression algorithm was up to 95%. Three arrays result from each band (DC coefficients, AC values, and AC runs), where the cipher is applied selectively to some or all of those bulks. This reduces the encryption/decryption time significantly: encrypting the DC coefficients provided the second-best randomness and the least encryption/decryption time recorded (3×10⁻³ sec.) for the entire image. Although the compression algorithm consumes time, it is more than offset by the encryption time saved.
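
A minimal sketch of the DPCM-plus-RLE stage described above, producing the three arrays the abstract mentions (DC differences, AC values, AC runs). The block contents, zigzag ordering, and end-of-block convention here are assumptions for illustration, not the paper's exact encoder.

```python
def encode_blocks(blocks):
    """blocks: zigzag-ordered quantized coefficient lists; the first entry
    of each list is the DC coefficient, the rest are AC coefficients."""
    dc_diffs, ac_values, ac_runs = [], [], []
    prev_dc = 0
    for block in blocks:
        dc, ac = block[0], block[1:]
        dc_diffs.append(dc - prev_dc)      # DPCM: store only the difference
        prev_dc = dc
        zero_run = 0
        for coeff in ac:
            if coeff == 0:
                zero_run += 1
            else:
                ac_runs.append(zero_run)   # zeros preceding this value
                ac_values.append(coeff)
                zero_run = 0
        ac_runs.append(zero_run)           # trailing zeros of the block;
        ac_values.append(0)                # (run, 0) acts as end-of-block
    return dc_diffs, ac_values, ac_runs

dc, vals, runs = encode_blocks([[52, 0, 0, -3, 0, 0, 0, 1],
                                [49, 0, 5, 0, 0, 0, 0, 0]])
print(dc)    # [52, -3]
print(vals)  # [-3, 1, 0, 5, 0]
print(runs)  # [2, 3, 0, 1, 5]
```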

6 citations


Proceedings ArticleDOI
01 Feb 2020
TL;DR: This paper presents a hybrid approach to compress text data that encompasses various methodologies like Run Length Encoding (RLE), Infix Encoding and Bit reduction techniques to achieve the best possible compression ratio for large text files and dictionary data.
Abstract: Data in today's world is the most important asset one can have, but with it comes the issue of handling it properly. To increase data portability and reduce the size of data to be stored in limited storage space, a strong need for data compression has arisen. Many approaches to data compression have been defined; since the 1950s numerous algorithms have been devised and refined, and over the years many seemingly powerful algorithms have been developed. With a wide range of compression techniques available, there comes the need to choose the right method for text compression, one that gives good results in less time. In this paper, we present a hybrid approach to compress text data. This hybrid approach encompasses various methodologies like Run Length Encoding (RLE), Infix Encoding and Bit reduction techniques. We aim to overlay different techniques to achieve the best possible compression ratio for large text files and dictionary data.

4 citations


Journal ArticleDOI
01 Nov 2020
TL;DR: In this article, a new geometrical image fusion method is introduced that uses the chain-code representation of the image objects matched among the images to be fused; a matching method is applied to extract the matching objects from the overall objects included in the original images.
Abstract: In this paper, a new geometrical image fusion method is introduced that uses the chain-code representation of the image objects matched among the images to be fused. The input images are separated into a number of objects. A matching method is applied to extract the matching objects from the overall objects included in the original images, and the chain-code string is then calculated for each one. Run length encoding is applied to those code strings to reduce the space required. The image resulting from the geometrical fusion is more informative, as it contains the most important features included in the original images.
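
A minimal sketch of chain-coding an object contour and run-length encoding the resulting direction string, in the spirit of the abstract above; the 8-direction convention and the toy contour are assumptions for illustration, not the paper's exact procedure.

```python
# Map of (dx, dy) steps to 8-connected Freeman chain-code directions
# (convention assumed here: 0 = +x, directions increase counter-clockwise).
DIRECTIONS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
              (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code(boundary):
    """boundary: ordered list of (x, y) pixels along an object contour."""
    return [DIRECTIONS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(boundary, boundary[1:])]

def rle(codes):
    """Collapse the direction string into (direction, run_length) pairs."""
    runs = []
    for c in codes:
        if runs and runs[-1][0] == c:
            runs[-1][1] += 1
        else:
            runs.append([c, 1])
    return [tuple(r) for r in runs]

contour = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
print(rle(chain_code(contour)))   # [(0, 3), (6, 2)]
```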

4 citations


Journal ArticleDOI
TL;DR: A combination of Run Length Encoding (RLE) and Huffman coding for two-dimensional binary image compression, namely 2DRLE, is proposed; it achieves a higher compression ratio than conventional Huffman coding for images, reaching more than 8:1 without any distortion.
Abstract: Text images are used in many types of conventional data communication where texts are not directly represented by digital characters such as ASCII but by an image, for instance a facsimile file or scanned documents. We propose a combination of Run Length Encoding (RLE) and Huffman coding for two-dimensional binary image compression, namely 2DRLE. Firstly, each row in an image is read sequentially. Each consecutive recurring row is kept once and the number of occurrences is stored. Secondly, the same procedure is performed column-wise on the image produced by the first stage to obtain an image without consecutive recurring rows and columns. The image from the last stage is then compressed using Huffman coding. The experiment shows that 2DRLE achieves a higher compression ratio than conventional Huffman coding for images, reaching a compression ratio of more than 8:1 without any distortion.
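
A minimal sketch of the first two 2DRLE stages on a toy binary image: consecutive recurring rows are kept once with a repeat count, then the same collapse is applied column-wise; the final Huffman stage and the toy image are assumptions for illustration only.

```python
def collapse_rows(image):
    """Keep each consecutive recurring row once, with its repeat count."""
    rows, counts = [], []
    for row in image:
        if rows and rows[-1] == list(row):
            counts[-1] += 1
        else:
            rows.append(list(row))
            counts.append(1)
    return rows, counts

def transpose(image):
    return [list(col) for col in zip(*image)]

# Toy binary text image: rows 0-1 are identical, columns 2-3 are identical.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [1, 1, 0, 0],
       [0, 1, 1, 1]]

rows, row_counts = collapse_rows(img)               # stage 1: row-wise
cols, col_counts = collapse_rows(transpose(rows))   # stage 2: column-wise
# The remaining matrix plus the two count lists would then be
# entropy-coded with Huffman in the paper's final stage.
print(row_counts, col_counts)   # [2, 1, 1] [1, 1, 2]
```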

3 citations


Book ChapterDOI
01 Jan 2020
TL;DR: An algorithm to improve the compression ratio is proposed which uses the concept of RLE (run length encoding) with a Modified HuffBit algorithm and is found to be 20% more accurate when compared to existing algorithms.
Abstract: Over the last two decades, handling and storing DNA sequences has been considered a big problem by many bioinformatics researchers because genomic databases are increasing drastically. To handle this problem, computational biology plays an important role in tasks such as searching for homology, genome formulation, predicting new protein sequences, hereditary control networks, and new creative genomics structures. Available resources are not sufficient for storing and handling large DNA sequences. Various tools have been developed using different algorithms and approaches. We have proposed an algorithm to improve the compression ratio which uses the concept of RLE (run length encoding) with a Modified HuffBit algorithm. The results obtained by the proposed method are found to be 20% more accurate when compared to existing algorithms.

2 citations



Journal ArticleDOI
TL;DR: This paper proposes a compression algorithm using an octonary repetition tree (ORT) based on run length encoding (RLE), one type of lossless data compression method which has duplica...
Abstract: This paper proposes a compression algorithm using an octonary repetition tree (ORT) based on run length encoding (RLE). Generally, RLE is one type of lossless data compression method which has duplica...

2 citations


Journal ArticleDOI
TL;DR: This paper presents an efficient parallel approach to reduce the execution time of compression algorithms, and shows that the performance of the parallel model, implemented using the Open Multi-Processing (OpenMP) Application Programming Interface through Intel Parallel Studio on a multicore architecture platform, improves remarkably over the sequential approach.
Abstract: Today, there is a huge demand for data compression due to the need to reduce transmission time and increase data storage capacity. Data compression is a technique which represents information, images, and video files in a compressed or compact format. There are various data compression techniques which keep information as accurately as possible with the fewest number of bits and send it through a communication channel. The arithmetic algorithm, Lempel–Ziv 77 (LZ77), and run length encoding with K-precision (K-RLE) are lossless data compression algorithms which have a lower performance rate because of their processing complexity as well as execution time. This paper presents an efficient parallel approach to reduce the execution time of these compression algorithms. OpenMP is an efficient tool for programming within parallel shared-memory environments. Experiments with the parallel model, implemented using the Open Multi-Processing (OpenMP) Application Programming Interface through Intel Parallel Studio on a multicore architecture platform (Core 2 Duo, 2.4 GHz, 1 GB RAM), show that the parallel approach improves performance remarkably over the sequential approach. The improvement in compression ratio through an efficient parallel approach leads to a reduction in transmission cost, storage space, and bandwidth without additional hardware infrastructure. An overall performance evaluation shows that the arithmetic data compression algorithm achieves 46%, which is better than LZ77 at 44% and K-RLE at 37%.
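
A small sketch of the K-RLE idea mentioned above, under the commonly cited definition that a run absorbs successive values lying within ±K of the run's representative value (K = 0 reduces to plain RLE); this definition and the sample values are assumptions, and the paper's OpenMP parallelization is not shown.

```python
def k_rle(data, k):
    """Run-length encode, merging successive values that lie within
    +/- k of the run's representative value (k = 0 is plain RLE)."""
    runs = []
    for x in data:
        if runs and abs(x - runs[-1][1]) <= k:
            runs[-1][0] += 1           # extend the current run
        else:
            runs.append([1, x])        # start a new (count, value) run
    return [tuple(r) for r in runs]

samples = [20, 21, 20, 22, 30, 31, 30]
print(k_rle(samples, k=0))  # 7 runs: plain RLE gains nothing here
print(k_rle(samples, k=2))  # [(4, 20), (3, 30)]: far fewer runs
```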

2 citations


Posted Content
02 May 2020
TL;DR: This paper performs lossy event compression (LEC) based on a quadtree (QT) segmentation map derived from an adjacent image that provides a priority map for the 3D space-time volume, albeit in a 2D manner.
Abstract: With several advantages over conventional RGB cameras, event cameras have provided new opportunities for tackling visual tasks under challenging scenarios with fast motion, high dynamic range, and/or power constraints. Yet unlike image/video compression, the performance of event compression algorithms is far from satisfactory and practical. The main challenge in compressing events is the unique event data form, i.e., a stream of asynchronously fired event tuples each encoding the 2D spatial location, timestamp, and polarity (denoting an increase or decrease in brightness). Since events only encode temporal variations, they lack the spatial structure which is crucial for compression. To address this problem, we propose a novel event compression algorithm based on a quad tree (QT) segmentation map derived from the adjacent intensity images. The QT informs 2D spatial priority within the 3D space-time volume. In the event encoding step, events are first aggregated over time to form polarity-based event histograms. The histograms are then variably sampled via Poisson Disk Sampling prioritized by the QT-based segmentation map. Next, differential encoding and run length encoding are employed for encoding the spatial and polarity information of the sampled events, respectively, followed by Huffman encoding to produce the final encoded events. Our Poisson Disk Sampling based Lossy Event Compression (PDS-LEC) algorithm performs rate-distortion based optimal allocation. On average, our algorithm achieves greater than 6x compression compared to the state of the art.
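
A minimal sketch of only the final encoding stage the abstract names: differential encoding for the spatial coordinates of the sampled events and run-length encoding for their polarities. The event tuple format and raster ordering are assumptions, and the QT segmentation, Poisson Disk Sampling, and Huffman stages are omitted.

```python
def encode_sampled_events(events):
    """events: (x, y, polarity) tuples kept by the sampling stage,
    assumed already sorted in raster order."""
    dx, dy, pol = [], [], []
    prev_x = prev_y = 0
    for x, y, p in events:
        dx.append(x - prev_x)          # differential encoding of coordinates
        dy.append(y - prev_y)
        prev_x, prev_y = x, y
        pol.append(p)
    runs = []                          # run-length encoding of polarities
    for p in pol:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return dx, dy, [tuple(r) for r in runs]

dx, dy, pol_runs = encode_sampled_events([(3, 1, 1), (5, 1, 1), (9, 2, -1)])
print(dx, dy, pol_runs)   # [3, 2, 4] [1, 0, 1] [(1, 2), (-1, 1)]
```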

Proceedings ArticleDOI
01 Sep 2020
TL;DR: A new lossless image compression algorithm is considered that reduces data redundancy in video processing systems and provides a 1.33 times better compression quality index than the RLE algorithm.
Abstract: This paper considers a new lossless image compression algorithm that reduces data redundancy in video processing systems. To achieve a high compression level, a group encoding approach is proposed. The experimental research shows that the proposed algorithm provides a 1.33 times better compression quality index than the Run Length Encoding (RLE) algorithm.

12 May 2020
TL;DR: The Burrows-Wheeler Transform (BWT), a lossless compression technique, is used in this paper to transform the plain text; the transformation permutes the order of characters so that security is increased when the result is enciphered with the Elgamal public-key algorithm.
Abstract: In day-to-day life, secure communication is an important criterion over non-secure network channels. When transmitting plain text, it is necessary to compress the text before encrypting it, so that the transmission speed and data storage capacity can be increased and the redundancy of the plain text can be reduced. The encoding process gives the characters a different format so that fewer bits represent the original data, whereby the size of the original data is reduced. Compression techniques play a vital role in compressing the plain text. Though both lossy and lossless compression techniques are available, only lossless compression can recover the original text exactly from the compressed text; when compressing a large amount of text, the reconstructed text must be identical to the original. The Burrows-Wheeler Transform (BWT) technique of lossless compression is used in this paper to transform the plain text; the transformation permutes the order of characters. To reduce redundancy and increase the efficiency of the algorithm, a move-to-front transformation is applied to the BWT output. Further, the transformed code is compressed using run length encoding, and the result is then encrypted with the Elgamal public-key algorithm so that security is increased. The transmission speed and the security of the data can thus be increased. Due to this double protection of the plain text, attackers cannot easily break the code.
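
A minimal sketch of the BWT, move-to-front and RLE pipeline the abstract describes (the Elgamal encryption step is omitted); the sentinel character, the naive rotation-sorting BWT, and the example string are assumptions for illustration.

```python
def bwt(text, eos="\x03"):
    """Burrows-Wheeler transform via sorted rotations (naive, O(n^2 log n))."""
    s = text + eos
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)   # last column

def move_to_front(s):
    """Move-to-front: recently seen symbols get small indices."""
    alphabet = sorted(set(s))
    out = []
    for ch in s:
        i = alphabet.index(ch)
        out.append(i)
        alphabet.insert(0, alphabet.pop(i))
    return out

def rle(seq):
    """Collapse the index stream into (value, run_length) pairs."""
    runs = []
    for x in seq:
        if runs and runs[-1][0] == x:
            runs[-1][1] += 1
        else:
            runs.append([x, 1])
    return [tuple(r) for r in runs]

plain = "bananabandana"
compressed = rle(move_to_front(bwt(plain)))
print(compressed)
# The run/value pairs would then be encrypted with the Elgamal
# public-key step described in the abstract (omitted here).
```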


Book ChapterDOI
19 Oct 2020
TL;DR: In this paper, a modified video compression model is proposed that optimizes the vector quantization codebook using an adapted Quantum Genetic Algorithm (QGA), which uses the quantum features of superposition and entanglement to build an optimal codebook for vector quantization.
Abstract: This paper proposes a modified video compression model that optimizes the vector quantization codebook by using an adapted Quantum Genetic Algorithm (QGA), which uses the quantum features of superposition and entanglement to build an optimal codebook for vector quantization. A context-based initial codebook is created by using a background subtraction algorithm; then, the QGA is adapted to get the optimal codebook. This optimal feature vector is then utilized as an activation function inside the neural network’s hidden layer to remove redundancy. Furthermore, approximation wavelet coefficients are losslessly compressed with Differential Pulse Code Modulation (DPCM), whereas detail coefficients are lossily compressed using Learning Vector Quantization (LVQ) neural networks. Finally, Run Length Encoding is employed to encode the quantized coefficients to achieve a high compression ratio. As individuals in the QGA are actually the superposition of multiple individuals, it is less likely that good individuals will be lost. Experiments have proven the system’s ability to achieve a higher compression ratio with acceptable efficiency measured by PSNR.

Patent
07 May 2020
TL;DR: In this article, the authors present a data processing method for run length encoding of source data in which unit data is serially connected, performed by a computing device; it consists of dividing the source data into a plurality of blocks and adjusting the reading order of the source data such that unit data having the same relative offset in a block are sequentially arranged while sequentially moving through each of the divided blocks.
Abstract: Provided is a data compressing method that efficiently compresses with run length encoding by adjusting the reading order of source data. According to one embodiment of the present invention, the data processing method, which compresses source data in which unit data is serially connected by run length encoding on a computing device, comprises the steps of: dividing the source data into a plurality of blocks; adjusting the reading order of the source data such that unit data having the same relative offset in a block are sequentially arranged while sequentially moving through each of the divided blocks; and compressing the source data by run length encoding, sequentially reading the unit data in accordance with the adjusted order.
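
A small sketch of the reordering idea as described: the source data is split into blocks, unit data at the same relative offset is read across all blocks before moving to the next offset, and the reordered stream is then run-length encoded. The block size and the toy record layout are assumptions, not the patent's specification.

```python
def reorder(data, block_size):
    """Read unit data having the same relative offset across successive
    blocks before moving to the next offset (column-major over blocks)."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    out = []
    for offset in range(block_size):
        for block in blocks:
            if offset < len(block):
                out.append(block[offset])
    return out

def rle(seq):
    runs = []
    for x in seq:
        if runs and runs[-1][0] == x:
            runs[-1][1] += 1
        else:
            runs.append([x, 1])
    return [tuple(r) for r in runs]

# Records with a repeating per-block layout compress poorly in source
# order but well after reordering by relative offset.
data = ["A", 1, "A", 2, "A", 3, "A", 4]
print(len(rle(data)))                # 8 runs
print(len(rle(reorder(data, 2))))    # 5 runs: all "A" values form one run
```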

Journal ArticleDOI
TL;DR: A novel approach is proposed for hiding secret messages in multiple images (a cover image) using run length encoding and LSB techniques and communicating the message to the intended person over the communication channel by transmitting the individual images.
Abstract: The purpose of steganography is to communicate secret messages between the sender and intended recipient in such a way that no one suspects the very existence of the message. The techniques aim to protect the secret information from third parties by embedding it into other information such as text, audio signals, images, and video frames. In this paper we propose a novel approach for hiding secret messages in multiple images (a cover image) using run length encoding and LSB techniques and communicating the message to the intended person over the communication channel by transmitting the individual images. Experiments are performed on a set of color images and the performance of the proposed system is presented.
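
A minimal sketch of the two pieces the abstract names: the secret message is first run-length encoded and the resulting bytes are then hidden in the least significant bits of the cover pixels. Treating the cover as one flat list of intensities (rather than splitting it across several images) and the byte-oriented RLE format are assumptions for illustration.

```python
def rle_bytes(message):
    """Very small RLE: (count, byte) pairs over the message bytes."""
    runs = bytearray()
    i = 0
    while i < len(message):
        j = i
        while j < len(message) and message[j] == message[i] and j - i < 255:
            j += 1
        runs += bytes([j - i, message[i]])
        i = j
    return bytes(runs)

def embed_lsb(pixels, payload):
    """Hide payload bits in the least significant bit of each pixel value.
    pixels: flat list of 0-255 intensities from the cover image(s)."""
    bits = [(byte >> k) & 1 for byte in payload for k in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover too small for payload")
    stego = list(pixels)
    for idx, bit in enumerate(bits):
        stego[idx] = (stego[idx] & ~1) | bit   # overwrite the LSB only
    return stego

cover = [120, 121, 122, 123] * 16              # toy 64-pixel cover
stego = embed_lsb(cover, rle_bytes(b"aaaaab"))
```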

Journal ArticleDOI
19 Nov 2020
TL;DR: The characteristics of video coding, from key frame selection to the evolution of various standards, are discussed, along with the main video compression techniques.
Abstract: Video compression is the process of decreasing the number of bits required to represent a certain video. Video compression can be done by a specific algorithm that decides the most suitable approach to reduce the amount of data storage. The video file is coded in such a way that it consumes less space than the original file and is easier to transmit over the Internet. The basic idea of video compression is based on removing the redundant data that exists in the video. There are four types of redundancy in digital video: color redundancy, temporal redundancy, statistical redundancy, and spatial redundancy. Video compression algorithms must reduce these redundancies in such a way that the quality of the compressed video is preserved when the decompression process is done. Most video compression techniques consist of the following steps: Motion Estimation, Motion Compensation, Frame Difference, Discrete Cosine Transform, Run Length Encoding, and Huffman Coding. This paper discusses the characteristics of video coding, from key frame selection to the evolution of various standards.

01 Jan 2020
TL;DR: In this paper, a lossless and complete technique for improving run length encoding results is presented; it consists of two parts, a compression part and a decompression part.
Abstract: Image compression is used to reduce the number of pixels used in image representation without excessively changing the image's appearance. Reducing image size makes images easier to share, transmit and store. Data compression has become more important than ever, due to the increasing demand for internet use and the exchange of a huge amount of images, videos, audio and documents, as well as the growing demand for electronic archiving by government departments that produce thousands of documents per day. In this paper, a technique for improving run length encoding results is presented. The proposed technique is lossless and complete, and it consists of two parts: a compression part and a decompression part. The compression part contains basic stages such as pre-processing, run length encoding, replacing maximum values by unused values, minimizing levels, and delta encoding, while the decompression part is the reverse of the compression part. The technique is applied to twenty documents and compared with RLE. The experimental results showed that the proposed technique gives a higher compression ratio than RLE.
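
A minimal sketch of how the delta-encoding and run-length-encoding stages mentioned above interact on a toy scanned-document row; the other stages (pre-processing, replacing maximum values, minimizing levels) are omitted, and the sample values are invented for illustration.

```python
def delta_encode(values):
    """Store each value as the difference from its predecessor."""
    prev, out = 0, []
    for v in values:
        out.append(v - prev)
        prev = v
    return out

def rle(values):
    """Collapse the stream into (value, run_length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [tuple(r) for r in runs]

# A scanned-document row: flat background followed by a slow gradient.
row = [255] * 6 + [250, 245, 240, 240, 240]
print(rle(row))                 # the background collapses to one run
print(rle(delta_encode(row)))   # the gradient becomes a run of -5 deltas
```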