
Showing papers on "Run-length encoding published in 2019"


Journal ArticleDOI
Sulaiman Khan, Shah Nazir, Anwar Hussain, Amjad Ali, Ayaz Ullah
TL;DR: An approach combining the Haar wavelet transform, the discrete cosine transform, and run-length encoding for advanced manufacturing processes with high image compression rates is presented; it can easily be implemented in industry for the compression of images.
Abstract: Image compression plays a key role in the transmission of an image and storage capacity. Image compression aims to reduce the size of the image with no loss of significant information and no loss o...

13 citations
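
Since nearly every entry in this listing builds on the same primitive, the following is a minimal sketch of plain run-length encoding and decoding for reference. It is purely illustrative and is not the Haar wavelet/DCT pipeline of the paper above; the data and function names are invented.

```python
# Minimal run-length encoding/decoding sketch (illustrative only, not the
# authors' Haar/DCT pipeline). Encodes a sequence as (value, run_length) pairs.
from itertools import groupby
from typing import Iterable, List, Tuple

def rle_encode(data: Iterable[int]) -> List[Tuple[int, int]]:
    """Collapse consecutive repeats into (value, count) pairs."""
    return [(value, sum(1 for _ in run)) for value, run in groupby(data)]

def rle_decode(pairs: List[Tuple[int, int]]) -> List[int]:
    """Expand (value, count) pairs back into the original sequence."""
    out: List[int] = []
    for value, count in pairs:
        out.extend([value] * count)
    return out

if __name__ == "__main__":
    row = [0, 0, 0, 255, 255, 0, 0, 0, 0, 0]   # e.g. one row of a binary image
    encoded = rle_encode(row)
    print(encoded)                              # [(0, 3), (255, 2), (0, 5)]
    assert rle_decode(encoded) == row
```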


Journal ArticleDOI
TL;DR: The proposed algorithm, namely 3D-RLE, is designed to compress binary volumetric data by also exploiting the inter-slice correlation between the voxels, and is extended to several scanning forms, such as Hilbert and perimeter scans, to determine an optimal scanning procedure coherent with the morphology of the segmented organ in the data.
Abstract: Image compression has become an inevitable tool along with the advancing medical data acquisition and telemedicine systems. The run-length encoding (RLE), one of the most effective and practical lo...

10 citations


Journal ArticleDOI
01 Jun 2019
TL;DR: This paper presents a comprehensive analysis and comparison of common and well-known meta-heuristics for columnar run minimization, based on standard implementations and real datasets, and provides implementations of heuristic RLE compression approaches based on common optimization methods.
Abstract: Structured data are one of the most important segments in the realm of big data analysis and have undeniably prevailed over the years. In recent years, column-oriented design has become a frequent practice to organize structured data in analytical systems. The storage systems that organize data in a column-wise manner are often referred to as column stores. Column-oriented databases or warehouses, and spreadsheet applications in particular, have recently become a popular and convenient tool for column-wise data processing and analysis. At the same time, the volume of data is increasing at an extreme rate, which, despite the decrease in pricing of storage systems, stresses the importance of data compression. Apart from the resounding performance gain in large read-mostly data repositories, column-oriented data are easily compressible, which enables efficient query processing and pushes the peak of the overall performance. Many compression algorithms, including Run Length Encoding (RLE), exploit the similarity among the column values, where repetitions of the same value form columnar runs that can be found in most database systems. This paper presents a comprehensive analysis and comparison of common and well-known meta-heuristics for columnar run minimization, based on standard implementations and real datasets. We have analyzed genetic algorithms, simulated annealing, cuckoo search, particle swarm optimization, Tabu search, and the bat algorithm. The first three, being the most efficient, underwent sensitivity analysis on synthetic datasets to fine-tune their parameters. These meta-heuristics were then tested on real datasets. The experiments show that the algorithms perform consistently well on both synthetic and real data, demonstrating higher run-reduction efficiency compared to existing approaches. Moreover, the results show that the applied meta-heuristics exhibit quick convergence to nearly optimal solutions, accompanied by an insignificant overhead. In addition, we provide comprehensive implementations of the heuristic RLE compression approaches based on common optimization methods. They are effective at physical compression to an extent that makes them suitable for everyday use. The experiments on real datasets also indicate that our implementations are able to exceed the expected on-disk file compression ratio, in most cases being better than the respective reduction in runs.

8 citations
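
A small sketch of the objective these meta-heuristics optimize may help: for a fixed row ordering, the RLE benefit is governed by the total number of columnar runs. The snippet below counts runs per column and uses plain lexicographic row sorting as a naive baseline; it is not one of the meta-heuristics analyzed in the paper, and the toy table is invented.

```python
# Sketch of the columnar-run objective minimized above: count value runs per
# column for a given row ordering. Lexicographic row sorting is used here only
# as a simple baseline, not one of the paper's meta-heuristics.
from itertools import groupby
from typing import List, Sequence

def total_column_runs(table: Sequence[Sequence[str]]) -> int:
    """Total number of value runs summed over all columns."""
    runs = 0
    for col in zip(*table):                      # iterate column-wise
        runs += sum(1 for _ in groupby(col))     # one run per maximal repeat
    return runs

if __name__ == "__main__":
    rows: List[List[str]] = [
        ["DE", "red"], ["FR", "blue"], ["DE", "blue"], ["FR", "red"], ["DE", "red"],
    ]
    print("original row order:     ", total_column_runs(rows))          # 8 columnar runs
    print("lexicographically sorted:", total_column_runs(sorted(rows)))  # 6 columnar runs
```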


Proceedings ArticleDOI
01 Oct 2019
TL;DR: This research paper proposes an extension, or rather an upgrade, to the RLE method that ensures the size of an image never exceeds its original size, even in the worst possible scenario.
Abstract: Images are among the most common and popular representations of data. Digital images are used for professional and personal purposes ranging from official documents to social media. Thus, any organization or individual needs to store and share a large number of images. One of the most common issues associated with using images is the potentially large file size of the image. Advancements in image acquisition technology and an increase in the popularity of digital content mean that images now have very high resolutions and high quality, inevitably leading to an increase in size. Image compression has therefore become one of the most important parts of image processing. The goal is to achieve the smallest possible size for an image while not compromising on its quality; that gives us the perfect balance. To achieve this balance, many compression techniques have been devised, and it is not possible to pinpoint the best one because it really depends on the type of image to be compressed. Here we elaborate on converting images into binary images and on the Run Length Encoding (RLE) algorithm used for compressing binary images. RLE is itself a very effective and simple approach for image compression, but sometimes the size of an image actually increases after the RLE algorithm is applied, and this is one of the major drawbacks of RLE. In this research paper we propose an extension, or rather an upgrade, to the RLE method that ensures the size of an image never exceeds its original size, even in the worst possible scenario.

7 citations
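
The no-expansion guarantee can be illustrated with a common generic fallback: encode with RLE, compare sizes, and keep whichever representation is smaller behind a one-byte mode flag. This sketch is an assumption about how such a guarantee can be obtained, not the specific extension proposed by the authors.

```python
# Hedged sketch of one common way to keep RLE from expanding the data: encode,
# compare sizes, and keep the smaller representation behind a 1-byte mode flag.
# This is a generic fallback scheme, not necessarily the authors' extension.
RAW, RLE = 0, 1

def rle_bytes(data: bytes) -> bytes:
    """Byte-oriented RLE: (count, value) pairs, count capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        out += bytes([j - i, data[i]])
        i = j
    return bytes(out)

def compress_no_expand(data: bytes) -> bytes:
    """Never returns more than len(data) + 1 bytes."""
    encoded = rle_bytes(data)
    if len(encoded) < len(data):
        return bytes([RLE]) + encoded
    return bytes([RAW]) + data          # worst case: only 1 byte of overhead

if __name__ == "__main__":
    noisy = bytes(range(200))           # no runs: plain RLE would double the size
    flat = bytes([0]) * 200             # one long run: RLE shrinks it to 2 bytes
    print(len(compress_no_expand(noisy)), len(compress_no_expand(flat)))   # 201 3
```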


Journal ArticleDOI
TL;DR: The results show that this method can be efficiently used for compression of ECG signals from multiple leads and performs better than the techniques based on SVD and Huffman Encoding.
Abstract: ECG (Electrocardiogram) is a test that analyzes the electrical behaviour of the heart. ECG is used in diagnosing most of the cardiac diseases. A large amount of ECG data from multiple leads needs to be stored and transmitted, which requires compression for effective data storage and retrieval. The proposed work has been developed with Singular Value Decomposition (SVD) followed by Run Length Encoding (RLE) combined with Huffman Encoding (HE) and Arithmetic Encoding (AE) individually. The ECG signal is first preprocessed. SVD is used to factorize the signal into three smaller sets of values, which preserve the significant features of the ECG. Finally, Run Length Encoding combined with Huffman encoding (RLE-HE) and Arithmetic encoding (RLE-AE) individually are employed, and the compression performance metrics are compared. The proposed method is evaluated with the PTB Diagnostic database. Performance measures such as Compression Ratio (CR), Percentage Root mean square Difference (PRD) and Signal to Noise Ratio (SNR) of the reconstructed signal are used to evaluate the proposed technique. It is evident that the proposed method performs better than the techniques based on SVD and Huffman Encoding. The results show that this method can be efficiently used for compression of ECG signals from multiple leads.

5 citations
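
The following sketch only mirrors the general shape of the pipeline described above (SVD truncation, coarse quantization, then an RLE stage); the signal, rank, quantization step, and the omitted Huffman/arithmetic stages are all placeholders rather than the authors' settings.

```python
# Rough sketch of the pipeline shape described above (SVD truncation followed by
# run-length encoding of quantized coefficients), with invented data and
# parameters -- not the authors' implementation or settings.
import numpy as np
from itertools import groupby

# Stand-in for 12-lead ECG segments: a smooth, low-rank synthetic signal.
ecg = np.sin(np.linspace(0, 8 * np.pi, 512))[None, :] * np.arange(1, 13)[:, None]

U, s, Vt = np.linalg.svd(ecg, full_matrices=False)
k = 4                                                # keep the k largest singular values
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

quantized = np.round(approx * 4).astype(np.int16)    # coarse uniform quantization
flat = quantized.ravel().tolist()
rle = [(v, sum(1 for _ in run)) for v, run in groupby(flat)]   # RLE stage

print(f"{len(flat)} samples -> {len(rle)} (value, run) pairs")
```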


Journal ArticleDOI
S M Hardi, B Angga, Maya Silvi Lydia, I Jaya, J T Tarigan
01 Jun 2019
TL;DR: Run Length Encoding and Fibonacci Code are the lossless data compression algorithms used in this research, and their performance is measured by the comparison parameters of Compression Ratio, Redundancy, Space Saving, and Compression Time.
Abstract: Compression aims to reduce data redundancy as much as possible and to speed up the data transmission process. To address the size problem in data storage and transmission, we use the Run Length Encoding and Fibonacci Code algorithms for compression. Run Length Encoding and Fibonacci Code are the lossless data compression algorithms used in this research, and their performance is measured by the comparison parameters Compression Ratio (CR), Redundancy (RD), Space Saving (SS) and Compression Time. The compression process is only performed on image files in Bitmap format (*.bmp), which are encoded using Run Length Encoding or Fibonacci Code. The final result of the compression is a file with the extension *.rle or *.fib that contains the compressed information and can be decompressed back. The output of the decompression is the original image file, stored with the *.bmp extension. The Fibonacci algorithm gives a better compressed size on color images, while Run Length Encoding gives a better compressed size on grayscale images. Based on the results for the two different types of images, each algorithm has its own advantages: the Fibonacci Code algorithm is better for color image compression, while the Run Length Encoding algorithm is better for grayscale image compression.

4 citations
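
As an illustration of the two codes being compared, the sketch below run-length encodes a bit string and writes each run length with a Fibonacci (Zeckendorf) code, then reports the compression ratio and space saving. It is a generic textbook construction, not the authors' implementation, and the test data are invented.

```python
# Illustrative sketch of Fibonacci (Zeckendorf) coding of run lengths, plus the
# compression-ratio / space-saving metrics named above. Generic code, not the
# authors' implementation; the test data are invented.
from itertools import groupby

def fibonacci_code(n: int) -> str:
    """Fibonacci (Zeckendorf) code of a positive integer, always ending in '11'."""
    assert n >= 1
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    fibs.pop()                                # drop the first Fibonacci number above n
    bits = ["0"] * len(fibs)
    rest = n
    for i in range(len(fibs) - 1, -1, -1):    # greedy Zeckendorf decomposition
        if fibs[i] <= rest:
            bits[i] = "1"
            rest -= fibs[i]
    return "".join(bits) + "1"                # terminating '1' creates the '11' marker

def rle_fib(bits: str) -> str:
    """RLE over a bit string: emit the run's bit followed by its Fibonacci-coded length."""
    out = []
    for value, run in groupby(bits):
        out.append(value + fibonacci_code(sum(1 for _ in run)))
    return "".join(out)

if __name__ == "__main__":
    data = "0" * 40 + "1" * 3 + "0" * 20
    coded = rle_fib(data)
    cr = len(data) / len(coded)
    print(coded, f"CR = {cr:.2f}, space saving = {1 - 1/cr:.0%}")
```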


Journal ArticleDOI
TL;DR: The proposed system introduces a lossless image compression technique based on Run Length Encoding (RLE) that encodes the original magnetic resonance imaging (MRI) image into actual values and their numbers of occurrence, with further Lempel–Ziv–Welch compression applied to the values array only.
Abstract: Medical image compression is considered one of the most important research fields nowadays in biomedical applications. The majority of medical images must be compressed without loss because each pixel's information is of great value. With the widespread use of applications concerning medical imaging in the health-care context and the increased significance of telemedicine technologies, it has become crucial to minimize both the storage and bandwidth requirements needed for archiving and transmission of medical imaging data, preferably by employing lossless image compression algorithms. Furthermore, providing high resolution and preserving the quality of the processed image data has become of great benefit. The proposed system introduces a lossless image compression technique based on Run Length Encoding (RLE) that encodes the original magnetic resonance imaging (MRI) image into actual values and their numbers of occurrence. The actual image data values are separated from their runs and stored in a vector array. Lempel–Ziv–Welch (LZW) is used to provide further compression, applied to the values array only. Finally, Variable Length Coding (VLC) is applied to code the values and runs arrays adaptively with a precise number of bits into a binary file. These bit streams are decoded using inverse LZW on the values array and inverse RLE to reconstruct the input image. The obtained compression gain is enhanced by 25% after applying LZW to the values array.

2 citations
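
The value/run separation step can be sketched in a few lines: RLE a pixel stream and keep the values and the run lengths in two separate arrays, which is the point at which the paper applies LZW to the values array and VLC to both arrays (those later stages are not reproduced here). The sample row is invented.

```python
# Sketch of the value/run separation described above: RLE a pixel stream, then
# store the values and the run lengths in two separate arrays. The LZW and VLC
# stages the paper applies afterwards are not reproduced here.
from itertools import groupby
from typing import List, Sequence, Tuple

def split_rle(pixels: Sequence[int]) -> Tuple[List[int], List[int]]:
    """Return (values, runs): one entry per maximal run of equal pixels."""
    values, runs = [], []
    for value, run in groupby(pixels):
        values.append(value)
        runs.append(sum(1 for _ in run))
    return values, runs

def merge_rle(values: Sequence[int], runs: Sequence[int]) -> List[int]:
    """Inverse RLE: expand the two arrays back into the pixel stream."""
    out: List[int] = []
    for value, count in zip(values, runs):
        out.extend([value] * count)
    return out

if __name__ == "__main__":
    row = [12, 12, 12, 12, 40, 40, 0, 0, 0, 0, 0, 0]   # invented MRI-like pixel row
    values, runs = split_rle(row)
    print(values, runs)                                # [12, 40, 0] [4, 2, 6]
    assert merge_rle(values, runs) == row
```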


DOI
16 Jan 2019
TL;DR: Through an exploratory study, this paper examines image compression as discussed in extant literature, emphasises the different methods used in image compression, and recommends compression techniques to adopt depending on the industry's goals.
Abstract: The limited available storage and bandwidth required for successful transmission of large images make image compression a key component in digital image transmission. Digital image application in various industries, such as entertainment and advertising, has brought image processing to the fore of these industries. However, the entire image processing chain is faced with the problem of data redundancy, which is mitigated through image compression. This is simply the art and science of reducing the number of bits of an image before it is transmitted or stored, while the quality of the image is maintained. Thus, through an exploratory study, this paper examines image compression as discussed in extant literature and emphasises the different methods used in image compression. The paper reviewed relevant literature from the Elsevier, Emerald, IEEE, ProQuest and Google Scholar databases. Specific methods are lossy and lossless techniques, which are further divided into run length encoding and entropy encoding. In conclusion, the paper recommends compression techniques to adopt depending on the industry's goals. Preferably, lossy compression is used to compress multimedia data, which includes audio, video and images, while lossless compression is used to compress text and data files.

2 citations


Proceedings ArticleDOI
01 Apr 2019
TL;DR: Test results of the proposed encoding schemas using binarized volume datasets obtained by medical imaging techniques, including Computed Tomography and Magnetic Resonance Imaging, are discussed, along with the influence of the length of the encoding word on the compression rate.
Abstract: The paper deals with the problem of lossless data compression and focuses on the use of the Run-Length Encoding (RLE) principle for lossless compression of binary images, especially binarized volume datasets that can be considered as 3D binary images. Different alternative encoding schemas for bit-level run-length encoding are proposed in the first part of this work, and the capacities of their encoding words are discussed, with a focus on the encoding of long symbol runs. Test results of the proposed encoding schemas using binarized volume datasets obtained by medical imaging techniques, including Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), and the influence of the length of the encoding word on the compression rate of the proposed encoding schemas are discussed in the second part of the paper.

2 citations
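
A hedged sketch of fixed-word bit-level RLE illustrates the trade-off the paper studies: each run length is stored in a w-bit counter, and runs longer than the counter can hold are split. The codeword layout here is generic, not one of the schemas proposed by the authors, and the test row is invented.

```python
# Generic sketch of bit-level RLE with a fixed-width encoding word: each run
# length is written as a w-bit counter; overlong runs are split into maximal
# words separated by empty runs of the other symbol. Not the authors' schemas.
from itertools import groupby
from typing import List

def bit_rle(bits: str, w: int) -> List[int]:
    """Return the sequence of w-bit counters for alternating 0/1 runs."""
    max_run = 2 ** w - 1
    words: List[int] = []
    current = "0"                                 # decoder convention: runs alternate, starting with '0'
    for value, run in groupby(bits):
        if value != current:                      # stream starts with '1': emit an empty 0-run
            words.append(0)
            current = value
        length = sum(1 for _ in run)
        while length > max_run:                   # split runs the counter cannot hold
            words += [max_run, 0]                 # maximal run, then empty run of the other bit
            length -= max_run
        words.append(length)
        current = "1" if current == "0" else "0"  # next run is of the other bit
    return words

if __name__ == "__main__":
    volume_row = "0" * 120 + "1" * 9 + "0" * 71   # invented row of a binarized slice
    for w in (2, 4, 8, 16):
        words = bit_rle(volume_row, w)
        print(f"w={w:2d}: {len(words)} words, {len(words) * w} bits")
```

Running it shows the trade-off named in the abstract: a too-small word splits long runs into many counters, while a too-large word wastes bits on every counter.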


Patent
05 Dec 2019
TL;DR: In this paper, the authors present devices, methods, and computer readable medium for techniques to display rectangular content in non-rectangular display areas without clipping or cutting off the content.
Abstract: Embodiments of the present disclosure present devices, methods, and computer readable medium for techniques to display rectangular content in non-rectangular display areas without clipping or cutting off the content. These bounding path techniques can be employed for electronic devices with rounded corners and for display of content within software windows for applications, in which the windows have non-rectangular corners. The techniques disclosed include content shifting, aspect fit, run length encoding and corner encoding. These techniques can be applied to both static content and for dynamic content. Memory optimization techniques are disclosed to reduce the memory requirements for encoding display bitmaps and for optimal performance. The run length encoding feature can reduce the time and decrease the memory requirements for determining a location where the content can fit within a viewable area of the display. The corner encoding technique provides for encoding areas with non-linear curves.

1 citation


Book ChapterDOI
01 Jan 2019
TL;DR: The proposed FF-TLBO algorithm is evaluated by comparing its results with the existing FF algorithm on the same set of benchmark images in terms of Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), Structural Similarity index (SSIM), Compression Ratio (CR) and Compression Time (CT).
Abstract: In recent days, the importance of image compression techniques has increased exponentially due to the generation of massive amounts of data which need to be stored or transmitted. Numerous approaches have been presented for effective image compression on the principle of representing images in compact form through the avoidance of unnecessary pixels. Vector quantization (VQ) is an effective method in image compression, and the construction of the quantization table is an important task. The compression performance and the quality of the reconstructed data are based on the quantization table, which is actually a matrix of 64 integers. The quantization table selection is a complex combinatorial problem which can be resolved by evolutionary algorithms (EA). Presently, EAs have become popular for resolving real-world problems in a reasonable amount of time. This chapter introduces the Firefly (FF) with Teaching and Learning Based Optimization (TLBO) algorithm, termed the FF-TLBO algorithm, for the selection of the quantization table. As the FF algorithm faces a problem when brighter FFs are insignificant, the TLBO algorithm is integrated with it to resolve the problem. This algorithm determines the best fitness value for every block as the local best, and the best fitness value for the entire image is considered the global best. Once these values are found by the FF algorithm, compression takes place with efficient image compression algorithms such as Run Length Encoding and Huffman coding. The proposed FF-TLBO algorithm is evaluated by comparing its results with the existing FF algorithm on the same set of benchmark images in terms of Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), Structural Similarity index (SSIM), Compression Ratio (CR) and Compression Time (CT). The obtained results confirm the superior performance of the FF-TLBO algorithm over the FF algorithm and make it highly useful for real-time applications.

Journal ArticleDOI
30 Dec 2019
TL;DR: The paper proposes the rules for combined coding and combined encoders for bit planes of pixel differences of images with a tunable and constant structure, which have lower computational complexity and the same compression ratio as compared to an arithmetic encoder of bit planes.
Abstract: The aim of this work is to reduce the computational complexity of lossless compression in the spatial domain through the combined coding (arithmetic and Run-Length Encoding) of series of bits of the bit planes. Known effective compression encoders separately encode the bit planes of the image or transform coefficients, which leads to an increase in computational complexity due to multiple processing of each pixel. The paper proposes rules for combined coding and combined encoders for bit planes of pixel differences of images with a tunable and a constant structure, which have lower computational complexity and the same compression ratio as compared to an arithmetic encoder of bit planes.
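
The following sketch shows only two of the ingredients named above, bit planes of pixel differences and run counting per plane, to illustrate why such planes contain long runs; the combined arithmetic/RLE coding rules of the paper are not reproduced, and the pixel row is invented.

```python
# Illustration of why difference bit planes tend to contain long runs that RLE
# handles cheaply. The paper's combined arithmetic/RLE rules are not shown.
from itertools import groupby
from typing import List

def bit_plane(values: List[int], plane: int) -> str:
    """Extract one bit plane of a list of non-negative integers as a bit string."""
    return "".join(str((v >> plane) & 1) for v in values)

def run_count(bits: str) -> int:
    """Number of maximal runs in a bit string."""
    return sum(1 for _ in groupby(bits))

if __name__ == "__main__":
    row = [100, 101, 101, 103, 104, 104, 104, 107, 110, 110]    # invented pixel row
    diffs = [row[0]] + [b - a for a, b in zip(row, row[1:])]    # pixel differences
    mapped = [2 * d if d >= 0 else -2 * d - 1 for d in diffs]   # fold the sign into the LSB
    for plane in range(7, -1, -1):                              # upper planes are almost all zeros
        bits = bit_plane(mapped, plane)
        print(f"plane {plane}: {bits} ({run_count(bits)} runs)")
```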

Patent
08 Jan 2019
TL;DR: In this paper, the authors proposed a sensor data processing system for supporting data compression and network transmission, which can increase expandability by transmitting raw data measured from a sensor to an upper system without any loss of data through a data compression technology.
Abstract: The present invention relates to a sensor data processing system for supporting data compression and network transmission, and to a method thereof, which can increase expandability by transmitting raw data measured from a sensor to an upper system without any loss of data through a data compression technology. The sensor data processing system comprises: a DPCM processing unit which invalidates and deletes data except for data in a set area by identifying validity/invalidity of data when sensing data is inputted, and compresses the data by calculating a difference between a predicted model and an actual measurement value; a wavelet transformation processing unit and a quantization processing unit which conduct wavelet transformation in three phases by receiving the data confirmed by the DPCM processing unit, and then conduct quantization in order to compress the data; and a run length encoding (RLE) unit which finally determines effectiveness of the data through RLE compression in two phases by receiving the data confirmed in the wavelet transformation processing unit and the quantization processing unit, and then outputs the data.
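
A minimal sketch of the DPCM stage, assuming a simple previous-sample predictor (the patent's actual predictor and its validity filtering are not specified here), shows the kind of small, repetitive residual that the later wavelet, quantization, and RLE stages would then compress. The sensor readings are invented.

```python
# Minimal DPCM sketch, assuming a previous-sample predictor. Only the difference
# stage is shown; the wavelet, quantization, and RLE stages are not reproduced.
from typing import List

def dpcm_encode(samples: List[int]) -> List[int]:
    """Store the first sample, then each sample's difference from its predecessor."""
    return [samples[0]] + [cur - prev for prev, cur in zip(samples, samples[1:])]

def dpcm_decode(codes: List[int]) -> List[int]:
    """Rebuild the samples by accumulating the differences."""
    out = [codes[0]]
    for delta in codes[1:]:
        out.append(out[-1] + delta)
    return out

if __name__ == "__main__":
    sensor = [1000, 1001, 1001, 1003, 1002, 1002, 1002, 1005]   # invented sensor readings
    residual = dpcm_encode(sensor)
    print(residual)                 # [1000, 1, 0, 2, -1, 0, 0, 3] -- small and repetitive
    assert dpcm_decode(residual) == sensor
```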

Patent
04 Jul 2019
TL;DR: An apparatus and method for occlusion data compression are presented, in which vertex generation circuitry/logic is used to generate vertices of objects in a 3D space, including occlusion binary vectors for the vertices, the objects being captured by a set of M cameras.
Abstract: An apparatus and method for occlusion data compression. For example, one embodiment of a graphics processing apparatus comprises: vertex generation circuitry/logic to generate vertices of objects in a 3D space including occlusion binary vectors for the vertices, the objects captured by a set of M cameras; sorting circuitry/logic to sort the vertices in accordance with coordinates of the vertices; pre-compression circuitry/logic to transform the occlusion binary vectors of the vertices by logically combining adjacent bit fields in the sorted order to generate converted bit strings having a larger number of binary zero values than the occlusion binary vectors; compression circuitry/logic to compress the converted bit strings using run length encoding (RLE) compression to generate compressed bit fields; and a memory and/or storage device to store the compressed bit fields.
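
The pre-compression step can be sketched as follows, assuming XOR as the logical combiner of adjacent sorted occlusion vectors (the summary above does not commit to a specific operator): similar neighbours then yield mostly-zero bit strings, which RLE compresses well. The vectors are invented.

```python
# Sketch of the pre-compression idea described above, using XOR as one plausible
# way to logically combine adjacent sorted occlusion vectors so that the result
# has many more zero bits for RLE to exploit. Vectors and widths are invented.
from typing import List

def delta_xor(vectors: List[int]) -> List[int]:
    """Keep the first vector, XOR every later one with its predecessor."""
    out = [vectors[0]]
    for prev, cur in zip(vectors, vectors[1:]):
        out.append(prev ^ cur)
    return out

def zero_bits(vectors: List[int], width: int) -> int:
    """Count zero bits across the concatenated fixed-width bit strings."""
    return "".join(format(v, f"0{width}b") for v in vectors).count("0")

if __name__ == "__main__":
    width = 16
    occlusion = [0b1111000011110000, 0b1111000011110001, 0b1111000011110011]
    transformed = delta_xor(occlusion)
    print(zero_bits(occlusion, width), "->", zero_bits(transformed, width), "zero bits")
```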

Book ChapterDOI
01 Jan 2019
TL;DR: This paper proposes an image processing approach for compression of ECG signals based on 2D compression standards that surpasses some of the prevailing methods in the literature by attaining a higher compression ratio (CR) and moderate percentage-root-mean-square difference (PRD).
Abstract: This paper proposes an image processing approach for compression of ECG signals based on 2D compression standards. This explores both inter-beat and intra-beat redundancies that exist in the ECG signal, leading to a higher compression ratio (CR) compared to 1D signal compression standards, which explore only the inter-beat redundancies. The proposed method is twofold: in the first step, the ECG signal is preprocessed and QRS detection is used to detect the peaks. In the second step, baseline wander is removed and a 2D array of data is obtained through the cut-and-align beat approach. Further beat reordering is done to arrange the ECG array depending upon the similarities available in the adjacent beats. The ECG signal is then compressed by first applying the lossless compression scheme called 2D Run Length Encoding (RLE), and then a variant of the discrete wavelet transform (DWT) called set partitioning in hierarchical trees (SPIHT) is applied to further compress the ECG signal. The proposed method is evaluated on selected data from MIT's Beth Israel Hospital, and it was found that this method surpasses some of the prevailing methods in the literature by attaining a higher compression ratio (CR) and moderate percentage-root-mean-square difference (PRD).

Patent
19 Sep 2019
TL;DR: In this article, a system for processing spatial data may be designed to receive neural network outputs corresponding to a first spatial data set, and to translate those outputs to a second spatial data set based on the motion between the first and second spatial data sets.
Abstract: A system for processing spatial data may be designed to receive neural network outputs corresponding to a first spatial data set, and translate the neural network outputs corresponding to the first spatial data set based on the motion between a second spatial data set and the first spatial data set. The system may perform zero-gap run length encoding on the neural network outputs to store the neural network outputs in memory. The system may also perform on-the-fly skip zero decoding and bilinear interpolation to translate the neural network outputs.
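
A hedged sketch of a zero-gap run-length scheme of the kind named above: a sparse activation map is stored as (zeros-skipped, value) pairs, which also makes skip-zero decoding straightforward. The exact format in the patent may differ, and the feature map is invented.

```python
# Hedged sketch of a zero-gap run-length scheme: store (zeros_skipped, value)
# pairs for a sparse array. The patent's actual format may differ.
from typing import List, Tuple

def zero_gap_encode(values: List[float]) -> List[Tuple[int, float]]:
    pairs, gap = [], 0
    for v in values:
        if v == 0.0:
            gap += 1                 # just lengthen the current zero gap
        else:
            pairs.append((gap, v))   # gap of zeros, then the non-zero value
            gap = 0
    if gap:
        pairs.append((gap, 0.0))     # trailing zeros, flagged with a 0.0 value
    return pairs

def zero_gap_decode(pairs: List[Tuple[int, float]]) -> List[float]:
    out: List[float] = []
    for gap, v in pairs:
        out.extend([0.0] * gap)      # "skip zero" decoding: emit the gap, then the value
        if v != 0.0:
            out.append(v)
    return out

if __name__ == "__main__":
    feature_map = [0.0, 0.0, 0.0, 1.5, 0.0, 0.0, 0.0, 0.0, 2.25, 0.0, 0.0]
    pairs = zero_gap_encode(feature_map)
    print(pairs)                     # [(3, 1.5), (4, 2.25), (2, 0.0)]
    assert zero_gap_decode(pairs) == feature_map
```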

Book ChapterDOI
11 May 2019
TL;DR: VaFLE, a general-purpose lossless data compression algorithm, is proposed, in which the number of bits allocated for representing the length of a given run is a function of the length of the run itself and is independent of the maximum run length of the input data.
Abstract: The Run Length Encoding (RLE) algorithm substitutes long runs of identical symbols with the value of that symbol followed by the binary representation of the frequency of occurrences of that value. This lossless technique is effective for encoding images where many consecutive pixels have similar intensity values. One of the major problems of RLE for encoding runs of bits is that the encoded runs have their lengths represented as a fixed number of bits in order to simplify decoding. The number of bits assigned is equal to the number required to encode the maximum-length run, which results in the addition of padding bits on runs whose lengths do not require as many bits for representation as the maximum-length run. Due to this, the encoded output sometimes exceeds the size of the original input, especially for input data wherein the runs can have a wide range of sizes. In this paper, we propose VaFLE, a general-purpose lossless data compression algorithm, where the number of bits allocated for representing the length of a given run is a function of the length of the run itself. The total size of an encoded run is independent of the maximum run length of the input data. In order to exploit the inherent data parallelism of RLE, VaFLE was also implemented in a multithreaded OpenMP environment. Our algorithm achieves compression rates up to 3X better than standard RLE. The parallelized algorithm attains a speedup as high as 5X in grayscale and 4X in color images compared to the RLE approach.
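
The core idea, giving each run a codeword whose size depends on that run's own length rather than on the maximum run length, can be sketched with Elias gamma coding as a stand-in (VaFLE's actual codeword layout may differ). The comparison below counts only the run-length fields, which are identical in number for both schemes, and the bit string is invented.

```python
# Sketch of length-adaptive run-length fields versus fixed-width fields, using
# Elias gamma coding as a stand-in for VaFLE's codewords (which may differ).
from itertools import groupby

def elias_gamma(n: int) -> str:
    """Elias gamma code: unary length prefix followed by the binary value."""
    assert n >= 1
    binary = bin(n)[2:]
    return "0" * (len(binary) - 1) + binary

if __name__ == "__main__":
    bits = "01" * 50 + "0" * 4000                              # many tiny runs, one huge run
    runs = [sum(1 for _ in run) for _, run in groupby(bits)]   # 100 runs of 1, then one run of 4000
    max_width = max(runs).bit_length()                         # fixed-width RLE needs 12 bits per run
    fixed = len(runs) * max_width
    variable = sum(len(elias_gamma(r)) for r in runs)          # 1 bit per unit run, 23 bits for 4000
    print(f"fixed-width RLE: {fixed} bits, length-adaptive: {variable} bits")
```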

Book ChapterDOI
04 Jan 2019
TL;DR: This study found that it was possible to intuitively present the entire sequence to a researcher by converting the whole-genome data into a 3-dimensional plot; it also obtained improved search results and could examine sequences from various angles using layered information.
Abstract: With the development of next generation sequencing (NGS) technology, genomic research now requires analysis at the whole-genome level. Because of easy access to very large amounts of data, it is desirable to look at all the data rather than examine individual bases. At this point, data visualization at the whole-genome level can be very useful. However, most visualization tools simply visualize the resulting files derived from external analysis systems. In this study, it was possible to intuitively present the entire sequence to a researcher by converting the data for the entire genome into a 3-dimensional plot. In addition, by compressing the information in 3D space with run length encoding and storing it in a skip list, it is possible to perform fast comparison and search of sequences with low complexity by layering base information. As a result, compared to alignment-based sequence comparisons, we obtained improved search results and could examine sequences from various angles using layered information.

Posted Content
TL;DR: In this article, the authors present a branch-free implementation of Golomb codes for encoding and decoding, and demonstrate that the resulting representation length is very close to the optimal Huffman code, to the extent that the expected difference is practically negligible.
Abstract: Text compression schemes and compact data structures usually combine sophisticated probability models with basic coding methods whose average codeword length closely matches the entropy of known distributions. In the frequent case where basic coding represents run-lengths of outcomes that have probability p, i.e. the geometric distribution Pr(i) = p^i (1-p), a Golomb code is an optimal instantaneous code, which has the additional advantage that codewords can be computed using only an integer parameter calculated from p, without need for a large or sophisticated data structure. Golomb coding does not, however, gracefully handle the case where run-lengths are bounded by a known integer n. In this case, codewords allocated for the case i > n are wasted. While negligible for large n, this makes Golomb coding unattractive in situations where n is recurrently small, e.g., when representing many short lists of integers drawn from limited ranges, or when the range of n is narrowed down by a recursive algorithm. We address the problem of choosing a code for this case, considering efficiency from both information-theoretic and computational perspectives, and arrive at a simple code that allows computing a codeword using only O(1) simple computer operations and O(1) machine words. We demonstrate experimentally that the resulting representation length is very close (equal in a majority of tested cases) to the optimal Huffman code, to the extent that the expected difference is practically negligible. We describe an efficient branch-free implementation of encoding and decoding.
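
For context, here is a small sketch of the Golomb coding baseline the abstract starts from: quotient in unary, remainder in truncated binary. The bounded-run code the authors actually propose is not reproduced, and the parameter m is passed in directly rather than derived from p.

```python
# Sketch of the Golomb coding baseline discussed above (not the authors' new
# bounded-run code): unary quotient plus truncated-binary remainder.

def golomb_encode(n: int, m: int) -> str:
    """Golomb code of n >= 0 with parameter m >= 2."""
    q, r = divmod(n, m)
    code = "1" * q + "0"                          # quotient in unary, terminated by '0'
    b = (m - 1).bit_length()                      # ceil(log2(m))
    cutoff = (1 << b) - m                         # number of short (b-1 bit) remainders
    if r < cutoff:
        code += format(r, "b").zfill(b - 1)       # short codeword
    else:
        code += format(r + cutoff, "b").zfill(b)  # long codeword
    return code

if __name__ == "__main__":
    m = 3
    for run_length in range(8):
        print(run_length, golomb_encode(run_length, m))
```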