Topic

Run-length encoding

About: Run-length encoding is a research topic. Over its lifetime, 504 publications have been published on this topic, receiving 4,441 citations. The topic is also known as RLE.


Papers
01 Jan 2014
TL;DR: Proposes two Run Length based methods to compress data in which the same sequence appears repeatedly, such as an image with little change or a set of smooth fluid data.
Abstract: It is difficult to carry out visualization of large-scale time-varying data directly, even with supercomputers. Data compression and ROI (Region of Interest) detection are often used to improve the efficiency of visualizing numerical data. Run Length encoding is well known to be a good technique for compressing data in which the same sequence appears repeatedly, such as an image with little change or a set of smooth fluid data. Another advantage of Run Length encoding is that it can be applied to every dimension of the data separately, so it can easily be implemented as a parallel processing algorithm. We propose two different Run Length based methods. When using the Run Length method to compress a data set, its size may increase after compression if the data does not contain many repeated parts; we therefore apply compression only where the data can be compressed effectively. By checking the compression ratio, we can detect ROI. The effectiveness and efficiency of the proposed methods are demonstrated through comparison with several existing compression methods using different sets of fluid data.
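The following is a minimal Python sketch of the idea described in this abstract, not the authors' implementation: run-length encode a block of values and keep the encoded form only when it actually shrinks the data; blocks that resist compression are flagged as regions of interest. The 0.5 ratio threshold and the two-entries-per-run cost model are illustrative assumptions.

```python
from typing import List, Tuple

def rle_encode(values: List[float]) -> List[Tuple[float, int]]:
    """Encode a sequence as (value, run_length) pairs."""
    runs: List[Tuple[float, int]] = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)
        else:
            runs.append((v, 1))
    return runs

def compress_or_flag(values: List[float], threshold: float = 0.5):
    """Apply RLE only where it is effective; a block that does not
    compress well is treated as a likely region of interest (ROI)."""
    runs = rle_encode(values)
    ratio = 2 * len(runs) / len(values)  # each run stores a value and a count
    if ratio <= threshold:
        return ("compressed", runs)
    return ("roi", values)

# Smooth data compresses well; a varying block is flagged as ROI.
print(compress_or_flag([0.0] * 8))             # ('compressed', [(0.0, 8)])
print(compress_or_flag([1.0, 3.0, 2.0, 5.0]))  # ('roi', [1.0, 3.0, 2.0, 5.0])
```

Because the encoder works on a one-dimensional sequence, it can be applied to each dimension of a multidimensional array independently, which is what makes the per-dimension parallelization mentioned in the abstract straightforward.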
Proceedings ArticleDOI
17 Mar 2016
TL;DR: A new algorithm, FREQVCTDB (Frequent Vertical Compressed Transaction Database), is developed to save memory space by compressing the transaction database in a vertical data layout, guided by statistical analysis with prior knowledge of the dataset.
Abstract: In the modern digital era, enormous amounts of information are collected from many aspects of day-to-day life, which poses significant challenges for handling and storing the data. For the past three decades researchers have developed algorithms to meet the crucial challenges of frequent pattern mining, but room for improvement remains. Against this background, a new algorithm, FREQVCTDB (Frequent Vertical Compressed Transaction Database), is developed to save memory space by compressing the transaction database in a vertical data layout using statistical analysis with prior knowledge of the dataset. The proposed algorithm handles both dense and sparse datasets and proceeds in three phases. In phase 1, the horizontal data layout is converted into a vertical data layout; the frequency of each item is then counted and checked against the min-support threshold to determine whether it is frequent. In phase 2, the input data is analyzed using statistical properties of the transaction database: (a) density (maximum or minimum), (b) a distance function, and (c) an entropy function from information theory. Based on this analysis, Run Length Encoding is applied in phase 3 to compress the frequent patterns. Dense and sparse datasets taken from the website Fimi.cs.uk are analysed and discussed; the experimental results are presented graphically as validation.
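Below is a compact Python sketch of the three phases as they are outlined above; the transaction data, the min-support value, and all identifier names are invented for illustration, and phase 2's statistical analysis (density, distance, entropy) is reduced here to the simple support filter.

```python
from typing import Dict, List, Tuple

# Phase 1: convert a horizontal transaction database into a vertical
# layout (item -> bit vector over the transactions).
transactions = [{"a", "b"}, {"a", "c"}, {"a", "b"}, {"b"}]
vertical: Dict[str, List[int]] = {}
for item in sorted(set().union(*transactions)):
    vertical[item] = [1 if item in t else 0 for t in transactions]

# Phases 1-2 (simplified): keep only items meeting the min-support threshold.
min_support = 2
frequent = {i: bits for i, bits in vertical.items() if sum(bits) >= min_support}

# Phase 3: run-length encode each frequent item's bit vector.
def rle(bits: List[int]) -> List[Tuple[int, int]]:
    runs: List[Tuple[int, int]] = []
    count = 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((bits[-1], count))
    return runs

for item, bits in frequent.items():
    print(item, bits, "->", rle(bits))
# a [1, 1, 1, 0] -> [(1, 3), (0, 1)]
# b [1, 0, 1, 1] -> [(1, 1), (0, 1), (1, 2)]
```

RLE pays off here exactly when the vertical bit vectors are dense or clustered, which is why the density and entropy analysis of phase 2 matters before choosing to compress.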
Journal ArticleDOI
TL;DR: This paper analyzes lossless compression using the Run Length Encoding (RLE) algorithm, Arithmetic Encoding, Punctured Elias Code, and Goldbach Code to determine which algorithm is more efficient at data compression.
Abstract: In computer science, data compression or bit-rate reduction is a way to compress data so that it requires less storage space, making it more efficient to store and faster to exchange. Data compression is divided into two categories: lossless data compression and lossy data compression. Examples of lossless methods are Run Length, Huffman, Delta, and LZW, while an example of a lossy method is CS&Q (Coarser Sampling and/or Quantization). This paper analyzes the lossless approach using the Run Length Encoding (RLE) algorithm, Arithmetic Encoding, Punctured Elias Code, and Goldbach Code, and draws a comparison between the four algorithms to determine which is the most efficient at data compression. Keywords: Data Compression, Run Length Encoding, Arithmetic Encoding, Punctured Elias Code, Goldbach Code
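As a rough illustration of how such an efficiency comparison can be set up, here is a Python sketch measuring the compression ratio of a byte-oriented RLE coder on two inputs; the other three coders are omitted, and the exact metric the paper uses is an assumption.

```python
def rle_encode(data: bytes) -> bytes:
    """Byte-oriented RLE: each run becomes a (count, value) byte pair,
    with runs capped at 255 so the count fits in one byte."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def ratio(data: bytes) -> float:
    """Compressed size over original size; values above 1.0 mean expansion."""
    return len(rle_encode(data)) / len(data)

print(ratio(b"aaaaaaaabbbb"))  # ~0.33: repetitive input compresses well
print(ratio(b"abcdefgh"))      # 2.0: high-entropy input expands under RLE
```

The same harness extends naturally to other algorithms by swapping in their encoders, which mirrors the four-way comparison the paper describes.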
Patent
29 Sep 2011
TL;DR: The run length encoding method is composed of four steps: the second or third shortest code words are used for shorter or longer sequences of pixels of a predetermined color (transparent), while the shortest code words are used for single pixels having different colors.
Abstract: PROBLEM TO BE SOLVED: To provide a method for optimally encoding a subtitle layer or subpicture layers. SOLUTION: The size of subtitle bitmaps may exceed the video frame dimensions, so that only portions are displayed at a time. The bitmaps form a separate layer lying above the video for synchronized video subtitles and contain many transparent pixels. The run length encoding method is composed of four steps: the second or third shortest code words are used for shorter or longer sequences of pixels of a predetermined color (transparent); the shortest code words are used for single pixels having different colors; and the third or fourth shortest code words are used for shorter or longer sequences of an equal color value.
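The codeword assignment itself is defined in the patent; the following is only a schematic Python sketch of the four-way run classification described above, mapping pixel runs to codeword classes rather than to actual bit patterns, with the threshold separating "shorter" from "longer" runs chosen arbitrarily.

```python
TRANSPARENT = 0  # the predetermined color; assumed to be 0 here

def classify_runs(pixels, short_max=4):
    """Map each pixel run to one of the codeword classes described in
    the abstract (schematic tokens, not real variable-length codes)."""
    tokens = []
    i = 0
    while i < len(pixels):
        run = 1
        while i + run < len(pixels) and pixels[i + run] == pixels[i]:
            run += 1
        color = pixels[i]
        if run == 1:
            tokens.append(("single", color))          # shortest code words
        elif color == TRANSPARENT:
            kind = "transparent_short" if run <= short_max else "transparent_long"
            tokens.append((kind, run))                # 2nd/3rd shortest
        else:
            kind = "color_short" if run <= short_max else "color_long"
            tokens.append((kind, color, run))         # 3rd/4th shortest
        i += run
    return tokens

print(classify_runs([0, 0, 0, 5, 7, 7, 7, 7, 7, 0, 0, 0, 0, 0, 0]))
# [('transparent_short', 3), ('single', 5), ('color_long', 7, 5),
#  ('transparent_long', 6)]
```

Assigning the shortest codes to the most frequent events reflects the abstract's observation that subtitle bitmaps contain many transparent pixels.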
Journal Article
TL;DR: Discusses an algorithm that exploits the similarity of adjacent records in column-oriented databases to integrate the compression and execution processes, and proposes a new indexing method called Binary Search Tree (BST) indexing that supports O(log n) insertion, deletion, and look-up operations.
Abstract: Column-oriented databases have attracted a significant amount of attention recently, and database systems based on column-oriented technology are used extensively for analytical processing such as that found in data warehouses, decision support, business intelligence, and forecasting applications. Column-oriented databases have enormous potential for data compression because of the similarity of adjacent records. This paper discusses an algorithm that makes use of this similarity in column-oriented databases to integrate the compression and execution processes. It proposes a new indexing method called Binary Search Tree (BST) indexing that supports O(log n) insertion, deletion, and look-up operations. Additionally, the paper describes the implementation of these basic operations.
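As a minimal sketch of the lookup side of this idea (not the paper's BST implementation): a sorted column with many adjacent duplicates is run-length encoded into parallel arrays of run starts and values, and a row lookup becomes an O(log n) binary search over the run starts; the column contents here are invented for illustration.

```python
import bisect

# A sorted column with adjacent duplicates, typical of column stores,
# encoded as parallel arrays of run start positions and run values.
column = ["ca", "ca", "ca", "ny", "ny", "tx", "tx", "tx", "tx"]
starts, values = [], []
for row, v in enumerate(column):
    if not values or values[-1] != v:
        starts.append(row)
        values.append(v)

def lookup(row: int) -> str:
    """O(log n) in the number of runs: locate the run containing `row`."""
    idx = bisect.bisect_right(starts, row) - 1
    return values[idx]

print(starts, values)  # [0, 3, 5] ['ca', 'ny', 'tx']
print(lookup(4))       # 'ny'
```

A balanced BST over the same run boundaries gives the same logarithmic lookup while also supporting the O(log n) insertions and deletions the paper claims for its BST index.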
Network Information
Related Topics (5)

- Network packet: 159.7K papers, 2.2M citations, 76% related
- Feature extraction: 111.8K papers, 2.1M citations, 75% related
- Convolutional neural network: 74.7K papers, 2M citations, 74% related
- Image processing: 229.9K papers, 3.5M citations, 74% related
- Cluster analysis: 146.5K papers, 2.9M citations, 74% related
Performance
Metrics
No. of papers in the topic in previous years

Year  Papers
2021  23
2020  20
2019  20
2018  28
2017  27
2016  24