
Showing papers on "Run-length encoding published in 2008"


01 Jan 2008
TL;DR: This paper describes the development of a novel FPGA algorithm performing high-speed real-time blob analysis in a single pass, which reduces latency to a minimum while also allowing the reuse of object labels, thus achieving a memory-efficient implementation.
Abstract: In machine vision and image processing applications, blob analysis has become a well-known method for detecting objects in a digital image. With the increasing resolutions and frame rates of recent digital video cameras, the hardware requirements for performing blob analysis are very high. Software implementations may not achieve satisfactory performance, and existing hardware solutions require processing the picture in multiple passes. This paper describes the development of a novel FPGA algorithm that performs high-speed real-time blob analysis in one single pass. The input data are compressed using run-length encoding, which allows the use of a full-configuration Camera Link interface with data rates of up to 680 MByte/s. Furthermore, object properties are passed to the host PC on the fly, which reduces latency to a minimum while also allowing the reuse of object labels, thus achieving a memory-efficient implementation.
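To make the single-pass idea concrete, here is a rough software sketch (an illustration only, not the paper's FPGA design): each row is run-length encoded, runs touching the previous row inherit or merge labels through a small equivalence table, and per-blob properties (area and bounding box, the kind of data the FPGA streams to the host) are accumulated as rows arrive.

```python
def encode_runs(row):
    """Run-length encode one binary row as (start, end) pixel spans."""
    runs, start = [], None
    for x, v in enumerate(row):
        if v and start is None:
            start = x
        elif not v and start is not None:
            runs.append((start, x - 1))
            start = None
    if start is not None:
        runs.append((start, len(row) - 1))
    return runs

def blob_analysis(image):
    """Single pass over rows: label runs, merge labels that touch the row
    above via a small equivalence table, and accumulate area + bounding
    box on the fly."""
    parent = {}                     # label -> parent label (union-find)
    props = {}                      # root label -> [area, x0, y0, x1, y1]

    def find(l):
        while parent[l] != l:
            parent[l] = parent[parent[l]]   # path compression
            l = parent[l]
        return l

    next_label, prev = 0, []        # prev: (start, end, label) runs of row above
    for y, row in enumerate(image):
        cur = []
        for s, e in encode_runs(row):
            label = None
            for ps, pe, pl in prev:
                if ps <= e + 1 and pe >= s - 1:     # 8-connected overlap
                    r = find(pl)
                    if label is None:
                        label = r
                    elif r != label:                # two blobs meet: merge
                        parent[r] = label
                        a, b = props.pop(r), props[label]
                        props[label] = [a[0] + b[0], min(a[1], b[1]),
                                        min(a[2], b[2]), max(a[3], b[3]),
                                        max(a[4], b[4])]
            if label is None:                       # run starts a new blob
                label = next_label
                next_label += 1
                parent[label] = label
                props[label] = [0, s, y, e, y]
            p = props[label]
            props[label] = [p[0] + e - s + 1, min(p[1], s), min(p[2], y),
                            max(p[3], e), max(p[4], y)]
            cur.append((s, e, label))
        prev = cur
    return props                                    # keyed by root labels

print(blob_analysis([[1, 0, 1], [1, 1, 1]]))
# {0: [5, 0, 0, 2, 1]}: one blob after the two arms merge
```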

21 citations


Proceedings ArticleDOI
18 Jun 2008
TL;DR: A hardware algorithm implemented in dedicated logic that performs high-performance run-length encoding of binary images using a parallel input is described.
Abstract: Run-length encoding can be found in numerous applications such as data transfer and image storage (Sayood, 2002). It is a well-known, simple and efficient compression method based on the assumption of long data sequences without changes in content; these sequences can be described by their position and length of appearance. Implementations using dedicated logic are optimised for parallel data processing: images are transferred in blocks of multiple pixels in parallel, so compressing these streams into a run-length code requires an encoder with a parallel input. Such a run-length encoder has to compress each sequence in a minimum of clock cycles to avoid long inhibit intervals at the input. This paper describes a hardware algorithm performing high-performance run-length encoding of binary images using a parallel input.
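As an illustration of the parallel-input requirement, the following behavioural sketch (plain Python, not the paper's dedicated-logic design) consumes a fixed number of pixels per simulated clock cycle and merges runs that cross word boundaries; the word width W is an assumption.

```python
W = 8  # pixels consumed per simulated cycle (assumed word width)

def rle_parallel(words):
    """words: iterable of length-W lists of 0/1 pixels, one list per cycle.
    Runs of 1-pixels are emitted as (start_position, length) pairs."""
    runs = []
    run_start = None
    pos = 0
    for word in words:              # one iteration ~ one clock cycle
        for bit in word:            # in hardware this scan is combinational
            if bit and run_start is None:
                run_start = pos     # a run of 1s opens
            elif not bit and run_start is not None:
                runs.append((run_start, pos - run_start))
                run_start = None    # the run closes
            pos += 1
    if run_start is not None:       # flush a run still open at line end
        runs.append((run_start, pos - run_start))
    return runs

# Two 8-pixel words: the run crossing the word boundary is merged.
print(rle_parallel([[0,1,1,1,1,1,1,1], [1,1,0,0,1,1,1,0]]))
# [(1, 9), (12, 3)]
```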

17 citations


Proceedings ArticleDOI
12 Jul 2008
TL;DR: It is shown that when the file to be compressed is composed of heterogeneous data fragments, GP-zip is capable of achieving compression ratios that are superior to those obtained with well-known compression algorithms.
Abstract: In recent research we proposed GP-zip, a system which uses evolution to find optimal ways to combine standard compression algorithms for the purpose of maximally losslessly compressing files and archives. The system divides files into blocks of predefined length. It then uses a linear, fixed-length representation where each primitive indicates what compression algorithm to use for a specific data block. GP-zip worked well with heterogeneous data sets, providing significant improvements in compression ratio compared to some of the best standard compression algorithms. In this paper we propose a substantial improvement, called GP-zip*, which uses a new representation and intelligent crossover and mutation operators such that blocks of different sizes can be evolved. Like GP-zip, GP-zip* determines the best compression technique to use for each block. The compression algorithms available in the primitive set of GP-zip* are: Arithmetic coding (AC), Lempel-Ziv-Welch (LZW), Unbounded Prediction by Partial Matching (PPMD), Run Length Encoding (RLE), and Boolean Minimization. In addition, two transformation techniques are available: the Burrows-Wheeler Transformation (BWT) and Move to Front (MTF). Results show that GP-zip* provides improvements in compression ratio ranging from a fraction to several tens of percent over its predecessor.
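The core idea, choosing the best compressor per block, can be sketched as follows. Python's zlib, bz2, and lzma stand in for GP-zip*'s actual primitive set (AC, LZW, PPMD, RLE, Boolean Minimization), and the evolutionary search over block boundaries is omitted; fixed-size blocks and a greedy choice are assumptions of this toy version.

```python
import zlib, bz2, lzma

CODECS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

def compress_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed blocks and pick the smallest codec output
    for each block (a real archive would also store per-block headers)."""
    plan = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        name, out = min(((n, c(block)) for n, c in CODECS.items()),
                        key=lambda t: len(t[1]))
        plan.append((name, out))
    return plan

data = b"A" * 5000 + bytes(range(256)) * 20   # heterogeneous: runs + varied bytes
for name, out in compress_blocks(data):
    print(name, len(out))
```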

15 citations


Proceedings ArticleDOI
12 Dec 2008
TL;DR: A geometry encoder is proposed for time-varying meshes that finds spatial and temporal redundancy by coarse and fine level quantization and encodes binary sequences using run-length encoding (RLE).
Abstract: Time-varying meshes (TVMs) are a new 3-D scene representation generated from multiple cameras. They capture highly detailed shape and texture as well as the movement of real-world moving objects. Compression is a key technology for supporting TVM applications such as education, interactive broadcasting, and intangible heritage archiving. Previous work focused on the compression of 3-D animation that keeps the same topology throughout the entire sequence. Unfortunately, the topology of TVMs changes with time, which makes them difficult to compress. In this paper, we propose a geometry encoder for TVMs. The encoder finds spatial and temporal redundancy through coarse- and fine-level quantization. Thereafter, vertex information is converted into binary sequences, and the binary sequences are encoded using run-length encoding (RLE). Experimental results show that TVM vertices requiring 96 bits per vertex (bpv) are compressed to 1.9-15.4 bpv while maintaining a small geometric distortion, ranging from 0.7 × 10^-4 to 1.3 × 10^-3 % of the maximum error.
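The final stage of such a pipeline, run-length encoding a binary sequence derived from quantized vertices, might look roughly like the sketch below. The grid size, cell size, and bit layout are illustrative assumptions, not the paper's exact coarse/fine quantizer or format.

```python
def quantize(vertices, cell=0.25):
    """Coarse quantization: map each (x, y, z) vertex to an integer cell."""
    return {tuple(int(c // cell) for c in v) for v in vertices}

def occupancy_bits(cells, grid=8):
    """Flatten a grid x grid x grid volume into one binary sequence
    (1 = cell occupied by at least one vertex)."""
    return [1 if (x, y, z) in cells else 0
            for x in range(grid) for y in range(grid) for z in range(grid)]

def rle(bits):
    """Encode the bit sequence as (bit, run_length) pairs."""
    out = []
    for b in bits:
        if out and out[-1][0] == b:
            out[-1][1] += 1
        else:
            out.append([b, 1])
    return out

verts = [(0.1, 0.1, 0.1), (0.2, 0.15, 0.1), (1.9, 1.9, 1.9)]
print(rle(occupancy_bits(quantize(verts))))
# [[1, 1], [0, 510], [1, 1]]: the long zero-run compresses well
```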

14 citations


Journal ArticleDOI
TL;DR: A one-time signature scheme using run-length encoding is presented which, in the random oracle model, offers security against chosen-message attacks and enables about 33% faster verification at a comparable signature size compared with a construction of Merkle and Winternitz.

12 citations


Proceedings ArticleDOI
26 Aug 2008
TL;DR: Experiments show that the proposed fingerprint identification technique, using a gray Hopfield neural network improved by run-length encoding, performs well across a number of different fingerprint image samples in terms of the quality of converged images and encoding and decoding performance.
Abstract: This paper presents a new technique for fingerprint identification using a gray Hopfield neural network (GHNN) improved by run-length encoding (RLE). A Gabor filter is used for image enhancement at the enrollment stage, and a vector-field algorithm detects the core as a reference point; finding this point makes it possible to cover most of the information around the core. The GHNN deals with gray-level images by learning on the bitplanes that represent the layers of the fingerprint image. For a large number of images, the GHNN's memory needs very large storage space to cover all learned fingerprint images. RLE is a simple and effective solution for saving network memory capacity: the stored weights are encoded so that the weight data shrink wherever values repeat. Experiments carried out on fingerprint images show that the proposed technique performs well on a number of different fingerprint image samples in terms of the quality of converged images and encoding and decoding performance.
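Two of the ingredients, bitplane decomposition (the layers the GHNN learns on) and run-length encoding of the resulting binary data, can be sketched as follows; the network itself is not reproduced, and the tiny example image is of course illustrative.

```python
def bitplanes(image):
    """Split an 8-bit grayscale image (list of rows) into 8 binary planes."""
    return [[[(p >> k) & 1 for p in row] for row in image] for k in range(8)]

def rle_plane(plane):
    """Flatten one binary plane and encode it as (bit, run_length) pairs."""
    flat = [b for row in plane for b in row]
    runs, prev, n = [], flat[0], 0
    for b in flat:
        if b == prev:
            n += 1
        else:
            runs.append((prev, n))
            prev, n = b, 1
    runs.append((prev, n))
    return runs

img = [[200, 200, 200, 10], [200, 200, 12, 10]]
for k, plane in enumerate(bitplanes(img)):
    print(f"plane {k}: {rle_plane(plane)}")   # high planes have long runs
```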

9 citations


Journal Article
TL;DR: To improve the robustness of an image-based real-time feedback control system, a connected component labeling algorithm was applied to droplet contour extraction in a space droplet evaporation experiment; the optimized algorithm proves superior to conventional algorithms in terms of processing speed and memory occupation.
Abstract: To improve the robustness of an image-based real-time feedback control system, a connected component labeling algorithm was applied to droplet contour extraction in a space droplet evaporation experiment. The algorithm was optimized in two respects to achieve faster processing and smaller memory occupation in real-time image processing. First, run-length encoding was introduced into DSP real-time image processing to reduce memory occupation and the number of objects. Second, Suzuki's labeling algorithm was improved by solving the problem that it can lose some label equivalences during assignment operations in a one-scan process: by changing the assignment of the label connection table, all label equivalences can be memorized in one scan. Experimental results demonstrate that the optimized algorithm is superior to conventional algorithms in terms of processing speed and memory occupation.
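The equivalence-handling fix described above amounts to resolving labels through a connection table rather than by one-shot assignment. A minimal union-find version of such a table (not the authors' exact code) looks like this:

```python
table = {}                       # label -> representative label

def new_label(l):
    table[l] = l

def find(l):
    """Follow the connection table to the representative, compressing paths."""
    while table[l] != l:
        table[l] = table[table[l]]
        l = table[l]
    return l

def merge(a, b):
    """Record that labels a and b name the same component. Because we link
    representatives, no earlier-recorded equivalence is ever lost, even when
    several merges hit the same labels within a single scan."""
    ra, rb = find(a), find(b)
    if ra != rb:
        table[max(ra, rb)] = min(ra, rb)

for l in (1, 2, 3):
    new_label(l)
merge(1, 2); merge(2, 3)              # chained equivalences within one pass
print([find(l) for l in (1, 2, 3)])   # [1, 1, 1]
```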

4 citations


Proceedings ArticleDOI
A. Hamadou1, Kehua Yang1
12 Dec 2008
TL;DR: An efficient bitmap indexing technique that improves the performance of the word-aligned hybrid (WAH) for high-cardinality attributes in data warehousing applications by reorganizing the base data by reordering (sorting) and clustering the attribute to be indexed.
Abstract: This paper presents an efficient bitmap indexing technique that improves the performance of the word-aligned hybrid (WAH) for high-cardinality attributes in data warehousing applications. WAH compression is based on run-length encoding, and its performance depends on the presence of long runs of identical bits. To ensure the existence of such long runs and thus achieve a higher compression ratio, our strategy is, before building the bitmap index, to reorganize the base data by reordering (sorting) and clustering the attribute to be indexed. The experiments showed that our strategy significantly reduces the size of WAH bitmap indices. Moreover, the response time measured for both equality and range queries was shown to decrease substantially.
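The effect of reordering on a run-length based index can be demonstrated directly: sorting the indexed column turns scattered bits into a few long runs, which is exactly what word-aligned run-length compression rewards. In this sketch, counting runs stands in for counting actual WAH fill/literal words.

```python
import random

def bitmaps(column):
    """One equality bitmap per distinct value (basic bitmap index)."""
    return {v: [1 if x == v else 0 for x in column] for v in set(column)}

def runs(bits):
    """Number of runs; fewer runs means fewer WAH fill/literal words."""
    return 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)

col = [random.randrange(4) for _ in range(32)]
unsorted_runs = sum(runs(b) for b in bitmaps(col).values())
sorted_runs = sum(runs(b) for b in bitmaps(sorted(col)).values())
print(unsorted_runs, "->", sorted_runs)   # sorted: at most 3 runs per bitmap
```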

3 citations


Journal Article
TL;DR: Experiments indicate that the improved technique, which uses quadtree decomposition for image compression, can greatly improve compression efficiency and presents a new approach to static image compression.
Abstract: With the rapid development of the information society, images have become one of the most important carriers of information. Because of the massive amount of information in an unprocessed image, it is extremely important to research and develop image coding technology. This work explores the compression of images containing large areas of continuous shading in the same color. Owing to the deficiencies of the RLE algorithm, we apply quadtree decomposition to image compression instead. Experiments indicate that the improved quadtree-decomposition technique can greatly improve compression efficiency, and it presents a new approach to static image compression.
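A minimal quadtree compressor of the kind described might look as follows: a square region that is uniformly one value becomes a single leaf, and anything else splits into four quadrants recursively, so large same-color areas collapse to a handful of nodes, which is where the gain over plain RLE comes from. This is a generic sketch, not the paper's implementation.

```python
def quadtree(img, x=0, y=0, size=None):
    """Recursively decompose a square image (list of equal-length rows)."""
    size = size or len(img)
    region = [img[y + j][x + i] for j in range(size) for i in range(size)]
    if all(v == region[0] for v in region):
        return ("leaf", region[0])             # uniform block: one node
    h = size // 2
    return ("node",                            # NW, NE, SW, SE subtrees
            quadtree(img, x, y, h),     quadtree(img, x + h, y, h),
            quadtree(img, x, y + h, h), quadtree(img, x + h, y + h, h))

img = [[0] * 8 for _ in range(8)]
for j in range(4, 8):
    for i in range(4, 8):
        img[j][i] = 1                          # one solid quadrant of 1s
print(quadtree(img))
# ('node', ('leaf', 0), ('leaf', 0), ('leaf', 0), ('leaf', 1))
```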

3 citations


Proceedings ArticleDOI
12 Dec 2008
TL;DR: The implementation of the “intersection” operation based on this data structure is introduced, and all kinds of algebraic operations on the run-length attributes are realized in the process of implementation.
Abstract: Considering the deficiency of directly encoded raster data in computational efficiency and storage capacity, this paper proposes a new run-length encoding data structure that is suitable for algebraic operations and can optimize the algebraic operations based on directly encoded raster data. The implementation of the “intersection” operation based on this data structure is introduced; in addition, all kinds of algebraic operations on the run-length attributes are realized in the process of implementation. The algorithm first uses a linked list to store the run-length set of every raster row; then, for every run-length unit to be operated on, it executes the intersection operation against the run-length set of the corresponding raster row, inserting or deleting run-length units while carrying out the algebraic operations on the run-length attributes. This algorithm is fit for most raster operations and has an advantage in data precision and computational efficiency compared with directly encoded raster data.
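The row-wise intersection can be sketched as a single merge-style walk over two sorted run lists (plain Python lists stand in for the paper's linked lists); the attribute pairing below is a placeholder for wherever the paper's algebraic operations plug in.

```python
def intersect_row(a, b):
    """a, b: sorted, non-overlapping (start, end, attr) runs of one row.
    Returns the overlapping spans with both attributes attached."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        s = max(a[i][0], b[j][0])
        e = min(a[i][1], b[j][1])
        if s <= e:                                # runs overlap
            out.append((s, e, (a[i][2], b[j][2])))
        if a[i][1] < b[j][1]:                     # advance the run ending first
            i += 1
        else:
            j += 1
    return out

row_a = [(0, 9, "A"), (15, 20, "A")]
row_b = [(5, 17, "B")]
print(intersect_row(row_a, row_b))
# [(5, 9, ('A', 'B')), (15, 17, ('A', 'B'))]
```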

3 citations


Journal Article
Liu Jie1
TL;DR: This paper improves a rasterization algorithm based on scan-line theory comprising two critical steps, introduces real-valued recording of run-length boundaries to reduce the loss of positional accuracy during rasterization, and presents common operations of the run-length encoding system.
Abstract: The run-length encoding (RLE) system is a common technique for compressing and representing raster data, and it is sometimes used for rasterizing area features in GIS. Focusing on the rasterization of area objects with an RLE data structure, this paper improves an algorithm based on scan-line theory that includes two critical steps. First, compute all intersection points between the scan line and the area boundary curve, and save these breakpoints in an ordered linked list. Second, according to the parity of the breakpoint sequence numbers along the scan line, build up the ordered run-length list. To reduce the loss of positional accuracy during rasterization, a method of recording run-length boundaries as real values is introduced. In addition, the paper presents common operations of the run-length encoding system, and its advantages and applications in GIS area operations are illustrated with examples.
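The two steps translate almost directly into code. This compact sketch keeps the crossings as real values, in the spirit of the real-valued run boundaries the paper introduces; it assumes a simple polygon given as a vertex list.

```python
def scanline_runs(polygon, y):
    """polygon: list of (x, y) vertices; returns the interior runs of the
    scan line at height y, as (x_start, x_end) pairs of real values."""
    xs = []
    n = len(polygon)
    for k in range(n):
        (x1, y1), (x2, y2) = polygon[k], polygon[(k + 1) % n]
        if (y1 <= y < y2) or (y2 <= y < y1):        # edge crosses the line
            xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
    xs.sort()                                       # ordered breakpoint list
    return list(zip(xs[0::2], xs[1::2]))            # pair crossings by parity

square = [(1.0, 1.0), (6.0, 1.0), (6.0, 4.0), (1.0, 4.0)]
print(scanline_runs(square, 2.0))   # [(1.0, 6.0)]: one run on this line
```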

Patent
31 Jan 2008
TL;DR: An image encoding device is proposed that does not require a high-speed clock even when realized as hardware with a high processing speed.
Abstract: PROBLEM TO BE SOLVED: To provide an image encoding device which does not require a high-speed clock even when realized as hardware with a high processing speed. SOLUTION: The image encoding device includes: a reference-pixel acquiring means 306 for acquiring reference pixels in the circumference of the pixel to be encoded; a predicted-state storage means 305 for storing, by context, the predicted states of the reference pixels and superior symbols; a superior-symbol acquiring means 307 for acquiring the superior symbols; a predictive conversion means 308 for generating right/wrong data from the superior symbols; a predicted-state transition means 311 for making predicted-state transitions based on a predictive transition table 312; a run-length replacing means 309 for performing run-length replacement when the run length of the right/wrong data is within a predetermined range of succession counts; and a run-length encoding means 310 for encoding the replaced run length.

Journal Article
TL;DR: This research examines the effect of a simple lossless compression method, RLE (Run Length Encoding), on another lossless compression algorithm, the Huffman algorithm, which generates optimal prefix codes from a set of probabilities.
Abstract: Most digital data are not stored in the most compact form. Rather, they are stored in whatever way makes them easiest to use, such as: ASCII text from word processors, binary code that can be executed on a computer, individual samples from a data acquisition system, etc. Typically, these easy-to-use encoding methods require data files about twice as large as actually needed to represent the information. Data compression is the general term for the various algorithms and programs developed to address this problem. A compression program is used to convert data from an easy-to-use format to one optimized for compactness; likewise, a decompression program returns the information to its original form. This research aims to show the effect of a simple lossless compression method, RLE (Run Length Encoding), on another lossless compression algorithm, the Huffman algorithm, which generates optimal prefix codes from a set of probabilities, whereas RLE simply replaces repeated bytes with a short description of which byte to repeat and how many times.
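The studied pipeline, RLE first and Huffman second, can be sketched with the standard heap-based Huffman construction. The (byte, count) intermediate format below is one simple choice, not necessarily the paper's exact scheme.

```python
import heapq
from collections import Counter
from itertools import count

def rle(data: bytes) -> bytes:
    """Replace each run with a (byte, run_length) pair, runs capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        out += bytes([data[i], j - i])
        i = j
    return bytes(out)

def huffman_codes(data: bytes) -> dict:
    """Build symbol -> bitstring prefix codes from symbol frequencies."""
    tick = count()       # unique tie-breaker so the heap never compares trees
    heap = [(f, next(tick), s) for s, f in Counter(data).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (fa + fb, next(tick), (a, b)))
    codes = {}
    def walk(node, prefix=""):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0"); walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"   # single-symbol edge case
    walk(heap[0][2])
    return codes

data = b"aaaaaaaabbbbcc" * 10
inter = rle(data)
codes = huffman_codes(inter)
bits = sum(len(codes[s]) for s in inter)
print(len(data) * 8, "->", bits, "bits after RLE + Huffman")
```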

Journal Article
TL;DR: A new image compression algorithm based on the PCX algorithm, an image compression method used in the computer package of PC Paintbrush Bitmap Graphic, which is better than the original one in compression performance.
Abstract: In this paper, we present a new image compression algorithm based on the PCX algorithm, an image compression method used in the PC Paintbrush bitmap graphics package. We first introduce the principles of image compression and the structure of image file formats, and we demonstrate the compression and decompression procedures of the PCX algorithm. The original PCX algorithm compresses only one fourth of the data using run-length encoding, and its compression efficiency depends on the repeatability of data in the compressed area; if the repeatability is low, the compression performance will be poor. To avoid this, we propose a modified PCX algorithm which selects the best area for compression. We designed a computer package to implement the modified PCX algorithm using the Java programming language, with the Unified Modeling Language (UML) used to describe the structure and behavior of the package. The pseudocode for the compression and decompression of the modified PCX algorithm is also provided in this paper. We carried out an experiment to compare the performance of the original and modified algorithms; the experimental results show that the modified PCX algorithm outperforms the original one in compression performance.
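For reference, the PCX flavour of run-length encoding that both the original and modified algorithms build on works like this: a count byte with its top two bits set (0xC0..0xFF) announces a run of 1..63 repeats of the byte that follows, so even a single literal byte at or above 0xC0 must be escaped. A small round-trip sketch:

```python
def pcx_encode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 63:
            j += 1
        run = j - i
        if run > 1 or data[i] >= 0xC0:     # escape runs and high literal bytes
            out += bytes([0xC0 | run, data[i]])
        else:
            out.append(data[i])
        i = j
    return bytes(out)

def pcx_decode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] >= 0xC0:                # count byte: low 6 bits = run length
            out += bytes([data[i + 1]]) * (data[i] & 0x3F)
            i += 2
        else:                              # plain literal byte
            out.append(data[i])
            i += 1
    return bytes(out)

sample = b"\x10" * 70 + b"\x42" + b"\xC5"
assert pcx_decode(pcx_encode(sample)) == sample
print(pcx_encode(sample).hex())
```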

01 Jan 2008
TL;DR: To decrease the size of data under transmission, a new algorithm for radar image compression, RLE-BER (Run Length Encoding with Binary Encoded Runs), is proposed; initial results are very promising, as the average compression ratio is near 5:1 for a monochrome palette (binary image).
Abstract: One of the most important issues in international maritime transport is the need to increase the level of safety in vessel navigation. This goal can be achieved by various methods. One of them is the enlargement and enrichment of the navigational data processed by the ship's own deck computer network and the external systems AIS (Automatic Identification System) and VTS (Vessel Traffic Service). The radar image contains extensive and useful navigational information, which is why its incorporation into remote transmission is proposed here. To realize this process successfully, an enhancement of the NMEA (National Marine Electronics Association) code is proposed, incorporating radar images into particular protocols. To decrease the size of the data under transmission, a new algorithm for radar image compression, RLE-BER (Run Length Encoding with Binary Encoded Runs), is proposed. Experimental results on real data are presented, and a detailed description of the method is provided. The initial results are very promising, as the average compression ratio is near 5:1 for a monochrome palette (binary image). The proposed method is lossless, because we assumed the maximum safety level for the final system.
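The abstract names RLE-BER but does not spell out its format, so the following is only one plausible reading: the runs of a binary (two-colour) radar image are written as fixed-width binary counts. The width field and the leading-bit convention below are assumptions, not the published format.

```python
def rle_ber_encode(bits, width=12):
    """Emit the first bit, then every run length as a width-bit binary field.
    Assumes every run fits in `width` bits (run < 2**width)."""
    runs, prev, n = [], bits[0], 0
    for b in bits:
        if b == prev:
            n += 1
        else:
            runs.append(n)
            prev, n = b, 1
    runs.append(n)
    return str(bits[0]) + "".join(format(r, f"0{width}b") for r in runs)

image_row = [0] * 300 + [1] * 20 + [0] * 700
encoded = rle_ber_encode(image_row)
print(len(image_row), "pixels ->", len(encoded), "bits")   # 1020 -> 37
```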

Proceedings ArticleDOI
12 Dec 2008
TL;DR: In the process of overlaying, all polygon overlay modes have been realized, proving that the algorithm can be applied broadly.
Abstract: Aiming at the low computational efficiency and storage insufficiency of directly encoded raster data, a new data structure based on run-length encoding has been proposed to optimize the algebraic operations performed on directly encoded raster data. In this paper, employing this new data structure, the realization of the “intersection” operation is introduced; moreover, all kinds of algebraic operations on the run-length attributes are performed during the realization process. Adopting this “intersection” operation to implement polygon overlay, the steps are as follows: first, convert the polygons on two different layers into run-length sets; then intersect the run-length set of the base layer with the run-lengths of the adding layer, repeating this “intersection” operation until the resulting run-length set of the two layers is obtained; afterwards, extract the required run-length units according to the different overlay modes, such as “Union” and “Erase”; finally, vectorize the run-lengths into polygons and output them. In the process of overlaying, all polygon overlay modes have been realized, proving that the algorithm can be applied broadly.
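Building on the same run representation, the overlay modes reduce to boolean operations on run sets. In this sketch (not the authors' code), “Union” and “Erase” share a single boundary-sweep mechanism parameterized by a keep predicate:

```python
def combine(a, b, keep):
    """a, b: sorted (start, end) runs of one row; keep(in_a, in_b) -> bool."""
    events = sorted({s for s, _ in a + b} | {e + 1 for _, e in a + b})
    inside = lambda runs, x: any(s <= x <= e for s, e in runs)
    out = []
    for s, nxt in zip(events, events[1:]):
        if keep(inside(a, s), inside(b, s)):
            seg = (s, nxt - 1)
            if out and out[-1][1] + 1 == seg[0]:
                out[-1] = (out[-1][0], seg[1])     # merge adjacent segments
            else:
                out.append(seg)
    return out

row_a = [(0, 9)]
row_b = [(5, 14)]
print(combine(row_a, row_b, lambda x, y: x or y))        # Union: [(0, 14)]
print(combine(row_a, row_b, lambda x, y: x and not y))   # Erase: [(0, 4)]
```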

Journal ArticleDOI
TL;DR: In this paper, images are compressed based on JPEG, but in place of Huffman coding, Run Length Encoding (RLE)/Vector Quantisation (VQ) is applied and the performance evaluated.
Abstract: Joint Photographic Experts Group (JPEG) is a standard for compressing continuous-tone images. In this paper, images are compressed based on JPEG, but in place of Huffman coding, Run Length Encoding (RLE)/Vector Quantisation (VQ) is applied and the performance evaluated. RLE does not require table storage, but yields less compression than Huffman; VQ achieves higher compression at the cost of reduced quality. Further, the reconstructed image is filtered to reduce blocking artifacts. When the encoded image is corrupted with Additive White Gaussian Noise (AWGN), VQ is more robust than Huffman/RLE since it is a fixed-length encoding technique.
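The RLE stage slotted into the JPEG pipeline can be sketched as follows: each quantized 8x8 block is zigzag-scanned and its mostly-zero AC coefficients become (zero_run, value) pairs with an end-of-block marker. The DCT and quantization steps are omitted, and the hard-coded coefficient block is purely illustrative.

```python
def zigzag_order(n=8):
    """Index pairs of an n x n block in JPEG zigzag order."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 -p[1] if (p[0] + p[1]) % 2 else p[1]))

def rle_block(block):
    """block: 8x8 quantized coefficients -> [(zero_run, value), ...] + EOB."""
    coeffs = [block[i][j] for i, j in zigzag_order()]
    pairs, zeros = [], 0
    for c in coeffs[1:]:                 # AC coefficients (DC coded separately)
        if c == 0:
            zeros += 1
        else:
            pairs.append((zeros, c))
            zeros = 0
    pairs.append("EOB")                  # trailing zeros collapse to one marker
    return pairs

block = [[0] * 8 for _ in range(8)]
block[0][0], block[0][1], block[2][0] = 52, -3, 7   # a few surviving coefficients
print(rle_block(block))                  # [(0, -3), (1, 7), 'EOB']
```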