Showing papers on "Run-length encoding published in 2014"


Patent
12 Nov 2014
TL;DR: In this article, a combined digital image compression, encryption and encoding method is proposed; it builds on the JPEG compression and encoding standard and integrates a chaos-based encryption algorithm into the encoding process.
Abstract: The invention discloses a combined digital image compression, encryption and encoding method, belonging to the technical field of image encryption. The method builds on the JPEG compression and encoding standard, the most widely applied standard at present, and integrates a chaos-based encryption algorithm into the encoding process. Exploiting the fact that DC and AC coefficients are encoded separately under the JPEG standard, the DC and AC coefficients of an image are encrypted separately. To balance security and compression efficiency, all DC coefficients and part of the AC coefficients are encrypted: coefficients at the same positions in all DCT blocks are divided into groups and scrambled and diffused within those groups, so that damage to the differential encoding and run length encoding during encryption is kept as small as possible. Scrambling and diffusion are realized with logistic chaotic mapping and Chebyshev chaotic mapping respectively. Experiments show that the method retains high data compression capacity while providing effective protection of the image data.

15 citations


Proceedings Article
02 Jul 2014
TL;DR: A novel preprocessing-based algorithm is introduced to solve the problem of determining the first arc of a shortest path in sparse graphs, which achieves query running times on the 100 nanosecond scale, being significantly faster than state-of-the-art first-move oracles from the literature.
Abstract: We introduce a novel preprocessing-based algorithm to solve the problem of determining the first arc of a shortest path in sparse graphs. Our algorithm achieves query running times on the 100 nanosecond scale, being significantly faster than state-of-the-art first-move oracles from the literature. Space consumption is competitive, due to a compression approach that rearranges rows and columns in a first-move matrix and then performs run length encoding (RLE) on the contents of the matrix.
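The compression step is easy to picture: in a first-move matrix, entry [s][t] stores the first arc of a shortest path from s to t, and nearby targets tend to share first moves, so rows contain long runs. A toy Python sketch of that observation (the paper's row/column rearrangement, which further lengthens runs, is not shown):

```python
from itertools import groupby

# Toy first-move matrix: entry [s][t] names the first arc to take from node
# s on a shortest path to target t. Nearby targets usually share the same
# first move, so each row contains long runs that RLE captures well.
first_move = [
    "AAABBBBC",
    "AAAABBCC",
    "DDDDBBCC",
]

for row in first_move:
    print([(v, sum(1 for _ in g)) for v, g in groupby(row)])
    # e.g. [('A', 3), ('B', 4), ('C', 1)] -- 3 runs instead of 8 entries
```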

12 citations


Journal ArticleDOI
TL;DR: A novel binary large object (BLOB) analysis algorithm using run length encoding (RLE) is developed to speed up object location for IC packaging inspection; experiments demonstrate that IC pins can be located effectively and robustly with the proposed algorithm.
Abstract: To speed up object location for IC packaging inspection, this paper develops a novel binary large object (BLOB) analysis algorithm using run length encoding (RLE). First, new data structures for the RLE and BLOB linked lists are designed to accelerate data access and modification. Second, to avoid the traditional labeling conflicts and simplify the comparison of connectivity branches, an efficient algorithm for BLOB analysis is presented. Furthermore, the area feature of objects can be extracted while the BLOB linked lists are dynamically created or modified. Finally, to evaluate the performance of the proposed method, ICs with various types of small-outline packages were located by the proposed algorithm. The experimental results not only demonstrate that the IC pins can be located effectively and robustly with the proposed algorithm, but also show that it runs faster than other classical methods.
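Run-based BLOB analysis of this kind generally works by RLE-encoding each image row into foreground spans and then merging spans that touch across rows. The sketch below is a deliberately naive stand-in (pairwise union-find instead of the paper's linked-list structures), just to show the mechanism:

```python
def rle_rows(binary_image):
    """Encode each row of a 0/1 image as (row, start, end) foreground spans."""
    runs = []
    for y, row in enumerate(binary_image):
        x = 0
        while x < len(row):
            if row[x]:
                x0 = x
                while x < len(row) and row[x]:
                    x += 1
                runs.append((y, x0, x))          # half-open span [x0, x)
            else:
                x += 1
    return runs

def label_blobs(runs):
    """Union runs on adjacent rows whose x-spans overlap (naive O(n^2))."""
    parent = list(range(len(runs)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, (y1, a1, b1) in enumerate(runs):
        for j, (y2, a2, b2) in enumerate(runs[:i]):
            if y1 - y2 == 1 and a1 < b2 and a2 < b1:  # rows touch, spans overlap
                parent[find(i)] = find(j)
    return [find(i) for i in range(len(runs))]

img = [[0, 1, 1, 0, 1],
       [0, 1, 0, 0, 1],
       [0, 0, 0, 0, 0]]
runs = rle_rows(img)
print(runs)                 # [(0, 1, 3), (0, 4, 5), (1, 1, 2), (1, 4, 5)]
print(label_blobs(runs))    # two blobs: [0, 1, 0, 1]
```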

11 citations


Proceedings ArticleDOI
27 Mar 2014
TL;DR: A lossless two-phase compression algorithm is presented for DNA sequences: in the first phase a modified version of Run Length Encoding (RLE) is applied, and in the second phase the resulting genetic sequence is compressed using ASCII values.
Abstract: The properties of DNA sequences offer an opportunity to develop DNA-specific compression algorithms. A lossless two-phase compression algorithm is presented for DNA sequences. In the first phase a modified version of Run Length Encoding (RLE) is applied, and in the second phase the resulting genetic sequence is compressed using ASCII values. Using eight-bit ASCII codes ensures one-fourth compression irrespective of the repeated or non-repeated behavior of the sequence, and the modified RLE technique enhances the compression further. Not only is the compression ratio of the algorithm quite encouraging, but the simplicity of the technique makes it all the more interesting.
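The abstract does not spell out the modified RLE rules, so the Python sketch below is only a plausible reading of the two phases: a run-collapsing pass (with an assumed minimum run length), and the 2-bits-per-base packing that yields the guaranteed one-fourth size. A complete implementation would also need an escape scheme so the digits emitted by phase 1 are not confused with bases in phase 2.

```python
BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def rle_phase(seq: str, min_run: int = 4) -> str:
    """Phase 1 stand-in: collapse runs of min_run or more bases as 'A7';
    shorter runs stay literal. min_run is an assumed parameter."""
    out, i = [], 0
    while i < len(seq):
        j = i
        while j < len(seq) and seq[j] == seq[i]:
            j += 1
        run = j - i
        out.append(f"{seq[i]}{run}" if run >= min_run else seq[i] * run)
        i = j
    return "".join(out)

def pack_phase(seq: str) -> bytes:
    """Phase 2: pack four 2-bit bases into one 8-bit byte, guaranteeing
    one-fourth size versus one character per base."""
    seq += "A" * (-len(seq) % 4)                 # pad to a multiple of 4
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for ch in seq[i:i + 4]:
            b = (b << 2) | BASE_TO_BITS[ch]      # 2 bits per base
        out.append(b)
    return bytes(out)

print(rle_phase("AAAAAACGT"))          # 'A6CGT'
print(pack_phase("ACGTACGT").hex())    # '1b1b': 8 bases in 2 bytes
```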

9 citations


Proceedings ArticleDOI
03 Apr 2014
TL;DR: A DPCM-based approach for real-time compression of ECG data in telemonitoring applications; the computational simplicity of the algorithm makes it possible to implement the coder on a low-cost microcontroller.
Abstract: This paper illustrates a DPCM-based approach for real-time compression of ECG data in real-time telemonitoring applications. For real-time implementation, a ‘frame’ is considered with one original sample followed by 64 first-difference elements. The coder compresses the non-QRS regions of an ECG data stream through stages of first differencing, joint sign and magnitude coding, and run length encoding. Hard thresholding in the equipotential regions has been applied to enhance the RLE efficiency. For testing, 10-second ECG records from Physionet were used with a 10-bit quantization level. The CR, PRD and PRDN achieved with the PTB Database (ptbdb) are 6.42, 9.77 and 9.77 respectively. With MIT-BIH arrhythmia data (mitdb), these values are 5.92, 8.19 and 8.19 respectively. With MIT-BIH ECG Compression test data (cdb), these values are 4.25, 5.37 and 6.65 respectively. The frame-wise compression ratio rises to 12–14 in flat (TP) segments and falls to 1–2 in QRS regions. The computational simplicity of the algorithm provides an opportunity to implement the coder using a low-cost microcontroller.
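A minimal sketch of that coding chain, under assumptions the abstract does not fix (the threshold value and the exact symbol layout are illustrative, not the paper's):

```python
import numpy as np

def compress_frame(frame: np.ndarray, thr: int = 2):
    """Encode one frame (1 original sample + first differences). Small
    differences in flat/equipotential regions are zeroed by the hard
    threshold `thr` (assumed value) so zero runs become long and RLE pays
    off; nonzero differences are stored as (sign, magnitude) pairs."""
    diffs = np.diff(frame.astype(int))
    diffs[np.abs(diffs) <= thr] = 0              # hard thresholding
    encoded, i = [int(frame[0])], 0
    while i < len(diffs):
        if diffs[i] == 0:
            j = i
            while j < len(diffs) and diffs[j] == 0:
                j += 1
            encoded.append((0, j - i))           # run of zeros
            i = j
        else:
            encoded.append((int(np.sign(diffs[i])), int(abs(diffs[i]))))
            i += 1
    return encoded

# Flat segment compresses to one run; the step at sample 4 stays explicit.
print(compress_frame(np.array([100, 101, 101, 100, 130, 131])))
# [100, (0, 3), (1, 30), (0, 1)]
```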

9 citations


Journal ArticleDOI
TL;DR: A complete multi-channel neural recording compression and communication system for wireless implants that addresses the challenging simultaneous requirements for low power, high bandwidth and error-free communication is presented.

6 citations


Journal ArticleDOI
TL;DR: A combination of the Discrete Cosine Transform, fractal coding with a quadtree technique, and Run Length Encoding is proposed to compress images; implementation results show that images are compressed effectively.
Abstract: Compression of color images has many applications in mobile technologies. Reducing the time taken for file transfer is important in digital communication. Image compression means reducing the graphics file size without degrading the quality of the image. For digital images, Fractal Image Compression (FIC) has been considered an efficient method. FIC is a lossy compression method that exploits the self-similarity of natural images. In this paper, a combination of the Discrete Cosine Transform, fractal coding with a quadtree technique, and Run Length Encoding is proposed to compress the image. Implementation results show that the image is compressed effectively using the proposed method. Keywords— Image Compression, DCT, Quadtree, Fractal Image Compression, Run Length Encoding, Run Length Decoding

5 citations


Proceedings ArticleDOI
11 Nov 2014
TL;DR: It is shown that the Poxel rendering primitive aligns with optimized rasterization hardware and thus yields higher visual quality than ray casting methods.
Abstract: We present efficient rendering of opaque, sparse voxel environments with data amplified in local graphics memory via stream-out from a geometry shader to a cached vertex buffer pool. We show that our Poxel rendering primitive aligns with optimized rasterization hardware and thus yields higher visual quality than ray casting methods. Lossless run length encoding of occlusion-culled voxels and coordinate quantization further reduces host data transfers.

5 citations


Journal ArticleDOI
Chun-Hee Lee1, Chin-Wan Chung1
TL;DR: Two compression schemes, run-length encoding and a bucketing scheme, are considered, and the authors show that compression with data reordering achieves a better compression ratio than the original schemes.
Abstract: Although there have been many compression schemes for reducing data effectively, most schemes do not consider the reordering of data. In the case of unordered data, if the users change the order of a given data set, the compression ratio may improve compared to compressing the data in its original order. However, in the case of ordered data, the users need a mapping table that maps the original position to the changed position in order to recover the original order; reordering ordered data may therefore be disadvantageous in terms of space. In this paper, the authors consider two compression schemes, run-length encoding and a bucketing scheme, as bases for showing the impact of data reordering on compression. The authors also propose various optimization techniques related to data reordering. Finally, the authors show that the compression schemes with data reordering are better than the original compression schemes in terms of compression ratio.
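The unordered case is easy to see concretely: when element order carries no meaning, sorting maximizes run lengths before RLE. A small sketch:

```python
from itertools import groupby

def rle_pairs(xs):
    """Run-length encode a sequence into (value, count) pairs."""
    return [(v, sum(1 for _ in g)) for v, g in groupby(xs)]

data = [3, 1, 3, 2, 1, 3, 2, 3]        # unordered: order carries no meaning
print(len(rle_pairs(data)))            # 8 runs -> RLE gains nothing
print(len(rle_pairs(sorted(data))))    # 3 runs after reordering
# For ordered data, a permutation (mapping table) must also be stored to
# recover the original order, which can outweigh the RLE savings.
```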

5 citations


Posted Content
TL;DR: A new binary (bit-level) lossless compression catalyst method based on modular arithmetic, called Binary Allocation via Modular Arithmetic (BAMA), is introduced; it allows a significant increase in the compression performance of binary sequences, among others.
Abstract: A new binary (bit-level) lossless compression catalyst method based on modular arithmetic, called Binary Allocation via Modular Arithmetic (BAMA), is introduced in this paper. BAMA is intended for the storage and transmission of binary sequences, digital signals, images and video, as well as streaming and all kinds of digital transmission. The method does not compress by itself, but facilitates the action of the real compressor, in our case any lossless compression algorithm (Run Length Encoding, Lempel-Ziv-Welch, Huffman, arithmetic coding, etc.); that is, it acts as a compression catalyst. This catalyst allows a significant increase in the compression performance of binary sequences, among others.

4 citations


Proceedings ArticleDOI
01 Nov 2014
TL;DR: A Generalized False Discovery Rate based thresholding procedure is proposed for estimating wavelet coefficients adaptively to compress ECG signals; it achieves a very low average percentage root mean difference (PRD) at comparable compression ratios (CR).
Abstract: The paper proposes a Generalized False Discovery Rate (FDR) based thresholding procedure for estimating wavelet coefficients adaptively to compress ECG signals. To determine the adaptive threshold, the connection between thresholding and hypothesis testing is used. In the proposed algorithm the False Discovery Threshold (FDT) is obtained by computing the probability of each detail coefficient. The computed probabilities are arranged in ascending order. The adaptive critical significance level is calculated by the k-FWER and step-up k-FDR procedures of multiple hypothesis testing. The significance levels are compared with the computed probabilities to satisfy the desired FDR, which provides the FDT. Finally, two-stage entropy coding is performed using zero Run Length Encoding (RLE) followed by Huffman coding. Compared to existing ECG compression techniques, the proposed algorithm achieves a very low average percentage root mean difference (PRD) at comparable compression ratios (CR); very low PRD values signify faithful reconstruction of the original signal.
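After thresholding, the detail coefficients are mostly zeros, which is what makes the zero-RLE stage effective. A minimal sketch of that stage only (the Huffman step is omitted, and the symbol layout is an assumption):

```python
def zero_rle(coeffs):
    """Replace each run of zeros with a (0, run_length) marker; nonzero
    coefficients pass through. The output symbols would then feed a
    Huffman coder as the second entropy-coding stage."""
    out, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            if run:
                out.append((0, run))
                run = 0
            out.append(c)
    if run:
        out.append((0, run))
    return out

print(zero_rle([5, 0, 0, 0, -2, 0, 0, 7]))  # [5, (0, 3), -2, (0, 2), 7]
```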

Journal ArticleDOI
TL;DR: The proposed data-compression-based algorithm is tested on different systems and the results show its efficiency, although higher accuracy for large systems requires more iterations and hence more computation time.
Abstract: This paper presents a data-compression-based simulated algorithm for load flow calculation in electrical power systems. Real-time monitoring of grids requires short computation times in power system analysis. The load flow problem is the heart of this analysis; it essentially requires calculating the active and reactive power flows in the lines connecting the buses of a network. Many topologies and structures for transmission and distribution systems have been proposed to reduce CPU time and memory. The proposed algorithm applies a data compression technique and is tested on different systems, and the results show its efficiency. Higher accuracy for large systems requires more iterations, which increases the computation time, while the Run Length Encoding (RLE) algorithm is well suited to reducing the number of calculations to the exact number needed, since zero values are excluded. The network structure is represented as a one-dimensional vector instead of a 2D matrix, which avoids exponential growth in storage, and the effectiveness of this representation is validated. Matlab results obtained with this algorithm match the theoretical results.
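A rough sketch of the idea as this abstract describes it (the encoding below is assumed for illustration, not the paper's exact scheme): flatten the sparse bus admittance matrix into a one-dimensional vector, RLE away the zero entries, and iterate only over nonzero values during the load flow sweep.

```python
from itertools import groupby

# Toy 4-bus admittance matrix: most off-diagonal entries are zero.
Y = [
    [ 4.0, -2.0,  0.0, 0.0],
    [-2.0,  5.0, -3.0, 0.0],
    [ 0.0, -3.0,  3.0, 0.0],
    [ 0.0,  0.0,  0.0, 1.0],
]
flat = [v for row in Y for v in row]                     # 1-D vector, not 2-D
encoded = [(v, sum(1 for _ in g)) for v, g in groupby(flat)]
# Zero runs collapse to single pairs, so a sweep can skip them wholesale:
nonzero_work = [(v, n) for v, n in encoded if v != 0.0]
print(encoded)
print(nonzero_work)
```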

Patent
12 Feb 2014
TL;DR: In this paper, the authors present a method and system for compressing and retrieving light detection and ranging output data, and, more specifically, to compressing Light Detection and Ranging output data by run length encoding or losslessly compressing LDR output data and rapidly accessing this compressed data which is filtered by attributes without the need to read or decompress the entire collection of data.
Abstract: The present invention relates to a method and system for compressing and retrieving Light Detection and Ranging output data, and, more specifically, to a method and system for compressing Light Detection and Ranging output data by Run Length Encoding or losslessly compressing Light Detection and Ranging output data and rapidly accessing this compressed data which is filtered by attributes without the need to read or decompress the entire collection of data.

01 Jan 2014
TL;DR: A technique to model the run length encoding algorithm on a reconfigurable platform is described; FPGA implementation results show that the circuit requires a very small amount of digital hardware.
Abstract: In computer science, compression algorithms are an important technique for representing the original data in fewer bits. Lossless compression is a special class of data compression that reduces bits by identifying and eliminating statistical redundancy. A simple scheme that provides good lossless compression of data containing many runs of the same value is Run Length Encoding. In this paper a technique to model the run length encoding algorithm on a reconfigurable platform is described. FPGA implementation results show that the circuit requires a very small amount of digital hardware. Estimated power consumption and maximum operating frequency of the circuitry are also provided.
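The paper targets an HDL circuit; the Python sketch below only mirrors the encoding such a circuit would implement, with the run count capped at the width of an assumed 8-bit counter register:

```python
def rle_encode(data: bytes, max_run: int = 255) -> bytes:
    """(value, count) byte pairs, with the count capped at what a fixed
    8-bit hardware counter could hold; longer runs split into pairs."""
    out, i = bytearray(), 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < max_run:
            j += 1
        out += bytes([data[i], j - i])
        i = j
    return bytes(out)

def rle_decode(enc: bytes) -> bytes:
    """Expand (value, count) pairs back into the original byte stream."""
    out = bytearray()
    for k in range(0, len(enc), 2):
        out += bytes([enc[k]]) * enc[k + 1]
    return bytes(out)

assert rle_decode(rle_encode(b"aaaabbc")) == b"aaaabbc"
print(rle_encode(b"aaaabbc").hex())  # 61 04 62 02 63 01
```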

Journal ArticleDOI
TL;DR: A hybrid transform generation technique for image compression using two orthogonal transforms is presented, together with a combination of Huffman and Run Length Encoding, referred to here as Extended Huffman coding, a lossless technique that enhances the compression ratio.
Abstract: Compression is a technique to reduce the irrelevance and redundancy of image data in order to store or transmit the data in an efficient form. This paper presents a distinctive hybrid transform generation technique for image compression using two orthogonal transforms. The concept of the hybrid transform is to combine the attributes of two different orthogonal wavelet transforms so as to attain the strengths of both. The Discrete Cosine Transform, Discrete Hartley Transform, Discrete Walsh Transform and Discrete Kekre Transform, all lossy compression techniques, have been used. A combination of Huffman and Run Length Encoding, referred to here as Extended Huffman coding, a lossless compression technique, is also introduced to enhance the compression ratio. The results show that the hybrid transform performs better than the individual wavelet transforms.

Journal Article
TL;DR: The paper establishes a mapping between occupancy data and a bit array based on the binary characteristics of the data, and encodes the bit array with an improved run-length encoding algorithm.
Abstract: To compress and store massive electromagnetic spectrum occupancy data, the paper improves the run-length encoding algorithm. It establishes a mapping between the occupancy data and a bit array based on the binary characteristics of the data, and encodes the bit array with the improved run-length encoding algorithm. Practice shows that the compression efficiency is remarkable and the result is lossless.
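The paper's specific improvements are not described in this abstract; the sketch below shows only the baseline idea of run-length encoding a binary occupancy array, where storing the leading bit plus the run lengths suffices because runs of 0s and 1s alternate:

```python
def bit_rle(bits):
    """Encode a 0/1 occupancy array as its leading bit plus the lengths of
    alternating runs, e.g. [0,0,0,1,1,0] -> (0, [3, 2, 1])."""
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append(j - i)
        i = j
    return (bits[0] if bits else 0, runs)

print(bit_rle([0, 0, 0, 1, 1, 0]))  # (0, [3, 2, 1])
```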

Book ChapterDOI
05 Sep 2014
TL;DR: MOARLE is developed, a novel matrix computation framework that saves memory space and computational time; in contrast to conventional matrix computation methods, it can efficiently handle both sparse and dense matrices.
Abstract: Matrix computation is a key technology in various data processing tasks including data mining, machine learning, and information retrieval. The size of matrices has been increasing with the development of computational resources and the dissemination of big data. Huge matrices consume large amounts of memory and computation time, so reducing the size and computation time of huge matrices is a key challenge in the data processing area. We develop MOARLE, a novel matrix computation framework that saves memory space and computational time. In contrast to conventional matrix computation methods that target sparse matrices, MOARLE can efficiently handle both sparse and dense matrices. Our experimental results show that MOARLE can reduce memory usage to 2% of the original and improve computational performance by a factor of 124.

01 Jan 2014
TL;DR: An effective method for lossless and diagnostically lossless compression of fluoroscopic images is proposed; it improves the achieved compression ratio by 488% compared to benchmark traditional methods.
Abstract: Diagnostic imaging devices such as fluoroscopes produce a vast number of sequential images, ranging from localization images to functional tracking of the contrast agent moving through anatomical structures such as the pharynx and esophagus. In this paper, an effective method for lossless and diagnostically lossless compression of fluoroscopic images is proposed. The two main contributions are: (1) compression through block-based division of the subtraction matrix and adaptive Run-Length Encoding (RLE), and (2) range conversion to improve the compression performance. The region of coding (RC) – in this case the pharynx and esophagus – is effectively cropped and compressed using customized correlation and the combination of RLE and Huffman Coding (HC) to increase compression efficiency. The experimental results show that the proposed method improves the achieved compression ratio by 488% compared to benchmark traditional methods.

Journal ArticleDOI
TL;DR: The experimental results on ISCAS'89 benchmark circuits show that the proposed codeword generator scheme provides better efficiency as well as a significant reduction in test power.
Abstract: Testing large integrated circuits faces two major problems in today's industry: large test data volume and high test power. Our proposed scheme targets both issues in full-scan sequential circuits. Shift power is reduced by adjacent filling: during testing, the unspecified bits in the test pattern are filled with 0s or 1s depending on the nearest specified bit to their left. After filling the don't-care bits, the test data are compressed by shifted alternate frequency-directed run length encoding. A newly formulated codeword generator is introduced that generates an unlimited number of codewords for large input test patterns. Using this codeword generator, the test data volume can be effectively compressed. The experimental results on ISCAS'89 benchmark circuits show that our scheme provides better efficiency as well as a significant reduction in test power.
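The adjacent-fill step is simple enough to state exactly: each don't-care bit copies the nearest specified bit to its left, which lengthens runs (cutting scan-shift transitions) and makes the subsequent run-length code more effective. A sketch, with '0' assumed as the fallback for leading don't-cares:

```python
def adjacent_fill(pattern: str) -> str:
    """Fill each don't-care 'X' with the nearest specified bit to its left
    (leading X's fall back to '0', an assumed convention)."""
    out, last = [], "0"
    for b in pattern:
        if b in "01":
            last = b
        out.append(last)
    return "".join(out)

print(adjacent_fill("XX1XX0X1XX"))  # '0011100111' -- long runs, low shift power
```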

01 Jan 2014
TL;DR: A near-lossless image compression algorithm for color images is proposed, based on a row-by-row classifier with encoding schemes such as Lempel-Ziv-Welch (LZW), Huffman and Run Length Encoding (RLE); a comparative analysis reveals that the proposed algorithm yields smaller bits per pixel (bpp) than plain LZW, Huffman and RLE encoding.
Abstract: Lossless image compression is needed in many fields such as medical imaging, telemetry, geophysics and remote sensing, which require an exact replica of the original image and where loss of information is not tolerable. In this paper, a near-lossless image compression algorithm for color images is proposed, based on a row-by-row classifier with encoding schemes such as Lempel-Ziv-Welch (LZW), Huffman and Run Length Encoding (RLE). The algorithm divides the image into three parts, R, G and B, applies row-by-row classification to each part, and records the result of this classification in a mask image. After classification, the image data are decomposed into two sequences for each of R, G and B, and the mask image is hidden in them. These sequences are encoded using different encoding schemes such as LZW, Huffman and RLE. An exhaustive comparative analysis of these techniques reveals that the proposed algorithm yields smaller bits per pixel (bpp) than plain LZW, Huffman and RLE encoding.

Journal ArticleDOI
Shota Ishikawa1, Haiyuan Wu1, Chongke Bi, Qian Chen1, Hirokazu Taki1, Kenji Ono 
01 Jan 2014
TL;DR: In this article, the authors proposed two different Run Length based methods to detect ROI in large-scale time-varying data, which can be implemented easily as a parallel processing algorithm.
Abstract: It is difficult to carry out visualization of large-scale time-varying data directly, even with supercomputers. Data compression and ROI (Region of Interest) detection are often used to improve the efficiency of the visualization of numerical data. It is well known that Run Length encoding is a good technique for compressing data in which the same sequence appears repeatedly, such as an image with little change or a set of smooth fluid data. Another advantage of Run Length encoding is that it can be applied to every dimension of the data separately, so the Run Length method can be implemented easily as a parallel processing algorithm. We propose two different Run Length based methods. When using the Run Length method to compress a data set, its size may increase after compression if the data do not contain many repeated parts, so we apply the compression only when the data can be compressed effectively. By checking the compression ratio, we can detect ROI. The effectiveness and efficiency of the proposed methods are demonstrated through comparison with several existing compression methods on different sets of fluid data.
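A rough sketch of the compress-only-when-effective idea on one 1-D slice of a field (the 0.5 ratio threshold is an assumption, not a value from the paper); slices that resist compression are flagged as regions of interest:

```python
from itertools import groupby

def compress_if_effective(block, threshold=0.5):
    """RLE one 1-D slice; keep the encoding only when it shrinks the data
    under a simple cost model (each run stored as a value+count pair).
    Slices that compress poorly are flagged as regions of interest."""
    runs = [(v, sum(1 for _ in g)) for v, g in groupby(block)]
    ratio = (2 * len(runs)) / len(block)         # encoded cost vs raw samples
    if ratio < threshold:
        return ("rle", runs, False)              # compressible, not ROI
    return ("raw", list(block), True)            # poorly compressible -> ROI

print(compress_if_effective([0.0] * 8))                  # smooth: compressed
print(compress_if_effective([1, 4, 2, 7, 3, 6, 5, 8]))   # detail -> ROI
```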

Journal Article
TL;DR: The paper presents various methodologies of lossless data compression; Huffman and arithmetic coding are compared according to their performance.
Abstract: The paper presents various methodologies of lossless data compression. Huffman and arithmetic coding are compared according to their performance.

Patent
07 May 2014
TL;DR: In this article, an encoder (10) takes input data (D1) and generates corresponding encoded output data (D2) as a run-length encoded (RLE) representation of the input data, where at least one part is associated with original symbols and at least another part is associated with counts of occurrence of those symbols.
Abstract: An encoder (10) takes input data (D1) and generates corresponding encoded output data (D2) as a run-length encoded (RLE) representation of the input data (D1). The encoder also splits the RLE representation into a plurality of parts (A, B), wherein at least one part is associated with original symbols and at least another part is associated with counts of occurrence of those symbols. The encoder then further compresses the parts (A, B) separately to generate the encoded output data (D2), using e.g. Huffman, Golomb or arithmetic coding. A corresponding decoder (50) generates corresponding decoded output data (D3). Optionally, the original symbols include at least one of: characters, alphabetic elements, numbers, bits, bytes, words.
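The splitting step is easy to illustrate: the symbol stream and the count stream each have a narrower distribution than the interleaved RLE pairs, which is what makes coding them separately attractive. A minimal sketch of that split (the downstream entropy coders are omitted):

```python
from itertools import groupby

def split_rle(data: str):
    """RLE, then split into a symbol stream (part A) and a count stream
    (part B) so each can be entropy-coded separately."""
    runs = [(v, sum(1 for _ in g)) for v, g in groupby(data)]
    symbols = [v for v, _ in runs]   # part A: original symbols
    counts = [n for _, n in runs]    # part B: occurrence counts
    return symbols, counts

print(split_rle("aaabbbbcc"))  # (['a', 'b', 'c'], [3, 4, 2])
```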

Journal ArticleDOI
TL;DR: A new decode-aware compression technique is proposed to improve both compression and decompression efficiency in reconfigurable systems; a large set of matching patterns is created using an effective bitmask selection technique.
Abstract: In reconfigurable systems, bitstream compression reduces the size of the bitstream and the memory required; by reducing the reconfiguration time, it can also improve the communication bandwidth. Existing systems achieve either effective compression at the cost of decompression speed, or fast decompression at the cost of compression efficiency. To improve both compression and decompression efficiency, this paper proposes a new decode-aware compression technique. The approaches of the proposed system are: i) a large set of matching patterns is created using an effective bitmask selection technique; ii) bitmask-based compression using the bitmask and dictionary selection technique efficiently reduces the memory requirement; iii) repetitive patterns are handled by combining bitmask-based compression with run length encoding. Finally, the original bitstream can be regenerated using the decompression engine.

Journal ArticleDOI
TL;DR: An arrangement of video frames in the temporal-spatial (TEMPOSPA) domain, a 3D-to-2D mapping of video signals, is proposed, providing a high level of compression compared to previous compression standards.
Abstract: Motion-based prediction used for video coding is an efficient method in the field of video compression, but the complexity and computation time involved are a burden in real-time applications. In this paper, an arrangement of video frames in the temporal-spatial (TEMPOSPA) domain, a 3D-to-2D mapping of video signals, is proposed. As video signals are more redundant in the temporal domain than in the spatial domain, the video frames are arranged so as to exploit both temporal and spatial redundancy and achieve a good compression ratio. To reduce the time and complexity of DCT computation, an Approximated DCT (ADCT) is used along with a combined Retaining-RLE method. The ADCT is an approximation of the DCT whose transformation matrix is mostly zeros, which reduces the number of multiplications involved in the normal DCT computation. The quantized 8x8 blocks are then encoded by a combination of the Retaining and Run Length Encoding (RLE) methods: out of the 64 quantized coefficients in an 8x8 block, only a certain number of coefficients is retained in zig-zag scanning order, and RLE is applied to this retained sequence to reduce the data. This provides a high level of compression compared to previous compression standards. General Terms Scalable video coding, Video compression algorithms, Efficient video signal storage and transmission, Data compression, Digital Video processing.
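A minimal sketch of the Retaining-RLE step on a quantized block; the retained count of 16 is an illustrative choice, as the abstract does not state one:

```python
import numpy as np

def zigzag_indices(n: int = 8):
    """Index order of an n x n block along anti-diagonals (JPEG-style)."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else -p[0]))

def retain_rle(block: np.ndarray, keep: int = 16):
    """Keep only the first `keep` zig-zag coefficients of a quantized 8x8
    block (`keep` is an assumed parameter), then run-length encode the
    zero runs inside the retained sequence."""
    seq = [int(block[i, j]) for i, j in zigzag_indices(8)][:keep]
    out, run = [], 0
    for c in seq:
        if c == 0:
            run += 1
        else:
            if run:
                out.append((0, run))
                run = 0
            out.append(c)
    if run:
        out.append((0, run))
    return out

block = np.zeros((8, 8), dtype=int)
block[0, 0], block[0, 1], block[1, 0] = 50, -3, 2   # typical low-freq energy
print(retain_rle(block))   # [50, -3, 2, (0, 13)]
```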