scispace - formally typeset
Topic

Run-length encoding

About: Run-length encoding is a research topic. Over the lifetime, 504 publications have been published within this topic receiving 4441 citations. The topic is also known as: RLE.
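Run-length encoding replaces each run of identical symbols with a (symbol, count) pair. A minimal illustrative sketch in Python (function names are my own, not taken from any paper listed below):

```python
def rle_encode(s):
    """Encode a string as a list of (char, count) pairs."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1][1] += 1          # extend the current run
        else:
            out.append([ch, 1])      # start a new run
    return [(c, n) for c, n in out]

def rle_decode(pairs):
    """Invert rle_encode: expand each (char, count) pair."""
    return "".join(c * n for c, n in pairs)
```

For example, `rle_encode("aaabbc")` yields `[("a", 3), ("b", 2), ("c", 1)]`, and decoding that list recovers the original string. RLE only helps when the input actually contains long runs; on run-free data the pairs cost more space than the raw symbols.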


Papers
Journal ArticleDOI
Abstract: A succinct full-text self-index is a data structure built on a text T = t1t2...tn, which takes little space (ideally close to that of the compressed text), permits efficient search for the occurren...

5 citations

Patent
Albert B. Cooper1
27 Apr 1999
TL;DR: In this paper, a data compressor receives an input stream of data characters and provides a corresponding stream of output codes corresponding to numerically increasing contiguous segments of a detected run of the same character.
Abstract: The disclosed data compressor receives an input stream of data characters and provides a corresponding stream of output codes. The compressor provides a sequence of numerically increasing output codes corresponding to numerically increasing contiguous segments of a detected run of the same character. The number of characters in the detected run is determined and a mathematical algorithm, using the number of characters in the run, mathematically generates the appropriate sequence of codes. One disclosed embodiment utilizes a mathematical algorithm that iteratively diminishes the number of run characters by an iteratively increasing segment index. Another embodiment utilizes a quadratic equation algorithm that computes the codes from the number of characters in the run utilizing equations derived from the expression for the sum of the first n numbers. In a further embodiment, the numbers of characters in the run segments are stored together with the respective codes representing the segments. On later encounters of a previously processed run, the stored data is accessed and the stored codes corresponding to the run segments are output as appropriate. Non-run characters of the input stream are transmitted directly in synchronism with incrementing the codes of the code sequence.
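One plausible reading of the two embodiments can be sketched in a few lines: a run of n identical characters is split into segments of sizes 1, 2, 3, ..., and the number of full segments follows in closed form from the sum 1 + 2 + ... + k = k(k+1)/2. The function names and exact behavior below are assumptions for illustration, not the patent's claimed implementation:

```python
import math

def segment_run_iterative(n):
    """Iterative embodiment (assumed reading): diminish a run of n
    characters by an increasing segment index 1, 2, 3, ..., keeping
    any leftover characters as a final partial segment."""
    segs, size = [], 1
    while n >= size:
        segs.append(size)
        n -= size
        size += 1
    if n:
        segs.append(n)   # leftover partial segment
    return segs

def full_segments_closed_form(n):
    """Quadratic embodiment (assumed reading): largest k with
    k*(k+1)/2 <= n, derived from the sum of the first k integers."""
    return (math.isqrt(8 * n + 1) - 1) // 2
```

For a run of 10 characters the iterative version yields segments [1, 2, 3, 4], and the closed form agrees that 4 full segments fit, since 4*5/2 = 10.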

5 citations

Journal ArticleDOI
TL;DR: A modified video compression model is proposed that adapts the genetic algorithm to build an optimal codebook for adaptive vector quantization that is used as an activation function inside the neural network’s hidden layer to achieve higher compression ratio.
Abstract: Video compression has great significance in the communication of motion pictures. Video compression techniques try to remove the different types of redundancy within or between video sequences. In the temporal domain, video compression techniques remove the redundancies between the highly correlated consecutive frames of the video. In the spatial domain, they remove the redundancies between the highly correlated neighboring pixels (samples) in the same frame. Evolving neural-network-based video coding research efforts focus on improving existing video codecs, either by performing better predictions incorporated within the same codec framework or through holistic, end-to-end video compression schemes. Current neural-network-based video compression adapts a static codebook to achieve compression, which prevents learning from new samples. This paper proposes a modified video compression model that adapts the genetic algorithm to build an optimal codebook for adaptive vector quantization, used as an activation function inside the neural network’s hidden layer. A background subtraction algorithm is employed to extract moving objects within frames to generate the context-based initial codebook. Furthermore, Differential Pulse Code Modulation (DPCM) is utilized for lossless compression of significant wavelet coefficients, whereas low-energy coefficients are lossy-compressed using Learning Vector Quantization (LVQ) neural networks. Finally, Run Length Encoding (RLE) is applied to encode the quantized coefficients to achieve a higher compression ratio. Experiments have demonstrated the system’s ability to achieve a higher compression ratio with acceptable efficiency measured by PSNR.
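As a rough illustration of why RLE works well as the final stage here: quantized transform-coefficient streams are typically dominated by zeros, so a common encoding records (zero-run length, value) pairs. The sketch below is a generic zero-run scheme for illustration, not the paper's exact format:

```python
def rle_zero_runs(coeffs):
    """Encode quantized coefficients as (zero_run_length, value) pairs.
    A trailing all-zero run is recorded with value None (an assumed
    convention for this sketch)."""
    out, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1                 # extend the current zero run
        else:
            out.append((run, c))     # zeros preceding this value
            run = 0
    if run:
        out.append((run, None))      # trailing zeros, no value follows
    return out
```

For example, `[0, 0, 5, 0, 3, 0, 0]` becomes `[(2, 5), (1, 3), (2, None)]`: seven coefficients shrink to three pairs, and the gain grows with the fraction of zeros the quantizer produces.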

5 citations

Book ChapterDOI
01 Jan 2011
TL;DR: A new approach for removing blocking artifacts in reconstructed block-encoded images is presented; digital halftoning, a nonlinear system that quantizes a gray-level image to one bit per pixel, is also reviewed.
Abstract: A new approach for removing blocking artifacts in reconstructed block-encoded images is presented in [1]. The perceptual quality of video affected by packet losses, low resolution, and low-bit-rate coding by the H.264/AVC encoder is studied in [2]. Digital halftoning is a nonlinear system that quantizes a gray-level image to one bit per pixel [3]. Halftoning by error diffusion scans the image, quantizes the current pixel, and distributes the quantization error to neighboring pixels in fixed proportions according to the error filter. The error filter is designed to minimize a local weighted error introduced by quantization.
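The error-diffusion scan described above can be sketched with the classic Floyd-Steinberg filter (weights 7/16, 3/16, 5/16, 1/16); the chapter's actual error filter may differ, and the function below is illustrative only:

```python
def error_diffuse(img, threshold=128):
    """Floyd-Steinberg error diffusion on a grayscale image given as a
    list of rows (values 0-255). Returns a binary image (0 or 255).
    The weights 7/16, 3/16, 5/16, 1/16 are the classic filter, assumed
    here for illustration."""
    h, w = len(img), len(img[0])
    px = [list(row) for row in img]        # working copy accumulating error
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = px[y][x]
            new = 255 if old >= threshold else 0
            out[y][x] = new
            err = old - new                # quantization error at this pixel
            if x + 1 < w:
                px[y][x + 1] += err * 7 / 16       # right neighbor
            if y + 1 < h:
                if x > 0:
                    px[y + 1][x - 1] += err * 3 / 16   # below-left
                px[y + 1][x] += err * 5 / 16           # below
                if x + 1 < w:
                    px[y + 1][x + 1] += err * 1 / 16   # below-right
    return out
```

Because the error is carried forward rather than discarded, the local average intensity of the binary output tracks the gray level of the input, which is what makes the halftone look continuous-tone at viewing distance.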

5 citations

Proceedings ArticleDOI
01 Jun 1992
TL;DR: In this paper, the authors proposed a non-real-time image compression algorithm for CFD data visualizations on disk, which allows the animation to maintain the spatial and intensity quality of rendered image and allows the display of the animation at approximately 30 frames/sec, the standard video rate.
Abstract: The visualization and animation of computational fluid dynamics (CFD) data is vital in understanding the varied parameters that exist in the solution field. Scientists need accurate and efficient visualization techniques. The animation of CFD data is not only computationally expensive but also expensive in the allocation of memory, both RAM and disk. Preserving animations of the CFD data visualizations is useful, since recreating the animation is expensive when dealing with extremely large data structures. Researchers of CFD data may wish to follow a particle trace over an experimental fuselage design, but are unable to retain the animation for efficient retrieval without re-rendering or consuming a considerable amount of disk space. The spatial image resolution is reduced from 1280 x 1024 to 512 x 480 in going from the workstation format to a video format; hence the desire to save these animations on disk. Saving on disk allows the animation to maintain the spatial and intensity quality of the rendered image and allows the display of the animation at approximately 30 frames/sec, the standard video rate. The goal is to develop optimal image compression algorithms that allow visualization animations, captured as independent RGB images, to be recorded to tape or disk. If recorded to disk, the image sequence is compressed in non-real-time with a technique which allows subsequent decompression at approximately 30 frames/sec to simulate the temporal resolution of video. Initial compression is obtained through mapping the RGB colors in each frame to a 12-bit colormap image. The colormap is animation-sequence dependent and is created by histogramming the colors in the animation sequence and mapping those colors to specific regions of the L*a*b* color coordinate system to take advantage of the uniform nature of the L*a*b* color system.
Further compression is obtained by taking interframe differences, specifically comparing respective blocks between consecutive frames. If no change has occurred within a block, a zero is recorded; otherwise the entire block containing the 12-bit colormap indices is retained. The resulting block differences of the sequential frames in each segment are saved after Huffman coding and run-length encoding. Playback of an animation avoids much of the computation involved in rendering the original scene by decoding and loading the video RAM through the pixel bus. The algorithms are written to take advantage of the system's hardware, specifically the Silicon Graphics VGX graphics adapter.
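The interframe block-differencing step can be sketched as follows; the frame layout (lists of rows of colormap indices), the block size, and the zero marker for unchanged blocks are assumptions for illustration, not the authors' exact format:

```python
def block_diff(prev, curr, block=8):
    """Compare two frames block by block, scanning in raster order.
    Emit 0 for a block identical to the previous frame's, otherwise
    the block's pixel data (12-bit colormap indices in the paper)."""
    h, w = len(curr), len(curr[0])
    out = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            blk = [row[bx:bx + block] for row in curr[by:by + block]]
            ref = [row[bx:bx + block] for row in prev[by:by + block]]
            out.append(0 if blk == ref else blk)
    return out
```

The output stream is then a good target for the RLE and Huffman stages the abstract mentions: static regions of the animation collapse into long runs of zeros, and only the changed blocks carry pixel data.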

5 citations

Network Information
Related Topics (5)
- Network packet: 159.7K papers, 2.2M citations, 76% related
- Feature extraction: 111.8K papers, 2.1M citations, 75% related
- Convolutional neural network: 74.7K papers, 2M citations, 74% related
- Image processing: 229.9K papers, 3.5M citations, 74% related
- Cluster analysis: 146.5K papers, 2.9M citations, 74% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2021  23
2020  20
2019  20
2018  28
2017  27
2016  24