
Showing papers on "Run-length encoding" published in 2010


Journal ArticleDOI
TL;DR: This paper presents a run- and label-equivalence-based one-and-a-half-scan algorithm for labeling connected components in a binary image that is directly applicable to run-length-encoded images, and can obtain contours of connected components efficiently.
Abstract: This paper presents a run- and label-equivalence-based one-and-a-half-scan algorithm for labeling connected components in a binary image. Major differences between our algorithm and conventional label-equivalence-based algorithms are: (1) all conventional label-equivalence-based algorithms scan all pixels in the given image at least twice, whereas our algorithm scans background pixels once and object pixels twice; (2) all conventional label-equivalence-based algorithms assign a provisional label to each object pixel in the first scan and relabel the pixel in the later scan(s), whereas our algorithm assigns a provisional label to each run in the first scan, and after resolving label equivalences between runs, by using the recorded run data, it assigns each object pixel a final label directly. That is, in our algorithm, relabeling of object pixels is not necessary any more. Experimental results demonstrated that our algorithm is highly efficient on images with many long runs and/or a small number of object pixels. Moreover, our algorithm is directly applicable to run-length-encoded images, and we can obtain contours of connected components efficiently.
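
As an illustration of the run-based strategy, the sketch below (a simplification, not the paper's exact procedure; the function names, the union-find equivalence table and the 8-connectivity assumption are mine) gives each run a provisional label during one scan over object pixels, resolves label equivalences between runs, and then writes final labels back over the recorded runs only, so background pixels are visited once and object pixels are never relabeled.

    # Minimal sketch of run-based connected component labeling: runs get
    # provisional labels, equivalences are resolved in a union-find table,
    # and only object pixels are written again in the final pass.

    def find(parent, x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(parent, a, b):
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    def label_runs(image):
        """image: list of rows of 0/1 values. Returns a label image (0 = background)."""
        parent = []                 # union-find over provisional labels
        prev_runs = []              # runs of the previous row: (start, end, label)
        all_runs = []               # (row, start, end, label) for the final pass
        for y, row in enumerate(image):
            cur_runs, x, width = [], 0, len(row)
            while x < width:
                if row[x]:                        # start of a run of object pixels
                    start = x
                    while x < width and row[x]:
                        x += 1
                    lab = len(parent)
                    parent.append(lab)            # new provisional label
                    # merge with 8-connected runs of the previous row
                    for ps, pe, plab in prev_runs:
                        if ps <= x and pe >= start - 1:   # overlap incl. diagonals
                            union(parent, lab, plab)
                    cur_runs.append((start, x - 1, lab))
                    all_runs.append((y, start, x - 1, lab))
                else:
                    x += 1
            prev_runs = cur_runs
        # second (half) scan: write final labels over object pixels only
        out = [[0] * len(r) for r in image]
        for y, s, e, lab in all_runs:
            final = find(parent, lab) + 1
            for x in range(s, e + 1):
                out[y][x] = final
        return out

    print(label_runs([[1, 1, 0, 1],
                      [0, 1, 0, 1],
                      [0, 0, 0, 1]]))
    # [[1, 1, 0, 2], [0, 1, 0, 2], [0, 0, 0, 2]]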

32 citations


Proceedings Article
01 Jan 2010
TL;DR: A new approach of run length encoding (RLE) is proposed in this research to compress discrete cosine transform (DCT) coefficients of time domain ECG signals because of the high probability of redundancies in consecutive coefficients.
Abstract: A new approach to run length encoding (RLE) is proposed in this research to compress discrete cosine transform (DCT) coefficients of time-domain ECG signals. The energy compaction property of the DCT facilitates length encoding by accumulating the correlated coefficients into separate segments. The resulting high probability of redundancies in consecutive coefficients makes RLE effective. To increase the CR, two stages of RLE are performed on the quantized DCT coefficients. The binary equivalents of the RLE values are then obtained by applying Huffman coding. Finally, the distortion of relevant clinical diagnostic information in the reconstructed signal is measured in terms of weighted diagnostic distortion (WDD), percentage root-mean-squared difference (PRD) and root-mean-square (RMS) error indices. Results indicate that for MIT-BIH Arrhythmia database Record 117, the proposed compression algorithm can achieve a compression ratio of 14.87 at a bit rate of 185 bps.
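
The idea can be sketched in a few lines of Python (a minimal sketch, not the paper's exact two-stage RLE and Huffman scheme; the quantization step, segment length and availability of SciPy are assumptions): a DCT concentrates a segment's energy in a few coefficients, uniform quantization zeroes the rest, and run-length encoding captures the resulting long zero runs.

    # Sketch: DCT energy compaction followed by run-length encoding of the
    # quantized coefficients (the high-frequency tail quantizes to long zero runs).
    import numpy as np
    from scipy.fft import dct, idct

    def rle_encode(values):
        """Encode a 1-D integer sequence as (value, run_length) pairs."""
        out, i = [], 0
        while i < len(values):
            j = i
            while j < len(values) and values[j] == values[i]:
                j += 1
            out.append((int(values[i]), j - i))
            i = j
        return out

    def compress_segment(segment, step=1.0):
        coeffs = dct(segment, norm='ortho')
        q = np.round(coeffs / step).astype(int)      # uniform quantization
        return rle_encode(q)

    def decompress_segment(pairs, step=1.0):
        q = np.concatenate([np.full(n, v) for v, n in pairs])
        return idct(q * step, norm='ortho')

    # toy smooth segment: most quantized coefficients become zero
    t = np.linspace(0, 1, 256)
    x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 7 * t)
    pairs = compress_segment(x)
    print(len(pairs), 'RLE pairs for', len(x), 'samples')
    print('max reconstruction error:', np.abs(decompress_segment(pairs) - x).max())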

23 citations


Proceedings ArticleDOI
Shuhui Wang1, Tao Lin1
29 Nov 2010
TL;DR: Experimental results show that, compared to H.264, UC achieves a 2dB~34dB PSNR improvement and better visual image quality for compound images with mixed text, graphics and pictures.
Abstract: This paper proposes a compound image compression method named United Coding (UC). In this coding method, several lossless coding techniques such as run-length encoding (RLE), Portable Network Graphics (PNG) and gzip are combined into H.264 hybrid coding, and the macroblock is the basic coding unit. All coders in UC are used to code each macroblock, and a rate-distortion (R-D) optimization criterion is applied to select the optimum coder. Experimental results show that, compared to H.264, UC achieves a 2dB~34dB PSNR improvement and better visual image quality for compound images with mixed text, graphics and pictures. Moreover, UC also offers partial-lossless (defined as partially lossless and partially near-lossless) compression.

23 citations


Book ChapterDOI
01 Sep 2010
TL;DR: The Burrows-Wheeler transform (BWT), as described in this chapter, is a method for permuting a list with the aim of bringing repeated elements together; unlike full sorting, which cannot be undone unless the complete sorting permutation is also produced as output, the BWT can be inverted given only a single additional integer.
Abstract: The Burrows–Wheeler transform (BWT) is a method for permuting a list with the aim of bringing repeated elements together. Its main use is as a preprocessing step in data compression. Lists with many repeated adjacent elements can be encoded compactly using simple schemes such as run length or move-to-front encoding. The result can then be fed into more advanced compressors, such as Huffman or arithmetic coding, to compress the input even more. Clearly, the best way of bringing repeated elements together is just to sort the list. But the idea has a major flaw as a preliminary to compression: there is no way to recover the original list unless the complete sorting permutation is also produced as part of the output. Without the ability to recover the original input, data compression is pointless; and if a permutation has to be produced as well, then compression is ineffective. Instead, the BWT achieves a more modest permutation, one that brings some but not all repeated elements into adjacent positions. The main advantage of the BWT is that the transform can be inverted using a single additional piece of information, namely an integer k in the range 0 ≤ k < n, where n is the length of the (nonempty) input list. In this pearl we describe the BWT, identify the fundamental reason why inversion is possible, and use it to derive the inverse transform from its specification.
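
A naive quadratic-time sketch of the transform and its inverse (for illustration only; the chapter derives the inverse far more elegantly) makes the role of the extra integer k concrete: k is the position of the original string among its sorted rotations.

    # Naive Burrows-Wheeler transform and inverse (O(n^2 log n); illustration only).

    def bwt(s):
        n = len(s)
        rotations = sorted(s[i:] + s[:i] for i in range(n))
        k = rotations.index(s)                  # position of the original string
        last_column = ''.join(rot[-1] for rot in rotations)
        return last_column, k

    def inverse_bwt(last_column, k):
        n = len(last_column)
        table = [''] * n
        for _ in range(n):                      # repeatedly prepend and re-sort
            table = sorted(last_column[i] + table[i] for i in range(n))
        return table[k]

    text = 'banana_bandana'
    transformed, k = bwt(text)
    print(transformed, k)                       # repeated letters tend to cluster
    assert inverse_bwt(transformed, k) == text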

19 citations


Patent
27 Aug 2010
TL;DR: Subtitling aims at the presentation of text information and graphical data encoded as pixel bitmaps; as discussed by the authors, the bitmaps may exceed the video frame dimensions, so that only portions are displayed at a time.
Abstract: Subtitling aims at the presentation of text information and graphical data, encoded as pixel bitmaps. The size of subtitle bitmaps may exceed video frame dimensions, so that only portions are displayed at a time. The bitmaps are a separate layer lying above the video, e.g. for synchronized video subtitles, animations and navigation menus, and therefore contain many transparent pixels. An advanced adaptation for bitmap encoding for HDTV, e.g. 1920×1280 pixels per frame as defined for the Blu-ray Disc Prerecorded format, providing optimized compression results for such subtitling bitmaps, is achieved by a four-stage run length encoding. Shorter or longer sequences of pixels of a preferred color, e.g. transparent, are encoded using the second or third shortest code words, while single pixels of different color are encoded using the shortest code words, and sequences of pixels of equal color use the third or fourth shortest code words.
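
The four categories of code words in the claim can be illustrated with a small tokenizer sketch; the token names and the boundary between "shorter" and "longer" runs below are invented for illustration and are not the patent's actual code word layout.

    # Sketch of the four-category tokenization behind the subtitle RLE scheme.
    TRANSPARENT = 0
    SHORT_RUN = 16          # assumed boundary between "shorter" and "longer" runs

    def tokenize_scanline(pixels):
        tokens, i = [], 0
        while i < len(pixels):
            j = i
            while j < len(pixels) and pixels[j] == pixels[i]:
                j += 1
            run, color = j - i, pixels[i]
            if color == TRANSPARENT:
                kind = 'transparent_short' if run <= SHORT_RUN else 'transparent_long'
                tokens.append((kind, run))                 # 2nd/3rd shortest codes
            elif run == 1:
                tokens.append(('single_pixel', color))     # shortest codes
            else:
                kind = 'color_run_short' if run <= SHORT_RUN else 'color_run_long'
                tokens.append((kind, color, run))          # 3rd/4th shortest codes
            i = j
        return tokens

    line = [0] * 40 + [7, 7, 7, 7] + [3] + [0] * 200
    print(tokenize_scanline(line))
    # [('transparent_long', 40), ('color_run_short', 7, 4),
    #  ('single_pixel', 3), ('transparent_long', 200)]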

17 citations


Journal ArticleDOI
TL;DR: 3SUM-hardness is proved for both the wildcard matching problem and the k-mismatch problem with run-length compressed inputs, which implies that it is very unlikely that an o(mn)-time algorithm can be devised for either of them.

15 citations


Journal ArticleDOI
TL;DR: Two algorithms are proposed to balance energy consumption among sensor nodes by distributing the workload of image compression tasks within a cluster on wireless sensor networks; they not only balance the total energy consumption among sensor nodes, and thus increase the overall network lifetime, but also reduce block noise in image compression.
Abstract: This paper proposes two algorithms to balance energy consumption among sensor nodes by distributing the workload of image compression tasks within a cluster on wireless sensor networks. The main point of the proposed algorithms is to adopt an energy threshold, which is used when we implement the exchange and/or assignment of tasks among sensor nodes. The threshold adapts to the residual energy of sensor nodes, the input image, the compressed output, and network parameters. We apply the lapped transform technique, an extended version of the discrete cosine transform, and run length encoding before Lempel-Ziv-Welch coding in the proposed algorithms to improve both quality and compression rate in the image compression scheme. We conduct extensive computational experiments to verify our methods and find that the proposed algorithms not only balance the total energy consumption among sensor nodes, and thus increase the overall network lifetime, but also reduce block noise in image compression.

13 citations


Patent
20 Oct 2010
TL;DR: A high-speed image compression VLSI coding method based on a systolic array, together with an encoder, is proposed; the encoder consists of an image-level controller, a code-block-level controller, an image segmentation and partitioning unit, first-, second- and third-level two-dimensional wavelet transform units, a QRG combined encoder and a code stream packer.
Abstract: The invention relates to a high-speed image compression VLSI coding method based on a systolic array, and to an encoder. The encoder comprises an image-level controller, a code-block-level controller, an image segmentation and partitioning unit, first-, second- and third-level two-dimensional wavelet transform units, a QRG combined encoder and a code stream packer. The method comprises the following steps: first, the image segmentation and partitioning unit divides the image into code blocks of size 32×32; the first-, second- and third-level two-dimensional wavelet transform units carry out a three-level wavelet transform on the image; the QRG combined encoder then reads the three-level wavelet transform coefficients and carries out optimal quantization, adaptive zero run length encoding and Exp-Golomb encoding with k=0 to obtain the code stream; finally, the code stream packer packs the code stream of each code block into a file in a preset format for output. The invention greatly accelerates image compression, effectively prolongs the recording time, and improves the transmission capability.
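
The quantize-and-encode stage can be sketched as follows; the pairing of zero-run lengths and coefficient values with order-0 Exp-Golomb code words is an illustrative assumption, not the patent's exact bit stream.

    # Sketch: zero run-length encoding of quantized wavelet coefficients, with the
    # resulting symbols coded by order-0 Exp-Golomb codes.

    def exp_golomb0(n):
        """Order-0 Exp-Golomb code word for a non-negative integer."""
        bits = bin(n + 1)[2:]
        return '0' * (len(bits) - 1) + bits

    def signed_to_unsigned(v):
        """Map signed values to non-negative integers (0,-1,1,-2,2 -> 0,1,2,3,4)."""
        return 2 * v - 1 if v > 0 else -2 * v

    def encode_coefficients(coeffs):
        bitstream, zero_run = [], 0
        for c in coeffs:
            if c == 0:
                zero_run += 1
            else:
                bitstream.append(exp_golomb0(zero_run))       # length of zero run
                bitstream.append(exp_golomb0(signed_to_unsigned(c)))
                zero_run = 0
        bitstream.append(exp_golomb0(zero_run))               # trailing zeros
        return ''.join(bitstream)

    coeffs = [12, 0, 0, 0, -3, 0, 0, 0, 0, 0, 0, 1, 0, 0]
    bits = encode_coefficients(coeffs)
    print(bits, '({} bits)'.format(len(bits)))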

7 citations


Proceedings ArticleDOI
01 Nov 2010
TL;DR: A new algorithm that meets the constraint of processing one pixel per clock cycle is described, based on run-length encoding the horizontal cracks between object and background pixels.
Abstract: Conventional chain coding techniques require random access to the input image. For stream processing, it is necessary to perform all of the processing in a single raster based scan. An FPGA implementation adds the constraint of processing one pixel per clock cycle. A new algorithm that meets these constraints is described. It is based on run-length encoding the horizontal cracks between object and background pixels. If necessary, the crack run-length code can be converted to a Freeman chain code for subsequent processing.

5 citations


Proceedings ArticleDOI
11 Nov 2010
TL;DR: This work presents an architecture for biomedical signal compression based on the Discrete Wavelet Transform (DWT) and run length encoding, comprising a bank of registers, a coefficients block, a control unit, a multiplier/adder, thresholding and encoding.
Abstract: In real-time compression algorithms, the mathematical model used must be easy to express in a hardware description language. Most such designs have been prototyped on Field Programmable Gate Arrays (FPGAs) because they are fast and reliable. In this work, we present an architecture for biomedical compression based on the Discrete Wavelet Transform (DWT) and run length encoding: a bank of registers, a coefficients block, a control unit, a multiplier/adder, thresholding and encoding. The DWT is performed at one level with sym4, the coefficients are thresholded by a hard rule, and encoding is by zero run length. The hardware resources correspond to 7% of those available in the Xilinx Spartan-3; each module was simulated in ModelSim.
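
In software terms, the pipeline corresponds roughly to the sketch below (assuming NumPy and the PyWavelets package; the threshold value and the run format are illustrative): a one-level sym4 DWT, hard thresholding of the detail coefficients, and zero run-length encoding of the result.

    # Software sketch of the hardware pipeline: one-level sym4 DWT, hard
    # thresholding, zero run-length encoding. Requires numpy and PyWavelets.
    import numpy as np
    import pywt

    def hard_threshold(coeffs, thr):
        return np.where(np.abs(coeffs) >= thr, coeffs, 0.0)

    def zero_rle(values):
        """Emit (zero_run_length, value) pairs for the non-zero entries."""
        pairs, run = [], 0
        for v in values:
            if v == 0:
                run += 1
            else:
                pairs.append((run, float(v)))
                run = 0
        return pairs, run        # trailing run kept so the length is recoverable

    signal = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.05 * np.random.randn(512)
    approx, detail = pywt.wavedec(signal, 'sym4', level=1)
    detail = hard_threshold(detail, thr=0.1)
    pairs, tail = zero_rle(detail)
    print(len(pairs), 'non-zero detail coefficients kept out of', len(detail))
    reconstructed = pywt.waverec([approx, detail], 'sym4')
    print('max reconstruction error:', np.max(np.abs(reconstructed[:len(signal)] - signal)))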

5 citations


Proceedings ArticleDOI
03 Aug 2010
TL;DR: A custom designed communication protocol for bidirectional data telemetry to and from the implanted module is presented and a global controller is also presented which configures, operates and unites all the modules together effectively and efficiently into a 32-channel system.
Abstract: Multi-channel neural signal recordings need high data compression and efficient data transmission. Our previous work has shown a practical data compression solution based on discrete wavelet transform, multi-level thresholding and run length encoding. This paper presents a custom designed communication protocol for bidirectional data telemetry to and from the implanted module. A global controller is also presented which configures, operates and unites all the modules together effectively and efficiently into a 32-channel system. Performance of the communication protocol and the compression engine is analyzed.

Proceedings ArticleDOI
07 Jul 2010
TL;DR: This work presents a framework where compressed ID-shadow-maps are used for real-time rendering of static scene shadows, and the proposed decompression shader-program and the underlying data structures can be applied to any type of array consisting of integers.
Abstract: ID shadow-maps are used for robust real-time rendering of shadows. The primary disadvantage of using shadow-maps is their excessive size for large scenes when high-quality shadows are needed. To eliminate large memory requirements and texture-size limitations of the current generation of GPUs, texture compression is an important tool. We present a framework where compressed ID-shadow-maps are used for real-time rendering of static scene shadows. The texture compression is performed off-line on the CPU and real-time decompression is performed on the GPU within a fragment-shader for shadowing the pixels. The use of ID shadow-maps (instead of conventional depth-based shadow-maps) is the key to high compression ratios. The ID shadow-map is compressed on the CPU by first partitioning it into blocks. Each compressed block is packed densely into a global array, while a pointer table is constructed that holds a pointer to the start of every compressed block in the global array. This data organization provides the GPU with random access to the start of each compressed block, thus enabling fast parallel decompression. The proposed decompression shader-program and the underlying data structures can be applied to any type of array consisting of integers. The framework is implemented using OpenGL and GLSL.
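
A CPU-side sketch of the data layout (simple RLE stands in for the paper's block compressor, and the block size is an assumption): each block is compressed independently, packed into one global array, and indexed by a pointer table, so any block can be decompressed on its own the way a fragment shader would.

    # Each block of an integer ID map is run-length compressed, packed into one
    # global array, and indexed by a pointer table for per-block random access.
    import numpy as np

    BLOCK = 4

    def compress_block(block):
        flat, out, i = block.ravel(), [], 0
        while i < len(flat):
            j = i
            while j < len(flat) and flat[j] == flat[i]:
                j += 1
            out += [int(flat[i]), j - i]          # (id, run) pairs, flattened
            i = j
        return out

    def compress_id_map(id_map):
        global_array, pointers = [], []
        h, w = id_map.shape
        for by in range(0, h, BLOCK):
            for bx in range(0, w, BLOCK):
                pointers.append(len(global_array))
                global_array += compress_block(id_map[by:by+BLOCK, bx:bx+BLOCK])
        pointers.append(len(global_array))        # sentinel for the last block
        return np.array(global_array), np.array(pointers)

    def decompress_block(global_array, pointers, block_index):
        start, end = pointers[block_index], pointers[block_index + 1]
        ids = []
        for k in range(start, end, 2):
            ids += [int(global_array[k])] * int(global_array[k + 1])
        return np.array(ids).reshape(BLOCK, BLOCK)

    id_map = np.zeros((8, 8), dtype=int)
    id_map[2:6, 2:6] = 7                          # one occluder ID
    packed, table = compress_id_map(id_map)
    print(decompress_block(packed, table, 0))     # random access to block 0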

Journal ArticleDOI
Jiechen Wang1, Can Cui1, Yingxia Pu1, Jinsong Ma1, Gang Chen1 
TL;DR: An algorithm of buffer construction incorporating run-length encoding and the idea of raster overlay method, on which the raster-based operations are carried out has integrated advantages with respect to time complexity, space complexity and computational accuracy.
Abstract: This paper presents an algorithm for buffer construction incorporating run-length encoding and the idea of the raster overlay method. In traditional raster methods, the buffer target is traced and scanned using a 'brush' whose width is equal to the buffer distance. During this process, the brushed raster grids are marked. Then, by carrying out dynamic calculation on these marked grids, the buffer zone, constituted of grids, is generated. Finally, the desired boundary of the buffer zone can be obtained through vectorisation. Considering the obvious drawbacks of raster data in computing efficiency and storage capacity, this paper puts forward the idea of storing raster data by means of run-length encoding, on which the raster-based operations are carried out. In order to improve the spatial precision, the borderlines of each run-length unit are recorded as real-valued data. The tests and analyses indicate that this algorithm has integrated advantages with respect to time complexity, space complexity and computational accuracy.

Journal Article
TL;DR: This paper introduces a quantum evolutionary algorithm to establish the model and design the algorithm for hierarchical SOC under a power constraint and obtain the corresponding test set.
Abstract: Aiming at the reduction of SOC test time and test data volume, this paper introduces a quantum evolutionary algorithm to establish the model and design the algorithm for hierarchical SOC under the power constraint and obtain the corresponding test set. Subsequently, the test sets of multiple cores are integrated through shared broadcast technology, and the method of alternating run-length encoding is then combined to compress the test set. This method takes into account both the "0" and "1" run lengths, so it can greatly reduce the number of shorter runs. The experimental results on SOC benchmarks show that, compared with other algorithms, the quantum algorithm can efficiently meet the test power demand while achieving shorter test time; compared with other compression encoding methods, the method of this paper achieves more effective compression.
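
Alternating run-length encoding can be sketched as below (the actual code words assigned to the run lengths in the paper are omitted): the first bit and the lengths of the alternating "0" and "1" runs are recorded, so short runs of either polarity stay cheap.

    # Sketch of alternating run-length encoding for a test-data bit stream:
    # both "0" runs and "1" runs are represented explicitly.

    def alt_rle_encode(bits):
        first, runs, i = bits[0], [], 0
        while i < len(bits):
            j = i
            while j < len(bits) and bits[j] == bits[i]:
                j += 1
            runs.append(j - i)
            i = j
        return first, runs

    def alt_rle_decode(first, runs):
        out, bit = [], first
        for r in runs:
            out += [bit] * r
            bit ^= 1                      # runs alternate by construction
        return out

    test_vector = [0,0,0,0,1,1,0,0,0,0,0,0,0,1,0,0,1,1,1,1,1,1,1,1]
    first, runs = alt_rle_encode(test_vector)
    print(first, runs)                    # 0 [4, 2, 7, 1, 2, 8]
    assert alt_rle_decode(first, runs) == test_vector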

01 Jan 2010
TL;DR: Modified Set Partitioning in Hierarchical Tree with Run Length Encoding is a new framework for fingerprint image compression that uses Peak Signal to noise ratio and Mean Square Error to compute the picture quality of fingerprint images.
Abstract: Modified Set Partitioning in Hierarchical Tree with Run Length Encoding is a new framework proposed for fingerprint image compression. The proposed method performs better because a larger number of images related to the fingerprint image are retrieved. Experiments on an image database of grayscale bitmap images show that the proposed technique performs well in compression and decompression. We use Peak Signal-to-Noise Ratio [3] and Mean Square Error [3] to compute the picture quality of fingerprint images.
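
The two quality metrics can be computed directly; a small sketch, assuming 8-bit grayscale images so that the peak value is 255:

    # MSE and PSNR as used to judge the reconstructed fingerprint images.
    import numpy as np

    def mse(original, reconstructed):
        diff = original.astype(float) - reconstructed.astype(float)
        return float(np.mean(diff ** 2))

    def psnr(original, reconstructed, peak=255.0):
        err = mse(original, reconstructed)
        return float('inf') if err == 0 else 10.0 * np.log10(peak ** 2 / err)

    a = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    b = np.clip(a.astype(int) + np.random.randint(-2, 3, a.shape), 0, 255)
    print(mse(a, b), psnr(a, b))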

Patent
30 Mar 2010
TL;DR: In this paper, the payloads of the data packets are compressed in addition to the compression of headers of data packets, without an intermediate storage, using Lempel Ziv algorithm or run length encoding algorithm.
Abstract: The method involves transmitting data using protocols such as the real-time transport protocol (RTP), user datagram protocol (UDP), transmission control protocol (TCP) or internet protocol (IP). The compression of the packet headers is performed using the Lempel-Ziv algorithm or a run-length encoding algorithm. The payloads of the data packets are compressed in addition to the headers of the data packets, without intermediate storage.

Journal ArticleDOI
TL;DR: This work proposes a new terrain compression technique that focuses on improving slope accuracy in compression of high resolution terrain data and proposes a Minimum Spanning Tree based encoding scheme that takes advantage of the spatial correlation between selected points.
Abstract: Accurate terrain representation with appropriate preservation of important terrain characteristics, especially slope steepness, is becoming more crucial and fundamental as geographical models become more complex. Based on our earlier success with Overdetermined Laplacian Partial Differential Equations (ODETLAP), which allows for compact yet accurate compression of the Digital Elevation Model (DEM), we propose a new terrain compression technique that focuses on improving slope accuracy in compression of high resolution terrain data. With high slope accuracy and a high compression ratio, this technique will help geographical applications that require high precision in slope yet also have strict constraints on data size. Our proposed technique makes the following contribution: we modify the ODETLAP system by adding slope equations for some key points picked automatically, so that we can compress the elevation without explicitly storing slope values. By adding these slope equations, we can perturb the elevation in such a way that slopes computed from the reconstructed surface are accurate. Note that we are not storing the slope explicitly; instead we only store the elevation difference at a few locations. Since the ultimate goal is to have a compact terrain representation, encoding is also an integral part of this research. We have used Run Length Encoding (RLE) and linear prediction in the past, which gave us substantial file size reduction. In addition, we also propose a Minimum Spanning Tree based encoding scheme that takes advantage of the spatial correlation between selected points. On a typical test, our technique is able to achieve a 1:10 compression at the cost of 4.23 degrees of RMS slope error and 3.30 meters of RMS elevation error.

Proceedings ArticleDOI
25 Oct 2010
TL;DR: A technical framework for DCT-based vector data compression is proposed for handling larger vector datasets and faster transmission, and experimental results show that the algorithm is fairly efficient.
Abstract: A technical framework for DCT-based vector data compression is proposed for handling larger vector datasets and faster transmission. A dynamic partitioning approach is applied to partition the vector data to be compressed into sets of items, in order to avoid the side effects of a fixed N value. The DC coefficients and AC coefficients are encoded with Differential Pulse Code Modulation and Run Length Encoding, respectively. Experimental results show that the algorithm is fairly efficient and very simple.
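
The coefficient coding step can be illustrated with a short sketch (the block DCT itself is omitted, and the end-of-block convention is an assumption): DC coefficients of successive items are DPCM-coded as differences from the previous DC value, while AC coefficients are zero run-length coded.

    # DC coefficients are DPCM-coded, AC coefficients are zero run-length coded.

    def dpcm_encode(dc_values):
        prev, out = 0, []
        for v in dc_values:
            out.append(v - prev)          # transmit the difference only
            prev = v
        return out

    def rle_encode_ac(ac_values):
        pairs, zero_run = [], 0
        for v in ac_values:
            if v == 0:
                zero_run += 1
            else:
                pairs.append((zero_run, v))
                zero_run = 0
        pairs.append((zero_run, None))    # end-of-block marker (assumed convention)
        return pairs

    dc = [120, 118, 121, 121, 119]
    ac = [5, 0, 0, -2, 0, 0, 0, 0, 1, 0, 0, 0]
    print(dpcm_encode(dc))                # [120, -2, 3, 0, -2]
    print(rle_encode_ac(ac))              # [(0, 5), (2, -2), (4, 1), (3, None)]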

Journal Article
TL;DR: The paper introduces run-length encoding into the algorithm to handle connected components that are split by image segmentation, and shows that the parallel algorithm runs faster than the traditional algorithm on a multi-core processor.
Abstract: To meet the real-time requirements of moving object detection and tracking based on omnidirectional vision, multi-core programming and parallel processing technology are applied to the redesign and implementation of the connected component labeling algorithm. The paper introduces run-length encoding into the algorithm to handle the separation of connected components caused by image segmentation: two partial connected components belonging to different task blocks are merged into one connected component. Experiments show that the parallel algorithm runs faster than the traditional algorithm on a multi-core processor. It largely solves the starvation problem in multi-core processors and makes the multi-core processor more efficient.

19 Jul 2010
TL;DR: Image files are considerably larger than text files and need a large amount of memory for storage and transmission through communication media; run-length encoding is used to compress them, so that image files can be sent and saved faster.
Abstract: Image files are relatively larger than text files and need a large amount of memory for storage and transmission through communication media. This affects storage space and data processing, especially for images. Hence the role of the image file compression software designed here, which uses the run length encoding algorithm to minimize memory use. This study discusses compression of images containing repeated grayscale values. The Run Length Encoding method compresses an image by grouping identical consecutive grayscale values and saving them in an RLE format that cannot be viewed directly. To view a compressed image file, it must be decompressed by restoring the grayscale values of the image. Through this application, image files become smaller, which increases the speed of sending and saving them.
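
A minimal sketch of the described scheme (list-based, ignoring the actual RLE file format): identical consecutive grayscale values are grouped into (value, count) pairs, and decompression restores the original row; comparing lengths shows the memory saving on uniform regions.

    # Grayscale RLE: consecutive identical pixel values become (value, count) pairs.

    def rle_compress(pixels):
        pairs, i = [], 0
        while i < len(pixels):
            j = i
            while j < len(pixels) and pixels[j] == pixels[i]:
                j += 1
            pairs.append((pixels[i], j - i))
            i = j
        return pairs

    def rle_decompress(pairs):
        pixels = []
        for value, count in pairs:
            pixels.extend([value] * count)
        return pixels

    row = [255] * 40 + [128] * 5 + [255] * 19            # one image row
    compressed = rle_compress(row)
    print(compressed)                                    # [(255, 40), (128, 5), (255, 19)]
    print('raw values:', len(row), ' RLE pairs:', len(compressed))
    assert rle_decompress(compressed) == row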

Journal Article
TL;DR: A new algorithm for binary connected component labeling based on run-length encoding (RLE) and union-find sets is put forward; it can label connected components of any shape quickly and exactly, saves memory, and facilitates subsequent image analysis.
Abstract: Based on a detailed analysis of the advantages and disadvantages of existing connected-component labeling (CCL) algorithms, a new algorithm for binary connected component labeling based on run-length encoding (RLE) and union-find sets is put forward. The new algorithm uses the RLE as the basic processing unit, converts the label merging of connected RLEs into set grouping under an equivalence relation, and uses union-find sets, the standard realization of set grouping, to solve the label merging of connected RLEs. The label merging procedure has been optimized: the union operation is modified by adding a "weighted rule" to avoid producing a degenerate tree, and "path compression" is adopted when implementing the find operation, so the time complexity of label merging is O(nα(n)). The experiments show that the new algorithm can label connected components of any shape very quickly and exactly, saves memory, and facilitates subsequent image analysis.
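
The label-merging structure described, union-find with the weighted rule and path compression, can be sketched as follows (a generic implementation, not the paper's code); the two optimizations together give the O(nα(n)) bound quoted above.

    # Union-find with union by size ("weighted rule") and path compression,
    # giving near-constant amortized cost per label merge.

    class LabelSets:
        def __init__(self, n):
            self.parent = list(range(n))
            self.size = [1] * n

        def find(self, x):
            root = x
            while self.parent[root] != root:
                root = self.parent[root]
            while self.parent[x] != root:         # path compression
                self.parent[x], x = root, self.parent[x]
            return root

        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra == rb:
                return
            if self.size[ra] < self.size[rb]:     # weighted rule: attach the
                ra, rb = rb, ra                   # smaller tree under the larger
            self.parent[rb] = ra
            self.size[ra] += self.size[rb]

    labels = LabelSets(6)
    for a, b in [(0, 1), (2, 3), (1, 3), (4, 5)]:  # equivalences found during the scan
        labels.union(a, b)
    print([labels.find(i) for i in range(6)])      # two connected label groups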

Patent
09 Dec 2010
TL;DR: A run length vector generation unit performs run length encoding on a binary image, composed of background pixels and outline pixels, that represents biological information unique to a living body, and generates background run length vector information giving the numbers of consecutive background pixels.
Abstract: PROBLEM TO BE SOLVED: To provide an information processing apparatus, method and program that facilitate narrowing down the number of registered data items to be used as authentication candidates in one-to-N authentication, with a method whose arithmetic load is relatively small. SOLUTION: The information processing apparatus is provided with: a run length vector generation unit for performing run length encoding processing on a binary image, configured of background pixels (elements having pixel values representing the background) and outline pixels (elements having pixel values representing an outline), that shows biological information unique to a living body, and for generating background run length vector information, i.e. information giving the number of consecutive background pixels in the binary image representing the biological information; and an authentication unit for authenticating the generated background run length vector information on the basis of registered background run length vector information, i.e. preregistered background run length vector information.


01 Jan 2010
TL;DR: The test results indicate that the modified compression scheme shows a good performance aspect in addition to its simplicity.
Abstract: The aim of this research is to investigate the performance of a suggested image compression system. The proposed system uses the tap-9/7 wavelet transform to decompose the image signal, and then applies the Discrete Cosine Transform (DCT) and uniform quantization to compress the approximation coefficients. The detail coefficients are coded by hierarchical uniform quantization, and then the original and modified Set Partitioning In Hierarchical Trees (SPIHT) methods are applied to each color band separately. Finally, some spatial coding steps, such as Run Length Encoding (RLE) and shift coding, are applied to the List of Significant Pixels (LSP) to gain more compression. The test results indicate that the modified compression scheme shows good performance in addition to its simplicity.

01 Jan 2010
TL;DR: This paper presents a lossy compression technique for 5D geospatial data as a whole, instead of applying 3D compression method on each 3D slice of the 5D dataset, and shows that the proposed compression technique outperforms current 3D-SPIHT method on selected datasets.
Abstract: A five-dimensional (5D) geospatial dataset consists of several multivariable 4D datasets, which are sequences of time-varying volumetric 3D geographical datasets. These datasets are typically very large and demand a great amount of resources for storage and transmission. In this paper, we present a lossy compression technique for 5D geospatial data as a whole, instead of applying a 3D compression method to each 3D slice of the 5D dataset. Our lossy compression technique efficiently exploits spatial and temporal similarities between 2D data slices and 3D volumes in 4D oceanographic datasets. 5D-ODETLAP, which is an extension of, but essentially different from, the Laplacian partial differential equation, solves a sparse overdetermined system of equations to compute data at each point in (x,y,z,t,v) space from the data given at a representative set of points. 5D-ODETLAP is not restricted to certain types of datasets. For different datasets, it has the flexibility to approximate each one according to its respective data distribution by using suitable parameters. The final approximation is further compressed using Run Length Encoding. We use different datasets and metrics to test 5D-ODETLAP, and performance evaluations have shown that the proposed compression technique outperforms the current 3D-SPIHT method on our selected datasets from the World Ocean Atlas 2005. With about the same mean percentage error, 5D-ODETLAP's compression produces a much smaller maximum error than 3D-SPIHT. A user-defined mean or maximum error can be set to obtain the desired compression in the proposed method, which is not possible with 3D-SPIHT.

Proceedings ArticleDOI
25 Jul 2010
TL;DR: A new fast algorithm for progressive compression of remote sensing images is proposed that can decrease the coding and decoding time markedly compared with the JPEG2000 algorithm, while maintaining favorable lossy compression performance.
Abstract: A new fast algorithm for progressive compression of remote sensing images is proposed. This algorithm has three embedded characteristics (resolution, region of interest, and fidelity), low computational complexity and favorable lossy compression performance. The wavelet transform coefficients at every resolution are partitioned into many precincts according to area. In each sub-band of each precinct, the spatio-temporal neighborhood relationship is used to remove redundancies between different bit-planes and between neighbors in the same bit-plane, and the bits of every bit-plane are modeled and reordered to form three sub-processes and run-length encoded in only one pass. Adaptive Golomb-Rice coding of the dyadic sequence is used for effective entropy coding. In addition, uniform scalar quantization with a dead-zone and an adjustable parameter is used. The experiments show that the new algorithm can decrease the coding and decoding time markedly compared with the JPEG2000 algorithm, while maintaining favorable lossy compression performance.
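
Golomb-Rice coding of the run-length symbols can be sketched as below; the rule used here to adapt the parameter k from a running mean is an illustrative assumption, not the paper's adaptation scheme.

    # Golomb-Rice coding: each value splits into a unary quotient and a k-bit
    # remainder; k is adapted from a running mean of recent values.

    def golomb_rice(value, k):
        q, r = value >> k, value & ((1 << k) - 1)
        code = '1' * q + '0'                       # unary quotient + terminator
        if k:
            code += format(r, '0{}b'.format(k))    # k-bit binary remainder
        return code

    def encode_adaptive(values):
        bits, mean = [], 1.0
        for v in values:
            k = max(0, int(mean).bit_length() - 1)   # pick k close to log2(mean)
            bits.append(golomb_rice(v, k))
            mean = 0.875 * mean + 0.125 * v          # exponential running mean
        return bits

    run_lengths = [0, 1, 0, 3, 2, 7, 6, 5, 12, 0, 1]
    for v, code in zip(run_lengths, encode_adaptive(run_lengths)):
        print(v, code)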

Patent
08 Dec 2010
TL;DR: A run-length encoding method consisting of four stages is proposed: single pixels of individual color value are encoded using the shortest code words, shorter and longer sequences of pixels of a predetermined color (transparent) are encoded using the second or third shortest code words, and shorter and longer sequences of pixels of equal color value use the third or fourth shortest code words.
Abstract: PROBLEM TO BE SOLVED: To compress subtitle data used for the display of text information and graphical data and coded as pixel bitmaps. SOLUTION: The run-length encoding method consists of four stages. Concretely, shorter and longer sequences of pixels of a predetermined color (transparent) are encoded using the second or third shortest code words, single pixels of individual color value are encoded using the shortest code words, and shorter and longer sequences of pixels of equal color value are encoded using the third or fourth shortest code words.

Patent
04 Nov 2010
TL;DR: An advanced adaptation of bitmap encoding for HDTV, as defined for the Blu-ray (R) Disc Prerecorded format, provides optimized compression results for such subtitling bitmaps.
Abstract: PROBLEM TO BE SOLVED: To remove the redundancy of superimposed subtitles including text information and graphics data. SOLUTION: The size of subtitle bitmaps may exceed video frame dimensions, so that only portions are displayed at a time. The bitmaps contain a plurality of transparent pixels. An advanced adaptation of bitmap encoding for HDTV, e.g. 1,920×1,280 pixels per frame as defined for the Blu-ray (R) Disc Prerecorded format, provides optimized compression results for such subtitling bitmaps. The adaptation is a four-stage run length encoding: shorter and longer sequences of pixels of a prescribed color (transparent) are encoded using the second or the third shortest code words, while single pixels of different color are encoded using the shortest code words, and shorter and longer sequences of pixels of equal color use the third or the fourth shortest code words.