
Showing papers on "Run-length encoding published in 2015"


Journal ArticleDOI
TL;DR: A novel approach to Compressed Path Databases, space-efficient oracles used to very quickly identify the first edge on a shortest path, is introduced; it is significantly faster than state-of-the-art first-move oracles from the literature.
Abstract: We introduce a novel approach to Compressed Path Databases, space efficient oracles used to very quickly identify the first edge on a shortest path. Our algorithm achieves query running times on the 100 nanosecond scale, being significantly faster than state-of-the-art first-move oracles from the literature. Space consumption is competitive, due to a compression approach that rearranges rows and columns in a first-move matrix and then performs run length encoding (RLE) on the contents of the matrix. One variant of our implemented system was, by a convincing margin, the fastest entry in the 2014 Grid-Based Path Planning Competition. We give a first tractability analysis for the compression scheme used by our algorithm. We study the complexity of computing a database of minimum size for general directed and undirected graphs. We find that in both cases the problem is NP-complete. We also show that, for graphs which can be decomposed along articulation points, the problem can be decomposed into independent parts, with a corresponding reduction in its level of difficulty. In particular, this leads to simple and tractable algorithms with linear running time which yield optimal compression results for trees.
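To make the row-compression idea concrete, here is a minimal Python sketch (not the paper's implementation; all names are illustrative): one row of the first-move matrix is collapsed into (start, move) runs, and a query binary-searches the run starts.

```python
from bisect import bisect_right

def compress_row(first_moves):
    """Collapse a row of first-move labels into (start_index, move) runs."""
    runs = []
    for i, move in enumerate(first_moves):
        if not runs or runs[-1][1] != move:
            runs.append((i, move))
    return runs

def first_move(runs, target):
    """Look up the first move toward column `target` in O(log #runs)."""
    starts = [s for s, _ in runs]
    return runs[bisect_right(starts, target) - 1][1]

row = ['N', 'N', 'N', 'E', 'E', 'S', 'S', 'S', 'S']
runs = compress_row(row)            # [(0, 'N'), (3, 'E'), (5, 'S')]
assert first_move(runs, 4) == 'E'
```

The row and column rearrangement studied in the paper aims to make such runs as long as possible before the RLE step.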

21 citations


Book ChapterDOI
16 Sep 2015
TL;DR: Initial results indicate that the quality of 3D reconstructions is almost indistinguishable from the original for compression ratios of up to 10:1, and that a hybrid lossless-lossy compression approach provides a good tradeoff between quality and compression ratio.
Abstract: We developed and evaluated different schemes for the real-time compression of multiple depth image streams. Our analysis suggests that a hybrid lossless-lossy compression approach provides a good tradeoff between quality and compression ratio. Lossless compression based on run length encoding is used to preserve the information in the highest bits of the depth image pixels. The lowest 10 bits of a depth pixel value are directly encoded in the Y channel of a YUV image and encoded by an x264 codec. Our experiments show that the proposed method can encode and decode multiple depth image streams in less than 12 ms on average. Depending on the compression level, which can be adjusted during application runtime, we are able to achieve a compression ratio of about 4:1 to 20:1. Initial results indicate that the quality of 3D reconstructions is almost indistinguishable from the original for compression ratios of up to 10:1.
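A minimal sketch of the bit-split idea, assuming 16-bit depth pixels; the lossy x264 path for the low bits is elided and only the RLE-friendly high-bit path is shown.

```python
import numpy as np

def split_depth_frame(depth, low_bits=10):
    """Split each 16-bit depth pixel: the low 10 bits would go to the Y
    channel of a YUV frame (x264 path, not shown); the high bits have few
    distinct values and long runs, so they suit lossless RLE."""
    low = depth & ((1 << low_bits) - 1)
    high = depth >> low_bits
    return high, low

def rle(values):
    """Plain run length encoding as [value, run] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

frame = np.full((4, 4), 1234, dtype=np.uint16)   # toy depth frame
high, low = split_depth_frame(frame)
print(rle(high.ravel().tolist()))                # [[1, 16]]
```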

17 citations


Journal ArticleDOI
TL;DR: The algorithm's simplicity and low infrastructural cost make it suitable for implementation on an embedded platform for use in mobile devices.
Abstract: This study proposes an algorithm for electrocardiogram (ECG) data compression using the conventional discrete Fourier transform. The coefficients are calculated using sine and cosine basis functions instead of complex exponentials, to avoid generating complex coefficient values. Two well-defined strategies are proposed for the choice of the significant coefficients: a fixed strategy based on the selection of a fixed band-limiting frequency, and an adaptive strategy depending on the spectral energy distribution of the signal. The parameters for the two strategies are empirically selected based on an extensive study of a wide variety of ECG data chosen from different databases. The significant coefficients are encoded using a unique adaptive bit assignment scheme to optimise bit usage. The bit assignment map created to store the bit allocation information is run-length encoded to eliminate further redundancies. For the MIT-BIH arrhythmia database, the proposed technique achieves an average compression ratio of 14.67 for the fixed strategy and 16.58 for the adaptive strategy with excellent reconstruction quality, which is quite comparable to other reported techniques. The algorithm's simplicity and low infrastructural cost make it suitable for implementation on an embedded platform for use in mobile devices.
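The sine/cosine-basis idea can be illustrated with a short sketch (only the coefficient computation; the coefficient selection, bit assignment and RLE stages are not shown):

```python
import numpy as np

def real_dft(x):
    """Compute DFT coefficients with explicit cosine and sine bases so
    that all coefficient values stay real instead of complex."""
    N = len(x)
    n = np.arange(N)
    ks = np.arange(N // 2 + 1)
    a = np.array([np.dot(x, np.cos(2 * np.pi * k * n / N)) for k in ks])
    b = np.array([np.dot(x, np.sin(2 * np.pi * k * n / N)) for k in ks])
    return a, b    # one cosine and one sine coefficient per frequency
```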

17 citations


Journal ArticleDOI
TL;DR: In this study, a combination of lossless compression techniques and the Vigenere cipher was used in a text steganography scheme that uses email addresses as the keys to reconstruct the secret message embedded in the email text.
Abstract: In this study, a combination of lossless compression techniques and the Vigenere cipher was used in a text steganography scheme that uses email addresses as the keys to reconstruct the secret message embedded in the email text. After selecting the cover text with the highest repetition pattern with respect to the secret message, a distance matrix was formed. The entries of the distance matrix were compressed with the following lossless compression algorithms, applied in sequence: Run Length Encoding (RLE) + Burrows-Wheeler Transform (BWT) + Move-to-Front (MTF) + Run Length Encoding + Arithmetic Encoding (AE). A Latin square was then used to form stego key 1, and a Vigenere table was used to increase the complexity of extracting stego key 1. The final step was to choose e-mail addresses using stego key 1 and stego key 2 to embed the secret message into a forwarded e-mail platform. The experimental results showed that the proposed method has reasonable performance with high complexity.
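As an illustration of one stage in that pipeline, here is a minimal Move-to-Front transform; the surrounding RLE, BWT and arithmetic coding stages are elided.

```python
def mtf_encode(symbols, alphabet):
    """Move-to-Front: recently used symbols map to small indices, leaving
    long low-value runs for the following RLE stage to compress."""
    table = list(alphabet)
    out = []
    for s in symbols:
        i = table.index(s)
        out.append(i)
        table.insert(0, table.pop(i))
    return out

assert mtf_encode('aaabbb', 'ab') == [0, 0, 0, 1, 0, 0]
```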

12 citations


Book ChapterDOI
01 Jan 2015
TL;DR: The authors propose a novel approach to image compression that improves the compression ratio (CR) without losing PSNR or image quality, using less bandwidth.
Abstract: Image compression is a very important technique for efficient transmission as well as storage of images. The demand for communicating multimedia data through telecommunication networks and accessing it over the Internet with limited bandwidth is growing explosively. Image data make up a significant portion of multimedia data and occupy most of the communication bandwidth in multimedia communication; the development of efficient image compression techniques is therefore necessary. The proposed image compression technique combines the 2D Haar wavelet transform with hard thresholding and run length encoding. JPEG2000 is a standard image compression method capable of producing very high quality compressed images. Conventional Run Length Encoding (CRLE), Optimized Run Length Encoding (ORLE) and Enhanced Run Length Encoding (ERLE) are the RLE variants applied to both the proposed compression method and JPEG2000. Conventional RLE produces the most efficient results for the proposed method, whereas Enhanced RLE produces the most efficient results with JPEG2000. This is the novel approach the authors propose for image compression: improving the compression ratio (CR) without losing PSNR or image quality, using less bandwidth.
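A minimal sketch of the transform-and-threshold steps of such a pipeline (one 1-D Haar level; in the proposed method the thresholded coefficients then feed the RLE stage):

```python
import numpy as np

def haar_step(x):
    """One level of the 1-D Haar transform: pairwise averages, then
    pairwise differences (the detail coefficients)."""
    pairs = x.reshape(-1, 2)
    avg = (pairs[:, 0] + pairs[:, 1]) / 2.0
    dif = (pairs[:, 0] - pairs[:, 1]) / 2.0
    return np.concatenate([avg, dif])

def hard_threshold(coeffs, t):
    """Zero out small coefficients; the resulting zero runs are exactly
    what run length encoding compresses well."""
    return np.where(np.abs(coeffs) < t, 0.0, coeffs)

x = np.array([10.0, 10.0, 10.0, 12.0])
print(hard_threshold(haar_step(x), t=0.5))   # [10. 11.  0. -1.]
```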

6 citations


Patent
Igor Kozintsev
23 Feb 2015
TL;DR: A method for updating a run length encoded (RLE) stream is presented: an element having an insertion value is received for insertion into the RLE stream at an insertion position, the insertion value having one of a plurality of values.
Abstract: A method for updating a run length encoded (RLE) stream includes: receiving an element having an insertion value to be inserted into the RLE stream at an insertion position, the insertion value having one of a plurality of values, the RLE stream having elements arranged in runs, and each of the elements having one of the values; identifying a run containing the insertion position; determining whether the insertion value is the same as the value of the element at the insertion position; when the insertion value is different from the value of the element at the insertion position: determining whether the insertion position is adjacent to one or more matching runs of the runs, each element of the matching runs having a same value as the insertion value; and extending one of the matching runs when the insertion position is adjacent to only one of the matching runs.
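A simplified Python sketch of the case analysis in the claim, with runs held as [value, length] pairs; the boundary handling and the final merge pass are illustrative choices, not the patent's exact procedure.

```python
def insert_into_rle(runs, pos, value):
    """Insert `value` at stream position `pos` into runs of [value, length]."""
    out, offset, done = [], 0, False
    for v, n in runs:
        if not done and offset <= pos <= offset + n:
            if v == value:                       # same value: extend the run
                out.append([v, n + 1])
            else:                                # different value: split it
                if pos - offset:
                    out.append([v, pos - offset])
                out.append([value, 1])
                if offset + n - pos:
                    out.append([v, offset + n - pos])
            done = True
        else:
            out.append([v, n])
        offset += n
    if not done:                                 # insertion past the end
        out.append([value, 1])
    merged = []                                  # merge adjacent equal runs
    for v, n in out:
        if merged and merged[-1][0] == v:
            merged[-1][1] += n
        else:
            merged.append([v, n])
    return merged

# inserting 'b' at the boundary of two runs extends the matching 'b' run
assert insert_into_rle([['a', 2], ['b', 3]], 2, 'b') == [['a', 2], ['b', 4]]
```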

4 citations


Proceedings ArticleDOI
09 Jul 2015
TL;DR: The results establish that the developed compression scheme increases the compression ratio to a reasonable value without sacrificing its lossless nature.
Abstract: In this paper, a new compression scheme for direct normalized solar irradiance data is formulated, based on jointly applying delta modulation with biasing and run length encoding. The key features of this compression algorithm (Delta Modulation with Biasing and Run Length Encoding, DMBRLE) are its low memory requirement, since it is a direct compression method, and its lossless nature. The algorithm's main goal is achieved by compressing runs of identical samples as well as regions of low-frequency sample change. This rests on jointly applying First Sample Difference (FSD) and S-Run Length Encoding (S-RLE): FSD is modified to concentrate on the low-frequency spectrum, while S-RLE is structured to capture regions of successive identical sample values. The compression algorithm is refined stepwise, and its effect is strongly non-linear. The quality of the recovered signal is demonstrated by computing the Normalized Root Mean Square Error (NRMSE). The results establish that the developed compression scheme increases the compression ratio to a reasonable value without sacrificing its lossless nature.
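A hedged sketch of the first-difference-plus-runs idea (the biasing step and the exact S-RLE symbol format are not specified here):

```python
def fsd(samples):
    """First Sample Difference: keep the first sample, then the deltas.
    Slowly varying irradiance yields many repeated, often zero, deltas."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def rle(seq):
    """Collapse runs of identical values into [value, count] pairs."""
    out = []
    for v in seq:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

irradiance = [500, 500, 500, 502, 502, 502, 502]
print(rle(fsd(irradiance)))   # [[500, 1], [0, 2], [2, 1], [0, 3]]
```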

3 citations


Patent
29 Apr 2015
TL;DR: A database can receive a query from a remote computing system, and the database can include a linear run length encoded compressed column, based on an original column of time series data partitioned into runs containing consecutive values and generated by run length encoding.
Abstract: A database can receive a query from a remote computing system. The database can include (i) a linear run length encoded compressed column, based on an original column of time series data partitioned into runs containing consecutive values and generated by run length encoding, (ii) a run index comprising at least one run index value having a run index position, the at least one run index value identifying runs in the original column, and (iii) an offsets column identifying the run index positions corresponding to the runs that contain a desired value. Using the run index, data responsive to the query can be identified. The identified data responsive to the query can be transmitted by the database to the remote computing system.
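An illustrative sketch (not the patent's structures) of answering a value query against an RLE column through a run index:

```python
# Illustrative structures: an RLE column and a run index (value -> runs).
runs = [('a', 3), ('b', 2), ('a', 4)]       # linear RLE of the column
run_index = {'a': [0, 2], 'b': [1]}         # which runs hold each value

def rows_with(value):
    """Expand only the runs listed in the run index back to row numbers."""
    starts, total = [], 0
    for _, n in runs:
        starts.append(total)
        total += n
    for r in run_index.get(value, ()):
        yield from range(starts[r], starts[r] + runs[r][1])

assert list(rows_with('b')) == [3, 4]
assert list(rows_with('a')) == [0, 1, 2, 5, 6, 7, 8]
```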

3 citations


Journal Article
TL;DR: The technique proposed in this paper hides secret data in images and also reduces the size of the images to meet the needs of bandwidth-limited communication systems.
Abstract: Secure communication with the least possible utilization of the communication link is a need of modern communication systems. The threats and limited bandwidth of the Internet require high security and size reduction of data. The technique proposed in this paper hides secret data in images and also reduces the size of the images to meet the needs of bandwidth-limited communication systems. A least-significant-bit substitution method is adopted for hiding data in the image, and an RLE scheme is then used to reduce the size of the stego image. A hiding capacity of 50% is achieved with a reasonably high PSNR (greater than the 30 dB limit) and a compression ratio greater than 1. The proposed method ensures 100% recovery of the secret message at the receiver side.
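A minimal sketch of the LSB embedding step (one bit per pixel here; the RLE pass over the stego image follows separately):

```python
def embed_lsb(pixels, bits):
    """Replace the least significant bit of each pixel with a message bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits message bits from the stego pixels."""
    return [p & 1 for p in pixels[:n_bits]]

stego = embed_lsb([100, 101, 102, 103], [1, 0, 1, 1])
assert extract_lsb(stego, 4) == [1, 0, 1, 1]
```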

3 citations


Journal Article
TL;DR: A new fingerprint compression algorithm based on Run-Length Encoding (RLE) with patch separation representation is introduced; it is suited to compressing any type of latent fingerprint image regardless of its information content.
Abstract: Image compression is currently a prominent topic for both military and commercial researchers. It is needed due to the rapid growth of digital media and the consequent need to reduce storage and transmit images effectively. Image compression attempts to reduce the number of bits required to digitally represent an image while maintaining its perceived visual quality. Biometrics covers physiological characteristics, including latent fingerprint recognition and verification, and behavioral characteristics, including voice and signature. A new fingerprint compression algorithm based on Run-Length Encoding (RLE) with patch separation representation is introduced. RLE is based on the idea of replacing a long sequence of the same symbol with a shorter sequence, and it is a good introduction to the image compression field for newcomers. The RLE fingerprint compression algorithm first constructs an Independent Component Analysis (ICA) patch extraction over predefined fingerprint image patches, avoiding explicit dictionary creation. RLE is suited to compressing any type of latent fingerprint image regardless of its information content, but the content of the data affects the compression ratio that RLE achieves.

3 citations


Posted Content
TL;DR: A three-state random evolution model is introduced as a framework for studying microtubule (MT) dynamics in the transition states of growth, pause and shrinkage, together with a non-traditional stack-run encoding scheme with 5 symbols for detecting transition states and encoding MT experimental data.
Abstract: Recent studies have revealed that microtubules (MTs) exhibit three transition states: growth, shrinkage and pause. In this paper, we first introduce a three-state random evolution model as a framework for studying MT dynamics in the transition states of growth, pause and shrinkage. We then introduce a non-traditional stack-run encoding scheme with 5 symbols for detecting transition states as well as encoding MT experimental data. Peak detection is carried out in the wavelet domain to effectively detect the three transition states. An added advantage of including peak information while encoding is that peaks are detected and encoded simultaneously in the wavelet domain, without the need for further processing after the decoding stage. Experimental results show that this form of non-traditional stack-run encoding has better compression and reconstruction performance than traditional stack-run encoding and run length encoding schemes. Parameters for MTs modeled in the three states are estimated and are shown to closely approximate the original MT data at lower compression rates. As the compression rate increases, details required to detect the transition states of MTs may be thrown away. Thus, choosing the right compression rate is a trade-off between the admissible level of error in signal reconstruction and parameter estimation and a considerable rate of compression of the MT data.

Proceedings ArticleDOI
01 Jan 2015
TL;DR: The experimental results show that the two methods are competitive if the training text and testing text are in the same set of languages, but the run-length encoding based method works better than the byte pattern based method if they are in different sets of languages.
Abstract: Text-based pictures called ASCII art are often used in Web pages, email text and so on. They enrich expression in text data, but they can be noise for natural language processing, and large ASCII art is deformed on small display devices. ASCII art can be ignored or replaced with other strings by ASCII art extraction methods, which detect the areas of ASCII art in a given text. Our research group and another research group independently proposed two different ASCII art extraction methods: a run-length encoding based method and a byte pattern based method, respectively. Both methods use text classifiers constructed by machine learning algorithms, but they use different text attributes. In this paper, we compare the two methods in ASCII art extraction experiments where the training text and testing text are in English and Japanese. Our experimental results show that the two methods are competitive if the training text and testing text are in the same set of languages, but the run-length encoding based method works better than the byte pattern based method if they are in different sets of languages.

Patent
29 Jul 2015
TL;DR: A statistical method and system for image histograms is described, in which runs of identical continuous pixels are compressed in advance by run length encoding, exploiting the fact that identical continuous pixels exist in the original image pixel data.
Abstract: The invention relates to image processing technology and discloses a statistical method for an image histogram and a corresponding system. In the method and system, identical continuous pixels are compressed in advance by run length encoding, exploiting the fact that identical continuous pixels exist in the original image pixel data; the histogram statistics for a run of identical pixels can then be finished in a single operation, and the read and write steps of histogram statistics are reduced, so that the time and power consumption of histogram statistics are greatly reduced. Moreover, run length encoding is performed while the original image pixel data are acquired, so that idle system resources can be fully utilized.
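The core saving can be shown in a few lines: the histogram is updated once per run instead of once per pixel (an illustration, not the patented circuit):

```python
def histogram_from_runs(runs, levels=256):
    """One read-modify-write per run of identical pixels, not per pixel."""
    hist = [0] * levels
    for value, length in runs:
        hist[value] += length
    return hist

# three runs cover seven pixels but cost only three histogram updates
assert histogram_from_runs([(7, 3), (9, 2), (7, 2)])[7] == 5
```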

Journal ArticleDOI
TL;DR: This paper focuses on compressing gray-scale images; the efficiency of the compression process is estimated using the compression ratio, and the quality of the image reconstructed after decompression is calculated using the peak signal-to-noise ratio.
Abstract: Multimedia data are large in size compared to plain-text data, and hence need to be compressed to be transmitted over a low-bandwidth communication channel. This paper focuses on compressing gray-scale images. Two types of compression techniques are explained: lossy and lossless. The lossy techniques covered are fractal encoding, the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT); the lossless techniques are arithmetic encoding, Run Length Encoding (RLE) and Huffman encoding. The efficiency of the compression process is estimated using the Compression Ratio (C.R.), and the quality of the image reconstructed after decompression is calculated using the Peak Signal-to-Noise Ratio (P.S.N.R.).

Posted Content
TL;DR: Modifications to RLE are discussed with which a run is stored only for characters that are actually compressible, getting rid of useless data such as the runs of characters that are incompressible in the first place.
Abstract: Run Length Encoding (RLE) is one of the oldest data-compression algorithms available, a method for compressing large data into a smaller, more compact form. It compresses by scanning the data for repetitions of the same character in a row and storing the count (called the run) and the respective character (called the run_value) as the target data. Unfortunately, it only compresses well in strict and special cases; outside of these cases it increases the data size, in the worst case even doubling it compared to the original, unprocessed data. In this paper, we discuss modifications to RLE with which we store the run only for characters that are actually compressible, getting rid of useless data such as the runs of characters that are incompressible in the first place. This is achieved by storing the character first and the run second. Additionally, we create a bit-list of 256 positions (one for every possible ASCII character) in which we store whether a specific ASCII character is compressible (1) or not (0). Using this list, we can tell whether a character is compressible (store [the character] + [its run]) or not (store [the character] only; the next byte is then NOT a run but the following character instead). Using this list, we can also successfully decode the data (if the character is compressible, the next byte is a run; if not, the next byte is a normal character). With that, we store runs only for characters that are compressible in the first place. In fact, in the worst case, the encoded data always incurs an overhead of only the size of the bit-list itself; with an alphabet of 256 different characters (i.e. ASCII), that is a maximum of 32 bytes, no matter how big the original data was. [...]
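A sketch of the described scheme for byte data; the 255-cap on run length, so a run fits in one byte, is an assumption of this sketch, not a detail from the paper.

```python
def encode(data: bytes, compressible: list) -> bytes:
    """Store [char][run] only for bytes flagged compressible; store
    incompressible bytes bare, with no run byte at all."""
    out, i = bytearray(), 0
    while i < len(data):
        c = data[i]
        if compressible[c]:
            run = 1
            while i + run < len(data) and data[i + run] == c and run < 255:
                run += 1
            out += bytes([c, run])
            i += run
        else:
            out.append(c)
            i += 1
    return bytes(out)

def decode(enc: bytes, compressible: list) -> bytes:
    """The bit-list tells the decoder whether a run byte follows."""
    out, i = bytearray(), 0
    while i < len(enc):
        c = enc[i]
        if compressible[c]:
            out += bytes([c]) * enc[i + 1]
            i += 2
        else:
            out.append(c)
            i += 1
    return bytes(out)

table = [0] * 256
table[ord('a')] = 1                  # only 'a' is flagged compressible
enc = encode(b'aaaxbaa', table)      # -> b'a\x03xba\x02'
assert decode(enc, table) == b'aaaxbaa'
```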

Proceedings ArticleDOI
01 Oct 2015
TL;DR: This paper presents a fast implementation of multi-band blending for combining a set of registered images into a composite mosaic with no visible seams and minimal texture distortion, and presents detailed quantitative results compared with OpenCV and Enblend to demonstrate the speed improvements.
Abstract: This paper presents a fast implementation of multi-band blending for combining a set of registered images into a composite mosaic with no visible seams and minimal texture distortion. We first compute a unique seam image using a two-pass nearest distance transform, which is independent of the order of the input images and has good scalability. Each individual mask can be extracted from this seam image quickly. To increase execution speed and reduce memory usage when building large-area mosaics, the seam image and masks are compressed using run-length encoding, and all subsequent mask operations are built on the run-length encoding scheme. The use of run-length encoding for mask processing leads to reduced memory requirements and compact storage of the mask data. We apply our fast blending system to large-scale data sets and present detailed quantitative results compared with OpenCV and Enblend to demonstrate the speed and memory improvements.
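Mask operations on RLE rows reduce to interval arithmetic; as one plausible example (not the authors' code), intersecting two rows of (start, end) foreground runs:

```python
def intersect_runs(a, b):
    """Intersect two sorted lists of half-open (start, end) foreground
    runs without decompressing the masks to per-pixel form."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        s = max(a[i][0], b[j][0])
        e = min(a[i][1], b[j][1])
        if s < e:
            out.append((s, e))
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

assert intersect_runs([(0, 5), (8, 12)], [(3, 9)]) == [(3, 5), (8, 9)]
```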

01 Jan 2015
TL;DR: This image encryption-then-compression (ETC) system uses an encryption scheme based on prediction error clustering and run length encoding, shown to provide a reasonably high level of security.
Abstract: In many practical scenarios, secure and efficient data transfer plays a vital role and is a main aspect of the communication system. The classical way of transmitting redundant data over a bandwidth-constrained insecure channel is to first compress it and then encrypt it. This project investigates the novelty of reversing the order of compression and encryption without compromising either the information secrecy or the compression efficiency. This leads to the problem of designing a pair of image encryption and compression algorithms such that compressing the encrypted images can still be performed efficiently. The proposed image encryption-then-compression (ETC) system uses an encryption scheme based on prediction error clustering and run length encoding, shown to provide a reasonably high level of security. Lossless or lossy image coders can be exploited to efficiently compress the encrypted images, and both have their own advantages. In this paper, data compression uses the Run Length Encoding (RLE) technique because this method can generate an exact output with low power consumption and reduced time delay. The entire system is implemented as a Verilog description and simulated using Xilinx ISE simulation tools.

Proceedings ArticleDOI
TL;DR: A novel template matching algorithm for visual inspection of bare printed circuit boards (PCBs) using run-length encoding (RLE) is proposed; it is not only fast but also more robust and reliable in its matching results.
Abstract: In this paper we propose a novel template matching algorithm for visual inspection of bare printed circuit boards (PCBs). In conventional template matching for PCB inspection, the matching score and its offsets are obtained by taking the maximum over the convolutions of the template image and the camera image. While that method is fast, the robustness and accuracy of matching are not guaranteed, due to the gap between design and implementation caused by defects and process variations. To resolve this problem, we suggest a new method that uses run-length encoding (RLE). For the template image to be matched, we accumulate foreground and background data, along with RLE data for each row and column of the template image. Using these data, we can find the x and y offsets that minimize the optimization function. The efficiency and robustness of the proposed algorithm are verified through a series of experiments. Comparing the proposed algorithm with the conventional approach shows that it is not only fast but also more robust and reliable in its matching results.

01 Jan 2015
TL;DR: The system achieves good compression results with high quality for the reconstructed audio file, as is evident from the test results presented in the paper.
Abstract: Audio compression has become an important subject in recent years. It addresses transmission requirements and storage capacity; any compression system works by eliminating the redundant parts of the file. The purpose of this research is to design and implement an efficient lossy coding system based on the discrete cosine transform (DCT). All files used in this research are stereo. The two channels are split and each channel is framed as a preparatory step for the DCT, and the DCT coefficients are quantized using an appropriate quantization factor (Qf). The main contribution of the proposed method is to collapse equal adjacent samples within a slice into one sample plus its location, so that instead of sending two equal adjacent samples, only one of them is sent together with its location. A differencing process is applied to the locations vector, and run length encoding (RLE), a type of entropy coding, is applied as the last step of the system. Audio files of different sizes and characteristics are used. The peak signal-to-noise ratio (PSNR) and compression factor (CF) are used to evaluate the performance of the system. The system achieves good compression results with high quality for the reconstructed audio file, as is evident from the test results presented in the paper. CF increases with increasing frame size and increasing Qf.


01 Jan 2015
TL;DR: Color images are preprocessed using a median filter and compressed using the DCT; a zigzag scan is applied to the DCT coefficients, and the image is decompressed by reversing the whole process to recover the original color image.
Abstract: The color images are preprocessed using a median filter. The preprocessed color image is compressed using the DCT, and a zigzag scan is then applied to the DCT coefficients. The zigzag-scanned DCT coefficients are decomposed using quadtree decomposition, and the quadtree-decomposed image is encoded using the fractal method and run length encoding. The compressed image is then decompressed by reversing the whole process to recover the original color image. Finally, the performance of the method is measured by calculating the compression ratio and the PSNR value.

Journal ArticleDOI
TL;DR: This paper builds a simulation of a column store and applies the best bitmap compression technique, RLE, which further improves retrieval time.
Abstract: Column-oriented databases have continued to grow over the past few decades; C-Store, Vertica, MonetDB and LucidDB are popular examples. A column store, in a nutshell, stores all attribute values belonging to the same column contiguously. Since column data are of a uniform type, there are opportunities for storage size optimization in a column store; many renowned compression schemes, such as RLE and LZW, exploit the similarity of adjacent data to compress it. Good compression can also be achieved with bitmap indexes, improved by an order of magnitude through sorting. Run length encoding works best on columns of ordered data, or data with few distinct values, which ensures long runs of identical values that RLE compresses quite well. In this paper we build a simulation of a column store and apply the best bitmap compression technique, RLE, which further improves retrieval time.
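A minimal illustration of why this helps retrieval: aggregate predicates can be answered on the runs without decompressing the column (names are illustrative, not from the paper's simulator):

```python
def rle_encode(column):
    """Sorted or low-cardinality columns yield long [value, count] runs."""
    runs = []
    for v in column:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def count_equal(runs, value):
    """SELECT COUNT(*) WHERE col = value, answered on the runs alone."""
    return sum(n for v, n in runs if v == value)

col = sorted(['DE', 'US', 'US', 'DE', 'US', 'DE', 'DE'])
runs = rle_encode(col)               # [['DE', 4], ['US', 3]]
assert count_equal(runs, 'US') == 3
```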

Patent
18 Mar 2015
TL;DR: A page dot matrix self-adaptive compression method and device for VDP jobs is presented, which dynamically switches between a previous-page-based encoding mode and a run length encoding mode according to the recorded compression ratio.
Abstract: The invention discloses a page dot matrix self-adaptive compression method and device. The method comprises the following steps: page dot matrix data in a VDP job are acquired page by page; the page dot matrix data are compressed in turn and the compression ratio of each page is recorded; and the compression mode is dynamically adjusted according to the compression ratio, the modes comprising a previous-page-based encoding compression mode and a run length encoding compression mode. The invention also discloses a page dot matrix self-adaptive restoration method and device. With this method, decompression efficiency can be enhanced.

01 Jan 2015
TL;DR: Different lossless compression techniques, such as Huffman coding, arithmetic coding and Run Length Encoding, are discussed, and a conclusion is drawn on the basis of these methods.
Abstract: Most digital data are not stored in the most compact form. Rather, they are stored in whatever way makes them easiest to use, such as ASCII text from word processors, binary code that can be executed on a computer, or individual samples from a data acquisition system. Typically, these easy-to-use encoding methods require data files about twice as large as actually needed to represent the information. Data compression has important applications in the areas of file storage and distributed systems. Compression is used to reduce redundancy in stored or communicated data, thus increasing effective data density, and to reduce resource usage. In this paper we discuss different lossless compression techniques, such as Huffman coding, arithmetic coding and Run Length Encoding (RLE), and draw a conclusion on the basis of these methods.

01 Jan 2015
TL;DR: In this paper, the authors make a theoretical analysis of the performance of the LLRUN compression algorithm on sparse bit strings and show that it offers a theoretical compression ratio between 87.5% and 50%.
Abstract: The purpose of this research was to make a theoretical analysis of the performance of the LLRUN compression algorithm on sparse bit strings. Fraenkel and Klein (1985) proposed a method to compress sparse bit strings using run length encoding (RLE), Elias gamma coding and Huffman coding, with promising performance. The results of this research show that LLRUN offers a theoretical compression ratio between 87.5% and 50%, making it an effective tool for compressing sparse bit strings.


Proceedings ArticleDOI
01 Dec 2015
TL;DR: A Zynq-based system to compute Run-Length Encoding Matrix features for retinal image texture analysis is presented, using a co-processor architecture implemented in the programmable logic portion of the Zynq platform.
Abstract: This paper presents a Zynq-based system to compute Run-Length Encoding Matrix features for retinal image texture analysis. In order to improve on the performance of the software implementation, we propose a co-processor architecture implemented in the programmable logic portion of the Zynq platform. Experimental results show a speedup of 26.3× with respect to the software version implemented on the ARM processor alone, for 2496 × 1664 images. The additional area required to implement the co-processor is limited to 13% of the DSP48E1 slices and about 2% of the LUTs and flip-flops.
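For reference, a plain-software sketch of horizontal run-length matrix features of the kind used in such texture analysis (assuming Galloway-style gray-level run-length matrices; the paper's exact feature set and hardware mapping are not reproduced here):

```python
import numpy as np

def glrlm_rows(img, levels, max_run):
    """Count horizontal runs: M[g, r-1] = number of runs of gray level g
    with length r (runs longer than max_run are clamped to max_run)."""
    M = np.zeros((levels, max_run), dtype=np.int64)
    for row in img:
        run = 1
        for a, b in zip(row, row[1:]):
            if a == b:
                run += 1
            else:
                M[a, min(run, max_run) - 1] += 1
                run = 1
        M[row[-1], min(run, max_run) - 1] += 1   # close the final run
    return M

img = np.array([[0, 0, 1], [1, 1, 1]])
M = glrlm_rows(img, levels=2, max_run=3)
assert M[0, 1] == 1 and M[1, 0] == 1 and M[1, 2] == 1
```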

Patent
28 Aug 2015
TL;DR: In this article, the authors proposed a four-stage run-length encoding method for bitmap encoding for HDTV with 1920×1280 pixels per frame, where the second or third shortest code words are used for shorter sequences and longer sequences of pixels of a prescribed (transparent) color.
Abstract: PROBLEM TO BE SOLVED: To optimally encode subtitle or subpicture layers for a high-resolution video.SOLUTION: Subtitles are used for presentation of text information and graphical data, encoded as pixel bitmaps. Bitmaps are a separate layer lying above a video for subtitles synchronized with video images, and contain many transparent pixels. Latest adaptation means for bitmap encoding for HDTV with 1920×1280 pixels per frame is a four-stage run length encoding method. That is, the second or third shortest code words are used for shorter sequences and longer sequences of pixels of a prescribed (transparent) color, the shortest code words are used for single pixels having individual color values, and the third or fourth shortest code words are used for shorter sequences and longer sequences of equal color values.SELECTED DRAWING: Figure 3