
Showing papers on "Run-length encoding published in 2017"


Journal ArticleDOI
09 Feb 2017
TL;DR: An efficient electrocardiogram (ECG) data compression algorithm for tele-monitoring of cardiac patients in rural areas, based on a combination of two encoding techniques with the discrete cosine transform, which provides a good compression ratio (CR) with low percent root-mean-square difference (PRD) values.
Abstract: This paper reports an efficient electrocardiogram (ECG) data compression algorithm for tele-monitoring of cardiac patients in rural areas, based on a combination of two encoding techniques with the discrete cosine transform. The proposed technique provides a good compression ratio (CR) with low percent root-mean-square difference (PRD) values. For performance evaluation of the proposed algorithm, 48 records of ECG signals are taken from the MIT-BIH arrhythmia database. Each ECG record is one minute long and sampled at 360 Hz. Noise in the ECG signal is removed using a Savitzky-Golay filter. To transform the signal from the time domain to the frequency domain, the discrete cosine transform is used, which compacts the signal energy into the lower-order frequency coefficients. After normalisation and rounding of the transform coefficients, the signals are encoded using a dual encoding technique consisting of run-length encoding and Huffman encoding. The dual encoding technique compresses data significantly without any loss of information. The proposed algorithm offers average values of CR, PRD, quality score, percent root-mean-square difference normalised, RMS error and SNR of 11.49, 3.43, 3.82, 5.51, 0.012 and 60.11 dB respectively.
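The dual-encoding stage can be illustrated with a minimal Python sketch: run-length encoding of the rounded DCT coefficients followed by Huffman coding of the RLE symbols. The threshold, normalisation and toy signal below are illustrative assumptions (NumPy/SciPy assumed available), not the paper's exact parameters.

```python
import heapq
from collections import Counter
from itertools import groupby

import numpy as np
from scipy.fft import dct

def huffman_code(freqs):
    """Return {symbol: codeword} built with a standard heap-based Huffman construction."""
    heap = [[weight, [symbol, ""]] for symbol, weight in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heapq.heappop(heap)[1:])

def dual_encode(signal, keep_fraction=0.05):
    """DCT -> threshold/round -> RLE -> Huffman; returns coded bit count and the RLE stream."""
    coeffs = dct(np.asarray(signal, dtype=float), norm="ortho")            # energy compaction
    coeffs[np.abs(coeffs) < keep_fraction * np.abs(coeffs).max()] = 0.0    # illustrative threshold
    quantised = np.round(coeffs).astype(int)

    # Stage 1: run-length encoding of the (mostly zero) coefficient stream.
    rle = [(int(v), sum(1 for _ in run)) for v, run in groupby(quantised)]

    # Stage 2: Huffman coding of the RLE symbols.
    code = huffman_code(Counter(rle))
    return sum(len(code[sym]) for sym in rle), rle

# Toy signal standing in for one record: a slow sine sampled at 360 points.
t = np.linspace(0, 1, 360)
bits, runs = dual_encode(np.sin(2 * np.pi * 2 * t) * 100)
print(bits, "coded bits for", len(t), "samples")
```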

20 citations


03 Apr 2017
TL;DR: This thesis deals with space-efficient algorithms to compress and index texts, and shows that run-length compression of the Burrows-Wheeler transform and Lempel-Ziv parsing (LZ77) can be combined into a single index gathering the best features of the indexes discussed below: fast queries and strong compression rates (up to exponential compression can be achieved).
Abstract: This thesis deals with space-efficient algorithms to compress and index texts. The aim of compression is to reduce the size of a text by exploiting regularities such as repetitions or skewed character distributions. Indexing, on the other hand, aims at accelerating pattern matching queries (such as locating all occurrences of a pattern) with the help of data structures (indexes) on the text. Despite being apparently distant concepts, compression and indexing are deeply interconnected: both exploit the inner structure of the text to, respectively, reduce its size and speed up pattern matching queries. It should not be surprising, therefore, that compression and indexing can be “forced” (actually, quite naturally) to coincide: compressed full-text indexes support fast pattern matching queries while taking almost the same size as the compressed text. In the last two decades, several compressed text indexes based on the most efficient text compressors have been designed. These indexes can be roughly classified into two main categories: those based on suffix sorting (Burrows-Wheeler transform indexes, compressed suffix arrays/trees) and those based on the replacement of repetitions by pointers (Lempel-Ziv indexes, grammar indexes, block trees). Indexes based on a combination of the two techniques have also been proposed. In general, suffix-sorting-based methods support very fast queries at the expense of space usage. This is due to several factors, ranging from weak compression methods (e.g. entropy compression, used in early FM-indexes, is not able to capture long repetitions) to heavy structures (e.g. suffix array sampling) flanking the compressed text representation to speed up queries. The second class of indexes, on the other hand, offers strong compression rates, achieving up to exponential compression on very repetitive texts, at the expense of query times that are often quadratic in the pattern length or, in the worst case, linear in the text length. Among the most used compression techniques, run-length compression of the Burrows-Wheeler transform and Lempel-Ziv parsing (LZ77) have proved to be superior in the compression of very repetitive datasets. In this thesis we show that these two tools can be combined in a single index gathering the best features of the above-discussed indexes: fast queries (linear in the pattern length and logarithmic in the text length), and strong compression rates (up to exponential compression can be achieved). We describe an efficient implementation of our index and compare it with state-of-the-art alternatives. Our solution turns out to be as space-efficient as the lightest index described in the literature while supporting queries up to three orders of magnitude faster. Apart from index definition and design, a third concern regards index construction complexity. Often, the input text is too big to be fully loaded into main memory. Even when this is feasible, classic compression/indexing algorithms use heavy data structures such as suffix trees/arrays which can easily take several times the space of the text. This is unsatisfactory, especially in cases where (i) the text is streamed and not stored anywhere (e.g. because of its size) and (ii) the compressed text is small enough to fit into main memory. A further contribution of this thesis consists in five algorithms compressing text within compressed working space and in two recompression techniques (i.e. algorithms to convert between different compression formats without full decompression).
The complete picture we offer consists of a set of algorithms to space-efficiently convert among the plain text, two compressed self-indexes, and three compressed-file formats (entropy, LZ77, and run-length BWT). The general idea behind all our compression algorithms is to read text characters from left to right and build a compressed dynamic Burrows-Wheeler transform of the reversed text. This structure is augmented with a dynamic suffix array sampling to support fast locating of text substrings. We employ three types of suffix array sampling: (i) evenly spaced, (ii) based on Burrows-Wheeler transform equal-letter runs, and (iii) based on Lempel-Ziv factors. Strategy (i) allows us to achieve entropy-compressed working space. Strategies (ii) and (iii) are novel and allow achieving a space usage proportional to the output size (i.e. the compressed file/index). As a last contribution of this thesis, we turn our attention to a practical and usable implementation of our suite of algorithmic tools. We introduce DYNAMIC, an open-source C++11 library implementing dynamic compressed data structures. We prove almost-optimal theoretical bounds for the resources used by our structures, and show that our theoretical predictions are tightly verified in practice. The implementation of the compression algorithms described in this thesis using DYNAMIC meets our expectations: on repetitive datasets our solutions turn out to be up to three orders of magnitude more space-efficient than state-of-the-art algorithms performing the same tasks.
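To see why a run-length compressed Burrows-Wheeler transform is so effective on repetitive inputs, here is a naive Python sketch built from sorted rotations. This is purely illustrative; the thesis constructs the transform in compressed working space rather than this way.

```python
from itertools import groupby

def bwt(text, sentinel="\x00"):
    """Naive Burrows-Wheeler transform via sorted rotations (illustration only)."""
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def rle(s):
    """Run-length encode a string into (char, run_length) pairs."""
    return [(ch, sum(1 for _ in run)) for ch, run in groupby(s)]

text = "banana" * 4                 # a repetitive input
runs = rle(bwt(text))
print(runs)                          # few runs: the BWT clusters equal characters together
print(len(runs), "runs vs", len(text) + 1, "characters")
```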

17 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: The proposed method groups adjacent points into blocks, and two encoding modes are supported for each block: a run-length encoding mode and a palette mode.
Abstract: This paper proposes a color attribute compression method for MPEG Point Cloud Compression (PCC) that exploits the spatial redundancy among adjacent points. With the increased interest in representing real-world surfaces as 3D point clouds, compressing the attributes (i.e., colors and normal directions) of point clouds has attracted great attention in MPEG. The proposed method is based on grouping the adjacent points into blocks, and two encoding modes are supported for each block: a run-length encoding mode and a palette mode. The final encoding mode for each block is determined by comparing the distortion values obtained with the two modes. Experimental results show that the proposed approach achieves a compression ratio about 28 percent better than that of MPEG PCC.
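The per-block mode decision can be sketched as follows; the byte costs below are assumed proxies standing in for the paper's actual distortion comparison, and the function names are illustrative only.

```python
from itertools import groupby

def rle_cost(colors):
    """Bytes to code the block as (run, color) pairs: 1 byte run length + 3 bytes color."""
    return sum(4 for _ in groupby(colors))

def palette_cost(colors):
    """Bytes for a palette (3 bytes per unique color) plus one index byte per point."""
    return 3 * len(set(colors)) + len(colors)

def choose_mode(block):
    """Pick the cheaper of the two modes for one block of (r, g, b) tuples."""
    return "rle" if rle_cost(block) <= palette_cost(block) else "palette"

block = [(255, 0, 0)] * 10 + [(0, 255, 0)] * 6   # long runs favour the RLE mode
print(choose_mode(block))                        # "rle"
```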

14 citations


Journal ArticleDOI
TL;DR: A new MPS method is proposed that combines Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity.

13 citations


Journal ArticleDOI
TL;DR: The implementation and measurement of the proposed telemetry data compression method show its effectiveness when used in a high-precision high-capacity multichannel acquisition system.
Abstract: Telemetry data are essential in evaluating the performance of an aircraft and diagnosing its failures. This work combines an oversampling technique with a run-length encoding compression algorithm that includes an error factor, to further enhance the compression performance of telemetry data in a multichannel acquisition system. Compression of the telemetry data is carried out on FPGAs. Pulse signals and vibration signals are used in the experiments. The proposed method is compared with two existing methods. The experimental results indicate that the compression ratio, precision, and distortion degree of the telemetry data are improved significantly compared with those obtained by the existing methods. The implementation and measurement of the proposed telemetry data compression method show its effectiveness when used in a high-precision, high-capacity multichannel acquisition system.
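The "RLE with an error factor" idea can be sketched as follows: consecutive oversampled values are merged into one run as long as they stay within a tolerance of the run's representative value. The function name, tolerance rule and sample data are assumptions for illustration, not the paper's exact scheme.

```python
def rle_with_error_factor(samples, eps):
    """Merge consecutive samples into a run while they stay within eps of the run's value."""
    runs = []
    run_value, run_len = samples[0], 1
    for x in samples[1:]:
        if abs(x - run_value) <= eps:
            run_len += 1
        else:
            runs.append((run_value, run_len))
            run_value, run_len = x, 1
    runs.append((run_value, run_len))
    return runs

# An oversampled, slowly varying channel collapses to a handful of runs.
print(rle_with_error_factor([0.0, 0.01, 0.02, 5.0, 5.01, 4.99, 0.0], eps=0.05))
# [(0.0, 3), (5.0, 3), (0.0, 1)]
```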

12 citations


Journal ArticleDOI
TL;DR: A high-fidelity data compression method based on differential detection and run-length encoding is proposed for a time-stretch imaging system, where a spatial image is mapped to the time domain and then read out by a balanced photodetector for image reconstruction.
Abstract: A high-fidelity data compression method based on differential detection and run-length encoding is proposed for a time-stretch imaging system, where a spatial image is mapped to the time domain and then read out by a balanced photodetector for image reconstruction. Differential detection is capable of detecting discrepancies between consecutive scans and eliminating identical signals. After the detection, run-length encoding merges consecutive identical data points into a single value. In the experiment, a 77.76-MHz line-scan imaging system is demonstrated, and a compression ratio of more than 3.8 is achieved. After decompression, a high-fidelity image can be reconstructed.
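A rough software analogue of this pipeline: difference consecutive line scans (so unchanged pixels become zeros) and run-length encode each difference row. This is a sketch under an assumed data layout, not the optical implementation described in the paper.

```python
from itertools import groupby
import numpy as np

def compress_scans(scans):
    """Keep the first scan; code later scans as RLE'd differences to the previous scan."""
    scans = np.asarray(scans)
    diffs = np.diff(scans, axis=0)   # differential detection between consecutive scans
    encoded = []
    for row in diffs:
        encoded.append([(v, sum(1 for _ in g)) for v, g in groupby(row.tolist())])
    return scans[0], encoded          # long zero runs wherever consecutive scans match

scans = [[5, 5, 7, 7], [5, 5, 7, 7], [5, 6, 7, 7]]   # three consecutive line scans
first, deltas = compress_scans(scans)
print(deltas)   # [[(0, 4)], [(0, 1), (1, 1), (0, 2)]]
```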

11 citations


Journal ArticleDOI
TL;DR: An effort is made to avoid the computational burden of decompressing a document for text-line segmentation, by identifying the separators in handwritten text directly in the compressed version of a document image.
Abstract: Line separators are used to segregate text-lines from one another in document image analysis. Finding the separator points at every line terminal in a document image would enable text-line segmentation. In particular, identifying the separators in handwritten text is a demanding exercise, and it is even more challenging to perform this in the compressed version of a document image, which is the objective proposed in this research. Such an effort avoids the computational burden of decompressing a document for text-line segmentation. Since document images are generally compressed using the run-length encoding (RLE) technique as per the CCITT standards, the first column in the RLE will be a white column. The value (depth) in the white column is very low when a particular line is a text line, and the depth is larger at the points of text-line separation. A long consecutive sequence of such larger depths indicates the gap between the text lines, which provides the separator region. In case of over-separation and under-separation issues, corrective actions such as deletion and insertion are suggested respectively. Extensive experimentation is conducted on the compressed images of the benchmark datasets of ICDAR13 and Alireza et al [17] to demonstrate the efficacy.
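The separator rule described above can be sketched in a few lines: scan the per-row leading white-run depths and report the centre of each sufficiently long stretch of large depths. Variable names and thresholds are assumptions, and the paper's over/under-separation corrections are omitted.

```python
def find_separator_rows(first_white_runs, depth_thresh, min_gap_rows):
    """Return the centre row of each long stretch of rows whose leading white run is large.

    first_white_runs[r] is the length of the first (white) run of row r in the RLE
    representation; a stretch of rows with large values marks an inter-line gap.
    """
    separators, start = [], None
    for r, depth in enumerate(first_white_runs + [0]):   # sentinel closes a trailing stretch
        if depth >= depth_thresh:
            start = r if start is None else start
        elif start is not None:
            if r - start >= min_gap_rows:
                separators.append((start + r - 1) // 2)
            start = None
    return separators

depths = [3, 2, 4, 50, 52, 51, 49, 3, 2, 48, 50, 51, 2]
print(find_separator_rows(depths, depth_thresh=40, min_gap_rows=3))   # [4, 10]
```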

7 citations


Proceedings ArticleDOI
01 Dec 2017
TL;DR: A novel BWT accelerator based on a streaming sorting network that achieves a 14.3X speedup compared with the state-of-the-art work when the data block size is 4KB, and a lossless data compression system based on this accelerator.
Abstract: The Burrows-Wheeler Transform (BWT) has received special attention due to its effectiveness in lossless data compression algorithms. Because BWT is a time-consuming task, an efficient hardware accelerator that can yield high throughput is required in real-time applications. This paper presents a novel BWT accelerator based on a streaming sorting network. The streaming sorting network performs the suffix sorting of a large amount of data, which is the most difficult task in BWT. Our BWT accelerator is implemented on a NetFPGA board. Experimental results show that it achieves a 14.3X speedup compared with the state-of-the-art work when the data block size is 4KB. Furthermore, we design and implement a lossless data compression system based on the proposed BWT accelerator. The hardware system is composed of the Burrows-Wheeler Transform module, the move-to-front encoding module, the run-length encoding module, and the canonical Huffman encoding module. We evaluate the system performance on a NetFPGA board at a frequency of 155 MHz. The throughput of the system reaches 179 MB/s on board when we use only one streaming sorting network for a 4KB block. The system throughput can be linearly improved up to 537 MB/s in simulation on a Virtex UltraScale xcvu440 chip if we use three streaming sorting networks to compute BWT.
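The move-to-front stage of the pipeline (BWT, MTF, RLE, canonical Huffman) is easy to show in software; the sketch below only illustrates the encoding rule, not the hardware module described in the paper.

```python
def move_to_front_encode(data, alphabet_size=256):
    """Move-to-front: recently seen symbols get small indices, which RLE/Huffman then exploit."""
    table = list(range(alphabet_size))
    out = []
    for byte in data:
        idx = table.index(byte)
        out.append(idx)
        table.insert(0, table.pop(idx))   # move the symbol to the front of the table
    return out

print(move_to_front_encode(b"aaabbbaaa"))   # [97, 0, 0, 98, 0, 0, 1, 0, 0]
```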

6 citations


Journal ArticleDOI
TL;DR: This study shows that the fast Hartley transform is more appropriate than the wavelet-based approach since it offers a higher compression ratio and better speech quality.
Abstract: This paper presents a simulation and hardware implementation of a new audio compression scheme based on the fast Hartley transform in combination with a new modified run-length encoding. The proposed algorithm consists of analyzing signals with the fast Hartley transform, thresholding the obtained coefficients below a given threshold, and then encoding them using a new approach to run-length encoding. The thresholded coefficients are finally quantized and coded into a binary stream. The experimental results show the ability of the fast Hartley transform to compress audio signals: it concentrates the signal energy in a few coefficients, and the new run-length encoding approach increases the compression factor. The results of the current work are compared with wavelet-based compression using objective assessments, namely CR, SNR, PSNR and NRMSE. This study shows that the fast Hartley transform is more appropriate than the wavelet-based approach since it offers a higher compression ratio and better speech quality. In addition, we have tested the audio compression system on the DSP processor TMS320C6416. This test shows that our system meets the real-time requirements and ensures low complexity. The perceptual quality is evaluated with the Mean Opinion Score (MOS).
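A small sketch of the transform-threshold-RLE chain: the fast Hartley transform is obtained here from the FFT as H(k) = Re F(k) - Im F(k), small coefficients are zeroed, and the resulting zero runs are run-length encoded. The threshold value and the plain RLE used here are assumptions; the paper's "modified" RLE is not reproduced.

```python
from itertools import groupby
import numpy as np

def fht(x):
    """Fast Hartley transform computed from the FFT: H(k) = Re F(k) - Im F(k)."""
    F = np.fft.fft(x)
    return F.real - F.imag

def threshold_and_rle(coeffs, thresh):
    """Zero out small coefficients, then run-length encode the resulting stream."""
    kept = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
    return [(v, sum(1 for _ in g)) for v, g in groupby(kept.tolist())]

x = np.sin(2 * np.pi * 5 * np.arange(256) / 256)     # toy "audio" frame
runs = threshold_and_rle(fht(x), thresh=1.0)
print(len(runs), "runs instead of", len(x), "coefficients")
```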

6 citations


Proceedings ArticleDOI
Haiquan Wang
12 May 2017
TL;DR: This paper presents a novel APCA-Enhanced algorithm, which is shown to offer a better compression ratio and query latency compared with run-length encoding and APCA.
Abstract: With the widespread use of sensors, large-scale time series data have become ubiquitous. As a result, it is important to provide efficient compression storage and retrieval algorithms for these data, which form the basis of subsequent data mining analysis. One of the most widely used data compression algorithms for time series data is Adaptive Piecewise Constant Approximation (APCA). However, APCA's compression storage overhead is still large for large-scale time series data. In this paper, we present a novel APCA-Enhanced algorithm. Extensive experiments over large real-world data sets demonstrate that our algorithm achieves a better compression ratio and lower query latency than run-length encoding and APCA.
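For orientation, the piecewise-constant idea underlying APCA can be sketched with a simple greedy segmentation. This is an assumed illustration of constant-segment approximation only, not the APCA algorithm or the APCA-Enhanced method of the paper.

```python
def piecewise_constant(series, eps):
    """Greedy segmentation: extend a segment while every point stays within eps of its mean."""
    segments, start = [], 0
    for end in range(1, len(series) + 1):
        seg = series[start:end]
        mean = sum(seg) / len(seg)
        if max(abs(v - mean) for v in seg) > eps:
            prev = series[start:end - 1]                       # close the previous segment
            segments.append((sum(prev) / len(prev), len(prev)))
            start = end - 1
    last = series[start:]
    segments.append((sum(last) / len(last), len(last)))
    return segments

series = [1.0, 1.1, 0.9, 5.0, 5.2, 5.1, 1.0]
print(piecewise_constant(series, eps=0.3))   # [(1.0, 3), (5.1, 3), (1.0, 1)]
```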

Proceedings ArticleDOI
01 Dec 2017
TL;DR: This work shows that the quantized Huffman coding outperforms RLE in aspects such as Compression Ratio (CR) and the time consumed in compression and decompression, while the Structural Similarity Index (SSIM) is the same for the two techniques.
Abstract: In this paper, we implement Discrete Cosine Transform (DCT) coding (a lossy compression method) followed by the proposed coding technique, called “Quantized Huffman Coding”, in order to minimize the EEG data size. Adding a lossless compression algorithm after the lossy compression is a good way to obtain a high compression ratio with acceptable distortion of the original signal. Here, we use a DCT encoder followed by either quantized Huffman coding or Run-Length Encoding (RLE) and then compare the two. Our work shows that, at the same Root Mean Square Error (RMSE), the quantized Huffman coding outperforms RLE in aspects such as Compression Ratio (CR) and the time consumed in compression and decompression, while the Structural Similarity Index (SSIM) is the same for the two techniques.

Journal ArticleDOI
TL;DR: In this article, DCT- and DWT-based image compression algorithms have been implemented on the MATLAB platform, and an improvement of image compression through Run-Length Encoding (RLE) has been achieved.
Abstract: The goal of image compression is to remove redundancies by minimizing the number of bits required to represent an image. It reduces redundancy, that is, it avoids storing duplicate data, and it also reduces the memory needed to store an image. Image compression algorithms can be lossy or lossless. In this paper, DCT- and DWT-based image compression algorithms have been implemented on the MATLAB platform. Then, an improvement of image compression through Run-Length Encoding (RLE) has been achieved. Three images, namely Baboon, Lena and Pepper, have been taken as test images for implementing the techniques. Various objective image metrics, namely compression ratio, PSNR and MSE, have been calculated. It has been observed from the results that RLE-based image compression achieves a higher compression ratio compared with the DCT- and DWT-based image compression algorithms.
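The objective metrics used in this comparison are standard and straightforward to compute; a minimal sketch (assuming 8-bit images and bit counts as inputs) is:

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between two images of the same shape."""
    diff = np.asarray(original, float) - np.asarray(reconstructed, float)
    return float(np.mean(diff ** 2))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    return float(10 * np.log10(peak ** 2 / mse(original, reconstructed)))

def compression_ratio(original_bits, compressed_bits):
    """Ratio of uncompressed to compressed size."""
    return original_bits / compressed_bits

orig = np.array([[10, 20], [30, 40]])
recon = np.array([[10, 21], [29, 40]])
print(mse(orig, recon), psnr(orig, recon), compression_ratio(8 * orig.size, 10))
```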

Proceedings ArticleDOI
01 Oct 2017
TL;DR: The aim of this work is to offer a detailed analysis of lossless compression methods and to find the one best suited for compression of multimedia data in a cognitive radio environment.
Abstract: This paper considers the implementation of audio compression using lossless compression techniques such as dynamic Huffman coding and Run-Length Encoding (RLE). The audio file is first preprocessed to find the sampling frequency and the encoded data bits in the sample audio file. After that, dynamic Huffman coding and RLE are applied. The dynamic Huffman coding technique evaluates the probabilities of occurrence “on the fly”, as the ensemble is being transmitted, while RLE finds runs of data, i.e. repeating strings, and replaces each with a single data element and its count. Both techniques aim to obtain the highest possible compression ratio and the least elapsed compression time. The competence of the proposed methods is verified by applying them to a variety of audio data. The motivation behind this work is to offer a detailed analysis of lossless compression methods and to find the one best suited for compression of multimedia data in a cognitive radio environment.

Proceedings ArticleDOI
01 Dec 2017
TL;DR: A hybrid compression technique integrating the Discrete Cosine Transform and non-uniform quantized Huffman coding to minimize Electroencephalography data size achieves 90% compression, compared to 59% for DCT/RLE at the same similarity.
Abstract: In this paper, we propose a hybrid compression technique integrating the Discrete Cosine Transform (DCT) and non-uniform quantized Huffman coding in order to minimize the Electroencephalography (EEG) data size. To obtain a high compression ratio, we apply lossy compression followed by a lossless compression algorithm. We use a DCT encoder followed by either non-uniform quantized Huffman (NonUQH) coding or Run-Length Encoding (RLE) and then compare the two. The system performance is evaluated in terms of the compression/decompression time, the compression ratio, and the root mean square error. The proposed hybrid technique DCT/NonUQH achieves 90% compression, compared to 59% for DCT/RLE at the same similarity. Furthermore, it needs 50% less time for the compression/decompression process.

Patent
26 Apr 2017
TL;DR: In this paper, a rapid area erosion algorithm and device based on a run-length-encoded random structural element is proposed, which effectively reduces memory occupation and time consumption.
Abstract: The present invention provides a rapid area erosion algorithm and device based on a run-length-encoded random structural element. The algorithm comprises: performing run-length encoding of an input image; and performing erosion of the encoded region of interest by employing the run-length-encoded structural element. The rapid area erosion algorithm and device can effectively reduce memory occupation and time consumption.

Journal ArticleDOI
TL;DR: From the results obtained in this analysis, it is concluded that the proposed approach is an efficient method for image compression, providing a high compression ratio and a better compression percentage.
Abstract: In this paper, we propose an Improved Run-Length Encoding (IRLE) scheme for image compression. Conventional Run-Length Encoding is a widely used scheme as it is simple and effective, but it is not well suited to representing non-block images. The IRLE we have developed combines runs and run values, significantly reducing the space required to store the image. The compression ratio depends on the size and type of the input message. From the results obtained in this analysis, we conclude that the proposed approach is an efficient method for image compression, providing a high compression ratio and a better compression percentage.

Patent
10 May 2017
TL;DR: In this paper, an algorithm for obtaining the maximal horizontally inscribed rectangle of any connected domain, based on run-length encoding, is presented; it reduces memory occupation and improves running speed.
Abstract: The invention belongs to the technical field of image processing, and discloses an algorithm for obtaining the maximal horizontally inscribed rectangle of any connected domain based on run-length encoding. The algorithm comprises the following steps: acquiring a region of interest of an image; carrying out run-length encoding on the region of interest; and traversing the runs to obtain the maximal horizontally inscribed rectangle of the encoded region of interest. The algorithm provided by the invention has the advantages that a run-length encoding data structure is adopted for the region of interest of the image, so that computer memory occupation is reduced and the program running speed is improved; and the maximal horizontally inscribed rectangle of the region of interest is obtained through run-length traversal, which is of practical significance for industrial applications.

Journal ArticleDOI
TL;DR: This paper presents an implementation of Run-Length Encoding for data compression, which provides good lossless compression of input data and is useful on data that contains many consecutive runs of the same values.
Abstract: In the modern era of technology, there are many challenges in the storage, retrieval and transmission of data. Data compression is necessary due to the rapid growth of digital media and the consequent need to reduce storage size and transmit data effectively and efficiently over networks; it also reduces transmission traffic on the Internet. Data compression tries to reduce the number of bits required to store data digitally. Various data and image compression algorithms are widely used to reduce the original data to a smaller number of bits. Lossless data and image compression is a special class of data compression that reduces the number of bits by identifying and eliminating statistical redundancy in the input data. It is a very simple and effective method that provides good lossless compression of input data and is useful on data containing many consecutive runs of the same values. This paper presents an implementation of Run-Length Encoding for data compression.
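A minimal round-trip sketch of the encoder and decoder described here (plain Python, illustrative only):

```python
from itertools import groupby

def rle_encode(data):
    """Replace each run of identical values with a (value, count) pair."""
    return [(value, sum(1 for _ in run)) for value, run in groupby(data)]

def rle_decode(pairs):
    """Expand (value, count) pairs back into the original sequence."""
    return [value for value, count in pairs for _ in range(count)]

data = "WWWWWWBBBWWWWBB"
pairs = rle_encode(data)
print(pairs)                                # [('W', 6), ('B', 3), ('W', 4), ('B', 2)]
assert "".join(rle_decode(pairs)) == data   # lossless round trip
```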

Journal ArticleDOI
TL;DR: Experimental results show that the algorithm compresses data effectively, reducing the size of stored data and speeding up the transmission of background data, so it has very good application value.
Abstract: To address the problem of data transmission in the distributed framework of a web project, this paper uses a compression algorithm combining Huffman encoding and run-length encoding, thus reducing the amount of data and improving the speed of data transmission. The integrity of the data is guaranteed since lossless compression is used. Experimental results show that the algorithm compresses data effectively, reducing the size of stored data and speeding up the transmission of background data, so it has very good application value.

Journal Article
TL;DR: This paper presents an efficient code-compression technique that significantly improves the compression ratio, and offers a third algorithm that combines the two previously proposed schemes with run-length encoding to compress the code.
Abstract: Memory plays a crucial role in designing embedded systems. Embedded systems are constrained by the available memory. A larger memory can accommodate more and larger applications but increases cost, area, and energy requirements. Code-compression techniques address this issue by reducing the code size of application programs. It is a major challenge to develop an efficient code-compression technique that can generate a substantial reduction in code size without affecting the overall system performance. We present an efficient code-compression technique which significantly improves the compression ratio. Two previously proposed algorithms are evaluated. The first is a dictionary-based method, in which a small separated dictionary is proposed to restrict the codeword length of high-frequency instructions, and a novel dictionary selection algorithm is proposed to achieve more satisfactory instruction selection, which in turn may reduce the average CR. The second is mixed-bit saving dictionary selection (MBSDS), in which a fully separated dictionary architecture is proposed to improve the performance of the dictionary-based decompression engine; this architecture has a better chance of decompressing instructions in parallel than existing single-dictionary decoders. Additionally, this paper offers a third algorithm, which combines the two previously proposed schemes with run-length encoding to compress the code.

Patent
20 Jun 2017
TL;DR: A fast region erosion algorithm and device based on run-length encoding is disclosed, in which a rectangular structuring element erodes the run-length-encoded region of interest, effectively reducing memory occupation and the time consumed.
Abstract: The invention discloses a fast region erosion algorithm and device based on run-length encoding. The method comprises the following steps: run-length encoding is carried out on a region of interest of an image; a rectangular structuring element is selected to carry out the erosion operation on the encoded region of interest; and the erosion operation is expressed as the intersection, over the vectors of all pixel points of the mirrored rectangular structuring element relative to its origin, of the corresponding translations of the runs of the region of interest. Memory occupancy can be effectively reduced, and the time consumed is shortened.
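The set formulation above (erosion as an intersection of translates) can be illustrated directly; for clarity this sketch expands the runs to pixel sets, which gives up the memory advantage the patent targets, and the region, structuring element and function name are assumptions.

```python
def erode_runs(runs, se_offsets):
    """Erode an RLE region: a pixel survives iff every structuring-element offset lands inside it.

    runs: iterable of (row, col_start, col_end) with inclusive column ends.
    se_offsets: iterable of (d_row, d_col) offsets of the structuring element relative to its origin.
    """
    pixels = {(r, c) for r, c0, c1 in runs for c in range(c0, c1 + 1)}
    result = None
    for dr, dc in se_offsets:
        shifted = {(r - dr, c - dc) for r, c in pixels}   # region translated by the negated offset
        result = shifted if result is None else result & shifted
    return result if result is not None else set()

region = [(0, 0, 4), (1, 0, 4), (2, 0, 4)]                  # a 3x5 block stored as three runs
se = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]   # 3x3 square structuring element
print(sorted(erode_runs(region, se)))                       # only the inner row survives: (1,1)..(1,3)
```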

Journal ArticleDOI
TL;DR: This paper analyzes the lossless methods Run-Length Encoding (RLE), Arithmetic Encoding, Punctured Elias Code and Goldbach Code to determine which algorithm is more efficient at data compression.
Abstract: In computer science, data compression, or bit-rate reduction, is a way to compress data so that it requires less storage space, making storage more efficient and shortening the data exchange time. Data compression is divided into two categories: lossless data compression and lossy data compression. Examples of lossless methods are Run-Length, Huffman, Delta and LZW, while an example of a lossy method is CS&Q (Coarser Sampling and/or Quantization). This paper analyzes the lossless methods Run-Length Encoding (RLE), Arithmetic Encoding, Punctured Elias Code and Goldbach Code, and draws a comparison between the four algorithms in order to determine which is more efficient at data compression. Keywords: Data Compression, Run-Length Encoding, Arithmetic Encoding, Punctured Elias Code, Goldbach Code

Journal ArticleDOI
TL;DR: In this article, the authors propose a method that minimizes memory size using lossless image compression techniques, which can exploit multiple kinds of redundant information and reduce the time required for sending images over the Internet or downloading them from Web pages.
Abstract: Visual cryptography is a special encryption technique that hides information in images in such a way that it can be decrypted by the human visual system if the correct key image is used. In visual cryptography, the image reconstructed after the decryption process suffers from the major problem of pixel expansion. This is overcome in the proposed method by minimizing the memory size using lossless image compression techniques. Image compression minimizes the size in bytes of a graphics file without degrading the image quality to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space, and also reduces the time required for images to be sent over the Internet or downloaded from Web pages. Hybrid techniques are used in the proposed method as they can exploit multiple kinds of redundant information. Keywords: Visual Cryptography; HVS; Image Compression; Vector Quantization; Run Length Encoding; Huffman Coding

Patent
13 Jun 2017
TL;DR: In this article, a run-length encoding-based fast region dilation (expansion) algorithm and apparatus is presented, which comprises performing run-length encoding on a region of interest of an image, selecting a rectangular structuring element to dilate the encoded region of interest, and representing the dilation as the union of the translations of the runs of the region of interest by the vectors of all pixel points of the mirrored structuring element relative to its origin.
Abstract: The invention discloses a run-length encoding-based fast region dilation (expansion) algorithm and apparatus. The method comprises the steps of performing run-length encoding on a region of interest of an image; selecting a rectangular structuring element to perform the dilation operation on the encoded region of interest; and representing the dilation operation as the union, over the vectors of all pixel points of the mirrored rectangular structuring element relative to its origin, of the corresponding translations of the runs of the region of interest. According to the algorithm and the apparatus, memory occupation can be effectively reduced and the time consumed can be shortened.
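Unlike the erosion sketch above, dilation can stay entirely in run space: translate every run by every structuring-element offset and merge the overlaps. The representation and names below are assumptions for illustration, not the patented device.

```python
def dilate_runs(runs, se_offsets):
    """Dilate a run-length-coded region by a structuring element given as pixel offsets.

    runs: iterable of (row, col_start, col_end) with inclusive column ends.
    se_offsets: iterable of (d_row, d_col) offsets (already mirrored if the element is asymmetric).
    """
    translated = set()
    for row, c0, c1 in runs:
        for dr, dc in se_offsets:
            translated.add((row + dr, c0 + dc, c1 + dc))   # union of translated runs
    # Merge overlapping or adjacent translated runs on each row back into a minimal run list.
    merged = []
    for row in sorted({r for r, _, _ in translated}):
        row_runs = sorted((c0, c1) for r, c0, c1 in translated if r == row)
        cur0, cur1 = row_runs[0]
        for c0, c1 in row_runs[1:]:
            if c0 <= cur1 + 1:
                cur1 = max(cur1, c1)
            else:
                merged.append((row, cur0, cur1))
                cur0, cur1 = c0, c1
        merged.append((row, cur0, cur1))
    return merged

region = [(2, 3, 6)]                                        # one run on row 2, columns 3..6
se = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]   # 3x3 square structuring element
print(dilate_runs(region, se))   # [(1, 2, 7), (2, 2, 7), (3, 2, 7)]
```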

Journal Article
TL;DR: The proposed method is helpful in securing patient information and provides high hiding capacity for storage in the hospital digital database, with improved MSE and PSNR values.
Abstract: Image embedding has a wide range of applications in the medical field. It helps secure patient information from intruders while offering high storage capacity. Medical images of different modalities, such as CT and PET, along with the Patient Medical Image (PMI), can be sent to physicians across the world for diagnosis. Due to bandwidth and storage constraints, medical images must be compressed before transmission and storage. This paper presents an evaluation of a fusion-based image embedding and reconstruction process for CT and PET images. The image fusion techniques compared are the Wavelet Transform (WT) and the Complex Contourlet Transform (CCT), and the compression methods analysed are Run-Length Encoding (RLE) and Huffman Encoding (HE). The proposed method helps secure patient information and provides high hiding capacity for storage in the hospital digital database, with improved MSE and PSNR values.

Proceedings ArticleDOI
01 Jul 2017
TL;DR: A lossy run-length encoder is designed that exploits pixel redundancy and the human eye's imperceptibility to fine detail in digital images; the scheme is secure enough to thwart various statistical attacks while being easy to implement and fast.
Abstract: As network technologies improve, new challenges arise in the form of the huge amounts of data transferred through the network. A large portion of such data is multimedia, consisting of large numbers of digital images sent and received over the network. In this paper, an integrated image compression and encryption technique using a run-length encoding scheme and the Henon chaotic map is presented. Run-length encoding is a common scheme and a natural choice for image compression: it generates (value, count) pairs such that the value is repeated 'count' times. In this paper, we use run-length encoding for lossy image compression, designing a lossy run-length encoder that exploits pixel redundancy and the human eye's imperceptibility to fine detail in digital images. Along with compression, we perform image encryption using the Henon chaotic map. After encryption, the size and resolution of the image are changed, which further enhances security. Various experiments are performed, calculating performance metrics such as the histogram, information entropy, PSNR, compression ratio, and MSE. The algorithm is secure enough to thwart various statistical attacks while being easy to implement and fast.
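A toy sketch of the two stages: a lossy RLE that quantises pixel values so near-equal neighbours merge into runs, and a keystream derived from the Henon map that is XOR-ed with the coded bytes. The quantisation step, keystream derivation and seed values are assumptions; the paper's exact lossy rule and encryption pipeline may differ.

```python
from itertools import groupby

def lossy_rle(pixels, step=8):
    """Quantise pixel values to multiples of `step` so near-equal neighbours form runs, then RLE."""
    q = [(p // step) * step for p in pixels]
    return [(v, sum(1 for _ in g)) for v, g in groupby(q)]

def henon_keystream(n, x=0.1, y=0.3, a=1.4, b=0.3):
    """Derive n key bytes from the Henon chaotic map x' = 1 - a*x^2 + y, y' = b*x."""
    keys = []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x
        keys.append(int(abs(x) * 1e6) % 256)
    return keys

pixels = [120, 121, 122, 200, 201, 202, 203, 50]
runs = lossy_rle(pixels)                                   # lossy stage: values rounded to the step
flat = [b for value, count in runs for b in (value, count)]
cipher = [b ^ k for b, k in zip(flat, henon_keystream(len(flat)))]   # XOR with chaotic keystream
print(runs, cipher)
```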