
Showing papers on "Lossless compression published in 2015"


Journal ArticleDOI
TL;DR: Lower overall complexity and good performance render the proposed technique suitable for wearable/ambulatory ECG devices.
Abstract: This paper presents a novel electrocardiogram (ECG) processing technique for joint data compression and QRS detection in a wireless wearable sensor. The proposed algorithm is aimed at lowering the average complexity per task by sharing the computational load among multiple essential signal-processing tasks needed for wearable devices. The compression algorithm, which is based on an adaptive linear data prediction scheme, achieves a lossless bit compression ratio of 2.286x. The QRS detection algorithm achieves a sensitivity (Se) of 99.64% and positive prediction (+P) of 99.81% when tested with the MIT/BIH Arrhythmia database. Lower overall complexity and good performance render the proposed technique suitable for wearable/ambulatory ECG devices.
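The abstract describes lossless compression via adaptive linear data prediction. The sketch below is my own illustration, not the authors' algorithm: it uses a fixed second-order predictor and a generic zlib back-end (both assumptions) to show why prediction residuals of an integer ECG signal compress well.

```python
# Illustrative sketch only: fixed second-order linear prediction plus generic
# entropy coding of the residuals (the paper's predictor is adaptive).
import numpy as np
import zlib

def predictive_compress(samples: np.ndarray) -> bytes:
    x = samples.astype(np.int32)
    pred = np.zeros_like(x)
    if len(x) > 1:
        pred[1] = x[0]
    if len(x) > 2:
        pred[2:] = 2 * x[1:-1] - x[:-2]      # predict x[n] from x[n-1], x[n-2]
    residual = x - pred                       # peaked around zero for smooth ECG
    # Assumes residuals fit in 16 bits (true for 11-bit MIT-BIH samples).
    return zlib.compress(residual.astype(np.int16).tobytes())

def predictive_decompress(blob: bytes) -> np.ndarray:
    r = np.frombuffer(zlib.decompress(blob), dtype=np.int16).astype(np.int32)
    x = np.zeros_like(r)
    for n in range(len(r)):
        if n == 0:
            x[n] = r[n]
        elif n == 1:
            x[n] = r[n] + x[0]
        else:
            x[n] = r[n] + 2 * x[n - 1] - x[n - 2]
    return x
```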

112 citations


Journal ArticleDOI
TL;DR: These protocols facilitate high-speed lossless data compression and content-based multiview image fusion optimized for multicore CPU architectures, reducing image data size 30–500-fold and visualization, editing and annotation of multiterabyte image data and cell-lineage reconstructions with tens of millions of data points.
Abstract: Light-sheet microscopy is a powerful method for imaging the development and function of complex biological systems at high spatiotemporal resolution and over long time scales. Such experiments typically generate terabytes of multidimensional image data, and thus they demand efficient computational solutions for data management, processing and analysis. We present protocols and software to tackle these steps, focusing on the imaging-based study of animal development. Our protocols facilitate (i) high-speed lossless data compression and content-based multiview image fusion optimized for multicore CPU architectures, reducing image data size 30-500-fold; (ii) automated large-scale cell tracking and segmentation; and (iii) visualization, editing and annotation of multiterabyte image data and cell-lineage reconstructions with tens of millions of data points. These software modules are open source. They provide high data throughput using a single computer workstation and are readily applicable to a wide spectrum of biological model systems.

110 citations


Journal ArticleDOI
TL;DR: This work proposes an engine for lossless dynamic and adaptive compression of 3D medical images, which also allows the embedding of security watermarks within them and defines the architecture of a SaaS Cloud system, which is based on the aforementioned engine.

105 citations


Journal ArticleDOI
TL;DR: A novel electrocardiogram (ECG) compression method adapts an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique, which performs lossless compression enhancement and built-in data encryption, pivotal for e-health.
Abstract: This paper presents a novel electrocardiogram (ECG) compression method for e-health applications by adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: the first-stage AFD executes efficient lossy compression with high fidelity; the second-stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from the MIT-BIH arrhythmia benchmark database, the proposed method achieves an average compression ratio (CR) of 17.6–44.5 and a percentage root mean square difference (PRD) of 0.8–2.0% with a highly linear and robust PRD-CR relationship, pushing the compression performance into a previously unexploited region. As such, this paper provides an attractive candidate ECG compression method for pervasive e-health applications.

96 citations


Proceedings ArticleDOI
02 May 2015
TL;DR: This work details a scalable fully pipelined FPGA accelerator that performs LZ77 compression and static Huffman encoding at rates up to 5.6 GB/s and explores tradeoffs between compression quality and FPGA area that allow the same throughput at a fraction of the logic utilization in exchange for moderate reductions in compression quality.
Abstract: Data compression techniques have been the subject of intense study over the past several decades due to exponential increases in the quantity of data stored and transmitted by computer systems. Compression algorithms are traditionally forced to make tradeoffs between throughput and compression quality (the ratio of original file size to compressed file size). FPGAs represent a compelling substrate for streaming applications such as data compression thanks to their capacity for deep pipelines and custom caching solutions. Unfortunately, data hazards in compression algorithms such as LZ77 inhibit the creation of deep pipelines without sacrificing some amount of compression quality. In this work we detail a scalable fully pipelined FPGA accelerator that performs LZ77 compression and static Huffman encoding at rates up to 5.6 GB/s. Furthermore, we explore tradeoffs between compression quality and FPGA area that allow the same throughput at a fraction of the logic utilization in exchange for moderate reductions in compression quality. Compared to recent FPGA compression studies, our emphasis on scalability gives our accelerator a 3.0x advantage in resource utilization at equivalent throughput and compression ratio.
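As a software-only illustration of the LZ77 stage (the paper implements it as a fully pipelined hardware circuit; the window size, minimum match length and greedy naive search below are assumptions), a token stream like the following is what a static Huffman encoder would then code:

```python
# Toy greedy LZ77 tokenizer: emits (offset, length) matches or literal bytes.
# Hardware designs replace the naive window scan with parallel matchers.
def lz77_tokens(data: bytes, window: int = 32 * 1024, min_match: int = 3):
    i, n = 0, len(data)
    while i < n:
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            while i + length < n and data[j + length] == data[i + length] and length < 258:
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= min_match:
            yield ("match", best_off, best_len)   # back-reference into the window
            i += best_len
        else:
            yield ("lit", data[i])                # literal byte
            i += 1
```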

90 citations


Proceedings ArticleDOI
25 May 2015
TL;DR: A lossy compression technique based on wavelet transformation for checkpoints is proposed, and its impact on application results is explored to show that the overall checkpoint time including compression is reduced, while the relative error remains fairly constant.
Abstract: The scale of high performance computing (HPC) systems is growing exponentially, potentially causing prohibitive shrinkage of the mean time between failures (MTBF), while the overall increase in the I/O performance of parallel file systems will lag far behind the increase in scale. As such, there have been various attempts to decrease the checkpoint overhead, one of which is to apply compression techniques to the checkpoint files. While most of the existing techniques focus on lossless compression, their compression rates and thus effectiveness remain rather limited. Instead, we propose a lossy compression technique based on wavelet transformation for checkpoints, and explore its impact on application results. Experimental application of our lossy compression technique to a production climate application, NICAM, shows that the overall checkpoint time including compression is reduced by 81%, while the relative error remains fairly constant at approximately 1.2%, averaged over all variables of the compressed physical quantities, compared to the original checkpoint without compression.
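A rough sketch of the general idea (wavelet transform, coefficient thresholding, then standard lossless packing) is given below; it assumes PyWavelets and zlib and is not the paper's actual transform, quantization or encoder.

```python
# Sketch under stated assumptions: 2D wavelet decomposition of one checkpoint
# field, discarding small coefficients (the lossy step), then lossless packing.
import numpy as np
import pywt   # PyWavelets, assumed available
import zlib

def compress_field(field: np.ndarray, keep: float = 0.1):
    """field: 2-D array large enough for a 3-level decomposition."""
    coeffs = pywt.wavedec2(field, "db2", level=3)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)   # keep the largest `keep` fraction
    arr[np.abs(arr) < thresh] = 0.0
    blob = zlib.compress(arr.astype(np.float32).tobytes())
    return blob, slices, arr.shape

def decompress_field(blob, slices, shape, out_shape):
    arr = np.frombuffer(zlib.decompress(blob), dtype=np.float32).reshape(shape)
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    rec = pywt.waverec2(coeffs, "db2")
    return rec[:out_shape[0], :out_shape[1]]        # crop possible padding
```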

86 citations


Proceedings ArticleDOI
29 Oct 2015
TL;DR: This paper presents the use of conventional image-based compression methods for 3D point clouds and reports the results of several lossless compression methods and the lossy JPEG on point cloud compression.
Abstract: Modern 3D laser scanners make it easy to collect large 3D point clouds. In this paper we present the use of conventional image-based compression methods for 3D point clouds. We map the point cloud onto panorama images to encode the range, reflectance and color value for each point. An encoding method is presented to map the floating-point measured ranges onto a three-channel image. Image compression methods are then used to compress the generated panorama images. We present the results of several lossless compression methods and the lossy JPEG on point cloud compression. Lossless compression methods are designed to retain the original data. On the other hand, lossy compression methods sacrifice detail for a higher compression ratio. This produces artefacts in the recovered point cloud data. We study the effects of these artefacts on the encoded range data. A filtration process is presented for the determination of range outliers from uncompressed point clouds.
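One common way to pack a floating-point range channel into a three-channel 8-bit panorama image is 24-bit fixed-point splitting, sketched below; the paper describes its own encoding, so the scaling and bit layout here are assumptions.

```python
# Quantize ranges to 24-bit fixed point and split the code over three uint8
# channels, so a standard image codec can store them; the inverse reassembles.
import numpy as np

def range_to_rgb(ranges: np.ndarray, max_range: float) -> np.ndarray:
    q = np.clip(ranges / max_range, 0.0, 1.0)
    code = np.round(q * (2**24 - 1)).astype(np.uint32)
    r = (code >> 16) & 0xFF
    g = (code >> 8) & 0xFF
    b = code & 0xFF
    return np.stack([r, g, b], axis=-1).astype(np.uint8)

def rgb_to_range(img: np.ndarray, max_range: float) -> np.ndarray:
    code = ((img[..., 0].astype(np.uint32) << 16)
            | (img[..., 1].astype(np.uint32) << 8)
            | img[..., 2].astype(np.uint32))
    return code.astype(np.float64) / (2**24 - 1) * max_range
```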

66 citations


Proceedings ArticleDOI
17 Dec 2015
TL;DR: The algorithm is capable of compressing incrementally acquired data, of local decompression and of decompressing a subsampled representation of the original data, and is based on local 2D parameterizations of surface point cloud data, for which an efficient approach is described.
Abstract: With today's advanced 3D scanner technology, huge amounts of point cloud data can be generated in short amounts of time. Data compression is thus necessary for storage and especially for transmission, e.g., via wireless networks. While previous approaches delivered good compression ratios and interesting theoretical insights, they are either computationally expensive or do not support incrementally acquired data and local decompression, two requirements we found necessary in many applications. We present a compression approach that is efficient in storage requirements as well as in computational cost, as it can compress and decompress point cloud data in real time. Furthermore, it is capable of compressing incrementally acquired data, of local decompression and of decompressing a subsampled representation of the original data. Our method is based on local 2D parameterizations of surface point cloud data, for which we describe an efficient approach. We suggest the use of standard image compression techniques for the compression of local details. While exhibiting state-of-the-art compression ratios, our approach remains easy to implement. In our evaluation, we compare our approach to previous ones and discuss the choice of parameters. Due to our algorithm's efficiency, we consider it a reference concerning speed and compression rates.

65 citations


Patent
24 Aug 2015
TL;DR: In this paper, the authors propose a data processing method for efficiently storing and retrieving data, e.g., blocks of data, to and from memory, using linked lists and/or tables for tracking duplicate data blocks received for storage, the use of lossless data compression, and de-duplication based on comparing hash values, compressed data block sizes, and bit by bit comparisons of the block of data to be stored and previously stored blocks.
Abstract: Data processing methods and apparatus for efficiently storing and retrieving data, e.g., blocks of data, to and from memory. The data processing includes, e.g., techniques such as using linked lists and/or tables for tracking duplicate data blocks received for storage, the use of lossless data compression, and de-duplication based on comparing hash values, compressed data block sizes, and/or bit by bit comparisons of the block of data to be stored and previously stored blocks of data.
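A simplified sketch of the de-duplication flow described in the claim (hash lookup, then a compressed-size check, then a full comparison) might look like the following; the class name and the use of SHA-256 and zlib are illustrative assumptions, not the patent's implementation.

```python
# Store a block only once: candidates found by hash are confirmed first by
# compressed size (cheap) and then by a full content comparison.
import hashlib
import zlib

class DedupStore:
    def __init__(self):
        self.by_hash = {}    # hash -> list of block ids
        self.blocks = {}     # block id -> compressed data
        self.next_id = 0

    def put(self, block: bytes) -> int:
        digest = hashlib.sha256(block).hexdigest()
        compressed = zlib.compress(block)
        for bid in self.by_hash.get(digest, []):
            stored = self.blocks[bid]
            if len(stored) == len(compressed) and zlib.decompress(stored) == block:
                return bid                      # duplicate: reuse the existing block
        bid = self.next_id
        self.next_id += 1
        self.blocks[bid] = compressed
        self.by_hash.setdefault(digest, []).append(bid)
        return bid
```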

58 citations


Journal ArticleDOI
TL;DR: An extended data model and a network partitioning algorithm into long paths to increase the compression rates for the same error bound are proposed and integrated with the state-of-the-art Douglas-Peucker compression algorithm to obtain a new technique to compress road network trajectory data with deterministic error bounds.
Abstract: With the proliferation of wireless communication devices integrating GPS technology, trajectory datasets are becoming more and more available. The problems concerning the transmission and the storage of such data have become prominent with the continuous increase in volume of these data. A few works in the field of moving object databases deal with spatio-temporal compression. However, these works only consider the case of objects moving freely in the space. In this paper, we tackle the problem of compressing trajectory data in road networks with deterministic error bounds. We analyze the limitations of the existing methods and data models for road network trajectory compression. Then, we propose an extended data model and a network partitioning algorithm into long paths to increase the compression rates for the same error bound. We integrate these proposals with the state-of-the-art Douglas-Peucker compression algorithm to obtain a new technique to compress road network trajectory data with deterministic error bounds. The extensive experimental results confirm the appropriateness of the proposed approach that exhibits compression rates close to the ideal ones with respect to the employed Douglas-Peucker compression algorithm.
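For context, the Douglas-Peucker primitive that the proposed technique builds on keeps only points whose perpendicular deviation from the chord exceeds the error bound; the sketch below is a generic textbook version (the extended data model and the network-partitioning algorithm are not reproduced).

```python
# Classical Douglas-Peucker polyline simplification with error bound epsilon.
import math

def douglas_peucker(points, epsilon):
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1e-12
    dmax, index = 0.0, 0
    # Perpendicular distance of every interior point from the chord.
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > dmax:
            dmax, index = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]           # all points within the bound
    left = douglas_peucker(points[: index + 1], epsilon)
    right = douglas_peucker(points[index:], epsilon)
    return left[:-1] + right                     # keep the split point once
```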

58 citations


Journal ArticleDOI
TL;DR: The aim of this paper is to use the wavelet transform to improve the compression ratio as well as the visual quality, implemented with the well-known sub-band coding and decoding algorithm in the MATLAB 7.1 software tool.
Abstract: In recent years satellite communication has been growing, driving the development of rapid and efficient techniques for the storage and transmission of satellite images. The science of reducing the number of bits required to represent an image is called image compression. The aim of this paper is to reduce the size of the image while preserving the content of the original image; the reduced image is called the compressed image. The compressed image is transmitted and then reconstructed on the receiving side, where it is decompressed to obtain the original image. Transmitting such compressed satellite images improves communication accuracy while requiring less bandwidth. In most systems, the amount of information that the user wishes to communicate or store necessitates some form of compression for efficient and reliable use of the communication or storage system. Although several well-known compression techniques exist, many of them require computationally intensive algorithms. In addition, many compression techniques introduce unwanted attributes such as loss of information. This is a major problem in satellite imaging, where image degradation may be critical. Considering the above, this paper uses the wavelet transform to improve the compression ratio as well as the visual quality, implemented with the well-known sub-band coding and decoding algorithm in the MATLAB 7.1 software tool.

Journal ArticleDOI
TL;DR: A new lossless non-reference-based FASTQ compression algorithm named Lossless FASTQ Compressor is introduced, and it is revealed that the algorithm achieves better compression ratios on LS454 and SOLiD datasets.
Abstract: MOTIVATION Next Generation Sequencing (NGS) technologies have revolutionized genomic research by reducing the cost of whole genome sequencing. One of the biggest challenges posed by modern sequencing technology is economic storage of NGS data. Storing raw data is infeasible because of its enormous size and high redundancy. In this article, we address the problem of storage and transmission of large FASTQ files using innovative compression techniques. RESULTS We introduce a new lossless non-reference-based FASTQ compression algorithm named Lossless FASTQ Compressor. We have compared our algorithm with other state-of-the-art big data compression algorithms, namely gzip, bzip2, fastqz (Bonfield and Mahoney, 2013), fqzcomp (Bonfield and Mahoney, 2013), Quip (Jones et al., 2012) and DSRC2 (Roguski and Deorowicz, 2014). This comparison reveals that our algorithm achieves better compression ratios on LS454 and SOLiD datasets. AVAILABILITY AND IMPLEMENTATION The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/rajasek/lfqc-v1.1.zip. CONTACT rajasek@engr.uconn.edu.
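This is not the LFQC algorithm itself, but most FASTQ compressors share the same stream-splitting idea: separate identifiers, bases and quality strings and compress each stream with a codec suited to it. The sketch below assumes a well-formed four-line-per-record FASTQ file and uses bz2 as a stand-in codec.

```python
# Split a FASTQ file into id / sequence / quality streams and compress each
# separately (the '+' separator line is assumed redundant and dropped).
import bz2

def compress_fastq(path: str) -> dict:
    ids, seqs, quals = [], [], []
    with open(path, "rt") as fh:
        for i, line in enumerate(fh):
            line = line.rstrip("\n")
            if i % 4 == 0:
                ids.append(line)
            elif i % 4 == 1:
                seqs.append(line)
            elif i % 4 == 3:
                quals.append(line)
    return {
        "ids": bz2.compress("\n".join(ids).encode()),
        "seq": bz2.compress("\n".join(seqs).encode()),
        "qual": bz2.compress("\n".join(quals).encode()),
    }
```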

Journal ArticleDOI
TL;DR: Fractal lossy compression for the non-ROI part and Context Tree Weighting lossless compression for the ROI part of an image are proposed for efficient compression and compared with other methods such as the Integer Wavelet Transform and Scalable RBC.

Journal ArticleDOI
TL;DR: This work introduces a new compression scheme for labeled trees based on top trees that is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast navigational queries directly on the compressed representation.
Abstract: We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast navigational queries directly on the compressed representation. We show that the new compression scheme achieves close to optimal worst-case compression, can compress exponentially better than DAG compression, is never much worse than DAG compression, and supports navigational queries in logarithmic time.

Proceedings ArticleDOI
05 Dec 2015
TL;DR: It is shown that HyComp, augmented with the proposed floating-point-number compression method, offers superior performance in comparison with prior art; the paper also contributes a compression method that exploits value locality in data types with predefined semantic value fields.
Abstract: Proposed cache compression schemes make design-time assumptions on value locality to reduce decompression latency. For example, some schemes assume that common values are spatially close whereas other schemes assume that null blocks are common. Most schemes, however, assume that value locality is best exploited by fixed-size data types (e.g., 32-bit integers). This assumption falls short when other data types, such as floating-point numbers, are common. This paper makes two contributions. First, HyComp -- a hybrid cache compression scheme -- selects the best-performing compression scheme, based on heuristics that predict data types. Data types considered are pointers, integers, floating-point numbers and the special (and trivial) case of null blocks. Second, this paper contributes with a compression method that exploits value locality in data types with predefined semantic value fields, e.g., as in the exponent and the mantissa in floating-point numbers. We show that HyComp, augmented with the proposed floating-point-number compression method, offers superior performance in comparison with prior art.
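HyComp itself is a hardware cache-compression scheme with its own encodings; the software sketch below only illustrates why splitting the semantic fields of IEEE-754 doubles (sign, exponent, mantissa) exposes value locality that whole-word compression misses. The example data and the use of zlib are assumptions for illustration.

```python
# Separate sign / exponent / mantissa streams of double-precision values; the
# exponent and sign streams are typically highly repetitive on real data.
import numpy as np
import zlib

def split_fields(values: np.ndarray):
    bits = values.astype(np.float64).view(np.uint64)
    sign = (bits >> 63).astype(np.uint8)
    exponent = ((bits >> 52) & 0x7FF).astype(np.uint16)
    mantissa = bits & ((1 << 52) - 1)
    return sign, exponent, mantissa

vals = np.linspace(100.0, 101.0, 4096)            # illustrative narrow-range data
sign, exp, man = split_fields(vals)
whole = len(zlib.compress(vals.tobytes()))
fields = sum(len(zlib.compress(a.tobytes())) for a in (sign, exp, man))
print(whole, fields)                              # compare whole-word vs per-field sizes
```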

Journal ArticleDOI
01 Nov 2015-Optik
TL;DR: An improved medical image compression technique based on region of interest (ROI) is proposed to maximize compression and a set of experiments is designed to assess the effectiveness of the proposed compression method.

Journal ArticleDOI
TL;DR: The delta compression approach is extended to allow users to trade a small maximum error margin for large improvements to the compression ratio, and a new trajectory compression system called Trajic is proposed based on the results of the study.
Abstract: The need to store vast amounts of trajectory data becomes more problematic as GPS-based tracking devices become increasingly prevalent. There are two commonly used approaches for compressing trajectory data. The first is the line generalisation approach, which aims to fit the trajectory using a series of line segments. The second is to store the initial data point and then store the remaining data points as a sequence of successive deltas. The line generalisation approach is only effective when given a large error margin, and existing delta compression algorithms do not permit lossy compression. Consequently, there is an uncovered gap in which users expect a good compression ratio by giving away only a small error margin. This paper fills this gap by extending the delta compression approach to allow users to trade a small maximum error margin for large improvements to the compression ratio. In addition, alternative techniques are extensively studied for the following two key components of any delta-based approach: predicting the value of the next data point and encoding leading zeros. We propose a new trajectory compression system called Trajic based on the results of the study. Experimental results show that Trajic produces 1.5 times smaller compressed data than a straightforward delta compression algorithm for lossless compression and produces 9.4 times smaller compressed data than a state-of-the-art line generalisation algorithm when using a small maximum error bound of 1 meter.
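A hedged sketch of the two components the abstract names, predicting the next data point and encoding leading zeros, is shown below. It uses a simple "previous value" predictor on the raw float bits and counts leading zero bytes of the residual; Trajic's actual predictors and bit-level codes differ.

```python
# Predict each value as the previous one, XOR the 64-bit float patterns, and
# store only the non-zero tail of each residual behind a 1-byte length header.
import struct

def xor_residuals(values):
    out, prev = [], 0
    for v in values:
        bits = struct.unpack("<Q", struct.pack("<d", v))[0]
        out.append(bits ^ prev)         # many leading zeros when prediction is close
        prev = bits
    return out

def leading_zero_bytes(residual: int) -> int:
    return (64 - residual.bit_length()) // 8

def encode(residual: int) -> bytes:
    lz = leading_zero_bytes(residual)
    body = residual.to_bytes(8 - lz, "big") if lz < 8 else b""
    return bytes([lz]) + body           # header + shortened residual
```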

Proceedings ArticleDOI
19 Apr 2015
TL;DR: The proposed algorithm for the compression of plenoptic images is compared with state-of-the-art image compression algorithms, namely JPEG 2000 and JPEG XR, and it is demonstrated that the proposed algorithm improves the coding efficiency.
Abstract: Plenoptic images are obtained from the projection of light crossing a matrix of microlens arrays, which replicates the scene from different directions onto a camera sensor. Plenoptic images have a different structure from regular digital images, and novel algorithms for data compression are currently under research. This paper proposes an algorithm for the compression of plenoptic images. The micro-images composing a plenoptic image are processed by an adaptive prediction tool, aiming at reducing data correlation before entropy coding takes place. The algorithm is compared with state-of-the-art image compression algorithms, namely JPEG 2000 and JPEG XR. The obtained results demonstrate that the proposed algorithm improves the coding efficiency.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a new lossy compressor for the quality values present in genomic data files (e.g. FASTQ and SAM files), which comprise roughly half of the storage space (in the uncompressed domain).
Abstract: MOTIVATION Recent advancements in sequencing technology have led to a drastic reduction in the cost of sequencing a genome. This has generated an unprecedented amount of genomic data that must be stored, processed and transmitted. To facilitate this effort, we propose a new lossy compressor for the quality values present in genomic data files (e.g. FASTQ and SAM files), which comprise roughly half of the storage space (in the uncompressed domain). Lossy compression allows for compression of data beyond its lossless limit. RESULTS The proposed algorithm QVZ exhibits better rate-distortion performance than the previously proposed algorithms, for several distortion metrics and for the lossless case. Moreover, it allows the user to define any quasi-convex distortion function to be minimized, a feature not supported by the previous algorithms. Finally, we show that QVZ-compressed data exhibit better genotyping performance than data compressed with previously proposed algorithms, in the sense that, for a similar rate, genotyping closer to that achieved with the original quality values is obtained. AVAILABILITY AND IMPLEMENTATION QVZ is written in C and can be downloaded from https://github.com/mikelhernaez/qvz. CONTACT mhernaez@stanford.edu or gmalysa@stanford.edu or iochoa@stanford.edu SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.

Journal ArticleDOI
TL;DR: A new data coding and transmission method is proposed that is specifically targeted at wireless SHM systems deployed on large civil infrastructure; it is able to withstand data loss of up to 30% and still provides lossless reconstruction of the original sensor data with overwhelming probability.
Abstract: Lossy transmission is a common problem suffered by monitoring systems based on wireless sensors. Though extensive work has been done to enhance the reliability of data communication in computer networks, few of the existing methods are well tailored to wireless sensors for structural health monitoring (SHM). These methods are generally unsuitable for resource-limited wireless sensor nodes and data-intensive SHM applications. In this paper, a new data coding and transmission method is proposed that is specifically targeted at wireless SHM systems deployed on large civil infrastructure. The proposed method includes two coding stages: 1) a source coding stage to compress the natural redundant information inherent in SHM signals and 2) a redundant coding stage to inject artificial redundancy into wireless transmission to enhance transmission reliability. Methods with light memory and computational overheads are adopted in the coding process to meet the resource constraints of wireless sensor nodes. In particular, the lossless entropy compression method is implemented for data compression, and a simple random matrix projection is proposed for the redundant transformation. After coding, a wireless sensor node transmits the same payload of coded data instead of the original sensor data to the base station. Some data loss may occur during the transmission of the coded data. However, the complete original data can be reconstructed losslessly at the base station from the incomplete coded data, given that the data loss ratio is reasonably low. The proposed method is implemented on the Imote2 smart sensor platform and tested in a series of communication experiments on a cable-stayed bridge. Examples and statistics show that the proposed method is very robust against data loss. The method is able to withstand data loss of up to 30% and still provide lossless reconstruction of the original sensor data with overwhelming probability. This result represents a significant improvement in the data transmission reliability of wireless SHM systems.

Journal ArticleDOI
TL;DR: This study aims to secure medical data by combining them into one file format using steganographic methods, thereby increasing the data repository and transmission capacity of both MR images and EEG signals.

Patent
30 Jun 2015
TL;DR: In this article, a centralized-processing cloud-based RAN (C-RAN) architecture that offers reduced fronthaul data-rate requirements compared to common-public-radio-interface (CPRI) based C-RAN architectures is described.
Abstract: Systems and methods disclosed herein describe a centralized-processing cloud-based RAN (C-RAN or cloud-RAN) architecture that offers reduced front-haul data-rate requirements compared to common-public-radio-interface (CPRI) based C-RAN architectures. Base-band physical-layer processing can be divided between a BBU Pool and an enhanced RRH (eRRH). A frequency-domain compression approach that exploits LTE signal redundancy and user scheduling information can be used at the eRRH to significantly reduce front-haul data-rate requirements. Uniform scalar quantization and variable-rate Huffman coding in the frequency-domain can be applied in a compression approach based on the user scheduling information wherein a lossy compression is followed by a lossless compression.
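The claim names a lossy-then-lossless chain: uniform scalar quantization of frequency-domain samples followed by variable-rate Huffman coding. The sketch below is a generic software illustration of that chain (the quantization step size, the treatment of only the real part, and the heapq-based code construction are assumptions, not the patent's design).

```python
# Uniform scalar quantization (lossy) followed by Huffman coding (lossless).
import heapq
import numpy as np
from collections import Counter

def quantize(samples: np.ndarray, step: float) -> np.ndarray:
    return np.round(samples / step).astype(np.int64)

def huffman_code(symbols) -> dict:
    """Build a Huffman code book: symbol -> bit string."""
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    if len(heap) == 1:
        return {next(iter(heap[0][2])): "0"}
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
    return heap[0][2]

rng = np.random.default_rng(0)
spectrum = np.fft.rfft(rng.standard_normal(512))   # stand-in frequency-domain samples
levels = quantize(spectrum.real, step=0.5).tolist()
book = huffman_code(levels)
compressed_bits = sum(len(book[s]) for s in levels)
```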

Journal ArticleDOI
TL;DR: In this paper, two lossy grounded inductor simulators (GISs) and one lossless GIS including an inverting-type current feedback operational amplifier (CFOA−), two resistors and a capacitor are proposed.
Abstract: In this paper, two lossy grounded inductor simulators (GISs) and one lossless GIS including an inverting-type current feedback operational amplifier (CFOA−), two resistors and a capacitor are proposed. All the proposed GISs can be easily obtained with commercially available active devices such as AD844s. Also, they do not need any critical passive component matching conditions. Both of the proposed lossy GISs employ a grounded capacitor, whereas the lossless one has a floating capacitor. In order to show the performance of the proposed GISs, a number of simulation and experimental test results are given.

Journal ArticleDOI
TL;DR: Group-Simple, Group-Scheme, Group-AFOR, and Group-PFD are proposed in this article to accelerate integer compression algorithms for data-oriented tasks, especially in the era of big data.
Abstract: Compression algorithms are important for data-oriented tasks, especially in the era of “Big Data.” Modern processors equipped with powerful SIMD instruction sets provide us with an opportunity for achieving better compression performance. Previous research has shown that SIMD-based optimizations can multiply decoding speeds. Following these pioneering studies, we propose a general approach to accelerate compression algorithms. By instantiating the approach, we have developed several novel integer compression algorithms, called Group-Simple, Group-Scheme, Group-AFOR, and Group-PFD, and implemented their corresponding vectorized versions. We evaluate the proposed algorithms on two public TREC datasets, a Wikipedia dataset, and a Twitter dataset. With competitive compression ratios and encoding speeds, our SIMD-based algorithms outperform state-of-the-art nonvectorized algorithms with respect to decoding speeds.
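The paper's Group-* codecs are not reproduced here; the sketch below shows the related Group Varint layout (used as an illustrative stand-in, not the paper's format), whose grouped control bytes are what make SIMD shuffle-based decoding practical. It assumes non-negative integers below 2^32.

```python
# Group Varint: one control byte stores the byte lengths of four integers,
# followed by their variable-length payloads.
def group_varint_encode(nums):
    out = bytearray()
    for i in range(0, len(nums), 4):
        group = nums[i:i + 4] + [0] * (4 - len(nums[i:i + 4]))
        control, payload = 0, bytearray()
        for j, n in enumerate(group):
            nbytes = max(1, (n.bit_length() + 7) // 8)     # 1..4 bytes
            control |= (nbytes - 1) << (2 * j)
            payload += n.to_bytes(nbytes, "little")
        out.append(control)
        out += payload
    return bytes(out)

def group_varint_decode(data, count):
    nums, pos = [], 0
    while len(nums) < count:
        control = data[pos]; pos += 1
        for j in range(4):
            nbytes = ((control >> (2 * j)) & 3) + 1
            nums.append(int.from_bytes(data[pos:pos + nbytes], "little"))
            pos += nbytes
    return nums[:count]
```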

Journal ArticleDOI
TL;DR: In this article, a new method of projection of color images without color-sequential technique is proposed, which is a combination of the spatial division of the phase-only light modulator with a pixel separation noise suppression technique and an efficient propagation method called a scaled Fresnel diffraction.

Journal ArticleDOI
01 May 2015
TL;DR: Two approaches based on F-transform, a recent fuzzy approximation technique, are proposed, which allow a higher data compression rate with a lower distortion, even if data are not correlated.
Abstract: Graphical abstractExample 1: MSE behaviour for ambient temperature (dotted, blocks with CR=1.33; dashed, LS with CR=1.33; dot-dashed, blocks with CR=1.83; continuous, LS with CR=1.83; thick, DWT). Display Omitted HighlightsIn WSNs data compression is a way for transferring a large amount of data to a sink.When data are not correlated popular methods such as DWT do not perform well.We propose two F-transform based techniques to address these issues.Publicly available environmental data were used for a comparative study.If compared with DWT our approaches allow higher data compression rates with lower distortions. In wireless sensor networks a large amount of data is collected for each node. The challenge of transferring these data to a sink, because of energy constraints, requires suitable techniques such as data compression. Transform-based compression, e.g. Discrete Wavelet Transform (DWT), are very popular in this field. These methods behave well enough if there is a correlation in data. However, especially for environmental measurements, data may not be correlated. In this work, we propose two approaches based on F-transform, a recent fuzzy approximation technique. We evaluate our approaches with Discrete Wavelet Transform on publicly available real-world data sets. The comparative study shows the capabilities of our approaches, which allow a higher data compression rate with a lower distortion, even if data are not correlated.

Proceedings ArticleDOI
24 Aug 2015
TL;DR: Image compression using simple coding techniques, namely Huffman coding, Discrete Wavelet Transform (DWT) coding and a fractal algorithm, is proposed, and it is shown that the fractal algorithm provides a better compression ratio (CR) and peak signal-to-noise ratio (PSNR).
Abstract: Image compression is one of the advantageous techniques in different types of multimedia services. Image compression techniques have emerged as one of the most important and successful applications in image analysis. In this paper, image compression using simple coding techniques, namely Huffman coding, Discrete Wavelet Transform (DWT) coding and a fractal algorithm, is proposed. These techniques are simple in implementation and utilize less memory. The Huffman coding technique reduces the redundant data in the input images. DWT improves the quality of the compressed image. The fractal algorithm involves an encoding process and gives a better compression ratio. Using the above algorithms, the peak signal-to-noise ratio (PSNR), mean square error (MSE), compression ratio (CR) and bits per pixel (BPP) of the compressed image are calculated for 512×512 input images, and the performance of these parameters is compared across the algorithms. The results clearly show that the fractal algorithm provides a better compression ratio (CR) and peak signal-to-noise ratio (PSNR).
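The figures of merit named in the abstract are standard; for concreteness, a generic way to compute them (not tied to any of the three codecs) is:

```python
# MSE, PSNR, compression ratio and bits per pixel for an image codec comparison.
import numpy as np

def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    m = mse(original, reconstructed)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def compression_ratio(raw_bytes: int, compressed_bytes: int) -> float:
    return raw_bytes / compressed_bytes

def bits_per_pixel(compressed_bytes: int, width: int, height: int) -> float:
    return 8.0 * compressed_bytes / (width * height)
```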

Journal ArticleDOI
TL;DR: b64pack is an efficient method for compression of short text messages based on standards that facilitate easy deployment and interoperability, and it is faster than compress, gzip and bzip2 by orders of magnitude.

Journal ArticleDOI
TL;DR: These techniques can be implemented in the field for storing and transmitting medical images in a secure manner, and their properties have also been proved by the experimental results.
Abstract: Exchanging a medical image via a network from one place to another, or storing a medical image in a particular place in a secure manner, has become a challenge. To overcome this, secure medical image Lossless Compression (LC) schemes have been proposed. The original input grayscale medical images are encrypted by the Tailored Visual Cryptography Encryption Process (TVCE), which is the proposed encryption system. To generate these encrypted images, four types of processes are adopted which play a vital role: the Splitting Process, the Converting Process, the Pixel Process and the Merging Process. The encrypted medical image is compressed by the proposed compression algorithm, i.e., the Pixel Block Short Algorithm (PBSA), and one conventional Lossless Compression (LC) algorithm, JPEG 2000LS, has also been adopted. These two compression methods are used to compress the encrypted medical images separately, and decompression is likewise performed separately. The encrypted output images generated by decompression under the proposed compression algorithm and JPEG 2000LS are decrypted by the Tailored Visual Cryptography Decryption Process (TVCD). To decrypt the encrypted grayscale medical images, four types of processes are involved: the Segregation Process, the Inverse Pixel Process, the 8-Bit to Decimal Conversion Process and the Amalgamate Process. However, this paper is focused on the proposed visual cryptography only. From these processes, two original images have been reconstructed, as given by the two compression algorithms. Ultimately, the two combinations are compared with each other based on various parameters. These techniques can be implemented in the field for storing and transmitting medical images in a secure manner. The Confidentiality, Integrity and Availability (CIA) properties of a medical image have also been proved by the experimental results. In this paper we have focused only on the proposed visual cryptography scheme.

Journal ArticleDOI
TL;DR: Integrated, post-correlation data are first passed through a nearly lossless rounding step which compares the precision of the data to a generalized and calibration-independent form of the radiometer equation, and the newly developed Bitshuffle lossless compression algorithm is subsequently applied.
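A hedged sketch of the two steps named in the summary, rounding relative to a noise estimate and bit-transposing blocks before a generic entropy coder, is given below. The precision rule and block handling are assumptions, and the real Bitshuffle library performs the transpose with SIMD; numpy's unpackbits reproduces the layout slowly.

```python
# Nearly lossless rounding to a fraction of the estimated noise, then a bit
# transpose so identical high-order bits group together for the entropy coder.
import numpy as np
import zlib

def round_to_noise(data: np.ndarray, sigma: np.ndarray, bits_below: int = 3) -> np.ndarray:
    step = sigma / (2 ** bits_below)      # keep `bits_below` bits under the noise floor
    return np.round(data / step) * step

def bit_transpose(block: np.ndarray) -> bytes:
    """block: 1-D array of fixed-width integers (e.g. int32)."""
    rows = np.unpackbits(block.view(np.uint8).reshape(len(block), -1), axis=1)
    return np.packbits(rows.T).tobytes()

quantized = np.round(np.random.default_rng(1).normal(0, 100, 4096)).astype(np.int32)
plain = len(zlib.compress(quantized.tobytes()))
shuffled = len(zlib.compress(bit_transpose(quantized)))
print(plain, shuffled)                    # compare entropy-coded sizes
```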