Open Access Journal Article (DOI)

Adaptive Lossless Image Data Compression Method Inferring Data Entropy by Applying Deep Neural Network

Shinichi Yamagiwa et al.
09 Feb 2022 · Vol. 11, Iss. 4, p. 504
TL;DR
A method that uses principal component analysis (PCA) and a deep neural network (DNN) to predict the entropy of the data to be compressed, inferring an appropriate compression program for each block of the input data and achieving a good compression ratio without compressing the entire dataset at once.
Abstract
Compressing a large amount of data takes considerable time, and how effective a given algorithm will be on the data cannot be predicted in advance. We therefore cannot choose beforehand the algorithm that will compress the data to its minimum size. As Kolmogorov complexity suggests, the compression performance of the algorithms implemented by the compression programs available in a system varies from one input to another, so the best compression program cannot be deliberately selected before the compression is actually tried. Against this background, this paper proposes a method that uses principal component analysis (PCA) and a deep neural network (DNN) to predict the entropy of the data to be compressed. The method infers an appropriate compression program in the system for each block of the input data and achieves a good compression ratio without having to compress the entire dataset at once. The paper focuses in particular on lossless compression of image data at the level of image blocks. Experimental evaluation shows that the proposed method yields better compression performance than applying a randomly selected compression program to the entire dataset.
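A minimal, hedged sketch of the idea as the abstract describes it (this is not the authors' code; the block size, PCA dimensionality, network shape, candidate compressors, and entropy thresholds below are all illustrative assumptions):

```python
# Sketch: project each image block with PCA, let a small neural network
# predict the block's entropy, then map the predicted entropy to one of
# the lossless compressors available in the system.
import numpy as np
import zlib, bz2, lzma
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

BLOCK = 16  # block side length (assumed)

def blocks(img):
    """Split a grayscale image (H, W) into flattened BLOCK x BLOCK blocks."""
    h, w = img.shape
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            yield img[y:y + BLOCK, x:x + BLOCK].ravel()

def shannon_entropy(block):
    """Empirical entropy in bits/symbol, used as the training target."""
    _, counts = np.unique(block, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# A random image stands in for real training images, which are assumed available.
train_img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
train = np.vstack(list(blocks(train_img)))
pca = PCA(n_components=8).fit(train)
dnn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500)
dnn.fit(pca.transform(train), [shannon_entropy(b) for b in train])

# Stand-ins for the system's compression programs.
COMPRESSORS = [zlib.compress, bz2.compress, lzma.compress]

def compress_block(block):
    """Pick a compressor from the predicted entropy instead of trying them all."""
    h = dnn.predict(pca.transform(block.reshape(1, -1)))[0]
    idx = max(0, min(int(h) // 3, len(COMPRESSORS) - 1))  # illustrative thresholds
    return COMPRESSORS[idx](block.tobytes())
```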



Citations
Journal Article (DOI)

The Possibility of Combining and Implementing Deep Neural Network Compression Methods

TL;DR: The greatest model compression result on disk was achieved by applying the PCQAT method, which reduced the size of the initial model by as much as 45 times, whereas the greatest model acceleration was achieved via distillation on the MobileNetV2 model.
Journal Article (DOI)

Lossless Medical Image Compression by Using Difference Transform

TL;DR: A new method of compressing digital images using the Difference Transform, applied in medical imaging, that proves competitive with, and in many cases better than, the standards used for medical images such as TIFF or PNG.
Proceedings Article (DOI)

Spatial-temporal Data Compression of Dynamic Vision Sensor Output with High Pixel-level Saliency using Low-precision Sparse Autoencoder

TL;DR: In this article, the authors proposed an encoder-decoder-based autoencoder architecture with two convolutional and inverse-convolutional layers and only 10 k parameters to compress event-based DVS output.
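As a rough illustration of a two-layer convolutional encoder-decoder of this kind (the channel counts, kernel sizes, and input shape below are assumptions, not the cited paper's configuration):

```python
# Minimal sketch of an encoder-decoder with two convolutional and two
# inverse (transposed) convolutional layers.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),  # H/2 x W/2
            nn.Conv2d(8, 4, 3, stride=2, padding=1), nn.ReLU(),  # H/4 x W/4 code
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(4, 8, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
# Parameter count is a few hundred here; the cited paper reports about 10 k.
print(sum(p.numel() for p in model.parameters()))
x = torch.rand(1, 1, 64, 64)  # dummy event-frame tensor
assert model(x).shape == x.shape
```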
Journal Article (DOI)

Modelling and Analysis of Hybrid Transformation for Lossless Big Medical Image Compression

TL;DR: In this article, a hybrid approach combining advanced steganography, the wavelet transform (WT), and lossless compression was developed to protect patient data through enhanced security and optimized storage of large image data, allowing a pharmacologist to store twice as much information in the same storage space.
References
Journal Article (DOI)

Data Compression Using Adaptive Coding and Partial String Matching

TL;DR: This paper describes how the conflict can be resolved with partial string matching, and reports experimental results showing that mixed-case English text can be coded in as little as 2.2 bits/character with no prior knowledge of the source.
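To make the 2.2 bits/character figure concrete: the order-0 empirical entropy of a text is a baseline that context models such as PPM improve on by conditioning each symbol's probability on the preceding characters. A minimal calculation:

```python
# Order-0 empirical entropy in bits per character, versus 8 bits for raw ASCII.
import math
from collections import Counter

def bits_per_char(text: str) -> float:
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

sample = "the quick brown fox jumps over the lazy dog"
print(f"{bits_per_char(sample):.2f} bits/char vs 8 bits/char raw ASCII")
```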
Journal Article (DOI)

The context-tree weighting method: basic properties

TL;DR: The authors derive a natural upper bound on the cumulative redundancy of the method for individual sequences, which shows that the proposed context-tree weighting procedure is optimal in the sense that it achieves the Rissanen (1984) lower bound.
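For background, the core recursion the weighting method is built on (standard CTW material, not quoted from this abstract) combines, at every context-tree node $s$, a memoryless estimate with the estimates of the node's children:

$$
P_w^{s} =
\begin{cases}
\tfrac{1}{2}\, P_e^{s} + \tfrac{1}{2}\, P_w^{0s}\, P_w^{1s} & \text{if } s \text{ is an internal node},\\
P_e^{s} & \text{if } s \text{ is a leaf},
\end{cases}
$$

where $P_e^{s}$ is the Krichevsky–Trofimov estimate computed from the symbol counts observed in context $s$; the arithmetic coder then uses the weighted probability $P_w^{\lambda}$ at the root.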
Journal Article (DOI)

Data compression via textual substitution

TL;DR: A general model for data compression is presented which includes most data compression systems in the literature as special cases; trade-offs between different varieties of macro schemes, exact lower bounds on the amount of compression obtainable, and the complexity of encoding and decoding are discussed.
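As a toy illustration of textual substitution (an LZ77-style macro scheme; the window size and minimum match length below are arbitrary choices, not the paper's):

```python
# Replace repeated substrings with (offset, length) pointers into earlier text.
def lz77_compress(data: bytes, window: int = 255):
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            l = 0
            while i + l < len(data) and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_len, best_off = l, i - j
        if best_len >= 3:  # a pointer only pays off for long enough matches
            out.append(("ptr", best_off, best_len))
            i += best_len
        else:
            out.append(("lit", data[i]))
            i += 1
    return out

print(lz77_compress(b"abcabcabcabc"))
# [('lit', 97), ('lit', 98), ('lit', 99), ('ptr', 3, 9)]
```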
Proceedings Article (DOI)

UVG dataset: 50/120fps 4K sequences for video codec analysis and development

TL;DR: The proposed dataset is the first to provide complementary 4K sequences at up to 120 fps and is therefore particularly valuable for cutting-edge multimedia applications; it should be included in subjective and objective quality assessments of next-generation VVC codecs.
Proceedings Article (DOI)

DeepZip: Lossless Data Compression Using Recurrent Neural Networks

TL;DR: In this article, the authors combine recurrent neural network predictors with an arithmetic coder and losslessly compress a variety of synthetic, text and genomic datasets, achieving near-optimal compression for the synthetic datasets.
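A hedged sketch of the predictor-plus-entropy-coder pattern this entry describes, with a toy order-1 frequency model standing in for the RNN and the ideal code length of -log2 p(symbol) standing in for the arithmetic coder's output:

```python
# The better the predictor, the fewer bits the entropy coder needs per symbol.
import math
from collections import defaultdict

def predicted_code_length(data: bytes) -> float:
    counts = defaultdict(lambda: defaultdict(int))  # prev symbol -> next-symbol counts
    total_bits, prev = 0.0, None
    for sym in data:
        ctx = counts[prev]
        # Laplace-smoothed probability of sym given the previous symbol.
        p = (ctx[sym] + 1) / (sum(ctx.values()) + 256)
        total_bits += -math.log2(p)  # ideal arithmetic-coder cost for this symbol
        ctx[sym] += 1
        prev = sym
    return total_bits

data = b"abababababababab" * 8
print(f"{predicted_code_length(data) / len(data):.2f} bits/byte")  # far below 8 for predictable data
```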