Proceedings ArticleDOI

Free energy coding

TLDR
This work introduces a new approach to the problem of optimal compression when a source code produces multiple codewords for a given symbol, and illustrates the performance of free energy coding on a simple problem where a compression factor of two is gained.
Abstract
We introduce a new approach to the problem of optimal compression when a source code produces multiple codewords for a given symbol. It may seem that the most sensible codeword to use in this case is the shortest one. However, in the proposed free energy approach, random codeword selection yields an effective codeword length that can be less than the shortest codeword length. If the random choices are Boltzmann distributed, the effective length is optimal for the given source code. Expectation-maximization parameter estimation algorithms minimize this effective codeword length. We illustrate the performance of free energy coding on a simple problem where a compression factor of two is gained by using the new method.
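The abstract's claim can be made concrete with a small numerical sketch. The snippet below is a minimal illustration, not the paper's implementation, and the codeword lengths used are hypothetical. It relies on the bits-back accounting idea: the effective cost of a randomly selected codeword is its expected length minus the entropy of the selection distribution, since the random choice itself carries bits the decoder can recover. With Boltzmann-distributed selection, p_i proportional to 2^(-l_i), the effective length equals -log2(sum_i 2^(-l_i)), which can be less than the shortest codeword length.

```python
import math

def effective_length(lengths, probs):
    """Effective codeword length under random selection with bits-back
    accounting: expected transmitted length minus the entropy of the
    selection distribution (the bits recovered by the decoder)."""
    avg_len = sum(p * l for p, l in zip(probs, lengths))
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return avg_len - entropy

# Hypothetical example: one symbol with three alternative codewords.
lengths = [2, 3, 3]

# Always picking the shortest codeword costs its full length: 2 bits.
shortest = min(lengths)

# Boltzmann selection: p_i proportional to 2^(-l_i).
weights = [2.0 ** -l for l in lengths]
Z = sum(weights)
boltzmann = [w / Z for w in weights]

print(effective_length(lengths, boltzmann))  # 1.0 bit
print(-math.log2(Z))                         # matches: -log2(Z) is the optimum
print(shortest)                              # 2 bits, strictly worse
```

With these assumed lengths, the effective length is 1 bit versus 2 bits for always sending the shortest codeword, reproducing (under these hypothetical lengths) the factor-of-two gain mentioned in the abstract.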


Citations
Journal ArticleDOI

End-to-End Learnt Image Compression via Non-Local Attention Optimization and Improved Context Modeling

TL;DR: An end-to-end learnt lossy image compression approach built on top of the deep neural network (DNN)-based variational auto-encoder (VAE) structure with Non-Local Attention optimization and Improved Context modeling (NLAIC).
Journal ArticleDOI

Intrinsic Classification of Spatially Correlated Data

TL;DR: This work extends MML classification to domains where the 'things' have a known spatial arrangement, so that the classes of neighbouring things may be expected to be correlated, and combines the Snob algorithm with a simple dynamic programming algorithm.
Posted Content

Bit-Swap: Recursive Bits-Back Coding for Lossless Compression with Hierarchical Latent Variables

TL;DR: Bit-Swap is a new compression scheme that generalizes BB-ANS and achieves strictly better compression rates for hierarchical latent variable models with Markov chain structure; empirically, it is superior to existing techniques.
Posted Content

HiLLoC: Lossless Image Compression with Hierarchical Latent Variable Models

TL;DR: Fully convolutional VAE models trained on 32x32 ImageNet generalize well, not just to 64x64 images but also to far larger photographs, with no changes to the model, achieving state-of-the-art compression of full-size ImageNet images.
Proceedings Article

HiLLoC: lossless image compression with hierarchical latent variable models

TL;DR: In this article, the authors make the following striking observation: fully convolutional VAE models trained on 32x32 ImageNet can generalize well to 64x64 and far larger photographs, with no changes to the model.
References
Journal ArticleDOI

A Method for the Construction of Minimum-Redundancy Codes

TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Journal ArticleDOI

Arithmetic coding for data compression

TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method; arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.