Proceedings ArticleDOI

Sequential functional quantization

TLDR
In this article, the authors study lossy sequential estimation of a smooth function of correlated streamed data, apply high-resolution quantization theory to derive the optimal distortion-rate exponent for companding quantization, and connect the result to sufficient statistics and to the sensitivity matrices introduced by Linder et al. for non-difference distortion measures.
Abstract
We consider the problem of lossy estimation of an arbitrary smooth function of correlated data in a stream. In this problem, a user sequentially observes correlated random variables and wants to construct an estimate of the specified function so that the mean squared estimation error is small. Techniques from high resolution quantization theory are applied and expanded for this problem, and the optimal distortion-rate exponent for companding quantization is determined. In the process, connections are established to sufficient statistics and to sensitivity matrices, as introduced by Linder et al. in the context of companding quantization under non-difference distortion measures. These results are applied to several example statistical functions, including the sample mean, sample variance, and the p-th order statistic.
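The setting described in the abstract can be illustrated with a minimal sketch. The code below is not the paper's method; it assumes a plain uniform (identity-compander) quantizer on [0, 1) and uses the sample mean as the target function, one of the examples the abstract lists. Each streamed observation is quantized at a fixed rate R bits before the running estimate is updated, and the per-sample quantization error is bounded by half a cell width, 2^(-R)/2, which is the mechanism behind the classical 2^(-2R) high-resolution distortion exponent.

```python
import random

def uniform_quantize(x, rate):
    """Quantize x in [0, 1) with a uniform quantizer at `rate` bits
    per sample, reconstructing at the cell midpoint. The absolute
    error is at most half a cell width: 2**-rate / 2."""
    levels = 2 ** rate
    cell = int(x * levels)
    return (cell + 0.5) / levels

def quantized_mean(stream, rate):
    """Sequentially estimate the sample mean from quantized
    observations, as a user seeing one sample at a time would."""
    total = 0.0
    for x in stream:
        total += uniform_quantize(x, rate)
    return total / len(stream)

random.seed(0)
stream = [random.random() for _ in range(10_000)]
true_mean = sum(stream) / len(stream)
estimate = quantized_mean(stream, rate=4)
```

Since each per-sample error is bounded by 2^(-R)/2, the error of the mean estimate obeys the same bound; in practice the independent errors partially cancel, so the mean is estimated far more accurately than any single sample, which is one motivation for tailoring the quantizer to the function rather than to the raw data.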


Citations
Journal ArticleDOI

Rate Distortion for Lossy In-Network Linear Function Computation and Consensus: Distortion Accumulation and Sequential Reverse Water-Filling

TL;DR: This work quantifies the accumulation of information loss in distributed computing, and obtains fundamental limits on network computation rate as a function of incremental distortions (and hence incremental loss of information) along the edges of the network.
Proceedings ArticleDOI

Towards optimal quantization of neural networks

TL;DR: This work develops an approach to quantizing deep networks using functional high-rate quantization theory and leads to an optimal quantizer that is computed using the celebrated backpropagation algorithm.
Dissertation

Quantization in acquisition and computation networks

TL;DR: This thesis characterize fundamental trade-offs when the desired computation is incorporated into the compression design and the code length is one, and determines the optimal quantizer for expected relative error (ERE), a measure that is widely useful but is often neglected in the source coding community.
Proceedings ArticleDOI

Coding for lossy function computation: Analyzing sequential function computation with distortion accumulation

TL;DR: The key to applying the analysis of distortion accumulation is showing that the random-coding-based codeword on the receiver side is close, in the mean-square sense, to the MMSE estimate of the sources, even when knowledge of the source distribution is not fully accurate.
Proceedings ArticleDOI

Information dissipation in noiseless lossy in-network function computation

TL;DR: Lower bounds on the rate-distortion function are obtained that are tighter than classical cut-set bounds by a difference which can be arbitrarily large in both data aggregation and consensus.
References
Journal ArticleDOI

Data streams: algorithms and applications

TL;DR: Data Streams: Algorithms and Applications surveys the emerging area of algorithms for processing data streams and associated applications, which rely on metric embeddings, pseudo-random computations, sparse approximation theory and communication complexity.
Journal ArticleDOI

Quantization

TL;DR: The key to a successful quantization is the selection of an error criterion – such as entropy and signal-to-noise ratio – and the development of optimal quantizers for this criterion.
Book

Data Streams: Algorithms and Applications

TL;DR: In this book, the author surveys the mathematical foundations and basic algorithms for processing data streams, together with their applications.
Journal ArticleDOI

Multiterminal source coding with high resolution

TL;DR: It is implied that the loss in the sum of the coding rates due to the separation of the encoders vanishes in the limit of high resolution, and lattice quantizers followed by Slepian-Wolf lossless encoding are asymptotically optimal.
Journal ArticleDOI

Side information aware coding strategies for sensor networks

TL;DR: Coding strategies for estimation under communication constraints in tree-structured sensor networks are developed based on a generalization of Wyner-Ziv source coding with decoder side information, which promotes the flexibility, robustness, and scalability that wireless sensor networks need to operate in uncertain, changing, and resource-constrained environments.