
Journal ArticleDOI

A maximum entropy approach to interpolation

01 Sep 1990 - Signal Processing (Elsevier) - Vol. 21, Iss. 1, pp. 17-24

TL;DR: This paper addresses the problem of recovery of the missing samples of a signal from a few of its randomly distributed samples and extends the maximum entropy method to handle additional information about the signal, if available.

Abstract: The maximum entropy approach has been applied to several problems including spectrum estimation, image reconstruction, etc. In this paper we use this approach to address the interpolation problem. Specifically, we address the problem of recovery of the missing samples of a signal from a few of its randomly distributed samples. We also discuss the appropriateness of the maximum entropy method for different distributions of the known samples. Finally, we extend this method to handle additional information about the signal, if available.

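The paper's own maximum-entropy formulation is not reproduced in this listing, but the problem it addresses, recovering missing samples from a few randomly distributed known ones, can be illustrated with a minimal sketch. The sketch below assumes a band-limited signal and uses an iterative projection method (in the spirit of Papoulis-Gerchberg) rather than the paper's entropy functional; the function names, band fraction, and iteration count are illustrative assumptions.

```python
import numpy as np

def recover_missing_samples(x_obs, known_mask, band=0.1, n_iter=200):
    """Iteratively recover missing samples of a band-limited signal.

    x_obs: signal array (values at unknown positions are ignored)
    known_mask: boolean array, True where the sample is known
    band: fraction of FFT bins (one-sided) assumed to carry energy
    """
    n = len(x_obs)
    x = np.where(known_mask, x_obs, 0.0)
    k = int(band * n)  # low-frequency bins kept on each side
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[k + 1 : n - k] = 0.0             # project onto band-limited signals
        x = np.real(np.fft.ifft(X))
        x[known_mask] = x_obs[known_mask]  # re-impose the known samples
    return x

# usage: a low-pass random signal with 60% of its samples missing
rng = np.random.default_rng(0)
n = 256
spec = np.zeros(n, dtype=complex)
spec[1:20] = rng.standard_normal(19) + 1j * rng.standard_normal(19)
true = np.real(np.fft.ifft(spec))
mask = rng.random(n) < 0.4                 # only ~40% of samples observed
est = recover_missing_samples(true, mask, band=0.1)
print("max error on missing samples:", np.max(np.abs((est - true)[~mask])))
```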


Citations
Journal ArticleDOI
TL;DR: A bibliography of over 1600 references related to computer vision and image analysis, arranged by subject matter, is presented, covering topics including architectures; computational techniques; and feature detection, segmentation, and image analysis.
Abstract: This paper presents a bibliography of over 1600 references related to computer vision and image analysis, arranged by subject matter. The topics covered include architectures; computational techniques; feature detection, segmentation, and image analysis; matching, stereo, and time-varying imagery; shape and pattern; color and texture; and three-dimensional scene analysis. A few references are also given on related topics, such as computational geometry, computer graphics, image input/output and coding, image processing, optical processing, visual perception, neural nets, pattern recognition, and artificial intelligence.

15 citations

Journal ArticleDOI
TL;DR: Tests with contrived data records indicate that two algorithms are preferable, one when the gap length is less than 15% of the record, and the other for 20%-50% gaps.
Abstract: The problem considered is to match the periodogram (spectrum) of a real sampled data sequence when only the samples outside a gap are available: that is, when the samples in the gap are missing or corrupted. Different arguments lead to three reasonable estimation algorithms. Tests with contrived data records indicate that two of these algorithms are preferable, one when the gap length is less than 15% of the record, and the other for 20%-50% gaps. An algorithm based on an autoregressive model is found to have an estimation performance that is relatively independent of gap length.

4 citations
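The autoregressive idea mentioned above, fit an AR model to the samples outside the gap and extrapolate across it, can be sketched as follows. This is a minimal illustration, not the paper's estimation algorithms; the least-squares fit and the model order are assumptions.

```python
import numpy as np

def ar_fill_gap(x, gap_start, gap_len, order=8):
    """Fill a gap by forward prediction with an AR model fit by least squares."""
    pre = x[:gap_start]
    # linear system for the AR coefficients: pre[t] ~ sum_k a[k] * pre[t-1-k]
    rows = np.array([pre[t - order : t][::-1] for t in range(order, len(pre))])
    a, *_ = np.linalg.lstsq(rows, pre[order:], rcond=None)
    filled = x.copy()
    for t in range(gap_start, gap_start + gap_len):
        filled[t] = a @ filled[t - order : t][::-1]  # predict one step ahead
    return filled

# usage: a noisy sinusoid with a 30-sample gap
rng = np.random.default_rng(1)
t = np.arange(400)
x = np.sin(2 * np.pi * 0.05 * t) + 0.05 * rng.standard_normal(400)
y = ar_fill_gap(x, gap_start=200, gap_len=30)
print("rms fill error:", np.sqrt(np.mean((y[200:230] - x[200:230]) ** 2)))
```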

Journal ArticleDOI
TL;DR: The Kirdin kinetic machine, an idealized fine-grained structureless computer, has been used to solve the problem of reconstructing a gap in a symbol sequence.
Abstract: A new method for recovering a gap in a symbol sequence is presented. A covering is assembled from suitable, reasonably short strings drawn from the parts of the sequence available for observation. Two criteria are introduced to choose the best covering. If a covering can be assembled entirely from copies of strings occurring in the available parts of the sequence, the best covering is the one that maximizes the entropy of the frequency dictionary developed over the recovered sequence. The second criterion applies when arbitrary strings must be used to cover the gap; here the best covering minimizes the specific entropy of the frequency dictionary developed over the available parts of the sequence against the one developed over the entire recovered sequence. The Kirdin kinetic machine, an idealized fine-grained structureless computer, is used to solve the gap-reconstruction problem.

3 citations
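The entropy criterion can be illustrated with a brute-force sketch: score every candidate fill of the gap by the Shannon entropy of the k-mer frequency dictionary of the recovered sequence, and keep the maximizer. The alphabet, the choice of k, and the exhaustive search are illustrative assumptions; the paper's covering construction (and the Kirdin kinetic machine) avoids enumerating all candidates.

```python
import numpy as np
from collections import Counter
from itertools import product

def kmer_entropy(seq, k=3):
    """Shannon entropy (bits) of the k-mer frequency dictionary of seq."""
    counts = Counter(seq[i : i + k] for i in range(len(seq) - k + 1))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def best_gap_fill(prefix, suffix, gap_len, alphabet="ACGT", k=3):
    """Pick the gap fill maximizing k-mer entropy of the whole sequence.

    Brute force over alphabet**gap_len candidates, so only viable for
    short gaps.
    """
    best, best_h = None, -1.0
    for fill in product(alphabet, repeat=gap_len):
        cand = prefix + "".join(fill) + suffix
        h = kmer_entropy(cand, k)
        if h > best_h:
            best, best_h = cand, h
    return best, best_h

# usage: recover a 2-symbol gap between two observed fragments
seq, h = best_gap_fill("ACGTACGTAC", "GTACGTACGT", gap_len=2)
print(seq, round(h, 3))
```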

Proceedings ArticleDOI
26 Dec 2009
TL;DR: The properties of H.264/AVC CAVLC (context-adaptive variable-length coding) and the characteristics of Huffman coding are analyzed, and a high-efficiency implementation of CAVLC decoding based on codeword classification mapping is proposed.
Abstract: The properties of H.264/AVC CAVLC (context-adaptive variable-length coding) and the characteristics of Huffman coding are analyzed, and a high-efficiency implementation of CAVLC decoding based on codeword classification mapping is proposed in this paper. Exploiting the characteristics of Huffman coding, the codewords and their syntax elements are mapped to classification tables, which greatly accelerates CAVLC decoding at the expense of a small amount of additional memory. The simulation results show that the proposed algorithm achieves an approximately 24-50% speedup without degrading video quality compared to conventional CAVLC decoding.

1 citation
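The classification-mapping idea, replacing bit-by-bit code-tree traversal with a single table lookup on a fixed-width window of the bitstream, can be sketched on a toy prefix code. The code table below is invented for illustration and is not the real H.264/AVC CAVLC table set.

```python
# Table-driven VLC decoding: peek a fixed number of bits and look the window
# up in a precomputed table mapping every possible pattern to (symbol, length).
PEEK = 4  # longest codeword length in this toy code
CODE = {"0": "a", "10": "b", "110": "c", "1110": "d", "1111": "e"}

def build_table(code, peek):
    """Expand each codeword to all peek-bit patterns sharing its prefix."""
    table = {}
    for word, sym in code.items():
        pad = peek - len(word)
        for i in range(2 ** pad):
            pattern = word + format(i, f"0{pad}b") if pad else word
            table[pattern] = (sym, len(word))
    return table

TABLE = build_table(CODE, PEEK)

def decode(bits):
    out, pos = [], 0
    while pos < len(bits):
        window = bits[pos : pos + PEEK].ljust(PEEK, "0")  # pad at stream end
        sym, length = TABLE[window]   # one lookup replaces a tree walk
        out.append(sym)
        pos += length                 # consume only the true codeword length
    return "".join(out)

print(decode("0101101110"))  # -> "abcd"
```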


References
Book
01 Jan 1965
TL;DR: This book covers probability and random variables, from the meaning and axioms of probability through functions of random variables, and stochastic processes, including spectral estimation, entropy, Markov chains, and queueing theory.
Abstract: Part 1, Probability and Random Variables: 1. The Meaning of Probability; 2. The Axioms of Probability; 3. Repeated Trials; 4. The Concept of a Random Variable; 5. Functions of One Random Variable; 6. Two Random Variables; 7. Sequences of Random Variables; 8. Statistics. Part 2, Stochastic Processes: 9. General Concepts; 10. Random Walk and Other Applications; 11. Spectral Representation; 12. Spectral Estimation; 13. Mean Square Estimation; 14. Entropy; 15. Markov Chains; 16. Markov Processes and Queueing Theory.

13,864 citations

Book
01 Jan 2002

12,403 citations

Journal ArticleDOI
Abstract: The functions that require zeroing are real functions of real variables, and it will be assumed that they are continuous and differentiable with respect to these variables. In many practical examples they are extremely complicated and hence laborious to compute, and this fact has two important immediate consequences. The first is that it is impracticable to compute any required derivative by evaluating its algebraic expression; if derivatives are needed, they must be obtained by differencing. The second is that during any iterative solution process the bulk of the computing time will be spent evaluating the functions. Thus, the most efficient process will tend to be the one that requires the smallest number of function evaluations. This paper discusses certain modifications to Newton's method designed to reduce the number of function evaluations required. Results of various numerical experiments are given, and conditions under which the modified versions are superior to the original are tentatively suggested.

2,279 citations
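One common evaluation-saving modification of the kind this paper studies, forming the Jacobian by forward differencing and then freezing it for several Newton steps, can be sketched as follows. The refresh schedule and the test system are illustrative assumptions, not the paper's specific algorithm.

```python
import numpy as np

def fd_jacobian(f, x, fx, h=1e-7):
    """Forward-difference Jacobian: costs n extra evaluations of f."""
    n = len(x)
    J = np.empty((n, n))
    for j in range(n):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (f(xp) - fx) / h
    return J

def newton_frozen_jacobian(f, x0, refresh=3, tol=1e-10, max_iter=50):
    """Newton's method with the Jacobian recomputed only every `refresh` steps.

    Freezing the differenced Jacobian trades some convergence speed for far
    fewer function evaluations per iteration.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    J = fd_jacobian(f, x, fx)
    for it in range(max_iter):
        x = x - np.linalg.solve(J, fx)
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            return x, it + 1
        if (it + 1) % refresh == 0:
            J = fd_jacobian(f, x, fx)
    return x, max_iter

# usage: solve x0^2 + x1^2 = 4 and x0 * x1 = 1
f = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] * x[1] - 1.0])
root, iters = newton_frozen_jacobian(f, [2.0, 0.0])
print(root, iters, f(root))
```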

01 Jan 1967

2,052 citations

Journal ArticleDOI
01 Sep 1982
TL;DR: The relations between maximum-entropy (MAXENT) and other methods of spectral analysis such as the Schuster, Blackman-Tukey, maximum-likelihood, Bayesian, and Autoregressive models are discussed, emphasizing that they are not in conflict, but rather are appropriate in different problems.
Abstract: We discuss the relations between maximum-entropy (MAXENT) and other methods of spectral analysis such as the Schuster, Blackman-Tukey, maximum-likelihood, Bayesian, and autoregressive (AR, ARMA, or ARIMA) models, emphasizing that they are not in conflict, but rather are appropriate in different problems. We conclude that:
1) "Orthodox" sampling theory methods are useful in problems where we have a known model (sampling distribution) for the properties of the noise, but no appreciable prior information about the quantities being estimated.
2) MAXENT is optimal in problems where we have prior information about multiplicities, but no noise.
3) The full Bayesian solution includes both of these as special cases and is needed in problems where we have both prior information and noise.
4) AR models are in one sense a special case of MAXENT, but in another sense they are ubiquitous in all spectral analysis problems with discrete time series.
5) Empirical methods such as Blackman-Tukey, which do not invoke even a likelihood function, are useful in the preliminary, exploratory phase of a problem where our knowledge is sufficient to permit intuitive judgments about how to organize a calculation (smoothing, decimation, windows, prewhitening, padding with zeroes, etc.) but insufficient to set up a quantitative model which would do the proper things for us automatically and optimally.

1,485 citations
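The sense in which AR models are a special case of MAXENT (point 4 above) can be made concrete: for a Gaussian process constrained by a finite set of autocorrelation lags, the maximum-entropy spectrum is exactly the all-pole AR spectrum. A minimal Yule-Walker sketch, with the model order and test signal as illustrative assumptions:

```python
import numpy as np

def ar_maxent_spectrum(x, order=8, nfreq=512):
    """Maximum-entropy (all-pole/AR) spectrum via the Yule-Walker equations."""
    x = x - np.mean(x)
    n = len(x)
    # biased autocorrelation estimates for lags 0..order
    r = np.array([x[: n - k] @ x[k:] / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])      # AR coefficients
    sigma2 = r[0] - a @ r[1:]          # prediction-error variance
    freqs = np.linspace(0.0, 0.5, nfreq)
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
    return freqs, sigma2 / np.abs(1.0 - z @ a) ** 2

# usage: two sinusoids in noise; the MAXENT/AR spectrum resolves the stronger
rng = np.random.default_rng(2)
t = np.arange(512)
x = (np.sin(2 * np.pi * 0.10 * t) + 0.5 * np.sin(2 * np.pi * 0.13 * t)
     + rng.standard_normal(512))
freqs, S = ar_maxent_spectrum(x, order=16)
print("strongest MAXENT peak near f =", freqs[np.argmax(S)])
```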