Topic

K-distribution

About: K-distribution is a research topic. Over its lifetime, 1,281 publications have been published within this topic, receiving 51,774 citations.


Papers
Journal ArticleDOI
TL;DR: This paper proposes using approximate posterior distributions that result from operational prior distributions chosen with regard to the realized likelihood function, including mixed-type prior distributions with positive probabilities on singular subsets; a new approximation is also given relating such distributions to absolutely continuous distributions with high local concentrations of density.
Abstract: This paper proposes the use of approximate posterior distributions resulting from operational prior distributions chosen with regard to the realized likelihood function. L.J. Savage's “precise measurement” is generalized for approximation in terms of an arbitrary operational prior density, including mixed-type prior distributions with positive probabilities on singular subsets. A new approximation is also given relating such distributions to absolutely continuous distributions with high local concentrations of density. Mixed-type distributions constructed from the natural conjugate prior distributions are proposed and illustrated in the normal-sampling case for unified Bayesian inference in testing and estimation contexts.

61 citations
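
The construction above is easiest to see in the normal-sampling case the paper mentions. The following Python sketch (not the paper's own code; the priors, data, and parameter values are illustrative assumptions) updates a mixed-type prior, a point mass at mu0 plus a conjugate normal density, with a normal likelihood, yielding both a posterior probability for the singular subset {mu = mu0} and a continuous posterior for estimation.

```python
# Hedged sketch: a mixed-type prior (point mass at mu0 plus a conjugate normal
# density) updated against a normal likelihood, in the spirit of the unified
# testing/estimation setup described above.  All numbers are illustrative.
import numpy as np
from scipy import stats

def mixed_type_posterior(x, sigma, mu0, p0, prior_mean, prior_sd):
    """Posterior for mu under the prior p0 * delta_{mu0} + (1 - p0) * N(prior_mean, prior_sd^2),
    with x_i ~ N(mu, sigma^2) and sigma known."""
    n, xbar = len(x), np.mean(x)

    # Marginal likelihood of the data under the point mass (mu = mu0).
    m0 = stats.norm.pdf(xbar, loc=mu0, scale=sigma / np.sqrt(n))

    # Marginal likelihood under the continuous (conjugate normal) component.
    m1 = stats.norm.pdf(xbar, loc=prior_mean,
                        scale=np.sqrt(sigma**2 / n + prior_sd**2))

    # Posterior weight on the singular subset {mu = mu0}.
    post_p0 = p0 * m0 / (p0 * m0 + (1 - p0) * m1)

    # Continuous part: standard conjugate normal update.
    prec = n / sigma**2 + 1 / prior_sd**2
    post_mean = (n * xbar / sigma**2 + prior_mean / prior_sd**2) / prec
    post_sd = np.sqrt(1 / prec)
    return post_p0, post_mean, post_sd

rng = np.random.default_rng(0)
data = rng.normal(loc=0.3, scale=1.0, size=50)
print(mixed_type_posterior(data, sigma=1.0, mu0=0.0, p0=0.5,
                           prior_mean=0.0, prior_sd=1.0))
```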

Journal ArticleDOI
TL;DR: In this paper, the asymptotic distributions are used to determine the minimum sample size needed to discriminate between log-normal and log-logistic distributions for a user-specified probability of correct selection.
Abstract: Log-normal and log-logistic distributions are often used to analyze lifetime data. For certain ranges of the parameters, the shapes of the probability density functions or the hazard functions can be very similar in nature, so it can be very difficult to discriminate between the two distribution functions. In this article, we consider the discrimination procedure between the two distribution functions. We use the ratio of maximized likelihoods for discrimination purposes. The asymptotic properties of the proposed criterion are investigated. It is observed that the asymptotic distributions are independent of the unknown parameters. The asymptotic distributions are used to determine the minimum sample size needed to discriminate between these two distribution functions for a user-specified probability of correct selection. We perform some simulation experiments to see how the asymptotic results work for small sample sizes. For illustrative purposes, two data sets are analyzed.

61 citations
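
The discrimination procedure lends itself to a short sketch. The following Python code (an illustrative sketch, not the authors' implementation; the simulated lifetimes and the choice to pin the location at zero are assumptions) fits both families by maximum likelihood with scipy, where `fisk` is scipy's name for the log-logistic family, and selects the model with the larger maximized log-likelihood, i.e., according to the sign of the log ratio of maximized likelihoods.

```python
# Hedged sketch of the ratio-of-maximized-likelihoods (RML) idea described above:
# fit both candidate models by maximum likelihood and pick the one with the
# larger maximized log-likelihood.  The data and the fixed location (floc=0)
# are illustrative assumptions.
import numpy as np
from scipy import stats

def discriminate(x):
    # MLE fits with the location pinned at 0, since lifetimes are positive.
    ln_shape, _, ln_scale = stats.lognorm.fit(x, floc=0)
    ll_shape, _, ll_scale = stats.fisk.fit(x, floc=0)   # fisk = log-logistic

    loglik_lognormal = np.sum(stats.lognorm.logpdf(x, ln_shape, loc=0, scale=ln_scale))
    loglik_loglogistic = np.sum(stats.fisk.logpdf(x, ll_shape, loc=0, scale=ll_scale))

    rml = loglik_lognormal - loglik_loglogistic          # log of the RML statistic
    return ("log-normal" if rml > 0 else "log-logistic"), rml

rng = np.random.default_rng(1)
lifetimes = rng.lognormal(mean=1.0, sigma=0.5, size=200)
print(discriminate(lifetimes))
```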

Proceedings ArticleDOI
23 Oct 2005
TL;DR: This work formulates sufficient separation conditions and presents a learning algorithm with provable guarantees for mixtures of distributions that satisfy these conditions, and shows that for isotropic power-law, exponential, and Gaussian distributions the separation condition is optimal up to a constant factor.
Abstract: We consider the problem of learning mixtures of arbitrary symmetric distributions. We formulate sufficient separation conditions and present a learning algorithm with provable guarantees for mixtures of distributions that satisfy these separation conditions. Our bounds are independent of the variances of the distributions; to the best of our knowledge, there were no previous algorithms known with provable learning guarantees for distributions having infinite variance and/or expectation. For Gaussians and log-concave distributions, our results match the best known sufficient separation conditions by D. Achlioptas and F. McSherry (2005) and S. Vempala and G. Wang (2004). Our algorithm requires a sample of size Õ(dk), where d is the number of dimensions and k is the number of distributions in the mixture. We also show that for isotropic power-law, exponential, and Gaussian distributions, our separation condition is optimal up to a constant factor.

60 citations
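
The algorithm and its guarantees are specific to the paper, but the role of separation can be illustrated generically. The sketch below (not the authors' algorithm; the dimensions, separation, restart count, and the k-medians heuristic are all illustrative assumptions) draws a two-component mixture of isotropic Cauchy distributions, whose variance and expectation do not exist, and recovers the components with a robust, median-based clustering once the centers are well separated.

```python
# Hedged illustration (not the paper's algorithm): with enough separation,
# even infinite-variance components can be recovered by a robust clustering
# step.  A toy Lloyd-style k-medians with coordinate-wise medians separates
# two isotropic Cauchy components; all settings are illustrative.
import numpy as np

rng = np.random.default_rng(2)
d, n = 10, 2000
centers = np.stack([np.zeros(d), np.full(d, 20.0)])           # well-separated means
labels_true = rng.integers(0, 2, size=n)
X = centers[labels_true] + rng.standard_cauchy(size=(n, d))   # infinite-variance noise

def k_medians(X, k, iters=25, restarts=10, seed=0):
    """Lloyd-style k-medians with coordinate-wise medians and random restarts."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(restarts):
        cent = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            dists = np.linalg.norm(X[:, None, :] - cent[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Coordinate-wise medians are robust to the heavy Cauchy tails.
            cent = np.stack([np.median(X[labels == j], axis=0) if np.any(labels == j)
                             else cent[j] for j in range(k)])
        dists = np.linalg.norm(X[:, None, :] - cent[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        cost = dists[np.arange(len(X)), labels].sum()
        if best is None or cost < best[0]:
            best = (cost, labels)
    return best[1]

labels = k_medians(X, k=2)
agreement = max(np.mean(labels == labels_true), np.mean(labels != labels_true))
print(f"fraction of points assigned to the correct component: {agreement:.2f}")
```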

Journal ArticleDOI
TL;DR: In this paper, the probability distributions for quadratic quantum fields, averaged with a Lorentzian test function, are treated in four-dimensional Minkowski vacuum. Closed-form expressions for the probability distributions are not obtained; instead, calculations of a finite number of moments are used to estimate the lower bounds, the asymptotic forms for large positive argument, and possible fits to the intermediate region.
Abstract: We treat the probability distributions for quadratic quantum fields, averaged with a Lorentzian test function, in four-dimensional Minkowski vacuum. These distributions share some properties with previous results in two-dimensional spacetime. Specifically, there is a lower bound at a finite negative value, but no upper bound. Thus arbitrarily large positive energy density fluctuations are possible. We are not able to give closed form expressions for the probability distribution, but rather use calculations of a finite number of moments to estimate the lower bounds, the asymptotic forms for large positive argument, and possible fits to the intermediate region. The first 65 moments are used for these purposes. All of our results are subject to the caveat that these distributions are not uniquely determined by the moments. We apply the asymptotic form of the electromagnetic energy density distribution to estimate the nucleation rates of black holes and of Boltzmann brains.

59 citations
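
The moment-based reasoning can be illustrated with a generic toy example. The sketch below (not the field-theory computation; a Gamma distribution stands in for the unknown energy-density distribution purely for illustration) shows how a finite list of moments constrains the upper tail through the Markov-type bound P(X > t) <= min_n m_n / t^n, using 65 moments as in the paper.

```python
# Hedged, generic illustration (not the field-theory calculation above): with
# only finitely many moments m_n = E[X^n] known, Markov's inequality gives the
# tail bound  P(X > t) <= min_n m_n / t^n.  A Gamma distribution stands in for
# the (unknown) energy-density distribution purely for illustration.
import numpy as np
from scipy import stats, special

shape = 2.0                                    # illustrative Gamma(shape, 1) stand-in
moments = np.array([special.gamma(shape + n) / special.gamma(shape)
                    for n in range(1, 66)])    # first 65 moments, as in the paper

def tail_bound(t, moments):
    n = np.arange(1, len(moments) + 1)
    return np.min(moments / t**n)              # best Markov-type bound over n

for t in (5.0, 10.0, 20.0):
    exact = stats.gamma.sf(t, shape)
    print(f"t={t:5.1f}  moment bound={tail_bound(t, moments):.3e}  exact tail={exact:.3e}")
```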

Patent
07 Jul 2006
TL;DR: A method decomposes input data acquired from a signal: the input data are represented as a probability distribution, and an expectation-maximization procedure is applied iteratively to that distribution to determine its components.
Abstract: A method decomposes input data acquired from a signal. An input signal is sampled to acquire input data. The input data are represented as a probability distribution. An expectation-maximization procedure is applied iteratively to the probability distribution to determine its components.

59 citations
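
A minimal sketch of the general idea, not the patented method itself, is given below: the sampled signal is binned into a normalized histogram, which serves as the probability distribution, and iterative expectation-maximization updates fit a two-component Gaussian mixture to it. The component model, the number of components, and all parameter values are illustrative assumptions.

```python
# Hedged sketch, not the patented method itself: the sampled signal is turned
# into a normalized histogram (a probability distribution) and a two-component
# Gaussian mixture is fit to it with iterative expectation-maximization updates.
import numpy as np

rng = np.random.default_rng(3)
signal = np.concatenate([rng.normal(-2, 0.5, 5000), rng.normal(3, 1.0, 5000)])

# Represent the input data as a probability distribution over histogram bins.
counts, edges = np.histogram(signal, bins=200)
x = 0.5 * (edges[:-1] + edges[1:])          # bin centers
p = counts / counts.sum()                   # discrete probability distribution

def gauss(x, mu, var):
    return np.exp(-0.5 * (x - mu)**2 / var) / np.sqrt(2 * np.pi * var)

# Initial guesses for the component parameters.
w   = np.array([0.5, 0.5])
mu  = np.array([x.min(), x.max()])
var = np.array([1.0, 1.0])

for _ in range(100):
    # E-step: posterior responsibility of each component for each bin.
    r = w[:, None] * gauss(x[None, :], mu[:, None], var[:, None])
    r /= r.sum(axis=0, keepdims=True)
    # M-step: re-estimate weights, means, and variances against the distribution p.
    nk  = (r * p[None, :]).sum(axis=1)
    w   = nk
    mu  = (r * p[None, :] * x[None, :]).sum(axis=1) / nk
    var = (r * p[None, :] * (x[None, :] - mu[:, None])**2).sum(axis=1) / nk

print("weights:", w, "means:", mu, "stddevs:", np.sqrt(var))
```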


Network Information
Related Topics (5)
Markov chain: 51.9K papers, 1.3M citations, 80% related
Estimator: 97.3K papers, 2.6M citations, 78% related
Iterative method: 48.8K papers, 1.2M citations, 76% related
Wavelet: 78K papers, 1.3M citations, 76% related
Robustness (computer science): 94.7K papers, 1.6M citations, 73% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    2
2022    8
2021    3
2020    7
2019    14
2018    16