Topic

Probability mass function

About: Probability mass function is a research topic. Over the lifetime, 2853 publications have been published within this topic receiving 101710 citations. The topic is also known as: pmf.
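For reference, the pmf of a discrete random variable X assigns to each value the probability that X takes it:

$$p_X(x) = \Pr(X = x), \qquad p_X(x) \ge 0, \qquad \sum_x p_X(x) = 1.$$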


Papers
Posted Content
TL;DR: This work introduces several ways of regularizing the GAN objective that can dramatically stabilize training, and shows that these regularizers help distribute probability mass fairly across the modes of the data-generating distribution during the early phases of training, thus providing a unified solution to the missing-modes problem.
Abstract: Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to missing modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high-dimensional spaces, which can easily cause training to get stuck or push probability mass in the wrong direction, toward regions of higher concentration than the data-generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help distribute probability mass fairly across the modes of the data-generating distribution during the early phases of training, thus providing a unified solution to the missing-modes problem.

456 citations
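The mode-covering behavior these regularizers target can be made concrete in code. Below is a minimal PyTorch sketch of an encoder-based regularizer in this spirit; the networks, coefficients, and exact loss form are illustrative assumptions, not the paper's precise construction.

```python
import torch
import torch.nn as nn

# Hypothetical toy networks; shapes and architectures are illustrative only.
latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
E = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

def generator_loss(x_real, lam1=0.2, lam2=0.2):
    """Generator objective with mode-regularizing terms (sketch).

    An encoder E maps real data back to latent space; penalizing the
    reconstruction G(E(x)) ties the generator to every region where real
    data lives, spreading probability mass across modes instead of
    letting it collapse onto a few of them.
    """
    z = torch.randn(x_real.size(0), latent_dim)
    adv = -torch.log(torch.sigmoid(D(G(z))) + 1e-8).mean()       # standard GAN term
    x_rec = G(E(x_real))                                         # reconstruction of real data
    rec = ((x_real - x_rec) ** 2).mean()                         # geometric regularizer
    adv_rec = -torch.log(torch.sigmoid(D(x_rec)) + 1e-8).mean()  # mode regularizer
    return adv + lam1 * rec + lam2 * adv_rec
```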

Book
01 Jan 1949

448 citations

Proceedings ArticleDOI
TL;DR: In this article, it is shown that common embedding methods (least significant bit, spread spectrum, and DCT hiding) are equivalent to a lowpass filtering of histograms, quantified by a decrease in the center of mass (COM) of the histogram characteristic function (HCF); this decrease is exploited in known-scheme detection to classify unaltered and spread-spectrum images with a bivariate classifier.
Abstract: The process of information hiding is modeled in the context of additive noise. Under an independence assumption, the histogram of the stegomessage is a convolution of the noise probability mass function (PMF) and the original histogram. In the frequency domain, this convolution is viewed as a multiplication of the histogram characteristic function (HCF) and the noise characteristic function. Least significant bit, spread spectrum, and DCT hiding methods for images are analyzed in this framework. It is shown that these embedding methods are equivalent to a lowpass filtering of histograms, which is quantified by a decrease in the HCF center of mass (COM). These decreases are exploited in known-scheme detection to classify unaltered and spread-spectrum images using a bivariate classifier. Finally, a blind detection scheme is built that uses only statistics from unaltered images. By calculating the Mahalanobis distance from a test COM to the training distribution, a threshold is used to identify steganographic images. At an embedding rate of 1 b.p.p., greater than 95% of the stego images are detected with a false-alarm rate of 5%.

444 citations
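The COM statistic at the heart of this detector is easy to sketch. The following is a minimal Python version, assuming 8-bit grayscale images; the frequency range and normalization are illustrative, and the paper's exact definition may differ.

```python
import numpy as np

def hcf_com(image_u8: np.ndarray) -> float:
    """Center of mass (COM) of the histogram characteristic function.

    The HCF is the DFT of the image's intensity histogram; additive
    embedding noise multiplies the HCF by the noise characteristic
    function, low-pass filtering the histogram and lowering this COM.
    """
    hist = np.bincount(image_u8.ravel(), minlength=256).astype(float)
    hcf = np.abs(np.fft.fft(hist))[1:129]   # magnitudes at positive frequencies
    k = np.arange(1, 129)
    return float((k * hcf).sum() / hcf.sum())
```

A blind detector in the spirit of the abstract would fit a distribution of hcf_com values on unaltered images and flag test images whose Mahalanobis distance from that distribution exceeds a threshold.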

Journal ArticleDOI
TL;DR: It is shown that, for a wide class of probability distributions on the data, the probability constraints can be converted explicitly into convex second-order cone constraints; hence the probability-constrained linear program can be solved exactly with great efficiency.
Abstract: In this paper, we discuss linear programs in which the data that specify the constraints are subject to random uncertainty. A usual approach in this setting is to enforce the constraints up to a given level of probability. We show that, for a wide class of probability distributions (namely, radial distributions) on the data, the probability constraints can be converted explicitly into convex second-order cone constraints; hence, the probability-constrained linear program can be solved exactly with great efficiency. Next, we analyze the situation where the probability distribution of the data is not completely specified, but is only known to belong to a given class of distributions. In this case, we provide explicit convex conditions that guarantee the satisfaction of the probability constraints for any possible distribution belonging to the given class.

404 citations
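A worked instance of the conversion the paper describes, for the Gaussian case (one member of the radial class; notation here is for illustration): if a constraint row $a$ is Gaussian with mean $\bar{a}$ and covariance $\Sigma$, then for a confidence level $\eta \ge 1/2$,

$$\Pr\{a^\top x \le b\} \ge \eta \quad\Longleftrightarrow\quad \bar{a}^\top x + \Phi^{-1}(\eta)\,\|\Sigma^{1/2} x\|_2 \le b,$$

where $\Phi^{-1}$ is the standard normal quantile function; the right-hand side is exactly a convex second-order cone constraint.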

Book ChapterDOI
01 Jan 1993
TL;DR: The problem of converting possibility measures into probability measures has received attention in the past, though from relatively few scholars; it has roots at least as much in the possibility/probability consistency principle of Zadeh (1978), which he proposed in the paper that founded possibility theory.
Abstract: The problem of converting possibility measures into probability measures has received attention in the past, though from relatively few scholars. This question is philosophically interesting as part of the debate between probability and fuzzy sets. The embedding of fuzzy sets into random set theory, as done by Goodman and Nguyen (1985) and Wang Peizhuang (1983), among others, has solved this question in principle. However, the conversion problem has roots at least as much in the possibility/probability consistency principle of Zadeh (1978), which he proposed in the paper that founded possibility theory.

398 citations
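One standard conversion in the family this chapter surveys is the Dubois–Prade transformation, sketched below; the chapter discusses several alternatives, so this particular formula is an illustrative choice rather than the chapter's single method.

```python
def possibility_to_probability(pi):
    """Dubois-Prade transformation from a possibility distribution to a
    probability distribution (one of several conversions; illustrative).

    Input: possibility degrees pi_1 >= pi_2 >= ... >= pi_n with pi_1 = 1.
    Output: probabilities p_i = sum_{j>=i} (pi_j - pi_{j+1}) / j,
    taking pi_{n+1} = 0, so that the p_i sum to 1.
    """
    n = len(pi)
    pi_ext = list(pi) + [0.0]
    return [sum((pi_ext[j] - pi_ext[j + 1]) / (j + 1) for j in range(i, n))
            for i in range(n)]

# Example: possibility_to_probability([1.0, 0.6, 0.2])
# -> [0.667, 0.267, 0.067] (approximately)
```

In the example, each p_i stays below the corresponding possibility degree, which respects the consistency requirement that probability should not exceed possibility.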


Network Information
Related Topics (5)
- Markov chain: 51.9K papers, 1.3M citations (87% related)
- Estimator: 97.3K papers, 2.6M citations (85% related)
- Probabilistic logic: 56K papers, 1.3M citations (83% related)
- Inference: 36.8K papers, 1.3M citations (82% related)
- Robustness (computer science): 94.7K papers, 1.6M citations (81% related)
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  11
2022  27
2021  61
2020  88
2019  91
2018  70