Topic
Upper and lower bounds
About: Upper and lower bounds is a research topic. Over its lifetime, 56,902 publications have been published within this topic, receiving 1,143,379 citations. The topic is also known as: majoring or minoring element.
Papers published on a yearly basis
Papers
Proceedings Article
01 Jan 2014
TL;DR: A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
Abstract: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.
20,769 citations
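A minimal sketch of the reparameterization described in the abstract above, assuming a Gaussian encoder and a Bernoulli decoder; the network sizes and module names are illustrative, not taken from the paper:

```python
# Minimal sketch (PyTorch): reparameterized variational lower bound.
# All architecture choices here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=20, h_dim=200):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.Tanh())
        self.mu = nn.Linear(h_dim, z_dim)        # posterior mean
        self.log_var = nn.Linear(h_dim, z_dim)   # posterior log-variance
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.Tanh(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterization: z = mu + sigma * eps moves the sampling noise
        # into eps, so gradients flow through mu and log_var.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * log_var) * eps
        x_logits = self.dec(z)
        # Negative reconstruction term (Bernoulli likelihood) plus the
        # analytic KL(q(z|x) || N(0, I)) gives the negative lower bound.
        rec = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
        return rec + kl   # minimize with any standard SGD variant

x = torch.rand(32, 784)   # toy batch of "pixel" data in [0, 1)
loss = TinyVAE()(x)
loss.backward()           # stochastic gradients of the lower bound estimator
```

Because eps carries all the randomness, a single-sample estimate of the bound becomes an ordinary differentiable function of the variational parameters.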
TL;DR: It is suggested that if Guttman's latent-root-one lower bound estimate for the rank of a correlation matrix is accepted as a psychometric upper bound, then the rank for a sample matrix should be estimated by subtracting out the component in the latent roots which can be attributed to sampling error.
Abstract: It is suggested that if Guttman's latent-root-one lower bound estimate for the rank of a correlation matrix is accepted as a psychometric upper bound, following the proofs and arguments of Kaiser and Dickman, then the rank for a sample matrix should be estimated by subtracting out the component in the latent roots which can be attributed to sampling error, and least-squares “capitalization” on this error, in the calculation of the correlations and the roots. A procedure based on the generation of random variables is given for estimating the component which needs to be subtracted.
6,722 citations
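The random-variable procedure described in the abstract above is what later became known as parallel analysis; a minimal numpy version might look like the following (the simulation count and toy data are illustrative assumptions):

```python
# Minimal sketch: estimate matrix rank by subtracting the eigenvalue
# component attributable to sampling error (parallel analysis).
# n_sims and the toy data below are illustrative assumptions.
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Count components whose observed correlation-matrix eigenvalue
    exceeds the mean eigenvalue from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.empty((n_sims, p))
    for i in range(n_sims):
        noise = rng.standard_normal((n, p))
        rand[i] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    # Roots that rise above the random-data average are the ones not
    # explained by chance correlations ("capitalization" on sampling error).
    return int(np.sum(obs > rand.mean(axis=0)))

X = np.random.default_rng(1).standard_normal((200, 10))
X[:, 1] += X[:, 0]                 # induce one genuine correlation
print(parallel_analysis(X))        # estimated number of real components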
Posted Content
TL;DR: In this paper, a stochastic variational inference and learning algorithm is proposed for directed probabilistic models with continuous latent variables and intractable posterior distributions; the algorithm scales to large datasets.
Abstract: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.
4,883 citations
TL;DR: The empirical upper bound relationship for membrane separation of gases, initially published in 1991, is revisited in light of the myriad of data now available; the analysis indicates a different solubility selectivity relationship for perfluorinated polymers than for hydrocarbon/aromatic polymers.
Abstract: The empirical upper bound relationship for membrane separation of gases, initially published in 1991, has been reviewed against the myriad of data now available. The upper bound correlation follows the relationship P_i = k \alpha_{ij}^{n}, where P_i is the permeability of the fast gas, \alpha_{ij} = P_i/P_j is the separation factor, k is referred to as the "front factor", and n is the slope of the log–log plot of the noted relationship. Virtually all experimental data points lie below this line on a plot of log \alpha_{ij} versus log P_i. In spite of intense investigation yielding a much larger dataset than the original correlation, the upper bound has shifted only slightly for many gas pairs. Where more significant shifts are observed, they are almost exclusively due to data now in the literature on a series of perfluorinated polymers and involve many of the gas pairs containing He. The shift observed is primarily a change in the front factor k, whereas the slope of the resultant upper bound relationship remains similar to the prior data correlations. This indicates a different solubility selectivity relationship for perfluorinated polymers than for hydrocarbon/aromatic polymers, as has been noted in the literature. Two additional upper bound relationships are included in this analysis: CO2/N2 and N2/CH4. In addition to the perfluorinated polymers, which produce significant upper bound shifts, minor shifts were observed primarily for polymers exhibiting rigid, glassy structures, including ladder-type polymers. The upper bound correlation can be used to qualitatively determine where the permeability process changes from solution-diffusion to Knudsen diffusion.
4,525 citations
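As a worked example of how the correlation above can be used for screening, the sketch below checks whether a datapoint lies above the upper bound line; the k and n values are placeholders of roughly the magnitude reported for O2/N2, not Robeson's fitted constants:

```python
# Minimal sketch: screen a membrane datapoint against an upper bound line
# P_i = k * alpha_ij**n. The constants below are illustrative placeholders,
# not Robeson's fitted values.
def above_upper_bound(P_i, alpha_ij, k, n):
    """True if the point (P_i, alpha_ij) plots above the bound on a
    log(alpha_ij) versus log(P_i) plot."""
    # Invert P = k * alpha**n to get the bound selectivity at this permeability.
    alpha_bound = (P_i / k) ** (1.0 / n)
    return alpha_ij > alpha_bound

# Hypothetical O2/N2-like datapoint: permeability in Barrer, selectivity.
print(above_upper_bound(P_i=100.0, alpha_ij=6.0, k=1.4e6, n=-5.7))  # True
```

Note that n is negative: as permeability increases, the attainable selectivity along the bound decreases, which is the trade-off the plot expresses.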
TL;DR: The proposed concept of compressibility is shown to play a role analogous to that of entropy in classical information theory where one deals with probabilistic ensembles of sequences rather than with individual sequences.
Abstract: Compressibility of individual sequences by the class of generalized finite-state information-lossless encoders is investigated. These encoders can operate in a variable-rate mode as well as a fixed-rate one, and they allow for any finite-state scheme of variable-length-to-variable-length coding. For every individual infinite sequence x, a quantity \rho(x) is defined, called the compressibility of x, which is shown to be the asymptotically attainable lower bound on the compression ratio that can be achieved for x by any finite-state encoder. This is demonstrated by means of a constructive coding theorem and its converse that, apart from their asymptotic significance, also provide useful performance criteria for finite and practical data-compression tasks. The proposed concept of compressibility is also shown to play a role analogous to that of entropy in classical information theory, where one deals with probabilistic ensembles of sequences rather than with individual sequences. While the definition of \rho(x) allows a different machine for each different sequence to be compressed, the constructive coding theorem leads to a universal algorithm that is asymptotically optimal for all sequences.
3,753 citations
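The constructive side of the abstract above is the incremental phrase parsing underlying the Lempel–Ziv universal algorithm; a toy estimate of the attainable compression ratio, assuming binary text and the standard c·log2(c)/n phrase-count normalization, might look like:

```python
# Minimal sketch: LZ78-style incremental parsing as a crude estimate of a
# sequence's compressibility. The normalization c*log2(c)/len(x) is the
# standard phrase-count bound; the inputs here are toy assumptions.
import math
import random

def lz78_phrase_count(x):
    """Parse x into distinct phrases, each a previously seen phrase
    extended by one symbol, and return the number of phrases."""
    seen, phrase, count = set(), "", 0
    for ch in x:
        phrase += ch
        if phrase not in seen:
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)   # count any trailing partial phrase

def compressibility_estimate(x):
    c = lz78_phrase_count(x)
    # c distinct phrases cost about c*log2(c) bits; normalize by raw length.
    return c * math.log2(c) / len(x)

random.seed(0)
print(compressibility_estimate("ab" * 500))   # periodic: far below 1 bit/symbol
print(compressibility_estimate(
    "".join(random.choice("ab") for _ in range(1000))))  # random: much higher
```

On the periodic string the phrase count grows slowly, so the estimate is small; on coin-flip data it approaches the 1 bit/symbol entropy rate as the sequence lengthens, mirroring the entropy analogy in the abstract.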