Journal ArticleDOI

The integral of a symmetric unimodal function over a symmetric convex set and some probability inequalities

01 Feb 1955 - Vol. 6, Iss: 2, pp 170-176
About: The article was published on 1955-02-01 and is currently open access. It has received 552 citations to date. The article focuses on the topics: Convex set & Subderivative.
Citations
Journal Article
TL;DR: In this paper, the strong law of large numbers and the central limit theorem are discussed for the random line segment co{0,(1, a + X)} when X is a centred Gaussian random vector in a real, separable Banach space.
Abstract: Suppose E is a real, separable Banach space and for each x ∈ E denote by co{0,(1, x)} the line segment joining the two points 0 and (1, x) in R × E. The aim of this paper is to discuss the strong law of large numbers and the central limit theorem for the random line segment co{0,(1, a + X)} when X is a centred Gaussian random vector in E and a ∈ E. Finally, an application to mathematical finance is given.

11 citations


Additional excerpts

  • ...Note that γ(Z(a, γ)r) = 0 since γ(Z(a, γ)r) = γ(ra+ I(r)Oγ) ≤ γ(I(r)Oγ) by the Anderson inequality (see Anderson, 1955)....

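The inequality invoked in the excerpt above is Anderson's: for a centred Gaussian measure γ and a convex set C symmetric about the origin, γ(C + a) ≤ γ(C) for every shift a. A minimal Monte Carlo sketch of that comparison; the two-dimensional Gaussian, the ball C, and the shift a below are illustrative choices, not taken from the cited paper:

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200_000, 2))    # draws from a centred Gaussian measure on R^2

radius = 1.5                             # C = Euclidean ball of radius 1.5 (symmetric, convex)
a = np.array([0.8, -0.3])                # an arbitrary (hypothetical) shift

in_C = lambda pts: np.linalg.norm(pts, axis=1) <= radius
gamma_C = in_C(X).mean()                 # estimate of gamma(C)
gamma_C_shifted = in_C(X - a).mean()     # estimate of gamma(C + a), since X lies in C + a iff X - a lies in C

print(gamma_C_shifted, "<=", gamma_C)    # Anderson's inequality: gamma(C + a) <= gamma(C)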

Journal ArticleDOI
01 Jan 1976
TL;DR: In this paper, it was shown that a marginal function of a unimodal function (even if it is symmetric) need not be unimodal, and generalizations of Anderson's inequality were obtained in different directions.
Abstract: Anderson (1955) gave a definition of a unimodal function on R^n and obtained an inequality for integrals of a symmetric unimodal function over translates of a symmetric convex set. Anderson's assumptions, especially the role of unimodality, are critically examined and generalizations of his inequality are obtained in different directions. It is shown that a marginal function of a unimodal function (even if it is symmetric) need not be unimodal.

11 citations


Cites background from "The integral of a symmetric unimoda..."

  • ...The main result of this paper is a generalization of the following theorem of Anderson (1955) on the integrals of a symmetric unimodal function over translates of a symmetric convex set....

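For reference, the theorem of Anderson (1955) that this citation generalizes, and that the quoted excerpt refers to, is usually stated as follows (reproduced here in standard form for the reader's convenience):

% Anderson's theorem (1955). Hypotheses: E is a subset of R^n, convex and symmetric about the origin;
% f >= 0 satisfies f(x) = f(-x); the level sets {x : f(x) >= u} are convex for every u >= 0;
% and the integral of f over E is finite. Conclusion:
\[
  \int_E f(x + ky)\,dx \;\ge\; \int_E f(x + y)\,dx
  \qquad \text{for every } y \in \mathbb{R}^n \text{ and } 0 \le k \le 1 .
\]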

Journal ArticleDOI
18 Oct 1999 - Metrika
TL;DR: In this paper, the distance optimality criterion for the parameter vector of the classical linear model under normally distributed errors is investigated, and DS-optimal designs are derived for first-order polynomial fit models.
Abstract: Properties of the most familiar optimality criteria, for example A-, D- and E-optimality, are well known, but the distance optimality criterion has not drawn much attention to date. In this paper properties of the distance optimality criterion for the parameter vector of the classical linear model under normally distributed errors are investigated. DS-optimal designs are derived for first-order polynomial fit models. The matter of how the distance optimality criterion is related to traditional D- and E-optimality criteria is also addressed.

11 citations


Cites background from "The integral of a symmetric unimoda..."

  • ...Hence by Anderson's theorem (Anderson 1955, cf. Tong 1990, p. 73) we have ψ_ε(M1) = P(‖b̂1 − b‖ ≤ ε) ≤ P(‖b̂2 − b‖ ≤ ε) = ψ_ε(M2) for all ε > 0. A reasonable weakest requirement for a moment matrix M is that there be no competing moment matrix A which is better than M in the Loewner ordering sense....

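The inequality quoted above can be checked numerically. A minimal simulation sketch, assuming the standard normal linear-model setup in which b̂ − b ~ N(0, σ²M⁻¹) with σ = 1; the moment matrices M1 and M2 below are hypothetical choices with M2 − M1 nonnegative definite:

import numpy as np

rng = np.random.default_rng(1)

M1 = np.array([[2.0, 0.5],
               [0.5, 1.0]])
M2 = M1 + np.diag([1.0, 0.5])            # M2 dominates M1 in the Loewner ordering

def coverage(M, eps=1.0, n=200_000):
    """Monte Carlo estimate of P(||bhat - b|| <= eps) when bhat - b ~ N(0, M^{-1})."""
    dev = rng.multivariate_normal(np.zeros(2), np.linalg.inv(M), size=n)
    return (np.linalg.norm(dev, axis=1) <= eps).mean()

# Anderson's theorem: the better-informed design has the larger coverage probability for every eps.
print(coverage(M1), "<=", coverage(M2))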

Journal ArticleDOI
Taeryon Choi
TL;DR: In this paper, the authors investigated the asymptotic behavior of posterior distributions in nonparametric regression problems when the noise distribution of the regression model is assumed to be non-Gaussian but symmetric, such as the Laplace distribution.
Abstract: We investigate the asymptotic behavior of posterior distributions in nonparametric regression problems when the noise distribution of the regression model is assumed to be non-Gaussian but symmetric, such as the Laplace distribution. Given prior distributions for the unknown regression function and the scale parameter of the noise distribution, we show that the posterior distribution concentrates around the true values of the parameters. Following the approach by Choi and Schervish (Journal of Multivariate Analysis, 98, 1969–1987, 2007) and extending their results, we prove consistency of the posterior distribution of the parameters for nonparametric regression when the errors are symmetric and non-Gaussian, under suitable assumptions.

11 citations


Cites background or methods from "The integral of a symmetric unimoda..."

  • ...After identifying the posterior distribution, we can approximate it based on computer-intensive methods such as Gibbs sampling or Markov chain Monte Carlo (MCMC) methods. In principle, with MCMC methods, Bayesian inference is performed and the posterior distribution of the unknown parameters is obtained numerically. From methodological and computational points of view, several techniques and tools have been proposed and developed, while theoretical and asymptotic studies still leave much to be desired. For example, when the noise distribution is assumed to be known and a Gaussian process prior is used, computational methods have been developed such as in Neal (1996, 1997) and Paciorek (2003). In addition, Neal (1997) also considered Gaussian process regression using Student's t-distributed noise....


  • ...The seminal work by Stone (1977) initiated the issue of consistent estimation of nonparametric regression problems, investigating strong consistency with weak conditions imposed on the underlying distribution. So far, much effort has been given to the theoretical justification of nonparametric regression problems such as consistency, optimal rate of convergence, in particular, from a frequentist perspective. Bayesian approach to nonparametric regression problems provides an alternative statistical framework and needs to be justified in terms of asymptotic points of view, introducing the concept of posterior consistency and establishing it. Posterior consistency and the question about the rate of convergence of posterior distribution in nonparametric regression problems have been mainly studied under Gaussian noise distribution (e.g. Shen and Wasserman 2001; Huang 2004; Choi and Schervish 2007) and further efforts are expected to be taken under the general noise distribution. Specifically, a Bayesian approach in the nonparametric problem using a prior on the regression function and specifying a Gaussian error distribution has been shown to be consistent, based on the concept of almost sure posterior consistency in Choi and Schervish (2007). However, in contrast to the case where we specify the error as Gaussian, little attention has been paid to asymptotic behavior of Bayesian regression models with non-Gaussian error....


  • ...The following assumption is about how fast those fixed covariate values fill out the interval [0, 1]. Assumption D.1 Let 0 = x0 < x1 ≤ x2 ≤ · · · ≤ xn < xn+1 = 1 be the design points on [0, 1] and let Si = xi+1 − xi, i = 0, . . . , n denote the spacings between them. There is a constant 0 < K1 < 1 such that max_{0≤i≤n} Si < 1/(K1 n). Now, we provide a result about posterior consistency for fixed covariates, in which the data {Yn}_{n=1}^∞ are assumed to be conditionally independent with a symmetric conditional density φ([y − η(x)]/σ)/σ given η, σ and the covariates. To investigate posterior consistency with nonrandom covariates, we apply Theorem 1 of Choi and Schervish (2007) by making pi(z; θ) equal to fi(z; θ0) as φ([yi − η(x)]/σ)/σ and by assuming D....



  • ...To say that the posterior distribution of θ is almost surely consistent means that, for every neighborhood N, lim_{n→∞} p_{n,N} = 1 a.s. with respect to the joint distribution of the infinite sequence of data values. Similarly, in-probability consistency means that for all N, p_{n,N} converges to 1 in probability. To make these definitions precise, we must specify the topology on Θ, in particular on F. This topology can be chosen independently of whether one wishes to consider almost sure consistency or in-probability consistency of the posterior. For this purpose, we use a popular choice of topology on F, the L1 topology related to a probability measure Q on the domain [0, 1] of the regression functions. The L1(Q) distance between two functions η1 and η2 is ‖η1 − η2‖_1 = ∫_0^1 |η1 − η2| dQ. In addition, we use a Hellinger metric for joint densities f for Z = (X, Y) with respect to a product measure ξ = Q × λ, where λ is a Lebesgue measure, namely f(x, y) = φ([y − η(x)]/σ)/σ. The Hellinger distance between two densities f1 and f2 is {∫ [√f1(x, y) − √f2(x, y)]^2 dξ}^{1/2}. These metrics were considered for looking at posterior consistency under normal noise distribution by Choi and Schervish (2007). Another frequently used neighborhood is the weak neighborhood of the true probability measure of P0 with the true joint density of X and Y, f0....

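The two metrics defined in the excerpt above are straightforward to evaluate for concrete choices. The sketch below takes Q to be Lebesgue measure on [0, 1], σ = 1, a Laplace density for φ, and two illustrative regression functions; all of these concrete choices are assumptions made for illustration, not taken from the paper:

import numpy as np

x = np.linspace(0.0, 1.0, 401)                    # grid on the covariate space [0, 1]
y = np.linspace(-12.0, 12.0, 2001)                # grid on the response space (truncated)
dx, dy = x[1] - x[0], y[1] - y[0]

phi = lambda u: 0.5 * np.exp(-np.abs(u))          # standard Laplace density (sigma = 1)
eta1 = lambda t: np.sin(2 * np.pi * t)            # two hypothetical regression functions
eta2 = lambda t: np.sin(2 * np.pi * t) + 0.3 * t

# L1(Q) distance between the regression functions, with Q = Lebesgue measure on [0, 1]
l1_dist = np.sum(np.abs(eta1(x) - eta2(x))) * dx

# Hellinger distance between the joint densities f_i(x, y) = phi(y - eta_i(x))
X, Y = np.meshgrid(x, y, indexing="ij")
f1, f2 = phi(Y - eta1(X)), phi(Y - eta2(X))
hellinger = np.sqrt(np.sum((np.sqrt(f1) - np.sqrt(f2)) ** 2) * dx * dy)

print(l1_dist, hellinger)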

Journal ArticleDOI
TL;DR: In this paper, the authors study the order of convergence of the Kolmogorov-Smirnov distance for the bootstrap of the mean and the quantiles when an arbitrary bootstrap sample size is used.
Abstract: We study the order of convergence of the Kolmogorov-Smirnov distance for the bootstrap of the mean and the bootstrap of quantiles when an arbitrary bootstrap sample size is used. We see that for the bootstrap of the mean, the best order of the bootstrap sample is of the order of n, where n is the sample size. In the case of non-lattice distributions and the bootstrap of the sample mean, the bootstrap removes the effect of the skewness of the distribution only when the bootstrap sample size equals the sample size. However, for the bootstrap of quantiles, the preferred order of the bootstrap sample is n^{2/3}. For the bootstrap of quantiles, if the bootstrap sample is of order n^2 or larger, the bootstrap is not consistent.

10 citations
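
As a rough illustration of the quantity studied in this reference, one can estimate the Kolmogorov-Smirnov distance between the bootstrap distribution of the centred, scaled mean computed with an arbitrary bootstrap sample size m and a Monte Carlo stand-in for the true sampling distribution. The data-generating distribution, n, m and B below are hypothetical choices; this is a naive sketch, not the paper's method of analysis:

import numpy as np

rng = np.random.default_rng(2)
n, m, B = 200, 50, 4000                          # sample size, bootstrap sample size, replications

sample = rng.exponential(scale=1.0, size=n)      # hypothetical skewed, non-lattice data (mean 1)
xbar = sample.mean()

# Bootstrap distribution of sqrt(m) * (bootstrap mean - xbar), with m resampled observations
boot = np.array([np.sqrt(m) * (rng.choice(sample, size=m).mean() - xbar) for _ in range(B)])

# Monte Carlo stand-in for the sampling distribution of sqrt(n) * (xbar - mu)
ref = np.array([np.sqrt(n) * (rng.exponential(scale=1.0, size=n).mean() - 1.0) for _ in range(B)])

# Kolmogorov-Smirnov distance between the two empirical distributions
grid = np.sort(np.concatenate([boot, ref]))
F_boot = np.searchsorted(np.sort(boot), grid, side="right") / B
F_ref = np.searchsorted(np.sort(ref), grid, side="right") / B
print(np.abs(F_boot - F_ref).max())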


Cites background from "The integral of a symmetric unimoda..."

  • ...By the Anderson inequality (Corollary 2 in Anderson (1955)), for each x ∈ R and each t > 0, Pr{|g| ≥ t} ≤ Pr{|g + x| ≥ t}....

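For a one-dimensional symmetric unimodal g the inequality in the excerpt can be verified in closed form. A small sketch, assuming g is standard normal (an illustrative choice only):

from scipy.stats import norm

def tail_prob(shift, t):
    """P(|g + shift| >= t) for g ~ N(0, 1)."""
    return norm.sf(t - shift) + norm.cdf(-t - shift)

t = 1.3
for x in (0.0, 0.5, 2.0, -1.0):
    # Anderson's Corollary 2: shifting the centre can only make the tail heavier.
    print(x, tail_prob(0.0, t) <= tail_prob(x, t) + 1e-12)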

References
Book
01 Jan 1953

10,512 citations

Journal ArticleDOI
TL;DR: In this article, a general method for calculating the limiting distributions of these criteria is developed by reducing them to corresponding problems in stochastic processes, which in turn lead to more or less classical eigenvalue and boundary value problems for special classes of differential equations.
Abstract: The statistical problem treated is that of testing the hypothesis that $n$ independent, identically distributed random variables have a specified continuous distribution function $F(x)$. If $F_n(x)$ is the empirical cumulative distribution function and $\psi(t)$ is some nonnegative weight function $(0 \leqq t \leqq 1)$, we consider $n^{\frac{1}{2}} \sup_{-\infty

3,082 citations
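
The weighted empirical-process criteria described in this reference reduce, for the constant weight ψ ≡ 1, to the Kolmogorov-Smirnov statistic n^{1/2} sup_x |F_n(x) − F(x)| and the Cramér-von Mises statistic n ∫ [F_n(x) − F(x)]^2 dF(x). A brief sketch of both, using the standard computing formulas and an illustrative uniform sample (so the hypothesized F is the Uniform(0,1) cdf and F(u) = u):

import numpy as np

rng = np.random.default_rng(3)
n = 100
u = np.sort(rng.uniform(size=n))                  # illustrative sample, already transformed by F

i = np.arange(1, n + 1)

# Kolmogorov-Smirnov: n^{1/2} * sup_x |F_n(x) - F(x)|
ks = np.sqrt(n) * max((i / n - u).max(), (u - (i - 1) / n).max())

# Cramer-von Mises: n * integral of (F_n - F)^2 dF, via the usual computing formula
w2 = 1.0 / (12 * n) + np.sum(((2 * i - 1) / (2 * n) - u) ** 2)

print(ks, w2)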


"The integral of a symmetric unimoda..." refers background in this paper

  • ...In Theorem 1 the equality in (1) holds for k < 1 if and only if, for every u, (E + y) ∩ Ku = (E ∩ Ku) + y....


  • ...It will be noticed that we obtain strict inequality in (1) if and only if for at least one u, H(u)>H*(u) (because H(u) is continuous on the left)....


BookDOI
01 Jan 1934
TL;DR: In this article, Minkowski uncovered the close connection of these concepts and theorems with the question of determining convex surfaces by their Gaussian curvature and proved deep theorems in this regard.
Abstract: Convex figures have always played a significant role in geometry. It was Brunn, however, who first made the objects characterized solely by their convexity property the subject of comprehensive geometric investigations. In two papers from 1887 and 1889, "Ovale und Eiflächen" and "Kurven ohne Wendepunkte" (see bibliography, Brunn [1], [2]), he proved, alongside numerous theorems of the most varied kinds about convex regions and bodies, a theorem on the areas of parallel plane sections of a convex body which subsequently turned out to be fundamental. The credit for highlighting the significance of this theorem belongs to Minkowski. In several papers, in particular in "Volumen und Oberfläche" (1903) and in the ambitiously conceived but unfinished work "Zur Theorie der konvexen Körper" (bibliography [3], [4]), he created the formal tools appropriate to this circle of problems by introducing fundamental concepts such as the support function, mixed volumes, and so on, and above all opened the way to wide-ranging applications, especially to the isoperimetric (isepiphanic) and other extremal problems for convex regions and bodies. Furthermore, Minkowski uncovered the close connection of these concepts and theorems with the question of determining convex surfaces by their Gaussian curvature and proved deep theorems in this regard.

927 citations

Journal ArticleDOI
TL;DR: In this paper, the authors extended the Cramer-Smirnov and von Mises test to the parametric case, a suggestion of Cramer [1], see also [2].
Abstract: The "goodness of fit" problem, consisting of comparing the empirical and hypothetical cumulative distribution functions (cdf's), is treated here for the case when an auxiliary parameter is to be estimated. This extends the Cramer-Smirnov and von Mises test to the parametric case, a suggestion of Cramer [1], see also [2]. The characteristic function of the limiting distribution of the test function is found by consideration of a Guassian stochastic process.

140 citations
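
The modification described here, estimating an auxiliary parameter before comparing the empirical and hypothetical cdf's, is easy to mimic numerically. A sketch for the Cramér-von Mises statistic with a fitted normal cdf plugged in; the data, the normal family, and the moment estimates are illustrative assumptions, and (as the paper shows) the limiting law is then no longer the one for a fully specified F:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 100
x = np.sort(rng.normal(loc=2.0, scale=3.0, size=n))   # hypothetical data

# Estimate the auxiliary parameters, then plug the fitted cdf into the statistic.
mu_hat, sigma_hat = x.mean(), x.std(ddof=1)
u = norm.cdf(x, loc=mu_hat, scale=sigma_hat)          # F(x; estimated parameters), sorted

i = np.arange(1, n + 1)
w2 = 1.0 / (12 * n) + np.sum(((2 * i - 1) / (2 * n) - u) ** 2)
print(w2)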


"The integral of a symmetric unimoda..." refers background in this paper

  • ...∫_a^b u d[H*(u) − H(u)] = b[H*(b) − H(b)] − a[H*(a) − H(a)] + ∫_a^b [H(u) − H*(u)] du. (3)....


  • ...Since f(x) has a finite integral over E, bH(b) → 0 as b → ∞ and hence also bH*(b) → 0 as b → ∞; therefore the first term on the right in (3) can be made arbitrarily small in absolute value....
