The integral of a symmetric unimodal function over a symmetric convex set and some probability inequalities
Citations
11 citations
Additional excerpts
...Note that γ(Z(a, γ)r) = 0 since γ(Z(a, γ)r) = γ(ra+ I(r)Oγ) ≤ γ(I(r)Oγ) by the Anderson inequality (see Anderson, 1955)....
[...]
Cites background from "The integral of a symmetric unimoda..."
...The main result of this paper is a generalization of the following theorem of Anderson (1955) on the integrals of a symmetric unimodal function over translates of a symmetric convex set....
[...]
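For reference, the theorem being generalized can be stated as follows (my paraphrase of Anderson 1955, Theorem 1, using the level sets K_u that appear in the excerpts below):

```latex
% Anderson (1955), Theorem 1 (paraphrased). Let E \subset \mathbb{R}^n be a
% symmetric convex set and let f \ge 0 satisfy f(x) = f(-x), with
% K_u = \{x : f(x) \ge u\} convex for every u \ge 0 and \int_E f(x)\,dx < \infty.
% Then for every y and every 0 \le k \le 1,
\int_E f(x + ky)\,dx \;\ge\; \int_E f(x + y)\,dx .
```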
Cites background from "The integral of a symmetric unimoda..."
...Hence by Anderson's theorem (Anderson 1955; cf. Tong 1990, p. 73) we have c_ε(M₁) = P(‖β̂₁ − β‖ ≤ ε) ≥ P(‖β̂₂ − β‖ ≤ ε) = c_ε(M₂) for all ε > 0. A reasonable weakest requirement for a moment matrix M is that there be no competing moment matrix A which is better than M in the Loewner ordering sense....
[...]
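The concentration comparison in this excerpt can be illustrated numerically. The sketch below is my own (not from the citing paper): for Gaussian estimators, if Cov(β̂₂) − Cov(β̂₁) is positive semidefinite, Anderson's theorem implies β̂₁ is more concentrated around β at every radius.

```python
# Monte Carlo illustration (my own sketch, not from the citing paper) of the
# concentration ordering Anderson's theorem yields for Gaussian estimators:
# if Cov(b2) - Cov(b1) is positive semidefinite (Loewner ordering), then
# P(||b1 - beta|| <= eps) >= P(||b2 - beta|| <= eps) for every eps > 0.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta = np.zeros(2)                                           # true parameter
b1 = rng.multivariate_normal(beta, np.eye(2), size=n)        # smaller covariance
b2 = rng.multivariate_normal(beta, 2.0 * np.eye(2), size=n)  # Loewner-larger

results = []
for eps in (0.5, 1.0, 2.0):
    p1 = np.mean(np.linalg.norm(b1 - beta, axis=1) <= eps)
    p2 = np.mean(np.linalg.norm(b2 - beta, axis=1) <= eps)
    results.append((p1, p2))
    print(f"eps={eps}: {p1:.3f} >= {p2:.3f}")
```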
Cites background or methods from "The integral of a symmetric unimoda..."
...After identifying the posterior distribution, we can approximate it by computer-intensive methods such as Gibbs sampling or other Markov chain Monte Carlo (MCMC) methods. In principle, with MCMC methods, Bayesian inference is performed and the posterior distribution of the unknown parameters is obtained numerically. From methodological and computational points of view, several techniques and tools have been proposed and developed, while theoretical and asymptotic studies still leave much to be desired. For example, when the noise distribution is assumed to be known and a Gaussian process prior is used, computational methods have been developed such as in Neal (1996, 1997) and Paciorek (2003). In addition, Neal (1997) also considered Gaussian process regression with Student's t-distributed noise....
[...]
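The MCMC approximation the excerpt describes can be sketched with a minimal random-walk Metropolis sampler on a toy model of my own choosing (a normal mean with a normal prior, so the output is easy to check against the closed-form posterior):

```python
# Minimal random-walk Metropolis sketch (illustrative toy model, not the
# cited authors' setup): y_i ~ N(theta, 1) with prior theta ~ N(0, 10^2).
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.0, size=50)          # synthetic data, true theta = 2

def log_post(theta):
    # log posterior up to a constant: Gaussian likelihood + Gaussian prior
    return -0.5 * np.sum((y - theta) ** 2) - 0.5 * (theta / 10.0) ** 2

theta, chain = 0.0, []
for _ in range(20_000):
    prop = theta + rng.normal(0.0, 0.5)    # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                       # accept; otherwise keep current
    chain.append(theta)

post_mean = np.mean(chain[5_000:])         # discard burn-in
print(f"posterior mean ~ {post_mean:.2f}")
```

With a nearly flat prior the posterior mean should sit close to the sample mean of the data, which gives a quick sanity check on the chain.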
...The seminal work of Stone (1977) initiated the study of consistent estimation in nonparametric regression, investigating strong consistency under weak conditions on the underlying distribution. Since then, much effort has gone into the theoretical justification of nonparametric regression, in particular consistency and optimal rates of convergence, mostly from a frequentist perspective. The Bayesian approach to nonparametric regression provides an alternative statistical framework and likewise needs asymptotic justification, which motivates introducing and establishing the concept of posterior consistency. Posterior consistency, and the rate of convergence of the posterior distribution, in nonparametric regression have mainly been studied under a Gaussian noise distribution (e.g. Shen and Wasserman 2001; Huang 2004; Choi and Schervish 2007), and further work is needed under general noise distributions. Specifically, a Bayesian approach to the nonparametric problem that places a prior on the regression function and specifies a Gaussian error distribution has been shown to be consistent, in the sense of almost sure posterior consistency, by Choi and Schervish (2007). However, in contrast to the Gaussian-error case, little attention has been paid to the asymptotic behavior of Bayesian regression models with non-Gaussian errors....
[...]
...The following assumption controls how fast the fixed covariate values fill out the interval [0, 1]. Assumption D.1 Let 0 = x_0 < x_1 ≤ x_2 ≤ · · · ≤ x_n < x_{n+1} = 1 be the design points on [0, 1] and let S_i = x_{i+1} − x_i, i = 0, . . . , n, denote the spacings between them. There is a constant 0 < K_1 < 1 such that max_{0≤i≤n} S_i < 1/(K_1 n). Now we provide a result about posterior consistency for fixed covariates, in which the data {Y_n}, n = 1, 2, . . ., are assumed to be conditionally independent, given η, σ and the covariates, with the symmetric conditional density φ([y − η(x)]/σ)/σ. To investigate posterior consistency with nonrandom covariates, we apply Theorem 1 of Choi and Schervish (2007) by making p_i(z; θ) equal to f_i(z; θ_0) as φ([y_i − η(x_i)]/σ)/σ and by assuming D.1....
[...]
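The spacing condition in Assumption D.1 is easy to verify for a concrete design. Below is a small check of my own construction (the helper name is hypothetical, not from the paper):

```python
# Checking Assumption D.1 for a design on [0, 1]: with spacings
# S_i = x_{i+1} - x_i (where x_0 = 0 and x_{n+1} = 1), the assumption
# requires max_i S_i < 1/(K1 * n) for some constant 0 < K1 < 1.
import numpy as np

def satisfies_d1(design, K1):
    """design: interior points x_1 <= ... <= x_n in (0, 1)."""
    n = len(design)
    grid = np.concatenate(([0.0], np.sort(design), [1.0]))
    return np.diff(grid).max() < 1.0 / (K1 * n)

n = 100
# Equally spaced design: every spacing is 1/(n+1), comfortably below 1/(K1*n).
uniform = np.arange(1, n + 1) / (n + 1)
ok_uniform = satisfies_d1(uniform, K1=0.9)
print(ok_uniform)

# A design crowded into [0, 0.5] leaves a gap of length 0.5 and fails.
clustered = np.append(np.linspace(0.0, 0.5, n, endpoint=False)[1:], 0.5)
ok_clustered = satisfies_d1(clustered, K1=0.9)
print(ok_clustered)
```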
...The seminal work by Stone (1977) initiated the issue of consistent estimation of nonparametric regression problems, investigating strong consistency with weak conditions imposed on the underlying distribution....
[...]
...To say that the posterior distribution of θ is almost surely consistent means that, for every neighborhood N, lim_{n→∞} p_{n,N} = 1 a.s. with respect to the joint distribution of the infinite sequence of data values. Similarly, in-probability consistency means that, for every N, p_{n,N} converges to 1 in probability. To make these definitions precise, we must specify the topology on Θ, in particular on F. This topology can be chosen independently of whether one wishes to consider almost sure or in-probability consistency of the posterior. For this purpose, we use a popular choice of topology on F, the L_1 topology associated with a probability measure Q on the domain [0, 1] of the regression functions. The L_1(Q) distance between two functions η_1 and η_2 is ‖η_1 − η_2‖_1 = ∫_0^1 |η_1 − η_2| dQ. In addition, we use a Hellinger metric for the joint densities f of Z = (X, Y) with respect to the product measure ξ = Q × λ, where λ is Lebesgue measure and f(x, y) = φ([y − η(x)]/σ)/σ. The Hellinger distance between two densities f_1 and f_2 is {∫ [√f_1(x, y) − √f_2(x, y)]² dξ}^{1/2}. These metrics were used to study posterior consistency under a normal noise distribution by Choi and Schervish (2007). Another frequently used neighborhood is a weak neighborhood of the true probability measure P_0, whose joint density of X and Y is f_0....
[...]
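Both metrics in the excerpt are straightforward to approximate numerically. In the sketch below I take Q to be Lebesgue measure on [0, 1] and pick concrete functions and densities; these choices are mine, purely for illustration:

```python
# Numerical versions of the two metrics in the excerpt, taking Q to be
# Lebesgue measure on [0, 1] (an assumption made here for illustration).
import numpy as np

# L1(Q) distance ||eta1 - eta2||_1 = \int_0^1 |eta1 - eta2| dQ,
# here between eta1 = sin and eta2 = cos on [0, 1].
x = np.linspace(0.0, 1.0, 20_001)
dx = x[1] - x[0]
l1 = np.sum(np.abs(np.sin(x) - np.cos(x))) * dx

# Hellinger distance {\int (sqrt(f1) - sqrt(f2))^2 d xi}^{1/2},
# here between the N(0, 1) and N(1, 1) densities on a wide grid.
g = np.linspace(-10.0, 10.0, 40_001)
dg = g[1] - g[0]
f1 = np.exp(-0.5 * g ** 2) / np.sqrt(2 * np.pi)
f2 = np.exp(-0.5 * (g - 1.0) ** 2) / np.sqrt(2 * np.pi)
hell = np.sqrt(np.sum((np.sqrt(f1) - np.sqrt(f2)) ** 2) * dg)

print(f"L1 distance = {l1:.4f}")
print(f"Hellinger   = {hell:.4f}")  # closed form: sqrt(2(1 - e^{-1/8}))
```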
10 citations
Cites background from "The integral of a symmetric unimoda..."
...By the Anderson inequality (Corollary 2 in Anderson (1955)), for each x ∈ ℝ and each t > 0, Pr{|g| > t} ≤ Pr{|g + x| ≥ t}....
[...]
References
3,082 citations
"The integral of a symmetric unimoda..." refers background in this paper
...In Theorem 1 the equality in (1) holds for k < 1 if and only if, for every u, (E + y) ∩ K_u = (E ∩ K_u) + y....
[...]
...It will be noticed that we obtain strict inequality in (1) if and only if for at least one u, H(u)>H*(u) (because H(u) is continuous on the left)....
[...]
"The integral of a symmetric unimoda..." refers background in this paper
...∫_a^b u d[H*(u) − H(u)] = b[H*(b) − H(b)] − a[H*(a) − H(a)] + ∫_a^b [H(u) − H*(u)] du. (3)...
[...]
...Since f(x) has a finite integral over E, bH(b) → 0 as b → ∞ and hence also bH*(b) → 0 as b → ∞; therefore the first term on the right in (3) can be made arbitrarily small in absolute value....
[...]
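The identity (3) is just integration by parts applied to the difference D(u) = H*(u) − H(u): ∫_a^b u dD(u) = b D(b) − a D(a) − ∫_a^b D(u) du. A quick numerical sanity check, with D(u) = e^{−u} as a stand-in of my choosing:

```python
# Numerical sanity check of the integration-by-parts identity behind (3):
#   \int_a^b u dD(u) = b D(b) - a D(a) - \int_a^b D(u) du,
# where D(u) = H*(u) - H(u). Here D(u) = exp(-u) is a stand-in choice.
import numpy as np

a, b = 1.0, 5.0
u = np.linspace(a, b, 100_001)
du = u[1] - u[0]

D = np.exp(-u)           # stand-in for H*(u) - H(u)
dD = -np.exp(-u)         # its derivative, so dD(u) ~ dD * du

lhs = np.sum(u * dD) * du
rhs = b * np.exp(-b) - a * np.exp(-a) - np.sum(D) * du
print(abs(lhs - rhs))    # ~0 up to quadrature error
```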