Journal • ISSN: 1066-5307
Mathematical Methods of Statistics
Springer Science+Business Media
About: Mathematical Methods of Statistics is an academic journal published by Springer Science+Business Media. The journal publishes mainly in the areas of Estimator and Asymptotic distribution. It has the ISSN identifier 1066-5307. Over its lifetime, the journal has published 300 papers, which have received 2842 citations.
Topics: Estimator, Asymptotic distribution, Minimax, Random variable, Gaussian
Papers
TL;DR: In this article, a pq-dimensional random vector x, distributed normally with mean vector θ and covariance matrix Λ assumed positive definite, is considered, and a likelihood ratio test is given to check whether the covariance has the Kronecker product structure Λ = Ψ ⊗ Σ.
Abstract: In this article we consider a pq-dimensional random vector x, normally distributed with mean vector θ and covariance matrix Λ, assumed to be positive definite. On the basis of N independent observations of x, we estimate the parameters and test the hypothesis H: Λ = Ψ ⊗ Σ, where Ψ = (ψij) is q × q with ψqq = 1, Σ = (σij) is p × p, and Λ = (ψijΣ) is the Kronecker product of Ψ and Σ. That is, instead of (1/2)pq(pq + 1) parameters, the model has only (1/2)p(p + 1) + (1/2)q(q + 1) − 1. A test based on the likelihood ratio is given to check whether this model holds, and, when it does hold, we test the hypothesis that Ψ is a matrix with intraclass correlation structure. The maximum likelihood estimators (MLE) are obtained under the hypothesis as well as under the alternatives, and the likelihood ratio tests (LRT) are derived from these estimators. One of the main objects of the paper is to show that the likelihood equations provide unique estimators.
136 citations
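The parameter savings claimed in the abstract can be checked directly. A minimal NumPy sketch (the matrices Ψ and Σ below are illustrative values, not from the paper) builds Λ = Ψ ⊗ Σ and compares the unconstrained and separable parameter counts:

```python
import numpy as np

p, q = 3, 2

# Psi: q x q with psi_qq = 1 (the identifiability constraint from the paper).
Psi = np.array([[2.0, 0.5],
                [0.5, 1.0]])

# Sigma: p x p, positive definite.
Sigma = np.array([[1.0, 0.3, 0.1],
                  [0.3, 1.5, 0.2],
                  [0.1, 0.2, 2.0]])

# The separable covariance under H: a pq x pq Kronecker product.
Lam = np.kron(Psi, Sigma)

# Parameter counts: unconstrained covariance vs. the Kronecker model.
unconstrained = p * q * (p * q + 1) // 2          # (1/2)pq(pq+1)
separable = p * (p + 1) // 2 + q * (q + 1) // 2 - 1
print(unconstrained, separable)  # 21 vs 8 for p=3, q=2
```

For p = 3, q = 2 the unconstrained model has 21 covariance parameters while the Kronecker model has only 8, which is the reduction the abstract describes.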
TL;DR: In this article, the problem of finding the best linear and convex combination of M estimators of a density with respect to the mean squared risk is studied and the authors suggest aggregation procedures and prove sharp oracle inequalities for their risks.
Abstract: We study the problem of finding the best linear and convex combination of M estimators of a density with respect to the mean squared risk. We suggest aggregation procedures and we prove sharp oracle inequalities for their risks, i.e., oracle inequalities with leading constant 1. We also obtain lower bounds showing that these procedures attain optimal rates of aggregation. As an example, we consider aggregation of multivariate kernel density estimators with different bandwidths. We show that linear and convex aggregates mimic the kernel oracles in asymptotically exact sense. We prove that, for Pinsker’s kernel, the proposed aggregates are sharp asymptotically minimax simultaneously over a large scale of Sobolev classes of densities. Finally, we provide simulations demonstrating performance of the convex aggregation procedure.
110 citations
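The basic object in this paper, a convex aggregate of kernel density estimators with different bandwidths, is easy to sketch. The weights below are fixed for illustration only; the paper chooses them by an aggregation procedure satisfying sharp oracle inequalities:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
x = rng.normal(size=200)

# M = 3 kernel density estimators with different bandwidth factors.
bandwidths = [0.2, 0.5, 1.0]
kdes = [gaussian_kde(x, bw_method=h) for h in bandwidths]

# A convex aggregate: nonnegative weights summing to 1 (illustrative values;
# the paper's procedure selects them from the data).
w = np.array([0.2, 0.5, 0.3])
grid = np.linspace(-5, 5, 1001)
aggregate = sum(wi * k(grid) for wi, k in zip(w, kdes))

# A convex combination of densities is itself a density.
total_mass = np.trapz(aggregate, grid)
print(total_mass)  # approximately 1
```

The point of convex (rather than linear) aggregation is visible here: any weight vector on the simplex yields a bona fide density, while unrestricted linear weights need not.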
TL;DR: It is proved that the Group Lasso estimator satisfies a sparsity inequality, i.e., a bound in terms of the number of non-zero components of the oracle regression vector, which is better, in some cases, than the one achieved by the Lasso and the Dantzig selector.
Abstract: We consider the linear regression model with Gaussian error. We estimate the unknown parameters by a procedure inspired by the Group Lasso estimator introduced in [22]. We show that this estimator satisfies a sparsity inequality, i.e., a bound in terms of the number of non-zero components of the oracle regression vector. We prove that this bound is better, in some cases, than the one achieved by the Lasso and the Dantzig selector.
66 citations
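The Group Lasso penalty referenced above acts on whole blocks of coefficients at once. A minimal sketch of its proximal operator (block soft-thresholding, a standard building block of Group Lasso solvers, not code from the paper) shows how an entire group is shrunk or zeroed together:

```python
import numpy as np

def group_soft_threshold(beta, lam):
    """Proximal operator of the penalty lam * ||beta||_2 for one group:
    shrinks the whole block toward zero, and sets it exactly to zero
    when its Euclidean norm falls below lam."""
    norm = np.linalg.norm(beta)
    if norm <= lam:
        return np.zeros_like(beta)
    return (1.0 - lam / norm) * beta

b = np.array([3.0, 4.0])                 # ||b||_2 = 5
print(group_soft_threshold(b, 1.0))      # scaled by (1 - 1/5): [2.4 3.2]
print(group_soft_threshold(b, 6.0))      # whole group zeroed: [0. 0.]
```

This all-or-nothing behavior per group is what produces the group-level sparsity that the paper's sparsity inequality quantifies.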
TL;DR: In this paper, a nonparametric model generated by independent observations is considered, and the authors show that, under regularity assumptions, it can be approximated, in the sense of the Le Cam deficiency pseudodistance, by a Gaussian shift model Yi = Γ(f(i/n)) + εi, where ε1, ..., εn are i.i.d. standard normal r.v.’s.
Abstract: We consider a nonparametric model E, generated by independent observations Xi, i = 1, ..., n, with densities p(x, θi), i = 1, ..., n, the parameters of which θi = f(i/n) ∈ Θ are driven by the values of an unknown function f : [0, 1] → Θ in a smoothness class. The main result of the paper is that, under regularity assumptions, this model can be approximated, in the sense of the Le Cam deficiency pseudodistance, by a nonparametric Gaussian shift model Yi = Γ(f(i/n)) + εi, where ε1, ..., εn are i.i.d. standard normal r.v.’s, the function Γ(θ) : Θ → R satisfies Γ′(θ) = √I(θ), and I(θ) is the Fisher information corresponding to the density p(x, θ).
63 citations
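The map Γ with Γ′(θ) = √I(θ) is a variance-stabilizing transform. As a worked example (not taken from the paper), for the Poisson family it can be computed in closed form:

```latex
% Poisson family: p(x,\theta) = e^{-\theta}\theta^{x}/x!,\ \theta > 0.
% Fisher information: I(\theta) = 1/\theta.
\Gamma'(\theta) = \sqrt{I(\theta)} = \theta^{-1/2}
\quad\Longrightarrow\quad
\Gamma(\theta) = \int \theta^{-1/2}\, d\theta = 2\sqrt{\theta},
```

which recovers the classical square-root transform 2√θ that stabilizes the variance of Poisson data, consistent with the Gaussian shift approximation having unit noise variance.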
TL;DR: In this paper, a kernel estimate of the spatial regression function of a stationary multidimensional spatial process is investigated, and the weak and strong consistency of the estimate is shown under sufficient conditions on the mixing coefficients and the bandwidth.
Abstract: We investigate here a kernel estimate of the spatial regression function r(x) = E(Yu | Xu = x), x ∈ ℝd, of a stationary multidimensional spatial process {Zu = (Xu, Yu), u ∈ ℝN}. The weak and strong consistency of the estimate is shown under sufficient conditions on the mixing coefficients and the bandwidth, when the process is observed over a rectangular domain of ℝN. Special attention is paid to achieving optimal and suroptimal strong rates of convergence. It is also shown that this suroptimal rate is preserved by using a suitable spatial sampling scheme.
61 citations
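The kernel regression estimate studied here is, in its simplest i.i.d.-style form, the Nadaraya–Watson estimator. A minimal sketch with a Gaussian kernel (an illustration only; the paper treats data indexed by spatial sites u ∈ ℝN, which would replace this one-dimensional sample):

```python
import numpy as np

def nadaraya_watson(x0, X, Y, h):
    """Kernel estimate of r(x) = E(Y | X = x) at the point x0:
    a weighted average of the Y values, with weights decaying
    in the distance from X to x0 at bandwidth scale h."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    return np.sum(w * Y) / np.sum(w)

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, 500)
Y = np.sin(X) + 0.1 * rng.normal(size=500)
print(nadaraya_watson(0.0, X, Y, h=0.2))  # close to sin(0) = 0
```

The bandwidth h governs the bias-variance trade-off that drives the optimal rates in the paper; the "suroptimal" rates arise under stronger conditions and suitable spatial sampling of the observation domain.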