Author

Frits H. Ruymgaart

Bio: Frits H. Ruymgaart is an academic researcher at Texas Tech University whose work centers on estimators and asymptotic distributions. He has an h-index of 22 and has co-authored 121 publications receiving 2,011 citations. Previous affiliations of Frits H. Ruymgaart include Texas A&M University and Radboud University Nijmegen.


Papers
Journal Article (DOI)
TL;DR: The paper studies the mean square error of a large class of regularization methods (spectral methods), including spectral cut-off and Tikhonov-type estimators as well as many iterative methods such as the Landweber iteration.
Abstract: Previously, the convergence analysis for linear statistical inverse problems has mainly focused on spectral cut-off and Tikhonov-type estimators. Spectral cut-off estimators achieve minimax rates for a broad range of smoothness classes and operators, but their practical usefulness is limited by the fact that they require a complete spectral decomposition of the operator. Tikhonov estimators are simpler to compute but still involve the inversion of an operator and achieve minimax rates only in restricted smoothness classes. In this paper we introduce a unifying technique to study the mean square error of a large class of regularization methods (spectral methods) including the aforementioned estimators as well as many iterative methods, such as $\nu$-methods and the Landweber iteration. The latter estimators converge at the same rate as spectral cut-off but require only matrix-vector products. Our results are applied to various problems; in particular we obtain precise convergence rates for satellite gradiometry, $L^2$-boosting, and errors in variable problems.
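To make concrete the claim that such iterative estimators need only matrix-vector products, here is a minimal numerical sketch of the Landweber iteration for a discretized linear inverse problem y = Ax + noise. The operator, step size, noise level, and iteration count below are illustrative assumptions, not values from the paper; in this class of methods the number of iterations itself acts as the regularization parameter.

```python
import numpy as np

def landweber(A, y, n_iter=200, omega=None):
    """Landweber iteration: uses only products with A and A.T per step."""
    if omega is None:
        # Convergence requires 0 < omega < 2 / ||A||_2^2.
        omega = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + omega * A.T @ (y - A @ x)  # gradient step on ||Ax - y||^2 / 2
    return x

# Toy example: a mildly ill-conditioned operator with noisy data
# (all sizes and the noise level are illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)) @ np.diag(np.linspace(1.0, 0.05, 20))
x_true = rng.standard_normal(20)
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = landweber(A, y)
```

Stopping the iteration early regularizes the estimate, while running it too long eventually amplifies the noise: the usual bias-variance trade-off for spectral methods of this kind.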

234 citations

Journal Article (DOI)
TL;DR: The recovery of signals from indirect measurements, blurred by random noise, is considered under the assumption that prior knowledge regarding the smoothness of the signal is available, and the general problem is embedded in an abstract Hilbert scale.
Abstract: The recovery of signals from indirect measurements, blurred by random noise, is considered under the assumption that prior knowledge regarding the smoothness of the signal is available. For greater flexibility the general problem is embedded in an abstract Hilbert scale. In the applications Sobolev scales are used. For the construction of estimators we employ preconditioning along with regularized operator inversion in the appropriate inner product, where the operator is bounded but not necessarily compact. A lower bound to certain minimax rates is included, and it is shown that in generic examples the proposed estimators attain the asymptotic minimax rate. Examples include errors-in-variables (deconvolution) and indirect nonparametric regression. Special instances of the latter are estimation of the source term in a differential equation and the estimation of the initial state in the heat equation.
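As a concrete, much simplified illustration of regularized operator inversion, the sketch below applies plain Tikhonov regularization to a toy discrete deconvolution problem. It omits the Hilbert-scale preconditioning that the paper actually develops; the Gaussian blur kernel, noise level, and regularization parameter alpha are illustrative assumptions.

```python
import numpy as np

def tikhonov(A, y, alpha=1e-2):
    """Tikhonov estimate: solve the stabilized normal equations
    (A^T A + alpha I) x = A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# Toy deconvolution: A applies a discrete Gaussian blur.
n = 100
kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
kernel /= kernel.sum()
A = np.array([np.convolve(np.eye(n)[i], kernel, mode="same") for i in range(n)]).T
x_true = np.zeros(n)
x_true[40:60] = 1.0                        # a box signal to be recovered
rng = np.random.default_rng(1)
y = A @ x_true + 0.01 * rng.standard_normal(n)
x_hat = tikhonov(A, y, alpha=1e-2)
```

In practice alpha would be tuned to the noise level and the assumed smoothness of the signal, which is precisely the kind of prior knowledge the paper encodes through the Hilbert scale.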

223 citations

Journal Article (DOI)
TL;DR: A multivariate analogue of Chernoff's theorem and a large deviation result for trimmed means are obtained as particular applications of the general theory; the results extend earlier work of Borovkov, Hoadley, and Stone.
Abstract: Some theorems on first-order asymptotic behavior of probabilities of large deviations of empirical probability measures are proved. These theorems extend previous results due to Borovkov, Hoadley and Stone. A multivariate analogue of Chernoff's theorem and a large deviation result for trimmed means are obtained as particular applications of the general theory.
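For orientation, the classical one-dimensional result whose multivariate analogue the paper obtains is Chernoff's theorem. The statement below is the standard textbook formulation, not quoted from the paper: for i.i.d. real random variables $X_1, X_2, \dots$ with mean $\mu$, sample mean $\bar{X}_n$, and moment generating function $M(t) = E[e^{tX_1}]$,

```latex
\[
  \lim_{n \to \infty} \frac{1}{n} \log P\!\left( \bar{X}_n \ge a \right) = -I(a),
  \qquad a > \mu,
\]
% where the rate function I is the Legendre transform of the cumulant
% generating function:
\[
  I(a) = \sup_{t \ge 0} \bigl( t a - \log M(t) \bigr).
\]
```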

132 citations


Cited by
Journal Article (DOI)
TL;DR: In this paper, the authors prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the $\ell_1$ norm.
Abstract: This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the $\ell_1$ norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
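The convex program admits a simple iterative solver. The sketch below implements Principal Component Pursuit with the standard augmented-Lagrangian (ADMM) iteration built from two proximal maps: entrywise soft-thresholding for the $\ell_1$ term and singular value thresholding for the nuclear norm. The weight lam = 1/sqrt(max(m, n)) follows the article's analysis, while the step size mu, tolerance, and iteration cap are common heuristic choices, not values from the article.

```python
import numpy as np

def shrink(X, tau):
    """Entrywise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def pcp(M, n_iter=500, tol=1e-7):
    """Split M into low-rank L and sparse S by minimizing
    ||L||_* + lam * ||S||_1 subject to L + S = M."""
    M = np.asarray(M, dtype=float)
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))         # weight from the article's theory
    mu = m * n / (4.0 * np.abs(M).sum())   # heuristic augmented-Lagrangian step
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                   # dual variable for L + S = M
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        R = M - L - S                      # primal residual
        Y = Y + mu * R                     # dual ascent step
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```

Applied to a matrix formed as a low-rank term plus sparse gross corruptions, this iteration typically recovers both components to within the tolerance, using nothing beyond repeated SVDs and thresholding.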

6,783 citations

Journal Article (DOI)
TL;DR: A bibliographic notice of P. Billingsley's monograph Convergence of Probability Measures (Wiley, 1968), a standard reference on weak convergence of probability measures.
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex, Wiley, 1968. xii, 253 p. 9 1/4“. 117s.

5,689 citations

Journal Article (DOI)
TL;DR: The basic ideas of PCA are introduced, with a discussion of what it can and cannot do, followed by variants of the technique tailored to different data types and structures.
Abstract: Large datasets are increasingly common and are often difficult to interpret. Principal component analysis (PCA) is a technique for reducing the dimensionality of such datasets, increasing interpretability while minimizing information loss. It does so by creating new uncorrelated variables that successively maximize variance. Finding such new variables, the principal components, reduces to solving an eigenvalue/eigenvector problem, and the new variables are defined by the dataset at hand, not a priori, hence making PCA an adaptive data analysis technique. It is adaptive in another sense too, since variants of the technique have been developed that are tailored to different data types and structures. This article will begin by introducing the basic ideas of PCA, discussing what it can and cannot do. It will then describe some variants of PCA and their application.
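Concretely, the eigenvalue/eigenvector problem mentioned above is usually solved via the singular value decomposition of the centered data matrix. Below is a minimal sketch; the function name, toy data, and the choice to center (but not scale) the variables are illustrative assumptions.

```python
import numpy as np

def pca(X, n_components):
    """Return scores, loading vectors, and variances for the top components."""
    Xc = X - X.mean(axis=0)                       # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]                # orthonormal loading vectors
    scores = Xc @ components.T                    # new uncorrelated variables
    explained_var = s[:n_components] ** 2 / (len(X) - 1)
    return scores, components, explained_var

# Toy data with one dominant direction of variance.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2)) @ np.array([[3.0, 1.0], [0.0, 0.5]])
scores, comps, var = pca(X, n_components=1)
```

Here scores are the "new uncorrelated variables" of the abstract, and explained_var records how much variance each successive component captures.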

4,289 citations

Journal Article (DOI)
TL;DR: The theoretical modeling of point defects in crystalline materials by means of electronic-structure calculations, with an emphasis on approaches based on density functional theory (DFT), is reviewed in this paper.
Abstract: Point defects and impurities strongly affect the physical properties of materials and have a decisive impact on their performance in applications. First-principles calculations have emerged as a powerful approach that complements experiments and can serve as a predictive tool in the identification and characterization of defects. The theoretical modeling of point defects in crystalline materials by means of electronic-structure calculations, with an emphasis on approaches based on density functional theory (DFT), is reviewed. A general thermodynamic formalism is laid down to investigate the physical properties of point defects independent of the materials class (semiconductors, insulators, and metals), indicating how the relevant thermodynamic quantities, such as formation energy, entropy, and excess volume, can be obtained from electronic structure calculations. Practical aspects such as the supercell approach and efficient strategies to extrapolate to the isolated-defect or dilute limit are discussed. Recent advances in tractable approximations to the exchange-correlation functional ($\mathrm{DFT}+U$, hybrid functionals) and approaches beyond DFT are highlighted. These advances have largely removed the long-standing uncertainty of defect formation energies in semiconductors and insulators due to the failure of standard DFT to reproduce band gaps. Two case studies illustrate how such calculations provide new insight into the physics and role of point defects in real materials.
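The central quantity in such a thermodynamic formalism is the defect formation energy. For a defect $X$ in charge state $q$ it is conventionally written as below; this is the standard expression in the field, paraphrased in conventional notation rather than quoted from the review.

```latex
\[
  E^{f}[X^{q}] = E_{\mathrm{tot}}[X^{q}] - E_{\mathrm{tot}}[\mathrm{bulk}]
  - \sum_{i} n_{i}\,\mu_{i} + q\,E_{F} + E_{\mathrm{corr}},
\]
% where the E_tot are supercell total energies, n_i atoms of species i
% (with chemical potential mu_i) are added (n_i > 0) or removed (n_i < 0)
% to create the defect, E_F is the Fermi level, and E_corr collects
% finite-size corrections such as electrostatic image interactions.
```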

1,846 citations

Journal Article (DOI)
TL;DR: In this article, the authors investigated the properties of a semiparametric method for estimating the dependence parameters in a family of multivariate distributions and proposed an estimator, obtained as a solution of a pseudo-likelihood equation, which is consistent, asymptotically normal and fully efficient at independence.
Abstract: This paper investigates the properties of a semiparametric method for estimating the dependence parameters in a family of multivariate distributions. The proposed estimator, obtained as a solution of a pseudo-likelihood equation, is shown to be consistent, asymptotically normal and fully efficient at independence. A natural estimator of its asymptotic variance is proved to be consistent. Comparisons are made with alternative semiparametric estimators in the special case of Clayton's model for association in bivariate data.
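As a concrete instance of the approach, the sketch below fits Clayton's bivariate copula by pseudo-likelihood: the unknown margins are replaced by rescaled ranks (pseudo-observations), and the copula log-density is maximized in the dependence parameter theta. The Clayton density formula is the standard one for theta > 0; the optimizer bounds and toy data are illustrative assumptions, and this is a sketch of the general method rather than the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def clayton_log_density(u, v, theta):
    """Log-density of the Clayton copula, valid for theta > 0."""
    return (np.log(1.0 + theta)
            - (theta + 1.0) * (np.log(u) + np.log(v))
            - (2.0 + 1.0 / theta) * np.log(u**(-theta) + v**(-theta) - 1.0))

def pseudo_likelihood_fit(x, y):
    """Estimate theta by maximizing the log pseudo-likelihood."""
    n = len(x)
    # Pseudo-observations: ranks rescaled to (0, 1), the semiparametric step.
    u = (np.argsort(np.argsort(x)) + 1.0) / (n + 1.0)
    v = (np.argsort(np.argsort(y)) + 1.0) / (n + 1.0)
    neg_loglik = lambda theta: -np.sum(clayton_log_density(u, v, theta))
    return minimize_scalar(neg_loglik, bounds=(1e-6, 50.0), method="bounded").x

# Toy usage: positively dependent pairs via a shared factor (illustrative).
rng = np.random.default_rng(0)
z = rng.standard_normal(500)
x = z + 0.5 * rng.standard_normal(500)
y = z + 0.5 * rng.standard_normal(500)
theta_hat = pseudo_likelihood_fit(x, y)
```

Because only the ranks of the data enter the fit, the estimate is invariant to monotone transformations of the margins, which is what makes the method semiparametric.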

1,280 citations