scispace - formally typeset
Author

Wolfgang Wefelmeyer

Bio: Wolfgang Wefelmeyer is an academic researcher at the University of Cologne. He has contributed to research on the topics Estimator and Efficient estimator, has an h-index of 21, and has co-authored 99 publications receiving 1,317 citations. His previous affiliations include the University of Siegen and the Folkwang University of the Arts.


Papers
Journal ArticleDOI
TL;DR: For the class of bounded, symmetric, neg-unimodal loss functions, this article shows that for any estimator T(n) there exists a correction q* such that the risk of the adjusted estimator θ̂(n) + n^{-1} q*(θ̂(n)) matches that of T(n) up to terms of order o(n^{-1}), for all loss functions in the class.

85 citations

Journal ArticleDOI
TL;DR: This work explores the detectability of a subthreshold signal in a system of one or more detectors with different thresholds, and determines optimal configurations of detectors by varying the distances between the thresholds and the signal as well as the noise level.
Abstract: A subthreshold signal may be detected if noise is added to the data. We study a simple model, consisting of a constant signal to which at uniformly spaced times independent and identically distributed noise variables with known distribution are added. A detector records the times at which the noisy signal exceeds a threshold. There is an optimal noise level, called stochastic resonance. We explore the detectability of the signal in a system with one or more detectors, with different thresholds. We use a statistical detectability measure, the asymptotic variance of the best estimator of the signal from the thresholded data, or equivalently, the Fisher information in the data. In particular, we determine optimal configurations of detectors, varying the distances between the thresholds and the signal, as well as the noise level. The approach generalizes to nonconstant signals.

70 citations
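The detectability measure described in the abstract above can be sketched concretely. For a single detector with Gaussian noise (an illustrative assumption; the paper allows any known noise distribution), one thresholded observation is Bernoulli with success probability p = P(signal + noise > threshold), and the Fisher information about the signal has a closed form. Maximizing it over the noise level exhibits the stochastic-resonance effect: for a subthreshold signal, the information is largest at a strictly positive noise level. Function names and parameter values below are hypothetical.

```python
import math

def fisher_info(mu, tau, sigma):
    """Fisher information about the signal mu carried by one thresholded
    observation 1{mu + noise > tau}, for Gaussian noise N(0, sigma^2).
    For a Bernoulli(p) observation this is (dp/dmu)^2 / (p (1 - p))."""
    z = (tau - mu) / sigma
    p = 0.5 * math.erfc(z / math.sqrt(2))            # P(noisy signal exceeds tau)
    phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    dp_dmu = phi / sigma                             # sensitivity of p to the signal
    return dp_dmu ** 2 / (p * (1 - p))

# Subthreshold setting: threshold tau lies above the signal mu,
# so with no noise the detector never fires and carries no information.
mu, tau = 0.0, 1.0
sigmas = [0.1 * k for k in range(1, 51)]             # candidate noise levels
best = max(sigmas, key=lambda s: fisher_info(mu, tau, s))
```

The optimal noise level `best` is interior to the grid: too little noise and the threshold is almost never crossed, too much and the crossing probability saturates near 1/2 while the sensitivity to the signal vanishes.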

Journal ArticleDOI
TL;DR: A class of estimators for the error variance related to difference-based estimators is introduced: covariate-matched U-statistics. The explicit construction of the weights uses a kernel estimator for the covariate density.
Abstract: For nonparametric regression models with fixed and random design, two classes of estimators for the error variance have been introduced: second sample moments based on residuals from a nonparametric fit, and difference-based estimators. The former are asymptotically optimal but require estimating the regression function; the latter are simple but have larger asymptotic variance. For nonparametric regression models with random covariates, we introduce a class of estimators for the error variance that are related to difference-based estimators: covariate-matched U-statistics. We give conditions on the random weights involved that lead to asymptotically optimal estimators of the error variance. Our explicit construction of the weights uses a kernel estimator for the covariate density.

66 citations
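The covariate-matched U-statistics of the paper above are involved to state; a minimal sketch of the simpler first-order difference-based estimator they refine (often attributed to Rice) conveys the idea. After sorting the observations by the covariate, successive differences of the responses cancel the smooth regression function, so the average squared difference estimates twice the error variance. The data-generating model, sample size, and function name below are illustrative assumptions.

```python
import math, random

def rice_variance_estimator(x, y):
    """First-order difference-based estimator of the error variance in the
    model y_i = m(x_i) + eps_i.  After ordering by the covariate, successive
    differences nearly cancel the smooth m, and each squared difference has
    expectation close to 2 * Var(eps)."""
    pairs = sorted(zip(x, y))                        # order by covariate
    ys = [p[1] for p in pairs]
    n = len(ys)
    return sum((ys[i + 1] - ys[i]) ** 2 for i in range(n - 1)) / (2 * (n - 1))

# Illustrative data: smooth regression function plus N(0, 0.25) errors.
random.seed(0)
n, sigma = 2000, 0.5
x = [random.random() for _ in range(n)]
y = [math.sin(2 * math.pi * xi) + random.gauss(0, sigma) for xi in x]
est = rice_variance_estimator(x, y)                  # close to sigma^2 = 0.25
```

As the abstract notes, such difference-based estimators are simple but not asymptotically optimal; the covariate-matched U-statistics replace the crude "nearest neighbor after sorting" matching with kernel-based weights.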

Journal ArticleDOI
TL;DR: In this paper, the authors derived an i.i.d. representation for the empirical estimator based on residuals, using undersmoothed estimators for the regression curve.

50 citations

Journal ArticleDOI
TL;DR: In this article, it is shown that the density of a sum of independent random variables can be estimated by the convolution of kernel estimators for the marginal densities, and that the resulting estimator is n^{1/2}-consistent and converges in distribution in the spaces C₀(ℝ) and L₁ to a centered Gaussian process.

Abstract: The density of a sum of independent random variables can be estimated by the convolution of kernel estimators for the marginal densities. We show under mild conditions that the resulting estimator is n^{1/2}-consistent and converges in distribution in the spaces C₀(ℝ) and L₁ to a centered Gaussian process.

47 citations


Cited by
Journal ArticleDOI
TL;DR: Convergence of Probability Measures is P. Billingsley's monograph on the weak convergence of probability measures, a standard reference on the subject.
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex, Wiley, 1968. xii, 253 p. 9 1/4". 117s.

5,689 citations

Book ChapterDOI
01 Jan 2011
TL;DR: Weak convergence methods in metric spaces are studied in this work, with applications sufficient to show their power and utility; the results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables.
Abstract: The author's preface gives an outline: "This book is about weak convergence methods in metric spaces, with applications sufficient to show their power and utility. The Introduction motivates the definitions and indicates how the theory will yield solutions to problems arising outside it. Chapter 1 sets out the basic general theorems, which are then specialized in Chapter 2 to the space C[0, 1] of continuous functions on the unit interval and in Chapter 3 to the space D[0, 1] of functions with discontinuities of the first kind. The results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables." The book develops and expands on Donsker's 1951 and 1952 papers on the invariance principle and empirical distributions. The basic random variables remain real-valued although, of course, measures on C[0, 1] and D[0, 1] are vitally used. Within this framework, there are various possibilities for a different and apparently better treatment of the material. More of the general theory of weak convergence of probabilities on separable metric spaces would be useful. Metrizability of the convergence is not brought up until late in the Appendix. The close relation of the Prokhorov metric and a metric for convergence in probability is (hence) not mentioned (see V. Strassen, Ann. Math. Statist. 36 (1965), 423-439; the reviewer, ibid. 39 (1968), 1563-1572). This relation would illuminate and organize such results as Theorems 4.1, 4.2 and 4.4, which give isolated, ad hoc connections between weak convergence of measures and nearness in probability. In the middle of p. 16, it should be noted that C*(S) consists of signed measures which need only be finitely additive if S is not compact. On p. 239, where the author twice speaks of separable subsets having nonmeasurable cardinal, he means "discrete" rather than "separable."
Theorem 1.4 is Ulam's theorem that a Borel probability on a complete separable metric space is tight. Theorem 1 of Appendix 3 weakens completeness to topological completeness. After mentioning that probabilities on the rationals are tight, the author says it is an

3,554 citations

Book ChapterDOI
15 Feb 2011

1,876 citations

Journal ArticleDOI
TL;DR: In this paper, bias-corrected generalized empirical likelihood (GEL) and generalized method of moments (GMM) estimators are compared, and it is shown that GEL has no asymptotic bias due to correlation of the moment functions with their Jacobian.
Abstract: In an effort to improve the small sample properties of generalized method of moments (GMM) estimators, a number of alternative estimators have been suggested. These include empirical likelihood (EL), continuous updating, and exponential tilting estimators. We show that these estimators share a common structure, being members of a class of generalized empirical likelihood (GEL) estimators. We use this structure to compare their higher order asymptotic properties. We find that GEL has no asymptotic bias due to correlation of the moment functions with their Jacobian, eliminating an important source of bias for GMM in models with endogeneity. We also find that EL has no asymptotic bias from estimating the optimal weight matrix, eliminating a further important source of bias for GMM in panel data models. We give bias corrected GMM and GEL estimators. We also show that bias corrected EL inherits the higher order property of maximum likelihood, that it is higher order asymptotically efficient relative to the other bias corrected estimators.

844 citations

Posted Content
TL;DR: In this paper, the authors test parametric models by comparing their implied parametric density to the same density estimated nonparametrically, and do not replace the continuous-time model by discrete approximations, even though the data are recorded at discrete intervals.
Abstract: Different continuous-time models for interest rates coexist in the literature. We test parametric models by comparing their implied parametric density to the same density estimated nonparametrically. We do not replace the continuous-time model by discrete approximations, even though the data are recorded at discrete intervals. The principal source of rejection of existing models is the strong nonlinearity of the drift. Around its mean, where the drift is essentially zero, the spot rate behaves like a random walk. The drift then mean-reverts strongly when far away from the mean. The volatility is higher when away from the mean.

830 citations
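The model-testing idea in the abstract above — compare the density implied by a parametric model with the same density estimated nonparametrically — can be sketched in a few lines via an integrated squared difference between a kernel density estimate and the candidate density. Everything below (Gaussian data, bandwidth, grid, the two candidate models) is an illustrative assumption; the paper applies the idea to continuous-time interest-rate models.

```python
import math, random

def kde(z, data, h):
    """Gaussian-kernel density estimate at z with bandwidth h."""
    n = len(data)
    return sum(math.exp(-((z - x) / h) ** 2 / 2) for x in data) / (n * h * math.sqrt(2 * math.pi))

def density_distance(data, param_density, h, grid):
    """Integrated squared difference between the kernel density estimate
    and a candidate parametric density; large values reject the model."""
    dz = grid[1] - grid[0]
    return sum((kde(z, data, h) - param_density(z)) ** 2 for z in grid) * dz

# Illustrative data from N(0, 1); fit a normal model by moments.
random.seed(2)
data = [random.gauss(0, 1) for _ in range(500)]
mu = sum(data) / len(data)
var = sum((x - mu) ** 2 for x in data) / len(data)
normal = lambda z: math.exp(-(z - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
grid = [-4 + 0.08 * k for k in range(101)]
d_true = density_distance(data, normal, h=0.3, grid=grid)

# Misspecified candidate: same mean but variance inflated fourfold.
wrong = lambda z: math.exp(-(z - mu) ** 2 / (8 * var)) / math.sqrt(8 * math.pi * var)
d_wrong = density_distance(data, wrong, h=0.3, grid=grid)
```

The correctly specified model yields a much smaller distance than the misspecified one; in the paper this comparison is carried out without discretizing the continuous-time model, even though the data are recorded at discrete intervals.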