Author

Miguel A. Arcones

Bio: Miguel A. Arcones is an academic researcher from Binghamton University. The author has contributed to research in topics such as Estimator and Empirical process. The author has an h-index of 21 and has co-authored 62 publications receiving 1,907 citations. Previous affiliations of Miguel A. Arcones include the University of Utah and Columbia University.


Papers
Book ChapterDOI
TL;DR: A U-process, as discussed by the authors, is a collection of U-statistics indexed by a family H of kernels h of m variables, based on a probability measure P on (S,S).
Abstract: A U-process is a collection of U-statistics. Concretely, a U-process over a family H of kernels h of m variables, based on a probability measure P on (S,S), is the collection \(\{U_n^{(m)}(h,P) : h \in \mathcal{H}\}\) of U-statistics. This chapter is devoted to the asymptotic theory of U-processes: we are interested in finding conditions on H and P ensuring that the law of large numbers, the central limit theorem, or the law of the iterated logarithm hold for \(U_n^{(m)}(h,P)\), uniformly in h ∈ H. The theory of empirical processes deals with the same questions for the case m = 1, and U-process theory is patterned after it. This is a relatively new subject, at least in the generality presented here (H being an arbitrary collection of kernels defined on a general measurable space). There is therefore a need to indicate its usefulness. To this effect, a section on applications is added at the end of the chapter (Section 5.5); there we illustrate the use of each of the main theorems in this chapter by deriving properties of certain multidimensional generalizations of the median and, more generally, of M-estimators, and by studying estimators of the cumulative hazard and distribution functions of a random variable based on truncated data.
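To make the definition concrete, here is a small Python sketch (illustrative only, not taken from the chapter): it evaluates the degree-2 U-statistic \(U_n^{(2)}(h) = \binom{n}{2}^{-1}\sum_{i<j} h(X_i,X_j)\) for each kernel in a small, hypothetical family H, i.e. one finite slice of a U-process.

```python
# Illustrative sketch (not from the chapter): evaluating a degree-2 U-statistic
# U_n^{(2)}(h) = (n choose 2)^{-1} * sum_{i<j} h(X_i, X_j)
# for each kernel h in a small family H, i.e. one "slice" of a U-process.
from itertools import combinations
import numpy as np

def u_statistic(sample, kernel):
    """Average of kernel(x_i, x_j) over all pairs i < j."""
    pairs = list(combinations(sample, 2))
    return sum(kernel(x, y) for x, y in pairs) / len(pairs)

# A hypothetical finite family H of symmetric kernels of m = 2 variables.
H = {
    "variance":       lambda x, y: 0.5 * (x - y) ** 2,  # U-statistic = sample variance
    "gini_mean_diff": lambda x, y: abs(x - y),          # Gini mean difference
}

rng = np.random.default_rng(0)
X = rng.normal(size=200)
u_process_values = {name: u_statistic(X, h) for name, h in H.items()}
print(u_process_values)
```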

308 citations

Journal ArticleDOI
TL;DR: In this article, limit theorems for functions of stationary mean-zero Gaussian sequences of vectors satisfying long-range dependence conditions are considered, and a sufficient bracketing condition for these limit theorems to hold uniformly over a class of functions is presented.
Abstract: Limit theorems for functions of stationary mean-zero Gaussian sequences of vectors satisfying long-range dependence conditions are considered. Depending on the rate of decay of the coefficients, the limit law can be either Gaussian or the law of a multiple Itô-Wiener integral. We prove the bootstrap of these limit theorems in the case when the limit is normal. A sufficient bracketing condition for these limit theorems to hold uniformly over a class of functions is presented.
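In results of this type, the dichotomy between a Gaussian limit and a multiple Itô-Wiener integral limit is typically governed by the interplay between the covariance decay and the Hermite rank of the function; that framing is an assumption here, not a statement from the abstract. The sketch below computes the Hermite coefficients of a function under the standard normal; the index of the first nonzero coefficient is the Hermite rank.

```python
# Illustrative sketch (assumed framing, not code from the paper): Hermite
# coefficients c_k = E[f(Z) He_k(Z)] / k! of f under Z ~ N(0,1).  The first
# nonzero index (the Hermite rank) is the quantity that usually decides whether
# normalized partial sums of f(X_i) are asymptotically Gaussian or driven by a
# multiple Ito-Wiener integral under long-range dependence.
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def hermite_coefficients(f, max_order=6, quad_points=80):
    x, w = He.hermegauss(quad_points)                       # weight exp(-x^2/2)
    coeffs = []
    for k in range(max_order + 1):
        basis = He.hermeval(x, np.eye(max_order + 1)[k])    # He_k at the nodes
        integral = np.sum(w * f(x) * basis) / sqrt(2 * pi)  # E[f(Z) He_k(Z)]
        coeffs.append(integral / factorial(k))
    return np.array(coeffs)

print(hermite_coefficients(lambda z: z**2 - 1))  # Hermite rank 2: only c_2 is ~1
```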

236 citations

Journal ArticleDOI
TL;DR: Bootstrap distributional limit theorems for $U$ and $V$ statistics are proved in this paper, under weak moment conditions and without restrictions on the bootstrap sample size (as long as it tends to infinity).
Abstract: Bootstrap distributional limit theorems for $U$ and $V$ statistics are proved. They hold a.s., under weak moment conditions and without restrictions on the bootstrap sample size (as long as it tends to $\infty$), regardless of the degree of degeneracy of $U$ and $V$. A testing procedure based on these results is outlined.
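As a rough illustration only (the paper also covers degenerate $U$ and $V$ statistics, which need more care than this non-degenerate example, and this is not the paper's construction), the following sketch bootstraps a degree-2 U-statistic with a bootstrap sample size m that differs from n.

```python
# Rough illustration (not the paper's construction): bootstrapping a
# non-degenerate degree-2 U-statistic with bootstrap sample size m != n.
import numpy as np

def u_stat(sample):
    """Gini mean difference: average of |x_i - x_j| over pairs i != j."""
    x = np.asarray(sample)
    diffs = np.abs(x[:, None] - x[None, :])
    n = len(x)
    return diffs.sum() / (n * (n - 1))          # off-diagonal average

rng = np.random.default_rng(1)
X = rng.exponential(size=300)
n, m, B = len(X), 100, 500                      # bootstrap size m may differ from n
theta_hat = u_stat(X)

boot = np.array([u_stat(rng.choice(X, size=m, replace=True)) for _ in range(B)])
# Bootstrap approximation to the law of sqrt(m) * (U_m* - U_n):
print(np.quantile(np.sqrt(m) * (boot - theta_hat), [0.025, 0.975]))
```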

201 citations

Journal ArticleDOI
TL;DR: In this article, sufficient conditions are given for the weak convergence to Gaussian processes of empirical processes and U-processes from stationary β-mixing sequences indexed by V-C subgraph classes of functions.
Abstract: This paper gives sufficient conditions for the weak convergence to Gaussian processes of empirical processes and U-processes from stationary β-mixing sequences indexed by V-C subgraph classes of functions. If the envelope function of the V-C subgraph class is in L^p for some p > 2, weak convergence holds under a corresponding decay condition on the β-mixing coefficients. These conditions are almost minimal.
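A minimal simulation under assumed parameters (not the paper's setting verbatim): a Gaussian AR(1) sequence is β-mixing, and the half-lines {(-∞, t]} form a V-C subgraph class of indicator functions, so a uniform CLT would keep the scaled Kolmogorov-Smirnov-type supremum stochastically bounded.

```python
# Illustrative simulation (assumed parameters): the empirical process of a
# beta-mixing Gaussian AR(1) sequence, indexed by the V-C class of half-lines.
# sup_t sqrt(n)|F_n(t) - F(t)| should stay stochastically bounded if a uniform
# CLT holds over the class.
import numpy as np
from scipy.stats import norm

def ar1(n, phi=0.5, rng=None):
    rng = rng or np.random.default_rng()
    x = np.empty(n)
    x[0] = rng.normal(scale=1 / np.sqrt(1 - phi**2))    # stationary start
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

rng = np.random.default_rng(2)
n, phi = 2000, 0.5
grid = np.linspace(-3, 3, 200)
sups = []
for _ in range(200):
    x = ar1(n, phi, rng)
    F_n = (x[:, None] <= grid[None, :]).mean(axis=0)
    F = norm.cdf(grid, scale=1 / np.sqrt(1 - phi**2))   # stationary marginal cdf
    sups.append(np.sqrt(n) * np.max(np.abs(F_n - F)))
print(np.mean(sups), np.quantile(sups, 0.95))
```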

168 citations

Journal Article
TL;DR: In this paper, the almost sure and in-probability bootstrap central limit theorem is studied for random variables X with infinite second moment, in the domain of attraction of the normal law or of other stable laws, or in the partial domain of attraction of an infinitely divisible law.
Abstract: We study the almost sure and in-probability bootstrap central limit theorem for random variables X with infinite second moment, in the domain of attraction of the normal law or of other stable laws, or in the partial domain of attraction of an infinitely divisible law. The bootstrap sample size m_n need not be equal to (and in some cases must necessarily differ from) the size n of the original sample.
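The point that the bootstrap sample size m_n may have to differ from n can be illustrated with an m-out-of-n bootstrap of the sample mean for heavy-tailed data; the distribution, tail index, and normalization below are assumptions chosen for illustration, not taken from the paper.

```python
# Illustrative sketch (assumed distribution and parameters): m-out-of-n bootstrap
# of the sample mean when the data have infinite variance (Pareto tail index 1.5,
# in the domain of attraction of a stable law).  Taking m = o(n) is the standard
# remedy when the naive m = n bootstrap is inconsistent.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
X = rng.pareto(1.5, size=n) + 1.0               # infinite variance, finite mean
m = int(n ** 0.5)                               # bootstrap sample size m_n << n
B = 1000

mean_n = X.mean()
boot_means = np.array([rng.choice(X, size=m, replace=True).mean() for _ in range(B)])
# Bootstrap law of the centered mean, scaled by m^{1 - 1/alpha} with alpha = 1.5
# to mirror the stable normalization (illustrative choice).
pivot = m ** (1 - 1 / 1.5) * (boot_means - mean_n)
print(np.quantile(pivot, [0.05, 0.5, 0.95]))
```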

85 citations


Cited by
Journal ArticleDOI
TL;DR: Convergence of Probability Measures, as mentioned in this paper, is P. Billingsley's classic monograph on the weak convergence of probability measures.
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex, Wiley, 1968. xii, 253 p. 9 1/4". 117s.

5,689 citations

Journal ArticleDOI
TL;DR: In this article, a rigorous distribution theory for kernel-based matching is presented, and the method of matching is extended to more general conditions than the ones assumed in the statistical literature on the topic.
Abstract: This paper develops the method of matching as an econometric evaluation estimator. A rigorous distribution theory for kernel-based matching is presented. The method of matching is extended to more general conditions than the ones assumed in the statistical literature on the topic. We focus on the method of propensity score matching and show that it is not necessarily better, in the sense of reducing the variance of the resulting estimator, to use the propensity score method even if the propensity score is known. We extend the statistical literature on the propensity score by considering the case when it is estimated both parametrically and nonparametrically. We examine the benefits of separability and exclusion restrictions in improving the efficiency of the estimator. Our methods also apply to the econometric selection bias estimator. Matching is a widely used method of evaluation. It is based on the intuitively attractive idea of contrasting the outcomes of programme participants (denoted Y_1) with the outcomes of "comparable" nonparticipants (denoted Y_0). Differences in the outcomes between the two groups are attributed to the programme. Let I_0 and I_1 denote the sets of indices for nonparticipants and participants, respectively. The following framework describes conventional matching methods as well as the smoothed versions of these methods analysed in this paper. To estimate a treatment effect for each treated person i ∈ I_1, outcome Y_1i is compared to an average of the outcomes Y_0j for matched persons j ∈ I_0 in the untreated sample. Matches are constructed on the basis of observed characteristics X in R^d. Typically, when the observed characteristics of an untreated person are closer to those of the treated person i ∈ I_1, using a specific distance measure, the untreated person gets a higher weight in constructing the match. The estimated gain for each person i in the treated sample is the difference between Y_1i and a weighted average of the matched outcomes Y_0j, j ∈ I_0.
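The sketch below is a minimal, hedged illustration of kernel matching on an estimated propensity score, not the authors' estimator: each treated unit's outcome is compared with a kernel-weighted average of untreated outcomes whose estimated scores are close. The synthetic data, Gaussian kernel, and bandwidth are assumptions for illustration.

```python
# Minimal sketch (not the authors' estimator): kernel matching on an estimated
# propensity score.  For each treated unit, the comparison outcome is a
# kernel-weighted average of untreated outcomes with nearby scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

def att_kernel_matching(Y, D, X, bandwidth=0.05):
    """Average treatment effect on the treated via Gaussian-kernel matching."""
    p = LogisticRegression().fit(X, D).predict_proba(X)[:, 1]      # propensity scores
    treated, control = np.where(D == 1)[0], np.where(D == 0)[0]
    effects = []
    for i in treated:
        w = np.exp(-0.5 * ((p[control] - p[i]) / bandwidth) ** 2)  # kernel weights
        effects.append(Y[i] - np.average(Y[control], weights=w))
    return np.mean(effects)

# Synthetic data (purely illustrative): treatment depends on X, true gain is 1.0.
rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 2))
D = (rng.random(2000) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
Y = X @ np.array([1.0, 0.5]) + 1.0 * D + rng.normal(size=2000)
print(att_kernel_matching(Y, D, X))             # should be near 1.0
```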

3,861 citations

Journal ArticleDOI
TL;DR: This work proposes a framework for analyzing and comparing distributions, which is used to construct statistical tests to determine whether two samples are drawn from different distributions, and presents two distribution-free tests based on large deviation bounds for the maximum mean discrepancy (MMD).
Abstract: We propose a framework for analyzing and comparing distributions, which we use to construct statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS), and is called the maximum mean discrepancy (MMD). We present two distribution-free tests based on large deviation bounds for the MMD, and a third test based on the asymptotic distribution of this statistic. The MMD can be computed in quadratic time, although efficient linear time approximations are available. Our statistic is an instance of an integral probability metric, and various classical metrics on distributions are obtained when alternative function classes are used in place of an RKHS. We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.
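A short sketch of the quadratic-time biased MMD^2 statistic with an RBF kernel, calibrated by a permutation test rather than the paper's large deviation or asymptotic thresholds; the bandwidth, sample sizes, and mean-shift alternative are illustrative choices.

```python
# Sketch of a two-sample test with the biased, quadratic-time MMD^2 statistic
# under an RBF kernel, calibrated by permutation.  Bandwidth choice is illustrative.
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    return rbf(X, X, sigma).mean() + rbf(Y, Y, sigma).mean() - 2 * rbf(X, Y, sigma).mean()

def permutation_pvalue(X, Y, n_perm=200, sigma=1.0, rng=None):
    rng = rng or np.random.default_rng()
    observed = mmd2_biased(X, Y, sigma)
    pooled, n = np.vstack([X, Y]), len(X)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)                     # shuffle pooled rows
        count += mmd2_biased(perm[:n], perm[n:], sigma) >= observed
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(5)
X = rng.normal(0.0, 1.0, size=(100, 2))
Y = rng.normal(0.5, 1.0, size=(100, 2))        # mean-shifted alternative
print(permutation_pvalue(X, Y, rng=rng))        # small p-value expected
```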

3,792 citations