Showing papers on "Asymptotic distribution published in 2018"


Posted Content
TL;DR: A distributionally robust stochastic optimization framework that learns a model providing good performance against perturbations to the data-generating distribution is developed, and a convex formulation for the problem is given, providing several convergence guarantees.
Abstract: A common goal in statistics and machine learning is to learn models that can perform well against distributional shifts, such as latent heterogeneous subpopulations, unknown covariate shifts, or unmodeled temporal effects. We develop and analyze a distributionally robust stochastic optimization (DRO) framework that learns a model providing good performance against perturbations to the data-generating distribution. We give a convex formulation for the problem, providing several convergence guarantees. We prove finite-sample minimax upper and lower bounds, showing that distributional robustness sometimes comes at a cost in convergence rates. We give limit theorems for the learned parameters, where we fully specify the limiting distribution so that confidence intervals can be computed. On real tasks including generalizing to unknown subpopulations, fine-grained recognition, and providing good tail performance, the distributionally robust approach often exhibits improved performance.
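A worked statement of the objective may help fix ideas; the display below is a sketch in generic notation (the loss $\ell$, divergence $D_f$, and radius $\rho$ are standing in for the paper's exact choices):

$$\min_{\theta \in \Theta} \; \sup_{Q:\, D_f(Q \,\|\, P_0) \le \rho} \; \mathbb{E}_{Q}\left[\ell(\theta; X)\right],$$

where $P_0$ is the data-generating distribution. For an $f$-divergence ball such as $\chi^2$, the inner supremum admits a convex dual representation, which is one standard route to a convex formulation of the whole problem.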

221 citations


Journal ArticleDOI
TL;DR: The asymptotic distribution of empirical Wasserstein distances is derived as the optimal value of a linear programme with random objective function, which facilitates statistical inference in large generality.
Abstract: Summary The Wasserstein distance is an attractive tool for data analysis but statistical inference is hindered by the lack of distributional limits. To overcome this obstacle, for probability measures supported on finitely many points, we derive the asymptotic distribution of empirical Wasserstein distances as the optimal value of a linear programme with random objective function. This facilitates statistical inference (e.g. confidence intervals for sample-based Wasserstein distances) in large generality. Our proof is based on directional Hadamard differentiability. Failure of the classical bootstrap and alternatives are discussed. The utility of the distributional results is illustrated on two data sets.
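For finitely supported measures, the empirical distance in question is exactly the optimal value of a transportation linear programme; a minimal sketch with scipy, where the support points, marginals, and W1 ground cost are invented for illustration (dedicated optimal-transport solvers are faster in practice):

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_lp(p, q, cost):
    """Optimal value of the transportation linear programme between
    two probability vectors p, q with ground-cost matrix `cost`."""
    n, m = cost.shape
    # Row marginals: sum_j pi[i, j] = p[i]
    A_rows = np.kron(np.eye(n), np.ones(m))
    # Column marginals: sum_i pi[i, j] = q[j]
    A_cols = np.kron(np.ones(n), np.eye(m))
    res = linprog(cost.ravel(),
                  A_eq=np.vstack([A_rows, A_cols]),
                  b_eq=np.concatenate([p, q]),
                  bounds=(0, None))
    return res.fun

# Toy empirical measures on three common support points
x = np.array([0.0, 1.0, 2.0])
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
print(wasserstein_lp(p, q, np.abs(x[:, None] - x[None, :])))  # W1
```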

110 citations


Journal ArticleDOI
TL;DR: In this paper, an asymptotic framework for conducting inference on parameters of the form $\phi(\theta_0)$, where $\phi$ is a known directionally differentiable function and $\theta_0$ is estimated by $\hat\theta_n$, is presented.
Abstract: This paper studies an asymptotic framework for conducting inference on parameters of the form $\phi(\theta_0)$, where $\phi$ is a known directionally differentiable function and $\theta_0$ is estimated by $\hat\theta_n$. In these settings, the asymptotic distribution of the plug-in estimator $\phi(\hat\theta_n)$ can be readily derived employing existing extensions to the Delta method. We show, however, that the "standard" bootstrap is only consistent under overly stringent conditions; in particular, we establish that differentiability of $\phi$ is a necessary and sufficient condition for bootstrap consistency whenever the limiting distribution of $\hat\theta_n$ is Gaussian. An alternative resampling scheme is proposed which remains consistent when the bootstrap fails, and is shown to provide local size control under restrictions on the directional derivative of $\phi$. We illustrate the utility of our results by developing a test of whether a Hilbert space valued parameter belongs to a convex set, a setting that includes moment inequality problems and certain tests of shape restrictions as special cases.
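The relevant extension of the Delta method can be stated in one line; a sketch with assumed notation ($r_n$ the convergence rate, $G$ the weak limit of $r_n(\hat\theta_n - \theta_0)$):

$$r_n\bigl\{\phi(\hat\theta_n) - \phi(\theta_0)\bigr\} \rightsquigarrow \phi'_{\theta_0}(G),$$

where the directional derivative $\phi'_{\theta_0}$ may fail to be linear; the paper's bootstrap inconsistency result turns precisely on that possible nonlinearity when $G$ is Gaussian.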

80 citations


Journal ArticleDOI
TL;DR: The asymptotic properties of the difference-in-means estimator under rerandomization are studied, based solely on the randomness of the treatment assignment and without imposing any parametric modeling assumptions on the covariates or outcome; the analysis reveals a non-Gaussian asymptotic distribution for this estimator.
Abstract: Although complete randomization ensures covariate balance on average, the chance of observing significant differences between treatment and control covariate distributions increases with many covariates. Rerandomization discards randomizations that do not satisfy a predetermined covariate balance criterion, generally resulting in better covariate balance and more precise estimates of causal effects. Previous theory has derived finite sample theory for rerandomization under the assumptions of equal treatment group sizes, Gaussian covariate and outcome distributions, or additive causal effects, but not for the general sampling distribution of the difference-in-means estimator for the average causal effect. We develop asymptotic theory for rerandomization without these assumptions, which reveals a non-Gaussian asymptotic distribution for this estimator, specifically a linear combination of a Gaussian random variable and truncated Gaussian random variables. This distribution follows because rerandomization affects only the projection of potential outcomes onto the covariate space but does not affect the corresponding orthogonal residuals. We demonstrate that, compared with complete randomization, rerandomization reduces the asymptotic quantile ranges of the difference-in-means estimator. Moreover, our work constructs accurate large-sample confidence intervals for the average causal effect.
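A hedged sketch of the mechanism under study: complete randomizations are redrawn until a Mahalanobis balance criterion is met. The scaling constant follows the usual Morgan-Rubin form, but the threshold `a` and all values below are illustrative.

```python
import numpy as np

def rerandomize(X, n_treat, a, rng=np.random.default_rng(0)):
    """Redraw complete randomizations until the Mahalanobis distance
    between treatment and control covariate means is at most `a`."""
    n, k = X.shape
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    while True:
        w = np.zeros(n, dtype=bool)
        w[rng.choice(n, size=n_treat, replace=False)] = True
        diff = X[w].mean(axis=0) - X[~w].mean(axis=0)
        # Balance criterion M = (n1*n0/n) * diff' S^{-1} diff
        m = n_treat * (n - n_treat) / n * diff @ cov_inv @ diff
        if m <= a:
            return w

X = np.random.default_rng(1).normal(size=(100, 3))
w = rerandomize(X, n_treat=50, a=2.0)  # accepted assignment vector
```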

77 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of nonparametric regression under shape constraints, and study the behavior of the risk of the least squares estimator (LSE) and its pointwise limiting distribution.
Abstract: We consider the problem of nonparametric regression under shape constraints. The main examples include isotonic regression (with respect to any partial order), unimodal/convex regression, additive shape-restricted regression and the constrained single index model. We review some of the theoretical properties of the least squares estimator (LSE) in these problems, emphasizing the adaptive nature of the LSE. In particular, we study the behavior of the risk of the LSE, and its pointwise limiting distribution theory, with special emphasis on isotonic regression. We survey various methods for constructing pointwise confidence intervals around these shape-restricted functions. We also briefly discuss the computation of the LSE and indicate some open research problems and future directions.
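A minimal scikit-learn sketch of the simplest case in the list, isotonic regression on the line, where the LSE is computed by the pool-adjacent-violators algorithm (the signal and noise level are invented for illustration):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sqrt(x) + rng.normal(scale=0.1, size=x.size)  # monotone signal + noise

iso = IsotonicRegression(increasing=True)
fhat = iso.fit_transform(x, y)  # piecewise-constant isotonic LSE
```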

75 citations


Posted Content
TL;DR: In this paper, a new estimator for causal effects with panel data is presented, which builds on insights behind the widely used difference in differences and synthetic control methods, and it performs well in settings where the conventional estimators are commonly used in practice.
Abstract: We present a new estimator for causal effects with panel data that builds on insights behind the widely used difference in differences and synthetic control methods. Relative to these methods we find, both theoretically and empirically, that this "synthetic difference in differences" estimator has desirable robustness properties, and that it performs well in settings where the conventional estimators are commonly used in practice. We study the asymptotic behavior of the estimator when the systematic part of the outcome model includes latent unit factors interacted with latent time factors, and we present conditions for consistency and asymptotic normality.
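Schematically, and in notation borrowed from the later published version of this line of work rather than verbatim from this preprint, the estimator solves a weighted two-way fixed effects problem:

$$\hat{\tau}^{\text{sdid}} = \arg\min_{\tau,\,\mu,\,\alpha,\,\beta} \sum_{i=1}^{N} \sum_{t=1}^{T} \left(Y_{it} - \mu - \alpha_i - \beta_t - W_{it}\,\tau\right)^2 \hat{\omega}_i\, \hat{\lambda}_t,$$

where the unit weights $\hat\omega_i$ make control units track the treated units' pre-treatment outcomes and the time weights $\hat\lambda_t$ make pre-treatment periods track the treated periods.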

71 citations


Posted Content
TL;DR: In this paper, a formal estimation procedure for the parameters of the fractional Poisson process (fPp) is proposed; the fPp makes the standard Poisson model more flexible by permitting non-exponential, heavy-tailed distributions of interarrival times and different scaling properties.
Abstract: The paper proposes a formal estimation procedure for parameters of the fractional Poisson process (fPp). Such procedures are needed to make the fPp model usable in applied situations. The basic idea of fPp, motivated by experimental data with long memory, is to make the standard Poisson model more flexible by permitting non-exponential, heavy-tailed distributions of interarrival times and different scaling properties. We establish the asymptotic normality of our estimators for the two parameters appearing in our fPp model. This fact permits construction of the corresponding confidence intervals. The properties of the estimators are then tested using simulated data.

70 citations


Posted Content
TL;DR: In this article, leave-out estimators of quadratic forms designed for the study of linear models with unrestricted heteroscedasticity are proposed for the analysis of variance and tests of linear restrictions in models with many regressors.
Abstract: We propose leave-out estimators of quadratic forms designed for the study of linear models with unrestricted heteroscedasticity. Applications include analysis of variance and tests of linear restrictions in models with many regressors. An approximation algorithm is provided that enables accurate computation of the estimator in very large datasets. We study the large sample properties of our estimator allowing the number of regressors to grow in proportion to the number of observations. Consistency is established in a variety of settings where plug-in methods and estimators predicated on homoscedasticity exhibit first-order biases. For quadratic forms of increasing rank, the limiting distribution can be represented by a linear combination of normal and non-central $\chi^2$ random variables, with normality ensuing under strong identification. Standard error estimators are proposed that enable tests of linear restrictions and the construction of uniformly valid confidence intervals for quadratic forms of interest. We find in Italian social security records that leave-out estimates of a variance decomposition in a two-way fixed effects model of wage determination yield substantially different conclusions regarding the relative contribution of workers, firms, and worker-firm sorting to wage inequality than conventional methods. Monte Carlo exercises corroborate the accuracy of our asymptotic approximations, with clear evidence of non-normality emerging when worker mobility between blocks of firms is limited.
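For a quadratic form $\theta = \beta' A \beta$, the leave-out construction can be sketched as follows (notation assumed here, not copied from the paper): letting $\hat\beta_{-i}$ denote the OLS estimate computed without observation $i$, the unbiased variance estimate $\hat\sigma_i^2 = y_i\,(y_i - x_i'\hat\beta_{-i})$ yields

$$\hat{\theta} = \hat{\beta}' A \hat{\beta} - \sum_i B_{ii}\,\hat{\sigma}_i^2, \qquad B_{ii} = x_i'(X'X)^{-1} A (X'X)^{-1} x_i,$$

which removes the plug-in bias term by term without restricting the heteroscedasticity.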

69 citations


Journal ArticleDOI
TL;DR: In this article, the asymptotic behavior of the one-dimensional elephant random walk (ERW) was studied in the diffusive, critical, and superdiffusive regimes.
Abstract: The purpose of this paper is to establish, via a martingale approach, some refinements on the asymptotic behavior of the one-dimensional elephant random walk (ERW). The asymptotic behavior of the ERW mainly depends on a memory parameter $p$ which lies between zero and one. This behavior is totally different in the diffusive regime ($0 \le p < 3/4$), the critical regime ($p = 3/4$), and the superdiffusive regime ($3/4 < p \le 1$). In the diffusive and critical regimes, we establish some new results on the almost sure asymptotic behavior of the ERW, such as the quadratic strong law and the law of the iterated logarithm. In the superdiffusive regime, we provide the first rigorous mathematical proof that the limiting distribution of the ERW is not Gaussian.
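A short simulation sketch of the ERW dynamics: at each step the elephant recalls a uniformly chosen past step and repeats it with probability p, reversing it otherwise. The symmetric first step is a simplifying assumption; the general model gives the first step its own parameter.

```python
import numpy as np

def elephant_random_walk(n, p, rng=np.random.default_rng(0)):
    """Simulate n steps of the one-dimensional ERW with memory parameter p."""
    steps = np.empty(n, dtype=int)
    steps[0] = rng.choice([-1, 1])  # symmetric first step for simplicity
    for k in range(1, n):
        remembered = steps[rng.integers(k)]          # recall a past step
        steps[k] = remembered if rng.random() < p else -remembered
    return steps.cumsum()

# p < 3/4: diffusive; p = 3/4: critical; p > 3/4: superdiffusive
path = elephant_random_walk(10_000, p=0.8)
```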

60 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the asymptotic behavior of the posterior obtained from approximate Bayesian computation (ABC) and the ensuing posterior mean, and give general results on: (i) the rate of concentration of the ABC posterior on sets containing the true parameter (vector); (ii) the limiting shape of the posterior; and (iii) the asymptotic distribution of the ABC posterior mean.
Abstract: Approximate Bayesian computation (ABC) is becoming an accepted tool for statistical analysis in models with intractable likelihoods. With the initial focus being primarily on the practical import of ABC, exploration of its formal statistical properties has begun to attract more attention. In this paper we consider the asymptotic behavior of the posterior obtained from ABC and the ensuing posterior mean. We give general results on: (i) the rate of concentration of the ABC posterior on sets containing the true parameter (vector); (ii) the limiting shape of the posterior; and (iii) the asymptotic distribution of the ABC posterior mean. These results hold under given rates for the tolerance used within ABC, mild regularity conditions on the summary statistics, and a condition linked to identification of the true parameters. Using simple illustrative examples that have featured in the literature, we demonstrate that the required identification condition is far from guaranteed. The implications of the theoretical results for practitioners of ABC are also highlighted.
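A minimal sketch of plain rejection ABC, the baseline algorithm whose posterior these results describe; all function names and the toy normal-mean example are invented for illustration:

```python
import numpy as np

def abc_rejection(observed_stat, prior_sampler, simulator, summary,
                  n_draws, tolerance):
    """Keep parameter draws whose simulated summary statistics land
    within `tolerance` of the observed summary statistics."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        s = summary(simulator(theta))
        if np.linalg.norm(s - observed_stat) <= tolerance:
            accepted.append(theta)
    return np.array(accepted)  # approximate posterior sample

# Toy example: infer a normal mean from the sample average
rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, size=100)
post = abc_rejection(
    observed_stat=data.mean(),
    prior_sampler=lambda: rng.normal(scale=3.0),
    simulator=lambda th: rng.normal(loc=th, size=100),
    summary=np.mean,
    n_draws=20_000,
    tolerance=0.05,
)
```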

60 citations


Journal ArticleDOI
TL;DR: In this article, a general theory for establishing the asymptotic distribution of the aggregated M-estimators using a weighted average with weights depending on the subgroup sample sizes was developed.
Abstract: The divide and conquer method is a common strategy for handling massive data. In this article, we study the divide and conquer method for cubic-rate estimators under the massive data framework. We develop a general theory for establishing the asymptotic distribution of the aggregated M-estimators using a weighted average with weights depending on the subgroup sample sizes. Under certain condition on the growing rate of the number of subgroups, the resulting aggregated estimators are shown to have faster convergence rate and asymptotic normal distribution, which are more tractable in both computation and inference than the original M-estimators based on pooled data. Our theory applies to a wide class of M-estimators with cube root convergence rate, including the location estimator, maximum score estimator and value search estimator. Empirical performance via simulations and a real data application also validate our theoretical findings.
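A sketch of the aggregation step with size-proportional weights, one simple choice of the weighting discussed in the abstract (the paper's exact weights may differ); the median stands in for a generic subgroup M-estimator to show the mechanics:

```python
import numpy as np

def aggregate_subgroup_estimates(estimates, sizes):
    """Weighted average of subgroup M-estimates, with weights
    proportional to subgroup sample sizes."""
    estimates = np.asarray(estimates, dtype=float)
    w = np.asarray(sizes, dtype=float)
    w /= w.sum()
    return w @ estimates

rng = np.random.default_rng(0)
chunks = [rng.standard_cauchy(size=n) for n in (500, 800, 700)]
est = aggregate_subgroup_estimates([np.median(c) for c in chunks],
                                   [c.size for c in chunks])
```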

Journal ArticleDOI
TL;DR: In this article, it was shown that a sequence whose asymptotic distribution of pair correlations is Poissonian must necessarily be equidistributed, and that for sequences which are not, the square-integral of the density of the sequence gives a lower bound for the pair correlations.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the limiting distribution of the augmented Dickey-Fuller (ADF) test under the null hypothesis of a unit root is valid under a very general set of assumptions that goes far beyond the linear AR(∞) process assumption typically imposed.
Abstract: It is shown that the limiting distribution of the augmented Dickey–Fuller (ADF) test under the null hypothesis of a unit root is valid under a very general set of assumptions that goes far beyond the linear AR(∞) process assumption typically imposed. In essence, all that is required is that the error process driving the random walk possesses a continuous spectral density that is strictly positive. Furthermore, under the same weak assumptions, the limiting distribution of the ADF test is derived under the alternative of stationarity, and a theoretical explanation is given for the well-known empirical fact that the test's power is a decreasing function of the chosen autoregressive order p. The intuitive reason for the reduced power of the ADF test is that, as p tends to infinity, the p regressors become asymptotically collinear.
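A short statsmodels sketch of running the ADF test at a fixed autoregressive order p; the simulated random walk makes the unit-root null true by construction, and the paper's power result concerns what happens under stationarity as p grows:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=500))  # a random walk, so the null holds

# Fixed order p = 4, automatic lag selection disabled
res = adfuller(y, maxlag=4, autolag=None)
print("ADF statistic:", res[0], "p-value:", res[1])
```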

Journal ArticleDOI
TL;DR: In this paper, a cascade of increasingly complex transformation models that can be estimated, compared and analysed in the maximum likelihood framework is presented, and the asymptotic normality of the proposed estimators is established for discrete and continuous responses.
Abstract: We propose and study properties of maximum likelihood estimators in the class of conditional transformation models. Based on a suitable explicit parameterization of the unconditional or conditional transformation function, we establish a cascade of increasingly complex transformation models that can be estimated, compared and analysed in the maximum likelihood framework. Models for the unconditional or conditional distribution function of any univariate response variable can be set up and estimated in the same theoretical and computational framework simply by choosing an appropriate transformation function and parameterization thereof. The ability to evaluate the distribution function directly allows us to estimate models based on the exact likelihood, especially in the presence of random censoring or truncation. For discrete and continuous responses, we establish the asymptotic normality of the proposed estimators. A reference software implementation of maximum likelihood-based estimation for conditional transformation models that allows the same flexibility as the theory developed here was employed to illustrate the wide range of possible applications.

Journal ArticleDOI
01 Jun 2018-Extremes
TL;DR: In this paper, an adaptive weighted least-squares procedure matching nonparametric estimates of the stable tail dependence function with the corresponding values of a parametrically specified proposal yields a novel minimum-distance estimator.
Abstract: Likelihood-based procedures are a common way to estimate tail dependence parameters. They are not applicable, however, in non-differentiable models such as those arising from recent max-linear structural equation models. Moreover, they can be hard to compute in higher dimensions. An adaptive weighted least-squares procedure matching nonparametric estimates of the stable tail dependence function with the corresponding values of a parametrically specified proposal yields a novel minimum-distance estimator. The estimator is easy to calculate and applies to a wide range of sampling schemes and tail dependence models. In large samples, it is asymptotically normal with an explicit and estimable covariance matrix. The minimum distance obtained forms the basis of a goodness-of-fit statistic whose asymptotic distribution is chi-square. Extensive Monte Carlo simulations confirm the excellent finite-sample performance of the estimator and demonstrate that it is a strong competitor to currently available methods. The estimator is then applied to disentangle sources of tail dependence in European stock markets.

Journal ArticleDOI
TL;DR: In this paper, a new specification for the heterogeneous autoregressive (HAR) model for the realized volatility of S&P 500 index returns is introduced, where the coefficients of the HAR are allowed to be time-varying with unspecified functional forms.
Abstract: This article introduces a new specification for the heterogeneous autoregressive (HAR) model for the realized volatility of S&P 500 index returns. In this modeling framework, the coefficients of the HAR are allowed to be time-varying with unspecified functional forms. The local linear method with the cross-validation (CV) bandwidth selection is applied to estimate the time-varying coefficient HAR (TVC-HAR) model, and a bootstrap method is used to construct the point-wise confidence bands for the coefficient functions. Furthermore, the asymptotic distribution of the proposed local linear estimators of the TVC-HAR model is established under some mild conditions. The results of the simulation study show that the local linear estimator with CV bandwidth selection has favorable finite sample properties. The outcomes of the conditional predictive ability test indicate that the proposed nonparametric TVC-HAR model outperforms the parametric HAR and its extension to HAR with jumps and/or GARCH in terms of multi-st...

Journal ArticleDOI
TL;DR: In this paper, a unified M-estimation method is proposed for estimating fixed-effects dynamic panel data (DPD) models containing three major types of spatial effects, namely spatial lag, spatial error and space-time lag.

Journal ArticleDOI
TL;DR: In this article, the authors introduce the general setting of a multivariate time series autoregressive model with stochastic time-varying coefficients and time-dependent conditional variance of the error process.
Abstract: In this article, we introduce the general setting of a multivariate time series autoregressive model with stochastic time-varying coefficients and time-varying conditional variance of the error process. This allows modelling VAR dynamics for non-stationary time series and estimation of time-varying parameter processes by the well-known rolling regression estimation techniques. We establish consistency, convergence rates, and asymptotic normality for kernel estimators of the paths of coefficient processes and provide pointwise valid standard errors. The method is applied to a popular seven-variable dataset to analyse evidence of time variation in empirical objects of interest for the DSGE (dynamic stochastic general equilibrium) literature.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a method for estimating edge parameters in a high-dimensional transelliptical model, which generalizes Gaussian and nonparanormal graphical models.
Abstract: Understanding complex relationships between random variables is of fundamental importance in high-dimensional statistics, with numerous applications in biological and social sciences. Undirected graphical models are often used to represent dependencies between random variables, where an edge between two random variables is drawn if they are conditionally dependent given all the other measured variables. A large body of literature exists on methods that estimate the structure of an undirected graphical model, however, little is known about the distributional properties of the estimators beyond the Gaussian setting. In this paper, we focus on inference for edge parameters in a high-dimensional transelliptical model, which generalizes Gaussian and nonparanormal graphical models. We propose ROCKET, a novel procedure for estimating parameters in the latent inverse covariance matrix. We establish asymptotic normality of ROCKET in an ultra high-dimensional setting under mild assumptions, without relying on oracle model selection results. ROCKET requires the same number of samples that are known to be necessary for obtaining a $\sqrt{n}$ consistent estimator of an element in the precision matrix under a Gaussian model. Hence, it is an optimal estimator under a much larger family of distributions. The result hinges on a tight control of the sparse spectral norm of the nonparametric Kendall’s tau estimator of the correlation matrix, which is of independent interest. Empirically, ROCKET outperforms the nonparanormal and Gaussian models in terms of achieving accurate inference on simulated data. We also compare the three methods on real data (daily stock returns), and find that the ROCKET estimator is the only method whose behavior across subsamples agrees with the distribution predicted by the theory.
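The rank-based ingredient such estimators build on can be shown in a few lines: the classical sine transform of Kendall's tau recovers the latent correlation in an elliptical copula. This is only the building block, not the full ROCKET procedure, and the example data are invented:

```python
import numpy as np
from scipy.stats import kendalltau

def latent_correlation(x, y):
    """Rank-based estimate of the latent correlation via the
    sine transform of Kendall's tau."""
    tau, _ = kendalltau(x, y)
    return np.sin(np.pi * tau / 2)

rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=2000)
x, y = np.exp(z[:, 0]), z[:, 1] ** 3   # monotone transforms preserve tau
print(latent_correlation(x, y))        # close to 0.6
```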

OtherDOI
George Levy
10 Dec 2018

Journal ArticleDOI
TL;DR: In this paper, the Akaike information criterion is used to construct a selection region and obtain the asymptotic distribution of estimators and linear combinations thereof conditional on the selected model.
Abstract: Summary Ignoring the model selection step in inference after selection is harmful. In this paper we study the asymptotic distribution of estimators after model selection using the Akaike information criterion. First, we consider the classical setting in which a true model exists and is included in the candidate set of models. We exploit the overselection property of this criterion in constructing a selection region, and we obtain the asymptotic distribution of estimators and linear combinations thereof conditional on the selected model. The limiting distribution depends on the set of competitive models and on the smallest overparameterized model. Second, we relax the assumption on the existence of a true model and obtain uniform asymptotic results. We use simulation to study the resulting post-selection distributions and to calculate confidence regions for the model parameters, and we also apply the method to a diabetes dataset.

Journal ArticleDOI
TL;DR: In this paper, results are presented on the asymptotic variance of estimators obtained using approximate Bayesian computation in a large-data limit, under the key assumption that the data are summarized by a fixed-dimensional summary statistic that obeys a central limit theorem.
Abstract: Many statistical applications involve models for which it is difficult to evaluate the likelihood, but from which it is relatively easy to sample. Approximate Bayesian computation is a likelihood-free method for implementing Bayesian inference in such cases. We present results on the asymptotic variance of estimators obtained using approximate Bayesian computation in a large-data limit. Our key assumption is that the data is summarized by a fixed-dimensional summary statistic that obeys a central limit theorem. We prove asymptotic normality of the mean of the approximate Bayesian computation posterior. This result also shows that, in terms of asymptotic variance, we should use a summary statistic that is the same dimension as the parameter vector, p; and that any summary statistic of higher dimension can be reduced, through a linear transformation, to dimension p in a way that can only reduce the asymptotic variance of the posterior mean. We look at how the Monte Carlo error of an importance sampling algorithm that samples from the approximate Bayesian computation posterior affects the accuracy of estimators. We give conditions on the importance sampling proposal distribution such that the variance of the estimator will be the same order as that of the maximum likelihood estimator based on the summary statistics used. This suggests an iterative importance sampling algorithm, which we evaluate empirically on a stochastic volatility model.

Journal ArticleDOI
TL;DR: A distribution on the unit sphere called the elliptically symmetric angular Gaussian distribution is defined, which has the additional advantages of being simple and fast to simulate from, and having a density and hence likelihood that is easy and very quick to compute exactly.
Abstract: We define a distribution on the unit sphere $\mathbb{S}^{d-1}$ called the elliptically symmetric angular Gaussian distribution. This distribution, which to our knowledge has not been studied before, is a subfamily of the angular Gaussian distribution closely analogous to the Kent subfamily of the general Fisher–Bingham distribution. Like the Kent distribution, it has ellipse-like contours, enabling modelling of rotational asymmetry about the mean direction, but it has the additional advantages of being simple and fast to simulate from, and having a density and hence likelihood that is easy and very quick to compute exactly. These advantages are especially beneficial for computationally intensive statistical methods, one example of which is a parametric bootstrap procedure for inference for the directional mean that we describe.
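Simulation really is a one-liner for this family: project multivariate normal draws onto the sphere. The parameter values below are illustrative, and the elliptically symmetric subfamily additionally constrains the covariance (conditions not enforced here):

```python
import numpy as np

def sample_angular_gaussian(mu, V, size, rng=np.random.default_rng(0)):
    """Draw from an angular Gaussian on the unit sphere by radially
    projecting N(mu, V) draws."""
    x = rng.multivariate_normal(mu, V, size=size)
    return x / np.linalg.norm(x, axis=1, keepdims=True)

mu = np.array([0.0, 0.0, 3.0])   # mean direction (up to scale)
V = np.diag([1.0, 0.5, 1.0])     # shapes the ellipse-like contours
y = sample_angular_gaussian(mu, V, size=1000)
```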

Posted Content
TL;DR: The orthogonal random forest is proposed: an algorithm that combines Neyman-orthogonality, to reduce sensitivity with respect to estimation error of nuisance parameters, with generalized random forests, a flexible non-parametric method for statistical estimation of conditional moment models using random forests.
Abstract: We propose the orthogonal random forest, an algorithm that combines Neyman-orthogonality to reduce sensitivity with respect to estimation error of nuisance parameters with generalized random forests (Athey et al., 2017)--a flexible non-parametric method for statistical estimation of conditional moment models using random forests. We provide a consistency rate and establish asymptotic normality for our estimator. We show that under mild assumptions on the consistency rate of the nuisance estimator, we can achieve the same error rate as an oracle with a priori knowledge of these nuisance parameters. We show that when the nuisance functions have a locally sparse parametrization, then a local $\ell_1$-penalized regression achieves the required rate. We apply our method to estimate heterogeneous treatment effects from observational data with discrete treatments or continuous treatments, and we show that, unlike prior work, our method provably allows one to control for a high-dimensional set of variables under standard sparsity conditions. We also provide a comprehensive empirical evaluation of our algorithm on both synthetic and real data.

Journal ArticleDOI
01 Mar 2018-Test
TL;DR: In this article, the authors consider the problem of testing for parameter change in bivariate Poisson integer-valued GARCH(1, 1) models, constructed via a trivariate reduction method of independent Poisson variables.
Abstract: In this paper, we consider the problem of testing for a parameter change in bivariate Poisson integer-valued GARCH(1, 1) models, constructed via a trivariate reduction method of independent Poisson variables. We verify that the conditional maximum-likelihood estimator of the model parameters is asymptotically normal. Then, based on these results, we construct CMLE- and residual-based CUSUM tests and derive that their limiting null distributions are a function of independent Brownian bridges. A simulation study and real data analysis are conducted for illustration.
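The trivariate reduction named here can be sketched in a few lines: a generic construction of correlated Poisson marginals from three independent Poisson variables, not the paper's INGARCH dynamics. Its covariance is the shared intensity, which is why correlation is nonnegative by construction:

```python
import numpy as np

def bivariate_poisson(lam1, lam2, lam0, size, rng=np.random.default_rng(0)):
    """Trivariate reduction: X1 = Z1 + Z0, X2 = Z2 + Z0 with independent
    Poisson Z's, so Cov(X1, X2) = lam0 >= 0."""
    z0 = rng.poisson(lam0, size)
    x1 = rng.poisson(lam1, size) + z0
    x2 = rng.poisson(lam2, size) + z0
    return x1, x2

x1, x2 = bivariate_poisson(2.0, 3.0, 1.0, size=10_000)
print(np.corrcoef(x1, x2)[0, 1])  # positive by construction
```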

Journal ArticleDOI
TL;DR: Zhang et al. proposed a first-order zero-drift GARCH (ZD-GARCH(1, 1)) model, which is stable, with its sample path oscillating randomly between zero and infinity over time.

Journal ArticleDOI
Yan Cui, Fukang Zhu
01 Jun 2018-Test
TL;DR: In this article, a new bivariate Poisson INGARCH model was proposed, which allows for positive or negative cross-correlation between two components, and the maximum likelihood method was used to estimate the unknown parameters, and consistency and asymptotic normality for estimators were given.
Abstract: Univariate integer-valued time series models, including integer-valued autoregressive (INAR) models and integer-valued generalized autoregressive conditional heteroscedastic (INGARCH) models, have been well studied in the literature, but there is little progress in multivariate models. Although some multivariate INAR models were proposed, they do not provide enough flexibility in modeling count data, such as volatility of numbers of stock transactions. Then, a bivariate Poisson INGARCH model was suggested by Liu (Some models for time series of counts, Dissertations, Columbia University, 2012), but it can only deal with positive cross-correlation between two components. To remedy this defect, we propose a new bivariate Poisson INGARCH model, which is more flexible and allows for positive or negative cross-correlation. Stationarity and ergodicity of the new process are established. The maximum likelihood method is used to estimate the unknown parameters, and consistency and asymptotic normality for estimators are given. A simulation study is given to evaluate the estimators for parameters of interest. Real and artificial data examples are illustrated to demonstrate good performances of the proposed model relative to the existing model.

Journal ArticleDOI
TL;DR: In this article, a generalized method of moments estimator for additive spatial autoregressive models (PLASARM) is proposed, where the nonparametric functions approximated by basis functions are obtained under mild conditions.
Abstract: In this paper, a class of partially linear additive spatial autoregressive models (PLASARM) is studied. With the nonparametric functions approximated by basis functions, we propose a generalized method of moments estimator for PLASARM. Under mild conditions, we obtain the asymptotic normality for the finite-dimensional parametric vector and the optimal convergence rate for the nonparametric functions. In order to make statistical inference for the parametric component, we propose an estimator of the asymptotic covariance matrix of the parameter estimator and establish the asymptotic properties of the resulting estimators. Finite sample performance of the proposed method is assessed by Monte Carlo simulation studies, and the developed methodology is illustrated by an analysis of the Boston housing price data.

Journal ArticleDOI
TL;DR: In this article, the asymptotic distributions of coordinates of regression M-estimates in the moderate p/n regime were investigated, where the number of covariates p grows proportionally with the sample size n.
Abstract: We investigate the asymptotic distributions of coordinates of regression M-estimates in the moderate p/n regime, where the number of covariates p grows proportionally with the sample size n. Under appropriate regularity conditions, we establish the coordinate-wise asymptotic normality of regression M-estimates assuming a fixed-design matrix. Our proof is based on the second-order Poincaré inequality and a leave-one-out analysis. Some relevant examples are indicated to show that our regularity conditions are satisfied by a broad class of design matrices. We also show a counterexample, namely an ANOVA-type design, to emphasize that the technical assumptions are not just artifacts of the proof. Finally, numerical experiments confirm and complement our theoretical results.

Journal ArticleDOI
TL;DR: The periodogram is introduced and, by using an auxiliary operator, it is proved that the limiting distributions of the finite Fourier transform and the periodogram are the multivariate complex normal and complex Wishart distributions, respectively.