SciSpace (formerly Typeset)

Showing papers on "Nonparametric statistics published in 2022"


Reference EntryDOI
TL;DR: This reference entry reviews the use of statistics in the social sciences over the past few decades, with a focus on statistical power analysis in the context of test scores.

102 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a comprehensive overview of the ensemble-modeling technique and a step-by-step tutorial on how to implement ensemble modeling in laparoscopic surgery.

59 citations


Proceedings ArticleDOI
01 Jun 2022
TL;DR: In this article, the authors propose a nonparametric alternative based on non-learnable prototypes, which can handle an arbitrary number of classes with a constant number of learnable parameters.
Abstract: Prevalent semantic segmentation solutions, despite their different network designs (FCN based or attention based) and mask decoding strategies (parametric softmax based or pixel-query based), can be placed in one category, by considering the softmax weights or query vectors as learnable class prototypes. In light of this prototype view, this study uncovers several limitations of such a parametric segmentation regime, and proposes a nonparametric alternative based on non-learnable prototypes. Instead of prior methods learning a single weight/query vector for each class in a fully parametric manner, our model represents each class as a set of non-learnable prototypes, relying solely on the mean features of several training pixels within that class. The dense prediction is thus achieved by nonparametric nearest prototype retrieving. This allows our model to directly shape the pixel embedding space, by optimizing the arrangement between embedded pixels and anchored prototypes. It is able to handle an arbitrary number of classes with a constant amount of learnable parameters. We empirically show that, with FCN based and attention based segmentation models (i.e., HRNet, Swin, SegFormer) and backbones (i.e., ResNet, HRNet, Swin, MiT), our nonparametric framework yields compelling results over several datasets (i.e., ADE20K, Cityscapes, COCO-Stuff), and performs well in the large-vocabulary situation. We expect this work will provoke a rethink of the current de facto semantic segmentation model design.
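
As a concrete illustration of the nearest-prototype scheme described in the abstract, the following minimal NumPy sketch builds one prototype per class as the mean feature vector and labels each embedding by its nearest prototype. It is a toy reduction of the idea, not the authors' implementation (which uses multiple prototypes per class, online updates, and deep segmentation backbones); the data and all names are invented for illustration.

```python
import numpy as np

def build_prototypes(features, labels, num_classes):
    """One non-learnable prototype per class: the mean feature vector.
    features: (N, D) pixel embeddings; labels: (N,) integer class ids."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def nearest_prototype_predict(features, prototypes):
    """Label each embedding with the class of its nearest prototype."""
    # Squared Euclidean distance from every embedding to every prototype.
    d2 = ((features[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

# Toy usage: 3 well-separated classes of 64-dimensional embeddings.
rng = np.random.default_rng(0)
feats = rng.normal(size=(600, 64)) + np.repeat(np.eye(3, 64) * 5.0, 200, axis=0)
labels = np.repeat(np.arange(3), 200)
protos = build_prototypes(feats, labels, num_classes=3)
print((nearest_prototype_predict(feats, protos) == labels).mean())  # ~1.0
```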

37 citations


Journal ArticleDOI
TL;DR: In this paper, the authors discuss the mathematical foundation of the importance sampling technique, present two general classes of methods for constructing the importance sampling density (or probability measure) for reliability analysis, and explore the performance of the two classes of methods through several benchmark numerical examples.
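
Generically, importance sampling estimates a small failure probability P(g(X) < 0) by sampling from a proposal density concentrated near the failure region and reweighting by the target-to-proposal density ratio. The sketch below is a toy reliability example with a Gaussian proposal shifted to an assumed design point; the limit-state function g and all parameters are invented for illustration and do not reproduce the paper's density-construction methods.

```python
import numpy as np
from scipy import stats

def g(x):
    """Hypothetical limit-state function: failure when g(x) < 0."""
    return 4.0 - x.sum(axis=1) / np.sqrt(x.shape[1])

rng = np.random.default_rng(1)
dim, n = 2, 50_000
x_star = np.full(dim, 4.0 / np.sqrt(dim))  # assumed design point on g = 0

# Proposal: standard normal shifted to the design point.
x = rng.normal(size=(n, dim)) + x_star
log_w = (stats.multivariate_normal(np.zeros(dim)).logpdf(x)   # target density
         - stats.multivariate_normal(x_star).logpdf(x))       # proposal density
estimate = np.mean((g(x) < 0) * np.exp(log_w))
print(estimate, 1 - stats.norm.cdf(4.0))  # IS estimate vs exact tail probability
```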

35 citations


Journal ArticleDOI
TL;DR: An anomaly detection framework using a causal network and feature-attention-based long short-term memory (CN-FA-LSTM) offers stronger interpretability than other commonly used prediction models, and the universal applicability of the method is verified.
Abstract: Most of the data-driven satellite telemetry data anomaly detection methods suffer from a high false positive rate (FPR) and poor interpretability. To solve the above problems, we propose an anomaly detection framework using a causal network and feature-attention-based long short-term memory (CN-FA-LSTM). In our method, a causal network of telemetry parameters is constructed by calculating normalized modified conditional transfer entropy (NMCTE) and optimized by conditional independence tests based on the conditional mutual information (CMI). Then, a CN-FA-LSTM is established to predict telemetry data, and a nonparametric dynamic $k$-sigma threshold updating method is proposed to set thresholds. A case study on a real satellite demonstrates that anomaly detection using the CN-FA-LSTM and nonparametric dynamic $k$-sigma threshold updating has an average F1-score of 0.9462 and an FPR of 0.0021, which are better than the baseline methods. Furthermore, the CN-FA-LSTM has stronger interpretability than other commonly used prediction models. A supplementary experiment on two public datasets verifies the universal applicability of our method.
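
The abstract does not spell out the exact threshold-updating rule, so the following is only a generic rolling-window stand-in for a dynamic k-sigma detector applied to prediction residuals; the window length, k, and injected anomaly are illustrative.

```python
import numpy as np

def k_sigma_flags(residuals, k=3.0, window=200):
    """Flag residual i when it leaves a band of k rolling standard deviations
    around the rolling mean of the preceding `window` residuals."""
    flags = np.zeros(len(residuals), dtype=bool)
    for i in range(window, len(residuals)):
        hist = residuals[i - window:i]
        flags[i] = abs(residuals[i] - hist.mean()) > k * hist.std()
    return flags

rng = np.random.default_rng(2)
r = rng.normal(0.0, 1.0, 1000)
r[700] += 8.0                              # inject one anomaly
print(np.flatnonzero(k_sigma_flags(r)))    # should include index 700
```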

30 citations


Journal ArticleDOI
TL;DR: An extension of neural networks to quantile regression is proposed for survival data with right censoring; the check function is adjusted by the inverse of the estimated censoring distribution, and the deep learning method is shown to be flexible enough to predict nonlinear patterns more accurately than existing quantile regression methods.
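
The adjustment described here is commonly written as an inverse-probability-of-censoring weighting of the quantile check loss. One standard formulation (a sketch, with $\Delta_i$ the event indicator and $\hat{G}$ an estimate of the censoring survival function, e.g., Kaplan–Meier) is:

```latex
% \rho_\tau(u) = u\,(\tau - \mathbf{1}\{u < 0\}) is the usual check function;
% f is the regression function (here a neural network) at quantile level \tau.
\hat{f} \;=\; \arg\min_{f} \sum_{i=1}^{n}
  \frac{\Delta_i}{\hat{G}(Y_i)}\, \rho_\tau\!\bigl(Y_i - f(X_i)\bigr)
```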

20 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used satellite-recorded nighttime lights in a measurement error model framework to estimate the relationship between nighttime light growth and GDP growth, as well as the nonparametric distribution of errors in both measures, and they obtained three key results: (i) the elasticity of nighttime lights to GDP is about 1.3; (ii) national accounts GDP growth measures are less precise for low and middle income countries, and nighttime lights can play a big role in improving such measures; and (iii) their new measure of GDP growth implies that China and India had considerably lower growth rates than official data suggested.

20 citations


Journal ArticleDOI
TL;DR: This paper derives upper bounds for the $L^2$ minimax risk in nonparametric estimation, along with asymptotic distributions for the constructed network and a related hypothesis testing procedure that is proven minimax optimal under suitable network architectures.

19 citations


Journal ArticleDOI
TL;DR: In this paper, the grain yield stability of nineteen barley genotypes was investigated across five different locations over two consecutive years (2018–2020), and the additive main effects and multiplicative interaction (AMMI) analysis showed that environment (E), genotype (G) and GE interaction effects were significant for grain yield.
Abstract: Background: Barley is one of the most important cereal crops, with considerable tolerance to various environmental stresses, which allows it to maintain its productivity in marginal croplands. The selection of stable and high-yielding barley genotypes and ideal discriminative locations is an important strategy for the development of new cultivars in tropical climates. Different statistical methods have been developed to dissect the genotype-by-environment interaction effect, investigate the stability of genotypes, and select discriminative environments. The main objective of the present study was to identify high-yielding and stable barley genotypes and testing environments located in the tropical regions of Iran using 23 parametric and nonparametric stability statistics. In the present study, the grain yield stability of nineteen barley genotypes was investigated across five different locations over two consecutive years (2018–2020). Results: The additive main effects and multiplicative interaction (AMMI) analysis showed that environment (E), genotype (G) and GE interaction effects were significant for grain yield. Using Spearman’s rank correlation analysis, a pattern map was developed for simultaneously assessing the relationships between grain yield and the stability statistics and for clustering them, which allowed the identification of two main groups based on their stability concepts. The biplot rendered using the weighted average of absolute scores (WAASB) and mean grain yield identified superior genotypes in terms of performance and stability. Among the test environments, Darab, Gonbad and Zabol showed high discriminating ability and made the largest contribution to creating the GEI. Hence, these regions are suggested as discriminative sites in Iran for the selection of high-yielding and stable barley genotypes. Conclusion: All stability statistics together identify G10 and G12 as the superior barley genotypes; these genotypes could be released as commercial cultivars in tropical regions of Iran.

19 citations


Journal ArticleDOI
TL;DR: In this paper, the Wilcoxon rank sum test (WRST) is used for operation state monitoring and automated fault detection in wind turbines; the method can monitor one or several key process parameters of the turbine simultaneously.
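
As a minimal example of the test itself (not the authors' monitoring pipeline), one can compare a healthy reference window of a process parameter against the current window and raise an alarm on a small p-value; the data and alarm threshold below are invented for illustration.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(3)
reference = rng.normal(10.0, 1.0, 300)  # healthy-state window of a parameter
current = rng.normal(10.6, 1.0, 300)    # current window with a small shift

stat, p = ranksums(reference, current)  # Wilcoxon rank sum test
print(f"WRST statistic = {stat:.2f}, p = {p:.4g}")
if p < 0.01:
    print("distribution shift detected -> raise a fault alarm")
```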

17 citations


Journal ArticleDOI
TL;DR: This article develops a nonparametric approach to estimating demand for differentiated products, applies it to California supermarket data, and shows that the nonparametric model predicts a much lower pass-through than a standard mixed logit model when considering a tax on one good.
Abstract: Demand estimates are essential for addressing a wide range of positive and normative questions in economics that are known to depend on the shape—and notably the curvature—of the true demand functions. The existing frontier approaches, while allowing flexible substitution patterns, typically require the researcher to commit to a parametric specification. An open question is whether these a priori restrictions are likely to significantly affect the results. To address this, I develop a nonparametric approach to estimation of demand for differentiated products, which I then apply to California supermarket data. While the approach subsumes workhorse models such as mixed logit, it allows consumer behaviors and preferences beyond standard discrete choice, including continuous choices, complementarities across goods, and consumer inattention. When considering a tax on one good, the nonparametric approach predicts a much lower pass‐through than a standard mixed logit model. However, when assessing the market power of a multiproduct firm relative to that of a single‐product firm, the models give similar results. I also illustrate how the nonparametric approach may be used to guide the choice among parametric specifications.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the factors that are associated with fatal and severe vehicle-pedestrian crashes in Great Britain by developing four parametric models and five non-parametric tools to predict the crash severity.
Abstract: The study aims to investigate the factors that are associated with fatal and severe vehicle–pedestrian crashes in Great Britain by developing four parametric models and five non-parametric tools to predict the crash severity. Even though the models have already been applied to model the pedestrian injury severity, a comparative analysis to assess the predictive power of such modeling techniques is limited. Hence, this study contributes to the road safety literature by comparing the models by their capabilities of identifying the significant explanatory variables, and by their performances in terms of the F-measure, the G-mean, and the area under curve. The analyses were carried out using data that refer to the vehicle–pedestrian crashes that occurred in the period of 2016–2018. The parametric models confirm their advantages in offering easy-to-interpret outputs and understandable relations between the dependent and independent variables, whereas the non-parametric tools exhibited higher classification accuracies, identified more explanatory variables, and provided insights into the interdependencies among the factors. The study results suggest that the combined use of parametric and non-parametric methods may effectively overcome the limits of each group of methods, with satisfactory prediction accuracies and the interpretation of the factors contributing to fatal and serious crashes. In the conclusion, several engineering, social, and management pedestrian safety countermeasures are recommended.

Journal ArticleDOI
TL;DR: In this article, a three-stage automated OMA (operational modal analysis) algorithm based on a combination of second-order blind identification (SOBI) and covariance-driven stochastic subspace identification (SSI-COV) is proposed, which takes full advantage of both parametric and nonparametric algorithms while overcoming their limitations.

Journal ArticleDOI
TL;DR: In this article, a semiparametric model averaging prediction (SMAP) method for a dichotomous response is proposed, which approximates the unknown score function by a linear combination of one-dimensional marginal score functions.

Journal ArticleDOI
TL;DR: It is found that papers with both objective and subjective measures do not hold the same reporting and analysis standards for both aspects of their evaluation, producing less rigorous work for the subjective qualities measured by Likert scales.
Abstract: Likert scales are often used in visualization evaluations to produce quantitative estimates of subjective attributes, such as ease of use or aesthetic appeal. However, the methods used to collect, analyze, and visualize data collected with Likert scales are inconsistent among evaluations in visualization papers. In this paper, we examine the use of Likert scales as a tool for measuring subjective response in a systematic review of 134 visualization evaluations published between 2009 and 2019. We find that papers with both objective and subjective measures do not hold the same reporting and analysis standards for both aspects of their evaluation, producing less rigorous work for the subjective qualities measured by Likert scales. Additionally, we demonstrate that many papers are inconsistent in their interpretations of Likert data as discrete or continuous and may even sacrifice statistical power by applying nonparametric tests unnecessarily. Finally, we identify instances where key details about Likert item construction with the potential to bias participant responses are omitted from evaluation methodology reporting, inhibiting the feasibility and reliability of future replication studies. We summarize recommendations from other fields for best practices with Likert data in visualization evaluations, based on the results of our survey. A full copy of this paper and all supplementary material are available at https://osf.io/exbz8/.
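
To make the power trade-off concrete, the toy sketch below runs a parametric t-test and a nonparametric Mann–Whitney U test side by side on simulated 5-point Likert responses; whether the parametric test is appropriate depends on treating the data as interval rather than ordinal. The response distributions are invented for illustration and are not the paper's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Simulated 5-point Likert responses for two interface conditions.
a = rng.choice([1, 2, 3, 4, 5], size=60, p=[0.05, 0.15, 0.30, 0.30, 0.20])
b = rng.choice([1, 2, 3, 4, 5], size=60, p=[0.10, 0.25, 0.30, 0.25, 0.10])

t_stat, t_p = stats.ttest_ind(a, b)                              # interval view
u_stat, u_p = stats.mannwhitneyu(a, b, alternative="two-sided")  # ordinal view
print(f"t-test p = {t_p:.4f}, Mann-Whitney p = {u_p:.4f}")
```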


Journal ArticleDOI
TL;DR: In this paper, the problem of nonparametric estimation of the expectile regression model for strong mixing functional time series data is investigated, and the almost complete consistency and asymptotic normality of the kernel-type estimator are established under some mild conditions.
Abstract: In this paper, the problem of the nonparametric estimation of the expectile regression model for strong mixing functional time series data is investigated. To be more precise, we establish the almost complete consistency and the asymptotic normality of the kernel-type expectile regression estimator under some mild conditions. The usefulness of our theoretical results in financial time series analysis is discussed. Further, we provide some practical algorithms to select the smoothing parameter or to construct the confidence intervals using bootstrap techniques. In addition, a simulation study is carried out to verify the small sample behaviour of the proposed approach. Finally, we give an empirical example using the daily returns of the S&P 500 stock index.
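
For intuition, a local-constant kernel expectile estimate can be computed by iteratively reweighted averaging, since the asymmetric least-squares first-order condition is a fixed-point equation. The sketch below assumes scalar covariates and a Gaussian kernel; it is a simplified stand-in, not the authors' estimator for functional data.

```python
import numpy as np

def kernel_expectile(x, y, x0, tau=0.5, h=0.5, iters=50):
    """Nadaraya-Watson-type expectile estimate at x0: iterate the weighted
    mean m = sum(a*y)/sum(a) with a_i = K_h(x_i - x0) * |tau - 1{y_i < m}|."""
    k = np.exp(-0.5 * ((x - x0) / h) ** 2)  # Gaussian kernel weights
    m = np.average(y, weights=k)            # start from the local mean
    for _ in range(iters):
        a = k * np.abs(tau - (y < m))       # asymmetric weights
        m = np.sum(a * y) / np.sum(a)
    return m

rng = np.random.default_rng(5)
x = rng.uniform(0.0, 4.0, 500)
y = np.sin(x) + rng.normal(0.0, 0.3, 500)
print(kernel_expectile(x, y, x0=1.0, tau=0.9))  # upper-tail expectile near x=1
```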

Journal ArticleDOI
TL;DR: An improved direct estimation approach is developed by including and optimizing continuous text features, along with a form of matching adapted from the causal inference literature; the approach substantially improves performance across a diverse collection of 73 datasets.
Abstract: Some scholars build models to classify documents into chosen categories. Others, especially social scientists who tend to focus on population characteristics, instead usually estimate the proportion of documents in each category—using either parametric “classify-and-count” methods or “direct” nonparametric estimation of proportions without individual classification. Unfortunately, classify-and-count methods can be highly model-dependent or generate more bias in the proportions even as the percent of documents correctly classified increases. Direct estimation avoids these problems, but can suffer when the meaning of language changes between training and test sets or is too similar across categories. We develop an improved direct estimation approach without these issues by including and optimizing continuous text features, along with a form of matching adapted from the causal inference literature. Our approach substantially improves performance in a diverse collection of 73 datasets. We also offer easy-to-use software that implements all ideas discussed herein.

Journal ArticleDOI
05 Apr 2022
TL;DR: In this paper, a machine learning approach for model-independent new physics searches is presented, powered by recent large-scale implementations of kernel methods, nonparametric learning algorithms that can approximate any continuous function given enough data.
Abstract: We present a machine learning approach for model-independent new physics searches. The corresponding algorithm is powered by recent large-scale implementations of kernel methods, nonparametric learning algorithms that can approximate any continuous function given enough data. Based on the original proposal by D’Agnolo and Wulzer (Phys Rev D 99(1):015014, 2019, arXiv:1806.02350 [hep-ph]), the model evaluates the compatibility between experimental data and a reference model, by implementing a hypothesis testing procedure based on the likelihood ratio. Model-independence is enforced by avoiding any prior assumption about the presence or shape of new physics components in the measurements. We show that our approach has dramatic advantages compared to neural network implementations in terms of training times and computational resources, while maintaining comparable performances. In particular, we conduct our tests on higher dimensional datasets, a step forward with respect to previous studies.

Journal ArticleDOI
TL;DR: A new hybrid method is proposed to construct BNs and estimate the corresponding parameters, considering the objectivity of field data and the accessibility of expert knowledge; it combines the ISM-K2 (interpretive structural model) algorithm, copula theory, and nonparametric methods.
Abstract: Bayesian networks (BNs) can be automatically constructed with field data when the data can sufficiently support the objectivity of the model. However, in most risk assessments, field data cannot effectively support learning with BNs. In this paper, a new hybrid method is proposed to construct BNs and estimate the corresponding parameters considering the objectivity of field data and the accessibility of expert knowledge. This method is combined with the ISM‐K2 (interpretive structural model) algorithm, copula theory, and the nonparametric method. First, the ISM is used to identify the relationships among the directly and indirectly related variables (i.e., obtain the parent variable set). Second, based on the parent variable set, the K2 algorithm is used to construct BNs with the search volume reduced from an exponential to a quadratic form. Third, copula theory is introduced to consider several marginal distributions of variables, and a copula parameter is used to replace the multivariate joint cumulative distribution. The Gumbel copula function is first introduced to replace the often‐used normal copula function. Fourth, four types of distributions are utilized to fit the characteristics of the variables as the marginal distribution by using a nonparametric method. Finally, the proposed method was used to construct BNs for water inrush and estimate the risk of water inrush for a tunnel.
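
For reference, the bivariate Gumbel copula mentioned above has the standard closed form C(u, v) = exp(-[(-ln u)^θ + (-ln v)^θ]^(1/θ)) with θ ≥ 1 (θ = 1 gives independence); the sketch below simply evaluates this textbook definition.

```python
import numpy as np

def gumbel_copula_cdf(u, v, theta=2.0):
    """Bivariate Gumbel copula; theta >= 1 controls upper-tail dependence."""
    s = (-np.log(u)) ** theta + (-np.log(v)) ** theta
    return np.exp(-s ** (1.0 / theta))

print(gumbel_copula_cdf(0.9, 0.8, theta=1.0))  # independence: 0.9 * 0.8 = 0.72
print(gumbel_copula_cdf(0.9, 0.8, theta=3.0))  # stronger dependence: ~0.79
```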

Journal ArticleDOI
TL;DR: In this article, a sharp uniform-in-bandwidth limit law is presented for the nonparametric estimation of a parameter defined as the zero of a certain estimating equation, indexed by a class of functions and depending on an infinite-dimensional covariate.

Journal ArticleDOI
TL;DR: In this article, the authors review some of the main Bayesian approaches that have been employed to define probability models in which the complete response distribution may vary flexibly with predictors, focusing on modifications of the Dirichlet process, historically termed dependent Dirichlet processes, and the extensions proposed to tackle this general problem.
Abstract: Standard regression approaches assume that some finite number of the response distribution characteristics, such as location and scale, change as a (parametric or nonparametric) function of predictors. However, it is not always appropriate to assume a location/scale representation, where the error distribution has unchanging shape over the predictor space. In fact, it often happens in applied research that the distribution of responses under study changes with predictors in ways that cannot be reasonably represented by a finite dimensional functional form. This can seriously affect the answers to the scientific questions of interest, and therefore more general approaches are indeed needed. This gives rise to the study of fully nonparametric regression models. We review some of the main Bayesian approaches that have been employed to define probability models where the complete response distribution may vary flexibly with predictors. We focus on developments based on modifications of the Dirichlet process, historically termed dependent Dirichlet processes, and some of the extensions that have been proposed to tackle this general problem using nonparametric approaches.
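
As background for the Dirichlet process machinery underlying these models, a draw from DP(α, G0) can be approximated by truncated stick-breaking. The sketch below shows the standard construction, not any specific dependent DP model from the review; G0 is taken to be a standard normal for illustration.

```python
import numpy as np

def stick_breaking_dp(alpha, base_draw, truncation=200, rng=None):
    """Truncated stick-breaking draw from DP(alpha, G0): weights
    w_k = v_k * prod_{j<k}(1 - v_j) with v_k ~ Beta(1, alpha), and atoms
    drawn i.i.d. from the base measure G0 via `base_draw(n, rng)`."""
    rng = rng or np.random.default_rng()
    v = rng.beta(1.0, alpha, truncation)
    w = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))
    return w, base_draw(truncation, rng)

# The random measure is sum_k w_k * delta_{atoms[k]}, with G0 = N(0, 1).
w, atoms = stick_breaking_dp(alpha=2.0,
                             base_draw=lambda n, r: r.normal(0.0, 1.0, n),
                             rng=np.random.default_rng(6))
print(w.sum())  # close to 1 for a large enough truncation
```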

Journal ArticleDOI
TL;DR: Wang et al. propose an online fast noisy input Gaussian process (online-FNIGP) to identify ship response models; it can incorporate new noisy measurements online and make fast predictions.

Journal ArticleDOI
TL;DR: In this paper, the authors reinterpret and test two key assumptions of the current Budyko framework both theoretically and empirically, and conclude that, while the shape of the Budyko curves generally captures the global behavior of multiple catchments, their specific functional forms are arbitrary and not reflective of the dynamic behavior of individual catchments.
Abstract: The Budyko framework posits that a catchment's long-term mean evapotranspiration (ET) is primarily governed by the availabilities of water and energy, represented by long-term mean precipitation (P) and potential evapotranspiration (PET), respectively. This assertion is supported by the distinctive clustering pattern that catchments take in Budyko space. Several semi-empirical, nonparametric curves have been shown to generally represent this clustering pattern but cannot explain deviations from the central tendency. Parametric Budyko equations attempt to generalize the nonparametric framework, through the introduction of a catchment-specific parameter (n or w). Prevailing interpretations of Budyko curves suggest that the explicit functional forms represent trajectories through Budyko space for individual catchments undergoing changes in the aridity index, PET/P, while the n and w values represent catchment biophysical features; however, neither of these interpretations arise from the derivation of the Budyko equations. In this study, we reexamine, reinterpret, and test these two key assumptions of the current Budyko framework both theoretically and empirically. In our theoretical test, we use a biophysical model for ET to demonstrate that n and w values can change without invoking changes in landscape biophysical features and that catchments are not required to follow Budyko curve trajectories. Our empirical test uses data from 728 reference catchments in the United Kingdom (UK) and United States (US) to illustrate that catchments rarely follow Budyko curve trajectories and that n and w are not transferable between catchments or across time for individual catchments. This nontransferability implies that n and w are proxy variables for ET/P, rendering the parametric Budyko equations underdetermined and lacking predictive ability. Finally, we show that the parametric Budyko equations are nonunique, suggesting their physical interpretations are unfounded. Overall, we conclude that, while the shape of Budyko curves generally captures the global behavior of multiple catchments, their specific functional forms are arbitrary and not reflective of the dynamic behavior of individual catchments.
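
For concreteness, the two parametric Budyko equations whose parameters n and w the paper tests are conventionally written in the Choudhury–Yang and Fu forms (quoted from the standard literature), with the aridity index as the driver:

```latex
% Choudhury-Yang form (parameter n) and Fu form (parameter \omega, often w),
% with \varphi = \overline{PET} / \overline{P} the aridity index:
\frac{\overline{ET}}{\overline{P}}
  = \frac{\varphi}{\bigl(1 + \varphi^{\,n}\bigr)^{1/n}},
\qquad
\frac{\overline{ET}}{\overline{P}}
  = 1 + \varphi - \bigl(1 + \varphi^{\,\omega}\bigr)^{1/\omega}.
```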

Journal ArticleDOI
TL;DR: In this paper, a doubly robust augmented calibration weighting estimator is proposed to estimate the treatment effect of adjuvant chemotherapy for early-stage non-small-cell lung cancer patients after surgery.
Abstract: Complementary features of randomized controlled trials (RCTs) and observational studies (OSs) can be used jointly to estimate the average treatment effect of a target population. We propose a calibration weighting estimator that enforces the covariate balance between the RCT and OS, therefore improving the trial-based estimator's generalizability. Exploiting semiparametric efficiency theory, we propose a doubly robust augmented calibration weighting estimator that achieves the efficiency bound derived under the identification assumptions. A nonparametric sieve method is provided as an alternative to the parametric approach, which enables the robust approximation of the nuisance functions and data-adaptive selection of outcome predictors for calibration. We establish asymptotic results and confirm the finite sample performances of the proposed estimators by simulation experiments and an application on the estimation of the treatment effect of adjuvant chemotherapy for early-stage non-small-cell lung cancer patients after surgery.
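
A generic sketch of the calibration-weighting step is shown below: exponential tilting of the trial sample so its weighted covariate means match target-population means, solved through a convex dual. It is not the authors' full doubly robust estimator, and the data and target means are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def calibration_weights(X, target_means):
    """Weights w_i >= 0 with sum w_i = 1 and sum w_i X_i = target_means,
    of exponential-tilting form w_i proportional to exp(X_i @ lam); the
    dual objective logsumexp(X @ lam) - lam @ target_means is convex."""
    def dual(lam):
        return logsumexp(X @ lam) - lam @ target_means
    lam = minimize(dual, np.zeros(X.shape[1]), method="BFGS").x
    logw = X @ lam
    return np.exp(logw - logsumexp(logw))

rng = np.random.default_rng(7)
X_trial = rng.normal(0.0, 1.0, (300, 2))  # RCT covariates
target = np.array([0.5, -0.2])            # target-population covariate means
w = calibration_weights(X_trial, target)
print(w @ X_trial)                        # matches the target means
```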

Journal ArticleDOI
TL;DR: In this paper , the authors consider the endemic-epidemic framework, a class of autoregressive models for infectious disease surveillance counts, and replace the default autoregression on counts from the previous time period with more flexible weighting schemes inspired by discrete-time serial interval distributions.

Journal ArticleDOI
TL;DR: The metrics suggest that investments in cryptocurrencies are not likely to offer key diversification strategies in times of crisis, on the basis of the evidence provided by the COVID-19 crisis.
Abstract: This paper features an analysis of cryptocurrencies and the impact of the COVID-19 pandemic on their effectiveness as a portfolio diversification tool and explores the correlations between the continuously compounded returns on Bitcoin, Ethereum and the S&P500 Index using a variety of parametric and non-parametric techniques. These methods include linear standard metrics such as the application of ordinary least squares regression (OLS) and the Pearson, Spearman and Kendall’s tau measures of association. In addition, non-linear, non-parametric measures such as the Generalised Measure of Correlation (GMC) and non-parametric copula estimates are applied. The results across this range of measures are consistent. The metrics suggest that, whilst the shock of the COVID-19 pandemic does not appear to have increased the correlations between the cryptocurrency series, it appears to have increased the correlations between the returns on cryptocurrencies and those on the S&P500 Index. This suggests that investments in cryptocurrencies are not likely to offer key diversification strategies in times of crisis, on the basis of evidence provided by this crisis.
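
The three classical association measures used in the paper are available directly in SciPy; the sketch below computes them on simulated stand-ins for the return series (the data are invented, not the paper's).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# Toy stand-ins for daily log returns of Bitcoin and the S&P500 Index.
btc = rng.normal(0.0, 0.04, 500)
spx = 0.3 * btc + rng.normal(0.0, 0.01, 500)

for name, fn in [("Pearson", stats.pearsonr),
                 ("Spearman", stats.spearmanr),
                 ("Kendall's tau", stats.kendalltau)]:
    r, p = fn(btc, spx)
    print(f"{name}: r = {r:.3f}, p = {p:.2g}")
```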

Journal ArticleDOI
TL;DR: In this paper , the authors consider four Hubble parameter priors reflecting the Hubble tension and make use of two phenomenological functions, namely, a normalized dark energy density and a compactified dark energy equation of state.

Journal ArticleDOI
TL;DR: In this article, the authors study multivariate ranks and quantiles, defined using the theory of optimal transport, and derive the uniform consistency of the empirical estimates to their population versions under certain assumptions.
Abstract: In this paper, we study multivariate ranks and quantiles, defined using the theory of optimal transport, and build on the work of Chernozhukov et al. (Ann. Statist. 45 (2017) 223–256) and Hallin et al. (Ann. Statist. 49 (2021) 1139–1165). We study the characterization, computation and properties of the multivariate rank and quantile functions and their empirical counterparts. We derive the uniform consistency of these empirical estimates to their population versions, under certain assumptions. In fact, we prove a Glivenko–Cantelli type theorem that shows the asymptotic stability of the empirical rank map in any direction. Under mild structural assumptions, we provide global and local rates of convergence of the empirical quantile and rank maps. We also provide a sub-Gaussian tail bound for the global $L^2$-loss of the empirical quantile function. Further, we propose tuning-parameter-free multivariate nonparametric tests—a two-sample test and a test for mutual independence—based on our notion of multivariate quantiles/ranks. Asymptotic consistency of these tests is shown and the rates of convergence of the associated test statistics are derived, both under the null and alternative hypotheses.
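
The empirical optimal-transport rank map can be computed by solving a linear assignment between the sample and a fixed grid of reference points, minimizing the total squared transport cost; the sketch below is a minimal version of this idea, with the reference grid and simulated data chosen for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def empirical_ot_ranks(data, reference):
    """Assign each data point to one reference point (e.g., a uniform grid
    on [0,1]^d) minimizing total squared cost; ranks[i] is the multivariate
    rank of data[i]. data, reference: (n, d) arrays of equal length."""
    rows, cols = linear_sum_assignment(cdist(data, reference, "sqeuclidean"))
    ranks = np.empty_like(reference)
    ranks[rows] = reference[cols]
    return ranks

rng = np.random.default_rng(9)
side = np.linspace(0.05, 0.95, 10)
grid = np.stack(np.meshgrid(side, side), axis=-1).reshape(-1, 2)
data = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], len(grid))
print(empirical_ot_ranks(data, grid)[:3])
```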

Journal ArticleDOI
TL;DR: In this article, a wind power interval prediction method based on a hybrid semi-cloud model and nonparametric kernel density estimation is proposed, targeting the asymmetric distribution of prediction errors.
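
A minimal sketch of the kernel-density step: fit a nonparametric density to historical forecast errors and read off asymmetric interval endpoints from its quantiles. The semi-cloud component is not reproduced here, and the skewed error distribution and forecast value are invented for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(10)
# Stand-in for historical wind power forecast errors (skewed on purpose).
errors = rng.gamma(2.0, 1.5, 2000) - 3.0

kde = gaussian_kde(errors)                   # nonparametric error density
samples = kde.resample(100_000)[0]           # draw from the fitted density
lo, hi = np.quantile(samples, [0.05, 0.95])  # asymmetric 90% error band

point_forecast = 50.0                        # hypothetical forecast, in MW
print(f"90% interval: [{point_forecast + lo:.1f}, {point_forecast + hi:.1f}] MW")
```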