
Showing papers in "Test in 2007"


Journal ArticleDOI
16 Mar 2007-Test
TL;DR: The proliferation of panel data studies is explained in terms of data availability, a greater capacity for modeling the complexity of human behavior than a single cross-section or time series allows, and challenging methodology.
Abstract: We explain the proliferation of panel data studies in terms of (i) data availability, (ii) the more heightened capacity for modeling the complexity of human behavior than a single cross-section or time series data can possibly allow, and (iii) challenging methodology. Advantages and issues of panel data modeling are also discussed.

691 citations


Journal ArticleDOI
28 Jun 2007-Test
TL;DR: In this article, the author provides an overview of various developments that have taken place in this direction and also suggests some potential problems of interest for further research.
Abstract: Properties of progressively censored order statistics and inferential procedures based on progressively censored samples have recently attracted considerable attention in the literature. In this paper, I provide an overview of various developments that have taken place in this direction and also suggest some potential problems of interest for further research.
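As a concrete illustration of the sampling scheme reviewed above, the following Python sketch simulates a progressively Type-II censored sample by the naive sequential mechanism (record the smallest surviving lifetime, then withdraw R_i of the remaining units at random). It is only an illustration of the scheme, not code from the paper; the lifetimes, the censoring scheme and the function name are made up for the example.

```python
import numpy as np

def progressive_type2_sample(lifetimes, removals, rng=None):
    """Simulate a progressively Type-II censored sample.

    lifetimes : array of n complete lifetimes placed on test
    removals  : scheme (R_1, ..., R_m); after the i-th observed failure,
                R_i of the surviving units are withdrawn at random.
    Returns the m observed (ordered) failure times.
    """
    rng = np.random.default_rng(rng)
    alive = np.sort(np.asarray(lifetimes, dtype=float))
    observed = []
    for r in removals:
        # the next failure is the smallest lifetime among the surviving units
        observed.append(alive[0])
        alive = alive[1:]
        # withdraw r surviving units at random (these become censored)
        drop = rng.choice(len(alive), size=r, replace=False)
        alive = np.delete(alive, drop)
    return np.array(observed)

# Example: n = 20 exponential lifetimes, m = 5 failures, scheme R = (3, 3, 3, 3, 3)
rng = np.random.default_rng(1)
x = progressive_type2_sample(rng.exponential(size=20), [3, 3, 3, 3, 3], rng=rng)
print(x)
```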

489 citations


Journal ArticleDOI
06 Nov 2007-Test
TL;DR: In this paper, the tradeoff between the flexibility of alternative models and the power of the statistical tests is emphasized, and a selective overview on nonparametric inferences using generalized likelihood ratio (GLR) statistics is given.
Abstract: The advance of technology facilitates the collection of statistical data. Flexible and refined statistical models are widely sought in a large array of statistical problems. The question arises frequently whether or not a family of parametric or nonparametric models fit adequately the given data. In this paper we give a selective overview on nonparametric inferences using generalized likelihood ratio (GLR) statistics. We introduce generalized likelihood ratio statistics to test various null hypotheses against nonparametric alternatives. The trade-off between the flexibility of alternative models and the power of the statistical tests is emphasized. Well-established Wilks’ phenomena are discussed for a variety of semi- and non-parametric models, which sheds light on other research using GLR tests. A number of open topics worthy of further study are given in a discussion section.
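For readers unfamiliar with the construction, the generic shape of a GLR statistic and of the Wilks-type phenomenon mentioned above can be sketched as follows; the notation (the log-likelihood ell_n, the constant r_K and the degrees of freedom a_n) is schematic and not taken from the paper.

```latex
% Schematic GLR statistic: the null H_0 restricts the function m to a smaller
% (parametric or semiparametric) class; under H_1 it is estimated
% nonparametrically, e.g. by a local polynomial fit \hat m_{H_1}.
\[
  \lambda_n(H_0) \;=\; \sup_{H_1} \ell_n(m) - \sup_{H_0} \ell_n(m)
               \;=\; \ell_n(\hat m_{H_1}) - \ell_n(\hat m_{H_0}).
\]
% Wilks-type phenomenon: for a constant r_K depending only on the kernel,
\[
  r_K\,\lambda_n(H_0) \;\stackrel{a}{\sim}\; \chi^2_{a_n},
\]
% where the degrees of freedom a_n grow with n and, under H_0, do not depend
% on nuisance parameters or nuisance functions.
```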

99 citations


Journal ArticleDOI
27 Feb 2007-Test
TL;DR: In this paper, the problem of testing the equality of regression curves with dependent data is studied, and several methods based on nonparametric estimators of the regression function are described.
Abstract: In this paper, the problem of testing the equality of regression curves with dependent data is studied. Several methods based on nonparametric estimators of the regression function are described. In this setting, the distribution of the test statistic is frequently unknown or difficult to compute, so an approximate test based on the asymptotic distribution of the statistic can be considered. Nevertheless, the asymptotic properties of the methods proposed in this work have been obtained under independence of the observations, and just one of these methods was studied in a context of dependence as reported by Vilar-Fernandez and Gonzalez-Manteiga (Statistics 58(2):81–99, 2003). In addition, the distribution of these test statistics converges to the limit distribution with convergence rates usually rather slow, so that the approximations obtained for reasonable sample sizes are not satisfactory. For these reasons, many authors have suggested the use of bootstrap algorithms as an alternative approach. Our main concern is to compare the behavior of three bootstrap procedures that take into account the dependence assumption of the observations when they are used to approximate the distribution of the test statistics considered. A broad simulation study is carried out to observe the finite sample performance of the analyzed bootstrap tests.
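To make the general recipe concrete, here is a minimal Python sketch of a residual-based moving-block bootstrap test for the equality of two regression curves; it is a generic illustration under an assumed common design, not one of the three specific bootstrap procedures compared in the paper, and the kernel, bandwidth and block length are arbitrary choices.

```python
import numpy as np

def nw(xg, x, y, h):
    """Nadaraya-Watson estimate at points xg (Gaussian kernel, bandwidth h)."""
    w = np.exp(-0.5 * ((xg[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def stat(x, y1, y2, grid, h):
    """Integrated squared difference between the two regression fits."""
    d = nw(grid, x, y1, h) - nw(grid, x, y2, h)
    return np.mean(d ** 2)

def block_resample(e, block, rng):
    """Moving-block resampling of a residual series (keeps serial dependence)."""
    n = len(e)
    starts = rng.integers(0, n - block + 1, size=int(np.ceil(n / block)))
    return np.concatenate([e[s:s + block] for s in starts])[:n]

def bootstrap_pvalue(x, y1, y2, h, B=500, block=10, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.linspace(x.min(), x.max(), 50)
    t_obs = stat(x, y1, y2, grid, h)
    m0 = nw(x, np.r_[x, x], np.r_[y1, y2], h)      # pooled fit under the null
    e1, e2 = y1 - m0, y2 - m0                      # residuals under the null
    t_boot = np.empty(B)
    for b in range(B):
        y1b = m0 + block_resample(e1, block, rng)
        y2b = m0 + block_resample(e2, block, rng)
        t_boot[b] = stat(x, y1b, y2b, grid, h)
    return np.mean(t_boot >= t_obs)                # approximate p-value
```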

33 citations


Journal ArticleDOI
13 Mar 2007-Test
TL;DR: In this paper, a unified approach to residuals, leverages and outliers in the linear mixed model is developed, and formal and informal procedures are proposed to display the general features of residuals and leverages in order to detect outliers and high-leverage points in linear mixed models.
Abstract: Although the linear mixed model can be viewed as a direct extension of multiple regression, it is not obvious how to generalize the standard diagnostic tools such as residual analysis and detection of leverage points and outliers, which are available in the linear regression situation. A unified approach to residuals, leverages and outliers in the linear mixed model is developed. Formal and informal procedures are proposed to display the general features of residuals and leverages in order to detect outliers and high-leverage points in linear mixed models. The relationship between the best linear unbiased predictor (BLUP) and residuals is established. Some properties of BLUPs are formulated and their use in detecting outlying observations is investigated.
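The BLUP and residual quantities referred to above rest on standard mixed-model algebra; the following numpy sketch computes them assuming the covariance matrices G and R are known (or already estimated), and it does not reproduce the authors' unified diagnostics.

```python
import numpy as np

def blup_and_residuals(y, X, Z, G, R):
    """BLUP of the random effects and the usual residuals in the model
    y = X b + Z u + e, with u ~ N(0, G), e ~ N(0, R), G and R given."""
    V = Z @ G @ Z.T + R                                      # marginal covariance of y
    Vinv = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)   # GLS estimate of b
    marginal_resid = y - X @ beta                            # y - X beta_hat
    u_blup = G @ Z.T @ Vinv @ marginal_resid                 # BLUP of the random effects
    conditional_resid = marginal_resid - Z @ u_blup          # y - X beta_hat - Z u_blup
    return beta, u_blup, marginal_resid, conditional_resid
```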

33 citations


Journal ArticleDOI
27 Feb 2007-Test
TL;DR: In this paper, a family of estimators for estimating mean, ratio and product of two means of a finite population is suggested and studied under the two different situations of random non-response considered by Tracy and Osahan (1994, Statistica 54(2):163–168), Singh and Joarder (1998, Metrika 47:241–249) and Singh et al. (2000, Statistica 60(1):39–44).
Abstract: In this paper a family of estimators for estimating mean, ratio and product of two means of a finite population is suggested and studied under the two different situations of random non-response considered by Tracy and Osahan (1994, Statistica 54(2):163–168), Singh and Joarder (1998, Metrika 47:241–249) and Singh et al. (2000, Statistica 60(1):39–44). Asymptotic expressions of biases and mean squared errors of the proposed families are derived. Optimum conditions are obtained under which the proposed families of estimators have the minimum mean squared error (MSE). Furthermore, when the optimum values, which depend upon population parameters, are replaced by sample values, the resulting estimators attain the minimum MSE of the optimum estimators. The estimators for MSEs of the suggested families are also given.

33 citations


Journal ArticleDOI
Manuel Arellano1
06 Mar 2007-Test
TL;DR: In this article, the authors evaluated the rate of early (EVR) and sustained virological response (SVR), tolerability and baseline predictive factors associated with EVR and SVR in patients with chronic hepatitis C treated with individualized weight-based dosing regimen for both PegIFN alpha-2b and ribavirin.
Abstract: BACKGROUND AND AIM Increasing evidence to date highlights that individualized treatment regimens with pegylated interferon (PegIFN) and ribavirin represent a better approach for patients showing negative predictive factors for sustained virological response. The aims of this study were to assess the rate of early (EVR) and sustained virological response (SVR), tolerability and baseline predictive factors associated with EVR and SVR in patients with chronic hepatitis C treated with an individualized weight-based dosing regimen for both PegIFN alpha-2b and ribavirin. METHODS The observational analysis included 234 consecutive patients with chronic hepatitis C genotype 1 treated with PegIFN alpha-2b and ribavirin on an out-patient basis between January 2003 and March 2006. RESULTS The mean age of the study group was 49.5 years, and 35% of the patients were male; the group was slightly overweight (mean BMI=26.5 kg/sq.m). EVR was achieved in 84.6% (198/234 patients). The end-of-treatment and sustained biochemical responses were 76.3% and 66.1%, respectively. At the end of follow-up, an overall intent-to-treat SVR was achieved by 71 of 127 patients (55.9%). Lower baseline (< 1,000,000 IU/mL) HCV viral load was the only predictive factor associated with EVR (p=0.04); absent or mild fibrosis (F0-1) and a low histological activity (HAI < 8) were independently associated with SVR. Side effects resulted in PegIFN and ribavirin dose reductions in 9.4% and 18.1% of patients, respectively, but definitive discontinuation of therapy was necessary only in 8.7% of patients. CONCLUSION PegIFN alpha-2b and ribavirin can be safe and successful when using a weight-based dosing regimen, leading to high response rates even in overweight patients.

30 citations


Journal ArticleDOI
06 Mar 2007-Test
TL;DR: In this article, the inclusion probability of an element of the population, that is, the probability that the element will be chosen in a sample, is shown to be design dependent, and it is argued that inclusion probabilities should be furnished with the design elements.
Abstract: Inclusion probabilities are design dependent and should be furnished with the design elements. Inclusion probability of an element in the population is the probability that the element will be chosen in a sample. In this paper the inclusion probabilities in the case of ranked set sampling design and some of its variations are furnished. This paper provides good and interesting examples of sampling designs for which the inclusion probabilities are not equal.

27 citations


Journal ArticleDOI
Robin C. Sickles1
24 Mar 2007-Test
TL;DR: In this article, the diagnostic value of FibroTest to discriminate between insignificant and significant fibrosis in order to avoid the liver biopsy currently used for selection of chronic hepatitis C patients eligible for antiviral therapy was assessed.
Abstract: AIM To assess the diagnostic value of FibroTest to discriminate between insignificant and significant fibrosis in order to avoid the liver biopsy currently used for selection of chronic hepatitis C patients eligible for antiviral therapy. PATIENTS AND METHODS A retrospective study was carried out in 206 chronic hepatitis C patients with liver biopsy performed before starting antiviral therapy and concomitant serum stored at -80 degrees C. Liver fibrosis was evaluated according to the METAVIR scoring system on a scale of F0 to F4. Biochemical markers assessed were: alpha 2 macroglobulin (alpha 2-MG), apolipoprotein A1 (Apo-A1), haptoglobin (Hapto), gamma-glutamyltransferase (GGT), total bilirubin (TB). The FibroTest score was computed after adjusting for age and gender. Predictive values and ROC curves were used to assess the accuracy of FibroTest results. RESULTS Alpha 2-MG, apo-A1, Hapto and gender were independent predictors for significant fibrosis. For FibroTest the observed area under ROC (ObAUROC) for the discrimination between minimal or no fibrosis (F0-F1) and significant fibrosis (F2-F4) was 0.782 (95% CI: 0.716-0.847) for a cutoff value of 0.47. The sensitivity (Se), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV) of the FibroTest to differentiate significant from insignificant fibrosis were 80.2%, 63.2%, 78.9% and 65.8%, respectively. The adjusted AUROC (AdAUROC) according to the prevalence of each individual stage of fibrosis was 0.856. CONCLUSION FibroTest could be an alternative to biopsy in most patients with chronic hepatitis C. It requires strict adherence to the technical recommendations for the assays of biochemical markers in order to avoid analytical variability.

25 citations


Journal ArticleDOI
27 Feb 2007-Test
TL;DR: In this paper, the authors report an unusual presentation and complication of caustic ingestion in a patient, who accidentally ingested sodium hydroxide and developed necrotizing esophagitis with progression to esophageal stenosis, which required surgical treatment.
Abstract: Caustic substances cause tissue destruction through liquefaction or coagulation reactions and the intensity of destruction depends on the type, concentration, time of contact and amount of the substance ingested. We report an unusual presentation and complication of caustic ingestion in a patient, who accidentally ingested sodium hydroxide. Our patient presented with respiratory failure soon after admission and developed necrotizing esophagitis with progression to esophageal stenosis, which required surgical treatment. The complications were related to the amount of caustic soda ingested.

22 citations


Journal ArticleDOI
27 Feb 2007-Test
TL;DR: In this paper, a semi-parametric approach to the problem of statistical choice of extreme domains of attraction is proposed, based on the concepts of regular variation theory, and the asymptotic properties of Hasofer and Wang's test statistic are investigated.
Abstract: This paper deals with the semi-parametric approach to the problem of statistical choice of extreme domains of attraction. Relying on concepts of regular variation theory, it investigates the asymptotic properties of Hasofer and Wang’s test statistic based on the k upper extremes taken from a sample of size n, when k behaves as an intermediate sequence k_n rather than remaining fixed while the sample size increases. In the process a Greenwood type test statistic is proposed which turns out to be useful in discriminating heavy-tailed distributions. The finite sample behavior of both testing procedures is evaluated in the light of a simulation study. The testing procedures are then applied to three real data sets.
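As an illustration of the kind of statistic involved, a generic Greenwood statistic built from the k normalized upper spacings can be coded as below; it is not necessarily the exact Greenwood-type statistic proposed in the paper, and the distributions and the choice of k in the example are arbitrary.

```python
import numpy as np

def greenwood_upper(x, k):
    """Greenwood statistic computed from the k normalized upper spacings.

    With X_(1) <= ... <= X_(n), take D_i = X_(n-i+1) - X_(n-i), i = 1..k,
    normalize as W_i = i * D_i, and return G = sum(W_i^2) / (sum(W_i))^2.
    Larger values indicate a heavier upper tail.
    """
    xs = np.sort(np.asarray(x, dtype=float))
    d = np.diff(xs[-(k + 1):])               # the spacings D_k, ..., D_1
    w = np.arange(k, 0, -1) * d              # normalized spacings i * D_i
    return np.sum(w ** 2) / np.sum(w) ** 2

# Example: heavy-tailed (Pareto) versus light-tailed (exponential) samples
rng = np.random.default_rng(0)
print(greenwood_upper(rng.pareto(1.0, size=2000), k=100))
print(greenwood_upper(rng.exponential(size=2000), k=100))
```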

Journal ArticleDOI
06 Mar 2007-Test
TL;DR: In this paper, the authors consider both the isotropic and the anisotropic settings of the Nadaraya–Watson semivariogram estimator, establish its asymptotic normality in terms of unknown characteristics of the random process, and obtain a theoretical procedure for constructing confidence intervals for the semivariogram via the normal quantiles, where those characteristics must in practice be appropriately estimated.
Abstract: In this work, the Nadaraya–Watson semivariogram estimation is considered for both the isotropic and the anisotropic settings. Several properties of these estimators are analyzed and, particularly, their asymptotic normality is established in terms of unknown characteristics of the random process. The latter provides a theoretical procedure for construction of confidence intervals for the semivariogram via the normal quantiles, which in practice must be appropriately estimated. A numerical study is included to illustrate the performance of the Nadaraya–Watson estimation when used to obtain confidence intervals.
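A minimal isotropic version of a Nadaraya-Watson-type semivariogram estimator, in the spirit of the estimators studied above, can be sketched in Python as follows; the Gaussian kernel, the bandwidth and the simulated field are illustrative choices, not taken from the paper.

```python
import numpy as np

def nw_semivariogram(coords, z, lags, bandwidth):
    """Isotropic Nadaraya-Watson-type semivariogram estimate:
    gamma_hat(h) = sum_ij K((h - d_ij)/b) (z_i - z_j)^2 / (2 sum_ij K((h - d_ij)/b)),
    where d_ij are pairwise distances and K is a Gaussian kernel."""
    coords = np.asarray(coords, dtype=float)
    z = np.asarray(z, dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))               # pairwise distances
    sq = (z[:, None] - z[None, :]) ** 2            # squared increments
    iu = np.triu_indices(len(z), k=1)              # keep each pair once
    d, sq = d[iu], sq[iu]
    out = np.empty(len(lags))
    for m, h in enumerate(lags):
        w = np.exp(-0.5 * ((h - d) / bandwidth) ** 2)   # kernel weights in lag space
        out[m] = np.sum(w * sq) / (2.0 * np.sum(w))
    return out

# Example: estimate the semivariogram of a simulated spatial data set
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
z = np.sin(coords[:, 0]) + 0.3 * rng.standard_normal(200)
print(nw_semivariogram(coords, z, lags=np.linspace(0.5, 5, 10), bandwidth=0.5))
```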

Journal ArticleDOI
10 Mar 2007-Test
TL;DR: A review of panel data methods and their application in economics can be found in this article, where the authors argue that the proliferation of panel applications in economics is due to the wider availability of the panel data in both developed and developing countries.
Abstract: This paper provides an excellent review of panel data methods and their application in economics. Professor Hsiao is the leading authority on panel data econometrics; he wrote the first textbook on the subject that appeared as an Econometric Society monograph in 1986 with the second edition appearing in 2003. As the author points out in the introduction, it is impossible to do justice to the vast and growing literature on panel data. In my discussion, I will try to complement his review with additional references that the reader might want to read. These papers are discussed in the panel data textbooks cited in the paper including Hsiao (2003), Arellano (2003) and Baltagi (2005). The paper starts by reviewing the advantages of panel data arguing that the proliferation of panel applications in economics is due to the wider availability of panel data in both developed and developing countries. Because of space limitations, the paper does not go into pseudo-panels. These are panels constructed from consumer surveys which may not involve the same individuals or households. They do so by focusing on cohorts; see Deaton (1985). Also, the paper does not have the space to discuss problems of attrition in panels which can be somewhat alleviated with refreshment samples or rotating panels; see Biorn (1981). Advantages of panel data over time series data or cross-section data are more degrees of freedom, less multicollinearity, and more variation in the data that results in

Journal ArticleDOI
28 Jun 2007-Test
TL;DR: The authors take a look at a more realistic and interesting problem that allows the random removal process to depend on the failure time, and discuss sequential procedures in which the value of R_i is not prefixed but is determined at the time of observing the i-th failure.
Abstract: We first congratulate Professor Balakrishnan for developing numerous elegant results in the area of progressive censoring methodology and putting together a comprehensive review of this topic. For Open Problem 14, Professor Balakrishnan suggested a look at a more realistic and interesting problem that allows the random removal process to be dependent on the failure time. We would like to further discuss this idea in a life-testing experiment. Sequential procedures, in which the value of R_i is not prefixed but determined at the time of observing the i-th failure (X_{i:m:n}), will be discussed.

Journal ArticleDOI
06 Mar 2007-Test
TL;DR: In this paper, it is shown that if N1 is obtained from N by homogeneous independent thinning and N2 = N − N1, then N1 and N2 are independent if and only if N is a Poisson point process; an application to testing whether a homogeneous point process is Poisson is also presented.
Abstract: Let N, N1 and N2 be point processes such that N1 is obtained from N by homogeneous independent thinning and N2 = N − N1. We give a new elementary proof that N1 and N2 are independent if and only if N is a Poisson point process. We also present an application of this result to test if a homogeneous point process is a Poisson point process.
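The thinning construction is easy to reproduce numerically. The sketch below thins the points of a homogeneous Poisson process on [0, T] with retention probability p and reports the empirical correlation between the two resulting counts, which should be near zero since the counts are in fact independent; it illustrates the setting only, not the paper's proof or its formal test.

```python
import numpy as np

def thin_poisson(rate, T, p, n_rep=20000, seed=0):
    """Simulate the number of points of N on [0, T], thin each point
    independently with retention probability p, and return the empirical
    correlation between the counts of N1 and N2 = N - N1."""
    rng = np.random.default_rng(seed)
    n1 = np.empty(n_rep)
    n2 = np.empty(n_rep)
    for r in range(n_rep):
        n = rng.poisson(rate * T)            # total number of points of N
        keep = rng.random(n) < p             # independent thinning
        n1[r], n2[r] = keep.sum(), n - keep.sum()
    return np.corrcoef(n1, n2)[0, 1]

# For a Poisson process the thinned counts are independent, so the
# correlation should be close to zero:
print(thin_poisson(rate=2.0, T=10.0, p=0.3))
```

Replacing the Poisson number of points by a deterministic one makes the two counts negatively correlated after the same thinning, which is why independence after thinning can serve as a check of the Poisson property.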

Journal ArticleDOI
27 Feb 2007-Test
TL;DR: In this article, the authors obtain a simple matrix formula for the bias of order n^{-2} of this estimate, where n is the sample size, and define a third-order bias-corrected estimate in generalized linear models which displays much smaller bias in small samples.
Abstract: Cordeiro and McCullagh (J Roy Stat Soc Ser B 53:629–643, 1991) derived a second-order bias-corrected estimate, which displays smaller bias than the classical maximum likelihood estimate in generalized linear models. This estimate, although consistent, can display pronounced bias in small to moderately large samples, as shown by Monte Carlo simulations here. In this paper, we obtain a simple matrix formula for the bias of order n^{-2} of this estimate, where n is the sample size, and define a third-order bias-corrected estimate in this class of models, which displays much smaller bias in small samples. In particular, some Monte Carlo simulations show that our new estimate can deliver substantial improvements in terms of bias and mean squared errors over the usual maximum likelihood estimate and Cordeiro and McCullagh’s estimate.
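Schematically, and in generic notation not taken from the paper, the chain of corrections is:

```latex
% Schematic bias expansion of the maximum likelihood estimate \hat\beta:
% B(\hat\beta) = b_1(\beta)/n + b_2(\beta)/n^2 + O(n^{-3}).
% Cordeiro-McCullagh second-order corrected estimate (remaining bias O(n^{-2})):
\[
  \tilde\beta \;=\; \hat\beta - \frac{\hat b_1(\hat\beta)}{n},
\]
% and a third-order corrected estimate of the kind defined in the paper also
% removes the estimated n^{-2} term (remaining bias O(n^{-3})):
\[
  \tilde{\tilde\beta} \;=\; \hat\beta - \frac{\hat b_1(\hat\beta)}{n}
                      - \frac{\hat b_2(\hat\beta)}{n^{2}}.
\]
```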


Journal ArticleDOI
27 Feb 2007-Test
TL;DR: In this paper, the problem of reporting a posterior distribution using a parametric family of distributions was considered in a nonparametric framework, and the posterior distribution was obtained as the solution to a decision problem via a well-known optimization algorithm.
Abstract: This paper considers the problem of reporting a "posterior distribution" using a parametric family of distributions while working in a nonparametric framework. This "posterior" is obtained as the solution to a decision problem and can be found via a well-known optimization algorithm.

Journal ArticleDOI
08 Mar 2007-Test
TL;DR: Glycogen storage disease type I is a rare condition, but with possible life-threatening consequences, and it has to be kept in mind whenever important hepatomegaly and/or hypoglycemia are present.
Abstract: Background and aims To describe the characteristics of patients with type I glycogenosis, the presentation types, the main clinical signs, the diagnostic criteria and also the disease outcomes on long term follow-up. Methods The study group consisted of 6 patients (median age 3 years 6 months) admitted to hospital between 2001 and 2005 and followed up for 1 to 5 years. The sex ratio was 1:1. Results The referral reasons varied from hepatomegaly incidentally discovered (3 of 6 patients) to abdominal pain (4 of 6 patients), growth failure (3 of 6 patients), symptoms of hypoglycemia (3 of 6 patients), recurrent epistaxis (1 patient). Hepatomegaly was present in all cases. Biological profile: hypoglycemia, increased transaminase values, hypertriglyceridemia, lactic acidosis, normal uric acid levels. Two patients had neutropenia and two others had an increased glomerular filtration rate. Liver biopsy showed glycogen-laden hepatocytes and markedly increased fat. Four patients had type Ia and 2 patients type Ib glycogenosis. The therapy consisted of diet, ursodeoxycholic acid, granulocyte colony-stimulating factor, and broad-spectrum antibiotics for those with type Ib glycogenosis. The follow-up parameters were clinical, biological and imaging. Metabolic interventions and anti-infectious therapy were necessary. All patients are alive, two of them on the waiting list for liver transplantation. Conclusions Glycogen storage disease type I is a rare condition, but with possible life-threatening consequences. It has to be kept in mind whenever important hepatomegaly and/or hypoglycemia are present.

Journal ArticleDOI
04 Apr 2007-Test
TL;DR: A criterion for sample size choice based on the predictive probability of observing decisive and correct evidence is proposed to select the minimal sample size that guarantees a sufficiently high pre-experimental probability that an alternative Bayes factor provides strong evidence in favor of the true hypothesis.
Abstract: Alternative Bayes factors are families of methods used for hypothesis testing and model selection when sensitivity to priors is a concern and also when prior information is weak or lacking. This paper deals with two related problems that arise in the practical use of these model choice criteria: sample size determination and evaluation of discriminatory power. We propose a pre-experimental approach to cope with both these issues. Specifically, extending the evidential approach of Royall (J Am Stat Assoc 95(451):760–780, 2000) and following De Santis (J Stat Plan Inference 124(1):121–144, 2004), we propose a criterion for sample size choice based on the predictive probability of observing decisive and correct evidence. The basic idea is to select the minimal sample size that guarantees a sufficiently high pre-experimental probability that an alternative Bayes factor provides strong evidence in favor of the true hypothesis. It is also argued that a predictive analysis is a natural approach to the measurement of discriminatory power of alternative Bayes factors. The necessity of measuring discrimination ability depends on the fact that alternative Bayes factors are, in general, less sensitive to prior specifications than ordinary Bayes factors and that this gain in robustness corresponds to a reduced discriminative power. Finally, implementation of the predictive approach with improper priors is discussed and possible strategies are proposed.
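A stripped-down version of the predictive criterion can be illustrated with an ordinary Bayes factor in a normal-mean problem (H0: theta = 0 versus H1: theta ~ N(0, tau^2), known sigma). The paper works with alternative Bayes factors (intrinsic, fractional and the like), so the sketch below conveys only the sample-size logic; the model, threshold and target are illustrative assumptions.

```python
import numpy as np

def prob_decisive_correct(n, tau=1.0, sigma=1.0, threshold=10.0,
                          n_sim=5000, seed=0):
    """Pre-experimental probability that BF10 exceeds the threshold when the
    data are generated under the prior predictive of H1."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(0.0, tau, size=n_sim)          # draw theta from the H1 prior
    xbar = rng.normal(theta, sigma / np.sqrt(n))      # sample means under H1
    s2 = sigma ** 2 / n
    # BF10 = marginal density of xbar under H1 / density under H0
    log_bf10 = (-0.5 * np.log(tau ** 2 + s2) - 0.5 * xbar ** 2 / (tau ** 2 + s2)
                + 0.5 * np.log(s2) + 0.5 * xbar ** 2 / s2)
    return np.mean(log_bf10 > np.log(threshold))

def minimal_sample_size(target=0.8, n_max=500, **kwargs):
    """Smallest n whose predictive probability of strong, correct evidence
    in favour of H1 reaches the target."""
    for n in range(2, n_max + 1):
        if prob_decisive_correct(n, **kwargs) >= target:
            return n
    return None

print(minimal_sample_size(target=0.8))
```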


Journal ArticleDOI
27 Feb 2007-Test
TL;DR: In this article, the theory and results of Wolfinger are extended to the balanced two-factor nested random effects model, and an example illustrates the flexibility and unique features of the Bayesian simulation method for the construction of tolerance intervals.
Abstract: Statistical intervals, properly calculated from sample data, are likely to be substantially more informative to decision makers than obtaining a point estimate alone and are often of paramount interest to practitioners and thus management (and are usually a great deal more meaningful than statistical significance or hypothesis tests). Wolfinger (1998, J Qual Technol 36:162–170) presented a simulation-based approach for determining Bayesian tolerance intervals in a balanced one-way random effects model. In this note the theory and results of Wolfinger are extended to the balanced two-factor nested random effects model. The example illustrates the flexibility and unique features of the Bayesian simulation method for the construction of tolerance intervals.

Journal ArticleDOI
27 Feb 2007-Test
TL;DR: In this article, the authors introduced the concept of optional randomized response (ORR) to accommodate direct responses offered by such respondents, and extended Kuk's method to varying probability stratified sampling when RR's as well as DR's are permitted.
Abstract: Warner’s pioneering randomized response (RR) device, as a method for reducing evasive answer bias while estimating the proportion of people in a community bearing a sensitive attribute, has been studied extensively over the last four decades. In many practical surveys it was observed that a character considered to be stigmatizing by a group of respondents did not appear to be such to another group, and direct responses (DR’s) to divulge its true characteristics were offered. Chaudhuri and Mukerjee (Calcutta Stat Assoc Bull 34:225–229, 1985; Randomized response: theory and techniques. Dekker, New York, 1988) introduced the concept of optional RR (ORR) to accommodate direct responses offered by such respondents. Since unequal probability stratified sampling is followed in large-scale socio-economic surveys, it is necessary to develop RR procedures for such complex surveys. Kuk’s method is extended to varying probability stratified sampling when RR’s as well as DR’s are permitted. A numerical study comparing the performance of alternative procedures is also reported.
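For readers new to randomized response, Warner's original estimator, which the optional and stratified extensions discussed above build on, is simple to state and simulate; the sketch below covers only the basic unstratified case, and the numerical settings are made up.

```python
import numpy as np

def warner_estimate(yes_responses, p):
    """Warner's randomized-response estimator of a sensitive proportion pi.

    Each respondent answers the sensitive question with probability p and its
    complement with probability 1 - p, so P(yes) = p*pi + (1 - p)*(1 - pi).
    """
    lam_hat = np.mean(yes_responses)
    pi_hat = (lam_hat - (1 - p)) / (2 * p - 1)            # requires p != 0.5
    var_hat = lam_hat * (1 - lam_hat) / (len(yes_responses) * (2 * p - 1) ** 2)
    return pi_hat, var_hat

# Simulated check: true pi = 0.2, design probability p = 0.7
rng = np.random.default_rng(0)
pi_true, p, n = 0.2, 0.7, 2000
has_attribute = rng.random(n) < pi_true
asks_direct = rng.random(n) < p
yes = np.where(asks_direct, has_attribute, ~has_attribute)
print(warner_estimate(yes, p))
```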

Journal ArticleDOI
16 Mar 2007-Test
TL;DR: In this paper, two new classes of nonparametric hazard estimators for censored data are proposed, based on a formula that expresses the hazard rate of interest as a product of the hazard rates of the observable lifetime and the conditional probability of uncensoring.
Abstract: Two new classes of nonparametric hazard estimators for censored data are proposed in this paper. One is based on a formula that expresses the hazard rate of interest as a product of the hazard rate of the observable lifetime and the conditional probability of uncensoring. The second class follows presmoothing ideas already used by Cao et al. (J Nonparametr Stat 17:31–56, 2005) for the cumulative hazard function. Asymptotic representations for some estimators in these classes are obtained and used to prove their limit distributions. Finally, a simulation study illustrates the comparative behavior of the estimators studied throughout the paper.
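The product representation behind the first class of estimators is, under independent censoring, lambda_T(t) = lambda_Z(t) * p(t), with Z = min(T, C) and p(t) = P(delta = 1 | Z = t). The Python sketch below plugs crude kernel estimates into this formula; the kernel and bandwidth are illustrative, and the presmoothed estimators of the second class are not reproduced.

```python
import numpy as np

def hazard_product_estimate(z, delta, t_grid, h):
    """Hazard estimate lambda_T(t) ~= lambda_Z(t) * p(t) for censored data.

    lambda_Z : kernel hazard estimate of the fully observed Z = min(T, C)
    p        : Nadaraya-Watson estimate of P(delta = 1 | Z = t)
    """
    z = np.asarray(z, float)
    delta = np.asarray(delta, float)
    n = len(z)
    k = np.exp(-0.5 * ((t_grid[:, None] - z[None, :]) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    f_z = k.mean(axis=1)                                   # kernel density of Z
    s_z = (z[None, :] > t_grid[:, None]).mean(axis=1)      # empirical survival of Z
    lam_z = f_z / np.maximum(s_z, 1.0 / n)                 # hazard of Z (guard the tail)
    p_t = (k @ delta) / k.sum(axis=1)                      # NW estimate of P(delta=1 | Z=t)
    return lam_z * p_t

# Example: exponential lifetimes with exponential censoring;
# the true hazard of T here is constant and equal to 1.
rng = np.random.default_rng(0)
t, c = rng.exponential(1.0, 500), rng.exponential(2.0, 500)
z, delta = np.minimum(t, c), (t <= c)
grid = np.linspace(0.1, 1.5, 8)
print(hazard_product_estimate(z, delta, grid, h=0.2).round(2))
```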

Journal ArticleDOI
27 Feb 2007-Test
TL;DR: This work considers multifractal functions defined as lacunar wavelet series observed in a white noise model, constructs estimators of the two parameters that characterize these random functions, and discusses statistical properties of this important model: the rate of the Fisher information and a testing procedure to check the multifractal feature of an observed noisy signal.
Abstract: Multifractal functions are widely used to model irregular signals such as turbulence, data streams or road traffic. Here, we consider multifractal functions defined as lacunar wavelet series observed in a white noise model. These random functions are statistically characterized by two parameters. The first parameter governs the intensity of the wavelet coefficients while the second one governs their sparsity. We construct estimators of these two parameters and discuss statistical properties of this important model: the rate of the Fisher information and a testing procedure to check the multifractal feature of an observed noisy signal.

Journal ArticleDOI
13 Mar 2007-Test
TL;DR: In this discussion, the author argues that estimating the parameters of the statistical model of the process and testing specific hypotheses about it are only half the problem of inference, the other half being to understand the process by which the observed data are generated.
Abstract: In most applications of statistical analysis in the sciences, the process by which the observed data are generated is transparent having usually been determined by the investigator by design. In contrast, in many applications in the social sciences, especially in economics, the mechanism by which the data are generated is opaque. In such circumstances, estimation of the parameters of the statistical model of the process and testing specific hypotheses about it are only half the problem of inference. My own view is that understanding the process by which the observations at hand are generated is of equal importance. Were the data, for example, obtained from a sample of firms selected by stratified random sampling from a census of all firms in the United States in 2000? Were they obtained from regulatory activity? In the case of time series, the data are almost always “fabricated,” in one way or another, by aggregation, interpolation, or extrapolation, or by all three. The nature of the sampling frame or the way in which the data are fabricated must be part of the model specification on which parametric inference or hypothesis testing is based. In his exemplary survey of panel data analysis, Cheng Hsiao focuses primarily on problems of estimation and inference from a parametrically well-specified model of how the observed data were generated. In my commentary, I would like briefly to address some of the issues associated with the other half of the problem. Since such a discussion is data specific, it is possible only to deal with the issues in the context of a specific, although possibly abstract, example. Suppose a longitudinal household survey in which the same households are questioned over time about their actions in, say, a number of consecutive months or years and, initially, about various

Journal ArticleDOI
27 Feb 2007-Test
TL;DR: Two case-deletion diagnostics which evaluate the effect of the omission on the linear functions which determine Fisher’s Linear Discriminant Rule are proposed in this paper.
Abstract: Various influence diagnostics in Multiple Discriminant Analysis can be found in the literature. Almost all of them are based on the overall probability of misclassification. Two case-deletion diagnostics which evaluate the effect of the omission on the linear functions which determine Fisher’s Linear Discriminant Rule are proposed in this paper. Both measures are based on the L2-norm: the first diagnostic is calculated from the data set and the second diagnostic is calculated in the minimum hypercube which covers it.
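A bare-bones case-deletion diagnostic in this spirit, the L2 distance between the Fisher discriminant direction computed with and without observation i, can be coded as follows; it follows the general idea only and is not either of the two specific measures proposed in the paper.

```python
import numpy as np

def fisher_direction(x, y):
    """Fisher's linear discriminant direction for two classes labelled 0/1."""
    x0, x1 = x[y == 0], x[y == 1]
    m0, m1 = x0.mean(axis=0), x1.mean(axis=0)
    # pooled within-class scatter matrix
    sw = (x0 - m0).T @ (x0 - m0) + (x1 - m1).T @ (x1 - m1)
    return np.linalg.solve(sw, m1 - m0)

def case_deletion_influence(x, y):
    """L2 norm of the change in the discriminant direction when case i is removed."""
    w_full = fisher_direction(x, y)
    out = np.empty(len(y))
    for i in range(len(y)):
        mask = np.ones(len(y), dtype=bool)
        mask[i] = False
        out[i] = np.linalg.norm(w_full - fisher_direction(x[mask], y[mask]))
    return out

# Example: two Gaussian classes plus one gross outlier in class 1
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2)), [[12.0, -10.0]]])
y = np.r_[np.zeros(50), np.ones(51)]
infl = case_deletion_influence(x, y)
print(infl.argmax(), infl.round(3)[-3:])   # the outlier should stand out
```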

Journal ArticleDOI
27 Feb 2007-Test
TL;DR: Adopting the maxiset approach, it is shown that a natural hard thresholding procedure attains the minimax rate of convergence within a logarithmic factor over two types of Besov balls.
Abstract: We consider the problem of estimating an unknown function f in a Gaussian noise setting under the global L^p risk. The particularity of the model considered is that it utilizes a secondary function v which complicates the estimate significantly. While varying the assumptions on this function, we investigate the minimax rate of convergence over two types of Besov balls. One is defined as usual and the other belongs to the family of weighted spaces. Adopting the maxiset approach, we show that a natural hard thresholding procedure attains the minimax rate of convergence within a logarithmic factor over such weighted Besov balls.
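For concreteness, hard thresholding in the plain Gaussian sequence model (without the complicating secondary function v of the paper) looks as follows; the universal threshold used here is the textbook choice and is not claimed to match the procedure or rates in the paper.

```python
import numpy as np

def hard_threshold(y, sigma):
    """Hard thresholding of noisy coefficients y_i = theta_i + sigma * z_i
    with the universal threshold lambda = sigma * sqrt(2 log n)."""
    lam = sigma * np.sqrt(2.0 * np.log(len(y)))
    return np.where(np.abs(y) > lam, y, 0.0)

# Example: a sparse coefficient vector observed in Gaussian noise
rng = np.random.default_rng(0)
n, sigma = 1024, 0.5
theta = np.zeros(n)
theta[:10] = 5.0                           # a few large coefficients
y = theta + sigma * rng.standard_normal(n)
theta_hat = hard_threshold(y, sigma)
print(np.mean((theta_hat - theta) ** 2))   # small risk relative to no thresholding
```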


Journal ArticleDOI
06 Nov 2007-Test
TL;DR: In this discussion, the author comments on Fan and Jiang's generalized likelihood ratio (GLR) tests for nonparametric inference, their application to a large variety of function estimation problems, and the associated Wilks phenomenon.
Abstract: I would like to warmly congratulate Professors Fan and Jiang for their stimulating, lucid, and insightful account of the promising concept of generalized likelihood ratio tests, which they nicely demonstrate in a large variety of function estimation contexts. This seminal concept helps to fill the void that still exists regarding generally applicable and well-reasoned tools for inference in function space. It is of great importance to fill this void as in the absence of generally accepted and appealing tools for nonparametric inference, many practitioners will simply stay away from these methods, and therefore their great potential will not be fully realized. In conjunction with the Wilks phenomenon, the development of the generalized likelihood ratio tests has gone a long way towards a general and versatile theory of testing in function spaces. This will be particularly useful in those cases as considered in the examples where one cannot make use of semiparametric efficient approaches. Fan and Jiang cover an amazingly large array of important inference problems for which they demonstrate that the GLR test works. In the following, I mention some of the thoughts that this very interesting paper generated—none of them may be new or compelling. Recently, the likelihood ratio approach to testing has been revisited by various authors and alternative tests with better finite sample properties under certain complex alternatives have found renewed interest (Lehmann 2006, and the references cited therein). In these alternative tests, ratios of averages are considered rather than the ratio of maxima. Such alternatives to likelihood ratio tests may prove useful in functional settings. One suggestion that flows from this is to enter smoothing parameters simultaneously over a range of values into the test statistic, rather than re-