
Showing papers in "Scandinavian Journal of Statistics in 2005"


Journal ArticleDOI
TL;DR: In this paper, the authors provide an introductory overview of a portion of distribution theory which is currently under intense development and illustrate connections with various areas of application, including selective sampling, models for compositional data, robust methods, some problems in econometrics, non-linear time series, especially in connection with financial data, and more.
Abstract: . This paper provides an introductory overview of a portion of distribution theory which is currently under intense development. The starting point of this topic has been the so-called skew-normal distribution, but the connected area is becoming increasingly broad, and its branches now include many extensions, such as the skew-elliptical families, and some forms of semi-parametric formulations, extending the relevance of the field much beyond the original theme of ‘skewness’. The final part of the paper illustrates connections with various areas of application, including selective sampling, models for compositional data, robust methods, some problems in econometrics, non-linear time series, especially in connection with financial data, and more.

657 citations
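
As a concrete point of reference for the family surveyed above, here is a minimal sketch (not from the paper) of the univariate skew-normal density 2φ(x)Φ(αx) and of sampling from it via the standard stochastic representation; parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def skew_normal_pdf(x, alpha=0.0, loc=0.0, scale=1.0):
    """Univariate skew-normal density (2/scale) * phi(z) * Phi(alpha*z), z = (x - loc)/scale."""
    z = (x - loc) / scale
    return 2.0 / scale * norm.pdf(z) * norm.cdf(alpha * z)

def skew_normal_rvs(alpha, size, rng=None):
    """Sample via Z = delta*|U0| + sqrt(1 - delta^2)*U1 with delta = alpha/sqrt(1 + alpha^2)."""
    rng = np.random.default_rng() if rng is None else rng
    delta = alpha / np.sqrt(1.0 + alpha ** 2)
    u0, u1 = rng.standard_normal(size), rng.standard_normal(size)
    return delta * np.abs(u0) + np.sqrt(1.0 - delta ** 2) * u1

# alpha = 0 recovers the standard normal; alpha > 0 skews to the right.
x = np.linspace(-4, 4, 9)
print(skew_normal_pdf(x, alpha=3.0))
```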


Journal ArticleDOI
TL;DR: The results suggest that SCV for full bandwidth matrices is the most reliable of the CV methods, and observe that experience from the univariate setting can sometimes be a misleading guide for understanding bandwidth selection in the multivariate case.
Abstract: The performance of multivariate kernel density estimates depends crucially on the choice of bandwidth matrix, but progress towards developing good bandwidth matrix selectors has been relatively slow. In particular, previous studies of cross-validation (CV) methods have been restricted to biased and unbiased CV selection of diagonal bandwidth matrices. However, for certain types of target density the use of full (i.e. unconstrained) bandwidth matrices offers the potential for significantly improved density estimation. In this paper, we generalize earlier work from diagonal to full bandwidth matrices, and develop a smooth cross-validation (SCV) methodology for multivariate data. We consider optimization of the SCV technique with respect to a pilot bandwidth matrix. All the CV methods are studied using asymptotic analysis, simulation experiments and real data analysis. The results suggest that SCV for full bandwidth matrices is the most reliable of the CV methods. We also observe that experience from the univariate setting can sometimes be a misleading guide for understanding bandwidth selection in the multivariate case.

305 citations
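
A minimal sketch of the object being tuned in the paper above: a multivariate kernel density estimate with a full (unconstrained) bandwidth matrix H and a Gaussian kernel. The bandwidth matrix here is fixed by hand rather than chosen by the cross-validation criteria the paper studies; data and values are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

def kde_full_bandwidth(x, data, H):
    """f_hat(x) = (1/n) * sum_i K_H(x - X_i), with K_H the N(0, H) density."""
    kernel = multivariate_normal(mean=np.zeros(data.shape[1]), cov=H)
    return np.mean(kernel.pdf(x - data))

rng = np.random.default_rng(1)
data = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=200)
H = np.array([[0.20, 0.12],   # full bandwidth matrix: the off-diagonal entry lets the
              [0.12, 0.25]])  # kernel orient itself along the data's correlation
print(kde_full_bandwidth(np.array([0.0, 0.0]), data, H))
```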


Journal ArticleDOI
TL;DR: In this article, an extension of the so-called generalized functional linear model to the case of sparse longitudinal predictors is proposed, which is illustrated with data on primary biliary cirrhosis.
Abstract: . We review and extend some statistical tools that have proved useful for analysing functional data. Functional data analysis is primarily designed for the analysis of random trajectories and infinite-dimensional data, and there exists a need for the development of adequate statistical estimation and inference techniques. While this field is in flux, some methods have proven useful. These include warping methods, functional principal component analysis, and conditioning under Gaussian assumptions for the case of sparse data. The latter is a recent development that may provide a bridge between functional and more classical longitudinal data analysis. Besides presenting a brief review of functional principal components and functional regression, we develop some concepts for estimating functional principal component scores in the sparse situation. An extension of the so-called generalized functional linear model to the case of sparse longitudinal predictors is proposed. This extension includes functional binary regression models for longitudinal data and is illustrated with data on primary biliary cirrhosis.

164 citations
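
The sparse-data machinery described above is involved, but the dense-grid functional principal component step it builds on is easy to sketch: estimate the covariance surface on a common grid and take its leading eigenfunctions. This is a simplified illustration, not the paper's conditioning approach; grid and data are illustrative.

```python
import numpy as np

def fpca_dense(curves, n_components=2):
    """Functional PCA for curves observed on a common dense grid.

    curves: (n_curves, n_grid) array. Returns the mean function plus the leading
    eigenvalues and eigenvectors of the discretized sample covariance operator."""
    mean = curves.mean(axis=0)
    centred = curves - mean
    cov = centred.T @ centred / (curves.shape[0] - 1)   # covariance surface on the grid
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1][:n_components]
    return mean, eigval[order], eigvec[:, order]

# Toy example: curves driven by two random scores times fixed basis functions.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 50)
scores = rng.standard_normal((100, 2)) * np.array([2.0, 0.5])
curves = scores[:, :1] * np.sin(2 * np.pi * t) + scores[:, 1:] * np.cos(2 * np.pi * t)
mean, lam, phi = fpca_dense(curves)
print(lam)  # leading eigenvalues reflect the score variances (up to grid scaling)
```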


Journal ArticleDOI
TL;DR: In this article, the authors investigated the possible use of different notions of data depth in non-parametric discriminant analysis and proposed a different depth-based classification technique for unequal prior problems, which is also useful for equal prior cases, especially when the populations have different scatters and shapes.
Abstract: Over the last couple of decades, data depth has emerged as a powerful exploratory and inferential tool for multivariate data analysis with widespread applications. This paper investigates the possible use of different notions of data depth in non-parametric discriminant analysis. First, we consider the situation where the prior probabilities of the competing populations are all equal and investigate classifiers that assign an observation to the population with respect to which it has the maximum location depth. We propose a different depth-based classification technique for unequal prior problems, which is also useful for equal prior cases, especially when the populations have different scatters and shapes. We use some simulated data sets as well as some benchmark real examples to evaluate the performance of these depth-based classifiers. Large sample behaviour of the misclassification rates of these depth-based non-parametric classifiers has been derived under appropriate regularity conditions.

126 citations
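
A minimal sketch of the maximum-depth rule described above for the equal-prior case, using Mahalanobis depth as a simple stand-in for the location depth notions studied in the paper; data are illustrative.

```python
import numpy as np

def mahalanobis_depth(x, sample):
    """Mahalanobis depth: 1 / (1 + squared Mahalanobis distance to the sample mean)."""
    mean = sample.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(sample, rowvar=False))
    d2 = (x - mean) @ cov_inv @ (x - mean)
    return 1.0 / (1.0 + d2)

def max_depth_classify(x, samples):
    """Assign x to the population with respect to which it has maximum depth (equal priors)."""
    depths = [mahalanobis_depth(x, s) for s in samples]
    return int(np.argmax(depths))

rng = np.random.default_rng(3)
pop0 = rng.multivariate_normal([0, 0], np.eye(2), 200)
pop1 = rng.multivariate_normal([2, 2], np.eye(2), 200)
print(max_depth_classify(np.array([1.8, 1.9]), [pop0, pop1]))  # expected: 1
```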


Journal ArticleDOI
TL;DR: In this paper, the authors give expressions for (absolute) moments of generalized hyperbolic (GH) and normal inverse Gaussian (NIG) laws in terms of moments of the corresponding symmetric laws, and revisit the apparent scaling behaviour of NIG Lévy processes previously discussed by Barndorff-Nielsen and Prause (2001).
Abstract: Expressions for (absolute) moments of generalized hyperbolic (GH) and normal inverse Gaussian (NIG) laws are given in terms of moments of the corresponding symmetric laws. For the (absolute) moments centered at the location parameter mu, explicit expressions as series containing Bessel functions are provided. Furthermore, the derivatives of the logarithms of (absolute) mu-centered moments with respect to the logarithm of time are calculated explicitly for NIG Lévy processes. Computer implementation of the formulae obtained is briefly discussed. Finally, some further insight into the apparent scaling behaviour of NIG Lévy processes (previously discussed in Barndorff-Nielsen and Prause (2001)) is gained.

112 citations
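
The paper's closed-form expressions involve Bessel-function series; as a numerical cross-check, mu-centred absolute moments of a NIG law can also be obtained by direct quadrature against the density. The sketch below assumes scipy's norminvgauss parametrization (a = alpha*delta, b = beta*delta, loc = mu, scale = delta, per its documentation); parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norminvgauss
from scipy.integrate import quad

def nig_abs_moment(p, a, b, mu=0.0, delta=1.0):
    """E|X - mu|^p for X ~ NIG, by quadrature against scipy's norminvgauss density."""
    dist = norminvgauss(a, b, loc=mu, scale=delta)
    integrand = lambda x: np.abs(x - mu) ** p * dist.pdf(x)
    val, _ = quad(integrand, -np.inf, np.inf)
    return val

# Symmetric case (b = 0): odd centred moments vanish, so the first absolute moment
# is a genuine check of the quadrature rather than trivially zero.
print(nig_abs_moment(1, a=2.0, b=0.0))
print(nig_abs_moment(2, a=2.0, b=0.5))
```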


Journal ArticleDOI
TL;DR: In this article, the authors consider large sample inference in a semiparametric logistic/proportional-hazards mixture model and derive consistent variance estimates for both the parametric and non-parametric components.
Abstract: . We consider large sample inference in a semiparametric logistic/proportional-hazards mixture model. This model has been proposed to model survival data where there exists a positive proportion of subjects in the population who are not susceptible to the event under consideration. Previous studies of the logistic/proportional-hazards mixture model have focused on developing point estimation procedures for the unknown parameters. This paper studies large sample inferences based on the semiparametric maximum likelihood estimator. Specifically, we establish existence, consistency and asymptotic normality results for the semiparametric maximum likelihood estimator. We also derive consistent variance estimates for both the parametric and non-parametric components. The results provide a theoretical foundation for making large sample inference under the logistic/proportional-hazards mixture model.

84 citations
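
For orientation, a minimal sketch of the model structure referred to above: a logistic incidence part gives the probability of being susceptible, and a proportional-hazards latency part governs survival among the susceptible. The baseline survival and parameter values below are illustrative; the paper leaves the baseline non-parametric and studies the semiparametric MLE.

```python
import numpy as np

def population_survival(t, z, x, gamma, beta, baseline_survival):
    """S_pop(t | z, x) = 1 - pi(z) + pi(z) * S0(t)^exp(beta'x)  (logistic/PH mixture cure model).

    pi(z) = expit(gamma'z) is the probability of being susceptible to the event."""
    pi = 1.0 / (1.0 + np.exp(-np.dot(gamma, z)))
    latency = baseline_survival(t) ** np.exp(np.dot(beta, x))
    return 1.0 - pi + pi * latency

# Illustrative exponential baseline; the semiparametric model leaves S0 unspecified.
S0 = lambda t: np.exp(-0.1 * t)
print(population_survival(5.0, z=np.array([1.0, 0.5]), x=np.array([1.0]),
                          gamma=np.array([0.2, -0.4]), beta=np.array([0.3]),
                          baseline_survival=S0))
```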


Journal ArticleDOI
TL;DR: In this paper, the authors proposed covariate adjusted correlation (Cadcor) analysis, which targets the correlation between two hidden variables that are observed after being multiplied by an unknown function of a common observable confounding variable.
Abstract: . We propose covariate adjusted correlation (Cadcor) analysis to target the correlation between two hidden variables that are observed after being multiplied by an unknown function of a common observable confounding variable. The distorting effects of this confounding may alter the correlation relation between the hidden variables. Covariate adjusted correlation analysis enables consistent estimation of this correlation, by targeting the definition of correlation through the slopes of the regressions of the hidden variables on each other and by establishing a connection to varying-coefficient regression. The asymptotic distribution of the resulting adjusted correlation estimate is established. These distribution results, when combined with proposed consistent estimates of the asymptotic variance, lead to the construction of approximate confidence intervals and inference for adjusted correlations. We illustrate our approach through an application to the Boston house price data. Finite sample properties of the proposed procedures are investigated through a simulation study.

70 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss the likelihood ratio-based CIs for the distribution function and the quantile function and compare these intervals to several different intervals based on the MLE.
Abstract: The likelihood ratio statistic for testing pointwise hypotheses about the survival time distribution in the current status model can be inverted to yield confidence intervals (CIs). One advantage of this procedure is that CIs can be formed without estimating the unknown parameters that figure in the asymptotic distribution of the maximum likelihood estimator (MLE) of the distribution function. We discuss the likelihood ratio-based CIs for the distribution function and the quantile function and compare these intervals to several different intervals based on the MLE. The quantiles of the limiting distribution of the MLE are estimated using various methods including parametric fitting, kernel smoothing and subsampling techniques. Comparisons are carried out both for simulated data and on a data set involving time to immunization against rubella. The comparisons indicate that the likelihood ratio-based intervals are preferable from several perspectives.

67 citations


Journal ArticleDOI
TL;DR: In this paper, the authors considered a semiparametric time-varying coefficients regression model where the influences of some covariates vary non-parametrically with time while the effects of the remaining covariates follow certain parametric functions of time.
Abstract: In this paper, we consider a semiparametric time-varying coefficients regression model where the influences of some covariates vary non-parametrically with time while the effects of the remaining covariates follow certain parametric functions of time. The weighted least squares type estimators for the unknown parameters of the parametric coefficient functions as well as the esti- mators for the non-parametric coefficient functions are developed. We show that the kernel smoothing that avoids modelling of the sampling times is asymptotically more efficient than a single nearest neighbour smoothing that depends on the estimation of the sampling model. The asymptotic optimal bandwidth is also derived. A hypothesis testing procedure is proposed to test whether some covariate effects follow certain parametric forms. Simulation studies are conducted to compare the finite sample performances of the kernel neighbourhood smoothing and the single nearest neighbour smoothing and to check the empirical sizes and powers of the proposed testing procedures. An application to a data set from an AIDS clinical trial study is provided for illustration.

60 citations


Journal ArticleDOI
TL;DR: In this paper, a class of generalized log-rank tests for incomplete survival data is presented and evaluated using simulation studies and illustrated by a set of real data from a cancer study.
Abstract: . Several non-parametric test procedures have been proposed for incomplete survival data: interval-censored failure time data. However, most of them have unknown asymptotic properties with heuristically derived and/or complicated variance estimation. This article presents a class of generalized log-rank tests for this type of survival data and establishes their asymptotics. The methods are evaluated using simulation studies and illustrated by a set of real data from a cancer study.

56 citations


Journal ArticleDOI
TL;DR: Methodology for Bayesian inference is considered for a stochastic epidemic model which permits mixing on both local and global scales; interest focuses on estimation of the within- and between-group transmission rates given data on the final outcome.
Abstract: . Methodology for Bayesian inference is considered for a stochastic epidemic model which permits mixing on both local and global scales. Interest focuses on estimation of the within- and between-group transmission rates given data on the final outcome. The model is sufficiently complex that the likelihood of the data is numerically intractable. To overcome this difficulty, an appropriate latent variable is introduced, about which asymptotic information is known as the population size tends to infinity. This yields a method for approximate inference for the true model. The methods are applied to real data, tested with simulated data, and also applied to a simple epidemic model for which exact results are available for comparison.

Journal ArticleDOI
TL;DR: In this article, expectations of the form E[phi(X, Y)] are estimated with a product-limit estimator for censored and truncated data, and an almost sure representation for these estimators is obtained, with a remainder term of a certain negligible order, uniformly over a class of phi-functions.
Abstract: Let X be a d-variate random vector that is completely observed, and let Y be a random variable that is subject to right censoring and left truncation. For arbitrary functions phi we consider expectations of the form E[phi(X, Y)], which appear in many statistical problems, and we estimate these expectations by using a product-limit estimator for censored and truncated data, extended to the context where covariates are present. An almost sure representation for these estimators is obtained, with a remainder term that is of a certain negligible order, uniformly over a class of phi-functions. This uniformity is important for the application to goodness-of-fit testing in regression and to inference for the regression depth, which we consider in more detail.
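A minimal sketch of the product-limit estimator for left-truncated, right-censored responses that the expectations above are built on, ignoring covariates: the risk set at time t consists of subjects whose truncation time is at most t and whose observed time is at least t. The toy data-generating step is illustrative only.

```python
import numpy as np

def product_limit_ltrc(truncation, time, event):
    """Product-limit survival estimate for left-truncated, right-censored data.

    truncation: entry (truncation) times; time: observed times; event: 1 = event, 0 = censored.
    Returns (event times, estimated survival just after each event time)."""
    t_event = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in t_event:
        at_risk = np.sum((truncation <= t) & (time >= t))   # risk set under truncation
        d = np.sum((time == t) & (event == 1))
        s *= 1.0 - d / at_risk
        surv.append(s)
    return t_event, np.array(surv)

rng = np.random.default_rng(4)
entry = rng.uniform(0, 1, 300)
lifetime = rng.exponential(2.0, 300) + entry        # toy construction keeps Y >= T
censor = rng.exponential(5.0, 300) + entry
time = np.minimum(lifetime, censor)
event = (lifetime <= censor).astype(int)
print(product_limit_ltrc(entry, time, event)[1][:5])
```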

Journal ArticleDOI
TL;DR: In this paper, a detailed study of the breakdown properties of some multivariate M-functionals related to Tyler's [Ann. Statist. 15 (1987) 234] "distribution-free" M-functional of scatter is given.
Abstract: For probability distributions on ℝq, a detailed study of the breakdown properties of some multivariate M-functionals related to Tyler's [Ann. Statist. 15 (1987) 234] ‘distribution-free’ M-functional of scatter is given. These include a symmetrized version of Tyler's M-functional of scatter, and the multivariate t M-functionals of location and scatter. It is shown that for ‘smooth’ distributions, the (contamination) breakdown point of Tyler's M-functional of scatter and of its symmetrized version are 1/q and inline image, respectively. For the multivariate t M-functional which arises from the maximum likelihood estimate for the parameters of an elliptical t distribution on ν ≥ 1 degrees of freedom the breakdown point at smooth distributions is 1/(q + ν). Breakdown points are also obtained for general distributions, including empirical distributions. Finally, the sources of breakdown are investigated. It turns out that breakdown can only be caused by contaminating distributions that are concentrated near low-dimensional subspaces.
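Tyler's 'distribution-free' M-functional of scatter (around a known centre, here the origin) can be computed by a simple fixed-point iteration; a minimal sketch with the usual trace normalization, on illustrative data.

```python
import numpy as np

def tyler_scatter(X, tol=1e-8, max_iter=500):
    """Tyler's M-estimator of scatter for data centred at the origin.

    Iterates V <- (q/n) * sum_i x_i x_i' / (x_i' V^{-1} x_i), then renormalizes so that
    trace(V) = q (the functional is only defined up to a scale factor)."""
    n, q = X.shape
    V = np.eye(q)
    for _ in range(max_iter):
        Vinv = np.linalg.inv(V)
        w = np.einsum('ij,jk,ik->i', X, Vinv, X)          # x_i' V^{-1} x_i
        V_new = (q / n) * (X.T * (1.0 / w)) @ X
        V_new *= q / np.trace(V_new)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V

rng = np.random.default_rng(5)
X = rng.multivariate_normal([0, 0, 0], [[2, 1, 0], [1, 2, 0], [0, 0, 1]], size=1000)
print(tyler_scatter(X))   # proportional to the true scatter matrix (up to scale)
```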

Journal ArticleDOI
TL;DR: Asymptotic normality is derived for kernel-type deconvolution estimators of the density, of the distribution function at a fixed point, and of the probability of an interval.
Abstract: We derive asymptotic normality of kernel-type deconvolution estimators of the density, the distribution function at a fixed point, and of the probability of an interval. We consider so-called super smooth deconvolution problems where the characteristic function of the known distribution decreases exponentially, but faster than that of the Cauchy distribution. It turns out that the limit behaviour of the pointwise estimators of the density and distribution function is relatively straightforward, while the asymptotic behaviour of the estimator of the probability of an interval depends in a complicated way on the sequence of bandwidths.
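A minimal sketch of a kernel-type deconvolution density estimator in the supersmooth (Gaussian error) setting discussed above, obtained by Fourier inversion with a kernel whose characteristic function has compact support; the kernel choice, bandwidth and data are illustrative.

```python
import numpy as np
from scipy.integrate import trapezoid

def deconvolution_kde(x, Y, h, sigma):
    """Density estimate of X from Y = X + eps, eps ~ N(0, sigma^2), via Fourier inversion.

    Uses phi_K(s) = (1 - s^2)^3 on [-1, 1] as the kernel's Fourier transform, so the
    integrand below is supported on |t| <= 1/h."""
    t = np.linspace(-1.0 / h, 1.0 / h, 2001)
    phi_K = (1.0 - (h * t) ** 2) ** 3                       # kernel FT evaluated at h*t
    phi_emp = np.mean(np.exp(1j * np.outer(t, Y)), axis=1)  # empirical characteristic function
    phi_err = np.exp(-0.5 * sigma ** 2 * t ** 2)            # error characteristic function
    integrand = np.exp(-1j * np.outer(np.atleast_1d(x), t)) * phi_K * phi_emp / phi_err
    return np.real(trapezoid(integrand, t, axis=1)) / (2.0 * np.pi)

rng = np.random.default_rng(6)
X = rng.normal(0.0, 1.0, 500)
Y = X + rng.normal(0.0, 0.3, 500)
grid = np.linspace(-3, 3, 7)
print(deconvolution_kde(grid, Y, h=0.4, sigma=0.3))
```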

Journal ArticleDOI
TL;DR: In this paper, Bayesian survival analysis of right-censored survival data is studied using priors on Bernstein polynomials and Markov chain Monte Carlo methods; these priors easily take into consideration geometric information like convexity or an initial guess on the cumulative hazard functions, select only smooth functions, can have large enough support, and can be easily specified and generated.
Abstract: . Bayesian survival analysis of right-censored survival data is studied using priors on Bernstein polynomials and Markov chain Monte Carlo methods. These priors easily take into consideration geometric information like convexity or initial guess on the cumulative hazard functions, select only smooth functions, can have large enough support, and can be easily specified and generated. Certain frequentist asymptotic properties of the posterior distribution are established. Simulation studies indicate that these Bayes methods are quite satisfactory.

Journal ArticleDOI
TL;DR: It is shown that the asymptotic bias and mean squared error of the estimator are considerably smaller than those of the standard kernel df estimator.
Abstract: . A new kernel distribution function (df) estimator based on a non-parametric transformation of the data is proposed. It is shown that the asymptotic bias and mean squared error of the estimator are considerably smaller than those of the standard kernel df estimator. For the practical implementation of the new estimator a data-based choice of the bandwidth is proposed. Two possible areas of application are the non-parametric smoothed bootstrap and survival analysis. In the latter case new estimators for the survival function and the mean residual life function are derived.
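For comparison, the standard kernel df estimator that the paper improves on replaces the indicator in the empirical distribution function by a smoothed step; a minimal sketch with a Gaussian kernel and a hand-picked bandwidth.

```python
import numpy as np
from scipy.stats import norm

def kernel_df(x, data, h):
    """Standard kernel df estimator: F_hat(x) = (1/n) * sum_i K_int((x - X_i)/h),
    where K_int is the integrated kernel (here the standard normal cdf)."""
    return np.mean(norm.cdf((np.atleast_1d(x)[:, None] - data) / h), axis=1)

rng = np.random.default_rng(7)
data = rng.normal(size=300)
print(kernel_df(np.array([-1.0, 0.0, 1.0]), data, h=0.3))
# Smoother than the empirical df; with a good bandwidth it is also slightly more accurate.
```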

Journal ArticleDOI
TL;DR: In this article, goodness-of-fit tests for Aalen's additive risk model are proposed, based on test statistics whose asymptotic distributions are determined under both the null and alternative hypotheses.
Abstract: . In this paper we propose goodness-of-fit tests for Aalen's additive risk model. They are based on test statistics the asymptotic distributions of which are determined under both the null and alternative hypotheses. The results are derived using martingale techniques for counting processes. An important feature of these tests is that they can be adjusted to particular alternatives. One of the alternatives we consider is Cox's multiplicative risk model. It is perhaps remarkable that such a test needs no estimate of the baseline hazard in the Cox model. We present simulation studies which give an impression of the performance of the proposed tests. In addition, the tests are applied to real data sets.

Journal ArticleDOI
TL;DR: In this paper, the robustness of SIR is investigated by deriving and plotting the influence function for a variety of contamination structures, and the asymptotic variance of the estimates is also derived for the single index model when the explanatory variable is known to be normally distributed.
Abstract: . Sliced inverse regression (SIR) is a dimension reduction technique that is both efficient and simple to implement. The procedure itself relies heavily on estimates that are known to be highly non-robust and, as such, the issue of robustness is often raised. This paper looks at the robustness of SIR by deriving and plotting the influence function for a variety of contamination structures. The sample influence function is also considered and used to highlight that common outlier detection and deletion methods may not be entirely useful to SIR. The asymptotic variance of the estimates is also derived for the single index model when the explanatory variable is known to be normally distributed. The asymptotic variance is then compared for varying choices of the number of slices for a simple model example.
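A minimal sketch of the basic SIR procedure whose robustness is analysed above: standardize X using the (non-robust) sample mean and covariance, slice on the response, and eigen-decompose the weighted covariance of the slice means. Slice count, model and data are illustrative.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_directions=1):
    """Sliced inverse regression: estimate e.d.r. directions from the covariance of slice means."""
    n, p = X.shape
    mean, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    # Standardize X (this step relies on the highly non-robust sample mean and covariance).
    L = np.linalg.cholesky(np.linalg.inv(cov))
    Z = (X - mean) @ L
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        zbar = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(zbar, zbar)   # weighted covariance of slice means
    eigval, eigvec = np.linalg.eigh(M)
    top = eigvec[:, np.argsort(eigval)[::-1][:n_directions]]
    return L @ top                                    # map back to the original X scale

rng = np.random.default_rng(8)
X = rng.standard_normal((500, 4))
y = (X @ np.array([1.0, -1.0, 0.0, 0.0])) ** 3 + 0.5 * rng.standard_normal(500)
print(sir_directions(X, y).ravel())   # roughly proportional to (1, -1, 0, 0)
```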

Journal ArticleDOI
TL;DR: In this invited discussion of Azzalini's overview paper, the author comments on the construction and properties of skew-normal distributions, on flexibility and inference with skewed distributions, and on applications.
Abstract: Department of Statistics, Texas A&M University. First, I would like to congratulate Professor Azzalini for an excellent overview paper on the topic of the skew-normal distribution and its various extensions. This is a fast-growing area of research that has been pioneered by Professor Azzalini’s seminal article on a class of distributions which includes the normal ones (Azzalini, 1985). Presently, both frequentist and Bayesian statisticians are actively involved in the study of these univariate and multivariate skewed distributions. A sample of current researchers have recently contributed to an edited book entitled Skew-elliptical distributions and their applications: a journey beyond normality (Genton, 2004a). Professor Azzalini fully deserves the credit for generating such a diverse family of researchers! My discussion consists of some additional links to existing results from the literature as well as suggestions for further research. My comments are centred around three main themes. The first one, in section 7, is devoted to the construction of skew-normal distributions, their properties, and some extensions. The second theme is concerned with flexibility and inference with skewed distributions in section 8. The third theme, in section 9, is motivated by applications. 7. Skew-normal and related distributions. I enjoyed the parallel presentation of the univariate and multivariate skew-normal distributions in sections 2 and 3, respectively. I would like to start with some comments about the form of the perturbation function. As mentioned in the article, the expression G{w(z)} appearing in lemmas 1 and 3 can be replaced by an arbitrary skewing function p : R

Journal ArticleDOI
TL;DR: In this paper, the authors show the existence of largest CGs with no flags, which provide a natural characterization of the equivalence classes of chain graphs of this kind with respect to both the LWF- and the AMP-Markov properties.
Abstract: . A Markov property associates a set of conditional independencies to a graph. Two alternative Markov properties are available for chain graphs (CGs), the Lauritzen–Wermuth–Frydenberg (LWF) and the Andersson–Madigan– Perlman (AMP) Markov properties, which are different in general but coincide for the subclass of CGs with no flags. Markov equivalence induces a partition of the class of CGs into equivalence classes and every equivalence class contains a, possibly empty, subclass of CGs with no flags itself containing a, possibly empty, subclass of directed acyclic graphs (DAGs). LWF-Markov equivalence classes of CGs can be naturally characterized by means of the so-called largest CGs, whereas a graphical characterization of equivalence classes of DAGs is provided by the essential graphs. In this paper, we show the existence of largest CGs with no flags that provide a natural characterization of equivalence classes of CGs of this kind, with respect to both the LWF- and the AMP-Markov properties. We propose a procedure for the construction of the largest CGs, the largest CGs with no flags and the essential graphs, thereby providing a unified approach to the problem. As by-products we obtain a characterization of graphs that are largest CGs with no flags and an alternative characterization of graphs which are largest CGs. Furthermore, a known characterization of the essential graphs is shown to be a special case of our more general framework. The three graphical characterizations have a common structure: they use two versions of a locally verifiable graphical rule. Moreover, in case of DAGs, an immediate comparison of three characterizing graphs is possible.

Journal ArticleDOI
TL;DR: It is shown that the concentration of the data‐dependent smoothing factor and the ‘size’ of the hypothesized class of densities play a key role in the performance of the test.
Abstract: Given an i.i.d. sample drawn from a density f on the real line, the problem of testing whether f is in a given class of densities is considered. Testing procedures constructed on the basis of minimizing the L1-distance between a kernel density estimate and any density in the hypothesized class are investigated. General non-asymptotic bounds are derived for the power of the test. It is shown that the concentration of the data-dependent smoothing factor and the 'size' of the hypothesized class of densities play a key role in the performance of the test. Consistency and non-asymptotic performance bounds are established in several special cases, including testing simple hypotheses, translation/scale classes and symmetry. Simulations are also carried out to compare the behaviour of the method with the Kolmogorov-Smirnov test and an L2 density-based approach due to Fan (Econ. Theory 10 (1994) 316).
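At the core of the procedure above is an L1 distance between a kernel density estimate and the hypothesized class; a minimal sketch of that distance for the simplest case of a single fully specified null density, with an illustrative bandwidth.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def l1_distance_to_null(data, h, null_pdf, lower=-10, upper=10):
    """Integrated |f_hat - f_0| between a Gaussian-kernel density estimate and a
    fully specified null density f_0 (simple hypothesis)."""
    def f_hat(x):
        return np.mean(norm.pdf((x - data) / h)) / h
    val, _ = quad(lambda x: abs(f_hat(x) - null_pdf(x)), lower, upper, limit=200)
    return val

rng = np.random.default_rng(9)
print(l1_distance_to_null(rng.standard_normal(400), h=0.35, null_pdf=norm.pdf))   # small
print(l1_distance_to_null(rng.standard_t(3, 400), h=0.35, null_pdf=norm.pdf))     # larger
```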

Journal ArticleDOI
TL;DR: In this paper, the authors considered the case where a terminal event censors a non-terminal event, but not vice versa, and formulated the dependence structure of the event times with the gamma frailty copula on the upper wedge, with the marginal distributions unspecified.
Abstract: . We consider the case where a terminal event censors a non-terminal event, but not vice versa. When the events are dependent, estimation of the distribution of the non-terminal event is a competing risks problem, while estimation of the distribution of the terminal event is not. The dependence structure of the event times is formulated with the gamma frailty copula on the upper wedge, with the marginal distributions unspecified. With a consistent estimator of the association parameter, pseudo self-consistency equations are derived and adapted to the semiparametric model. Existence, uniform consistency and weak convergence of the new estimator for the marginal distribution of the non-terminal event is established using theories of empirical processes, U-statistics and Z-estimation. The potential practical utility of the methodology is illustrated with simulated and real data sets.

Journal ArticleDOI
TL;DR: In this paper, a new estimator of the conditional quantiles based on the local linear method is proposed, an algorithm for its numerical implementation is given, its asymptotic properties are studied, and its performance is evaluated on simulated data sets.
Abstract: . Censored regression models have received a great deal of attention in both the theoretical and applied statistics literature. Here, we consider a model in which the response variable is censored but not the covariates. We propose a new estimator of the conditional quantiles based on the local linear method, and give an algorithm for its numerical implementation. We study its asymptotic properties and evaluate its performance on simulated data sets.

Journal ArticleDOI
TL;DR: In this paper, the authors considered classification of the realization of a multivariate spatial-temporal Gaussian random field into one of two populations with different regression mean models and factorized covariance matrices.
Abstract: . We consider classification of the realization of a multivariate spatial–temporal Gaussian random field into one of two populations with different regression mean models and factorized covariance matrices. Unknown means and common feature vector covariance matrix are estimated from training samples with observations correlated in space and time, assuming spatial–temporal correlations to be known. We present the first-order asymptotic expansion of the expected error rate associated with a linear plug-in discriminant function. Our results are applied to ecological data collected from the Lithuanian Economic Zone in the Baltic Sea.

Journal ArticleDOI
TL;DR: In this paper, the autocorrelation structure of aggregates from a continuous-time process is studied and closed-form expressions for the limiting autocorerelation function and the normalized spectral density of the aggregates, as the extent of aggregation increases to infinity.
Abstract: . We study the autocorrelation structure of aggregates from a continuous-time process. The underlying continuous-time process, or some higher derivative of it, is assumed to be a stationary continuous-time auto-regressive fractionally integrated moving-average (CARFIMA) process with Hurst parameter H. We derive closed-form expressions for the limiting autocorrelation function and the normalized spectral density of the aggregates, as the extent of aggregation increases to infinity. The limiting model of the aggregates, after an appropriate degree of differencing, is shown to be some functional of the standard fractional Brownian motion with the same Hurst parameter as the continuous-time process from which the aggregates are measured. These results are then used to assess the loss of forecasting efficiency due to aggregation.

Journal ArticleDOI
TL;DR: It is identified that the moment method without resmoothing via a smaller bandwidth will produce a curve with nicks occurring at the censoring times, whereas there is no such problem with the least squares method.
Abstract: Recurrent event data are largely characterized by the rate function but smoothing techniques for estimating the rate function have never been rigorously developed or studied in statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. With an independent censoring assumption on the recurrent event process, we study statistical properties of the proposed estimators and propose bootstrap procedures for the bandwidth selection and for the approximation of confidence intervals in the estimation of the occurrence rate function. It is identified that the moment method without resmoothing via a smaller bandwidth will produce a curve with nicks occurring at the censoring times, whereas there is no such problem with the least squares method. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former approach uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.

Journal ArticleDOI
TL;DR: In this article, the authors considered the problem of estimating the individual probabilities of a discrete distribution, where the true distribution of the independent observations is a mixture of a family of power series distributions.
Abstract: The problem of estimating the individual probabilities of a discrete distribution is considered. The true distribution of the independent observations is a mixture of a family of power series distributions. First, we ensure identifiability of the mixing distribution assuming mild conditions. Next, the mixing distribution is estimated by non-parametric maximum likelihood and an estimator for individual probabilities is obtained from the corresponding marginal mixture density. We establish asymptotic normality for the estimator of individual probabilities by showing that, under certain conditions, the difference between this estimator and the empirical proportions is asymptotically negligible. Our framework includes Poisson, negative binomial and logarithmic series as well as binomial mixture models. Simulations highlight the benefit in achieving normality when using the proposed marginal mixture density approach instead of the empirical one, especially for small sample sizes and/or when interest is in the tail areas. A real data example is given to illustrate the use of the methodology.

Journal ArticleDOI
TL;DR: The Bayesian level set estimator proves to be competitive with existing rates of convergence in the frequentist non-parametric literature and is also easy to compute, which is of no small importance.
Abstract: . We are interested in estimating level sets using a Bayesian non-parametric approach, from an independent and identically distributed sample drawn from an unknown distribution. Under fairly general conditions on the prior, we provide an upper bound on the rate of convergence of the Bayesian level set estimate, via the rate at which the posterior distribution concentrates around the true level set. We then consider, as an application, the log-spline prior in the two-dimensional unit cube. Assuming that the true distribution belongs to a Hölder class, we provide an upper bound on the rate of convergence of the Bayesian level set estimates. We compare our results with existing rates of convergence in the frequentist non-parametric literature: the Bayesian level set estimator proves to be competitive and is also easy to compute, which is of no small importance. A simulation study is given as an illustration.

Journal ArticleDOI
TL;DR: In this paper, a binary response-based longitudinal adaptive design for the allocation of individuals to a better treatment and a weighted generalized quasi-likelihood approach for the consistent and efficient estimation of the regression parameters including the treatment effects are proposed.
Abstract: . In an adaptive clinical trial research, it is common to use certain data-dependent design weights to assign individuals to treatments so that more study subjects are assigned to the better treatment. These design weights must also be used for consistent estimation of the treatment effects as well as the effects of the other prognostic factors. In practice, there are however situations where it may be necessary to collect binary responses repeatedly from an individual over a period of time and to obtain consistent estimates for the treatment effect as well as the effects of the other covariates in such a binary longitudinal set up. In this paper, we introduce a binary response-based longitudinal adaptive design for the allocation of individuals to a better treatment and propose a weighted generalized quasi-likelihood approach for the consistent and efficient estimation of the regression parameters including the treatment effects.

Journal ArticleDOI
TL;DR: In this article, the authors focus on a class of non-standard problems involving non-parametric estimation of a monotone function that is characterized by an n^{1/3} rate of convergence of the maximum likelihood estimator, non-Gaussian limit distributions and the non-existence of √n-regular estimators.
Abstract: We focus on a class of non-standard problems involving non-parametric estimation of a monotone function that is characterized by an n^{1/3} rate of convergence of the maximum likelihood estimator, non-Gaussian limit distributions and the non-existence of √n-regular estimators. We have shown elsewhere that, under a null hypothesis of the type ψ(z0) = θ0 (ψ being the monotone function of interest) in non-standard problems of the above kind, the likelihood ratio statistic has a 'universal' limit distribution that is free of the underlying parameters in the model. In this paper, we illustrate its limiting behaviour under local alternatives of the form ψn(z), where ψn(·) and ψ(·) vary in O(n^{-1/3}) neighbourhoods around z0 and ψn converges to ψ at rate n^{1/3} in an appropriate metric. Apart from local alternatives, we also consider the behaviour of the likelihood ratio statistic under fixed alternatives and establish the convergence in probability of an appropriately scaled version of the same to a constant involving a Kullback-Leibler distance.