
Showing papers on "Asymptotic distribution published in 2008"


Posted Content
TL;DR: In this paper, the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS) is defined, and the test statistic can be computed in quadratic time, although efficient linear time approximations are available.
Abstract: We propose a framework for analyzing and comparing distributions, allowing us to design statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS). We present two tests based on large deviation bounds for the test statistic, while a third is based on the asymptotic distribution of this statistic. The test statistic can be computed in quadratic time, although efficient linear time approximations are available. Several classical metrics on distributions are recovered when the function space used to compute the difference in expectations is allowed to be more general (e.g., a Banach space). We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.

1,259 citations
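As a rough illustration of the statistic described above, the following sketch computes a biased quadratic-time estimate of the squared maximum mean discrepancy with a Gaussian kernel; the kernel family, the bandwidth, and the toy samples are illustrative assumptions, not choices made by the paper.

```python
import numpy as np

def mmd2_biased(X, Y, sigma=1.0):
    """Biased quadratic-time estimate of the squared maximum mean
    discrepancy (MMD) between samples X and Y, using a Gaussian kernel."""
    def gram(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2 * gram(X, Y).mean()

rng = np.random.default_rng(0)
# same distribution: statistic should be near zero
same = mmd2_biased(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
# shifted distribution: statistic should be clearly positive
diff = mmd2_biased(rng.normal(size=(200, 2)),
                   rng.normal(2.0, 1.0, size=(200, 2)))
```

The biased estimator is always nonnegative, so a test compares it against null quantiles obtained from the bounds or the asymptotic distribution mentioned in the abstract.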


Posted Content
TL;DR: This study develops a methodology of inference for a widely used Cliff-Ord type spatial model containing spatial lags in the dependent variable, exogenous variables, and the disturbance terms, while allowing for unknown heteroskedasticity in the innovations.
Abstract: One important goal of this study is to develop a methodology of inference for a widely used Cliff-Ord type spatial model containing spatial lags in the dependent variable, exogenous variables, and the disturbance terms, while allowing for unknown heteroskedasticity in the innovations. We first generalize the generalized moments (GM) estimator suggested in Kelejian and Prucha (1998, 1999) for the spatial autoregressive parameter in the disturbance process. We prove the consistency of our estimator; unlike in our earlier paper, we also determine its asymptotic distribution and discuss issues of efficiency. We then define instrumental variable (IV) estimators for the regression parameters of the model and give results concerning the joint asymptotic distribution of those estimators and the GM estimator under reasonable conditions. Much of the theory is kept general to cover a wide range of settings. We note that the estimation theory developed by Kelejian and Prucha (1998, 1999) for GM and IV estimators and by Lee (2004) for the quasi-maximum likelihood estimator under the assumption of homoskedastic innovations does not carry over to the case of heteroskedastic innovations. The paper also provides a critical discussion of the usual specification of the parameter space.

955 citations


Journal Article
TL;DR: The adaptive Lasso has the oracle property even when the number of covariates is much larger than the sample size, and under a partial orthogonality condition in which the covariates with zero coefficients are weakly correlated with the covariates with nonzero coefficients, marginal regression can be used to obtain the initial estimator.
Abstract: We study the asymptotic properties of the adaptive Lasso estimators in sparse, high-dimensional, linear regression models when the number of covariates may increase with the sample size. We consider variable selection using the adaptive Lasso, where the L1 norms in the penalty are re-weighted by data-dependent weights. We show that, if a reasonable initial estimator is available, under appropriate conditions, the adaptive Lasso correctly selects covariates with nonzero coefficients with probability converging to one, and that the estimators of nonzero coefficients have the same asymptotic distribution they would have if the zero coefficients were known in advance. Thus, the adaptive Lasso has an oracle property in the sense of Fan and Li (2001) and Fan and Peng (2004). In addition, under a partial orthogonality condition in which the covariates with zero coefficients are weakly correlated with the covariates with nonzero coefficients, marginal regression can be used to obtain the initial estimator. With this initial estimator, the adaptive Lasso has the oracle property even when the number of covariates is much larger than the sample size.

594 citations
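The reweighting idea in this abstract can be sketched in a few lines: an initial estimator (here the marginal regressions suggested under partial orthogonality) supplies weights, and the weighted L1 problem is solved by rescaling columns before a plain Lasso fit. The coordinate-descent solver, tuning constants, and toy data are illustrative assumptions.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # partial residual
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return b

def adaptive_lasso(X, y, lam, gamma=1.0):
    # initial estimator: marginal (univariate) regressions, as the abstract
    # suggests under the partial orthogonality condition
    b_init = np.array([X[:, j] @ y / (X[:, j] @ X[:, j])
                       for j in range(X.shape[1])])
    w = 1.0 / (np.abs(b_init) ** gamma + 1e-8)      # adaptive weights
    b_scaled = lasso_cd(X / w, y, lam)              # reweight via column rescaling
    return b_scaled / w

rng = np.random.default_rng(1)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + rng.normal(scale=0.5, size=n)
b_hat = adaptive_lasso(X, y, lam=0.1)
```

Large weights on the (nearly) zero initial estimates penalize null coefficients heavily, while the signal coefficients are barely shrunk, which is exactly the mechanism behind the oracle property discussed above.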


Journal ArticleDOI
TL;DR: A treatment of the mathematical properties of the Lindley distribution is provided, including moments, cumulants, the characteristic function, the failure rate function, the mean residual life function, and mean deviations.

541 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the asymptotic properties of quasi-maximum likelihood estimators for spatial dynamic panel data when both the number of individuals and the number of time periods are large.

490 citations


Journal ArticleDOI
TL;DR: In this paper, the authors studied the asymptotic properties of bridge estimators in sparse, high-dimensional, linear regression models when the number of covariates may increase to infinity with the sample size.
Abstract: We study the asymptotic properties of bridge estimators in sparse, high-dimensional, linear regression models when the number of covariates may increase to infinity with the sample size. We are particularly interested in the use of bridge estimators to distinguish between covariates whose coefficients are zero and covariates whose coefficients are nonzero. We show that under appropriate conditions, bridge estimators correctly select covariates with nonzero coefficients with probability converging to one and that the estimators of nonzero coefficients have the same asymptotic distribution that they would have if the zero coefficients were known in advance. Thus, bridge estimators have an oracle property in the sense of Fan and Li [J. Amer. Statist. Assoc. 96 (2001) 1348--1360] and Fan and Peng [Ann. Statist. 32 (2004) 928--961]. In general, the oracle property holds only if the number of covariates is smaller than the sample size. However, under a partial orthogonality condition in which the covariates of the zero coefficients are uncorrelated or weakly correlated with the covariates of nonzero coefficients, we show that marginal bridge estimators can correctly distinguish between covariates with nonzero and zero coefficients with probability converging to one even when the number of covariates is greater than the sample size.

415 citations




Journal ArticleDOI
TL;DR: In this article, the authors derived the asymptotic distribution of DEA estimators under variable returns to scale and proved consistency of two different bootstrap procedures (one based on subsampling, the other based on smoothing).
Abstract: Nonparametric data envelopment analysis (DEA) estimators based on linear programming methods have been widely applied in analyses of productive efficiency. The distributions of these estimators remain unknown except in the simple case of one input and one output, and previous bootstrap methods proposed for inference have not been proved consistent, making inference doubtful. This paper derives the asymptotic distribution of DEA estimators under variable returns to scale. This result is used to prove consistency of two different bootstrap procedures (one based on subsampling, the other based on smoothing). The smooth bootstrap requires smoothing the irregularly bounded density of inputs and outputs and smoothing the DEA frontier estimate. Both bootstrap procedures allow for dependence of the inefficiency process on output levels and the mix of inputs in the case of input-oriented measures, or on input levels and the mix of outputs in the case of output-oriented measures.

284 citations
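The DEA estimator the paper studies is, at heart, one linear program per observation. Below is a minimal input-oriented, variable-returns-to-scale sketch using scipy's linprog; the tiny one-input, one-output data set is invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def dea_vrs_input(X, Y, i):
    """Input-oriented VRS DEA efficiency score of unit i.
    X: (n, p) inputs, Y: (n, q) outputs. Decision vars: [theta, lam_1..lam_n]."""
    n, p = X.shape
    q = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                  # minimise theta
    # inputs:  sum_j lam_j X[j,k] - theta * X[i,k] <= 0
    A_in = np.c_[-X[i][:, None], X.T]
    # outputs: -sum_j lam_j Y[j,k] <= -Y[i,k]
    A_out = np.c_[np.zeros((q, 1)), -Y.T]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(p), -Y[i]]
    A_eq = np.r_[0.0, np.ones(n)][None, :]       # VRS: sum lam = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

X = np.array([[2.0], [4.0], [3.0]])   # one input per unit
Y = np.array([[2.0], [4.0], [2.0]])   # one output per unit
scores = [dea_vrs_input(X, Y, i) for i in range(3)]
```

Units on the frontier score 1; the third unit could produce its output with two thirds of its input, so its score is 2/3. The bootstrap procedures in the paper resample around exactly such scores.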


Journal ArticleDOI
TL;DR: A regularized estimation procedure for variable selection that combines basis function approximations and the smoothly clipped absolute deviation penalty and establishes the theoretical properties of the procedure, including consistency in variable selection and the oracle property in estimation.
Abstract: Nonparametric varying-coefficient models are commonly used for analyzing data measured repeatedly over time, including longitudinal and functional response data. Although many procedures have been developed for estimating varying coefficients, the problem of variable selection for such models has not been addressed to date. In this article we present a regularized estimation procedure for variable selection that combines basis function approximations and the smoothly clipped absolute deviation penalty. The proposed procedure simultaneously selects significant variables with time-varying effects and estimates the nonzero smooth coefficient functions. Under suitable conditions, we establish the theoretical properties of our procedure, including consistency in variable selection and the oracle property in estimation. Here the oracle property means that the asymptotic distribution of an estimated coefficient function is the same as that when it is known a priori which variables are in the model. The method is...

280 citations


Journal ArticleDOI
TL;DR: This work considers selecting a regression model, using a variant of the general-to-specific algorithm in PcGets, when there are more variables than observations, and obtains the asymptotic distribution of the mean and variance in a location-scale model under the null that no impulses matter.
Abstract: We consider selecting a regression model, using a variant of the general-to-specific algorithm in PcGets, when there are more variables than observations. We look at the special case where the variables are single impulse dummies, one defined for each observation. We show that this setting is unproblematic if tackled appropriately, and obtain the asymptotic distribution of the mean and variance in a location-scale model, under the null that no impulses matter. Monte Carlo simulations confirm the null distributions and suggest extensions to highly non-normal cases.

249 citations


Journal ArticleDOI
TL;DR: In this article, a nonparametric test of conditional independence based on the weighted Hellinger distance between the two conditional densities, f(y|x,z) and f(x|x) under the null, is proposed.
Abstract: We propose a nonparametric test of conditional independence based on the weighted Hellinger distance between the two conditional densities, f(y|x,z) and f(y|x), which is identically zero under the null. We use the functional delta method to expand the test statistic around the population value and establish asymptotic normality under β-mixing conditions. We show that the test is consistent and has power against alternatives at distance n−1/2h−d/4. The cases for which not all random variables of interest are continuously valued or observable are also discussed. Monte Carlo simulation results indicate that the test behaves reasonably well in finite samples and significantly outperforms some earlier tests for a variety of data generating processes. We apply our procedure to test for Granger noncausality in exchange rates.

Journal ArticleDOI
TL;DR: A novel maximally selected rank statistic is derived from this framework for a censored response partitioned with respect to two ordered categorical covariates and potential interactions and is employed to search for a high-risk group of rectal cancer patients treated with a neo-adjuvant chemoradiotherapy.
Abstract: Maximally selected statistics for the estimation of simple cutpoint models are embedded into a generalized conceptual framework based on conditional inference procedures. This powerful framework contains most of the published procedures in this area as special cases, such as maximally selected chi-squared and rank statistics, but also allows for direct construction of new test procedures for less standard test problems. As an application, a novel maximally selected rank statistic is derived from this framework for a censored response partitioned with respect to two ordered categorical covariates and potential interactions. This new test is employed to search for a high-risk group of rectal cancer patients treated with a neo-adjuvant chemoradiotherapy. Moreover, a new efficient algorithm for the evaluation of the asymptotic distribution for a large class of maximally selected statistics is given enabling the fast evaluation of a large number of cutpoints.
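The idea of a maximally selected statistic can be illustrated with a plain scan over cutpoints. The standardized mean difference used below stands in for the rank and chi-squared statistics the paper treats; the trimming fraction and simulated data are illustrative assumptions.

```python
import numpy as np

def max_selected_stat(x, y, trim=0.1):
    """Scan all cutpoints of covariate x (away from the edges) and return the
    maximum absolute two-sample statistic comparing y below vs above the cut."""
    ys = y[np.argsort(x)]
    n = len(y)
    stats = []
    for cut in range(int(trim * n), int((1 - trim) * n)):
        a, b = ys[:cut], ys[cut:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        stats.append(abs(a.mean() - b.mean()) / se)
    return max(stats)

rng = np.random.default_rng(2)
x = rng.uniform(size=300)
y_null = rng.normal(size=300)                    # no cutpoint effect
y_alt = rng.normal(size=300) + (x > 0.5) * 1.5   # mean shift past cutpoint 0.5
s_null = max_selected_stat(x, y_null)
s_alt = max_selected_stat(x, y_alt)
```

Because the maximum is taken over many correlated statistics, its null distribution is not that of a single test; computing it efficiently is precisely the contribution of the algorithm mentioned in the abstract.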

Journal ArticleDOI
TL;DR: In this paper, a statistical test is proposed to distinguish between true long memory and spurious long memory based on invariance of the long memory parameter for temporal aggregates of the process under the null of real long memory.
Abstract: It is well known that long memory characteristics observed in data can be generated by nonstationary structural-break or slow regime switching models. We propose a statistical test to distinguish between true long memory and spurious long memory based on invariance of the long memory parameter for temporal aggregates of the process under the null of true long memory. Geweke Porter-Hudak estimates of the long memory parameter obtained from different temporal aggregates of the underlying time series are shown to be asymptotically jointly normal, leading to a test statistic that is constructed as the quadratic form of a demeaned vector of the estimates. The result is a test statistic that is very simple to implement. Simulations show the test to have good size and power properties for the classic alternatives to true long memory that have been suggested in the literature. The asymptotic distribution of the test statistic is also valid for a stochastic volatility with Gaussian long memory model. The test is a...
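The test described above compares Geweke Porter-Hudak (GPH) estimates across temporal aggregates. A minimal sketch of the underlying log-periodogram estimator, applied to a truncated moving-average simulation of ARFIMA(0, d, 0) noise, is given below; the bandwidth m = n^0.6 and the simulation design are illustrative assumptions.

```python
import numpy as np

def gph_estimate(x, m):
    """Geweke Porter-Hudak log-periodogram estimate of the memory parameter d."""
    n = len(x)
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    per = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * n)  # periodogram
    reg = np.log(4 * np.sin(freqs / 2) ** 2)
    slope = np.polyfit(reg, np.log(per), 1)[0]
    return -slope                                 # slope of regression is -d

# simulate ARFIMA(0, d, 0) with d = 0.3 via its (truncated) MA(inf) expansion
rng = np.random.default_rng(9)
d, n = 0.3, 4096
psi = np.ones(n)
for k in range(1, n):
    psi[k] = psi[k - 1] * (k - 1 + d) / k         # psi_k = Gamma(k+d)/(Gamma(d) k!)
x = np.convolve(rng.normal(size=2 * n), psi, mode="full")[n:2 * n]
d_hat = gph_estimate(x, m=int(n ** 0.6))
```

The proposed test applies this estimator to several temporal aggregates of the series and checks whether the estimates agree, as they should under true long memory.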

Journal ArticleDOI
TL;DR: The statistical properties of the autoregressive distance between ARIMA processes are investigated, and the asymptotic distribution of the squared AR distance, together with a computationally efficient approximation, is derived.

Journal ArticleDOI
TL;DR: In this article, the authors introduce the concepts of weighted sample consistency and asymptotic normality, and derive conditions under which the transformations of the weighted sample used in the SMC algorithm preserve these properties.
Abstract: In the last decade, sequential Monte Carlo methods (SMC) emerged as a key tool in computational statistics [see, e.g., Sequential Monte Carlo Methods in Practice (2001) Springer, New York, Monte Carlo Strategies in Scientific Computing (2001) Springer, New York, Complex Stochastic Systems (2001) 109–173]. These algorithms approximate a sequence of distributions by a sequence of weighted empirical measures associated to a weighted population of particles, which are generated recursively. Despite many theoretical advances [see, e.g., J. Roy. Statist. Soc. Ser. B 63 (2001) 127–146, Ann. Statist. 33 (2005) 1983–2021, Feynman–Kac Formulae. Genealogical and Interacting Particle Systems with Applications (2004) Springer, Ann. Statist. 32 (2004) 2385–2411], the large-sample theory of these approximations remains a question of central interest. In this paper we establish a law of large numbers and a central limit theorem as the number of particles gets large. We introduce the concepts of weighted sample consistency and asymptotic normality, and derive conditions under which the transformations of the weighted sample used in the SMC algorithm preserve these properties. To illustrate our findings, we analyze SMC algorithms to approximate the filtering distribution in state-space models. We show how our techniques allow to relax restrictive technical conditions used in previously reported works and provide grounds to analyze more sophisticated sequential sampling strategies, including branching, resampling at randomly selected times, and so on.
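A minimal instance of the weighted-particle approximations analyzed above is the bootstrap particle filter for a linear-Gaussian state-space model, chosen here as a toy case because the filtering target is well understood; the model and tuning constants are illustrative assumptions.

```python
import numpy as np

def bootstrap_filter(y, n_part=500, phi=0.9, sig_x=1.0, sig_y=0.5, seed=0):
    """Bootstrap particle filter for the state-space model
    x_t = phi * x_{t-1} + N(0, sig_x^2),  y_t = x_t + N(0, sig_y^2).
    Returns the sequence of filtered means E[x_t | y_1..t]."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_part)                        # initial particle cloud
    means = []
    for yt in y:
        x = phi * x + sig_x * rng.normal(size=n_part)  # propagate (mutation)
        logw = -0.5 * ((yt - x) / sig_y) ** 2          # reweight by likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(w @ x)
        x = x[rng.choice(n_part, n_part, p=w)]         # multinomial resampling
    return np.array(means)

rng = np.random.default_rng(42)
T, phi = 100, 0.9
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = phi * x_true[t - 1] + rng.normal()
y = x_true + 0.5 * rng.normal(size=T)
m = bootstrap_filter(y, phi=phi)
```

The propagate/reweight/resample steps are exactly the weighted-sample transformations whose consistency and asymptotic normality the paper studies as the number of particles grows.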

Proceedings Article
08 Dec 2008
TL;DR: A kernel-based method for change-point analysis within a sequence of temporal observations and proposes a test statistic based upon the maximum kernel Fisher discriminant ratio as a measure of homogeneity between segments to establish the consistency under the alternative hypothesis.
Abstract: We introduce a kernel-based method for change-point analysis within a sequence of temporal observations. Change-point analysis of an unlabelled sample of observations consists in, first, testing whether a change in the distribution occurs within the sample, and second, if a change occurs, estimating the change-point instant after which the distribution of the observations switches from one distribution to another different distribution. We propose a test statistic based upon the maximum kernel Fisher discriminant ratio as a measure of homogeneity between segments. We derive its limiting distribution under the null hypothesis (no change occurs), and establish the consistency under the alternative hypothesis (a change occurs). This allows one to build a statistical hypothesis testing procedure for testing the presence of a change-point, with a prescribed false-alarm probability and detection probability tending to one in the large-sample setting. If a change actually occurs, the test statistic also yields an estimator of the change-point location. Promising experimental results in temporal segmentation of mental tasks from BCI data and pop song indexation are presented.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the asymptotic behavior of penalized spline estimators in the univariate case, where the number of knots is assumed to converge to infinity as the sample size increases.
Abstract: We study the asymptotic behaviour of penalized spline estimators in the univariate case. We use B-splines and a penalty is placed on mth-order differences of the coefficients. The number of knots is assumed to converge to infinity as the sample size increases. We show that penalized splines behave similarly to Nadaraya-Watson kernel estimators with 'equivalent' kernels depending upon m. The equivalent kernels we obtain for penalized splines are the same as those found by Silverman for smoothing splines. The asymptotic distribution of the penalized spline estimator is Gaussian and we give simple expressions for the asymptotic mean and variance. Provided that it is fast enough, the rate at which the number of knots converges to infinity does not affect the asymptotic distribution. The optimal rate of convergence of the penalty parameter is given. Penalized splines are not design-adaptive.
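The estimator studied above (a B-spline basis with an mth-order difference penalty on the coefficients) reduces to a penalized least-squares solve. A sketch with cubic B-splines and a second-order difference penalty follows; the knot count, penalty level, and test function are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_knots=20, degree=3, m=2, lam=1.0):
    """Penalized B-spline (P-spline) fit: least squares with an m-th order
    difference penalty on the spline coefficients."""
    interior = np.linspace(x.min(), x.max(), n_knots)
    t = np.r_[[interior[0]] * degree, interior, [interior[-1]] * degree]
    K = len(t) - degree - 1                      # number of basis functions
    # design matrix: evaluate each basis function via unit coefficient vectors
    B = np.column_stack([BSpline(t, np.eye(K)[i], degree)(x) for i in range(K)])
    D = np.diff(np.eye(K), m, axis=0)            # m-th order difference matrix
    coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ coef

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 300))
f = np.sin(2 * np.pi * x)                        # true regression function
y = f + 0.3 * rng.normal(size=300)
fit = pspline_fit(x, y, lam=0.1)
```

The asymptotics in the paper describe exactly this estimator as the number of knots grows with the sample size, with the penalty controlling the equivalent-kernel bandwidth.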

Posted Content
TL;DR: In this article, the authors derived the limiting distribution of the Sup-Wald test under mild conditions on the errors and regressors for a variety of testing problems and showed that even if the coefficients of the integrated regressors are held fixed but the intercept is allowed to change, the limit distributions are not the same as would prevail in a stationary framework.
Abstract: This paper considers issues related to testing for multiple structural changes in cointegrated systems. We derive the limiting distribution of the Sup-Wald test under mild conditions on the errors and regressors for a variety of testing problems. We show that even if the coefficients of the integrated regressors are held fixed but the intercept is allowed to change, the limit distributions are not the same as would prevail in a stationary framework. Including stationary regressors whose coefficients are not allowed to change does not affect the limiting distribution of the tests under the null hypothesis. We also propose a procedure that allows one to test the null hypothesis of, say, k changes, versus the alternative hypothesis of k + 1 changes. This sequential procedure is useful in that it permits consistent estimation of the number of breaks present. We show via simulations that our tests maintain the correct size in finite samples and are much more powerful than the commonly used LM tests, which suffer from important problems of non-monotonic power in the presence of serial correlation in the errors.

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of estimating the unconditional distribution of a post-model-selection estimator and show that no estimator for this distribution can be uniformly consistent (not even locally).
Abstract: We consider the problem of estimating the unconditional distribution of a post-model-selection estimator. The notion of a post-model-selection estimator here refers to the combined procedure resulting from first selecting a model (e.g., by a model selection criterion like AIC or by a hypothesis testing procedure) and then estimating the parameters in the selected model (e.g., by least-squares or maximum likelihood), all based on the same data set. We show that it is impossible to estimate the unconditional distribution with reasonable accuracy even asymptotically. In particular, we show that no estimator for this distribution can be uniformly consistent (not even locally). This follows as a corollary to (local) minimax lower bounds on the performance of estimators for the distribution; performance is here measured by the probability that the estimation error exceeds a given threshold. These lower bounds are shown to approach 1/2 or even 1 in large samples, depending on the situation considered. Similar impossibility results are also obtained for the distribution of linear functions (e.g., predictors) of the post-model-selection estimator.
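A small Monte Carlo makes the difficulty concrete: with a slope of order n^{-1/2}, a pretest-then-estimate procedure produces a distribution with an atom at zero, far from normal. The pretest rule and constants below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Monte Carlo illustration of a post-model-selection estimator: fit a slope,
# keep it only if "significant", otherwise set it to zero. Under a local
# (n^{-1/2}) slope, the resulting distribution is a mixture with a point mass
# at zero, which no normal approximation captures.
rng = np.random.default_rng(5)
n, reps, beta = 100, 2000, 0.15          # slope of order n^{-1/2}
est = np.empty(reps)
for i in range(reps):
    x = rng.normal(size=n)
    y = beta * x + rng.normal(size=n)
    b = (x @ y) / (x @ x)                            # least-squares slope
    se = np.sqrt(np.sum((y - b * x) ** 2) / (n - 1) / (x @ x))
    est[i] = b if abs(b / se) > 1.96 else 0.0        # pretest selection
share_zero = np.mean(est == 0.0)
```

The atom at zero (a substantial fraction of exact zeros alongside a continuous part) is the kind of irregular limiting behaviour that underlies the impossibility results in the paper.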

Journal ArticleDOI
TL;DR: In this paper, a wide variety of tests and confidence intervals for partially identified parameters that are defined by moment inequalities and equalities are compared, and a recommended test statistic, moment selection critical value method, and implementation method are provided.
Abstract: This paper is concerned with tests and confidence intervals for partially-identified parameters that are defined by moment inequalities and equalities. In the literature, different test statistics, critical value methods, and implementation methods (i.e., asymptotic distribution versus the bootstrap) have been proposed. In this paper, we compare a wide variety of these methods. We provide a recommended test statistic, moment selection critical value method, and implementation method. In addition, we provide a data-dependent procedure for choosing the key moment selection tuning parameter and a data-dependent size-correction factor.

Journal ArticleDOI
TL;DR: In this article, the authors propose a general method for estimating the parameters indexing ODEs from time series by using a nonparametric estimator of regression functions as a first step in the construction of an M-estimator, and show the consistency of the derived estimator under general conditions.
Abstract: Ordinary differential equations (ODEs) are widespread models in physics, chemistry and biology. In particular, this mathematical formalism is used for describing the evolution of complex systems and it might consist of high-dimensional sets of coupled nonlinear differential equations. In this setting, we propose a general method for estimating the parameters indexing ODEs from time series. Our method is able to alleviate the computational difficulties encountered by the classical parametric methods. These difficulties are due to the implicit definition of the model. We propose the use of a nonparametric estimator of regression functions as a first step in the construction of an M-estimator, and we show the consistency of the derived estimator under general conditions. In the case of spline estimators, we prove asymptotic normality, and that the rate of convergence is the usual $\sqrt{n}$-rate for parametric estimators. Some perspectives of refinements of this new family of parametric estimators are given.
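A common concrete instance of the two-step idea in this abstract is gradient matching: smooth the observed trajectory nonparametrically, then pick the parameter whose ODE right-hand side best matches the smoothed derivative. The logistic model, moving-average smoother, and noise level below are illustrative assumptions, not the paper's specific construction.

```python
import numpy as np

# Two-step "gradient matching" sketch: estimate theta in the logistic ODE
# x' = theta * x * (1 - x) without ever solving the ODE inside the optimizer.
rng = np.random.default_rng(4)
theta_true = 2.0
t = np.linspace(0, 3, 400)
x_true = 1 / (1 + 9 * np.exp(-theta_true * t))   # closed form, x(0) = 0.1
obs = x_true + 0.02 * rng.normal(size=t.size)

# step 1: nonparametric smoothing of the trajectory (simple moving average)
w = 25
xs = np.convolve(obs, np.ones(w) / w, mode="same")
inner = slice(w, t.size - w)                     # drop boundary-affected ends
dx = np.gradient(xs, t)                          # smoothed derivative

# step 2: least squares of the smoothed derivative on the ODE right-hand side
g = xs[inner] * (1 - xs[inner])
theta_hat = (g @ dx[inner]) / (g @ g)
```

Because the criterion is an explicit least-squares problem in theta, the heavy numerical integration of classical ODE fitting is avoided, which is the computational advantage the abstract emphasizes.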

Journal ArticleDOI
TL;DR: In this article, a complete model classification is presented for ergodic Pearson diffusions, which form a flexible class of diffusions defined by having linear drift and quadratic squared diffusion coefficient, and it is demonstrated that for this class explicit statistical inference is feasible.
Abstract: The Pearson diffusions form a flexible class of diffusions defined by having linear drift and quadratic squared diffusion coefficient. It is demonstrated that for this class explicit statistical inference is feasible. A complete model classification is presented for the ergodic Pearson diffusions. The class of stationary distributions equals the full Pearson system of distributions. Well-known instances are the Ornstein–Uhlenbeck processes and the square root (CIR) processes. Also diffusions with heavy-tailed and skew marginals are included. Explicit formulae for the conditional moments and the polynomial eigenfunctions are derived. Explicit optimal martingale estimating functions are found. The discussion covers GMM, quasi-likelihood, non-linear weighted least squares estimation and likelihood inference too. The analytical tractability is inherited by transformed Pearson diffusions, integrated Pearson diffusions, sums of Pearson diffusions and Pearson stochastic volatility models. For the non-Markov models, explicit optimal prediction-based estimating functions are found. The estimators are shown to be consistent and asymptotically normal.
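The simplest Pearson diffusion, the Ornstein–Uhlenbeck process, illustrates how the explicit conditional moments support estimation: the exact transition is Gaussian AR(1), so the drift parameter follows from a linear estimating equation. The parameter values below are illustrative assumptions.

```python
import numpy as np

# Exact simulation of the OU process dX = -theta * X dt + sigma dW, then
# estimation of theta from the conditional mean E[X_{t+dt} | X_t] = e^{-theta dt} X_t,
# a special case of the explicit conditional moments available for Pearson diffusions.
rng = np.random.default_rng(7)
theta, sigma, dt, n = 1.5, 0.8, 0.1, 20000
a = np.exp(-theta * dt)                          # exact transition coefficient
v = sigma ** 2 * (1 - a ** 2) / (2 * theta)      # exact transition variance
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):
    x[t + 1] = a * x[t] + np.sqrt(v) * rng.normal()

# linear-drift conditional moment gives an AR(1)-type estimating equation
a_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
theta_hat = -np.log(a_hat) / dt
```

For the other Pearson diffusions (CIR, heavy-tailed, skew) the transition is no longer Gaussian, but the same moment-based estimating-function logic applies thanks to the explicit conditional moments the paper derives.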

Posted Content
TL;DR: In this paper, the authors extended the results of Ai and Chen (2003) on efficient estimation of semiparametric conditional moment models containing unknown parametric components (theta) and unknown functions of endogenous variables (h).
Abstract: This paper greatly extends the results of Ai and Chen (2003) on efficient estimation of semiparametric conditional moment models containing unknown parametric components (theta) and unknown functions of endogenous variables (h). We show that (1) the penalized sieve minimum distance (PSMD) estimator (hat{theta},hat{h}) can simultaneously achieve root-n asymptotic normality of hat{theta} and nonparametric optimal convergence rate of hat{h}, allowing for models with possibly nonsmooth residuals and noncompact infinite dimensional parameter spaces; (2) a simple weighted bootstrap procedure consistently estimates the limiting distribution of the PSMD hat{theta}; (3) the semiparametric efficiency bound formula of Ai and Chen (2003) remains valid for conditional models with nonsmooth residuals, and the optimally weighted PSMD estimator achieves the bound; (4) the profiled optimally weighted PSMD criterion is asymptotically chi-square distributed. We illustrate our general theories using a partially linear quantile instrumental variables regression, a Monte Carlo study, and an empirical estimation of the shape-invariant quantile Engel curves with endogenous total expenditure.

Journal ArticleDOI
TL;DR: In this article, the authors provide a transparent, robust, and computationally feasible statistical platform for restricted likelihood ratio testing (RLRT) for zero variance components in linear mixed models.
Abstract: The goal of our article is to provide a transparent, robust, and computationally feasible statistical platform for restricted likelihood ratio testing (RLRT) for zero variance components in linear mixed models. This problem is nonstandard because under the null hypothesis the parameter is on the boundary of the parameter space. Our proposed approach is different from the asymptotic results of Stram and Lee who assumed that the outcome vector can be partitioned into many independent subvectors. Thus, our methodology applies to a wider class of mixed models, which includes models with a moderate number of clusters or nonparametric smoothing components. We propose two approximations to the finite sample null distribution of the RLRT statistic. Both approximations converge weakly to the asymptotic distribution obtained by Stram and Lee when their assumptions hold. When their assumptions do not hold, we show in extensive simulation studies that both approximations outperform the Stram and Lee approximation and...

Journal ArticleDOI
TL;DR: In this paper, the authors present two robust estimates for GARCH models, the first is defined by the minimization of a conveniently modified likelihood and the second is similarly defined, but includes an additional mechanism for restricting the propagation of the effect of one outlier on the next estimated conditional variances.

Journal ArticleDOI
TL;DR: In this article, a sound approach to bandwidth selection in nonparametric kernel testing is proposed, where the main idea is to find an Edgeworth expansion of the asymptotic distribution of the test concerned and then determine how the bandwidth should be chosen according to certain requirements for both the size and power functions.
Abstract: We propose a sound approach to bandwidth selection in nonparametric kernel testing. The main idea is to find an Edgeworth expansion of the asymptotic distribution of the test concerned. Due to the involvement of a kernel bandwidth in the leading term of the Edgeworth expansion, we are able to establish closed-form expressions to explicitly represent the leading terms of both the size and power functions and then determine how the bandwidth should be chosen according to certain requirements for both the size and power functions. For example, when a significance level is given, we can choose the bandwidth such that the power function is maximized while the size function is controlled by the significance level. Both asymptotic theory and methodology are established. In addition, we develop an easy implementation procedure for the practical realization of the established methodology and illustrate this on two simulated examples and a real data example.

Journal ArticleDOI
TL;DR: In this paper, a new nonparametric estimation of conditional value-at-risk and expected shortfall functions is proposed by inverting the weighted double kernel local linear estimate of the conditional distribution function.

Journal ArticleDOI
TL;DR: It is established that the hazard function of the Birnbaum–Saunders distribution is an upside-down function for all values of the shape parameter.

Journal ArticleDOI
TL;DR: A new, simple, consistent, and powerful test for independence is proposed using symbolic dynamics and permutation entropy as a measure of serial dependence, and a standard asymptotic distribution of an affine transformation of the permutation entropy under the null hypothesis of independence is derived.

Journal ArticleDOI
TL;DR: The statistical inferences on Weibull parameters when the data are type-II hybrid censored are presented and the method of obtaining the optimum censoring scheme based on the maximum information measure is developed.
Abstract: A hybrid censoring scheme is a mixture of type-I and type-II censoring schemes. This article presents the statistical inferences on Weibull parameters when the data are type-II hybrid censored. The maximum likelihood estimators, and the approximate maximum likelihood estimators are developed for estimating the unknown parameters. Asymptotic distributions of the maximum likelihood estimators are used to construct approximate confidence intervals. Bayes estimates, and the corresponding highest posterior density credible intervals of the unknown parameters, are obtained using suitable priors on the unknown parameters, and by using Markov chain Monte Carlo techniques. The method of obtaining the optimum censoring scheme based on the maximum information measure is also developed. We perform Monte Carlo simulations to compare the performances of the different methods, and we analyse one data set for illustrative purposes.
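The censored Weibull likelihood at the core of this abstract can be sketched directly. For simplicity the snippet below uses plain type-II censoring (stop at the r-th failure); the hybrid scheme adds a time threshold on top of this rule. Sample sizes and true parameters are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, obs, tau, n_cens):
    """Negative Weibull log-likelihood: observed failure times `obs` plus
    `n_cens` units right-censored at time `tau`."""
    k, lam = params                       # shape, scale
    if k <= 0 or lam <= 0:
        return np.inf
    z = obs / lam
    ll = np.sum(np.log(k / lam) + (k - 1) * np.log(z) - z ** k)
    ll += -n_cens * (tau / lam) ** k      # log-survival of the censored units
    return -ll

rng = np.random.default_rng(11)
n, r = 500, 400
sample = np.sort(rng.weibull(2.0, n) * 3.0)   # true shape 2, scale 3
obs, tau = sample[:r], sample[r - 1]          # type-II censoring at r-th failure
fit = minimize(neg_loglik, x0=[1.0, obs.mean()], args=(obs, tau, n - r),
               method="Nelder-Mead")
k_hat, lam_hat = fit.x
```

The asymptotic normality of these maximum likelihood estimators is what the article uses to construct the approximate confidence intervals it compares against the Bayesian credible intervals.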