
Showing papers in "Test in 2002"


Journal ArticleDOI
01 Jun 2002-Test
TL;DR: The univariate skew-normal distribution was introduced by Azzalini in 1985 as a natural extension of the classical normal density to accommodate asymmetry; the class was later extended to a multivariate analog, and Arnold et al. (1993) introduced a more general skew-normal distribution as the marginal of a truncated bivariate normal.
Abstract: The univariate skew-normal distribution was introduced by Azzalini in 1985 as a natural extension of the classical normal density to accommodate asymmetry. He extensively studied the properties of this distribution and, in conjunction with coauthors, extended this class to include the multivariate analog of the skew-normal. Arnold et al. (1993) introduced a more general skew-normal distribution as the marginal distribution of a truncated bivariate normal distribution in which X was retained only if Y satisfied certain constraints. Using this approach, more general univariate and multivariate skewed distributions have been developed. A survey of such models is provided, together with a discussion of related inference questions.
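
A minimal numerical sketch of the truncation construction described above (illustrative only, not the authors' code; the function name and parameter values are mine): drawing independent standard normals X, Y and retaining X only when Y < αX yields Azzalini's skew-normal with density 2φ(x)Φ(αx).

```python
import numpy as np

def skew_normal_by_truncation(alpha, size, seed=None):
    """Sample from the skew-normal SN(alpha) by truncating a bivariate normal:
    draw independent X, Y ~ N(0,1) and keep X only when Y < alpha*X.
    P(accept | X = x) = Phi(alpha*x), so the accepted X has density
    2*phi(x)*Phi(alpha*x)."""
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < size:
        x = rng.standard_normal(size)
        y = rng.standard_normal(size)
        out.extend(x[y < alpha * x])
    return np.array(out[:size])

samples = skew_normal_by_truncation(alpha=3.0, size=10_000, seed=0)
# Sanity check against the known mean of SN(alpha): sqrt(2/pi)*alpha/sqrt(1+alpha^2)
print(samples.mean(), np.sqrt(2 / np.pi) * 3.0 / np.sqrt(1 + 3.0**2))
```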

273 citations


Journal ArticleDOI
01 Dec 2002-Test
TL;DR: A functional nonparametric model for time series prediction that uses a continuous set of past values as predictor, linking the rates of convergence with the fractal dimension of the functional process.
Abstract: In this paper we propose a functional nonparametric model for time series prediction. The originality of this model consists in using a continuous set of past values as predictor. This time series problem is presented in the general framework of regression estimation from dependent samples with a regressor valued in some infinite-dimensional semi-normed vector space. The curse of dimensionality induced by our approach is overcome by means of fractal dimension considerations. We give asymptotics for a kernel-type nonparametric predictor, linking the rates of convergence with the fractal dimension of the functional process. Finally, our method has been implemented and applied to some electricity consumption data.
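
A sketch of a kernel-type predictor with a curve-valued regressor, assuming curves are stored as discretized vectors; the L2 distance, Gaussian-type kernel, bandwidth, and function name are illustrative choices, not the paper's exact specification (and the fractal-dimension asymptotics are not reproduced).

```python
import numpy as np

def functional_kernel_predict(X_curves, y, x_new, h):
    """Nadaraya-Watson-type predictor with a functional (curve-valued) regressor.
    X_curves: (n, p) array, each row a discretized past curve.
    Uses an L2 (semi-)norm between curves and a Gaussian-type kernel."""
    d = np.sqrt(((X_curves - x_new) ** 2).mean(axis=1))  # distance between curves
    w = np.exp(-0.5 * (d / h) ** 2)                      # kernel weights
    return np.sum(w * y) / np.sum(w)

# Toy example: predict a scalar response from the past week's curve.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 7))
y = X.mean(axis=1) + 0.1 * rng.standard_normal(50)
print(functional_kernel_predict(X, y, X[0], h=0.5))
```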

84 citations


Journal ArticleDOI
01 Dec 2002-Test
TL;DR: In this article, a class of consistent semi-parametric estimators of a positive tail index γ, parameterized by a tuning or control parameter α, is considered, which enables access, for any available sample, to an estimator of γ with a null dominant component of asymptotic bias and a reasonably flat Mean Squared Error pattern.
Abstract: In this paper we consider a class of consistent semi-parametric estimators of a positive tail index γ, parameterized by a tuning or control parameter α. Such a control parameter enables us to have access, for any available sample, to an estimator of γ with a null dominant component of asymptotic bias, and with a reasonably flat Mean Squared Error pattern, as a function of k, the number of top order statistics considered. Moreover, we are able to achieve a high efficiency relative to the classical Hill estimator, provided we may have access to a larger number of top order statistics than the number needed for optimal estimation through the Hill estimator.
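
For orientation, a sketch of the classical Hill estimator that the paper takes as its efficiency benchmark (the paper's α-controlled estimators are not reproduced here; the demo distribution and k are my choices).

```python
import numpy as np

def hill_estimator(x, k):
    """Classical Hill estimator of a positive tail index gamma, based on
    the k largest order statistics: mean of log(X_(n-i+1)) - log(X_(n-k))."""
    xs = np.sort(x)
    n = len(xs)
    top = xs[n - k:]                          # k largest observations
    return np.mean(np.log(top)) - np.log(xs[n - k - 1])

# Pareto sample with survival function x^(-2) on [1, inf): tail index 1/2.
rng = np.random.default_rng(1)
x = rng.pareto(2.0, size=10_000) + 1.0
print(hill_estimator(x, k=200))               # close to 0.5
```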

37 citations


Journal ArticleDOI
Robert T. Clemen1
01 Jun 2002-Test
TL;DR: Conditions under which a DM can use a strictly proper scoring rule as a contract to give an expert an incentive to gather an amount of information that is optimal from the DM’s perspective are described.
Abstract: When a decision maker (DM) contracts with an expert to provide information, the nature of the contract can create incentives for the expert, and it is up to the DM to ensure that the contract provides incentives that align the expert's and DM's interests. In this paper, scoring rules (and related functions) are viewed as such contracts and are reinterpreted in terms of agency theory and the theory of revelation games from economics. Although scoring rules have typically been discussed in the literature as devices for eliciting and evaluating subjective probabilities, this study relies on the fact that strictly proper scoring rules reward greater expertise as well as honest revelation. We describe conditions under which a DM can use a strictly proper scoring rule as a contract to give an expert an incentive to gather an amount of information that is optimal from the DM's perspective. The conditions we consider focus on the expert's cost structure, and we find that the DM must have substantial knowledge of that cost structure in order to design a specific contract that provides the correct incentives. The model and analysis suggest arguments for hiring and maintaining experts in-house rather than using outside consultants.
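
A small numerical check of the fact the paper relies on, that a strictly proper scoring rule rewards honest revelation; the quadratic (Brier) rule is used here as a standard example, and the contract-design analysis itself is not reproduced.

```python
import numpy as np

def brier_score(report, outcome):
    """Quadratic (Brier) scoring rule, positively oriented (higher is better).
    It is strictly proper: truthful reporting maximizes the expected score."""
    return 1.0 - (outcome - report) ** 2

def expected_score(report, true_p):
    """Expert's expected payoff when the event occurs with probability true_p."""
    return true_p * brier_score(report, 1) + (1 - true_p) * brier_score(report, 0)

reports = np.linspace(0, 1, 101)
true_p = 0.7
best = reports[np.argmax([expected_score(r, true_p) for r in reports])]
print(best)   # ~0.7: honest revelation maximizes the expected score
```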

28 citations


Journal ArticleDOI
01 Jun 2002-Test
TL;DR: In this article, a new empirical curve for approximating a distribution function under right-censoring and length-bias is introduced, which is closely related to the product-limit Kaplan-Meier estimator.
Abstract: Length-biased and censored data may appear when analyzing times of duration. In this work, a new empirical curve $\tilde F$ for approximating a distribution function $F$ under right-censoring and length-bias is introduced. The proposed estimate is (not equal to but) closely related to the product-limit Kaplan-Meier estimator. Strong consistency and distributional convergence are established for a general empirical parameter $\tilde\gamma = g\left(\int \varphi_1 \, d\tilde F, \ldots, \int \varphi_\tau \, d\tilde F\right)$. As applications, one can obtain the corresponding large-sample results for estimates of the distribution function, the cumulative hazard function, and the mean residual time function. The new method is illustrated with real data concerning unemployment duration.
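
For reference, a sketch of the classical product-limit Kaplan-Meier estimator to which the proposed curve is closely related; the paper's length-bias correction is not implemented here, and the tie-breaking convention is a standard assumption of mine.

```python
import numpy as np

def kaplan_meier(times, events):
    """Classical product-limit (Kaplan-Meier) estimator of the survival
    function from right-censored data; events[i] = 1 for a failure,
    0 for a censored observation."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    order = np.lexsort((-events, times))     # at ties, failures before censorings
    times, events = times[order], events[order]
    at_risk = len(times) - np.arange(len(times))   # risk-set size at each time
    factors = np.where(events == 1, 1.0 - 1.0 / at_risk, 1.0)
    return times, np.cumprod(factors)        # survival; distribution is 1 - S

t, s = kaplan_meier([3, 5, 5, 8, 12], [1, 0, 1, 1, 0])
print(list(zip(t, np.round(s, 3))))          # S(3)=0.8, S(5)=0.6, S(8)=0.3
```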

23 citations


Journal ArticleDOI
01 Dec 2002-Test
TL;DR: In this paper, general sufficient conditions for the moderate deviations of M-estimators are presented, including the p-th quantile, the spatial median, the least absolute deviation estimator in linear regression, maximum likelihood estimators and other location estimators.
Abstract: General sufficient conditions for the moderate deviations of M-estimators are presented. These results are applied to many different types of M-estimators such as the p-th quantile, the spatial median, the least absolute deviation estimator in linear regression, maximum likelihood estimators and other location estimators. Moderate deviation theorems for empirical processes are applied.

20 citations


Journal ArticleDOI
01 Dec 2002-Test
TL;DR: The nonparametric estimation of the regression function and its derivatives is studied using a modified version of estimators obtained by weighted local polynomial fitting; expressions for the bias and the variance/covariance matrix of the estimators are obtained.
Abstract: Consider the fixed regression model with random observation errors that follow an AR(1) correlation structure. In this paper, we study the nonparametric estimation of the regression function and its derivatives using a modified version of estimators obtained by weighted local polynomial fitting. The asymptotic properties of the proposed estimators are studied: expressions for the bias and the variance/covariance matrix of the estimators are obtained, and their joint asymptotic normality is established. In a simulation study, the proposed regression estimator shows a better Mean Integrated Squared Error than the classical local polynomial estimator when the correlation of the observations is large.
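
A sketch of plain weighted local polynomial fitting at a single point, the classical estimator that the paper modifies for AR(1) errors; the Gaussian kernel, bandwidth, and function name are illustrative choices.

```python
import numpy as np

def local_poly_fit(x, y, x0, h, degree=1):
    """Weighted local polynomial fit at x0 with a Gaussian kernel.
    Coefficient j of the returned vector estimates m^(j)(x0) / j!,
    so beta[0] estimates m(x0) and beta[1] estimates m'(x0)."""
    u = (x - x0) / h
    w = np.exp(-0.5 * u ** 2)                             # kernel weights
    X = np.vander(x - x0, degree + 1, increasing=True)    # [1, (x-x0), ...]
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)      # weighted least squares

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(200)
print(local_poly_fit(x, y, x0=0.25, h=0.1))   # roughly [1.0, 0.0] at the sine peak
```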

17 citations


Journal ArticleDOI
01 Jun 2002-Test
TL;DR: In this article, a weighted L2-Wasserstein distance is considered and it is proven that these statistics retain the loss of degrees of freedom property for general classes of distributions, if applied separately to the location family and to the scale family and if the "right" weight function is used.
Abstract: In two recent papers del Barrio et al. (1999) and del Barrio et al. (2000) consider a new class of goodness-of-fit statistics based on the L2-Wasserstein distance. They derive the limiting distribution of these statistics and show that the normal distribution is the only location-scale family for which this limiting distribution has the "loss of degrees of freedom" property, due to the estimation of the unknown parameters. In this paper a weighted L2-Wasserstein distance is considered and it is proven that these statistics retain the loss of degrees of freedom property for general classes of distributions, if applied separately to the location family and to the scale family and if the "right" weight function is used. These weight functions are such that the corresponding minimum distance estimators for the location parameter and the scale parameter are asymptotically efficient. Examples are discussed for both location and scale families.
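
A sketch of an unweighted L2-Wasserstein-type discrepancy, the distance underlying these statistics, computed between a standardized sample and the standard normal; the paper's weight functions, normalization, and limiting-distribution theory are not reproduced, and the grid approximation is mine.

```python
import numpy as np
from scipy.stats import norm

def l2_wasserstein_to_normal(x, n_grid=10_000):
    """Approximate the squared L2-Wasserstein distance between the empirical
    quantile function of a standardized sample and the standard normal
    quantile function, on a grid of the unit interval."""
    z = (x - x.mean()) / x.std(ddof=0)        # estimated location and scale
    t = (np.arange(n_grid) + 0.5) / n_grid
    emp_q = np.quantile(z, t)                  # empirical quantile function
    return np.mean((emp_q - norm.ppf(t)) ** 2)

rng = np.random.default_rng(3)
print(l2_wasserstein_to_normal(rng.standard_normal(500)))    # small under normality
print(l2_wasserstein_to_normal(rng.exponential(size=500)))   # larger under skewness
```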

16 citations


Journal ArticleDOI
01 Dec 2002-Test
TL;DR: This paper is a review of spatial-temporal nonlinear filtering, and it is illustrated in a Command and Control setting where the objects are highly mobile weapons, and the nonlinear function of object locations is a two-dimensional surface known as the danger-potential field.
Abstract: A hierarchical statistical model is made up generically of a data model, a process model, and occasionally a prior model for all the unknown parameters. The process model, known as the state equations in the filtering literature, is where most of the scientist's physical/chemical/biological knowledge about the problem is used. In the case of a dynamically changing configuration of objects moving through a spatial domain of interest, that knowledge is summarized through equations of motion with random perturbations. In this paper, our interest is in dynamically filtering noisy observations on these objects, where the state equations are nonlinear. Two recent methods of filtering, the Unscented Particle filter (UPF) and the Unscented Kalman filter, are presented and compared to the better-known Extended Kalman filter. Other sources of nonlinearity arise when we wish to estimate nonlinear functions of the objects' positions; it is here that the UPF shows its superiority, since optimal estimates and associated variances are straightforward to obtain. The longer computing time needed for the UPF is often not a big issue, given the ever-faster processors that are available. This paper is a review of spatial-temporal nonlinear filtering, and we illustrate it in a Command and Control setting where the objects are highly mobile weapons, and the nonlinear function of object locations is a two-dimensional surface known as the danger-potential field.
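
A sketch of the unscented transform, the core step shared by the Unscented Kalman and Unscented Particle filters: a small set of sigma points is propagated through the nonlinearity to approximate the transformed mean and covariance. The scaling parameters below are common textbook defaults, not the paper's settings, and the full filter recursion is not reproduced.

```python
import numpy as np

def unscented_transform(f, m, P, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate mean m and covariance P through a nonlinear map f using
    2n+1 sigma points with the standard scaled-UT weights."""
    n = len(m)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)          # matrix square root
    sigma = np.vstack([m, m + S.T, m - S.T])       # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    Y = np.array([f(s) for s in sigma])            # transformed sigma points
    mean = wm @ Y
    cov = (wc[:, None] * (Y - mean)).T @ (Y - mean)
    return mean, cov

# Toy nonlinearity: polar-to-Cartesian conversion of a noisy position.
f = lambda s: np.array([s[0] * np.cos(s[1]), s[0] * np.sin(s[1])])
m, P = np.array([1.0, np.pi / 4]), np.diag([0.01, 0.05])
print(unscented_transform(f, m, P))
```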

13 citations


Journal ArticleDOI
01 Jun 2002-Test
TL;DR: In this article, a general theorem concerning the asymptotic null distribution of weighted correlation test statistics for scale families was formulated, and it was shown that the resulting tests may work not only for all Weibull scale families, but also for all Pareto scale families.
Abstract: We show that weighted versions of recent correlation tests do not require light underlying tails. We formulate a general theorem concerning the asymptotic null distribution of weighted correlation test statistics for scale families, and demonstrate that the resulting tests may work not only for all Weibull scale families, but, with suitable choices of the weight functions, even for all Pareto scale families, in each of which the scale varies the left endpoint of the distribution.

8 citations


Journal ArticleDOI
01 Jun 2002-Test
TL;DR: Previous ideas about non-linear biplots are extended to achieve a joint representation of multivariate normal populations and any parametric function, without assumptions about the covariance matrices.
Abstract: Some previous ideas about non-linear biplots to achieve a joint representation of multivariate normal populations and any parametric function without assumptions about the covariance matrices are extended. Usual restrictions on the covariance matrices (such as homogeneity) are avoided. Variables are represented as curves corresponding to the directions of maximum variation of the means. To demonstrate the versatility of the method, the representation of variances and covariances, as an example of further possible interesting parametric functions, has been developed. This method is illustrated with two different data sets, and the results are compared with those obtained using two other distances for the multivariate normal case: the Mahalanobis distance (assuming a common covariance matrix for all populations) and Rao's distance, assuming a common eigenvector structure for all the covariance matrices.

Journal ArticleDOI
01 Dec 2002-Test
TL;DR: The generalized Liouville family is investigated to model compositional data that include covariates, and semiparametric Bayesian methods are proposed to estimate the probability density.
Abstract: Compositional data occur as natural realizations of multivariate observations comprising element proportions of some whole quantity. Such observations predominate in disciplines like geology, biology, ecology, economics and chemistry. Due to the unit-sum constraint on compositional data, specialized statistical methods are required for analyzing these data. Dirichlet distributions were originally used to study compositional data even though this family of distributions is not appropriate (see Aitchison, 1986) because of its extreme independence properties. Aitchison (1982) endeavored to provide a viable alternative to existing methods by employing the logistic normal distribution to analyze such constrained data. However, this family does not include the Dirichlet class and is therefore unable to address the issue of extreme independence. In this paper the generalized Liouville family is investigated to model compositional data that include covariates. This class permits distributions that admit negative or mixed correlation, contains non-Dirichlet distributions with non-positive correlation, and thus overcomes deficits in the Dirichlet class. Semiparametric Bayesian methods are proposed to estimate the probability density. Predictive distributions are used to assess the performance of the model. The methods are illustrated on a real data set.
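
For context, a sketch of the Dirichlet baseline the paper moves away from: compositions built from normalized independent Gammas, whose unit-sum constraint forces non-positive pairwise correlations. The generalized Liouville model itself is not reproduced; the parameter values are illustrative.

```python
import numpy as np

# A Dirichlet composition via normalized independent Gammas. Off-diagonal
# correlations are always negative, the rigidity that the generalized
# Liouville class is designed to relax.
rng = np.random.default_rng(4)
a = np.array([2.0, 3.0, 5.0])               # Dirichlet parameters
g = rng.gamma(a, size=(10_000, 3))          # independent Gamma(a_j, 1) draws
comp = g / g.sum(axis=1, keepdims=True)     # rows lie on the unit simplex
print(np.corrcoef(comp.T).round(3))         # off-diagonal entries are negative
```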

Journal ArticleDOI
01 Jun 2002-Test
TL;DR: In this paper, a confidence set which has smallest expected effective length at the origin is proposed, and for the known variance case, its induced test can be shown to have constant level α.
Abstract: Under the multivariate normal setup, the bioequivalence problem is studied using the confidence approach. First, a confidence set which has the smallest expected effective length at the origin is proposed. For the known-variance case, its induced test can be shown to have constant level α. It is also unbiased and uniformly most powerful among equivariant tests. For the unknown-variance case, an approximate confidence set is proposed. The induced test enjoys similar good properties. Simulations show that, in general, our test substantially outperforms some existing tests.

Journal ArticleDOI
01 Dec 2002-Test
TL;DR: The alternative procedure for updating probabilities (that is, calculating the posterior distribution from the prior distribution) proposed by Richard Jeffrey is considered, which allows the addition of new information to the prior distribution under more circumstances than Bayesian conditioning.
Abstract: In this paper the alternative procedure for updating probabilities (that is, calculating the posterior distribution from the prior distribution) proposed by Richard Jeffrey is considered, which allows the addition of new information to the prior distribution under more circumstances than Bayesian conditioning. A predictivistic approach to Jeffrey's rule is introduced and a definition of conjugacy according to this rule (named Jeffrey-conjugacy) is established. Results for Jeffrey-conjugacy in the exponential family are also presented. As a by-product, these results provide full predictivistic characterizations of some predictive distributions. Using both the predictivistic Jeffrey's rule and Jeffrey-conjugacy, a forecasting procedure is developed and applied to Chilean stock market data. Jeffrey's rule and Bayesian conditioning are compared according to their capability of incorporating unpredictable information into the forecast.
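
A minimal sketch of Jeffrey's rule itself, assuming a finite partition {E_i} whose probabilities are shifted by the evidence to new values q_i; the paper's predictivistic machinery and Jeffrey-conjugacy results are not reproduced.

```python
import numpy as np

def jeffrey_update(p_a_given_e, q_e):
    """Jeffrey's rule: when evidence only shifts the probabilities of a
    partition {E_i} to new values q_i (not necessarily 0/1),
    P_new(A) = sum_i q_i * P(A | E_i). Bayesian conditioning is the
    special case in which some q_i equals 1."""
    return np.dot(q_e, p_a_given_e)

p_a_given_e = np.array([0.9, 0.2])                         # P(A|E1), P(A|E2)
print(jeffrey_update(p_a_given_e, np.array([0.7, 0.3])))   # soft evidence: 0.69
print(jeffrey_update(p_a_given_e, np.array([1.0, 0.0])))   # ordinary conditioning: 0.9
```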

Journal ArticleDOI
01 Jan 2002-Test
TL;DR: This paper examines a new class of continuous distribution estimators obtained as a combination of Barron-type estimators with the frequency polygon and proves the consistency of these estimators in expected information divergence and expected χ2-divergence.
Abstract: Barron-type estimators are histogram-based distribution estimators that have been proved to have good consistency properties according to several information-theoretic criteria. However, they are not continuous. In this paper, we examine a new class of continuous distribution estimators obtained as a combination of Barron-type estimators with the frequency polygon. We prove the consistency of these estimators in expected information divergence and expected χ2-divergence. For one of them we evaluate the rate of convergence in expected χ2-divergence.
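
A sketch of the frequency-polygon ingredient: a histogram density made continuous by joining bar heights at the bin midpoints. The Barron-type component and the divergence analysis are not reproduced; the bin count is an arbitrary choice.

```python
import numpy as np

def frequency_polygon(x, bins=20):
    """Continuous density estimate obtained by linearly interpolating
    histogram bar heights at the bin midpoints."""
    heights, edges = np.histogram(x, bins=bins, density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    return lambda t: np.interp(t, mids, heights, left=0.0, right=0.0)

rng = np.random.default_rng(7)
f_hat = frequency_polygon(rng.standard_normal(2_000))
print(np.round(f_hat(np.array([-1.0, 0.0, 1.0])), 3))   # near the N(0,1) density
```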

Journal ArticleDOI
01 Jun 2002-Test
TL;DR: A complete Bayesian method for analyzing a threshold autoregressive (TAR) model when the order of the model is unknown, based on a version of the reversible jump algorithm of Green and the method for estimating marginal likelihood from the Metropolis-Hastings algorithm.
Abstract: We provide a complete Bayesian method for analyzing a threshold autoregressive (TAR) model when the order of the model is unknown. Our approach is based on a version (Godsill (2001)) of the reversible jump algorithm of Green (1995), and on the method for estimating the marginal likelihood from the Metropolis-Hastings algorithm by Chib and Jeliazkov (2001). We illustrate our results with simulated data and Wolf's sunspot data set.
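
A sketch of the two-regime TAR model being analyzed, simulated for fixed orders and threshold; the reversible jump sampler itself is much longer and is not reproduced, and all parameter values here are illustrative.

```python
import numpy as np

def simulate_tar(n, phi1, phi2, r=0.0, d=1, sigma=1.0, seed=None):
    """Simulate a two-regime threshold autoregressive series of order 1:
    y_t = phi1*y_{t-1} + e_t if y_{t-d} <= r, else phi2*y_{t-1} + e_t."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n + d)
    for t in range(d, n + d):
        phi = phi1 if y[t - d] <= r else phi2   # regime chosen by the lagged value
        y[t] = phi * y[t - 1] + sigma * rng.standard_normal()
    return y[d:]

y = simulate_tar(500, phi1=0.6, phi2=-0.4, r=0.0, seed=8)
print(y[:5].round(3))
```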

Journal ArticleDOI
01 Jun 2002-Test
TL;DR: In this article, the estimation of the finite population distribution function under several sampling strategies based on a PPS cluster sampling, i.e., with cluster selection probabilities proportional to size, is studied.
Abstract: The estimation of the finite population distribution function under several sampling strategies based on PPS cluster sampling, i.e., with cluster selection probabilities proportional to size, is studied. For the estimation of population means and totals, it is well known that this type of strategy gives good results if the cluster selection probabilities are proportional to the total of the variable under study, or of a related auxiliary variable, over the cluster. It is proved that, for the estimation of the distribution function using cluster sampling, this solution is not good in general and, under an appropriate criterion, the optimal cluster selection probabilities that minimize the variance of the estimator are obtained. This methodology is applied to two classical PPS sampling strategies: sampling with replacement, with the Hansen-Hurwitz estimator, and random groups sampling, with the Rao-Hartley-Cochran estimator. Finally, a small simulation comparing the efficiency of this approach with other methods is presented.
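
A sketch of the Hansen-Hurwitz estimator under PPS sampling with replacement, here estimating a population total; the paper's optimal selection probabilities for the distribution function are not reproduced, and the toy population is mine.

```python
import numpy as np

def hansen_hurwitz_total(y_sample, p_sample):
    """Hansen-Hurwitz estimator of a population total under PPS sampling
    with replacement: the mean of y_i / p_i over the draws, where p_i is
    the single-draw selection probability of the sampled unit. Unbiased,
    since E[y_I / p_I] = sum_i p_i * (y_i / p_i) = sum_i y_i."""
    return np.mean(np.asarray(y_sample) / np.asarray(p_sample))

# PPS-with-replacement demo on a small population with size measure x.
rng = np.random.default_rng(5)
x = np.array([10, 20, 30, 40.0])          # auxiliary sizes
y = np.array([12, 19, 33, 36.0])          # study variable, true total = 100
p = x / x.sum()                            # selection probs proportional to size
idx = rng.choice(4, size=1_000, replace=True, p=p)
print(hansen_hurwitz_total(y[idx], p[idx]))   # close to 100
```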

Journal ArticleDOI
01 Dec 2002-Test
TL;DR: In this article, the relationship between exhaustivity and invariance is studied under the assumptions of stability and bounded completeness, and an analogue of two classical theorems of Hall, Wijsman and Ghosh (1965) and Berk (1972) is given, making use of a concept introduced under the name of bounded G-completeness.
Abstract: In this paper, the relationship between exhaustivity (or sufficiency in the sense of Blackwell) and invariance is studied. An analogue of two classical theorems of Hall, Wijsman and Ghosh (1965) and Berk (1972) on the relationship between sufficiency and invariance is given, making use of a concept introduced here under the name of bounded G-completeness; in particular, we obtain the same conclusion under the assumptions of stability and bounded completeness.

Journal ArticleDOI
01 Dec 2002-Test
TL;DR: In this article, the robustness of a set of proposals to estimate the parameters in the MA(1) time series model is studied empirically by means of simulations, and the estimation of the variance of the contaminated errors is also studied through simulations.
Abstract: The main purpose of this work is to study empirically, by means of simulations, the robustness of a set of proposals to estimate the parameters in the MA(1) time series model. The non-normal populations are mixtures of normal distributions, defined by $g(x)=pN(0,k)+(1-p)N(0,1)$, where the proportion of contamination most frequently used is $p=0.10$ and $k$ is the variance of the distribution used in the contamination; $\alpha$ is taken to be $0.90$, which is close to the region of non-invertibility. Key results are that the estimation procedures used in the study provide good results in terms of biases in the estimation of the parameters, and that the biases are not changed when contaminated errors (mixtures) are considered. The estimation of the variance of the contaminated errors is also studied through simulations.
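
A sketch of the simulation design described above: an MA(1) series whose innovations follow the stated normal mixture $g(x)=pN(0,k)+(1-p)N(0,1)$. The estimation procedures under study are not reproduced; `theta` plays the role of the abstract's α, and the value of k is an illustrative assumption.

```python
import numpy as np

def simulate_ma1_contaminated(n, theta, p=0.10, k=9.0, seed=None):
    """MA(1) series y_t = e_t + theta*e_{t-1} whose innovations follow the
    normal mixture p*N(0, k) + (1-p)*N(0, 1), with k the variance of the
    contaminating component."""
    rng = np.random.default_rng(seed)
    scale = np.where(rng.random(n + 1) < p, np.sqrt(k), 1.0)   # mixture draw
    e = scale * rng.standard_normal(n + 1)
    return e[1:] + theta * e[:-1]

y = simulate_ma1_contaminated(1_000, theta=0.90, seed=9)   # near non-invertibility
print(np.round([y.mean(), y.var()], 3))
```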

Journal ArticleDOI
01 Dec 2002-Test
TL;DR: In this paper, it is shown that the optimal selection of a regression model is achieved by maximizing the posterior probability of a submodel, and that the geometric criterion obtained in this way is asymptotically equivalent to Schwarz's BIC criterion.
Abstract: The posterior probabilities of $K$ given models when improper priors are used depend on the proportionality constants assigned to the prior densities corresponding to each of the models. It is shown that this assignment can be done using natural geometric priors in multiple regression problems if the normal distribution of the residual errors is truncated. This truncation is a realistic modification of the regression models, and since it is made far away from the mean, it has no effect other than the determination of the proportionality constants, provided that the sample size is not too large. In the case $K=2$, the posterior odds ratio is related to the usual $F$ statistic of "classical" statistics. Assuming zero-one losses, the optimal selection of a regression model is achieved by maximizing the posterior probability of a submodel. It is shown that the geometric criterion obtained in this way is asymptotically equivalent to Schwarz's asymptotic Bayesian criterion, sometimes called the BIC criterion. An example of polynomial regression is used to provide numerical comparisons between the new geometric criterion, the BIC criterion and the Akaike information criterion.
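
A small numerical illustration of the polynomial-regression comparison mentioned above, using the standard Gaussian-likelihood forms of BIC and AIC; the geometric criterion itself is not reproduced, and the simulated data are mine.

```python
import numpy as np

def ic_for_poly(x, y, degree):
    """BIC and AIC for a Gaussian polynomial regression of a given degree:
    BIC = n*log(RSS/n) + p*log(n),  AIC = n*log(RSS/n) + 2p."""
    X = np.vander(x, degree + 1)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, p = len(y), degree + 1
    return n * np.log(rss / n) + p * np.log(n), n * np.log(rss / n) + 2 * p

rng = np.random.default_rng(6)
x = np.linspace(-1, 1, 100)
y = 1 + 2 * x - 3 * x**2 + 0.3 * rng.standard_normal(100)   # true degree 2
for d in range(1, 6):
    bic, aic = ic_for_poly(x, y, d)
    print(d, round(bic, 1), round(aic, 1))   # both criteria favor degree 2
```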