
Showing papers in "Biometrika in 1984"


Journal Article•DOI•
TL;DR: In this paper, the authors develop a test for unit roots based on approximating an autoregressive-moving average model by an autoregression; the test statistic has a limit distribution whose percentiles have been tabulated.
Abstract: SUMMARY Recently, methods for detecting unit roots in autoregressive and autoregressive-moving average time series have been proposed. The presence of a unit root indicates that the time series is not stationary but that differencing will reduce it to stationarity. The tests proposed to date require specification of the number of autoregressive and moving average coefficients in the model. In this paper we develop a test for unit roots which is based on an approximation of an autoregressive-moving average model by an autoregression. The test statistic is standard output from most regression programs and has a limit distribution whose percentiles have been tabulated. An example is provided.
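To make the construction concrete, here is a minimal numpy sketch of a test of this kind (the lag order p, the simulated series and all names are illustrative, and this is only an approximation to the procedure the paper studies): regress the differenced series on the lagged level and a few lagged differences, and take the studentized coefficient of the lagged level as the test statistic, referring it to the tabulated percentiles of the limit distribution rather than to the normal.

```python
import numpy as np

def unit_root_t_stat(y, p=4):
    """Studentized coefficient of the lagged level in the regression of
    diff(y) on an intercept, the lagged level and p lagged differences
    (an autoregressive approximation to the ARMA model)."""
    dy = np.diff(y)
    rows, resp = [], []
    for t in range(p, len(dy)):
        rows.append(np.r_[1.0, y[t], dy[t - p:t][::-1]])   # intercept, level y_t, lagged differences
        resp.append(dy[t])                                  # response is y_{t+1} - y_t
    X, z = np.array(rows), np.array(resp)
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    s2 = resid @ resid / (len(z) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])                     # refer to tabulated percentiles, not N(0, 1)

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=250))    # a random walk, so the unit-root null holds
print(unit_root_t_stat(y))
```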

3,231 citations


Journal Article•DOI•
TL;DR: For multinomial logistic regression models, this article proved existence theorems by considering the possible patterns of data points, which fall into three mutually exclusive and exhaustive categories: complete separation, quasicomplete separation and overlap.
Abstract: SUMMARY The problems of existence, uniqueness and location of maximum likelihood estimates in log linear models have received special attention in the literature (Haberman, 1974, Chapter 2; Wedderburn, 1976; Silvapulle, 1981). For multinomial logistic regression models, we prove existence theorems by considering the possible patterns of data points, which fall into three mutually exclusive and exhaustive categories: complete separation, quasicomplete separation and overlap. Our results suggest general rules for identifying infinite parameter estimates in log linear models for frequency tables.
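The separation categories are easy to picture with a toy example. In the sketch below (the six-point dataset and the plain Newton-Raphson loop are illustrative, not taken from the paper), the data are completely separated, so the slope estimate grows without bound and no finite maximum likelihood estimate exists.

```python
import numpy as np

# Completely separated data: y = 1 exactly when x > 0,
# so the logistic log-likelihood has no finite maximizer.
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])
X = np.column_stack([np.ones_like(x), x])

beta = np.zeros(2)
for it in range(25):                      # plain Newton-Raphson on the logistic likelihood
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    grad = X.T @ (y - p)
    hess = X.T @ (W[:, None] * X)
    beta += np.linalg.solve(hess, grad)
    if it % 5 == 0:
        print(it, beta)                   # the slope keeps growing: the MLE is infinite
```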

1,040 citations


Journal Article•DOI•
Adrian Bowman•
TL;DR: An alternative method of cross-validation, based on integrated squared error, recently also proposed by Rudemo (1982), is derived, and Hall (1983) has established the consistency and asymptotic optimality of the new method.
Abstract: Cross-validation with Kullback-Leibler loss function has been applied to the choice of a smoothing parameter in the kernel method of density estimation. A framework for this problem is constructed and used to derive an alternative method of cross-validation, based on integrated squared error, recently also proposed by Rudemo (1982). Hall (1983) has established the consistency and asymptotic optimality of the new method. For small and moderate sized samples, the performances of the two methods of cross-validation are compared on simulated data and specific examples.
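As a rough illustration of the integrated-squared-error form of cross-validation, the following numpy sketch evaluates the least-squares cross-validation score for a Gaussian kernel, i.e. the integral of the squared density estimate minus twice the average leave-one-out density at the data points; the sample and the bandwidth grid are illustrative.

```python
import numpy as np

def lscv_score(x, h):
    """Least-squares cross-validation score for a Gaussian kernel:
    integral of fhat^2 minus twice the average leave-one-out density."""
    n = len(x)
    d2 = (x[:, None] - x[None, :]) ** 2
    term1 = np.exp(-d2 / (4 * h * h)).sum() / (2 * n * n * h * np.sqrt(np.pi))
    off = np.exp(-d2 / (2 * h * h)).sum() - n        # drop the i == j terms (each equals 1)
    term2 = 2 * off / (n * (n - 1) * h * np.sqrt(2 * np.pi))
    return term1 - term2

rng = np.random.default_rng(0)
x = rng.normal(size=200)
hs = np.linspace(0.05, 1.0, 60)                      # illustrative bandwidth grid
h_cv = hs[np.argmin([lscv_score(x, h) for h in hs])]
print("cross-validated bandwidth:", h_cv)
```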

931 citations


Journal Article•DOI•
TL;DR: In this paper, the maximum likelihood method for fitting the linear model when residuals are correlated and when the covariance among the residuals is determined by a parametric model containing unknown parameters is described.
Abstract: We describe the maximum likelihood method for fitting the linear model when residuals are correlated and when the covariance among the residuals is determined by a parametric model containing unknown parameters. Observations are assumed to be Gaussian. We give conditions which ensure consistency and asymptotic normality of the estimators. Our main concern is with the analysis of spatial data and in this context we describe some simulation experiments to assess the small sample behaviour of estimators. We also discuss an application of the spectral approximation to the likelihood for processes on a lattice.
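A minimal sketch of such a likelihood, assuming an exponential spatial correlation model for the residuals (the covariance family, starting values and simulated data are illustrative, not the paper's): the regression coefficients are profiled out by generalized least squares and the covariance parameters are maximized numerically.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def neg_loglik(theta, y, X, coords):
    """Negative Gaussian log-likelihood (up to a constant) for y = X beta + e,
    with an assumed exponential spatial correlation: cov(e_i, e_j) = s2*exp(-d_ij/rho).
    beta is profiled out by generalized least squares."""
    s2, rho = np.exp(theta)                                # keep both parameters positive
    V = s2 * np.exp(-cdist(coords, coords) / rho) + 1e-8 * np.eye(len(y))  # tiny nugget for stability
    L = np.linalg.cholesky(V)
    Xs, ys = np.linalg.solve(L, X), np.linalg.solve(L, y)  # whiten with the Cholesky factor
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    r = ys - Xs @ beta
    return 0.5 * (r @ r) + np.log(np.diag(L)).sum()

# illustrative usage with simulated coordinates and response
rng = np.random.default_rng(1)
coords = rng.uniform(size=(50, 2))
X = np.column_stack([np.ones(50), coords[:, 0]])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=50)
fit = minimize(neg_loglik, x0=np.log([0.5, 0.2]), args=(y, X, coords))
print("estimated residual variance and range:", np.exp(fit.x))
```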

858 citations


Journal Article•DOI•
TL;DR: In this article, the asymptotic bias from omitting covariates is shown to be zero if the regression of the response variable on treatment and covariates is linear or exponential, and, in regular cases, this is a necessary condition for zero bias.
Abstract: SUMMARY Certain important nonlinear regression models lead to biased estimates of treatment effect, even in randomized experiments, if needed covariates are omitted. The asymptotic bias is determined both for estimates based on the method of moments and for maximum likelihood estimates. The asymptotic bias from omitting covariates is shown to be zero if the regression of the response variable on treatment and covariates is linear or exponential, and, in regular cases, this is a necessary condition for zero bias. Many commonly used models do have such exponential regressions; thus randomization ensures unbiased treatment estimates in a large number of important nonlinear models. For moderately censored exponential survival data, analysis with the exponential survival model yields less biased estimates of treatment effect than analysis with the proportional hazards model of Cox, if needed covariates are omitted. Simulations confirm that calculations of asymptotic bias are in excellent agreement with the bias observed in experiments of modest size.

609 citations


Journal Article•DOI•
TL;DR: In this paper, the authors show that several other distributions have equally simple properties, the main example being the inverse Gaussian distribution, which makes the population homogeneous with time, whereas for the gamma the relative heterogeneity is constant.
Abstract: Taking account of heterogeneity between the individuals in population based mortality studies is important. A systematic way of describing heterogeneity is by an unobserved quantity called frailty, entering the hazard multiplicatively. Until now most studies have used a gamma distributed frailty, which is mathematically convenient; for example, the distribution among survivors is also gamma. This paper shows that several other distributions have equally simple properties, the main example being the inverse Gaussian distribution. Consequences of the different distributions are examined; the inverse Gaussian makes the population homogeneous with time, whereas for the gamma the relative heterogeneity is constant.
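To make the multiplicative-frailty setup concrete: the population survival function is the Laplace transform of the frailty evaluated at the cumulative baseline hazard. The transforms below are the standard forms for a frailty Z with unit mean and variance θ, quoted from general frailty theory rather than reproduced from the paper.

```latex
% Survival in the heterogeneous population: S(t) = E[exp{-Z H(t)}] = L_Z{H(t)},
% with H the cumulative baseline hazard and L_Z the Laplace transform of the frailty Z.
\[
  S(t) = \mathcal{L}_Z\{H(t)\}, \qquad
  \mathcal{L}_Z(s) =
  \begin{cases}
    (1 + \theta s)^{-1/\theta}, & Z \sim \text{gamma (mean } 1,\ \text{variance } \theta),\\[4pt]
    \exp\!\left\{\tfrac{1}{\theta}\left(1 - \sqrt{1 + 2\theta s}\right)\right\}, & Z \sim \text{inverse Gaussian (mean } 1,\ \text{variance } \theta).
  \end{cases}
\]
```

Among survivors the frailty stays in the same family; for the gamma its squared coefficient of variation remains θ, whereas for the inverse Gaussian it decreases as H(t) grows, which is the homogenizing behaviour described above.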

436 citations


Journal Article•DOI•
Colin B. Begg, Robert Gray•
TL;DR: In this paper, the use of individualized logistic regression, in which a series of separate simple logistic regressions are performed as a replacement for polychotomous logistic regression, is studied.
Abstract: SUMMARY The use of individualized logistic regression, in which a series of separate simple logistic regression analyses are performed as a replacement for polychotomous logistic regression, is studied. The asymptotic relative efficiencies of the individual parameter estimates are observed to be generally high, as are the efficiencies of predicted probability estimates and, to a somewhat lesser extent, joint tests of parameters from different regressions.
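A sketch of the individualized strategy, assuming scikit-learn is available (the data, category labels and choice of reference category are illustrative): each non-reference outcome is contrasted with the reference category in its own binary fit, in place of a single polychotomous fit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 300
X = rng.normal(size=(n, 2))
eta = np.column_stack([np.zeros(n), 0.8 * X[:, 0], -0.6 * X[:, 1]])   # category 0 is the reference
prob = np.exp(eta) / np.exp(eta).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in prob])

# Individualized fits: one binary logistic regression per non-reference category,
# each using only that category and the reference category (large C ~= unpenalized ML).
coefs = {}
for k in (1, 2):
    keep = (y == 0) | (y == k)
    fit = LogisticRegression(C=1e6).fit(X[keep], (y[keep] == k).astype(int))
    coefs[k] = np.r_[fit.intercept_, fit.coef_.ravel()]
print(coefs)

# The single polychotomous (multinomial) fit these approximate; note that sklearn's
# multinomial parameterization is symmetric rather than baseline-category.
full = LogisticRegression(C=1e6).fit(X, y)
```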

359 citations


Journal Article•DOI•
TL;DR: In this paper, the authors consider binary regression models when some of the predictors are measured with error and show that if the measurement error is large, the usual estimate of the probability of the event in question can be substantially in error, especially for high risk groups.
Abstract: SUMMARY We consider binary regression models when some of the predictors are measured with error. For normal measurement errors, structural maximum likelihood estimates are considered. We show that if the measurement error is large, the usual estimate of the probability of the event in question can be substantially in error, especially for high risk groups. In the situation of large measurement error, we investigate a conditional maximum likelihood estimator and its properties.

215 citations


Journal Article•DOI•
TL;DR: In this paper, extra-binomial variation, or overdispersion, in correlated binary data is studied: it reflects the fact that the binary responses of cells from a given survivor tend to be more alike than are the responses of cells from distinct survivors having the same estimated radiation exposure level and other covariate values.
Abstract: Correlated binary observations arise in a variety of applications. For example, in animal studies the term 'litter effect' is used to describe the greater alikeness of responses within a litter as compared to that between litters, at a given set of experimental conditions. In a setting that motivated this work, cells from individual atomic bomb survivors were scored as to the presence or absence of chromosomal aberrations. The number of aberrant cells varied among subjects at a specified radiation exposure estimate, age at exposure, city of exposure and sex, in a manner that substantially exceeds that consistent with a binomial error structure (Otake & Prentice, 1984). This extra-binomial variation, or overdispersion, reflects the fact that the binary responses of cells from a given survivor tend to be more alike than are the responses of cells from distinct survivors having the same estimated radiation exposure level and other covariate values. Such overdispersion may result from individual differences in susceptibility to radiation damage, from omitted covariates, or, most plausibly in this setting, from substantial random errors in the estimated radiation exposure levels. Failure to acknowledge overdispersion may lead to serious underestimation of the standard errors associated with regression parameter estimates and to unduly precise inferences.
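A small simulation makes the point about standard errors. In the sketch below (the beta-binomial set-up and all numbers are illustrative), subject-level variation in the aberration probability inflates the variance of the overall proportion well beyond what the binomial formula allows, so the naive binomial standard error is far too small.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 200, 40                    # 200 subjects ("litters"), 40 cells scored per subject
p_i = rng.beta(2.0, 8.0, size=m)  # subject-specific aberration probabilities (mean 0.2)
counts = rng.binomial(n, p_i)     # aberrant cells per subject

phat = counts.sum() / (m * n)
se_binomial = np.sqrt(phat * (1 - phat) / (m * n))      # pretends all m*n cells are independent
se_empirical = counts.std(ddof=1) / (n * np.sqrt(m))    # treats subjects as the independent units
print(se_binomial, se_empirical)  # the naive binomial standard error is markedly smaller
```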

143 citations


Journal Article•DOI•
TL;DR: In this article, a new form of expected response function involving log contrasts of the proportions is introduced for experiments with mixtures, and the advantages and disadvantages of log contrast models are discussed and illustrated in applications.
Abstract: SUMMARY A new form of expected response function involving log contrasts of the proportions is introduced for experiments with mixtures. The advantages and disadvantages of log contrast models are discussed and illustrated in applications. In particular, the parameters of the model are related to certain useful hypotheses about mixtures and some relevant contrasts are identified.
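For orientation, one common log contrast construction (not necessarily the paper's exact parameterization) expresses each of the first q-1 mixture proportions as a log ratio against the last component; the sketch below builds such a design matrix for an illustrative three-component mixture.

```python
import numpy as np

def log_contrast_design(P):
    """Log-contrast regressors for mixture proportions P (rows sum to 1):
    log(x_i / x_q) for i = 1, ..., q-1, with the last component as divisor."""
    P = np.asarray(P, dtype=float)
    return np.log(P[:, :-1] / P[:, -1:])

# illustrative three-component mixture experiment
P = np.array([[0.2, 0.3, 0.5],
              [0.5, 0.2, 0.3],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
Z = log_contrast_design(P)
X = np.column_stack([np.ones(len(P)), Z])     # intercept plus q-1 log contrasts
print(X)
```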

143 citations


Journal Article•DOI•
TL;DR: In this paper, the estimation of parameters in hazard rate models with a changepoint was discussed and a consistent estimator of the change point was obtained by examining the properties of the density represented as a mixture.
Abstract: SUMMARY This paper discusses the estimation of parameters in hazard rate models with a changepoint. Due to the irregularity of the models, the classical maximum likelihood method and the method of moments cannot be used. A consistent estimator of the change-point is obtained by examining the properties of the density represented as a mixture. The performance of the estimator is checked via simulation.

Journal Article•DOI•
TL;DR: In this article, the role of the sample selection mechanism in a model-based approach to finite population inference is examined and conditions under which partially known designs can be ignored are established.
Abstract: SUMMARY The role of the sample selection mechanism in a model-based approach to finite population inference is examined. When the data analyst has only partial information on the sample design then a design which is ignorable when known fully may become informative. Conditions under which partially known designs can be ignored are established and examined for some standard designs. The results are illustrated by an example used by Scott (1977).

Journal Article•DOI•
TL;DR: In this paper, a new approach to factor analysis and related latent variable methods is proposed which is based on data reduction using the idea of Bayesian sufficiency, and considerations of symmetry, invariance and independence are used to determine an appropriate family of models.
Abstract: SUMMARY A new approach to factor analysis and related latent variable methods is proposed which is based on data reduction using the idea of Bayesian sufficiency. Considerations of symmetry, invariance and independence are used to determine an appropriate family of models. The results are expressed in terms of linear functions of the manifest variables after the manner of principal components analysis. The approach justifies some of the practices based on the normal theory factor model and lays a foundation for the treatment of nonnormal, including categorical, variables. 1. BACKGROUND Factor analysis is a widely used statistical technique but its theoretical foundations are somewhat obscure and subject to dispute. It is one of a family of multivariate methods which also includes latent structure and latent trait analysis. The common feature of the models underlying these methods is that the observed random variables are assumed to depend on latent, that is unobserved, random variables. There is sometimes debate about whether these latent variables are 'real' in any sense but they can be viewed simply as constructs designed to simplify and summarize the complex web of interrelated variables with which nature confronts us. By expressing these relationships in terms of a small number of latent variables, or factors, the models effect a reduction in dimensionality which aids comprehension. An early theoretical account of the subject is by Anderson & Rubin (1956) and more recent and comprehensive treatments are provided by Lawley & Maxwell (1971) and Harman (1968). The present paper is an attempt to provide a coherent framework within which existing methodology can be evaluated and a base from which new methods can be developed. Our approach is to start from a very general statement of the problem in terms of the distributions of the random variables involved and then to invoke ideas of symmetry, invariance and conditional independence to determine the class of models that it is reasonable to consider. This not only exhibits the unity of the various models already in existence but resolves many of the ambiguities and obscurities with which the subject has been bedevilled. An early approach on these lines is given by Anderson (1959) and was recognized by Birnbaum in his contribution to Lord & Novick (1968). The same line was followed by Bartholomew (1980). In spite of this the practical implications do not seem to have been made fully explicit or generally recognized.

Journal Article•DOI•
TL;DR: In this article, the effect of the use of a long autoregression, of order c log T when T is large, in the first stage of the process is investigated, in particular its effect on the speed of convergence of the estimates.
Abstract: SUMMARY The problem considered is that of estimating an autoregressive-moving average system, including estimating the degrees of the autoregressive and moving average lag operators. The basic method is that introduced by Hannan & Rissanen (1982). However, that method may sometimes overestimate the degrees and modifications are here introduced to correct this. The problem is itself due to the use of a long autoregression, of order c log T when T is large, in the first stage of the process. The effect of this is investigated and in particular its effect on the speed of convergence of the estimates.
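The two-stage idea referred to above can be sketched as follows (lag choices, the constant c and the simulated series are illustrative, and the paper's modifications for order selection are not reproduced): a long autoregression of order roughly c log T supplies estimated innovations, which then serve as regressors for the moving average part.

```python
import numpy as np

def hannan_rissanen(y, p, q, c=2.0):
    """Two-stage ARMA(p, q) estimation sketch: a long autoregression of order
    about c*log(T) supplies estimated innovations, which then serve as
    regressors for the moving average part."""
    T = len(y)
    h = max(p + q, int(c * np.log(T)))                 # order of the long autoregression

    def lagmat(z, k, start):
        return np.column_stack([z[start - j:len(z) - j] for j in range(1, k + 1)])

    # stage 1: long AR fit, residuals approximate the innovations
    Xh = lagmat(y, h, h)
    phi_long, *_ = np.linalg.lstsq(Xh, y[h:], rcond=None)
    eps = np.concatenate([np.zeros(h), y[h:] - Xh @ phi_long])

    # stage 2: regress y_t on p lags of y and q lags of the estimated innovations
    s = h + q
    Z = np.column_stack([lagmat(y, p, s), lagmat(eps, q, s)])
    theta, *_ = np.linalg.lstsq(Z, y[s:], rcond=None)
    return theta[:p], theta[p:]                        # AR and MA coefficient estimates

# illustrative check on a simulated ARMA(1, 1) series
rng = np.random.default_rng(4)
e = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.6 * y[t - 1] + e[t] + 0.3 * e[t - 1]
print(hannan_rissanen(y, 1, 1))
```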

Journal Article•DOI•
TL;DR: In this article, the authors examine the question: if interest lies in the qualitative effect on failure of various explanatory variables, how critical is the choice of model family in assessing the relative importance of the explanatory variables?
Abstract: The accelerated life and proportional hazards families of models are widely used in the analysis of survival data. We examine the question: if interest lies in the qualitative effect on failure of various explanatory variables, how critical is the choice of model family in assessing the relative importance of the explanatory variables? We consider small effects. When censoring is independent of the explanatory variables, to first order the regression parameters are proportional under the alternative models. This is not true when the censoring varies appreciably with the explanatory variables. Some higher-order theory is examined for the special case of the two-sample problem.

Journal Article•DOI•
TL;DR: In this paper, the mean squared error of least squares estimates of the regression parameters is evaluated when the number k of regression variables is selected from a given range 1 ≤ k ≤ K, and a procedure for choosing α within the FPE_α family is suggested.
Abstract: We first show a simplification and a refinement of the result obtained by Shibata (1976). It is an evaluation of the mean squared error of estimates of regression parameters when the number of regression variables is selected by a certain procedure. The evaluation yields a good approximation to the efficiency of a selection procedure when the number of regression variables is small. The approximate efficiency of FPE (Akaike, 1970), AIC (Akaike, 1973) or Cp (Mallows, 1973) turns out to be not satisfactorily high, although they are all asymptotically efficient in the sense of Shibata (1981). Based on such an approximation, a procedure for choosing α out of the family FPE_α (Bhansali & Downham, 1977) is suggested. A further generalization is also developed. Consider a regression model y = Xβ(k) + ε, where y' = (y_1, ..., y_n) is a vector of observations, X is an n × K design matrix and β(k)' = (β_1, ..., β_k, 0, ..., 0) is a vector of regression parameters. For simplicity we assume ε' = (ε_1, ..., ε_n) is a vector of independent normally distributed random variables with mean zero and variance σ². We call the above model 'model k'. In this paper we are concerned only with the problem of selecting the number k from a given range 1 ≤ k ≤ K. Under model k, we have a least squares estimate of the regression parameters, β̂(k)' = (β̂_1, ..., β̂_k), which is a solution of X(k)'X(k) β̂(k) = X(k)'y. Here X(k) is an n × k submatrix consisting of the first k column vectors of the design matrix X. Hereafter, β̂(k) is occasionally considered as the corresponding K-dimensional vector with the undefined entries set to zero. Suppose that observations y_1, ..., y_n come from a model k_0 (1 ≤ k_0 ≤ K) with regression parameter β such that β_{k_0} ≠ 0. Our main concern is the mean squared error.
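A sketch of this kind of selection rule, using Mallows's Cp over the nested models formed by the first k columns of X (the FPE_α family and the paper's efficiency calculations are not reproduced here); the simulated data are illustrative.

```python
import numpy as np

def select_k_by_cp(y, X):
    """Choose the number k of regression variables (the first k columns of X,
    nested as in the model above) by Mallows's Cp."""
    n, K = X.shape
    rss_full = np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    s2 = rss_full / (n - K)                      # error variance estimated from the largest model
    cp = []
    for k in range(1, K + 1):
        Xk = X[:, :k]
        rss = np.sum((y - Xk @ np.linalg.lstsq(Xk, y, rcond=None)[0]) ** 2)
        cp.append(rss / s2 - n + 2 * k)
    return int(np.argmin(cp)) + 1, cp

# illustrative usage: the true model uses the first three columns
rng = np.random.default_rng(6)
n, K, k0 = 100, 8, 3
X = rng.normal(size=(n, K))
y = X[:, :k0] @ np.array([2.0, -1.0, 1.5]) + rng.normal(size=n)
print(select_k_by_cp(y, X)[0])
```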

Journal Article•DOI•
TL;DR: In this article, upper and lower bounds for the strength of the effect of a dichotomous confounding factor on the exposure-disease relationship are developed for case-control studies.
Abstract: SUMMARY Upper and lower bounds for the strength of the effect of a dichotomous confounding factor on the exposure-disease relationship are developed for case-control studies. A problem in the design and analysis of a case-control study is the identification of confounding factors. Controversies frequently arise about whether an exposure-disease relationship is spurious, i.e. caused by extraneous or confounding factors that were not measured or controlled as part of the study. The effect of confounding factors has been discussed quantitatively by Bross (1966) and Schlesselman (1978). In this paper we develop upper and lower bounds for the stratum-specific odds ratios which are applicable for assessing the effect of a confounding factor. We restrict our attention to the simple case involving a dichotomous disease variable, denoted by D if present and D̄ if absent, a dichotomous exposure variable, denoted by E if present and Ē if absent, and a single dichotomous extraneous factor F with F1 and F0 denoting its two levels. 2. BOUNDS. We parameterize the joint distributions of E and F in D and D̄ as follows:

Journal Article•DOI•
TL;DR: In this paper, thirteen tests are compared in terms of their empirical significance levels and power in small samples; a modified likelihood ratio test is found to give the best overall performance, and a test due to van Montfort & Otten is also recommended.
Abstract: SUMMARY When a sequence of extreme values of a physical process has been observed it is often of interest to test whether the observations are distributed according to a type I Fisher-Tippett extreme-value distribution rather than one of types II or III. This is equivalent to testing whether the shape parameter is zero in the generalized extreme-value distribution. Thirteen tests of this hypothesis are here compared in terms of their empirical significance levels and power in small samples. A modified likelihood ratio test is found to give the best overall performance, and a test due to van Montfort & Otten is also recommended.
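For orientation, here is a plain likelihood ratio test of a zero shape parameter using scipy, fitting the Gumbel (type I) model against the full generalized extreme-value family; note that scipy's genextreme parameterizes the shape with the opposite sign, and the paper's modified likelihood ratio test and the van Montfort & Otten test are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = stats.gumbel_r.rvs(loc=10, scale=2, size=80, random_state=rng)  # illustrative annual maxima

# Null: Gumbel (shape = 0).  Alternative: full generalized extreme-value family.
loc0, scale0 = stats.gumbel_r.fit(x)
c1, loc1, scale1 = stats.genextreme.fit(x)        # scipy's c is minus the usual shape parameter
ll0 = stats.gumbel_r.logpdf(x, loc0, scale0).sum()
ll1 = stats.genextreme.logpdf(x, c1, loc1, scale1).sum()
lr = 2 * (ll1 - ll0)
print("LR statistic:", lr, "p-value:", stats.chi2.sf(lr, df=1))
```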

Journal Article•DOI•
TL;DR: In this paper, the maximum likelihood estimates of the parent-child interclass correlation and other parameters from familial data when the families have unequal numbers of offspring are given, along with two alternative sets of estimates that are easier to compute.
Abstract: SUMMARY The maximum likelihood estimates of the parent-child interclass correlation and other parameters from familial data when the families have unequal numbers of offspring are given. Also given are two alternative sets of estimates which are easier to compute than the maximum likelihood estimates. Using these estimates, a test for the interclass correlation to be zero requires only the ordinary Student's t-distribution. Similarly other tests use only known distributions. In the analysis of familial data the degree of mother-sib resemblance is measured by the interclass correlation, the correlation between two different types of family member. Several estimates of this correlation have been proposed in the literature; see Rosner, Donner & Hennekens (1977) for details and their Monte Carlo comparisons of mean squared errors. In particular, they gave maximum likelihood estimates when the sib sizes are equal. However, when the sib sizes are not equal, Rosner (1979) proposed an algorithm for finding the maximum likelihood estimates which involves iterative maximization of an implicit function of two parameters. This algorithm is difficult to implement and may not even converge for some sets of data. Recently Mak & Ng (1981) used the linear model approach of Kempthorne & Tandon (1953) to obtain the maximum likelihood estimates. This reduces the problem to iterative maximization of a function of one parameter. However, nothing is known about the convergence of the procedure. Another iterative method of finding the maximum likelihood estimates has been given by Smith (1980). In this paper, an alternative approach is given which requires solving only one equation. Also, two alternative sets of estimates are given. These estimates are easier to compute than the maximum likelihood estimates. Using these estimates, a test for the interclass correlation to be zero is given; it requires only the ordinary Student's t distribution. Similarly, other tests use only known results.

Journal Article•DOI•
TL;DR: In this article, the authors examine large-sample properties of cross-validation for estimating cell probabilities, starting from a completely general measure of loss, and derive necessary and sufficient conditions on the loss function for the resulting estimator to be consistent, or to minimize expected loss.
Abstract: SUMMARY We examine large-sample properties of cross-validation for estimating cell probabilities, starting from a completely general measure of loss. Necessary and sufficient conditions on the loss function are derived for the resulting estimator to be consistent, or to minimize expected loss. These results reveal that cross-validation is extremely sensitive to the shape of the loss function. Nevertheless, when the loss function is chosen correctly, cross-validation can be relied on to perform well for large samples. We provide a simple method of generating loss functions with optimal properties. Extension to the estimation of univariate probability density functions is discussed at a heuristic level.

Journal Article•DOI•
TL;DR: In this paper, a simple reparameterization is given that implicitly restricts the autoregressive-moving average parameters to the stationary and invertible region.
Abstract: SUMMARY A simple reparameterization is given that can implicitly restrict the autoregressive-moving average parameters to the stationary and invertible region.
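One standard reparameterization of this kind, not necessarily the one in the paper: map unconstrained real numbers to partial autocorrelations in (-1, 1) with tanh and convert them to autoregressive coefficients by a Levinson-type recursion, which lands in the stationary region by construction; the same map applied to the moving average side enforces invertibility.

```python
import numpy as np

def unconstrained_to_ar(u):
    """Map unconstrained parameters u to AR coefficients of a stationary model:
    tanh sends each u_i to a partial autocorrelation in (-1, 1), and the
    Levinson-type recursion converts partials to AR coefficients."""
    r = np.tanh(np.asarray(u, dtype=float))
    phi = np.zeros(0)
    for k, rk in enumerate(r, start=1):
        phi = np.r_[phi - rk * phi[::-1], rk] if k > 1 else np.array([rk])
    return phi

print(unconstrained_to_ar([0.3, -1.2, 2.0]))   # always inside the stationary region
```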

Journal Article•DOI•
TL;DR: In this paper, a modified and extended tensor notation is introduced that is sufficient to cover multivariate moments and cumulants as special cases, using this notation, two basic identities are given.
Abstract: A modified and extended tensor notation is introduced that is sufficient to cover multivariate moments and cumulants as special cases. Using this notation, two basic identities are given. The first of these expresses generalized cumulants in terms of ordinary cumulants. The second gives the joint cumulant generating function of any polynomial transformation in terms of the cumulants of the original variables. Three applications of the basic identities are given. The first application is concerned with sample cumulants or k-statistics, the second to Edgeworth series and the third to exponential family models.

Journal Article•DOI•
TL;DR: In this article, the authors determine the natural parameter space of the iterated FGM distribution, and thereby show that the maximum correlation is higher than what was previously known, and they also show that one single iteration can result in tripling the covariance for certain marginals and that there exist no marginals for which the single iteration will bring about higher negative correlation.
Abstract: In this note the authors determine the natural parameter space of the iterated FGM distribution, and thereby show that the maximum correlation is higher than was previously known. They also show that one single iteration can result in tripling the covariance for certain marginals and that there exist no marginals for which the single iteration will bring about higher negative correlation.

Journal Article•DOI•
TL;DR: In this article, Edgeworth expansion is applied to studentized parameter estimates when the standard error has been computed by a jackknife method and adjustments to the usual jackknife confidence limit formulae are obtained.
Abstract: SUMMARY Edgeworth expansion is applied to studentized parameter estimates when the standard error has been computed by a jackknife method. Adjustments to the usual jackknife confidence limit formulae are obtained. This approach is contrasted with a bootstrap approach in numerical illustrations for estimation of a ratio. Jackknife methods are nonparametric methods for estimating the bias and standard error of an estimate T. Approximate confidence limits for the estimand can be found by using a large-sample normal approximation for T. Somewhat curiously, little is known about possible improvements to the normal approximation in this context, although improvements based on Edgeworth expansions are familiar in other problems. In this paper we show that Edgeworth expansion methods can be applied in the jackknife context. Numerical results for a particular application raise the possibility that the resulting improvements of jackknife methods can be matched by a suitable use of bootstrap methods (Efron, 1982).

Journal Article•DOI•
TL;DR: In this article, the authors compared the multinomial model used for estimating the size of a closed population with the highly flexible Poisson models introduced by Cormack (1981), and discussed the substantial differences between the variances under the two models.
Abstract: SUMMARY The classical multinomial model used for estimating the size of a closed population is compared to the highly flexible Poisson models introduced by Cormack (1981). The multinomial model, and generalizations of it which allow for dependence between samples, may be obtained from that of Cormack by conditioning on the population size. The maximum likelihood estimators for N, the population size, and θ, the vector of parameters describing the capture process, are the same in both models. Completely general formulae for the asymptotic variances of the maximum likelihood estimates of N for both models are given. The substantial differences between the variances under the two models are discussed. Hypotheses concerning θ may be tested using the log likelihood ratio; the procedures which result from both models are asymptotically equivalent under the null hypothesis but differ in power under the alternative.

Journal Article•DOI•
Adelchi Azzalini•
TL;DR: In this paper, the analysis of a number of independent first-order autoregressive time series is considered in a normal theory context and a model is studied which allows for nonstationary and nonidentical distribution of the series caused by both fixed effect and random effect components.
Abstract: SUMMARY The analysis of a number of independent first-order autoregressive time series is considered in a normal theory context. A model is studied which allows for nonstationary and nonidentical distribution of the series caused by both fixed effect and random effect components.

Journal Article•DOI•
TL;DR: Utilizing the theory of estimating equations of Godambe and Thompson, this article develops the concepts of parameter defining function and effective parameter, and provides theory and techniques for choosing from a given set of robust parameters one that is most effective, or one that can be estimated most efficiently.
Abstract: SUMMARY Utilizing the theory of estimating equations (Godambe, 1960; Godambe & Thompson, 1978), this paper develops concepts of parameter defining function and effective parameter. The paper provides theory and techniques for choosing from a given set of robust parameters one that is most effective, or one that can most efficiently be estimated. The word parameter is used in statistics in two rather distinct contexts. In the data-analytic context a parameter characterizes some 'interesting' aspect of the distribution such as its mean, its median or some percentile. In the context of modern statistical theory, founded by K. Pearson, R. A. Fisher and others, the parameter is related to and derives its meaning from a specific probabilistic model. In this second context the model is of central importance, and one seeks to complete it or perfect it by estimating its unknown parameter. In the first context there is considerable flexibility and a parameter can typically be defined in many ways. The choice of a parameter definition involves the following considerations. (a) How much does it tell about the 'interesting' or 'practically important' part of the distribution? (b) How 'insensitive' or 'robust' is it with respect to the 'uninteresting' part of the distribution? (c) How best can it be estimated? The two contexts described above may of course overlap. In the present paper we develop a theory which emphasizes the data-analytic meaning of the parameter. The approach, on the other hand, is more in line with the modern statistical theory mentioned above than that of most other works on robust estimation. In particular, 'asymptotic minimaxity' plays no role. The theory is designed to incorporate questions (a), (b) and (c) raised above. It is based on the theory of estimating equations (Godambe, 1960; Godambe & Thompson, 1978). The new concepts employed are those of 'parameter defining function' and 'effectiveness of a parameter'. These are defined in § 2.1.

Journal Article•DOI•
TL;DR: In this paper, a selection criterion based on a Wald statistic, motivated by an argument similar to cross-validation in which the status of one observation is changed from uncensored to censored, is shown to be formally equivalent to Mallows's Cp, so that all subsets regression within the proportional hazards model reduces to a problem readily handled by standard statistical packages.
Abstract: SUMMARY This paper shows that within the framework of the proportional hazards model all subsets regression can be performed with very little computational effort. A selection criterion based on a Wald statistic is motivated by an argument similar to cross-validation in which the status of one observation is changed from uncensored to censored. This criterion is formally equivalent to Mallows's Cp and thus the problem is reduced to one readily handled by standard statistical packages. The procedure is applied to some multiple myeloma data to give results remarkably different from those obtained by previous workers using stepwise procedures. New insights are gained and the superiority of all subsets regression over stepwise regression is clearly demonstrated.

Journal Article•DOI•
TL;DR: In this paper, the exact distribution of the likelihood ratio statistic for testing equality of covariance matrices of multivariate Gaussian models has been derived and the percentage points of the test statistic have also been tabulated.
Abstract: SUMMARY In this paper the exact distribution of the likelihood ratio statistic for testing equality of covariance matrices of multivariate Gaussian models has been derived. Percentage points of the test statistic have also been tabulated.

Journal Article•DOI•
T. J. DiCiccio•
TL;DR: In this paper, transformation formulae for parameterizations that reduce the asymptotic bias and skewness of large-sample pivotal quantities are derived, extending those given by Hougaard (1982) for one-dimensional curved exponential families, and the second-order properties of the resulting pivots are compared with the signed square root of the likelihood-ratio statistic.
Abstract: SUMMARY Parameterizations which reduce the asymptotic bias and skewness of various pivotal quantities arising in large-sample theory are discussed for models depending on an unknown scalar parameter. Transformation formulae by which such parameterizations can be obtained are derived, and these formulae extend those for one-dimensional curved exponential families given by Hougaard (1982). To assess the accuracy of normal approximations to the distributions of the pivots, their second-order properties are considered and comparisons with the signed square root of the likelihood-ratio statistic are drawn.