
Showing papers in "Biometrika in 1968"


Journal ArticleDOI
M. B. Wilk1, R. Gnanadesikan1
TL;DR: This paper describes and discusses graphical techniques, based on the primitive empirical cumulative distribution function and on quantile (Q-Q) plots, percent (P-P) plots and hybrids of these, which are useful in assessing a one-dimensional sample, either from original data or resulting from analysis.
Abstract: SUMMARY This paper describes and discusses graphical techniques, based on the primitive empirical cumulative distribution function and on quantile (Q-Q) plots, percent (P-P) plots and hybrids of these, which are useful in assessing a one-dimensional sample, either from original data or resulting from analysis. Areas of application include: the comparison of samples; the comparison of distributions; the presentation of results on sensitivities of statistical methods; the analysis of collections of contrasts and of collections of sample variances; the assessment of multivariate contrasts; and the structuring of analysis of variance mean squares. Many of the objectives and techniques are illustrated by examples. This paper reviews a variety of old and new statistical techniques based on the cumulative distribution function and its ramifications. Included in the coverage are applications, for various situations and purposes, of quantile probability plots (Q-Q plots), percentage probability plots (P-P plots) and extensions and hybrids of these. The general viewpoint is that of analysis of data by statistical methods that are suggestive and constructive rather than formal procedures to be applied in the light of a tightly specified mathematical model. The technological background is taken to be current capacities in data collection and high-speed computing systems, including graphical display facilities. It is very often useful in statistical data analysis to examine and to present a body of data as though it may have originated as a one-dimensional sample, i.e. data which one wishes to treat, for purposes of analysis, as an unstructured array. Sometimes this is applicable to 'original' data; even more often such a viewpoint is useful with 'derived' data, e.g. residuals from a model fitted to the data.
The empirical cumulative distribution function and probability plotting methods have a key role in the statistical treatment of one-dimensional samples, being of relevance for summarization and palatable description as well as for exposure and inference.
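The Q-Q construction described above can be sketched in a few lines. The plotting positions (i - 0.5)/n and the standard normal reference distribution below are conventional illustrative choices, not taken from the paper.

```python
import statistics

def qq_points(sample, dist=statistics.NormalDist()):
    """Pair each order statistic with a matching theoretical quantile.

    Plotting positions p_i = (i - 0.5)/n are one common convention; the
    standard normal reference distribution is likewise an assumption."""
    xs = sorted(sample)
    n = len(xs)
    return [(dist.inv_cdf((i + 0.5) / n), x) for i, x in enumerate(xs)]

# For roughly normal data the points fall near a straight line whose
# intercept and slope estimate the location and scale.
pairs = qq_points([2.1, 1.9, 2.4, 1.6, 2.0, 2.3, 1.8, 2.2])
```

Plotting the pairs (theoretical quantile, sample quantile) gives the Q-Q plot; a P-P plot would instead pair the empirical and theoretical cumulative probabilities.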

1,301 citations


Journal ArticleDOI
TL;DR: An iterative procedure is given for estimating the cell probabilities of an r x c contingency table whose marginal probabilities are known and fixed; the estimates are shown to be BAN and the procedure convergent, and results are summarized for a four-way table.
Abstract: SUMMARY In its simplest formulation the problem considered is to estimate the cell probabilities p_ij of an r x c contingency table for which the marginal probabilities p_i. and p_.j are known and fixed, so as to minimize Σ_ij p_ij ln(p_ij/π_ij), where π_ij are the corresponding entries in a given contingency table. An iterative procedure is given for determining the estimates and it is shown that the estimates are BAN, and that the iterative procedure is convergent. A summary of results for a four-way contingency table is given. An illustrative example is given.
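The iterative procedure analysed here is of the proportional-fitting type; a minimal sketch of that idea, assuming strictly positive entries and compatible margins (an illustration of the general technique, not the paper's exact algorithm):

```python
def ipf(table, row_margins, col_margins, iters=100):
    """Iterative proportional fitting: alternately rescale rows and columns
    of a starting table so it matches fixed marginal probabilities."""
    p = [row[:] for row in table]
    for _ in range(iters):
        for i, r in enumerate(row_margins):      # match row sums
            s = sum(p[i])
            p[i] = [x * r / s for x in p[i]]
        for j, c in enumerate(col_margins):      # match column sums
            s = sum(p[i][j] for i in range(len(p)))
            for i in range(len(p)):
                p[i][j] *= c / s
    return p

# Fit a 2 x 2 starting table to margins (0.6, 0.4) and (0.5, 0.5).
fitted = ipf([[0.3, 0.2], [0.1, 0.4]], [0.6, 0.4], [0.5, 0.5])
```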

399 citations


Journal ArticleDOI
TL;DR: The problem of outlying observations is considered from a Bayesian viewpoint for the linear model, assuming that a good observation is normally distributed about its mean with variance σ², and a bad one is normal with the same mean but a larger variance.
Abstract: The problem of outlying observations is considered from a Bayesian viewpoint. We suppose that each of the observations in an experiment may come from either a 'good' run or a 'bad' run. By specifying the models corresponding to good and bad runs and the prior probability that any given run is bad, we then employ standard Bayesian inference procedures to derive the appropriate analysis. In particular, we consider the linear model and assume that a good observation is normally distributed about its mean with variance σ², and a bad one is normal with the same mean but a larger variance k²σ². An example is given.
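The good/bad mixture model lends itself to a simple posterior calculation for a single residual; the values of sigma, k and the prior probability below are illustrative assumptions, not taken from the paper.

```python
import math

def posterior_bad(residual, sigma=1.0, k=3.0, prior_bad=0.05):
    """Posterior probability that one observation came from the 'bad'
    (inflated-variance) normal component, via Bayes's rule on the
    two-component mixture. sigma, k, prior_bad are illustrative values."""
    def normal_pdf(x, s):
        return math.exp(-0.5 * (x / s) ** 2) / (s * math.sqrt(2 * math.pi))
    num = prior_bad * normal_pdf(residual, k * sigma)
    den = num + (1 - prior_bad) * normal_pdf(residual, sigma)
    return num / den
```

Small residuals leave the posterior probability near (or below) the prior, while residuals several standard deviations out push it toward one.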

250 citations


Journal ArticleDOI
TL;DR: In this article, a set of n base points with known co-ordinates relative to orthogonal axes, and a further point Pn+1, with known distance from each of the base set, are given.
Abstract: A set of n base points P_i (i = 1, 2, …, n), with known co-ordinates relative to orthogonal axes, and a further point P_{n+1}, with known distance from each of the base set, are given. The co-ordinates of P_{n+1} relative to the axes of the base set are found. The formula is particularly simple when the base set is referred to its principal axes, when the co-ordinates of P_{n+1} for a subset of all the axes can be calculated from the co-ordinates of the P_i in this subset only. The classical results for adding a point to a principal components or canonical variates analysis are obtained when the base set is derived using the appropriate distance functions. An example is given.

184 citations


Journal ArticleDOI
TL;DR: In this article, a new estimation theory for sample surveys is proposed, which is based on the assumption that a character attached to the units is measured on a known scale with a finite set of scale points.
Abstract: A new estimation theory for sample surveys is proposed. The basic feature of the theory is a special parametrization of finite populations based on the assumption that a character attached to the units is measured on a known scale with a finite set of scale points. In the class of estimators which do not functionally depend on the 'identification labels' preattached to the units, the following results are proved: (1) For simple or stratified simple random sampling without replacement, the customary estimators are unbiased minimum variance. (2) For simple random sampling with replacement, the sample mean based only on the distinct units in the sample is the maximum likelihood estimator of the population mean. (3) If a concomitant variable with known population mean is also observed, an approximation to the maximum likelihood estimator of the population mean is closely related to the customary regression estimator. (4) If prior information in the form of a prior distribution is available, 'Bayes estimators' can be derived using the complete likelihood.

156 citations


Journal ArticleDOI
B. Ajne1
TL;DR: In this article, the author studies a test statistic, for points on the circumference of a circle, defined as the maximal number of points in the sample that can be covered by a single semicircle.
Abstract: SUMMARY Consider a finite set of points, located on the circumference of a circle. Several tests have been proposed of the hypothesis that the points constitute a random sample from a uniform distribution. In this paper we study a test statistic defined as the maximal number of points that can be covered by some semicircle. Exact and asymptotic distributions under the null hypothesis, and under a certain alternative hypothesis, are given together with some tables. A related test statistic is studied briefly. An expression is obtained concerning most powerful invariant tests of the hypothesis of a uniform circular distribution. In 1965, Dr G. Borenius described an unpublished experiment with a bubble chamber, where points representing events were observed through a circular window. A natural hypothesis was that the events occurred at random with a constant probability density within the circle. In one case it was observed that 67 out of 100 events fell within a suitably chosen semicircle. The question then arose whether this asymmetry should be judged inconsistent with the hypothesis of a uniform distribution. More generally, suppose that n points are observed and that each point is moved radially to the circumference of the circle. We then have a sample of n points on the circumference and want to test the hypothesis that the underlying probability distribution is uniform over the circumference. The test statistic suggested by the foregoing paragraph is the maximal number of points in the sample that can be covered by a suitably chosen semicircle. In the following this test statistic will be denoted by N. We reject the hypothesis if N is too large. Many other tests of the same hypothesis have been proposed. For example, the classical Kolmogorov-Smirnov test has been adapted to circular distributions by Kuiper; see Kuiper (1960) and Stephens (1965). Watson (1961) did the same thing for the Cramér-von Mises test.
A detailed study of the null distribution of Watson's test statistic has been made by Stephens (1963, 1964). For a general review of statistical methods in connexion with circular distributions, see Batschelet (1965). The problem of determining the distribution of N under the hypothesis is purely combinatorial. It was solved by Borenius for sample sizes n up to n = 15 and the general solution was inductively conjectured by him. He thus found that the above-mentioned observation, N = 67 for a sample of size n = 100, corresponds to a level of significance P = 1·6 %. Dr S. Johansen, University of Copenhagen, has told the author that he, too, has found the null distribution of N. This was done in connexion with an application to the study of
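The statistic N is straightforward to compute directly, since an optimal semicircle can always be rotated until its leading edge sits at one of the sample points; a sketch:

```python
import math

def max_semicircle_count(angles):
    """Ajne's N: the largest number of sample points covered by a single
    semicircle. An optimal semicircle can start at a data point, so it is
    enough to try each point as the leading edge of the arc."""
    best = 0
    for start in angles:
        covered = sum(1 for a in angles
                      if (a - start) % (2 * math.pi) < math.pi)
        best = max(best, covered)
    return best
```

The double loop is O(n²); sorting the angles first would allow an O(n log n) sliding-window version.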

128 citations


Journal ArticleDOI

105 citations


Journal ArticleDOI
Colin L. Mallows1

87 citations


Journal ArticleDOI
TL;DR: In this article, the authors present procedures for multivariate analysis of variance, which simultaneously test for significance all variables together as well as all subsets of variables, or the differences between all samples and between the samples of any subgroup.
Abstract: This paper presents procedures for multivariate analysis of variance which simultaneously test for significance all variables together as well as all subsets of variables, or the differences between all samples as well as between the samples of any subgroup. These procedures ensure a predetermined probability of making no type I error on any of the tests and are coherent in preserving the implication relations which exist between hypotheses on different groups of samples. The procedure using Roy's maximum characteristic root statistics is found to be preferable to all other procedures of this kind.

87 citations



Journal ArticleDOI
TL;DR: Stochastic models are investigated for collections of populations which are distributed spatially, either at discrete points on a line or at the nodes of a square or cubic lattice, and which are subject to birth, death, and migration to or from neighbouring populations.
Abstract: Stochastic models are investigated for collections of populations which are distributed spatially, either at discrete points on a line or at the nodes of a square or cubic lattice, and which are subject to birth, death, and migration to or from neighbouring populations. Explicit expressions are given for the stochastic means. A general indication is provided for the calculation of variances and covariances, with explicit values for some special cases.

Journal ArticleDOI
TL;DR: Stochastic models based upon the assumption that the periods spent in each stage of development, excluding the possibility of death, are independent random variables with a characteristic form of probability distribution are proposed for the development of a biological organism through recognizable distinct stages.
Abstract: SUMMARY This paper is concerned with stochastic models for the representation of the development of a biological organism through recognizable distinct stages. The models are based upon the assumption that the periods spent in each stage of development, excluding the possibility of death, are independent random variables with a characteristic form of probability distribution. Within each stage the organism is liable to be taken by predators or to die from other causes, and incidents of this type are assumed to occur as events in a Poisson process. The development of the theory for various forms of distribution of the period spent in a given stage is considered, including the negative exponential and second and third order special Erlangian distributions. An example is given of the application of the proposed models to the analysis of sampling data from a study of the life cycle of the grasshopper, Chorthippus parallelus. The main features of the life cycle of a biological organism exhibit a similar pattern over a wide variety of different types and species. The birth of the organism occurs at a clearly defined point in time and the organism then passes through a period of growth and development until it reaches maturity. In certain types of organism this process is characterized by transition through a number of distinct and easily recognizable states in turn. An insect, for example, passes through a succession of larval instars. In other types of organism the process of development is less well defined, although it is usually possible to divide the life cycle into discrete states by reference to the presence or absence or to the size of characteristic features of the organism. At every moment of its life the organism is liable to suffer death, either as a result of the action of predators, of accidents or for other reasons. If the organism does reach maturity, its life will eventually be terminated, either by natural or other causes.
In order to carry out quantitative studies of a population of a given type of organism it is often helpful to set up a mathematical model to represent the process of birth, development and death. Since there will generally be variations from one organism to another within the same population, such a model must preferably embody a stochastic or random element, as the assessment of individual variability will form an essential part of the description of the life cycle. The main features which must be taken into account are the distributions of the times of birth, of the periods spent in each stage of development and of mortality in the various stages. The process of growth, as revealed by the size of the organism at any given stage in its development, may also be of interest. This paper is concerned with a model which was originally developed to describe the life cycle of the grasshopper, Chorthippus parallelus. The model is, however, of more general application, not only to other biological organisms, but also to studies of the structure of human populations. For example, the 'population' may correspond to a large organization, 'birth' may correspond to the recruit
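For the simplest version of such a model, with exponential stage durations and Poisson mortality within each stage, the probability of completing development has a closed form; the sketch below uses illustrative rates, not data from the paper.

```python
def survival_probability(stage_rates, death_rates):
    """P(complete all stages alive) when the time in stage i is exponential
    with rate a_i and deaths in stage i form a Poisson process with rate b_i:
    the stage is left alive with probability a_i / (a_i + b_i), and stages
    are independent. For an order-r Erlangian stage duration (as in the
    paper's extensions) the factor becomes (a / (a + b)) ** r, with a the
    phase rate. All rates here are illustrative."""
    p = 1.0
    for a, b in zip(stage_rates, death_rates):
        p *= a / (a + b)   # race between stage completion and death
    return p
```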


Journal ArticleDOI
TL;DR: A statistic is considered which has been proposed for testing whether a multivariate normal population covariance matrix is equal to a given matrix, and comparisons are made by using a series expansion of the distribution function.
Abstract: SUMMARY A statistic is considered which has been proposed for testing whether a multivariate normal population covariance matrix is equal to a given matrix. Approximations to its distribution are derived, and comparisons are made by using a series expansion of the distribution function. A table of significance points obtained from the series is given for certain values of the parameters.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the adequacy of the asymptotic results and gave power functions of the tests and compared the performance of the separate family tests of the exponential versus the log normal distribution with that of other tests for departure from the exponential distribution.
Abstract: SUMMARY Tests of the log normal distribution versus the exponential distribution were proposed by Cox (1961, 1962), who gave their large-sample distributions. We investigate the adequacy of the asymptotic results and give power functions of the tests. We then use Cox's general results to derive tests for the log normal distribution versus the gamma distribution. Finally, we compare the performance of the separate family tests of the exponential versus the log normal distribution with that of other tests for departure from the exponential distribution.

Journal ArticleDOI
TL;DR: In this paper, the authors considered an epidemic model in which there is an initial introduction of a carrier or carriers into a population and further carriers may be created from the susceptibles.
Abstract: SUMMARY An epidemic model is discussed in which there is an initial introduction of a carrier or carriers into a population and further carriers may be created from the susceptibles. The probability distribution for the number of survivors is derived and approximations and numerical illustrations are given. In a recent paper Weiss (1965) has considered an epidemic in a closed population which is spread not by the infected individuals but by carriers. He supposed that one or more carriers were introduced into the population and that these carriers infected susceptible individuals until detected, the time before detection having an exponential distribution. At no time during the epidemic were any additional carriers introduced or created and the epidemic terminated as soon as all the carriers were detected or all susceptibles had been infected. Weiss's model has the mathematical advantage that it is completely soluble in terms of elementary functions; see Dietz (1966) and Downton (1967b). It is, however, unrealistic in its assumption that no new carriers can be created. The present paper discusses a model in which it is supposed that after the initial introduction of a carrier or carriers, no new carriers are introduced from outside the population, but that new carriers may be created from the susceptibles in that population. The model is appropriate for the situation where a proportion π of those infected contract the disease in such a mild form that their symptoms are unnoticeable even though they are capable of passing on the disease. Such subclinically infected persons would then act as carriers until detected. For such a case we would expect π to be small; π = 0 corresponds to the Weiss model. The main aim of this paper is to obtain approximate expressions for the probability distribution and moments of the number of susceptibles surviving the epidemic in that case.
The ultimate behaviour of the epidemic does not, however, depend upon π in a simple way, so that while valid approximations have been obtained for small π, even if this results in a large epidemic, it turns out that these approximations may be used in certain cases even when π is not small. In particular, they may be employed even if π = 1, when the mathematics becomes identical with that of the Kermack-McKendrick (1927) model for the general stochastic epidemic provided attention is confined to small subcritical epidemics. The stochastic model describing the process is defined in § 2, where an explicit expression for the probability distribution of the number of survivors is given, together with an indication of the ways in which it may be obtained. In § 3 the deterministic analogue of the stochastic model is discussed, providing the deterministic approximation to the mean number of survivors. This approximation is valid for small epidemics in large populations. Returning to the stochastic model, § 4 shows that the survivor probabilities may be expressed in terms of

Journal ArticleDOI
TL;DR: In this article, the authors present stochastic models to describe the changes that take place with time in the numbers of females and males in the various age-intervals in the population.
Abstract: SUMMARY In this paper we present stochastic models to describe the changes that take place with time in the numbers of females and males in the various age-intervals in the population. We first consider the case where females, or males, are marriage dominant, i.e. where the joint distribution of the numbers of boys and girls born at time t depends upon the numbers of females, or males, in the various age-intervals in the immediately preceding time-period. We then consider the case where neither females nor males are marriage-dominant, i.e. where the joint distribution of the numbers of boys and girls born at time t depends upon both the numbers of females and males in the various age-intervals in the immediately preceding time-period. Methods are given for calculating the expected values, variances, and covariances of the number of females and males in each age-interval at time t, and asymptotic results are presented for the case where t → ∞. The application of these results is illustrated with data on the fertility and mortality conditions in the U.S. in 1965. The literature on two-sex stochastic models of population growth starts with articles by Kendall (1949) and Goodman (1953). Kendall outlined some of the analytic difficulties of a study of the stochastic aspects of two-sex population growth, and for simplicity limited his discussion of two-sex stochastic models to the special case where (a) each birth is equally likely to add a new female or a new male to the population, (b) the death-rate for females is equal to that for males, and (c) the birth- and death-rates per person are constants that are independent of the age of the person, and also independent of other relevant variables. Although Goodman's analysis was not limited by (a) and (b), it was limited by (c), as was all subsequent analytic work by other authors. One of the models of Goodman (1953) was a stochastic model where one sex is marriage-dominant, i.e.
where the joint distribution of the numbers of girls and boys born at time t depends only upon the number of the dominant sex in the population in the immediately preceding time-period; he calculated the means, variances, and covariances of the numbers of females and males in the population at time t. A similar analysis of this model was made by Joshi (1954); see also Bharucha-Reid (1960, pp. 175-9), Bailey (1964, pp. 119-20), and Keyfitz (1968). Lamens (1957) supposed that the birth- and death-rates may be functions of time, but he did not allow for possible dependence of these rates on the age-composition of the population, nor for the possibility that neither females nor males are marriage-dominant. Here we shall deal with stochastic models where the birth- and death-rates for females and for males may depend upon the age-composition of the population. First, say, the females will be assumed marriage-dominant, and then the more general situation will be considered. The above articles dealt mainly with some of the stochastic aspects of two-sex population growth. Some of the deterministic aspects have been discussed in the demographic and bio

Journal ArticleDOI
TL;DR: In this article, the distribution of the largest and smallest characteristic roots of a Wishart matrix is computed based on an approximation derived from work of Pillai, and simultaneous confidence intervals for the variance components of the two-way layout with unequal variances are presented.
Abstract: SUMMARY This paper tabulates the distribution of the largest and smallest characteristic roots of a Wishart matrix; the computation is based on an approximation derived from work of Pillai. As an application, we present simultaneous confidence intervals for the variance components of the two-way layout with unequal variances.

Journal ArticleDOI
TL;DR: In this article, exact and approximate likelihood ratio tests are derived for certain structures in multivariate normal correlation matrices, and reasonable asymptotic distributions for the approximate tests are proposed and examined by empirical sampling.
Abstract: SUMMARY Exact and approximate likelihood ratio tests are derived for certain structures in multivariate normal correlation matrices. Reasonable asymptotic distributions for the approximate tests are proposed and examined by empirical sampling. The powers of these tests are compared with those of previously proposed tests. Tests for certain structures in correlation matrices have been proposed by several authors. Hotelling (1940) proposed a conditional t test for the equality of two correlations in a trivariate normal distribution; Bartlett (1950, 1951), Anderson (1963) and Lawley (1963) considered tests for equality of all correlations in a multivariate normal distribution; and Bartlett & Rajalakshman (1953) and Kullback (1959) proposed a test for a completely specified correlation matrix. The object of the present investigation is to examine likelihood ratio tests, or simple approximations to them, for the above hypotheses. Asymptotic distributions are not available for the approximate tests, but reasonable approximations are proposed and examined by empirical sampling. The powers of these tests are compared with those of previously proposed tests.

Journal ArticleDOI
TL;DR: In this article, exact non-central c.d.f.'s of four criteria in the two-roots case are derived for tests of the hypothesis Σ1 = Σ2 against one-sided alternatives, where Σ1 and Σ2 are covariance matrices of two normal populations.
Abstract: SUMMARY Exact non-central c.d.f.'s of four criteria in the two-roots case are derived for tests of the hypothesis Σ1 = Σ2 against one-sided alternatives, where Σ1 and Σ2 are covariance matrices of two normal populations. The tests are based on Roy's largest root, and on Lawley-Hotelling's, Pillai's and Wilks's criteria. Powers of the last three criteria have been tabulated extensively, and power comparisons have been made between the three, and also with the largest root, whose powers have been tabulated elsewhere. In addition, powers of the largest root for large deviations are also given for the canonical correlation and multivariate analysis of variance cases, providing power comparisons with the other three criteria.

Journal ArticleDOI
TL;DR: Letters between W. S. Gosset, R. A. Fisher and Karl Pearson from 1912-20 are reproduced, with notes on the problems in estimation and significance testing which were brought to the front at that date by the unusual character of the sampling distribution of the correlation coefficient.
Abstract: Letters or extracts from letters which passed between W. S. Gosset, R. A. Fisher and Karl Pearson during the years 1912-20 are reproduced. They throw light on the start of Fisher's statistical career. In the notes accompanying the correspondence, attention is drawn to the problems in estimation and significance testing which were brought to the front at that date by the unusual character of the sampling distribution of the correlation coefficient. An early disagreement between Pearson and Fisher on estimation through minimizing χ² and maximizing likelihood is covered.

Journal ArticleDOI
TL;DR: In this paper, a slight modification of the usual systematic sampling procedure is shown to reduce the error variance of the estimator under a linear trend; its efficiency under 'random', 'quadratic' and 'periodic' trends is also evaluated, and the method is illustrated on a survey for estimating milk yield.
Abstract: SUMMARY In the presence of a linear trend in the population, a slight modification of the usual systematic sampling procedure is found to be highly effective in reducing the error variance of the estimator for the population mean. The efficiency of this modified design in the presence of certain trends, other than linear, such as 'random', 'quadratic' and 'periodic' trends, has been evaluated in comparison with usual systematic sampling methods. The adaptability and suitability of the modified systematic sampling procedure has been illustrated by its application to a survey for estimating the milk yield.
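One common modification of this kind selects labels in pairs equidistant from the two ends of the population list, which makes the sample mean exact under a linear trend. The sketch below illustrates that general idea under the assumptions noted in the comments; it is not necessarily the paper's exact design.

```python
def modified_systematic_sample(N, n, start):
    """Draw labels in pairs equidistant from the two ends of the list, so
    every pair of labels sums to N + 1. Assumes n is even, N = n * k, and
    1 <= start <= k. A sketch of the general idea only."""
    k = N // n
    labels = []
    for j in range(n // 2):
        labels.append(start + j * k)          # from the front
        labels.append(N + 1 - start - j * k)  # mirror image from the back
    return sorted(labels)

# Under an exact linear trend y_i = a + b*i the sample mean reproduces the
# population mean for every random start, because each pair sums to N + 1.
N, n, a, b = 20, 4, 5.0, 0.7
y = {i: a + b * i for i in range(1, N + 1)}
pop_mean = sum(y.values()) / N
sample = modified_systematic_sample(N, n, start=2)
sample_mean = sum(y[i] for i in sample) / n
```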

Journal ArticleDOI
TL;DR: In this article, the problem of selecting a subset of the smallest possible fixed size s that will contain the t best of k populations (t < s < k), based on any given common sample size from each of the k populations, is considered.
Abstract: SUMMARY The problem considered is that of selecting a subset of the smallest possible fixed size s that will contain the t best of k populations (t < s < k), based on any given common sample size from each of the k populations. Special emphasis, with a table included for finding s, is given to the case of normal distributions with larger means being better and with a common known variance. A criterion for efficiency, comparisons with other procedures, and a dual problem are also discussed.

Journal ArticleDOI
TL;DR: In this paper, four test procedures, namely the asymptotic test, the modified Bartlett test, the multiple correlation test and the F_max test, are proposed for testing the equality of a set of p variances when the p variates are correlated with a common correlation coefficient.
Abstract: SUMMARY To test the equality of a set of p variances when the p variates are correlated with a common correlation coefficient, four test procedures are proposed in this paper, namely, the asymptotic test, the modified Bartlett test, the multiple correlation test and the F_max test. Comparisons among these four tests were made. A Monte Carlo study showed that, when p = 3, the sample size needed is about 40 in order that the asymptotic theory holds, while for p = 4 it needs about 50. Section 2 gives the asymptotic test criterion, which can be shown to be asymptotically equivalent to the likelihood ratio test. A modification of the Bartlett test in large samples is given in § 3. Section 4 deals with the multiple correlation test. Hartley (1950) considered the maximum F-ratio as a short-cut test for heterogeneity of variance in the independent case. A similar approach in the dependent case is given in § 5. When p = 2, the four tests are equivalent. When p > 2, the asymptotic test is generally better than the others for large samples. The relative efficiency of the modified Bartlett test to the asymptotic test is between 0.88 and 1 when ρ > 0, but is poor when ρ < 0. The multiple correlation test is an exact test for any size of sample. Unfortunately its relative efficiency to the asymptotic test is generally low. The F_max test is worth considering for particular patterns of the variances, e.g. one very large, one very small and the rest in the middle. The asymptotic test, the modified Bartlett test and the F_max test were derived for large samples. A Monte Carlo study is given in § 6 which showed that when p = 3, the sample size needed is about 40 in order that the asymptotic theory holds, while for p = 4 it needs about 50.

Journal ArticleDOI
TL;DR: Some results are obtained concerning the optimum allocation of sampling effort among k strata at the second phase of a two phase sampling procedure, using information obtained from the first phase.
Abstract: In this paper we obtain some results concerning the optimum allocation of sampling effort among k strata at the second phase of a two-phase sampling procedure, using information obtained from the first phase. Two different approaches are employed: a Bayesian posterior analysis and a Bayesian preposterior analysis. Two different allocation methods are derived and illustrated with some numerical examples, for cases where some or all of the nuisance parameters are unknown. The problem when all nuisance parameters are known has been discussed by Ericson (1965).

Journal ArticleDOI
TL;DR: In this article, two methods for testing hypotheses concerning the equality of two proportions are compared, using three different measures of asymptotic efficiency, under conditions where both of them are applicable, and the matched pairs procedure is shown to do almost as well as the usual large sample test.
Abstract: SUMMARY Two methods for testing hypotheses concerning the equality of two proportions are compared. The first method is the large sample test for proportions considered in most elementary texts. The other method is applicable under more general assumptions than the first and requires a pairing of the observations from the two populations. The two procedures are compared, using three different measures of asymptotic efficiency, under conditions where both are applicable. At least for the measures of efficiency considered, the matched pairs procedure is shown to do almost as well as the usual large sample test.
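The two procedures being compared can be sketched as test statistics. The pooled two-sample z statistic is the elementary-text procedure the abstract mentions; McNemar's statistic is used below as a standard matched-pairs procedure, which may differ in detail from the paper's.

```python
import math

def two_sample_z(x1, n1, x2, n2):
    """Large-sample test of H0: p1 = p2 with the pooled variance
    estimate, as in most elementary texts."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def mcnemar_z(b, c):
    """Matched-pairs statistic based only on the discordant pairs:
    b pairs where only the first member responds, c where only the
    second does. McNemar's statistic is one standard matched-pairs
    procedure and may differ in detail from the paper's test."""
    return (b - c) / math.sqrt(b + c)
```

Both statistics are referred to the standard normal distribution in large samples, which is what makes an asymptotic efficiency comparison between the two procedures meaningful.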

Journal ArticleDOI
TL;DR: In this paper, the early history of the law of large numbers is described in a logarithmic form and the role of N. Bernoulli's theorem is discussed.
Abstract: SUMMARY This paper is devoted to the early history of the law of large numbers. An outline of the prehistory of this law is given in § 1. The algebraic part of J. Bernoulli's theorem is presented in a logarithmic form and the lesser known role of N. Bernoulli is described in § 2. Comments on the derivation of the De Moivre-Laplace limit theorems by De Moivre, in particular on the inductive character of his work, on the priority of De Moivre as to the continuous uniform distribution, on the unaccomplished possibility of Simpson having arrived at the normal distribution and on the role of Laplace, are presented in § 3. The historical role of J. Bernoulli's form of the law of large numbers is discussed in § 4.


Journal ArticleDOI
TL;DR: This paper considers the consequences of adopting a piecewise-linear loss function for a situation where interval estimates are required for location or scale parameters, and shows that intervals which are uniformly best invariant, with respect to the group of 'positive' linear transformations, can be found for this frequentist decision problem.
Abstract: Estimation, in both theory and practice, tends to be divided into two almost distinct branches. On the one hand, point estimation, by providing a working value for the unknown true parameter value, is undoubtedly useful in further investigations of the given situation, although its reliability may be difficult to assess in real terms through the medium of the estimated standard error. On the other hand, confidence interval estimation, by presenting a whole set of more or less plausible values of the parameter, is often less easy to apply but has a more direct assessment of reliability through the confidence coefficient. The dilemma in estimation arises essentially from this need for a balance between usefulness and reliability: usefulness through the practical advantages of narrowing down the set of plausible values, and the increasing reliability which attends the enlarging of the set. This dilemma is one of the sources of awkward questions from users about estimation. What confidence coefficient should be used? Why not use the whole parameter space as confidence interval and so ensure 100% confidence? Why use a point estimate when it is almost certainly not the true parameter value? While answers to such naive questions can be expressed in terms of the distribution of the estimator or the class of confidence intervals associated with various confidence coefficients, they are not the only solution to the dilemma, and, in our view, are not readily appreciated by users. Any serious attempt to take account of the consequences of unreliability in not capturing the true parameter value and of lack of usefulness in excessive width should, we feel, involve the specification of some reasonable loss function and the subsequent examination of the problem in terms of decision theory.
Such an attempt is usually beset by the well-known difficulties of the non-existence of a uniformly best solution in a frequentist approach and of the assessment of the prior distribution in a Bayesian approach. There is, however, one unexploited specification of the decision-theory approach to estimation which has some degree of realism and for which a satisfactory frequentist solution can be readily obtained. This paper considers the consequences of adopting a piecewise-linear loss function for a situation where interval estimates are required for location or scale parameters. Under certain circumstances intervals which are uniformly best invariant, with respect to the group of 'positive' linear transformations, can be found for this frequentist decision problem. Such interval estimates, while of interest in their own right as solutions of the particular decision problem,
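The abstract does not state the loss function itself. One standard piecewise-linear form for an interval (t1, t2), which balances width against non-coverage in the way the abstract describes (the constants a, b, c are illustrative, not taken from the paper), is:

```latex
L(\theta; t_1, t_2) = c\,(t_2 - t_1)
  + a\,(t_1 - \theta)\,\mathbf{1}\{\theta < t_1\}
  + b\,(\theta - t_2)\,\mathbf{1}\{\theta > t_2\},
  \qquad a, b, c > 0.
```

The first term penalises excessive width (lack of usefulness); the remaining terms penalise, linearly in the distance, the failure of the interval to capture θ below or above (unreliability).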