
Showing papers in "Biometrika in 1970"


Journal ArticleDOI
TL;DR: A generalization of the sampling method introduced by Metropolis et al. (1953) is presented, along with an exposition of the relevant theory, techniques of application, and methods and difficulties of assessing the error in Monte Carlo estimates.
Abstract: SUMMARY A generalization of the sampling method introduced by Metropolis et al. (1953) is presented along with an exposition of the relevant theory, techniques of application and methods and difficulties of assessing the error in Monte Carlo estimates. Examples of the methods, including the generation of random orthogonal matrices and potential applications of the methods to numerical problems arising in statistics, are discussed. For numerical problems in a large number of dimensions, Monte Carlo methods are often more efficient than conventional numerical methods. However, implementation of the Monte Carlo methods requires sampling from high dimensional probability distributions and this may be very difficult and expensive in analysis and computer time. General methods for sampling from, or estimating expectations with respect to, such distributions are as follows. (i) If possible, factorize the distribution into the product of one-dimensional conditional distributions from which samples may be obtained. (ii) Use importance sampling, which may also be used for variance reduction. That is, in order to evaluate the integral J = ∫ f(x) p(x) dx = E_p(f), where p(x) is a probability density function, instead of obtaining independent samples x_1, ..., x_N from p(x) and using the estimate J_1 = Σ f(x_i)/N, we instead obtain the sample from a distribution with density q(x) and use the estimate J_2 = Σ {f(x_i) p(x_i)}/{q(x_i) N}. This may be advantageous if it is easier to sample from q(x) than p(x), but it is a difficult method to use in a large number of dimensions, since the values of the weights w(x_i) = p(x_i)/q(x_i) for reasonable values of N may all be extremely small, or a few may be extremely large. In estimating the probability of an event A, however, these difficulties may not be as serious since the only values of w(x) which are important are those for which x ∈ A.
Since the methods proposed by Trotter & Tukey (1956) for the estimation of conditional expectations require the use of importance sampling, the same difficulties may be encountered in their use. (iii) Use a simulation technique; that is, if it is difficult to sample directly from p(x) or if p(x) is unknown, sample from some distribution q(y) and obtain the sample x values as some function of the corresponding y values. If we want samples from the conditional dis
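The generalized sampler the paper describes can be sketched in a few lines. The version below is an illustrative reimplementation, not the paper's own code: it uses a symmetric uniform random-walk proposal, so the acceptance probability reduces to min(1, p(x_new)/p(x)), and targets a standard normal density known only up to its normalizing constant.

```python
import math
import random

def metropolis_sample(log_p, x0, n, step=1.0, seed=0):
    """Random-walk Metropolis sampler.  With a symmetric proposal the
    acceptance probability reduces to min(1, p(x_new)/p(x))."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        x_new = x + rng.uniform(-step, step)
        # accept the move with probability min(1, p(x_new)/p(x))
        if rng.random() < math.exp(min(0.0, log_p(x_new) - log_p(x))):
            x = x_new
        out.append(x)
    return out

# Target: standard normal, log-density known up to an additive constant.
draws = metropolis_sample(lambda x: -0.5 * x * x, 0.0, 50000)[5000:]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

The draws are correlated, which is exactly why the paper stresses that assessing the error of Monte Carlo estimates from such chains is nontrivial.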

14,965 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed measures of multivariate skewness and kurtosis by extending certain studies on robustness of the t statistic, and the asymptotic distributions of the measures for samples from a multivariate normal population are derived and a test for multivariate normality is proposed.
Abstract: SUMMARY Measures of multivariate skewness and kurtosis are developed by extending certain studies on robustness of the t statistic. These measures are shown to possess desirable properties. The asymptotic distributions of the measures for samples from a multivariate normal population are derived and a test of multivariate normality is proposed. The effect of nonnormality on the size of the one-sample Hotelling's T² test is studied empirically with the help of these measures, and it is found that Hotelling's T² test is more sensitive to the measure of skewness than to the measure of kurtosis. In the univariate case, such measures have proved useful (i) in selecting a member of a family such as the Karl Pearson family, (ii) in developing a test of normality, and (iii) in investigating the robustness of the standard normal theory procedures. The role of the tests of normality in modern statistics has recently been summarized by Shapiro & Wilk (1965). With these applications in mind for the multivariate situations, we propose measures of multivariate skewness and kurtosis. These measures are developed naturally by extending certain aspects of some robustness studies for the t statistic which involve β1 and β2. It should be noted that measures of multivariate dispersion have been available for quite some time (Wilks, 1932, 1960; Hotelling, 1951). We deal with the measure of skewness in § 2 and with the measure of kurtosis in § 3. In § 4 we give two important applications of these measures, namely, a test of multivariate normality and a study of the effect of nonnormality on the size of the one-sample Hotelling's T² test. Both of these problems have attracted attention recently. The first problem has been treated by Wagle (1968) and Day (1969) and the second by Arnold (1964), but our approach differs from theirs.
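As an illustration (an assumed sketch, not the paper's notation or code), the sample analogues of the two measures, often written b1,p and b2,p, can be computed from Mahalanobis-type inner products of the centred observations. The sketch below hard-codes p = 2 so the covariance inverse can be written by hand; for multivariate normal data b2 should be near p(p + 2) = 8.

```python
import random

def mardia_measures(xs):
    """Sample multivariate skewness b1 and kurtosis b2 for 2-D data,
    via the inner products g(i,j) = (x_i - xbar)' S^{-1} (x_j - xbar)."""
    n = len(xs)
    mx = sum(x for x, _ in xs) / n
    my = sum(y for _, y in xs) / n
    d = [(x - mx, y - my) for x, y in xs]
    # ML covariance matrix (divisor n) and its explicit 2x2 inverse
    sxx = sum(a * a for a, _ in d) / n
    syy = sum(b * b for _, b in d) / n
    sxy = sum(a * b for a, b in d) / n
    det = sxx * syy - sxy * sxy
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det
    def g(u, v):
        return u[0] * (ixx * v[0] + ixy * v[1]) + u[1] * (ixy * v[0] + iyy * v[1])
    b1 = sum(g(u, v) ** 3 for u in d for v in d) / n ** 2   # skewness
    b2 = sum(g(u, u) ** 2 for u in d) / n                   # kurtosis
    return b1, b2

rng = random.Random(1)
data = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(400)]
b1, b2 = mardia_measures(data)
```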

3,774 citations


Journal ArticleDOI
TL;DR: In this article, a generalization of the Kruskal-Wallis test for testing the equality of K continuous distribution functions when observations are subject to arbitrary right censorship is proposed, where the distribution of the censoring variables is allowed to differ for different populations.
Abstract: SUMMARY A generalization of the Kruskal-Wallis test, which extends Gehan's generalization of Wilcoxon's test, is proposed for testing the equality of K continuous distribution functions when observations are subject to arbitrary right censorship. The distribution of the censoring variables is allowed to differ for different populations. An alternative statistic is proposed for use when the censoring distributions may be assumed equal. These statistics have asymptotic chi-squared distributions under their respective null hypotheses, whether the censoring variables are regarded as random or as fixed numbers. Asymptotic power and efficiency calculations are made and numerical examples provided. A generalization of Wilcoxon's statistic for comparing two populations has been proposed by Gehan (1965a) for use when the observations are subject to arbitrary right censorship. Mantel (1967), as well as Gehan (1965b), has considered a further generalization to the case of arbitrarily restricted observation, or left and right censorship. Both of these authors base their calculations on the permutation distribution of the statistic, conditional on the observed censoring pattern for the combined sample. However, this model is inapplicable when there are differences in the distribution of the censoring variables for the two populations. For instance, in medical follow-up studies, where Gehan's procedure has so far found its widest application, this would happen if the two populations had been under study for different lengths of time. This paper extends Gehan's procedure for right censored observations to the comparison of K populations. The probability distributions of the relevant statistics are here considered in a large sample framework under two models: Model I, corresponding to random or unconditional censorship; and Model II, which considers the observed censoring times as fixed numbers. 
Since the distributions of the censoring variables are allowed to vary with the population, Gehan's procedure is also extended to the case of unequal censorship. For Model I these distributions are theoretical distributions; for Model II they are empirical. Besides providing chi-squared statistics for use in testing the hypothesis of equality of the K populations against general alternatives, the paper shows how single degrees of freedom may be partitioned for use in discriminating specific alternative hypotheses. Several investigators (Efron, 1967) have pointed out that Gehan's test is not the most efficient against certain parametric alternatives and have proposed modifications to increase its power. Asymptotic power and efficiency calculations made below demonstrate that their criticisms would apply equally well to the test proposed here. Hopefully some of the modifications they suggest can likewise eventually be generalized to the case of K populations.
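For K = 2 the building block is Gehan's pairwise scoring of right-censored observations. A minimal sketch, under the assumption that each observation is a (time, event) pair with event = 1 for an observed death and 0 for a censored time:

```python
def gehan_score(a, b):
    """Pairwise Gehan score: +1 if observation a is known to exceed b,
    -1 if known to be smaller, 0 if censoring leaves the order ambiguous.
    Each observation is (time, event), event = 1 for an observed death."""
    ta, ea = a
    tb, eb = b
    if (ta > tb and eb == 1) or (ta == tb and eb == 1 and ea == 0):
        return 1
    if (ta < tb and ea == 1) or (ta == tb and ea == 1 and eb == 0):
        return -1
    return 0

def gehan_statistic(sample1, sample2):
    # sum of pairwise scores between the two samples
    return sum(gehan_score(a, b) for a in sample1 for b in sample2)

# With no censoring this reduces to the Wilcoxon pair count.
W = gehan_statistic([(3, 1), (5, 1), (7, 1)], [(1, 1), (2, 1), (4, 1)])
```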

1,351 citations


Journal ArticleDOI
TL;DR: In this article, a general multivariate normal distribution with a general parametric form of the mean vector and the variance-covariance matrix is proposed, where any parameter of the model may be fixed, free or constrained to be equal to other parameters.
Abstract: SUMMARY It is assumed that observations on a set of variables have a multivariate normal distribution with a general parametric form of the mean vector and the variance-covariance matrix. Any parameter of the model may be fixed, free or constrained to be equal to other parameters. The free and constrained parameters are estimated by maximum likelihood. A wide range of models is obtained from the general model by imposing various specifications on the parametric structure of the general model. Examples are given of areas and problems, especially in the behavioural sciences, where the method may be useful. 1. GENERAL METHODOLOGY 1.1. The general model We consider a data matrix X of N observations on p response variables and the following model. Rows of X are independently distributed, each having a multivariate normal distribution with the same variance-covariance matrix Σ of the form

1,115 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of making inference about the point in a sequence of zero-one variables at which the binomial parameter changes is discussed, and the asymptotic distribution of the maximum likelihood estimate of the change-point is derived in computable form using random walk results.
Abstract: The report discusses the problem of making inference about the point in a sequence of zero-one variables at which the binomial parameter changes. The asymptotic distribution of the maximum likelihood estimate of the change-point is derived in computable form using random walk results. The asymptotic distributions of likelihood ratio statistics are obtained for testing hypotheses about the change-point. Some exact numerical results for these asymptotic distributions are given and their accuracy as finite sample approximations is discussed.
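The maximum likelihood estimate of the change-point itself is easy to compute by profiling the likelihood over candidate values of the change-point (the paper's contribution is the asymptotic distribution of this estimate, not the search). A sketch, assuming the first tau observations are Bernoulli(p1) and the remainder Bernoulli(p2):

```python
import math

def xlogy(k, p):
    # k * log(p), with the usual convention 0 * log(0) = 0
    return 0.0 if k == 0 else k * math.log(p)

def changepoint_mle(xs):
    """ML estimate of the change-point tau in a 0-1 sequence."""
    n = len(xs)
    best_tau, best_ll = None, -math.inf
    for tau in range(1, n):
        k1, k2 = sum(xs[:tau]), sum(xs[tau:])
        p1, p2 = k1 / tau, k2 / (n - tau)
        ll = (xlogy(k1, p1) + xlogy(tau - k1, 1 - p1)
              + xlogy(k2, p2) + xlogy(n - tau - k2, 1 - p2))
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau

tau_hat = changepoint_mle([0] * 20 + [1] * 20)
```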

766 citations


Journal ArticleDOI
TL;DR: In this article, the authors derived optimal strategies involving linear estimators under certain variance assumptions and compared them under various assumptions under a superpopulation model, and showed that the conventional ratio estimator is, in a certain natural sense, optimal.
Abstract: SUMMARY Problems of estimating totals in finite populations, when auxiliary information regarding variate values is available, are considered under some linear regression, 'super-population', models. Optimal strategies involving linear estimators are derived under certain variance assumptions and compared under various assumptions. For a model which seems to apply in many practical problems, the conventional ratio estimator is shown to be, in a certain natural sense, optimal, but for all models considered, the optimal sampling plans are purposive, i.e. nonrandom. With a squared error loss function, the strategy of using a probability proportional to size sampling plan and the Horvitz-Thompson estimator is shown to be inadmissible in many models for which the strategy seems 'reasonable' and in a particular model for which it is, in one sense, optimal. Some of the results concerning purposive sampling and the ratio estimator are supported by an empirical study.
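The conventional ratio estimator of a population total, the estimator shown to be optimal under one of the models considered, is a one-liner. A sketch, assuming the population total of the auxiliary variate x is known:

```python
def ratio_estimate_total(y_sample, x_sample, x_total):
    """Conventional ratio estimator of the population total of y:
    (sample total of y / sample total of x) * known total of x."""
    return sum(y_sample) / sum(x_sample) * x_total

# Toy data in which y is proportional to x, as the superpopulation
# model under which the estimator is optimal would suggest.
Y_hat = ratio_estimate_total([2.0, 4.0], [1.0, 2.0], 30.0)
```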

393 citations


Journal ArticleDOI

384 citations



Journal ArticleDOI
TL;DR: In this paper, a probability model consisting of a product of multinomial distributions is derived for a bird banding experiment in which a new batch of banded birds is released at the beginning of each year and the bands from the dead birds are returned by observers in subsequent years.
Abstract: SUMMARY A probability model consisting of a product of multinomial distributions is derived for a bird banding experiment in which a new batch of banded birds is released at the beginning of each year and the bands from the dead birds are returned by observers in subsequent years. The survival and reporting probabilities are assumed to vary from year to year and maximum likelihood estimates of these probabilities together with their asymptotic variances are derived. A goodness of fit test of the model is outlined and this study concludes with a worked example.
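The multinomial cell probabilities of such a model can be sketched directly. With hypothetical year-by-year survival probabilities s[k] and reporting probabilities r[k] (an illustrative parametrization, not necessarily the paper's exact one), a bird banded at the start of year i is recovered in year j if it survives years i through j-1, dies in year j, and its band is reported:

```python
def recovery_probability(s, r, i, j):
    """P(a bird banded at the start of year i is recovered in year j):
    survive years i..j-1, die in year j, band reported with prob r[j]."""
    prob = 1.0
    for k in range(i, j):
        prob *= s[k]
    return prob * (1.0 - s[j]) * r[j]

s = [0.5, 0.5]   # hypothetical annual survival probabilities
r = [1.0, 1.0]   # hypothetical reporting probabilities
p00 = recovery_probability(s, r, 0, 0)  # dies and is reported in year 0
p01 = recovery_probability(s, r, 0, 1)  # survives year 0, recovered in year 1
```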

173 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the C(α) tests used by Neyman are asymptotically equivalent to the use of the likelihood ratio test and to tests using the maximum likelihood estimators.
Abstract: SUMMARY Suppose we are given sample observations from a distribution f(x; θ1, ..., θk), and it is desired to test the null hypothesis θ1 = 0 against the alternative hypothesis that θ1 ≠ 0, the values of θ2, ..., θk being unknown. Various asymptotically optimal procedures for doing this are considered. It is shown that the C(α) tests used by Neyman are asymptotically equivalent to the use of the likelihood ratio test and to tests using the maximum likelihood estimators. The necessary conditions for C(α) tests to be asymptotically optimal are re-examined and the way in which they can be applied to randomized and nonrandomized experiments explained and illustrated. The generalization of these tests to cases where the alternative hypothesis involves changes in more than one parameter is also mentioned.

127 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that a simple count of sign changes is nearly as efficient as the familiar Durbin-Watson test of autoregression of residuals, and that individual decisions based on this test are closely similar to those from the number-of-runs test.
Abstract: SUMMARY From a constructed example of 100 random samples of size 40, in conjunction with the author's ACV method for comparing the relative efficiency of different tests of significance, it is found that a simple count of sign changes is nearly as efficient as the familiar Durbin-Watson test of autoregression of residuals. Individual decisions based on this test are closely similar to those from the number-of-runs test. On another actual body of data the three tests seem to be about equally efficient. A table is supplied giving cumulative binomial probabilities for assessing significance for the sign changes test. Probably most workers in multivariate regression of time series, who are computerless, or without a von Neumann subprogram in their computer, adopt the practice of counting the number of sign changes amongst residuals, for the purpose of assessing the probable presence of autoregression. If sign changes are few, residual autoregression is inferred, i.e. the regression is not satisfactory because some significant independent variables have been omitted, the linear form assumed is not valid, etc. The practice is rational; if T is the number of sets of observations and if signs, plus or minus, in the sequence are in random order, the frequency of r sign changes will be (T-1)!/{r!(T-1-r)!}, the total frequency 2^(T-1) - 1, almost the binomial with p = 1/2. The -1 in the total frequency arises because a sequence of all T signs the same is inadmissible, since the sum of regression residuals is zero. Incidentally, the latter constraint implies that the sequence of values of the residuals or their signs cannot be independent of one another, as is assumed, as regards the number of sign changes, in the use of the binomial. The effect is believed to be negligible when T is not small, and is ignored here.
The main object of the present note is to assess the relative efficiency of the count of sign changes r compared with the familiar d statistic developed by Durbin & Watson (1950, 1951). As the writer is unable to cope with the problem of the noncentral frequency of r by algebra, recourse is made to the Monte Carlo method applied to a single particular case.
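Cumulative probabilities of the kind supplied in the paper's table can be reproduced from the frequencies C(T-1, r), with the inadmissible all-same-sign sequence (r = 0) excluded from the total 2^(T-1) - 1. A sketch:

```python
from math import comb

def sign_change_cdf(T, r):
    """P(at most r sign changes) among T residuals in random order.
    Frequencies are C(T-1, k); the k = 0 (all signs equal) sequence is
    inadmissible because regression residuals sum to zero."""
    total = 2 ** (T - 1) - 1
    return sum(comb(T - 1, k) for k in range(1, r + 1)) / total

# T = 5: admissible frequencies are C(4,1..4) = 4, 6, 4, 1, total 15.
p = sign_change_cdf(5, 1)
```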

Journal ArticleDOI
TL;DR: In this article, conditions for a quadratic form in singular normal variables to follow a χ² distribution are obtained, correcting and extending previous results by Good (1969) and extending earlier results by Ogasawara & Takahashi (1951).
Abstract: SUMMARY Conditions for a quadratic form in singular normal variables to follow a χ² distribution are obtained, correcting and extending previous results by Good (1969). Connexions with the noncentral case and independence of two quadratic forms are also given, extending earlier results by Ogasawara & Takahashi (1951). Finally, a further generalization of the well-known theorem of Cochran (1934) is presented. 1. CENTRAL χ² DISTRIBUTION Throughout this section let x follow a multivariate normal distribution with mean vector 0 and covariance matrix C, possibly singular, and let A be a symmetric matrix, not necessarily semidefinite. Then Good (1969) obtained conditions for x'Ax to follow a χ² distribution; these conditions simplify those proved earlier by Ogasawara & Takahashi (1951), and more recently by Khatri (1963), Rayner & Livingstone (1965), Rao (1966) and

Journal ArticleDOI
TL;DR: A general recursive least square procedure for the analysis of experimental designs is described in this paper, where the analysis process consists of a sequence of sweeps of the data vector, determined by the factors of the model, the sweep being the only form of arithmetic operation required.
Abstract: SUMMARY A general recursive least squares procedure for the analysis of experimental designs is described. Any experimental design can be analyzed with a finite sequence of sweeps, in each of which a set of effects for a factor of the model is calculated and subtracted from the vector of observations. The effects are usually either simple means or effective means, which are ordinary means divided by an efficiency factor. The analysis for a particular design and model is characterized by a set of K efficiency factors for each factor of the model, where K is the order of balance of that factor, and by a triangular control matrix of indicators (0 or 1), in which subdiagonal zeros indicate orthogonality between pairs of factors in the model, and diagonal zeros indicate factors that are completely aliased with previous factors. The control matrix determines the minimal sweep sequence for analysis. The procedure may be implemented in an adaptive or 'learning' form, in which the information that characterizes the analysis is determined progressively from preliminary analyses of special dummy variates, each generated from an arbitrarily assigned set of effects for a factor of the model. A simple extension of the procedure produces the multistratum analysis required for stratified designs such as split plots and confounded factorials. Observations commonly arise from designed experiments, the symmetries and pattern of which implicitly affect the analysis and inference from the data, but are not usually explicitly characterized and utilized in general linear model formulations. This paper describes a simple procedure for least squares analysis of experimental designs with respect to a linear factorial model, in which the sequence of operations required is fully determined and controlled by the symmetries and pattern in the design. 
The analysis process consists of a sequence of sweeps of the data vector, determined by the factors of the model, the sweep being the only form of arithmetic operation required. In a sweep for a factor of the model, a set of effects for that factor is calculated and subtracted
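A single sweep, in the simplest orthogonal case where the effects are plain means, can be sketched as follows. This toy version omits the effective means, efficiency factors and control matrix of the paper's general procedure:

```python
def sweep(y, levels):
    """One sweep for a factor: calculate the effect (here a simple mean)
    for each level of the factor and subtract it from the data vector."""
    groups = {}
    for value, lev in zip(y, levels):
        groups.setdefault(lev, []).append(value)
    effects = {lev: sum(v) / len(v) for lev, v in groups.items()}
    residuals = [value - effects[lev] for value, lev in zip(y, levels)]
    return effects, residuals

# Two-level factor: the sweep leaves residuals summing to zero per level.
effects, resid = sweep([1.0, 2.0, 3.0, 4.0], [0, 0, 1, 1])
```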


Journal ArticleDOI
TL;DR: In this article, the authors provide answers to questions concerning adequate sample size in a one-way analysis of variance situation depend on such things as the number of categories to be compared, the levels of risk an experimenter is willing to assume and some knowledge of the noncentrality parameter.
Abstract: SUMMARY Answers to questions concerning adequate sample size in a one-way analysis of variance situation depend on such things as the number of categories to be compared, the levels of risk an experimenter is willing to assume and some knowledge of the noncentrality parameter. The accompanying tables, which provide answers without need of iteration, are for the experimenter who can deal better intuitively with an estimate of the standardized range of the means than with the noncentrality parameter. Maximum values of the standardized range, r, are tabulated when the means of k groups, each containing N observations, are being compared at α and β levels of risk, for α = 0.01, 0.05, 0.1, 0.2; β = 0.005, 0.01, 0.05, 0.1, 0.2, 0.3; k = 2 (1) 6; N = 2 (1) 8 (2) 30, 40, 60, 80, 100, 200, 500, 1000.

Journal ArticleDOI
TL;DR: In this paper, the tolerance mechanism underlying quantal response experiments is considered: each unit has a natural tolerance to the stimulus and responds if and only if the applied strength exceeds this value, leading to estimation of the parameters A and B, or functions of them such as the LD50, -A/B.
Abstract: where Φ is the standardized normal distribution function, and A and B are unknown parameters. The purpose of an informative experiment which subjects n units to a stimulus, the ith unit receiving strength x_i, is simply the estimation of the parameters A and B, or some function of them, for example -A/B, the so-called LD50. The mechanism traditionally visualized in this confrontation between stimulus and subject is that each subject has some specific natural tolerance to the stimulus and will respond if and only if the strength of the stimulus applied exceeds this value. On the assumption that tolerance is N(μ, σ²), i.e. normally distributed with mean μ and variance σ², over the experimental units, the probability that a unit, chosen at random, responds to a stimulus of strength x is
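The quantities involved are easy to illustrate. With the response probability Φ(A + Bx) and hypothetical parameter values (the A and B below are made-up numbers, not estimates from any experiment), the LD50 = -A/B is the strength at which the response probability is one half:

```python
import math

def normal_cdf(z):
    """Standardized normal distribution function Phi."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def response_probability(A, B, x):
    """P(response at stimulus strength x) = Phi(A + B x)."""
    return normal_cdf(A + B * x)

def ld50(A, B):
    """The stimulus strength at which the response probability is 1/2."""
    return -A / B

A, B = -2.0, 1.0        # hypothetical parameter values
x50 = ld50(A, B)
```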

Journal ArticleDOI
TL;DR: In this paper, a random sample, x1, ..., xn, from a Poisson distribution is considered, and the Rao-Blackwell method is used to compute the distribution.
Abstract: SUMMARY Consider a random sample, x1, ..., xn, from a Poisson distribution. The Rao-Blackwell

Journal ArticleDOI
TL;DR: In this paper, a play-the-winner (PW) sampling rule is proposed for selecting the better binomial population with probability P* when the two single-trial probabilities of success, p and p', differ by at least Δ*, where P* and Δ* are prescribed.
Abstract: The sequential allocation of treatments by an experimenter is considered for determining which of two binomial populations has the larger probability of success. Of particular interest in this study is a 'Play-the-Winner' (PW) sampling rule which prescribes that one continues with the same population after each success and one switches to the opposite population after each failure. The performance of the PW rule is examined for the selection problem, i.e. for selecting the better population with probability P* when the two single-trial probabilities of success, p and p', differ by at least Δ*, where P* and Δ* are prescribed. A comparison is made between PW sampling and 'Vector-at-a-Time' (VT) sampling. In comparing results a criterion used is the expected number of failures that could have been avoided by using the better population throughout. It is shown for a particular common termination rule that with Δ* close to zero the PW sampling is superior to VT sampling if and only if ½(p + p') > ½. Other comparisons are also discussed.
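The PW rule itself is a two-line sampling scheme: stay after a success, switch after a failure. A sketch, with a deliberately degenerate check (success probabilities 1 and 0) so the behaviour is deterministic:

```python
import random

def play_the_winner(p, n_trials, seed=0):
    """PW rule: continue with the same arm after each success, switch
    to the other arm after each failure.  Returns the failure count
    and the arm used at each trial."""
    rng = random.Random(seed)
    arm, failures, arms_used = 0, 0, []
    for _ in range(n_trials):
        arms_used.append(arm)
        if rng.random() < p[arm]:
            pass            # success: play the winner again
        else:
            failures += 1
            arm = 1 - arm   # failure: switch populations
    return failures, arms_used

# Degenerate check: arm 0 always succeeds, so we never switch.
failures, arms = play_the_winner([1.0, 0.0], 100)
```

The criterion in the paper, failures that could have been avoided, can then be estimated by comparing such runs against always playing the better arm.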

Journal ArticleDOI
TL;DR: In this paper, the bias introduced by fitting a polynomial of too low a degree is considered, and the criterion used for selecting a design is the mean square error in estimating the height of the surface averaged over some region in the factor space.
Abstract: The main emphasis in statistical work on the design of experiments has been on the comparison of treatments and, especially, on the estimation of treatment contrasts. An exception is in the study of response surface designs, that is in the design of experiments in which the treatments are identified by the values of quantitative variables and in which the expected response is a smooth function of these variables. The emphasis in developing these designs has been on the estimation of the height of the response surface at points in the factor space, i.e. on the estimation of absolute response rather than of differences in response. In particular, in the work of Box & Draper (1959) on the bias introduced by fitting a polynomial of too low a degree, the criterion used for selecting a design is the mean square error in estimating the height of the surface averaged over some region in the factor space. But even in such experiments, differences in response will often be of more importance than the absolute response. If differences at points close together in the factor space are involved, this implies that estimation of the local slope of the response surface is of interest. In what follows the choice of designs to estimate the slope of response surfaces is therefore considered. In designing these experiments, allowance is made for bias due to an inadequate model. The experimental errors are assumed to be independently and identically distributed. A first order polynomial is fitted to the results by standard least squares methods and the fitted coefficients used to estimate the slope of the response surface, either at a specified point or somewhere within a given region of interest. This region is not necessarily the same as the experimental region, that is the region in factor space in which it is feasible to perform trials. If the true response surface is quadratic, how should experiments be designed so that the estimate of the slope is as precise as possible?
Designs for one factor are the subject of the following section. In § 3 the theory of designs for any number of factors is developed and, in § 4, applied to a two-dimensional example.


Journal ArticleDOI
TL;DR: In this paper, the relative merits of several solutions to the Behrens-Fisher problem are considered on the basis of the stability of their size and the magnitude of their power, and it is shown that if the sample sizes are both larger than 7, then the solutions due to Pagurova and Welch are very good.
Abstract: SUMMARY The relative merits of several solutions to the Behrens-Fisher problem are considered on the basis of the stability of their size and the magnitude of their power. It is shown that if the sample sizes are both larger than 7, then the solutions due to Pagurova and Welch are very good in the above sense. For smaller sample sizes certain modifications of Pagurova's solution are presented. 1. PRELIMINARIES Suppose that we have two normal distributions, the first with mean and variance parameters μ1 and σ1², and the second with parameters μ2 and σ2². Samples of sizes n1 and n2, respectively, are taken, and the sample means and variances are x̄1 and s1², and x̄2 and s2². The Behrens-Fisher problem consists in testing the null hypothesis H0: η = (μ1 - μ2)/σ1 = 0 against one of the alternatives η > 0, η < 0, η ≠ 0. Although several solutions to this problem have been proposed in the past forty years, no adequate guidelines have so far been established as to which one to follow in a given practical situation. Our aim is partly to provide such a guideline based on a comparative study of some of the solutions. For this purpose we have chosen the ones due to Banerjee (1960), Fisher (1936), Pagurova (1968a), Wald (1955) and Welch (1947). Letting λi = 1/ni (i = 1, 2), we see that the solutions are all of the following general form. Reject H0 if v = (x̄1 - x̄2)/√(λ1 s1² + λ2 s2²) > ψα(C), where ψα(C) is a function of C = λ1 s1²/(λ1 s1² + λ2 s2²) and the chosen level of significance α. The expressions for ψα(C) for these five solutions are presented in Table 1. In the case of Fisher's and Welch's solutions, where ψα(C) is based on an asymptotic expansion, we have taken it to the second order in the sample size, and in the case of Wald's solution, which is applicable only when the sample sizes are equal, we have taken the common sample size to be n1.
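Welch's member of this family of solutions can be sketched from summary statistics alone. The code below computes the statistic v and the Welch-Satterthwaite approximate degrees of freedom commonly used to set the critical value (an illustrative sketch, not the paper's second-order expansion):

```python
import math

def welch_statistic(m1, s1sq, n1, m2, s2sq, n2):
    """Welch's approximate-t solution to the Behrens-Fisher problem:
    the statistic v and the Welch-Satterthwaite degrees of freedom,
    computed from sample means, variances and sizes."""
    a1, a2 = s1sq / n1, s2sq / n2
    v = (m1 - m2) / math.sqrt(a1 + a2)
    df = (a1 + a2) ** 2 / (a1 ** 2 / (n1 - 1) + a2 ** 2 / (n2 - 1))
    return v, df

# Summary statistics for two hypothetical samples.
v, df = welch_statistic(5.0, 4.0, 10, 3.0, 9.0, 15)
```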

Journal ArticleDOI
R. Srinivasan1
TL;DR: In this article, a general method for testing the goodness of fit of a continuous distribution against unspecified alternatives was developed for test cases where the null hypothesis specifies only the functional form of the distribution and leaves some or all of its parameters unspecified.
Abstract: A general method is developed for testing the goodness of fit of a continuous distribution against unspecified alternatives when the null hypothesis specifies only the functional form of the distribution and leaves some or all of its parameters unspecified. The exponential and normal distributions are treated in detail. For these distributions tables of percentile points of the test statistics are provided. Power comparisons of our tests with those developed by Lilliefors are also given. All the numerical results involved are derived using Monte Carlo methods.
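The approach can be sketched for the normal case: a Kolmogorov-Smirnov-type statistic with the mean and variance estimated from the sample (so the usual KS tables do not apply), and percentile points of its null distribution obtained by Monte Carlo. This is an illustrative reimplementation under those assumptions, not the authors' code or their exact statistic:

```python
import math
import random

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ks_normal(xs):
    """KS-type distance between the empirical cdf and a normal cdf
    whose mean and variance are estimated from the same sample."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    zs = sorted((x - m) / s for x in xs)
    return max(max(normal_cdf(z) - i / n, (i + 1) / n - normal_cdf(z))
               for i, z in enumerate(zs))

def monte_carlo_pvalue(xs, n_sim=500, seed=0):
    """Null distribution of the statistic by simulation from N(0, 1);
    location-scale invariance makes the choice of mean/variance moot."""
    rng = random.Random(seed)
    d_obs = ks_normal(xs)
    n = len(xs)
    exceed = sum(ks_normal([rng.gauss(0, 1) for _ in range(n)]) >= d_obs
                 for _ in range(n_sim))
    return exceed / n_sim

rng = random.Random(42)
sample = [rng.gauss(10.0, 2.0) for _ in range(30)]
p_value = monte_carlo_pvalue(sample)
```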


Journal ArticleDOI
TL;DR: In this paper, a method is developed for constructing D-optimal designs in a linear regression context with k explanatory variables, when previous observations are available and a fixed number of new sets of values of the explanatory variables are to be chosen within the unit sphere.
Abstract: SUMMARY A method is developed for constructing D-optimal designs in a linear regression context with k explanatory variables, when previous observations are available and a fixed number of new sets of values of the explanatory variables are to be chosen within the unit sphere. The effect of previous observations on the relationship between D-optimality and E-optimality is discussed. The relationship between designs constructed sequentially to be optimum for each new set of values and fully optimal designs is also discussed. Some brief comment is made on the relevance of the ideas to Bayesian design.
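The greedy sequential version of such a construction is easy to sketch: given the information matrix X'X of the previous observations, choose the candidate point x maximizing det(X'X + xx'). The example below assumes k = 2 variables with candidates on the unit circle, and uses a hand-rolled 2x2 determinant:

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def best_new_point(xtx, candidates):
    """Greedy D-optimal augmentation for k = 2: among candidate design
    points x, pick the one maximizing det(X'X + x x')."""
    def updated_det(x):
        m = [[xtx[0][0] + x[0] * x[0], xtx[0][1] + x[0] * x[1]],
             [xtx[1][0] + x[1] * x[0], xtx[1][1] + x[1] * x[1]]]
        return det2(m)
    return max(candidates, key=updated_det)

# Previous observations all lie along the first axis, so the determinant
# is maximized by a candidate in the unexplored second direction.
xtx = [[2.0, 0.0], [0.0, 0.0]]
best = best_new_point(xtx, [(1.0, 0.0), (0.707, 0.707), (0.0, 1.0)])
```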

Journal ArticleDOI
TL;DR: In this paper, a general mathematical theory for quantitative investigation of the properties of a dilutely distributed particulate phase is presented, which provides unbiased estimates of any linear property of the particle size cumulative frequency function under the assumption that particle centres are uniformly distributed within the specimen.
Abstract: This paper presents a general mathematical theory for quantitative investigation of the properties of a dilutely distributed particulate phase. If the process by which the specimen is sampled and the process by which particles in the sample are measured can be appropriately modelled, then this theory provides unbiased estimates of any linear property of the particle size cumulative frequency function under the assumption that particle centres are uniformly distributed within the specimen. Uniqueness of the estimate is discussed and variance formulae given. Some classical results on spherical particles and new results on cylindrical particles are presented as specific examples. The theory unifies a large body of current literature and also provides the basis for the inclusion of the effects of sample preparation and any anomalies encountered in the measurement process.


Journal ArticleDOI
TL;DR: In this paper, the difficulty in obtaining the maximum likelihood estimates of the parameters of the Cauchy distribution is discussed, and comparisons between the best linear unbiased estimators are presented.
Abstract: SUMMARY Tables, based on maximum likelihood estimators, are presented which enable one to obtain confidence intervals and test hypotheses about parameters of the Cauchy distribution. The difficulty in obtaining the maximum likelihood estimates of the parameters is discussed. Comparisons between the maximum likelihood estimators and the best linear unbiased estimators are presented.
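The difficulty mentioned, that the Cauchy likelihood can have multiple local maxima, is a reason to avoid naive root-finding on the score equation. A sketch using a simple grid search over the location parameter, assuming the scale is known and equal to one:

```python
import math

def cauchy_loglik(theta, xs):
    """Log-likelihood for a Cauchy location theta with unit scale
    (additive constants dropped)."""
    return -sum(math.log(1.0 + (x - theta) ** 2) for x in xs)

def cauchy_mle_location(xs, step=0.001):
    """Grid maximization of the likelihood over [min(xs), max(xs)];
    a grid is used because the likelihood may be multimodal."""
    lo, hi = min(xs), max(xs)
    grid = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    return max(grid, key=lambda t: cauchy_loglik(t, xs))

# Symmetric data, so the location MLE is at the centre of symmetry.
theta_hat = cauchy_mle_location([-1.0, 0.0, 1.0])
```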

Journal ArticleDOI
TL;DR: In this paper, two related algorithms are given to find a monotone function of one or more variables that best approximates in the least squares sense given function values that are not already monotones.
Abstract: SUMMARY Two related algorithms are given to find a monotone function of one or more variables that best approximates, in the least squares sense, given function values that are not already monotone. For one independent variable, such algorithms are well known; see, for example, Bartholomew (1959). This case is briefly repeated in § 2 for completeness and as an introduction to the more difficult case of two or more independent variables. Of the two algorithms given in §§ 3 and 5, the second one should generally be shorter. Section 4 contains an auxiliary algorithm that is needed in connexion with both §§ 3 and 5. The paper closes with an example in § 6 and some remarks in § 7.
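For one independent variable the well-known algorithm is pool-adjacent-violators: scan the values and repeatedly merge adjacent blocks that violate monotonicity, replacing them by their weighted mean. A minimal sketch for a nondecreasing least squares fit with equal weights (the paper's harder multivariable case is not attempted here):

```python
def pava(y):
    """Pool-adjacent-violators: least squares nondecreasing fit to y."""
    blocks = []  # each block: [fitted value, number of points pooled]
    for v in y:
        blocks.append([float(v), 1])
        # merge backwards while the fitted values would decrease
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2 = blocks.pop()
            v1, w1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
    return [v for v, w in blocks for _ in range(w)]

fit = pava([3, 1, 2])
```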