
Showing papers on "Sampling (statistics) published in 1971"


Journal ArticleDOI
TL;DR: The grid-by-number sampling procedure is the only surface-oriented sampling procedure directly comparable (equivalent) to conventional bulk sieve analysis as mentioned in this paper, and it is therefore the only correct sampling procedure for the study of surface-dependent phenomena.
Abstract: Procedures for sampling the bed material of gravel-bed rivers consist of five independent steps: selection of site and time, collection of a sample, determination of grain size, computation of size distribution, and presentation of results. Because several choices of methods are available at each step, much confusion exists in the literature. Simple geometric arguments, illustrated with a model based on densely packed cubes, are used to develop weighting factors to convert from one sampling procedure to another. Comparison with several sets of field data shows that the conversion method is applicable to gravel deposits. Differences remaining after conversion are attributable to random differences between samples, or to population differences. The grid-by-number procedure is the only surface-oriented procedure directly comparable (equivalent) to customary bulk sieve analysis. It is, therefore, the only correct sampling procedure for the study of surface-dependent phenomena. Additional field data are presented to compare three equivalent procedures.

408 citations


Journal ArticleDOI
TL;DR: Raiffa and Schlaifer's theory of conjugate prior distributions is applied to Jeffreys's theory of tests for a sharp hypothesis, for simple normal sampling, for model I analysis of variance, and for univariate and multivariate Behrens-Fisher problems as discussed by the authors.
Abstract: Raiffa and Schlaifer's theory of conjugate prior distributions is here applied to Jeffreys's theory of tests for a sharp hypothesis, for simple normal sampling, for model I analysis of variance, and for univariate and multivariate Behrens-Fisher problems. Leonard J. Savage's Bayesianization of Jeffreys's theory is given with new generalizations. A new conjugate prior family for normal sampling which allows prior independence of unknown mean and variance is given.

385 citations


Journal ArticleDOI
TL;DR: In this paper the optimal discrete-time linear-quadratic regulator problem is carefully presented and the basic results are reviewed.
Abstract: In this paper the optimal discrete-time linear-quadratic regulator problem is carefully presented and the basic results are reviewed. Dynamic programming is used to determine the optimization equations. Special attention is given to problems unique to the discrete-time case; this includes, for example, the possibility of a singular system matrix and a singular control-effort weighting matrix. Some problems associated with sampled-data systems are also summarized, e.g., sensitivity to sampling time, and loss of controllability due to sampling. Computational methods for the solution of the optimization equations are outlined and a simple example is included to illustrate the various computational approaches.

290 citations
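The backward dynamic-programming recursion described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's own code; the plant matrices below are invented for a hypothetical sampled double integrator:

```python
import numpy as np

def dlqr_finite(A, B, Q, R, N):
    """Finite-horizon discrete-time LQR gains via the backward
    dynamic-programming (Riccati) recursion."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        # gain minimizing the stage cost-to-go: solve (R + B'PB) K = B'PA
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1], P

# hypothetical plant: double integrator sampled at 0.1 s
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
gains, P = dlqr_finite(A, B, Q, R, 200)
K = gains[0]  # approximates the steady-state gain for a long horizon
```

With a horizon of 200 steps the first gain is already close to the infinite-horizon solution, and the closed-loop matrix A - BK has all eigenvalues inside the unit circle.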


Journal ArticleDOI
TL;DR: A new sampling technique, which gives unbiased and alias-free estimates, is described, which treats the spike train as a series of delta functions and generates samples by digital filtering.
Abstract: Spectral analysis provides powerful techniques for describing the lower order moments of a stochastic process and interactions between two or more stochastic processes. A major problem in the application of spectral analysis to neuronal spike trains is how to obtain equispaced samples of the spike trains which will give unbiased and alias-free spectral estimates. Various sampling methods, which treat the spike train as a continuous signal, a point process and as a series of Dirac delta-functions, are reviewed and their limitations discussed. A new sampling technique, which gives unbiased and alias-free estimates, is described. This technique treats the spike train as a series of delta functions and generates samples by digital filtering. Implementation of this technique on a small computer is simple and virtually on-line.

173 citations


Journal ArticleDOI
TL;DR: A square-framed, conical net (300-μ pores) was used in the kicking technique described herein for sampling the larger invertebrates of stony streams; small samples have consistent percentage population components that differ significantly from those of large ones.
Abstract: A square-framed, conical net (300-μ pores) was used in the kicking technique described herein for sampling the larger invertebrates of stony streams. Small samples have consistent percentage popula...

155 citations


Book ChapterDOI
TL;DR: The following sections are included: INTRODUCTION; ALTERNATIVE ESTIMATORS; SAMPLING RESULTS; CONCLUSIONS.
Abstract: The following sections are included: INTRODUCTION; ALTERNATIVE ESTIMATORS; SAMPLING RESULTS; CONCLUSIONS; REFERENCES. (This abstract was borrowed from another version of this item.)

155 citations



Journal ArticleDOI
TL;DR: In this paper, the optimal sampled-data control for linear processes with quadratic criteria is determined through application of the discrete minimum principle, and the effect of sampling on the closed-loop system's performance is investigated and the asymptotic behaviour of the optimal cost for large sampling periods is determined.
Abstract: Optimal sampled-data controls for linear processes with quadratic criteria are determined through application of the discrete minimum principle. The effect of sampling on the closed-loop system's performance is investigated and the asymptotic behaviour of the optimal cost for large sampling periods is determined. The resulting design method is applicable to continuous, sampled-data and discrete regulators.

126 citations


Journal ArticleDOI
TL;DR: In this paper, a two-stage sampling method using the Sedgwick-Rafter cell was developed, which is appropriate for larger phytoplankton species (> 10−15 µ) having relatively high population densities.
Abstract: Quantitative processing of large numbers of phytoplankton collections requires a sampling method that will yield precise and reproducible estimates of abundance within an acceptably short counting time. A two-stage sampling plan was developed, using the Sedgwick-Rafter cell, which satisfies these criteria. The sampling design is appropriate for larger phytoplankton species (> 10–15 µ) having relatively high population densities (≥ 10⁵ cells/liter).

102 citations


Journal ArticleDOI
TL;DR: The underlying probability model that enables us to denote the variance of the sample mean as a function of the autoregressive representation of the process under study is presented and the estimation and testing of the parameters of the Autoregressive Representation in a way that can easily be "built into" a computer program is described.
Abstract: A method is described for estimating and collecting the sample size needed to estimate the mean of a process (with a specified level of statistical precision) in a simulation experiment. Steps are also discussed for incorporating the determination and collection of the sample size into a computer library routine that can be called by the ongoing simulation program. We present the underlying probability model that enables us to denote the variance of the sample mean as a function of the autoregressive representation of the process under study and describe the estimation and testing of the parameters of the autoregressive representation in a way that can easily be “built into” a computer program. Several reliability criteria are discussed for use in determining sample size. Since these criteria assume that the variance of the sample mean is known, an adjustment is necessary to account for the substitution of an estimate for this variance. It is suggested that Student's distribution be used as the sampling d...

101 citations
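As an illustration of the idea in this abstract, a simplified AR(1) version of such a sample-size rule might look like the sketch below. The paper's general autoregressive scheme and reliability criteria (including the Student's-t adjustment) are not reproduced; the block relies on the standard AR(1) result that n times the variance of the sample mean tends to sigma^2 / (1 - phi)^2:

```python
import numpy as np

def ar1_sample_size(x, halfwidth, z=1.96):
    """Sample size needed to estimate the process mean to +/- halfwidth
    with roughly 95% confidence, using an AR(1) fit to a pilot run.
    For AR(1), n * Var(sample mean) -> sigma^2 / (1 - phi)^2 as n grows."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    # least-squares AR(1) coefficient and innovation variance
    phi = np.dot(xc[1:], xc[:-1]) / np.dot(xc[:-1], xc[:-1])
    s2 = np.var(xc[1:] - phi * xc[:-1])
    n_var = s2 / (1.0 - phi) ** 2        # limiting n * Var(sample mean)
    return int(np.ceil(n_var * (z / halfwidth) ** 2))

# pilot run: a simulated AR(1) process with phi = 0.5
rng = np.random.default_rng(0)
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
n_needed = ar1_sample_size(x, halfwidth=0.1)
```

Because successive simulation outputs are positively correlated, the required n here is about four times what an independence assumption would give, which is exactly the effect the autoregressive representation is meant to capture.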



Journal ArticleDOI
TL;DR: This paper presents a unified discussion of the various types of frequency sampling designs and shows how to realize them, both recursively and nonrecursively.
Abstract: A great deal of work has been done recently on techniques for optimally designing finite duration impulse response (FIR) filters. One of these techniques, called frequency sampling, is a method for designing a digital filter from a set of samples of the desired filter frequency response. In this paper we present a unified discussion of the various types of frequency sampling designs and show how to realize them, both recursively and nonrecursively.
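A minimal sketch of the frequency-sampling idea, not the paper's own designs: magnitude samples are given a linear-phase term, conjugate symmetry is enforced, and an inverse DFT yields a real, symmetric (linear-phase) impulse response. The filter length and band edge below are arbitrary choices:

```python
import numpy as np

def fir_from_freq_samples(mag, N):
    """Frequency-sampling design of an odd-length, linear-phase FIR
    filter: place magnitude samples at frequencies 2*pi*k/N, attach the
    linear phase of an M = (N-1)/2 sample delay, enforce conjugate
    symmetry, and inverse-DFT to get a real, symmetric impulse response."""
    M = (N - 1) // 2
    k = np.arange(M + 1)
    H = np.zeros(N, dtype=complex)
    H[:M + 1] = mag * np.exp(-1j * 2 * np.pi * k * M / N)
    H[M + 1:] = np.conj(H[1:M + 1][::-1])   # H[N-k] = conj(H[k])
    return np.fft.ifft(H).real

# crude lowpass: pass the first 5 of 17 non-negative frequency samples
N = 33
mag = np.zeros((N - 1) // 2 + 1)
mag[:5] = 1.0
h = fir_from_freq_samples(mag, N)
```

By construction the realized frequency response interpolates the prescribed samples exactly; the behaviour between the samples is what the various design types discussed in the paper trade off.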

Journal ArticleDOI
TL;DR: The work of Lunney (1970) concerning the appropriateness of analysis of variance (ANOVA) techniques on dichotomous data is discussed and extended in this paper, and relations between standard statistical techniques for analyzing dichotomyous data and ANOVA procedures are indicated.
Abstract: The work of Lunney (1970) concerning the appropriateness of analysis of variance (ANOVA) techniques on dichotomous data is discussed and extended. Relations between standard statistical techniques for analyzing dichotomous data and ANOVA procedures are indicated. The need for and usefulness of analyzing transformed data, as opposed to direct analysis of dichotomous data, are discussed. Required statistical procedures employing transformed data are outlined.

Journal ArticleDOI
TL;DR: In this paper, the MIL-STD-414 Sampling Procedures and Tables for Inspection by Variables for Percent Defective are presented.
Abstract: (1975). MIL-STD-414 Sampling Procedures and Tables for Inspection by Variables for Percent Defective. Journal of Quality Technology: Vol. 7, Standards and Specifications, pp. 51-60.


Journal ArticleDOI
TL;DR: In this article, a computer model using estimates of patch structure parameters determined from an ocean study was developed to study the effects of spatial structure on the error of replicate net tows.
Abstract: A computer model using estimates of patch structure parameters determined from an ocean study was developed to study the effects of spatial structure on the error of replicate net tows. The precision of nine replicate tows increases with increasing patch size, regardless of the position of the patch centers, and the largest net provides the most precise estimates. Lengthening the tow also results in significant increases in precision (up to a factor of 2), not related directly to increases in the volume of water filtered, but apparently to the increase in the probability of sampling the “right” number of patches. When spatial structure is homogeneous, increasing tow length results in a larger reduction of sampling error than does a corresponding increase in net diameter. In general, better estimates of species proportions are obtained when the numbers of individuals per species are inequitably distributed and the degree of patch overlap is the greatest. INTRODUCTION: In this paper I shall examine the interactions of the patch structure variables, size, distribution, and density (i.e., numbers of individuals per volume) with net size and tow length, and their relation to the precision of net tows. The results of these experiments can be used to qualify only one of the assumptions basic to survey samples (Taft 1960; Wiebe 1970): that plankton samples taken with nets represent quantitatively population and community parameters in the parcel of water sampled. The term parcel refers to the water body that would be sampled if a series of replicate tows were taken at a geographic location. None of the work reported here relates to the other basic assumption: that the parcel itself contains numbers and kinds of organisms representative of the general area of the station. My approach to the study of sampling error associated with the patchiness of zooplankton...

Journal ArticleDOI
TL;DR: Samples of the human food chain at Thule, Greenland, were collected during the summer of 1968, after the nuclear weapon incident in January, and the highest animal levels were found in bivalves, crustacea, polychaeta, and echinodermata.
Abstract: Samples of the human food chain at Thule, Greenland, were collected during the summer of 1968, after the nuclear weapon incident in January. As was to be expected from the increased plutonium levels in bottom sediments, the highest animal levels were found in bivalves, crustacea, polychaeta, and echinodermata. The levels in these bottom animals were on average two orders of magnitude higher than the fall-out background, in a few cases four orders of magnitude. Fish from the bottom water also showed an increased plutonium content, whereas sea weed, plankton, sea birds, seals, and walruses did not differ significantly from the fall-out background. The plutonium concentration in sea water was twice the fall-out background. No samples displayed plutonium levels that were considered hazardous to man or higher animals in the Thule district or in any other part of Greenland.


Journal ArticleDOI
TL;DR: The sample was drawn from these 10 wards, contact being made with persons in the sample through their general practitioners, and the plan was explained to the doctors, 91 of whom agreed to co-operate and allowed us access to their lists of patients.
Abstract: Edinburgh is divided into 23 city wards, in 10 of which one of us (J.W.) and colleagues provide a geriatric service. The sample was drawn from these 10 wards, contact being made with persons in the sample through their general practitioners. Ninety-five general practitioners were working in 50 practices with premises in this area. The plan was explained to the doctors, 91 of whom agreed to co-operate and allowed us access to their lists of patients held by the Edinburgh Executive Council. A census was made from the records of the Executive Council of the name, address, date of birth, and National Health Service number of all those persons born in 1905 or earlier who lived in the defined area and who were on the list of a doctor with a surgery address in that area. There were 26,903 such persons. This population excludes those who were not on a doctor's list and those who lived in the area but attended a doctor with a surgery address outside the area. The National Health Service number helps to trace persons who have died

Journal ArticleDOI
TL;DR: In this article, a sampling procedure involving subsampling of nonrespondents is discussed, under which the subs sampling fraction is not kept constant, but varied according to the sample nonresponse rates.
Abstract: The sampling procedure involving subsampling of nonrespondents is discussed. A rule for selecting a subsample of nonrespondents is proposed under which the subsampling fraction is not kept constant, but varied according to the sample nonresponse rates. Using this sampling rule, the variance of the estimator of the population mean is independent of the unknown rate of nonresponse in the population. This rule provides a procedure for determining the initial sample size and subsampling fraction in order to have a desired precision of the estimator when the rate of nonresponse is not accurately known. Similar sampling rules for the selection of subsamples when several attempts are made for obtaining information are also proposed.
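The classical fixed-fraction version of this scheme (due to Hansen and Hurwitz) can be sketched as follows. The paper's varying-fraction rule is not reproduced here, and the data are simulated for illustration:

```python
import random

def subsample_nonresp_mean(resp, nonresp, sub_fraction, rng):
    """Estimate the population mean when only a random subsample of the
    nonrespondents is followed up: weight each group's mean by its share
    of the original sample (fixed-fraction Hansen-Hurwitz form)."""
    n1, n2 = len(resp), len(nonresp)
    m = max(1, round(sub_fraction * n2))
    followup = rng.sample(nonresp, m)        # second-phase interviews
    ybar1 = sum(resp) / n1
    ybar2 = sum(followup) / m
    return (n1 * ybar1 + n2 * ybar2) / (n1 + n2)

rng = random.Random(42)
# simulated data: nonrespondents have a higher mean, so ignoring
# them entirely would bias the estimate downward
resp = [10 + rng.gauss(0, 1) for _ in range(80)]
nonresp = [12 + rng.gauss(0, 1) for _ in range(20)]
est = subsample_nonresp_mean(resp, nonresp, 0.5, rng)
```

The paper's contribution is to let sub_fraction depend on the observed nonresponse rate, so that the variance of this estimator no longer depends on the unknown population nonresponse rate.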

Journal ArticleDOI
TL;DR: A broad class of random sampling schemes, for which the sampling intervals are dependent, is constructed, and it is shown that this class is “alias free” relative to various families of spectra.
Abstract: This paper deals with the problem of perfect reconstruction of the spectrum of a weakly stationary stochastic process x(t) from a set of random samples {x(t_n)}. A broad class of random sampling schemes, for which the sampling intervals are dependent, is constructed. It is shown that this class is “alias free” relative to various families of spectra. It is further shown that the alias-free property of this class of sampling schemes is invariant under random deletion of samples.


Journal ArticleDOI
TL;DR: In this article, two sampling rules are analyzed for the problem of selecting the better of two binomial populations: alternate sampling (vector-at-a-time) and the play-the-winner rule introduced by Robbins and discussed by Zelen.
Abstract: In this article two sampling rules are analyzed for the problem of selecting the better of two binomial populations. The first is alternate sampling (vector-at-a-time) and the second is the “play-the-winner” rule introduced by Robbins [3], and discussed by Zelen [8]. The termination rule studied is inverse sampling. One result is that the probabilities of correct selection for these two methods of sampling are exactly equal. It is also found that the play-the-winner sampling is uniformly preferable to the vector-at-a-time sampling rule in the sense that for the same probability requirement the expected number of trials on the poorer population is always smaller.
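A simulation sketch of the play-the-winner rule with inverse-sampling termination, under one plausible reading of the rule and with invented success probabilities:

```python
import random

def play_the_winner(p_a, p_b, r, rng):
    """Play-the-winner with inverse-sampling termination: stay on an arm
    after a success, switch after a failure; stop when either arm has
    accumulated r successes and select that arm."""
    p = {"A": p_a, "B": p_b}
    successes = {"A": 0, "B": 0}
    trials = {"A": 0, "B": 0}
    arm = "A"
    while successes["A"] < r and successes["B"] < r:
        trials[arm] += 1
        if rng.random() < p[arm]:
            successes[arm] += 1
        else:
            arm = "B" if arm == "A" else "A"   # switch after a failure
    winner = "A" if successes["A"] >= r else "B"
    return winner, trials

rng = random.Random(1)
runs = [play_the_winner(0.7, 0.3, 10, rng) for _ in range(200)]
correct = sum(w == "A" for w, _ in runs)
```

Tallying trials["B"] across runs illustrates the article's second finding: play-the-winner places fewer trials on the poorer population than vector-at-a-time sampling for the same probability requirement.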

Journal ArticleDOI
TL;DR: In this paper, a procedure for changing measures within fixed strata, then for strata with changed units is presented, followed by procedures for changing units within fixed and new strata.
Abstract: Survey samples are often based on primary sampling units selected from initial strata with probabilities pj proportional to initial measures. However, later samples can be better served with new strata and new probabilities, Pj, based on new information. The differences between the initial and new strata and measures may be due to changes either in population distributions or in survey objectives. It is efficient to retain in the new sample the maximum permissible number of initial selections. Procedures are presented first for changing measures within fixed strata, then for strata with changed units. Modifications, improvements and simplifications are also introduced.


Journal ArticleDOI
TL;DR: It is shown that, for suitably restricted test functions, H(t) can be represented by a conventional sampling series and by a series of delta functions with weights equal to the sampling values.
Abstract: For an arbitrary band-limited generalized function H(t), a representation is obtained that is a combination of a Taylor series and a conventional sampling series. A parameter N that enters can be chosen such as to make the series rapidly convergent. Furthermore, it is shown that, for suitably restricted test functions, H(t) can be represented by a conventional sampling series and by a series of delta functions with weights equal to the sampling values. The results apply also to ordinary band-limited functions.
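The "conventional sampling series" referred to here is the Whittaker-Shannon series. A minimal numerical sketch for an ordinary band-limited function, with an invented test signal:

```python
import numpy as np

def sampling_series(samples, T, t):
    """Whittaker-Shannon sampling series: the value at time t is
    reconstructed from equispaced samples x(nT) by sinc interpolation."""
    n = np.arange(len(samples))
    return float(np.sum(samples * np.sinc((t - n * T) / T)))

# band-limited test signal: 5 Hz sine sampled at 100 Hz (T = 0.01 s),
# well inside the 50 Hz Nyquist limit
T, f = 0.01, 5.0
n = np.arange(2000)
samples = np.sin(2 * np.pi * f * n * T)
t = 1000.4 * T                      # an off-grid instant mid-record
approx = sampling_series(samples, T, t)
exact = np.sin(2 * np.pi * f * t)
```

With roughly 1000 terms on each side of t the truncated series closely matches the true value; the Taylor-plus-sampling representation with parameter N in the paper is aimed at making such series converge faster.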

Journal ArticleDOI
28 May 1971-Nature
TL;DR: After comparing gravity cores with samples collected by SCUBA divers, it is suggested that this is not so.
Abstract: GRAVITY corers are widely used1,2 for the collection of the smallest marine metazoans (meiofauna) from subtidal grounds, and it has been generally accepted that cores which appear undisturbed give reasonably representative samples. But, after comparing gravity cores with samples collected by SCUBA divers, I suggest that this is not so.


Patent
Makino J., Nohara H.
05 Nov 1971
TL;DR: In this paper, a sinusoidal ac signal is subjected to periodic sampling with period Δt; the product of the sampled value at one sampling point and the sampled value at the next sampling point, an interval Δt later, is formed to detect zero points of the signal.
Abstract: A sinusoidal ac signal is subjected to periodic sampling having a period Δt. A product is formed of one sampled value of the signal taken at a certain sampling point and another sampled value taken at the next sampling point, an interval of time Δt after the previous one. If the product is of negative value, a zero point of the signal must lie between the two sampling points. The zero point can equivalently be obtained by drawing a straight line passing through the two sampled values. On the basis of the zero points of sinusoidal ac signals obtained in this manner, the frequencies, phase differences, powers and the impedances of the associated transmission lines are obtained by means of a digital computer.
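The sign-of-product test and straight-line zero-point construction described in the abstract can be sketched as follows (illustrative sampling rate and frequency; not the patent's implementation):

```python
import math

def zero_points(samples, dt):
    """Find zeros of a sampled sinusoid: a negative product of two
    consecutive sample values brackets a zero; the straight line through
    the two points locates it within the interval."""
    zeros = []
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        if a * b < 0:
            # line through (i*dt, a) and ((i+1)*dt, b) crosses zero here
            zeros.append(i * dt + dt * a / (a - b))
    return zeros

# hypothetical 50 Hz signal sampled every 1 ms
dt, freq = 0.001, 50.0
samples = [math.sin(2 * math.pi * freq * i * dt) for i in range(41)]
zeros = zero_points(samples, dt)
# consecutive zeros of a sinusoid are half a period apart
est_freq = 1.0 / (2.0 * (zeros[1] - zeros[0]))
```

From the spacing of the zero points one recovers the frequency, and from zero-point offsets between two signals their phase difference, which is the basis for the power and impedance computations mentioned in the patent.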

Journal ArticleDOI
TL;DR: In this paper, a sequential procedure is given for selecting from a given multinomial distribution with k cells, the cell with the largest probability, which is called the "most probable" category.
Abstract: A sequential procedure is given for selecting from a given multinomial distribution with k cells, the cell with the largest probability, which is called the “most probable” category. Observations being taken sequentially from the given distribution, the sampling is terminated when the difference between the highest and the next highest cell counts is equal to a positive integer r. The cell with the highest count when the sampling is terminated, is selected as the most probable category. It is shown that the sampling terminates with probability 1. The probability of a correct selection and the expected total number of observations are given. The given procedure is compared with other known procedures.
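The stopping rule described here can be sketched as a short simulation (invented cell probabilities; termination with probability 1 is the property the paper proves):

```python
import random

def most_probable_cell(probs, r, rng):
    """Sequential selection of the most probable multinomial cell: draw
    one observation at a time and stop as soon as the leading cell count
    exceeds the runner-up count by r; select the leading cell."""
    counts = [0] * len(probs)
    cells = range(len(probs))
    n = 0
    while True:
        cell = rng.choices(cells, weights=probs)[0]
        counts[cell] += 1
        n += 1
        ordered = sorted(counts, reverse=True)
        if ordered[0] - ordered[1] >= r:
            return counts.index(ordered[0]), n

rng = random.Random(7)
picks = [most_probable_cell([0.6, 0.2, 0.2], 5, rng) for _ in range(100)]
correct = sum(cell == 0 for cell, _ in picks)
```

Because counts change by one per observation, the lead first reaches exactly r when the rule fires; larger r raises the probability of correct selection at the cost of a larger expected number of observations.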