
Showing papers on "Sampling (statistics)" published in 1984


Book
03 Feb 1984
TL;DR: This book presents statistical procedures for agricultural research, covering the design and analysis of single-, two-, and multi-factor experiments, comparison of treatment means, regression, correlation, and covariance analysis, sampling in experimental plots, and experiments in farmers' fields.
Abstract: Elements of Experimentation. Single-Factor Experiments. Two-Factor Experiments. Three-or More-Factor Experiments. Comparison Between Treatment Means. Analysis of Multiobservation Data. Problem Data. Analysis of Data from a Series of Experiments. Regression and Correlation Analysis. Covariance Analysis. Chi-Square Test. Soil Heterogeneity. Competition Effects. Mechanical Errors. Sampling in Experimental Plots. Experiments in Farmers' Fields. Presentation of Experimental Results. Appendices. Index.

13,377 citations


Book
01 Jan 1984

2,040 citations


Book
01 Jan 1984
TL;DR: This manual presents methods for collecting, analyzing, and reporting ecological data, covering habitat analysis, biotic sampling methods, and the analysis of populations, communities, and production, including aquatic microecosystems.
Abstract: Unit 1. Collecting, Analyzing, and Reporting Ecological Data (1a. Ecological Sampling; 1b. Data Analysis; 1c. Writing Research Reports). Unit 2. Analysis of Habitats (2a. Microhabitat Analysis; 2b. Atmospheric Analysis; 2c. Substrate Analysis; 2d. Analysis of Aquatic Habitats; 2e. Chemical Analysis of Habitats; 2f. Habitat Assessment). Unit 3. Biotic Sampling Methods (3a. Plot Sampling; 3b. Transect Sampling; 3c. Point-quarter Sampling; 3d. Terrestrial Invertebrate Sampling; 3e. Aquatic Sampling; 3f. Capture-recapture Sampling; 3g. Removal Sampling; 3h. Terrestrial Vertebrate Sampling). Unit 4. Analysis of Populations (4a. Age Structure and Survivorship; 4b. Population Growth; 4c. Population Dispersion; 4d. Competition; 4e. Predation). Unit 5. Analysis of Communities (5a. Community Structure; 5b. Species Diversity; 5c. Community Similarity). Unit 6. Analysis of Production (6a. Biomass Measurements; 6b. Aquatic Productivity; 6c. Aquatic Microecosystems). Appendixes: A. Symbols and Abbreviations; B. Equivalents for Units of Measurements; C. Atomic Weights of Elements; D. Common Logarithm; E. Microcomputer Programming.

2,013 citations


Journal ArticleDOI
TL;DR: The zeros of the discrete-time system obtained when sampling a continuous-time system are explored, and theorems for the limiting zeros for large and small sampling periods are given.
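
For intuition, a small numerical experiment along these lines (a sketch, not taken from the paper; the plant 1/(s+1)^2 and the use of SciPy's cont2discrete are assumptions for illustration):

```python
# Inspect the zeros of a zero-order-hold sampled system as the sampling
# period T shrinks; the plant 1/(s+1)^2 (relative degree two) is an
# assumed example, not one analyzed in the paper.
import numpy as np
from scipy.signal import cont2discrete

num, den = [1.0], [1.0, 2.0, 1.0]            # continuous plant 1/(s+1)^2
for T in (2.0, 0.5, 0.05, 0.005):
    numd, dend, _ = cont2discrete((num, den), T, method='zoh')
    zeros = np.roots(np.trim_zeros(numd.flatten(), 'f'))
    print(f"T={T:6.3f}  sampling zeros: {zeros}")
# For relative degree two, the limiting zero as T -> 0 is -1, the root of
# the second Euler-Frobenius polynomial z + 1.
```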

866 citations





Journal ArticleDOI
TL;DR: It is concluded that if complete population rosters are unavailable and if the population to be sampled has the high rates of telephone ownership typical of much of the United States, telephone-based sampling can yield a nearly random sample of the individuals in a population, often at much less expense than can dwelling-based sampling.
Abstract: Results are described from four epidemiologic studies in the United States which used random digit dialing in over 30,000 households to identify controls from the general population for use in case-control studies. Methods and problems in telephone sampling are discussed. It is concluded that if complete population rosters are unavailable and if the population to be sampled has the high rates of telephone ownership typical of much of the United States, telephone-based sampling can yield a nearly random sample of the individuals in a population, often at much less expense than can dwelling-based sampling.

250 citations


Journal ArticleDOI
TL;DR: In this paper, a Markov process of partitions of the natural numbers is constructed by defining a Polya-like urn model, and the marginal distributions of this process are the Ewens' sampling distributions of population genetics.
Abstract: A Markov process of partitions of the natural numbers is constructed by defining a Polya-like urn model. The marginal distributions of this process are the Ewens' sampling distributions of population genetics.
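
The construction is easy to simulate. Below is a minimal sketch of a Hoppe-type urn that generates a random partition of {1, ..., n} whose block sizes follow the Ewens sampling formula; the parameter value theta = 1.5 is an assumed example, not from the paper.

```python
# Hoppe-type urn: ball i starts a new block with probability
# theta / (theta + i - 1), otherwise joins an existing block with
# probability proportional to that block's current size.
import random

def hoppe_urn_partition(n, theta, seed=0):
    rng = random.Random(seed)
    blocks = []                              # each block is a list of integers
    for i in range(1, n + 1):
        if rng.random() < theta / (theta + i - 1):
            blocks.append([i])               # new block
        else:
            r = rng.uniform(0, i - 1)        # size-biased choice of a block
            acc = 0.0
            for b in blocks:
                acc += len(b)
                if r < acc:
                    b.append(i)
                    break
    return blocks

print(hoppe_urn_partition(20, theta=1.5))
```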

210 citations


Journal ArticleDOI
TL;DR: In this article, a new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy based on universal kriging, an estimation method within the theory of regionalized variables.
Abstract: A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function.
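
A minimal sketch of the scoring step, assuming a unit-square domain, a spherical variogram with invented sill and range, and ordinary rather than universal kriging for brevity: each candidate design is scored by the average and maximum kriging standard error over a prediction grid, the two global indices named above.

```python
# Score a candidate sampling pattern by average/maximum ordinary-kriging
# standard error over a prediction grid. Variogram and domain are assumed.
import numpy as np

def spherical(h, sill=1.0, rng=0.4):
    h = np.asarray(h, float)
    return np.where(h < rng, sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3), sill)

def ok_std_errors(samples, grid):
    n = len(samples)
    d = np.linalg.norm(samples[:, None] - samples[None, :], axis=-1)
    A = np.zeros((n + 1, n + 1))             # ordinary-kriging system matrix
    A[:n, :n] = spherical(d)
    A[:n, n] = A[n, :n] = 1.0
    se = []
    for x0 in grid:
        g0 = spherical(np.linalg.norm(samples - x0, axis=1))
        sol = np.linalg.solve(A, np.append(g0, 1.0))
        var = sol[:n] @ g0 + sol[n]          # sigma^2 = lambda.g0 + mu
        se.append(np.sqrt(max(var, 0.0)))
    return np.array(se)

rs = np.random.default_rng(1)
samples = rs.random((25, 2))                 # one candidate design
gx = np.linspace(0, 1, 15)
grid = np.array([(x, y) for x in gx for y in gx])
se = ok_std_errors(samples, grid)
print(f"average SE {se.mean():.3f}, maximum SE {se.max():.3f}")
```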

176 citations


Journal ArticleDOI
TL;DR: In this article, the authors present filtering methods for interfacing time-discrete systems with different sampling frequencies, which are applicable for sampling rate conversion between any two sampling frequencies; the conversion ratio may even be irrational or slowly time varying.
Abstract: The paper presents filtering methods for interfacing time-discrete systems with different sampling frequencies. The methods are applicable for sampling rate conversion between any two sampling frequencies; the conversion ratio may even be irrational or slowly time varying. Interpolation by irrational factors requires digital filters with nonperiodically varying coefficients. This is dealt with in two ways: 1) all possible coefficient values are precalculated, which is possible, in a sense, because of the finite resolution needed; or 2) the coefficients are updated in real time using either FIR or IIR filters. The first solution requires a huge coefficient memory; the second scheme, on the other hand, is computationally intensive. While discussing both of these solutions, more practical intermediate schemes incorporating both FIR- and IIR-type filters are suggested. The suggested practical implementations are based either on analog reconstruction filters, where the derived digital filter coefficients are functions of the distances between current input and output samples, or on digital interpolators combined with simple analog interpolation schemes for finding the desired values in between the uniform output samples from the digital interpolator.
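
As a toy illustration of real-time coefficient updating, the sketch below uses the crudest fractional-delay interpolator (linear interpolation between neighboring input samples); the tone frequency and the irrational ratio sqrt(2) are assumed examples, and a practical converter would use the higher-order FIR/IIR schemes the paper describes.

```python
# Arbitrary-ratio sample-rate conversion with coefficients computed on the
# fly from the fractional position of each output instant.
import numpy as np

def resample_linear(x, ratio):
    """Resample x by 'ratio' (output rate / input rate); may be irrational."""
    t = np.arange(0, len(x) - 1, 1.0 / ratio)   # output instants on input grid
    k = t.astype(int)                            # preceding input sample index
    frac = t - k                                 # fractional distance -> coefficient
    return (1.0 - frac) * x[k] + frac * x[k + 1]

fs_in = 48000.0
x = np.sin(2 * np.pi * 1000.0 * np.arange(1024) / fs_in)
y = resample_linear(x, np.sqrt(2))               # irrational conversion ratio
print(len(x), "->", len(y), "samples")
```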


Journal ArticleDOI
TL;DR: Within the class of ARMAX models, the authors consider the effects that systematic sampling or temporal aggregation may have on the dynamic relationships between variables; these effects include changes in lag lengths and causal ordering and may occur even in simple models.

Journal ArticleDOI
TL;DR: Problems of hypothesis testing in regression analysis are discussed; these problems arise when regression analysis is applied to data sets which contain multiple measurements from individual sampling units.

01 Oct 1984
TL;DR: This work presents a relatively complete and comprehensive description of a general class of Monte Carlo sampling plans for estimating g = g(s, T), the probability that a specified node s is connected to all nodes in a node set T, and describes worst-case bounds on sample sizes K.
Abstract: For an undirected network G = (V, E) whose arcs are subject to random failure, we present a relatively complete and comprehensive description of a general class of Monte Carlo sampling plans for estimating g = g(s, T), the probability that a specified node s is connected to all nodes in a node set T. We also provide procedures for implementing these plans. Each plan uses known lower and upper bounds [B, A] on g to produce an estimator of g that has a smaller variance, (A - g)(g - B)/K, on K independent replications than that obtained for crude Monte Carlo sampling (B = 0, A = 1). We describe worst-case bounds on sample sizes K, in terms of B and A, for meeting absolute and relative error criteria. We also give the worst-case bound on the amount of variance reduction that can be expected when compared with crude Monte Carlo sampling. Two plans are studied in detail for the case T = {t}. An example illustrates the variance reductions achievable with these plans. We also show how to assess the credibility that a specified error criterion for g is met as the Monte Carlo experiment progresses, and show how confidence intervals can be computed for g. Lastly, we summarize the steps needed to implement the proposed technique.
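
For orientation, here is the crude Monte Carlo baseline (B = 0, A = 1) that the paper's plans improve upon; the 5-arc network, failure probability, and replication count are assumed toy inputs, and the bounds-based estimators themselves are not reproduced here.

```python
# Crude Monte Carlo estimate of g(s, T): the fraction of replications in
# which s reaches every node of T over the surviving arcs.
import random

def connected(n, up_edges, s, targets):
    parent = list(range(n))                  # union-find over surviving arcs
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in up_edges:
        parent[find(u)] = find(v)
    return all(find(s) == find(t) for t in targets)

def crude_mc(n, edges, p_fail, s, targets, K=100000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(K):
        up = [e for e in edges if rng.random() > p_fail]
        hits += connected(n, up, s, targets)
    return hits / K                          # variance g(1 - g)/K

edges = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
print(crude_mc(4, edges, p_fail=0.2, s=0, targets=[3]))
```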

Journal ArticleDOI
TL;DR: A computer model was developed to predict the feeding selectivity of planktivorous white crappie (Pomoxis annularis) from a known distribution of zooplankton and proved to be very accurate at predicting the species and size distribution of the ingested prey across the range of light intensities, turbidities, temperatures, and zoopLankton densities encountered.
Abstract: A computer model was developed to predict the feeding selectivity of planktivorous white crappie (Pomoxis annularis) from a known distribution of zooplankton. The model was based on the assumption that each predation event could be subdivided into a series of independent steps: prey location, pursuit, attack, and retention. The probability that white crappie successfully completed each step was determined for potential zooplankton prey species in a series of laboratory experiments. The four steps were then incorporated into a stochastic model where the probability of a particular prey type being consumed is equal to the product of the probabilities of the individual steps. The model was field tested by sampling fish, zooplankton, and physical parameters from discrete depth strata in a small reservoir on nine dates from October 1978 through November 1979. The model proved to be very accurate at predicting the species and size distribution of the ingested prey across the range of light intensities, turbidities, temperatures, and zooplankton densities encountered. Prey consumption could not be characterized as simply size selective; rather, it reflected the selectivity expressed at each step in the feeding cycle.
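
The stochastic core of such a model is compact. The sketch below multiplies the four step probabilities to get the consumption probability for one prey type; the numbers are invented placeholders, not values measured in the study.

```python
# Probability a prey type is consumed = product of the probabilities of
# the independent steps: location, pursuit, attack, retention.
steps = ("locate", "pursue", "attack", "retain")

def consumption_probability(p):
    prob = 1.0
    for s in steps:
        prob *= p[s]
    return prob

daphnia = {"locate": 0.6, "pursue": 0.9, "attack": 0.8, "retain": 0.7}
print(f"P(consumed) = {consumption_probability(daphnia):.3f}")   # 0.302
```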

Journal ArticleDOI
01 Feb 1984-Ecology
TL;DR: Downy Woodpeckers exhibited a sampling strategy very similar to that predicted by a simple energy maximization model, and such sampling abilities are crucial to more complex stochastic foraging models.
Abstract: I describe a study in which free-roaming Downy Woodpeckers (Picoides pubescens) were allowed to forage in three different, patchy, stochastic environments where patches (small, thin logs with holes drilled into them to hold food items) contained either zero or a fixed number of food items. All the patches in an environment were, however, identical in appearance. Downy Woodpeckers systematically searched the holes of a patch for food items, and thus the foraging task for an energy-maximizing woodpecker was to determine to what extent to sample a patch without success (without finding a food item) before giving the patch up as being empty and moving on. A model is presented to determine the optimal sampling solution. Although this foraging task is relatively simple, optimal solutions to many stochastic foraging problems may be quite complex. Thus, the observed foraging behavior is compared to the model's predictions as well as to some suggestions from the literature of simple behavioral mechanisms (e.g., a fixed giving-up-time strategy) that animals may use to approximate optimal solutions. The results show that the woodpeckers clearly recognized that some patches contained food items while others did not, and used this information in following a sampling strategy more sophisticated than any of the suggested simple behavioral mechanisms. The predicted sampling behavior on empty patches and that observed were in qualitative but not quantitative agreement. The model predicted that a single number of holes in a patch should be sampled without success while, in fact, a distribution of numbers of holes sampled was observed. The mode of the distributions, however, corresponded to the predicted value in two of the three environments. It is shown that the woodpeckers did not know at least some of the model's parameters with complete accuracy, and that this may be related to a weak counting ability. A weak counting ability, the need to sample for environmental changes, a stochastic element in a bird's behavior itself, or some combination of these factors may contribute to the observed variability in sampling behavior. Nevertheless, perhaps through some ability to estimate time, the woodpeckers exhibited a sampling strategy very similar to that predicted by a simple energy maximization model, and such sampling abilities are crucial to more complex stochastic foraging models.
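
A toy version of the underlying decision problem (not the paper's model; every parameter below is an assumed value): simulate the rule "abandon a patch after k fruitless probes" and scan k for the energy-maximizing threshold.

```python
# Simulated long-run energy intake rate for a give-up threshold k, in an
# environment where a fraction q_empty of patches hold no food and full
# patches hold 'items' food items hidden among 'holes' holes.
import random

def rate_for_giveup(k, q_empty=0.5, holes=24, items=4, e_gain=1.0,
                    t_probe=1.0, t_travel=10.0, n_patches=20000, seed=1):
    rng = random.Random(seed)
    energy = time = 0.0
    for _ in range(n_patches):
        time += t_travel
        if rng.random() < q_empty:            # empty patch: k fruitless probes
            time += k * t_probe
            continue
        first_hit = min(rng.sample(range(holes), items))
        if first_hit < k:                     # success within k probes:
            energy += items * e_gain          # patch known full, exploit it all
            time += holes * t_probe
        else:                                 # gave up on a full patch
            time += k * t_probe
    return energy / time

best_k = max(range(1, 25), key=rate_for_giveup)
print("energy-maximizing give-up threshold:", best_k)
```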

Journal ArticleDOI
TL;DR: A state-of-the-art survey of the principal variance reduction techniques that can improve the efficiency of large-scale simulation experiments.
Abstract: SYNOPTIC ABSTRACT: In the design and analysis of large-scale simulation experiments, it is generally difficult to estimate model performance parameters with adequate precision at an acceptable sampling cost. This paper provides a state-of-the-art survey of the principal variance reduction techniques that can improve the efficiency of such experiments.
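
As a taste of the techniques such a survey covers, here is antithetic variates on an assumed toy integrand, exp(u): pairing U with 1 - U induces negative correlation and shrinks the variance of the mean estimate at equal sampling budget.

```python
# Antithetic variates vs crude Monte Carlo for estimating E[exp(U)] = e - 1.
import numpy as np

rng = np.random.default_rng(42)
n = 50000
u = rng.random(n)
crude = np.exp(rng.random(2 * n))                  # 2n draws, crude sampling
anti = 0.5 * (np.exp(u) + np.exp(1.0 - u))         # n antithetic pairs
print("crude     :", crude.mean(), crude.std() / np.sqrt(2 * n))
print("antithetic:", anti.mean(), anti.std() / np.sqrt(n))
```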

Journal ArticleDOI
TL;DR: The main result of this paper is the design and analysis of Algorithm D, which does the sampling in O(n) time, on the average; roughly n uniform random variates are generated, and approximately n exponentiation operations are performed during the sampling.
Abstract: Several new methods are presented for selecting n records at random without replacement from a file containing N records. Each algorithm selects the records for the sample in a sequential manner—in the same order the records appear in the file. The algorithms are online in that the records for the sample are selected iteratively with no preprocessing. The algorithms require a constant amount of space and are short and easy to implement. The main result of this paper is the design and analysis of Algorithm D, which does the sampling in O(n) time, on the average; roughly n uniform random variates are generated, and approximately n exponentiation operations (of the form a^b, for real numbers a and b) are performed during the sampling. This solves an open problem in the literature. CPU timings on a large mainframe computer indicate that Algorithm D is significantly faster than the sampling algorithms in use today.
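
Algorithm D itself is intricate; for contrast, here is the classical sequential plan (Knuth's Algorithm S) that it is designed to beat. It shares the online, constant-space, sequential properties described above but must touch every record, hence O(N) rather than O(n) time.

```python
# Selection sampling (Algorithm S): include record t with probability
# (remaining slots) / (remaining records), which yields a uniform sample.
import random

def algorithm_s(n, N, seed=0):
    """Sequentially select n of the records 0..N-1 without replacement."""
    rng = random.Random(seed)
    sample = []
    for t in range(N):
        if rng.random() < (n - len(sample)) / (N - t):
            sample.append(t)
            if len(sample) == n:
                break
    return sample

print(algorithm_s(5, 100))
```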

Journal ArticleDOI
TL;DR: In this paper, a method to design an optimal sampling network for estimation of the variogram is proposed. Using a constant number of pairs of points (Np) per lag class, the criterion for selecting the locations of the sampling points is the uniformity of the values of the lag vector, h, within a given lag class.
Abstract: Kriging techniques for interpolating spatial phenomena have recently been applied by soil scientists to map soil properties in heterogeneous fields. A fundamental component of these techniques is the variogram, which has to be estimated experimentally. A method to design an optimal sampling network for estimation of the variogram is proposed. Using a constant number of pairs of points (Np) per lag class, the criterion for selecting the locations of the sampling points is the uniformity of the values of the lag vector, h, within a given lag class, for each of the lag classes which cover the field domain. For a given sample size, N, the method provides a set of scaling factors which in turn is used to calculate the new locations of the sampling points by an iterative procedure. Using the aforementioned criterion, the best set of sampling points is selected. Analysis of the results showed that by using the proposed method, the variability of h within a given lag class, as well as the variability of h among the lag classes, was considerably reduced relative to the situation where the original locations were used. The method was found to be sensitive to both N and Np, as well as to the initial locations of the points. The effect of the method on the estimation of the variogram was tested using simulated realizations of a two-dimensional stationary and isotropic field. The results for fifty different realizations showed that the variograms estimated from data generated on the coordinates of the sampling points provided by the method were smoother and fitted the theoretical variograms better than those estimated from data generated on the original coordinates of the sampling points. When only the smaller lag classes were considered, the differences between the two sets of variograms became smaller. Additional Index Words: kriging techniques, spatial variability, geostatistics.
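
For context, a sketch of the experimental variogram such a design targets: pair the sampling points, bin the pairs into lag classes, and average the squared differences; the simulated field and the lag-class edges are assumed toy inputs.

```python
# Empirical variogram with lag classes, reporting Np (pairs per class).
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)      # each pair counted once
    h, g = d[iu], sq[iu]
    gamma, counts = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        m = (h >= lo) & (h < hi)
        counts.append(int(m.sum()))             # Np in this lag class
        gamma.append(g[m].mean() if m.any() else np.nan)
    return np.array(gamma), counts

rng = np.random.default_rng(3)
coords = rng.random((60, 2))
values = np.sin(4 * coords[:, 0]) + 0.1 * rng.standard_normal(60)
gamma, np_per_lag = empirical_variogram(coords, values, np.linspace(0, 1, 6))
print(np_per_lag, np.round(gamma, 3))
```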

Journal ArticleDOI
TL;DR: Although counting provides the most precise estimates for a given sample size, the zero-group method is both easier and faster to apply, which makes it advantageous at high population densities where counting becomes very tedious.
Abstract: SUMMARY (1) A simple but general model of density dependent dispersal in an arthropod population is used to explain the relationship between the density and the spatial distribution of two species of mites: the two-spotted spider mite, Tetranychus urticae, and its phytoseiid enemy Phytoseiulus persimilis, used for biological control of spider mites in glasshouses. (2) Based on the above relationship, estimates of mean mite densities in glasshouses are obtained from sampling data in which only presence or absence of individuals has been recorded. (3) Confidence limits for the true mean density estimated from the proportion of sampling units without individuals (the zero-group) are provided for the two mite species. Estimates of variances require that Taylor's power law is a suitable model of the variance/mean relationship. (4) Reliability of density estimates obtained by the zero-group method is compared with that based on direct counts of individuals occurring in a sample. Although counting provides the most precise estimates for a given sample size, the zero-group method is both easier and faster to apply. This makes the zero-group method advantageous to use at high population densities where counting becomes very tedious. (5) The overall spatial distribution of spider mites is predicted from an observed proportion of empty sampling units. This presupposes, however, that the negative binomial is a general model of the underlying spatial distribution of the species but not that the parameter k of the negative binomial is independent of the population density.
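
The zero-group inversion itself is compact. For a negative binomial with mean m and clumping parameter k, P(empty unit) = (1 + m/k)^(-k), which the sketch below inverts; the values p0 = 0.35 and k = 0.8 are assumed examples, whereas in practice k would come from fitting the variance/mean relationship.

```python
# Zero-group density estimate: solve (1 + m/k)**(-k) = p0 for the mean m.
def density_from_zeros(p0, k):
    return k * (p0 ** (-1.0 / k) - 1.0)

print(density_from_zeros(p0=0.35, k=0.8))   # estimated mites per sampling unit
```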

Journal ArticleDOI
TL;DR: A method of developing in-phase and quadrature samples of a band-limited RF waveform is presented, realized as a pair of 90° phase-splitting networks with several symmetries which are exploited to save computation.
Abstract: We present a method of developing in-phase and quadrature samples of a band-limited RF waveform. The problem of matching gain and phase response differences between the two components is avoided by a combination of mixing to an IF, sampling and digitizing, and digital filtering. The novelty of the method is in the design of the digital filter, which is realized as a pair of 90° phase-splitting networks with several symmetries which are exploited to save computation.
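
A related single-channel trick with the same motivation (simpler than the paper's phase-splitting networks, and offered only as an assumed illustration): sample the IF at fs = 4 × f_IF, so the digital mixing sequences degenerate to {1, 0, -1, 0}, and a shared low-pass filter yields gain- and phase-matched I/Q.

```python
# fs/4 digital quadrature demodulation: one ADC, matched I/Q by construction.
import numpy as np
from scipy.signal import firwin, lfilter

fs, f_if = 40000.0, 10000.0                  # assumed rates, fs = 4 * f_if
n = np.arange(4096)
x = np.cos(2 * np.pi * (f_if + 300.0) * n / fs + 0.4)   # band-limited IF signal

i_mix = x * np.cos(0.5 * np.pi * n)          # sequence 1, 0, -1, 0, ...
q_mix = -x * np.sin(0.5 * np.pi * n)         # sequence 0, -1, 0, 1, ...
lp = firwin(101, 0.2)                        # one low-pass shared by I and Q
i_bb = lfilter(lp, 1.0, i_mix)
q_bb = lfilter(lp, 1.0, q_mix)
print("recovered baseband magnitude ~0.5:",
      np.abs(i_bb[500:] + 1j * q_bb[500:]).mean())
```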

Journal ArticleDOI
TL;DR: Computer simulations based on 2 sets of real survey data indicate that this 2‐phase survey strategy reduces the skew in the biomass estimate (thus reducing the probability of gross over‐estimation) as well as reducing the expected error.
Abstract: To minimise the error in biomass estimates from stratified random trawl surveys, the allocation of trawl stations amongst strata should be made according to prior information on fish distribution. Often such information is slight. A 2‐phase survey strategy is proposed in which allocation of trawl stations in Phase 2 is based on catches obtained in Phase 1. Computer simulations based on 2 sets of real survey data indicate that this strategy reduces the skew in the biomass estimate (thus reducing the probability of gross over‐estimation) as well as reducing the expected error. The efficiency of the 2‐phase design compared to the conventional design was 127% and 180% on the 2 survey areas. In addition, the adaptive nature of the design strategy allows for greater flexibility at sea.
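
One plausible way to operationalize the Phase-2 allocation (a sketch; the paper's exact rule is not reproduced, and the stratum areas and Phase-1 catches below are invented) is Neyman allocation using catch standard deviations observed in Phase 1.

```python
# Allocate Phase-2 stations proportional to stratum_area * phase1_catch_sd.
import numpy as np

def phase2_allocation(areas, phase1_catches, n_phase2):
    sd = np.array([np.std(c, ddof=1) for c in phase1_catches])
    w = np.asarray(areas) * sd                  # Neyman weights N_h * S_h
    alloc = np.floor(n_phase2 * w / w.sum()).astype(int)
    rem = n_phase2 * w / w.sum() - alloc        # largest-remainder rounding
    for i in np.argsort(rem)[::-1][: n_phase2 - alloc.sum()]:
        alloc[i] += 1
    return alloc

areas = [120.0, 80.0, 50.0]
catches = [[4.0, 9.0, 2.0], [55.0, 10.0, 31.0], [1.0, 2.0, 1.0]]
print(phase2_allocation(areas, catches, n_phase2=20))
```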

Journal ArticleDOI
TL;DR: In this paper, a path is selected from the butt of a tree to one of its terminal shoots, and the weight (or volume, etc.) of each internodal segment in the path is multiplied by the reciprocal of its probability of occurring in the sample.
Abstract: A procedure is described using randomized branch sampling and importance sampling - a Monte Carlo integration technique. Both techniques involve sampling with selection probabilities proportional to estimated size and produce unbiased estimates of tree components and their variances. A path is selected from the butt of a tree to one of its terminal shoots. The weight (or volume, etc.) of each internodal segment in the path is multiplied by the reciprocal of its probability of occurring in the sample. The sum of these inflated weights is the estimate of tree biomass. Total fresh weight (foliage plus wood) was estimated using this method for 8 trees (Quercus rubra, Q. alba, Fagus grandifolia, Acer rubrum, Betula lenta and Carya ovata) removed from a mixed oak stand in Connecticut. The overall % error was 4.9%, with a range of 5.6-14.4% for each species considered separately.
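
A Horvitz-Thompson-style sketch of the path estimator, on an invented toy tree: each branch on the path is chosen with probability proportional to a size proxy, and each segment weight is inflated by the inverse of the probability of reaching it.

```python
# Randomized branch sampling: one butt-to-shoot path gives an unbiased
# estimate of the total segment weight of the whole tree.
import random

def rbs_estimate(tree, rng, node="butt", prob=1.0):
    """tree maps node -> (segment_weight, [(child, size_proxy), ...])."""
    weight, children = tree[node]
    total = weight / prob                     # inverse-probability inflation
    if not children:
        return total
    sizes = [s for _, s in children]
    i = rng.choices(range(len(children)), weights=sizes)[0]
    child, size = children[i]
    return total + rbs_estimate(tree, rng, child, prob * size / sum(sizes))

tree = {
    "butt":    (50.0, [("limb_a", 3.0), ("limb_b", 1.0)]),
    "limb_a":  (20.0, [("shoot_1", 1.0), ("shoot_2", 1.0)]),
    "limb_b":  (8.0, []),
    "shoot_1": (2.0, []),
    "shoot_2": (3.0, []),
}
est = [rbs_estimate(tree, random.Random(s)) for s in range(5000)]
print(sum(est) / len(est))                    # unbiased for the true total, 83.0
```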


Journal ArticleDOI
TL;DR: In this paper, optimal rectangular grid sampling configurations for linear and spherical semi-variograms are presented. It is shown that the estimation variance of a bulked sample can be identical with that of a kriged estimate, that the true variances can be much less than those apparent using classical theory, and that the necessary sampling effort can be much less.
Abstract: SUMMARY: Estimates of mean values of soil properties within small rectangular blocks of land can be obtained by kriging provided the semi-variogram is known. This paper describes optimal rectangular grid sampling configurations whereby estimation variances can be minimized. For linear semi-variograms, square blocks are best estimated by sampling at the nodes of a centrally placed grid with its interval equal to the block side divided by the square root of the sample size. For spherical semi-variograms the same configuration is almost optimal. The estimation variance of a bulked sample can be identical with that of a kriged estimate where the semi-variogram is linear and equal portions of soil are taken from each node on the optimally configured grid, provided the soil property is additive. For spherical semi-variograms the above is approximately true. Comparisons with estimates that take no account of known spatial dependence show that the true variances can be much less than those apparent using classical theory, and the necessary sampling effort much less. Within-block variances are often needed for planning, and an appendix gives two-dimensional auxiliary functions from which they can be calculated for linear and spherical semi-variograms.
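
The stated optimum is easy to generate. The sketch below places the centred grid for a square block of side L and sample size n = m², with spacing L / sqrt(n); the numbers are assumed examples.

```python
# Centrally placed sampling grid: m x m nodes inside a square block of
# side L, spacing L/m (= L / sqrt(n)), offset half a spacing from the edge.
import numpy as np

def centred_grid(L, m):
    step = L / m
    offs = (np.arange(m) + 0.5) * step
    return np.array([(x, y) for x in offs for y in offs])

print(centred_grid(L=10.0, m=3))      # 9 sampling points inside the block
```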

Journal ArticleDOI
TL;DR: Field techniques used in recent studies of the population ecology of European tortoises are discussed and an efficient marking system suitable for large sample sizes is described.
Abstract: Field techniques used in recent studies of the population ecology of European tortoises are discussed. An efficient marking system suitable for large sample sizes is described. The dual problems of ageing adults and sexing juveniles are highlighted. The relative merits of sampling techniques are dependent on habitat and population density. Mark-recapture techniques are most suitable when sampling is conducted within a grid system which enables results to be stratified, allowing for differential ease of location. However, grid sampling is only practical where tortoise densities are high. At low density sites, or for brief surveys, recording numbers observed per man hour along random paths is the most convenient method, but allowances must be made for differential observer skill. Existing sampling methods bias against finding inactive animals, and where these form a large proportion of the population, sample size is strongly influenced by habitat type.


Journal ArticleDOI
TL;DR: In this paper, the effect of experimental parameters on the sensitivity of two-dimensional NMR experiments is analyzed, and a set of recommendations is provided for maximizing the sensitivity in terms of parameters such as sampling rates, acquisition times, and weighting functions in the two dimensions.

Journal ArticleDOI
TL;DR: In this paper, a variation of standard eddy correlation techniques for determining eddy fluxes by sampling air in two separate systems depending on whether the vertical velocity is positive or negative is presented.
Abstract: “Eddy accumulation” is a variation of standard eddy correlation techniques for determining eddy fluxes by sampling air in two separate systems depending on whether the vertical velocity is positive or negative. In concept, the corresponding eddy flux is determined directly from measurements of the pollutant concentration (or accumulation) difference between the two sampling systems. In practice, the method has not yet been demonstrated for a slowly-depositing pollutant. A numerical simulation of the eddy accumulation technique has been used to test the sensitivity of the method to errors arising from various sources, including sensor orientation, sampling limitations and chemical resolution. These tests were conducted using artificial pollutant concentration signals derived from real meteorological data (obtained above a forest canopy), in order to avoid the possibility of injecting unwanted errors by employing a poor quality pollutant signal. To detect a pollutant deposition velocity of 0.1 cm s...
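
A sketch of the eddy-accumulation bookkeeping on synthetic data (the velocity and concentration series below are assumed stand-ins for real measurements): accumulate w·c into "up" and "down" reservoirs by the sign of the vertical velocity w; their difference recovers the eddy flux, here checked against direct eddy covariance, to which it is identical by construction in this discrete sketch.

```python
# Conditional sampling at a rate proportional to |w| into two reservoirs;
# the accumulated difference per sample equals mean(w * c), the eddy flux.
import numpy as np

rng = np.random.default_rng(7)
n = 20000
w = rng.standard_normal(n)                          # vertical velocity, zero mean
c = 5.0 + 0.3 * w + 0.5 * rng.standard_normal(n)    # correlated concentration

up = np.sum(np.where(w > 0, w * c, 0.0))            # updraft reservoir
down = np.sum(np.where(w < 0, -w * c, 0.0))         # downdraft reservoir
flux_ea = (up - down) / n
print(flux_ea, "vs direct eddy covariance:", np.mean(w * c))
```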