scispace - formally typeset

Showing papers on "Sampling (statistics)" published in 1989


Journal ArticleDOI
TL;DR: Presents a sampling method for Brillouin-zone integration that converges exponentially with the number of sampling points, without the loss of precision of broadening techniques.
Abstract: We present a sampling method for Brillouin-zone integration in metals which converges exponentially with the number of sampling points, without the loss of precision of normal broadening techniques. The scheme is based on smooth approximants to the δ and step functions which are constructed to give the exact result when integrating polynomials of a prescribed degree. In applications to the simple-cubic tight-binding band as well as to band structures of simple and transition metals, we demonstrate significant improvement over existing methods. The method promises general applicability in the fields of total-energy calculations and many-body physics.

5,862 citations


Book
01 Jan 1989
TL;DR: In this paper, the authors present an approach to estimating coverage error in analytical statistics, including the role of the survey interviewer and its effect on the sample design, as well as the effect of non-response.
Abstract: 1. An Introduction to Survey Errors.1.1 Diverse Perspectives on Survey Research.1.2 The Themes of Integration: Errors and Costs.1.3 The Languages of Error.1.4 Classifications of Error Within Survey Statistics.1.5 Terminology of Errors in Psychological Measurement.1.6 The Language of Errors in Econometrics.1.7 Debates About Inferential Errors.1.8 Important Features of Language Differences.1.9 Summary: The Tyranny of the Measurable.1.10 Summary and Plan of This Book.2. An Introduction to Survey Costs.2.1 Rationale for a Joint Concern About Costs and Errors.2.2 Use of Cost and Error Models in Sample Design.2.3 Criticisms of Cost-Error Modeling to Guide Survey Decisions.2.4 Nonlinear Cost Models Often Apply to Practical Survey Administration.2.5 Survey Cost Models are Inherently Discontinuous.2.6 Cost Models Often Have Stochastic Features.2.7 Domains of Applicability of Cost Models Must Be Specified.2.8 Simulation Studies Might Be Best Suited to Design Decisions.2.9 Is Time Money?2.10 Summary: Cost Models and Survey Errors.3. Costs and Errors of Covering the Population.3.1 Definitions of Populations Relevant to the Survey.3.2 Coverage Error in Descriptive Statistics.3.3 An Approach to Coverage Error in Analytic Statistics.3.4 Components of Coverage Error.3.5 Coverage Problems with the Target Population of Households.3.6 Measurement of and Adjustments for Noncoverage Error.3.7 Survey Cost Issues Involving Coverage Error.3.8 Summary.4. Nonresponse in Sample Surveys.4.1 Nonresponse Rates.4.2 Response Rate Calculation.4.3 Temporal Change in Response Rates.4.4 Item Missing Data.4.5 Statistical Treatment of Nonresponse in Surveys.4.6 Summary.5. Probing the Causes of Nonresponse and Efforts to Reduce Nonresponse.5.1 Empirical Correlates of Survey Participation.5.2 Efforts by Survey Methodologists to Reduce Refusals.5.3 Sociological Concepts Relevant to Survey Nonresponse.5.4 Psychological Attributes of Nonrespondents.5.5 Summary.6.
Costs and Errors Arising from Sampling.6.1 Introduction.6.2 The Nature of Sampling Error.6.3 Measuring Sampling Error.6.4 Four Effects of the Design on Sampling Error.6.5 The Effect of Nonsampling Errors on Sampling Error Estimates.6.6 Measuring Sampling Errors on Sample Means and Proportions from Complex Samples.6.7 The Debate on Reflecting the Sample Design in Estimating Analytic Statistics.6.8 Reflecting the Sample Design When Estimating Complex Statistics.6.9 Summary.7. Empirical Estimation of Survey Measurement Error.7.1 A First Estimation of Observational Errors Versus Errors of Nonobservation.7.2 Laboratory Experiments Resembling the Survey Interview.7.3 Measures External to the Survey.7.4 Randomized Assignment of Measurement Procedures to Sample Persons.7.5 Repeated Measurements of the Same Persons.7.6 Summary of Individual Techniques of Measurement Error Estimation.7.7 Combinations of Design Features.7.8 Summary.8. The Interviewer as a Source of Survey Measurement Error.8.1 Alternative Views on the Role of the Observer.8.2 The Roles of the Survey Interviewer.8.3 Designs for Measuring Interviewer Variance.8.4 Interviewer Effects in Personal Interview Surveys.8.5 Interviewer Effects in Centralized Telephone Surveys.8.6 Explaining the Magnitude of Interviewer Effects.8.7 Summary of Research on Interviewer Variance.8.8 Measurement of Interviewer Compliance With Training Guidelines.8.9 Experiments in Manipulating Interviewer Behavior.8.10 Social Psychological and Sociological Explanations for Response Error Associated with Interviewers.8.11 Summary.9. The Respondent as a Source of Measurement Error.9.1 Components of the Response Formation Process.9.2 The Encoding Process and the Absence of Knowledge Relevant to the Survey Question.9.3 Comprehension of the Survey Question.9.4 Retrieval of Information from Memory.9.5 Judgment of Appropriate Answer.9.6 Communication of Response.9.7 Sociological and Demographic Correlates of Respondent Error.9.8 Summary.10.
Measurement Errors Associated with the Questionnaire.10.1 Properties of Words in Questions.10.2 Properties of Question Structure.10.3 Properties of Question Order.10.4 Conclusions About Measurement Error Related to the Questionnaire.10.5 Estimates of Measurement Error from Multiple Indicator Models.10.6 Cost Models Associated with Measurement Error Through the Questionnaire.10.7 Summary.11. Response Effects of the Mode of Data Collection.11.1 Two Very Different Questions About Mode of Data Collection.11.2 Properties of Media of Communication.11.3 Applying Theories of Mediated Communication to Survey Research.11.4 Findings from the Telephone Survey Literature.11.5 Interaction Effects of Mode and Other Design Features.11.6 Survey Costs and the Mode of Data Collection.11.7 Cost and Error Modeling with Telephone and Face to Face Surveys.11.8 Summary.References.Index.

1,470 citations


Book
26 Jun 1989
TL;DR: A field methods text covering units for measurements, statistical concepts for field sampling, frequency, cover, density, biomass, monitoring and evaluation, and remote sensing.
Abstract: Units for Measurements. Statistical Concepts for Field Sampling. Frequency and Cover. Density. Biomass. Monitoring and Evaluation. Remote Sensing. Appendix. Index.

934 citations


Journal ArticleDOI
TL;DR: Applications are given to a GI/G/1 queueing problem and to response surface estimation; computation of the theoretical moments arising in importance sampling is discussed and some numerical examples are given.
Abstract: Importance sampling is one of the classical variance reduction techniques for increasing the efficiency of Monte Carlo algorithms for estimating integrals. The basic idea is to replace the original...
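The basic idea above (replace the original sampling density with one concentrated where the integrand matters, then reweight by the density ratio) can be sketched in a few lines. The tail-probability target, the shifted exponential proposal, and the sample size below are illustrative assumptions, not taken from the paper.

```python
import math
import random

def normal_tail_is(c, n, seed=0):
    """Importance-sampling estimate of P(Z > c) for Z ~ N(0, 1).

    Proposal: Y = c + Exp(rate=c), i.e. q(y) = c * exp(-c * (y - c)) for
    y > c, which concentrates all samples in the tail region that drives
    the integral. Each draw is reweighted by the density ratio p(y)/q(y).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = c + rng.expovariate(c)                         # draw from the proposal
        p = math.exp(-y * y / 2) / math.sqrt(2 * math.pi)  # target density phi(y)
        q = c * math.exp(-c * (y - c))                     # proposal density
        total += p / q
    return total / n

# P(Z > 4) is about 3.167e-5; naive Monte Carlo would need millions of draws
# to see even a handful of hits, while the reweighted estimate converges
# with modest n because the weights vary only mildly.
estimate = normal_tail_is(4.0, 50_000)
```

Because the proposal tracks the shape of the tail integrand, the weights are nearly constant and the estimator's variance is orders of magnitude below that of crude Monte Carlo.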

646 citations


Book
01 May 1989
TL;DR: Discusses the importance of preliminary analysis, strategies for variance estimation (including replicated sampling), and design-based versus model-based analyses.
Abstract: Series Editor's Introduction Acknowledgments 1. Introduction 2. Sample Design and Survey Data Types of Sampling The Nature of Survey Data A Different View of Survey Data 3. Complexity of Analyzing Survey Data Adjusting for Differential Representation: The Weight Developing the Weight by Poststratification Adjusting the Weight in a Follow-Up Survey Assessing the Loss or Gain in Precision: The Design Effect The Use of Sample Weights for Survey Data Analysis 4. Strategies for Variance Estimation Replicated Sampling: A General Approach Balanced Repeated Replication Jackknife Repeated Replication The Bootstrap Method The Taylor Series Method (Linearization) 5. Preparing for Survey Data Analysis Data Requirements for Survey Analysis Importance of Preliminary Analysis Choices of Method for Variance Estimation Available Computing Resources Creating Replicate Weights Searching for Appropriate Models for Survey Data Analysis 6. Conducting Survey Data Analysis A Strategy for Conducting Preliminary Analysis Conducting Descriptive Analysis Conducting Linear Regression Analysis Conducting Contingency Table Analysis Conducting Logistic Regression Analysis Other Logistic Regression Models Design-Based and Model-Based Analyses 7. Concluding Remarks Notes References Index About the Authors

608 citations


Book
01 Jan 1989
TL;DR: Table of contents of Aczel and Sounderpandian's Complete Business Statistics, 6/e, covering descriptive statistics, probability, sampling and sampling distributions, inference, regression, and related topics.
Abstract: Amir Aczel and Jayavel Sounderpandian, Complete Business Statistics 6/e. Table of Contents: 0. Working with Templates. 1. Introduction and Descriptive Statistics. 2. Probability. 3. Random Variables. 4. The Normal Distribution. 5. Sampling and Sampling Distributions. 6. Confidence Intervals. 7. Hypothesis Testing. 8. The Comparison of Two Populations. 9. Analysis of Variance. 10. Simple Linear Regression and Correlation. 11. Multiple Regression and Correlation. 12. Time Series, Forecasting, and Index Numbers. 13. Quality Control and Improvement. 14. Nonparametric Methods and Chi-Square Test. 15. Bayesian Statistics and Decision Analysis. Appendices: A. References. B. Answers to Most Odd-Numbered Problems. C. Statistical Tables. On the CD: 16. Sampling Methods. 17. Multivariate Analysis.

572 citations


Journal ArticleDOI
TL;DR: Importance sampling as a technique to improve the Monte Carlo method for probability integration can be shown to be extremely efficient and versatile.

514 citations



Book
01 Jan 1989
TL;DR: A text on the econometrics of individual behaviour, covering the theory of rational choice (choice sets, preferences and utility functions, the opportunity set, direct utility and the interior optimum, indirect utility and Roy's identity, the cost function and Shephard's Lemma, the distance function, and variation in preferences), survey methods, discrete choice, censored regression, duration models, and barriers to choice.
Abstract: The Econometrics of Individual Behaviour. Part 1 The Theory of Rational Choice: Choice Sets, Preferences and Utility Functions. The Opportunity Set. Direct Utility and the Interior Optimum. Indirect Utility and Roy's Identity. The Cost Function and Shephard's Lemma. The Distance Function. Variation in Preferences. Part 2 Survey Methods and Cross-Section Econometrics: Sampling Techniques. The Survey Enquiry. An Example - the UK Family Expenditure Survey. Estimation Under Exogenous Sampling. Estimation Under Endogenous Sampling. Non-Response and Extraneous Sample Selection. Part 3 Choice Among Discrete Alternatives: Applications of Discrete Choice Models. Estimation and Testing. Random Parameter Models. Alternative-Specific Random Errors. Tversky's Elimination Model. Hierarchical and Sequential Discrete Choice. Composite Discrete-Continuous Choice. Part 4 Zero Expenditures and Corner Solutions: Single Equation Censored Regression (Tobit) Models. Systems of Censored Regression Equations. Corner Solutions. Consumption and Purchases - the P-Tobit Model. Partially-Observed Explanatory Variables. Part 5 Kinked and Discontinuous Budget Frontiers: Some Examples. Models with Deterministic Preferences. Models with Random Preferences. More on Stochastic Specification. Part 6 Sequential Choice in Continuous Time - Duration Models: The Hazard Function. Applications of Duration Models. Extensions. Parametric Estimation. Semi-Parametric Methods. Part 7 Barriers to Choice: Point Rationing. Bounds on Behaviour. Discrete Opportunities. Appendices: Logistic, Extreme Value and Generalised Value Distributions. Truncated and Censored Distributions. The Computation of Probability Integrals.

369 citations


Book
01 Jan 1989
TL;DR: Covers behavior sampling and recording, subject sampling and assignment, time sampling, reliability (conventional methods and the generalizability approach), validity, and time series analysis.
Abstract: Contents: Preface. Introduction. Behavior Sampling and Recording. Subject Sampling and Assignment. Time Sampling. Reliability: Conventional Methods. Reliability: The Generalizability Approach. Validity. Time Series Analysis: Introduction. Time Series Analysis: ARIMA. Time Series Analysis: Other Methods. Summary. Appendices: Introductory Statistics. Probabilities of the Systematic Error in Single-Subject Time Sampling for Selected Intervals. Maximum and Minimum Kappa for Given Proportion Agreement Value.

346 citations


Journal ArticleDOI
TL;DR: Efficiency of sampling in reserves is an important consideration in conservation planning if reserve systems are to become fully representative before the options for protection are exhausted; scoring and iterative approaches to wildlife conservation evaluation are compared.

Book ChapterDOI
01 Mar 1989
TL;DR: Canopy structure can indirectly affect such processes as photosynthesis, transpiration, cell enlargement, infection by pathogens, growth and multiplication of insects, photomorphogenesis, and competition between species in a plant community.

Journal ArticleDOI
TL;DR: In this article, the authors present a longitudinal model that is capable of separating true changes in school effects from sampling and measurement error, and provide a means for estimating the effects of school policies and practices.
Abstract: This paper presents a general longitudinal model for estimating school effects and their stability. Previous research on the stability of school performance over successive years has produced inconsistent findings. We argue that the findings have been inconsistent for at least two reasons: researchers have estimated different types of school effects, and they have not distinguished between instability due to true changes in school performance and instability due to measurement and sampling error. We describe two different types of school effects, each relevant to a different policy audience, and we present a longitudinal model that is capable of separating true changes in school effects from sampling and measurement error. The model also provides a means for estimating the effects of school policies and practices while controlling statistically for the effects of factors exogenous to the schooling system. This paper provides an example of the approach based on data describing two cohorts of students from one Education Authority in Scotland. It concludes with a discussion of the limitations of the model and implications for those collecting indicators of school performance for planning and evaluation purposes.

Book
01 Jan 1989
TL;DR: In this article, the authors studied the effect of turbulence and electrostatic forces on the air flow near aerosol samplers and compared the performance of thin-walled sampler and blunt sampler.
Abstract: Fluid mechanical background aerosol mechanical background experimental methods in aerosol sampler studies the nature of air flow near aerosol samplers aspiration efficiency of a thin-walled sampler in moving air aspiration efficiency of a blunt sampler in moving air aspiration efficiency of a sampler in calm air effects of turbulence and electrostatic forces on aspiration efficiencies of samplers wall effect contributions to sampler performance criteria for practical aerosol sampling sampling in stacks and ducts sampling probes for stack sampling sampling for coarse aerosol in workplaces sampling for fine aerosol fractions in workplaces sampling for aerosols in the ambient atmosphere aerosol spectrometers general sampling considerations.

Journal ArticleDOI
TL;DR: In this paper, a method for obtaining a representative sample of the floristic variation in a forested area of c. 20 000 km² was described, where a series of gradsects (transects incorporating significant environmental gradients) were selected to represent the environmental variability present in the area.

Journal ArticleDOI
TL;DR: A new sampling technique is presented that consists of picking two elements at random, and deterministically generating a long sequence of pairwise-independent elements that is guaranteed to intersect, with high probability, any set of non-negligible density.
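One classical construction with exactly this flavor can be sketched as follows (the field size, seeds, and target set are illustrative, and this is not necessarily the paper's exact construction): from two random field elements a and b, the sequence x_i = a + i·b (mod p) is pairwise independent, so Chebyshev's inequality guarantees the stream intersects any fixed set of non-negligible density with high probability.

```python
import random

P = 2_147_483_647  # prime field size (a Mersenne prime; illustrative choice)

def pairwise_stream(a, b, n, p=P):
    """Deterministically expand two seed elements a, b into n
    pairwise-independent field elements x_i = a + i*b (mod p)."""
    return [(a + i * b) % p for i in range(n)]

rng = random.Random(1)
a = rng.randrange(P)
b = rng.randrange(1, P)          # b != 0, so the sequence does not collapse
xs = pairwise_stream(a, b, 1000)

# Target set of density ~1/10: elements congruent to 0 mod 10. Pairwise
# independence plus Chebyshev implies the hit count concentrates around
# n/10, so at least one hit is very likely; only 2 random elements were
# consumed to generate all 1000 samples.
hits = sum(1 for x in xs if x % 10 == 0)
```

The saving is in randomness, not computation: two random field elements replace n independent draws, at the price of only pairwise (rather than full) independence.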

Book
Eric R. Ziegel
31 Aug 1989


Journal ArticleDOI
TL;DR: In this paper, the authors evaluate methods of estimating chemical potential from molecular simulations, based on formulas given by Widom, by Bennett, and by Shing and Gubbins, and they are tested with molecular dynamics simulations of Lennard-Jones liquids along the isotherm kT/ε=1.2.
Abstract: This paper evaluates methods of estimating chemical potential—actually fugacity coefficient—from molecular simulations. These methods are based on formulas given by Widom, by Bennett, and by Shing and Gubbins. They are tested with molecular dynamics simulations of Lennard-Jones liquids along the isotherm kT/ε=1.2 for densities from 0.65σ⁻³ to 0.90σ⁻³. A new test molecule sampling method, excluded volume map sampling, is found to be as much as 2 orders of magnitude more efficient than uniform test molecule sampling; it is more efficient and reliable than restricted umbrella sampling proposed by Shing and Gubbins. Several difficulties of estimating the variance of the fugacity coefficient estimates are surmounted by novel application of standard statistical methods. The statistical analysis shows that Bennett's method yields estimates with the least variance and Widom's method yields estimates with the greatest variance although all methods are consistent to within statistical error. Pressure and fugacity c...

Journal ArticleDOI
TL;DR: In this paper, a multivariate procedure for spatial grouping of sampling sites is described. But the method is not suitable for soil survey data from two small areas in Britain and from a transect and the results of the latter are compared with those of strict segmentation.
Abstract: Earth scientists and land managers often wish to group sampling sites that are both similar with respect to their properties and near to one another on the ground. This paper outlines the geostatistical rationale for such spatial grouping and describes a multivariate procedure to implement it. Sample variograms are calculated from the original data or their leading principal components and then the parameters of the underlying functions are estimated. A dissimilarity matrix is computed for all sampling sites, preferably using Gower's general similarity coefficient. Dissimilarities are then modified using the variogram to incorporate the form and extent of spatial variation. A nonhierarchical classification of sampling sites is performed on the leading latent vectors of the modified dissimilarity matrix by dynamic clustering to an optimum. The technique is illustrated with results of its application to soil survey data from two small areas in Britain and from a transect. In the case of the latter results of spatially weighted classifications are compared with those of strict segmentation. An appendix lists a Genstat program for a spatially constrained classification using a spherical variogram as an example.

Journal ArticleDOI
TL;DR: In this article, a technique for spatially degrading high-resolution satellite data to produce comparable data sets over a range of coarser resolutions was proposed, and the results showed that sampling procedures that incorporate averaging result in decreased variance, while sampling procedures adopting single-value selection have higher variance and produce data values comparable with those from the original data.
Abstract: Consideration is given to a technique for spatially degrading high-resolution satellite data to produce comparable data sets over a range of coarser resolutions. Landsat MSS data is used to produce seven spatial resolution data sets by applying a spatial filter designed to simulate sensor response. Also, spatial degradation of coarse resolution data to provide data compression for the production of global-scale data sets is examined. NOAA AVHRR Global Area Coverage data is compared to other sampling procedures. It is found that sampling procedures that incorporate averaging result in decreased variance, while sampling procedures adopting single-value selection have higher variances and produce data values comparable with those from the original data.
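The variance effect described above is easy to reproduce on synthetic data (the grid size, block size, and noise parameters below are illustrative, not the paper's): block averaging shrinks the variance roughly in proportion to the number of pixels averaged, while keeping a single pixel per block preserves the variance of the original data.

```python
import random
import statistics

random.seed(42)
K = 4                                    # degradation factor (4x4 blocks)
N = 64                                   # high-resolution grid is N x N
hi_res = [[random.gauss(100, 20) for _ in range(N)] for _ in range(N)]

def degrade_average(img, k):
    """Coarsen by k x k block averaging (mimics sensor integration)."""
    n = len(img) // k
    return [statistics.fmean(img[i * k + di][j * k + dj]
                             for di in range(k) for dj in range(k))
            for i in range(n) for j in range(n)]

def degrade_select(img, k):
    """Coarsen by keeping one pixel per k x k block (single-value selection)."""
    n = len(img) // k
    return [img[i * k][j * k] for i in range(n) for j in range(n)]

var_avg = statistics.variance(degrade_average(hi_res, K))
var_sel = statistics.variance(degrade_select(hi_res, K))
# For uncorrelated pixels, the averaged product's variance drops by about
# a factor of k*k, while the selected product's variance matches the
# original data, mirroring the AVHRR GAC comparison in the abstract.
```

Real imagery has spatial autocorrelation, so the variance reduction from averaging is smaller than the k² factor seen with independent pixels, but the direction of the effect is the same.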


Proceedings ArticleDOI
25 May 1989
TL;DR: Proposes a jitter generation model and a precision jitter measurement method for practical sampling systems, discusses accuracy and speed limitations of A-to-D converters, and shows that an effective bandwidth compromise between jitter reduction and operating speed is important for more advanced converter design.
Abstract: 1. Introduction. Recent LSI technology advances have resulted in major improvements in electronics. For A-to-D converters particularly, not only a very high precision over 90 dB [1] but also an ultra high speed of 2 GHz for 6 bits [2] have already been achieved. Accuracy and speed limitations of the converters are obviously important. Jitter problems are thought to be the principal limiting factors on the accuracy and speed of the converter. In this paper, our research pertaining to these problems will be presented. First, a jitter generation model for a practical sampling system, including signal and clock generator jitter, will be proposed. Based on this model, a precision jitter measurement method is shown, which also enables each jitter component to be separated. Finally, accuracy and speed limitations of the converter will be discussed, and it will also be shown that an effective bandwidth compromise between jitter reduction and operating speed is important for more advanced converter design.
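One quantitative handle on why jitter caps both accuracy and speed is the standard aperture-jitter bound SNR = -20*log10(2*pi*f_in*sigma_t): for a full-scale sine input at frequency f_in, rms timing jitter sigma_t alone limits the achievable SNR. This textbook formula and the numbers below are illustrative, not taken from the paper.

```python
import math

def jitter_limited_snr_db(f_in_hz, sigma_jitter_s):
    """SNR ceiling (in dB) for sampling a full-scale sine at f_in_hz when
    the only error source is rms aperture jitter sigma_jitter_s:
    SNR = -20 * log10(2 * pi * f_in * sigma_t)."""
    return -20.0 * math.log10(2.0 * math.pi * f_in_hz * sigma_jitter_s)

# 1 ps rms jitter already limits a 1 GHz input to roughly 44 dB (about
# 7 effective bits), which is why jitter dominates at GHz sample rates.
snr = jitter_limited_snr_db(1e9, 1e-12)
```

The bound tightens linearly with input frequency, so every doubling of f_in costs about 6 dB (one effective bit) for the same clock jitter.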

Journal ArticleDOI
TL;DR: In this article, the authors consider the properties of Shewhart control charts when the sampling interval used after each sample is not tixed but instead depends on what is observed in the sample.
Abstract: When a control chart is used to detect changes in a process the usual practice is to take samples from the process using a fixed sampling interval between samples. This paper considers the properties of Shewhart control charts when the sampling interval used after each sample is not fixed but instead depends on what is observed in the sample. The basic rationale is that the sampling interval should be short if there is some indication of a change in the process and long if there is no indication of a change. If the indication of a change is strong enough then the chart signals in the same way as the fixed sampling interval Shewhart chart. The result is that samples will be taken more frequently when there is a change in the process, and this process change can be detected much more quickly than when fixed sampling intervals are used. Expressions for properties such as the average time to signal and the average number of samples to signal are developed. It is shown that if the sampling interval must be cho...
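The adaptive-interval idea can be simulated directly (the warning limit, the two intervals, the shift size, and the plain X chart below are illustrative choices, not the paper's exact design): sample sooner when the last point falls in a warning zone, later otherwise, and compare the average time to signal after a mean shift.

```python
import random

def average_time_to_signal(shift, short, long_, warning=1.0, limit=3.0,
                           reps=2000, seed=7):
    """Monte Carlo average time to signal for an X chart on N(shift, 1) data.

    After each non-signaling point, the next sample is taken after `short`
    time units if the point was in the warning zone (|z| > warning), else
    after `long_` units; the chart signals when |z| > limit.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        t = long_                      # time until the first sample
        while True:
            z = rng.gauss(shift, 1.0)
            if abs(z) > limit:         # out-of-control signal
                break
            t += short if abs(z) > warning else long_
        total += t
    return total / reps

fixed_ats = average_time_to_signal(1.5, 1.0, 1.0)    # fixed interval of 1
vsi_ats = average_time_to_signal(1.5, 0.25, 1.75)    # variable intervals
# After a 1.5-sigma shift the variable-interval chart signals sooner on
# average, because shifted points tend to land in the warning zone and
# trigger the short interval.
```

A fair comparison in practice also matches the two schemes' in-control average time between samples; the interval pair here is only a rough illustration of the mechanism.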

Journal Article
TL;DR: The findings differ from those of previous studies and suggest that the technique of fine needle sampling employed for cytodiagnosis can be left to the personal preference of the operator.
Abstract: One hundred consecutive superficial mass lesions in various body sites were sampled by both conventional fine needle aspiration (FNA) and by a fine needle without the application of syringe suction. The latter technique is based on the principle of capillarity and may be termed "fine needle capillary" (FNC) sampling. The two sampling techniques were compared using five objective parameters: (1) the amount of diagnostic cellular material present, (2) the retention of appropriate architecture and cellular arrangement, (3) the degree of cellular degeneration, (4) the cellular trauma and (5) the volume of obscuring background blood and clots. There was no statistically significant difference between the efficacies of the two sampling techniques for any of the parameters studied. FNA sampling was diagnostic in a greater number of cases than was FNC sampling, but this difference was not statistically significant at a level of P = .05. When FNC sampling was diagnostic, it more frequently produced superior-quality material; conventional FNA, although diagnostic in a greater number of cases, mostly produced adequate, rather than superior-quality, material. This trend was not, however, statistically significant at a level of P = .05. These findings differ from those of previous studies (which have shown overall superiority of FNC sampling over conventional FNA sampling) and suggest that the technique of fine needle sampling employed for cytodiagnosis can be left to the personal preference of the operator.

Journal ArticleDOI
TL;DR: In this article, an optimal sampling plan for groundwater quality monitoring is formulated as a mixed integer programming (MIP) problem, which is defined by the minimization of the variance of estimation error subject to resource and unbiasedness constraints.
Abstract: The optimal sampling plan for groundwater quality monitoring is formulated as a mixed integer programming (MIP) problem. A sampling plan consists of the number and locations of sampling sites as well as the temporal sampling frequency. The MIP network problem is defined by the minimization of the variance of estimation error subject to resource and unbiasedness constraints. The mean and covariance of the spatial/temporal variable (chloride concentration measurements) are derived from the advection-dispersion equation governing mass transport. The solution for the optimal sampling proceeds in two stages: (1) parameter estimation and (2) network optimization. The MIP model was successfully tested with a network design problem in a buried valley aquifer in Butler County, Ohio. The application illustrates the role of objective function, resource constraint, mass transport processes, and hydrogeologic setting in groundwater quality monitoring network design.

Book ChapterDOI
TL;DR: The investigation is confined to procedures which provide error estimates and concentrates on approaches based on numerical integration and simulation where particular attention is focused on variance reduction techniques.

Book
01 Jan 1989
TL;DR: Covers the history and evolution of quality control; probability models for quality control; descriptive statistics, sampling, and inference; and process control.
Abstract: OVERVIEW. History and Evolution of Quality Control. Management of Quality. PROBABILITY AND STATISTICS. Probability Models for Quality Control. Descriptive Statistics, Sampling, and Inference. PROCESS CONTROL. Control Chart Principles. Control Charts for Attributes. Control Charts for Variables. Special Process Control Procedures. Specifications and Tolerances. ACCEPTANCE SAMPLING. Fundamental Concepts in Acceptance Sampling. Acceptance Sampling by Attributes. Acceptance Sampling by Variables. Special Attribute Sampling Procedures. RELATED SUBJECTS. Graphic Methods for Quality Improvement. Computers and Quality Control. Reliability Prediction and Life Testing. Appendix. Index.

Book ChapterDOI
TL;DR: The results indicate that the decision-making process in sampling must be viewed as a flexible exercise, dictated not by generalized recommendations but by specific objectives: there is no panacea in ecological sampling.
Abstract: In this paper we emphasize that sampling decisions in population and community ecology are context dependent. Thus, the selection of an appropriate sampling procedure should follow directly from considerations of the objectives of an investigation. We recognize eight sampling alternatives, which arise as a result of three basic dichotomies: parameter estimation versus pattern detection, univariate versus multivariate, and a discrete versus continuous sampling universe. These eight alternative sampling procedures are discussed as they relate to decisions regarding the required empirical sample size, the selection or arrangement of sampling units, and plot size and shape. Our results indicate that the decision-making process in sampling must be viewed as a flexible exercise, dictated not by generalized recommendations but by specific objectives: there is no panacea in ecological sampling. We also point to a number of unresolved sampling problems in ecology.

Journal ArticleDOI
TL;DR: Tests the efficiency of three trapping methods for evaluating the abundance and diversity of ants in savannas.
Abstract: A test of the efficiency of three trapping methods for evaluating the abundance and diversity of ants in savannas.