
Showing papers on "Sample size determination published in 1970"



Book
01 Jan 1970
TL;DR: This book introduces educational research: the meaning of research, the search for knowledge, the role of theory, operational definitions of variables, sampling and sample size, research methods, and data analysis and interpretation.
Abstract: Each chapter concludes with "Summary," "Exercises" and "References."
Preface
I. INTRODUCTION TO EDUCATIONAL RESEARCH: DEFINITIONS, RESEARCH PROBLEMS, PROPOSALS, AND REPORT WRITING.
1. The Meaning of Research. Why Should I Study Research? The Search for Knowledge Science The Role of Theory Operational Definitions of Variables The Hypothesis The Research Hypothesis The Null Hypothesis (Ho) Populations Sampling Randomness The Simple Random Sample Random Numbers The Systematic Sample The Stratified Random Sample The Area or Cluster Sample Nonprobability Samples Sample Size Purposes of Research Fundamental or Basic Research Applied Research Action Research Assessment, Evaluation, and Descriptive Research Types of Educational Research
2. Selecting a Problem and Preparing a Research Proposal. The Academic Research Problem Levels of Research Projects Sources of Problems Evaluating the Problem Using the Library Finding Related Literature References and Bibliography Fair Use of Copyrighted Materials The Research Proposal The First Research Project Submitting a Research Proposal to a Funding Agency Thesis Proposal Ethics in Human Experimentation History of Research Ethics Regulations From Regulations to Practice.
3. The Research Report. Style Manuals Format of the Research Report Main Body of the Report References and Appendices The Thesis or Dissertation Style of Writing Reference Form Pagination Tables Figures The Line Graph The Bar Graph or Chart The Circle, Pie, or Sector Chart Maps Organization Charts Evaluating a Research Report
II. RESEARCH METHODS.
4. Historical Research. Purpose of Historical Research on American Education History and Science Historical Generalization The Historical Hypothesis Hypotheses in Educational Historical Research Difficulties Encountered in Historical Research Sources of Data Primary Sources of Data Primary Sources of Educational Data Secondary Sources of Data Historical Criticism External Criticism Internal Criticism Examples of Topics for Educational Historical Study Writing the Historical Report
5. Descriptive Studies: Assessment, Evaluation, and Research. Assessment Studies The Survey Social Surveys Public Opinion Surveys National Center for Education Statistics International Assessment Activity Analysis Trend Studies Evaluation Studies School Surveys Program Evaluation Assessment and Evaluation in Problem Solving Descriptive Research Causal Comparative Research Correlational Research Follow-up Research Other Descriptive Research Replication and Secondary Analysis The Post Hoc Fallacy
6. Experimental and Quasi-Experimental Research. Early Experimentation Experimental and Control Groups Variables Independent and Dependent Variables Confounding Variables Controlling Extraneous Variables Experimental Validity Threats to Internal Experimental Validity Threats to External Experimental Validity Experimental Design Pre-experimental Designs True Experimental Designs Quasi-Experimental Designs Factorial Designs
7. Single-Subject Experimental Research. General Procedures Repeated Measurement Baselines Manipulating Variables Length of Phases Transfer of Training and Response Maintenance Assessment Target Behavior Data Collection Strategies Basic Designs A-B-A-B Designs Multiple Baseline Designs Other Designs Evaluating Data
8. Qualitative Research. A Qualitative Research Model. Themes of Qualitative Research Research Questions Theoretical Traditions Research Strategies Document or Content Analysis The Case Study Ethnographic Studies Data Collection Techniques Observations Interviews Review of Documents Other Qualitative Data Collection Techniques Data Analysis and Interpretation Combining Qualitative and Quantitative Research
9. Methods and Tools of Research. Reliability and Validity of Research Tools Quantitative Studies Qualitative Studies Psychological and Educational Tests and Inventories Qualities of a Good Test and Inventory Validity Reliability Economy Interest Types of Tests and Inventories Achievement Tests Aptitude Tests Interest Inventories Personality Inventories Projective Devices Observation Validity and Reliability of Observation Recording Observations Systematizing Data Collection Characteristics of Good Observation Inquiry Forms: The Questionnaire Closed Form The Open Form Improving Questionnaire Items Characteristics of a Good Questionnaire Preparing and Administering the Questionnaire A Sample Questionnaire Validity and Reliability of Questionnaires Inquiry Forms: The Opinionnaire Thurstone Technique Likert Method Semantic Differential The Interview Validity and Reliability of the Interview Q Methodology Social Scaling Sociometry Scoring Sociometric Choices The Sociogram "Guess-who" Technique Social-distance Scale Organization of Data Collection Outside Criteria for Comparison Limitations and Sources of Error
III. DATA ANALYSIS.
10. Descriptive Data Analysis. What Is Statistics? Parametric and Nonparametric Data Descriptive and Inferential Analysis The Organization of Data Grouped Data Distributions Statistical Measures Measures of Central Tendency Measures of Spread or Dispersion Normal Distribution Nonnormal Distributions Interpreting the Normal Probability Distribution Practical Applications of the Normal Curve Measures of Relative Position: Standard Scores The Z Score (Z) The T Score (T) The College Board Score (Zcb) Stanines Percentile Rank Measures of Relationship Pearson's Product-Moment Coefficient of Correlation (r) Rank Order Correlation (ρ) Phi Correlation Coefficient (φ) Interpretation of a Correlation Coefficient Outliers Misinterpretation of the Coefficient of Correlation Prediction Standard Error of Estimate A Note of Caution
11. Inferential Data Analysis. Statistical Inference The Central Limit Theorem Parametric Tests Testing Statistical Significance The Significance of the Difference between the Means of Two Independent Groups The Null Hypothesis (Ho) The Level of Significance Decision Making Two-Tailed and One-Tailed Tests of Significance Degrees of Freedom A One-Sample Z Test Student's Distribution (t) Significance of the Difference between Two Small Sample Independent Means Homogeneity of Variances Significance of the Difference between the Means of Two Matched or Correlated Groups (Nonindependent Samples) Statistical Significance of a Coefficient of Correlation Analysis of Variance (ANOVA) Analysis of Covariance (ANCOVA) and Partial Correlation Multiple Regression and Correlation Nonparametric Tests The Chi Square Test (χ²) The Mann-Whitney Test Outliers and Missing Data
12. Computer Data Analysis. The Computer Data Organization Computer Analysis of Data Descriptive Statistics Graphs Multiple Regression ANOVA Results from Analyses of Appendix B Data Statistics on the World Wide Web Qualitative Analyses Using Computer Software
Appendix A: Statistical Formulas and Symbols. Appendix B: Sample Data. Appendix C: Percentage of Area Lying Between the Mean and Successive Standard Deviation Units Under the Normal Curve. Appendix D: Critical Values for Pearson's Product-Moment Correlation (r). Appendix E: Critical Values of Student's Distribution (t). Appendix F: Abridged Table of Critical Values for Chi Square. Appendix G: Critical Values of the F Distribution. Appendix H: Research Report Evaluation. Appendix I: Answers to Statistics Exercises.

3,598 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that Fisher's non-randomizing exact test for 2 × 2-tables, which is a conditional test, can by simple means be changed into an unconditional test using raised levels of significance; not seldom, especially for not too large samples, the level of significance can be doubled.
Abstract: In this paper it is shown that Fisher's non-randomizing exact test for 2 × 2 tables, which is a conditional test, can by simple means be changed into an unconditional test using raised levels of significance; not seldom, especially for not too large samples, the level of significance can be doubled. This leads in many cases to a considerable increase in the power of the test. A table with raised levels has been prepared up to sample sizes of 50, and a rule of thumb, which can be used if this table is not available, has been developed.
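A quick way to see why raised nominal levels are possible: because Fisher's exact test conditions on the margins, its unconditional size under the null typically falls well below the nominal level. The sketch below is my illustration rather than the paper's table; the group sizes and the common success probability are arbitrary choices.

```python
# Unconditional size of Fisher's exact test at nominal alpha = 0.05
# for two binomial samples of size 10 with a common success probability.
from itertools import product
from scipy.stats import binom, fisher_exact

n1 = n2 = 10
alpha, p = 0.05, 0.3          # illustrative values, not from the paper

size = 0.0
for x1, x2 in product(range(n1 + 1), range(n2 + 1)):
    _, pval = fisher_exact([[x1, n1 - x1], [x2, n2 - x2]])
    if pval <= alpha:
        size += binom.pmf(x1, n1, p) * binom.pmf(x2, n2, p)

print(round(size, 4))          # well below 0.05 -- the room that raised levels exploit
```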

149 citations


Journal ArticleDOI
TL;DR: Employing simulated data, several methods for estimating correlation and variance-covariance matrices are studied for observations missing at random from data matrices.
Abstract: Employing simulated data, several methods for estimating correlation and variance-covariance matrices are studied for observations missing at random from data matrices. The effects of sample size, number of variables, percentage of missing data, and average intercorrelations of variables are examined for several proposed methods.
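As a minimal illustration of the kind of comparison involved (not the paper's actual estimators or simulation design), the sketch below deletes about 15% of a simulated data matrix at random and contrasts listwise-complete with pairwise-complete correlation estimates.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
n, p = 200, 4
z = rng.multivariate_normal(np.zeros(p), 0.5 * np.eye(p) + 0.5, size=n)
data = pd.DataFrame(z).mask(rng.random((n, p)) < 0.15)   # ~15% missing at random

listwise = data.dropna().corr()    # complete cases only
pairwise = data.corr()             # each correlation uses all available pairs
print(round((listwise - pairwise).abs().to_numpy().max(), 3))
```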

86 citations


Journal ArticleDOI
TL;DR: A decision-theoretic approach to the design of a clinical trial is considered for the situation in which a total of N patients with a disease receive one of two treatments and the responses to the treatments are dichotomous.
Abstract: A decision theoretic approach to the design of a clinical trial is considered for the situation in which a total of N patients with a disease receive one of two treatments and the responses to the treatments are dichotomous. Two costs are considered: The cost of treating a patient with the inferior treatment and the cost of conducting the trial. Minimax and Bayes procedures are used to determine the optimum size of a fixed sample trial. Bayes solutions for the optimum sample size are given for a variety of beta prior distributions and various values of N. Minimax solutions are given for a variety of regions over which the minimaxing was performed, and for various values of N. The optimum sample sizes are found to be asymptotically proportional to N 1/2 using the Bayes procedure and N 2/3 using the minimax procedure. The consequences of erring in specifying the value of N are explored.
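A rough Monte Carlo sketch of the trade-off being optimized, under assumptions of my own (independent uniform Beta(1, 1) priors, the inferior-treatment cost only, and an illustrative N); the paper itself derives the Bayes and minimax solutions analytically.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_inferior_count(n, N, reps=20_000):
    """Prior-expected number of the N patients who receive the inferior treatment
    when n go to each arm of the trial and the rest get the apparent winner."""
    p1 = rng.beta(1, 1, reps)                 # true success rates drawn from the priors
    p2 = rng.beta(1, 1, reps)
    x1 = rng.binomial(n, p1)                  # trial outcomes on each arm
    x2 = rng.binomial(n, p2)
    pick1 = x1 >= x2                          # arm given to the remaining N - 2n patients
    wrong = np.where(pick1, p1 < p2, p2 < p1)
    return (n + (N - 2 * n) * wrong).mean()   # n trial patients are always on the worse arm

N = 400
costs = {n: expected_inferior_count(n, N) for n in range(1, N // 2, 2)}
best_n = min(costs, key=costs.get)
print(best_n)   # a small fraction of N; per the paper the Bayes-optimal n grows like N**0.5
```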

70 citations


Journal ArticleDOI
Jacob Cohen
TL;DR: Drawing on a handbook of power analysis (Cohen, 1969), this article discusses the need for, and relative neglect of, statistical power analysis of the Neyman-Pearson (1928, 1933) type in the design and interpretation of research in the behavioral sciences.
Abstract: In the course of preparing a handbook for power analysis (Cohen, 1969), it became apparent that at the cost of (a) working with approximate rather than "exact" solutions, and (b) doing a modest amount of computing, many problems of statistical power analysis frequently encountered by hypothesis-testing behavioral scientists could be brought into a simple common framework. Past publications in this program have discussed (Cohen, 1965, pp. 95-101) and documented (Cohen, 1962) the need for, and relative neglect of, statistical power analysis of the Neyman-Pearson (1928, 1933) type in the design and interpretation of research in the behavioral sciences. This article and the more
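In the spirit of the approximate solutions the article describes, the sketch below computes the power of a two-sided, two-sample comparison of means from the standardized effect size d and the per-group n, using a normal approximation in place of the exact noncentral t; the function name and numbers are illustrative rather than taken from Cohen's tables.

```python
from scipy.stats import norm

def approx_power(d, n, alpha=0.05):
    """Approximate power for a two-sided two-sample test of a standardized difference d."""
    z_crit = norm.ppf(1 - alpha / 2)
    delta = d * (n / 2) ** 0.5                # approximate noncentrality
    return norm.sf(z_crit - delta) + norm.cdf(-z_crit - delta)

print(round(approx_power(d=0.5, n=64), 2))    # about 0.80 for a medium-sized effect
```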

70 citations


Journal ArticleDOI
TL;DR: The data indicate that the sample sizes required to reject the null hypothesis (F = 0) are much larger than hitherto employed, given the range of F expected in a natural population, and that the commonly employed method of using the χ² statistic to estimate F in natural populations is unlikely to give significant estimates unless very large samples are used.

Abstract: The data above indicate that the sample sizes required to reject the null hypothesis (F = 0) are much larger than hitherto employed, given the range of F expected in a natural population (e.g., from .0004 to .009 in nonprimitive human populations). The use of the χ² statistic, without considering the power of the test, to estimate F from small samples would seem to be unjustified, since the magnitude of inbreeding expected in a natural population would not cause significant χ² values. On the other hand, should significant χ² values be obtained, the magnitude of F required is so great that it would be illogical to attribute the deviations from H-W proportions to inbreeding. Thus it would seem that the problem of estimating inbreeding in a natural population is analogous to the problem of estimating selection (Lewontin and Cockerham 1959; Neel and Schull 1968), because sample sizes far in excess of those normally used are required. The material presented here indicates the size of sample required to detect a sp...
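A hedged sketch of the kind of power calculation involved: for a biallelic locus, the 1-df chi-square test of Hardy-Weinberg proportions has noncentrality of roughly N·F², a standard approximation that I am assuming here rather than quoting from the paper. Iterating N until the noncentral chi-square power reaches the target reproduces the qualitative point that enormous samples are needed for realistic F.

```python
from scipy.stats import chi2, ncx2

def n_required(F, alpha=0.05, power=0.80):
    """Approximate smallest N detecting inbreeding coefficient F with the 1-df HWE test."""
    crit = chi2.ppf(1 - alpha, df=1)
    N = 10
    while ncx2.sf(crit, df=1, nc=N * F**2) < power:
        N = int(N * 1.1) + 1
    return N

for F in (0.01, 0.005, 0.001):
    print(F, n_required(F))    # sample sizes run from tens of thousands into the millions
```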

68 citations


Journal ArticleDOI
TL;DR: Answers to questions concerning adequate sample size in a one-way analysis of variance depend on such things as the number of categories to be compared, the levels of risk an experimenter is willing to assume, and some knowledge of the noncentrality parameter; the accompanying tables provide these answers without the need for iteration.
Abstract: SUMMARY Answers to questions concerning adequate sample size in a one-way analysis of variance situation depend on such things as the number of categories to be compared, the levels of risk an experimenter is willing to assume and some knowledge of the noncentrality parameter. The accompanying tables, which provide answers without need of iteration, are for the experimenter who can deal better intuitively with an estimate of the standardized range of the means than with the noncentrality parameter. Maximum values of the standardized range, τ, are tabulated when the means of k groups, each containing N observations, are being compared at α and β levels of risk, for α = 0.01, 0.05, 0.1, 0.2; β = 0.005, 0.01, 0.05, 0.1, 0.2, 0.3; k = 2 (1) 6; N = 2 (1) 8 (2) 30, 40, 60, 80, 100, 200, 500, 1000.
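The tabulated relationship can be checked numerically with the noncentral F distribution. The sketch below assumes the least favourable configuration usually paired with the standardized range (two means τ standard deviations apart, the others at their midpoint), which gives noncentrality Nτ²/2; the layout and values are my assumptions, not a transcription of the tables.

```python
from scipy.stats import f, ncf

def anova_power(k, N, tau, alpha=0.05):
    """Power of a one-way ANOVA with k groups of N observations and standardized range tau."""
    df1, df2 = k - 1, k * (N - 1)
    crit = f.ppf(1 - alpha, df1, df2)
    lam = N * tau**2 / 2                   # extreme-means (least favourable) configuration
    return ncf.sf(crit, df1, df2, lam)

print(round(anova_power(k=4, N=20, tau=1.0), 2))
```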

66 citations



Journal ArticleDOI
TL;DR: Tables give the minimum sample sizes per treatment (or level) for a range of α and β risk levels and for relative discrimination Δ/σ = 1.0(0.5)3.0.
Abstract: Tables are provided which give the minimum sample sizes per treatment (or level) for all combinations of α = 0.5, 0.3, 0.25, 0.2, 0.1, 0.05, 0.01 and β = 0.3, 0.2, 0.1, 0.05, for relative discrimination Δ/σ = 1.0(0.5)3.0, and for the nu..
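For orientation, a rough closed-form counterpart to such tables (my normal-theory approximation for comparing two treatments, not the authors' derivation): the sample size per treatment needed to detect a relative discrimination Δ/σ at one-sided level α with power 1 − β.

```python
from math import ceil
from scipy.stats import norm

def n_per_treatment(delta_over_sigma, alpha=0.05, beta=0.10):
    z = norm.ppf(1 - alpha) + norm.ppf(1 - beta)
    return ceil(2 * (z / delta_over_sigma) ** 2)

for d in (1.0, 1.5, 2.0, 2.5, 3.0):
    print(d, n_per_treatment(d))   # required n falls off quickly as Delta/sigma grows
```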

58 citations


Journal ArticleDOI
TL;DR: In this paper, the relative merits of several solutions to the Behrens-Fisher problem are considered on the basis of the stability of their size and the magnitude of their power, and it is shown that if the sample sizes are both larger than 7, then the solutions due to Pagurova and Welch are very good.
Abstract: SUMMARY The relative merits of several solutions to the Behrens-Fisher problem are considered on the basis of the stability of their size and the magnitude of their power. It is shown that if the sample sizes are both larger than 7, then the solutions due to Pagurova and Welch are very good in the above sense. For smaller sample sizes certain modifications of Pagurova's solution are presented. 1. PRELIMINARIES Suppose that we have two normal distributions, the first with mean and variance parameters μ1 and σ1², and the second with parameters μ2 and σ2². Samples of sizes n1 and n2, respectively, are taken, and the sample means and variances are x̄1 and s1², and x̄2 and s2². The Behrens-Fisher problem consists in testing the null hypothesis H0: η = (μ1 − μ2)/σ1 = 0 against one of the alternatives η > 0, η < 0 or η ≠ 0. Although several solutions to this problem have been proposed in the past forty years, no adequate guidelines have so far been established as to which one to follow in a given practical situation. Our aim is partly to provide such a guideline based on a comparative study of some of the solutions. For this purpose we have chosen the ones due to Banerjee (1960), Fisher (1936), Pagurova (1968a), Wald (1955) and Welch (1947). Letting Ai = 1/ni (i = 1, 2), we see that the solutions are all of the following general form. Reject H0 if v = (x̄1 − x̄2)/√(A1s1² + A2s2²) > V_α(C), where V_α(C) is a function of C = A1s1²/(A1s1² + A2s2²) and the chosen level of significance α. The expressions for V_α(C) for these five solutions are presented in Table 1. In the case of Fisher's and Welch's solutions, where V_α(C) is based on an asymptotic expansion, we have taken it to the second order in the sample size, and in the case of Wald's solution, which is applicable only when the sample sizes are equal, we have taken the common sample size to be n1.
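As a minimal working version of the Welch-type solution discussed above, the sketch below uses the familiar first-order Welch-Satterthwaite approximation for the degrees of freedom rather than the second-order V_α(C) expansion compared in the paper; function and variable names are mine.

```python
import numpy as np
from scipy.stats import t

def welch_test(x1, x2, alpha=0.05):
    """One-sided Welch test of H0: mu1 - mu2 = 0 against mu1 - mu2 > 0."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, n2 = len(x1), len(x2)
    a1, a2 = x1.var(ddof=1) / n1, x2.var(ddof=1) / n2        # the A_i * s_i^2 terms
    v = (x1.mean() - x2.mean()) / np.sqrt(a1 + a2)           # the statistic v above
    df = (a1 + a2) ** 2 / (a1**2 / (n1 - 1) + a2**2 / (n2 - 1))
    return v, df, v > t.ppf(1 - alpha, df)

rng = np.random.default_rng(1)
print(welch_test(rng.normal(0.8, 1.0, 12), rng.normal(0.0, 3.0, 9)))
```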

Journal ArticleDOI
TL;DR: In this article, the Pitman-Morgan test for comparing marginal variances in a bivariate distribution can be applied directly and the power function of the test and graphs to estimate the sample size for specified power may be based on David's tables.
Abstract: Comparison of the reproducibility of two instruments or two techniques involves statistical methods which are now available in textbooks whenever the variability of specimens employed in successive measurements can be effectively eliminated. However, conditions where variations of material under test between successive measures are unavoidable appear to be particularly common in biological work. A technique for separating these two sources of error (techniques and material being measured) was obtained by Grubbs [1948], but has previously lacked an adequate test by which the relative precision of the techniques compared can be evaluated. It is here shown that the Pitman-Morgan test for comparing marginal variances in a bivariate distribution can be applied directly. The power function of the test and graphs to estimate the sample size for specified power may be based on David's tables.
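The Pitman-Morgan test referred to above has a compact form worth recording: for paired measurements, equality of the two marginal variances is equivalent to zero correlation between the sums and the differences, which is tested with an ordinary correlation t-test on n − 2 degrees of freedom. The sketch below, with invented data, is my illustration of that identity.

```python
import numpy as np
from scipy.stats import t

def pitman_morgan(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    r = np.corrcoef(x + y, x - y)[0, 1]          # correlation of sums with differences
    stat = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)
    return stat, 2 * t.sf(abs(stat), n - 2)      # two-sided p-value

rng = np.random.default_rng(2)
specimen = rng.normal(0, 1.0, 30)                # true values of the material measured
technique_a = specimen + rng.normal(0, 0.5, 30)  # more precise technique
technique_b = specimen + rng.normal(0, 1.5, 30)  # less precise technique
print(pitman_morgan(technique_a, technique_b))
```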

Journal ArticleDOI
TL;DR: In this paper, the median lethal dose parameter of a quantal, logistic dose response curve is estimated using Bayesian decision theory and a stopping rule and terminal decision rule to minimize the prior expectation of the total cost of observation plus estimation loss.
Abstract: This paper concerns the sequential design of experiments for estimating the median lethal dose parameter of a quantal, logistic dose response curve. Bayesian decision theory is used to provide a stopping rule and terminal decision rule to minimize the prior expectation of the total cost of observation plus estimation loss. Special cases in which trials can only be performed at one, two or three dose levels are considered, and optimal strategies are evaluated numerically. The expected losses are compared with those of the up and down method using a fixed sample size equal to the prior expectation of the number of trials under the sequential design. The up and down method is shown to be surprisingly good, having efficiency usually in excess of 90 %.

Journal ArticleDOI
TL;DR: Sample size tables are given for tolerance limits on a normal distribution, where the criterion used for determining sample size is as follows: for a tolerance limit such that Pr (coverage ≥ P) = γ, choose P′ > P and δ (small) and require Pr (coverage ≥ P′) ≤ δ.
Abstract: Sample size tables are given for tolerance limits on a normal distribution. Wald-Wolfowitz two-sided limits and one-sided limits are considered. The criterion used for determining sample size is as follows: For a tolerance limit such that Pr (coverage ≥ P) = γ, choose P′ > P and δ (small) and require Pr (coverage ≥ P′) ≤ δ. Five levels of P, three levels of γ, three levels of P′, and three levels of δ are used in the tables. The tables are given for the common case where the degrees of freedom for the χ² is one less than the sample size, but it is shown how to use the tables for other cases which occur in simple linear regression and some experimental designs. Examples are given to illustrate the use of the tables.
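For a sense of the quantities being tabulated, the sketch below computes a two-sided normal tolerance factor k (so that x̄ ± ks covers at least a proportion P with confidence γ) using Howe's textbook approximation; this stands in for, rather than reproduces, the Wald-Wolfowitz limits treated in the paper, and the nu argument mirrors the point that the χ² degrees of freedom need not equal n − 1.

```python
from scipy.stats import norm, chi2

def tolerance_factor(n, P=0.90, gamma=0.95, nu=None):
    """Two-sided normal tolerance factor k via Howe's approximation."""
    nu = n - 1 if nu is None else nu              # allow other chi-square degrees of freedom
    z = norm.ppf((1 + P) / 2)
    return z * ((nu * (1 + 1 / n)) / chi2.ppf(1 - gamma, nu)) ** 0.5

print(round(tolerance_factor(n=20), 3))           # about 2.31 for n = 20, P = 0.90, gamma = 0.95
```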

Journal ArticleDOI
TL;DR: A state-of-the-art summary of the methods of employing the Weibull distribution for analysis of experimental data with emphasis on practical application can be found in this paper.
Abstract: Because of the scatter of fatigue test data, statistical methods are required for the interpretation of the data. A method, which features the Weibull distribution as the basic statistical model, is analyzed. This point estimation method is applicable in cases where the sample sizes are relatively large. Methods for estimating the parameters for the two and three parameter Weibull distributions are summarized. Furthermore, estimates of the confidence intervals for the parameters in the case of the two-parameter family are presented. The paper is a state-of-the-art summary of the methods of employing the Weibull distribution for analysis of experimental data with emphasis on practical application.
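A minimal illustration of the kind of point estimation summarized in the paper: a maximum-likelihood fit of the two-parameter Weibull model to simulated fatigue lives using scipy, with the location fixed at zero. The simulated sample and parameter values are placeholders, and the paper's own estimation formulas may differ from scipy's generic fitter.

```python
from scipy.stats import weibull_min

# simulated cycles-to-failure standing in for fatigue test data
lives = weibull_min.rvs(c=1.8, scale=2.0e5, size=60, random_state=3)

shape, loc, scale = weibull_min.fit(lives, floc=0)   # two-parameter fit (location pinned at 0)
print(f"shape = {shape:.2f}, characteristic life = {scale:.3g} cycles")
```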


Journal ArticleDOI
TL;DR: The authors show that the Nagar approach can yield an estimate of finite sample bias that differs from the true finite sample bias to the same order of sample size, and that it can yield estimates of bias which are finite (infinite) while the true bias is infinite (finite).
Abstract: The exact sampling distributions of estimators of structural parameters of econometric models are unknown except for a few simple cases. In this situation two alternative approaches towards evaluating finite sample properties of various estimators have been adopted in the literature: (i) Monte Carlo experiments, and (ii) the approach pioneered by Nagar and his students in which the sampling error of an estimator is expressed as the sum of an infinite series of random variables, successive terms of which are of decreasing order of sample size in probability. It is claimed that the small sample properties of the estimator under consideration can be approximated by those of the first few terms of such an infinite series. This paper shows through examples that the Nagar approach can be misleading in the sense that it can yield an estimate for finite sample bias that differs from the true finite sample bias to the same order of sample size. And it can yield estimates of bias which are finite (infinite) while the true bias is infinite (finite). The paper also draws attention to some of the pitfalls to be avoided in studying the properties of an infinite sequence of random variables.

Journal ArticleDOI
TL;DR: For the logistic distribution, the precision of the maximum likelihood estimators and the best linear unbiased estimators of the location and scale parameters is nearly the same for symmetric, or no, censoring, while the maximum likelihood estimators are better for strongly asymmetric censoring.
Abstract: where a and b are location and scale parameters. The range of application of the logistic distribution as a probability model to describe random phenomena covers such areas as psychosensory response systems, population growth, bioassay, life tests and physiochemical phenomena. Harter & Moore (1967) have given an extensive review of the problem of point estimation of the parameters in the logistic distribution using a censored sample. They evaluate the maximum likelihood estimators and the best linear unbiased estimators for the parameters, stating that the precision of the two is nearly the same for symmetric, or no, censoring, while the maximum likelihood estimators are better for strongly asymmetric censoring. Their evaluation is given for samples of size 10. They also present the asymptotic variances and covariances of the maximum likelihood estimators for 30 different censoring proportions. Gupta & Gnanadesikan (1966) have used the sample quantiles to estimate the parameters as proposed by Ogawa (1951). They indicate that Ogawa's estimator of a with b known is rather good, while his estimator of b with a unknown is fair. They also evaluate the estimators, also linear in the order statistics, proposed by Jung (1955) and Blom (1958, p. 120, equation 10.3.12). When a and b are both unknown, they recommend that Blom's estimator of a and Jung's estimator of b be used, stating that these are nearly as efficient as the best linear unbiased estimators. Our concern is with inference based on the maximum likelihood estimators of a and b, which we shall denote by â and b̂. While there are examples, for example the double exponential, in which the maximum likelihood estimators are not as good as the best linear unbiased estimators, it appears that they are always somewhat better in the case of the logistic distribution. For symmetric censoring, â is always unbiased and an unbiasing factor for b̂ that depends only upon the sample size n can be determined; this factor can be obtained by Monte Carlo methods. The task of finding the maximum likelihood estimators is easily accomplished by a brief computer program or with a bit of patience and a desk calculator, though the latter method

Journal ArticleDOI
TL;DR: In this paper, the statistical relation between the sample and population characteristic vectors of correlation matrices with squared multiple correlations as communality estimates was investigated and the sampling fluctuations were found to relate only to differences in the square roots of characteristic roots and to sample size.
Abstract: Data are reported which show the statistical relation between the sample and population characteristic vectors of correlation matrices with squared multiple correlations as communality estimates. Sampling fluctuations were found to relate only to differences in the square roots of characteristic roots and to sample size. A principle for determining the number of factors to rotate and interpret after rotation is suggested.
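For reference, the communality estimate the study starts from, the squared multiple correlation of each variable with the rest, can be computed directly from the inverse of the correlation matrix; the small matrix below is invented for illustration, and placing the SMCs on the diagonal before extracting characteristic roots and vectors follows the usual principal-factor recipe rather than the paper's exact procedure.

```python
import numpy as np

R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])

smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))   # squared multiple correlations
reduced = R - np.diag(1.0 - smc)              # correlation matrix with SMCs on the diagonal
roots, vectors = np.linalg.eigh(reduced)      # characteristic roots and vectors
print(np.round(smc, 3), np.round(roots[::-1], 3))
```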

Journal ArticleDOI
TL;DR: Two alternative formulas for determining the significance of the difference between a certain two correlation coefficients have been reported in the literature; the coefficients, r13 and r23, denote the correlation of two predictor variables (1 and 2) with a predictand (3) within a single sample drawn from a population.
Abstract: Two alternative formulas for determining the significance of the difference between a certain two correlation coefficients have been reported in the literature. These coefficients, r13 and r23, denote the correlation of two predictor variables (1 and 2) with a predictand (3) within a single sample drawn from a population. From these statistics the researcher can tell whether the correlation of 1 with 3 is significantly different from the correlation of 2 with 3. The older formula for testing the significance of this difference was derived by Hotelling (1940) and is commonly cited in statistics textbooks (e.g. Ferguson, 1966; McNemar, 1969; Tate, 1955; and Walker and Lev, 1953). This statistic, which has Student's t distribution with N − 3 degrees of freedom (where N is the sample size), is
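The older Hotelling (1940) statistic mentioned above is short enough to state in code. The sketch below is my rendering of the commonly cited formula (r12 is the correlation between the two predictors), with made-up inputs; it is not a transcription from the article.

```python
from math import sqrt
from scipy.stats import t

def hotelling_t(r13, r23, r12, n):
    """Hotelling's t for comparing two correlations with a common criterion, n - 3 df."""
    det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23   # determinant of the 3x3 R matrix
    stat = (r13 - r23) * sqrt((n - 3) * (1 + r12) / (2 * det))
    return stat, 2 * t.sf(abs(stat), n - 3)

print(hotelling_t(r13=0.50, r23=0.30, r12=0.40, n=103))
```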


Journal ArticleDOI
TL;DR: A method is developed for constructing β-content tolerance regions at confidence level γ when sampling from the k-variate normal, and tables are supplied; the method assumes that sample sizes are large.
Abstract: A method for construction of β-content tolerance regions at confidence level γ is developed, when sampling from the k-variate normal, and tables supplied. Certain efficiency considerations are discussed with respect to β-content 'non-parametric' regions and β-content 'normal' regions, and here also tables are supplied. The method assumes that sample sizes are large.



Journal ArticleDOI
H. J. Malik
01 Dec 1970-Metrika
TL;DR: Distributions are derived of the product of sample values, the sample geometric mean, the product of two minimum values from samples of unequal size, and the product of k minimum values from samples of equal size, for a Pareto population.
Abstract: Distributions are derived of the product of sample values, the sample geometric mean, the product of two minimum values from samples of unequal size and the product of k minimum values from samples of equal size from a Pareto population. The distributions can be conveniently transformed to χ².
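One standard identity behind such χ² transformations (my illustration of the single-sample case, not necessarily the exact form derived in the paper): if X follows a Pareto law with scale x_m and shape a, then log(X/x_m) is exponential with rate a, so 2a Σ log(x_i/x_m), the log of the product of the sample values rescaled, is χ² with 2n degrees of freedom.

```python
import numpy as np
from scipy.stats import pareto, chi2

a, x_m, n = 3.0, 1.0, 40
x = pareto.rvs(b=a, scale=x_m, size=n, random_state=4)

q = 2 * a * np.log(x / x_m).sum()                     # product of sample values, on the log scale
print(round(q, 2), round(chi2.cdf(q, df=2 * n), 3))   # q behaves like a chi-square(2n) draw
```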

Journal ArticleDOI
TL;DR: The problem is considered of designing a life test to estimate the Weibull hazard rate with prescribed precision, based on the first r out of n order statistics t1 < t2 < ... < tr and known shape parameter m.
Abstract: 1. INTRODUCTION We consider the problem of designing a life test to estimate the Weibull hazard rate with prescribed precision based on the first r out of n order statistics t1 < t2 < ... < tr and known shape parameter m. Tables for the required number of failures and graphs for choosing a desirable sample size are provided.
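One plausible reading of how a required number of failures can be chosen, sketched below under assumptions of my own rather than from the paper's tables: with the shape m known, t^m is exponential, and with type-II censoring the usual pivot 2T/θ is χ² with 2r degrees of freedom, so r can be increased until the confidence-interval ratio for the scale meets the prescribed precision.

```python
from scipy.stats import chi2

def failures_needed(ratio, alpha=0.10):
    """Smallest r whose two-sided CI for the exponential scale has upper/lower ratio <= ratio."""
    r = 2
    while chi2.ppf(1 - alpha / 2, 2 * r) / chi2.ppf(alpha / 2, 2 * r) > ratio:
        r += 1
    return r

print(failures_needed(ratio=2.0))   # failures needed for a 2:1 interval at 90% confidence
```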


Journal ArticleDOI
TL;DR: Four different sequential sampling procedures are applied to the analysis of data generated by a computer simulation experiment with a multi-item inventory model to achieve given levels of statistical precision.
Abstract: Four different sequential sampling procedures are applied to the analysis of data generated by a computer simulation experiment with a multi-item inventory model. For each procedure the cost of computer time required to achieve given levels of statistical precision is calculated. Also the cost of computer time using comparable fixed sample size methods is calculated. The computer costs of fixed sample size procedures versus sequential sampling procedures are compared.

Journal ArticleDOI
TL;DR: In this article, a new test statistic is proposed for detecting association between two continuous variables, defined as any departure whatsoever from independence between two quantitative traits; we define association as any deviation from independence.
Abstract: SUMMARY A new test statistic is proposed for detecting association between two continuous variables. For sample sizes of 20 or more, the statistic's distribution (when the variables are independent) is tolerably well approximated by a normal distribution. Expressions are given for determining the mean and variance of this distribution, so that a test call be performed. This test is very powerful against various types of alternatives for which a test based on the sample correlation coefficient is powerless. When the underlying distribu- tion is bivariate normal, the sample correlation is appreciably better only if either the sample size is less than 50 or the true correlation is less than 0.5. between two quantitative traits; we define association as any departure whatsoever from independence. The most commonly used statistic for de- tecting association between quantitative traits is the correlation coefficient. This has the disadvantage that its power lies in detecting linear association, whereas in many biological situations the relationship may be far from linear. If the relationship between the two variables is monotonic, such as may be expected when the two variables are, e.g., urinary concentration and plasma concentration of a given amino acid, a rank correlation coeffi- cient can be used to advantage. The correlation ratio can also be used to detect many types of non-linear relationships. But these statistics are not sensitive to certain types of association that can occur in biological data. For example, they cannot detect a relationship that may be described by a circle. Another method is commonly used to detect association, especially when the form of the relationship, if any, is unknown and not at all obvious from a plot of the data. The scales on which the traits are measured are divided into class intervals to form a two-way table containing the number of ob- servations falling into each subclass, and this is treated as a contingency table. Either the usual x2 test for association can be performed, or a test that is based on one of the various measures of association proposed by Goodman and Kruskal (1954; 1959; 1963) can be used. However, all these

Journal ArticleDOI
TL;DR: A comparison between single and sequential sampling plans is made in the context of acceptance inspection, and it is shown that the sequential sampling plan is not highly superior to the single sampling plan in terms of costs.
Abstract: By the theorem of Wald and Wolfowitz, the sequential test of a simple hypothesis against a simple alternative is optimal in the sense that no other test with the same producer's and consumer's risk can have a smaller expected sample size. The significance of this theorem for applied statistics is limited for two reasons: first, optimality as defined above does not indicate the extent to which sequential tests are superior to tests based on samples of fixed size, and second, the whole O.C. curve is important for the practical operation of a sampling plan (s.p.), not just two of its points. Therefore, more informative than the theorem of Wald and Wolfowitz are numerical computations like that performed by Wald in [7], p. 57. These computations show that under many circumstances sequential tests and fixed-size sample tests with almost coinciding O.C. curves have (expected) sample sizes in the relation 1:2. However, these facts do not justify the common belief in the superiority of sequential s.p. over single s.p. in acceptance inspection. If the sample size is small compared with the lot size, the costs of sampling will be only a tiny fraction of the 'costs' connected with the decision to accept or reject the whole lot. Even if small differences between the O.C. curves of two sampling plans entail relatively small differences in the 'costs' (of acceptance or rejection), these differences can be large compared with the costs of sampling. However, if the sample size is not small compared with the lot size, the difference between the sample sizes gains an importance which cannot be adequately characterized by the costs of sampling. In this case, the sorting of the sample will have a large effect, and the O.C. curve is not the decisive criterion for the comparison of s.p. In this paper, comparisons between single and sequential s.p. are performed. A special model of acceptance inspection is chosen by which an objective comparison of s.p. in terms of costs can be made. This model will be a reasonable approximation to reality for only parts of the situations in which acceptance inspection is applied. By using this model, we show that the sequential s.p. is not highly superior to the single s.p.